Graduate students from Stanford and Cornell universities claim to have developed a computer program that can detect an internet troll with 80% accuracy just by analyzing a user's first five posts online.
The accuracy rises to 90% after analyzing a user's first ten posts.
The research studied comment threads on the CNN, Breitbart and IGN websites over an 18-month period and divided users into two groups: “Future-Banned Users” and “Never-Banned Users”.
After analyzing the writing styles of 40 million online comments made by 1.7 million people and looking for antisocial triggers, they were able to develop an algorithm that detects “Future-Banned Users” (a.k.a. trolls) with 80% accuracy.
Of the nearly 2 million users studied, 50,000 ended up being banned over the course of the study.
With data provided by the Disqus comment service used by all three sites in the study, the researchers were able to study these banned users as well as deleted comments for “troll signs”.
They found that posts made by trolls are less readable and tend to be far off-topic.
In addition, comments made by trolls tend to rely on more negative adjectives and swear words than average online comments do. Trolls also tend to make more comments per day, and more comments per thread, than the average online user.
Unlike the average internet user, internet trolls tend to have a history of deleted posts.
Trolls also have a worse writing style and tend to be more argumentative than the rest of the commenters.
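The signals above can be imagined as inputs to a simple scoring rule. The sketch below is purely illustrative: the word list, feature weights, and threshold are all invented for this example and are not the researchers' actual model, which was trained on the full 40-million-comment dataset.

```python
import re

# Toy negative-word lexicon -- a stand-in for the study's real features.
NEGATIVE_WORDS = {"stupid", "idiot", "hate", "dumb", "trash"}

def comment_features(text):
    """Extract crude per-comment signals: share of negative words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    negative = sum(1 for w in words if w in NEGATIVE_WORDS)
    return negative / len(words)

def troll_score(comments, comments_per_day):
    """Hypothetical linear score combining language negativity
    and posting volume; the weights are made up for illustration."""
    avg_negative = sum(comment_features(c) for c in comments) / len(comments)
    return 2.0 * avg_negative + 0.1 * comments_per_day

def is_likely_troll(comments, comments_per_day, threshold=0.5):
    return troll_score(comments, comments_per_day) >= threshold
```

A classifier like the one in the study would learn such weights from labeled data (the banned vs. never-banned groups) rather than hard-coding them, but the shape of the decision — text features plus behavioral features feeding a score — is the same.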
This research could pave the way for programs that can weed out abusive comments as well as comment spam on websites before it becomes a nuisance and a problem.
You can read the entire research paper Antisocial Behavior in Online Discussion Communities at http://arxiv.org/abs/1504.00680.