Too often, technology is pegged as the culprit of bullying. Researchers from the Software Agents Group at the MIT Media Lab are trying to change that stigma, however, by developing an algorithm that identifies groups of words within a post and assigns them to 30 themes suggesting sensitive topics and possible harassment.

Media Lab Research Assistant Karthik Dinakar and his colleagues partnered with MTV’s website A Thin Line, which encourages teenagers to post their problems anonymously so other teenagers can leave advice. The stories often focus on bullying and sex, so the group fed all 5,500 stories through an algorithm to define the themes, which ranged from “duration of a relationship” to “using naked pictures of girlfriend.”
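To get a feel for the idea, here is a minimal sketch of theme-based flagging via keyword matching. This is an illustration only: the theme names and keyword lists below are invented, and the Media Lab's actual 30 themes and model are far more sophisticated than a word-overlap check.

```python
# Hypothetical theme vocabularies -- invented for illustration,
# not the Media Lab's actual themes.
THEMES = {
    "appearance": {"ugly", "fat", "looks"},
    "relationships": {"boyfriend", "girlfriend", "dumped", "cheated"},
    "threats": {"hurt", "kill", "beat"},
}

def flag_themes(post, min_hits=1):
    """Return the themes whose keyword sets overlap the post's words."""
    # Normalize: lowercase and strip trailing punctuation from each token.
    words = {w.strip(".,!?").lower() for w in post.split()}
    return sorted(
        theme for theme, keywords in THEMES.items()
        if len(words & keywords) >= min_hits
    )

print(flag_themes("He said I was fat and ugly"))  # ['appearance']
```

A real system would replace the hand-built keyword sets with themes learned from the story corpus itself, which is closer to what the researchers did with the 5,500 A Thin Line posts.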

The goal is to have this new algorithm integrated into various social networks, so that when someone leaves a potentially offensive post, the system could notify a moderator, block the post, or warn the commenter about the consequences of cyberbullying. Dinakar told the New Scientist he wanted to create a detector “that can pick up even the subtlest of attacks, such as ‘liking’ a negative Facebook status to make a nasty point.”

Check out the article:

http://www.slate.com/blogs/future_tense/2012/07/11/mit_media_lab_researchers_create_artificial_ingelligence_to_flag_cyberbullying_.html