AI in the war against troll farms and outsourced online hatred
Companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. It’s a soul-killing job better left to AI.
Photo: A content moderator from TaskUs in BGC.
Mass harassment online has proved so effective that it’s emerging as a weapon of repressive governments. In late 2014, Finnish journalist Jessikka Aro reported on Russia’s troll farms, where day laborers regurgitate messages that promote the government’s interests and inundate opponents with vitriol on every possible outlet, including Twitter and Facebook. Ever since, she’s been barraged daily by bullies on social media, in the comments of news stories, and via email. They call her a liar, a “NATO skank,” even a drug dealer, after digging up a fine she received 12 years ago for possessing amphetamines. “They want to normalize hate speech, to create chaos and mistrust,” Aro says. “It’s just a way of making people disillusioned.”
All this abuse, in other words, has evolved into a form of censorship, driving people offline, silencing their voices. For years, victims have been calling on—clamoring for—the companies that created these platforms to help slay the monster they brought to life. But their solutions generally have amounted to a Sisyphean game of whack-a-troll.
Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. The software is designed to use machine learning to automatically spot the language of abuse and harassment—with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” says Jigsaw founder and president Jared Cohen. “To do everything we can to level the playing field.”
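To see why a learned classifier can beat a keyword filter, consider a minimal sketch of the idea. This is not Jigsaw’s Conversation AI; the toy training data, the blocklist, and the scikit-learn model below are all illustrative assumptions. A keyword filter matches fixed strings, so it both over-blocks (flagging benign mentions of a word) and under-blocks (missing trivial respellings), while a trained model scores the patterns it learned from labeled examples.

```python
# Illustrative sketch only: a naive keyword filter versus a small learned
# text classifier. This is NOT Jigsaw's actual system; the data, blocklist,
# and model choice are assumptions made for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled comments (1 = abusive, 0 = benign). A production system would
# be trained on very large sets of human-rated comments.
texts = [
    "you are a liar and a skank",         # abusive
    "get off the internet, you troll",    # abusive
    "nobody wants to hear your garbage",  # abusive
    "I disagree with your reporting",     # benign
    "thanks for covering this story",     # benign
    "interesting piece on troll farms",   # benign
]
labels = [1, 1, 1, 0, 0, 0]

# Keyword filter: flags any comment containing a blocklisted word,
# regardless of context.
BLOCKLIST = {"skank", "troll"}

def keyword_filter(text: str) -> bool:
    return any(word in text.lower().split() for word in BLOCKLIST)

# Learned classifier: character n-grams let it score spelling variants
# it never saw verbatim, instead of matching an exact word list.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

for comment in ["interesting piece on troll farms", "u r such a l1ar"]:
    print(comment)
    print("  keyword filter flags it:", keyword_filter(comment))
    print("  model abuse score: %.2f" % model.predict_proba([comment])[0][1])
```

The two test comments show the contrast: the blocklist flags a benign comment because it contains “troll,” and it misses the respelled insult entirely, while the classifier assigns each a graded abuse score instead of a brittle yes/no match.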
Jigsaw is applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.
September 21st, 2016 at 23:59
I read this article in Wired’s print version about a year ago. If I remember right, the moderators end up developing a form of PTSD after prolonged exposure to graphic images, and they eventually move on from their jobs.
It seems to make sense on Google’s part to work on an algorithm that takes the human element out of tasks like this.