Google’s Clever Plan to Stop Aspiring ISIS Recruits
Google has built a half-trillion-dollar business out of divining what people want based on a few words they type into a search field. In the process, it’s stumbled on a powerful tool for getting inside the minds of some of the least understood and most dangerous people on the Internet: potential ISIS recruits. Now one subsidiary of Google is trying not just to understand those would-be jihadis’ intentions, but to change them.
Jigsaw, the Google-owned tech incubator and think tank—until recently known as Google Ideas—has been working over the past year to develop a new program it hopes can use a combination of Google’s search advertising algorithms and YouTube’s video platform to target aspiring ISIS recruits and ultimately dissuade them from joining the group’s cult of apocalyptic violence. The program, which Jigsaw calls the Redirect Method and plans to launch in a new phase this month, places advertising alongside results for any keywords and phrases that Jigsaw has determined people attracted to ISIS commonly search for. Those ads link to Arabic- and English-language YouTube channels that pull together preexisting videos Jigsaw believes can effectively undo ISIS’s brainwashing—clips like testimonials from former extremists, imams denouncing ISIS’s corruption of Islam, and surreptitiously filmed clips inside the group’s dysfunctional caliphate in Northern Syria and Iraq.
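The core mechanism described above can be pictured as a keyword-triggered ad placement: when a search query matches a curated list of phrases, a counter-narrative link is served alongside the results. The sketch below is purely illustrative; the actual keyword lists, matching logic, and ad infrastructure are internal to Jigsaw and Google, and every phrase and URL here is a made-up placeholder.

```python
from typing import Optional

# Hypothetical curated phrases standing in for the real (non-public) keyword
# lists Jigsaw compiled from searches common among people drawn to ISIS.
CURATED_KEYWORDS = {
    "example recruitment phrase",
    "example propaganda slogan",
}

# Placeholder for an Arabic- or English-language counter-narrative channel.
COUNTER_NARRATIVE_CHANNEL = "https://youtube.com/example-counter-narrative"

def redirect_ad_for(query: str) -> Optional[str]:
    """Return a counter-narrative ad link if the query matches a curated phrase,
    otherwise None (no special ad is placed)."""
    normalized = query.lower().strip()
    for phrase in CURATED_KEYWORDS:
        if phrase in normalized:
            return COUNTER_NARRATIVE_CHANNEL
    return None
```

The point of the design is that targeting happens at the moment of intent (the search itself), not by profiling individual users.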
Google has a new plan to fight internet trolls, and it starts and ends with AI
Jigsaw, the organization formerly known as Google’s think tank, has taken on a life of its own and been tasked with using technology to address a range of geopolitical issues. The latest software to come out of the group is an artificial intelligence tool known as Conversation AI. As Wired reports, “the software is designed to use machine learning to automatically spot the language of abuse and harassment — with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators.”
Conversation AI learns and automatically flags problematic language, and assigns it an “attack score” that ranges from 0 to 100. A score of 0 suggests that the language in question is not at all abusive, whereas a score of 100 suggests that it is extremely harmful.
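To make the 0-to-100 scale concrete, here is a toy keyword-weighted scorer. This is deliberately the kind of naive keyword filter the article says Conversation AI's machine learning outperforms; the word list and weights are invented for illustration only.

```python
# Made-up weights for a toy baseline; Conversation AI uses machine
# learning rather than a fixed word list like this.
ABUSIVE_WEIGHTS = {"idiot": 40, "stupid": 30, "hate": 25}

def attack_score(text: str) -> int:
    """Score text on the 0-100 'attack score' scale the article describes:
    0 means not at all abusive, 100 means extremely harmful."""
    words = text.lower().split()
    raw = sum(ABUSIVE_WEIGHTS.get(w, 0) for w in words)
    return min(100, raw)  # clamp the sum to the 0-100 range
```

A keyword baseline like this is easy to fool (misspellings, sarcasm, context), which is precisely the gap a learned model is meant to close.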
And it looks like it’s working. As Wired notes, “Jigsaw has now trained Conversation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 percent certainty and a 10-percent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack.”
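The two quoted figures correspond to standard classification metrics: accuracy is the fraction of model judgments that match the human panel, and the false-positive rate is the fraction of non-attacks the model wrongly flags. A minimal sketch of how such numbers are computed (the labels below are toy data, not Jigsaw's evaluation set):

```python
def accuracy(model, human):
    """Fraction of model judgments agreeing with the human panel."""
    return sum(m == h for m, h in zip(model, human)) / len(human)

def false_positive_rate(model, human):
    """Fraction of human-labeled non-attacks that the model flags as attacks."""
    flagged_negatives = [m for m, h in zip(model, human) if not h]
    return sum(flagged_negatives) / len(flagged_negatives) if flagged_negatives else 0.0

# Toy labels: True = "this line is an attack".
human_panel = [True, False, False, True]
model_calls = [True, False, True, True]
```

Note that the two numbers trade off: a model can raise its agreement rate by flagging more aggressively, at the cost of a higher false-positive rate, which is why Wired reports both.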