
Researchers teach A.I. to read the internet for signs of dissent

posted on Dec, 5 2016 @ 01:36 PM

Teaching computers to read is one thing. But by designing an algorithm that examined nearly 2 million posts from two popular parenting websites, a multidisciplinary team of UCLA researchers has built an elegant computational model that reflects how humans think and communicate, thereby teaching computers to understand structured narratives within the flow of posts on the internet.

Source

“Fake News” is the latest buzzword circulating the internet, and while many on ATS can understand and identify the various forms of misinformation, most people outside of this community do not look at information the same way. The government relies on this fact, but it is having a hard time keeping up with the alternative media’s growing viewership and its ability to undermine the narrative.

The Establishment has had enough...



The researchers said their success at managing large-scale data in this way highlights the overarching potential of machine learning, and demonstrates the capability to introduce counter-narratives into internet interactions, break up echo chambers and one day potentially help root out fact from fiction for social media users.

"Our question was, could we devise computational methods to discover an emerging narrative framework underlying internet conversations that was possibly influencing the decision making of many people throughout the country or possibly world?" said Timothy Tangherlini, lead author and a self-described "computational folklorist" who teaches folklore, literature and cultural studies in the Scandinavian section of the UCLA College.

Many of us have made the mistake of not questioning everything that we see or hear, but soon we may not even have to make the distinction between what is real and what is fake. Based upon your past internet activity, the decision may be made in advance if the A.I. decides that you are a "threat." Information derived strictly from State-sponsored or approved sources will be presented to you instead.


In the study, published in the Journal of Medical Internet Research, Tangherlini and other researchers used sophisticated language modeling to review 1.99 million posts from two parenting sites with active user forums.

They examined posts on Mothering.com—a site known to be a hub of anti-vaccine sentiment—and another parenting site (unnamed due to site privacy rules) where opinions on vaccinations were more varied. Those posts came from 40,056 users and were viewed 20.12 million times over a period of nearly nine years ending in 2012. Most users on both sites identified themselves as mothers.

"The anti-vaccine movement was a clear candidate for this type of study," Tangherlini said. "Tens of thousands of parents were exchanging ideas about child-rearing online and, through those interactions, creating virtual communities where they could share concerns, propose methods to allay those concerns, and share their own experiences."

That anti-vaccine discussion is just one example of when the government may feel the need to “correct the record.” We will no longer be responsible for our own critical thinking, and no more will the internet represent a dissociated press or a disseminating force behind public opinion.

Fed nothing but web pages of information, the A.I. was able to learn, understand and discover counter-narrative arguments that spoke directly against the desired narrative. If every thread on ATS were scanned, the A.I. would eventually develop a comprehensive baseline for each member, relating to how they think and respond to specific information.

Intervention may be needed...



In this four-part narrative model, a story begins with an orientation, which details the type of event and the major actors in the story, such as family with a newborn infant. The second part, referred to as the complicating action, presents a threat, such as the perceived threat to the infant's health posed by vaccination. The third part suggests a strategy to counteract that threat, such as a parent's attempt to figure out how to avoid vaccinating.

The resolution of the story evaluates the success of the strategy in dealing with the threat. They aligned this narrative model with nearly two million pieces of aggregated content from the parenting sites and, using natural language processing methods, were able to identify characters and the relationships between those characters, discovering the core of the underlying narratives.
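The character-and-relationship step described above can be sketched very loosely in a few lines of Python. This is my own illustration, not the study's actual NLP pipeline: the actor list and the simple within-post co-occurrence counting are assumptions standing in for the far more sophisticated natural language processing the researchers used.

```python
from collections import Counter
from itertools import combinations

# Illustrative actor vocabulary (an assumption for this sketch; the
# study discovered its characters from the data rather than a fixed list).
ACTORS = {"parent", "infant", "doctor", "vaccine", "school", "exemption"}

def actor_pairs(posts):
    """Count which known actor terms co-occur within the same post.

    Frequent pairs hint at relationships between characters, the crude
    analogue of the study's narrative-network extraction.
    """
    pairs = Counter()
    for post in posts:
        # Normalize tokens and keep only recognized actor terms.
        found = sorted({w.strip(".,!?").lower() for w in post.split()} & ACTORS)
        for a, b in combinations(found, 2):
            pairs[(a, b)] += 1
    return pairs

posts = [
    "The doctor said the infant needs the vaccine before school.",
    "As a parent I looked into a vaccine exemption for school.",
    "Another parent asked the doctor about the vaccine schedule.",
]
print(actor_pairs(posts).most_common(3))
```

Run over millions of posts instead of three toy sentences, the heaviest pairs would outline which actors anchor the community's shared story.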

On the basis of this work, they discovered that a large number of parents were not only going online to talk about vaccines, their distrust of institutions requiring them, or the perceived health risks of vaccinations, but also to seek out ways to acquire vaccination exemptions for their children. "Stories often emerge through conversation," Tangherlini said. "The framework of the underlying narrative emerges through time as more and more stories are circulated, negotiated, aligned and reconfigured."

Added Roychowdhury: "It's especially impressive, when you take into consideration the fact that all the machine was fed with, were just web pages, nothing else; and it found all the vaccine related concepts all on its own."

The implications of this are staggering and if the government continues down this path, they will one day use these algorithms to “scrub” the internet of any information that has not been deemed "safe" for consumption.


While this study specifically applied to parents' discussions about vaccination, the methods could be applied to any topic, said the researchers, who are pursuing follow-up projects such as incorporating a sequencing mechanism that would track story plot.

Roychowdhury says what we learn about how stories take shape around any given topic can be applied to targeted messaging, such as advertising, or to fighting misinformation by allowing machine learning to automatically decipher false narratives as they proliferate. For example, users exposed to a particular anti-vaccination narrative could be presented with alternate narratives, based on well-tested public health paradigms, using the same extensive online advertising infrastructure currently used by the likes of Google, Facebook and Amazon.

"In public health, we have hundreds of studies trying to understand the facilitators and barriers to getting vaccinated," Bastani said. "Our data is generally obtained through tools such as questionnaires and electronic medical records. What these tools fail to capture are the very interesting conversations that individuals are having with one another that profoundly shape their views and actions related to vaccinating their children."

Just in case the message wasn't clear...



"We hope to utilize findings from this work to design and test interventions that may positively influence vaccination rates because they are more likely to address some of the key drivers of resistance," she said.

“Key drivers of resistance,” how flattering. The argument about vaccinations and their effectiveness is debatable, but the underlying principle is about having a choice of what goes into our bodies. If we lose access to all the information and the ability to think for ourselves, Big Government will have to step in and decide what is best.

Thanks, but no thanks.


edit on 5-12-2016 by eisegesis because: (no reason given)



posted on Dec, 5 2016 @ 01:45 PM
a reply to: eisegesis

This is a step towards the dystopian future many of us suspect is coming.

It is absolutely horrifying that the Government thinks we need their help 'to think'. I am both disgusted and terrified at the same time. It is everything I stand against.



posted on Dec, 5 2016 @ 02:04 PM
Why does it matter? I think what I think



posted on Dec, 5 2016 @ 02:05 PM
They're coming for me, see, but I got their number, see.



posted on Dec, 5 2016 @ 02:07 PM
a reply to: eisegesis




Researchers teach A.I. to read the internet for signs of dissent


What could go wrong?



posted on Dec, 5 2016 @ 02:42 PM
a reply to: eisegesis
Just thinking that not all vaccines are good, or that forcibly injecting others with big pharma drugs is morally wrong, makes you "anti-vax" according to these intentional liars. My advice to someone who is being physically assaulted with a needle by someone trying to inject something unwanted into their bloodstream is to fight tooth and nail, with every last ounce of your strength and every option you have available, in self-defense. As it turns out, not everything big pharma tries to force into your bloodstream for big profits, against your will, is good. This shocking fact is what this research organization is trying to eliminate from the narrative.

I looked for anti-vax threads on that board referred to. Here is one thread from the message board in question:
www.mothering.com...
Ten of the ten thread OPs I read were not anti-vax. It would seem to me that labeling that board "anti-vaccine" is actually a lie by pro-big-government types, designed to create the impression that a person who notes that vaccines cause allergies is against vaccines. If anything, it should concern us that vaccines may indeed be bad, because these threads don't even claim that the costs outweigh the benefits. It seems like the big pharma zombies (those people who blindly inject everything big pharma tells them to) don't actually believe their own words; why else would they feel a need to lie about what their opponents are saying, calling them "anti-vax" when those opponents merely say that everyone should make an informed decision on a case-by-case basis, rejecting the vaccines that are bad while taking the ones that are good?

Here is some common sense for the big-pharma shills and sympathizers: if you think something is good for people, then prove it, and they will do that thing without you using any violence to force them. The stronger your evidence, the more likely they are to do the thing you want them to do.



posted on Dec, 5 2016 @ 02:49 PM
Don't worry though,

RankBrain is a Google A.I. that ranks your search results, but how many of you still have to go to page 2 or 3 when you search, not to mention the completely off-topic results that never stop being returned?

A.I. is only as smart as humans are and frankly, we're a bunch of toddlers running around with sharp objects.



posted on Dec, 5 2016 @ 02:53 PM

originally posted by: suvorov
Why does it matter? I think what I think


I've seen several of your posts on different threads, and your answers are always the direct opposite of what the thread is about...

like you don't really understand what's being said, you're just saying words...

Perhaps you're one of these bots?



posted on Dec, 5 2016 @ 03:13 PM
a reply to: eisegesis


Now that is some scary chit! Propaganda is bad enough. Technology like this study describes will be used to target anything the user of the tech doesn't agree with. It sounds like free speech (including talking online) is about to be targeted. It is one thing when humans go looking, and quite another when it is automated to produce targets for the government to suppress.



posted on Dec, 5 2016 @ 04:19 PM
Morons can have MA's & PhD's too! Thanks a bunch you boffin idiots!



posted on Dec, 5 2016 @ 04:23 PM
They can monitor it, subvert it, stifle it, censor it, ban it and criminalise it, but dissent will always exist. It's one of the best parts of being human.

Even the smallest acts can be massive, inspiring and wide reaching.



posted on Dec, 5 2016 @ 04:26 PM
a reply to: eisegesis


"Tens of thousands of parents were exchanging ideas about child-rearing online and, through those interactions, creating virtual communities where they could share concerns, propose methods to allay those concerns, and share their own experiences."

Lmao, yeah we surely don't want that happening... better insert some shill chat bots who have never lived a real life or raised a child to give them the true facts about raising a child properly.
edit on 5/12/2016 by ChaoticOrder because: (no reason given)



posted on Dec, 5 2016 @ 04:53 PM
a reply to: eisegesis

Wow! So they are using vaccines for mind control? I knew there were tons of nefarious properties to vaccines. Type 1 Diabetes, Autism, Multiple Sclerosis, Lou Gehrig's Disease, basically = almost all of the autoimmune diseases, probably even cancer and others (monkey sv40, etc.).

There has been a plenitude of research showing that vaccines have a host of negative side effects. This is proof, as solid as Wolverine's adamantium, that people should have the right to say: "It's my body, and I will take what I like, and refuse what I don't." Or is it that they figure we need mind control, to make the "public safer"? I'll say that if they want to search the internet for dissent, then here's some!


If they ever try to stick a needle anywhere on my person, they will be lucky to walk away. If this is the sort of people that want to be running the world, then maybe after I'm cured of diabetes I'll hitch a ride with Elon Musk to Mars! Let's just hope that particular rocket doesn't explode like many SpaceX rockets have....



posted on Dec, 5 2016 @ 05:09 PM
How is an AI supposed to tell if something on the internet is supposed to be funny and not serious when most people cannot even tell?



posted on Dec, 5 2016 @ 07:10 PM

originally posted by: suvorov
Why does it matter? I think what I think


Unless you are brutally conditioned otherwise, to satisfy the asserted standards, with the assistance of this A.I. computation.

Think A Clockwork Orange.



posted on Dec, 5 2016 @ 07:35 PM

originally posted by: TinfoilTP
How is an AI supposed to tell if something on the internet is supposed to be funny and not serious when most people cannot even tell?


I actually sat through a lecture on this very topic today. Essentially it comes down to word choice: when certain words are used in multiple contexts, it's more likely that one of them isn't serious. You can get into some grammar models beyond that to figure out which is or isn't.

The tl;dr is that most spoken languages today are highly redundant and give away more information than even their speakers pick up on. Computers can pick up on it, though.
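The word-choice idea above can be sketched in a few lines of Python. This is my own toy illustration of the principle, not the lecture's actual method: it simply measures how many distinct neighboring-word contexts each word appears in, on the assumption that a word spread across unrelated contexts is more likely to carry a non-literal use.

```python
from collections import defaultdict

def context_diversity(sentences):
    """Map each word to how many distinct neighboring words it appears with.

    A word with unusually diverse neighbors (e.g. slang "sick") is a weak
    signal that some of its uses are not meant literally.
    """
    contexts = defaultdict(set)
    for s in sentences:
        words = s.lower().split()
        for i, w in enumerate(words):
            if i > 0:
                contexts[w].add(words[i - 1])   # left neighbor
            if i < len(words) - 1:
                contexts[w].add(words[i + 1])   # right neighbor
    return {w: len(c) for w, c in contexts.items()}

sents = ["that movie was sick", "my dog is sick", "sick burn dude"]
scores = context_diversity(sents)
print(scores["sick"])  # turns up next to unrelated words in every sentence
```

Real systems would use grammar models and far larger corpora on top of this, but the underlying signal is the same: redundancy in how words are normally used gives away the exceptions.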



posted on Dec, 5 2016 @ 09:08 PM

originally posted by: Aazadan

originally posted by: TinfoilTP
How is an AI supposed to tell if something on the internet is supposed to be funny and not serious when most people cannot even tell?


I actually sat through a lecture on this very topic today. Essentially it comes down to word choice: when certain words are used in multiple contexts, it's more likely that one of them isn't serious. You can get into some grammar models beyond that to figure out which is or isn't.

The tl;dr is that most spoken languages today are highly redundant and give away more information than even their speakers pick up on. Computers can pick up on it, though.


Oh really? ... Now, was that inquisitive acceptance or sarcastic dismissal? Just two words.



posted on Dec, 13 2016 @ 03:19 AM

originally posted by: Aazadan

originally posted by: TinfoilTP
How is an AI supposed to tell if something on the internet is supposed to be funny and not serious when most people cannot even tell?


I actually sat through a lecture on this very topic today. Essentially it comes down to word choice: when certain words are used in multiple contexts, it's more likely that one of them isn't serious. You can get into some grammar models beyond that to figure out which is or isn't.

The tl;dr is that most spoken languages today are highly redundant and give away more information than even their speakers pick up on. Computers can pick up on it, though.


You're right about this, but written text can still be remarkably complex, with subtexts and hidden words or phrases that might signal membership in a certain group. The use of emoticons can set a completely different tone when accompanying text that would otherwise come off as rude. There are many examples. You could even speak in code about secret pedophilia rings.



