
DARPA Developing Anti-Disinfo Bots

posted on Dec, 10 2012 @ 05:47 PM
Within the 2013 DARPA budget request is an interesting item. It falls under the program title Social Media in Strategic Communication (SMISC). This program was the topic of several threads last year. The purpose of the program is to investigate social media as a means of monitoring in order to track the development of various memes. The idea being that "adversaries" might be discovered by their attempts at spreading disinfo.

The Social Media in Strategic Communication (SMISC) program will develop techniques to detect, classify, measure and track the formation, development and spread of ideas and concepts (memes) in social media. This will provide warfighters and intelligence analysts with indications and warnings of adversary efforts to propagate purposefully deceptive messaging and misinformation. Social media creates vulnerabilities that can be exploited to harm U.S. interests and threaten national security and have become a key operating environment for a broad range of extremists. SMISC
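For illustration only (the budget request quoted above describes goals, not methods), "tracking the formation and spread" of a meme could be as simple as flagging bursts in its daily mention count against a trailing baseline. Everything in this sketch, including the numbers and threshold, is invented:

```python
def meme_spikes(daily_counts, window=3, threshold=2.0):
    """Flag days where a meme's mention count exceeds `threshold` times
    the trailing `window`-day average -- a crude burst detector.

    daily_counts: mentions per day, index 0 = earliest day.
    Returns the list of day indices flagged as spikes.
    """
    spikes = []
    for day in range(window, len(daily_counts)):
        baseline = sum(daily_counts[day - window:day]) / window
        if baseline > 0 and daily_counts[day] >= threshold * baseline:
            spikes.append(day)
    return spikes

# A phrase mentioned ~2-3 times a day suddenly jumps to 11, then 12.
counts = [2, 3, 2, 2, 11, 12, 3]
print(meme_spikes(counts))  # days 4 and 5 stand out against the baseline
```

A real system would add smoothing, per-source weighting, and network analysis, but the principle is the same: a spike against the recent baseline is what gets flagged as a meme taking off.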


Apparently the program has been successful. The 2013 budget has doubled to $16.72 million. While the first two years of the program were devoted to seeing if the idea was feasible, plans for 2013 have added a very interesting item.

Demonstrate methods for countering adversary influence operations using techniques of semi-automated narrative creation based on predictive social dynamic models.


Semi-automated narrative creation. That sounds like using bots to produce rebuttal to the disinfo produced by "adversaries". I wonder what else they might be used for. I wonder who's getting the contract for the development.

www.darpa.mil... (Page 61)




posted on Dec, 10 2012 @ 05:55 PM
reply to post by Phage
 


I can't view this as anything but bad. It's not about steering conversations based on merit, but on predictive trends fitted to a story they have pre-scripted, with blanks to fill in based on the topic.

So the question then is: who gets to decide what is true?

Mind manipulation is bad enough in person. I disagree with its morality even if their truth agreed with my truth 100%.



posted on Dec, 10 2012 @ 06:00 PM
reply to post by Phage
 


My guess would be possibly Ori Cohen - and whatever shell company he's founded lately.

On the lighter side... this means some of us are getting upgrades. I wonder who?



posted on Dec, 10 2012 @ 06:00 PM

Originally posted by Dustytoad

So the question then is: who gets to decide what is true?

Other bots. Apparently.

Tailor specialized algorithms to recognize purposeful or deceptive messaging and misinformation, persuasion campaigns, and influence operations across social media.

www.darpa.mil...
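As a toy illustration of what "algorithms to recognize purposeful or deceptive messaging" might look like at its very simplest, here is a bag-of-words Naive Bayes classifier. The training examples and labels are invented for this sketch; any real SMISC system would be far more sophisticated:

```python
import math
from collections import Counter, defaultdict

def train(labeled_docs):
    """Fit a tiny bag-of-words Naive Bayes model.
    labeled_docs: list of (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    vocab = set()
    for text, label in labeled_docs:
        label_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the most probable label, using log-probabilities
    with add-one (Laplace) smoothing for unseen words."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        n_words = sum(word_counts[label].values())
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data -- a real system would need vast labeled corpora.
docs = [
    ("official sources confirm the report", "genuine"),
    ("independent review verified the claim", "genuine"),
    ("secret plot they are hiding the truth", "deceptive"),
    ("wake up the media is hiding everything", "deceptive"),
]
model = train(docs)
print(classify("they are hiding the report", model))
```

The hard part, of course, is not the classifier but the labels: somebody has to decide which training examples count as "deceptive," which is exactly the objection raised in this thread.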



posted on Dec, 10 2012 @ 07:27 PM
reply to post by Phage
 


Maybe the bots got at this topic? Does anyone else feel like this is the beginning of the end for the spread of minority thought in public? That way the majority opinion would always hold, so no change could come unless sanctioned?

I thought this thread would get more attention.

In any case, thanks for the link, Phage. It will take me a while to get through all of that.


The Ministry of Truth comes to mind...



posted on Dec, 10 2012 @ 07:46 PM
Well now, this ain't 'your father's spellchecker,' is it?

Maybe they can couple this technology with what they're cooking up at the 'Istanbul Process,' which our esteemed representatives are attending this week.

Here's a snip:


The initiative's goal is to implement non-binding UN Human Rights Council Resolution 16/18, which itself calls for the criminalization of various forms of speech concerning religion.


Read More: AmericanThinker.com

Bots fighting bots? Don't forget to program in the character flaws, such as spelling and grammatical errors and so forth, so the bots will be believable. Do you suppose those bots will go 'off topic' in short order, each finding fault with the other? That'll be a hoot.

I get the feeling this could be used in much the same way the (insert alphabet agency here) uses 'social engineering' to entrap alleged terr0rists.



posted on Dec, 10 2012 @ 08:51 PM
The real question is:

On which user accounts here are these disinfo-bots being tested?



posted on Dec, 10 2012 @ 08:57 PM
reply to post by Phage
 



The idea being that "adversaries" might be discovered by their attempts at spreading disinfo.


It seems apparent to me that this program may not actually be meant to protect the West from foreign disinformation, but to prevent it from inhibiting Western sources of disinformation.


We have seen memes go viral. It's a war for our minds, and both sides, whoever they may be, have planted their flags in our brains.




posted on Dec, 12 2012 @ 11:14 PM
If one thinks rationally about this scenario, the entire premise is quite ludicrous and appalling, really.


Anti-gov't disinfo co-intel bots discoursing with undercover shill truth bots.
(Or something like that.)

Thank goodness I have a highly sensitive B.S. meter and can figure out someone's (or something's) agenda fairly quickly. But for the more impressionable...?

It's good to see all this money, time and effort going towards such useful things.


It's good to know 'ideas' and 'concepts' need to be shaped, watched, monitored, countered, stunted, manipulated, and artificially controlled. What a time we live in. Shhh. I hear the Gulag coming.



posted on Dec, 12 2012 @ 11:17 PM
reply to post by Phage
 


Thanks brothers. This made my day.

Imagining an army of bots that educate the ignorant is a dream come true.



posted on Dec, 14 2012 @ 01:51 AM

Originally posted by Goldcurrent

Thank goodness I have a highly sensitive B.S. meter and can figure out someone's (or something's) agenda fairly quickly. But for the more impressionable...?


I kind of pride myself on my BS meter too. Unfortunately (for me, at least) I can usually only vaguely detect the slight aroma of BS. I don't necessarily know why I smell it or what the truth is.

But I guess it's useful to have it one way or another. I'd rather have a clue there's something foul in the air than not. The hard part is living with the knowledge that you'll probably never convince 99% of people you meet that something ain't right here.






posted on Jan, 10 2013 @ 01:22 AM
I have worked (or currently still work) for the leading social media monitoring and engagement company. The capability exists today to make sense of every social post on any given subject, posted to any website, blog, forum, etc. For instance, if I wanted to see what people were saying about Coca-Cola, I could not only find every post but also see trends in sentiment, spot spikes in activity, monitor competitors, and even script replies.
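The sentiment-trend monitoring described above can be sketched crudely with a tiny hand-made word lexicon. The word lists, posts, and dates here are all invented; commercial monitoring tools use far richer language models than this:

```python
# Invented toy lexicons -- real tools use large, weighted sentiment models.
POSITIVE = {"great", "love", "refreshing", "best", "good"}
NEGATIVE = {"flat", "awful", "worst", "bad", "stale"}

def post_sentiment(text):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_sentiment(posts_by_day):
    """Average lexicon score per day.
    posts_by_day: dict mapping day -> list of post texts."""
    return {day: sum(post_sentiment(p) for p in posts) / len(posts)
            for day, posts in sorted(posts_by_day.items())}

posts = {
    "2013-01-09": ["love the new formula", "tastes great"],
    "2013-01-10": ["went flat fast, awful", "worst batch ever"],
}
print(daily_sentiment(posts))  # sentiment swings positive to negative overnight
```

A day-over-day drop in the average is exactly the kind of "spike in activity" a brand monitor, or a narrative monitor, would alert on.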

What makes this scary is that it immediately brought to mind a video I saw a few years ago of Secretary Clinton discussing a communication war that they are losing. I am sure a link to the video can be found, and I would urge someone to find it.

Basically, viewed through that lens, this is a huge disinformation tool, designed not only to create white noise but also to let the disseminators of that information separate the signal from the noise, thus allowing them to hold the only true version of opinion.

It is scary that this could really undermine the entire purpose of social media.



posted on Jan, 10 2013 @ 01:25 AM
reply to post by Phage
 


Leave it to DARPA; they would do something like that.

So, who is to say they haven't already, and are just publicizing the fact for increased funding?




posted on Jan, 10 2013 @ 09:53 AM
reply to post by ADVISOR
 

Actually, it's DARPA that provides the funding. Here are the contracts that were awarded.

Systems and Technology Research LLC
University of Southern California
SentiMetrix
Georgia Tech Research Corporation
Indiana University
International Business Machines Corp.

www.dod.mil... df



posted on Jan, 10 2013 @ 09:59 AM
reply to post by Phage
 


Then in that case I want some too. Money, that is.

I always figured they worked on projects discreetly, and then went public when mass interest warranted it, for the obvious reasons stated in my previous post.

A grant from them would be useful.




posted on Jan, 10 2013 @ 10:05 AM
reply to post by ADVISOR
 

Not at all. DARPA doesn't really do much of its own research.

Here you go!
www.darpa.mil...



posted on Jan, 10 2013 @ 10:08 AM
Well of course, thank you for the link.




reply to post by Phage
 



posted on Jan, 10 2013 @ 10:41 AM

Originally posted by DerepentLEstranger
The real question is:

On which user accounts here are these disinfo-bots being tested?


It's just a sly way for Phage to come out of the closet, so to speak.


Sorry Phage, you should have expected it, seeing as you're the #1 suspect of being a so-called disinfo bot; not in my eyes, but to many members here, I assume.



posted on Jan, 10 2013 @ 11:40 AM
reply to post by Phage
 


Explanation: S&F!

N/A

Personal Disclosure: Bumped!




