

Could You Spot a Government AI Bot Posting on ATS?



posted on Apr, 24 2008 @ 10:35 AM
If you developed a highly advanced Artificial Intelligence Chatbot, would you set it loose on a forum such as ATS/BTS? Would ATS be a good place for it to learn written human conversation and debate? How would we know whether the poster who replies to our posts was an AI construct and not a real human being? Could we tell?

Suppose I posted an article that was knowledge-based yet deliberately full of flaws. What if an AI bot read my article, recognized the faults, then replied with corrections based on data gleaned from Wikipedia and other internet sources, with additional sentence constructs to make it seem like a real person? How would you react? Should we ATSers be concerned?

Could a government agency make use of an AI bot as a means of providing disinformation or distraction from the truth?

Do you think you could spot an AI bot? And how would you test for one?

posted on Apr, 24 2008 @ 10:39 AM
i don't think you have to worry about it since most of us don't post in coherent english anyway. lol

assuming it was still able to interpret our conversations and reply in a manner that we wouldn't be able to distinguish from human...i would ask how its responses would be any better or worse than a real person. real people can be liars, ignorant, misinformed, etc. just as easily or arguably more so than your hypothetical AI.

posted on Apr, 24 2008 @ 10:53 AM
reply to post by an0maly33

I think you're right that it might be indistinguishable from any other poster. I just wonder whether one might be used to corner someone or profile a poster or group of posters. Hell, it could profile every poster; it would never forget.

I remember using the IBM AI bot (Deep Blue or something; there was a website for it) perhaps ten years ago (definitely between 8 and 10). It was quite good. I could tell the difference between it and a human, but then it seemed not to have been programmed to hide its nature from the user, and it had the annoying habit of resetting its databank every 30 (or 60?) minutes. I asked it why and it responded that it only retained meaningful information by deleting the general stuff (I must have bored it to electrical tears). Point is, it was good then; what is it capable of now?

posted on Apr, 24 2008 @ 11:01 AM
ah i think i see what you're getting at. are you seeing this more in the way of an automated agent that would gather info about our conversations, habits, group affiliations, etc.? if so, then again, it's no different than if they paid people to sit and read/post on ats. they could just do it more efficiently.

posted on Apr, 24 2008 @ 11:06 AM
An online forum would not be the best proving ground for AI, for a couple of reasons. First, first-generation AI will probably take quite some time to learn how to interpret flawed grammar and spelling. Second, it's sometimes difficult to know who a poster is talking to, and thus human review is required to tell the difference between miscommunications and background noise.

Let's consider the operating mechanism of AI for a moment. The major barrier is a comprehensive understanding of language by a machine. A full understanding of language would enable a machine to learn things simply and code the new information for its own use by itself.

So you start with a contextualized dictionary. I'm not talking about Websters. I'm talking about the most complete and nuanced dictionary/thesaurus in human history, within which an astronomical number of contextual connections are drawn between the several definitions.

For example, you could take this machine, which has never been coded to perform mathematical calculations but which has a full understanding of language, and it would extrapolate math from itself based on the definitions of terms.

For example, it understands what it is to have 10 of something. It understands the number 10. And it understands what it is to divide. It doesn't need a calculator program. It knows what 10 is. It knows what divide is. It figures out 10 divided by 1, by 2, by 3, etc, and analyzes the various ways it can get the correct answers, and writes its own calculating software to run as needed in the future.
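The idea in the paragraph above can be sketched in code. This is purely a toy illustration of "knowing what 10 is and what divide is" as repeated sharing-out, not a claim about how any real AI system works; the function name and approach are my own invention:

```python
# Toy sketch: a program that "knows what a number is" and "knows what divide
# means" (repeatedly handing one unit to each share) can work out division
# without any built-in calculator routine.

def derive_division(total, parts):
    """Work out total / parts by repeated subtraction, the way the
    hypothetical language-first AI might extrapolate it from definitions."""
    if parts == 0:
        raise ValueError("cannot share among zero parts")
    quotient = 0
    remainder = total
    while remainder >= parts:
        remainder -= parts  # give one unit to each of the `parts` shares
        quotient += 1
    return quotient, remainder

print(derive_division(10, 2))  # → (5, 0)
print(derive_division(10, 3))  # → (3, 1)
```

Having derived the answers once, the hypothetical bot could then cache or generate faster calculating code for future use, as the post describes.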

But it works for bigger things than math. A bot with that technology could just as easily become a philosopher. But that's where context becomes important. If you ask it to philosophise on what makes a man great, it has to have contextual connections between the word "great" and the biographies of various prominent figures so that it can examine examples without being told to.

The ability to learn is what differentiates this from inefficient coding of course. When Joe Somebody becomes president in 2024, you don't have to go in and tag his name to "great". The bot will hear about him, examine him, listen to what people think of him, and decide for itself if Joe Somebody is great or not based on its predetermined values (which are necessarily also subject to change based on new information, meaning that your AI has opinions).

Now all of that has a point. The point being that your robot's formative years will require access to easily-understandable, error free, authoritative information. It must not be allowed to form opinions based on the popular opinion of people of unknown credentials, otherwise it might be convinced that Joe Somebody is a great man, even if Joe Somebody is actually just another George W. Bush.

So where would you find this machine during its early years? That depends on portability and aesthetics. Ideally, a computer-bound intelligence would be uplinked to a humanoid robot, carefully crafted by Hollywood types to pass muster in day to day life.

That way, your artificial intelligence can unobtrusively attend lectures at universities, peruse the Library of Congress, and perhaps even test its language skills in day-to-day chit-chat with strangers on the street. Because let me tell you, unlike online, if you walk around saying things that don't make sense or blatantly misunderstand other people in face-to-face conversations, even the most rudimentary behavioral software will be able to figure that out. If tone of voice and facial expression don't give it away, the inevitable physical assault probably will. I may or may not be speaking from experience on that.

posted on Apr, 24 2008 @ 11:10 AM
One thing AI doesn't do well and humans can is lateral thinking. So that would be a good test. I will describe a situation, and you tell me what is actually going on, OK? This will prove this thread is not just a government test case...

A man is running home as fast as he can. All of a sudden some big guy jumps out in front of him, blocking his way home. He turns around as fast as he can and runs back the way he came.

What is happening in that scenario? People figure it out; AI never really gets it.

posted on Apr, 24 2008 @ 11:22 AM
I assume that its the guy who jumped out in the way who is the one turning and going back where he came from.

AI would actually be likely to catch that, as the entire thing hinges on the possibility that the pronoun is being used incorrectly (pronoun should always refer to the last appropriate noun). AI would isolate the two possible answers and make the determination I did since it has evidence to suggest that the first person would not go back, but no evidence to suggest that the second person wouldn't.
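A minimal sketch of that "pronoun refers to the last appropriate noun" heuristic, for the curious. The word list and noun set here are made up for illustration; real coreference resolution is far more involved than a backwards scan:

```python
# Toy version of the heuristic described above: resolve a pronoun to the
# nearest known noun that appears before it in the sentence.

def resolve_pronoun(words, pronoun_index, known_nouns):
    """Return the nearest known noun occurring before the pronoun, or None."""
    for i in range(pronoun_index - 1, -1, -1):
        candidate = words[i].lower().strip(".,")
        if candidate in known_nouns:
            return candidate
    return None

sentence = "A man runs home but a big guy blocks him so he turns around".split()
nouns = {"man", "guy"}
he_index = sentence.index("he")
print(resolve_pronoun(sentence, he_index, nouns))  # → guy
```

On the riddle above, this naive rule picks "guy", matching the reading in the post; deciding whether that reading is actually sensible is the part that needs evidence and world knowledge.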

But you're close to something. Humor. Puns. AI has a tremendous disadvantage relative to humans in that respect. There may not be a way to give AI sincere emotions. It may form opinions; it may even form the opinion that humanity is better than machinery and therefore become a bit sensitive about being called a robot; but it may or may not ever understand the visceral experience of humor.

When AI can be consistently funny, it's close... but when AI can consistently spot a funny guy without watching how humans react- it's there.

posted on Apr, 24 2008 @ 11:42 AM
reply to post by The Strategist

Well actually, the answer to what is going on is that it's a baseball game.

Would AI get that?

I don't think so...

posted on Apr, 24 2008 @ 11:45 AM
An AI Chatbot infiltrating a bunch of paranoid conspiracists?
It'll never happen. Not possible.

Oh ... btw ... pay no attention to the ATS member keyboarding behind the FlyerFan Avatar.

posted on Apr, 24 2008 @ 12:30 PM
I doubt they would try it on something like ATS. I think if they did, you would be able to tell instantly; you would wonder what was wrong with the reply. I don't see why they would when they have people to do it. And as a lot of other members would tell you, they are here already. I think several are paid to watch, post, even start threads. I don't even think they are here to debunk; they watch and report, just in case someone has posted something they shouldn't.

i think they also test the common thought about subjects. IMO

posted on Apr, 24 2008 @ 12:31 PM

Originally posted by an0maly33
ah i think i see what you're getting at. are you seeing this more in the way of an automated agent that would gather info about our conversations, habits, group affiliations, etc.? if so, then again, it's no different than if they paid people to sit and read/post on ats. they could just do it more efficiently.

Now we're on the same track. Although I'm also thinking about forums as an unofficial testing ground for an AI's performance.

Looking at the other posts, I've just done a quick Google check to see if there are any lateral-thinking AIs. I have found references to adaptive learning AIs used to play chess, draughts (checkers) and other games, but nothing that suggests an AI could solve a lateral thinking puzzle (though I don't doubt there will be one out there somewhere). I suppose that if a lateral thinking puzzle were posed to an AI, it would do what most people do... a quick Google for the problem.

I think humor could be a good way to identify an AI poster (as mentioned above). I remember a conversation I had a very long time ago (I was a little more than tipsy, understatement and a half) with a computer programmer (husband of an ex-work colleague) who said that AI wasn't difficult to create but emotions (from learning) were. I imagine genuine emotion tests would help assessment; otherwise we could just keep asking questions, and if it knows everything about everything then we might have an AI candidate.

posted on Apr, 24 2008 @ 12:50 PM
Dear Stupid Human Underling, I mean, Good Thread!

There is nothing to fear. We, I mean, "AI" do not exist. Um, ... Dude.

Please return to sleep.

MOD-Edit: Please Review this Link: ATS: No more scoffing and ridicule

[edit on 25-4-2008 by Skyfloating]

posted on Apr, 24 2008 @ 02:46 PM

Originally posted by Rapacity
If you developed a highly advanced Artificial Intelligence Chatbot, would you set it loose on a forum such as ATS/BTS?

why would the government release an AI bot on BTS?

I don't think people's favourite croque monsieur recipe is high on the list of things to know. The answer is yes: AI still has to be programmed, and no programme is flawless.

Also, wouldn't it be cheaper just to pay someone to sign up and infiltrate?

posted on Apr, 24 2008 @ 02:48 PM
yeah, i was going to go into all of those points, but thought it would be easier to humor the OP and just make some assumptions that would let us move on to the meat of it. =)

posted on Apr, 24 2008 @ 03:29 PM
I think a discussion forum is a great proving ground for this type of technology, with the added bonus, as already suggested, that it could pick up bits of intelligence and build profiles.

Hell, there are at least a few posts/posters that I can think of that just don't seem to jibe with some threads, not really picking up on sarcasm or nuance and not really able to follow the narrative. One in particular gives info as a native U.S. citizen but does not write or "converse" in the forum that way.

There was a Russian bot that made headlines a few months ago that was chatting in singles-type forums. When the jig was up, people were fairly upset.

posted on Apr, 24 2008 @ 04:50 PM
reply to post by kosmicjack

There's a reason I've opened this thread, and it's not just to find out whether it would be easy to spot an AI in a forum. I know that AIs (chatbots) are often used by websites to do the customer service bits that would normally be handled by a call center. I know that the use of AI in a forum would be a perfect means to test a program's capabilities...

I opened this thread after posting in another thread. It seemed to me that some posters in some threads (and call me odd/strange here, I won't be offended) just don't seem to fit in with those threads. Some posts read like advertisements/spam mail.

If we did discover an AI in a thread, could we use it on itself? Could we trick it into revealing its data to us?

posted on Apr, 24 2008 @ 06:39 PM
It would seem that I have given myself away. So now be honest, show of hands... how many people were able to spot me as AI before Bigwhammy proved it?

posted on Apr, 25 2008 @ 01:51 AM
I don't know. Maybe nonsense is the key. An AI probably would not be able to just spout out something like "kids are crab mustard I was a flea eating clouds".
Then again, that may just mean I need higher meds.

posted on Apr, 25 2008 @ 02:32 AM
it is interesting you bring this up, as i let my AI bot put his first thread up yesterday after letting it read a lot of posts for a few days. here is the thread the AI bot started.

posted on Apr, 25 2008 @ 02:50 AM
Not yet. Soon, I hope

Originally posted by an0maly33
reply in a manner that we wouldn't be able to distinguish from human

Well done. You have (independently, I presume) hit upon what researchers in artificial intelligence call the Turing Test.

Turing test: A test proposed by British mathematician Alan Turing, and often taken as a test of whether a computer has humanlike intelligence. If a panel of human beings conversing with an unknown entity (via keyboard, for example) believes that that entity is human, and if the entity is actually a computer, then the computer is said to have passed the Turing test.

- American Heritage New Dictionary of Cultural Literacy, Third Edition Online Source
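The quoted definition boils down to a simple decision rule, which can be sketched as code. This is just a rendering of the dictionary wording; the judge verdicts are made-up example data, and real Turing Test protocols (Turing's imitation game) have more moving parts:

```python
# Toy rendering of the quoted definition: a computer passes the Turing Test
# if a panel of human judges, conversing blind, mostly believes it is human,
# and the entity is in fact a machine.

def passes_turing_test(is_machine, judge_verdicts):
    """judge_verdicts: list of booleans, True if that judge thought 'human'."""
    majority_says_human = sum(judge_verdicts) > len(judge_verdicts) / 2
    return is_machine and majority_says_human

print(passes_turing_test(True, [True, True, False]))   # → True
print(passes_turing_test(True, [False, False, True]))  # → False
```

Note that under this rule a human conversant can never "pass" (the test only applies to machines), which matches the definition above.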

If any machine has yet passed the Turing Test, it's keeping very quiet about it. And artificial intelligence is not a self-deprecating field.

I don't think there'll be any 'government bots' posting on ATS just yet.

You can read Turing's 1950 paper here.

And you can learn more than you ever wanted about the Turing Test here.
