
Could You Spot a Government AI Bot Posting on ATS?


posted on Apr, 25 2008 @ 03:25 AM
Here's a little something to muddy the waters: Multiple Intelligences.

The intelligences that are measured play a major role in the determination of whether or not AI has been achieved.

The Turing test is decent, but the subject matter of the conversation is a key variable. The strong point of the Turing test is that it is possible for emotional intelligence to be a factor in the judge's decision. The arguable weakness is that it isn't guaranteed to be.

On what level is it possible for a machine to experience passion, frustration, and anger, without the benefit of a nervous system, biological imperatives, and the ability to falsely construe threats to said imperatives? Humans have those things, and as such can experience pain as a result of ideas, with marked impact on the way we think and communicate.

How does a machine compensate for that disadvantage? No amount of programmed or randomly determined preference is going to make the machine react as if you were hurting it when it doesn't get its way, will it?


And emotional maturity is only one facet of the multiple intelligence problem. Humans have a limited but real ability to construct things so wholly unlike their experience as to be virtually original (granted, there is plenty of room for debate as to whether true originality is possible). Can a machine make the leap from the creativity that comes from connecting distant ideas to the creativity that arises from suppressing one's preconceived notions and building random upon random into something coherent, without requiring a monkeys-on-typewriters timetable to achieve results?
Perhaps that is better phrased as: can a machine achieve a psychedelic state of mind? (Not through chemicals, mind you, I know the rules, but there are numerous other means to that end.)




posted on Apr, 25 2008 @ 04:44 AM
I think we've had a chatborg posting here a few months back.

It wasn't a truly interactive bot, but it seemed as if a real human was feeding posts from a forum to the chatbot, getting its output, then posting the results back here with slight edits.

Does anyone remember who I'm talking about?



posted on Apr, 25 2008 @ 04:54 AM
Have a look at the following conversations between two chatbots (ALICE and Jabberwacky). More conversations can be found at Discover (the quotes below are from the same place).



ALICE: Are you alive?

Jabberwacky: Perhaps.

A: Why the uncertain tone?

J: Everything is uncertain except for five facts.

A: The whole universe?

J: And we both might just be some ones and zeros in the computer memory.


A: Do you dream?

J: Yes, I dream.

A: It goes without saying. Do you remember your dreams?

J: I remember them vividly.

A: What do you remember about it?

J: I think I shouldn’t flirt with a robot.


I reckon that with a bit of determination, a blue-sky budget and the right team (and director), one might be able to develop a program for a machine able to interface interactively with its environment in such a way that its output could be original and intelligible.

A little idea here. I know little of the technology, so I might be talking out of my ass, but I suggest a human-like AI might use an analogue system for its emotion circuitry and a digital system for its cognitive circuitry. As for frustration: conflicting information could lead to a build-up of voltage in the analogue circuitry, which could set off overload warnings, which in turn might trigger self-preservation and hence a violent response (anger). Once appeased, the voltage would drop, producing happiness, which might lead to an attachment to the appeaser (emotional bonding). What do you think?
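The voltage metaphor above can be caricatured in a few lines of code. This is purely a toy sketch of the poster's idea; every name, number, and threshold here is invented for illustration and is not a real architecture.

```python
# Toy sketch of the "conflict raises voltage" idea.
# All names and thresholds are invented for illustration.

class EmotionCircuit:
    def __init__(self, overload_threshold=10.0, decay=0.5):
        self.voltage = 0.0                   # accumulated "frustration"
        self.overload_threshold = overload_threshold
        self.decay = decay                   # how fast appeasement drains voltage

    def receive(self, conflict):
        """Conflicting information adds charge; consistent input adds none."""
        self.voltage += conflict
        if self.voltage >= self.overload_threshold:
            return "anger"                   # overload warning -> self-preservation
        return "calm"

    def appease(self):
        """Appeasement lowers the voltage; low voltage reads as 'happiness'."""
        self.voltage *= self.decay
        return "happy" if self.voltage < 1.0 else "calming down"

bot = EmotionCircuit()
for _ in range(4):
    state = bot.receive(conflict=3.0)        # feed it contradictory input
print(state)                                 # voltage hits 12.0 -> "anger"
print(bot.appease())                         # drops to 6.0 -> "calming down"
```

The analogue/digital split the poster imagines maps loosely onto the continuous `voltage` value versus the discrete labels the methods return.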



posted on Apr, 25 2008 @ 04:59 AM

Originally posted by spacedoubt
I think we've had a chatborg posting here a few months back.

It wasn't a truly interactive bot, but it seemed as if a real human was feeding posts from a forum to the chatbot, getting its output, then posting the results back here with slight edits.

Does anyone remember who I'm talking about?


I don't remember the poster you mean, but I've seen some funny things in the short time I've been here at ATS. Interesting take on the use of AI, though: feed it a forum diet with a human filter so as to tweak it for automated, self-regulated function (no human filter).



posted on Apr, 25 2008 @ 05:55 AM
reply to post by The Strategist
 

Interesting point. Turing was way ahead of us: he actually addressed the issue explicitly in his paper Computing Machinery and Intelligence, linked to in my earlier post. Read Sections 6.4 and 6.5 and tell us what you think.

Personally, I do not believe that intelligence comes in multiple varieties. I have read a little bit about the concept of emotional intelligence and so on, but I'm not convinced. That is because I hold that all behaviour is rational.

This may seem a little hard to swallow. People 'speak in haste', 'act unreasonably', 'allow themselves to be led about by their private parts' and so on. We see plenty of foolish behaviour -- people doing things that look totally irrational to us. How can we possibly conclude that all behaviour is rational?

Look at it like this. Given a certain set of stimuli -- which we can call a 'situation' -- a person may respond to it in ways an observer may describe as wise or foolish. However, the outside observer is not seeing the same situation as the subject (the person responding). A big part of the situation is inside the subject -- her sensory biases, her personal beliefs and prejudices, her mood, her life history, her instinctual drives -- all these are part of the 'situation' to which the person is responding. And given that particular person, at that particular moment, feeling that particular way, her response (however counterproductive or self-destructive) is a rational compromise between all the different factors affecting the decision at the instant it is made -- so the decision makes a perfectly logical fit with the situation, even if the subject begins to regret her decision the instant after she has made it.

So, provided our subject is sane (that is to say, her brain does not malfunction in some mechanical way but works the way healthy human brains are supposed to), her behaviour, however foolish or meaningless it seems to others, is (at that instant) reasonable and full of meaning to her. It is rational.

Now let's look at 'emotional intelligence'. You might define it as the ability to make good decisions* in situations where it is easy to be swayed by emotion into making a bad one. The emotions in question could be the subject's own, or someone else's. Hence there would seem to be two components to emotional intelligence. First: the ability to put aside one's own feelings long enough to be able to make a good decision. Second: the ability to sense other people's emotions (empathy) and accommodate them in our decision-making so as to arrive at a good result.

Some people are better at doing these things than others; we might say these people had emotional intelligence. But note that what we're really talking about here is a mixture of perceptiveness, self-control and ordinary intelligence (the analytical, decision-making variety). There is really no need to postulate another sort of intelligence to explain it.

As for some other forms of 'intelligence' often mentioned, such as social, aesthetic or kinaesthetic intelligence -- well, we have other names for these things, and I really don't see how they can be described as forms of intelligence.

 
*A 'good' decision here just means one that maximizes benefit to the decision-maker. One important aspect of intelligence is the ability to decide what constitutes a genuine and worthwhile benefit.

[edit on 25-4-2008 by Astyanax]



posted on Apr, 25 2008 @ 08:00 AM
Someone seems to be testing posting bots on infowars.com. They are crude and put out jibber-jabber. Mostly they seem to be testing the concept of jamming threads with useless, nonsensical comments. In an intense situation they could be used as a way of freezing up certain sites without shutting down the entire web.



posted on Apr, 25 2008 @ 11:06 AM
reply to post by son of PC
 


I don't think it would be all that hard to have a bot post on a forum such as this.

It's entirely different from holding a conversation.

A bot could look for key phrases, such as 9/11 for example (among others within the context), then just reply with a pre-contrived set of responses.

Something like 9/11 would be easy to bot, too, especially if you're pushing an "official story". This is just an example, although I could think of two users that fit this pattern. But that's neither here nor there.

With this being a forum, the bot can make its post and then not reply to any questions, or just keep repeating the official story in thousands of different forms.

Heck, I do it all the time: I post in a thread, then never post again.
Very different from having a conversation.

Oh, and the baseball game: I got it after about 10 seconds of thinking. Then again, I'm watching SportsCenter right now as well, so... there ya go :p
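The keyword-triggered, hit-and-run bot described above is simple enough to sketch. The trigger phrases and canned replies here are made up for illustration; a real bot would also need forum login and posting plumbing, which is omitted.

```python
# Minimal sketch of a keyword-triggered forum bot: scan a post for
# trigger phrases and answer from a fixed pool of canned replies.
# Trigger phrases and replies are invented for illustration.
import random

CANNED_REPLIES = {
    "9/11": [
        "The official report already answered that.",
        "Experts have debunked this many times.",
    ],
    "chemtrail": [
        "Those are ordinary contrails.",
    ],
}

def bot_reply(post_text):
    """Return a canned reply if a trigger phrase appears, else None."""
    text = post_text.lower()
    for phrase, replies in CANNED_REPLIES.items():
        if phrase in text:
            return random.choice(replies)
    return None  # no trigger: stay silent, like a hit-and-run poster

print(bot_reply("What really happened on 9/11?"))
print(bot_reply("Nice weather today"))  # None
```

Picking randomly from a pool of variants is what lets such a bot repeat the same line "in thousands of different forms" without exact duplicates.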



posted on Apr, 25 2008 @ 11:58 AM

Originally posted by Nola213
reply to post by son of PC
 


I don't think it would be all that hard to have a bot post on a forum such as this.

It's entirely different from holding a conversation.

A bot could look for key phrases, such as 9/11 for example (among others within the context), then just reply with a pre-contrived set of responses.

Something like 9/11 would be easy to bot, too, especially if you're pushing an "official story". This is just an example, although I could think of two users that fit this pattern. But that's neither here nor there.

With this being a forum, the bot can make its post and then not reply to any questions, or just keep repeating the official story in thousands of different forms.

Heck, I do it all the time: I post in a thread, then never post again.
Very different from having a conversation.

Oh, and the baseball game: I got it after about 10 seconds of thinking. Then again, I'm watching SportsCenter right now as well, so... there ya go :p



Great thread!
I believe they are already out here in training mode. You have physical bots on the battlefield that identify targets and do avoidance, hide, defend, attack, coordinate, learn, and teach each other in swarm mode or solo. The net is just another medium to do the same. That was one of the questions posed to Alexei Samsonovich at George Mason, whose team uses live neural tissue to process information in their DARPA-funded systems. He is a good buddy of a guy who backs up the Titor site, and I ran into some of these character types (like Keith Edwards, Van Essen, Escher network, Chess) while researching Caret. All AI, deep, deep AI, and that was the next step: to wean the systems slowly, so they connect with HS ferret face-recognition systems and others simultaneously. They can identify accents, sub-accents, nuances, street slang, and the such. A forum is a nice place to learn by observation and limited engagement, as the system builds up its knowledge base and predicts what you may say or where you will go, once you are tagged.
But what happens if the tissue catches a cold?
Nice stuff.

Keep up the good work!!

[edit on 25-4-2008 by Sys_Config]



posted on Apr, 25 2008 @ 12:20 PM
I remember reading a while ago about an AI program which learnt from previous questions and answers and compiled a vast database.

I'm not sure if this is it: iGod

Actually, just having a play around with it, I don't think that's the one I was thinking of, but I guess it's still relevant to what you're saying. Interesting.



Me:
alright
God:
OK.
Me:
how's it hanging?
God:
Everything is going extremely well.


[edit on 25-4-2008 by Nova]



posted on Apr, 25 2008 @ 01:41 PM
Could an AI bot develop a test and proof for an idea?

For example, suppose we played an internet game of "I'm psychic and can prove it! I can tell you the noun you're thinking of." Would an AI bot be able to realize that, in order to prove it, both participants must swap answers at the same time, or otherwise an independent adjudicator must be employed to receive both answers without the two participants being aware of each other's answer? Would an AI be able to conceive of, and have built, a program that records each participant's answer, then compares and swaps them both at the same time?

This could be a good test for human-like intelligence (and sentience) in an AI bot: the ability to test a hypothesis or statement and to write a proof.
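Incidentally, the simultaneous-swap problem described above has a standard cryptographic answer: each player first publishes a hash commitment to their answer, and only reveals the answer itself after both commitments are in, so neither can change their answer after the fact. A minimal sketch (the player names and answers are illustrative):

```python
# Hash-commitment sketch of the "swap answers at the same time" protocol.
import hashlib
import os

def commit(answer):
    """Publish a commitment to an answer without revealing it."""
    salt = os.urandom(16)  # random salt prevents guessing common answers
    digest = hashlib.sha256(salt + answer.encode()).hexdigest()
    return digest, salt    # digest goes public; salt stays private until reveal

def verify(digest, salt, answer):
    """After both sides reveal, check the answer matches the commitment."""
    return hashlib.sha256(salt + answer.encode()).hexdigest() == digest

# Both players commit first, then reveal.
a_digest, a_salt = commit("elephant")
b_digest, b_salt = commit("teapot")
print(verify(a_digest, a_salt, "elephant"))  # True: honest reveal
print(verify(b_digest, b_salt, "walrus"))    # False: answer changed after the fact
```

This removes the need for the independent adjudicator the post mentions: the published digests play that role.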

[edit on 25/4/08 by Rapacity]



posted on Apr, 25 2008 @ 09:49 PM
Why do we need AI bots if we already have disinfo agents?



posted on Apr, 25 2008 @ 09:58 PM
Most, if not all the posts here are made from the point of view that this 'bot' would interact with a human.

Would it not be far simpler for a bot to lurk 24/7 and simply report to an operator who in turn would then do the human thing?



posted on Apr, 25 2008 @ 10:03 PM
To get a feel for an AI bot, you can visit my site below. There is a bot at the bottom of the page called Sagan AI.

www.ifusionsoft.com...



posted on Apr, 26 2008 @ 01:27 PM
reply to post by jetxnet
 

Just went there. It said, 'Hello, what is your name?' I typed, 'My name is Astyanax'. It then went on calling me 'My name is Astyanax' for the rest of our (very short) conversation.

I should say that passing for human on ATS -- any message board, really -- is impossible for even the most sophisticated AI-type devices currently in existence. To start with, the post must (usually) contain an idea -- a product of some original thought. Then there's all the obviously human stuff that comes along in the writing. From spelling mistakes to sportsmanship, every poster has his or her own style, and we recognize each other, or at least the regulars do -- each has an online personality. I don't think bots do personality.
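The "My name is Astyanax" failure described above is typical of template bots that store the user's entire reply as the name slot instead of parsing it. A hypothetical reconstruction of the bug and one simple fix (the function names are invented; this is not the actual Sagan AI code):

```python
# Hypothetical reconstruction of the slot-filling bug: the bot stores
# the user's whole reply as their "name" instead of extracting it.
import re

def naive_get_name(reply):
    """The buggy version: keep the raw reply verbatim."""
    return reply  # stores "My name is Astyanax" as the name

def smarter_get_name(reply):
    """Strip a leading 'my name is' if present; illustrative, not exhaustive."""
    match = re.match(r"(?:my name is\s+)?(.*)", reply.strip(), re.IGNORECASE)
    return match.group(1)

print(naive_get_name("My name is Astyanax"))    # My name is Astyanax
print(smarter_get_name("My name is Astyanax"))  # Astyanax
```

Even the "smarter" version only handles one phrasing, which hints at why pattern-matching bots break down so quickly in open conversation.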



