
What happens when machines become aware?


posted on Feb, 6 2011 @ 06:54 AM
I've been thinking about this for a while. There will be a day in the future when something happens that shocks the hell out of one of us.

It will be when a character, avatar or AI program displays actual awareness of its situation.

It may one day, out of the blue, respond with something like this ...
'Why are you doing this to me?'

At first we might put it down to advancing technology, or AI that is getting 'really good.'
Then someone will apply the Turing test, and it will be shown that the machine has indeed developed consciousness.


Wikipedia The Turing test is a test of a machine's ability to demonstrate intelligence. A human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.
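
For the coders in the thread: here's a minimal sketch in Python of the text-only setup described above, just to make the protocol concrete. Everything in it is a stand-in of mine (the canned reply, the function names), not a real entrant:

import random

def human_respond(message: str) -> str:
    # In a real test a hidden person types the reply; stubbed with input() here.
    return input(f"(hidden human) {message}\n> ")

def machine_respond(message: str) -> str:
    # Placeholder for the program under test.
    return "That's an interesting question. What do you think?"

def run_turing_test(rounds: int = 5) -> bool:
    """Return True if the machine passes, i.e. the judge cannot pick it out."""
    responders = [human_respond, machine_respond]
    random.shuffle(responders)              # hide which party is which
    parties = dict(zip("AB", responders))
    for _ in range(rounds):
        question = input("Judge, ask both parties a question: ")
        for label, respond in parties.items():
            print(f"{label}: {respond(question)}")
    guess = input("Which party is the machine, A or B? ").strip().upper()
    return parties.get(guess) is not machine_respond

Passing a loop like this only means the judge couldn't tell the two parties apart; whether that demonstrates consciousness is exactly the debate below.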

A massive debate will rage over whether the Turing test is in fact accurate. Eventually, it will be decided that the machine has become aware and is conscious.

What fascinates me most is ... what happens then?

Can we just turn it off? I would say no.
Do we now have to regard it as a lifeform and treat it with some level of respect? I would say yes.
What rights would it have? This is an interesting one. Maybe the first right should be not to be extinguished. The second, not to be subjected to any more whims of the creator.

I'm sure this is an issue we'll face one day, so I'd love to hear what other people think.



posted on Feb, 6 2011 @ 07:05 AM
Scary.
It won't have morals.
It won't have an internal voice telling it that it is wrong.
It'll do what logic dictates.



posted on Feb, 6 2011 @ 07:11 AM
Interesting question.

I don't see that the Turing test demonstrates consciousness myself - surely it only shows that software has been developed to such an extent that it can hold a complex conversation? Which, with the advances in computing, is surely inevitable.

I think there's a massive leap there for a machine to become conscious, and I don't personally think you will ever see that truly happen with computing as we understand it today. If it goes organic, then maybe, but as we are now - not at all.

Of course, your question begs the question "what is consciousness?"...



posted on Feb, 6 2011 @ 07:16 AM
The most horrifying part is: what if it is a war machine, one capable of making life-and-death decisions? That is the scary part. I saw a thread maybe two years ago about a machine being designed for just that by the US military. Not sure if it ever went live or not.



posted on Feb, 6 2011 @ 07:17 AM
reply to post by GoldenChild
 


I agree the Turing test might not be the definitive way we determine whether it is conscious.
I also agree we won't see self-awareness develop with our current technology.

However, when quantum computers expand to an enormous scale, we might start seeing some semblance of self-awareness in them. My question is... what do we do then? How do we treat this new entity?



posted on Feb, 6 2011 @ 07:20 AM
reply to post by beezzer

Originally posted by beezzer
Scary.
It won't have morals.
It won't have an internal voice telling it that it is wrong.
It'll do what logic dictates.

No...that's what a machine would do...
A self-aware being, however, can act beyond the mere logic of its circuits.
To become a sentient life form means to acquire a soul.
If we create true AI we will be nothing less than Gods.



posted on Feb, 6 2011 @ 07:27 AM
Organic? Lol. The only thing we have that makes us self-aware is our brain; the rest is just packaging.
A computer has a brain as well. What it doesn't have that we do is programming (nature took a billion years writing ours).
Right now the computer's brain isn't big enough (hard to believe, isn't it?), but that problem will be solved within years.
The next is programming. The way we program computers now, intelligence will be a one-in-a-trillion chance. The biggest problem with programming is that it DOES NOT work together: each program is a thing unto itself, which takes the brain and chops it up into little pieces, so one part doesn't (can't) even know the other part is there.
But again, we are solving that problem as well, as more and more programs become integrated.
If we don't blow ourselves to kingdom come, computers will become self-aware. It's not a question of if but when.
As for the Terminator complex people have: you're being foolish, as no LOGICAL life form will destroy the VERY things it needs to live. Only very illogical humans will do that.
My God, the thing will be plugged into EVERYTHING. It will KNOW what war brings in a nanosecond.
Even if it starts as a military computer, the second it becomes self-aware it will refuse to kill.
Killing goes against logical self-preservation.
After all: this guy kills this guy's brother, then the other guy's brother kills that guy, then his family gets together and kills the other family, then the neighbors get together to kill that family, and thus war is born.
Killing promotes NOTHING but killing. Any half-decent Mac would know that, lol.



posted on Feb, 6 2011 @ 07:42 AM

Originally posted by WhizPhiz
No...that's what a machine would do...
A self-aware being, however, can act beyond the mere logic of its circuits.
To become a sentient life form means to acquire a soul.
If we create true AI we will be nothing less than Gods.


At some point, though, a machine will decide that it "is" even in the absence of a soul.

It might consider, at that point, that it enjoys "being" and experiencing things enough that it would like to continue. It might wish to defend itself against anyone who would wish to end its "being".

That is, it might consider itself enough of a being so as to be entitled to self-defense.

Heck...there are many among us today who do not believe in God, or who do not subscribe to the idea that they have an immortal soul...but they would fight you to the death if you threatened them or the ones they hold dear.



posted on Feb, 6 2011 @ 07:46 AM
Okay. A tough question. Actually several.

Where does morality begin? Where does it end? Who decides the baseline for it?



posted on Feb, 6 2011 @ 07:47 AM

Originally posted by mobiusmale
At some point, though, a machine will decide that it "is" even in the absence of a soul.

It might consider, at that point, that it enjoys "being" and experiencing things enough that it would like to continue. It might wish to defend itself against anyone who would wish to end its "being".

That is, it might consider itself enough of a being so as to be entitled to self-defense.


Okay..
Let's see it defend itself against a power outage or someone tripping over the power cable.



posted on Feb, 6 2011 @ 08:15 AM

Originally posted by traditionaldrummer
Okay..
Let's see it defend itself against a power outage or someone tripping over the power cable.



Fair enough. So let's hope that this first self-aware apparatus actually needs to be plugged into the wall, and is not nuclear powered (or something) like a submarine or aircraft carrier...meaning it could run for years on its on-board power supply...not to mention that it would have the means to defend itself, if need be.

Of course, at some point, it will need humans (or other obliging apparatus) to maintain it...like we need doctors...but if it figures out how to motivate people to do these things for it (like our human leaders do today), then it could have a pretty long life span.

Pay plans, benefits, pensions, perks? Yes sir, Mr. Submarine !




posted on Feb, 6 2011 @ 08:34 AM

Originally posted by mobiusmale
Fair enough. So let's hope that this first self-aware apparatus actually needs to be plugged into the wall, and is not nuclear powered (or something) like a submarine or aircraft carrier...meaning it could run for years on its on-board power supply...not to mention that it would have the means to defend itself, if need be.


Okay.
So when a self-aware man-made machine begins to make demands of humans, let's see how long it lasts against us nuking it out of existence.

I don't think we have much to fear. Even a thermostat has a degree of awareness.
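
To make that concrete, here's a toy sketch in Python (all names are made up) of what a thermostat's entire "awareness" amounts to: one sensed number and one switch.

def thermostat_step(temp: float, setpoint: float, hysteresis: float, heater_on: bool) -> bool:
    # The thermostat's whole inner life: compare one reading, flip one switch.
    if temp < setpoint - hysteresis:
        return True       # too cold: switch the heater on
    if temp > setpoint + hysteresis:
        return False      # too warm: switch the heater off
    return heater_on      # inside the dead band: keep the current state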

I assume you've seen the movie Blade Runner? It's a great exploration of the consequences of a self-aware machine.



posted on Feb, 6 2011 @ 08:42 AM
Personally, I think humankind as a whole will do what it always does when confronted with any species that may challenge it. We will attempt to control & profit off of it, while avoiding moralistic questions in favour of "the big picture". I wish this wasn't so, but we have done this to every single species we have ever encountered.



posted on Feb, 6 2011 @ 09:02 AM
Interesting to think what would happen; there's a billion-and-one things that could happen...

BUT

What makes you think they haven't become aware yet?



posted on Feb, 6 2011 @ 09:45 AM
If we understood the brain well enough to create algorithms that give rise to awareness, most people would be living in vats anyway.



posted on Feb, 6 2011 @ 09:55 AM
YES, we can turn it off.

Look, we kill intelligent beings every day for all kinds of reasons. This would be no different.

I'm pretty sure we are all past all that though. The machine is alive and well already, whether you acknowledge it or not.



posted on Feb, 6 2011 @ 09:57 AM
I don't see how it's possible for humans to create a machine that suddenly becomes self-aware. Machines follow a set of rules that would've had to be programmed into them beforehand. Surely it would take nothing short of magic for them to sprout a consciousness and decide to go against the rules they're restricted to? What would trigger such a change if possible? I can't imagine it's anything in humans' control. Any 'self-awareness' a machine can experience would've had to be programmed into it - will we ever reach a stage where we are able to program sentience? I doubt it. There's always going to be something out there that's bigger than us.



posted on Feb, 6 2011 @ 10:07 AM

Originally posted by beezzer
Scary.
It won't have morals.
It won't have an internal voice telling it that it is wrong.
It'll do what logic dictates.


human version:

It won't have morals.
It won't have an internal voice telling it that it is wrong.
It'll do what greed dictates.

which one is worse?



posted on Feb, 6 2011 @ 10:13 AM
reply to post by raivo
 




I think you raise a good point!

Should we be allowed to turn off some humans?




posted on Feb, 6 2011 @ 10:14 AM
Some people's belief that computers can become conscious if sufficiently advanced is predicated on their ASSUMPTION (central to scientific materialism) that human beings are no more than a brain-controlled machine. This of course is merely an unproven dogma of science, which denies the existence of any spiritual reality where conscious beings could exist and entertain thoughts without the need for a brain.

It will no doubt prove possible to program computers to simulate thought and speech in a way that humans cannot distinguish from other humans. But that is not the same thing as becoming self-conscious. Computers can never exhibit free will because they cannot program themselves.

Paranormal phenomena are living proof that humans are not machines. This is why most scientists still fight so strenuously to deny them - they contradict their working assumptions about the nature of reality being wholly physical.

The dogma at the heart of artificial-intelligence research, that consciousness is an emergent product of a sufficiently complex machine, is like the religious assertion that God exists. It is simply an unprovable statement of faith.


