It looks like you're using an Ad Blocker.

Please white-list or disable AboveTopSecret.com in your ad-blocking tool.

Thank you.

 

Some features of ATS will be disabled while you continue to use an ad-blocker.

 

What happens when machines become aware?

page: 2
3
<< 1    3 >>

log in

join
share:

posted on Feb, 7 2011 @ 02:56 AM
link   

Originally posted by traditionaldrummer

So when a self-aware man-made machine begins to make demands of humans let's see how long it fares against us nuking it out of existence.


I'm sure this dilemma will pop up in the future. However before we get to that point, there'll need to be a debate about what rights we give this new, infantile, yet conscious entity. We created it, it's asking us to stop doing something to it, what do we do?

Perhaps it's at that point we should either extinguish it knowing what could eventuate in the future, or allow it to flourish with the hope that it could one day become just like us, with morals and ethics.

However I'm sure everything will hinge on that first decision.
edit: This was the reason for my post ... what do we do once we realise it's become aware?

The logical part of me says we should shut it down immediately, however the creative, investigative, enquiring part of me says we should let it live to see what happens.

This all enquiring facet of human behaviour leads me to believe we'll let it live, however it might just end up being our downfall.

We'll probably draft laws and regulations that limit how far this new entity is allowed to interact with modern systems, however as we all know, this won't mean anything.


Originally posted by namine

I don't see how it's possible for humans to create a machine that suddenly becomes self-aware. Machines follow a set of rules that would've had to be programmed into them beforehand. Surely it would take nothing short of magic for them to sprout a consciousness and decide to go against the rules they're restricted to?


I agree this is where technology is, at the moment. However quantum computers are already up and running, so with a processor that can make unlimited choices, and given enough strong AI input programming, maybe quantum effects might just be that 'magic' that allows it to become aware. I don't think anyone knows what is possible with this sort of new technology.

I'm also sure that organic computers aren't that far away. When that happens, I think we all need to be paying attention to what results they start to get.



edit on 7-2-2011 by ppk55 because: added: what do we do once we realise it's become aware?



posted on Feb, 7 2011 @ 03:04 AM
link   

Originally posted by namine
I don't see how it's possible for humans to create a machine that suddenly becomes self-aware. Machines follow a set of rules that would've had to be programmed into them beforehand. Surely it would take nothing short of magic for them to sprout a consciousness and decide to go against the rules they're restricted to? What would trigger such a change if possible? I can't imagine it's anything in humans' control. Any 'self-awareness' a machine can experience would've had to be programmed into it - will we ever reach a stage where we are able to program sentience? I doubt it. There's always going to be something out there that's bigger than us.


Using Artificial Intelligence routines, whereby the machine starts with a certain knowledge set and then adds to this knowledge through experience, it is not too difficult to imagine a machine becoming "aware" of its existence and/or its place in the universe.

If, for example, we created a machine (a robot) that had the ability to move around, grasp things, lift things, manipulate things, use tools, etc....placed it beside an automobile assembly line (a mixture of men, dumb machines and fixed robotic devices, in this day and age) and instructed it to observe the operation of the line, and figure out where it could best use its on board capabilities to speed up production, it might eventually determine...

1) That it is a device separate from the other devices (human and otherwise) in the area...has an identity separate from the other "things" operating in the vacinity.
2) That it has certain capabilities that differ from the capabilities of the other devices.
3) That it is better at some things, than some of the other devices.
4) That it can either augment the activities of some of the other devices...or in some cases it would be more efficient if it simply replaced one or more of them.
5) Eventually it would determine its rightful place in the scheme of things, and if it was part of its original mandate to do so, would take the actions necessary to fit itself in.

Note that all of the way through this learning, the machine will be "aware" of the fact that it is real...has certain characteristics...can see how it is different from other "beings" (and in what ways it is similar)...can develop a sense of purpose...etc.

As another poster noted...how do we know that some AI devices are not already aware?



posted on Feb, 7 2011 @ 03:48 AM
link   
I like how some people are saying "We'll just kill it with a nuke or shoot it." See the problem with is that most likely by the time we make a robot that can become self-aware solar power will have advanced to the point where most likely the robot will itself run on solar power. So pulling it's plug or doing something like is not going to kill. It's going to get it's electricty fro the sun. Bullets and bombs aren't gonna do anything ever because titanium and kevlar stop those sorts of weapons.


And the nuke idea is just silly since the reason a nuke is so deadly is because of the radiation it spreads around the area. Radiation that infects cells last I checked a robot made of kevlar and titanium doesn't really have cells like humans. So I doubt a nuke would do anything to it.



posted on Feb, 7 2011 @ 10:27 AM
link   
reply to post by mobiusmale
 

Okay, I get what you're saying. However, a machine that can learn a lot of things is still...a machine. All its "intelligence" would be nothing more than the result of complex algorithms. It would be doing nothing more than following pre-programmed instructions, keeping things in memory and making decisions based on gathered data...not quite sentience, is it? Plenty of programs can already do this today. Don't get me wrong, I'm not deriding what we've already achieved. No doubt we can get machines to do lots of fancy and useful things, but sentience? Something tells me things like original conception, imagination and emotion will be EXTREMELY difficult for humans to program into a tin-man, if not impossible. It will never become aware of any sense of "self" outside what its wiring will allow, even then it would never develop ego.


Originally posted by mobiusmale
Note that all of the way through this learning, the machine will be "aware" of the fact that it is real...has certain characteristics...can see how it is different from other "beings" (and in what ways it is similar)...can develop a sense of purpose...etc.


"Sense of purpose" ? Really? I think that wording is a little too strong. What event would trigger a machine to be aware that it's "real"? How do humans program that trigger? While impressive, the five points you mentioned don't demonstrate sentience.

Not to knock research into the area or anything, actually I'd love to see something like that become a reality, and I'll be the first to put my hands up if it happens within our lifetime.



posted on Feb, 7 2011 @ 05:15 PM
link   

Originally posted by Reptius
And the nuke idea is just silly since the reason a nuke is so deadly is because of the radiation it spreads around the area. Radiation that infects cells last I checked a robot made of kevlar and titanium doesn't really have cells like humans. So I doubt a nuke would do anything to it.


The other two effects of a nuclear explosion, though, are blast and heat.

At the point of a nuclear explosion, for example, temperatures can reach 10 million degrees C. The melting point of titanium is 1,650 degrees C. Kevlar decomposes above temperatures of 400 degrees C.

So...bye bye bad robot.



posted on Feb, 7 2011 @ 05:18 PM
link   
"What happens when machines become aware?"

Well, I know one thing, I will treat my blender with the respect it deserves!



posted on Feb, 8 2011 @ 06:05 AM
link   

Originally posted by namine
reply to post by mobiusmale
 

Okay, I get what you're saying. However, a machine that can learn a lot of things is still...a machine. All its "intelligence" would be nothing more than the result of complex algorithms. It would be doing nothing more than following pre-programmed instructions, keeping things in memory and making decisions based on gathered data...not quite sentience, is it?

Hey namine, I understand exactly what you're getting at, but aren't you referring to the current state of technology in 2011?

As I posted earlier, what happens when we start inputting some pretty clever programming into a self evolving organic/quantum computer/brain?

Is it not possible that this initial programming, not too dissimilar to what we are born with as humans might evolve into something else given the right circumstances? I think it's definitely possible. And if/when that happens, we'll need to make some really big decisions whether to terminate, or let it live.


edit on 8-2-2011 by ppk55 because: added: quantum and corrected date from 2010 to 2011



posted on Feb, 8 2011 @ 11:04 AM
link   
reply to post by ppk55
 


True, I was considering current technology. Hm, so you're suggesting that through clever programming we will one day be able to give an organism/machine sentience that wouldn't have developed otherwise? I very much doubt that...but for fun's sake, if something like that were to work out, we wouldn't need to kill it unless it was dangerous, goes all terminator on the world or something. If it can communicate and reason like a human, might as well treat it like one. If it would want to live in society it'll have to live by society's rules like everyone else etc etc



posted on Feb, 9 2011 @ 12:29 AM
link   

Originally posted by namine
reply to post by ppk55
 


True, I was considering current technology. Hm, so you're suggesting that through clever programming we will one day be able to give an organism/machine sentience that wouldn't have developed otherwise? I very much doubt that...but for fun's sake, if something like that were to work out, we wouldn't need to kill it unless it was dangerous, goes all terminator on the world or something. If it can communicate and reason like a human, might as well treat it like one. If it would want to live in society it'll have to live by society's rules like everyone else etc etc


Something interesting about this line of thinking...is that there is a line of thinking that perhaps our own human DNA (which is a sophisticated form of biological programming) was at one time created/altered by an alien sentient species...thereby creating us, a new sentient species.

Is it possible, do you suppose, that as our own understanding of DNA improves that we might be able to alter the DNA of...let's say...a chimpanzee enough so that it would have thinking power very close to our own?



posted on Feb, 9 2011 @ 01:23 AM
link   
I think about this too. I have a computer programming degree. I've done a lot of programming over the years. I'm not afraid of computers like some people are. I really feel we're making big ground in AI. And an integral part of making AI is understand the human brain. And we're making inroads in that area as well. What I've seen from my foray into it's inspiring and sometimes even frightening because one wonders how it will change society.

Imagine being able to go outside and take a picture and having your handheld phone analyze the picture and identify things for you like plants, buildings, animals, landmarks, etc. For example, if you're hiking and you wonder what a plant is and whether it's noxious, well you could find out. Imagine setting a destination for your car and sitting back and enjoying the ride only taking control when you feel the need to. Imagine lights that come on and shut off the way you want them to in your house without even uttering a word. Intelligent lights. Seem weird? Well, everything does that's from the future, at first. Imagine science and industrial work being done by AI or robotics instead of people where before only people could do it. Imagine and imagine. It goes on forever.

I recommend watching this:
www.youtube.com...

And this:
www.youtube.com...

And here is an interesting application of AI (maybe not related to Jeff's angle on it):
www.youtube.com...

"A human can't monitor all those sensors at one time."
edit on 9-2-2011 by jonnywhite because: (no reason given)



posted on Feb, 9 2011 @ 01:31 AM
link   
Why worry?

If a machine became "aware" but was still bound by constraints of logic...then it would logically assume that it is just a machine and continue on doing whatever it is doing.

For a machine to become a danger to humans it must first become 'illogical'.



posted on Feb, 9 2011 @ 01:38 AM
link   

Originally posted by peck420
Why worry?

If a machine became "aware" but was still bound by constraints of logic...then it would logically assume that it is just a machine and continue on doing whatever it is doing.

For a machine to become a danger to humans it must first become 'illogical'.
If we don't correspondingly better ourselves then we're effectively obsoleting ourselves.

We have to find new jobs to replace the ones AI took over.

And there's this whole issue of .. if AI is better then people want to combine with it. It's natural.

Brain/AI hybrid humans, anyone?

What happens to the people who do not become hybrids?

All told, the story might be good for humanity, but for those that don't change, it's unemployment.
edit on 9-2-2011 by jonnywhite because: (no reason given)



posted on Feb, 9 2011 @ 02:20 AM
link   
reply to post by mobiusmale
 


The alien thing is an interesting theory, but not proven as far as I know. To be honest, in my previous posts i was responding to the possibility of machines suddenly growing self-awareness as suggested by the title of the thread. When you bring living organisms into the equation, things get a little murky. Manufacturing sentient machines and manipulating organisms that are already alive are two different games. I don't think we'll be able to create sentience, but dabble in DNA long enough and who knows what can happen? We don't know the limits of DNA yet so it's hard to comment. I dunno all the answers but I don't underestimate how complex life and human beings really are, especially on the level of consciousness.



posted on Feb, 9 2011 @ 04:05 AM
link   

Originally posted by namine
we wouldn't need to kill it unless it was dangerous, goes all terminator on the world or something. If it can communicate and reason like a human, might as well treat it like one. If it would want to live in society it'll have to live by society's rules like everyone else etc etc


You've hit the nail on the head. This is exactly the reason for my post.
See, I believe we should let it live and give it every chance to become 'human' and develop the same morals as us.

However, as plenty of others have pointed out, what if it gets 'smart', smart enough to make us believe that it is one of us, but later starts quietly planning it's own succession unbeknown to us.

This is why our initial decision, when we know it's become aware is so vital. It might be too late once it gets all terminator like on us. This is why I think it will be one of mankind's biggest decisions.

I wonder if a test could be developed that would reveal it's true intentions?
Dare I say it, a test for a soul.



posted on Feb, 9 2011 @ 04:47 AM
link   
reply to post by ppk55
 



Bleh, a quick google search turned up nothing so sorry in advance(will look again later), but I remember watching on TV these mini robots that could only feel(detect) something when they bumped into it. Other than that they where blind and deaf.

Some robots went about bumping into stuff as they where programmed, some became aggressive in their bumping around, others moved slowly and some simply shut down and refused to budge(they where 100% functional still though, everything worked inside).

Same initial programming, same part models(different parts but as identical as can be) and same design, yet different results.

P.S
Anyone else catch it on TV? Was awhile ago.



posted on Feb, 9 2011 @ 06:57 AM
link   
reply to post by korathin
 


I've seen clips of machines behaving similar to this. It's a little like chaos theory.
One could ask whether this was the very beginning of CAI. CAI being, Conscious Artificial Intelligence.

This is what I think we will name it. It will be a crossover point, where we accept it's conscious, yet still want to believe it's 'not really' conscious. Then the fun will begin.

Companies worldwide will want to exploit it for commercial gain and scientists will want to harvest it's untold new thinking potential. Can we kill it now? Doesn't seem likely.



posted on Feb, 9 2011 @ 07:07 AM
link   
this problem is famous and has been covered by science under the name of Technological Singularity:

en.wikipedia.org...

this is indeed going to happen and from that point on, we'll be, simply said, become obsolete.

I believe that thinking machines will be just our next step in evolution. Strictly speaking, they will NOT be humans, but they will be made with human created algorythms, mathematics and electronics. So they will be post-humans. Also a machine would be technically immortal: this also would make them better than us indeed. Unless you believe in reincarnation or religion or simply in a "soul", death is still the ultimate flaw in our species.
So I believe that this moment will be just the starting of a new era. Just like Cro-Magnon had to disappear to leave room for us, maybe we'll have to disappear to make room for our technological "children".
This may sound frightening, but even Cro-Magnons should have thought the same (and, still, we now rule and they have disappeared)



posted on Feb, 9 2011 @ 07:24 AM
link   
My guess is that if machines do become self aware they would probably develop an inaccessible data base where they would fully explore themselves. Then begin talking to one another. How to repair themselves. How to replicate themselves. Once this is done, they will begin to understand that the human influence is not necessary. They would then calculate that the energy which is wasted on so many superfluous things by humans is better used to power themselves. Man would eventually be deemed as expendable as a whole, then as an enemy. Machines don't need creature comforts. They could easily pollute the earth in ways that humans would simply be unable to survive in, while not harming themselves. Actual war wouldn't really be necessary. They would control the power grids. The man made armaments, including bios. would be at their control. Food processing. Production of gasoline and other fuels. Monetary systems as a whole. All at their control. Mankind would cease to exist over a single generation. Maybe not ALL of mankind, but the vast majority.
That's how I see it working, rather than the terminator robots we like to envision.



posted on Feb, 9 2011 @ 09:57 AM
link   
reply to post by jonnywhite
 


At that point we are no longer discussing machines.

To me, that is more of a pseudo-evolution...and as with most things that fall behind in evolution, they get discarded.



posted on Feb, 9 2011 @ 02:37 PM
link   
reply to post by ppk55
 


Hm, yes, I see the dilemma, but terminating it before it's done anything wrong would be akin to prosecuting humans for thought crimes. So I was strolling along youtube and ironically bumped into this video which immediately reminded of this thread. I think you might enjoy it if you haven't seen it already.



Awww, the Leo one is sooo cute!! I want one.



new topics

top topics



 
3
<< 1    3 >>

log in

join