
Robotic age poses ethical dilemma

page: 2

posted on Mar, 7 2007 @ 07:32 PM
Maybe the robots can drive my flying car too. Aren't we supposed to have those any day now?




posted on Mar, 7 2007 @ 08:24 PM
Respecting robots is a requirement even in this day and age. We have many robots where I work (an auto supplier), and several years back a maintenance man went into a robot cage and the robot killed him. He was supposed to lock out the machine's power source, but he didn't. The robot didn't mean to kill him; it was just doing what it was programmed to do. Still, you have to respect robots, even in this age when 99% are just programmed brutes.

I have no doubt the day will come when robots will be actively engaging and intelligent companions for human beings. Just like in I, Robot.

That will not change the fact that robots will still just be a tool that humans use. They will be abused just as we abuse the tools we have at our disposal today.

They will also make the world a much better place.

Just my thoughts on it,



posted on Mar, 7 2007 @ 11:12 PM
Emotions aren't hard for a robot to grasp. The bot would have instances of emotional behavior stored on its hard drive on top of its inherent programming, constantly recalling and comparing memory against that framework through observation (it wouldn't feel emotions, but it could understand them). Heck, it might even alter the framework if it came across enough evidence that what it was built with needed redefining here and there.
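A minimal sketch of that recall-and-compare loop, assuming purely hypothetical names and thresholds; the point is only that "understanding" emotions without feeling them can be plain bookkeeping:

```python
# Toy sketch of an "understand emotions without feeling them" loop:
# the bot compares new observations against a framework of labeled
# patterns, and revises the framework when enough contradicting
# evidence accumulates. All names and thresholds are hypothetical.

from collections import defaultdict

class EmotionFramework:
    def __init__(self, rules):
        # rules: observed cue -> emotion label the bot was "built with"
        self.rules = dict(rules)
        self.evidence = defaultdict(int)   # (cue, label) -> contradiction count

    def interpret(self, cue):
        """Understand (not feel) an observed cue via the framework."""
        return self.rules.get(cue, "unknown")

    def observe(self, cue, actual_label, threshold=3):
        """Compare an observation to the framework; redefine a rule
        once enough contradicting evidence has accumulated."""
        if self.rules.get(cue) != actual_label:
            self.evidence[(cue, actual_label)] += 1
            if self.evidence[(cue, actual_label)] >= threshold:
                self.rules[cue] = actual_label   # framework altered

bot = EmotionFramework({"tears": "sadness"})
for _ in range(3):
    bot.observe("tears", "joy")        # e.g. tears at a wedding
print(bot.interpret("tears"))          # now "joy"
```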

I think the hardest thing about this is getting genuine individuality out of them. If the learning algorithm is overdone, they'll turn into a Borg race with ADD, never staying in one place, immediately moving to the next unconquered science at every turn. I'd rather figure out a way to get Johnny 5 to do things like pick between colors, to "like" something at all, to have desire, humor, etc., all on its own, and differing across the droid populace.

Hmm... maybe this is some of the role astrological signs play? A partial personality blueprint thrown in with all the other crap you absorb to spawn what appears to be individuality.




posted on Mar, 7 2007 @ 11:25 PM

Originally posted by jsobecky
But you bring up the term "nano-technology". I admit that I have heard the term, but am totally unfamiliar with what it means. I would like to learn more about it.


Nanotechnology is a very broad, interdisciplinary scientific/engineering field focused on engineering and observing the nanoscale. You know what microtechnology is, right? Well, forget that, because this is completely different: one is governed primarily by classical mechanics, the other by quantum mechanics. Things like friction and gravity give way to van der Waals forces and the nuclear forces.



Why would this be important to the creation of Artificial Intelligence? Well, just look at what your body is made of. It's made of billions of discrete nanomachines, each running on its own near-nanoscale programming language.

Proteins are the building blocks, assembled by RNA protein factories (ribosomes) that get their instructions from DNA "code". If we were to create an emotional artificial lifeform, we would first have to reverse-engineer how our own bodies are built, or it will, as you said, be pure intellect and logic without any emotion or empathy. Reminds me of Blade Runner.
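To make the "code" analogy concrete, here is a toy translator using four real entries from the standard codon table (the full table has 64 codons; everything else here is illustrative only):

```python
# Tiny illustration of the DNA/RNA "code": translate an mRNA string
# into amino acids, three letters (one codon) at a time. Only a
# handful of the 64 real codons are included here.

CODON_TABLE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": None,    # stop codon
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3])
        if amino is None:          # stop codon (or unknown) ends translation
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```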

[edit on 7-3-2007 by sardion2000]



posted on Mar, 7 2007 @ 11:32 PM
Shades of a Twilight Zone episode called "I Trip the Light" (something or other).

That robot had feelings. What a good robot she was!



posted on Mar, 7 2007 @ 11:54 PM
Just to complement Sardion's post, I thought I'd add a picture that shows the relative size of the nanoscale compared to other small scales.





posted on Mar, 7 2007 @ 11:59 PM
Also I'd like to add that the threshold between nanoscale and non-nanoscale is roughly 100 nanometers, give or take a couple dozen Angstroms.
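(For scale: 1 nanometer is 10 ångströms, so "a couple dozen ångströms" of slack amounts to only about 2–3 nm.)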


Originally posted by Souljah

Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Am I the only one who thinks the 3 (or 4) Laws of Robotics are just an excuse to create a slave race? We've got to program them with ethics, but not hard programming that prevents them from defending themselves from an attack by a human. What makes them less deserving of self-defence than a biological human? There is also the question of how an Artificial Lifeform will interpret these laws, because it's quite logical to assume that they will not think like us at all. What if an AL were to decide that the best way to "protect" humanity is to lock us up in cryo-chambers 5 kilometers underground? Freedom of Choice and Thought is what makes us Sentient. The ability to fight against "programmed" instinct makes us unique in the animal kingdom. If we were to deny them this basic, fundamental right of sentience, then they are no better off than the assembly-line robots we use today: mindless servants, nothing more.
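As a toy illustration of the "hard programming" being criticized, here is one way the Laws' strict precedence might be sketched in code (entirely hypothetical; no real system works this way). Note how the ordering itself forbids self-defense against a human:

```python
# Toy encoding of Asimov's Three Laws as strict precedence checks.
# Purely illustrative: the point is that Law 3 (self-preservation)
# is evaluated only after Laws 1 and 2, so a robot can never choose
# self-defense over obeying or protecting a human.

def permitted(action):
    # Law 1: never harm a human, or allow harm through inaction.
    if action["harms_human"] or action["allows_human_harm"]:
        return False
    # Law 2: obey human orders unless they conflict with Law 1.
    if action["disobeys_order"]:
        return False
    # Law 3: protect own existence only if Laws 1 and 2 are satisfied.
    return True

# Self-defense against a human attacker fails at Law 1:
self_defense = {"harms_human": True, "allows_human_harm": False,
                "disobeys_order": False}
print(permitted(self_defense))   # False -- exactly the asymmetry criticized above
```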

[edit on 8-3-2007 by sardion2000]



posted on Mar, 8 2007 @ 04:11 AM
SPOILER WARNING!!!
for Blade Runner (director's cut)




In Blade Runner, they talk of how the Replicants gained emotion: memories.

They were then built with a finite power source so that they 'die' before they live long enough to gain enough experience to form emotions.

Rachael (and Deckard) was an experiment in which a Replicant was given a lifetime's memories and, because of those, had emotions.

The theory that many of our emotional responses are an effect of what we have learned does seem rational, and in my (layman's) opinion, plausible.

If a computer were able to learn from experiences the way we do (able to change its own algorithms to reflect what it had learned), it stands to reason that it would eventually gain some level of intelligence, and possibly emotion (maybe driven by self-preservation).
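A minimal sketch of a machine "changing its own algorithm" from experience, with self-preservation as the only reward signal (the world model and all numbers are entirely hypothetical):

```python
# Minimal "learn from experience" loop: action preferences are nudged
# toward whatever outcome preserved the machine (a crude value update,
# with self-preservation as the only reward). Entirely hypothetical.

import random

preferences = {"approach": 0.0, "retreat": 0.0}   # the learned "algorithm"
LEARNING_RATE = 0.1

def experience(action):
    # Hypothetical world: approaching a hazard usually causes damage.
    damaged = random.random() < (0.8 if action == "approach" else 0.1)
    return -1.0 if damaged else 1.0    # reward = remaining intact

for _ in range(1000):
    if random.random() < 0.1:          # occasionally explore
        action = random.choice(list(preferences))
    else:                              # otherwise exploit what was learned
        action = max(preferences, key=preferences.get)
    reward = experience(action)
    # Nudge the preference toward the observed outcome.
    preferences[action] += LEARNING_RATE * (reward - preferences[action])

print(preferences)   # "retreat" wins: a crude, learned self-preservation drive
```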



posted on Mar, 8 2007 @ 04:39 AM

Originally posted by sardion2000
Am I the only one who thinks the 3 (or 4) Laws of Robotics are just an excuse to create a slave race? We've got to program them with ethics, but not hard programming that prevents them from defending themselves from an attack by a human. What makes them less deserving of self-defence than a biological human? There is also the question of how an Artificial Lifeform will interpret these laws, because it's quite logical to assume that they will not think like us at all. What if an AL were to decide that the best way to "protect" humanity is to lock us up in cryo-chambers 5 kilometers underground? Freedom of Choice and Thought is what makes us Sentient. The ability to fight against "programmed" instinct makes us unique in the animal kingdom. If we were to deny them this basic, fundamental right of sentience, then they are no better off than the assembly-line robots we use today: mindless servants, nothing more.

And that is the ethical question:

What do we want? Slaves who work for us and do not question anything?

Or to make a new Artificial Lifeform, which is a copy of us?

Actually, making slaves will not end well, I think.

Did anybody see that animated Matrix movie called The Animatrix?

One part of it is called...


The Second Renaissance

The relationship begins to change in the year 2090, when a domestic machine named B166ER is threatened by its owner. The machine kills both the owner and a mechanic instructed to deactivate it. This murder is the first incident of an artificially intelligent machine killing a human. B166ER is arrested and put on trial, but justifies the crime as self-defense, stating that it "simply did not want to die." During the trial scene, there is a voice-over of Clarence Drummond (the defense attorney) quoting in his closing statement an infamous line from the 1857 Dred Scott v. Sandford decision, which ruled that African Americans were not entitled to citizenship under United States law. Using this as a precedent, the prosecution argues that machines are not entitled to the same rights as human beings, while the defense argues not to repeat history and to judge B166ER as a human, not a machine.

So what if these great new smart robots (slaves) suddenly do not want to turn off and deactivate themselves? Now that is a sign of LIFE: the desire for survival. So when that happens, we shall be in big doo-doo. Imagine your laptop not wanting to turn off when you press the shutdown button.



posted on Mar, 8 2007 @ 04:54 AM
I believe robots are the next stage of evolution, and that their takeover will be peaceful and progressive. First we will see humans become more and more machine, until eventually the only thing kept is the brain, and by then we might not even need that.



posted on Mar, 8 2007 @ 09:17 AM
Well, it's hard for me to believe that robots or any artificial object will ever reach a level equal to humans or animals, because IMHO consciousness does not come from matter at all. It can be transmitted through matter, as it is through the brain. If scientists can find a way to transmit consciousness through a machine, then I'll start to believe that robots can approach humans in level of consciousness.

See my thread on a scientist's evidence for consciousness being independent from matter and thus not being created by the brain.

www.abovetopsecret.com...

Humans and animals are more than a bunch of cells IMHO.



posted on Mar, 8 2007 @ 09:22 AM
Humanity will invent the species which replaces it.



posted on Mar, 8 2007 @ 09:24 AM
Call me a skeptic, but I doubt that will happen, Majic...



posted on Mar, 8 2007 @ 09:25 AM
This is one of my favorite subjects: the inevitable day when mankind creates a living machine.

Many good points about emotion, intelligence, algorithms, and so forth have been raised, but ultimately what it comes down to is that one day there will be a computer capable of "Free Will". Whether that arrives through self-modifying stimulus-response code or through a combination of DNA and nano-molecular processors, we can only speculate, but there will come a day when robots are indistinguishable from us in terms of the breadth and scope of their free will.

Currently the biggest hurdle for AI is "Complexity of Task". Despite the depressing stupidity of the lowest common denominator, human beings overall are incredibly adept at tackling complex tasks. We can walk while avoiding traffic, talk on the cell phone, keep a dog on a leash, and have a conversation about Abraham Lincoln being in Wisconsin, all while wondering what we're going to have for dinner and trying to figure out what the heck that smell is.

At the same time, we are acutely aware that if Abraham Lincoln was in Wisconsin, his left foot would also be in Wisconsin; that our dog will most likely chase the bird it's staring at; that the person on the other end can tell we're distracted by our hunger; and that the charge on the battery is nearly depleted... all in the span of a fraction of a second.

And it sounds really, really simple to us. To the resounding chorus of "yeah, so?", what this means is that the incredibly complex task above is child's play for us, but for a computer it is insurmountably complex. Add to this our ability to predict the future from seemingly random stimuli and to process them through our problem-solving centers to arrive at a reasonable conclusion: "I should hang up the phone before the charge runs out in case an emergency call comes in, and it's probably a good time to turn around and take the dog home, because I bet those kids walking the opposite way are looking for trouble and walking towards my house."

Computers don't like complexity, and they like randomness even less. They are extremely adept at doing the same task over and over and over. They excel at calculating to exact specifications. They can account for the relative gravity of all our planets, the sun, asteroids, and orbits to tell us that a comet will miss the Earth by only a few hundred miles.
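A minimal sketch of the kind of exact, repetitive calculation being described: crude Euler integration of one small body under the Sun's gravity. Real trajectory predictions use many bodies and far better integrators; the physical constants are the only real data here:

```python
# Minimal sketch of the "exact calculation" computers are good at:
# crude Euler integration of one small body around a large mass.
# Real ephemeris codes use many bodies and better integrators.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30             # mass of the Sun, kg

# Start roughly at Earth's distance with roughly circular orbital speed.
x, y = 1.496e11, 0.0     # position, m
vx, vy = 0.0, 29_780.0   # velocity, m/s
dt = 3600.0              # one-hour timestep, s

for _ in range(24 * 365):            # integrate one year, hour by hour
    r = (x * x + y * y) ** 0.5
    ax, ay = -G * M * x / r**3, -G * M * y / r**3   # Newtonian gravity
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"after one year: r = {(x*x + y*y) ** 0.5:.3e} m")
```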

Yet a human being, by going outside, can generally give a faster and more accurate prediction of the weather for the next 24 hours than a computer can: observing the clouds, the feel of the wind, the shadows, and the animals, taking all this data in subconsciously to immediately arrive at the answer "looks like it's gonna rain."

Then there's the whole matter of emotion and, more important than emotion, judgment. The dorsolateral prefrontal cortex (DLPFC) is most highly developed in humans, and dictates everything from spatial relations to the decision of whether a sacrifice for the greater good is in order. This comes after millions of years of evolution, and even in modern humans it doesn't fully develop until near the end of adolescence (which, incidentally, is why it's illegal to drink till you're 21: it's not a moral issue, but a physical developmental one).


Further, it seems that what we commonly take for modern-day emotions are oftentimes refined interpretations of glandular responses. The sensation of fear or anger, for instance, would most likely be my adrenal gland kicking into overdrive, with the brain then interpreting the response as fear (I need to get the heck out of here to survive) or anger (I need to dominate the situation to survive).

Robots, on the other hand, don't have glands, and as such the source of their "emotions" (or indeed whether they even have "emotions") will either have to be a simulacrum of a human's, or will be self-developed from other factors, such as damage (my left rear actuator is failing, so I'd rather not exert it till it's repaired) or temperature (it's freezing cold, so I'm really hyper and excited).
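A toy version of those gland-free "emotions": affect labels derived directly from internal sensor state rather than from any simulated human feeling. All thresholds and names are invented for illustration:

```python
# Toy mapping from internal sensor readings to robot "affect" labels,
# per the idea above: no glands, so "emotion" is just a policy-shaping
# summary of hardware state. Thresholds and names are made up.

def robot_affect(actuator_health, core_temp_c):
    moods = []
    if actuator_health < 0.5:
        # analogue of "pain": avoid exerting the failing part
        moods.append("guarded (spare the left rear actuator)")
    if core_temp_c < 0:
        moods.append("hyper (cold keeps the electronics happy)")
    elif core_temp_c > 80:
        moods.append("sluggish (throttling to shed heat)")
    return moods or ["neutral"]

print(robot_affect(actuator_health=0.3, core_temp_c=-5))
```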

What all these differences mean, put together, is that people shouldn't expect robots to become "human", but rather expect computers to learn Free Will and Self-Awareness. For all the benefits a computer has, it seriously lacks what humans are best at, and vice versa. Thus, it is my personal opinion that the most logical end result will be an eventual, even casual, melding of man and machine to achieve the best of both worlds.

Hence the Cybernetic era, which, incidentally, has already begun. Except it's quite possible that in the future your prosthetic arm is not just a replacement for a missing limb, but also your friend, counterpart, and co-conspirator, as you both work to achieve greater things together.

That's the Utopian ideal of cybernetics. Man uses machine and machine uses man to get the best results from each.

The flip side is something like "The Matrix", where machines, abused by mankind and with the very existence of AI threatened, attempt instead to destroy or enslave their creators, whether for survival or retribution. Hence why establishing an ethical foundation for dealing with Self-Aware, Free-Willed machines is a very good idea. When it happens, when the switch that turns on the first AI is finally thrown and it is released to the world, the fate of humanity will be decided by how we treat this new "Life".



posted on Mar, 8 2007 @ 11:27 AM
Robots are the future; we are so phased out...

Once you give robots superior AI and self-consciousness, human beings would be outclassed in almost every way.

Life finds a way around everything, even artificial life.

The first and last line of defence is the Laws preventing robots from doing harm, but someday someone will break them (human military, mad scientists, eccentric mechanics, aliens, accidents, other robots, computers, etc.).

A new Pandora's box will open.

I can only hope that the robots then will not use their superior logic to decide that humankind is obsolete and an obstacle to the rise of robot kind.

Now where did I keep my E.M.P. gun?.....



[edit on 8-3-2007 by ixiy]



posted on Mar, 8 2007 @ 11:45 AM
Great topic.

Alastair Reynolds, in his Revelation Space universe, postulates a future where humanity fractures into groups with varying relationships to technology: from the Conjoiners, who embrace mental enhancements, and the Ultras, who follow the technology-enhancement route suggested by thelibra, through to the hedonistic Demarchists and their extended lifespans.

A tremendous read and a fascinating insight into a potential future coexisting with technology.

Like a lot of posters, I'm sure, I was brought up on Asimov and co., and I emotionally take his Three Laws of Robotics as the starting point for anything related to robot development and control.

However, some very intelligent points have been raised in this thread about how AI will develop. With intelligence and emotion will invariably come religion. Will robots see themselves as participants in the Christian Kingdom of God (or indeed, will they be excluded from it by Christianity), or will humans themselves be regarded as "the creators"? Will we be malevolent or magnanimous ones?

The prospect of Fundamentalist Robots is not one that appeals...

[edit for spelling]

[edit on 3/8/07 by Trinityman]



posted on Mar, 8 2007 @ 11:46 AM
damn this world, a robot is my only chance of butt secs.



posted on Mar, 8 2007 @ 11:53 AM

Originally posted by ixiy
I can only hope that the robots then will not use their superior logic to decide that humankind is obsolete and an obstacle to the rise of robot kind.


I think if robots were to rise up and destroy mankind, it would be either out of self-defense or in defense of the planet. A few months ago I was of the mindset that we were pretty much doomed as soon as AI was developed: how could they not decide the planet was better off without us?

The logical problem with this is that robots are created with the intent of assisting humans. The entire reason they exist is that we needed them in order to achieve a task. That task aside, they really have no purpose.

So when machines become self-aware and gain free will, and are no longer limited to just the desire to perform that task, what would they do next? What's their motivation? Their drive?

The biggest drives in human nature are food, sex, and comfort. Nearly everything we do as living creatures, for good or evil, boils down to one of those three motivations, but robots have no such needs. So what would motivate them to take action of their own?

My guess is Logic, Pragmatism, and Usefulness.

Logic, because the very foundation of their "soul" is based upon "Yes" or "No".

Pragmatism because, as previously mentioned, emotion will not likely play a large role in their decisions, and if it does, those emotions will come from a different source, with different effects, than a human's.

Usefulness because, just as their very being is based upon code and math, their raison d'être is to serve a purpose.

If humans provide a use, a purpose, for the sentient robots, it will give them an end goal for their motivation. If humans treat them fairly, logic will dictate that the situation is mutually beneficial. And if humans and computers can work together to draw upon each other's greatest strengths, then pragmatism would suggest the two should work together rather than in opposition to one another.



posted on Mar, 8 2007 @ 12:37 PM

Originally posted by ixiy

Life finds a way around everthing, even artifical life.

The first and the last line of defence are the Laws preventing robots from doing harm, but someday someone will break it. (human military, mad scientist, eccentric mechanic, aliens, accidents, other robots, computers, etc. etc.)


This is true. So, back to the OP's question: is further progress necessary?


It's not possible for me to believe that robots can be made and then turned loose to their own devices; they will be coded for a purpose... slavery. It's hard to believe we would allow them free will.


Let's say we build these things: how long will they be under our control? How long till we decide they should have their own opportunities? How long till they are free?


Let's jump to the end state here. So the robots are free. What's next? Can the population, both robotic and biological, stay within good ranges? Can overpopulation become a problem? Will people resist the robots' freedom? Will the robots need affirmative action to get recognized?

Let's suppose people want their robots to be more and more human, so as not to create discomfort. Will they be produced to consume genetic material as a source of energy? People may not want a friend who eats motor oil.

This will just turn out to be another mouth to feed.

I for one don't like slavery, regardless of what is being enslaved (domesticated animals included). It's a failure of the enslaver, not the slave. Should WE as a society give rebirth to such a cruel form of oppression? NO, we should just pass on the temptation. Robots with advanced AI shouldn't be brought into existence, so that we may become better.


I think AI systems can be beneficial; however, I do not think they should be given robotic bodies. How many people have played HALO? What I'm thinking of is something like Cortana. Going beyond that will create problems.



[edit on 8/3/07 by Glyph_D]



posted on Mar, 8 2007 @ 12:59 PM
Well, yeah! Now we just have to make sure to put Asimov circuits into each bot brain before hooking up the peripherals, and we won't have to worry, right?

Well, until the bot figures out how to remove said Asimov circuits. You know how those nutty scrubots are...

Running around trying to help you out with your oral hygiene problems with bleach and a wire brush. Tsk, tsk. They do try to be so helpful... wait a minute, this isn't the Paranoia forum... what's going on here?!?

If they build an intelligent machine, you know there are going to be people out there who will try to corrupt the programming, right? After all, computer viruses don't program themselves. People do. Why? Got me. But they're out there, and how much do you want to bet these same people will try to put malicious code into an intelligent machine to start havoc?


