
Robotic age poses ethical dilemma


posted on Mar, 7 2007 @ 10:37 AM

Robotic age poses ethical dilemma


news.bbc.co.uk

An ethical code to prevent humans abusing robots, and vice versa, is being drawn up by South Korea.

The Robot Ethics Charter will cover standards for users and manufacturers and will be released later in 2007.

It is being put together by a five-member team of experts that includes futurists and a science fiction writer.

"The government plans to set ethical guidelines concerning the roles and functions of robots as robots are expected to develop strong intelligence in the near future," the ministry of Commerce, Industry and Energy said.
(visit the link for the full news article)


Related News Links:
news.bbc.co.uk
www.ft.com
www.robotuprising.com
www.rfreitas.com

Related AboveTopSecret.com Discussion Threads:
Robots to Have Rights, Says UK Government
SCI/TECH: The New Military: Robots with Human DNA
SCI/TECH: With Japan aging, Toyota to staff factories with advanced robots



posted on Mar, 7 2007 @ 10:37 AM
Looks like the future is now. It reminds me of a few SF novels I have read by Isaac Asimov, and the first thing I thought of was his Three Laws of Robotics being implemented here.


Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    Later, Asimov added the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm;" the rest of the laws are modified sequentially to acknowledge this.
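The strict priority ordering in the Laws quoted above can be illustrated as code. Here is a minimal sketch (the `permitted` function and the action flags are my own invention, not from any source): each law is checked in order, so a lower law can be overridden by a higher one.

```python
# Toy sketch of Asimov's Three Laws as ordered veto checks.
# An action is a dict of boolean flags; laws are tested in priority
# order, so the First Law outranks the Second, and the Second the Third.

def permitted(action):
    # First Law: may not injure a human or, through inaction,
    # allow a human being to come to harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: must obey human orders, except where that would
    # conflict with the First Law (already vetoed above).
    if action.get("disobeys_order"):
        return False
    # Third Law: must protect its own existence, unless a higher law
    # requires otherwise (an order, or preventing harm to a human).
    if action.get("endangers_self") and not (
        action.get("ordered") or action.get("prevents_human_harm")
    ):
        return False
    return True
```

Note how the Zeroth Law would simply add one more check before the First.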

Again here is the problem of the ethical dilemma: robots are expected to routinely carry out surgery by 2018, and every South Korean household is expected to have a robot between 2015 and 2020. But what to do in case of a certain Robot Rebellion? Now we have all seen movies like The Terminator, The Matrix, I, Robot, Ghost in the Shell and A.I., which represent a certain possible future for mankind if the robots, or Machines, get too much power, or if artificial intelligence starts to rebel against its creator.

But it also raises another question, that of the so-called Replicants seen in the movie Blade Runner, where the border between artificial and natural life is brushed away and there is the possibility of a human falling in love with an android, and vice versa.




[edit on 7/3/07 by Souljah]



posted on Mar, 7 2007 @ 10:56 AM
So, this poses an interesting question: How will we make the robots behave? They can only do what we program them to do, no?



posted on Mar, 7 2007 @ 11:00 AM
I want to bring up another issue regarding robots: the issue of intelligence. How could they ever be as "intelligent" as a human?

Maybe this question belongs in another forum.



posted on Mar, 7 2007 @ 11:01 AM
If we make them, without a doubt some will be good and some will be evil.

It makes sense. They will start to think for themselves in due time.



posted on Mar, 7 2007 @ 11:06 AM
The robots today are dumb.

They only perform what Humans tell them to.

The problem arises with the creation of artificial intelligence. I know plenty of scientists have been working on that for a long time, and they all say that, at the current pace of technological development, the question is not IF but WHEN. Which only makes things more complicated, considering that robots will sooner or later "emancipate" themselves. We are today only a few steps away from airplanes being controlled by computers, without any pilot. Now imagine how much military technology is controlled by computers. Now imagine a certain supercomputer able to think for itself and starting to manipulate every other computer on board airplanes, submarines, ships, tanks, missile silos... and you have a Hollywood movie happening in front of our eyes.

And I agree with dgtempe:

If we make them, without a doubt some will be good and some will be evil.

[edit on 7/3/07 by Souljah]



posted on Mar, 7 2007 @ 11:18 AM
I was working in high tech with AI scientists and engineers back in the mid-'80s. I made it a point to ask them to explain what they meant by Artificial Intelligence. I never got a satisfactory answer, but I chalk that up to my own deficiency. Can anybody explain AI?



posted on Mar, 7 2007 @ 11:53 AM
I disagree that they will be "good" or "evil". They may seem that way to us, but if they do acquire intelligence then in all likelihood it will be a pure-logic intelligence devoid of emotion, and good/evil are emotional/moralistic judgments.

If they start killing people it won't be because they are evil, but because they deduced it was the logical thing to do in their situation.

Or I just need more sleep...

Oh, but the simple solution is: don't arm AIs or allow them access to networks. Build in failsafes that are electronically isolated from anything the AI could control. Don't let AIs control military weaponry under any circumstances.
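The isolated-failsafe idea can be sketched as a supervisor that sits between the AI and the actuators: the AI only proposes commands, and the supervisor, which the AI cannot modify, filters them against a fixed deny list. All names here are hypothetical:

```python
# Hypothetical failsafe sketch: the AI proposes commands, and an
# isolated supervisor vetoes anything on a fixed deny list before
# the commands ever reach hardware.

FORBIDDEN = {"arm_weapon", "open_network_link"}

class Supervisor:
    """Runs outside the AI's control; the AI cannot modify FORBIDDEN."""

    def filter(self, proposed):
        # Only commands not on the deny list pass through.
        return [cmd for cmd in proposed if cmd not in FORBIDDEN]

supervisor = Supervisor()
safe = supervisor.filter(["move_arm", "arm_weapon", "report_status"])
```

Of course, real isolation would have to be physical (separate hardware), which no software sketch can demonstrate; the point is only the veto layer between proposal and actuation.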

[edit on 7-3-2007 by Damocles]



posted on Mar, 7 2007 @ 12:02 PM

Originally posted by Damocles
I disagree that they will be "good" or "evil". They may seem that way to us, but if they do acquire intelligence then in all likelihood it will be a pure-logic intelligence devoid of emotion, and good/evil are emotional/moralistic judgments.

If they start killing people it won't be because they are evil, but because they deduced it was the logical thing to do in their situation.

Or I just need more sleep...

Maybe you need more sleep, but you are spot on. The thing that will always separate us from machines is emotion. How could that ever be programmed in? That is my question.



posted on Mar, 7 2007 @ 12:09 PM


Maybe you need more sleep, but you are spot on. The thing that will always separate us from machines is emotion. How could that ever be programmed in? That is my question.

How could we ever go to the moon? Or fly? Those were the questions only 200 years ago. I think it's possible to build such a robot with nanotechnology and a biological/mechanical mix.

After all, a human is just a bunch of cells, just like a robot is a bunch of parts. If you can manipulate each cell, you can create a robot with human skin, a kind of hybrid, and that could lead to emotions.

But do we need robots when we have enough humans?

Of course we could use them for dangerous things, or for work, but as technology advances they will become our equals... and then you lose the advantage of it being a robot with no conscience, no emotions, etc., a "machine" or a "bunch of non-thinking metal". Anyway, I'm talking about stuff 150 years from now.

Also, robots could be programmed to oppress humans... imagine Hitler with an army of robots. Or the globalist fascists?

[edit on 7-3-2007 by Vitchilo]



posted on Mar, 7 2007 @ 12:18 PM
there is a flip side to this also and i havnt seen anyone comment on it.

lets pretend we do create ai robots. lets pretend they are as intelligent as you or i. even smarter.

at what point do we lose the right to tell them what to do? at what point can we no longer justify having them do hazardous duty because they are 'just a machine'?

at what point do they become slaves?



posted on Mar, 7 2007 @ 12:19 PM

Originally posted by Vitchilo
How could we ever go to the moon? Or fly? Those were the questions only 200 years ago. I think it's possible to build such a robot with nanotechnology and a biological/mechanical mix.

Going to the moon was merely a technological advancement.

But you bring up the term "nano-technology". I admit that I have heard the term, but am totally unfamiliar with what it means. I would like to learn more about it.

I just cannot see how emotions can be programmed without living cells.


You are probably more informed than I am on the topic, so if you have any pointers to help me understand, I would appreciate them.



posted on Mar, 7 2007 @ 12:21 PM
Firstly - what is Intelligence?


Intelligence

Intelligence is a property of mind that encompasses many related mental abilities, such as the capacities to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn. Although intelligence is sometimes viewed quite broadly, psychologists typically regard the trait as distinct from creativity, personality, character, or wisdom.

Imagine giving all of that to Machines?

OK, I think they are already able to solve problems.

But to Think Abstractly?

Give them Character?

Creativity?


Artificial intelligence

The term Artificial Intelligence was first used by John McCarthy who considers it to mean "the science and engineering of making intelligent machines". It can also refer to intelligence as exhibited by an artificial (man-made, non-natural, manufactured) entity. The terms strong and weak AI can be used to narrow the definition for classifying such systems. AI is studied in overlapping fields of computer science, psychology and engineering, dealing with intelligent behavior, learning and adaptation in machines, generally assumed to be computers.

Basically, AI means that we would give machines abilities usually reserved for humans only (and some smarter animal species): they would think, act, plan and learn like a human. But the problem is the so-called artificial consciousness, which arguably cannot be written as a program of pure logic, ones and zeros. But that is a complex question for philosophy rather than science.

But Robot Emotion is actually not so hard to do.


Artificial Emotional Creature Project

We have been building pet robots as examples of artificial emotional creatures since 1995. The pet robots have physical bodies and behave actively while generating motivations by themselves. They interact with human beings physically. When we engage physically with a pet robot, it stimulates our affection. Then we have positive emotions such as happiness and love or negative emotions such as anger, sadness and fear. Through physical interaction, we develop attachment to the pet robot while evaluating it as intelligent or stupid by our subjective measures.

Pet Robots with Emotion in 1995?


Designing A Robot That Can Sense Human Emotion

"We are not trying to give a robot emotions. We are trying to make robots that are sensitive to our emotions," says Smith, associate professor of psychology and human development.

Their vision, which is to create a kind of robot Friday, a personal assistant who can accurately sense the moods of its human bosses and respond appropriately, is described in the article, "Online Stress Detection using Psychophysiological Signals for Implicit Human-Robot Cooperation." The article, which appears in the Dec. issue of the journal Robotica, also reports the initial steps that they have taken to make their vision a reality.

Robots that can be sensitive to our emotions and can detect them?
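The stress-detection idea in the quoted article can be illustrated with a deliberately simplified sketch. The signal choice (heart rate) and the 20% threshold below are my own toy assumptions, not the method from the Robotica paper:

```python
# Toy stress detector: flag stress when the average of recent
# heart-rate samples exceeds the operator's resting baseline by 20%.
# Real systems fuse several psychophysiological signals, not one.

def stressed(samples, baseline, threshold=1.2):
    avg = sum(samples) / len(samples)
    return avg > baseline * threshold

elevated = stressed([95, 100, 105], baseline=70)  # well above a 70 bpm baseline
```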


Emotion robots learn from people

Making robots that interact with people emotionally is the goal of a European project led by British scientists.

Co-ordinator Dr Lola Canamero said the aim was to build robots that "learn from humans and respond in a socially and emotionally appropriate manner".

"The human emotional world is very complex but we respond to simple cues, things we don't notice or we don't pay attention to, such as how someone moves," said Dr Canamero, who is based at the University of Hertfordshire.

Robots that can LEARN emotions from People?



posted on Mar, 7 2007 @ 12:57 PM

"We are not trying to give a robot emotions. We are trying to make robots that are sensitive to our emotions," says Smith, associate professor of psychology and human development.

Ahh, now this makes sense. I can visualize it as being technologically possible, although I do not pretend to understand the inner workings of it.

But those two sentences clarify much to me, in terms of what AI scientists are trying to achieve.



posted on Mar, 7 2007 @ 01:17 PM
Of course it does. Then there is the question of how much is too much. Will the devices be used for bad? Will machines take over the world?

"The Matrix", "The Terminator", "2010", "Bladerunner", etc.



posted on Mar, 7 2007 @ 01:39 PM

Originally posted by jsobecky
I was working in hi-tech with AI scientists and engineers back in the mid 80's. I made it a point to ask them to explain what they meant by Artificial Intelligence. I never got a satisfactory answer, but I chalk that up to my own deficiency. Can anybody explain AI?


I'm not exactly an engineer or anything, but I would have thought that all computer robotics would have to run on some sort of programs or algorithms input by humans. A robot could potentially learn, but only from what it has available to it; it will never have the artistic creativity of humans.

For a computer program to be evil it would still have to be working on algorithms written by humans, even if they were "RANDOM: SHOULD I KILL SOMEONE OR NOT?" kinds of things. But that wouldn't be the robot being evil; it would just be carrying out instructions from its program.
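This point can be made concrete: even a "random" decision is just the program executing human-written instructions. A small sketch (function name is my own) showing that a seeded pseudo-random "choice" is fully reproducible, i.e. determined by the program and its input:

```python
import random

# A "random" decision is still just instructions being executed:
# a seeded pseudo-random generator produces the same sequence every
# time, so the "choice" is fixed by the program plus its input.

def decide(seed):
    rng = random.Random(seed)            # deterministic generator
    return "act" if rng.random() < 0.5 else "wait"

# The same seed always yields the same "decision"; nothing here chooses.
```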



posted on Mar, 7 2007 @ 01:55 PM
Overall, I personally think that we should worry more about abusing people and animals first, and worry about abusing our robots later, when it becomes an actual issue.

I'm sure the people currently being slaughtered in Darfur are not comforted by our hypothetical concerns for sentient machines.



posted on Mar, 7 2007 @ 02:00 PM

Originally posted by malganis

Originally posted by jsobecky
I was working in hi-tech with AI scientists and engineers back in the mid 80's. I made it a point to ask them to explain what they meant by Artificial Intelligence. I never got a satisfactory answer, but I chalk that up to my own deficiency. Can anybody explain AI?


I'm not exactly an engineer or anything, but I would have thought that all computer robotics would have to run on some sort of programs or algorithms input by humans. A robot could potentially learn, but only from what it has available to it; it will never have the artistic creativity of humans.

This was my point. How could they learn any emotions unless they were made of some living tissue?



posted on Mar, 7 2007 @ 02:04 PM

Originally posted by SuicideVirus
Overall, I personally think that we should worry more about abusing people and animals first, and worry about abusing our robots later, when it becomes an actual issue.

I'm sure the people currently being slaughtered in Darfur are not comforted by our hypothetical concerns for sentient machines.

Not to sound insensitive or anything, but those are worries for the segment of our population that are "people persons". Not everyone has the same interests/abilities.



posted on Mar, 7 2007 @ 02:55 PM
Check out the technological singularity.

en.wikipedia.org...

If AI reaches equivalence to human intelligence, it will soon become capable of improving its own intelligence with increasing effectiveness, far surpassing human intellect. That's kind of scary...
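The runaway loop described here can be shown with a toy growth model (all numbers and names are illustrative, not from any source): if each generation's self-improvement rate is proportional to current intelligence, growth is faster than exponential and blows past any fixed target in a handful of steps.

```python
# Toy model of recursive self-improvement: each generation the system
# improves itself at a rate proportional to its current "intelligence",
# so smarter versions improve faster, and growth runs away.

def generations_to_surpass(start, target, rate=0.1):
    level, gens = start, 0
    while level < target:
        level *= 1 + rate * level     # improvement scales with ability
        gens += 1
    return gens

# From level 1.0 to 100x takes only a handful of generations,
# and a higher improvement rate takes even fewer.
```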






