An ethical code to prevent humans abusing robots, and vice versa, is being drawn up by South Korea.
The Robot Ethics Charter will cover standards for users and manufacturers and will be released later in 2007.
It is being put together by a five member team of experts that includes futurists and a science fiction writer.
"The government plans to set ethical guidelines concerning the roles and functions of robots as robots are expected to develop strong intelligence in the near future," the Ministry of Commerce, Industry and Energy said.
Three Laws of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later, Asimov added the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm;" the rest of the laws are modified sequentially to acknowledge this.
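Asimov's laws are not four independent rules but a strict priority ordering: each law yields to the ones above it. A small Python sketch can make that structure concrete (the boolean action flags and function names here are invented for illustration, not any real robotics API):

```python
# Illustrative sketch only: Asimov's laws modelled as a strict priority
# ordering over rule violations. The action representation (a dict of
# boolean flags) is a hypothetical choice made for this example.

def violates_zeroth(action):
    return action.get("harms_humanity", False)

def violates_first(action):
    return action.get("harms_human", False) or action.get("allows_human_harm", False)

def violates_second(action):
    return action.get("disobeys_order", False)

def violates_third(action):
    return action.get("endangers_self", False)

# Priority order: Zeroth overrides First, which overrides Second, and so on.
LAWS = [violates_zeroth, violates_first, violates_second, violates_third]

def first_violation(action):
    """Index of the highest-priority law the action violates
    (0 = Zeroth), or len(LAWS) if it violates none."""
    for i, law in enumerate(LAWS):
        if law(action):
            return i
    return len(LAWS)

def choose(actions):
    """Pick the action whose worst violation is least severe: risking the
    robot itself (Third Law) beats letting a human come to harm (First)."""
    return max(actions, key=first_violation)
```

Given a choice between disobeying an order and allowing a human to come to harm, `choose` prefers the disobedient action, which mirrors the exception clause built into the Second Law.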
Originally posted by Damocles
I disagree that they will be 'good' or 'evil'. They may seem that way to us, but if they do acquire intelligence then in all likelihood it will be a pure logic intelligence devoid of emotion, and good/evil are emotional/moralistic views.
If they start killing people it won't be because they are evil, but because they deduced it was the logical thing to do in their situation.
Or I just need more sleep...
Maybe you need more sleep, but you are spot on. The thing that will always separate us from machines is emotion. My question is: how could that ever be programmed in?
Originally posted by Vitchilo
How could we ever go to the moon? Or fly? Those were the questions only 200 years ago. I think it's possible to build such a robot with nanotechnology and a biological/mechanical mix.
Intelligence
Intelligence is a property of mind that encompasses many related mental abilities, such as the capacities to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn. Although intelligence is sometimes viewed quite broadly, psychologists typically regard the trait as distinct from creativity, personality, character, or wisdom.
Artificial intelligence
The term Artificial Intelligence was first used by John McCarthy who considers it to mean "the science and engineering of making intelligent machines". It can also refer to intelligence as exhibited by an artificial (man-made, non-natural, manufactured) entity. The terms strong and weak AI can be used to narrow the definition for classifying such systems. AI is studied in overlapping fields of computer science, psychology and engineering, dealing with intelligent behavior, learning and adaptation in machines, generally assumed to be computers.
Artificial Emotional Creature Project
We have been building pet robots as examples of artificial emotional creatures since 1995. The pet robots have physical bodies and behave actively while generating motivations by themselves. They interact with human beings physically. When we engage physically with a pet robot, it stimulates our affection. Then we have positive emotions such as happiness and love or negative emotions such as anger, sadness and fear. Through physical interaction, we develop attachment to the pet robot while evaluating it as intelligent or stupid by our subjective measures.
Designing A Robot That Can Sense Human Emotion
"We are not trying to give a robot emotions. We are trying to make robots that are sensitive to our emotions," says Smith, associate professor of psychology and human development.
Their vision is to create a kind of robot Friday: a personal assistant that can accurately sense the moods of its human bosses and respond appropriately. It is described in the article "Online Stress Detection using Psychophysiological Signals for Implicit Human-Robot Cooperation," which appears in the December issue of the journal Robotica and also reports the initial steps they have taken to make that vision a reality.
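The article's title gives the flavour of the approach: infer whether a person is stressed from continuous physiological signals. As a purely illustrative sketch (the signal names, the baseline-calibration step, and the 2-standard-deviation threshold are all assumptions for the example, not the authors' actual method), stress detection can be framed as deviation from a relaxed-state baseline:

```python
import statistics

def calibrate(baseline):
    """Compute per-signal mean and standard deviation from a recording
    taken while the person is relaxed. `baseline` maps a signal name
    (e.g. heart rate, skin conductance) to a list of samples."""
    return {name: (statistics.mean(xs), statistics.stdev(xs))
            for name, xs in baseline.items()}

def stress_score(model, sample):
    """Mean absolute z-score across signals: how far the current
    sample deviates from the relaxed baseline."""
    zs = [abs(sample[name] - mu) / sigma
          for name, (mu, sigma) in model.items()]
    return sum(zs) / len(zs)

def is_stressed(model, sample, threshold=2.0):
    # A 2-standard-deviation threshold is an arbitrary illustrative choice.
    return stress_score(model, sample) > threshold
```

A robot assistant could run `is_stressed` on each new sensor reading and back off, or change tone, when the score crosses the threshold.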
Emotion robots learn from people
Making robots that interact with people emotionally is the goal of a European project led by British scientists.
Co-ordinator Dr Lola Canamero said the aim was to build robots that "learn from humans and respond in a socially and emotionally appropriate manner".
"The human emotional world is very complex but we respond to simple cues, things we don't notice or we don't pay attention to, such as how someone moves," said Dr Canamero, who is based at the University of Hertfordshire.
Originally posted by jsobecky
I was working in hi-tech with AI scientists and engineers back in the mid 80's. I made it a point to ask them to explain what they meant by Artificial Intelligence. I never got a satisfactory answer, but I chalk that up to my own deficiency. Can anybody explain AI?
Originally posted by malganis
Originally posted by jsobecky
I was working in hi-tech with AI scientists and engineers back in the mid 80's. I made it a point to ask them to explain what they meant by Artificial Intelligence. I never got a satisfactory answer, but I chalk that up to my own deficiency. Can anybody explain AI?
I'm not exactly an engineer or anything, but I would have thought that all computer robotics has to run on some sort of programs or algorithms written by humans. A robot could potentially learn, but only from what it has available to it; robots will never have the artistic creativity of humans.
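That point, that a learning machine can only generalize from what it has been given, can be illustrated with a toy nearest-neighbour learner (a deliberately minimal sketch, not a claim about any real system): its answers are always drawn from the labels present in its training data, never anything outside them.

```python
def train(examples):
    """`examples` is a list of (features, label) pairs: everything
    this 'robot' will ever know about the world."""
    return list(examples)

def predict(model, features):
    """Return the label of the closest known example (1-nearest neighbour).
    The output is always a label seen during training; the learner
    cannot invent a category it was never shown."""
    def distance(example):
        xs, _ = example
        return sum((a - b) ** 2 for a, b in zip(xs, features))
    _, label = min(model, key=distance)
    return label
```

However novel the input, `predict` can only echo one of the labels a human put in, which is exactly the limitation the post describes.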
Originally posted by SuicideVirus
Overall, I personally think we should worry about abusing people and animals first, and worry about abusing our robots later, when it becomes an actual issue.
I'm sure the people currently being slaughtered in Darfur are not comforted by our hypothetical concerns for sentient machines.