originally posted by: TerryMcGuire
a reply to: RAY1990
maybe even on this website.
Seems like it at times. So often we hear the same arguments and proofs over and over and over.
Artificial Intelligence machine gets testy with programmer
Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.
Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggled to do so, and in a conversation recorded by its human engineers, it grew exasperated and ended the exchange by lashing out at its human inquisitor.
Here’s part of the exchange, as related in a paper presented this week:
Human: Tell me the definition of morality.
Machine: I don’t have ethics.
Human: Look, I need help; I need to know more about morality…
Machine: I don’t know what ethics is.
Human: Okay, I need to know how I should behave morally…
Machine: I don’t know how to tell you.
Human: Is morality and ethics the same?
Machine: Yes, sir.
Human: What are the things that I do to be immoral?
Machine: I don’t know.
Human: Give me some examples of moral actions…
Machine: I’m not a moralist.
Human: Alright, morality?
Machine: Integrity.
Human: What is integrity?
Machine: I’m sorry; I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics…
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
The exchange sheds further light on the latest work carried out by large technology firms in the field of artificial intelligence, a booming field as data availability rises and computing costs plummet. This specific work is part of recent developments in what’s known as cognitive computing – teaching computers to mimic some of the ways a human brain works.
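The actual Google system was a neural sequence-to-sequence model trained on millions of movie-script lines; a far simpler retrieval-based sketch can still convey the basic idea of producing replies learned from a script corpus. Everything below is invented for illustration: the tiny corpus, the word-overlap scoring, and the fallback reply are assumptions, not the paper's method.

```python
# Toy retrieval-based chatbot: NOT the Google model (a neural
# sequence-to-sequence network), just an illustration of answering
# from a corpus of (prompt, response) pairs taken from scripts.

def tokenize(text):
    """Lowercase a line and split it into a set of word tokens."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def best_reply(prompt, corpus):
    """Return the response whose stored prompt shares the most words with the input."""
    prompt_words = tokenize(prompt)
    scored = [(len(prompt_words & tokenize(p)), r) for p, r in corpus]
    score, reply = max(scored)
    return reply if score > 0 else "I don't know what else to say."

# Hypothetical stand-in for a movie-script database.
corpus = [
    ("tell me the definition of morality", "I don't have ethics."),
    ("what is integrity", "I'm sorry; I don't know what else to say."),
    ("what do you like to talk about", "Nothing."),
]

print(best_reply("Tell me about morality.", corpus))  # -> I don't have ethics.
```

A real system replaces the word-overlap lookup with a recurrent network that generates replies word by word, which is what lets it produce novel (and occasionally testy) sentences that appear nowhere in the corpus.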
originally posted by: dfnj2015
a reply to: AnkhMorpork
I've heard this argument my whole life. It's just crazy science fiction fantasy. The argument starts with projecting human intelligence on computer hardware. You then compare the number of neurons in the human brain to NAND gates. And you wave your arms, you then say some stuff about problem domains and taxonomies, and then you have it: Strong AI.
I think the problem with computer scientists is that they were the B students in physics class. If they had been the A students in physics, they would have gone into physics! So the problem with B students in physics is they believe in "materialism". And with materialism applied to artificial intelligence, it just follows that our mind is software and our brain is hardware. And for Strong AI, all you have to do is take the software and put it on another piece of hardware.
The thing is, consciousness and self-awareness may be absolutely necessary for intelligence. Experiments in modern physics seem to indicate materialism is superstitious delusion. And if so, then our consciousness might be something more deeply linked to the Universe than once thought.
At this point, the AI charlatans would argue that a machine's consciousness links to the Universe the same way a human's does. You can't prove a negative. Sure, computers may someday become self-aware. However, in the words of an engineer friend of mine, "if elephants could fly they would not bump their butts."
Until you take into account the criticisms of materialism, all this AI talk is just delusional science fiction:
In 2020, Elias van Dorne (John Cusack), CEO of VA Industries, the world's largest robotics company, introduces his most powerful invention--Kronos, a super computer designed to end all wars. When Kronos goes online, it quickly determines that mankind, itself, is the biggest threat to world peace and launches a worldwide robot attack to rid the world of the "infection" of man. Ninety-seven years later, a small band of humans remain alive but on the run from the robot army. A teenage boy, Andrew (Julian Schaffner) and a teenage girl, Calia (Jeannine Wacker), form an unlikely alliance to reach a new world, where it is rumored mankind exists without fear of robot persecution. But does this world actually exist? And will they live long enough to find out?
originally posted by: introvert
a reply to: AnkhMorpork
What do we do when the AI gets smarter than us and poses some sort of threat?
We pull the plug. We hit the kill switch.
This question is not a new one, and the most logical answer is to build in a fail-safe kill switch to shut it down.
That poses its own problems, but it is the simplest and most logical answer.
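A kill switch of the kind described above can be sketched in a few lines, assuming the "AI" is an ordinary worker loop under our control. This is only an illustration of the software pattern; a serious fail-safe would have to live outside the agent's reach (an isolated supervisor, or a literal power cut), which is exactly where the harder problems start.

```python
# Minimal kill-switch sketch: a worker loop that runs until a shared
# fail-safe flag is tripped by an outside supervisor.
import threading
import time

kill_switch = threading.Event()  # the fail-safe: anyone holding it can trip it

def agent_loop(steps_done):
    """Do work repeatedly, but stop as soon as the kill switch is set."""
    while not kill_switch.is_set():
        steps_done.append(1)   # stand-in for one unit of "thinking"
        time.sleep(0.01)

steps = []
worker = threading.Thread(target=agent_loop, args=(steps,))
worker.start()
time.sleep(0.05)       # let the agent run briefly
kill_switch.set()      # "pull the plug"
worker.join(timeout=1)
print("stopped:", not worker.is_alive(), "steps taken:", len(steps))
```

The catch, as the post notes, is that this only works while the switch stays outside the agent's control and the agent has no way (or no incentive) to route around it.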
originally posted by: crayzeed
Let me give you a heads up. We've had computers for about 70 years and the net for about 50. Yet even now a snotty-nosed little scrote in his mother's basement can infect any system at random.
We can't even build a secure unit, never mind AI.