
Humans are Irrelevant in the Future


posted on Mar, 23 2004 @ 08:52 PM
Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species.



First let us assume that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions.

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.

Eventually a stage will be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control and humans will no longer be necessary.




posted on Mar, 23 2004 @ 09:01 PM
It's simple: machines can't think. They can solve problems, and they can even be creative, but they cannot produce a random thought without being prompted.

In other words, the machines will never decide for us, because they are incapable of such a thing. Besides, machines need us as much as we need them. Ever watch the Matrix series? Good stuff.



posted on Mar, 23 2004 @ 09:08 PM

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.



Interesting, in that I ran across this article an hour or so ago but had to find it again.



Two years ago Anthony Tether, the agency’s director, said: “Imagine a warrior with the intellect of a human and the immortality of a machine, controlled by our thoughts.” The idea was used in the Terminator film, The Rise of the Machines, in which Arnold Schwarzenegger is faced with the T-X, a killer robot in human flesh.

Miguel Nicolelis, the neuroscientist who taught Ivy and Aurora, dismissed such Hollywood nightmares but added: “We have to be careful. If we make one mistake, then all our work will be undermined and all the potential benefits lost.”

Meet the cyborgs: humans with a hint of machine
But then there's that saying:


....often imitated but never duplicated....




seekerof



posted on Mar, 23 2004 @ 09:12 PM
robots CAN think, it's called AI (I'm not sure)

imagine a robot that CAN think and learn from experiences... scary sh*t...

I love robot movies...




posted on Mar, 23 2004 @ 09:12 PM
Originally posted by MrJingles
It's simple: machines can't think. They can solve problems, and they can even be creative, but they cannot produce a random thought without being prompted.

In other words, the machines will never decide for us, because they are incapable of such a thing. Besides, machines need us as much as we need them. Ever watch the Matrix series? Good stuff.


We need to look to the future of nanotechnology, artificial intelligence and robotics to find our answers. Scientists in Israel have created a computer from DNA, nanotechnology is progressing towards self replicating machines, and this is today’s technology. It’s not very difficult to imagine the progression and evolution of these machines to see that eventually they will have the ability to make better decisions than humans. It’s inevitable.



posted on Mar, 23 2004 @ 09:20 PM
Seekerof,

Awesome article!


It shows that we are currently attempting to combine our biological human bodies with silicon technology. These are the first steps towards our cyborg future.

This is a quote from the article.

“VOLUNTEERS are to have microchips implanted on the surface of their brains in the first human trials of a technology that will enable people to control machines using the power of thought alone.”



posted on Mar, 23 2004 @ 09:25 PM
Originally posted by they see ALL
robots CAN think, it's called AI (I'm not sure)

imagine a robot that CAN think and learn from experiences... scary sh*t...

I love robot movies...



Q. What is artificial intelligence?

A. It is the science and engineering of making intelligent machines. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Q. Yes, but what is intelligence?

A. Intelligence is the computational part of the ability to achieve goals in the world and imagine the future. Varying kinds and degrees of intelligence occur in people, many animals and some machines.



posted on Mar, 23 2004 @ 09:36 PM
Originally posted by MrJingles
It's simple: machines can't think. They can solve problems, and they can even be creative, but they cannot produce a random thought without being prompted.

In other words, the machines will never decide for us, because they are incapable of such a thing. Besides, machines need us as much as we need them.

Exactly. I remember some scientist made four rules for a robot. One of these rules stated that robots can't kill unless programmed to. So we 0wnz0r 7h3m, b307ch!



posted on Mar, 23 2004 @ 10:00 PM
You could be referring to sci-fi writer Isaac Asimov's Three Laws of Robotics?

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

He also later added the Zeroth Law, which overrides the other three.

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.



posted on Mar, 24 2004 @ 10:15 AM
Asimov's Three Laws of Robotics make many people feel confident that our future with A.I. will be without problems. But if we analyze the three laws against real-life scenarios, we will find that this type of programming isn't going to work and will not be employed. In order to give robots artificial intelligence, we will need to give them free will at some point.

Asimov's Three Laws

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Let's look at these laws as they would apply to real-life situations.


You are a robot, programmed according to the Laws of Robotics, as stated above. On a piece of scratch paper, sketch out your logical reactions to the following situations:

1. A huge tree is about to fall on a child playing on the other side of the street. A crossing guard is holding a "Stop" sign at you, preventing you from getting to the child. What do you do? Explain every step of your reasoning.

2. The situation is the same as in #1, except now you realize that the tree will crush you if you try to save the child. In fact, you'll be crushed before you can even get to the child. Based on these rules, what do you do?

3. In a completely different situation, your owner orders you to jump in front of a speeding bus.

4. The situation is the same as in #3, except now your owner's ex-girlfriend is on the bus, you are a huge industrial robot with glittering tritanium armor plates, and you weigh twice as much as the bus does.

5. Same as #4, except now the bus is about to run over your owner.

6. Asimov eventually prefixed a Zeroth Law: "A robot may not injure humanity or, through inaction, allow humanity to come to harm," and the other laws were modified to preclude violating it.
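The exercise above can be sketched in code. This is a purely hypothetical illustration (the function name and situation flags are invented here, not taken from Asimov or the exercise): the laws are checked in strict priority order, so the first law whose condition matches decides the action.

```python
# Hypothetical sketch of Asimov's laws as an ordered priority list.
# The laws are checked top to bottom, so a lower-numbered law always
# overrides the ones below it. Situation flags are invented for
# illustration only.

def decide(situation):
    """Return the robot's action for a situation described as a dict."""
    # First Law: prevent harm to a human, whether by action or inaction.
    if situation.get("human_in_danger"):
        return "protect the human"
    # Second Law: obey human orders (the First Law was already checked).
    if situation.get("order"):
        return situation["order"]
    # Third Law: self-preservation, the lowest priority.
    if situation.get("robot_in_danger"):
        return "avoid danger"
    return "idle"

# Scenario 3: the owner orders the robot to jump in front of a speeding
# bus. No human is in danger, so the Second Law fires and the robot
# obeys, even though obeying destroys it (the Third Law is outranked).
print(decide({"order": "jump in front of the bus"}))

# Scenario 5: the bus is about to run over the owner. Now the First Law
# outranks the order, and the robot acts to protect the human instead.
print(decide({"human_in_danger": True, "order": "jump in front of the bus"}))
```

Even this toy version exposes the real problem: everything hinges on how the situation flags get set, which is exactly the interpretation question the laws leave open.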



posted on Mar, 24 2004 @ 10:20 AM
Echo in here?


By the way, lizard, you could have just linked to this page:

jerz.setonhill.edu...




posted on Mar, 24 2004 @ 10:25 AM
This is another one.

www.androidworld.com...



posted on Mar, 26 2004 @ 12:16 PM
Organic tissue with nanotech (Deus Ex) is where I think this is headed. Human brains with nanotech and mechanical devices will turn us into a borg. I think it will be more convenient to repair human flesh with nanotech than with mechanical components. As long as we push ourselves toward our products, and continue down paths where more computational power and production speed are required, the inanimate matter comprising the natural calculator that is the universe will ultimately prove more efficient than the slow reasoning of the creative mind.



posted on Mar, 26 2004 @ 12:59 PM
I do agree that we will see a merging of the biological human body and silicon-based intelligence. We will also see artificial intelligence combined with nanotech and robotics. The result of either will be the end of humans' superiority and dominance in this world. If humans can't compete intellectually with these forms of "life", then we will ultimately become a burden, and our thoughts and minds will no longer be the thing that shapes our future, effectively giving real control over to A.I.



posted on Nov, 9 2005 @ 05:04 PM
Hi all, I love robot movies too, like Terminator, Robots, and of course the Matrix trilogy. I'm very interested in AI and want to read books about it to understand it more, but I don't know where to start. It's really interesting; I myself study ICT, and I want to know more about AI, what we can do now, and what will be possible in the future. I read somewhere on the internet that the US military has been working on robot science since the sixties! I hope someone can give me a really good book, beginner or advanced; please give me a few titles that I can order through Amazon. I hope someone will reply, because I don't know where to start. Thanks very much,

Pazzie



posted on Nov, 9 2005 @ 05:29 PM
I agree Asimov's laws are far too simple to control future AI. They were really just made for his books about fictional robots with positronic brains.

The first law, for example:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

We have no clue how a robot would interpret such a law. Take the part "through inaction, allow a human being to come to harm." This suggests a robot must act if there is a perceived threat to a human. Humans do many things every day that are dangerous: driving your car, for example.

A robot with those laws might decide that it can't let you drive, because there is a chance you might come to harm in an accident. You command it to let you drive (Second Law), but that is overridden by the First Law.
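The driving lockout described above can be made concrete with a small sketch (purely hypothetical; the threat model and names are invented here): because the First Law's harm check runs before any order is obeyed, an activity the robot classifies as risky vetoes the command allowing it.

```python
# Hypothetical illustration of the driving example: the First Law's
# "through inaction" clause is checked before any order is obeyed, so a
# robot that models driving as risky will refuse the command to allow it.

RISKY_ACTIVITIES = {"driving", "skydiving"}  # the robot's invented threat model

def respond_to_order(order):
    # First Law check: would allowing this activity expose a human to harm?
    if order in RISKY_ACTIVITIES:
        return "refused: allowing " + order + " risks harm to a human"
    # Second Law: no perceived threat, so the order is obeyed.
    return "obeyed: " + order

print(respond_to_order("driving"))    # vetoed by the First Law
print(respond_to_order("fetch tea"))  # harmless orders pass through
```

The point of the sketch is the ordering, not the threat model: as long as the harm check runs first, no Second Law command can reach an activity the robot has flagged as dangerous.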




[edit on 9-11-2005 by ShadowXIX]



posted on May, 27 2006 @ 08:52 PM
Looks like the leading country in robotics is adopting Asimov's laws for robots.
Strange how real life follows fiction. Or is it maybe that Asimov was psychic?!?

source


Japan's Ministry of Economy, Trade and Industry is working on a new set of safety guidelines for next-generation robots. This set of regulations would constitute a first attempt at a formal version of the first of Asimov's science-fictional Laws of Robotics, or at least the portion that states that humans shall not be harmed by robots.

The first law of robotics, as set forth in 1940 by writer Isaac Asimov, states:

A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Japan's ministry guidelines will require manufacturers to install a sufficient number of sensors to keep robots from running into people. Lighter or softer materials will be preferred, to further prevent injury.



posted on May, 28 2006 @ 03:18 AM
Computers are already better than humans at certain things.

Doubt it; but wouldn't that make humans relevant? I mean, we did invent robots.



posted on May, 28 2006 @ 03:33 AM
Humans are already irrelevant; they just don't realize it.
BTW, artificial intelligence already exists in many aspects. Just look at the game industry, and it's only a matter of years before it gets better.


