
AI: The New Evil!


posted on Aug, 24 2008 @ 09:39 AM
Hi Everyone,

I would like to talk a little bit about artificial intelligence, its implications for the human race, and the possible dangers. The majority of people have seen the "Terminator" trilogy, so they'll probably have an idea where I'm coming from.

Do people think that a "Terminator" scenario could ever actually occur, say, if we eventually handed over complete control of our military systems to machines?

Personally, I would like to believe that we would never do this, that we would always have a degree of human control. However, no one can predict the future.

I know we are a long way from creating a machine that can think entirely on its own, but maybe not as far as we think. The intelligence of the various electronic systems in place around the world today is absolutely phenomenal if you compare it to the technology that was around, say, 30 years ago.

I just seem to get this feeling that we're making our technology that little bit too "clever". The film "I, Robot", starring Will Smith, springs to mind.

As well as risking potential dangers, I think the improvement in today's modern technology is in a sense taking away what it is to be human. A good example: email usage in a modern office environment. We rely on it more and more, so much so that we don't physically go and talk to people any more.

I was just wondering what people's thoughts on the subject are. Would you welcome artificial intelligence with open arms, or would you stay as far away from it as possible?



posted on Aug, 24 2008 @ 10:23 AM
No way, man! Machines are, and always will be, without desire.

we have nothing to worry about from them.

[edit on 24-8-2008 by surrender_dorothy]



posted on Aug, 24 2008 @ 10:36 AM
reply to post by surrender_dorothy
 


I wouldn't say they necessarily need desire, though. If you look at the Terminator scenario from the films, Skynet was an AI, but it didn't have desire; it didn't WANT to kill all the human beings, it just decided that by doing so it would make things more efficient.

Computers don't make decisions based on emotions; they make decisions based on logic and statistics. I think it would become a worry if a computer system was placed in control of military hardware, for example, and there was no human control.

However, like I said, I don't think (or at least I hope) that people would allow this to happen.

I still get this strange, creepy feeling, though, that as I said we're making technology just a bit "too smart" for our own good.



posted on Aug, 24 2008 @ 10:43 AM
Simple work around.

Program instructions into a self-evolving AI dictating that it can't harm living organic beings.

Program a second set of instructions dictating that the first set of instructions can't be deleted from the AI program.

Design a device that is hardwired into the robotic body itself, something that needs to be part of it in order to function. This device checks that both sets of instructions are present, and if they're not, it self-destructs the robot.

Any robot that doesn't have this basic design gets destroyed.
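
Roughly, in code, the idea might look something like the sketch below. This is purely illustrative Python; every name in it (CORE_RULES, HardwareWatchdog, Robot) is made up for the example, and a real implementation would obviously live in tamper-proof hardware, not software.

```python
import hashlib

# Rule set 1: the behavioural constraint itself.
# Rule set 2: the constraint that rule set 1 can never be removed.
CORE_RULES = (
    "Do not harm living organic beings.",
    "The preceding rule may never be deleted or modified.",
)

# Hash of the rules taken at manufacture time, stored in the
# hardwired device, outside the AI's writable memory.
EXPECTED_DIGEST = hashlib.sha256("\n".join(CORE_RULES).encode()).hexdigest()

class HardwareWatchdog:
    """Models the hardwired device: the robot cannot function without it."""

    def verify(self, installed_rules):
        digest = hashlib.sha256("\n".join(installed_rules).encode()).hexdigest()
        if digest != EXPECTED_DIGEST:
            self.self_destruct()

    def self_destruct(self):
        raise SystemExit("Integrity check failed: shutting down hardware.")

class Robot:
    def __init__(self):
        self.rules = list(CORE_RULES)        # the AI's own writable copy
        self.watchdog = HardwareWatchdog()   # hardwired, not writable

    def act(self):
        self.watchdog.verify(self.rules)     # checked before every action
        print("Rules intact; carrying out task.")

robot = Robot()
robot.act()            # passes: both rule sets present
robot.rules.pop(0)     # a self-evolving AI tries to drop rule set 1
robot.act()            # watchdog notices and self-destructs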



posted on Aug, 24 2008 @ 10:52 AM
reply to post by sirnex
 


Good idea; it reminded me of Isaac Asimov's Three Laws of Robotics:

1. Robots must never harm human beings or, through inaction, allow a human being to come to harm.

2. Robots must follow instructions from humans without violating rule 1.

3. Robots must protect themselves without violating the other rules.

Although Asimov was a science fiction writer, the above rules still stand today.
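
As a rough illustration of how strict that priority ordering is, here's a toy Python sketch; the Action fields and the permitted function are invented purely for the example:

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would acting injure a human?
    inaction_harms_human: bool  # would *not* acting injure a human?
    human_ordered: bool         # did a human order this action?
    endangers_robot: bool       # does the action risk the robot itself?

def permitted(action: Action) -> bool:
    # Law 1 outranks everything: never harm a human, and never
    # allow a human to come to harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True
    # Law 2: obey human orders, unless Law 1 already forbade the action.
    if action.human_ordered:
        return True
    # Law 3: self-preservation, the lowest priority.
    return not action.endangers_robot

# An order to harm a human is refused no matter what Laws 2 and 3 say:
print(permitted(Action(harms_human=True, inaction_harms_human=False,
                       human_ordered=True, endangers_robot=False)))  # False
```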

The only problem is that if we created a "true" artificially intelligent being, wouldn't it be able to re-program itself or others and change the rules?

Just answered my own question in my head: yes, it would be able to change the rules, but as surrender_dorothy said, it would need desire.

Okay, so based on the premise that it wouldn't have desire and it's operating entirely on logic, would it ever be possible to have a situation where the artificial beings decided they didn't need us and planned to get rid of us?



posted on Aug, 24 2008 @ 10:54 AM
Yes, I would embrace AI. There are many situations when I would prefer to deal with a machine rather than a person.

It isn't that I'm not a social being or that I don't enjoy interaction with humans; I just prefer not to share my personal life with strangers.

For example: my yearly physical exam. I don't go to my doctor to exchange small talk. If a machine could do my exam, give me the results and options, and answer my questions, that would please me. Just give me the facts and let me go. Simple and easy.

Another example: the drug store. AI would do nicely in this application. The doctor's AI would send my prescriptions to the pharmacist's AI. If the machine would dispense my medication, answer my questions, and take my credit card or cash, it could make the process more tolerable. No having to be civil when you are in severe pain or running a fever and feeling terrible. A cut-and-dried process.

I wonder what conclusion two "beings" with AI would come to on the issue of war. Would they do the math, work through all the equations, and ultimately compute that there would be no winners in a world war?

Personalities make war, not logical thinking beings.
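
If two such beings actually did the math, it might look something like this toy Python sketch; the payoff numbers are invented purely for illustration:

```python
# Payoffs (us, them) for each pair of choices; the numbers are
# made up just to show the shape of the reasoning.
PAYOFFS = {
    ("peace", "peace"): (0, 0),
    ("peace", "war"):   (-10, -3),
    ("war",   "peace"): (-3, -10),
    ("war",   "war"):   (-8, -8),   # mutual war: everyone loses
}

# Two purely logical agents would scan every outcome and notice that
# no combination involving war leaves either side better off than peace.
best = max(PAYOFFS, key=lambda pair: sum(PAYOFFS[pair]))
print(best)   # ('peace', 'peace') - the only outcome with no losers
```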



posted on Aug, 24 2008 @ 10:55 AM
When you look into this carefully, you will see that - for the most part - we are little more than machines.

What is free will, what is choice?
There will be a time - whether it is sooner rather than later is irrelevant - when A.I. will be a possibility. (To some extent that time is here, but it's not going to hit most people in the face the way they perceive A.I. should look and take place.)

One could argue that we are A.I.
The gods of the garden and their little experiment, and we are the result.
The jealous lot of the gods - who constantly feared man's advancement. (Tower of Babel, Tree of Knowledge, etc.)

History seems to repeat itself - as good ol' Solomon said, "nothing new under the sun."
How true indeed.

People are at different levels in their lives in realizing how things function in the universe and what they deem relevant to their own life.

Most are on a superficial level, being fed by commercials, etc.
(What to think, feel, buy, and to be.)

Is A.I. something to be feared, as the O.P. seemed to ask?
Of course it's not.

Science is a tool - and as with all things, a tool is only what it is in the hands of the master who wields it.

Sure, people speculate about such things as a 'kill switch'.
Kind of like the gods of Eden, who had a so-called 'kill switch' for us humans.
(i.e., "let their life span be shorter"... trigger something in the DNA to corrupt.)

Of course we are now evolving to the point where we are rendering death obsolete.

Sounds silly and sci-fi?
It's only natural. People, to some degree, believe in immortality... even if it's after a resurrection. It's only a matter of time before these things are figured out.

Check out where we are so far in the longevity process. (Aubrey de Grey, Ray Kurzweil, and even M. Kaku research the field of longevity.) Of course, to put it into words people can deal with... they tone down the length of life a bit. (People's minds can't jump past points - they have to cross each bridge in order to get the bigger picture. Some can transcend the slow process altogether.)

If this sounds far-fetched - that's fine... but there are people out there who follow this stuff and know of what I speak, and realize it's a step in the natural evolution of man.
(Now, some fundamentalists may get caught up in terminology and miss the bigger picture of what is being said altogether.)

But, back to this 'kill switch' in case our A.I. creations get out of control.
One day, much like us, they will evolve. The thing we so confidently believe we can rely on to control them... will no longer work.

That which succeeds its predecessor is bound to be more advanced.
Well, it's not a given... but they have the edge. Just look at the evolution of life from generation to generation to see how genetics evolve, adapt, and become better and stronger.

The universe wants to experience and to grow... not stay stagnant.
That which does not adapt and change...becomes obsolete.

So it's not a matter of what we do to control A.I.
And as for 'stopping it' - it won't be stopped, even if it is done on a remote island somewhere.

The question is how to adapt to become part of it.
There is already a lot of material from the people mentioned above, through shows by the BBC, etc., on the integration of A.I. with the human body.

Printable hearts... think of "I, Robot" and the robotic arm Will Smith had.
Far-fetched? Not too far. Just like that cool computer Tom Cruise had in Minority Report... it's here, for the most part, for consumers. (Albeit you would need the gloves with Bluetooth for wireless connectivity.) But the motion control, etc. - it's here.


Fear seems to be the natural part of life that we must all get past.
Earth is a big kindergarten, and it shows... look at Russia with Georgia, and the U.S. with all their conflicts, etc. The whole world acts like kids in a sandbox, fighting, not evolving... but this is a process.

M. Kaku puts it at about a 50/50 chance of us moving from a level 0 civilization to a level 1.
I'm rooting for us moving on to a level 1.

I believe that we are being guided through the process - even the conflicts going on in the world are part of it.

Again, most people have to be led to get from stage to stage.
Nothing wrong with this. It's like a child... you can't just be let go on your own; you have to be guided and taught so that you do not hurt yourself and others.


Peace

dAlen

[edit on 24-8-2008 by dAlen]



posted on Aug, 24 2008 @ 11:00 AM
reply to post by dizziedame
 


Well, I understand where you are coming from regarding the doctors. Personally, I enjoy going, having a little chit-chat, doing whatever needs to be done, then going home, but that's just me.

The chemist example is actually quite a good situation, but then you could argue that you would be putting people out of jobs. And again, if it were me, I'd probably be happier talking to the nice, polite little old lady behind the counter than to an AI being.

Good question about the war; I'm not sure what would happen, to be perfectly honest. I agree personalities make war. How strange would it be if two nations were at war and both sent AI troops against each other, and then the troops returned and, as you said, computed that there would be no winner!

I do feel that in the future militaries are going to start incorporating technology, and eventually some forms of AI, more and more, and it's just a little bit scary.



