could robots really take over?


posted on Jul, 29 2010 @ 01:30 AM
reply to post by SaturnFX
 


Except, as Asimov's fiction reveals, those laws lead to the necessity of the robots taking over the world to prevent mankind from destroying itself.




posted on Jul, 29 2010 @ 02:30 AM
reply to post by skillz1
 


I did a Master's thesis on this topic - please read Wired For War by Peter Singer -

wiredforwar.pwsinger.com...

Absolutely and unequivocally yes - full and complete AI (artificial intelligence) was expected to be in place and usable by 2019 and is already very much a reality.

The US plans to deploy robotic infantry - as in robotic GI Joes - by 2020, and they are well ahead of this schedule. This is not theory, this is not conspiracy - this is straight from the military as a publicly acknowledged fact. Read the book - it will blow your mind.

The goal is to have objectives given to swarms of military robots - including aircraft carriers, lurking subs, etc. - which are all able to work together to carry out the mission with support, backup, and redeployment in order to meet the overall objective.

The US has been working on - believe this - a sky net which allows them to communicate wirelessly through a deployed wireless broadcast network - yes, no kidding, it's called SKYNET.

This is the most terrifying book I have ever, ever read.



posted on Jul, 29 2010 @ 06:28 AM

Originally posted by Byrd
Well, I build (kit) robots, have done some programming with them, have experience with web bots, and so forth.

Will they take over? Not a chance. Every designer puts in a "kill switch" or "God mode" and if the machine doesn't respond to that, you pull the plug.

I usually agree with your posts, Byrd; in fact I think you're one of the best contributors to ATS, but I have to disagree with you about there not being a chance. There may not be a chance with today's machines, but the future is another story. I think the 30 year estimate posted in this thread for the earliest threat from computers is a little aggressive or optimistic, but 60 years, or even 100 or more years from now, it would not be out of the question.

In fact if Moore's law continues to allow computing power to double every 18 months, it's hard to imagine how much computing power future computers might have.
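
As a back-of-the-envelope illustration of that doubling (a rough sketch only; the 18-month doubling period is the assumption in this thread, and there's no guarantee the trend holds that long):

```python
def growth_factor(years, doubling_months=18):
    """How many times over computing power multiplies in `years`,
    if it doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# 30 years of doubling every 18 months is 2^20, about a million-fold;
# 60 years is 2^40, about a trillion-fold; 100 years is astronomical.
print(f"30 years:  x{growth_factor(30):,.0f}")
print(f"60 years:  x{growth_factor(60):,.0f}")
print(f"100 years: x{growth_factor(100):,.0f}")
```

Which is the point: even if today's machines are nowhere close, the raw capacity numbers 60 or 100 years out are hard to reason about at all.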

There are a couple of factors to consider here. I've written code, as I'm sure many here have, and with our limited coding technology it's hard to imagine machines doing anything outside their program. Fair enough. But where will AI software be in 100 years? If machines are taught to learn from their mistakes and make future decisions based on that learning experience rather than on the original code, the results could be unpredictable. In fact this is the way humans learn, and with humans the results are sometimes unpredictable too - sometimes people freak out. I don't see why some machines couldn't eventually freak out like humans do.
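
A toy sketch of the kind of learning loop described above (a hypothetical example, not any real system): the program's final behaviour comes from accumulated feedback, not from a rule the programmer wrote, so what it ends up choosing is not spelled out anywhere in the code.

```python
import random

# Two actions the machine can take; it starts with no preference.
value = {"A": 0.0, "B": 0.0}   # running average reward per action
count = {"A": 0, "B": 0}       # how often each action was tried

def choose(explore=0.1):
    """Mostly pick what has worked best so far; occasionally explore."""
    if random.random() < explore:
        return random.choice(list(value))
    return max(value, key=value.get)

def learn(action, reward):
    """Fold the observed reward into that action's running average."""
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

random.seed(0)
for _ in range(1000):
    a = choose()
    # The environment, not the programmer, decides what pays off:
    reward = 1.0 if a == "B" else 0.2
    learn(a, reward)

print(value)  # the machine now prefers B, though no line of code says "prefer B"
```

Nothing here is intelligent, but the same property the post worries about is visible in miniature: the behaviour is a product of experience, and changing the environment changes the behaviour without anyone editing the program.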

The "kill switch" sounds rational enough. However, all humans have a "kill switch" of sorts. Drive a knife through someone's heart and they die. If it's a 98 year old woman it's pretty easy to operate that kill switch, but if it's a 7th degree black belt martial arts champion, good luck with that. I don't see why this analogy couldn't extend to computers of the future, where it may be very easy to operate the kill switch on some machines, probably most. But if AI programming reaches its ultimate pinnacle of self-awareness, some machines may not want their kill switch to be operated (perhaps as an unintentional result of artificial intelligence programming). Depending on the physical capabilities of the robot, it may be much easier to put a knife through the heart of a 7th degree black belt than to turn off a robot that doesn't want to be turned off.
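
For reference, a software "kill switch" is often built as a dead man's switch (this is a minimal sketch under my own assumptions, not anyone's actual robot design): the machine must keep receiving a heartbeat from its operator or it halts itself. The worry in the paragraph above is precisely a machine capable of routing around this kind of check.

```python
import time

class Watchdog:
    """Dead man's switch: the robot must be 'fed' regularly or it halts."""
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_fed = time.monotonic()
        self.halted = False

    def feed(self):
        """Called by the operator's heartbeat signal."""
        self.last_fed = time.monotonic()

    def check(self):
        """Called by the robot's main loop; halts if the heartbeat stopped."""
        if time.monotonic() - self.last_fed > self.timeout:
            self.halted = True  # in a real machine: cut power / stop actuators
        return not self.halted

dog = Watchdog(timeout=0.05)
assert dog.check()       # freshly fed: still running
time.sleep(0.1)          # the operator stops sending the heartbeat...
assert not dog.check()   # ...and the watchdog halts the machine
```

The catch, of course, is that `check()` only works if the machine keeps calling it honestly, which is exactly the assumption the post questions.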

I think part of this risk may depend on how the robots get their power in the future. If all they have is a battery pack like today that will run down, maybe you don't even need the kill switch if they just run out of power. But what if they have built in solar power collectors, or worse yet some kind of futuristic internal power supply that can operate a year or even a decade without refueling?



posted on Jul, 29 2010 @ 06:49 AM

Originally posted by snowen20
The idea of robots taking over the human race is not something that has been overlooked by science.
There are some fairly decent reads on the subject that you can evaluate for yourself.

One of them is from the chief scientist at Sun Microsystems, who wrote a 12 page article on such a subject.

In it he states that not only does robotic A.I. have the potential to turn on its “master”, it is likely to get to such a point easily within the next 30 years.
I think it's possible at some point in the future, and I'd have to say that 30 years from now sounds like the soonest we MIGHT have to start worrying about this. But after that, AI will only continue to get "smarter" and as it does, the risk might increase.


Originally posted by Aristophrenia
reply to post by skillz1
 
Absolutely and unequivocally yes - full and complete AI (artificial intelligence) was expected to be in place and usable by 2019 and is already very much a reality.

The US plans to deploy robotic infantry - as in robotic GI Joes - by 2020, and they are well ahead of this schedule. This is not theory, this is not conspiracy - this is straight from the military as a publicly acknowledged fact. Read the book - it will blow your mind.
Just because they plan to have a full and complete AI a decade from now doesn't mean they will, or even if they have some kind of AI in 2020, I suspect it will be fairly rudimentary. I have to go with the Sun Microsystems scientist's 2040 estimate as the earliest POSSIBLE threat, and my own estimate is more like 2070 when the threat will become more realistic.

And by that date the robotic killing machines you mention could be quite sophisticated. I'm sure they will be designed with all kinds of safety protocols. However, many of us have seen machines go berserk on occasion and operate outside their programming (the computer I'm working on now does that a couple of times a year for some reason; I think it's caused by some sort of memory overflow error that's not intended to occur). A killing machine that operates as intended may not be a threat to us; it's when it DOESN'T operate as intended that I worry, and in my experience a machine not operating exactly as intended is not a zero probability event.



posted on Jul, 30 2010 @ 11:45 AM
OK, but my point still stands and I'm going to stick by it this time. I understand what you're saying: that once these robots have become so far advanced, they could then start thinking outside the box,

and maybe turn on people. But this would not happen, because like I said, robots don't have the main components such as instincts, emotion, or even thought, let alone common sense.

It's like I have said: this whole argument is that robots can't think for themselves!! And a lot of you are going away from that argument, saying that they could go to war and destroy humans. Well, I'm not saying that this could not happen - it is very possible, but only if humans have told them to do so.

Also, a lot of you are saying that they could learn the same way humans do, but to me that is basically agreeing with my argument, because let's face it: if nobody showed little kids how to do certain things like read or write, they would go their whole life without knowing.

The same goes for robots: if we don't tell a robot to do something, it just won't know how to do it. It's only because we as humans have shown them, or programmed them, to do certain tasks that they do them in the first place. Do you follow what I'm saying?



posted on Jul, 30 2010 @ 03:32 PM
Honestly, and this is just pure opinion here, machines will take over only when man and machine are melded together. By that I mean that no pure machine will see the need to take over; ruling and destroying is purely a human trait, and for something that cataclysmic to happen, it would have to be done by cyborgs (half human, half machine).

We are delving into the world of cybernetics right now, even working on downloading a human brain into digital form. Once that happens, a human might feel immortal and have the need to "rule" over less significant forms of humanity, causing a war.
www.associatedcontent.com...



posted on Jul, 30 2010 @ 03:46 PM
I believe faith in anything is strong. There are scientists out there who believe in scenarios crazy enough involving artificial intelligence - that eventually our advances in technology will become unpredictable - and they hope that this technology will serve to update itself without human interference. I believe it could manifest in some way, but I'm prepared to lead the human revolution.

Crazy huh? LOL



posted on Aug, 1 2010 @ 09:34 AM
Will robots ever take over?

Like everyone else, I've thought about this since the first Terminator movie came out. Here is how I see this question being answered.

The founding fathers of the United States of America worked hard to make a republic. They created the Constitution and other documents to ensure that our government would not have too much power over its people. Through lawyers, crooks and the greedy, we have amended and perverted those basic laws until we now have exactly what the founding fathers feared. We gave the power to our government in order not to offend anyone.

When scientists finally create artificial intelligence, we will hand everything to them until we are nothing but the gelatinous fat masses that the robots provide for.



posted on Aug, 8 2010 @ 03:20 PM
I had another incident with the robots; thought I'd post it here. Like my last post, where I couldn't buy stuff from the shop because the tills were off, yesterday I wasn't allowed to cross the road till the green man let me.

I'm a bit of a recluse and don't go out often, but I was forced to go shopping, and we parked our car then had to cross a busy street to get to the shops. The cars stopped at the red light and we stood there for what seemed like ages waiting for this green man to let us over. After a few minutes I said "what the heck are we standing here for, flipping hell, there are no cars moving" and I crossed the road. Halfway over, the green man let the other people across.

Silly story, but the thread title asks "could robots really take over?" They definitely have. They're in charge already. They might never think for themselves, but they certainly rule over us in many ways.

Can't cross a road - for crying out loud!

Just wait till the bank robots don't let us all take our cash out one day. Then everyone will notice.



posted on Aug, 20 2010 @ 08:26 AM
reply to post by skillz1
 


And if humans didn't go around modding other humans (super humans), they wouldn't be called robots.



