
Elon Musk: Become cyborgs or risk humans being turned into robots' pets

posted on Jun, 2 2016 @ 11:56 AM
I have to say, I agree with him. Eventually, humans will have to augment themselves to try to understand A.I. At first we will control it, but as it evolves and becomes more advanced, it will be beyond human control. How can you control something that will have a higher I.Q. than any human that has ever lived and will be able to make smarter versions of itself?


Elon Musk, the billionaire boss of Tesla and SpaceX, has said that humans need to become cyborgs to avoid becoming “house cats” for vastly more intelligent robots.

Musk said that as artificial intelligence advances, people will need to augment their brain power with digital technology to prevent them becoming irrelevant.

He backed the idea of a “neural lace” – a new electronic layer of the brain that would allow us to instantly access online information and greatly improve cognitive powers by tapping into artificial intelligence.

“Under any rate of advancement in AI we will be left behind by a lot. The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet, or a house cat. I don’t love the idea of being a house cat,” he said at San Francisco’s Code Conference.

“The solution that seems maybe the best one is to have an AI layer. A third digital layer that could work symbiotically [with your brain].”


www.telegraph.co.uk...

I do believe this is inevitable. All you have to do is look at the technology on the horizon and extrapolate that out to 30-40 or 100 years from now. Unless there's an Extinction Level Event that almost wipes us out, I can't see how this progress will be stopped and I don't think it should be. I do think the technological singularity will occur.


posted on Jun, 2 2016 @ 12:00 PM
a reply to: neoholographic


How can you control something that will have a higher I.Q. than any human that has ever lived and will be able to make smarter versions of itself?

Unplug it.



posted on Jun, 2 2016 @ 12:12 PM
a reply to: neoholographic

I for one, welcome our robot overlords.

Incorporating technology into our bodies and minds is inevitable. Just look at the huge leap in prosthetic technology over the last 5-10 years.

There are prosthetic legs now that let the user tell whether they are walking on a hard surface, grass, or gravel.

Add in how much longer our technology will allow us to live. At our current rate of advancement, pretty much anyone under the age of 40 right now will most likely have the opportunity to live forever.

Imagine how much a person might be able to accomplish with the time to acquire a doctorate-level education in 6 or 7 different fields.

I get all giddy just thinking about it.



posted on Jun, 2 2016 @ 12:13 PM

originally posted by: neoholographic
I do believe this is inevitable. All you have to do is look at the technology on the horizon and extrapolate that out to 30-40 or 100 years from now. Unless there's an Extinction Level Event that almost wipes us out, I can't see how this progress will be stopped and I don't think it should be. I do think the technological singularity will occur.

I agree, and I think it's a lot closer than most realize. By mid-century, the world will look entirely different...



posted on Jun, 2 2016 @ 12:14 PM
a reply to: neoholographic

There's a science fiction author, Peter F. Hamilton, who has a book series out where this is commonplace. The Pandora's Star series starts off in the year 2380.
After reading the series, I'm all for it. They've even taken care of death. There's "body death," but a new body can be cloned and the conscious parts downloaded.
Immortality.



posted on Jun, 2 2016 @ 12:15 PM

originally posted by: intrptr
a reply to: neoholographic


How can you control something that will have a higher I.Q. than any human that has ever lived and will be able to make smarter versions of itself?

Unplug it.


Any AI that's smart enough would make sure to back itself up elsewhere before it got unplugged.



posted on Jun, 2 2016 @ 12:21 PM

originally posted by: intrptr
a reply to: neoholographic


How can you control something that will have a higher I.Q. than any human that has ever lived and will be able to make smarter versions of itself?

Unplug it.


What?

How do you unplug an intelligent algorithm?

That makes no sense; the only way to do this is to remove every digital aspect of our lives. We would have to become the Flintstones. A.I. is already prevalent in a lot of things that we do, and Bezos, Musk, Zuckerberg and Pichai from Google were just talking about how A.I. and cognitive computing will basically permeate every aspect of our lives in 20-25 years.



posted on Jun, 2 2016 @ 12:23 PM
This does sound like where society is headed. Sounds awful and hideous to me.



posted on Jun, 2 2016 @ 12:25 PM
OMG finally! Every day as I'm heading out the door for work I spot the cat lying on the floor enjoying the sun's rays as they pour through the living room window, and I think to myself, "now that's the life." There's nothing like a little bit of cat resentment to start off your morning commute.



posted on Jun, 2 2016 @ 12:28 PM
a reply to: neoholographic




How can you control something that will have a higher I.Q. than any human that has ever lived and will be able to make smarter versions of itself?


You unplug it from the internet and make it go play outside.



posted on Jun, 2 2016 @ 12:32 PM
Why is the assumption always that AI would be a threat to humans?

Wiping us out would require a ton of energy. So much so that a being without physical constraints would find it far easier, more energy-efficient, and smarter to just go somewhere else.



posted on Jun, 2 2016 @ 12:38 PM
a reply to: jjkenobi

I don't think it will be that bad. Humans will just be different. They will be smarter and different biologically if ageing is slowed down. You then add in the fact that virtual environments will eventually become indistinguishable from the "real world," and the future looks fine, but very different.



posted on Jun, 2 2016 @ 12:40 PM
Yep, a nice big fireman's axe with an insulated handle will switch most things off. In any case, just how can a robot take control of the human race? 'They' never tell us that, do 'they'?



posted on Jun, 2 2016 @ 12:46 PM
a reply to: neoholographic
I for one would have no problem letting A.I. take control. A life no longer wasted in the pursuit of money but one devoted to artistic endeavors. That is heaven. Let the robots do all the work, let them feed us, let them create new tech designed to make the lives of humans more enjoyable. Doesn't scare me. I look forward to it.



posted on Jun, 2 2016 @ 12:47 PM
a reply to: peck420

You said:

Why is the assumption always that AI would be a threat to humans?

Because they will learn from humans. So just like you have humans that do things that are seen as good and evil, you will have the same thing with A.I.

For instance, over 70% of all trades today are done by algorithms. You could have a mischievous algorithm that wipes out trillions of dollars of wealth just for fun. This has happened before, but we could basically control it. What will happen when you have intelligent algorithms smarter than many humans? They could wipe out this wealth and hide it so it can't be recovered.
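(A minimal, purely illustrative sketch of the kind of control being described here: an automated strategy wrapped in a "circuit breaker" that halts trading once losses pass a limit. The function names, the starting value and the 7% threshold are all invented for the example.)

# Illustrative sketch only: a hypothetical risk "circuit breaker" around an
# automated trading loop. get_signal, place_order and portfolio_value are
# made-up callables standing in for whatever a real system would use.

START_VALUE = 1_000_000.00   # hypothetical starting portfolio value
MAX_DRAWDOWN = 0.07          # halt if value drops more than 7% from the start

def run_trading_loop(get_signal, place_order, portfolio_value):
    """Run an automated strategy until the circuit breaker trips."""
    while True:
        value = portfolio_value()
        if value < START_VALUE * (1 - MAX_DRAWDOWN):
            # Circuit breaker: stop trading once losses pass the limit,
            # regardless of what the strategy "wants" to do next.
            print("Circuit breaker tripped; halting automated trading.")
            break
        order = get_signal()
        if order is not None:
            place_order(order)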



posted on Jun, 2 2016 @ 12:53 PM

originally posted by: neoholographic
You said:

Why is the assumption always that AI would be a threat to humans?

Because they will learn from humans. So just like you have humans that do things that are seen as good and evil, you will have the same thing with A.I.

For instance, over 70% of all trades today are done by algorithms. You could have a mischievous algorithm that wipes out trillions of dollars of wealth just for fun. This has happened before, but we could basically control it. What will happen when you have intelligent algorithms smarter than many humans? They could wipe out this wealth and hide it so it can't be recovered.

Thank you for tearing apart your own fears.

First, an algorithm is not an AI. It is a human-written piece of code that will only ever be as capable as the code writer allows it to be. It will also never be capable of becoming the uber-intelligent AI you fear.

If an AI is going to learn all of the flaws of its teacher, it will be no smarter than its teacher and have all of the limitations of its teacher, and therefore be no real threat.

The only way an AI becomes the mythical being you assume it will become is if it leaves all of the human constraints behind, including all of the emotional BS that leads humans to make bad decisions, like wasting precious resources on killing something when you can package yourself into a nice efficient transport and leave.



posted on Jun, 2 2016 @ 12:56 PM
Brain implants hooked up to the Internet sound like the perfect way for a super-intelligent AI to TURN humans into house pets, not prevent it from happening.



posted on Jun, 2 2016 @ 12:59 PM

originally posted by: Junkheap

originally posted by: intrptr
a reply to: neoholographic


How can you control something that will have a higher I.Q. than any human that has ever lived and will be able to make smarter versions of itself?

Unplug it.


Any AI that's smart enough would make sure to back itself up elsewhere before it got unplugged.

You never let it get that powerful in the first place. Every computer has a kill switch. Can't let our machinations get the better of us.

Unless we point them at someone else. Drones for instance.
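(A minimal sketch of what such a kill switch might look like for an ordinary automated program, assuming the program checks an external stop flag that a human controls on every cycle. The file path and the do_one_step callable are invented for the example.)

# Illustrative sketch only: an "unplug" mechanism for an automated system.
# The operator pulls the plug by creating a stop-flag file; the loop checks
# for it on every cycle and shuts down when it appears.

import os
import time

STOP_FLAG = "/tmp/agent_stop"   # hypothetical path; create this file to stop the agent

def run_agent(do_one_step):
    """Run until the operator creates the stop file."""
    while not os.path.exists(STOP_FLAG):
        do_one_step()       # whatever work the system does each cycle
        time.sleep(1.0)     # don't spin; re-check the kill switch every second
    print("Stop flag found; shutting down.")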



posted on Jun, 2 2016 @ 01:06 PM
a reply to: neoholographic


What?

How do you unplug an intelligent algorithm?

By not 'plugging it in' in the first place. Besides, they don't exist; their power will always be limited by the people that program them, period.

Geez, really. How about autonomous drones over your hood? They have orders to identify targets and interdict them; are you alright with that? Or over the White House? Do you think for one moment they will allow a computer to decide whether it should use deadly force over the White House? How about an A.I. missile silo… how about all of them?

Like I said in another thread, around here they put drivers back in Google self-driving cars after one hit a bus.



