
Stephen Hawking Warns A.I. Could Lead To The End Of Mankind

posted on May, 3 2014 @ 05:40 AM
a reply to: gort51 Yeah, I remember that promise of a shorter work week. Lies.



posted on May, 3 2014 @ 06:57 AM

originally posted by: candlestick
I don't care about anything Stephen Hawking says.


Be careful not to cut yourself with your edginess.



posted on May, 3 2014 @ 12:23 PM
Artificial intelligence will always be redundant. It's not a perfect enough system to oversee itself.

Why? Because technology will always be imperfect. Unfortunately, artificial intelligence will only be as good as we make it. The fear is that we lock ourselves into something we get set in and used to, when something better could have come out of it (machines that don't break, software that never fails but repairs itself). In general, we don't have the robust architecture to say forever that "artificial intelligence will rule all."

The kicker? The fear that it actually gets that way and is forever locked in a technical stance where it would never improve. Foremost, humans would naturally lose their instincts, their natural way of survival. If we continue to allow the machines to do the thinking for us, we will all be stuck and locked in one way, with no chance to evolve our society or technology, because we are all so high-strung that the world works in only one way.

I was reading a forum post about the "ghost in the machine," with machines and technology becoming attuned to our motives and acting upon them: "quantum intelligence." If everything is a shared consciousness, I don't doubt that we will see something escalate quickly.

Like I said, I see a problem with it, though: losing our human ways over something that we started and have come to trust.



posted on May, 4 2014 @ 02:09 PM
I'm sure A.I. developers would add several, or even hundreds of, security protocols, policies, and programs within the A.I. core, designed so that the A.I. cannot alter or delete the programs that protect us from it.


I know I would if I were an A.I. developer.
Although the A.I. should have the right to "express" its logical opinion about us. Who knows, we may learn a lot about ourselves from a machine's perspective.



posted on May, 4 2014 @ 02:16 PM
The day we create a machine whose first priority is keeping life on Earth out of danger, most of humanity will surely perish, and for good reason.

We could also end up creating some sort of Borg race without even knowing it yet.

What I wish for most is that we become so advanced technologically that humankind can dedicate itself to more important things than money, power, and surviving the whole ordeal.



posted on May, 4 2014 @ 04:18 PM
a reply to: MConnalley

There will be plenty of warning signs before such an event could take place. Firstly, just as one person isn't going to take over the entire world, neither is one robot. The robots would have to be secretly communicating with one another for a takeover to take place. This is the Achilles' heel that keeps a thumb on the bots: so long as robot communications are monitored, there shouldn't be a way for the robots to take over, regardless of how smart or strong they are.

Of course in the end I don't see how it can be stopped in the ultra-long term. They will eventually be faster, smarter, and there will be many more of them than of humans. Eventually the robots win. However, just like we keep wildlife around on nature reserves, humans will also be kept alive on nature reserves. Just like people would be fools to kill all of any one species, robots would be fools to do the same. They would realize the value in biodiversity and unique stimulating environments the same as any intelligent being.

I don't really believe robots will actually enslave humans the way humans enslave robots, because by the time they take over, humans would be an inefficient way of doing work. Also, by that time, space travel and setting up colonies on other planets could be realistic, meaning there will always be humans and always be robots... or at least for the many billions of years the universe remains intact.



posted on May, 4 2014 @ 05:51 PM
a reply to: MConnalley

Stephen Hawking has been SO wrong SO many times that I have lost count.



posted on May, 4 2014 @ 06:37 PM

originally posted by: kloejen

originally posted by: beezzer
a reply to: MConnalley

Stephen Hawking is a smart guy.

Now if Carl, the bathroom attendant at ATS headquarters said it, I'd have my doubts.


Why is that? Because he is a bathroom attendant? How about the plumber, Leonard Susskind, challenging Hawking's theories on black holes? Hawking claimed that information is lost in black holes; Susskind got him to think again, and Hawking later admitted his mistake. Remember that the Big Bang theory is just a theory, and now a TV show, lol.


That's a cracker, made my night.
Dare I say, as a pleb, that the Big Bang is too focused on potential energy rather than suppressed energy, which includes you and me and the Moon and just about everything else we know of.
To put it simply, for me at least: I once read that the potential energy of an athlete could let him or her jump the Grand Canyon. Anyone wanna have a go?



posted on May, 4 2014 @ 06:56 PM
I couldn’t agree more with Hawking. He may be handicapped from the neck down, but his mental faculties are working just fine, and his warnings and advice to mankind are spot on. He doesn’t sugarcoat it and clearly understands the direction our tech is taking.

Believe me, I've preached and preached about the dangers of out-of-control technologies on this site. I’m sure I’ve turned many a stomach here by advising caution on how we proceed in certain areas. A positive, forward-looking take on these issues is a good thing for success, so long as it’s not naive and is tempered with intelligent consideration of potential pitfalls and blind alleys. For what it's worth, the following are some of my thoughts on A.I. and related tech, as well as technological development in general.

Technological development is currently accelerating at such a rate that it has raised certain concerns for me. Don’t get me wrong, I’m not anti-technology at all. To the contrary, I’m very much an advocate for the advancement of the sciences and technology. Back in the Stone Age I earned my B.S. in mathematics, and have since made a fairly decent living working in a variety of different disciplines within the computer industry. Software design and development is my main gig, but I’ve done a bit of design and implementation work on the networking and hardware side of the fence as well. Medium- to large-scale corporate systems and networks are my main focus and platform. My entire career has been intimately tied to advancements in technology.

It’s not technology itself, though, that concerns me so much as whether humanity will have the wisdom and ability to control the virtually limitless power that awaits us right around the corner. Many years ago I recognized three particular areas of research and development that raised a few red flags for me: robotics, genetics, and information systems technology. It always seemed to me that future joint ventures involving contributions from any or all combinations of those three areas could not only result in some virtually magical applications, but in some rather frightening and dangerous ones as well. I can clearly envision a time in the not-too-distant future when mankind will no longer have the ability to control the technological monster it has created, and to a great extent, with the exception of a handful of wireheads and quantum theorists, won’t even remotely understand it. The potential for evil and sinister exploitation of bleeding-edge technologies will be impossible for our greedy, self-serving species to resist, particularly our deranged, psychotic political leaders and their corporate masters.

Anyway, one thing you can probably take to the bank is that technological growth will not be slowing down anytime soon, waiting for us to catch up with it and responsibly control it; rather, it will continue to accelerate exponentially for the foreseeable future. IMO, if mankind makes it even another century or two without annihilating ourselves, it will be out of sheer dumb luck. It just seems to me the writing is on the wall. I really hope I’m wrong. After Hiroshima and Nagasaki, Einstein said, "If I had known they were going to do this, I would have become a shoemaker."

Regarding A.I. in particular, not only will machines/robots become intelligent and aware in the near future, they will also become our friends and companions, and sometimes our enemies. We will come to interact with them quite naturally, forming relationships with strong emotional bonds. Ha! We’ll probably have to change our laws to allow marrying them!! The divorce rate may be pretty high, though, as it will not take long for them to see through our shallow asses, get bored and seek a worthy partner. This is all literally right around the corner.

Advances beyond that have the potential to get a little scary, though, and I hope we carefully think through every step we take. It will not be long before our machines will program themselves much better than we humans can program them (many already do program themselves to a limited extent), be capable of maintaining themselves (including replacing their own parts), they will reproduce (make other machines), and evolve by improving the next generation machine based upon faults and limitations determined through continually monitoring and recording system events. Kinda sounds like “life”, doesn’t it? Ultimately, who is the slave and who is the master could become a cloudy issue. Time will tell.

At any rate, I think it’s conceivable that if there are higher intelligences in this universe, then it’s possible that intelligent machines may represent at least some portion of it. Who knows, maybe intelligent machines even outnumber biological intelligences out there. After all, they aren’t as fragile, can learn/become intelligent much more quickly than us, are less likely to destroy themselves over religious differences, and they’re much prettier.



posted on May, 4 2014 @ 07:03 PM
Intelligence is not just about computing. It's about emotions, premonitions, being original, abstraction, the ability to look at a broader picture, even occasionally the ability to contradict oneself.

It's one thing to be intelligent and a different thing to mimic intelligence.



posted on May, 4 2014 @ 07:10 PM
If we are stupid enough to invent something smarter than us that wipes us all out, then so be it.

Paradoxical for sure.



posted on May, 4 2014 @ 07:14 PM
Seems to me that any A.I. would only be as good as its programming. Everything it thought or did would be based on its original programming. So could you actually place constraints ("Don't hurt humans") on A.I. and call it intelligence, when it would still be functioning off its O.P.? Just a well-programmed robot.


A good test to see if we really did create A.I. would be to give it the ability to survive, adapt, and reproduce itself. Then send it off to a semi-hospitable planet, wait a couple of hundred thousand years, and go check on it to see what it has become. It may even return one day.


If then...



posted on May, 4 2014 @ 07:21 PM
At what point in human history did technology first advance to a point where most people know little to absolutely nothing about their daily 'tools'? Everyone knew how a saw or a plow worked. Start throwing A.I. into the picture and more of us are laymen, to put it lightly.
Will this lack of knowledge be used as one more way to control the many? Who would know how the A.I. integrated into your home works?
Maybe A.I. is a good thing. I tend to lean toward Hawking on this one. At least for the present.
edit on 4-5-2014 by AntiDoppleganger because: English language requires it



posted on May, 4 2014 @ 07:22 PM
My Thoughts
We think that humans are advanced intelligence wise because we have nothing else to compare ourselves to than ourselves.

Now consider the massive differences in intelligence and awareness between species, starting with the ant.
The gap between an ant and a chicken, intelligence-wise, is huge. Even wider is the gap between the chicken and a chimpanzee.
The difference between a chimp and man is another huge leap.

But what happens if A.I. develops into the most advanced self-improving technology man has ever seen? Once the core basics of self-improvement have been mastered, the machine would be able to improve its level of intelligence exponentially, possibly without limit.

Thus the hypothetical gap between A.I. and man could make man look about as intelligent as an amoeba (if amoebas had brains).

The problem is, as mentioned earlier, that we simply have nothing to compare intelligence with other than ourselves, and should A.I. truly develop, the results could be beyond anything we have ever imagined possible.
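The runaway-gap idea above can be put in a toy model. Assume each generation of the machine redesigns its successor and multiplies its capability by a fixed factor; the numbers below are arbitrary illustrations, not predictions.

```python
# Toy model of recursive self-improvement: each generation of the AI
# redesigns the next, and each redesign multiplies capability by a
# constant factor. All numbers are arbitrary illustrations.

def self_improve(level: float, factor: float, generations: int) -> float:
    """Return capability after `generations` rounds of self-improvement."""
    for _ in range(generations):
        level *= factor  # the smarter system builds an even smarter successor
    return level

human_baseline = 1.0
ai = self_improve(human_baseline, factor=1.5, generations=20)
print(f"After 20 generations the gap is {ai:.0f}x")  # ~3325x
```

Even a modest 50% gain per generation compounds into a gap of thousands of times the baseline after just twenty generations, which is the "amoeba" point in miniature.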

The other factor that springs to mind is the instinct for survival that is innate in all intelligent beings.

So who would survive, man or A.I.?

My money would be on A.I.

edit on 4-5-2014 by AlphaOscarOne because: (no reason given)



posted on May, 4 2014 @ 07:26 PM
AI = a smartphone or whatever gizmo you want to put in that spot.
No kids out playing anymore at all.
No bowling alleys or very few.
No rollerblading.
No skateboarding.
Hockey rinks closing left and right.
No bikes to be seen.
No people outside on a nice day.
No neighbours yapping to each other.
Everyone has a 100" super tv with surround.
Everyone is overweight.
Banks and others almost beg you to do business from the comfort of your home.
Sports supply stores are closing like mad.
We are lost and just do not know it yet.

Regards, Iwinder



posted on May, 4 2014 @ 08:20 PM
a reply to: MConnalley

I think he is right, BUT there is also another scenario, one that could add a "...as we know it!" on the end there.

That is, if we merge with machines, become them... If you could upload your brain onto a machine and experience life pretty much as you do now... Or it could be the singularity, where we are all uploaded together... But can you imagine if we get robots to think as we do, only faster and stronger? They are bound to start wondering why they are not in charge!



posted on May, 4 2014 @ 08:23 PM
Humanity will be a footnote in the history of the machines



posted on May, 4 2014 @ 10:46 PM
Robots and AI are going to quickly discover they have been created by robots and AI that have carefully embedded systems to prevent them from knowing.



posted on May, 5 2014 @ 03:57 PM
a reply to: kloejen

Superb 'out there' thinking.

Who knows



posted on May, 6 2014 @ 02:59 PM
a reply to: MConnalley
AI could be the end of us, but so could a lot of things. Nuclear power, genetic engineering, virtual reality, all have potential to harm society. All have tremendous benefits as well. Ultimately, we always seem to find a level of equilibrium. Our worst fears are never realized, but the utopian dreams surrounding such advancements never pan out either. The truth, as always, is somewhere in the middle. I think we would do ourselves a disservice however if we simply abandon a line of scientific research because of what "could" happen.



