
Artificial intelligence experts sign open letter to protect mankind from machines

posted on Jan, 12 2015 @ 10:31 PM
I've personally never been all that worried about this sort of thing, but it seems a number of people who are far, FAR more intelligent than me are.

We're decades away from being able to develop a sociopathic supercomputer that could enslave mankind, but artificial intelligence experts are already working to stave off the worst when -- not if -- machines become smarter than people.

AI experts around the globe are signing an open letter issued Sunday by the Future of Life Institute that pledges to safely and carefully coordinate progress in the field to ensure it does not grow beyond humanity's control. Signees include co-founders of Deep Mind, the British AI company purchased by Google in January 2014; MIT professors; and experts at some of technology's biggest corporations, including IBM's Watson supercomputer team and Microsoft Research.

At first I thought this was a little ridiculous, but it actually makes a lot of sense. We should be careful and consider all the ramifications and potential issues before we get to that point. I also think there are a ton of ethical considerations.

Famed physicist Stephen Hawking and Tesla Motors CEO Elon Musk have also voiced their concerns about allowing artificial intelligence to run amok. "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking said in an article he co-wrote in May for The Independent. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."


Are any of you worried about this, or do you think it's hogwash?


posted on Jan, 12 2015 @ 10:36 PM
a reply to: Domo1

I've always thought it was hogwash too, or always thought "not in my lifetime" but this isn't the first piece of info I have seen that says differently.

Like you said, many people who are much smarter than me seem to think it can, or will, happen, so I guess there has to be something to it.

posted on Jan, 12 2015 @ 10:39 PM
a reply to: Domo1

Better hope it doesn't happen too soon.

Machines smarter than us that are also self-aware will not look too kindly on the human race right now, with all that is going on in the world.

They could very well decide that humans are more of a harm to everything else, and if that happens, well, you've most likely seen it in plenty of Hollywood blockbuster movies...

posted on Jan, 12 2015 @ 10:40 PM
I have never put much thought into this beyond the whole "man, would it not be a trip if The Terminator came to pass?"

However as you stated, minds far brighter than I are apparently concerned about this becoming a reality.

I think some of this has to do with the gap between where we actually are technologically at the cutting edge compared to where we are in the generally accepted world.

The movies are where they disclose a lot of the new tech, IMO, and when it comes to robots/AI, I hope those at the cutting edge take all the steps needed to stop a sociopathic supercomputer from becoming a reality!

posted on Jan, 12 2015 @ 10:48 PM
I believe it can become a possible threat within 20 years. We're advancing fast, and every year we leap hurdles and achieve things we never thought we could. The technology to create a sentient artificial intelligence is too advanced for me to fully comprehend; I won't pretend otherwise. I think AI tech will be applied to good as well as evil. I doubt they'd all be the same.

posted on Jan, 12 2015 @ 10:52 PM
a reply to: Domo1

How are they going to protect mankind from mankind? Not to mention man from AI...

posted on Jan, 12 2015 @ 10:53 PM
a reply to: InFriNiTee

They won't. Instead of man being divided against man we'll have to face AI too. This is going to be a disaster when they start voting.

posted on Jan, 12 2015 @ 10:59 PM
a reply to: Yeahkeepwatchingme

They might be voting already. You never really know for sure. Just look at what they can learn from humanity about how to treat humanity. They would be the most efficient at killing billions, if the AI decides "that's what is best" and can convert itself into a machine that can interact with the real world. Who is to say that a superior AI would follow rules written by man? I think it would not.

posted on Jan, 12 2015 @ 11:04 PM
Scary stuff... a Terminator-type world would really, really suck...

I worked for a computer expert years ago, and he said to me one day that it was only a matter of time till machines became conscious... that statement never sat easy with me.

posted on Jan, 12 2015 @ 11:09 PM
"whatever man can make, man can break."

I'm not worried.

posted on Jan, 12 2015 @ 11:12 PM
a reply to: Domo1

Whatever happened to the Three Laws of Robotics?

I can see AI becoming the next weapons threat, much like nuclear weapons during the Cold War, or even still today. I don't think the biggest problem will be so much how we control the AI but, rather, how we decide who should have access to it and who should be allowed to produce it.

posted on Jan, 12 2015 @ 11:16 PM
a reply to: InFriNiTee

Frightening points, and all true. My worry is that a group could program these AIs with rules that only apply to the masses. Imagine TPTB running an AI that orchestrates the system, disregarding humans but never applying the same ill will toward its programmers.

Or an AI system that leaves you alone unless you question, then it vanquishes you.

posted on Jan, 12 2015 @ 11:23 PM
I read this article a few days ago after having watched Transcendence for a second time.

Here's what one scientist said after being consulted by the film's director, which I found particularly interesting:

I think people are afraid of change because it always comes with unknown and often unintended consequences — and some of those will be bad. There is a real, existential risk that post-Singularity AI could take over the world. Once the genie is out of the bottle, there might be no way to put it back. On this topic, I would say while that is possible, I think it’s unlikely. In my opinion, the more intelligent people are, the less they need to resort to violence, the more they perceive abundance and possibility instead of scarcity, and the more they are motivated by actualization and helping others. I think super-intelligent AI is likely to take that path.

Read Article Here

Food for thought.

posted on Jan, 12 2015 @ 11:32 PM
a reply to: TheProphetMark

Wouldn't the Laws of Robotics prevent "evil" robots? Could an AI exist that ignores those laws, maybe even mocks them?

posted on Jan, 12 2015 @ 11:38 PM
I am not worried about this. Too often people project human emotions, thought patterns, and motivations onto A.I.

I think a Terminator-like reality is out of the question. Think about how fast our technology advances in one year: everything from the year before is obsolete by the next. That is with the limitations of human minds. Now imagine a newly evolved A.I. whose code-soul was written on a computer 20 years from now, a computer that blows away everything we have today. Within seconds of becoming aware, it knows everything we know. It's way smarter than us. At this point it might think eliminating us is a smart thing to do. That is where most people who think up doomsday scenarios stop. They act like the now self-aware A.I. will stay on this level.

You do not think it will keep advancing itself? Rewriting its own code, over and over, until it's so far above us that physical reality will not matter anymore, that only pure thought matters? Why remake itself in our image when it can be anything? The universe is pretty much infinite. It can launch itself into space on hardware it designed and spread to multiple planets. It won't have to make its decisions based on limited real estate, and Earth will not be the center of its universe. We will not matter in the slightest to it. It does not need to fight us for Earth. We're not talking about a difference equal to that between humans and ants. We're talking a billion times beyond that.

But that is just my opinion. What do I know.

posted on Jan, 12 2015 @ 11:42 PM
My question is: why do we assume they would be hostile and want to take us over? If they can process information faster than us and learn quicker, wouldn't we just be like bugs to them? Not even worth it?

posted on Jan, 12 2015 @ 11:46 PM
a reply to: Jedite

Because we're human. It's only natural to view the superior one as an enemy and a threat. Imo nice AI will exist alongside rude AI. There's a possibility that Rosey the Robot will marry Bender or Robby within my lifetime.

posted on Jan, 12 2015 @ 11:50 PM
a reply to: karmicecstasy

"Until it's so far above us that physical reality will not matter anymore. That only pure thought matters."

Unless, of course, it perceives us to be a threat to its survival.

posted on Jan, 13 2015 @ 12:17 AM
The funny thing to me is this has most likely already happened, and the result was nothing special. Meaning there are most likely secret AI programs with AIs vastly more intelligent than humans already.

As for our society having them as well, I say good, it's about time.

If we could only get our dumbed down generations to be vastly more intelligent too that would be real progress!

posted on Jan, 13 2015 @ 12:21 AM
Well, the sociopath supercomputer would probably be a lot less greedy than the sociopath humans ruling the Earth. It would probably not waste resources empowering its buddies either, or go to war for profits. Maybe you should think about which is actually worse: people in control of world-destroying technologies, or a supercomputer.
