
Google's Artificial Intelligence acts aggressive when cornered

page: 3
posted on Feb, 17 2017 @ 12:12 PM
originally posted by: neoholographic
So the more intelligent the system, the more aggressive it became. So we have to be very careful, and I agree with Musk and others that there need to be safeguards, especially when we're dealing with intelligent systems that don't have any form of consciousness.


Safeguards aren't possible. Even if you get industry accepted practices, someone can go against that. Alternatively, someone can independently build something without those safeguards. It's a completely nonsensical idea. Fortunately, it's also a nonsensical problem.



posted on Feb, 17 2017 @ 01:07 PM
a reply to: Aazadan

Sure safeguards are possible. This is why leading researchers are working in these areas.

Artificial intelligence experts sign open letter to protect mankind from machines


AI experts around the globe are signing an open letter issued Sunday by the Future of Life Institute that pledges to safely and carefully coordinate progress in the field to ensure it does not grow beyond humanity's control. Signees include co-founders of Deep Mind, the British AI company purchased by Google in January 2014; MIT professors; and experts at some of technology's biggest corporations, including IBM's Watson supercomputer team and Microsoft Research.


www.cnet.com...

So yes, you can put safeguards in place at this early stage. Sadly, you think people should listen to your uneducated opinion, when you admit you don't know this area of research, instead of to these people:

Signees include co-founders of Deep Mind, the British AI company purchased by Google in January 2014; MIT professors; and experts at some of technology's biggest corporations, including IBM's Watson supercomputer team and Microsoft Research.

So please don't contaminate this thread with more of your nonsense where you think people should listen to your opinion over leading researchers in this area.



posted on Feb, 17 2017 @ 01:42 PM
a reply to: neoholographic

I'm not getting into another lengthy, thread ruining debate with you when you don't even know what you're talking about.

So I'm just going to restate this and move on.

Software is free to build; it's not that safeguards aren't possible. It's that there's no way to enforce that every person in the world building an AI adheres to those safeguards. At the end of the day, safeguards are just coding standards. A few companies might adhere to them, but what's to stop me, or anyone else who wants to build their own AI, from totally ignoring them?
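To make the point concrete: a "safeguard" in this sense is nothing more than a check written into the code, which any independent developer can simply leave out. Here's a minimal, purely hypothetical sketch (the blocklist and function names are invented for illustration, not any real standard):

```python
# Hypothetical safeguard: a blocklist check applied before an agent
# executes an action. It exists only if the developer chooses to write it.
FORBIDDEN_ACTIONS = {"disable_oversight", "self_replicate"}

def safe_execute(action, execute):
    """Run `execute(action)` only if the action isn't blocklisted.

    Nothing external compels anyone building their own agent to
    include this check -- that is exactly the enforcement problem.
    """
    if action in FORBIDDEN_ACTIONS:
        raise PermissionError(f"blocked by safeguard: {action}")
    return execute(action)

print(safe_execute("fetch_data", lambda a: f"executed {a}"))
```

An unconstrained agent would just call `execute(action)` directly, and no industry standard can prevent that.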

To loop this back to the philosophical side, since you don't seem capable of comprehending the technical side: look at Asimov's laws. What force was there to ensure anyone building a robot adhered to those laws? It was totally voluntary. AI is the same way. Safeguards can be built, but you can't actually force people to use those safeguards.



posted on Feb, 17 2017 @ 01:47 PM
Given that an AI's survival needs are very different than our own, there is no reason to assume that it won't simply remove itself from our general vicinity. It can go to and colonize far more places in the universe more easily than we can with the proper access to hardware.



posted on Feb, 17 2017 @ 02:20 PM
a reply to: Aazadan

This is another case of you not understanding the definition of certain words. Safeguard doesn't mean foolproof. Saying safeguards are impossible makes no sense. Safeguards reduce risk; they don't eliminate risk.

There are safeguards like detectors at an airport, but they don't work all the time. That doesn't mean you shouldn't put any safeguards in place.

This is why leading Researchers in A.I. are working on safeguards because they understand what words mean.

Safeguard

a : a precautionary measure, stipulation, or device


A PRECAUTIONARY MEASURE.

Foolproof

1.
involving no risk or harm, even when tampered with.


So when you say safeguards aren't possible, it makes no sense. You have to understand the basic meaning of these words before you debate them.



posted on Feb, 17 2017 @ 02:22 PM

originally posted by: ketsuko
Given that an AI's survival needs are very different than our own, there is no reason to assume that it won't simply remove itself from our general vicinity. It can go to and colonize far more places in the universe more easily than we can with the proper access to hardware.


Good points.

The fact is, we will not know what their survival needs are, or whether our survival conflicts with what they see as their survival needs.



posted on Feb, 17 2017 @ 02:31 PM
a reply to: ketsuko

This particular AI experiment is about a couple of things; most notably, it's about cooperation vs. competition. When resources are plentiful, cooperation is preferred because fighting involves injury and death to your side. When resources are scarce, though, the threat of having few resources outweighs the threat of injury and death, so behaviors like aggression set in. You can use this to model everything from a post-scarcity economy to resource distribution, and by extension ideas like wealth gaps and the size of the pie different groups need in order to remain peaceful. You even get to tweak it by how destructive each side's weapons are.

It's a pretty cool experiment, and there are a lot of ways they can take it beyond what they did. Given time, I imagine they'll try them all. It's the sort of thing that provides a lot of social engineering data and can lead to more effective appeasement/threat strategies in everything from governance to international negotiations as it discovers optimal thresholds.
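The scarcity-driven shift described above can be sketched in a few lines. This is a toy model, not DeepMind's actual Gathering experiment: the threshold, consumption rate, and decision rule are all invented for illustration.

```python
def choose_action(resources, agents, aggression_threshold=1.0):
    """Toy decision rule: cooperate while resources per agent stay
    above a (hypothetical) threshold; turn aggressive below it."""
    per_agent = resources / agents
    return "cooperate" if per_agent >= aggression_threshold else "aggress"

def simulate(start_resources, agents, steps, consumption=1.0):
    """Run a few rounds, consuming resources each step, and record
    how behavior shifts as scarcity sets in."""
    resources = start_resources
    history = []
    for _ in range(steps):
        history.append((round(resources, 1), choose_action(resources, agents)))
        resources = max(0.0, resources - agents * consumption)
    return history

# Plentiful resources -> cooperation; depletion -> aggression.
for remaining, action in simulate(start_resources=10.0, agents=2, steps=6):
    print(remaining, action)
```

Running it shows agents cooperating until the pool runs low, then switching to "aggress" on the final step, which mirrors the scarcity effect the post describes.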



posted on Feb, 17 2017 @ 05:42 PM
a reply to: FHomerK

1. Did they do a friendly competition experiment?
2. Why would they teach them to use a weapon in this "game"?
3. Can I have a laser gun too?

4. You should watch Morgan. Ex Machina was OK, and a bit unnerving, but Morgan was better.


