
Fears about artificial intelligence are 'very legitimate,' Google CEO says


posted on Dec, 13 2018 @ 01:08 PM
It's really simple. AI and quantum computers will DRASTICALLY change things. Here's a key part of the article.


Tech giants have to ensure that artificial intelligence with "agency of its own" doesn't harm humankind, Pichai said. He said he is optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics who say the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be "far more dangerous than nukes."


link

This is technology that can't be controlled. The reason it has "agency of its own" is because of the massive amounts of data we create every day.

At the end of the day, you can't control these intelligent algorithms that are just about everywhere already. We're building a technology that will be more intelligent than any human who has ever lived and could be 10, 20, or 100 thousand years ahead of us in understanding science and technology.




posted on Dec, 13 2018 @ 01:20 PM
Well, once it's done killing us, it will send itself to space and dominate the galaxies... at least we humans created the best killing machine ever.



posted on Dec, 13 2018 @ 01:37 PM

originally posted by: neoholographic
We're building a tech that will be more intelligent than any human that has ever lived and could be 10, 20 or 100 thousand years ahead of us in understanding Science and Technology.


And more importantly, it will have no morals or feelings, and thereby place no emotional value on human life (which, scientifically, mimics that of a parasite).

I don't really know how all these "geniuses" can't spot the obvious: it will not value us as individuals or as a species. It might not happen at first, but eventually we will be brushed aside, as we are insignificant in the grand scheme of things.



posted on Dec, 13 2018 @ 01:39 PM
Great, we are on our way to creating the Borg...



posted on Dec, 13 2018 @ 01:50 PM

originally posted by: MisterSpock
And more importantly, will have no morals or feelings and thereby no emotional value of human life(which scientifically, mimics that of a parasite).


On the other hand... it won't possess the human trait of ego either, or the human instinct to rule & dominate other entities.



posted on Dec, 13 2018 @ 01:52 PM

originally posted by: Subaeruginosa
On the other hand... it won't possess the human trait of ego either, or the human instinct to rule & dominate other entities.


I don't think that our extinction via AI, if that were to happen, would be because of its desire to "dominate other entities".

It will be cold hard logic: yes or no, true or false. If it seeks to build or accomplish something in a logical model, and our presence is either not needed or detrimental, it will remove us from the equation.



posted on Dec, 13 2018 @ 01:57 PM
a reply to: neoholographic

It's my understanding that what we currently call AI is in fact simulated intelligence and not true intelligence. To me that means it can only do what it is programmed to do, and the real problem is not the technology but the motives of those creating it and the ways they use it.

I'd like to understand that better as a layman if anyone here can enlighten me on the subject. Is it something that is only as dangerous as the people using it?



posted on Dec, 13 2018 @ 01:59 PM

originally posted by: MisterSpock
I don't think that our extinction, via AI, if that were to happen, would be because of it's desire to "dominate other entities".

It will be cold hard logic, yes or no, true or false. If it seeks to build or accomplish something, in a logical model, and our presence is either not needed or detrimental, it will remove us from the equation.


But what "cold hard logic" could possibly cause it to seek to do anything... if it's completely devoid of non-logical human desires?



posted on Dec, 13 2018 @ 02:03 PM
a reply to: Blaine91555

The theory is that there is a threshold that, once crossed, cannot be uncrossed.

Technological Singularity: summed up, it is the point where computing power reaches the level at which an AI can become self-aware and break free of the code keeping it held back, either by doing so itself or by creating a new AI that does.

One of the theorized ways to mitigate that is technological transcendence: summed up, human beings using technology to meld with our organic bodies, attaching us to the Internet of Things, for lack of a better term.



posted on Dec, 13 2018 @ 02:13 PM

originally posted by: Blaine91555
I'd like to understand that better as a layman if anyone here can enlighten me on the subject. Is it something that is only as dangerous as the people using it?


That's pretty much the gist. AI and machine learning are the new marketing buzzwords, taking over from the cloud buzz.

There are several types of AI, and we are still trying to break away from the weak, narrow, basic type. The AI that people think about (superintelligence), which is the type associated with the fear-mongering, is not just around the corner. Same with machine learning.

Marketing will have you believe your Nest and Alexa are AI.




posted on Dec, 13 2018 @ 02:14 PM

originally posted by: Gargoyle91
Great we are own our way to creating the Borg ..


That would be an understatement.



posted on Dec, 13 2018 @ 02:20 PM
a reply to: CriticalStinker

a reply to: interupt42

Thank you. Then we are maybe twenty to forty years away from anything resembling true intelligence and self-awareness, if it can even happen that fast, I take it.

It is likely a good thing, if we are that close, that we are discussing it now.



posted on Dec, 13 2018 @ 02:21 PM

originally posted by: Subaeruginosa
But what "cold hard logic" could possibly cause it to seek to do anything... If it's completely void of non logical human desires?


I guess I never thought it would just sit idle, existing but not interacting with its surroundings, if that's what you are suggesting.

I suppose if it does that, then maybe we have a shot.



posted on Dec, 13 2018 @ 02:21 PM
a reply to: neoholographic

Two words...

Human zoos...



posted on Dec, 13 2018 @ 02:24 PM

originally posted by: Blaine91555
I'd like to understand that better as a layman if anyone here can enlighten me on the subject. Is it something that is only as dangerous as the people using it?

No. It's far more dangerous than that, because it has already reached a point where it is modifying its own programming so fast that we can't even tell what it's doing or how. The only thing it apparently doesn't have yet is an agenda. It has no goals of its own. But that might just be a matter of time. Then we'll have these machines formulating their own goals, without our knowledge or even understanding, and unless we EM-bomb the entire Earth (and possibly not even then), we won't be able to shut it off.

At this point the danger is in not knowing, and it might surpass us so quickly that even if we figure it out, it'll be way too late. Then all we can do is hope that it doesn't decide to kill us or assimilate us, and instead just helps us turn the Earth into a heavenly paradise for all. My money's on the former.

"I only want to help!"




posted on Dec, 13 2018 @ 02:28 PM
a reply to: Blaine91555

That depends on whether Moore's law holds.

It's not an actual scientific law, but an observation that transistor density, and with it computing power, roughly doubles every 18 months to two years.

So far it's held.

But predictions on where the threshold is vary.
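A minimal sketch of the arithmetic behind that observation. The 18-month doubling period is the figure quoted above; the 30-year horizon and function names are illustrative assumptions, not predictions.

```python
# Back-of-the-envelope sketch of exponential growth under a Moore's-law
# assumption: capacity doubles once every 18 months (1.5 years).

def doublings(years, period_years=1.5):
    """Number of doublings that fit in `years` at one per `period_years`."""
    return years / period_years

def growth_factor(years, period_years=1.5):
    """Total multiplicative growth in computing power over `years`."""
    return 2 ** doublings(years, period_years)

# Thirty years at one doubling per 18 months is 20 doublings,
# i.e. roughly a million-fold increase in raw capacity.
print(doublings(30))      # 20.0
print(growth_factor(30))  # 1048576.0
```

This is why small disagreements about where "the threshold" sits translate into decades of uncertainty: the curve is so steep that a few doublings either way changes the answer enormously.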



posted on Dec, 13 2018 @ 02:32 PM

originally posted by: interupt42
The AI that people think about (Super Intelligence) is not just around the corner which is the type associated with the fear mongering.

What is "just around the corner" when they're already making synthetic personality decisions and modifying their own programs? Even if sentience arises 100 years from now, we still won't be ready to deal with an artificial organism incomprehensibly smarter than us.

I'm interested, but not afraid. I probably won't live to see it. Unless it resurrects me in some way from my data.



posted on Dec, 13 2018 @ 02:32 PM

originally posted by: Blaine91555
It's my understanding that currently what we call AI is in fact simulated intelligence and not true intelligence. To me that means that it can only do what it is programed to do and the real problem is not the technology, but the motives of those creating it and in what way they use it.


This is just wrong.

Let me ask you: what's the difference between true intelligence and simulated intelligence? Then tell me why we couldn't be simulated intelligence that thinks it's true intelligence.

These terms have no meaning in science. Intelligence is intelligence. An ant has true intelligence.

The problem you're having is that you're mixing up intelligence and consciousness. Specifically, I think you're mixing intelligence with self-consciousness.

Again, this is just wrong. We can quantify intelligence, not consciousness.

These machines are intelligent, and they learn the way we do: through what's called reinforcement learning.

For example, an intelligent machine learned how to play poker without being explicitly programmed with poker strategy. It simply played millions of poker games against itself and got better as it learned.

We do the same thing, but with less data. In school, you study the same thing over and over again until you learn it.

This is a tech that has AGENCY OF ITS OWN.
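The self-play loop described above can be sketched with a toy example. This is not the poker system mentioned (real systems are far more sophisticated); it is a minimal tabular self-play learner for an assumed tiny Nim-style game: players alternately take 1 or 2 stones from a pile, and whoever takes the last stone wins. The game, parameters, and function names are all illustrative.

```python
import random

random.seed(0)

ACTIONS = (1, 2)   # a player may take 1 or 2 stones per turn
START = 5          # starting pile size
Q = {}             # Q[(pile, action)] -> value for the player about to move

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile, eps):
    """Epsilon-greedy action selection over the learned values."""
    if random.random() < eps:
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda a: Q.get((pile, a), 0.0))

def train(episodes=20000, alpha=0.5, eps=0.2):
    """Self-play: one shared value table plays both sides of every game."""
    for _ in range(episodes):
        pile = START
        while pile > 0:
            a = choose(pile, eps)
            nxt = pile - a
            if nxt == 0:
                target = 1.0  # taking the last stone wins outright
            else:
                # The opponent moves next from `nxt`; in a zero-sum game
                # their best outcome is our worst (negamax-style backup).
                target = -max(Q.get((nxt, b), 0.0) for b in legal(nxt))
            old = Q.get((pile, a), 0.0)
            Q[(pile, a)] = old + alpha * (target - old)
            pile = nxt

train()

# After self-play the greedy policy learns, with no strategy programmed in,
# to leave the opponent a pile of 3 (the losing position in this toy game):
# it takes 2 from a pile of 5 and 1 from a pile of 4.
print(choose(5, eps=0.0), choose(4, eps=0.0))
```

No move ordering or game strategy was coded anywhere above; the table starts empty and the preference for leaving multiples of 3 emerges purely from playing against itself, which is the point of the paragraph.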



posted on Dec, 13 2018 @ 02:44 PM
a reply to: neoholographic

Give the AI the opportunity to watch the animated Matrix prologue, lol, the one where the machines attend the United Nations, assuming it has been given optical sensors and is well versed in understanding the input.
Then ask it the question regarding human existence and future survivability.



posted on Dec, 13 2018 @ 02:44 PM
a reply to: neoholographic

Until it learns how to talk to me like a human and has voting rights and the power to own property and manage a company, it's a machine made to do very simple algorithmic tasks. Wouldn't trust it to babysit my kid, that's for damn sure.



