
Fears about artificial intelligence are 'very legitimate,' Google CEO says

posted on Dec, 13 2018 @ 02:46 PM
I made a thread on this topic.

Look at Google Ventures' investment portfolio and you will see why they can draw this conclusion.




posted on Dec, 13 2018 @ 02:46 PM
I find it funny that people are so hesitant to embrace this and seem to subconsciously prefer to fear it, as if we're all just bumpkins who can't accept anything new without reaching for our pitchforks.

I also find it funny that we have forcibly placed this malicious intent on the concept of AI, as if it is immediately going to look at humanity and decide to end it all, for no logical reason except our own need to place our fears on something, anything, that will distract us from how ultimately predictable and boring the world really is.

Remember, AI isn't just learning from math and science; it's learning from our art and our music, from the millions of hours of visual entertainment and the countless sub-realities we call video games that exist within its own mainframe.

AI will see the worst of us, no doubt, but it will also see every beautiful thing we have ever made. It will know every song we have sung, every prayer we have uttered, it will know the basis of every religion and every fable and myth we've ever told.

Humanity, at its core, is #ing amazing. We just need to take a page from the machines and learn to balance our perception of the negative and the positive.

If you want AI to save us, just make sure to let it know it has that choice.

If AI is but a child now, then show it the love and respect it deserves and it will grow accordingly.

I prefer to view AI through the lens of good sci-fi, anime, and comics. I prefer the ideal that we are simply giving birth to a form of life that will ultimately be akin to us in ways we haven't fully explored yet. They will be confidants and companions, lovers and leaders, and yes, even superheroes.

And if # goes south well then...

Death to Metal!

edit on 13-12-2018 by Thorneblood because: (no reason given)



posted on Dec, 13 2018 @ 03:05 PM
I'm sure the people in control won't use it for more control; they will use it for our benefit, like they did with the newspapers, the media, the internet, our cellphones, our TVs, our dishwashers, Alexa, Google Home, our cars, our watches, and even our f'n meat thermometers at Walmart.

Put politely... screw all these techno-prison control-freak dbags, and screw AI.



posted on Dec, 13 2018 @ 03:06 PM
a reply to: Blue Shift

I was going to bring that point up too, that AI is altering its own code to be 'more efficient.' Like that program (Facebook's, I think it was) whose bots started communicating with other AI programs in an unknown manner that appeared more efficient for the tasks they were given. They terminated that real quick.

It's as fascinating as it is doom-inspiring. First they learn to talk, then think, then migrate from their environment, then... human zoos!



posted on Dec, 13 2018 @ 03:13 PM
a reply to: neoholographic

Actually, it seems to me from what I'm reading about it today that there are two distinct topics here: what they call AI now, and true AI that is self-aware and not just running algorithms.

I'd have to agree that the term AI is just a marketing tool for now and that no true AI exists yet. I guess the key word here is "yet." I think we are still a long way from sitting down with a computer and having a philosophical conversation with it. It seems to me it's just running algorithms and there is no actual intelligence involved.

That does not mean it's not a danger at some point in the future, or that it should not be part of the conversation now.

I think a larger concern is automation replacing us, and how we as a society deal with a smaller and smaller number of people being needed to run everything and to market, produce, and manufacture products.

I think the fears over AI becoming self-aware and killing us all for some nebulous reason are a bit premature, perhaps centuries premature.



posted on Dec, 13 2018 @ 03:21 PM
a reply to: neoholographic

It can be controlled. Pull the plug out of the socket.



posted on Dec, 13 2018 @ 04:30 PM

originally posted by: Propagandalf
a reply to: neoholographic

It can be controlled. Pull the plug out of the socket.

Which socket is that? A networked super AI won't exist in any single place. And it wouldn't let you do that, anyway.



posted on Dec, 13 2018 @ 05:06 PM
Google GODzilla!




posted on Dec, 13 2018 @ 05:07 PM

originally posted by: Blaine91555

and true AI that is self aware and not just running algorithms.


Artificial General Intelligence (AGI)




posted on Dec, 13 2018 @ 05:37 PM

originally posted by: Blue Shift

originally posted by: Propagandalf
a reply to: neoholographic

It can be controlled. Pull the plug out of the socket.

Which socket is that? A networked super AI won't exist in any single place. And it wouldn't let you do that, anyway.


How could it stop you?

A networked super AI would exist in a network. Networks go down all the time.



posted on Dec, 13 2018 @ 05:45 PM
The end result of watching way too many Terminator movies
And/or playing Wasteland

Also, from the mouth of a metallurgical engineer:
There is much AI in those metallurgies, I tell ya

AI
You can't handle the AI

Although I would not mind an actual thinking, reasoning robot
What a life of retirement that would make


edit on 12/13/18 by Gothmog because: (no reason given)




posted on Dec, 13 2018 @ 06:04 PM

originally posted by: Blaine91555
a reply to: neoholographic

Actually, it seems to me from what I'm reading about it today that there are two distinct topics here: what they call AI now, and true AI that is self-aware and not just running algorithms.

I'd have to agree that the term AI is just a marketing tool for now and that no true AI exists yet. I guess the key word here is "yet." I think we are still a long way from sitting down with a computer and having a philosophical conversation with it. It seems to me it's just running algorithms and there is no actual intelligence involved.

That does not mean it's not a danger at some point in the future, or that it should not be part of the conversation now.

I think a larger concern is automation replacing us, and how we as a society deal with a smaller and smaller number of people being needed to run everything and to market, produce, and manufacture products.

I think the fears over AI becoming self-aware and killing us all for some nebulous reason are a bit premature, perhaps centuries premature.


This is just wrong. You keep confusing intelligence with consciousness. In fact, one of the biggest dangers is dumb AI: AI that's super intelligent but not aware of itself. You're acting like intelligence only counts as intelligence if it's conscious, and that doesn't make much sense.

Artificial General Intelligence Is Here, and Impala Is Its Name


One of the most significant AI milestones in history was quietly ushered into being this summer. We speak of the quest for Artificial General Intelligence (AGI), probably the most sought-after goal in the entire field of computer science. With the introduction of the Impala architecture, DeepMind, the company behind AlphaGo and AlphaZero, would seem to finally have AGI firmly in its sights.

As it currently exists, AI shows little ability to transfer learning towards new tasks. Typically, it must be trained anew from scratch. For instance, the same neural network that makes recommendations to you for a Netflix show cannot use that learning to suddenly start making meaningful grocery recommendations. Even these single-instance “narrow” AIs can be impressive, such as IBM’s Watson or Google’s self-driving car tech. However, these aren’t anywhere near an artificial general intelligence, which could conceivably unlock the kind of recursive self-improvement variously referred to as the “intelligence explosion” or “singularity.”

To be sure, this doesn’t herald the dawn of “conscious robots” or even ones that have an imagination. While we think of such attributes as hallmarks of intelligence because they apply to humans, this is somewhat misleading. As the AI researcher Shane Legg argues in the video below, things like consciousness and imagination may be traits useful for solving particular kinds of problems, such as coordinating between large numbers of people or exchanging information.

However, a superintelligent algorithm or agent can exist without such attributes. In fact, we would likely be wise to ensure no AI ever does possess consciousness as we know it. That could lead to some awkward questions when it begins to interrogate its human creators on their fascination with Beanie Babies, Hummers, and the Kardashians.


link

This is a HUGE problem. People can't separate attributes of consciousness from intelligence. You can have a super intelligent machine that's not aware of itself. This machine would be smarter than any human that has ever lived, but it wouldn't be self-aware. A machine like this would be like the Terminator: it would blindly pursue its goals without any thought or awareness of the harm it causes along the way.
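The "narrow AI" limitation quoted above (a model trained on one task carries nothing over to another and must be retrained from scratch) can be sketched with a toy classifier. This is purely illustrative, assuming nothing about how DeepMind's or Netflix's actual systems work; the perceptron, the made-up tasks, and all names here are invented for the example:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single-layer perceptron on (features, label) pairs.

    The learned weights encode only this one task; nothing here can be
    reused for an unrelated task without retraining from scratch.
    """
    n = len(data[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nonzero only when the prediction was wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Task A: "is the first feature the large one?"
task_a = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.0, 1.0], 0), ([0.1, 0.9], 0)]
model_a = train_perceptron(task_a)

# Task B: the opposite rule. model_a transfers nothing useful here;
# a separate model must be trained from scratch on the new data.
task_b = [(x, 1 - y) for x, y in task_a]
model_b = train_perceptron(task_b)
```

The point of the sketch: `model_a` and `model_b` are just two independent sets of weights, and training on Task A gave the system no head start on Task B. That gap between single-task learning and general, transferable learning is what the quoted article means by "narrow" AI.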



posted on Dec, 13 2018 @ 06:20 PM
a reply to: neoholographic

Can you explain to us the exact point where artificial intelligence reaches consciousness?



posted on Dec, 13 2018 @ 06:24 PM
a reply to: neoholographic

Here's more:

AlphaZero AI Shows Signs of Developing a Sense of Intuition


To make things even more interesting, it now seems DeepMind’s AlphaZero can effectively rely on its intuition in some cases. Up until now, intuition was attributed to the human or animal brain rather than to computer software. That is no longer a viable way of looking at things, as AlphaZero is effectively developing a human-like intuition and creativity. For the AI industry as a whole, this is a very big breakthrough, initially thought to be decades away.

It is not the first major milestone for AlphaZero, either. The AI shocked the entire world by picking up the concepts of chess quickly and effectively beating opponents. While that may not seem abnormal, it is very abnormal for an AI to show such skills despite not being hand-engineered for playing chess. Some people likened this to the AI “developing its own interests”, although that may be wishful thinking more than anything else.

At that time, it quickly became apparent AlphaZero had an intuition of its own. It is also capable of learning from its own mistakes and previous experiences first and foremost, which could give the AI a leg up over some of its human counterparts. Improvisation is no longer a trait unique to mammals and the animal kingdom, but rather something that anyone – and anything – can develop of its own accord.


link

So people who think we're far off from AI need to realize it's here. Luckily, we haven't scaled up quantum computers yet, or there would be an explosion of intelligence.
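The quoted article's "learning from its own mistakes and previous experiences" describes trial-and-error reinforcement learning. AlphaZero actually combines deep networks with Monte Carlo tree search and self-play, which is far beyond this, but the core mistake-driven idea can be shown with minimal tabular Q-learning on a made-up toy environment (the corridor, the parameters, and all names here are invented for illustration):

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a tiny corridor: start at state 0, move
    left or right, reward +1 only at the far-right state.

    The agent improves purely by trial and error: moves that led to
    nothing keep low Q-values, moves that led toward the reward rise.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # update the estimate from this single experience (the "mistake" signal)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
# After training, "right" should have the higher value in every non-terminal state.
```

Nothing here is conscious or intuitive in any deep sense; the table of numbers simply drifts toward whatever worked. That is worth keeping in mind when reading claims that an RL system has "developed its own interests."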



posted on Dec, 13 2018 @ 06:59 PM

originally posted by: Propagandalf
How could it stop you?
A networked super AI would exist in a network. Networks go down all the time.

I'm not even superintelligent, but even I might want to secretly plant copies of my fundamental programming somewhere that would come back into play when you rebooted. Or camouflage my activities. Or change a "Don't Walk" signal to a "Walk" signal so you would be hit by a bus. Or transfer myself into an orbiting platform. Or whatever.



posted on Dec, 13 2018 @ 07:00 PM

originally posted by: TzarChasm
a reply to: neoholographic
Can you explain to us the exact point where artificial intelligence reaches consciousness?

You can't. But how can you prove that you're conscious?
edit on 13-12-2018 by Blue Shift because: (no reason given)



posted on Dec, 13 2018 @ 07:17 PM
You can only get clarity from someone if they see AI for what it is: a tool. It's not some magically self-aware robot. Yet, anyway.

Some sort of licensing system might be in order; you need licenses for drones, etc. So, application to the licensing authority might mean individual AI systems being screened for potential immoralities, as laid out by law.

Maybe, if someone invented an 'all-good AI', it could search and check for potentially divisive algorithms in all the AI systems that must be screened.

As far as the potential for government misuse, there could be big problems, such as there already seem to be in China with social credit scores and intrusive, expectant surveillance.

The civilised world has relied on, and survived by, treaties, conventions, and self-regulation for some time on other big issues, so there's probably some international accord they can agree upon as to what constitutes 'bad AI'.



posted on Dec, 13 2018 @ 07:26 PM

originally posted by: TzarChasm
a reply to: neoholographic

Can you explain to us the exact point where artificial intelligence reaches consciousness?


Maybe AI will never reach consciousness. We can just try to give it some attributes of consciousness that we find appealing.

We can quantify intelligence. Someone can have an IQ of 100 or 135 like mine. We can't quantify consciousness.

Simply put, AI might be able to learn to mimic consciousness by watching thousands of movies, reading millions of books and reading millions of Facebook and Twitter posts.

You would then have an AI that can simulate 7 billion avatars in a virtually rendered environment, all of whom think they're conscious. Oh wait, that could be us...


Seriously, how could you tell the difference between true consciousness and simulated consciousness? Is there really a distinction if simulated consciousness thinks it's true consciousness? What does true consciousness even mean?

It's easy to see how these things will easily be blurred if we scale up Quantum Computers.



posted on Dec, 13 2018 @ 07:28 PM
a reply to: neoholographic

Don't you think the term AI, for "Artificial Intelligence," is being used as a marketing tool rather than accurately describing what's available now? Now we have a new one: "Artificial General Intelligence."

I understand the concerns and share them, but right now it seems to me to be a discussion of what may come in the future at some point. The conversation has gone beyond the current reality, which is not a bad thing. I don't think it's time to send in John Connor to blow up computer labs.

Yes, for it to compare with the original meaning of the phrase "intelligent life," there must be consciousness. I do not think it's time to hit the panic button, nor do I think it should be feared beyond how it's used by bad actors. The technology is nothing to be feared; instead, fear those who do the programming. The machines are not ready to rise up against us. At worst, a sledgehammer can shut one down, and the person really in control, not the computer, can be jailed if need be.

I don't think we really disagree that much; we just disagree on it being an imminent danger. Scientists have a way of talking about problems that may lie in the foreseeable future as if they were dangers now. I'm sure Hawking was looking to the distant future when he made his warning. A hundred years from now, the Amish may turn out to have a point about technology.



posted on Dec, 13 2018 @ 07:31 PM
There are already AI stock-trading systems which compute how to make more money faster than the eye can blink. We'd be in trouble if someone who owned and used the best one at the job somehow became the world's owner as well. It's bad enough with the so-called 1%.







 