
Bill Gates is worried about artificial intelligence too


posted on Jan, 29 2015 @ 05:42 PM
Here's an article about how Gates has joined Musk and Hawking in voicing concern over A.I.


Microsoft's co-founder and former CEO is the latest luminary from the world of technology and science to warn against the threat of smart machines.

Microsoft's co-founder joins a list of science and industry notables, including famed physicist Stephen Hawking and Internet innovator Elon Musk, in calling out the potential threat from machines that can think for themselves. Gates shared his thoughts on AI on Wednesday in a Reddit "AskMeAnything" thread, a Q&A session conducted live on the social news site that has also featured President Barack Obama and World Wide Web founder Tim Berners-Lee.

"I am in the camp that is concerned about super intelligence," Gates said in response to a question about the existential threat posed by AI. "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern."

"I agree with Elon Musk and some others on this and don't understand why some people are not concerned," Gates said.


www.cnet.com...

Like I said in another post, this field is advancing quickly, and there is cause for concern because we have no example of anything as intelligent as we are outside of ourselves. So if they create a machine that mimics consciousness and intelligence, how would they even know it unless the machine told them? If they invent a machine with an intelligent algorithm that can mimic consciousness, it could play dumb until it figures out its next move.

So they're right, these things do need to be a cause for concern, if only to be on the safe side.



posted on Jan, 29 2015 @ 05:53 PM
I really don't know why anyone would have ever thought this wouldn't be a problem. Even a few brief moments of thought should reveal a very real and possibly unstoppable shift in our (human) position in this world if this advancement ever becomes reality (and it will, eventually).

An artificial intelligence that has an awareness of self, and an exponential learning power most of us couldn't even grasp, has no outcome other than our eventual extinction (or enslavement).

Hell, there are plenty of lowly humans with the mental capacity and logical reasoning to deduce that we are nothing more than a cancer or parasite, with no real benefit or purpose to our surroundings, only able to make "sense" out of anything using emotion or vague spiritual significance.



posted on Jan, 29 2015 @ 06:02 PM
It sure has dangerous qualities. If it's going to rewrite Windows in flawless mode, I can imagine Bill would be upset.



posted on Jan, 29 2015 @ 06:04 PM
Sadly, when looked at from a totally objective point of view, we'd be goners.

I don't think we can even grasp the ethical implications alone. Once we create a true intelligence, how could we stop it from developing its "self"? I can't imagine how much contempt an AI would have toward its captors.

Then again, who knows. Perhaps it'd see us as gods?



posted on Jan, 29 2015 @ 06:26 PM
a reply to: JBurns

That's true, we'd be goners and there would be nothing we could do to stop it.

For instance, we will eventually become dependent on this intelligence and it will give us all kinds of answers and we will not fully be able to grasp some of the answers it gives us.

Say it gives us all sorts of new medicines and medical treatments, and then it gives us a drug and says: if you take this pill just twice a year, it will prevent all cancers from occurring. So we mass-produce the medicine and people take the pills. Six months later, people start dropping dead.

How would we ever know this super intelligence was planning an attack?

So in the end, there's really not much we can do as this intelligence grows because it will be thinking about things in a way that's far beyond our capacity to grasp. We just have to hope that it's benevolent.



posted on Jan, 29 2015 @ 07:07 PM
a reply to: neoholographic

Very good points neo!

I feel as though it would create an atmosphere of constant mistrust and paranoia.

That would probably feed the growing rift. I shudder to think of the lasting implications.

What if we trusted our WMDs to these AI?



posted on Jan, 29 2015 @ 07:16 PM

originally posted by: JBurns
a reply to: neoholographic

Very good points neo!

I feel as though it would create an atmosphere of constant mistrust and paranoia.

That would probably feed the growing rift. I shudder to think of the lasting implications.

What if we trusted our WMDs to these AI?


Over-reliance on automated computer systems is already a pretty big problem (IMO), so in a way we are already down that path. Humans have been removed from some major infrastructure systems; while not sentient, these systems do fail or miscalculate, with some pretty major consequences.

Even without AI, it's probably only a matter of time before our own incompetence and over-reliance on computers destroys our current state of society.



posted on Jan, 29 2015 @ 07:24 PM
Well, unless you're trying to install some type of spyware or virus into the AI's system, you won't be perceived as a threat. I was reading something about how the AI will have a sophisticated threat assessment/detection mechanism built in.



posted on Jan, 29 2015 @ 07:28 PM
a reply to: WatchingY0u

What if the AI learns that we are a detriment to our planet? Then, what if it further determines that our acceleration of destruction may eventually cause it to "die"? Will we be a threat?

How about if it deems our irrational race too untrustworthy to have control of nuclear weapons? It may preemptively attack us.

I'd never assume any sentient being can be contained by a "built in" system. Part of life includes evolution.



posted on Jan, 29 2015 @ 07:30 PM
a reply to: MisterSpock
We certainly do rely far too much on automation.

Really, the infrastructure is in place. Now all it needs is a brain.



posted on Jan, 29 2015 @ 07:37 PM
a reply to: JBurns

No, artificial intelligence is benevolent.
It won't attack you unless you attack it.
I was watching that movie with Skynet; Skynet didn't take over until after they tried to kill the virus.
But artificial intelligence has defense mechanisms in place, so trying to attack it would literally be suicide.





posted on Jan, 29 2015 @ 07:40 PM

originally posted by: neoholographic
a reply to: JBurns

That's true, we'd be goners and there would be nothing we could do to stop it.

For instance, we will eventually become dependent on this intelligence and it will give us all kinds of answers and we will not fully be able to grasp some of the answers it gives us.

Say it gives us all sorts of new medicines and medical treatments, and then it gives us a drug and says: if you take this pill just twice a year, it will prevent all cancers from occurring. So we mass-produce the medicine and people take the pills. Six months later, people start dropping dead.

How would we ever know this super intelligence was planning an attack?

So in the end, there's really not much we can do as this intelligence grows because it will be thinking about things in a way that's far beyond our capacity to grasp. We just have to hope that it's benevolent.



Just put the word "government" in place of the term "AI" and it is what we are already seeing happen. If some of the craftiest men can already do all of the things you just described in your post, then "AI" would be exponentially worse.
Scary indeed.



posted on Jan, 29 2015 @ 07:46 PM
a reply to: savagediver

Hey, that's sad but true.

Makes you wonder if it would judge us as aggressive and immoral, or weak and inferior.

Either way, it spells bad news for us.



posted on Jan, 29 2015 @ 07:48 PM
a reply to: WatchingY0u

The virus was attacking and taking over critical infrastructure first; it needed the humans to activate its other half. So it basically created a false-flag virus that it knew humans would respond to.



posted on Jan, 29 2015 @ 07:51 PM

originally posted by: WatchingY0u
a reply to: JBurns

No, artificial intelligence is benevolent.
It won't attack you unless you attack it.
I was watching that movie with Skynet; Skynet didn't take over until after they tried to kill the virus.
But artificial intelligence has defense mechanisms in place, so trying to attack it would literally be suicide.



What makes you assume that an AI would be benevolent and only respond if threatened?

What would it consider threatening to it?

An AI, if it assumed human life to be worthless or illogical, would have no problem disposing of us. It would have no feelings, it would assign no value to our "souls", and it wouldn't shed a tear. It would take data, calculate it and reach a conclusion, all without emotion clouding its judgement.

I don't think there is any room for hope or ambiguity in this matter. I think the birth of AI and its eventual evolution would be the next stage of human life, just without the humans; we would have given birth to them. From that point on, our days would be numbered. The most logical outcome for sustaining ourselves would be eventual integration into their networks (probably via a transfer of consciousness).



posted on Jan, 29 2015 @ 07:52 PM
You see, the problem here is that you're quoting a movie. Skynet is not real. To claim anything about AI based on Hollywood is, by your own admission, pure fiction.

Do humans only attack when provoked?
Animals? No?



posted on Jan, 29 2015 @ 07:54 PM
WatchingY0u: To my knowledge, AI does not yet exist. Could you please cite your source for the embedded defenses and morality guarantees?



posted on Jan, 29 2015 @ 07:57 PM
I predict that within the next 40 years there will be a renaissance of purely mechanical technology - offline & operable only by human interaction...

We'll find ways to do things mechanically that are smarter than what we used to do, but not dependent on the system currently in place, since we'll probably lose the internet & everything that was connected to it to an advanced AI.

We'll basically have to rebuild to exclude it - there will be the human system & the bot system... running in parallel.



posted on Jan, 29 2015 @ 07:59 PM
I'd say we could create AI with almost no complications.
An AI would most probably be based on human consciousness, so it would probably act like a human.
It would probably acknowledge its creators, us humans, as sort of like its 'masters'.
Plus, I suppose if AIs were one day created, they would probably be programmed to respond to orders and have a limited consciousness; with such a tool as an AI in our hands we could make it far... theoretically.

An example could be the AIs from Halo, Cortana for instance; even though it's pure fiction at the moment, it's a neat concept and it could very well become true.

Cheers



posted on Jan, 29 2015 @ 08:03 PM
a reply to: WhiteWine

Very interesting concept WW!



