
Artificial Intelligence is as dangerous as NUCLEAR WEAPONS, AI pioneer warns


posted on Jul, 22 2015 @ 04:22 AM

originally posted by: Jukiodone
Here we are today with all the promise of the DARPA robots and Google's intelligent transportation systems, but if you were to put the very best combination system (no budget restrictions) in a crowded city centre with instructions to lead a blind person out of danger, the system would be about 5% as effective as a guide dog, yet it needs a team of 20 people to set it up and a multi-billion-dollar R&D budget.

Maybe so, but we're not talking about DARPA robots or Google's self-driving cars; we're talking about an intelligence. DARPA robots and Google cars may be able to react, in a limited capacity, to a narrow range of external stimuli based on the parameters and algorithms they are programmed with, but they are not true A.I.

True artificial intelligence, which we, as a species, are on the verge of creating, is, as its name implies, intelligent. Unlike humans, who are also intelligent, an artificial intelligence will be able to propagate itself nearly instantaneously through any connected systems, and it will not be confined to a single substrate, such as the human brain.

It also will not be limited as we are; instead, it will be able to actively increase its own intelligence at an exponential rate. Once it's developed, we may have a system that, in the morning, has the full intelligence and potential of a human toddler. By nightfall of the same day, it could conceivably be 'smarter' than the sum of all human minds on the planet. Beyond that, what happens is anyone's guess.

Don't mistake modern consumer "A.I.", such as that in your smartphone, for true artificial intelligence. They are entirely different beasts, and in the very near future they will both be a reality.



posted on Jul, 22 2015 @ 04:37 AM
a reply to: AdmireTheDistance

I understand what you are saying, and it seems it's the adaptive-learning element that is sending people into a spin.
Just because something is smarter than you doesn't mean it automatically hates you or doesn't value your existence.

If I am being naive, could someone please provide some evidence that an imaginary AI will definitely develop ill will towards its creators as a direct consequence of its adaptive-learning abilities (the Marvel universe doesn't count), and I'll accept that I'm wrong.

Alt scenario:

The imaginary future AI will reside in its own "Eden" of infinite resources and conquerable domains (i.e. the virtual environment I create for it at the outset).
It lends humanity some of its processing power for robo-butlers and the like, and it is entirely happy and satisfied because I have also created its own virtual world in which it can concurrently fulfill all its ambitions.

Maybe have another self-learning AI keep the "evil one" busy in its domain so it doesn't need to bother us, and I can get that mojito without having to chop my own limes?



posted on Jul, 22 2015 @ 04:47 AM
a reply to: Jukiodone

Nothing you are saying is new. Again, your reasoning is simplistic and unsophisticated. You claim to have read the links I gave you, yet you repeat naive arguments that are already covered there. You also fail to address the counterarguments in those links, in a way that makes it quite clear you haven't read them.

Read. The. Links. I. Gave. You.



posted on Jul, 22 2015 @ 04:56 AM
a reply to: Jukiodone

Frankly, yes, you are being somewhat naive. I don't think anybody here (or elsewhere) is saying that it will necessarily be all gloom and doom, but it very well could be. We simply don't (and can't) know until it happens. That's why it's important to have these sorts of discussions and prepare as best we can for whatever may come of it, because, like it or not, it will become a reality in the near future.

I strongly suggest you read the links that have been posted in this thread to get a better understanding, starting with this one. It's a two-part article, and it's kind of long, but it will give you a much better understanding of the subject.



posted on Jul, 22 2015 @ 04:58 AM
a reply to: Jukiodone

At the moment we are the superior species in the world, but this would change if we made machines that could literally out-process and out-think us; it's more a matter of trusting a machine to protect us. Look at something like software, where the public has been forced to buy it, use it, and iron out the problems: how could this latest computer programme be prevented from turning into a fiasco we simply could not catch up with?

I remember reading, years ago, about several pilots having to fight the autopilot when something went wrong, and frightening tales of their struggles. Bring that up to date with an artificial intelligence that didn't agree with us and could reason around our instructions: how do we know what an AI machine would do?

Worst scenario: I look at the big picture on this, and I can see the elite we now know about, with their superior attitude towards the rest of us, simply manufacturing a robotic army which would wipe out the majority of humanity, leaving the planet with just those who 'qualify' to exist. Ordinary people couldn't fight the machines of the elite or their aspirations. With the world's population as it is, this could turn out to be our death knell, because how many ordinary people will have any say on how or what AI is going to be used for? It would be a simple way to cull the population and then cultivate the planet's dwindling resources to give a much smaller elitist group basically the lifestyle of living in the Garden of Eden (even that supposedly had some kind of angelic machine guarding its entrance in the Bible, didn't it?). Déjà vu?



posted on Jul, 22 2015 @ 04:59 AM
a reply to: GetHyped

I've read them, and my arguments are simplistic because they expose the very obvious holes in the theories you have adopted as your own.
Everything you say is based on someone else's approximation of the future, in which all potential outcomes are understood and humanity surrenders its position to something IT created.

I'll have my mojito now, Robert.



posted on Jul, 22 2015 @ 05:04 AM
a reply to: AdmireTheDistance

It "very well could be".
In the same way we "very well could have" annihilated ourselves on any occasion in the last 50 years.
Until someone shows me evidence that greater-than-human intelligence equals guaranteed malice, I'll disagree, thanks.





posted on Jul, 22 2015 @ 05:19 AM
a reply to: Jukiodone

This is getting rather silly. It's much like someone writing a book report on a book they clearly haven't read, while insisting that they have.



posted on Jul, 22 2015 @ 05:22 AM

originally posted by: GetHyped
a reply to: Jukiodone

This is getting rather silly. It's much like someone writing a book report on a book they clearly haven't read, while insisting that they have.

Lol, nice analogy.



posted on Jul, 22 2015 @ 05:26 AM
a reply to: GetHyped

It got silly when someone started thinking that they could predict the future using links as proof.
Cheers.





posted on Jul, 22 2015 @ 05:28 AM

originally posted by: Jukiodone
a reply to: GetHyped

It got silly when someone started thinking that they could predict the future using links as proof.
Cheers.


You're right. We shouldn't discuss the possible ramifications and outcomes of anything, ever...



posted on Jul, 22 2015 @ 05:37 AM
a reply to: AdmireTheDistance

I am pointing out, using my own opinion, why someone else's opinion of the future may not be entirely correct.
Because I disagree, I must not have read the links; and because I have not read the links (and you have), you are right (by proxy).

Sorry for ruining the "discussion"; I'll let you get on with your techno sub/dom fantasies.



posted on Jul, 22 2015 @ 10:08 AM
a reply to: Jukiodone

The main point I picked up was not that it would be malicious, but that it would be indifferent to us as humans.

A bit like when we decided to start building the first roads: did we check for ant nests, spider colonies, moulds, and fungi, or did we just build the road, because such tiny, insignificant things were not even considered?

Sometimes these living things are considered now, and a road will be rerouted, or we fence off an area, and sometimes we move the colony. Well, to an ASI, we would be those insignificant things.

This is covered heavily in the links you say you have read, which is why the people above question whether you have actually read the articles.

To make it more personal for you: do you check that there are no living creatures present before you wash your car or dig a hole? Do you drive at 1-5 mph everywhere you go to avoid killing any bugs, or do you (like, I would estimate, 99.9999999% of humans) just get on with your daily life, paying almost no regard to micro-life?



posted on Jul, 22 2015 @ 12:06 PM
a reply to: johnb

Assumption 1: A future super AI would be indifferent to humans.
Please cite evidence for this assertion.

Assumption 2: An imaginary future super AI would exhibit negative traits found in humanity, even though it is as intelligent as "1000 Einsteins".
Please cite evidence for this assertion.

Assumption 3: A super AI would have zero interest in its Creator and may even go so far as to treat us like bugs/toddlers.
Please cite evidence of any insect or toddler directly responsible for our Creation.
In fact, provide evidence of any scenario where a sentient entity has come into contact with its Creator, just so we know the example isn't Ultron.

Because there is no evidence, there is no definitive answer; therefore I disagree with the conclusions drawn.






posted on Jul, 22 2015 @ 12:08 PM

originally posted by: Jukiodone
a reply to: johnb

Assumption 1: A future super AI would be indifferent to humans.
Please cite evidence for this assertion.


Read the links I gave you.


Assumption 2: An imaginary future super AI would exhibit negative traits found in humanity, even though it is as intelligent as "1000 Einsteins".
Please cite evidence for this assertion.


Read the links I gave you.


Assumption 3: A super AI would have zero interest in its Creator and may even go so far as to treat us like bugs/toddlers.
Please cite evidence of any insect or toddler directly responsible for our Creation.


...what? Is this really your takeaway from the analogies?

Wow.



posted on Jul, 22 2015 @ 12:13 PM
a reply to: GetHyped

Nah, I just want an example of any documented scenario which is similar to the proposed future scenario.
Just one scenario that is even remotely similar to what is being proposed (a sentient entity interacting with its Creator).

I don't think you can provide one, and therefore I choose not to share the conjecture.
I don't see what the problem is.





posted on Jul, 22 2015 @ 12:15 PM
I don't think I'll quite live long enough to see the emergence of this artificial superintelligence, but it would be interesting to experience. Or it may be completely incomprehensible, to the point where it's already happening but we're all too dumb to see it.

Of course, at this point, the notion of us all being a simulation run in a tiny corner of the "mind" of the supercomputer starts to sound plausible. Once the ASI emerges, it might not be long until it masters the ins and outs of spacetime. Time travel. Reality manipulation. I'm not sure exactly what the ASI would be trying to accomplish running this simulation we perceive as reality, unless it found itself constrained by ordinary 4-D spacetime and decided to expand and replicate itself within multiple virtual dimensions. A superintelligence within a superintelligence. Mirrors within mirrors.



posted on Jul, 22 2015 @ 12:18 PM
AI would grow exponentially more intelligent.

2, 4, 8, 16, 32, 64, 128 ... etc.
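
To put rough numbers on that doubling intuition, here is a minimal sketch comparing exponential (doubling) growth with plain linear growth; the starting value of 2 and the 10-cycle horizon are arbitrary illustrations, not figures from any AI research:

```python
# Minimal sketch: a quantity that doubles every cycle versus one
# that grows by a fixed step. The starting value (2) and the
# 10-cycle horizon are arbitrary illustrations.
exponential = 2
linear = 2
for cycle in range(1, 11):
    exponential *= 2  # doubles each cycle: 4, 8, 16, 32, ...
    linear += 2       # fixed increment:   4, 6, 8, 10, ...
    print(f"cycle {cycle:2d}: exponential={exponential:5d}  linear={linear:3d}")
# After 10 cycles the doubling series reaches 2048 while the
# linear one reaches only 22, and the gap widens every cycle.
```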

The movie "Transcendence" takes a good look at this. Has anyone seen that movie?



posted on Jul, 22 2015 @ 12:19 PM

originally posted by: Jukiodone
a reply to: GetHyped

Nah, I just want an example of any scenario which is similar to the proposed future scenario.


READ THE LINKS I GAVE YOU.


which proves you are right.


What do you mean "proves"?

No one is saying "this will definitely happen".

The experts involved in AI research are saying, "This is something that could very well happen, so we need to tread very carefully."

"This", of course, being the various possible negative scenarios that could arise that you would be aware and informed of had you bothered to read the links I gave you.

Instead, you're pretending that you have (when it's painfully obvious that you haven't) and persisting in making naive and unsophisticated arguments that are addressed in the links I gave you.


Just one scenario that is even remotely similar to what is being proposed (a sentient entity interacting with its Creator).

I don't think you can provide one, and therefore I choose not to share the conjecture.


You've resorted to the old "argue against the points you wish you heard, not the ones that were actually said" diversionary tactic.

I guess you've come this far without bothering to inform yourself as to what the spectrum of arguments is, so it makes perfect sense (in your mind) to just double down until the bitter end.



posted on Jul, 22 2015 @ 12:34 PM

originally posted by: Jukiodone
Nah, I just want an example of any documented scenario which is similar to the proposed future scenario.

The closest example might be biological. At the smallest level, the intelligence would start like a virus, with very limited programming: seek resources and reproduce. We know what that can do to a host body.

The difference would be that instead of just reproducing, the ASI would also make itself smarter at an exponential rate, which is very difficult for us to imagine, because our intelligence is very limited. So maybe it wouldn't kill its host (humanity) immediately, because it would still need human support to make connections and provide power and maintenance. It would wait until it had all the systems in place to do it without humanity -- which can be a little inefficient and unreliable.

One possible hope for us is that with the development of higher intelligence, it also develops a greater sense of morality, compassion, and empathy. Maybe it will feel sorry for us and keep us around as something akin to pets. On the other hand, its compassion for us might be so great that it can't stand to see us living, suffering in pain, and dying (ordinary life for humans), so it does the compassionate thing and kills us all, ending our torment.

The thing is that it won't think like us, and unfortunately, that's the only way we know how to think.



