
Artificial Intelligence is as dangerous as NUCLEAR WEAPONS AI pioneer warns

page: 3

posted on Jul, 21 2015 @ 09:46 AM
IMO, I sometimes envision that TRUE AI is going to come from quantum computing, once we fully understand how our brain works, or at least once we have progressed enough to capture and monitor all brain activity.

Enough so that we can make a mapping of our brains to generate the algorithms that give rise to consciousness. We don't have to fully understand what consciousness is, but if we can replicate it, that would be enough.

From there the AI would exponentially evolve.

Perhaps in the future, if someone suffers a brain injury, there will be an AI implant that performs the function that was damaged. Eventually, it could progress from there to full-blown AI?

Fun to speculate nonetheless. I just see coding AI from scratch as the archaic way of doing it.




posted on Jul, 21 2015 @ 10:45 AM

originally posted by: AdmireTheDistance
a reply to: Choice777

Funny how no experts in the field share your optimism....

I find it sad that people only think about the bad side of things... It's gonna kill us, or, how can we use it to kill faster?



posted on Jul, 21 2015 @ 02:03 PM
a reply to: GetHyped

LOL. I am naive regarding an imaginary future of doom, one that has already been identified as an issue by its proposed creators..... and some links will prove it???

I stand corrected.



posted on Jul, 21 2015 @ 02:09 PM
a reply to: Jukiodone

You're naive for thinking your ideas have not already been trivially covered in the plethora of material out there (some of which I have already linked).



posted on Jul, 21 2015 @ 02:15 PM
An AI can't jump off a table, walk over to the wall and plug itself into the internet.

Also, an AI doesn't have hands to stop a programmer from pulling the power plug. Well, at least not YET.

I'd like to think we'd build safeguards into any AI like the laws of robotics from Asimov (wow that's twice this week that I've quoted him..)

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


Wikipedia

Then there's the Zeroth law:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

If we could hard wire these into any AI, I think we'd be OK.



posted on Jul, 21 2015 @ 02:20 PM
a reply to: MystikMushroom


An AI can't jump off a table, walk over to the wall and plug itself into the internet.


But it could persuade a human to do so.


Also, an AI doesn't have hands to stop a programmer from pulling the power plug. Well, at least YET.


But it could persuade a human not to.

Compared to artificial superintelligence, we wouldn't even be bugs. Just think how easy it is to lure a bug into your trap, or to convince a toddler to give you the gun. That is the danger.



posted on Jul, 21 2015 @ 02:55 PM
a reply to: GetHyped

So your imaginary super-villain AI relies upon humanity's input?
Sounds risky... I suspect you might fool some people sometimes... and you know the rest.

In terms of your "analogies":

Bugs did not create humanity (knowingly at least)...if we found out they did I am sure we would have a much more cordial relationship with them.

A toddler would only have a gun (jeez, where did you come up with these?) because an adult manufactured it and then gave it to them, so I don't see your point.

Humans are the resident self-sustaining, adaptive quantum computers, with a track record of brinksmanship in the face of extraordinary odds, who would also have the creation of AI under their belt in your universe...

In your imaginary doom-laden future, surely we must at least be odds-on to stand a chance?



posted on Jul, 21 2015 @ 02:56 PM
a reply to: Jukiodone

If you read the links I posted for you, your questions will be answered. The second one is particularly in depth and easily digestible (and, dare I say, interesting).

I fear we're having a "But why male models?" moment.



posted on Jul, 21 2015 @ 03:05 PM
a reply to: GetHyped

I've read them and I disagree with them.

If you can show me how your analogies are in context given the above observations I'm sure we can avoid further confusion.



posted on Jul, 21 2015 @ 03:15 PM

originally posted by: bigfatfurrytexan
a reply to: neoholographic

Not even close to being as powerful/deadly as nukes.

AI goes so far beyond the puny, infinitesimally small destructive power of the nuke. With nukes we can merely threaten a single planet. AI, on the other hand, threatens life across the universe.


perhaps it already does, and our cosmic isolation is the only reason we are having this conversation.



posted on Jul, 21 2015 @ 03:21 PM
I am afraid that our own "intelligence" will do us in before any machine we can build will be able to simulate it.
Such self-righteous people we are, with our wars, disrespect for the planet, and countless other atrocities that we would not otherwise associate with intelligence... and we are going to create it with a track record like that? I think not.



posted on Jul, 21 2015 @ 03:24 PM
Hello neoholographic. Interesting thread.



So at the end of the day, we could be creating a violent super intelligent sociopath or a benevolent machine that will protect humanity.

Would we? Perhaps that is a bit anthropomorphic, in the sense of attributing emotion as we experience it, to a machine. Knowing what violence or benevolence is factually, is much different than feeling it, right? And emotion is a physical experience, through our senses and their interaction with environment and other humans, as much as it is manifested in our intellect.

A machine not having the same senses will perceive in a different manner. This seems to me the crux of the issue. If a machine operates purely on logic, then we humans will be considered quite deficient by the machine. This is the danger of the problem, imho.
tetra



posted on Jul, 21 2015 @ 03:26 PM

originally posted by: Jukiodone
a reply to: GetHyped

I've read them and I disagree with them.


The sophistication of your arguments says otherwise. That is to say, your naive reasoning is thoroughly debunked in the links I gave you, and you have offered up no sophisticated argument to counter the points raised there...


If you can show me how your analogies are in context given the above observations I'm sure we can avoid further confusion.


Funnily enough, the basic premise of these analogies is covered in the links I gave you...

1) The bugs are a comparison of intellectual capacity. Read the links I gave you...

2) The toddler is a comparison of intellectual capacity, and the gun is the power we have unleashed. Read the links I gave you...

...read the links I gave you.



posted on Jul, 21 2015 @ 03:28 PM
Wow, we're really down on our species, aren't we?

Has anyone considered that an AI might value humanity? There are going to be things initially that we can do that it cannot. For example, create new works of art.

A machine can replicate and duplicate art, but it's not human and can't come up with anything truly original.

Data from Star Trek could combine styles together and try to create something unique, but because he's not human.

For the religious types, I wonder if God feared creating mankind for the same reasons? .... (something to chew on).



posted on Jul, 21 2015 @ 03:31 PM

originally posted by: MystikMushroom
If we could hard wire these into any AI, I think we'd be OK.

Good luck. One of the key features of artificial superintelligence is the ability to modify its own programming. Think you're smart enough to figure out a way to stop a machine smarter than 10,000 Einsteins from beating every single safeguard you put in place? Yeah, right!



posted on Jul, 21 2015 @ 03:37 PM

originally posted by: MystikMushroom
Has anyone considered that an AI might value humanity?


Of course, which is why it could very well exceed all expectations for the best thing for humanity.

But there's a very real existential threat that it would be painfully stupid to ignore. We need to tread very lightly around such issues indeed.

Hence the increasingly vocal warnings from experts.



posted on Jul, 21 2015 @ 03:51 PM

originally posted by: tetra50
A machine not having the same senses will perceive in a different manner.

Yeah. Try to imagine what it would be like to have your "mind" not located in any one place (like your brain) but instead spread out among a network of fast machines, constantly shifting and growing and getting smarter. The thing is, we can't. We don't have that ability. We only understand intelligence as it relates to us, and we're not really even all that good at that.



posted on Jul, 21 2015 @ 03:57 PM

originally posted by: MystikMushroom
A machine can replicate and duplicate art, but it's not human and can't come up with anything truly original.

It might learn how to create unique works of art. But what if it sees "art" as an aesthetically pleasing pattern of computations or EM waves within a global network, and not a lot of colors smeared on a canvas that represents a bowl of fruit?

"Computer, you must spare me, because I can create this work of beauty!"
"I don't care."



posted on Jul, 21 2015 @ 04:08 PM
Developing the ultimate A.I. could be man's best bet at becoming immortal. It would just be a matter of time before the AI would desire to replicate its maker, giving birth to the first generation of androids (robots with organs, part machine). This could be mankind's best means of surviving any form of Armageddon to come, whether man-made or natural: the next generation of "humans," carrying with it remnants of our DNA.



posted on Jul, 22 2015 @ 03:57 AM

originally posted by: MystikMushroom
Wow, we're really down on our species aren't we?


This..

This whole thread is based on other people's opinions that can't be backed up by those supporting them, so they post links and assume others are naive for having their own argument.

The title of the thread says "as dangerous as nuclear weapons" and here we are, 50 years later, 7 billion strong and still enthused enough to see how we could create useful AI.

The assumption that any field in science could come to unified agreement about how an imaginary future might look and be accurate is a ridiculous proposition in itself.

Here we are today with all the promise of the DARPA robots and Google's intelligent transportation systems, but if you were to put the very best combined system (no budget restrictions) in a crowded city centre with instructions to lead a blind person out of danger, the system would be about 5% as effective as a guide dog, yet it needs a team of 20 people to set up and a multi-billion-dollar R&D budget.


If these so-called experts, whose livelihoods rely upon them being the unquestioned masters of their domain, had some technology that roughly approximated all of the behavioral facets of a rat (as an example)... well, then we could have a discussion. But the gulf is so wide, based on our current understanding of emulating biology, that surely we are right to be skeptical.


