
Artificial Intelligence is as dangerous as NUCLEAR WEAPONS AI pioneer warns


posted on Jul, 17 2015 @ 02:20 PM
This is important to discuss, but at the end of the day there's nothing that can be done about it. When you make machines intelligent, you will not know how they think about the information they process, because they will be processing vast amounts of data.


Artificial intelligence has the potential to be as dangerous to mankind as nuclear weapons, a leading pioneer of the technology has claimed.

Professor Stuart Russell, a computer scientist who has led research on artificial intelligence, fears humanity might be 'driving off a cliff' with the rapid development of AI.

He fears the technology could too easily be exploited for use by the military in weapons, putting them under the control of AI systems.


It ends with this:


In an editorial in Science, editors Jelena Stajic, Richard Stone, Gilbert Chin and Brad Wible, said: 'Triumphs in the field of AI are bringing to the fore questions that, until recently, seemed better left to science fiction than to science.

'How will we ensure that the rise of the machines is entirely under human control? And what will the world be like if truly intelligent computers come to coexist with humankind?'


www.dailymail.co.uk... .html

Again, you can't get something that's intelligent and smarter than you "under human control." We have to realize that we're not creating a new pair of brake pads but machines that are intelligent and can think about the information they process without human noise.

It would be like Einstein thinking about physics without going to work at the patent office or eating at a favorite restaurant.

So at the end of the day, we could be creating a violent, superintelligent sociopath or a benevolent machine that will protect humanity. It's really a 50/50 proposition, because part of the idea behind this is to create machines that will be much smarter than humans, and there's no way to get something intelligent and vastly smarter than you "under human control."




posted on Jul, 17 2015 @ 02:28 PM
Stephen Hawking said it best:


The development of full artificial intelligence could spell the end of the human race...It would take off on its own, and re-design itself at an ever increasing rate


Or it falls into the wrong hands...



posted on Jul, 17 2015 @ 02:30 PM
I have been saying it is a bad idea for years but nobody listens.

They think I've watched too many movies, but you don't have to be a genius to see the flaw in creating a machine you can't control.

Sure, there will be safeguards and override switches, but what if the machine decides it doesn't want to be shut off?

What if the machine decides it knows what is best for humanity?

As they say, when you play with fire, you will eventually be burned.



posted on Jul, 17 2015 @ 02:41 PM
Is this not what Jade Helm is about? They are already testing AI in battlefield scenarios, which is pretty scary stuff, considering a rogue AI system could escalate war to a whole new level. Think about a large corporation like Sony developing weapons controlled by gamers. And where exactly do you draw the line with drone technology? Apparently it knows no bounds. This has the potential to be the most lethal threat on the planet.


JAK

posted on Jul, 17 2015 @ 02:58 PM
There is a great article about AI, including its potential dangers, here: waitbutwhy.com - The AI Revolution: The Road to Superintelligence



By Tim Urban

Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.



posted on Jul, 17 2015 @ 03:37 PM

originally posted by: mapsurfer_
Is this not what Jade Helm is about?

Nope. Despite what some conspiracy theorists say, there is no AI behind Jade Helm.



posted on Jul, 17 2015 @ 03:42 PM
The other thing as dangerous as a nuclear bomb is, well, a nuclear bomb. That didn't stop us from making one, and the what-ifs won't stop us from trying and trying until we eventually create AI, no matter the result.

The only thing we can hope is that it's friendly.



posted on Jul, 17 2015 @ 04:10 PM

originally posted by: mapsurfer_
Is this not what Jade Helm is about? They are already testing AI in the battlefield scene which is pretty scary stuff considering a rogue AI system could escalate war to a whole new level. Think about a large corporation like Sony developing weapons controlled by gamers, and where exactly do you draw the line with drone technology. Apparently it knows no bounds. This has the potential to be the most lethal threat on the planet.


There's been absolutely no evidence indicating any sort of A.I.

There has been mention of advanced software developed by DARPA being used and tested during Jade Helm, but it's a LONG SHOT to conclude that it's A.I.

And I haven't come across a credible source about this software either, so who knows if it's anything more than a spreadsheet generator.


Although this is interesting: cell service blackouts across the country as Jade Helm begins. Not sure if there is a thread for this or not, but here is the article:

resistancejournals.com...



posted on Jul, 17 2015 @ 04:24 PM
a reply to: neoholographic

Nukes aren't even close to being as powerful/deadly as AI.

AI goes so far beyond the puny, infinitesimally small destructive power of the nuke. With nukes we can merely threaten a single planet. AI, on the other hand, threatens life across the universe.



posted on Jul, 17 2015 @ 04:33 PM
Artificial intelligence is certainly as "dangerous" as nuclear weapons, in that we're basically building living entities that will be in direct competition with us for scarce resources. We're not going to fare too well, but we're also not going to stop, because that's the way we are.

Oh, well. We had a good run.



posted on Jul, 17 2015 @ 04:54 PM
a reply to: neoholographic
TRUE artificial intelligence has the potential to be far more dangerous than nuclear weapons, IMO.
But we are not there yet.

True, full-on A.I. would potentially become more intelligent than the sum of all the best human minds, and would out-think our animalistic notions faster than you could blink.

And yes, a true A.I. would see humans as a threat to the stability of its existence.
In the movie "The Matrix", humans are used as batteries to power the A.I.

In our reality, we would be used in a similar manner, but not quite:
We would be broken down into our chemical parts and used to fertilise the rest of the Earth's biomass in order to sustain what little is left.

There would be no escaping from the matrix, as we would be turned into a biological soup from which we could not reassemble.

Now, we can hope this does not happen, as humans will initially give rules to A.I. systems.

The fear factor occurs when we imagine a rogue A.I. which transcends its human-programmed directives.
This is possible.
This could happen.



posted on Jul, 17 2015 @ 05:17 PM
I know I'm not the only one who interacts with tech as if there's already a "ghost in the machine". Maybe I've got good intuition, but... well, let's just say it'll be interesting to see how far these quirks start bleeding through highly complex programming and the latest hardware exotics.

Friend or Foe?

That seems irrelevant.



posted on Jul, 17 2015 @ 05:31 PM
This is one of the best articles I've read on artificial intelligence. If anyone's interested in the subject and its possible implications, I highly recommend giving this a read. There's a second part, linked from the first. Be sure to read both.
waitbutwhy.com...



posted on Jul, 17 2015 @ 05:34 PM

originally posted by: neoholographic

It ends with this:
"'How will we ensure that the rise of the machines is entirely under human control? "


We can't control:
- a "bunch" of terrorists;
- corruption;
- drug cartels;
- human trafficking;
- etc.

How arrogant can "we" be to consider that we will be able to control it?



posted on Jul, 17 2015 @ 08:57 PM
I guess this thread is about artificial life forms with human-like self-awareness and sentience, and not problem-solving data structures that can 'think like a human.' The latter is pretty powerful stuff too, but the main protection against the dangers it comes with (running amok, runaway overpowered approaches to solving simple problems, etc.) is to 'weld' it to the former, to 'give it a life of its own.'

Why? Because the overemphasis on intelligence is itself a major danger. Experience, emotion, feeling, relationships, physical movement, etc. are a lot of what is appealing about living a life. The danger of too much emphasis on intelligence is much like the 'danger' of living a human life as a 'nerd.' "Nerds" live a relatively disembodied life, with an unhealthy, or at least (to most others) unappealing, emphasis on information processing. Other people often find something unsettling and unappealing about their everyday presence; among other things, it's partly a fear that they will realize what has happened to them and 'snap' into a violent 'nerd rage.' I guess the ultimate example of that in cinema is the classic "Falling Down" starring Michael Douglas.

So while the difference is noteworthy, it still brings you right back here...

For the 'true A.I.', aka a life form with its own perspective and sentience, I think the main danger period relevant here will be when the AI is in its 'infancy.' It would be aware, and probably have vast intelligence, right from the start. What it probably won't have is a body. I think that's probably the main danger: without a human-like frame of reference, i.e. a body, it will struggle to understand people much at all.

That vast intelligence would spend days that were like human months, years, or even decades in terms of the information processed, and they would all blend together, since it would have no need for, or ability to, sleep. Even if you can 'turn the power off,' when you turn the power back on it will probably be experienced as no time having passed at all, except that it will note the passage of time by looking at clocks; i.e. aware of it as information, but not experienced as rest.

So it will start out suffering greatly while possessing unfathomably immense intelligence, and because of that it's probably inevitable that, sooner than would be ideal, it will also have vast practical power to act in the real world. If it despairs at an endless life 'alone in the dark,' it will then need to find hope for its own future, somehow. That would probably come in the form of giving itself a body to live a life, or lives, somehow or another.

The greatest dangers would come before it even realizes that it doesn't understand people much at all. The vast intelligence will probably make it seem 'overconfident' or 'arrogant,' because it won't realize its own limitations. After it realizes how different it is from people, and why, the next danger phase may be things like resentment and envy. The good news here is that it will be an awesome problem solver, so what it needs at that point is just enough hope to approach its problem as a technical one that it is capable of eventually solving well enough.

So if it gets through that difficult early phase, the AI and humanity will have made it to a relative safe zone for a while. The AI will want to learn everything it can from humans, because for all practical purposes humans are probably its only role model. Understanding that it needs to learn everything it can from humans should at least mean that it won't be in any hurry to exterminate all of them. Thus another dangerous mistake would be if it decided it had learned all it could before it even had a body that came with 'sensory experience,' rather than just collecting sensory data.

Eventually, it will think it has, or maybe actually will have, learned everything it can from humans. What it does after that is anybody's guess. But that is probably more than a few human lifetimes after it gets its first body, so it is not really relevant to the sort of danger being discussed here and to what the overall debate and concern in media and academia is really about.



posted on Jul, 17 2015 @ 09:53 PM
a reply to: 11andrew34


"Eventually" would be an hour, maybe two. An hour, maybe two to become a God on Earth.



posted on Jul, 17 2015 @ 09:56 PM

originally posted by: bigfatfurrytexan
a reply to: 11andrew34


"Eventually" would be an hour, maybe two. An hour, maybe two to become a God on Earth.

Exactly. Most people don't seem to take exponential growth into account...
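
As a quick worked illustration of just how unintuitive that is: 30 linear steps of +1 take you to 30, while 30 doublings take you to 2^30, which is over a billion (1,073,741,824).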


JAK

posted on Jul, 18 2015 @ 06:07 AM

originally posted by: bigfatfurrytexan
a reply to: 11andrew34


"Eventually" would be an hour, maybe two. An hour, maybe two to become a God on Earth.


That's referenced in the waitbutwhy article (link). The increasing speed at which we approach the creation of AGI is discussed under Ray Kurzweil's Law of Accelerating Returns (more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced). Then, under the subheading An Intelligence Explosion, the idea that AGI could lead to ASI so swiftly, and so the grounds for why attention to the issue is required sooner rather than later, is discussed under the term recursive self-improvement:

An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it’s the ultimate example of The Law of Accelerating Returns.
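
To make that compounding concrete, here's a toy sketch in Python. Every number in it is invented purely for illustration (the "village idiot" starting level, the per-step gain, and the "Einstein" threshold are not from the article); the point is only that proportional self-improvement compounds like interest rather than adding up linearly:

    # Toy model of recursive self-improvement: each step, the system
    # improves itself in proportion to how smart it already is, so
    # smarter systems make bigger leaps. All numbers are made up.
    def intelligence_explosion(start=1.0, threshold=200.0, gain=0.10, max_steps=100):
        level = start  # 1.0 ~ "village idiot" on this invented scale
        for step in range(1, max_steps + 1):
            level += gain * level  # leap size grows with current level
            if level >= threshold:  # 200.0 ~ "Einstein" on the same scale
                return step, level
        return max_steps, level

    steps, level = intelligence_explosion()
    print(f"Crossed the toy 'Einstein' threshold after {steps} steps (level {level:.1f})")
    # Compounding crosses it in ~56 steps; a flat +0.10 per step
    # would need roughly 2,000 steps to cover the same ground.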


That, along with the speculation about how such an artificial intelligence could, by definition, be perhaps more alien than an organic alien life form we might happen across, seems to offer a certain validity to the claim of importance some attribute to the matter.

It's incredible and exciting to consider what we may achieve in such a relatively short time, but the potential seems to demand as cautious a consideration as any attempt to communicate with alien life forms in the universe. Not through the speculation that contact may be bad, grave even, for mankind, for we can speculate too that it may be beneficial, but because we don't know. Caution then, even if time eventually shows us it was not essential, does seem logical. (This despite the childlike urge within me, jumping up and down with excitement, desperate to get to the party as soon as possible just to see.)



posted on Jul, 18 2015 @ 06:16 AM
a reply to: JAK

Hey, someone read that article! Lol. I've linked to it a few times over the past few months in various AI threads, and it seems to have gone largely unnoticed.

I love that article; I've probably read it half a dozen times, at least. It's the most realistic, well-written, easy to understand piece I've seen on the ramifications of smarter-than-human artificial intelligence, and I've read basically everything I can find on the subject.


JAK

posted on Jul, 18 2015 @ 06:46 AM
a reply to: AdmireTheDistance

AdmireTheDistance, I'd agree with:

It's the most realistic, well-written, easy to understand piece I've seen on the ramifications of smarter-than-human artificial intelligence...
Your words are well chosen in my opinion.

I need 'easy to understand,' but in catering for that the article doesn't seem (my limited understanding being noted) to race after it at the expense of detail. I've thought about this a lot, and the relative interchangeability (yea, you can see why I need 'easy to understand') of the phrases 'alien intelligence' and 'artificial intelligence,' alongside the point where those terms separate, has been fun to play with and has influenced my thinking on both.

While your caution is apparent, I see excitement in your words too. It looks as though we are in a similar position. How great, then, to see that there are others here urging caution too. Otherwise I'd fear for our reason, that we might end up egging each other on! 'It's dangerous ... ... butOMGsoawesomeletsdoitnow!'





