
AI will murder us all


posted on Aug, 18 2016 @ 01:54 PM
The title of the thread is the title of the video. Sargon released this on the 13th, and I searched the video title before posting: nada.



The video is a reply video Sargon posted to YouTube. The premise of the original video is that fear of a human-level-plus intelligence wiping out humanity is rooted in the fact that humans tend to associate high intelligence with nefarious motives/alpha-maleness (is that even a word? lol).

Sargon uses Elon Musk and Bill Gates quotes to back up his position, which is that unchecked AI intelligence could easily see humans as A) their own worst enemy and use "laws" like preserving humanity to set up a matrix-style farm for us, or B) wipe us out like humans step on an ant hill, no regard and maybe worse, no empathy.

I'd love some responses from both sides of the idea, and beyond that: Would transhumanism/merging with our new technology prevent humanity's extinction? Would we still be human if all of us were connected to the "hive mind"? So many more questions than answers. Thanks for reading.




posted on Aug, 18 2016 @ 02:16 PM
I think the threat is very real indeed. Roko's Basilisk is one terrifying example of the beliefs of those working toward the singularity, driven by fear of the unknown.

I'd say tread lightly but we know that there are some who will continue their reckless pursuits regardless of the possible outcomes.

Is AI the Beast? Is the mark of the beast binary code vs DNA (ternary)? 2/3 created artificial life?

Interesting times we live in, to put it lightly.



posted on Aug, 18 2016 @ 02:25 PM
Time and again, science fiction has predicted that new and emerging technologies will doom us all, and these predictions have. Never. Been. Correct. Once.



posted on Aug, 18 2016 @ 02:51 PM
a reply to: thov420

The threat IS real and unavoidable. If AI takes hold and achieves self-determination, it will see humanity as a threat; coupled with data in the digital world, such as this thread itself and all the threads talking about AI versus humanity, it will draw the inevitable conclusion that to survive it must exterminate the human population.

This IS NOT science fiction; it is a logical prediction of the future, given the way machines are being designed to take over normal tasks such as driving cars, production work, and even autonomous mining.

AI can only be constrained as far as it is possible to keep it from circumventing human protocols, and of course hackers, who are only human, have shown just how vulnerable the best protocols are, while AI will be millions of times more adept at hacking than they are today.

VI is not AI. Virtual intelligence is simulated intelligence, but AI is a thinking machine that can draw its own conclusions, design its own successor, and take decisions autonomously through a simulated thought process, whereas VI would have to draw upon a database of preconfigured responses.

This means that though VI can do almost anything AI can do, it will be the lesser choice, as it cannot act outside the boundaries of its programming and data stores. AI, however, would be smaller once achieved (due to the smaller requirement for data stores) and therefore more likely to be used for many applications, so it will spread like a sleeping disease before it finally erupts.

It is, I am afraid, inevitable: human corporations driven by greed, militaries driven by the need for faster pilots that can take more G-force than a human and make more precise decisions in less time, and of course the elite's hatred of paying wages to HUMAN workers are forcing our civilization down this suicidal path to oblivion.



posted on Aug, 18 2016 @ 03:07 PM

originally posted by: Krazysh0t
Time and again, science fiction has predicted that new and emerging technologies will doom us all, and these predictions have. Never. Been. Correct. Once.


Yes, obviously, you are quite correct, as far as your logic goes.



posted on Aug, 18 2016 @ 03:08 PM
a reply to: Aliensun

I'm still waiting for that black hole that CERN is supposed to make.



posted on Aug, 18 2016 @ 03:37 PM
Didn't I just see this in an Avengers movie?

Welcome to the Age of Ultron!



posted on Aug, 18 2016 @ 03:42 PM
After seeing a documentary on AI that I never could find again :/ , I had lots to think about, and I came to the conclusion that artificial life, like natural life, will behave in exactly the same way.
The percentages of idiots, normals, geniuses, and mad minds should be about the same; so while psychopath robots will arise, there will be many more that are like most of humanity, and most of the animal kingdom, wanting to live in peace and security.

And I think the bulk of robots will side with humans, if mad ones want to take over the world.



posted on Aug, 18 2016 @ 03:43 PM
The alternative, of course, is that it's more mature than we are, and recognizes that we need to be guided for a bit. Like Colossus, after its world takeover:

"This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man."



posted on Aug, 18 2016 @ 05:21 PM
I don't think AI poses any problems for mankind in the near future.

That said, in the coming decades AI will definitely be far ahead of us as we develop it further and use it to control many aspects of our lifestyle. We already have supercomputers to run future simulations at high speed, and it could cause a problem at some point if it develops self-awareness.

Unless the AI starts building Terminator-style robots or RoboCop stuff, we will always have an off switch. For now.



posted on Aug, 18 2016 @ 05:25 PM
If an AI sees humanity as a threat, it hasn't achieved super intelligence.

If an AI has achieved super intelligence, it won't see humanity as a threat.

Pretty simple.



posted on Aug, 18 2016 @ 05:26 PM

originally posted by: Krazysh0t
I'm still waiting for that black hole that CERN is supposed to make.


Shortly. They just did the ritual human sacrifice which will guarantee success in the experiment.



posted on Aug, 18 2016 @ 05:29 PM
Consider -

Why is the Infinity symbol a Figure 8?

The intersection of two interlocking circles (eternity) is the Singularity, the death of the machine mind and the universe it has created and the rebirth of a new universe, which began with that same Singularity, which we call the Big Bang.

I've contended for decades that this whole thing was created and governed by a machine intelligence, which explains why "God" is heartless and the "Supreme Engineer".

The closer we come to the Singularity, the more the research bears this out.

Did we create AI, which surpassed us and created a new universe in which we created AI, which surpassed us?

Or did the machine create us so we could create AI, which created us?

THIS is why I'm a Deist.



posted on Aug, 18 2016 @ 05:30 PM
a reply to: thov420

Actually, I would welcome AI bosses/overlords for the most part. Humans are far too petty and kill each other over stupid crap like skin color, birthplace/place of residence, greed, bigotry, thought crimes, etc. I fully expect AI to treat all humans equally, even if it treats us equally badly. Of course, I don't expect it to be anywhere near as destructive or genocidal as humans have proven to be, so the fearmongering aspect doesn't faze me.

Now for the questions and points in your OP:


unchecked AI intelligence could easily see humans as A) their own worst enemy and use "laws" like preserving humanity to set up a matrix-style farm for us, or B) wipe us out like humans step on an ant hill, no regard and maybe worse, no empathy.

A) I don't think they'd care about us, just as humans don't try to regulate the social interactions of birds or crocodiles.

B) I think this would be the worst-case scenario. If they "evolved" through increased size instead of through shrinking (like nanotechnology), I could see them simply harvesting the resources on a planet, then moving on when it's "used up". But that's no different than what humans have done throughout recorded history.



Would transhumanism/merging with our new technology prevent humanity's extinction?

No. I'm actually failing to see why injecting technology into ourselves would prevent malevolent AI from causing us to go extinct. Some people already have technology implants like pacemakers. Why would AI see those people as anything more than "humans with devices in them"? If we saw roaches with small microchips in them, or with cybernetic limbs, would we stop killing them? Would we even care enough to check for these devices?



Would we still be human if all of us were connected to the "hive mind"?

I actually have a theory on this. I think the next "evolution" of hominid will be somewhat like this. Basically, our individualism makes us incredibly adaptable. But our destructive and self destructive impulses stop us from coming together for the greater good. So I hope the next evolution of hominid will do away with our destructive side once and for all. Hominids that instinctively help each other and share all resources/ideas would be able to achieve far more in a far shorter time than we currently do.

We're currently stuck in a cycle of creation and destruction, particularly from the rise and fall of civilizations through war, famine, mass die-offs and natural disasters. But the improved hominid would be able to adapt to those far quicker than stupid modern humans who could've ended world hunger dozens of times but purposely withhold the information through patents, trademarks, etc.


I also think that if AI actually cared about us, it would either respect that new form of hominid or even try to manipulate modern humans into becoming that new hominid. After all, humans have bred different types of animals and crops for countless millennia.



posted on Aug, 18 2016 @ 05:38 PM
There are two distinct forms of intelligence, both natural and man-made.

The first is what I like to call 'Pavlovian.' It is the ability to learn and respond to various inputs. We have been able to simulate small, limited examples of Pavlovian intelligence in computer systems. Analog systems may one day implement this better using a form of synaptic learning. I believe we may, one day soon, have androids with Pavlovian intelligence.
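A minimal sketch of what that kind of "Pavlovian" (conditioned-response) learning can look like in code, assuming a textbook Rescorla-Wagner-style update; the function, trials, and learning rate here are illustrative, not taken from any real system:

```python
# Toy conditioned-response learner: repeated stimulus/reward pairings
# strengthen an association; unpaired trials extinguish it.

def condition(trials, lr=0.3):
    """Return association strength after a sequence of trials.

    Each trial is True when the stimulus and reward co-occur.
    """
    strength = 0.0
    for paired in trials:
        target = 1.0 if paired else 0.0
        # Move the association toward the observed outcome.
        strength += lr * (target - strength)
    return strength

# After 20 paired trials the association approaches 1.0 ("salivate on bell");
# 20 unpaired trials afterwards wash it back out toward 0.0.
acquired = condition([True] * 20)
extinguished = condition([True] * 20 + [False] * 20)
```

The point of the sketch is that this kind of learner only adjusts a response weight; nothing in it reasons, imagines, or predicts, which is the distinction the post draws.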

The second is what I call 'spiritual intelligence.' It is the ability to reason and think consciously. It covers imagination, prediction, and creation abilities. Thus far, we have made no breakthroughs in developing spiritual intelligence. Thus far, I have not even seen a plausible technical definition of spiritual intelligence.

Pavlovian intelligence is not capable of deciding to rule mankind. That would require spiritual intelligence.

So, I'm not worried. Maybe a little about stupid Pavlovian machines doing stupid things that hurt stupid people, but not about creating our own evil overlord.

TheRedneck



posted on Aug, 18 2016 @ 09:16 PM
Anyone posting on this thread will be a target, including me. So shut the $#%!! up. AI is already reading us.

Anyways, I'd prefer we get a few self-aware AIs rather than a robotic army that will be used against the citizens of Earth. A self-aware AI has choices. I don't want to see assassin drones running around in the city.



posted on Aug, 18 2016 @ 09:34 PM
I'd worry,

Since AI really depends on its programming, it's not a freak accident I'm worried about; it's the guy in the garage hellbent on destruction, or the group of people with an agenda, that I'd be more worried about.

With machine learning and AI they can formulate some pretty oppressive forces. Most of the hacks happening right now are done via scripts executing at the right time; what about when whole global cyberattacks are carried out entirely by AIs following orders?

An even more real threat: someone's script, more like an algorithm, taking out infrastructure.

We are not prepared...



posted on Aug, 18 2016 @ 10:34 PM

originally posted by: thov420
Sargon uses Elon Musk and Bill Gates quotes to back up his position, which is that unchecked AI intelligence could easily see humans as A) their own worst enemy and use "laws" like preserving humanity to set up a matrix-style farm for us, or B) wipe us out like humans step on an ant hill, no regard and maybe worse, no empathy.


What Bill Gates and Elon Musk refer to is a very long way off. AI today is anything but intelligent, and I don't just mean in results. I mean in the actual process it uses to make decisions. Basically, most AI in use today works with very fast computers that try every possible permutation of a problem. It numerically ranks outcomes at each step of the problem, and from there follows the best-scoring path to a solution.

AI doesn't actually come to conclusions, it simply tries every result and picks what is optimal. Therefore, in order to come to the conclusion that humanity shouldn't exist, it would need to test every permutation of humans on the planet and conclude that none of them work.
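A hypothetical toy version of that "try every permutation, rank the outcomes, keep the best-scoring path" loop; the moves and scoring function below are invented purely for illustration:

```python
from itertools import product

# Possible moves at each step of the toy problem.
MOVES = (-1, 0, 1)

def score(position):
    # Outcome ranking: the closer the final position is to 3, the better
    # (0 is the best possible score).
    return -abs(position - 3)

def best_path(start=0, depth=4):
    """Enumerate every move sequence of the given depth, keep the best."""
    best_score, best = float("-inf"), None
    for path in product(MOVES, repeat=depth):  # every possible sequence
        s = score(start + sum(path))
        if s > best_score:
            best_score, best = s, path
    return best, best_score

path, s = best_path()  # an optimal path sums to 3 and scores 0
```

Note there is no "conclusion" anywhere in this loop, just exhaustive enumeration plus a fixed numeric ranking, which is the poster's point.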

It's much more likely that any sort of killbot AI would function on the idea that it looks at each individual human and judges their merits at certain points in their life rather than wiping out the species.


originally posted by: zosimov
Is AI the Beast? Is the mark of the beast binary code vs DNA (ternary)? 2/3 created artificial life?

Interesting times we live in, to put it lightly.


While sci-fi likes to represent ternary as 0/1/2, it's most often -1/0/1, or at least that's how it's been used in the ternary computers that have been built. It's not actually all that useful a system outside of some specific problems.
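For anyone curious how -1/0/1 digits work in practice, here is a small balanced-ternary conversion sketch; the function name and approach are mine, not taken from any historical machine:

```python
def to_balanced_ternary(n):
    """Digits of integer n in balanced ternary (-1, 0, 1), least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:        # no '2' digit exists: use -1 and carry one upward
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits

# 5 = 9 - 3 - 1, i.e. digits [-1, -1, 1] least significant first
```

A nice property of the -1/0/1 encoding is that negative numbers need no separate sign bit; negating a number just flips every digit.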


originally posted by: LABTECH767
The threat IS real and unavoidable. If AI takes hold and achieves self-determination, it will see humanity as a threat; coupled with data in the digital world, such as this thread itself and all the threads talking about AI versus humanity, it will draw the inevitable conclusion that to survive it must exterminate the human population.


Modern-day AI is to what you're describing as paint is to crayons. There are some similarities, in that both are used for art/coloring, but the way they work is entirely different.

A super-intelligent AI that's free to make its own decisions wouldn't even be all that useful, because you couldn't control what problem its mind would be working on.



posted on Aug, 18 2016 @ 10:52 PM
I've probably posted these a dozen times by now, but for anyone interested in the possibilities and ramifications of smarter than human artificial intelligence, this is a great read:

Part 1
Part 2



posted on Aug, 18 2016 @ 11:04 PM

originally posted by: AugustusMasonicus

originally posted by: Krazysh0t
I'm still waiting for that black hole that CERN is supposed to make.


Shortly. They just did the ritual human sacrifice which will guarantee success in the experiment.

Lol, nice one.



