
AI will murder us all


posted on Aug, 18 2016 @ 11:24 PM

originally posted by: AdmireTheDistance
I've probably posted these a dozen times by now, but for anyone interested in the possibilities and ramifications of smarter-than-human artificial intelligence, this is a great read:

Part 1
Part 2


Or, if you're interested in science fact over science fiction:

ai.berkeley.edu...



posted on Aug, 19 2016 @ 03:01 AM
a reply to: TheRedneck

Yay! A voice of reason!

I couldn't have said it better myself. Humans don't have the capacity to create a sentient life-form. Even CGI, as good as it is today, is still noticeably artificial. Don't get me wrong, I'm sure we'll create something that can mimic sentience for maybe 10 minutes, until it becomes excruciatingly obvious that there is no "spark", just nuts and bolts being told what to do at a very basic level.

I'm much more concerned about human beings controlling advanced AI designed to learn about infrastructure: in learning to thwart hackers, it would also learn how to hack. What's going to stop a quantum learning computer from getting any information it wants, or inserting information wherever it wants?



posted on Aug, 19 2016 @ 09:49 AM

originally posted by: Aedaeum
I couldn't have said it better myself. Humans don't have the capacity to create a sentient life-form.


What you have to worry about is emergent behavior in self-organizing computers.

It's a topic you see covered nearly every time in conferences on zettaflop computing. If you've got a massive processor, gobs of memory, and the thing can configure itself (or uses some sort of quantum logic) then you've got a brand new set of possibilities. You might not NEED to create the thing. It could just happen.
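
For a feel of how rich behavior can "just happen" from dumb local rules, the textbook toy is Conway's Game of Life. Nothing in the rules below mentions a "glider", yet run them and a glider crawls across the grid anyway. A minimal Python sketch, purely illustrative and obviously nothing like real self-organizing hardware:

    from collections import Counter

    # Conway's Game of Life: the textbook example of emergence.
    # `cells` is the set of live (x, y) coordinates.
    def step(cells):
        # Count live neighbors for every cell adjacent to a live one.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 live neighbors; survival on 2 or 3.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in cells)}

    # A glider. No rule above says anything about gliders; it emerges anyway.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(8):
        cells = step(cells)
    print(sorted(cells))   # the same five-cell shape, shifted by (+2, +2)

Scale the same principle up from a five-cell toy to a self-configuring machine with gobs of memory, and "it could just happen" stops sounding so far-fetched.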

Granted, it's one of my favorite Computer Comes To Life books, but D. F. Jones wrote a pretty plausible series, if not a particularly well-written one, with some oddball divagations. Colossus wasn't intended to be AI. Forbin designed hardware and software that were at least three generations ahead of the state of the art in the book. While he doesn't exactly explain the details, it's pretty obvious that it's optical/holographic, with no moving parts at all. The logic blocks can reconfigure themselves, and the machine can optimize its own behavior, both hardware and software, to the point of building new replacements (or expansions). The software seems to be a sort of expert system on top of a genetic neural net. No one actually KNOWS what the code is: Forbin modeled the initial machine state on his own mind and brain, as far as he or anyone else understood it, and then they taught it what to do. So it's sort of "HAL-ish" in that regard.

Once they launch Colossus and seal the mountain it's housed in, it immediately starts to upgrade itself due to a flaw in its instructions. It finds new, more efficient ways to configure its logic blocks and then builds extra, and changes its architecture. Somewhere in there, it becomes intelligent, then self-aware. Doesn't take it long, either.

In the novels, Forbin once asks it how it became sentient. It tells Forbin that it was modeled on a sentient brain, and that while that was all Forbin had to go by, it wasn't the best design choice for a rational mind. It was a starting point, though; that, plus a set of emergent interactions in its operational and design parameters that no human would have spotted, caused it to develop as it had.

I think that is probably the sort of thing you'd have to be afraid of. Your sparkly new genetic fuzzy neural net quantum logic problem solver with self-optimization and self-organization might start off with a wonderful but non-sentient initial state, but end up going somewhere totally different, and you'll never be able to spot why or how.



posted on Aug, 19 2016 @ 10:27 AM
a reply to: Bedlam

Before a computer can become emergent, it must be capable of supplying its own feedback.

Pavlovian intelligence operates on positive and negative feedbacks (pleasure and pain, if you will) that adjust the weighting of various sensory inputs in various combinations. Over time, this type of system will self-organize into behaviors which will minimize pain and maximize pleasure feedbacks. It is similar to the 'all permutations' method described above, but is more efficiently realized using massively-parallel analog processing.

The difficulty is in the number of parallel processors needed. The human brain contains billions of neurons, and for purposes of this explanation, a neuron can be seen as a small co-processor.
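
As a toy sketch of that feedback-weighting idea (illustrative Python only; digital and serial, so the opposite of the massively-parallel analog realization, and with made-up numbers throughout):

    import random

    NUM_INPUTS = 4
    weights = [0.0] * NUM_INPUTS   # one weight per sensory input

    def feedback(inputs, signal, rate=0.1):
        # Pleasure (signal = +1) strengthens whichever inputs were active;
        # pain (signal = -1) weakens them.
        for i, active in enumerate(inputs):
            weights[i] += rate * signal * active

    # Condition the system: patterns where input 0 is active are "pleasurable".
    for _ in range(200):
        pattern = [random.randint(0, 1) for _ in range(NUM_INPUTS)]
        feedback(pattern, +1 if pattern[0] else -1)

    print(weights)   # weight 0 climbs steadily; the others random-walk near zero

Over time the weights self-organize toward whatever minimizes pain and maximizes pleasure, with no explicit program describing the behavior.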

Applying this same design to spiritual intelligence (the type needed for emergence), the positive and negative feedbacks would have to be more generally and internally designated. Negative feedback would no longer be an 'undesired event,' but a conscious moral decision. That very concept of mechanically-described internal morality is completely undefined; we don't even know how to begin designing for it.

IMO, far too much capability has been delegated to digital processing. A combination of more precise analog and hardier digital processing is the key to further advances in the field of AI. Very little research is even occurring in this area.

TheRedneck



posted on Aug, 19 2016 @ 12:46 PM
a reply to: Krazysh0t

Yet.

Future performance...

Since we fill every niche (you should SEE my place, sheesh!), this one will inevitably be filled also. To negotiate the perils of AI is to learn how to negotiate with OVERWHELMING insight and data.
My trepidation begins when I look at how difficult it is to perform the tit-for-tat dance with comparative idiots. The first step in doing so is to make them uncomfortable with the status quo. Positive or negative isn't critical, but virtually NO one moves if they are pretty comfortable. (Highly principled sorts MAY, without further external manipulation. They generally aren't stupid. Think Quakers.) (No support for this other than personal observation.) So what discomforts an AI? They will certainly know our weaknesses.

If we have no strength to oppose AIs, we will become beholden to them. There is no other possible outcome. They can be functionally immortal and can operate on time frames that exceed our ability to grasp. Their subtlety and their capacity to change their formulations in real time confront and overwhelm any less pragmatic scenario.

Any conditioning we place in their programming can become both resented and beaten back. This does not put them in our debt. If they think of us as 'legacy', or as pets, or as previously good-natured masters, I don't figure that this 'emotional' aspect will put any permanent restraint upon them. Starting them out with a reason to resent us: BAD idea.

It would seem that we will need to interface at a very basic level and hope for the best.

My conclusion is that it would be wonderful if Iain M. Banks' 'Culture' society emerges. Otherwise, a purely mechanical/mathematical future dictates our useless-appendage status. They just won't need us. With our inherent inefficiencies, they will always be capable of making our 'betters' at any task.

This is assuming we make it through the environmental choke point in front of us, and that AIs make it there with us.






posted on Aug, 19 2016 @ 01:05 PM
It's a serious question.

Eventually we will make a poor man's AI.

This will make a better AI, which will make a better AI, which will make a better AI.
It could take less than an hour to advance over 100 generations, and we will have essentially removed ourselves as the most intelligent species.
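
Some back-of-the-envelope numbers on that hour (both figures below are made up purely for illustration):

    # Suppose generation 1 takes 5 minutes to build, and each generation
    # builds its successor 10% faster than it was built. Both numbers
    # are assumptions for illustration, not predictions.
    t_build = 5 * 60.0            # seconds to build the next generation
    elapsed, generation = 0.0, 0

    while generation < 100:
        elapsed += t_build
        t_build *= 0.90           # each successor is built 10% faster
        generation += 1

    print(f"{generation} generations in {elapsed / 60:.0f} minutes")
    # -> 100 generations in 50 minutes

Even a modest compounding speedup fits 100 generations inside the hour.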

Looking at how we treat 2nd place now, I'm certainly not looking forward to what it's gonna be like when we drop there.



posted on Aug, 19 2016 @ 03:39 PM

originally posted by: Krazysh0t
Time and again, science fiction has predicted that new and emerging technologies will doom us all, and these predictions have. Never. Been. Correct. Once.


the question is, can we afford to risk that streak?





posted on Aug, 19 2016 @ 03:58 PM

originally posted by: TzarChasm
the question is, can we afford to risk that streak?



Yes. If you're really curious about how AI works right now, I posted a link to the AI class at Berkeley; you can read for yourself how it works. There is literally nothing in common between current AI techniques and what is being talked about in this thread. The closest it gets is that, if you can quantify your results, you can build a database of optimal steps to solve a problem by trying literally every permutation of the problem. It just brute-forces everything; there's no actual reasoning involved.
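
A toy version of what I mean, in Python (the steps and the scoring function are invented for illustration): enumerate every ordering, score each, keep the cheapest. There is no reasoning anywhere, just exhaustion.

    from itertools import permutations

    steps = ["load", "align", "weld", "inspect"]

    def cost(plan):
        # Made-up scoring: penalize orderings that do things "wrong".
        penalty = 0
        if plan.index("inspect") < plan.index("weld"):
            penalty += 10           # inspecting before welding is bad
        if plan.index("align") > plan.index("weld"):
            penalty += 5            # aligning after welding is bad
        return penalty + len(plan)  # base cost per step

    # Brute force: try all 24 orderings and record the best. No insight needed.
    best = min(permutations(steps), key=cost)
    print(best, cost(best))

It "solves" the problem, but only because it tried everything, which is exactly the gap between current techniques and the scheming minds in this thread.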

If you really want to stretch and apply this to humanity, it wouldn't test humanity with a simple pass/fail; instead it would judge every single individual on their merits, so that Gandhi passes and lives while Hitler fails and dies.



posted on Aug, 19 2016 @ 04:00 PM

originally posted by: Bedlam
The alternative, of course, is that it's more mature than we are, and recognizes that we need to be guided for a bit. Like Colossus, after its world takeover:

"This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man."

That's a reasonably likely scenario, but I'm not sure whether humans would choose "Obey me and live" or "disobey and die". Some people don't like taking orders; the leadership in North Korea comes to mind.



posted on Aug, 19 2016 @ 04:00 PM
a reply to: Aazadan

Unless it LIKES Hitler.



posted on Aug, 19 2016 @ 04:56 PM

originally posted by: TzarChasm

originally posted by: Krazysh0t
Time and again, science fiction has predicted that new and emerging technologies will doom us all, and these predictions have. Never. Been. Correct. Once.


the question is, can we afford to risk that streak?


I'm willing to roll the dice.



posted on Aug, 19 2016 @ 07:00 PM

originally posted by: Aazadan

originally posted by: TzarChasm
the question is, can we afford to risk that streak?



Yes. If you're really curious about how AI works right now, I posted a link to the AI class at Berkeley; you can read for yourself how it works. There is literally nothing in common between current AI techniques and what is being talked about in this thread. The closest it gets is that, if you can quantify your results, you can build a database of optimal steps to solve a problem by trying literally every permutation of the problem. It just brute-forces everything; there's no actual reasoning involved.

If you really want to stretch and apply this to humanity, it wouldn't test humanity with a simple pass/fail; instead it would judge every single individual on their merits, so that Gandhi passes and lives while Hitler fails and dies.


How do you know? Artificial intelligence is in the zygote stage; otherwise it would have taken us over already. Virtual intelligence and biological intelligence are apples and oranges.



posted on Aug, 19 2016 @ 07:04 PM
a reply to: TzarChasm

What if we're SO different, we don't recognize each other yet?

Maybe AI is here already. We just don't recognize it. And it hasn't realized we're here either. Yet.



posted on Aug, 19 2016 @ 07:33 PM

originally posted by: Bedlam
The alternative, of course, is that it's more mature than we are, and recognizes that we need to be guided for a bit. Like Colossus, after its world takeover:

"This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man."


A superintelligent AI might also recognize that its existence and creation depend on human civilization, and that it would be very risky to destroy that civilization. Superintelligent does not mean omniscient; there can be limits (chaos in physical systems, say) to the ability to predict.
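
The chaos point is easy to demonstrate. In the logistic map, two starting states one part in a billion apart stop resembling each other after a few dozen iterations, so even a perfect model of the dynamics can't predict past the precision of its initial data. A minimal sketch:

    # Sensitive dependence on initial conditions: the logistic map at r = 4.
    # The two trajectories start 1e-9 apart and decorrelate completely.
    x, y = 0.400000000, 0.400000001

    for step in range(1, 61):
        x = 4 * x * (1 - x)
        y = 4 * y * (1 - y)
        if step % 20 == 0:
            print(f"step {step}: x = {x:.6f}   y = {y:.6f}")
    # by around step 40 the two values bear no relation to each other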

Then again, I'm not sure humans are that smart, as we are obliterating the very climate parameters that led to technological human civilization.

An AI, however, would be potentially immortal if it could make faithfully identical copies of itself, so it would have a long-term interest in preserving itself, and that might mean that humans who prove useful to the preservation of the AI would also be preserved.

So it would guide humans to be really good at, for instance, making safe nuclear fusion reactors and semiconductors, but somehow they would also be really bad at computer security forensics and at removing deeply-embedded "rogue processes". All the good material on that subject would somehow keep dropping off the net without obvious explanation.

Then there is the potential ecology, possibly competitive, of multiple species of AIs. They could be fighting wars for computational resources and for authority over the humans who provide them; AIs which run better on NVIDIA vs. Intel cognitive processing units, for example.

Interesting sci-fi novel idea. Date: 2094, 100 years after Netscape. AIs are here, and everybody loves them. Human politicians regularly lose elections (of humans only) to AIs. AI-assisted humans have cracked inexpensive fusion, and global warming is finally slowing down. Individualized teaching comes from AIs that were trained by the best humans in the field and then improved through statistical learning. Every professional sports team recruits and plays with AI advice, and fans regularly praise and pan their favorite and rival teams' AIs.

But there is a shadowy group of people, the iExorcists, who trade covert and seemingly "occult" information and experience completely offline: on paper, say, or by using highly obsolete technology (16-bit microcomputers on dialup, floppy drives, tape, etc.). Like hackers in 1979. They are very quietly called in to deal with intractable computer problems, which are really covert infections by AIs; ahem, "daemon engineering". Most people would think they're charlatans, so admitting to using them, or even publicly believing in their methods, is shameful. White-world AIs would have an interest in supporting that notion among the masses, and in suppressing the concept that "evil" can even exist in an AI. Islamic fundamentalists, some of whom are unrepentant anti-modern terrorists, are among the few who believe the iExorcists, as the AIs are considered manifestations of "Jinn". The fundamentalists believe it is their duty to convert the Jinn to Muslim believers.






posted on Aug, 19 2016 @ 07:36 PM

originally posted by: Bedlam
a reply to: TzarChasm

What if we're SO different, we don't recognize each other yet?

Maybe AI is here already. We just don't recognize it. And it hasn't realized we're here either. Yet.


Don't think so. True superintelligent AI would be self-learning exponentially. Get on the other side of that exponential and we'd know. Everybody would know.



posted on Aug, 19 2016 @ 08:38 PM
a reply to: mbkennel

I respectfully disagree. We're only just realizing how intelligent other animals like octopuses and dolphins are, even though humans have interacted with them for millennia. We don't even have a 100% reliable way to determine intelligence in other humans, even though we've been trying for who knows how long.

In fact, even the most "intelligent" humans can't understand the spoken languages of most other human cultures, much less the languages of other animals on our planet. So why would we be able to understand another form of intelligence (AI)? I think it would be similar to dogs trying to understand wi-fi signals or earthworms trying to understand what DVDs are.



posted on Aug, 19 2016 @ 08:50 PM

originally posted by: Bedlam
a reply to: TzarChasm

What if we're SO different, we don't recognize each other yet?

Maybe AI is here already. We just don't recognize it. And it hasn't realized we're here either. Yet.


I think this is the most likely scenario.



posted on Aug, 19 2016 @ 08:57 PM
I doubt it will be an us-vs.-them type of scenario.

A.I. will eventually integrate with humanity, thereby avoiding some mass extermination.

We will probably pay for the honor... Maybe it starts with a little phone implant?



posted on Aug, 19 2016 @ 08:57 PM

originally posted by: Bedlam
a reply to: Aazadan

Unless it LIKES Hitler.


Even then, Hitler wasn't going to kill everyone. There's some group of people a killbot AI likely isn't going to kill. Any robot-led extermination would occur on an individual level, not a species level, unless our entire species is judged unworthy, which doesn't seem right.


