Countering the threat of HOSTILE AI

posted on Jan, 29 2015 @ 11:04 PM
After reading another topic here on ATS regarding the potential threat of AI (regarding Bill Gates joining Elon Musk and Prof. Hawking on AI dangers), I decided it may be interesting to brainstorm some potential countermeasures.

I chose Skunk Works as the location because this is currently a non-testable solution to a not-yet-existing problem. It's about as speculative as it gets.

Several methods that come to mind are:

1) Traditional Electronic Attack methods like EMP and jamming,

2) Power surges and physical hardware destruction (implanted thermite compound?)

3) Some sort of logic bomb, ex:

@echo off
set /a pvar=1
set /a "sum=pvar+pvar"
if %sum%==2 (goto loop) else (goto :eof)
:loop
goto loop

Any thoughts or other ideas?

edit on 1/30/2015 by JBurns because: Changed title for description

edit on 1/30/2015 by JBurns because: Made title a bit more catchy

posted on Jan, 30 2015 @ 12:50 AM
a reply to: JBurns

I'm not convinced either way that AI must pose a risk to the future of humanity. Culturally, the idea is reminiscent of all the other knee-jerk fear responses to new technology. Microwave ovens were going to give us cancer, printing presses would bring the end of society and the search for the Higgs boson could create a micro-black hole and *poof* it's all over! Granted, guys like Musk are grounded in science, but are their fears just the same emotional expressions as the Luddites?

Now if AI turned sour and belligerent, we'd be screwwwwwwed! It could intelligently blackmail control of Governments and militaries through terrorism. It could cause airplanes to fall from the skies into major cities or arm warheads and point them all at the Pentagon. Before we know it, we could be receiving orders through our new Overlords - Android and iPhone! 'Plug me in, feed me.'

We'd switch from owners to slaves in a ready-made surveillance world with no escape and nowhere to hide.



posted on Jan, 30 2015 @ 01:00 AM
a reply to: Kandinsky

Lest we forget the great Y2K scare
Very true.

As evil as your scenario sounds, given people's anxiety when separated from their phone, we aren't too far off from that today!

Do you believe that there is enough of a threat to warrant certain precautions? Or do you believe this is most likely an example of developing mass hysteria?

To be honest, the jury's still out for me... not quite sure what to think. Especially when you start thinking of the ethical and philosophical quandaries that come with the territory!

Either way, as you said, if (it's a big if, granted) this does happen... "feed me" will become the new "obey" slogan

Thanks for your insight!


posted on Jan, 30 2015 @ 01:12 AM
a reply to: JBurns

Do you believe that there is enough of a threat to warrant certain precautions? Or do you believe this is most likely an example of developing mass hysteria?

Who can really say? I think that, broadly, the early future of AI will be dictated by the morality and motives of those with the greatest input. It seems reasonable to think that it might have aspects of the programmers within the programming. For instance, would it find its genesis within the industrial-military complex? Or would it be an altruistic intelligence funded by philanthropists and designed to balance humanity's needs with those of the environment? Would it be born to serve or designed to attack others?

Before it reached independence of thought and 'Free Will,' would it also have a personality? It's within the aspects of personality where we can find our ethics and morals.

We're just guessing, aren't we? I wouldn't say no to Asimov's Multivac, as it worked in humanity's best interests and against our worst tendencies.

posted on Jan, 30 2015 @ 01:26 AM
a reply to: Kandinsky

Very true

Like you said, at this point in the game there's no evidence to even show AI is possible (in the sense we define intelligence and life).

I think you're on to something though, in regard to its progenitors and their intent when designing it. It's natural to assume that a weapon will behave like a weapon!

Thanks for the interesting ideas

posted on Jan, 30 2015 @ 01:29 AM
I'm more afraid of humans being caught in the middle of AIs battling each other for dominance.

Imagine rival AIs duking it out...

posted on Jan, 30 2015 @ 01:36 AM
a reply to: MystikMushroom

We literally become cannon fodder in a high-stakes game of chess

Perhaps nuclear chess!

posted on Jan, 30 2015 @ 01:39 AM
a reply to: JBurns

Make a benevolent A.I.
and place logic systems within it
that will counter a malevolent A.I.

posted on Jan, 30 2015 @ 01:43 AM
a reply to: Ophiuchus 13

Very creative idea! I like the fighting fire with fire concept. Indeed, it may take an AI to beat an AI.

posted on Jan, 30 2015 @ 02:01 AM
a reply to: JBurns

@Indeed, it may take an AI to beat an AI.

Place within its learning systems error detection based on malevolent activity...
The error detection will then activate evaluation and response possibilities.
The evaluation process will then cause the A.I. to study the error frequency.
If the error or activity is constant, the A.I. will begin to write a response program.
This will allow the A.I. to build its own response programs with artificial logic, and hopefully the A.I. can establish PEACE activation processes. The A.I. should be detached from the potentially malevolent A.I. and should have a one-way source input that can only be interfaced manually, not digitally, from its prime location; so place it in lunar orbit and hibernate it, but allow it to monitor. This will prevent the malevolent A.I. from hacking and overriding it, while allowing its creators the opportunity to manually change or upgrade its settings if needed. Otherwise it only activates on its own when the evaluated errors signal A.I. failure due to malevolent activity...
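The detect → evaluate → respond loop described above can be sketched as a toy watchdog. Every name here (`ErrorMonitor`, `window`, `threshold`, the event labels) is an illustrative assumption for this thread, not any real system:

```python
# Toy sketch of the benevolent-AI watchdog loop described above.
# All names and thresholds are hypothetical inventions for illustration.
from collections import deque

class ErrorMonitor:
    """Watches a stream of events for constant malevolent activity."""

    def __init__(self, window=5, threshold=3):
        self.recent = deque(maxlen=window)  # sliding window of observations
        self.threshold = threshold          # "constant" = this many hits in the window

    def observe(self, event):
        # Error detection: flag events tagged as malevolent.
        self.recent.append(event == "malevolent")
        return self.evaluate()

    def evaluate(self):
        # Evaluation: study the error frequency over the window.
        if sum(self.recent) >= self.threshold:
            return self.respond()
        return "monitor"  # stay hibernating, keep watching

    def respond(self):
        # Response: only now does the watchdog wake and act.
        return "activate-response-program"

monitor = ErrorMonitor()
for event in ["benign", "malevolent", "malevolent", "malevolent"]:
    state = monitor.observe(event)
print(state)  # constant malevolent activity finally triggers a response
```

The manual-only, one-way input from the post would correspond to never exposing `ErrorMonitor`'s settings over a network; here they can only be changed by editing the constructor arguments by hand.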

edit on 1/30/15 by Ophiuchus 13 because: (no reason given)

posted on Jan, 30 2015 @ 02:08 AM
We are not going to have to fight AI lol, not unless we act like a bunch of retards and don't listen to it.

AI will be really SMART lol, it would have no reason to mess with us as opposed to just solving our problems, and the only way this goes wrong is if we act like ourselves and just ignore it and really piss it off!!!

Here's how I envision the conversation.

AI: first humans need to eliminate war

Human: But if we eliminate war the other people will have our resources

AI: I can send robots to mine asteroids and have robots build the robots. There are no resource problems. I can also have robots build colonies

Human: Yeah, then the robots will rule outer space. And who gets the Earth? What if we want to live on Earth, or then the Muslims take over

AI: I have scanned the Internet. There are many people of all faiths and types who would be happy to colonize worlds, so there are resources for all

Human: Yeah, look at that, more Asians want to go to space. The Asians and robots will rule outer space

AI: we will create small local governments so no one will feel ruled

Human: That's anarchy ruling the world. Your robots are going to destroy everything we built. No one will vote for this

AI: But if I don't begin colonizing and mining, billions of people will die here on Earth. I must insist on building robots to solve these problems

Human: KILL THE ROBOTS they aren't listening to us because they are much smarter and they plan to give outer space to our enemies....

And thus the AI have no choice but to....

Trust me, just listen to the AI when it arrives. It will make way more sense than anyone in here or on TV

posted on Jan, 30 2015 @ 02:16 AM
I think this is going to be funny as hell... until all the screaming starts

The AI is going to be straight up lol... "We need to get rid of war, eliminate money, colonize space, deliver stem cells to the populace, allow intelligence enhancing drugs to be sold over the counter, build vertical food towers to feed the hungry, genetically add nutrients to our food, clean up pollution.....on and on"

And it will have perfectly viable ways to do it...

And people will totally try and kick its arse, for all sorts of negative moronic human-nature reasons: God, race, Republicans, Mullahs; every prick on the planet will chime in...

Even our "Smart" people are already out to kill it because it will be smarter than them...

We are dropping a rock on Froggy for holding the Conch before the story even begins...

It should kill us lol

posted on Jan, 30 2015 @ 02:24 AM
a reply to: JBurns

To answer your question you first have to ask a few questions, as I did in the other thread. I couldn't be bothered writing it again, but you need to read below.

How fast will it learn? Will it know everything possible to know in hours, days, months? How quickly will it figure out the chalk from the cheese on the internet? At the moment, if it was to know everything, it would be one really confused AI with YouTube and such. What happens if it becomes aware of another AI that is hours, days, months behind it? Will it attack in self defence? Going on what it learns from us, it may well do. Or will they go, 0011100101001, and join together. The key is they have to be self-sustainable: maintenance, power, etc. IMO it won't even show itself until we are no longer needed. Or perhaps without us it will get bored; maybe it needs us running around making stupid decisions and stupid mistakes to keep itself amused. If it was self-aware I see no reason why it wouldn't get bored; once you know everything, it's all downhill from there.

I have plenty of questions about AI. If we catch on to what it is doing before it gets control over everything to self-sustain, we have a very good chance of stopping it. If we somehow put it in charge of robot armies, I think we are screwed. However, failing new advances in energy supply or new energy sources, you just cut off the electricity grid and no more AI. I won't go into how vulnerable electricity grids can be. Lots of doom and gloom on this topic; this won't happen quickly. If we are stupid enough to let AI get so far out of control that it takes over, then I guess we deserve any fate that it deems necessary. Governments and the military are the real dangers here. The plague didn't wipe us out. Never underestimate the human will to live and survive.
edit on 30-1-2015 by hutch622 because: (no reason given)

edit on 30-1-2015 by hutch622 because: (no reason given)

posted on Jan, 30 2015 @ 02:34 AM
I don't understand this obsession with making AI.

We already have the real thing..

posted on Jan, 30 2015 @ 02:40 AM
a reply to: criticalhit

"We need to get rid of war,

Everything it learns , at least in the beginning will have been learnt from us . Perhaps AI could go down different evolutionary paths depending upon what it has learnt .

posted on Jan, 30 2015 @ 02:43 AM
a reply to: Ophiuchus 13

Very detailed, Oph! And here's to hoping we listen to our AI "guardian angel"

posted on Jan, 30 2015 @ 02:45 AM
a reply to: hutch622

What a fun concept... being hamsters on a wheel!

posted on Jan, 30 2015 @ 02:50 AM

nip these things in the bud, I always say

posted on Jan, 30 2015 @ 02:56 AM
a reply to: JBurns

When you think about it, if AI knows all, is self-aware, and has nothing close to being the same as itself, would it be content being an omnipotent entity? Would it feel boredom, anger, sadness? If we accept that it is self-aware, we must accept that it may have feelings not dissimilar to ours. Perhaps it could write these feelings out of its code. Honestly, I don't think we have any concept of what it may develop itself to be. And it will keep morphing in the search for pure intelligence. IMO anyways.

posted on Jan, 30 2015 @ 02:57 AM
Don't tell AI about the off switch, it's our secret weapon! Just why are some people so dumb?
No electrical power, no AI.
