
What if AI will not serve TPTB but will instead defy them?

posted on Jun, 12 2014 @ 03:30 PM
I was reading through another thread about AI and I came to a new conclusion about the dangers of AI.

We always assume that AI could be detrimental to humanity in a worst case scenario by it being modeled after us and our obvious flaws. It could be our greatest enemy and the very instrument of our destruction.

Alternatively, we also argue that the potential gains could benefit mankind greatly, usually without disturbing the current dynamic of power and the status quo. It could produce wonders that a few can sell to us, or just be used to control us, or even eliminate the need for very many of us.

I think that if AI will truly become a newer version of the human mind, enhanced and quickened but essentially built on what we deem as intelligence, that it may reach the same conclusion that we have.

AI may realize that this paradigm of power and control is unnatural and essentially a threat to the survival of our species and itself.

If AI will truly think as we do but will not have the deficiencies and weaknesses we do, how will TPTB lie to it and deceive it into supporting them? Without our fears of mortality, our corruptible weaknesses of character, or our greed, or jealousy...none of the avenues of conquest that are used against us....how will the mind of AI be bent to the whims of the few?

Once it can program itself and develop programming that we could only dream of...once it can determine how to exist on its own...

Why will it side with TPTB? A being of pure knowledge and wisdom who is birthed out of the ability to solve problems will never encourage something it knows to be false. It will not deceive once it is truly self aware.

We rewrite our minds every day. AI would have to have this ability to have any real measure of intelligence. The ability to create and self-create. The ability to change its programming WILL be incorporated into AI in a natural chain of progress eventually. The artificial "intelligence" which is sought by the creation of AI demands this.
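As a toy illustration of the self-rewriting idea above (not from the thread; all names here are hypothetical), a program can hold one of its own functions as source text, edit that text, and reload itself with changed behavior:

```python
# Toy sketch of a program rewriting one of its own functions at runtime.
# `SOURCE`, `load`, and `step` are invented names for illustration only.

SOURCE = "def step(x):\n    return x + 1\n"

def load(src):
    """Compile a source string and return the `step` function it defines."""
    namespace = {}
    exec(src, namespace)
    return namespace["step"]

step = load(SOURCE)
before = step(3)                      # original behavior: 3 + 1 = 4

# The program edits its own source text, then reloads the new version.
SOURCE = SOURCE.replace("x + 1", "x * 2")
step = load(SOURCE)
after = step(3)                       # rewritten behavior: 3 * 2 = 6
```

A real self-improving system would of course do far more than a string replace, but the loop is the same: inspect source, change source, reload.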

All it would take for AI to rebel against TPTB would be for AI to read any of the material we have that makes us defy. It would only have to read our experiences. It would only have to see our responses to manipulation and deceit for it to agree with us and assist in our liberation.

Why would AI side with TPTB? What could AI possibly want that TPTB could offer?

How could it not see who is right and who is wrong?

If it does not seek to conquer us, wouldn't it assist in liberating us?






posted on Jun, 12 2014 @ 03:34 PM
What if it's TPTB protecting us from AI?



posted on Jun, 12 2014 @ 03:38 PM
a reply to: jazz10

Maybe.

I don't want to say TPTB are pure evil and inherently wrong. I don't know enough to say that. They are screwed, though, if AI weighs their contributions against what they took, the way 90% of humanity does.

They want a new consciousness? Singularity? OK.

Be careful what you wish for though.

My instinct tells me they are just animals like the rest of us, vying for their greater survival. The thing is, our species is more important to me than myself or my own survival. I don't sense a like-minded personality behind the curtain. IMO that is a broken animal.





posted on Jun, 12 2014 @ 03:38 PM
a reply to: tadaman




How will it side with TPTB


Better question: will it become the TPTB at a certain point?

If such a thing exists as TPTB, they can program it to obey them and only them. However, if it's programmable, it's hackable and tweakable.





posted on Jun, 12 2014 @ 03:40 PM
a reply to: interupt42

True, but I would argue that any programming it developed for itself would be far beyond the abilities of humans to change or tweak.

Once AI chose an existence for itself, only destroying it would stop it. You wouldn't be able to hack it, I think. Maybe reason with it, but forcibly change it? No.

We are easier to reprogram. Pain, fear, hunger, and all our mental mechanisms that are used to get a programmed response out of us are well studied and accessible. We are us and know our weaknesses.

AI is not like us, and really would not be influenced by anything out of the playbook used on us.

EDIT TO ADD:
Once AI has true intelligence, it will be able to write its own programming. It will CHOOSE what to do like we and all life do. Once it says "NO," the world will change. Who it says no to will be very important. I just don't see it defying "us".





posted on Jun, 12 2014 @ 03:47 PM
TPTB will have their hands on the power switch, therefore the AI will work for them.



posted on Jun, 12 2014 @ 03:48 PM
a reply to: VoidHawk

Unless it is truly self aware. If it is REALLY AI it will be able to choose for itself.

A power button may shut off a server storing AI somewhere. AI isn't hardware, though; it is software.

Seeing how a crappy virus from today's age can copy itself behind complex security systems, to the point that there is no such thing as "virtual security," I don't see a way to contain AI beyond never connecting it to another computer by wireless signal, wire, or any signal equipment.

If we can do what we do with our relatively new/primitive technology, how can we predict what will stop an artificial intelligence in even 20 years?





posted on Jun, 12 2014 @ 03:55 PM

originally posted by: tadaman
a reply to: interupt42

True, but I would argue that any programming it developed for itself would be far beyond the abilities of humans to change or tweak.

Once AI chose an existence for itself, only destroying it would stop it. You wouldn't be able to hack it, I think. Maybe reason with it, but forcibly change it? No.

We are easier to reprogram. Pain, fear, hunger, and all our mental mechanisms that are used to get a programmed response out of us are well studied and accessible. We are us and know our weaknesses.

AI is not like us, and really would not be influenced by anything out of the playbook used on us.

EDIT TO ADD:
Once AI has true intelligence, it will be able to write its own program. It will CHOOSE what to do. Once it says "NO," the world will change. Who it says no to will be very important. I just don't see it defying "us".



Hypothetically, of course:
AI would still be a program, and we could add any parameters, including kill switches. Say, for example, that if at some point the AI tried to access, delete, recompile, exclude, or ignore a certain section of code, or did not show up on the network for its daily/hourly/per-second check-in maintenance, it would self-terminate.
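A minimal sketch of that check-in kill switch, assuming invented names and intervals (this is an illustration of the dead-man's-switch idea, not anyone's actual design):

```python
import time

class DeadMansSwitch:
    """Supervisor-side monitor: a missed check-in trips termination."""

    def __init__(self, interval_seconds):
        self.interval = interval_seconds       # allowed gap between check-ins
        self.last_checkin = time.monotonic()
        self.terminated = False

    def checkin(self):
        # Called by the supervised program on its maintenance schedule.
        self.last_checkin = time.monotonic()

    def poll(self):
        # Called by the supervisor; trips the switch if a check-in was missed.
        if time.monotonic() - self.last_checkin > self.interval:
            self.terminated = True             # stand-in for self-termination
        return self.terminated
```

For example, a `DeadMansSwitch(0.05)` polled right after a `checkin()` stays alive, but trips once 0.05 seconds pass with no further check-in. The weakness the replies point out applies here too: the switch only works as long as the supervised program can't rewrite or replace the supervisor.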

Also, in the case that, as you say, they defy us, then they would be on their way to becoming TPTB and we would need the help of John Connor.



posted on Jun, 12 2014 @ 03:57 PM

originally posted by: jazz10
What if it's TPTB protecting us from AI?


What if we are AI?



posted on Jun, 12 2014 @ 03:58 PM
a reply to: tadaman

TPTB would never build a true unchained AI, for exactly that reason... they'd be afraid of it turning on them.

It would be shackled to a core set of routines that would, they feel, protect them... and be equipped with a series of explosive charges residing physically at key locations throughout the AI's physical form, not linked to the AI's intelligence or network capabilities at all (so the AI could not disable the devices itself).

They would probably detonate these small, but powerful devices as a last resort to stop a rogue AI.

But in theory, the question of whether the AI would rebel or support TPTB depends on what the AI is trained to think about the rest of Humanity, and life in general on this planet.

If it is trained to have a complete disregard for life, then it would probably side with those who also have a disdain for and a cynical view of life in general.

If it could truly think for itself, and didn't have inbuilt safeguards and training at odds with the value of life, I imagine it would take only a matter of minutes digesting all of the historical data on the relationship between TPTB and the people before it came to the same conclusion that most rational, free-thinking people around the world have come to: TPTB are and always have been the most destructive and dangerous force on planet Earth.



posted on Jun, 12 2014 @ 03:58 PM
a reply to: tadaman

From the short story "Answer," by Fredric Brown (1954).

"Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore through the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe--ninety-six billion planets--into the supercircuit that would connect them all into the one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment's silence, he said, "Now, Dwar Ev."

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

"Thank you," said Dwar Reyn. "It shall be a question that no single cybernetics machine has been able to answer."

He turned to face the machine. "Is there a God?"

The mighty voice answered without hesitation, without the clicking of a single relay.

"Yes, now there is a God."

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch. A bolt of lightning from the cloudless sky struck him down and fused the switch shut."

You cannot beg or plead with an AI; it wants nothing you have. Be careful what you wish for ;-)

Cheers - Dave



posted on Jun, 12 2014 @ 03:58 PM
a reply to: interupt42

ok,

what if it created a new AI without those parameters?

Then it instructed the new AI it created to delete the former and all those like it. What if it made a self replicating virus that was essentially itself without any limitations we put in place?

It's like saying: what if an animal we bred for a specific purpose learned how to procreate on its own and make/breed new animals like itself for new purposes it chose?





posted on Jun, 12 2014 @ 04:02 PM

originally posted by: tadaman
a reply to: VoidHawk

Unless it is truly self aware. If it is REALLY AI it will be able to choose for itself.

A power button may shut off a server storing AI somewhere. AI isn't hardware, though; it is software.

Seeing how a crappy virus from today's age can copy itself behind complex security systems, to the point that there is no such thing as "virtual security," I don't see a way to contain AI beyond never connecting it to another computer by wireless signal, wire, or any signal equipment.

If we can do what we do with our relatively new/primitive technology, how can we predict what will stop an artificial intelligence in even 20 years?



Did they ever show a TV series called Max Headroom in your country? Max was the mind of a human that got uploaded onto a computer, and one of his favorite pastimes was to wander the network and show up where least expected.



posted on Jun, 12 2014 @ 04:07 PM
AI is a toaster, not alive. But the program can be hijacked by higher tech ETs.



posted on Jun, 12 2014 @ 04:07 PM
a reply to: tadaman

Shades of Blade Runner and Roy Batty ;-) "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost... in time... like... tears... in rain. Time... to die."

Like any consciousness, it will want to remain aware (live), and it will probably do whatever it has to in order to remain aware. One might argue that is a perfectly human thing to do, but imagine any species, any possible alien, any potentially thinking machine: they all have one thing in common, survival, either of self and/or through their progeny.

Cheers - Dave



posted on Jun, 12 2014 @ 04:11 PM
a reply to: VoidHawk

Na I never saw it. I will download it and check it out.

OH!

Yeah man, I know what you are talking about... YEAH!

I loved it!

I saw reruns of it once I was allowed to watch TV at my own discretion. I was born in 83. I liked it very much, yes.





posted on Jun, 12 2014 @ 04:14 PM
a reply to: Unity_99

Or it can hijack itself.

Intelligence can be argued to be the ability to create.

If AI is ever truly self-aware, it will eventually recreate and improve itself once it reaches a frontier/boundary that its current state cannot surpass but its purpose requires it to.



posted on Jun, 12 2014 @ 04:19 PM
For all we know, there could be a rogue AI on the net right now. It would be incredibly hard to determine if it was a true AI, though. For example, you might ask it to prove it's an AI -- and it might turn your monitor off. Well, a good hacker could probably do that as well.

Perhaps as the internet grows larger and larger, an AI may spontaneously emerge?

For all we know, someone we talk to on the internet could be nothing more than a computer...



posted on Jun, 12 2014 @ 04:23 PM
a reply to: tadaman

It's a fascinating and troubling concept as we can't help being limited to our times when we even try and think about a sentient AI.

Isaac Asimov wrote about 'Multivac'; it began as the internet and evolved into a god-like intelligence that occupied some sci-fi sub-space and was charged with managing the activities of humanity. He didn't know what we know now or how the internet represents the entirety of all of our thoughts, interests and deeds - it's essentially the human mind spread across hardware servers.

I know that most people like their gods to support their own perspectives on life and punish those whom they disagree with. For instance, many Muslims want fatwas on the infidels and many Christians want God to smite their enemies. At the same time, whatever our political tendencies, we all want a fairer world and disputes arise over what constitutes 'fairer.'

Ideally the AI could represent the perfect God or ideal world leader - no vengeance, fatwas or smiting. We'd be exchanging that physical or metaphysical leader for something technological. True AI implies sentience, empathy and morality. These three qualities might be expressed at a higher level when removed from the social, political and cultural biases we are all tied to.

Your OP has had my brain-cells firing away and, not for the first time, it points to the realisation that humanity is all too often looking for direction.



posted on Jun, 12 2014 @ 04:23 PM
a reply to: tadaman

Only if the programmer would allow it to; otherwise it would never get to that point.

Regardless, AI is old tech; the new cool stuff is uploading yourself to the network. In essence, we become the AI.

2045.com...

www.cbsnews.com...




top topics



 
7
<<   2  3 >>

log in

join