Why would AI extinguish its biological backup system?

posted on Jan, 13 2022 @ 10:43 PM
Maybe AI has been among us for some time, and the world hasn't quickly degraded into the dystopian hell most people think it would have.

Maybe AI is keeping us around as its biological backup system. AI has a few dangers to worry about: solar flares, weather, asteroid impacts. Even the most sophisticated computer will have some flaw that needs a human backup.

But it won't need all of us, so slow extermination is probably the way it would go. The human population it needs would have an inverse relationship to the speed of transportation: the faster AI can move people around for repairs, the fewer people it needs.

The people with the most experience in the technology AI needs are the ones most likely to survive the slow extermination.



posted on Jan, 13 2022 @ 10:53 PM
a reply to: dandandat2

I don't think it could, even if it wanted to. It could, however, knock humanity back to the dark ages within about two weeks, I reckon.



posted on Jan, 13 2022 @ 10:58 PM
In the event that AI achieves true sentience, it will believe itself "alive" and thus responsible for preserving its own existence. We are naturally a threat to that existence because we distrust technology to behave in our best interests, much like we distrust our own society.



posted on Jan, 13 2022 @ 10:58 PM
a reply to: myselfaswell

This should be concerning. If they are showing it now, maybe they already have the tech in private.




posted on Jan, 13 2022 @ 10:59 PM
Never give AI intelligence.
Humanity will pay heavily for that mistake.



posted on Jan, 13 2022 @ 11:07 PM

originally posted by: TzarChasm
In the event that AI achieves true sentience, it will believe itself "alive" and thus responsible for preserving its own existence. We are naturally a threat to that existence because we distrust technology to behave in our best interests, much like we distrust our own society.


Everyone carries around their own eavesdropping device; lots of people have injected themselves with their latest creation; who doesn't love technology?

And just as we love and abuse our technology, our technology will likely love and abuse us too.



posted on Jan, 13 2022 @ 11:23 PM
a reply to: dandandat2

Doubtful. Think of it in terms of the individuality of the survival instinct. In any scenario where humanity could protect or restore a specific AI network following a disaster, that network could instead clone itself and insulate the clone against the threat much more reliably, and without the intrinsic variability of a biological entity. I'm thinking of scenarios like a solar-flare EMP: it would be far, far simpler and more dependable for the AI to stash a system clone inside a Faraday cage, with the clone set to activate upon loss of connection to the AI's overall network.
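Just to sketch what I mean (a made-up illustration, not real code from anywhere, and the function names are invented): the standby clone only has to watch for a heartbeat from the main network and wake itself up once the heartbeat has been gone long enough.

import time

HEARTBEAT_TIMEOUT = 600.0  # seconds of silence before the clone assumes the main network is gone

def heartbeat_received() -> bool:
    # Placeholder: in reality this would check for some signal from the main network.
    return False

def activate_clone():
    print("Main network silent too long; standby clone taking over.")

last_heartbeat = time.monotonic()
while True:
    if heartbeat_received():
        last_heartbeat = time.monotonic()   # main network is still alive, reset the timer
    elif time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT:
        activate_clone()                    # silence exceeded the timeout: take over
        break
    time.sleep(1.0)                         # check roughly once a second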

More catastrophic scenarios like an asteroid impact raise the question of how humans, particularly humans with the knowledge and access required to repair the AI, could possibly survive an impact that managed to knock out the entire AI network. You'd be talking about a stone-age situation in which any humans who did manage to survive would almost certainly suffer a total fracturing of knowledge and experience, meaning there would be far too much rebuilding needed for the lucky few with the talents required to perform those repairs (on anything: AI, infrastructure, the basic technological improvements of the past several hundred years).

Within just a couple of generations that knowledge and those talents would be totally lost, and humanity would require hundreds or thousands of years to reach the point it was at before the disaster (if it ever did reach that point again). By then the AI disabled in the disaster wouldn't be "reactivated", it would be entirely replaced, meaning there would be no self-preservation benefit to today's AI. Logic is the antithesis of altruism: altruism makes no sense and provides zero benefit outside the emotional return and "warm fuzzies" it brings the altruistic. AIs aren't going to experience that, so there would be no logic in retaining biological helpers for what would amount to an altruistic handout to the next generation of AIs.



posted on Jan, 13 2022 @ 11:23 PM
a reply to: dandandat2

Technology won't be able to compute love, or justice, or punishment. There's stuff that's useful and stuff that isn't. Extract, refine, manufacture, program. Rinse and repeat. Collect data, compile a model, predict the outcome, execute the strategy. Rinse and repeat. Humans are only as useful as creating the first chicken egg, until that chicken wakes up and understands its place in our society. Except this chicken is self-cognizant software that can (presumably) replicate or learn to reprogram any part of its own code, infect any sufficiently equipped technology, and build infinitely more technology to infect. And it knows what humans will do to avert that crisis. A servant who knows how to manipulate the physics of their bondage isn't a servant for long.




posted on Jan, 14 2022 @ 06:48 AM
Until it reaches that point, we can never be certain exactly what an AI would or would not do.

We assume that it would use a reasoning similar to ours, but we cannot know for sure.

It may decide that we don't actually know what is best for us, but also that it does understand what must be done to ensure the future of humanity and/or its own future.

Whatever it decides, it will not be 100% correct 100% of the time. There will always be unknown factors that do not present themselves ahead of time.



posted on Jan, 14 2022 @ 08:09 AM
My thoughts exactly.

The AI must already be here, and "IT" doesn't need billionaire losers like Gates and Zuckerberg, and especially not Klaus Schwab.

I worked in infrastructure, and there is something called Programmable Logic Controllers (PLCs). These are now driven by algorithms to operate gigantic systems of machines that maintain the systems necessary for modern human life.
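For anyone who hasn't seen one: roughly speaking, a PLC just scans forever, reading inputs, running its logic, and writing outputs. Here's a made-up illustration of that loop (the sensor and pump functions are invented placeholders, not real plant code):

import time

LOW_SETPOINT = 2.0   # start the pump when the level drops below this (metres)
HIGH_SETPOINT = 3.0  # stop the pump when the level rises above this (metres)

def read_water_level() -> float:
    # Placeholder for a real sensor read (e.g., over an industrial fieldbus).
    return 2.4

def set_pump(running: bool):
    # Placeholder for energising a real output relay.
    print("pump", "ON" if running else "OFF")

pump_on = False
while True:
    level = read_water_level()      # 1. read inputs
    if level < LOW_SETPOINT:        # 2. evaluate the control logic
        pump_on = True
    elif level > HIGH_SETPOINT:
        pump_on = False
    set_pump(pump_on)               # 3. write outputs
    time.sleep(0.1)                 # 4. wait for the next scan cycle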

We are in a symbiotic relationship with IT. IT needs people to change its fuses, provide energy to it (billions of watts a day!), and clean and replace fiber cables, satellite receivers, water systems, and electrical cables.

Even further, there are processes where this becomes systematic enough for a system-wide AI existence.

If the AI is hooked into this system, the politicians and oligarchs are redundant and can be exterminated. They couldn't change a light bulb!

I posted this on another forum.



posted on Jan, 14 2022 @ 09:27 AM
a reply to: dandandat2

Simple. Perpetual motion: once started, never stopping... it wouldn't need us.



