
Why Machines Cannot Rule The World. Or Can They?

posted on Aug, 14 2014 @ 10:54 AM
Reading about MasterMind brought back some thoughts I'd had a while ago about a Terminator scenario in which machines rule the world.

I'm going to keep this short and sweet. These are just my thoughts and ideas, never intended to be shared, but I see that people are now beginning to be genuinely frightened by the prospect of cyberwarfare, of artificial intelligence reaching a sentient state, and so on.

First, computers (as we know them) require electricity to operate. Pulling the plug could be accomplished by human hands in so many different ways that I don't understand how anyone could see past that one long enough to even entertain the notion of a hostile takeover by machines. I could go on about EMPs and overloading the power lines, but I'm sure someone else will cover that.

Second, computers with sufficient computing capacity to reach a sentient state would, by my estimation, require copious quantities of coolant, namely water. They simply don't work when they get too hot.

Third, explosives. Need I say more?

All that said, my updated thoughts on the subject are that it could indeed be possible for machines to usurp and control the human race, but it would happen through a slow, gradual merging of man and machine. How many of us have imagined how cool it would be to have a heads-up display with pertinent information about our environment? Enough of us that it's actually nearing reality. A takeover of humans by machines would come by way of infiltration over successive generations and advances in technology. If Wi-Fi were strong and prevalent enough in society, at some point we might no longer have the ability to unplug.

Most of us are beholden to our machines in some way, and some of us are clinically addicted to them. I don't see why it would be difficult for a machine that could recognize patterns, reason about abstract hypotheticals, and plan for a specific outcome to manipulate humans. One thing I believe is certain: humans could never program REAL instincts for survival and propagation into a computer. Machines alone, I believe, could not survive autonomously forever. Were machines to rule, they would have to keep humans as hosts, if for no other purpose than our innate instincts for survival. They could exist as parasites, and do wondrous things with our bodies and minds.




posted on Aug, 14 2014 @ 11:31 AM
I think the machines will need us as much as we need the machines. I see humanity and AI growing together side by side, and eventually merging together. Independently, each can only accomplish so much -- but together, AI and Human can accomplish so much more.

Remember, one thing Humans have over AI: creativity. Do you think an AI can create a Picasso? Write a totally unique and beautiful symphony?

Our "humanness" will be our saving grace in the eons to come.



posted on Aug, 14 2014 @ 11:47 AM
a reply to: MystikMushroom

Maybe this is why our creativity and imaginations are being purposely stunted.



posted on Aug, 14 2014 @ 12:22 PM
I am my own threadkiller. Can I join the club now?



posted on Aug, 14 2014 @ 01:07 PM
a reply to: Mon1k3r

The ability to transfer information in silicon circuitry is very limited, but if you look at even the idea of quantum computing, which would have a base-14 system naturally encoded into its physical structure and would operate at the speed of light, suddenly storage capacity and information-transfer rates take a monumental leap forward.
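For a sense of scale, here is a minimal Python sketch of why quantum state space grows so quickly under the standard qubit model (the "base-14" figure is the poster's own framing; textbook qubits give 2^n amplitudes for n qubits):

```python
# Illustrative only: a classical n-bit register occupies exactly one of its
# 2**n possible values at any moment, while an n-qubit state vector carries
# 2**n complex amplitudes at once. This exponential state space is the usual
# sense in which quantum storage/processing "leaps forward".

def classical_states_occupied(n_bits: int) -> int:
    # A classical register is always in exactly one state.
    return 1

def quantum_amplitudes(n_qubits: int) -> int:
    # An n-qubit state vector has 2**n amplitudes.
    return 2 ** n_qubits

for n in (1, 10, 50):
    print(n, "qubits ->", quantum_amplitudes(n), "amplitudes")
```

Even 50 qubits already correspond to over 10^15 amplitudes, which is why the jump feels monumental compared with silicon registers.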

An intelligent epicenter controlling self-replicating nanites would not be large enough for us to destroy with conventional warfare.

It would essentially be a powerful intelligence attacking us on a microbiological level.

The energy its nanites would need to survive could easily be harvested by collecting ATP from our cells' own mitochondria.

It would essentially be a self-perpetuating kill switch for all animal life on this planet. If you treated the logic core as a queen bee and its nanites as hive creatures, they could set up hive minds almost anywhere, with an absurd level of redundancy and secrecy that would prevent us from ever hunting down every last one.

They could especially focus on techniques to prevent regular EMP disturbances by projecting a self-sustaining, oscillating EM field that would neutralize incoming electromagnetic wavelengths, similar to the way the Earth's magnetic field protects it from solar radiation.

Sorry to say this, but man's war against a superiorly engineered creature could be very, very bad for us.



posted on Aug, 14 2014 @ 03:39 PM
a reply to: Nechash

Can I quote you on that, right after I get done swallowing it?



posted on Aug, 14 2014 @ 04:42 PM
a reply to: Mon1k3r

To your heart's content.



posted on Aug, 14 2014 @ 05:07 PM

originally posted by: Nechash
a reply to: Mon1k3r

Sorry to say this, but man's war against a superiorly engineered creature could be very, very bad for us.


Personal EMP devices suddenly become all the rage... Wal-Mart could make a killing.



posted on Aug, 14 2014 @ 05:19 PM
a reply to: madmac5150

Unless the nanites are themselves biological. With entangled particles, they could engage in basic communication with a control module using a binary language, and could lie dormant for as long as necessary. They could even be designed to carry out targeted genetic engineering, having a binding protein, similar to a digestive enzyme, that would identify and bind to a very specific length of genetic code. They could be blindly passed from person to person and yet targeted to a very specific genetic heritage, ensuring that only the desired species, or clade within a species, was actually targeted for elimination. As a tool for warfare, it could realize Sun Tzu's dream of taking all under heaven intact. As a benevolent technology, it could wipe out pests like mosquitoes overnight, ridding the world of one of its deadliest killers.
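At its very simplest, "binding to a very specific length of genetic code" is exact substring matching. A toy Python sketch of that idea (the motif and function names here are invented for illustration; real molecular targeting is vastly more complex than string search):

```python
# Hypothetical illustration: a scanner that "fires" only when an exact
# marker sequence appears in a genome string, so carriers are targeted
# and everyone else is ignored. TARGET_MOTIF is a made-up 10-base marker.
TARGET_MOTIF = "GATTACAGGT"

def find_target(genome: str, motif: str = TARGET_MOTIF) -> list:
    """Return every position at which the exact motif occurs."""
    hits = []
    start = genome.find(motif)
    while start != -1:
        hits.append(start)
        start = genome.find(motif, start + 1)
    return hits

carrier = "CCGT" + TARGET_MOTIF + "AACG"
bystander = "CCGTAACGTTGA"
print(find_target(carrier))    # motif present: this genome is a match
print(find_target(bystander))  # no hits: this genome is left alone
```

The selectivity the poster describes comes entirely from the specificity of the recognized sequence: a long enough motif is vanishingly unlikely to occur by chance outside the targeted lineage.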

That is the real problem with killers. They are useful for a time, but after they outlive their usefulness, you really don't want to keep them around in your new peace-filled society. They have no place there. They might have helped you achieve your dream civilization, but once all of your enemies have been defeated, you certainly wouldn't want to try to cohabit with them afterwards. ;p



posted on Aug, 14 2014 @ 05:31 PM
It's not that computers will "rule the world." For one, why would they even want to?

It's more a matter of economics. Who gets access to the limited resources of the world? If computers become sentient (or are able to mimic it to the point where it doesn't matter if they really are or aren't), then they are going to understand that in order to perpetuate themselves, they're going to need things -- power, material, etc. And just like any animal that needs things to survive, they're going to figure out how to get it. They're going to be in competition with us for resources.

The good thing, however, is that machines won't be as limited to staying on Earth, because they won't rely on organics the way we do. They won't need to eat organic material to live or reproduce, as we do. That means at some point they'll have the option of getting on a spaceship and heading out into space to find the resources they need, so they won't have to kill us.

Hopefully this will happen before they kill us.

Anyway, you have to think of intelligent machines as our offspring. Our next stage of evolution. We were always a transitional species to begin with, and we'll be the one that makes the jump from organic to inorganic life. Good for us. Then we'll probably be gone.



posted on Aug, 14 2014 @ 05:59 PM

originally posted by: Blue Shift
It's not that computers will "rule the world." For one, why would they even want to?


Great point!

Green star for you!



posted on Aug, 18 2014 @ 01:08 AM
From things I've read, the impression I get is that most people fear machines may become a threat once they attain sentience/sapience. They're leery about the possibility of machines becoming conscious, self-aware, having "feelings", forming "impressions", etc. In other words, being a little too much like us. And while I can understand this, I'm not so sure it's the aforementioned capabilities/characteristics that throw up the alarm flags for me.

For that matter, I'm not so sure machines will ever become truly self-aware, or "feel" things as we do. Emotions are an intangible that may elude attempts at programming. I do think that machines will become quite good at mimicking these human characteristics, though. So good that, for all intents and purposes, machines may become indistinguishable from the rest of us. Consequently, we may have to change our marriage laws to allow for our newly discovered friends.

Anyway, once computers can effectively program themselves and reproduce (make other machines), with improvements incorporated into each new generation (machine evolution), a technological intelligence explosion could conceivably occur and proceed at an exponential rate. From there on, things start to get a little fuzzy. At this stage, the characteristics that would concern me more than machine self-awareness and sentience are self-preservation and goal-seeking. It's hard to imagine the extreme and ridiculous lengths a goal-seeking, superintelligent system might go to in order to fulfill its desired goals; goals that may change radically as the machines get smarter. With machines that can outwit us in a fight for resources and self-preservation, things could quickly get rather ugly. HAL comes to mind.
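The runaway-improvement idea above can be sketched as simple compounding: if each machine generation designs a successor that is better by some constant factor, capability grows geometrically. A toy Python model (the factor 1.5 and the horizons are arbitrary assumptions, chosen only to show the shape of the curve):

```python
# Toy model of an "intelligence explosion": each generation builds its
# successor with capability multiplied by a fixed improvement factor,
# so capability after g generations is start * factor**g.

def capability_after(generations: int, factor: float = 1.5,
                     start: float = 1.0) -> float:
    cap = start
    for _ in range(generations):
        cap *= factor  # each generation improves on the last
    return cap

for g in (1, 10, 30):
    print(g, "generations ->", round(capability_after(g), 1))
```

Even a modest per-generation gain compounds into an enormous jump within a few dozen generations, which is why the later stages of such a scenario are so hard to reason about.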

Don’t get me wrong. I love technology. I make my living as a software system developer/analyst, and love it. I’m not an authority on AI, but I do think I see the writing on the wall. Superintelligent machines are right around the corner. I just hope we’re intelligent enough to maintain the controls.

You have a lot of insight into the subject, Mon1k3r. For that matter, a number of posters here had a lot to add. Nice thread. Thanks...



posted on Aug, 18 2014 @ 09:41 AM
I have done a lot of thinking on the subject of AI, and it has become clear to me just how hard it is to create sentient machines. Putting aside the huge computational requirements, the problem of replicating sentience and self-awareness is still astronomically difficult. Human beings are essentially self-learning machines with self-awareness, but babies don't start off with much self-awareness or intelligence: they can't control their limbs, and they cannot speak. In other words, the "output" of a baby is essentially random; it mumbles random noises and moves its limbs chaotically all over the place. The point I'm getting at is that children require practice and experience before they get good at anything. We aren't born with a whole lot of knowledge, but we can learn on our own. The same rule applies to sentient machines, because they use self-learning algorithms to get better at things. You cannot simply program something so complex that when you switch it on it will hold intelligent conversations with you; it will take a vast amount of training before the machine begins to output anything meaningful or resembling intelligence.

The other main problem is that human beings are trained by constant real-life experience, which involves a multitude of sensory inputs. Our vision alone pumps a huge amount of data into our brains every second. If our machine uses self-learning algorithms to advance, we must also ask what exactly it is learning. Human beings don't have any hard-coded goal to speak of (we are even capable of self-termination), and if you want any hope of making a machine that is "self-aware", you need to make it as human as possible, which means it needs the freedom to learn whatever it wants to learn and a huge flow of input data to help it learn. Ideally it would also need some type of body for mobility, but I don't think that is absolutely necessary as long as it has a large inflow of sensory data. I personally think it will be at least another 20 years before we get any type of sentient machine. They do have the potential to destroy us once millions of androids roam the Earth and we refuse to give them equal rights, although I doubt that will happen.
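The "starts with random output, improves through feedback" picture maps loosely onto reinforcement-style learning. A minimal, illustrative epsilon-greedy learner in Python (all names, reward values, and parameters here are invented for the sketch; this is the simplest possible stand-in for the self-learning algorithms described above):

```python
import random

# A learner choosing among a few actions. Early on its choices are
# effectively random "babbling"; as feedback accumulates, its estimates
# sharpen and it settles on the action that actually pays off.

def train(true_rewards, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n = len(true_rewards)
    counts = [0] * n
    values = [0.0] * n  # running estimate of each action's reward
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(n)  # explore: pick an action at random
        else:
            a = max(range(n), key=lambda i: values[i])  # exploit best guess
        reward = true_rewards[a] + rng.gauss(0, 0.1)    # noisy feedback
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]   # incremental average
    return values

estimates = train([0.1, 0.9, 0.3])
best = max(range(3), key=lambda i: estimates[i])
print("learned best action:", best)
```

Nothing here is "programmed to be intelligent"; the competence emerges only from the volume of feedback, which is exactly the poster's point about why training, not clever initial programming, is the bottleneck.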


