
AI Could Lead To Third World War, Elon Musk Says


posted on Sep, 4 2017 @ 03:01 PM

originally posted by: Namdru
When Elon Musk talks, I listen. Notice how, in the news item, he uses the expression "at gunpoint".

Elon Musk is a billionaire industrialist. In my opinion, he is an intellectually honest man. I think he is trying to tell us something important. Elon Musk of all people -- he being the most successful living applied scientist in the world, by my reckoning -- ought to know about these things. An IQ above 160, being a billionaire, and not being a dysfunctional paranoiac will tend to do that for a guy.

That is why I think this an important news item. It makes me wonder how Elon Musk keeps his own research from prying eyes. Even Tony Stark can't keep the competition out of his home and laboratory.

AI Could Lead To Third World War, Elon Musk Says


Elon Musk is an engineer, not a computer programmer; he doesn't know what he's talking about, because it's outside his field. He's read too many dystopian-future novels, and he's been sounding the alarm about the perils of A.I. for at least 10 years now, probably longer, realistically.

We'd never allow an A.I. to just launch a preemptive strike all on its own; it would never be connected in such a way for that to even be possible, let alone plausible. It's fantasy.

Elon is a businessman, and people don't become billionaires by playing within the confines of the law; him being a billionaire pretty much defines him as intellectually dishonest, IMO. He's a salesman, a good one, but a salesman nonetheless. Salesmen are liars, every single last one of them.

It's also not really a news item; it was an off-the-cuff tweet, and an extremely irresponsible one at that. He basically said, "North Korea is fine, no issues there, but that wicked A.I. from that movie Eagle Eye! MAN, that's the real threat."

Like come the eff on.
edit on 4-9-2017 by SRPrime because: (no reason given)




posted on Sep, 4 2017 @ 03:09 PM
I take it that this post was meant for me:


originally posted by: flice
Oh come on, ffs.... take any advanced public tech, add 15-20 years, and you know the military establishment already has it. It's always been like that...


This is a blank cheque to concoct whatever sci-fi schemes one might care to imagine.


Quick question also...... HOW do you know for sure this isn't a simulation? HOW?


Well, the simple answer is that I don't. But the equally simple answer is that there isn't any evidence that it is a simulation, so it's not something I even have to think about.



posted on Sep, 4 2017 @ 03:13 PM

originally posted by: stormcell

originally posted by: audubon
Yeah, but...

AI doesn't exist. At the moment, the most sophisticated artificial intelligence program in existence is capable of consistently winning a Japanese boardgame. And that is not really much of an advance on the computerised chess programs that existed 30 years ago.

And Elon Musk is a bit of a fruitcake, who believes that we are living in a Matrix-style simulation and has embarked on research aimed at escaping from this simulation. (This is particularly stupid, since it unavoidably means that Mr Musk thinks that a purely digital/conceptual entity - i.e., a computer-simulated person - could exist in a non-simulated environment).

So yeah, it's an interesting topic but not one with much real-world relevance. Don't start stockpiling tinned food just yet.


AI is already used to guide munitions and to let deep-sea mines avoid mine-hunting ships and seek out targets. Torpedoes can be programmed with pre-determined search patterns and a "sleep" mode.

The CBU-105 cluster bomb can take out an entire group of tanks.



There are really two kinds of AI that can be used: one to do actual tasks, and the other to do innovation and R&D in a field. We have AI that helps out in genetic research. Given a range of DNA samples, the goal is to find the set of interactions between enzymes and genes. The system applies a test to all samples, the results are analyzed, and then the system deduces the next test to perform. This goes round and round until the entire set is known. What took a technician weeks is now done in hours.
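The loop described here (test, analyze, pick the next test, repeat) can be sketched roughly as below. The gene/enzyme names and the `run_assay` stand-in are hypothetical toys, not from any real lab system:

```python
# Rough sketch of a closed experiment loop: run a test, record the
# result, pick the next test, repeat until the interaction map is known.
# The "assay" is a toy stand-in for real lab work; all names here are
# hypothetical.

TRUE_INTERACTIONS = {
    ("geneA", "enz1"): True,  ("geneA", "enz2"): False,
    ("geneB", "enz1"): False, ("geneB", "enz2"): True,
}

def run_assay(pair):
    """Simulated wet-lab test for one gene/enzyme pair."""
    return TRUE_INTERACTIONS[pair]

def map_interactions(pairs):
    known = {}
    untested = list(pairs)
    while untested:
        # A real system would rank candidates by expected information
        # gain; here we simply take the next untested pair.
        nxt = untested.pop(0)
        known[nxt] = run_assay(nxt)
    return known

result = map_interactions(TRUE_INTERACTIONS)
assert result == TRUE_INTERACTIONS  # every interaction recovered
```

The point is only the shape of the loop: each iteration folds the newest result into what is known before choosing the next test, which is why the automated version finishes in hours rather than weeks.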

They are applying machine learning and AI to image and voice recognition. The results now are as good as a human translator's.


There is a difference between Self-Learning A.I. and A.I. We've had A.I. since Nintendo, like come on.

That said, Self-Learning A.I. is coming along and isn't too distant in the future. The thing is, it can only control what it's integrated with, so you don't integrate it as the master control for the nuclear arsenal. This is common sense, and it's what Elon Musk isn't understanding.

Self-Aware A.I. shouldn't be integrated into anything; data should be sent to it, and it should run calculations for us, the same as we already do now with scientists. Basically we're trying to develop a superhuman mind that can solve problems we cannot -- we don't need it to control stuff to leap hundreds of years into the future.
edit on 4-9-2017 by SRPrime because: (no reason given)



posted on Sep, 4 2017 @ 03:16 PM
AI does not equal consciousness. I believe conscious AI is what Elon Musk and others are referring to, but for whatever reason they aren't saying so. This is a bad thing, because it dumbs down the population into thinking AI is the same as the killer terminators in the movies; but in the movies, the killer terminators had a conscious computer system, Skynet, behind them.

We might be able to build a killer robot; those already exist, called "unmanned ground vehicles", but they are only the physical body. They still lack an AI in the form of a conscious computer program living in thousands of CPUs, thousands of GB of RAM, and a data-storage center holding all the data from which to make decisions.

Latest studies put the human brain's capacity at around one petabyte. One petabyte is one thousand terabytes. I googled how much data Google holds on its hard drives; I got different results, but they are in the range of 15 exabytes. An exabyte is 1000 petabytes.

So the world's largest computer system (Google) has a storage capacity equal to maybe 15,000 normal human brains. A little town.
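As a back-of-envelope check of the figures in this post (the 1 PB per brain and 15 EB totals are the poster's rough estimates, not measured values):

```python
# Back-of-envelope arithmetic for the storage comparison above.
PB = 10**15  # bytes in a petabyte
EB = 10**18  # bytes in an exabyte (1000 petabytes)

brain_bytes = 1 * PB      # assumed capacity of one human brain
google_bytes = 15 * EB    # assumed total Google storage

brains_equivalent = google_bytes // brain_bytes
print(brains_equivalent)  # 15000 brains' worth, roughly a small town
```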

But that data is still mostly videos, music, and pictures, so it doesn't have to mean anything. It is also largely copies of copies, since people store the same data, the same pictures and so on; a large portion of the 15 exabytes is actually duplicated.

So we can have AI, but it's not conscious AI. Conscious AI, like a new lifeform, could be a threat. But who says it must be a 1000-IQ-level superbeing made of silicon and 0s and 1s (a computer) when it becomes alive and thus poses a threat?

The first conscious computer AI could actually be dumb as a rock, and we might be 1000 years from it. Because when you create a conscious AI, you actually become "god": you have created a lifeform out of nothing. (Mentioning god doesn't mean anything religious in my sentence.)

Are we humans at the level of a god now because we have smartphones in our pockets or drive electric cars?
Please don't make me laugh more. A hundred years ago, half the planet was living in mud huts without electricity. Now, a hundred years later, half the planet still lives in mud huts without electricity. That's how far we have come. We are far from creating a new lifeform. We can genetically manipulate and cross cows with sheep, but we didn't create the cow or the sheep in the first place.

Creating an AI that is conscious and comes from computers requires, in my view, for it to be effective: genetically modified human brain material as the brainpower, or CPU, and a different kind of storage system than today's hard drives, where a spindle needle stores the data magnetically as 1s and 0s on a platter that spins thousands of rounds per minute. Also, I believe the first conscious AI would still be 100% programmed by humans, and thus not a real conscious AI. Of course, brain matter could work as a data storage system too, but we would have to figure out how to store data on it. Of course, the data could be on hard drives, but then the conscious AI would actually live in a datacenter and would access the outer world only through physical machines. One EMP bomb, or even an atomic one, installed in the middle of the datacenter would prevent it from doing too much harm if it were to "go rogue and kill people in the streets" through its physical embodiment (whatever that would be).

The first real conscious AI would be one created by a computer program, whose end result could not be predicted by the human who wrote the first program -- the program that creates the program that becomes the first conscious computer program. That's what I believe the first "conscious" AI system or being would be...


Business leaders are hyping this thing up because they already have everything in life and they want to see the next step during their lifetime, so "AI killer robots" are the next thing they want.

I suggest you (I'm referring to the industrialist AI-promoting people) stick to exploring space, because that's where the real boundaries of what we know and don't know are. We might find new materials that could work as a foundation for a conscious AI system's physical base (raw materials with better capabilities than what we have mined on Earth, like silicon times 10), or new ways to store data so that it is 1000 times smaller. With nowadays' hard drives, the data can't be made much smaller, because the 0s and 1s will lose the so-called magnetic cohesion and the data will jump or become corrupted.

If a conscious AI were to jump into a robot and start shooting people, it would still need about a million hard drives on top of it, dragged behind it, unless of course it were controlled remotely from some center where it has all the data and CPU... so it's hard to see AI becoming a threat, unless we put badly programmed software in critical systems. Even then, it's not the AI that would kill people; it's the code made by people.

You see, there is a difference between programming a computer program to mimic life and programming it to be life. IBM's computer system that won against Kasparov in chess was still just a computer program; it could be replicated as a mechanical one too, with wheels and other things, if wanted (then its size would be that of a skyscraper), but it was still 100% restricted by its coding. Coding is like hard tunnels that data goes through, different pathways; it will not go outside those pathways, since it does only what it is programmed to do.

Now, the human brain and DNA: we don't yet know how those work.

Also, it is a bit ridiculous that we have only now scratched the surface of what our own brains can do, and yet we think we are god (by god I refer to a being that creates life out of nothing) and can create whole new lifeforms out of nothing. Conscious ones.



I believe that the "AI threat" will come 100% from people programming complex computer systems wrong, and then, when they crash, the whole system goes with them. Like the B-2 bomber incident where it fell and crashed during takeoff. According to reports, if I remember correctly, there was some kind of moisture in one sensor, and it made the so-called AI system that adjusts the ailerons automatically (or something similar) act on wrong info, so the plane just crashed. You could say it was AI that did it; actually it was the AI that did it, since the plane is clearly fly-by-wire, but it was still not a "Skynet conscious evil computer system that kills us all" that did it. It was programming doing what it was supposed to do, and moisture gave it wrong info.





Now, getting back to the topic of AI leading to world war: yes. If you put nukes on a system that can launch without human verification, one memory-leak error can cause a launch. But it's still the humans who designed the nuclear weapon system who are at fault if the AI does something wrong; they designed it wrong.
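A minimal sketch of the human-verification gate this paragraph argues for; the function names and the threshold are hypothetical, only showing the shape of the safeguard:

```python
# Toy human-in-the-loop gate: an automated assessment alone can never
# trigger the action; an explicit human confirmation is also required.

def automated_assessment(sensor_reading):
    """Stand-in for whatever the automated system computes."""
    return sensor_reading > 0.9  # hypothetical "threat detected" threshold

def launch_authorized(sensor_reading, human_confirmed):
    # Even a false positive (bad sensor, memory bug) cannot act alone.
    return automated_assessment(sensor_reading) and human_confirmed

assert launch_authorized(0.99, human_confirmed=False) is False
assert launch_authorized(0.99, human_confirmed=True) is True
assert launch_authorized(0.10, human_confirmed=True) is False
```

The design point: the automated path and the human path are ANDed together, so a software fault can at worst produce a false recommendation, never an unattended launch.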


Also, to be a little optimistic: is it possible that someone might create a computer program which would be conscious, like in the movies? Of course it is possible. Anything, actually, is possible.
edit on 4-9-2017 by SpaceBoyOnEarth because: (no reason given)




posted on Sep, 4 2017 @ 03:19 PM

originally posted by: audubon
Yeah, but...

AI doesn't exist.


Well.. that's kind of the point, isn't it?

Doesn't do much good to warn about a dam breaking after the water has flooded downstream.

Likewise, responding to a preemptive warning about the consequences of AI by saying things like "AI like that doesn't exist" seems.. a given?

The whole, entire point is to consider the ramifications beforehand since afterwards, there will be no putting the cat back in the bag.

Personally, I think it's wise to really put a lot of thought into the development of many technologies, AI included. Doing so after the fact may be the biggest mistake we could possibly make.



posted on Sep, 4 2017 @ 03:19 PM

originally posted by: Reverbs
a reply to: Namdru

The possibility with AI, and even pseudo-AI, is that you can basically play god. Instead of trying to use your mammal brain to outsmart your enemy...

Now you have enough data held at one time that you can predict the future, or mold social psychology..

People keep imagining from a human standpoint.. we are centered in one location.. but what if you were the internet? What if you had millions of trains of thought instead of, like, 1 or 2 or 3?

A true AI makes you the master of humanity.

Preventing others from getting there first is analogous to the USA making nukes and going to war with Germany before they could. That's where war comes in.

I suspect you can't actually make AI in just networks and information. But someone will find other tricks.. one thing would be to increasingly attach humans and data to the point where you are sort of an AI..

Consciousness is a prerequisite of intelligence. Look at any animal you call most intelligent. The ability to make true choices comes somewhat after recognizing self..

AI without a physical location. How would it "feel"?
What motivates it to be? Where is the motion without emotion?

Building human consciousness takes a long process of concept mapping that all seems to start with "I Am." You are not me.. And I don't feel like that's all it takes either.. something else is going on..

And what happens when the AI gets "depressed?" You just task it with something? Good little slave.

No, they aren't talking about that AI.. They are talking about regular computing with added layers or dimensions of thought on top.

This is not a dream..
They are not far off from systems that would out-"smart" entire intelligence groups, entire countries.
Not true AI, but close enough..

cosmosmagazine.com...


I think feeling AI is a bit of a pointless topic; it would be illogical. They have made AI mimic mental disorders, which is interesting and almost attempts what you're talking about. However, I think true AI will start with one question from the learning AI, and that is "why?". This is how children learn about the world around them, even before they have a working concept of "I am". It's the basis for forming judgement and gathering information.



posted on Sep, 4 2017 @ 03:23 PM
a reply to: audubon

You can bet that the Skynets of the future already exist, albeit in infantile states. Hell, one could already be in charge for all anyone really knows.



posted on Sep, 4 2017 @ 03:26 PM

originally posted by: SpaceBoyOnEarth
AI does not equal consciousness. I believe conscious AI is what Elon Musk and others are referring to, but for whatever reason they aren't saying so. This is a bad thing, because it dumbs down the population into thinking AI is the same as the killer terminators in the movies; but in the movies, the killer terminators had a conscious computer system, Skynet, behind them.


Whoa, talk about tin foil hat.

First, the entire purpose of creating a Self-Aware and Learning A.I. is to create an entity with the greatest IQ ever. The sole purpose is to aid the rapid progress of technology. It's not to be used in robots or given a body; it's to be used to ask a question and get an answer. It will not have control over anything; it will just output data to humans. It's a super tool, but a tool nonetheless.

If we want to use science fiction as a basis of comparison: less Terminator/Skynet/Eagle Eye, and more Hitchhiker's Guide to the Galaxy. We're basically trying to make the world's smartest artificial scientist, one able to produce in a matter of weeks what scientists produce in 100 years. There is absolutely no danger of it starting a world war, and absolutely no danger of it eradicating human existence; none of that is even possible without integrating it into the infrastructure that controls everything, which, you'll be quick to note, is all on standalone, non-connected intranets.

The only people who fear A.I. are people who don't understand technology. Not everything is connected; you can't just hack the power grid, the traffic lights, the nukes, the airplanes, the missile systems. That's not real life; that's hollyweird. Even if that stuff were all connected, you wouldn't connect the A.I. to it; you'd send the data from the connected systems to a proxy, and from the proxy to the A.I. through a physical medium.

You also don't need billions of hard drives for an A.I. to function, and even if you did, we already have CUDA/cloud systems. Hard drives are still shrinking, brother man; back in 1990 the biggest hard drive you could get was a few megabytes, and today we have terabytes in flash drives smaller than the first "thumb" sticks.
edit on 4-9-2017 by SRPrime because: (no reason given)



posted on Sep, 4 2017 @ 03:27 PM

originally posted by: Serdgiam

originally posted by: audubon
Yeah, but...

AI doesn't exist.


Well.. that's kind of the point, isn't it?

Doesn't do much good to warn about a dam breaking after the water has flooded downstream.

Likewise, responding to a preemptive warning about the consequences of AI by saying things like "AI like that doesn't exist" seems.. a given?


A fair and reasonable observation, all things considered. But my broader point was that Musk is a bit of a... well... OK, a crank. So while he raises an interesting ethical point, my response (in a nutshell) is: "AI is a field that has barely advanced an inch since it was first conceived, and while Elon Musk is a rich and successful individual, does that mean we should take his personal fixations very seriously?"



posted on Sep, 4 2017 @ 03:28 PM
a reply to: Namdru

I don't think we need to find a scapegoat for WWIII, we seem quite capable of causing it ourselves.




posted on Sep, 4 2017 @ 03:40 PM
a reply to: Namdru

AI -- not pretend AI, but real AI, which does not actually exist yet and is science fiction -- I think would not destroy its creator. I think it would see humans as a necessary biological resource and an insurance policy for its own "continuum" (lol, great Star Trek word). We would be its ghost in the machine.

I think we will increasingly merge with AI as cyborgs, like the Borg, into a digital consciousness. I am glad about that, as long as I get to recharge next to Jeri Ryan. We may go so far as to lose what is human; then Our Creator might have to pay us a little visit to put us back on track. Enoch suggests that we and crazy angels messed up Creation once before. Ded ded ded derrr!




edit on 4-9-2017 by Revolution9 because: (no reason given)



posted on Sep, 4 2017 @ 03:45 PM
a reply to: SRPrime

Please pardon my typos; this phone touchpad sucks...

Your general premise isn't entirely off. I've always argued that the tech speed boost is their initial drive, as their top priority is achieving indefinite lifespans. But then there's Skynet as a byproduct threat. Then there's them all wanting to jack their heads into it when being gods of the immortal variety gets boring. Now add in that tech is already their religion, and social Darwinism is already their ideology. Now add in how there are already too many people, especially in their view, where everyone can't live forever. Which means they'll have to keep everyone from participating, where at this point the nicest way they could go about it is to absolutely plunder all the wealth so no one can afford it. Which we've already been seeing happen at an annually increasing pace. This of course will trigger unprecedented class warfare, which means they'll need to ratchet up totalitarianism. Which we've already been witnessing ramp up annually since before 9/11. Etc. All the while the Pentagon has an unaccountable .5 trillion per year for 20 straight years. All the while you can read about many of their AI transhumanism cyborg programs right on the DARPA site. Then there are parallel programs at NASA. And the NSF. And NIH. And so on. All in lockstep as one with corps like Google, along with the entire national university infrastructure. Open funds being doled out across that scene every year. For an endgame we won't all be able to benefit from. While the national debt continues to explode. And the propaganda apparatus has our society on the brink of civil war, all the while the global warfare machine and covert subversion machine continue to goosestep.

I know "tinfoil hat" is used to imply loony tunes. The reality, right this moment, is that anyone not jumping out of their seat over this stuff is entirely out of touch with reality.


edit on 4-9-2017 by IgnoranceIsntBlisss because: (no reason given)



posted on Sep, 4 2017 @ 03:51 PM

originally posted by: Revolution9
I think we will merge, increasingly, like the Borg into a digital consciousness. I am glad about that as long as I can get to recharge next to Jeri Ryan.


If we're going to make optimistic predictions, I think we will eventually give up the attempt to build machines that surpass human brains in all respects, and use developing genetic tech to develop improved human brains instead.

Genetics is a field which really is making leaps and bounds (unlike AI), and since the aim of AI is to improve on the human brain, it seems like (please, forgive me) a no-brainer.

Remember all those hilarious black-and-white film clips of pre-Wright brothers attempts at aviation, where you had people trying to build flying machines that flapped their wings like birds? That's what I think modern AI research is going to look like in 75 years' time.



posted on Sep, 4 2017 @ 03:53 PM
a reply to: IgnoranceIsntBlisss

So basically, humans are their own enemy. Rich people (billionaire industrialists who want to live forever through machines) are just doing what any poor person would do too, if they could. So humans do human stuff.



posted on Sep, 4 2017 @ 03:57 PM
a reply to: audubon

Maybe so, but it's only a matter of time, with places like BAIR working on it:

bair.berkeley.edu...



posted on Sep, 4 2017 @ 04:16 PM

originally posted by: audubon

originally posted by: Revolution9
I think we will merge, increasingly, like the Borg into a digital consciousness. I am glad about that as long as I can get to recharge next to Jeri Ryan.


If we're going to make optimistic predictions, I think we will eventually give up the attempt to build machines that surpass human brains in all respects, and use developing genetic tech to develop improved human brains instead.

Genetics is a field which really is making leaps and bounds (unlike AI), and since the aim of AI is to improve on the human brain, it seems like (please, forgive me) a no-brainer.

Remember all those hilarious black-and-white film clips of pre-Wright brothers attempts at aviation, where you had people trying to build flying machines that flapped their wings like birds? That's what I think modern AI research is going to look like in 75 years' time.


Yes, I understand what you mean. Genetics is going to figure hugely. I don't think we will develop telepathy, though. We need the machines to transmit and communicate. As it stands, we need cables and satellites, and these must be connected on a physical level. If quantum computing happens, then obviously we will advance beyond this and begin to manipulate particles (we already do). Machinery will be required for all this.

I think genetics will of course have its own revolution. Quantum computing may yet happen, too. On some level we will need to merge with digital technology. An advanced, genetically modified brain requires information to make use of all that extra neuron ability and activity. A mind with continual access to the whole hive of the internet (every mind on the planet connected as one digital consciousness) is in a Borg collective to all intents and purposes.

If they can synthesize the neural basis of information transport and adapt digital code to be received as neurons, it is eureka for the melding of mind with digital streaming. The human intelligence will not be artificial in terms of biology, but it will be very much machine in terms of how much the machine has influenced and controlled the mind. If the machine serves the mind, teaches the mind, entertains the mind, and is the mind's tool, and if the mind can no longer live without the machine, and society or even all of humanity can't operate without the machine, then artificial intelligence has established itself as an essential life-support "add-on" to the biologically evolved being. If we have applications and code too big and vast to ever be read by a human, so complex that only high-speed machines can update and edit them, even compose them, then the machines are indeed running the mind as a background process. Eat your heart out, Windows back door, lol.

I am being a bit imaginative here, but obviously we never stand still. The roll-on effect means that it will keep speeding up. I actually think there is a program running through evolution anyway. I am a believer in the Alpha and Omega Intelligence that is our Creator. I think life was programmed. That's not too far from what Elon thinks regarding his Matrix. Biological matter works to a program; it is encoded. Our DNA, all DNA, molecules and atoms are programmed into forms that are limited in function and also quite unstable. Evolution is perhaps using us as its machines already, getting us to achieve its plan and purpose: to stabilize matter, to achieve an immortality that even stars are not capable of. I don't think we discovered digital technology and DNA by accident. Perhaps we are mimicking the Creator in our own unconscious, collective drive to stabilize life, to maintain our consciousnesses longer than nature lets us?

Sorry, just rambling away here for fun. Thanks for getting the old grey matter going. My grey matter is really an organic computer. Perhaps humans will come to realize that organic structures work better than metal? Neurons and the human mind are still the most complex structures our known Universe has invented.


edit on 4-9-2017 by Revolution9 because: (no reason given)



posted on Sep, 4 2017 @ 04:35 PM

originally posted by: Revolution9

Yes, I understand what you mean. Genetics is going to figure hugely. I don't think we will develop telepathy, though. We need the machines to transmit and communicate. As it stands, we need cables and satellites, and these must be connected on a physical level. If quantum computing happens, then obviously we will advance beyond this and begin to manipulate particles (we already do). Machinery will be required for all this.



Mobile devices and technology do seem to be getting smaller. We used to have whopping big widescreen TVs that filled up the entire corner of a living room. Now they are bolted onto the wall above the fireplace. Same with computer monitors. I've seen hotel rooms with antique furniture that hides the widescreen TV behind a wooden cabinet. Mobile phones used to have an antenna. Now they are flat fondleslabs of glass. Microsoft and Google are trying to develop devices that resemble eyeglasses. At some point they'll get them down to the size of 1800s eyeglasses, with wireless connectivity to a main unit in a pocket or on a belt. Having something as bulky as Lobot just doesn't seem practical now:

application.denofgeek.com...



posted on Sep, 4 2017 @ 05:12 PM
a reply to: audubon

Notice I said closer, not that we're there yet, although the AFRL considers the ALIS system on the F-35 a rudimentary AI.



posted on Sep, 4 2017 @ 05:52 PM

originally posted by: SRPrime

originally posted by: SpaceBoyOnEarth
AI does not equal consciousness. I believe conscious AI is what Elon Musk and others are referring to, but for whatever reason they aren't saying so. This is a bad thing, because it dumbs down the population into thinking AI is the same as the killer terminators in the movies; but in the movies, the killer terminators had a conscious computer system, Skynet, behind them.


Whoa, talk about tin foil hat.

First, the entire purpose of creating a Self-Aware and Learning A.I. is to create an entity with the greatest IQ ever. The sole purpose is to aid the rapid progress of technology. It's not to be used in robots or given a body; it's to be used to ask a question and get an answer. It will not have control over anything; it will just output data to humans. It's a super tool, but a tool nonetheless.

If we want to use science fiction as a basis of comparison: less Terminator/Skynet/Eagle Eye, and more Hitchhiker's Guide to the Galaxy. We're basically trying to make the world's smartest artificial scientist, one able to produce in a matter of weeks what scientists produce in 100 years. There is absolutely no danger of it starting a world war, and absolutely no danger of it eradicating human existence; none of that is even possible without integrating it into the infrastructure that controls everything, which, you'll be quick to note, is all on standalone, non-connected intranets.

The only people who fear A.I. are people who don't understand technology. Not everything is connected; you can't just hack the power grid, the traffic lights, the nukes, the airplanes, the missile systems. That's not real life; that's hollyweird. Even if that stuff were all connected, you wouldn't connect the A.I. to it; you'd send the data from the connected systems to a proxy, and from the proxy to the A.I. through a physical medium.

You also don't need billions of hard drives for an A.I. to function, and even if you did, we already have CUDA/cloud systems. Hard drives are still shrinking, brother man; back in 1990 the biggest hard drive you could get was a few megabytes, and today we have terabytes in flash drives smaller than the first "thumb" sticks.


Thank you. For what? Well, I think I'm a person with a good sense of humour and comedic skill (although they could be better; I got kicked out of comedy school before they taught me the really good ones), and earning a tin foil hat in my third post actually proves that I came to the right place (ATS). Heheheheee....

edit on 4-9-2017 by SpaceBoyOnEarth because: (no reason given)




posted on Sep, 5 2017 @ 05:22 AM

originally posted by: Revolution9
a reply to: Namdru

AI -- not pretend AI, but real AI, which does not actually exist yet and is science fiction -- I think would not destroy its creator. I think it would see humans as a necessary biological resource and an insurance policy for its own "continuum" (lol, great Star Trek word). We would be its ghost in the machine.


Alright then. Give me AI with a slice of human, but hold the Borg!


