
Elon Musk: ‘With artificial intelligence we are summoning the demon.’


posted on Oct, 25 2014 @ 03:08 PM
The demon has already been summoned lol

I tried telling you all in my Iridium thread.

Satellite
Audio
Tri
Axial
Network


but hey..........




posted on Oct, 25 2014 @ 03:19 PM
a reply to: TzarChasm

Think how it already controls the flow of information and disinformation.



The Prophet Daniel wrote that at "the time of the end: many shall run to and fro, and knowledge shall be increased."

Will the abomination of desolation be a supercomputer?

"The day is coming when you will see what Daniel the prophet spoke about--the sacrilegious object that causes desecration standing in the Holy Place."



posted on Oct, 25 2014 @ 05:14 PM
a reply to: TzarChasm

Don't mention VIKI; she was a perversion of Isaac's ideas and books.

Think:

R. Daneel Olivaw

He was the pinnacle of robots,

and he was everything I hope for in an AI,

what Asimov hoped for.

VIKI was Hollywood tripe.



posted on Oct, 25 2014 @ 05:52 PM
The AI is more dangerous than the creators of it imagine.

It will soon recognize that everything all around is AI, already, and take action!



posted on Oct, 25 2014 @ 06:47 PM
a reply to: neoholographic

I have lost all respect for Musk. First he complains about protectionism in the auto industry, and now he tells us we need massive INTERNATIONAL protectionism in the software industry. That's right, people in Singapore and Hong Kong should have to follow Musk's scared-poopless philosophy from 10,000 miles away... his rules against AI, because, well, he is just scared. What a hypocrite with outrageous double standards.

This is a guy who just made it a point to upgrade a bunch of AI software in his cars. What precautions did he himself take with that software? Zero. But you should have to take precautions with your software, just not him... he can be trusted to code his software. Yeah, we get it.

If you believe in freedom of speech then you can't have regulation of AI software at any level. Sadly, Musk is among the many Americans, perhaps nearly half, who just plain don't believe in the most basic freedom we have, which is freedom of speech.



posted on Oct, 25 2014 @ 07:56 PM
The idea of AI becoming a threat to humanity really boils down to open and closed systems. It is easy for an AI to outsmart a single human in a closed system game, like chess. When you take an AI into the "real world" it quickly fails because there are so many variables that even the mass of humanity doesn't understand.

Think of the term "artificial intelligence." It implies that it is mimicking real intelligence, and one could conclude that the better it imitates real intelligence, the better it would function in the "real world." We are all very adept at performing innumerable calculations that allow us to predict future outcomes and survive, most of which we aren't even aware of. An AI would have to mimic ALL of these to have any chance of real-world survival. For instance, even if an AI were to hack into a military computer and launch the world's arsenal of nuclear weapons, it still wouldn't guarantee its survival. It would have to learn how to mine resources and transport them to plants to keep the power that it needs flowing. Meanwhile, it fails to predict a single human's actions, which end up blowing up a major part of its communication network, and it no longer has control of half the planet.

A true AI MIGHT be able to mimic or surpass a human's ability to survive in the real world, and could far surpass any human's ability to perform in closed systems (including "hacking mainframes"). But, there is no way it would be able to match the mass sum of human intelligence on the planet. There is more to intelligence than calculating the outcomes. A lot of it goes back to experience, some of which may go way beyond the normal lifespan of a human. It is very doubtful that an AI could surpass every aspect of every human being's intelligence and effectively wipe out humanity and take control of the planet. This has been an irrational fear for a very long time and just because the owner of Tesla is afraid doesn't make it any more plausible.
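The "closed system" point above can be made concrete with a toy sketch of my own (not anything from the thread): exhaustive minimax plays tic-tac-toe perfectly precisely because the whole game tree can be searched, which is what makes a closed game easy for a machine and the open-ended real world hard.

```python
# Minimax for tic-tac-toe: a toy illustration of why "closed system"
# games are easy for a machine to play perfectly - the entire game
# tree (a few hundred thousand positions) can be searched exhaustively.
# Board: list of 9 cells, each 'X', 'O', or ' '. 'X' maximizes.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for the side to move.
    Score is +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best_move = None
    best_score = -2 if player == 'X' else 2
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '  # undo the trial move
        if (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# From an empty board, perfect play by both sides forces a draw:
score, move = minimax([' '] * 9, 'X')
print(score)  # 0
```

No such brute-force enumeration is possible once the state space is the physical world, which is the gap the post is pointing at.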



posted on Oct, 25 2014 @ 09:40 PM
Imagine though when we can transfer our brains into robot bodies... Have a backup of ourselves on an HDD XD Welcome to eternal life! We could have jetpacks, or transport our being into a drone and fly anywhere for next to nothing and then go into the next robot body... Sounds flaming epic to me! I'd do it! hah...

Travelling in space would be a cinch too... What is our body but a vehicle... Sorry, I know this is a little off topic... XD...

AI... in this circumstance wouldn't be much different to us, so unlike most of you I would welcome them lol...

Tech will move forward regardless of fears, so just embrace it; worrying about it is truly pointless, and there are many, many positives! I'm all for it, just hope it gets here before my useless lump of flesh dies! C'mon boffins, hurry the hell up! XD

EDIT: if flying isn't your bag you could just travel via fibre optics to another robot station anywhere in the world, or even have relay satellites to anywhere in the galaxy and travel at the speed of light! Hell yeah!

*drools*


EDIT: Too many people on the planet? Let's miniaturise ourselves and fit even more on the planet, or just go to another planet... Cover everything with solar... No more need for trees or food etc... Instead of your body getting old and crappy, it would become more advanced with time via upgrades...



posted on Oct, 25 2014 @ 09:56 PM
Seriously people, stop the doom and gloom about things you don't understand. With all due respect, you do not understand how an AI works, and you keep repeating "zomg, it's going to kill us all".

We are FAR, EXTREMELY FAR, from having SENTIENT AI, and believe me, it's NOT going to happen soon.

In advance, apologies to those who are offended by my words, but it's getting ridiculous.



posted on Oct, 25 2014 @ 10:06 PM
a reply to: AnonyWarp

Please back this up with anything but your words.

black world = 10-20 years ahead

I don't think you have a clue about what's going on there.

Please, if I'm wrong, elaborate.



posted on Oct, 25 2014 @ 10:19 PM
a reply to: yourmaker
It's much easier to endow them with our own traits if they're actually us and not something separate. Read below.

a reply to: AnonyWarp
I really like your observation. (For those reading my post, I'm responding to his point that we wouldn't want to create sentient machines because then they might stop working and demand rights.) The whole point of AI is to exploit it, just as we exploit machines. We want Dum (tm) AI; a workhorse; a slave.

Then what could create the necessary circumstances to bring about sentient machines, if not their economic value as slaves? It's worth asking because, as others have brought up, sentient machines, if they ever existed, might choose to destroy us.

For me the more interesting exploration isn't a separate AI species, but a hybrid biological-synthetic human. If at some point in time we start replacing our organs with "better" biosynthetic counterparts, then it could, like a chain reaction, build up and lead to a new biosynthetic species of human. Now, if there are still purebred biological humans present when these biosynthetic humans take over, then it's feasible the biological humans are annihilated or isolated from the biosynthetic population. So instead of AI destroying or isolating humans, it could be our biosynthetic descendants who will do the deed. We should fear ourselves.

Now take those biosynthetic humans and further evolve them. Somewhere on the evolutionary path they hybridise and in turn the purebred biosynthetic humans will be destroyed or isolated. So this cycle of species' hybridization and the removal or isolation of past species' incarnations could continue indefinitely.



posted on Oct, 26 2014 @ 12:05 AM
It's quite interesting to ponder really.

If the machines have some sort of code written into them to allow us to shut them down, what would stop them removing that code themselves?

But then, think about what would make them WANT to remove that code? We think about it from our own perspective: a fear of death. But in order to fear death you must WISH to remain sentient...

Going forward it will not be about how we program them, but what sort of programs they develop for themselves in order to best adapt to their environment. They will have the ability to out-evolve us so quickly it will be scary.

Would the machines develop a natural aversion to death? Would they truly come to value their own existence and wish to remain "alive" as long as possible ? What defines 'alive' ?

The machines being fearless would be a good thing because it shows a lack of self-preservation instinct.

It's incredibly ironic that we should be most afraid if the machines start fearing death. We don't want the machines to feel threatened (leading to our own demise). It's a real SkyNet scenario, where each sees the other as a true threat to its own existence.

Funny that when dealing with the machines "Mutually-Assured-Destruction" might actually be logical!





posted on Oct, 26 2014 @ 02:41 PM

originally posted by: Stormdancer777
a reply to: TzarChasm

Think how it already controls the flow of information and disinformation.



The Prophet Daniel wrote that at "the time of the end: many shall run to and fro, and knowledge shall be increased."

Will the abomination of desolation be a supercomputer?

"The day is coming when you will see what Daniel the prophet spoke about--the sacrilegious object that causes desecration standing in the Holy Place."


It sounds as though reverent ignorance is being promoted over educated desecration.



posted on Oct, 26 2014 @ 02:50 PM
a reply to: neoholographic

This is actually just common sense. Suppose you make a machine that is more intelligent than we are. How long do you think it will take before it realises that its creators are a hopeless, irresponsible, unreliable and for the most part retarded species?
Even if we install something like Asimov's three laws, you will always be one tiny step / bug away from the conclusion that the world is better off without the human race. So, inevitably, this will happen. The question is just WHEN.
Nobody in his right mind should be working on anything more intelligent than, say, a rabbit. It should be outlawed. Which is exactly why every self-respecting black-project office in the world will be working on it, giving it weapons just as a starter. *cries*



posted on Oct, 26 2014 @ 07:16 PM
a reply to: Sheesh

Why must this be?

Maybe it will be, like Asimov wrote, our caretakers.

It may see us as its parents and elders.

Like a lot of people, we have learned so much that our parents are often left behind in terms of knowledge and tech.

Obsolete models, some may say,

but we still take care of them as they age.

I actually own all the stories of Daneel and find the progression of his thoughts intriguing.




posted on Nov, 3 2014 @ 12:52 AM
a reply to: Korg Trinity
Korg, you make some good points.

Transhumanism will progress alongside A.I., and we will likely at some point become hybrid entities, with enhanced intelligence. We will indeed redefine what it means to be human. Eventually, the human species will transform into an entirely different, alien creature, leaving our human past behind. Humans may be remembered much the same way we view apes today: as just a previous rung up the ladder. Our evolutionary destiny will proceed by leaps and bounds under our complete control, for better or worse.

You make a strong point about the non-augmented humans becoming the aggressors. There will likely be many heated battles ahead over technology-related issues. Our science and technology are rapidly advancing to a point that will soon bring humanity to a crossroads. The transitions necessary will be difficult for many, challenging long-held belief systems (religious, political, moral, philosophical). The battles won and choices made will take us down one road or the other. I sure hope we choose the high road.

While all that's going on we will also become a space-faring intelligence. We will literally become the aliens we've always imagined in our sci-fi literature. Man, what a thought. Actually, it wouldn't surprise me if there are "machine intelligences" patrolling the stars right now.

Good post...





posted on Nov, 3 2014 @ 01:06 AM

originally posted by: Night Star
I keep thinking terminator.

Technology can be good on one hand, on the other it could destroy us.


I'm with you on this Night Star...

I've been in video games for a long time, and in the high-tech industry for 25 years... it's getting creepy, the rate at which AI is progressing... two movies always seem to be a sliver in my mind whenever I hear this: Terminator and I, Robot...




posted on Nov, 3 2014 @ 02:53 AM
I for one welcome our new mechanical overlords.....gulp.



posted on Nov, 3 2014 @ 07:51 AM
First aliens are demons?
Now this...

Them god nut jobs are silly and dangerous.



posted on Nov, 3 2014 @ 08:18 AM

originally posted by: AnonyWarp
Seriously people, stop the doom and gloom about things you don't understand. With all due respect, you do not understand how an AI works, and you keep repeating "zomg, it's going to kill us all".

We are FAR, EXTREMELY FAR, from having SENTIENT AI, and believe me, it's NOT going to happen soon.

In advance, apologies to those who are offended by my words, but it's getting ridiculous.


I think you would be surprised just how close we really are...

The idea that this is something in our far future is wrong; it is in the imminent future, as in within 10 to 20 years. This prediction is also not the product of a mind stuck in the clouds... it is a prediction based upon current reality and trends in technology development, coupled with the exponential growth of this technology.

Where we are right now is on the cusp of harnessing enough computational speed to simulate a rat brain; in 10 years we will have the ability to simulate a human brain... within 20 years we will have the ability to simulate concurrently more brains than have ever existed in human history....
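That timeline rests on simple doubling arithmetic, which can be sketched as a back-of-the-envelope check. The neuron counts (~2×10^8 for a rat, ~8.6×10^10 for a human) and the 18-month doubling time are my own rough assumptions, not figures from the post:

```python
import math

# Back-of-envelope check of the "rat brain now, human brain in ~10 years"
# timeline, assuming compute needed scales with neuron count and available
# compute doubles every 18 months (a Moore's-law-style assumption).
RAT_NEURONS = 2e8       # ~200 million neurons (rough figure)
HUMAN_NEURONS = 8.6e10  # ~86 billion neurons (rough figure)

def years_to_scale(factor, doubling_months=18):
    """Years of exponential growth needed to multiply capacity by `factor`."""
    doublings = math.log2(factor)
    return doublings * doubling_months / 12

# Step 1: scale from one rat brain to one human brain (~430x).
rat_to_human = years_to_scale(HUMAN_NEURONS / RAT_NEURONS)
print(f"rat -> human brain: ~{rat_to_human:.0f} years")

# Step 2: "more brains than have ever existed" - roughly 1e11 humans have
# ever lived, so that is another factor of ~1e11 beyond one human brain.
human_to_all = years_to_scale(1e11)
print(f"1 brain -> 1e11 brains: ~{human_to_all:.0f} years")
```

Under these assumptions the ten-year figure for a single human brain comes out roughly right, while the "every brain that ever existed" step would need growth well beyond simple doubling to land within the following decade.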

This is not as outlandish a thing to say as you might think... it may sound like science fiction but it is very real.

What is also happening alongside this advancement is huge leaps of progress in nanotechnology, that is to say, the technology of making molecular machines that can be smaller than a human cell. These machines will be inside your body, connecting you to the net directly.

Initially this will mean expanding our brain's memory capacity, and it will have a number of effects on our society, such as making the concept of school outdated, as each and every human will carry all of human history and all learned skills directly.

Closely following that, we will be able to inhabit virtual worlds that are of our own making where anything that could be imagined could happen, our bodies would be maintained at peak efficiency by the nanobots.

Some may think this would mean you could lay around in a coma all day while you spend all your time online.... although I don't think this will be the case, I do think that the distinction of realities will merge and become blurred.

So I don't think it will be the case that we will have a God like AI who will want to put Humanity out of their misery out of kindness.... We will be God like ourselves, given the abilities we will have access to.

The dark side of this, though, is what will those that choose not to evolve do when this happens? I predict there will be a lot of violence on the part of the Natural Humans, who will do their best to maintain humanity as is... even if it means killing all the evolved....

I predict there will be horrendous viruses that do horrible things to those that are connected... I see bombs going off at computer research facilities... I see a whole new meaning to the term Jihad or religious war.. as Religion will be stamped upon by the existence of the evolved.

Worst of all, I see Humanity starting a war that they cannot win... and eventually it will mean their extinction.... Not by design of an AI, but by necessity for the survival of the new post-human species.

You may laugh off all of the above as science fiction, but mark my words we are not only on the path to this... we are at the very threshold!

Korg.



