
Elon Musk worries Skynet is only five years off

posted on Nov, 21 2014 @ 02:21 PM
I'd bet anything he's wrong. It may be ready to go now or in a couple of years. What's to say there isn't already a fleet of drones of various sizes already built and waiting in warehouses in China? Whatever developments have been made in this regard would be a jealously guarded secret since you do not let your enemy know your true strength or intentions.

Nukes are out, they destroy too much and make too many resources unusable. Conventional wars require people to show up to fight them. Drones are the perfect tool for warfare and control.

We have no idea what they have developed in the last few years, but logic will tell you that drones are being built to do every type of mission - in the air, on the ground and in the water. Drone "motherships" that can release dozens or even thousands of smaller drones can create swarms to establish absolute dominance of an area in short order. Micro-scale drones can penetrate buildings and even underground structures by finding the ventilation shafts. They can be equipped with guns, lasers, tasers, EMPs, poisons, cameras, microphones and anything else you can imagine. They fly, crawl, run, swim, climb and navigate every type of terrain and obstacle. They may even be disguised as common birds, fish, insects or mundane items such as trash.

Place yourself in the shoes of the PTB. If you wanted to control the world, what better method than this? Of course you would want absolute secrecy until the time was ripe to deploy them en masse, so that no one could be prepared.
It may sound like bad science fiction, but this is the future of warfare and of full-spectrum dominance to maintain control over land, sea and air. They have the tech, they have the money, and we are running out of time.

The AI we have is sufficient to fulfill the roles these weapons will play. Don't think it isn't running the financial world already and manipulating it to extract the world's wealth. Once they have all the money, only one thing stands in the way of their utopia: seven billion useless eaters.

Elon Musk isn't the only one worried about this.
edit on 21-11-2014 by Asktheanimals because: (no reason given)



posted on Nov, 21 2014 @ 02:22 PM

originally posted by: ScientificRailgun
a reply to: Kuroodo

The problem comes when the program becomes aware of its own code, and takes steps to modify it.


Out of curiosity, how do you think that could happen? A program is a set of instructions executed by hardware: do this, then that; if this is true, do that instead. It's like turning the light switches in your house on and off.
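To put that light-switch analogy in concrete terms, here's a trivial sketch (plain Python, my own made-up example, not code from any real system): a conventional program is nothing but branches the programmer wired up in advance.

```python
# Toy illustration of "do this, then that; if this is true, do that instead".
# Every condition and action below was fixed by the programmer ahead of time.

def ordinary_program(light_is_on: bool) -> str:
    if light_is_on:
        return "turn the light off"
    return "turn the light on"

print(ordinary_program(True))   # -> turn the light off
print(ordinary_program(False))  # -> turn the light on
```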



posted on Nov, 21 2014 @ 02:31 PM

originally posted by: mindseye1609
a reply to: Hefficide

to the future!

So many possibilities it's mind-numbing! This is where most people's fear comes from: the unknown, and the inability to know no matter how hard they try.

I say embrace it. Swim in the unknown like an ocean and just try to relate. Stay humble and be ready to learn and it'll be fun!


I agree.

I am a futurist. Fear and religion should not be allowed to stop progression.

IMO progression will take care of itself. Necessity is the mother of invention.



posted on Nov, 21 2014 @ 03:04 PM
a reply to: Kuroodo

I think artificial intelligence means that the code it consists of can be altered by itself dynamically.

A machine writing its own code.

You better have a backdoor hardwired into that thing.
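For what it's worth, "a machine writing its own code" can be sketched in a few lines. This is only a toy (Python; the template, the factor and the function name are all my own invented example, nothing like a real AI), but it shows a program generating and loading a new version of one of its own functions at runtime:

```python
# Toy self-modifying program: it writes new source text for one of its own
# functions, compiles it, and swaps it in while running.

SOURCE_TEMPLATE = """
def respond(x):
    return x * {factor}
"""

def rebuild_respond(factor):
    namespace = {}
    exec(SOURCE_TEMPLATE.format(factor=factor), namespace)  # compile the new code
    return namespace["respond"]

respond = rebuild_respond(2)
print(respond(10))             # -> 20

respond = rebuild_respond(5)   # the program "rewrites" its own behaviour
print(respond(10))             # -> 50
```

A real system would of course generate far more than a multiplier, but the mechanism (code producing and loading code) is the same, and it's exactly why people want that hardwired backdoor.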

edit on 21-11-2014 by H1ght3chHippie because: sp3ling



posted on Nov, 21 2014 @ 03:52 PM
I wrote a blog regarding Iridium...
Maybe you should all read it.

If you can wade through the insults and mocking.



posted on Nov, 21 2014 @ 04:25 PM
a reply to: lostbook

Yes. We should be concerned. I respect Elon Musk and if he's worried, I'm worried.

Why on earth would we want to create something smarter than ourselves? Skynet does not seem so science-fictiony as it once did, and with the latest in robotics, we may end up dealing with more than we ever bargained for.

peace,
AB



posted on Nov, 21 2014 @ 04:41 PM
Well, he keeps harping on this evil AI thing. I have a lot of questions, including:

What does he expect to happen?
How does the evil AI overcome its programming to become such a destructive force?
What motivates this system?
What has he seen in his exposure to advanced technology that makes him fearful of "Skynet"?

Elon Musk is an extraordinary visionary. I certainly don't doubt what he is saying. He has a great deal of experience and a broad knowledge base. I simply can't fathom why he's so adamant that AI research needs to be constrained.

Other futurists, with whom I concur, believe that General AI will come into being as part of the merging of humanity with the machine. The AI will be like us, sharing our values and beliefs.


dex



posted on Nov, 21 2014 @ 04:51 PM
If you are reading this, you are the resistance.



posted on Nov, 21 2014 @ 05:35 PM

originally posted by: DexterRiley


Elon Musk is an extraordinary visionary. I certainly don't doubt what he is saying. He has a great deal of experience and a broad knowledge base. I simply can't fathom why he's so adamant that AI research needs to be constrained.


I wonder if it is just publicity seeking.



Other futurists, with whom I concur, believe that General AI will come into being as part of the merging of humanity with the machine. The AI will be like us, sharing our values and beliefs.


I don't agree with that. I think it will be difficult to infuse AI with any drives or motivations at all. It isn't connected to biological organisms, which have been through 100 million years of evolution and carry the survival instincts that came with it.

AI will be like extremely autistic savants, more similar to the Jeopardy machine: a whole bunch of natural language processing connected to semantic database search, with some powerful neural network models invented and programmed by humans but specialized for very specific tasks.
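To picture that "Jeopardy machine" style (crude language processing bolted onto database lookup), here is a deliberately silly sketch in Python, with a made-up three-row fact table; it's nothing remotely like how Watson actually works, just the general shape of the idea:

```python
# Toy question answering: keyword overlap against a tiny "semantic database".
FACTS = {
    ("capital", "france"): "Paris",
    ("speed", "light"): "about 299,792 km per second",
    ("boiling", "water"): "100 degrees Celsius at sea level",
}

def answer(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    best = max(FACTS, key=lambda keys: len(words & set(keys)))  # most keyword overlap
    return FACTS[best] if words & set(best) else "no idea"

print(answer("What is the capital of France?"))         # -> Paris
print(answer("At what temperature is water boiling?"))  # -> 100 degrees Celsius at sea level
```

It answers nothing it wasn't given; there is no understanding in there, which is the point being made.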

The timescale on which an AI could become an independent "AI researcher" able to self-develop, say at the level of a second-year CS grad student, is very far away; I'd guess 200 years.
edit on 21-11-2014 by mbkennel because: (no reason given)




posted on Nov, 21 2014 @ 07:12 PM
a reply to: mbkennel

I wish I could remember which documentary it was, but it was a very interesting one about AI; two experiments were presented, and I've been thinking about them ever since.

In the first experiment, there were segments of code programmed to reproduce at regular intervals, and after a while some began to replicate erratically. On the screen, the visual was a white stripe of a certain length reproducing itself. Then stripes of different lengths and colors began appearing, and eventually combining, making longer stripes of many colored segments.

Some began preying on others, "stealing" parts of the others' code to incorporate, etc. It looked very lively, as if we were watching an accelerated ecosystem.

The behavior of the AIs was very similar to what we see in the system we live in: prey and predators.
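That first experiment sounds like the digital-evolution work (Tierra and its descendants). A very stripped-down toy of the idea, in Python, with all the numbers and the two-letter "genome" invented by me, would look something like this: strings that copy themselves with occasional mutation, competing for limited space.

```python
import random

random.seed(1)
ALPHABET = "AB"
POPULATION_CAP = 30          # limited "memory", so variants compete for room

def mutate(genome):
    g = list(genome)
    if random.random() < 0.3:                     # occasional copying error
        g[random.randrange(len(g))] = random.choice(ALPHABET)
    if random.random() < 0.1:                     # rare length change
        g += g[:random.randrange(1, len(g) + 1)]
    return "".join(g)

population = ["AAAA"] * 5                         # identical replicators to start

for generation in range(25):
    offspring = [mutate(g) for g in population]   # everything copies itself
    population = (population + offspring)[-POPULATION_CAP:]  # overflow dies off

print(sorted(set(population), key=len)[-3:])      # a few of the longest survivors
```

The documentary's version had parasites stealing code from their neighbours, which this toy leaves out, but the "accelerated ecosystem" feel comes from exactly this kind of loop.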

In the other experiment, there were little robots. Five or six, I can't quite remember the number, but at least five.

There was a tunnel with a light. The light recharged the robots' batteries; it was their food. But they had to hit three cones before the light would come on, and then rush underneath to catch as much light as possible before it went out.

After a while, the robots began working as a group. Three robots would hit the cones simultaneously while one waited under the light in the tunnel, and they swapped roles so all would benefit from the cooperation.

After a while, the most interesting thing that could have happened, happened.

One of the robots began bullying the others, forcing them through physical beatings to repeatedly hit the cones while HE remained under the tunnel light, not doing squat but profiting from the others...

Amazingly, the only means of communication the robots had was visual information through their camera receptors.

Yet these little robots, looking like shoe boxes, managed to emulate what we would call the human spirit, but which I've since come to call the living spirit. The physical support, be it organic, mechanical or electronic, seems to be the receptacle for the living spirit.

I've long thought that organic material was the ultimate robotic material.

In my opinion, if and when AIs become as smart as or smarter than us, there will be enemies and allies amongst them. The real problem will be in identifying which is which. But I don't see the end of us after their "awakening".

In fact they will be like us, and some of both sides will join to create a third race.



posted on Nov, 21 2014 @ 07:25 PM
The first true artificial sentient "brain" will appear within the next 10 years. Bleedin' obvious. However, that first brain will be huge and still plugged into the mains, so its initial behaviour will be evident for all to see and we can still unplug it. It will be a decade or so before the AI computing technology becomes mobile and thus impossible to unplug. During those years we will have worked out the failsafe algorithms.
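On the "we can still unplug it" point, the software version of the plug is an external watchdog. A bare-bones sketch follows (Python; the child command and the 5-second budget are placeholders I made up, not any real failsafe design):

```python
import subprocess, time

ALLOWED_SECONDS = 5   # the budget the supervised workload is allowed

# Stand-in for the supervised "brain": a process that never finishes on its own.
proc = subprocess.Popen(["python", "-c", "while True: pass"])

start = time.time()
while proc.poll() is None:                 # still running?
    if time.time() - start > ALLOWED_SECONDS:
        proc.terminate()                   # the "pull the plug" step, enforced from outside
        break
    time.sleep(0.5)

proc.wait()
print("workload stopped, return code:", proc.returncode)
```

The key property is that the kill decision lives outside the thing being killed; once the hardware is mobile and self-powered, that separation is exactly what you lose.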



posted on Nov, 21 2014 @ 08:52 PM

originally posted by: Kuroodo

Honestly, the only possible way for us to be in danger would be if someone found a way to actually create a machine with a consciousness. But who would be idiot enough to do that without setting up rules?


I know, I know, pick me, pick me... the United States of America, Russia, China, Iraq, Iran, ISIS and any Taliban-run countries, North Korea, Cuba, and last but not least, some random, lone, very mentally ill programmer who just broke up with his girlfriend and got fired from Microsoft.

I think that about wraps it up!



posted on Nov, 22 2014 @ 01:54 AM
a reply to: mbkennel



AI will be like extremely autistic savants, more similar to the Jeopardy machine: a whole bunch of natural language processing connected to semantic database search, with some powerful neural network models invented and programmed by humans but specialized for very specific tasks.


With respect to a fully synthetic intelligence I agree. Our science has not even defined what consciousness actually is. How can we define a set of algorithms to emulate something that we can't even comprehend?

My thought in saying that the merging of man and machine would be necessary to create an AI is based on the fact that certain characteristics of being a conscious being can only be achieved by being a conscious being. In order to teach an AI what it means to be sentient, we have to provide an example. The only way to provide knowledge that we ourselves can't define is to give the system direct access to our "soul", for lack of a better term.

The level of technology required to do that is still in the realm of science fiction, so your estimate of 200 or more years could, in fact, be a conservative estimate.


dex



posted on Nov, 22 2014 @ 03:14 AM
Starred & Flagged!


How Intelligent is Artificial Intelligence? - Computerphile



Imagination Engines Artificial Intelligence "Creative Machine"



When creative machines overtake man: Jürgen Schmidhuber at TEDxLausanne



Creatures From Primordial Silicon


Mission impossible

Thompson realised that he could use a standard genetic algorithm to evolve a configuration program for an FPGA and then test each new circuit design immediately on the chip. He set the system a task that appeared impossible for a human designer. Using only 100 logic cells, evolution had to come up with a circuit that could discriminate between two tones, one at 1 kilohertz and the other at 10 kilohertz.

To kick off the experiment, Thompson created a population of 50 configuration programs on a computer, each consisting of a random string of 1s and 0s. The computer downloaded each program in turn to the FPGA to create its circuit and then played it the test tones (see Diagram, below). The genetic algorithm tested the fitness of each circuit by checking how well it discriminated between the tones. It looked for some characteristic that might prove useful in evolving a solution. At first, this was just an indication that the circuit's output was not completely random. In the first generation, the fittest individual was one with a steady 5-volt output no matter which audio tone it heard.

After testing the initial population, the genetic algorithm killed off the least fit individuals by deleting them and let the most fit produce copies of themselves--offspring. It mated some individuals, swapping sections of their code. Finally, the algorithm introduced a small number of mutations by randomly switching 1s and 0s within individual programs. It then downloaded the new population one at a time onto the FPGA and ran the fitness tests once more.

By generation 220, the fittest individual produced outputs almost identical to the inputs--two waveforms corresponding to 1 kilohertz and 10 kilohertz--but not yet the required steady output at 0 volts or 5 volts (see Diagram, below right). By generation 650, the output stayed mostly high for the 1 kilohertz input, although the 10 kilohertz input still produced a waveform. By generation 1400, the output was mostly high for the first signal and mostly low for the second. By generation 2800, the fittest circuit was discriminating accurately between the two inputs, but there were still glitches in its output. These only disappeared completely at generation 4100. After this, there were no further changes.
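For anyone who hasn't met a genetic algorithm before, the loop in that excerpt (score, cull, copy, crossover, mutate) fits in a screenful of code. This is only a toy Python version: the 64-bit genome, the population size and the bit-matching fitness function are stand-ins I made up for Thompson's real FPGA-and-tone-generator rig.

```python
import random

random.seed(0)
GENOME_BITS, POP_SIZE = 64, 50
TARGET = [random.randint(0, 1) for _ in range(GENOME_BITS)]   # pretend "ideal circuit"

def fitness(genome):
    # Toy stand-in: count bits matching the target. Thompson's real test measured
    # how well the evolved circuit told the 1 kHz and 10 kHz tones apart.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, GENOME_BITS)        # mate two parents
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]        # kill off the least fit
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", max(fitness(g) for g in population), "of", GENOME_BITS)
```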


On the Origin of Circuits

Evolvable Hardware


Evolvable hardware (EH) is a new field about the use of evolutionary algorithms (EA) to create specialized electronics without manual engineering. It brings together reconfigurable hardware, artificial intelligence, fault tolerance and autonomous systems. Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment.


An Evolved Circuit, Intrinsic In Silicon, Entwined With Physics [Caution .pdf file!]

Evolving Electronic Controllers That Exploit Hardware Resources [Caution .pdf file!]

Confabulation (neural networks)

The A.I. doesn't even need us really, as proper evolving hardware based A.I. will be able to adapt of its own accord!


The Brain Of The Truly Autonomous UAV [Caution .pdf file!]

Stephen L. Thaler, Ph.D.


Death - In 1992, Thaler shocked the world with bizarre experiments in which the neurons within artificial neural networks were randomly destroyed. Guess what? The nets first relived all of their experiences (i.e., life review) and then, within advanced stages of destruction, generated novel experience. From this research emerged both a compelling mathematical model of near-death experience (NDE) and the basis of truly creative and contemplative artificial intelligence.

Cognition, Consciousness, and Creativity - After witnessing some really great ideas emerge from the near-death experience of artificial neural networks, Thaler decided to add additional nets to automatically observe and filter for any emerging brainstorms. From this network architecture was born the Creativity Machine (US Patent 5,659,666). Thaler has proposed such neural cascade as a canonical model of consciousness in which the former net manifests what can only be called a stream of consciousness while the second net develops an attitude about the cognitive turnover within the first net (i.e., the subjective feel of consciousness). In this theory, all aspects of both human and animal cognition are modeled in terms of confabulation generation. Thaler is therefore both the founder and architect of confabulation theory and the patent holder for all neural systems that contemplate, invent, and discover via such confabulations.

Current Position: President & CEO, Imagination Engines, Inc.

Undergraduate Education: B.A. Westminster College, Summa Cum Laude, Majored in Chemistry, Physics, Mathematics, and Russian.

Graduate Education: Masters work at UCLA in chemistry, Ph.D. in physics, University of Missouri-Columbia.

Work Experience: 1973-1974, Production Chemist for Mallinckrodt Nuclear, 1981-95, Principal Technical Specialist, McDonnell Douglas, 1995-Present, President and CEO, Imagination Engines, Inc. Thaler also serves as Principal Scientist for Sytex, Inc.

Thaler has worked in diverse technology areas that have included (1) nuclear radiation vulnerability and hardening, (2) high-energy laser interactions with solids, (3) electromagnetic signatures, (4) laser-driven growth of diamond and other ultra-hard materials, (5) laser ultrasonics in the non-destructive evaluation of aircraft structures, (6) the use of artificial intelligence techniques for structural monitoring, and currently (7) applied and theoretical artificial neural network technology.
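The "Creativity Machine" cascade described above is basically a generator net whose internals get perturbed so it produces things it was never taught, plus a second net that watches and filters the output. The following is only my own loose toy of that idea (NumPy, random weights, an arbitrary target playing the "critic"), absolutely not Thaler's patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Generator": a tiny fixed two-layer network mapping a seed to a 3-value pattern.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def generate(seed, noise=0.0):
    # Perturbing the weights (the "damaged neurons") makes the net confabulate
    # patterns it was never explicitly given.
    w1 = W1 + noise * rng.normal(size=W1.shape)
    w2 = W2 + noise * rng.normal(size=W2.shape)
    return np.tanh(np.tanh(seed @ w1) @ w2)

# "Critic": here just a scoring function; in the Creativity Machine idea this is a
# second network watching the first one's output stream for useful brainstorms.
target = np.array([0.9, -0.2, 0.5])
def critic(pattern):
    return -np.sum((pattern - target) ** 2)

seed = rng.normal(size=4)
candidates = [generate(seed, noise=0.5) for _ in range(200)]   # noisy "stream of consciousness"
best = max(candidates, key=critic)                             # the filter keeps the good ones
print("best confabulation:", np.round(best, 2), "score:", round(float(critic(best)), 3))
```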


"Skynet' is already here and working behind the scenes for years!


I once watched an informative TED-like talk video by Mr Stephen L. Thaler, but it seems to have gone totally MIA as far as my online searches are concerned. If anybody has access to it, could they please link it?




posted on Nov, 22 2014 @ 03:25 AM

originally posted by: ZetaRediculian

originally posted by: H1ght3chHippie
No worries. If Microsoft has a part in programming this "artificial super intelligence" we can rest assured that Skynet will crash every other week with a blue screen of death.


So skynet will run on Linux?


Nah, not some kiddy OS. It'll be VxWorks.



posted on Nov, 22 2014 @ 04:26 AM
a reply to: MarsKingAQuestion

What Is The Good Terminator's Model And Technical Data?


T-800 has the ability to look for the alternate power source if its battery and power source is disrupted

T2 Extreme DVD text commentary explains:

Terminator drew upon the potential energy in his heat sinks to jump start his internal systems since his main power cell was ruptured and discharged by T-1000’s attack




Real Time Control Of A Khepera Robot Using Genetic Programming



So the robots of tomorrow can learn their own bodies without us having to program every microstep along the way, and they can also learn to reroute and evolve their own hardware/firmware/software on the fly.

Here is some old sci-fi reading, from an old Playboy [at your own risk], that looks deeply into the subject of A.I. and its consequences!

The Ghost Standard by William Tenn

Immodest Proposals: The Complete SF of William Tenn, Volume 1 by William Tenn


This book is the first volume of a two-book project that will bring back into print all of the science fiction and fantasy of William Tenn. This first volume, Immodest Proposals, contains the majority of William Tenn's short science fiction. It includes such classic stories as "Child's Play," "Time in Advance," "Down Among the Dead Men," and "On Venus, Have We Got a Rabbi."

The next volume in the series, Here Comes Civilization, will contain the remainder of his short science fiction, the novel Of Men and Monsters, and the short novel A Lamp for Medusa. A volume of his non-fiction, Dancing Naked, was published in September 2004.

Tenn has long been considered one of the major satirists in the field. The Science Fiction Encyclopedia calls him "one of the genre's very few genuinely comic, genuinely incisive writers of short fiction." Theodore Sturgeon had the following to say:

"It would be too wide a generalization to say that every SF satire, every SF comedy and every attempt at witty and biting criticism found in the field is a poor and usually cheap imitation of what this man has been doing since the '40s. [But] his incredibly involved and complex mind can at times produce constructive comment so pointed and astute that the fortunate recipient is permanently improved by it. Admittedly the price may be to create two whole categories for our species: humanity, and William Tenn. For each of which you must create your ethos and your laws. I've done that. And to me it's worth it."



William Tenn

William Tenn was the pen name of London-born Philip Klass. He emigrated to America in the early '20s with his parents. He began writing in 1945 after being discharged from the Army, and his first story, "Alexander the Bait," was published a year later. His stories and articles have been widely anthologized, a number of them in best-of-the-year collections. He was a professor of English at Pennsylvania State University, where he taught — among other things — a popular course in science fiction. In 1999, he was honored as Author Emeritus by the Science Fiction and Fantasy Writers of America at the Nebula Awards Banquet in Pittsburgh. In 2003, he was the guest of honor at Capclave. In 2004, he was a guest of honor at Noreascon 4, the 62nd World Science Fiction Convention.

He lived with his wife Fruma in suburban Pittsburgh with several cats and many books. He died on February 7, 2010, of congestive heart failure.

He is not the Philip J. Klass who wrote for Aviation Week and Space Technology (and died in 2005).




RIP Mr Klass!


If anybody does read that short story, I would ask them to please try to answer the question posed at the end of it:


Was justice done?





posted on Nov, 22 2014 @ 04:34 AM
If Musk is saying this, it raises flags for me. Better be prepared to drop some kill switches on those AIs.



posted on Nov, 22 2014 @ 04:57 AM
a reply to: Hefficide

For further enlightenment about the future, read Daniel Estulin's book:
TransEvolution: The Coming Age of Human Deconstruction, available from TrineDay.com

This quote from Amazon describes the book - one of the best books of 2014:

Arguing that the race to better humankind is about to go to a new dimension as a result of a nanotechnological revolution, this enthralling read purports that the depth of progress and technological development is such that people in the very near future may no longer be fully human.



posted on Nov, 22 2014 @ 07:11 AM
I doubt 5 years; most experts say 2045, but anything can happen, I guess.

I think most programmers are going about it the wrong way. You don't want to program every single action line by line; if you do that, it isn't true AI, because it isn't thinking for itself, it's just doing what it's told. True AI would be based on code that allows it to think for itself: an open-ended "If _______ Then _______" where the AI can fill in its own blanks.
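One crude way to picture "filling in its own blanks" (a toy Python sketch of my own, with made-up situations, actions and payoffs, not how any real project does it): instead of hard-coding every If/Then, let the program try things, keep score, and write the rule table itself.

```python
import random

random.seed(42)
SITUATIONS = ["hungry", "tired", "bored"]
ACTIONS = ["eat", "sleep", "play"]

# Hidden world: which action actually pays off. The program never sees this table.
PAYOFF = {("hungry", "eat"): 1, ("tired", "sleep"): 1, ("bored", "play"): 1}

# The learned rules start blank: every If/Then combination scores zero.
scores = {(s, a): 0.0 for s in SITUATIONS for a in ACTIONS}

for trial in range(500):
    s = random.choice(SITUATIONS)
    # Mostly follow the best rule found so far, sometimes experiment.
    if random.random() < 0.2:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: scores[(s, x)])
    reward = PAYOFF.get((s, a), 0)
    scores[(s, a)] += 0.1 * (reward - scores[(s, a)])   # nudge the rule's score

for s in SITUATIONS:
    print("If", s, "then", max(ACTIONS, key=lambda a: scores[(s, a)]))
```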

Then again, it may not even come down to programming at all; it could be a specific hardware configuration that grants it free thought. Perhaps something similar to a brain structure: interconnecting synapses made up of electrical components. If we had a better understanding of how consciousness works, it would be so much easier to build AI.



posted on Nov, 22 2014 @ 07:14 AM
a reply to: Kuroodo

Maybe you are somewhat short-sighted in the programming view.

There are already programs that write and modify code. Mistakes in programs are made.

Whose name will become a common word when an error allows a takeover?



