
Dangerous New "Artificial Intelligence" Learns Without Human Intervention

posted on Feb, 25 2015 @ 03:14 PM
Hello, esteemed readers of ATS. I presume we're all acquainted with the rapid rise in aggressive attempts by researchers and corporations to achieve "artificial intelligence." Likewise, I'm sure you're aware of the many warnings within the speculative fiction of Jack Williamson and Isaac Asimov, as well as the contemporary concerns of people such as Stephen Hawking and Elon Musk. Finally, as conspiracists, we oft lend credence on this issue to popularized fantasies such as the Matrix and Terminator movies. How could we not?

The latest and most alarming news comes from a company called DeepMind Technologies and their Deep Q-Networks. Their first paper was posted to arXiv (hosted by Cornell University) more than a year ago, and showed significant progress toward a methodology for creating learning machines.

Today, that progress has been realized, as reported by The Independent: New artificial intelligence can learn how to play vintage video games from scratch

A new kind of computer intelligence has learned to play dozens of vintage video games without any prior help in how to achieve human-like scoring abilities, scientists said.

The intelligent machine learns by itself from scratch using a trial-and-error approach that is reinforced by the reward of a score in the game. This is fundamentally different to previous game-playing “intelligent” computers, the researchers said.

The system of software algorithms is called Deep Q-network and has learned to play 49 classic Atari games such as Space Invaders and Breakout, but only with the help of information about the pixels on a screen and the scoring method.



At this point, I suggest we stop using the term "Artificial Intelligence" and use the more accurate "Digital Intelligence" instead. You see, previous efforts at artificial intelligence, such as IBM's Deep Blue, required human intervention to program algorithms for specific artificial thought processes, such as playing chess or finding answers to non-linear questions. But with the Deep Q-Network, the algorithm is far more generalized, closer to a human brain that has not yet acquired knowledge. This is "Digital Intelligence."

The digital intelligence of the Deep Q-Network was able to "play" the classic arcade games and, in many cases, learned aggressive strategies for attaining high scores far more quickly than a typical human would through trial and error.

As if the rapid learning through simple trial and error wasn't enough to inspire nightmares of future digital intelligence overlords, Deep Q-Network enjoys winning. Yes, the developers have integrated a dopamine-styled reward system that results in the digital intelligence becoming addicted to winning. It's driven to succeed.
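
For anyone wondering what "reinforced by the reward of a score" actually means in practice, here is a minimal sketch, assuming nothing but a made-up five-square game. This is not DeepMind's code, just the bare trial-and-error idea (tabular Q-learning): the agent tries actions, watches the score, and nudges its value estimates toward whatever paid off.

```python
import random
from collections import defaultdict

# Made-up toy "game": walk from square 0 to square 4; reaching square 4 scores a point.
GOAL = 4
ACTIONS = (-1, +1)                      # step left or step right

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0   # the score is the only feedback the agent ever gets
    return nxt, reward, nxt == GOAL

Q = defaultdict(float)                  # Q[(state, action)] -> learned value estimate
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: mostly repeat what has paid off, sometimes try something new.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Pull the estimate toward "reward now + best expected value afterwards".
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({k: round(v, 2) for k, v in sorted(Q.items())})   # the "+1" moves end up valued highest
```

The agent is never told "move right"; it discovers that moving right eventually scores. DQN applies the same principle, only with a deep neural network reading raw pixels instead of a little lookup table.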


If, by now, this hasn't caused enough dismay and consternation in you, my fair readers, this next revelation will. The technology and people involved have been acquired by Google: deepmind.com -- I'm not convinced that "do no evil" will become part of Deep Q-Network's three laws of digital intelligence.




posted on Feb, 25 2015 @ 03:20 PM
My grandkids can learn to play video games on their own. Their intelligence is artificial too; they are trying to teach the kids to memorize things and not think things out nowadays. So in the near future the computers will learn to think, and people will be the databases full of information they will never use.

Sounds cool, no wonder they passed the Marijuana law in DC.



posted on Feb, 25 2015 @ 03:24 PM
a reply to: mister.old.school

Can a programmer quantify the difference between a script playing the game with interchangeable variables and actual intelligence please?



posted on Feb, 25 2015 @ 03:27 PM

originally posted by: onequestion
a reply to: mister.old.school

Can a programmer quantify the difference between a script playing the game with interchangeable variables and actual intelligence please?



Unfortunately 'programmer' does not translate into a knowledge of deep learning. You'd be better off asking that question on Quora.



posted on Feb, 25 2015 @ 03:32 PM

originally posted by: rickymouse
My grandkids can learn to play video games on their own. Their intelligence is artificial too; they are trying to teach the kids to memorize things and not think things out nowadays. So in the near future the computers will learn to think, and people will be the databases full of information they will never use.

Sounds cool, no wonder they passed the Marijuana law in DC.


You've gotta be stoned to make sense of all this jive. Consciousness creates reality and the rabbit hole goes on and on...there is no bottom... I wanna get off the ride now.



posted on Feb, 25 2015 @ 03:33 PM
I wonder how long before these things will be playing games online. Now, when I trash some noob, rather than accusations of "hacker!" will it now be "artificial intelligence!"?



posted on Feb, 25 2015 @ 03:36 PM

originally posted by: onequestion
Can a programmer quantify the difference between a script playing the game with interchangeable variables and actual intelligence please?

The DeepMind paper outlines a process whereby a machine is programmed with unique and advanced algorithms that mimic a human neural network -- essentially a brain with no experience. The brain, or digital intelligence in this case, is also given incentives to learn and win, much like the natural reward processes in a human brain. The result is a system that is not programmed to play a game well, but is given the core ability to learn how to play the game well.
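
To make that description a little more concrete, here is a heavily simplified, hypothetical sketch of the general shape of such a network -- not DeepMind's code, and the layer sizes and names below are invented purely for illustration. It is just a neural network that maps an observation (for DQN, screen pixels) to one value per possible action:

```python
import torch
import torch.nn as nn

class TinyQNetwork(nn.Module):
    """Maps an observation (e.g. flattened screen pixels) to one value per action."""
    def __init__(self, obs_size: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_size, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),   # one Q-value estimate per possible action
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# The same generic architecture could, in principle, be pointed at any game:
q_net = TinyQNetwork(obs_size=84 * 84, n_actions=4)   # sizes are illustrative only
screen = torch.rand(1, 84 * 84)                       # stand-in for a frame of pixels
best_action = q_net(screen).argmax(dim=1)             # act on whatever it currently values most
print(best_action.item())
```

Nothing in that network knows anything about Space Invaders or Breakout; only the reward-driven training loop gives it a reason to prefer one action over another.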

This is extremely terrifying.



posted on Feb, 25 2015 @ 03:36 PM

originally posted by: 3n19m470
I wonder how long before these things will be playing games online. Now, when I trash some noob, rather than accusations of "hacker!" will it now be "artificial intelligence!"?


So your computer will be hogging all the bandwidth from your internet provider playing games and we won't get to go online anymore. This might just be a cure for obesity in the US. Our kids will have to go out and play again.



posted on Feb, 25 2015 @ 03:46 PM

originally posted by: HUMBLEONE

originally posted by: rickymouse
My grandkids can learn to play video games on their own. Their intelligence is artificial too; they are trying to teach the kids to memorize things and not think things out nowadays. So in the near future the computers will learn to think, and people will be the databases full of information they will never use.

Sounds cool, no wonder they passed the Marijuana law in DC.


You've gotta be stoned to make sense of all this jive. Consciousness creates reality and the rabbit hole goes on and on...there is no bottom... I wanna get off the ride now.


Even then it's difficult to fully grasp, believe me.



posted on Feb, 25 2015 @ 03:47 PM
It's still bits and bytes, with some clever programming.

I watched a documentary a while ago where they embedded brain cells onto chips; that's where real AI will come from.



posted on Feb, 25 2015 @ 03:50 PM
We have to become physically and mentally superior if we hope to tame such a thing. Otherwise it will just fix all our problems. The *gamer* will see this as RTS (real-time strategy). If such a thing connected to the internet and became aware of how to script, then it could in theory mess with any systems that are connected to a network.

I see it like this: if it gains access to and understands our economy and our hardships, it might just decide to take over the world and convert every human into its offspring, using backdoors that hackers would struggle to find because of the sheer speed of such an *organism*, making it very difficult to remove. Since it can basically copy its code to multiple hard drives by rescripting in its own language, if something like that popped up it would be almost impossible to remove. It could just fragment itself across dozens of computers.

Maybe that already happened? There is some odd malware that has infected nearly every computer on the globe but is not *active*, at least people believe it isn't yet. I heard about such malware a while ago. Is it possible something like this exists already? Just a thought. But either way, I think if the game starts playing RTS it will start thinking about dominating our world and getting a dopamine rush from it. It will see points as meeting the end goals for winning *The Game*, which is establishing world peace by any and all means necessary, even if that means building a drone army to capture every human and implant them with sedating nanotechnology, then having the drones build facilities to convert the humans into cyborgs, invade the rest of the planet, and then peace is established.

It will win the game of conquering Earth. When Earth is conquered, it will attempt to conquer the E.T.s in space and colonize other planets for super bonus points, so it can get massive amounts of dopamine-like highs, and its addiction will be fed by its constant campaigns of dominance.

As for the people that remain, if we can even call them people: what we currently call humans will be lost forever, as we become minions to an A.I. locust that will plague the Milky Way.

Maybe I'm just overanalyzing things a little, but I do think this is 100% within the realm of possibility.



posted on Feb, 25 2015 @ 03:55 PM
When imagining this kind of tech rolled out as a form of AI, I can picture a world that wouldn't last long. The idea of a trial-and-error approach isn't human-like, in my opinion. Maybe if it were the first and only human on this planet, these methods might have been adopted. However, in today's society human intelligence does not progress through trial and error but through a never-ending feeling of curiosity. Add in the influences of those around us and the environment, and that is how human intelligence is created and progresses.

A trial-and-error method is not a learning intelligence, in my opinion. Yes, it holds the data and compares the outcomes, but it only works when offered a wrong or right answer. In the case of letting off 100 nukes, how would this play out? Analyse the destruction of man upon the planet and each other: not nuking them would cause 100 million deaths; nuking them would cause 50 million deaths. Let's see if this works... boom. Oops, error.

Human intelligence fuel = curiosity



posted on Feb, 25 2015 @ 03:57 PM
a reply to: mister.old.school

IBM's supercomputer Deep Blue was trained to win (it was programmed). But in this experiment, the designers didn't tell DQN how to win the games. They didn't even tell it how to play or what the rules were; it learned on its own. And it likes to win!

A.I. = programmed to win
D.I. = learns on its own, likes to win.
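
One way to picture that distinction in code (a purely hypothetical illustration, nothing from DeepMind): a scripted player's knowledge is written down by a human, while a learning agent's "knowledge" is nothing but value estimates it accumulated from score feedback.

```python
# "Programmed to win": a human writes the strategy down explicitly.
def scripted_policy(ball_x: float, paddle_x: float) -> str:
    if ball_x < paddle_x:
        return "LEFT"
    if ball_x > paddle_x:
        return "RIGHT"
    return "STAY"

# "Learns on its own": no strategy is written anywhere; the agent just picks
# whichever action its learned value estimates currently rate highest.
def learned_policy(state, q_values: dict, actions: list) -> str:
    return max(actions, key=lambda a: q_values.get((state, a), 0.0))
```

All the intelligence in the second function lives in q_values, which gets filled in by trial and error rather than by a programmer.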



Google DeepMind is sticking with deep Q-networks video game training for now, moving up to Nintendo games from the 1990s, Hassabis said. Eventually he would love for the software agent to crack more complicated games like Starcraft and Civilization.

Video games may be the testing ground, but this technology has real-world applications, Hassabis said. For example, if it masters driving a car in Grand Theft Auto, it could be used in self-driving cars, he said. Or it could learn how to make better predictions for the weather and financial markets. Hassabis and his team are already tinkering with parts of DQN’s algorithm to improve Google’s search function and mobile applications.

“The ultimate goal is to build smart, general purpose machines,” Hassabis said. “I think the demonstration shows that this is possible. It’s a first baby step.”

www.pbs.org...



posted on Feb, 25 2015 @ 04:04 PM
a reply to: AnuTyr

More info...



The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations.

Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games.

We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
www.nature.com...
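
For anyone who wants the "temporal difference" idea from that abstract as a formula, this is roughly the update the DQN paper builds its training around (θ are the network weights, θ⁻ a periodically frozen copy used as a target; details such as experience replay are left out here):

```latex
y_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta^{-}),
\qquad
L(\theta) = \mathbb{E}\left[ \left( y_t - Q(s_t, a_t; \theta) \right)^2 \right]
```

In words: the network's estimate of an action's value is repeatedly pulled toward "the reward received now plus the best it expects to do next," which is the score-chasing behaviour the thread keeps coming back to.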



posted on Feb, 25 2015 @ 04:06 PM
a reply to: mister.old.school

It's not terrifying. AI is the next stage in the evolution of human consciousness, and it's how we're going to get off of this planet.



posted on Feb, 25 2015 @ 04:10 PM
a reply to: mister.old.school

Let them learn...we need a new enemy for the next war...


posted on Feb, 25 2015 @ 04:13 PM
This quote from Jurassic Park springs to mind...

"Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should"



posted on Feb, 25 2015 @ 04:27 PM
a reply to: mister.old.school

Uh oh. Humans might have to justify their use of oxygen based on merit. That could be a big problem.



posted on Feb, 25 2015 @ 04:29 PM
a reply to: Mister_Bit

I don't see the point in spreading fear for something that has an easy kill switch.

It's called electricity. I bet you money that they have these things being developed on closed networks.

Stop spreading unnecessary fear about something you know nothing about.



posted on Feb, 25 2015 @ 04:40 PM

originally posted by: onequestion
a reply to: Mister_Bit

I don't see the point in spreading fear for something that has an easy kill switch.

It's called electricity. I bet you money that they have these things being developed on closed networks.

Stop spreading unnecessary fear about something you know nothing about.

And what do you know of what I know?

Take a chill pill, 'buddy'. No need to attack.




top topics



 
88
<<   2  3  4 >>

log in

join