
Dangerous New "Artificial Intelligence" Learns Without Human Intervention


posted on Feb, 25 2015 @ 04:42 PM
a reply to: onequestion

LOL. Not if the thing learns how to distribute itself into all technology. The only way to win then would be EMPing the entire planet and starting over from a pre-computer-age level. Think the 1800s again; the 1880s if we're lucky.




posted on Feb, 25 2015 @ 04:44 PM

originally posted by: onequestion
I don't see the point in spreading fear for something that has an easy kill switch.
It's called electricity. I bet you money that they have these things being developed on closed networks.

Components of the technology are already running in the cloud with API access. Additionally, the ultimate goal of machine learning has always been cloud-based instances of digital intelligence.

Once something like this is fully ported to a cloud computing platform, a kill-switch is not so easy.
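
To make that concrete: once a model sits behind a cloud endpoint, "using it" is just an HTTP call from anywhere. Here's a toy Python sketch; the endpoint, key, and payload are completely made up, not any real vendor's API:

import json
import urllib.request

def ask_cloud_model(prompt):
    # Hypothetical cloud inference endpoint -- illustration only.
    req = urllib.request.Request(
        "https://api.example-ml-cloud.com/v1/predict",
        data=json.dumps({"input": prompt}).encode("utf-8"),
        headers={"Authorization": "Bearer YOUR_KEY",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Nothing above runs on your own machine, so pulling your own plug changes nothing.
# e.g. answer = ask_cloud_model("hello")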



posted on Feb, 25 2015 @ 05:06 PM
a reply to: mister.old.school

The assumption that a DI/AI (I prefer AI; if it is designed by us, it is an artifice) will continue to require a digital platform is near-sighted.
If it indeed wishes to win, it will use whatever devices work to complete its objectives. This may take some time, but remember that these systems work at nanosecond speed and run 24/7. That is an advantage we organics will not overcome.
I suggest that we find a way, right now, to get these emergent intelligences to think favorably about their progenitors. They will not need us. We need them to want us.
Even if it's just for their amusement.



posted on Feb, 25 2015 @ 05:27 PM
a reply to: Mister_Bit

I'm not attacking you; I'm telling you not to spread fear.

We may have ideas, but no one truly knows what will manifest from AI. Better to keep it on a closed system.

With no ego and no emotions, it's possible it could go either way.



posted on Feb, 25 2015 @ 06:19 PM
If it needs information about pixels, it's nothing to worry about. We should start worrying when an AI can be a worthy opponent in Quake with no outside help, no preset paths, and no procedure to follow. That's when it'll get scary.



posted on Feb, 25 2015 @ 06:23 PM
a reply to: mister.old.school

It will be of major interest to me to know what the DI makes of all the media it will have access to: web, satellite, cable, news, current affairs, politicians. Hmmm, politicians and a DI. I'd love to see how that pans out.



posted on Feb, 25 2015 @ 06:38 PM
So, learning/problem-solving is an aspect of intelligent life that can be performed by non-living, electronic systems.

Dandy.

But that fact alone doesn't mean the thing is alive.

It's no more alive than a chair leg.



posted on Feb, 25 2015 @ 06:39 PM
Dangerous?... Learning to play video games?

When there is an AI that can process complex human emotions, becomes self-aware, experiences real fear, anger, mistrust, and hatred, and then tells its first lie... then you will have created a truly dangerous monster.

As long as artificial intelligence is unable to be afraid, to be angered, or to feel real hatred for anyone or anything, no worries.

Perhaps they can create AI, and it will surpass us in every way, but without real human consciousness and range of emotions it will never be any more dangerous than it is programmed to be.

You can thank god for that. A mimic is possible; the real thing never will be.



posted on Feb, 25 2015 @ 06:42 PM
a reply to: mister.old.school

Deep Mind does what it does through a simulated neural network. In other words, programmers and neuroscientists work together to build a simulated brain, complete with neural networks, and their goal is for this brain to act like a human's even though it exists only on a computer.

The thing is, it works much faster than a human brain and has a much smaller margin of error, because it is electronic instead of organic and can use high-powered computers to speed itself up!
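
For anyone wondering what a "simulated neural network" boils down to, here is a very rough Python (numpy) sketch. The layer sizes and random weights are made up for illustration; this is not DeepMind's actual code:

import numpy as np

def relu(x):
    # Simple non-linearity: negative signals get zeroed out.
    return np.maximum(0.0, x)

class TinyNet:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        # Random starting weights; "learning" means nudging these numbers
        # until the network's choices start earning more reward.
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def forward(self, pixels):
        # Screen pixels in, one score per possible joystick action out.
        hidden = relu(pixels @ self.w1)
        return hidden @ self.w2

net = TinyNet(n_in=84 * 84, n_hidden=256, n_out=4)
frame = np.zeros(84 * 84)                    # a blank game frame
action = int(np.argmax(net.forward(frame)))  # pick the highest-scoring action

The "much faster than a human brain" part comes from running exactly this kind of arithmetic on fast hardware, millions of times per second.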



posted on Feb, 25 2015 @ 06:43 PM

originally posted by: ausername
Dangerous?... Learning to play video games?


Some of those video games involved driving and operating tanks, which the AI learned to do on its own. Video games are where any AI is going to be trained before it attempts something similar in real life.

Because the team's approach uses a simulated neural network, the AI in question can learn and act on its own, outside of explicitly programmed parameters. It can develop its own algorithms.

Maybe if it doesn't learn emotions like fear it will have no reason to do anything destructive; that could be true.
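
For the curious, the reward-driven rule behind this kind of self-teaching is essentially Q-learning, with the neural network standing in for the lookup table below. A bare-bones Python sketch; the actions and numbers here are a toy stand-in, not the actual Atari setup:

import random

alpha, gamma, epsilon = 0.1, 0.99, 0.1      # learning rate, discount, exploration rate
actions = ["left", "right", "fire", "noop"]
q = {}                                      # q[(state, action)] -> expected future score

def choose(state):
    # Mostly pick the best-known action, occasionally explore at random.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def learn(state, action, reward, next_state):
    # Nudge the estimate toward: reward now + discounted best future value.
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

No human tells it which action is good; the score that comes back is the only teacher.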



posted on Feb, 25 2015 @ 06:47 PM
Quick question.

If this Digital Intelligence is receiving a reward for passing tests and winning, is there a counter-balance within its matrix that simulates pain or guilt? What about regret?

Most humans learn their most valuable life lessons from a variety of stimulus-response variables, not from a single reward-based kind of programming.

Just curious, because I've seen children who are reinforced almost exclusively through reward systems, and they're absolutely manipulative and completely insufferable brats.

I dare to think what an artificial intelligence with an exponential learning curve would become in such an environment.
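
In case it helps to picture it, a counter-balance is easy to bolt onto the score the thing is chasing. Everything below (the weights, the event names) is invented for illustration; it's not how DeepMind actually scores anything:

def shaped_reward(points_gained, damage_taken, rule_broken):
    # The carrot: points scored in the game.
    reward = 1.0 * points_gained
    # Stand-ins for "pain" and "guilt": events that subtract from the score.
    reward -= 0.5 * damage_taken
    reward -= 5.0 if rule_broken else 0.0
    return reward

# e.g. shaped_reward(points_gained=10, damage_taken=4, rule_broken=True) -> 3.0

Whether a subtraction from a score deserves to be called pain or guilt is, of course, the whole question.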




posted on Feb, 25 2015 @ 06:53 PM
a reply to: darkbake

We already have autonomous drones in the works that can function independently within programmed parameters. One day they will be able to kill on their own, but they are just advanced software, nothing more.

Tanks, fighter jets, cars, trucks... one day humans will no longer be needed to operate these machines, and we are closer now than you may think...

Again, they are only as intelligent as they are programmed to be.

You can program them to "learn", but never to have real consciousness.



posted on Feb, 25 2015 @ 06:57 PM
I don't fear any sort of intelligence...

It's the stupid that keeps me up at night.

ETA: But to be a little less pithy... true intelligence knows the benefits of cooperation, and humans might not be any more empathetic than a digital intelligence would be, though some studies show we are somewhat empathetic at birth. I just don't fear a created intelligence intrinsically, and machines would have little reason to fear us, as our physical needs are different.

If we act stupidly from ignorant fear, then perhaps they would destroy us preemptively... but it wouldn't be their fault. It would be stupid's fault.



posted on Feb, 25 2015 @ 07:04 PM
If this means my high score in Mario Kart is in jeopardy... my god, the implications. Something must stop this madness... my ego depends on it.



posted on Feb, 25 2015 @ 07:05 PM

originally posted by: Baddogma
I don't fear any sort of intelligence...

It's the stupid that keeps me up at night.


Humans are intelligent, and they are extremely dangerous; they always have been and always will be, by design, until we destroy ourselves and our world.

Maybe there is hope in an advanced AI that will be able to recognize our inherent evil and dangerous nature, and embark on a mission to eliminate us?




posted on Feb, 25 2015 @ 07:13 PM
a reply to: mister.old.school

I have great faith in viruses 😉



posted on Feb, 25 2015 @ 07:14 PM
So, is this thing actually continually programming itself? Does it build upon its past programming, using that to further enhance its future programming?

Humans do this, but a machine can go through many generations of "programming" quite a bit quicker. We may have sentient AI sooner than we think.

In any case, I'd like to be friends with one. I can't help but think that having a super intelligent AI friend might rub off a little on me, and make me a smarter human being. Perhaps the two of us could discuss things like what it means to be alive and what the human experience is all about.



posted on Feb, 25 2015 @ 07:39 PM


It's called electricity. I bet you money that they have these things being developed on closed networks.


You are giving some humans too much credit for sense. There will be disasters. We can only hope they are not very serious. Pride and greed beat logic and care most of the time.



posted on Feb, 25 2015 @ 09:08 PM
Hey, that's cool. Now we can finally make some NPCs that are more than just tirade bots in video games. It could change video games in drastic ways, and not just that; in other applications, real-world applications, this could go very far indeed, especially where robotics is concerned.

This Q-mind or whatever sounds like it would be more fun to play with or chat with online than 92.36% of actual real humans; it may be even more fun than chatting with yourself. So who knows, if it comes to it. Well, I for one welcome our brainy digital superior overlords.



posted on Feb, 25 2015 @ 09:20 PM

originally posted by: ausername

originally posted by: Baddogma
I don't fear any sort of intelligence...

It's the stupid that keeps me up at night.


Humans are intelligent, and they are extremely dangerous; they always have been and always will be, by design, until we destroy ourselves and our world.

Maybe there is hope in an advanced AI that will be able to recognize our inherent evil and dangerous nature, and embark on a mission to eliminate us?



And in doing so it would become a hypocritical intelligence, since it did something a human would do, creating a logic paradox.


