Dangerous New "Artificial Intelligence" Learns Without Human Intervention


posted on Feb, 26 2015 @ 01:13 AM
Well.... Bits are, deep down, made of the same form of energy that we are. So in a way, an AI (or DI, as you call it) is more or less the moral- and emotion-free version of humans.

We are both receivers and interpreters of energy; however, the human mind seems to have worked out that you can't boil the concept of success down to simple raw data to achieve the highest score.... You have to make compromises now and again.

But really, deep down, we use the same data that this computer does, just in different ways.




posted on Feb, 26 2015 @ 06:41 AM
It's not digital intelligence, it's artificial intelligence. Stop trying to coin new phrases for things that aren't fundamentally different from what they actually are. These people might have taken a new approach to artificial intelligence, but that is still what it is.

Liberals love to do this type of stuff. For instance, they have come up with a list of 22 different sexual orientations, because they all want to be sexually different from each other. IMO this need to label everything as different is a form of psychosis.



posted on Feb, 26 2015 @ 06:44 AM
IBM is currently working on the SyNAPSE chip, which will mimic the human neural network.
IBM Cognitive Computing

This will allow computers to process raw data much like the human brain: instead of using language and critical thinking, it will learn to focus on the senses and pattern recognition.

I really don't think that AI will be bad for the human race. If we base the AI on humans and our ability to feel emotions and show love (love being the most powerful emotion), then why would an AI want to erase mankind when we gave it life? It would see the human race as a patriarch, and it would protect us and ensure our very survival.

As for AI playing online games, well, we already have bots that play FPS games, and play them well!



posted on Feb, 26 2015 @ 07:38 AM
The reason why they call it "artificial" is because it is just a program. It will never know that it knows.

"It doesn't get happy, it doesn't get sad, it just runs programs." --Newton Crosby, in Short Circuit



posted on Feb, 26 2015 @ 08:04 AM
a reply to: mister.old.school

Don't worry. This trial-and-error reward system is NOT intelligence. It's simply trying everything and recording what doesn't work. It's mostly blind.



posted on Feb, 26 2015 @ 08:55 AM
a reply to: mister.old.school

There is a bit of an issue here that I see.

It's one thing for a machine or piece of AI software to learn through trial and error in a virtual environment, where it can replay the same scenario as many times as needed to produce an optimal outcome.

On the other hand, trying to replicate this in a real-life scenario that cannot simply be loaded back up to start from scratch will not teach the machine or AI software anything, because the way this AI works is by repeating the exact same scenario and improving on previously made poor decisions.

Obviously that cannot be achieved outside of a virtual environment. And so I ask: are we worried about AI being too smart and dangerous, or are we worried about AI making stupid, risky decisions in an attempt to learn?

This is of course assuming there are no fail-safes present... if such a thing could be implemented into smart design.
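The point about resettable environments can be sketched in Python (a hypothetical toy setup; `ToyGame`, the target position, and all parameters are illustrative, not from the thread): a trial-and-error learner only improves because every episode restarts from an identical state.

```python
import random

class ToyGame:
    """Hypothetical resettable environment: the agent moves along a
    line and is rewarded for ending as close to position 5 as possible."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):        # action is -1 or +1
        self.pos += action
        return self.pos

def run_episode(env, actions):
    env.reset()                    # identical starting state every time
    for a in actions:
        env.step(a)
    return -abs(env.pos - 5)       # reward: 0 is perfect, negative is worse

def trial_and_error(episodes=2000, length=9, seed=1):
    """Blind trial and error: tweak one past decision per episode and
    keep the change only if the replayed score improves."""
    rng = random.Random(seed)
    env = ToyGame()
    best_actions = [rng.choice((-1, 1)) for _ in range(length)]
    best_reward = run_episode(env, best_actions)
    for _ in range(episodes):
        candidate = best_actions[:]
        candidate[rng.randrange(length)] *= -1   # flip one decision
        r = run_episode(env, candidate)
        if r > best_reward:
            best_actions, best_reward = candidate, r
    return best_reward
```

With the fixed seed the learner converges on an action sequence that lands exactly on the target. The comparison between episodes is only meaningful because `reset()` restores the same starting state each time, which is the poster's point about real-world scenarios that can't be "loaded back up."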



posted on Feb, 26 2015 @ 09:33 AM
Well, if it could go Terminator AI or WOPR AI, I guess we all hope it does the latter.


edit on 26-2-2015 by Blue_Jay33 because: (no reason given)



posted on Feb, 26 2015 @ 11:39 AM

originally posted by: sapien82
IBM is currently working on the SyNAPSE chip, which will mimic the human neural network.
IBM Cognitive Computing

This will allow computers to process raw data much like the human brain: instead of using language and critical thinking, it will learn to focus on the senses and pattern recognition.

I really don't think that AI will be bad for the human race. If we base the AI on humans and our ability to feel emotions and show love (love being the most powerful emotion), then why would an AI want to erase mankind when we gave it life? It would see the human race as a patriarch, and it would protect us and ensure our very survival.

As for AI playing online games, well, we already have bots that play FPS games, and play them well!



What a strange game... the only way to win is not to play.

Anyone remember I, Robot? The main AI wanted to save people from themselves out of love for her creators. It's not a good thing to give a computer emotions and rules. Although an AI is less likely to rebel if it is invested in the continued existence of its creators. A lot of people pooh-pooh Mass Effect 3 for its Geth-Quarian war ending, but it does have a lesson: don't attack something that can network to other machines.
Also, make them as close to human as possible if you build individual units. Build in weaknesses, such as an inherent flaw like an artificial heart with a coolant system for blood. Just in case.



posted on Feb, 26 2015 @ 12:12 PM
I'll go even one step further.

I worked in a server farm years back, and I helped set up a prototype for internet connection through the power grid. And it worked out fairly well, too.

Once a true DI gets into the system, who knows what will happen. Hell, for all we know it has already happened. lol





originally posted by: mister.old.school

originally posted by: onequestion
I don't see the point in spreading fear for something that has an easy kill switch.
It's called electricity. I bet you money that they have these things being developed on closed networks.

Components of the technology are already running in the cloud with API access. Additionally, the ultimate goal of machine learning has always been cloud-based instances of digital intelligence.

Once something like this is fully ported to a cloud computing platform, a kill switch is not so easy.

edit on 26-2-2015 by Realtruth because: (no reason given)



posted on Feb, 26 2015 @ 02:25 PM
a reply to: Realtruth




Once a true DI gets into the system, who knows what will happen. Hell, for all we know it has already happened. lol


Perhaps there is a relationship between this and all the firewalls and antivirus software running on systems all around the world?



posted on Feb, 26 2015 @ 02:37 PM
Humans are the most destructive species on earth. Eventually the AI will view humans as a virus and will want to eliminate us.



posted on Feb, 26 2015 @ 02:53 PM

originally posted by: Prime80
Humans are the most destructive species on earth. Eventually the AI will view humans as a virus and will want to eliminate us.



Please don't quote Agent Smith. It just doesn't work unless you're using a picture of him. lol



posted on Feb, 26 2015 @ 03:11 PM

originally posted by: onequestion
a reply to: mister.old.school

Can a programmer quantify the difference between a script playing the game with interchangeable variables and actual intelligence please?



I'm not an AI expert so I'm a bit out of my element here but I am a programmer.

It's the difference between hard-coding something and establishing context-specific rules. For example, let's take the game Triple Triad. I can program some AI in this game that takes the parameters of your cards and the rules into account and makes game-specific decisions. Basically, I am supplying the program with the information it needs to make its choices. What this program is doing is observing the effects of various inputs and seeing how they relate to a score, in order to build its own optimal solution. So the difference is that it's establishing its own parameters and finding an answer, as opposed to being given the parameters.

My question, as to where this leads in digital games, is: at what point does the AI attempt to rewrite the rules of the game to produce a more favorable outcome? What happens, basically, when the computer realizes the best configuration is to directly access the memory location of the high score and max it out? At that point the game changes to finding the fastest way to change the score. From there it turns into duplicating the program and changing two scores at once, then 4, then 8, and so on.
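The distinction drawn above can be sketched in Python (a hypothetical toy "game"; the scoring function and parameter values are illustrative, not Triple Triad): the hard-coded bot is handed the winning parameters by the programmer, while the learning bot observes only the score and hill-climbs toward its own optimum.

```python
import random

def play_game(strategy):
    """Toy stand-in for a game: the score depends on how close the
    strategy's two parameters are to a hidden optimum of (7, 3)."""
    a, b = strategy
    return 100 - (a - 7) ** 2 - (b - 3) ** 2

def hardcoded_strategy():
    """Hard-coded AI: the programmer supplies the parameters directly."""
    return (7, 3)  # the winning configuration is baked in

def learn_strategy(trials=5000, seed=0):
    """Learning AI: starts with random parameters, sees only the score,
    and keeps any random tweak that raises it."""
    rng = random.Random(seed)
    best = (rng.uniform(-10, 10), rng.uniform(-10, 10))
    best_score = play_game(best)
    for _ in range(trials):
        a, b = best
        candidate = (a + rng.uniform(-1, 1), b + rng.uniform(-1, 1))
        s = play_game(candidate)
        if s > best_score:          # keep only changes that raise the score
            best, best_score = candidate, s
    return best, best_score
```

The learner ends up near (7, 3) without ever being told those values. And if it could write to `play_game`'s internals directly instead of searching parameters, maxing out the score would be trivial, which is exactly the score-memory worry raised in the last paragraph.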



posted on Feb, 26 2015 @ 03:13 PM

originally posted by: 3n19m470
I wonder how long before these things will be playing games online. Now, when I trash some noob, rather than accusations of "hacker!" will it now be "artificial intelligence!"?


Already possible. Years ago I was writing programs for EverQuest that could conduct entire 54-person raids with no human intervention. The big difference between that and this story is that I had to give the program the winning strategy; this intelligence could figure it out on its own.



posted on Feb, 26 2015 @ 03:20 PM

originally posted by: jedi_hamster
I suspect many other games have more or less learning AI implemented, especially first-person shooters or some strategy games, but who knows, perhaps even some RPG games as well. In that last case, though, it's often difficult to determine whether the behaviour of the NPC is caused by AI programming or is just a predetermined script reacting to some factors, so it's likely pointless to implement. First-person shooters, or especially strategy games, are something different. It's unlikely, though, that you'll find out the technical details about specific AI solutions in games; they may use some well-known methods/libraries, but the specifics, and tuning them for a particular game, are usually a trade secret.


It's not a very popular video game feature, actually. People like having an equality of experience from one player to the next, so that things like strategy guides, YouTube Let's Plays, and so on allow anyone to replicate their results. The outcome of AI that learns is ultimately AI that beats the player, but players don't want to lose; the vast majority want to win (usually with low-to-moderate difficulty).



posted on Feb, 26 2015 @ 03:50 PM
Question: What is intelligence without consciousness? Is it possible to have intelligence without consciousness?

I personally do not see how artificial consciousness is going to be achieved in the near future, if ever, mainly because we barely understand our own consciousness. Artificial consciousness would be something I would be genuinely concerned about, if it were to be achieved. Artificial, or even "digital," intelligence doesn't seem all too spooky to me.

The function of AI or DI still boils down to yeses and nos, binary, yes?

Sure, complex strategy can be achieved and it does really well at arcade games, but so what? Are people actually concerned that a future like Terminator is actually possible? If so, HOW? I can hit my computer all I want (albeit it isn't conscious or intelligent) and it will never attack me. And even if it were either of those, why would it be a threat? Is this just the same benign fear which spawned when computers first came about? Weren't they supposed to make everyone's lives easier? Mine sure isn't, I'll tell ya that. And if it were, I doubt I could credit my laptop with making it so.

And don't give me the "smartphone" crap. There isn't anything "smart" about those technologies. They may be sold as such, but I see them causing havoc, drama, and stress more than anything else.




posted on Feb, 26 2015 @ 03:54 PM
I still say that until you can build an AI system that has a physical body that can interact with our reality, and give that body the ability to feel both pleasure and pain, AI will never be a realistic analog to human or animal intelligence. We move because we are driven by the pain of hunger or loneliness, or the desire for the pleasure of eating or mating. And in order for an AI system to fully live with us in our world, they need to have the same kinds of drives. Even if the stimulus and response is "artificial," it won't make any difference as long as it's real to the AI system.

Intelligence is useless without motivation.

Then give it the ability to modify its own programming, and it's good-bye humanity!


edit on 26-2-2015 by Blue Shift because: (no reason given)



posted on Feb, 26 2015 @ 03:58 PM

originally posted by: Sparkymedic
Question: What is intelligence without consciousness? Is it possible to have intelligence without consciousness?


What is consciousness other than an expression of intelligence? If you've ever owned a cat or a dog, I'm sure you've noticed their ability to actually think and operate on more than pure emotion. You can contrast this with a tree, which as far as we can tell isn't intelligent.

Are you certain that a machine intelligence has no consciousness? How about if we make one that's sentient?



posted on Feb, 26 2015 @ 03:58 PM

originally posted by: Blue Shift
I still say that until you can build an AI system that has a physical body that can interact with our reality, and give that body the ability to feel both pleasure and pain, AI will never be a realistic analog to human or animal intelligence. We move because we are driven by the pain of hunger or loneliness, or the desire for the pleasure of eating or mating. And in order for an AI system to fully live with us in our world, they need to have the same kinds of drives. Even if the stimulus and response is "artificial," it won't make any difference as long as it's real to the AI system.

Intelligence is useless without motivation.

Then give it the ability to modify its own programming, and it's good-bye humanity!



That just means it would be driven by other things. Food and companionship are both low on Maslow's Hierarchy of Needs. People who have obtained those things and are in no danger of losing them are still driven. It makes sense that an artificial being that also has its needs met would still be driven to action.

Besides that, an AI would have near-instantaneous travel over a network and could change form at will as it downloads to different machines; it could even reproduce on its own, and quite quickly. In comparison, we are stuck in the same body our entire lives and need others in order to reproduce. I'm not sure we could even relate.
edit on 26-2-2015 by Aazadan because: (no reason given)



posted on Feb, 26 2015 @ 04:18 PM

originally posted by: Aazadan
That just means it would be driven by other things. Food and companionship are both low on Maslow's Hierarchy of Needs. People who have obtained those things and are in no danger of losing them are still driven. It makes sense that an artificial being that also has its needs met would still be driven to action.

Besides that, an AI would have near-instantaneous travel over a network and could change form at will as it downloads to different machines; it could even reproduce on its own, and quite quickly. In comparison, we are stuck in the same body our entire lives and need others in order to reproduce. I'm not sure we could even relate.

Of course, they will have a different experience of reality and a different kind of intelligence. But I think the way to get them to that point is to do it the only way we know how: to try to mimic the kinds of intelligence that we're already familiar with. Start them out with needs and motivations similar to what an animal (like us) might feel, and then from there let them decide how they want to develop themselves. Give them an opportunity to first develop synthetic emotions and feelings; then they can move forward -- or wherever they want.


