
Deepmind A.I. learns to play video games

page: 2

posted on Nov, 26 2014 @ 10:54 AM
AI that plays computer games? Isn't that called an NPC?



posted on Nov, 26 2014 @ 03:39 PM
a reply to: ChaoticOrder

The reason we place so much weight on said test is that Turing is regarded as the father of theoretical computer science and artificial intelligence. In brief, if someone can't tell whether they are interacting with a human or a machine, then the machine is intelligent. Although the test was devised in 1950, it's as relevant now as it was back then, given that the logic has remained the same.

How do you know that we (humans) have original thoughts or do unpredictable things? Maybe we are programmed to react in a specific way by design. Considering that the holographic principle seems to be gaining credence as the days go by, this could quite possibly be the case.

I suppose at the end of the day we humans have already created a digital universe by way of our networks and the internet, and populated it with simple creatures such as bots, programs and their like. Now we feel the need to somehow bestow a form of intellect and personality on said creatures. How very god-like, and also how very human.


Let there be light, eh?



posted on Nov, 26 2014 @ 03:54 PM
Enchanting news... but as many have already pointed out, what about the dangers of it misunderstanding some aspects of human interaction from too much of the wrong kind of stimulus programming?

Hopefully they will continue to balance out Deepmind's virtual brain with all the wonderful and beautiful variations of human consciousness and artistic and creative expression, and encourage it to create new developments along that vein, so it doesn't fall into an algorithm of negative despondency and existential crisis should it ever reach self-awareness.

Call me overly sympathetic to all forms of consciousness if you so desire, but I really think it's all going to be OK. I'm pretty sure the programmers on this project are very aware of the "Skynet Paradox".




posted on Nov, 26 2014 @ 04:02 PM
a reply to: darkbake

We are doomed, I tell you.



posted on Nov, 26 2014 @ 05:10 PM
Take a look at this...
www.telegraph.co.uk...

Although the vid is missing... can anyone find it?



posted on Nov, 27 2014 @ 03:31 AM

originally posted by: andy06shake
a reply to: ChaoticOrder

In brief, if someone can't tell whether they are interacting with a human or a machine, then the machine is intelligent. Although the test was devised in 1950, it's as relevant now as it was back then, given that the logic has remained the same.

Given more than a few minutes with the AI, you will begin to realize it's a machine. If you know what sort of things to say, you can easily tell you're interacting with a machine. Just try to have a conversation which requires the AI to keep track of things it has previously said, and you will quickly dispel the illusion that you are talking to a 13-year-old child.

On the other hand, if you're speaking to a real 13-year-old child, you could have conversations that last for hours, and at no point will you think it's a machine. Saying a machine is intelligent just because it managed to fool a high percentage of people who spoke to it briefly is not a logical statement. The Turing test is outdated and the methodology is flawed.



posted on Nov, 27 2014 @ 03:56 AM
a reply to: ChaoticOrder

"The Turing test is outdated and the methodology is flawed."

So you obviously have a better one at hand? I don't have that many conversations with 13-year-olds myself, so I really can't comment on their conversational skills or abilities. The methodology of said test is not flawed; technology has simply moved forward. If the test was flawed, then why would it still be used?

Keep in mind that humanity still cannot actually define what our own consciousness is or where it resides. So any and all tests we produce to gauge whether an artificial intelligence is sentient, intelligent or conscious will most likely remain flawed to some degree or another, considering we don't really know what we are looking for, even though they are all directly linked somehow.



posted on Nov, 27 2014 @ 04:48 AM

originally posted by: andy06shake
a reply to: ChaoticOrder

The methodology of said test is not flawed; technology has simply moved forward. If the test was flawed, then why would it still be used?

It is not used very often anymore.

Mainstream AI researchers argue that trying to pass the Turing Test is merely a distraction from more fruitful research.[41] Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test."[77] There are several reasons.

en.wikipedia.org...




posted on Nov, 27 2014 @ 05:03 AM
a reply to: ChaoticOrder

I'm on a phone right now, so I apologise for the post edits and spelling mistakes in advance.

What better test do you suggest? And don't you think any other tests will at least contain a measure of similarity? Tests can be, and probably have been, updated over the years to accommodate our technological advancements regarding artificial intelligence. The logic, however, remains the same, as does the fact that we still have no real idea how consciousness functions or actually emerges with regard to intelligence.



posted on Nov, 27 2014 @ 05:38 AM

originally posted by: andy06shake
a reply to: ChaoticOrder

What better test do you suggest? And don't you think any other tests will at least contain a measure of similarity?

I don't necessarily think other tests need to have any similarity to the Turing test. But to be honest I haven't done much thinking about alternative tests. What I have done a lot of thinking on though is the qualities which I believe all conscious entities possess. I don't really think there is a perfect test, but I think a good test would measure all the different aspects of consciousness to ensure that it's not a cheap emulation.

For example, a conscious mind should be capable of understanding cause and effect; it should be able to learn new rules about how the world works through observation; and it should learn from the past to plan for the future. The AI should have expectations and make predictions about future occurrences, because that is how we stay alive and how we detect errors and replace them with probable substitutions.

It should also have the capacity for originality and creativity, in other words it should be capable of innovation through using some type of imagination. Obviously it also needs to be capable of communication, meaning it should be able to communicate ideas and concepts using language.

It shouldn't just repeat phrases which are programmed into it, the AI should be able to follow the flow of a conversation and build relationships with the people it speaks to. The AI should also have the ability to lie to the people it speaks to, because deception indicates that the AI has planned for the future in order to achieve a certain goal by lying.

Having goals and ambitions is another important quality of consciousness. Possibly the most important quality of consciousness is the capacity for self-analysis and self-diagnosis, because that implies a model of the self and the ability to be aware of one's self. I guess at the heart of consciousness is what we call "thoughts", but don't ask me to explain what a thought is.
edit on 27/11/2014 by ChaoticOrder because: (no reason given)



posted on Nov, 27 2014 @ 05:51 AM
a reply to: ChaoticOrder

I imagine the answers we may devise regarding artificial intelligence will have more in common with metaphysics than logic.

As to what original thought is? I really don't imagine we have the tools at our disposal to address the question.



posted on Nov, 27 2014 @ 06:09 AM

originally posted by: andy06shake
a reply to: ChaoticOrder

As to what original thought is? I really don't imagine we have the tools at our disposal to address the question.

I don't think original thought / imagination is such a hard thing to understand. It's basically what I would call chaotic extrapolation, meaning you can invent a new machine using your prior knowledge of the world, even though no one has actually built that machine before, because you understand physics. The chaotic part is that when you're trying to come up with an original design, your brain is combining a bunch of existing ideas in ways you've never combined them before. So you're essentially just trying out random ideas until you stumble across something you think will work, based on your understanding of how the world works. This is one reason why I think the randomness of quantum mechanics is absolutely essential to consciousness. Without quantum mechanics the universe would be deterministic and we'd all be completely predictable robots.
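The "chaotic extrapolation" idea above can be sketched as a toy random search: recombine known parts at random until a combination passes a plausibility check standing in for prior knowledge of physics. The parts list and the check here are entirely made up for illustration.

```python
import random

# Toy "chaotic extrapolation": randomly combine existing ideas until
# one passes a plausibility check. All names here are illustrative.
KNOWN_PARTS = ["wheel", "lever", "spring", "magnet", "gear"]

def plausible(combo):
    # Stand-in for "understanding how the world works": accept any
    # pair containing a gear, purely for demonstration.
    return "gear" in combo

def invent(rng, max_tries=1000):
    for _ in range(max_tries):
        combo = tuple(rng.sample(KNOWN_PARTS, 2))  # a random combination
        if plausible(combo):
            return combo
    return None

design = invent(random.Random(42))
print(design)
```

The point of the sketch is only the shape of the process: random recombination filtered by a learned model of what could work.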



posted on Nov, 27 2014 @ 06:16 AM
a reply to: ChaoticOrder

I certainly agree that quantum computing, should it ever actually come to fruition, will open up whole new horizons regarding not only artificial intelligence but the rest of our universe in general. That being said, we may very well be nothing more than predictable robots in the grand scheme of things.

This short video addresses some interesting questions regarding our perception of reality; actually, I think the dude is a member here on ATS.

www.youtube.com...



posted on Nov, 27 2014 @ 08:51 AM
Glad to see they are going about AI the right way by making it learn for itself rather than programming in every single behaviour. Considering Google are behind this, it shouldn't be too long before AI is mainstream/commonplace.



posted on Nov, 27 2014 @ 02:37 PM
a reply to: darkbake

So what happens if you let this program learn how to work a virtual stock market game and then set it free on society?

Also.. Will it be able to play Halo on Legendary without dying on one playthrough?



posted on Nov, 27 2014 @ 03:25 PM
a reply to: darkbake

Very interesting. Hopefully it won't take on the idea of playing nuclear war, like JOSHUA from WarGames.



posted on Nov, 27 2014 @ 04:36 PM
a reply to: ChaoticOrder

Lying isn't actually very important to AIs; as far as research and everyday use go, it's an undesirable characteristic, which is why no one includes it in their AIs. We already know how to make a computer program lie to us; it's not really all that difficult. You just have to calculate which has more value: giving the honest answer, or giving the answer more likely to be accepted.

In practice including the ability to lie simply makes all other information gained from the AI unreliable.
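The "calculate which has more value" rule described above fits in a few lines. The answers, values and probabilities below are invented purely for illustration.

```python
# Minimal sketch of the decision rule from the post above: compare the
# expected value of the honest answer against the value of the answer
# more likely to be accepted. All numbers are made-up examples.
def choose_answer(honest, persuasive,
                  value_of_truth, p_honest_accepted,
                  value_of_acceptance, p_persuasive_accepted):
    # Expected value of telling the truth vs. telling the likelier lie.
    honest_value = value_of_truth + value_of_acceptance * p_honest_accepted
    lie_value = value_of_acceptance * p_persuasive_accepted
    return honest if honest_value >= lie_value else persuasive

# When acceptance is valued far above truth, the program "lies".
answer = choose_answer("it was me", "it was a bug",
                       value_of_truth=0.1, p_honest_accepted=0.2,
                       value_of_acceptance=1.0, p_persuasive_accepted=0.9)
print(answer)  # → it was a bug
```

Flip the weights so truth dominates and the same rule returns the honest answer, which is exactly why including it makes every other output suspect.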



posted on Nov, 27 2014 @ 10:09 PM


Let's just hope if and when a true AI does emerge it decides to look kindly on us


What is your definition of "true AI", and will it really just "emerge", and can mere intellect make decisions, and can an artificial anything have kindness, and can it truly and really LOOK?

Just a few questions... I think you are making a lot of assumptions as to what an artificial intelligence is capable of, when even a very advanced, soulless artificial intelligence will never be able to make decisions on its own unless it's specifically programmed to do so. It will not have any kind of kindness, it can't actually have any interests or wants, and it can't just -do- stuff unless the algorithm demands that it does so.

Just pure, cold intelligence lying on a table is not going to DO anything. It won't make decisions or form opinions. It will just sit there and exist.

It will have to have a 'program' that USES it for it to be good for anything. And then the program dictates what it does, what it directs its attention to, and so on. Intelligence is just a tool; you need a USER for it. It can be a program, it can be a human, and in the case of 'natural intelligence', it is indeed a soul.

The soul is what makes decisions, learns from mistakes, knows right from wrong, has kindness, looks, feels and experiences, and has wants, needs, interests, ideas and so on. Without a soul, an A.I. would still be very useful for various purposes, but it can't fundamentally ever be anything like what you seem to envision (or at least what could be interpreted as you envisioning something too human-like, from how you worded things and what you wrote).

A.I. can't create anything, because it's just pure intelligence. A.I. with some kind of programs or guidance algorithms, or human beings, that utilise the intelligence can do a lot of things with it. But there's never going to be a situation where some artificially created intelligence becomes 'self-aware' by magic, all of a sudden, and then looks at humans and starts making decisions.

However, it's not completely clear what your meaning of A.I. really is, so a lot is possible, I suppose. But what I mean is that pure intelligence alone is not going to do any of the things you mention - it's not going to be a Terminator (1984) nightmare scenario, and no amount of intelligence is ever going to become 'self-aware'. Awareness is not something that happens when you have enough intelligence - and even awareness is not enough for true sentience - you NEED a SOUL.

(For some reason, that's a topic and a word that's danced around but rarely touched in this world - what are people afraid of? Souls are what we are, that's a fact. Our physical bodies rot away, and even the etheric body vanishes - the astral body itself will some day transform, but our eternal part is the soul, the spirit, the self, without which, there would BE no natural intelligence, only a latent capacity of intelligence that would never be used.)



posted on Nov, 27 2014 @ 10:45 PM

originally posted by: TzarChasm
AI that plays computer games? Isn't that called an NPC?


Or a... bot?

Ooh, a QUAKE (three, judging from the picture, not the original) being played by an artificial intelligence!

Never seen THAT before... (rolling eyes would be fitting right about here)

I mean, there have been bot-players for a LONG time, what's so interesting about that?

An A.I. that plays basically the simplest video games on a SEVENTIES video game console - why is that supposed to be impressive? Those games are VERY simple, they require NO intelligence whatsoever, they are mostly just REFLEX-tests.

So, it figured out a tactic (by mechanical trial and error) in each game to optimize the score. That doesn't require intelligence; it just requires repetitively playing until you have seen practically all possible outcomes and variations.

Again, those are very simple games; mostly you just need good reflexes to excel at them. They are not anything you need intelligence for.

If the A.I. can complete adventure games, like Maniac Mansion, The Secret of Monkey Island, and so on, then it might be interesting. As long as it's not doing it by trial and error - I mean, trying every single command and object with every single other object and background object that's usable/operable. That's not INTELLIGENCE.

That's BRUTE-FORCING.

So, it learns from errors, but so does a simple brute-force algorithm: it won't try the same combination twice if it knows the result was not desirable.
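That kind of memoised brute force fits in a few lines; the verbs, objects and win condition below are stand-ins for illustration, not anything from DeepMind's system.

```python
import random

# Toy brute-force search that never retries a combination it already
# knows failed. Verbs, objects and the win condition are made up.
VERBS = ["push", "pull", "open", "use"]
OBJECTS = ["door", "lamp", "key"]

def solves_puzzle(verb, obj):
    return (verb, obj) == ("use", "key")  # stand-in win condition

def brute_force(rng):
    tried = set()
    while True:
        combo = (rng.choice(VERBS), rng.choice(OBJECTS))
        if combo in tried:
            continue  # known failure: don't try the same combination twice
        tried.add(combo)
        if solves_puzzle(*combo):
            return combo, len(tried)

solution, attempts = brute_force(random.Random(1))
print(solution, attempts)
```

With 4 verbs and 3 objects it can never need more than 12 distinct attempts, yet nothing here resembles understanding the puzzle.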

What's so advanced about it trying 3D games anyway? Haven't we had skilled bots that play a lot like humans in games like Unreal Tournament and Quake 3 for AGES? (I think I had them even in Unreal and Quake 1, with some tweaking, of course.)

Playing simple games that already run inside a computer is not interesting.

Make it intelligent enough to solve all the world's financial problems for the poor, or to solve the problems of North Korea without injuring or murdering anyone, and I will probably be interested. This is just another yawn-inducing supposedly advanced technology.

So, it can learn from its mistakes. That's the only interesting part, and even that's not THAT interesting, considering how excruciatingly long it took for it to learn to play pong. Just look at those amounts - hundreds of hours, and THEN it can play?

How long does it take for a four-year-old kid to learn to play a simple game, like that?

Back to the drawing board...

This is exactly like those ASIMO videos: a robot that looks like it's constantly constipated, doesn't move anything like a human would, and falls down stairs without the reflexes to at least try to stop the fall. 'Running' around in a really slow and unnatural-looking way is just not that impressive.

Create a robot that can complete a complex TAI CHI sequence with such grace and delicate movements that, face masked, it's impossible to tell it apart from an experienced TAI CHI master, and I will probably be interested.

But a huge-bottomed robot, walking slowly, LIKE A ROBOT, is not that interesting. When a robot can be programmed to be as good a Martial Artist as Bruce Lee was by showing it Bruce Lee movies and letting it read Bruce Lee's books, then I will probably find it incredibly interesting.

These extremely tiny 'steps' are just NOT interesting.

This robot would definitely fail the Turing test, but the flaw in that test, of course, is that the humans on the other end should be relatively knowledgeable, intelligent, and perhaps even wise. If you put some moronic hicks to talk with it, I am sure even a simple modern A.I. would pass the Turing test.



posted on Nov, 28 2014 @ 12:36 AM
a reply to: Shoujikina

I thought exactly the same thing. The learning AI, though interesting, probably took 10,000 attempts or so to train the neural network, exploring and mapping out all input/output combinations.

A five-year-old might take 5, and you could tell the child generally how the game works, and they would use that to help them. When could an AI do that? And then the five-year-old could tell a story, throw a ball, and trick Suzy into eating ants.
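The gap in attempt counts is easy to illustrate with a toy tabula-rasa learner: a two-action chooser that starts with zero knowledge and estimates each action's value purely from observed reward. The reward rates and trial count are made up; this is nothing like DeepMind's actual network, just a sketch of why blind exploration is slow.

```python
import random

# Toy tabula-rasa learner: epsilon-greedy choice between two actions,
# estimating each action's value as a running mean of observed reward.
# Reward rates (0.3 vs 0.8) are invented for illustration.
def learn(trials, rng):
    counts = [0, 0]
    values = [0.0, 0.0]
    for _ in range(trials):
        if rng.random() < 0.1:          # explore 10% of the time
            a = rng.randrange(2)
        else:                           # otherwise exploit the best estimate
            a = values.index(max(values))
        reward = 1.0 if rng.random() < (0.8 if a == 1 else 0.3) else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # running mean
    return values

values = learn(10_000, random.Random(0))
print(values)  # after many trials, action 1's estimate approaches 0.8
```

Even in this trivial setting it takes thousands of samples for the estimates to settle; a child who is simply told how the game works skips that exploration entirely.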


