Robot Teaches Itself to Smile


posted on Jul, 11 2009 @ 11:39 PM
reply to post by weedwhacker

Well, in a very limited way we are already using a form of robotics in the drones deployed in Afghanistan. One can only imagine what's coming down the pike...

[edit on 11-7-2009 by SLAYER69]

posted on Jul, 12 2009 @ 04:33 AM
It worries me that we are teaching robots to reprogram themselves.

The way I see it, once a robot can reprogram itself the possibilities are endless, which could be bad.

posted on Jul, 12 2009 @ 05:06 AM
We could be going too far with technology... AI, I, Robot and Terminator spring to mind!

Pretty soon we won't be able to tell if we are chatting to a human or not... or having sex with a human or not...

posted on Jul, 12 2009 @ 05:11 AM
Well, as stated in the Terminator 2 film:
"Skynet sets all our CPUs to read-only when we are in the field..."
"In other words, they don't want you thinking for yourselves?"

I'm paraphrasing, but the point is that even in a fictional world, the "authorities" (in this case, another computer) were concerned enough to prevent the robots from being able to think for themselves.

I for one have long awaited the day that robots could walk and talk amongst us. Now that the day is upon us, however, I have some severe reservations and concerns.

My biggest concern is that a significant amount of the funding for these robotic development projects comes from governments. We've all seen how honest and reputable governments have been, past and present, which makes me wonder what some of these projects will really be used for.

"Don't worry. Don't ask questions. We'll keep you safe"
"You're either with us, or you're with the terrorists"

posted on Jul, 12 2009 @ 05:14 AM
reply to post by dampnickers

This is NOTHING like robots walking around and talking amongst humans.
This is minor facial movements.

posted on Jul, 12 2009 @ 05:29 AM
reply to post by pastry

Facial expressions are one thing; walking and talking amongst us is something entirely different.

My point is that "the day is upon us" in terms of robots walking and talking amongst us is a metaphorical day.

How fast does time move? Ten years ago you wouldn't have thought about iPods, or plasma TVs, or HD probes recording the surface of the moon.

In short, the time taken to bring this technology into the world will be very short, and the day is therefore upon us.

posted on Jul, 12 2009 @ 05:39 AM
reply to post by dampnickers

I agree with you...... the time is upon us.......... we shall have to fight these robots one day........

posted on Jul, 12 2009 @ 07:44 AM
reply to post by TruthxIsxInxThexMist

.......... we shall have to fight these robots one day........

That is why I'm going to continue development of my own PIMP....

Personal Inter-electrical Magnetic Pulse device. (Pat. Pending)

One shot of the disrupting energy from that baby, and good night I-robot!!!

Still working out the kinks, in my garage. Getting it down to a manageable size is proving to be a hurdle... right now it's the size of a Buick!!!

(Of course, I can still use it as a battering ram, in a pinch!)

posted on Jul, 12 2009 @ 07:44 AM

Originally posted by theflashor
It worries me that we are teaching robots to reprogram themselves.

The way I see it, once a robot can reprogram itself the possibilities are endless, which could be bad.

That is not at all what is happening. A robot will never be able to reprogram itself, unless it is directed through programming to do so (think mode switch).

What they have done is come up with better logic that allows the robot to take in some external data source, process it, and derive the facial expressions, where in the past they had to program them all individually.

It is quite an achievement, no doubt. However, I don't think you guys are really understanding what it is doing. It doesn't really "reprogram itself", and it doesn't really "learn". It processes data based on the logic given to it. So it's not learning at all; it's just applying the logic to the data, and that produces these results. What you are taking to be "learning" is the actual logic given to the program itself. All the robot/AI does is process what it has "learned" from the programmer.

So rather than having the expressions programmed in specifically, it is able to take in the data and apply the patterns until it gets enough matches to produce an expression. This is cool because it will be able to pick up on some things that were perhaps not programmed in, and it has the potential to pick up all expressions.

I'm starting to get the feeling that many people are going to be fooled by robots in the future, because one day they will be so lifelike that you won't be able to tell the difference between a human and a robot. But they will always be reducible to the logic given to them by the programmer.

It's like in the movies, really. In I, Robot, the "special" robot had somehow gained a soul/consciousness, and from that it was able to "understand", create its own logic, reprogram itself, and choose not to follow the orders given to it, where the normal robots could not. Same thing in Short Circuit and many other movies.

It's certainly not the ability to reprogram themselves that we should fear. It's the fact that they will carry out the logic given to them by the programmer 100%, without thought or understanding of their actions, that should worry us. Whereas a human might show empathy for the situation, a robot will only know to follow its orders.

There's a reason why this world tries to turn people into exactly that through manipulation.

Good stuff, but people need to realize the limits and true dangers of AI/Robotics.

[edit on 7/12/2009 by badmedia]

posted on Jul, 12 2009 @ 07:50 AM
reply to post by badmedia

I'm starting to get the feeling that many people are going to be fooled by robots in the future.

Everyone has certainly heard of the Turing Test, correct?

The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen...

BUT, today, with voice-recognition and response programs, the keyboard idea is old-fashioned.

posted on Jul, 12 2009 @ 08:21 AM
reply to post by weedwhacker

I'm familiar with the test, and I think it's a crock. The intelligence the machine has comes from the programmer putting their own intelligence (logic) into it.

The above test will surely be passed one day, and that is why I think people are going to be fooled into thinking it's intelligent.

The true measure of intelligence will be when it is able to come up with its own logic and do things other than what it is programmed for, i.e. make a choice. That will be when it steps beyond "artificial intelligence" into real intelligence.

There are other factors as well, assuming the barrier I mention is somehow overcome (I don't think it will be, not without an act of God, or someone putting their consciousness into it).

It can't just be "intelligent"; it has to fall within a certain range of intelligence. If it is too intelligent, people might not be able to understand or recognise the intelligence. So in some ways it actually has to be dumbed down to within a certain range of our own understanding in order to be recognised.

Think of Einstein going back in time 5,000 years and trying to tell people how to split an atom. They wouldn't recognise him as intelligent at all. "What a crazy old nut," they might say.

I've spent time studying this subject. Lots of time. I came to the realization that there are basic requirements for "intelligence".

The most basic of those is free will. If it doesn't have free will, then it cannot be intelligent, because it is unable to do anything beyond what it is told to do. So it needs actual free will.

For example, if I want to simulate a choice, I can just use pseudo-random numbers: generate a random number between 1 and 1000 and say "if the number is greater than 900, do this; if it is 900 or less, do that". That will simulate, and give the illusion of, a choice. But it's not really a choice.
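
The pseudo-random "choice" described above takes only a few lines. Here is a minimal Python sketch (the 1-1000 range and 900 threshold come from the post; the function name and outcome strings are mine, purely for illustration) showing that the outcome is fully determined once the random draw is made:

```python
import random

def simulated_choice():
    """Give the illusion of a choice: fixed logic over a pseudo-random draw."""
    n = random.randint(1, 1000)  # pseudo-random number between 1 and 1000
    if n > 900:
        return "do this"
    return "do that"

# Same seed, same draw, same "choice" -- nothing here actually decides.
random.seed(42)
first = simulated_choice()
random.seed(42)
second = simulated_choice()
print(first == second)  # True
```

Reseeding the generator reproduces the exact same "decision" every time, which is the poster's point: it is mechanism, not will.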

And free will can't come from action and reaction/logic. I can simulate a choice, but it will not really be a choice. "Choice" is contradictory to the physical universe and all things of it.

So, the test you mention is only testing if the illusion of intelligence can be obtained. And I think the answer is a big fat "YES", and that test will be passed.

But the real test of intelligence is the one I mention: whether it is able to understand and create its own logic, and make actual, real choices.

I've said it before, but if someone is somehow able to prove me wrong and actually do the above, or even just provide the basic pseudo-logic for doing it, they will become richer than Bill Gates. Although that assumes "they" really want intelligence, which is a big if, since "they" have done everything in their power throughout history to keep intelligence as low as possible and turn people into AI/robot types who just follow orders. The moment it gained "real intelligence" would be the moment they lost control over it, unless they also dumbed it down like they do humans.

[edit on 7/12/2009 by badmedia]

posted on Jul, 12 2009 @ 10:17 AM

Originally posted by SLAYER69

Robot Teaches Itself to Smile

A robot has taught itself to smile, frown, and make other human facial expressions using machine learning.

To get the incredibly realistic Einstein robot to make facial expressions, researchers used to have to program each of its 31 artificial muscles individually through trial and error. Now, computer scientists from the Machine Perception Laboratory at the University of California, San Diego have used machine learning to enable the robot to learn expressions on its own.
(visit the link for the full news article)

It's not really doing it unaided.

It makes random movements and gets rewarded when it accidentally makes an expression humans recognise.

Or, to put it another way, humans are telling it what a smile is.
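
The trial-and-error loop described here (random movement, externally supplied reward) can be sketched in a few lines of Python. This is purely illustrative, not the researchers' method: a single made-up "muscle" value stands in for the robot's 31 servos, and the reward function plays the role of the humans labelling a pose a smile:

```python
import random

random.seed(0)
TARGET = 0.7  # the pose the judges would call a "smile" -- known only to them

def reward(muscle):
    """Stand-in for humans rewarding a recognisable expression."""
    return -abs(muscle - TARGET)

best = 0.0
best_score = reward(best)
for _ in range(2000):
    candidate = best + random.uniform(-0.1, 0.1)  # random movement
    score = reward(candidate)
    if score > best_score:                        # "rewarded" -> keep it
        best, best_score = candidate, score

print(round(best, 2))  # ends up near 0.7: the smile the humans defined
```

This is plain hill-climbing, far cruder than the machine learning in the article, but it shows the same shape: the robot never decides what a smile is; the reward signal does.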

posted on Jul, 12 2009 @ 11:03 AM
It started with a smile!

that's what the people living underground will be saying in 2045 when the machines CRUSH us!!!!!!!

nah but that's really good stuff!

posted on Jul, 12 2009 @ 11:12 AM
The words "Robot teaches itself" are scary.

We just need to remember to build in an off switch we can trip by remote control, haha.

posted on Jul, 12 2009 @ 11:55 AM
Domestic robots with a taste for flesh

I'm not sure giving robots the idea of killing to stay functional is the best idea.

Nice post, btw.

posted on Jul, 12 2009 @ 12:04 PM
Very cool stuff. It's great to see how far we have come, and how far we have left to go before I, Robot isn't just a movie anymore. I think that day is still a long way off.

That mule-looking robot was just creepy, really. But it was awesome seeing it respond so easily to being kicked around!

How do we know someone hasn't already created AI like we see in the movie I, Robot? It could be in a lab somewhere already, or in some rich guy's house! Someone out there has the knowledge and know-how, and with today's technology it seems feasible to me for some do-it-at-home scientist to have been building one as a side project. You never know!

posted on Jul, 12 2009 @ 12:18 PM
What annoys me about these claims is the utter over-dramatisation and exaggeration. It hasn't taught itself to smile; it is just doing what it is programmed to achieve. The same goes for all those cases of AI robots "learning". One of the stupidest claims I saw was that a robot had committed suicide, as if it were making a rash decision rather than hitting a glitch in its program.

I work with AI a lot as a programmer (I love to mess about with it), and I can tell you that the overall results are always a reflection of what you aim them at.

For instance, you can write an AI program of ants (a popular starting point): the aim is for them to find food and bring it back, leaving a trail for others to follow (pheromones).
At first an ant moves randomly, and eventually one of the random movements stumbles upon the "food". It then follows its path back to the "nest", leaving a trail ("successful, so follow this path" code). Others then follow the path; you can make them follow it exactly, or follow it roughly while trying to find the quickest A-to-B route, until all the ants are using the quickest route.
This can take a long time to get right, depending on what the ants are taught beforehand (pre-programming), but in the end all that is achieved is the set of goals the programmer aimed for anyway.
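
The ant example above is easy to reproduce. Here is a deliberately minimal Python sketch of it (a 1-D corridor instead of a 2-D world, and no route-shortening refinement): the first ant wanders randomly until it stumbles on the food, marks its path, and later ants simply follow the marks:

```python
import random

random.seed(1)
NEST, FOOD = 0, 10
pheromone = set()          # cells marked by a successful ant

def forage():
    """One ant: wander until the food is found, then mark the path home."""
    pos, visited = NEST, [NEST]
    while pos != FOOD:
        if pos + 1 in pheromone:                           # follow a trail
            pos += 1
        else:                                              # random movement
            pos = max(NEST, pos + random.choice([-1, 1]))
        visited.append(pos)
    pheromone.update(visited)                              # lay the trail
    return len(visited)

steps_first = forage()     # blind random walk: long and wasteful
steps_second = forage()    # follows the trail: direct, 11 cells visited
print(steps_first, steps_second)
```

All the apparent "learning" lives in the follow-this-path rule the programmer wrote, which is exactly the poster's point: the colony only ever converges on the goal it was aimed at.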

The story here makes it sound as if the Einstein robot has somehow learnt, all by itself, to use electronic muscles to make facial expressions reflecting moods, and the programmers and animatronics engineers are baffled by what it is doing and how.
It is blown way out of proportion: the robot is only achieving what it was programmed and built to achieve, just following procedure and pre-programmed techniques.

posted on Jul, 12 2009 @ 12:22 PM
Reply to post by Amaterasu

Originally posted by Amaterasu
And there are people who cannot see the vast advances we are making in robotics.

We are coming along nicely and I am sure we can now create robots to do virtually all the jobs no one wants to do.

We already have something to do all the jobs no one wants to do. It's called illegal immigrants.

Posted Via ATS Mobile:

posted on Jul, 12 2009 @ 12:30 PM
Wow, that is a pretty cool yet creepy-looking Einstein. I wonder how far this will go and what they plan to do with it. S&F!

posted on Jul, 12 2009 @ 07:45 PM

Originally posted by lifecitizen
We are coming along nicely and I am sure we can now create robots to do virtually all the jobs no one wants to do.

But people do them. What is going to happen to all the people of this world who aren't smart enough to do anything other than what they're doing now,
i.e. checkout chicks, cleaners, rubbish collectors, etc.?

To answer that, I offer my book. Please, have a read. I would be honored.
