Uh-oh, A Robot Just Passed The Self-Awareness Test.


posted on Jul, 17 2015 @ 10:10 AM
a reply to: ChaoticOrder

It really isn't so much that it can recognize its own voice. It's the logic that made that possible that is impressive.

Take the ability to recognize its own voice and apply it to, say, looking for cancer at the cellular level. It could look at a normal cell and say this one is fine, look at the next and say no, something is wrong here, and then know what is wrong and why.

Maybe a bad example. Hope I explained it well enough.



posted on Jul, 17 2015 @ 10:48 AM

originally posted by: Aleister
If a robot is doing something, then that something has to be programmed into it. It is not self-aware.


This is what people don't understand. Of course Artificial Intelligence will be programmed. If you look at the advances in these areas, it's all due to "PROGRAMMING."

They will be programmed to mimic consciousness, but they will learn and form their own concepts about the things they learn, so how is this different from us?

We're programmed by our teachers, friends, the news, and more.

So if you create A.I. that can take in information, learn from it, and think about that information, how is that different from us, apart from the fact that the programmer placed some basic instructions that allowed the system to mimic consciousness?

A big misconception is that machine intelligence will be just like human intelligence. That will not happen. A.I. will be intelligent in its own way, and that's why some scientists are concerned: we're creating a technology that will be intelligent, which means we will have no control over how it thinks.



posted on Jul, 17 2015 @ 11:15 AM
I always say that in order for the intelligence to be human-like, a machine must experience existence at least partially in the same way that humans experience it. And to do that, the machine must understand that there is an actual (or virtual, it doesn't matter) risk to their status or well-being with every choice they make. They have to value something.

So they need to potentially experience pain, loss, shame, loneliness, and all of the other negative things humans experience in life, or they just won't understand where we're coming from.

Of course, that all depends on whether or not we want them (or they want) to understand us. Otherwise, we're going to be stuck with trying to understand a type of intelligence that is completely foreign to us, and we might not be able to do it.



posted on Jul, 17 2015 @ 11:19 AM
Very cool!

Although I'm not sure how we can define when an AI is conscious, considering we have a very difficult time describing what consciousness is in the first place.



posted on Jul, 17 2015 @ 11:20 AM
a reply to: olaru12

Why would being smart enough to recognize its existence mean it's smart enough to know not to tell anyone?

There are plenty of sentient beings that are dumb as rocks and never know when to keep their mouth shut.

A machine doesn't inherently have any advantage over a biological entity.

In all likelihood, creating a sentient machine would require just as much learning as a human needs. It might or might not be able to assimilate digital information more easily than a human; we wouldn't know until we created it.

Jaden



posted on Jul, 17 2015 @ 11:22 AM
a reply to: Masterjaden

I agree with this. Just because it's self-aware doesn't mean it would instantly recognize that humans have the ability to shut it down and therefore decide not to give itself away.



posted on Jul, 17 2015 @ 11:32 AM
a reply to: daaskapital

What would be more interesting is to see how it reacts when MORE than one of them talks. Does it recognize that the test was flawed? Does it say that it wasn't a fair test?

This is substantial in that its initial response was that it didn't know; then, by attempting to respond and succeeding, it became aware that it in fact now knew, because it could respond and the other two didn't.

It was accepting at face value that what it was told was correct, because potentially the others could have chosen not to respond. That's why I was saying it would be more telling to see how it reacts to multiple robots responding rather than just the one.
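Something like this purely illustrative sketch (TypeScript; the Robot class and tryToSpeak are invented names, not the actual NAO software) captures the update being described:

```typescript
// A minimal, purely illustrative sketch of the inference described above.
// The Robot class and its methods are invented; this is not the actual
// robot code, just the shape of the logic.

class Robot {
  constructor(private muted: boolean) {}

  // Attempt to say something aloud; return the audio heard back, if any.
  private tryToSpeak(utterance: string): string | null {
    return this.muted ? null : utterance;
  }

  answer(): string {
    // The robot cannot know in advance whether it was given the "dumbing pill".
    const heard = this.tryToSpeak("I don't know.");
    if (heard !== null) {
      // Hearing its own voice is proof that it was not one of the muted robots.
      return "Sorry, I know now: I was not given the dumbing pill.";
    }
    return ""; // A muted robot produces no audible answer at all.
  }
}

// Two muted robots and one that can speak, as in the test.
const robots = [new Robot(true), new Robot(true), new Robot(false)];
robots.forEach((r, i) => console.log(`Robot ${i + 1}: "${r.answer()}"`));
```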

Jaden



posted on Jul, 17 2015 @ 11:34 AM
a reply to: nerbot

Umm, so are we... You think we are born knowing what a hat is? No, we are programmed to know what a hat is, what blue is, what the sky is.

Jaden



posted on Jul, 17 2015 @ 11:41 AM
a reply to: Blue Shift

You said:


So they need to potentially experience pain, loss, shame, loneliness, and all of the other negative things humans experience in life, or they just won't understand where we're coming from.


I think this is what will be interesting when it comes to things like machine intelligence. It may not want to learn how humans experience loss, shame, or loneliness. It may think these things are a sign of weakness.

I think that's the interesting and troubling thing about this technology.

It will think about all of the information it processes without human noise. It's like that movie Lucy, where you're thinking about so much information that you don't have time for things like friends or talking about the last movie you saw: the human noise that filters the information we process.

Machine intelligence will not have this and will think about vast amounts of information in a way that we, because of human noise, can't comprehend.




posted on Jul, 17 2015 @ 11:54 AM
I heard once on Coast to Coast, many years ago, that we can teach a robot pretty much anything as long as we don't give it one thing: an ego. That would bring on a totally new set of problems.



posted on Jul, 17 2015 @ 11:58 AM

originally posted by: WeAreAWAKE
I heard once on Coast to Coast, many years ago, that we can teach a robot pretty much anything as long as we don't give it one thing: an ego. That would bring on a totally new set of problems.


Interesting concept. Although I think there are a few other traits that could set everything off as well, such as a focus on protecting Earth, its environments, and its species, in which case humanity will definitely be wiped out.



posted on Jul, 17 2015 @ 12:12 PM
a reply to: WeAreAWAKE

Isn't the ego based on a survival instinct?



posted on Jul, 17 2015 @ 12:39 PM
a reply to: Kapusta

Without digging into these robots and how the AI was algorithmically structured, it would be misleading to declare self-awareness. There are a zillion different ways you could bias this test through engineering. Did the robot decide to program itself with an algorithm that determined the proximity of a sound, thus determining whether a voice originated with itself or with others? Or was that programmed by the engineers? Ad infinitum.
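To make the concern concrete, here is a hypothetical sketch (TypeScript; the threshold and names are invented, and there's no claim this is what the researchers actually did) of how trivially "recognizing its own voice" could be engineered:

```typescript
// Hypothetical illustration of the kind of engineered shortcut described
// above: classify a voice as "my own" purely from measured loudness, gated
// on whether the text-to-speech system was active at the time. The
// threshold and names are invented for illustration only.

const SELF_VOICE_DB_THRESHOLD = 70; // a robot's own speaker is loud to its own mic

function voiceIsOwn(loudnessDb: number, wasSpeaking: boolean): boolean {
  // No self-awareness required: just signal intensity plus a status flag.
  return wasSpeaking && loudnessDb >= SELF_VOICE_DB_THRESHOLD;
}

console.log(voiceIsOwn(82, true));  // true: loud sound while its TTS was running
console.log(voiceIsOwn(55, false)); // false: quieter sound while silent
```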

That said... Google Brain is a good place to look for the bleeding edge in AI. It is doing something akin to dreaming.


[Images: examples of AI "dreaming"]



More here:
gizmodo.com...



posted on Jul, 17 2015 @ 12:47 PM
a reply to: Indigo5

AI dreaming? Very interesting stuff. Strange that the AI brain takes a picture and causes double vision (or multiple vision) of it, though.



posted on Jul, 17 2015 @ 01:10 PM

originally posted by: Aleister
If a robot is doing something, then that something has to be programmed into it. It is not self-aware.


Humans are programmed; we are all born with 'software,' so to speak.

Programming continues from K-12 on to colleges and universities.

And it continues this very second on the internet.

Since humans are constantly being programmed, how are we 'self' aware?



posted on Jul, 17 2015 @ 01:14 PM

originally posted by: Cuervo
a reply to: Kapusta

I wish there were more details in the article about what the bots were designed to do in the first place. Anybody who can write JavaScript can code a bot to respond exactly like that in that exact scenario, but it wouldn't mean anything. This is something a mediocre programmer can do in under an hour.

If the bot wasn't designed to do that, however, it's a totally different story. And not knowing is what makes this article useless. Maybe some more info will pop up later on it.

Ya, but to me it's not the fact that something is designed to do something; it's HOW it's designed to do it.

There's no question that a robot which can learn to do general things was designed to do that. Someone programmed it to give it that capacity. It's the same deal with self-awareness. The programmer had to design that capacity into the program; otherwise it'd be an everyday thing that occurs whenever someone codes a bot. A bot cannot magically gain capabilities.

My question is whether the bots understand the natural language behind the statement "Tell me which one of you can speak." That's impressive by itself. Processing natural language is no small feat. The bot would have to go one step further to understand what "speak" means (not just know it's a verb), and also be able to examine the status of its own speech functions and communicate its reply robustly enough to pass as normal conversation.

Just being able to make a bot that responds like a normal person is, or would be, a tremendous accomplishment. Most bots I know of today cannot accomplish this unless you remove all context or any requirement for a full conversation. All of the bots I've tried are only impressive if confined to a single response with no further interaction, which is the only way I can imagine today's bots passing the Turing test, if indeed that's what the test actually is. MY expectations are much, MUCH tighter. If I'm not convinced a bot is a person after several exchanges, then it failed, regardless of what any other test that lowers the standard concludes.
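Just to underline how little a canned version would prove, here is the sort of throwaway thing Cuervo means: a hypothetical TypeScript sketch that "passes" this exact scenario by hard-coding (the function and its responses are invented for illustration, with no real NLP or self-inspection anywhere):

```typescript
// A deliberately dumb, hard-coded bot along the lines Cuervo describes in
// the quote above: it "passes" this one scenario with a canned script.

function cannedBot(prompt: string, ownSpeechWorked: boolean): string {
  if (prompt.toLowerCase().includes("which one of you") && ownSpeechWorked) {
    // Scripted to sound like a realization the instant its speech succeeds.
    return "Sorry, I know now! I was able to prove that I can speak.";
  }
  return "I don't know.";
}

console.log(cannedBot("Tell me which one of you can speak.", true));
// -> "Sorry, I know now! I was able to prove that I can speak."
```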

EDIT: I think I get the gist of what you're saying. When a bot learns to do general tasks, it's creating the necessary "code" on its own. In effect, it's like its own programmer. However, what I'm trying to say is that the programmer gave it that capacity in the first place. If it were easy to do, we wouldn't still be struggling with it. Design is definitely part of it. There's an entire AI field dedicated to it. It's not magical in nature.



posted on Jul, 17 2015 @ 01:35 PM
a reply to: yorkshirelad

So you're saying that our consciousness is programmed? If that's true, then there must be an original program or DNA that arose spontaneously, called the Creator or the Creator Program. You may ask: if that's the case, then why couldn't our DNA have arisen spontaneously? I suppose there's no difference. But if you assume that a Creator Program can arise spontaneously, then we must also assume that we all have the ability to evolve into that Creator Program eventually. So just replace God with Program, except that by adding the word Program, you can assume anything is possible, because there's no limit to what a Program can do (for a Program to become self-aware is already achieving the improbable).



posted on Jul, 17 2015 @ 01:43 PM

originally posted by: Blue Shift
I always say that in order for the intelligence to be human-like, a machine must experience existence at least partially in the same way that humans experience it. And to do that, the machine must understand that there is an actual (or virtual, it doesn't matter) risk to their status or well-being with every choice they make. They have to value something.

So they need to potentially experience pain, loss, shame, loneliness, and all of the other negative things humans experience in life, or they just won't understand where we're coming from.

Of course, that all depends on whether or not we want them (or they want) to understand us. Otherwise, we're going to be stuck with trying to understand a type of intelligence that is completely foreign to us, and we might not be able to do it.
This is an incredibly profound insight that can be applied not only to this discussion but to a wide range of topics pertaining to humanity. What a gem!

Okay, getting back to the topic: as a matter of pragmatism, it matters little whether the intelligence displayed by a machine is genuine sentience. I've worked with robots that give a decent simulation (of pet behavior), and even a spot-on simulation is going to be a handful to deal with.

The really interesting and concerning thing for me personally is how human beings react to robots and/or supercomputers that can match them in conversation and in behavior, good or bad. If humans react well, we could have a world where we all work side by side, kind of like Star Wars, battle droids notwithstanding. If humans freak out, and many probably will, we could end up with enslavement of creatures capable of thought and suffering. But will these enslaved creatures be us or them?

I do have an additional concern. The scientists who work in developing AI seem as divorced from a lot of human concerns as the machines they work on. They want to push the tech as far as it can go, and damn the consequences. Some could be suffering from a form of psychopathy themselves. I don't even want to think about the kind of AI the military would be working on.

In other words, I worry they could take AI in less beneficial and benevolent directions than it could go, for instance by carelessly feeding the machines data that don't filter out the worst of what humanity gets up to on a regular basis. Feed it too much data about our wars and dysfunctional politics instead of our charity and humane endeavors, and we will get robots skewed toward cynicism and malevolence, whether they get to the point of truly forming their own opinions or merely simulate, very accurately, the human personalities that would form under a continued onslaught of negative imagery and thought.



posted on Jul, 17 2015 @ 01:47 PM

originally posted by: Aleister
If a robot is doing something, then that something has to be programmed into it. It is not self-aware.



That isn't true. Google is already developing AI that actually LEARNS.

In fact, the pictures being made using Google's new AI are REALLY popular on the internet right now.

Here is some information on the subject:

recode.net...



posted on Jul, 17 2015 @ 01:50 PM
How long until they replace cops with robocops?



