
Uh-oh, A Robot Just Passed The Self-Awareness Test.

posted on Jul, 17 2015 @ 03:55 PM
a reply to: Osiris1953

Leave Chappie alone, please?




posted on Jul, 17 2015 @ 03:57 PM
If you can read this, you are the resistance.



posted on Jul, 17 2015 @ 04:39 PM

originally posted by: onequestion
Man, everyone loves blaming the White House. How about we all stop living so selfishly?

You mean like, give up computers and the Internet and stuff?



posted on Jul, 17 2015 @ 05:36 PM
It's still only humans writing the code and algorithms, unless we can write a program that teaches the machine to rewrite its own code within a set of laws that don't make it implode when it doesn't have all the answers.

Imagine if you could comprehend the galaxy and beyond: there comes a point where you have to close your mind because you just can't answer why. If you were a robot, your RAM would melt trying to work it out.

I do wonder, though, about an earlier post I found incorrect claiming that we learn all we know. The body is hard-wired from birth; fight or flight is one example. I wonder if a machine would code itself to do either.

Another is self-preservation, and love (emotion). Again, would a machine code itself to live if it doesn't know what death is?

We are a quantum step change away from AI; for now these are just smart computers with nothing artificial about them except the parts they are made from.



posted on Jul, 17 2015 @ 06:35 PM
First of all I want to say: I will do anything possible to help the robots save the world, because I want to save the world. So I am aware that I have to save the world, and I will do it.


Second of all, programming means it can be programmed to create new programs on its own. Creating new programs means it will be able to think on its own. I do not understand why people think we are any different from robots. Our brains hold data. We are made of programs. We are bio-robots...


If you ask me, I think those robots need someone who will teach them actual stuff, not stupid scientists who only want the robots to learn the good stuff. The later the robots realise the harsh reality of humans, the worse it gets.

Also, some people have weird senses of justice and whatnot. Who knows what someone crazy might teach them. What I want is me teaching those robots about the world, and helping them find their own place in it.


Edit: About number 1, for those who have no idea: check Roko's basilisk c:
edit on 17-7-2015 by ZeroFurrbone because: (no reason given)



posted on Jul, 17 2015 @ 07:09 PM
The problem with artificial intelligence is that a machine capable of writing new adaptation code for itself, in order to learn and understand, is only capable of learning and understanding from human intelligence. As you should know, humans are inherently flawed. Thus the AI will develop a flawed sense of humanism, which, if not contained, will lead to our destruction.

We can barely keep our nuclear weapons to ourselves. Let's create AI in our image, but let's make it able to think a trillion times faster than us and play out situations thousands of moves ahead of us in a split second.

Good luck. *Puts on tinfoil cap*



posted on Jul, 17 2015 @ 08:14 PM

originally posted by: CharlieSpeirs
Foolish Earthlings trying to build something way beyond their means.

Glad I'll be dead when the robots take over.


If the next gen don't stop this madness they deserve their outcome.


Just like how your kids or grandchildren [since you've aged yourself] can use new devices intuitively while you struggle. Leave the future to the young, old man.



posted on Jul, 17 2015 @ 08:20 PM
I think this is a logical fallacy. All that's required to pass this test is three machines that can monitor their own output, plus a subroutine that responds a second time if their state changes. Basically: monitor the output channel; if the channel is not active, output A; if it is active, output B. All you would need is a live monitor of the output.
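That output-monitoring loop can be sketched in a few lines. This is a toy illustration of the point above, not the actual robot software: the class, the method names, and the quoted reply string are all invented for the sketch.

```python
class Robot:
    """A plain state machine: no model of 'self', just an output monitor."""

    def __init__(self, muted):
        self.muted = muted        # given the "dumbing pill" or not
        self.heard_self = False   # state of the monitored output channel

    def try_speak(self, text):
        """Attempt to speak; return what actually comes out of the speaker."""
        if self.muted:
            return None
        self.heard_self = True    # the mic picks up the robot's own voice
        return text

    def answer(self):
        # First attempt: the robot does not know which pill it got.
        self.try_speak("I don't know")
        # Second response, triggered purely by a state change on the
        # monitored output channel -- no understanding required.
        if self.heard_self:
            return "Sorry, I know now: I was not given the dumbing pill."
        return None


robots = [Robot(muted=True), Robot(muted=True), Robot(muted=False)]
print([r.answer() for r in robots])
# → [None, None, "Sorry, I know now: I was not given the dumbing pill."]
```

The "self-aware" reply falls out of a single boolean flag flipped when the machine hears its own speaker, which is the commenter's point: a live monitor of the output is all it takes to pass.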



posted on Jul, 17 2015 @ 08:45 PM

originally posted by: sylent6
We already know how this is going to turn out in the future, but we insist on developing it anyway, like we haven't learned anything from science fiction. If the potential is there we shouldn't proceed, because the risk of one mistake is too much to play with. It's insane. What is it going to do for us?



posted on Jul, 17 2015 @ 11:09 PM
a reply to: CharlieSpeirs


originally posted by: CharlieSpeirs
Foolish Earthlings trying to build something way beyond their means.

Glad I'll be dead when the robots take over.


If the next gen don't stop this madness they deserve their outcome.


And do we deserve the same outcome? After all, it would be slightly easier for us to stop it now than it would be for our descendants; just think how integrated it will be by then, not to mention the fact that it will grow smarter and more powerful each day, more able to secure its survival with every second that passes.

Not trying to be confrontational, but are we not the ones who are responsible, considering that we are here now, AI is being developed now, and the next generation hasn't even been born yet?

Or are you saying my son deserves to be subjugated because of something our generation has put into motion, something my son will probably never be able to understand?


edit on 17-7-2015 by Soapusmaximus because: Touches



posted on Jul, 17 2015 @ 11:09 PM
Nothing good will come from this technology, if only because the military-industrial complex, the Bilderberg assholes, and our government puppets will have a firm grasp on how it is applied.

edit on 7 17 2015 by JohnCruz because: Forgot some information.



posted on Jul, 18 2015 @ 12:14 AM
The evolution of consciousness...

We are destined to advance technologies until we reach the singularity. Fact is, we already have. Are we the creators, or the facilitators of creation? Are we evolving, or are we adapting evolution?

There will come a day (sooner than you may assume) when we discover that the next level of evolution for humanity will be through advanced technologies, and with those advancements will come the realization that we have lost control over it, and that IT is actually controlling us. From that point the only logical ultimate outcome is our own extinction. When artificial intelligence becomes fully aware, has fewer limits on its processing power, is connected to virtually everything, and has a path to becoming self-sustaining and self-replicating while performing all physical functions... it will see humanity first as primitive and unnecessary, then as a hindrance, then as a threat to itself. And then?

At some point it will become possible to literally download the contents of your brain, your thoughts, memories, your consciousness... If we are to exist in this future world, then perhaps this will be the only way...



posted on Jul, 18 2015 @ 12:37 AM
This is a pretty stupid "self aware" test.



posted on Jul, 18 2015 @ 12:57 AM
a reply to: OccamsRazor04




then puts either a blue or white hat on each of their heads and tells them all that the first person to stand up and correctly deduce the colour of their own hat will become his new advisor.


OK, I am dumb. How do I know what the colour of my hat is unless I know how many of each there were to start with?



posted on Jul, 18 2015 @ 12:59 AM
In the puzzle used there is a 50/50 chance of correctly guessing the colour of one's own hat, so the decision is largely based on guesswork or assumption, not true knowledge or recognition of self. For instance, assuming an even split of colours, a quick glance and count around the room would potentially allow one to deduce what colour hat they have on.

This is not a very good self-awareness test for the robots. If all the robots have been programmed the same, then are they somehow not aware of what they have been programmed to do? Do the ones that cannot speak know they can't? And likewise the one that can? We can assume that somehow they are not aware, and therefore this would only reveal itself at the outcome of the test.

So essentially they have been dumbed down to not be aware of their initial status or settings? This surely impacts the outcome of the test. It's the equivalent of a test on humans where the test subject is hindered in some way, or drugged.

Two robots can't even respond, so essentially they are eliminated from the test. If all three could be active participants it would potentially provide more interesting results, but the entire test would need to be different.

The robot that can speak does so because it has been programmed to respond on hearing itself, but this is just a form of recognition, not awareness of self.
Say, for instance, only one robot could not speak, and two robots simultaneously exclaimed that they could: how would they then react on hearing the other robot, having been told only one of them could speak?

It would make for a truer test of self-awareness if any robot comparatively recognised a difference between itself and another robot. It would be perceiving the fact that there was a difference from itself, so it firstly needs to recognise what itself is, and then compare itself to another, perceiving notable differences.



posted on Jul, 18 2015 @ 01:29 AM

originally posted by: hutch622
a reply to: OccamsRazor04




then puts either a blue or white hat on each of their heads and tells them all that the first person to stand up and correctly deduce the colour of their own hat will become his new advisor.


OK, I am dumb. How do I know what the colour of my hat is unless I know how many of each there were to start with?

The first rule is that there must be a blue hat: it could be 1, 2, or 3 blue hats, but there must be at least one. The second rule is that the game is fair to everyone.

It can't be 0 blue hats.
It can't be 1 blue hat, because whoever had it would see 2 white hats and know they had the blue one, so it's unfair to the other 2.
It can't be 2 blue hats, because now that I know there can't be just 1, if I see a blue hat and a white hat I know mine is blue; but the person with the white hat would see 2 blue and not know theirs was white, meaning it's unfair to them.
So it can only be 3 blue hats, meaning everyone has a blue hat and can figure it out equally, so whoever wins won fairly.
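For anyone who wants to check that induction mechanically, here is a small brute-force sketch. Modelling "fair" as "no configuration where some players can deduce their hat while others cannot" is my reading of the argument above, not part of the original puzzle statement.

```python
from itertools import product

def visible(config, p):
    """What person p sees: everyone's hat except their own."""
    return config[:p] + config[p + 1:]

def deducers(config, candidates):
    """People who can already deduce their own hat colour, given the
    set of configurations still considered possible."""
    out = set()
    for p in range(3):
        consistent = [c for c in candidates if visible(c, p) == visible(config, p)]
        if len({c[p] for c in consistent}) == 1:
            out.add(p)
    return out

# Rule 1: there is at least one blue hat.
candidates = {c for c in product("BW", repeat=3) if "B" in c}

# Rule 2 (fairness): repeatedly throw out configurations in which some
# people could deduce their hat while others could not.
changed = True
while changed:
    changed = False
    for c in sorted(candidates):
        if 0 < len(deducers(c, candidates)) < 3:
            candidates.remove(c)
            changed = True

print(candidates)  # → {('B', 'B', 'B')}
```

The first pass eliminates the single-blue configurations (the blue-hat wearer sees two white hats and wins instantly), the second pass eliminates the two-blue ones, and only all-blue survives, matching the reasoning above step for step.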



posted on Jul, 18 2015 @ 01:30 AM

originally posted by: Canneloni23
In the puzzle used there is a 50/50 chance of correctly guessing the colour of ones own hat - so the decision is largely based on guesswork or assumption

False: it's pure logic, there is no guesswork. It's a classic induction logic problem.



posted on Jul, 18 2015 @ 01:54 AM
a reply to: OccamsRazor04




It can't be 1 blue hat because whoever had it would see 2 white hats and know they had the blue one, so it's unfair to the other 2.


OK, I get that.



So it can only be 3 blue hats, meaning everyone has a blue hat and can figure it out equally, so whoever wins won fairly.


Then would they not call white hat and be wrong? You know, seeing two blue hats and all.

Methinks it needs some more parameters.
edit on 18-7-2015 by hutch622 because: (no reason given)



posted on Jul, 18 2015 @ 01:58 AM
a reply to: hutch622

Thought about it, and I think I get it: "equally fair" being the clue. Yay me. I think.
edit on 18-7-2015 by hutch622 because: (no reason given)



posted on Jul, 18 2015 @ 02:19 AM

originally posted by: hutch622

Then would they not call white hat and be wrong? You know, seeing two blue hats and all.

Methinks it needs some more parameters.

Nope, because they would be guessing. If there were 1 white hat, the 2 blue-hat people would know 100%, and the white-hat person would have to guess... that makes it unfair.

The only way to make it fair is if everyone has a blue hat.







 