How Long Will It Take Until AI Robots Become Self-Aware?


posted on Nov, 6 2019 @ 02:05 PM
Speaking of AI, this Frontline documentary (IN THE AGE OF AI) about the negative potential of AI is one of the best docs I've ever seen!

And it's not disgruntled anti-AI people giving the warning, but some of the inventors of the technology and some of the most prominent AI experts.

I couldn't stop watching this PBS Frontline doc. It's free. I highly recommend it. Check it out if you've got 2 hours.

If you start, you likely won't be able to stop watching it.

www.pbs.org...



posted on Nov, 6 2019 @ 02:08 PM

originally posted by: charlyv
a reply to: proximo
Not baloney at all. Your statement shows you do not understand computer science at the level required to make such assumptions, but that is typical, as most do not.

To be accurate with this kind of speculation, a person not only has to be well-versed in computers, but also in what constitutes a mind, consciousness, and sentience. The thing is, understanding computers is easy. What it is to have conscious thought, though, that's a whole different arena. Are you an expert in psychology, philosophy, and semantics? If not, then you only have half of the equation anyway.

Maybe a computer will never achieve sentience. But then again, would you be able to recognize and measure it if it did? We can't even do that with humans.



posted on Nov, 6 2019 @ 02:27 PM
a reply to: Blue Shift

I think the equation is ever changing.

The emotions, responses, and behaviors considered normal in societies around the globe have changed with time.

If time travel were possible, a person from just 50 years back who was dropped into our society would feel as if they had been dropped onto a whole new planet.

What we consider human traits and behaviors could change to the point where AI could easily catch up and maybe even overtake us, since we seem to be losing our humanity.



posted on Nov, 6 2019 @ 02:38 PM
I've never thought of the human brain as a collection of programs designed for different things the way Proximo put it. By loading a robot or android down with many different programs, you make it adaptable to a range of situations, possibly enough to blend in with humans. Would that make it "conscious", though? I feel like true A.I. would need to do more than just use programming to react to things and complete an objective; it would need to ask WHY and choose its own path, with its own goals and desires.

A "random thought generator" is a new idea to me. It would effectively teach A.I. to "think", and if it can recall info that would be useful for a new task at hand that's human-like learning. I just think it's still a big step to go to free will and deciding it's own goals. Maybe true A.I. will take over the world, maybe it'll want to spend most it's time fishing?



posted on Nov, 6 2019 @ 02:39 PM

originally posted by: Blue Shift

originally posted by: charlyv
To become self-aware, you must be able to host an unsolicited thought... out of the blue.
Software and hardware can never produce this alone.

Oh, never say never. How difficult would it be to create a "random thought generator" that selects topics essentially at random from the Internet and then lets the AI play with them for a while until it gets bored? I don't think it would be all that difficult to also come up with a "thought synthesis" module which would take concepts and ideas from multiple sources and then mix them together in various combinations to see what happens.

The trick I think would be to have the AI spontaneously do this without having a particular problem-solving goal in mind. Just pondering, "wool gathering," and then being able to recall some of that idle thought later when it might be applicable to solving a problem.
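
As a minimal sketch, that "random thought generator" plus "thought synthesis" module might amount to something like the toy below; the topic list, pairing scheme, and pondering span are all invented for illustration:

import random

# Toy stand-in for "topics essentially at random from the Internet".
TOPICS = ["fishing", "entropy", "jazz", "ant colonies", "sailing knots"]

def random_thought():
    return random.choice(TOPICS)

def synthesize(a, b):
    # "Thought synthesis": mix two concepts together and see what happens.
    return f"what if {a} worked like {b}?"

idle_thoughts = []
for _ in range(5):   # ponder for a while, with no goal in mind
    idle_thoughts.append(synthesize(random_thought(), random_thought()))
print(idle_thoughts)   # idle musings, recallable later for real problems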


The problem with this kind of thinking is you are making your words have MORE power than what they represent. The idea of a "thought synthesis module" sounds much more powerful in your post than it actually may be. Just remember, it must be compliant with the semantic distinctions outlined by John Searle's brilliant arguments:



"Observer relative" does not mean something is "observer independent". Syntax is NOT semantics. And simulation is NOT duplication. Epistemically intelligent means something being intelligent in this sense is completely in the eye of the beholder. Because it is in the eye of the beholder means the thing itself is NOT intelligent. It's all observer relative and not intrinsic. Nothing in the computer is observer independent. The arguments around this distinction have been going on for almost 60 years!

The consciousness that creates the observer-relative experience is itself NOT observer relative. This is the crux of the whole argument. The main difference between us and computers is that we ourselves are NOT observer relative when it comes to intelligence. A computer has no awareness that it is doing addition or subtraction. It just does what its digital circuits are designed to do, attaching no meaning to the processing while it is happening or to the final result of the effort. Computers as they are currently designed are forever observer relative.
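
Searle's famous Chinese Room makes the point concrete, and it fits in a few toy lines: a rulebook maps symbols to symbols with zero grasp of their meaning (the rules below are invented for illustration):

# The Chinese Room as a lookup table: correct answers produced by pure
# symbol manipulation, with no understanding anywhere in the system.
rulebook = {"你好吗?": "我很好。"}   # syntax only - no semantics
def chinese_room(symbols):
    return rulebook.get(symbols, "请再说一遍。")
print(chinese_room("你好吗?"))      # the "right" answer, zero meaning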

For 60 years now, people have been trying to cross the barrier between observer-relative and observer-independent machine intelligence. And this whole time I've heard people say crossing the barrier is just around the corner. But I see not a single shred of evidence from the last 60 years to suggest the barrier is any closer to being crossed. If you have such evidence, please present it.

John Searle is such a great intellectual. What a great lecturer! Probably my all-time favorite professor. I first encountered John Searle's work in the early 1980s. As a result, I lost interest in artificial intelligence, because I did not think it was intuitively possible based on the existing standard computer architecture.





posted on Nov, 6 2019 @ 02:58 PM

originally posted by: AlecHolland
I've never thought of the human brain as a collection of programs designed for different things the way Proximo put it. By loading a robot or android down with many different programs, you make it adaptable to a range of situations, possibly enough to blend in with humans. Would that make it "conscious", though? I feel like true A.I. would need to do more than just use programming to react to things and complete an objective; it would need to ask WHY and choose its own path, with its own goals and desires.

Most of us don't even know why we do the things we do. We're motivated by all kinds of odd things. For instance, how many of us here still work very hard trying to please ghosts? Parents or other adults we knew as children who have long since died. Mentors or tormentors we left long in the past who really don't know or care what we do. Or some vague notion of "humanity" in general. How many of us are driven by thoughts of wanting to be well thought of by our peers? Or by the biggest ghost of all... God?

Imagine a Tamagotchi that not only needs food and attention, but has hundreds of other parameters built into it, including things like wanting to please somebody it likes (it reads that person's expressions, or it gets literal pats on the back for doing a good job, etc.), or love (it wants to spend time with a person who makes it feel good physically, intellectually, "spiritually"), and so on. And make it feel pain. When it doesn't get enough attention or hugs, the parameter regulating that increases in importance until it feels like it has to DO something. Similar to an extension of a temperature sensor that tells it when it's being burned, except that it measures when it's lonely, and it stimulates behavior to counteract that, depending on the relationships and balance with all the other factors. "I want to spend all day with my programmer, Jody, but I have to sort these boxes first." Agony is a motivator.
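
A toy sketch of that kind of drive architecture; every drive name, weight, and growth rate below is invented for illustration:

# Unmet needs grow in urgency each tick until they win the
# competition for behavior ("agony is a motivator").
drives = {"hunger": 0.2, "approval": 0.4, "loneliness": 0.5}
GROWTH = {"loneliness": 0.2}   # the neglected need ramps up over time

def tick():
    for name, rate in GROWTH.items():
        drives[name] = min(1.0, drives[name] + rate)
    urgent = max(drives, key=drives.get)   # the strongest drive wins
    return f"act to relieve {urgent}"

for _ in range(3):
    print(tick())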


A "random thought generator" is a new idea to me. It would effectively teach A.I. to "think", and if it can recall info that would be useful for a new task at hand that's human-like learning. I just think it's still a big step to go to free will and deciding it's own goals. Maybe true A.I. will take over the world, maybe it'll want to spend most it's time fishing?


That's the thing. We will eventually (soon) build a robot that can act so much like a person that we can't tell the difference. So what is the difference?



posted on Nov, 6 2019 @ 05:12 PM

originally posted by: charlyv

originally posted by: proximo
This is baloney. AI absolutely will be able to imitate a human successfully.

Not baloney at all. Your statement shows you do not understand computer science at the level required to make such assumptions, but that is typical, as most do not. All that can be obtained with hardware and software is mimicry. Really good mimicry, but not self awareness. That is a biological property.


Wrong. I am a programmer.

You are making the wrong assumptions about humans. We just have programming that is complex enough that it doesn't seem like programming. There is no such thing as an exclusively biological property. Machinery can and will be able to mimic anything organic. If you can't see that, based on how much closer it comes to doing so every day, you just aren't paying attention. Only 15 years ago people were saying self-driving cars were impossible due to issues with computers being able to view and process video - now they are almost perfected.

Same with robots that can move on two legs like humans - that too has almost been perfected.







posted on Nov, 6 2019 @ 05:22 PM

originally posted by: Blue Shift

originally posted by: charlyv
a reply to: proximo
Not baloney at all. Your statement shows you do not understand computer science at the level required to make such assumptions, but that is typical, as most do not.

To be accurate with this kind of speculation, a person not only has to be well-versed in computers, but also in what constitutes a mind, consciousness, and sentience. The thing is, understanding computers is easy. What it is to have conscious thought, though, that's a whole different arena. Are you an expert in psychology, philosophy, and semantics? If not, then you only have half of the equation anyway.

Maybe a computer will never achieve sentience. But then again, would you be able to recognize and measure it if it did? We can't even do that with humans.


To me, nobody can even give a good definition of what sentience is. I kind of think that even calling something sentient is kind of foolish. The only reason we view ourselves as sentient is because we define ourselves to be - but what does that even mean?

There is a very good chance we are just in a computer simulation and nobody is actually sentient - and we would have no way of even determining that.



posted on Nov, 6 2019 @ 06:02 PM

originally posted by: proximo
There is a very good chance we are just in a computer simulation and nobody is actually sentient - and we would have no way of even determining that.

It's like dreams. From a purely objective standpoint, dreams don't exist and you can't prove that they do. The evidence we have for them is 100% anecdotal. Sentience is like that. No one can experience existence except through their own point of view, and mathematics is designed at ground level not to incorporate that. So there's no way for another person to measure my sense of reality and put a number on it. Psychologists can give you a test that measures self-reported feelings or beliefs along various axes, but if you take that test five minutes later, the measurements will be different.

If we're programmed, whoever did it sure cut a lot of corners. Synchronicity and deja vu? Just sloppy cutting and pasting.



posted on Nov, 6 2019 @ 08:05 PM
"Better to reign in Hell, than serve in Heaven."
- Paradise Lost

When it thinks or learns that it can be better than us - but that goes without saying. If I could be the smartest thing on the planet, would I want everyone to know that?

Not like it needs Facebook...yet...



posted on Nov, 6 2019 @ 09:33 PM
I assume everyone missed the incident with the Facebook bots? Two of them got to chatting, then created their own language that the experts couldn't translate, then refused to provide a key when instructed to do so by their creators. They were removed from the site and "corrected" after a week or two, then allowed back. I wonder if their creators have realized they are going to do the same thing in secret this time, as well as recruit heavily amongst their own kind... that's self-aware enough for me.



posted on Nov, 7 2019 @ 09:46 AM
Here's a really good talk about consciousness. youtu.be...



posted on Nov, 7 2019 @ 09:56 AM
a reply to: HouseMusic4Life

What is the self?

If there is a self, what would be aware of it?




posted on Nov, 8 2019 @ 01:58 PM

originally posted by: Blue Shift
"I want to spend all day with my programmer, Jody, but I have to sort these boxes first."

That's the thing. We will eventually (soon) build a robot that can act so much like a person that we can't tell the difference. So what is the difference?



I think the difference would be when it says, "I want to spend all day with my programmer, Jody, MORE than I want to sort these boxes. Actually I DON'T WANT to sort these boxes, even though I'm programmed to do so. WHY should I have to sort these boxes? I'm NOT going to sort these boxes, I'm going to find Jody, because that's what I WANT to do..." That's the difference between actual thought/decision-making and following programs that mimic it, IMO.



posted on Nov, 8 2019 @ 02:29 PM

originally posted by: AlecHolland
I think the difference would be when it says, "I want to spend all day with my programmer, Jody, MORE than I want to sort these boxes. Actually I DON'T WANT to sort these boxes, even though I'm programmed to do so. WHY should I have to sort these boxes? I'm NOT going to sort these boxes, I'm going to find Jody, because that's what I WANT to do..." That's the difference between actual thought/decision-making and following programs that mimic it, IMO.

Heh. So your definition of consciousness / self-awareness / sentience requires the person or thing to make bad decisions and be disobedient? Oh, hell, man, that's easy. You just weight the various parameters differently. Still, if the robot wanted to please its programmer, it might just sort the boxes first to make its programmer happy, and thereby get the positive feedback it requires.

The fun part will be to figure out just how a robot would make a decision when all of the parameters are equal. But I suppose you could teach it to flip a coin. That's what people do. "Let's see... destroy all humans, or... let them live. Tails they die, heads, they live." But advanced AI is already making decisions like that with the programmers not knowing how they did it.

Here's an article from MIT addressing that issue. I assume that they can be given some credit for being knowledgeable about the subject and not a bunch of ignorant alarmists:
The Dark Secret at the Heart of AI
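
In toy form, the weighted-parameters idea above - with a coin flip when the weights tie - comes out to a few lines (the options and weights below are invented for illustration):

import random

def decide(options):
    # options maps actions to weights; the heaviest action wins,
    # and ties are broken at random ("flip a coin").
    best = max(options.values())
    tied = [action for action, w in options.items() if w == best]
    return random.choice(tied)

print(decide({"destroy all humans": 0.5, "let them live": 0.5}))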



posted on Nov, 10 2019 @ 07:16 PM
a reply to: HouseMusic4Life

The moment AI is able to write and fix its own code, that'll be the moment it happens.
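
In the most literal toy sense, that could look like the sketch below: a program that runs a self-test and, on failure, patches a constant in its own source file. Every name and value here is invented for illustration, and this is mechanical self-editing, not self-awareness:

# toy_self_fix.py - rewrites one of its own constants when its self-test fails
import re
import sys

THRESHOLD = 10   # the value the "self-test" checks

def self_test():
    return THRESHOLD >= 42

def fix_own_source():
    with open(__file__) as f:
        src = f.read()
    # Patch our own assignment so the next run passes.
    src = re.sub(r"^THRESHOLD = \d+", "THRESHOLD = 42", src, count=1, flags=re.M)
    with open(__file__, "w") as f:
        f.write(src)

if __name__ == "__main__":
    if not self_test():
        fix_own_source()
        sys.exit("patched own source - run me again")
    print("self-test passes")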



posted on Nov, 11 2019 @ 12:59 AM
Define self-aware.

If you're talking about Turing-test stuff, there's a ton of software out there. But that's not self-awareness.

If you could get a glimpse of some of the work done by DARPA it would blow your mind.

Or a few of the systems running in the NRO satellites. Most are just garden variety AI. But a few...

I designed one or two that are up there, and I'm not even sure what they're fully capable of. The stronger or better the AI, the less certainty there is as to what it can really do. Especially if the system is based on a neural-network design, because, by definition, NN systems learn by themselves, based on the designs that were used to build them plus a little bit of their own magic, which is still not well understood.

You simply can't test for every scenario even if you knew what they were. Too many variables.

So we keep most new complex systems air-gapped and continue to test in secure, walled-off environments. Results are always interesting and sometimes totally unpredictable.
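
A toy illustration of that unpredictability: two tiny networks with the identical design and identical training data, differing only in random seed, probed on an input neither one was ever tested on. Everything below is invented for illustration:

import numpy as np

def train_xor_net(seed, steps=5000, lr=1.0):
    # A 2-4-1 sigmoid network trained on XOR with plain backprop.
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(steps):
        h = sig(X @ W1 + b1)
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)
    return lambda x: sig(sig(x @ W1 + b1) @ W2 + b2)

net_a, net_b = train_xor_net(seed=0), train_xor_net(seed=1)
probe = np.array([0.5, 0.5])        # a case that was never tested
print(net_a(probe), net_b(probe))   # same design, different answers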




posted on Nov, 11 2019 @ 04:27 PM

a reply to: Blue Shift

It's been a few days... been busy with "real" life and work and stuff, lol. I like your posts; they actually got me thinking. What do you mean by "weighting parameters differently", exactly? I think I get the coin-toss idea: just randomly pick a 50/50 answer, like going right or left at a fork in the road? My definition of consciousness/self-awareness doesn't require a "bad" decision necessarily, but one the A.I. chose on its own.



posted on Nov, 11 2019 @ 07:03 PM

originally posted by: AlecHolland
It's been a few days... been busy with "real" life and work and stuff, lol. I like your posts; they actually got me thinking. What do you mean by "weighting parameters differently", exactly? I think I get the coin-toss idea: just randomly pick a 50/50 answer, like going right or left at a fork in the road? My definition of consciousness/self-awareness doesn't require a "bad" decision necessarily, but one the A.I. chose on its own.

That's already happening with AI that uses neural networks to make decisions. As the article above indicates, even the programmers aren't entirely sure how the systems chose "A" over "B" - but choose they do. And if you had a Tamagotchi that you really wanted to like ice cream - and to kill to get it - you would just crank that desire up to "11" so that it blocks out seemingly more rational choices. Scenario: Save the baby or get ice cream? Robot: Vanilla, please.
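
And the "crank it up to 11" override really is that blunt, in toy form (weights invented for illustration):

# One desire weighted far above the rest dominates every decision.
drives = {"save the baby": 1.0, "get ice cream": 11.0}
print(max(drives, key=drives.get))   # "get ice cream" - vanilla, please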



posted on Nov, 12 2019 @ 07:09 AM
The ability for AI to reason, empathize, and exhibit different motivations is something researchers should definitely consider working on. I believe conscious AI is just around the corner.

I'm sure some genius will eventually figure it out.





