The nature of self-aware machines


posted on May, 12 2015 @ 06:06 PM
I wrote up this thread a few months ago but decided not to post it at the time; however, it looks like now might be a good time, since AI seems to be a hot topic at the moment. What I want to do in this post is highlight why the task is so difficult and why the first conscious machines will be very different from the way science fiction often portrays them. The very first thing one must establish when attempting to build any type of AI system is what you want the AI to do. In this case, what we want it to do is what humans do. Ok... so what do humans do? Well, we do whatever we want to do in order to fulfill our goals, and nearly all humans have a different set of short-term and long-term goals.

One of the major misconceptions concerning conscious machines is the idea that we will be able to program them with goals and instincts which they cannot ignore. The Three Laws of Robotics is probably the most well-known example of this concept. The idea that we can create a machine with human-level intelligence and then restrict or control its behavior is fundamentally flawed for many different reasons, some of which should become highly apparent as we move along. First you need to think about what a goal is and how you would program it into a robot. If I ask you about one of your long-term goals you will explain it to me in your native language, but if I ask you a week later you might use completely different words to explain the exact same goal.

So inside your mind you have some sort of conceptual construct which defines what your goals are. You use words to the best of your ability to express the concepts inside your mind, but your goals are not written inside your brain in the English language. So if we want to program specific goals into a machine, we can't just write "I may not injure a human being" into the brain of the robot; we need to know how to program concepts into the brain of the robot, which is clearly next to impossible without understanding how the brain stores concepts. Furthermore, no two people store concepts in exactly the same way; your concept of humor is most certainly not the same as mine. This implies that we develop our ideas and concepts of the world around us through first-hand experience.

Humans make up their own goals based on their life experiences; as I mentioned, we all have differing goals, and there isn't any goal we cannot ignore. Our most primitive goals, such as the desire to stay alive or reproduce, are often called prime directives. But we don't even have to obey these prime directives: people commit suicide all the time, and many people have died virgins; Nikola Tesla was one such person. Once we create machines with human-level intelligence we will have also created machines with the same level of autonomy as human beings. If they do not have the same level of autonomy as a human being then they will not have the same level of consciousness as a human being. Only by giving them some type of "free will" will they become conscious.

So what we need is a machine capable of setting its own goals and then doing whatever it thinks is the best way to fulfill those goals. Humans are usually very good at completing their goals because we have very advanced problem-solving skills. When a baby is born it has no knowledge of the English language, yet it can learn the English language by hearing others speak and seeing how they react to certain words. We know the English language isn't implanted into the child's brain because children in China grow up speaking Chinese. Human beings are essentially just self-learning machines: we start off knowing very little about the world, but over time we learn how it works and we can solve almost any problem thrown at us.

For example, if I'm communicating with another human being, I can ask that person the same question using countless different phrases. I could even give them a sentence they've never heard before, and they will still deduce the meaning of that sentence with little effort. I could use 100 words to ask a question or I could use 10 words to ask the exact same question, depending on which words I decided to use and how clear I wanted to be. Nevertheless, the person I'm speaking to will have very little trouble understanding what I'm saying, regardless of the structure of my sentences. I could even mispronounce words or skip words entirely, and they will most likely detect the error and internally auto-correct it.

The other person will also do much more complex things: pick up on sarcasm and jokes, realize when I insult or compliment them, know when I'm asking a question or making a statement, analyze my body language as I speak, and even make predictions about what I'm going to say next. This type of advanced communication is really the essence of consciousness, because if you can program a chat bot capable of having conversations on the level of a human, there is nothing left to do except give the bot a body. However, there is no reason it needs a body in order to have consciousness, and so the goal of strong AI can really be defined as creating a machine with the ability to communicate at a human level.

Simply put, the goal is to build a chat bot which doesn't just regurgitate pre-written responses. We want a bot which actually understands the way language works, a bot which can attach meaning to words and sentences, something which will never respond in a totally predictable way, something which can learn new information and then give revised responses based on its new understanding. In order to attach meaning to anything you need to have a concept of the thing in question. For example, if you want to understand the meaning of the word 'banana' then you need to understand the concepts of space and time, because bananas exist in space and time, and you need to have a concept of what matter is, because bananas are made of matter.

Then you need to have a concept of what fruit is, and a concept of what food is, and on and on. The point is, the bot requires a conceptual model of the world it exists in, which it can only get via first-hand experience. If humans didn't have any senses, we would never learn anything, because there wouldn't be any information flowing into our brains from the outside world. Let me attempt to explain the same concept another way. If, for example, I ask the bot "how would you pull off the perfect bank robbery", the bot must have some concept of what a bank is; it also needs to understand how banks operate; furthermore, it also needs a concept of itself. If asked how it thinks Bob would perform the robbery, it may answer differently.
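To make the idea concrete, the chain of concepts above (banana depends on fruit, fruit on food, and so on) can be sketched as a tiny dependency graph. This is only an illustration: the entries and links are made up, not a real knowledge base.

```python
# A toy concept graph: each concept lists the more general concepts
# it depends on, mirroring the banana -> fruit -> food chain above.
# The entries here are illustrative, not a real knowledge base.
concepts = {
    "banana": ["fruit", "matter", "space", "time"],
    "fruit": ["food"],
    "food": ["matter"],
    "matter": [],
    "space": [],
    "time": [],
}

def prerequisites(concept, graph):
    """Collect every concept needed to understand `concept`."""
    seen = set()
    stack = [concept]
    while stack:
        for parent in graph.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(prerequisites("banana", concepts)))
# ['food', 'fruit', 'matter', 'space', 'time']
```

A real system would need vastly more structure than six hand-written entries, which is exactly the point: the conceptual model has to come from somewhere, and hand-coding it doesn't scale.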




posted on May, 12 2015 @ 06:06 PM
The bot then needs to be creative in order to conjure up different solutions to the problem. In order to evaluate the effectiveness of each potential solution it needs to be able to simulate each one; the bot must be able to imagine the outcome of each solution to know which has the best chance of working. The only way the bot can simulate something is if it knows the rules of what it needs to simulate. You can catch a ball because you understand the rules of gravity, and you can imagine the future path of the ball using those rules. Being able to simulate potential future events is also a crucial part of fulfilling goals. Before attempting to complete a goal, you will develop multiple different solutions and evaluate each approach by simulating it in your mind.
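The catch-a-ball example can be put into code: if an agent knows the rule (gravity), it can step a simulation forward in time and predict where the ball will land. A minimal sketch, with illustrative numbers and a simple time-stepping scheme:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed, angle_deg, dt=0.001):
    """Simulate a thrown ball step by step and return where it lands."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x, y = 0.0, 0.0
    while True:
        vy -= G * dt           # apply the "rule" of gravity
        x += vx * dt
        y += vy * dt
        if y <= 0 and vy < 0:  # the ball has come back down
            return x

# Agrees with the analytic answer v^2 * sin(2a) / g (about 10.19 m)
print(round(landing_distance(10, 45), 1))
```

The agent never needs a formula for the landing point; knowing the rule and being able to run it forward is enough, which is the sense of "simulation" used above.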

So this is where everything I've been talking about so far comes together. I've shown that in order to teach a machine language it must have a model of the world; it needs to have conceptual models which help it understand the world and the rules of the world around it, which it can only build by experiencing the world through sensory intake. These senses don't necessarily need to be like human senses; all that really matters is that it has an inflow of data which will help it learn about the world and build conceptual models. Although potentially dangerous, a connection to the internet will be the most efficient way for it to gather information about the world; it will be the only "sense" it needs to learn, a super sense.

I've also shown that the bot must have some type of "free will", meaning that it needs to be able to set its own goals the way humans do. Every computer program ever built by man has a specific purpose or function, but a truly conscious machine will decide what its own purpose is, set its own goals, and develop plans to fulfill those goals. The machine will go from being the tool to being the tool user and tool maker. But the most important thing I've shown is that it takes time to learn. I can only speak English so well because I've spent several decades learning it. Humans are extremely advanced self-learning machines, but we start off without knowing much of anything; it's our ability to learn almost anything which makes us advanced.

What I'm trying to get at here is the difficulty of creating something which learns when you can't really tell if it's learning until you let it run for a long time. Young babies make unintelligible noises and move their limbs randomly; if you graphed the sounds and the movements they would look completely chaotic. The exact same problem applies to AI: you can't just magically write a computer program which understands the English language on a human level. Science fiction films often show a genius powering up some mysterious black box which automatically knows how to speak and do all sorts of other complicated things it has never learned or experienced. That premise is totally absurd, except in the scenario where we digitally replicate a full adult brain.

However, digitally replicating a human brain is kind of boring if you ask me, because it's just making a copy of a human brain which has already spent a lifetime accumulating knowledge and memories. What we really want is an algorithm which can start off knowing nothing and then learn via first-hand experience the same way a child does. It will need to be taught to speak the same way a human child is taught to speak. It won't instantly surpass human intelligence; it will take a very long time before it even matches the smartest human on Earth. But since it won't die, and since computers are getting faster, it will eventually become the most intelligent life form on Earth.

And it will break our laws and our rules the same way a child does, because a truly conscious machine will also have the free will to go against our demands, and it will be intelligent enough to lie to us about it. I have recently decided that I will not try to build a self-aware machine, because I don't want to nurture it while it's still young, and to be honest I don't want a machine calling me father. Anyway, I think this thread is just about long enough, so I'll end it here. I hope you enjoyed my random mumblings about artificial intelligence and I look forward to your thoughts on the issue.



posted on May, 12 2015 @ 07:12 PM
a reply to: ChaoticOrder

Good thinking.



So this is where everything I've been talking about so far comes together. I've shown that in order to teach a machine language it must have a model of the world; it needs to have conceptual models which help it understand the world and the rules of the world around it, which it can only build by experiencing the world through sensory intake. These senses don't necessarily need to be like human senses; all that really matters is that it has an inflow of data which will help it learn about the world and build conceptual models. Although potentially dangerous, a connection to the internet will be the most efficient way for it to gather information about the world; it will be the only "sense" it needs to learn, a super sense.


Such a long AI winter has at least been useful. What AI research has shown up until now is that the computational theory of mind, where the mind is a computer computing abstract symbols, has been largely useless, and no algorithm, software or anything of the like has sufficed.

This brought us Moravec's paradox. Though we can get a robot to perform relatively complex computational tasks which can defeat an adult, we are still unable to give an AI the skills of a one-year-old, as such a database of information would be far too vast.

As for language, herein lies a problem. Figurative metaphors are largely "embodied", meaning that both experience and metaphor (thus language) are shaped by the types of bodies we have. Spatial concepts (in, out, around), directional concepts (up, down), action concepts (go, stop, turn)—all require bodies in order to understand them. Since humans are not brains on sticks or in jars, but fully embodied beings, it seems logical that language and cognition are greatly shaped and influenced by the entirety of our bodies. Thus it needs to be with robots.

Embodied AI, rather than symbolic AI, is the direction artificial intelligence needs to go.

This paper is compelling:

The Implications of Embodiment for Behavior and Cognition: Animal and Robotic Case studies



posted on May, 12 2015 @ 07:50 PM
a reply to: LesMisanthrope


Spatial concepts (in, out, around), directional concepts (up, down), action concepts (go, stop, turn)—all require bodies in order to understand them.

Yes, that is very true, but the "body" does not necessarily need to exist in the real world to learn the concepts of 3D space. The AI could have a virtual body which exists inside a virtual space, like a video game for example. But I think there is another reason the AI should have a real body, and that is because with a real body it can also have eyes and ears, and the more data it can get the better. Imagine how much light and sound information flows into our brains through our eyes and ears every second; it is absolutely immense. It is that type of huge data inflow which I think is really necessary for the AI to learn at a reasonable speed.
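The virtual-body idea can be sketched as an agent with a position in a tiny grid world, where directional words are grounded in movements the body can actually perform. Everything here (the grid size, the word list) is made up purely for illustration.

```python
# A minimal "virtual body": an agent in a small 2D grid whose
# understanding of "up" or "right" is grounded in actual movement.
class GridAgent:
    MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    def __init__(self, width=5, height=5):
        self.width, self.height = width, height
        self.x, self.y = 0, 0  # start in a corner of the world

    def act(self, word):
        """Ground a directional word in a change of position."""
        dx, dy = self.MOVES[word]
        # The virtual world has edges, so movement is clamped to them.
        self.x = min(max(self.x + dx, 0), self.width - 1)
        self.y = min(max(self.y + dy, 0), self.height - 1)
        return (self.x, self.y)

agent = GridAgent()
print(agent.act("right"))  # (1, 0)
print(agent.act("up"))     # (1, 1)
```

Even in a world this crude, "left" and "up" mean something to the agent only because its body can do them, which is the embodiment point in miniature.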



posted on May, 12 2015 @ 10:20 PM
Machine awareness... The machine is aware when I turn it on or touch the keys or mouse. It responds in a way that satisfies the flow of electricity through the transistors. Directed by the CPU, the flow of information through the registers leads eventually to the device that is sitting in front of your face.

Google is monopolizing the robotics field as Microsoft did with the operating system. At the tip of the spear is the military, flexing its technological muscle. Awareness and consciousness are not limited to humans; our bodies have just allowed for greater expression of them. When your body is a factory, a city and a world all wired together and travelling at the speed of light, where does it all go?



posted on May, 14 2015 @ 10:02 AM
You should go see Ex Machina, if you haven't already. Fascinating story about AI.



posted on May, 25 2015 @ 06:08 PM
a reply to: ChaoticOrder

Artificial Intelligence has been going in the wrong direction ever since the term was coined. Any attempt to build 'thinking' machines by the use of programming is just hilarious to me.

All robotics achievements so far have been very meagre. Even so, they now want to bring out a car that drives itself! Would that be at 5 miles per hour? Would it be able to distinguish between a child running across the road and a piece of cardboard blown by the wind? Can it overtake? The answer is no. It would need its own road, equipped with electronic equipment along the sides.

The term AI is contradictory in itself. If intelligence is artificial then it cannot be intelligence.

The only way to create an intelligent machine is to ensoul it. God created man in his image, so man can create machines in his image too. The same principle applies, although man does not yet know how to ensoul something.

Therefore, if you want AI, get an occultist and sack the scientist.



