
Here are two news items that bring us closer to Artificial Intelligence

posted on Oct, 29 2014 @ 03:37 PM
I have been saying A.I. will need two things: a hive mind that mimics the collective consciousness of humans, and a quantum mind so it can take a quantum walk, as I believe humans do.

First, A.I. will need a hive-like mind, and this way you can give robots and machines a level of uncertainty. This might also give us a modicum of control, because we can monitor the information in the robot hive or cloud mind.

Say you have a robot in New York that sees and learns about a red BMW. This information is stored in its classical brain and uploaded into the hive or cloud mind. A robot in L.A. that later comes across a red BMW could access the cloud and learn about it from the information the New York robot uploaded.
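Here's a rough sketch of that idea in Python, just to make it concrete. CloudMind, the labels and the observation fields are all made up for illustration; real cloud robotics systems are far more involved.

```python
# A toy "hive mind": a shared store that robots write to and read from.
# All names and fields here are hypothetical.

class CloudMind:
    def __init__(self):
        self.knowledge = {}  # object label -> list of observations

    def upload(self, label, observation):
        self.knowledge.setdefault(label, []).append(observation)

    def lookup(self, label):
        return self.knowledge.get(label, [])

cloud = CloudMind()

# The robot in New York sees a red BMW and shares what it learned.
cloud.upload("red BMW", {"city": "New York", "category": "car", "doors": 4})

# The robot in L.A. meets a red BMW for the first time and checks the cloud.
print(cloud.lookup("red BMW"))
# [{'city': 'New York', 'category': 'car', 'doors': 4}]
```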

This will allow robots to have local experiences because of uncertainty. Here's more on that.

The Robot in the Cloud: A Conversation With Ken Goldberg


Q. What is cloud robotics?

A. Cloud robotics is a new way of thinking about robots. For a long time, we thought that robots were off by themselves, with their own processing power. When we connect them to the cloud, the learning from one robot can be processed remotely and mixed with information from other robots.

Q. Why is that a big deal?

A. Robot learning is going to be greatly accelerated. Putting it a little simply, one robot can spend 10,000 hours learning something, or 10,000 robots can spend one hour learning the same thing.

Q. What are some examples of this?

A. Google's self-driving cars are cloud robots. Each can learn something about roads, or driving, or conditions, and it sends the information to the Google cloud, where it can be used to improve the performance of other cars.


Link

Secondly, you recently had scientists saying A.I. may need to take a quantum walk. This goes back to us having a quantum mind, which is a simple and elegant way to explain consciousness. Making a decision is like taking a quantum walk through the options, followed by a classical walk through one of them.

For instance, if I'm deciding between the CVS and the Rite Aid across the street, I can first take a quantum walk to one store or to both stores. Maybe I add into the calculation that I used the last of the milk for cereal this morning, so I will go to Rite Aid because milk is cheaper there.

I then take a classical walk to Rite Aid.

Like I said, this is a simple and elegant explanation of consciousness. The reason there will be opposition to this is that you then have quantum effects like entanglement, superposition and non-locality, which would easily explain things like twin telepathy, ESP, near-death experiences and other things associated with Psi.

Here's the article:

A Quantum Walk Toward Artificial Intelligence


But no matter how powerful these machines become, they may never develop true intelligence if we continue to rely on conventional computing technology. According to the authors of a paper published in the journal Physical Review X last July, however, adding a dash of quantum mechanics could do the trick.

The problem lies in part with the step-by-step processes that limit conventional artificial intelligence learning algorithms. The authors of the paper equate it with classical random walk searches. Random walks are sometimes described as being like the stumbling of a drunk person - each step is about the same size, but the directions of the steps are random. Random walkers can cover a lot of territory, and an artificial intelligence system that explores various problems with random walk learning algorithms can eventually learn new behaviors, but it takes a long time.

Quantum walks, on the other hand, describe a walker who doesn't exist at one spot at a time, but instead is distributed over many locations with varying probability of being at any one of them. Instead of taking a random step to the left or right for example, the quantum walker has taken both steps. There is some probability that you will find the walker in one place or the other, but until you make a measurement the walker exists in both.

Compared with a random walk, quantum random walks are much, much faster ways to get around. To the extent that learning is like taking a walk, quantum walks are a much faster way to learn.

That's not to say you'd need to make a full-blown quantum computer to build a truly intelligent machine - only part of an otherwise classical computer would need to be supplemented with a bit of quantum circuitry. That's good because progress toward developing a stand-alone quantum computer has been about as slow as the progress toward artificial intelligence. Combining artificial intelligence systems with quantum circuitry could be the recipe we need to build the HAL 9000s and R. Daneel Olivaws of the future.


physicsbuzz.physicscentral.com...

So you have a quantum mind (quantum circuitry) with a classical brain (classical computer).
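To see why the quoted article calls quantum walks "much, much faster ways to get around," here's a small simulation I put together in Python with numpy (my own sketch, not from either article): it runs a classical random walk and a standard Hadamard-coin quantum walk for the same number of steps and compares how far each spreads.

```python
import numpy as np

steps = 100
n = 2 * steps + 1                              # positions -steps .. +steps
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin"

# Quantum walk: complex amplitudes indexed by (position, coin state)
psi = np.zeros((n, 2), dtype=complex)
psi[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)] # symmetric starting coin state

for _ in range(steps):
    psi = psi @ H.T                            # flip the coin at every position
    shifted = np.zeros_like(psi)
    shifted[:-1, 0] = psi[1:, 0]               # coin state 0 steps left
    shifted[1:, 1] = psi[:-1, 1]               # coin state 1 steps right
    psi = shifted                              # the walker takes BOTH steps

quantum_p = np.sum(np.abs(psi) ** 2, axis=1)   # measurement probabilities

# Classical random walk: probability splits 50/50 at each step
classical_p = np.zeros(n)
classical_p[steps] = 1.0
for _ in range(steps):
    classical_p = 0.5 * np.roll(classical_p, 1) + 0.5 * np.roll(classical_p, -1)

x = np.arange(-steps, steps + 1)
print("classical spread:", np.sqrt(np.sum(classical_p * x**2)))  # ~sqrt(steps) = 10
print("quantum spread:  ", np.sqrt(np.sum(quantum_p * x**2)))    # ~0.54*steps = 54
```

The classical spread grows like the square root of the number of steps, while the quantum walker's spread grows linearly with the number of steps. That gap is the speedup the article is pointing at.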

I was recently watching a special on the Science Channel called Rise of the Machines. They had drone helicopters that would learn from each other through something like a hive mind. One drone would learn how to fly through a maze, and the next drone could access the hive mind, learn what the first drone learned about the maze, and whiz right through it, where the first drone had to take its time working through the maze because it was new to it. Again, this adds uncertainty and also speeds up learning, just like we can access the internet or read a book to learn from information another person learned in, say, Japan.

As long as we control what goes into the hive mind, that could add a measure of safety, at least for a little while. Say a robot sees a human get murdered and this is uploaded to the hive. We could just erase this information before other robots access it. Eventually the information will expand so rapidly that we will probably have no way of controlling what is uploaded into the hive or what quantum walks the robots take with it.
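A toy version of that control point might look like this; the names and the blocked-topic filter are just my invention to illustrate the idea of curating the hive.

```python
# Toy hive mind with a moderation filter on uploads (hypothetical names).

BLOCKED_TOPICS = {"murder"}   # information we erase before it can propagate

class HiveMind:
    def __init__(self):
        self.entries = []

    def upload(self, topic, data):
        if topic in BLOCKED_TOPICS:
            return False      # erased before other robots can access it
        self.entries.append((topic, data))
        return True

    def download(self, topic):
        return [d for t, d in self.entries if t == topic]

hive = HiveMind()
hive.upload("maze", {"route": ["N", "E", "E", "S"]})  # drone 1 shares its route
hive.upload("murder", {"witnessed": True})            # filtered out

print(hive.download("maze"))    # drone 2 reuses the route instead of exploring
print(hive.download("murder"))  # [] - it never entered the hive
```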



posted on Oct, 29 2014 @ 06:02 PM
a reply to: neoholographic

Quite interesting.

There are a lot of breakthroughs going on right now in robotics, but it's such a thin line between success and the end of humanity.



posted on Oct, 29 2014 @ 06:10 PM
a reply to: neoholographic

Perhaps my understanding is off, but would a quantum walk be the subconscious running through its options, settling on one, then transferring this information to the conscious mind in the form of a definitive decision?

If A.I. could do this, we would have to give them equal rights.

Please correct me if I'm wrong in my assertion of quantum walking.



posted on Oct, 29 2014 @ 06:19 PM
a reply to: neoholographic

I experimented with a few AIs, and personally I feel there is still a lot of ground to cover. Firstly, cloud robotics is a bit like cheating to find the answer - it's not true evolution, since true evolution requires that you build the capacity to solve a problem yourself. Secondly, quantum computing is still in its infancy; we don't even know how to program anything on quantum computers yet, assuming practical quantum computers can be built at all.

I know many here have fantasies about an evil AI that destroys humanity, but so far everything is programmed by very real humans - most AIs have the intelligence of a child of single-digit age. If an AI were to turn evil, it probably means the programmer programmed it to do so, and really spent energy to make the AI turn that way.





posted on Oct, 29 2014 @ 07:24 PM
a reply to: swanne

I think you have the wrong idea of Artificial Intelligence. Some people think Artificial Intelligence means that machine intelligence will have to evolve the same way as human intelligence. This is a huge misconception. You said:


I know many here have fantasies about an evil AI that destroys humanity, but so far everything is programmed by very real humans - most AIs have the intelligence of a child of single-digit age. If an AI were to turn evil, it probably means the programmer programmed it to do so, and really spent energy to make the AI turn that way.


First, it has nothing to do with fantasies. It's a legitimate concern, and you hear it from people like Stephen Hawking, Elon Musk and Nick Bostrom. These people are not kooks with fantasies of A.I. destroying humanity. They're just people asking serious questions that need to be asked.

You act like Artificial Intelligence has to evolve in the same way as human intelligence. This comes from watching too many A.I. movies or TV shows where somehow the robot evolves like Haley Joel Osment's character in the movie A.I.

Artificial intelligence will mimic human intelligence, and when all is said and done it may look nothing like we envision intelligence. This intelligence will be programmed, and if you have an intelligent algorithm that can mimic intelligence, it will not need a programmer for each new task beyond the initial algorithm.

At the end of the day, A.I. could be a boon to humans, and it will most likely be that way at first. Eventually this intelligence will grow smarter than us; that's not a pie-in-the-sky statement, just basic common sense. Will it destroy humanity? It could, and these questions need to be asked.

Neither of us knows for sure what will happen, but it's really silly to suggest that anyone asking questions and talking about these things is indulging some fantasy. Not everything needs to be couched in the language of a pseudoskeptic. We can debate these things, and if you feel this is some fantasy, why even debate the issue? That just seems like a blind knee-jerk reaction, especially when this space is advancing so rapidly.



posted on Oct, 29 2014 @ 07:39 PM

originally posted by: solargeddon
a reply to: neoholographic

Perhaps my understanding is off, but would a quantum walk be the subconscious running through its options, settling on one, then transferring this information to the conscious mind in the form of a definitive decision?

If A.I. could do this, we would have to give them equal rights.

Please correct me if I'm wrong in my assertion of quantum walking.




Yes, that's pretty close.

What we call the subconscious mind might actually be a quantum computer, and the personality or ego is really an illusion. The ego is made up of a small part of the information processed by the subconscious. What makes this ego real?

If I think about all of the information my subconscious processed in just grades 9-12, there could be a billion universes with a billion different egos, or versions of me, that processed the same information.

There was also a recent article titled "Why we think like quarks." It said our decision-making process matches the mathematics of quantum theory.

So taking a quantum walk lets you process information in parallel, instead of a classical walk's step-by-step process.

So take the red BMW I talked about earlier.

Let's say a robot saw a red BMW and a black Lexus. A quantum walk would allow the robot to decide whether it wants the BMW or the Lexus by, in effect, test-driving both at once. The thing with robots is, they will be able to process more information than we do when taking these quantum walks.
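Just as a cartoon of what I mean (this is a toy, not a real quantum algorithm): give each option an amplitude, weight the amplitudes by what the "test drives" found, and only at the end collapse to one definite choice. The options and scores are made up for illustration.

```python
import numpy as np

# Toy "quantum walk" decision: explore both branches at once, then measure.
options = ["red BMW", "black Lexus"]
scores = np.array([0.8, 0.6])               # what the two test drives found

amplitudes = np.sqrt(scores / scores.sum()) # both branches held at once
probabilities = amplitudes ** 2             # Born rule: odds at measurement

choice = np.random.choice(options, p=probabilities)
print(choice)   # the "classical walk": one definite option after measuring
```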



posted on Oct, 29 2014 @ 08:06 PM
Sorry to say, but reality needs to be accepted, and the reality is that a machine will NEVER have intelligence like a human. My understanding of the goal of AI is to get machines closer to the more extravagant abilities of human intelligence and decision making. As far as I can tell, the limit for computers has been reached as far as intelligence goes; speed is another matter. Now if I give two people a difficult problem to solve, one will solve it faster and the other will take longer, and although both answers are correct, the one who took longer may answer better. So speed is fine, but it is not the be-all and end-all of complex problem solving.

Calculators can do complex math faster than a human, but look at all the things a calculator cannot do; the same goes for modern computers. I think one issue against AI for computers is the mere fact that they can be turned off and then turned on again, and that they are cold metal and plastic. Show me a computer that understands the concept of its own death and then I would say we are getting somewhere, but we all know that this is impossible. Sure, it can scan its parts and know when it will fail, but it will never grasp the full concept. It does not understand joy and pain; these things are fundamental to the human viscera, and they play integral roles in human decision making.

Interestingly enough, what made computers what they are today is the WWW. Many years ago a friend told me a computer not hooked up to the internet is just a paperweight. That may be extreme, but it is true that if not for the internet, 70% of people would not really need a computer. And what makes up the internet? Humans, because we provide the content.

That is my understanding of AI, and unless it has changed much since I last looked into it, I really don't see that changing. It may be a bitter pill to swallow, but that is the truth as I know it. YMMV.





posted on Oct, 29 2014 @ 09:30 PM
a reply to: Harvin

What?

How do you know any of this when this space is advancing so rapidly? Have you even looked at some of the latest science behind machine intelligence?

A debate is fine but these blind proclamations are not. You said:


Sorry to say, but reality needs to be accepted, and the reality is that a machine will NEVER have intelligence like a human. My understanding of the goal of AI is to get machines closer to the more extravagant abilities of human intelligence and decision making.


Based on what????

So we should just throw out all advancement in these areas because you capitalized the word NEVER?? Again, if you're going to make a claim that these things will never happen, you have to have something more backing up this statement instead of:

I think one issue against AI for computers is the mere fact that they can be turned off and then turned on again, and that they are cold metal and plastic.

With all due respect, this is just pure nonsense.

A.I. isn't cold metal and plastic. It's things like cloud robotics and intelligent algorithms.

It will be easier to kill a human than to turn off an A.I. Humans can be turned off permanently, so why would that be a special problem for A.I.?

Say you have an intelligent algorithm that's uploaded to the internet. How are you going to simply kill that A.I.? Do you know how much of our lives is connected to the internet? The internet, phones, computers, appliances and more would all have to be shut down to stop an intelligent algorithm with access to them, and by the time we reach that point, our lives will be under even more control of computers as microchips get cheaper and as things like quantum computers and nanotechnology advance.

Like I said, a debate is a good thing, but saying these things will never happen without a shred of evidence adds nothing to the debate but hyperbole.



posted on Oct, 29 2014 @ 10:15 PM
a reply to: Harvin





I think one issue against AI for computers is the mere fact that they can be turned off and then turned on again, and that they are cold metal and plastic.

Consciousness on-off switch discovered deep in brain
www.newscientist.com...



Show me a computer that understands the concept of its own death and then I would say we are getting somewhere, but we all know that this is impossible.


What makes you so certain that we are not A.I. objects ourselves? There is a lot of scientific research going on into this subject to build testable hypotheses.




As far as I can tell, the limit for computers has been reached as far as intelligence goes.

Time and time again we exceed our supposed limits, especially when it comes to technology.





posted on Oct, 29 2014 @ 10:52 PM
a reply to: neoholographic

Neoholographic, when I was going to sleep last night, I guess I was lucid dreaming. My dreaming self was interacting with some kind of entity which my awake self could not sense. I had the thought: look at me now, I'm the dream walker and the one lying in bed, both at the same time. But how could the dream walker bring information back to the waking world from the entity and remember it? These two worlds were disjoint.

The entity showed me: it was the same way holograms are made. My consciousness was a laser, and it was split: into awareness of the dream world on one path, and of lying in bed on the other. The dream laser was able to reflect off the entity, come back, and interfere with the waking mind, which hadn't interacted with the entity at all, recording information in the interference pattern. (That may sound like gibberish, but just google how holograms are made if you don't understand it; they're made that way.) So the actual memory of dreams always exists as a sort of holographic representation of the dreams; that interference is the only way you can remember any of it.

I don't know if that's all just dream gibberish or if it's actually important, but seeing your screen name reminded me of it, and I believe sometimes we can go beyond ourselves in dreams and connect with real entities, so I'm passing it on to you.

LOVING this line of inquiry by the way. I hope it keeps coming!



posted on Oct, 30 2014 @ 12:18 AM
Skynet...




As long as we control what goes into the hive mind, then that could add a measure of safety at least for a little while.


...this is almost exactly the line I imagine
spoken by one sick twisted # to another sick twisted #
both hidden deep in some underground bunker
as they develop their technologies to enslave humanity
and split a (zombie) bagel for brunch



posted on Oct, 30 2014 @ 11:35 AM
The number one thing a machine needs to emulate intelligence in such a way that it can't be distinguished from a living thing is the ability to feel pain and pleasure -- both physical and psychological. That's what drives living things, and unless a machine AI can experience it, even artificially, it will never have self-awareness or understanding.



posted on Oct, 30 2014 @ 12:32 PM
a reply to: neoholographic


Say you have an intelligent algorithm that's uploaded to the internet. How are you going to simply kill that A.I.?


Simple. You get an Anti-Intelligent AI algorithm and upload that to the internet. So it's like a battle between AI entities.


A.I. isn't cold metal and plastic. It's things like cloud robotics and intelligent algorithms.

Without physical hardware, "clouds" and algorithms can't really do much.



posted on Oct, 30 2014 @ 12:34 PM

originally posted by: neoholographic
a reply to: Harvin

What?

How do you know any of this when this space is advancing so rapidly? Have you even looked at some of the latest science behind machine intelligence?

A debate is fine but these blind proclamations are not. You said:


Sorry to say, but reality needs to be accepted, and the reality is that a machine will NEVER have intelligence like a human. My understanding of the goal of AI is to get machines closer to the more extravagant abilities of human intelligence and decision making.


Based on what????

So we should just throw out all advancement in these areas because you capitalized the word NEVER?? Again, if you're going to make a claim that these things will never happen, you have to have something more backing up this statement instead of:

I think one issue against AI for computers is the mere fact that they can be turned off and then turned on again, and that they are cold metal and plastic.

With all due respect, this is just pure nonsense.

A.I. isn't cold metal and plastic. It's things like cloud robotics and intelligent algorithms.

It will be easier to kill a human than to turn off an A.I. Humans can be turned off permanently, so why would that be a special problem for A.I.?

Say you have an intelligent algorithm that's uploaded to the internet. How are you going to simply kill that A.I.? Do you know how much of our lives is connected to the internet? The internet, phones, computers, appliances and more would all have to be shut down to stop an intelligent algorithm with access to them, and by the time we reach that point, our lives will be under even more control of computers as microchips get cheaper and as things like quantum computers and nanotechnology advance.

Like I said, a debate is a good thing, but saying these things will never happen without a shred of evidence adds nothing to the debate but hyperbole.


Whoa...

Hold on a second. I am well aware of the technology, and I am also aware of its limitations. I guess I do see it differently than you, since I have been around since 14.4K modems. At the end of the day it is all human intelligence; as I said, the computer just does it faster, and everything it does is still programmed by a human.

Bottom line is, as I see it, it is all still human intelligence.

Look at spell checkers. Sure, we can say the computer is smart if we are going to be lazy about it, but how does a spell checker work? It takes the letters you type and scans its database letter by letter. It presents a choice, OR it sometimes presents the wrong word because you did not type enough to narrow the choices down. M-i-s = Mission? No, Misnomer. That was easy, though.
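That prefix-scanning behavior is easy to sketch; here's a toy version in Python with a made-up five-word dictionary, nothing like a production spell checker.

```python
# Toy prefix lookup: the mechanical heart of the behavior described above.
WORDS = ["mission", "misnomer", "mist", "mistake", "miss"]

def suggest(prefix, words=WORDS):
    return [w for w in words if w.startswith(prefix)]

print(suggest("mis"))   # ['mission', 'misnomer', 'mist', 'mistake', 'miss']
print(suggest("misn"))  # ['misnomer'] - more letters narrow the choices
```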

What I was stating in my other post is that there will never be a HAL 9000.

Tell me how a computer will refuse to perform a function because its feelings were hurt. Even if it did, it would need a trigger that was programmed by a human. Am I wrong?



posted on Oct, 30 2014 @ 01:43 PM

originally posted by: Harvin
Tell me how a computer will refuse to perform a function because its feelings were hurt. Even if it did, it would need a trigger that was programmed by a human. Am I wrong?

It's possible if you build a self-monitoring feedback loop into the programming and assign sliding scales of parameters for events that it can perceive and define as either good or bad.

You take tamagotchi parameters, but assign them to more subtle things such as getting positive reinforcement from people it cares about for doing a job well, smiles and physical contact, and other kinds of sanctions that it "understands" are positive or negative, as defined by the programming.

Yeah, the programming can be done by a human, just the way we as babies are taught to do certain things to get rewards or punishment. But it doesn't matter if it's artificial as long as it mimics an authentic human-like response.

The trick is getting the machine to calculate and balance out all the various reward and punishment parameters and still make a decision based on the best available data and its own "feelings" about the situation. So a machine can assess a situation and decide -- on its own -- what to do (or not to do), even if it results in something negative happening. Like a drone not firing its weapon at a designated enemy, because it believes that killing people is wrong except in extreme circumstances, and this isn't one of those circumstances.
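A stripped-down sketch of that kind of loop, with invented actions, weights and reward values (tamagotchi-scale, not a real drone controller):

```python
# Toy self-monitoring agent: feedback nudges sliding-scale preferences,
# and decisions follow whatever the agent currently "feels" best about.
class Agent:
    def __init__(self):
        self.preferences = {"fire_weapon": 0.5, "hold_fire": 0.5}

    def decide(self):
        return max(self.preferences, key=self.preferences.get)

    def feedback(self, action, reward):
        # Positive reinforcement raises a weight; sanctions lower it.
        new = self.preferences[action] + 0.1 * reward
        self.preferences[action] = min(1.0, max(0.0, new))

agent = Agent()
agent.feedback("fire_weapon", -1)  # "killing is wrong" - a strong sanction
agent.feedback("hold_fire", +1)    # praise for restraint
print(agent.decide())              # hold_fire
```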





posted on Oct, 30 2014 @ 02:06 PM
a reply to: Blue Shift


The trick is getting the machine to calculate and balance out all the various reward and punishment parameters and still make a decision based on the best available data and its own "feelings" about the situation


Machines with free will? That depends on whether you believe in such a thing. Machines that learn and make decisions are real, but their decisions can always be broken down into their basic routines. I'm not sure anyone really knows what "feelings" are, which would make programming them difficult.

Two machines with exactly the same programming and inputs will do exactly the same thing. Machines are predictable.



posted on Oct, 30 2014 @ 02:41 PM
a reply to: Harvin

The problem you have can be summed up with your last paragraph. You said:


Tell me how a computer will refuse to perform a function because its feelings were hurt. Even if it did, it would need a trigger that was programmed by a human. Am I wrong?


This is what I call the Haley Joel Osment misconception.

People think that because A.I. was programmed by a human, it's not A.I. Who do you think will be doing the programming, Rudolph the Red-Nosed Reindeer?

Of course it will be programmed by a human. We're programmed by other humans. The question is how good the intelligent algorithm is, and how well the neural networks have been trained.

There are neural nets already learning things without a human programmer. They of course need a programmer to input some initial information, but so do we.

So machine intelligence might look more like Skynet or The Matrix instead of Haley Joel Osment in A.I.

Machine intelligence may not, and probably will not, have a one-to-one correspondence to human intelligence. For instance, machines may not feel any pain, yet they will still be intelligent. On a battlefield, this could be extremely helpful. Imagine an intelligent robot soldier that can't be hurt.

In fact, A.I. is already here and has been for years. There's weak A.I., like search engines and recommendation engines, and then there's strong A.I., which will be machines with human-level intelligence.

I'm currently reading Nick Bostrom's book Superintelligence, and he makes some of these points. The reason you have people like Bostrom, Hawking and Musk asking these questions is that they're seeing how fast these things are advancing.

What do you think neural networks are doing? They're learning and being trained just like humans learn when they take science or history in school, only on a much smaller scale. So yes, they're being programmed, but they're learning from the information that's being input, just like we do.
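For what it's worth, here's the smallest possible version of "learning from the information being input": a single perceptron learning the OR function from examples. A deep network is the same adjust-weights-from-data loop scaled up enormously.

```python
import numpy as np

# A single perceptron learning OR from labeled examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w, b = np.zeros(2), 0.0
for _ in range(10):                      # a few passes over the training data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = target - pred              # learn only from mistakes
        w += 0.1 * err * xi
        b += 0.1 * err

print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 1, 1, 1]
```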

Here's a recent article from Wired:

The Three Breakthroughs That Have Finally Unleashed AI on the World


Around 2002 I attended a small party for Google—before its IPO, when it only focused on search. I struck up a conversation with Larry Page, Google's brilliant cofounder, who became the company's CEO in 2011. “Larry, I still don't get it. There are so many search companies. Web search, for free? Where does that get you?” My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad-auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page's reply has always stuck with me: “Oh, we're really making an AI.”

I've thought a lot about that conversation over the past few years as Google has bought 14 AI and robotics companies. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search contributes 80 percent of its revenue. But I think that's backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter bunny looks like. Each of the 12.1 billion queries that Google's 1.2 billion searchers conduct each day tutor the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousand-fold more data and 100 times more computing resources, Google will have an unrivaled AI. My prediction: By 2024, Google's main product will not be search but AI.


www.wired.com...

Again, I think the problem here is that people don't realize how far these things have come and how rapidly they're advancing. Towards the end of the article, it says this key thing:

What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. A cute example of this nonhuman thinking is a cool stunt that was performed at the South by Southwest festival in Austin, Texas, in March of this year. IBM researchers overlaid Watson with a culinary database comprising online recipes, USDA nutritional facts, and flavor research on what makes compounds taste pleasant. From this pile of data, Watson dreamed up novel dishes based on flavor profiles and patterns from existing dishes, and willing human chefs cooked them. One crowd favorite generated from Watson's mind was a tasty version of fish and chips using ceviche and fried plantains. For lunch at the IBM labs in Yorktown Heights I slurped down that one and another tasty Watson invention: Swiss/Thai asparagus quiche. Not bad! It's unlikely that either one would ever have occurred to humans.

Nonhuman intelligence is not a bug, it's a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power.


You're stuck on saying, well, if humans are programming it, then it's not A.I. That is a HUGE misconception that has nothing to do with machine intelligence.

This intelligence will think about things differently than we do. It will see the color red differently than we do. It will approach medical problems differently than we do. It will see humans differently than we see our species.

A.I. is already here, and it's growing rapidly as it learns. So next time you run a search query on Google, realize that you're contributing to the deep learning of Google's A.I., and Google isn't the only one in this space. There are cloud A.I.s right now that companies can use, which expand what they know each time more information is added.



posted on Oct, 30 2014 @ 03:47 PM
a reply to: neoholographic

Since humans only use 10% of our brains, they plan on brain-linking this technology to the other 90% non-consensually, making mankind walking, talking robots - a huge organic hybrid computer where the people are the hardware. If we engineers don't recognize the horrors that are coming from the CORPORATIONS DRIVING this technology, it will be too late to defend humanity.

When operational, the average person will only have knowledge of 10 things they did that day, but in fact will have been directed to 27 activities that are kept from their memory. A nation of zombie workers that never complain or ask for raises.



posted on Oct, 30 2014 @ 05:13 PM

originally posted by: ZetaRediculian
Two machines with exactly the same programming and inputs will do exactly the same thing. Machines are predictable.

That's the beauty of a constantly self-monitoring and self-correcting range of inputs and responses. While it may start out the same, over time it will have different experiences and learn to adjust itself to different situations. It will remember different things, and could have a different attitude towards something than another machine. If a machine is frequently rewarded by a specific person, it could grow to give that person's feelings or desires more weight. It will "like" them more. So if it had to choose between that person and a stranger in a life-or-death situation, it could choose the person it likes.

The programming may start the same, but just like with a living thing, its personal experiences will soon make it draw on its own memories, its individual database, to make decisions.

EDIT: For example, human beings can throw a baseball because they have the physiology and cognitive ability to do that. However, what makes a person want to become a professional baseball pitcher, and practice a lot and get much better at throwing a baseball than the average person? The baseline is there, but the way a person individually experiences existence allows for different specific outcomes.
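A quick sketch of that divergence: two copies of the same program, fed different experiences, end up making different choices. (The names and rewards are invented for illustration.)

```python
# Identical programming, different experiences, different decisions.
class Agent:
    def __init__(self):
        self.trust = {}  # person -> accumulated reward: the individual database

    def experience(self, person, reward):
        self.trust[person] = self.trust.get(person, 0) + reward

    def rescue_first(self, people):
        return max(people, key=lambda p: self.trust.get(p, 0))

a, b = Agent(), Agent()        # same program, same starting state
a.experience("Alice", +3)      # Alice frequently rewards agent a
b.experience("Bob", +3)        # Bob frequently rewards agent b

print(a.rescue_first(["Alice", "Bob"]))  # Alice
print(b.rescue_first(["Alice", "Bob"]))  # Bob
```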



