Q.
What is cloud robotics?
A.
Cloud robotics is a new way of thinking about robots. For a long time, we thought that robots were off by themselves, with their own processing power. When we connect them to the cloud, the learning from one robot can be processed remotely and mixed with information from other robots.
Q.
Why is that a big deal?
A.
Robot learning is going to be greatly accelerated. Putting it a little simply, one robot can spend 10,000 hours learning something, or 10,000 robots can spend one hour learning the same thing.
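A minimal sketch of what that pooling looks like (the robot logs and the merge step here are hypothetical, not any specific cloud-robotics API): each robot contributes its own observation counts, and the cloud simply sums them, so many robots learning briefly is equivalent to one robot learning for a long time.

```python
from collections import Counter

def merge_experience(robot_logs):
    """Merge per-robot observation counts into one shared model.

    Each log is a Counter of (situation, outcome) pairs; the cloud
    sums them, so 10,000 robots each contributing one hour of data
    yield the same counts as one robot running 10,000 hours.
    """
    shared = Counter()
    for log in robot_logs:
        shared.update(log)
    return shared

# Three robots each observe a few situations independently.
logs = [
    Counter({("wet_road", "skid"): 2}),
    Counter({("wet_road", "skid"): 1, ("dry_road", "grip"): 5}),
    Counter({("dry_road", "grip"): 4}),
]
shared_model = merge_experience(logs)
# The fleet's pooled experience: 3 skids on wet road, 9 grips on dry road.
```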
Q.
What are some examples of this?
A.
Google’s self-driving cars are cloud robots. Each can learn something about roads, or driving, or conditions, and it sends the information to the Google cloud, where it can be used to improve the performance of other cars.
But no matter how powerful these machines become, they may never develop true intelligence if we continue to rely on conventional computing technology. According to the authors of a paper published in the journal Physical Review X last July, however, adding a dash of quantum mechanics could do the trick.
The problem lies in part with the step-by-step processes that limit conventional artificial intelligence learning algorithms, which the authors of the paper equate with classical random walk searches. Random walks are sometimes described as being like the stumbling of a drunk person - each step is about the same size, but the direction of each step is random. Random walkers can cover a lot of territory, and an artificial intelligence system that explores various problems with random walk learning algorithms can eventually learn new behaviors, but it takes a long time.
Quantum walks, on the other hand, describe a walker who doesn't exist at one spot at a time but is instead distributed over many locations, with varying probability of being at any one of them. Instead of taking a random step to the left or right, for example, the quantum walker has taken both steps. There is some probability that you will find the walker in one place or the other, but until you make a measurement the walker exists in both.
Compared with a random walk, quantum random walks are much, much faster ways to get around. To the extent that learning is like taking a walk, quantum walks are a much faster way to learn.
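The speed-up is easy to see in a small simulation. The sketch below assumes a standard discrete-time Hadamard walk on the integer line (one common textbook model of a quantum walk, not necessarily the exact scheme in the paper): a classical walker's spread grows like the square root of the number of steps, while the quantum walker's spread grows linearly with it.

```python
import math

def hadamard_walk_sigma(steps):
    """Standard deviation of position after a discrete-time Hadamard
    quantum walk on the integer line (symmetric initial coin state)."""
    s = 1 / math.sqrt(2)
    # amp maps (position, coin) -> complex amplitude;
    # coin 0 moves the walker left, coin 1 moves it right.
    amp = {(0, 0): s, (0, 1): 1j * s}
    for _ in range(steps):
        nxt = {}
        for (x, c), a in amp.items():
            # Hadamard coin: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
            to_left = a * s
            to_right = a * s if c == 0 else -a * s
            nxt[(x - 1, 0)] = nxt.get((x - 1, 0), 0) + to_left
            nxt[(x + 1, 1)] = nxt.get((x + 1, 1), 0) + to_right
        amp = nxt
    # Measure: p(x) is the squared amplitude summed over coin states.
    prob = {}
    for (x, _), a in amp.items():
        prob[x] = prob.get(x, 0.0) + abs(a) ** 2
    mean = sum(x * p for x, p in prob.items())
    return math.sqrt(sum((x - mean) ** 2 * p for x, p in prob.items()))

steps = 50
print("classical sigma:", math.sqrt(steps))      # spreads like sqrt(t)
print("quantum sigma:  ", hadamard_walk_sigma(steps))  # spreads like t
```

After 50 steps the classical spread is about 7 positions, while the quantum walker has spread several times farther - and the gap keeps widening as the walk gets longer.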
That's not to say you'd need to make a full-blown quantum computer to build a truly intelligent machine - only part of an otherwise classical computer would need to be supplemented with a bit of quantum circuitry. That's good because progress toward developing a stand-alone quantum computer has been about as slow as the progress toward artificial intelligence. Combining artificial intelligence systems with quantum circuitry could be the recipe we need to build the HAL 9000s and R. Daneel Olivaws of the future.
I know many here have fantasies about an evil AI that destroys humanity, but so far everything is programmed by very real humans - most AIs have the intelligence of a child whose age is in the single digits. If an AI were to turn evil, it probably means the programmer programmed it to do so, and deliberately spent energy making the AI turn that way.
originally posted by: solargeddon
a reply to: neoholographic
Perhaps my understanding is off, but to quantum walk, would be the subconscious running through its options, settling on one, then transferring this information to the conscious mind in the form of a definitive decision?
If A.I. could do this, we would have to give them equal rights.
Please correct me if I'm wrong in my assertion of quantum walking.
Sorry to say, but reality needs to be accepted, and reality is that a machine will NEVER have intelligence like a human - and that is my understanding of the goal of AI, to get it closer to the more extravagant abilities of human intelligence and decision making.
I think one issue against AI for computers is the mere fact that they can be turned off and then turned on again, and that they are cold metal and plastic.
Show me a computer that understands the concept of its own death and then I would say we are getting somewhere, but we all know that this is impossible.
As far as I can tell, the limit for computers has been reached as far as intelligence goes.
As long as we control what goes into the hive mind, then that could add a measure of safety at least for a little while.
Say you have an intelligent algorithm that's uploaded to the internet. How are you going to simply kill A.I.?
A.I. isn't cold, metal and plastic. It's things like Cloud Robotics and intelligent algorithms.
originally posted by: neoholographic
a reply to: Harvin
What?
How do you know any of this when this space is advancing so rapidly? Have you even looked at some of the latest science behind machine intelligence?
A debate is fine but these blind proclamations are not. You said:
Sorry to say, but reality needs to be accepted, and reality is that a machine will NEVER have intelligence like a human - and that is my understanding of the goal of AI, to get it closer to the more extravagant abilities of human intelligence and decision making.
Based on what????
So we should just throw out all advancement in these areas because you capitalized the word NEVER?? Again, if you're going to make a claim that these things will never happen, you have to have something more backing up this statement instead of:
I think one issue against AI for computers is the mere fact that they can be turned off and then turned on again, and that they are cold metal and plastic.
With all due respect, this is just pure nonsense.
A.I. isn't cold, metal and plastic. It's things like Cloud Robotics and intelligent algorithms.
It will be easier to kill a human than it is to turn off A.I. Humans can be turned off permanently, so why would that be a problem for A.I.?
Say you have an intelligent algorithm that's uploaded to the internet. How are you going to simply kill A.I.? Do you know how much of our lives is connected to the internet? An intelligent algorithm that can mimic intelligence and has access to the internet, phones, computers, appliances and more would have to be shut down everywhere in order to stop it, and by the time that point is reached, our lives will be under even more control of computers as microchips get cheaper and as new things like quantum computers and nanotechnology get more advanced.
Like I said, a debate is a good thing but saying these things will never happen without a shred of evidence adds nothing to the debate but hyperbole.
originally posted by: Harvin
Tell me how a computer will not perform a function because its feelings were hurt. Even if it did, it needs a trigger that was programmed by a human. Am I wrong?
The trick is getting the machine to calculate and balance out all the various reward and punishment parameters and still make a decision based on the best available data and its own "feelings" about the situation
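A toy sketch of that balancing act (all the signal names and weight values here are made up for illustration, not any real system): the machine scores each candidate action by a weighted sum of reward and punishment signals, and the weights play the role of its "feelings" about the situation.

```python
def choose_action(actions, weights):
    """Pick the action with the best weighted balance of signals.

    `actions` maps each action name to a dict of raw signals
    (e.g. task_reward, risk); `weights` encodes how much the
    machine "cares" about each signal.
    """
    def utility(signals):
        return sum(weights.get(k, 0.0) * v for k, v in signals.items())
    return max(actions, key=lambda a: utility(actions[a]))

actions = {
    "proceed": {"task_reward": 1.0, "risk": 0.8},
    "wait":    {"task_reward": 0.2, "risk": 0.1},
}
cautious = {"task_reward": 1.0, "risk": -2.0}  # punishment outweighs reward
print(choose_action(actions, cautious))  # a risk-averse machine waits
```

Change the weights and the same data produces a different decision - which is the whole point: the trigger is still programmed, but the behavior emerges from the balance rather than from an explicit if-then rule.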
Around 2002 I attended a small party for Google—before its IPO, when it only focused on search. I struck up a conversation with Larry Page, Google's brilliant cofounder, who became the company's CEO in 2011. “Larry, I still don't get it. There are so many search companies. Web search, for free? Where does that get you?” My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad-auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page's reply has always stuck with me: “Oh, we're really making an AI.”
I've thought a lot about that conversation over the past few years as Google has bought 14 AI and robotics companies. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search contributes 80 percent of its revenue. But I think that's backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter bunny looks like. Each of the 12.1 billion queries that Google's 1.2 billion searchers conduct each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousand-fold more data and 100 times more computing resources, Google will have an unrivaled AI. My prediction: By 2024, Google's main product will not be search but AI.
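The clicks-as-labels idea can be sketched in a few lines (this is purely hypothetical and not Google's actual system): every click on a result is treated as a vote that the result matches the query, so the query log itself becomes training data.

```python
from collections import defaultdict

# click_counts[query][image_id] = how often users picked that image
click_counts = defaultdict(lambda: defaultdict(int))

def record_click(query, image_id):
    """Treat a user's click as a label vote for this query."""
    click_counts[query][image_id] += 1

def best_image(query):
    """Return the image users most often picked for this query."""
    votes = click_counts[query]
    return max(votes, key=votes.get) if votes else None

record_click("easter bunny", "img_017")
record_click("easter bunny", "img_017")
record_click("easter bunny", "img_204")
print(best_image("easter bunny"))  # the most Easter Bunny-looking image wins
```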
originally posted by: ZetaRediculian
Two machines with exactly the same programming and inputs will do exactly the same thing. Machines are predictable.