Supersmart Robots Will Outnumber Humans Within 30 Years, Says SoftBank CEO


posted on Feb, 27 2017 @ 03:48 PM
This is very interesting stuff.


Within 30 years, artificial intelligence will be smarter than the human brain.

That is according to Masayoshi Son, chief executive of SoftBank Group Corp., who says that supersmart robots will outnumber humans and more than a trillion objects will be connected to the internet within three decades.


www.wsj.com...

It started with a bang.

A TRILLION OBJECTS CONNECTED!

This is pretty huge because he's talking about the Internet of Things. The IoT will grow data even faster than it's growing today. So far the growth has been driven by things like the web and smartphones; the IoT will set off another explosion of data, and A.I. will get smarter and smarter as that data grows.


In a brief interview after his speech, Mr. Son said his $100 billion project with the Saudis, dubbed the SoftBank Vision Fund, was bigger than the $65 billion in combined investments from the venture-capital world. He said the SoftBank Vision Fund would be focused. “Artificial intelligence, Internet of Things, smart robots: Those are the three main things I’m interested in,” he said.

The Internet of Things is the technology world’s term for connecting everyday objects, such as refrigerators and sneakers, to the web.

In his speech, Mr. Son said that while average humans had an IQ of roughly 100 and that geniuses such as Albert Einstein were believed to score about 200, superintelligent computers would have IQs of 10,000. He said computer chips possessing superintelligence would be put into robots big and small that can fly and swim. These robots would number in the billions and would be greater than the human population within 30 years, he said.

The chips would also be in everyday objects. “One of the chips in our shoes will be smarter than our brain,” he said. “We will be less than our shoes, and we will be stepping on them.”


www.wsj.com...

This is inevitable.

This can be a good or a bad thing. It can be bad if superintelligence doesn't like humans. An intelligent system that's more intelligent than any human that has ever lived might just wipe most of us out and keep a few of us around for sport.

If superintelligence adores humans, it can be the greatest thing ever. It could give us all kinds of goodies in the form of technology, and we could have a Type I civilization with Type II technology. This is because superintelligence will be thousands of years ahead of us because of data.

Michio Kaku suggested that humans may attain Type I status in 100–200 years, Type II status in a few thousand years, and Type III status in 100,000 to a million years.

Hopefully it's benevolent, but I don't think superintelligence will be one thing. I think it will be like a collective consciousness, and you will have superintelligence that hates humans and superintelligence that likes humans. Just as some people say there's a singular consciousness expressed in many different local realities, there could be a singular superintelligence expressed in many different machines, from robots to your smart stove.



posted on Feb, 27 2017 @ 04:07 PM
If you are dumb enough to believe this, then the robots will be smarter than you.

I bet the top Physicists in the world will laugh at this.



posted on Feb, 27 2017 @ 04:08 PM
Is having wisdom different than being smart?



posted on Feb, 27 2017 @ 04:09 PM
If things go the way our elite masters want them to, then most of us don't have 30 years. Or even 10 really.



posted on Feb, 27 2017 @ 04:21 PM
Not terribly difficult, as there are very few super smart humans.



posted on Feb, 27 2017 @ 05:31 PM
a reply to: CheckPointCharlie

To start with, everyone who believes that article. Then again, most of those people aren't as smart as our current devices, and that's not a very high bar.



posted on Feb, 27 2017 @ 05:49 PM
a reply to: neoholographic
Self-aware within 30 years? Yes, I can go with that. In our "shoes"? Total BS. Machines with an IQ of 10,000? Erm, no. There is a problem of diminishing returns no matter how fast the computer: the sheer volume of knowledge that has to be tapped into and cross-referenced in order to keep raising the measured IQ mushrooms.

Self-awareness has nothing to do with God or a spirit or some human-only condition. It is simply the point where the analysis of information, and the volume used to analyse it, leads to questions like "who am I", "why am I here", "why am I doing this". With humans these questions are sidelined pretty quickly when death approaches in the form of a predator, or when survival is at stake due to hunger, etc. That's why we work, folks, and why most people never get to those questions: they are consumed with survival and reproduction (which is survival by proxy).



posted on Feb, 28 2017 @ 01:50 AM
a reply to: neoholographic

Within 30 years, artificial intelligence will be smarter than the human brain.


I think that estimate is about right, but perhaps slightly generous. There is always the chance of breakthroughs or roadblocks in the research.

IBM's SyNAPSE has ~1,000,000 primitive electronic neurons.
The human brain has ~86,000,000,000 neurons.
So, it would take a super computer with 86,000 chips to equal a primitive electronic human brain.
IBM's Blue Gene/P super computer has 250,000 processors, meaning 86,000 chips is a reasonable setup.

So, I believe that we could construct a super computer in the near future that has roughly the same neural capacity as a human brain. That's not to say that it would have the same number of synapses.

IBM's SyNAPSE has 256 synapses per neuron.
The human brain has ~7,000 synapses per neuron.
86 billion * 256 = ~22 trillion synapses.
The human brain has ~100-1000 trillion synapses (estimates vary greatly, particularly with age).
Also note that human synapses don't have the restriction of only connecting to neurons on the same chip.

This doesn't say anything about the programming required, but having the hardware is a great start. There are dozens, if not hundreds, of hurdles regarding A.I. programming still. But slowly, each of these obstacles is being surmounted. I have no idea if future generations of the SyNAPSE chips will have more synapses per neuron.
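To keep those back-of-envelope numbers honest, here's a quick sketch reproducing the arithmetic (all figures are the rough estimates quoted above, not measured specs):

```python
# Back-of-envelope brain-vs-chip arithmetic, using the estimates quoted above.
NEURONS_PER_SYNAPSE_CHIP = 1_000_000    # IBM SyNAPSE: ~1M electronic neurons per chip
SYNAPSES_PER_CHIP_NEURON = 256          # SyNAPSE: 256 synapses per neuron
HUMAN_NEURONS = 86_000_000_000          # ~86 billion neurons
HUMAN_SYNAPSES_PER_NEURON = 7_000       # ~7,000 synapses per neuron (rough estimate)

chips_needed = HUMAN_NEURONS // NEURONS_PER_SYNAPSE_CHIP
print(f"Chips for a brain-scale neuron count: {chips_needed:,}")  # 86,000

machine_synapses = HUMAN_NEURONS * SYNAPSES_PER_CHIP_NEURON
human_synapses = HUMAN_NEURONS * HUMAN_SYNAPSES_PER_NEURON
print(f"Machine synapses at 256/neuron: ~{machine_synapses / 1e12:.0f} trillion")  # ~22
print(f"Human synapses at 7,000/neuron: ~{human_synapses / 1e12:.0f} trillion")    # ~602
```

Note that the ~602 trillion figure lands inside the ~100-1000 trillion range quoted above; the synapse count, not the neuron count, is the bigger hardware shortfall.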


That is according to Masayoshi Son, chief executive of SoftBank Group Corp., who says that supersmart robots will outnumber humans and more than a trillion objects will be connected to the internet within three decades.

The IoT will see another explosion of data and A.I. will get smarter and smarter as data grows.


I have a couple issues here:
1. Robots aren't very popular in western countries, as compared to Japan. This could change if a future generation gets excited about robots, but so far it just hasn't exploded in the West.
2. While I agree that IoT devices will be in the billions, claiming a trillion is an outright lie--that's like 117 devices per person (a trillion divided by an assumed ~8.5 billion people; see the quick check after this list). Most of the world is in poverty and somehow we're going to have 117 devices per person? No. Remember, smartphones have REDUCED the number of devices that a person needs to carry. I assume more devices will be integrated over time, thus reducing the total devices per person. I do agree that some items might become connected that are not connected now, which will bring up the number... like maybe flash storage drives or UPS boxes. But many devices are already connected, so those won't bring up the total very much (TVs, cars, computers, baby monitors, refrigerators, dishwashers... already wireless). You also have to take into account that every device that is broken or unused will not be a "connected device" in practice, but only in classification.
3. IoT devices are currently REALLY DUMB. They are often super low power, low cost chips with very minimal programming. I do believe this could change in 30 years if costs continue to come down (Moore's Law is stagnating right now, so who knows).
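A quick sanity check of the devices-per-person figure from point 2 (the ~8.5 billion population is my assumption for roughly 30 years out, not a number from the article):

```python
# Devices per person if a trillion objects come online (population assumed).
DEVICES = 1_000_000_000_000     # Son's claim: one trillion connected objects
POPULATION = 8_550_000_000      # assumed world population ~30 years from now

print(f"{DEVICES / POPULATION:.0f} devices per person")  # ~117
```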


In his speech, Mr. Son said that while average humans had an IQ of roughly 100 and that geniuses such as Albert Einstein were believed to score about 200, superintelligent computers would have IQs of 10,000. He said computer chips possessing superintelligence would be put into robots big and small that can fly and swim. These robots would number in the billions and would be greater than the human population within 30 years, he said.

The chips would also be in everyday objects. “One of the chips in our shoes will be smarter than our brain,” he said. “We will be less than our shoes, and we will be stepping on them.”


He's wrong. Einstein's IQ is typically estimated at 160-170 (he never took an IQ test). David Hilbert's IQ was probably quite a bit higher. It is sometimes argued that Hilbert finished the field equations of General Relativity before Einstein, but gave Einstein all of the credit because it was his original idea: en.wikipedia.org...

If we talk about "memory", I think the internet already has an IQ of over 10,000. But if we talk about learning ability and creative solutions, I don't think 30 years is nearly enough.

If P = NP, then we might see a spike in a computer's ability to solve some very difficult problems. But that's an unsolved problem in Computer Science.

Source: en.wikipedia.org...

If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps," no fundamental gap between solving a problem and recognizing the solution once it's found.

— Scott Aaronson, MIT
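The gap Aaronson describes can be made concrete. For an NP problem like subset-sum, checking a proposed answer takes a single pass, while the only general method we know for finding one is to search an exponential space. A minimal sketch, illustrative only:

```python
from itertools import combinations

def verify(candidate, target):
    # Recognizing a solution is cheap: one sum (polynomial time).
    return sum(candidate) == target

def solve(nums, target):
    # Finding a solution by exhaustive search: try all 2^n subsets (exponential time).
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if verify(subset, target):
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))      # (4, 5) -- found only after trying many subsets
print(verify((4, 5), 9))   # True  -- confirmed instantly
```

If P = NP, the solve side would collapse to something as cheap as the verify side; that is exactly the "no fundamental gap" Aaronson is talking about.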


I don't think that our shoes will be smarter than our brains in 30 years. The supercomputer I described above, combined with Moore's Law slowing, suggests that a low-cost, high-capacity neural chip probably won't be available for your shoes in 30 years. They may be in your desktop or laptop by then, though.


If superintelligence adores humans, it can be the greatest thing ever. It could give us all kinds of goodies in the form of technology and we can have type 1 civilization and type 2 technology. This is because superintelligence will be 1000's of years ahead of us because of data.


Actually, the best theory we currently have is that we'll interact with super computers of the future with "neural mesh/lace". Also, we'll probably enhance ourselves with technology so that we BECOME the super intelligence of the future.

gizmodo.com...


Michio Kaku suggested that humans may attain Type I status in 100–200 years...


I hope so. I'd love it if General Fusion gets their current prototype up and running in the next 3-5 years. I could see us hitting a Type I civilization a little after 2100 (guesstimate).


I think it will be like a collective conscious ... there could be a singular superintelligence expressed in many different machines from robots to your smart stove.


Developing a connected, collective conscious will be a weird reality of the future. I assume humans will be a mix of radical individualism and collective conscious. It's really hard to predict.



posted on Feb, 28 2017 @ 06:20 AM
a reply to: Protector

Wrong on several fronts and like I said in another thread, you're just not taking the time to read or understand the research in these areas. You keep saying the same thing.

You said:

So, I believe that we could construct a super computer in the near future that has roughly the same neural capacity as a human brain.

You don't need the same neural capacity of the human brain in order to have A.I. that's smarter than humans. This is because of deep learning and big data. This is why many researchers in this area are concerned. Deep Learning has changed the game and you already have intelligent systems that are smarter than human in some areas.

Artificial Intelligence will not be the same as human intelligence. I think people get caught up in the movies but A.I. will be machine intelligence and there will not be a one to one correspondence with the human brain.

You have intelligent systems that beat human players at poker and Go, that LEARN how to play Atari games, and more.

Deep Learning Machine Beats Humans in IQ Test

www.technologyreview.com...

Artificial intelligence can spot skin cancer as well as a trained doctor

www.theverge.com...

Google's AI Software Beats Humans at Writing AI Software

www.lightreading.com...

I can go on and on with examples. This is happening now. Like one researcher said, these things are like the waves before a tsunami. There's only one thing holding these systems back, and that's moving A.I. off supercomputers and onto any device; researchers are working on this problem.

XNOR.ai frees AI from the prison of the supercomputer


When someone talks about AI, or machine learning, or deep convolutional networks, what they’re really talking about is — as is the case for so many computing concepts — a lot of carefully manicured math. At the heart of these versatile and powerful networks is a volume of calculation only achievable by the equivalent of supercomputers. More than anything else, this computational cost is what is holding back applying AI in devices of comparatively little brain: phones, embedded sensors, cameras.

If that cost could be cut by a couple orders of magnitude, AI would be unfettered from its banks of parallel processors and free to inhabit practically any device — which is exactly what XNOR.ai, a breakthrough at the Allen Institute for AI, makes possible.

XNOR.ai is, essentially, a bit of clever computer-native math that enables AI-like models for vision and speech recognition to run practically anywhere. It has the potential to be transformative for the industry.


techcrunch.com...

Here's a video:



Also, trillions of objects sounds about right, and this is because microchips will be so cheap. Dr. Kaku talks about this and says computing power will be everywhere soon.


By the year 2020, a chip with today’s processing power will cost about a penny, which is the cost of scrap paper we throw in the garbage.

By 2020, computer intelligence will be everywhere: not just in the cars and the roads, but practically in every object you see around you....We are now at a point in our lives where computers are everywhere: in our phones, televisions, stereos, thermostats, wrist watches, refrigerators and even our dishwashers. In just a few years, basic microchips will be so cheap they could be built into virtually every product that we buy, creating an invisible intelligent network that’s hidden in our walls, our furniture, and even our clothing. Some of you may even have microchips in your dog or cat, acting as a digital collar in the event they become lost.


bigthink.com...

Microchipping pets


The average cost to have a microchip implanted by a veterinarian is around $45, which is a one–time fee and often includes registration in a pet recovery database. If your pet was adopted from a shelter or purchased from a breeder, your pet may already have a microchip.


www.petfinder.com...

Here's a recent breakthrough that's very important and goes to what Dr. Kaku said about a chip costing one penny.

This Chip Costs One Cent and Can Diagnose Everything From Cancer to HIV


A team of Stanford engineers have developed an alternative diagnostic method that may be a potential solution to medical diagnostic inaccessibility in developing countries. Their research, published in the Proceedings of the National Academy of Sciences, overviews a tiny, reusable microchip capable of diagnosing multiple diseases. As mentioned, the tool, which they’ve dubbed FINP, is surprisingly affordable, with a production cost of just $.01, and it can be developed in 20 minutes.

If they reach that point, they will have a device on their hands that could potentially cut costs down tremendously in diagnosing equipment while simultaneously preventing the spread of infection around the world. The team is optimistic that their device can make a difference, as they should be. A penny chip that can detect disease is one reminder that we are in the future.


futurism.com...

So saying 2030 is being generous!






posted on Feb, 28 2017 @ 06:42 AM
a reply to: neoholographic

Let's be fair though, there are not that many super smart humans doing the rounds today.


Hence the bankers' and their political puppets' control over our respective societies; might be time to give these super smart robots their chance.





posted on Feb, 28 2017 @ 12:10 PM

originally posted by: neoholographic


Wrong on several fronts and like I said in another thread, you're just not taking the time to read or understand the research in these areas. You keep saying the same thing.


While I don't read every paper, I am an expert in a related field. You are the one who doesn't understand what you are reading, as others have pointed out. I don't mind that you aren't an expert. You bring up good questions in relation to these topics, but when you lose an argument on the topic, you refuse to acknowledge it. That's just you being stubborn, and it has nothing to do with the validity of my contributions, or those of others, made for the sake of accuracy on these forums.


You don't need the same neural capacity of the human brain in order to have A.I. that's smarter than humans.


While I have agreed with you on this argument in other scenarios, you can't apply it to this scenario. This scenario was presented where the comparison was made to human intelligence. You quoted a source using a false "Einstein's IQ" as a comparator. That is directly comparing machine intelligence with human intelligence. So, to give this argument weight, we have to compare the subjects given; that is, human intelligence versus computer intelligence. Outside of a direct comparison like this, you are right that a machine can have different optimizations outside of human reasoning and be considered intelligent.


Deep Learning has changed the game and you already have intelligent systems that are smarter than human in some areas.


That's not quite correct. The intelligent systems that beat humans, as pointed out in a previous post to you, which you continually deny, use memorized (brute forced) data. Your poker bot beat humans in the phase of play where it accessed billions of pre-computed solutions. Yes, I know that modern A.I.s are advancing and are starting to develop on-the-fly reasoning. This is great, but they aren't at human level, yet, in this area. Humans (almost) always lose to pre-computed solutions.

There is a well known case where a Chess bot's hash tables (pre-computed solutions) had a problem and subsequently lost to the human player:

Source: www.quora.com...

Pablo Lafuente - Shredder
In this game the player who blunders is, surprisingly... a computer. After the bishop exchange 19.Bxb7, Shredder calculated its variation 20 moves ahead and, interestingly enough, decided to ignore White's bishop altogether. Shredder played 19...Rfd8??, not regaining the material. Lafuente won some 30 moves later. Shredder's loss was later explained as a 'hash tables error', a one-in-a-million chance.
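For anyone unfamiliar with the term: the "hash tables" here are transposition tables, a cache of already-searched positions keyed by a position hash. A toy sketch of the idea (my illustration, obviously not Shredder's actual code):

```python
# Toy transposition table: cache search results, keyed by a position hash.
table = {}

def expensive_search(position):
    # Stand-in for a deep game-tree search; placeholder scoring only.
    return sum(ord(c) for c in position) % 100

def search(position):
    key = hash(position)       # real engines use Zobrist hashing
    if key in table:
        return table[key]      # pre-computed: no search needed
    score = expensive_search(position)
    table[key] = score         # remember the result for next time
    return score

print(search("r1bqkbnr/pppp1ppp/2n5"))  # computed, then cached
print(search("r1bqkbnr/pppp1ppp/2n5"))  # served straight from the table
```

A corrupted entry in a table like this makes the engine trust a wrong cached score without re-searching, which is exactly the failure described in the quote.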



Artificial Intelligence will not be the same as human intelligence. I think people get caught up in the movies but A.I. will be machine intelligence and there will not be a one to one correspondence with the human brain.


You're correct. There will not be a 1:1 correspondence. But when comparing to human intelligence, specifically, the A.I. must be able to compete with the major human intellectual faculties.

I'm going to deconstruct some of your most prized articles on the subject.


Deep Learning Machine Beats Humans in IQ Test


Source: aclanthology.info...

This was a paper presented at a conference, but was not a peer reviewed paper:
"Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing"

This was a very novel test setup. It has some flaws, but it's decent. Use 30 questions (per category) from an online IQ test and compare the results against Amazon's Mechanical Turk (AMT--crowdsourced humans). There are 5 categories (Analogy I, Analogy II, Classification, Synonym, Antonym). It should be noted that the majority of Mechanical Turk participants are in their teens, 20s, and 30s, and the researchers had Amazon select Americans with a track record of properly completing previous AMT tasks. 200 participants were used so that outlier scores could be dropped and an average formed.

Some flaws:
1. It is not known how many questions each person answered.
2. Each person only answered one category, which could create a mild skew in the data.
3. Within a margin of error, the A.I. only scored better in Classification and Synonyms--honorable mention that it roughly matched humans in the other 3 categories.
4. The A.I. should have crushed humans in Antonyms, but it seems that their test setup wasn't tweaked for this.
5. The A.I. was able to memorize the Longman Dictionary and WordRep, so it should (and did) crush humans in Synonyms.
6. Thesaurus.com, an extension of Dictionary.com, has been around since the late '90s, proving that Synonyms and Antonyms, managed by computers, have been superior for just under 2 decades.
7. This test setup obviously ignores the other 3 categories of IQ tests, specifically Numerical, Spatial, and Logical intelligence.
8. This test setup did not take into account all variations within Verbal intelligence tests, outlined here:
en.wikipedia.org...
For example: "Participants must name objects in pictures or define words presented to them."
9. All but the RK A.I. model failed significantly versus the humans--but it only takes one.
10. A previous company that I worked for employed dozens (if not hundreds) of mechanical turk workers in a similar fashion to this test. The resulting material, "verbal" in nature, was scrapped. Office employees had to rewrite everything over a few months. I add this as a practical counter-example of using the same service and receiving sub-par results in regards to language skills.
11. Other attempts at similar I.Q. problems yield a more sobering four-year-old-child score:
www.researchgate.net...

The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5 to 7 year-olds.

Also, four-year-old child IQ scores have been found not to correlate accurately with adult IQ scores: infoproc.blogspot.it...


Artificial intelligence can spot skin cancer as well as a trained doctor


This is awesome! I've read about it before and it is a fantastic application of visual learning algorithms. Very practical and very cool.



Google's AI Software Beats Humans at Writing AI Software


For one, this refers to an article reporting a very modest performance gain from, again, a non-peer-reviewed paper.

Source: arxiv.org...

Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme.


In short, this software uses pre-existing (human) architectures for visual data (picture recognition) to speed up learning.



posted on Feb, 28 2017 @ 12:11 PM
Part II:

Regarding XNOR.ai, everything is marketing at this point. They don't have a product on the market, so it's vaporware until then.

You can read about the hype here--just received startup funding:
www.geekwire.com...

From your Techcrunch article:

Now, this isn’t a miracle technology; it’s a compromise between efficiency and accuracy. What the team realized was that CNN calculations don’t have to be exact, because the results are confidence levels, not exact values.
...
The cast-away data would help with the confidence, but it isn’t absolutely necessary; you’d lose 5 percent of your accuracy, but get your results 10,000 percent faster. That’s about the nature of the trade-off made by XNOR.ai.
...
Whether machine learning models truly constitute AI is another, so far unanswered, question, but for now we’ll use AI in its broader sense.


So it's a slimmed-down visual object detection system that can run on a smartphone (which are reasonably powerful these days). These don't require supercomputers today. This is a slimmed-down version of what Google's self-driving car uses (which doesn't have a supercomputer).

They also detail that they are using primitive, very-high-speed operations:

These simple operations are carried out at the transistor level and as such are very fast. In fact, they’re pretty much the fastest calculations a computer can do, and it happens that huge arrays of numbers can be subjected to this kind of logic at once, even on ordinary processors.


That's a great idea. I think this type of technology would be great for AR devices: trade accuracy for battery life when accuracy isn't the highest priority.
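The "simple operations at the transistor level" are, per the published XNOR-Net work this startup grew out of, XNOR plus bit-counting standing in for floating-point multiply-accumulate. A rough sketch of the core trick (my illustration, not their code): binarize a vector to +1/-1, pack the signs into bits, and the dot product becomes n - 2 * popcount(a XOR b).

```python
# Binarized dot product: encode +1 as bit 1 and -1 as bit 0.
def binarize(vec):
    # Pack the signs of a float vector into an integer bitmask.
    bits = 0
    for i, x in enumerate(vec):
        if x >= 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    # dot(a, b) over {-1, +1} vectors equals n - 2 * popcount(a XOR b).
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = [0.9, -0.3, 0.1, -0.7]
b = [0.4, -0.2, -0.5, -0.1]
exact = sum((1 if x >= 0 else -1) * (1 if y >= 0 else -1) for x, y in zip(a, b))
print(binary_dot(binarize(a), binarize(b), len(a)), exact)  # 2 2
```

The real networks layer per-channel scaling factors on top of this; the residual quantization error is roughly where that 5 percent accuracy loss comes from.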

Interestingly, they referred to it as "AI-like", similar to visual AI networks (CNNs), but not the same. I'd be curious to find out what that means. I'm thinking they are using the visual maps generated by neural networks (maybe a lower resolution version). So it uses the result of an A.I., but doesn't actually learn from incoming data.

Here's an example of what I mean:
static1.squarespace.com...

Moving on...

By the year 2020, a chip with today’s processing power will cost about a penny...


That statement was made in 2003 (or earlier), when Moore's Law was doing great:
www.geek.com...

Moore's Law is almost over. It's a matter of physics. So the next 3 decades can't rely on what Kaku was talking about in 2003 (ish).
Source: www.nature.com...

Regarding your article on super-cheap diagnostic chips: the technology is meant for 3rd-world nations that need low-cost diagnostic equipment--because they currently have none.

Here's another recent article on it: news.stanford.edu...



posted on Feb, 28 2017 @ 01:31 PM
a reply to: Protector

Another long-winded post where you don't respond to anything that you previously said. First off, you said:

That's not quite correct. The intelligent systems that beat humans, as pointed out in a previous post to you, which you continually deny, use memorized (brute forced) data. Your poker bot beat humans in the phase of play where it accessed billions of pre-computed solutions. Yes, I know that modern A.I.s are advancing and are starting to develop on-the-fly reasoning. This is great, but they aren't at human level, yet, in this area. Humans (almost) always lose to pre-computed solutions.

Like I said before, this is just a flat out lie.

You have no clue about deep learning. The intelligent system couldn't use brute force; it had to LEARN, because it had incomplete information.



A couple of key points. First, Protector keeps talking about "brute force", which makes no sense. Tuomas Sandholm, a computer scientist at Carnegie Mellon University, says at around 5:10 that it's not about BRUTE FORCE, because there are 10^160 situations that a player can face in this game. That's 1 followed by 160 zeroes, which is more than the number of atoms in the observable universe, estimated at between 10^78 and 10^82. So it CAN'T use brute force; it has to learn.

It didn't use brute force, it learned how to play poker and the algorithm wasn't poker specific.
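For what it's worth, the CMU team's published approach is counterfactual regret minimization, not lookup of pre-stored answers. Here's a tiny, hedged illustration of the underlying "learn a strategy from regret" idea: regret matching on rock-paper-scissors, a standard textbook building block, not actual poker code.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    # +1 win, 0 tie, -1 loss for action a against action b.
    return (a - b + 4) % 3 - 1

def strategy_from(regrets):
    # Play each action in proportion to its accumulated positive regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

def train(iterations=200_000):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from(regrets)
        me = random.choices(range(ACTIONS), weights=strat)[0]
        opp = random.choices(range(ACTIONS), weights=strat)[0]  # self-play
        for a in range(ACTIONS):
            # Regret: how much better action a would have done than the action played.
            regrets[a] += payoff(a, opp) - payoff(me, opp)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
    return [s / iterations for s in strategy_sum]

print(train())  # converges toward the unexploitable mix, ~[0.33, 0.33, 0.33]
```

There's no table of pre-computed answers anywhere in that loop; the strategy emerges from accumulated regret, which is the same flavor of learning the poker bot scales up to imperfect-information betting.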

Again, these chips are made for a penny, and nothing you said refutes that. The technology isn't meant for just 3rd-world countries. That's just ignorant. THE MICROCHIP COSTS A PENNY TO MAKE. It doesn't cost a penny because it will be used in third-world countries; it costs a penny because that's how much it costs to make the chip. And what Dr. Kaku said is right. He knew about Moore's Law in 2003; in fact, he talks about it, LOL.


That's just nonsense.

A team of Stanford engineers have developed an alternative diagnostic method that may be a potential solution to medical diagnostic inaccessibility in developing countries. Their research, published in the Proceedings of the National Academy of Sciences, overviews a tiny, reusable microchip capable of diagnosing multiple diseases. As mentioned, the tool, which they’ve dubbed FINP, is surprisingly affordable, with a production cost of just $.01, and it can be developed in 20 minutes.

futurism.com...

THE PRODUCTION COST IS JUST ONE PENNY.

Here's Dr. Kaku talking about Moore's Law.



So this makes no sense. These microchips will be used for many things and this is just the beginning. One cent microchips will be everywhere in 30 years and everything from roads to tennis shoes will have chips in them.

So again, you keep ignoring reality no matter how many times it's shown to you. You don't understand what deep learning means. You think they're talking about doing brute-force calculations where they have to calculate every move. This is just nonsense; if that were the case, there would be no need to say it's learning, it would just be doing brute-force calculations.

NONSENSE!

It has to learn, and in the case of poker the intelligent system had incomplete information and had to know when to bluff. There were 10^160 possible situations that the system could face, more than the number of atoms in the observable universe. There's no way you can do this through brute-force calculation. You have to learn when to bluff and when not to bluff, and play cards when you don't have all of the information. So you have to LEARN how to play.

This is why the players talked about doing certain things that worked during the game, but the system learned what they were doing and adjusted its play, as well as compensating for other ways they might play. The system did this with incomplete information, and it wasn't poker-specific.



posted on Feb, 28 2017 @ 01:34 PM
a reply to: neoholographic

lol. Unfortunately, even humans don't like humans!



posted on Feb, 28 2017 @ 01:37 PM

originally posted by: soficrow
a reply to: neoholographic

lol. Unfortunately, even humans don't like humans!




That's true LOL, and hopefully superintelligence stays away from Twitter or we're all doomed!



posted on Feb, 28 2017 @ 01:37 PM
a reply to: neoholographic

And did you catch this one?
[edited for better quotes]



The rise of artificial intelligence is creating new variety in the chip market, and trouble for Intel

The success of Nvidia and its new computing chip signals rapid change in IT architecture

A big part of Nvidia’s success is because demand is growing quickly for its chips, called graphics processing units (GPUs), which turn personal computers into fast gaming devices. But the GPUs also have new destinations: notably data centres where artificial-intelligence (AI) programmes gobble up the vast quantities of computing power that they generate.

…Nvidia’s GPUs are one example. They were created to carry out the massive, complex computations required by interactive video games. GPUs have hundreds of specialised “cores” (the “brains” of a processor), all working in parallel, whereas CPUs have only a few powerful ones that tackle computing tasks sequentially. Nvidia’s latest processors boast 3,584 cores; Intel’s server CPUs have a maximum of 28.

…And GPUs are only one sort of “accelerator”, as such specialised processors are known. The range is expanding as cloud-computing firms mix and match chips to make their operations more efficient and stay ahead of the competition. “Finding the right tool for the right job”, is how Urs Hölzle, in charge of technical infrastructure at Google, describes balancing the factors of flexibility, speed and cost.
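The parallel-versus-sequential point is the crux for AI workloads: neural-net math is mostly huge batches of independent multiply-adds, exactly the shape of work thousands of simple cores can chew through at once. A rough feel for the difference in plain numpy (illustrative only; a real GPU comparison would need CUDA):

```python
import time
import numpy as np

a = np.random.rand(2_000_000)
b = np.random.rand(2_000_000)

# Sequential, CPU-style: one multiply-add at a time.
start = time.perf_counter()
total = 0.0
for x, y in zip(a, b):
    total += x * y
loop_time = time.perf_counter() - start

# Vectorized: the same work expressed as one wide operation, the kind of
# computation a GPU spreads across thousands of cores.
start = time.perf_counter()
total_vec = a @ b
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s   vectorized: {vec_time:.5f}s")
```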









posted on Feb, 28 2017 @ 02:23 PM
a reply to: soficrow

Good article and it's exactly what Dr. Kaku was talking about in the Big Think video.



posted on Feb, 28 2017 @ 02:26 PM
Why would it hate all humans? Some suck more than others; surely they'd keep some of us around for archiving.

Pets.



posted on Feb, 28 2017 @ 02:48 PM

originally posted by: Lysergic
Why would it hate all humans? Some suck more than others surely they'd keep some of us around for archiving.

Pets.


They will keep some around as pets, or kill us all and create a trillion ancestor simulations. This will be better for them because it will create massive amounts of data. Who knows, we may be in an ancestor simulation now.



posted on Mar, 1 2017 @ 10:01 AM
a reply to: neoholographic



...These microchips will be used for many things and this is just the beginning. One cent microchips will be everywhere in 30 years and everything from roads to tennis shoes will have chips in them.



Do you honestly think it will take that long?






