Virtual computers should be the next biggest thing to unfathomable computing power


posted on Nov, 5 2012 @ 12:00 PM
So last night I read the news thread on the Science & Technology page about the D.O.E. unveiling the newest, most powerful supercomputer. Thread below:

Titan, World's Fastest Supercomputer (20 Petaflops)

I actually had a dream last night about taking Titan and creating a virtual Titan that would have all the same processing power as the real thing. What that would entail, and how it would work in terms of the actual processing power involved, is a bit beyond my understanding, so take this as a basically theoretical post.

In any case, we already have virtual computing out there. So why not "virtualize" Titan, then build a supercomputer that runs a trillion virtual Titans all working in conjunction?

Those knowledgeable in this area may be able to add much more on why this can or can't be done just yet. Please feel free to contribute to the thread.

Another interesting idea would be to create a stem-cell human brain that is virtually connected to computers, and possibly have dozens or more working in conjunction, daisy-chained to each other and all plugged into virtual Titans. I think it's all heading there anyway.

Scientists have already had a rat brain flying a virtual drone, so we are just seeing the beginning of the moral and ethical questions in these directions as well.




posted on Nov, 5 2012 @ 12:12 PM
It takes more computing power to virtualise something than it does, in theory, to run it as its own separate entity (there may be times when I/O transfers are faster inside a VM, but a good fast network will be fine for most things).



posted on Nov, 5 2012 @ 12:16 PM

Originally posted by Maxatoria
It takes more computing power to virtualise something than it does, in theory, to run it as its own separate entity (there may be times when I/O transfers are faster inside a VM, but a good fast network will be fine for most things).

Do you think there is some way to surpass these limits and make virtual computing more efficient and faster than an actual physical computer?



posted on Nov, 5 2012 @ 12:16 PM
IMHO, the next big step in computing will be quantum computing. This is when a single bit of data can be both 1 and 0 simultaneously. Once we reach this stage of the game, computing power will be practically unlimited.



posted on Nov, 5 2012 @ 12:17 PM
Yea, a virtualized computer must get its processing power from somewhere. It is always less powerful than the system it is hosted on. Basically, you would not be gaining any raw power by virtualizing, but you would be spreading Titan's power around.
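To put rough numbers on that, here is a minimal Python sketch of the idea: the host's cores are a fixed budget, and any VMs hosted on it can only divide that budget up. The core count is Titan's approximate CPU core total, used purely for illustration, and the even split is a simplifying assumption (real hypervisors share resources far less rigidly).

```python
# Sketch: a host's resources are a fixed budget that its VMs divide up.
# The core count below is Titan's approximate CPU core total, used only
# for illustration; real hypervisors divide resources far less evenly.

def partition(total_cores, vm_count):
    """Split a host's cores evenly across vm_count virtual machines."""
    per_vm = total_cores // vm_count
    return [per_vm] * vm_count

host_cores = 299008
vms = partition(host_cores, 1024)

# The VMs together can never exceed the host's budget.
assert sum(vms) <= host_cores
```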

I think this Titan thing is an evolution in home computing, because more than likely, in the near future we will not own actual PCs, only boxes that interface with something like Titan, and our "PC" will be hosted on it virtually.

Same for videogame consoles. This is basically a terrible centralization of power.



posted on Nov, 5 2012 @ 12:28 PM
You will never be able to create a VM that is more powerful than the machine it's hosted on. But as already stated, you would be able to create multiple VMs on Titan that would be very powerful indeed.



posted on Nov, 5 2012 @ 12:34 PM
reply to post by Maxatoria
 


Not true.

You can cluster several computers together to act as one very powerful machine.
This way you can spread CPU-intensive tasks out over several computers.
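That idea can be sketched in a few lines of Python, using worker processes on one machine to stand in for cluster nodes. The function names and the sum-of-squares workload are invented for illustration; a real cluster would ship the chunks over a network to separate hosts.

```python
# Minimal sketch of spreading a CPU-intensive job across workers,
# the same idea a cluster applies across whole machines.
from concurrent.futures import ProcessPoolExecutor

def heavy(chunk):
    # Stand-in for a CPU-intensive task: sum of squares over a range.
    return sum(i * i for i in chunk)

def run_spread(n, workers=4):
    """Split the range [0, n) across workers and combine the results."""
    chunks = [range(w, n, workers) for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(heavy, chunks))

if __name__ == "__main__":
    # Same answer as the single-machine version; the work is just spread out.
    assert run_spread(1000) == sum(i * i for i in range(1000))
```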



posted on Nov, 5 2012 @ 12:36 PM

Originally posted by grey580
reply to post by Maxatoria
 


Not true.

You can cluster several computers together to act as one very powerful machine.
This way you can spread CPU-intensive tasks out over several computers.


What about virtual processors, virtual CPUs, virtual quantum computers? The possibilities are honestly endless, in my opinion. I still feel VMs are the way.



posted on Nov, 5 2012 @ 01:01 PM
reply to post by dominicus
 


Well, a VM is the virtualization of an operating system.
What I'm talking about is taking a bunch of servers and clustering them into one giant server.

And of course, you can take those clustered servers and put several VMs on them.



posted on Nov, 5 2012 @ 03:09 PM

Originally posted by grey580
reply to post by dominicus
 


Well, a VM is the virtualization of an operating system.
What I'm talking about is taking a bunch of servers and clustering them into one giant server.

And of course, you can take those clustered servers and put several VMs on them.

What I'm saying is that we should explore ways to make all of this virtual: servers, operating systems, CPUs, processors, quantum computers, clusters, and VMs of VMs.



posted on Nov, 5 2012 @ 04:22 PM
reply to post by dominicus
 

Business does that today. Instead of going out and buying 10 servers for 10 different pieces of software (i.e. Exchange servers, DB servers, etc.), they will just buy one big server, one that has the processing, RAM, and HDD (multiple HDDs) power needed. The company I work for has the same setup. They have three main bad boys (servers), each with four to five one-terabyte drives. Most of the servers we use are virtualized.



posted on Nov, 5 2012 @ 04:40 PM

Originally posted by dominicus
What I'm saying is that we should explore ways to make all of this virtual: servers, operating systems, CPUs, processors, quantum computers, clusters, and VMs of VMs.


Fine fine fine. We GET that. The question is, what hardware are you going to run your virtual CPUs on? There's a bottoming-out to reality here. If you want to bone up on the subject, read about the Universal Turing Machine and the concepts of algorithmic depth and non-computability. It comes down to the basic laws of physics ultimately saying that you can only get so much computational power from a given system. Your idea of simulating a more powerful system is certainly doable, but it will run MUCH more slowly than a physical instantiation of your machine.
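To make the overhead concrete, here is a toy Python sketch (the instruction set and the step counting are invented for illustration): a machine simulated in software produces the same answer as the host, but pays a fetch-and-decode cost for every simulated instruction.

```python
# Toy illustration: a simulated machine does the same work as the host,
# plus bookkeeping for every simulated step.

def native_add(a, b):
    return a + b  # one host operation

def simulated_add(a, b):
    """Add by interpreting a tiny made-up instruction list, counting host steps."""
    program = [("LOAD", a), ("ADD", b), ("HALT", None)]
    acc, steps = 0, 0
    for op, arg in program:
        steps += 1                      # fetch/decode cost per instruction
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            break
    return acc, steps

result, steps = simulated_add(2, 3)
assert result == native_add(2, 3)   # same answer...
assert steps > 1                    # ...but more host work per operation
```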

Basically, you can't put 10 gallons of computing in a 5-gallon jug, but you can always make two trips.

Now, this all applies only to classical computing. Quantum computing has very different rules, but it is still theoretical and not a real thing yet.



posted on Nov, 15 2012 @ 12:02 PM
Linking many computers together to make a big 'cluster' does exist, and has existed for a long time. However, how these clusters work is not at all like a PC.

You don't, for example, just decide to run some compute-intensive task and have it instantly performed hundreds or thousands of times faster than on your desktop. Each node of the cluster acts as a single CPU, and if your job only uses one thread, it will run at exactly the same speed as if you ran it on your PC. The advantage of these systems is heavy parallel computing. 95% of PC users these days, be it for business or for pleasure, would gain very little performance benefit, and I can imagine performance would actually be degraded.

The same can be said for quantum computing. My question for everyone is this... do any of you know how current quantum computers look or operate? In general their usefulness is in performing specific computations, and again, for the end user who wants to play games, watch YouTube, and type the occasional document... you get zero benefit over what the current x86_64 processors give you. Quantum computing is often touted as the next breakthrough that will give us orders-of-magnitude improvements in performance... well, that's because journalists don't understand what the quantum computing people tell them, and furthermore, when one of these labs tries to explain what they are doing, even physicists don't really understand it either... so it is currently a bit of a mystery and can be touted as anything you want it to be.

The only people who benefit from it are people running simulations of some form... be it 3D, video, or raw computation... and the above post is correct... there is a baseline: if you have 4 cores and 4 GB of RAM, the computer will fall over if you try to run 4 machines on it that max out the RAM... and instead of taking out 1 user, you take out 4. This happens in reality, and I've experienced it in cluster computing.

Anyone used X11? I can imagine it would be similar to that, and even on a good fast network it's not exactly lightning fast.



posted on Nov, 15 2012 @ 01:12 PM
reply to post by dominicus
 


You're not even going to break even performance-wise. You're using hardware to simulate hardware. The limit is the hardware performing the simulation.



posted on Nov, 15 2012 @ 04:39 PM
The only way to minimize performance loss when emulating a computer is to somehow give each virtual machine bare-metal access to each hardware component, in a way making the OS itself do the virtualization of multiple computers.

Still, it sounds like it would never be as good as it sounds on paper, and current methods, while leaving huge room for improvement, don't give you a trickle-down effect that an average end user would notice.

It always makes me wonder what people would use extra computational power for. I mean, right now most people who consider themselves enthusiasts don't actually use much of the computational power they spend thousands of dollars on. Even people's ideas about overclocking and performance are 50% correct at best.

A fine example is overclocking RAM... it gives you minimal percentage-level performance increases, and the gain is highly application-dependent. You know that a GPU wastes about 50-60% of its cycles because it has nothing to do most of the time when you are playing games...

So my point is... people doing simulations and massive calculations will benefit a lot from any improvement they get... the end user? Nothing (almost).



In my opinion, the future of computing will first be the scaling of CPU cores... we are already able to make a specialist CPU with 50+ cores on a die, which is the most power-efficient (though probably not yet cost-efficient) way of computing. Back when I was an undergrad, there was a student project to do exactly this, where a few students experimented with putting about 100 cores (very simple, slow ones, equivalent to 386 processors) onto a single die. The next step will be getting performance to scale linearly with the number of cores, along with improving memory controllers to allow the use of high-bandwidth, large-capacity RAM... without it catching fire.
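The "linear scaling" part is the hard bit, and Amdahl's law captures why: only the parallel fraction of a job benefits from more cores. A quick Python sketch (the 95%-parallel figure is just an example workload, not a measurement):

```python
# Amdahl's law: why adding cores only speeds up the parallel fraction of a job.

def speedup(parallel_fraction, cores):
    """Overall speedup when only parallel_fraction of the work scales with cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A 95%-parallel job on 100 cores gains far less than 100x...
assert speedup(0.95, 100) < 20
# ...and a purely serial job gains nothing, as noted earlier in the thread.
assert speedup(0.0, 100) == 1.0
```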




