a reply to: chris_stibrany
I started to respond with "Well, not exactly...", but after I thought about it for a moment, yes, that's exactly why I posted my OP as I did. Hence
the "ruse" comment.
What they're going to do is run a test. So in that context, yes, the net result will be far slower, and this is why they've qualified it the way they
have. But that's not their point. To run the test, NASA will have to ship the data and the software to Google, who will then crunch whatever the data
is and benchmark it, and then they and NASA will compare the results against those from other methods. The comparison won't include the time it took
to move the data and the software, only the time it took to compute. However, notice how they've constructed their argument...
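To see why excluding transfer time matters, here's a hedged back-of-envelope sketch (the dataset size, link speed, and compute time below are illustrative assumptions of mine, not figures from the article): shipping the data can easily dwarf the compute itself.

```python
# Back-of-envelope: wall-clock cost of shipping data to a remote cluster
# versus the compute itself. All numbers are illustrative assumptions.

def transfer_seconds(dataset_bytes: float, link_bits_per_sec: float) -> float:
    """Time to move the dataset over a network link, ignoring overhead."""
    return dataset_bytes * 8 / link_bits_per_sec

# Assumed: a 100 TB dataset pushed over a 10 Gb/s link
dataset = 100e12   # bytes
link = 10e9        # bits per second
move = transfer_seconds(dataset, link)

compute = 3600.0   # assume the remote cluster crunches it in one hour

print(f"transfer: {move / 3600:.1f} h, compute: {compute / 3600:.1f} h")
```

Under these assumptions the data movement takes roughly 22 hours while the benchmark only counts the one hour of compute, which is exactly the qualification being made.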
Their pitch goes: NASA can't come to Google (their stuff is too hard to move), and Google can't come to NASA, so use our service to bridge the gap, to
get your data to us. (Flag #1) Then, if NASA likes what they see, they can do it this way all the time, BUT (Flag #2), as chris_stibrany noted on ATS,
this method is too slow. Soooo, they've got a deal you can't refuse; Google will come to you, replicate the HAL 6000 at your house, and crunch all the
data for you. Of course, Google will have to pay to do this, but they'll still own the technology, including all the storage arrays (because it's
integral to the processing), okay?
I think some are missing the forest for the trees here. Google is only in the data processing business because of the data they can process. In
other words, Google is in the storage business, but as you've correctly observed, they know that the classical differences between storage and compute
get pretty murky when you get into supercomputing because it's all about access times (on a technical level). Traditionally, there has always been
data and compute and then a connection between the two (a pipeline if you will). As compute capability has increased so has the storage capability,
but something else happened which wasn't so obvious (interestingly, this is why I've always said the Cloud was a ruse, but that's another thread).
What happened was storage had to be physically moved closer and closer to the compute. Used to be it could be separated by scores of miles, but then
it had to be in the same building. You start getting into supercompute and it has to be in the same room, and then in the same cabinet, and then in
the same box, and before you know it it's all become one.
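The physics behind that convergence can be sketched in a few lines (the distances below are my own illustrative examples, and signal speed in fiber is assumed to be roughly two-thirds the speed of light): propagation delay alone puts a hard floor under access times, no matter how fast the storage or the network gear gets.

```python
# Round-trip latency floor imposed by signal propagation alone,
# ignoring switching, serialization, and protocol overhead entirely.
# Assumption: signal speed in fiber is about 2/3 of c.

C_FIBER_M_PER_S = 2e8  # meters per second, roughly 2/3 the speed of light

def min_round_trip_ns(distance_m: float) -> float:
    """Best-case round-trip time, in nanoseconds, over a given distance."""
    return 2 * distance_m / C_FIBER_M_PER_S * 1e9

# Illustrative distances for each stage of the storage/compute convergence
for label, meters in [("50 miles away", 80_000),
                      ("same building", 100),
                      ("same cabinet", 2),
                      ("same board", 0.1)]:
    print(f"{label:>14}: {min_round_trip_ns(meters):>12,.1f} ns")
```

At 50 miles the floor is around 800 microseconds per round trip, which at supercomputer clock rates is hundreds of thousands of wasted cycles per access; in the same cabinet it drops to tens of nanoseconds. That gap is why the storage keeps getting pulled into the box.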
Google is smart, and they see this. But they're looking at things from the value of the data, not the value of the technology leap. They want the
data. Then they can mine it, they can parse it, they can sell it, and what they can't sell they want to be so far in the middle of the owner's
business that they cannot be easily extracted or replaced.
So yes, it's all about the data.
Information is power! And data is King.
ETA - My 'not exactly' part at the beginning was going to address differences in the Cloud (i.e. there being lots of different flavors of the Cloud;
there's Cloud compute, Cloud storage, Cloud networking, etc.)
edit on 11/5/2018 by Flyingclaydisk because: (no reason given)