Google switches to brand new network protocols, and no one noticed


posted on Apr, 22 2013 @ 10:31 PM
reply to post by okachobi
 


I must have missed it last year. Good catch.


xploder




posted on Apr, 23 2013 @ 05:04 AM
reply to post by XPLodER
 


Great thread and information. This had escaped me, partly because of the tragedy of Boston's terrible attacks, but mostly because I was not well that week.

Not long ago I had actually been pondering what I deem a second Renaissance occurring in the 21st century. It is a technological and societal one for sure. Most people I know who are aware of the singularity theory believe it will occur because of this. It has seemed to me that Google is one of the main innovators, key players, and leaders in this second Renaissance of sorts. Just my 2 cents on the matter.



posted on Apr, 23 2013 @ 03:44 PM
Most of what Google seems to be doing is a two-level job:

1. Separate the control of the physical devices from the manufacturer's built-in control system, putting it in the hands of the people looking after the network, so you point and you click and the job's done (this should have been doable with SNMP, but it never really worked).

2. The physical slinging of data down the pipes is done differently, but that's no surprise: modern links have such low error rates and latency compared to when most of the protocols were formalized, and being Google they can do their own thing, since it's their network and they don't have to spend years arguing with a standards body. But they still have to present it in standard TCP/IPv4/v6 format at the gateway.

Having worked on mainframes since the days of 300bps modems, I can say there have always been these sorts of moments where someone gives the technology a good kicking into touch; it takes a while, but eventually everyone says "hmm... that'll work" and the standards people rubber-stamp it.
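The two-level split described above can be sketched in code. This is a minimal, purely illustrative model (all class and field names are hypothetical, not any vendor's actual API): a central controller holds the topology and pushes simple match/action rules down to switches, and the switches do nothing but look those rules up.

```python
# Hypothetical sketch of control/data-plane separation: the controller
# (control plane) decides, the switches (data plane) just match and forward.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # (match, action) pairs pushed by the controller

    def install_rule(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, packet):
        # Data plane: apply the first rule whose match fields fit the packet.
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"   # no rule yet: punt to the control plane


class Controller:
    """Control plane: sees the whole topology and sets policy 'on the fly'."""

    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def set_path(self, dst_prefix, hops):
        # One point-and-click policy becomes one rule per switch on the path.
        for sw_name, out_port in hops:
            self.switches[sw_name].install_rule(
                match={"dst": dst_prefix}, action=f"out:{out_port}")


s1, s2 = Switch("s1"), Switch("s2")
ctl = Controller([s1, s2])
ctl.set_path("10.0.0.0/8", [("s1", 2), ("s2", 1)])

print(s1.forward({"dst": "10.0.0.0/8"}))      # out:2
print(s1.forward({"dst": "192.168.0.0/16"}))  # send-to-controller
```

The point of the sketch is the division of labour: the switches never compute anything, so replacing the vendor's control software only means teaching boxes to accept rules from outside.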



posted on Apr, 23 2013 @ 04:01 PM

Originally posted by Maxatoria
Most of what Google seems to be doing is a two-level job:


I think you are correct: the transport layer.


1. Separate the control of the physical devices from the manufacturer's built-in control system, putting it in the hands of the people looking after the network, so you point and you click and the job's done (this should have been doable with SNMP, but it never really worked).


It will be much easier to see the topology of the network and to set policy "on the fly".



2. The physical slinging of data down the pipes is done differently, but that's no surprise: modern links have such low error rates and latency compared to when most of the protocols were formalized, and being Google they can do their own thing, since it's their network and they don't have to spend years arguing with a standards body. But they still have to present it in standard TCP/IPv4/v6 format at the gateway.


Yes, it still has to "share" the internet with other users, so their protocol must be "compliant" with existing protocols and not hog links. The interesting part is where the routing instructions can live: in the packet header, or "off the back" of the packet data.
In the interior of the SDN the packets can be routed on content;
on the open internet the packets can be routed by header.
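That interior/exterior split can be sketched as a single decision function. This is a hypothetical illustration, not Google's actual logic: inside the SDN a forwarding decision may key on a content tag carried with the packet, while at the gateway it falls back to plain header fields like any IP router.

```python
# Illustrative only: content-based routing inside the SDN, header-based
# routing on the open internet. All tags and destinations are made up.

def route(packet, inside_sdn):
    if inside_sdn and "content_tag" in packet:
        # Interior: route on what the payload *is* (video, search, ...).
        return {"video": "cdn-cluster", "search": "index-cluster"}.get(
            packet["content_tag"], "default-cluster")
    # Exterior (or untagged): plain header-based next hop.
    return "gateway" if packet["dst"].startswith("10.") else "peering-link"


print(route({"dst": "10.1.2.3", "content_tag": "video"}, inside_sdn=True))  # cdn-cluster
print(route({"dst": "10.1.2.3"}, inside_sdn=False))                         # gateway
print(route({"dst": "8.8.8.8"}, inside_sdn=False))                          # peering-link
```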


Having worked on mainframes since the days of 300bps modems, I can say there have always been these sorts of moments where someone gives the technology a good kicking into touch; it takes a while, but eventually everyone says "hmm... that'll work" and the standards people rubber-stamp it.


I think you are right, and this is one of those times where the technology is changing in a fundamental way. The changes, although not noticeable at the end-user level, will change the face of IT, administration, and the delivery of services.

internet 2.0, backwards compatible with internet 1.0


xploder



posted on May, 6 2013 @ 02:59 AM

ORBX streaming tech could revolutionize computing



Hosting the event was Autodesk Chief Technology Officer Jeff Kowalski, who boldly claimed that the impact of ORBX.js will be enormous. "We want you to be able to do hardware downgrades," he said. "For our customers, up until now they've had to choose the hardware to match the software. And that's not the case anymore with this."


news.cnet.com...

this will change the net...

xploder

edit on 6/5/13 by XPLodER because: (no reason given)



posted on May, 6 2013 @ 06:00 AM
We're going back to the mainframe days again, when everything was done centrally and, when ready, the results were shipped to your dumb terminal.

The thing with this ORBX is that it'll all be rental. Fancy a quick game of something? You'll have to sub for $7.99 for a 3-month rental, and there will be nothing to own. It still requires a good internet link, which in some places could be beaten by carrier pigeons.
The placement of these server clusters is also going to be important: having to connect to a USA game server from rural Siberia is going to affect the gameplay a lot, so positioning the clusters at the right points in the internet's structure will add to the setup costs. And with any downtime potentially affecting tens of thousands of users, it's going to be hard to maintain without some serious work behind the scenes.
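The distance point can be put in rough numbers. Light in fibre travels at about 200,000 km/s, which puts a hard floor under round-trip time no matter how good the link is; the distances below are approximate, back-of-the-envelope figures.

```python
# Best-case round-trip time from raw distance alone: no routing, no queuing.
FIBRE_KM_PER_S = 200_000          # light in glass is roughly 2/3 of c

def min_rtt_ms(distance_km):
    """There-and-back at fibre speed, in milliseconds."""
    return 2 * distance_km / FIBRE_KM_PER_S * 1000

# ~8,000 km from central Siberia to a US West Coast data centre (rough figure)
print(round(min_rtt_ms(8000)))    # 80 ms floor before any real-world overhead
# versus a cluster placed ~500 km away
print(round(min_rtt_ms(500)))     # 5 ms
```

An 80 ms floor is already borderline for fast-twitch gaming before a single router or encoder adds its share, which is why cluster placement dominates the economics.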

There is OnLive available in the UK, which does this, but it's never really taken off much.



posted on May, 6 2013 @ 05:14 PM

Originally posted by Maxatoria
We're going back to the mainframe days again, when everything was done centrally and, when ready, the results were shipped to your dumb terminal.


Only for web apps that require a lot of rendering or server-side grunt;
I think it folds quite nicely into virtual boxes and environments.
Imagine your desktop cloud-streaming at 30fps in HD,
or a web-based communications platform.
Cloud gaming without a server?
Ad hoc networks from browser to browser?
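A quick back-of-the-envelope on streaming a desktop at 30fps in HD shows why the codec does the heavy lifting here. The compression ratio below is a placeholder assumption for illustration, not a measured ORBX figure.

```python
# Raw pixel rate of a 1080p30 stream versus a hypothetically compressed one.
width, height, fps = 1920, 1080, 30
bytes_per_pixel = 3                      # 24-bit RGB

raw_bits_per_s = width * height * bytes_per_pixel * 8 * fps
print(f"raw: {raw_bits_per_s / 1e9:.2f} Gbit/s")       # ~1.49 Gbit/s

assumed_ratio = 100                      # assumed codec compression ratio
streamed = raw_bits_per_s / assumed_ratio
print(f"streamed: {streamed / 1e6:.1f} Mbit/s")        # ~14.9 Mbit/s
```

Uncompressed, a single desktop would saturate a gigabit link several times over; only with roughly two orders of magnitude of compression does it fit a good home connection.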



The thing with this ORBX is that it'll all be rental. Fancy a quick game of something? You'll have to sub for $7.99 for a 3-month rental, and there will be nothing to own. It still requires a good internet link, which in some places could be beaten by carrier pigeons.
The placement of these server clusters is also going to be important: having to connect to a USA game server from rural Siberia is going to affect the gameplay a lot, so positioning the clusters at the right points in the internet's structure will add to the setup costs. And with any downtime potentially affecting tens of thousands of users, it's going to be hard to maintain without some serious work behind the scenes.


I imagine it will be good for all sorts of single play/pay scenarios, from movies to games to access to rendering engines.

It could be adapted for remote connections and for VM-type individual environments that can be "ported" to any server cloud anywhere your connection is stable.


xploder

edit on 6/5/13 by XPLodER because: (no reason given)




