
Google switches to brand new network protocols, and no one noticed


posted on Apr, 21 2013 @ 09:58 AM
reply to post by verschickter
 


The .NET structure has become a rally point for the Microsoft branch, so how does this fit with SDN, Exploder? At the moment it sounds like IPv6.5, with the network layer in focus while those on the server development branch are left to twiddle their thumbs. I know the actual system and integration is not this simple. HTML5 is a clear future rally point, so how does SDN relate to it?

When it comes to database integration with PHP I choose the abstraction layer of PDO. I know native MySQL does have some efficiencies and optimizations, but as a programmer I do not want to be hindered by the storage level and want to remain agile enough through a constantly evolving and dynamic situation. The .NET framework does have quite a massive branch of functions and programming translations. The NoSQL branch also looks to be making some ground in the distributed information arena.

I know this stuff ain't simple, Exploder, just putting up a few flags to watch on your journey. Any information will be considered.




posted on Apr, 21 2013 @ 10:19 AM
Bumping so more people can see and comment on this tech. Thanks for the thread and the continuing data.



posted on Apr, 21 2013 @ 10:44 AM

Originally posted by kwakakev
reply to post by verschickter
 


The .NET structure has become a rally point for the Microsoft branch, so how does this fit with SDN, Exploder? HTML5 is a clear future rally point, so how does SDN relate to it?


It was my answer to you saying balance is the word. It does not fit in any way with SDN, Exploder. I thought that would be obvious? My entry point was correcting a little miswording in the OP about latency; then you responded, and I said balance is the word, like you said. Then I brought up the .NET example (balance).



posted on Apr, 21 2013 @ 11:05 AM
reply to post by verschickter
 


I hear ya...

Balance is the answer here. Balance with SDN works by better managing the transport layer; otherwise, why would Google go through with it and still keep it if it were a failure? Sure, it may still need some work and continual balancing with other systems and processes, but for the moment it looks to be a safe step in the right direction.

I think the issue we are having is the general complexity of the software development landscape, and in particular the online capabilities. I do understand that you were exposed to prototypes and beta versions of what has recently been produced; without a comparison to the actual system that has been deployed, I am unable to make a reasoned comment. So far my web browsing experience is OK; if the gaming and video conferencing communities give it the thumbs up, I cannot find complaint with what has been done.

Google will not stop here, not with its resources, and not if it wants to continue as a company. Raising issues like .NET, PHP and HTML just gives them something to consider as they come up for air before their next dive.



posted on Apr, 21 2013 @ 02:03 PM


So how is IP addressing handled? I would expect it to be IPv4 and IPv6 compliant. Are white, grey and black lists supported for certain IP domains, or is this all in the box? No developer concerns or awareness about IP traffic?

It is backwards compatible with IPv4 and IPv6. In older links and switches the packet "header" still contains the destination address, so traffic is routed normally on non-SDN network fabric, and is routed by SDN networks (in a data centre or between data centres) by looking at "encoded" information in the data portion of the packet. So you can route by packet header or by packet encoding. A point to make is that switches can "look up" any field in the packet, not just the "destination" field (see the controller sketch further down).



Sounds like there are some programmer overheads / abstractions that need to be handled. Any specifics?


Yes, the basic deployment of OpenFlow (at this stage) requires nothing extra to work out of the box.
If you want to design specific operations like:
packet/traffic shaping,
smart routing with fail-over,
load balancing,
packet stripping and network isolation,
or individual user segregation on commodity server infrastructure,
you are required to "program" solutions in the form of APIs, or purchase solutions that suit your requirements.
So there is a need for skilled programmers to customise the implementation; a sketch of what such a program can look like follows below.
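
For a concrete flavour of that kind of programming, here is a minimal sketch using the open-source Ryu controller framework, one of several OpenFlow controllers (chosen here purely for illustration, not anything Google is known to run). When a switch connects, the app proactively installs one drop rule that matches on a TCP field rather than a destination address:

```python
# Minimal OpenFlow 1.3 app for the Ryu controller framework.
# It installs a single proactive rule on every switch that connects:
# drop all IPv4/TCP traffic destined for port 23 (telnet).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class DropTelnet(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match any IPv4/TCP packet headed for port 23 ...
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=23)
        # ... and give it an empty action list, which drops it.
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, [])]
        mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

Run it with `ryu-manager drop_telnet.py` (the filename is just a suggestion). The point is that the rule keys on a transport-layer field rather than the destination IP, which is exactly the "look up any field" flexibility described above.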



This is one of the hard edges at the divide. The ability to restrict permissions is a huge plus. The structured function also provides a clear separation of powers. OK, a hacker may be able to do whatever can be done through the website, but they cannot do a table delete or a basic reformat as long as database permissions are properly restricted.


You can program the network indirectly through the central controller, so that any management functions are only accepted from specific locations, and the commands themselves can be fabric-specific (with some clever coding),
so that "route" and "commands" sit at the same "level" in the packet inspection process. A toy version of that location check is sketched below.


My Flash video player is busted at the moment, and bandwidth issues prevent me from implementing a fix. I have heard of REST before; can you describe it?



The controller exposes open northbound APIs which are used by applications. OpenDaylight supports the OSGi framework and bidirectional REST for the northbound API. The OSGi framework is used for applications that will run in the same address space as the controller while the REST (web based) API is used for applications that do not run in the same address space (or even necessarily on the same machine) as the controller. The business logic and algorithms reside in the applications. These applications use the controller to gather network intelligence, run algorithms to perform analytics, and then use the controller to orchestrate the new rules, if any, throughout the network.


www.opendaylight.org...

There is a short technical explanation on the OpenDaylight page linked above.
My definition: a full-duplex connection manager that can be used for API connectivity and allows for separation of functions across different physical devices. A minimal example of calling such an API follows.
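
As a minimal sketch of what a northbound REST call looks like from Python, here is one hedged example; the endpoint path, the response key and the default admin/admin credentials are modelled on early OpenDaylight releases and should be treated as placeholders that vary by controller and version:

```python
# Query an SDN controller's northbound REST API over plain HTTP.
# Endpoint path, credentials and response shape are placeholders
# modelled on early OpenDaylight releases.
import requests

BASE = "http://localhost:8080/controller/nb/v2"  # controller's REST root

resp = requests.get(
    BASE + "/topology/default",   # ask for the discovered topology
    auth=("admin", "admin"),      # early OpenDaylight default login
    timeout=10,
)
resp.raise_for_status()

# Print whatever links the controller has discovered between switches.
for edge in resp.json().get("edgeProperties", []):
    print(edge)
```

Because it is just HTTP, the application making this call can live on a completely different machine from the controller, which is the point of the REST northbound option described in the quote above.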

The controller platform itself contains a collection of dynamically pluggable modules to perform needed network tasks. There are a series of base network services for such tasks as understanding what devices are contained within the network and the capabilities of each, statistics gathering, etc. In addition, platform oriented services and other extensions can also be inserted into the controller platform for enhanced SDN functionality.


hope this helps



Open SDN Architectures represent a fundamental change in networking architectures. An Open SDN introduces centralized software controllers that implement a common data plane abstraction that unifies the entire network fabric southbound, and publishes open APIs for software applications northbound. With this open architecture, a fabric of multi-vendor devices can be aggregated into a single policy domain that can be programmed and automated using standard software (not CLI). Open SDN Architectures leverage an industry standard data plane abstraction protocol, like OpenFlow, which provide direct access to the data plane hardware and forwarding flow tables – not just a CLI proxy mechanism. And, OpenFlow can operate across a variety of physical and virtual switches, as well as vendor architectures. As a result, for the first time in history, an Open SDN controller can program and automate a multi-vendor network using standard software protocols, like OpenFlow and RESTful APIs.


www.bigswitch.com...

xploder



posted on Apr, 21 2013 @ 02:19 PM

Nah, not 100% correct. It does not speed up the delivery; it just makes sure that all the bits that frame the packet are delivered or can be reconstructed, so you don't need to resend the whole chunk.


There are some "other" recent advancements in packet loss avoidance that actually increase goodput and bandwidth utilization. I'm sorry, I should not go into details about specifics on this subject (proprietary), but it actually speeds up data transfer rates.


If you think about it, chopping data into packets is already doing a job, because corrupted packets can be resent, not the whole file or whatever you are trying to send. In truth, loss avoidance algorithms will make the data you have to send more bulky, so you get longer transfer times per chunk. While short transfer times are critical to high-FPS games, you may get a lag-free (as in smooth) playing experience but will add an offset to everything.


The overhead can be measured in 1-9 ms time frames, whereas a resend could cost 10-100 ms, so the cost of reconstructing the lost packets is very small compared to the RTT (see the parity sketch further down).


I opted out of the telecom's loss avoidance system. Years ago, when I was a customer of Telekom (Germany), it cost me a few euros a month to have this technology shut off on my line. That gave me a stable 15 ms latency on most servers in Germany, whereas I had 45 ms or more before. This was/is called Fastpath.
Funny, you have to pay to make sure you don't get a service.


Stanford University has developed the next-generation algorithm for loss avoidance and mitigation.
I can't go into details about exactly how it works, but this new transport is blisteringly fast.
Sorry, I have got to be careful with what I say in regards to this new loss avoidance error correction.



You only gain an end-to-end speed increase if the time needed to compute, add, send, receive and decipher the validation bits is shorter than the time to send the whole packet again. Of course, there is always some kind of correction system working in the background.


These days there is more than enough compute power to do repetitive, complex operations after transport
with a very small delay, and "adaptive" protocols have been designed to measure and compensate for fluctuations in loss.


I know a nice example from GSM protocols that would clearly show how this affects bandwidth (or packet window size) while increasing quality, but my post is already lengthy and a little bit off-topic.


Loss rates scale with throughput, so forward error correction done "on the fly" can increase throughput speeds by factors; a toy example follows below.
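
To make that idea concrete, here is a toy forward-error-correction scheme in Python: for every group of equal-length packets the sender adds one XOR parity packet, so the receiver can rebuild any single lost packet locally instead of paying a 10-100 ms round trip for a resend. This is a minimal illustration only, not the proprietary or Stanford schemes mentioned above:

```python
# Toy FEC: one XOR parity packet per group of equal-length packets.
# Any single lost packet in the group can be rebuilt from the
# survivors plus the parity, with no retransmission.


def make_parity(packets):
    """XOR all packets together into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)


def recover(received, parity):
    """Rebuild the single missing packet from survivors plus parity."""
    missing = bytearray(parity)
    for pkt in received:
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)


group = [b"pkt-one!", b"pkt-two!", b"pkt-3333"]   # equal-length payloads
parity = make_parity(group)

# Simulate losing the second packet in transit:
rebuilt = recover([group[0], group[2]], parity)
assert rebuilt == group[1]   # recovered locally, no resend needed
```

The cost is the extra parity bytes on the wire (the "more bulky" data mentioned earlier); the win is that recovery takes a little local compute instead of a full RTT.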

xploder



posted on Apr, 21 2013 @ 02:39 PM

The Googles of the world (i.e. Google, universities, and a tiny set of other companies) will use this technology to improve service and lower prices.


It will also lower the price of compute and storage for the business community,
and it will allow the large providers to provision hardware more effectively and efficiently.


Everybody else (the Comcasts, Time-Warners, ad networks, Verizons, etc.) will use this technology to enable fine-grained anti-competitive behavior (slowing down Netflix because it competes with their own more expensive cable; this is the "dynamic allocation of bandwidth" at work), intrusive advertising (adding/replacing ads on websites) and espionage.


There are laws against this form of anti-competitive bandwidth allocation or "throttling".
If you're worried about DRM, I would be looking in the direction of the W3C HTML5 spec with DRM at the website level. This removes the need for "plugins" in the browser but forces DRM into the web at a base level.


The Chinese will take the software without paying and build it into Huawei routers (and use it for international corporate and government espionage, and improving internal censorship capabilities).


This is a dual-use technology; it is open for everyone to develop on,
whether your intent is to do good or bad. And closed proprietary routers are already a soft point for exploits;
this will actually harden security for most small vendors.


It democratizes the previously expensive technology necessary to make the internet anti-democratic.


You're missing the fact that the internet is already being captured and controlled by technology that allows for censorship and control,
and the point that, in a world where nation states are starting to call the internet a "battlefield" and claiming cyberwar,
you should realise that some things would change in reaction to the dangers.

These changes are a result of the need for a democratic platform, not of a need for further censorship.

xploder



posted on Apr, 21 2013 @ 03:04 PM

The .NET structure has become a rally point for the Microsoft branch, so how does this fit with SDN, Exploder?

It looks like Microsoft is going to play ball;
they have become a contributor to the OpenDaylight project.


At the moment it sounds like IPv6.5, with the network layer in focus while those on the server development branch are left to twiddle their thumbs.


I would advise you to keep up with developments in SDN, as it will affect web hosting; and because of its ability to realise the functionality of virtualised workloads, there will be crossover.



I know the actual system and integration is not this simple. HTML5 is a clear future rally point, so how does SDN relate to it?


HTML5 opens up the ability to "deploy" high-performance JavaScript applications directly to web browsers. I have to be careful that I don't say the wrong thing here... but a nice remote support platform could be served at the browser level to interconnect two or more users. The other point is that if you wanted to, you could provide a "customer" routed-path controller from the browser, so that certain paths are avoided,
i.e. route around a noisy or contested link to speed up downloads or streaming content.
That would be handy for a CDN (content delivery network).
Just a few things to think about; a speculative sketch of such a path request follows below.
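
As a purely speculative sketch of that browser-driven path idea, here is what a client-side path request might look like if a controller exposed such an endpoint; the URL, field names and credentials are all invented for the example:

```python
# Hypothetical: ask an SDN controller to set up a path that avoids a
# congested link. No real controller is promised to expose this API;
# endpoint, field names and credentials are invented for the sketch.
import requests

resp = requests.post(
    "http://controller.example.com:8080/paths",   # hypothetical endpoint
    json={
        "src": "client-42",
        "dst": "cdn-edge-7",
        "avoid_links": ["link-noisy-3"],   # steer around the noisy link
    },
    auth=("admin", "admin"),
    timeout=5,
)
resp.raise_for_status()
print(resp.json())   # e.g. the path the controller actually installed
```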



When it comes to database integration with PHP I choose the abstraction layer of PDO. I know native MySQL does have some efficiencies and optimizations, but as a programmer I do not want to be hindered by the storage level and want to remain agile enough through a constantly evolving and dynamic situation. The .NET framework does have quite a massive branch of functions and programming translations. The NoSQL branch also looks to be making some ground in the distributed information arena.


At this stage the features required for SDN and server virtualisation are custom developed by developers.
In the future virtual servers will natively support SDN and the tools will be included. It is very much a case of "if you need a tool you might end up writing it", but only in the interim, until more vendors package their own custom tools with their software. I think Intel and HP already have a complete solution, but it is very much vendor-specific.


I know this stuff ain't simple, Exploder, just putting up a few flags to watch on your journey. Any information will be considered.


Why, thank you my friend. I hope the HTML5 ideas above get you thinking about complementing SDN routing with browser interfaces.


xploder



posted on Apr, 21 2013 @ 03:12 PM
reply to post by kwakakev
 



I think the issue we are having is the general complexity of the software development landscape, and in particular the online capabilities


That's what is great about the OpenDaylight project:
standards will allow anyone to program solutions for the SDN landscape, not just hardware vendors.
Everyone is open to use the transport, and because it is still new and only recently validated there is a window of opportunity for people to innovate. And because the platform works very well for rolling out new apps and protocols, it is perfect for very fast innovation. The platform makes testing really easy, and you can write apps for the transport in any number of programming languages.

As for HTML5, we are only just starting to see web-side apps develop their potential;
see mega.co.nz for an example.


xploder



posted on Apr, 21 2013 @ 03:14 PM

Originally posted by Aleister
Bumping so more people can see and comment on this tech. Thanks for the thread and the continuing data.


You're welcome.
Please feel free to ask me nearly anything;
I will try to answer as much as I can.


xploder



posted on Apr, 21 2013 @ 04:12 PM
reply to post by XPLodER
 


Do you anticipate a spike in new .coms, .nets, etc. because of this streamlining? Seems like this change makes things more user friendly on all levels, from the random user, to site owners, to hosts. And because of the lower costs that SHOULD trickle down from the top, people should have an easier time financially getting a site going?

Make sense? Or is that wishful thinking on my part?




posted on Apr, 21 2013 @ 04:30 PM

Originally posted by Taupin Desciple
reply to post by XPLodER
 


Do you anticipate a spike in new .coms, .nets, etc. because of this streamlining?


Yes, hosting costs will lower by about 20-30% over time, and bandwidth requirements will become flexible and more efficient.
It may take time to filter down, but it's going to come as SDN rolls out. (IMHO)


Seems like this change makes things more user friendly on all levels, from the random user, to site owners, to hosts. And because of the lower costs that SHOULD trickle down from the top, people should have an easier time financially getting a site going?


And it will level the playing field for small web platforms: a smaller website can "scale" up and down without extra dedicated equipment, or paying for capacity that is only required at peak times.


Make sense? Or is that wishful thinking on my part?



you are correct

when you are talking about 20-30% savings for providers,
the cost decrease will open up the market and allow flexibility to scale resources to demand.


xploder



posted on Apr, 21 2013 @ 04:36 PM
reply to post by XPLodER
 


Sounds good. Thank you.


This thread has been a wealth of information.





posted on Apr, 21 2013 @ 05:58 PM

Originally posted by Taupin Desciple
reply to post by XPLodER
 


Sounds good. Thank you.


This thread has been a wealth of information.



You're welcome.

Please feel free to ask any other questions you have.

xploder



posted on Apr, 21 2013 @ 06:10 PM

Originally posted by Taupin Desciple
reply to post by XPLodER
 


Sounds good. Thank you.


This thread has been a wealth of information.




This is the effective saving using the "Big Switch" solution:


www.bigswitch.com...

This figure is for data centres, but it is my opinion that the same or similar savings can be expected from web hosting.

xploder



posted on Apr, 22 2013 @ 02:00 AM
Cool information. I'm surprised people still use Google when they take your information and sell it to 3rd parties. Meh...



posted on Apr, 22 2013 @ 06:15 AM
Faster porn and no lag in games is what 90% of the people on here will read this news as meaning.

I have absolutely NO idea about networking. It blows my mind that there is an actual language to speak to computers. Wish I was smart enough to learn it but alas I am terrible at new languages.

Thanks for the heads up; new tech is always something I smile about.



posted on Apr, 22 2013 @ 03:33 PM

Originally posted by TiM3LoRd
Faster porn and no lag in games is what 90% of the people on here will read this news as meaning.


I guess you're right, but there is a much bigger picture here:
the underlying fabric of the internet is changing, and it affects more than just porn and games,
even while I realise the most obvious outcome will be as you describe.



I have absolutely NO idea about networking. It blows my mind that there is an actual language to speak to computers. Wish I was smart enough to learn it but alas I am terrible at new languages.

Thanks for the heads up; new tech is always something I smile about.


This could allow for HD, high-FPS streamed gaming;
that should make you smile.


xploder



posted on Apr, 22 2013 @ 10:08 PM

Originally posted by XPLodER
In a very unexpected announcement, Google has announced it moved the world's largest network onto OpenFlow;
this news was missed by many because of the tragedy in Boston.

In an announcement that went largely missed by the general population, Google has announced that it altered its WAN network to work on OpenFlow, a new Software Defined Networking (SDN) standard


This video was posted in May of 2012. According to Wikipedia,

In April 2012, Google's Urs Hölzle described how the company's Internal Network had been completely re-designed over the previous two years to run under OpenFlow with substantial efficiency improvement


So this was not done overnight and it wasn't just switched on with no one noticing. This was a slow integration process and it was their internal network, not the network that we interface with when we load google.com.



posted on Apr, 22 2013 @ 10:28 PM
reply to post by okachobi
 



You are correct, thank you for the correction.

I did not realise they did it last year.

It's still quite cool how they carried it out, though...

Sorry for the mistakes.


xploder




