MIT Self-Learning Algorithm Makes the Internet 3 Times Faster

posted on Jul, 24 2013 @ 10:44 AM
I've always been fascinated by self-learning algorithms and artificial neural networks. It's mind-blowing to think that computers could design a faster internet for us. Some people believe that the first self-aware machine will actually be the entire internet. When you think about it, the internet is like a global electronic neural network. The computers/nodes within our World Wide Web act like neurons, firing out signals and receiving signals from other nodes. Now we are going to use self-learning algorithms to dictate the way signals are sent between nodes for maximum efficiency, essentially meaning that the global neural network is evolving on its own.


If you're reading this, you're probably using a version of the transmission control protocol, or TCP, a system that regulates internet traffic to prevent congestion. It works, and it's getting better all the time. But it was a system made by puny humans - surely our machine-overlords can do better.

Yes, and possibly as much as two or three times better, say the MIT researchers behind Remy, a system that spits out congestion-stopping algorithms.

To use Remy, an Internet-goer plugs in answers to a few variables (How many people will use this connection? How much bandwidth will they need?) and what metric they want to use for measuring performance (Is throughput, the measure of how much data is going through, the most important? Or is it the delay, the measure of how long it takes that information to travel?).

The system then starts testing algorithms to determine which works best for your situation. Testing every possible algorithm would be impractical, so Remy prioritizes, searching for the smaller tweaks that will result in the largest jump in speed. (Even this "quicker" process takes four to 12 hours.)

The resulting rules that the system spits out are more complicated than in most TCPs, according to Remy's inventors: while TCP programs might operate based on a few rules, Remy works out algorithms with more than 150 if-x-then-y rules for operating. The simulations sound impressive: doubled throughput and two-thirds less delay on a computer connection, and a 20 to 30 percent increase in throughput for a cell network, with a 25 to 40 percent reduction in delay.

www.popsci.com.au...
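A rough idea of the prioritised search described above can be sketched in Python. Everything here is illustrative: the scoring function stands in for a real network simulation (the actual Remy evaluates candidate algorithms in the ns-2 simulator), and the parameter names and the location of the optimum are made up, not Remy's real design.

```python
import random

random.seed(0)  # deterministic run for this sketch

def throughput_score(params):
    # Stand-in for a full network simulation: scores a toy
    # two-parameter controller, with a made-up optimum at
    # window_gain=1.8, backoff=0.5.
    window_gain, backoff = params
    return -((window_gain - 1.8) ** 2) - ((backoff - 0.5) ** 2)

def greedy_search(start, rounds=300, step=0.1, tries=8):
    """Prioritised search, roughly in the spirit described above:
    repeatedly propose small tweaks to the current best candidate
    and keep whichever one improves the score the most."""
    best, best_score = start, throughput_score(start)
    for _ in range(rounds):
        candidates = [
            (best[0] + random.uniform(-step, step),
             best[1] + random.uniform(-step, step))
            for _ in range(tries)
        ]
        candidate = max(candidates, key=throughput_score)
        if throughput_score(candidate) > best_score:
            best, best_score = candidate, throughput_score(candidate)
    return best

best = greedy_search((1.0, 1.0))
```

Testing every possible algorithm would be intractable, which is why this kind of local search only explores small tweaks around the current best candidate; even so, the article notes the real process takes four to twelve hours.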


The original source MIT article gives a good explanation too:


Indeed, where a typical TCP congestion-control algorithm might consist of a handful of rules — if the percentage of dropped packets crosses some threshold, cut the transmission rate in half — the algorithms that Remy produces can have more than 150 distinct rules.

“It doesn’t resemble anything in the 30-year history of TCP,” Winstein says. “Traditionally, TCP has relatively simple endpoint rules but complex behavior when you actually use it. With Remy, the opposite is true. We think that’s better, because computers are good at dealing with complexity. It’s the behavior you want to be simple.” Why the algorithms Remy produces work as well as they do is one of the topics the researchers hope to explore going forward.

In the meantime, however, there’s little arguing with the results. Balakrishnan and Winstein tested Remy’s algorithms on a simulation system called the ns-2, which is standard in the field.

In tests that simulated a high-speed, wired network with consistent transmission rates across physical links, Remy’s algorithms roughly doubled network throughput when compared to Compound TCP and TCP Cubic, while reducing delay by two-thirds. In another set of tests, which simulated Verizon’s cellular data network, the gains were smaller but still significant: a 20 to 30 percent improvement in throughput, and a 25 to 40 percent reduction in delay.

www.mit.edu...
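To illustrate the contrast Winstein describes, here is a hedged Python sketch: a classic handful-of-rules TCP-style update next to a toy rule-table lookup in the Remy style. The table values, state buckets, and function names are all invented for illustration; they are not taken from Remy or from any real TCP implementation.

```python
def aimd_on_ack(cwnd):
    # Classic TCP-style rule: grow the window additively on success.
    return cwnd + 1

def aimd_on_loss(cwnd, loss_fraction, threshold=0.01):
    # The "handful of rules" style quoted above: if the percentage of
    # dropped packets crosses some threshold, cut the rate in half.
    return cwnd / 2 if loss_fraction > threshold else cwnd

# A Remy-style controller instead maps observed network state to an
# action via a large table of learned if-x-then-y rules (the real
# tables have 150+ entries; these four values are made up).
rule_table = {
    # (ack-rate bucket, rtt-ratio bucket) -> window multiplier
    (0, 0): 0.5,
    (0, 1): 0.8,
    (1, 0): 1.1,
    (1, 1): 1.5,
}

def remy_style_update(cwnd, ack_rate_bucket, rtt_ratio_bucket):
    return cwnd * rule_table[(ack_rate_bucket, rtt_ratio_bucket)]
```

The point of the contrast: the AIMD-style endpoint rules are simple but produce complex emergent behavior, while the learned table is complex at the endpoint but is chosen so that the resulting network behavior is simple.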


Now all they need to do is combine this breakthrough with this technology: Scientists Create Wi-Fi That Can Transmit Seven Blu-ray Movies Per Second




posted on Jul, 24 2013 @ 10:55 AM
reply to post by ChaoticOrder
 


I read a little about this; the best part is that it can be implemented quickly and relatively cheaply. But don't expect your ISP bill to go down, lol. Oh no, they'll just keep charging you the same and stuff more users onto the network.

The internet as AI, almost. The internet is just the network. The cloud that can be built on top of that, distributed computing, is what would have the potential to be AI: it would have access to the raw processing power of every device set up to be part of the cloud, and it would have the internet as a redundant, routed network, so when segments fail, traffic gets rerouted.

I could see the internet being the domain of the first self-aware AI, but not the AI itself, just the domain that facilitates it.

AI is a very slow-progressing field, so who knows. This technology has direct benefits now, and a faster internet would make skynet even better.



posted on Jul, 24 2013 @ 10:58 AM
reply to post by phishyblankwaters
 


I laughed out loud reading "skynet" in your post. To be honest, I wouldn't be surprised if technology for computers, the internet, etc. improves many times over, because there are a lot of brilliant, creative people out there who can improve the way technology is used by humans.



posted on Jul, 24 2013 @ 11:19 AM
reply to post by phishyblankwaters
 



I read a little about this; the best part is that it can be implemented quickly and relatively cheaply. But don't expect your ISP bill to go down, lol. Oh no, they'll just keep charging you the same and stuff more users onto the network.

Very good point; it seems the ISP companies are always looking for ways to increase their profits. But look on the bright side: if they can make a profit off it, then it gives them an incentive to implement the idea as soon as possible.



posted on Jul, 24 2013 @ 11:52 AM
It's not surprising that something that can monitor and adapt will do much better than a few hard-coded rules. We now have cheap storage to record the massive amounts of data needed to work on these sorts of problems. Traditionally it's been down to a human to make the choices, since most of the gear has had limited statistical and reporting options and has been built just to chuck the data down the line as fast as possible.



posted on Jul, 24 2013 @ 11:59 AM
MIT - the government's own expert R&D think tank - this technology will have NSA back doors built into it so they can get at our private data more easily. It sounds like they want to replace TCP with something they can control.

( MIT has a history of developing technologies for the government)

Of course I'm just speculating, but I wouldn't put anything past this corrupt government.



posted on Jul, 24 2013 @ 12:43 PM
reply to post by JohnPhoenix
 


TCP by itself is pretty much an open door to the spooks; even TCP v6 uses triple DES as its best encryption, which I can't imagine would stress out the NSA's gear to decrypt. This seems more like self-adapting software which, when set a few goals, will eventually try to bring everything as close to those settings as possible. It's nothing you couldn't do yourself if you fancied a few weeks of number crunching and a very sore head.




