Free Republic

A Bandwidth Breakthrough!
MIT Technology Review ^ | Tuesday, October 23, 2012 | David Talbot

Posted on 10/23/2012 11:42:47 AM PDT by Red Badger

A dash of algebra on wireless networks promises to boost bandwidth tenfold, without new infrastructure.

Academic researchers have improved wireless bandwidth by an order of magnitude—not by adding base stations, tapping more spectrum, or cranking up transmitter wattage, but by using algebra to banish the network-clogging task of resending dropped packets.

By providing new ways for mobile devices to solve for missing data, the technology not only eliminates this wasteful process but also can seamlessly weave data streams from Wi-Fi and LTE—a leap forward from other approaches that toggle back and forth. "Any IP network will benefit from this technology," says Sheau Ng, vice president for research and development at NBC Universal.

Several companies have licensed the underlying technology in recent months, but the details are subject to nondisclosure agreements, says Muriel Medard, a professor at MIT's Research Laboratory of Electronics and a leader in the effort. Elements of the technology were developed by researchers at MIT, the University of Porto in Portugal, Harvard University, Caltech, and Technical University of Munich. The licensing is being done through an MIT/Caltech startup called Code-On Technologies.

The underlying problem is huge and growing: on a typical day in Boston, for example, 3 percent of packets are dropped due to interference or congestion. Dropped packets cause delays in themselves, and then generate new back-and-forth network traffic to replace those packets, compounding the original problem.

The practical benefits of the technology, known as coded TCP, were seen on a recent test run on a New York-to-Boston Acela train, notorious for poor connectivity. Medard and students were able to watch blip-free YouTube videos while some other passengers struggled to get online. "They were asking us 'How did you do that?' and we said 'We're engineers!' " she jokes.

More rigorous lab studies have shown large benefits. Testing the system on Wi-Fi networks at MIT, where 2 percent of packets are typically lost, Medard's group found that a normal bandwidth of one megabit per second was boosted to 16 megabits per second. In a circumstance where losses were 5 percent—common on a fast-moving train—the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second. In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.
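
For context on why a few percent of loss is so damaging to ordinary TCP, a standard back-of-the-envelope model (the Mathis approximation) says steady-state throughput scales with the inverse square root of the loss rate. The quick Python sketch below uses an assumed 100-millisecond round-trip time and 1,460-byte segments purely for illustration; these are not figures from the MIT tests.

from math import sqrt

# Rough illustration only: the Mathis et al. approximation bounds
# steady-state TCP throughput at roughly (MSS/RTT) * 1.22 / sqrt(p)
# for a random loss rate p. The RTT and segment size below are
# assumptions for the example, not measurements from the experiments.
MSS_BITS = 1460 * 8   # assumed maximum segment size, in bits
RTT_S = 0.100         # assumed round-trip time, in seconds

def tcp_throughput_estimate(loss_rate):
    """Approximate achievable TCP throughput in bits per second."""
    return (MSS_BITS / RTT_S) * 1.22 / sqrt(loss_rate)

for p in (0.02, 0.05):
    print(f"{p:.0%} loss -> ~{tcp_throughput_estimate(p) / 1e6:.1f} Mbit/s")
# Roughly 1.0 Mbit/s at 2 percent loss and 0.6 Mbit/s at 5 percent:
# the loss rate, not the radio link speed, becomes the bottleneck,
# which is exactly the waste coded TCP is meant to eliminate.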

Medard's work "is an important breakthrough that promises to significantly improve bandwidth and quality-of-experience for cellular data users experiencing poor signal coverage," says Dipankar "Ray" Raychaudhuri, director of the Winlab at Rutgers University (see "Pervasive Wireless"). He expects the technology to be widely deployed within two to three years.

To test the technology in the meantime, Medard's group set up proxy servers in the Amazon cloud. IP traffic was sent to Amazon, encoded, and then decoded as an application on phones. The benefit might be even better if the technology were built directly into transmitters and routers, she says. It also could be used to merge traffic coming over Wi-Fi and cell phone networks rather than forcing devices to switch between the two frequencies.

The technology transforms the way packets of data are sent. Instead of sending packets, it sends algebraic equations that describe series of packets. So if a packet goes missing, instead of asking the network to resend it, the receiving device can solve for the missing one itself. Since the equations involved are simple and linear, the processing load on a phone, router, or base station is negligible, Medard says.
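
To make the idea concrete, here is a toy sketch in Python of random linear coding over GF(2), where the "equations" are simply XOR combinations of packets. It illustrates the principle only; it is not Code-On's licensed algorithm or the actual coded-TCP wire format, and all names in it are invented for the example.

import random

# Toy sketch, not the real implementation: send random linear
# combinations of k packets over GF(2) (bitwise XOR). Any k linearly
# independent combinations that arrive let the receiver solve for all
# k originals, so a dropped combination need not be resent.

def encode(packets, n_coded):
    """Produce n_coded (coefficients, payload) pairs, each a random XOR
    combination of the original equal-length packets."""
    k, size = len(packets), len(packets[0])
    coded = []
    while len(coded) < n_coded:
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue                      # an all-zero combination carries no information
        payload = bytes(size)
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def decode(received, k):
    """Gauss-Jordan elimination over GF(2); returns the k original packets,
    or None if the received combinations are not yet full rank."""
    rows = [(list(c), bytearray(p)) for c, p in received]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                   # need more independent combinations
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(k)]

# Example: 4 packets sent as 6 combinations; even after 2 are "lost",
# the surviving 4 (if independent) reconstruct everything locally,
# with no retransmission round trip.
originals = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
recovered = None
while recovered is None:                  # retry if a random mix happened to be rank-deficient
    survivors = encode(originals, 6)[:4]  # pretend the last two coded packets were dropped
    recovered = decode(survivors, 4)
print(recovered == originals)             # True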

Whether gains seen in the lab can be achieved in a full-scale deployment remains to be seen, but the fact that the improvements were so large suggests a breakthrough, says Ng, the NBC executive, who was not involved in the research. "In the lab, if you only find a small margin of improvement, the engineers will be skeptical. Looking at what they have done in the lab, it certainly is order-of-magnitude improvement—and that certainly is very encouraging," Ng says.

If the technology works in large-scale deployments as expected, it could help forestall a spectrum crunch. Cisco Systems says that by 2016, mobile data traffic will grow 18-fold—and Bell Labs goes farther, predicting growth by a factor of 25. The U.S. Federal Communications Commission has said spectrum could run out within a couple of years.

Medard stops short of saying the technology will prevent a spectrum crunch, but she notes that the current system is grossly inefficient. "Certainly there are very severe inefficiencies that should be remedied before you consider acquiring more resources," she says.

She says that when her group got online on the Acela, the YouTube video they watched was of college students playing a real-world version of the Angry Birds video game. "The quality of the video was good. The quality of the content—we haven't solved," Medard says.


TOPICS: Business/Economy; Culture/Society; Technical; Testing; US: Massachusetts
KEYWORDS: bandwidth; bandwith; communications; computers; electronics; internet
To: Red Badger

Sounds like a RAID for a moving target.


21 posted on 10/23/2012 12:51:24 PM PDT by ImJustAnotherOkie (zerogottago)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Buckeye McFrog

Yeah, a little algebra following a space transformation... Leads the public to believe it’s as easy as figuring out which train got to New York first. Well, it’s not like the general public can solve that problem either. No point in confusing them with the details! LOL


22 posted on 10/23/2012 1:08:49 PM PDT by chaos_5
[ Post Reply | Private Reply | To 13 | View Replies]

To: Red Badger
Several companies have licensed the underlying technology in recent months,

It's good to see claims of technological improvement actually make it to market, rather than promising market availability several years in the future.

23 posted on 10/23/2012 1:09:49 PM PDT by Moonman62 (The US has become a government with a country, rather than a country with a government.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger
In a circumstance where losses were 5 percent—common on a fast-moving train—the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second. In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.

Keep increasing the speed until you get packet loss.

24 posted on 10/23/2012 1:10:49 PM PDT by Moonman62 (The US has become a government with a country, rather than a country with a government.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

Bookmark


25 posted on 10/23/2012 1:57:48 PM PDT by IronJack (=)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

Bookmark


26 posted on 10/23/2012 2:01:31 PM PDT by IronJack (=)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger
Basically it’s a way for the computer to make a SWAG as to what the missing packet is, based on what comes before and after............

That's not all that useful. All it will do is improve youtube and the downloading of... porn....

.....OMG THIS IS THE MOST IMPORTANT INNOVATION IN THE HISTORY OF MAN!!!!!

27 posted on 10/23/2012 2:09:57 PM PDT by Lazamataz (The Pravda Press has gone from 'biased' straight on through to 'utterly bizarre'.)
[ Post Reply | Private Reply | To 5 | View Replies]

To: Red Badger

Sounds like Forward Error Correction (FEC) code


28 posted on 10/23/2012 2:18:35 PM PDT by Bruce Kurtz
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

bkmk


29 posted on 10/23/2012 2:22:15 PM PDT by Sergio (An object at rest cannot be stopped! - The Evil Midnight Bomber What Bombs at Midnight)
[ Post Reply | Private Reply | To 1 | View Replies]

To: KarlInOhio

When we send an Excel spreadsheet with pivot tables, we often get new blanks and new lines of what appear to be the same sort of words or data. I have wondered if these are transmission errors?


30 posted on 10/23/2012 4:38:25 PM PDT by Sequoyah101
[ Post Reply | Private Reply | To 17 | View Replies]

To: Red Badger

My casual take on this, after reviewing some of their published work, is that it's founded on erasure codes. This is funny because I think there are already products for the wired TCP universe that work the same way, plus storage implementations too.


31 posted on 10/23/2012 4:47:27 PM PDT by no-s (when democracy is displaced by tyranny, the armed citizen still gets to vote)
[ Post Reply | Private Reply | To 1 | View Replies]

To: no-s

I was looking for details but the article was just generalities for public consumption. Big fan of various codes including RS, Golay, and convolutional, having implemented/used them in some of my homebrew projects.


32 posted on 10/24/2012 1:09:16 AM PDT by SpaceBar
[ Post Reply | Private Reply | To 31 | View Replies]

To: SpaceBar
...Big fan of various codes including RS, Golay, and convolutional, having implemented/used them in some of my homebrew projects.

Heheh, there just went 45 minutes of guilty pleasure; see Modeling Network Coded TCP.

33 posted on 10/24/2012 2:51:49 AM PDT by no-s (when democracy is displaced by tyranny, the armed citizen still gets to vote)
[ Post Reply | Private Reply | To 32 | View Replies]

To: Red Badger

So much in computing boils down to an encoding in some way or another.


34 posted on 10/24/2012 4:47:16 AM PDT by 2 Kool 2 Be 4-Gotten
[ Post Reply | Private Reply | To 1 | View Replies]

To: SpaceBar

I think this quote cuts through the hype:

"In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare."

Loss-free scenarios are not rare. They are entirely predictable, and they just require higher signal strength.

Error correction codes are already part of the data link layer.

What they imply (order of magnitude increase) is a violation of Nyquist’s Law.

What they are doing will not solve plain old congestion problems.

I suspect that they are using normal packet transmission overhead in a different way that includes error correction elements. I could see some level of improvement possible - mainly by managing flow-control differently.

Here is the fundamental theory issue: Typical Digital Networks perform consistently and then “fall off a brick wall”. If you can stave off the brick wall with a bit of low-overhead error-correction, you might be able to measure a significant increase in performance (10x even) at the signal ‘brick wall’.

This may increase cell coverage for a specific link scenario (fringe) a little bit, but will not increase designed capacity or any other meaningful measure of a well-designed network - per their quote at the beginning of this post.


35 posted on 10/24/2012 5:07:56 AM PDT by RFEngineer
[ Post Reply | Private Reply | To 32 | View Replies]

To: RFEngineer

I was thinking something like a hybrid FEC scheme with an ACK/NACK fallback, or even variable code robustness similar to the latest versions of PACTOR which estimate the channel S/N and adjust accordingly.


36 posted on 10/24/2012 5:21:52 AM PDT by SpaceBar
[ Post Reply | Private Reply | To 35 | View Replies]

To: Red Badger

Sounds more like RAID for packets. Ergo there will be a reduction in useable data per packet to provide the redundancy for ONE lost packet in the sequence. And if more than one packet per sequence is lost does the whole sequence need to be retrans-ed? Likely, and that will increase network congestion proportional to what would supposedly be “saved”.

And they aren’t increasing bandwidth, rather the circuit is just being used closer to its errorless rate. They had to use a really crummy network to show its value otherwise it only adds to network congestion. How? By sending less useable information per packet, which in turn produces more packets required per information transfer which = more traffic. I wonder if any of those other riders considered them to be bandwidth hogs taking bandwidth needed for them to access their network? For a YouTube of college students playing Angry Birds?

Anyone who uses a solution which requires reducing information per packet without reducing the packet size really needs a very strong (read: security) justification to do so as it degrades network performance for all users.

I hope this so called solution requires FCC licensing. It is certainly not an elegant or egalitarian solution as it increases bandwidth usage per information transfer.

Why not integrate some of the better network accelerator technologies into wireless devices instead? At least they don’t reduce the information bytes per packet.

So what’s next, S-ing around with MTU sizes? /s


37 posted on 10/24/2012 5:34:56 AM PDT by Justa
[ Post Reply | Private Reply | To 5 | View Replies]

To: SpaceBar

The data link already does some of this stuff. At the packet level (without changing out the network), as I'm sure you know, you have some fixed overhead. Each packet has a checksum to determine if the data has been transmitted correctly.

In a typical network, if the checksum is bad, you throw out the whole packet. This has to encode the data in some FEC-like way and try to extract useful data from bad packets instead of requesting a retransmission. That’s about the only way to get any sort of performance enhancement here - you don’t wait for a retransmit, you don’t retransmit bad packets - and you perhaps trade the FEC overhead for a larger packet size (more efficient) to compensate for the extra overhead. That should be a deterministic problem - and would give you a slight performance advantage for congested networks and fringe coverage areas. 10x is hype only possible in selective scenarios - still, it’s not nothing but it’s not a panacea. You still have to have a well designed physical layer - as always.


38 posted on 10/24/2012 5:43:54 AM PDT by RFEngineer
[ Post Reply | Private Reply | To 36 | View Replies]


