Free Republic

A Bandwidth Breakthrough!
MIT Technology Review ^ | Tuesday, October 23, 2012 | David Talbot

Posted on 10/23/2012 11:42:47 AM PDT by Red Badger



To: Red Badger

Sounds like a RAID for a moving target.


21 posted on 10/23/2012 12:51:24 PM PDT by ImJustAnotherOkie (zerogottago)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Buckeye McFrog

Yeah, "a little algebra following a space transformation"... That leads the public to believe it’s as easy as figuring out which train got to New York first. Well, it’s not as if the general public can solve that problem either. No point in confusing them with the details! LOL
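
For what it’s worth, the "little algebra" in MIT’s coded-TCP work is usually described as random linear network coding: treat a batch of k packets as vectors, transmit random linear combinations of them, and let the receiver solve the resulting linear system — any k independent combinations will do, no matter which packets the network dropped. A toy sketch over GF(2), with every detail here assumed for illustration rather than taken from the article:

    import random

    def encode(packets, n_coded):
        # Emit random linear combinations (over GF(2)) of the source
        # packets; each coded packet carries its coefficient vector.
        k = len(packets)
        coded = []
        while len(coded) < n_coded:
            coeffs = [random.randint(0, 1) for _ in range(k)]
            if not any(coeffs):            # skip the useless all-zero mix
                continue
            mix = bytes(len(packets[0]))
            for c, p in zip(coeffs, packets):
                if c:
                    mix = bytes(a ^ b for a, b in zip(mix, p))
            coded.append((coeffs, mix))
        return coded

    def decode(received, k):
        # Gaussian elimination over GF(2): once k independent
        # combinations arrive, the k originals pop out, regardless
        # of WHICH coded packets were lost along the way.
        rows = [(list(c), bytes(m)) for c, m in received]
        if len(rows) < k:
            return None
        for col in range(k):
            pivot = next((r for r in range(col, len(rows))
                          if rows[r][0][col]), None)
            if pivot is None:
                return None                # not full rank yet
            rows[col], rows[pivot] = rows[pivot], rows[col]
            for r in range(len(rows)):
                if r != col and rows[r][0][col]:
                    rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                               bytes(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
        return [rows[i][1] for i in range(k)]

    src = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # k = 4 equal-size packets
    coded = encode(src, 8)                       # send extras to cover losses
    random.shuffle(coded)                        # the network drops/reorders

    received = []
    for pkt in coded:                            # any packets, any order
        received.append(pkt)
        out = decode(received, len(src))
        if out is not None:
            assert out == src                    # originals recovered
            break

Real systems reportedly work over a larger field such as GF(256), where random combinations are almost surely independent.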


22 posted on 10/23/2012 1:08:49 PM PDT by chaos_5
[ Post Reply | Private Reply | To 13 | View Replies]

To: Red Badger
Several companies have licensed the underlying technology in recent months,

It's good to see a claimed technological improvement actually make it to market, rather than a promise of market availability several years in the future.

23 posted on 10/23/2012 1:09:49 PM PDT by Moonman62 (The US has become a government with a country, rather than a country with a government.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger
In a circumstance where losses were 5 percent—common on a fast-moving train—the method boosted bandwidth from 0.5 megabits per second to 13.5 megabits per second. In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.

Keep increasing the speed until you get packet loss.
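
Those quoted numbers line up with the classic Mathis et al. rule of thumb for TCP under random loss, throughput ≈ MSS / (RTT * sqrt(p)). A quick sanity check, assuming 1460-byte segments and a ~100 ms round trip (both assumed, not from the article):

    from math import sqrt

    MSS = 1460 * 8           # segment size in bits (assumed)
    RTT = 0.100              # round-trip time in seconds (assumed)

    for p in (0.05, 0.01, 0.001):
        bw = MSS / (RTT * sqrt(p))           # Mathis et al. approximation
        print(f"loss {p:6.1%}: ~{bw / 1e6:.1f} Mbit/s")

    # loss   5.0%: ~0.5 Mbit/s   <- the figure quoted above
    # loss   1.0%: ~1.2 Mbit/s
    # loss   0.1%: ~3.7 Mbit/s

At 5% loss the radio isn’t the bottleneck; TCP’s own loss response is. That’s why masking losses with coding can free up an order of magnitude.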

24 posted on 10/23/2012 1:10:49 PM PDT by Moonman62 (The US has become a government with a country, rather than a country with a government.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

Bookmark


25 posted on 10/23/2012 1:57:48 PM PDT by IronJack (=)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

Bookmark


26 posted on 10/23/2012 2:01:31 PM PDT by IronJack (=)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger
Basically it’s a way for the computer to make a SWAG as to what the missing packet is, based on what comes before and after............

That's not all that useful. All it will do is improve youtube and the downloading of... porn....

.....OMG THIS IS THE MOST IMPORTANT INNOVATION IN THE HISTORY OF MAN!!!!!

27 posted on 10/23/2012 2:09:57 PM PDT by Lazamataz (The Pravda Press has gone from 'biased' straight on through to 'utterly bizarre'.)
[ Post Reply | Private Reply | To 5 | View Replies]

To: Red Badger

Sounds like a Forward Error Correction (FEC) code


28 posted on 10/23/2012 2:18:35 PM PDT by Bruce Kurtz
[ Post Reply | Private Reply | To 1 | View Replies]

To: Red Badger

bkmk


29 posted on 10/23/2012 2:22:15 PM PDT by Sergio (An object at rest cannot be stopped! - The Evil Midnight Bomber What Bombs at Midnight)
[ Post Reply | Private Reply | To 1 | View Replies]

To: KarlInOhio

When we send an Excel spreadsheet with pivot tables, we often get new blanks and new lines of what appear to be the same sort of words or data. I have wondered if these are transmission errors?


30 posted on 10/23/2012 4:38:25 PM PDT by Sequoyah101
[ Post Reply | Private Reply | To 17 | View Replies]

To: Red Badger

My casual take on this, after reviewing some of their published work, is that it’s founded on erasure codes. This is funny because I think there are already products for the wired TCP universe that work the same way, plus storage implementations too.
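
For anyone who hasn’t run into erasure codes: the simplest possible one is a single XOR parity packet per group — the same arithmetic RAID-5 uses across disks. A minimal sketch (packet contents made up):

    from functools import reduce

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    group = [b"data-pkt-1", b"data-pkt-2", b"data-pkt-3"]   # equal sizes
    parity = reduce(xor, group)        # one redundant packet per group

    # Packet 2 is lost in transit. XOR of everything that DID arrive
    # (including parity) rebuilds it -- no retransmission round trip.
    arrived = [group[0], group[2], parity]
    assert reduce(xor, arrived) == group[1]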


31 posted on 10/23/2012 4:47:27 PM PDT by no-s (when democracy is displaced by tyranny, the armed citizen still gets to vote)
[ Post Reply | Private Reply | To 1 | View Replies]

To: no-s

I was looking for details, but the article was just generalities for public consumption. Big fan of various codes, including RS, Golay, and convolutional, having implemented/used them in some of my homebrew projects.


32 posted on 10/24/2012 1:09:16 AM PDT by SpaceBar
[ Post Reply | Private Reply | To 31 | View Replies]

To: SpaceBar
...Big fan of various codes, including RS, Golay, and convolutional, having implemented/used them in some of my homebrew projects.

Heheh, there just went 45 minutes of guilty pleasure; see "Modeling Network Coded TCP."

33 posted on 10/24/2012 2:51:49 AM PDT by no-s (when democracy is displaced by tyranny, the armed citizen still gets to vote)
[ Post Reply | Private Reply | To 32 | View Replies]

To: Red Badger

So much in computing boils down to an encoding in some way or another.


34 posted on 10/24/2012 4:47:16 AM PDT by 2 Kool 2 Be 4-Gotten
[ Post Reply | Private Reply | To 1 | View Replies]

To: SpaceBar

I think this quote cuts through the hype:

” In a situation with zero losses, there was little if any benefit, but loss-free wireless scenarios are rare.”

Loss-free scenarios are not rare. They are entirely predictable, and they just require higher signal strength.

Error correction codes are already part of the data link layer.

What they imply (an order-of-magnitude increase) is a violation of the Shannon capacity limit.

What they are doing will not solve plain old congestion problems.

I suspect that they are using the normal packet transmission overhead in a different way, one that includes error-correction elements. I could see some level of improvement being possible, mainly by managing flow control differently.

Here is the fundamental theory issue: typical digital networks perform consistently and then "fall off a brick wall." If you can stave off the brick wall with a bit of low-overhead error correction, you might be able to measure a significant increase in performance (even 10x) at the signal "brick wall."
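
A toy model of that brick wall, assuming independent packet loss and a 20-packet block that is only useful if fully recovered (all parameters illustrative, not from the article):

    from math import comb

    def block_success(n, k, p):
        # P(at least k of n packets survive independent loss rate p)
        return sum(comb(n, i) * (1 - p) ** i * p ** (n - i)
                   for i in range(k, n + 1))

    k = 20
    for p in (0.001, 0.01, 0.05, 0.10):
        plain = block_success(k, k, p)      # uncoded: need every packet
        coded = block_success(24, k, p)     # rate-20/24 code: any 20 of 24
        print(f"p={p:<6} plain {plain:6.1%}   coded {coded:6.1%}")

    # Around p = 5% the uncoded block survives only ~36% of the time,
    # while the 20%-overhead code still delivers ~99% -- the cliff moves.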

This may increase cell coverage a little bit for a specific link scenario (fringe), but it will not increase designed capacity or any other meaningful measure of a well-designed network, per their quote at the beginning of this post.


35 posted on 10/24/2012 5:07:56 AM PDT by RFEngineer
[ Post Reply | Private Reply | To 32 | View Replies]

To: RFEngineer

I was thinking something like a hybrid FEC scheme with an ACK/NACK fallback, or even variable code robustness similar to the latest versions of PACTOR which estimate the channel S/N and adjust accordingly.
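
Something like this, maybe (thresholds entirely made up for illustration):

    def pick_code_rate(estimated_loss):
        # Map the estimated channel loss rate to a code rate k/n,
        # PACTOR-style; fall back to plain ACK/NACK ARQ when even
        # heavy coding won't hold the link. Thresholds are invented.
        if estimated_loss < 0.01:
            return 1.0        # clean channel: no redundancy
        if estimated_loss < 0.05:
            return 0.9        # light coding
        if estimated_loss < 0.15:
            return 0.75       # heavy coding
        return None           # too poor: ARQ-only fallback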


36 posted on 10/24/2012 5:21:52 AM PDT by SpaceBar
[ Post Reply | Private Reply | To 35 | View Replies]

To: Red Badger

Sounds more like RAID for packets. Ergo there will be a reduction in usable data per packet to provide the redundancy for ONE lost packet in the sequence. And if more than one packet per sequence is lost, does the whole sequence need to be retransmitted? Likely, and that will increase network congestion in proportion to what would supposedly be "saved".

And they aren’t increasing bandwidth; rather, the circuit is just being used closer to its errorless rate. They had to use a really crummy network to show its value; otherwise it only adds to network congestion. How? By sending less usable information per packet, which in turn requires more packets per information transfer, which means more traffic. I wonder if any of those other riders considered them to be bandwidth hogs, taking bandwidth they needed to access the network? For a YouTube of college students playing Angry Birds?
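
Rough numbers on that trade-off (loss rate and code parameters assumed for illustration):

    p = 0.05                       # assumed loss rate
    n, k = 24, 20                  # assumed code parameters

    arq_tx = 1 / (1 - p)           # expected sends per delivered packet,
                                   # ignoring timeout and window stalls
    fec_tx = n / k                 # fixed redundancy, loss or no loss

    print(f"ARQ: {arq_tx:.3f} transmissions per delivered packet")  # ~1.053
    print(f"FEC: {fec_tx:.3f} transmissions per delivered packet")  # 1.200

    # The coded scheme really does put more packets on the air; any win
    # comes from dodging retransmission round trips, not from fewer bytes.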

Anyone who uses a solution that requires reducing information per packet without reducing the packet size really needs a very strong (read: security) justification to do so, as it degrades network performance for all users.

I hope this so-called solution requires FCC licensing. It is certainly not an elegant or egalitarian solution, as it increases bandwidth usage per information transfer.

Why not integrate some of the better network accelerator technologies into wireless devices instead? At least they don’t reduce the information bytes per packet.

So what’s next, S-ing around with MTU sizes? /s


37 posted on 10/24/2012 5:34:56 AM PDT by Justa
[ Post Reply | Private Reply | To 5 | View Replies]

To: SpaceBar

The data link layer already does some of this stuff. At the packet level (without swapping out the network), as I’m sure you know, you have some fixed overhead. Each packet has a checksum to determine whether the data was transmitted correctly.

In a typical network, if the checksum is bad, you throw out the whole packet. This scheme has to encode the data in some FEC-like way and try to extract useful data from bad packets instead of requesting a retransmission. That’s about the only way to get any sort of performance enhancement here: you don’t wait for a retransmit, you don’t retransmit bad packets, and you perhaps trade the FEC overhead for a larger packet size (more efficient) to compensate. That should be a deterministic problem, and it would give you a slight performance advantage on congested networks and in fringe coverage areas. 10x is hype, possible only in selective scenarios; still, it’s not nothing, but it’s not a panacea. You still have to have a well-designed physical layer, as always.
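
A sketch of that receive path: verify the per-packet checksum, drop corrupt frames as usual, but rebuild the payload from coded redundancy instead of asking for a retransmit. One XOR parity frame per group here, handling a single loss; all framing is invented for illustration:

    import zlib

    def make_frame(payload):
        # Append a CRC32 so the receiver can detect corruption.
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def check_frame(frame):
        # Return the payload if the CRC verifies, else None (drop it).
        payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
        return payload if zlib.crc32(payload) == crc else None

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Sender: three equal-size payloads plus one XOR parity frame.
    payloads = [b"block-one-", b"block-two-", b"block-3---"]
    parity = payloads[0]
    for p in payloads[1:]:
        parity = xor(parity, p)
    frames = [make_frame(p) for p in payloads] + [make_frame(parity)]

    # The channel mangles frame 1; the CRC catches it, we drop it, and
    # the survivors plus parity rebuild the payload -- no retransmit.
    frames[1] = b"\x00" * len(frames[1])
    good = [check_frame(f) for f in frames]
    lost = good.index(None)
    repair = None
    for i, g in enumerate(good):
        if g is not None and i != lost:
            repair = g if repair is None else xor(repair, g)
    assert repair == payloads[lost]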


38 posted on 10/24/2012 5:43:54 AM PDT by RFEngineer
[ Post Reply | Private Reply | To 36 | View Replies]



