Basically it’s a way for the computer to make a SWAG (scientific wild-ass guess) as to what the missing packet is, based on what comes before and after.
Seems like a “smart” FTP approach. You’re still going to lose a lot of the reliability that TCP/IP gives if you really need all the data bits, but at least it should be good for streaming uses.
A modification of a simple checksum has been used in the past to detect and fill in dropped bits; I believe one system could detect and correct up to 3 flipped bits in a 1024-byte block, but my memory is fuzzy on this right now and I am not going to look it up. Essentially that, along with use of the 9th “parity” bit, has been an available method since the early days of computing.
This sounds like it may be some modification of that. It would be nice to hear a few more details: how much fault it is able to detect and correct, how much overhead data is sent, what handshaking is needed, etc.
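The parity-bit idea described above can be made concrete with a Hamming(7,4) code, the classic scheme that corrects any single flipped bit per block. This is a toy sketch for illustration, not what the article’s system actually uses:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; a nonzero syndrome is the 1-based
    position of the flipped bit. Returns the corrected 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

block = hamming74_encode([1, 0, 1, 1])  # -> [0, 1, 1, 0, 0, 1, 1]
block[4] ^= 1                           # simulate one corrupted bit
assert hamming74_correct(block) == [1, 0, 1, 1]
```

The overhead here is 3 parity bits per 4 data bits; real SECDED memory codes get that down to 8 check bits per 64 data bits.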
That's not all that useful. All it will do is improve YouTube and the downloading of... porn.
.....OMG THIS IS THE MOST IMPORTANT INNOVATION IN THE HISTORY OF MAN!!!!!
Sounds more like RAID for packets. Ergo there will be a reduction in usable data per packet to provide the redundancy for ONE lost packet in the sequence. And if more than one packet per sequence is lost, does the whole sequence need to be retransmitted? Likely, and that will increase network congestion in proportion to what was supposedly “saved”.
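The RAID analogy can be sketched directly: a byte-wise XOR “parity packet” over a group of packets lets the receiver rebuild exactly one lost packet, at the cost of one extra packet per group. A toy illustration (the article’s actual coding scheme is presumably more sophisticated):

```python
def xor_parity(packets):
    """Build a parity packet as the byte-wise XOR of equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(survivors, parity):
    """XOR the parity packet with the surviving packets of the group;
    the result is the single missing packet. Fails if more than one is lost."""
    lost = bytearray(parity)
    for p in survivors:
        for i, b in enumerate(p):
            lost[i] ^= b
    return bytes(lost)

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(group)
# packet 1 is dropped in transit; rebuild it from the other two plus parity
assert recover([group[0], group[2]], parity) == b"BBBB"
```

Note the trade-off the parent points out: one parity packet per group of N means N/(N+1) of the transmitted packets carry real data, and losing two packets in the same group forces a retransmission anyway.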
And they aren’t increasing bandwidth; rather, the circuit is just being used closer to its error-free rate. They had to use a really crummy network to show its value, because otherwise it only adds to network congestion. How? By sending less usable information per packet, which in turn requires more packets per information transfer, which equals more traffic. I wonder if any of those other riders considered them bandwidth hogs, taking bandwidth needed to access their own network, all for a YouTube video of college students playing Angry Birds?
Anyone who uses a solution that reduces information per packet without reducing the packet size really needs a very strong (read: security) justification to do so, as it degrades network performance for all users.
I hope this so-called solution requires FCC licensing. It is certainly not an elegant or egalitarian solution, as it increases bandwidth usage per information transfer.
Why not integrate some of the better network accelerator technologies into wireless devices instead? At least they don’t reduce the information bytes per packet.
So what’s next, S-ing around with MTU sizes? /s