The codes here are not like conventional FECs, such as block codes (think Reed-Solomon and its ilk) or rateless codes (think fountain codes and the like). Codes of that kind can readily be used over UDP; using them over TCP is quite difficult.
Beyond those usual difficulties, traditional codes have a structure that precludes composability. There are several issues with this; one is that they cannot be re-encoded without first being decoded. The ability to re-encode blindly is crucial to achieving capacity. For a theoretical treatment of this, I would refer you to:
http://202.114.89.42/resource/pdf/1131.pdf
The article is rather mathematical, but the abstract gives the gist of the results: "
We consider the use of random linear network coding in lossy packet networks. In particular, we consider the following simple strategy: nodes store the packets that they receive and, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of stored packets. In such a strategy, intermediate nodes perform additional coding yet do not decode nor wait for a block of packets before sending out coded packets. Moreover, all coding and decoding operations have polynomial complexity.
We show that, provided packet headers can be used to carry an amount of side-information that grows arbitrarily large (but independently of payload size), random linear network coding achieves packet-level capacity for both single unicast and single multicast connections and for both wireline and wireless networks."
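To make the re-encoding idea concrete, here is a toy sketch of my own (not the paper's construction) of random linear network coding over GF(2), where a linear combination is just an XOR. Function names like `encode` and `recode` are invented for illustration; practical systems use larger fields such as GF(2^8). The key point is in `recode`: the intermediate node combines coded packets directly, without decoding them.

```python
import random

def encode(packets):
    """Source node: emit one random linear combination of the original
    packets. Over GF(2) this is an XOR of the packets whose random
    coefficient is 1; the coefficient vector travels in the header."""
    coeffs = [random.randint(0, 1) for _ in packets]
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def recode(coded):
    """Intermediate node: re-encode received coded packets WITHOUT
    decoding them. Payloads and coefficient vectors are combined the
    same way, so the output is still a valid linear combination of the
    original packets, with a header that describes it correctly."""
    n = len(coded[0][0])
    coeffs, payload = [0] * n, 0
    for c, p in coded:
        if random.randint(0, 1):  # randomly include this coded packet
            coeffs = [a ^ b for a, b in zip(coeffs, c)]
            payload ^= p
    return coeffs, payload

random.seed(1)
originals = [0b1010, 0b0110, 0b1111, 0b0001]
received = [encode(originals) for _ in range(5)]
c, p = recode(received)

# The recoded payload matches what its coefficient vector claims,
# even though the intermediate node never saw the originals decoded:
check = 0
for ci, pi in zip(c, originals):
    if ci:
        check ^= pi
assert p == check
```

The same property fails for a Reed-Solomon or fountain-coded stream: a relay holding only coded symbols cannot produce fresh, validly-described combinations without first recovering the source block.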
The benefit of this composability, which we had first shown theoretically, is illustrated for TCP in the following paper, which recently appeared in the Proceedings of the IEEE:
http://dandelion-patch.mit.edu/people/medard/papers2011/Netw...
The theory of network coding promises significant benefits in network performance, especially in lossy networks and in multicast and multipath scenarios. To realize these benefits in practice, we need to understand how coding across packets interacts with the acknowledgment (ACK)-based flow control mechanism that forms a central part of today’s Internet protocols such as transmission control protocol (TCP). Current approaches such as rateless codes and batch-based coding are not compatible with TCP’s retransmission and sliding-window mechanisms. In this paper, we propose a new mechanism called TCP/NC that incorporates network coding into TCP with only minor changes to the protocol stack, thereby allowing incremental deployment. In our scheme, the source transmits random linear combinations of packets currently in the congestion window. At the heart of our scheme is a new interpretation of ACKs: the sink acknowledges every degree of freedom (i.e., a linear combination that reveals one unit of new information) even if it does not reveal an original packet immediately (...) An important feature of our solution is that it allows intermediate nodes to perform re-encoding of packets, which is known to provide significant throughput gains in lossy networks and multicast scenarios.
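To make the "ACK per degree of freedom" idea concrete, here is a small sketch of my own (a toy over GF(2), not the paper's code) of the sink-side bookkeeping: each arriving coefficient vector is reduced against the combinations already seen, and it earns an ACK exactly when it is linearly independent, regardless of whether any original packet is decodable yet.

```python
def adds_dof(basis, coeffs):
    """Reduce an incoming GF(2) coefficient vector against the basis of
    combinations seen so far (incremental Gaussian elimination). If a
    nonzero remainder survives, the packet carries one new degree of
    freedom and would be ACKed."""
    v = list(coeffs)
    for b in basis:
        pivot = next(i for i, x in enumerate(b) if x)  # leading 1 of b
        if v[pivot]:
            v = [a ^ c for a, c in zip(v, b)]
    if any(v):
        basis.append(v)  # store the reduced vector; its pivot is new
        return True
    return False

basis = []
assert adds_dof(basis, [1, 1, 0])       # new information: ACK
assert not adds_dof(basis, [1, 1, 0])   # duplicate combination: no ACK
assert adds_dof(basis, [0, 1, 1])       # independent: ACK
assert not adds_dof(basis, [1, 0, 1])   # sum of the first two: no ACK
assert len(basis) == 2                  # two degrees of freedom so far
```

Note that after the third packet the sink has ACKed two degrees of freedom without being able to decode any single original packet, which is precisely what lets the sender's window keep advancing through losses.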
The exciting news that prompted the piece in TR is that we now have an implementation "in the wild", in a simple Amazon proxy, which required significant engineering and works remarkably well.
The article refers to the issue of composability by pointing out that there would be an advantage if network coding were built "directly into transmitters and routers". The more places you build this in, the better. Dave Talbot also provided a link to work our group has done with researchers at Alcatel-Lucent,
http://arxiv.org/pdf/1203.2841.pdf
which touches on this point by showing the substantial energy savings that could be obtained by implementing network coding in different parts of the network.