The Internet protocol suite is the set of communications protocols used for the Internet and similar networks, and generally the most popular protocol stack for wide area networks. It is commonly known as TCP/IP, because of its most important protocols: Transmission Control Protocol (TCP) and Internet Protocol (IP), which were the first networking protocols defined in this standard. It is occasionally known as the DoD model due to the foundational influence of the ARPANET in the 1970s (operated by DARPA, an agency of the United States Department of Defense).
TCP/IP provides end-to-end connectivity specifying how data should be formatted, addressed, transmitted, routed and received at the destination. It has four abstraction layers, each with its own protocols.[1][2] From lowest to highest, the layers are:
The link layer (commonly Ethernet) contains communication technologies for a local network.
The internet layer (IP) connects local networks, thus establishing internetworking.
The transport layer (TCP) handles host-to-host communication.
The application layer (for example HTTP) contains all protocols for specific data communications services on a process-to-process level (for example how a web browser communicates with a web server).
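The four layers in that list can be watched in action with a few lines of code. Below is a minimal loopback sketch (our own illustration, not from any quoted post): the kernel supplies the link and internet layers, the socket API exposes the TCP transport layer, and the bytes we write are the application layer.

```python
import socket
import threading

def tcp_echo_roundtrip(payload: bytes) -> bytes:
    """Send payload over a real TCP connection on the loopback
    interface and return what the other side echoes back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # internet layer: an IP address
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(65536))  # echo whatever arrives
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))  # transport layer: TCP
    cli.sendall(payload)                # application layer: our own bytes
    data = cli.recv(65536)
    cli.close()
    t.join()
    srv.close()
    return data
```

Everything below the `sendall` call (segmenting, sequencing, ACKs, retransmission) is handled by the kernel's TCP implementation, which is exactly the machinery the rest of this thread is about.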
Originally posted by Xoanon
reply to post by XPLodER
Hi XPlodER,
I love this stuff but I am still sort of in my toddler years when it comes to understanding it all, so please excuse my bone-headed Qs.
Where does the system get the data from to replace the lost data without a resend?
On a funkier note, I am really intrigued by the 'layering' of TCP IP. I am studying histology right now and it sort of reminds me of tissue layers and enzymes in the body that allow for specific types of 'histo-communication' (I made that up).
And also, isn't it interesting that this all seems to be a DARPA project from day 1. I can't imagine that anything has changed, and in fact, there seems to be plenty of evidence that it has not.
Originally posted by Maxatoria
It sounds like each packet contains a small amount of extra data which can be used to reconstruct missing packets (think .par files). Each packet grows to accommodate that recovery data, so we have to transmit more packets to transfer the same payload. But this sounds like a dynamically changing system: as the network gets crappier, the amount of data used for error correction increases, and as it gets better it uses less. Which to me sounds like someone's ported 1960s-70s data transmission over crappy telegraph systems onto wireless.
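The .par-file idea above can be sketched with plain XOR parity, which is a much simpler scheme than the random linear coding in the actual paper but shows the principle: one extra packet per group lets the receiver rebuild any single lost packet without a resend. The function names are ours, for illustration only.

```python
def xor_parity(packets):
    """XOR a group of equal-length packets into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Rebuild at most one missing packet (marked None) from the rest.
    XORing every surviving packet with the parity yields the lost one."""
    missing = [i for i, pkt in enumerate(received) if pkt is None]
    assert len(missing) <= 1, "plain XOR parity can mask only one loss per group"
    if missing:
        received[missing[0]] = xor_parity(
            [pkt for pkt in received if pkt is not None] + [parity])
    return received
```

With one parity packet per eight data packets this is exactly the 12.5% overhead figure discussed further down the thread.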
Originally posted by zroth
Nice.
I imagine this discovery will find its way to the traditional transport layer as well.
Originally posted by Maxatoria
There's always going to be a cut-off point where the amount of parity data needed to recover the stream exceeds what the system can provide, and then you'll be stuffed. But it sounds good for systems with a reasonable quality of connection, as it'll save both sides some effort. On wired systems I sense there won't be much use unless you are near the physical limits of the medium's ability, aka running Token Ring over 1920s phone wiring.
Originally posted by Arbitrageur
reply to post by XPLodER
There is also the bandwidth penalty for sending the data that can be used to reconstruct missing packets.
If there is enough extra data in 8 packets to reconstruct one missing packet, that means there is 12.5% overhead in additional data being sent.
The issue I see with this is that ALWAYS sending the additional 12.5%, even when the connection is good, may add more overhead overall than just resending the lost packets, which, let's say, involves a 1% overhead. So this could make network traffic over 12 times worse instead of better.
The best technical solution is to fix the connectivity issues so you don't have lost packets, but where you do have some lost packets, I'm not sure it makes sense to penalize the entire network with an additional 12.5% overhead for those cases, most of which may not be losing any packets at all.
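The break-even point in that argument can be put in numbers. A common simplified model (our assumption here, not from the thread) is that resending every lost packet costs an expected p/(1-p) extra transmissions per packet at loss rate p, since resends can themselves be lost; fixed coding overhead wins only once that exceeds 12.5%.

```python
# Back-of-the-envelope comparison: fixed 12.5% coding overhead vs.
# retransmission overhead that scales with the loss rate.

def retransmit_overhead(loss_rate: float) -> float:
    """Expected extra transmissions per packet when every loss is
    resent and resends can themselves be lost: p / (1 - p)."""
    return loss_rate / (1.0 - loss_rate)

CODING_OVERHEAD = 1 / 8   # one redundant packet per eight, i.e. 12.5%

for p in (0.001, 0.01, 0.05, 0.12):
    better = "coding" if CODING_OVERHEAD < retransmit_overhead(p) else "resend"
    print(f"loss {p:6.1%}: resend overhead {retransmit_overhead(p):6.2%} -> {better}")
```

Under this model the fixed 12.5% only pays for itself above roughly 11% packet loss, which supports the point that clean links are penalized; an adaptive redundancy factor (as the paper's later sections hint at) would be the obvious refinement.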
The network coding layer intercepts and modifies TCP's acknowledgment (ACK) scheme such that random erasures do not affect the transport layer's performance. To do so, the encoder, the network coding unit under the sender TCP, transmits R random linear combinations of the buffered packets for every packet transmitted by the TCP sender. The parameter R is the redundancy factor; it helps TCP/NC recover from random losses, but it cannot mask correlated losses, which are usually due to congestion. The decoder, the network coding unit under the receiver TCP, acknowledges degrees of freedom instead of individual packets, as shown in Figure 1. Once enough degrees of freedom have been received, the decoder solves the set of linear equations to decode the original data transmitted by the TCP sender and delivers it to the TCP receiver.
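The encode/decode cycle described in that excerpt can be sketched in miniature. This toy works over GF(2), so a "random linear combination" is just an XOR of a random subset of buffered packets, and the coefficient vector is a bitmask; the paper's scheme uses larger fields, and all names here are our own. The decoder accumulates linearly independent combinations (degrees of freedom) and solves by Gaussian elimination.

```python
import random

def encode(packets: list[bytes]) -> tuple[int, bytes]:
    """One coded packet: a random nonzero GF(2) coefficient bitmask
    plus the XOR of the packets it selects."""
    n, size = len(packets), len(packets[0])
    mask = random.randrange(1, 1 << n)
    body = bytearray(size)
    for i in range(n):
        if mask >> i & 1:
            for j in range(size):
                body[j] ^= packets[i][j]
    return mask, bytes(body)

def decode(coded: list[tuple[int, bytes]], n: int, size: int) -> list[bytes]:
    """Recover the originals once n independent combinations
    (n 'degrees of freedom') have arrived."""
    basis = {}                                  # pivot bit -> (mask, payload)
    for mask, body in coded:
        body = bytearray(body)
        while mask:
            piv = mask.bit_length() - 1
            if piv not in basis:                # a new degree of freedom
                basis[piv] = (mask, body)
                break
            pm, pb = basis[piv]                 # reduce by the stored row
            mask ^= pm
            for j in range(size):
                body[j] ^= pb[j]
    assert len(basis) == n, "not enough degrees of freedom yet"
    out = [b""] * n
    for piv in sorted(basis):                   # back-substitute, low to high
        mask, body = basis[piv]
        mask &= ~(1 << piv)
        while mask:
            i = mask.bit_length() - 1           # i < piv, already solved
            for j in range(size):
                body[j] ^= out[i][j]
            mask &= ~(1 << i)
        out[piv] = bytes(body)
    return out
```

The key property the paper exploits is visible here: it does not matter *which* coded packets are lost, only *how many* independent ones survive, which is why the decoder can acknowledge degrees of freedom rather than individual packets.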
We briefly note the overhead associated with network coding. It can be considered in two parts: 1) the coding vector (or coefficients) that has to be included in the header; 2) the encoding/decoding complexity. For the receiver to decode a network-coded packet, the packet needs to indicate the coding coefficients used to generate the linear combination of the original data packets. The overhead associated with the coefficients depends on the field size used for coding as well as the number of original packets.
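That coefficient overhead is easy to quantify: one coefficient per original packet, at log2(field size) bits each. A short sketch (our arithmetic, illustrating the excerpt's point):

```python
import math

def coeff_overhead_bytes(n_packets: int, field_size: int) -> float:
    """Bytes of coding-vector overhead per coded packet:
    one log2(field_size)-bit coefficient per original packet."""
    return n_packets * math.log2(field_size) / 8

# 16 packets coded over GF(256): 16 coefficients of 8 bits = 16 bytes,
# about 1% of a 1500-byte packet; over GF(2) it would be only 2 bytes.
```

This is why the field size is a real design trade-off: bigger fields make random combinations more likely to be independent, but cost more header bytes per packet.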
We have presented an analytical study comparing the performance of TCP and E2E-TCP/NC. Our analysis characterizes the throughput of TCP and E2E as a function of erasure rate, round-trip time, maximum window size, and the duration of the connection. We showed that network coding, which is robust against erasures and failures, can prevent the performance degradation TCP often suffers in lossy networks. Our analytical model shows that TCP with network coding has significant throughput gains over TCP. E2E is not only able to increase its window size faster but also to maintain a large window size despite losses within the network; TCP, on the other hand, experiences window closing as losses are mistaken for congestion. Furthermore, NS-2 simulations verify our analysis of TCP's and E2E's performance. Our analysis and simulation results both support that E2E is robust against erasures and failures. Thus, E2E is well suited for reliable communication in lossy wireless networks.
Originally posted by XPLodER
why this changes the internet,
over the last few years "streaming content" has become a large part of life for internet users and places a heavy demand on hardware infrastructure. if the "end user" is on a "lossy" or "busy" wireless network endpoint, TCP/IP uses large resources to ensure "reliable" transport of streamed data to that end user.
I have several major issues understanding your post or your point:
1) Voice or video transmissions never require 100% accuracy
If a packet (also called a datagram) gets lost, the effect of that loss is something you barely notice on the YouTube video you are watching or on your Skype call.
Streaming videos use buffering - meaning, in plain words, that the video you are watching is being downloaded in the background to your local computer, and you are watching the video from that local buffer.
Unless you deliberately configure your local buffer to zero, you never stream the video you are watching directly from the Internet to your screen.
2) UDP protocol
UDP is part of the TCP/IP protocol suite and has long been used to transmit video or voice over the Internet.
Unlike TCP, the UDP protocol does not do any error correction, because, as mentioned, there is simply no need for it when it comes to video or voice.
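The fire-and-forget nature of UDP is visible directly in the socket API. A loopback sketch (our own example): the sender just hands a datagram to the network; nothing acknowledges it and nothing would resend it if it were dropped.

```python
import socket

def udp_roundtrip(payload: bytes) -> bytes:
    """Send one UDP datagram over loopback and read it back.
    There is no handshake, no ACK, and no retransmission."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    rx.settimeout(2)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(payload, rx.getsockname())   # fire and forget
    data, _ = rx.recvfrom(65536)           # on loopback it does arrive
    tx.close()
    rx.close()
    return data
```

On a real lossy wireless link the `recvfrom` would simply never see a dropped datagram, which is exactly why streaming applications layer their own loss handling (or none at all) on top of UDP.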
3) Factors which impact quality of online streaming video
4) How TCP as protocol works
TCP in the TCP/IP protocol suite is, at a high level, responsible for data transfer, while IP does the addressing.
The TCP protocol works by dividing the data into packets, which are then sent onto the network.
A connection is first established with a three-way handshake: the source sends a SYN, the destination replies with a SYN-ACK, and the source completes it with an ACK. After that, every packet carries a sequence number and the destination acknowledges what it has received; if anything gets lost during the transmit, the unacknowledged data is resent.
Packets - again at a high level - have a header and a data field, and at the link layer the frame also carries a trailer. TCP's error-checking data is a checksum carried in its header (the link-layer trailer holds a CRC); this is what currently determines whether the transfer was successful or a TCP retransmit is needed.
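That error-checking data is the 16-bit ones'-complement Internet checksum (RFC 1071), used by both TCP and IP. It only *detects* corruption; it cannot rebuild lost or damaged data, which is the whole gap the network-coding scheme in this thread tries to fill. A straightforward implementation:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                           # pad to a whole word
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

The defining property: appending the checksum to the data and summing again yields zero, which is how a receiver verifies a segment.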
There is a method of handling packets which arrive at the destination out of order, but there is no method to reconstruct packets which are lost during the transfer - "on the fly", as you say. If data is lost, it is lost in transit.
If it is missing, it cannot be made up - unless the next and previous packets could somehow determine what the missing packet had in its data field. In practice that would mean an algorithm that can recover anything just by knowing two parts of the whole, no matter how large or complex that whole might be.
And when saying that, I can bet it would not be first used to stream videos from the Internet.
Originally posted by definity
Originally posted by Xoanon
reply to post by XPLodER
Hi XPlodER,
Where does the system get the data from to replace the lost data without a resend?
it can recalculate the lost data from the redundant coding data carried by the other packets sent, but it can't do it very often.
Here's what a TCP packet looks like.
A. Header Structure
The NC layer protocol reuses parts of the TCP header without any modification. To reduce protocol overhead, these common header parts can be stripped off the TCP header before encoding of the TCP segments. They can easily be reconstructed at the receiver side by simple extraction from the NC header. The common and stripped-off header parts are source port, destination port, all control flags except ACK, and sequence number; see the notes about the latter in Section III-B. The reused and stripped-off header fields are blue-shaded in Figure 2.
The remaining TCP header fields are not required by the NC layer. Thus, they are not part of the NC layer header and become part of the encoded TCP segment, i.e. the NC layer payload data. The TCP header fields which are encoded include acknowledgment number, offset, reserved, window, checksum, urgent pointer and options. They are non-shaded on the left-hand side of Figure 2.
Besides the reused TCP header fields, the NC layer adds two additional header fields, i.e. the symbol indicator (symb.) and NC options. The new fields are yellow-shaded on the right-hand side of Figure 2. The symbol indicator determines the position of a segment within an MDS codeword; for details see the following section. The NC options are not used in the current basic version of our protocol. They might be used to signal adaptation of the code rate or the speculative ACK threshold in later versions.
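To make the header description concrete, here is a hypothetical packing of the fields the excerpt names: the reused TCP fields (ports, ACK flag, sequence number) plus the two new NC fields. The field widths and layout are our assumptions for illustration only; the paper's Figure 2, not reproduced in this thread, defines the real format.

```python
import struct

# Assumed layout (NOT the paper's actual Figure 2): src port, dst port,
# sequence number, flags byte, symbol indicator, NC options.
NC_HDR = struct.Struct("!HHIBBH")

def pack_nc_header(src, dst, seq, ack_flag, symbol, options=0):
    """Serialize the sketched NC header; 0x10 mimics TCP's ACK bit."""
    return NC_HDR.pack(src, dst, seq, 0x10 if ack_flag else 0, symbol, options)

def unpack_nc_header(raw):
    """Parse the sketched NC header back into named fields."""
    src, dst, seq, flags, symbol, options = NC_HDR.unpack(raw[:NC_HDR.size])
    return {"src": src, "dst": dst, "seq": seq,
            "ack": bool(flags & 0x10), "symbol": symbol, "options": options}
```

Even in this toy form the design point survives: fields the receiver can reconstruct travel once, in the clear NC header, while everything else rides encoded inside the payload.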