1.1.e Explain TCP operations
- 1.1.e [i] IPv4 and IPv6 PMTU
- 1.1.e [ii] MSS
- 1.1.e [iii] Latency
- 1.1.e [iv] Windowing
- 1.1.e [v] Bandwidth delay product
- 1.1.e [vi] Global synchronization
- 1.1.e [vii] Options
IPv4 and IPv6 PMTU
IPv4 and IPv6 handle path MTU differently. An IPv4 packet that reaches a chokepoint where the link MTU is smaller than the packet can be fragmented by the router mid-transit, or dropped altogether if the Don't Fragment bit is set (in which case the router returns an ICMP Fragmentation Needed message). IPv4 hosts can optionally use Path MTU Discovery (PMTUD) to verify the largest packet size that will traverse a given path before transmitting. IPv6 routers never fragment in transit: fragmentation can occur only at the source, and PMTUD (driven by ICMPv6 Packet Too Big messages) is the standard way to learn the path MTU. It is therefore strongly suggested to leave PMTUD enabled for IPv6 so packets are sized correctly and source fragmentation is avoided.
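To make the IPv4 behavior concrete, the sketch below shows how a router would carve an oversized packet into fragments. The helper is hypothetical (not from any real networking library); it only illustrates the 8-byte-offset rule from the IPv4 header format.

```python
# Sketch: how an IPv4 router fragments an oversized packet mid-path.
# (IPv6 routers never do this; the IPv6 source must fragment or use PMTUD.)
# Hypothetical helper for illustration, assuming a 20-byte IP header.

IPV4_HEADER = 20  # bytes, assuming no IP options

def ipv4_fragments(payload_len, link_mtu):
    """Return (offset_in_8_byte_units, fragment_payload_len) tuples."""
    # Each fragment's payload must fit in the MTU after the IP header,
    # and all but the last must be a multiple of 8 bytes.
    max_frag = (link_mtu - IPV4_HEADER) // 8 * 8
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_frag, payload_len - offset)
        frags.append((offset // 8, size))
        offset += size
    return frags

# A 4000-byte payload squeezed through a 1500-byte MTU link:
print(ipv4_fragments(4000, 1500))  # three fragments: 1480 + 1480 + 1040 bytes
```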
MSS
TCP segments carry the actual data between two TCP endpoints, and their size is governed by the Maximum Segment Size (MSS) if one is configured. Even without an explicit MSS setting there is an inherent ceiling: the space left within the MTU after the IP and TCP headers are accounted for is effectively the MSS. You would typically lower the MSS when you know traffic will cross a path that adds transport headers not accounted for locally before transmission, such as a tunnel. Making the segment smaller reserves room for the anticipated encapsulation headers added along the transport path.
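The arithmetic behind the default MSS and an MSS clamp can be sketched as below. The header sizes are typical values and the 24-byte GRE overhead is an assumed figure; a real configuration would match the actual tunnel encapsulation in use.

```python
# Sketch: MSS as the space left in the MTU after transport headers,
# and clamping it to leave room for anticipated tunnel headers.
# Header sizes are typical assumptions, not taken from a live device.

MTU = 1500
IP_HEADER = 20
TCP_HEADER = 20

default_mss = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes, the usual IPv4 default

# Anticipating a GRE tunnel along the path (24 bytes assumed overhead):
GRE_OVERHEAD = 24
clamped_mss = default_mss - GRE_OVERHEAD     # 1436 bytes

print(default_mss, clamped_mss)  # 1460 1436
```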
Latency
Latency is the amount of time it takes for data sent by an originating source to reach the receiving destination. Factors that influence how long data takes to get from start to finish include:
- Propagation Delay – the amount of time it physically takes for a transmitted signal to travel from source to destination.
- Serialization – the amount of time it takes to clock the bytes of a packet onto an interface as a bit stream for transmission.
- Data Protocols – some protocols require a handshake, such as TCP's three-way handshake, before data can be sent; waiting for the acknowledgements that complete the handshake adds time.
- Routing and Switching – the amount of time it takes a router or switch to determine which interface a packet must exit adds delay.
- Queuing and Buffering – on a congested link, packets may be held in a queue or buffer before being serialized onto an interface; the time traffic spends waiting in the queue adds to latency.
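These delay components simply add together, which a quick sketch makes concrete. The speeds and sizes below are illustrative assumptions, not measured values.

```python
# Sketch: one-way latency as the sum of the delay components listed above.
# Figures are illustrative assumptions.

SPEED_IN_FIBER = 2e8          # m/s, roughly two-thirds the speed of light

def one_way_latency(distance_m, packet_bytes, link_bps,
                    processing_s=0.0, queuing_s=0.0):
    propagation = distance_m / SPEED_IN_FIBER
    serialization = packet_bytes * 8 / link_bps
    return propagation + serialization + processing_s + queuing_s

# 1000 km of fiber, a 1500-byte packet on a 100 Mb/s link:
latency = one_way_latency(1_000_000, 1500, 100e6)
print(round(latency * 1000, 3), "ms")  # 5.12 ms
```

Note that propagation dominates here; on long paths, no amount of extra bandwidth reduces it.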
Windowing
Windowing is the TCP mechanism that allows multiple segments to be in flight before an acknowledgement is required, making data transfer far more efficient than a send-one, wait-for-ack cycle across the path. During slow start the window roughly doubles for every round trip of successful acknowledgements, provided no segments are lost in transit. When a loss interrupts the sequence of received segments, the sender shrinks the window sharply and begins ramping it back up, repeating the growth process.
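The doubling-and-reset pattern can be sketched as below. Real TCP stacks add congestion avoidance, fast retransmit, and other refinements; this deliberately simplified model only illustrates the growth cycle described above.

```python
# Sketch of the window growth pattern: the window doubles each round trip
# while acknowledgements succeed, then collapses to one segment on loss
# and the ramp-up repeats. A simplification, not a full TCP model.

def window_growth(rtts_before_loss, rounds):
    cwnd, history = 1, []
    for r in range(rounds):
        history.append(cwnd)
        if (r + 1) % rtts_before_loss == 0:
            cwnd = 1          # loss detected: start over with one segment
        else:
            cwnd *= 2         # all segments acked: double the window
    return history

print(window_growth(4, 8))  # [1, 2, 4, 8, 1, 2, 4, 8]
```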
Bandwidth delay product
The bandwidth-delay product (BDP) of a transmission path defines the amount of data TCP should have in flight within the path at any one time in order to fully utilize the available channel capacity. Capacity here is specific to TCP and is a by-product of how the protocol itself operates.
It is defined as the capacity of the pipe:
BDP (bits) = bandwidth (bits/sec) x round-trip time (sec)
If the BDP is ever larger than the TCP receive window, the receive window becomes the limiting factor for throughput, and the window must be enlarged (typically via the window scale option) to fix the issue.
Global synchronization
TCP global synchronization occurs among TCP flows during times of high network congestion. The typical interface behavior is that once enough traffic fills the hardware transmit queue, additional packets attempting to be sent are subject to tail drop.
TCP recovers from dropped packets automatically: it reduces the rate at which it sends traffic for a period of time, then probes whether the network is still congested by slowly ramping its transmission rate back up. This is known as the slow-start algorithm.
When all senders experience drops at the same time, they all back off and then speed back up in lockstep. This is referred to as global synchronization, and it leads to inefficient use of bandwidth due to the large numbers of dropped packets that must be retransmitted at a reduced sending rate.
Tail drop is the leading cause of this problem. Features such as RED or WRED reduce the likelihood of it occurring by dropping packets from different flows at different times, which also keeps queue depths at a manageable level.
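The core RED idea can be sketched as a drop probability that ramps up with the average queue depth, so flows back off at different times instead of all at once. The thresholds below are assumed values, not defaults from any particular platform.

```python
# Sketch of the RED drop curve: no early drops below a minimum average
# queue depth, tail-drop behavior above a maximum, and a linear ramp in
# between. Thresholds and max probability are assumed example values.

def red_drop_probability(avg_queue, min_th=20, max_th=40, max_p=0.10):
    if avg_queue < min_th:
        return 0.0                      # queue shallow: never drop early
    if avg_queue >= max_th:
        return 1.0                      # queue deep: behave like tail drop
    # linear ramp between the thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for depth in (10, 25, 35, 45):
    print(depth, red_drop_probability(depth))
```

WRED extends this by keeping separate threshold sets per traffic class, so lower-priority traffic is dropped earlier.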
Options
The TCP options field was introduced to allow the protocol to add new functionality for evolving networks, both today and in the future. Window scaling, for example, is implemented as an option: the window scale option extends the effective window size beyond what the original 16-bit window field allowed, letting TCP keep more segments in flight.
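The window scale option (RFC 7323) works as a simple left shift of the advertised 16-bit window field, which the sketch below illustrates:

```python
# Sketch: the effective receive window under the window scale option.
# The 16-bit window field is multiplied by 2**scale, where the scale
# factor is negotiated once during the TCP handshake.

def effective_window(window_field, scale):
    return window_field << scale        # window_field * 2**scale

# The bare 16-bit field maxes out at 65535 bytes;
# a scale factor of 7 pushes that to ~8 MB:
print(effective_window(65535, 0))   # 65535
print(effective_window(65535, 7))   # 8388480
```

This is what makes it possible to fill high bandwidth-delay-product paths, tying back to the BDP discussion above.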