IPv4 and IPv6 (P)MTU
Maximum Transmission Unit (MTU) is the largest Protocol Data Unit (PDU) that can be sent over a link. A higher MTU makes the network more efficient, since each packet/frame carries more data relative to its header overhead. Path MTU is the largest packet size that can travel a given route from source to destination without fragmentation. When the DF (Don't Fragment) flag is set in IPv4 (in IPv6 this is always the case, since routers never fragment), a router that receives a packet larger than its MTU drops the packet and replies with an ICMP "Fragmentation Needed" (IPv4) or ICMPv6 "Packet Too Big" message that includes that router's MTU. The host then resends smaller packets until they reach the destination.
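On Linux, classic PMTUD can be requested per socket by forcing the DF flag. A minimal sketch, assuming Linux: the option values come from `<linux/in.h>` and are defined by hand here, since Python's socket module does not reliably export these names.

```python
import socket
import sys

IP_MTU_DISCOVER = 10  # from <linux/in.h>; not reliably exported by Python
IP_PMTUDISC_DO = 2    # always set DF, never fragment locally

mode = None
if sys.platform == "linux":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    # Oversized sends now fail with EMSGSIZE instead of being fragmented,
    # and the kernel updates its cached path MTU from ICMP "too big" replies.
    mode = s.getsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER)
    s.close()
```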
Because many networks block ICMP, the method above doesn't always work, and you may hit problems on paths with an MTU smaller than 1500 bytes. To work around this, Packetization Layer Path MTU Discovery (RFC 4821) has TCP send increasingly large packets over the network to determine the largest size that survives the path, without relying on ICMP at all.
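The probing idea can be sketched as a search for the largest size that gets through. This is a toy model, not real PLPMTUD: `path_allows` is a hypothetical callback standing in for "send a probe of this size and see if it is acknowledged", which in RFC 4821 rides on TCP's own loss signals.

```python
def probe_path_mtu(path_allows, lo=1280, hi=9000):
    """Binary-search the largest probe size that path_allows() accepts.

    path_allows(size) is a stand-in for sending a probe packet of `size`
    bytes and observing whether it was delivered (acknowledged).
    """
    if not path_allows(lo):
        return None  # even the minimum-sized probe is lost
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if path_allows(mid):
            lo = mid   # probe got through; path MTU is at least mid
        else:
            hi = mid - 1  # probe lost; path MTU is below mid
    return lo

# Simulated path that silently drops anything over 1460 bytes:
print(probe_path_mtu(lambda size: size <= 1460))
```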
Maximum Segment Size (MSS) is the largest amount of data TCP can send in a single segment. The default TCP MSS for IPv4 is 536 bytes and for IPv6 it's 1220 bytes. Unlike MTU, MSS doesn't count the TCP or IP headers. Therefore a 576-byte IPv4 packet has an MSS of 536 (20-byte IP header, 20-byte TCP header), while a 1280-byte IPv6 packet has an MSS of 1220 (40-byte IP header, 20-byte TCP header).
If a host uses a non-default MSS, it advertises the value as a TCP option in the SYN packet during the TCP handshake.
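The MSS option on the wire is just four bytes: kind 2, length 4, then the 16-bit MSS value. A small sketch of building and parsing it:

```python
import struct

def build_mss_option(mss):
    # TCP option format: kind=2 (MSS), length=4, 16-bit value, big-endian.
    return struct.pack("!BBH", 2, 4, mss)

def parse_mss_option(data):
    kind, length, mss = struct.unpack("!BBH", data)
    assert kind == 2 and length == 4
    return mss

# 1460 (0x05B4) is the usual value advertised on Ethernet paths.
print(build_mss_option(1460).hex())
```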
Latency in a network is the time traffic takes to travel from source to destination and back to the source, known as Round Trip Time (RTT). Some latency is unavoidable, as routers, switches and hosts take time to do their jobs. The main causes of large spikes in latency are bufferbloat, queuing, processing time, protocol overhead, propagation delay and serialization delay.
TCP has two windows, the Congestion Window and the Receive Window, and together they limit the throughput of a TCP connection. The Congestion Window works to avoid congestion on the network, while the Receive Window prevents flooding the receiver with more data than it can process.
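The Receive Window is backed by the kernel's socket receive buffer, which you can inspect via `SO_RCVBUF`. A small sketch (the exact semantics are OS-specific; Linux, for example, doubles any value you set to leave room for bookkeeping):

```python
import socket

# The kernel receive buffer bounds how large a receive window the
# stack can advertise to the peer.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
print(rcvbuf)  # default receive buffer size in bytes
```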
Bandwidth Delay Product
The Bandwidth Delay Product is the product of the bandwidth in bits/sec and the RTT in seconds. The result is the maximum amount of data (in bits) that can be in flight on the network at any moment, i.e. data that has been sent but has yet to reach its destination and be ACKed. This is useful when optimizing the TCP Receive Window size.
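As a worked example (the link speed and RTT are illustrative numbers):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth Delay Product: bits in flight, converted to bytes."""
    return bandwidth_bps * rtt_seconds / 8

# A 100 Mbit/s path with 40 ms RTT can hold 500,000 bytes in flight,
# so a receive window smaller than ~500 kB caps throughput on it.
print(bdp_bytes(100_000_000, 0.040))
```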
TCP Global Synchronization occurs during times of congestion with multiple TCP streams: they all back off simultaneously when packet loss occurs, then slowly increase their rate until congestion causes packet loss again. Because every stream behaves the same way, the result is the familiar sawtooth pattern on a throughput graph.
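The sawtooth comes from TCP's additive-increase/multiplicative-decrease behaviour. A toy simulation, with a hypothetical fixed path capacity standing in for the point where loss occurs:

```python
def aimd_sawtooth(capacity, rounds, cwnd=1):
    """Trace a toy AIMD congestion window over a number of rounds.

    Each round the window grows by one segment (additive increase);
    when it exceeds the assumed path capacity, loss is presumed and
    the window halves (multiplicative decrease).
    """
    trace = []
    for _ in range(rounds):
        if cwnd > capacity:
            cwnd = max(1, cwnd // 2)  # loss: halve the window
        trace.append(cwnd)
        cwnd += 1                     # one more segment per RTT
    return trace

# Ramps 1..10, halves to 5, ramps back to 10, and so on: a sawtooth.
print(aimd_sawtooth(capacity=10, rounds=30))
```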
When the network is congested and carrying both UDP and TCP streams, UDP tends to win out. This is known as TCP starvation / UDP dominance: because TCP has congestion control built in and UDP doesn't, TCP lowers its bandwidth usage while UDP keeps slamming data down the line.
UDP has lower latency than TCP due to its smaller header and lower processing overhead. UDP also has no windowing to slow it down, so it puts data on the wire as fast as it can.
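The "no handshake, no windowing" point is visible in code: a single `sendto()` puts a datagram on the wire with no prior connection setup. A minimal loopback sketch:

```python
import socket

# UDP needs no handshake: one sendto() and the datagram is gone.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())
data, _ = rx.recvfrom(1024)
tx.close()
rx.close()
print(data)
```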