MTU and MSS
MTU is the maximum transmission unit: the largest IP packet a link or path will carry without needing fragmentation.
MSS is the maximum segment size: the largest TCP payload that can fit inside a packet after the IP and TCP headers are accounted for.
MTU and MSS are normally handled by the system automatically, but sometimes they need to be adjusted manually to fit the realities of a particular path, especially when things like tunnels, VPNs, and encapsulation are involved.
This page explains IPv4 packet structure, the relationship between MTU and MSS, packet fragmentation, how encapsulation and VPNs affect the numbers, and what you can do to set MTU and MSS correctly in your networks.
IPv4 packet structure
The IPv4 protocol transmits data as a series of packets consisting of two major parts:
- The header, which carries addressing and control information.
- The payload, which is the data being carried.
Each device or link on a packet network can handle packets up to a certain size in bytes. On Ethernet networks, the default limit is 1500 bytes, meaning an IPv4 packet can be at most 1500 bytes total. With a normal 20-byte IPv4 header, that leaves up to 1480 bytes for payload.
Plain IPv4 packet:

+----------------------------------------------------+
| IPv4 Header    |              Payload              |
+----------------------------------------------------+
|<-- 20 bytes -->|<----------- 1480 bytes ---------->|
|<------------------- 1500 bytes ------------------->|
If that payload contains TCP, which has its own 20-byte header, the largest TCP payload is not 1480 but 1460 bytes. This is what often shows up as the TCP MSS: the maximum size of the TCP segment that can fit inside an IPv4 packet.
TCP packet:

+----------------------------------------------------+
| IPv4 Header    | TCP Header     |     Payload      |
+----------------------------------------------------+
|<-- 20 bytes -->|<-- 20 bytes -->|<-- 1460 bytes -->|
|<------------------- 1500 bytes ------------------->|
For transports other than TCP, like UDP and ICMP, the header sizes are different, and thus also the payload size:
UDP packet:

+-----------------------------------------------------+
| IPv4 Header    | UDP Header      |     Payload      |
+-----------------------------------------------------+
|<-- 20 bytes -->|<--- 8 bytes --->|<-- 1472 bytes -->|
|<------------------- 1500 bytes -------------------->|
Protocol and transport options can reduce those numbers further by enlarging a header beyond its minimum. For example, an IPv4 header without options is only 20 bytes, but adding the timestamp option adds 12 bytes, reducing the maximum TCP payload to 1500 - 20 - 12 - 20 = 1448 bytes.
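The header arithmetic above is easy to check with a few lines of shell. This sketch mirrors the 1500-byte Ethernet examples:

```shell
# Payload arithmetic for a 1500-byte Ethernet MTU (no IP options):
MTU=1500
IP_HDR=20; TCP_HDR=20; UDP_HDR=8; TS_OPT=12

echo "TCP payload: $((MTU - IP_HDR - TCP_HDR))"   # 1460
echo "UDP payload: $((MTU - IP_HDR - UDP_HDR))"   # 1472

# With the 12-byte IPv4 timestamp option added to the IP header:
echo "TCP payload with timestamp option: $((MTU - IP_HDR - TS_OPT - TCP_HDR))"   # 1448
```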
Encapsulation overhead
Encapsulation is the process of putting one packet inside another. This is common on 44Net, both with the IPIP Mesh and with 44Net Connect.
To encapsulate a packet is to take a full packet, including its header, and wrap it inside another packet with its own header. The outer packet is what gets sent across the network, and the inner packet is the payload of the outer packet.
The additional headers added by encapsulation reduce the size of the final payload.
For example, a simple IP-in-IP tunnel adds another 20-byte IPv4 header:
+------------------------------------------------------------------------+
| Outer IPv4 Header | Inner IPv4 Header |         Inner Payload          |
+------------------------------------------------------------------------+
|<--- 20 bytes ---->|<--- 20 bytes ---->|<--------- 1460 bytes --------->|
                    |<---------- 1480 byte inner IPv4 packet ----------->|
|<------------------------ 1500 byte outer packet ---------------------->|
If the inner packet itself is IPv4 carrying TCP:
+------------------------------------------------------------------------+
| Outer IPv4 Header | Inner IPv4 Header | TCP Header |    TCP Payload    |
+------------------------------------------------------------------------+
|<--- 20 bytes ---->|<--- 20 bytes ---->|<-- 20 b -->|<-- 1440 bytes --->|
                                                     |<- TCP App data -->|
                                        |<-- 1460 byte inner payload --->|
                    |<---------- 1480 byte inner IPv4 packet ----------->|
|<------------------------ 1500 byte outer packet ---------------------->|
Add more layers and the usable payload shrinks again.
Depending on the upstream network, there may be many such layers. Providers may carry customer IPv4 inside IPv6. VPNs add their own headers. Other tunnels may add still more. A packet sent from a ham's distant remote station may be wrapped several times before it reaches the public Internet. Ideally, a service provider's encapsulation does not reduce the customer's effective MTU, but in practice that is not always the case.
As a result, by the time all the intermediary headers are added, the upstream path may only be able to carry something like 1420, 1400, or even 1380-byte packets.
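As a rough sketch, the stacked overheads can simply be subtracted in order. The layer list and numbers here are illustrative rather than a description of any particular provider; the 60-byte figure is a typical WireGuard-over-IPv4 overhead:

```shell
# Effective inner MTU after stacking tunnel layers on a 1500-byte base path.
# Overheads are illustrative; real values depend on the exact protocols used.
BASE=1500
WG=60     # typical WireGuard-over-IPv4 overhead (outer IP + UDP + WireGuard)
IPIP=20   # extra IPv4 header for an IP-in-IP tunnel

inner=$((BASE - WG - IPIP))
echo "usable inner MTU: $inner"   # 1420
```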
MTU, path MTU, and MSS
MTU, path MTU, and MSS are distinct concepts related to the size of a packet as it is transmitted across a network.
An interface MTU is a parameter set on a local interface that tells the system the size of the largest packet the interface should send.
The path MTU is the size of the largest packet that can be sent intact from one endpoint to another, considering all the links and devices in between.
A general practical approach is to measure the path MTU between endpoints and then set local systems to use an interface MTU that fits within that constraint.
For example, if testing shows a path MTU of 1380 bytes, then setting an interface MTU higher than 1380 may lead to dropped packets. Setting it lower than 1380 ensures packets are never too large, but wastes some of the path's capacity. Setting the interface MTU to exactly 1380 sizes packets to fit the path as fully as possible.
MSS is a TCP parameter that tells the far end of a connection the largest TCP payload the local end is willing to receive. If the real path includes unseen overhead, a system may offer an MSS larger than the path can actually carry.
For example, if the MTU is 1380, the IPv4 header is 20 bytes, and the TCP header is 20 bytes, then the maximum TCP payload that fits within the path MTU is 1380 - 20 - 20 = 1340 bytes. If the MSS is set higher than 1340, TCP segments will be too large and may be dropped. If it is set lower than 1340, segments will always fit, but each packet leaves some of the path's capacity unused. Sizing the MSS to exactly 1340 lets TCP segments fill the packets as fully as possible.
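The same arithmetic in script form, assuming plain IPv4 and TCP headers with no options:

```shell
# Derive a safe TCP MSS from a measured path MTU:
PMTU=1380
IP_HDR=20; TCP_HDR=20
MSS=$((PMTU - IP_HDR - TCP_HDR))
echo "MSS: $MSS"   # 1340
```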
MTU can affect MSS, but MSS does not affect MTU. MSS is generally set automatically by the system based on the MTU of the local interface. Reducing MSS can help ensure TCP segments fit within the IP packet, but it doesn't change the MTU itself. If the MTU is too large, reducing the MSS will improve TCP performance but won't fix non-TCP traffic like UDP or ICMP.
Fragmentation and how systems avoid it
If a router receives a packet that is larger than the next hop can carry, dropping it is not the only option. If fragmentation is allowed, the packet may be split into smaller fragments.
Fragmentation can help packets get through without being dropped, but it has costs. Each fragment needs its own header, the receiver has to reassemble them, and some devices handle fragments poorly. Avoiding fragmentation whenever possible is generally preferred.
Managing fragmentation
There are several mechanisms to manage fragmentation and avoid it when possible.
Path MTU Discovery (PMTUD)
If a packet is too large for a router along the path, the router can drop it and send back an ICMP "Fragmentation Needed" message, indicating the maximum size that can be sent without fragmentation. This allows the sender to learn the real path MTU and adjust its packet sizes accordingly.
Packetization Layer PMTUD
Similar to PMTUD, but implemented at the transport layer rather than relying on ICMP messages. This allows the transport protocol to probe for usable sizes and adjust its segment sizes without depending on ICMP feedback.
TCP MSS selection
When a TCP connection is established, each side announces its MSS in the SYN packets. If the MSS is set correctly based on the local MTU, then both ends will send TCP segments that fit within the path MTU, avoiding fragmentation for TCP traffic.
MSS clamping
If the MSS is still too large, a device along the path can rewrite the MSS value in TCP SYN packets on the fly, effectively "clamping" it to a smaller value. This is a common workaround when PMTUD is unreliable or when only TCP parameters can be controlled.
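On a Linux router, clamping is commonly done with iptables. A sketch (the fixed value 1340 matches the 1380 path MTU example earlier; adjust for your path):

```shell
# Clamp the MSS in forwarded TCP SYN packets to fit the outgoing route's MTU:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu

# Or clamp to an explicit value instead:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --set-mss 1340
```

The `--clamp-mss-to-pmtu` form derives the value from the route automatically, which is usually preferable when the device knows the tunnel MTU.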
Don't Fragment (DF) bit
If fragmentation is not permissible, as with some application protocols, the sender can set the "Don't Fragment" (DF) bit in the IPv4 header. If the packet is too large for a router along the path, the router will drop it and send back an ICMP "Fragmentation Needed" message, indicating the maximum size that can be sent without fragmentation (PMTUD).
ICMP - Internet Control Message Protocol
ICMP is used for error messages and operational information. If ICMP is filtered, if a host doesn't reply with "Fragmentation Needed" messages, or if a tunnel is mis-sized, these mechanisms may fail, and the result is the black-hole MTU problem: small packets work, larger ones disappear.
Testing path MTU with ping
One way to test the path MTU is to send ICMP echo ("ping") requests with the DF bit set and gradually increase the payload size until packets stop getting through.
Note that the ping payload size is not the same as the total packet size. The total packet size includes the IPv4 header and the ICMP header on top of the ping payload, so you have to add them all up to get the actual packet size on the wire.
For example, to test whether a path can carry a 1500-byte packet, you need to set the ping payload size to 1500 minus the headers:
Testing a 1500-byte path MTU with ping:
- Subtract 20 bytes for the IPv4 header
- Subtract 8 bytes for the ICMP header
- Ping payload size = 1500 - 20 - 8 = 1472 bytes
To evaluate the results, add the headers back to the largest working payload size. For example, if 1472 bytes works but 1473 bytes fails, the path MTU is likely 1472 + 20 + 8 = 1500 bytes.
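The payload/packet conversion in script form, using the same numbers as above:

```shell
# Convert between ICMP ping payload size and on-the-wire IPv4 packet size:
IP_HDR=20; ICMP_HDR=8

payload=1472
packet=$((payload + IP_HDR + ICMP_HDR))
echo "packet size on the wire: $packet"   # 1500

mtu=1500
probe_payload=$((mtu - IP_HDR - ICMP_HDR))
echo "ping payload to test it: $probe_payload"   # 1472
```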
macOS
On macOS, `ping -D` sets DF and `-s` sets the ICMP payload size.
Examples:
# Test whether 1500-byte packets can get through:
ping -D -s 1472 1.1.1.1

# Test whether 1428-byte packets can get through:
ping -D -s 1400 1.1.1.1
If `1472` works reliably, the path is carrying a full 1500-byte IPv4 packet. If it fails but a smaller size works, reduce the payload until you find the largest working value, then add 28 to estimate the path MTU.
Linux
On Debian and most Linux systems using `iputils`, `-M do` sets DF behavior and `-s` sets the ICMP payload size.
Examples:
# Test whether 1500-byte packets can get through:
ping -M do -s 1472 1.1.1.1

# Test whether 1428-byte packets can get through:
ping -M do -s 1400 1.1.1.1
Again, add 28 bytes to the largest working payload to estimate the usable IPv4 path MTU.
If it helps, here is a script that can quickly find the largest working payload size by binary search:
https://git.ampr.org/johnburwell/find-mtu
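For reference, the core of such a binary search looks like the sketch below. The `probe` function here is a simulated stand-in that mimics a 1380-byte path MTU so the logic can be seen in isolation; a real probe would shell out to `ping` as shown in the comment:

```shell
#!/bin/sh
# Binary-search the largest ping payload that gets through (sketch).
# probe() below is a simulated stand-in: payloads up to 1352 "succeed",
# mimicking a 1380-byte path MTU. A real probe would look something like:
#   probe() { ping -M do -c 1 -s "$1" 1.1.1.1 >/dev/null 2>&1; }
probe() { [ "$1" -le 1352 ]; }

lo=0; hi=1472    # payload search range for a 1500-byte Ethernet ceiling
while [ "$lo" -lt "$hi" ]; do
    mid=$(( (lo + hi + 1) / 2 ))
    if probe "$mid"; then lo=$mid; else hi=$((mid - 1)); fi
done
echo "largest working payload: $lo (estimated path MTU: $((lo + 28)))"
```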
Choosing the right fix
People often reach for MSS clamping when the real issue is MTU, or lower the MTU everywhere when they only need to keep TCP sized correctly across one segment. Both approaches can work, but they are not interchangeable.
General rules:
- Adjust MTU when a link, tunnel, or interface genuinely cannot carry full-size packets because of encapsulation or provider constraints.
- Lowering MTU is the more fundamental fix when the path is constrained. It affects all IPv4 traffic, not just TCP. That makes MTU the usual parameter to adjust when a tunnel or link has known overhead.
- Use MSS clamping when TCP flows need help staying below a known path MTU, especially when endpoint PMTUD is unreliable or when MSS is the only parameter you can adjust. MSS clamping affects only TCP, usually by rewriting the MSS announced in SYN packets so both ends send smaller TCP segments.
- But MSS clamping can be misleading if it is used as a substitute for correct MTU sizing. It works only for TCP, it applies independently in each direction, and it helps only new sessions that negotiate through the clamping device. Web browsing or SSH may appear to work while UDP, ICMP, or tunnelled traffic still breaks.
- In general, you want each layer of encapsulation to fit within the next layer's MTU. That may mean setting MTU in more than one place.
- Don't overlook other factors that may impede PMTUD. Common firewalls often block ICMP by default, which breaks the normal feedback loop that would otherwise help handle this automatically.
- Clamp TCP MSS at the tunnel edge if needed (or if the only available control), but do not treat that as a full substitute for correct MTU sizing.
- Check every layer of encapsulation you can, not just the innermost interface, and try to make adjustments only where needed.
- Verify both directions and both sides of the gateway (44Net to 44Net, 44Net to Internet, Internet to 44Net, etc.), because asymmetric paths can hide the real problem, especially if MSS clamping is applied in only one direction.
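As a concrete example of the MTU-adjustment case, the interface MTU can be set on Linux with the `ip` tool. The interface name `wg0` and the value 1380 are illustrative; substitute your tunnel interface and measured path MTU:

```shell
# Lower the MTU on a tunnel interface:
ip link set dev wg0 mtu 1380

# Confirm the change:
ip link show dev wg0
```

Note that this change does not persist across reboots on its own; make the equivalent setting in your distribution's or tunnel software's configuration as well.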
Why 44Net Connect suggests 1380
44Net Connect produces configurations that suggest a tunnel MTU of 1380.
This number was determined by testing across a variety of real-world links representing common 44Net Connect use cases. For example, on a MacBook connected through a GL.iNet cellular hotspot in its default configuration, with WireGuard running on the Mac, pings to public hosts over the tunnel topped out at 1352 bytes of payload, corresponding to a 1380-byte MTU.
mtu=1350 payload=1322 ok
mtu=1425 payload=1397 fail
mtu=1387 payload=1359 fail
mtu=1368 payload=1340 ok
mtu=1377 payload=1349 ok
mtu=1382 payload=1354 fail
mtu=1379 payload=1351 ok
mtu=1380 payload=1352 ok
mtu=1381 payload=1353 fail

highest working MTU: 1380 (payload 1352)
Connect is designed to accommodate links prone to higher overhead like cellular and satellite. To increase the chance that a new user can get connected even in such environments without having to debug path MTU issues immediately, the recommended value of 1380 is intentionally conservative. Users who know their PMTU can raise it accordingly.
Fun fact
In the packet radio era, every byte of overhead consumed limited airtime over slow links. Efficiently aligning IP packets with AX.25 transmission frames was part of designing a usable link with minimal retransmissions.
The same tradeoffs still exist today. Faster CPUs, better NICs, and high-speed Internet access often mask the cost, but every extra header still consumes space that could otherwise carry useful data, and low-speed or heavily encapsulated paths can surface that overhead immediately.