ss stands for "socket statistics". It’s a Linux utility used to inspect socket connections and their detailed TCP/IP kernel-level statistics, much like the older netstat, but faster, more detailed, and more modern.
It queries live kernel data primarily over Netlink (the sock_diag interface), falling back to the classic procfs sources that netstat parses:
- /proc/net/tcp
- /proc/net/udp
- /proc/net/unix
- /proc/net/sockstat
and presents the data in human-readable, structured form.
How ss Works Internally
- It queries the kernel’s socket tables using Netlink sockets (the NETLINK_SOCK_DIAG family, which superseded NETLINK_INET_DIAG).
- This means it gets real-time data, including per-connection metrics from the TCP stack.
- The kernel collects metrics like RTT, cwnd, retransmissions, and pacing rate — these are then exposed via ss -i or ss -ti.
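When Netlink is unavailable, tools fall back to parsing /proc/net/tcp, where each endpoint is a hex-encoded address:port pair. A minimal sketch of decoding that format (decode_endpoint is a hypothetical helper, and it assumes a little-endian machine such as x86/ARM64, where the printed hex appears byte-reversed relative to the dotted quad):

```python
import socket
import struct

def decode_endpoint(hex_endpoint: str) -> tuple[str, int]:
    """Decode one '/proc/net/tcp' address field, e.g. '0100007F:0050'.

    The IP is a 32-bit value printed as hex in kernel (host) byte order;
    on little-endian machines the four bytes therefore appear reversed.
    """
    ip_hex, port_hex = hex_endpoint.split(":")
    # Re-pack the integer little-endian to recover the network-order bytes,
    # then format as a dotted quad.
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)

# '0100007F' is 127.0.0.1 stored little-endian; port 0x0050 is 80.
print(decode_endpoint("0100007F:0050"))  # -> ('127.0.0.1', 80)
```

This is exactly the decoding netstat has to do line by line, which is part of why the Netlink path used by ss is faster.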
| Use Case | Description |
|---|---|
| Connection listing | Show all TCP/UDP connections (like netstat -anp) |
| Performance debugging | Show RTT, cwnd, retransmissions, pacing rate, etc. |
| Congestion control analysis | Identify which TCP algorithm (CUBIC, BBR, Reno) is in use |
| Socket buffer tuning | Observe send/receive queue buildup and detect bottlenecks |
| Detect retransmissions | Identify packet loss, reordering, DSACKs, or retrans timers |
| TLS offload visibility | Check whether kernel TLS (tcp-ulp-tls) or kTLS is active |
| Protocol filtering | Filter by protocol, port, address, or state |
Example: run ss -ti against an active connection and per-socket fields such as rtt, cwnd, ssthresh, and retrans appear.
This output is not user-space bookkeeping — it’s kernel-internal TCP socket telemetry made visible to you. Each field is live, pulled from the kernel’s struct tcp_sock and related substructures.
⚡ Key Advantage
Unlike packet captures (e.g., tcpdump), ss doesn’t require sniffing packets or root network privileges. It reads live socket state directly from the kernel, meaning:
- Zero overhead on traffic
- Instant stats per connection
- Works even on encrypted (TLS) connections
Why ss Matters
A common starting point: you are troubleshooting slow network connections, or you need a faster alternative to netstat on a busy server.
What is ss? A utility to dump socket statistics, and the modern replacement for netstat.
Why use it?
- Speed: It's significantly faster than netstat, especially on systems with many active connections, because it queries the kernel's TCP/IP stack directly over Netlink rather than reading and parsing the text files under /proc/net/ (tcp, udp, etc.).
- Features: Provides more detailed information on TCP states, socket options, and kernel-level metrics.
Basic Usage: Getting Started
- The Simplest Command:
- ss (Displays all open non-listening sockets, a good starting point).
- The Essentials (The Go-To Command):
- ss -tunapl
- -t: TCP sockets
- -u: UDP sockets
- -n: Numeric ports (prevents DNS lookups, making it fast)
- -a: All sockets (listening and non-listening)
- -p: Show the Process ID (PID) and name (process that opened the socket)
- -l: Only show listening sockets (mutually exclusive with -a; use one or the other)
Focusing Your View: Filtering and Specific Stats
- Show Only Listening Sockets:
- ss -lntu (Listen, numeric, TCP/UDP)
- Display Socket Summary (Quick Health Check):
- ss -s (Shows a summary count of different socket states, like established, closed, etc.)
- Filtering by Protocol and State:
- All ESTABLISHED TCP connections: ss -t state established
- All TIME-WAIT TCP connections: ss -t state time-wait (useful for identifying potential resource exhaustion from many closed connections)
- Specific Port: ss -nt 'sport = :80' (source port 80; quote the filter so the shell doesn't interpret the characters)
- Specific IP/Port: ss -nt dst 192.168.1.1:443 (Destination IP and port)
Advanced Debugging: Unlocking Deeper Information
- Show Timer Information:
- ss -o (Shows the timer state of a connection, like keepalive, on, or off, and when the next event will occur. Crucial for understanding idle connections.)
- Displaying TCP Information (The Good Stuff):
- ss -tnpmi
- -m: Show Memory usage of the socket.
- -i: Show Internal TCP information (congestion control algorithm, RTT, retransmits, window sizes).
- Example Debugging Output:
- Breaking down the output of the -i flag:
- wscale:7,7: Window scaling factors (send and receive sides)
- rto:204: Retransmission Timeout (in ms)
- rtt:2.043/4.086: Round-Trip Time (average/variance)
- cwnd:10: Congestion Window size (in segments)
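One detail worth calling out in the sample fields above: rtt:2.043/4.086 with rto:204 looks inconsistent with the textbook formula RTO = SRTT + 4×RTTVAR (which would give ~18 ms), because Linux clamps the result to a minimum of 200 ms (TCP_RTO_MIN). A rough sketch of this (estimate_rto is a hypothetical helper; the kernel works in jiffies and adds small adjustments, so this is an approximation):

```python
def estimate_rto(srtt_ms: float, rttvar_ms: float,
                 rto_min_ms: float = 200.0) -> float:
    """RFC 6298 RTO estimate with Linux's TCP_RTO_MIN floor (200 ms).

    The real kernel computation is jiffy-based and slightly more involved,
    so treat this as an approximation of the reported rto value.
    """
    return max(srtt_ms + 4 * rttvar_ms, rto_min_ms)

# rtt:2.043/4.086 from the example: the raw formula gives only ~18.4 ms,
# so the 200 ms floor dominates -- which is why ss reports rto:204
# (floor plus kernel rounding), not ~18 ms.
print(estimate_rto(2.043, 4.086))  # -> 200.0
```

On high-latency paths the formula term dominates instead, e.g. SRTT 100 ms with RTTVAR 30 ms yields 220 ms.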
ss vs. netstat: Why the Switch?
- Performance: netstat must traverse and parse the /proc/net files, which is slow on busy systems; ss uses the more efficient Netlink interface.
- Data Richness: ss exposes richer kernel-level data related to TCP congestion, memory, and timers directly, making advanced debugging easier.
- Recommendation: prefer ss over netstat on modern Linux systems; netstat (net-tools) is considered legacy on most distributions.
Why ss is Better Than netstat
| Feature | netstat | ss |
|---|---|---|
| Speed | Slow (reads from /proc) | Fast (Netlink interface) |
| IPv6 support | Partial | Full |
| Per-socket TCP info | Limited | Complete (ss -i) |
| Congestion control info | No | Yes |
| Queue lengths (Recv-Q / Send-Q) | Yes | Yes |
| Process mapping (-p) | Yes | Yes |
| Filtering capabilities | Basic | Advanced (Berkeley Packet Filter syntax) |
Common Commands :
| Command | Description |
|---|---|
| ss -s | Summary of socket usage by state and type |
| ss -t | List TCP sockets only |
| ss -u | List UDP sockets only |
| ss -l | Show listening sockets |
| ss -n | Don’t resolve hostnames or ports |
| ss -p | Show process owning each socket |
| ss -i | Show internal TCP info (RTT, cwnd, ssthresh, etc.) |
| ss -ti | Show very detailed TCP internal info (the advanced stats we’ve been analyzing) |
| ss -o state established | Show established sockets with timers |
| ss -4, ss -6 | Show IPv4 or IPv6 sockets only |
| ss -a | Show all (listening + non-listening) sockets |
The sections below summarize every major TCP metric from ss -ti, the symptom patterns, their likely causes, and what a healthy baseline looks like.
How to Use These Metrics
When analyzing ss -ti output:
- Start with rtt, retrans, and cwnd → detect congestion.
- Then check send-Q, delivery_rate, and snd_wnd → app or flow control issues.
- Review rcv_space, rcv_ooopack, and lost → receiver and link quality.
- Confirm pacing_rate and delivery_rate alignment → transmission health.
- Verify congestion control algorithm suits your environment (CUBIC/BBR).
What follows is a TCP performance triage and tuning guide built for real-time debugging using ss -ti, netstat, or /proc/net/tcp metrics.
Quick Reference – What to Watch in ss -ti
| Metric | Normal Range | Problem Indicator |
|---|---|---|
| rtt | <50 ms (LAN), <150 ms (WAN) | rising continuously |
| rttvar | <10 ms | >30 ms means jitter |
| cwnd | grows steadily | stuck or oscillates |
| retrans | <0.1% | >1% indicates loss |
| send-Q | small, fluctuating | growing → app/ACK delay |
| delivery_rate | near link speed | <50% of link → bottleneck |
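The quick-reference thresholds above can be encoded as a small triage check (triage is a hypothetical helper; the cutoffs mirror the table and are rules of thumb, not kernel-defined limits):

```python
def triage(rtt_ms, rttvar_ms, retrans_pct, delivery_frac_of_link, wan=False):
    """Flag ss -ti readings against the rough thresholds in the table.

    Rules of thumb used: RTT < 50 ms LAN / 150 ms WAN, rttvar < 30 ms,
    retransmissions < 1%, delivery_rate at least 50% of link speed.
    """
    problems = []
    if rtt_ms > (150 if wan else 50):
        problems.append("high rtt")
    if rttvar_ms > 30:
        problems.append("jitter")
    if retrans_pct > 1.0:
        problems.append("loss")
    if delivery_frac_of_link < 0.5:
        problems.append("bottleneck")
    return problems or ["healthy"]

print(triage(rtt_ms=12, rttvar_ms=3, retrans_pct=0.05,
             delivery_frac_of_link=0.9))   # -> ['healthy']
print(triage(rtt_ms=180, rttvar_ms=45, retrans_pct=2.5,
             delivery_frac_of_link=0.3, wan=True))
# -> ['high rtt', 'jitter', 'loss', 'bottleneck']
```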
Now let’s go through ss -ti output parameter by parameter:
| Category | Field | Description | Default Unit | Example Value | Use / Functionality |
|---|---|---|---|---|---|
| Connection Info | State | TCP connection state | N/A | ESTAB | Shows connection lifecycle (ESTAB, LISTEN, TIME-WAIT) |
| | Recv-Q | Bytes in receive queue (received but not yet read by the application) | bytes | 0 | Detects receiver buffer backlog / app slowness |
| | Send-Q | Bytes in send queue (sent but not yet acknowledged by the remote host) | bytes | 4 | Detects sender congestion or buffer saturation |
| | Local Address:Port | Source IP and port of the connection | N/A | 192.168.50.155:49616 | Identifies local endpoint |
| | Peer Address:Port | Destination IP and port of the connection | N/A | 192.168.50.208:11222 | Identifies remote endpoint |
| Congestion Control & Window Scaling | Congestion Control | Algorithm (CUBIC, Reno, BBR) | N/A | cubic | Determines congestion control behavior |
| | wscale | Send/receive window scale factors; the first number is the local (send) side, the second the remote (receive) side | exponent | 7,2 | Determines max window for high-BW links |
| | cwnd | Congestion window: the maximum number of segments the local host may have in flight before waiting for an ACK | MSS | 7 | Indicates congestion; small → throttling |
| | ssthresh | Slow start threshold: the cwnd size at which congestion avoidance begins | MSS / bytes | 7 | Switch between slow start & congestion avoidance |
| | snd_wnd | Peer’s advertised receive window: the maximum amount of unacknowledged data the local host can send | bytes | 1,103,872 | Flow control; avoid overwhelming receiver |
| | rcv_ssthresh | Receiver slow start threshold; limits the advertised receive window coming out of slow start | bytes | 66,607 | Used in slow start; impacts congestion control |
| | max_window | Maximum allowed window | bytes | 16,777,216 | Hard limit for TCP window scaling |
| RTT, Timers & Pacing | rtt | Smoothed RTT / mean deviation: the first value is SRTT, the second is the RTT variance | ms | 1.499/0.962 | Latency measurement; impacts RTO & congestion control |
| | minrtt | Minimum observed RTT | ms | 0.189 | Baseline latency; detects jitter |
| | rto | Retransmission timeout: if no ACK arrives within this time, the segment is retransmitted | ms | 202 | Timeout for retransmissions; tuning & loss detection |
| | ato | Delayed ACK timeout: the time before the stack sends a standalone ACK | ms | 40 | Determines when delayed ACKs are sent |
| | lastsnd | Time since the last data segment was sent | ms | 3444 | Debugging stalled connections |
| | lastrcv | Time since the last segment was received | ms | 3409 | Debugging stalled connections |
| | lastack | Time since the last ACK was received | ms | 3409 | Check acknowledgment lag |
| | busy | Total time the connection has been busy (sending or waiting for an ACK) | ms | 801 | Measures socket utilization; idle vs active |
| | pacing_rate | Rate at which the kernel paces packets out (rate-limiting the output) | bps | 65.4 Mbps | Monitor throughput; useful for pacing-based CC |
| | send | Sending rate the TCP stack calculates it is achieving | bps | 54.5 Mbps | Real-time send throughput; app-limited vs network-limited |
| | delivery_rate | Observed rate at which the network delivers data (usually a more accurate throughput measure than send) | bps | 20.4 Mbps | Effective throughput; low value → packet loss |
| | delivered | Data segments successfully delivered to the receiver | count | 514 | Shows actual data transfer progress |
| | timer | Retransmit / delayed ACK timer info | ms / struct | N/A | Debugging retransmit scheduling and delayed ACK |
| MSS / PMTU / Segments | mss | Maximum segment size: the largest payload a TCP segment can carry | bytes | 1460 | Determines segmentation; affects throughput |
| | pmtu | Path MTU: the largest packet size that can traverse the entire path without fragmentation | bytes | 1500 | Detects MTU restrictions to avoid fragmentation |
| | rcvmss | MSS the remote host reported it is willing to receive | bytes | 536 | Receiver MSS; affects sender segment sizing |
| | advmss | MSS the local host advertised to the peer (usually PMTU minus IP/TCP header size) | bytes | 1460 | Determines max payload per segment |
| | segs_out | Total segments sent (data, retransmissions, ACKs, etc.) | count | 831 | Monitor traffic volume |
| | segs_in | Total segments received | count | 589 | Monitor inbound traffic |
| | data_segs_out | Segments sent that contained actual user data | count | 514 | Separate data from control segments |
| | data_segs_in | Segments received that contained actual user data | count | 378 | Separate data from control segments |
| Byte Counters | bytes_sent | Total payload bytes sent on the connection | bytes | 548,587 | Throughput measurement & retransmissions |
| | bytes_retrans | Total bytes retransmitted (sent again due to loss) | bytes | 3,547 | Indicates packet loss and network issues |
| | bytes_acked | Total bytes acknowledged by the remote host | bytes | 545,041 | Confirms successful delivery |
| | bytes_received | Total bytes received on the connection | bytes | 4,240 | Confirms data reception |
| Reliability / Loss / Reordering | lost | Segments marked lost | count | 4 | Detect packet drops & network reliability |
| | unacked | Segments sent but not yet acknowledged | count | 4 | In-flight data; too high → congestion |
| | notsent | Data queued for sending but not yet sent (often due to congestion control limits) | bytes | 66,144 | Monitor buffer saturation |
| | reordering | Estimated count of segments that arrived out of sequence | count | 20 | Indicates packet reordering in network |
| | rcv_ooopack | Out-of-order segments received | count | 24 | May affect ACK behavior & retransmissions |
| | dsack_dups | Times a Duplicate Selective Acknowledgment (DSACK) was received; indicates out-of-order arrival or loss | count | 3 | Indicates duplicate delivery / network issues |
| | retrans | Pending / total retransmissions: the first number is segments currently in the retransmission queue, the second is total retransmission attempts | count | 0/3930 | Indicates retransmit events & potential packet loss |
| Socket Buffer / Memory | rcv_space | Current space available in the local receive buffer | bytes | 14,600 | Prevents buffer overflow; flow control |
| | skmem | Total socket memory allocated | bytes | 1,000,000 | Overall socket memory usage |
| | rmem | Receive memory usage | bytes | 60,000 | Tracks receive buffer utilization |
| | wmem | Send memory usage | bytes | 90,000 | Tracks send buffer utilization |
| | oom | Out-of-memory events | count | 0 | Indicates memory exhaustion on socket |
| Misc / Kernel Internal | rxconf | RX queue / offload config | N/A | none | Debug RX offload / queue configuration |
| | txconf | TX queue / offload config | N/A | none | Debug TX offload / queue configuration |
| | tcp_ulp | Upper layer protocol (TLS offload) | N/A | tcp-ulp-tls | Shows TLS or other offload in use |
| | acked | Segments acknowledged | count | 514 | Confirms ACKed segments |
| | packets_out | Segments in flight | count | 4 | Indicates congestion / in-flight data |
| | sacked_out | SACKed segments pending ACK | count | 2 | Tracks SACKed segments for loss recovery |
| | fackets | Forward ACK segments | count | 1 | Tracks advanced ACK behavior for TCP reliability |
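Since ss -ti emits these fields as space-separated key:value tokens, a small parser makes them easy to script against (parse_ss_fields and SAMPLE are illustrative; the sample line is the one analyzed below, and values are kept as strings because their formats vary):

```python
SAMPLE = ("cubic wscale:6,7 rto:670 backoff:1 rtt:134.721/0.928 ato:40 "
          "mss:1248 pmtu:1500 rcvmss:536 advmss:1460 cwnd:1 ssthresh:2")

def parse_ss_fields(line: str) -> dict:
    """Turn 'key:value' tokens from an ss -ti detail line into a dict.

    Bare tokens (like the congestion-control name 'cubic') are stored
    under 'cc'. Values stay as strings since formats vary
    (134.721/0.928, 6,7, plain integers).
    """
    fields = {}
    for token in line.split():
        if ":" in token:
            key, value = token.split(":", 1)
            fields[key] = value
        else:
            fields.setdefault("cc", token)
    return fields

info = parse_ss_fields(SAMPLE)
print(info["cc"], info["cwnd"], info["rtt"])  # -> cubic 1 134.721/0.928
```

In practice you would feed it lines from `ss -tin` output; parsing flag tokens without a colon (e.g. app_limited) would need a small extension.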
Connection Info
ESTAB 0 71136 172.22.111.25:64413 172.25.21.1:34479
| Field | Meaning |
|---|---|
| ESTAB | TCP connection state (ESTABLISHED). |
| 0 | Bytes in the receive queue (Recv-Q): data received but not yet read by the application. |
| 71136 | Bytes in the send queue (Send-Q): data sent but not yet acknowledged by the peer. |
| 172.22.111.25:64413 → 172.25.21.1:34479 | Source IP:port → Destination IP:port. |
Congestion Control / TCP Parameters
cubic wscale:6,7 rto:670 backoff:1 rtt:134.721/0.928 ato:40 mss:1248 pmtu:1500 rcvmss:536 advmss:1460 cwnd:1 ssthresh:2
- cubic
- Meaning: Congestion control algorithm used by this TCP connection.
- CUBIC: Default on modern Linux kernels. Optimized for high-speed, long-latency networks.
- Function: Controls how the congestion window (cwnd) grows/shrinks in response to network conditions.
- Effect: Determines throughput behavior during slow start, congestion avoidance, and recovery.
- Example: cubic → connection uses CUBIC algorithm rules for cwnd growth.
| Algorithm | Description | Typical Behavior |
|---|---|---|
| CUBIC (default on modern Linux) | Non-linear congestion window growth optimized for high-BDP (Bandwidth-Delay Product) networks. | Grows slowly after congestion, then faster as time since last loss increases. Excellent for long fat networks (LFNs). |
| Reno | Classic TCP algorithm; linear increase, multiplicative decrease. | Simple and fair but inefficient on high-latency or high-bandwidth paths. |
| BBR (Bottleneck Bandwidth and RTT) | Estimates available bandwidth and minimum RTT to maximize throughput and minimize queueing delay. | Keeps queues short, high throughput, very responsive; often faster than CUBIC on clean links. |
| BBRv2 | Improved version of BBR with fairness improvements. | More fair against Reno/CUBIC, better coexistence. |
| HighSpeed | Modified Reno for large bandwidth-delay networks. | Aggressive window growth, useful for data centers. |
| Vegas | RTT-based algorithm (detects congestion before loss). | Keeps latency low, but less throughput on lossy links. |
| Westwood+ | Estimates bandwidth using ACK rate after packet loss. | Good for wireless/mobile networks. |
| BIC | Predecessor of CUBIC; binary search window growth. | Replaced by CUBIC. |
- wscale:6,7
- Meaning: TCP Window Scale factor, applied to the advertised window to allow >64KB buffers.
- Format: wscale:<send scale>,<receive scale>
- 6 → sender window scale = 2⁶ = 64
- 7 → receiver window scale = 2⁷ = 128
- Effect: The actual window is TCP window field × 2^wscale.
- Why: Allows TCP to efficiently use large buffers on high-bandwidth, high-latency links.
- Example: rwnd of 501 in tcpdump × 128 → 64128 bytes effective receive window.
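The window-scaling arithmetic above is simple enough to verify directly (effective_window is a hypothetical helper, not an ss feature):

```python
def effective_window(advertised_window: int, wscale: int) -> int:
    """Actual window = the 16-bit TCP window field shifted by wscale,
    i.e. advertised value multiplied by 2**wscale."""
    return advertised_window * (2 ** wscale)

# tcpdump shows 'win 501'; with the receive scale of 7 (2**7 = 128)
# from wscale:6,7, the effective receive window is:
print(effective_window(501, 7))  # -> 64128
```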
- rto:670
- Meaning: Retransmission Timeout (ms).
- Units: milliseconds
- Function: Time TCP waits before retransmitting an unacknowledged segment.
- Notes: RTO is adaptive, based on measured RTT + deviation (RTTvar).
- Effect: Too small → spurious retransmits; too high → slow recovery.
- Example: rto:670 → retransmit after 670 ms if no ACK received.
- backoff:1
- Meaning: RTO backoff factor due to repeated timeouts.
- Function: TCP doubles RTO for each retransmission timeout (exponential backoff).
- Effect: Prevents flooding the network when persistent packet loss occurs.
- Example: backoff:1 → RTO currently multiplied by 2¹ = 2x (from base RTO).
- Explanation: TCP maintains a base RTO (Retransmission Timeout), computed dynamically from the RTT and its variation: RTO = SRTT + 4 × RTTVAR
- Let’s say your base (computed) RTO = 200 ms.
- First timeout: if an ACK isn’t received within 200 ms, TCP assumes packet loss and retransmits the unacknowledged segment. At this point the connection has experienced one timeout, so backoff = 1 and TCP applies exponential backoff:
- Actual RTO = Base RTO × 2^backoff
- So: RTO = 200 ms × 2¹ = 400 ms
- That means after the first timeout, TCP will now wait 400 ms before retransmitting again if another ACK isn’t seen.
- Another timeout (backoff increases again): if there is still no ACK after retransmitting, TCP increases backoff again → backoff = 2. Now: RTO = 200 ms × 2² = 800 ms
- And if yet another timeout occurs: backoff = 3 → RTO = 200 ms × 2³ = 1600 ms
- This is called exponential backoff — it protects the network during persistent loss or congestion by spacing out retransmissions, avoiding overload when the path is already unstable.

| Event | Base RTO (ms) | Backoff | Effective RTO (ms) | Behavior |
|---|---|---|---|---|
| Initial transmission | 200 | 0 | 200 | Normal send |
| 1st timeout | 200 | 1 | 400 | 2× delay |
| 2nd timeout | 200 | 2 | 800 | 4× delay |
| 3rd timeout | 200 | 3 | 1600 | 8× delay |
| 4th timeout | 200 | 4 | 3200 | 16× delay |

- Resetting backoff: once a valid ACK is received and transmission stabilizes, backoff resets to 0 and RTO returns to its base (adaptive) value. backoff:1 means TCP has already experienced one retransmission timeout, so it has doubled the RTO to 2× its base value before trying again.
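The backoff progression can be reproduced in two lines (effective_rto is a hypothetical helper illustrating the doubling rule):

```python
def effective_rto(base_rto_ms: int, backoff: int) -> int:
    """Effective RTO = base RTO * 2**backoff (exponential backoff)."""
    return base_rto_ms * (2 ** backoff)

# Reproduces the progression for a 200 ms base RTO:
print([effective_rto(200, b) for b in range(5)])
# -> [200, 400, 800, 1600, 3200]
```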
- rtt:134.721/0.928
- Meaning: Measured Round Trip Time (RTT) and its variance.
- Units: milliseconds
- Format: <RTT> / <RTT variance>
- Effect: Used to compute adaptive RTO.
- Notes: High variance → more conservative RTO; low variance → faster retransmit.
- Example: rtt:134.721/0.928 → 134.721 ms average RTT, 0.928 ms variation.
- Explanation :
- TCP measures RTT (Round Trip Time) — how long it takes for a packet to go to the receiver and get an ACK back.
- It also tracks RTT variance (how much the RTT fluctuates).
- Then it calculates RTO (Retransmission Timeout) — the time TCP waits before retransmitting an unacknowledged packet.
- The formula (RFC 6298) is roughly: RTO = SRTT + 4 × RTTVAR
- SRTT → Smoothed RTT (average RTT)
- RTTVAR → Smoothed RTT variance
- Meaning of “High variance → more conservative RTO”:
If RTT values fluctuate a lot (network is unstable), then RTTVAR is high.
→ RTO becomes larger (since RTO = SRTT + 4×RTTVAR).
→ TCP waits longer before retransmitting.
✅ This prevents spurious retransmissions when ACKs are just delayed due to jitter.
- Meaning of “Low variance → faster retransmit”:
If RTT values are stable and consistent, RTTVAR is small.
→ RTO becomes closer to SRTT (smaller timeout).
→ TCP retransmits faster when a packet is really lost.
✅ This improves throughput and responsiveness on stable links.

| Case | SRTT (ms) | RTTVAR (ms) | RTO (ms) | Behavior |
|---|---|---|---|---|
| Stable network | 100 | 2 | 108 | Fast retransmit (tight timeout) |
| Unstable network | 100 | 30 | 220 | Slow retransmit (safe buffer) |

- In this connection: average RTT = 134.721 ms, RTT variance = 0.928 ms (very low → stable network).
→ RTO will be ≈ 134.721 + 4×0.928 ≈ 138.4 ms
→ TCP will retransmit quickly if needed, since the connection is steady.
- ato:40
- Meaning: ACK timeout (delayed ACK timer) in ms.
- Function: Max time TCP waits before sending an ACK (to combine ACKs).
- Effect: Reduces small packet overhead but increases latency for small packets.
- Example: ato:40 → delayed ACK timer is 40 ms.
- Explanation : ato stands for ACK Timeout — the delayed ACK timer value in milliseconds. It represents how long TCP will wait before sending an ACK (acknowledgment) if no outgoing data is ready to "piggyback" the ACK on.
- TCP can send an ACK immediately or delay it slightly to reduce overhead.
The delayed ACK algorithm waits up to ato milliseconds, hoping that: More data arrives to acknowledge together, or The application sends some data back (so ACK can be piggybacked).This saves bandwidth and reduces small packets. - The TCP delayed ACK timer is 40 milliseconds.
- If the receiver gets a segment but has no data to send back, it will wait up to 40 ms before sending the ACK.
- If another packet arrives within that time, both can be acknowledged together.
- Typical Linux values: the delayed-ACK minimum and maximum (TCP_DELACK_MIN ≈ 40 ms, TCP_DELACK_MAX ≈ 200 ms) are compile-time constants in mainline kernels rather than standard sysctls, though some patched kernels expose tunables for them.

| Kernel | Default ato | Notes |
|---|---|---|
| Older (2.6.x) | 200 ms | High latency for small packets |
| Modern (4.x–6.x) | 40 ms | Tuned for better responsiveness |
| Some distros | Adaptive (20–40 ms) | Based on RTT and application type |

- Smaller values improve interactivity (e.g., in RPC or HTTPS handshakes), but increase ACK traffic.
- mss:1248
- Meaning: Maximum Segment Size negotiated for this connection.
- Units: bytes
- Function: Maximum TCP payload per segment, excluding headers.
- Effect: Defines largest chunk TCP will send per segment; smaller MSS → more segments, more overhead.
- Example: mss:1248 → each TCP segment carries max 1248 bytes of data.
- pmtu:1500
- Meaning: Path MTU (Maximum Transmission Unit)
- Units: bytes
- Function: Maximum packet size that can traverse the path without fragmentation (IP + TCP headers included).
- Effect: Ensures packets aren’t dropped due to fragmentation.
- Example: pmtu:1500 → maximum full packet size = 1500 bytes.
- rcvmss:536
- Meaning: Receiver’s maximum segment size (TCP advertised).
- Function: Sender must not send more than this per segment to avoid overwhelming receiver.
- Example: rcvmss:536 → sender must limit segments to 536 bytes payload.
- advmss:1460
- Meaning: The MSS the local host advertised to the peer.
- Function: Tells the peer how large the segments it sends to us may be.
- Example: advmss:1460 → the local host can receive 1460 bytes of payload per segment.
| Field | Who Sent It | What It Means | Impact on Server |
|---|---|---|---|
| rcvmss:536 | Client advertised this in its SYN | “I (Client) can receive at most 536 bytes per TCP segment.” | ➜ Server must limit each outgoing segment to ≤ 536 bytes when sending data to this client. |
| advmss:1460 | Server advertised this in its own SYN-ACK | “I (Server) can receive up to 1460 bytes per TCP segment.” | ➜ Client can send up to 1460-byte segments to the Server. |

- It means:
  - You (server) can receive 1460-byte segments. ✅
  - You (server) must send smaller 536-byte segments to the client. ⚠️
- Possible reason for rcvmss:536: the client is behind a low-MTU link (e.g., PPPoE, VPN, mobile, or an older stack with a 576-byte MTU), so the client advertised MSS = 536 = 576 (MTU) − 40 (IPv4 + TCP headers).
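The MTU-to-MSS arithmetic above is easy to check (mss_for_mtu is a hypothetical helper; it assumes IPv4 and TCP headers without options, 20 bytes each):

```python
IPV4_HEADER = 20  # bytes, no options
TCP_HEADER = 20   # bytes, no options

def mss_for_mtu(mtu: int) -> int:
    """MSS advertised for a link MTU = MTU minus IPv4 + TCP headers."""
    return mtu - IPV4_HEADER - TCP_HEADER

print(mss_for_mtu(576))   # -> 536  (matches rcvmss:536, a 576-byte MTU path)
print(mss_for_mtu(1500))  # -> 1460 (matches advmss:1460, standard Ethernet)
```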
- cwnd:1
- Meaning: Congestion window, sender-side.
- Units: MSS (segments)
- Function: Max number of segments allowed in flight (unacknowledged) at any time.
- Effect: Limits send rate; grows according to congestion control.
- Example: cwnd:1 → only 1 MSS in flight (connection is likely in slow start or recovering from loss).
- ssthresh:2
- Meaning: Slow Start Threshold
- Units: MSS (segments)
- Function: Switch point from slow start (exponential growth) to congestion avoidance (linear growth).
- Effect: If cwnd < ssthresh → slow start; if cwnd ≥ ssthresh → congestion avoidance.
- Example: ssthresh:2 → after cwnd reaches 2 MSS, TCP will enter congestion avoidance.
✅ Explanation of the timeline:
- Starts with cwnd=1 MSS → slow start doubles cwnd each RTT.
- Reaches ssthresh=2 MSS → growth becomes linear (congestion avoidance).
- Packet loss occurs → cwnd collapses to 1 (backoff), RTO waits 670ms.
- Retransmission resumes, cwnd grows again.
- MSS / PMTU limit the max bytes per segment; combined with cwnd, it limits in-flight data.
- RTT controls how fast ACKs arrive, pacing cwnd growth.
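The slow-start / congestion-avoidance interplay described above can be sketched as an idealized per-RTT simulation (cwnd_growth is a hypothetical helper; it doubles below ssthresh, grows by one MSS per RTT above it, and omits loss handling entirely):

```python
def cwnd_growth(ssthresh: int, rtts: int, cwnd: int = 1) -> list[int]:
    """Idealized cwnd per RTT: exponential growth below ssthresh
    (slow start), then +1 MSS per RTT (congestion avoidance).
    Loss events and recovery are deliberately omitted."""
    history = [cwnd]
    for _ in range(rtts):
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
        history.append(cwnd)
    return history

# With ssthresh:2 as in this connection, growth turns linear almost
# immediately, which is why the window recovers so slowly after loss:
print(cwnd_growth(ssthresh=2, rtts=5))  # -> [1, 2, 3, 4, 5, 6]
```

Compare a healthier connection with ssthresh=8, where slow start reaches 8 MSS in three round trips before going linear.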
Data Counters
bytes_sent:23793598 bytes_retrans:5059976 bytes_acked:18728630 bytes_received:331 segs_out:19069 segs_in:14656 data_segs_out:19067 data_segs_in:2
- bytes_sent:23793598
- Meaning: Total number of payload bytes sent by this TCP connection since it was established.
- Units: Bytes (excluding TCP headers).
- Includes: Both successfully delivered bytes and retransmissions.
- Use: Shows total traffic volume originating from sender.
- Example: ~23.8 MB sent over this connection.
- bytes_retrans:5059976
- Meaning: Number of bytes retransmitted due to packet loss or timeout.
- Units: Bytes
- Why it happens:
- RTO (Retransmission Timeout) expired
- Triple duplicate ACK detected → fast retransmit
- Impact:
- Retransmissions indicate network congestion or packet loss
- Reduces effective throughput
- Directly affects cwnd (TCP reduces window on loss)
- Example: ~5.05 MB retransmitted → ~21% of total bytes sent. High value → poor network conditions.
- bytes_acked:18728630
- Meaning: Number of bytes successfully acknowledged by the receiver.
- Units: Bytes
- Use:
- Shows effective throughput (how much data actually reached the peer)
- Helps compute loss rate: loss_rate ≈ bytes_retrans / bytes_sent ≈ 5.05M / 23.79M ≈ 21%
- Example: ~18.7 MB acknowledged → connection is sending more than what actually gets through without retransmission.
- Rule of thumb: <0.1–0.5% is fine; >1% is noticeable
- bytes_received:331
- Meaning: Total payload bytes received from peer (application layer data).
- Units: Bytes
- Notes:
- Very low here → sender is mostly transmitting; peer has sent minimal data.
- Could indicate a download-heavy or client-server upload-heavy scenario.
- Example: Only 331 bytes received → mainly upload connection.
| Metric | Formula | Healthy Range | Interpretation |
|---|---|---|---|
| Loss rate | bytes_retrans / bytes_sent | < 0.5% | Higher = packet loss or congestion |
| ACK efficiency | bytes_acked / bytes_sent | > 95% | Lower = retransmissions or stalled ACKs |
| ACK coverage gap | bytes_sent − bytes_acked | small | Large gap = unacknowledged or in-flight bytes |
| TX/RX balance | bytes_sent / bytes_received | ≈ 1 (full duplex) or ≫ 1 (upload) | Helps classify directionality |

| Scenario | bytes_sent | bytes_retrans | bytes_acked | bytes_received | Interpretation |
|---|---|---|---|---|---|
| Idle | ~0 | ~0 | ~0 | ~0 | No active traffic; closed or paused connection. |
| ✅ Healthy active | High | Low (<0.5%) | Nearly = bytes_sent | Moderate/high | Normal flow, no loss or congestion. |
| ⚠️ Mild congestion | High | Moderate (1–2%) | Slightly below bytes_sent | Normal | Some retransmissions; cwnd adjusting. |
| Severe loss | High | High (>5%) | Much lower than bytes_sent | Normal | Network dropping packets; throughput collapsing. |
| Unacknowledged / stalled | High | Growing | Stagnant | Normal | Peer not ACKing — maybe path blocked or flow control. |
| Upload-heavy | High | Low–moderate | Matches sent | Very low | Sender uploads data; peer mostly ACKs (like file upload). |
| Download-heavy | Low | Low | Small | High | Receiving much more than sending; e.g., file download. |
| Retransmission storm / RTO collapse | High | Extremely high (>20%) | Far below bytes_sent | Normal | Connection unusable; repeated losses. |
| Receiver window full (rwnd=0) | Rising slowly | Low | Stalled | Stalled | Peer cannot read fast enough; flow control limiting speed. |

For this connection:

| Derived Metric | Value | Interpretation |
|---|---|---|
| Loss rate | 5.05M / 23.79M = 21.2% | ⚠️ Extremely high — massive retransmissions |
| ACK efficiency | 18.73M / 23.79M = 78.7% | Poor — 21% not ACKed successfully |
| TX/RX ratio | ≈ 71,800 : 1 | Unidirectional — upload-only flow |

Summary: the server is mainly sending data; the network has high packet loss or congestion.
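These derived metrics are straightforward ratios of the byte counters, computed here from the example values in this section:

```python
# Byte counters from the example ss -ti output above:
bytes_sent, bytes_retrans = 23_793_598, 5_059_976
bytes_acked, bytes_received = 18_728_630, 331

loss_rate = bytes_retrans / bytes_sent        # fraction retransmitted
ack_efficiency = bytes_acked / bytes_sent     # fraction confirmed delivered
tx_rx_ratio = bytes_sent / bytes_received     # directionality

print(f"loss rate:      {loss_rate:.1%}")       # ~21% -> severe loss
print(f"ack efficiency: {ack_efficiency:.1%}")  # ~79% -> poor
print(f"tx/rx ratio:    {tx_rx_ratio:,.0f}:1")  # huge -> upload-only
```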
- segs_out:19069
- Meaning: Total number of TCP segments sent, including retransmissions.
- Includes: Data segments + pure ACKs + control segments.
- Units: Count of segments.
- Example: 19,069 segments sent over the connection.
- segs_in:14656
- Meaning: Total number of segments received from peer.
- Includes: Data + ACKs + control packets.
- Units: Count of segments.
- Example: 14,656 segments received.
- data_segs_out:19067
- Meaning: Number of segments carrying actual TCP payload sent.
- Difference vs segs_out:
- segs_out = all segments including pure ACKs
- data_segs_out = only segments with real data
- Example: 19,067 segments carry actual payload → 2 segments may have been pure ACKs or control.
- data_segs_in:2
- Meaning: Number of segments carrying actual payload received from peer.
- Notes: Very low → peer is barely sending data.
- Example: 2 data segments → the peer sends almost no payload.
Throughput / Timing
send 74.1kbps pacing_rate 356kbps delivery_rate 220kbps lastsnd:655 lastrcv:1591060 lastack:1266 busy:1592118ms
- send 74.1kbps
- Meaning: Current instantaneous send rate of this TCP connection.
- Units: kilobits per second (kbps)
- How it’s computed:
- Based on the number of bytes actually transmitted over the recent RTT
- Only includes new transmissions, not retransmissions.
- Observation in your case:
- 74.1 kbps is very low → connection is congestion-limited, probably due to cwnd=1 and retransmissions.
- pacing_rate 356kbps
- Meaning: TCP pacing rate, used if Linux TCP pacing is enabled.
- Units: kbps
- Function: Limits how fast the sender injects packets into the network to avoid bursts.
- Observation:
- Kernel allows up to 356 kbps, but actual send = 74.1 kbps → sending slower due to cwnd/retransmissions.
- delivery_rate 220kbps
- Meaning: Measured rate of successful delivery of bytes to the receiver.
- Units: kbps
- Difference from send/pacing:
- send = current bytes leaving kernel
- delivery_rate = how fast data is ACKed and confirmed delivered
- Observation:
- 220 kbps → effective throughput appears higher than the instantaneous send rate.
- This is possible because delivery_rate is averaged over a longer RTT window and can include earlier bursts being ACKed.
-
Scenario send pacing_rate delivery_rate Diagnosis ✅ Healthy steady state ≈ pacing_rate ≈ pacing_rate ≈ pacing_rate Fully utilizing allowed bandwidth ⚠️ Loss recovery / cwnd shrink ≪ pacing_rate steady or reduced < pacing_rate Send rate throttled by cwnd, not pacing ๐ง ACK delay or high RTT ≈ pacing_rate high lower Data sent quickly, ACKs arrive slowly ๐งฑ Severe congestion / RTO tiny larger tiny Connection nearly stalled ๐ค Idle / app limited 0 steady 0 Application not producing data - Quick Diagnostic Guide
| Pattern | Likely Cause | Typical Action |
|---|---|---|
| send ≈ pacing_rate ≈ delivery_rate | Normal / balanced | None |
| send << pacing_rate | cwnd too small (loss recovery) | Investigate bytes_retrans, RTT spikes |
| delivery_rate << send | ACK loss or receiver slow | Check rtt, rttvar, rwnd |
| All near zero | Idle or app paused | App-limited, normal |

In the Numbers Above (74 kbps vs 356 kbps vs 220 kbps)
| Observation | Implication |
|---|---|
| send << pacing_rate | Sender is holding back, likely due to small cwnd after losses (~21 % retransmits observed earlier). |
| delivery_rate > send | Receiver ACKs older bursts; a temporary effect of delayed ACKs catching up. |

Combined → the connection is congestion-limited, not pacing-limited. Linux pacing permits 356 kbps, but cwnd/RTO recovery restricts the actual send rate to ~74 kbps.
- lastsnd:655
- Meaning: Time elapsed since data was last sent on this connection (ms).
- Effect: Useful to detect idle periods or pacing gaps.
- lastrcv:1591060
- Meaning: Time elapsed since a segment was last received from the peer (ms).
- Observation: Nothing received for ~1,591 seconds, yet data was sent only 655 ms ago → peer is sending almost nothing (matches bytes_received=331).
- lastack:1266
- Meaning: Time elapsed since the last ACK arrived from the peer (ms).
- Function: Determines whether unacknowledged data exists and affects cwnd growth.
- busy:1592118ms
- Meaning: Total time this TCP connection has been busy sending data or waiting for ACKs (ms).
- Observation: Busy for ~1,592 seconds (~26.5 minutes).
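As a quick sanity check, the throughput/timing fields can be extracted from a raw ss line with a small parser. This Python sketch uses simple regular expressions over the sample line above; the helper names are illustrative, not part of any ss API:

```python
import re

# Sample throughput/timing fields from the `ss -ti` output discussed above.
line = ("send 74.1kbps pacing_rate 356kbps delivery_rate 220kbps "
        "lastsnd:655 lastrcv:1591060 lastack:1266 busy:1592118ms")

def parse_rate(name, text):
    """Extract a rate like 'pacing_rate 356kbps' and return it in kbps."""
    m = re.search(rf"{name} ([\d.]+)([kMG]?)bps", text)
    if not m:
        return None
    mult = {"": 1e-3, "k": 1.0, "M": 1e3, "G": 1e6}[m.group(2)]
    return float(m.group(1)) * mult

def parse_ms(name, text):
    """Extract a millisecond counter like 'lastrcv:1591060'."""
    m = re.search(rf"{name}:(\d+)", text)
    return int(m.group(1)) if m else None

send = parse_rate("send", line)           # 74.1 kbps
pacing = parse_rate("pacing_rate", line)  # 356.0 kbps
lastrcv = parse_ms("lastrcv", line)       # 1591060 ms since last receive
print(send, pacing, lastrcv)
```

Feeding every `ss -ti` detail line through helpers like these makes it easy to graph per-connection rates over time.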
Insights
- Send vs Pacing vs Delivery:
- The kernel can send faster (pacing_rate), but cwnd = 1 → throttles actual send rate.
- Delivery rate is higher than instantaneous send due to averaging or bursts.
- Low send rate:
- Matches earlier observation of cwnd=1 and high retransmissions (bytes_retrans).
- Idle detection:
- lastsnd vs lastrcv → peer sending almost nothing → connection mostly upload.
- Performance bottleneck:
- Not NIC or link capacity (pacing_rate=356 kbps), TCP congestion control and retransmissions are limiting throughput.
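The diagnostic patterns above can be sketched as a small heuristic classifier. The thresholds (3x for "<<", 20 % for "≈") are illustrative assumptions, not kernel-defined values:

```python
# Heuristic version of the Quick Diagnostic Guide table above.
def diagnose(send_kbps, pacing_kbps, delivery_kbps):
    if send_kbps < 1 and delivery_kbps < 1:
        return "idle / app-limited"
    if send_kbps * 3 < pacing_kbps:
        # Send rate far below pacing -> cwnd is the limiter.
        return "cwnd-limited (loss recovery?) - check retrans, RTT spikes"
    if delivery_kbps * 3 < send_kbps:
        return "ACK loss or slow receiver - check rtt, rttvar, rwnd"
    if abs(send_kbps - pacing_kbps) / pacing_kbps < 0.2:
        return "healthy steady state"
    return "mixed / inspect further"

# The numbers from the connection above hit the cwnd-limited branch:
print(diagnose(74.1, 356.0, 220.0))
```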
Queue / Retransmission Info
unacked:4 retrans:1/4055 lost:4 reordering:20 rcv_space:14600 rcv_ssthresh:64076 notsent:66144 minrtt:127.84 snd_wnd:76096
- unacked:4
- Meaning: Number of segments sent but not yet acknowledged by the receiver.
- Units: Segments (MSS units)
- Impact:
- Determines how much data is currently “in flight.”
- If unacked >= cwnd, TCP cannot send more until ACKs arrive.
- Example: 4 segments unacknowledged → limits sending if cwnd is small (matches cwnd=1 from earlier).
- retrans:1/4055
- Meaning: Retransmissions in two forms:
- 1 → currently scheduled for retransmission
- 4055 → total segments retransmitted so far
- Cause: Packet loss detected via RTO or triple duplicate ACKs
- Impact:
- Each retransmission reduces effective throughput
- Triggers congestion control (reduces cwnd)
- Example: 1 segment pending retransmit; 4055 already retransmitted → high-loss path.
- lost:4
- Meaning: Segments marked lost by TCP (timeout or duplicate ACK detection).
- Impact:
- TCP reduces cwnd (to 1 segment after an RTO, or to roughly half after fast retransmit)
- Triggers retransmissions
- Example: 4 segments lost → explains why cwnd=1 and send rate is low.
- reordering:20
- Meaning: TCP's reordering metric: the maximum distance (in segments) by which segments have been observed to arrive out of order (Linux default is 3).
- Impact:
- Out-of-order arrival triggers duplicate ACKs → potential fast retransmit
- A higher metric makes TCP wait longer before treating duplicate ACKs as loss
- Observation: 20 → significant reordering has been seen, typical on multi-path or virtual networks.
- rcv_space:14600
- Meaning: Buffer space tracked by TCP receive auto-tuning (bytes), roughly the amount of data the receiver consumes per RTT.
- Impact:
- Drives growth of the advertised receive window, which limits how much data the sender can push
- Effective send window = min(cwnd × MSS, snd_wnd)
- Observation: ~14.6 KB → receive-side buffering is not the current bottleneck.
- rcv_ssthresh:64076
- Meaning: Current receive-window slow-start threshold (bytes), used by receive-window auto-tuning
- Impact: Caps how quickly the advertised receive window can grow
- Observation: ~64 KB threshold → the receiver can open its window enough to absorb bursts.
- notsent:66144
- Meaning: Bytes in the send buffer waiting to be transmitted (not yet sent).
- Impact:
- Limited by cwnd and pacing
- Indicates backpressure in sender queue
- Example: 66 KB waiting → can’t send immediately because cwnd is too small.
- minrtt:127.84
- Meaning: Minimum RTT observed on this connection (ms)
- Impact:
- Used by congestion control (CUBIC/TCP) to estimate bandwidth
- Helps compute pacing and retransmission timing
- Example: 127 ms → baseline RTT for cwnd growth calculations.
- snd_wnd:76096
- Meaning: Sender’s view of receiver advertised window (rwnd)
- Units: Bytes
- Impact:
- TCP can send at most min(cwnd × MSS, snd_wnd)
- rwnd limits maximum in-flight data
- Example: 76 KB → receiver window is sufficient; not limiting in-flight data here.
Key Insights
- Sender congestion-limited: cwnd=1 + 4 unacked → can't send all 66 KB in buffer.
- High retransmissions & lost segments → explains low send rate (74 kbps).
- Receiver window sufficient: snd_wnd=76 KB > cwnd*MSS → not limiting.
- Reordering moderate: could trigger extra duplicate ACKs, affecting cwnd.
- Min RTT = 127 ms: sets baseline for CUBIC growth; pacing may wait for ACKs.
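A short worked calculation makes the first insight concrete. cwnd, snd_wnd, notsent, and the retransmission counters come from the ss output above; the MSS of 1448 bytes is an assumption (typical for a 1500-byte MTU with TCP timestamps), since mss is not shown in this excerpt:

```python
mss = 1448        # assumed MSS in bytes (not shown in this ss excerpt)
cwnd = 1          # congestion window from earlier in the ss output
snd_wnd = 76096   # receiver-advertised window (bytes)
notsent = 66144   # bytes queued in the send buffer, not yet transmitted

# TCP may have at most min(cwnd * MSS, snd_wnd) bytes in flight.
effective_window = min(cwnd * mss, snd_wnd)
print(effective_window)  # 1448 -> cwnd, not the receiver window, is the limit
print(notsent // effective_window)  # RTTs of backlog at one segment per RTT

# Rough retransmission ratio from the counters above:
retrans_ratio = 4055 / 19067  # total retransmitted / data_segs_out
print(f"{retrans_ratio:.1%}")  # ~21.3% of data segments were retransmits
```

With cwnd = 1, only one MSS can be in flight per RTT, so the 66 KB backlog drains at roughly one segment every 127+ ms: exactly the congestion-limited behavior described above.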
TLS / ULP Info
tcp-ulp-tls rxconf: none txconf: none
| Field | Meaning |
|---|---|
| tcp-ulp-tls | Using TLS as a TCP Upper Layer Protocol (kernel-level TLS offload). |
| rxconf / txconf | TLS configuration on RX/TX side (none configured here). |
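A tiny helper can flag whether kernel TLS is actually configured on a socket by inspecting this line. The function name is illustrative; it only parses the text format shown above:

```python
import re

def ktls_status(line):
    """Return (rxconf, txconf) from a 'tcp-ulp-tls' line, or None if
    the socket has no TLS ULP attached."""
    if "tcp-ulp-tls" not in line:
        return None
    rx = re.search(r"rxconf:\s*(\S+)", line)
    tx = re.search(r"txconf:\s*(\S+)", line)
    return (rx.group(1) if rx else None, tx.group(1) if tx else None)

# The socket above has the TLS ULP attached but no kTLS crypto state:
print(ktls_status("tcp-ulp-tls rxconf: none txconf: none"))  # ('none', 'none')
```

A result of `('none', 'none')` means TLS record processing is still happening in user space; actual kTLS offload would show a configured cipher state instead.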