Sunday, September 7, 2025

Troubleshooting ARP Table Overflows: Why Random Connectivity Drops Happen in VLANs

Recently, we have been observing frequent connectivity drops on random application servers. Packet captures (tcpdump) revealed that some servers stop receiving ARP replies for the destination IPs they are trying to reach, even though source and destination are in the same VLAN and subnet.

This raises an important question: what could cause ARP failures in a flat Layer 2 domain?

A Quick Refresher: What is ARP?

Address Resolution Protocol (ARP) is the mechanism that maps an IP address to its corresponding MAC address on a local Ethernet network.

Without ARP, hosts cannot communicate within the same subnet.

  • Layer 3 to Layer 2 Mapping: NICs don’t understand IP addresses—they only use MAC addresses. ARP provides the “glue” between IP (Layer 3) and Ethernet (Layer 2).
  • MAC Addresses: Every Ethernet NIC has a 48-bit identifier, normally burned into ROM and globally unique, which is used to deliver Ethernet frames.
  • Operation: If host 10.0.0.11 wants to communicate with 10.0.0.22, it broadcasts an ARP request asking: “Who has 10.0.0.22?”. The host owning that IP replies with its MAC address.

Where Things Go Wrong: Possible ARP Table Overflow

In a /24 subnet (255.255.255.0), we can assign up to 254 usable IP addresses. If this limit is fully utilized (or close to it), ARP tables at switches, routers, or even servers may hit capacity constraints.

What we observed:

  • No ARP reply received for some destination IPs.
  • Symptoms appear random—one server works fine while another loses connectivity.
  • Recovery sometimes only happens after resetting the NIC or bouncing the interface (which clears and repopulates the host’s ARP cache).

This suggests a case of ARP table overflow, where the device managing ARP entries runs out of space.

 

Why Does ARP Overflow Matter?

When an ARP table is full:

  • The garbage collector may evict ARP entries to reclaim space, and which entries are dropped can appear random.
  • A discarded entry means that the host can no longer resolve the MAC address for a destination IP.
  • Until the NIC or OS refreshes its ARP cache—or the device itself is rebooted—communication to that destination fails.

Essentially, the host knows where to send packets (IP), but doesn’t know how to send them (MAC).

 

Potential Culprits

  • Router ARP Table Limit – Routers maintain ARP caches for each connected subnet. Hitting the per-interface ARP entry limit can cause drops.
  • Switch MAC/ARP Table Overflow – L2 switches keep finite MAC (CAM) tables, and L3 switches also maintain ARP tables. Overflow can lead to incomplete lookups or unknown-unicast flooding.
  • Server-Side ARP Cache Limits – Even Linux/Windows servers have configurable limits for ARP cache entries. 

 

On Linux, the kernel’s neighbour (ARP) cache garbage collection is governed by three sysctl parameters, documented in arp(7):

gc_thresh1 (since Linux 2.2) : The minimum number of entries to keep in the ARP cache. The garbage collector will not run if there are fewer than this number of entries in the cache. Defaults to 128.

gc_thresh2 (since Linux 2.2) : The soft maximum number of entries to keep in the ARP cache.  The garbage collector will allow the number of entries to exceed this for 5 seconds before collection will be performed.  Defaults to 512.

gc_thresh3 (since Linux 2.2) : The hard maximum number of entries to keep in the ARP cache.  The garbage collector will always run if there are more than this number of entries in the cache.  Defaults to 1024.

Troubleshooting Commands

Here are useful commands to investigate ARP-related issues across different systems:

# Show current ARP cache entries
arp -n

# Modern replacement command
ip neigh show

# Clear ARP cache
ip -s -s neigh flush all

# Check ARP kernel parameters
cat /proc/sys/net/ipv4/neigh/default/gc_thresh1
cat /proc/sys/net/ipv4/neigh/default/gc_thresh2
cat /proc/sys/net/ipv4/neigh/default/gc_thresh3

# Adjust thresholds (increase ARP cache size if needed)
sysctl -w net.ipv4.neigh.default.gc_thresh1=512
sysctl -w net.ipv4.neigh.default.gc_thresh2=1024
sysctl -w net.ipv4.neigh.default.gc_thresh3=2048
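# Note: sysctl -w changes last only until reboot; persist them in
# /etc/sysctl.conf (or a file under /etc/sysctl.d/) if they prove necessary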

Recommendations

  • Check ARP Table Sizes on routers, switches, and servers in the VLAN. Verify if limits are being hit.
  • Segment Large VLANs – If the VLAN is hosting too many hosts (close to 254), consider subnetting further (e.g., /25, /26) to reduce ARP load.
  • Monitor ARP Cache Usage – Many network devices provide counters/logs for ARP cache utilization; a minimal host-side check is sketched after this list.
  • NIC/OS Tuning – Adjust ARP cache timeouts and maximum entries in server OS (Linux: /proc/sys/net/ipv4/neigh/*).
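
To put the monitoring recommendation into practice on a Linux server, the current neighbour cache usage can be compared against the gc_thresh3 hard limit. Below is a minimal Java sketch, assuming a Linux host where /proc/net/arp and the default neigh sysctls are readable; the class name and the 80% warning threshold are illustrative choices, not an established tool.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Rough ARP cache utilization check for a Linux host (illustrative sketch):
// counts entries in /proc/net/arp and compares them with the
// net.ipv4.neigh.default.gc_thresh3 hard limit.
public class ArpCacheCheck {
    public static void main(String[] args) throws IOException {
        List<String> arpLines = Files.readAllLines(Paths.get("/proc/net/arp"));
        int entries = Math.max(0, arpLines.size() - 1); // first line is the header

        int hardLimit = Integer.parseInt(
                Files.readAllLines(Paths.get("/proc/sys/net/ipv4/neigh/default/gc_thresh3"))
                     .get(0).trim());

        System.out.printf("ARP entries: %d, gc_thresh3: %d (%.1f%% used)%n",
                entries, hardLimit, 100.0 * entries / hardLimit);
        if (entries > hardLimit * 0.8) {
            System.out.println("WARNING: ARP cache is approaching its hard limit.");
        }
    }
}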

 

Tuesday, September 2, 2025

Resolving Intermittent Connection Resets on ESXi: The NIC Speed Mismatch Challenge

Maintaining stable and high-performance network connectivity is critical in modern virtualized environments. Recently, our team encountered an intermittent TCP connection reset issue on the ESXi blade MK-FLEX-127-40-B2, which provided a perfect case study on the importance of proper NIC teaming configurations.

🧩 Issue Overview

During routine connectivity testing on the ESXi host, we observed sporadic TCP connection resets that were difficult to reproduce consistently. Upon investigation, we found that the issue occurred specifically when:

  • vmnic1 (10Gbps) and vmnic3 (1Gbps) were configured together in an active-active NIC teaming setup.

Other combinations, such as vmnic0 + vmnic1 or vmnic2 + vmnic3, exhibited no connectivity issues, highlighting a configuration-specific problem.



🔍 Root Cause Analysis

The underlying cause was a speed mismatch between teamed NICs, which led to asymmetric traffic paths:

  • Traffic could egress over the 10Gbps NIC (vmnic1) but return via the 1Gbps NIC (vmnic3) or vice versa.

  • This path asymmetry confused network devices, such as firewalls and load balancers performing stateful inspection, resulting in intermittent TCP resets.

  • Mismatched NICs in a team can also lead to:

    • Out-of-order packet delivery

    • MTU mismatches, particularly if jumbo frames are enabled on only one NIC

    • Load balancing inconsistencies under certain hashing policies

Key takeaway: All physical NICs in a team should be of the same speed, duplex, and model to avoid unpredictable network behavior.


🛠️ Resolution Steps

To address the issue, the NIC teaming configuration was updated:

  1. Replaced vmnic3 (1Gbps) with vmnic0 (10Gbps) in the team alongside vmnic1.

  2. Ensured consistent MTU, speed, and duplex settings across both NICs.

  3. Verified that traffic symmetry and load balancing consistency were restored under active-active teaming.



✅ Post-Change Results

After reconfiguration:

  • No further connection resets were observed during testing.

  • Network performance stabilized across all workloads.

  • The NIC team now adheres to best practices: all adapters are of the same speed and type, ensuring link-layer stability.

📌 Lessons Learned

This incident reinforced several key networking principles:

  1. NIC Homogeneity: Only team NICs with the same speed and model.

  2. MTU Consistency: Ensure jumbo frame settings match across all adapters.

  3. Traffic Symmetry: Active-active NIC teams require symmetric egress and ingress paths to maintain session integrity.

  4. Documentation & Audit: Regularly review NIC teaming and ESXi hardening checklists to prevent recurring issues.

🔗 Conclusion

Even in highly virtualized environments, simple configuration mismatches like NIC speed differences can cause elusive connectivity problems. By adhering to NIC teaming best practices, organizations can avoid asymmetric traffic issues, stabilize network performance, and ensure reliable connectivity for critical workloads.


Misusing SO_LINGER with 0 can lead to data loss

What SO_LINGER does

SO_LINGER is a socket option (setsockopt) that controls how close() behaves when unsent data remains on the socket.

  • SO_LINGER decides how the socket behaves when the application calls close() and there is unsent data in the socket’s send buffer.
  • Unsent data = bytes written by the application but not yet transmitted and acknowledged by the peer.
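
In Java, these same modes are exposed through Socket.setSoLinger(boolean on, int lingerSeconds). The sketch below only illustrates how each mode is selected; the endpoint is a placeholder.

import java.io.IOException;
import java.net.Socket;

public class LingerModes {
    public static void main(String[] args) throws IOException {
        // Placeholder endpoint, used only to show the option calls.
        try (Socket socket = new Socket("example.com", 80)) {

            // Default (SO_LINGER disabled): close() returns quickly and the
            // kernel finishes the normal FIN/ACK teardown in the background.
            socket.setSoLinger(false, 0);

            // SO_LINGER enabled with a 5-second timeout: close() blocks until
            // pending data is sent and ACKed, or aborts with RST after 5 s.
            socket.setSoLinger(true, 5);

            // SO_LINGER(0): abortive close. close() discards any unsent data
            // and sends RST instead of FIN.
            socket.setSoLinger(true, 0);

        } // try-with-resources calls close() here, honoring the last setting
    }
}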

Behavior in Client Scenario

  • Client workflow: Send request → Wait for complete response → Call close().
  • At the point of close():
    • All client request bytes have been transmitted and ACKed.
    • Send buffer is empty.
    • There is no “unsent data” for SO_LINGER to discard.
  • Result:
    • With SO_LINGER(0), the client still sends RST instead of FIN, but no client data is lost.
    • The server may log the abrupt reset, but functionally it is harmless for stateless APIs.

Normal Close (default, no linger set): 

  • Client:  FIN →  
  • Server:      ACK → FIN →  
  • Client:           ACK

    Connection passes through FIN_WAIT, CLOSE_WAIT, LAST_ACK, TIME_WAIT.

    

Characteristics:

  • Graceful 4-way handshake.
  • States traversed: FIN_WAIT_1 → FIN_WAIT_2 → TIME_WAIT (client) and CLOSE_WAIT → LAST_ACK (server).
  • Guarantees reliable delivery of all data.

SO_LINGER(0) (Abortive Close) : 

  • Client:  RST →  
  • Server: Connection dropped immediately

      

All intermediate states skipped → both sides move to CLOSED instantly.

✅ Characteristics:

  • Instant teardown.
  • Skips FIN/ACK handshake, TIME_WAIT, CLOSE_WAIT, LAST_ACK.
  • Peer sees abrupt RST.
  • Any unsent data is discarded (not applicable in our stateless scenario).

Practical Implications of SO_LINGER(0)

  • ✅ No risk of data loss here (request fully sent, response fully received).
  • ✅ Good for short-lived, stateless API calls — avoids lingering sockets.
  • ⚠️ Server logs may show RST instead of FIN.
  • ⚠️ Should not be used in protocols requiring graceful close or guaranteed delivery after close().
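
To make the stateless pattern concrete, here is a minimal Java sketch; the host, port, and request line are placeholders. The point is that SO_LINGER(0) is applied only after the full response has been read, so the send buffer is empty when close() runs and the RST discards nothing.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StatelessClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint for a short-lived, stateless API call.
        try (Socket socket = new Socket("api.example.internal", 8080)) {
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            // 1. Send the complete request.
            out.write(("GET /status HTTP/1.1\r\n"
                    + "Host: api.example.internal\r\n"
                    + "Connection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // 2. Read the full response until the server closes its side.
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
            System.out.flush();

            // 3. Send buffer is now empty, so an abortive close loses no data:
            //    close() (via try-with-resources) sends RST instead of FIN.
            socket.setSoLinger(true, 0);
        }
    }
}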

SO_LINGER(5)

Case A: All data delivered before timeout

✅ Behavior:

  • close() blocks until handshake completes.
  • Graceful 4-way close, just like normal.
  • Application knows delivery succeeded before returning.

Case B: Timeout expires (no ACK from server within 5s)

❌ Behavior:

  • Data not acknowledged → stack aborts with RST.
  • Connection torn down abruptly.
  • Peer sees reset instead of FIN.

🔑 Summary of SO_LINGER(5)

  • Best case: Works like normal close, but close() blocks until data is ACKed.
  • Worst case: After 5s timeout, behaves like SO_LINGER(0) (RST abort).
  • Useful when the application must know if the peer ACKed data before completing close().

✅ Conclusion

  • In stateless client-server flow, SO_LINGER(0) is acceptable.
  • It allows instant connection teardown with no data loss, since the request/response exchange is already complete.
  • The only visible impact: the server sees an RST instead of a normal FIN handshake.

1. Definition of unsent data (in TCP/SO_LINGER context)

  • When a client calls write() (or send()) on a TCP socket, data goes into the socket’s send buffer.
  • Unsent data = bytes that have not yet been transmitted and acknowledged by the peer.
  • SO_LINGER controls what happens to this unsent data when close() is called:
    • SO_LINGER(0) → discard immediately, send RST.
    • SO_LINGER>0 → try to send within timeout.
    • default (SO_LINGER not set) → normal FIN handshake.

2. How it applies to stateless scenario

  • Client flow: send request → wait → receive full response → close.
  • At the point of close():
    • All client request bytes have been transmitted and acknowledged by the server.
    • There are no pending bytes in the client send buffer.
  • Therefore, the “unsent data” that SO_LINGER refers to does not exist in this scenario.

In the client workflow, the client only calls close() after sending the complete request and receiving the full server response. At this point, the socket’s send buffer is empty, so there is no unsent data. SO_LINGER(0) will still close the socket abruptly, but it does not result in any loss of transmitted data.

  • By default (no linger or SO_LINGER disabled):
    • close() just queues a graceful shutdown.
    • TCP tries to deliver any unsent data and perform the normal 4-way FIN/ACK close.
    • The application returns from close() quickly, but the actual TCP teardown may still be in progress.
  • If SO_LINGER is enabled with a timeout >0 (e.g., 5 sec):
    • close() becomes blocking until either:
      • All unsent data is delivered and ACKed, and connection closes gracefully, or
      • The timeout expires → then connection is reset (RST).
  • If SO_LINGER is set with timeout = 0 (i.e., SO_LINGER(0)):
    • close() causes an immediate abortive close.
    • Any unsent data is discarded, and the stack sends RST instead of FIN.
    • This tears down the connection instantly.

🔹 Can we use SO_LINGER(0)?

  • Yes, it’s a valid, documented use.
  • But it changes semantics: instead of a graceful shutdown, we’re forcing an abortive close.
  • This is typically used when:
    • We don’t care about undelivered data.
    • We want to immediately free up resources / ports.
    • We need to ensure the peer can’t reuse half-open connections.

When the client calls the close() API, its behavior under different SO_LINGER settings, including the packets on the wire and the typical use cases, is summarized below.

Normal close (default, no SO_LINGER)

  • Packets on wire: App calls close() → stack sends FIN → peer ACK → peer FIN → local ACK (4-way close).
  • Behavior of close(): Returns immediately; TCP teardown continues in the background.
  • Pros: ✅ Graceful shutdown ✅ Ensures data delivery ✅ Peer sees clean close
  • Cons: ❌ Leaves socket in TIME_WAIT ❌ Connection cleanup takes longer
  • Typical use: General case (most apps).

SO_LINGER enabled, timeout > 0 (e.g. 5 sec)

  • Packets on wire: On close(), TCP waits until unsent data is ACKed, then does the FIN/ACK exchange. If the timeout expires → sends RST.
  • Behavior of close(): Blocks until either data is delivered or the timeout expires.
  • Pros: ✅ App knows whether data was delivered ✅ Useful in transactional protocols
  • Cons: ❌ Blocks the calling thread ❌ If timeout, abrupt RST
  • Typical use: When you must confirm data delivery before returning from close().

SO_LINGER(0) (timeout = 0)

  • Packets on wire: Immediately sends RST, skipping the FIN/ACK handshake.
  • Behavior of close(): Returns immediately; connection is torn down instantly.
  • Pros: ✅ Frees resources instantly ✅ Avoids half-open states
  • Cons: ❌ Any unsent data is discarded ❌ Peer sees abrupt reset (may log error) ❌ Not graceful
  • Typical use: Emergency cleanup, abortive close, broken peers (like M400 not ACKing FIN).

Explanation

  • As soon as the client calls close() with SO_LINGER(0):
    • TCP stack sends RST immediately, discarding any unsent data.
    • Client socket transitions instantly to CLOSED.
  • Server receives RST:
    • Drops the half-open connection immediately.
    • Moves directly to CLOSED.
  • No FIN/ACK handshake occurs; there is no FIN_WAIT, CLOSE_WAIT, LAST_ACK, or TIME_WAIT on either side.

✅ Key difference vs normal 4-way close:

  • All intermediate states like FIN_WAIT-1, FIN_WAIT-2, TIME_WAIT, CLOSE_WAIT, LAST_ACK are skipped.
  • Connection is torn down immediately.

TCP Buffer: When TCP documentation says “unsent data is discarded,” it refers to data in the client’s send buffer that the TCP stack hasn’t physically put on the wire yet.

In the stateless scenario, using SO_LINGER(0) is acceptable because:

  • The client already sent the request (the full transaction payload) and received the response, so there’s no risk of losing client data.
  • Client has no pending writes in the TCP send buffer. Therefore, there is no unsent data at the moment of calling close().
  • The connection is stateless, and each transaction opens a new connection anyway, so skipping the graceful FIN/ACK handshake doesn’t break application logic.
  • The only downside is the server may log an RST instead of a normal FIN, which is usually harmless for stateless APIs.

SO_LINGER(0) Impact

  • Causes immediate TCP reset (RST) instead of normal FIN handshake.
  • “Unsent data” refers to pending client-side writes, which in this case have already been sent, so nothing is actually lost.
  • Client sees no issue; server may log an abrupt reset.

Wireshark TLS Decryption Guide For Java

 Introduction

Sometimes during project development, it is necessary to use tools like Wireshark to analyze the underlying network communication. However, as SSL/TLS has become the standard for secure network communication, all data is encrypted, making it impossible to directly observe the actual payload.

To decrypt TLS traffic in Wireshark, we need a way for the client or server to export the session secrets established during the SSL/TLS handshake. According to the TLS standard, once the certificate exchange and key agreement are complete, the communication channel switches from asymmetric cryptography (RSA/ECDHE, used only for authentication and key exchange) to symmetric encryption (AES-GCM, AES-CBC, etc.) for the application data.

The rationale for this design is that public-key cryptography is computationally expensive and unnecessary for encrypting the bulk of the data. Symmetric encryption is much faster and provides the same level of security for ongoing communication. By capturing these session secrets, we can decrypt the TLS traffic in Wireshark and inspect the actual content exchanged between client and server.

This document explains how TLS encryption/decryption works in a Java client and how to use the extract-tls-secrets-4.0.0.jar agent to export TLS secrets for analysis, both standalone and with a Payara application server.

TLS Decryption & extract-tls-secrets Usage in Java

When a Java application (e.g., HttpsURLConnection, SSLSocket) communicates over HTTPS/TLS:

  • OS (Kernel/TCP Stack): Handles TCP/IP (packet receipt, checksum, segmentation, reassembly). Does not decrypt TLS.
  • JVM TLS Library (JSSE / Conscrypt / BouncyCastle): Performs TLS handshake, key derivation, encryption, decryption, integrity verification. Produces plaintext for the app.
  • Application Server / Client App: Receives plaintext data after JVM decryption, executes business logic.
  • TLS Agent (extract-tls-secrets): Hooks into JVM TLS APIs to capture pre-master/master secrets for external decryption. Does not modify the payload.

The lane-by-lane breakdown below shows who does what at each step of TLS communication between a Java client/server and the network, highlighting the role of each component in encryption, decryption, and secret logging.

  • Decryption is performed by the JVM TLS library.
  • The TLS Agent only logs secrets.
  • The OS only handles transport of encrypted bytes.
  • The Application Server works purely with plaintext after decryption.

Step-by-Step Explanation:

  1. OS Lane
    • Receives encrypted TCP segments from the network.
    • Validates checksums and TCP flags.
    • Reassembles TLS records into a continuous stream for the JVM.
  2. JVM TLS Library Lane
    • Reads encrypted bytes from the OS.
    • Performs the TLS handshake (ClientHello, ServerHello).
    • Validates server certificates.
    • Generates pre-master and master secrets.
    • Expands keys and decrypts ApplicationData records.
    • Verifies integrity and produces plaintext for the application server.
    • Encrypts response data before sending it back to the OS.
  3. TLS Agent Lane
    • Hooks into the JVM TLS library.
    • Captures pre-master and master secrets during the handshake.
    • Logs secrets to a file for use with Wireshark.
    • Does not perform decryption or modify TLS data.
  4. Application Server Lane
    • Receives plaintext HTTP requests from the JVM.
    • Parses headers and body, validates request data.
    • Executes business logic.
    • Generates the HTTP response, which is then encrypted by the JVM before being sent to the OS.

extract-tls-secrets Overview

Decrypt HTTPS/TLS connections on-the-fly. Extract the shared secrets from secure TLS connections for use with Wireshark. Attach to a Java process on either side of the connection to start decrypting.

  • Java agent to extract TLS secrets from running JVM processes.
  • Can be used standalone (attach to HttpURLConnectionExample) or with application servers like Payara.
  • Output secrets can be used by Wireshark to decrypt TLS traffic.

Using extract-tls-secrets Standalone

Download this extract-tls-secrets-4.0.0.jar from https://repo1.maven.org/maven2/name/neykov/extract-tls-secrets/4.0.0/extract-tls-secrets-4.0.0.jar

Attach on startup

Add a startup argument to the JVM options: -javaagent:<path to jar>/extract-tls-secrets-4.0.0.jar=<path to secrets log file>

For example, to launch an application from a standalone Java class, run:

java -javaagent:/path/to/extract-tls-secrets-4.0.0.jar=/path/to/secrets.log HttpURLConnectionExample

  • /path/to/secrets.log will contain TLS session secrets.
  • These can then be configured in Wireshark to decrypt the traffic.
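
For reference, a minimal stand-in for the HttpURLConnectionExample class used in the command above could look like the following; the class body and the target URL are assumptions for illustration. Any HTTPS request made through the JVM's default TLS stack (JSSE) will have its session secrets written to the log file passed to -javaagent.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative stand-in for the HttpURLConnectionExample class referenced above.
public class HttpURLConnectionExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");   // placeholder HTTPS endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        System.out.println("Response code: " + conn.getResponseCode());
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}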

Using extract-tls-secrets with Payara Server

JVM Startup Option : Captures TLS secrets for all JVM-initiated connections after startup.

Add the Java agent in Payara JVM options:

asadmin create-jvm-options "-javaagent:/path/to/extract-tls-secrets-4.0.0.jar=/path/to/secrets.log"

Hot-Attach to Running Payara

Only captures new TLS sessions after attachment. No runtime toggle; to “disable,” restart JVM without -javaagent.

Attach agent to running process:

 java -jar /path/to/extract-tls-secrets-4.0.0.jar <PID> /path/to/secrets.log

TLS secret key logs

TLS 1.3 (with traffic secrets)

In modern TLS 1.3, tools like extract-tls-secrets or SSLKEYLOGFILE produce logs with named traffic secrets.

TLS 1.3 no longer uses a single “master key.” Instead, it derives multiple secrets (handshake, application traffic, etc.) from the initial key exchange.

Each line in secrets.log has the form: <Secret_Type> <ClientRandom> <Secret_Value_Hex>

Examples of Secret_Type:

  • CLIENT_HANDSHAKE_TRAFFIC_SECRET
  • SERVER_HANDSHAKE_TRAFFIC_SECRET
  • CLIENT_TRAFFIC_SECRET_0
  • SERVER_TRAFFIC_SECRET_0

TLS 1.2 and earlier (RSA / Master Secret)

 Older SSL/TLS (RSA key exchange) used a single Master-Key.

  • TLS 1.2 (RSA) logs include Session-ID and Master-Key.
  • The Master-Key is used to derive session keys for encryption.
  • Only one line per session; no multiple traffic secrets like TLS 1.3.

Example (secrets.log) format:

RSA Session-ID:<id>
Master-Key:<hex>

Using TLS Secrets in Wireshark

  • Open Wireshark and load the .pcap file.
  • Go to: Edit → Preferences → Protocols → TLS.
  • Set (Pre)-Master-Secret log filename to the path of the secrets log.

TLS packets will now decrypt automatically.

TLS Version / Log Style / Example:

  • TLS 1.3: traffic secrets per direction/stage, e.g. CLIENT_HANDSHAKE_TRAFFIC_SECRET <ClientRandom> <HexKey>
  • TLS 1.2/SSL: a single master key per session, e.g. RSA Session-ID:<id> followed by Master-Key:<hex>


TLSv1.2 Example

  • Initially, the capture shows:
    • Encrypted Request Payload: Frame 23 (Application Data)
    • Encrypted Response Payload: Frame 29 (Application Data)

  • After decryption:
    • Decrypted Request Payload: Frame 23 (POST /iCNow…. HTTP/1.1,…)
    • Decrypted Response Payload: Frame 29 (HTTP/1.1 200 (text/html))



TLSv1.3 Example

  • Initially, the capture shows:
    • Encrypted Request Payload: Frame 5986 (Application Data)
    • Encrypted Response Payload: Frame 6143 (Application Data)

  • After decryption:
    • Decrypted Request Payload: Frame 5986 (GET /auruspay/api/dev/status HTTP/1.1)
    • Decrypted Response Payload: Frame 6143 (HTTP/1.1 200 OK, (application/json))

