Should I use TCP_NODELAY for connections to AMPS?

If your application needs the lowest possible latency, it can help to set TCP_NODELAY. The overall effect of this setting is to consume more bandwidth in exchange for lower latency on each individual message.

On the other hand, if network bandwidth is at a premium, or if your application is less latency-sensitive, leaving the option in its default state may be the better choice.

TCP_NODELAY controls whether a given connection uses Nagle's algorithm. In simple terms, this algorithm introduces "batching" behavior: if there is less than a full packet of data to send, the algorithm waits a short time for the application to provide more data before sending the packet. By default (TCP_NODELAY off), connections use the algorithm to make more efficient use of bandwidth.
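For illustration, the sketch below shows how the option is set at the socket level in Python. The host name and port are placeholders, and an AMPS client library may expose this option through its own connection configuration rather than a raw socket.

```python
import socket

# Minimal sketch: enabling TCP_NODELAY on a plain TCP socket.
# "amps-host.example.com" and 9007 are placeholders for your own endpoint.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Setting TCP_NODELAY to 1 disables Nagle's algorithm for this connection.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock.connect(("amps-host.example.com", 9007))
```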

In practice, this means that the benefit of setting TCP_NODELAY is typically most noticeable when an application needs to transfer small amounts of data with minimal latency. Larger messages, or a continuous flow of data, see less benefit from this setting. When the setting does provide a benefit, it does so by sending more packets over the connection, which means more overhead for the same amount of data and less efficient use of bandwidth.

If your use case is latency-critical, test whether enabling TCP_NODELAY reduces latency for your workload. Otherwise, if sustained throughput is more important for your application, use the default setting.
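Depending on your AMPS client, it may be possible to request the option directly in the connection string. The sketch below, using the AMPS Python client, assumes a `tcp_nodelay` URI parameter; that parameter name, along with the host, port, and message type shown, is an assumption to verify against your client's documentation.

```python
from AMPS import Client

# Sketch only: the "tcp_nodelay=true" URI parameter, host, port, and message
# type below are assumptions to confirm against your AMPS client documentation.
client = Client("latency-test-client")
try:
    client.connect("tcp://amps-host.example.com:9007/amps/json?tcp_nodelay=true")
    client.logon()
    # ... run latency measurements here, with and without the option ...
finally:
    client.close()
```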

(Note that AMPS always uses TCP_NODELAY on outgoing replication connections. However, if an outgoing connection becomes saturated, AMPS will buffer messages using a strategy similar to Nagle's algorithm.)
