
About

The Bandwidth Delay Product (BDP) quantifies the maximum volume of data in transit on a network link at any instant. It is the product of a link's BW (capacity in bits per second) and its RTT (round-trip time in seconds). A misconfigured TCP receive window smaller than the BDP forces the sender to idle while waiting for acknowledgements, throttling throughput well below the physical capacity of the pipe. This is the core bottleneck on high-latency WAN links, satellite connections, and transcontinental fibre paths. Getting this number wrong means paying for bandwidth you cannot use.

This calculator computes BDP in bits and bytes, derives the minimum TCP window size required for full utilization, and estimates actual link utilization for a user-specified window. It assumes a single TCP flow with no packet loss. Real deployments should also account for TCP/IP header overhead (40 bytes for the IPv4 and TCP headers, plus 12 bytes for the timestamp option), and note that the Linux kernel auto-tunes tcp_rmem up to 6 MB by default. Satellite links with RTT exceeding 600 ms routinely produce BDP values that exceed the classic 65,535-byte TCP window, requiring RFC 1323 window scaling.
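The calculation described above can be sketched in a few lines of Python. This is a minimal illustration of the tool's arithmetic, not its actual implementation; the function names are chosen here for clarity.

```python
def bdp(bandwidth_bps: float, rtt_s: float) -> tuple[float, float]:
    """Return the Bandwidth Delay Product as (bits, bytes)."""
    bits = bandwidth_bps * rtt_s
    return bits, bits / 8

def utilization(window_bytes: float, bdp_bytes: float) -> float:
    """Link utilization (%) for a single loss-free TCP flow,
    capped at 100% once the window covers the whole pipe."""
    return min(window_bytes / bdp_bytes * 100, 100)

# Example: 1 Gbps link with 100 ms RTT
bits, byts = bdp(1e9, 0.100)
print(byts)                        # 12.5 MB of data in flight
print(utilization(65535, byts))    # classic 64 KB window: ~0.52 %
```

Note how a window only slightly smaller than the BDP still costs proportional throughput: utilization scales linearly with the window until it reaches the BDP.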


Formulas

The Bandwidth Delay Product is the fundamental capacity metric for any network pipe. It represents the total data "in flight" at full utilization.

BDP = BW × RTT

Where BDP is expressed in bits, BW is the link bandwidth in bits/s, and RTT is the round-trip time in seconds. Converting to bytes for TCP window sizing:

BDPbytes = (BW × RTT) / 8

Link utilization with a given TCP window size W:

U = min(W / BDPbytes × 100, 100)

Where U is utilization in %, and W is the TCP receive window in bytes. If W ≥ BDPbytes, the link is fully utilized. The classic TCP window field is 16 bits, capping at 65,535 bytes. RFC 1323 window scaling extends this to 1 GB using a scale factor up to 14.
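Given a BDP, the smallest RFC 1323 scale factor that makes the 16-bit window field large enough can be found by shifting. A sketch, assuming the standard cap of 14 on the scale shift:

```python
def min_scale_factor(bdp_bytes: float) -> int:
    """Smallest RFC 1323 window-scale shift s such that
    65535 << s >= BDP, capped at the protocol maximum of 14."""
    scale = 0
    while (65535 << scale) < bdp_bytes and scale < 14:
        scale += 1
    return scale

# GEO satellite example: 50 Mbps x 600 ms = 3.75 MB BDP
print(min_scale_factor(3_750_000))  # 6  (65535 << 6 = 4,194,240 bytes)
```

A scale factor of 0 means the classic window already suffices; the cap of 14 yields the ~1 GB maximum mentioned above.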

Reference Data

| Link Type | Typical Bandwidth | Typical RTT | BDP (approx.) | Min TCP Window |
|---|---|---|---|---|
| LAN (Ethernet) | 1 Gbps | 0.5 ms | 62.5 KB | 64 KB |
| Metro Ethernet | 1 Gbps | 5 ms | 625 KB | 640 KB |
| 10G LAN | 10 Gbps | 0.2 ms | 250 KB | 256 KB |
| Domestic WAN (US) | 100 Mbps | 30 ms | 375 KB | 384 KB |
| Transatlantic Fibre | 1 Gbps | 80 ms | 10 MB | 10 MB |
| Transpacific Fibre | 1 Gbps | 150 ms | 18.75 MB | 19 MB |
| GEO Satellite | 50 Mbps | 600 ms | 3.75 MB | 4 MB |
| LEO Satellite (Starlink) | 100 Mbps | 40 ms | 500 KB | 512 KB |
| DSL | 20 Mbps | 25 ms | 62.5 KB | 64 KB |
| 4G LTE | 50 Mbps | 50 ms | 312.5 KB | 320 KB |
| 5G (Sub-6 GHz) | 500 Mbps | 10 ms | 625 KB | 640 KB |
| 5G (mmWave) | 2 Gbps | 5 ms | 1.25 MB | 1.3 MB |
| T1 Leased Line | 1.544 Mbps | 40 ms | 7.72 KB | 8 KB |
| T3 / DS3 | 44.736 Mbps | 40 ms | 223.68 KB | 224 KB |
| OC-48 / STM-16 | 2.488 Gbps | 20 ms | 6.22 MB | 6.3 MB |
| 100G Data Center | 100 Gbps | 0.1 ms | 1.25 MB | 1.3 MB |
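Any row of the table can be checked against the BDPbytes formula directly. A quick sketch verifying two entries:

```python
def bdp_bytes(bw_bps: float, rtt_s: float) -> float:
    """BDP in bytes: bandwidth (bits/s) x RTT (s) / 8."""
    return bw_bps * rtt_s / 8

# Transpacific fibre row: 1 Gbps, 150 ms
print(bdp_bytes(1e9, 0.150) / 1e6)      # 18.75 MB
# T1 leased line row: 1.544 Mbps, 40 ms
print(bdp_bytes(1.544e6, 0.040) / 1e3)  # 7.72 KB
```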

Frequently Asked Questions

What happens if the TCP window is smaller than the BDP?
If the TCP receive window is smaller than the Bandwidth Delay Product, the sender exhausts its allowed in-flight data and stalls waiting for ACKs. For example, a 1 Gbps link with 100 ms RTT has a BDP of 12.5 MB. Using the classic 65,535-byte window limits throughput to roughly 5.24 Mbps - under 1% utilization. Enable RFC 1323 window scaling and set the receive buffer to at least the BDP value.
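The 5.24 Mbps figure follows from the fact that a window-limited sender can transmit at most one full window per round trip. A sketch of that ceiling:

```python
def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Window-limited throughput ceiling: one window per round trip,
    converted from bytes to bits per second."""
    return window_bytes * 8 / rtt_s

# Classic 65,535-byte window on a 100 ms RTT path
print(max_throughput_bps(65535, 0.100) / 1e6)  # ~5.24 Mbps
```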
Should I use one-way latency or round-trip time?
Always use Round-Trip Time (RTT). TCP's flow control is ACK-clocked: the sender transmits data, then waits for the acknowledgement to return before sending more (within the window). The full round trip governs how long data is in flight. If you only have one-way latency, multiply by 2. Tools like ping report RTT directly.
How does packet loss change the picture?
This calculator assumes zero loss. In practice, TCP congestion control (Reno, CUBIC, BBR) reduces the effective window upon loss detection. The Mathis formula estimates throughput as T ≈ MSS / (RTT × √p), where p is the loss rate. Even 0.1% loss on a high-BDP link can reduce throughput by an order of magnitude.
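The Mathis estimate is easy to evaluate numerically. A sketch, assuming a typical 1460-byte MSS (an illustrative choice, not from the calculator itself):

```python
import math

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. loss-based estimate: T ~ MSS / (RTT * sqrt(p)),
    converted to bits per second."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

# 1460-byte MSS, 100 ms RTT, 0.1% loss
print(mathis_throughput_bps(1460, 0.100, 0.001) / 1e6)  # ~3.7 Mbps
```

Compare this with the 12.5 MB BDP example above: a loss-free window-scaled flow could fill 1 Gbps, while 0.1% loss caps the same path near 3.7 Mbps.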
How large should router buffers be?
The Stanford model recommends a buffer size equal to BDP ÷ √N for N long-lived flows. For a single flow, the buffer should approximate the full BDP to prevent tail drops. Over-buffering causes bufferbloat (excess latency). Under-buffering causes premature loss.
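The √N rule shrinks the required buffer dramatically as the flow count grows. A sketch with illustrative numbers:

```python
import math

def stanford_buffer_bytes(bdp_bytes: float, n_flows: int) -> float:
    """Stanford model buffer sizing: BDP / sqrt(N) for N long-lived flows."""
    return bdp_bytes / math.sqrt(n_flows)

# 12.5 MB BDP path shared by 10,000 long-lived flows
print(stanford_buffer_bytes(12.5e6, 10_000) / 1e3)  # 125.0 KB
# A single flow still needs the full BDP
print(stanford_buffer_bytes(12.5e6, 1) / 1e6)       # 12.5 MB
```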
Does BDP apply to UDP and QUIC?
The BDP concept applies to any protocol, but the TCP window constraint is TCP-specific. UDP has no built-in flow control, so application-level pacing must account for BDP. QUIC implements its own flow control with per-stream and connection-level windows. The BDP value remains relevant for sizing QUIC's flow control credits and send buffers.
How do I tune Linux for a high-BDP path?
Set net.core.rmem_max and net.ipv4.tcp_rmem (third value) to at least the BDP in bytes. For a 12.5 MB BDP: sysctl -w net.core.rmem_max=13107200 and sysctl -w net.ipv4.tcp_rmem="4096 87380 13107200". Ensure tcp_window_scaling is 1 (enabled by default since Linux 2.6.17).
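The sysctl commands above take effect immediately but do not survive a reboot. One way to make them persistent is a drop-in file under /etc/sysctl.d; the filename below is an illustrative choice, and the values match the 12.5 MB BDP example rather than any particular deployment:

```shell
# /etc/sysctl.d/90-tcp-bdp.conf -- persistent tuning for a ~12.5 MB BDP path
# Adjust the buffer ceilings to your own measured BDP.
net.core.rmem_max = 13107200
net.ipv4.tcp_rmem = 4096 87380 13107200
# Default-on since Linux 2.6.17; listed here for explicitness.
net.ipv4.tcp_window_scaling = 1

# Apply without rebooting:
#   sysctl --system
```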