Traffic Distribution & Latency

Last updated: December 17, 2025

Please read the Application Traffic article first as a primer for the content below.

Traffic Latency

Application latency is the response time (in milliseconds) an application exhibits in a specific location. For example, one could measure response times for ecommerce.nike.com in Japan and report that, on average, the application's latency was 35ms.
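As a minimal sketch of that definition, the average latency for a location is just the mean of the response-time samples measured there. The sample values below are hypothetical, not real measurements:

```python
# Hypothetical response-time samples (milliseconds) for ecommerce.nike.com
# as measured from Japan.
samples_ms = [32, 35, 38, 35]

# Average latency for this location is the mean of the samples.
avg_latency_ms = sum(samples_ms) / len(samples_ms)
print(f"average latency: {avg_latency_ms:.0f}ms")  # average latency: 35ms
```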

Because our system monitors the physical data center(s) an application is hosted in, we can model the latency experienced by users in multiple locations around the world.


Example Data at the Company Level

Our system monitors traffic at the country level. The table below shows demand and supply countries, traffic share, average latency, and traffic volume.

Company | Country (Demand) | Country (Supply) | Traffic % | Latency (ms) | Traffic (ITV)
Nike    | CN               | JP               | 5%        | 70           | 54222313
Nike    | GB               | DE               | 16%       | 25           | 899322112

In the first row, Nike has 5% of its traffic coming from China, served by data centers in Japan, with an average latency of 70ms. In the second row, Nike has 16% of its traffic coming from the UK, served by data centers in Germany, with an average latency of 25ms.
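Rows like these can be combined into a traffic-weighted mean latency over the share of traffic they cover. The sketch below uses the two rows from the table above; the record layout is an assumption for illustration, not our system's actual schema:

```python
# Records mirroring the company-level table above:
# (demand country, supply country, traffic share, latency in ms).
rows = [
    ("CN", "JP", 0.05, 70),
    ("GB", "DE", 0.16, 25),
]

# Traffic-weighted mean latency over the covered share of traffic.
covered = sum(share for _, _, share, _ in rows)
weighted_ms = sum(share * ms for _, _, share, ms in rows) / covered
print(f"{weighted_ms:.1f}ms over {covered:.0%} of traffic")  # 35.7ms over 21% of traffic
```

Note that the two rows cover only 21% of Nike's traffic, which is why the average is taken over the covered share rather than over all traffic.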


Example Data at the Product Deployment Level

Our system also monitors a company's traffic per product at the country level. The table below shows product-level demand, supply, latency, and traffic volumes.

Company | Product    | Country (Demand) | Country (Supply) | Traffic % | Latency (ms) | Traffic (ITV)
Nike    | Amazon EC2 | AU               | SG               | 1%        | 55           | 899322112
Nike    | Amazon EC2 | MX               | MX               | 2%        | 0            | 30042112

In the first row, Nike has 1% of its traffic coming from Australia, served by an Amazon EC2 region in Singapore; this traffic experiences 55ms of latency on average. In the second row, Nike has 2% of its traffic coming from Mexico, served by an Amazon EC2 region in Mexico. Because the traffic's demand and supply occur in the same country, a latency of 0ms is attributed to it.
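The attribution rule described above can be sketched as a small helper. The function name and the sample values are hypothetical, chosen only to illustrate the rule that same-country traffic is attributed 0ms:

```python
def attributed_latency_ms(demand_country: str, supply_country: str, measured_ms: float) -> float:
    """Attribution rule sketch: traffic served in the same country as its
    demand is attributed 0 ms; otherwise the measured latency applies."""
    return 0.0 if demand_country == supply_country else measured_ms

# Cross-country traffic (Australia served from Singapore) keeps its measured latency.
print(attributed_latency_ms("AU", "SG", 55))   # 55
# Same-country traffic (Mexico served from Mexico) is attributed 0 ms.
print(attributed_latency_ms("MX", "MX", 55))   # 0.0
```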