When to enable HTTP Compression

If you’ve written code for long enough to leave an imprint in your office chair and never needed HTTP compression, you are probably wondering why you should bother now. You’ve developed hundreds of APIs, some of them even public, and none of them called for server-side HTTP Compression.

We should consider this overlooked aspect in every system that deals with a lot of HTTP calls, and I’ll tell you a simple reason: network traffic is not free.

Today we are going to discuss HTTP Compression, a mechanism implemented on both the server and the client that allows us to reduce the amount of data moved from server to client. GZip is the best-known compression scheme, and it's the one we'll use for our analysis. Other popular schemes include Brotli, Deflate, and Zopfli.
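To see why text-based payloads like JSON compress so well, here's a quick local sketch using gzip on a made-up, highly repetitive JSON-ish payload (the file name and contents are purely illustrative):

```shell
# build a small, highly repetitive JSON-ish payload (illustrative data)
yes '{"name":"Punk IPA","abv":5.6}' | head -n 1000 > payload.json

wc -c < payload.json          # original size: 30000 bytes
gzip -c payload.json | wc -c  # compressed size: a tiny fraction of that
```

Real API responses are less repetitive than this, but JSON's repeated field names are exactly the kind of redundancy that makes gzip so effective on them.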

When does HTTP Compression make sense?

Is HTTP Compression right for you?

As with most design decisions, the answer is that it depends. It depends on your business needs and, in my mind, it makes the most sense when you match at least two of the following conditions:

  • You are paying for a sizeable amount of network traffic
  • Your server responses are slow due to the payload size
  • You have the CPU power to spare

Main advantages

The price of the network traffic

To get a more accurate picture of the cost reduction, let's run a simulation. We've chosen a public API that supports HTTP Compression: one where we get back a list of beers (of course we would choose the beers one) and their associated information.

Let's first check out the response size of a regular request:

curl -X GET https://api.punkapi.com/v2/beers -w '%{size_download}'

The response size is roughly 46.9 kB (46,947 bytes, as reported by %{size_download}).

For the size of the HTTP Compressed response, we need to either add the header Accept-Encoding: gzip or use the --compressed flag, as in the next cURL example:

curl --compressed -X GET https://api.punkapi.com/v2/beers -w '%{size_download}'

This time the response size is roughly 9.8 kB (9,766 bytes). That's a reduction of over 4.5x!

Based on the prices for network traffic from various cloud providers, we can estimate the network traffic costs for 100k requests to be:

| 100k Responses | AWS   | Azure | GCP   |
|----------------|-------|-------|-------|
| Regular        | $3.29 | $2.60 | $3.00 |
| Compressed     | $0.61 | $0.22 | $0.62 |
| Savings        | $2.68 | $2.38 | $2.38 |

With the following mentions:
* For AWS, we used the GB price from *Data Transfer OUT From Amazon EC2 To Internet*.
* For Azure, we've used the network traffic pricing corresponding to *From North America, Europe to any destination*.
* For GCP we computed with the price for *Egress to a Google Cloud region on another continent (excludes Oceania)*.
* For DigitalOcean, Droplets include free outbound data transfer, starting at 1TB/month for the smallest plan; after that it's $0.01 per GB. Our calculated traffic stays well under the free limit for this volume of requests.
* For Upcloud, there's a generous 1TB-24TB of free outbound traffic, with $0.01 per GB after that; again, well under the free limit for this volume of requests.

The estimated prices above can vary based on your region, account, volume, reserved resources, total bandwidth, connected services, and so on, so they are not a reliable cost comparison between cloud providers. What they do show is that compressing the response will always be cheaper (or, at worst, the same price), and those savings can add up to a good amount if you are running a business that serves millions of requests per month.
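As a back-of-the-envelope sketch of where such numbers come from, the arithmetic is simply bytes per response × number of requests × price per GB. The $0.09/GB figure below is a hypothetical flat price for illustration only; real provider pricing is tiered, which is why this won't reproduce the table exactly:

```shell
# cost in $ for n responses of b bytes at p dollars per (decimal) GB
cost() { awk -v b="$1" -v n="$2" -v p="$3" 'BEGIN { printf "%.2f\n", b * n / 1e9 * p }'; }

cost 46947 100000 0.09   # regular payloads:    0.42
cost 9766  100000 0.09   # compressed payloads: 0.09
```

Whatever the actual $/GB, the saving scales with the compression ratio, so the relative picture holds.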

Faster responses

On top of the savings on network traffic, keep in mind that our responses will also be faster, because a smaller payload needs to reach the client.

In the previous example, we saw that the regular payload is roughly 46.9 kB while the compressed one is roughly 9.8 kB. This time I’m going to use a speed test tool to measure my download speed from a distant server.

Considering 100k requests again, we can calculate that issuing them from my personal computer in London will take:

  • 41.45 seconds for 100k responses with regular payloads
  • 8.61 seconds for 100k responses with compressed payloads

That means I’ll get to enjoy my movie roughly 32 seconds sooner, and in the digital era, that will add up.
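For the curious, timings like these can be reproduced with the same kind of arithmetic: total bytes divided by download speed. The 113 MB/s below is the approximate speed implied by the figures above, so treat it as an assumption:

```shell
# seconds to download n responses of b bytes at r MB/s (decimal megabytes)
transfer_secs() { awk -v b="$1" -v n="$2" -v r="$3" 'BEGIN { printf "%.2f\n", b * n / (r * 1e6) }'; }

transfer_secs 46947 100000 113   # regular:    41.55 seconds
transfer_secs 9766  100000 113   # compressed: 8.64 seconds
```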

Implementation sample via Spring Boot

Mature servers and libraries usually support HTTP Compression with little or no implementation work; sometimes it’s as simple as adding a single line of code.

As an example, when using Java with Spring Boot you only need one line in the application properties file:

server.compression.enabled=true
This is the only mandatory change because the compression-related properties have sensible default values. My personal recommendation is to read the full list of compression-related properties in the official Spring documentation (or the docs for your technology of choice) and tweak the values to fit your needs.
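If you do want to tweak the defaults, two commonly adjusted Spring Boot properties look like this (the values shown are illustrative; check the Spring Boot documentation for your version's exact defaults):

```properties
# compress only content types that actually benefit from it
server.compression.mime-types=application/json,application/xml,text/html,text/plain
# skip compression for small payloads where the gzip overhead isn't worth it
server.compression.min-response-size=2KB
```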

The major disadvantage is CPU consumption

The major drawback of HTTP Compression is that you'll consume more CPU power. The ideal setup is to use VMs that are not billed by CPU usage, so you can spend idle CPU cycles on compression. Otherwise, if you pay for those extra CPU computations, you need to check whether compressing your responses still makes financial sense.

The easiest way to mitigate this is to add a caching system so once you compress a response, you won’t need to compress it again. This would work best with data that doesn’t change often.
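A common way to apply this idea is to compress rarely-changing payloads once, ahead of time, instead of on every request. A minimal sketch (file names and contents are made up for illustration):

```shell
# pretend beers.json is a hot, rarely-changing response body
printf '{"beers":[{"name":"Punk IPA"},{"name":"Elvis Juice"}]}' > beers.json

# compress it once, e.g. at deploy time; -k keeps the original file
gzip -kf -9 beers.json

# a server such as nginx (with gzip_static enabled) can now serve
# beers.json.gz directly to clients that send Accept-Encoding: gzip
ls beers.json beers.json.gz
```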

Other considerations

Your cloud provider might make HTTP Compression a must, depending on your setup. For example, AWS API Gateway constrains you to a maximum payload size of 10 MB, so if you have bigger responses you can resort to HTTP Compression as a solution. Check what restrictions exist in your setup!

Depending on your tech stack, the implementation itself could take some time, so weigh the effort against the benefits before enabling it.


We’ve seen some pros and cons of enabling HTTP Compression but, in the end, it’s your application and usage that will tell you if it makes sense for you to enable it.
