
API Throttling


What is Request Throttling?

Request throttling limits the number of requests that can use your API in a given period of time.

“Madness” you say, “intentionally stop requests from reaching my API?”

Usually it's a good thing when people use your API, but not always.

Why do you want it?

There are a number of situations where allowing someone to use your API as much as they please is actually a bad thing, and you'll find the majority of APIs you use come with limits attached.

Some of the problems you could encounter without throttling are:

  • Legitimate users may use the service a lot more than you hoped
  • An attacker may perform rapid, automated attacks on your service in an attempt to find vulnerabilities
  • An attacker may direct lots of traffic to your API in the hope that the additional traffic will negatively impact your API, or stop it from working completely.

An attacker here could simply be a hacker attempting to gain something from the attack, or could even be a competitor attempting to stop your customers from accessing your API.

The impact of the above attacks could be:

  • Your service ends up costing a lot of money to run
  • Attackers could very rapidly find and exploit vulnerabilities in your API
  • Your API may be used to the point that it no longer works, meaning legitimate customers can no longer access it – a Denial of Service attack

How do Throttles work?

A throttle can use various attributes to base request limiting on, including:

  • A logged in user’s ID
  • An API Key given to the user
  • The IP address that is the source of the request – although this isn’t always useful as lots of users can appear behind a single IP address
  • A specific resource – you may only want a certain action to be performed so often
  • More than one of the above – for example, you may use the API key and, if that isn't present, fall back to the IP address (see the sketch after this list)
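As a rough sketch of that last point, here's what choosing a throttle key might look like in Python. The `X-Api-Key` header name and the `headers`/`remote_addr` inputs are assumptions made for the example, not any particular framework's API.

```python
def throttle_key(headers: dict, remote_addr: str) -> str:
    """Pick the attribute to throttle on: prefer the API key, fall back to the source IP."""
    api_key = headers.get("X-Api-Key")  # assumed header name, purely for illustration
    if api_key:
        return f"key:{api_key}"
    return f"ip:{remote_addr}"


# e.g. throttle_key({"X-Api-Key": "abc123"}, "203.0.113.7") -> "key:abc123"
```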

A throttle can work in different ways:

  • A single limit over time, e.g. 10 requests per second
  • Multiple limits over time, e.g. 10 requests per second and 10,000 requests per day
  • Various algorithms to apply limits, such as Token Bucket and Leaky Bucket (a Token Bucket sketch follows this list)
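To make that last bullet a little more concrete, here's a minimal, in-memory Token Bucket sketch in Python. It's deliberately simplified: a real implementation would need thread safety and, usually, a shared data store (see below).

```python
import time


class TokenBucket:
    """Allow bursts of up to `capacity` requests, refilling at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top the bucket up according to how much time has passed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# e.g. roughly 10 requests per second, allowing bursts of up to 10
bucket = TokenBucket(rate=10, capacity=10)
```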

When using a throttle limit, the server might be polite enough to provide the client with information to help it determine how close to the limit it is. This can be done in the form of headers (such as “x-rate-limit-remaining” and “x-rate-limit-reset”) which state the number of requests remaining and the time at which the count is reset.
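For example, a helper that builds those headers might look like this; the exact header names, and whether the reset value is a timestamp or a number of seconds, vary between APIs, so treat this as a sketch rather than a standard.

```python
import time


def rate_limit_headers(remaining: int, reset_at: float) -> dict:
    """Headers that tell the client how many requests are left and when the count resets."""
    return {
        "x-rate-limit-remaining": str(remaining),
        "x-rate-limit-reset": str(int(reset_at)),  # a Unix timestamp in this sketch
    }


# e.g. 42 requests left in a window that resets 30 seconds from now
headers = rate_limit_headers(remaining=42, reset_at=time.time() + 30)
```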

If a limit is reached then the server should respond with a 429 HTTP response code, which means “Too Many Requests”. This may result in a complete ban from using the API for a given time period, or may just be repeated for every request until the natural expiry of the limit, e.g. a new minute / hour / day, which allows more requests. Again, it is considered polite for the server to inform the client how long it will be before another request can be made.
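Putting those pieces together, a hedged sketch of that check at the edge of a request handler might look like the following. It assumes a limiter with an `allow()` method (such as the Token Bucket sketch above) and uses a generic (status, headers, body) tuple rather than any particular framework's response type.

```python
def handle_request(bucket, seconds_until_reset: int):
    """Return a (status, headers, body) tuple; respond with 429 once the limit is hit."""
    if not bucket.allow():
        # Tell the client how long to wait before it's worth trying again.
        return 429, {"Retry-After": str(seconds_until_reset)}, "Too Many Requests"
    return 200, {}, "OK"
```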

The implementation of throttling requires a data store of previous requests. This store could take the form of an in-memory cache, which would only work if you had a single server, or a Redis instance, which allows the cache to be shared across servers, giving you a more robust, scalable solution.
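As an illustration of the shared-store approach, here's a minimal fixed-window counter using the Python `redis` client. It assumes a Redis instance is reachable on localhost; a production version would also need to cope with Redis being unavailable.

```python
import time

import redis

# Assumed connection details for the example; adjust for your setup.
r = redis.Redis(host="localhost", port=6379)


def over_limit(key: str, limit: int, window_seconds: int) -> bool:
    """Fixed-window counter shared across servers: count requests per key, per window."""
    window = int(time.time() // window_seconds)
    redis_key = f"throttle:{key}:{window}"
    count = r.incr(redis_key)
    if count == 1:
        # First request in this window: let the counter expire along with the window.
        r.expire(redis_key, window_seconds)
    return count > limit
```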

Once you’ve got your data store you need to define what your throttle limits are. The limit you set should take into account how the clients of the API are likely to use it, while also paying some thought to costs and how much your infrastructure can support. If you’re in a position where you’ve load tested your API then you’ll have an understanding of the amount of load it can handle, and this may also help to inform your decision. The ability to easily change the throttle limit is useful and means you can adjust the limit to suit the current situation.
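One simple way to keep the limits easy to change is to read them from configuration rather than hard-coding them; for example, from environment variables (the variable names and defaults here are made up for the sketch).

```python
import os

# Hypothetical settings, purely for illustration.
REQUESTS_PER_MINUTE = int(os.environ.get("THROTTLE_REQUESTS_PER_MINUTE", "60"))
REQUESTS_PER_DAY = int(os.environ.get("THROTTLE_REQUESTS_PER_DAY", "10000"))
```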

The Downside

Clearly there are costs involved in adding throttling to your service: you're going to add some infrastructure and complexity in order to get the benefits. So it's perhaps not a day-one necessity for your API, but it should be considered as part of your ongoing security and defence-in-depth stance.

