Travis McCormack

Brute Force Attacks and Rate Limiting

Updated: Feb 1

So what are brute force attacks and how do we prevent them?

Brute force guessing attacks are some of the most common web application attacks because they are simple: try combinations of data over and over again until the correct guess is made. Hence the name: brute force. But where do we commonly see these attacks employed?

The most obvious answer is credentials. Everyone knows that attackers will try to guess usernames and passwords for services such as RDP, SSH, or your banking web app, for obvious reasons. But those aren't the only places brute force is utilized. We also see brute force guessing for things like:

  • Insecure Direct Object Reference (IDOR) enumerations

  • Cracking JWT secrets to attempt modification and re-signing

  • Username enumeration attempts

  • Directory enumeration/spidering

These are just a few examples beyond the most well-known brute force implementations. So now that we are familiar with what a brute force attack is, how do we protect against one? That is where rate limiting, amongst other controls such as account lockouts, comes into play.

What is rate limiting?

Rate limiting is simply configuring our environment to reject requests that come in too rapidly from a given source. This can be accomplished in many ways, such as with a Web Application Firewall (WAF), a traditional firewall, or your cloud provider's control panel settings for its equivalent of a firewall or WAF, depending on what they call it.

All of these are viable options, but the important aspect in configuring them is determining what your limits should be. For example, your authentication endpoint should probably utilize both a rate limiter AND an account lockout policy. That way, even if an attacker is using multiple sources, such as a botnet, they will not be able to completely bypass your limits by coming from multiple source IPs. The account lockout will still protect your users because, after a set number of failed guesses (five to ten usually feels right), the account becomes inaccessible to the attacker.

Another use of rate limiting is protecting your service from a Denial of Service (DoS) condition caused by over-utilization. For example, let's say you have a public API that is available to your customers. You wouldn't want one customer, unless it was part of a service agreement, to be able to query your endpoints countless times, causing significant load that impacts your other customers. So we can utilize rate limiting the way the Twitter API does, where your authorization token is allowed to make a certain number of queries within a day, week, month, etc. This can also be a useful tactic if your service is focused on enabling your clients to look up data and you wish to have them pay for different packages based on usage.

Overall, this is a fairly simple but useful security control as well as a quality control. Nowadays rate limiting directives are fairly easy to set up, especially for those using cloud services who don't have to configure their own software or hardware to do it. Even so, we commonly test APIs that lack any rate limiting protections. This is yet another low-severity finding that is commonly reported and is quite easy to remedy, and it should absolutely be considered by everyone creating a web app. When you tune your limits to be more than the average customer should ever need, they still add a layer of protection without impacting the user.
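To make the layered approach above concrete, here is a minimal application-level sketch combining a per-IP sliding-window rate limiter with a per-account lockout counter. This is purely illustrative: the class and method names are our own, not from any framework, and a real deployment would back this with shared storage such as Redis rather than in-process dictionaries.

```python
import time
from collections import defaultdict


class LoginGuard:
    """Toy per-IP rate limiter plus per-account lockout counter."""

    def __init__(self, max_per_window=10, window_secs=60, lockout_after=5):
        self.max_per_window = max_per_window
        self.window_secs = window_secs
        self.lockout_after = lockout_after
        self.requests = defaultdict(list)  # ip -> recent request timestamps
        self.failures = defaultdict(int)   # username -> failed login attempts

    def allow_request(self, ip, now=None):
        """Sliding window: allow if the IP made fewer than
        max_per_window requests in the last window_secs seconds."""
        now = time.monotonic() if now is None else now
        recent = [t for t in self.requests[ip] if now - t < self.window_secs]
        self.requests[ip] = recent
        if len(recent) >= self.max_per_window:
            return False
        recent.append(now)
        return True

    def record_failure(self, username):
        self.failures[username] += 1

    def is_locked(self, username):
        return self.failures[username] >= self.lockout_after
```

Note how the two controls are independent: the window tracks source IPs, while the lockout tracks target accounts, so a botnet rotating IPs still trips the lockout.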

Examples of configuring rate limiting

NGINX: Rate limiting in NGINX is activated using the limit_req_zone and limit_req directives. An example of limiting your login endpoint, adapted from NGINX's documentation:

limit_req_zone $binary_remote_addr zone=myLimit:10m rate=10r/s;
server {
    location /login/ {
        limit_req zone=myLimit;
        proxy_pass http://my_upstream;
    }
}

This example would limit requests to your /login endpoint to a maximum rate of 10 requests per second per client IP. This is obviously configurable to whatever parameters you deem acceptable.

IPTables: Good old IPTables has a rate limiter as well, configured via the limit module. In this example we make a chain and rate limit the incoming connections it matches to 40 connections per second. This particular example is not port/service specific, but you can also add a simple specification, such as matching 22/tcp, as a way to limit SSH for example.

sudo iptables --new-chain LIMITER
sudo iptables --append LIMITER --match limit --limit 40/sec --jump ACCEPT
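For a sense of what mechanisms like NGINX's rate= parameter and the iptables limit module are doing under the hood, here is a sketch of a token bucket, the classic scheme behind many such limiters: each request spends a token, tokens refill at a fixed rate, and the bucket's capacity bounds the allowable burst. This is an illustrative sketch, not how either tool literally implements it, and the names are our own:

```python
import time


class TokenBucket:
    """Allow `rate` requests per second on average, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket created with rate=10 and capacity=5 lets a client burst 5 requests at once, then sustain about 10 per second afterwards, which is the same shape of policy the configurations above express.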

Azure: We won't re-create the excellent walk-through from Microsoft here, but their API rate limiting documentation will get you started.

AWS: AWS' documentation is very thorough as well. Again, it is geared towards an API, but that is because APIs are the most common use of rate limiting to begin with.

We hope this helps you better understand the usefulness of rate limiting and some basic steps for implementing it in your environment. There are definitely performance AND security implications to utilizing this technology, and we highly recommend checking it out!

Are you looking for a security assessment for your network or applications? Send us an email at
