Sharp eye, my friend! Thanks for pointing that out. I have fixed it :)
Thanks for the article and the explanation of both strategies.
One additional factor that improves web request rate limiters is that web servers can return a Retry-After header when the limit on their side is reached. Honoring that header makes the rate limiting dynamic on a per-domain basis: even when your configured rate is too high (for example, after the other side reconfigured theirs), the rate limiter will adapt and be a good web citizen (don’t we all want that?)
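For anyone curious, honoring Retry-After could look roughly like this. A minimal sketch: `RetryAfterClient` and its `request/1` stub are hypothetical stand-ins for whatever HTTP client the app uses, and it assumes the header carries a delay in seconds (real Retry-After values can also be HTTP dates, which this ignores):

```elixir
defmodule RetryAfterClient do
  # Hypothetical wrapper: on a 429, wait as long as the server asks, then retry.
  def request_with_backoff(url, attempts \\ 3)

  def request_with_backoff(_url, 0), do: {:error, :retries_exhausted}

  def request_with_backoff(url, attempts) do
    case request(url) do
      {:ok, %{status: 429, headers: headers}} ->
        delay_s =
          headers
          |> Enum.find_value("1", fn {key, value} ->
            if String.downcase(key) == "retry-after", do: value
          end)
          |> String.to_integer()

        Process.sleep(delay_s * 1000)
        request_with_backoff(url, attempts - 1)

      other ->
        other
    end
  end

  # Stand-in for the real HTTP call.
  defp request(_url), do: {:ok, %{status: 200, headers: []}}
end
```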
Good article! However, a GenServer can itself become a bottleneck; for something that's intended to get a lot of hits, this could become a scalability concern if your traffic is high enough!
If a GenServer sees a lot of requests it can indeed become a bottleneck. I touch on that point towards the end of the article, when covering the standard deviation in times as throughput increases (scaling up to 100 req/sec). In addition, by leveraging Task.Supervisor and delegating the actual requests to a separate process, the work the GenServer does is minimal and it merely acts as a coordinator (i.e. the GenServer is not blocked for the duration of the mock HTTP request).
Like most things in software...it depends on your application whether this model is a good fit or not :).
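As a rough illustration of the hand-off described above (the module names here are made up for the sketch, not taken from the article, and `MyApp.TaskSupervisor` is assumed to be started in the supervision tree):

```elixir
defmodule RateLimiter do
  use GenServer

  # The GenServer only coordinates: it fires the request in a separate
  # supervised task, so the GenServer itself is never blocked on I/O
  # and can keep handling incoming calls.
  def handle_call(:make_request, _from, state) do
    Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
      # The (mock) HTTP request runs here, outside the GenServer.
      do_request()
    end)

    {:reply, :ok, state}
  end

  # Stand-in for the actual HTTP call.
  defp do_request, do: :ok
end
```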
Thanks for the article.
Small typo in the TokenBucket iex snippet:
:sys.get_state(RateLimitingPrivate.RateLimiters.TokenBucket)
# instead of
:sys.get_state(PaymentsClient.RateLimiters.TokenBucket)