API Traffic Shaping/Throttling Strategies For Tenant Isolation

I'll start my question by providing some context about what we're doing and the problems we're facing.
We are currently building a SaaS (hosted on Amazon AWS) that consists of several microservices that sit behind an API gateway (we're using Kong).
The gateway handles authentication (through consumers with API keys) and exposes the APIs of these microservices that I mentioned, all of which are stateless (there are no sessions, cookies or similar).
Each service is deployed using ECS services (one or more docker containers per service running on one or more EC2 machines) and load balanced using the Amazon Application Load Balancer (ALB).
All tenants (clients) share the same environment, that is, the very same machines and resources. Given our business model, we expect to have few but "big" tenants (at first).
Most of the requests to these services translate into heavy resource usage (mainly CPU) for the duration of the request. The time needed to serve one request is in the range of 2-10 seconds (not milliseconds like traditional "web-like" applications). This means we serve relatively few requests per minute, where each one of them takes a while to process (background or batch processing is not an option).
Right now, we don't have a strategy to limit or throttle the number of requests that a tenant can make in a given period of time. Taking into account the last two considerations above, it's easy to see this is a problem, since it's almost trivial for a tenant to make more requests than we can handle, causing a degradation of the quality of service (even for other tenants, because of the shared-resources approach).
We're thinking of strategies to limit/throttle or in general prepare the system to "isolate" tenants, so one tenant can not degrade the performance for others by making more requests than we can handle:
Rate limiting: Define a maximum requests/minute that a tenant can make. If more requests arrive, drop them. Kong even has a plugin for it. Sadly, we use a "pay-per-request" pricing model and the business does not allow us to use this strategy, because we want to serve as many requests as possible in order to get paid for them. If excess requests take more time for a tenant, that's fine.
Tenant isolation: Create an isolated environment for each tenant. This one has been discarded too, as it makes maintenance harder and leads to lower resource usage and higher costs.
Auto-scaling: Bring up more machines to absorb bursts. In our experience, Amazon ECS is not very fast at doing this and by the time these new machines are ready it's possibly too late.
Request "throttling": Using algorithms like Leaky Bucket or Token Bucket at the API gateway level to ensure that requests hit the services at a rate we know we can handle.
Right now, we're inclined to take option 4. We want to implement the request throttling (traffic shaping) in such a way that all requests made within a previously agreed rate with the tenant (enforced by contract) would be passed along to the services without delay. Since we know in advance how many requests per minute each tenant is going to be making (estimated, at least), we can size our infrastructure accordingly (plus a safety margin).
If a burst arrives, the excess requests would be queued (up to a limit) and then released at a fixed rate (using the leaky bucket or a similar algorithm). This would ensure that a tenant cannot impact the performance of other tenants, since requests will hit the services at a predefined rate. Ideally, the allowed request rate would be "dynamic" in such a way that a tenant can use some of the "requests per minute" of other tenants that are not using them (within safety limits). I believe this is called the "Dynamic Rate Leaky Bucket" algorithm. The goal is to maximize resource usage.
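To make the idea more concrete, here is a rough, in-process sketch of the kind of per-tenant gate I have in mind (names and numbers are made up; in our real setup the bucket state would have to live in something shared like Redis, since we run multiple Kong instances):

    // Per-tenant "leaky bucket" gate: requests inside the agreed rate pass with no
    // delay; excess requests are held back and released at a fixed pace; once the
    // backlog exceeds the bucket capacity they are rejected. Illustrative only.
    using System;
    using System.Collections.Concurrent;

    public sealed class TenantShaper
    {
        private sealed class Bucket
        {
            public readonly object Gate = new object();
            public double Level;                      // "water" currently in the bucket
            public DateTime LastLeak = DateTime.UtcNow;
        }

        private readonly ConcurrentDictionary<string, Bucket> _buckets = new ConcurrentDictionary<string, Bucket>();
        private readonly double _ratePerSecond;       // agreed requests per second for the tenant
        private readonly double _capacity;            // how many excess requests may wait

        public TenantShaper(double ratePerSecond, double capacity)
        {
            _ratePerSecond = ratePerSecond;
            _capacity = capacity;
        }

        // Returns how long this request should be delayed, or null to reject it.
        public TimeSpan? Admit(string tenantId)
        {
            var bucket = _buckets.GetOrAdd(tenantId, _ => new Bucket());
            lock (bucket.Gate)
            {
                var now = DateTime.UtcNow;
                // Drain the bucket at the agreed rate since the last request.
                bucket.Level = Math.Max(0.0, bucket.Level - (now - bucket.LastLeak).TotalSeconds * _ratePerSecond);
                bucket.LastLeak = now;

                if (bucket.Level >= _capacity)
                    return null;                      // backlog full: reject (or 429)

                var delaySeconds = bucket.Level / _ratePerSecond;  // position in the virtual queue
                bucket.Level += 1;
                return TimeSpan.FromSeconds(delaySeconds);
            }
        }
    }

The gateway (or plugin) would call Admit(tenantId) for every request, hold the request for the returned delay before forwarding it, and reject when it gets null back.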
My questions are:
Is the proposed strategy a viable one? Do you know of any other viable strategies for this use case?
Is there an open-source, commercial or SaaS service that can provide these traffic shaping capabilities? As far as I know, Kong and Tyk do not support anything like this, so... is there any other API gateway that does?
In case Kong does not support this, how hard would it be to implement something like what I've described as a plugin? We have to take into account that it would need some shared state (using Redis, for example) as we're using multiple Kong instances (for load balancing and high availability).
Thank you very much,
Mikel.

Managing a request queue on the gateway side is indeed a tricky thing, and probably the main reason it is not implemented in these gateways is that it is really hard to do right. You need to handle all the distributed-system cases, and in addition it is hard to make it "safe", because "slow" clients quickly consume machine resources.
Such a pattern is usually offloaded to client libraries, so when a client hits the rate-limit status code, it uses something like an exponential backoff technique to retry requests. It is way easier to scale and implement.
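As a rough illustration of that client-side pattern (the helper below is made up, not from any particular library):

    // Retry on HTTP 429 with exponential backoff. A new HttpRequestMessage is
    // created per attempt because a request message cannot be sent twice.
    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class BackoffClient
    {
        private static readonly Random Jitter = new Random();

        public static async Task<HttpResponseMessage> SendWithBackoffAsync(
            HttpClient client, Func<HttpRequestMessage> makeRequest, int maxAttempts = 5)
        {
            for (int attempt = 1; ; attempt++)
            {
                var response = await client.SendAsync(makeRequest());
                if (response.StatusCode != (HttpStatusCode)429 || attempt >= maxAttempts)
                    return response;

                // 1s, 2s, 4s, ... plus a little jitter so clients don't retry in lockstep.
                var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt - 1))
                          + TimeSpan.FromMilliseconds(Jitter.Next(0, 250));
                await Task.Delay(delay);
            }
        }
    }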
Can't say for Kong, but Tyk in this case provides two basic numbers you can control: quota - the maximum number of requests a client can make in a given period of time - and rate limits - a safety protection. You can set a rate limit 1) per "policy", e.g. for a group of consumers (for example if you have multiple tiers of your service with different allowed usage/rate limits), 2) per individual key, 3) globally for the API (works together with key rate limits). So, for example, you can set some moderate client rate limits and cap the total with the global API setting.
If you want a fully dynamic scheme and want to re-calculate limits based on cluster load, it should be possible. You will need to write and run this scheduler somewhere; from time to time it will perform the re-calculation based on current total usage (which Tyk calculates for you, and which you get from Redis) and will talk to the Tyk API, iterating through all keys (or policies) and dynamically updating their rate limits.
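Just to show the shape of such a scheduler (the two methods at the bottom are placeholders for reading the usage counters from Redis and calling the gateway's admin API, not real endpoints):

    // Periodically re-distributes unused capacity across keys: every key keeps its
    // contracted rate and gets an equal share of whatever headroom is left.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    public sealed class RateRebalancer
    {
        private readonly double _clusterCapacityPerMinute;              // what the backend can handle
        private readonly IDictionary<string, double> _contractedRates;  // agreed requests/minute per key

        public RateRebalancer(double clusterCapacityPerMinute, IDictionary<string, double> contractedRates)
        {
            _clusterCapacityPerMinute = clusterCapacityPerMinute;
            _contractedRates = contractedRates;
        }

        public async Task RunOnceAsync()
        {
            IDictionary<string, double> usage = await GetUsagePerKeyAsync();      // e.g. from Redis counters
            double spare = Math.Max(0.0, _clusterCapacityPerMinute - usage.Values.Sum());

            foreach (var pair in _contractedRates)
            {
                double newLimit = pair.Value + spare / _contractedRates.Count;    // contracted + fair share
                await UpdateKeyRateLimitAsync(pair.Key, newLimit);                // gateway admin API call
            }
        }

        // Placeholders: wire these to your Redis counters and to the Tyk/Kong admin API.
        private Task<IDictionary<string, double>> GetUsagePerKeyAsync() => throw new NotImplementedException();
        private Task UpdateKeyRateLimitAsync(string key, double requestsPerMinute) => throw new NotImplementedException();
    }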
Hope it makes sense :)

Related

How to limit the number of outgoing web requests per second?

Right now, I am working on an ASP.NET Core Web API that calls an external web service and uses the returned data in its own response. This is working fine.
However, I discovered that the external service is not as scalable as I would like. Therefore, as discussed with the company providing this external service, the number of outgoing requests needs to be limited to one per second. I also use caching to reduce the number of outgoing requests, but this has been shown to be not effective enough because, logically, it only works when a lot of requests are the same so cached data can be reused.
I have been doing some investigation on rate limiting, but the documented examples and types are far more complicated than what I need. This is not about partitions, tokens or concurrency. What I want is far simpler: just 1 outgoing request per second, and that's all.
I am not that experienced when it comes to rate limiting. I just started reading the referenced documentation and found out that there is a nice and professional package for it. But the examples are more complicated than what I need, for the reasons explained. It is not about tokens or concurrency; it is the number of outgoing requests per second that needs to be limited.
Possibly, there is a way to use the package System.Threading.RateLimiting in such a way that this is possible by applying the RateLimiter class correctly. Otherwise, I may need to write my own DelegatingHandler implementation to do this. But there must be a straightforward way, which people with experience in rate limiting can explain to me.
So how do I limit the number of outgoing web requests per second?
In addition, what I want to prevent is a 429 or similar in case of too many requests. In such a situation, the process should just take more waiting time to complete, so that the number of outgoing requests stays limited.
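Something along these lines is what I have in mind, if it makes sense (an untested sketch combining a RateLimiter with a DelegatingHandler):

    // Every outgoing call waits for a permit from a 1-permit-per-second limiter,
    // so the external service is never called more than once per second and the
    // caller simply waits longer instead of getting a 429 back.
    using System;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.RateLimiting;
    using System.Threading.Tasks;

    public sealed class OutgoingRateLimitHandler : DelegatingHandler
    {
        // Shared across handler instances so the whole process stays at 1 request/second.
        private static readonly RateLimiter Limiter = new FixedWindowRateLimiter(
            new FixedWindowRateLimiterOptions
            {
                PermitLimit = 1,                                  // 1 request...
                Window = TimeSpan.FromSeconds(1),                 // ...per second
                QueueProcessingOrder = QueueProcessingOrder.OldestFirst,
                QueueLimit = int.MaxValue                         // queue excess calls instead of failing them
            });

        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            // Waits here until a permit becomes available (or the token is cancelled).
            using RateLimitLease lease = await Limiter.AcquireAsync(1, cancellationToken);
            return await base.SendAsync(request, cancellationToken);
        }
    }

Registered on the HttpClient that talks to the external service (for example via AddHttpMessageHandler), this would make excess calls simply wait their turn.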

Caching authentication calls

Imagine that we have one authentication service and two applications that use it via HTTP calls. Where should we put a cache?
Only the authentication service has its cache (Redis, for example)
Caching is part of applications one and two
Putting your cache inside the applications is a judgement call. It will be far more efficient if they can cache in-memory, but the trade-off is that the cached data might be stale or invalid. This is ultimately a case-by-case decision, and depends on the security your application needs.
You will have to figure out what an acceptable TTL for your cache is, it could be anywhere between zero and eternal. If it's zero, then you've answered your question, and there is no value in a cache layer at all in the application.
Most applications can accept some level of staleness, at least on the order of a few seconds. If you are doing banking transactions, you likely cannot get away with this, but if you are creating a social-media application, you can likely have a TTL at least in the several-minutes range, possibly hours or days.
Just a bit of advice: since you are using HTTP for your implementation, take a look at using the cache-control mechanism that is baked into HTTP. It's likely your client will support it out of the box, and most of the large, complicated issues (cache expiration, store sizing, etc.) will be solved by people long before you.
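As a tiny illustration of caching inside the applications with a short TTL (the endpoint and the names are made up):

    // In-process cache in front of the authentication service's HTTP endpoint.
    // The TTL is the staleness trade-off discussed above.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Caching.Memory;

    public sealed class CachingAuthClient
    {
        private readonly HttpClient _http;
        private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());
        private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(30);  // acceptable staleness

        public CachingAuthClient(HttpClient http) => _http = http;

        public async Task<bool> IsTokenValidAsync(string token)
        {
            return await _cache.GetOrCreateAsync("auth:" + token, async entry =>
            {
                entry.AbsoluteExpirationRelativeToNow = Ttl;
                // Hypothetical validation endpoint on the authentication service.
                var response = await _http.GetAsync("/validate?token=" + Uri.EscapeDataString(token));
                return response.IsSuccessStatusCode;
            });
        }
    }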

ISO-8583 message processing (defining priority of messages)

I need to get an understanding of an ISO-8583 message platform. Let's say I want to perform an authorization of a card transaction, and in real time, at a particular instant, I get 100000 requests from the network (VISA/MASTERCARD), all for authorization. How do I define the priority of those requests and responses? Can the connection pool handle it (in my case it's Hikari)? How is it done by banks/financial institutions when authorizing a request? Please give me some insight into how to manage all these requests. Should I go for an MQ?
Tech used: Spring Boot, Hibernate, spring-tcp-starter.
Your question doesn't seem to be very well researched, as there are a ton of switch platforms out there that do this today, and many of their technology guides can be found on the web, including for major vendors like ACI, FIS, AJB, etc., if you look hard enough.
I have worked with several ISO interface specifications, commercial switches, and home-grown platforms, and they are actually pretty consistent in how they do the core realtime processing.
This information on prioritization is generally in each ISO-8583 message processing specification, and it is made explicitly clear in almost every specification I've ever read that was written by someone who is familiar with ISO-8583 and not just making up their own variant or copying someone else's.
That said, in general, at a high level, authorization/financial (0100, 0200) requests always have higher priority than force-post (0x20) messages.
Administrative messages in the 05xx, 06xx and 08xx ranges sometimes also get bumped up above other advices, but these are still advices, and auths/financials are almost always processed first, as they A) impact the customer and B) have much tighter timers than any advice, usually by double or more.
Most switches I have seen handle the core authorization process entirely in memory, without going to an MQ or some other disk-backed queue to manage these - not to say that some sort of home-grown middleware isn't sometimes involved - but non-realtime processes regularly use an MQ or disk queue to hand work off to processes outside the approval path for store-and-forward (SAF) processing, and even many of those still use memory-only processing for the front of their queue.
It is important to also differentiate between 100000 requests and 100000 transactions. The various exchanges, both internal and external, make a big difference in the number of actual requests/responses in flight at any given time. A basic transaction can be accomplished in about two messages, but some of the more complex ones can easily exceed 20 messages just for a pre-authorization or a completion component.
If you are dealing largely with batch transaction bursts, I can see the challenge of queuing, but almost every application I have seen has a maximum in flight for advices and requests, separate from each other and sometimes even with different timers, and the apps pumping the transactions almost always wait for the response back before sending more. This tends to work fine for just about everyone, including big posting batches from retailers and card networks. So if your app doesn't have these limits, you probably need to add them.
In fact your 100000 requests should be sorted by (Terminal ID and/or Merchant ID) + (timestamp/local timestamp) + (STAN and/or RRN).
Duplicated transaction requests are expected to be rejected.
If you are simulating multiple requests from a single terminal (or host) with the same test card details, incrementing the STAN/RRN for each request would be the way to go.
Please refer to previous answers about STAN and RRN ISO 8583 fields.
In ISO message, what's the use of stan and rrn ?
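As a sketch of that kind of ordering (MTI-based priority first, then terminal, timestamp and STAN; the classes and exact priorities are illustrative - the real ordering comes from the interface specification you implement):

    // Orders incoming ISO-8583 messages: authorizations/financials first, then
    // (spec-dependent) administrative messages, then advices/force posts, and
    // within each class by terminal, local time and STAN.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    public sealed record IsoMessage(string Mti, string TerminalId, DateTime LocalTime, string Stan);

    public static class IsoOrdering
    {
        private static int Priority(IsoMessage m) => m.Mti switch
        {
            "0100" or "0200" => 0,            // authorization / financial requests first
            "0120" or "0220" or "0420" => 2,  // advices, force posts, reversal advices later
            _ => 1                            // 05xx/06xx/08xx admin: placement varies by spec
        };

        public static IEnumerable<IsoMessage> Ordered(IEnumerable<IsoMessage> incoming) =>
            incoming.OrderBy(Priority)
                    .ThenBy(m => m.TerminalId)
                    .ThenBy(m => m.LocalTime)
                    .ThenBy(m => m.Stan);
    }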

ZMQ device queue does not load balance properly

I know that ZMQ offers all the flexibility to do your own load balancing. However, I would expect the out-of-the-box broker - about 4 lines of code using the line
zmq_device (ZMQ_QUEUE, frontend, backend);
to load balance quite well as the documentation says it does load balance.
ZMQ_QUEUE creates a shared queue that collects requests from a set of clients, and distributes these fairly among a set of services. Requests are fair-queued from frontend connections and load-balanced between backend connections. Replies automatically return to the client that made the original request.
I have an army of back-end services, and yet I find that my front-end clients often have to wait several seconds for something that takes < 1/10 of a second in a 1:1 setting (there are the same number of client and service machines). I suspect that ZMQ is not load balancing properly out of the box - it's sending too many requests to the same service even though it doesn't have the bandwidth, etc.
I think this is partly because the services are multithreaded in a way that lets them take up to 10 concurrent requests, yet they slow down greatly near the 10th request even though they can still accept them. Random distribution would be ideal. Is there an out-of-the-box way to do this, or can it be done in a few lines of code, or do I have to write my own broker from scratch?
FWIW, the issue was that the workers were taking on work when they didn't have room for it; the issue was not in the ZMQ layer per se.

How well will WCF scale to a large number of client users?

Does anyone have any experience with how well web services build with Microsoft's WCF will scale to a large number of users?
The level I'm thinking of is in the region of 1000+ client users connecting to a collection of WCF services providing the business logic for our application, with these talking to a database - similar to a traditional 3-tier architecture.
Are there any particular gotchas that have slowed down performance, or any design lessons learnt that have enabled this level of scalability?
To ensure your WCF application can scale to the desired level I think you might need to tweak your thinking about the stats your services have to meet.
You mention servicing "1000+ client users" but to gauge if your services can perform at that level you'll also need to have some estimated usage figures, which will help you calculate some simpler stats such as the number of requests per second your app needs to handle.
Having just finished working on a WCF project, we managed to get 400 requests per second on our test hardware. Combined with our expected usage pattern of each user making 300 requests a day, that indicated we could handle an average of 100,000 users a day: 400 requests/second is roughly 34.5 million requests a day, which divided by 300 requests per user is about 115,000 users, so 100,000 leaves some headroom (assuming a flat usage graph across the day).
In addition, since it's fairly common to make the WCF service code stateless, it's pretty easy to scale out the actual WCF code by adding additional boxes, which means the overall performance of your system is much more likely to be limited by your business logic and persistence layer than it is by WCF.
WCF configuration default limits, concurrency and scalability
Probably the 4 biggest things you can start looking at first (besides just having good service code) are items related to:
Bindings - some bindings and the protocols they run on are just faster than others; TCP is going to be faster than any of the HTTP bindings
Instance Mode - this determines how your service class instances are allocated to calling sessions
One & Two Way Operations - if a response isn't needed back to the client, then do one-way
Throttling - max sessions / concurrent calls and instances (see the sketch below)
They did design WCF to be secure by default so the defaults are very limiting.
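To illustrate the throttling point, a rough sketch of raising those defaults in code (the service, address and numbers are placeholders; the WCF 3.x defaults were 16 concurrent calls, 10 sessions and 26 instances):

    // Self-hosted example: find or add a ServiceThrottlingBehavior and raise the
    // limits before opening the host. Tune the values from your own load tests.
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    [ServiceContract]
    public interface IBusinessService
    {
        [OperationContract]
        string Ping();
    }

    public class BusinessService : IBusinessService
    {
        public string Ping() => "pong";
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(BusinessService), new Uri("http://localhost:8080/business"));
            host.AddServiceEndpoint(typeof(IBusinessService), new BasicHttpBinding(), "");

            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }
            throttle.MaxConcurrentCalls = 64;
            throttle.MaxConcurrentSessions = 200;
            throttle.MaxConcurrentInstances = 264;   // roughly calls + sessions

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }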