Limiting a queue over time - RabbitMQ

I'm using an API with usage limits, let's say: no more than 10 calls per second, and no more than 5000 calls per day.
I am handling these calls with a beanstalkd queue and worker processes. How can I limit the processing of these jobs, keeping the API's limits in mind?

When you use beanstalkd you can pause a tube for a certain number of seconds.
When you reserve a job and the API call made while processing it fails because of the rate limit, you can pause the tube for X seconds.
You can find out how long to pause the tube either from the API response (usually it tells you that you are locked out until time X), or start with something adaptive, like pausing for the next 60 seconds and increasing/decreasing as you go.
If you know in advance that you can delay or disperse the work, you can also add a delay when placing a job into the queue so it won't execute immediately; this way your jobs are distributed over time.
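A minimal sketch of both ideas (pausing the tube after a rate-limit failure, and enqueueing jobs with a delay), assuming the Python greenstalk client and that it exposes the protocol's pause-tube command as pause_tube; call_api, RateLimitError, the tube name and the pause lengths are illustrative placeholders:

import greenstalk

TUBE = "api-calls"

class RateLimitError(Exception):
    """Hypothetical error raised by call_api() when the API reports you are rate limited."""
    retry_after = 60

def call_api(body: str) -> None:
    """Hypothetical API call; raises RateLimitError on a 429-style response."""

def enqueue(client: greenstalk.Client, payload: str) -> None:
    # Delayed put: the job becomes ready only after `delay` seconds,
    # which lets you spread jobs out over time in advance.
    client.put(payload, delay=30)

def worker(client: greenstalk.Client) -> None:
    # The client is assumed to `use` and `watch` the TUBE tube.
    while True:
        job = client.reserve()
        try:
            call_api(job.body)
        except RateLimitError as err:
            # Pause the whole tube so no worker touches it until the limit resets.
            client.pause_tube(TUBE, err.retry_after)
            client.release(job, delay=err.retry_after)  # try the job again later
        else:
            client.delete(job)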
There is also a great post about distributed rate limiting using Redis.

If all workers share durable state, they can update a shared status and collectively implement rate limiting.
If the only shared writable state is the queue itself, you could create ticketing tubes for the rate-limited jobs and have a rate limit manager insert tickets (permission slips) to control when the jobs get run. It would need changes to the workers and a way to time out unused tickets, but it should be workable.
Edit: a "valid until" timestamp in the ticket might do it for per-second limits. Per-day limits might need a feedback tube to let the rate limit manager know about actual usage (to implement a rolling 24-hour window instead of all 5000 getting reset at midnight).

Related

Rate Limit Pattern with Redis - Accuracy

Background
I have an application that sends HTTP requests to foreign servers. The application communicates with other services that have strict rate limit policies, for example 5 calls per second. Any call above the allowed rate gets a 429 error code.
The application is deployed in the cloud and runs as multiple instances. The tasks come from a shared queue.
The allowed rate is synchronized between instances using the Redis rate limit pattern.
My current implementation
Assuming the rate limit is 5 per second: I split time into multiple windows. Each window allows a maximum of 5 calls. Before each call I check whether the counter is less than 5. If yes, fire the request. If no, wait for the next window (after a second).
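A minimal sketch of that fixed-window check, assuming the redis-py client; the key names, limit and wait strategy are illustrative:

import time
import redis

r = redis.Redis()
LIMIT = 5                                  # calls per second

def try_acquire() -> bool:
    # One counter per one-second window; INCR + EXPIRE as described above.
    window = int(time.time())
    key = f"ratelimit:{window}"
    pipe = r.pipeline()
    pipe.incr(key)
    pipe.expire(key, 2)                    # old windows expire on their own
    count, _ = pipe.execute()
    return count <= LIMIT

def send_when_allowed(do_request) -> None:
    while not try_acquire():
        time.sleep(0.05)                   # wait for the next window
    do_request()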
The problem
In order to sync the instances through Redis, I need two Redis calls: INCR and EXPIRE. Let's say each call takes around 250ms to return, so the check itself takes ~500ms. That means in some cases you will be checking an old window, because by the time you get the answer the current second has already changed. If the next second then gets another 5 quick calls, it will lead to a 429 from the server.
Question
As you can see, this pattern does not really ensure that my application's rate stays at or below 5 calls/second.
How do you recommend doing it right?

Understanding why you would want to process Message Queues at a future time

So I'm trying to understand what practical problems queues solve. From reading all the information I can find on Google, I get the high-level idea:
Push message to Queue for processing at a later time
I'm looking at an architecture from Company A, and they have different use cases for job queueing, for example:
chat messages
file conversion
searching
heavy SQL queries
Why process it at a later time?
Here's my best guess...
Let's say I have an application that can process 10 "things" at a time.
My application then maxes out its processing capacity.
An 11th request comes in, so the app puts it in the queue for later processing.
Assuming this is a valid use case, wouldn't adding more servers to process more "things" make sense? Or is it that it's more costly to add servers than to employ a queue and sacrifice response time a little bit?
Given my Use Case examples, what other problems would Queues solve for them?
Have you ever lined up at a bank when it is busy? You would have waited in a queue.
"But," you could say, "wouldn't adding more staff to process more customers make sense? Is it because it's more costly to add more staff than employ a Queue and sacrifice response time a little bit?"
That would be correct. It can be quite costly to staff a bank based on the peak number of customers who would arrive each day. It is cheaper to staff below this level and have some customers wait in a queue.
Also, the number of customers each day is not 100% predictable. A queue allows excess demand to wait without breaking the system.
Queues enable decoupling.
For example, imagine an online store where customers purchase an item. They select the item, provide a credit card number and click 'Purchase'. If the credit card is declined, the online store can immediately prompt them to re-enter the number. This interaction has to take place immediately while the customer is still online.
However, there is no need to have the customer wait while an invoice is generated, a record is added to the accounting system and inventory is pulled off the shelf. This can be decoupled from the ordering process. A good way to do it is to push the order into a queue, which can be handled by the next system.
If that 'next system' happens to be offline at the moment, there is no reason to cancel the whole sale. The transaction can be processed when the 'next system' comes back online. This is much better than failing the whole process just because one component (which is not required immediately) has a failure.
Bottom line: Queues are excellent. They enable better handling of failures. They make things more resilient (just wait a few minutes and try again!). They should be used whenever the process is compatible with a queuing architecture.
Let's do scenarios
Scenario 1, without a queue:
you request an endpoint /blabla/do-everything/
this request does the following:
download an image from a very slow FTP server
e.g. 1.5 sec (can error; retry? add +X sec)
attach the image to an email
send an email (3 sec)
e.g. 1 sec (can error; retry? add +X sec)
confirmation received > store the confirmation with a third-party tracking service
e.g. 1.5 sec (can error; retry? add +X sec)
when tracking confirms, update your data with another third-party service for big-data purposes
e.g. 2 sec (can error; retry? add +X sec)
... you get the idea
return the response e.g. 11 sec later (this is too slow), or more, or a timeout when everything failed
The end user says: "the internet was faster 20 years ago, maybe I need to change my internet connection or my 16 threads."
Scenario 2, queue everything you can:
you request an endpoint /blabla/do-everything/
this request does the following:
queue a job "DO_EVERYTHING"
e.g. 0.02 sec
return the response in less than 0.250 sec
The end user says: "this website/app is so fast, I can keep my 56K internet connection."
in a queue/event system, a failed job can be retried later without affecting the end user
you can pause jobs and add an unlimited number of tasks/steps after the original message
better fault tolerance
Working with a queue also allows a better micro/nano-service architecture and better testing, because you can test a single job instead of a full controller that does everything...
Yes, it may be more work and more thinking, but in the end you don't need to think about work while on holiday.
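A rough sketch of scenario 2, assuming the same greenstalk client as in the first question; the endpoint handler, job payload and process() step are placeholders:

import json
import greenstalk

def handle_do_everything(order_id: str) -> dict:
    # The endpoint only enqueues the slow work and returns immediately.
    client = greenstalk.Client(("127.0.0.1", 11300), use="jobs")
    client.put(json.dumps({"type": "DO_EVERYTHING", "order_id": order_id}))
    client.close()
    return {"status": "accepted"}          # responds in milliseconds

def process(payload: dict) -> None:
    """Hypothetical slow work: download image, send email, store tracking, ..."""

def worker() -> None:
    client = greenstalk.Client(("127.0.0.1", 11300), watch="jobs")
    while True:
        job = client.reserve()
        try:
            process(json.loads(job.body))
        except Exception:
            client.release(job, delay=60)  # retry later; the end user is unaffected
        else:
            client.delete(job)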

Why is the send rate lower than the configured rate in config.yaml (Hyperledger Caliper), even when using only one client?

I configured the send rate at 500 TPS and I am using only one client, so the send rate should be around 500 TPS, but in the generated report the send rate is around 130-140 TPS. Why is there so much deviation?
I am using the fabric-ccp version of Caliper.
I expected a send rate around 450-480, but the actual send rate is around 130-140 TPS.
Node.js is single-threaded (async/await just means deferred execution, not parallel execution). Caliper runs a loop with the following steps:
Wait for the rate controller to enable the next TX.
Create an async operation in which the user module calls the blockchain adapter.
All of the pending TXs eat up some CPU time (when not waiting for I/O), plus other operations are also scheduled (like sending updates about TXs to the master process).
To reach 500 TPS, the rate controller must enable a TX every 2ms. That's not a lot of time. Try spawning more than one local client so the load is shared among them (100 TPS per client for 5 clients, 50 TPS per client for 10 clients, etc.).
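A tiny illustration of that arithmetic (plain Python, not Caliper code; the client counts are just examples):

target_tps = 500
print(f"interval between TXs: {1000 / target_tps} ms")   # 2.0 ms

for clients in (1, 5, 10):
    per_client = target_tps / clients
    print(f"{clients} client(s): {per_client:.0f} TPS each, "
          f"one TX every {1000 / per_client:.1f} ms per client")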

JMeter: how to get a large number of RPS in JMeter

I'm load testing a web app using JMeter and I'm having a hard time figuring out how to properly set the number of threads, ramp-up and loops in order to get a large number of RPS. I want to check whether my server can keep up with 500 RPS. Can anyone help me set it properly? Thanks.
The number of requests per unit of time is called Throughput and mainly depends on two factors:
Number of active threads
Your application response time
The first one is obvious: more threads -> more requests per second. However, each JMeter thread waits for the response to its previous request before starting the next one, so application response time matters as well.
So the recommendations are:
Set the number of threads in the Thread Group to the number of anticipated users of your system.
Set the ramp-up period according to the number of threads so the load increases (and decreases) gradually; this way you will be able to correlate increasing/decreasing load with the changing response time and throughput.
Instead of loops, it might be a better idea to set the desired test duration using the Scheduler section of the Thread Group.
Run your test and observe the actual throughput using e.g. the Server Hits Per Second listener or the Transactions Per Second chart of the HTML Reporting Dashboard. If it matches your expectations, you are done; if not, you will need to increase the number of virtual users.
You can use the Concurrency Thread Group plugin; specifically, see how to produce the desired RPS:
The thread pool size can be calculated as RPS * <max response time in ms> / 1000. The higher the desired rate, the more threads you will need; the longer the service's response time, the more threads you will need.
For example, if your service response time may be 2.5 sec and the target RPS is 1230, you need 1230 * 2500 / 1000 = 3075 threads.
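A quick sanity check of that rule of thumb (plain arithmetic, not JMeter configuration):

def threads_needed(target_rps: float, max_response_time_ms: float) -> int:
    # Thread pool size ~= RPS * max response time (ms) / 1000
    return round(target_rps * max_response_time_ms / 1000)

print(threads_needed(1230, 2500))   # 3075, matching the example above
print(threads_needed(500, 1000))    # 500 threads for 500 RPS at 1 s responses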

Apigee SpikeArrest Sync Across MessageProcessors (MPs)

Our organisation is currently migrating to Apigee.
I currently have a problem very similar to this one, but due to the fact that I am a Stack Overflow newcomer and have low reputation I couldn't comment on it: Apigee - SpikeArrest behavior
So, in our organisation we have 6 MessageProcessors (MP) and I assume they are working in a strictly round-robin manner.
Please see this config (It is applied to the TARGET ENDPOINT of the ApiProxy):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<SpikeArrest async="false" continueOnError="false" enabled="true" name="spikearrest-1">
<DisplayName>SpikeArrest-1</DisplayName>
<FaultRules/>
<Properties/>
<Identifier ref="request.header.some-header-name"/>
<MessageWeight ref="request.header.weight"/>
<Rate>3pm</Rate>
</SpikeArrest>
I have a rate of 3pm, which means 1 hit every 20 sec, calculated according to ApigeeDoc1.
The problem is that instead of 1 successful hit every 20 sec I get 6 successful ones within 20 sec and then the SpikeArrest error, meaning it hit each MP once in a round-robin manner.
This means I get 6 hits per 20 sec to my API backend instead of the desired 1 hit per 20 sec.
Is there any way to sync the spikearrests across the MPs?
ConcurrentRatelimit doesn't seem to help.
SpikeArrest has no ability to be distributed across message processors. It is generally used for stopping large bursts of traffic, not controlling traffic at the levels you are suggesting (3 calls per minute). You generally put it in the Proxy Request Preflow and abort if the traffic is too high.
The closest you can get to 3 per minute using SpikeArrest with your round robin message processors is 1 per minute, which would result in 6 calls per minute. You can only specify SpikeArrests as "n per second" or "n per minute", which does get converted to "1 per 1/n second" or "1 per 1/n minute" as you mentioned above.
Do you really only support one call every 20 seconds on your backend? If you are trying to support one call every 20 seconds per user or app, then I suggest you try to accomplish this using the Quota policy. Quotas can share a counter across all message processors. You could also use quotas with all traffic (instead of per user or per app) by specifying a quota identifier that is a constant. You could allow 3 per minute, but they could all come in at the same time during that minute.
If you are just trying to protect against overtaxing your backend, the ConcurrentRateLimit policy is often used.
The last solution is to implement some custom code.
Update to address further questions:
Restating:
6 message processors handled round robin
want 4 apps to each be allowed 5 calls per second
want the rest of the apps to share 10 calls per second
To get the kind of granularity you are looking for, you'll need to use quotas. Unfortunately you can't set a quota to have a "per second" value on a distributed quota (distributed quota shares the count among message processors rather than having each message processor have its own counter). The best you can do is per minute, which in your case would be 300 calls per minute. Otherwise you can use a non-distributed quota (dividing the quota between the 6 message processors), but the issue you'll have there is that calls that land on some MPs will be rejected while others will be accepted, which can be confusing to your developers.
For distributed quotas you'd set the 300 calls per minute in an API Product (see the docs), and assign that product to your four apps. Then, in your code, if that product is not assigned for the current API call's app, you'd use a quota that is hardcoded to 10 per second (600 per minute) and use a constant identifier rather than the client_id, so that all other traffic uses that quota.
Quotas don't keep you from submitting all your requests nearly simultaneously, and I'm assuming your backend can't handle 1200+ requests all at the same time. You'll need to smooth the traffic using a SpikeArrest policy. You'll want to allow the maximum traffic through the SpikeArrest that your backend can handle. This will help protect against traffic spikes, but you'll probably get some traffic rejected that would normally be allowed by the Quota. The SpikeArrest policy should be checked before the Quota, so that rejected traffic is not counted against the app's quota.
As you can probably see, configuring for situations like yours is more of an art than a science. My suggestion would be to do significant performance/load testing, and tune it until you find the correct values. If you can figure out how to use non-distributed quotas to get acceptable performance and predictability, that will let you work with per second numbers instead of per minute numbers, which will probably make massive spikes less likely.
Good luck!
Unlike Quota limits, Spike Arrest cannot be synchronized across MPs.
But, as you're setting them at a per-minute level, you could use the Quota policy instead -- then set it to Distributed and Synchronized and it will coordinate across MPs.
Keep in mind there will always be some latency on the synchronization across machines so it will never be a completely precise number.