I have an API created with Laravel 5.2, and I am using throttle for rate limiting. In my route file, I have set the following:
Route::group(['middleware' => ['throttle:60,1'], 'prefix' => 'api/v1'], function () {
    //
});
As I understand Laravel throttling, the above script sets the request limit to 60 per minute. I have an application querying the route that repeats every 10 seconds, so there are 6 requests per minute, which is well within the above throttle.
The problem is that my queries work until I have executed 60 requests, regardless of time, and after the 60th request it responds with 429 Too Many Requests and the header Retry-After: 60. As I understand it, X-RateLimit-Remaining should reset every minute, but it seems it is never reset until it reaches 0; only after it becomes zero does it wait 60 seconds and then reset.
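In other words, once the counter reaches zero, every response for the next 60 seconds looks roughly like this (reconstructed from the headers described above):

HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0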
Am I doing anything wrong here?
I'm doing a stress test of my API for two endpoints. The first is /api/register and the second is /api/verify_tac/.
The request body for /api/register is:
{
    "provider_id": "lifecare.com.my",
    "user_id": ${random},
    "secure_word": "Aa123456",
    "id_type": "0",
    "id_number": "${id_number}",
    "full_name": "test",
    "gender": "F",
    "dob": "2009/11/11",
    "phone_number": ${random},
    "nationality": "MY"
}
where ${random} and ${id_number} are values taken from a CSV Data Set Config.
The request body for /api/verify_tac is:
{
    "temp_token": "${temp_token}",
    "tac": "123456"
}
${temp_token} is extracted from the /api/register response body.
For the test, I have done six runs; the first five returned no errors:
100 users with a 60-second ramp-up period. All success.
200 users with a 60-second ramp-up period. All success.
300 users with a 60-second ramp-up period. All success.
400 users with a 60-second ramp-up period. All success.
500 users with a 60-second ramp-up period. All success.
600 users with a 60-second ramp-up period. Most of the /api/register response data is empty, resulting in /api/verify_tac returning an error. The request data for a /api/verify_tac call that returns an error is:
{
    "temp_token": "NotFound",
    "tac": "123456"
}
How can test number 6 return errors while the other five do not, when they all had the same parameters?
Does this mean my API is overloaded with requests, or are my testing parameters wrong?
If the response body is empty for 600 users, then my expectation is that your application simply gets overloaded and cannot handle 600 users.
You can add a listener like Simple Data Writer, configured to save full request and response details (a sample configuration is sketched below).
This way you will be able to see request and response details for failing requests. If you untick the Errors box, JMeter will store request and response details for all requests, so you will be able to see the response message, headers, body, etc. for the previous request and determine the failure reason.
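A typical configuration would be something like this (a sketch; the exact option names vary slightly between JMeter versions):

Filename: results.xml
Log/Display Only: [x] Errors
Configure -> [x] Save as XML
             [x] Save Response Data (XML)
             [x] Save Sampler Data (XML)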
Also it would be good to:
Monitor the essential resource usage (CPU, RAM, disk, network, swap, etc.) on the application under test side; this can be done using e.g. the JMeter PerfMon Plugin
Check your application logs for any suspicious entries
Re-run your test with a profiler tool for .NET, like YourKit; this way you will be able to see the most "expensive" functions and identify where the application spends most of its time and what the root cause of the problems is
I'm working with RabbitMQ instances hosted at CloudAMQP. I'm calling the management API to get detailed queue statistics. About 1 in 10 calls to the API return invalid numbers.
The endpoint is /api/queues/[vhost]/[queue]?msg_rates_age=600&msg_rates_incr=30. I'm looking for average message rates at 30-second increments over a 10-minute span of time. Usually that returns valid data for the stats I'm interested in, e.g.:
{
    "messages": 16,
    "consumers": 30,
    "message_stats": {
        "ack_details": {
            "avg_rate": 441
        },
        "publish_details": {
            "avg_rate": 441
        }
    }
}
But sometimes I get incorrect results for one or both "avg_rate" values, often 714676 or higher. If I then wait 15 seconds and call the same API again, the numbers go back down to normal. There's no way the average over 10 minutes jumps by a factor of 200 and then comes back down seconds later.
I haven't been able to reproduce the issue with a local install, only in production where the queue is always very busy. The data displayed on the admin web page always looks correct. Is there some other way to get the same stats as accurately as the UI does?
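For reference, here is a minimal sketch of the call and re-query pattern described above (the host, vhost, queue, credentials, and the plausibility threshold are all placeholders):

import time
import requests

URL = "https://example.cloudamqp.com/api/queues/myvhost/myqueue"  # placeholder host/vhost/queue
PARAMS = {"msg_rates_age": 600, "msg_rates_incr": 30}
MAX_PLAUSIBLE_RATE = 10000  # hypothetical sanity threshold for avg_rate

def get_avg_rates(retries=3):
    for _ in range(retries):
        stats = requests.get(URL, params=PARAMS, auth=("user", "pass")).json()
        ack = stats["message_stats"]["ack_details"]["avg_rate"]
        publish = stats["message_stats"]["publish_details"]["avg_rate"]
        # The bogus values disappear on a later call, so wait and re-query
        if max(ack, publish) <= MAX_PLAUSIBLE_RATE:
            return ack, publish
        time.sleep(15)
    raise RuntimeError("avg_rate still implausible after %d tries" % retries)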
I have to create items in Podio using the API. When I let my program run at full speed, I noticed that after 5-6 items I get an error response from Podio saying:
{
    "error_propagate": false,
    "error": "rate_limit",
    "error_description": "You have hit the rate limit. Please wait 300 seconds before trying again",
    "request": {
        "url": "http://api.podio.com/oauth/token",
        "query_string": "",
        "method": "POST"
    }
}
I thought the rate limit was 5,000 calls/hour, yet I get this error after 25 calls...
I added a Thread.Sleep to my code, and now it seems better, but even when I let the thread sleep for 10 seconds I still got this error. I have now set the sleep to 20 seconds and it seems to work.
Is there a hidden rate limit on the number of calls per second?
I think you are using username-password authentication here. The token request endpoint has a lower limit, in my experience. So the best way to solve this is to store and reuse the access tokens instead of re-authenticating every time your program runs.
Podio API client libraries provide convenience methods to do this. See these links:
http://podio.github.io/podio-dotnet/sessions/
http://podio.github.io/podio-php/sessions
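If you are not using one of those libraries, the same idea in plain Python might look like this (a sketch; the cache file name and all credentials are placeholders):

import json
import os
import requests

TOKEN_FILE = "podio_token.json"  # hypothetical cache location

def get_access_token():
    # Reuse the stored token instead of re-authenticating on every run
    if os.path.exists(TOKEN_FILE):
        with open(TOKEN_FILE) as f:
            return json.load(f)["access_token"]
    # Username-password flow against the token endpoint shown in the error above
    resp = requests.post("https://api.podio.com/oauth/token", data={
        "grant_type": "password",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "username": "YOUR_USERNAME",
        "password": "YOUR_PASSWORD",
    })
    token = resp.json()
    with open(TOKEN_FILE, "w") as f:
        json.dump(token, f)
    # (a real implementation would also refresh the token when it expires)
    return token["access_token"]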
The rate limit is 1,000 calls/hour, so you can set your sleep accordingly.
I am trying to send HTTP requests via JMeter. I have created a thread group with a loop count of 25, a ramp-up period of 120, and the number of threads set to 30. Within the thread group, I have 20 HTTP requests. I am a little confused as to how JMeter runs these requests. Does each of the 20 requests within a thread group run in a single thread, with each loop over the thread group running concurrently on a different thread? Or does each of the 20 requests run in a different thread as and when one is available?
My other question is: over each loop, I want to vary the body of the POST data that is being sent via the HTTP request. Is it possible to pass the POST data body via a file instead of inserting the data into the JMeter Body Data tab?
However, instead of just doing that, I want to define some kind of variable that picks a file based on the iteration of the thread group that is running; for example, if it is looping over the thread group the second time, I want to use test2.txt, the third time test3.txt, etc., and these text files will contain different POST data. Could anyone tell me whether this is possible with JMeter, and if so, how I would go about doing it?
Point 1 - JMeter concurrency
JMeter starts with 1 thread and spawns more threads as per the ramp-up set. In your case (30 threads and a 120-second ramp-up) another thread is added every 4 seconds. Each thread executes the 20 requests and, if there is another loop, starts over; if there is no loop left, the thread shuts down. To control load and concurrency JMeter provides 2 options:
Synchronizing Timer - pauses all threads until the specified threshold is reached and then releases all of them at the same time
Constant Throughput Timer - lets you specify the load in requests per minute.
Point 2 - Send file instead of text
You can replace your request body with the __fileToString function. If you want to parametrize it, you can use a nested function to provide the current iteration - see the combined example after Point 3.
Point 3 - adding iteration as a parameter
JMeter provides 2 options for incrementing a counter on each loop:
Counter config element - starts from the specified value and gets incremented by the specified value each time it's called.
__counter function - starts from 1 and gets incremented by 1 each time it's called. Can be "per-user" or "global"
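Combining the two, a request body like the following (a sketch; it assumes files test1.txt, test2.txt, etc. sit in JMeter's working directory, otherwise use absolute paths) reads a different file on each iteration:

${__fileToString(test${__counter(TRUE,)}.txt)}

Here __counter(TRUE,) is the "per-user" form, so every thread walks through test1.txt, test2.txt, ... on its own schedule.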
See the How to Use JMeter Functions post series for comprehensive information on the above and other JMeter functions.
We run multiple short queries in parallel and hit the 10-requests-per-second limit.
According to the docs, throttling might occur if we hit a limit of 10 API requests per user per project.
We send a "start query job" and then call getQueryResults() with a timeoutMs of 60,000. However, we get a response after ~1 second; we look for jobComplete in the JSON response, and since it is not there, we need to send getQueryResults() again many times and hit the threshold, which causes an error, not a slowdown. Sample code is below.
Our questions are as follows:
1. What is a "user"? Is it an App Engine user, or a user ID that we can put in the connection string or in the query itself?
2. Is it really per BigQuery API project?
3. What is the behavior? We got an error ("Exceeded rate limits: too many user/method api request limit for this user_method"), not the throttling behavior the docs describe, and all of our process fails.
4. As seen in the code below, why do we get the response after ~1 second and not according to our timeout? Are we doing something wrong?
Thanks a lot
Here is a sample of the code:
while res is None or 'jobComplete' not in res or not res['jobComplete']:
    try:
        res = self.service.jobs().getQueryResults(projectId=self.project_id,
                                                  jobId=jobId, timeoutMs=60000,
                                                  maxResults=maxResults).execute()
    except HTTPException:
        if independent:
            raise
Are you saying that even though you specify timeoutMs=60000, it is returning within 1 second but the job is not yet complete? If so, this is a bug.
The quota limits for getQueryResults are actually currently much higher than 10 requests per second. The reason the docs say only 10 is because we want to have the ability to throttle it down to that amount if someone is hitting us too hard. If you're currently seeing an error on this API, it is likely that you're calling it at a very high rate.
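For what it's worth, a common way to keep the polling rate down is to back off between getQueryResults calls. A minimal sketch, reusing the call from the question's code (the doubling backoff policy itself is just an illustration):

import time

def poll_query_results(service, project_id, job_id, max_results):
    delay = 1  # seconds between polls; doubling is an illustrative policy
    res = None
    while res is None or not res.get('jobComplete'):
        res = service.jobs().getQueryResults(
            projectId=project_id, jobId=job_id,
            timeoutMs=60000, maxResults=max_results).execute()
        if not res.get('jobComplete'):
            time.sleep(delay)
            delay = min(delay * 2, 30)  # cap the wait at 30 seconds
    return res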
I'll try to reproduce the problem where we don't wait for the timeout ... if that is really what is happening it may be the root of your problems.
def query_results_long(self, jobId, maxResults, res=None):
    start_time = query_time = None
    while res is None or 'jobComplete' not in res or not res['jobComplete']:
        if start_time:
            logging.info('requested for query results ended after %s', query_time)
            time.sleep(2)
        start_time = datetime.now()
        res = self.service.jobs().getQueryResults(projectId=self.project_id,
                                                  jobId=jobId, timeoutMs=60000,
                                                  maxResults=maxResults).execute()
        query_time = datetime.now() - start_time
    return res
Then in the App Engine log I had this:
requested for query results ended after 0:00:04.959110