Which HTTP status code is appropriate for a request with N items when some of them fail validation?

I'm working on a REST API service that handles roughly 50 requests/s in parallel. Every request comes with 100 items; some of them are valid and others fail validation.
For example, in a request to insert a list of 100 products, 10 of them do not meet the input schema. I'm wondering what HTTP status code would be appropriate for the response, considering that the service processes the valid items (90) and returns their ids, plus error details for the failed items (10).
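For illustration, the response body I have in mind would look roughly like this (the field names are just an example, not a fixed schema):
{
  "created": [
    { "index": 0, "id": 1001 },
    { "index": 1, "id": 1002 }
  ],
  "failed": [
    { "index": 7, "error": "price must be a positive number" }
  ]
}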
201 (Created), 202 (Accepted), 400 (Bad Request), 422 (Unprocessable Entity), or none of them?
Thanks in advance.

Related

Stress test hit an HTTP 400 error with 600 threads

I'm doing a stress test on my API for two endpoints. The first is /api/register and the second is /api/verify_tac/.
The request body for /api/register is:
{
"provider_id": "lifecare.com.my",
"user_id": ${random},
"secure_word": "Aa123456",
"id_type": "0",
"id_number": "${id_number}",
"full_name": "test",
"gender": "F",
"dob": "2009/11/11",
"phone_number": ${random},
"nationality": "MY"
}
where ${random} and ${id_number} come from a CSV Data Set Config.
The request body for /api/verify_tac is:
{
"temp_token": "${temp_token}",
"tac":"123456"
}
${temp_token} is extracted from the /api/register response body.
For the test, I ran the following scenarios; the first five returned no errors:
100 users with a 60-second ramp-up period. All successful.
200 users with a 60-second ramp-up period. All successful.
300 users with a 60-second ramp-up period. All successful.
400 users with a 60-second ramp-up period. All successful.
500 users with a 60-second ramp-up period. All successful.
600 users with a 60-second ramp-up period. Most of the /api/register response data is empty, which causes /api/verify_tac to return an error. The request data from /api/verify_tac that returns an error is:
{
"temp_token": "NotFound",
"tac":"123456"
}
How can test number 6 return errors while the other five do not? They had the same parameters.
Does this mean my API is overloaded with requests, or is my testing parameter wrong?
If the response body is empty for 600 users, then my expectation is that your application simply gets overloaded and cannot handle 600 users.
You can add a listener like the Simple Data Writer, configured to save full request and response details; this way you will be able to see the request and response details for failing requests. If you untick the Errors box, JMeter will store request and response details for all requests. That way you can inspect the response message, headers, body, etc. of the preceding request and determine the failure reason.
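As a sketch, if you would rather drive this from user.properties than from the listener's Configure dialog, properties along these lines control what gets written to the results file (XML output is needed for response data to be saved):
# store results as XML so response data can be saved
jmeter.save.saveservice.output_format=xml
# save request and response details
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.responseHeaders=true
# or save response data for failed samplers only
jmeter.save.saveservice.response_data.on_error=true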
Also it would be good to:
Monitor the usage of essential resources (CPU, RAM, disk, network, swap, etc.) on the application under test side; this can be done using, for example, the JMeter PerfMon Plugin
Check your application logs for any suspicious entries
Re-run your test with a .NET profiler tool such as YourKit; this way you will be able to see the most "expensive" functions, identify where the application spends most of its time, and find the root cause of the problem

JMeter: execution order is not correct

I have 4 Transaction Controllers, each containing an HTTP Request:
Trans_api1
__Http Request
Trans_api2
__Http Request
Trans_api3
__Http Request
Trans_api4
__Http Request
When I run my test plan, I want them to run in numerical order, but they run randomly. How do I fix the order so they run from 1 to 4?
Each JMeter thread (virtual user) executes Samplers sequentially from top to bottom, so you don't need to do anything; your requests are already executed in order. If you run your test with 1 user, you will see that the requests are executed sequentially.
If you're seeing some "mess", it's most probably caused by concurrency, for example:
1st user starts 1st request
2nd user starts 1st request
1st user starts 2nd request
2nd user starts 2nd request
1st user starts 3rd request
etc.
You can see this for yourself if you add the ${__threadNum} function as a prefix or suffix to your request (or Transaction Controller) label, and optionally the ${__groovy(vars.getIteration(),)} function to display the current loop number, e.g. a label like ${__threadNum}-Trans_api1.
(Screenshots omitted: with 1 user; with 2 users; with 2 users and 2 iterations.)
As evidenced by those screenshots, each user executes the samplers sequentially on each iteration; the apparent "inconsistencies" are simply a misleading effect of concurrency.
See the Apache JMeter Functions - An Introduction article to get familiar with the JMeter Functions concept.

Why does S3 sometimes return HTTP 206 when the whole file has been downloaded?

I've had my S3 bucket logging to another bucket using the Server Access Log Format for a while. For the Operation: REST.GET.OBJECT, an HTTP Status: 206 Partial Content is sometimes returned because the whole file wasn't downloaded. But I can see in the logs that sometimes when HTTP Status: 206 is returned, the whole file was downloaded. I've removed some fields to make it simpler:
Operation: REST.GET.OBJECT
Request-URI: "GET [File] HTTP/1.1"
HTTP Status: 206
Error Code: -
Bytes Sent: 76431360
Object Size: 76431360
Total Time: 16276
Turn-Around Time: 190
What happened here? If Bytes Sent is the same as Object Size, how can this be reported as Partial Content?
The 206 status has nothing to do with an incomplete file transfer. The server determines what status code to send before it starts sending the response body, so it would have to predict the future to know whether it will be able to send the whole file.
Instead, what the 206 status code actually means is that the following three things happened at once:
the client sent Range header in its request;
the server decided to honour it and send exactly the bytes requested, not the whole file;
the server was actually able to do so — the range was valid and satisfiable.
In this case, the standard requires the server to reply with the 206 status code, not 200, regardless of whether the range happens to cover exactly the whole file or only a part of it.
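For illustration (the bucket and object names are made up), a client that sends an open-ended range still gets 206 even though the range covers the entire 76,431,360-byte object:
GET /backup.bin HTTP/1.1
Host: example-bucket.s3.amazonaws.com
Range: bytes=0-

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-76431359/76431360
Content-Length: 76431360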

Podio Create Item rate limit after 25 calls

I have to create items in Podio using the API. When I let my program run at full speed, I noticed that after 5-6 items I get an error response from Podio saying:
{
"error_propagate":false,
"error":"rate_limit",
"error_description":"You have hit the rate limit. Please wait 300 seconds before trying again",
"request":{
"url":"http://api.podio.com/oauth/token",
"query_string":"",
"method":"POST"
}
}
I thought the rate limit was 5000 calls/hour, and I get this error after 25 calls...
I added a Thread.Sleep to my code and now it seems better, but even when I let the thread sleep for 10 seconds I still get this error. I have now set the Thread.Sleep to 20 seconds and it seems to work.
Is there a hidden rate limit on the number of calls per second?
I think you are using username/password authentication here. In my experience, the token request endpoint has a lower limit. The best way to solve this is to store and reuse the access tokens instead of re-authenticating every time your program runs.
The Podio API client libraries provide convenience methods to do this. See these links:
http://podio.github.io/podio-dotnet/sessions/
http://podio.github.io/podio-php/sessions
The rate limit is 1000 calls/hour, so you can set your sleep accordingly.
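As a rough sketch of the "store and reuse" idea (this is not the Podio client library API; request_new_token() and the file name are hypothetical placeholders for your OAuth call and cache location):
import json
import os
import time

TOKEN_FILE = "podio_token.json"  # hypothetical cache location

def get_access_token():
    """Return a cached access token, authenticating only when needed."""
    if os.path.exists(TOKEN_FILE):
        with open(TOKEN_FILE) as f:
            cached = json.load(f)
        if cached["expires_at"] > time.time() + 60:  # 60-second safety margin
            return cached["access_token"]
    token = request_new_token()  # hypothetical: your username/password OAuth call
    token["expires_at"] = time.time() + token["expires_in"]
    with open(TOKEN_FILE, "w") as f:
        json.dump(token, f)
    return token["access_token"]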

How to calculate total response time in JMeter?

I have sent 10 consecutive HTTP requests in JMeter.
I have stored the output as a CSV file.
endTimeMillis responseTime latency sentBytes receivedBytes responseCode
1357279943.984 1426 1426 347 287 200
1357279944.685 1888 1888 347 287 200
..............
..............
In the above output file the response time is displayed for each request, but I need the total response time across all 10 requests.
How can I calculate the total response time in JMeter?
You need a Transaction Controller. Put the elements whose times you want to sum under it. The Transaction Controller will then appear in all your listeners, and its load and latency times will be the sums of those values for its nested elements.
Note that by default this time includes all processing within the controller's scope, not just the samplers; this can be changed by unchecking "Include duration of timer and pre-post processors in generated sample".
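If you want the number from the CSV you already saved rather than from a listener, a minimal sketch that sums the responseTime column (assuming a comma-separated file with the header row shown in the question; results.csv is a made-up name):
import csv

total_ms = 0
with open("results.csv", newline="") as f:    # hypothetical file name
    for row in csv.DictReader(f):             # expects the header row shown above
        total_ms += int(row["responseTime"])  # add up the per-request response times

print("Total response time: %d ms" % total_ms)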