Situation
I am preparing a JMeter test plan to test the AWS device provisioning MQTT API.
The flow is:
connect to the AWS IoT Core endpoint
make CreateKeysAndCertificate requests until receiving a success response like the one below (AWS enforces a quota of 10 requests per second)
{
  "certificateId": "string",
  "certificatePem": "string",
  "privateKey": "string",
  "certificateOwnershipToken": "string"
}
extract certificateOwnershipToken from the success response and make RegisterThing requests until receiving a success response (again subject to the 10 requests per second quota)
{
  "certificateOwnershipToken": "string",
  "parameters": {
    "string": "string",
    ...
  }
}
If the number of requests exceeds the AWS quota limit, the response looks like this:
{
  "statusCode": 412,
  "errorCode": "Throttled",
  "errorMessage": "Rate limit exceeded"
}
Problems in my test plan
My test plan is shown in the pictures below; there are two problems:
I want to make 500 successful requests, but currently the test plan makes 500 requests in total (including the failed ones).
I need each thread to use the certificateOwnershipToken from its successful CreateKeysAndCertificate response in the RegisterThing request, but currently certificateOwnershipToken stays at its default value not_found even when the CreateKeysAndCertificate request succeeds.
What are the problems in my test plan and how do I fix them? Thank you.
AWS Device provisioning MQTT API docs
JMeter MQTT plugin github page
There are the following problems in your Test Plan:
The condition in the While Controller should be inverted: currently it repeats the request while certificateOwnershipToken is not equal to not_found, but you should repeat the request while it still equals not_found and stop once the value changes. Moreover, you should use the __jexl3() or __groovy() function instead of __javaScript() for performance reasons.
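A minimal sketch of the corrected While Controller condition, assuming the token lives in a JMeter variable named certificateOwnershipToken with a default of not_found (as described in the question):

```
${__groovy(vars.get("certificateOwnershipToken") == "not_found",)}
```

The While Controller keeps looping while the expression is true, i.e. while the token has not been extracted yet, and exits as soon as a real token replaces the default.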
If there is rate limiting on the CreateKeysAndCertificate request, you shouldn't be sending requests at a rate above that limit; consider using an appropriate JMeter timer like the Precise Throughput Timer so JMeter sends at most 10 requests per second.
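For a cap of 10 requests per second, the Precise Throughput Timer could be configured along these lines (field values are illustrative):

```
Precise Throughput Timer:
  Target throughput (samples per throughput period): 600
  Throughput period (seconds):                       60
  Test duration (seconds):                           <planned test duration>
```

600 samples per 60-second period averages out to 10 requests per second across the Thread Group.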
Related
I received the "JS error: Exceeded the total HTTP connection count limit!" error in the Voximplant Platform today, but I can't find the limits. Where do I find this information?
There are some restrictions in the Voximplant JavaScript engine. One of them limits the number of concurrent HTTP requests made from a call scenario: only 3 requests can be processed simultaneously.
You should consider adjusting the scenario flow to reduce the number of simultaneous requests. For example, you can use httpRequestAsync and chain requests using await.
The full list of sandbox restrictions can be found here.
Reading the docs I found:
XMPP server throttling
We limit the rate that you can connect to FCM XMPP servers to 400 connections per minute per project. This shouldn't be an issue for message delivery, but it is important for ensuring the stability of our system.
For each project, FCM allows 2500 connections in parallel.
https://firebase.google.com/docs/cloud-messaging/concept-options#xmpp_throttling
Also on that page there was a description of the different ways to connect to FCM to send messages. HTTP and XMPP are different mechanisms, so I am assuming that the Admin SDK (Go in my case) uses HTTP under the hood and not XMPP; please correct me if that's not true.
If the admin SDK uses HTTP, that means there can only be 2500 simultaneous connections.
I'm making a scalable application where users basically define their own schedule for notifications (and the messages) and a server retrieves it, runs on a timer loop every 30 seconds or so to see who needs their message sent.
For all intents and purposes, each notification is different. However the vast majority of these notifications will land on the hour. Meaning my server will have to send possibly many thousands of notifications within the X:00 minute in the hour. It's important these notifications come on time (ie, I cannot space them all out within the hour).
Using workarounds like topics won't work in my case because everyone is individual.
I'm just thinking of options to deal with these limitations (and to make sure I understand them). If FCM allows 2500 parallel connections via the Admin SDK in Go, can I do 2500 async connections, wait until they all finish, then do another 2500, rinse and repeat? That way, if I have 25,000 subscribed users, say, and each send takes 1 second, I could theoretically send all the notifications in 10 seconds, which is acceptable.
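The rinse-and-repeat idea above can be sketched in Go. This is a sketch under the stated assumptions: sendOne is a placeholder for the real FCM call, and 2500 matches the parallel-connection limit from the docs.

```go
package main

import (
	"fmt"
	"sync"
)

// sendOne stands in for a real per-device send call; in a real
// application this would be a call into the FCM Admin SDK.
func sendOne(token string) error {
	return nil
}

// sendInBatches processes tokens in fixed-size waves: it launches up to
// batchSize concurrent sends, waits for all of them to finish, then starts
// the next wave, so at most batchSize sends are in flight at any moment.
func sendInBatches(tokens []string, batchSize int) int {
	sent := 0
	for start := 0; start < len(tokens); start += batchSize {
		end := start + batchSize
		if end > len(tokens) {
			end = len(tokens)
		}
		var wg sync.WaitGroup
		var mu sync.Mutex
		for _, t := range tokens[start:end] {
			wg.Add(1)
			go func(tok string) {
				defer wg.Done()
				if err := sendOne(tok); err == nil {
					mu.Lock()
					sent++
					mu.Unlock()
				}
			}(t)
		}
		wg.Wait() // the next wave starts only after this one completes
	}
	return sent
}

func main() {
	tokens := make([]string, 25000)
	for i := range tokens {
		tokens[i] = fmt.Sprintf("token-%d", i)
	}
	fmt.Println(sendInBatches(tokens, 2500)) // prints 25000
}
```

Note that, as the answer below explains, batching many messages over one connection is more efficient than one connection per message.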
Are there any other rate limits that I need to be aware of?
Thanks!
I am assuming that the admin SDK (Golang in my case) uses HTTP under the hood
That is correct. The Admin SDKs use the versioned HTTP API to make calls to FCM.
The key to scaling your FCM usage is to use the resources efficiently. For example in the versioned API (that the Admin SDKs use under the hood) you can pass up to 500 requests over a single HTTP connection, meaning that you can amortize the cost of building the connection over many calls.
You can find an example of the actual HTTP calls in the REST example in the documentation on sending messages to multiple devices:
--subrequest_boundary
Content-Type: application/http
Content-Transfer-Encoding: binary
Authorization: Bearer ya29.ElqKBGN2Ri_Uz...HnS_uNreA
POST /v1/projects/myproject-b5ae1/messages:send
Content-Type: application/json
accept: application/json
{
"message":{
"token":"bk3RNwTe3H0:CI2k_HHwgIpoDKCIZvvDMExUdFQ3P1...",
"notification":{
"title":"FCM Message",
"body":"This is an FCM notification message!"
}
}
}
...
--subrequest_boundary
Content-Type: application/http
Content-Transfer-Encoding: binary
Authorization: Bearer ya29.ElqKBGN2Ri_Uz...HnS_uNreA
POST /v1/projects/myproject-b5ae1/messages:send
Content-Type: application/json
accept: application/json
{
"message":{
"token":"cR1rjyj4_Kc:APA91bGusqbypSuMdsh7jSNrW4nzsM...",
"notification":{
"title":"FCM Message",
"body":"This is an FCM notification message!"
}
}
}
--subrequest_boundary--
In the Go Admin SDK this is equivalent to calling SendAll, which has this signature:
func (c Client) SendAll(ctx context.Context, messages []*Message) (*BatchResponse, error)
SendAll sends the messages in the given array via Firebase Cloud Messaging.
The messages array may contain up to 500 messages. SendAll employs batching to send the entire array of messages as a single RPC call. Compared to the Send() function, this is a significantly more efficient way to send multiple messages. The responses list obtained from the return value corresponds to the order of the input messages. An error from SendAll indicates a total failure, i.e. none of the messages in the array could be sent. Partial failures are indicated by a BatchResponse return value.
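Since SendAll caps at 500 messages per call, a larger batch has to be split first. A minimal sketch of that splitting, using strings as stand-ins for the SDK's message values:

```go
package main

import "fmt"

// chunk splits msgs into slices of at most size elements each, matching
// SendAll's 500-message limit per call. The element type here is string
// purely for illustration; with the real SDK it would be *messaging.Message.
func chunk(msgs []string, size int) [][]string {
	var out [][]string
	for len(msgs) > size {
		out = append(out, msgs[:size])
		msgs = msgs[size:]
	}
	if len(msgs) > 0 {
		out = append(out, msgs)
	}
	return out
}

func main() {
	msgs := make([]string, 1250)
	batches := chunk(msgs, 500)
	fmt.Println(len(batches)) // prints 3 (batches of 500, 500 and 250)
}
```

Each resulting batch would then be passed to one SendAll call, and the BatchResponse inspected for partial failures.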
I'm new to REST API testing. I want to load test an HTTP REST API; is it possible to run 100 parallel HTTP requests with JMeter?
Also, my request needs query params. Can I provide a list of values to JMeter so that it loops, making each request with one param from the list?
Appreciate your help.
JMeter executes requests as fast as it can, so I don't fully understand your "100 parallel HTTP requests" requirement.
If you want 100 virtual users concurrently accessing your API endpoint: under the Thread Group, set the number of threads to 100 and the loop count to Forever or -1. In this case the actual number of requests per second will depend on your application's response time.
If you want to send 100 requests at exactly the same moment - use Synchronizing Timer
If you need to send 100 requests per second - use Precise Throughput Timer
For parameterizing your request with external data people normally use CSV Data Set Config
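For example, a CSV Data Set Config could be set up along these lines (the file name and variable name are illustrative):

```
CSV Data Set Config:
  Filename:        params.csv
  Variable Names:  paramValue
  Recycle on EOF?: True
  Sharing mode:    All threads

params.csv:
  value1
  value2
  value3
```

Each thread reads one line per iteration, and the HTTP Request sampler can then reference the value as ${paramValue}, e.g. /api/items?id=${paramValue}.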
I'm using the Autodesk Forge - Model Derivative service to convert a Revit (.rvt) file to IFC and SVF formats. To be notified when these conversions are done I have set up a webhook using the webhooks system Autodesk created for this purpose.
Now, when the conversion is done, the webhook successfully sends a request to my given callback URL, but it does so more than once, and the number of times it fires isn't even constant. It just fired the webhook twice in a minute, and then again after ten minutes. What could be the reason for this?
The problem is that the first time the webhook fires, I can't fully access the completed derivatives yet (the hierarchy and properties won't load). It seems to be an issue on Autodesk's side, but I was wondering if anyone else has encountered this problem.
This is the webhook I've created, and the only one on my account:
"links": {
"next": null
},
"data": [
{
"hookId": "<<hook id>>",
"tenant": "jobstarted",
"callbackUrl": "<<my url>>/callback",
"createdBy": "2UbAPVbvWGfLAPHWLZD7Mld0bVRpI8aJ",
"event": "extraction.finished",
"createdDate": "2019-07-29T09:06:46.971+0000",
"system": "derivative",
"creatorType": "Application",
"status": "active",
"scope": {
"workflow": "jobstarted"
},
"urn": "<<hook urn>>",
"__self__": "/systems/derivative/events/extraction.finished/hooks/<<hook id>>"
}
]
}
What I expect, is for the webhook to fire only once, so I can trust that I can access, download and use the entire derivative's data. Currently it's just unreliable.
Such behavior typically happens when our service did not receive a successful response from your end to the callbacks. Try diverting the callbacks to a third-party service (e.g. ngrok) and recording them to isolate the issue. See here for more details:
Webhooks guarantees at least once delivery. When the event occurs, the webhooks service sends a payload to the callback URL as an HTTP POST request. The webhook service expects a 2xx response to the HTTP POST request. The response must be received within 6 seconds. A non-2xx response is considered an error. In the event of an error, the webhook service will retry immediately, in 15 minutes, and 45 minutes thereafter. The webhook service retries for 48 hours and disables the webhook if the callback does not succeed during this time. You may need to reconfigure your webhooks if they are disabled.
I am using Kong 0.10.3, and it seems that both the "latencies" object logged by Kong's file logging plugin and the LATENCY headers in the response have erroneous values.
Based on the Kong documentation, the "request" latency is the overall latency (first byte in to last byte out), "proxy" is the processing time of the upstream API, and "kong" is the time Kong takes to execute plugins on the request/response.
My issue is that the kong latency is frequently reported as 0, and kong + proxy latency typically equals the request latency. Based on the documentation, I would expect a difference accounting for the transfer of the request/response payload.
I am trying to figure out whether my API clients are slow, but the values returned seem faulty and are not helping at all.
In this example, my request had a 6.6 MB payload and Kong logged the latencies below.
If the proxy took 9648 ms to do its work, all I am left with is the 38 ms of Kong latency, with no remainder to account for the data transfer time.
"latencies": {
"request": 9686,
"proxy": 9648,
"kong": 38
}
Am I missing something or is this a Kong issue?
Does the same issue exist in the latest Kong Community Edition version 0.13?