In the statistics section of the Gatling report, the tests are grouped by path. In our case, however, the API URI and path are the same for every request, the request method is always POST, and the functional flow is differentiated by header values. So even if I test four different scenarios/flows, all the tests are grouped as a single entry, since the path is the same. Is there any option to group the statistics section by scenario or something else?
Is there any option to group the tests based on these scenarios? The expectation is something similar to the screenshot below, which is from a gatling.io page and appears to be from a web-based application.
Talk about timing. Here's the issue I raised a few hours back: https://github.com/intuit/karate/issues/526
So yes, this is a gap we are planning to address. Counting on you for beta-testing.
EDIT: Available since version 0.9.0 - this works by specifying a nameResolver. First put a header in your transaction, then use a nameResolver to group the transactions by that header:
protocol.nameResolver = (req, ctx) => req.getHeader("karate-name")
Refer to the docs: https://github.com/intuit/karate/tree/master/karate-gatling#nameresolver
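For context, here is a minimal end-to-end sketch of that setup (the simulation class name, feature path and header value are illustrative; the nameResolver line is the one from the docs). Each feature tags its requests with a header such as * header karate-name = 'create-flow', and the simulation groups the report entries by that header:

import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._

class FlowSimulation extends Simulation {

  // group the Gatling report by the custom header instead of the (identical) URL path
  val protocol = karateProtocol()
  protocol.nameResolver = (req, ctx) => req.getHeader("karate-name")

  // one scenario per business flow; each feature sets its own karate-name header
  val createFlow = scenario("create flow").exec(karateFeature("classpath:flows/create.feature"))

  setUp(createFlow.inject(atOnceUsers(10)).protocols(protocol))
}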
Also note that in the latest version you can group using Gatling itself: https://github.com/intuit/karate/issues/1467#issuecomment-772609249
This enhancement is now available in Karate-Gatling as of release 0.9.0.RC2. The statistical analysis now displays the results per business flow, so from now on you can have a separate analysis for each business flow. This is how the result looks in the new release.
You can also do a detailed analysis of percentiles and deviations for each business flow separately.
Related
We have a concurrency issue in our system. It occurs mainly during burst loads through our API from an external system and is not reproducible manually.
So I would like to create a Gatling test to 1) reproduce it whenever I want and 2) check that we have solved the issue.
1) I am done with the first point. I have created two requests checking for status 201 and I run them with many users.
2) The issue allows the creation of two resources with the same unique value. The expected behaviour is that one creation succeeds and the others fail with status 409. But I have no idea how to check that exactly one request completes with 201 while all the others fail with 409.
Can we do a kind of post-check on all requests with Gatling?
Thanks
Store the results already seen in a global ConcurrentHashMap and compute the expected value in the is check with a function, based on presence in the CHM (201 if the key is missing, 409 if it already exists).
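Note that responses are not necessarily processed in the order the server accepted them, so computing the expectation inside the check itself can misfire: a 409 can arrive before the 201 that claimed the value. A safer variant of the same idea is to accept both statuses per request, record the 201s in the map, and verify once at the end in the simulation's after hook. A minimal sketch (endpoint, payload and names are illustrative):

import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class DuplicateCreateSimulation extends Simulation {

  // 201 counts per unique value, shared across all virtual users
  val created = new ConcurrentHashMap[String, AtomicInteger]()

  val scn = scenario("duplicate create")
    .exec(
      http("create resource")
        .post("/resources") // illustrative endpoint
        .body(StringBody("""{"uniqueValue":"same-for-all"}""")).asJson
        // both outcomes are acceptable per request; uniqueness is verified below
        .check(status.in(201, 409).saveAs("httpStatus"))
    )
    .exec { session =>
      if (session("httpStatus").asOption[Int].contains(201)) {
        created.computeIfAbsent("same-for-all", _ => new AtomicInteger()).incrementAndGet()
      }
      session
    }

  setUp(scn.inject(atOnceUsers(50)).protocols(http.baseUrl("http://localhost:8080")))

  // post-check: exactly one 201 must have been observed per unique value
  after {
    created.forEach { (key, count) =>
      assert(count.get == 1, s"expected exactly one 201 for '$key', got ${count.get}")
    }
  }
}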
I don't think you can achieve what you're after with a check on the call itself, as Gatling users have no visibility of the results returned to other users, so you have no way of knowing whether the successful (201) request has already been made (short of some very messy hacking using a check transformer).
But you could use simulation-level assertions to do this.
So you have your request, where you assert that you expect a 201 response:
http("my request")
.get("myUrl")
.check(status.is(201))
This should result in all but one of these requests failing in the simulation, which you can specify using the assertion...
setUp(
  myScenario.inject(
    ...
  )
).assertions(
  details("my request").successfulRequests.count.is(1)
)
I'd like to store some JMeterVariables together with the sampleResults in InfluxDB using a BackendListenerClient (I am using the package rocks.nt.apm.jmeter to get the raw results).
My current test logs in as a random customer, requests some random entities, and logs out. Most of the results are within a normal range, but I'd like to zoom in on certain extreme sample results and find out for which customer / requested entity they occur. We have seen in the past that we can find performance issues with specific configurations this way.
I store the customer and entity IDs in variables. My issue is that the JMeterVariables are not accessible from the BackendListenerClient. I looked at the sample_variables property, but it stores the variables in the SampleEvent, which is not accessible in the BackendListener.
I could use the thread name or the sample label to store the vars, but I saw that the CSV writer can actually write the variable values from the event, which would be a much nicer solution.
Looking forward to your thoughts,
Best regards, Spud
You got it right - the Backend Listener is not customizable in terms of fine-shaping the data you're sending to InfluxDB.
Alas.
However, there's a Swiss Army Knife always available in JMeter: the JSR223 components.
The JSR223 listener, in your case.
The InfluxDB line protocol is as simple as can be, and HTTP/REST libraries are in abundance (Apache HttpClient should already be included with standard JMeter, to my recollection, so no additional jars are needed) - just pick it all up, form your time series as you like, send it to your InfluxDB REST endpoint, and the job's done.
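For instance, a minimal JSR223 Listener sketch in Groovy (the InfluxDB URL, database name, and the customerId / entityId variable names are assumptions for illustration):

// 'prev' is the SampleResult just recorded; 'vars' holds the JMeterVariables
def customer = vars.get("customerId") ?: "unknown"
def entity = vars.get("entityId") ?: "unknown"

// InfluxDB line protocol: measurement,tag=value,... field=value,... timestamp(ns)
def label = prev.sampleLabel.replace(" ", "\\ ")
def line = "requests,label=${label},customer=${customer},entity=${entity} " +
        "elapsed=${prev.time}i,success=${prev.successful} ${prev.timeStamp * 1000000}"

// plain HttpURLConnection, so no extra jars are required at all
def conn = new URL("http://localhost:8086/write?db=jmeter").openConnection()
conn.requestMethod = "POST"
conn.doOutput = true
conn.outputStream.withWriter { it << line }
if (conn.responseCode != 204) {
    log.error("InfluxDB write failed: HTTP " + conn.responseCode)
}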
Related
Does Karate provide any listener support where I can intercept specific things like REST calls?
This is more of an added customization we want to perform on top of what Karate already provides. There will always be something or other that we need to customize based on our needs.
Say I have 10000 test cases running in parallel; using the Karate parallel runner I get a nice report with the time taken by each step and test case. One of my services is called multiple times, and I want to know the average time the service takes across all those calls, as well as the maximum and minimum times.
I think Karate Hooks will get you what you need - if you write a function to aggregate the responseTime.
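For example, here is a sketch of the aggregation side (the class name, package and wiring are illustrative, not a built-in Karate API; responseTime is Karate's built-in variable for the last request). A feature can report into it after each call of interest with * eval Java.type('perf.Stats').record(responseTime):

package perf

import java.util.concurrent.atomic.{LongAccumulator, LongAdder}

// thread-safe response-time aggregator; Karate features call the static
// forwarders of this object through Java interop
object Stats {
  private val total = new LongAdder
  private val count = new LongAdder
  private val max = new LongAccumulator((a, b) => math.max(a, b), Long.MinValue)
  private val min = new LongAccumulator((a, b) => math.min(a, b), Long.MaxValue)

  def record(responseTimeMillis: Long): Unit = {
    total.add(responseTimeMillis)
    count.increment()
    max.accumulate(responseTimeMillis)
    min.accumulate(responseTimeMillis)
  }

  // print avg/min/max once the parallel run has finished
  def report(): Unit =
    println(s"calls=${count.sum} avg=${total.sum.toDouble / count.sum} ms " +
      s"min=${min.get} ms max=${max.get} ms")
}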
I'm willing to look at introducing this feature if needed, but you'll have to make a proposal on what the syntax should look like. Feel free to open a feature request. Today we do have configure headers, which is like a "before" for all requests. Maybe something along those lines.
I'm trying to benchmark / do performance testing of APIs at my work. The client-facing side is in REST format while the backend data is retrieved via SOAP messages. So my question is: can some of you share your thoughts on how you would implement this (if you have done so in the past or are doing it now)? I am basically interested in the average response time it takes for the API to return results to the client.
Please let me know if you need any additional information to answer the question
Could not say it any better than Mark, really: http://www.mnot.net/blog/2011/05/18/http_benchmark_rules
Maybe you should give JMeter a try.
You can try using Apache Benchmark (ab). It is simple and quick.
JMeter gives you additional flexibility, like adding functional cases along with the performance details. The results will be almost the same as with the Apache Benchmark tool.
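For reference, a typical ab invocation for a POST API could look like this (the URL and payload file are placeholders):

# 1000 requests total, 50 concurrent, POSTing body.json as JSON
ab -n 1000 -c 50 -p body.json -T 'application/json' https://api.example.com/v1/resource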
The detailed one, which gives functional test results, performance counter settings, call response time details, and CPU and memory changes, along with load/stress results under different bandwidth and browser settings, is Visual Studio Team System.
I used VSTS 2010 for performance testing. GET and POST are straightforward; PUT and DELETE need the coded version of a webtest.
Thanks,
Madhusudanan
Tesco
If you are trying to test the REST -> SOAP calls, one more thing you can consider is creating some stubs for the backend. That way you can perf-test REST -> stub, followed by stub -> SOAP. This will help in analyzing the individual components.