Karate Listener support - testing

Does Karate provide any listener support where I can intercept specific things like REST calls?
This is more like added customization we want to perform beyond what Karate provides. There will always be something or other we need to customize based on the need.
Say I have 10,000 test cases running in parallel; using the Karate parallel runner I get a nice report with the time each step and test case takes. One of my services is called multiple times, and I want to know the average time the service takes across all those calls, and the maximum and minimum time it takes.

I think Karate hooks will get you what you need - if you write a function to aggregate the responseTime.
I'm willing to look at introducing this feature if needed, but you'll have to make a proposal on what the syntax should look like. Feel free to open a feature request. Today we do have configure headers, which acts like a "before" for all requests. Maybe something along those lines.
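A minimal sketch of the aggregation such a hook could perform. This is plain JavaScript so it runs standalone; in Karate the wiring would be different - something like an afterScenario hook reading the responseTime variable - so treat the hook plumbing as an assumption and check the Karate docs for the exact API:

```javascript
// Sketch only: aggregate per-call response times to get avg / min / max.
// In Karate, record() would be called from a hook after each HTTP call,
// passing the framework-provided responseTime (an assumption here).
function makeStats() {
  return { count: 0, total: 0, min: Infinity, max: -Infinity };
}

function record(stats, responseTime) {
  stats.count += 1;
  stats.total += responseTime;
  if (responseTime < stats.min) stats.min = responseTime;
  if (responseTime > stats.max) stats.max = responseTime;
  return stats;
}

function summary(stats) {
  return { avg: stats.total / stats.count, min: stats.min, max: stats.max };
}

// Example: three calls to the same service took 120, 80 and 100 ms
var stats = makeStats();
[120, 80, 100].forEach(function (t) { record(stats, t); });
console.log(summary(stats)); // avg 100, min 80, max 120
```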

Related

Custom request name in Gatling report in karate? [duplicate]

In the statistics section of the Gatling report, the tests are grouped based on paths. However, in our case the API URI + path is the same, and the functional flow is differentiated based on the header values; the request method is POST. So even if I test four different scenarios/flows, all the tests will be grouped as one, since the path is the same. Is there any option for us to group the statistics section based on scenarios or something else?
Is there any option for us to group the tests based on these scenarios? The expectation is something similar to this (the screenshot below is from a gatling.io page; it seems to be a web-based application).
Talk about timing. Here's the issue I raised a few hours back: https://github.com/intuit/karate/issues/526
So yes, this is a gap we are planning to address. Counting on you for beta-testing.
EDIT: Available since version 0.9.0. This works by specifying a nameResolver: first put a header in your transaction, then use a nameResolver to group the transactions by that header:
protocol.nameResolver = (req, ctx) => req.getHeader("karate-name")
Refer to the docs: https://github.com/intuit/karate/tree/master/karate-gatling#nameresolver
Also note that you can group using Gatling in latest version: https://github.com/intuit/karate/issues/1467#issuecomment-772609249
This enhancement is now available in karate-gatling as of the 0.9.0.RC2 release. The statistical analysis will now display the results per business flow, so from now on you can have a separate analysis for each business flow. This is how the results look in the new release.
You can also do a detailed analysis of percentiles and deviations for each business flow separately.
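The other half of the setup is the feature file itself, since the grouping header is set inside the transaction. A minimal sketch of the feature side, which the nameResolver line above then reads (the URL, payload, and header value here are made up for illustration):

```gherkin
Feature: flows grouped separately in the Gatling report

Scenario: create flow
  # grouping header read by the nameResolver in the Gatling simulation
  * header karate-name = 'create-flow'
  Given url 'https://example.com/api/orders'
  And request { action: 'create' }
  When method post
  Then status 200
```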

Can we have different scenarios for one API in Karate?

Let's say I have one API and there are different scenarios to check for that one API. Can we add different scenarios in one feature file without calling the API again and again?
You can certainly have multiple scenarios in one feature file.
And if you get a response back and can do all the assertions you have in mind against this single response, you don't need to call again. Maybe you need just a single scenario.
If you are expecting all your boundary conditions and non-happy paths to be covered without making multiple HTTP calls, I'm sorry - I don't think any framework will do that magic for you.
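For example, a single scenario can make one call and then run every assertion against that one response; the URL and response fields below are hypothetical:

```gherkin
Feature: one call, many assertions

Scenario: validate several rules against a single response
  Given url 'https://example.com/api/users/1'
  When method get
  Then status 200
  # all of the checks below re-use the same response - no extra calls
  And match response.id == 1
  And match response.name == '#string'
  And match response contains { active: '#boolean' }
```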

Benchmarking/Performance testing of the API - REST/SOAP

I'm trying to benchmark / performance-test APIs at my work. The client-facing side is REST, while the backend data is retrieved via SOAP messages. Can some of you share your thoughts on how you implement this (if you have done so in the past or are doing it now)? I'm basically interested in the average response time it takes for the API to return results to the client.
Please let me know if you need any additional information to answer the question
Could not say it any better than Mark, really: http://www.mnot.net/blog/2011/05/18/http_benchmark_rules
Maybe you should give JMeter a try.
You can try using ApacheBench (ab). It is simple and quick.
JMeter gives you additional flexibility, like adding functional cases along with performance details. Results will be broadly similar to the ApacheBench tool.
For a more detailed option that gives functional test results, performance counter settings, call response time details, CPU and memory changes, along with load/stress results under different bandwidth and browser settings, there is Visual Studio Team System.
I used VSTS 2010 for performance testing. GET and POST are straightforward; PUT and DELETE need the coded version of a webtest.
Thanks, Madhusudanan (Tesco)
If you are trying to test REST -> SOAP calls, one more thing you can consider is creating stubs for the backend. This way you can perf-test REST -> stub performance, followed by stub -> SOAP performance. This will help in analyzing the individual components.

JMeter Tests and Non-Static GET/POST Parameters

What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables are going to change on each run?
Quick, common, example
You go to a Web Page
Enter some information into a form
Click Save
Behind the scenes, a new record is entered in the database
You want to edit the record you just entered, so you go to another web page. Behind the scenes it's passing the page a parameter with the Database ID of the row you just created
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is:
Record a test using the above actions
Make a note of each place where a query string variable may change from run to run
Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable
Replace all appropriate instances of the hard-coded parameter with the above variable.
This works and can be automated to an extent. However, it can get tedious, is error-prone, and fragile. Is there a better or commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
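The extractor step in the workflow above boils down to a regular expression pulling the generated database ID out of a response. A standalone sketch of that logic (the regex and the HTML fragment are made up; in JMeter this would be a Regular Expression Extractor configured with the same pattern, and the captured group would land in a JMeter variable):

```javascript
// Pull the record ID out of a response body, the way a JMeter
// Regular Expression Extractor with regex id=(\d+) would.
function extractRecordId(responseText) {
  var m = /id=(\d+)/.exec(responseText);
  return m ? m[1] : null; // null when the response has no ID
}

// Hypothetical edit link that the "save" page might return
var response = '<a href="/records/edit?id=4711">Edit</a>';
console.log(extractRecordId(response)); // prints 4711
```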
Sounds to me like you're on the right track. The best that can be achieved with JMeter is to extract page variables with a Regular Expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution and becomes increasingly tricky to maintain or grow.
If you've reached this point then you may want to consider a tool which is more specialised for this sort of problem. Have a look at a web testing tool such as Watir; it will automatically handle changing POST parameters. You would still need to extract parameters if you need to do a database update, but using Watir allows for better code reuse, making the problem less painful.
We have had great success in testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion with JavaScript to do complex validation of the response. Hope it helps.
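A sketch of the kind of BSF/JavaScript validation this answer describes. In JMeter, the prev (previous SampleResult) and AssertionResult objects are supplied by the assertion's context; here they are stubbed purely so the logic runs standalone, and the expected response shape is invented for illustration:

```javascript
// Stub of JMeter's previous SampleResult - in a real BSF Assertion,
// 'prev' is provided by JMeter and this object would not exist.
var prev = {
  getResponseDataAsString: function () {
    return JSON.stringify({ status: 'SAVED', id: 4711 });
  }
};
// Stub of JMeter's AssertionResult, also normally provided by JMeter.
var AssertionResult = {
  setFailure: function (f) { this.failure = f; },
  setFailureMessage: function (m) { this.message = m; }
};

// The actual validation logic: parse the body and check its shape.
var text = prev.getResponseDataAsString();
var body = JSON.parse(text);
if (body.status !== 'SAVED' || typeof body.id !== 'number') {
  AssertionResult.setFailure(true);
  AssertionResult.setFailureMessage('unexpected response: ' + text);
} else {
  AssertionResult.setFailure(false);
}
```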