Benchmarking/performance testing of an API - REST/SOAP

I'm trying to benchmark / performance-test the APIs at my work. The client-facing interface is REST, while the backend data is retrieved via SOAP messages. Can some of you share your thoughts on how you implement this (if you have done so in the past or are doing it now)? I'm basically interested in the average response time it takes for the API to return results to the client.
Please let me know if you need any additional information to answer the question.

Could not say it any better than Mark, really: http://www.mnot.net/blog/2011/05/18/http_benchmark_rules

Maybe you should give JMeter a try.

You can try using Apache Benchmark. It is simple and quick.
JMeter gives you additional flexibility, such as adding functional cases alongside the performance details. Results will be broadly similar to those from the Apache Benchmark tool.
For a more detailed option - one that gives functional test results, performance counter settings, call response time details, CPU and memory changes, and load/stress results with different bandwidth and browser settings - there is Visual Studio Team System.
I used VSTS 2010 for performance testing. GET and POST are straightforward; PUT and DELETE need a coded version of the web test.
Thanks,
Madhusudanan
Tesco

If you are trying to test the REST -> SOAP calls, one more thing you can consider is creating stubs for the backend. That way you can perf-test REST -> stub performance, followed by stub -> SOAP performance, which helps in analyzing the individual components.
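To make that concrete, here is a minimal sketch of what such a backend stub could look like in Node/Express. The endpoint path, canned SOAP envelope, and artificial delay are all made up for illustration; point the REST layer at this stub instead of the real SOAP service and compare the timings.

// backend-stub.js - hypothetical stand-in for the real SOAP backend
const express = require('express');
const app = express();

const cannedEnvelope = '<?xml version="1.0"?>' +
  '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
  '<soap:Body><GetItemResponse><Price>9.99</Price></GetItemResponse></soap:Body>' +
  '</soap:Envelope>';

app.post('/backend/ItemService', (req, res) => {
  // simulate the latency of the real SOAP service so the REST layer is measured realistically
  setTimeout(() => res.type('text/xml').send(cannedEnvelope), 50);
});

app.listen(8081, () => console.log('SOAP stub listening on 8081'));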

Related

Concurrent testing in Karate

I am using Karate to automate things in my project, and I am very excited about the way Karate provides solutions for API testing. I have a requirement where I need to check the effect on the system when multiple users perform the same task at the same time (exactly the same time, down to fractions of a second). I want to identify issues like deadlocks, increased response times, application crashes, etc. through this testing. Can you give me a hint as to how I can do concurrent testing in Karate?
There is something called karate-gatling, please read: https://github.com/intuit/karate/tree/master/karate-gatling

Karate Listener support

Does Karate provide any listener support where I can intercept specific things such as REST calls?
This is more of an added customization we want to perform on top of what Karate provides; there will always be something or other we need to customize based on our needs.
Say I have 10,000 test cases running in parallel, and using the Karate parallel runner I get a nice report with the time taken for each step and test case. One of my services is called multiple times, and I want to know the average time the service takes across all of those calls, as well as the maximum and minimum times.
I think Karate hooks will get you what you need - if you write a function to aggregate the responseTime.
I'm willing to look at introducing this feature if needed, but you'll have to make a proposal on what the syntax should look like. Feel free to open a feature request. Today we do have configure headers, which acts like a "before" for all requests. Maybe something along those lines.
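For what it's worth, here is a rough sketch of the hook idea in karate-config.js, assuming the documented afterScenario hook and the built-in responseTime variable. Note that the hook only sees the last HTTP call of each scenario, so per-call statistics would need logging after each request instead; the TIMING lines can then be pulled out of the logs and averaged offline.

// karate-config.js - sketch only, not an official Karate feature
function fn() {
  karate.configure('afterScenario', function() {
    // 'responseTime' holds the time (ms) of the last HTTP call in the scenario
    var time = karate.get('responseTime');
    karate.log('TIMING', karate.info.scenarioName, time);
  });
  return {};
}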

AngularJS Protractor E2E Mocking

I have an Angular SPA retrieving its data from a Node backend.
Since the Node project is fully covered with tests, I want to mock the Angular HTTP calls.
(I do not want to start a discussion about functional/smoke tests in general, thanks.)
What I'd like to have is something like this:
Api = $injector.get('Api');
sinon.mock(Api, 'getSomethingFromServer').andRespondWith({foo: 'bar'})
assert(Api.getSomethingFromServer.wasCalledOnce);
But no matter what I try, I can't find a nice solution.
I found several posts regarding the same issue.
For example this one.
Since Protractor changes a lot, and frequently, I'd just like to ask here on SO whether anyone has found a proper solution for mocking the HTTP requests.
We are currently doing that using http://apiary.io
Besides being able to "mock" your responses, you get a nice API description as a bonus!
What we do is run the Angular app against a proxy which, depending on whether we are in dev or in production, forwards either to the real backend or to the one provided by Apiary.
I agree with the previous answer. One answer to the frequent changes in Protractor is to completely decouple the backend from the system under test, whether it is a mock, a stub, or a fake.
The difficulty is maintaining strong coherence with the real backend, but that is not necessarily more overhead than trying to keep up with an ever-changing way of mocking inside Angular.
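If you would rather keep the mocking in the browser than behind a proxy, one technique not mentioned in the answers above is Angular's ngMockE2E module injected through Protractor's addMockModule(). A rough sketch, assuming angular-mocks.js is available to the page; the URL and payload are hypothetical.

// In a Protractor spec or onPrepare: register a mock module before the page loads
browser.addMockModule('httpMocks', function() {
  angular.module('httpMocks', ['ngMockE2E'])
    .run(function($httpBackend) {
      // canned response for the call under test
      $httpBackend.whenGET('/api/something').respond({ foo: 'bar' });
      // let templates and any other calls through untouched
      $httpBackend.whenGET(/.*/).passThrough();
    });
});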

Advice on correctly managing threads

I have a big Domino web application which makes numerous "OpenAgent" calls to Java agents to retrieve data via AJAX. The application is used by several users.
What are the main parameters you would advise me to check and adjust on the server in order to avoid HTTP hangs or performance issues?
There is quite an overhead in calling an agent, be it LotusScript or Java, so if your AJAX calls are frequent you are going to overload the server easily.
Domino comes with a test tool for this called Server.Load. It lets you emulate a heavily loaded server so you can see how your code performs under that load. Another tool I've used is Rational Functional Tester (trial version), but there are probably free ones out there as well (e.g. JMeter/LoadRunner, which I haven't used).
So if you are doing infrequent, complex actions that may take time and don't need a quick response to the user, I would recommend continuing with the web agent.
If they are simple lookup calls, I would recommend alternative methods. For example, XPages has AJAX functionality built in with scaling in mind. Or, if it is JSON data, look into the Domino Data Service or Domino URL commands.
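For example, here is a hedged sketch of replacing one of those OpenAgent AJAX calls with a Domino Data Service read; the database path and view name are placeholders, and the service has to be enabled on the server and database first.

// browser-side sketch: read view entries as JSON instead of calling an agent
fetch('/apps/orders.nsf/api/data/collections/name/byCustomer?count=20', {
  headers: { 'Accept': 'application/json' }
})
  .then(function(res) { return res.json(); })
  .then(function(entries) { console.log(entries.length + ' entries'); });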

JMeter Tests and Non-Static GET/POST Parameters

What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables change for each run?
A quick, common example:
1. You go to a web page.
2. Enter some information into a form.
3. Click Save.
4. Behind the scenes, a new record is entered in the database.
5. You want to edit the record you just entered, so you go to another web page. Behind the scenes it's passing the page a parameter with the database ID of the row you just created.
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is:
1. Record a test using the above actions.
2. Make a note of each place where a query-string variable may change from run to run.
3. Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable.
4. Replace all appropriate instances of the hard-coded parameter with the above variable.
This works and can be automated to an extent. However, it can get tedious, error-prone, and fragile. Is there a better/commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
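For illustration, the extraction in step 3 can also be done with a BSF/JSR223 PostProcessor written in JavaScript instead of an extractor element. A rough sketch, attached to the "Save" request; the recordId name and pattern are hypothetical, while prev and vars are the standard JMeter script bindings.

// BSF/JSR223 PostProcessor (JavaScript) on the request that creates the record
var body = prev.getResponseDataAsString();
var match = /recordId=(\d+)/.exec(body);
if (match) {
  vars.put('recordId', match[1]); // later requests can reference ${recordId}
}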
Sounds to me like you're on the right track. The best that can be achieved with JMeter is to extract page variables with a Regular Expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution and becomes increasingly tricky to maintain or grow.
If you've reached this point, then you may want to consider a tool that is more specialised for this sort of problem. Have a look at a web testing tool such as Watir; it will automatically handle changing POST parameters. You would still need to extract parameters if you need to do a database update, but using Watir allows for better code reuse, making the problem less painful.
We have had great success testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion with JavaScript to do complex validation of the response. Hope it helps.
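Here is a sketch of what such a validation script might look like in a BSF/JSR223 Assertion (JavaScript); prev, vars and AssertionResult are the standard bindings, and the expectedId variable is assumed to have been populated earlier, e.g. by the JDBC step.

// BSF/JSR223 Assertion (JavaScript) on the GET/POST sampler
var body = prev.getResponseDataAsString();
var expectedId = vars.get('expectedId');
if (body.indexOf(expectedId) < 0) {
  AssertionResult.setFailure(true);
  AssertionResult.setFailureMessage('Response does not contain record ' + expectedId);
}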