Can Karate-Gatling scripts be used for distributed testing/clustering in performance testing? - karate

We are currently using JMeter for API performance testing in distributed mode (1 master + 3 slaves), since we need to generate 10k requests.
We are now using Karate for API functional testing and have successfully integrated it with Gatling via the Maven dependencies. As the documentation describes, I can inject users and a duration into these scripts, run them, and generate a report (tested with 10 users).
I have the following questions:
Is it possible to run these Karate-Gatling scripts the way we run JMeter in distributed mode?
How many users can be injected with Karate-Gatling scripts on a single machine (a small AWS/GCP instance/VM)?
I guess this will vary with how fast the application responds and with the request volume.
I have gone through JMeter vs Gatling comparisons, and it looks like clustering/distributed mode is supported only in the paid version of Gatling.
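For context, my simulation looks roughly like the following (the package name, feature path, URL pattern and user counts are just illustrative):

```scala
package perf

import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class ApiSimulation extends Simulation {

  // URL patterns let Gatling group requests with dynamic path parameters in the report
  val protocol = karateProtocol("/users/{id}" -> Nil)

  // Re-use an existing Karate functional feature as the load scenario
  val users = scenario("users").exec(karateFeature("classpath:perf/users.feature"))

  setUp(
    // ramp up 10 virtual users over 5 seconds
    users.inject(rampUsers(10) during (5 seconds)).protocols(protocol)
  ).maxDuration(1 minute)
}
```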

As per the Gatling Performance Testing Pros and Cons article:
If you don't want to pay for Gatling FrontLine but you need to take your load test a little bit further, it may not be as easy to distribute the load as it is with JMeter. Despite that, not all is lost: Gatling actually provides a way to distribute the load with the free version of the tool.
The way of distributing load in Gatling is documented here, but the main idea of Gatling's distribution is a bash script that takes care of executing the Gatling scripts located on the slave machines, which then send the logs generated by the simulation to the master machine, where the consolidated report is built.
So you can kick off several Gatling instances on several hosts and use the provided bash script to run your test simultaneously on different machines. You might also want to use the ssh-copy-id command to avoid entering the password for each machine.
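A rough sketch of the idea behind that bash script, expressed here with scala.sys.process purely for illustration: the host names, user name and paths are made up, it assumes the simulation and its dependencies are already available on each slave, and the Gatling CLI flags (-s, -nr, -ro) should be checked against your Gatling version.

```scala
import scala.sys.process._

object DistributedRun extends App {
  val slaves  = Seq("slave1", "slave2", "slave3")   // illustrative host names
  val remote  = "/opt/gatling"                      // Gatling install on each slave
  val results = "results/distributed"               // folder under the master's Gatling home

  s"mkdir -p $results".!

  // 1. Run the simulation on every slave, skipping report generation (-nr)
  slaves.foreach { host =>
    s"ssh ec2-user@$host $remote/bin/gatling.sh -nr -s perf.ApiSimulation".!
  }

  // 2. Pull each slave's simulation.log back to the master, renamed per host
  slaves.foreach { host =>
    s"scp ec2-user@$host:$remote/results/*/simulation.log $results/simulation-$host.log".!
  }

  // 3. Build one consolidated report from the collected logs (-ro = reports only)
  "bin/gatling.sh -ro distributed".!
}
```

This is also where ssh-copy-id (mentioned above) pays off, so the ssh and scp calls don't prompt for a password.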

Related

How to create an automated test flow with JMeter?

Is it possible to create an automated test flow with JMeter?
So that all the JMeter tests run by themselves and I just get the generated results?
Also, can we add different scenarios for an application to that automation?
It would be great if someone could share how to start JMeter automation from scratch.
Thanks
I don't think it is possible to have "JMeter tests run themselves" in the literal sense, but JMeter tests can be invoked in multiple ways:
Command-line
Ant task
Maven plugin
All options provide results either in the .jtl file format or as an HTML report.
If you want to invoke the tests in an unattended manner, you can rely on your operating system's task-scheduling mechanisms, such as the Windows Task Scheduler or cron, or use a continuous integration tool like Jenkins. All of these are capable of kicking off arbitrary tasks based on various criteria, producing reports, displaying trends, and so on.
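As a sketch, the non-GUI command that such a scheduled or CI-driven run ends up executing looks like this (wrapped here in a small Scala launcher purely for illustration; the test-plan and output paths are made up, and jmeter is assumed to be on the PATH):

```scala
import scala.sys.process._

object RunJMeter extends App {
  // Non-GUI JMeter run: -n = non-GUI, -t = test plan, -l = results (.jtl),
  // -e -o = generate the HTML dashboard into the given folder.
  val cmd = Seq(
    "jmeter", "-n",
    "-t", "plans/api-tests.jmx",
    "-l", "results/api-tests.jtl",
    "-e", "-o", "results/dashboard"
  )

  val exitCode = cmd.!   // streams JMeter's console output to stdout
  sys.exit(exitCode)     // non-zero exit code lets the scheduler/CI flag the run as failed
}
```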

Should E2E tests be run in production?

Apologies if this is open-ended.
My team and I are currently working on our end-to-end (E2E) testing strategy, and we are unsure whether we should execute our E2E tests against our staging site or against our production site. We have gathered that there are pros and cons to both.
Pros of testing against staging:
Won't corrupt analytics data in production.
Can detect failures before they hit production.
Pros of testing in production:
Uses actual components of the system, including the database and other configuration, and may catch issues with prod configs.
I am sometimes not sure whether we are conflating E2E testing with a monitoring service (if such a thing exists). Does anyone have an opinion on the matter?
In addition, when should E2E tests run? Since every part of the system is being tested, there doesn't seem to be a single owner of the test suite, which makes it hard to determine when the E2E suite should run. We were hoping we could run E2E in some sort of pipeline before hitting production. Does that mean I should run these tests whenever either the front end or the back end changes? Or would you rather just run the E2E suite on an interval, regardless of any change?
In my team, experience has shown that test automation is best run against a dedicated test server periodically, and that new code is deployed only after it has passed several test sessions in a row.
Local test runs are for test automation development and debugging.
The test server is for scheduled runs, because no matter how good you are at writing tests, at some point they will take many hours to run, and you need reliable statistics about them over time, using fake data that won't break the production server.
I disagree with @MetaWhirledPeas on the point of pursuing only fast test runs. Your priority should always be better coverage and reduced flakiness; you can always reduce the run time through parallelization.
Running in production: I have seen many situations where a test left the official site in a funny state and damaged the company's reputation. Other dangers are:
Breaking your database
Making purchases from non-existent users and losing money
Creating unnecessary strain on the official site's API, which makes the client experience bad during the run or can even cause the server to stop completely.
So, in our team we have a dedicated manual tester for the production site.
You might not have all the best options at your disposal depending on how your department/environment/projects are set up, but ideally you do not want to test in production.
I'd say the general desire is to use fake data as often as possible, and to curate it to cover real-world scenarios. If your prod configs and setup differ from your testing environment, do the hard work to ensure your testing environment's configuration matches prod as closely as possible. This is easier to accomplish if you're using CI tools, but discipline is required no matter what your setup may be.
When the tests run is going to depend on a few things.
If you've made your website and dependencies trivial to spin up, and if you are already using a continuous integration workflow, you might be able to have the code build and launch tests during the pull request evaluation. This is the ideal.
If you have a slow build/deploy process you'll probably want to keep a permanent test environment running. You can then launch the tests after each deployment to your test environment, or run them ad hoc.
You could also schedule the tests to run periodically, but usually this indicates that the tests are taking too long. Strive to create quick tests to leave the door open for integration with your CI tools at some point. Parallelization will help, but your biggest gains will come from using cy.request() to fly through repetitive tasks like logging in, and using cy.intercept() to stub responses instead of waiting for a service.

Running integration/e2e tests on top of a Kubernetes stack

I've been digging into the way people run integration and e2e tests in the context of Kubernetes and have been quite disappointed by the lack of documentation and feedback. I know there are amazing tools such as kind or minikube that let you run resources locally. But in the context of CI, and with a bunch of services, they do not seem to be a good fit, for obvious resource reasons. I think there are great opportunities in running tests for:
Validating manifests or Helm charts
Validating that a component behaves well as part of a bigger whole
Validating the global behaviour of a product
The point here is not really about the testing framework but more about the environment on top of which the tests could be run.
Do you share my view? Have you ever run this kind of test? Do you have any feedback or insights about it?
Thanks a lot
Interesting question, and something that I have worked on over the last couple of months for my current employer. Essentially we ship a product as Docker images with manifests, and when writing e2e tests I want to run the product as close to the customer environment as possible.
To solve this we have built scripts that interact with our standard cloud provider (Google Cloud) to create a cluster, deploy the product, and then run the tests against it.
For the major cloud providers this is not a difficult task, but it can be time-consuming. There are a couple of things we have learnt the hard way to keep in mind while developing the tests:
Concurrency: this may sound obvious, but do think about the number of concurrent builds your CI can run.
Latency from the cloud: don't assume you will get an instant response to every command you run in the cloud. Also think about timeouts: if you bring up a product with lots of pods and services, what is an acceptable start-up time?
Errors causing build failures: this is an interesting one. We have seen build failures due to network errors when communicating with our test deployment. These are nearly always transient, so it is best to avoid letting them fail the build.
One thing to look at is GitLab, which provides documentation on how to build and test images in their CI pipeline.
On my side I use Travis CI. I build my container image inside it, then run k8s with kind (https://kind.sigs.k8s.io/) inside Travis CI, and then launch my e2e tests.
There is some additional information in this blog post: https://k8s-school.fr/resources/en/blog/k8s-ci/
And here are the scripts to install kind inside Travis CI in two lines: https://github.com/k8s-school/kind-travis-ci.git. It allows lots of customization on the k8s side (enabling PSP, changing the CNI plugin, etc.).
Here is an example: https://github.com/lsst/qserv-operator
Alternatively, I use GitHub Actions, which makes it easy to install kind (https://github.com/helm/kind-action), provides plenty of features, and offers free worker nodes for open-source projects.
Here is an example: https://github.com/xrootd/xrootd-k8s-operator
Please note that GitHub Actions workers may not scale for large builds/e2e tests; Travis CI scales pretty well.
In my understanding, this workflow could be moved to an on-premise GitLab CI, where your application can interact with other services located inside your network.
One interesting thing is that you do not have to maintain a k8s cluster for your CI; kind will do it for you!

What are best practices for running integration tests?

We have more than 150 Postman tests. They are **integration tests** and they run against actual databases and Service Fabric instances. They fail when they are not lined up with development, which merges to the integration branch from time to time.
They are great for finding some errors. They are a set of tests run on each new build of the product to verify that the build is testable before it is released into the hands of the test team. We are using Newman to run them from the console. At the same time, we want to improve our continuous deployment pipeline.
Questions
1. Where should we keep/run them? Is there a cloud tool to run Postman API tests?
2. How should we use/approach them? (After every commit? Daily?)
3. Can we call Postman API tests integration or smoke tests?
My understanding of smoke tests is that they should be relatively small in number (150 tests seems, at first sight, too many) and should practically "never" (or at least not often) fail. You want to include only mission-critical endpoints of your application, and the tests should execute very fast.
The scope of smoke tests is to mark a release/build or installation as unacceptable/failed by checking for simple failures, such as (but not necessarily limited to): is the status code 200 (or whatever is expected), and is the response in JSON format?
I would not rely on smoke tests to find actual bugs in a specific REST endpoint, but rather to get a general overview that things are running.
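To make that scope concrete, a smoke-level check amounts to little more than the following. This is a generic sketch, not Postman; it uses the requests-scala and ujson libraries, and the URL is illustrative:

```scala
import scala.util.Try

object SmokeCheck extends App {
  // Hit a mission-critical endpoint and check only for "simple failures":
  // the expected status code and that the body parses as JSON.
  val response = requests.get("https://api.example.com/health", readTimeout = 5000)

  assert(response.statusCode == 200, s"unexpected status ${response.statusCode}")
  assert(Try(ujson.read(response.text())).isSuccess, "response body is not valid JSON")

  println("smoke check passed")
}
```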
1. Where should we keep/run them? Is there a cloud tool to run Postman API tests?
Store them in version control and use Jenkins or another CI tool to run them.
Additionally, you might want to run smoke tests after a deployment to a staging or production server.
Postman offers some paid tools as well.
2. How should we use/approach them? (After every commit? Daily?)
They should be part of your pipeline. Fail fast! Run them, if possible and if they are reliable, after every commit or build. If this is not possible, for example because of external dependencies that are unreliable during the day, run them at night.
3. Can we call Postman API tests smoke tests?
You can call them whatever you like!
The question is more like: "what exactly are you trying to achieve?" If some of your tests do a bit too much or fail too often, it may be because they are really integration tests.

Most effective and realistic free web-app load tester?

I'm in the middle of picking tools to load test my Ruby on Rails app. So far I'm trying out:
apachebench
autobench
httperf
selenium
trample
Is there anything else worth looking at? I don't have a ton of hardware, so efficiency is a concern.
The famous ones (at least for me):
JMeter
The Grinder
OpenSTA
All of them support simulating concurrent users, can generate decent load, and support distributed testing if required (with distributed agents). JMeter and OpenSTA have a recorder, and recorded scripts are relatively easy to parameterize; for The Grinder, I'm not sure.
OpenSTA is the most polished and has the most features (but it is not portable).
JMeter is my preferred one, mostly because I know it well and because testing can easily be automated (e.g. included in a build). Have a look at the user manual to get started. If you need to record over SSL, check out BadBoy.
More interesting reading at Shootout: Load Runner vs The Grinder vs Apache JMeter.
Check out JMeter.