Reporting test results in order to create nightly metrics

I have a microservices system written in Java and deployed using Docker containers.
To run our nightly tests, a tester container is started, performs its E2E tests against the system, and produces its JUnit report.
I wish to generate a summarized report at the end of the tester's run, simply a list of failed tests, and send it to another server for long-term storage and analysis.
I suppose I could alter said tester's Dockerfile and add this functionality as a command which processes the JUnit report and sends it, but I wonder what the best practice would be, both for generating the report and for sending it.
Thanks in advance,
Ariel
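For what it is worth, a minimal sketch of the approach described above (a command baked into the tester image that parses the JUnit XML and posts a summary of the failed tests), assuming a Java 11+ runtime; the report path, the endpoint URL, and the JSON shape are placeholders, not anything prescribed by JUnit or Docker:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FailedTestReporter {

    public static void main(String[] args) throws Exception {
        // Placeholder path of the JUnit XML report produced by the tester container.
        Path report = Path.of("/reports/junit-report.xml");

        // Collect every <testcase> that contains a <failure> or <error> child.
        List<String> failed = new ArrayList<>();
        Element root = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(report.toFile())
                .getDocumentElement();
        NodeList testcases = root.getElementsByTagName("testcase");
        for (int i = 0; i < testcases.getLength(); i++) {
            Element tc = (Element) testcases.item(i);
            boolean hasFailure = tc.getElementsByTagName("failure").getLength() > 0
                    || tc.getElementsByTagName("error").getLength() > 0;
            if (hasFailure) {
                failed.add(tc.getAttribute("classname") + "#" + tc.getAttribute("name"));
            }
        }

        // Build a tiny JSON payload by hand; a real implementation would use a JSON library.
        String payload = failed.isEmpty()
                ? "{\"failedTests\":[]}"
                : "{\"failedTests\":[\"" + String.join("\",\"", failed) + "\"]}";

        // Send the summary to the collecting server (hypothetical endpoint).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://metrics.example.com/api/test-runs"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Upload status: " + response.statusCode());
    }
}
```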

Related

How to create automation test flow with JMeter?

Is it possible to create an automation test flow with JMeter,
so that all the JMeter tests run by themselves and I just get the results generated for them?
Also, can we add different scenarios for an application in that automation?
It would be great if someone could share how to start from scratch with JMeter automation.
Thanks
I don't think it is possible to have JMeter tests literally "run themselves".
JMeter tests can be invoked in multiple ways:
Command-line
Ant task
Maven plugin
All options produce results either in the .jtl file format or as an HTML report.
If you want to invoke the tests in an unattended manner, you can rely on your operating system's task-scheduling mechanisms, such as Windows Task Scheduler or cron, or use a continuous integration tool like Jenkins; all of them are capable of kicking off arbitrary tasks based on various criteria, producing reports, displaying trends, etc.
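As a rough illustration of the command-line option, a scheduled or CI-driven run could simply shell out to JMeter in non-GUI mode; the sketch below assumes the jmeter binary is on the PATH and uses placeholder file names:

```java
public class NightlyJMeterRun {
    public static void main(String[] args) throws Exception {
        // Non-GUI JMeter run: -n (no GUI), -t (test plan), -l (results .jtl),
        // -e -o (generate the HTML report into a folder after the run).
        // File names and the assumption that "jmeter" is on the PATH are placeholders.
        Process jmeter = new ProcessBuilder(
                "jmeter", "-n",
                "-t", "testplan.jmx",
                "-l", "results.jtl",
                "-e", "-o", "html-report")
                .inheritIO()
                .start();
        int exitCode = jmeter.waitFor();
        System.exit(exitCode); // a non-zero exit lets the scheduler/CI mark the run as failed
    }
}
```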

Should E2E be run in production?

Apologies if this is open-ended.
Currently my team and I are working on our end-to-end (E2E) testing strategy, and we seem to be unsure whether we should execute our E2E tests against our staging site or our production site. We have gathered that there are pros and cons to both.
Pro Staging Tests
Won't corrupt analytics data on production.
Can detect failures before they hit production.
Pro Production Tests
Will be using actual components of the system, including the database and other configurations, and may capture issues with prod configs.
I am sometimes not sure if we are conflating E2E with monitoring services (if such a thing exists). Does someone have an opinion on the matter?
In addition, when are E2E tests run? Since every part of the system is being tested, there doesn't seem to be an owner of the test suite, making it hard to determine when the E2E tests should be run. We were hoping that we could run E2E in some sort of a pipeline before hitting production. Does that mean I should run these tests whenever either the front end or the back end changes? Or would you rather just run the E2E suite on an interval, regardless of any change?
In my team, experience has shown that test automation is better done on a dedicated test server, run periodically, with new code deployed only after it has been tested successfully several sessions in a row.
Local test runs are for test automation development and debugging.
The test server is for scheduled runs, because no matter how good you are at writing tests, at some point they will take many hours in a row to run, and you need reliable statistics on them over time, with fake data that won't break the production server.
I disagree with #MetaWhirledPeas on the point of pursuing only fast test runs. Your priority should always be better coverage and reduced flakiness. You can always reduce the run time through parallelization.
Running in production: I have seen many situations where a test run leaves the official site in a funny state that damages the company's reputation. Other dangers are:
Breaking your database
Making purchases from non-existent users and losing money
Creating unnecessary strain on the official site's API, which degrades the client experience during the run or can even bring the server down completely.
So, in our team we have a dedicated manual tester for the production site.
You might not have all the best options at your disposal depending on how your department/environment/projects are set up, but ideally you do not want to test in production.
I'd say the general desire is to use fake data as often as possible, and curate it to cover real-world scenarios. If your prod configs and setup are different than your testing environment, do the hard work to ensure your testing environment configuration matches prod as much as possible. This is easier to accomplish if you're using CI tools, but discipline is required no matter what your setup may be.
When the tests run is going to depend on some things.
If you've made your website and dependencies trivial to spin up, and if you are already using a continuous integration workflow, you might be able to have the code build and launch tests during the pull request evaluation. This is the ideal.
If you have a slow build/deploy process you'll probably want to keep a permanent test environment running. You can then launch the tests after each deployment to your test environment, or run them ad hoc.
You could also schedule the tests to run periodically, but usually this indicates that the tests are taking too long. Strive to create quick tests to leave the door open for integration with your CI tools at some point. Parallelization will help, but your biggest gains will come from using cy.request() to fly through repetitive tasks like logging in, and using cy.intercept() to stub responses instead of waiting for a service.

Jenkins testing setup and testing on every push for the main project or test code

I have to test the project code using Jenkins. In Jenkins, I am using automated testing through Selenium. My question is: I have test scripts on my laptop, and using SSH to a Google Cloud virtual machine I have to set this up and test on every push to Git. Here are my two demands:
1. On demand: whenever we want all the tests run, we should just be able to trigger the test run.
2. Whenever we deploy something to staging, run all the test cases.
Regarding (1), you are always able to trigger a job manually through the Jenkins UI. No special configurations there.
Regarding (2), you can install a plugin that will integrate webhooks functionality into Jenkins. In my case, I like to use Generic Webhook Trigger for this purpose, as it has the flexibility that I need on my setups.
In order to trigger the job on every deploy to staging, and assuming that your deploys are automated, you will need to add a final step to the deploy script that makes an HTTP request to the webhook URL (e.g. JENKINS_URL/generic-webhook-trigger/invoke?token=<your-token>).
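A minimal sketch of that final step, assuming the Generic Webhook Trigger plugin's invoke endpoint quoted above and a placeholder Jenkins URL and token (in a shell-based deploy script this would typically be a one-line curl call instead):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerJenkinsJob {
    public static void main(String[] args) throws Exception {
        // Placeholder Jenkins URL and token; the token must match the one configured in the job.
        String url = "https://jenkins.example.com/generic-webhook-trigger/invoke?token=staging-deploy";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Jenkins responded with " + response.statusCode());
    }
}
```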
I don't fully understand your setup with your machine and the VM on Google Cloud; in any case, I believe that the test code should be available to the machine running the tests, and not be stored in a location that might be unavailable when the tests need to be run (as your laptop might be).

Can Karate-Gatling scripts be used for distributed testing/clustering for performance testing?

We are currently using JMeter for API performance testing in distributed mode (1 master + 3 slaves), as we need to generate 10k requests.
We are now using Karate for API functional testing and have successfully integrated it with Gatling using Maven dependencies. As the documentation says, I can inject users and a duration into these scripts, run them, and generate a report (tested with 10 users).
Kindly advise on the queries below:
Is it possible to run these Karate-Gatling scripts in distributed mode, as we do with JMeter?
How many users can be injected using Karate-Gatling scripts on a single machine (AWS/GCP mini instance/VM)?
I guess this might vary with how fast the application responds and with the volume involved.
I have gone through JMeter vs Gatling comparisons, and it looks like clustering/distributed mode is supported only in Gatling's paid version.
As per the Gatling Performance Testing Pros and Cons article:
If you don’t want to pay for Gatling FrontLine, but you need to take your load test a little bit further, it may not be so easy to distribute the load as it is with JMeter. Despite that, not all is lost, as Gatling actually provides a way to distribute the load with the free version of the tool.
The way of distributing load in Gatling can be found here, but the main idea of Gatling’s distribution is based on a bash script that takes care of executing the Gatling scripts located in the slaves machines, which then sends the logs generated by the simulation to the master machine, where the consolidated report will be built.
So you can kick off several Gatling instances on several hosts and use the provided Bash script to run your tests simultaneously on different machines. You might also want to use the ssh-copy-id command to avoid entering the password for each machine.

What are best practices for running integration tests?

We have more than 150 Postman tests. They are **integration tests** and they run against actual databases and Service Fabric instances. They fail because they are not aligned with development, which merges to integration from time to time.
They are great for finding some errors. It is a set of tests run on each new build of the product to verify that the build is testable before it is released into the hands of the test team. We are using Newman to run them from the console. At the same time, we want to improve our continuous deployment pipeline.
Questions
1. Where should we hold/run them? Is there a cloud tool to run Postman API tests?
2. How should we use/approach them? (After every commit? Daily?)
3. Can we call Postman API tests integration or smoke tests?
My understanding of smoke tests is that they should be relatively small in size (150 tests seems, at first sight, too many) and practically "never" (or not too often) fail. You want to include only mission-critical endpoints for your application, and the tests should execute very fast.
The scope of the smoke tests would be to mark a release/build or installation as unacceptable/failed by testing for simple failures, such as (but not necessarily limited to): is the status code 200 (or something else), and is the response in JSON format?
I would not rely on smoke tests to find actual bugs in a specific REST endpoint, but rather to get a general overview that things are running.
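As a rough illustration of how shallow such a check stays (the endpoint below is a placeholder, and a real suite would of course remain in Postman/Newman rather than hand-rolled code):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HealthSmokeCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; the point is that a smoke check only asks
        // "does it answer, and does it answer with JSON?" - nothing more.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/health"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        boolean okStatus = response.statusCode() == 200;
        boolean isJson = response.headers()
                .firstValue("Content-Type")
                .map(v -> v.contains("application/json"))
                .orElse(false);

        if (!(okStatus && isJson)) {
            System.err.println("Smoke check failed: status=" + response.statusCode());
            System.exit(1); // mark the release/build as unacceptable
        }
        System.out.println("Smoke check passed");
    }
}
```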
1. Where should we hold/run them? Is there a cloud tool to run Postman API tests?
Use version control to save them and use Jenkins or another CI tool in order to run them.
Additionally you might want to run smoke tests after a deployment on a staging or production server.
Postman offers some paid tools as well.
2. How should we use/approach them? (After every commit? Daily?)
They should be part of your pipeline. Fail fast! If possible and reliable, run them after every commit or build. If this is not possible, for example because of external dependencies that are unreliable during the day, run them at night.
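A hedged sketch of such a pipeline step, shelling out to Newman with its standard run options (collection and environment file names are placeholders):

```java
public class RunPostmanTests {
    public static void main(String[] args) throws Exception {
        // Shell out to Newman as the pipeline would; file names are placeholders.
        // The JUnit export lets the CI server (e.g. Jenkins) pick up and trend the results.
        Process newman = new ProcessBuilder(
                "newman", "run", "integration-tests.postman_collection.json",
                "-e", "staging.postman_environment.json",
                "--reporters", "cli,junit",
                "--reporter-junit-export", "newman-results.xml")
                .inheritIO()
                .start();
        System.exit(newman.waitFor()); // a non-zero exit fails the build: fail fast
    }
}
```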
3. Can we call Postman API tests smoke tests?
You can call them whatever you like!
The question is more like: "What exactly are you trying to achieve?" If some of your tests do a bit too much or fail too often, it may be because they are more like integration tests.