JUnit 5, parallel tests: logs getting mixed up

For my project I am using JUnit 5 for testing. As we know, when tests run in parallel, all the logs get interleaved in an unpredictable order. Since my project is mostly backend, it requires a lot of log debugging.
So, is there a way to reorder the logs by the test method that produced them? Has anyone here faced this issue and resolved it, or does anyone have thoughts on it? It would be a great help. Thank you.
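One common workaround, sketched below on the assumption that SLF4J with Logback (or another MDC-aware backend) is in use: tag every log line with the test that produced it via a JUnit 5 extension and the MDC, then group or filter the interleaved output by that tag afterwards. The extension class and the "testName" MDC key are illustrative names, not from the question.

```java
// Illustrative sketch: a JUnit 5 extension that stores the current test in
// SLF4J's MDC so every log line can carry it. With the key in the log pattern
// (e.g. %X{testName} in Logback), parallel output can later be grouped or
// sorted per test with a simple grep/sort.
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.slf4j.MDC;

public class TestNameLoggingExtension implements BeforeEachCallback, AfterEachCallback {

    private static final String MDC_KEY = "testName";

    @Override
    public void beforeEach(ExtensionContext context) {
        // Tag the current worker thread's log lines with "ClassName.methodName".
        MDC.put(MDC_KEY, context.getRequiredTestClass().getSimpleName()
                + "." + context.getRequiredTestMethod().getName());
    }

    @Override
    public void afterEach(ExtensionContext context) {
        // Clean up so the worker thread does not leak the tag into the next test.
        MDC.remove(MDC_KEY);
    }
}
```

Register it on the test classes with @ExtendWith(TestNameLoggingExtension.class) and add %X{testName} to the log pattern. One caveat: the MDC is thread-local, so threads spawned inside a test will not carry the tag unless it is propagated to them.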

Related

How to run TestCafe tests in parallel in CI by specifying the metadata

As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), and so I'd like to know if it's possible to run these tests from the CLI in parallel, as they are orthogonal.
The goal is of course to improve the execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures, meaning that if a test run in parallel under a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but our custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, a great thing would be if we could run these scenarios/metadata in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal. To do that, run TestCafe with the metadata of your fixture/test specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution, you'll need to merge generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.

How to run tests in multiple environments (qa-dev) in TestParallel class and have results in one report?

We have QA and DEV environments in our automation repo. We are using Karate as our framework. We have a TestParallel class and an integrated Allure report.
How could we run all tests in QA first and then in DEV, back to back, using the TestParallel class and see the results in the same report?
Thanks for such a great tool btw.
We are going to try and make this easier in the next version.
For now, you have to aggregate the reports yourself. Can you try this and let us know how it goes?
Use the Runner class twice to run your tests with different settings, with karate.env set to QA and then to DEV.
The important part is to use a different value for the workingDir, e.g. target/reports/qa and then target/reports/dev; otherwise the second run will overwrite the first.
Now, when generating the HTML report, you can provide target/reports as the source folder. This should work for the Maven Cucumber Reports; for Allure, please figure this out on your own.
If the above approach does not work well enough for your needs, please figure out a way to manually aggregate the Results object you get from each instance of the Runner; this should not be too complicated as Java code.
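A rough sketch of the two-run idea above, assuming a recent Karate version where the Runner builder exposes path(), reportDir() and parallel(); reportDir plays the role of the workingDir mentioned above, and the classpath:features path and thread count are placeholders to adjust to your project.

```java
// Sketch only: run the same features twice, once per environment, keeping the
// reports in separate folders so the second run does not overwrite the first.
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TestParallel {

    @Test
    void testQaThenDev() {
        // First pass: QA environment, reports under target/reports/qa
        System.setProperty("karate.env", "qa");
        Results qa = Runner.path("classpath:features")
                .reportDir("target/reports/qa")
                .parallel(5);

        // Second pass: DEV environment, reports under target/reports/dev
        System.setProperty("karate.env", "dev");
        Results dev = Runner.path("classpath:features")
                .reportDir("target/reports/dev")
                .parallel(5);

        // Point the HTML report generator (e.g. Maven Cucumber Reports) at
        // target/reports to aggregate both runs; fail the build on any failure.
        assertEquals(0, qa.getFailCount() + dev.getFailCount(),
                qa.getErrorMessages() + "\n" + dev.getErrorMessages());
    }
}
```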

Running GLORP tests

I am trying to get GLORP into the Pharo 2.0 image. I managed to load GLORP and the PostgresV2 driver, and then changed the GlorpDatabaseLoginResource default login params. After that, I started running the tests, beginning with the PostgresV2 test TestPGConnection; here I got 2 failures, testFieldConverter2 and testFieldConverter3.
After that I ran GlorpTest; here only 353 out of 674 tests passed. Is this normal? I am running the tests using the TestRunner. Any idea where I could have taken a wrong step?
Thanks in advance.
I got all the tests passing now. The problem was in the image I was using: the DateAndTime offset: method had been modified somehow (maybe installing some other packages did that), and that was causing my date-and-time-related functions to fail. After I loaded everything into a fresh image and ran the tests, it was a piece of cake. :)

Display history of a single test result in Jenkins - additional plugin or config issue?

Currently our Jenkins server only displays a history/graph for the overall number of passed/skipped/failed tests - I'm assuming that's the behavior out of the box.
If you select a single test, you'll get information about how long the test has been failing (assuming it did fail).
However, what we'd like to see is a history for that single test across the different builds, to identify whether the test has been failing in the past (and when) even though it just passed. If you find a build where it failed, you could click on it and investigate what might have caused the failure; if it passes again, you could check whether something actually fixed the test, or whether it was failing randomly all along.
Is this something that can be done somehow through the config, or do we need an additional plugin for this? If yes, which one?
Not sure if this makes much difference, but we're using Java (Maven) & TestNG (Surefire).
Both the TestNG plugin and the JUnit plugin will actually display history of the test results.
You just need to pick a given result and then:
For JUnit, click on "History" on the left side, and
For TestNG, you will see the history in the graph above the result. You can just click on the bars to see the older results, and if you click closer to the edge, the scope of the test results will adjust.
The Test Results Analyzer plugin does the job for me. There appear to be other suitable plugins out there as well.
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
Does the Static Code Analysis plugin help?

Cucumber + Capybara: when running a scenario in my feature file, only the background steps run whilst the scenario steps are ignored

After quite a few hours of searching for answers to no avail, along with trying to track down the issue myself within RubyMine, I am now resigning myself to asking a question about it...
When I run one of the scenarios in my feature file, or all scenarios, it only processes the background steps and then ignores all the others within my scenario.
The stats at the end then report:
1 Scenario (1 Failed)
4 Steps (3 Skipped, 1 Passed)
So no steps failed! I have verified that the scenario works on another machine and passes successfully. Does anyone have an idea why it would just be ignoring my scenario steps?
Thank you in advance
I have actually managed to fix this problem myself!!! :)
In the javascript_emulation.rb file there is a known issue around Capybara and RackTest; the workaround and easy fix for that is to remove ::Driver after Capybara in the JavaScript emulation bits.
If none of the ::Driver entries are removed, the following error is returned:
undefined method 'click' for class 'Capybara::Driver::RackTest::Node' (NameError)
followed by a list of the problem areas in different files.
If the ::Driver entry is removed only from the class Capybara::Driver::RackTest::Node, then the test will run but only process the background steps.
All instances of ::Driver must be removed in this file. For me there were 4 in total.
Hope this helps others :)