Why does the IntelliJ test runner count tests incorrectly? - testing

At some point during a test run, the IntelliJ test runner's "total" test count begins to increase (as though it's finding more tests while running), but the summary displayed when the tests are finished shows something like this:
Stopped. Tests failed: 5, passed: 3090, ignored: 392 of 3825 tests - 4 m 22 s 836 ms.
I'm not the best at math, but I'm pretty sure 5 + 3090 + 392 != 3825.
I see nothing in the IntelliJ bug tracker mentioning anything like this, so I'm wondering if there might just be a setup issue on my part, or something else entirely...

Related

Do we have an API to get a Test Cycle Summary in qTest?

Do we have an API in qTest that can provide a summary of a test cycle execution?
E.g. Passed: 23, Failed: 7, Unexecuted: 10, Running: 2
We need this data for generating a report in our consolidated reporting tool, along with data from some other sources.
Nothing that gives exactly what you ask for, but you could use the API calls below to create it yourself.
You can get the status of all test runs in a project using
GET /api/v3/projects/{projectId}/test-runs/execution-statuses
Or, to get results from a specific test cycle, first find all the test runs in that cycle using
/api/v3/projects/{projectId}/test-runs?parentId={testCycleID}&parentType=test-cycle
(append &expand=descendants to find test runs in containers under the test cycle)
and then get the results of each run individually using
/api/v3/projects/{projectId}/test-runs/{testRunId}/test-logs/last-run
See https://qtest.dev.tricentis.com/
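For reference, here is a rough sketch of how those calls could be combined in Python with the requests library. The base URL, token, IDs, and the JSON field names ("items", "status"/"name") are assumptions on my side, so check the API docs linked above for the exact response shape:
import requests
from collections import Counter

BASE = "https://yourcompany.qtestnet.com/api/v3"   # hypothetical qTest instance
HEADERS = {"Authorization": "Bearer <api-token>"}  # hypothetical API token
PROJECT_ID, CYCLE_ID = 12345, 678                  # hypothetical IDs

# All test runs under the test cycle (including runs nested in containers).
runs = requests.get(
    f"{BASE}/projects/{PROJECT_ID}/test-runs",
    params={"parentId": CYCLE_ID, "parentType": "test-cycle", "expand": "descendants"},
    headers=HEADERS,
).json()

# Tally the status of the last execution of each test run.
summary = Counter()
for run in runs.get("items", []):  # field name is an assumption
    log = requests.get(
        f"{BASE}/projects/{PROJECT_ID}/test-runs/{run['id']}/test-logs/last-run",
        headers=HEADERS,
    ).json()
    summary[log.get("status", {}).get("name", "Unexecuted")] += 1  # field names assumed

print(dict(summary))  # e.g. {'Passed': 23, 'Failed': 7, 'Unexecuted': 10}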

Karate - Multi threaded access requested - issue

I have 100+ tests spread across 25+ feature files, and my karate-config.js has 3 "karate.callSingle" calls as below.
config.weatherParams = karate.callSingle(
    "file:src/test/java/utils/AvailableForecasts.feature",
    config
);
config.routingParams = karate.callSingle(
    "file:src/test/java/utils/CalculationInput.feature",
    config
);
config.vesselParams = karate.callSingle(
    "file:src/test/java/utils/VesselStatus.feature",
    config
);
I get the same issue when I use classpath: inside callSingle.
When I run all the tests at once with the parallel runner enabled (I tried anywhere from 1 to 100 threads), I get the following error:
org.graalvm.polyglot.PolyglotException: Multi threaded access requested by thread Thread[pool-2-thread-8,5,main] but is not allowed for language(s) js.
- com.oracle.truffle.polyglot.PolyglotEngineException.illegalState(PolyglotEngineException.java:132)
- com.oracle.truffle.polyglot.PolyglotContextImpl.throwDeniedThreadAccess(PolyglotContextImpl.java:727)
- com.oracle.truffle.polyglot.PolyglotContextImpl.checkAllThreadAccesses(PolyglotContextImpl.java:627)
- com.oracle.truffle.polyglot.PolyglotContextImpl.enterThreadChanged(PolyglotContextImpl.java:526)
- com.oracle.truffle.polyglot.PolyglotEngineImpl.enter(PolyglotEngineImpl.java:1857)
- com.oracle.truffle.polyglot.HostToGuestRootNode.execute(HostToGuestRootNode.java:104)
- com.oracle.truffle.polyglot.PolyglotMap.entrySet(PolyglotMap.java:119)
After playing around with multiple combinations, surprisingly, it works fine when I have only 2 "callSingle" calls in karate-config.js (commenting out VesselStatus.feature).
These 3 "callSingle" calls hit 3 different services and set variables that the other tests need, so all 3 are critical.
Is there a way to restructure this, or a different approach, to avoid the above issue?
This is a known issue that should be fixed in 1.1.0.RC2
Details here: https://github.com/intuit/karate/issues/1558
It would be good if you could confirm.
I faced this issue in my Karate implementation, @peter-thomas. I found an easy workaround, since we know the GraalVM JS engine doesn't support multi-threaded access to karate-config.js.
The workaround is to wait for a random number of milliseconds before returning from karate-config.js.
Here is the code inside karate-config.js:
function fn() {
  // ... the usual karate-config setup ...
  // sleep for a random 1000-5000 ms so the parallel threads hit the JS engine at different times
  var random_millis = Math.floor(Math.random() * (5000 - 1000 + 1)) + 1000;
  java.lang.Thread.sleep(random_millis);
  return config;  // return your config object as usual
}
With the above piece of code I ran my 100+ feature files with 20 parallel threads on Karate 1.2.0.RC1 and it worked fine.
How it works: all 20 threads would otherwise reach karate-config.js at the same time, but if we apply a random delay of 1 to 5 seconds, each thread waits a different amount of time, avoiding the multi-threaded access issue.
Between 1000 and 5000 milliseconds there is still a small chance that two threads pick the same delay, but until there is a concrete fix for this issue I guess we can use this workaround.
Thanks,
Saurabh

How to skip a testcase if a link is not present and go to next link in Robot framework

Scenario:
There are 5 Links in the Home page:
Link 1
Link 2
Link 3
Link 4
Link 5
Each of the above links are separate test cases, so there are a total of 5 test cases.
All the links may not present in all the sites, according to the requirements.
So I need to write a Robot Framework test suite which works dynamically for all the sites: one site may have only 3 links while another has all 5. So it is like SKIPPING a particular test case if that link is not present.
*** Keywords ***
Go to Manage Client Reports
    Click Link    link:Manage Client Reports
Can anyone help?
In the upcoming Robot Framework 4.0 release a new test status, skipped, will be introduced. Here is a brief status of the release:
Past due by 27 days, 87% complete
Major release concentrating on adding the skip status (#3622), IF/ELSE
(#3074) and enhancing the listener API (#3296 and #3538). Last major
release to support Python 2.
So it should be ready any time now.
The relevant issue is New SKIP status #3622. There will be Skip If and Skip keywords, and more.
How to skip tests
There are going to be multiple ways:
A special exception that library keywords can use to mark a single test to be skipped. See also #3685.
BuiltIn keyword Skip (or Skip Test and Skip Task) that utilizes the aforementioned exception.
BuiltIn keyword Skip If to skip based on a condition.
When the skipping exception is used in a suite setup, all tests in the suite are skipped.
Command line option --skip to unconditionally skip tests based on tags. Similar to --exclude, but skipped tests are shown in logs/reports with a skip status and not dropped from execution altogether.
Command line option --skiponfailure to skip tests if they fail. Similar effect to the current --noncritical.
What about criticality
As already discussed in #2087, the skip status is a very similar feature
to Robot's current criticality concept. There are many people who
would like to have both, but I don't think that's a good idea and
believe it's better to remove criticality when skipping is added.
Separate issue #3624 covers removing criticality and explains this in
more detail.
Colors
Skip status needs a specific color to match current pass (green) and
fail (red). Yellow feels like a good candidate with a traffic light
metaphor, but I'm open for other ideas and we could possibly change
other colors as well. Probably should make colors configurable too --
currently only report background colors support it.
Report background color mentioned above needs some thinking as well.
Currently it's either green or red, but with the added skip status we
could use also yellow or whatever skip color we decide to use.
Different scenarios where different colors could be used are listed
below (assuming green/yellow/red scheme):
All tests pass. This is naturally green.
Any test fails. This is naturally red.
Any test is skipped (no failures). This probably should be green but could also be yellow.
All tests skipped. This could be yellow. Could also be green but that's a bit odd if all tests are yellow.
Depending on your deadlines you might not be able to wait for this release; nevertheless it is a good thing to know about.
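If you can move to 4.0 once it is out, a minimal sketch of the library-exception approach mentioned above could look like the following. The keyword name is my own invention and it assumes SeleniumLibrary is imported in the suite; the SkipExecution exception requires Robot Framework 4.0 or newer:
# SkipLibrary.py - requires Robot Framework >= 4.0
from robot.api.exceptions import SkipExecution
from robot.libraries.BuiltIn import BuiltIn


def skip_unless_link_is_present(locator):
    """Skip the current test if the given link is not on the page.

    `locator` is whatever you would pass to SeleniumLibrary's Click Link.
    """
    present = BuiltIn().run_keyword_and_return_status(
        "Page Should Contain Link", locator)
    if not present:
        raise SkipExecution("Link '%s' is not present on this site" % locator)
Each test could then start with Skip Unless Link Is Present    link:Manage Client Reports before Click Link, so a missing link yields a SKIP result instead of a failure.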
There is an advanced solution where you can generate your test cases at run time. To do so you have to implement a small library that also acts as a listener. This way it can have a start_suite method that will be invoked and will get the suite(s) as Python object(s), robot.running.model.TestSuite. You can then use this object along with Robot Framework's API to create new test cases. The idea below is based on this blog post: Dynamically create test cases with Robot Framework.
DynamicTestLibrary.py:
from robot.running.model import TestSuite


class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, keyword, *args):
        tc = self.top_suite.tests.create(name=keyword)
        tc.keywords.create(name=keyword, args=args)


globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward incompatible changes made in the 4.0 release (the running and result models have changed), the add_test_case function should be changed as below if you are using version 4.0 or above.
def add_test_case(self, name, keyword, *args):
    tc = self.top_suite.tests.create(name=name)
    tc.body.create_keyword(name=keyword, args=args)
You can utilize this library in a suite setup, in which you check which links are present and add test cases for the ones that are available.
test.robot
*** Settings ***
Library    DynamicTestLibrary
Suite Setup    Check Links And Generate Test Cases
*** Variables ***
#@{LINKS}    Manage Clients    # test input 1
@{LINKS}    Manage Clients    Manage Client Hardware    # test input 2
#@{LINKS}    Manage Clients    Manage Client Hardware    Manage Client Reports    # test input 3
*** Test Cases ***
Placeholder
    [Documentation]    Placeholder test that will be removed during execution.
    No Operation
*** Keywords ***
Check Links And Generate Test Cases
    FOR    ${link}    IN    @{LINKS}
        DynamicTestLibrary.Add Test Case    Go to ${link}
    END
Go to Manage Client Reports
    Log Many    Click Link    link:Manage Client Reports
Go to Manage Client Hardware
    Log Many    Click Link    link:Manage Client Hardware
Go to Manage Clients
    Log Many    Click Link    link:Manage Clients
Go to ${link} will give the appropriate keyword name that will be called in a test case with the same name. You can check with each example input list that the number of executed tests will equal the length of the list.
Here is the output:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Go to Manage Clients | PASS |
------------------------------------------------------------------------------
Go to Manage Client Hardware | PASS |
------------------------------------------------------------------------------
Test | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================

Karate DSL surefire - reports duplicated time

At my work we have implemented a lot of features that call another feature, because we reuse scenarios across many scenarios.
But when we look at the HTML report, it shows 5 minutes of execution time while the console says 2.5 minutes.
We found in the surefire reports that the step in the called (child) feature that hits a web service takes 30 ms, but the step in the parent that calls this child feature is also reported as 30 ms, so together that shows up as 60 ms.
parent feature
    call (Son.feature)    30 ms
child feature ("this is the son")
    Given url             0 ms
    Then status 200       30 ms
feature report
    duration column       60 ms
Excuse my bad English. Thanks for any help.
Two things:
1. If you use the parallel runner, you will see two different times reported (actual vs elapsed).
2. When you call features, just focus on the time reported by the parent.
You can refer to this video to troubleshoot better: https://twitter.com/KarateDSL/status/1049321708241317888

Probability-based optimal Test Scheduling

Consider the case where the test process is terminated as soon as a defect is detected. We need to take the probability of passing into account when ordering the test set so that the expected total test time is minimized.
Ex. (test time t, probability of passing p):
t1 = 91, p1 = 0.805414173
t2 = 79, p2 = 0.73921812
t3 = 61, p3 = 0.940068379
Expected time: t1(1-p1) + (t1+t2)p1(1-p2) + ...
Can you please help with an algorithm to order the tests so that the expected time is minimized?
Sort the tests in ascending order of
ti / (1 - pi)
This gives the minimum expected test time. The reason is an adjacent-swap exchange argument: for two neighbouring tests i and j, running i first contributes ti + pi*tj to the expectation for that pair, versus tj + pj*ti the other way round, and ti + pi*tj <= tj + pj*ti exactly when ti/(1-pi) <= tj/(1-pj), so the sorted order cannot be improved by any swap.
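A small sketch in Python, using the three (t, p) pairs from the question (the helper name and the comparison printout are mine):
# order tests by t / (1 - p) and report the expected run time,
# assuming the run stops at the first failing test
def expected_time(tests):
    total, prob_all_passed_so_far = 0.0, 1.0
    for t, p in tests:
        total += prob_all_passed_so_far * t  # this test runs only if all earlier ones passed
        prob_all_passed_so_far *= p
    return total

tests = [(91, 0.805414173), (79, 0.73921812), (61, 0.940068379)]
best = sorted(tests, key=lambda tp: tp[0] / (1 - tp[1]))

print([t for t, _ in best])              # optimal order of test times: [79, 91, 61]
print(round(expected_time(best), 2))     # minimum expected total time
print(round(expected_time(tests), 2))    # given order, for comparison
For this data the sorted order (79, 91, 61) gives roughly 183 expected time units versus roughly 191 for the given order.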