Flaky test report on TestMatrix completion after Firebase Test Lab run - google-cloud-sdk

Where is it possible to see the list of tests marked as FLAKY from Firebase Test Lab?
I'm using:
functions.testLab.testMatrix().onComplete(testMatrix => {})
to receive the TestMatrix object and
--num-flaky-test-attempts=int
because they say:
Specifies the number of times a test execution should be reattempted if one or more of its test cases fail for any reason. An execution that initially fails but succeeds on any reattempt is reported as FLAKY.
The documentation describes this behaviour, but there is no reference to which tests were marked as FLAKY.
Where can we see the list of flaky tests?
In the TestMatrix object received in onComplete, there is also no reference to flaky tests.

Unfortunately, the FLAKY outcome is not reported at the matrix level at this point.
The flaky results are shown in History, Device, and Test Case views.
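For reference, here is a minimal sketch of that trigger, assuming the firebase-functions v1 testLab API (the function name and logging are illustrative only). It shows what the matrix-level payload exposes: a coarse state and outcome summary, with no per-test FLAKY marker.
import * as functions from 'firebase-functions';

// Sketch only: logs the matrix-level fields the trigger receives.
export const onTestMatrixComplete = functions.testLab
    .testMatrix()
    .onComplete((testMatrix) => {
        console.log(`Matrix ${testMatrix.testMatrixId} finished`);
        console.log(`State: ${testMatrix.state}`);
        console.log(`Outcome summary: ${testMatrix.outcomeSummary}`);
        // Per-test outcomes, including FLAKY, have to be looked up in the
        // Test Lab results (History / Test Case / Device views) instead.
        return null;
    });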

Related

How to detect when Robot Framework test case failed during a particular Test Setup keyword

The problem
I am currently running Robot Framework test cases against target HW. During Test Setup, I flash the software under test (firmware) onto the target device. For several reasons, flashing the firmware sometimes fails, which causes the whole test case to fail. This is currently preventing me from getting any further meaningful results.
What I want to achieve
I want to detect when the test case fails in a particular Robot keyword during Test Setup. If a test case fails during a particular Test Setup keyword, I will retry/rerun the test case with a different target HW (I have my own Python script that executes the Robot runner with a given target device).
The main problem is that I don't know how to detect when a test case failed in a Test Setup keyword.
What I tried so far
My first guess was that this could be achieved by parsing the output.xml file, but I couldn't extract this information from there.
I have already considered retrying the flashing operation in Test Setup, but this won't work for me. The test case needs to be restarted from scratch (running in a different target HW).
Lastly, using "Run Keyword And Ignore Error" is not a solution either, since Test Setup must complete successfully for the test case to continue.
The best solution I found so far: how to list all keywords used by a robotframework test suite
from robot.api import ExecutionResult, ResultVisitor

result = ExecutionResult('output.xml')

class MyVisitor(ResultVisitor):
    def visit_keyword(self, kw):
        print("Keyword Name: " + str(kw.name))
        print("Keyword Status: " + str(kw.status))

result.visit(MyVisitor())
Output:
Keyword Name: MyKeywords.My Test Setup
Keyword Status: FAIL

Does testcafe support soft assertions

I have an end-to-end testing scenario with multiple assertion points. I observed that when an assertion fails, the test stops. But I need to just report the failed step in the test results and proceed with the test execution. Does TestCafe support soft assertions?
Not yet. You might want to split your tests so that they test more specific things and not multiple assertions in a single test case.
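If splitting is not practical, one workaround is to collect check results yourself and make a single hard assertion at the end. A minimal sketch of the idea (the page URL, selectors, and the check helper are placeholders, not built-in TestCafe API):
import { Selector } from 'testcafe';

fixture('Soft assertion workaround')
    .page('https://example.com'); // placeholder URL

test('collect failures and report them at the end', async t => {
    const failures: string[] = [];

    // Hypothetical helper: record a failure instead of stopping the test.
    const check = (condition: boolean, message: string) => {
        if (!condition) failures.push(message);
    };

    check(await Selector('h1').exists, 'h1 heading is missing');
    check(await Selector('#login').visible, 'login button is not visible');

    // A single hard assertion surfaces every recorded failure at once.
    await t.expect(failures).eql([], `Soft assertion failures:\n${failures.join('\n')}`);
});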

Is there a way to abort a test suite in TestCafe if a test in the middle fails?

I have a series of tests that are dependent on a step in the middle (such as creating an account). The API that I'm using for this is kind of brittle (which is a separate problem) and sometimes fails. I'd like to be able to just quit the tests in the middle when that fails, instead of waiting for TestCafe to fail the initial assertions of the next few tests that follow. Is there a way to get the test controller to stop, or to signal to the fixture that the tests should stop? I immediately thought of Spock's @Stepwise annotation, but I can't find anything like that in the TestCafe docs.
The Stop on First Fail option stops the entire run as soon as a test fails. If I understand your scenario correctly, you could add an assertion for successful account creation, and if it fails, this option will end the entire run.
CLI Documentation
API Documentation (under Parameters)
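For reference, on the CLI this is the --stop-on-first-fail flag; below is a minimal sketch of the equivalent programmatic run, with the test path and browser as placeholders:
import createTestCafe from 'testcafe';

(async () => {
    const testcafe = await createTestCafe('localhost');
    try {
        const failedCount = await testcafe
            .createRunner()
            .src(['tests/account-flow.ts']) // placeholder test file
            .browsers(['chrome'])
            .run({ stopOnFirstFail: true }); // abort remaining tests after the first failure
        console.log(`Failed tests: ${failedCount}`);
    } finally {
        await testcafe.close();
    }
})();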

How to tell that a test case (TFS work item) had been tested?

In the Test Case work item's Status and Reason fields, I don't see anything that says "tested successfully":
Design status is for when the test case is being written
Ready status is for when the test case is ready to be tested
Closed status has reasons saying the test case is "not to test" (deprecated, different, duplicated)
So how can we mark a test case as "successfully tested"?
It does not seem right that the tester does not have to attest that the case has been tested with success.
There is an "Outcome" field that will show Passed, Failed, etc.
There is no such built-in state in the Test Case work item. However, you could create your own customized work item state.
For more details, take a look at this tutorial: Add a workflow state.
If you have the latest version of TFS, you can select several test cases in the web Test hub and update their state: set a new value for the State field.
Seems you're confusing Test Case with Test Result. A test as defined by a Test Case can be run multiple times to obtain many Test Results. This may not be very useful if the test is only getting run once, although it certainly still works; it is more useful when the test is being run multiple times, e.g. regression tests.
Also, if I may, your initial premise re: the different states a stock Test Case may be in is not correct. Per How workflow states and state categories are used in Backlogs and Boards, section State categories,
Design status is for when a test case is proposed, i.e. we should come up with a plan to test this case
Ready status is for when a test case is in progress, i.e. we are implementing the test plan
Closed is for when a test case is completed, i.e. we have the plan
That said, I don't see why you couldn't use your above breakdown if it makes more sense for your team. But either way, the notion of "tested successfully" still belongs to the realm of the Test Result. Hope this helps to clarify things.

Retry Failed Automation Test Case from the logical point for E2E Automation

We are trying to automate E2E test cases for a booking application, each involving around 60+ steps. Whenever there is a failure in the final steps, the traditional retry option is very time-consuming, since the test case is executed again from step 1. The application has some logical steps that can be marked somehow, and we would like to resume the test case from a logical point before the failed step. For example, among the 60 steps, say every 10th step is a logical point from which the script can be resumed instead of retrying from step 1: if the failure is on step 43, then with the help of the booking reference number the test can be resumed from step 41, since validation has been completed up to step 40 (step 40 is a logical closure point). You might suggest splitting the test case into smaller modules, but that will not work for me, since it is an E2E test case for the application that we want to keep in a single Geb Spec. The framework is built using Geb & Spock for web application automation. Please share your thoughts on how we can build recovery scenarios for this case. Thanks for your support!
As of now, I am not able to find any existing solution for this kind of problem.
Below are a few things that can be done to achieve it, but before we talk about solutions, we should also talk about the issues this will create. You are running E2E test cases, and if they fail at step 10 they should be started from scratch, not from step 10, because otherwise you can miss important integration defects that only occur when you perform the 10th step right after running the first 9. For example, if you create an account and then immediately search for a hotel, your application might throw an error because it is a newly created account; but if you retry from the step where you are just searching for hotel rooms, it might work because of the time that has passed between the test failure and the restart, and you will not notice the issue.
Now, if you must achieve this:
Create a log every time you reach a checkpoint; this can be a simple text file containing the test case name and checkpoint number. Then use a retry analyzer to rerun the failed tests: inside the test, look for the text file with the test case name, and if it exists, skip ahead to the checkpoint mentioned in the file. This can be applied in different ways. For example, if your E2E test goes through 3 applications, the file can hold the test case name and the name of the last application that passed; if you are using page objects, you can write the name of the last successful page object to the text file and use that to continue the test.
The above solution is just an idea, because I don't think there are any existing solutions for this issue.
I hope this gives you an idea of how to start working on this problem.
A possible solution to your problem is to first define the way in which you want to write your tests.
I would recommend treating one test Spec (class) as one E2E test containing multiple feature methods.
Also, it is recommended to use the open-source Spock Retry project available on GitHub; after implementing RetryOnFailure, your final code should look like this:
@RetryOnFailure(times = 2) // times parameter is the number of retry attempts, default = 0
class MyEndtoEndTest1 extends GebReportingSpec {

    @Shared def bookingRefNumber // @Shared so the value carries over between feature methods

    def "First Feature block which covers the Test till a logical step"() {
        // your test steps here
        bookingRefNumber = // assign your booking Ref here
    }

    def "Second Feature which covers a set of subsequent logical steps"() {
        // use the bookingRefNumber generated in the First Feature block
    }

    def "Third set of Logical Steps"() {
        // your test steps here
    }

    def "End of the E2E test"() {
        // Your final Test steps here
    }
}
The passing of all the Feature blocks (methods) will signify a successful E2E test execution.
It sounds like your end-to-end test case is too big and too brittle. What's the reasoning behind needing it all in one script?
You've already stated you can use the booking reference to continue at a later step if it fails; this seems like a logical place to split your tests.
Do the first part and output the booking reference to a file. Read the booking reference in the second test and complete the journey; if it fails, a retry won't take anywhere near as long.
If you're using your tests to provide quick feedback after a build and your tests keep failing, then I would look at splitting the journey into smaller smoke tests and, if required, running some overnight end-to-end tests with as many retries as you like.
The fact that it keeps failing suggests your tests, environment, or build are brittle.