How can I avoid executing a whole failing test and relaunch it from a specific point instead? - selenium

Using Selenium Grid and JUnit 5, I am executing tests that are far too long; some of them take about 30 minutes to complete. Often what fails is a silly locator that I can fix in a few seconds, but then, to keep testing and to check that the change has actually fixed the failure, I have to rerun the test from the very beginning. So, is there a way to avoid this and resume the test from a specific point?
Thanks in advance

AFAIK this is not possible.
Also, as a general rule, tests should not be too long or too complex. Making tests long and complex makes debugging and failure analysis much harder.
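One way to soften the problem, sketched below under the assumption that JUnit 5 is driving the suite: split the 30-minute scenario into smaller, ordered step methods that share one driver session. This does not resume a half-finished run, but it makes each step individually visible in the report and narrows what has to be repeated or disabled while you fix a locator. The class, step names and locators are hypothetical.

    // Hypothetical sketch: one long scenario broken into ordered steps sharing a session.
    import org.junit.jupiter.api.*;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    @TestInstance(TestInstance.Lifecycle.PER_CLASS)        // one instance, so state carries between steps
    @TestMethodOrder(MethodOrderer.OrderAnnotation.class)  // run the steps in declared order
    class LongScenarioTest {

        private WebDriver driver;

        @BeforeAll
        void startSession() {
            driver = new ChromeDriver();                    // or a RemoteWebDriver pointing at the grid
        }

        @Test @Order(1)
        void step1_login() { /* ... */ }

        @Test @Order(2)
        void step2_createOrder() { /* ... */ }

        @Test @Order(3)
        void step3_verifyInvoice() { /* a failing locator here is now reported as one small step */ }

        @AfterAll
        void stopSession() {
            driver.quit();
        }
    }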

Related

What's wrong with using Hard Waits in UI Automation scripts?

I keep getting questions where people complain that explicit or fluent waits do not always work, so they look for some other solution. In most of those cases, adding a hard wait of 2 or 3 seconds resolves the problem.
My view is that making the scripts slightly slower with hard waits of a few seconds, where they are really required, does not make them less reliable.
I know the answer to my question is not code, but I really want to understand: are there any serious hazards in using hard waits in automation scripts?
For me they are just as important as explicit and fluent waits.
Hard or implicit waits are undesirable because they add unnecessary execution time to an automation suite. If your test suite is small, is not run regularly, or you are not under any time constraints, then maybe implicit waits are acceptable.
In most real-world scenarios a quick test execution turnaround is highly desirable, and a few 2-3 second waits add up pretty quickly if you are executing tests on every commit to a branch or have a large test suite.
I would argue that if explicit waits are not working properly then they are either:
poorly written
not given enough time
The former is more likely.
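To make "poorly written" concrete, here is a minimal sketch of what a well-written explicit wait usually looks like: wait for the exact condition the next step depends on, with a bounded but generous timeout. The locator and the 15-second timeout are placeholders, not recommendations.

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class ExplicitWaitExample {
        static WebElement waitForSubmitButton(WebDriver driver) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(15));
            // Wait until the element is actually clickable, not merely present in the DOM.
            return wait.until(ExpectedConditions.elementToBeClickable(By.cssSelector("#submit")));
        }
    }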
The hazard of using a "hard wait", or Thread.Sleep() more precisely, is that you are never sure your condition has been met. In UI testing you mostly add sleeps so the UI will fulfil some condition, such as a page loading or an element appearing / becoming enabled.
When you use Thread.Sleep() you need to set it to the maximum time so you can be sure the condition is met (and even then you can't be sure under 'stress' situations such as network load). So you end up wasting time and resources, which makes your automation less scalable and less agile (another hazard for some...).
So I would say that if you can't add a condition to your wait, Thread.Sleep() is fine (though less readable); otherwise it's simply a bad practice.
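A sketch of the "add a condition to your wait" idea: instead of sleeping for the worst case, poll for the real condition and return as soon as it holds, keeping the worst case only as an upper bound. The locator and timings below are illustrative.

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.NoSuchElementException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.FluentWait;

    public class ConditionalWaitExample {
        static WebElement waitForSearchResult(WebDriver driver) {
            FluentWait<WebDriver> wait = new FluentWait<>(driver)
                    .withTimeout(Duration.ofSeconds(30))    // the "maximum time" you would otherwise sleep
                    .pollingEvery(Duration.ofMillis(250))   // but the unused time is given back
                    .ignoring(NoSuchElementException.class);
            return wait.until(d -> d.findElement(By.cssSelector(".search-result")));
        }
    }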

Making Selenium Tests Stable

There is an issue I have with automation. It's not that I disagree that it is absolutely the best way to test an application, but achieving stability is extremely hard to do. How do you guys make your tests stable? Of course I add explicit waits and the occasional thread.sleep(), but they are never 100% stable. The point of this thread is to post some tips and tricks that you have discovered that have made an impact on your automated tests.
You should try to avoid using thread.sleep(). The reason is that once you have a big test suite, you will waste a lot of time just waiting. Instead, learn how to use explicit and implicit waits.
Community experts recommend using explicit waits more often: they allow you to wait for a specific condition to occur, and once it occurs WebDriver continues without wasting any more time.
There are some more advanced tips and tricks written by Mark Collin in the book Mastering Selenium WebDriver.
Sometimes you can fail tests on purpose, catch the exceptions, and then make a decision based on them; learn how to use "try catch". I don't think it is a good practice, but I have seen test engineers (including myself) use it a lot, as in the sketch below.
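A minimal sketch of that try/catch decision pattern: probe for an optional element and branch, instead of letting the whole test fail. The locator is a placeholder.

    import org.openqa.selenium.By;
    import org.openqa.selenium.NoSuchElementException;
    import org.openqa.selenium.WebDriver;

    public class TryCatchBranchExample {
        static void dismissCookieBannerIfPresent(WebDriver driver) {
            try {
                driver.findElement(By.id("cookie-accept")).click();
            } catch (NoSuchElementException ignored) {
                // Banner did not appear this time; carry on with the test.
            }
        }
    }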
I would recommend looking at Selenide: if you don't want to go through the effort of figuring out how to make your own framework stable, you can use the Selenide framework to get yourself going, and then you won't need to worry about waits any more.
Of course, there is some value in creating your own framework, especially if you're doing test-driven development and want to unit test your framework for Sonar code coverage. But if you're not at that level, using Selenide is what I would recommend for the biggest impact on your success.
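A short sketch of why Selenide reduces wait boilerplate, assuming a reasonably recent Selenide version: $(...) returns a lazy element, and should/shouldBe/shouldHave calls poll for the condition (4 seconds by default) before failing. The URL and selectors are made up for illustration.

    import static com.codeborne.selenide.Condition.text;
    import static com.codeborne.selenide.Condition.visible;
    import static com.codeborne.selenide.Selenide.$;
    import static com.codeborne.selenide.Selenide.open;

    import org.junit.jupiter.api.Test;

    class SelenideSketchTest {
        @Test
        void searchShowsResults() {
            open("https://example.com");
            $("#query").setValue("selenium").pressEnter();
            // No explicit wait needed: shouldBe/shouldHave poll until the condition holds or times out.
            $(".results").shouldBe(visible).shouldHave(text("selenium"));
        }
    }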

How would you test a program that's supposed to run long time?

How would you test a program that's supposed to run long time?
In my project, the program is designed to keep running for years. It's impossible for the QA guys to really test the program for that long. So what test strategy should I apply to guarantee that the program can really run for that long?
That depends on exactly what it's supposed to be doing.
For example, if you abstract all timer-related things through an interface, then you can "fake" the passage of time and run your application at, say, 1,000,000x speed. So if you ran a test for 1 minute, it would be as if you had tested it for 1,000,000 minutes.
Of course this all depends on exactly what you're doing, since perhaps the passage of time isn't what would cause your tests to actually require a long time to execute.
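A minimal sketch of that time-abstraction idea using java.time.Clock: production code asks an injected Clock for "now", and the test swaps in a clock it can jump forward by years in one call. The class names and the license scenario are illustrative.

    import java.time.Clock;
    import java.time.Duration;
    import java.time.Instant;
    import java.time.ZoneOffset;

    class LicenseChecker {
        private final Clock clock;
        private final Instant expiry;

        LicenseChecker(Clock clock, Instant expiry) {
            this.clock = clock;
            this.expiry = expiry;
        }

        boolean isExpired() {
            return clock.instant().isAfter(expiry);   // never calls Instant.now() directly
        }
    }

    class LicenseCheckerDemo {
        public static void main(String[] args) {
            Instant start = Instant.parse("2024-01-01T00:00:00Z");
            Clock today = Clock.fixed(start, ZoneOffset.UTC);
            Instant expiry = start.plus(Duration.ofDays(365));

            System.out.println(new LicenseChecker(today, expiry).isExpired());   // false

            // "Fast-forward" two years without actually waiting.
            Clock twoYearsLater = Clock.offset(today, Duration.ofDays(730));
            System.out.println(new LicenseChecker(twoYearsLater, expiry).isExpired());   // true
        }
    }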
At any point in time, as the program executes, the program could enter an invalid state that causes it to crash (which I guess is what you are really interested in). So you are really asking "how can I ensure my program has a low probability of entering an invalid state". Your test strategy will have to be thorough. You might want to consider focusing on testing the parts of the software that do error recovery, to ensure that an internally detected error does not cause a crash.
Edit
Let me explain further.
All long running programs essentially serve a sequence of inputs or requests, each of which creates a task to be processed. Once processed, each task is discarded. You will want to ensure that a problem with one such task does not prevent processing of subsequent tasks, even if the problem with the task is due to a bug in the code for processing that task. In practice this means the server has some code for error recovery.
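A sketch of that error-recovery point: one malformed task must not take down the loop that serves all the others. The queue of Runnable tasks is a simplification; a real server would have its own task type and logging.

    import java.util.concurrent.BlockingQueue;

    class TaskServer {
        private final BlockingQueue<Runnable> queue;

        TaskServer(BlockingQueue<Runnable> queue) {
            this.queue = queue;
        }

        void serveForever() throws InterruptedException {
            while (true) {
                Runnable task = queue.take();
                try {
                    task.run();
                } catch (RuntimeException e) {
                    // Log and discard the bad task; keep serving subsequent ones.
                    System.err.println("task failed, continuing: " + e);
                }
            }
        }
    }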
1) unit testing -- If the components work properly, it is more likely that the overall program will work properly.
2) simplified integration tests -- Try running it on a small problem that won't take as long to run.
You can unit test higher-level components of your program by providing fake objects to do the lower-level work that would normally be too expensive to run.
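For instance, here is a small sketch of that fake-object idea: a report component is tested against an in-memory stand-in instead of the real, slow database layer. All names are illustrative.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    interface OrderStore {
        List<String> findOverdueOrders();
    }

    class OverdueReport {
        private final OrderStore store;
        OverdueReport(OrderStore store) { this.store = store; }
        String render() { return "Overdue: " + String.join(", ", store.findOverdueOrders()); }
    }

    class OverdueReportTest {
        @Test
        void rendersWhateverTheStoreReturns() {
            OrderStore fake = () -> List.of("A-1", "A-2");   // cheap in-memory stand-in for the database
            assertEquals("Overdue: A-1, A-2", new OverdueReport(fake).render());
        }
    }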
Test it for a shorter while. Usually a program will not run for years because of memory leaks and the like. If you don't have any leaks in a day or a week, then you will likely not have a leak in a year either.
Adding to the other answers: think about what you expect the state of the software to be after years of running. Consider what will change, and try to produce the same situation in a shorter time. For example, if you think the database will grow to tens of millions of entries after a couple of years, simulate that situation by adding those entries now and verify that the system can still perform.
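A hedged sketch of "produce the future state now": bulk-insert a few million synthetic rows before running the performance-sensitive checks. The table name, JDBC URL handling and row count are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class GrowthSimulator {
        static void seed(String jdbcUrl, int rows) throws SQLException {
            try (Connection c = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = c.prepareStatement("INSERT INTO events(payload) VALUES (?)")) {
                c.setAutoCommit(false);
                for (int i = 0; i < rows; i++) {
                    ps.setString(1, "synthetic-" + i);
                    ps.addBatch();
                    if (i % 10_000 == 0) {
                        ps.executeBatch();              // flush in batches to keep memory bounded
                    }
                }
                ps.executeBatch();
                c.commit();
            }
        }
    }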

RSpec errors when run en masse, but not individually

Unfortunately I don't have a specific question (or clues), but was hoping someone could point me in the right direction.
When I run all of my tests (rspec spec), I am getting two tests that fail specifically related to Delayed Job.
When I run this spec file in isolation (rspec ./spec/controllers/xxx_controller_spec.rb), all the tests pass. Is this a common problem? What should I be looking for?
Thanks!
You are already mentioning it: isolation might be the solution. Usually I would guess that you have things in the database that are being changed and not cleaned up properly (or rather, not mocked properly).
In this case, though, I would suggest that because the system is under quite a high workload, the delayed jobs are not being worked off fast enough. The challenge with all asynchronous tasks that should be tested is this: you must not let the system actually run the delayed jobs, but mock the calls and just make sure that the delayed jobs have been received.
Sadly, with no examples, I can hardly point out the missing mocks. But make sure that all calls to delay_jobs and similar receive the correct data without actually creating and running those jobs - your specs will be faster, too. Make sure you isolate the function under test and do not call external dependencies.

How many cycles are required to validate an automated script

I have one query. Maybe it is a silly question but still I need the answer to clear my doubts.
Testing is evaluating the product or application. We do testing to check whether there are any show stoppers, any issues that should not be present.
We automate test cases (I am talking about scripts) from the existing test cases. Once a test case is automated, how many cycles do we need to run to confirm that the script runs with no major errors, so that it is reliable enough to run instead of executing the test cases manually?
Thanks in advance.
If the test script always fails when a test fails, you need to run the script only once. Running the script several times without changing the code will not give you additional safety.
You may discover that your tests depend on some external source that changes during the tests and thereby make the tests fail sometimes. Running the tests several times will not solve this issue, either. To solve it, you must make sure that the test setup really initializes all external factors in such a way that the tests always succeed. If you can't achieve this, you can't test reliably, so there is no way around this.
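As an illustration of "initialize all external factors in the setup", here is a hypothetical sketch: the data the script depends on is reset before every run so repeated executions are deterministic. The fixture helper and its methods are invented for the example; a real suite would call its own fixtures or API.

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    class ReliableScriptTest {
        private final TestDataClient data = new TestDataClient();   // hypothetical fixture helper

        @BeforeEach
        void resetExternalState() {
            data.deleteOrdersForUser("qa-user");      // remove leftovers from earlier runs
            data.createOrder("qa-user", "ORDER-1");   // recreate exactly the state the script expects
        }

        @Test
        void scriptUnderValidation() { /* ... */ }
    }

    // Minimal stand-in so the sketch compiles on its own.
    class TestDataClient {
        void deleteOrdersForUser(String user) { }
        void createOrder(String user, String orderId) { }
    }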
That said, tests can never make sure that your product is 100% correct or safe. They just make sure that your product is still as good as (or better than) it was before all the changes you made since the last test. It's like having a watermark which tells you the least amount of quality that you can depend on. Anything above the watermark is speculation, but everything below it (the part that your tests cover) is safe.
So by refining your tests, you can make your product better with every change. Without automatic tests, every change has a chance to make your product worse. This means that without tests your quality will certainly deteriorate, while with tests you can guarantee to maintain a certain level of quality.
It's a huge field with no simple answer.
It depends on several factors, including:
The code coverage of your tests
How you define reliable