Iteration over the same automated test case - Selenium

I am using Selenium WebDriver and TestNG to create an automated test case. I am running the same test case multiple times for different sets of data. The execution slows down after each iteration, and at some point it becomes very slow and the process stops.
The code is very straightforward: it iterates over the same TestNG method containing Selenese-style commands (example: driver.findElement(By.id(target)).click();).
Any idea why the execution gets slower and eventually stops after multiple iterations?

@Anna Clearing temp files solved a similar issue for me. My tests were generating a lot of log files, screenshots, and Windows temp files, among other things. Now I have the automation clear my temp files, and my results have been much better.
If that does not solve your issue, please share more information on how your automation is set up (TestNG, Jenkins, Maven, etc.) and the code that initiates the runs.
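Another common cause of this kind of slowdown, not mentioned above, is browser state piling up when a single WebDriver instance is reused for every iteration. A minimal TestNG sketch of one way around that, creating and quitting the driver around each data-driven iteration (the URL, element ids and data rows are placeholders, not taken from the question):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.annotations.AfterMethod;
    import org.testng.annotations.BeforeMethod;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class IteratedScenarioTest {

        private WebDriver driver;

        // A fresh browser per iteration keeps cookies, temp files and
        // open handles from accumulating across data-driven runs.
        @BeforeMethod
        public void startBrowser() {
            driver = new ChromeDriver();
        }

        @AfterMethod(alwaysRun = true)
        public void stopBrowser() {
            if (driver != null) {
                driver.quit(); // quit(), not close(), releases the browser process
            }
        }

        // Placeholder data; replace with your real data sets.
        @DataProvider(name = "rows")
        public Object[][] rows() {
            return new Object[][] { { "user1" }, { "user2" }, { "user3" } };
        }

        @Test(dataProvider = "rows")
        public void runScenario(String user) {
            driver.get("https://example.com/login");              // hypothetical URL
            driver.findElement(By.id("username")).sendKeys(user); // hypothetical ids
            driver.findElement(By.id("submit")).click();
        }
    }

If the driver has to be shared for speed, at least clearing cookies and temp artifacts between iterations (as the answer above suggests) usually keeps the run stable.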

Related

Running Google Test cases non-parallel

Because of resource exhaustion, I need to run test cases serially, without threads (integration tests for CUDA code). I went through the source code (e.g. tweaking GetThreadCount()) and tried to find other ways to make the gmock/gtest framework run tests serially, but found no way out.
At first I did not find any command-line arguments that could influence it. It feels like the only way out is to create many binaries, or to write a script that uses --gtest_filter. I would not like to mess with hidden synchronization primitives between test cases.

Karate Execution getting stuck in the report generation step

I am executing my Karate suite from TeamCity. I started facing an issue when I had to add some data CSV files with 1700 rows and around 10 columns.
I got an out-of-memory error during local execution. I added argLine params and increased the heap size to 6 GB, which solved the error locally.
When I moved this to the continuous integration environment, it gets stuck even with the argLine params and the 6 GB heap size. Interestingly, it gets stuck even if I exclude the tests that use these large files via tags.
I am using the parallel executor with 2 threads (I also tried with 1 thread). I also use Cucumber reports.
From my analysis, what I understand is that Karate completes the test execution and gets stuck just before generating the report JSON and the Cucumber reports.
I have tried removing those huge CSV files and putting the data directly into Examples inside my feature file. It still gets stuck.
I have managed to fix this locally, but it seems to be a potential issue. Any suggestions?
The total number of tests I am running is 4500.
I am no expert on this, but I would suggest breaking your tests down into multiple classes (you could start with 2 runners instead of just 1) and having each class call only a portion of your .feature files. Breaking your tests into multiple classes, each running part of your test cases, might relieve the memory problem (a rough sketch follows the example link below).
For example:
https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/greeting/GreetingRunner.java
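As a rough sketch of that idea, assuming a recent Karate version with the JUnit 5 runner (the package paths here are hypothetical; the linked GreetingRunner shows the demo project's own style):

    import com.intuit.karate.junit5.Karate;

    class OrdersRunner {
        // Runs only the features under src/test/java/demo/orders
        @Karate.Test
        Karate orders() {
            return Karate.run("classpath:demo/orders");
        }
    }

    class UsersRunner {
        // Runs only the features under src/test/java/demo/users
        @Karate.Test
        Karate users() {
            return Karate.run("classpath:demo/users");
        }
    }

Each runner then produces its own, smaller report, which should also reduce how much report JSON the generation step has to hold in memory at once.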

Data-driven testing using Selenium Grid

I have to execute a large number of test cases in parallel using TestNG and Selenium. Each test case will be executed with a different data set using data-driven testing. How can I run these test cases in parallel on different machines? We can use the parallel attribute in TestNG, but that is restricted to a single machine.
Can Selenium Grid be tweaked and used for this purpose? If so, how, or is there any other suggestion?
I want an example of this use case from the Grid documentation (https://www.seleniumhq.org/docs/07_selenium_grid.jsp#when-to-use-it): "To reduce the time it takes for the test suite to complete a test pass."
Basically it's quite complicated and needs a lot of understanding. I haven't done it myself, but I know that you need to create one hub machine, and the rest of the machines become nodes of that hub. Then you can run the test scripts in parallel, but you need to make sure those scripts aren't dependent on each other, otherwise there will be a lot of issues (a rough sketch follows the link below).
I have shared a link so you can check how to set it up:
https://medium.com/@appening/how-to-run-your-test-on-multiple-machines-using-selenium-grid-3aa37d5d2b63
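Selenium Grid is built for exactly this: the hub hands each new session to one of its registered nodes, and TestNG decides how many sessions you open at once. A rough sketch, assuming a Selenium 3-style Grid and a placeholder hub address (the data sets and application URL are also placeholders):

    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class GridDataDrivenTest {

        // parallel = true lets TestNG feed several rows to the test at once;
        // each row opens its own session, which the hub spreads across nodes.
        @DataProvider(name = "data", parallel = true)
        public Object[][] data() {
            return new Object[][] { { "set1" }, { "set2" }, { "set3" } };
        }

        @Test(dataProvider = "data")
        public void runWithDataSet(String dataSet) throws Exception {
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://grid-hub:4444/wd/hub"),   // placeholder hub URL
                    DesiredCapabilities.chrome());
            try {
                driver.get("https://example.com?data=" + dataSet); // hypothetical app URL
                // assertions for this data set would go here
            } finally {
                driver.quit(); // always free the node slot
            }
        }
    }

The hub URL stays the same no matter how many node machines you register, so scaling out is mostly a matter of starting more nodes.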

BDD with Manual Tests?

We are switching from a classic 'Waterfall' model to a more Agile-oriented philosophy. We decided to give BDD a try (Cucumber), but we have some issues migrating some of our 'old' methodologies. The biggest question mark is how manual tests integrate into the cycle.
Let's say the Project Manager defined the Feature and some basic Scenario Outlines. With the test team, we defined around 40 Scenarios for this feature. Some cannot be tested automatically, which means they will have to be tested manually. Executing manual tests when all you have is the feature file feels wrong. We want to be able to see the past failure rate of tests, for example. Most test-case managers support such features, but they can't work with feature files. Maintaining the manual test cases in an external test-case manager would cause never-ending synchronization issues between the feature file and the test-case manager.
I'm interested to hear whether anyone has been able to cover this middle ground, and how.
This is not a very unusual case. Even in Agile, it may not be possible to automate every scenario. The scrum teams I work with usually tag such scenarios as @manual in the feature file. We have configured our automation suite (Cucumber - Ruby) to ignore these tags while running nightly jobs (a runner sketch follows below). One problem with this is, as you have mentioned, that we won't know the outcome of the manual tests, since the testers document the results locally.
My suggestion was to document the results of each iteration in a YML file, or any other format that suits the purpose. This file should be part of the automation suite and checked into the repository, so to start with you have the results documented alongside the automation suite. Later, when you have the resources and time, you can add functionality to your automation suite to read this file and generate a report, either together with the other automation results or separately. Until then, your version control should help you track all previous results.
Hope this helps.
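For example, with a Cucumber-JVM JUnit runner (assuming Cucumber 6 or later, where tags takes a single expression; the feature and glue paths are placeholders, and in Cucumber-Ruby the equivalent is passing --tags 'not @manual' on the command line):

    import org.junit.runner.RunWith;
    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;

    // Nightly runner: executes everything except scenarios tagged @manual.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "src/test/resources/features",  // placeholder path
            glue = "com.example.steps",                 // placeholder glue package
            tags = "not @manual"
    )
    public class NightlyRunner {
    }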
To add to @Eswar's answer, if you're using Cucumber (or one of its siblings), one option would be to execute the test runner manually and include prompts for the tester to check certain aspects (a rough sketch of such a prompt step follows below). They then pass or fail the test according to their judgement.
This is often useful for aesthetic aspects e.g. cross-browser rendering, element alignment, correct images used, etc.
As @Eswar mentioned, you can exclude these tests from your automated runs by tagging them.
See this article for an example.
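Not from the linked article, just a rough sketch of what such a prompt could look like as a Cucumber-JVM step definition (the step wording and the console prompt are my own illustration):

    import io.cucumber.java.en.Then;
    import java.util.Scanner;
    import static org.junit.Assert.assertEquals;

    public class ManualCheckSteps {

        // Pauses the run and asks whoever is driving the test to judge an
        // aspect that is hard to automate (rendering, alignment, images...).
        @Then("the tester confirms {string}")
        public void theTesterConfirms(String aspect) {
            System.out.println("MANUAL CHECK: " + aspect + " - type y if it looks correct:");
            String answer = new Scanner(System.in).nextLine().trim().toLowerCase();
            assertEquals("Tester rejected: " + aspect, "y", answer);
        }
    }

Since it blocks on console input, a runner like this only makes sense when executed interactively, not in a CI job.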
Test cases that cannot be automated are a poor fit for a Cucumber test. We have a bunch of these edge cases. It is nigh impossible to get Selenium to verify PDF documents well; the same goes for CSV downloads (not impossible, but not worth the effort). Look-and-feel tests simply require human eyes at this point, and accessibility testing with screen readers is best done manually as well.
For that, be sure to record the acceptance criteria in the user story in whichever tool you use to track work items. Write a manual test case. The likes of Azure DevOps, Jira, IBM Rational Team Concert and their ilk have ways to record manual test plans, link them to stories, and record the results of executing a manual test.
I would remove the manual test cases from the Cucumber tests, rely on the acceptance criteria for the story, and link the story to some sort of manual test case, be it in a tool or a spreadsheet.
Sometimes you just need to compromise.
We use Azure DevOps with Test Plans + some custom code to synchronize cucumber tests to ADO. I can describe how we’ve realized it in our projects:
We start with the cucumber file first. Each User Story has its own Feature file. The scenarios in the Feature are the acceptance criteria for the story. We end up with lots of Feature files, so we use naming conventions and folders to organize them.
We annotate the top of the Feature file with a tag to the User Story, e.g. @Story-1234.
We've written a command-line utility that reads the cucumber files with these tags. It then fetches all the Test Suites in the Test Plan that are linked to Stories. In ADO, a story can only be linked to a single test suite. If a Test Suite doesn't exist for that Story, our tool creates one.
For each Scenario, the tool creates an ADO Test Case and then annotates the Scenario with the Test Case ID. This creates amazing traceability for each User Story, as the related Test Cases are automatically linked to the Story in the Azure DevOps UI.
Although we don’t do this, we could populate the TestCase with the step definitions from our cucumber Scenario. It’s a basic XML structure that describes the steps to take. This would be useful if we wanted to manually execute the test case using the Azure DevOps Test Case UI. Since we focus primarily on automation, we rely on the steps in our Feature files and our ADO Test Cases end up being symbolic links back to cucumber Scenarios.
Because our cucumber tests are written in C# (SpecFlow), we can get the full class name and method for the cucumber test code. Our tool is able to update the Azure DevOps Test Case with the automation details.
For any test case that isn't ready for automation or must be done manually, we annotate the Scenario with an @ignore or @manual tag.
Using Azure DevOps Pipelines, we use the Visual Studio Test task to run our tests. The important point here is that we execute the Test Plan option. This option fetches the Test Cases in the Test Plan that have automation and then executes the specific cucumber tests. The out-of-the-box functionality updates the Test Case statuses with the test results.
After running through automation, we use the Test Plan Report in Azure DevOps which shows the Test Case execution status over time and can distinguish between test automated and manual test cases.
We execute any remaining manual test cases to complete the Test Plan.
For us, we often found that the manual cases that cannot be automated are exception cases, or cases that depend on the external environment (for example malformed data, a network connection being unavailable, maintenance mode, a first-time guide...). These cases require special setup to simulate the environment in which they happen.
Ideally, I believe it is possible to cover everything, given that you are prepared to go as far as you can to make it happen. But in reality, the effort required is most often so great that we prefer a hybrid approach of mixed manual and automatic test cases. We do, however, try to convert those exception cases to automatic ones over time, by setting up separate environments to simulate the exception cases and writing automated tests against them.
Nevertheless, even with that effort, there will be cases that are impossible to simulate, and I believe those should be covered by technical tests from engineers.
You could use an approach similar to the following example:
http://concordion.org/Example.html
When you use a build or continuous integration system to track your test runs, you could add simple specifications/tests for your manual cases that contain a text comparison (e.g. "pass" or "fail"). You would then need to update the spec after each manual test run, check it in, and start the tests in your build/continuous integration system. The manual results would then be recorded together with the results of the automated test execution.
If you use a tool like Concordion+ (https://code.google.com/p/concordion-plus/), you could even write a summary specification containing scenarios for each of your manual tests. Each one would be reported as an individual test result in your test execution environment.
Cheers
Taking screenshots seems to be a good idea. You can still automate the verification, but you will need to go the extra mile. For instance, when using Selenium you can add Sikuli (NB: you can't run headless tests) to compare results (images), or take a screenshot with Robot (java.awt), use OCR to read the text, and then assert or verify it (TestNG).
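As a rough illustration of that screenshot-plus-OCR idea (the OCR library used here, Tess4J, as well as the page URL and expected text are my own assumptions, not part of the answer above):

    import java.io.File;
    import net.sourceforge.tess4j.Tesseract;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.testng.Assert;

    public class ScreenshotOcrCheck {

        public static void main(String[] args) throws Exception {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/receipt");   // hypothetical page

                // Grab the rendered page as an image...
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);

                // ...then read the visible text back out of it with OCR.
                Tesseract ocr = new Tesseract();
                // ocr.setDatapath("/path/to/tessdata"); // may be needed for your install
                String text = ocr.doOCR(shot);

                // Finally, assert on what a human eye would have checked.
                Assert.assertTrue(text.contains("Payment received"),
                        "Expected confirmation text in the rendered screenshot");
            } finally {
                driver.quit();
            }
        }
    }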

I have worked with Selenium and now I am working with TestComplete, but I feel playback in TestComplete is very slow. How can I speed it up? Any ideas?

I earlier worked with Selenium WebDriver (Java, Eclipse) for a long time, but now I have been working with TestComplete 9 (VBScript).
I have realised, though, that playback in Selenium/Eclipse was much faster than what I have seen in TestComplete.
My question is:
Is there a particular way we can optimise the playback time of TestComplete?
You can find a list of performance tips for TestComplete in this article on the SmartBear web site. I hope they will help you.
The points below may help you:
1) Check whether you are using a recorded script for execution or writing your own scripts. The difference is that a script prepared by the recorder collects all the events and objects based on the activities you performed while recording (some of them may not be required during execution, and they add unnecessary delays and waits to your execution).
If you instead drive your script with your own code, it may reduce your execution time.
2) Modularization in the framework also reduces execution time (it introduces branching in the code, and reusing subroutines also minimizes timing).
3) Add only those checkpoints that are actually important during your script execution.
4) As a feature, TestComplete also collects some browser-specific objects and properties, whereas with Selenium the RC/WebDriver server works directly from your code.
5) You can also write dynamic waiting conditions using loops, which can improve your script's performance.
You can refer to the support blogs for framework optimization.
Please correct me if I mentioned anything wrong. Thanks.