We currently use Behat 3 to automate BDD tests for our website.
The current setup uses Jenkins to run Selenium, which drives Firefox and renders via Xvfb (this lets us save screenshots when anything goes wrong).
This is great for testing that the site (including JavaScript) works and that a user can perform each documented task successfully.
I am looking to expand our testing facilities, and one thing I would like to add is the ability to check multiple browsers. This is very important as we get occasional quirks that can break functionality.
Since the tests currently take slightly over an hour to run (and we have 4 suites for that site on Jenkins), I'd prefer to run all the browsers at the same time. If I can't find a way to do it concurrently, I'll likely just set up multiple Behat profiles and run each one in series.
One thing I've been looking at as a possible solution is Ghostlab. This would allow us to test across multiple browsers and multiple devices, including mobile, at the same time. The problem is that I can't find a way of joining this to Behat in a meaningful way.
I could run one browser connected to Ghostlab, which would cause the same actions to be taken across all connected browsers. However, if a browser other than the one controlled by Selenium broke, I don't know how we would capture that information.
TL;DR: Is there any way for me to run BDD (preferably Behat) tests across multiple browsers in parallel, and capture information from any browser that fails?
This is what multi-configuration jobs (or matrix jobs) are designed for in Jenkins.
You specify your job configuration once, but add one or more variables that should change each time, building a matrix of combinations (in your case, the matrix has one dimension: browser).
Jenkins then runs one main build with multiple sub-builds in parallel — one for each combination in the matrix. You can then clearly see the results for each combination.
This requires that your test job can be parameterised, i.e. you can choose at runtime which browser should be run, rather than running all tests together in a single job.
The Jenkins wiki has minimal documentation on this feature, but there are a few good blog posts (and Stack Overflow questions) out there on how to set it up.
A matrix job will use all available "executors" in Jenkins to run builds in parallel as much as possible.
In a default Jenkins installation, there are two executors available, but you can change this, or extend Jenkins by adding further build machines.
I'm just curious whether there's any known unwanted effect of the --browser-test flag on automation, or whether it can make my tests less valid.
I'm currently running tests with this flag and it doesn't seem to hurt anything. Is it just overlooked?
https://peter.sh/experiments/chromium-command-line-switches/#browser-test
https://github.com/GoogleChrome/puppeteer/blob/master/lib/Launcher.js#L38
The --browser-test flag activates an internal test used by the Chromium developers regarding canvas repaints.
Some older code in the repository gives this hint:
Tells Content Shell that it's running as a content_browsertest.
And this issue in the Chromium repository contains more information:
We need a test that checks canvas capture happens for N times when there are N repaints. This test is not appropriate for webkit layout tests as it is slow and there are mock streams involved.
Looks like they added a special flag for this test.
Therefore, you should not activate this flag: it exists for the developer team's internal browser tests, not for testing websites.
I am working on automation with Selenium, Java and TestNG. I have completed all my test scripts, but I don't have practical experience of working with Selenium in the IT industry.
My question is: once the test scripts for a specific project are complete, how are they actually run for regression?
1. Using Eclipse (or any other IDE) on a regular basis, or
2. Building a jar file to run on a regular basis, or
3. Some other means?
Please let me know how this is handled from a company's point of view.
It really depends on the company. In my experience we ran regression via Selenium from the Eclipse IDE, but if the company practises continuous integration, the tests mostly have to be run by a machine, so it's probably better to build a jar that writes the test results to some kind of file.
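As a rough illustration of option 2 from the question, the entry point packaged into such a jar could simply invoke TestNG programmatically (this is only a sketch; the suite file name testng.xml is whatever your project already uses):

```java
import java.util.Collections;

import org.testng.TestNG;

public class RegressionRunner {
    public static void main(String[] args) {
        TestNG testng = new TestNG();

        // Run the same suite definition the IDE would normally use.
        testng.setTestSuites(Collections.singletonList("testng.xml"));
        testng.run();

        // TestNG writes its HTML/XML reports to ./test-output by default;
        // the CI job can archive or publish that directory after each run.
        System.exit(testng.getStatus());
    }
}
```

In practice many teams skip the jar entirely and let the CI job invoke the tests through the build tool (Maven Surefire, Gradle, etc.), which amounts to the same thing.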
Different companies follow different approaches to regression testing with automated scripts. There is no set standard or SOP (Standard Operating Procedure) for this. The senior levels of the testing department (test lead etc.) have to decide which practice will give the maximum ROI.
For example, in my current organisation I run a CI system using Jenkins, which runs all the automation scripts at a specified time on the day regression is supposed to begin, or we trigger it and the CI takes care of the rest.
In my previous organisation we had a dedicated system for regression: we would copy all the scripts over, make the necessary updates/system changes and then trigger the scripts to run the tests.
I don't believe many big companies run their regression tests individually via the Eclipse IDE: for a whole sprint (or a whole project) there would be hundreds of test cases involving a lot of scripts, and running them via Eclipse would be tedious and time-consuming. On top of that, every individual script run would generate a separate report, which would be too cumbersome to store and debug in case of failure.
However, as I said, this depends entirely on how the company sees the ROI and effort to be made for this.
We are switching from a classic 'Waterfall' model to a more Agile-oriented philosophy. We decided to give BDD (Cucumber) a try, but we have some issues migrating some of our 'old' methodologies. The biggest question mark is how manual testing integrates into the cycle.
Let's say the Project Manager defined the Feature and some basic Scenario Outlines, and with the test team we defined around 40 Scenarios for this feature. Some are not possible to test automatically, which means they will have to be tested manually. Executing manual testing when all you have is the feature file feels wrong; we want to be able to see the past failure rate of tests, for example. Most test-case managers support such features, but they can't work with feature files, and maintaining the manual test cases in an external test-case manager will cause never-ending synchronisation issues between the feature file and the test-case manager.
I'm interested to hear whether anyone has been able to cover this 'middle ground', and how.
This is not a very unusual case; even in Agile it may not be possible to automate every scenario. The scrum teams I work with usually tag such scenarios with @manual in the feature file, and we have configured our automation suite (Cucumber with Ruby) to ignore that tag in the nightly jobs. One problem with this, as you have mentioned, is that we won't know the outcome of the manual tests, since the testers document the results locally.
My suggestion was to document the results of each iteration in a YAML file (or any other format that suits the purpose). This file should be part of the automation suite and checked into the repository, so to start with you have the manual results documented alongside the automation suite. Later, when you have the resources and time, you can add functionality to your automation suite to read this file and generate a report, either combined with the automated results or separately. Until then, version control helps you track all previous results.
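The suite described above is Cucumber with Ruby; purely to illustrate the idea of reading such a checked-in results file, here is a rough JVM-side sketch using SnakeYAML (the file name and layout of manual-results.yml are made up):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;

import org.yaml.snakeyaml.Yaml;

public class ManualResultsSummary {
    public static void main(String[] args) throws Exception {
        // Hypothetical layout: a list of entries like
        //   - scenario: "User can export the report"
        //     iteration: "sprint-12"
        //     status: failed
        try (InputStream in = Files.newInputStream(Paths.get("manual-results.yml"))) {
            Object loaded = new Yaml().load(in);
            @SuppressWarnings("unchecked")
            List<Map<String, Object>> results = (List<Map<String, Object>>) loaded;

            long failed = results.stream()
                    .filter(r -> "failed".equals(r.get("status")))
                    .count();

            // This summary could be merged into the nightly report
            // alongside the automated results.
            System.out.printf("Manual scenarios: %d, failed: %d%n", results.size(), failed);
        }
    }
}
```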
Hope this helps.
To add to @Eswar's answer: if you're using Cucumber (or one of its siblings), one option is to execute the test runner manually and include prompts for the tester to check certain aspects. They then pass or fail the test according to their judgement.
This is often useful for aesthetic aspects e.g. cross-browser rendering, element alignment, correct images used, etc.
As @Eswar mentioned, you can exclude these tests from your automated runs by tagging them.
See this article for an example.
Test cases that cannot be automated are a poor fit for Cucumber. We have a bunch of these edge cases: it is nigh impossible to get Selenium to verify PDF documents well, and the same goes for CSV downloads (not impossible, but not worth the effort). Look-and-feel tests simply require human eyes at this point, and accessibility testing with screen readers is best done manually as well.
For that, be sure to record the acceptance criteria in the user story in whichever tool you use to track work items. Write a manual test case. The likes of Azure DevOps, Jira, IBM Rational Team Concert and their ilk have ways to record manual test plans, link them to stories, and record the results of executing a manual test.
I would remove the manual test cases from the Cucumber tests, rely on the acceptance criteria for the story, and link the story to some sort of manual test case, be it in a tool or a spreadsheet.
Sometimes you just need to compromise.
We use Azure DevOps with Test Plans plus some custom code to synchronize Cucumber tests to ADO. I can describe how we've implemented it in our projects:
We start with the Cucumber feature file. Each User Story has its own Feature file, and the scenarios in the Feature are the acceptance criteria for the story. We end up with lots of Feature files, so we use naming conventions and folders to organize them.
We annotate the top of the Feature file with a tag that references the User Story, e.g. @Story-1234.
We've written a command-line utility that reads the Cucumber files with these tags. It then fetches all the Test Suites in the Test Plan that are linked to Stories. In ADO, a story can only be linked to a single test suite. If a Test Suite doesn't exist for that Story, our tool creates one.
For each Scenario, the tool creates an ADO Test Case and then annotates the Scenario with the Test Case ID. This creates amazing traceability for each User Story, as the related Test Cases are automatically linked to the Story in the Azure DevOps UI.
Although we don’t do this, we could populate the TestCase with the step definitions from our cucumber Scenario. It’s a basic XML structure that describes the steps to take. This would be useful if we wanted to manually execute the test case using the Azure DevOps Test Case UI. Since we focus primarily on automation, we rely on the steps in our Feature files and our ADO Test Cases end up being symbolic links back to cucumber Scenarios.
Because our cucumber tests are written in C# (SpecFlow), we can get the full class name and method for the cucumber test code. Our tool is able to update the Azure DevOps Test Case with the automation details.
For any test case that isn't ready for automation or must be done manually, we annotate the Scenario with an @ignore or @manual tag.
Using Azure DevOps Pipelines, we use the Visual Studio Test task to run our tests. The important point here is that we use the Test Plan option. This option fetches the Test Cases in the Test Plan that have automation and then executes the specific Cucumber tests. The out-of-the-box functionality updates the Test Case statuses with the test results.
After running through automation, we use the Test Plan Report in Azure DevOps, which shows the Test Case execution status over time and can distinguish between automated and manual test cases.
We execute any remaining manual test cases to complete the Test Plan.
For us, we often found that the manual cases that cannot be automated are exception cases, or cases that depend on the external environment (for example malformed data, no network connection, maintenance, first-time guides...). These cases require special setup to simulate the environment in which they happen.
Ideally, I believe it is possible to cover everything, provided you are prepared to go as far as necessary to make it happen. In reality, though, the effort needed is most often too great, so we prefer a hybrid approach of mixed manual and automated test cases. We do, however, try to convert those exception cases to automated ones over time, by setting up a separate environment that simulates the exception and writing automated tests against it.
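No specific tool is named above, but as one possible way of simulating such exception cases, here is a rough sketch using WireMock to stand in for a backend that drops connections or returns malformed data (the port and endpoints are made up):

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;

public class FaultySimulationEnvironment {
    public static void main(String[] args) {
        // Stand-in backend on a local port; the application under test would be
        // configured to call http://localhost:8089 instead of the real service.
        WireMockServer backend = new WireMockServer(8089);
        backend.start();

        // Simulate "network connection not available" for one endpoint...
        backend.stubFor(get(urlEqualTo("/api/orders"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));

        // ...and malformed data for another.
        backend.stubFor(get(urlEqualTo("/api/profile"))
                .willReturn(aResponse().withStatus(200).withBody("{ not valid json")));

        // The automated scenarios for these exception cases can now run
        // against this environment deterministically.
    }
}
```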
Nevertheless, even with that effort, there would be cases when it's impossible to simulate, and I believe they should be covered by technical tests from engineers.
You could use an approach similar to the following example:
http://concordion.org/Example.html
If you use a build or continuous integration system to track your test runs, you could add simple specifications/tests for your manual cases that contain a text comparison (e.g. "pass" or "fail"). You would then update the spec after each manual test run, check it in, and start the tests in your build/CI system, so that the manual results are recorded together with the results of the automated test execution.
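To make that concrete, a minimal sketch of such a fixture using plain Concordion with the JUnit 4 runner might look like this (the spec name, check names and result source are all hypothetical):

```java
import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

// Fixture for a hypothetical ManualChecks.html specification kept in the same
// package. The HTML spec lists each manual check and asserts, via
// concordion:assert-equals, that recordedResult(...) returns "pass".
@RunWith(ConcordionRunner.class)
public class ManualChecksFixture {

    // Hard-coded here for brevity; in practice this would return whatever the
    // tester recorded after the latest manual run (e.g. from a checked-in
    // properties or YAML file).
    public String recordedResult(String checkName) {
        if ("invoice PDF renders correctly".equals(checkName)) {
            return "pass";
        }
        return "fail";
    }
}
```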
If you use a tool like Concordion+ (https://code.google.com/p/concordion-plus/), you could even write a summary specification containing scenarios for each of your manual tests. Each one would be reported as an individual test result in your test execution environment.
Cheers
Taking screenshots seems to be a good idea. You can still automate the verification, but you will need to go the extra mile: for instance, when using Selenium you can add Sikuli (NB: you can't run headless tests with it) to compare results (images), or take a screenshot with Robot (java.awt), use OCR to read the text, and assert or verify it (TestNG).
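As a rough sketch of the screenshot-plus-OCR idea (the URL and expected text are hypothetical, and Tess4J is assumed as the OCR library):

```java
import java.io.File;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

import net.sourceforge.tess4j.Tesseract;

public class ScreenshotOcrTest {

    @Test
    public void confirmationTextIsRendered() throws Exception {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.com/order/confirmation"); // hypothetical page

            // Capture the rendered page and keep a copy for the test report.
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            File saved = new File("target/confirmation.png");
            FileUtils.copyFile(shot, saved);

            // Read the text back out of the image and assert on it.
            // Tess4J needs a local Tesseract/tessdata installation configured.
            Tesseract ocr = new Tesseract();
            String text = ocr.doOCR(saved);
            Assert.assertTrue(text.contains("Thank you for your order"),
                    "Expected confirmation text not found in the screenshot");
        } finally {
            driver.quit();
        }
    }
}
```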
I'm automating a workflow (a survey) with a few questions on each page.
Each page has a few questions and a Continue button, and depending on your answers different pages load next. How can I automate this scenario?
TL;DR: Selenium should only form a part of your automated testing strategy & it should be the smallest piece. Test variations at a lower level instead.
If you want to ensure full coverage of all possibilities, you have two main options:
Test all variants through browser-based journey testing
Test variations outside of the browser & just use Selenium to check the higher-level wiring.
Option two is the way to go here — you want to ensure as much as possible is tested before the browser level.
This is often called the testing pyramid, as ideally you'll only have a small number of browser-based tests, with the majority of your testing done as unit or integration tests.
This will give you:
much better speed, as you don't have the overhead of browser load to run each possible variant of your test pages.
better consistency, i.e. with unit tests you know that they hold true for the code itself, whereas browser-based tests are dependent on a specific instance of the site being deployed (and so bring with it the other variations external to your code, e.g. environment configuration)
Create minimal tests in Selenium to check the 'wiring'.
i.e. that submitting any valid values on page 1 gives some version of page 2 (but not testing what fields in particular are displayed).
Test other elements independently at a lower level.
E.g. if you're following an MVC pattern:
Test your controller class on its own to see that, with a given input, you are sent to the expected destination and certain fields are populated in the model.
Test the view on its own: given a certain model, it can display all the variations of the HTML, etc.
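A minimal 'wiring' check of that kind might look like the sketch below (Selenium WebDriver with TestNG assumed; the URL and element IDs are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class SurveyWiringTest {

    private WebDriver driver;

    @BeforeMethod
    public void startBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void answeringPageOneLeadsToPageTwo() {
        driver.get("https://example.com/survey");

        // Submit one valid set of answers; which follow-up questions appear
        // for each answer is covered by the lower-level tests, not here.
        driver.findElement(By.id("question-1-yes")).click();
        driver.findElement(By.id("continue")).click();

        // Only check the wiring: some version of page 2 was served.
        Assert.assertTrue(driver.findElement(By.id("page-2")).isDisplayed());
    }

    @AfterMethod
    public void stopBrowser() {
        driver.quit();
    }
}
```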
It would be better to use if-else statements and automate it that way. Again, it depends on how many scenarios you need to automate.
Looking for a way to get a visual report about:
overall test success percentage over time (i.e. whether and how quickly tests are going greener)
visualised single-test results over time (to easily notice a test going red after being green for a long time or, vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically, a tool that would generate results from the whole test-results directory, not just from a single (daily) run.
Generally it seems this could be done using XSLT, but XSLT doesn't offer much flexibility for working with multiple files at the same time.
Does such a tool exist already?
I feel fairly confident claiming that most continuous integration engines, such as Hudson (for Java), provide this capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it produces basic graphs from unit tests automatically by itself.
Oh, and remember to configure the CI properly: for example, our Hudson polls CVS every 10 minutes and, if it sees any changes, does all the associated work (get the updated .java files, compile, run tests, verify dependencies, etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more. Even if you want it only to run tests and give you feedback on those, it can.
There is a newer report supporting NUnit/JUnit called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.