I'm trying to improve the testing process where I work, but without changing our team structure.
What we have: VSTS, Selenium IDE, and testers who write test cases but not code.
What I'd like to do is find a way to marry our TFS continuous integration with the Selenium tests we write. These are NOT the code-driven Selenium tests, but rather the IDE version where users click through and set assertions using the IDE (all are just UI tests). I know we can export those tests as a .side file, but what I can't figure out is how to have our TFS server execute them as part of a build or deployment pipeline.
Ideally, developers/DevOps would set up projects in TFS from the outset with whatever solution makes sense to execute these Selenium .side files, and afterwards the testers would manage adding/modifying those test cases elsewhere.
The real goal here is to not have testers writing or checking in code, only writing these UI Selenium tests, while still having TFS execute them as part of CI.
Researching this on the internet almost always leads me to something that requires testers to write code.
I don't think you can automate testing entirely without code; at the least, you need a test project containing your automated tests.
Generally, in Azure DevOps, we use the Visual Studio Test task to run tests. This task supports the following ways of selecting tests:
Test assembly: Use this option to specify one or more test assemblies that contain your tests. You can optionally specify filter criteria to select only specific tests.
Test plan: Use this option to run tests from your test plan that have an automated test method associated with them. To learn more about how to associate tests with a test case work item, see Associate automated tests with test cases.
Test run: Use this option when you are setting up an environment to run tests from test plans. This option should not be used when running tests in a continuous integration/continuous deployment (CI/CD) pipeline.
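Since a .side file is not a test assembly, none of these options will pick it up directly. One possible workaround (just a sketch; the file name is a placeholder, and it assumes Node.js, Chrome, and a matching driver are installed on the build agent) is to skip the Visual Studio Test task and run the Selenium IDE tests from a Command Line step in the pipeline:

npm install -g selenium-side-runner chromedriver
selenium-side-runner --output-directory=results --output-format=junit MyTests.side

If your selenium-side-runner version supports the JUnit output format, the XML files in results/ can then be fed to a Publish Test Results step so failures show up in the build summary, and testers only ever touch the .side file.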
This was a question that I had as well, and I think I found an imperfect but better solution.
I wasn't able to get my Selenium IDE tests running with Jenkins, but I was able to get them to run with TeamCity, another CI.
I created a build step like the following:
Runner type: Command Line
Working Directory: where the selenium IDE .side file is located
Run: Custom Script
With the build script content that I usually use to run my Selenium IDE Tests, such as selenium-side-runner sidefile.side
I also added the following so I could output the results in JUnit or another format: --output-directory=results --output-format=junit
You can also add the following so the tests run headless (this only works in Chrome): -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"
Finally, I also use --filter to run one test suite at a time, but that is optional too.
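Putting those pieces together, the complete command in the build step ends up looking something like this (the .side file name and the "Smoke" suite filter are placeholders from my setup):

selenium-side-runner -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome" --output-directory=results --output-format=junit --filter "Smoke" sidefile.side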
I then used another build step to export the results to our test manager, Xray, but I think that is beyond the scope of this question.
The problem with this solution is that it still runs directly from a user's individual machine, but that can be worked around.
How can I execute a test from the command line in a way that the test run, with all its results, is shown in TestCafé Studio afterwards?
I'm using:
npx testcafe [browser] [TestCafe file]
The test is executed, but the results are only visible in the console. Is there a way to fully integrate it into TestCafé Studio?
BTW: Why isn't there a tag for [testcafestudio] – the product is not that new ;-)
EDIT: Let me zoom out a little: we would like to switch from TestCafé to TestCafé Studio to increase the number of people who can maintain and create individual tests in QA. Some tasks could then also be handled by employees with somewhat lower HTML skills. In addition, we would like to keep the connections we are used to, so that the tests are still triggered at certain times or manually via a Jenkins pipeline (Jenkins-->VIX-->CMD-->TestCafé Studio). Depending on the configuration of the respective test run, different branches would be used for the TestCafé Studio project via Git. The test results are read, parsed, and written to a database after the test run is complete. In addition, I would like to see the automatically triggered runs available in TestCafé Studio, as it is very convenient to navigate directly to the failed tests.
Is it not yet possible to start tests in TestCafé Studio via CMD?
TestCafe Studio stores reports in its own format, while TestCafe stores reports in several different formats that are not compatible with the IDE format.
You can run tests in TestCafe Studio itself. Are you running your tests in CI? If so, which CI are you using, and why doesn't its reporting system meet your requirements? If not, could you please clarify why you need to run tests outside the TestCafe Studio IDE?
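One partial workaround for the Jenkins side of such a pipeline (only a sketch; the browser, test path, and report path are placeholders, and it does not make the run visible inside TestCafé Studio) is to have the CLI write a machine-readable report with one of TestCafe's built-in reporters:

npx testcafe chrome tests/ --reporter xunit:reports/results.xml

Jenkins can then parse or archive reports/results.xml and write it to the database, even though the Studio UI will not show that run.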
I have just started learning about test automation in Selenium and found out that most online tutorials would tell you to run the test suite inside an IDE together with a test framework such as TestNG (with testng.xml) and a build tool such as Maven.
When you are working in a software company and told to build a test framework and run automated tests, I don't believe you actually need to fire up your IDE every time you want to execute your test suite. So, my question is, what is the typical setup a software company follows to 'automate' running your test automation scripts?
Software companies follow agile practices and want to keep up with industry standards. In real projects, CI and CD are used to continuously integrate, deploy, and test the software.
Tests are written by SDETs using test automation frameworks. While developing test scripts, test developers use IDEs like Eclipse; however, the tests themselves are executed by Jenkins as a job, on whatever schedule or event is required.
For example, after every code deployment Jenkins can automatically trigger your sanity suite, and it can run the regression suite bi-weekly.
These processes are automated nowadays, with stakeholders demanding agility.
One can invoke a Selenium Java project from the command line via a .bat file in Jenkins, or by using Ant/Maven as the build tool (see the example below).
IDEs are seldom used to run tests in the real world.
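As a rough illustration (the suite file and group names are placeholders, and this assumes a Maven project using the Surefire plugin with TestNG), a Jenkins job typically just shells out to the build tool:

mvn clean test -Dsurefire.suiteXmlFiles=testng-smoke.xml
mvn clean test -Dgroups=regression

Jenkins then picks up the Surefire XML reports, so no IDE is involved in the run itself.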
Our goal is to implement CI testing and deployment for our DEV web environments:
Goal
Run XUnit tests on check-in.
If tests fail, create individual, associated Bug work items. Stop.
If tests pass, deploy the build to a UNC file path.
Current Setup
CI is on for the branch, and the build definition currently has enabled Create Work Item on Failure on the Options panel.
XUnit was integrated into the Visual Studio Test build step by providing the necessary Path to Custom Test Adapters.
Problem
Tests run and display results correctly in the build, but no bugs are created for the failed tests, only one for the overall build failure.
Question
How can I create individual Bugs (and include details about the bug in its description)?
You would have to write your own code to create Bugs for each test failure.
I would, however, recommend against it, as this creates unnecessary work items that may not really be bugs. Maybe a single test fails and the other 200 tests fail as a result; that is only one bug. You will overwhelm people.
You can easily create bugs as you investigate failures using the failed test list that is part of the build results.
https://www.visualstudio.com/en-us/docs/test/continuous-testing/getting-started/getting-started-with-continuous-testing
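If you do decide to create work items automatically, a minimal sketch (assuming a personal access token in $PAT, placeholder organization/project/test names, and Azure DevOps Services; an on-premises TFS server uses a different base URL and api-version) is to call the work item tracking REST API once per failed test from a script step:

curl -u :$PAT \
  -H "Content-Type: application/json-patch+json" \
  -X POST \
  -d '[{"op": "add", "path": "/fields/System.Title", "value": "Failing test: MyTest"}]' \
  "https://dev.azure.com/MyOrg/MyProject/_apis/wit/workitems/\$Bug?api-version=6.0"

A script after the test step could read the list of failed tests and loop over it, adding each error message to the description field of its Bug.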
Presently we have built an automation framework which uses Selenium WebDriver + SpecFlow + NUnit, and we are using Bamboo as our CI to run our job against every build.
We have written a build.xml to handle our targets (like clean, init, install the latest build, run the Selenium scripts, uninstall the build, etc.).
The ant command reads the tag name from build.xml and runs the respective features/scenarios based on tags (like #smoke, #Regression) with NUnit on the CI machine.
Now our requirement is to use Selenium Grid to divide the scripts across different machines and execute them with the above set-up. Grid has to divide the scripts based on feature file or based on tags. How can we achieve this?
Is there anything that needs to be done under [BeforeFeature] and [BeforeScenario]?
If you could provide detailed steps, or a link which explains them, that would be a great help.
Can anyone please help in this regard?
Thanks,
Ashok
You have misunderstood the role Grid plays in distributed parallel testing. It does not "divide the scripts", but simply provides a single hub resource through which multiple tests can open concurrent sessions.
It is the role of the test runner (in your case SpecFlow) to divide the tests and start multiple threads.
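For reference, the hub and the nodes are just separate processes, and the tests open RemoteWebDriver sessions against the hub; a typical Selenium 3-era start-up (the jar version and host name here are only examples) looks like:

java -jar selenium-server-standalone-3.141.59.jar -role hub
java -jar selenium-server-standalone-3.141.59.jar -role node -hub http://hub-host:4444/grid/register

The tests then target http://hub-host:4444/wd/hub, and the hub hands each incoming session to a free node; which scenarios run in parallel is still decided by the runner.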
I believe that you require SpecFlow+ (http://www.specflow.org/plus/), but this does have a license cost.
It should be possible to create your own multithreaded test runner for SpecFlow, but that will require programming and technical knowledge.
If you want a free, open-source approach to parallel test execution in .NET, there is MbUnit (http://code.google.com/p/mb-unit), but this would require you to rewrite your tests.
Our department uses Visual Studio 2008 Team System, and we have a build server that integrates with our TFS source control server. It pulls the source code, builds the solution and runs the unit tests, just as we might do from within VS, and emails a report. The build server is setup using MSBuild and MSTest as the primary tools. All very sweet.
On our development machines we also run a set of selenium unit tests, and I want to include this in the test suite on the build server. I have been told that 'this is not possible using MSBuild/MSTest', but I am at a loss to understand why.
Does anyone have experience of running Selenium tests (they are just conventional test methods written in C#) who might be able to advise me on whether this is possible and what the gotchas are? Thinking about it, apart from giving the browser access to the desktop when the server is not logged in, once MSBuild has handed off a test list to MSTest it's exactly the same process as on our development machines.
TIA
I know it's 3 years on, but someone might drop in on this post and not see an answer. This is possible to do.
In a similar fashion to how you'd run unit tests, in the build definition (using the default template) you need to specify the name of the unit test DLL and ensure Run Unit Test is not disabled. Also ensure that the build is building your Automated UI Test solution.
Simples.
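Under the covers the test step just runs the test container like any other, the equivalent of something along these lines (the DLL name is a placeholder):

MSTest.exe /testcontainer:AutomatedUiTests.dll /resultsfile:Results.trx

The main gotcha is the one the question already hints at: the browser needs a desktop, so the build agent has to run as an interactive, logged-in process rather than as a Windows service.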