On running Selenium tests through a GitLab CI/CD pipeline, screenshots are not displaying within the Extent test report - selenium

On running Selenium tests through the GitLab CI/CD pipeline, screenshots are not displaying within the Extent test report. The test report and the screenshots folder are generated there as separate artifacts, but when I run locally, the screenshots do display within the test report.
I have also tried base64-encoding the screenshots, but it is not working. Because the suite runs on a different server in the GitLab CI/CD pipeline, the screenshot paths change, which I suspect is why the screenshots are not displayed within the Extent report.
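For context, the base64 variant was along these lines (a sketch using the ExtentReports 5 Java API; the helper is illustrative, not my exact code). Embedding the image as base64 should make the HTML report self-contained, since the image bytes live inside the report rather than at a path on the runner:

import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.MediaEntityBuilder;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class ScreenshotHelper {
    // Embed the screenshot as base64 so the HTML report is self-contained
    // and does not depend on a file path on the CI runner.
    public static void attachFailureScreenshot(WebDriver driver, ExtentTest test, String message) {
        String base64 = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BASE64);
        test.fail(message, MediaEntityBuilder.createScreenCaptureFromBase64String(base64).build());
    }
}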
Please suggest what we can do here.

See GitLab 13.12 (May 2021)
Failed test screenshots in test report
GitLab makes it easy for teams to set up end-to-end testing with automation tools like Selenium that capture screenshots of failed tests as artifacts. This is great until you have to sort through a huge archive of screenshots looking for the specific one you need to debug a failing test. Eventually, you may give up due to frustration and just re-run the test locally to try and figure out the source of the issue instead of wasting more time.
Now, you can link directly to the captured screenshot from the details screen in the Unit Test report on the pipeline page. This lets you quickly review the captured screenshot alongside the stack trace to identify what failed as fast as possible.
See Documentation and Issue.
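If I read the linked documentation correctly, the screenshots have to be uploaded as artifacts and referenced from the JUnit XML with an [[ATTACHMENT|path/to/screenshot.png]] marker in the test case's output. A sketch of the .gitlab-ci.yml side might look like this (job name, script, and paths are placeholders to adapt):

test:
  script:
    - ./run-selenium-tests.sh
  artifacts:
    when: always
    paths:
      - screenshots/
    reports:
      junit: results/TEST-*.xml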

Related

Run tests in TestCafé Studio via command

How can I execute a test via command in a way that the test run is shown with all its results in TestCafé Studio afterwards?
I'm using:
npx testcafe [browser] [TestCafe file]
The test is executed but the results are only visible in the console. Is there a way to fully integrate it in TestCafé Studio?
BTW: Why isn't there a tag for [testcafestudio] – the product is not that new ;-)
EDIT: Let me zoom out a little: We would like to switch from TestCafé to TestCafé Studio to increase the number of people in QA who can maintain and create individual tests. Some tasks could then also be handled by employees with somewhat lower HTML skills. In addition, we would like to keep the workflows we are used to, so that the tests are still triggered at certain times or manually via a Jenkins pipeline (Jenkins-->VIX-->CMD-->TestCafé Studio). Depending on the configuration of the respective test run, different branches of the TestCafé Studio project would be used via Git. The test results are read, parsed, and written to a database after the test run is complete. I would also like the automatically triggered runs to be visible in TestCafé Studio, as it is very convenient to navigate directly to the failed tests.
Is it not yet possible to start tests in TestCafé Studio via CMD?
TestCafe Studio stores reports in its own format, while the TestCafe CLI produces reports in various formats that are not compatible with the IDE's format.
You can run tests in TestCafe Studio itself. Are you running your tests in CI? If so, which CI are you using, and why doesn't its reporting system meet your requirements? If not, could you please clarify why you need to run tests outside the TestCafe Studio IDE?
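For completeness, the TestCafe CLI can at least write results to a file instead of only the console via its reporter option; the resulting file is a plain xUnit report, not something TestCafe Studio will display. The paths below are placeholders:

npx testcafe chrome tests/ --reporter xunit:results/report.xml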

Automated Testing for testers with no coding required

I'm trying to improve the testing process where I work, but without adjusting the structure.
What we have: VSTS, Selenium IDE, Testers who write test cases, but not code.
What I'd like to do is find a way to marry our TFS continuous integration with the Selenium tests we write. These are NOT the code-driven Selenium tests, but rather the IDE version where users click through and set assertions using the IDE (all are just UI tests). I know we can export those test plans as a .SIDE file, but what I can't figure out is how to have our TFS server execute those as part of a deployment or build pipeline.
Ideally, developers/devops would setup projects in TFS from the onset with whatever solution makes sense to execute these Selenium .SIDE files, but afterwards, the testers would manage adding/modifying those tests cases elsewhere.
The real goal here is to not have testers writing code, or checking in code. Only writing these UI Selenium tests, but having TFS execute those as part of CI.
Researching this on the internet almost always leads me to something that requires testers to write code.
I don't think you can automate testing without any code; at the least, you need a test project containing your automated tests.
Generally, in Azure DevOps, we use the Visual Studio Test task to run tests. This task supports the following ways of selecting tests:
Test assembly: Use this option to specify one or more test assemblies that contain your tests. You can optionally specify a filter criteria to select only specific tests.
Test plan: Use this option to run tests from your test plan that have an automated test method associated with it. To learn more about how to associate tests with a test case work item, see Associate automated tests with test cases.
Test run: Use this option when you are setting up an environment to run tests from test plans. This option should not be used when running tests in a continuous integration/continuous deployment (CI/CD) pipeline.
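For illustration, a VSTest step in a YAML pipeline that selects test assemblies might look roughly like this (the assembly pattern and the custom adapter path are placeholders to adapt):

- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)'
    pathtoCustomTestAdapters: '$(Build.SourcesDirectory)\adapters'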
This was a question that I had as well, and I think I found an imperfect but workable solution.
I wasn't able to get my Selenium IDE tests running with Jenkins, but I was able to get them to run with TeamCity, another CI.
I created a build step like the following:
Runner type: Command Line
Working Directory: where the selenium IDE .side file is located
Run: Custom Script
With the build script content that I usually use to run my Selenium IDE tests, such as selenium-side-runner sidefile.side
I also added the following so I could output the results in JUnit or another format: --output-directory=results --output-format=junit
You can also add the following so the tests run headlessly (this only works in Chrome): -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"
Finally, I also use --filter to run one test suite at a time, but that is optional too.
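Putting those pieces together (the file name and the filter value are placeholders), the full invocation looks something like:

selenium-side-runner -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome" --output-directory=results --output-format=junit --filter "MySuite" sidefile.side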
I then used another build step to export the results to our test manager, Xray, but I think that is beyond the scope of this question.
The problem with this solution is that it still runs directly from a user's individual machine, but this can be worked around.

How to display a directory full of junit test result files

As part of testing in the cloud, I plan to copy my test results (I happen to be using gradle) as xml out of a short lived container to an s3 bucket.
I am wondering what the best approach is for making use of the bucket full of test results, and whether anyone has ideas for how to get the most out of these results with the least custom code.
In the past I have handled this with XSLT to make the XML into HTML but that approach feels like more work than I'd like to have to do.
Has anyone solved this issue for their own purposes that can share your approach?
As a side note... I could also copy the HTML reports from Gradle out to my S3 bucket, if someone has an idea about how to use the HTML instead.
Ideally, I would be able to see a sorted list of results and click on the most recent to see the test run details.
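For what it's worth, Gradle writes its JUnit XML to build/test-results/test by default, so the copy step I have in mind is a one-liner with the AWS CLI (the bucket name, prefix, and $BUILD_ID variable below are placeholders):

aws s3 cp build/test-results/test/ s3://my-test-results-bucket/runs/$BUILD_ID/ --recursive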
Disclaimer: I'm not a DevOps guy; I'll present a "developer's" point of view here.
The answer IMO depends on what exactly you would like to do with all these reports.
In a nutshell, you can get text/XML reports from the test execution (I can't say for Gradle, but in Maven the Surefire plugin creates these reports; I believe Gradle does the same).
You can also generate a good-looking site from the report information; there are tools for this as well, for example Allure Reports. Bottom line, the content to be shown can be generated in the build directory.
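For example, assuming the standalone Allure CLI is installed and the build wrote its results into build/allure-results (both paths are placeholders), generating the HTML site is a single command:

allure generate build/allure-results --clean -o allure-report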
Copying the reports is as simple as copying a file, but the question is what you would like to do with them once they're in S3.
In my understanding, it's a job for CI tools (like Jenkins) to run the tests during the build, or maybe a suite of automation tests against a deployed environment (in the cloud).
These CI tools run Gradle to build the project and run the tests, then they show test results per build, keeping the last N builds. They can also be integrated with tools like the aforementioned Allure and show the HTML report per build, as long as the build is stored in the CI tool.
So I'm not sure I understand why you would like to "run tests in the cloud" (from which I assume you're running the test suite while the artifact is being deployed).
As for possible ideas of what can be done with all these results:
You can create a site that shows the results across builds (again, this is already solved by CI tools like Jenkins).
You can store the results of running the tests in some database and provide some clever statistics (I personally don't see much benefit if the build is usually green and you're not in the process of recovering the project from flaky tests or something).

Allure Reports Team City plugin causing builds to just hang on build step that runs tests

I'm trying to get Allure reporting working as part of an NUnit project in C# using Selenium WebDriver. Following the documentation for installing Allure seems to work fine on a local machine, but I'm also trying to get the TeamCity plugin to work with the project. I've uploaded the Allure TeamCity plugin and added a build step that will generate a report even if the build fails or is stopped. The project has all the required Allure NUnit tags added, and again this generates results locally that a report can be generated from. However, when running tests from TeamCity, the build hangs and does not even begin to run tests. At the step where it should start running tests, it just sits there. The TeamCity build logs do not show anything wrong. I'm using NUnit 3.6, TeamCity 2018.1, Allure 2.7, and Allure TeamCity plugin 2.9.
Does anyone have any experience with allure reporting? The documentation is a little out of date but I've done as much as I can with it.
I'm not sure if this is allowed, answering your own question, but given the scarcity of existing responses, and given that the Allure developers actually suggest posting to this site rather than to their own GitHub site, I find it rather frustrating that there has been so little response.
Anyway, having spent further hours looking into this, I believe that the hanging of the builds is purely down to the allure-config.json file and the path to the allure-results setting within it. Any attempt to change that setting from its default appears to instantly cause the TeamCity builds containing Allure reports to hang.
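For reference, the Allure NUnit adapter's config file (named allureConfig.json in the versions I've seen; the naming may differ by version) defaults to a relative results directory, roughly:

{
  "allure": {
    "directory": "allure-results"
  }
}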
Hopefully this helps others having similar issues in the future.

TFS: Create individual Bug items when XUnit tests fail

Our goal is to implement CI testing and deployment for our DEV web environments:
Goal
Run XUnit tests on check-in.
If tests fail, create individual, associated Bug work items. Stop.
If tests pass, deploy the build to a UNC file path.
Current Setup
CI is on for the branch, and the build definition currently has enabled Create Work Item on Failure on the Options panel.
XUnit was integrated into the Visual Studio Test build step by providing the necessary Path to Custom Test Adapters.
Problem
Tests run and display results correctly in the build, but no Bugs are created for the failed tests, only one for the overall build failure.
Question
How can I create individual Bugs (and include details about the bug in its description)?
You would have to write your own code to create Bugs for each test failure.
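For what it's worth, custom code like that would typically call the TFS/Azure DevOps work item REST API. A rough Java sketch follows (the collection URL, project, API version, and field values are placeholder assumptions, and authentication uses a personal access token):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CreateBugForFailedTest {
    public static void main(String[] args) throws Exception {
        // Placeholder collection URL and project; adjust for your TFS instance.
        String url = "https://tfs.example.com/DefaultCollection/MyProject"
                + "/_apis/wit/workitems/$Bug?api-version=4.1";
        String pat = System.getenv("TFS_PAT"); // personal access token
        // JSON Patch document: sets the title and repro steps of the new Bug.
        String body = "[{\"op\":\"add\",\"path\":\"/fields/System.Title\","
                + "\"value\":\"Failed test: MyTest\"},"
                + "{\"op\":\"add\",\"path\":\"/fields/Microsoft.VSTS.TCM.ReproSteps\","
                + "\"value\":\"Stack trace and failure details here\"}]";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json-patch+json")
                .header("Authorization", "Basic " + Base64.getEncoder()
                        .encodeToString((":" + pat).getBytes()))
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

Something like this would run once per failed test parsed from the build's test results.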
I would, however, recommend against it, as this creates unnecessary work items, and they may not really be bugs. Maybe a single test fails and the other 200 tests fail as a result; then we only have one bug. You will overwhelm people.
You can easily create bugs as you investigate failures using the failed test list that is part of the build results.
https://www.visualstudio.com/en-us/docs/test/continuous-testing/getting-started/getting-started-with-continuous-testing