I would like to publish different test results. I have two suites, and the last one overrides the results of the first one. Is it possible to get both on the same page below “tests and coverage”?
To answer this question: no, there isn't. You can merge the results of two different test suites, and you can also save the results elsewhere, but it's not possible to display two different results for two different test suites inside the Azure DevOps GUI.
Related
I have a manual test plan in Azure DevOps with a tree of suites that correspond to different functions in my app. Let's say it looks like this:
Now, I need one place where I can review test results from the whole test plan run for a particular build, like acceptance tests.
I guess there's no way to run multiple suites in one run; at least I didn't find such a possibility. Tests run suite by suite produce multiple test runs, which is understandable.
What I want to achieve is one link to all test results for a specific build, which I can pass on to the PM.
I have been searching for a while now and am surprised that I can't find any solutions out there for test result storage with grouping and searching capabilities.
I'd like a service or self-hosted solution that supports:
storing test results in xUnit/JUnit format, organized by keyword. In other words, I want to keep all my "test process A" test results together and all my "test process B" results together. I want to store failure traces and overall pass/fail at a minimum
getting the last run's results for a keyword: e.g. the last "auth" test results with failure details
getting run history results by keyword in some format
search of some sort on test results
I happen to have:
Cypress tests
TypeScript/Mocha tests without Cypress
custom test framework tests that will need custom reporters
but I am fine with any test results solution that supports a generic input like xUnit.
I am definitely open to suggestions that use any other storage system that can accomplish this even if it isn't strictly a test results tool.
As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), so I'd like to know if it's possible to run these tests from the CLI in parallel, as they are orthogonal.
The goal, of course, is to improve the execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures, meaning that if a test run in parallel under a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but our own custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, it would be great if we could run these scenarios/metadata filters in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal. For that, run TestCafe with the metadata of your fixtures/tests specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution, you'll need to merge the generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.
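As a rough illustration of that second approach, here is a minimal sketch using the programmatic API. It assumes your fixtures carry a hypothetical area metadata key (e.g. fixture(...).meta('area', 'search')) and that tests live under a tests/ directory; the paths, areas and browser list are placeholders, not your actual setup.

```typescript
import createTestCafe from 'testcafe';

async function run(): Promise<void> {
  const testcafe = await createTestCafe();
  const areas = ['search', 'payment']; // portal areas to run concurrently

  try {
    const runs = areas.map(area =>
      testcafe
        .createRunner()
        .src(['tests/'])
        // Keep only fixtures whose metadata matches the current area.
        .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
          fixtureMeta.area === area
        )
        .browsers(['chrome:headless', 'firefox:headless'])
        // Write one xunit report per area; merge them in a later step.
        .reporter('xunit', `reports/${area}.xml`)
        .run()
    );

    // run() resolves to the number of failed tests in that runner.
    const failedCounts = await Promise.all(runs);
    process.exitCode = failedCounts.some(count => count > 0) ? 1 : 0;
  } finally {
    await testcafe.close();
  }
}

run();
```

Each runner writes its own report file, so a final step can merge the per-area reports into one. If you need the "stop everything on the first failure" behaviour, the runner also exposes a stop() method that you could call as soon as one of the promises reports failures.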
The issue I'm running into: I have one piece of Selenium code that needs to run in different environments, one by one. In one environment (SIT) the code types a keyword and generates a list of terms; another environment (prod) does the same thing but generates a different list. I need to validate the first term that appears in the list in both SIT and prod. The code is failing because what is in SIT is different from what is in prod. Is there a generic way to run one piece of code on both environments even if they generate different results? Can you please direct me?
There are several ways to achieve that.
1. Use environment variables; imho this is one of the most appropriate ways to address environment independence.
2. Use property files holding different properties for different environments.
3. Use your execution environment's specific properties (like JVM properties in Java).
Options 1 and 3 are, imho, the most suitable for integrating your code into a CI process.
You can pass those values via a config file and read and use them in your test code.
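A minimal sketch of that idea, assuming a hypothetical TEST_ENV environment variable and placeholder URLs/terms (none of these names come from your project):

```typescript
// Per-environment expectations for the same test logic.
interface EnvConfig {
  baseUrl: string;
  expectedFirstTerm: string; // first term the search is expected to return in this environment
}

const configs: Record<string, EnvConfig> = {
  sit:  { baseUrl: 'https://sit.example.com', expectedFirstTerm: 'term-expected-in-sit' },
  prod: { baseUrl: 'https://www.example.com', expectedFirstTerm: 'term-expected-in-prod' },
};

// Select the environment at launch time, e.g. TEST_ENV=prod npm test
const env = process.env.TEST_ENV ?? 'sit';

export const config: EnvConfig = configs[env];
```

The test itself then stays generic: it types the keyword, reads the first term from the result list, and compares it against config.expectedFirstTerm, so the same code runs unchanged against SIT and prod.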
By many I mean hundreds/thousands. I need to test features that many users will need to see/hear. Obviously these users have different permission levels and some are in different programs. Can a test case be written to pull user IDs and passwords from the DB to test this way efficiently? Or is this something that is best tested manually by spot-checking different logons?
Call the DB before you run the test to get your users/passwords/whatever else.
Are you using NUnit? If so, you could use the NUnit ValueSourceAttribute to get the data into your test and use a variable for the credentials during your login step.
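If you are not on NUnit, the same data-driven idea carries over to other frameworks. Here is a minimal TypeScript/Mocha sketch, where getTestUsers() and login() are hypothetical helpers standing in for your DB query and your login step:

```typescript
import { expect } from 'chai';
// Hypothetical helpers: getTestUsers() queries the DB for credentials,
// login() performs the login step and returns what the user can see.
import { getTestUsers, login, TestUser } from './test-helpers';

describe('features visible to many user types', function () {
  let users: TestUser[] = [];

  // Pull credentials from the DB once, before any test runs.
  before(async function () {
    users = await getTestUsers({ limit: 50 }); // sample users rather than all thousands
  });

  it('shows the expected content for every sampled user', async function () {
    this.timeout(0); // many sequential logins can take a while
    for (const user of users) {
      const session = await login(user.userId, user.password);
      expect(session.canSeeDashboard, `user ${user.userId}`).to.equal(true);
    }
  });
});
```

A representative sample per permission level/program is usually enough to cover the behaviour; iterating over every one of the thousands of accounts mostly just inflates the run time.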