How to run Playwright spec files in order

Is there any way to tell Playwright to run spec files (not individual tests within a file) in a specific order? For example, I want tests to run in this order:
Login.spec.ts
profile.spec.ts

It depends on whether you need the files to run serially, each test starting only after the previous one finishes, or whether you just want the files to be started in a specific order while still running in parallel. In the parallel case you can't control anything beyond start order: each file takes a different amount of time, so individual tests from different files will finish intermingled.
For the serial option, disable parallelism by limiting workers to 1, and then either name your files alphabetically so they sort into the order you want, or create a test list file that runs them in the order you specify, as described in the Playwright docs on controlling test order.
For the parallel option, where files merely need to start in a certain order, I'd expect the same techniques from the serial approach to control the start order even when workers is greater than 1. Again, that only controls which file starts first, not individual tests, unless you have the fullyParallel option on, in which case I believe tests within a file are also started in order before the runner moves on. With one test per file, you could in theory control individual test start order the same way.
So: if you need each test to finish before the next starts, use the serial approach from that doc. If you only care about start order, not execution or finish order, use one of those same approaches with a worker limit above 1, with fullyParallel on to order individual test starts or off to order only at the file level.
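For the serial option, a minimal playwright.config.ts sketch might look like this (just the two relevant settings; merge them into your existing config):

// playwright.config.ts - sketch of the serial option: one worker and no
// in-file parallelism, so files run to completion one at a time, in
// filename (alphabetical) order.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  workers: 1,           // one file at a time
  fullyParallel: false, // tests within a file run in declaration order
});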
Hope that helps!

Follow an ordered naming convention for test files.
Example:
module_A_01
module_A_02
module_B
module_C
Note: keep in mind that '11' sorts before '2' alphabetically, so zero-pad single digits, e.g. '02'.

Is it possible to specify a gitlab runner by name?

We have multiple runners that share a tag, and these tags can't be changed because of workplace policies. So we currently have something set up like this:
#12345 (Foo)
tag: foobar
#23456 (fOo)
tag: foobar
#34567 (foO)
tag: foobar
However when we run a job using the "foobar" tag, it sometimes fails solely depending on the runner that gets chosen. I ended up running the pipeline a dozen or so times to check, and runners #12345 and #23456 always end up failing, even when the build is fine. The #34567 runner succeeds when the build is fine and fails when the build isn't. The runner documentation says I can specify the runner by name, but looking over the keyword reference documentation I'm not seeing how to specify it.
It's not possible. Runners can only be selected by tags, and runners sharing a tag should be homogeneous in terms of software versions and hardware. The first one ready to take your CI job will run it.
So, in a group sharing a single tag, one should never need to select a specific runner.
Each job can identify the runner executing it by looking at the environment variable CI_RUNNER_ID, but that is not usable for your purpose, unless you force a job failure whenever the runner is not the "good" one and retry until the job randomly lands on the runner you want. Of course, that would be a weird solution.
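For completeness, a rough sketch of that weird solution in .gitlab-ci.yml terms (the job name and script are placeholders; 34567 is the runner that behaves correctly in the question; note GitLab caps automatic retries at 2, so with several runners this can still land on the wrong one):

# Hedged sketch: fail fast on the wrong runner and let GitLab retry the job.
test_job:
  tags:
    - foobar
  retry: 2  # the maximum GitLab allows
  script:
    # CI_RUNNER_ID is a predefined variable; 34567 is the "good" runner above
    - if [ "$CI_RUNNER_ID" != "34567" ]; then echo "Wrong runner"; exit 1; fi
    - ./run-tests.sh  # placeholder for the real test command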
No. The documentation is misleading. You can only use tags to limit what runner(s) your jobs run on.
The only other way you might have around this would be to register your own runner(s) for your project/group, giving them the tags you need. Though, I doubt that's an acceptable solution for obvious reasons.
Ultimately, your GitLab administrator will need to configure your runner(s) to have an additional tag by which you can uniquely identify the runner(s) if you want to be able to have your jobs use a specific runner out of your shared runner pool.
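If the administrator does add a unique tag, the registration side might look something like this (every value is a placeholder):

# Hedged sketch: registering a runner with an extra tag that uniquely
# identifies it, alongside the shared one. URL, token and executor are
# placeholders for your environment.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REDACTED" \
  --executor "shell" \
  --tag-list "foobar,runner-34567"

Jobs that specify the runner-34567 tag would then only run on that runner.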

karate-gatling: how to force a sequential execution of all existing feature files in parallel even if one of them fails?

Currently I have a workflow.feature for a Gatling performance test that calls all existing functional tests in the given order. If one of the tests breaks, the whole workflow stops.
How can I force the execution of all steps even if one step fails?
Feature: A workflow of all functional tests to be executed for performance/loading tests.
Scenario: Test all functional scenarios in the given order.
* call read('classpath:foo1/bar1.feature')
* call read('classpath:foo2/bar2.feature')
* call read('classpath:foo3/bar3.feature')
...
* call read('classpath:fooX/barX.feature')
This is a manually managed list of calls, but maybe there is a way to grab all existing feature files from all subfolders dynamically?
If one of the tests breaks, the whole workflow stops.
If you use a Scenario Outline, it processes all rows even if one fails. So maybe:
Scenario Outline:
* call read('classpath:' + file)

Examples:
| file |
| foo/bar.feature |
| baz/ban.feature |
maybe there is a dynamic way to grab all existing feature files from all subfolders
You should be able to write Scala code to do this if you insist, but this has nothing to do with Karate. Or the above dynamic feature may give you some ideas. Hint: you can mix Java into Karate feature files very easily.
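For instance, a small Java helper along these lines could gather the paths (the root folder is an assumption, and how you feed the resulting list into your feature is up to you):

// Hedged sketch: recursively collect all .feature files under a root folder.
// The root path is an assumption; adapt it to your project layout.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FeatureFinder {
    public static List<String> findFeatures(String root) throws IOException {
        try (Stream<Path> paths = Files.walk(Paths.get(root))) {
            return paths
                .filter(p -> p.toString().endsWith(".feature"))
                .map(Path::toString)
                .collect(Collectors.toList());
        }
    }
}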
Is there a way to force the execution of a list of features in any order, so that the next feature file is executed even if the previous one failed?
See above. Also don't ask so many questions in one, keep it simple please.

Execute one feature at a time during application execution

I'm using Karate in this way: during application execution, I get the test files from another source and create feature files based on what I get.
Then I iterate over the list of tests and execute them.
My problem is that by using
CucumberRunner.parallel(getClass(), 5, resultDirectory);
I execute all the tests at every iteration, which causes tests to be executed multiple times.
Is there a way to execute one test at a time during application execution? (I'm fully aware of the empty test class with an annotation to specify one class, but that doesn't seem to help here.)
I thought about creating every feature file in a new folder so that I could point at a folder containing only one feature at a time, but CucumberRunner.parallel() accepts a Class and not a path.
Do you have any suggestions please?
You can explicitly set a single file (or even directory path) to run via the annotation:
@CucumberOptions(features = "classpath:animals/cats/cats-post.feature")
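A minimal sketch of such a runner class (the class name is made up, and the exact imports depend on your Karate/Cucumber version):

// Hedged sketch of a JUnit 4 runner pinned to a single feature file.
import com.intuit.karate.junit4.Karate;
import cucumber.api.CucumberOptions;
import org.junit.runner.RunWith;

@RunWith(Karate.class)
@CucumberOptions(features = "classpath:animals/cats/cats-post.feature")
public class CatsPostRunner {
    // no body needed - the annotations do the work
}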
I think you already are aware of the Java API which can take one file at a time, but you won't get reports.
Well, you can try this: set a system property cucumber.options with the value classpath:animals/cats/cats-post.feature and see if that works. If you add tags (search the docs), each iteration can use a different tag, which would give you the behavior you need.
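Something along these lines, where the runner class and result directory stand in for the ones in your own setup:

// Hedged sketch: pick the target feature per iteration via cucumber.options.
// MyTests and resultDirectory are placeholders for the question's own setup.
System.setProperty("cucumber.options", "classpath:animals/cats/cats-post.feature");
CucumberRunner.parallel(MyTests.class, 1, resultDirectory);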
Just got an interesting idea: why don't you generate a single feature, and in that feature make calls to all the generated feature files?
Also, how about programmatically deleting (or moving) the files after you are done with each iteration?
If all the above fails, I would try to replicate some of this code: https://github.com/intuit/karate/blob/master/karate-junit4/src/main/java/com/intuit/karate/junit4/Karate.java

Best way to execute tests on Jenkins using large files

I have a very large tar file (>1 GB) that needs to be checked out and is a precondition for executing any tests.
I cannot have a dedicated build server for my tests, since tests are going to be executed on slave machines, which are disposable.
Checking out a file that large on every run is not optimal, since the precondition would inflate test execution time. What is the best way of solving this problem?
I would dedicate a location on the slaves for that file.
Then in your tests, check if the file is in that location. If not, check it out and move it there. Since this location is outside your normal work area it won't get cleaned, and the file will stay there for the next test execution to use, and you won't need to check it out again.
Of course, if the file changes you have to invalidate that cache. A first option would be to do this manually; alternatively, you can create a hash of the file and keep that hash both in the cache and in your version control. You would then compare only the hashes, and only when they differ would you check out the file again.
Of course this requires that you have the ability to checkout all the rest of your code without the big file. How to do that obviously depends on the version control system in use.
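As a sketch of that caching logic on a slave (every path, the hash file name, and the fetch command are assumptions; substitute whatever your version control system provides):

#!/bin/sh
# Hedged sketch of the cache-and-hash check described above.
CACHE_DIR=/var/cache/test-fixtures   # dedicated location outside the workspace
mkdir -p "$CACHE_DIR"

EXPECTED=$(cat big-file.sha256)      # small hash file kept in version control
ACTUAL=$(sha256sum "$CACHE_DIR/big-file.tar" 2>/dev/null | cut -d' ' -f1)

if [ "$EXPECTED" != "$ACTUAL" ]; then
    # cache miss or stale file: fetch the big file and refresh the cache
    fetch-from-vcs big-file.tar "$CACHE_DIR/big-file.tar"  # placeholder command
fi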

TeamCity: Managing deployment dependencies for acceptance tests?

I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible way TeamCity enables.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC directly; the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can use "Limit the number of simultaneously running builds" to force only one deploy to happen at a time, what I need is one at a time and not while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' flag and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right; I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this kind of "stuff a flag over here and have this bit check it" approach is exactly the sort of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word "clobber" in your search.
Then you install groovy-plug and use the StartBuildPrecondition facility it provides:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use that to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
(This is not a fully productised solution, hence the tracker item remains open.)
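My best guess at the shape, going by the quoted description (the property names and lock name are unverified; see the open questions in the EDITs below):

# Guess at the property names, per the plugin description above (unverified).
# On the DeploySite configuration (the writer):
system.locks.writeLock.DeployedSite=
# On the AcceptanceSuiteA and AcceptanceSuiteB configurations (the readers):
system.locks.readLock.DeployedSite=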
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, or what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give builds triggered by the completion of a writeLock build read access? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?