IntelliJ IDEA. Compound doesn't work as 'Before launch' task

IntelliJ IDEA and other IDEA-based IDEs have Run/Debug Configurations that help users create templates for frequently used tasks. One of the available run configuration types is Compound, which can include multiple run configurations/tasks and run them in parallel.
To control the execution order, IDEA also has the Before launch option, which lets us define tasks or other run configurations that should run before the given one.
The problem: a Compound works great when it is not part of any execution queue. But when I define a compound as a Before launch task, the compound's tasks get executed, while the run configuration on which I defined the Before launch option never starts.
Here is a reproducible example.
Create 3 Shell Script run configurations: script_1, script_2 & script_3.
Each script should log its name to the console using the provided script text, as shown here (see the one-liner below the steps).
Combine script_1 & script_2 into a new Compound run configuration, as shown here.
Add the created compound to the Before Launch tasks of script_3.
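The script text referenced above can be as trivial as a single echo; for example, for script_1 (this exact text is an assumption, any line that identifies the script will do):
echo "script_1"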
Expected result: script_1 & script_2 are executed in parallel, and after they're done IDEA starts executing script_3.
Actual result: script_1 & script_2 are executed in parallel, and after they're done nothing happens.
I haven't found any useful information about how exactly a Compound behaves alongside other run configurations in the execution queue, so I also tried the Multirun plugin as a workaround. Its documentation states that it should be a perfect substitute for compounds in exactly this situation, although the developers also note that official functionality like Compound is still preferable. I've tried the "Runs tasks A, B before task C" case in many different combinations, and it doesn't work even with the plugin, let alone with official compounds. There is nothing special in the IDEA logs with either compounds or the Multirun plugin.
Question: am I doing something wrong? Or maybe it's an IDEA bug that should be reported?
And if compounds aren't supposed to work like this, why does IDEA offer them among the Before launch task options? Please tell me what you think.
Tested on IDEA versions 2021.2.3 & 2020.3.4

Related

Is it possible to specify a gitlab runner by name?

We have multiple runners that share a tag, and these tags can't be changed because of workplace policies. So we currently have something set up like this:
#12345 (Foo), tag: foobar
#23456 (fOo), tag: foobar
#34567 (foO), tag: foobar
However, when we run a job using the "foobar" tag, it sometimes fails depending solely on which runner gets chosen. I ended up running the pipeline a dozen or so times to check, and runners #12345 and #23456 always end up failing, even when the build is fine. The #34567 runner succeeds when the build is fine and fails when the build isn't. The runner documentation says I can specify the runner by name, but looking over the keyword reference documentation I'm not seeing how to specify it.
It's not possible. Runners can only be selected by tags, and runners sharing a tag are supposed to be homogeneous in terms of software versions and hardware. The first one that is ready to take your CI job will run it.
So one should never need to select a specific runner within a group that shares a single tag.
Each job can learn which runner is executing it by looking at the environment variable CI_RUNNER_ID, but that is not directly usable for your purpose, unless you force the job to fail when the runner is not the "good" one and retry until it is randomly picked up by the runner you want. But of course that would be a weird solution.
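Just to illustrate that weird workaround, a minimal .gitlab-ci.yml sketch could look like the following (the job name, script, and runner ID 34567 are assumptions for this example, and a retry is not guaranteed to land on a different runner):
build:
  tags:
    - foobar
  retry: 2  # re-run the job up to twice if it fails
  script:
    # fail fast when the job landed on an unwanted runner (34567 is hypothetical)
    - '[ "$CI_RUNNER_ID" = "34567" ] || { echo "Wrong runner"; exit 1; }'
    - ./build.sh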
No. The documentation is misleading. You can only use tags to limit what runner(s) your jobs run on.
The only other way around this might be to register your own runner(s) for your project/group, giving them the tags you need. Though, I doubt that's an acceptable solution for obvious reasons.
Ultimately, your GitLab administrator will need to configure your runner(s) to have an additional tag by which you can uniquely identify the runner(s) if you want to be able to have your jobs use a specific runner out of your shared runner pool.
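If the administrator does add such a tag (say, foobar-34567, an invented name here), pinning a job to that runner is then just a matter of listing the tag in .gitlab-ci.yml:
deploy:
  tags:
    - foobar-34567  # hypothetical unique tag identifying the one runner that works
  script:
    - ./deploy.sh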

TFS API - trigger test run conditionally (when new files arrive)

I'm trying to get acquainted with test automation using the Microsoft TFS API.
I've created a program which runs my test set; it uses code similar to the one described here, e.g.:
var testRun = _testPoint.Plan.CreateTestRun(false);
testRun.DateStarted = DateTime.Now;
// ...
testRun.Save();
I believe this forces the tests to start as soon as any agent can run them, instead of being delayed to a certain time. Am I wrong? Anyway, it works all right.
But I was told by my lead that the task should be started each time new input files are copied to a certain folder (on the network, I think, or perhaps in TFS).
So I'm searching for a way to trigger the tests on some condition, but so far without any luck. I'm probably missing the proper keywords.
I've only found something vaguely related here, but it seems they say there is no proper way to do it.
So are there any facilities in TFS / MTM, any ways or approaches to achieve my goal? Thanks in advance for any hints / links.
You would need to write a system service (or similar) that uses a file system watcher. Then, when the files change, you can run your code from above.
There is no built in feature in TFS to watch a folder for changes.
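As a rough sketch, a small console host around FileSystemWatcher might look like this (the folder path and the StartTestRun method are placeholders; the body of StartTestRun would be the CreateTestRun(...) / Save() code from the question):
using System;
using System.IO;
using System.Threading;

class InputFolderWatcher
{
    static void Main()
    {
        // Watch the drop folder for new input files (the path below is hypothetical).
        var watcher = new FileSystemWatcher(@"\\server\share\test-inputs");
        watcher.Created += (sender, e) =>
        {
            Console.WriteLine("New input file detected: " + e.FullPath);
            StartTestRun(); // plug in the CreateTestRun(...) / Save() code from the question here
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching for new input files; press Ctrl+C to stop.");
        Thread.Sleep(Timeout.Infinite);
    }

    static void StartTestRun()
    {
        // Placeholder for the TFS API code from the question, e.g.:
        // var testRun = _testPoint.Plan.CreateTestRun(false);
        // testRun.DateStarted = DateTime.Now;
        // testRun.Save();
    }
}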

Display history of a single test result in Jenkins - additional plugin or config issue?

Currently our Jenkins server only displays a history/graph for the overall number of passed/skipped/failed tests - I'm assuming that's the behavior out of the box.
If you select a single test, you'll get information about how long the test has been failing (assuming it did fail).
However, what we'd like to see is a history for that single test across the different builds, to identify whether the test has been failing in the past (and when) even though it just passed. If you find a build where it failed, you could click on it and investigate what might have caused the failure; if it passes again, you could check whether something actually fixed the test, or whether it was failing randomly all along.
Is this something that can be done somehow through the config, or do we need an additional plugin for this? If yes, which one?
Not sure if this makes much difference, but we're using Java (Maven) & TestNG (Surefire).
Both the TestNG plugin and the JUnit plugin will actually display history of the test results.
You just need to pick a given result and then:
For JUnit click on "History" on the left side, and
For TestNG you will see the history in the graph above the result. You can click on the bars to see the older results, and if you click closer to the edge, the scope of the test results will adjust.
The Test Results Analyzer plugin does the job for me. There appear to be other suitable plugins out there as well.
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
Does the Static Code Analysis plugin help?

IntelliJ running one test in TestNG

So my typical workflow is:
I write a data-driven test using TestNG in IntelliJ.
I supply hundreds of data items.
I run the test and one or two of them fail.
I see the list of passed/failed tests in the "Run" pane.
I would like the ability to just right-click that "instance" of the test and run it alone (with breakpoints). Currently IntelliJ does not seem to have that feature; I would have to right-click the test, and when I run it, it runs the whole set of tests with hundreds of data points.
Is this possible?
TestNG supports this at the testng.xml level, where you can specify which indices of your data provider should be used. It's called "invocation-numbers" and you can see what it looks like by running a test with a data provider, failing some of its invocation numbers and looking at the testng-failed.xml that gets generated.
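For illustration, the generated testng-failed.xml for a data-driven test typically contains something along these lines (the suite, class, and method names here are made up):
<!-- hypothetical excerpt: only invocations 3 and 17 of the data provider will be re-run -->
<suite name="Failed suite [MySuite]">
  <test name="MyTest(failed)">
    <classes>
      <class name="com.example.MyDataDrivenTest">
        <methods>
          <include name="testWithProvider" invocation-numbers="3 17"/>
        </methods>
      </class>
    </classes>
  </test>
</suite>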
Back to your question: your IDE needs to support this feature in order to make it available in the UI, so I suggest you ask on the IDEA forums.
The feature has been added as of IntelliJ 142.1217: https://youtrack.jetbrains.com/issue/IDEA-57906

TeamCity: Managing deployment dependencies for acceptance tests?

I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest manner TeamCity allows.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set Limit the number of simultaneously running builds to force only one to happen at a time, what I actually need is one at a time, and not while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' flag/marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this sort of 'stuff a flag over here and have this bit check it' approach is exactly the kind of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems the thing to do is to look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word clobber in your search.
Then you install groovy-plug and use its StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use that to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
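If I've read that correctly, the setup would then look roughly like this (the lock name DeployedSite is my invention, and as the EDIT below says I haven't verified the exact property-name format or the value required):
# on the DeploySite build configuration (the writer)
system.locks.writeLock.DeployedSite = whatever
# on AcceptanceSuiteA and AcceptanceSuiteB (the readers)
system.locks.readLock.DeployedSite = whatever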
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, or what the exact name format should be - is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers: how does one give read access to builds triggered by completion of a writeLock build - does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?