How to save a smart folder's jobs marked OK in BMC Control-M - automation

Suppose I have a smart folder X containing 5 jobs with multiple dependencies. For example, let us assume the job hierarchy looks like this (diagram omitted).
So, from the Planning tab, I order this smart folder for execution. Since I don't want to wait for Job 202 to execute, as it is a tape backup job which is not needed in the environment I am working in, I mark Job 202 as "OK" in the Monitoring tab. For Job 302, it is a prerequisite that Job 202 ends "OK".
In a similar setup, I have hundreds of jobs with similar dependencies. I have to order the folder from time to time, and have to manually mark all the jobs that are not required to run as "OK". I cannot simply remove the jobs that I need to mark OK, as they have dependencies with the other jobs I want to execute.
My question is: how can I do this once - that is, mark all unnecessary jobs as "OK" - and save this for all future instances when I am going to run the workload?

If the job you mentioned as Job 202 is not actually required for Job 302 to start, then it should be independent. Remove it from the flow and make it independent. Make these changes in Control-M Desktop and write them to the database. You will not have to make the changes daily.

For all jobs not required in the "environment" you are testing, you can check the "run as dummy" check box to convert those jobs to "dummy" jobs while maintaining the structure, relationships, and dependencies in your folder. A dummy job will not execute the command, script, etc.; rather, it will only give Control-M the instructions for the post-processing steps of the job, or, in your case, the adding of conditions to continue processing the job flow after the dummy job.
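If your site also has the Control-M Automation API ("jobs as code") available, the same dummy-job idea can be kept in a versioned JSON definition so it does not have to be reapplied by hand each time the folder is ordered. This is only a minimal sketch under that assumption - the folder name, job names, command, and run-as user below are illustrative, not taken from the question:
{
  "FolderX": {
    "Type": "Folder",
    "Job202": { "Type": "Job:Dummy" },
    "Job302": {
      "Type": "Job:Command",
      "Command": "run_job302.sh",
      "RunAs": "ctmuser"
    },
    "FlowTo302": { "Type": "Flow", "Sequence": ["Job202", "Job302"] }
  }
}
Here Job202 keeps its place in the flow, so Job302's prerequisite is still satisfied, but nothing is actually executed for it.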
(I realize this is an old question; I provided a response should it be helpful to anyone that finds this thread after me)

Related

Bamboo Workspace Handling

I'm not really sure if I've understood Bamboo's workspace handling correctly...
We have the following situation:
stage1:
job1: scm checkout
stage2:
job2: build1
job3: build2
stage3:
job4: build3
job5: build4
The repository's size is about 1.5 GB. Therefore, after every build, I want to delete everything except the build artefacts on my agent. But if I delete something on my agent in stage 2, my jobs in stage 3 only get the "cleaned" workspace. Is this default behaviour? And if so, how can I prevent my agent from getting "dumped"?
x jobs = x * 1.5 GB ...
If you want to delete the working dir, go to Job configuration / Miscellaneous tab and check Clean working directory after each build.
I haven't found this option in the documentation, but it is there anyway (Bamboo 6.0.3).
Also note that if you have more than one agent, jobs might run on any of them (concurrently). So job1 (checkout) could theoretically run on a different machine than the rest. You can solve that with Tasks, which always run inside one Job, and thus on one agent.
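If you happen to manage your plans with Bamboo Java Specs (available since Bamboo 6.0), the same checkbox can be set in code. A minimal sketch, where the job name, key, and script task are illustrative assumptions:
import com.atlassian.bamboo.specs.api.builders.plan.Job;
import com.atlassian.bamboo.specs.builders.task.ScriptTask;

public class CleanWorkingDirExample {
    // Illustrative job definition; name, key, and build script are assumptions.
    static Job build1Job() {
        return new Job("build1", "BUILD1")
                // Same effect as the Miscellaneous tab's
                // "Clean working directory after each build" checkbox.
                .cleanWorkingDirectory(true)
                .tasks(new ScriptTask().inlineBody("./build1.sh"));
    }
}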

How to record actions with timings?

Macros are useless to me: they don't record timings. If I do action A and then action B after 10 seconds, IntelliJ (ver. 15) simply plays all of them simultaneously. A frustratingly useless tease of a feature which could be so useful. Is there an alternative?
The scenario is simple:
In an embedded application with Tomcat (Spring Boot), we execute 'gulp build' (a frontend build tool), which builds files into the 'resources/static' folder; since the application is a .jar, we need to reload the changed resources via HotSwap.
---> 'gulp build' ---------- wait 10 seconds ------------> alt+u; alt+a; ('Reload changed classes')
Macros don't record timings because they're intended for automating editor operations, and users usually expect these operations to be performed as quickly as possible, and not to sit through the delays that they originally made while they were recording the action.
The best way to accomplish your task would be to write a small plugin that would run the 'gulp build' process, wait for it to complete (rather than waiting a hard-coded number of seconds), and then reload the changed classes.
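To illustrate just the waiting part of that idea (this is not actual IntelliJ plugin code - a real plugin would trigger the reload through the platform's action system - and the project path is an assumption), here is a plain-Java sketch that runs gulp and blocks until it finishes:
import java.io.File;
import java.io.IOException;

public class GulpBuildRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Run 'gulp build' in the project directory (path is illustrative)
        // and inherit its console output.
        Process gulp = new ProcessBuilder("gulp", "build")
                .directory(new File("/path/to/project"))
                .inheritIO()
                .start();

        // Block until the build actually completes, instead of sleeping
        // for a hard-coded 10 seconds.
        int exitCode = gulp.waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("gulp build failed with exit code " + exitCode);
        }

        // A real plugin would now invoke the IDE's 'Reload changed classes'
        // action; here we simply signal that it is safe to press alt+u, alt+a.
        System.out.println("gulp build finished - reload changed classes now.");
    }
}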

How to schedule Pentaho Kettle transformations?

I've set up four transformations in Kettle. Now, I would like to schedule them so that they run daily at a certain time, one after another. For example,
transformation1 -> transformation2 -> transformation3 -> transformation4
should run daily at 8:00 am. How can I do that?
There are basically two ways of scheduling jobs in PDI.
1. You can use the command line (as correctly written by Anders):
for transformation scheduling:
<pentaho-installation directory>/pan.sh -file:"your-transformation.ktr"
for job scheduling:
<pentaho-installation directory>/kitchen.sh -file:"your-job.kjb"
2. You can also use the built-in scheduler in Pentaho Spoon.
If you are using the EE version of PDI, you will have a built-in scheduler in Spoon itself. It is a UI which you can use to easily schedule jobs. You can also read this section of the docs for more.
You can execute a transformation from the command line using the tool Pan:
Pan.bat /file:transform.ktr /param:name=value
The syntax might be different depending on your system - check out the link above for more information. When you have a batch file executing your transformation, you can schedule it to run using any scheduling tool on whatever system you are running.
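For instance, on a Unix-like system you could wrap the four transformations in a single job and let cron start it daily at 8:00 am. A sketch, where the installation path, job file, and log location are illustrative:
# crontab entry: minute hour day-of-month month day-of-week command
0 8 * * * /opt/pentaho/data-integration/kitchen.sh -file:"/opt/etl/all_transformations.kjb" >> /var/log/etl.log 2>&1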
Also, you could put all the transformation in a job and execute that from the command line with Kitchen.
I'd like to add another answer that many first-time Spoon users miss. Let's say you have a transformation exampleTrafo.ktr that you want to run at a certain interval. Then what you could do is create a job exampleJob.kjb which merely runs the transformation. If you do so, the job will consist of just a START entry connected to a Transformation entry that points at exampleTrafo.ktr.
The START node here is the important thing: right-click on it and choose Edit..., and you'll be presented with a job scheduling window where you can specify your desired job schedule. Then save and run this job (either locally, or eventually remotely on a slave using PDI's Carte server). Basically, what you will end up with is an indefinitely running job called exampleJob that will execute your exampleTrafo at the desired intervals.
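Such a wrapper job can also be started outside of Spoon with the same Kitchen syntax shown earlier (the installation path is a placeholder):
<pentaho-installation directory>/kitchen.sh -file:"exampleJob.kjb"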

Separating building and testing jobs in Jenkins

I have a build job which takes a parameter (say, which branch to build) that, when it completes, triggers a testing job (actually several jobs) which does some stuff like downloading a bunch of test data and checking that the new version works with the test data.
My problem is that I can't seem to figure out a way to show the test results in a sensible way. If I just use one testing job, then the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want; and if I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say, 6 branches and 6 different types of testing mean I need 36 testing jobs, and then when I want to make a change, say to save more builds, I need to update all 36 by hand).
I've been looking at the Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs be created/updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is it just that separating the building and testing jobs like this is not recommended, or is there some other method to allow the filtering of test results for a job based on build parameters that I haven't found yet?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
So in particular, I wouldn't look into all the Jenkins job results for overviews, but create a web page or something like that.
I wouldn't expect the full matrix of builds/tests, and the combinations that are used will become clear from the use cases.
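That said, if the template-driven route from the question is pursued, the Job DSL plugin is one way to keep a single template and generate the per-branch test jobs from a seed job, so a change to the template regenerates all 36 jobs at once. A minimal sketch, where the branch names, test types, and shell command are illustrative assumptions:
// Seed-job Job DSL script: one generated job per branch/test-type pair.
def branches = ['stable', 'dodgy-future-branch']   // illustrative
def testTypes = ['integration', 'load']            // illustrative

branches.each { branch ->
    testTypes.each { testType ->
        job("test-${testType}-${branch}") {
            description("Runs ${testType} tests against the ${branch} build")
            steps {
                shell("./run-tests.sh --type ${testType} --branch ${branch}")  // illustrative
            }
            logRotator {
                numToKeep(10)  // change once here, regenerate all jobs
            }
        }
    }
}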

TeamCity: Managing deployment dependencies for acceptance tests?

I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible manner enabled by TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
the website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite, which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set "Limit the number of simultaneously running builds" to force only one to happen at a time, I need to have only one at a time, and none while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this sort of 'set a flag over here and have this bit check it' arrangement is exactly the kind of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word 'clobber' in your search.
Then you install the groovy-plug plugin and use its StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use these locks to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
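For illustration only (see the EDIT below - the exact property format is unverified), the build configuration properties might end up looking like this, with DeploySiteLock being an arbitrary lock name of our own choosing:
# In DeploySite (the writer):
system.locks.writeLock.DeploySiteLock=true
# In AcceptanceSuiteA and AcceptanceSuiteB (the readers):
system.locks.readLock.DeploySiteLock=true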
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, and what the exact name format should be - is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give read access to builds triggered by the completion of a writeLock task - does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?