How to prevent execution of a waf task if nothing has changed since the last successful execution? - msbuild

I have a waf task that runs msbuild in order to build a project, but I want to run it only if the last execution was not successful.
How should I do this?

Store MS_SUCC = 1 in your build.env and retrieve the value from the previous build (on the first run you naturally have to check whether the dict item MS_SUCC exists).
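Something like this minimal wscript sketch illustrates the idea; the state file name msbuild_state.txt, the solution name MyProject.sln and the task name are placeholders, and persisting the flag in a small ConfigSet file is just one way to make sure it survives between runs (you can also keep it on bld.env as suggested above):
from waflib.ConfigSet import ConfigSet

def build(bld):
    # load the flag saved by the previous build, if any
    state = ConfigSet()
    cache = bld.bldnode.make_node('msbuild_state.txt').abspath()
    try:
        state.load(cache)
    except EnvironmentError:
        pass  # first build: MS_SUCC does not exist yet
    if state.MS_SUCC == 1:
        return  # the last msbuild run succeeded, so skip it this time

    def run_msbuild(task):
        ret = task.exec_command('msbuild MyProject.sln')
        if ret == 0:
            # remember the success so the next build can skip this step
            state.MS_SUCC = 1
            state.store(cache)
        return ret

    bld(rule=run_msbuild, always=True, name='msbuild_step')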

Related

How to set the SSIS package status to failure when Propagate was set to false for a Sequence Container

I have an SSIS package with a For Each Loop > Sequence Container. The Sequence Container tries to read a file from the For Each Loop and process its data. The requirement was to not fail the entire package when an exception happened while processing a file, but to continue processing the next file until all the files from the For Each Loop were processed. For this, I set the Propagate variable for the Sequence Container to False. I also added an email step on the OnError event of the Sequence Container. The package runs as expected and is able to process all files even when an exception happens with one of them. But I would like the final status of my SSIS package to be Failure, since one of the files failed. How can I achieve that?
Did you try these options?
(The SSIS version shown is in Russian on the left side, but it's the Sequence Container.)
View -> Properties Window -> then click on your Sequence Container and it will show you the properties of the Sequence Container.
If I were you, first of all I would try the property "FailPackageOnFailure" - it should cover your question if I understand it correctly.
P.S. You can also see all the properties of the package when you click on a free area of the design surface.
UPDATED (after comments and a clearer understanding of the task):
The idea is to set the MaximumErrorCount parameter for the Sequence Container as high as you want. In that case the package won't stop because one of the files failed inside the Sequence Container, and the next file will still be processed, but the package should fail after the Sequence Container finishes its work, because you don't change MaximumErrorCount for the package itself.
Important: a value of zero sets the error count threshold to infinity, and the package or task never gets a Failure status.

JitterBit Run Only One Instance of an Operation at a Time

I ran into an issue where I had long running JitterBit operations that were scheduled. I had them scheduled close together, since I needed to keep data flowing. But, when they would take longer than expected I would wind up with multiple instances of the operation set running at the same time. This was killing my performance.
I'll put the fix in the answer below.
To resolve this issue I added an additional Script Operation at the beginning of my operation set (with the schedule running on this operation). This script simply checks whether one of the operations in this set is already running. If not, it starts the next operation. If anything is running, it exits and waits for the next scheduled instance.
This is a sample of my script. This one assumes that there were originally two operations in this operation set.
<trans>
// Look up the queue entries for the operations in this set
$isInQueue=GetOperationQueue("<TAG>Operations/OperationToCheck01</TAG>");
$isInQueue2=GetOperationQueue("<TAG>Operations/OperationToCheck02</TAG>");
// The second field of the first queue entry indicates whether it is running
$isRunning=$isInQueue[0][1];
$isRunning2=$isInQueue2[0][1];
// If either operation is already running, just log and exit;
// otherwise start the first operation of the chain
if(($isRunning==1 && $isRunning!=Null()) || ($isRunning2==1 && $isRunning2!=Null()),
WriteToOperationLog("Skip for now: "+$isRunning+" / "+$isRunning2);,
WriteToOperationLog("Nothing is Running - Starting Operation Chain.");
RunOperation("<TAG>Operations/OperationToCheck01</TAG>");
);
</trans>

How to run multiple coordinators in an Oozie bundle

I'm new to Oozie bundles. I want to run multiple coordinators one after another in a bundle job. My requirement is that after one coordinator job completes, a _SUCCESS file will be generated, and that _SUCCESS file should then trigger the second coordinator. I don't know how to do that. For that I used the data-dependency technique, which keeps track of the output files generated by the previous coordinator. I'm sharing some code I tried.
Let's say there are two coordinator jobs, A and B. I want to trigger only coordinator A, and only when the _SUCCESS file for coordinator A is generated should coordinator B start.
A - coordinator.xml
<workflow>
<app-path>${aDir}/aWorkflow</app-path>
</workflow>
This will call the respective workflow, and the _SUCCESS file is generated at the ${aDir}/aWorkflow/final_data/${date}/aDim location, so I included this location in
coordinator B:
<dataset name="input1" frequency="${freq}" initial-instance="${START_TIME1}" timezone="UTC">
<uri-template>${aDir}/aWorkflow/final_data/${date}/aDim</uri-template>
</dataset>
<done-flag>_SUCCESS</done-flag>
<data-in name="coordInput1" dataset="input1">
<instance>${START_TIME1}</instance>
</data-in>
<workflow>
<app-path>${bDir}/bWorkflow</app-path>
</workflow>
But when I run it, the first coordinator itself gets KILLED, whereas if I run them individually they run successfully. I don't understand why they all get KILLED.
Please help me sort this out.
I found an easy way to do this. I'm sharing the solution; the coordinator.xml for coordinator B is below.
1) The dataset instance should be the start time of the second coordinator, not the time instance of the first coordinator; otherwise that coordinator will get KILLED.
2) If you want to run multiple coordinators one after another, you can also include controls in coordinator.xml, e.g. concurrency, timeout or throttle. Detailed information about these controls can be found in chapter 6 of the "Apache Oozie" book.
3) In <instance> I included latest(0); it will take the latest generated folder from the mentioned output path.
4) For "input-events" it is mandatory to pass the data-in name as the input to ${coord:dataIn('coordInput1')}; otherwise Oozie will not consider the dataset.
<controls>
<timeout>30</timeout>
<concurrency>1</concurrency>
</controls>
<datasets>
<dataset name="input1" frequency="${freq}" initial-instance="${START_TIME1}" timezone="UTC">
<uri-template>${aimDir}/aDimWorkflow/final_data/${date}/aDim</uri-template>
<done-flag>_SUCCESS</done-flag>
</dataset>
</datasets>
<input-events>
<data-in name="coordInput1" dataset="input1">
<instance>${coord:latest(0)}</instance>
</data-in>
</input-events>
<workflow>
<app-path>${bDir}/bWorkflow</app-path>
<configuration>
<property>
<name>input_files</name>
<value>${coord:dataIn('coordInput1')}</value>
</property>
</configuration>
</workflow>

How to give a slave/node as a dynamic parameter in Hudson?

I have a list of jobs (say 20) in Hudson, which are run in sequence (Job1, 2, 3, ..., 20) and which are parameterized (parameters given for Job1 are passed to the other jobs).
All the jobs run on a node, say 'A'. Now if I want to run the same 20 jobs next time on server 'B', I have to go to each job's configuration matrix and change the node from 'A' to 'B'. Since I have 20 jobs, I have to do the tedious job of changing the node 20 times. Is there a way to give the node as a parameter when starting Job1, so that I don't have to put in a lot of effort every time?
There is a plugin, the NodeLabel Parameter Plugin (https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin), which allows you to use the node as a parameter.
And in the first job you can use the post-build action "Trigger parameterized build on other projects" and then pass the node parameter to the next job.
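If it helps, here is a rough Python sketch of starting the chain remotely, assuming Job1 defines a node parameter named NODE via that plugin and that your Hudson/Jenkins exposes the usual buildWithParameters endpoint; the server URL, job name and credentials are placeholders:
import requests

HUDSON_URL = "http://hudson.example.com"  # placeholder server URL

def start_chain(node_name, user, api_token):
    # Job1 receives NODE; its post-build "Trigger parameterized build"
    # action then forwards the same parameter to Job2 ... Job20.
    resp = requests.post(
        HUDSON_URL + "/job/Job1/buildWithParameters",
        params={"NODE": node_name},
        auth=(user, api_token),
    )
    resp.raise_for_status()

start_chain("B", "myuser", "my-api-token")  # run the whole chain on node 'B'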

Autosys: Concept of Kick Start Attribute and how to use

I have a daily (09:00 AM) box containing 10 jobs. All child jobs are scheduled to run sequentially.
On Monday, jobs 1, 2 and 3 completed and job4 failed. Because of this, the downstream jobs are stalled and the box keeps running indefinitely (until some action is taken manually).
But the requirement is to run this box again on Tuesday at 09:00 AM. I heard of a kick_start attribute to kick off the box at the next scheduled time irrespective of the last run's status.
Can someone tell me about this kick_start attribute? Also, please suggest any other way to schedule this box daily.
TIA
I've never heard of the kick_start attribute and could not find it in the R11.3.5 reference guide.
I would look at box_terminator: y, which will fail the box if a job in it fails, and job_terminator: y, which will terminate and fail a job if the box it is in fails.
box_criteria is another attribute that may help, as you can define what success or failure looks like. For example, if you don't care whether job4 fails, define box_criteria: s(job3).
Of course, that only sets your box to FA (failure), where it will run the next time its starting conditions are met. It does nothing to run the downstream jobs for the current run.
Have fun and test, test, test.