How to give a slave/node as a dynamic parameter in Hudson? - automation

I have a list of jobs (say 20) in Hudson, which run in sequence (Job1, 2, 3, ..., 20) and which are parameterized (parameters given to Job1 are passed on to the other jobs).
All the jobs run on one node, say 'A'. Now if I want to run the same 20 jobs next time on server 'B', I have to go to each job's configuration and change the node from 'A' to 'B'. Since I have 20 jobs, I have to make this tedious change 20 times. Is there a way to give the node as a parameter when starting Job1, so that I don't have to put in all that effort every time?

There is a plugin, the NodeLabel Parameter Plugin (https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin), which allows you to use the node as a parameter.
In the first job you can then use the post-build action "Trigger parameterized build on other projects" and pass the node parameter on to the next job.
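As an illustrative sketch (the parameter name NODE and the job names are hypothetical, and the exact UI labels may vary by plugin version), the configuration would look roughly like this:

Job1:
    This build is parameterized
        Node parameter: NODE          (added by the NodeLabel Parameter Plugin)
    Post-build action: Trigger parameterized build on other projects
        Projects to build: Job2
        Add parameter -> NodeLabel parameter
            Name: NODE
            Node: $NODE               (forwards the node chosen for Job1)

Repeating the same trigger/parameter pair down the chain lets a single node choice at Job1 drive all 20 jobs.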

Related

How to set the SSIS package status to Failure when Propagate was set to False for a Sequence Container

I have an SSIS package with a For Each Loop > Sequence Container. The Sequence Container reads a file from the For Each Loop and processes its data. The requirement was not to fail the entire package when an exception happened while processing a file, but to continue processing the next file until all the files from the For Each Loop were processed. For this, I set the Propagate variable of the Sequence Container's OnError event handler to False. I also added an email step on the OnError event of the Sequence Container. The package runs as expected and processes all files even when an exception happens with one of them. But I would like the final status of my SSIS package to be Failed, since one of the files failed. How can I achieve that?
Did you try these options?
Go to View -> Properties Window, then click on your Sequence Container and it will show you the properties of the Sequence Container.
If I were you, I would first try the property FailPackageOnFailure - it should cover your question, if I understand it correctly.
P.S. You can also see the properties of the whole package when you click on a free spot in your project.
UPDATED (after comments and a clearer understanding of the task):
The idea is: set the MaximumErrorCount property of the Sequence Container as high as you need; then the package won't stop just because one of the files failed inside the container, and the next file will be processed, but the package should still fail after the Sequence Container finishes its work, because you leave MaximumErrorCount unchanged at the package level.
Important: a value of zero sets the error count threshold to infinity, and the package or task then never gets a Failure status.
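As a purely illustrative summary of the settings described above (the container name is hypothetical and the numbers are examples, not recommendations):

Sequence Container "SEQ Process Files":
    Propagate (OnError event handler system variable) = False
    MaximumErrorCount = 100   -- high enough that failed files never stop the container
Package:
    MaximumErrorCount = 1     -- left at the default, so the package still ends up Failed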

Autosys: Concept of the kick_start attribute and how to use it

I have a daily (09:00 AM) box containing 10 jobs. All child jobs are scheduled to run sequentially.
On Monday, jobs 1, 2 and 3 completed and job 4 failed. Because of this, the downstream is stalled and the box keeps running indefinitely (until some action is taken manually).
But the requirement is to run this box again on Tuesday at 09:00 AM. I heard of a kick_start attribute to kick off the box at the next scheduled time irrespective of the last run's status.
Can someone explain this kick_start attribute? Also, please suggest any other way to schedule this box daily.
TIA
I have never heard of the kick_start attribute and could not find it in the R11.3.5 reference guide.
I would look at box_terminator: y, which will fail the box if a job in it fails, and job_terminator: y, which will terminate and fail a job if the box it is in fails.
box_criteria is another attribute that may help, as you can define what success or failure looks like. For example, if you don't care whether job4 fails, define box_criteria: s(job3).
Of course, that only sets your box to FA, where it will run the next time its starting conditions are met. It does nothing to run the downstream jobs for the current run.
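A minimal JIL sketch of those attributes (the box name, job name and command are hypothetical):

/* fail the whole box as soon as any job inside it fails */
insert_job: daily_box    job_type: box
date_conditions: 1
days_of_week: all
start_times: "09:00"
box_terminator: y

/* kill and fail this job if its box fails */
insert_job: job4    job_type: cmd
box_name: daily_box
command: /path/to/step4.sh
job_terminator: y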
Have fun and test, test, test.

How to prevent execution of a waf task if nothing changed since the last successful execution?

I have a waf task that runs msbuild in order to build a project, but I want to run it only if the last execution was not successful.
How should I do this?
Store a flag like bld.env.MS_SUCC = 1 in your build environment and retrieve the value from the previous build (the first time you naturally have to check whether the dict item MS_SUCC exists).
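A minimal sketch of that idea in a wscript, using a marker file rather than the env dict, since values assigned to bld.env during a build may not persist across runs depending on the waf version (the flag file name, solution name, and task name are all illustrative assumptions):

import os

def build(bld):
    # marker written by the previous successful run
    flag = bld.bldnode.make_node('.msbuild_ok')

    def run_msbuild(task):
        ret = task.exec_command('msbuild MyProject.sln')
        if ret == 0:
            flag.write('ok')    # remember the success for the next run
        return ret

    # re-run msbuild only if the last execution did not succeed
    if not os.path.exists(flag.abspath()):
        bld(rule=run_msbuild, always=True, name='msbuild')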

SSIS Execute Package Task - retry on failure of child package

I have an SSIS package which calls a number of Execute Package Tasks that perform ETL operations.
Is there a way to configure the Execute Package Tasks so that they retry a defined number of times? Currently, on the failure of one of the tasks in the child package, the Execute Package Task fails; when this happens, I would like the task to be retried before giving up and failing the parent package.
One solution I know of is to set a flag for each package in the database, set it to a defined value on success, and call each package in a For Loop Container until the flag indicates success or the count exceeds a predefined retry count.
Is there a cleaner or more generic way to do this?
Yes, put the Execute Package Task in a For Loop Container. Define a variable that will do the counting (Counter), one as a success indicator (SuccessfulRun, initialized to 0) and a MAX_COUNT constant. In the properties of the Execute Package Task, under Expressions, set
FailPackageOnFailure - False
After the Execute Package Task, put a Script Task with ReadWriteVariables: SuccessfulRun and the script:
Dts.Variables["SuccessfulRun"].Value = 1;
In the properties of the For Loop:
InitExpression - @Counter = 0
EvalExpression - @Counter < @MAX_COUNT && @SuccessfulRun == 0
AssignExpression - @Counter = @Counter + 1
Connect the Execute Package Task with the Script Task using the Success precedence constraint.
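For reference, a minimal Script Task body for that step might look like this (SSIS C#; User::SuccessfulRun must be listed under ReadWriteVariables):

public void Main()
{
    // Reached only through the Success constraint from the Execute Package Task,
    // so this iteration is known to have succeeded.
    Dts.Variables["User::SuccessfulRun"].Value = 1;
    Dts.TaskResult = (int)ScriptResults.Success;
}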
OR
In the For Loop Container, define the expression
MaximumErrorCount - @MAX_COUNT
But this one hasn't been tested by me yet...

Celery task schedule (Celery, Django and RabbitMQ)

I want to have a task that executes every 5 minutes, but waits for the last execution to finish and only then starts counting those 5 minutes. (This way I can also be sure that only one instance of the task is running.) The easiest way I found is to open a Django shell with manage.py shell and run this:
from time import sleep
from myapp.tasks import task   # import path is illustrative

while True:
    result = task.delay()
    result.wait()              # block until the task finishes
    sleep(5)                   # then wait before starting the next run
but for each task that I want to execute this way I have to run its own shell. Is there an easier way to do it? Maybe some kind of custom django-celery scheduler?
Wow, it's amazing how no one understands this person's question. They are asking not about running tasks periodically, but about how to ensure that Celery does not run two instances of the same task simultaneously. I don't think there's a way to do this with Celery directly, but what you can do is have the task acquire a lock right when it begins, and if it fails, retry again in a few seconds (using retry). The task would release the lock right before it returns; you can make the lock auto-expire after a few minutes in case it ever crashes or times out.
For the lock you can probably just use your database or something like Redis.
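A minimal sketch of that locking pattern with Django's cache, whose add() is atomic for backends like memcached or Redis (the lock key, expiry, retry delay, and do_work function are illustrative assumptions):

from celery.decorators import task
from django.core.cache import cache

LOCK_ID = "my-singleton-task-lock"   # hypothetical lock key
LOCK_EXPIRE = 60 * 5                 # auto-expire so a crashed worker cannot hold it forever

@task(max_retries=None)
def my_singleton_task():
    # cache.add is atomic: it sets the key only if it does not already exist
    if not cache.add(LOCK_ID, "locked", LOCK_EXPIRE):
        # another instance holds the lock; try again in a few seconds
        my_singleton_task.retry(countdown=10)
    try:
        do_work()                    # hypothetical: the actual task body
    finally:
        cache.delete(LOCK_ID)        # release the lock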
You may be interested in this simpler method that requires no changes to the celery conf:
import datetime
from celery.decorators import periodic_task

@periodic_task(run_every=datetime.timedelta(minutes=5))
def my_task():
    pass  # insert fun-stuff here
All you need to do is specify in the celery conf which task you want to run periodically and with which interval.
Example: Run the tasks.add task every 30 seconds
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "runs-every-30-seconds": {
        "task": "tasks.add",
        "schedule": timedelta(seconds=30),
        "args": (16, 16)
    },
}
Remember that you have to run celery in beat mode, with the -B option:
manage.py celeryd -B
You can also use a crontab style schedule instead of a time interval; check out this:
http://ask.github.com/celery/userguide/periodic-tasks.html
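For example, a crontab-based entry would look like this (the schedule values are illustrative; crontab comes from celery.schedules):

from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    "every-morning": {
        "task": "tasks.add",
        "schedule": crontab(hour=7, minute=30),
        "args": (16, 16)
    },
}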
If you are using django-celery, remember that you can also use the Django DB as the scheduler for periodic tasks; that way you can easily add new periodic tasks through the django-celery admin panel.
To do that you need to set the celerybeat scheduler in settings.py like this:
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
To expand on @MauroRocco's post, from http://docs.celeryproject.org/en/v2.2.4/userguide/periodic-tasks.html
Using a timedelta for the schedule means the task will be executed 30 seconds after celerybeat starts, and then every 30 seconds after the last run. A crontab like schedule also exists, see the section on Crontab schedules.
So this will indeed achieve the goal you want.
Because celery.decorators is deprecated, you can use the periodic_task decorator like this:
from celery.task.base import periodic_task
from datetime import timedelta

@periodic_task(run_every=timedelta(seconds=5))
def my_background_process():
    pass  # insert code
Add that task to a separate queue, and then use a separate worker for that queue with the concurrency option set to 1.
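A sketch of that setup in the old django-celery style used elsewhere in this thread (the queue name and task path are illustrative):

# settings.py: route the task to its own queue
CELERY_ROUTES = {"tasks.my_background_process": {"queue": "singleton"}}

Then start a dedicated worker that consumes only that queue, one task at a time:

python manage.py celeryd -Q singleton -c 1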