Is it possible to restart a failed job in TWS on z/OS using WAPL, either from a particular step or as the entire job?
I am trying to automate the restart from Jenkins using WAPL and was unable to find the right syntax.
Thanks
In TFS, I am using the SSH task with the 'Commands' option to connect to a remote machine and run a few commands. I cd to a particular folder and run a shell script using 'sh '.
This script usually takes around 2 hours to finish execution. The SSH task times out after 15 minutes and exits. But when I check the machine manually, the process is still running.
Why doesn't the SSH task wait until the script finishes completely?
According to your description, you may have encountered the timeout limit of the SSH task or of the build definition.
First, please double-check the timeout setting under Control Options:
Specifies the maximum time, in minutes, that a task is allowed to
execute before being cancelled by server. A zero value indicates an
infinite timeout.
Another place to check is the build job timeout, under the settings of your build definition: Options -> Build job timeout in minutes.
Specifies the maximum time a build job is allowed to execute on an
agent before being canceled by the server.
An empty or zero value indicates an infinite timeout.
If both are set properly and you still hit the timeout, please attach a more detailed log of the failed build, captured in verbose debug mode by setting system.debug=true, for troubleshooting.
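If raising the limits is not an option, another approach (my assumption, not part of the SSH task itself; the script name and paths are placeholders) is to detach the long-running script so the task returns immediately while the script keeps running on the remote machine:

```
# Hypothetical SSH task commands: start the script detached so the
# session can close; output goes to a log file for later inspection.
cd /opt/deploy/scripts
nohup sh long_job.sh > long_job.log 2>&1 &
echo "long_job.sh started with PID $!"
```

The trade-off is that the build result no longer reflects the script's success or failure; you would need a separate step to poll for the outcome.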
In the Best practices for running Docker guide it's stated that only one process should run per Docker container. On Ubuntu there are some cron jobs related to apache-httpd which run daily (located in /etc/cron.daily/apache2).
When using the Apache Docker image from the official repository (see here), those cron jobs are not run; only the httpd process is started, and cron is not running.
Shouldn't the cron jobs mentioned above be executed?
I have a hard time figuring out how one could execute these cron jobs from another Docker image, as suggested in the best-practices guide, since the 'cron' image would need access to the Apache process in order to run the cron jobs correctly.
For basic apache there are no cron jobs to run.
If you have cron jobs to run there is no "right answer".
If they run daily and only run for a certain amount of time, you could certainly just schedule those to run instead of using cron.
If they run more frequently, or you don't have a scheduler that can handle that (like AWS Lambda), then it's not against best practices to have your web server run them via cron; you would just have to build your own container off of Apache's to handle it.
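A minimal sketch of that last option, assuming the Debian-based official httpd:2.4 image and a hypothetical daily script of yours called apache-maintenance (none of this comes from the official image; adjust to your setup):

```
# Sketch only: extends the official httpd image with cron.
FROM httpd:2.4
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*
# hypothetical maintenance script you ship next to the Dockerfile
COPY apache-maintenance /etc/cron.daily/apache-maintenance
RUN chmod +x /etc/cron.daily/apache-maintenance
# start cron in the background, then keep httpd in the foreground as PID 1
CMD cron && exec httpd-foreground
```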
If your real question is "How do I run cron jobs in Docker?", a quick Google search turned up:
https://github.com/aptible/docker-cron-example
https://hub.docker.com/r/hamiltont/docker-cron/
https://getcarina.com/docs/tutorials/schedule-tasks-cron/
You would just modify those to run in the background with & or nohup.
What have you tried?
I am exploring the Big Data plugin in Pentaho 5.2. I was trying to run the Pig Script Executor, and I am unable to understand the usage of Enable Blocking. The PDI documentation says that
If checked, the Pig Script Executor job entry will prevent downstream
entries from executing until the script has finished processing.
I am aware that running a Pig script converts the execution into MapReduce jobs. I am running the job as Start job -> Pig Script. If I disable the Enable Blocking option I am unable to execute the script; I get permission denied errors, even though the documentation quoted above only describes blocking as delaying downstream entries.
What does downstream mean here? I do not pass any hops out of the Pig Script entry. I am unable to understand the Enable Blocking option. Any hints would be helpful and appreciated.
Enable Blocking enabled: the task is deployed to the Hadoop cluster; PDI follows up on its progress and only proceeds with the rest of the job entries AFTER the execution of the Hadoop job finishes.
Enable Blocking disabled: PDI deploys the task to the Hadoop cluster and forgets about it. The rest of the job entries proceed immediately after the cluster accepts the task; PDI does not wait for it to complete.
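A rough shell analogy (just an illustration of the semantics, not how PDI invokes Pig internally; script.pig is a placeholder):

```
# "Enable Blocking" checked: wait for the Pig job, then continue.
pig -f script.pig && echo "downstream entries run here"

# "Enable Blocking" unchecked: fire and forget.
pig -f script.pig &
echo "downstream entries run immediately"
```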
I am using Pentaho 5.2 Community Edition for my production environment and am aware that there is no restartability (checkpoints) in the Community Edition. How do I set up restartability in Pentaho Community Edition? Any references or links would be very useful.
There is no such feature in the CE edition.
The idea of EE restartability is to have a separate database table (like the log tables that exist in the CE edition) and to track the failure/success of job entries based on those records. The gain is the ability to automatically restart failed job entries and to show execution results over time.
For example, one can monitor the job's execution status code via the console and restart the job from the console. In this case the whole job is restarted.
With EE checkpoints and restartability, the job is restarted from the failed entry.
So if your jobs usually contain only one or two entries, if the running time on restart is not critical, or if fail-handling is implemented some other way, you may not need this feature at all.
Once again: restartability only restarts failed job entries. If a failed job entry left the DB inconsistent, restarting that entry should fix it. But if job entries rely on some initial outside state that changed during the job (for example, some files were deleted), a restart will only rerun the job; it will not recover something that is unrecoverably broken.
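As a hedged sketch of the console-based approach mentioned above (the paths, job file, and retry policy are my assumptions, not EE behaviour), you can rely on kitchen.sh's exit code to restart the whole job in CE:

```
# Restart the whole job on failure; kitchen.sh exits non-zero on error.
# CE has no checkpoints, so every retry reruns the job from the start.
until /opt/pentaho/data-integration/kitchen.sh -file=/jobs/nightly.kjb; do
    echo "job failed, restarting the whole job (no checkpoints in CE)"
    sleep 60
done
```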
I have the following requirements:
Execute a Bamboo job from Rundeck. (I found plugins to execute a Rundeck job from Bamboo; I need the reverse.)
Call the jobs created in Bamboo from the command prompt. (I am thinking of executing the jobs using the command prompt in Rundeck.)
Please suggest any alternatives for the above tasks. The ultimate goal is to get the Bamboo jobs kicked off from Rundeck.
I would suggest using the REST API provided by Atlassian. Documentation can be found here and, more specific to your use case, here.
After you've got the correct API call(s) to trigger your Bamboo job, just add that as a curl step at the bottom of your Rundeck job and it should do what you need.
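For illustration, a curl step along these lines should queue a plan build (the host, credentials, and PROJ-PLAN plan key are placeholders; verify the exact endpoint against the Atlassian docs for your Bamboo version):

```
# Hypothetical Rundeck command step: queue a Bamboo plan build via REST.
curl -X POST \
     --user "$BAMBOO_USER:$BAMBOO_PASSWORD" \
     "https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN"
```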
FWIW, I've done this for Jenkins and Rundeck, but never with Bamboo; the solution should be the same, though, since they're very similar products.