I am using Ansible Tower 3.4.3.
As part of one of my jobs, I need to generate a log file whose name contains the Tower job ID, so I can easily tell which log was generated by which Tower job.
I assume there is a global variable like "ansible_tower_job_id", but I am unable to find any documentation or the variable name.
Can someone help me capture the ID of the currently running job in Ansible Tower?
The callback link contains the ID. From the docs: "The '1' in this sample URL is the job template ID in Tower."
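If you just need the ID inside the playbook itself: recent Tower/AWX releases inject an extra variable named tower_job_id into every job run (I believe 3.x does as well, but verify on your version). A minimal sketch assuming that variable exists:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Write a log file named after the current Tower job ID
      copy:
        dest: "/tmp/job_{{ tower_job_id | default('unknown') }}.log"
        content: "Log for Tower job {{ tower_job_id | default('unknown') }}\n"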
I'm trying to set the tag of a job to that job's unique ID:
some-cool-job:
  tags:
    - $CI_JOB_ID
However, it doesn't seem to resolve the variable; the tag is literally set to "$CI_JOB_ID". Similarly, $CI_PIPELINE_ID doesn't work.
Using $CI_JOB_NAME or $CI_PIPELINE_IID instead works fine.
Hence I assume that the ID just doesn't exist at the time the tags are parsed.
Following this, how else can I uniquely identify a job using variables available at this time?
GitLab assigns a number of predefined environment variables for you. One of these is CI_JOB_ID. You can view the value by printing it within a script.
some-cool-job:
  script:
    - echo $CI_JOB_ID
In the context of a .gitlab-ci.yml file, tags map jobs to runners. For instance, I tag my runners with names reflecting the executor being used (e.g. shell or docker), then I tag jobs within my .gitlab-ci.yml file that need a shell executor with shell.
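As an illustration of that mapping (the job name and script are hypothetical), a job pinned to a shell-executor runner looks like:

build-on-shell:
  tags:
    - shell
  script:
    - make build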
May I ask, what is the desired outcome of tagging a job with the job ID, in your case?
I know committer_email, author_name, and a load of other variables are part of the notification event. Is it possible to access them in earlier events like before_script or after_script?
I would like to access that information and add it directly to my test results. Having build information, test result information, and GitHub repo information in the same file would be great.
You can extract committer e-mail, author name, etc. to environment variables using git log with --pretty, e.g.
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
On Travis, one would put this in the before_install or before_script stage.
The TRAVIS_COMMIT environment variable is provided by default.
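From there you could fold the values into your results file, e.g. (the file name and JSON shape here are just an illustration):

echo "{ \"commit\": \"$TRAVIS_COMMIT\", \"committer_email\": \"$COMMITTER_EMAIL\", \"author_name\": \"$AUTHOR_NAME\" }" > build-info.json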
The task is to get the file names from a folder and then loop the same job over all the files one by one.
I created a simple job with a transformation (Get File Names) followed by a job entry with the "Execute for each row" flag set (for now it just logs the name of the file).
Did it the same way it is described here: http://ramathoughts.blogspot.ch/2010/08/processing-group-of-files-with-kettle.html
However, the path of the received files is not passed to the sub-job (the logging doesn't display the variable's value), yet the sub-job is executed as many times as there are files in the input folder. So the rows seem to be passed to some extent, but for some reason the path is not available as a variable.
Image with log details; as seen, the variable is displayed as ${path} instead of the value of the path:
http://i.imgur.com/pK1iHtl.png?1
The sample code is below as an archive with the jobs and transformation, plus sample input files. Any help is appreciated, as I may be missing something simple here: https://www.hightail.com/download/bXBhL0dNcklCMTVsQXNUQw
The issue is that the second job (i.e. j_log_file_names.kjb) is unable to detect the parameter path. Try defining the parameter on this job, as in the image below:
This will make sure that the parameter coming from the previous step is correctly fetched into the job. The rest of your job looks absolutely fine.
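For reference, the parameter definition ends up in the .kjb file roughly like this (a sketch; the exact XML can vary between PDI versions):

<parameters>
  <parameter>
    <name>path</name>
    <default_value/>
    <description>File path coming from the previous transformation</description>
  </parameter>
</parameters>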
Hope this helps :)
I have an Oozie workflow which consists of a sub-workflow. My main workflow takes three Sqoop job names at a time in a fork; it then has to pass those names to the sub-workflow. In the main workflow there are three shell actions which receive the job names in three respective variables (${job1}, ${job2}, ${job3}), but my sub-workflow is common to all three shell actions. I want to assign the value of ${job1} to ${job}. Where do I create the property ${job}, and how do I transfer the value of ${job1} to ${job}? Please help.
Use a Java action in between, along with capture-output, so that you can do whatever assignment or renaming logic you need there.
The Java action accepts job1 as an argument and emits job=job1 via capture-output, which you can then pass on to the sub-workflow.
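A sketch of that pattern (action names and the Java class are hypothetical): the Java main writes job=<value> as a properties file to the path given by the oozie.action.output.properties system property, and the sub-workflow action reads it back with wf:actionData.

<action name="rename-job-var">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <main-class>com.example.PropertyMapper</main-class>
        <arg>${job1}</arg>
        <capture-output/>
    </java>
    <ok to="run-sub-wf"/>
    <error to="fail"/>
</action>

<action name="run-sub-wf">
    <sub-workflow>
        <app-path>${nameNode}/apps/sub-wf</app-path>
        <configuration>
            <property>
                <name>job</name>
                <value>${wf:actionData('rename-job-var')['job']}</value>
            </property>
        </configuration>
    </sub-workflow>
    <ok to="end"/>
    <error to="fail"/>
</action>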
I know that we can normally pass the parameters while running the jar file on an EC2 instance,
but how do we provide the inputs through code?
I am trying this because I want to call my Java code from a JSP, so in the Java code I want to pick up data directly from S3 and proceed. I tried the following, in vain:
DataExtractor.getRelevantData("s3n://syamk/revanthinput/", "999999", "94645", "20120606",
"s3n://revanthufl/gen/testoutput" + "interm");
Here s3n://syamk/revanthinput/ is what I was using as the input and s3n://revanthufl/gen/testoutput as the output, and these are the same strings I pass as parameters when running the jar. But doing this from code throws an exception,
[java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).] with root cause
Based on my usage of Flume, it would appear that you need to format your URL like s3n://AWS_ACCESS_KEY:AWS_SECRET_KEY@syamk/revanthinput/ when calling S3 from within code.
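Alternatively, per the exception message itself, you can set the two properties on the Hadoop Configuration instead of embedding the keys in the URL. A minimal sketch (the key values are placeholders):

// Configure S3 credentials programmatically before touching s3n:// paths;
// the property names come straight from the exception message.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class S3Access {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");     // placeholder
        conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY"); // placeholder
        // Obtain a FileSystem bound to the bucket and verify the input path.
        FileSystem fs = FileSystem.get(URI.create("s3n://syamk/revanthinput/"), conf);
        System.out.println(fs.exists(new Path("s3n://syamk/revanthinput/")));
    }
}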