AutoSys job history for the last year

I need to find the job history for a job for the last 6 months. I have tried autorep -j <job> -r -n, but this just gives me data for the nth-last execution, while I want data for all the days. Is there a way to do this through the command line?

There is no way to get that information with that command.
I would look in $AUTOUSER/archive,
or query the database if you have access to it.
Dave

Related

How to get the job id of the last job run in BigQuery command line tool?

I am running some bq commands to extract data from BigQuery to GCS, and I am able to achieve the target result.
I can query the data and put it into GCS in the desired formats. I was just wondering if there is any way to get the job ID of the last job and its state. I know we can get details for all jobs using bq ls -j, but that gives me an entire result set; I am only looking for the state of that one job.
bq --location=US extract --destination_format CSV --compression GZIP dataset_id.table_name gs://bucket_name/table.csv
bq ls -j -n 1
jobId Job Type State Start Time Duration
job_id extract FAILURE 30 Mar 13:36:54 0:00:29
I want only the last job ID and its state.
You can pipe it to awk:
bq ls -j -n 1 | awk 'NR>2 {print $1, $3}'
bquxjob_69ed4f1_169ba1f5665 SUCCESS
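If you want the two fields in shell variables, here is a minimal sketch; it uses a here-doc with sample output in the shape bq ls -j -n 1 prints, since in practice you would pipe the real command instead:

```shell
# Sample output shaped like "bq ls -j -n 1" (two header lines, then the job
# row); replace the here-doc with the real command in practice.
last_job=$(awk 'NR > 2 {print $1, $3}' <<'EOF'
            jobId              Job Type   State     Start Time        Duration
 -------------------------- ---------- --------- ----------------- ----------
 bquxjob_69ed4f1_169ba1f5665   extract    SUCCESS   30 Mar 13:36:54   0:00:29
EOF
)
job_id=${last_job%% *}     # first field: the job ID
job_state=${last_job##* }  # second field: the job state
echo "$job_id $job_state"
```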
Looking at the docs, bq offers the global flag --job_id, which allows you to set the ID for the job you're launching (in this case, via the extract command). They even have a section about best practices around generating a job id.
Once you've created the job, you can get details for that specific job using bq show --job MY_JOB_ID_HERE.
If you don't want to generate a job ID yourself, a hackier approach would be to have bq print the API calls using the global --apilog stdout option; you could potentially parse the job ID from that.

How to add today's date into BigQuery destination table name

I am new to Google Cloud BigQuery. I am trying to schedule a job which runs a query periodically. In each run, I would like to create a destination table whose name contains today's date. I need something like:
bq query --destination_table=[project]:[dataset].[table name_date]
Is it possible to do that automatically? Any help is greatly appreciated.
This example uses shell scripting:
YEAR=$(date '+%Y')
MONTH=$(date '+%m')
DAY=$(date '+%d')
day_partition=$YEAR$MONTH$DAY
bq_partitioned_table="${bq_table}_${day_partition}"
bq query --destination_table="$bq_partitioned_table"
See if it helps.
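The same idea can be sketched in one step with command substitution; the dataset and table names below are placeholders:

```shell
# Build a dated destination table name in one step.
# "mydataset.mytable" is a placeholder; use your own dataset and table.
bq_table="mydataset.mytable"
bq_partitioned_table="${bq_table}_$(date '+%Y%m%d')"
echo "$bq_partitioned_table"
# Then run the query against it:
# bq query --destination_table="$bq_partitioned_table" 'SELECT ...'
```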
Where do you put your periodic query?
I always put it in a Datalab notebook, and then use the datetime module to get today's date and assign it to the destination table name.
Then I set the notebook to run every day at a certain time. Works great.

Heroku - Update db column each 1st of the month

In my app I have a Budget model with daily_avg column.
Also there are BudgetIncome and BudgetSpending models.
daily_avg = (BudgetIncome.sum(:debit) - BudgetSpending.sum(:credit))/Time.days_in_month(current_month)
I need to update this column on the 1st day of every month, because each month has a different number of days.
So I'll write the script, but I don't know:
Where should I put this script?
How do I start it on the 1st day of every month?
I'm using PostgreSQL and deploying my app on Heroku.
Thanks for any help.
This is the first time I'm working with scripts for the DB, so I don't know what information I need to provide for you to help me.
Heroku has a scheduler you can use:
https://devcenter.heroku.com/articles/scheduler
"Scheduler is an add-on for running jobs on your app at scheduled time intervals, much like cron in a traditional server environment."
Install with: $ heroku addons:add scheduler:standard
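Note that Scheduler only offers fixed frequencies (every 10 minutes, every hour, or every day), with no monthly option. A common workaround, sketched below under that assumption, is to schedule the job daily and exit early unless it is the 1st; the rake task name in the comment is hypothetical:

```shell
# Runs daily via Heroku Scheduler, but only does real work on the 1st.
is_first_of_month() {
  # $1: optional date string (GNU date syntax); defaults to now.
  [ "$(date -d "${1:-now}" +%d)" = "01" ]
}

if is_first_of_month; then
  echo "1st of the month: recalculating daily_avg"
  # e.g. bundle exec rake budgets:update_daily_avg   # hypothetical task name
else
  echo "not the 1st: nothing to do"
fi
```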

Best practice to add time partitions to a table

I have an event table, partitioned by time (year, month, day, hour).
I want to join a few events in a Hive script that receives the year, month, day, and hour as variables.
How can I also add, for example, events from all 6 hours prior to my time,
without doing a 'recover all partitions'?
Thanks
So basically what I needed was a way to use a date that the Hive script receives as a parameter,
and add all partitions from 3 hours before to 3 hours after that date, without recovering all partitions and without adding the specific hours to every WHERE clause.
I didn't find a way to do it inside the Hive script, so I wrote a quick Python script that gets a date and a table name, along with how many hours to add before/after.
When trying to run it inside the Hive script with:
!python script.py tablename ${hivecond:my.date} 3
I was surprised that variable substitution does not take place in a line that starts with !.
My workaround was to get the date that the Hive script received from the log file on the machine, using something like:
cat /mnt/var/log/hadoop/steps/$(ls /mnt/var/log/hadoop/steps/ | sort -r | head -n 1)/stdout
and from there you can parse each Hive parameter in the Python code without passing it via Hive.
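The Python script itself isn't shown in the answer. As a rough sketch of the same idea in shell (the table name is a placeholder, and the partition columns follow the question's year/month/day/hour layout; everything else is an assumption), you could generate the ALTER TABLE statements like this:

```shell
# Sketch (not the author's actual script): emit ADD PARTITION statements for
# +/- N hours around a base timestamp, using epoch arithmetic so the hour
# offsets are unambiguous.
add_partitions() {
  # $1: base timestamp (any GNU "date -d" format), $2: table, $3: hours around
  local tbl=$2 span=$3 base_epoch offset
  base_epoch=$(date -d "$1" +%s)
  for offset in $(seq "-$span" "$span"); do
    # Split "YYYY MM DD HH" into the positional parameters $1..$4.
    set -- $(date -d "@$((base_epoch + offset * 3600))" '+%Y %m %d %H')
    echo "ALTER TABLE $tbl ADD IF NOT EXISTS PARTITION (year=$1, month=$2, day=$3, hour=$4);"
  done
}

add_partitions "2019-03-30 13:00" events 3
```

The emitted statements can then be fed to hive -e, avoiding a full partition recovery.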

Schedule a cronjob on ssh with command line

I am using an Amazon AWS server. I want to schedule my cron job from the command line.
I am using this command to schedule the cron job:
at -f shellscript.sh -v 18:30
but it schedules only a single run; I want to configure it to run repeatedly, for example once a day or every five minutes.
Please help me with which command I have to use.
Thanks,
As The.Anti.9 noted, this kind of question fits better on Server Fault.
To answer your question: crontab is a little more powerful than at and gives you more flexibility, as you can run the job repeatedly, for instance daily, weekly, or monthly.
For instance, for your example, if you need to run the script every day at 18:30, you'd do this:
$ crontab -e
then add the following
30 18 * * * /path/to/your/script.sh
save and you are done.
Note: 30 18 indicates the time 18:30, and the asterisks indicate that it should run every day of every month. If you need to run it on a particular day of the month, just check the crontab man page.
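For the "every five minutes" case from the question, the minute field takes a step value; both schedules would look like this in crontab -e:

```
# m  h   dom mon dow  command
30   18  *   *   *    /path/to/your/script.sh   # every day at 18:30
*/5  *   *   *   *    /path/to/your/script.sh   # every five minutes
```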
Doesn't crontab -e work?
To generate crontab entries, http://www.openjs.com/scripts/jslibrary/demos/crontab.php should help.
You can use the command crontab -e to edit your planned cron executions. A good explanation of how to set the time can be found on the Ubuntu forum.