Schedule a cron job over SSH from the command line - ssh

I am using an Amazon AWS server. I want to schedule my cron job from the command line.
I am using this command to schedule the job:
at -f shellscript.sh -v 18:30
but it schedules the job only once. I want to configure it to run repeatedly, for example once a day or every five minutes.
Please tell me which command I have to use.
Thanks,

As #The.Anti.9 noted, this kind of question is a better fit for Server Fault.
To answer your question: crontab is a little more powerful than 'at' and gives you more flexibility, as you can run the job repeatedly, for instance daily, weekly, or monthly.
For instance, in your example, if you need to run the script every day at 18:30 you'd do this:
$ crontab -e
then add the following
30 18 * * * /path/to/your/script.sh
save and you are done.
Note: 30 18 indicates the time 18:30, and the *s indicate that it should run every day of every month. If you need to run it on a particular day of the month, check the crontab man page.
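If you instead want the job to run every five minutes, as mentioned in the question, a crontab entry along these lines should work (the script path is a placeholder):
*/5 * * * * /path/to/your/script.sh
Here */5 in the minutes field means every five minutes, and the remaining *s keep it running every hour of every day.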

Doesn't crontab -e work?
To generate crontab entries, http://www.openjs.com/scripts/jslibrary/demos/crontab.php should help.

You can use the command crontab -e to edit your planned cron executions. A good explanation of how to set the time can be found on the Ubuntu forums.

Related

How to create a one-time Dataset Copy (no recurring schedule) using the CLI bq command

I want to use a bash script to make a one-time dataset copy in BigQuery, from source_dataset_A to target_dataset_B.
This operation is easy to do in the BigQuery Console.
However, if I use bq mk --transfer_config like below, it creates a dataset copy transfer job with a recurring schedule of "every 24 hours".
bq mk --transfer_config --project_id=data-project --data_source=cross_region_copy \
--display_name='one-time-dataset-copy' \
--target_dataset=target_dataset_B \
--params='{"source_dataset_id":"source_dataset_A","source_project_id":"source_project","overwrite_destination_table":"true"}' \
How could I do a one-time dataset copy in BigQuery?
It took me a while to figure out, but what it takes is to set the three schedule flags of bq mk --transfer_config properly.
--schedule: Data transfer schedule. If the data source does not support a custom schedule, this should be empty. If empty, the default value for the data source will be used. The specified times are in UTC. Examples of valid format: 1st,3rd monday of month 15:30, every wed,fri of jan,jun 13:15, and first sunday of quarter 00:00.
--schedule_end_time: Time to stop scheduling transfer runs for the given transfer configuration. If empty, the default value for the end time will be used to schedule runs indefinitely. The format for the time stamp is RFC3339 UTC "Zulu".
--schedule_start_time: Time to start scheduling transfer runs for the given transfer configuration. If empty, the default value for the start time will be used to start runs immediately. The format for the time stamp is RFC3339 UTC "Zulu".
To make a one-time copy, you need to set the schedule with a proper start and end time so that it only runs once.
So you could do this:
bq mk --transfer_config --project_id=data-project --data_source=cross_region_copy \
--display_name='one-time-dataset-copy' \
--target_dataset=target_dataset \
--params='{"source_dataset_id":"source_dataset","source_project_id":"source_project","overwrite_destination_table":"true"}' \
--schedule_end_time=$(date -u -d '5 mins' +%Y-%m-%dT%H:%M:%SZ)
This sets up a dataset copy transfer job on a schedule of every 24 hours (the --schedule default), with the schedule starting immediately (the --schedule_start_time default) and ending 5 minutes from now (--schedule_end_time).
By doing that, the transfer job will trigger one and only one run.
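Note that date -u -d '5 mins' relies on GNU date. If you are running the script on macOS/BSD instead (an assumption about your environment), the equivalent relative-time syntax would look roughly like this:
--schedule_end_time=$(date -u -v+5M +%Y-%m-%dT%H:%M:%SZ)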

Automatically updating data from an API to an SQL database?

I have data coming from an API in JSON format. I run a few functions/transformations on it in Python and then insert the data into an SQL database through pandas and SQLAlchemy.
Now, how do I run this automatically at the end of every day, without having to open up the script and run it manually?
You can use crontab on a server (or on your Linux/Mac laptop, but of course it will not run the script if the machine is turned off).
You can run crontab -e to edit the crontab file. Add something like the following to run your script every day at 11 PM:
0 23 * * * ~/myscript.py
crontab.guru is a useful resource for trying out different schedule expressions.
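For the entry above to work, ~/myscript.py needs to be executable and start with a Python shebang line. An alternative (the interpreter path and log location here are assumptions about your setup) is to call the interpreter explicitly and capture the output:
0 23 * * * /usr/bin/python3 /home/youruser/myscript.py >> /home/youruser/myscript.log 2>&1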

Autosys job history for last year

I need to find the job history for a job for the last 6 months. I have tried autorep -J &lt;job&gt; -r -n, but that just gives me data for the nth previous execution, while I want data for all the days. Is there a way to do this through the command line?
There is no way to get that information with that command.
I would query $AUTOUSER/archive, or look in the DB if you have access there.
Dave
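If the archived output under $AUTOUSER/archive is plain text (an assumption about how your site archives it), a rough sketch like the following might be enough to pull out one job's run events:
# Hypothetical sketch: scan archived AutoSys output for a job's start/end events.
grep -h "MY_JOB_NAME" $AUTOUSER/archive/* | grep -E "STARTING|SUCCESS|FAILURE"
Replace MY_JOB_NAME with your job name; adjust the event names to whatever your archive actually records.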

How to execute an MS SQL PL/SQL script multiple times from a Linux box

I have a PL/SQL block which needs to be executed multiple times a day.
This block updates data in Microsoft SQL Server.
Is there any way I can connect to the MS SQL database from a Linux box and schedule the query to execute multiple times a day?
Write a script and then use crontab to schedule the task to run as often as you would like.
To edit: crontab -e
While in crontab it works just like vi: to edit, press i; to stop editing, press Esc; to save and quit, type :wq.
To view: crontab -l
For questions: man crontab
crontab example: 54 14 * * * myJob
This will run "myJob" at 2:54 PM daily.
You could also use something like this to help you build the schedule for crontab: http://crontab-generator.org/
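For the script itself, one common option is the sqlcmd tool from Microsoft's mssql-tools package; the server name, credentials, database, and paths below are placeholders for your own values:
#!/bin/bash
# Hypothetical sketch: run an update script against SQL Server with sqlcmd.
# Server, user, database and file paths are placeholders; the password comes from an environment variable.
/opt/mssql-tools/bin/sqlcmd -S your-sql-server.example.com -U your_user -P "$SQL_PASSWORD" \
    -d your_database -i /home/youruser/update.sql
Save it as, say, /home/youruser/run_update.sh, make it executable, and a crontab entry such as 0 */6 * * * /home/youruser/run_update.sh would run it every six hours.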

Calculating time of a job in HPC

I'm starting to use cloud resources.
In my project I need to run a job and then calculate the time between when the job begins executing in the queue and when it ends.
To put the job in the queue, I used the command:
qsub myjob
How can I do this?
Thanks in advance
The simplest (although not the most accurate) way is to use the report from your queue system. If you use PBS (which is my first guess from your qsub command), you can insert these options in your script:
#PBS -m abe
#PBS -M your_email_address
This sends you a notification at start (b), end (e), and abort (a). The end notification should include a resources_used.walltime field with the wall time spent.
If you use another queue system, there should be a similar option in its manual.
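If you prefer to measure the elapsed time yourself rather than rely on the e-mail report, a minimal sketch of a PBS job script would look like this (the program name is a placeholder):
#!/bin/bash
#PBS -m abe
#PBS -M your_email_address
# Hypothetical sketch: record wall-clock time around the actual work.
start=$(date +%s)
./my_program        # placeholder for the real job
end=$(date +%s)
echo "Elapsed wall time: $((end - start)) seconds"
The echoed line ends up in the job's standard output file, so you can compare it with the resources_used.walltime reported by PBS.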