Check scheduled job status in SAP BODS

I am new to BODS. At present I have configured a job to execute every 2 minutes to extract transactions from a MySQL server and load them into HANA tables.
But sometimes the data volume in MySQL is too large to transform and load into HANA within 2 minutes; while the job is still executing, the next iteration of the same job starts, which results in a BODS failure.
My question is: is there any option in BODS to check the execution status of the scheduled job between runs?
Please help me out with this.

You can create a control/audit table to keep a history of each run of the BODS job. The table should contain fields like ExtractionStart, ExtractionEnd, Status, etc. You then need to change the job so that it reads the status of the previous run from this table before starting the data flow that loads into HANA. If the previous run has not finished, the job can raise an exception.
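A minimal sketch of such a control table and the pre-load check (table and column names are only placeholders; in BODS the check would typically sit in a script object using the sql() and raise_exception() functions):

CREATE TABLE JOB_CONTROL (
    JOB_NAME   VARCHAR(64),
    START_TIME TIMESTAMP,
    END_TIME   TIMESTAMP,     -- stays NULL while the run is still in progress
    STATUS     VARCHAR(16)    -- e.g. 'RUNNING', 'DONE', 'FAILED'
);

-- Run this at the start of the job: if it returns a row, the previous run has not
-- finished yet and the job should raise an exception instead of starting the load.
SELECT JOB_NAME, START_TIME
FROM JOB_CONTROL
WHERE JOB_NAME = 'MYSQL_TO_HANA'
  AND STATUS = 'RUNNING';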
Let me know if this has been helpful or if you need more information.

Related

How to configure a scheduled query to run after another scheduled query has completed in BigQuery

I have 3 tables: A, B and C. I scheduled a daily query on A with the results appended to B, and I want to schedule another daily query on B that runs only after the previous one has completed. How can I do that? BigQuery can only schedule queries at a fixed time.
When the first query has finished, the second query should be triggered and run.
BigQuery is a data warehouse and unfortunately does not support triggers.
I have not used this myself, as I just leave 30 minutes between the scheduled queries, but Google Cloud appears to offer a solution:
https://cloud.google.com/blog/topics/developers-practitioners/how-trigger-cloud-run-actions-bigquery-events

Run Snowflake Task when a data share view is refreshed

Snowflake's documentation illustrates how to have a TASK run on a scheduled basis when there are inserts/updates/deletes or other DML operations on a table, by creating a STREAM on that specific table.
Is there any way to have a TASK run if a view from an external Snowflake data share is refreshed, i.e. dropped and recreated?
As part of this proposed pipeline, we receive a one-time refresh of a view within a specific time period each day, and the goal is to start a downstream pipeline that runs at most once during that time period, when the view is refreshed.
For example, with the following TASK schedule
'USING CRON 0,10,20,30,40,50 8-12 * * MON,WED,FRI America/New_York', the downstream pipeline should only run once every Monday, Wednesday, and Friday between 8 and 12.
Yes, I can point you to the documentation if you would like to see whether this works for the tables you might already have set up:
Is there any way to have a TASK run if a view from an external
Snowflake data share is refreshed, i.e. dropped and recreated?
You might be able to do this by creating a stored procedure that monitors the existence of the table; I have not tried that before, though, so I will see if I can ask an expert.
Separately, is there any way to guarantee that the task runs at most
once on a specific day or other time period?
Yes, you can use the optional CRON parameters of the schedule to restrict it to specific days of the week or times of day. An example:
CREATE TASK delete_old_data
  WAREHOUSE = deletion_wh
  SCHEDULE = 'USING CRON 0 0 * * * UTC'
AS
  DELETE FROM old_data WHERE created_at < DATEADD(day, -30, CURRENT_TIMESTAMP());  -- a TASK needs an AS <sql> body; this DELETE is only illustrative
Reference: https://docs.snowflake.net/manuals/user-guide/tasks.html more specifically: https://docs.snowflake.net/manuals/sql-reference/sql/create-task.html#optional-parameters
A TASK can only be triggered by a calendar schedule, either directly or indirectly via a predecessor TASK being run by a schedule.
Since the tasks are only run on a schedule, they will not run more often than the schedule says.
A TASK can't be triggered by a data share change, so you have to monitor it on a calendar schedule.
This limitation is bound to be lifted at some point, but it still holds as of December 2019.
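So the practical approach is to poll on the question's CRON schedule and enforce the at-most-once behaviour yourself. A rough sketch (object names are made up; in practice the check and the downstream steps would usually live in a stored procedure that the task calls):

CREATE TABLE IF NOT EXISTS pipeline_runs (run_date DATE);

-- The task body records today's run only if it has not already happened,
-- so whatever work it gates runs at most once per scheduled day.
CREATE OR REPLACE TASK poll_shared_view
  WAREHOUSE = my_wh
  SCHEDULE = 'USING CRON 0,10,20,30,40,50 8-12 * * MON,WED,FRI America/New_York'
AS
  INSERT INTO pipeline_runs (run_date)
  SELECT CURRENT_DATE()
  FROM dual
  WHERE NOT EXISTS (SELECT 1 FROM pipeline_runs WHERE run_date = CURRENT_DATE());

ALTER TASK poll_shared_view RESUME;  -- tasks are created suspended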

Run a SQL Server job until it succeeds

I have a SQL Server job that has run for almost 2 years.
It connects to a flaky Oracle database that keeps disconnecting, and it always fails because of that. When I run it again after 10 or 15 minutes, it works successfully. I'm getting tired of checking it every day...
Is there a way to make the job keep retrying the connection to that Oracle source until it succeeds, or to have another job watch this job's status and, if it failed, run it again until it succeeds?
A solution we are using is something like this:
Wrap your Oracle query in an SSIS package, and after the query, have the package update a SQL table that keeps either a history of executions, or just a single row that tracks the last time the job ran successfully. In short, if the Oracle query was successful, then put something in a table saying the query ran successfully today. If it was not successful, then don't put anything in the table for today.
Then at the beginning of the package, BEFORE the Oracle query, check to see if the query has been run successfully today. If it has already run successfully, then do nothing and exit the package. If it has not run successfully today, then go ahead and try to run it, following the post-query steps described above. If you have any other conditions about when the package should run (like "only after 10 am" or anything like that) you would include that logic here.
Finally, create a job that calls the package and schedule it to run every 15 minutes, or however often you like. It will try every 15 minutes until it runs successfully, and after that it will stop doing anything until the next day.
As a bonus, you can use this same package and job to initiate all tasks that you want handled the same way. You just need to keep metadata about all these tasks in your history/metadata table.
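A minimal sketch of the tracking table and the "has it already succeeded today?" check (all names are hypothetical; in the package this would typically be an Execute SQL Task whose result drives a precedence constraint):

CREATE TABLE dbo.JobRunLog (
    JobName     sysname   NOT NULL,
    RunDate     date      NOT NULL,
    SucceededAt datetime2 NOT NULL
);

-- Pre-check before the Oracle query: 1 = already succeeded today, so do nothing
SELECT CASE WHEN EXISTS (
           SELECT 1 FROM dbo.JobRunLog
           WHERE JobName = 'OracleExtract'
             AND RunDate = CAST(GETDATE() AS date)
       ) THEN 1 ELSE 0 END AS AlreadyRanToday;

-- Post-query step, executed only when the Oracle query succeeded
INSERT INTO dbo.JobRunLog (JobName, RunDate, SucceededAt)
VALUES ('OracleExtract', CAST(GETDATE() AS date), SYSDATETIME());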
An alternative is to create the job steps and leave them unscheduled, and create an SSIS job that acts as the master of all your jobs. It runs every minute, checks a config table for any job steps that have yet to succeed today, and starts any it finds using sp_start_job.
If they run successfully, log the stats to a log table; that prevents them from ever being launched again until the next day. This avoids having to schedule all your jobs every 15 minutes or so, they launch as soon as possible, and you can add extra logic to handle dependencies, the number running in parallel, importance level, start time, latest start time, maximum number of retries, and so on.
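A rough T-SQL sketch of that master step (the config and log tables are made up; msdb.dbo.sp_start_job starts an Agent job asynchronously):

DECLARE @job sysname;

DECLARE job_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT c.JobName
    FROM dbo.JobConfig AS c
    WHERE NOT EXISTS (SELECT 1 FROM dbo.JobRunLog AS l
                      WHERE l.JobName = c.JobName
                        AND l.RunDate = CAST(GETDATE() AS date));  -- not yet succeeded today

OPEN job_cursor;
FETCH NEXT FROM job_cursor INTO @job;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_start_job @job_name = @job;  -- fire and forget; the job logs its own success
    FETCH NEXT FROM job_cursor INTO @job;
END
CLOSE job_cursor;
DEALLOCATE job_cursor;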

BigQuery Error: Destination deleted/expired during execution

I have a batch script to load data from a Google Cloud Storage bucket into a table in BigQuery. A scheduled SSIS job executes this batch file daily.
bq load -F "\t" --encoding=UTF-8 --replace=true db_name.tbl_name gs://GSCloudBucket/file.txt "column1:string, column2:string, column3:string"
Weirdly, the execution is successful on some days and not on others. Here is what I have in the log:
Waiting on bqjob_r790a43a4_00000155a65559c2_1 ... (0s) Current status: RUNNING ......
Waiting on bqjob_r790a43a4_00000155a65559c2_1 ... (7s) Current status: DONE
BigQuery error in load operation: Error processing job: Destination
deleted/expired during execution
One possibility is that you have a 1-day (or multiple-day) expiration on that table (either set on the table directly or via the dataset's default table expiration). In that case, because the actual load time varies, you can end up in a situation where the destination table has expired by the time the load runs.
You can use the configuration.load.createDisposition attribute to address this.
And/or you can make sure a suitable expiration is set; for a daily process it could be, say, 26 hours, so you have an extra 2 hours for your SSIS job to complete before the table can expire.
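For example, you could either remove the expiration or give the dataset a roughly 26-hour default. The DDL below is the newer SQL way of doing it (bq update can do the same from the command line); the dataset and table names are taken from the question's bq command:

-- Stop the destination table from expiring at all
ALTER TABLE db_name.tbl_name
SET OPTIONS (expiration_timestamp = NULL);

-- Or give newly created tables in the dataset roughly 26 hours before they expire
ALTER SCHEMA db_name
SET OPTIONS (default_table_expiration_days = 1.084);  -- ~26 hours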

What happens when bigquery upload job fails after loaded a portion of the JSON file?

As the title says: what happens when I start a BigQuery load job and, let's say, the job fails after loading 50% of the rows in the JSON file? Does BigQuery roll back everything from the load job, or am I left with 50% of the data loaded?
I am appending data daily into a single table, and keeping it duplicate-free is very important. We are using the HTTP REST API.
BigQuery appends data atomically. You will never get half of the data in the table if the load fails. If the job completes successfully, all of the data will show up at once.
There are two additional tricks you can use to prevent duplicates:
Specify a job id for the load job. Imagine you pull your network cable midway through starting the job... how do you know whether it succeeded? Specifying a job id lets you look up the job later if the job creation request fails.
Perform your loads to a temporary table, and specify WRITE_TRUNCATE as the writeDisposition. This means that you can run import jobs idempotently against the temporary table, and if you don't know whether a job succeeded, just run another one; it will overwrite the data. Once you have a load job that completes successfully, run a table copy job with a writeDisposition of WRITE_APPEND to append the new data to your main table (see the sketch below).
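A sketch of that staging-table pattern using BigQuery's newer LOAD DATA SQL statement instead of the raw REST calls (all dataset, table and column names are made up; the job-id trick only applies when you create jobs through the API):

-- Idempotent load: re-running this simply overwrites the staging table (WRITE_TRUNCATE semantics)
LOAD DATA OVERWRITE mydataset.staging_daily (column1 STRING, column2 STRING, column3 STRING)
FROM FILES (
  format = 'JSON',  -- newline-delimited JSON
  uris = ['gs://my-bucket/daily/file.json']
);

-- Once the load has definitely succeeded, append the staged rows to the main table
INSERT INTO mydataset.main_table (column1, column2, column3)
SELECT column1, column2, column3
FROM mydataset.staging_daily;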