How to make an already running PL/SQL package run faster?

There is a PL/SQL package that is called by a job. I ran the job and was monitoring progress by querying the log table, which is populated for each record. There are 20k records to be processed. The code ran reasonably fast for the first 8-10k records. After that, the speed slowed down considerably. The code is still running and is very slow now. I checked all the active sessions and there are no issues there. Is there any way to speed it up without killing the job?

Try inserting a commit for every 5,000 records processed.
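A minimal PL/SQL sketch of that suggestion, assuming a hypothetical cursor loop over source_table and a hypothetical process_record procedure standing in for the package's existing per-record logic:

DECLARE
  l_count PLS_INTEGER := 0;
BEGIN
  FOR rec IN (SELECT * FROM source_table) LOOP  -- source_table is hypothetical
    process_record(rec);                        -- existing per-record logic
    l_count := l_count + 1;
    IF MOD(l_count, 5000) = 0 THEN
      COMMIT;                                   -- commit every 5,000 rows
    END IF;
  END LOOP;
  COMMIT;                                       -- commit the remaining rows
END;
/

Committing inside the loop trades one large transaction for many small ones; whether that actually helps depends on why the run slows down.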

Related

Greenplum not allowing to cancel long running queries

There is an insert query that is loading data into table A from table B.
Table B has 3,000 million records.
The query had been running for 4 hours when the user forcefully cancelled it from the Pivotal Greenplum Command Center.
It is still running in the backend.
We tried running the commands below:
pg_cancel_backend(pid) / pg_terminate_backend(pid)
Both return true, with no effect in real time.
How do we deal with this? Is restarting the database the only option?
Thanks
Check for errors in the pg_log, and try running pstack against the process on the segment host to see what it is doing.
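If it helps, a minimal sketch of checking the backend from SQL before reaching for pstack (column names vary by Greenplum version; older releases expose procpid and current_query instead of pid and query):

SELECT pid, state, query
FROM pg_stat_activity
WHERE query ILIKE '%table_a%';   -- table_a stands in for the target table

-- If the statement still has to be stopped, cancel first, then terminate,
-- using the pid found above (12345 is a placeholder):
SELECT pg_cancel_backend(12345);
SELECT pg_terminate_backend(12345);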

Why does the query run on its own but not as a Scheduled Query in BigQuery?

I have a query that runs fine if I simply run it from the console or from code.
When I created a Scheduled Query for it, it would not run. The Scheduled Query is created successfully, and the interval I set (every 2 hours) is applied correctly, but the jobs are never created (I can see in the Scheduled Query that the next run time is incremented by 2 hours every time it is supposed to run).
These are the properties when the query runs from the Scheduled Query:
Overwrite table, Processing location: US, Allow large results, Batch priority
If I do a Schedule Backfill, it creates 12 jobs, which fail with error messages similar to the following:
Exceeded CPU limit 125%
Exceeded memory
If I cancel all the created jobs and leave just one to run, it runs successfully. The Scheduled Query itself still does not create any jobs.
I started the Scheduled Query at 12:00 and set it to repeat every 2 hours.
I assumed the jobs would run at the start time, but apparently that is not the case. The Scheduled Query ran perfectly as intended from 14:00 onwards, then 16:00, and so on.
The errors about maximum CPU/memory usage occurred because the query I wrote had an ORDER BY clause, which was causing the issue. Removing it cleared the problem.
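For illustration, a hypothetical scheduled query of that shape with the ORDER BY dropped (project, dataset, table, and column names are made up):

SELECT user_id, event_ts, event_type
FROM `my_project.my_dataset.events`
WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 HOUR)
-- ORDER BY event_ts   -- removed: sorting the full result set is what tripped the
--                        CPU/memory limits; order the much smaller result when
--                        reading the destination table instead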

BigQuery Scheduled Query not Appending

I have a scheduled query that refreshes an existing BQ table.
BQ says the job runs, and confirms the time it finished.
However, the rows never actually get appended.
There are no errors of any kind firing.
The table even says it was last modified at the same time that the scheduled query runs.
The write type is write append.
Has anyone experienced this issue?
Thank you

Run a SQL Server job until it succeeds

I have a SQL Server job that has been in place for almost 2 years.
It connects to an unreliable Oracle database that keeps disconnecting, so the job always fails because of that. When I run it again after 10 or 15 minutes, it succeeds. I'm getting tired of checking it every day...
Is there a way to make the job keep retrying the connection to that Oracle source until it succeeds, or to have another job watch this job's status and, if it fails, run it again until it succeeds?
A solution we are using is something like this:
Wrap your Oracle query in an SSIS package, and after the query, have the package update a SQL table that keeps either a history of executions or just a single row tracking the last time the job ran successfully. In short, if the Oracle query succeeded, put something in the table saying it ran successfully today; if it did not, don't put anything in the table for today.
Then at the beginning of the package, BEFORE the Oracle query, check to see whether the query has already run successfully today. If it has, do nothing and exit the package. If it has not, go ahead and try to run it, following the post-query steps described above (a rough T-SQL sketch of this pattern is included at the end of this answer). If you have any other conditions about when the package should run (like "only after 10 am"), include that logic here as well.
Finally, create a job to call the package, and schedule it to run every 15 minutes, or however often you like. It will try every 15 minutes until it runs successfully, and after that it will do nothing until the next day.
As a bonus, you can use this same package and job to initiate all tasks that you want handled the same way. You just need to keep metadata about all these tasks in your history/metadata table.
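A rough T-SQL sketch of that check-then-record pattern, assuming a hypothetical dbo.JobRunHistory table and a made-up job name of 'OracleExtract':

IF NOT EXISTS (SELECT 1
               FROM dbo.JobRunHistory
               WHERE JobName = 'OracleExtract'
                 AND CAST(RunDate AS date) = CAST(GETDATE() AS date)
                 AND Succeeded = 1)
BEGIN
    -- ... run the Oracle query here (the body of the SSIS package) ...

    -- Only reached if the Oracle query completed without error: record the
    -- success so any later run today exits without doing anything.
    INSERT INTO dbo.JobRunHistory (JobName, RunDate, Succeeded)
    VALUES ('OracleExtract', GETDATE(), 1);
END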
An alternative is to create the job but leave it unscheduled, and create a master job that runs every minute, checks a config table for jobs that have yet to succeed today, and starts any it finds using sp_start_job.
If they run successfully, log the stats to a log table so they are never launched again until the next day. This avoids having to schedule all your jobs every 15 minutes or so; they launch as soon as possible, and you can add extra logic to handle dependencies, the number running in parallel, importance level, start time, latest start time, maximum number of retries, and so on. A rough sketch of the master step follows below.
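This sketch assumes hypothetical dbo.JobConfig and dbo.JobRunHistory tables; sp_start_job is the real msdb procedure, everything else here is made up:

DECLARE @job sysname;

DECLARE job_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT c.JobName
    FROM dbo.JobConfig AS c
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.JobRunHistory AS h
                      WHERE h.JobName = c.JobName
                        AND CAST(h.RunDate AS date) = CAST(GETDATE() AS date)
                        AND h.Succeeded = 1);

OPEN job_cursor;
FETCH NEXT FROM job_cursor INTO @job;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_start_job @job_name = @job;  -- asynchronous start of the unscheduled job
    FETCH NEXT FROM job_cursor INTO @job;
END
CLOSE job_cursor;
DEALLOCATE job_cursor;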

Is it possible to run multiple SQL statements in parallel within the same transaction from .net?

I want to upsert into 7 tables in SQL Server. I first SqlBulkCopy into 7 staging tables and then merge them into the real tables. I need this to be faster, so I'm wondering whether there is a way to run the merges in parallel but within the same transaction, because if anything fails I want to roll back all of it.
Thanks
I would use an SSIS package and put this ETL in a container; using the TransactionOption on the container, you can have it roll back if anything fails. I think this is the fastest way... Otherwise you could check whether the transactions finished using a table... but that's silly.
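For reference, a sequential T-SQL sketch of the staging-to-target step the question describes, with all the merges inside one explicit transaction so any failure rolls everything back (table and column names are hypothetical, and this does not attempt the parallel execution asked about):

BEGIN TRY
    BEGIN TRANSACTION;

    MERGE dbo.Customer AS tgt
    USING dbo.Staging_Customer AS src
        ON tgt.CustomerId = src.CustomerId
    WHEN MATCHED THEN
        UPDATE SET tgt.Name = src.Name,
                   tgt.Email = src.Email
    WHEN NOT MATCHED THEN
        INSERT (CustomerId, Name, Email)
        VALUES (src.CustomerId, src.Name, src.Email);

    -- ... repeat a MERGE like the one above for the other six staging tables ...

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- one failure undoes all of the merges
    THROW;
END CATCH;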