IBM DB2 SQL sleep, wait or delay for stored procedure - sql

I have a small loop procedure that waits for another process to write a flag to a table. Is there any way to add a delay so this process doesn't consume so much CPU? I expect it may need to run for 1-2 minutes if everything ends correctly.
BEGIN
DECLARE STOPPED_TOMCAT VARCHAR (1);
UPDATE MRC_MAIN.DAYEND SET DENDSTR = 'Y';
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
WHILE ( STOPPED_TOMCAT <> 'Y')
DO
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
END WHILE;
END;

Use CALL DBMS_ALERT.SLEEP(x), where x is the number of seconds to pause.
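For illustration, here is a minimal sketch of the original loop with a sleep between polls, assuming a DB2 release/platform where the DBMS_ALERT module is available; the 5-second interval is an arbitrary choice:
BEGIN
DECLARE STOPPED_TOMCAT VARCHAR (1);
UPDATE MRC_MAIN.DAYEND SET DENDSTR = 'Y';
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
WHILE ( STOPPED_TOMCAT <> 'Y')
DO
CALL DBMS_ALERT.SLEEP(5); -- pause 5 seconds between polls instead of spinning
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
END WHILE;
END;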

I don't have the resources to test this solution, but why not try calling the IBM i command DLYJOB in your code:
CALL QCMDEXC('DLYJOB DLY(1)', 13);
The parameter DLY indicates the wait time in seconds and the number 13 is the length of the command string being executed.

Related

SQL - How to call a part of query conditionally

I need to write a SQL query like this:
do step1
do step2
do step3
- this step does a lot of things: it defines variables, uses some variables defined in step 1, and queries a lot of data into a temporary table...
if(cond1)
if(cond2)
Begin
do step 4
call step 3 here
End
else
Begin
do step 5
End
else
Begin
call step 3 here
End
How can I make step 3 a reusable query and avoid calling step 3 unnecessarily? Note that step 3 should be a part of the whole query.
I tried to declare a @step3Query NVARCHAR(MAX), SET @step3Query = '...',
and then call this query in the appropriate places with "EXEC sp_executesql @step3Query". But I got a lot of errors, and I am not sure that is the correct way to do a task like that.
Any suggestions would be highly appreciated.
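For reference, the dynamic-SQL attempt described above is usually invoked like the sketch below; the table and variable names here are placeholders, not from the question. A common source of errors with this approach is scoping: variables declared in the outer batch are not visible inside the dynamic SQL, and temp tables created inside sp_executesql are dropped when the dynamic batch ends.
DECLARE @step3Query NVARCHAR(MAX);
-- Placeholder body: the real step 3 query would go here.
SET @step3Query = N'SELECT name INTO #Step3Results FROM sys.objects;';
-- #Step3Results exists only while the dynamic batch runs, then it is dropped.
EXEC sp_executesql @step3Query;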
Here is one method that could work (depending on your exact circumstances). The main appeal of this approach is that it's easy to write, as well as easy to read.
You could have 3 variables to use as flags, e.g., @Query3_Flag, @Query4_Flag, @Query5_Flag.
Your conditional checks just set these flags, e.g.,
IF (condition1)
BEGIN
SET @Query4_Flag = 1;
SET @Query3_Flag = 1;
SET @Query5_Flag = 0;
END
Before each of queries 3, 4 and 5, have the IF statement check the flag and only run the query if the flag is set to 1 e.g.,
IF @Query3_Flag = 1
BEGIN
(run Query 3)
END
Note that the queries will need to be in the correct order.
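Putting those pieces together, a minimal sketch of the flag approach might look like the following; the condition variables and PRINT statements are placeholders for the real conditions and queries:
DECLARE @Query3_Flag BIT = 0, @Query4_Flag BIT = 0, @Query5_Flag BIT = 0;
DECLARE @cond1 BIT = 1, @cond2 BIT = 0;  -- placeholders for the real conditions
-- step 1 and step 2 run here unconditionally
IF @cond1 = 1
BEGIN
    IF @cond2 = 1
    BEGIN
        SET @Query4_Flag = 1;  -- step 4, then step 3
        SET @Query3_Flag = 1;
    END
    ELSE
        SET @Query5_Flag = 1;  -- step 5 only
END
ELSE
    SET @Query3_Flag = 1;      -- step 3 only
IF @Query4_Flag = 1
BEGIN
    PRINT 'run query 4';       -- placeholder for query 4
END
IF @Query3_Flag = 1
BEGIN
    PRINT 'run query 3';       -- placeholder for query 3
END
IF @Query5_Flag = 1
BEGIN
    PRINT 'run query 5';       -- placeholder for query 5
END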

Bulk Collect with million rows to insert.......Missing Rows?

I want to insert all the rows from the cursor into a table, but it is not inserting all of them; only some rows get inserted. Please help.
I have created a procedure BPS_SPRDSHT which takes 3 input parameters.
PROCEDURE BPS_SPRDSHT(p_period_name VARCHAR2,p_currency_code VARCHAR2,p_source_name VARCHAR2)
IS
CURSOR c_sprdsht
IS
SELECT gcc.segment1 AS company, gcc.segment6 AS prod_seg, gcc.segment2 dept,
gcc.segment3 accnt, gcc.segment4 prd_grp, gcc.segment5 projct,
gcc.segment7 future2, gljh.period_name,gljh.je_source,NULL NULL1,NULL NULL2,NULL NULL3,NULL NULL4,gljh.currency_code Currency,
gjlv.entered_dr,gjlv.entered_cr, gjlv.accounted_dr, gjlv.accounted_cr,gljh.currency_conversion_date,
NULL NULL6,gljh.currency_conversion_rate ,NULL NULL8,NULL NULL9,NULL NULL10,NULL NULL11,NULL NULL12,NULL NULL13,NULL NULL14,NULL NULL15,
gljh.je_category ,NULL NULL17,NULL NULL18,NULL NULL19,tax_code
FROM gl_je_lines_v gjlv, gl_code_combinations gcc, gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code!='STAT'
AND gljh.currency_code=NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name||'%';
type t_spr is table of c_sprdsht%rowtype;
v_t_spr t_spr :=t_spr();
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'TOTAL ROWS FETCHED FOR SPREADSHEETS- '|| v_t_spr.count);
IF v_t_spr.count > 0 THEN
BEGIN
FORALL I IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
EXCEPTION
WHEN OTHERS THEN
l_error_count := SQL%BULK_EXCEPTIONS.count;
fnd_file.put_line(fnd_file.output,'Number of failures: ' || l_error_count);
FOR l IN 1 .. l_error_count LOOP
DBMS_OUTPUT.put_line('Error: ' || l ||
' Array Index: ' || SQL%BULK_EXCEPTIONS(l).error_index ||
' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(l).ERROR_CODE));
END LOOP;
END;
END IF;
fnd_file.put_line(fnd_file.output,'END TIME: '||TO_CHAR (SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END BPS_SPRDSHT;
Total rows to be inserted=568388
No of rows getting inserted=48345.
Oracle uses two engines to process PL/SQL code. All procedural code is handled by the PL/SQL engine while all SQL is handled by the SQL statement executor, or SQL engine. There is an overhead associated with each context switch between the two engines.
The entire PL/SQL code could be written in plain SQL, which will be much faster and less code.
INSERT INTO custom.pwr_bps_gl_register
SELECT gcc.segment1 AS company,
gcc.segment6 AS prod_seg,
gcc.segment2 dept,
gcc.segment3 accnt,
gcc.segment4 prd_grp,
gcc.segment5 projct,
gcc.segment7 future2,
gljh.period_name,
gljh.je_source,
NULL NULL1,
NULL NULL2,
NULL NULL3,
NULL NULL4,
gljh.currency_code Currency,
gjlv.entered_dr,
gjlv.entered_cr,
gjlv.accounted_dr,
gjlv.accounted_cr,
gljh.currency_conversion_date,
NULL NULL6,
gljh.currency_conversion_rate ,
NULL NULL8,
NULL NULL9,
NULL NULL10,
NULL NULL11,
NULL NULL12,
NULL NULL13,
NULL NULL14,
NULL NULL15,
gljh.je_category ,
NULL NULL17,
NULL NULL18,
NULL NULL19,
tax_code
FROM gl_je_lines_v gjlv,
gl_code_combinations gcc,
gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code != 'STAT'
AND gljh.currency_code =NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name
||'%';
Update
It is a myth that frequent commits in PL/SQL are good for performance.
Thomas Kyte explained it beautifully here:
Frequent commits -- sure, "frees up" that undo -- which invariably leads to ORA-1555 and the failure of your process. That's good for performance, right?
Frequent commits -- sure, "frees up" locks -- which throws transactional integrity out the window. That's great for data integrity, right?
Frequent commits -- sure, "frees up" redo log buffer space -- by forcing you to WAIT for a sync write to the file system every time -- you WAIT and WAIT and WAIT. I can see how that would "increase performance" (NOT). Oh yeah, the fact that the redo buffer is flushed in the background
every three seconds
when 1/3 full
when 1 meg full
would do the same thing (free up this resource) AND not make you wait.
Frequent commits -- there is NO resource to free up -- undo is undo, a big old circular buffer. It is not any harder for us to manage 15 gigawads or 15 bytes of undo. Locks -- well, they are an attribute of the data itself, and it is no more expensive in Oracle (it would be in DB2, SQL Server, Informix, etc.) to have one BILLION locks vs one lock. The redo log buffer -- that is continuously taking care of itself, regardless of whether you commit or not.
First of all, let me point out that there is a serious bug in the code you are using; it is the reason you are not inserting all the records:
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht
BULK COLLECT INTO v_t_spr -- this OVERWRITES your array!
-- it does not add new records!
limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
Each iteration OVERWRITES the contents of v_t_spr with the next 50,000 rows read.
Actually, the 48345 records you are inserting are simply the last block read during the last iteration.
The INSERT statement should be inside the same loop: you should do an insert for each block of 50,000 rows read.
You should have written it this way:
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr limit 50000;
EXIT WHEN c_sprdsht%notfound;
FORALL I IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
...
...
END LOOP;
CLOSE c_sprdsht;
If you were expecting to have the whole table loaded in memory for just one single insert, then you wouldn't have needed any loop or any "limit 50000" clause... and actually you could have simply used the "insert ... select" approach.
Now: a VERY GOOD reason for NOT using an "insert ... select" could be that there are so many rows in the source table that such an insert would make the rollback segments grow so much that there is simply not enough physical space on your server to hold them. But if this is the issue (you can't have that much rollback data for a single transaction), you should also perform a COMMIT for each 50,000-record block, otherwise your loop would not solve the problem: it would just be slower than the "insert ... select" and it would generate the same "out of rollback space" error (I don't remember the exact error message...).
Now, issuing a commit every 50,000 records is not the nicest thing to do, but if your system is actually not big enough to handle the needed rollback space, you have no other way out (or at least I am not aware of one...).
Don't use EXIT WHEN c_sprdsht%NOTFOUND (with a LIMIT clause, %NOTFOUND becomes true as soon as the last fetch returns fewer than 50,000 rows, so the final partial batch is skipped); instead use EXIT WHEN v_t_spr.COUNT = 0.
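Putting these corrections together, a minimal sketch of the fixed loop could look like this; the 50,000 batch size and the per-batch COMMIT are assumptions to be tuned against the available undo/rollback space:
OPEN c_sprdsht;
LOOP
    FETCH c_sprdsht BULK COLLECT INTO v_t_spr LIMIT 50000;
    EXIT WHEN v_t_spr.COUNT = 0;  -- not %NOTFOUND, so the last partial batch is kept
    FORALL i IN v_t_spr.FIRST .. v_t_spr.LAST SAVE EXCEPTIONS
        INSERT INTO custom.pwr_bps_gl_register
        VALUES v_t_spr(i);
    COMMIT;  -- only needed if a single transaction would exhaust rollback space
END LOOP;
CLOSE c_sprdsht;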

How to set a timeout for an anonymous block or query in PL/SQL?

I know you can set user profiles or set a general timeout for a query.
But I wish to set a timeout for a specific query inside a procedure and catch the exception, something like:
begin
update tbl set col = v_val; --Unlimited time
delete from tbl where id = 20; --Unlimited time
begin
delete from tbl; -- I want this to have a limited time to perform
exception when (timeout???) then
--code;
end;
end;
Is this possible? Are there any timeout exceptions at all that I can catch, per block or per query? I didn't find much info on the topic.
No, you cannot set a timeout in PL/SQL. You could use a host language for this, in which you embed your SQL and PL/SQL.
You could do:
select * from tbl for update wait 10; --This example will wait 10 seconds. Replace 10 with number of seconds to wait
Then the select will attempt to lock the specified rows, but if it's unsuccessful after n seconds it will throw "ORA-30006: resource busy; acquire with WAIT timeout expired". If the lock is acquired, you can then execute your delete.
Hope that helps.
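A rough sketch of that pattern inside a PL/SQL block might look like the following; the table name comes from the question and the 10-second wait is an arbitrary choice:
DECLARE
    resource_busy EXCEPTION;
    PRAGMA EXCEPTION_INIT(resource_busy, -30006);  -- WAIT timeout expired
    CURSOR c_lock IS
        SELECT * FROM tbl FOR UPDATE WAIT 10;      -- give up after 10 seconds
BEGIN
    OPEN c_lock;   -- the row locks are acquired when the cursor is opened
    CLOSE c_lock;  -- locks are held until COMMIT or ROLLBACK, not until CLOSE
    DELETE FROM tbl;
EXCEPTION
    WHEN resource_busy THEN
        NULL;  -- could not lock the rows within 10 seconds: handle the "timeout" here
END;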
Just an idea: you could do the DELETE in a DBMS_JOB. Then create a procedure to monitor the job, which then calls:
DBMS_JOB.BROKEN(JOBID,TRUE);
DBMS_JOB.remove(JOBID);
v_timer1 := dbms_utility.get_time();
WHILE TRUE
LOOP
v_timer2 := dbms_utility.get_time();
EXIT WHEN (ABS(v_timer1 - v_timer2)/100) > 60; -- cancel after 60 sec.
END LOOP;
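Assembled into one block, the idea might look roughly like the sketch below; the job body, the 60-second budget, and the use of DBMS_LOCK.SLEEP (which needs EXECUTE privilege on DBMS_LOCK) are assumptions. Note that BROKEN/REMOVE only prevent future runs of the job; they do not kill an execution that is already in progress:
DECLARE
    v_job    BINARY_INTEGER;
    v_timer1 NUMBER;
BEGIN
    -- Run the long DELETE asynchronously as a job.
    DBMS_JOB.SUBMIT(v_job, 'BEGIN DELETE FROM tbl; COMMIT; END;');
    COMMIT;  -- the job only becomes eligible to run after the commit
    v_timer1 := DBMS_UTILITY.GET_TIME();
    LOOP
        EXIT WHEN (ABS(DBMS_UTILITY.GET_TIME() - v_timer1) / 100) > 60;  -- 60-second budget
        DBMS_LOCK.SLEEP(1);  -- poll once per second instead of busy-waiting
    END LOOP;
    -- Time is up: if the job is still on the queue, stop it from running (again).
    FOR j IN (SELECT job FROM user_jobs WHERE job = v_job) LOOP
        DBMS_JOB.BROKEN(j.job, TRUE);
        DBMS_JOB.REMOVE(j.job);
    END LOOP;
    COMMIT;
END;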
MariaDB> select sleep(4);
+----------+
| sleep(4) |
+----------+
| 0 |
+----------+
1 row in set (4.002 sec)
MariaDB>
See: https://mariadb.com/kb/en/sleep/

T-SQL: Stop query after certain time

I am looking to run a query in T-SQL (SQL Server Management Studio) that will stop after X number of seconds, say 30 seconds. My goal is to stop a query after 6 minutes. I know the query below is not correct, but I wanted to give you an idea.
Select * from DB_Table
where (gatedate()+datepart(seconds,'00:00:30')) < getdate()
In SQL Server Management Studio, bring up the options dialog (Tools > Options) and drill down to "Query Execution/SQL Server/General".
The Execution time-out setting is what you want. A value of 0 specifies an infinite time-out; a positive value is the time-out limit in seconds.
NOTE: this value "is the cumulative time-out for all network reads during command execution or processing of the results. A time-out can still occur after the first row is returned, and does not include user processing time, only network read time." (per MSDN).
If you are using ADO.NET (System.Data.SqlClient), the SqlCommand object's CommandTimeout property is what you want. The connection string timeout keywords (Connect Timeout, Connection Timeout, or Timeout) only specify how long to wait while establishing a connection with SQL Server; they have nothing to do with query execution.
Yes, let's try it out.
This is a query that will run for 6 minutes:
DECLARE @i INT = 1;
WHILE (@i <= 360)
BEGIN
WAITFOR DELAY '00:00:01'
print FORMAT(GETDATE(),'hh:mm:ss')
SET @i = @i + 1;
END
Now create an Agent Job that will run every 10 seconds with this step:
-- Put here a part of the code you are targeting or even the whole query
DECLARE @Search_for_query NVARCHAR(300) SET @Search_for_query = '%FORMAT(GETDATE(),''hh:mm:ss'')%'
-- Define the maximum time you want the query to run
DECLARE @Time_to_run_in_minutes INT SET @Time_to_run_in_minutes = 1
DECLARE @SPID_older_than smallint
SET @SPID_older_than = (
SELECT
--text,
session_id
--,start_time
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE text LIKE @Search_for_query
AND text NOT LIKE '%sys.dm_exec_sql_text(sql_handle)%' -- This avoids the killing job killing itself
AND start_time < DATEADD(MINUTE, -@Time_to_run_in_minutes, GETDATE())
)
-- SELECT @SPID_older_than -- Use this for testing
DECLARE @SQL nvarchar(1000)
SET @SQL = 'KILL ' + CAST(@SPID_older_than as varchar(20))
EXEC (@SQL)
Make sure the job is run by sa or some valid alternative.
Now you can adapt it to your code by changing:
@Search_for_query = put here a part of the query you are looking for
@Time_to_run_in_minutes = the max number of minutes you want the job to run
What will you be using to execute this query? If you create a .NET application, the default command timeout is 30 seconds. You can change the timeout to 6 minutes if you wish by changing SqlCommand.CommandTimeout.
In SQL Server Management Studio, I just right-click the connection in the left Object Explorer pane, choose Activity Monitor, then Processes, right-click the query that's running, and choose Kill Process.

How to organize infinite while loop in SQL Server?

I want to use infinite WHILE loop in SQL Server 2005 and use BREAK keyword to exit from it on certain condition.
while true does not work, so I have to use while 1=1.
Is there a better way to organize infinite loop ?
I know that I can use goto, but while 1=1 begin ... end looks better structurally.
In addition to the WHILE 1 = 1 that the other answers suggest, I often add a "timeout" to my SQL "infinite" loops, as in the following example:
DECLARE @startTime datetime2(0) = GETDATE();
-- This will loop until BREAK is called, or until a timeout of 45 seconds.
WHILE (GETDATE() < DATEADD(SECOND, 45, @startTime))
BEGIN
-- Logic goes here: The loop can be broken with the BREAK command.
-- Throttle the loop for 2 seconds.
WAITFOR DELAY '00:00:02';
END
I found the above technique useful within a stored procedure that gets called from a long polling AJAX backend. Having the loop on the database-side frees the application from having to constantly hit the database to check for fresh data.
Using WHILE 1 = 1 with a BREAK statement is the way to do it. There is no constant in T-SQL for TRUE or FALSE.
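A minimal sketch of that pattern, with a placeholder exit condition:
DECLARE @done BIT = 0;  -- placeholder for the real stop condition
WHILE 1 = 1
BEGIN
    -- do the actual work here ...
    SET @done = 1;       -- set this when the stop condition is met
    IF @done = 1 BREAK;  -- leaves the otherwise infinite loop
END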
If you really have to use an infinite loop, then using WHILE 1=1 is the way I'd do it.
The question here is, isn't there some other way to avoid an infinite loop? These things just tend to go wrong ;)
You could use the snippet below to kick off a stored procedure after some conditions are raised. I assume that you have some sort of CurrentJobStatus table where all the jobs/stored procedures keep their status...
-- *** reload data on N Support.usp_OverrideMode with checks on Status
/* run
Support.usp_OverrideMode.Number1.sql
and
Support.usp_OverrideMode.Number2.sql
*/
DECLARE @FileNameSet TABLE (FileName VARCHAR(255));
INSERT INTO @FileNameSet
VALUES ('%SomeID1%');
INSERT INTO @FileNameSet
VALUES ('%SomeID2%');
DECLARE @BatchRunID INT;
DECLARE @CounterSuccess INT = 0;
DECLARE @CounterError INT = 0;
-- Loop
WHILE (@CounterError = 0 AND @CounterSuccess < (SELECT COUNT(1) c FROM @FileNameSet))
BEGIN
DECLARE @CurrentStatus VARCHAR(255)
SELECT @CurrentStatus = CAST(GETDATE() AS VARCHAR)
-- Logic goes here: The loop can be broken with the BREAK command.
SELECT @CounterSuccess = COUNT(1)
FROM dbo.CurrentJobStatus t
INNER JOIN @FileNameSet fns
ON (t.FileName LIKE fns.FileName)
WHERE LoadStatus = 'Completed Successfully'
SELECT @CounterError = COUNT(1)
FROM dbo.CurrentJobStatus t
INNER JOIN @FileNameSet fns
ON (t.FileName LIKE fns.FileName)
WHERE LoadStatus = 'Completed with Error(s)'
-- Throttle the loop for 3 seconds.
WAITFOR DELAY '00:00:03';
SELECT @CurrentStatus = @CurrentStatus + CHAR(9) + 'CounterSuccess ' + CAST(@CounterSuccess AS VARCHAR(11))
+ CHAR(9) + 'CounterError ' + CAST(@CounterError AS VARCHAR(11))
RAISERROR (
'Looping... # %s'
,0
,1
,@CurrentStatus
)
WITH NOWAIT;
END
-- TODO: add some condition on the @CounterError value
/* run
Support.usp_OverrideMode.WhenAllSuceed.sql
*/
Note the code is flexible: you can add as many condition checks as you need to the @FileNameSet table variable.
Mario