How to set a timeout for an anonymous block or query in PL/SQL? - sql

I know you can set user profiles or a general query timeout.
But I want to set a timeout on a specific query inside a procedure and catch the exception, something like:
begin
  update tbl set col = v_val;     --Unlimited time
  delete from tbl where id = 20;  --Unlimited time
  begin
    delete from tbl;              -- I want this to have a limited time to perform
  exception
    when (timeout???) then
      --code;
  end;
end;
Is this possible? Are there any timeout exceptions at all that I can catch, per block or per query? I didn't find much info on the topic.

No, you cannot set a timeout in PL/SQL. You could use a host language for this, in which you embed your SQL and PL/SQL.

You could do:
select * from tbl for update wait 10; -- this example will wait 10 seconds; replace 10 with the number of seconds to wait
Then the select will attempt to lock the specified rows; if it's unsuccessful after n seconds, it will throw "ORA-30006: resource busy; acquire with WAIT timeout expired". If the lock is acquired, you can then execute your delete.
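For illustration, a minimal sketch of how that could be combined with the block from the question (tbl and its id column are the question's placeholders; note this bounds only the wait for the row locks, not the run time of the delete itself):
declare
  e_wait_timeout exception;
  pragma exception_init(e_wait_timeout, -30006);  -- ORA-30006: resource busy; acquire with WAIT timeout expired
begin
  -- try to lock every row of tbl, waiting at most 10 seconds for the locks
  for r in (select id from tbl for update wait 10) loop
    null;
  end loop;
  delete from tbl;  -- the rows are locked by this session, so the delete itself will not block
exception
  when e_wait_timeout then
    dbms_output.put_line('could not lock tbl within 10 seconds');  -- your timeout handling here
end;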
Hope that helps.

Just an idea: you could do the DELETE in a DBMS_JOB.
Then create a procedure to monitor the job, which then calls:
DBMS_JOB.BROKEN(JOBID, TRUE);
DBMS_JOB.REMOVE(JOBID);

DECLARE
  v_timer1 NUMBER;
  v_timer2 NUMBER;
BEGIN
  v_timer1 := dbms_utility.get_time();  -- time in hundredths of a second
  WHILE TRUE
  LOOP
    v_timer2 := dbms_utility.get_time();
    EXIT WHEN (ABS(v_timer1 - v_timer2)/100) > 60; -- cancel after 60 sec.
  END LOOP;
END;
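Putting those two fragments together, a rough sketch of the idea could look like this (dbms_job.submit/broken/remove, dbms_lock.sleep and user_jobs are the standard names; tbl is the question's placeholder, the job queue must be enabled, and note that breaking or removing the job does not abort an execution that is already in flight):
DECLARE
  v_jobid BINARY_INTEGER;
  v_cnt   NUMBER;
  v_start NUMBER;
BEGIN
  -- run the long delete as a background job
  dbms_job.submit(v_jobid, 'BEGIN DELETE FROM tbl; COMMIT; END;');
  COMMIT;  -- the job only starts once the submitting transaction commits

  v_start := dbms_utility.get_time();  -- hundredths of a second
  LOOP
    SELECT COUNT(*) INTO v_cnt FROM user_jobs WHERE job = v_jobid;
    EXIT WHEN v_cnt = 0;  -- a one-off job disappears from the queue when it finishes
    IF (ABS(dbms_utility.get_time() - v_start) / 100) > 60 THEN
      dbms_job.broken(v_jobid, TRUE);  -- timed out: stop the job from being rescheduled
      dbms_job.remove(v_jobid);        -- and drop it from the queue
      EXIT;
    END IF;
    dbms_lock.sleep(1);  -- poll once per second instead of spinning
  END LOOP;
END;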

MariaDB> select sleep(4);
+----------+
| sleep(4) |
+----------+
| 0 |
+----------+
1 row in set (4.002 sec)
MariaDB>
See: https://mariadb.com/kb/en/sleep/

Related

Cause a job failure with sql for data checks

I would like to action some data checks following data imports into my system. I'm checking that all of my key locations have inventory imported for them, and if they don't I would like the job to fail (I then have reporting/alerts set up for when any jobs fail).
I've had a search around and tried a number of options. The lines commented out below are what I have tried, but when I set the INV_CHECK variable above the count level of one of my locations the job still completed successfully. If I run it in TOAD it fails and presents an error, which is what I wanted the job to do.
Declare
  valid_loc NUMBER;
  Inv_check NUMBER;
  no_inv NUMBER;
BEGIN
  select param_value into Inv_check
  from scpomgr.udt_systemparam
  where param_name = 'INV_CHECK';

  select count(*) into valid_loc from
    (select distinct loc
     from scpomgr.inventory
     where loc in ('GB01', 'FR01', 'DE01', 'IT01', 'ES01', 'IE01', 'CN01', 'JP01', 'AU01', 'US01')
     having count(*) > Inv_check
     group by loc);

  if valid_loc < 10 THEN
    --raise_application_error(-20001,'Likely Missing Inv Records');
    --raiseerror('fail',16,1);
    --select 1/0 into no_inv from dual;
    --THROW (51000, 'Process Is Not Finished', 1);
  END IF;
END;
EXIT
Can anyone point me in the right direction of what I've missed or misunderstood?
I've added an action into the IF statement, so I know it's running the part after the THEN. If I run it in TOAD it gives me an error, but if I run it via PuTTY, which is what I use to run batch processes, it comes out as 'COMPLETE' and doesn't show any sort of failure.
So after some trial and error I found the code below gives me the desired result. To make my process table / PuTTY runs display as failed, I needed to use pkg_job.fatal_error, and with that I could pass an error message/code.
Declare
valid_loc NUMBER;
Inv_check NUMBER;
BEGIN
select param_value into Inv_check from
scpomgr.udt_systemparam where param_name = 'INV_CHECK';
select count (*) into valid_loc from
(select distinct loc
from scpomgr.inventory
where loc in ('GB01', 'FR01', 'DE01', 'IT01', 'ES01', 'IE01', 'CN01', 'JP01', 'AU01', 'US01')
group by loc
having count (*) > Inv_check
);
if valid_loc < 10 THEN
pkg_job.fatal_error('Likely Missing Inv Records',-20001);
END IF;
END;
/
EXIT
Hope this helps others or gives ideas of what to try.
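As a side note, if the PuTTY batch run ultimately goes through SQL*Plus, another thing to be aware of is that SQL*Plus does not return a failing exit status on an unhandled error unless told to with WHENEVER SQLERROR. A minimal sketch of that variant, assuming plain SQL*Plus rather than pkg_job (the count query is omitted for brevity):
WHENEVER SQLERROR EXIT FAILURE   -- make sqlplus exit with a non-zero return code on any error
DECLARE
  valid_loc NUMBER := 0;         -- in the real script this would be filled by the count query above
BEGIN
  IF valid_loc < 10 THEN
    raise_application_error(-20001, 'Likely Missing Inv Records');
  END IF;
END;
/
EXIT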

IBM DB2 SQL sleep, wait or delay for stored procedure

I have a small loop procedure that is waiting for another process to write a flag to a table. Is there any way to add a delay so this process doesn't consume so much CPU? I believe it may need to run for 1-2 minutes if everything ends correctly.
BEGIN
DECLARE STOPPED_TOMCAT VARCHAR (1);
UPDATE MRC_MAIN.DAYEND SET DENDSTR = 'Y';
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
WHILE ( STOPPED_TOMCAT <> 'Y')
DO
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
END WHILE;
END;
Use call dbms_alert.sleep(x), where x is the number of seconds.
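Applied to the loop from the question, that could look like the sketch below (assuming the DBMS_ALERT module is available on your DB2 release; the 5-second interval is an arbitrary choice):
BEGIN
  DECLARE STOPPED_TOMCAT VARCHAR(1);
  UPDATE MRC_MAIN.DAYEND SET DENDSTR = 'Y';
  SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
  WHILE (STOPPED_TOMCAT <> 'Y')
  DO
    CALL DBMS_ALERT.SLEEP(5);  -- wait 5 seconds between polls instead of spinning on the CPU
    SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
  END WHILE;
END;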
I don't have the resources to test this solution, but why not try calling the IBM i command DLYJOB in your code:
CALL QCMDEXC('DLYJOB DLY(1)', 13);
The parameter DLY indicates the wait time in seconds and the number 13 is the length of the command string being executed.

DB2 SQL Web Pagination - How to tell I have reached EOF

Friends,
I am trying to find a very simple solution to tell me that I have reached the end of file with web pagination, using FETCH NEXT. I am using Previous & Next buttons to trigger the stored procedure.
**FREE
// RFC Main Grid
CTL-OPT NOMAIN OPTION (*SRCSTMT : *NODEBUGIO);
DCL-PROC PUR027 EXPORT;
DCL-PI PUR027 EXTPROC(*DCLCASE);
StartingRow PACKED(3:0);
NbrOfRows PACKED(3:0);
TotalRows CHAR(10);
RowCount CHAR(10);
Search CHAR(30);
EndOfFile CHAR(3);
BOF CHAR(1);
EOF CHAR(1);
RSL CHAR(2);
END-PI;
IF Search = '';
EXEC SQL
Declare RSCURSOR cursor for
SELECT CDEPT, CDESC, ROW_NUMBER() OVER(ORDER BY CDESC, CDEPT) as ROWNUMBER
FROM CDPL03
ORDER BY CDESC, CDEPT
OFFSET (:StartingRow - 1) * :NbrOfRows ROWS
FETCH NEXT :NbrOfRows ROWS ONLY;
EXEC SQL Open RSCURSOR;
EXEC SQL SET RESULT SETS Cursor RSCURSOR;
ELSE;
EXEC SQL
Declare RSCURSOR2 cursor for
SELECT CDEPT, CDESC, ROW_NUMBER() OVER(ORDER BY CDESC, CDEPT) as ROWNUMBER
FROM CDPL03
WHERE CDESC LIKE '%' concat trim(:Search) concat '%' OR
CDEPT LIKE '%' concat trim(:Search) concat '%'
ORDER BY CDESC, CDEPT
OFFSET (:StartingRow - 1) * :NbrOfRows ROWS
FETCH NEXT :NbrOfRows ROWS ONLY;
EXEC SQL Open RSCURSOR2;
EXEC SQL SET RESULT SETS Cursor RSCURSOR2;
ENDIF;
// Begin & End of File
IF StartingRow = 1;
BOF = '1';
EOF = '0';
ELSE;
BOF = '0';
EOF = '0';
ENDIF;
// Validate for SQL errors
IF SQLSTATE = '00000';
RSL = '00';
//TotalRows2 = %CHAR(TotalRows);
ELSEIF SQLSTATE = '02000';
RSL = '10';
ELSE;
RSL = '20';
ENDIF;
RETURN;
END-PROC PUR027;
// To create the service program:
// CRTSRVPGM SRVPGM(BPCSO/PUR027WS)
// MODULE(BPCSO/PUR027W)
// SRCFILE(BPCSS/PURBNDF) SRCMBR(PUR027WB)
When reading multiple records in a block, I retrieve the number of records fetched with GET DIAGNOSTICS like this:
exec sql get diagnostics
:cnt = row_count;
Then if the number of records fetched is less than the requested number of records, I know that I am on the last page.
There is a problem with this method though. If the last page is full, you don't know it until you try to read the next page, and it is empty. So one way to handle that is to request one record more than you are going to present on the page. That is, if you are presenting 25 records per page, request 26. If your result set has 26 records, then there is at least one record on the next page. Still only present 25 records, and increment your offset by 25 records each time, just request 26 records. If the record set has less than 26 records, then you know you are on the last page.
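A rough sketch of that approach against the cursor from the question (the page data structure, its field lengths, cnt and PAGECUR are illustrative assumptions rather than names from the post; if your DB2 level rejects the :NbrOfRows + 1 expression in the FETCH clause, load the value into its own host variable first):
// host structure array sized for a 25-row page plus the one extra probe row
DCL-DS page QUALIFIED DIM(26);
  CDEPT CHAR(10);   // assumed length
  CDESC CHAR(30);   // assumed length
END-DS;
DCL-S cnt INT(10);

EXEC SQL
  Declare PAGECUR cursor for
    SELECT CDEPT, CDESC
    FROM CDPL03
    ORDER BY CDESC, CDEPT
    OFFSET (:StartingRow - 1) * :NbrOfRows ROWS
    FETCH NEXT :NbrOfRows + 1 ROWS ONLY;   // ask for one row more than the page shows
EXEC SQL Open PAGECUR;
EXEC SQL Fetch NEXT FROM PAGECUR FOR 26 ROWS INTO :page;
EXEC SQL get diagnostics :cnt = row_count; // rows actually fetched

IF cnt <= NbrOfRows;  // the extra row did not come back, so this is the last page
  EOF = '1';
ELSE;                 // 26 rows returned: present the first 25, more pages exist
  EOF = '0';
ENDIF;
EXEC SQL Close PAGECUR;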
Take a look at SQLERRD(2)
For an OPEN statement, if the cursor is insensitive to changes, SQLERRD(2) contains the actual number of rows in the result set. If the cursor is sensitive to changes, SQLERRD(2) contains an estimated number of rows in the result set.
You can also use GET DIAGNOSTICS after the open for the same info...
DB2_NUMBER_ROWS
If the previous SQL statement was an OPEN or a FETCH which caused the size of the result table to be known, returns the number of rows in the result table. For SENSITIVE cursors, this value can be thought of as an approximation since rows inserted and deleted will affect the next retrieval of this value. Otherwise, the value zero is returned.
Key point for both: for an exact count, you'd need to declare your cursor INSENSITIVE, which creates a copy of your selected rows so that inserts, deletes and updates don't affect the results. There's also a performance hit.
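For instance, a hedged sketch of reading the total right after the OPEN (TotalCnt and RSCURSOR3 are illustrative names; the cursor is declared INSENSITIVE SCROLL so the count is exact rather than an estimate, at the cost of materializing the result set):
DCL-S TotalCnt INT(10);

EXEC SQL
  Declare RSCURSOR3 INSENSITIVE SCROLL cursor for
    SELECT CDEPT, CDESC
    FROM CDPL03
    ORDER BY CDESC, CDEPT;
EXEC SQL Open RSCURSOR3;
EXEC SQL get diagnostics :TotalCnt = DB2_NUMBER_ROWS;  // number of rows in the whole result table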

Timeout on advisory locks in postgresql

I'm migrating from ORACLE. Currently I'm trying to port this call:
lkstat := DBMS_LOCK.REQUEST(lkhndl, DBMS_LOCK.X_MODE, lktimeout, true);
This function tries to acquire the lock lkhndl and returns 1 if it fails to get it within lktimeout seconds.
In postgresql I use
pg_advisory_xact_lock(lkhndl);
However, it seems to wait for the lock forever. pg_try_advisory_xact_lock returns immediately if it fails. Is there a way to implement a timeout version of lock acquisition?
There is the lock_timeout setting, but I'm not sure whether it applies to advisory locks, or how pg_advisory_xact_lock would behave after the timeout.
This is a prototype of a wrapper that poorly emulates DBMS_LOCK.REQUEST, constrained to only one type of lock (a transaction-scope advisory lock).
To make the function fully compatible with Oracle's, it would need several hundred lines. But that's a start.
CREATE OR REPLACE FUNCTION
  advisory_xact_lock_request(p_key bigint, p_timeout numeric)
  RETURNS integer
  LANGUAGE plpgsql AS $$
/* Imitate DBMS_LOCK.REQUEST for a PostgreSQL advisory lock.
   Return 0 on Success, 1 on Timeout, 3 on Parameter Error. */
DECLARE
  t0 timestamptz := clock_timestamp();
BEGIN
  IF p_timeout NOT BETWEEN 0 AND 86400 THEN
    RAISE WARNING 'Invalid timeout parameter';
    RETURN 3;
  END IF;
  LOOP
    IF pg_try_advisory_xact_lock(p_key) THEN
      RETURN 0;
    ELSIF clock_timestamp() > t0 + (p_timeout||' seconds')::interval THEN
      RAISE WARNING 'Could not acquire lock in % seconds', p_timeout;
      RETURN 1;
    ELSE
      PERFORM pg_sleep(0.01); /* poll every 10 ms */
    END IF;
  END LOOP;
END;
$$;
Test it using this code:
SELECT CASE
         WHEN advisory_xact_lock_request(1, 2.5) = 0
         THEN pg_sleep(120)
       END;  -- and repeat this in a parallel session
/* Usage in PL/pgSQL */
lkstat := advisory_xact_lock_request(lkhndl, lktimeout);

T-SQL: Stop query after certain time

I am looking to run a query in T-SQL (SQL Server Management Studio) that will stop after X number of seconds, say 30 seconds. My goal is to stop a query after 6 minutes. I know the query below is not correct, but I wanted to give you an idea.
Select * from DB_Table
where (gatedate()+datepart(seconds,'00:00:30')) < getdate()
In SQL Server Management Studio, bring up the options dialog (Tools > Options) and drill down to "Query Execution / SQL Server / General". The Execution time-out setting is what you want: a value of 0 specifies an infinite time-out, and a positive value is the time-out limit in seconds.
NOTE: this value "is the cumulative time-out for all network reads during command execution or processing of the results. A time-out can still occur after the first row is returned, and does not include user processing time, only network read time." (per MSDN).
If you are using ADO.Net (System.Data.SqlClient), the SqlCommand object's CommandTimeout property is what you want. The connection string timeout keywords (Connect Timeout, Connection Timeout or Timeout) only specify how long to wait while establishing a connection with SQL Server; they have nothing to do with query execution.
Yes, let's try it out.
This is a query that will run for 6 minutes:
DECLARE @i INT = 1;
WHILE (@i <= 360)
BEGIN
    WAITFOR DELAY '00:00:01'
    PRINT FORMAT(GETDATE(), 'hh:mm:ss')
    SET @i = @i + 1;
END
Now create an Agent Job that will run every 10 seconds with this step:
-- Put here a part of the code you are targeting or even the whole query
DECLARE @Search_for_query NVARCHAR(300) SET @Search_for_query = '%FORMAT(GETDATE(),''hh:mm:ss'')%'
-- Define the maximum time you want the query to run
DECLARE @Time_to_run_in_minutes INT SET @Time_to_run_in_minutes = 1
DECLARE @SPID_older_than SMALLINT
SET @SPID_older_than = (
    SELECT
        --text,
        session_id
        --,start_time
    FROM sys.dm_exec_requests
    CROSS APPLY sys.dm_exec_sql_text(sql_handle)
    WHERE text LIKE @Search_for_query
      AND text NOT LIKE '%sys.dm_exec_sql_text(sql_handle)%' -- This will keep the killing job from killing itself
      AND start_time < DATEADD(MINUTE, -@Time_to_run_in_minutes, GETDATE())
)
-- SELECT @SPID_older_than -- Use this for testing
DECLARE @SQL NVARCHAR(1000)
SET @SQL = 'KILL ' + CAST(@SPID_older_than AS VARCHAR(20))
EXEC (@SQL)
Make sure the job is run by sa or some valid alternative.
Now you can adapt it to your code by changing:
@Search_for_query = a part of the query you are looking for
@Time_to_run_in_minutes = the maximum number of minutes the query is allowed to run
What will you be using to execute this query? If you create a .NET application, the default command timeout is 30 seconds. You can change it to 6 minutes if you wish by setting SqlCommand.CommandTimeout.
In SQL Server Management Studio, I just right-click the connection in the Object Explorer pane, choose Activity Monitor, expand Processes, right-click the query that's running, and choose Kill Process.