Postgres query over ODBC an order of magnitude slower?

We have an application which gets some data from a PostgreSQL 9.0.3 database through the psqlodbc 09.00.0200 driver in the following way:
1) SQLExecDirect with START TRANSACTION
2) SQLExecDirect with
DECLARE foo SCROLL CURSOR FOR
SELECT table.alotofcolumns
FROM table
ORDER BY name2, id LIMIT 10000
3) SQLPrepare with
SELECT table.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', ?, ?) table (column definitions)
LEFT OUTER JOIN languagetable l01 ON (l01.lang = 'EN'
AND l01.type = 'some type'
AND l01.grp = 'some group'
AND l01.key = table.somecolumn)
[~20 more LEFT OUTER JOINS in the same style, but for another column]
4) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to 1
5) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to -1
6) SQLExecute with param1 set to SQL_FETCH_RELATIVE and param2 set to 0
7) deallocate all, close cursor, end transaction
The function fetchcur executes FETCH RELATIVE $3 IN $1 INTO rec, where rec is a record, and returns that record (a sketch of such a function is shown after the timings below). Steps 4-6 are executed again and again on user request, and a lot more queries are executed in this transaction in the meantime. It can also take quite some time before another user request is made. The queries usually take this long:
4) ~ 130 ms
5) ~ 115 ms
6) ~ 110 ms
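For reference, a minimal sketch of what such a fetchcur function could look like in PL/pgSQL. This is an assumption based on the description above, not the application's actual definition:
CREATE OR REPLACE FUNCTION fetchcur(cur refcursor, fetchtype integer, amount integer)
RETURNS record AS $$
DECLARE
    rec record;
BEGIN
    -- only the SQL_FETCH_RELATIVE case used in steps 4-6 is sketched here
    FETCH RELATIVE amount FROM cur INTO rec;
    RETURN rec;
END;
$$ LANGUAGE plpgsql;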
These timings are normally too slow for a fast user experience, so I tried the same statements from the psql command line with \timing on. For steps 3-6 I used these statements:
3)
PREPARE p_foo (INTEGER, INTEGER) AS
SELECT table.alotofcolumns, l01.languagedescription
FROM fetchcur('foo', $1, $2) table (column definitions)
LEFT OUTER JOIN languagetable l01 ON (l01.lang = 'EN'
AND l01.type = 'some type'
AND l01.grp = 'some group'
AND l01.key = table.somecolumn)
[~20 more LEFT OUTER JOINS in the same style, but for another column]
4-6)
EXPLAIN ANALYZE EXECUTE p_foo (6, x);
For the first EXECUTE it took 89 ms, but then it went down to ~7 ms. Even if I wait several minutes between the executes, it stays under 10 ms per query. So where could the additional 100 ms have gone? The application and database are on the same system, so network delay shouldn't be an issue. Each LEFT OUTER JOIN only returns one row, and only one column of that result is added to the result set. So the result is one row with ~200 columns, mostly of type VARCHAR and INTEGER. That shouldn't be so much data that it takes around 100 ms to transfer on the same machine. Any hints would be helpful.
The machine has 2 GB of RAM and parameters are set to:
shared_buffers = 512MB
effective_cache_size = 256MB
work_mem = 16MB
maintenance_work_mem = 256MB
temp_buffers = 8MB
wal_buffers = 1MB
EDIT: I just found out how to create a mylog from psqlodbc, but I can't find timing values in there.
EDIT2: I could also add a timestamp to every line. This really shows that it takes >100 ms until a response is received by the psqlodbc driver. So I tried again with psql and added the option -h 127.0.0.1 to make sure it also goes over TCP/IP. The result with psql is <10 ms. How is this possible?
00:07:51.026 [3086550720][SQLExecute]
00:07:51.026 [3086550720]PGAPI_Execute: entering...1
00:07:51.026 [3086550720]PGAPI_Execute: clear errors...
00:07:51.026 [3086550720]prepareParameters was not called, prepare state:3
00:07:51.026 [3086550720]SC_recycle_statement: self= 0x943b1e8
00:07:51.026 [3086550720]PDATA_free_params: ENTER, self=0x943b38c
00:07:51.026 [3086550720]PDATA_free_params: EXIT
00:07:51.026 [3086550720]Exec_with_parameters_resolved: copying statement params: trans_status=6, len=10128, stmt='SELECT [..]'
00:07:51.026 [3086550720]ResolveOneParam: from(fcType)=-15, to(fSqlType)=4(23)
00:07:51.026 [3086550720]cvt_null_date_string=0 pgtype=23 buf=(nil)
00:07:51.026 [3086550720]ResolveOneParam: from(fcType)=4, to(fSqlType)=4(23)
00:07:51.026 [3086550720]cvt_null_date_string=0 pgtype=23 buf=(nil)
00:07:51.026 [3086550720] stmt_with_params = 'SELECT [..]'
00:07:51.027 [3086550720]about to begin SC_execute
00:07:51.027 [3086550720] Sending SELECT statement on stmt=0x943b1e8, cursor_name='SQL_CUR0x943b1e8' qflag=0,1
00:07:51.027 [3086550720]CC_send_query: conn=0x9424668, query='SELECT [..]'
00:07:51.027 [3086550720]CC_send_query: conn=0x9424668, query='SAVEPOINT _EXEC_SVP_0x943b1e8'
00:07:51.027 [3086550720]send_query: done sending query 35bytes flushed
00:07:51.027 [3086550720]in QR_Constructor
00:07:51.027 [3086550720]exit QR_Constructor
00:07:51.027 [3086550720]read 21, global_socket_buffersize=4096
00:07:51.027 [3086550720]send_query: got id = 'C'
00:07:51.027 [3086550720]send_query: ok - 'C' - SAVEPOINT
00:07:51.027 [3086550720]send_query: setting cmdbuffer = 'SAVEPOINT'
00:07:51.027 [3086550720]send_query: returning res = 0x8781c90
00:07:51.027 [3086550720]send_query: got id = 'Z'
00:07:51.027 [3086550720]QResult: enter DESTRUCTOR
00:07:51.027 [3086550720]QResult: in QR_close_result
00:07:51.027 [3086550720]QResult: free memory in, fcount=0
00:07:51.027 [3086550720]QResult: free memory out
00:07:51.027 [3086550720]QResult: enter DESTRUCTOR
00:07:51.027 [3086550720]QResult: exit close_result
00:07:51.027 [3086550720]QResult: exit DESTRUCTOR
00:07:51.027 [3086550720]send_query: done sending query 1942bytes flushed
00:07:51.027 [3086550720]in QR_Constructor
00:07:51.027 [3086550720]exit QR_Constructor
00:07:51.027 [3086550720]read -1, global_socket_buffersize=4096
00:07:51.027 [3086550720]Lasterror=11
00:07:51.133 [3086550720]!!! poll ret=1 revents=1
00:07:51.133 [3086550720]read 4096, global_socket_buffersize=4096
00:07:51.133 [3086550720]send_query: got id = 'T'
00:07:51.133 [3086550720]QR_fetch_tuples: cursor = '', self->cursor=(nil)
00:07:51.133 [3086550720]num_fields = 166
00:07:51.133 [3086550720]READING ATTTYPMOD
00:07:51.133 [3086550720]CI_read_fields: fieldname='id', adtid=23, adtsize=4, atttypmod=-1 (rel,att)=(0,0)
[last two lines repeated for the other columns]
00:07:51.138 [3086550720]QR_fetch_tuples: past CI_read_fields: num_fields = 166
00:07:51.138 [3086550720]MALLOC: tuple_size = 100, size = 132800
00:07:51.138 [3086550720]QR_next_tuple: inTuples = true, falling through: fcount = 0, fetch_number = 0
00:07:51.139 [3086550720]qresult: len=3, buffer='282'
[last line repeated for the other columns]
00:07:51.140 [3086550720]end of tuple list -- setting inUse to false: this = 0x87807e8 SELECT 1
00:07:51.140 [3086550720]_QR_next_tuple: 'C' fetch_total = 1 & this_fetch = 1
00:07:51.140 [3086550720]QR_next_tuple: backend_rows < CACHE_SIZE: brows = 0, cache_size = 0
00:07:51.140 [3086550720]QR_next_tuple: reached eof now
00:07:51.140 [3086550720]send_query: got id = 'Z'
00:07:51.140 [3086550720] done sending the query:
00:07:51.140 [3086550720]extend_column_bindings: entering ... self=0x943b270, bindings_allocated=166, num_columns=166
00:07:51.140 [3086550720]exit extend_column_bindings=0x9469500
00:07:51.140 [3086550720]SC_set_Result(943b1e8, 87807e8)
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]retval=0
00:07:51.140 [3086550720]CC_send_query: conn=0x9424668, query='RELEASE _EXEC_SVP_0x943b1e8'
00:07:51.140 [3086550720]send_query: done sending query 33bytes flushed
00:07:51.140 [3086550720]in QR_Constructor
00:07:51.140 [3086550720]exit QR_Constructor
00:07:51.140 [3086550720]read -1, global_socket_buffersize=4096
00:07:51.140 [3086550720]Lasterror=11
00:07:51.140 [3086550720]!!! poll ret=1 revents=1
00:07:51.140 [3086550720]read 19, global_socket_buffersize=4096
00:07:51.140 [3086550720]send_query: got id = 'C'
00:07:51.140 [3086550720]send_query: ok - 'C' - RELEASE
00:07:51.140 [3086550720]send_query: setting cmdbuffer = 'RELEASE'
00:07:51.140 [3086550720]send_query: returning res = 0x877cd30
00:07:51.140 [3086550720]send_query: got id = 'Z'
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]QResult: in QR_close_result
00:07:51.140 [3086550720]QResult: free memory in, fcount=0
00:07:51.140 [3086550720]QResult: free memory out
00:07:51.140 [3086550720]QResult: enter DESTRUCTOR
00:07:51.140 [3086550720]QResult: exit close_result
00:07:51.140 [3086550720]QResult: exit DESTRUCTOR
EDIT3: I realized I didn't use the same query from the mylog in the psql test before. It seems psqlodbc doesn't issue a PREPARE for SQLPrepare and SQLExecute; it interpolates the parameter values and sends the full query text. As araqnid suggested, I set the log_duration parameter to 0 and compared the results from the PostgreSQL log with those from the app and psql. The results are as follows:
                              psql/app    pglog
query executed from app:      110 ms      70 ms
psql with PREPARE/EXECUTE:     10 ms       5 ms
psql with full SELECT:         85 ms      70 ms
So how should I interpret these values? It seems most of the time is spent sending the full query text to the database (10000 chars) and generating an execution plan. If that is true, changing the calls to SQLPrepare and SQLExecute to explicit PREPARE/EXECUTE statements executed over SQLExecDirect could solve the problem. Any objections?

I finally found the problem: psqlodbc's SQLPrepare/SQLExecute does not execute a PREPARE/EXECUTE by default. The driver itself builds the full SELECT and sends that.
The solution is to add UseServerSidePrepare=1 to the odbc.ini or to the ConnectionString for SQLDriverConnect. The total execution time for one query, measured from the application, dropped from >100 ms to 5-10 ms.
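For reference, a minimal sketch of where the option goes in odbc.ini; the DSN name and connection details are placeholders, not the application's actual settings:
[myPostgresDSN]
Driver = PostgreSQL
Servername = 127.0.0.1
Port = 5432
Database = mydb
UseServerSidePrepare = 1
Alternatively, append UseServerSidePrepare=1; to the connection string passed to SQLDriverConnect.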

I don't think the timings between psql and your program are comparable.
Maybe I'm missing something, but in psql you are only preparing the statements, never really fetching the data. EXPLAIN PLAN does not send data either.
So the time difference is most probably the network traffic that is needed to send all rows from the server to the client.
The only way to reduce this time is to either get a faster network or to select fewer columns. Do you really need all the columns that are included in "alotofcolumns"?

Related

Adding microseconds to Timestamp and assigning to Host variable in DB2 - BIND ERROR FROM BUILD

I've been struggling with this for far too long and can't seem to wrap my head around what really is the issue.
As an input I receive from the caller TMS = 2020-01-02-03.04.05.060708. I then want to achieve the following, but my build ends in a BINDLOG ERROR (see bottom).
DCL TMS CHAR(26);
DCL START CHAR(26);
DCL END CHAR(26);
IF TMS ^= ''
THEN
DO;
EXEC SQL SET :START = TMS + 1 MICROSECOND;
EXEC SQL SET :END = TMS + 30 DAYS;
END;
ELSE
DO;
EXEC SQL SET :START = '0001-01-01 00:00:00.000000';
EXEC SQL SET :END = CURRENT TIMESTAMP;
END;
It seems TMS is not properly mapped to the respective host variable, even though TMS is in fact a timestamp. Looking at the documentation only seems to confuse me more. Any clues on how to perform arithmetic on date/time variables?
I have tried adding days to a DATE, which worked fine. But adding days or microseconds to a TIMESTAMP is causing me a lot of headaches. The code below works like a charm; I'm just struggling to understand why.
DCL TMS_VALID CHAR(10);
DCL TMS_EMPTY CHAR(10);
IF TMS ^= ''
THEN
DO;
TMS_VALID = SUBSTR(TMS,1,10);
EXEC SQL SET :START = DATE(:TMS_VALID); /* Would like to add 1 microsecond to :START*/
EXEC SQL SET :END = DATE(:TMS_VALID) + 30 DAYS;
END;
ELSE
DO;
TMS_EMPTY = SUBSTR(CURRENT_TIMESTAMP,1,10);
EXEC SQL SET :START = DATE(:TMS_EMPTY) - 30 DAYS;
EXEC SQL SET :END = DATE(:TMS_EMPTY);
END;
Below is the BINDLOG. Any help would be greatly appreciated, thanks!
DSNX200I :DB2A BIND SQL ERROR
USING TSTSD AUTHORITY
PLAN=(NOT APPLICABLE)
DBRM=AB12345
STATEMENT=455
SQLCODE=-206
SQLSTATE=42703
TOKENS=AB12345I.TMS
CSECT NAME=DSNXORSO
RDS CODE=-225
DSNX200I :DB2A BIND SQL ERROR
USING TSTSD AUTHORITY
PLAN=(NOT APPLICABLE)
DBRM=AB12345
STATEMENT=456
SQLCODE=-206
SQLSTATE=42703
TOKENS=AB12345I.TMS
CSECT NAME=DSNXORSO
RDS CODE=-225
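For what it's worth, SQLCODE -206 with SQLSTATE 42703 on token AB12345I.TMS suggests that TMS inside the EXEC SQL SET statements is being resolved as a column name rather than as the host variable, because it is written without the colon prefix. A hedged sketch of the arithmetic with the colon prefix and an explicit TIMESTAMP() cast (both are assumptions about the intended fix, not tested against this program):
EXEC SQL SET :START = TIMESTAMP(:TMS) + 1 MICROSECOND;
EXEC SQL SET :END = TIMESTAMP(:TMS) + 30 DAYS;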

IBM DB2 SQL sleep, wait or delay for stored procedure

I have a small loop procedure that is waiting for another process to write a flag to a table. Is there any way to add a delay so this process doesn't consume so much CPU? I believe it may need to run for 1-2 minutes if everything ends correctly.
BEGIN
DECLARE STOPPED_TOMCAT VARCHAR (1);
UPDATE MRC_MAIN.DAYEND SET DENDSTR = 'Y';
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
WHILE ( STOPPED_TOMCAT <> 'Y')
DO
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
END WHILE;
END;
Use CALL DBMS_ALERT.SLEEP(x), where x is the number of seconds.
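Assuming Db2 LUW with the DBMS_ALERT compatibility module available (an assumption; it is not present in every DB2 edition), the polling loop from the question could be throttled roughly like this:
BEGIN
DECLARE STOPPED_TOMCAT VARCHAR (1);
UPDATE MRC_MAIN.DAYEND SET DENDSTR = 'Y';
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
WHILE ( STOPPED_TOMCAT <> 'Y')
DO
CALL DBMS_ALERT.SLEEP(5); -- pause five seconds between polls instead of spinning
SET STOPPED_TOMCAT = (SELECT TOMCSTP FROM MRC_MAIN.DAYEND);
END WHILE;
END;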
I don't have the resources to test this solution, but why not try calling the IBM i command DLYJOB in your code:
CALL QCMDEXC('DLYJOB DLY(1)', 13);
The parameter DLY indicates the wait time in seconds and the number 13 is the length of the command string being executed.

PowerBuilder 12.5 sql cursors transaction size error

I have a major problem and am trying to find a workaround. I have an application in PB12.5 that works on both SQL Server and Oracle databases (with a lot of data),
and I am using a CURSOR at one point, but the application crashes only on SQL Server. Debugging in PB, I found that the SQL connection returns -1 due to the huge transaction size. But I want to fetch my data row by row. Is there any workaround to fetch data in pages, i.e. fetch the first 1000 rows, then the next 1000, and so on? I hope you understand what I want to achieve (to break up the fetch process and so reduce the transaction size, if possible). Here is my code:
DECLARE trans_Curs CURSOR FOR
SELECT associate_trans.trans_code
FROM associate_trans
WHERE associate_trans.usage_code = :ggs_vars.usage ORDER BY associate_trans.trans_code ;
OPEN trans_Curs;
FETCH trans_Curs INTO :ll_transId;
DO WHILE sqlca.sqlcode = 0
ll_index += 1
hpb_1.Position = ll_index
if not guo_associates.of_asstrans_updatemaster( ll_transId, ls_error) then
ROLLBACK;
CLOSE trans_Curs;
SetPointer(Arrow!)
MessageBox("Update Process", "Problem with the update process on~r~n" + sqlca.sqlerrtext)
cb_2.Enabled = TRUE
return
end if
FETCH trans_Curs INTO :ll_transId;
LOOP
CLOSE trans_Curs;
Since the structure of your source table is not fully presented, I'll make some assumptions here.
Let's assume that the records include a unique field that can be used as a reference (could be a counter or a timestamp). I'll assume here that the field is a timestamp.
Let's also assume that PB accepts cursors with parameters (not all solutions do; if it does not, there are simple workarounds).
You could modify your cursor to be something like:
[Note: I'm assuming also that the syntax presented here is valid for your environment; if not, adaptations are simple]
DECLARE TopTime TIMESTAMP ;
DECLARE trans_Curs CURSOR FOR
SELECT ots.associate_trans.trans_code
FROM ots.associate_trans
WHERE ots.associate_trans.usage_code = :ggs_vars.usage
AND ots.associate_trans.Timestamp < TopTime
ORDER BY ots.associate_trans.trans_code
LIMIT 1000 ;
:
:
IF (p_Start_Timestamp IS NULL) THEN
TopTime = CURRENT_TIMESTAMP() ;
ELSE
TopTime = p_Start_Timestamp ;
END IF ;
OPEN trans_Curs;
FETCH trans_Curs INTO :ll_transId;
:
:
In the above:
p_Start_Timestamp is a received timestamp parameter which would initially be empty and then will contain the OLDEST timestamp fetched in the previous invocation,
CURRENT_TIMESTAMP() is a function of your environment returning the current timestamp.
This solution will only work when you need to progress in one direction (i.e. from present to past) and assumes that you are accumulating all the fetched records in an internal buffer in case you need to scroll up again.
Hope this makes things clearer.
First of all, thank you FDavidov for your effort. I managed to do it using a dynamic DataStore instead of a cursor, so here is my solution in case someone else needs it.
String ls_sql, ls_syntax, ls_err
Long ll_row, ll_count
DataStore lds_info
ls_sql = "SELECT associate_trans.trans_code " &
+ " FROM associate_trans " &
+ " WHERE associate_trans.usage_code = '" + ggs_vars.usage +"' "&
+ " ORDER BY associate_trans.trans_code"
ls_syntax = SQLCA.SyntaxFromSQL( ls_sql, "", ls_err )
IF ls_err <> '' THEN
MessageBox( 'Error...', ls_err )
RETURN
END IF
lds_info = CREATE DataStore
lds_info.Create( ls_syntax, ls_err )
lds_info.SetTransObject( SQLCA )
ll_count = lds_info.Retrieve( )  // Retrieve() returns the number of rows fetched
DO WHILE sqlca.sqlcode = 0 and ll_row <= ll_count
FOR ll_row = 1 TO ll_count
ll_transId = lds_info.GetItemNumber( ll_row, 'trans_code' )
ll_index += 1
hpb_1.Position = ll_index
do while yield(); loop
if not guo_associates.of_asstrans_updatemaster( ll_transId, ls_error) then
ROLLBACK;
DESTROY lds_info
SetPointer(Arrow!)
MessageBox("Update Process", "Problem with the update process on~r~n" + sqlca.sqlerrtext)
cb_2.Enabled = TRUE
return
end if
NEXT
DESTROY lds_info
LOOP

Bulk Collect with million rows to insert.......Missing Rows?

I want to insert all the rows from the cursor into a table, but it is not inserting all the rows. Only some rows get inserted. Please help.
I have created a procedure BPS_SPRDSHT which takes 3 input parameters.
PROCEDURE BPS_SPRDSHT(p_period_name VARCHAR2,p_currency_code VARCHAR2,p_source_name VARCHAR2)
IS
CURSOR c_sprdsht
IS
SELECT gcc.segment1 AS company, gcc.segment6 AS prod_seg, gcc.segment2 dept,
gcc.segment3 accnt, gcc.segment4 prd_grp, gcc.segment5 projct,
gcc.segment7 future2, gljh.period_name,gljh.je_source,NULL NULL1,NULL NULL2,NULL NULL3,NULL NULL4,gljh.currency_code Currency,
gjlv.entered_dr,gjlv.entered_cr, gjlv.accounted_dr, gjlv.accounted_cr,gljh.currency_conversion_date,
NULL NULL6,gljh.currency_conversion_rate ,NULL NULL8,NULL NULL9,NULL NULL10,NULL NULL11,NULL NULL12,NULL NULL13,NULL NULL14,NULL NULL15,
gljh.je_category ,NULL NULL17,NULL NULL18,NULL NULL19,tax_code
FROM gl_je_lines_v gjlv, gl_code_combinations gcc, gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code!='STAT'
AND gljh.currency_code=NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name||'%';
type t_spr is table of c_sprdsht%rowtype;
v_t_spr t_spr := t_spr();
l_error_count NUMBER; -- number of rows that failed in the FORALL below
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'TOTAL ROWS FETCHED FOR SPREADSHEETS- '|| v_t_spr.count);
IF v_t_spr.count > 0 THEN
BEGIN
FORALL I IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
EXCEPTION
WHEN OTHERS THEN
l_error_count := SQL%BULK_EXCEPTIONS.count;
fnd_file.put_line(fnd_file.output,'Number of failures: ' || l_error_count);
FOR l IN 1 .. l_error_count LOOP
DBMS_OUTPUT.put_line('Error: ' || l ||
' Array Index: ' || SQL%BULK_EXCEPTIONS(l).error_index ||
' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(l).ERROR_CODE));
END LOOP;
END;
END IF;
fnd_file.put_line(fnd_file.output,'END TIME: '||TO_CHAR (SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END BPS_SPRDSHT;
Total rows to be inserted=568388
No of rows getting inserted=48345.
Oracle uses two engines to process PL/SQL code. All procedural code is handled by the PL/SQL engine while all SQL is handled by the SQL statement executor, or SQL engine. There is an overhead associated with each context switch between the two engines.
The entire PL/SQL block could be written in plain SQL, which will be much faster and require less code.
INSERT INTO custom.pwr_bps_gl_register
SELECT gcc.segment1 AS company,
gcc.segment6 AS prod_seg,
gcc.segment2 dept,
gcc.segment3 accnt,
gcc.segment4 prd_grp,
gcc.segment5 projct,
gcc.segment7 future2,
gljh.period_name,
gljh.je_source,
NULL NULL1,
NULL NULL2,
NULL NULL3,
NULL NULL4,
gljh.currency_code Currency,
gjlv.entered_dr,
gjlv.entered_cr,
gjlv.accounted_dr,
gjlv.accounted_cr,
gljh.currency_conversion_date,
NULL NULL6,
gljh.currency_conversion_rate ,
NULL NULL8,
NULL NULL9,
NULL NULL10,
NULL NULL11,
NULL NULL12,
NULL NULL13,
NULL NULL14,
NULL NULL15,
gljh.je_category ,
NULL NULL17,
NULL NULL18,
NULL NULL19,
tax_code
FROM gl_je_lines_v gjlv,
gl_code_combinations gcc,
gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code != 'STAT'
AND gljh.currency_code =NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name || '%';
Update
It is a myth that frequent commits in PL/SQL are good for performance.
Thomas Kyte explained it beautifully here:
Frequent commits -- sure, "frees up" that undo -- which invariabley
leads to ORA-1555 and the failure of your process. Thats good for
performance right?
Frequent commits -- sure, "frees up" locks -- which throws
transactional integrity out the window. Thats great for data
integrity right?
Frequent commits -- sure "frees up" redo log buffer space -- by
forcing you to WAIT for a sync write to the file system every time --
you WAIT and WAIT and WAIT. I can see how that would "increase
performance" (NOT). Oh yeah, the fact that the redo buffer is
flushed in the background
every three seconds
when 1/3 full
when 1meg full
would do the same thing (free up this resource) AND not make you wait.
frequent commits -- there is NO resource to free up -- undo is undo,
big old circular buffer. It is not any harder for us to manage 15
gigawads or 15 bytes of undo. Locks -- well, they are an attribute
of the data itself, it is no more expensive in Oracle (it would be in
db2, sqlserver, informix, etc) to have one BILLION locks vs one lock.
The redo log buffer -- that is continously taking care of itself,
regardless of whether you commit or not.
First of all, let me point out that there is a serious bug in the code you are using; that is the reason you are not inserting all the records:
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht
BULK COLLECT INTO v_t_spr -- this OVERWRITES your array!
-- it does not add new records!
limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
Each iteration OVERWRITES the contents of v_t_spr with the next 50,000 rows to be read.
Actually the 48345 records you are inserting are simply the last block read during the last iteration.
the "insert" statemend should be inside the same loop: you should do an insert for each 50,000 rows read.
you should have written it this way:
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr limit 50000;
EXIT WHEN c_sprdsht%notfound;
FORALL I IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
...
...
END LOOP;
CLOSE c_sprdsht;
If you were expecting to have the whole table loaded in memory for doing just one unique insert, then you wouldn't have needed any loop or any "limit 50000" clause... and actually you could have used simply the "insert ... select" approach.
Now: a VERY GOOD reason for NOT using an "insert ... select" could be that there are so many rows in the source table that such an insert would make the rollback segments grow so much that there is simply not enough physical space on your server to hold them. But if that is the issue (you can't hold so much rollback data in a single transaction), you should also perform a COMMIT for each 50,000-record block; otherwise your loop would not solve the problem: it would just be slower than the "insert ... select" and it would generate the same "out of rollback space" error (I don't remember the exact error message).
Now, issuing a commit every 50,000 records is not the nicest thing to do, but if your system really is not big enough to handle the needed rollback space, you have no other way out (or at least I am not aware of one).
Don't use EXIT WHEN c_sprdsht%NOTFOUND (this is the cause of your missing rows); instead use EXIT WHEN v_t_spr.COUNT = 0.
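Putting the two corrections together, the fetch/insert loop would look roughly like this (a sketch based on the answers above; the bulk-exception handler and logging from the original procedure are omitted):
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr LIMIT 50000;
EXIT WHEN v_t_spr.COUNT = 0; -- %NOTFOUND would drop the final, partially filled batch
FORALL i IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
END LOOP;
CLOSE c_sprdsht;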

T-SQL: Stop query after certain time

I am looking to run a query in T-SQL (MS SQL Server Management Studio) that will stop after X number of seconds, say 30 seconds. My goal is to stop a query after 6 minutes. I know the query below is not correct, but I wanted to give you an idea.
Select * from DB_Table
where (getdate()+datepart(seconds,'00:00:30')) < getdate()
In SQL Server Management Studio, bring up the options dialog (Tools > Options) and drill down to "Query Execution / SQL Server / General".
The Execution time-out setting is what you want. A value of 0 specifies an infinite time-out; a positive value is the time-out limit in seconds.
NOTE: this value "is the cumulative time-out for all network reads during command execution or processing of the results. A time-out can still occur after the first row is returned, and does not include user processing time, only network read time." (per MSDN).
If you are using ADO.Net (System.Data.SqlClient), the SqlCommand object's CommandTimeout property is what you want. The connection string timeout keywords (Connect Timeout, Connection Timeout or Timeout) specify how long to wait while establishing a connection with SQL Server; they have nothing to do with query execution.
Yes, let's try it out.
This is a query that will run for 6 minutes:
DECLARE @i INT = 1;
WHILE (@i <= 360)
BEGIN
WAITFOR DELAY '00:00:01'
print FORMAT(GETDATE(),'hh:mm:ss')
SET @i = @i + 1;
END
Now create an Agent Job that will run every 10 seconds with this step:
-- Put here a part of the code you are targeting or even the whole query
DECLARE @Search_for_query NVARCHAR(300) SET @Search_for_query = '%FORMAT(GETDATE(),''hh:mm:ss'')%'
-- Define the maximum time you want the query to run
DECLARE @Time_to_run_in_minutes INT SET @Time_to_run_in_minutes = 1
DECLARE @SPID_older_than smallint
SET @SPID_older_than = (
SELECT
--text,
session_id
--,start_time
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE text LIKE @Search_for_query
AND text NOT LIKE '%sys.dm_exec_sql_text(sql_handle)%' -- This will avoid the killing job to kill itself
AND start_time < DATEADD(MINUTE, -@Time_to_run_in_minutes, GETDATE())
)
-- SELECT @SPID_older_than -- Use this for testing
DECLARE @SQL nvarchar(1000)
SET @SQL = 'KILL ' + CAST(@SPID_older_than as varchar(20))
EXEC (@SQL)
Make sure the job is run by sa or some valid alternative.
Now you can adapt it to your code by changing:
@Search_for_query = put here a part of the query you are looking for
@Time_to_run_in_minutes = the max number of minutes you want the job to run
What will you be using to execute this query? If you create a .NET application, the timeout for stored procedures is 30 seconds by default. You can change the timeout to 6 minutes if you wish by changing SqlCommand.CommandTimeout.
In SQL Server, I just right click on the connection in the left Object Explorer pane, choose Activity Monitor, then Processes, right click the query that's running, and choose Kill Process.