DB2 SQL Web Pagination - How to tell I reached EOF

Friends,
I am trying to find a very simple way to tell that I have reached end of file with web pagination using FETCH NEXT. I am using Previous and Next buttons to trigger the stored procedure.
**FREE
// RFC Main Grid
CTL-OPT NOMAIN OPTION(*SRCSTMT : *NODEBUGIO);

DCL-PROC PUR027 EXPORT;
  DCL-PI PUR027 EXTPROC(*DCLCASE);
    StartingRow PACKED(3:0);
    NbrOfRows   PACKED(3:0);
    TotalRows   CHAR(10);
    RowCount    CHAR(10);
    Search      CHAR(30);
    EndOfFile   CHAR(3);
    BOF         CHAR(1);
    EOF         CHAR(1);
    RSL         CHAR(2);
  END-PI;

  IF Search = '';
    EXEC SQL
      Declare RSCURSOR cursor for
        SELECT CDEPT, CDESC, ROW_NUMBER() OVER(ORDER BY CDESC, CDEPT) as ROWNUMBER
          FROM CDPL03
          ORDER BY CDESC, CDEPT
          OFFSET (:StartingRow - 1) * :NbrOfRows ROWS
          FETCH NEXT :NbrOfRows ROWS ONLY;
    EXEC SQL Open RSCURSOR;
    EXEC SQL SET RESULT SETS Cursor RSCURSOR;
  ELSE;
    EXEC SQL
      Declare RSCURSOR2 cursor for
        SELECT CDEPT, CDESC, ROW_NUMBER() OVER(ORDER BY CDESC, CDEPT) as ROWNUMBER
          FROM CDPL03
          WHERE CDESC LIKE '%' concat trim(:Search) concat '%'
             OR CDEPT LIKE '%' concat trim(:Search) concat '%'
          ORDER BY CDESC, CDEPT
          OFFSET (:StartingRow - 1) * :NbrOfRows ROWS
          FETCH NEXT :NbrOfRows ROWS ONLY;
    EXEC SQL Open RSCURSOR2;
    EXEC SQL SET RESULT SETS Cursor RSCURSOR2;
  ENDIF;

  // Begin & End of File
  IF StartingRow = 1;
    BOF = '1';
    EOF = '0';
  ELSE;
    BOF = '0';
    EOF = '0';
  ENDIF;

  // Validate for SQL errors
  IF SQLSTATE = '00000';
    RSL = '00';
    //TotalRows2 = %CHAR(TotalRows);
  ELSEIF SQLSTATE = '02000';
    RSL = '10';
  ELSE;
    RSL = '20';
  ENDIF;

  RETURN;
END-PROC PUR027;

// To create the service program:
// CRTSRVPGM SRVPGM(BPCSO/PUR027WS)
//           MODULE(BPCSO/PUR027W)
//           SRCFILE(BPCSS/PURBNDF) SRCMBR(PUR027WB)

When reading multiple records in a block, I retrieve the number of records fetched with GET DIAGNOSTICS like this:
exec sql get diagnostics
:cnt = row_count;
Then if the number of records fetched is less than the requested number of records, I know that I am on the last page.
There is a problem with this method, though: if the last page is exactly full, you don't know it until you try to read the next page and it comes back empty. One way to handle that is to request one record more than you are going to present on the page. That is, if you are presenting 25 records per page, request 26. If the result set has 26 records, there is at least one record on the next page; still present only 25 records and increment your offset by 25 each time, just request 26. If the result set has fewer than 26 records, you know you are on the last page.
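A minimal RPGLE sketch of that technique against the same CDPL03 cursor as the question; the PAGECUR and result names, the column lengths, and the array size are assumptions, not from the original post:

**FREE
// Sketch: ask for one row more than the page size; if fewer rows come
// back, this is the last page.
dcl-ds result qualified dim(1000);       // block-fetch buffer; dim must cover reqRows
  CDEPT char(10);                        // assumed length
  CDESC char(30);                        // assumed length
end-ds;
dcl-s reqRows packed(3:0);
dcl-s fetched int(10);

reqRows = NbrOfRows + 1;                 // e.g. 26 when the page shows 25

exec sql declare PAGECUR cursor for
  select CDEPT, CDESC
    from CDPL03
    order by CDESC, CDEPT
    offset (:StartingRow - 1) * :NbrOfRows rows
    fetch next :reqRows rows only;

exec sql open PAGECUR;
exec sql fetch next from PAGECUR for :reqRows rows into :result;
exec sql get diagnostics :fetched = ROW_COUNT;
exec sql close PAGECUR;

if fetched <= NbrOfRows;
  EOF = '1';                             // nothing beyond this page
else;
  EOF = '0';                             // at least one more row exists
endif;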

Take a look at SQLERRD(2)
For an OPEN statement, if the cursor is insensitive to changes, SQLERRD(2) contains the actual number of rows in the result set. If the cursor is sensitive to changes, SQLERRD(2) contains an estimated number of rows in the result set.
You can also use GET DIAGNOSTICS after the open for the same info...
DB2_NUMBER_ROWS
If the previous SQL statement was an OPEN or a FETCH which caused the size of the result table to be known, returns the number of rows in the result table. For SENSITIVE cursors, this value can be thought of as an approximation since rows inserted and deleted will affect the next retrieval of this value. Otherwise, the value zero is returned.
Key point for both: for an exact count you'd need to declare your cursor INSENSITIVE, which creates a copy of your selected rows so that inserts, deletes, and updates don't affect the results. There's also a performance hit.
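A short sketch of both approaches, assuming an INSENSITIVE cursor over the question's CDPL03 select; TOTCUR and TotalInSet are invented names:

**FREE
dcl-s TotalInSet int(10);

exec sql declare TOTCUR insensitive cursor for
  select CDEPT, CDESC
    from CDPL03
    order by CDESC, CDEPT;

exec sql open TOTCUR;

// Right after the OPEN, SQLERRD(2) of the SQLCA (SQLER2 in RPG) holds
// the number of rows in the result set ...
TotalInSet = SQLER2;

// ... and GET DIAGNOSTICS exposes the same information
exec sql get diagnostics :TotalInSet = DB2_NUMBER_ROWS;

exec sql close TOTCUR;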

Related

PL/SQL: ORA-00920: invalid relational operator

I get this error:
23/112 PL/SQL: ORA-00920: invalid relational operator
It points to the AND CURRENT OF part of the UPDATE statement.
CREATE OR REPLACE PROCEDURE alga_uz_pasirodyma(grupe_id IN grupes.id%TYPE, alga IN OUT NUMBER)
IS
  v_kliento_id nariai.asm_kodas%TYPE;
  TYPE bendras IS RECORD (
    alga NUMBER
  );
  globalus bendras;
  CURSOR c_klientai IS
    SELECT nariai.asm_kodas
      FROM nariai
     WHERE nariai.fk_grupe = grupe_id
       FOR UPDATE OF nariai.alga;
BEGIN
  globalus.alga := alga;
  IF grupe_id <= 0 THEN
    raise_application_error(-20101, 'Nepavyko surasti grupes');
  END IF;
  OPEN c_klientai;
  LOOP
    FETCH c_klientai INTO v_kliento_id;
    EXIT WHEN c_klientai%NOTFOUND;
    UPDATE nariai
       SET nariai.alga = nariai.alga * globalus.alga
     WHERE nariai.asm_kodas = v_kliento_id
       AND CURRENT OF c_klientai;
  END LOOP;
  UPDATE grupes
     SET grupes.pasirodymu_kiekis = grupes.pasirodymu_kiekis + 1
   WHERE grupes.id = grupe_id;
  SELECT MAX(nariai.alga) INTO alga FROM nariai WHERE nariai.fk_grupe = grupe_id;
  CLOSE c_klientai;
END alga_uz_pasirodyma;
What should I do? I believe everything in the WHERE clause is declared correctly.
"CURRENT OF" should be by itself in the where clause. It allows you to update or delete the record at the current loop iteration of the cursor.
On a different note, I don't see you doing anything significant in the loop to warrant a cursor. Ignore this note if that will change, otherwise just run the update "where nariai.fk_grupe = grupe_id"
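For reference, the two options described above would look like this, keeping the question's own table, column, and cursor names:

-- Option 1: keep the cursor loop; CURRENT OF stands alone in the WHERE clause
UPDATE nariai
   SET nariai.alga = nariai.alga * globalus.alga
 WHERE CURRENT OF c_klientai;

-- Option 2: skip the cursor and update the whole group in one statement
-- (globalus.alga is the copy of the alga parameter, as in the question)
UPDATE nariai
   SET nariai.alga = nariai.alga * globalus.alga
 WHERE nariai.fk_grupe = grupe_id;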

Replacing a loop with 2 commits to 2 tables with a single aggregate commit?

(Using DB2)
I have a bit of code that does 2 commits (to 2 tables) per row, that I would like to change to once per 25 rows or something similar.
Here is the basic code:
(Code that finds MG-LOCATOR-NBR and MG-PG-NBR here)
MOVE MG-LOCATOR-NBR TO MT-LOCATOR-NBR
MOVE MG-PG-NBR TO MT-PG-NBR
SET IOC1-DELETE-MG TO TRUE
PERFORM IOC1-IO
EXEC SQL
COMMIT
END-EXEC
SET IOC1-DELETE-MT TO TRUE
PERFORM IOC1-IO
EXEC SQL
COMMIT
END-EXEC
If it was for one commit/table only, I think this would work:
ADD 1 TO WS-REC-COUNT
IF WS-REC-COUNT = 25
MOVE ZERO TO WS-REC-COUNT
EXEC SQL COMMIT END-EXEC
END-IF
(And a final COMMIT in the End-of-Job Method to cover the ending)
But I'm confused about how to do this when two different tables are involved. Any suggestions?
Edit: The SQL for the deletes is pretty straightforward:
IOC1-DEL-MG SECTION.
    EXEC SQL DELETE FROM VMG
        WHERE LOCATOR_NBR = :MG-LOCATOR-NBR
          AND PG_NBR = :MG-PG-NBR
    END-EXEC
    IF SQLCODE = 0
        SET IOC1-OK TO TRUE
    ELSE IF SQLCODE = +100
        SET IOC1-NO-DATA TO TRUE
    END-IF
    DISPLAY 'DELETE MG' SQLCODE

IOC1-DEL-MT SECTION.
    EXEC SQL DELETE FROM VMT
        WHERE LOCATOR_NBR = :MT-LOCATOR-NBR
          AND PG_NBR = :MT-PG-NBR
    END-EXEC
    IF SQLCODE = 0
        SET IOC1-OK TO TRUE
    ELSE IF SQLCODE = +100
        SET IOC1-NO-DATA TO TRUE
    END-IF
    DISPLAY 'DELETE MT' SQLCODE
DB2 is not doing a commit per table. When you call commit like that, you commit all work since the last commit.
So if you go to one commit every 25 iterations, i.e. after every 50 deletes, that will work.
Be aware that if your program abends while it is deleting row 36, you will need to account for going back and cleaning up those rows when you restart.
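In the question's own terms, that could look like the sketch below. WS-REC-COUNT, the IOC1 flags, and the final end-of-job COMMIT all come from the question; only the threshold check around the single COMMIT is new:

*> Sketch: delete the key pair from both tables, commit once per 25 pairs
SET IOC1-DELETE-MG TO TRUE
PERFORM IOC1-IO
SET IOC1-DELETE-MT TO TRUE
PERFORM IOC1-IO
ADD 1 TO WS-REC-COUNT
IF WS-REC-COUNT >= 25
    MOVE ZERO TO WS-REC-COUNT
    EXEC SQL COMMIT END-EXEC
END-IF
*> The COMMIT in the end-of-job routine still covers the last partial batch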

PowerBuilder 12.5 sql cursors transaction size error

I have a major problem and am trying to find a workaround. I have an application in PB 12.5 that works against both SQL and Oracle databases (with a lot of data).
I am using a CURSOR at one point, but the application crashes only on the SQL side. Debugging in PB, I found that the SQL connection returns -1 due to a huge transaction size, but I want to fetch my data row by row. Is there any workaround to fetch the data in pages, i.e. fetch the first 1000 rows, then the next 1000, and so on? I hope you understand what I want to achieve (to break up the fetch process and so reduce the transaction size, if possible). Here is my code:
DECLARE trans_Curs CURSOR FOR
    SELECT associate_trans.trans_code
      FROM associate_trans
     WHERE associate_trans.usage_code = :ggs_vars.usage
     ORDER BY associate_trans.trans_code;

OPEN trans_Curs;
FETCH trans_Curs INTO :ll_transId;

DO WHILE sqlca.sqlcode = 0
    ll_index += 1
    hpb_1.Position = ll_index
    if not guo_associates.of_asstrans_updatemaster( ll_transId, ls_error) then
        ROLLBACK;
        CLOSE trans_Curs;
        SetPointer(Arrow!)
        MessageBox("Update Process", "Problem with the update process on~r~n" + sqlca.sqlerrtext)
        cb_2.Enabled = TRUE
        return
    end if
    FETCH trans_Curs INTO :ll_transId;
LOOP
CLOSE trans_Curs;
Since the structure of your source table is not fully presented, I'll make some assumptions here.
Let's assume that the records include a unique field that can be used as a reference (it could be a counter or a timestamp). I'll assume here that the field is a timestamp.
Let's also assume that PB accepts cursors with parameters (not all tools do; if it does not, there are simple workarounds).
You could modify your cursor to be something like this:
[Note: I'm also assuming that the syntax presented here is valid for your environment; if not, the adaptations are simple.]
DECLARE TopTime TIMESTAMP ;
DECLARE trans_Curs CURSOR FOR
SELECT ots.associate_trans.trans_code
FROM ots.associate_trans
WHERE ots.associate_trans.usage_code = :ggs_vars.usage
AND ots.associate_trans.Timestamp < TopTime
ORDER BY ots.associate_trans.trans_code
LIMIT 1000 ;
:
:
IF (p_Start_Timestamp IS NULL) THEN
TopTime = CURRENT_TIMESTAMP() ;
ELSE
TopTime = p_Start_Timestamp ;
END IF ;
OPEN trans_Curs;
FETCH trans_Curs INTO :ll_transId;
:
:
In the above:
p_Start_Timestamp is a received timestamp parameter which would initially be empty and would then contain the OLDEST timestamp fetched in the previous invocation,
CURRENT_TIMESTAMP() is a function of your environment returning the current timestamp.
This solution works only when you need to progress in one direction (i.e. from present to past) and when you accumulate all the fetched records in an internal buffer in case you need to scroll up again.
Hope this makes things clearer.
First of all, thank you FDavidov for your effort. I managed to do it using a dynamic DataStore instead of a cursor, so here is my solution in case someone else needs it.
String ls_sql, ls_syntax, ls_err, ls_error
Long ll_row, ll_count, ll_index, ll_transId
DataStore lds_info
ls_sql = "SELECT associate_trans.trans_code " &
+ " FROM associate_trans " &
+ " WHERE associate_trans.usage_code = '" + ggs_vars.usage +"' "&
+ " ORDER BY associate_trans.trans_code"
ls_syntax = SQLCA.SyntaxFromSQL( ls_sql, "", ls_err )
IF ls_err <> '' THEN
MessageBox( 'Error...', ls_err )
RETURN
END IF
lds_info = CREATE DataStore
lds_info.Create( ls_syntax, ls_err )
lds_info.SetTransObject( SQLCA )
// Retrieve() returns the number of rows fetched into the DataStore
ll_count = lds_info.Retrieve( )

FOR ll_row = 1 TO ll_count
    ll_transId = lds_info.GetItemNumber( ll_row, 'trans_code' )
    ll_index += 1
    hpb_1.Position = ll_index
    do while yield(); loop
    if not guo_associates.of_asstrans_updatemaster( ll_transId, ls_error) then
        ROLLBACK;
        DESTROY lds_info
        SetPointer(Arrow!)
        MessageBox("Update Process", "Problem with the update process on~r~n" + sqlca.sqlerrtext)
        cb_2.Enabled = TRUE
        return
    end if
NEXT

DESTROY lds_info

Sybase for loop that runs a query for every result returned by the select statement

SELECT distinct(a.acct_num)
FROM customer_acct a,
customer_acct_history b LIKE "%000%"
WHERE a.acct_num *= b.acct_num AND acct_type='C'
This query returns a lot of customer account numbers. I am trying to do a SELECT query returning TOP 1 for every account number. I am planning to feed the output of the first query into the second SQL, like WHERE a.acct_num = (output of first query). I want to do this in a loop and select TOP 1 for every account number. Any help on how to proceed would be appreciated.
This is a good place to use a CURSOR:
declare curs cursor
for
SELECT distinct(a.acct_num)
  FROM customer_acct a,
       customer_acct_history b LIKE "%000%"
 WHERE a.acct_num *= b.acct_num AND acct_type='C'

OPEN curs

DECLARE @acct_num INT
fetch curs into @acct_num

/* now loop, processing all the rows
** @@sqlstatus = 0 means successful fetch
** @@sqlstatus = 1 means error on previous fetch
** @@sqlstatus = 2 means end of result set reached
*/
while (@@sqlstatus != 2)
BEGIN
    SELECT Something FROM Somewhere WHERE @acct_num = ...
    fetch curs into @acct_num
END
GL!
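What goes inside the loop depends on what "top 1" means for your data. Purely as a hypothetical example, if you wanted the latest history row per account (hist_date is an invented column name, not from the question):

-- Hypothetical inner query: latest history row for the fetched account number
SELECT TOP 1 h.acct_num, h.hist_date
  FROM customer_acct_history h
 WHERE h.acct_num = @acct_num
 ORDER BY h.hist_date DESC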

How do I iterate over a set of records in RPG(LE) with embedded SQL?

How do I iterate over a set of records in RPG(LE) with embedded SQL?
Usually I'll create a cursor and fetch each record.
//***********************************************************************
// Main - Main Processing Routine
begsr Main;
  exsr BldSqlStmt;
  if OpenSqlCursor() = SQL_SUCCESS;
    dow FetchNextRow() = SQL_SUCCESS;
      exsr ProcessRow;
    enddo;
    if sqlStt = SQL_NO_MORE_ROWS;
      CloseSqlCursor();
    endif;
  endif;
  CloseSqlCursor();
endsr; // Main
I have added more detail to this answer in a post on my website.
As Mike said, iterating over a cursor is the best solution. I would add that, for slightly better performance, you might want to fetch into an array and process the rows in blocks rather than one record at a time.
Example:
EXEC SQL
  OPEN order_history;

// Set the block size to the number of elements in the result array
len = %elem(results);

// Loop through all the results
dow (SqlState = Sql_Success);
  EXEC SQL
    FETCH FROM order_history FOR :len ROWS INTO :results;
  SqlState = SQLSTATE;   // capture the state so the loop can end
  // SQLER3 holds the number of rows actually fetched
  if (SQLER3 <> *zeros);
    for i = 1 to SQLER3 by 1;
      // Load the output
      eval-corr output = results(i);
      // Do something
    endfor;
  endif;
enddo;
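For completeness, the host variables in that example would need definitions roughly like the ones below; the results subfields are placeholders, since the order_history cursor's column list is not shown in the answer:

**FREE
dcl-c Sql_Success '00000';
dcl-s SqlState char(5) inz(Sql_Success);
dcl-s len      int(10);
dcl-s i        int(10);

dcl-ds results qualified dim(100);   // block-fetch buffer; size is arbitrary
  ordNumber packed(9:0);             // placeholder columns
  ordStatus char(10);
end-ds;

dcl-ds output likeds(results);       // eval-corr target with matching subfields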
HTH,
James R. Perkins