I would like to run some data checks after data imports into my system. I'm checking that all of my key locations have inventory imported for them, and if they don't I would like the job to fail (I then have reporting/alerts set up for when any jobs fail).
I've had a search around and tried a number of options. The commented-out lines are what I have tried, but when I set the INV_CHECK variable above the count level of one of my locations, the job still completed successfully. If I run it in TOAD it fails and presents an error, which is what I had wanted the job to do.
DECLARE
  valid_loc NUMBER;
  inv_check NUMBER;
  no_inv    NUMBER;
BEGIN
  SELECT param_value
    INTO inv_check
    FROM scpomgr.udt_systemparam
   WHERE param_name = 'INV_CHECK';

  SELECT COUNT(*)
    INTO valid_loc
    FROM (SELECT DISTINCT loc
            FROM scpomgr.inventory
           WHERE loc IN ('GB01', 'FR01', 'DE01', 'IT01', 'ES01', 'IE01', 'CN01', 'JP01', 'AU01', 'US01')
           GROUP BY loc
          HAVING COUNT(*) > inv_check);

  IF valid_loc < 10 THEN
    NULL; -- placeholder so the block compiles; the attempts below are commented out
    --raise_application_error(-20001,'Likely Missing Inv Records');
    --raiseerror('fail',16,1);
    --select 1/0 into no_inv from dual;
    --THROW (51000, 'Process Is Not Finished', 1);
  END IF;
END;
/
EXIT
Can anyone point me in the right direction of what I've missed / misunderstood?
I've added an action into the IF statement so I know it's running the part after the THEN. If I run it in TOAD it gives me an error, but if I run it via PuTTY, which is what I use to run batch processes, it comes out as COMPLETE and doesn't show any sort of failure.
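One thing worth checking with scripted runs: by default SQL*Plus carries on after an error and still exits with a success status, so a batch scheduler sees COMPLETE even though the block raised. A minimal sketch of the directive that changes this, assuming the PuTTY batch run ultimately goes through SQL*Plus:
WHENEVER SQLERROR EXIT FAILURE
-- the PL/SQL block goes here; an unhandled raise_application_error
-- now makes SQL*Plus exit with a failure status the scheduler can detect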
So after some trial and error I found the code below gives me the desired result. To make my process table / PuTTY runs display as failed I needed to use pkg_job.fatal_error, and with that I could pass an error message/code.
DECLARE
  valid_loc NUMBER;
  inv_check NUMBER;
BEGIN
  SELECT param_value
    INTO inv_check
    FROM scpomgr.udt_systemparam
   WHERE param_name = 'INV_CHECK';

  SELECT COUNT(*)
    INTO valid_loc
    FROM (SELECT DISTINCT loc
            FROM scpomgr.inventory
           WHERE loc IN ('GB01', 'FR01', 'DE01', 'IT01', 'ES01', 'IE01', 'CN01', 'JP01', 'AU01', 'US01')
           GROUP BY loc
          HAVING COUNT(*) > inv_check);

  -- pkg_job.fatal_error marks the batch run as failed and passes the message/code through
  IF valid_loc < 10 THEN
    pkg_job.fatal_error('Likely Missing Inv Records', -20001);
  END IF;
END;
/
EXIT
Hope this helps others or gives ideas of what to try.
I want to insert all the rows from the cursor into a table, but it is not inserting all the rows; only some rows get inserted. Please help.
I have created a procedure BPS_SPRDSHT which takes 3 input parameters.
PROCEDURE BPS_SPRDSHT(p_period_name VARCHAR2,p_currency_code VARCHAR2,p_source_name VARCHAR2)
IS
CURSOR c_sprdsht
IS
SELECT gcc.segment1 AS company, gcc.segment6 AS prod_seg, gcc.segment2 dept,
gcc.segment3 accnt, gcc.segment4 prd_grp, gcc.segment5 projct,
gcc.segment7 future2, gljh.period_name,gljh.je_source,NULL NULL1,NULL NULL2,NULL NULL3,NULL NULL4,gljh.currency_code Currency,
gjlv.entered_dr,gjlv.entered_cr, gjlv.accounted_dr, gjlv.accounted_cr,gljh.currency_conversion_date,
NULL NULL6,gljh.currency_conversion_rate ,NULL NULL8,NULL NULL9,NULL NULL10,NULL NULL11,NULL NULL12,NULL NULL13,NULL NULL14,NULL NULL15,
gljh.je_category ,NULL NULL17,NULL NULL18,NULL NULL19,tax_code
FROM gl_je_lines_v gjlv, gl_code_combinations gcc, gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code!='STAT'
AND gljh.currency_code=NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name||'%';
type t_spr is table of c_sprdsht%rowtype;
v_t_spr t_spr :=t_spr();
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht BULK COLLECT INTO v_t_spr limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
FND_FILE.PUT_LINE(FND_FILE.OUTPUT,'TOTAL ROWS FETCHED FOR SPREADSHEETS- '|| v_t_spr.count);
IF v_t_spr.count > 0 THEN
BEGIN
FORALL I IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
INSERT INTO custom.pwr_bps_gl_register
VALUES v_t_spr(i);
EXCEPTION
WHEN OTHERS THEN
l_error_count := SQL%BULK_EXCEPTIONS.count;
fnd_file.put_line(fnd_file.output,'Number of failures: ' || l_error_count);
FOR l IN 1 .. l_error_count LOOP
DBMS_OUTPUT.put_line('Error: ' || l ||
' Array Index: ' || SQL%BULK_EXCEPTIONS(l).error_index ||
' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(l).ERROR_CODE));
END LOOP;
END;
END IF;
fnd_file.put_line(fnd_file.output,'END TIME: '||TO_CHAR (SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END BPS_SPRDSHT;
Total rows to be inserted = 568388
Number of rows getting inserted = 48345
Oracle uses two engines to process PL/SQL code. All procedural code is handled by the PL/SQL engine while all SQL is handled by the SQL statement executor, or SQL engine. There is an overhead associated with each context switch between the two engines.
The entire PL/SQL code could be written in plain SQL, which will be much faster and require less code.
INSERT INTO custom.pwr_bps_gl_register
SELECT gcc.segment1 AS company,
gcc.segment6 AS prod_seg,
gcc.segment2 dept,
gcc.segment3 accnt,
gcc.segment4 prd_grp,
gcc.segment5 projct,
gcc.segment7 future2,
gljh.period_name,
gljh.je_source,
NULL NULL1,
NULL NULL2,
NULL NULL3,
NULL NULL4,
gljh.currency_code Currency,
gjlv.entered_dr,
gjlv.entered_cr,
gjlv.accounted_dr,
gjlv.accounted_cr,
gljh.currency_conversion_date,
NULL NULL6,
gljh.currency_conversion_rate ,
NULL NULL8,
NULL NULL9,
NULL NULL10,
NULL NULL11,
NULL NULL12,
NULL NULL13,
NULL NULL14,
NULL NULL15,
gljh.je_category ,
NULL NULL17,
NULL NULL18,
NULL NULL19,
tax_code
FROM gl_je_lines_v gjlv,
gl_code_combinations gcc,
gl_je_headers gljh
WHERE gjlv.code_combination_id = gcc.code_combination_id
AND gljh.je_header_id = gjlv.je_header_id
AND gljh.currency_code != 'STAT'
AND gljh.currency_code =NVL (p_currency_code, gljh.currency_code)
AND gljh.period_name = NVL (p_period_name, gljh.period_name)
AND gljh.je_source LIKE p_source_name || '%';
Update
It is a myth that **frequent commits** in PL/SQL are good for performance.
Thomas Kyte explained it beautifully here:
Frequent commits -- sure, "frees up" that undo -- which invariably
leads to ORA-1555 and the failure of your process. That's good for
performance, right?
Frequent commits -- sure, "frees up" locks -- which throws
transactional integrity out the window. That's great for data
integrity, right?
Frequent commits -- sure, "frees up" redo log buffer space -- by
forcing you to WAIT for a sync write to the file system every time --
you WAIT and WAIT and WAIT. I can see how that would "increase
performance" (NOT). Oh yeah, the fact that the redo buffer is
flushed in the background
  every three seconds
  when 1/3 full
  when 1 meg full
would do the same thing (free up this resource) AND not make you wait.
Frequent commits -- there is NO resource to free up -- undo is undo,
big old circular buffer. It is not any harder for us to manage 15
gigawads or 15 bytes of undo. Locks -- well, they are an attribute
of the data itself; it is no more expensive in Oracle (it would be in
DB2, SQL Server, Informix, etc.) to have one BILLION locks vs one lock.
The redo log buffer -- that is continuously taking care of itself,
regardless of whether you commit or not.
First of all, let me point out that there is a serious bug in the code you are using; that is the reason you are not inserting all the records:
BEGIN
OPEN c_sprdsht;
LOOP
FETCH c_sprdsht
BULK COLLECT INTO v_t_spr -- this OVERWRITES your array!
-- it does not add new records!
limit 50000;
EXIT WHEN c_sprdsht%notfound;
END LOOP;
CLOSE c_sprdsht;
Each iteration OVERWRITES the contents of v_t_spr with the next 50,000 rows read.
The 48,345 records you are inserting are simply the last block read during the last iteration.
The INSERT statement should be inside the same loop: you should do an insert for every 50,000 rows read.
You should have written it this way:
BEGIN
  OPEN c_sprdsht;
  LOOP
    FETCH c_sprdsht BULK COLLECT INTO v_t_spr LIMIT 50000;
    EXIT WHEN v_t_spr.COUNT = 0;  -- with LIMIT, %NOTFOUND here would drop the final partial batch
    FORALL i IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
      INSERT INTO custom.pwr_bps_gl_register
      VALUES v_t_spr(i);
    ...
    ...
  END LOOP;
  CLOSE c_sprdsht;
If you were expecting to have the whole table loaded in memory to do just one insert, then you wouldn't have needed any loop or any LIMIT 50000 clause, and you could simply have used the INSERT ... SELECT approach.
Now, a VERY GOOD reason for NOT using an INSERT ... SELECT could be that there are so many rows in the source table that the insert would make the rollback segments grow so much that there is simply not enough physical space on your server to hold them. But if that is the issue (you can't hold that much rollback data for a single transaction), you should also perform a COMMIT for each 50,000-record block; otherwise your loop would not solve the problem: it would just be slower than the INSERT ... SELECT and it would generate the same out-of-rollback-space error (I don't remember the exact error message).
Issuing a commit every 50,000 records is not the nicest thing to do, but if your system actually is not big enough to handle the needed rollback space, you have no other way out (or at least I am not aware of one).
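As a rough sketch of that batching-with-commit pattern, reusing the cursor and collection from the corrected loop above (the COMMIT per block is the only addition, and it makes the load non-atomic, which is exactly the trade-off discussed here):
OPEN c_sprdsht;
LOOP
  FETCH c_sprdsht BULK COLLECT INTO v_t_spr LIMIT 50000;
  EXIT WHEN v_t_spr.COUNT = 0;
  FORALL i IN v_t_spr.FIRST..v_t_spr.LAST SAVE EXCEPTIONS
    INSERT INTO custom.pwr_bps_gl_register
    VALUES v_t_spr(i);
  COMMIT;  -- releases the undo for this 50,000-row block
END LOOP;
CLOSE c_sprdsht;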
Don't use EXIT WHEN c_sprdsht%NOTFOUND (this is the cause of your missing rows); instead use EXIT WHEN v_t_spr.COUNT = 0.
I know you can set user profiles or set a general timeout for queries, but I wish to set a timeout for a specific query inside a procedure and catch the exception, something like:
begin
update tbl set col = v_val; --Unlimited time
delete from tbl where id = 20; --Unlimited time
begin
delete from tbl; -- I want this to have a limited time to perform
exception when (timeout???) then
--code;
end;
end;
Is this possible? Are there any timeout exceptions at all I can catch, per block or per query? I didn't find much info on the topic.
No, you cannot set a timeout in PL/SQL. You could use a host language for this, in which you embed your SQL and PL/SQL.
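For example, from a host language you can set a per-statement timeout through JDBC. A sketch in Groovy (the connection details are placeholders; Statement.setQueryTimeout asks the driver to cancel the statement once the limit passes, raising an exception):
import groovy.sql.Sql

def sql = Sql.newInstance("jdbc:oracle:thin:@host:1521:orcl", user, password,
                          "oracle.jdbc.OracleDriver")
sql.withStatement { stmt -> stmt.queryTimeout = 10 }  // seconds, applied to each statement
sql.execute("delete from tbl")  // cancelled with an exception if it runs past the limit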
You could do:
select * from tbl for update wait 10; --This example will wait 10 seconds. Replace 10 with number of seconds to wait
Then, the select will attempt to lock the specified rows, but if it's unsuccessful after n seconds, it will throw "ORA-30006: resource busy; acquire with WAIT timeout expired". If the lock is achieved, then you can execute your delete.
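To catch that error inside the PL/SQL block, you can bind ORA-30006 to a named exception with PRAGMA EXCEPTION_INIT. A sketch against the placeholder table from the question (note this bounds the wait for the row locks, not the run time of the DELETE itself):
DECLARE
  e_wait_timeout EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_wait_timeout, -30006);
  CURSOR c_lock IS
    SELECT id FROM tbl FOR UPDATE WAIT 10;  -- wait at most 10 seconds for the locks
BEGIN
  OPEN c_lock;   -- raises ORA-30006 if the locks cannot be acquired in time
  CLOSE c_lock;  -- the row locks are held until commit/rollback, not cursor close
  DELETE FROM tbl;
EXCEPTION
  WHEN e_wait_timeout THEN
    NULL;  -- handle the timeout here
END;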
Hope that helps.
Just an idea: you could do the DELETE in a DBMS_JOB.
Then create a procedure to monitor the job, which then calls:
DBMS_JOB.BROKEN(JOBID,TRUE);
DBMS_JOB.remove(JOBID);
v_timer1 := dbms_utility.get_time();
WHILE TRUE
LOOP
v_timer2 := dbms_utility.get_time();
EXIT WHEN (ABS(v_timer1 - v_timer2)/100) > 60; -- cancel after 60 sec.
END LOOP;
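Wiring those pieces together, a rough sketch of the idea (note DBMS_JOB.BROKEN/REMOVE stop the job from being rescheduled but do not kill a run already in progress; DBMS_LOCK.SLEEP avoids a busy wait and needs an EXECUTE grant on DBMS_LOCK):
DECLARE
  v_job    BINARY_INTEGER;
  v_timer1 NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(v_job, 'BEGIN DELETE FROM tbl; COMMIT; END;');
  COMMIT;  -- the job queue only sees the job once the submit is committed
  v_timer1 := DBMS_UTILITY.GET_TIME();
  LOOP
    EXIT WHEN (ABS(DBMS_UTILITY.GET_TIME() - v_timer1) / 100) > 60;  -- cancel after 60 sec.
    DBMS_LOCK.SLEEP(1);
  END LOOP;
  DBMS_JOB.BROKEN(v_job, TRUE);
  DBMS_JOB.REMOVE(v_job);
END;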
MariaDB> select sleep(4);
+----------+
| sleep(4) |
+----------+
| 0 |
+----------+
1 row in set (4.002 sec)
MariaDB>
See: https://mariadb.com/kb/en/sleep/
I have a table displayed in a DBGrid; the sort order is based on a Sequence field, and I want to be able to move an item up or down one place at a time. I have researched here and cannot find anything exactly like I need.
The problem comes when I disable the sort order to make a change: the list reverts to the order in which the data was originally entered, so I have lost the item next in line.
This is what I have...
Folder Sequence
----------------
Buttons 1
Thread 2 << Current Row
Cotton 3
Rags 4
On clicking the "MoveDown" button I want...
Folder Sequence
----------------
Buttons 1
Cotton 2
Thread 3 << Current Row
Rags 4
But - when I remove the Sort order on Sequence I get the order I entered the items...
Folder Sequence
----------------
Buttons 1
Cotton 2
Rags 4
Thread 3 << Current Row
So far my attempts are proving pretty cumbersome and involve loading the rows into a listbox, shuffling them, and then writing them back to the table. Gotta be a better way, but it is beyond my current grasp of SQL.
Can someone please point me in the direction to go.
I don't want to trouble anyone too much if it is a difficult thing to do in SQL, as I can always stay with the listbox approach. If it is relatively simple for an SQL expert, then I would love to see the SQL text.
Thanks
My solution is based on the TDataSet being sorted by the Sequence field:
MyDataSet.Sort := 'Sequence';
And then swapping the Sequence field between the current row and the Next (down) / Prior (up) records e.g.:
type
  TDBMoveRecord = (dbMoveUp, dbMoveDown);

function MoveRecordUpDown(DataSet: TDataSet; const OrderField: string;
  const MoveKind: TDBMoveRecord): Boolean;
var
  I, J: Integer;
  BmStr: TBookmarkStr;
begin
  Result := False;
  with DataSet do
  try
    DisableControls;
    J := -1;
    I := FieldByName(OrderField).AsInteger; // sequence value of the current row
    BmStr := DataSet.Bookmark;              // remember the current row
    try
      case MoveKind of
        dbMoveUp: Prior;
        dbMoveDown: Next;
      end;
      // no neighbour to swap with at the first/last record
      if ((MoveKind = dbMoveUp) and BOF) or ((MoveKind = dbMoveDown) and EOF) then
      begin
        Beep;
        SysUtils.Abort;
      end
      else
      begin
        // give the neighbour the current row's sequence value
        J := DataSet.FieldByName(OrderField).AsInteger;
        Edit;
        FieldByName(OrderField).AsInteger := I;
        Post;
      end;
    finally
      Bookmark := BmStr; // back to the row we started on
      if (J <> -1) then
      begin
        // complete the swap: take the neighbour's old sequence value
        Edit;
        FieldByName(OrderField).AsInteger := J;
        Post;
        Result := True;
      end;
    end;
  finally
    EnableControls;
  end;
end;
Usage:
MoveRecordUpDown(MyDataSet, 'Sequence', dbMoveDown);
// or
MoveRecordUpDown(MyDataSet, 'Sequence', dbMoveUp);
If I understand right, you want to find the "next sequence item"? Maybe you could do something like "get the first value greater than X"? That way, you can pass in the previous row's sequence value.
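A sketch of that idea in SQL (Folders and Sequence are placeholders matching the example data; :X is the current row's sequence value):
SELECT MIN(Sequence)
  FROM Folders
 WHERE Sequence > :X;
With the neighbouring value in hand, swapping the two Sequence values in a single UPDATE moves the row without disturbing the rest of the order:
UPDATE Folders
   SET Sequence = CASE Sequence WHEN :X THEN :NextSeq ELSE :X END
 WHERE Sequence IN (:X, :NextSeq);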
I am using Groovy Sql to fetch results. This is the output from my Linux box. There are actually 2 statements involved, sp_configure 'number of open partitions' and go; see below:
%isql -U abc -P abc -S support
1> sp_configure 'number of open partitions'
2> go
Parameter Name Default Memory Used Config Value
Run Value Unit Type
------------------------------ ----------- ----------- ------------
------------ -------------------- ----------
number of open partitions 500 5201 5000
5000 number dynamic
(1 row affected)
(return status = 0)
1>
I am using this Groovy code:
def sql = Sql.newInstance("jdbc:abc:sybase://harley:6011;DatabaseName=support;",dbuname,dbpassword,Driver)
sql.eachRow("sp_configure 'number of open partitions'"){ row ->
/*println row.run_value*/
}
Is there a way to execute statements in batch?
I am using Sybase
Not sure if it will work, but you might be able to do:
sql.call("sp_configure 'number of open partitions'")
sql.eachRow("go"){ row ->
...
}
Have not actually tried this [yet] but:
sql.call("sp_configure 'number of open partitions'")
int[] updateCounts = sql.withBatch({
sql.eachRow("go"){ row ->
...
}
})
// check your updateCounts here for errors
Just try this:
sql.eachRow("sp_configure 'number of open partitions'") { row ->
    println row.'Parameter Name'.trim()
}