I'm pretty new to PL/SQL and I have to work with it a little. I had to write some functions which are all pretty similar, so I've simplified them for this question.
I have two tables (called TABLE1 and TABLE2 in this example) which hold some data. I have to trim and validate the data and insert it into other tables:
TABLE1 -> TABLE3
TABLE2 -> TABLE4
TABLE1 holds orders, while TABLE2 holds several positions for each order. As I said, it's simplified, so I haven't posted things like the exception handlers or the cursor open/close statements. At the moment it works like this, but I don't think this structure is anywhere near best practice, and I couldn't find any PL/SQL code on the web that covers this problem, though it must be something pretty common.
The COMMIT could go at the end of the outer loop, I guess; maybe it even belongs after the whole function.
So could you tell me whether this is okay as it is, or completely wrong, and what I should or could change, and why? I don't want to get used to a bad coding style; I want to learn it the right way while I'm still a beginner.
Here's the simplified code:
BEGIN
  SAVEPOINT SAVE_Stufe_5;
  LOOP
    SAVEPOINT SAVE_LOOP;
    FETCH CURSOR1 INTO RECORD1;
    EXIT WHEN CURSOR1%NOTFOUND OR CURSOR1%NOTFOUND IS NULL;
    vError := 0;
    -- DATA VALIDATION (vError will hold the error code)
    IF (vError = 0) THEN
      retcode := InsertTABLE3(RECORD1);
      IF (retcode != DATABASE_OK) THEN
        ROLLBACK TO SAVE_LOOP;
      END IF;
    END IF;
    LOOP
      FETCH CURSOR2 INTO RECORD2;
      EXIT WHEN CURSOR2%NOTFOUND OR CURSOR2%NOTFOUND IS NULL OR vError != 0 OR retcode != DATABASE_OK;
      -- DATA VALIDATION (vError will hold the error code)
      IF (vError = 0) THEN
        retcode := InsertTABLE4(RECORD2);
        IF (retcode = DATABASE_OK) THEN
          UPDATE TABLE2
          SET TABLE2.Status = 20
          WHERE TABLE2.ID = RECORD2.ID;
        ELSE
          ROLLBACK TO SAVE_LOOP;
        END IF;
      END IF;
    END LOOP;
    IF (vError = 0) THEN
      UPDATE TABLE1
      SET TABLE1.Status = 20
      WHERE TABLE1.ID = RECORD1.ID;
    ELSE
      ROLLBACK TO SAVE_LOOP;
      UPDATE TABLE1
      SET TABLE1.Status = vError
      WHERE TABLE1.ID = RECORD1.ID;
      UPDATE TABLE2
      SET TABLE2.Status = vError
      WHERE TABLE2.ID = RECORD2.ID;
    END IF;
  END LOOP;
END;
Small update:
I managed to do the validation set-based, but I don't really know how to get my data into the other table. I tried an INSERT ... SELECT with the TRIM in it, but that only inserts one row. If I used an implicit cursor as suggested, I would still have to loop; I just wouldn't be looping over the cursor but over the SELECT INTO, since an implicit cursor only holds one row.
I could really use a snippet or a link to help me out. Here's a simplified version of my attempt:
INSERT INTO TABLE3
(
  val1,
  val2,
  val3
)
SELECT TRIM(val1),
       TRIM(val2),
       TRIM(val3)
FROM TABLE1
WHERE STATUS = 10
AND (TRIM(PK1) || TRIM(PK2)) NOT IN (SELECT (TABLE3.PK1 || TABLE3.PK2) FROM TABLE3);
Generally speaking, row-by-row looping is bad style in SQL. A loop like this will be extremely slow compared to a set-based solution. Instead of all these loops, it would be preferable to use a single INSERT or MERGE statement to copy all of TABLE1 into TABLE3, or perhaps a few statements if your validation is complex and you need intermediate steps.
Most types of trimming and data validation you would want to do can be handled like this. Almost never do you need nested loops. There are exceptions, but they are rare. Those who are new to SQL tend to use loops because that is what we know from other languages. I was in that category not so long ago. But to really use the power of the language, you have to get beyond that.
Beyond this general point, not much specific help can be given if we don't know anything about the tables or what kind of validation you are doing.
When you should commit is also dependent on your specific design and what you are trying to accomplish.
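For instance, a set-based version of the TABLE1 -> TABLE3 copy might look like the sketch below. The status codes (10 = pending, 20 = migrated) are taken from the question, but the NOT NULL predicate is only a placeholder standing in for whatever validation rules actually apply:

```sql
-- Sketch only: the validation predicate is an assumed example,
-- not the OP's real rule.
INSERT INTO table3 (val1, val2, val3)
SELECT TRIM(t1.val1), TRIM(t1.val2), TRIM(t1.val3)
FROM   table1 t1
WHERE  t1.status = 10
AND    TRIM(t1.val1) IS NOT NULL;    -- example validation rule

-- Mark the rows that passed validation as migrated.
UPDATE table1
SET    status = 20
WHERE  status = 10
AND    TRIM(val1) IS NOT NULL;
```

Rows failing validation simply keep their old status (or could be flagged with an error code in a second UPDATE), so no per-row loop is needed.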
Related
I am writing a PL/SQL package to populate a table that calculates several metrics, each depending on a formula (querying from different tables), but the package I end up with is full of SELECT statements against different tables.
Is there any way of combining all those queries without making the code too complex? (I love simplicity.)
Please suggest.
--a developer in distress
Use cursors. For example, you might be tempted to write a bunch of statements one after the other, such as:
DECLARE
nCurrent_balance CUSTOMER_RUNNING_BALANCE.CURRENT_BALANCE%TYPE;
nTotal_credit_line CUSTOMER_CREDIT_LINE.TOTAL_CREDIT_LINE%TYPE;
nAvailable_credit CUSTOMER_CREDIT_LINE.AVAILABLE_CREDIT%TYPE;
strLocation_name CUSTOMER_ADDRESS.LOCATION_NAME%TYPE;
strStreet_address_1 CUSTOMER_ADDRESS.STREET_ADDRESS_1%TYPE;
strCity CUSTOMER_ADDRESS.CITY%TYPE;
strState CUSTOMER_ADDRESS.CITY%TYPE;
strZip CUSTOMER_ADDRESS.CITY%TYPE;
BEGIN
FOR aRow IN (SELECT CUSTOMER_ID FROM CUSTOMER)
LOOP
BEGIN
SELECT CURRENT_BALANCE
INTO nCurrent_balance
FROM CUSTOMER_RUNNING_BALANCE rb
WHERE rb.CUSTOMER_ID = aRow.CUSTOMER_ID;
EXCEPTION
WHEN NO_DATA_FOUND THEN
nCurrent_balance := NULL;
END;
BEGIN
SELECT TOTAL_CREDIT_LINE, AVAILABLE_CREDIT
INTO nTotal_credit_Line, nAvailable_credit
FROM CUSTOMER_CREDIT_LINE cl
WHERE cl.CUSTOMER_ID = aRow.CUSTOMER_ID;
EXCEPTION
WHEN NO_DATA_FOUND THEN
nTotal_credit_line := NULL;
nAvailable_credit := NULL;
END;
BEGIN
SELECT LOCATION_NAME,
STREET_ADDRESS_1,
CITY,
STATE,
ZIP
INTO strLocation_name,
strStreet_address_1,
strCity,
strState,
strZip
FROM CUSTOMER_ADDRESS ca
WHERE ca.CUSTOMER_ID = aRow.CUSTOMER_ID AND
ca.ADDRESS_TYPE = 'HEADQUARTERS';
EXCEPTION
WHEN NO_DATA_FOUND THEN
strLocation_name := NULL;
strStreet_address_1 := NULL;
strCity := NULL;
strState := NULL;
strZip := NULL;
END;
    -- Do something useful with the data here
END LOOP;
END;
Ick. Here we have a cursor and a bunch of individual SELECT statements pulling data from three additional sources into individually declared variables. There's a lot of lines of code here, but most of it is pretty much useless - except that it's all required due to the way this is written - AND there's a huge, glaring, and potentially fatal coding bug in this, although the code may run just fine - can you spot it? :-)
Using a cursor to join the tables together would reduce the code volume a lot, eliminate the need for all those exception handlers, and would completely eliminate the chance to make the bug above:
BEGIN
FOR aRow IN (SELECT c.CUSTOMER_ID,
rb.CURRENT_BALANCE,
cl.TOTAL_CREDIT_LINE,
cl.AVAILABLE_CREDIT,
ca.LOCATION_NAME,
ca.STREET_ADDRESS_1,
ca.CITY,
ca.STATE,
ca.ZIP
FROM CUSTOMER c
LEFT OUTER JOIN CUSTOMER_RUNNING_BALANCE rb
ON rb.CUSTOMER_ID = c.CUSTOMER_ID
LEFT OUTER JOIN CUSTOMER_CREDIT_LINE cl
ON cl.CUSTOMER_ID = c.CUSTOMER_ID
LEFT OUTER JOIN CUSTOMER_ADDRESS ca
ON ca.CUSTOMER_ID = c.CUSTOMER_ID AND
ca.ADDRESS_TYPE = 'HEADQUARTERS')
LOOP
  -- Do something useful with the data here
END LOOP;
END;
So - 59 lines of code in the first example, compared to 22 in the second. In the second example the relationships between the tables are clearly spelled out by the cursor. And the error in the first example (a classic cut-and-paste bug I actually made when writing it :-) is impossible to make in the second.
Relational databases are built to process queries such as the cursor in the second example quickly and efficiently. And by using a single query you're cutting down on the number of occasions where control has to switch between the PL/SQL and SQL engines.
In Oracle 11g, I am using the following in a procedure. Can someone please provide a better solution to achieve the same results?
FOR REC IN (SELECT E.EMP
            FROM EMPLOYEE E
            JOIN COMPANY C ON E.EMP = C.EMP
            WHERE C.FLAG = 'Y')
LOOP
  UPDATE EMPLOYEE SET FLAG = 'Y' WHERE EMP = REC.EMP;
END LOOP;
Is there a more efficient/better way to do this? I feel as if this method will run one update statement for each record found (Please correct me if I am wrong).
Here's the actual code in full:
create or replace PROCEDURE ACTION_MSC AS
BEGIN
  -- ALL MIGRATED CONTACTS, CANDIDATES, COMPANIES, JOBS
  -- ALL MIGRATED CANDIDATES, CONTACTS
  FOR REC IN (SELECT DISTINCT AC.PEOPLE_HEX
              FROM ACTION AC JOIN PEOPLE P ON AC.PEOPLE_HEX = P.PEOPLE_HEX
              WHERE P.TO_MIGRATE = 'Y')
  LOOP
    UPDATE ACTION SET TO_MIGRATE = 'Y' WHERE PEOPLE_HEX = REC.PEOPLE_HEX;
  END LOOP;
  -- ALL MIGRATED COMPANIES
  FOR REC IN (SELECT DISTINCT AC.COMPANY_HEX
              FROM ACTION AC JOIN COMPANY CM ON AC.COMPANY_HEX = CM.COMPANY_HEX
              WHERE CM.TO_MIGRATE = 'Y')
  LOOP
    UPDATE ACTION SET TO_MIGRATE = 'Y' WHERE COMPANY_HEX = REC.COMPANY_HEX;
  END LOOP;
  -- ALL MIGRATED JOBS
  FOR REC IN (SELECT DISTINCT AC.JOB_HEX
              FROM ACTION AC JOIN "JOB" J ON AC.JOB_HEX = J.JOB_HEX
              WHERE J.TO_MIGRATE = 'Y')
  LOOP
    UPDATE ACTION SET TO_MIGRATE = 'Y' WHERE JOB_HEX = REC.JOB_HEX;
  END LOOP;
  COMMIT;
END ACTION_MSC;
You're right, it will do one update for each record found. Looks like you could just do:
UPDATE EMPLOYEE SET FLAG = 'Y'
WHERE EMP IN (SELECT EMP FROM COMPANY WHERE FLAG = 'Y')
AND FLAG != 'Y';
A single update will generally be faster and more efficient than multiple individual row updates in a loop; see this answer for another example. Apart from anything else, you're reducing the number of context switches between PL/SQL and SQL, which add up if you have a lot of rows. You could always benchmark this with your own data, of course.
I've added a check of the current flag state so you don't do a pointless update with no changes.
It's fairly easy to compare the approaches to see that a single update is faster than one in a loop; with some contrived data:
create table people (id number, people_hex varchar2(16), to_migrate varchar2(1));
insert into people (id, people_hex, to_migrate)
select level, to_char(level - 1, 'xx'), 'Y'
from dual
connect by level <= 100;
create table action (id number, people_hex varchar2(16), to_migrate varchar2(1));
insert into action (id, people_hex, to_migrate)
select level, to_char(mod(level, 200), 'xx'), 'N'
from dual
connect by level <= 500000;
All of these will update half the rows in the action table. Updating in a loop:
begin
for rec in (select distinct ac.people_hex
from action ac join people p on ac.people_hex=p.people_hex
where p.to_migrate='Y')
loop
update action set to_migrate='Y' where people_hex=rec.people_hex;
end loop;
end;
/
Elapsed: 00:00:10.87
Single update (after rollback; I've left this in a block to mimic your procedure):
begin
update action set to_migrate = 'Y'
where people_hex in (select people_hex from people where to_migrate = 'Y');
end;
/
Elapsed: 00:00:07.14
Merge (after rollback):
begin
merge into action a
using (select people_hex, to_migrate from people where to_migrate = 'Y') p
on (a.people_hex = p.people_hex)
when matched then update set a.to_migrate = p.to_migrate;
end;
/
Elapsed: 00:00:07.00
There's some variation across repeated runs; in particular, UPDATE and MERGE are usually pretty close and sometimes swap places in my environment, but both are always significantly faster than updating in a loop. You can repeat this in your own environment with your own data spread and volumes, and you should if performance is that critical; but a single UPDATE is going to be faster than the loop. Whether you use UPDATE or MERGE isn't likely to make much difference.
declare
  CURSOR C1 IS
    select tgt.exp_date, (src.eff_date - 1/(24*60*60)) eff_date
    from mira_rate tgt, mira_rate_dummy src
    where src.tc_code = tgt.tc_code
      and src.carrier_code = tgt.carrier_code
      and tgt.exp_date is null
    for update of tgt.exp_date;
  v_a date;
  v_b date;
  i number := 0;
begin
  open c1;
  loop
    fetch c1 into v_a, v_b;
    exit when c1%notfound;
    update mira_rate
    set exp_date = v_b
    where current of c1;
    i := i + 1;
  end loop;
  dbms_output.put_line(i || ' rows updated');
  close c1;
  commit;
end;
After I execute this block, it locks the table and I get:
ORA-00054: resource busy and acquire with NOWAIT specified
Also, please tell me how to remove the lock. I tried killing the session, but that didn't work; I still get the same error.
After removing the lock, please help me with this requirement:
select tgt.exp_date, (src.eff_date - 1/(24*60*60)) eff_date
from mira_rate tgt, mira_rate_dummy src
where src.tc_code = tgt.tc_code
  and src.carrier_code = tgt.carrier_code
  and tgt.exp_date is null;
For each row it returns, I need to go to the mira_rate table and update exp_date = eff_date.
Please suggest how I should do this. I'm using Oracle 9i, so a MERGE without a WHEN NOT MATCHED clause doesn't work.
At first sight, there is no COMMIT in the code.
With a COMMIT the code will be okay, since committing releases the locks.
But better you would:
MERGE INTO mira_rate tgt
USING mira_rate_dummy src
ON (src.tc_code = tgt.tc_code and src.carrier_code = tgt.carrier_code)
WHEN MATCHED THEN UPDATE
SET exp_date= src.eff_date - 1/(24*60*60) --or just src.eff_date
WHERE tgt.exp_date is null;
This is what you want to do as far as I understand.
As a rule: What you can do in SQL, do in SQL, not PL/SQL.
Take out the 'FOR UPDATE'.
You need to be very clear in your mind why you need it and in my experience you generally don't.
Between us I think we are saying this should be your approach
begin
  UPDATE mira_rate tgt
  SET exp_date = (SELECT src.eff_date - 1/(24*60*60)
                  FROM mira_rate_dummy src
                  WHERE src.tc_code = tgt.tc_code
                    AND src.carrier_code = tgt.carrier_code)
  WHERE tgt.exp_date IS NULL;
  DBMS_OUTPUT.PUT_LINE(TO_CHAR(SQL%ROWCOUNT) || ' Rows Updated');
end;
No need for locks and no need for cursors.
Hope that helps.
Edit - still not entirely sure what your requirement is but the following sql may be what you are looking for.
UPDATE MIRA_RATE TGT
SET EXP_DATE =
(
SELECT SRC.EFF_DATE - 1/86400
FROM MIRA_RATE_DUMMY SRC
WHERE
SRC.TC_CODE = TGT.TC_CODE AND
SRC.CARRIER_CODE = TGT.CARRIER_CODE
)
WHERE
TGT.EXP_DATE IS NULL;
@Satheesh, an updatable join will only work through key-preserved columns. Check whether the select fetches the primary key and also uses it in the WHERE clause; otherwise the update will throw an error.
There is an error to watch for:
cannot modify a column which maps to a non key-preserved table
You can use a join, but the update needs a unique or primary key on the joined table in order to update the base table.
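To illustrate the key-preserved point, here is a hedged sketch of an updatable join view for the tables in this question. It assumes (and this is an assumption, not something the OP stated) that mira_rate_dummy has a unique constraint on (tc_code, carrier_code), which is what makes mira_rate key-preserved in the join:

```sql
-- Sketch only: works only if mira_rate_dummy(tc_code, carrier_code)
-- is declared unique; otherwise Oracle raises ORA-01779.
UPDATE (SELECT tgt.exp_date, src.eff_date
        FROM   mira_rate tgt
        JOIN   mira_rate_dummy src
               ON  src.tc_code = tgt.tc_code
               AND src.carrier_code = tgt.carrier_code
        WHERE  tgt.exp_date IS NULL)
SET exp_date = eff_date - 1/(24*60*60);
```

Without that unique constraint, the correlated-subquery UPDATE shown above is the safer form.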
I'm faced with the task of looking in a database with millions of records to determine which codes from a set of about 1500 have a corresponding record. For example, I have 1500 IDs in a CSV file. I want to know which of those IDs exist in the database, and are therefore correct, and which ones don't.
Is there a better way of doing this than "... WHERE id IN (1, 2, 3, ..., 1500)"?
The DB/language in question is ORACLE PL/SQL.
Thanks in advance for any help.
Build an external table on your CSV file. These are highly neat things which allow us to query the contents of an OS file in SQL. Find out more.
Then it's a simple matter of issuing a query:
select csv.id
     , case when tgt.id is null then 'invalid' else 'valid' end as valid_id
from your_external_tab csv
left join target_table tgt on (csv.id = tgt.id);
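For reference, the external table itself might be created like the sketch below. The directory object name, file name, and single-column layout are assumptions; adjust them to match the actual CSV:

```sql
-- Sketch only: assumes a directory object YOUR_DATA_DIR already points
-- at the OS directory holding the file, and that ids.csv contains one
-- numeric ID per line.
CREATE TABLE your_external_tab (
  id NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY your_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('ids.csv')
);
```

Once created, the table can be queried like any other, as in the join above.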
"CSV table is hardly ideal from a performance point of view"
Performance is a matter of context. In this case it depends on how often the data in the CSV changes and how often we need to query it. If the file is produced once a day and we only need to check the values after it has been delivered then an external table is the most efficient solution. But if this data set is a permanent repository which needs to be queried often then the overhead of writing to a heap table is obviously justified.
To me, a CSV file consisting of a bunch of IDs and nothing else sounds like transient data, and so fits the use case for external tables. But the OP may have additional requirements which they haven't mentioned.
Here is an alternative approach which doesn't require creating any permanent database objects. Consequently it is less elegant, and probably will perform worse.
It reads the CSV file laboriously using UTL_FILE and populates a collection based on SYSTEM.NUMBER_TBL_TYPE, a pre-defined collection (a nested table of NUMBER) which should be available in your Oracle database.
declare
  ids system.number_tbl_type := system.number_tbl_type();
  fh  utl_file.file_type;
  buf varchar2(32767);
  idx pls_integer := 0;
begin
  fh := utl_file.fopen('your_data_directory', 'your_data.csv', 'r');
  begin
    loop
      utl_file.get_line(fh, buf);
      idx := idx + 1;
      ids.extend();
      ids(idx) := to_number(trim(buf));
    end loop;
  exception
    when no_data_found then   -- end of file reached
      utl_file.fclose(fh);
    when others then
      if utl_file.is_open(fh) then
        utl_file.fclose(fh);
      end if;
      raise;
  end;
  for id_recs in ( select csv.column_value
                        , case when tgt.id is null then 'invalid' else 'valid' end as valid_id
                   from table(ids) csv
                   left join target_table tgt on (csv.column_value = tgt.id)
                 )
  loop
    dbms_output.put_line('ID ' || id_recs.column_value || ' is ' || id_recs.valid_id);
  end loop;
end;
Note: I have not tested this code. The principle is sound but the details may need debugging ;)
I have a trigger that verifies if a field is null:
create or replace trigger trig1
after insert on table_1
for each row
begin
if ((select table2.column2 from table2 where table2.id= :new.id) isnull) then
update table2 set table2.column2 = :new.column1 where table2.id = :new.id;
end if;
end trig1;
.
run;
I get an error that the trigger is created with compilation errors. I don't know what the problem is. I use Oracle SQL*Plus 10.2.0
The PL/SQL syntax doesn't allow for including SQL statements in the IF clause.
The correct approach is to separate out the SELECT statement and then test for its result. So that would be:
create or replace trigger trig1
after insert on table_1
for each row
declare
v table2.column2%type;
begin
select table2.column2
into v
from table2
where table2.id= :new.id;
if v is null
then
update table2
set table2.column2 = :new.column1
where table2.id = :new.id;
end if;
end trig1;
Note that this does not handle the existence of multiple rows in table2 matching the criteria, or indeed there being no matching rows. It also doesn't handle locking.
Also, bear in mind that code like this doesn't function well in multi-user environments; that's why I mentioned locking. You really ought to use procedural logic to handle these sorts of requirements. Although, as is often the case with ill-conceived triggers, the real culprit is a poor data model: table2.column2 should have been normalised out of existence.
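If those edge cases matter, the no-matching-row and multiple-row situations can at least be caught with an exception block. How they *should* be handled is an assumption about the requirements, not something the OP specified:

```sql
-- Sketch only: ignoring NO_DATA_FOUND and raising on TOO_MANY_ROWS
-- are assumed behaviours; locking is still not addressed.
create or replace trigger trig1
after insert on table_1
for each row
declare
  v table2.column2%type;
begin
  select table2.column2
  into   v
  from   table2
  where  table2.id = :new.id;
  if v is null then
    update table2
    set    table2.column2 = :new.column1
    where  table2.id = :new.id;
  end if;
exception
  when no_data_found then
    null;  -- no row in table2 for this id: nothing to update
  when too_many_rows then
    raise_application_error(-20001,
      'more than one table2 row matches id ' || :new.id);
end trig1;
```

Even so, the concurrency and data-model concerns above still apply; this only makes the failure modes explicit.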