Does SELECT start transaction in PL/SQL - sql

I was told that the following code won't reliably detect duplicates, because the result might change between the SELECT and the INSERT statement.
PROCEDURE AddNew(Pname VARCHAR2, Pcountry VARCHAR2)
AS
already_exists BOOLEAN;
BEGIN
SELECT COUNT(*)>0 INTO already_exists FROM Publishers WHERE name=Pname;
IF already_exists THEN
RAISE_APPLICATION_ERROR(-20014,'Publisher already exists!');
END IF;
INSERT INTO Publishers(id,name,country)
VALUES (NewPublisherId(),Pname,Pcountry);
END;
This post claims that SELECT starts a transaction:
Why do I get an open transaction when just selecting from a database View?
This part of documentation suggests otherwise:
A transaction implicitly begins with any operation that obtains a TX
lock:
When a statement that modifies data is issued
When a SELECT ... FOR UPDATE statement is issued
When a transaction is explicitly started with a SET TRANSACTION statement or the DBMS_TRANSACTION package
So? Does SELECT start a transaction or not?

The latter is true: https://docs.oracle.com/cloud/latest/db112/SQLRF/statements_10005.htm#SQLRF01705
A transaction implicitly begins with any operation that obtains a TX
lock:
When a statement that modifies data is issued
When a SELECT ... FOR UPDATE statement is issued
When a transaction is explicitly started with a SET TRANSACTION statement or the DBMS_TRANSACTION package
But it really does not matter from the point of view of the main problem: checking whether the record already exists in the database. Even if the transaction is explicitly started using SET TRANSACTION ..., your code simply does not detect duplicate records!
Just run a simple test, manually simulating the procedure in two simultaneous sessions, and you will see:
CREATE TABLE Publishers(
id int,
name varchar2(100)
);
Let's say that in session #1 the procedure begins at 8:00:00.0000:
SQL> Set transaction name 'session 1';
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Let's say that in session #2 the same procedure begins at 8:00:00.0020, just after the insert was made in session #1, but still before session #1 commits:
SQL> Set transaction name 'session 2';
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
Transaction #2 does not see the uncommitted changes made by session 1, so session 2 assumes there is no record for John and also inserts it into the table:
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Now the session 1 commits:
SQL> Commit;
Commit complete.
and a few milliseconds later the session2 commits too:
SQL> Commit;
Commit complete.
And the final result is a duplicate record, even though the transaction was explicitly started:
select * from publishers;
ID NAME
---------- ----------------------------------------------------------------------------------------------------
1 John
1 John
========== EDIT =================
You can avoid the duplicity by executing statement SET TRANSACTION
ISOLATION LEVEL SERIALIZABLE in the beginning. – @Draex_
Many think that ISOLATION LEVEL SERIALIZABLE will solve the problem magically. Unfortunately, it will not help.
Let's see how it works on a simple example:
Session #1
SQL> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Session #2
SQL> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Transaction set.
SQL> select count(*) FROM Publishers where name = 'John';
COUNT(*)
----------
0
SQL> INSERT INTO Publishers(id,name) VALUES(1,'John');
1 row created.
Session #1 again:
SQL> commit;
Commit complete.
SQL> select * from publishers;
ID NAME
---------- --------
1 John
and back to session #2
SQL> commit;
Commit complete.
SQL> select * from publishers;
ID NAME
---------- --------
1 John
1 John
As you can see, the magic of ISOLATION LEVEL SERIALIZABLE did not work.
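The reliable fix, not shown in the demos above, is to let the database enforce the rule: declare a unique constraint on name and catch DUP_VAL_ON_INDEX. A sketch (the constraint name is illustrative; NewPublisherId() is the questioner's own function):

```sql
ALTER TABLE Publishers ADD CONSTRAINT publishers_name_uk UNIQUE (name);

CREATE OR REPLACE PROCEDURE AddNew(Pname VARCHAR2, Pcountry VARCHAR2)
AS
BEGIN
  INSERT INTO Publishers(id, name, country)
  VALUES (NewPublisherId(), Pname, Pcountry);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- the index enforcing the constraint rejected the duplicate
    RAISE_APPLICATION_ERROR(-20014, 'Publisher already exists!');
END;
/
```

If two sessions race as in the demos, the second INSERT blocks on the first session's lock and raises DUP_VAL_ON_INDEX as soon as the first commits, so a duplicate can never be stored.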

Related

Decrement oracle sequence when an exception occurred

I try to insert a new record into my table using PL/SQL, so first of all I fetch a new sequence value like so:
select my_seq.nextval into seqId from dual;
And then I try to insert a new record using the generated seqId like this:
insert into myTable (id) values seqId ;
But when an error occurred during the insertion I want to decrement my sequence in an exception block. Does anyone have an idea please?
As you were already told, you shouldn't be doing this at all. Anyway, here's how you might do it.
Sample table and a sequence:
SQL> create table mytable (id number primary key);
Table created.
SQL> create sequence seq;
Sequence created.
SQL> set serveroutput on
A procedure which inserts seq.nextval into mytable and decrements the sequence in case of failure. I'm doing it in the simplest way: dropping the sequence and recreating it with the START WITH parameter set to the last fetched value minus 1 (note that DDL like this also ends the current transaction with an implicit commit). The DBMS_OUTPUT calls are here just to show what's going on in the procedure.
SQL> create or replace procedure p_test as
2 seqid number;
3 begin
4 seqid := seq.nextval;
5 dbms_output.put_line('Sequence number to be inserted = ' || seqid);
6
7 insert into mytable(id) values (seqid);
8
9 exception
10 when others then
11 dbms_output.put_line(sqlerrm);
12 execute immediate 'drop sequence seq';
13 execute immediate 'create sequence seq start with ' || to_char(seqid - 1);
14 end;
15 /
Procedure created.
Let's test it: this should insert 1:
SQL> exec p_test;
Sequence number to be inserted = 1
PL/SQL procedure successfully completed.
SQL> select * from mytable;
ID
----------
1
So far so good. Now, I'll manually insert ID = 2 so that the next procedure call violates unique constraint:
SQL> insert into mytable values (2);
1 row created.
Calling the procedure again:
SQL> exec p_test;
Sequence number to be inserted = 2
ORA-00001: unique constraint (SCOTT.SYS_C007547) violated
PL/SQL procedure successfully completed.
OK; procedure silently completed. It didn't insert anything, but it decremented the sequence:
SQL> select * from mytable;
ID
----------
1 --> populated with the first P_TEST call
2 --> populated manually
SQL> select seq.nextval from dual;
NEXTVAL
----------
1 --> sequence is decremented from 2 to 1
If I delete the offending ID = 2 and try again:
SQL> delete from mytable where id = 2;
1 row deleted.
SQL> exec p_test;
Sequence number to be inserted = 2
PL/SQL procedure successfully completed.
SQL> select * from mytable;
ID
----------
1
2
SQL> select seq.nextval from dual;
NEXTVAL
----------
3
SQL>
Right; kind of works, but it's not worth the pain.
Besides, you commented that there are 20 rows in the table. What will you do if someone deletes row #13? Will you decrement all values between #14 and #20? What if it's a primary key, referenced by foreign keys?
Seriously, don't do it.
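If the goal is simply a safe surrogate key, a cleaner path (assuming Oracle 12c or later) is to let the table own the numbering with an identity column and accept that gaps are normal; a rolled-back insert just burns a value:

```sql
-- 12c+: the table generates its own key; gaps after rollbacks are harmless
CREATE TABLE mytable (
  id    NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  descr VARCHAR2(30)
);

INSERT INTO mytable (descr) VALUES ('first row');
```

Gap-free numbering, if a business rule truly demands it, has to be assigned at reporting time (e.g. with ROW_NUMBER()), never baked into stored keys.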
You cannot take a value back once NEXTVAL has been fetched (a sequence is either incremented or decremented, not both), and you cannot re-initialize the sequence (START WITH ...) after creation in Oracle. There is no supported way of doing that. Hope this helps.
However, if you want to continue this absurdity, you can try the approach below, but you first need to initialize your sequence by calling myseq.nextval once.
Then insert CURRVAL, and only if the insert succeeds fetch NEXTVAL to advance the sequence; otherwise the sequence keeps its previous value.
Declare
  v_curr pls_integer;
  v_next pls_integer;
Begin
  select seq.currval into v_curr from dual;
  insert into myTable (id) values (v_curr);
  -- advance the sequence only after a successful insert
  select seq.nextval into v_next from dual;
Exception
  when others then
    null; -- do the exception handling here
end;

How to design task assignment system?

I want to design a system similar to the Stack Overflow review feature. That is to say:
There are n tasks to be assigned to users (the number of users is unknown). At any time a task should be assigned to at most one user, and different users must not be assigned the same task.
For example, with n = 8, a user entering the system is assigned 3 tasks by default.
At 17:00, Tom enters the system and gets tasks 1, 2, 3.
At 17:01, Jim enters the system and gets tasks 4, 5, 6.
At 17:02, Jerry enters the system and gets tasks 7, 8.
At 17:03, Bob enters the system and gets no task.
At 17:05, Tom completes tasks 1 and 2 and leaves the system.
At 17:06, Bob enters the system again and gets task 3.
Suppose I use a database to store the task info.
My solution: when tasks 1, 2, 3 are assigned to Tom, delete those 3 records from the DB and keep them in memory, so nobody else can get them. When Tom leaves the system, insert his completed and uncompleted tasks back into the DB (with task status "completed" or "uncompleted").
The disadvantage is that keeping records only in memory is not 100% safe: if the system crashes, data may be lost.
Could somebody know how stackoverflow designs review feature? Or share other solutions? I'm wondering whether SELECT ... FOR UPDATE is good in this use case.
What you need to implement is a FIFO queue. In Oracle the best tool for such a thing (unless you want to implement an actual queue with AQ) is SELECT ... FOR UPDATE with the SKIP LOCKED clause. SKIP LOCKED allows multiple users to consume from the queue without blocking each other.
Here's a simple interface:
create or replace package task_mgmt is
function get_next_task return tasks.id%type;
procedure complete_task (p_id in tasks.id%type);
procedure release_task (p_id in tasks.id%type);
end task_mgmt;
/
This is a bare-bones implementation:
create or replace package body task_mgmt is
function get_next_task return tasks.id%type
is
return_value tasks.id%type;
cursor c_tsk is
select id
from tasks
where status = 'open'
order by date_created, id
for update skip locked;
begin
open c_tsk;
fetch c_tsk into return_value;
update tasks
set status = 'progress'
, assigned = user
where current of c_tsk;
close c_tsk;
return return_value;
end get_next_task;
procedure complete_task (p_id in tasks.id%type)
is
begin
update tasks
set status = 'complete'
, date_completed = sysdate
where id = p_id;
commit;
end complete_task;
procedure release_task (p_id in tasks.id%type)
is
begin
rollback;
end release_task;
end task_mgmt;
/
Updating the status when a user takes a task creates a lock. Because of the SKIP LOCKED clause, the next user won't see that task. This is a lot cleaner than deleting and re-inserting records.
Here's some data:
create table tasks (
id number not null
, descr varchar2(30) not null
, date_created date default sysdate not null
, status varchar2(10) default 'open' not null
, assigned varchar2(30)
, date_completed date
, constraint task_pk primary key (id)
)
/
insert into tasks (id, descr, date_created) values (1000, 'Do something', date '2015-05-28')
/
insert into tasks (id, descr, date_created) values (1010, 'Look busy', date '2015-05-28')
/
insert into tasks (id, descr, date_created) values (1020, 'Get coffee', date '2015-06-12')
/
Let's pop! Here's Session one:
SQL> var tsk1 number;
SQL> exec :tsk1 := task_mgmt.get_next_task ;
PL/SQL procedure successfully completed.
SQL> print :tsk1
TSK1
----------
1000
SQL>
Meanwhile in Session two:
SQL> var tsk2 number;
SQL> exec :tsk2 := task_mgmt.get_next_task ;
PL/SQL procedure successfully completed.
SQL> print :tsk2
TSK2
----------
1010
SQL>
Back in Session one:
SQL> exec task_mgmt.complete_task (:tsk1);
PL/SQL procedure successfully completed.
SQL> exec :tsk1 := task_mgmt.get_next_task ;
PL/SQL procedure successfully completed.
SQL> print :tsk1
TSK1
----------
1020
SQL>
The main drawback of this approach is that it requires users to maintain stateful database sessions while they work on their tasks. If that's not the case, you need an API in which get_next_task() is a discrete transaction, and you can forget about holding locks.
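For that stateless case, one possible sketch (reusing the hypothetical tasks table above; names are illustrative) makes the claim itself a short transaction: SKIP LOCKED guards only the brief window between finding a row and committing the status change, and afterwards the status column, not a lock, marks ownership:

```sql
create or replace function claim_next_task return tasks.id%type
is
  cursor c_tsk is
    select id
      from tasks
     where status = 'open'
     order by date_created, id
       for update skip locked;
  v_id tasks.id%type;
begin
  open c_tsk;
  fetch c_tsk into v_id;
  if c_tsk%found then
    update tasks
       set status   = 'progress'
         , assigned = user
     where current of c_tsk;
  end if;
  close c_tsk;
  commit;       -- releases the lock; the status column now records the claim
  return v_id;  -- null when the queue is empty
end claim_next_task;
/
```

Releasing a task then becomes an ordinary UPDATE setting status back to 'open', rather than a rollback.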
Incidentally, it's probably better to let users grab a task rather than assign them through a logon trigger (or whatever you have in mind by "Tom enters the system and get tasks 1, 2, 3."). Pulling tasks is how the SO Review queue works.
Also, just assign one task at a time. That way you can get efficient distribution of work. You want to avoid the situation where Tom has three tasks on his plate, one of which he isn't going to complete, and Bob has nothing to do. That is, unless you're Bob.

IF Statement inside Trigger Clause

I want to use an if statement inside a trigger, but the value for the comparison comes from another select statement.
I have done the following:
create or replace
Trigger MYTRIGGER
After Insert On Table1
Referencing Old As "OLD" New As "NEW"
For Each Row
Begin
Declare Counter Int;
Select Count(*) From Table2 Where Table2."Email" = :New.U_MAIL Into Counter;
IF Counter < 1 THEN
//INSERT Statement here...
END IF;
End;
My logic is simple: if a user with the same email exists, the insert should not happen.
The above code did not work. How can we do this?
A few syntax errors. Would be closer to something like this:
create or replace
Trigger MYTRIGGER
After Insert On Table1
Referencing Old As "OLD" New As "NEW"
For Each Row
DECLARE
v_count NUMBER;
BEGIN
SELECT COUNT(*)
INTO v_count
FROM Table2
WHERE Email = :New.U_MAIL
;
IF v_count > 0
THEN
RAISE_APPLICATION_ERROR(-20000, 'Not inserted...');
END IF;
END;
Your approach is wrong. Referential integrity should not be enforced using triggers; it simply cannot work as required. See this example:
Connected to Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
Connected as test@soft12c1
SQL> create table mail_1 (email varchar2(100));
Table created
SQL> create table mail_2 (email varchar2(100));
Table created
SQL> create trigger mail_1_check
2 before insert on mail_1
3 for each row
4 declare
5 cnt integer;
6 begin
7 select count(*) into cnt from mail_2 where email = :new.email;
8 if cnt > 0 then
9 raise_application_error(-20100, 'Email already exists');
10 end if;
11 end;
12 /
Trigger created
SQL> insert into mail_2 values ('president@gov.us');
1 row inserted
SQL> insert into mail_1 values ('king@kingdom.en');
1 row inserted
SQL> insert into mail_1 values ('president@gov.us');
ORA-20100: Email already exists
ORA-06512: at "TEST.MAIL_1_CHECK", line 6
ORA-04088: error during execution of trigger 'TEST.MAIL_1_CHECK'
It looks like the trigger works correctly, but that's not true. See what happens when several users work simultaneously.
-- First user in his session
SQL> insert into mail_2 values ('dictator@country.by');
1 row inserted
-- Second user in his session
SQL> insert into mail_1 values ('dictator@country.by');
1 row inserted
-- First user in his session
SQL> commit;
Commit complete
-- Second user in his session
SQL> commit;
Commit complete
-- Any user in any session
SQL> select * from mail_1 natural join mail_2;
EMAIL
--------------------------------------------------------------------------------
dictator@country.by
If you use triggers for this task, you have to serialize all attempts to use this data, e.g. execute LOCK TABLE ... IN EXCLUSIVE MODE and hold the lock until commit. Generally it's a bad decision. For this concrete task you can use a much better approach:
Connected to Oracle Database 12c Enterprise Edition Release 12.1.0.2.0
Connected as test@soft12c1
SQL> create table mail_1_2nd(email varchar2(100));
Table created
SQL> create table mail_2_2nd(email varchar2(100));
Table created
SQL> create materialized view mail_check
2 refresh complete on commit
3 as
4 select 1/0 data from mail_1_2nd natural join mail_2_2nd;
Materialized view created
OK. Let's see what happens if we try to use the same email:
-- First user in his session
SQL> insert into mail_1_2nd values ('dictator@gov.us');
1 row inserted
-- Second user in his session
SQL> insert into mail_2_2nd values ('dictator@gov.us');
1 row inserted
SQL> commit;
Commit complete
-- First user in his session
SQL> commit;
ORA-12008: error in materialized view refresh path
ORA-01476: divisor is equal to zero
SQL> select * from mail_1_2nd natural join mail_2_2nd;
no rows selected

Is commit necessary after DML operation on table using simple sql query?

Is it necessary to run the COMMIT command after a DML operation in SQL Developer? For example, I performed the following UPDATE query:
UPDATE TAB1
SET TBX_TYP='ZX'
WHERE TBX_TYP IN(SELECT TBX_TYP
FROM(
SELECT DISTINCT TBX_TYP
FROM TAB1
ORDER BY 1 )
WHERE ROWNUM=1);
Then when I tried filtering columns, I found that nothing was updated.
The COMMIT statement is necessary if you want your changes to be visible to other users/connections. For example:
Session1:
SQL> conn hr/hr
Connected.
SQL> truncate table ttt;
Table truncated.
SQL> desc ttt;
Name Null? Type
----------------------------------------- -------- ----------------------------
COL1 NOT NULL VARCHAR2(20 CHAR)
SQL> insert into ttt values('one');
1 row created.
SQL> select col1 from ttt;
COL1
--------------------
one
So new data is available in current session.
Session2:
SQL> conn hr/hr
Connected.
SQL> select col1 from ttt;
no rows selected
But another session can't see this data. So let's commit it:
Session1:
SQL> commit;
Commit complete.
Session2:
SQL> /
COL1
--------------------
one
Now this value is available for both sessions.
A COMMIT is also necessary to make the data durable in the data files.
For example, let's add a new row to the ttt table but not commit it:
Session1:
SQL> insert into ttt values('two');
1 row created.
SQL> select col1 from ttt;
COL1
--------------------
one
two
Then let's shut down the database abnormally and start it again:
Session2:
SQL> conn / as sysdba
Connected.
SQL> shutdown abort
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size 2260048 bytes
Variable Size 616563632 bytes
Database Buffers 444596224 bytes
Redo Buffers 5517312 bytes
Database mounted.
Database opened.
SQL>
Then reconnect Session1 and look at the ttt table:
SQL> conn hr/hr
Connected.
SQL> select col1 from ttt;
COL1
--------------------
one
As you can see, the database does not retain uncommitted data after recovery.
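As an aside, if you keep forgetting to commit during ad-hoc work, SQL*Plus can commit for you automatically (SQL Developer has a similar autocommit preference). A sketch, with the table and values taken from the question and the filter purely illustrative; use with care, since autocommit defeats multi-statement transactions:

```sql
-- SQL*Plus: commit automatically after each successful DML statement
SET AUTOCOMMIT ON

UPDATE tab1 SET tbx_typ = 'ZX' WHERE tbx_typ = 'ZZ';  -- committed immediately

-- turn it back off for normal transactional work
SET AUTOCOMMIT OFF
```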
Add a COMMIT after every DML (INSERT, UPDATE, DELETE) command.

Can I undo a rollback in sqlplus?

I somehow thought rollback meant undoing the last action. This is what I did:
SQL> update owns
2 set DRIVER_ID=99999999
3 ^[[A^Z;
4 rows updated.
SQL> select * from owns;
DRIVER_ID LICENSE
-------------------- ----------
99999999 ABC222
99999999 MSA369
99999999 MZZ2828
99999999 ZGA123
SQL> rollback
2 ;
Rollback complete.
SQL> select * from owns;
no rows selected
Is everything gone forever? All my data too? Thanks for any advice and help
What you show is not complete: before that UPDATE you must have done something else, like an uncommitted INSERT of those rows...
There is no rollback of a rollback, if that is what you are asking. The only way to get this "undone" is to re-execute all statements executed between the last commit and the rollback, in the same order.
Another option would exist if your Oracle DB had an active flashback area: then you could "rewind" the whole DB to a point in time just before you issued the rollback.