I have an old piece of client software that uses a connected Oracle database for persistence. As its interface, the client software only allows calling functions and procedures. I have nearly full access to the database, i.e., I can define functions and procedures. Because of this interface, only functions can return values; I cannot use the OUT parameters of procedures.
Now I simply want to read a value from a table:
SELECT value FROM myTable WHERE id = 42;
And increase the value afterwards:
UPDATE myTable SET value = value + 1 WHERE id = 42;
I could use a function for the SELECT statement and a procedure for the UPDATE and call both successively. The problem is that the client side has no notion of transactions, so between the SELECT and the UPDATE another thread could read or write stale values.
So my question is, how can I use both calls in a transaction without using transactions...
Tried Approaches:
Use anonymous PL/SQL Blocks -> the syntax is not supported by the client.
Put both calls in a single function -> DML is not allowed in a select statement.
PRAGMA AUTONOMOUS_TRANSACTION -> I heard it is a bad thing and should not be used.
You can do DML inside a function as demonstrated below, but I stress: take heed of the other comments. Look at using a sequence (even multiple sequences), because doing DML inside a function is generally a bad idea: the number of times a function is executed (when called from SQL) is not deterministic. There are also scalability issues at high volume, and in a multi-user environment you need to handle locking/serialization, otherwise you'll have multiple sessions getting the same integer value returned.
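For comparison, a minimal sketch of that sequence-based alternative (the sequence and function names here are made up for illustration): a sequence increments atomically, so there is no separate SELECT followed by an UPDATE and no race between sessions.

create sequence my_counter_seq start with 1 increment by 1;

create or replace function next_counter return number is
begin
  -- NEXTVAL increments and returns atomically; no DML, no commit,
  -- no autonomous transaction needed
  return my_counter_seq.nextval;
end;
/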
So... after all that, you still want to head down this path :-(
SQL> create table t ( x int );
Table created.
SQL> insert into t values (0);
1 row created.
SQL>
SQL> create or replace
2 function f return int is
3 pragma autonomous_transaction;
4 retval int;
5 begin
6 update t
7 set x = x + 1
8 returning x into retval;
9 commit;
10 return retval;
11 end;
12 /
Function created.
SQL>
SQL> select f from dual;
F
----------
1
1 row selected.
SQL> select * from t;
X
----------
1
1 row selected.
SQL> select f from dual;
F
----------
2
1 row selected.
SQL> select * from t;
X
----------
2
1 row selected.
SQL> select f from dual;
F
----------
3
1 row selected.
SQL> select * from t;
X
----------
3
1 row selected.
Related
I have a table with two columns: k (primary key) and value.
I'd like to:
1. select for update by k; if k is not found, insert a new row with a default value.
2. with the returned value (from the existing or newly inserted row), do some processing.
3. update the row and commit.
Is it possible to make this "select for update and insert default value if not found"?
If we implement (1) as select / check if found / insert if not found, we have concurrency problems: two sessions could run the select concurrently on a non-existent key, both will try to insert, and one of them will fail.
In this case the desired behavior is to perform the select/insert atomically: one of the sessions performs it, the second one stays blocked until the first one commits, and then uses the value inserted by the first one.
We currently implement it by always doing an "insert ... if not exists.../commit" before the "select for update", but this means always attempting an insert even though it is rarely needed.
Is there any way to implement it in one SQL statement?
Thanks!!
See if k is available
SELECT * FROM table
WHERE k = value
FOR UPDATE
If no rows returned, then it doesn't exist. Insert it:
INSERT INTO table(k, col1, col2)
VALUES (value, val1, default)
select ... for update is the first step you should take; without it, you can't "reserve" that row for further processing (unless you're willing to lock the whole table in exclusive mode; if that "processing" takes no time, that could also be an option, especially if there aren't many users doing it).
If the row exists, the rest is simple: process it, update it, commit.
But if it doesn't exist, you'll have to insert a new row (just as you said), and that's where the problem of two (or more) users inserting the same value appears.
To avoid it, create a function which
will return a unique ID value for a new row
is an autonomous transaction
Why an autonomous transaction? Because you're performing DML in it (an update or an insert), and you can't do that in a function called from a query unless it is an autonomous transaction.
Users will have to use that function to get the next ID value. Here's an example: you'll need a table (my_id) which holds the last used ID (and every user who accesses it via the function will get a new value).
Table:
SQL> create table my_id (id number);
Table created.
Function:
SQL> create or replace function f_id
2 return number
3 is
4 pragma autonomous_transaction;
5 l_nextval number;
6 begin
7 select id + 1
8 into l_nextval
9 from my_id
10 for update of id;
11
12 update my_id set
13 id = l_nextval;
14
15 commit;
16 return (l_nextval);
17
18 exception
19 when no_data_found then
20 lock table my_id in exclusive mode;
21
22 insert into my_id (id)
23 values (1);
24
25 commit;
26 return(1);
27 end;
28 /
Function created.
Use it as
SQL> select f_id from dual;
F_ID
----------
1
SQL>
That's it... the code you'll use will then be something like this:
SQL> create table test
2 (id number constraint pk_test primary key,
3 name varchar2(10),
4 datum date
5 );
Table created.
SQL> create or replace procedure p_test (par_id in number)
2 is
3 l_id test.id%type;
4 begin
5 select id
6 into l_id
7 from test
8 where id = par_id
9 for update;
10
11 update test set datum = sysdate where id = par_id;
12 exception
13 when no_data_found then
14 insert into test (id, name, datum)
15 values (f_id, 'Little', sysdate); --> function call is here
16 end;
17 /
Procedure created.
SQL> exec p_test (1);
PL/SQL procedure successfully completed.
SQL> select * from test;
ID NAME DATUM
---------- ---------- -------------------
1 Little 04.09.2021 20:49:21 --> row was inserted
SQL> exec p_test (1);
PL/SQL procedure successfully completed.
SQL> select * from test;
ID NAME DATUM
---------- ---------- -------------------
1 Little 04.09.2021 20:49:30 --> row was updated
SQL>
Use a sequence to generate a surrogate primary key instead of using a natural key. If you had a real natural key, then it would be extremely unlikely for two users to submit the same value at the same time.
There are several ways to automatically generate primary keys. I prefer to use sequence defaults, like this:
create sequence test_seq;
create table test1
(
k number default test_seq.nextval,
value varchar2(4000),
constraint test1_pk primary key(k)
);
If you can't switch to a surrogate key or a real natural key:
Change the "insert ... if not exist.../commit" to a simpler "insert ... if not exist", and perform all operations in a single transaction. Inserting the same primary key in different sessions, even uncommitted, is impossible in Oracle. Although the SELECT won't cause a block, the INSERT will. This behavior is an exception to Oracle's implementation of isolation, the "I" in "ACID", and in this case that behavior can work to your advantage.
If two sessions attempt to insert the same primary key at the same time, the second session will hang, and when the first session finally commits, the second session will fail with the exception "ORA-00001: unique constraint (X.Y) violated". Let that exception become your flag for knowing when a user has submitted a duplicate value. You can catch the exception in the application and have the user try again.
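A rough sketch of that insert-first, single-transaction flow (the columns k and value come from the question; my_table, the bind variable, and the default value are placeholders):

begin
  -- rely on the primary key to serialize concurrent sessions
  insert into my_table (k, value) values (:k, 'default');
exception
  when dup_val_on_index then
    null;  -- another session inserted (or is inserting) this key; just fall through
end;
/

-- still in the same transaction: lock the row and read whatever value is now there
select value
from   my_table
where  k = :k
for update;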
I am working on a script to be used later in my SSIS ETL. The source DB is Oracle and I am using SQL Developer 20.0.2.75.
I spent a lot of time declaring 100 variables, but it doesn't seem to work in SQL Developer.
Define & Initialise:
Declare
V1 number;
V2 number;
.
.
.
V100 number;
Begin
Select UDF(params1,param2) into V1 from dual;
Select UDF(params3,param4) into V2 from dual;
...
End;
I was hoping I'd be able to use these variables in my script like :
select columns from table where Col1=:V1 and Col2=:V2
When used "Run Statement" prompts for values, "Run Script" doesn't see to like into Variable statements.
I even tried :
select columns from table where Col1=&&V1 and Col2=&&V2
Now my query doesn't work!
After the responses below, I changed my script to:
Variable V1 Number;
Variable V2 Number;
exec select MyFunction(p1,p2) into :V1 from Dual;
/
Select columns from table where col1=:V1 and col2=:V2
It still prompts for a value.
This is how I defined my function:
Create Function MyFunction(m IN Varchar, s IN Number)
Return Number
IS c Number;
select code into c from table where col1=m and col2=s;
Return(c);
End;
Is there anything wrong with the function?
You define variables as you would in SQL*Plus or SQLcl and then run it as a script:
variable x1 number
begin
select 123 into :x1 from dual;
end;
/
print x1
A similar example in SQL*Plus (it will work in SQL Developer as well):
SQL> set serverout on
SQL> variable x1 number
SQL> begin
2 select 5 into :x1 from dual;
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> print x1
X1
----------
5
SQL>
SQL> select rownum from dual
2 connect by level <= :x1;
ROWNUM
----------
1
2
3
4
5
SQL>
SQL> begin
2 dbms_output.put_line('X1 is '||:x1);
3 end;
4 /
X1 is 5
PL/SQL procedure successfully completed.
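Applied to your setup, the script-run version might look roughly like this (MyFunction and the bind variable names come from your post; the parameter values and the table/column names are placeholders):

variable V1 number
variable V2 number

begin
  -- assign straight to the bind variables; no SELECT ... INTO needed
  :V1 := MyFunction('param1', 1);
  :V2 := MyFunction('param2', 2);
end;
/

select columns
from   your_table
where  col1 = :V1
and    col2 = :V2;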
I spent so much time declaring 100 variables
To me, it looks like the wrong approach. OK, declare a few variables, but 100 of them?! Why not switch to something easier to maintain? A table, for example.
create table params
(var varchar2(20),
value varchar2(20)
);
Pre-populate it with all variables you use (and then just update their values), or just insert rows:
insert into params (var, value) values ('v1', UDF(params1, param2));
insert into params (var, value) values ('v2', UDF(params3, param4));
...
Fetch values through a function:
create or replace function f_params (par_var in varchar2)
return varchar2
is
retval varchar2(20);
begin
select value
into retval
from params
where var = par_var;
return retval;
end;
Use it (in your query) as:
select columns
from table
where Col1 = f_params('v1')
and Col2 = f_params('v2')
If many users use it, consider creating one "master" params table (which contains all the variables) and a global temporary table (which would be populated and used by each of those users).
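A minimal sketch of that per-user variant (the table name and the ON COMMIT option are assumptions):

-- each session sees only its own rows; they disappear when the session ends
create global temporary table params_session
( var   varchar2(20),
  value varchar2(20)
) on commit preserve rows;

The f_params function above could then read from this table instead of (or in addition to) the master table.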
Good day,
my customer uses an application that was initially designed for MSSQL, which probably does case-insensitive searches by default. But the customer uses Oracle, so it needs some extra tweaking.
So the question is: how can I tell Oracle to make a given SELECT ... LIKE statement search case-insensitively, with the following limitations?
ALTER SESSION cannot be used individually (by trigger: maybe)
Other queries from the same session must not be affected
The SELECT-statement cannot be altered
I know about the possibility of setting NLS_SORT at the system level, but that would basically kill performance, as regular indexes would no longer be used.
You can use DBMS_ADVANCED_REWRITE to rewrite the SQL into a case-insensitive version.
Subtly changing queries like this can be confusing and can make troubleshooting and tuning difficult. The package also has some limitations that may make it impractical, such as not supporting bind variables.
1. Sample Schema
SQL> drop table test1;
Table dropped.
SQL> create table test1(a varchar2(100));
Table created.
SQL> insert into test1 values ('case INSENSITIVE');
1 row created.
SQL> commit;
Commit complete.
2. The query is initially case-sensitive and matches 0 rows
SQL> select count(*) total from test1 where a like '%case insensitive%';
TOTAL
----------
0
3. Create rewrite equivalence - add a LOWER function
SQL> begin
2 sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
3 name => 'case_insensitive_1',
4 source_stmt => q'[select count(*) total from test1 where a like '%case insensitive%']',
5 destination_stmt => q'[select count(*) total from test1 where lower(a) like '%case insensitive%']',
6 validate => false
7 );
8 end;
9 /
PL/SQL procedure successfully completed.
4. Now the same query is case-insensitive and matches 1 row
SQL> alter session set query_rewrite_integrity = trusted;
Session altered.
SQL> select count(*) total from test1 where a like '%case insensitive%';
TOTAL
----------
1
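If the rewrite later becomes a source of confusion, it can be removed with the same package, using the equivalence name declared in step 3:

begin
  sys.dbms_advanced_rewrite.drop_rewrite_equivalence(
    name => 'case_insensitive_1'
  );
end;
/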
I have a procedure I am running from SQL Developer. It pumps out about 50 columns. Currently I am working on a bug that updates one of these columns. Is it possible to show just column X from the result?
I am running it as
VARIABLE cursorout REFCURSOR;
EXEC MY_PROC('-1', '-1', '-1', 225835, :cursorout);
PRINT cursorout;
Ideally I want to print out the 20th column, so I would like to do something like:
PRINT cursorout[20];
Thanks
Is it possible to show just column X from the result?
Not without additional coding, no.
As @OldProgrammer said in the comment to your question, you can use the dbms_sql package to describe the columns and pick the one you like.
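A rough sketch of that DBMS_SQL route, using the call from your post and assuming the 20th column can be fetched as a string:

declare
  l_rc    sys_refcursor;
  l_cur   integer;
  l_cols  integer;
  l_desc  dbms_sql.desc_tab;
  l_val   varchar2(4000);
begin
  MY_PROC('-1', '-1', '-1', 225835, l_rc);
  -- switch from the ref cursor to a DBMS_SQL cursor so columns can be addressed by position
  l_cur := dbms_sql.to_cursor_number(l_rc);
  dbms_sql.describe_columns(l_cur, l_cols, l_desc);
  -- define (and therefore fetch) only the 20th column, as a string
  dbms_sql.define_column(l_cur, 20, l_val, 4000);
  while dbms_sql.fetch_rows(l_cur) > 0 loop
    dbms_sql.column_value(l_cur, 20, l_val);
    dbms_output.put_line(l_desc(20).col_name || ': ' || l_val);
  end loop;
  dbms_sql.close_cursor(l_cur);
end;
/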
But if, as you said, you know the column names, probably the easiest way to display the contents of that column is to use XML functions, xmlsequence() and extract() in particular.
Unfortunately we cannot pass a SQL*Plus bind variable as a parameter to the xmlsequence() function, so you might consider wrapping your procedure in a function that returns a ref cursor.
Test table:
create table t1(col, col2) as
select level
, level
from dual
connect by level <= 5;
SQL> select * from t1;
COL COL2
---------- ----------
1 1
2 2
3 3
4 4
5 5
Here is a simple procedure, which opens a refcursor for us:
create or replace procedure p1(
p_cursor out sys_refcursor
) is
begin
open p_cursor for
select * from t1;
end;
/
Procedure created
Here is the wrapper function for the p1 procedure; it simply executes the procedure and returns the ref cursor:
create or replace function p1_wrapper
return sys_refcursor is
l_res sys_refcursor;
begin
p1(l_res);
return l_res;
end;
/
Function created
The query. The extract path is ROW/COL2/text(), where COL2 is the name of the column we want to print:
select t.extract('ROW/COL2/text()').getstringval() as res
from table(xmlsequence(p1_wrapper)) t ;
Result:
RES
--------
1
2
3
4
5
5 rows selected.
In my opinion, you could define a cursor in the MY_PROC procedure that selects only the column being updated (for example, the 20th one) and then return that cursor. Or you could simply create a table to record the result of every execution of your procedure.
I'm creating a query which uses 2 embedded server functions multiple times.
Problem: the functions search through a decently large table, and they take a long time to execute.
Goal: Use a subquery as if it were a table so that I can reference columns without running the function to generate the column more than once.
Example Pseudocode:
Select general.column1, general.column2, general.column1-general.column2
from (select package.function1('I take a long time') column1,
package.function2('I take even longer') column2,
normal_column
from bigtable) general;
When I run my code, general.column1 re-evaluates the function that defines column1 instead of reusing the data it returned (which is ultimately what I'm after).
I'm fairly new to SQL, so any help is appreciated and if you need more info, I'll do my best to provide it.
Thanks!
I suggest you use subquery factoring. The first subquery will be executed only once and then reused throughout the rest of the query.
WITH function_result AS
(SELECT package.function1('I take a long time') column1
, package.function2('I take even longer') column2
FROM dual)
SELECT function_result.column1
, function_result.column2
, function_result.column1 - function_result.column2
, bigtable.normal_column
FROM bigtable CROSS JOIN function_result
In general, what you want to do in this case is take advantage of scalar subquery caching.
i.e. put:
Select general.column1, general.column2, general.column1-general.column2
from (select (select package.function1('I take a long time') from dual) column1,
(select package.function2('I take even longer') from dual) column2,
normal_column
from bigtable) general;
Declaring the function as DETERMINISTIC also helps, if it really is deterministic.
A small example:
SQL> create or replace function testfunc(i varchar2)
2 return varchar2
3 is
4 begin
5 dbms_application_info.set_client_info(userenv('client_info')+1 );
6 return 'hi';
7 end;
8 /
Function created.
Now let's test a call to the function like the one you have:
SQL> exec dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
SQL> set autotrace traceonly
SQL> select *
2 from (select testfunc(owner) a
3 from all_objects);
57954 rows selected.
SQL> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
57954
The function was called 57954 times (once per row). Now let's use scalar subquery caching:
SQL> exec dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
SQL> select *
2 from (select (select testfunc(owner) from dual) a
3 from all_objects);
57954 rows selected.
SQL> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
178
178 calls instead of 57k!
In your case you've only shown a literal input, with nothing varying per row (if that really is the case, the number of calls after using scalar caching should be 1).
If we add DETERMINISTIC:
SQL> create or replace function testfunc(i varchar2)
2 return varchar2 deterministic
3 is
4 begin
5 dbms_application_info.set_client_info(userenv('client_info')+1 );
6 return 'hi';
7 end;
8 /
Function created.
SQL> exec dbms_application_info.set_client_info(0);
PL/SQL procedure successfully completed.
SQL> select *
2 from (select (select testfunc(owner) from dual) a
3 from all_objects);
57954 rows selected.
SQL> select userenv('client_info') from dual;
USERENV('CLIENT_INFO')
----------------------------------------------------------------
55
Now down to 55. In 11g we have RESULT_CACHE, which we can use in place of DETERMINISTIC; that would reduce the calls on subsequent runs to 0.
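For completeness, a sketch of that RESULT_CACHE variant with the same toy function (11g or later):

create or replace function testfunc(i varchar2)
  return varchar2
  result_cache
is
begin
  -- same counter trick as above, only to count how often the body really runs
  dbms_application_info.set_client_info(userenv('client_info') + 1);
  return 'hi';
end;
/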