Dynamic SQL or WITH Clause

I've been mulling the following over and over, trying to maximize code reusability while keeping optimal performance.
I have a SQL query which is very long and complex, and I'm trying to run it multiple times (6 times, to be exact), each time with a different WHERE clause. It took a lot of effort to get the query into a form in which I can interchange the WHERE clause... but I'm having lots of difficulty running it for each case.
i.e.:
INSERT INTO table_x (SELECT *
FROM mst_q (+300 lines query)
WHERE complex_where_clause_1);
INSERT INTO table_x (SELECT *
FROM mst_q (+300 lines query)
WHERE complex_where_clause_2);
I've tried using UNION ALL, WHERE with CASE, WHERE with OR, and even thought of having 6 different CURSORs that would independently INSERT the results of each of the 6 cases. Having 6 different CURSORs works and performance is great, but there is way too much duplicate code, since all that changes is the WHERE clause.
I thought, great, dynamic SQL! But the "long and complex" statement is much larger than the size of a VARCHAR2.
Is there a way to do something like the following? This is the best option I can think of; the only issue is that v_sql will definitely be longer than the size limit of a VARCHAR2.
v_sql := 'INSERT INTO table_x (col1, col2) SELECT col1, col2 FROM mst_q WHERE ';
v_scen1 := 'complex_where_clause_1';
v_scen2 := 'complex_where_clause_2';
EXECUTE IMMEDIATE v_sql || v_scen1;
EXECUTE IMMEDIATE v_sql || v_scen2;

... ah, too simple, just use: v_sql CLOB;
I'll elaborate in a few hours, but I was able to do exactly what I was asking about... Using a CLOB removes the size limitation of a VARCHAR2, and I simply append the WHERE clauses and execute them sequentially...
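A minimal sketch of that approach (assuming 11g or later, where EXECUTE IMMEDIATE accepts a CLOB; the query body and WHERE clauses are placeholders):
DECLARE
   v_sql   CLOB;
   v_scen1 VARCHAR2(32767) := 'complex_where_clause_1';
   v_scen2 VARCHAR2(32767) := 'complex_where_clause_2';
BEGIN
   -- the +300 line mst_q query goes into the CLOB, so the 32k VARCHAR2 limit does not apply
   v_sql := 'INSERT INTO table_x SELECT * FROM mst_q WHERE ';
   EXECUTE IMMEDIATE v_sql || v_scen1;
   EXECUTE IMMEDIATE v_sql || v_scen2;
END;
/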

You mentioned "I thought, great, dynamic SQL! But the "long and complex" statement is much larger than the size of a VARCHAR2."
You could do this if you are still interested in using dynamic SQL:
EXECUTE IMMEDIATE v_sql_stmt_part1 || v_sql_stmt_part2 || ....
That is to say, just break up your SQL statement into 2 (or more) VARCHAR2 variables and then concatenate them when you do the dynamic SQL.
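Note that the concatenated result is itself a PL/SQL VARCHAR2 expression, so it is still subject to the 32,767-byte limit; this only helps if the full statement fits under that. A minimal sketch (variable contents are placeholders):
DECLARE
   v_sql_stmt_part1 VARCHAR2(32767);
   v_sql_stmt_part2 VARCHAR2(32767);
BEGIN
   v_sql_stmt_part1 := 'INSERT INTO table_x SELECT * FROM mst_q ';  -- first chunk of the long statement
   v_sql_stmt_part2 := 'WHERE complex_where_clause_1';              -- remainder, including the WHERE clause
   EXECUTE IMMEDIATE v_sql_stmt_part1 || v_sql_stmt_part2;
END;
/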

Oracle Query Logic Declare and With

I want to print out a dynamically built query, yet I am stuck at the variable declaration (I get an error at line 2).
I need the maximum size for these VARCHAR2 variables.
Do I have a good overall structure?
I use the result of the WITH inside the dynamic query.
DECLARE l_sql_query VARCHAR2(2000);
l_sql_queryFinal VARCHAR2(2000);
with cntp as (select distinct
cnt.code code_container,
*STUFF*
FROM container cnt
WHERE
cnt.status !='DESTROYED'
order by cnt.code)
BEGIN
FOR l_counter IN 2022..2032
LOOP
l_sql_query := l_sql_query || 'SELECT cntp.code_container *STUFF*
FROM cntp
GROUP BY cntp.code_container ,cntp.label_container, cntp.Plan_Classement, Years
HAVING
cntp.Years=' || l_counter ||'
AND
/*stuff*/ TO_DATE(''31/12/' || l_counter ||''',''DD/MM/YYYY'')
AND SUM(cntp.IsA)=0
AND SUM(cntp.IsB)=0
UNION
';
END LOOP;
END;
l_sql_queryFinal := SUBSTR(l_sql_query, 0, LENGTH (l_sql_query) – 5);
l_sql_queryFinal := l_sql_queryFinal||';'
dbms_output.put_line(l_sql_queryFinal);
The code you posted has quite a few issues, among them:
you've got the with (CTE) as a standalone fragment in the declare section, which isn't valid. If you want it to be part of the dynamic string then put it in the string;
your END; is in the wrong place;
you have – instead of -;
you remove the last 5 characters, but you end with a new line, so you need to remove 6 to include the U of the last UNION;
the line that appends a semicolon is itself missing one (though for dynamic SQL you usually don't want a semicolon, so the whole line can probably be removed);
2000 characters is too small for your example, but it's OK with the actual maximum of 32767.
DECLARE
l_sql_query VARCHAR2(32767);
l_sql_queryFinal VARCHAR2(32767);
BEGIN
-- initial SQL which just declares the CTE
l_sql_query := q'^
with cntp as (select distinct
cnt.code code_container,
*STUFF*
FROM container cnt
WHERE
cnt.status !='DESTROYED'
order by cnt.code)
^';
-- loop around each year...
FOR l_counter IN 2022..2032
LOOP
l_sql_query := l_sql_query || 'SELECT cntp.code_container *STUFF*
FROM cntp
GROUP BY cntp.code_container ,cntp.label_container, cntp.Plan_Classement, Years
HAVING
cntp.Years=' || l_counter ||'
AND
MAX(TO_DATE(cntp.DISPOSITION_DATE,''DD/MM/YYYY'')) BETWEEN TO_DATE(''01/01/'|| l_counter ||''',''DD/MM/YYYY'') AND TO_DATE(''31/12/' || l_counter ||''',''DD/MM/YYYY'')
AND SUM(cntp.IsA)=0
AND SUM(cntp.IsB)=0
UNION
';
END LOOP;
l_sql_queryFinal := SUBSTR(l_sql_query, 0, LENGTH (l_sql_query) - 6);
l_sql_queryFinal := l_sql_queryFinal||';';
dbms_output.put_line(l_sql_queryFinal);
END;
/
db<>fiddle
The q'^...^' in the first assignment is the alternative quoting mechanism, which means you don't have to escape (by doubling up) the quotes within that string, around 'DESTROYED'. Notice the ^ delimiters do not appear in the final generated query.
Whether the generated query actually does what you want is another matter... The cntp.Years= part should probably be in a where clause, not having; and you might be able to simplify this to a single query instead of lots of unions, as you're already aggregating. All of that is a bit beyond the scope of your question though.
Is there a way to put the maximum size "automatically", like VARCHAR2(MAX_STRING_SIZE)? Does it work?
No. And no.
The maximum size of varchar2 in PL/SQL is 32767. If you want to hedge against that changing at some point in the future you can declare a user-defined subtype in a shared package ...
create or replace package my_subtypes as
subtype max_string_size is varchar2(32767);
end my_subtypes;
/
... and reference that in your program...
DECLARE
l_sql_query my_subtypes.max_string_size;
l_sql_queryFinal my_subtypes.max_string_size;
...
So if Oracle subsequently raises the maximum permitted size of a VARCHAR2 in PL/SQL you need only change the definition of my_subtypes.max_string_size for the bounds to be raised wherever you used that subtype.
Alternatively, just use a CLOB. Oracle is pretty clever about treating a CLOB as a VARCHAR2 when its size is <= 32k.
To solve your other problem you need to treat the WITH clause as a string and assign it to your query variable.
l_sql_query my_subtypes.max_string_size := q'[
with cntp as (select distinct
cnt.code code_container,
*STUFF*
FROM container cnt
WHERE cnt.status !='DESTROYED'
order by cnt.code) ]';
Note the use of the special quote syntax q'[ ... ]' to avoid the need to escape the quotation marks in your query snippet.
Does a dynamic query string not have access to a temp table?
Dynamic SQL is a string containing a DML or DDL statement which we execute with EXECUTE IMMEDIATE or DBMS_SQL. Otherwise it is exactly the same as static SQL; it doesn't behave any differently. In fact the best way to write dynamic SQL is to start by writing the static statement in a worksheet, make it correct, and then figure out which bits need to be dynamic (variables, placeholders) and which bits remain static (boilerplate). In your case the WITH clause is a static part of the statement.
As has already been pointed out, you seem to have misunderstood what a with clause is: it's a clause of a SQL statement, not a procedural declaration. By definition, it must be followed by a select.
But also, as a general rule, I would recommend avoiding dynamic SQL when possible. In this case, if you can simulate a table with the range of years you want, you can join instead of having to run the same query multiple times.
The easy trick for doing that is to use Oracle's connect by syntax in a row-generator query to produce the expected number of rows.
Once you've done that, adding this table as a join is pretty trivial:
WITH cntp AS
(
         SELECT DISTINCT code code_container,
                [additional columns]
         FROM   container
         WHERE  status != 'DESTROYED'),
year_table AS
(
         SELECT to_date('01/01/'
                || (LEVEL+2021), 'dd/mm/yyyy') AS start_date,
                to_date('31/12/'
                || (LEVEL+2021), 'dd/mm/yyyy') AS end_date,
                (LEVEL+2021) AS year
         FROM   dual
         CONNECT BY LEVEL <= 11)
SELECT   cntp.code_container,
         [additional columns]
FROM     cntp
JOIN     year_table
ON       cntp.years = year_table.year
GROUP BY [additional columns],
         years,
         year_table.start_date,
         year_table.end_date
HAVING   MAX(to_date(cntp.disposition_date,'dd/mm/yyyy')) BETWEEN year_table.start_date AND year_table.end_date
AND      SUM(cntp.isa)=0
AND      SUM(cntp.isb)=0
(This query is totally untested and may not actually fulfill your needs; I am providing my best approximation based on the information available.)

Dynamically Trim Column Values

Here's a question for you all: is there any way to make the PL/SQL code below work by dynamically replacing the column_name in the cursor.column_name syntax? I know I'm confusing the PL/SQL engine, but I'm not entirely sure how to fix this...
Currently I get the error below, but I think the actual issue is that the PL/SQL engine doesn't know how to interpret TRIM (e_rec.v_temp_column_name):
"[Error] PLS-00302 (133: 26): PLS-00302: component
'V_TEMP_COLUMN_NAME' must be declared"
Parameter: x_person_rec IN OUT xxsome_table%ROWTYPE
v_temp_column_name dba_tab_columns.column_name%TYPE;
...
BEGIN
FOR e_rec IN (SELECT * FROM xxsome_table WHERE ..)
LOOP
--LOG (3, 'Loading/Sanitizing Header Record');
FOR col IN (SELECT column_name
FROM dba_tab_columns
WHERE table_name = UPPER ('xxsome_table'))
LOOP
--LOG (3, 'Sanitizing Column Name: ' || col.column_name);
v_temp_column_name := col.column_name;
x_person_rec.v_temp_column_name := TRIM (e_rec.v_temp_column_name);
END LOOP;
END LOOP;
...
I've tried doing this (which results in different error): x_person_rec.col.column_name := TRIM (e_rec.col.column_name);
No, you can't, and you are indeed confusing the PL/SQL engine. The problem is that record fields are resolved at compile time, so e_rec.v_temp_column_name is interpreted as a reference to a field literally named V_TEMP_COLUMN_NAME in the record, not the column whose name happens to be stored in your variable; that field doesn't exist, which is why you get PLS-00302.
The best thing to do, if trailing whitespace is a problem, is to ensure that all data being put into your database is trimmed by your application/ETL processes at the time. If you can't do this then use a trigger to ensure that it happens inside the database. If it's a really bad problem you can even add check constraints to stop it from ever happening.
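For instance, a minimal sketch of both options (the constraint, trigger, and column names are illustrative):
-- Check constraint: rejects values that still carry leading/trailing whitespace
ALTER TABLE xxsome_table
  ADD CONSTRAINT chk_col1_trimmed CHECK (col1 = TRIM(col1));

-- Or a trigger that silently trims on the way in
CREATE OR REPLACE TRIGGER trg_xxsome_table_trim
BEFORE INSERT OR UPDATE ON xxsome_table
FOR EACH ROW
BEGIN
  :NEW.col1 := TRIM(:NEW.col1);
  :NEW.col2 := TRIM(:NEW.col2);
END;
/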
Now, to sort of answer your question, there's no need to do anything dynamically here. Your cursor may be implicit but it's not been dynamically generated. You know every column in the table so be a little less lazy and type them all out.
FOR e_rec IN (SELECT trim(col1) as col1
, trim(col2) as col2
FROM xxsome_table WHERE ...
If you can't fix your data (or if it's not broken!) then this is easily the simplest way to do it.
To actually answer your question, you can dynamically build your SELECT statement using the same techniques you're using here...
declare
l_cols varchar2(4000);
l_curs sys_refcursor;
begin
select wm_concat('trim(' || column_name || ')')
into l_cols
from user_tab_columns
where table_name = 'XXSOME_TABLE'
;
open l_curs for
' select ' || l_cols || ' from xxsome_table where ...';
loop
...
end loop;
end;
/
As you're on 10g you can't use the excellent LISTAGG(), but there are plenty of other string aggregation techniques. Please note that if the resulting string is greater than 4,000 bytes, you'll have to loop rather than generating the column list in a single SQL statement.
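If WM_CONCAT isn't available (it's undocumented, and was removed in later releases), a plain PL/SQL loop builds the same list and also sidesteps the 4,000-byte limit; a minimal sketch:
declare
  l_cols varchar2(32767);
begin
  for col in (select column_name
              from   user_tab_columns
              where  table_name = 'XXSOME_TABLE'
              order  by column_id)
  loop
    l_cols := l_cols || 'trim(' || col.column_name || '),';
  end loop;
  l_cols := rtrim(l_cols, ',');  -- drop the trailing comma
  dbms_output.put_line(l_cols);
end;
/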
P.S., if these columns are CHAR(n) then you'll have to trim them every time you select. It might be worth changing them to VARCHAR2(n)

What is the maximum number of OR clauses within a single Oracle statement?

Suppose I have to check against 100,000 uids for a single update statement - my code currently will break this into 1000-uid chunks:
[...] WHERE UID IN (..., '998', '999', '1000')
OR UID IN ('1001', '1002', ...)
OR (..., etc., ...)
Is there a maximum number of OR clauses you can have? I.e., in my example above, it would generate 100 OR clauses of 1000 IN clauses each.
22.
Well, not exactly. That's how many OR clauses of 1000-item IN lists will run on my system, but that number will probably be different for everyone. There is no database limit that exactly covers this scenario. It probably falls under this note from the documentation:
The limit on how long a SQL statement can be depends on many factors,
including database configuration, disk space, and memory
When I try 23, I get this error in SQL*Plus:
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 2452
Session ID: 135 Serial number: 165
Which is not the real error; it just means the server crashed and SQL*Plus lost its connection. Oddly, when I look in the alert log there are no errors. There are trace files, but still no ORA- error messages. All I see is hundreds of lines like this:
*** 2013-11-04 21:59:48.667
minact-scn master-status: grec-scn:0x0000.00821c54 gmin-scn:0x0000.0081d656 gcalc-scn:0x0000.00821c54
minact-scn master-status: grec-scn:0x0000.00823b45 gmin-scn:0x0000.0081d656 gcalc-scn:0x0000.00823b46
The lesson here is to avoid ridiculously large SQL statements. You'll have to do it another way, like loading the data into a table. And don't try to build something that is just small enough; it may work today but fail in a different environment tomorrow.
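For example, a minimal sketch of the load-into-a-table approach (the table and column names are illustrative): stage the uids in a global temporary table and let the update join against it instead of an enormous IN list.
-- one-time setup: a scratch table whose rows vanish at commit
CREATE GLOBAL TEMPORARY TABLE uid_stage (uid VARCHAR2(30))
ON COMMIT DELETE ROWS;

-- at run time: load the ids (ideally array-bound from the client), then run one update
INSERT INTO uid_stage (uid) VALUES ('998');

UPDATE target_table t
SET    t.some_col = 'new value'
WHERE  t.uid IN (SELECT s.uid FROM uid_stage s);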
--Find the maximum number of IN conditions with 1000 items.
--Change the first number until it throws an error.
--This code uses dynamic SQL, but I found that static SQL has the same limit.
declare
c_number_of_ors number := 22;
v_in_sql varchar2(4000);
v_sql clob;
v_count number;
begin
--Comma-separate list of 1000 numbers.
select listagg(level, ',') within group (order by 1)
into v_in_sql
from dual connect by level <= 1000;
--Start the statement.
v_sql := 'select count(*) from dual ';
v_sql := v_sql || 'where 1 in ('||v_in_sql||')';
--Append more ORs to it.
for i in 1 .. c_number_of_ors loop
v_sql := v_sql || ' or '||to_char(i)||' in ('||v_in_sql||')';
end loop;
--Execute it.
execute immediate v_sql into v_count;
end;
/
There is no specific limit on the number of OR clauses in a single query. You may run into limit issues when you use GROUP BY clauses and, in general terms, the nondistinct aggregate functions (for example, SUM, AVG), which must fit within a single database block.

using comma separated values inside IN clause for NUMBER column

I have 2 procedures inside a package. I am calling one procedure to get a comma separated list of user ids.
I am storing the result in a VARCHAR variable. Now when I use this comma-separated list inside an IN clause, it throws an "ORA-01722: INVALID NUMBER" exception.
This is what my variable looks like:
l_userIds VARCHAR2(4000) := null;
This is where I am assigning the value:
l_userIds := getUserIds(deptId); -- this returns a comma separated list
And my second query is like -
select * from users_Table where user_id in (l_userIds);
If I run this query I get the INVALID NUMBER error.
Can someone help here?
Do you really need to return a comma-separated list? It would generally be much better to declare a collection type
CREATE TYPE num_table
AS TABLE OF NUMBER;
Declare a function that returns an instance of this collection
CREATE OR REPLACE FUNCTION get_nums
RETURN num_table
IS
l_nums num_table := num_table();
BEGIN
for i in 1 .. 10
loop
l_nums.extend;
l_nums(i) := i*2;
end loop;
RETURN l_nums;
END;
and then use that collection in your query
SELECT *
FROM users_table
WHERE user_id IN (SELECT * FROM TABLE( get_nums() ));
It is possible to use dynamic SQL as well (which @Sebas demonstrates). The downside to that, however, is that every call to the procedure will generate a new SQL statement that needs to be parsed again before it is executed. It also puts pressure on the library cache which can cause Oracle to purge lots of other reusable SQL statements which can create lots of other performance problems.
You can search the list using like instead of in:
select *
from users_Table
where ','||l_userIds||',' like '%,'||cast(user_id as varchar2(255))||',%';
This has the virtue of simplicity (no additional functions or dynamic SQL). However, it does preclude the use of indexes on user_id. For a smallish table this shouldn't be a problem.
The problem is that Oracle does not interpret the VARCHAR2 string you're passing as a sequence of numbers; it is just a string.
A solution is to make the whole query a string (VARCHAR2) and then execute it so the engine knows it has to translate the content:
DECLARE
TYPE T_UT IS TABLE OF users_Table%ROWTYPE;
aVar T_UT;
BEGIN
EXECUTE IMMEDIATE 'select * from users_Table where user_id in (' || l_userIds || ')' BULK COLLECT INTO aVar;
...
END;
A more complex but also elegant solution would be to split the string into a collection type and use it, via a cast, directly in the query. See what Tom thinks about it.
DO NOT USE THIS SOLUTION!
Firstly, I wanted to delete it, but I think it might be informative for someone to see such a bad solution. Using dynamic SQL like this causes multiple execution plans to be created - one execution plan per distinct set of values in the IN clause - because no bind variables are used, so to the database every query is a different one (the SGA gets filled with lots of very similar execution plans; every time the query is run with a different parameter, more memory is needlessly used in the SGA).
I wanted to write another answer using dynamic SQL more properly (with bind variables), but Justin Cave's answer is the best anyway.
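For reference, a minimal sketch of what the bind-variable version might look like, reusing the num_table SQL type from Justin Cave's answer (how the ids get into the collection is left out; the values shown are placeholders):
DECLARE
   l_ids  num_table := num_table(101, 102, 103);  -- in practice, populated from the same source as getUserIds
   c_rows SYS_REFCURSOR;
BEGIN
   -- the statement text never changes, so it is parsed once and its plan is reused
   OPEN c_rows FOR
      'SELECT * FROM users_Table
       WHERE user_id IN (SELECT column_value FROM TABLE(CAST(:ids AS num_table)))'
      USING l_ids;
   -- fetch from c_rows as usual, then CLOSE c_rows
END;
/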
You might also wanna try REF CURSOR (haven't tried that exact code myself, might need some little tweaks):
DECLARE
deptId NUMBER := 2;
l_userIds VARCHAR2(2000) := getUserIds(deptId);
TYPE t_my_ref_cursor IS REF CURSOR;
c_cursor t_my_ref_cursor;
l_row users_Table%ROWTYPE;
l_query VARCHAR2(5000);
BEGIN
l_query := 'SELECT * FROM users_Table WHERE user_id IN ('|| l_userIds ||')';
OPEN c_cursor FOR l_query;
FETCH c_cursor INTO l_row;
WHILE c_cursor%FOUND
LOOP
-- do something with your row
FETCH c_cursor INTO l_row;
END LOOP;
END;
/

How to improve query performance for dynamic sql in Oracle

I have to fetch data from a table whose name is defined at run time, filtering on a column that is also defined at run time. I'm now using dynamic SQL with a ref cursor as below. Are there any more efficient ways to improve the performance?
PROCEDURE check_error(p_table_name IN VARCHAR2
,p_keyword IN VARCHAR2
,p_column_name IN VARCHAR2
,p_min_num IN NUMBER
,p_time_range IN NUMBER
,p_file_desc IN VARCHAR2
)
IS
type t_crs is ref cursor;
v_cur t_crs;
v_file_name VARCHAR2(100);
v_date_started DATE;
v_date_completed DATE;
v_counter NUMBER := 0;
v_sql VARCHAR2(500);
v_num NUMBER :=0;
BEGIN
v_sql := 'SELECT '||p_column_name||', DATE_STARTED,DATE_COMPLETED FROM '||p_table_name
|| ' WHERE '||p_column_name||' LIKE '''||p_keyword||'%'' AND DATE_STARTED > :TIME_LIMIT ORDER BY '||p_column_name;
OPEN v_cur FOR v_sql USING (sysdate - (p_time_range/1440));
LOOP
FETCH v_cur INTO v_file_name,v_date_started,v_date_completed;
EXIT WHEN v_cur%NOTFOUND;
IF v_date_started IS NOT NULL AND v_date_completed IS NULL
AND (sysdate - v_date_started)*1440 > p_time_range THEN
insert_record(co_alert_stuck,v_file_name,p_table_name,0,p_file_desc,p_time_range);
END IF;
END LOOP;
END;
BTW, will this make it better?
v_sql := 'SELECT :COLUMN_NAME1, DATE_STARTED,DATE_COMPLETED FROM :TABLE WHERE :COLUMN_NAME2 LIKE :KEYWORD AND DATE_STARTED > :TIME_LIMIT ORDER BY :COLUMN_NAME3';
OPEN v_cur FOR v_sql USING p_column_name,p_table_name,p_column_name,p_keyword||'%',(sysdate - (p_time_range/1440)),p_column_name;
First, I'm not sure that I understand what the code is doing. In the code you posted (which you may have cut down to simplify things), the IF statement checks whether v_date_started IS NOT NULL, which is redundant since there is a WHERE clause on DATE_STARTED. It checks whether (sysdate - v_date_started)*1440 > p_time_range, which re-tests the DATE_STARTED column in PL/SQL (and, as written, contradicts the WHERE clause, which only keeps rows started within the last p_time_range minutes). And it checks whether v_date_completed IS NULL, which would be more efficient as an additional WHERE clause in the dynamic SQL statement that you built. It would make sense to do all of your checks in exactly one place, and the most efficient place to do them would be in the SQL statement.
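A minimal sketch of that consolidation, keeping the original bind for the time limit (which of the two date tests you actually intend is up to you):
v_sql := 'SELECT ' || p_column_name || ', DATE_STARTED, DATE_COMPLETED'
      || ' FROM '  || p_table_name
      || ' WHERE ' || p_column_name || ' LIKE ''' || p_keyword || '%'''
      || ' AND DATE_STARTED > :TIME_LIMIT'
      || ' AND DATE_COMPLETED IS NULL'   -- was the IF check on v_date_completed
      || ' ORDER BY ' || p_column_name;
OPEN v_cur FOR v_sql USING (sysdate - (p_time_range/1440));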
Second, how many rows should this query return and where is the time being spent? If the cursor potentially returns many rows (for some definition of many), you'll get a bit of efficiency from doing a BULK COLLECT from the cursor into a collection and modifying the insert_record procedure to accept and process a collection. If the time is all spent executing the SQL statement and the query itself returns just a handful of rows, PL/SQL bulk operations would probably not make things appreciably more efficient. If the bottleneck is executing the SQL statement, you'd need to hope that an appropriate index existed on whatever table was passed in. If the bottleneck is the insert_record procedure, we'd need to know what that procedure is doing to comment.
Third, if the insert_record procedure is (at least primarily) just inserting the data that you fetched into a different table, it would be more efficient to get rid of all the looping and just generate a dynamic INSERT statement.
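For illustration, a sketch of that single-statement approach, assuming insert_record writes to a hypothetical alert table with roughly these columns (adjust to the real target):
v_sql := 'INSERT INTO alert_table (alert_type, file_name, table_name, num_value, file_desc, time_range)'  -- hypothetical target table/columns
      || ' SELECT :ALERT_TYPE, ' || p_column_name || ', :TABLE_NAME, 0, :FILE_DESC, :TIME_RANGE'
      || ' FROM '  || p_table_name
      || ' WHERE ' || p_column_name || ' LIKE ''' || p_keyword || '%'''
      || ' AND DATE_STARTED > :TIME_LIMIT'
      || ' AND DATE_COMPLETED IS NULL';
EXECUTE IMMEDIATE v_sql
   USING co_alert_stuck, p_table_name, p_file_desc, p_time_range,
         (sysdate - (p_time_range/1440));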
Fourth, with respect to your edit, you cannot use bind variables for table names or column names so the syntax you're proposing is invalid. It won't be more efficient because it will generate a bunch of syntax errors.