I wish to search a database table on a nullable column. Sometimes the value I'm searching for is itself NULL. Since NULL is not equal to anything, not even NULL, saying
where MYCOLUMN=SEARCHVALUE
will fail. Right now I have to resort to
where ((MYCOLUMN=SEARCHVALUE) OR (MYCOLUMN is NULL and SEARCHVALUE is NULL))
Is there a simpler way of saying that?
(I'm using Oracle if that matters)
You can do the IsNull or NVL stuff, but it's just going to make the engine do more work: you'll be calling functions to do column conversions, and the results then still have to be compared.
Use what you have
where ((MYCOLUMN=SEARCHVALUE) OR (MYCOLUMN is NULL and SEARCHVALUE is NULL))
Andy Lester asserts that the original form of the query is more efficient than using NVL. I decided to test that assertion:
SQL> DECLARE
2 CURSOR B IS
3 SELECT batch_id, equipment_id
4 FROM batch;
5 v_t1 NUMBER;
6 v_t2 NUMBER;
7 v_c1 NUMBER;
8 v_c2 NUMBER;
9 v_b INTEGER;
10 BEGIN
11 -- Form 1 of the where clause
12 v_t1 := dbms_utility.get_time;
13 v_c1 := dbms_utility.get_cpu_time;
14 FOR R IN B LOOP
15 SELECT COUNT(*)
16 INTO v_b
17 FROM batch
18 WHERE equipment_id = R.equipment_id OR (equipment_id IS NULL AND R.equipment_id IS NULL);
19 END LOOP;
20 v_t2 := dbms_utility.get_time;
21 v_c2 := dbms_utility.get_cpu_time;
22 dbms_output.put_line('For clause: WHERE equipment_id = R.equipment_id OR (equipment_id IS NULL AND R.equipment_id IS NULL)');
23 dbms_output.put_line('CPU seconds used: '||(v_c2 - v_c1)/100);
24 dbms_output.put_line('Elapsed time: '||(v_t2 - v_t1)/100);
25
26 -- Form 2 of the where clause
27 v_t1 := dbms_utility.get_time;
28 v_c1 := dbms_utility.get_cpu_time;
29 FOR R IN B LOOP
30 SELECT COUNT(*)
31 INTO v_b
32 FROM batch
33 WHERE NVL(equipment_id,'xxxx') = NVL(R.equipment_id,'xxxx');
34 END LOOP;
35 v_t2 := dbms_utility.get_time;
36 v_c2 := dbms_utility.get_cpu_time;
37 dbms_output.put_line('For clause: WHERE NVL(equipment_id,''xxxx'') = NVL(R.equipment_id,''xxxx'')');
38 dbms_output.put_line('CPU seconds used: '||(v_c2 - v_c1)/100);
39 dbms_output.put_line('Elapsed time: '||(v_t2 - v_t1)/100);
40 END;
41 /
For clause: WHERE equipment_id = R.equipment_id OR (equipment_id IS NULL AND R.equipment_id IS NULL)
CPU seconds used: 84.69
Elapsed time: 84.8
For clause: WHERE NVL(equipment_id,'xxxx') = NVL(R.equipment_id,'xxxx')
CPU seconds used: 124
Elapsed time: 124.01
PL/SQL procedure successfully completed
SQL> select count(*) from batch;
COUNT(*)
----------
20903
SQL>
I was kind of surprised to find out just how correct Andy is. It costs nearly 50% more to do the NVL solution. So, even though one piece of code might not look as tidy or elegant as another, it could be significantly more efficient. I ran this procedure multiple times, and the results were nearly the same each time. Kudos to Andy...
In Expert Oracle Database Architecture I saw:
WHERE DECODE(MYCOLUMN, SEARCHVALUE, 1) = 1
I don't know if it's simpler, but I've occasionally used
WHERE ISNULL(MyColumn, -1) = ISNULL(SearchValue, -1)
Replacing "-1" with some value that is valid for the column type but also not likely to be actually found in the data.
NOTE: I use MS SQL, not Oracle, so not sure if "ISNULL" is valid.
Use NVL to replace null with some dummy value on both sides, as in:
WHERE NVL(MYCOLUMN,0) = NVL(SEARCHVALUE,0)
Another alternative, which is probably optimal from the executed-query point of view but useful only if you are doing some kind of query generation, is to generate the exact query you need based on the search value.
Pseudocode follows.
if (SEARCHVALUE IS NULL) {
    condition = 'MYCOLUMN IS NULL'
} else {
    condition = 'MYCOLUMN=SEARCHVALUE'
}
runQuery(query, condition)
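For example, a minimal PL/SQL sketch of that idea; MYTABLE and the local variables are placeholder names, not anything from the original question:
declare
  p_searchvalue mytable.mycolumn%type := null;  -- the value being searched for
  l_sql         varchar2(200);
  l_count       pls_integer;
begin
  if p_searchvalue is null then
    l_sql := 'select count(*) from mytable where mycolumn is null';
    execute immediate l_sql into l_count;
  else
    l_sql := 'select count(*) from mytable where mycolumn = :b1';
    execute immediate l_sql into l_count using p_searchvalue;
  end if;
  dbms_output.put_line('matches: ' || l_count);
end;
/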
If an out-of-band value is possible:
where coalesce(mycolumn, 'out-of-band')
= coalesce(searchvalue, 'out-of-band')
Try
WHERE NVL(mycolumn,'NULL') = NVL(searchvalue,'NULL')
This can also do the job in Oracle.
WHERE MYCOLUMN || 'X' = SEARCHVALUE || 'X'
There are some situations where it beats the IS NULL test with the OR.
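For example, a quick check on DUAL that two NULLs compare as equal under this concatenation trick (it relies on the operands being character data, or implicitly convertible to it):
select case when null || 'X' = null || 'X'
            then 'EQUAL' else 'NOT EQUAL'
       end as null_vs_null
from dual;
-- returns EQUAL, because NULL || 'X' evaluates to 'X' on both sides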
I was also surprised that DECODE lets you check NULL against NULL.
WITH
TEST AS
(
SELECT NULL A FROM DUAL
)
SELECT DECODE (A, NULL, 'NULL IS EQUAL', 'NULL IS NOT EQUAL')
FROM TEST
I would think that what you have is OK. You could maybe use:
where NVL(MYCOLUMN, '') = NVL(SEARCHVALUE, '')
This is a situation we find ourselves in a lot with our Oracle functions that drive reports. We want to allow users to enter a value to restrict results or leave it blank to return all records. This is what I have used and it has worked well for us.
WHERE rte_pending.ltr_rte_id = prte_id
OR ((rte_pending.ltr_rte_id IS NULL OR rte_pending.ltr_rte_id IS NOT NULL)
AND prte_id IS NULL)
Related
The HIGH value is in decimal format, e.g. 100.10. I want to convert it into words, so I wrote the script below, but it does not execute:
SELECT SYMBOL, HIGH, UPPER(TO_CHAR(TO_DATE(HIGH,'J'),'JSP'))
AMT_IN_WORDS FROM BHAV;
I am getting the error ORA-01830.
Please point out where I am going wrong. Thank you in advance.
You can create a function.
CREATE OR REPLACE FUNCTION big_amt_in_words (p_input VARCHAR2) RETURN VARCHAR2
IS
v_running_input NUMBER;
v_num NUMBER;
v_amt_in_words VARCHAR2(2000);
BEGIN
v_running_input := P_input;
FOR i IN (
SELECT RPAD(1, (rownum*3)+1, 0) num_value,
CASE LENGTH(RPAD(1, (rownum*3)+1, 0))
WHEN 4 THEN 'THOUSAND'
WHEN 7 THEN 'MILLION'
WHEN 10 THEN 'BILLION'
WHEN 13 THEN 'TRILLION'
WHEN 16 THEN 'QUADRILLION'
WHEN 19 THEN 'QUINTILLION'
WHEN 22 THEN 'SEXTILLION'
WHEN 25 THEN 'SEPTILLION'
WHEN 28 THEN 'OCTILLION'
END place_value
FROM DUAL
CONNECT BY rownum < 10
ORDER BY rownum desc)
LOOP
v_num := TRUNC(v_running_input/i.num_value,0);
IF v_num > 0 THEN
v_amt_in_words := v_amt_in_words||' '||TO_CHAR(TO_DATE(v_num,'J'), 'JSP')||' '||i.place_value;
v_running_input := v_running_input - (v_num * i.num_value);
END IF;
END LOOP;
v_amt_in_words := v_amt_in_words||' '||TO_CHAR(TO_DATE(TRUNC(v_running_input),'J'), 'JSP')
||' AND '||UPPER(TO_CHAR(TO_DATE((ROUND(v_running_input-TRUNC(v_running_input),2)*100),'J'),'JSP'))||' CENTS';
RETURN TRIM(v_amt_in_words);
END;
/
To use it,
SELECT BIG_AMT_IN_WORDS(65763245345658.12) amt_in_words
FROM DUAL;
Output
---------------------------------------------
SIXTY-FIVE TRILLION SEVEN HUNDRED SIXTY-THREE BILLION TWO HUNDRED FORTY-FIVE MILLION THREE HUNDRED FORTY-FIVE THOUSAND SIX HUNDRED FIFTY-EIGHT AND TWELVE CENTS
The error is raised because the HIGH value you have shown is a decimal that cannot be implicitly cast to an integer (unlike 100.00), so it cannot be converted to a Julian date.
SELECT UPPER(TO_CHAR(TO_DATE(100.10,'J'),'JSP'))AMT_IN_WORDS FROM DUAL;
This causes
ORA-01830: date format picture ends before converting entire input string
This can be resolved by rounding the decimal to the nearest integer.
SELECT UPPER(TO_CHAR(TO_DATE(ROUND(100.10),'J'),'JSP'))AMT_IN_WORDS FROM DUAL;
| AMT_IN_WORDS |
|--------------|
| ONE HUNDRED |
If you really want the fractional component as well (although it is limited), you may refer to this answer's EDIT2: How to convert number to words - ORACLE
I have a table tab_1 with the values below.
ID   Calculation   Value
1                  10
2                  10
3    1+2
4                  5
5    3-2
6    5+1
I need help writing a query for the following logic. I have a table where each record contains either a calculation string or a value to be used in calculations. I need to interpret the calculations like this:
ID 3 is the sum of ID 1 and ID 2.
ID 5 is the difference of ID 3 and ID 2.
ID 6 is the sum of ID 5 and ID 1.
Then I need to look up the records for the referenced IDs and perform the calculations. My
expected output:
ID   Calculation   Value
3    1+2           20
5    3-2           10
6    5+1           20
Thanks -- nani
"Need help writing the query for below logic."
This is not a problem which can be solved in pure SQL, because:
executing the calculation string requires dynamic SQL
you need recursion to look up records and evaluate the results
Here is a recursive function which produces the answers you expect. It has three private subprograms so that the main body of the function is simple to understand. In pseudo-code:
1. look up the record
2. if the record holds a value, return it and exit
3. otherwise split the calculation
4. for each part of the split calculation, recurse through steps 1, 3 and 4 until step 2 is hit
Apologies for the need to scroll:
create or replace function dyn_calc
(p_id in number)
return number
is
result number;
n1 number;
n2 number;
l_rec t23%rowtype;
l_val number;
type split_calc_r is record (
val1 number
, operator varchar2(1)
, val2 number
);
l_calc_rec split_calc_r;
function get_rec
(p_id in number)
return t23%rowtype
is
rv t23%rowtype;
begin
select *
into rv
from t23
where id = p_id;
return rv;
end get_rec;
procedure split_calc
(p_calc in varchar2
, p_n1 out number
, p_n2 out number
, p_operator out varchar2)
is
begin
p_n1 := regexp_substr(p_calc, '[0-9]+', 1, 1);
p_n2 := regexp_substr(p_calc, '[0-9]+', 1, 2);
p_operator := translate(p_calc, '-+*%01923456789','-+*%'); --regexp_substr(p_calc, '[\-\+\*\%]', 1, 1);
end split_calc;
function exec_calc
(p_n1 in number
, p_n2 in number
, p_operator in varchar2)
return number
is
rv number;
begin
execute immediate
'select :n1 ' || p_operator || ' :n2 from dual'
into rv
using p_n1, p_n2;
return rv;
end exec_calc;
begin
l_rec := get_rec(p_id);
if l_rec.value is not null then
result := l_rec.value;
else
split_calc(l_rec.calculation
, l_calc_rec.val1
, l_calc_rec.val2
, l_calc_rec.operator);
n1 := dyn_calc (l_calc_rec.val1);
n2 := dyn_calc (l_calc_rec.val2);
result := exec_calc(n1, n2, l_calc_rec.operator);
end if;
return result;
end;
/
Run like this:
SQL> select dyn_calc(6) from dual;
DYN_CALC(6)
-----------
20
SQL>
or, to get the output exactly as you require:
select id, calculation, dyn_calc(id) as value
from t23
where calculation is not null;
Notes
There is no exception handling. If the data is invalid the function will just blow up.
The split_calc() proc uses translate() to extract the operator rather than a regex. This is because regexp_substr(p_calc, '[\-\+\*\%]', 1, 1) mysteriously swallows the -. This appears to be an environment-related bug. Consequently, extending this function to process 1+4+2 will be awkward.
Here is a LiveSQL demo.
In SQL Server:
select 'ID ' + ID + ' is the '
       + case when calculation like '%-%' then ' minus '
              when calculation like '%+%' then ' sum ' end
       + ' of ID ' + replace(replace(calculation, '+', ' and '), '-', ' and ')
from tab_1
where calculation is not null
In Oracle:
select 'ID ' || ID || ' is the '
       || case when calculation like '%-%' then ' minus '
               when calculation like '%+%' then ' sum ' end
       || ' of ID ' || replace(replace(calculation, '+', ' and '), '-', ' and ')
from tab_1
where calculation is not null
I have to write an Oracle query in Toad to find all the occurrences of a character in a string. For example, if I'm searching for R in the string SSSRNNSRSSR, it should return positions 4, 8 and 11.
I am new to Oracle and tried this.
select instr(mtr_ctrl_flags, 'R', pos + 1, 1) as pos1
from mer_trans_reject
where pos in ( select instr(mtr_ctrl_flags, 'R', 1, 1) as pos
from mer_trans_reject
);
where mtr_ctrl_flags is the column name. I'm getting an error indicating that pos is an invalid identifier.
Extending GolezTrol's answer, you can use regular expressions to significantly reduce the number of recursive queries you do:
select instr('SSSRNNSRSSR','R', 1, level)
from dual
connect by level <= regexp_count('SSSRNNSRSSR', 'R')
REGEXP_COUNT() returns the number of times the pattern matches, in this case the number of times R occurs in SSSRNNSRSSR. This limits the depth of recursion to exactly the number you need.
INSTR() simply searches for the position of R in your string. level is the depth of the recursion, but in this case it is also the occurrence number, i.e. it finds the level-th occurrence, since we restricted the recursion to the number of occurrences required.
If the string you want to pick out is more complicated, you could go for regular expressions and REGEXP_INSTR() instead of INSTR(), but it will be slower (not by much) and is unnecessary unless required.
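For a single literal character, a sketch of that regular-expression variant looks like this; for this case it returns the same positions 4, 8 and 11:
select regexp_instr('SSSRNNSRSSR', 'R', 1, level) as pos
from dual
connect by level <= regexp_count('SSSRNNSRSSR', 'R')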
Simple benchmark as requested:
The two CONNECT BY solutions would indicate that using REGEXP_COUNT is 20% quicker on a string of this size.
SQL> set timing on
SQL>
SQL> -- CONNECT BY with REGEX
SQL> declare
2 type t__num is table of number index by binary_integer;
3 t_num t__num;
4 begin
5 for i in 1 .. 100000 loop
6 select instr('SSSRNNSRSSR','R', 1, level)
7 bulk collect into t_num
8 from dual
9 connect by level <= regexp_count('SSSRNNSRSSR', 'R')
10 ;
11 end loop;
12 end;
13 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.94
SQL>
SQL> -- CONNECT BY with filter
SQL> declare
2 type t__num is table of number index by binary_integer;
3 t_num t__num;
4 begin
5 for i in 1 .. 100000 loop
6 select pos
7 bulk collect into t_num
8 from ( select substr('SSSRNNSRSSR', level, 1) as character
9 , level as pos
10 from dual t
11 connect by level <= length('SSSRNNSRSSR') )
12 where character = 'R'
13 ;
14 end loop;
15 end;
16 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:04.80
The pipelined table function is a fair bit slower, though it would be interesting to see how it performs over large strings with lots of matches (a sketch of such a test follows the timing below).
SQL> -- PIPELINED TABLE FUNCTION
SQL> declare
2 type t__num is table of number index by binary_integer;
3 t_num t__num;
4 begin
5 for i in 1 .. 100000 loop
6 select *
7 bulk collect into t_num
8 from table(string_indexes('SSSRNNSRSSR','R'))
9 ;
10 end loop;
11 end;
12 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:06.54
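As a rough sketch of how that could be tested (not run here; string_indexes() is the pipelined function from the answer further down), the same harness can be pointed at a much longer string with many matches:
declare
  type t__num is table of number index by binary_integer;
  t_num t__num;
  -- build a ~4000-character test string by repeating the sample value
  l_str varchar2(4000) := rpad('SSSRNNSRSSR', 4000, 'SSSRNNSRSSR');
begin
  for i in 1 .. 10000 loop
    select *
    bulk collect into t_num
    from table(string_indexes(l_str, 'R'));
  end loop;
end;
/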
This is a solution:
select
pos
from
(select
substr('SSSRNNSRSSR', level, 1) as character,
level as pos
from
dual
connect by
level <= length('SSSRNNSRSSR'))
where
character = 'R'
dual is a built-in table that just returns a single row. Very convenient!
connect by lets you build recursive queries. It is often used to generate lists from tree-like data (parent/child relations). It allows you to more or less repeat the query in front of it, and you get special pseudo-columns, like level, that tell you how deep the recursion went.
In this case, I use it to split the string into characters and return a row for each character. Using level, I can keep taking the next character until the end of the string is reached.
Then it is just a matter of returning the pos for all rows containing the character 'R'.
To take up a_horse_with_no_name's challenge, here is another answer with a pipelined table function.
A pipelined function returns an array, which you can query normally. I would expect that over strings with large numbers of matches this will perform better than the recursive query, but as with everything, test it yourself first.
create type num_array as table of number
/
create function string_indexes (
PSource_String in varchar2
, PSearch_String in varchar2
) return num_array pipelined is
begin
for i in 1 .. length(PSource_String) loop
if substr(PSource_String, i, 1) = PSearch_String then
pipe row(i);
end if;
end loop;
return;
end;
/
Then in order to access it:
select *
from table(string_indexes('SSSRNNSRSSR','R'))
For simple things, is it better to use the translate function on the premise that it is less CPU-intensive, or is regexp_replace the way to go?
This question arises from How can I replace brackets to hyphens within Oracle REGEXP_REPLACE function?
I think you're running into simple optimization. The regexp expression is so expensive to compute that the result is cached in the hope that it will be reused in the future. If you actually use distinct strings to convert, you will see that the modest translate is naturally faster, because it is the more specialized function.
Here's my example, running on 11.1.0.7.0:
SQL> DECLARE
2 TYPE t IS TABLE OF VARCHAR2(4000);
3 l t;
4 l_level NUMBER := 1000;
5 l_time TIMESTAMP;
6 l_char VARCHAR2(4000);
7 BEGIN
8 -- init
9 EXECUTE IMMEDIATE 'ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL=2';
10 SELECT dbms_random.STRING('p', 2000)
11 BULK COLLECT
12 INTO l FROM dual
13 CONNECT BY LEVEL <= l_level;
14 -- regex
15 l_time := systimestamp;
16 FOR i IN 1 .. l.count LOOP
17 l_char := regexp_replace(l(i), '[]()[]', '-', 1, 0);
18 END LOOP;
19 dbms_output.put_line('regex :' || (systimestamp - l_time));
20 -- translate
21 l_time := systimestamp;
22 FOR i IN 1 .. l.count LOOP
23 l_char := translate(l(i), '()[]', '----');
24 END LOOP;
25 dbms_output.put_line('translate :' || (systimestamp - l_time));
26 END;
27 /
regex :+000000000 00:00:00.979305000
translate :+000000000 00:00:00.238773000
PL/SQL procedure successfully completed
on 11.2.0.3.0 :
regex :+000000000 00:00:00.617290000
translate :+000000000 00:00:00.138205000
Conclusion: In general I suspect translate will win.
For SQL, I tested this with the following script:
set timing on
select sum(length(x)) from (
select translate('(<FIO>)', '()[]', '----') x
from (
select *
from dual
connect by level <= 2000000
)
);
select sum(length(x)) from (
select regexp_replace('[(<FIO>)]', '[\(\)\[]|\]', '-', 1, 0) x
from (
select *
from dual
connect by level <= 2000000
)
);
and found that the performance of translate and regexp_replace was almost always the same, but it could be that the cost of the other operations overwhelms the cost of the functions I'm trying to test.
Next, I tried a PL/SQL version:
set timing on
declare
x varchar2(100);
begin
for i in 1..2500000 loop
x := translate('(<FIO>)', '()[]', '----');
end loop;
end;
/
declare
x varchar2(100);
begin
for i in 1..2500000 loop
x := regexp_replace('[(<FIO>)]', '[\(\)\[]|\]', '-', 1, 0);
end loop;
end;
/
Here the translate version takes just under 10 seconds, while the regexp_replace version takes around 0.2 seconds -- around 2 orders of magnitude faster(!)
Based on this result, I will be using regular expressions much more often in my performance-critical code -- both SQL and PL/SQL.
I have a query where I need to call a SQL function to format a particular column. The formatting needed is very similar to formatting a phone number, i.e. changing 1234567890 into (123)456-7890.
I've read that calling a function from a select statement can be a performance killer, and that was borne out in my situation: the query took more than three times as long, and I did not expect the function to add that much. The function runs in linear time but does use a loop. To give an idea of the size of the database, this particular query returns about 220,000 rows. The run time went from under 3 seconds without the function call to over 9 seconds with it. The column that needs formatting isn't indexed or used in a join condition or where clause.
Is the performance drop here expected or is there something I can do to improve it?
This is the function in question:
CREATE OR REPLACE FUNCTION fn(bigint)
RETURNS character varying LANGUAGE plpgsql AS
$BODY$
DECLARE
v_chars varchar[];
v_ret varchar;
v_length int4;
v_count int4;
BEGIN
if ($1 isnull or $1 = 0) then
return null;
end if;
v_chars := regexp_split_to_array($1::varchar,'');
v_ret := '';
v_length := array_upper (v_chars,1);
v_count := 0;
for v_index in 1..11 loop
v_count := v_count + 1;
if (v_index <= v_length) then
v_ret := v_chars[v_length - (v_index - 1)] || v_ret;
else
v_ret := '0' || v_ret;
end if;
if (v_count <= 6 and (v_count % 2) = 0) then
v_ret := '.' || v_ret;
end if;
end loop;
return v_ret;
END
$BODY$;
It depends on the specifics of the function. To find out how much a bare function call will cost, create dummy functions like:
CREATE FUNCTION f_bare_plpgsql(text)
RETURNS text LANGUAGE plpgsql IMMUTABLE AS
$BODY$
BEGIN
RETURN $1;
END
$BODY$;
CREATE FUNCTION f_bare_sql(text)
RETURNS text LANGUAGE sql IMMUTABLE AS
$BODY$
SELECT $1;
$BODY$;
And try your query again.
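For instance (mytable and mycolumn are placeholder names for the real table and the bigint column being formatted), you could compare the bare dummy calls against the real function over the same rows to separate the per-call overhead from the cost of the function body:
EXPLAIN ANALYZE SELECT f_bare_plpgsql(mycolumn::text) FROM mytable;
EXPLAIN ANALYZE SELECT f_bare_sql(mycolumn::text)     FROM mytable;
EXPLAIN ANALYZE SELECT fn(mycolumn)                   FROM mytable;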
If you then wonder why your function is slow, add it to your question.
Solution for updated question
Your function could be improved in many places, but there is a more radical solution:
SELECT to_char(12345678901, '00000"."00"."00"."00')
Many times faster, obviously. More about to_char() in the manual.
Consider the following demo:
WITH x(n) AS (
VALUES (1::bigint), (12), (123), (1234), (12345), (123456), (1234567)
,(12345678), (123456789), (1234567890), (12345678901), (123456789012)
)
SELECT n, fn(n), to_char(n, '00000"."00"."00"."00')
FROM x
n | fn | to_char
--------------+----------------+-----------------
1 | 00000.00.00.01 | 00000.00.00.01
12 | 00000.00.00.12 | 00000.00.00.12
123 | 00000.00.01.23 | 00000.00.01.23
1234 | 00000.00.12.34 | 00000.00.12.34
12345 | 00000.01.23.45 | 00000.01.23.45
123456 | 00000.12.34.56 | 00000.12.34.56
1234567 | 00001.23.45.67 | 00001.23.45.67
12345678 | 00012.34.56.78 | 00012.34.56.78
123456789 | 00123.45.67.89 | 00123.45.67.89
1234567890 | 01234.56.78.90 | 01234.56.78.90
12345678901 | 12345.67.89.01 | 12345.67.89.01
123456789012 | 23456.78.90.12 | #####.##.##.##
to_char() is only prepared for up to 11 decimal digits, as you can see.
It can easily be extended if the need arises.
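For instance, a sketch of one possible extension (following the same pattern as above): adding one more digit position to the format mask covers 12-digit inputs.
SELECT to_char(123456789012, '000000"."00"."00"."00');  -- ' 123456.78.90.12'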
If you really must perform the formatting in the database then modify your table to include a field to store the formatted number.
A trigger can call your function to generate the formatted number when the value changes, then you only (slightly) increase the time taken to INSERT or UPDATE a few rows at a time, rather than all of them.
Your query returning all 220k rows then becomes a simple SELECT of the formatted value and should be nice and quick.
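As a rough sketch of such a trigger, assuming a hypothetical table mytable with the raw value in raw_number and a formatted_number column to maintain (fn() is the formatting function from the question):
CREATE OR REPLACE FUNCTION trg_format_number()
RETURNS trigger LANGUAGE plpgsql AS
$BODY$
BEGIN
  -- keep the formatted copy in sync with the raw value
  NEW.formatted_number := fn(NEW.raw_number);
  RETURN NEW;
END
$BODY$;

CREATE TRIGGER format_number_trg
BEFORE INSERT OR UPDATE OF raw_number ON mytable
FOR EACH ROW EXECUTE PROCEDURE trg_format_number();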