Table1 has 6 columns: code, code1, %ofcode1, calc, code2, %ofcode2.
code  code1  %ofcode1  calc  code2  %ofcode2
1     a      20        +     b      10
2     1                -     c
3     2      10        *     d      10
Table2 has 2 columns: field, value.
field  value
a      50
b      20
c      10
d      20
I need the final calculation value using a function.
The calculation might be like this, using the Table1 format and getting the values from Table2 (when code1 or code2 is a number, it refers to the result of that code's row):
50*20/100 + 20*10/100
12 - 10
2*10/100 * 20*10/100 = 0.4
I need the value 0.4.
You can try implementing something along these lines.
DECLARE
    var_a VARCHAR2(50);
    int_a NUMBER;
BEGIN
    var_a := 'a';  -- in practice: SELECT code1 INTO var_a FROM Table1 WHERE code = :input;
    IF REGEXP_LIKE(var_a, '^\d+(\.\d+)?$') THEN
        -- code1 is itself a number (a reference to an earlier code's result)
        int_a := TO_NUMBER(var_a, '9999.99');
    ELSE
        -- placeholder; in practice: SELECT value INTO int_a FROM Table2 WHERE field = var_a;
        int_a := -16;
    END IF;
    DBMS_OUTPUT.PUT_LINE('first int is: ' || int_a);
END;
/
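To get all the way to the final value, one possible shape for the function is sketched below. It is untested and makes a few assumptions: Table1 rows are evaluated in code order, a numeric code1/code2 refers to an earlier code's result, the percentage columns are stored under the quoted identifiers "%ofcode1" and "%ofcode2" (rename to match your real columns), and a missing percentage means 100%.
CREATE OR REPLACE FUNCTION final_calc RETURN NUMBER IS
    TYPE t_results IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
    l_results t_results;   -- result of each code, keyed by code
    l_result  NUMBER;

    -- Resolve one operand: a numeric token refers to an earlier code's result,
    -- anything else is a field looked up in Table2; pct (if present) scales it.
    FUNCTION operand (p_token VARCHAR2, p_pct NUMBER) RETURN NUMBER IS
        l_val NUMBER;
    BEGIN
        IF REGEXP_LIKE(p_token, '^\d+$') THEN
            l_val := l_results(TO_NUMBER(p_token));
        ELSE
            SELECT value INTO l_val FROM Table2 WHERE field = p_token;
        END IF;
        RETURN l_val * NVL(p_pct, 100) / 100;
    END operand;
BEGIN
    FOR r IN (SELECT code, code1, "%ofcode1" AS pct1, calc, code2, "%ofcode2" AS pct2
              FROM Table1
              ORDER BY code)
    LOOP
        l_result := CASE r.calc
                        WHEN '+' THEN operand(r.code1, r.pct1) + operand(r.code2, r.pct2)
                        WHEN '-' THEN operand(r.code1, r.pct1) - operand(r.code2, r.pct2)
                        WHEN '*' THEN operand(r.code1, r.pct1) * operand(r.code2, r.pct2)
                        WHEN '/' THEN operand(r.code1, r.pct1) / operand(r.code2, r.pct2)
                    END;
        l_results(r.code) := l_result;
    END LOOP;
    RETURN l_result;   -- result of the last code (0.4 for the sample data)
END final_calc;
/
With the sample data above, SELECT final_calc FROM dual would then return 0.4.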
We want to count how many nulls each column in a table has. There are too many columns to do this one by one, so the following PL/SQL procedure was created.
In the first part of the procedure, all column names are obtained. This works, as the dbms_output correctly lists them all.
Secondly, a query inserts the count of null values in the variable 'nullscount'. This part does not work, as the output printed for this variable is always 0, even for columns where we know there are nulls.
Does anyone know how to handle the second part correctly?
Many thanks.
CREATE OR REPLACE PROCEDURE COUNTNULLS AS
nullscount int;
BEGIN
for c in (select column_name from all_tab_columns where table_name = upper('gp'))
loop
select count(*) into nullscount from gp where c.column_name is null;
dbms_output.put_line(c.column_name||' '||nullscount);
end loop;
END COUNTNULLS;
You can get it with just one query, which scans the table only once:
DBFiddle: https://dbfiddle.uk/asgrCezT
select *
from xmltable(
'/ROWSET/ROW/*'
passing
dbms_xmlgen.getxmltype(
(
select
'select '
||listagg('count(*)-count("'||column_name||'") as "'||column_name||'"',',')
||' from '||upper('gp')
from user_tab_columns
where table_name = upper('gp')
)
)
columns
column_name varchar2(30) path './name()',
cnt_nulls int path '.'
);
Results:
COLUMN_NAME CNT_NULLS
------------------------------ ----------
A 5
B 4
C 3
The dynamically generated SQL in this query uses roughly 24 characters plus the column name length per column, so it should work fine for, say, 117 columns with an average column name length of 10 (LISTAGG's result is limited to 4000 bytes by default). If you need more, you can rewrite it a bit, for example:
select *
from xmltable(
'let $cnt := /ROWSET/ROW/CNT
for $r in /ROWSET/ROW/*[name() != "CNT"]
return <R name="{$r/name()}"> {$cnt - $r} </R>'
passing
dbms_xmlgen.getxmltype(
(
select
'select count(*) CNT,'
||listagg('count("'||column_name||'") as "'||column_name||'"',',')
||' from '||upper('gp')
from user_tab_columns
where table_name = upper('gp')
)
)
columns
column_name varchar2(30) path '@name',
cnt_nulls int path '.'
);
create table gp (
id number generated by default on null as identity
constraint gp_pk primary key,
c1 number,
c2 number,
c3 number,
c4 number,
c5 number
)
;
-- add some data with NULLS and numbers
BEGIN
FOR r IN 1 .. 20 LOOP
INSERT INTO gp (c1,c2,c3,c4,c5) VALUES
(CASE WHEN mod(r,2) = 0 THEN NULL ELSE mod(r,2) END
,CASE WHEN mod(r,3) = 0 THEN NULL ELSE mod(r,3) END
,CASE WHEN mod(r,4) = 0 THEN NULL ELSE mod(r,4) END
,CASE WHEN mod(r,5) = 0 THEN NULL ELSE mod(r,5) END
,5);
END LOOP;
END;
/
-- check what is in the table
SELECT * FROM gp;
-- do count of each column
DECLARE
l_colcount NUMBER;
l_statement VARCHAR2(100) := 'SELECT COUNT(*) FROM $TABLE_NAME$ WHERE $COLUMN_NAME$ IS NULL';
BEGIN
FOR r IN (SELECT column_name,table_name FROM user_tab_columns WHERE table_name = 'GP') LOOP
EXECUTE IMMEDIATE REPLACE(REPLACE(l_statement,'$TABLE_NAME$',r.table_name),'$COLUMN_NAME$',r.column_name) INTO l_colcount;
dbms_output.put_line('Table: '||r.table_name||', column: '||r.column_name||', COUNT: '||l_colcount);
END LOOP;
END;
/
Table created.
Statement processed.
Result Set 4
ID C1 C2 C3 C4 C5
1 1 1 1 1 5
2 - 2 2 2 5
3 1 - 3 3 5
4 - 1 - 4 5
5 1 2 1 - 5
6 - - 2 1 5
7 1 1 3 2 5
8 - 2 - 3 5
9 1 - 1 4 5
10 - 1 2 - 5
11 1 2 3 1 5
12 - - - 2 5
13 1 1 1 3 5
14 - 2 2 4 5
15 1 - 3 - 5
16 - 1 - 1 5
17 1 2 1 2 5
18 - - 2 3 5
19 1 1 3 4 5
20 - 2 - - 5
20 rows selected.
Statement processed.
Table: GP, column: ID, COUNT: 0
Table: GP, column: C1, COUNT: 10
Table: GP, column: C2, COUNT: 6
Table: GP, column: C3, COUNT: 5
Table: GP, column: C4, COUNT: 4
Table: GP, column: C5, COUNT: 0
c.column_name is never null: in your query it is the string value fetched from the column_name column of all_tab_columns, not the gp column whose name it holds, so the WHERE clause compares a non-null string with NULL.
You have to use a dynamic query and EXECUTE IMMEDIATE to achieve what you want.
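For example, the procedure from the question only needs its inner query built as a string and run dynamically; a minimal sketch of that fix:
CREATE OR REPLACE PROCEDURE countnulls AS
    nullscount INT;
BEGIN
    FOR c IN (SELECT column_name
              FROM all_tab_columns
              WHERE table_name = UPPER('gp'))
    LOOP
        -- substitute the real column name into the statement and execute it
        EXECUTE IMMEDIATE
            'SELECT COUNT(*) FROM gp WHERE "' || c.column_name || '" IS NULL'
            INTO nullscount;
        dbms_output.put_line(c.column_name || ' ' || nullscount);
    END LOOP;
END countnulls;
/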
I need to take the max value of the ORG_ID column that starts with 'AB' from Table A.
I have a Table B where I need to update the ORG_ID column with that max value incremented by 1 for each of the rows present in Table B.
For example, if I get the max value from Table A as AB1500, and Table B has 20 records and various columns, I need to populate AB1501 to AB1520 in the ORG_ID column for the records in Table B.
Issue: For loop not able to write data into the TABLE B
Table A:
ORG_ID  ORG_NAME
AE500   Google
AB1500  Amazon
AB1200  Apple
Table B: here, for the available records, I need to increment from the max 'AB' value of Table A (desired result shown):
Country  Country_ID  ORG_ID
US       10          AB1501
UK       11          AB1502
FRANCE   12          AB1503
Create or replace procedure proc_incr(
v_org_id IN TableB.org_id%TYPE
)
v_max_number NUMBER;
v_max_org_id VARCHAR2(20);
v_max_var VARCHAR2(20);
v_temp VARCHAR2(20);
v_count NUMBER;
begin
select max(Table A.org_id)
into v_max_org_id
from TableA where org_id like 'AB%';
select count(*)
into v_count
from Table B;
select regexp_substr(v_max_org_id, '\d+')
into v_max_number
from dual;
select regexp_substr(v_max_org_id, '\D+')
into v_max_var
from dual;
// For Loop to write data to Table B
for i in 1 .. (v_count) LOOP
v_temp := v_max_number+i;
v_max_org_id := v_max_var||v_temp;
v_org_id := v_max_org_id;
End LOOP;
commit;
END proc_incr;
Sample data:
SQL> SELECT * FROM tablea;
ORG_ID
------
AB1500
CD1234
SQL> SELECT * FROM tableb;
ORG_ID NAME
------ ------
xxxxxx Little
Foot
Mahe
Your code, slightly fixed; you didn't perform any UPDATE, so I presume that's why you had an "issue":
For loop not able to write data into the TABLE B
Note the cursor over the tableb table; it is used to update the current row it has fetched (WHERE CURRENT OF).
SQL> DECLARE
2 v_max_number NUMBER;
3 v_max_org_id VARCHAR2 (20);
4 v_max_var VARCHAR2 (20);
5
6 CURSOR curb IS
7 SELECT org_id, name
8 FROM tableb
9 FOR UPDATE;
10
11 cbr curb%ROWTYPE;
12 i NUMBER := 1;
13 BEGIN
14 SELECT MAX (tablea.org_id)
15 INTO v_max_org_id
16 FROM tablea
17 WHERE org_id LIKE 'AB%';
18
19 SELECT REGEXP_SUBSTR (v_max_org_id, '\d+') INTO v_max_number FROM DUAL;
20
21 SELECT REGEXP_SUBSTR (v_max_org_id, '\D+') INTO v_max_var FROM DUAL;
22
23 -- For Loop to write data to Table B
24 OPEN curb;
25
26 LOOP
27 FETCH curb INTO cbr;
28
29 EXIT WHEN curb%NOTFOUND;
30
31 v_max_org_id := v_max_var || TO_CHAR (v_max_number + i);
32
33 UPDATE tableb
34 SET org_id = v_max_org_id
35 WHERE CURRENT OF curb;
36
37 i := i + 1;
38 END LOOP;
39 END;
40 /
PL/SQL procedure successfully completed.
SQL> SELECT * FROM tableb;
ORG_ID NAME
------ ------
AB1501 Little
AB1502 Foot
AB1503 Mahe
SQL>
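For completeness, the same renumbering can also be done set-based in a single statement. A sketch (untested, assuming the tablea/tableb structure shown above):
MERGE INTO tableb b
USING (SELECT t.ROWID AS rid,
              x.pfx || TO_CHAR(x.maxnum + ROW_NUMBER() OVER (ORDER BY t.name)) AS new_org_id
       FROM tableb t
            CROSS JOIN (SELECT REGEXP_SUBSTR(MAX(org_id), '\D+')            AS pfx,    -- 'AB'
                               TO_NUMBER(REGEXP_SUBSTR(MAX(org_id), '\d+')) AS maxnum  -- 1500
                        FROM tablea
                        WHERE org_id LIKE 'AB%') x) s
ON (b.ROWID = s.rid)
WHEN MATCHED THEN
    UPDATE SET b.org_id = s.new_org_id;
Just like the cursor version, this numbers the rows in NAME order; use whatever ORDER BY reflects the order you actually want.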
There is a table with millions of records, which has duplicate records as well. What is the process of creating a new column as a surrogate key (which denotes a sequence number)?
E.g. table structure:
col1  col2
101   A
101   A
101   B
102   A
102   B
I would like to create a new column (col3) which denotes a sequence number:
col1  col2  col3
101   A     1
101   A     2
101   B     3
102   A     1
102   B     2
Please suggest the steps to follow to create a surrogate key for the existing records (300 million), and also for newly loaded records (I assume a trigger is needed while inserting).
Just use the ROW_NUMBER function to populate col3.
For the already existing records, apply:
SQL> create table tab(col1 int , col2 varchar2(1));
Table created
SQL> insert all
2 into tab values(101,'A')
3 into tab values(101,'A')
4 into tab values(101,'B')
5 into tab values(102,'A')
6 into tab values(102,'B')
7 select * from dual;
5 rows inserted
SQL> create table tab_ as
2 select col1, col2,
3 row_number() over (partition by col1 order by col2) as col3
4 from tab;
Table created
SQL> drop table tab;
Table dropped
SQL> alter table tab_ rename to tab;
Table altered
OR Alternatively ( without recreating the table ) :
SQL> create table tab(col1 int , col2 varchar2(1));
Table created
SQL> insert all
2 into tab values(101,'A')
3 into tab values(101,'A')
4 into tab values(101,'B')
5 into tab values(102,'A')
6 into tab values(102,'B')
7 select * from dual;
5 rows inserted
SQL> alter table tab add col3 integer;
Table altered
SQL> declare
2 i pls_integer := 0;
3 begin
4 for c in
5 (
6 select rowid, col1, col2,
7 row_number() over (partition by col1 order by col2) as col3
8 from tab
9 )
10 loop
11 update tab t
12 set t.col3 = c.col3
13 where t.rowid = c.rowid;
14 i:= i+1;
15 if ( ( i mod 10000 ) = 0 ) then commit; end if;
16 end loop;
17 commit;
18 end;
19 /
PL/SQL procedure successfully completed
SQL> select * from tab;
COL1 COL2 COL3
---- ---- -----
101 A 1
101 A 2
101 B 3
102 A 1
102 B 2
5 rows selected
For upcoming (newly inserted) records, you may use a trigger, as you mentioned:
SQL> create or replace trigger trg_ins_tab
2 before insert on tab
3 referencing new as new old as old for each row
4 declare
5 begin
6 select nvl(max(col3),0) + 1
7 into :new.col3
8 from tab
9 where col1 = :new.col1;
10 end;
11 /
Trigger created
SQL> insert into tab(col1,col2) values(101,'C');
1 row inserted
SQL> select *
2 from tab t
3 order by t.col1, col3;
COL1 COL2 COL3
---- ---- -----
101 A 1
101 A 2
101 B 3
101 C 4
102 A 1
102 B 2
6 rows selected
I have written a program to generate all possible combinations of strings of length two. The program is as follows:
CREATE OR REPLACE PROCEDURE string_combinations
AS
vblString1 VARCHAR2(100);
vblString2 VARCHAR2(100);
vblChr1 NUMBER;
vblChr2 NUMBER;
BEGIN
vblChr1 := 65;
LOOP
SELECT Chr(vblChr1) INTO vblString1 FROM dual;
vblChr2 := 65;
LOOP
vblString2 := vblString1||Chr(vblChr2);
Dbms_Output.put_line(vblString2);
vblChr2:=vblChr2+1;
EXIT WHEN vblChr2=91;
END LOOP;
vblChr1:=vblChr1+1;
EXIT WHEN vblChr1=91;
END LOOP;
END;
/
I have used a loop inside another loop. So, if I have to generate strings of length three, I can simply use another loop. But that would be lengthy if I wish to generate strings of length 5, 6, 7 or more. How can I use recursion to achieve it?
I am using Oracle.
You don't need PL/SQL to generate an alphabetical sequence. You could do it in pure SQL using the Row Generator method.
WITH combinations AS
(SELECT chr( ascii('A')+level-1 ) c FROM dual CONNECT BY level <= 26
)
SELECT * FROM combinations
UNION ALL
SELECT c1.c || c2.c FROM combinations c1, combinations c2
UNION ALL
SELECT c1.c
|| c2.c
|| c3.c
FROM combinations c1,
combinations c2,
combinations c3
/
The above gives you all possible combinations of one, two and three characters. For more characters, you could just add combinations c4, c5, etc.
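If you do want the recursive PL/SQL approach the question asks about, a minimal sketch could look like this (the procedure name and parameters are invented for illustration; it prints every combination of the requested length):
CREATE OR REPLACE PROCEDURE string_combinations_rec (
    p_len    IN PLS_INTEGER,            -- target string length
    p_prefix IN VARCHAR2 DEFAULT NULL   -- combination built so far
) AS
BEGIN
    IF NVL(LENGTH(p_prefix), 0) = p_len THEN
        Dbms_Output.put_line(p_prefix);
        RETURN;
    END IF;
    FOR i IN 65 .. 90 LOOP              -- ASCII codes for 'A' .. 'Z'
        string_combinations_rec(p_len, p_prefix || Chr(i));
    END LOOP;
END string_combinations_rec;
/
EXEC string_combinations_rec(2) then produces the same output as the nested loops above, and string_combinations_rec(5) handles length 5 without any extra loops.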
Why not this?
SELECT * FROM a1;
ID NAME
---------- ----------
1 a
2 b
3 c
4 d
5 e
5 rows selected.
SELECT a.id,b.name
FROM (SELECT id FROM a1) a, (SELECT name FROM a1) b
ORDER BY a.id, b.name;
ID NAME
---------- ----------
1 a
1 b
1 c
1 d
1 e
2 a
2 b
2 c
2 d
2 e
3 a
3 b
3 c
3 d
3 e
4 a
4 b
4 c
4 d
4 e
5 a
5 b
5 c
5 d
5 e
25 rows selected.
I have a de-normalized table, something like:
CODES
ID | VALUE
10 | A,B,C
11 | A,B
12 | A,B,C,D,E,F
13 | R,T,D,W,W,W,W,W,S,S
The job is to convert it so that each token from VALUE generates a new row. Example:
CODES_TRANS
ID | VALUE_TRANS
10 | A
10 | B
10 | C
11 | A
11 | B
What is the best way to do this in PL/SQL without using custom PL/SQL packages, ideally with pure SQL?
The obvious solution is to implement it via cursors. Any ideas?
Another alternative is to use the model clause:
SQL> select id
2 , value
3 from codes
4 model
5 return updated rows
6 partition by (id)
7 dimension by (-1 i)
8 measures (value)
9 ( value[for i from 0 to length(value[-1])-length(replace(value[-1],',')) increment 1]
10 = regexp_substr(value[-1],'[^,]+',1,cv(i)+1)
11 )
12 order by id
13 , i
14 /
ID VALUE
---------- -------------------
10 A
10 B
10 C
11 A
11 B
12 A
12 B
12 C
12 D
12 E
12 F
13 R
13 T
13 D
13 W
13 W
13 W
13 W
13 W
13 S
13 S
21 rows selected.
I have written up to 6 alternatives for this type of query in this blogpost: http://rwijk.blogspot.com/2007/11/interval-based-row-generation.html
Regards,
Rob.
I have a pure SQL solution for you.
I adapted a trick I found on an old Ask Tom site, posted by Mihail Bratu. My adaptation uses regex to tokenise the VALUE column, so it requires 10g or higher.
The test data.
SQL> select * from t34
2 /
ID VALUE
---------- -------------------------
10 A,B,C
11 A,B
12 A,B,C,D,E,F
13 R,T,D,W1,W2,W3,W4,W5,S,S
SQL>
The query:
SQL> select t34.id
2 , t.column_value value
3 from t34
4 , table(cast(multiset(
5 select regexp_substr (t34.value, '[^(,)]+', 1, level)
6 from dual
7 connect by level <= length(value)
8 ) as sys.dbms_debug_vc2coll )) t
9 where t.column_value != ','
10 /
ID VALUE
---------- -------------------------
10 A
10 B
10 C
11 A
11 B
12 A
12 B
12 C
12 D
12 E
12 F
13 R
13 T
13 D
13 W1
13 W2
13 W3
13 W4
13 W5
13 S
13 S
21 rows selected.
SQL>
Based on Celko's book, here is what I found and it's working well!
SELECT
TABLE1.ID
, MAX(SEQ1.SEQ) AS START_POS
, SEQ2.SEQ AS END_POS
, COUNT(SEQ2.SEQ) AS PLACE
FROM
TABLE1, V_SEQ SEQ1, V_SEQ SEQ2
WHERE
SUBSTR(',' || TABLE1.VALUE || ',', SEQ1.SEQ, 1) = ','
AND SUBSTR(',' || TABLE1.VALUE || ',', SEQ2.SEQ, 1) = ','
AND SEQ1.SEQ < SEQ2.SEQ
AND SEQ2.SEQ <= LENGTH(TABLE1.VALUE)
GROUP BY TABLE1.ID, TABLE1.VALUE, SEQ2.SEQ
Where V_SEQ is a static table with one field:
SEQ, holding integer values 1 through N, where N >= MAX(LENGTH(VALUE)).
This is based on the fact that the VALUE is wrapped with ',' on both ends, like this:
,A,B,C,D,
If your tokens are fixed length (as in my case), I simply used the PLACE field to calculate the actual string. If they are variable length, use START_POS and END_POS.
In my case the tokens are 2 characters long, so the final SQL is:
SELECT
TABLE1.ID
, SUBSTR(TABLE1.VALUE, T_SUB.PLACE * 3 - 2 , 2 ) AS SINGLE_VAL
FROM
(
SELECT
TABLE1.ID
, MAX(SEQ1.SEQ) AS START_POS
, SEQ2.SEQ AS END_POS
, COUNT(SEQ2.SEQ) AS PLACE
FROM
TABLE1, V_SEQ SEQ1, V_SEQ SEQ2
WHERE
SUBSTR(',' || TABLE1.VALUE || ',', SEQ1.SEQ, 1) = ','
AND SUBSTR(',' || TABLE1.VALUE || ',', SEQ2.SEQ, 1) = ','
AND SEQ1.SEQ < SEQ2.SEQ
AND SEQ2.SEQ <= LENGTH(TABLE1.VALUE)
GROUP BY TABLE1.ID, TABLE1.VALUE, SEQ2.SEQ
) T_SUB
INNER JOIN
TABLE1 ON TABLE1.ID = T_SUB.ID
ORDER BY TABLE1.ID, T_SUB.PLACE
Original Answer
In SQL Server T-SQL we parse strings and make a table object. Here is sample code - maybe you can translate it:
http://rbgupta.blogspot.com/2007/10/tsql-parsing-delimited-string-into.html
Second Option
Count the number of commas per row and get the maximum number of commas. Let's say that in the entire table you have a row with at most 5 commas. Build a SELECT with 5 substrings. This makes it a set-based operation and should be much faster than RBAR (row-by-agonizing-row) processing; see the sketch below.
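In Oracle, one way to realise that set-based idea is to cross join with a small position generator and let REGEXP_SUBSTR pick out each token. A sketch, assuming the CODES table from the question and a cap of 10 tokens per row (choose the cap from the maximum comma count, as described above); positions past the last token return NULL and are filtered out:
SELECT c.id,
       REGEXP_SUBSTR(c.value, '[^,]+', 1, p.pos) AS value_trans
FROM codes c
     CROSS JOIN (SELECT LEVEL AS pos
                 FROM dual
                 CONNECT BY LEVEL <= 10) p   -- assumed maximum tokens per row
WHERE REGEXP_SUBSTR(c.value, '[^,]+', 1, p.pos) IS NOT NULL
ORDER BY c.id, p.pos;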