In my query I'm using a FOR loop, which inserts 1000 three times. I need to increment 1000 on each outer iteration (1001, 1002, ...) while still inserting each number three times, i.e. I want to add 1000, 1000, 1000, 1001, 1001, 1001 and 1002, 1002, 1002 into my table.
declare
CPName varchar(20) :=1000;
a number;
begin
for a in 1 .. 3 loop
insert into clients values (CPName,null,null);
end loop;
end;
How can I do this?
CPName is declared as a VARCHAR; I assume you want it to be a number, in which case you can simply add to it.
There's no need to declare the variable a either; the FOR loop declares its index implicitly. I would call it i, as that's the more common name for an index variable.
declare
CPName integer := 1000;
begin
for i in 1 .. 3 loop
insert into clients values (CPName + i, null, null);
end loop;
end;
You can do this all in a single SQL statement though; there's no need to use PL/SQL.
insert into clients
select 1000 + i, null, null
from dual
cross join ( select level as i
from dual
connect by level <= 3 )
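Since the cross join with dual adds nothing here (dual has a single row), the same insert can arguably be written more compactly; a sketch, assuming the same clients table:

```sql
insert into clients
select 1000 + level, null, null
from dual
connect by level <= 3;
```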
Based on your comments you actually want something like this:
insert into clients
with multiply as (
select level - 1 as i
from dual
connect by level <= 3
)
select 1000 + m.i, null, null
from dual
cross join multiply m
cross join multiply
This only works when the number of copies equals the number of increments, so you may prefer to do it this way, which gives you a lot more flexibility:
insert into clients
with increments as (
select level - 1 as i
from dual
connect by level <= 5
)
, iterations as (
select level as j
from dual
connect by level <= 3
)
select 1000 + m.i, null, null
from dual
cross join increments m
cross join iterations
Using your LOOP methodology this would involve a second, interior loop:
declare
CPName integer := 1000;
begin
for i in 1 .. 3 loop
for j in 1 .. 3 loop
insert into clients values (CPName + i, null, null);
end loop;
end loop;
end;
Related
I am new to PL/SQL and just want to know what I can do here.
I have created a table that loops a counter up to 10 that is displayed in the table data.
How do I achieve it so I can count to 1-10 but exclude a number such a 5 so that it displays 1, 2, 3, 4, 6, 7, 8, 9, 10?
Current code is as follows;
DROP TABLE COUNTER;
CREATE TABLE COUNTER (
COUNTER VARCHAR2(60)
);
DECLARE V_COUNTER NUMBER(2) := 1;
BEGIN
LOOP
INSERT INTO COUNTER (COUNTER)
VALUES (V_COUNTER);
V_COUNTER := V_COUNTER + 1;
EXIT WHEN V_COUNTER = 11;
END LOOP;
END;
Table data:
1
2
3
4
5
6
7
8
9
10
Something like:
DROP TABLE COUNTER;
CREATE TABLE COUNTER (COUNTER VARCHAR2(60));
BEGIN
FOR i IN 1 .. 10 LOOP
IF i <> 5 THEN
INSERT INTO COUNTER (COUNTER) VALUES (i);
END IF;
END LOOP;
END;
/
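Alternatively, on 11g and later, where the CONTINUE statement is available, you can skip the excluded value with CONTINUE WHEN instead of an IF; a minimal sketch against the same COUNTER table:

```sql
BEGIN
  FOR i IN 1 .. 10 LOOP
    CONTINUE WHEN i = 5;  -- skip the excluded value
    INSERT INTO COUNTER (COUNTER) VALUES (i);
  END LOOP;
END;
/
```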
In general it's better to process such statements in bulk and avoid context switching between SQL and PL/SQL.
DECLARE
TYPE t_values IS
TABLE OF NUMBER;
l_values t_values;
BEGIN
SELECT
level counter
BULK COLLECT
INTO l_values
FROM
dual
WHERE
level NOT IN (
6,
7
)
CONNECT BY
level <= 11;
FORALL i IN l_values.first..l_values.last
INSERT INTO counter VALUES ( l_values(i) );
END;
/
If the code is simple enough you can always use a plain INSERT statement.
INSERT INTO counter
SELECT
level
FROM
dual
WHERE
level NOT IN (
6,
7
)
CONNECT BY
level <= 11;
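Adapted to the original requirement (1 to 10, excluding 5), the same pattern would presumably look like:

```sql
INSERT INTO counter
SELECT level
FROM dual
WHERE level <> 5
CONNECT BY level <= 10;
```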
I am trying to insert comma-separated values into a global temporary table. When the data is large it takes a long time to process. I need to optimize my insert query; is there any other way to achieve the insert statement below with better performance? Please check the code below for more info. Any help is appreciated.
-- my proc
emp_id in CLOB;
-- insert statement
insert into Global_Emp_Tbl
with inputs(str) as(
select to_clob(emp_id)
from dual
),
temp_table(s, n, empid, st_pos, end_pos) as (
select ',' || str || ',', -1, null, null, 1
from inputs
union all
select s, n+1, substr(s, st_pos, end_pos - st_pos),
end_pos + 1, instr(s, ',', 1, n+3)
from temp_table
where end_pos != 0
)
select empid from temp_table where empid is not null;
commit;
-- using inserted table in where clause
exists( select 1 from Global_Emp_Tbl gt where e.id = gt.emp_id ) -- joining with main table
You can use a simple REGEXP_SUBSTR to achieve the same result:
insert into Global_Emp_Tbl
SELECT Regexp_substr(empid, '[^,]+', 1, LEVEL) AS empid
FROM (SELECT To_clob(emp_id) empid
FROM dual)
CONNECT BY LEVEL <= Regexp_count(empid, '[^,]+');
commit;
Also, one more suggestion: try changing the final SELECT of the recursive query in your function to
select empid from temp_table where n > 0;
Well, I can't reproduce your problem in Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production.
This function provides the sample data, a CLOB CSV list of the first N integers:
create or replace function get_list(N NUMBER) return CLOB
as
v_lst CLOB;
BEGIN
v_lst :='1';
for i in 2 .. n loop
v_lst :=v_lst||','||to_char(i);
end loop;
return(v_lst);
END;
/
Note that a parameter of 5000 gives a list approximately 24K characters long.
select length(get_list(5000)) from dual;
23892
Parsing this list into a global temporary table takes seconds, not minutes. Here is an example using your SELECT:
SQL> set timi on;
SQL> create global temporary table csv_tbl
2 ON COMMIT PRESERVE ROWS
3 as
4 with inputs(str) as(
5 select get_list(5000)
6 from dual
7 ),
8 temp_table(s, n, empid, st_pos, end_pos) as (
9 select ',' || str || ',', -1, null, null, 1
10 from inputs
11 union all
12 select s, n+1, substr(s, st_pos, end_pos - st_pos),
13 end_pos + 1, instr(s, ',', 1, n+3)
14 from temp_table
15 where end_pos != 0
16 )
17 select empid from temp_table where empid is not null
18 ;
Table created.
Elapsed: 00:00:01.35
So the most probable explanation is that most of the elapsed time is spent in the query with the EXISTS clause:
exists( select 1 from Global_Emp_Tbl gt where e.id =gt.emp_id )
Two problems come to mind:
1) The datatypes of EMP_ID in the temporary table and in the EMP table differ, e.g. VARCHAR2 in the temporary table and NUMBER in the EMP table - this prohibits the use of the index on the EMP table.
2) Missing object statistics on the global temporary table lead the CBO to a wrong execution plan, so the index is used (in a large nested loop) where a full table scan would be appropriate.
I have a table "test_calculate" with a column "CONN_BY" holding values to multiply.
A column value can have more than two numbers to multiply, and this table may contain millions of rows. I need to store the result of the calculation from "CONN_BY" in "MVP".
I have used XMLQuery for the calculation, and also a dynamic query, but these are quite slow. Is there another way that is much faster? Please suggest.
You can try the dynamic query.
Create a function which returns the calculated value and use it in your insert or select queries.
CREATE OR REPLACE FUNCTION UFN_CALCULATE (CLM_VALUE VARCHAR2)
RETURN NUMBER IS
RES_VAL NUMBER;
BEGIN
EXECUTE IMMEDIATE 'select '||CLM_VALUE||' FROM DUAL' INTO RES_VAL;
RETURN RES_VAL;
END;
You can use that function like below.
SELECT UFN_CALCULATE('.0876543 * .09876') FROM DUAL;
SELECT UFN_CALCULATE(CONN_BY) FROM YOUR_TABLE;
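If many rows share the same CONN_BY value, wrapping the call in a scalar subquery may help, since Oracle can cache scalar subquery results and so invoke the function fewer times. A sketch (YOUR_TABLE stands in for the real table name):

```sql
-- scalar subquery caching: repeated CONN_BY values reuse the cached result
SELECT (SELECT UFN_CALCULATE(t.CONN_BY) FROM DUAL) AS MVP
FROM YOUR_TABLE t;
```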
One option is using a select ... connect by level <= regexp_count(conn_by,'[^*]+') query as the implicit cursor within a PL/SQL block:
SQL> set serveroutput on
SQL> declare
mvp owa.nc_arr; -- numeric array to initialize each multiplication to 1 for each id value
begin
dbms_output.put_line('ID MVP');
dbms_output.put_line('--------');
for c in
(
select id,
to_number( regexp_substr(conn_by,'[^*]+',1,level) ) as nr,
level as lvl , max( level ) over ( partition by id ) as mx_lvl
from test_calculate
connect by level <= regexp_count(conn_by,'[^*]+')
and prior sys_guid() is not null
and prior conn_by = conn_by
order by id, lvl
)
loop
if c.lvl = 1 then mvp(c.id) := 1; end if;
mvp(c.id) := c.nr * mvp(c.id);
if c.lvl = c.mx_lvl then
dbms_output.put_line(c.id||' '||mvp(c.id));
end if;
end loop;
end;
/
where test_calculate is assumed to have an identity column (id).
Demo
I am trying to write a query that generates 1000 rows. I have a table called CCHOMEWORK with 2 columns: ID, an integer (PK), and StudentID, a varchar that contains the value for all 1000 rows.
I tried this, but I keep getting errors and it does not work:
SET #MyCounter = 1
WHILE #MyCounter < 1000
BEGIN
INSERT INTO CCHOMEWORK
(ID)
VALUES
#MyCounter)
set #MyCounter = #MyCounter + 1;
END
This will create 1000 rows:
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= 1000
You can include it in your insert with:
INSERT INTO CCHOMEWORK (ID)
SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= 1000
However, if you want to insert multiple sequential IDs you might be better off using a sequence:
CREATE SEQUENCE CCHOMEWORK__ID__SEQ
/
Then:
INSERT INTO CCHOMEWORK (ID)
SELECT CCHOMEWORK__ID__SEQ.NEXTVAL
FROM DUAL
CONNECT BY LEVEL <= 1000;
Or:
BEGIN
FOR i IN 1 .. 1000 LOOP
INSERT INTO CCHOMEWORK (ID) VALUES ( CCHOMEWORK__ID__SEQ.NEXTVAL );
END LOOP;
END;
/
Syntax for Oracle database (using PL/SQL):
DECLARE
MyCounter NUMBER := 1;
BEGIN
LOOP
EXIT WHEN MyCounter> 1000;
INSERT INTO CCHOMEWORK (ID)
VALUES(MyCounter);
MyCounter := MyCounter+1;
END LOOP;
COMMIT;
END;
/
I currently have a script that calculates the Tanimoto coefficient on the fingerprints of a chemical library. During testing I found my implementation could not feasibly be scaled up, due to my method of comparing the bit strings (it just takes far too long); see below. This is the loop I need to improve. I've simplified it to compare just two structures; the real script permutes over the whole dataset of structures, but that would overcomplicate the issue here.
LOOP
-- Find the NA bit
SELECT SUBSTR(qsar_kb.fingerprint.fingerprint, var_fragment_id ,1) INTO var_na FROM qsar_kb.fingerprint where project_id = 1 AND structure_id = 1;
-- FIND the NB bit
SELECT SUBSTR(qsar_kb.fingerprint.fingerprint, var_fragment_id ,1) INTO var_nb FROM qsar_kb.fingerprint where project_id = 1 AND structure_id = 2;
-- Test for both bits the same
IF var_na > 0 AND var_nb > 0 then
var_tally := var_tally + 1;
END IF;
-- Test for bit in A on and B off
IF var_na > var_nb then
var_tna := var_tna + 1;
END IF;
-- Test for bit in B on and A off.
IF var_nb > var_na then
var_tnb := var_tnb + 1;
END IF;
var_fragment_id := var_fragment_id + 1;
EXIT WHEN var_fragment_id > var_maxfragment_id;
END LOOP;
For a simple example
Structure A = '101010'
Structure B = '011001'
In my real data set the length of the binary is 500 bits and up.
I need to know:
1)The number of bits ON common to Both
2)The number of bits ON in A but off in B
3)The number of bits ON in B but off in A
So in this case
1) = 1
2) = 2
3) = 2
Ideally I want to change my approach. I don't want to be stepping through each bit of each string; it's just too time-expensive when I scale the whole system up to thousands of structures, each with fingerprint bit strings on the order of 500-1000 bits.
My logic to fix this would be to:
Take the total number of bits ON in both
A) = 3
B) = 3
Then perform an AND operation and find how many bits are on in both
= 1
Then just subtract this from the totals to find the number of bits on in one but not the other.
So how can I perform an AND like operation on two strings of 0's and 1's to find the number of common 1's?
Check out the BITAND function.
The BITAND function treats its inputs and its output as vectors of bits; the output is the bitwise AND of the inputs.
However, according to the documentation, this only works for values up to 2^128, i.e. bit strings of at most 128 bits - too short for 500-bit fingerprints.
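For short strings, treating the two bit strings as numbers does give the common bits directly via BITAND; e.g. the question's samples 101010 and 011001 are 42 and 25 in decimal:

```sql
SELECT BITAND(42, 25) FROM DUAL;  -- 8, i.e. 001000: one common bit
```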
You should move the SELECT out of the loop. I'm pretty sure you're spending 99% of the time selecting 1 bit 500 times where you could select 500 bits in one go and then loop through the string:
DECLARE
l_structure_a LONG;
l_structure_b LONG;
var_na VARCHAR2(1);
var_nb VARCHAR2(1);
-- the following were implicit in the original snippet; declared here so the
-- block compiles (adjust the max to your fingerprint length)
var_fragment_id PLS_INTEGER := 1;
var_maxfragment_id PLS_INTEGER := 500;
var_tally PLS_INTEGER := 0;
var_tna PLS_INTEGER := 0;
var_tnb PLS_INTEGER := 0;
BEGIN
SELECT MAX(decode(structure_id, 1, fingerprint)),
MAX(decode(structure_id, 2, fingerprint))
INTO l_structure_a, l_structure_b
FROM qsar_kb.fingerprint
WHERE project_id = 1
AND structure_id IN (1,2);
LOOP
var_na := substr(l_structure_a, var_fragment_id, 1);
var_nb := substr(l_structure_b, var_fragment_id, 1);
-- Test for both bits the same
IF var_na > 0 AND var_nb > 0 THEN
var_tally := var_tally + 1;
END IF;
-- Test for bit in A on and B off
IF var_na > var_nb THEN
var_tna := var_tna + 1;
END IF;
-- Test for bit in B on and A off.
IF var_nb > var_na THEN
var_tnb := var_tnb + 1;
END IF;
var_fragment_id := var_fragment_id + 1;
EXIT WHEN var_fragment_id > var_maxfragment_id;
END LOOP;
END;
Edit:
You could also do it in a single SQL statement:
SQL> WITH DATA AS (
2 SELECT '101010' fingerprint,1 project_id, 1 structure_id FROM dual
3 UNION ALL SELECT '011001', 1, 2 FROM dual),
4 transpose AS (SELECT ROWNUM fragment_id FROM dual CONNECT BY LEVEL <= 1000)
5 SELECT COUNT(CASE WHEN var_na = 1 AND var_nb = 1 THEN 1 END) nb_1,
6 COUNT(CASE WHEN var_na > var_nb THEN 1 END) nb_2,
7 COUNT(CASE WHEN var_na < var_nb THEN 1 END) nb_3
8 FROM (SELECT to_number(substr(struct_a, fragment_id, 1)) var_na,
9 to_number(substr(struct_b, fragment_id, 1)) var_nb
10 FROM (SELECT MAX(decode(structure_id, 1, fingerprint)) struct_a,
11 MAX(decode(structure_id, 2, fingerprint)) struct_b
12 FROM DATA
13 WHERE project_id = 1
14 AND structure_id IN (1, 2))
15 CROSS JOIN transpose);
NB_1 NB_2 NB_3
---------- ---------- ----------
1 2 2
I'll expand on Lukas's answer with a little more information.
An internet search revealed code from Tom Kyte (via Jonathan Lewis) to convert between bases. There is a function, to_dec, which takes a string and converts it to a number. I have reproduced the code below.
Convert base number to decimal:
create or replace function to_dec(
p_str in varchar2,
p_from_base in number default 16) return number
is
l_num number default 0;
l_hex varchar2(16) default '0123456789ABCDEF';
begin
for i in 1 .. length(p_str) loop
l_num := l_num * p_from_base + instr(l_hex,upper(substr(p_str,i,1)))-1;
end loop;
return l_num;
end to_dec;
Convert decimal to base number:
create or replace function to_base( p_dec in number, p_base in number )
return varchar2
is
l_str varchar2(255) default NULL;
l_num number default p_dec;
l_hex varchar2(16) default '0123456789ABCDEF';
begin
if ( trunc(p_dec) <> p_dec OR p_dec < 0 ) then
raise PROGRAM_ERROR;
end if;
loop
l_str := substr( l_hex, mod(l_num,p_base)+1, 1 ) || l_str;
l_num := trunc( l_num/p_base );
exit when ( l_num = 0 );
end loop;
return l_str;
end to_base;
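For example, to_base reverses to_dec, so converting 42 back to base 2 should give the original bit string:

```sql
select to_base(42, 2) from dual;  -- '101010'
```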
This function can be called to convert the string bitmap into a number which can then be used with bitand. An example of this would be:
select to_dec('101010', 2) from dual; -- returns 42
Oracle only really provides BITAND (and BIT_TO_NUM, which isn't really relevant here) as a way of doing logical operations, but the operations required here are (A AND B), (A AND NOT B) and (NOT A AND B). So we need a way of converting A to NOT A. A simple way of doing this is to use TRANSLATE.
So.... the final outcome is:
select
length(translate(to_base(bitand(data_A, data_B),2),'10','1')) as nb_1,
length(translate(to_base(bitand(data_A, data_NOT_B),2),'10','1')) as nb_2,
length(translate(to_base(bitand(data_NOT_A, data_B),2),'10','1')) as nb_3
from (
select
to_dec(data_A,2) as data_A,
to_dec(data_b,2) as data_B,
to_dec(translate(data_A, '01', '10'),2) as data_NOT_A,
to_dec(translate(data_B, '01', '10'),2) as data_NOT_B
from (
select '101010' as data_A, '011001' as data_B from dual
)
)
This is somewhat more complicated than I was hoping when I started writing this answer, but it does seem to work: for A = '101010' and B = '011001' it returns nb_1 = 1, nb_2 = 2, nb_3 = 2, matching the counts worked out in the question.
Can be done pretty simply with something like this:
SELECT utl_raw.BIT_AND( t.A, t.B ) SET_IN_A_AND_B,
length(replace(utl_raw.BIT_AND( t.A, t.B ), '0', '')) SET_IN_A_AND_B_COUNT,
utl_raw.BIT_AND( t.A, utl_raw.bit_complement(t.B) ) ONLY_SET_IN_A,
length(replace(utl_raw.BIT_AND( t.A, utl_raw.bit_complement(t.B) ),'0','')) ONLY_SET_IN_A_COUNT,
utl_raw.BIT_AND( t.B, utl_raw.bit_complement(t.A) ) ONLY_SET_IN_B,
length(replace(utl_raw.BIT_AND( t.B, utl_raw.bit_complement(t.A) ),'0','')) ONLY_SET_IN_B_COUNT
FROM (SELECT '1111100000111110101010' A, '1101011010101010100101' B FROM dual) t
Make sure your bit string has an even length (just pad it with a zero if it has an odd length).