How to fill a column with a constant string followed by a variable number - sql

I need to fill a table column (in an Oracle database) with string values that have a variable part, e.g. AB0001, AB0002, ..., AB0112, ..., AB9999, where AB is the constant string part and 0001-9999 is the variable number part. I've tried the following solution in SQL for a table with 2 columns:
create table tbl1
(seq1 number(8),
 string1 varchar(32));

declare
  tst number(8) := 0;
begin
  for cntr in 1..100
  loop
    tst := cntr;
    insert into TBL1 values (someseq.nextval, concat('AB', tst));
  end loop;
end;
But in this case STRING1 gets filled with the values AB1, AB2, ..., AB10, ..., which is not exactly what I need.
How should I modify my script to insert values like AB0001,...,AB0010?

Either pad the number with zeros, or format it with leading zeros:
insert into TBL1
values (someseq.nextval, concat('AB', to_char(tst, 'FM0000')));
The 'FM' format modifier prevents the leading space that would otherwise be reserved for a minus sign.
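As a quick illustration of what FM changes (not part of the original answer; the output is shown as a comment, and the space before 0007 is the sign position):
select '<' || to_char(7, '0000')   || '>' as padded,
       '<' || to_char(7, 'FM0000') || '>' as fm_padded
from dual;
-- '< 0007>'    '<0007>'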
For your specific example you don't need a PL/SQL block; you could use a hierarchical query to generate the data for the rows:
insert into tbl1(seq1, string1)
select someseq.nextval, concat('AB', to_char(level, 'FM0000'))
from dual
connect by level <= 100;

Use the LPAD function:
select lpad(1, 4, '0') from dual
--> '0001'
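Combined with the table and sequence from the question, the insert inside the loop might then look like this (a sketch reusing the OP's someseq sequence and cntr loop counter):
insert into tbl1 (seq1, string1)
values (someseq.nextval, 'AB' || lpad(cntr, 4, '0'));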

Try this one:
INSERT INTO table_name(code)
VALUES (CONCAT('AB', LPAD('99', 4, '0')));
Or you can update on the basis of the PK after insertion:
UPDATE table_name SET code = CONCAT('AB', LPAD(PK_Column_Name, 4, '0'));
Or you can use a trigger:
CREATE OR REPLACE TRIGGER trgI_Terms_UpdateTermOrder
BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
  :new.code := CONCAT('AB', LPAD(:new.Id, 4, '0'));
END;
/

Related

How to access full OLD data in SQL Trigger

I have a trigger whose purpose is to fire whenever there is a DELETE on a particular table and insert the deleted data into another table in JSON format.
The trigger works fine if I specify each column explicitly. Is there any way to access the entire table row?
This is my code.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
  json_doc CLOB;
BEGIN
  select json_arrayagg (
           json_object ('code' VALUE :old.id,
                        'name' VALUE :old.text,
                        'description' VALUE :old.text) returning clob
         )
    into json_doc
    from dual;
  PROCEDURE1(json_doc);
END;
This works fine. However, what I want is something like this: instead of explicitly specifying each column, I want to convert the entire :OLD record.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
  json_doc CLOB;
BEGIN
  select json_arrayagg (
           json_object (:old) returning clob
         )
    into json_doc
    from dual;
  PROCEDURE1(json_doc);
END;
Any suggestion please.
The short and correct answer is you can't. We have a few tables in our application where we do this and the developer is responsible for updating the trigger when they add a column: this is enforced with code reviews and is probably the cleanest solution for this scenario.
The long answer is you can get close, but I wouldn't do this in production for several reasons:
Triggers are terrible for performance
Triggers are terrible for code clarity
This requires reading the row again using a flashback query, which means:
You aren't getting the values of this row from inside your current transaction: if you update the row in your transaction and then delete it the JSON will show what the values were BEFORE your update
There is a performance penalty for reading from UNDO
There is potential that UNDO won't be available and your trigger will fail
Your user needs permission to execute flashback queries
Your database needs to meet all the prerequisites to support flashback queries
Deleting a lot of rows will cause the ROWID collection to get large and consume PGA
There are probably more reasons, but in the interest of "can it be done" here you go...
DROP TABLE t1;
DROP TABLE t2;
DROP TRIGGER t1_ad;
CREATE TABLE t1 (
id NUMBER,
name VARCHAR2(100),
description VARCHAR2(100)
);
CREATE TABLE t2 (
dt TIMESTAMP(9),
json_data CLOB
);
INSERT INTO t1 VALUES (1, 'A','aaaa');
INSERT INTO t1 VALUES (2, 'B','bbbb');
INSERT INTO t1 VALUES (3, 'C','cccc');
INSERT INTO t1 VALUES (4, 'D','dddd');
CREATE OR REPLACE TRIGGER t1_ad
  FOR DELETE ON t1
  COMPOUND TRIGGER

  TYPE t_rowid_tab IS TABLE OF ROWID;
  v_rowid_tab t_rowid_tab := t_rowid_tab();

  AFTER EACH ROW IS
  BEGIN
    v_rowid_tab.extend;
    v_rowid_tab(v_rowid_tab.last) := :old.rowid;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
    v_scn       v$database.current_scn%TYPE := dbms_flashback.get_system_change_number;
    v_json_data CLOB;
    v_sql       CLOB;
  BEGIN
    FOR i IN 1 .. v_rowid_tab.count
    LOOP
      SELECT 'SELECT json_arrayagg(json_object(' ||
             listagg('''' || lower(t.column_name) || ''' VALUE ' ||
                     lower(t.column_name),
                     ', ') within GROUP(ORDER BY t.column_id) ||
             ') RETURNING CLOB) FROM t1 AS OF SCN :scn WHERE rowid = :r'
        INTO v_sql
        FROM user_tab_columns t
       WHERE t.table_name = 'T1';

      EXECUTE IMMEDIATE v_sql
        INTO v_json_data
        USING v_scn, v_rowid_tab(i);

      INSERT INTO t2
      VALUES
        (current_timestamp,
         v_json_data);
    END LOOP;
  END AFTER STATEMENT;

END t1_ad;
/
UPDATE t1
SET NAME = 'zzzz' -- not captured
WHERE id = 2;
DELETE FROM t1 WHERE id < 3;
SELECT *
FROM t2;
-- 13-NOV-20 01.08.15.955426000 PM [{"id":1,"name":"A","description":"aaaa"}]
-- 13-NOV-20 01.08.15.969755000 PM [{"id":2,"name":"B","description":"bbbb"}]

Oracle SQL/PLSQL: change the type of specific columns at once

Assume the following table named t1:
create table t1 (
  clid number,
  A number,
  B number,
  C number
);
insert into t1 values(1, 1, 1, 1);
insert into t1 values(2, 0, 1, 0);
insert into t1 values(3, 1, 0, 1);
clid  A  B  C
   1  1  1  1
   2  0  1  0
   3  1  0  1
The type of columns A, B, and C is NUMBER. What I need to do is change the types of those columns to VARCHAR, but in a quite tricky way.
In my real table I need to change the datatype for hundreds of columns, so it is not convenient to write a statement like the following hundreds of times:
ALTER TABLE table_name
MODIFY column_name datatype;
What I need instead is to convert all columns to VARCHAR except the CLID column, like we can do in Python or R.
Is there any way to do so in Oracle SQL or PLSQL?
Appreciate your help.
Here is an example of a procedure that can help.
It accepts two parameters: the name of your table and a list of columns you do not want to change.
At the beginning there is a cursor that gets all the column names for your table except the ones you do not want to change.
Then it loops through those columns and changes them:
CREATE OR REPLACE procedure test_proc(p_tab_name  in varchar2,
                                      p_col_names in varchar2)
IS
  v_string varchar2(4000);

  cursor c_tab_cols is
    SELECT column_name
      FROM ALL_TAB_COLS
     WHERE table_name = upper(p_tab_name)
       and column_name not in (select regexp_substr(p_col_names, '[^,]+', 1, level)
                                 from dual
                              connect by regexp_substr(p_col_names, '[^,]+', 1, level) is not null);
begin
  FOR i_record IN c_tab_cols
  loop
    v_string := 'alter table ' || p_tab_name || ' modify '
                || i_record.column_name || ' varchar(30)';
    EXECUTE IMMEDIATE v_string;
  end loop;
end;
/
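A call for the t1 table from the question might then look like this (keeping in mind that, as the next answer explains, the columns must be empty for ALTER ... MODIFY to succeed):
begin
  test_proc('t1', 'CLID');
end;
/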
You can also extend this procedure with a parameter for the datatype you want to change to, and with some more options, I am sure.
Unfortunately, that isn't as simple as you'd want it to be. It is not a problem to write a query which will write the query for you (by querying USER_TAB_COLUMNS), but a column must be empty in order to change its datatype:
SQL> create table t1 (a number);
Table created.
SQL> insert into t1 values (1);
1 row created.
SQL> alter table t1 modify a varchar2(1);
alter table t1 modify a varchar2(1)
*
ERROR at line 1:
ORA-01439: column to be modified must be empty to change datatype
SQL>
If there are hundreds of columns involved, maybe you can't even
create additional columns in the same table (of VARCHAR2 datatype)
move values in there
drop "original" columns
rename "new" columns to "old names"
because there's a limit of 1000 columns per table.
Therefore,
creating a new table (with appropriate columns' datatypes),
moving data over there,
dropping the "original" table
renaming the "new" table to "old name"
is probably what you'll finally do. Note that it won't necessarily be easy either, especially if there are foreign keys involved.
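For the small t1 example above, that sequence might look roughly like this (a sketch only; constraints, indexes, grants and so on would have to be recreated as well):
create table t1_new as
  select clid,
         to_char(a) as a,
         to_char(b) as b,
         to_char(c) as c
    from t1;

drop table t1;
rename t1_new to t1;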
A "query that writes query for you" might look like this (Scott's sample tables):
SQL> SELECT 'insert into dept (deptno, dname, loc) '
2 || 'select '
3 || LISTAGG ('to_char(' || column_name || ')', ', ')
4 WITHIN GROUP (ORDER BY column_name)
5 || ' from emp'
6 FROM user_tab_columns
7 WHERE table_name = 'EMP'
8 AND COLUMN_ID <= 3
9 /
insert into dept (deptno, dname, loc) select to_char(EMPNO), to_char(ENAME), to_char(JOB) from emp
SQL>
It'll save you from typing names of hundreds of columns.
I think it's not possible to change the data type of a column if it already contains values.
Empty the column by copying its values to a dummy column, then change the data type.
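For a single column of the example table, that might look like this (a sketch; the a_tmp column name is only illustrative):
alter table t1 add (a_tmp varchar2(30));

update t1 set a_tmp = to_char(a);      -- keep a copy of the values
update t1 set a = null;                -- the column must be empty to change its type
alter table t1 modify a varchar2(30);
update t1 set a = a_tmp;               -- copy the values back
alter table t1 drop column a_tmp;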

Issue in adding decimal values stored as varchar

I have a staging table with two amount columns, created as VARCHAR2 types to allow import from Excel. Later these fields are validated to make sure they contain numeric data and that the sum of the values in each row is greater than zero, before the actual table is updated.
I have a function to validate whether the column contains any non numeric data, which works as expected.
FUNCTION f_isnumber (pin_string IN VARCHAR2)
RETURN INT
IS
v_new_num NUMBER;
BEGIN
IF TRIM (pin_string) IS NOT NULL
THEN
v_new_num := TO_NUMBER (pin_string);
RETURN 1;
END IF;
RETURN 0;
EXCEPTION
WHEN VALUE_ERROR
THEN
RETURN 0;
END f_isnumber;
Next, I wanted to list all the rows where the total of the two amount fields is zero. The query below works when I am just selecting the total of both columns.
WITH CTE1
AS (SELECT TO_NUMBER (PURCH_AMT) + TO_NUMBER (SELL_AMT) AS Total, ROW_ID
FROM STG_MAINTENANCE
WHERE PKG_MAINTENANCE.f_isnumber (PURCH_AMT) > 0
AND PKG_MAINTENANCE.f_isnumber (SELL_AMT) > 0)
SELECT TO_NUMBER (Total), CTE1.*
FROM CTE1
But as soon as I add a WHERE clause, the query fails with the error "ORA-01722: invalid number":
WITH CTE1
AS (SELECT TO_NUMBER (PURCH_AMT) + TO_NUMBER (SELL_AMT) AS Total, ROW_ID
FROM STG_MAINTENANCE
WHERE PKG_MAINTENANCE.f_isnumber (PURCH_AMT) > 0
AND PKG_MAINTENANCE.f_isnumber (SELL_AMT) > 0)
SELECT TO_NUMBER (Total), CTE1.*
FROM CTE1
WHERE Total = 0
The table contains valid numbers, spaces, valid decimals, zeros, etc. I was hoping the inner WHERE clause would eliminate all non-numeric values, so I would only get valid decimal values and rows where the user imported 0 or 0.00.
But somehow the invalid number error still comes up.
Sample Data:
Create table STG_MAINTENANCE (ROW_ID INT, PURCH_AMT VARCHAR2(500), SELL_AMT VARCHAR2(500));
INSERT INTO STG_MAINTENANCE values(1, 'A','4.5');
INSERT INTO STG_MAINTENANCE values(2, '0','0.0');
INSERT INTO STG_MAINTENANCE values(3, '5.5','4.5');
INSERT INTO STG_MAINTENANCE values(4, '','4.5');
INSERT INTO STG_MAINTENANCE values(5, 'B','C');
INSERT INTO STG_MAINTENANCE values(6, '','');
I think the way Oracle processes your query causes the error: the optimizer is free to merge the CTE into the outer query, so TO_NUMBER can be applied to rows before the f_isnumber filters have removed the non-numeric ones.
For your example, you can use the MATERIALIZE hint and write your query like the following:
WITH CTE1 AS (
SELECT /*+MATERIALIZE*/
TO_NUMBER(PURCH_AMT) + TO_NUMBER(SELL_AMT) AS TOTAL,
ROW_ID
FROM
STG_MAINTENANCE
WHERE
PKG_MAINTENANCE.F_ISNUMBER(PURCH_AMT) > 0
AND PKG_MAINTENANCE.F_ISNUMBER(SELL_AMT) > 0
)
SELECT
TO_NUMBER(TOTAL),
CTE1.*
FROM
CTE1
WHERE
TOTAL = 0
You can learn more about the MATERIALIZE hint in the Oracle docs.
Hope this is helpful to you.
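As a side note not covered in the answer above: on Oracle 12.2 or later, TO_NUMBER with DEFAULT ... ON CONVERSION ERROR sidesteps the evaluation-order problem entirely, because non-numeric values simply become NULL (a sketch against the sample table):
WITH CTE1 AS (
  SELECT TO_NUMBER(PURCH_AMT DEFAULT NULL ON CONVERSION ERROR)
       + TO_NUMBER(SELL_AMT DEFAULT NULL ON CONVERSION ERROR) AS TOTAL,
         ROW_ID
    FROM STG_MAINTENANCE
)
SELECT TOTAL, ROW_ID
  FROM CTE1
 WHERE TOTAL = 0;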

Can't save comma separated number string in varchar2()

I've got a list of items I want to add in a single click. For this purpose I created a table with a column of type varchar2(4000); in this column I want to list ids that refer to another table, so I can pass the value of this column as a parameter, e.g. select t.* from table_name t where t.point_id in (varchar2 string of comma separated point_ids).
The problem I've got is that when I put more than one id in the varchar2 field I get ORA-06502: PL/SQL: numeric or value error: character to number conversion error.
How can I avoid this error? My field is varchar2, not number, and I don't want it to be converted. I need the value I'm passing to be saved, e.g. (11, 12).
EDIT: Note - My select is working okay, the problem I'm having is with saving the information.
My Insert :
procedure lab_water_pointsgroup (v_group_id lab_water_pointsgroups.group_name%type,
v_group_name lab_water_pointsgroups.group_code%type,
v_group_code lab_water_pointsgroups.lab_points_ids%type,
v_lab_points_ids lab_water_pointsgroups.group_id%type) as
begin
update lab_water_pointsgroups
set group_name = v_group_name,
group_code = v_group_code,
lab_points_ids = v_lab_points_ids
where group_id = v_group_id;
if ( SQL%RowCount = 0 ) then
insert into lab_water_pointsgroups
(group_id, group_name, group_code, lab_points_ids)
values
(v_group_id, v_group_name, v_group_code, v_lab_points_ids);
end if;
end;
Not sure how exactly I can help you here as you gave no example. Have a look at the demo below; maybe the construct with xmltable solves your problem. HTH, KR
create table testtab (id number);
insert into testtab values (1);
select * from testtab where id in ('1'); -- works
select * from testtab where id in (1); -- works
select * from testtab where id in (1,2); -- works
select * from testtab where id in ('1,2'); -- ORA-01722: invalid number
select * from testtab where id in (select to_number(xt.column_value) from xmltable('1,2') xt); -- works
Here is how you defined parameters for your procedure:
v_group_id lab_water_pointsgroups.group_name%type,
v_group_name lab_water_pointsgroups.group_code%type,
v_group_code lab_water_pointsgroups.lab_points_ids%type,
v_lab_points_ids lab_water_pointsgroups.group_id%type
I suspect that you made a mistake with the types, because id has the name type, name has the code type, etc. So it should be:
v_group_id lab_water_pointsgroups.group_id%type,
v_group_name lab_water_pointsgroups.group_name%type,
v_group_code lab_water_pointsgroups.group_code%type,
v_lab_points_ids lab_water_pointsgroups.lab_points_ids%type
And I suggest using MERGE instead of this update/insert (a sketch is shown below), but that's not what you asked for :)
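A MERGE version of the same upsert might look roughly like this (a sketch built from the column names used in the question, not tested against the real table):
merge into lab_water_pointsgroups t
using (select v_group_id as group_id from dual) s
   on (t.group_id = s.group_id)
when matched then update set
       t.group_name     = v_group_name,
       t.group_code     = v_group_code,
       t.lab_points_ids = v_lab_points_ids
when not matched then insert
       (group_id, group_name, group_code, lab_points_ids)
     values
       (v_group_id, v_group_name, v_group_code, v_lab_points_ids);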
Your error is that you don't distinguish between a variable containing comma-separated numbers and an actual enumeration in the IN operator. After your code is analyzed and prepared for execution, your statement will be like ..id in ('1,2,3') instead of ..id in (1,2,3); did you notice the difference? So you need to transform the comma-separated values into an array, or in this case into a collection. Your code should be like this:
select t.*
from table_name t
where t.point_id in
(select regexp_substr(YOUR_VARCHAR2_COLUMN_VALUE, '[^,]+', 1, level)
from dual
connect by regexp_substr(YOUR_VARCHAR2_COLUMN_VALUE, '[^,]+', 1, level) is not null)

Splitting and inserting a string in PL/SQL

I have a table called LOG that has a column named MESSAGE, which is a VARCHAR2(4000).
Since I'm planning a migration and the MESSAGE column in the new database is a VARCHAR2(2000), I want to iterate over all MESSAGE rows with length > 2000, cut the text off at 2000 characters, and insert everything that comes after the first 2000 characters as a new row.
I also have to update the original rows to have 2000 chars.
How can I do this? It's been really a long time since I worked with PL/SQL, I would really appreciate your help.
It can also easily be done with a CONNECT BY, as this example demonstrates, where the string is split after every 5th character:
select substr(test.test, (level-1)*5 + 1, 5)
from (select 'THIS IS A LONG MESSAGE ACTUALLY' test from dual) test
connect by substr(test.test, (level-1)*5 + 1, 5) is not null
In this scenario you wouldn't even have to bother about anything, as it would automatically separate the values no matter whether they are longer than 2000 characters or not.
select substr(l.message, (level-1)*2000 + 1, 2000) message
from log l
connect by substr(l.message, (level-1)*2000 + 1, 2000) is not null
       and prior l.rowid = l.rowid
       and prior sys_guid() is not null
This could be your final select.
One method is:
select . . ., substr(l.message, 1, 2000) as message
from log l
union all
select . . ., substr(l.message, 2001, 2000) as message
from log l
where length(l.message) > 2000;
Your problem can be solved using a PLSQL block as below. Please try to implement the logic mentioned inline and check if this works.
Code:
declare
  ---cursor over each message (up to 4000 characters long)
  cursor cur is
    select message
    from log;
  var  number;
  var1 varchar2(2000);
begin
  --Loop for each message
  for i in cur
  loop
    --getting length of message
    var := length(i.message);
    --one iteration per 2000-character chunk
    for len in 1 .. ceil(var / 2000)
    loop
      --selecting the next 2000 characters from the message
      var1 := substr(i.message, (len - 1) * 2000 + 1, 2000);
      ---inserting up to 2000 characters into the destination table
      insert into table_log(col1)
      values (var1);
    end loop;
  end loop;
  commit;
end;
Demo:
SQL> create table log(message varchar2(4000));
SQL> select message from log ;
MESSAGE
--------------------------------------------------------------------------------
KHAGDKAGDHAGDKHAGD
ADSJHA:DAH:DHHAHDH
.
.
.
SQL> select length(message) from log ;
LENGTH(MESSAGE)
---------------
3989
SQL> create table table_log(col1 varchar2(2000));
SQL> select col1 from table_log ;
COL1
--------------------------------------------------------------------------------
SQL> /
PL/SQL procedure successfully completed.
--- Two rows created to destination table with size approx 2000 characters each
SQL> select length(col1) from table_log ;
LENGTH(COL1)
------------
2000
1989
In Simple SQL you can do it as
insert into table_log(col1)
select SUBSTR(l.MESSAGE, 1, 2000) AS MESSAGE
FROM LOG l
UNION ALL
select SUBSTR(l.MESSAGE, 2001, 2000) AS MESSAGE
FROM LOG l
WHERE length(l.MESSAGE) > 2000;
You can split rows while copying them to the new table. For this you can use INSERT ALL WHEN: for each WHEN clause whose condition evaluates to true, the database executes the corresponding INTO clause.
create table src_test_table(long_msg varchar2(20));
create table dest_test_table(long_msg varchar2(10));
insert into src_test_table values(lpad('1',20,'1'));
insert into src_test_table values(lpad('2',20,'2'));
insert into src_test_table values(lpad('3',20,'3'));
insert into src_test_table values(lpad('4',10,'4'));
insert into src_test_table values(lpad('5',10,'5'));
insert all
when length(long_msg) <= 10 then
into dest_test_table values(long_msg)
when length(long_msg) > 10 then
into dest_test_table values(substr(long_msg,1,10))
when length(long_msg) > 10 then
into dest_test_table values(substr(long_msg,11))
select long_msg from src_test_table;
And the results: the three 20-character source values are split into two 10-character rows each, while the two 10-character values are copied as they are, giving eight rows in total:
select long_msg,length(long_msg) from dest_test_table;