PL/SQL Replacing characters in a string using another table - sql

I need to create a function that replaces characters in a string with characters from another table. What I have tried returns exactly the same string it started with. Table t_symbols is:
+-------------------+-------------------------+
| Symbol_to_replace | Symbol_in_return_string |
+-------------------+-------------------------+
| K | Ќ |
| k | ќ |
| X | Ћ |
| x | ћ |
| A | Є |
| a | є |
| H | Њ |
| h | њ |
| O | ¤ |
| o | µ |
| U | ¦ |
| u | ± |
| Y | ‡ |
| y | ‰ |
| I | І |
| i | і |
| G | Ѓ |
| g | ѓ |
+-------------------+-------------------------+
I need to use a cursor and take the characters from this table, rather than just nesting multiple REPLACE calls.
create or replace function f_replace(text in varchar2) return varchar2 is
  ResultText varchar2(2000);
begin
  for cur in (select t.symbol_to_replace, t.symbol_in_return_string
                from t_symbols t) loop
    ResultText := Replace(text, cur.symbol_to_replace, cur.symbol_in_return_string);
  end loop;
  return(ResultText);
end f_replace;

SQL has a function exactly for this. It is not REPLACE (where indeed you would need multiple iterations); it's the TRANSLATE function.
If the table contents may change, and you need to write a function that looks things up in the table at the time it is called, you could do something like the function I show below.
I am showing a complete example: First I create a table that will store the required substitutions. I only include the first few substitutions, because I want to show how the behavior of the function changes as the table is being modified - without needing to change anything about the function. (Which is the whole point of this.)
Then I show the function definition, and I demonstrate how it works. Then I insert two more rows in the substitutions table and I run exactly the same query; the result will now reflect the longer "list" of substitutions, as needed.
create table character_substitutions ( symbol_to_replace, symbol_in_return_string )
as
select 'K', 'Ќ' from dual union all
select 'k', 'ќ' from dual union all
select 'X', 'Ћ' from dual union all
select 'x', 'ћ' from dual union all
select 'A', 'Є' from dual union all
select 'a', 'є' from dual
;
create or replace function my_character_substitutions ( input_str varchar2 )
return varchar2
as
symbols_to_replace varchar2(4000);
symbols_to_return varchar2(4000);
begin
select listagg(symbol_to_replace ) within group (order by rownum),
listagg(symbol_in_return_string) within group (order by rownum)
into symbols_to_replace, symbols_to_return
from character_substitutions;
return translate(input_str, symbols_to_replace, symbols_to_return);
end;
/
select 'Kags' as input_str, my_character_substitutions('Kags') as replaced_str
from dual;
INPUT_STR REPLACED_STR
---------- ------------
Kags Ќєgs
OK, so now let's insert a couple more rows into the table and run the same query. Notice how the g is now substituted as well.
insert into character_substitutions ( symbol_to_replace, symbol_in_return_string )
select 'G', 'Ѓ' from dual union all
select 'g', 'ѓ' from dual
;
select 'Kags' as input_str, my_character_substitutions('Kags') as replaced_str
from dual;
INPUT_STR REPLACED_STR
---------- ------------
Kags Ќєѓs
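Outside the database, the single-pass behavior of TRANSLATE can be cross-checked with Python's str.translate, which works the same way. This is a sketch, not Oracle code: the substitution pairs are copied from the table above, and f_replace_buggy is a hypothetical name illustrating why the original loop kept only the last substitution.

```python
# Sketch (Python, not Oracle): str.translate maps every character in one
# pass, just like SQL TRANSLATE, so substitutions cannot clobber each other.
subs = {"K": "Ќ", "k": "ќ", "A": "Є", "a": "є", "G": "Ѓ", "g": "ѓ"}

def f_replace(text):
    return text.translate(str.maketrans(subs))

# The bug in the question's loop: each iteration replaced characters in the
# ORIGINAL string, so only the last cursor row's substitution survived.
def f_replace_buggy(text):
    result = text
    for old, new in subs.items():
        result = text.replace(old, new)  # overwrites all prior work
    return result

print(f_replace("Kags"))        # every mapped character is translated
print(f_replace_buggy("Kags"))  # only the last pair ('g' -> 'ѓ') applies
```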

Related

How can increase select query row with for loop in Oracle?

I have a query that returns one row at a time.
I wanted this query to write two rows to the declared v_output_piece_table by bulk collecting into it twice with the for loop, but I saw that only a single row ended up in v_output_piece_table.
For now I want it to produce two rows inside the for loop, but in the future the count will depend on a variable.
v_output_piece_table tbl_met_output_coil;
begin
FOR sayac IN 1..2
loop
SELECT SUBSTR (sl.task_job_id, 1, 12) AS schedule_id,
DENSE_RANK () OVER (ORDER BY sc.seq) AS coil_seq,
round(p.ACTUAL_WEIGHT/3,3)
AS weight,
scd.so_id,
scd.so_line_id
BULK COLLECT
INTO v_output_piece_table
FROM sch_line sl,
sch_input_material sim,
sch_input_piece sip,
sch_output_material som,
sch_cut sc,
sch_cut_detail scd,
piece P
WHERE sl.task_job_id = 180078
AND sl.sch_line_num_id = sim.sch_line_num_id
AND sl.sch_line_num_id = som.sch_line_num_id
AND som.output_mat_num_id = sc.output_mat_num_id
AND sc.schc_cut_num_id = scd.schc_cut_num_id
AND sim.input_mat_num_id = sip.input_mat_num_id
AND sip.piece_num_id = P.piece_num_id
ORDER BY sl.seq, sim.seq, sip.seq;
END LOOP;
end;
QUERY output:
| SCHEDULE_ID | L3_OUTPUT_CNT | EN_COIL_ID | COIL_SEQ | WEIGHT | SO_ID | SO_LINE_ID |
| 180078      | 1             | 21TT       | 1        | 39663  | 2     | 3          |
What I want:
| SCHEDULE_ID | EN_COIL_ID | COIL_SEQ | WEIGHT | SO_ID | SO_LINE_ID |
| 180078      | 21TT       | 1        | 39663  | 2     | 3          |
| 180078      | 21TT       | 2        | 39663  | 2     | 3          |
How can I get the output I want?
This is really easy to do if you cross join your query with a two-row query:
WITH your_query AS (SELECT SUBSTR (sl.task_job_id, 1, 12) AS schedule_id,
                           round(p.ACTUAL_WEIGHT/3,3) AS weight,
                           scd.so_id,
                           scd.so_line_id,
                           sl.seq sl_seq,
                           sim.seq sim_seq,
                           sip.seq sip_seq
                    FROM sch_line sl,
                         sch_input_material sim,
                         sch_input_piece sip,
                         sch_output_material som,
                         sch_cut sc,
                         sch_cut_detail scd,
                         piece p
                    WHERE sl.task_job_id = 180078
                    AND sl.sch_line_num_id = sim.sch_line_num_id
                    AND sl.sch_line_num_id = som.sch_line_num_id
                    AND som.output_mat_num_id = sc.output_mat_num_id
                    AND sc.schc_cut_num_id = scd.schc_cut_num_id
                    AND sim.input_mat_num_id = sip.input_mat_num_id
                    AND sip.piece_num_id = p.piece_num_id),
dummy AS (SELECT LEVEL id
          FROM dual
          CONNECT BY LEVEL <= 2)
SELECT yt.schedule_id,
       d.id coil_seq,
       yt.weight,
       yt.so_id,
       yt.so_line_id
BULK COLLECT INTO v_output_piece_table
FROM your_query yt
CROSS JOIN dummy d
ORDER BY yt.sl_seq,
         yt.sim_seq,
         yt.sip_seq,
         d.id;
(Note: BULK COLLECT INTO is not valid inside the WITH clause; it goes on the outer SELECT, as shown.)
The dual table is a special table that contains exactly one row and one column, so you can use it to generate rows. You could simply have union'd two rows together in the dummy subquery, e.g.:
dummy AS (SELECT 1 ID FROM dual
UNION ALL
SELECT 2 ID FROM dual)
but I prefer using the hierarchical trick using connect by since it's easy to amend if in the future you need to triplicate (or more!) the rows.
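The row-multiplication idea itself is independent of Oracle. As a rough sketch (in Python, with made-up row data), a cross join against a two-element "dummy" list duplicates each source row:

```python
from itertools import product

# Hypothetical row data standing in for the query's single output row.
rows = [{"schedule_id": "180078", "weight": 39663, "so_id": 2, "so_line_id": 3}]
dummy = [1, 2]  # plays the role of SELECT LEVEL FROM dual CONNECT BY LEVEL <= 2

# CROSS JOIN: every source row is paired with every dummy id,
# and the dummy id becomes the coil_seq column.
duplicated = [dict(r, coil_seq=d) for r, d in product(rows, dummy)]
```

Growing `dummy` to `[1, 2, 3]` triplicates the rows, which is why the CONNECT BY form is easy to amend.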

Running multiple inserts into an Oracle table, how can I commit after each insert and restart the stored procedure at the last inserted point?

I'm very new to Oracle and am writing my first stored procedure for a side project. Essentially I have one table for intraday data, and another table to store historical data. I need to insert chunks of the intraday table into the history table, commit those inserts, and restart the stored procedure at the first uninserted point in the case of failure.
Here is what I have so far:
CREATE OR REPLACE PROCEDURE test_proc (p_array_size IN PLS_INTEGER DEFAULT 5000)
IS
TYPE ARRAY IS TABLE OF z_intraday%ROWTYPE;
l_data ARRAY;
CURSOR c IS SELECT *
FROM "intraday";
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_data LIMIT p_array_size;
    FORALL i IN 1..l_data.COUNT
      INSERT INTO history
      VALUES l_data(i);
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
  COMMIT;
EXCEPTION WHEN OTHERS THEN
  IF c%ISOPEN THEN
    CLOSE c;
  END IF;
  ROLLBACK;
  RAISE;
END test_proc;
So I only commit after the loop has finished. How can I refactor so that each insert operation in the loop commits, then if there is a failure, roll back to the previous batch of records that failed and run the procedure again? Sorry I know this is a heavy question, but any guidance would be greatly appreciated.
Use set-based operations wherever possible, not row-by-row operations. A single "insert as select" or "merge" statement with a filter will run several orders of magnitude faster than the row-by-row ("slow-by-slow") construct you have created. Also, committing after every individual row will hurt performance for the entire database instance, not just this procedure, as each commit forces a redo log sync.
insert into history (col1, col2, col3)
select col1, col2, col3 from intraday d
where d.id not in (select id from history);
commit;
or
merge into history h
using intraday d
on (h.id = d.id)
when not matched then
insert (h.id, h.col2, h.col3) values (d.id, d.col2, d.col3);
commit;
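A quick way to see why these statements are restartable: rerunning the "not matched" filter after a failure copies only the rows that are still missing. A minimal sketch in Python, with dictionaries standing in for the two tables and a made-up helper name:

```python
# Sketch of why the "insert ... where id not in history" / MERGE pattern is
# restartable: a rerun only copies rows that are not yet present.
intraday = {1: "row1", 2: "row2", 3: "row3", 4: "row4"}
history = {1: "row1"}  # suppose a previous run copied row 1, then failed

def copy_missing(src, dst):
    copied = []
    for key, row in src.items():
        if key not in dst:   # the NOT IN / "when not matched" filter
            dst[key] = row
            copied.append(key)
    return copied

first = copy_missing(intraday, history)   # copies the rows missed earlier
second = copy_missing(intraday, history)  # nothing left to copy
```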
You don't need a complicated procedure: you can use INSERT INTO ... LOG ERRORS INTO ... to capture the errors. All the errors are put into one table and the valid rows are all successfully inserted (processing continues past each error if you specify REJECT LIMIT UNLIMITED).
If you have the tables:
CREATE TABLE "intraday" (
a INT PRIMARY KEY,
b DATE,
c TIMESTAMP,
d VARCHAR2(30),
e VARCHAR2(10)
);
CREATE TABLE history (
a INT,
b DATE,
c TIMESTAMP NOT NULL,
d VARCHAR2(30),
e DATE
);
INSERT INTO "intraday"
SELECT 1, DATE '2020-01-01', TIMESTAMP '2020-01-01 00:00:00', 'valid', '2020-01-01' FROM DUAL UNION ALL
SELECT 2, DATE '2020-01-02', NULL, 'timestamp null', '2020-01-01' FROM DUAL UNION ALL
SELECT 3, DATE '2020-01-03', TIMESTAMP '2020-01-03 00:00:00', 'implicit date cast fails', '2020-01-XX' FROM DUAL UNION ALL
SELECT 4, DATE '2020-01-04', TIMESTAMP '2020-01-04 00:00:00', 'valid', '2020-01-04' FROM DUAL;
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';
Then you can create a table to put the errors using:
BEGIN
DBMS_ERRLOG.CREATE_ERROR_LOG (
dml_table_name => 'HISTORY',
err_log_table_name => 'HISTORY_ERRORS'
);
END;
/
Then you can run the SQL statement:
INSERT /*+ APPEND */ INTO history
SELECT * FROM "intraday"
LOG ERRORS INTO history_errors ('INSERT APPEND') REJECT LIMIT UNLIMITED;
Then the history table will contain:
SELECT * FROM history;
A | B | C | D | E
-: | :--------- | :------------------------ | :---- | :---------
1 | 2020-01-01 | 01-JAN-20 00.00.00.000000 | valid | 2020-01-01
4 | 2020-01-04 | 04-JAN-20 00.00.00.000000 | valid | 2020-01-04
And the errors will be:
SELECT * FROM history_errors;
ORA_ERR_NUMBER$ | ORA_ERR_MESG$ | ORA_ERR_ROWID$ | ORA_ERR_OPTYP$ | ORA_ERR_TAG$ | A | B | C | D | E
--------------: | :----------------------------------------------------------------------------------- | :------------- | :------------- | :------------ | :- | :--------- | :--------------------------- | :----------------------- | :---------
1400 | ORA-01400: cannot insert NULL into ("FIDDLE_HSUKHKSUNFGTKKAMLHOA"."HISTORY"."C")<br> | null | I | INSERT APPEND | 2 | 2020-01-02 | null | timestamp null | 2020-01-01
1858 | ORA-01858: a non-numeric character was found where a numeric was expected<br> | null | I | INSERT APPEND | 3 | 2020-01-03 | 03-JAN-20 00.00.00.000000000 | implicit date cast fails | 2020-01-XX
db<>fiddle here

Separate phone numbers from string in cell - random order

I have a bunch of data that contains a phone number and a birthday as well as other data.
{1997-06-28,07742367858}
{07791100873,1996-07-14}
{30/01/1997,07974335488}
{1997-04-04,07701003703}
{1996-03-11,07480227283}
{1998-06-20,07713817233}
{1996-09-13,07435148920}
{"21 03 2000",07548542539,1st}
{1996-03-09,07539248008}
{07484642432,1996-03-01}
I am trying to extract the phone number from this, but I am unsure how to do it when the data is not always in the same order.
I would expect one column that returns the phone number, the next the birthday, and a third with whatever arbitrary value occupies the remaining slot.
You can try to sort parts of each string by the number of digits they contain. This can be done with the expression:
select length(regexp_replace('1997-06-28', '\D', '', 'g'))
length
--------
8
(1 row)
The query removes the curly brackets from each string, splits it on commas, sorts the elements by their number of digits, and aggregates them back into arrays:
with my_data(str) as (
values
('{1997-06-28,07742367858}'),
('{07791100873,1996-07-14}'),
('{30/01/1997,07974335488}'),
('{1997-04-04,07701003703}'),
('{1996-03-11,07480227283}'),
('{1998-06-20,07713817233}'),
('{1996-09-13,07435148920}'),
('{"21 03 2000",07548542539,1st}'),
('{1996-03-09,07539248008}'),
('{07484642432,1996-03-01}')
)
select id, array_agg(elem order by length(regexp_replace(elem, '\D', '', 'g')) desc)
from (
select id, trim(unnest(string_to_array(str, ',')), '"') as elem
from (
select trim(str, '{}') as str, row_number() over () as id
from my_data
) s
) s
group by id
Result:
id | array_agg
----+--------------------------------
1 | {07742367858,1997-06-28}
2 | {07791100873,1996-07-14}
3 | {07974335488,30/01/1997}
4 | {07701003703,1997-04-04}
5 | {07480227283,1996-03-11}
6 | {07713817233,1998-06-20}
7 | {07435148920,1996-09-13}
8 | {07548542539,"21 03 2000",1st}
9 | {07539248008,1996-03-09}
10 | {07484642432,1996-03-01}
(10 rows)
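The digit-count trick translates directly outside SQL. Here is a minimal Python sketch (function names are mine) of the same strip/split/sort pipeline:

```python
import re

def split_row(raw):
    # trim(str, '{}'), string_to_array(str, ','), then trim(elem, '"')
    return [part.strip('"') for part in raw.strip("{}").split(",")]

def digits(s):
    # analogue of length(regexp_replace(elem, '\D', '', 'g'))
    return len(re.sub(r"\D", "", s))

row = split_row('{"21 03 2000",07548542539,1st}')
ordered = sorted(row, key=digits, reverse=True)  # phone first: most digits
```

An 11-digit phone number always outranks an 8-digit date, which is what makes the ordering reliable regardless of the original element order.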
See also this answer: Looking for solution to swap position of date format DMY to YMD if you want to normalize the dates. You would need to modify the function:
create or replace function iso_date(text)
returns date language sql immutable as $$
select case
when $1 like '__/__/____' then to_date($1, 'DD/MM/YYYY')
when $1 like '____/__/__' then to_date($1, 'YYYY/MM/DD')
when $1 like '____-__-__' then to_date($1, 'YYYY-MM-DD')
when trim($1, '"') like '__ __ ____' then to_date(trim($1, '"'), 'DD MM YYYY')
end
$$;
and use it:
select id, a[1] as phone, iso_date(a[2]) as birthday, a[3] as comment
from (
select id, array_agg(elem order by length(regexp_replace(elem, '\D', '', 'g')) desc) as a
from (
select id, trim(unnest(string_to_array(str, ',')), '"') as elem
from (
select trim(str, '{}') as str, row_number() over () as id
from my_data
) s
) s
group by id
) s
id | phone | birthday | comment
----+-------------+------------+---------
1 | 07742367858 | 1997-06-28 |
2 | 07791100873 | 1996-07-14 |
3 | 07974335488 | 1997-01-30 |
4 | 07701003703 | 1997-04-04 |
5 | 07480227283 | 1996-03-11 |
6 | 07713817233 | 1998-06-20 |
7 | 07435148920 | 1996-09-13 |
8 | 07548542539 | 2000-03-21 | 1st
9 | 07539248008 | 1996-03-09 |
10 | 07484642432 | 1996-03-01 |
(10 rows)

Is there a way to TRIM all data in a SELECT * FROM statement?

I am trying to select and trim all the entries from a table using the following statement:
SELECT TRIM(*) FROM TABLE
But I get an error. Is there a way to return all entries selected so they are trimmed for blank characters at the beginning and end of each string?
You need to specify each string column by hand:
SELECT TRIM(col1), --LTRIM(RTRIM(...)) If RDBMS is SQL Server
TRIM(col2),
TRIM(col3),
TRIM(col4)
-- ...
FROM table
There is another problem with your proposal: * is a placeholder for every column in the table, so there will be problems trimming date/decimal/spatial data.
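If you can query the data dictionary (user_tab_columns in Oracle, information_schema.columns elsewhere), you can generate the per-column TRIM list instead of typing it by hand. A sketch in Python, with made-up column metadata:

```python
# Hypothetical column metadata, as if fetched from user_tab_columns;
# the table and column names here are made up for illustration.
columns = [
    ("ID", "NUMBER"),
    ("D", "DATE"),
    ("V1", "VARCHAR2"),
    ("V2", "VARCHAR2"),
]

def trim_select(table, cols):
    # wrap only string columns in TRIM(); pass other types through untouched
    exprs = [
        f"TRIM({name}) AS {name}" if dtype in ("VARCHAR2", "CHAR") else name
        for name, dtype in cols
    ]
    return f"SELECT {', '.join(exprs)} FROM {table}"

sql = trim_select("tab", columns)
```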
Addendum
Using Oracle 18c Polymorphic Table Functions (the provided code is just a PoC; there is room for a lot of improvement):
CREATE TABLE tab(id INT, d DATE,
v1 VARCHAR2(100), v2 VARCHAR2(100), v3 VARCHAR2(100) );
INSERT INTO tab(id, d,v1, v2, v3)
VALUES (1, SYSDATE, ' aaaa ', ' b ', ' c');
INSERT INTO tab(id, d,v1, v2, v3)
VALUES (2, SYSDATE+1, ' afasd', ' ', ' d');
COMMIT;
SELECT * FROM tab;
-- Output
.----.-----------.-----------.-----------.-----.
| ID | D | V1 | V2 | V3 |
:----+-----------+-----------+-----------+-----:
| 1 | 02-MAR-18 | aaaa | b | c |
:----+-----------+-----------+-----------+-----:
| 2 | 03-MAR-18 | afasd | | d |
'----'-----------'-----------'-----------'-----'
And table function:
CREATE OR REPLACE PACKAGE ptf AS
FUNCTION describe(tab IN OUT dbms_tf.table_t)RETURN dbms_tf.describe_t;
PROCEDURE FETCH_ROWS;
END ptf;
/
CREATE OR REPLACE PACKAGE BODY ptf AS
FUNCTION describe(tab IN OUT dbms_tf.table_t) RETURN dbms_tf.describe_t AS
new_cols DBMS_TF.COLUMNS_NEW_T;
BEGIN
FOR i IN 1 .. tab.column.count LOOP
IF tab.column(i).description.type IN ( dbms_tf.type_varchar2) THEN
tab.column(i).pass_through:=FALSE;
tab.column(i).for_read:= TRUE;
NEW_COLS(i) :=
DBMS_TF.COLUMN_METADATA_T(name=> tab.column(i).description.name,
type => tab.column(i).description.type);
END IF;
END LOOP;
RETURN DBMS_TF.describe_t(new_columns=>new_cols, row_replication=>true);
END;
PROCEDURE FETCH_ROWS AS
inp_rs DBMS_TF.row_set_t;
out_rs DBMS_TF.row_set_t;
rows PLS_INTEGER;
BEGIN
DBMS_TF.get_row_set(inp_rs, rows);
FOR c IN 1 .. inp_rs.count() LOOP
FOR r IN 1 .. rows LOOP
out_rs(c).tab_varchar2(r) := TRIM(inp_rs(c).tab_varchar2(r));
END LOOP;
END LOOP;
DBMS_TF.put_row_set(out_rs, replication_factor => 1);
END;
END ptf;
And final call:
CREATE OR REPLACE FUNCTION trim_col(tab TABLE)
RETURN TABLE pipelined row polymorphic USING ptf;
SELECT *
FROM trim_col(tab); -- passing table as table function argument
.----.-----------.-------.-----.----.
| ID | D | V1 | V2 | V3 |
:----+-----------+-------+-----+----:
| 1 | 02-MAR-18 | aaaa | b | c |
:----+-----------+-------+-----+----:
| 2 | 03-MAR-18 | afasd | - | d |
'----'-----------'-------'-----'----'
db<>fiddle demo

How to substring and join with another table with the substring result

I have 2 tables: errorlookup and errors.
errorlookup has 2 columns: codes and description.
The codes are of length 2.
errors has 2 columns: id and errorcodes.
The errorcodes are of length 40, meaning they can store 20 error codes for each id.
I need to display all the descriptions associated with an id by splitting errorcodes into substrings and matching each one against codes in the errorlookup table.
Sample data for errorlookup:
codes:description
12:Invalid
22:Inactive
21:active
Sample data for errors:
id:errorcodes
1:1221
2:2112
3:1222
I can't use LIKE as it would produce too many false matches. I want the errorcodes column to be broken down into strings of length 2 and then joined with errorlookup.
How can it be done?
If you really cannot alter the table structure, here's another approach.
Create an auxiliary numbers table:
CREATE TABLE numbers
( i INT NOT NULL
, PRIMARY KEY (i)
) ;
INSERT INTO numbers VALUES
( 1 ) ;
INSERT INTO numbers VALUES
( 2 ) ;
--- ...
INSERT INTO numbers VALUES
( 100 ) ;
Then you could use this:
SELECT err.id
, err.errorcodes
, num.i
, look.codes
, look.descriptionid
FROM
( SELECT i, 2*i-1 AS pos --- odd numbers
FROM numbers
WHERE i <= 20 --- 20 pairs
) num
CROSS JOIN
errors err
JOIN
errorlookup look
ON look.codes = SUBSTR(err.errorcodes, pos, 2)
ORDER BY
err.errorcodes
, num.i ;
Test at: SQL-Fiddle
ID ERRORCODES I CODES DESCRIPTIONID
1 1221 1 12 Invalid
1 1221 2 21 Active
3 1222 1 12 Invalid
3 1222 2 22 Inactive
2 2112 1 21 Active
2 2112 2 12 Invalid
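The pairing logic is easy to sanity-check outside the database. A Python sketch (helper names are mine, lookup data from the question's sample) of splitting errorcodes into two-character codes and looking each one up:

```python
def error_chunks(errorcodes, width=2):
    # same start positions the numbers-table trick generates: 1, 3, 5, ...
    return [errorcodes[i:i + width] for i in range(0, len(errorcodes), width)]

# sample data from the errorlookup table in the question
lookup = {"12": "Invalid", "22": "Inactive", "21": "active"}

def descriptions(errorcodes):
    # join each two-character code against the lookup table
    return [lookup[c] for c in error_chunks(errorcodes) if c in lookup]
```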
I think the cleanest solution is to "normalize" your errorcodes table using a PL/SQL function. That way you can keep the current (broken) table design, but still access its content as if it were properly normalized.
create type error_code_type as object (id integer, code varchar(2))
/
create or replace type error_table as table of error_code_type
/
create or replace function unnest_errors
return error_table pipelined
is
codes_l integer;
i integer;
one_row error_code_type := error_code_type(null, null);
begin
for err_rec in (select id, errorcodes from errors) loop
codes_l := length(err_rec.errorcodes);
i := 1;
while i < codes_l loop
one_row.id := err_rec.id;
one_row.code := substr(err_rec.errorcodes, i, 2);
pipe row (one_row);
i := i + 2;
end loop;
end loop;
return;
end;
/
Now with this function you can do something like this:
select er.id, er.code, el.description
from table(unnest_errors) er
join errorlookup el on el.codes = er.code;
You can also create a view based on the function to make the statements a bit easier to read:
create or replace view normalized_errorcodes
as
select *
from table(unnest_errors);
Then you can simply reference the view in the real statement.
(I tested this on 11.2 but I believe it should work on 10.x as well)
I think you're on the right track with LIKE. MySQL has an RLIKE function that allows matching by regular expression (I don't know if it's present in Oracle.) You could use errorlookup.code as a pattern to match against errors.errorcodes. The (..)* pattern is used to prevent things like "1213" from matching, for example, "21".
SELECT *
FROM error
JOIN errorlookup
WHERE errorcodes RLIKE CONCAT('^(..)*',code)
ORDER BY id;
+------+----------+------+
| id | errorcode| code |
+------+----------+------+
| 1 | 11 | 11 |
| 2 | 1121 | 11 |
| 2 | 1121 | 21 |
| 3 | 21313245 | 21 |
| 3 | 21313245 | 31 |
| 3 | 21313245 | 32 |
| 4 | 21 | 21 |
+------+----------+------+