Oracle SQL update using REGEXP_REPLACE

I need to update one column, sensorname, for almost 2000 rows of a table ("Sensor", which has many more rows than that).
Part of the replacement within the update is based on the contents of another table, deviceport.
Deviceport is related to the table being updated through the deviceportid column -- sensor.deviceportid = deviceport.deviceportid. Thus the actual replacement is different for every row. I don't want to have to write 2000 update statements, but I haven't been able to figure out what my "where" clause should say.
UPDATE sensor sn SET sn.sensorname = (
    SELECT REGEXP_REPLACE(
               sensorname,
               '^P(\d)',
               'J ' || (
                   SELECT d.deviceportlabel
                   FROM deviceport d
                   WHERE d.deviceportid = s.deviceportid
               ) || ' Breaker \1'
           )
    FROM sensor s
    WHERE REGEXP_LIKE( sensorname, '^P(\d)')
)
WHERE ...?
Any clues?

UPDATE sensor sn
SET sn.sensorname = 'J ' ||
    (
        SELECT d.deviceportlabel
        FROM deviceport d
        WHERE d.deviceportid = sn.deviceportid
    ) || ' Breaker ' || substr(sn.sensorname, 2, 1)
WHERE REGEXP_LIKE( sn.sensorname, '^P\d');

Try this (note: the comma-separated multi-table UPDATE below is MySQL syntax and will not run in Oracle; an Oracle MERGE equivalent follows):
UPDATE sensor sn, deviceport d
SET sn.sensorname = REGEXP_REPLACE(
        sn.sensorname, '^P(\d)', 'J ' || d.deviceportlabel || ' Breaker \1')
WHERE d.deviceportid = sn.deviceportid
AND REGEXP_LIKE( sn.sensorname, '^P(\d)');
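A minimal MERGE sketch for Oracle, assuming deviceportid is unique in deviceport (same logic as above, in valid Oracle syntax):
MERGE INTO sensor sn
USING deviceport d
ON (d.deviceportid = sn.deviceportid)
WHEN MATCHED THEN UPDATE
    -- prefix 'J <label>' and keep the captured digit as the breaker number
    SET sn.sensorname = REGEXP_REPLACE(
            sn.sensorname, '^P(\d)', 'J ' || d.deviceportlabel || ' Breaker \1')
    -- only touch rows whose name starts with P followed by a digit
    WHERE REGEXP_LIKE(sn.sensorname, '^P\d');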

Related

Select query using json format value

If a customer has first_name = 'Monika' and
last_name = 'Awasthi',
then I am using the below query to return the value in JSON format:
SELECT *
FROM (
    SELECT JSON_ARRAYAGG(JSON_OBJECT('CODE' IS '1', 'VALUE' IS 'Monika'||' '||'Awasthi'))
    FROM DUAL
);
It works fine and gives the below output:
[{"CODE":"1","VALUE":"Monika Awasthi"}]
But I want one more value in which the name is reversed, so the output should be:
[{"CODE":"1","VALUE":"Monika Awasthi"},{"CODE":"2","VALUE":"Awasthi Monika"}]
Kindly give me some suggestions. Thank You
Another approach is to use a CTE to generate the two codes and values; your original version could be written to get the name data from a table or CTE:
-- CTE for sample data
WITH cte (first_name, last_name) AS (
    SELECT 'Monika', 'Awasthi' FROM DUAL
)
-- query against CTE or table
SELECT JSON_ARRAYAGG(JSON_OBJECT('CODE' IS '1', 'VALUE' IS first_name ||' '|| last_name))
FROM cte;
And you could then extend that with a CTE that generates the value with the names in both orders:
WITH cte1 (first_name, last_name) AS (
    SELECT 'Monika', 'Awasthi' FROM DUAL
),
cte2 (code, value) AS (
    SELECT 1 AS code, first_name || ' ' || last_name FROM cte1
    UNION ALL
    SELECT 2 AS code, last_name || ' ' || first_name FROM cte1
)
SELECT JSON_ARRAYAGG(JSON_OBJECT('CODE' IS code, 'VALUE' IS value))
FROM cte2;
which gives:
JSON_ARRAYAGG(JSON_OBJECT('CODE'ISCODE,'VALUE'ISVALUE))
-------------------------------------------------------------------------
[{"CODE":1,"VALUE":"Monika Awasthi"},{"CODE":2,"VALUE":"Awasthi Monika"}]
db<>fiddle
A simple SQL-only approach (without PL/SQL) to generate the code values, usable only for two columns as in this case, might be:
SELECT JSON_ARRAYAGG(
           JSON_OBJECT('CODE' IS tt.column_id,
                       'VALUE' IS CASE WHEN column_id = 1
                                       THEN name||' '||surname
                                       ELSE surname||' '||name
                                  END)
       ) AS result
FROM t
CROSS JOIN (SELECT column_id FROM user_tab_cols WHERE table_name = 'T') tt
where t is a table holding name and surname columns.
Demo
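For reference, a minimal sample table matching that assumption might be (hypothetical data):
create table t (name varchar2(30), surname varchar2(30));
insert into t values ('Monika', 'Awasthi');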
A more resilient solution might be provided through PL/SQL, even when more columns exist within the data source, such as:
DECLARE
    v_jso   VARCHAR2(4000);
    v_arr   OWA.VC_ARR;
    v_arr_t JSON_ARRAY_T := JSON_ARRAY_T();
BEGIN
    FOR c IN ( SELECT column_id FROM user_tab_cols WHERE table_name = 'T' )
    LOOP
        -- build a JSON_OBJECT expression whose VALUE concatenates the columns,
        -- ordered by their distance from the current column_id
        SELECT 'JSON_OBJECT( ''CODE'' IS '||MAX(c.column_id)||',
                ''VALUE'' IS '||LISTAGG(column_name,'||'' ''||')
                                WITHIN GROUP (ORDER BY ABS(column_id-c.column_id))
               ||' )'
        INTO v_arr(c.column_id)
        FROM ( SELECT * FROM user_tab_cols WHERE table_name = 'T' );
        EXECUTE IMMEDIATE 'SELECT '||v_arr(c.column_id)||' FROM t' INTO v_jso;
        v_arr_t.APPEND(JSON_OBJECT_T(v_jso));
    END LOOP;
    DBMS_OUTPUT.PUT_LINE(v_arr_t.STRINGIFY);
END;
/
Demo
As I explained in a comment under your question, I am not clear on how you define the CODE values for your JSON string (assuming you have more than one customer).
Other than that, if you need to create a JSON array of objects from individual strings (as in your attempt), you probably need to use JSON_ARRAY rather than JSON_ARRAYAGG - something like what I show below. Incidentally, I also don't know why you needed to SELECT * FROM (subquery); the outer SELECT seems entirely unnecessary.
So, if you don't actually aggregate over a table, but just need to build a JSON array from individual pieces:
select json_array(
           json_object('CODE' is '1', 'VALUE' is first_name || ' ' || last_name),
           json_object('CODE' is '2', 'VALUE' is last_name || ' ' || first_name)
       ) as result
from ( select 'Monika' as first_name, 'Awasthi' as last_name from dual );
RESULT
------------------------------------------------------------------------------
[{"CODE":"1","VALUE":"Monika Awasthi"},{"CODE":"2","VALUE":"Awasthi Monika"}]

Optimize long running select query against Oracle database

I'm not a DBA expert. We have an existing Oracle query to extract data for a particular day; the problem is that if the business volume for a day is extremely large, the query takes 8+ hours and times out. We cannot do optimization inside the database itself, so how do we usually handle an extreme case like this? I've pasted the query below with content masked to show the SQL structure; I'm looking for advice on how to optimize this query or any alternative way to avoid the timeout.
WHENEVER SQLERROR EXIT 1
SET LINESIZE 9999
SET ECHO OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET HEADING OFF
SET TRIMSPOOL ON
SET COLSEP ","
SELECT co.cid
|| ',' || DECODE(co.cid,'xxxxx','xxxx',null,'N/A','xxxxx')
|| ',' || d.name
|| ',' || ti.rc
|| ',' || DECODE(cf.side_id,1,'x',2,'xx',5,'xx','')
|| ',' || cf.Quantity
|| ',' || cf.price
|| ',' || TO_CHAR(time,'YYYY-mm-dd hh24:mi:ss')
|| ',' || DECODE(co.capacity_id,1,'xxxx',2,'xxxx','')
|| ',' || co.type
|| ',' || cf.id
|| ',' || CASE
WHEN (cf.account_id = xxx OR cf.account_id = xxx) THEN SUBSTR(cf.tag, 1, INSTR(cf.tag, '.')-1) || '_' || ti.ric || '_' || DECODE(cf.side_id,1,'xx',2,'xx',5,'xx','')
WHEN INSTR(cf.clientorder_id, '#') > 0 THEN SUBSTR(cf.clientorder_id, 1, INSTR(cf.clientorder_id, '#')-1)
ELSE cf.clientorder_id
END
|| ',' || co.tag
|| ',' || t.description
|| ',' || CASE
WHEN cf.id = xxx THEN 'xxxx'
ELSE (SELECT t.name FROM taccount t WHERE t.account_id = cf.account_id)
END as Account
FROM clientf cf, tins ti, thistory co, tdk d, tra t
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.orderhistory_id = co.orderhistory_id
AND cf.reporttype_id = 1
AND ti.inst_id = cf.inst_id
AND (ti.rc LIKE '%.xx' or ti.rc LIKE '%.xx' or ti.rc LIKE '%.xx' )
AND d.de_id = t.de_id
AND t.tr_id = co.tr_id
AND nvl(co.type_id,0) <> 3
AND cf.trid not in (SELECT v2.pid FROM port v2 WHERE v2.sessiondate = cf.sessiondate AND v2.exec_id = 4)
ORDER BY co.cid, time, cf.quantity;
I would firstly talk to the people who need the output of this query and ask them about the report and each individual column. Sometimes, some columns are not needed any more, sometimes the whole report. 8+ hours runtime is a good bargaining point ;-)
Next, I would put the original query to one side and start building a test query from scratch, bit by bit, for instance starting with clientf, taking all its columns in the WHERE clause:
SELECT *
FROM clientf SAMPLE (0.1) cf
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1;
If that's ok, I'd increase the sample size up to 99%. If the runtime is already too long, you might suggest an index on clientf.sessiondate (or maybe on clientf.reporttype_id, but that's unlikely to be helpful as it looks like it has too few distinct values).
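Such an index might look like this (the index name is illustrative):
CREATE INDEX ix_clientf_sessiondate ON clientf (sessiondate);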
Once that is done, I'd join the first table:
SELECT *
FROM clientf SAMPLE (0.1) cf
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1
AND cf.trid NOT IN (SELECT v2.pid
FROM port v2
WHERE v2.sessiondate = cf.sessiondate
AND v2.exec_id = 4);
I'd compare NOT IN and WHERE NOT EXISTS, not expecting much difference.
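A minimal sketch of the NOT EXISTS variant (the two are only guaranteed equivalent when port.pid cannot be NULL):
SELECT *
FROM clientf SAMPLE (0.1) cf
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1
AND NOT EXISTS (SELECT 1
                FROM port v2
                WHERE v2.pid = cf.trid
                AND v2.sessiondate = cf.sessiondate
                AND v2.exec_id = 4);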
Then I'd join the next table (personally preferring ANSI syntax), again starting with a small sample, again adding its columns to the WHERE clause:
SELECT *
FROM clientf SAMPLE (0.1) cf
JOIN thistory co
ON cf.orderhistory_id = co.orderhistory_id
WHERE cf.sessiondate = TO_DATE('xxxxxx','YYYYMMDD')
AND cf.reporttype_id = 1
AND nvl(co.type_id,0) <> 3
AND cf.trid NOT IN (SELECT v2.pid
                    FROM port v2
                    WHERE v2.sessiondate = cf.sessiondate
                    AND v2.exec_id = 4);
I'd play around replacing nvl(co.type_id,0)<>3 with (co.type_id <>3 OR co.type_id IS NULL), monitoring carefully that the result is logically the same.
And so on...

Oracle get table names based on column value

I have tables like this:
Table-1
Table-2
Table-3
Table-4
Table-5
Each table has many columns, and one of the columns is employee_id.
Now, I want to write a query which will
1) return all the tables that have this column, and
2) show, for a passed-in employee_id, whether each such table holds that value or is empty.
e.g. show table name, column name from Table-1, Table-2, Table-3, ... where employee_id = '1234'.
If a table doesn't have this column, then it need not be shown.
I have checked this link, but it shows only the table name and column name, without passing a column value to it.
Also checked this one, but it searches the entire schema, which I don't want to do.
UPDATE:
Found a solution, but it uses xmlsequence, which is deprecated:
1) How do I rewrite this code with xmltable?
2) If a table has no matching values, the output should show empty/null, or default to a "YES" value.
WITH char_cols AS (
    SELECT /*+ materialize */ table_name, column_name
    FROM cols
    WHERE data_type IN ('CHAR', 'VARCHAR2')
    AND table_name IN ('Table-1','Table-2','Table-3','Table-4','Table-5')
)
SELECT DISTINCT SUBSTR(:val, 1, 11) "Employee_ID",
       SUBSTR(table_name, 1, 14) "Table",
       SUBSTR(column_name, 1, 14) "Column"
FROM char_cols,
     TABLE(xmlsequence(dbms_xmlgen.getxmltype(
         'select "' || column_name || '" from "' || table_name
         || '" where upper("' || column_name || '") like upper(''%' || :val || '%'')'
     ).extract('ROWSET/ROW/*'))) t
ORDER BY "Table"
/
This query can be done in one step using the (non-deprecated) XMLTABLE.
Sample Schema
--Table-1 and Table-2 match the criteria.
--Table-3 has the right column but not the right value.
--Table-4 does not have the right column.
create table "Table-1" as select '1234' employee_id from dual;
create table "Table-2" as select '1234' employee_id from dual;
create table "Table-3" as select '4321' employee_id from dual;
create table "Table-4" as select 1 id from dual;
Query
--All tables with the column EMPLOYEE_ID, and the number of rows where EMPLOYEE_ID = '1234'.
select table_name, total
from
(
    --Get XML results of dynamic query on relevant tables and columns.
    select dbms_xmlgen.getXMLType(
        (
            --Create a SELECT statement on each table, UNION ALL'ed together.
            select listagg(
                       'select '''||table_name||''' table_name, count(*) total
                        from "'||table_name||'" where employee_id = ''1234'''
                       ,' union all'||chr(10)) within group (order by table_name) v_sql
            from user_tab_columns
            where column_name = 'EMPLOYEE_ID'
        )
    ) xml
    from dual
) x
cross join
--Convert the XML data to relational.
xmltable('/ROWSET/ROW'
    passing x.xml
    columns
        table_name varchar2(128) path 'TABLE_NAME',
        total number path 'TOTAL'
);
Results
TABLE_NAME TOTAL
---------- -----
Table-1 1
Table-2 1
Table-3 0
Just try the code below.
Note that it may be necessary to qualify the schema name in the loop.
This code works on my local DB.
set serveroutput on;
DECLARE
    ex_query VARCHAR(300);
    num      NUMBER;
    emp_id   NUMBER;
BEGIN
    emp_id := <put your value>;
    FOR rec IN (SELECT table_name
                FROM all_tab_columns
                WHERE column_name LIKE upper('employee_id'))
    LOOP
        num := 0;
        ex_query := 'select count(*) from ' || rec.table_name || ' where employee_id = ' || emp_id;
        EXECUTE IMMEDIATE ex_query INTO num;
        IF (num > 0) THEN
            DBMS_OUTPUT.PUT_LINE(rec.table_name);
        END IF;
    END LOOP;
END;
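If you also want to list tables where the column exists but holds no matching rows (requirement 2 in the question), a minimal tweak to the IF block could print a flag either way, e.g.:
IF (num > 0) THEN
    DBMS_OUTPUT.PUT_LINE(rec.table_name || ' : YES');
ELSE
    DBMS_OUTPUT.PUT_LINE(rec.table_name || ' : NO'); -- column present, value absent
END IF;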
I tried with the xml thing, but I get an error I cannot solve. Something about a zero size result. How difficult is it to solve this instead of raising exception?! Ask Oracle.
Anyway.
What you can do is use the COLS view to find which tables have the employee_id column.
1) Which tables from TABLE_LIKE_THIS (I assume the column holding table names is C) have this column?
select *
from COLS, TABLE_LIKE_THIS t
where cols.table_name = t.c
and cols.column_name = 'EMPLOYEE_ID'
-- think Oracle metadata / think upper case
2) Which of them has the value you are looking for? Write a little chunk of dynamic PL/SQL with EXECUTE IMMEDIATE to count the matching rows in each table found above:
declare
    v_id  varchar2(10) := 'JP1829';      -- value you are looking for
    v_col varchar2(20) := 'EMPLOYEE_ID'; -- column
    n_c   number := 0;
begin
    for x in (
        select table_name
        from all_tab_columns cols, TABLE_LIKE_THIS t
        where cols.table_name = t.c
        and cols.column_name = v_col
    ) loop
        EXECUTE IMMEDIATE
            'select count(1) from ' || x.table_name
            || ' where Nvl(' || v_col || ', ''##'') = ''' || v_id || '''' -- adding quotes around string is a little specific
            INTO n_c;
        if n_c > 0 then
            dbms_output.put_line(n_c || ' in ' || x.table_name || ' has ' || v_col || '=' || v_id);
        end if;
        -- idem for null values
        -- ... || ' where ' || v_col || ' is null '
        -- or
        -- ... || ' where Nvl(' || v_col || ', ''##'') = ''##'' '
    end loop;
    dbms_output.put_line('done.');
end;
/
Hope this helps

Match a concatenated field to a list of variables

Good afternoon,
I'm trying to match a list of address fields (concatenated to give the value ALL_ADDRESS) against a separate table that contains suffixes, potentially hundreds of rows long.
My desired output is to show those entries where a suffix is part of the ALL_ADDRESS value (e.g. PARIS STREET).
This works fine when I concatenate without a join, but when I begin to join I get an error:
select s.suffix,
x.key,
x.B_ADDR1_TX,
x.B_ADDR2_TX,
x.B_ADDR3_TX,
x.b_addr_city,
x.b_addr_postcd,
x.b_addr_cntry,
x.b_addr_state_cd,
x.B_ADDR1_TX || ' ' || x.B_ADDR2_TX || ' ' || x.B_ADDR3_TX || ' ' || x.b_addr_city || ' ' || x.b_addr_postcd || ' ' || x.b_addr_cntry || ' ' || x.b_addr_state_cd as All_Address
from test_table AS x
JOIN suffix_list AS s
WHERE
x.All_Address LIKE CONCAT('%',s.suffix,'%') ;
Any help is greatly appreciated.
I'm not sure what you are trying to do. But proper syntax requires an on clause for a join:
from test_table x join
suffix_list s
on x.All_Address LIKE CONCAT('%', s.suffix, '%')
As I recall, Oracle doesn't support as for table aliases, so your query might have other syntax problems as well.
In Oracle, this would more typically be written as:
from test_table x join
suffix_list s
on x.All_Address LIKE '%' || s.suffix || '%'
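Note also that All_Address is a select-list alias, so Oracle won't let you reference it in the ON clause of the same query block; you'd need to repeat the concatenation there, or build it in an inline view first. A sketch of the inline-view variant, using the columns from your query:
select s.suffix, x.*
from (
    select t.*,
           t.B_ADDR1_TX || ' ' || t.B_ADDR2_TX || ' ' || t.B_ADDR3_TX || ' ' ||
           t.b_addr_city || ' ' || t.b_addr_postcd || ' ' || t.b_addr_cntry || ' ' ||
           t.b_addr_state_cd as All_Address
    from test_table t
) x
join suffix_list s
on x.All_Address like '%' || s.suffix || '%';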
Haven't been using Oracle for a while but:
CREATE table t1 (
a varchar(5),
b varchar(5),
c varchar(5));
INSERT INTO t1 VALUES ('one','two','three');
INSERT INTO t1 VALUES ('two','nine','five');
INSERT INTO t1 VALUES ('two','one','one');
CREATE TABLE t2 (filter varchar(5));
INSERT INTO t2 VALUES ('one');
INSERT INTO t2 VALUES ('six');
WITH t1new AS (SELECT t1.*, a || ' ' || b || ' ' || c as address FROM t1)
SELECT t1new.*
FROM t1new,t2
WHERE address like CONCAT(CONCAT('%', t2.filter),'%')
The above example runs in Oracle Live SQL.

How to convert two rows into key-value json object in postgresql?

I have a data model like the following, which is simplified to show only this problem (SQL Fiddle link at the bottom):
A person is represented in the database as a meta table row with a name, and with multiple attributes which are stored in the data table as key-value pairs (key and value are in separate columns).
Expected Result
Now I would like to retrieve all users with all their attributes. The attributes should be returned as json object in a separate column. For example:
name, data
Florian, { "age":23, "color":"blue" }
Markus, { "age":24, "color":"green" }
My Approach
Now my problem is that I couldn't find a way to create a key-value pair in Postgres. I tried the following:
SELECT
    name,
    array_to_json(array_agg(row(d.key, d.value))) AS data
FROM meta AS m
JOIN (
    SELECT d.fk_id, d.key, d.value AS value FROM data AS d
) AS d ON d.fk_id = m.id
GROUP BY m.name;
But it returns this as data column:
[{"f1":"age","f2":24},{"f1":"color","f2":"blue"}]
Other Solutions
I know there is the function crosstab, which enables me to turn the data table into one with the key as column and the value as row. But this is not dynamic, and I don't know how many attributes a person has in the data table, so this is not an option.
I could also build a JSON-like string from the two row values and aggregate those strings, but maybe there is a nicer solution.
And no, it is not possible to change the data model, because the real data model is already in use by multiple parties.
SQLFiddle
Check and test out the fiddle I've created for this question:
http://sqlfiddle.com/#!15/bd579/14
Use the aggregate function json_object_agg(key, value):
select
name,
json_object_agg(key, value) as data
from data
join meta on fk_id = id
group by 1;
Db<>Fiddle.
The function was introduced in Postgres 9.4.
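For a self-contained test, a minimal version of the model from the question might look like this (column names are assumed from the query above; with value of type json, numbers stay unquoted in the result):
create table meta (id int primary key, name text);
create table data (fk_id int references meta(id), key text, value json);
insert into meta values (1, 'Florian'), (2, 'Markus');
insert into data values
    (1, 'age', '23'), (1, 'color', '"blue"'),
    (2, 'age', '24'), (2, 'color', '"green"');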
I found a way to return crosstab data with dynamic columns. Maybe rewriting this to suit your needs will work better:
CREATE OR REPLACE FUNCTION report.usp_pivot_query_amount_generate(
i_group_id INT[],
i_start_date TIMESTAMPTZ,
i_end_date TIMESTAMPTZ,
i_interval INT
) RETURNS TABLE (
tab TEXT
) AS $ab$
DECLARE
_key_id TEXT;
_text_op TEXT = '';
_ret TEXT;
BEGIN
-- SELECT DISTINCT at_name values to loop over
FOR _key_id IN
SELECT DISTINCT at_name
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start BETWEEN i_start_date AND i_end_date
AND group_id = ANY (i_group_id)
AND interval_type_id = i_interval
LOOP
-- build function_call with datatype of column
IF char_length(_text_op) > 1 THEN
_text_op := _text_op || ', ' || _key_id || ' NUMERIC(20,2)';
ELSE
_text_op := _text_op || _key_id || ' NUMERIC(20,2)';
END IF;
END LOOP;
-- build query with parameter filters
_ret = '
SELECT * FROM crosstab(''SELECT date_start, at.at_name, cda.amount ct
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start between $$' || i_start_date::TEXT || '$$ AND $$' || i_end_date::TEXT || '$$
AND interval_type_id = ' || i_interval::TEXT || '
AND group_id = ANY (ARRAY[' || array_to_string(i_group_id, ',') || '])
ORDER BY date_start'')
AS ct (date_start timestamptz, ' || _text_op || ')';
RETURN QUERY
SELECT _ret;
END;
$ab$ LANGUAGE 'plpgsql';
Call the function to get the string, then execute the string. I think I tried executing it in the function, but it didn't work well.
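A sketch of that two-step usage (argument values are hypothetical): the call returns the generated crosstab SQL as text, which you then execute separately:
SELECT tab
FROM report.usp_pivot_query_amount_generate(
         ARRAY[1, 2],                -- i_group_id
         '2021-01-01'::timestamptz,  -- i_start_date
         '2021-02-01'::timestamptz,  -- i_end_date
         1                           -- i_interval
     );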
I ran into the same problem when I needed to update some JSON and remove a few elements in my database. The query below worked well enough for me, as it preserves the string quotes but does not add them to numbers.
select
    '{' || substr(x.arr, 3, length(x.arr) - 4) || '}'
from
(
    select
        replace(replace(cast(array_agg(xx) as varchar), '\"', '"'), '","', ', ') as arr
    from
    (
        select
            elem.key,
            elem.value,
            '"' || elem.key || '":' || elem.value as xx
        from
            quote q
        cross join
            json_each(q.detail::json -> 'bQuoteDetail' -> 'quoteHC' -> 0) as elem
        where
            elem.key != 'allRiskItems'
    ) f
) x