I have an internal table that I have to fill with lines based on the values of 3 variables, using the VALUE operator.
TYPES:
  ty_table TYPE STANDARD TABLE OF string WITH DEFAULT KEY.
DATA(lv_var_1) = 'LINE 1'.
DATA(lv_var_2) = 'LINE 2'.
DATA(lv_var_3) = ''.
DATA(lt_table) = VALUE ty_table( ( COND #( WHEN lv_var_1 IS NOT INITIAL THEN lv_var_1 ) )
                                 ( COND #( WHEN lv_var_2 IS NOT INITIAL THEN lv_var_2 ) )
                                 ( COND #( WHEN lv_var_3 IS NOT INITIAL THEN lv_var_3 ) ) ).
Here lv_var_3 is empty. When a variable is empty, it should not create a row in lt_table at all.
How can I achieve this?
The direct answer to your question is to use LINES OF, which appends an arbitrary number of lines from an intermediate internal table: potentially zero lines (empty table, nothing added) or any number of lines (in your case, one line):
TYPES:
  ty_table TYPE STANDARD TABLE OF string WITH DEFAULT KEY.
DATA(lv_var_1) = `LINE 1`.
DATA(lv_var_2) = `LINE 2`.
DATA(lv_var_3) = ``.
DATA(lt_table) = VALUE ty_table(
    ( LINES OF COND ty_table( WHEN lv_var_1 IS NOT INITIAL THEN VALUE #( ( lv_var_1 ) ) ) )
    ( LINES OF COND ty_table( WHEN lv_var_2 IS NOT INITIAL THEN VALUE #( ( lv_var_2 ) ) ) )
    ( LINES OF COND ty_table( WHEN lv_var_3 IS NOT INITIAL THEN VALUE #( ( lv_var_3 ) ) ) ) ).
But you may find that code hard to read. Other solutions are possible, such as adding all the lines and then deleting the initial ones:
DATA(lt_table) = VALUE ty_table(
( lv_var_1 ) ( lv_var_2 ) ( lv_var_3 ) ).
DELETE lt_table WHERE table_line IS INITIAL.
Or, on the same principle but less legible, with constructor expressions only:
DATA(lt_table) = VALUE ty_table(
    FOR <line> IN VALUE ty_table( ( lv_var_1 ) ( lv_var_2 ) ( lv_var_3 ) )
    WHERE ( table_line IS NOT INITIAL )
    ( <line> ) ).
Or another possibility like the first one, but without repeating the variable names:
TYPES ty_vars TYPE STANDARD TABLE OF REF TO string WITH EMPTY KEY.
DATA(lt_table) = VALUE ty_table(
FOR var IN VALUE ty_vars( ( REF #( lv_var_1 ) ) ( REF #( lv_var_2 ) ) ( REF #( lv_var_3 ) ) )
( LINES OF COND ty_table( WHEN var->* NE `` THEN VALUE #( ( var->* ) ) ) ) ).
Below is the code I'm trying, but I'm getting a null value:
SELECT json_query(object_data,'$.AOF.LEAD_DATA.DIRECTOR[*]')
FROM TB_COP_BUSS_OBJ_TXN FD,
JSON_TABLE(FD.OBJECT_DATA,'$.AOF.LEAD_DATA.DIRECTOR[*]' columns
( AUS_FLAG VARCHAR2(40) PATH '$.CHECKBOX.AUS_FLAG.value')) j
WHERE FD.OBJECT_PRI_KEY_1 = 'XXXXXXX' and j.AUS_FLAG ='Y'
I'm trying to get the full data that is inside the DIRECTOR object/array. When I use 0, 1, 2 instead of *, I get the data, but I need to check the AUS flag and get the array entries at the matching indexes. Please help.
If you have the sample data:
CREATE TABLE TB_COP_BUSS_OBJ_TXN (
OBJECT_PRI_KEY_1 VARCHAR2(20) PRIMARY KEY,
OBJECT_DATA CLOB CHECK (OBJECT_DATA IS JSON)
);
INSERT INTO TB_COP_BUSS_OBJ_TXN (
OBJECT_PRI_KEY_1,
OBJECT_DATA
) VALUES (
'XXXXXX',
'{"AOF":{"LEAD_DATA":{"DIRECTOR":[1,2,3]}}}'
);
Then you can use:
SELECT JSON_QUERY(OBJECT_DATA,'$.AOF.LEAD_DATA."DIRECTOR"')
from TB_COP_BUSS_OBJ_TXN FD
WHERE OBJECT_PRI_KEY_1 = 'XXXXXX'
Which outputs:
JSON_QUERY(OBJECT_DATA,'$.AOF.LEAD_DATA."DIRECTOR"')
[1,2,3]
Or you can use:
SELECT value
from TB_COP_BUSS_OBJ_TXN FD
CROSS APPLY JSON_TABLE(
fd.object_data,
'$.AOF.LEAD_DATA."DIRECTOR"[*]'
COLUMNS (
value NUMBER PATH '$'
)
)
WHERE OBJECT_PRI_KEY_1 = 'XXXXXX'
Which outputs:
VALUE
1
2
3
db<>fiddle here
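To address your original requirement directly (filter on the flag but return the whole matching DIRECTOR entry), one option is to also expose each array entry itself as a FORMAT JSON column of the same JSON_TABLE. This is a sketch, assuming each DIRECTOR entry is an object containing CHECKBOX.AUS_FLAG.value as in your path:
SELECT j.director_obj
from TB_COP_BUSS_OBJ_TXN FD
CROSS APPLY JSON_TABLE(
  fd.object_data,
  '$.AOF.LEAD_DATA."DIRECTOR"[*]'
  COLUMNS (
    -- the whole array entry, returned as a JSON fragment
    director_obj VARCHAR2(4000) FORMAT JSON PATH '$',
    aus_flag     VARCHAR2(40)              PATH '$.CHECKBOX.AUS_FLAG.value'
  )
) j
WHERE FD.OBJECT_PRI_KEY_1 = 'XXXXXXX'
AND j.aus_flag = 'Y'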
Let's say my table myTable has a column1 that already has some values in it.
Now I am given some new values that I should put in a newly created column named column2.
These are associated one-to-one and are unique. So for example:
column1 | column2
-----------------
'ABCHi' | 'newOH'
-----------------
'TER12' | 'Meow2'
-----------------
'WhatE' | 'BMW26'
-----------------
So I could say like:
Update myTable SET column2 = 'newOH' WHERE column1 = 'ABCHi'
and do that for each of those rows (I have 32 of them to do).
But I thought maybe there is a "nicer" way of doing this? If this were C#, I could populate a dictionary and then do a foreach loop!
You can use a Table Value Constructor:
declare @Samples as Table ( Column1 VarChar(10), Column2 VarChar(10) );
-- Initialize the sample data.
insert into @Samples ( Column1 ) values
( 'ABCHi' ), ( 'TER12' ), ( 'WhatE' );
select * from @Samples;
-- Update the second column.
update OSamples
set Column2 = NSamples.Column2
from @Samples as OSamples inner join
( values
( 'ABCHi', 'newOH' ),
( 'TER12', 'Meow2' ),
( 'WhatE', 'BMW26' )
) as NSamples( Column1, Column2 )
on OSamples.Column1 = NSamples.Column1;
select * from @Samples;
DBfiddle.
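Applied to your actual table, the same pattern needs no temporary objects at all (a sketch using the myTable, column1 and column2 names from your question):
update t
set t.column2 = v.column2
from myTable as t inner join
( values
( 'ABCHi', 'newOH' ),
( 'TER12', 'Meow2' ),
( 'WhatE', 'BMW26' )
) as v ( column1, column2 )
on t.column1 = v.column1;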
You could create the "dictionary" as an inline view using the WITH clause.
Here is the fiddle: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=c0e1393785082fd5cd9352d513b76af6
with Dictionary as(
select 'ABCHi' as column1, 'newOH' as column2
union all
select 'TER12' as column1, 'Meow2' as column2
union all
select 'WhatE' as column1, 'BMW26' as column2
)
UPDATE t
SET t.column2 = Dictionary.column2
FROM mytable t JOIN Dictionary ON t.column1 = Dictionary.column1
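If you prefer, the same one-to-one mapping can also be written as a MERGE over an inline VALUES list (a sketch against the same mytable):
MERGE mytable AS t
USING ( VALUES
('ABCHi', 'newOH'),
('TER12', 'Meow2'),
('WhatE', 'BMW26')
) AS v ( column1, column2 )
ON t.column1 = v.column1
WHEN MATCHED THEN
UPDATE SET t.column2 = v.column2;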
Hi,
I have a problem with SQL.
I have a table with a CLOB column in which a document is stored.
When the content in that table is in XML format, we use a function to extract an XML field:
CREATE OR REPLACE FUNCTION gettagvalue (
XMLBody IN CLOB, TagXml IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
return TO_CHAR (SUBSTR (XMLBody,
INSTR (XMLBody, '<'||TagXml||'>') + length(TagXml) + 2,
INSTR (XMLBody, '</'||TagXml||'>')
- INSTR (XMLBody, '<'||TagXml||'>')
- (length(TagXml) + 2)
)
);
END GetTagValue;
/
Example:
Select errorhandler.GetTagValue(xml_content,'ORDERID')
from table_order
In the same table we also have some documents in JSON.
How can I create a copy of the same function to extract the field?
With XML it is easy because the field starts with <TAG> and ends with </TAG>, but with JSON I cannot work out how to define the end of the field.
Do NOT try to parse XML or JSON as strings; use a proper XML or JSON parser.
If you have the table with the sample data:
CREATE TABLE table_name (
xml CLOB,
json CLOB CHECK ( json IS JSON )
);
INSERT INTO table_name ( xml, json ) VALUES (
'<a><b>BBB</b><c>CCC</c><d>DDD</d></a>',
'{"a":"aaa","b":"bbb","c":[1,2,3,4],"d":"ddd"}'
);
Then you can get the c values from both using the XMLQUERY or JSON_QUERY functions:
SELECT XMLQUERY(
'*/c/text()'
PASSING XMLTYPE(xml)
RETURNING CONTENT
) AS c_xml_value,
JSON_QUERY(
json,
'$.c'
RETURNING VARCHAR2(50)
) AS c_json_value
FROM table_name
Which outputs:
| C_XML_VALUE | C_JSON_VALUE |
| :---------- | :----------- |
| CCC | [1,2,3,4] |
If you have XML and JSON values in the same column then look at whether the first character is < or not and use the appropriate parsing function; do not try to create your own function to parse the values using substring matching.
For example:
CREATE TABLE table_name ( value CLOB );
INSERT INTO table_name ( value )
SELECT '<a><b>BBB</b><c>CCC</c><d>DDD</d></a>' FROM DUAL UNION ALL
SELECT '{"a":"aaa","b":"bbb","c":[1,2,3,4],"d":"ddd"}' FROM DUAL;
Then:
SELECT CASE
WHEN value LIKE '<%'
THEN CAST(
XMLQUERY( '*/c/text()' PASSING XMLTYPE(value) RETURNING CONTENT )
AS VARCHAR2(50)
)
ELSE JSON_QUERY( value, '$.c' RETURNING VARCHAR2(50) )
END AS c_value
FROM table_name
Outputs:
| C_VALUE |
| :-------- |
| CCC |
| [1,2,3,4] |
db<>fiddle here
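If you really want a drop-in companion to GetTagValue for the JSON bodies, wrap a real parser rather than substring arithmetic. A minimal sketch using JSON_OBJECT_T (Oracle 12.2+); the function name GetJsonValue is illustrative and it assumes the field is a top-level scalar:
CREATE OR REPLACE FUNCTION getjsonvalue (
JSONBody IN CLOB, FieldName IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
-- Parse the document properly, then read the named top-level field.
RETURN JSON_OBJECT_T.parse( JSONBody ).get_string( FieldName );
END GetJsonValue;
/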
Update
You can also use JSON_TABLE and XMLTABLE to get all the values out:
SELECT COALESCE( j.sourceChannel, x.sourceChannel ) AS sourceChannel,
COALESCE( j.transactionId, x.transactionId ) AS transactionId,
COALESCE( j.sessionId, x.sessionId ) AS sessionId,
COALESCE( j.status, x.status ) AS status,
COALESCE( j.errorcode, x.errorcode ) AS errorcode,
COALESCE( j.errordescription, x.errordescription ) AS errordescription
FROM table_name t
OUTER APPLY JSON_TABLE(
t.value,
'$.header'
COLUMNS (
sourceChannel VARCHAR2( 50) PATH '$.sourceChannel',
transactionId VARCHAR2( 50) PATH '$.transactionId',
sessionId VARCHAR2( 50) PATH '$.sessionId',
status VARCHAR2( 50) PATH '$.status',
errorcode VARCHAR2( 50) PATH '$.errorcode',
errordescription VARCHAR2(200) PATH '$.errordescription'
)
) j
LEFT OUTER JOIN LATERAL(
SELECT *
FROM XMLTABLE(
'/header'
PASSING XMLTYPE( value )
COLUMNS
sourceChannel VARCHAR2( 50) PATH 'sourceChannel',
transactionId VARCHAR2( 50) PATH 'transactionId',
sessionId VARCHAR2( 50) PATH 'sessionId',
status VARCHAR2( 50) PATH 'status',
errorcode VARCHAR2( 50) PATH 'errorcode',
errordescription VARCHAR2(200) PATH 'errordescription'
)
) x
ON ( t.value LIKE '<%' )
Which for the sample data:
CREATE TABLE table_name ( value CLOB );
INSERT INTO table_name ( value )
SELECT '<header>
<sourceChannel>xaaa</sourceChannel>
<transactionId>xbbb</transactionId>
<sessionId>xccc</sessionId>
<status>xddd</status>
<errorcode>xeee</errorcode>
<errordescription>xfff</errordescription>
</header>' FROM DUAL UNION ALL
SELECT '{"header":{"sourceChannel":"jaaa","transactionId":"jbbb","sessionId":"jccc","status":"jddd","errorcode":"jeee","errordescription":"jfff"}}' FROM DUAL;
Outputs:
| SOURCECHANNEL | TRANSACTIONID | SESSIONID | STATUS | ERRORCODE | ERRORDESCRIPTION |
| :------------ | :------------ | :-------- | :----- | :-------- | :--------------- |
| xaaa          | xbbb          | xccc      | xddd   | xeee      | xfff             |
| jaaa          | jbbb          | jccc      | jddd   | jeee      | jfff             |
All the examples I have found for the Postgres 'returning' functionality (https://www.postgresql.org/docs/current/dml-returning.html) return values for a single row.
How do I read multiple result rows into a variable?
Executing the following outside a function gives the desired results:
create sequence core.test_id_seq start with 10000;
create table core.test (
test_id integer not null default nextval('core.test_id_seq'),
field integer not null
);
insert into core.test ( field )
select unnest( array[1, 2] ) as id
returning *
;
test_id | field
---------+-------
10000 | 1
10001 | 2
(2 rows)
But I want to read the results into a variable or table to work with:
do $$
declare
recs ??;
begin
create sequence core.test_id_seq start with 10000;
create table core.test (
test_id integer not null default nextval('core.test_id_seq'),
field integer not null
);
insert into core.test ( field )
select unnest( array[1, 2] ) as id
returning * into recs
;
end $$;
Is this possible?
Thanks
You can collect the returned values into an array of integers, using a data-modifying CTE:
do $$
declare
new_ids int[];
begin
with new_rows as (
insert into core.test ( field )
select unnest( array[1, 2] ) as id
returning *
)
select array_agg(field)
into new_ids
from new_rows;
-- ... work with the new_ids array ...
end
$$;
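If you want the whole returned rows rather than a single column, you can land the RETURNING set in a temporary table instead (a sketch; the name tmp_new_rows is illustrative):
do $$
begin
  -- clone the structure of core.test without copying any rows
  create temporary table tmp_new_rows on commit drop as
    select * from core.test where false;
  with new_rows as (
    insert into core.test ( field )
    select unnest( array[1, 2] )
    returning *
  )
  insert into tmp_new_rows select * from new_rows;
  -- tmp_new_rows can now be queried like any other table
end $$;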
I was wondering if anyone can help me write some code for the following logic.
We have a table
----------------
id, lang, letter
----------------
1 1 E
1 1 E
1 1 E
1 1 E
2 2 F
Problem:
I need to select ALL the rows for which the following conditions fail:
id = lang (i.e. it's either 1 or 2)
lang = 1 when letter = 'e' OR lang = 2 when letter = '2'
I know I can hard-code it. Also, I would like to do this in ONE query only.
Please help
WHERE NOT
(
id = lang
AND
(
(lang = 1 AND letter = 'e')
OR (lang = 2 AND letter = '2')
)
)
select * from table
where id <> lang and
(lang <> 1 or letter <> 'e') and
(lang <> 2 or letter <> '2')
assuming you mean you want all data where both of those conditions are false.
I think this is what you want, to exclude the records meeting that criteria:
create table #t
(
id int,
lang int,
letter varchar(1)
)
insert into #t values (1, 1, 'E')
insert into #t values (1, 1, 'E')
insert into #t values (1, 1, 'E')
insert into #t values (1, 1, 'E')
insert into #t values (2, 2, 'F')
insert into #t values (1, 1, 'G')
insert into #t values (1, 1, 'H')
insert into #t values (1, 1, 'I')
insert into #t values (1, 1, 'J')
insert into #t values (2, 2, '2')
SELECT *
FROM #t
WHERE NOT
(
id = lang
AND
(
(
lang = 1
AND letter = 'E'
)
OR
(
lang = 2
AND letter = '2'
)
)
)
drop table #t
To get the records that match the criteria, just remove the NOT:
SELECT *
FROM #t
WHERE
(
id = lang
AND
(
(
lang = 1
AND letter = 'E'
)
OR
(
lang = 2
AND letter = '2'
)
)
)
The idea here is that there are three business rules that may be implemented as three distinct tuple constraints (i.e. each must evaluate to not false for every row in the table):
id and lang must be equal (begging the question, why not make one a computed column?).
If letter is 'E' then lang must be 1 (I assume there is a typo in your question where you said 'e' instead of 'E').
If letter is 'F' then lang must be 2 (I assume there is a typo in your question where you said 2 instead of 'F').
The constraints 'don't have anything to say' about any other data (e.g. when letter is 'X') and will allow this to pass.
All three tuple constraints can be written in conjunctive normal form as a constraint validation query:
SELECT * FROM T
WHERE id = lang
AND ( letter <> 'E' OR lang = 1 )
AND ( letter <> 'F' OR lang = 2 )
The data that violates the constraints can be simply shown (in pseudo relational algebra) as:
T MINUS (constraint validation query)
In SQL:
SELECT * FROM T
EXCEPT
SELECT * FROM T
WHERE id = lang
AND ( letter <> 'E' OR lang = 1 )
AND ( letter <> 'F' OR lang = 2 )
It is good to be able to rewrite predicates in case one's query of choice runs like glue on one's DBMS of choice! The above may be rewritten as e.g.
SELECT * FROM T
WHERE NOT ( id = lang
AND ( letter <> 'E' OR lang = 1 )
AND ( letter <> 'F' OR lang = 2 ) )
Applying rewrite laws (De Morgan's and double-negative) e.g.
SELECT * FROM T
WHERE id <> lang
OR ( letter = 'E' AND lang <> 1 )
OR ( letter = 'F' AND lang <> 2 )
Logically speaking, this should be better for the optimizer because for the above to be a contradiction every disjunct member must be false (put another way, it only takes one OR'ed clause to be true for the data to be deemed 'bad'). In practice (in theory?), the optimizer should be able to perform such rewrites anyhow!
p.s. nulls are bad for logic -- avoid them!
Here's my test code with sample data:
WITH Nums AS ( SELECT *
FROM ( VALUES (0), (1), (2) ) AS T (c) ),
Chars AS ( SELECT *
FROM ( VALUES ('E'), ('F'), ('X') ) AS T (c) ),
T AS ( SELECT N1.c AS id, N2.c AS lang,
C1.c AS letter
FROM Nums AS N1, Nums AS N2, Chars AS C1 )
SELECT * FROM T
EXCEPT
SELECT * FROM T
WHERE id = lang
AND ( letter <> 'E' OR lang = 1 )
AND ( letter <> 'F' OR lang = 2 );