Check the presence of an XML tag using Oracle SQL

<wbi:appData>
<wbi:content wbi:name="1st_status">
<wbi:value xsi:type="xsd:string">Success</wbi:value>
</wbi:content>
</wbi:appData>
This XML is stored in a table column of CLOB type.
I want to find out whether the "wbi:value" tag exists in this XML or not.
I tried using EXISTSNODE, but SQL Developer raises an error saying that EXISTSNODE must be declared.

Yes, use EXISTSNODE:
SQL> with yourdata as (select to_clob('<wbi:event xmlns:wbi="http://foo" xmlns:xsi="http://x" xmlns:xsd="http://d">
2 <wbi:appData>
3 <wbi:content wbi:name="1st_status">
4 <wbi:value xsi:type="xsd:string">Success</wbi:value>
5 </wbi:content>
6 <wbi:content wbi:name="2nd_status">
7 <wbi:value xsi:type="xsd:string">Failure</wbi:value>
8 </wbi:content>
9 </wbi:appData>
10 </wbi:event>') c from dual)
11 select existsnode(xmltype(c), '/wbi:event/wbi:appData/wbi:content','xmlns:wbi="http://foo"') is_exist
12 from yourdata t
13 /
IS_EXIST
----------
1
i.e.
existsnode(xmltype(c), '/wbi:event/wbi:appData/wbi:content','xmlns:wbi="http://foo"')
1 = exists
0 = does not exist.
Note that in my sample I had two matching nodes (as I didn't filter on wbi:name). You can filter in the XPath, of course, e.g.:
/wbi:event/wbi:appData/wbi:content[@wbi:name="1st_status"]
to limit matches to the "1st_status" one.
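Applied to your own table, the filtered check could look like the sketch below; the table and column names (my_table, xml_col) and the namespace URI are assumptions about your schema:
-- a sketch: my_table, xml_col and the namespace URI are placeholders
select count(*)
from my_table t
where existsnode(xmltype(t.xml_col),
                 '/wbi:appData/wbi:content[@wbi:name="1st_status"]/wbi:value',
                 'xmlns:wbi="http://your.namespace"') = 1;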

select count(*)
from clobtab
where existsNode(xmltype.createxml(clobcol),
                 '/wbi:appData/wbi:content/wbi:value',
                 'xmlns:wbi="http://pat.namespace.com"') = 1;
If it returns more than 0 then it exists, otherwise not.
So your trigger would be:
CREATE OR REPLACE TRIGGER Tab_a
BEFORE INSERT ON your_table   -- an ON clause is required; the table name here is a placeholder
FOR EACH ROW
declare
  xml_a xmltype;
begin
  xml_a := xmltype(:new.value);
  if existsNode(xml_a, '/wbi:appData/wbi:content/wbi:value',
                'xmlns:wbi="http://pat.namespace.com"') = 1
  then
    ----insert ....
    null;
  end if;
end;

Actually, you can use Oracle's INSTR function, which is fast, e.g.:
where instr(field, 'wbi:value') > 0
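Keep in mind that INSTR does a plain substring match (it would also match the text inside a comment or an attribute value), but as a quick existence check against the clobtab/clobcol names from the answer above, a sketch would be:
select count(*)
from clobtab
where instr(clobcol, 'wbi:value') > 0;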

You can use XMLEXISTS:
SELECT DESCRIPTOR_XML FROM TABLE_WITH_AN_XMLTYPE_COLUMN
WHERE
XMLEXISTS('//functions[function/arg[@name="class.name" and not(starts-with(., "com.example.apps.YouShantSeeMeClass"))]]'
PASSING BY VALUE DESCRIPTOR_XML);
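For the original wbi:value question, the same idea could look roughly like this (a sketch reusing the clobtab/clobcol names from above; the namespace URI is an assumption):
select count(*)
from clobtab
where xmlexists(
        'declare namespace wbi="http://your.namespace"; /wbi:appData/wbi:content/wbi:value'
        passing xmltype(clobcol));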

Related

MariaDB's `JSON_LENGTH` - alternative for PostgreSQL

I need to convert some queries written for MariaDB to PostgreSQL syntax. Unfortunately they use MariaDB's JSON_LENGTH function and I'm struggling to find an alternative in PostgreSQL.
For clarification:
JSON_LENGTH counts the number of 'entries' on the root level of a JSON object/array in MariaDB:
SELECT
JSON_LENGTH('{"test": 123}') as test1, -- 1
JSON_LENGTH('"123"') as test2, -- 1
JSON_LENGTH('123') as test3, -- 1
JSON_LENGTH('[]') as test4, -- 0
JSON_LENGTH('[123]') as test5, -- 1
JSON_LENGTH('[123, 456]') as test6, -- 2
JSON_LENGTH('[123, {"test": 123}]') as test7; -- 2
Partial solutions in PostgreSQL I came up with:
select
json_array_length('[]'::json) as test1, -- 0
json_array_length('["a"]'::json) as test2, -- 1
length(json_object_keys('{"a": "b"}'::json)) as test3; -- 1
json_array_length is not allowed for JSON objects
json_object_keys is not allowed for JSON arrays
Unfortunately I can't manage to combine the two methods I figured out for PostgreSQL:
I tried to use CASE WHEN:
select
case
when json_typeof('["a"]'::json) = 'array' then json_array_length('["a"]'::json)
when json_typeof('["a"]'::json) = 'object' then length(json_object_keys('["a"]'::json))
else 0
end;
Error:
[0A000] ERROR: set-returning functions are not allowed in CASE
-> json_object_keys is the bad guy here
I tried IF ELSE:
select
IF ('array' = json_typeof('["a"]'::json)) THEN json_array_length('["a"]'::json)
ELEIF (json_typeof('["a"]'::json) = 'object') THEN length(json_object_keys('["a"]'::json))
ELSE 0;
Error:
[42601] ERROR: syntax error at or near "THEN"
I can imagine I have an error in my IF ELSE statement, but I'm unable to figure it out.
Is there any way to replicate MariaDB's JSON_LENGTH behavior with PostgreSQL?
You need to count the number of rows returned by jsonb_object_keys():
This:
with data (input) as (
values
('{"test": 123}'::jsonb),
('"123"'),
('123'),
('[]'),
('[123]'),
('[123, 456]'),
('[123, {"test": 123}]')
)
select input,
case jsonb_typeof(input)
when 'array' then jsonb_array_length(input)
when 'object' then (select count(*) from jsonb_object_keys(input))
when 'string' then 1
else 0
end
from data
returns:
input | case
---------------------+-----
{"test": 123} | 1
"123" | 1
123 | 0
[] | 0
[123] | 1
[123, 456] | 2
[123, {"test": 123}] | 2
Of course this can easily be put into a function:
create or replace function my_json_length(p_input jsonb)
returns int
as
$$
select case jsonb_typeof(p_input)
when 'array' then jsonb_array_length(p_input)
when 'object' then (select count(*) from jsonb_object_keys(p_input))
when 'string' then 1
else 0
end;
$$
language sql
immutable;
I would not name it json_length() to avoid any clashes in case Postgres decides to implement such a function in the future (although I am not aware of such a function in the SQL standard).
Note that jsonb is the recommended data type to store JSON values in Postgres.
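Once created, the function can be called just like the MariaDB original, for example:
select my_json_length('{"test": 123}'::jsonb) as test1,  -- 1
       my_json_length('[123, 456]'::jsonb) as test6;     -- 2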

How to sort by dynamic column in oracle?

I have some complex oracle query, but I will try to make it simple. I have something like this:
SELECT TBL1.*, TBL2.*
FROM TABLE_1 TBL1
LEFT JOIN (
SELECT *
FROM
(
SELECT TBL2.VERSION_ID, TBL2.CONFIG_ID, TBL2.VALUE
FROM TABLE_2 TBL2
)
PIVOT
(
MAX(VALUE) FOR CONFIG_ID IN (:metadataClassConfigs)
)
) TBL2 ON TBL1.VERSION_ID = TBL2.VERSION_ID
ORDER BY
CASE
WHEN :orderByCustomClass IS NOT NULL THEN
CASE
WHEN :orderByCustomClass = 1 THEN TBL2."1"
WHEN :orderByCustomClass = 21 THEN TBL2."21"
WHEN :orderByCustomClass = 22 THEN TBL2."22"
WHEN :orderByCustomClass = 23 THEN TBL2."23"
WHEN :orderByCustomClass = 24 THEN TBL2."24"
WHEN :orderByCustomClass = 25 THEN TBL2."25"
WHEN :orderByCustomClass = 26 THEN TBL2."26"
WHEN :orderByCustomClass = 27 THEN TBL2."27"
WHEN :orderByCustomClass = 28 THEN TBL2."28"
WHEN :orderByCustomClass = 29 THEN TBL2."29"
WHEN :orderByCustomClass = 30 THEN TBL2."30"
WHEN :orderByCustomClass = 31 THEN TBL2."31"
WHEN :orderByCustomClass = 32 THEN TBL2."32"
WHEN :orderByCustomClass = 34 THEN TBL2."34"
WHEN :orderByCustomClass = 35 THEN TBL2."35"
WHEN :orderByCustomClass = 36 THEN TBL2."36"
WHEN :orderByCustomClass = 41 THEN TBL2."41"
WHEN :orderByCustomClass = 42 THEN TBL2."42"
END
END;
and this is working fine. The input parameters are: :metadataClassConfigs is the list of numbers (1,21,22,23,24,25,26,27,28,29,30,31,32,34,35,36,41,42) and :orderByCustomClass can be any of these numbers.
I have many more numbers than this list, more than 1000, so I am wondering how I can order by a dynamic column, something like:
WHEN :orderByCustomClass IS NOT NULL THEN TBL2."{:orderByCustomClass}"
?
There are multiple ways to do dynamic SQL in Oracle PL/SQL. I'm assuming that you are talking about PL/SQL, because in other kinds of clients (like python-oracle, JDBC) the only way to send a query is to create a cursor from a string, so you are always forced to build the query more or less dynamically anyway...
Native Dynamic SQL - execute immediate
Good for simple cases (note how the result is retrieved - this works best for one row; it's more complicated for arrays).
The query is a string, so you can "build" it. If you need to, you can use bind parameters with the USING clause (each parameter in the query must have a colon as a prefix); a short sketch of this follows the example below. Be aware that binds are matched by their position in the query, not by name.
declare
type t_rec is record (
<describe returned columns>
);
type t_result_array is table of t_rec index by pls_integer;
v_result_array t_result_array;
v_sort_column varchar2(4000);
begin
-- do some logic to determine name of column for order by:
v_sort_column := <some_logic determining column name for sorting>;
-- if logic is based on raw user input, then you should sanitize it:
v_sort_column := DBMS_ASSERT.QUALIFIED_SQL_NAME(v_sort_column);
-- build query based on v_sort_column value
execute immediate 'select ... from ...
order by '||v_sort_column
bulk collect into v_result_array;
<do something with result stored in v_result_array>
end;
/
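As mentioned above, here is a minimal sketch of a positional bind with the USING clause (the filter on version_id and the bound value 42 are just illustrative):
declare
   v_count number;
begin
   -- the bind :b1 is matched by position, not by its name
   execute immediate
      'select count(*) from table_1 where version_id = :b1'
      into v_count
      using 42;
   dbms_output.put_line(v_count);
end;
/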
Native Dynamic SQL - OPEN FOR
Very similar to execute immediate but based on cursor variable and OPEN FOR statement. To accomplish it you have to do 3 steps: open cursor, fetch rows and close cursor.
declare
type t_rec is record (
<describe returned columns>
);
type t_result_array is table of t_rec index by pls_integer;
v_result_array t_result_array;
v_sort_column varchar2(4000);
-- dynamic OPEN FOR requires a weak (untyped) cursor variable
v_cursor sys_refcursor;
begin
-- do some logic to determine name of column for order by:
v_sort_column := <some_logic determining column name for sorting>;
-- if logic is based on raw user input, then you should sanitize it:
v_sort_column := DBMS_ASSERT.QUALIFIED_SQL_NAME(v_sort_column);
open v_cursor for 'select ... from ...
order by '||v_sort_column;
fetch v_cursor bulk collect into v_result_array;
close v_cursor;
<do something with result stored in v_result_array>
end;
/
Dynamic SQL - DBMS_SQL package
This is the most flexible way of doing it - you can even conditionally change the selected columns or dynamically inspect what kind of row is in the result (number of columns, data types etc.). Furthermore, it is also one of the best in terms of performance.
I'm just putting information about this option here so you can see for yourself if you need these features and capabilities.
There are many more steps here and they are more complex: open cursor, parse, bind every parameter (optional), define columns, execute, fetch, access data - so I will not post any example. It's probably overkill for your purposes.

check if two values are present in a table with plsql in oracle sql

I'm trying to create a procedure, that checks if two values are present in a table.
The logic is as follows: create a function called get_authority. The function takes two parameters (found in the account_owner table), cust_id and acc_id, and returns 1 (one) if the customer has the right to make withdrawals from the account, or 0 (zero) if the customer doesn't have any authority over the account. I'm writing PL/SQL and using Oracle Live SQL. I can't figure out how to handle the scenario where a customer has two accounts!
account_owner sample data is shown in the answer below. Here is my function so far:
create or replace function get_authority(
p_cust_id in account_owner.cust_id%type,
p_acc_id in account_owner.acc_id%type
)
return varchar2
as
v_return number(1);
v_acc_id account_owner.acc_id%type;
v_cust_id account_owner.cust_id%type;
begin
for v_ in (select account_owner.cust_id,
account_owner.acc_id
from account_owner
where p_cust_id = cust_id)
LOOP
if p_cust_id = v_cust_id and p_acc_id = v_acc_id then
v_return := v_return + 1;
else
v_return := v_return + 0;
end if;
return v_return;
END LOOP;
end;
/
When I check for the cust_id I get the return 0 - but it should be 1??
select get_authority('650707-1111',123) from dual;
return:
GET_AUTHORITY('650707-1111',123)
0
What do I do wrong?
You got 0? How come; should be NULL.
v_return number(1);
so it is initially NULL. Later on, you're adding "something" to it, but - adding anything to NULL will be NULL:
SQL> select 25 + null as result from dual;
RESULT
----------
SQL>
Therefore, set its default value to 0 (zero):
v_return number(1) := 0;
Also, you declared two additional variables:
v_acc_id account_owner.acc_id%type;
v_cust_id account_owner.cust_id%type;
Then you compare them to values passed as parameters; as they are NULL, ELSE is executed.
Furthermore, there's a loop, but you don't really use it. If you expected that this:
for v_ in (select account_owner.cust_id, ...
populates v_cust_id - it does not. Cursor column values are referenced as v_.cust_id (note the dot in between).
Also, if there's only one row per p_cust_id and p_acc_id, why use a cursor FOR loop at all? To avoid no_data_found or too_many_rows? I wouldn't do that; yes, it avoids such "errors", but it is confusing. You'd rather handle the exceptions properly.
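If you did want to keep the loop, a corrected sketch would look roughly like this (note v_.acc_id and the RETURN placed after the loop; this is just to illustrate the syntax - the MAX-based version below is simpler):
create or replace function get_authority
  (p_cust_id in account_owner.cust_id%type,
   p_acc_id  in account_owner.acc_id%type)
  return number
is
  v_return number := 0;
begin
  for v_ in (select acc_id from account_owner where cust_id = p_cust_id)
  loop
    if v_.acc_id = p_acc_id then
      v_return := 1;   -- a matching account exists, so authority exists
    end if;
  end loop;
  return v_return;
end;
/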
Here's what you might have done:
Sample data:
SQL> select * From account_owner;
ACCOW_ID CUST_ID ACC_ID
---------- ----------- ----------
1 650707-1111 123
2 560126-1148 123
3 650707-1111 5899
The function: if there is more than one row per parameter combination, the MAX function makes sure that too_many_rows is avoided (as that bothers you). You don't really care what it returns - what matters is that the select returns anything at all, which proves that authority over that account exists.
SQL> create or replace function get_authority
2 (p_cust_id in account_owner.cust_id%type,
3 p_acc_id in account_owner.acc_id%type
4 )
5 return number
6 is
7 l_accow_id account_owner.accow_id%type;
8 begin
9 select max(o.accow_id)
10 into l_accow_id
11 from account_owner o
12 where o.cust_id = p_cust_id
13 and o.acc_id = p_acc_id;
14
15 return case when l_accow_id is not null then 1
16 else 0
17 end;
18 end;
19 /
Function created.
Testing:
SQL> select get_authority('650707-1111', 123) res_1,
2 get_authority('650707-1111', 5899) res_2
3 from dual;
RES_1 RES_2
---------- ----------
1 1
SQL>

Script to generate data, SQL oracle 10g

I'm trying to build a script that inserts random data into my table.
My current script looks like this:
INSERT INTO Utilisateurs (id_utilisateur, Uti_nom, Uti_prenom, Uti_role, Uti_mdp, Uti_Statut)
SELECT
-- here to input the id (number that increment each time)
dbms_random.string('A', trunc(dbms_random.value(5, 50))), -- data for uti_nom
dbms_random.string('A', trunc(dbms_random.value(5, 100))), -- data for uti_prenom
-- randomly get 'Administrateur' or 'Utilisateur'
dbms_random.string('X', 10), -- data for uti_mdp
trunc(dbms_random.value(0, 1)) -- data for uti_status
FROM dual
CONNECT BY LEVEL < 100;
So could someone help me with the two commented lines...
Here is a sample; what I really need is the incrementing ID and Uti_role (Administrateur/Utilisateur). The other fields can be generated and can look like "dsjhadakj":
id_utilisateur  Uti_nom  Uti_prenom  Uti_role        Uti_mdp    Uti_Statut
--------------  -------  ----------  --------------  ---------  ----------
1               Elche    Marco       Administrateur  Haj432Hgn  1
2               Babo     Jules       Utilisateur     Haj432Hgn  0
3               Ghale    Alex        Administrateur  Haj432Hgn  1
For the self-incrementing ID you can use LEVEL.
For Uti_role, something like this:
CASE WHEN dbms_random.value(0, 1) > 0.5 THEN 'Administrateur' ELSE 'Utilisateur' END
Here's SQL Fiddle for just the SELECT part.
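Putting both suggestions into your INSERT, a sketch could look like this (using round() for Uti_Statut is an added assumption - trunc(dbms_random.value(0, 1)) always yields 0):
INSERT INTO Utilisateurs (id_utilisateur, Uti_nom, Uti_prenom, Uti_role, Uti_mdp, Uti_Statut)
SELECT
    LEVEL,                                                      -- incrementing id: 1, 2, 3, ...
    dbms_random.string('A', trunc(dbms_random.value(5, 50))),   -- data for Uti_nom
    dbms_random.string('A', trunc(dbms_random.value(5, 100))),  -- data for Uti_prenom
    CASE WHEN dbms_random.value(0, 1) > 0.5
         THEN 'Administrateur' ELSE 'Utilisateur' END,          -- random role
    dbms_random.string('X', 10),                                -- data for Uti_mdp
    round(dbms_random.value(0, 1))                              -- 0 or 1 for Uti_Statut
FROM dual
CONNECT BY LEVEL < 100;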

Turning an XML into a select

I'm trying to turn this XML string into a select
I have @Schedule XML = '<days><day enabled="0">0</day><day enabled="1">1</day><day enabled="1">2</day><day enabled="1">3</day><day enabled="1">4</day><day enabled="1">5</day><day enabled="0">6</day></days>'
What I'm trying to see at the end is..
DayNumber DayEnabled
0 0
1 1
2 1
3 1
4 1
5 1
6 0
I've tried a few ways; so far nothing is working right. I am handling this as an XML data type, and I'd prefer not to use a function as this will just be in a stored procedure.
Update: Maybe I didn't explain it correctly..
I have a stored procedure, XML is one of the parameters passed to it, I need to send it to a table to be inserted, so I'm trying to do the following..
INSERT INTO tblDays (DayNumber, DayEnabled)
SELECT @XMLParsedOrTempTableWithResults
I just can't figure out how to parse the parameter
DECLARE @myXML as XML = '<days><day enabled="0">0</day><day enabled="1">1</day><day enabled="1">2</day><day enabled="1">3</day><day enabled="1">4</day><day enabled="1">5</day><day enabled="0">6</day></days>'
DECLARE @XMLDataTable table
(
DayNumber int
,DayEnabled int
)
INSERT INTO @XMLDataTable
SELECT d.value('text()[1]','int') AS [DayNumber]
,d.value('(@enabled)[1]','int') AS [DayEnabled]
FROM @myXML.nodes('/days/*') ds(d)
SELECT * FROM @XMLDataTable
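Inside the stored procedure, the same nodes()/value() query can feed tblDays directly from the parameter (a sketch, assuming the parameter is declared as @Schedule XML):
INSERT INTO tblDays (DayNumber, DayEnabled)
SELECT d.value('text()[1]', 'int')     AS DayNumber
      ,d.value('(@enabled)[1]', 'int') AS DayEnabled
FROM @Schedule.nodes('/days/day') ds(d)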
Refer:
http://beyondrelational.com/modules/2/blogs/28/posts/10279/xquery-labs-a-collection-of-xquery-sample-scripts.aspx
The XMLTABLE function is how most XML-enabled DBMSes shred an XML document into a relational result set.
This example uses DB2's syntax for XMLTABLE and an input parameter passed into a stored procedure:
INSERT INTO tblDays (DayNumber, DayEnabled)
SELECT X.* FROM
XMLTABLE ('$d/days/day' PASSING XMLPARSE( DOCUMENT SPinputParm ) as "d"
COLUMNS
dayNumber INTEGER PATH '.',
dayEnabled SMALLINT PATH '@enabled'
) AS X
;