Pivoting rows into columns dynamically in Oracle - sql

I have the following Oracle 10g table called _kv:
select * from _kv
ID K V
---- ----- -----
1 name Bob
1 age 30
1 gender male
2 name Susan
2 status married
I'd like to turn my keys into columns using plain SQL (not PL/SQL) so that the resulting table would look something like this:
ID NAME AGE GENDER STATUS
---- ----- ----- ------ --------
1 Bob 30 male
2 Susan married
The query should have as many columns as there are unique Ks in the table (there aren't that many).
There's no way to know what columns may exist before running the query.
I'm trying to avoid running an initial query to programmatically build the final query.
The blank cells may be nulls or empty strings, doesn't really matter.
I'm using Oracle 10g, but an 11g solution would also be ok.
There are plenty of examples out there for when you know what your pivoted columns will be called, but I just can't find a generic pivoting solution for Oracle.
Thanks!

Oracle 11g provides a PIVOT operation that does what you want.
Oracle 11g solution
select * from
(select id, k, v from _kv)
pivot (max(v) for k in ('name', 'age', 'gender', 'status'));
(Note: I do not have a copy of 11g to test this on so I have not verified its functionality)
I obtained this solution from: http://orafaq.com/wiki/PIVOT
EDIT -- pivot xml option (also Oracle 11g)
Apparently there is also a pivot xml option for when you do not know all the possible column headings that you may need. (see the XML TYPE section near the bottom of the page located at http://www.oracle.com/technetwork/articles/sql/11g-pivot-097235.html)
select * from
(select id, k, v from _kv)
pivot xml (max(v)
for k in (any) )
(Note: As before I do not have a copy of 11g to test this on so I have not verified its functionality)
Edit2: Changed v in the pivot and pivot xml statements to max(v) since it is supposed to be aggregated as mentioned in one of the comments. I also added the in clause which is not optional for pivot. Of course, having to specify the values in the in clause defeats the goal of having a completely dynamic pivot/crosstab query as was the desire of this question's poster.

To deal with situations where there is a possibility of multiple values (v in your example), I use PIVOT with LISTAGG:
SELECT * FROM
(
SELECT id, k, v
FROM _kv
)
PIVOT
(
LISTAGG(v ,',')
WITHIN GROUP (ORDER BY k)
FOR k IN ('name', 'age','gender','status')
)
ORDER BY id;
Since you want dynamic values, use dynamic SQL: first run a select against the table data to determine the values, then pass them into the pivot statement.
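For completeness, here is a minimal sketch of that two-step approach against the _kv table from the question (LISTAGG needs 11gR2; the literal IN list shown in step 2 is just what step 1 returns for the sample data, and is what you would splice in via dynamic SQL):
-- Step 1: build the IN list from the distinct keys
SELECT LISTAGG('''' || k || '''', ',') WITHIN GROUP (ORDER BY k) AS in_list
FROM (SELECT DISTINCT k FROM _kv);
-- returns: 'age','gender','name','status'

-- Step 2: splice that list into the pivot and run it (via EXECUTE IMMEDIATE or a ref cursor)
SELECT *
FROM (SELECT id, k, v FROM _kv)
PIVOT (MAX(v) FOR k IN ('age','gender','name','status'))
ORDER BY id;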

I happened to have a pivot task. The below works for me, as tested just now on 11g:
select * from
(
select ID, COUNTRY_NAME, TOTAL_COUNT from ONE_TABLE
)
pivot(
SUM(TOTAL_COUNT) for COUNTRY_NAME in (
'Canada', 'USA', 'Mexico'
)
);
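One detail to be aware of: without aliases the generated columns are literally named 'Canada', 'USA' and 'Mexico' (quotes included), so referencing them later requires double-quoted identifiers. Aliasing them in the IN list avoids that; a variation of the same query, assuming the same ONE_TABLE:
select * from
(
select ID, COUNTRY_NAME, TOTAL_COUNT from ONE_TABLE
)
pivot(
SUM(TOTAL_COUNT) for COUNTRY_NAME in (
'Canada' as CANADA, 'USA' as USA, 'Mexico' as MEXICO
)
);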

First of all, a dynamic pivot using pivot xml returns XML that still needs to be parsed. There is another way of doing this: store the column names in a variable and pass them into dynamic SQL, as below.
Consider we have a table EMPLOYEE with columns YR and QTY.
If we need to show the values in the YR column as column names, with the values in those columns taken from QTY, then we can use the code below.
declare
sqlqry clob;
cols clob;
begin
select listagg('''' || YR || ''' as "' || YR || '"', ',') within group (order by YR)
into cols
from (select distinct YR from EMPLOYEE);
sqlqry :=
'
select * from
(
select *
from EMPLOYEE
)
pivot
(
MIN(QTY) for YR in (' || cols || ')
)';
execute immediate sqlqry;
end;
/
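One caveat: EXECUTE IMMEDIATE of a SELECT with no INTO clause only parses and executes the statement; it does not return any rows to the caller. A sketch of one way to actually get the pivoted rows back, assuming the same EMPLOYEE table with YR and QTY (DBMS_SQL.RETURN_RESULT needs 12c or later):
declare
cols   clob;
sqlqry clob;
rc     sys_refcursor;
begin
select listagg('''' || YR || ''' as "' || YR || '"', ',') within group (order by YR)
into cols
from (select distinct YR from EMPLOYEE);

sqlqry := 'select * from EMPLOYEE pivot (MIN(QTY) for YR in (' || cols || '))';

open rc for sqlqry;           -- the pivoted rows can now be fetched from rc
dbms_sql.return_result(rc);   -- 12c+ only; on 11g, fetch from rc in the calling code instead
end;
/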

Related

Snowflake LISTAGG Encapsulate Values

I was wondering if anyone has figured out how to encapsulate values in Snowflake's LISTAGG function.
I have a table that looks something like this
ID   NAME
---- -----
1    PC
1    PC,A
2    ER
The following query:
SELECT
ID,
LISTAGG(DISTINCT NAME, ',') AS LIST
FROM TEST_TABLE
GROUP BY ID
will return this table
ID   LIST
---- --------
1    PC,PC,A
2    ER
My expected result would be:
ID   LIST
---- ----------
1    PC,"PC,A"
2    ER
Does anyone know how to get the expected result?
I thought about testing whether the value has a comma and then using a CASE WHEN to switch the logic based on that.
We can aggregate using a CASE expression that detects commas and wraps such values in double quotes.
SELECT
ID,
LISTAGG(DISTINCT CASE WHEN NAME LIKE '%,%'
THEN CONCAT('"', NAME, '"')
ELSE NAME END, ',') AS LIST
FROM TEST_TABLE
GROUP BY ID;
If I had to use listagg, I would pick a different delimiter, like so:
select listagg(distinct name,'|')
from t;
Personally, I find array_agg easier to work with in cases like yours
select array_agg(distinct name)
from t;
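Applied to the table from the question (a sketch assuming the same TEST_TABLE), grouping by ID keeps each name as its own array element, so embedded commas are no longer ambiguous:
SELECT
ID,
ARRAY_AGG(DISTINCT NAME) AS LIST
FROM TEST_TABLE
GROUP BY ID;
-- ID 1 -> [ "PC", "PC,A" ]   ID 2 -> [ "ER" ]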

With XMLDIFF, how to compare only the fields that my XML elements have in common?

Introduction:
I have a query using a pipelined function. I won't change the names of the returned columns, but I will add other columns.
I want to compare the result of the old query with the new query (syntactically always the same, select * from mypipelinefunction, but I have changed the pipelined function).
I have used "select *" instead of listing the column names because there are a lot of them.
Code:
The code example is simplified to focus on the problem addressed in the title (no pipelined function; only two "identical" queries are compared, the second having one more column than the first).
SELECT
XMLDIFF (
XMLTYPE.createXML (
DBMS_XMLGEN.getxml ('select 1 one, 2 two from dual')),
XMLTYPE.createXML (
DBMS_XMLGEN.getxml ('select 1 one from dual')))
from dual;
I want XMLDIFF to say that there is no difference, because the only columns I care about are the columns the two results have in common.
In short, I would like to have this result:
<xd:xdiff xsi:schemaLocation="http://xmlns.oracle.com/xdb/xdiff.xsd http://xmlns.oracle.com/xdb/xdiff.xsd" xmlns:xd="http://xmlns.oracle.com/xdb/xdiff.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
</xd:xdiff>
instead of this result
<xd:xdiff xsi:schemaLocation="http://xmlns.oracle.com/xdb/xdiff.xsd http://xmlns.oracle.com/xdb/xdiff.xsd" xmlns:xd="http://xmlns.oracle.com/xdb/xdiff.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><xd:delete-node xd:node-type="element" xd:xpath="/ROWSET[1]/ROW[1]/TWO[1]"/></xd:xdiff>
Is it possible to force XMLDIFF to compare only the columns that are in common?
Another way to fix this problem would be a shortcut in TOAD that transforms select * from t into select first_column, ..., last_column from t, and it should still work when t is a pipelined function.
If you only care about certain columns, then wrap your query in an outer query that outputs only the columns you care about:
SELECT XMLDIFF (
XMLTYPE.createXML (
DBMS_XMLGEN.getxml (
'SELECT one FROM (select 1 one, 2 two from dual)'
)
),
XMLTYPE.createXML (
DBMS_XMLGEN.getxml (
'SELECT one FROM (select 1 one from dual)'
)
)
) AS diff
FROM DUAL;
Which outputs:
DIFF
<xd:xdiff xsi:schemaLocation="http://xmlns.oracle.com/xdb/xdiff.xsd http://xmlns.oracle.com/xdb/xdiff.xsd" xmlns:xd="http://xmlns.oracle.com/xdb/xdiff.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><?oracle-xmldiff operations-in-docorder="true" output-model="snapshot" diff-algorithm="global"?></xd:xdiff>
db<>fiddle here

Can't understand the below query, can anyone explain?

select table_name,
to_number(
extractvalue(
xmltype(
dbms_xmlgen.getxml(
'select count(*) c from '||table_name
))
,'/ROWSET/ROW/C')
) count
from user_tables
order by table_name;
I know it gives the total row count of each table, but I want to know how it works.
Examining the query starting from the innermost part is the best way to understand it:
'select count(*) c from ' || table_name
creates a string containing a query that selects the number of records from the table referred to by table_name, so if table_name contains xyz the query will be select count(*) c from xyz.
xmltype(dbms_xmlgen.getxml(<query>))
executes the dynamically generated query, producing an XML result.
to_number(extractvalue(<xml>, '/ROWSET/ROW/C'))
fetches a specific value from the XML generated before, following a specific path. We need to assume the XML looks like <ROWSET><ROW><C>value</C></ROW></ROWSET>. The extracted value, still a string, is then converted to a number.
select table_name, <number> count from user_tables order by table_name
is what finally remains...
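To see the intermediate XML yourself, you can run the inner call on its own; for example (the exact whitespace and XML prolog may differ):
select dbms_xmlgen.getxml('select count(*) c from dual') as xml from dual;
-- <ROWSET>
--  <ROW>
--   <C>1</C>
--  </ROW>
-- </ROWSET>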
Break it down:
'select count(*) c from '||table_name gives you a string containing a query against a specific table (from user_tables);
dbms_xmlgen.getxml('select count(*) c from '||table_name) uses the dbms_xmlgen package to run that dynamic query string and returns the result as an XML document, but as a CLOB;
xmltype(dbms_xmlgen.getxml('select count(*) c from '||table_name)) converts that CLOB to actual XML;
extractvalue(..., '/ROWSET/ROW/C') extracts the C node value from that XML (there is only one per document, and there is one document per table), as a string;
to_number(...) just converts that string to a number.
db<>fiddle with three dummy tables, showing the intermediate steps.
However, the version you have seems to have originated with Laurent Schneider in 2007, and things have moved on a bit since then. The extractvalue() function is deprecated so you should use XMLQuery instead, and you can skip a step by using getxmltype() instead of xmltype(getxml()):
select table_name,
to_number(
xmlquery(
'/ROWSET/ROW/C/text()'
passing dbms_xmlgen.getxmltype('select count(*) c from '||table_name)
returning content
)
) count
from user_tables
order by table_name;
Or (as #Padders mentioned) you could use XMLTable, with a CTE or inline view to provide the XML; which perhaps makes this example a bit more obscure, but is useful if you have more than one value to extract:
select t.table_name, x.count
from (
select table_name,
dbms_xmlgen.getxmltype('select count(*) c from '||table_name) as xml
from user_tables
) t
cross apply xmltable (
'/ROWSET/ROW'
passing t.xml
columns count number path 'C'
) x
order by table_name;
The principle is the same though, and I've included those versions in an expanded db<>fiddle.
(Incidentally, I'm not advocating using a keyword like count as a column alias - it's better to avoid those; I'm just sticking with the alias from your original query.)

Concat all columns of each row into one column

Hi, I need to concatenate all the columns of my table into one value per row.
I have this query: select * from table1; the table contains 400 fields,
so I cannot write select column1 ||','||column2||','||..... from table1 by hand.
Can someone help me fix this, starting from select * from table1, to concatenate everything?
Thank you.
In Oracle (and similarly in other DBMSs) you could use the data dictionary views and do this in two steps:
Assuming you want to combine all the columns into 1 column for X rows...
STEP 1:
SELECT LISTAGG(column_Name, '|| Chr(44)||') --this char(44) adds a comma
within group (order by column_ID) as Fields
--Order by column_Id ensures they are in the same order as defined in db.
FROM all_tab_Cols
WHERE table_name = 'YOURTABLE'
and owner = 'YOUROWNER'
--Perhaps exclude system columns
and Virtual_Column = 'NO'
STEP 2:
Copy the results into a new SQL statement and execute.
The result would look something like Field1|| Chr(44)||Field2|| Chr(44)||Field3:
SELECT <results>
FROM YOURTABLE;
which would result in a comma-separated list of values in one column for every row of YOURTABLE.
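As a concrete illustration, for a hypothetical table T with columns A, B and C, step 1 would generate the expression below, and step 2 is just that expression pasted into a query:
-- step 1 output for T(A, B, C):  A|| Chr(44)||B|| Chr(44)||C
SELECT A|| Chr(44)||B|| Chr(44)||C
FROM T;
-- a row (1, 'foo', 'bar') comes back as:  1,foo,bar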
If the length of all the column names (along with the commas and || operators) would exceed the 4000-byte limit of LISTAGG, we can build a CLOB instead through the use of XML functions.
Replace step 1 with:
SELECT RTRIM(XMLAGG(XMLELEMENT(Column_ID,Column_Name,'|| Chr(44)||').extract('//text()') order by Column_ID).GetClobVal(),'|| Chr(44)||') fields
FROM all_tab_Cols
WHERE table_name = 'YOURTABLENAME'
and owner = 'YOUROWNER'
--Perhaps exclude system columns
and Virtual_Column = 'NO';
Syntax for the above is attributed to this Oracle thread, but updated for your needs.

Dynamically build select statement in Oracle 12c

I have posted a similar question before, but the solution to this one seems like it will be completely different, so I hope this does not qualify as a repost.
Req:
I have a table named SETUPS with the following 2 columns:
ID INTEGER NOT NULL
RPT_SCRIPT CLOB NOT NULL
RPT_SCRIPT holds a select statement in each record. Below is the statement in the CLOB column WHERE ID = 1:
SELECT ID,
Title,
Desc,
Type,
LVL_CNT,
TYPE_10 VALUE_10,
TYPE_9 VALUE_9,
TYPE_8 VALUE_8,
TYPE_7 VALUE_7,
TYPE_6 VALUE_6,
TYPE_5 VALUE_5,
TYPE_4 VALUE_4,
TYPE_3 VALUE_3,
TYPE_2 VALUE_2,
TYPE_1 VALUE_1
FROM SCHEMA.TABLE
WHERE ID = 1;
Currently I am writing these select statements manually for all records.
SETUPS.ID is mapped to another master table META.ID in order to build the select statement.
The column names with the pattern TYPE_%, i.e. TYPE_1, come from the META table; there are a total of 20 columns in the table with this pattern, but in this example I've used only 10 because META.LVL_CNT = 10. Similarly, if META.LVL_CNT = 5 then only select columns TYPE_1, TYPE_2, TYPE_3, TYPE_4, TYPE_5.
The column aliases, i.e. VALUE_1, are values which come from the corresponding column where META.ID = 1 (as in this example).
ID will always be provided, so it can be used to query table META.
EDIT
The column aliases which come from the META table will never follow a pattern as shown in my example, but with LVL_CNT we will know the number of columns at runtime. I tried #Asfakul's logic and built dynamic SQL using the column names retrieved dynamically, but when using EXECUTE IMMEDIATE ... INTO I realized I don't know how many columns will be retrieved and therefore can't generate the alias names with this method.
I need an approach to automatically build this select statement using the above information. How can I achieve this? Please provide examples.
You can use this as the basis
declare
  upper_level number;
  t_sql       varchar2(1000);
  l_sql       varchar2(1000);
begin
  select m.lvl_cnt
    into upper_level
    from SETUPS s, META m
   where s.id = m.id
     and s.id = 1;   -- the ID you are building the statement for

  l_sql := 'SELECT ID,
  Title,
  Desc,
  Type,
  LVL_CNT,';

  for lvl in reverse 1..upper_level
  loop
    t_sql := t_sql || 'TYPE_' || lvl || ',';
  end loop;

  l_sql := l_sql || t_sql;
  l_sql := rtrim(l_sql, ',');
  l_sql := l_sql || ' FROM SCHEMA.TABLE WHERE ID = 1';
end;
/
I recommend this approach: if you already know how to build dynamic SQL, use this concept to build your query:
SELECT 'TYPE_' || LEVEL
FROM DUAL
CONNECT BY LEVEL <= 10 --10 could be a variable
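Building on that, a sketch of how the generated names could be collapsed into a single comma-separated column list, ready to splice into the dynamic SELECT (assuming the 10 is looked up from META.LVL_CNT for the given ID):
SELECT LISTAGG('TYPE_' || LEVEL, ', ') WITHIN GROUP (ORDER BY LEVEL DESC) AS col_list
FROM DUAL
CONNECT BY LEVEL <= 10;  -- 10 could come from META.LVL_CNT
-- returns: TYPE_10, TYPE_9, ... , TYPE_1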