Unable to query multiple tables via XML: Error occurred in XML processing - sql

I would like to get the column_name and table_name for a list of tables from all_tab_columns (which is not a problem so far), and then, for each given column, go to the original table/column and see which value has the highest occurrence.
With the query below I get the desired value for 1 example of column in 1 table:
select col1
from (SELECT col1, rank () over (order by count(*) desc) as rnk
from T1
Group by col1
)
where rnk = 1
now I want something like this:
select table_name,
column_name,
xmlquery('/ROWSET/ROW/C/text()'
passing xmltype(dbms_xmlgen.getxml( 'select ' || column_name || ' from (select ' || column_name ||', rank () over (order by count(*) desc) as rnk from '
|| table_name || ' Group by ' || column_name || ') where rnk = 1;'))
returning content) as C
from all_tab_columns
where owner = 'S1'
and table_name in ('T1', 'T2', 'T3', 'T4')
;
but it does not work. This is the error I get:
ORA-19202: Error occurred in XML processing
ORA-00933: SQL command not properly ended
ORA-06512: at "SYS.DBMS_XMLGEN", line 176
ORA-06512: at line 1
19202. 00000 - "Error occurred in XML processing%s"
*Cause: An error occurred when processing the XML function
*Action: Check the given error message and fix the appropriate problem
Here is an example. These are my two tables; T1:
col.1 col.2 col.3
----- ---------- -----
y m1
y 22 m2
n 45 m2
y 10 m5
and T2:
col.1 col.2 col.3
----- ------- -----
1 germany xxx
2 england xxx
3 germany uzt
3 germany vvx
8 US XXX
so
from T1/Col.1 I should get 'y'
from T1/col.3 I should get 'm2'
from T2/col.3 I should get 'xxx'
and so on.

The important error in the messages reported to you is this one:
ORA-00933: SQL command not properly ended
Remove the semicolon from the query inside the dbms_xmlgen.getxml() call:
select table_name,
column_name,
xmlquery('/ROWSET/ROW/C/text()'
passing xmltype(dbms_xmlgen.getxml( 'select ' || column_name || ' from (select ' || column_name ||', rank () over (order by count(*) desc) as rnk from '
|| table_name || ' Group by ' || column_name || ') where rnk = 1'))
-------^ no semicolon here
returning content) as C
from all_tab_columns
...
Your XPath seems to be wrong too though; you're looking for /ROWSET/ROW/C, but C is the column alias for the entire expression, not the column being counted. You need to alias the column name within the query, and use that in the XPath:
select table_name,
column_name,
xmlquery('/ROWSET/ROW/COL/text()'
-- ^^^
passing xmltype(dbms_xmlgen.getxml( 'select ' || column_name || ' as col from (select ' || column_name ||', rank () over (order by count(*) desc) as rnk from '
-- ^^^^^^
|| table_name || ' Group by ' || column_name || ') where rnk = 1'))
returning content) as C
from all_tab_columns
...
With your sample data that gets:
TABLE_NAME COLUMN_NAME C
------------------------------ ------------------------------ ----------
T1 col.1 y
T1 col.2 224510
T1 col.3 m2
T2 col.1 3
T2 col.2 germany
T2 col.3 xxx
db<>fiddle
The XMLQuery is returning an XMLType result, which your client is apparently showing as (XMLTYPE). You can probably change that behaviour - e.g. in SQL Developer via Tools -> Preferences -> Database -> Advanced -> Display XML Value in Grid. But you can also convert the result to a string, using getStringVal() to return a varchar2 (or getClobVal() if you have CLOB values, which might cause you other issues):
select table_name,
column_name,
xmlquery('/ROWSET/ROW/COL/text()'
passing xmltype(dbms_xmlgen.getxml( 'select ' || column_name || ' as col from (select ' || column_name ||', rank () over (order by count(*) desc) as rnk from '
|| table_name || ' Group by ' || column_name || ') where rnk = 1'))
returning content).getStringVal() as C
-- ^^^^^^^^^^^^^^^
from all_tab_columns
...
As you can see, this doesn't do quite what you might expect when there are ties due to equal counts - in your example, there are four different values for T1."col.2" (null, 10, 22, 45) which each appear once, and the XMLQuery sticks them all together in one result. You need to decide what you want to happen in that case; if you only want to see one value, you need to specify how ties are broken, within the analytic order by clause.
I actually want to see all results but I expected to see them in different rows
An alternative approach that allows that is to use XMLTable instead of XMLQuery:
select table_name, column_name, value
from (
select atc.table_name, atc.column_name, x.value, x.value_count,
rank() over (partition by atc.table_name, atc.column_name
order by x.value_count desc) as rnk
from all_tab_columns atc
cross join xmltable(
'/ROWSET/ROW'
passing xmltype(dbms_xmlgen.getxml(
'select "' || column_name || '" as value, count(*) as value_count '
|| 'from ' || table_name || ' '
|| 'group by "' || column_name || '"'))
columns value varchar2(4000) path 'VALUE',
value_count number path 'VALUE_COUNT'
) x
where atc.owner = user
and atc.table_name in ('T1', 'T2', 'T3', 'T4')
)
where rnk = 1;
The inner query cross-joins all_tab_columns to an XMLTable which uses a simpler dbms_xmlgen.getxml() call to just get every value and its count, extracts the values and counts as relational data from the generated XML, and includes the ranking function as part of that subquery rather than within the XML generation. If you run the subquery on its own you'll see all possible values and their counts, along with each value's ranking.
The outer query then just filters on the ranking, and shows you the relevant columns from the subquery for the first-ranked result.
db<>fiddle
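If you want to try the rank-then-filter pattern outside Oracle, here is a rough sketch of the same logic in Python's sqlite3; the connection, table, and sample rows are stand-ins mimicking the question's T1, not the Oracle setup itself:

```python
import sqlite3

# Build a throwaway T1 like the one in the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE T1 (col1 TEXT, col3 TEXT);
    INSERT INTO T1 VALUES ('y','m1'), ('y','m2'), ('n','m2'), ('y','m5');
""")

def top_values(conn, table, column):
    """Return every value of `column` tied for the highest count."""
    sql = f"""
        SELECT value FROM (
            SELECT "{column}" AS value,
                   RANK() OVER (ORDER BY COUNT(*) DESC) AS rnk
            FROM "{table}"
            GROUP BY "{column}"
        ) WHERE rnk = 1
    """
    return [row[0] for row in conn.execute(sql)]

print(top_values(conn, "T1", "col1"))  # most frequent col1 value(s)
print(top_values(conn, "T1", "col3"))  # most frequent col3 value(s)
```

Ties come back as multiple list elements here, which mirrors how the XMLTable version puts tied values on separate rows.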

Related

ORACLE: SQL syntax to find table with two columns with names like ID, NUM

My question is based on:
Finding table with two column names
If interested, please read the above as it covers much ground that I will not repeat here.
For the answer given, I commented as follows:
Note that you could replace the IN with = and an OR clause, but generalizing this to LIKE may not work, because the LIKE could get more than one count per term: e.g.
SELECT OWNER, TABLE_NAME, count(DISTINCT COLUMN_NAME) as ourCount
FROM all_tab_cols WHERE ( (column_name LIKE '%ID%') OR (COLUMN_NAME LIKE '%NUM%') )
GROUP BY OWNER, TABLE_NAME
HAVING COUNT(DISTINCT column_name) >= 2
ORDER BY OWNER, TABLE_NAME ;
This code compiles and runs. However, it will not guarantee that the table has both a column with a name containing ID and a column with a name containing NUM, because there may be two or more columns with names like ID.
Is there a way to generalize the answer given in the above link for a LIKE condition?
GOAL: Find tables that contain two column names, one like ID (or some string) and one like NUM (or some other string).
Also, after several answers came in, as "extra credit", I re-did an answer by Ahmed to use variables in Toad, so I've added a tag for Toad as well.
You may use conditional aggregation as the following:
SELECT OWNER, TABLE_NAME, COUNT(CASE WHEN COLUMN_NAME LIKE '%ID%' THEN COLUMN_NAME END) as ID_COUNT,
COUNT(CASE WHEN COLUMN_NAME LIKE '%NUM%' THEN COLUMN_NAME END) NUM_COUNT
FROM all_tab_cols
GROUP BY OWNER, TABLE_NAME
HAVING COUNT(CASE WHEN COLUMN_NAME LIKE '%ID%' THEN COLUMN_NAME END)>=1 AND
COUNT(CASE WHEN COLUMN_NAME LIKE '%NUM%' THEN COLUMN_NAME END)>=1
ORDER BY OWNER, TABLE_NAME ;
See a demo.
If you want to select tables that contain two column names, one like ID and one like NUM, you may replace >=1 with =1 in the having clause.
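The conditional-aggregation idea can be tried outside Oracle too. Below is a sketch in Python's sqlite3; since sqlite has no ALL_TAB_COLS view, the catalog rows are faked with hypothetical EMP/DEP tables:

```python
import sqlite3

# Fake a tiny data-dictionary view: EMP has ID columns but no NUM column,
# DEP has one of each, so only DEP should survive the HAVING clause.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE all_tab_cols (owner TEXT, table_name TEXT, column_name TEXT);
    INSERT INTO all_tab_cols VALUES
      ('S1','EMP','EMP_ID'), ('S1','EMP','MGR_ID'), ('S1','EMP','SAL'),
      ('S1','DEP','DEP_ID'), ('S1','DEP','DEP_NUM'), ('S1','DEP','LOC');
""")
rows = conn.execute("""
    SELECT owner, table_name,
           COUNT(CASE WHEN column_name LIKE '%ID%'  THEN column_name END) AS id_count,
           COUNT(CASE WHEN column_name LIKE '%NUM%' THEN column_name END) AS num_count
    FROM all_tab_cols
    GROUP BY owner, table_name
    HAVING COUNT(CASE WHEN column_name LIKE '%ID%'  THEN column_name END) >= 1
       AND COUNT(CASE WHEN column_name LIKE '%NUM%' THEN column_name END) >= 1
    ORDER BY owner, table_name
""").fetchall()
print(rows)  # only DEP has both an ID-like and a NUM-like column
```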
If I understood you correctly, you want to return tables that contain two (or more) columns: one whose name contains the ID (sub)string, and one whose name contains the NUM (sub)string.
My all_tab_cols CTE mimics that data dictionary view, just to illustrate the problem.
EMP table contains 3 columns that have the ID (sub)string, but it should count as 1 (not 3); also, as that table doesn't contain any columns that have the NUM (sub)string in their name, the EMP table shouldn't be part of the result set
DEP table contains one ID and one NUM column, so it should be returned
Therefore: the TEMP CTE counts number of ID and NUM columns (duplicates are ignored). The final query expects that table contains both columns.
Sample data:
SQL> with all_tab_cols (table_name, column_name) as
2 (select 'EMP', 'ID_EMP' from dual union all
3 select 'EMP', 'ID_MGR' from dual union all
4 select 'EMP', 'SAL' from dual union all
5 select 'EMP', 'DID_ID' from dual union all
6 --
7 select 'DEP', 'ID_DEP' from dual union all
8 select 'DEP', 'DNUM' from dual union all
9 select 'DEP', 'LOC' from dual
10 ),
Query begins here:
11 temp as
12 (select table_name, column_name,
13 sum(case when regexp_count(column_name, 'ID') = 0 then 0
14 when regexp_count(column_name, 'ID') >= 1 then 1
15 end) cnt_id,
16 sum(case when regexp_count(column_name, 'NUM') = 0 then 0
17 when regexp_count(column_name, 'NUM') >= 1 then 1
18 end) cnt_num
19 from all_tab_cols
20 group by table_name, column_name
21 )
22 select table_name
23 from temp
24 group by table_name
25 having sum(cnt_id) = sum(cnt_num)
26 and sum(cnt_id) = 1;
TABLE_NAME
--------------------
DEP
SQL>
You could do a UNION ALL and then a GROUP BY with a COUNT on a subquery to determine the tables you want, by separating your query into separate result sets, one based on ID and the other based on NUM:
SELECT *
FROM
(
SELECT OWNER, TABLE_NAME
FROM all_tab_cols
WHERE column_name LIKE '%ID%'
GROUP BY OWNER, TABLE_NAME
UNION ALL
SELECT OWNER, TABLE_NAME
FROM all_tab_cols
WHERE column_name LIKE '%NUM%'
GROUP BY OWNER, TABLE_NAME
) x
GROUP BY x.OWNER, x.TABLE_NAME
HAVING COUNT(x.TABLE_NAME) >= 2
ORDER BY x.OWNER, x.TABLE_NAME ;
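A quick way to convince yourself the inner GROUP BY makes duplicate ID columns harmless: a sketch in Python's sqlite3 with a faked catalog table (EMP has two ID columns and no NUM column; these names are made up for illustration):

```python
import sqlite3

# EMP's two ID-like columns collapse to one row in the first branch,
# so EMP's outer count stays at 1 and it is correctly excluded.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE all_tab_cols (owner TEXT, table_name TEXT, column_name TEXT);
    INSERT INTO all_tab_cols VALUES
      ('S1','EMP','EMP_ID'), ('S1','EMP','MGR_ID'), ('S1','EMP','SAL'),
      ('S1','DEP','DEP_ID'), ('S1','DEP','DEP_NUM'), ('S1','DEP','LOC');
""")
rows = conn.execute("""
    SELECT owner, table_name FROM (
        SELECT owner, table_name FROM all_tab_cols
        WHERE column_name LIKE '%ID%' GROUP BY owner, table_name
        UNION ALL
        SELECT owner, table_name FROM all_tab_cols
        WHERE column_name LIKE '%NUM%' GROUP BY owner, table_name
    )
    GROUP BY owner, table_name
    HAVING COUNT(*) >= 2
""").fetchall()
print(rows)  # EMP drops out; only DEP matched both branches
```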
Create functions so you can reuse this easily:
CREATE OR REPLACE FUNCTION get_user_tables_with_collist( i_collist IN VARCHAR2 )
RETURN SYS.ODCIVARCHAR2LIST
AS
w_result SYS.ODCIVARCHAR2LIST := SYS.ODCIVARCHAR2LIST();
w_re VARCHAR2(64) := '[^,;./+=*\.\?%[:space:]-]+' ;
BEGIN
WITH collist(colname) AS (
SELECT REGEXP_SUBSTR( UPPER(i_collist), w_re, 1, LEVEL ) FROM DUAL
CONNECT BY REGEXP_SUBSTR( UPPER(i_collist), w_re, 1, LEVEL ) IS NOT NULL
)
SELECT table_name BULK COLLECT INTO w_result FROM (
SELECT table_name, COUNT(column_name) AS n FROM user_tab_columns
WHERE EXISTS(
SELECT 1 FROM collist
WHERE colname = column_name
)
GROUP BY table_name
) d
WHERE d.n = (SELECT COUNT(*) FROM collist)
;
RETURN w_result;
END ;
/
CREATE OR REPLACE FUNCTION get_all_tables_with_collist( i_owner IN VARCHAR2, i_collist IN VARCHAR2 )
RETURN SYS.ODCIVARCHAR2LIST
AS
w_result SYS.ODCIVARCHAR2LIST := SYS.ODCIVARCHAR2LIST();
w_re VARCHAR2(64) := '[^,;./+=*\.\?%[:space:]-]+' ;
BEGIN
WITH collist(colname) AS (
SELECT REGEXP_SUBSTR( UPPER(i_collist), w_re, 1, LEVEL ) FROM DUAL
CONNECT BY REGEXP_SUBSTR( UPPER(i_collist), w_re, 1, LEVEL ) IS NOT NULL
)
SELECT table_name BULK COLLECT INTO w_result FROM (
SELECT table_name, COUNT(column_name) AS n FROM all_tab_columns
WHERE EXISTS(
SELECT 1 FROM collist
WHERE colname = column_name
)
AND owner = UPPER(i_owner)
GROUP BY table_name
) d
WHERE d.n = (SELECT COUNT(*) FROM collist)
;
RETURN w_result;
END ;
/
select * from get_all_tables_with_collist('sys', 'table_name;column_name') ;
ALL_COL_COMMENTS
ALL_COL_PENDING_STATS
ALL_COL_PRIVS
...
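The same "split a free-form column list, then keep tables containing every listed column" idea can be sketched outside PL/SQL. Here is a rough Python analogue using sqlite's pragma-based catalog in place of user_tab_columns; the t_ok/t_bad tables are invented for illustration:

```python
import re
import sqlite3

# Two throwaway tables: t_ok has both requested columns, t_bad does not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t_ok  (table_name TEXT, column_name TEXT, other INT);
    CREATE TABLE t_bad (table_name TEXT, loc TEXT);
""")

def tables_with_collist(conn, collist):
    """Return names of tables that contain every column in `collist`."""
    wanted = {c.upper() for c in re.split(r"[,;\s]+", collist) if c}
    hits = []
    for (tab,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"):
        cols = {row[1].upper() for row in conn.execute(f'PRAGMA table_info("{tab}")')}
        if wanted <= cols:   # the table has every requested column
            hits.append(tab)
    return hits

print(tables_with_collist(conn, "table_name;column_name"))
```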
This is essentially an "edit" of Littlefoot's answer that I believe makes things better. I give due credit, but I was asked to make this a separate answer, so I am doing so.
temp as -- prefix with WITH if not using the sample-data CTE above
(select table_name, column_name,
        sum(case when regexp_count(column_name, 'ID') = 0 then 0
                 when regexp_count(column_name, 'ID') >= 1 then 1
            end) cnt_id,
        sum(case when regexp_count(column_name, 'NUM') = 0 then 0
                 when regexp_count(column_name, 'NUM') >= 1 then 1
            end) cnt_num
 from all_tab_cols
 group by table_name, column_name
)
select table_name
from temp
group by table_name
having sum(cnt_id) >= 1
   and sum(cnt_num) >= 1;
This is a variant of the answer by Ahmed that uses conditional aggregation. I just updated it to use variables. This works in Toad. It may not work in other Oracle tools.
I think p3consulting gave a nice answer also, but the code below is shorter and somewhat easier to read (in my opinion).
For how I figured out how to add the variables in Toad, see answers by Alan in: How do I declare and use variables in PL/SQL like I do in T-SQL?
Also, to use the script variables, run it in Toad with "Run as script"; otherwise you will be prompted for the variables, which, to me, is not very desirable.
var searchVal1 varchar2(20);
var searchVal2 varchar2(20);
exec :searchVal1 := '%ID%';
exec :searchVal2 := '%NUM%';
SELECT OWNER, TABLE_NAME
, COUNT(CASE WHEN COLUMN_NAME LIKE :searchVal1 THEN COLUMN_NAME END) as COUNT_1,
COUNT(CASE WHEN COLUMN_NAME LIKE :searchVal2 THEN COLUMN_NAME END) as COUNT_2
FROM all_tab_cols
GROUP BY OWNER, TABLE_NAME
HAVING COUNT(CASE WHEN COLUMN_NAME LIKE :searchVal1 THEN COLUMN_NAME END)>=1 AND
COUNT(CASE WHEN COLUMN_NAME LIKE :searchVal2 THEN COLUMN_NAME END)>=1
ORDER BY OWNER, TABLE_NAME ;

Splitting up Group By for similar values in a column within SQL Server

I am trying to split up the values in Column5 so that, when using a GROUP BY on Column5 (seen below), the 2 values aren't all grouped together. Instead, the values should be separated out so that the first 2 is in its own group, the second group is the value 46675, and the final third group is the last couple of 2 values. In short, I am looking for a way to split up the 2 values so that they are not all aggregated together, but instead fall into separate groupings (mirroring the 'Groups' column). The intended outcome is for each separated run of 2 values to be aggregated in its own respective group. I've added a tidy table and a picture from the Excel file.
||Column1||Column2||Column3||Column4||Column5||Groups||
|| 1 || NO || A || F || 2 || 1 ||
|| 2 || Yes || B || C || 46 || 2 ||
|| 3 || NO || C || F || 2 || 3 ||
|| 4 || NO || D || F || 2 || 3 ||
Image of Table from Excel File
This is the prototypical gaps-and-islands problem. You're looking for runs in Column5 ordered by Column1:
with data as (
select *,
row_number() over (order by Column1) -
row_number() over (partition by Column5 order by Column1) as grp
from YourTable -- your table name here; the FROM clause is required
)
select * from data order by Column1;
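To see what the row_number() subtraction actually produces, here is the same window arithmetic run in Python's sqlite3 (the table name t is a stand-in; only Column1 and Column5 from the question's sample rows are used):

```python
import sqlite3

# Sample rows from the question: Column5 runs are 2 | 46 | 2, 2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (Column1 INT, Column5 INT);
    INSERT INTO t VALUES (1, 2), (2, 46), (3, 2), (4, 2);
""")
rows = conn.execute("""
    SELECT Column1, Column5,
           ROW_NUMBER() OVER (ORDER BY Column1) -
           ROW_NUMBER() OVER (PARTITION BY Column5 ORDER BY Column1) AS grp
    FROM t
    ORDER BY Column1
""").fetchall()
print(rows)
# Consecutive rows with the same Column5 value share a (Column5, grp) pair,
# so GROUP BY Column5, grp keeps the first 2 apart from the later run of 2s.
```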

SQL Query to split result by comma and give result as input of another query

I'm trying to write a SQL query where the value of one query is used in another query. Here is my SQL query:
Select *
from ( select detection_class, detection_class_id, matched_alert_id, stream_id, track_id, detection_time, frame_id
from matched_alert
where stream_id = %s
group by track_id )
where (SELECT ',' || detection_class || ',' FROM alerts WHERE alert_id = %s) LIKE '%,' || detection_class || ',%'
This query is stored in the query variable and then executed as below:
place_holders = [stream_id, alert_id]
try:
with connection.cursor() as cursor:
cursor.execute(query, place_holders)
rows = cursor.fetchall()
It shows this error:
return sql % params
TypeError: not enough arguments for format string
The result of select detection_class from alerts where alert_id = %s looks like 'car,bus,bike', but I need it split into separate strings - 'bus', 'car', 'bike' - so I can use it in a where detection_class IN (...) clause, e.g. where detection_class IN ('car', 'bus').
alerts table
matched_alert table
so, how can I split this result by a comma and make a separate string?
Instead of IN you can use the operator LIKE:
WHERE (
SELECT ',' || detection_class || ','
FROM alerts
WHERE alert_id = ?
) LIKE '%,' || detection_class || ',%'
and the query as a Python string will be:
query = """
Select *
from ( select detection_class, detection_class_id, matched_alert_id, stream_id, track_id, detection_time, frame_id
from matched_alert
where stream_id = ?
group by track_id )
where (SELECT ',' || detection_class || ',' FROM alerts WHERE alert_id = ?) LIKE '%,' || detection_class || ',%'
"""
You can side-step the problem entirely with an inner join, thus:
Select *
from ( select detection_class, detection_class_id, matched_alert_id, stream_id, track_id, detection_time, frame_id
from matched_alert
where stream_id = %s
group by track_id ) subQ
inner join alerts a on a.detection_class = subQ.detection_class and a.alert_id = subQ.stream_id
Incidentally, I am not sure what you want the GROUP BY for in your sub-query, since you are not using any aggregate functions.

How to find the changes happened between rows?

I have two tables that I need to find the difference between.
What's required is a summary table of which fields have changed (ignoring id columns). Also, I don't know in advance which columns have changed.
e.g. Source table [fields that have changed are {name}, {location}; {id} is ignored]
id || name || location || description
1 || aaaa || ddd || abc
2 || bbbb || eee || abc
e.g. Output Table [outputting {name}, {location} as they have changed]
Table_name || Field_changed || field_was || field_now
Source table || name || aaaa || bbbb
Source table || location || ddd || eee
I have tried to use lag(), but that only gives me the columns I selected. Eventually I'd want to see all changes in all columns, as I am not sure which columns have changed.
Also, please note that the table has about 150 columns - so one of the biggest issues is how to find the ones that changed.
As your table can contain multiple changes in a single row, and each change needs to appear as a separate row in the result, I have created a query that reports them separately, as follows:
WITH DATAA(ID, NAME, LOCATION, DESCRIPTION)
AS
(SELECT 1, 'aaaa', 'ddd', 'abc' FROM DUAL UNION ALL
SELECT 2, 'bbbb', 'eee', 'abc' FROM DUAL),
-- YOUR QUERY WILL START FROM HERE
CTE AS (SELECT NAME,
LAG(NAME,1) OVER (ORDER BY ID) PREV_NAME,
LOCATION,
LAG(LOCATION,1) OVER (ORDER BY ID) PREV_LOCATION,
DESCRIPTION,
LAG(DESCRIPTION,1) OVER (ORDER BY ID) PREV_DESCRIPTION
FROM DATAA)
--
SELECT
'Source table' AS TABLE_NAME,
FIELD_CHANGED,
FIELD_WAS,
FIELD_NOW
FROM
(
SELECT
'Name' AS FIELD_CHANGED,
PREV_NAME AS FIELD_WAS,
NAME AS FIELD_NOW
FROM
CTE
WHERE
NAME <> PREV_NAME
UNION ALL
SELECT
'location' AS FIELD_CHANGED,
PREV_LOCATION AS FIELD_WAS,
LOCATION AS FIELD_NOW
FROM
CTE
WHERE
LOCATION <> PREV_LOCATION
UNION ALL
SELECT
'description' AS FIELD_CHANGED,
PREV_DESCRIPTION AS FIELD_WAS,
DESCRIPTION AS FIELD_NOW
FROM
CTE
WHERE
DESCRIPTION <> PREV_DESCRIPTION
);
Output:
DEMO
Cheers!!
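The LAG-plus-UNION-ALL unpivot above translates directly to other engines with window functions. A sketch in Python's sqlite3 against the question's two sample rows (table name dataa as in the answer's CTE):

```python
import sqlite3

# Two versions of the same record: name and location changed, description did not.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dataa (id INT, name TEXT, location TEXT, description TEXT);
    INSERT INTO dataa VALUES (1,'aaaa','ddd','abc'), (2,'bbbb','eee','abc');
""")
rows = conn.execute("""
    WITH cte AS (
        SELECT name,        LAG(name)        OVER (ORDER BY id) AS prev_name,
               location,    LAG(location)    OVER (ORDER BY id) AS prev_location,
               description, LAG(description) OVER (ORDER BY id) AS prev_description
        FROM dataa
    )
    SELECT 'name' AS field_changed, prev_name AS field_was, name AS field_now
    FROM cte WHERE name <> prev_name
    UNION ALL
    SELECT 'location', prev_location, location
    FROM cte WHERE location <> prev_location
    UNION ALL
    SELECT 'description', prev_description, description
    FROM cte WHERE description <> prev_description
""").fetchall()
print(rows)  # description is unchanged, so only name and location appear
```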

PostgreSQL convert columns to rows? Transpose?

I have a PostgreSQL function (or table) which gives me the following output:
Sl.no username Designation salary etc..
1 A XYZ 10000 ...
2 B RTS 50000 ...
3 C QWE 20000 ...
4 D HGD 34343 ...
Now I want the Output as below:
Sl.no 1 2 3 4 ...
Username A B C D ...
Designation XYZ RTS QWE HGD ...
Salary 10000 50000 20000 34343 ...
How to do this?
SELECT
unnest(array['Sl.no', 'username', 'Designation','salary']) AS "Columns",
unnest(array[Sl.no, username, Designation, salary]) AS "Values"
FROM view_name
ORDER BY "Columns"
Reference : convertingColumnsToRows
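Conceptually, this unnest() pairing is just a columns-to-rows flip: one output row per original column, holding the label plus that column's values across all rows. Outside the database the same flip is a zip; a plain-Python sketch with the question's sample data:

```python
# Column labels and fetched rows, copied from the question's sample output.
header = ["Sl.no", "username", "Designation", "salary"]
rows = [
    (1, "A", "XYZ", 10000),
    (2, "B", "RTS", 50000),
    (3, "C", "QWE", 20000),
    (4, "D", "HGD", 34343),
]
# zip(*rows) yields the table column by column; pair each with its label.
transposed = [(name, *values) for name, values in zip(header, zip(*rows))]
for line in transposed:
    print(*line)
```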
Basing my answer on a table of the form:
CREATE TABLE tbl (
sl_no int
, username text
, designation text
, salary int
);
Each row results in a new column to return. With a dynamic return type like this, it's hardly possible to make this completely dynamic with a single call to the database. Demonstrating solutions with two steps:
Generate query
Execute generated query
Generally, this is limited by the maximum number of columns a table can hold. So not an option for tables with more than 1600 rows (or fewer). Details:
What is the maximum number of columns in a PostgreSQL select query
Postgres 9.4+
Dynamic solution with crosstab()
Use the first one you can. Beats the rest.
SELECT 'SELECT *
FROM crosstab(
$ct$SELECT u.attnum, t.rn, u.val
FROM (SELECT row_number() OVER () AS rn, * FROM '
|| attrelid::regclass || ') t
, unnest(ARRAY[' || string_agg(quote_ident(attname)
|| '::text', ',') || '])
WITH ORDINALITY u(val, attnum)
ORDER BY 1, 2$ct$
) t (attnum bigint, '
|| (SELECT string_agg('r'|| rn ||' text', ', ')
FROM (SELECT row_number() OVER () AS rn FROM tbl) t)
|| ')' AS sql
FROM pg_attribute
WHERE attrelid = 'tbl'::regclass
AND attnum > 0
AND NOT attisdropped
GROUP BY attrelid;
Operating with attnum instead of actual column names. Simpler and faster. Join the result to pg_attribute once more or integrate column names like in the pg 9.3 example.
Generates a query of the form:
SELECT *
FROM crosstab(
$ct$
SELECT u.attnum, t.rn, u.val
FROM (SELECT row_number() OVER () AS rn, * FROM tbl) t
, unnest(ARRAY[sl_no::text,username::text,designation::text,salary::text]) WITH ORDINALITY u(val, attnum)
ORDER BY 1, 2$ct$
) t (attnum bigint, r1 text, r2 text, r3 text, r4 text);
This uses a whole range of advanced features. Just too much to explain.
Simple solution with unnest()
One unnest() can now take multiple arrays to unnest in parallel.
SELECT 'SELECT * FROM unnest(
''{sl_no, username, designation, salary}''::text[]
, ' || string_agg(quote_literal(ARRAY[sl_no::text, username::text, designation::text, salary::text])
|| '::text[]', E'\n, ')
|| E') \n AS t(col,' || string_agg('row' || sl_no, ',') || ')' AS sql
FROM tbl;
Result:
SELECT * FROM unnest(
'{sl_no, username, designation, salary}'::text[]
,'{10,Joe,Music,1234}'::text[]
,'{11,Bob,Movie,2345}'::text[]
,'{12,Dave,Theatre,2356}'::text[])
AS t(col,row1,row2,row3);
db<>fiddle here
Old sqlfiddle
Postgres 9.3 or older
Dynamic solution with crosstab()
Completely dynamic, works for any table. Provide the table name in two places:
SELECT 'SELECT *
FROM crosstab(
''SELECT unnest(''' || quote_literal(array_agg(attname))
|| '''::text[]) AS col
, row_number() OVER ()
, unnest(ARRAY[' || string_agg(quote_ident(attname)
|| '::text', ',') || ']) AS val
FROM ' || attrelid::regclass || '
ORDER BY generate_series(1,' || count(*) || '), 2''
) t (col text, '
|| (SELECT string_agg('r'|| rn ||' text', ',')
FROM (SELECT row_number() OVER () AS rn FROM tbl) t)
|| ')' AS sql
FROM pg_attribute
WHERE attrelid = 'tbl'::regclass
AND attnum > 0
AND NOT attisdropped
GROUP BY attrelid;
Could be wrapped into a function with a single parameter ...
Generates a query of the form:
SELECT *
FROM crosstab(
'SELECT unnest(''{sl_no,username,designation,salary}''::text[]) AS col
, row_number() OVER ()
, unnest(ARRAY[sl_no::text,username::text,designation::text,salary::text]) AS val
FROM tbl
ORDER BY generate_series(1,4), 2'
) t (col text, r1 text,r2 text,r3 text,r4 text);
Produces the desired result:
col r1 r2 r3 r4
-----------------------------------
sl_no 1 2 3 4
username A B C D
designation XYZ RTS QWE HGD
salary 10000 50000 20000 34343
Simple solution with unnest()
SELECT 'SELECT unnest(''{sl_no, username, designation, salary}''::text[]) AS col
, ' || string_agg('unnest('
|| quote_literal(ARRAY[sl_no::text, username::text, designation::text, salary::text])
|| '::text[]) AS row' || sl_no, E'\n , ') AS sql
FROM tbl;
Slow for tables with more than a couple of columns.
Generates a query of the form:
SELECT unnest('{sl_no, username, designation, salary}'::text[]) AS col
, unnest('{10,Joe,Music,1234}'::text[]) AS row1
, unnest('{11,Bob,Movie,2345}'::text[]) AS row2
, unnest('{12,Dave,Theatre,2356}'::text[]) AS row3
, unnest('{4,D,HGD,34343}'::text[]) AS row4
Same result.
If (like me) you were needing this information from a bash script, note there is a simple command-line switch for psql to tell it to output table columns as rows:
psql mydbname -x -A -F= -c "SELECT * FROM foo WHERE id=123"
The -x option is the key to getting psql to output columns as rows.
I have a simpler approach than the ones Erwin pointed out above, which worked for me with Postgres (and I think it should work with all major relational databases that support the SQL standard).
You can simply use UNION instead of crosstab:
SELECT text 'a' AS "text" UNION SELECT 'b';
text
------
a
b
(2 rows)
Of course that depends on the case in which you are going to apply this. Considering that you know beforehand what fields you need, you can take this approach even for querying different tables. I.e.:
SELECT 'My first metric' as name, count(*) as total from first_table UNION
SELECT 'My second metric' as name, count(*) as total from second_table
name | Total
------------------|--------
My first metric | 10
My second metric | 20
(2 rows)
It's a more maintainable approach, IMHO. Look at this page for more information: https://www.postgresql.org/docs/current/typeconv-union-case.html
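The UNION-of-metrics pattern runs anywhere. A sketch in Python's sqlite3, with two throwaway tables standing in for first_table and second_table:

```python
import sqlite3

# Each SELECT produces one labelled metric row; UNION ALL stacks them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE first_table  (x INT);
    CREATE TABLE second_table (x INT);
    INSERT INTO first_table  VALUES (1), (2), (3);
    INSERT INTO second_table VALUES (1), (2);
""")
rows = conn.execute("""
    SELECT 'My first metric'  AS name, COUNT(*) AS total FROM first_table
    UNION ALL
    SELECT 'My second metric', COUNT(*) FROM second_table
""").fetchall()
print(rows)  # one row per metric, label plus count
```

UNION ALL is used here rather than UNION since the labels already make the rows distinct and there is nothing to deduplicate.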
There is no proper way to do this in plain SQL or PL/pgSQL.
It would be far better to do this in the application that gets the data from the DB.