PostgreSQL: Get values of a register as multiple rows - sql

Using PostgreSQL 9.3, I'm creating a Jasper Reports template to make a PDF report. I want to create reports of different tables, with multiple columns, all with the same template. A solution could be to get the values of a record as pairs of column name and value per id.
For example, if I had a table like:
id | Column1 | Column2 | Column3
-------------------------------------------------
1 | Register1C1 | Register1C2 | Register1C3
I would like to get the record as:
Id | ColumnName | Value
-----------------------------
1 | Column1 | Register1C1
1 | Column2 | Register1C2
1 | Column3 | Register1C3
The data type of value columns can vary!
Is it possible? How can I do this?

If all your columns share the same data type and the order of rows does not have to be enforced:
SELECT t.id, v.*
FROM tbl t, LATERAL (
VALUES
('col1', col1)
, ('col2', col2)
, ('col3', col3)
-- etc.
) v(col, val);
About LATERAL (requires Postgres 9.3 or later):
What is the difference between LATERAL and a subquery in PostgreSQL?
Combining it with a VALUES expression:
Crosstab transpose query request
SELECT DISTINCT on multiple columns
For varying data types, the common denominator would be text, since every type can be cast to text. Plus, order enforced:
SELECT t.id, v.col, v.val
FROM tbl t, LATERAL (
VALUES
(1, 'col1', col1::text)
, (2, 'col2', col2::text)
, (3, 'col3', col3::text)
-- etc.
) v(rank, col, val)
ORDER BY t.id, v.rank;
In Postgres 9.4 or later use the new unnest() for multiple arrays:
SELECT t.id, v.*
FROM tbl t, unnest('{col1,col2,col3}'::text[]
, ARRAY[col1,col2,col3]) v(col, val);
-- , ARRAY[col1::text,col2::text,col3::text]) v(col, val);
The commented-out line is the alternative for varying data types.
Full automation for Postgres 9.4:
The query above is convenient to automate for a dynamic set of columns:
CREATE OR REPLACE FUNCTION f_transpose (_tbl regclass, VARIADIC _cols text[])
RETURNS TABLE (id int, col text, val text) AS
$func$
BEGIN
RETURN QUERY EXECUTE format(
   'SELECT t.id, v.* FROM %s t, unnest($1, ARRAY[%s]) v'
 , _tbl, array_to_string(_cols, '::text,') || '::text')
-- , _tbl, array_to_string(_cols, ','))  -- simpler alternative for text-only columns
USING _cols;
END
$func$ LANGUAGE plpgsql;
Call - with table name and any number of column names, any data types:
SELECT * FROM f_transpose('table_name', 'column1', 'column2', 'column3');
Weakness: the list of column names is not safe against SQL injection. You could gather column names from pg_attribute instead. Example:
How to perform the same aggregation on every column, without listing the columns?
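A hedged sketch of gathering the column names from the system catalog instead of passing them in (it assumes, as above, an id column to exclude; attnum > 0 and NOT attisdropped filter out system and dropped columns):
SELECT array_agg(attname::text ORDER BY attnum) AS cols
FROM pg_attribute
WHERE attrelid = 'table_name'::regclass
AND attname <> 'id'
AND attnum > 0
AND NOT attisdropped;
The resulting text[] could then be handed to f_transpose with VARIADIC, e.g. f_transpose('table_name', VARIADIC ARRAY['column1','column2','column3']).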

SELECT id
,unnest(string_to_array('col1,col2,col3', ',')) col_name
,unnest(string_to_array(col1 || ',' || col2 || ',' || col3, ',')) val
FROM t
Try the following method:
My sample table name is t. To get the column names you can use this query:
select string_agg(column_name, ',') cols
from information_schema.columns
where table_name = 't' and column_name <> 'id'
This query selects all columns in your table except the id column. If you want to restrict it to a schema, add table_schema = 'your_schema_name' to the where clause, as shown below.
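For example, restricted to one schema (the schema name here is a placeholder):
select string_agg(column_name, ',') cols
from information_schema.columns
where table_schema = 'your_schema_name'
and table_name = 't'
and column_name <> 'id'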
To create the select query dynamically:
SELECT 'select id,unnest(string_to_array(''' || cols || ''','','')) col_name,unnest(string_to_array(' || cols1 || ','','')) val from t'
FROM (
SELECT string_agg(column_name, ',') cols -- here we'll get all the columns in table t
,string_agg(column_name, '||'',''||') cols1
FROM information_schema.columns
WHERE table_name = 't'
AND column_name <> 'id'
) tb;
And the following plpgsql function dynamically builds SELECT id, unnest(string_to_array('....')) col_name, unnest(string_to_array(.., ',')) val FROM t and executes it.
CREATE OR replace FUNCTION fn ()
RETURNS TABLE (
id INT
,columname TEXT
,columnvalues TEXT
) AS $$
DECLARE qry TEXT;
BEGIN
SELECT 'select id,unnest(string_to_array(''' || cols || ''','','')) col_name,unnest(string_to_array(' || cols1 || ','','')) val from t'
INTO qry
FROM (
SELECT string_agg(column_name, ',') cols
,string_agg(column_name, '||'',''||') cols1
FROM information_schema.columns
WHERE table_name = 't'
AND column_name <> 'id'
) tb;
RETURN QUERY
EXECUTE format(qry);
END;$$
LANGUAGE plpgsql;
Call this function like: select * from fn();

Related

PL SQL comma delimited to JSON array conversion

I have values in one column delimited by commas.
Col1
a,b,c,d
I want to convert this into a JSON array. I know the JSON_ARRAY function is available in PL/SQL from 12.2 onwards, but JSON_ARRAY converts multiple columns into an array, and I have the values in a single column.
Desired output: ["a","b","c","d"]
You can use JSON_ARRAYAGG() instead of the JSON_ARRAY() function, without using PL/SQL, after converting those letters to rows by splitting on commas, such as:
WITH t(id,col1) AS
(
SELECT 1,'a,b,c,d' FROM dual UNION ALL
SELECT 2,'d,e,f,g,h,i' FROM dual
), t2 AS
(
SELECT REGEXP_SUBSTR(col1,'[^,]+',1,level) AS col, id
FROM t
CONNECT BY level <= REGEXP_COUNT(col1,',')+1
AND PRIOR SYS_GUID() IS NOT NULL
AND PRIOR col1 = col1
)
SELECT id, JSON_ARRAYAGG(col ORDER BY col RETURNING VARCHAR2(100)) As "JSON value"
FROM t2
GROUP BY id
Demo
Just use replace:
SELECT '["'
||REPLACE(REPLACE(col1,'"','\"'),',','","')
||'"]' AS json_value
FROM table_name;
Or, in PL/SQL:
DECLARE
col1 VARCHAR2(50) := 'a,b,c,d';
json VARCHAR2(50);
BEGIN
json := '["'||REPLACE(REPLACE(col1,'"','\"'),',','","')||'"]';
DBMS_OUTPUT.PUT_LINE(json);
END;
/
db<>fiddle here

Select query using json format value

If the customer's first_name = 'Monika' and last_name = 'Awasthi',
then I am using the below query to return the value in JSON format:
SELECT *
FROM
(
SELECT JSON_ARRAYAGG(JSON_OBJECT('CODE' IS '1','VALUE' IS 'Monika'||' '||'Awasthi'))
FROM DUAL
);
It is working fine and gives the below output:
[{"CODE":"1","VALUE":"Monika Awasthi"}]
But I want one more value with the names reversed, meaning the output should be:
[{"CODE":"1","VALUE":"Monika Awasthi"},{"CODE":"2","VALUE":"Awasthi Monika"}]
Kindly give me some suggestions. Thank You
Another approach is to use a CTE to generate the two codes and values; your original version could be written to get the name data from a table or CTE:
-- CTE for sample data
WITH cte (first_name, last_name) AS (
SELECT 'Monika', 'Awasthi' FROM DUAL
)
-- query against CTE or table
SELECT JSON_ARRAYAGG(JSON_OBJECT('CODE' IS '1','VALUE' IS last_name ||' '|| first_name))
FROM cte;
And you could then extend that with a CTE that generates the value with the names in both orders:
WITH cte1 (first_name, last_name) AS (
SELECT 'Monika', 'Awasthi' FROM DUAL
),
cte2 (code, value) AS (
SELECT 1 AS code, first_name || ' ' || last_name FROM cte1
UNION ALL
SELECT 2 AS code, last_name || ' ' || first_name FROM cte1
)
SELECT JSON_ARRAYAGG(JSON_OBJECT('CODE' IS code,'VALUE' IS value))
FROM cte2;
which gives:
JSON_ARRAYAGG(JSON_OBJECT('CODE'ISCODE,'VALUE'ISVALUE))
-------------------------------------------------------------------------
[{"CODE":1,"VALUE":"Monika Awasthi"},{"CODE":2,"VALUE":"Awasthi Monika"}]
db<>fiddle
A simple approach using plain SQL (without PL/SQL) to generate the code values, only usable for two columns as in this case, might be:
SELECT JSON_ARRAYAGG(
JSON_OBJECT('CODE' IS tt.column_id,
'VALUE' IS CASE WHEN column_id=1
THEN name||' '||surname
ELSE surname||' '||name
END)
) AS result
FROM t
CROSS JOIN (SELECT column_id FROM user_tab_cols WHERE table_name = 'T') tt
where t is a table which holds name and surname columns
Demo
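For reference, a hypothetical definition of t matching that description (column names and lengths are assumptions):
CREATE TABLE t (name VARCHAR2(50), surname VARCHAR2(50));
INSERT INTO t VALUES ('Monika', 'Awasthi');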
A more resilient solution can be provided through PL/SQL, even when more columns exist within the data source, such as:
DECLARE
v_jso VARCHAR2(4000);
v_arr OWA.VC_ARR;
v_arr_t JSON_ARRAY_T := JSON_ARRAY_T();
BEGIN
FOR c IN ( SELECT column_id FROM user_tab_cols WHERE table_name = 'T' )
LOOP
SELECT 'JSON_OBJECT( ''CODE'' IS '||MAX(c.column_id)||',
''VALUE'' IS '||LISTAGG(column_name,'||'' ''||')
WITHIN GROUP (ORDER BY ABS(column_id-c.column_id))
||' )'
INTO v_arr(c.column_id)
FROM ( SELECT * FROM user_tab_cols WHERE table_name = 'T' );
EXECUTE IMMEDIATE 'SELECT '||v_arr(c.column_id)||' FROM t' INTO v_jso;
v_arr_t.APPEND(JSON_OBJECT_T(v_jso));
END LOOP;
DBMS_OUTPUT.PUT_LINE(v_arr_t.STRINGIFY);
END;
/
Demo
As I explained in a comment under your question, I am not clear on how you define the CODE values for your JSON string (assuming you have more than one customer).
Other than that, if you need to create a JSON array of objects from individual strings (as in your attempt), you probably need to use JSON_ARRAY rather than JSON_ARRAYAGG. Something like I show below. Incidentally, I also don't know why you needed to SELECT * FROM (subquery) - the outer SELECT seems entirely unnecessary.
So, if you don't actually aggregate over a table, but just need to build a JSON array from individual pieces:
select json_array
(
json_object('CODE' is '1', 'VALUE' is first_name || ' ' || last_name ),
json_object('CODE' is '2', 'VALUE' is last_name || ' ' || first_name)
) as result
from ( select 'Monika' as first_name, 'Awasthi' as last_name from dual )
;
RESULT
------------------------------------------------------------------------------
[{"CODE":"1","VALUE":"Monika Awasthi"},{"CODE":"2","VALUE":"Awasthi Monika"}]

Count number of null values for every column on a table

I would like to calculate, for each column in a table, the percent of rows that are null.
For one column, I was using:
SELECT ((SELECT COUNT(Col1)
FROM Table1)
/
(SELECT COUNT(*)
FROM Table1)) AS Table1Stats
Works great and is fast.
However, I want to do this for all ~50 columns of the table, and my environment does not allow me to use dynamic SQL.
Any recommendations? I am using Snowflake on AWS, but as an end user I am using the Snowflake browser interface.
You can combine this as:
SELECT COUNT(Col1) * 1.0 / COUNT(*)
FROM Table1;
Or, if you prefer:
SELECT AVG( (Col1 IS NOT NULL)::INT )
FROM Table1;
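If listing the ~50 columns by hand is acceptable, all the ratios can be computed in a single scan without dynamic SQL. A hedged sketch using Snowflake's COUNT_IF (col1 .. col3 are placeholder column names):
SELECT COUNT_IF(col1 IS NULL) / COUNT(*) AS col1_null_ratio
, COUNT_IF(col2 IS NULL) / COUNT(*) AS col2_null_ratio
, COUNT_IF(col3 IS NULL) / COUNT(*) AS col3_null_ratio
FROM Table1;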
You can use a mix of object_construct() and flatten() to move the column names into rows. Then do the math for the missing values:
create or replace temp table many_cols as
select 1 a, 2 b, 3 c, 4 d
union all select 1, null, 3, 4
union all select 8, 8, null, null
union all select 8, 8, 7, null
union all select null, null, null, null;
select key column_name
, 1-count(*)/(select count(*) from many_cols) ratio_null
from (
select object_construct(a.*) x
from many_cols a
), lateral flatten(x)
group by key
;
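With the sample data above, each column's per-key count falls short of the total row count by exactly its number of NULLs (object_construct() omits NULL values), so the result should be approximately the following (keys appear uppercase because unquoted identifiers are stored uppercase):
COLUMN_NAME RATIO_NULL
A 0.2
B 0.4
C 0.4
D 0.6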
You can do this using a SQL generator if you don't mind copying the text and running it once it's done.
-- SQL generator option:
select 'select' || listagg(' ((select count(' || COLUMN_NAME || ') from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10000"."ORDERS") / ' ||
'(select count(*) from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10000"."ORDERS")) as ' || COLUMN_NAME, ',') as SQL_STATEMENT
from "SNOWFLAKE_SAMPLE_DATA"."INFORMATION_SCHEMA"."COLUMNS"
where TABLE_CATALOG = 'SNOWFLAKE_SAMPLE_DATA' and TABLE_SCHEMA = 'TPCH_SF10000' and TABLE_NAME = 'ORDERS'
;
If copy and paste is not practical because you need to script it, you can use the results of the SQL generator in a stored procedure I wrote to execute a single line of dynamic SQL:
call run_dynamic_sql(
select 'select' || listagg(' ((select count(' || COLUMN_NAME || ') from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10000"."ORDERS") / ' ||
'(select count(*) from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF10000"."ORDERS")) as ' || COLUMN_NAME, ',') as SQL_STATEMENT
from "SNOWFLAKE_SAMPLE_DATA"."INFORMATION_SCHEMA"."COLUMNS"
where TABLE_CATALOG = 'SNOWFLAKE_SAMPLE_DATA' and TABLE_SCHEMA = 'TPCH_SF10000' and TABLE_NAME = 'ORDERS'
);
If you want the stored procedure, until it's published on Snowflake's blog it's available here: https://snowflake.pavlik.us/index.php/2021/01/22/running-dynamic-sql-in-snowflake/
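The linked procedure is the author's own. Purely as an illustration, a minimal sketch of such a procedure written in Snowflake Scripting might look like the following (the real one may well be implemented differently, e.g. in JavaScript):
create or replace procedure run_dynamic_sql(sql_text varchar)
returns table()
language sql
as
$$
declare
res resultset default (execute immediate :sql_text); -- run the statement passed in
begin
return table(res); -- hand the result set back to the caller
end;
$$;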

Dynamic UNION ALL query in Postgres

We are using a Postgres / PostGis connection to get data that is published via a geoserver.
The Query looks like this at the moment:
SELECT
row_number() over (ORDER BY a.ogc_fid) AS qid, a.wkb_geometry AS geometry
FROM
(
SELECT * FROM test
UNION ALL
SELECT * FROM test1
UNION ALL
SELECT * FROM test2
)a
In our db only valid shapefiles will be imported, each into a single table, so it would make sense to make the UNION ALL part dynamic (loop over each table and build the UNION ALL statement). Is there a way to do this in a standard Postgres way, or do I need to write a function, and what would the syntax look like? I am pretty new to SQL.
The shapefiles have different data structures; only the ogc_fid and wkb_geometry columns are always available, and we would like to union all tables from the DB.
These are just general guidelines; you will need to work out the details, especially the syntax.
You need to create a stored procedure.
Create a loop over information_schema.tables, filtering for the table names you want:
DECLARE
rec record;
strSQL text := '';   -- initialize, otherwise NULL || ... stays NULL
BEGIN
Then build strSQL with each table:
FOR rec IN SELECT table_schema, table_name
FROM information_schema.tables
LOOP
strSQL := strSQL || 'SELECT ogc_fid, wkb_geometry FROM ' ||
rec.table_schema || '.' || rec.table_name || ' UNION ';
END LOOP;
-- remove the trailing ' UNION ' from strSQL
strSQL := left(strSQL, -7);
strSQL := 'SELECT row_number() over (ORDER BY a.ogc_fid) AS qid,
a.wkb_geometry AS geometry FROM (' || strSQL || ') a';
EXECUTE strSQL;
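A hedged sketch of how those pieces could be assembled into one callable function. The function name and the schema filter are assumptions; it presumes every matching table has ogc_fid and wkb_geometry columns (as stated in the question) and uses format() with %I to quote identifiers safely:
CREATE OR REPLACE FUNCTION union_all_geometries(_schema text)
RETURNS TABLE (qid bigint, geom geometry) AS
$func$
DECLARE
rec record;
strSQL text := '';
BEGIN
FOR rec IN SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema = _schema
AND table_type = 'BASE TABLE'
LOOP
strSQL := strSQL || format('SELECT ogc_fid, wkb_geometry FROM %I.%I UNION ALL '
, rec.table_schema, rec.table_name);
END LOOP;
IF strSQL = '' THEN
RETURN;  -- no matching tables
END IF;
strSQL := left(strSQL, -11);  -- strip the trailing ' UNION ALL '
RETURN QUERY EXECUTE
'SELECT row_number() OVER (ORDER BY a.ogc_fid), a.wkb_geometry FROM ('
|| strSQL || ') a';
END
$func$ LANGUAGE plpgsql;
Call it like: SELECT * FROM union_all_geometries('public');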
One solution is to serialize the rest of the columns to JSON with row_to_json() (available since PostgreSQL 9.2).
For PG 9.1 (and earlier) you can use hstore instead, but note that all values are cast to text (an hstore variant is sketched after the example below).
Why serialize? It is not possible to union rows where the number of columns varies, or the data types do not match between the union queries.
I created a quick example to illustrate:
--DROP SCHEMA testschema CASCADE;
CREATE SCHEMA testschema;
CREATE TABLE testschema.test1 (
id integer,
fid integer,
metadata text
);
CREATE TABLE testschema.test2 (
id integer,
fid integer,
city text,
count integer
);
CREATE TABLE testschema.test3 (
id integer,
fid integer
);
INSERT INTO testschema.test1 VALUES (1, 4450, 'lala');
INSERT INTO testschema.test2 VALUES (33, 6682, 'London', 12345);
INSERT INTO testschema.test3 VALUES (185, 8991);
SELECT
row_number() OVER (ORDER BY a.fid) AS qid, a.*
FROM
(
SELECT id, fid, row_to_json(t.*) AS jsondoc FROM testschema.test1 t
UNION ALL
SELECT id, fid, row_to_json(t.*) AS jsondoc FROM testschema.test2 t
UNION ALL
SELECT id, fid, row_to_json(t.*) AS jsondoc FROM testschema.test3 t
) a
SELECT output:
qid id fid jsondoc
1; 1; 4450; "{"id":1,"fid":4450,"metadata":"lala"}"
2; 33; 6682; "{"id":33,"fid":6682,"city":"London","count":12345}"
3; 185; 8991; "{"id":185,"fid":8991}"
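The hstore alternative mentioned above looks much the same. A hedged sketch (requires CREATE EXTENSION hstore; every value comes back as text):
SELECT
row_number() OVER (ORDER BY a.fid) AS qid, a.*
FROM
(
SELECT id, fid, hstore(t.*) AS kvdoc FROM testschema.test1 t
UNION ALL
SELECT id, fid, hstore(t.*) AS kvdoc FROM testschema.test2 t
UNION ALL
SELECT id, fid, hstore(t.*) AS kvdoc FROM testschema.test3 t
) a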

Transposing a table through select query

I have a table like:
Key type value
---------------------
40 A 12.34
41 A 10.24
41 B 12.89
I want it in the format:
Types 40 41 42 (keys)
---------------------------------
A 12.34 10.24 XXX
B YYY 12.89 ZZZ
How can this be done through an SQL query? Case statements, decode?
What you're looking for is called a "pivot" (see also "Pivoting Operations" in the Oracle Database Data Warehousing Guide):
SELECT *
FROM tbl
PIVOT(SUM(value) FOR Key IN (40, 41, 42))
It was added to Oracle in 11g. Note that you need to specify the result columns (the values from the unpivoted column that become the pivoted column names) in the pivot clause. Any columns not specified in the pivot are implicitly grouped by. If you have columns in the original table that you don't wish to group by, select from a view or subquery, rather than from the table.
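For example, to pivot only the Key, type and value columns of a wider table, select them in a subquery first (a sketch using the column names from the question; any extra columns are simply left out of the subquery):
SELECT *
FROM (SELECT Key, type, value FROM tbl)
PIVOT(SUM(value) FOR Key IN (40, 41, 42))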
You can engage in a bit of wizardry and get Oracle to create the statement for you, so that you don't need to figure out what column values to pivot on. In 11g, when you know the column values are numeric:
SELECT
'SELECT * FROM tbl PIVOT(SUM(value) FOR Key IN ('
|| LISTAGG(Key, ',') WITHIN GROUP (ORDER BY Key)
|| ');'
FROM tbl;
If the column values might not be numeric:
SELECT
'SELECT * FROM tbl PIVOT(SUM(value) FOR Key IN (\''
|| LISTAGG(Key, '\',\'') WITHIN GROUP (ORDER BY Key)
|| '\'));'
FROM tbl;
LISTAGG probably repeats duplicates (would someone test this?), in which case you'd need:
SELECT
'SELECT * FROM tbl PIVOT(SUM(value) FOR Key IN (\''
|| LISTAGG(Key, '\',\'') WITHIN GROUP (ORDER BY Key)
|| '\'));'
FROM (SELECT DISTINCT Key FROM tbl);
You could go further, defining a function that takes a table name, aggregate expression and pivot column name that returns a pivot statement by first producing then evaluating the above statement. You could then define a procedure that takes the same arguments and produces the pivoted result. I don't have access to Oracle 11g to test it, but I believe it would look something like:
CREATE PACKAGE dynamic_pivot AS
-- creates a PIVOT statement dynamically
FUNCTION pivot_stmt (tbl_name IN varchar2(30),
pivot_col IN varchar2(30),
aggr_expr IN varchar2(40),
quote_values IN BOOLEAN DEFAULT TRUE)
RETURN varchar2(300);
PRAGMA RESTRICT_REFERENCES (pivot_stmt, WNDS, RNPS);
-- creates & executes a PIVOT
PROCEDURE pivot_table (tbl_name IN varchar2(30),
pivot_col IN varchar2(30),
aggr_expr IN varchar2(40),
quote_values IN BOOLEAN DEFAULT TRUE);
END dynamic_pivot;
CREATE PACKAGE BODY dynamic_pivot AS
FUNCTION pivot_stmt (
tbl_name IN varchar2(30),
pivot_col IN varchar2(30),
aggr_expr IN varchar2(40),
quote_values IN BOOLEAN DEFAULT TRUE
) RETURN varchar2(300)
IS
stmt VARCHAR2(400);
quote VARCHAR2(2) DEFAULT '';
BEGIN
IF quote_values THEN
quote := '\\\'';
END IF;
-- "\||" shows that you are still in the dynamic statement string
-- The input fields aren't sanitized, so this is vulnerable to injection
EXECUTE IMMEDIATE 'SELECT \'SELECT * FROM ' || tbl_name
|| ' PIVOT(' || aggr_expr || ' FOR ' || pivot_col
|| ' IN (' || quote || '\' \|| LISTAGG(' || pivot_col
|| ', \'' || quote || ',' || quote
|| '\') WITHIN GROUP (ORDER BY ' || pivot_col || ') \|| \'' || quote
|| '));\' FROM (SELECT DISTINCT ' || pivot_col || ' FROM ' || tbl_name || ');'
INTO stmt;
RETURN stmt;
END pivot_stmt;
PROCEDURE pivot_table (tbl_name IN varchar2(30), pivot_col IN varchar2(30), aggr_expr IN varchar2(40), quote_values IN BOOLEAN DEFAULT TRUE) IS
BEGIN
EXECUTE IMMEDIATE pivot_stmt(tbl_name, pivot_col, aggr_expr, quote_values);
END pivot_table;
END dynamic_pivot;
Note: the length of the tbl_name, pivot_col and aggr_expr parameters comes from the maximum table and column name length. Note also that the function is vulnerable to SQL injection.
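One hedged way to reduce the injection risk is to validate the identifiers with the built-in DBMS_ASSERT package before concatenating them; for example (tbl and key are the table and column from the question):
DECLARE
safe_tbl VARCHAR2(30);
safe_col VARCHAR2(30);
BEGIN
safe_tbl := DBMS_ASSERT.SQL_OBJECT_NAME('TBL'); -- raises an error unless it names an existing object
safe_col := DBMS_ASSERT.SIMPLE_SQL_NAME('KEY'); -- raises an error unless it is a valid simple SQL name
DBMS_OUTPUT.PUT_LINE(safe_tbl || '.' || safe_col);
END;
/
Inside pivot_stmt you would validate tbl_name and pivot_col the same way before splicing them into the statement.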
In pre-11g, you can apply MySQL pivot statement generation techniques (which produces the type of query others have posted, based on explicitly defining a separate column for each pivot value).
Pivot does simplify things greatly. Before 11g however, you need to do this manually.
select
type,
sum(case when key = 40 then value end) as val_40,
sum(case when key = 41 then value end) as val_41,
sum(case when key = 42 then value end) as val_42
from my_table
group by type;
Never tried it but it seems at least Oracle 11 has a PIVOT clause
If you do not have access to 11g, you can utilize string aggregation and grouping to approximate what you are looking for, such as:
with data as(
SELECT 40 KEY , 'A' TYPE , 12.34 VALUE FROM DUAL UNION
SELECT 41 KEY , 'A' TYPE , 10.24 VALUE FROM DUAL UNION
SELECT 41 KEY , 'B' TYPE , 12.89 VALUE FROM DUAL
)
select
TYPE ,
wm_concat(KEY) KEY ,
wm_concat(VALUE) VALUE
from data
GROUP BY TYPE;
type KEY VALUE
------ ------- -----------
A 40,41 12.34,10.24
B 41 12.89
This is based on wm_concat as shown here: http://www.oracle-base.com/articles/misc/StringAggregationTechniques.php
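Note that wm_concat is an undocumented function and is no longer available in recent Oracle releases; on 11gR2 and later the same grouping can be written with the supported LISTAGG, ordering both aggregates by KEY so the pairs stay aligned (a sketch against the same sample data):
with data as(
SELECT 40 KEY , 'A' TYPE , 12.34 VALUE FROM DUAL UNION
SELECT 41 KEY , 'A' TYPE , 10.24 VALUE FROM DUAL UNION
SELECT 41 KEY , 'B' TYPE , 12.89 VALUE FROM DUAL
)
select
TYPE ,
listagg(KEY, ',') within group (order by KEY) KEY ,
listagg(VALUE, ',') within group (order by KEY) VALUE
from data
GROUP BY TYPE;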
I'm going to leave this here just in case it helps, but I think PIVOT or MikeyByCrikey's answers would best suit your needs after re-looking at your sample results.