Postgres - how to select jsonb key/value pairs as columns?

Having records in a test table like so (the metrics column is of type jsonb):

id  name      metrics
1   machine1  {"metric1": 50, "metric2": 100}
2   machine2  {"metric1": 31, "metric2": 46}
I would like to select the metrics as additional columns, e.g. (pseudo-code):
Select *, json_each(test.metrics) from test;
to get the result like:
id  name      metric1  metric2
1   machine1  50       100
2   machine2  31       46
Is this even possible?

Use the ->> operator:
select id, name,
       metrics ->> 'metric1' as metric1,
       metrics ->> 'metric2' as metric2
from test;

You can simply use ->>:
SELECT
    id,
    name,
    metrics ->> 'metric1' as metric1,
    metrics ->> 'metric2' as metric2
FROM test;
Note that the metric columns are now of type text. If you want them to be of type integer, you need to cast them additionally:
(metrics ->> 'metric1')::int
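If you would rather get typed columns without casting each key by hand, Postgres also offers jsonb_to_record, which maps the keys onto a column definition list you spell out yourself (it is not inferred). A minimal sketch against the same test table:

select t.id, t.name, m.metric1, m.metric2
from test t,
     jsonb_to_record(t.metrics) as m(metric1 int, metric2 int);

This is handy when you know the key set up front; for truly dynamic keys you would still need dynamic SQL, since a query's column list is fixed at parse time.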

Related

Postgres - selecting multiple columns and dynamically converting the result columns into rows (transposing columns to rows)

I have a table (Damage) with the following structure:

Rowid  damageTypeACount  damageTypeBCount  damageTypeCCount
1      23                44                33
I also need to read these rows with a result like the one below (label and id are set manually), essentially transposing the columns into rows but with additional properties:

id  damagecount  label
1   23           Damage Type A
2   44           Damage Type B
3   33           Damage Type C
I did the following query and it works, but I wonder if there is a better way:
SELECT 1 as id,
       damageTypeACount as damagecount,
       'Damage Type A Count' as label
FROM Damage WHERE rowId = 1
UNION ALL
SELECT 2 as id,
       damageTypeBCount as damagecount,
       'Damage Type B Count' as label
FROM Damage WHERE rowId = 1
UNION ALL
SELECT 3 as id,
       damageTypeCCount as damagecount,
       'Damage Type C Count' as label
FROM Damage WHERE rowId = 1;
The above query works as expected, but I was wondering if it is possible to do this in a single SELECT statement, transposing the columns into rows.
You can unpivot with a lateral join:
select x.*
from damage d
cross join lateral (values
    (d.rowId, d.damageTypeACount, 'Damage Type A'),
    (d.rowId, d.damageTypeBCount, 'Damage Type B'),
    (d.rowId, d.damageTypeCCount, 'Damage Type C')
) as x(id, damagecount, label)
This reuses the original rowId as the id of each generated row. You can also generate new ids with row_number():
select row_number() over (order by x.rowId, x.label) as id, x.*
from damage d
cross join lateral (values
    (d.rowId, d.damageTypeACount, 'Damage Type A'),
    (d.rowId, d.damageTypeBCount, 'Damage Type B'),
    (d.rowId, d.damageTypeCCount, 'Damage Type C')
) as x(rowId, damagecount, label)
You can filter the result set with a where clause if needed:
where d.rowId = 1
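Putting the pieces together for the sample row, the full query might look like this (the same lateral unpivot as above, with the filter applied):

select row_number() over (order by x.rowId, x.label) as id,
       x.damagecount, x.label
from damage d
cross join lateral (values
    (d.rowId, d.damageTypeACount, 'Damage Type A'),
    (d.rowId, d.damageTypeBCount, 'Damage Type B'),
    (d.rowId, d.damageTypeCCount, 'Damage Type C')
) as x(rowId, damagecount, label)
where d.rowId = 1;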

SQL grouping by distinct values in a multi-value string column

I want to perform a group-by based on the distinct values in a string column that has multiple values.
The column holds a list of strings in a standard comma-separated format. The potential values are only a, b, c, d.
For example the column collection (type: String) contains:
Row 1: ["a","b"]
Row 2: ["b","c"]
Row 3: ["b","c","a"]
Row 4: ["d"]
The expected output is a count of unique values:
collection | count
a | 2
b | 3
c | 2
d | 1
For all of the below I used this table:
create table tmp (
    id INT auto_increment,
    test VARCHAR(255),
    PRIMARY KEY (id)
);
insert into tmp (test) values
    ("a,b"),
    ("b,c"),
    ("b,c,a"),
    ("d");
If the possible values are only a, b, c, d you can try one of these.
Take note that this only works if no value is a substring of another (e.g. test and test_new), because test would then also be joined with all the test_new rows and the count would not match:
select collection, COUNT(*) as count from tmp JOIN (
    select CONCAT("%", tb.collection, "%") as like_collection, collection from (
        select "a" COLLATE utf8_general_ci as collection
        union select "b" COLLATE utf8_general_ci as collection
        union select "c" COLLATE utf8_general_ci as collection
        union select "d" COLLATE utf8_general_ci as collection
    ) tb
) tb1
ON tmp.test LIKE tb1.like_collection
GROUP BY tb1.collection;
Which will give you the result you want:
collection | count
a | 2
b | 3
c | 2
d | 1
Or you can try this one:
SELECT
(SELECT COUNT(*) FROM tmp WHERE test LIKE '%a%') as a_count,
(SELECT COUNT(*) FROM tmp WHERE test LIKE '%b%') as b_count,
(SELECT COUNT(*) FROM tmp WHERE test LIKE '%c%') as c_count,
(SELECT COUNT(*) FROM tmp WHERE test LIKE '%d%') as d_count
;
The result would look like this:
a_count | b_count | c_count | d_count
2 | 3 | 2 | 1
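If substring collisions are a concern, MySQL's FIND_IN_SET matches whole comma-separated items and sidesteps the LIKE pitfall entirely. A sketch against the same tmp table (this variant is my suggestion, not part of the answers above):

select tb1.collection, count(*) as count
from tmp
join (
    select 'a' as collection
    union select 'b'
    union select 'c'
    union select 'd'
) tb1 on find_in_set(tb1.collection, tmp.test) > 0
group by tb1.collection;

FIND_IN_SET returns the 1-based position of the item in the comma-separated list, or 0 if it is absent, so 'test' will never match a 'test_new' row.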
What you need to do is first explode the collection column into separate rows (like a flatMap operation). In Redshift the only way to generate new rows is to JOIN, so let's CROSS JOIN your input table with a static table of consecutive numbers and keep only the numbers less than or equal to the number of elements in the collection. Then we'll use the split_part function to read the item at the correct index. Once we have the exploded table, we'll do a simple GROUP BY.
If your items are stored as JSON array strings ('["a", "b", "c"]') then you can use JSON_ARRAY_LENGTH and JSON_EXTRACT_ARRAY_ELEMENT_TEXT instead of REGEXP_COUNT and SPLIT_PART respectively.
with
index as (
    select 1 as i
    union all select 2
    union all select 3
    union all select 4 -- could be substituted with 'select row_number() over () as i from arbitrary_table limit 4'
),
agg as (
    select 'a,b' as collection
    union all select 'b,c'
    union all select 'b,c,a'
    union all select 'd'
)
select
    split_part(collection, ',', i) as item,
    count(*)
from index, agg
where regexp_count(agg.collection, ',') + 1 >= index.i -- only keep rows where the number of items matches
group by 1
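For the JSON-array-string storage mentioned above, the same pattern with the JSON functions might look like this (a sketch; note that JSON_EXTRACT_ARRAY_ELEMENT_TEXT is 0-indexed, hence the i - 1):

with
index as (
    select 1 as i
    union all select 2
    union all select 3
    union all select 4
),
agg as (
    select '["a","b"]' as collection
    union all select '["b","c"]'
    union all select '["b","c","a"]'
    union all select '["d"]'
)
select
    json_extract_array_element_text(collection, i - 1) as item,
    count(*)
from index, agg
where json_array_length(agg.collection) >= index.i
group by 1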

Oracle transpose a simple table

I have a simple table from a select query that looks like this:

CATEGORY | EQUAL | LESS | GREATER
VALUE    | 60    | 100  | 20

I want to be able to transpose it so it looks like this:

CATEGORY | VALUE
EQUAL    | 60
LESS     | 100
GREATER  | 20
I tried using the pivot function in Oracle but I can't seem to get it to work.
I've looked all over online but I can't find anything that helps.
Any help is much appreciated, thank you!
Using unpivot -
CREATE TABLE Table1
(CATEGORY varchar2(5), EQUAL int, LESS int, GREATER int)
;
INSERT ALL
INTO Table1 (CATEGORY, EQUAL, LESS, GREATER)
VALUES ('VALUE', 60, 100, 20)
SELECT * FROM dual
;
Query -
select COL AS CATEGORY,VALUE from table1
unpivot (value for COL in (EQUAL, LESS, GREATER));
Result -
CATEGORY VALUE
EQUAL 60
LESS 100
GREATER 20
You can use union all:
select 'EQUAL' as category, equal as value from t union all
select 'LESS' as category, less from t union all
select 'GREATER' as category, greater from t;
If you had a large table, you might want to try some other method (such as a lateral join in Oracle 12c). But for a small table, this is fine.
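For reference, a sketch of what such a lateral unpivot could look like in Oracle 12c using CROSS APPLY (my own illustration, not part of the original answer):

select x.category, x.value
from t
cross apply (
    select 'EQUAL' as category, t.equal as value from dual
    union all
    select 'LESS', t.less from dual
    union all
    select 'GREATER', t.greater from dual
) x;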
You may unpivot your values with the help of DECODE and a CONNECT BY clause:
select decode(myId, 1, 'EQUAL',
2, 'LESS',
3, 'GREATER') as category,
decode(myId, 1, EQUAL,
2, LESS,
3, GREATER) as value
from mytable
cross join (select level as myId from dual connect by level <= 3);
CATEGORY VALUE
-------- -----
EQUAL 60
LESS 100
GREATER 20

Oracle SQL: Transpose / rotate a table result having one row and many columns

I'm looking for a way to transpose or rotate a table in Oracle SQL. In this case there is only one row in the SELECT, but multiple columns.
Example:
SELECT
    id AS "Id",
    name AS "Name",
    some_value AS "Favorite color"
FROM
    my_table -- "table" is a reserved word in Oracle, so a placeholder name is used here
WHERE
    id = 5;
Result:
id | name | some_value
--- ------ -----------
5 John Orange
What I would like to see is:
Id | 5
Name | John
Favorite color | Orange
I'm aware of PIVOT, but I'm struggling to see a simple code with this case.
You can unpivot the columns to get this result as follows:
select fld, val
from (
select to_char(id) as "Id", -- convert all columns to same type
name as "Name",
some_value as "Favorite color"
from your_table
where id = 5
) unpivot(val for fld in("Id", "Name", "Favorite color"));
Use a simple UNION ALL clause:
SELECT 'Id' As field_name, cast( id as varchar2(100)) as Value FROM "TABLE" where id = 5
UNION ALL
SELECT 'Name' , name FROM "TABLE" where id = 5
UNION ALL
SELECT 'Favorite color' , some_value FROM "TABLE" where id = 5;
Frank Ockenfuss gave the answer I was looking for. Thanks, Frank!
However, a minor change makes renaming the columns a bit easier:
SELECT * FROM (
SELECT
TO_CHAR(id) AS id,
TO_CHAR(name) AS name,
TO_CHAR(some_value) AS fav_color
FROM my_table
WHERE id = 5
) UNPIVOT(value FOR key IN(
id AS 'Id',
name AS 'Name',
fav_color AS 'Favorite color'
));
Result:
key | value
-------------- ------
Id 5
Name John
Favorite color Orange

BLOB aggregation

I've got the table:
create table example (id number, image varchar2(10));
With 2 rows:
insert into example values (24, 'pippo');
insert into example values (35, 'pluto');
The query is:
select max(case when id=24 then image end) as col1,
       max(case when id=35 then image end) as col2
from example;
This works perfectly fine with the column "image" as varchar2! The problem is that the column is actually a BLOB.
How can I produce the same output?
Bear in mind that I need to pull 28 images out of the table (so 28 columns and one row).
Still not clear why you need the result in columns and not rows... perhaps you only think you do? Alex asked you the most important question: Who/what will call this and consume the results? How is the result set used in further processing? It is possible you don't actually need the results in columns.
In any case, you can use Alex's suggestion of having 28 subquery expressions. If your concern is with performance, you can first select into a CTE (a WITH clause), whose result set will have just 28 rows - and then write the subquery expressions against the CTE. Something like this:
with prep ( id, blob_col ) as (
    select id, blob_col from base_table where id in (24, 35, ... )
)
select
    (select blob_col from prep where id = 24) as col_1,
    (select blob_col from prep where id = 35) as col_2,
    ...
from dual
;
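As a concrete instance with just the two sample rows from the question (a minimal sketch, assuming the example table with image as a BLOB), this works because scalar subqueries, unlike MAX, can return a BLOB:

with prep (id, image) as (
    select id, image from example where id in (24, 35)
)
select
    (select image from prep where id = 24) as col1,
    (select image from prep where id = 35) as col2
from dual;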