Creating a table with dynamic json column - postgresql - sql

I have 3 tables containing array columns: config_table, main_table and field_table. My objective is to get an output_table that can have variable column names (and obviously values) depending on what is configured in the config_table.
config_table:
type_id fields field_ids
1 {A} {1}
1 {B,C} {2,3}
1 {D} {4}
main_table:
type_id value
1 12
1 34
2 56
2 78
3 99
field_table:
value field_data
12 {"1": "Hello", "2": "foo", "3": "bar", "4": "Hi", "5": "ignore_this", "6": "ignore_this_too"}
34 {"1": "Iam", "2": "out", "3": "of", "4": "words", "5": "orange", "6": "banana"}
56 ...
78 ...
99 ...
EDIT
Since having dynamic/variable column names will not be feasible, the ideal output_table format would be:
type_id value json_data
1 12 {"A": "Hello", "B-C": "foo-bar", "D": "Hi"}
1 34 {"A": "Iam", "B-C": "out-of", "D": "words"}
I am trying to realize a general solution that would allow me to create output_table for N values in a single "field_ids" in config_table.
EDIT 2
Removed the redundant type column, added fields 5 and 6 to field_table.field_data, and added type_id 2 and 3 to main_table (these must be ignored in output_table because they are absent from config_table), to make the question easier to understand.

This gave me the desired output:
select M.type_id,
       M.value,
       JSON_OBJECT(ARRAY( select TD.temp_keys
                          from ( select array_to_string(TMD.fields, '-') as temp_keys,
                                        array_to_string(TMD.field_values, '-') as temp_values
                                 from ( select MC.fields,
                                               MC.field_ids,
                                               array( select TF.value
                                                      from jsonb_each_text(F.field_data) TF
                                                      where TF.key::integer = any(MC.field_ids) ) as field_values
                                        from config_table MC ) TMD ) TD ),
                   ARRAY( select TD.temp_values
                          from ( select array_to_string(TMD.fields, '-') as temp_keys,
                                        array_to_string(TMD.field_values, '-') as temp_values
                                 from ( select MC.fields,
                                               MC.field_ids,
                                               array( select TF.value
                                                      from jsonb_each_text(F.field_data) TF
                                                      where TF.key::integer = any(MC.field_ids) ) as field_values
                                        from config_table MC ) TMD ) TD ) )
from main_table M
inner join field_table F
        on M.value = F.value
where M.type_id in (select distinct CC.type_id from config_table CC)
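Outside SQL, the key/value pairing that the query performs can be sketched in Python (the data is copied from the tables above; `build_json_data` is a made-up name for illustration, not part of the question):

```python
# Config rows copied from config_table in the question.
config_table = [
    {"fields": ["A"], "field_ids": [1]},
    {"fields": ["B", "C"], "field_ids": [2, 3]},
    {"fields": ["D"], "field_ids": [4]},
]

def build_json_data(field_data):
    # For each config row: join fields with '-' to form the key, and join
    # the field_data values for the matching field_ids to form the value.
    out = {}
    for row in config_table:
        key = "-".join(row["fields"])
        out[key] = "-".join(field_data[str(i)] for i in row["field_ids"])
    return out

field_data_12 = {"1": "Hello", "2": "foo", "3": "bar", "4": "Hi",
                 "5": "ignore_this", "6": "ignore_this_too"}
print(build_json_data(field_data_12))
# {'A': 'Hello', 'B-C': 'foo-bar', 'D': 'Hi'}
```

Fields 5 and 6 are skipped automatically because no config row references them, which is exactly how the SQL version ignores them.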

Related

Key value table to json in BigQuery

Hey all,
I have a table that looks like this:
row | key | val
----+-----+---------
 1  | a   | 100
 2  | b   | 200
 3  | c   | "apple"
 4  | d   | {}
I want to convert it into JSON:
{
"a": 100,
"b": 200,
"c": "apple",
"d": {}
}
Note: the number of rows can change, so this is only an example.
Thanks in advance!
With string manipulation,
WITH sample_table AS (
SELECT 'a' key, '100' value UNION ALL
SELECT 'b', '200' UNION ALL
SELECT 'c', '"apple"' UNION ALL
SELECT 'd', '{}'
)
SELECT '{' || STRING_AGG(FORMAT('"%s": %s', key, value)) || '}' json
FROM sample_table;
You will get the following result, similar to your expected output.
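The same string assembly can be sketched in Python for reference. Note that this approach only works because each stored value is already valid JSON text (which is why apple is stored as "apple"):

```python
# Rows copied from the sample_table CTE above; each value is already
# valid JSON text, which is what makes plain concatenation safe here.
rows = [("a", "100"), ("b", "200"), ("c", '"apple"'), ("d", "{}")]

# Mirrors FORMAT('"%s": %s', key, value) aggregated with STRING_AGG
# (BigQuery's default STRING_AGG delimiter is ","), then wrapped in braces.
json_text = "{" + ",".join(f'"{k}": {v}' for k, v in rows) + "}"
print(json_text)  # {"a": 100,"b": 200,"c": "apple","d": {}}
```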

Is there any way in MariaDB to search for less than value from array of json objects

Here's my json doc:
[
  {
    "ID": 1,
    "Label": "Price",
    "Value": 399
  },
  {
    "ID": 2,
    "Label": "Company",
    "Value": "Apple"
  },
  {
    "ID": 3,
    "Label": "Model",
    "Value": "iPhone SE"
  }
]
Here's my table:
+----+------------------------------------------------------------------------------------------------------------------------------------+
| ID | Properties |
+----+------------------------------------------------------------------------------------------------------------------------------------+
| 1 | [{"ID":1,"Label":"Price","Value":399},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone SE"}] |
| 2 | [{"ID":1,"Label":"Price","Value":499},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone X"}] |
| 3 | [{"ID":1,"Label":"Price","Value":699},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone 11"}] |
| 4 | [{"ID":1,"Label":"Price","Value":999},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone 11 Pro"}] |
+----+------------------------------------------------------------------------------------------------------------------------------------+
Here's what I want to search on search query:
SELECT *
FROM mobiles
WHERE ($.Label = "Price" AND $.Value < 400)
AND ($.Label = "Model" AND $.Value = "iPhone SE")
The query above is for illustration purposes only; I just wanted to convey what I want to perform.
I also know the table could be normalized into two. But this table is a placeholder table, so let's just say it is going to stay the same.
I need to know whether it is possible to query the given JSON structure with the following operators: >, >=, <, <=, BETWEEN ... AND, IN, NOT IN, LIKE, NOT LIKE, <>
Since MariaDB does not support JSON_TABLE(), and its JSON path syntax supports only member/object selectors, filtering JSON here is not so straightforward. You can try this query, which tries to work around those limitations:
with a as (
select 1 as id, '[{"ID":1,"Label":"Price","Value":399},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone SE"}]' as properties union all
select 2 as id, '[{"ID":1,"Label":"Price","Value":499},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone X"}]' as properties union all
select 3 as id, '[{"ID":1,"Label":"Price","Value":699},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone 11"}]' as properties union all
select 4 as id, '[{"ID":1,"Label":"Price","Value":999},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone 11 Pro"}]' as properties
)
select *
from a
where json_value(a.properties,
/*Get path to Price property and replace property name to Value*/
replace(replace(json_search(a.properties, 'one', 'Price'), '"', ''), 'Label', 'Value')
) < 400
and json_value(a.properties,
/*And the same for Model name*/
replace(replace(json_search(a.properties, 'one', 'Model'), '"', ''), 'Label', 'Value')
) = "iPhone SE"
| id | properties
+----+------------
| 1 | [{"ID":1,"Label":"Price","Value":399},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone SE"}]
db<>fiddle here.
I would not use string functions. What is missing in MariaDB is the ability to unnest the array to rows - but it has all the JSON accessors we need to access the data. Using these methods rather than string methods avoids edge cases, for example when the values contain embedded double quotes.
You would typically unnest the array with the help of a table of numbers that has at least as many rows as there are elements in the biggest array. One method to generate that on the fly is row_number() against a table with sufficient rows - say sometable.
You can unnest the arrays as follows:
select t.id,
json_unquote(json_extract(t.properties, concat('$[', n.rn, '].Label'))) as label,
json_unquote(json_extract(t.properties, concat('$[', n.rn, '].Value'))) as value
from mytable t
inner join (select row_number() over() - 1 as rn from sometable) n
on n.rn < json_length(t.properties)
The rest is just aggregation:
select t.id
from (
select t.id,
json_unquote(json_extract(t.properties, concat('$[', n.rn, '].Label'))) as label,
json_unquote(json_extract(t.properties, concat('$[', n.rn, '].Value'))) as value
from mytable t
inner join (select row_number() over() - 1 as rn from sometable) n
on n.rn < json_length(t.properties)
) t
group by id
having
max(label = 'Price' and value + 0 < 400) = 1
and max(label = 'Model' and value = 'iPhone SE') = 1
Demo on DB Fiddle
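The unnest-then-aggregate shape of that query can be sketched outside SQL, e.g. in Python (rows copied from the question's table; `matches` is a made-up helper). Requiring that some array element satisfies each condition mirrors `HAVING MAX(cond) = 1` per group:

```python
import json

# Sample rows copied from the question's table.
mobiles = {
    1: '[{"ID":1,"Label":"Price","Value":399},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone SE"}]',
    2: '[{"ID":1,"Label":"Price","Value":499},{"ID":2,"Label":"Company","Value":"Apple"},{"ID":3,"Label":"Model","Value":"iPhone X"}]',
}

def matches(properties_json):
    # "Unnest" the array, then require that some element satisfies each
    # condition - the Python analogue of HAVING MAX(cond) = 1 per group.
    props = json.loads(properties_json)
    cheap = any(p["Label"] == "Price" and p["Value"] < 400 for p in props)
    model = any(p["Label"] == "Model" and p["Value"] == "iPhone SE" for p in props)
    return cheap and model

ids = [i for i, p in mobiles.items() if matches(p)]
print(ids)  # [1]
```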

PostgreSQL get any value from jsonb object

I want to get the value of either key 'a' or 'b' if either one exists. If neither exists, I want the value of any key in the map.
Example:
'{"a": "aaa", "b": "bbbb", "c": "cccc"}' should return aaa.
'{"b": "bbbb", "c": "cccc"}' should return bbbb.
'{"c": "cccc"}' should return cccc.
Currently I'm doing it like this:
SELECT COALESCE(o ->> 'a', o ->> 'b', o->> 'c') FROM...
The problem is that I don't really want to name key 'c' explicitly since there are objects that can have any key.
So how do I achieve the desired effect of "Get value of either 'a' or 'b' if either exists. If neither exists, grab anything that exists."?
I am using postgres 9.6.
maybe too long:
t=# with c(j) as (values('{"a": "aaa", "b": "bbbb", "c": "cccc"}'::jsonb))
, m as (select j,jsonb_object_keys(j) k from c)
, f as (select * from m where k not in ('a','b') limit 1)
t-# select COALESCE(j ->> 'a', j ->> 'b', j->>k) from f;
coalesce
----------
aaa
(1 row)
and with no a,b keys:
t=# with c(j) as (values('{"a1": "aaa", "b1": "bbbb", "c": "cccc"}'::jsonb))
, m as (select j,jsonb_object_keys(j) k from c)
, f as (select * from m where k not in ('a','b') limit 1)
select COALESCE(j ->> 'a', j ->> 'b', j->>k) from f;
coalesce
----------
cccc
(1 row)
The idea is to extract all keys with jsonb_object_keys, take the first "random" one (limit 1, with no order by), and then use it as the last COALESCE argument.
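In plain Python the intended lookup is simply the following sketch (`get_preferred` is a made-up name; dict iteration order stands in for the arbitrary key that LIMIT 1 picks):

```python
def get_preferred(obj):
    # Prefer 'a', then 'b'; otherwise fall back to an arbitrary remaining
    # key - the same effect as COALESCE(j ->> 'a', j ->> 'b', j ->> k).
    for key in ("a", "b"):
        if key in obj:
            return obj[key]
    return next(iter(obj.values()))

print(get_preferred({"a": "aaa", "b": "bbbb", "c": "cccc"}))  # aaa
print(get_preferred({"b": "bbbb", "c": "cccc"}))              # bbbb
print(get_preferred({"c": "cccc"}))                           # cccc
```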

Get sql query result as json

I have a table test with two columns A and B, and I create table1 and table2 from it.
test                table1            table2
A   B               A   count(A)      B   count(B)   A
95  1               95  7             1   3          95
5   11              5   2             11  2          5
95  1                                 9   4          95
95  9
95  1
95  9
5   11
95  9
95  9
How to get a result like:
{"node": [
{"child": [
{"value": 3,
"name": "1"},
{"value": 4,
"name": "9"}],
"value": 7,
"name": "95"},
{"child": [
{"value": 2,
"name": "11"}],
"value": 2,
"name": "5"}],
"name": "test",
"value": 9}
First I group by column A and count each group: name="95", value=7 and name="5", value=2. Within each group I also count column B. There are a lot of JSON functions, but so far I have no idea how to get the result above.
Finally, the query should look similar to:
select row_to_json(t) from ( select * , ( select array_to_json(array_agg(row_to_json(u))) from ( select * from table1 where table1.a=table2.a ) as u ) from table2 ) as t;
You can generate the correct JSON with a PL/pgSQL function. This is not very difficult, although sometimes a little tedious. Check this one (rename tt to the actual table name):
create or replace function test_to_json()
returns json language plpgsql
as $$
declare
    rec1 record;
    rec2 record;
    res text;
begin
    res = '{"node": [';
    for rec1 in
        select a, count(b) ct
        from tt
        group by 1
    loop
        res = format('%s{"child": [', res);
        for rec2 in
            select a, b, count(b) ct
            from tt
            where a = rec1.a
            group by 1, 2
        loop
            -- quote the name so it is a JSON string, as in the expected output
            res = res || format('{"value": %s, "name": "%s"},', rec2.ct, rec2.b);
        end loop;
        res = rtrim(res, ',');
        res = format('%s], "value": %s, "name": "%s"},', res, rec1.ct, rec1.a);
    end loop;
    res = rtrim(res, ',');
    res = format('%s], "name": "test", "value": %s}', res, (select count(b) from tt));
    return res::json;
end $$;
select test_to_json();
Ugly but working and without plpgsql:
select json_build_object('node', json_agg(q3), 'name', 'test', 'value', (select count(1) from test))
from
(select json_agg(q2) from
(select a as name, sum(value) as value, json_agg(json_build_object('name', q1.name, 'value', q1.value)) as child
from
(select a, b as name, count(1) as value from test group by 1, 2) as q1
group by 1) as q2
) as q3;
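As a cross-check, the same two levels of aggregation can be sketched in Python from the raw test rows (variable names here are made up for illustration):

```python
from collections import Counter

# The nine rows of the test table from the question.
test_rows = [(95, 1), (5, 11), (95, 1), (95, 9), (95, 1),
             (95, 9), (5, 11), (95, 9), (95, 9)]

by_a = Counter(a for a, _ in test_rows)   # table1: count per A
by_ab = Counter(test_rows)                # table2: count per (A, B)

result = {
    "node": [
        {
            "child": [{"value": ct, "name": str(b)}
                      for (a2, b), ct in by_ab.items() if a2 == a],
            "value": by_a[a],
            "name": str(a),
        }
        for a in by_a
    ],
    "name": "test",
    "value": len(test_rows),
}
print(result["value"])  # 9
```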

Combining rows to create two columns of data

I'm a bit confused about how to do this query properly. I have a table that looks like this, where district 0 represents a value that should be applied to all districts (global).
[ district ] [ code ] [ value ]
1 A 11
1 C 12
2 A 13
2 B 14
0 B 15
I have built a query (below) to combine the "global value" on each district.
[ district ] [ code ] [ district value ] [ global value ]
1 A 11 null -> row 1
1 B null 15 -> row 2
1 C 12 null -> row 3
2 A 13 null -> row 4
2 B 14 15 -> row 5
2 C null null -> row 6 (optional)
I did it by joining on the list of all possible district/code combinations.
select all_code.district, all_code.code, table_d.value, table_g.value
from (select distinct b.district, a.code
from temp_table a
inner join (select distinct district
from temp_table
where district <> 0) b
on 1 = 1) all_code
left join temp_table table_d
on table_d.code = all_code.code
and table_d.district = all_code.district
left join temp_table table_g
on table_g.code = all_code.code
and table_g.district = 0
This query works great but seems pretty ugly. Is there a better way of doing this? (note that I don't care if row #6 is there or not).
Here's a script if needed.
create table temp_table
(
district VARCHAR2(5) not null,
code VARCHAR2(5) not null,
value VARCHAR2(5) not null
);
insert into temp_table (district, code, value)
values ('1', 'A', '11');
insert into temp_table (district, code, value)
values ('1', 'C', '12');
insert into temp_table (district, code, value)
values ('2', 'A', '13');
insert into temp_table (district, code, value)
values ('2', 'B', '14');
insert into temp_table (district, code, value)
values ('0', 'B', '15');
Here is one of the options. Since you are on 10g, you can make use of a partition outer join (the partition by() clause) to fill the gaps:
with DCodes(code) as(
select 'A' from dual union all
select 'B' from dual union all
select 'C' from dual
),
DGlobal(code, value1) as(
select code
, value
from temp_table
where district = '0'
)
select tt.district
, dc.code
, tt.value
, dg.value1 as global_value
from temp_table tt
partition by(tt.district)
right join DCodes dc
on (dc.code = tt.code)
left join DGlobal dg
on (dg.code = dc.code)
where tt.district != '0'
order by 1, 2
Result:
DISTRICT CODE VALUE GLOBAL_VALUE
-------- ---- ----- ------------
1 A 11
1 B 15
1 C 12
2 A 13
2 B 14 15
2 C
I would argue that a lot of the "ugliness" comes from a lack of lookup tables for district and code. Without an authoritative source for those, you have to fabricate one from the values that are in use (hence the sub-queries with distinct).
In terms of cleaning up the query you have, the best I can come up with is to remove an unnecessary sub-query and use the proper syntax for the cross join:
SELECT a.district,
b.code,
c.value1,
d.value1
FROM (SELECT DISTINCT district FROM temp_table WHERE district <> 0) a
CROSS JOIN (SELECT DISTINCT code FROM temp_table) b
LEFT JOIN temp_table c
ON b.code = c.code AND a.district = c.district
LEFT JOIN temp_table d
ON b.code = d.code AND d.district = 0
ORDER BY district, code
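The cross-join-plus-two-left-joins shape shared by both queries can be sketched in Python (rows copied from the insert script; `None` plays the role of NULL):

```python
# Rows copied from the insert script; None plays the role of NULL.
rows = [("1", "A", "11"), ("1", "C", "12"), ("2", "A", "13"),
        ("2", "B", "14"), ("0", "B", "15")]

districts = sorted({d for d, _, _ in rows if d != "0"})
codes = sorted({c for _, c, _ in rows})

district_value = {(d, c): v for d, c, v in rows}
global_value = {c: v for d, c, v in rows if d == "0"}

# Cross join district x code, then two "left joins" via dict lookups.
combined = [(d, c, district_value.get((d, c)), global_value.get(c))
            for d in districts for c in codes]
for row in combined:
    print(row)
```

Row (2, C, None, None) appears here too; dropping it would correspond to adding a filter that at least one of the two values is present.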