I want to sum the pc field values from table tbl_test, grouped by metal_id.
I tried the approach from Postgres GROUP BY on jsonb inner field,
but it didn't work for me because my JSON has a different format.
tbl_test
id json
1 [
{
"pc": "4",
"metal_id": "1"
},
{
"pc": "4",
"metal_id": "5"
}
]
2 [
{
"pc": "1",
"metal_id": "1"
},
{
"pc": "1",
"metal_id": "2"
}
]
The output I want (grouped by metal_id, with the sum of pc) looks like this, e.g. for metal_id 1:
[
{
"metal_id": 1,
"pc": "5"
}
]
Thanks in advance!
You can use jsonb_array_elements() to expand the jsonb array, and then aggregate by metal id:
select obj ->> 'metal_id' metal_id, sum((obj ->> 'pc')::int) cnt
from mytable t
cross join lateral jsonb_array_elements(t.js) j(obj)
group by obj ->> 'metal_id'
order by metal_id
Demo on DB Fiddle:
metal_id | cnt
:------- | --:
1 | 5
2 | 1
5 | 4
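If you need the result shaped as JSON (as in the desired output above) rather than as rows, you can wrap the same aggregation; a minimal sketch using jsonb_build_object() and jsonb_agg(), assuming the same mytable(js) layout as the demo:
-- Build a JSON array of {metal_id, pc} objects from the grouped result
select jsonb_agg(jsonb_build_object('metal_id', metal_id, 'pc', total)) as result
from (
    select obj ->> 'metal_id' as metal_id, sum((obj ->> 'pc')::int) as total
    from mytable t
    cross join lateral jsonb_array_elements(t.js) j(obj)
    group by obj ->> 'metal_id'
) s;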
So I have a column in a Snowflake table that stores JSON data but the column is of a varchar data type.
The JSON looks like this:
{
"FLAGS": [],
"BANNERS": {},
"TOOLS": {
"game.appConfig": {
"type": [
"small",
"normal",
"huge"
],
"flow": [
"control",
"noncontrol"
]
}
},
"PLATFORM": {}
}
I want to filter only the data inside TOOLS and want to get the following result:
TOOLS_ID       | TOOLS
---------------+------
game.appConfig | type
game.appConfig | flow
How can I achieve this?
I assumed that TOOLS can contain more than one tool ID, so I wrote this query:
with mydata as ( select
'{
"FLAGS": [],
"BANNERS": {},
"TOOLS": {
"game.appConfig": {
"type": [
"small",
"normal",
"huge"
],
"flow": [
"control",
"noncontrol"
]
}
},
"PLATFORM": {}
}' as v1 )
select main.KEY TOOLS_ID, sub.KEY TOOLS
from mydata,
lateral flatten ( parse_json(v1):"TOOLS" ) main,
lateral flatten ( main.VALUE ) sub;
+----------------+-------+
| TOOLS_ID | TOOLS |
+----------------+-------+
| game.appConfig | flow |
| game.appConfig | type |
+----------------+-------+
Assuming the column name is C1 and table name T1:
select a.t:"TOOLS":"game.appConfig"::string
from (select parse_json(to_variant(C1)) t from T1) a
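Combining the two answers: if the varchar column is C1 in table T1 (the names assumed above), the double FLATTEN from the first answer can run directly against the table; a sketch:
select main.KEY as TOOLS_ID, sub.KEY as TOOLS
from T1,
     lateral flatten ( parse_json(C1):"TOOLS" ) main,
     lateral flatten ( main.VALUE ) sub;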
I have some JSON data stored in a column. I want to parse the JSON and extract all the values for a particular key.
Here's my sample data:
{
"fragments": [
{
"fragments": [
{
"fragments": [
{
"fragments": [],
"fragmentName": "D"
},
{
"fragments": [],
"fragmentName": "E"
},
{
"fragments": [],
"fragmentName": "F"
}
],
"fragmentName": "C"
}
],
"fragmentName": "B"
}
],
"fragmentName": "A"
}
Expected output:
D, E, F, C, B, A
I want to extract all fragmentName values from the above JSON.
I have gone through the questions below, but haven't found anything useful:
Collect Recursive JSON Keys In Postgres
Postgres recursive query with row_to_json
Edited:
Here's one approach I tried, based on the questions above:
WITH RECURSIVE key_and_value_recursive(key, value) AS (
SELECT
t.key,
t.value
FROM temp_frg_mapping, json_each(temp_frg_mapping.info::json) AS t
WHERE id=2
UNION ALL
SELECT
t.key,
t.value
FROM key_and_value_recursive,
json_each(CASE
WHEN json_typeof(key_and_value_recursive.value) <> 'object' THEN '{}' :: JSON
ELSE key_and_value_recursive.value
END) AS t
)
SELECT *
FROM key_and_value_recursive;
Output:
I'm only getting the top-level (level 0) keys and values.
I would use a recursive query, but with jsonb_array_elements():
with recursive cte as (
select id, info ->> 'fragmentName' as val, info -> 'fragments' as info, 1 lvl
from mytable
where id = 2
union all
select c.id, x.info ->> 'fragmentName', x.info -> 'fragments', c.lvl + 1
from cte c
cross join lateral jsonb_array_elements(c.info) as x(info)
where c.info is not null
)
select id, val, lvl
from cte
where val is not null
The query traverses the object depth-first; at each step, we unnest the JSON array and check whether a fragment name is available. We don't need to check the types of the returned values: we just keep applying the standard functions until the data is exhausted.
Demo on DB Fiddle
Sample data:
{
"fragments": [
{
"fragments": [
{
"fragments": [
{
"fragments": [
],
"fragmentName": "D"
},
{
"fragments": [
],
"fragmentName": "E"
},
{
"fragments": [
],
"fragmentName": "F"
}
],
"fragmentName": "C"
}
],
"fragmentName": "B"
}
],
"fragmentName": "A"
}
Results:
id | val | lvl
-: | :-- | --:
2 | A | 1
2 | B | 2
2 | C | 3
2 | D | 4
2 | E | 4
2 | F | 4
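If you need the single comma-separated list from the question (deepest fragments first), you can swap the final select of the recursive query above for an aggregate; a sketch (the order within a level follows traversal order and is not strictly guaranteed):
-- Collapse the recursive result into one string per id, deepest level first
select id, string_agg(val, ', ' order by lvl desc) as fragment_names
from cte
where val is not null
group by id;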
I have a JSONB column containing a list of objects.
Here's the table schema:
Column Name | Datatype
------------+----------
timestamp   | timestamp
data        | JSONB
Sample Data
1.
timestamp : 2020-02-02 19:01:21.571429+00
data : [
{
"tracker_id": "5",
"position": 1
},
{
"tracker_id": "11",
"position": 2
},
{
"tracker_id": "4",
"position": 1
}
]
2.
timestamp : 2020-02-02 19:01:23.571429+00
data : [
{
"tracker_id": "7",
"position": 3
},
{
"tracker_id": "4",
"position": 2
}
]
3.
timestamp : 2020-02-02 19:02:23.571429+00
data : [
{
"tracker_id": "5",
"position": 2
},
{
"tracker_id": "4",
"position": 1
}
]
I need to find the count of the transitions of tracker_id from position: 1 to position: 2
Here, the output will be 2, since tracker_ids 4 and 5 changed their position from 1 to 2.
Note
The transition should be considered in ascending order of timestamp.
The position change need not occur in consecutive records.
I'm using the timescaledb extension.
So far I've tried querying the objects in the list of an individual record, but I'm not sure how to merge the list objects across records and query them.
What would be the query for this? Should I write down a stored procedure instead?
I don't use the timescaledb extension, so I would choose a pure SQL solution based on unnesting the JSON:
with t (timestamp,data) as (values
(timestamp '2020-02-02 19:01:21.571429+00', '[
{
"tracker_id": "5",
"position": 1
},
{
"tracker_id": "11",
"position": 2
},
{
"tracker_id": "4",
"position": 1
}
]'::jsonb),
(timestamp '2020-02-02 19:01:23.571429+00', '[
{
"tracker_id": "7",
"position": 3
},
{
"tracker_id": "4",
"position": 2
}
]
'::jsonb),
(timestamp '2020-02-02 19:02:23.571429+00', '[
{
"tracker_id": "5",
"position": 2
},
{
"tracker_id": "4",
"position": 1
}
]
'::jsonb)
), unnested as (
select t.timestamp, r.tracker_id, r.position
from t
cross join lateral jsonb_to_recordset(t.data) AS r(tracker_id text, position int)
)
select count(*)
from unnested u1
join unnested u2
on u1.tracker_id = u2.tracker_id
and u1.position = 1
and u2.position = 2
and u1.timestamp < u2.timestamp;
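Applied to the actual table rather than inline values, the same unnesting looks like this; a sketch, assuming the table is called readings and has the timestamp and data columns from the schema above:
-- Unnest the jsonb array of each row, then self-join to find 1 -> 2 transitions
with unnested as (
    select t.timestamp, r.tracker_id, r.position
    from readings t
    cross join lateral jsonb_to_recordset(t.data) as r(tracker_id text, position int)
)
select count(*)
from unnested u1
join unnested u2
  on u1.tracker_id = u2.tracker_id
 and u1.position = 1
 and u2.position = 2
 and u1.timestamp < u2.timestamp;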
There are various functions that will help compose several database rows into a single JSON structure: row_to_json(), array_to_json(), and array_agg().
You can then use the usual SELECT with an ORDER BY clause to get the timestamps/JSON data you want, and use the above functions to create a single JSON structure.
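A minimal sketch of that composition, assuming the same hypothetical readings table as above:
-- One JSON array containing every row as an object, ordered by timestamp
select array_to_json(array_agg(row_to_json(t) order by t.timestamp)) as merged
from readings t;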
I'm struggling to understand arrays and structs in BigQuery. When I run this query in Standard SQL:
with t1 as (
select 1 as id, [1,2] as orders
union all
select 2 as id, null as orders
)
select
id,
orders
from t1
order by 1
I get this result in json:
[
{
"id": "1",
"orders": [
"1",
"2"
]
},
{
"id": "2",
"orders": []
}
]
I want to remove the orders value for id = 2 so that I instead get:
[
{
"id": "1",
"orders": [
"1",
"2"
]
},
{
"id": "2"
}
]
How can I do this? Do I need to add another CTE to remove null values, and if so, how?
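One possible sketch: build the JSON string per row with TO_JSON_STRING and leave out the orders key when the array is null or empty (this produces one JSON string per row rather than a single array):
with t1 as (
  select 1 as id, [1,2] as orders
  union all
  select 2 as id, null as orders
)
select
  case
    -- omit "orders" entirely when the array is null or empty
    when orders is null or array_length(orders) = 0 then to_json_string(struct(id))
    else to_json_string(struct(id, orders))
  end as row_json
from t1
order by id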
I have a table called DimCompany in SQL Server like so:
+----+---------+--------+
| id | Company | Budget |
+----+---------+--------+
| 1 | abc | 111 |
| 2 | def | 444 |
+----+---------+--------+
I would like to convert this table into a json file like so:
{
"DimCompany":{
"id":1,
"companydetails": [{
"columnid": "1",
"columnfieldname": "Company",
"columnfieldvalue: "abc"
}
{
"columnid": "2",
"columnfieldname": "Budget",
"columnfieldvalue: "111"
}]
}
},
{
"DimCompany":{
"id":2,
"companydetails": [{
"columnid": "1",
"columnfieldname": "Company",
"columnfieldvalue: "def"
}
{
"columnid": "2",
"columnfieldname": "Budget",
"columnfieldvalue: "444"
}]
}
}
where columnid is the value from sys.columns for that column field name. I've tried doing this by unpivoting the table, joining sys.columns on the field name where sys.objects.name = DimCompany, putting this in a view, and then querying the view to get JSON output for migration into DocumentDB.
However, I would like to avoid UNPIVOT and instead form a query directly to get the desired output.
I'm just curious whether this is possible in SQL Server or in any other tool.
Without using UNPIVOT, and instead doing the unpivoting yourself with a UNION, the following SQL:
if object_id(N'dbo.DimCompany') is not null drop table dbo.DimCompany;
create table dbo.DimCompany (
id int not null identity(1,1),
Company nvarchar(50) not null,
Budget float not null
);
insert dbo.DimCompany (Company, Budget) values
('abc', 111),
('def', 444);
go
select id as 'DimCompany.id',
(
select columnid=cast(sc.column_id as nvarchar), columnfieldname, columnfieldvalue
from (
select N'Company', Company from dbo.DimCompany DC2 where DC2.id = DC1.id
union
select N'Budget', cast(Budget as nvarchar) from dbo.DimCompany DC2 where DC2.id = DC1.id
) keyValues (columnfieldname, columnfieldvalue)
join sys.columns sc on sc.object_id=object_id(N'dbo.DimCompany') and sc.name=columnfieldname
for json path
) as 'DimCompany.companydetails'
from dbo.DimCompany DC1
for json path, without_array_wrapper;
Produces the following JSON as per your example:
{
"DimCompany": {
"id": 1,
"companydetails": [
{
"columnid": "2",
"columnfieldname": "Company",
"columnfieldvalue": "abc"
},
{
"columnid": "3",
"columnfieldname": "Budget",
"columnfieldvalue": "111"
}
]
}
},
{
"DimCompany": {
"id": 2,
"companydetails": [
{
"columnid": "2",
"columnfieldname": "Company",
"columnfieldvalue": "def"
},
{
"columnid": "3",
"columnfieldname": "Budget",
"columnfieldvalue": "444"
}
]
}
}
Things to note:
The sys.columns columnid values start at 1 for the dbo.DimCompany.id column. Subtract 1 before casting if that's a requirement.
Using without_array_wrapper removes the surrounding [] characters, per your example, but the result isn't really valid JSON.
I doubt this would be scalable for tables with large numbers of columns.
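If both notes matter for your migration (columnid values relative to the first data column, and output that is valid JSON), a variant of the same query might look like this, assuming the same dbo.DimCompany setup:
select id as 'DimCompany.id',
    (
        -- subtract 1 so Company/Budget become columnid 1/2, matching the desired output
        select columnid = cast(sc.column_id - 1 as nvarchar), columnfieldname, columnfieldvalue
        from (
            select N'Company', Company from dbo.DimCompany DC2 where DC2.id = DC1.id
            union
            select N'Budget', cast(Budget as nvarchar) from dbo.DimCompany DC2 where DC2.id = DC1.id
        ) keyValues (columnfieldname, columnfieldvalue)
        join sys.columns sc on sc.object_id = object_id(N'dbo.DimCompany') and sc.name = columnfieldname
        for json path
    ) as 'DimCompany.companydetails'
from dbo.DimCompany DC1
for json path;  -- keeping the array wrapper yields a valid JSON array of objects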