How to aggregate the elements in a struct in BigQuery SQL

"struct": [
{
"ele_1": "abcd",
"ele_2": "1.0"
},
{
"ele_1": "egf",
"ele_2": "1.0"
}
]
I have data like this in struct format. I am trying to first STRING_AGG the ele_1 values in the struct array and then SUM ele_2. I have tried UNNEST(struct), but that causes duplicates. The result I want looks like:
"ele_1": "abcd,egf",
"ele_2": "2.0"

You seem to have an array of records and want to separately aggregate the fields:
select t.*,
       (select array_agg(rec.ele_1)
        from unnest(t.record_array) rec
       ) as ele_1_agg,
       (select sum(cast(rec.ele_2 as float64))  -- cast needed: ele_2 is a string in the sample
        from unnest(t.record_array) rec
       ) as ele_2_sum
from t;
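If you want output exactly like the sample (a comma-separated string plus a numeric total), a sketch of the same idea with STRING_AGG, still assuming the array column is named record_array:

select t.* except(record_array),
       (select string_agg(rec.ele_1)
        from unnest(t.record_array) rec) as ele_1,
       (select sum(cast(rec.ele_2 as float64))
        from unnest(t.record_array) rec) as ele_2
from t;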

Another (more optimized) option is below:
select * except(struct_col),
  (select as struct
     string_agg(ele_1) ele_1,
     sum(cast(ele_2 as float64)) ele_2  -- cast needed: ele_2 is a string in the sample
   from t.struct_col
  ).*
from `project.dataset.table` t
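You can test and play with it using inlined dummy data along these lines (a sketch; the id column is added only so that * except(struct_col) has something left to select):

with `project.dataset.table` as (
  select 1 as id, [
    struct('abcd' as ele_1, '1.0' as ele_2),
    struct('egf' as ele_1, '1.0' as ele_2)
  ] as struct_col
)
select * except(struct_col),
  (select as struct
     string_agg(ele_1) ele_1,
     sum(cast(ele_2 as float64)) ele_2
   from t.struct_col
  ).*
from `project.dataset.table` t
-- expected: id = 1, ele_1 = 'abcd,egf', ele_2 = 2.0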

Related

Extracting Array elements in Presto w/o using unnest function

I have a requirement around this data where I need to extract array elements but I still want to keep them grouped, which means I cannot use the unnest function. Below is the sample data:
[
  { "emp_id": 8291828, "name": "bruce" },
  { "emp_id": 8291823, "name": "Rolli" }
]
My data is in the same format as above, i.e. array(row(emp_id varchar, name varchar)). What I need is to get rid of the array, so that the data looks like:
{ "emp_id": 8291828, "name": "bruce" },
{ "emp_id": 8291823, "name": "Rolli" }
Would appreciate it if anyone can help me with this.
You could use element_at if you have a sequence table (1, 2, 3, ...):
with numbers as (
  -- hardcoded index values 1..3; needs at least as many as the longest array
  select * from (
    values (1), (2), (3)
  ) as x(i)
),
emp as (
  select *
  from (
    values (ARRAY[
      cast(ROW(8291828, 'bruce') as row(emp_id bigint, name varchar)),
      cast(ROW(8291823, 'Rolli') as row(emp_id bigint, name varchar))
    ])
  ) as emp (records)
)
select
  -- element_at pulls one element per index, keeping each row value intact
  element_at(emp.records, i) record
from numbers n
cross join emp
where n.i <= cardinality(emp.records);
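If your Presto version supports sequence(), the hardcoded numbers CTE can be derived instead. A sketch reusing the emp CTE above; note that only the generated index array is unnested, so the records themselves stay grouped as row values:

select element_at(emp.records, i) record
from emp
cross join unnest(sequence(1, cardinality(emp.records))) as n(i);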

Asking for help on the correct way to use SQL with CTEs to create a JSON_OBJECT

The requested JSON needs to be in this form:
{
  "header": {
    "InstanceName": "US"
  },
  "erpReferenceData": {
    "erpReferences": [
      {
        "ServiceID": "fb16e421-792b-4e9c-935b-3cea04a84507",
        "ERPReferenceID": "J0000755"
      },
      {
        "ServiceID": "7d13d907-0932-44c0-ad81-600c9b97b6e5",
        "ERPReferenceID": "J0000756"
      }
    ]
  }
}
The program that I created looks like this:
dcl-s OutFile sqltype(dbclob_file);
exec sql
With x as (
select json_object(
'InstanceName' : trim(Cntry) ) objHeader
from xmlhdr
where cntry = 'US'),
y as (
select json_object(
'ServiceID' VALUE S.ServiceID,
'ERPReferenceID' VALUE I.RefCod) oOjRef
FROM IMH I
INNER JOIN GUIDS G ON G.REFCOD = I.REFCOD
INNER JOIN SERV S ON S.GUID = G.GUID
WHERE G.XMLTYPE = 'Service')
VALUES (
select json_object('header' : objHeader Format json ,
'erpReferenceData' : json_object(
'erpReferences' VALUE
JSON_ARRAYAGG(
ObjRef Format json)))
from x
LEFT OUTER JOIN y ON 1=1
Group by objHeader)
INTO :OutFile;
This is the compile error I get:
SQL0122: Position 41 Column OBJHEADER or expression in SELECT list not valid.
Is this the correct way to create this SQL statement? Is there a better, easier way? Any idea how to rewrite the statement to make it work correctly?
The key with generating JSON or XML for that matter is to start from the inside and work your way out.
(I've simplified the raw data into just a test table...)
with elm as (
  select json_object('ServiceID' VALUE ServiceID,
                     'ERPReferenceID' VALUE RefCod) as erpRef
  from jsontst
)
select * from elm;
Now add the next layer as a CTE that builds on the first one...
with elm as (
  select json_object('ServiceID' VALUE ServiceID,
                     'ERPReferenceID' VALUE RefCod) as erpRef
  from jsontst
),
arr (arrDta) as (
  values json_array (select erpRef from elm)
)
select * from arr;
And the next layer...
with elm as (
  select json_object('ServiceID' VALUE ServiceID,
                     'ERPReferenceID' VALUE RefCod) as erpRef
  from jsontst
),
arr (arrDta) as (
  values json_array (select erpRef from elm)
),
erpReferences (refs) as (
  select json_object('erpReferences' value arrDta)
  from arr
)
select *
from erpReferences;
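From there the outermost layer follows the same pattern. A sketch of how it might look, assuming the header row still comes from the xmlhdr table as in the question's program (FORMAT JSON keeps the nested documents from being re-escaped):

with elm as (
  select json_object('ServiceID' VALUE ServiceID,
                     'ERPReferenceID' VALUE RefCod) as erpRef
  from jsontst
),
arr (arrDta) as (
  values json_array (select erpRef from elm)
),
hdr (objHeader) as (
  select json_object('InstanceName' value trim(Cntry))
  from xmlhdr
  where Cntry = 'US'
)
select json_object(
         'header' value objHeader format json,
         'erpReferenceData' value
           json_object('erpReferences' value arrDta format json) format json)
from hdr
cross join arr;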
The nice thing about building with CTEs is that at each step you can see the results so far. You can always go back and stick a Select * from cte; in the middle to see what you have at that point.
Note that I'm building this in Run SQL Scripts. Once you have the statement complete, you can embed it in your RPG program.

Select from a JSON array in a PostgreSQL JSON column

I have the following JSON stored in a PostgreSQL JSON column
{
  "status": "Success",
  "message": "",
  "data": {
    "serverIp": "XXXX",
    "ruleId": 32321,
    "results": [
      {
        "versionId": 555555,
        "PriceID": "8abf35ec-3e0e-466b-a4e5-2af568e90eec",
        "price": 550,
        "Convert": 0.8922953080331764,
        "Cost": 10
      }
    ]
  }
}
I would like to search for a specific PriceID across the entire JSON column (named info) and select the entire results element that has that PriceID.
How do I do that in PostgreSQL JSON?
One option uses exists and json(b)_array_elements(). Assuming that your table is called mytable and that the jsonb column is mycol, this would look like:
select t.*
from mytable t
where exists (
  select 1
  from jsonb_array_elements(t.mycol -> 'data' -> 'results') x(elt)
  where x.elt ->> 'PriceID' = '8abf35ec-3e0e-466b-a4e5-2af568e90eec'
)
In the subquery, jsonb_array_elements() unnests the json array located at the given path. Then, the where clause ensures that at least one element in the array has the given PriceID.
If your data is of json datatype rather than jsonb, you need to use json_array_elements() instead of jsonb_array_elements().
If you want to display some information coming from the matching array element, then it is different. You can use a lateral join instead of exists. Keep in mind, though, that this will duplicate the rows if more than one array element matches:
select t.*, x.elt ->> 'price' price
from mytable t
cross join lateral jsonb_array_elements(t.mycol -> 'data' -> 'results') x(elt)
where x.elt ->> 'PriceID' = '8abf35ec-3e0e-466b-a4e5-2af568e90eec'
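On PostgreSQL 12 or later, the existence check can also be written with a JSON path predicate. A sketch with the same assumed table and column names (mytable, mycol):

select t.*
from mytable t
where jsonb_path_exists(
  t.mycol,
  '$.data.results[*] ? (@.PriceID == "8abf35ec-3e0e-466b-a4e5-2af568e90eec")'
);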

postgres how to add a key to dicts in a jsonb array

I have an array of dicts in a jsonb column. I have to update and add a key to all the dicts in this array. Can this be done in a single update statement?
Jsonb column:
select '[{"a":"val1"}, {"b":"val2"}, {"c":"val3"}]'::jsonb;
How do I update it to:
[
  {
    "a": "val1",
    "x": "xval1"
  },
  {
    "b": "val2",
    "x": "xval2"
  },
  {
    "c": "val3",
    "x": "xval3"
  }
]
First, the jsonb_array_elements_text() function can be used to unnest the elements of the jsonb data, and then regexp_replace() can be applied within the subquery to derive new jsonb objects sharing the common key ("x").
In the next step, the replace() function together with jsonb_agg() yields the desired result, as in the following query:
select id,
       jsonb_agg(
         -- string surgery: drop the closing brace of the original dict and the
         -- opening brace of the generated {"x": "x<val>"} object, then glue them
         (replace(jj.value, '}', ',') ||
          replace(jsonb_set(value2::jsonb, '{x}',
                            ('"x' || (jj.value2::jsonb ->> 'x')::text || '"')::jsonb)::text,
                  '{', ''))::jsonb
       ) as result
from (
  -- value2 rewrites the (first) key name to "x", e.g. {"a": "val1"} -> {"x": "val1"}
  select t.id, j.value, regexp_replace(j.value, '[[:alpha:]]+', 'x') as value2
  from t
  cross join jsonb_array_elements_text(jsdata) j
) jj
group by id;
Using the '[[:alpha:]]' pattern alone would be enough; the plus sign is added for cases where the keys consist of more than one letter.
Assuming that your dicts have one and only one key:
update your_table set
jsonb_col = (
select jsonb_agg(
v || jsonb_build_object(
'x',
'x' || (v->>(select min(x) from jsonb_object_keys(v) as x))))
from jsonb_array_elements(jsonb_col) as v);
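Under the same one-key-per-dict assumption, a sketch of an equivalent update that fetches the single value with a lateral join over jsonb_each_text() instead of the min(...) trick:

update your_table set
  jsonb_col = (
    select jsonb_agg(elem || jsonb_build_object('x', 'x' || kv.value))
    from jsonb_array_elements(jsonb_col) as elem      -- one row per dict
    cross join lateral jsonb_each_text(elem) as kv    -- its single key/value pair
  );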

BigQuery select except double nested column

I am trying to remove a column from a BigQuery table and I've followed the instructions as stated here:
https://cloud.google.com/bigquery/docs/manually-changing-schemas#deleting_a_column_from_a_table_schema
This did not work directly as the column I'm trying to remove is nested twice in a struct. The following SO questions are relevant but none of them solve this exact case.
Single nested field:
BigQuery select * except nested column
Double nested field (solution has all fields in the schema enumerated, which is not useful for me as my schema is huge):
BigQuery: select * replace from multiple nested column
I've tried adapting the above solutions and I think I'm close but can't quite get it to work.
This one removes the field, but returns only the nested struct, not the whole table (in these examples I want to remove a.b.field_name; see the example schema at the end):
SELECT AS STRUCT * EXCEPT(a), a.* REPLACE (
(SELECT AS STRUCT a.b.* EXCEPT (field_name)) AS b
)
FROM `table`
This next attempt gives me an error: Scalar subquery produced more than one element:
WITH a_tmp AS (
SELECT AS STRUCT a.* REPLACE (
(SELECT AS STRUCT a.b.* EXCEPT (field_name)) AS b
)
FROM `table`
)
SELECT * REPLACE (
(SELECT AS STRUCT a.* FROM a_tmp) AS a
)
FROM `table`
Is there a generalised way to solve this? Or am I forced to use the enumerated solution in the 2nd link?
Example Schema:
[
  {
    "name": "a",
    "type": "RECORD",
    "fields": [
      {
        "name": "b",
        "type": "RECORD",
        "fields": [
          {
            "name": "field_name",
            "type": "STRING"
          },
          {
            "name": "other_field_name",
            "type": "STRING"
          }
        ]
      }
    ]
  }
]
I would like the final schema to be the same but without field_name.
Below is for BigQuery Standard SQL
#standardSQL
SELECT * REPLACE(
(SELECT AS STRUCT(SELECT AS STRUCT a.b.* EXCEPT (field_name)) b)
AS a)
FROM `project.dataset.table`
You can test and play with it using dummy data as below:
#standardSQL
WITH `project.dataset.table` AS (
SELECT STRUCT<b STRUCT<field_name STRING, other_field_name STRING>>(STRUCT('1', '2')) a
)
SELECT * REPLACE(
(SELECT AS STRUCT(SELECT AS STRUCT a.b.* EXCEPT (field_name)) b)
AS a)
FROM `project.dataset.table`
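In the dummy example, the output keeps the full shape of a: b is still a nested struct, but it now holds only other_field_name (here '2'), i.e. the question's schema minus field_name.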