Updating array field columns in BigQuery [duplicate] - sql

This question already has an answer here:
Update values in struct arrays in BigQuery
(1 answer)
Closed 2 years ago.
Problem statement:
How to update an array field in BigQuery.
Below is my table, Test_table:
-------------------------------
file.fileName | file.count
-------------------------------
abc.txt       | 100
-------------------------------
In the above table I need to update both the fileName and count fields.
Schema:
{
  "name": "file",
  "type": "RECORD",
  "mode": "REPEATED",
  "fields": [
    {
      "name": "fileName",
      "type": "STRING",
      "mode": "NULLABLE"
    },
    {
      "name": "count",
      "type": "STRING",
      "mode": "NULLABLE"
    }
  ]
}
Can someone help me with how to execute an UPDATE query on this table?

Can't you do something like this?
update t
set file[safe_offset(1)].fileName = ?,
    file[safe_offset(1)].count = ?
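In practice, BigQuery may reject assigning to an array element like that; a common workaround is to rebuild the whole repeated field with ARRAY(SELECT AS STRUCT ...). A minimal sketch against the table above (the new values are placeholders):
UPDATE Test_table t
SET file = ARRAY(
  SELECT AS STRUCT
    -- replace the values for the matching element, keep the rest
    IF(f.fileName = 'abc.txt', 'new_file.txt', f.fileName) AS fileName,
    IF(f.fileName = 'abc.txt', '200', f.count) AS count
  FROM UNNEST(t.file) AS f
)
WHERE TRUE  -- BigQuery requires a WHERE clause on UPDATE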

Related

How to use Trino/Presto to query Redis

I have a simple string and a hash stored in Redis:
get test
"1"
hget htest first
"first hash"
I'm able to see the "table" test, but there are no columns
trino> show columns from redis.default.test;
Column | Type | Extra | Comment
--------+------+-------+---------
(0 rows)
and obviously I can't get a result from SELECT:
trino> select * from redis.default.test;
Query 20210918_174414_00006_dmp3x failed: line 1:8: SELECT * not allowed from relation
that has no columns
I see in the documentation that I might need to create a table definition file, but I wasn't able to create one that works.
I had a few variations of this; here is one example:
{
"tableName": "test",
"schemaName": "default",
"value": {
"dataFormat": "json",
"fields": [
{
"name": "number",
"mapping": 0,
"type": "INT"
}
]
}
}
Any idea what I am doing wrong?
I focused on the string since it's simpler, but I also need to query the hash.

Creating NUMERIC array from JSONB data in Postgres

I have Postgres JSONB data which contains an ARRAY of type NUMERIC. I want to extract this ARRAY and store it in a variable of type NUMERIC[]. Here is my JSONB object:
{
"userIds": [
101,102,103
],
"userRole": {
"id": "1",
"name": "Administrator"
}
}
How can I extract userIds from this JSONB object and store them in NUMERIC[] as I have to iterate on this NUMERIC[]?
Any help would be highly appreciated.
One way is to extract the ids with jsonb_array_elements, cast them to the right data type and aggregate them again into an array, e.g.:
SELECT array_agg(id)
FROM (
  SELECT jsonb_array_elements('{
    "userIds": [101,102,103],
    "userRole": {
      "id": "1",
      "name": "Administrator"
    }
  }'::jsonb -> 'userIds')::numeric
) j(id);
array_agg
---------------
{101,102,103}
(1 row)
If you want to iterate over these values as rows in your result set, don't bother with the outer query:
SELECT jsonb_array_elements('{
  "userIds": [101,102,103],
  "userRole": {
    "id": "1",
    "name": "Administrator"
  }
}'::jsonb -> 'userIds')::numeric;
jsonb_array_elements
----------------------
101
102
103
(3 rows)
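If you need the values in an actual NUMERIC[] variable to loop over, here is a minimal PL/pgSQL sketch (it uses jsonb_array_elements_text, which avoids the direct jsonb-to-numeric cast that older Postgres versions lack):
DO $$
DECLARE
  ids NUMERIC[];
  uid NUMERIC;
BEGIN
  -- aggregate the JSON array elements into a NUMERIC[] variable
  SELECT array_agg(e::numeric)
  INTO ids
  FROM jsonb_array_elements_text('{"userIds": [101,102,103]}'::jsonb -> 'userIds') AS t(e);

  -- iterate over the extracted ids
  FOREACH uid IN ARRAY ids LOOP
    RAISE NOTICE 'user id: %', uid;
  END LOOP;
END $$;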

How do I INSERT columns with nested name syntax (i.e. "item.description")?

I'm trying to merge two databases with the same schema on Google BigQuery.
I'm following the merge samples here: https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax#merge_statement
However, my tables have nested columns, i.e. "service.id" or "service.description".
My code so far is:
MERGE combined_table
USING table1
ON table1.id = combined_table.id
WHEN NOT MATCHED THEN
  INSERT (id, service.id, service.description)
  VALUES (id, service.id, service.description)
However, I get the error message: Syntax error: Expected ")" or "," but got ".", and a red squiggly underline under .id on the INSERT(...) line.
Here is a view of part of my table's schema:
[
  {
    "name": "id",
    "type": "STRING"
  },
  {
    "name": "service",
    "type": "RECORD",
    "fields": [
      {
        "name": "id",
        "type": "STRING"
      },
      {
        "name": "description",
        "type": "STRING"
      }
    ]
  },
  {
    "name": "cost",
    "type": "FLOAT"
  }
  ...
]
How do I properly structure this INSERT(...) statement so that I can include the nested columns?
Syntax error: Expected ")" or "," but got "."
Looks like you are in the right direction. Note in the documentation how you need to insert values into a REPEATED column.
You need to define the structure to tell BigQuery what to expect, for example:
STRUCT<created DATE, comment STRING>
This is the full example from the documentation:
MERGE dataset.DetailedInventory T
USING dataset.Inventory S
ON T.product = S.product
WHEN NOT MATCHED AND quantity < 20 THEN
  INSERT (product, quantity, supply_constrained, comments)
  -- insert values like this
  VALUES (product, quantity, true, ARRAY<STRUCT<created DATE, comment STRING>>[(DATE('2016-01-01'), 'comment1')])
WHEN NOT MATCHED THEN
  INSERT (product, quantity, supply_constrained)
  VALUES (product, quantity, false)
I've found the answer.
It turns out that when referencing the top level of a STRUCT, BigQuery references all of the nested columns as well. So if I want to INSERT service and all of its sub-columns (service.id and service.description), I only have to include service in the INSERT(...) statement.
The following code worked:
...
WHEN NOT MATCHED THEN
  INSERT (id, service)
  VALUES (id, service)
This merges all sub-columns, including service.id and service.description.
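If you need to reshape the nested fields rather than copy the whole record, you can also build the value explicitly with a STRUCT constructor. A minimal sketch against the tables above (assuming service has exactly the two fields shown):
MERGE combined_table T
USING table1 S
ON S.id = T.id
WHEN NOT MATCHED THEN
  INSERT (id, service)
  -- construct the nested value field by field
  VALUES (id, STRUCT(S.service.id AS id, S.service.description AS description))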

Bigquery: Append to a nested record

I'm currently checking out BigQuery, and I want to know if it's possible to add new data to a nested table.
For example, if I have a table like this:
[
{
"name": "name",
"type": "STRING"
},
{
"name": "phone",
"type": "RECORD",
"mode": "REPEATED",
"fields": [
{
"name": "number",
"type": "STRING"
},
{
"name": "type",
"type": "STRING"
}
]
}
]
And then I insert a phone number for the contact John Doe.
INSERT INTO socialdata.phones_examples (name, phone) VALUES ("John Doe", [("555555", "Home")]);
Is there an option to later add another number to the contact, so the phone array ends up holding both entries?
I know I can update the whole field, but I want to know if there is a way to append new values to the nested table.
When you insert data into BigQuery, the granularity is the level of rows, not elements of the arrays contained within rows. You would want to use a query like this, where you update the relevant row and append to the array:
UPDATE socialdata.phones_examples
SET phone = ARRAY_CONCAT(phone, [("555555", "Home")])
WHERE name = "John Doe"
If you need to update multiple records for some users, you can use the query below:
#standardSQL
UPDATE `socialdata.phones_examples` t
SET phone = ARRAY_CONCAT(phone, [new_phone])
FROM (
  SELECT 'John Doe' name, STRUCT<number STRING, type STRING>('123-456-7892', 'work') new_phone UNION ALL
  SELECT 'Abc Xyz', STRUCT('123-456-7893', 'work') new_phone
) u
WHERE t.name = u.name
Or, if those updates are available in some table (for example socialdata.phones_updates):
#standardSQL
UPDATE `socialdata.phones_examples` t
SET phone = ARRAY_CONCAT(phone, [new_phone])
FROM `socialdata.phones_updates` u
WHERE t.name = u.name
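To check the appended entries afterwards, flatten the repeated field with UNNEST (a quick sketch against the example table):
SELECT name, p.number, p.type
FROM `socialdata.phones_examples`, UNNEST(phone) AS p
WHERE name = 'John Doe'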

How to generate JSON array from multiple rows, then return with values of another table

I am trying to build a query which combines rows of one table into a JSON array; I then want that array to be part of the returned result.
I know how to do a simple query like
SELECT *
FROM public.template
WHERE id=1
And I have worked out how to produce the JSON array that I want
SELECT array_to_json(array_agg(to_json(fields)))
FROM (
  SELECT id, name, format, data
  FROM public.field
  WHERE template_id = 1
) fields
However, I cannot work out how to combine the two, so that the result is a number of fields from public.template with the output of the second query being one of the returned fields.
I am using PostgreSQL 9.6.6.
Edit: as requested, here is more information: the definitions of the field and template tables and a sample of each query's output.
Currently, I have a JSONB column on the template table which I am using to store an array of fields, but I want to move fields to their own table so that I can more easily enforce a schema on them.
Template table contains:
id
name
data
organisation_id
But I would like to remove data and replace it with the field table which contains:
id
name
format
data
template_id
At the moment the output of the first query is:
{
"id": 1,
"name": "Test Template",
"data": [
{
"id": "1",
"data": null,
"name": "Assigned User",
"format": "String"
},
{
"id": "2",
"data": null,
"name": "Office",
"format": "String"
},
{
"id": "3",
"data": null,
"name": "Department",
"format": "String"
}
],
"id_organisation": 1
}
This output is what I would like to recreate using one query and both tables. The second query outputs this, but I do not know how to merge it into a single query:
[{
"id": 1,
"name": "Assigned User",
"format": "String",
"data": null
},{
"id": 2,
"name": "Office",
"format": "String",
"data": null
},{
"id": 3,
"name": "Department",
"format": "String",
"data": null
}]
The feature you're looking for is jsonb concatenation, using the || operator. It's available since PostgreSQL 9.5. Note that the subquery needs a FROM clause and an aggregate to build the array, e.g.:
SELECT to_jsonb(template) || jsonb_build_object(
  'data',
  (SELECT jsonb_agg(to_jsonb(field))
   FROM public.field
   WHERE field.template_id = template.id)
)
FROM public.template;
Sorry for poorly phrasing what I was trying to achieve; after hours of Googling I have worked it out, and it was a lot simpler than I thought.
SELECT t.id, t.name, d.data
FROM public.template t, (
  SELECT array_to_json(array_agg(to_json(fields))) AS data
  FROM (
    SELECT id, name, format, data
    FROM public.field
    WHERE template_id = 1
  ) fields
) AS d
WHERE t.id = 1
I wanted the result of the subquery to be a column in the output rather than compiling the entire output table as JSON.
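For reference, the same result can also be produced with a correlated subquery, so the template filter isn't written twice (a sketch, assuming the table definitions above):
SELECT t.id, t.name,
  (SELECT array_to_json(array_agg(to_json(f)))
   FROM (
     SELECT id, name, format, data
     FROM public.field
     WHERE template_id = t.id
   ) f) AS data,
  t.organisation_id
FROM public.template t
WHERE t.id = 1;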