I am attempting to unnest three record arrays in a single table: entities, words, and sentences.
The issue I've encountered is that the "sentiment" field exists in both the "entities" and "sentences" arrays, and therefore I get the error "Column name sentiment is ambiguous".
Try replacing sentiment AS entity_sentiment with entity.sentiment AS entity_sentiment, and sentiment AS sentences_sentiment with sentence.sentiment AS sentences_sentiment.
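For illustration, a minimal sketch of what the full query might look like with the arrays aliased and every sentiment reference qualified (the table name and surrounding field list are assumptions, since the original query isn't shown):
SELECT
  entity.sentiment AS entity_sentiment,      -- qualified: comes from the entities array
  sentence.sentiment AS sentences_sentiment  -- qualified: comes from the sentences array
FROM mydataset.mytable,                      -- hypothetical table name
  UNNEST(entities) AS entity,
  UNNEST(sentences) AS sentence,
  UNNEST(words) AS word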
Hi, I have a string in a BigQuery column like this:
cancellation_amount: 602000
after_cancellation_transaction_amount: 144500
refund_time: '2022-07-31T06:05:55.215203Z'
cancellation_amount: 144500
after_cancellation_transaction_amount: 0
refund_time: '2022-08-01T01:22:45.94919Z'
I am already using this logic to get cancellation_amount:
regexp_extract(file,r'.*cancellation_amount:\s*([^\n\r]*)')
but the output is only the first amount, 602000. I need both 602000 and 144500 in the output, each in a different column.
Appreciate any help.
If the lines in your input (which will eventually become columns) are fixed, you can use multiple regexp_extract calls to get all the values.
SELECT
  regexp_extract(file, r'cancellation_amount:\s*([^\n\r]*)') AS cancellation_amount,
  regexp_extract(file, r'after_cancellation_transaction_amount:\s*([^\n\r]*)') AS after_cancellation_transaction_amount
FROM table_name
One thing to note about your regex: .*cancellation_amount won't match inside after_cancellation_transaction_amount (the "transaction" in the middle means "cancellation_amount" never appears there as a substring), so the two extracts won't collide.
There is also a function called regexp_extract_all, which returns all the matches as an array that you can later break out into columns, but if you have a finite set of values, separating them into different columns as above is easier.
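For example, a rough sketch of the REGEXP_EXTRACT_ALL route for pulling the two cancellation_amount values into separate columns (file and table_name come from the question; the fixed OFFSET positions assume there are exactly two blocks):
SELECT
  amounts[SAFE_OFFSET(0)] AS cancellation_amount_1,
  amounts[SAFE_OFFSET(1)] AS cancellation_amount_2
FROM (
  -- returns one array per row, e.g. ['602000', '144500']
  SELECT regexp_extract_all(file, r'cancellation_amount:\s*([^\n\r]*)') AS amounts
  FROM table_name
)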
I am working with a source field which is in the form of an array of tuples
[(a,145), (b,12), (c,63), (d,1), (e,54), (f,99), ...]
I am unable to load this field into a VARIANT column in Snowflake. When I try to load this field, I get the following error: Exception: net.snowflake.client.jdbc.SnowflakeSQLException: Error parsing JSON.
As a workaround, I loaded this field as a VARCHAR into Snowflake, but I am now having trouble parsing it as an array and flattening it with the LATERAL FLATTEN function.
My goal is to flatten this array and break each tuple out into its own row, then split the tuple into separate columns. Does anyone have suggestions on how to get this to work in Snowflake?
You can strip the square brackets, split the string into one row per tuple, and then split each tuple into its two parts:
select
  split_part(regexp_replace(value, '[\\(\\)]'), ',', 1) as left_part,  -- the letter
  split_part(regexp_replace(value, '[\\(\\)]'), ',', 2) as right_part  -- the number
from table(split_to_table(
  -- strip the enclosing [ ] so only the tuples remain
  regexp_replace('[(a,145), (b,12), (c,63), (d,1), (e,54), (f,99)]', '[\\[\\]]', ''),
  ' ')
);
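Note that SPLIT_TO_TABLE here splits on the space between tuples, so most tokens still carry a trailing comma (e.g. a,145, once the parentheses are stripped); that stray comma is harmless because SPLIT_PART only picks out the first two comma-separated fields.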
I am trying to fix an array in a dataset. Currently, I have a dataset that has a reference number tied to multiple different uuids. What I would like to do is flatten this out in Snowflake so that the reference number has a separate row for each uuid. For example:
Reference UUID
1) 9f823c2a-ced5-4dbe-be65-869311462f75 "[
""05554f65-6aa9-4dd1-6271-8ce2d60f10c4"",
""df662812-7f97-0b43-9d3e-12f64f504fbb"",
""08644a69-76ed-ce2d-afff-b236a22efa69"",
""f1162c2e-eeb5-83f6-5307-2ed644e6b9eb"",
]"
Should end up looking like:
Reference UUID
1) 9f823c2a-ced5-4dbe-be65-869311462f75 05554f65-6aa9-4dd1-6271-8ce2d60f10c4
2) 9f823c2a-ced5-4dbe-be65-869311462f75 df662812-7f97-0b43-9d3e-12f64f504fbb
3) 9f823c2a-ced5-4dbe-be65-869311462f75 08644a69-76ed-ce2d-afff-b236a22efa69
4) 9f823c2a-ced5-4dbe-be65-869311462f75 f1162c2e-eeb5-83f6-5307-2ed644e6b9eb
I just started working in Snowflake, so I am new to it. It looks like there is a LATERAL FLATTEN, but it is either not working or telling me that I have all sorts of errors. The documentation from Snowflake is a bit perplexing when it comes to this.
While FLATTEN is the right approach for exploding an array, the UUID column value shown in the original description is invalid if interpreted as JSON syntax: "[""val1"", ""val2""]". That will need correction before a LATERAL FLATTEN approach can be applied by treating the value as a VARIANT type.
If the data sample in the original description is literal and applies to all column values, then the following query will transform it into valid JSON syntax and then apply a lateral flatten to yield the desired result:
SELECT
  T.REFERENCE,
  X.VALUE AS UUID
FROM (
  SELECT
    REFERENCE,
    -- Transforms the invalid JSON array syntax "[""a"", ""b""]" into valid
    -- JSON ["a", "b"]: collapse the doubled quotes, then strip the stray
    -- quotes around the brackets
    PARSE_JSON(REPLACE(REPLACE(REPLACE(UUID, '""', '"'), '"[', '['), ']"', ']')) AS UUID_ARR_CLEANED
  FROM TABLENAME) T,
LATERAL FLATTEN(T.UUID_ARR_CLEANED) X
If your data is already in a valid VARIANT type (a successful PARSE_JSON was done for the UUID column during ingest) and the example provided in the description was just a formatting issue that only displays the JSON as invalid in the post, then a simpler version of the same query will suffice:
SELECT REFERENCE, X.VALUE AS UUID
FROM TABLENAME, LATERAL FLATTEN(TABLENAME.UUID) X
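One thing to keep in mind with either variant: FLATTEN's VALUE column is a VARIANT, so the UUIDs will display with surrounding double quotes; cast it with X.VALUE::STRING if you want plain text.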
I am trying to use the Standard SQL dialect in BigQuery to unnest the changelog.histories.items repeated record (outlined in green) to access the rows in the nested items table (outlined in blue). The parent record changelog (outlined in red) is not a repeated record, so I am having trouble figuring out what to unnest.
Queries that attempt to unnest changelog.histories or changelog.histories.items result in the error below.
SELECT changelog.histories.items.to
FROM jirasparta_database.jira_issues,
unnest(changelog.histories)
Error: Cannot access field items on a value with type ARRAY<STRUCT<..., items ARRAY<STRUCT<to STRING, field STRING, fieldtype STRING, ...>>, ...>> at [1:28]
#standardSQL
SELECT item.to
FROM jirasparta_database.jira_issues,
UNNEST(changelog.histories) history, UNNEST(history.items) item
Basically, you have to flatten the STRUCT and ARRAY values, unnesting each level of nesting in turn. You can have a look at this documentation for more details.
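If it helps to see the pattern in isolation, here is a minimal self-contained sketch with a mocked-up row shaped like changelog.histories.items (all field values are invented):
WITH jira_issues AS (
  SELECT STRUCT(
    [STRUCT(
      [STRUCT('Done' AS `to`, 'status' AS field, 'jira' AS fieldtype)] AS items
    )] AS histories
  ) AS changelog
)
SELECT item.`to`, item.field
FROM jira_issues,
  UNNEST(changelog.histories) history,
  UNNEST(history.items) item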
I have a column in our database called min_crew that holds character varying arrays such as '{CA, FO, FA}'.
I have a query where I'm trying to get aggregates of these arrays without success:
SELECT use.user_sched_id, array_agg(se.sched_entry_id) AS seids
, array_agg(se.min_crew)
FROM base.sched_entry se
LEFT JOIN base.user_sched_entry use ON se.sched_entry_id = use.sched_entry_id
WHERE se.sched_entry_id = ANY(ARRAY[623, 625])
GROUP BY user_sched_id;
Both 623 and 625 have the same use.user_sched_id, so the result should be the grouping of the seids and the min_crew, but I just keep getting this error:
ERROR: could not find array type for data type character varying[]
If I remove the array_agg(se.min_crew) portion of the code, I do get a table returned with the user_sched_id = 2131 and seids = '{623, 625}'.
The standard aggregate function array_agg() only works for base types, not array types as input.
(But Postgres 9.5+ has a new variant of array_agg() that can!)
You could use the custom aggregate function array_agg_mult() as defined in this related answer:
Selecting data into a Postgres array
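In case you don't want to follow the link, the aggregate defined there is essentially this (a one-time CREATE AGGREGATE built on the standard array_cat function):
CREATE AGGREGATE array_agg_mult (anyarray) (
    SFUNC    = array_cat   -- concatenate the input arrays
  , STYPE    = anyarray
  , INITCOND = '{}'
);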
Create it once per database. Then your query could work like this:
SELECT use.user_sched_id, array_agg(se.sched_entry_id) AS seids
,array_agg_mult(ARRAY[se.min_crew]) AS min_crew_arr
FROM base.sched_entry se
LEFT JOIN base.user_sched_entry use USING (sched_entry_id)
WHERE se.sched_entry_id = ANY(ARRAY[623, 625])
GROUP BY user_sched_id;
There is a detailed rationale in the linked answer.
Extents have to match
In response to your comment, consider this quote from the manual on array types:
Multidimensional arrays must have matching extents for each dimension.
A mismatch causes an error.
There is no way around that, the array type does not allow such a mismatch in Postgres. You could pad your arrays with NULL values so that all dimensions have matching extents.
But I would rather translate the arrays to comma-separated lists with array_to_string() for the purpose of this query and use string_agg() to aggregate the text - preferably with a different separator. Using a newline in my example:
SELECT use.user_sched_id, array_agg(se.sched_entry_id) AS seids
,string_agg(array_to_string(se.min_crew, ','), E'\n') AS min_crews
FROM ...
Normalize
You might want to consider normalizing your schema to begin with. Typically, you would implement such an n:m relationship with a separate table, as outlined in this example:
How to implement a many-to-many relationship in PostgreSQL?
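As a rough sketch of what that could look like here (all table and column names below are invented for illustration):
CREATE TABLE crew_role (
    crew_role_id serial PRIMARY KEY
  , role_code    text NOT NULL UNIQUE  -- 'CA', 'FO', 'FA', ...
);

CREATE TABLE sched_entry_crew (
    sched_entry_id int NOT NULL REFERENCES base.sched_entry
  , crew_role_id   int NOT NULL REFERENCES crew_role
  , PRIMARY KEY (sched_entry_id, crew_role_id)
);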