How to convert array to string value - google-bigquery

Hello, I am trying to get my query log cost. I get the total amount, but when I try to break it down per dataset I get this error:
' Cannot access field datasetId on a value with type ARRAY> at '
This is the query I am trying to run:
WITH data AS (
  SELECT
    protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent AS jobCompletedEvent,
    (
      SELECT ARRAY_TO_STRING((
        SELECT ARRAY_AGG(datasetId)
        FROM UNNEST(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.referencedTables.datasetId)
      ))
    ) AS datasetIds
  FROM `kkk111.bq_audit_log_export.cloudaudit_googleapis_com_data_access_20190206`
)
SELECT
  datasetIds,
  FORMAT('%9.2f', 5.0 * (SUM(jobCompletedEvent.job.jobStatistics.totalBilledBytes) / POWER(2, 40))) AS Estimated_USD_Cost
FROM data
WHERE jobCompletedEvent.eventName = 'query_job_completed'
GROUP BY datasetIds
ORDER BY Estimated_USD_Cost DESC
I am using the Standard SQL dialect.
How do I cast this field:
protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.referencedTables.datasetId
from an array to a string?
What am I missing?
Thanks.

Below is for BigQuery Standard SQL
#standardSQL
WITH data AS (
  SELECT
    protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent AS jobCompletedEvent,
    ref.datasetId AS datasetId
  FROM `kkk111.bq_audit_log_export.cloudaudit_googleapis_com_data_access_20190206`,
    UNNEST(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.referencedTables) ref
)
SELECT
  datasetId,
  FORMAT('%9.2f', 5.0 * (SUM(jobCompletedEvent.job.jobStatistics.totalBilledBytes) / POWER(2, 40))) AS Estimated_USD_Cost
FROM data
WHERE jobCompletedEvent.eventName = 'query_job_completed'
GROUP BY datasetId
ORDER BY Estimated_USD_Cost DESC
As you can see, you obviously need to UNNEST the referencedTables array, but you also need to make sure your final cost calculation is as close to correct as possible. The same query can reference multiple tables from the same dataset, so you'd better have DISTINCT in your CTE. But the same query can also reference tables from multiple datasets, in which case the same billed bytes will be attributed to multiple datasets and you will get an overestimate! I don't know your exact intent, but you might want to introduce some logic to distribute the cost among the referenced datasets, as in the sketch below.
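Here is one possible (untested) sketch that splits each job's billed bytes evenly across the distinct datasets it references; the even split is just an illustrative heuristic, and the table name is the one from the question:
#standardSQL
WITH data AS (
  SELECT
    protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent AS jobCompletedEvent,
    -- distinct datasets referenced by this job
    ARRAY(
      SELECT DISTINCT ref.datasetId
      FROM UNNEST(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.referencedTables) ref
    ) AS datasetIds
  FROM `kkk111.bq_audit_log_export.cloudaudit_googleapis_com_data_access_20190206`
)
SELECT
  datasetId,
  -- each job's bytes are divided by the number of distinct datasets it touched,
  -- so the per-dataset costs add up to the overall total instead of overestimating
  FORMAT('%9.2f', 5.0 * (SUM(jobCompletedEvent.job.jobStatistics.totalBilledBytes / ARRAY_LENGTH(datasetIds)) / POWER(2, 40))) AS Estimated_USD_Cost
FROM data, UNNEST(datasetIds) datasetId
WHERE jobCompletedEvent.eventName = 'query_job_completed'
GROUP BY datasetId
ORDER BY Estimated_USD_Cost DESC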

You need to UNNEST the outer array in order to select the dataset ID inside:
SELECT
ARRAY_TO_STRING((
SELECT ARRAY_AGG(datasetId)
FROM UNNEST(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.referencedTables)
), ',') AS datasetIds
FROM ...
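For completeness, here is a minimal self-contained sketch of that pattern, using a made-up array of structs in place of referencedTables (the datasetId/tableId values are just placeholders):
#standardSQL
WITH sample AS (
  -- stand-in for jobStatistics.referencedTables: an ARRAY of STRUCTs with a datasetId field
  SELECT [STRUCT('ds1' AS datasetId, 't1' AS tableId),
          STRUCT('ds2' AS datasetId, 't2' AS tableId)] AS referencedTables
)
SELECT
  ARRAY_TO_STRING((
    SELECT ARRAY_AGG(datasetId)
    FROM UNNEST(referencedTables)
  ), ',') AS datasetIds  -- returns 'ds1,ds2'
FROM sample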

Related

Bigquery SQL: convert array to columns

I have a table with a field A where each entry is a fixed-length array of integers (say length = 1000). I want to know how to convert it into 1000 columns, with column names given by index_i, for i = 0, 1, 2, ..., 999, where each element is the corresponding integer. I can have it done by something like
A[OFFSET(0)] as index_0,
A[OFFSET(1)] as index_1,
A[OFFSET(2)] as index_2,
A[OFFSET(3)] as index_3,
A[OFFSET(4)] as index_4,
...
A[OFFSET(999)] as index_999,
I want to know what would be an elegant way of doing this. Thanks!
The first thing to say is that, sadly, this is going to be much more complicated than most people expect. It can be conceptually easier to pass the values into a scripting language (e.g. Python) and work there, but clearly keeping things inside BigQuery is going to be much more performant. So here is an approach.
Cross-joining to turn array fields into long-format tables
I think the first thing you're going to want to do is get the values out of the arrays and into rows.
Typically in BigQuery this is accomplished using CROSS JOIN. The syntax is a tad unintuitive:
WITH raw AS (
SELECT "A" AS name, [1,2,3,4,5] AS a
UNION ALL
SELECT "B" AS name, [5,4,3,2,1] AS a
),
long_format AS (
SELECT name, vals
FROM raw
CROSS JOIN UNNEST(raw.a) AS vals
)
SELECT * FROM long_format
UNNEST(raw.a) is taking those arrays of values and turning each array into a set of (five) rows, every single one of which is then joined to the corresponding value of name (the definition of a CROSS JOIN). In this way we can 'unwrap' a table with an array field.
This yields results like:
name | vals
-------------
A | 1
A | 2
A | 3
A | 4
A | 5
B | 5
B | 4
B | 3
B | 2
B | 1
Confusingly, there is a shorthand for this syntax in which CROSS JOIN is replaced with a simple comma:
WITH raw AS (
SELECT "A" AS name, [1,2,3,4,5] AS a
UNION ALL
SELECT "B" AS name, [5,4,3,2,1] AS a
),
long_format AS (
SELECT name, vals
FROM raw, UNNEST(raw.a) AS vals
)
SELECT * FROM long_format
This is more compact but may be confusing if you haven't seen it before.
Typically this is where we stop. We have a long-format table, created without any requirement that the original arrays all had the same length. What you're asking for is harder to produce: you want a wide-format table containing the same information (relying on the fact that each array was the same length).
Pivot tables in BigQuery
The good news is that BigQuery now has a PIVOT function! That makes this kind of operation possible, albeit non-trivial:
WITH raw AS (
SELECT "A" AS name, [1,2,3,4,5] AS a
UNION ALL
SELECT "B" AS name, [5,4,3,2,1] AS a
),
long_format AS (
SELECT name, vals, offset
FROM raw, UNNEST(raw.a) AS vals WITH OFFSET
)
SELECT *
FROM long_format PIVOT(
ANY_VALUE(vals) AS vals
FOR offset IN (0,1,2,3,4)
)
This makes use of WITH OFFSET to generate an extra offset column (so that we know which order the values in the array originally had).
Also, in general pivoting requires us to aggregate the values returned in each cell. But here we expect exactly one value for each combination of name and offset, so we simply use the aggregation function ANY_VALUE, which non-deterministically selects a value from the group you're aggregating over. Since, in this case, each group has exactly one value, that's the value retrieved.
The query yields results like:
name vals_0 vals_1 vals_2 vals_3 vals_4
----------------------------------------------
A 1 2 3 4 5
B 5 4 3 2 1
This is starting to look pretty good, but we have a fundamental issue, in that the column names are still hard-coded. You wanted them generated dynamically.
Unfortunately expressions for the pivot column values aren't something PIVOT can accept out-of-the-box. Note that BigQuery has no way to know that your long-format table will resolve neatly to a fixed number of columns (it relies on offset having the values 0-4 for each and every set of records).
Dynamically building/executing the pivot
And yet, there is a way. We will have to leave behind the comfort of standard SQL and move into the realm of BigQuery Procedural Language.
What we must do is use the EXECUTE IMMEDIATE statement, which allows us to dynamically construct and execute a standard SQL query!
(as an aside, I bet you - OP or future searchers - weren't expecting this rabbit hole...)
This is, of course, inelegant to say the least. But here is the above toy example, implemented using EXECUTE IMMEDIATE. The trick is that the executed query is defined as a string, so we just have to use an expression to inject the full range of values you want into this string.
Recall that || can be used as a string concatenation operator.
EXECUTE IMMEDIATE """
WITH raw AS (
SELECT "A" AS name, [1,2,3,4,5] AS a
UNION ALL
SELECT "B" AS name, [5,4,3,2,1] AS a
),
long_format AS (
SELECT name, vals, offset
FROM raw, UNNEST(raw.a) AS vals WITH OFFSET
)
SELECT *
FROM long_format PIVOT(
ANY_VALUE(vals) AS vals
FOR offset IN ("""
|| (SELECT STRING_AGG(CAST(x AS STRING)) FROM UNNEST(GENERATE_ARRAY(0,4)) AS x)
|| """
)
)
"""
Ouch. I've tried to make that as readable as possible. Near the bottom there is an expression that generates the list of column suffixes (pivoted values of offset):
(SELECT STRING_AGG(CAST(x AS STRING)) FROM UNNEST(GENERATE_ARRAY(0,4)) AS x)
This generates the string "0,1,2,3,4" which is then concatenated to give us ...FOR offset IN (0,1,2,3,4)... in our final query (as in the hard-coded example before).
REALLY dynamically executing the pivot
It hasn't escaped my notice that this is still technically insisting on your knowing up-front how long those arrays are! It's a big improvement (in the narrow sense of avoiding painful repetitive code) to use GENERATE_ARRAY(0,4), but it's not quite what was requested.
Unfortunately, I can't provide a working toy example, but I can tell you how to do it. You would simply replace the pivot values expression with
(SELECT STRING_AGG(DISTINCT CAST(offset AS STRING)) FROM long_format)
But doing this in the example above won't work, because long_format is a Common Table Expression that is only defined inside the EXECUTE IMMEDIATE block. The statement in that block won't be executed until after building it, so at build-time long_format has yet to be defined.
Yet all is not lost. This will work just fine:
SELECT *
FROM d.long_format PIVOT(
ANY_VALUE(vals) AS vals
FOR offset IN ("""
|| (SELECT STRING_AGG(DISTINCT CAST(offset AS STRING)) FROM d.long_format)
|| """
)
)
... provided you first define a BigQuery VIEW (for example) called long_format (or, better, some more expressive name) in a dataset d. That way, both the job that builds the query and the job that runs it will have access to the values.
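As a hedged sketch of that setup (reusing the toy data from above; the dataset name d is just a placeholder), the view might be created like this:
CREATE OR REPLACE VIEW d.long_format AS
WITH raw AS (
  SELECT "A" AS name, [1,2,3,4,5] AS a
  UNION ALL
  SELECT "B" AS name, [5,4,3,2,1] AS a
)
-- same long-format shape as before, now persisted so both the
-- query-building job and the pivot query itself can read it
SELECT name, vals, offset
FROM raw, UNNEST(raw.a) AS vals WITH OFFSET;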
If successful, you should see both jobs execute and succeed. You should then click 'VIEW RESULTS' on the job that ran the query.
As a final aside, this assumes you are working from the BigQuery console. If you're instead working from a scripting language, that gives you plenty of options to either load and manipulate the data, or build the query in your scripting language rather than massaging BigQuery into doing it for you.
Consider the approach below:
execute immediate ( select '''
select * except(id) from (
select to_json_string(A) id, * except(A)
from your_table, unnest(A) value with offset
)
pivot (any_value(value) index for offset in ('''
|| (select string_agg(cast(val as string) order by offset) from unnest(generate_array(0,999)) val with offset) || '))'
)
If applied to dummy data like below (with 10 elements instead of 1000):
select [10,11,12,13,14,15,16,17,18,19] as A union all
select [20,21,22,23,24,25,26,27,28,29] as A union all
select [30,31,32,33,34,35,36,37,38,39] as A
the output is one row per input array, with the array elements pivoted into index_N columns.

sql: JSON_QUERY() function to extract objects

I have a field in my dataset that include json objects in the following format:
cars
[{"element":{"name":"honda","id":"34"}}]
[{"element":{"name":"Lexus","id":"56"}}]
I am using the following query to extract the names of the cars, but it just returns empty (null) rows. Any ideas what I am doing wrong?
select JSON_QUERY(cars,"$.name") AS car_names
from myTable
limit 100
Consider the approach below:
select *,
( select string_agg(json_extract_scalar(car, '$.element.name'))
from unnest(json_extract_array(cars)) car
) car_names
from `project.dataset.table`
If applied to the sample data in your question, as in the example below:
with `project.dataset.table` as (
select '[{"element":{"name":"honda","id":"34"}}]' cars union all
select '[{"element":{"name":"Lexus","id":"56"}}]'
)
select *,
( select string_agg(json_extract_scalar(car, '$.element.name'))
from unnest(json_extract_array(cars)) car
) car_names
from `project.dataset.table`
the output is the original cars value plus a car_names column containing honda and Lexus respectively.
If you are trying to extract a scalar value, you should simply use JSON_VALUE(expression, path).
For example: if an object Info contains a scalar variable name and another object address, you can get the value of name with JSON_VALUE, since it isn't an object, but to get the address you have to use JSON_QUERY.
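Here is a small hedged sketch of that distinction; the Info-style object below is made up for illustration:
WITH t AS (
  SELECT '{"name":"Ann","address":{"city":"Rome","zip":"00100"}}' AS info
)
SELECT
  JSON_VALUE(info, '$.name')    AS name_scalar,    -- scalar: returns the plain string Ann
  JSON_QUERY(info, '$.address') AS address_object  -- object: returns the address as JSON text
FROM t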

How do you query an array in Standard SQL that meets a certain conditional?

I am trying to pull only those records whose arrays meet a certain condition. For example, I want only the results that contain "IAB3".
Here is what the table looks like:
Table name: bids
Columns: BidderBanner / WinCat
Entries:
1600402 / null
1911048 / null
1893069 / [IAB3-11, IAB3]
1214894 / IAB3
Here is how I initially thought it would work:
SELECT * FROM bids WHERE WinCat = "IAB3"
but I get an error that says no match for operator types array, string.
The database is in Google Big Query.
Below is for BigQuery Standard SQL
#standardSQL
SELECT * FROM `project.dataset.bids` WHERE 'IAB3' IN UNNEST(WinCat)
You can test and play with the above using the sample data from your question, as in the example below:
#standardSQL
WITH `project.dataset.bids` AS (
SELECT 1600402 BidderBanner, NULL WinCat UNION ALL
SELECT 1911048, NULL UNION ALL
SELECT 1893069, ['IAB3-11', 'IAB3'] UNION ALL
SELECT 1214894, ['IAB3']
)
SELECT * FROM `project.dataset.bids` WHERE 'IAB3' IN UNNEST(WinCat)
with the result containing only the two rows whose WinCat array includes 'IAB3' (1893069 and 1214894).
You need to use single quotes in SQL for all strings. It should be WHERE WinCat = 'IAB3', not WHERE WinCat = "IAB3".
One method uses unnest(), something like this:
SELECT b.*
FROM bids b
WHERE 'IAB3' IN (SELECT unnest(b.WinCat))
However, array syntax varies among the databases that support arrays, and they are not part of standard SQL.
This will work:
SELECT * FROM bids WHERE REGEXP_LIKE (WinCat, '(.)*(IAB3)+()*');

Query to return the amount of time each field equals a true value

I'm collecting data and storing it in SQL Server. The table consists of the data shown in the example table below. I need to show the amount of time that each field [Fault0-Fault10] is in a fault state. The fault state is represented by a value of 1 in the field.
I have used the following query and got the desired results; however, this is for only one field, and I'm having issues pulling all of the fields into one query.
SELECT
Distinct [Fault0]
,datediff(mi, min(DateAndTime), max(DateAndTime)) TotalTimeInMins
FROM [dbo].[Fault]
Where Fault0 =1
group by [Fault0]
Assuming you want max() - min(), you simply need to unpivot the data. Your query can look like:
SELECT v.faultname,
datediff(minute, min(f.DateAndTime), max(f.DateAndTime)) as TotalTimeInMins
FROM [dbo].[Fault] f CROSS APPLY
(VALUES ('fault0', fault0), ('fault1', fault1), . . ., ('fault10', fault10)
) v(faultname, value)
WHERE v.value = 1
GROUP BY v.faultname;

Eliminating Entries Based On Revision

I need to figure out how to eliminate older revisions from my query's results. My database stores orders as 'Q000000', and revisions have an appended '-number'. My query currently is as follows:
SELECT DISTINCT Estimate.EstimateNo
FROM Estimate
INNER JOIN EstimateDetails ON EstimateDetails.EstimateID = Estimate.EstimateID
INNER JOIN EstimateDoorList ON EstimateDoorList.ItemSpecID = EstimateDetails.ItemSpecID
WHERE (Estimate.SalesRepID = '67' OR Estimate.SalesRepID = '61') AND Estimate.EntryDate >= '2017-01-01 00:00:00.000' AND EstimateDoorList.SlabSpecies LIKE '%MDF%'
ORDER BY Estimate.EstimateNo
So for instance, the results would include:
Q120455-10
Q120445-11
Q121675-2
Q122361-1
Q123456
Q123456-1
From this, I need to eliminate 'Q120455-10' because of the presence of the '-11' revision for that order, and 'Q123456' because of the presence of the '-1' revision. I'm struggling greatly with figuring out how to do this; my immediate thought was to use CASE statements, but I'm not sure of the best way to implement them or how to filter. Thank you in advance, and let me know if any more information is needed.
First you have to parse your EstimateNo column into a sequence number and a revision number using CHARINDEX and SUBSTRING (or STRING_SPLIT in newer versions), and CAST/CONVERT the revision to a numeric type:
SELECT
SUBSTRING(Estimate.EstimateNo,0,CHARINDEX('-',Estimate.EstimateNo)) as [EstimateNo],
CAST(SUBSTRING(Estimate.EstimateNo,CHARINDEX('-',Estimate.EstimateNo)+1, LEN(Estimate.EstimateNo)-CHARINDEX('-',Estimate.EstimateNo)+1) as INT) as [EstimateRevision]
FROM
...
You can then use either APPLY (to select the TOP 1 row that matches the EstimateNo) or a window function such as ROW_NUMBER (to select only the records with a row number of 1).
For example, using a ROW_NUMBER would look something like below:
SELECT
ROW_NUMBER() OVER(PARTITION BY EstimateNo ORDER BY EstimateRevision DESC) AS "LastRevisionForEstimate",
-- rest of the needed columns
FROM
(
-- query above goes here
)
You can then wrap the query above in a simple select with a WHERE predicate filtering on a specific value of LastRevisionForEstimate, for instance:
SELECT --needed columns
FROM -- result set above
WHERE LastRevisionForEstimate = 1
Please note that this is, to a certain extent, pseudocode, as I do not have your schema and cannot test the query.
If you dislike the nested selects, check out Common Table Expressions; a rough end-to-end sketch using them is below.
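For example, here is a rough, untested CTE-based version of the whole approach; the table and column names follow the question, and treating an estimate without a '-revision' suffix as revision 0 is an assumption:
WITH parsed AS (
    SELECT
        e.EstimateNo,
        -- base order number: everything before the '-', or the whole value if there is no dash
        CASE WHEN CHARINDEX('-', e.EstimateNo) > 0
             THEN LEFT(e.EstimateNo, CHARINDEX('-', e.EstimateNo) - 1)
             ELSE e.EstimateNo END AS BaseEstimateNo,
        -- revision number: digits after the '-', or 0 if there is no dash
        CASE WHEN CHARINDEX('-', e.EstimateNo) > 0
             THEN CAST(SUBSTRING(e.EstimateNo, CHARINDEX('-', e.EstimateNo) + 1, LEN(e.EstimateNo)) AS INT)
             ELSE 0 END AS EstimateRevision
    FROM Estimate e
    -- your existing JOINs and WHERE conditions would go here
),
ranked AS (
    SELECT
        p.EstimateNo,
        ROW_NUMBER() OVER (PARTITION BY p.BaseEstimateNo
                           ORDER BY p.EstimateRevision DESC) AS LastRevisionForEstimate
    FROM parsed p
)
SELECT EstimateNo
FROM ranked
WHERE LastRevisionForEstimate = 1
ORDER BY EstimateNo;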