Given a column of integers
ids AS (
SELECT
id
FROM
UNNEST([1, 2, 3, 4, 5, 6, 7]) AS id)
I'd like to convert them into the following (batched) string representations:
"1,2,3,4,5"
"6,7"
Currently, I do this as follows:
SELECT
STRING_AGG(CAST(id AS STRING), ',')
FROM (
SELECT
DIV(ROW_NUMBER() OVER() - 1, 5) batch,
id
FROM
ids)
GROUP BY
batch
Since I use this on multiple occasions, I'd like to move this into a function.
Is this possible, and if so how?
(I guess, since we can't pass the table (ids), we'd need to pass an ARRAY<INT64>, but that would be ok.)
I think you might consider the two approaches below.
UDF
It returns the result as an ARRAY<STRING>.
CREATE TEMP FUNCTION batched_string(ids ARRAY<INT64>) AS (
ARRAY(
SELECT STRING_AGG(CAST(id AS STRING)) FROM (
SELECT DIV(offset, 5) batch, id
FROM UNNEST(ids) id WITH offset
) GROUP BY batch
)
);
SELECT * FROM UNNEST(batched_string([1, 2, 3, 4, 5, 6, 7]));
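To apply the UDF to a table column (like the asker's ids) rather than an array literal, one option is to collect the column with ARRAY_AGG and flatten the result back into rows. A minimal sketch, assuming the batched_string temp function above is defined in the same script:
WITH ids AS (
  SELECT id FROM UNNEST([1, 2, 3, 4, 5, 6, 7]) AS id
),
collected AS (
  -- gather the whole column into one ARRAY<INT64> so it can be passed to the UDF;
  -- ORDER BY keeps the batches deterministic
  SELECT ARRAY_AGG(id ORDER BY id) AS id_arr FROM ids
)
SELECT batched
FROM collected, UNNEST(batched_string(id_arr)) AS batched;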
Table function
It returns the result as a table. Note that a table function cannot be created as a temp function.
CREATE OR REPLACE TABLE FUNCTION `your-project.dataset.batched_string`(ids ARRAY<INT64>) AS (
SELECT STRING_AGG(CAST(id AS STRING)) batched FROM (
SELECT DIV(offset, 5) batch, id
FROM UNNEST(ids) id WITH offset
) GROUP BY batch
);
SELECT * FROM `your-project.dataset.batched_string`([1, 2, 3, 4, 5, 6, 7]);
Related
In BigQuery, how can I aggregate arrays element by element?
For instance if I have this table
id | array_value
---|------------
1  | [1, 2, 3]
2  | [4, 5, 6]
3  | [7, 8, 9]
I want to sum all the vectors element-wise and output [1+4+7, 2+5+8, 3+6+9] = [12, 15, 18].
I can SUM float fields with SELECT SUM(float_field) FROM table but when I try to apply the SUM on an array I get
No matching signature for aggregate function SUM for argument types: ARRAY.
Supported signatures: SUM(INT64); SUM(FLOAT64); SUM(NUMERIC); SUM(BIGNUMERIC) at [1:8]
I have found ARRAY_AGG in the doc but it is not what I want: it just creates an array from values.
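For example (a small sketch with literal values), ARRAY_AGG just collects the inputs into one array rather than summing position by position:
SELECT ARRAY_AGG(x) AS collected  -- [1, 4, 7] (in some order), not an element-wise sum
FROM UNNEST([1, 4, 7]) AS x;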
I think you want:
select array_agg(sum_val order by id) as res
from (
select idx, sum(val) as sum_val
from mytable t
cross join unnest(t.array_value) as val with offset as idx
group by idx
) t
I think you want:
select array_agg(sum_val)
from (select (select sum(val)
from unnest(t.array_value) val
) as sum_val
from t
) x
I think technically you can simply refer to the individual values in the arrays using offset(), or safe_offset() in case there might be missing values:
-- example data
with temp as (
select * from unnest([
struct(1 as id, [1, 2, 3] as array_value),
(2, [4,5,6]),
(3, [7,8])
])
)
-- actual query
select
[
SUM( array_value[safe_offset(0)] ),
SUM( array_value[safe_offset(1)] ),
SUM( array_value[safe_offset(2)] )
] as result_array
from temp
I put them in a result array, but you don't have to do that. I had the last array missing one value to show that the query doesn't break. If you want it to break, you should use offset() without the 'safe_' prefix.
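For example (a small sketch), safe_offset() returns NULL when the index is past the end of the array, while offset() raises an error:
SELECT [7, 8][SAFE_OFFSET(2)] AS third_value;  -- NULL: there is no element at index 2
-- SELECT [7, 8][OFFSET(2)];  -- would instead fail with an "array index out of bounds" error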
Below is for BigQuery Standard SQL
select array_agg(val order by offset)
from (
select offset, sum(val) as val
from `project.dataset.table` t,
unnest(array_value) as val with offset
group by offset
)
I want to filter out the duplicates from a BigQuery array. I also need the order of the elements to be preserved. The docs mention that this can be done by combining SELECT DISTINCT with UNNEST. However, it doesn't mention any ordering behavior. I ran this query and got the desired ordering of [5, 3, 1, 4, 10, 8].
WITH an_array AS (
SELECT [5, 5, 3, 1, 4, 4, 10, 8, 5, 1] AS nums
)
SELECT
ARRAY((
SELECT DISTINCT num
FROM UNNEST(nums) num
))
FROM an_array;
I don't know if that's coincidence or if that ordering is guaranteed. I also tried adding WITH OFFSET with an ORDER BY to specify the order explicitly, but in that case I get Query error: ORDER BY clause expression references table alias offset which is not visible after SELECT DISTINCT.
You should always be explicit about ordering if you care about it:
WITH an_array as (
SELECT [5, 5, 3, 1, 4, 4, 10, 8, 5, 1] AS nums
)
SELECT ARRAY((SELECT num
FROM UNNEST(nums) num WITH OFFSET o
GROUP BY num
ORDER BY MIN(o)
)
)
FROM an_array;
This is a beginner question relating to arrays. I hope the answer is simple.
The example is taken from Oracle Spatial, but I think it is valid for all arrays.
I have this SELECT:
SELECT
D.FID
, D.GEOM.SDO_ELEM_INFO -- column GEOM contains spatial data
FROM
my_table D
I get this result:
73035 MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1)
73036 MDSYS.SDO_ELEM_INFO_ARRAY(1, 4, 3, 1, 2, 1, 11, 2, 2, 19, 2, 1)
73037 MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1)
Now I want to SELECT all rows where (1,2,1) is defined:
SELECT
D.FID
, D.GEOM.SDO_ELEM_INFO
FROM
my_table D
WHERE
-- Pseudo-Code is following
D.GEOM.SDO_ELEM_INFO is "(1, 2, 1)";
So, in simple words: "array_from_row = defined_array".
I found a lot about IMPLODE, TABLE, COLLECT, etc., but how do I write a WHERE clause that compares two arrays?
Thanks for help!
Try an IN clause; you can also use both together:
SELECT
D.FID
, D.GEOM.SDO_ELEM_INFO
FROM
my_table D
WHERE
D.GEOM.SDO_ELEM_INFO in (1, 2, 1) or ( D.GEOM.SDO_ELEM_INFO = 1 or D.GEOM.SDO_ELEM_INFO = 2 or D.GEOM.SDO_ELEM_INFO = 3);
I've been struggling to create an Oracle SQL query that will tell me if my SDO table contains curves or arcs. I know that the sdo_elem_info contains the information I need, but I don't know how to use SQL to separate out the etype and interpretation from the sdo_elem_info.
So far, all I have is: select tbl.shape.sdo_elem_info from my_table tbl
You can use the TABLE function to extract the sdo_elem_info_array elements, then pivot that and aggregate the resulting rows to yield a row per element, with a column for offset, etype and interpretation.
Something like this should give you your query... (warning: untested)
WITH elem_info AS (
SELECT
TRUNC((ROWNUM - 1) / 3, 0) element_no
, MAX(DECODE(MOD(ROWNUM, 3), 1, t.COLUMN_VALUE, NULL)) offset
, MAX(DECODE(MOD(ROWNUM, 3), 2, t.COLUMN_VALUE, NULL)) etype
, MAX(DECODE(MOD(ROWNUM, 3), 0, t.COLUMN_VALUE, NULL)) interpretation
FROM my_table tbl
, TABLE(tbl.shape.sdo_elem_info) t
GROUP BY TRUNC((ROWNUM - 1) / 3, 0)
)
SELECT DECODE(COUNT(*), 0, 'false', 'true')
FROM elem_info ei
WHERE ei.etype IN (1005, 2005)
OR ei.interpretation IN (2, 4)
How can I limit a result set to n distinct values of a given column(s), where the actual number of rows may be higher?
Input table:
client_id, employer_id, other_value
1, 2, abc
1, 3, defg
2, 3, dkfjh
3, 1, ldkfjkj
4, 4, dlkfjk
4, 5, 342
4, 6, dkj
5, 1, dlkfj
6, 1, 34kjf
7, 7, 34kjf
8, 6, lkjkj
8, 7, 23kj
Desired output, limited to 5 distinct values of client_id:
1, 2, abc
1, 3, defg
2, 3, dkfjh
3, 1, ldkfjkj
4, 4, dlkfjk
4, 5, 342
4, 6, dkj
5, 1, dlkfj
The platform this is intended for is MySQL.
You can use a subselect
select * from table where client_id in
(select distinct client_id from table order by client_id limit 5)
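One caveat not in the original answer: MySQL may reject LIMIT inside an IN/ALL/ANY/SOME subquery with an error along the lines of "This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'". A common workaround (a sketch, assuming a table named my_table with the same client_id column) is to wrap the limited subquery in a derived table:
select *
from my_table
where client_id in (
  select client_id from (
    select distinct client_id
    from my_table
    order by client_id
    limit 5
  ) t
);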
This is for SQL Server. I can't remember whether MySQL uses a LIMIT keyword instead of TOP; that may make the query more efficient if you can get rid of the innermost subquery by using LIMIT and DISTINCT in the same subquery. (It looks like Vinko used this method and that LIMIT is correct. I'll leave this here for the second possible answer below, though.)
SELECT
client_id,
employer_id,
other_value
FROM
MyTable
WHERE
client_id IN
(
SELECT TOP 5
client_id
FROM
(
SELECT DISTINCT
client_id
FROM
MyTable
) SQ
ORDER BY
client_id
)
Of course, add in your own WHERE clause and ORDER BY clause in the subquery.
Another possibility (compare performance and see which works out better) is:
SELECT
client_id,
employer_id,
other_value
FROM
MyTable T1
WHERE
T1.client_id IN
(
SELECT
T2.client_id
FROM
MyTable T2
WHERE
(SELECT COUNT(DISTINCT T3.client_id) FROM MyTable T3 WHERE T3.client_id < T2.client_id) < 5
)
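On SQL Server 2005 and later, a window function is another way to get the same result (a sketch, not part of the original answers), since DENSE_RANK assigns every row of the same client_id the same rank:
SELECT client_id, employer_id, other_value
FROM (
    SELECT
        client_id, employer_id, other_value,
        -- rows sharing a client_id share a rank, so rank <= 5 keeps 5 distinct clients
        DENSE_RANK() OVER (ORDER BY client_id) AS client_rank
    FROM MyTable
) ranked
WHERE client_rank <= 5;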
-- Using Common Table Expression in Microsoft SQL Server.
-- The LIMIT clause does not exist in MS SQL.
WITH CTE
AS
(SELECT DISTINCT([COLUMN_NAME])
FROM [TABLE_NAME])
SELECT TOP (5) [COLUMN_NAME]
FROM CTE;
This works for MS SQL if anyone is on that platform:
SET ROWCOUNT 10;
SELECT DISTINCT
column1, column2, column3,...
FROM
Table1
WHERE ...
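One caveat not mentioned above: SET ROWCOUNT stays in effect for the rest of the session, so it's worth resetting it once you have your result:
SET ROWCOUNT 0;  -- 0 removes the row limit for subsequent statements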