bigquery url decode - google-bigquery

Is there an easy way to do URL decoding within the BigQuery query language? I'm working with a table that has a column containing URL-encoded strings in some values. For example:
http://xyz.com/example.php?url=http%3A%2F%2Fwww.example.com%2Fhello%3Fv%3D12345&foo=bar&abc=xyz
I extract the "url" parameter like so:
SELECT REGEXP_EXTRACT(column_name, "url=([^&]+)") as url
from [mydataset.mytable]
which gives me:
http%3A%2F%2Fwww.example.com%2Fhello%3Fv%3D12345
What I would like to do is something like:
SELECT URL_DECODE(REGEXP_EXTRACT(column_name, "url=([^&]+)")) as url
from [mydataset.mytable]
thereby returning:
http://www.example.com/hello?v=12345
I would like to avoid using multiple REGEXP_REPLACE() statements (replacing %20, %3A, etc...) if possible.
Ideas?

Below builds on top of @sigpwned's answer, but is slightly refactored and wrapped in a SQL UDF (which has none of the limitations a JS UDF has, so it is safe to use)
#standardSQL
CREATE TEMP FUNCTION URLDECODE(url STRING) AS ((
  SELECT SAFE_CONVERT_BYTES_TO_STRING(
    ARRAY_TO_STRING(ARRAY_AGG(
      IF(STARTS_WITH(y, '%'), FROM_HEX(SUBSTR(y, 2)), CAST(y AS BYTES)) ORDER BY i
    ), b''))
  FROM UNNEST(REGEXP_EXTRACT_ALL(url, r"%[0-9a-fA-F]{2}|[^%]+")) AS y WITH OFFSET AS i
));
SELECT
column_name,
URLDECODE(REGEXP_EXTRACT(column_name, "url=([^&]+)")) AS url
FROM `project.dataset.table`
It can be tested with the example from the question, as below:
#standardSQL
CREATE TEMP FUNCTION URLDECODE(url STRING) AS ((
  SELECT SAFE_CONVERT_BYTES_TO_STRING(
    ARRAY_TO_STRING(ARRAY_AGG(
      IF(STARTS_WITH(y, '%'), FROM_HEX(SUBSTR(y, 2)), CAST(y AS BYTES)) ORDER BY i
    ), b''))
  FROM UNNEST(REGEXP_EXTRACT_ALL(url, r"%[0-9a-fA-F]{2}|[^%]+")) AS y WITH OFFSET AS i
));
WITH `project.dataset.table` AS (
  SELECT 'http://example.com/example.php?url=http%3A%2F%2Fwww.example.com%2Fhello%3Fv%3D12345&foo=bar&abc=xyz' column_name
)
SELECT
  URLDECODE(REGEXP_EXTRACT(column_name, "url=([^&]+)")) AS url,
  column_name
FROM `project.dataset.table`
with the result:
Row url column_name
1 http://www.example.com/hello?v=12345 http://example.com/example.php?url=http%3A%2F%2Fwww.example.com%2Fhello%3Fv%3D12345&foo=bar&abc=xyz
Update: a further optimized SQL UDF:
CREATE TEMP FUNCTION URLDECODE(url STRING) AS ((
  SELECT STRING_AGG(
    IF(REGEXP_CONTAINS(y, r'^%[0-9a-fA-F]{2}'),
       SAFE_CONVERT_BYTES_TO_STRING(FROM_HEX(REPLACE(y, '%', ''))), y), ''
    ORDER BY i
  )
  FROM UNNEST(REGEXP_EXTRACT_ALL(url, r"%[0-9a-fA-F]{2}(?:%[0-9a-fA-F]{2})*|[^%]+")) y
  WITH OFFSET AS i
));
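A quick sanity check of this optimized version against the encoded value from the question (run together with the temp function above):
SELECT URLDECODE('http%3A%2F%2Fwww.example.com%2Fhello%3Fv%3D12345') AS url
-- returns: http://www.example.com/hello?v=12345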

It's a good feature request, but currently there is no built-in BigQuery function that provides URL decoding.

One more workaround is using a user-defined function.
#standardSQL
CREATE TEMPORARY FUNCTION URL_DECODE(enc STRING)
RETURNS STRING
LANGUAGE js AS """
  try {
    // decodeURIComponent (unlike decodeURI) also decodes reserved
    // characters such as %3A and %2F, which the question needs
    return decodeURIComponent(enc);
  } catch (e) {
    return null;
  }
""";
SELECT ven_session,
URL_DECODE(REGEXP_EXTRACT(para,r'&kw=(\w|[^&]*)')) AS q
FROM raas_system.weblog_20170327
WHERE para like '%&kw=%'
LIMIT 10

I agree with everyone here that URLDECODE should be a native function. However, until that happens, it is possible to write a "native" URLDECODE:
SELECT id, SAFE_CONVERT_BYTES_TO_STRING(ARRAY_TO_STRING(ps, b''))
FROM (
  SELECT
    id,
    ARRAY_AGG(CASE
      WHEN REGEXP_CONTAINS(y, r"^%") THEN FROM_HEX(SUBSTR(y, 2))
      ELSE CAST(y AS bytes)
    END ORDER BY i) AS ps
  FROM (
    SELECT x AS id, REGEXP_EXTRACT_ALL(x, r"%[0-9a-fA-F]{2}|[^%]+") AS element
    FROM UNNEST(ARRAY['domodossola%e2%80%93locarno railway', 'gabu%c5%82t%c3%b3w']) AS x
  ) AS x
  CROSS JOIN UNNEST(x.element) AS y WITH OFFSET AS i
  GROUP BY id
);
In this example, I've tried and tested the implementation with a couple of percent-encoded page names from Wikipedia as input. It should work with your input, too.
Obviously, this is extremely unwieldy! For that reason, I'd suggest building a materialized join table, or wrapping this in a view (sketched below), rather than using this expression "naked" in your query. However, it does appear to get the job done, and it doesn't hit the UDF limits.
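For illustration, a hedged sketch of the view-wrapping suggestion, reusing the decode expression above as a correlated subquery (the dataset, table, and view names are illustrative, matching the question):
CREATE VIEW `mydataset.decoded_urls` AS
SELECT
  t.column_name,
  (SELECT SAFE_CONVERT_BYTES_TO_STRING(ARRAY_TO_STRING(ARRAY_AGG(
     IF(STARTS_WITH(y, '%'), FROM_HEX(SUBSTR(y, 2)), CAST(y AS BYTES)) ORDER BY i), b''))
   FROM UNNEST(REGEXP_EXTRACT_ALL(
     REGEXP_EXTRACT(t.column_name, 'url=([^&]+)'),
     r"%[0-9a-fA-F]{2}|[^%]+")) AS y WITH OFFSET AS i) AS url
FROM `mydataset.mytable` t;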
EDIT: @Mikhail Berlyant's answer above wraps this cumbersome implementation in a nice, tidy little SQL UDF. That's a much better way to handle this!

Related

How to apply a user defined function to multiple columns in BigQuery SQL?

In the database I'm working on, there are several wage variables recorded as strings, with entries like 0000001155,00. I am using a combination of CAST and REPLACE to transform these variables into floats. For just one variable, I used:
CAST (REPLACE (wage_var, ",", ".") AS float64) as wage_formatted
I would like to perform this procedure for all variables that have the same problem, without repeating the same line of code. My idea is to use a function and then iterate the function over the columns.
After reading the documentation, I figured out how to create a function that performs the standardization. I wrote the following:
CREATE TEMP FUNCTION wage2float(x STRING) AS (CAST(REPLACE(x, ",", ".") AS float64));
SELECT
wage_var,
wage2float(wage_var) as wage_formatted
FROM
`mydataset.mytable`
However, it's not clear to me how I can iterate this function on several columns. Is there a way to loop through the columns and apply the wage2float function for each column?
EDIT:
Here is a sample of the input (csv):
vl_remun_media_nom,vl_remun_media_sm,vl_remun_dezembro_nom,vl_remun_dezembro_sm,vl_ultima_remuneracao_ano,vl_salario_contratual,vl_rem_janeiro_cc,vl_rem_fevereiro_cc,vl_rem_marco_cc,vl_rem_abril_cc,vl_rem_maio_cc,vl_rem_junho_cc,vl_rem_julho_cc,vl_rem_agosto_cc,vl_rem_setembro_cc,vl_rem_outubro_cc,vl_rem_novembro_cc
"0000006025,55","000006,42","0000005921,09","000006,31","0005921,09","0005148,77","000000005866,27","000000005866,27","000000005866,27","000000005866,27","000000005866,27","000000005866,27","000000007169,88","000000006254,78","000000005921,09","000000005921,09","000000005921,09"
"0000001447,68","000001,54","0000001726,67","000001,84","0001726,67","0000014,00","000000001645,55","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00"
"0000001304,35","000001,39","0000001304,35","000001,39","0001304,35","0001304,35","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000001304,35","000000001304,35","000000001304,35","000000001304,35"
"0000001447,68","000001,54","0000001726,67","000001,84","0001726,67","0000014,00","000000001645,55","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00"
"0000001447,68","000001,54","0000001726,67","000001,84","0001726,67","0000014,00","000000001645,56","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00"
"0000001447,68","000001,54","0000001726,67","000001,84","0001726,67","0000014,00","000000001645,55","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00","000000000014,00"
"0000001427,95","000001,52","0000001420,68","000001,51","0001420,68","0001420,68","000000001379,30","000000001379,30","000000001379,30","000000001379,30","000000001379,30","000000001379,30","000000001839,07","000000001379,30","000000001379,30","000000001420,68","000000001420,68"
"0000005937,88","000006,33","0000005900,00","000006,29","0005900,00","0000059,00","000000000057,38","000000000057,38","000000000057,38","000000000057,38","000000007650,67","000000000057,38","000000000057,38","000000000057,38","000000000057,38","000000000059,00","000000000059,00"
"0000001087,04","000001,15","0000001076,20","000001,14","0001076,20","0001076,20","000000000010,00","000000000010,00","000000000010,00","000000001076,20","000000001076,20","000000001076,20","000000001076,20","000000001434,93","000000001076,20","000000001076,20","000000001076,20"
"0000002395,30","000002,55","0000002448,79","000002,61","0002448,79","0002448,79","000000002377,47","000000002377,47","000000002377,47","000000002377,47","000000002377,47","000000002377,47","000000002377,47","000000002377,47","000000002377,47","000000002448,79","000000002448,79"
"0000001870,56","000001,99","0000001820,00","000001,94","0001820,00","0000018,00","000000001820,01","000000001820,01","000000001820,01","000000001820,01","000000001820,01","000000000018,20","000000000018,20","000000000018,20","000000000018,20","000000002426,67","000000000018,20"
"0000002960,08","000003,15","0000003068,59","000003,27","0003068,59","0000027,00","000000002724,53","000000002500,09","000000003454,64","000000002700,88","000000002943,15","000000002943,42","000000002943,69","000000003098,28","000000003098,24","000000002976,73","000000003068,79"
"0000003798,04","000004,04","0000003852,69","000004,11","0003852,69","0000030,00","000000002500,45","000000002500,57","000000002500,79","000000005306,55","000000005079,02","000000003430,02","000000004239,21","000000004182,29","000000004913,02","000000003247,38","000000003824,52"
"0000004945,06","000005,27","0000005286,81","000005,64","0005286,81","0000045,00","000000004000,10","000000004000,16","000000005392,43","000000004919,14","000000004500,98","000000004500,21","000000005936,10","000000006133,08","000000004795,43","000000004576,91","000000005299,44"
"0000005810,00","000006,19","0000005540,00","000005,91","0005540,00","0000055,40","000000006933,33","000000000055,40","000000000055,40","000000000055,40","000000000055,40","000000000055,40","000000000055,40","000000007386,67","000000000055,40","000000000055,40","000000000055,40"
"0000001103,62","000001,17","0000001090,00","000001,16","0001090,00","0000010,90","000000000010,31","000000000010,31","000000000010,31","000000001086,20","000000001086,20","000000001086,20","000000001086,20","000000001086,20","000000001086,20","000000001453,33","000000000010,90"
"0000002600,34","000002,77","0000002866,13","000003,05","0002866,13","0000010,91","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000002168,92","000000001999,70","000000002175,13","000000003036,83","000000002909,14","000000002887,45","000000002759,44"
"0000005174,66","000005,51","0000004967,86","000005,30","0004967,86","0000016,15","000000005154,31","000000004621,59","000000005161,25","000000005080,73","000000005185,34","000000004981,24","000000006430,29","000000005584,57","000000005064,43","000000005029,16","000000004835,26"
"0000005693,03","000006,07","0000005650,78","000006,03","0005650,78","0005650,78","000000005433,44","000000005433,44","000000005433,44","000000005433,44","000000007244,59","000000005433,44","000000005433,44","000000005868,12","000000005650,78","000000005650,78","000000005650,78"
"0000002485,76","000002,64","0000002810,52","000002,99","0002810,52","0000010,91","000000002193,56","000000001925,13","000000002352,46","000000002135,21","000000002440,66","000000002232,19","000000002951,81","000000002947,97","000000002588,45","000000002516,61","000000002734,59"
"0000003808,35","000004,06","0000003893,40","000004,15","0003893,40","0003893,40","000000000037,80","000000000037,80","000000000037,80","000000000037,80","000000000037,80","000000000037,80","000000000037,80","000000000037,80","000000000037,80","000000000037,80","000000004006,80"
"0000004648,00","000004,95","0000004549,71","000004,85","0004549,71","0004549,71","000000004212,70","000000004549,71","000000006066,28","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71"
"0000004521,62","000004,82","0000004549,71","000004,85","0004549,71","0004549,71","000000004212,70","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71","000000004549,71"
"0000003024,00","000003,22","0000003024,00","000003,22","0003024,00","0000030,24","000000000028,00","000000000028,00","000000000028,00","000000000028,00","000000000039,20","000000000030,24","000000000030,24","000000000030,24","000000000030,24","000000000030,24","000000000030,24"
"0000002946,43","000003,14","0000002910,00","000003,10","0002910,00","0001923,68","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000000000,00","000000002983,70","000000002945,59"
Desired output:
vl_remun_media_nom,vl_remun_media_sm,vl_remun_dezembro_nom,vl_remun_dezembro_sm,vl_ultima_remuneracao_ano,vl_salario_contratual,vl_rem_janeiro_cc,vl_rem_fevereiro_cc,vl_rem_marco_cc,vl_rem_abril_cc,vl_rem_maio_cc,vl_rem_junho_cc,vl_rem_julho_cc,vl_rem_agosto_cc,vl_rem_setembro_cc,vl_rem_outubro_cc,vl_rem_novembro_cc
6025.55,6.42,5921.09,6.31,5921.09,5148.77,5866.27,5866.27,5866.27,5866.27,5866.27,5866.27,7169.88,6254.78,5921.09,5921.09,5921.09
1447.68,1.54,1726.67,1.84,1726.67,14.0,1645.55,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0
1304.35,1.39,1304.35,1.39,1304.35,1304.35,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1304.35,1304.35,1304.35,1304.35
1447.68,1.54,1726.67,1.84,1726.67,14.0,1645.55,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0
1447.68,1.54,1726.67,1.84,1726.67,14.0,1645.56,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0
1447.68,1.54,1726.67,1.84,1726.67,14.0,1645.55,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0,14.0
1427.95,1.52,1420.68,1.51,1420.68,1420.68,1379.3,1379.3,1379.3,1379.3,1379.3,1379.3,1839.07,1379.3,1379.3,1420.68,1420.68
5937.88,6.33,5900.0,6.29,5900.0,59.0,57.38,57.38,57.38,57.38,7650.67,57.38,57.38,57.38,57.38,59.0,59.0
1087.04,1.15,1076.2,1.14,1076.2,1076.2,10.0,10.0,10.0,1076.2,1076.2,1076.2,1076.2,1434.93,1076.2,1076.2,1076.2
2395.3,2.55,2448.79,2.61,2448.79,2448.79,2377.47,2377.47,2377.47,2377.47,2377.47,2377.47,2377.47,2377.47,2377.47,2448.79,2448.79
1870.56,1.99,1820.0,1.94,1820.0,18.0,1820.01,1820.01,1820.01,1820.01,1820.01,18.2,18.2,18.2,18.2,2426.67,18.2
2960.08,3.15,3068.59,3.27,3068.59,27.0,2724.53,2500.09,3454.64,2700.88,2943.15,2943.42,2943.69,3098.28,3098.24,2976.73,3068.79
3798.04,4.04,3852.69,4.11,3852.69,30.0,2500.45,2500.57,2500.79,5306.55,5079.02,3430.02,4239.21,4182.29,4913.02,3247.38,3824.52
4945.06,5.27,5286.81,5.64,5286.81,45.0,4000.1,4000.16,5392.43,4919.14,4500.98,4500.21,5936.1,6133.08,4795.43,4576.91,5299.44
5810.0,6.19,5540.0,5.91,5540.0,55.4,6933.33,55.4,55.4,55.4,55.4,55.4,55.4,7386.67,55.4,55.4,55.4
1103.62,1.17,1090.0,1.16,1090.0,10.9,10.31,10.31,10.31,1086.2,1086.2,1086.2,1086.2,1086.2,1086.2,1453.33,10.9
2600.34,2.77,2866.13,3.05,2866.13,10.91,0.0,0.0,0.0,0.0,2168.92,1999.7,2175.13,3036.83,2909.14,2887.45,2759.44
5174.66,5.51,4967.86,5.3,4967.86,16.15,5154.31,4621.59,5161.25,5080.73,5185.34,4981.24,6430.29,5584.57,5064.43,5029.16,4835.26
5693.03,6.07,5650.78,6.03,5650.78,5650.78,5433.44,5433.44,5433.44,5433.44,7244.59,5433.44,5433.44,5868.12,5650.78,5650.78,5650.78
2485.76,2.64,2810.52,2.99,2810.52,10.91,2193.56,1925.13,2352.46,2135.21,2440.66,2232.19,2951.81,2947.97,2588.45,2516.61,2734.59
3808.35,4.06,3893.4,4.15,3893.4,3893.4,37.8,37.8,37.8,37.8,37.8,37.8,37.8,37.8,37.8,37.8,4006.8
4648.0,4.95,4549.71,4.85,4549.71,4549.71,4212.7,4549.71,6066.28,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71
4521.62,4.82,4549.71,4.85,4549.71,4549.71,4212.7,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71,4549.71
3024.0,3.22,3024.0,3.22,3024.0,30.24,28.0,28.0,28.0,28.0,39.2,30.24,30.24,30.24,30.24,30.24,30.24
2946.43,3.14,2910.0,3.1,2910.0,1923.68,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2983.7,2945.59
Only the columns starting with vl_ need this; there are several other variables that don't need the procedure.
Below is for BigQuery Standard SQL and uses BQ Scripting
execute immediate (
  select 'select * replace(' ||
    string_agg('cast(replace(' || column || ', ",", ".") as float64) as ' || column, ', ') ||
    ') from YourTable'
  from (
    select regexp_extract_all(to_json_string(t), r'"(vl_[^"]*)":') as columns
    from YourTable t
    limit 1
  ), unnest(columns) column
);
If applied to the simplified example below (which still fully represents the OP's use case):
select 1 id, "0000006025,55" vl_x, "000006,42" y, "0000005921,09" vl_z union all
select 2, "0000001447,68", "000001,54", "0000001726,67"
the output is a sequence of statement results; click VIEW RESULTS on the last row to see the final result.
Depending on what you then want to do with the result, you can adjust the code to replace YourTable with this output, create a new table, etc. See an example of such an adjustment (just the first line changes - the rest is the same):
execute immediate (select 'create table NewTable as select * replace(' ||
. . .
If you want a select query, you would just use:
SELECT CAST(REPLACE(wage_var, ',', '.') AS float64) as wage_formatted,
CAST(REPLACE(taxes_var, ',', '.') AS float64) as taxes_formatted,
. . .
FROM t;
If you want to do this "permanently" . . . well, I would suggest a view:
CREATE VIEW v_t AS
SELECT t.*,
CAST(REPLACE(wage_var, ',', '.') AS float64) as wage_formatted,
CAST(REPLACE(taxes_var, ',', '.') AS float64) as taxes_formatted,
. . .
FROM t;
You could also add new columns into the table and give them the floating point value.
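A hedged sketch of that last alternative (the column names are illustrative; this assumes BigQuery's ALTER TABLE ADD COLUMN DDL and UPDATE DML):
ALTER TABLE `mydataset.mytable` ADD COLUMN wage_formatted FLOAT64;

UPDATE `mydataset.mytable`
SET wage_formatted = CAST(REPLACE(wage_var, ',', '.') AS FLOAT64)
WHERE true;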

Issue with array_agg method when aggregating arrays of different lengths

Here is a lateral query which is part of a bigger query:
lateral (
  select array_agg(sh.dogsfilters) filter (where sh.dogsfilters is not null) as dependencyOfFoods
  from shelter sh
  where sh.shelterid = ${shelterid}
) filtersOfAnimals,
The problem is with the array_agg call: it fails when aggregating arrays of different lengths, like this ("[[7, 9], [7, 9, 8], [8]]").
The problem is easy to solve using json_agg, but later in the query there's an any check like this:
...
where
cd.dogsid = any(filtersOfAnimals.dependencyOfFoods)
and
...
...
But since any will not work on the json data produced by json_agg, I can't use it in place of array_agg!
What might be a better solution to this?
Unnest the arrays and re-aggregate:
lateral
(select array_agg(dogfilter) filter (where dogfilter is not null) as dependencyOfFoods
from shelter sh cross join
unnest(sh.dogsfilters) dogfilter
where sh.shelterid = ${shelterid}
) filtersOfAnimals,
It is interesting that Postgres doesn't have a function that does this. BigQuery offers array_concat_agg() which does exactly what you want.
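For reference, a minimal BigQuery sketch of that function (the inline data here is illustrative):
SELECT ARRAY_CONCAT_AGG(x) AS flattened
FROM UNNEST([STRUCT([1,2] AS x), STRUCT([3,4,5] AS x)]);
-- flattened = [1, 2, 3, 4, 5]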
It is ugly, but it works:
regexp_split_to_array(
  array_to_string(
    array_agg(
      array_to_string(value, ',')
    ), ','
  ), ','
)::integer[]
I don't know if this is a valid solution from the performance point of view ...
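In context, the expression would be used roughly like this (a sketch against the same inline data as the other answer):
SELECT regexp_split_to_array(
  array_to_string(
    array_agg(array_to_string(x, ',')), ','
  ), ',')::integer[] AS flattened
FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4,5])) f(x);
-- flattened = {1,2,3,4,5}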
In PostgreSQL, you can define your own aggregates. I think that this one does what you want:
create function array_concat_agg_tran(anyarray,anyarray) returns anyarray language sql
as $$ select $1||$2 $$;
create aggregate array_concat_agg(anyarray) (sfunc=array_concat_agg_tran, stype=anyarray);
Then:
select array_concat_agg(x) from (values (ARRAY[1,2]),(ARRAY[3,4,5])) f(x);
array_concat_agg
------------------
{1,2,3,4,5}
With a bit more work, you could make it parallelizable as well.
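A hedged sketch of that bit more work: marking the transition function parallel safe and reusing it as the combine function should let PostgreSQL (9.6+) parallelize the aggregate:
create or replace function array_concat_agg_tran(anyarray, anyarray) returns anyarray
language sql parallel safe as $$ select $1 || $2 $$;

drop aggregate if exists array_concat_agg(anyarray);
create aggregate array_concat_agg(anyarray) (
  sfunc = array_concat_agg_tran,
  stype = anyarray,
  combinefunc = array_concat_agg_tran,
  parallel = safe
);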

How to get From & To Ip Address from CIDR BigQuery

BigQuery provides the updated GeoLite2 public dataset here [bigquery-public-data -> geolite2 -> ipv4_city_blocks], which contains a network column with IPv4 CIDR values.
How do I convert the CIDR values in the network column, via BigQuery SQL (and not via a utility outside BigQuery), into start and end IP address values, so that I can find whether an IP address is within a range or not? It would be helpful if you could provide the query to obtain the range IPs for a CIDR value in the table.
Below is for BigQuery Standard SQL
#standardSQL
CREATE TEMP FUNCTION cidrToRange(CIDR STRING)
RETURNS STRUCT<start_IP STRING, end_IP STRING>
LANGUAGE js AS """
  // start of the range: the address part before the '/'
  var beg = CIDR.substr(0, CIDR.indexOf('/'));
  // number of extra addresses covered by the prefix
  var off = (1 << (32 - parseInt(CIDR.substr(CIDR.indexOf('/') + 1)))) - 1;
  var sub = beg.split('.').map(function(a){ return parseInt(a); });
  var buf = new ArrayBuffer(4);
  var i32 = new Uint32Array(buf);
  i32[0] = (sub[0] << 24) + (sub[1] << 16) + (sub[2] << 8) + sub[3] + off;
  // read the four bytes back out to get the dotted-quad end of the range
  var end = Array.apply([], new Uint8Array(buf)).reverse().join('.');
  return {start_IP: beg, end_IP: end};
""";
SELECT network, IP_range.*
FROM `bigquery-public-data.geolite2.ipv4_city_blocks`,
UNNEST([cidrToRange(network)]) IP_range
It took about 60 seconds to process all 3,037,858 rows.
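For comparison, a UDF-free sketch using BigQuery's native NET functions should produce the same ranges (assuming the network column holds IPv4 CIDRs): the end of a range is the start address OR'ed with the inverted network mask.
SELECT
  network,
  NET.IP_TO_STRING(addr_bin) AS start_IP,
  NET.IP_TO_STRING(addr_bin | ~NET.IP_NET_MASK(4, prefix)) AS end_IP
FROM (
  SELECT
    network,
    NET.IP_FROM_STRING(SPLIT(network, '/')[OFFSET(0)]) AS addr_bin,
    CAST(SPLIT(network, '/')[OFFSET(1)] AS INT64) AS prefix
  FROM `bigquery-public-data.geolite2.ipv4_city_blocks`
)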
This query will do the job:
# replace with your source of IP addresses
# here I'm using the same Wikipedia set from the previous article
WITH source_of_ip_addresses AS (
SELECT REGEXP_REPLACE(contributor_ip, 'xxx', '0') ip, COUNT(*) c
FROM `publicdata.samples.wikipedia`
WHERE contributor_ip IS NOT null
GROUP BY 1
)
SELECT city_name, SUM(c) c, ST_GeogPoint(AVG(longitude), AVG(latitude)) point
FROM (
SELECT ip, city_name, c, latitude, longitude, geoname_id
FROM (
SELECT *, NET.SAFE_IP_FROM_STRING(ip) & NET.IP_NET_MASK(4, mask) network_bin
FROM source_of_ip_addresses, UNNEST(GENERATE_ARRAY(9,32)) mask
WHERE BYTE_LENGTH(NET.SAFE_IP_FROM_STRING(ip)) = 4
)
JOIN `fh-bigquery.geocode.201806_geolite2_city_ipv4_locs`
USING (network_bin, mask)
)
WHERE city_name IS NOT null
GROUP BY city_name, geoname_id
ORDER BY c DESC
LIMIT 5000
Find more details on:
https://towardsdatascience.com/geolocation-with-bigquery-de-identify-76-million-ip-addresses-in-20-seconds-e9e652480bd2
The first thing you need to check is whether that function already exists, so please refer to the BigQuery Functions and Operators documentation.
If not, you need to use Standard SQL User-Defined Functions (UDFs), which let you create a function using another SQL expression or another programming language, such as JavaScript.
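For orientation, minimal sketches of both UDF flavors (the function names here are illustrative):
CREATE TEMP FUNCTION add_one_sql(x INT64) AS (x + 1);  -- SQL expression UDF

CREATE TEMP FUNCTION add_one_js(x FLOAT64)
RETURNS FLOAT64
LANGUAGE js AS """
  return x + 1;  // JavaScript UDF
""";

SELECT add_one_sql(1) AS a, add_one_js(1.0) AS b;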
Keep in mind that when using a JavaScript UDF, BigQuery initializes a JavaScript environment with the function's contents on every shard of execution. There is no optimization to avoid loading the environment, so it can slow down the query.
Regarding the GeoIP2 City and Country CSV Databases site, there is a utility to convert the 'network' column to start/end IPs or start/end integers. Refer to the GitHub site for details.
January 2023 solution
Just wanted to respond to Felipe's comment here. I'm not sure why he suggests an alternate solution using Snowflake, as his existing solution works just fine. The only difference is that you need to create the dataset yourself.
I managed to solve this by going through the exact same steps listed in Felipe's very helpful original blog article:
Sign up to MaxMind and download the GeoLite2 databases (link)
Download the two CSV files GeoLite2-City-Blocks-IPv4.csv and GeoLite2-City-Locations-en.csv, upload them to a GCP bucket, and create tables from them. I lazily used the BQ automated schema feature and it worked just fine :)
Simply create a geolite2_locs table using a query similar to the one below (just keep or drop your columns as required for your use-case)
CREATE OR REPLACE TABLE `dataset.geolite2_locs` OPTIONS() AS (
SELECT
ip_ref.network,
NET.IP_FROM_STRING(REGEXP_EXTRACT(ip_ref.network, r'(.*)/' )) network_bin,
CAST(REGEXP_EXTRACT(ip_ref.network, r'/(.*)' ) AS INT64) mask,
ip_ref.geoname_id,
city_ref.continent_name as continent_name,
city_ref.country_name as country_name,
city_ref.city_name as city_name,
city_ref.subdivision_1_name as subdivision_1_name,
city_ref.subdivision_2_name as subdivision_2_name,
ip_ref.latitude as latitude,
ip_ref.longitude as longitude,
FROM `geolite2`.`geolite2-ipv4` ip_ref LEFT JOIN `geolite2`.`geolite2-city-en` city_ref USING (geoname_id)
);
Adapt the query in Felipe's guide or just replace the fh-bigquery.geocode.201806_geolite2_city_ipv4_locs with your new table in his answer above.
It should take you at most an hour to get this going. Hope it helps.

Apply like function on an array in SQL Server

I am getting an array from the front end to perform filters accordingly inside the SQL query.
I want to apply a LIKE filter on the array. How do I use an array inside the LIKE function?
I am using Angular with Html as front end and Node as back end.
Array being passed in from the front end:
[ "Sports", "Life", "Relationship", ...]
SQL query is :
SELECT *
FROM Skills
WHERE Description LIKE ('%Sports%')
SELECT *
FROM Skills
WHERE Description LIKE ('%Life%')
SELECT *
FROM Skills
WHERE Description LIKE ('%Relationship%')
But I am getting an array from the front end - how do I create a query for this?
In SQL Server 2017 you can use OPENJSON to consume the JSON string as-is:
SELECT *
FROM skills
WHERE EXISTS (
SELECT 1
FROM OPENJSON('["Sports", "Life", "Relationship"]', '$') AS j
WHERE skills.description LIKE '%' + j.value + '%'
)
Demo on db<>fiddle
As an example, for SQL Server 2016+ and STRING_SPLIT():
DECLARE @Str NVARCHAR(100) = N'mast;mode'
SELECT name FROM sys.databases sd
INNER JOIN STRING_SPLIT(@Str, N';') val ON sd.name LIKE N'%' + val.value + N'%'
-- returns:
name
------
master
model
It's worth mentioning that the input data must be strictly controlled, since this approach can lead to a SQL injection attack.
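One hedged way to keep the input controlled is to bind the delimited string as a parameter on the app side instead of concatenating it into the SQL text (sketch; @Str is a bound parameter supplied by the application):
SELECT s.*
FROM Skills s
INNER JOIN STRING_SPLIT(@Str, N';') val
    ON s.Description LIKE N'%' + val.value + N'%';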
As an alternative, safer and simpler approach, the SQL can be generated on the app side this way:
Select * from Skills
WHERE (
    Description Like '%Sports%'
    OR Description Like '%Life%'
    OR Description Like '%Relationship%'
)
A simple map()-call on the words array will allow you to generate the corresponding queries, which you can then execute (with or without joining them first into a single string).
Demo:
var words = ["Sports", "Life", "Relationship"];
var template = "Select * From Skills Where Description Like ('%{0}%')";
var queries = words.map(word => template.replace('{0}', word));
var combinedQuery = queries.join("\r\n");
console.log(queries);
console.log(combinedQuery);

Bigquery - json_extract all elements from an array

I'm trying to extract two keys from every JSON object in an array of JSONs (using legacy SQL).
Currently I am using the JSON_EXTRACT function:
json_extract(json_column , '$[1].X') AS X,
json_extract(json_column , '$[1].Y') AS Y,
How can I make it run on every JSON in the JSON array column, and not just [1] (for example)?
An example json:
[
{"blabla1":0,"X":1,"blabla2":0,"blabla3":0,"blabla4":0,"Y":"2"},
{"blabla1":0,"X":3,"blabla2":0,"blabla3":0,"blabla4":0,"Y":"4"}
]
thanks in advance!
Update 2020: JSON_EXTRACT_ARRAY()
Now BigQuery supports JSON_EXTRACT_ARRAY():
https://cloud.google.com/bigquery/docs/reference/standard-sql/json_functions#json_extract_array
For example, to solve this particular question:
SELECT id
, ARRAY(
SELECT JSON_EXTRACT_SCALAR(x, '$.author.email')
FROM UNNEST(JSON_EXTRACT_ARRAY(payload, "$.commits"))x
) emails
FROM `githubarchive.day.20180830`
WHERE type='PushEvent'
AND id='8188163772'
Previous answer
Let's start with a similar problem - this is not a very convenient way to extract all emails from a json array:
SELECT id
, [ JSON_EXTRACT_SCALAR(JSON_EXTRACT(payload, '$.commits'), '$[0].author.email')
, JSON_EXTRACT_SCALAR(JSON_EXTRACT(payload, '$.commits'), '$[1].author.email')
, JSON_EXTRACT_SCALAR(JSON_EXTRACT(payload, '$.commits'), '$[2].author.email')
, JSON_EXTRACT_SCALAR(JSON_EXTRACT(payload, '$.commits'), '$[3].author.email')
] emails
FROM `githubarchive.day.20180830`
WHERE type='PushEvent'
AND id='8188163772'
The best way we have right now to deal with this is to use some JavaScript in a UDF to split a JSON array into a SQL array:
CREATE TEMP FUNCTION json2array(json STRING)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
return JSON.parse(json).map(x=>JSON.stringify(x));
""";
SELECT * EXCEPT(array_commits),
ARRAY(SELECT JSON_EXTRACT_SCALAR(x, '$.author.email') FROM UNNEST(array_commits) x) emails
FROM (
SELECT id
, json2array(JSON_EXTRACT(payload, '$.commits')) array_commits
FROM `githubarchive.day.20180830`
WHERE type='PushEvent'
AND id='8188163772'
)
May 1st, 2020 Update
A new function, JSON_EXTRACT_ARRAY, has just been added to the list of JSON functions. This function allows you to extract the contents of a JSON document as a string array.
So, below, you can replace the use of the CUSTOM_JSON_EXTRACT UDF with the built-in function JSON_EXTRACT_ARRAY, as in this example:
#standardSQL
SELECT
JSON_EXTRACT_SCALAR(json , '$.X') AS X,
JSON_EXTRACT_SCALAR(json , '$.Y') AS Y
FROM t, UNNEST(JSON_EXTRACT_ARRAY(json_column , '$')) json
==============
The example below is for BigQuery Standard SQL and allows you to stay close to the standard way of working with JSONPath, with no extra manipulation needed, so you simply use the CUSTOM_JSON_EXTRACT(json, json_path) function:
#standardSQL
CREATE TEMPORARY FUNCTION CUSTOM_JSON_EXTRACT(json STRING, json_path STRING)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
return jsonPath(JSON.parse(json), json_path);
"""
OPTIONS (
library="gs://your_bucket/jsonpath-0.8.0.js"
);
WITH t AS (
SELECT '''
[
{"blabla1":1,"X":1,"blabla2":3,"blabla3":5,"blabla4":7,"Y":"2"},
{"blabla1":2,"X":3,"blabla2":4,"blabla3":6,"blabla4":8,"Y":"4"}
]
''' AS json_column
)
SELECT
CUSTOM_JSON_EXTRACT(json_column , '$[*].X') AS X,
CUSTOM_JSON_EXTRACT(json_column , '$[*].Y') AS Y
FROM t
The result will be:
Row X Y
1 1 2
3 4
Note: to overcome BigQuery's current "limitation" for JSONPath, the above solution uses a custom function along with an external library - jsonpath-0.8.0.js, which can be downloaded from https://code.google.com/archive/p/jsonpath/downloads and uploaded to Google Cloud Storage - gs://your_bucket/jsonpath-0.8.0.js
Just re-read Felipe's answer - for his example, the above solution will look like below (just as an FYI):
SELECT
id,
CUSTOM_JSON_EXTRACT(payload, '$.commits[*].author.email') emails
FROM `githubarchive.day.20180830`
WHERE type='PushEvent'
AND id='8188163772'