How to pass a param for a binding in PostgreSQL - COPY (... ) TO STDOUT (FORMAT binary)? - sql

I have a simple test table in Postgres, like below:
--DROP TABLE test_point
CREATE TABLE test_point
(
    serie_id INT NOT NULL,
    version_ts INT NOT NULL,
    PRIMARY KEY (serie_id, version_ts)
);
I'm trying to load data from it using COPY TO STDOUT and binary buffers. This is the SQL definition I use in the test case:
COPY (
    SELECT version_ts
    FROM test_point
    WHERE
        serie_id = $1::int
) TO STDOUT (FORMAT binary);
It works fine if I don't provide any parameter to bind in the SQL. With a plain SELECT (no COPY), parameters are recognized as well.
I also tried providing explicit parameter type information during statement preparation, but the result was the same (the parameter isn't recognized).
This is the message I receive when running the test case:
0x000001740a288ab0 "ERROR: bind message supplies 1 parameters, but prepared statement \"test1\" requires 0\n"
How do I properly provide a parameter for a COPY () statement?
I don't want to cut and concatenate strings for timestamp parameters and similar types.
Below is a test case showing the issue; a sketch of the working plain-SELECT variant follows it.
TEST(TSStorage, CopyParamTest)
{
    auto sql = R"(
        COPY (
            SELECT version_ts
            FROM test_point
            WHERE
                serie_id = $1::int
        ) TO STDOUT (FORMAT binary);
    )";

    auto connPtr = PQconnectdb("postgresql://postgres:pswd@localhost/some_db");
    auto result = PQprepare(connPtr, "test1", sql, 0, nullptr);

    // Lambda to test result status
    auto testRes = [&](ExecStatusType status)
    {
        if (PQresultStatus(result) != status)
        {
            PQclear(result);
            auto errorMsg = PQerrorMessage(connPtr);
            PQfinish(connPtr);
            throw std::runtime_error(errorMsg);
        }
    };

    testRes(PGRES_COMMAND_OK);
    PQclear(result);

    int seriesIdParam = htonl(5);
    const char *paramValues[] = {(const char *)&seriesIdParam};
    const int paramLengths[] = {sizeof(seriesIdParam)};
    const int paramFormats[] = {1}; // 1 means binary

    // Execute prepared statement
    result = PQexecPrepared(connPtr,
                            "test1",
                            1, // nParams
                            paramValues,
                            paramLengths,
                            paramFormats,
                            1); // Output format - binary

    // Ensure it's in COPY_OUT state
    //testRes(PGRES_COPY_OUT);
    if (PQresultStatus(result) != PGRES_COPY_OUT)
    {
        auto errorMsg = PQerrorMessage(connPtr);
        int set_breakpoint_here = 0; // !!! !!! !!!
    }

    PQclear(result);
    PQfinish(connPtr);
}
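For reference, this is roughly what the working plain-SELECT variant (no COPY) looks like with the same binary parameter. This is a minimal sketch for illustration only, using libpq's PQexecParams; the connection string is a placeholder and error handling is simplified.

// Minimal sketch: same binary int4 parameter, but via PQexecParams on a plain SELECT.
// The connection string and table are placeholders.
#include <libpq-fe.h>
#include <arpa/inet.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    PGconn *conn = PQconnectdb("postgresql://postgres:pswd@localhost/some_db");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        std::fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    // A binary int4 parameter must be sent in network byte order.
    uint32_t serieId = htonl(5);
    const char *paramValues[] = {reinterpret_cast<const char *>(&serieId)};
    const int paramLengths[] = {sizeof(serieId)};
    const int paramFormats[] = {1}; // 1 = binary

    PGresult *res = PQexecParams(conn,
                                 "SELECT version_ts FROM test_point WHERE serie_id = $1::int",
                                 1,        // nParams
                                 nullptr,  // let the server infer parameter types
                                 paramValues,
                                 paramLengths,
                                 paramFormats,
                                 1);       // request binary results

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        for (int row = 0; row < PQntuples(res); ++row)
        {
            // Binary int4 result: 4 bytes, network byte order.
            uint32_t raw;
            std::memcpy(&raw, PQgetvalue(res, row, 0), sizeof(raw));
            std::printf("version_ts = %d\n", static_cast<int>(ntohl(raw)));
        }
    }
    else
    {
        std::fprintf(stderr, "%s", PQerrorMessage(conn));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}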

How can I automatically infer schemas of CSV files on S3 as I load them?

Context
Currently I am using Snowflake as a Data Warehouse and AWS' S3 as a data lake. The majority of the files that land on S3 are in the Parquet format. For these, I am using a new limited feature by Snowflake (documented here) that automatically detects the schema from the parquet files on S3, which I can use to generate a CREATE TABLE statement with the correct column names and inferred data types. This feature currently only works for Apache Parquet, Avro, and ORC files. I would like to find a way that achieves the same desired objective but for CSV files.
What I have tried to do
This is how I currently infer the schema for Parquet files:
select generate_column_description(array_agg(object_construct(*)), 'table') as columns
from table (infer_schema(location=>'${LOCATION}', file_format=>'${FILE_FORMAT}'))
However, if I try specifying the FILE_FORMAT as CSV, that approach fails.
Other approaches I have considered:
Converting all files that land on S3 to Parquet (this involves more code and infra setup, so it wouldn't be my top choice, especially as I'd like to keep some files in their native format on S3).
Having a script (using libraries like Pandas in Python, for example) that infers the schema for files in S3 (this also involves more code, and would be odd in the sense that Parquet files are handled in Snowflake while non-Parquet files are handled by a script on AWS).
Using a Snowflake UDF to infer the schema. I haven't fully considered my options there yet.
Desired Behaviour
As a new CSV file lands on S3 (on a pre-existing STAGE), I would like to infer the schema and be able to generate a CREATE TABLE statement with the inferred data types. Preferably, I would like to do that within Snowflake, since the aforementioned schema-inference solution already exists there. Happy to add further information if needed.
UPDATE: I modified the SP that infers data types in untyped (all string type columns) tables and it now works directly against Snowflake stages. The project code is available here: https://github.com/GregPavlik/InferSchema
I wrote a stored procedure to assist with this; however, its only goal is to infer the data types of untyped columns. It works as follows:
Load the CSV into a table with all columns defined as varchars.
Call the SP with a query against the new table (main point is to get only the columns you want and limit the row count to keep type inference times reasonable).
Also in the SP call is the DB, schema, and table for the old and new locations -- old with all varchar and new with the inferred types.
The SP will then infer the data types and create two SQL statements. One statement will create the new table with the inferred data types. One statement will copy from the untyped (all varchar) table to the new table with appropriate wrappers such as try_multi_timestamp(), a UDF that extends try_to_timestamp() to try various common formats.
I meant to extend this so that it didn't require the untyped (all-varchar) table at all, but haven't gotten around to it. Since it's come up here, I may circle back and update the SP with that capability. You can specify a query that reads directly from the stage, but you'd have to use $1, $2, ... with aliases for the column names (or else the DDL will try to create column names like $1). If the query runs directly against a stage, you can put whatever you like for the old DB, schema, and table, because they are only used to generate the insert-from-select statement.
-- This shows how to use on the Snowflake TPCH sample, but could be any query.
-- Keep the row count down to reduce the time it takes to infer the types.
call infer_data_types('select * from SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.LINEITEM limit 10000',
'SNOWFLAKE_SAMPLE_DATA', 'TPCH_SF1', 'LINEITEM',
'TEST', 'PUBLIC', 'LINEITEM');
create or replace procedure INFER_DATA_TYPES(SOURCE_QUERY string,
DATABASE_OLD string,
SCHEMA_OLD string,
TABLE_OLD string,
DATABASE_NEW string,
SCHEMA_NEW string,
TABLE_NEW string)
returns string
language javascript
as
$$
/****************************************************************************************************
* *
* DataType Classes
* *
****************************************************************************************************/
class Query{
constructor(statement){
this.statement = statement;
}
}
class DataType {
constructor(db, schema, table, column, sourceQuery) {
this.db = db;
this.schema = schema;
this.table = table;
this.sourceQuery = sourceQuery
this.column = column;
this.insert = '"#~COLUMN~#"';
this.totalCount = 0;
this.notNullCount = 0;
this.typeCount = 0;
this.blankCount = 0;
this.minTypeOf = 0.95;
this.minNotNull = 1.00;
}
setSQL(sqlTemplate){
this.sql = sqlTemplate;
this.sql = this.sql.replace(/#~DB~#/g, this.db);
this.sql = this.sql.replace(/#~SCHEMA~#/g, this.schema);
this.sql = this.sql.replace(/#~TABLE~#/g, this.table);
this.sql = this.sql.replace(/#~COLUMN~#/g, this.column);
}
getCounts(){
var rs;
rs = GetResultSet(this.sql);
rs.next();
this.totalCount = rs.getColumnValue("TOTAL_COUNT");
this.notNullCount = rs.getColumnValue("NON_NULL_COUNT");
this.typeCount = rs.getColumnValue("TO_TYPE_COUNT");
this.blankCount = rs.getColumnValue("BLANK");
}
isCorrectType(){
return (this.typeCount / (this.notNullCount - this.blankCount) >= this.minTypeOf);
}
isNotNull(){
return (this.notNullCount / this.totalCount >= this.minNotNull);
}
}
class TimestampType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "timestamp";
this.insert = 'try_multi_timestamp(trim("#~COLUMN~#"))';
this.sourceQuery = SOURCE_QUERY;
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
class IntegerType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "number(38,0)";
this.insert = 'try_to_number(trim("#~COLUMN~#"), 38, 0)';
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
class DoubleType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "double";
this.insert = 'try_to_double(trim("#~COLUMN~#"))';
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
class BooleanType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "boolean";
this.insert = 'try_to_boolean(trim("#~COLUMN~#"))';
this.setSQL(GetCheckTypeSQL(this.insert, this.sourceQuery));
this.getCounts();
}
}
// Catch all is STRING data type
class StringType extends DataType{
constructor(db, schema, table, column, sourceQuery){
super(db, schema, table, column, sourceQuery)
this.syntax = "string";
this.totalCount = 1;
this.notNullCount = 0;
this.typeCount = 1;
this.minTypeOf = 0;
this.minNotNull = 1;
}
}
/****************************************************************************************************
* *
* Main function *
* *
****************************************************************************************************/
var pass = 0;
var column;
var typeOf;
var ins = '';
var newTableDDL = '';
var insertDML = '';
var columnRS = GetResultSet(GetTableColumnsSQL(DATABASE_OLD, SCHEMA_OLD, TABLE_OLD));
while (columnRS.next()){
pass++;
if(pass > 1){
newTableDDL += ",\n";
insertDML += ",\n";
}
column = columnRS.getColumnValue("COLUMN_NAME");
typeOf = InferDataType(DATABASE_OLD, SCHEMA_OLD, TABLE_OLD, column, SOURCE_QUERY);
newTableDDL += '"' + typeOf.column + '" ' + typeOf.syntax;
ins = typeOf.insert;
insertDML += ins.replace(/#~COLUMN~#/g, typeOf.column);
}
return GetOpeningComments() +
GetDDLPrefixSQL(DATABASE_NEW, SCHEMA_NEW, TABLE_NEW) +
newTableDDL +
GetDDLSuffixSQL() +
GetDividerSQL() +
GetInsertPrefixSQL(DATABASE_NEW, SCHEMA_NEW, TABLE_NEW) +
insertDML +
GetInsertSuffixSQL(DATABASE_OLD, SCHEMA_OLD, TABLE_OLD) ;
/****************************************************************************************************
* *
* Helper functions *
* *
****************************************************************************************************/
function InferDataType(db, schema, table, column, sourceQuery){
var typeOf;
typeOf = new IntegerType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
typeOf = new DoubleType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
typeOf = new BooleanType(db, schema, table, column, sourceQuery); // May want to do a distinct and look for two values
if (typeOf.isCorrectType()) return typeOf;
typeOf = new TimestampType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
typeOf = new StringType(db, schema, table, column, sourceQuery);
if (typeOf.isCorrectType()) return typeOf;
return null;
}
/****************************************************************************************************
* *
* SQL Template Functions *
* *
****************************************************************************************************/
function GetCheckTypeSQL(insert, sourceQuery){
var sql =
`
select count(1) as TOTAL_COUNT,
count("#~COLUMN~#") as NON_NULL_COUNT,
count(${insert}) as TO_TYPE_COUNT,
sum(iff(trim("#~COLUMN~#")='', 1, 0)) as BLANK
--from "#~DB~#"."#~SCHEMA~#"."#~TABLE~#";
from (${sourceQuery})
`;
return sql;
}
function GetTableColumnsSQL(dbName, schemaName, tableName){
var sql =
`
select COLUMN_NAME
from ${dbName}.INFORMATION_SCHEMA.COLUMNS
where TABLE_CATALOG = '${dbName}' and
TABLE_SCHEMA = '${schemaName}' and
TABLE_NAME = '${tableName}'
order by ORDINAL_POSITION;
`;
return sql;
}
function GetOpeningComments(){
return `
/**************************************************************************************************************
* *
* Copy and paste into a worksheet to create the typed table and insert into the new table from the old one. *
* *
**************************************************************************************************************/
`;
}
function GetDDLPrefixSQL(db, schema, table){
var sql =
`
create or replace table "${db}"."${schema}"."${table}"
(
`;
return sql;
}
function GetDDLSuffixSQL(){
return "\n);";
}
function GetDividerSQL(){
return `
/**************************************************************************************************************
* *
* The SQL statement below this attempts to copy all rows from the string table to the typed table. *
* *
**************************************************************************************************************/
`;
}
function GetInsertPrefixSQL(db, schema, table){
var sql =
`\ninsert into "${db}"."${schema}"."${table}" select\n`;
return sql;
}
function GetInsertSuffixSQL(db, schema, table){
var sql =
`\nfrom "${db}"."${schema}"."${table}" ;`;
return sql;
}
//function GetInsertSuffixSQL(db, schema, table){
//var sql = '\nfrom "${db}"."${schema}"."${table}";';
//return sql;
//}
/****************************************************************************************************
* *
* SQL functions *
* *
****************************************************************************************************/
function GetResultSet(sql){
cmd1 = {sqlText: sql};
stmt = snowflake.createStatement(cmd1);
var rs;
rs = stmt.execute();
return rs;
}
function ExecuteNonQuery(queryString) {
var out = '';
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
rs = stmt.execute();
}
function ExecuteSingleValueQuery(columnName, queryString) {
var out;
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
try{
rs = stmt.execute();
rs.next();
return rs.getColumnValue(columnName);
}
catch(err) {
if (err.message.substring(0, 18) == "ResultSet is empty"){
throw "ERROR: No rows returned in query.";
} else {
throw "ERROR: " + err.message.replace(/\n/g, " ");
}
}
return out;
}
function ExecuteFirstValueQuery(queryString) {
var out;
cmd1 = {sqlText: queryString};
stmt = snowflake.createStatement(cmd1);
var rs;
try{
rs = stmt.execute();
rs.next();
return rs.getColumnValue(1);
}
catch(err) {
if (err.message.substring(0, 18) == "ResultSet is empty"){
throw "ERROR: No rows returned in query.";
} else {
throw "ERROR: " + err.message.replace(/\n/g, " ");
}
}
return out;
}
function getQuery(sql){
var cmd = {sqlText: sql};
var query = new Query(snowflake.createStatement(cmd));
try {
query.resultSet = query.statement.execute();
} catch (err) {
throw "ERROR: " + err.message.replace(/\n/g, " ");
}
return query;
}
$$;
Have you tried STAGES?
Create two stages: one with no header and the other with a header; see the examples below.
Then a bit of SQL and voila, your DDL.
The only issue: you need to know the number of columns to put in the correct number of t.$'s.
If someone could automate that, we'd have an almost automatic DDL generator for CSVs.
Obviously, once you have the SQL statement, just add the CREATE OR REPLACE TABLE to the front and your table is nicely created with all the names from the CSV.
:-)
-- create or replace stage CSV_NO_HEADER
URL = 's3://xxx-x-dev-landing/xxx/'
STORAGE_INTEGRATION = "xxxLAKE_DEV_S3_INTEGRATION"
FILE_FORMAT = ( TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"' )
-- create or replace stage CSV
URL = 's3://xxx-xxxlake-dev-landing/xxx/'
STORAGE_INTEGRATION = "xxxLAKE_DEV_S3_INTEGRATION"
FILE_FORMAT = ( TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' )
select concat('select t.$1 ', t.$1, ',t.$2 ', t.$2,',t.$3 ', t.$3, ',t.$4 ', t.$4,',t.$5 ', t.$5,',t.$6 ', t.$6,',t.$7 ', t.$7,',t.$8 ', t.$8,',t.$9 ', t.$9,
',t.$10 ', t.$10, ',t.$11 ', t.$11,',t.$12 ', t.$12 ,',t.$13 ', t.$13, ',t.$14 ', t.$14 ,',t.$15 ', t.$15 ,',t.$16 ', t.$16 ,',t.$17 ', t.$17 ,' from @xxxx_NO_HEADER/SUB_TRANSACTION_20201204.csv t') from
--- CHANGE TABLE ---
@xxx/SUB_TRANSACTION_20201204.csv t limit 1;

sqlite3_exec for insert query is successful, but entry not found in sqlite table

I'm facing a strange issue where my insert query using the sqlite3_exec API returns a successful value, but when I check the sqlite table I don't see that entry. Below is my code.
Insert query : INSERT INTO table_name VALUES (0,1584633967816,1584634000,'dasdasda','1584634000','28641','dasdas','dsadas','dsadsa','/rewrwe','rwerewr','rewrewr','0',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
sqlite3_exec code block:
sqliteError = sqlite3_exec(pSqlHandle, oSqlQuery.str().c_str(), NULL, NULL, NULL);
if (sqliteError == SQLITE_OK)
{
    LOG(DbgLogger, LOG_LEVEL_DEBUG, "Query %s successful\n",
        oSqlQuery.str().c_str());
    if (pSqlHandle) {
        sqliteError = sqlite3_close(pSqlHandle);
        if (sqliteError != SQLITE_OK) {
            sqlRet = SQL_API_FAILURE;
        }
        else {
            pSqlHandle = NULL;
        }
    }
    sqlRet = SQL_API_SUCCESS;
    if (m_useLockFile) {
        //write done, release write lock
        sqlRet = releaseWriteLock();
        /* reset write lock file name */
        bWriteLckAvailable = false;
    }
    break;
}
I can see the "Query %s successful" debug log being printed.
But when I do a SELECT from the command line on that table, I don't see any such entry,
e.g.: select * from table_name where column_name=1584633967816;
Has anyone faced a similar issue?
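One way to narrow this down (a minimal, self-contained sketch, not taken from the code above; the database path, table, and columns are placeholders) is to check on the same connection whether the INSERT actually changed a row, and re-read it over that same connection before closing it:

// Minimal diagnostic sketch (placeholder file name, table, and columns).
// Checks the return code, the number of rows changed, and re-reads the row
// over the same connection before it is closed.
#include <sqlite3.h>
#include <cstdio>

static int printRow(void *, int argc, char **argv, char **colName)
{
    for (int i = 0; i < argc; i++)
        std::printf("%s = %s\n", colName[i], argv[i] ? argv[i] : "NULL");
    return 0;
}

int main()
{
    sqlite3 *db = nullptr;
    if (sqlite3_open("/full/path/to/the/same.db", &db) != SQLITE_OK)
    {
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    char *errMsg = nullptr;
    int rc = sqlite3_exec(db,
                          "INSERT INTO table_name (col_a, col_b) VALUES (0, 1584633967816);",
                          nullptr, nullptr, &errMsg);
    if (rc != SQLITE_OK)
    {
        std::fprintf(stderr, "insert failed: %s\n", errMsg);
        sqlite3_free(errMsg);
    }

    // How many rows did the last statement actually modify?
    std::printf("rows changed: %d\n", sqlite3_changes(db));

    // Re-read through the same connection before closing it.
    rc = sqlite3_exec(db,
                      "SELECT * FROM table_name WHERE col_b = 1584633967816;",
                      printRow, nullptr, &errMsg);
    if (rc != SQLITE_OK)
    {
        std::fprintf(stderr, "select failed: %s\n", errMsg);
        sqlite3_free(errMsg);
    }

    sqlite3_close(db);
    return 0;
}

If the row shows up here but not in the sqlite3 shell, the shell is most likely opening a different database file than the application.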

Stream analytics - How to handle json in reference input

I have an Azure Stream Analytics (ASA) job which processes device telemetry data from an event hub. The stream should be joined with reference data from a SQL table to enrich each message with additional device metadata. The merged entry should be stored in Cosmos DB.
The sql database to serve the device metadata:
CREATE TABLE [dbo].[MyTable]
(
    [DeviceId] NVARCHAR(20) NOT NULL PRIMARY KEY,
    [MetaData] NVARCHAR(MAX) NULL /* this stores json, which can vary per record */
)
In ASA I have configured the reference data input with a simple query:
SELECT DeviceId, JSON_QUERY(MetaData) FROM [dbo].[MyTable]
And I have the main ASA query that performs the join:
WITH temptable AS (
    SELECT * FROM [telemetry-input] TD PARTITION BY PartitionId
    LEFT OUTER JOIN [metadata-input] MD
        ON TD.DeviceId = MD.DeviceId
)
SELECT TD.*, MD.MetaData
INTO [cosmos-db-output]
FROM temptable PARTITION BY PartitionId
It all works and the merged data gets stored in Cosmos DB. However, the value of the MetaData column from SQL is treated as a string and stored in Cosmos with quotes and escape characters. Example:
{ "DeviceId" : "abc1234", … , "MetaData" : "{ \"TestKey\": \"test value\" }" };
Is there a way to handle and store the JSON from MetaData as a proper JSON object, i.e.
{ "DeviceId" : "abc1234", … , "MetaData" : { "TestKey": "test value" } };
I found a way to achieve it in ASA: you need to create a JavaScript user-defined function:
function parseJson(strjson){
    return JSON.parse(strjson);
}
And call it in your query:
...
SELECT TD.*, udf.parseJson(MD.MetaData)
...
As you mentioned in your question, the reference JSON data is treated as a JSON string, not a JSON object. Based on my research into the query syntax in ASA, there is no built-in function to convert it.
However, I'd suggest using an Azure Functions Cosmos DB trigger to process every document as it is created. Please refer to my function code:
using System;
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json.Linq;

namespace ProcessJson
{
    public class Class1
    {
        [FunctionName("DocumentUpdates")]
        public static void Run(
            [CosmosDBTrigger(databaseName: "db", collectionName: "item", ConnectionStringSetting = "CosmosDBConnection", LeaseCollectionName = "leases",
                CreateLeaseCollectionIfNotExists = true)]
            IReadOnlyList<Document> documents,
            TraceWriter log)
        {
            log.Verbose("Start.........");
            String endpointUrl = "https://***.documents.azure.com:443/";
            String authorizationKey = "***";
            String databaseId = "db";
            String collectionId = "import";
            DocumentClient client = new DocumentClient(new Uri(endpointUrl), authorizationKey);
            for (int i = 0; i < documents.Count; i++)
            {
                Document doc = documents[i];
                if ((doc.alreadyFormat == Undefined.Value) || (!doc.alreadyFormat))
                {
                    String MetaData = doc.GetPropertyValue<String>("MetaData");
                    JObject o = JObject.Parse(MetaData);
                    doc.SetPropertyValue("MetaData", o);
                    doc.SetPropertyValue("alreadyFormat", true);
                    client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(databaseId, collectionId, doc.Id), doc);
                    log.Verbose("Update document Id " + doc.Id);
                }
            }
        }
    }
}
In addition, please refer to the case: Azure Cosmos DB SQL - how to unescape inner json property

Is there unpivot or cross apply in ServiceStack ormlite?

I am using ServiceStack 4.5.14. I want to pass a list of Guids to a query like the one below.
Table Name: Image
Columns: (Id -> Type=Guid) (ImageId -> Type=Guid) (Guid -> Type=Guid)
var result = Db.ExecuteSql("select value from image unpivot (value for col in (Id, ImageId)) un where Guid=(@param) order by Guid",
    new { param = "5de7f247-f590-479a-9c29-2e68a57e711c" });
It returns a result in which Id and ImageId come back as 000... even though they are actually null.
Another question: how can I send a list of Guids as a parameter to the above query?
To query a parameterized field you should include the Guid instead of the string, e.g.:
var result = Db.ExecuteSql(
#"select value from image unpivot (value for col in (Id, ImageId)) un
where Guid=(@param) order by Guid",
new { param = new Guid("5de7f247-f590-479a-9c29-2e68a57e711c") });
If values are null, it's likely masking an error; you can bubble up errors with:
OrmLiteConfig.ThrowOnError = true;
Or enable debug logging with:
LogManager.LogFactory = new ConsoleLogFactory();
In v5+ you can also inspect SQL commands before they're executed with:
OrmLiteConfig.BeforeExecFilter = dbCmd => Console.WriteLine(dbCmd.GetDebugString());

bigquery standard sql udf mapping to struct is returning internal error

I have a code block below for parsing query params using a UDF. It works fine when the value passed to the function is hardcoded, as in the example. Though when I try to parse the same value fetched from a table, I get:
An internal error occurred and the request could not be completed. (error code: internalError)
CREATE TEMPORARY FUNCTION parse(queryString STRING) RETURNS ARRAY<STRUCT<key STRING, value STRING>> LANGUAGE js AS
"""
var params = {}
var array = []
// split into key/value pairs
var queries = queryString.split('&');
var ind = 0
// convert the array of strings into an object
for (var i = 0; i < queries.length; i++ ) {
    var temp = queries[i].split('=');
    if (temp.length < 2) continue;
    array[ind++] = { key: temp[0], value: decodeURI(temp[1]) }
}
return array;
""";
select parse('ca_chid=2002810&ca_source=gaw&ca_ace=&ca_nw=g&ca_dev=c&ca_pl=&ca_pos=1t3&ca_agid=32438864366&ca_caid=260997846&ca_adid=151983037851&ca_kwt=florists%20in%20walsall&ca_mt=e&ca_fid=&ca_tid=aud-117534990726:kwd-420175760&ca_lp=9045676&ca_li=&ca_devm=&ca_plt=&ca_sadt=&ca_smid=&ca_spc=&ca_spid=&ca_sco=&ca_sla=&ca_sptid=&ca_ssc=&gclid=CLaDoa6ZrdACFcyRGwodG8IFvQ') as params
--not working
--select parse(page_urlquery) from (
--SELECT page_urlquery FROM `query_param_snapshot` where page_urlquery != '' LIMIT 1)
Also reported on the issue tracker (we are working on a fix). One workaround is to use a SQL function rather than a JavaScript function, e.g.:
CREATE TEMPORARY FUNCTION parse(queryString STRING)
RETURNS ARRAY<STRUCT<key STRING, value STRING>> AS (
(SELECT
ARRAY_AGG(STRUCT(
entry[OFFSET(0)] AS key,
entry[OFFSET(1)] AS value))
FROM (
SELECT SPLIT(pairString, '=') AS entry
FROM UNNEST(SPLIT(queryString, '&')) AS pairString)
)
);
SELECT parse('ca_chid=2002810&ca_source=gaw&ca_ace=&ca_nw=g&ca_dev=c&ca_pl=&ca_pos=1t3&ca_agid=32438864366&ca_caid=260997846&ca_adid=151983037851&ca_kwt=florists%20in%20walsall&ca_mt=e&ca_fid=&ca_tid=aud-117534990726:kwd-420175760&ca_lp=9045676&ca_li=&ca_devm=&ca_plt=&ca_sadt=&ca_smid=&ca_spc=&ca_spid=&ca_sco=&ca_sla=&ca_sptid=&ca_ssc=&gclid=CLaDoa6ZrdACFcyRGwodG8IFvQ') AS params;