I have a column stored in JSON that looks like
column name: s2s_payload
Values:
{
"checkoutdate":"2019-10-31",
"checkindate":"2019-10-30",
"numtravelers":"2",
"domain":"www.travel.com.mx",
"destination": {
"country":"MX",
"city":"Manzanillo"
},
"eventtype":"search",
"vertical":"hotels"
}
I want to query for exact values in the payload rather than returning all values for a certain data type. I was using JSON_EXTRACT to get distinct counts:
SELECT
COUNT(JSON_EXTRACT(s2s_payload, '$.destination.code')) AS total,
JSON_EXTRACT(s2s_payload, '$.destination.code') AS destination
FROM
"db"."events_data_json5_temp"
WHERE
id = '111000'
AND s2s_payload IS NOT NULL
AND yr = '2019'
AND mon = '10'
AND dt >= '26'
AND JSON_EXTRACT(s2s_payload, '$.destination.code') IS NOT NULL
GROUP BY
JSON_EXTRACT(s2s_payload, '$.destination.code')
If I want to filter where "eventtype":"search", how can I do this?
I tried using CAST(s2s_payload AS CHAR) = '{"eventtype":"search"}' but that didn't work.
You need to use json_extract plus a CAST to get an actual value to compare against:
CAST(json_extract(s2s_payload, '$.eventtype') AS varchar) = 'search'
or, the same with json_extract_scalar (and thus no need for a CAST):
json_extract_scalar(s2s_payload, '$.eventtype') = 'search'
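For example, combining the filter with the original query, a sketch (keeping the OP's $.destination.code path, which is an assumption since the sample payload only shows country and city under destination):
SELECT
COUNT(json_extract_scalar(s2s_payload, '$.destination.code')) AS total,
json_extract_scalar(s2s_payload, '$.destination.code') AS destination
FROM
"db"."events_data_json5_temp"
WHERE
id = '111000'
AND s2s_payload IS NOT NULL
AND yr = '2019'
AND mon = '10'
AND dt >= '26'
AND json_extract_scalar(s2s_payload, '$.eventtype') = 'search'
GROUP BY
json_extract_scalar(s2s_payload, '$.destination.code')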
I have a data set in the format below and am trying the BigQuery query below.
Some additional context: key and value are in array format.
Table format: id, {key: "abc", value: "true"}, and when I unnest it, it looks as in the screenshot above.
I want to run a SQL query based on key and value: for example, I want to get only those values where abc = false and def = true.
Sample output I'm expecting.
I'm not sure if I'm making any sense, but this is what I'm trying to achieve.
Thanks!
With the data you shared, you could use a CASE expression. The query below should work fine for you:
SELECT
id,
key,
CASE key
WHEN 'abc' THEN 'FALSE'
WHEN 'def' THEN 'TRUE'
ELSE NULL
END AS value
FROM `project.dataset.table`
WHERE key = 'abc' OR key = 'def'
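If the key/value pairs actually sit in a repeated field, a minimal sketch using UNNEST, assuming a column kv of type ARRAY<STRUCT<key STRING, value STRING>> (the column name and types are assumptions, since the screenshot isn't available):
SELECT
id,
pair.key,
pair.value
FROM `project.dataset.table`,
UNNEST(kv) AS pair -- flatten the repeated key/value field into one row per pair
WHERE (pair.key = 'abc' AND pair.value = 'false')
OR (pair.key = 'def' AND pair.value = 'true')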
I am building a CRUD application that allows the user to input some search criteria and get the documents corresponding to those criteria. Unfortunately, I am having some difficulty creating a query in Postgres that uses different conditions in the WHERE part based on the input sent by the user.
For example if the user set as search criteria only the document number the query would be defined like this:
select * from document where document_num = 'value1'
On the other hand if the user gave two criteria the query would be set up like this:
select * from document where document_num = 'value1' and reg_date = 'value2'
How can I set up a query that is valid for all the cases? Looking in other threads, I saw using coalesce in the WHERE part as a possible solution:
document_num = coalesce('value1', document_num)
The problem with this approach is that when no value is provided, Postgres converts the condition to document_num IS NOT NULL, which is not what I need (my goal is for the condition to be always true).
Thanks in advance
So the solution by #D-shih will work if you have a default value, and you can also use COALESCE as below.
SELECT *
FROM document
WHERE document_num = COALESCE(:value1, default_value)
AND reg_date = COALESCE(:value2, default_value);
If you don't have default values, then you can create your query using CASE WHEN (here I am supposing you have some variables from which you will determine which conditions to apply, e.g. when to apply document_num, when to apply reg_date, or when to apply both). A little example below:
SELECT *
FROM document
WHERE
(
CASE
WHEN "value1" IS NOT NULL THEN document_num = "value1"
ELSE TRUE
END
)
AND (
CASE
WHEN "value2" IS NOT NULL THEN reg_date = "value2"
ELSE TRUE
END
)
You can read more about how to use CASE WHEN in the PostgreSQL documentation.
If I understand correctly, you can try passing the user input values as parameters.
Give each parameter a default value; if the user doesn't want to use a parameter, the default value is passed instead.
We can then use OR to test whether the parameter was supplied, and ignore the condition otherwise:
SELECT *
FROM document
WHERE (document_num = :value1 OR :value1 = [default value])
AND (reg_date = :value2 OR :value2 = [default value])
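A common variant of the same idea, assuming the application binds NULL when a criterion is left empty, is to treat NULL as "no filter":
SELECT *
FROM document
WHERE (document_num = :value1 OR :value1 IS NULL)
AND (reg_date = :value2 OR :value2 IS NULL)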
New to BigQuery: I have a repeated field in BigQuery, like this:
myTable
{
"id": 12345
"myNestedStringArrayField": []
}
How can I query all rows where the myNestedStringArrayField value is empty?
I tried using myNestedStringArrayField is null, but it returned no results, and I know I have rows with [] as the value. I also tried = '[]', but the query editor throws an error.
Thank you in advance.
You can try using ARRAY_LENGTH; all the rows you are seeking have a myNestedStringArrayField with a length of zero:
WITH sample AS(
SELECT STRUCT("12345" AS id, [] AS myNestedStringArrayField) AS myTable
)
SELECT *
FROM sample
WHERE ARRAY_LENGTH(myTable.myNestedStringArrayField) = 0
This returns the sample row, since its myNestedStringArrayField has a length of zero.
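Against a real table, the same predicate applies directly; a sketch, assuming your table lives at project.dataset.myTable:
SELECT *
FROM `project.dataset.myTable`
WHERE ARRAY_LENGTH(myNestedStringArrayField) = 0 -- repeated fields are read back as [], never NULL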
I have a json type column (Status) in a Postgres database (9.4.9). I want to add a new key/value pair to the existing value of status. Example:
Existing Status:
"status": {
"keysterStatus": "In Progress"
}
After adding the key/value pair, I want it to look like this:
"status": {
"provisioningStatus": "In Progress",
"keysterStatus": "In Progress"
}
I have been using the repository save() method as of now to get this done, but that writes the whole row, and there is a chance of concurrent reads and writes in the case of multiple requests. So I wanted to get rid of the save() method and go with a column-level update.
First of all, PG 9.4 is obsolete and even unsupported now. PG 9.5 contains the jsonb_set function:
SELECT jsonb_set(status::jsonb,
'{provisioningStatus}',
to_jsonb('In Progress'::text))::json -- the literal needs a type; cast back to json for the json column
FROM ....;
as well as the possibility to use the concatenation operator || with a conversion to jsonb and then back:
SELECT (status::jsonb || '{"provisioningStatus": "In Progress"}')::json
FROM ....;
For PG 9.4, if you know the schema for the json, you can use json_populate_record/row_to_json:
SELECT (
SELECT row_to_json(r)
FROM (
SELECT r.*, 'In Progress' AS "provisioningStatus" -- quoted to preserve the camelCase key
FROM json_populate_record(null::myrowtype, status) AS r
) AS r
) AS result
FROM ....
Or you can use json_each_text:
SELECT (
SELECT json_object_agg(key, value)
FROM (
SELECT *
FROM json_each_text(status)
UNION ALL
SELECT 'provisioningStatus', 'In Progress'
) AS a
) AS result
FROM ...
And probably the last (but ugly) method is to convert the json to a string, remove the trailing '}', append ', "provisioningStatus": "In Progress"}', and convert back to json:
SELECT (substr(status::text, 1, length(status::text) - 1)
|| ', "provisioningStatus": "In Progress"}')::json
FROM ...
UPDATE table_name
SET column_name = jsonb_set(cast(column_name AS jsonb), '{key}', '"value"', true)
WHERE id = 'target_id';
This will add the key/value pair to the json column if it doesn't exist already; if the key exists, it will overwrite its value.
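Applied to the question's status column, a sketch (the table and id column names are assumptions):
UPDATE my_table
SET status = jsonb_set(status::jsonb, '{provisioningStatus}', '"In Progress"', true)::json -- round-trip through jsonb, since the column is json
WHERE id = 'target_id';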
Test data
DROP TABLE t;
CREATE TABLE t(_id serial PRIMARY KEY, data jsonb);
INSERT INTO t(data) VALUES
('{"a":1,"b":2, "c":3}')
, ('{"a":11,"b":12, "c":13}')
, ('{"a":21,"b":22, "c":23}')
Problem statement: I want to receive an arbitrary JSONB parameter which acts as a filter on column t.data, such as
{ "b":{ "from":0, "to":20 }, "c":13 }
and use this to select matching rows from my test table t.
In this example, I want rows where b is between 0 and 20 and c = 13.
No error is required if the filter specifies a "column" (or "tag") which does not exist in t.data - it just fails to find a match.
I've used numeric values for simplicity but would like an approach which generalises to text as well.
What I have tried so far: I looked at the containment approach, which works for equality conditions, but I am stumped on a generic way of handling range conditions:
select * from t
where t.data @> '{"c":13}'::jsonb;
Background: This problem arose when building a generic table-preview page on a website (for Admin users).
The page displays a filter based on various columns in whichever table is selected for preview.
The filter is then passed to a function in Postgres DB which applies this dynamic filter condition to the table.
It returns a jsonb array of the rows matching the filter specified by the user.
This jsonb array is then used to populate the Preview resultset.
The columns which make up the filter may change.
My Postgres version is 9.6 - thanks.
If you want to parse { "b":{ "from":0, "to":20 }, "c":13 }, you need a parser. That is out of scope for the json functions, but you can write a "generic" query using AND and OR to filter by such json, e.g.:
https://www.db-fiddle.com/f/jAPBQggG3p7CxqbKLMbPKw/0
with filt(f) as (values('{ "b":{ "from":0, "to":20 }, "c":13 }'::json))
select *
from t
join filt on
(f->'b'->>'from')::int < (data->>'b')::int
and
(f->'b'->>'to')::int > (data->>'b')::int
and
(data->>'c')::int = (f->>'c')::int
;
Thanks for the comments/suggestions.
I will definitely look at GraphQL when I have more time - I'm working under a tight deadline at the moment.
It seems the consensus is that a fully generic solution is not achievable without a parser.
However, I got a workable first draft - it's far from ideal but we can work with it. Any comments/improvements are welcome ...
Test data (expanded to include dates & text fields)
DROP TABLE t;
CREATE TABLE t(_id serial PRIMARY KEY, data jsonb);
INSERT INTO t(data) VALUES
('{"a":1,"b":2, "c":3, "d":"2018-03-10", "e":"2018-03-10", "f":"Blah blah" }')
, ('{"a":11,"b":12, "c":13, "d":"2018-03-14", "e":"2018-03-14", "f":"Howzat!"}')
, ('{"a":21,"b":22, "c":23, "d":"2018-03-14", "e":"2018-03-14", "f":"Blah blah"}')
First draft of code to apply a jsonb filter dynamically, but with restrictions on what syntax is supported.
Also, it just fails silently if the supplied syntax does not match what it expects.
Timestamp handling is a bit kludgey, too.
-- Handle timestamp & text types as well as int
-- See is_timestamp(text) function at bottom
with cte as (
select t.data, f.filt, fk.key
from t
, ( values ('{ "a":11, "b":{ "from":0, "to":20 }, "c":13, "d":"2018-03-14", "e":{ "from":"2018-03-11", "to": "2018-03-14" }, "f":"Howzat!" }'::jsonb ) ) as f(filt) -- equiv to cross join
, lateral (select * from jsonb_each(f.filt)) as fk
)
select data, filt --, key, jsonb_typeof(filt->key), jsonb_typeof(filt->key->'from'), is_timestamp((filt->key)::text), is_timestamp((filt->key->'from')::text)
from cte
where
case when (filt->key->>'from') is null then
case jsonb_typeof(filt->key)
when 'number' then (data->>key)::numeric = (filt->>key)::numeric
when 'string' then
case is_timestamp( (filt->key)::text )
when true then (data->>key)::timestamp = (filt->>key)::timestamp
else (data->>key)::text = (filt->>key)::text
end
when 'boolean' then (data->>key)::boolean = (filt->>key)::boolean
else false
end
else
case jsonb_typeof(filt->key->'from')
when 'number' then (data->>key)::numeric between (filt->key->>'from')::numeric and (filt->key->>'to')::numeric
when 'string' then
case is_timestamp( (filt->key->'from')::text )
when true then (data->>key)::timestamp between (filt->key->>'from')::timestamp and (filt->key->>'to')::timestamp
else (data->>key)::text between (filt->key->>'from')::text and (filt->key->>'to')::text
end
when 'boolean' then false
else false
end
end
group by data, filt
having count(*) = ( select count(distinct key) from cte ) -- must match on all filter elements
;
create or replace function is_timestamp(s text) returns boolean as $$
begin
perform s::timestamp;
return true;
exception when others then
return false;
end;
$$ strict language plpgsql immutable;
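A quick sanity check of the helper against values from the test data:
SELECT is_timestamp('2018-03-14'); -- true
SELECT is_timestamp('Howzat!'); -- false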