Postgres statement for JSON_VALUE - sql

What is the Postgres equivalent of this SQL statement?
SELECT * FROM table1 where JSON_VALUE(colB,'$.Items[0].Item') ='abc'
I have tried to follow the Postgres documentation, but only got this result:
No function matches the given name and argument types

You can use the -> operator to access an array element by index.
SELECT *
FROM table1
where colb -> 'Items' -> 0 ->> 'Item' = 'abc'
colb -> 'Items' -> 0 returns the first array element of Items as a JSON value. ->> 'Item' then returns the value of the key "Item" from within that JSON as a text (aka varchar) value.
This requires that colb is defined as jsonb (or at least json). If not, you need to cast it like this colb::jsonb.
But in the long run, you should really convert that column to jsonb.
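For reference, a minimal self-contained sketch the query above runs against (the table definition and sample data are my own invention):
CREATE TABLE table1 (id int, colb jsonb);
INSERT INTO table1 VALUES (1, '{"Items": [{"Item": "abc"}, {"Item": "def"}]}');
SELECT *
FROM table1
WHERE colb -> 'Items' -> 0 ->> 'Item' = 'abc';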
If you want to search for Item = 'abc' anywhere in the Items array (not just at position 0), you can use the containment operator @>:
select *
from data
where colb @> '{"Items": [{"Item": "abc"}]}';
Online example: https://rextester.com/BQWB24156
The above can use a GIN index on the column colb. The first query will require an index on that expression.
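For completeness, the two indexes would look something like this (the index names are made up):
CREATE INDEX table1_colb_gin ON table1 USING gin (colb);
-- expression index to support the first query
CREATE INDEX table1_item0_idx ON table1 ((colb -> 'Items' -> 0 ->> 'Item'));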
With Postgres 12 you can use a JSON path query like the one you have:
SELECT *
FROM table1
where jsonb_path_exists(colb, '$.Items[0].Item ? (@ == "abc")');
If you want to search anywhere in the array, you can use:
SELECT *
FROM table1
where jsonb_path_exists(colb, '$.Items[*].Item ? (@ == "abc")');
That again cannot make use of a GIN index on the column; it would require an index on that expression.
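If index support matters on Postgres 12+, the @@ operator accepts the same kind of JSON path predicate and can use a GIN index on the column (this variant is my addition, not part of the answer above):
SELECT *
FROM table1
WHERE colb @@ '$.Items[*].Item == "abc"';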

Something like this.
SELECT t.*
FROM table1 t
cross join json_array_elements(colb->'Items') as j
where j->>'Item' = 'abc'
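Note that if an Items array contains more than one matching element, this cross join returns the row once per match; an EXISTS variant of the same approach avoids the duplicates (a sketch with the same assumed names; use jsonb_array_elements instead if colb is jsonb):
SELECT t.*
FROM table1 t
WHERE EXISTS (
  SELECT 1
  FROM json_array_elements(t.colb -> 'Items') AS j
  WHERE j ->> 'Item' = 'abc'
);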

Related

PostgreSQL JSONB overlaps operator on multiple JSONB columns

I have a table that contains two jsonb columns both consisting of jsonb data that represent an array of strings. These can be empty arrays too.
I am now trying to query this table and retrieve the rows where either (or both) jsonb arrays contain at least one item of an array I pass. I managed to figure out a working query:
SELECT *
FROM TABLE T
WHERE (EXISTS (SELECT *
FROM JSONB_ARRAY_ELEMENTS_TEXT(T.DATA1) AS DATA1
WHERE ARRAY[DATA1] && ARRAY['some string','some other string']))
OR (EXISTS (SELECT *
FROM JSONB_ARRAY_ELEMENTS_TEXT(T.DATA2) AS DATA2
WHERE ARRAY[DATA2] && ARRAY['random string', 'another random string']));
But I think this is not optimal at all. I tried to do it with a cross join, but the issue is that data1 and data2 in the jsonb columns can be empty arrays, and then the join will exclude these rows, even though the other jsonb column may still satisfy the overlaps (&&) condition.
I tried other approaches too, like:
SELECT DISTINCT ID
FROM table,
JSONB_ARRAY_ELEMENTS_TEXT(data1) data1,
JSONB_ARRAY_ELEMENTS_TEXT(data2) data2
WHERE data1 in ('some string', 'some other string')
OR data2 in ('random string', 'string');
But this one also does not include rows where data1 or data2 is an empty array. So I thought of a FULL OUTER JOIN, but because this is a lateral reference it does not work:
The combining JOIN type must be INNER or LEFT for a LATERAL reference.
You don't need to unnest the JSON array. The JSONB operator ?| can do that directly - it checks if any of the array elements of the argument on the right hand side is contained as a top-level element in the JSON value on the left hand side.
SELECT *
FROM the_table t
WHERE t.data1 ?| ARRAY['some string','some other string']
   OR t.data2 ?| ARRAY['random string', 'another random string'];
This will not return rows where both arrays are empty (or where neither of the columns contains the searched keys).
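Worth noting: the default jsonb_ops GIN index supports the ?| operator, so each column can get its own index (a sketch; the_table is the name assumed above):
CREATE INDEX ON the_table USING gin (data1);
CREATE INDEX ON the_table USING gin (data2);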

Extract elements from an array inside a jsonb field in Postgresql

In my Postgresql table, I have a jsonb field called data, which contains data in the following format:
{
  "list": [1,2,3,4,5]
}
I use the query:
select data->'list' from "Table" where id=1
This gives me the array [1,2,3,4,5]
The problem is that I want to use this result in another select query within the IN clause. It's not accepting the array.
IN ([1,2,3,4,5]) fails
It wants:
IN (1,2,3,4,5)
So, in my original query I don't know how to convert [1,2,3,4,5] to just 1,2,3,4,5.
My current query is:
select * from "Table2" where "items" in (select data->'list' from "Table" where id=1)
Please help
You can use the jsonb "contained by" operator (<@) rather than IN if you cast the search value to jsonb. For example:
SELECT *
FROM "Table2"
WHERE items::jsonb <@ (SELECT data->'list' FROM "Table" WHERE id=1)
Note that if items is an int, you will need to cast it to text before casting to jsonb:
SELECT *
FROM "Table2"
WHERE items::text::jsonb <@ (SELECT data->'list' FROM "Table" WHERE id=1)
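A minimal sketch of a setup this runs against (table definitions and sample data are my own, matching the names in the question):
CREATE TABLE "Table" (id int, data jsonb);
CREATE TABLE "Table2" (items text);
INSERT INTO "Table" VALUES (1, '{"list": [1,2,3,4,5]}');
INSERT INTO "Table2" VALUES ('3'), ('7');
-- returns only the row with items = '3'
SELECT * FROM "Table2" WHERE items::jsonb <@ (SELECT data->'list' FROM "Table" WHERE id=1);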
Use jsonb_array_elements_text() to turn the elements into rows:
select t2.*
from table_2 t2
where t2.items in (select jsonb_array_elements_text(t1.data -> 'list')::int
from table_1 t1
where t1.id = 1);
This assumes that items is defined as text or varchar and contains a single value - however the name (plural!) seems to indicate yet another de-normalized column.

SELECT on JSON operations of Postgres array column?

I have a column of type jsonb[] (a Postgres array of jsonb objects) and I'd like to perform a SELECT on rows where a criteria is met on at least one of the objects. Something like:
-- Schema would be something like
mytable (
id UUID PRIMARY KEY,
col2 jsonb[] NOT NULL
);
-- Query I'd like to run
SELECT
id,
x->>'field1' AS field1
FROM
mytable
WHERE
x->>'field2' = 'user' -- for any x in the array stored in col2
I've looked around at ANY and UNNEST but it's not totally clear how to achieve this, since you can't run unnest in a WHERE clause. I also don't know how I'd specify that I want the field1 from the matching object.
Do I need a WITH table with the values expanded to join against? And how would I achieve that and keep the id from the other column?
Thanks!
You need to unnest the array, and then you can access each JSON value:
SELECT t.id,
c.x ->> 'field1' AS field1
FROM mytable t
cross join unnest(col2) as c(x)
WHERE c.x ->> 'field2' = 'user'
This will return one row for each json value in the array.
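If you only need the matching rows themselves (and not field1 from each matching element), an EXISTS variant returns each row at most once (a sketch using the same assumed schema):
SELECT t.id
FROM mytable t
WHERE EXISTS (
  SELECT 1
  FROM unnest(t.col2) AS c(x)
  WHERE c.x ->> 'field2' = 'user'
);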

SQL query which searches all rows with specific items of an array column

My SQL case, in a postgres:9.6 database:
CREATE TABLE my_table (
id serial PRIMARY KEY,
numbers INT []
);
INSERT INTO my_table (numbers) VALUES ('{2, 3, 4}');
INSERT INTO my_table (numbers) VALUES ('{2, 1, 4}');
-- which means --
test=# select * from my_table;
id | numbers
----+---------
1 | {2,3,4}
2 | {2,1,4}
(2 rows)
I need to find all rows with numbers 1 and/or 2. According to this answer, I use a query like this:
SELECT * FROM my_table WHERE numbers = ANY('{1,2}'::int[]);
And got the following error:
LINE 1: SELECT * FROM my_table WHERE numbers = ANY('{1,2}'::int[]);
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
What does the correct SQL query look like?
Using var = ANY(array) works well for finding if a single value (var) is contained in an array.
To check if an array has any elements in common with another array, you would use the && operator:
&& -- overlap (have elements in common) -- ARRAY[1,4,3] && ARRAY[2,1] --> true
SELECT * FROM my_table WHERE numbers && '{1,2}'::int[];
To check if an array contains all members of another array, you would use the @> operator:
@> -- contains -- ARRAY[1,4,3] @> ARRAY[3,1] --> true
SELECT * FROM my_table WHERE numbers @> '{1,2}'::int[];
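Both && and @> can be supported by a regular GIN index on the array column (the index name is made up):
CREATE INDEX my_table_numbers_gin ON my_table USING gin (numbers);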

PostgreSQL case insensitive SELECT on array

I'm having problems finding the answer here, on google or in the docs ...
I need to do a case insensitive select against an array type.
So if:
value = {"Foo","bar","bAz"}
I need
SELECT value FROM table WHERE 'foo' = ANY(value)
to match.
I've tried lots of combinations of lower() with no success.
ILIKE instead of = seems to work but I've always been nervous about LIKE - is that the best way?
One alternative not mentioned is to install the citext extension that comes with PostgreSQL 8.4+ and use an array of citext:
regress=# CREATE EXTENSION citext;
regress=# SELECT 'foo' = ANY( '{"Foo","bar","bAz"}'::citext[] );
?column?
----------
t
(1 row)
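If changing the column type is not an option, the text array can also be cast in the query itself (a sketch, assuming the table and value names from the question):
SELECT value FROM "table" WHERE 'foo' = ANY(value::citext[]);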
If you want to be strictly correct about this and avoid extensions you have to do some pretty ugly subqueries because Pg doesn't have many rich array operations, in particular no functional mapping operations. Something like:
SELECT array_agg(lower(($1)[n])) FROM generate_subscripts($1,1) n;
... where $1 is the array parameter. In your case I think you can cheat a bit because you don't care about preserving the array's order, so you can do something like:
SELECT 'foo' IN (SELECT lower(x) FROM unnest('{"Foo","bar","bAz"}'::text[]) x);
This seems hackish to me, but I think it should work:
SELECT value FROM table WHERE 'foo' = ANY(lower(value::text)::text[])
ILIKE could have issues if your arrays can contain _ or %.
Note that what you are doing is converting the text array to a single text string, converting it to lower case, and then back to an array. This should be safe. If this is not sufficient you could use various combinations of string_to_array and array_to_string, but I think the standard textual representations should be safer.
Update: building on the subquery solution below, one option would be a simple function:
CREATE OR REPLACE FUNCTION lower(text[]) RETURNS text[] LANGUAGE SQL IMMUTABLE AS
$$
SELECT array_agg(lower(value)) FROM unnest($1) value;
$$;
Then you could do:
SELECT value FROM table WHERE 'foo' = ANY(lower(value));
This might actually be the best approach. You could also create GIN indexes on the output of the function if you want.
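Such an index could look like this (a sketch; tbl and value are placeholder names, and it relies on the lower(text[]) function above being declared IMMUTABLE):
CREATE INDEX tbl_value_lower_gin ON tbl USING gin (lower(value));
-- to actually use the index, query with array containment instead of ANY():
SELECT value FROM tbl WHERE lower(value) @> ARRAY['foo'];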
Another alternative would be with unnest()
WITH tbl AS (SELECT 1 AS id, '{"Foo","bar","bAz"}'::text[] AS value)
SELECT value
FROM (SELECT id, value, unnest(value) AS val FROM tbl) x
WHERE lower(val) = 'foo'
GROUP BY id, value;
I added an id column to get exactly identical results - i.e. duplicate value if there are duplicates in the base table. Depending on your circumstances, you can probably omit the id from the query to collapse duplicates in the results or if there are no dupes to begin with. Also demonstrating a syntax alternative:
SELECT value
FROM (SELECT value, lower(unnest(value)) AS val FROM tbl) x
WHERE val = 'foo'
GROUP BY value;
If array elements are unique within arrays in lower case, you don't even need the GROUP BY, since every value can only match once.
SELECT value
FROM (SELECT value, lower(unnest(value)) AS val FROM tbl) x
WHERE val = 'foo';
'foo' must be lower case, obviously.
Should be fast.
If you want that to be fast with a big table, I would create a functional GIN index, though.
My solution to exclude values, using a sub-select:
and groupname not ilike all (
select unnest(array[exceptionname||'%'])
from public.group_exceptions
where ...
and ...
)
A regular expression may do the job for most cases:
SELECT array_to_string('{"a","b","c"}'::text[],'|') ~* ANY('{"A","B","C"}');
I find creating a custom PostgreSQL function works best for me:
CREATE OR REPLACE FUNCTION lower(text_array text[]) RETURNS text[] AS
$BODY$
SELECT (lower(text_array::text))::text[]
$BODY$
LANGUAGE SQL IMMUTABLE;
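A quick check of that function (the sample array is mine):
SELECT lower('{"Foo","bar","bAz"}'::text[]);
-- returns {foo,bar,baz}
After that, 'foo' = ANY(lower(value)) matches as expected.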