I've tried multiple answers from here and elsewhere and haven't found the right one yet.
create table mstore (
muuid uuid PRIMARY KEY,
msid text,
m_json JSONb[] not NULL
);
inserted first row:
insert into mstore (muuid, msid, m_json) values (
'3b691440-ee54-4d9d-a5b3-5f1863b78755'::uuid,
'<163178891004.4772968682254423915#XYZ-73SM>',
(array['{"mid": 123, "msg_ts": "2021-09-16T10:53:43.599012", "dstatus": "Dropped", "rcpt": "abc1#xyz.com"}']::jsonb[])
);
inserted second row:
insert into mstore (muuid, msid, m_json) values (
'3b691440-ee54-4d9d-a5b3-5f1863b78757'::uuid,
'<163178891004.4772968682254423915#XYZ-75SM>',
(array['{"mid": 125, "msg_ts": "2021-09-16T10:53:43.599022", "dstatus": "Dropped", "rcpt": "abc3#xyz.com"}']::jsonb[])
);
updated the first row:
update mstore
set m_json = m_json || '{"mid": 124, "msg_ts": "2021-09-16T10:53:43.599021", "dstatus": "Delivered", "rcpt": "abc2#xyz.com"}'::jsonb
where muuid = '3b691440-ee54-4d9d-a5b3-5f1863b78755';
Now the table looks like:
muuid | msid | m_json
--------------------------------------+----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3b691440-ee54-4d9d-a5b3-5f1863b78757 | <163178891004.4772968682254423915#XYZ-75SM> | {"{\"mid\": 125, \"rcpt\": \"abc3#xyz.com\", \"msg_ts\": \"2021-09-16T10:53:43.599022\", \"dstatus\": \"Dropped\"}"}
3b691440-ee54-4d9d-a5b3-5f1863b78755 | <163178891004.4772968682254423915#XYZ-73SM> | {"{\"mid\": 123, \"rcpt\": \"abc1#xyz.com\", \"msg_ts\": \"2021-09-16T10:53:43.599012\", \"dstatus\": \"Dropped\"}","{\"mid\": 124, \"rcpt\": \"abc2#xyz.com\", \"msg_ts\": \"2021-09-16T10:53:43.599021\", \"dstatus\": \"Delivered\"}"}
Now I need to query based on the status. I tried a few approaches; the most relevant ones were
select * from mstore,jsonb_array_elements(m_json) with ordinality arr(item_object, position) where item_object->>'{"dstatus": "Delivered"}';
and
select * from mstore where m_json #> '[{"dstatus": "Delivered"}]';
Neither works; both fail with errors. How can I query on the dstatus values?
Please note that mstore.m_json is a Postgres array of JSONB elements, not a JSONB array, so unnest must be used rather than jsonb_array_elements. Also have a look at the ->> operator in the documentation.
The same applies to your second example: it would work if mstore.m_json were a single JSONB value containing a JSON array rather than a Postgres array of JSONB elements.
select m.muuid, m.msid, l.item_object, l.pos
from mstore m
cross join lateral unnest(m.m_json) with ordinality l(item_object, pos)
where l.item_object ->> 'dstatus' = 'Delivered';
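For comparison, a minimal sketch of what the containment-style query could look like if m_json were a plain jsonb column holding a JSON array; note that the containment operator is @>, not #>:
-- sketch only: assumes a hypothetical jsonb column m_json holding a JSON array
select *
from mstore
where m_json @> '[{"dstatus": "Delivered"}]'::jsonb;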
It would be better to use the JSONB data type for column mstore.m_json rather than JSONB[], or, much better, to normalize the data design.
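For illustration, a rough sketch of a normalized design (table and column names here are made up):
-- one row per delivery event instead of an array per message
create table mstore (
    muuid uuid primary key,
    msid  text
);

create table mstore_event (
    muuid   uuid references mstore (muuid),
    mid     integer,
    msg_ts  timestamp,
    dstatus text,
    rcpt    text
);

-- the status filter then becomes a plain column comparison
select m.muuid, m.msid, e.mid, e.msg_ts, e.rcpt
from mstore m
join mstore_event e using (muuid)
where e.dstatus = 'Delivered';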
Related
Let's suppose I have a table my_table with a field named data, of type jsonb, which thus contains a json data structure.
Let's suppose that if I run
select id, data from my_table where id=10;
I get
id | data
------------------------------------------------------------------------------------------
10 | {
|"key_1": "value_1" ,
|"key_2": ["value_list_element_1", "value_list_element_2", "value_list_element_3" ],
|"key_3": {
| "key_3_1": "value_3_1",
| "key_3_2": {"key_3_2_1": "value_3_2_1", "key_3_2_2": "value_3_2_2"},
| "key_3_3": "value_3_3"
| }
| }
so in pretty formatting, the content of column data is
{
"key_1": "value_1",
"key_2": [
"value_list_element_1",
"value_list_element_2",
"value_list_element_3"
],
"key_3": {
"key_3_1": "value_3_1",
"key_3_2": {
"key_3_2_1": "value_3_2_1",
"key_3_2_2": "value_3_2_2"
},
"key_3_3": "value_3_3"
}
}
I know that if I want to get the value of a key (of "level 1") of the json directly as a column, I can do it with the ->> operator.
For example, if I want to get the value of key_2, what I do is
select id, data->>'key_2' alias_for_key_2 from my_table where id=10;
which returns
id | alias_for_key_2
------------------------------------------------------------------------------------------
10 |["value_list_element_1", "value_list_element_2", "value_list_element_3" ]
Now let's suppose I want to get the value of key_3_2_1, that is value_3_2_1.
How can I do it?
I have tried
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 from my_table where id=10;
but I get
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 from my_table where id=10;
^
HINT: No operators found with name and argument types provided. Types may need to be converted explicitly.
What am I doing wrong?
The problem in the query
select id, data->>'key_3'->>'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 --this is wrong!
from my_table
where id=10;
was that by using the ->> operator I was converting the json into a string, so with the next ->> I was trying to extract the key key_3_2 from a string object, which makes no sense.
Thus one has to use the -> operator, which does not convert the json into a string, until one gets to the "final" key.
so the query I was looking for was
select id, data->'key_3'->'key_3_2'->>'key_3_2_1' alias_for_key_3_2_1 --final ->> : this gets the value of 'key_3_2_1' as string
from my_table
where id=10;
or, alternatively,
select id, data->'key_3'->'key_3_2'->'key_3_2_1' alias_for_key_3_2_1 --final -> : this gets the value of 'key_3_2_1' as json / jsonb
from my_table
where id=10;
More info on JSON Functions and Operators can be found here.
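As a side note, the same nested lookup can also be written with the path operators #> / #>>, which take an array of keys; a minimal sketch:
-- #>> extracts the value at the given key path as text;
-- #> would return it as json/jsonb instead
select id, data #>> '{key_3,key_3_2,key_3_2_1}' alias_for_key_3_2_1
from my_table
where id=10;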
I am trying to build a schedule. On the client I generate an array of objects containing date ranges:
[
{start: "2020-07-06 0:0", end: "2020-07-10 23:59"},
{start: "2020-07-13 0:0", end: "2020-07-17 23:59"}
]
I have a column of type daterange[]. What is the proper way to format this data to insert it into my table?
This is what I have so far:
INSERT INTO schedules(owner, name, dates) VALUES (
1,
'work',
'{
{[2020-07-06 0:0,2020-07-10 23:59]},
{[2020-07-13 0:0,2020-07-17 23:59]}
}'
)
I think you want:
insert into schedules(owner, name, dates) values (
1,
'work',
array[
'[2020-07-06, 2020-07-11)'::daterange,
'[2020-07-13, 2020-07-18)'::daterange
]
);
Rationale:
you are using dateranges, so you cannot have time portions (for that you would need tsrange instead; see the sketch after the demo below); as your code stands, it seems like you want an inclusive lower bound and an exclusive upper bound (hence [ on the left side and ) on the right side)
explicit casting is needed so Postgres can recognize that the array elements have the proper datatype (otherwise, they look like text)
then, you can surround the list of ranges with the array[] constructor
Demo on DB Fiddle:
owner | name | dates
----: | :--- | :----------------------------------------------------
1 | work | {"[2020-07-06,2020-07-11)","[2020-07-13,2020-07-18)"}
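If the time-of-day portion actually matters, here is a minimal sketch of the same insert assuming the dates column were declared as tsrange[] instead of daterange[]:
-- hypothetical variant: dates tsrange[] instead of daterange[]
insert into schedules(owner, name, dates) values (
  1,
  'work',
  array[
    '[2020-07-06 00:00, 2020-07-10 23:59]'::tsrange,
    '[2020-07-13 00:00, 2020-07-17 23:59]'::tsrange
  ]
);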
I have a table where one column is an array:
CREATE TABLE inherited_tags (
id serial,
tags text[]
);
Sample values:
INSERT INTO inherited_tags (tags) VALUES
(ARRAY['A','B','C']), -- id: 1
(ARRAY['D','E']), -- id: 2
(ARRAY['A','B']), -- id: 3
(ARRAY['C','D']), -- id: 4
(ARRAY['D','F']), -- id: 5
(ARRAY['A']); -- id: 6
I want to find the rows whose tags column contains some subset of words from an input array. For example, for the input:
ARRAY[ARRAY['A','C'], ARRAY['F'], ARRAY['E']]::text[][]
I want to find all rows that contain ('A' and 'C') OR ('F') OR ('E'). So for the example above I should get the rows with ids 1, 2 and 5.
I was hoping that I could use syntax like this:
SELECT * FROM inherited_tags WHERE
tags #> ANY(ARRAY[ARRAY['A','C'], ARRAY['F'], ARRAY['E']]::text[][])
but I get an error:
ERROR: operator does not exist: text[] #> text
LINE 1: SELECT * FROM inherited_tags where tags #> ANY(ARRAY[ARRAY['...
Postgres 9.6
A plpgsql solution is acceptable, but SQL is preferred.
DB-FIDDLE: https://www.db-fiddle.com/f/cKCr7Sfab6u8rqaCHhJvPk/0
The problem comes from the fact that the text[] and text[][] data types are internally the same data type. An array has a base type and dimensions, and the ANY operator will always extract the base type to compare, which will always be text and not text[]. It doesn't help that multidimensional arrays require that each subelement has the same length as every other. You can have ARRAY[ARRAY['A','C'],ARRAY['B','N']], but not ARRAY[ARRAY[2,3],ARRAY[1]].
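For illustration (the exact error text may differ slightly between versions):
SELECT ARRAY[ARRAY[2,3], ARRAY[1]];
-- ERROR:  multidimensional arrays must have array expressions with matching dimensions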
In short, there is no direct way to make that particular query work. I tried to create a function and an operator for this as well, and that doesn't work, either, for different reasons. See how that went:
CREATE OR REPLACE FUNCTION check_tag_matches(
IN leftside text[],
IN rightside text)
RETURNS BOOLEAN AS
$BODY$
DECLARE rightarr text[];
BEGIN
SELECT CAST(rightside as text[]) INTO rightarr;
RETURN leftside @> rightarr;
END;
$BODY$
LANGUAGE plpgsql STABLE;
CREATE OPERATOR public.>>(
PROCEDURE = check_tag_matches,
LEFTARG = text[],
RIGHTARG = text,
COMMUTATOR = >>);
Then when testing it:
test=# SELECT * FROM inherited_tags WHERE
tags >> ANY(ARRAY[ARRAY['A','M'], ARRAY['F','E'], ARRAY['E','R']]::text[][]);
ERROR: malformed array literal: "A"
DETAIL: Array value must start with "{" or dimension information.
CONTEXT: SQL statement "SELECT CAST(rightside as text[])"
PL/pgSQL function check_tag_matches(text[],text) line 4 at SQL statement
It seems that when you use a multidimensional array like ARRAY[ARRAY['A','M'], ARRAY['F','E'], ARRAY['E','R']]::text[][] in ANY(), it iterates not over ARRAY['A','M'], then ARRAY['F','E'], then ARRAY['E','R'], but over 'A','M','F','E','E','R'. The same thing happens with unnest.
test=# SELECT unnest(ARRAY[ARRAY['A','M'], ARRAY['F','E'], ARRAY['E','R']]::text[][]);
unnest
--------
A
M
F
E
E
R
(6 rows)
Your remaining options are to define a function that reads array_length(rightside,1) and array_length(rightside,2) and uses nested loops to check it all, to send multiple queries to get the inherited tags for each tag, or to restructure your data somehow. Note that you can't even access the ARRAY['A','M'] element using rightside[1] to iterate over it; you're forced to go to the deepest level.
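For what it's worth, here is a rough sketch of such a function; it sidesteps the nested loops by taking array slices (rightside[i:i]) and flattening each slice before the containment test. The name tags_match_any is made up:
CREATE OR REPLACE FUNCTION tags_match_any(leftside text[], rightside text[])
RETURNS boolean AS
$BODY$
DECLARE
    i int;
BEGIN
    -- loop over the first dimension; rightside[i:i] is a 1 x N slice
    FOR i IN 1 .. coalesce(array_length(rightside, 1), 0) LOOP
        -- flatten the slice to a plain 1-D array, then test containment
        IF leftside @> ARRAY(SELECT unnest(rightside[i:i])) THEN
            RETURN true;
        END IF;
    END LOOP;
    RETURN false;
END;
$BODY$
LANGUAGE plpgsql IMMUTABLE;

-- note: a text[][] literal still requires equal-length sub-arrays,
-- so shorter groups must be padded, e.g. by repeating a value
SELECT * FROM inherited_tags
WHERE tags_match_any(tags, ARRAY[ARRAY['A','C'], ARRAY['F','F'], ARRAY['E','E']]::text[][]);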
I don't think you can do that with a single condition because of the "contains A and C" requirement.
SELECT *
FROM inherited_tags
WHERE tags @> ARRAY['A','C']
OR tags && array['F', 'E'];
tags @> ARRAY['A','C'] selects the rows where tags contains all elements of ARRAY['A','C'], and tags && array['F', 'E'] selects the rows that contain at least one of the tags from array['F', 'E'].
Updated DB Fiddle: https://www.db-fiddle.com/f/rXsjqEN3ry67uxJtEs3GM9/0
You can try:
SELECT * FROM table WHERE
tags @> ARRAY['A','C']::varchar[]
OR
tags @> ARRAY['E']::varchar[]
OR
tags @> ARRAY['F']::varchar[]
I have a field (content) in a table containing keys and values (hstore), like this:
content: {"price"=>"15.2", "quantity"=>"3", "product_id"=>"27", "category_id"=>"2", "manufacturer_id"=>"D"}
I can easily select products having ONE category_id with:
SELECT * FROM table WHERE content @> 'category_id=>27'
I want to select all rows having (for example) category_id IN a list of values.
In classic SQL it would be something like this:
SELECT * FROM TABLE WHERE category_id IN (27, 28, 29, ....)
Thank you in advance.
De-reference the key and test it with IN as normal.
CREATE TABLE hstoredemo(content hstore not null);
INSERT INTO hstoredemo(content) VALUES
('"price"=>"15.2", "quantity"=>"3", "product_id"=>"27", "category_id"=>"2", "manufacturer_id"=>"D"');
then use one of these. The first is cleaner, as it casts the extracted value to integer rather than doing string comparisons on numbers.
SELECT *
FROM hstoredemo
WHERE (content -> 'category_id')::integer IN (2, 27, 28, 29);
SELECT *
FROM hstoredemo
WHERE content -> 'category_id' IN ('2', '27', '28', '29');
If you had to test more complex hstore containment operations, say with multiple keys, you could use @> ANY, e.g.
SELECT *
FROM hstoredemo
WHERE
content @> ANY(
ARRAY[
'"category_id"=>"27","product_id"=>"27"',
'"category_id"=>"2","product_id"=>"27"'
]::hstore[]
);
but it's not pretty, and it'll be a lot slower, so don't do this unless you have to.
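If these lookups are frequent, indexing can help; a sketch (the index names here are made up):
-- GIN index supporting hstore containment (@>) and key-existence operators
CREATE INDEX hstoredemo_content_gin ON hstoredemo USING gin (content);

-- alternatively, a btree expression index matching the integer-cast lookup
CREATE INDEX hstoredemo_category_id ON hstoredemo (((content -> 'category_id')::integer));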
category_ids = ["27", "28", "29"]
Tablename.where("content -> 'category_id' IN (?)", category_ids)
I have a table called map_tags:
map_id | map_license | map_desc
And another table (widgets) whose records contain a foreign key reference (1 to 1) to a map_tags record:
widget_id | map_id | widget_name
Given the constraint that all map_licenses are unique (though they are not set up as keys on map_tags), if I have a map_license and a widget_name, I'd like to perform an insert on widgets all inside the same SQL statement:
INSERT INTO
widgets w
(
map_id,
widget_name
)
VALUES (
(
SELECT
mt.map_id
FROM
map_tags mt
WHERE
-- This should work and return a single record because map_license is unique
mt.map_license = '12345'
),
'Bupo'
)
I believe I'm on the right track but know right off the bat that this is incorrect SQL for Postgres. Does anybody know the proper way to achieve such a single query?
Use the INSERT INTO SELECT variant, including whatever constants right into the SELECT statement.
The PostgreSQL INSERT syntax is:
INSERT INTO table [ ( column [, ...] ) ]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) [, ...] | query }
[ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
Take note of the query option at the end of the second line above.
Here is an example for you.
INSERT INTO
widgets
(
map_id,
widget_name
)
SELECT
mt.map_id,
'Bupo'
FROM
map_tags mt
WHERE
mt.map_license = '12345'
INSERT INTO widgets
(
map_id,
widget_name
)
SELECT
mt.map_id, 'Bupo'
FROM
map_tags mt
WHERE
mt.map_license = '12345'
Quick Answer:
You don't have "a single record"; you have a "set with 1 record".
If this were JavaScript: you have an "array with 1 value", not "1 value".
In your example, one record may be returned in the sub-query,
but you are still trying to unpack an "array" of records into separate
actual parameters, in a place that takes only 1 parameter.
It took me a few hours to wrap my head around the "why not".
As I was trying to do something very similar:
Here are my notes:
tb_table01: (no records)
+---+---+---+
| a | b | c | << column names
+---+---+---+
tb_table02:
+---+---+---+
| a | b | c | << column names
+---+---+---+
|'d'|'d'|'d'| << record #1
+---+---+---+
|'e'|'e'|'e'| << record #2
+---+---+---+
|'f'|'f'|'f'| << record #3
+---+---+---+
--This statement will fail:
INSERT into tb_table01
( a, b, c )
VALUES
( 'record_1.a', 'record_1.b', 'record_1.c' ),
( 'record_2.a', 'record_2.b', 'record_2.c' ),
-- This sub query has multiple
-- rows returned. And they are NOT
-- automatically unpacked like in
-- javascript were you can send an
-- array to a variadic function.
(
SELECT a,b,c from tb_table02
)
;
Basically, don't think of "VALUES" as a variadic
function that can unpack an array of records. There is
no argument unpacking here like you would have in a JavaScript
function, such as:
function takeValues( ...values ){
values.forEach((v)=>{ console.log( v ) });
};
var records = [ [1,2,3],[4,5,6],[7,8,9] ];
takeValues( ...records ); // the spread operator unpacks the outer array into three arguments
//:RESULT:
//: console.log #1 : [1,2,3]
//: console.log #2 : [4,5,6]
//: console.log #3 : [7,8,9]
Back to your SQL question:
The reality of this functionality not existing does not change
just because your sub-selection contains only one result. It is
a "set with one record", not "a single record".
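That said, if the intent of the failing statement above were to insert both the constant rows and everything from tb_table02 in one go, here is a sketch of one way to do it with the same made-up tables:
-- constant rows and query results combined with UNION ALL,
-- fed to a single INSERT ... SELECT
INSERT INTO tb_table01 ( a, b, c )
SELECT 'record_1.a', 'record_1.b', 'record_1.c'
UNION ALL
SELECT 'record_2.a', 'record_2.b', 'record_2.c'
UNION ALL
SELECT a, b, c FROM tb_table02;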