RETURNING rows using unnest()?

I'm trying to return a set of rows after doing UPDATE.
Something like this.
UPDATE Notis new_noti SET notis = '{}'::noti_record_type[]
FROM (SELECT * FROM Notis WHERE user_id = 2 FOR UPDATE) old_noti
WHERE old_noti.user_id = new_noti.user_id RETURNING unnest(old_noti.notis);
but postgres complains, rightly so:
set-valued function called in context that cannot accept a set
How am I supposed to go about implementing this?
That is, RETURNING a set of rows from SELECTed array after UPDATE?
I'm aware that a function can achieve this using RETURNS SETOF but rather prefer not to if possible.

Use a WITH statement:
WITH upd AS (
UPDATE Notis new_noti SET notis = '{}'::noti_record_type[]
FROM (SELECT * FROM Notis WHERE user_id = 2 FOR UPDATE) old_noti
WHERE old_noti.user_id = new_noti.user_id RETURNING old_noti.notis
)
SELECT unnest(notis) FROM upd;

Use a data-modifying CTE.
You can use a set-returning function in the SELECT list, but it is cleaner to move it to the FROM list with a LATERAL subquery since Postgres 9.3. Especially if you need to extract multiple columns (from a row type like you commented). It would also be inefficient to call unnest() multiple times.
WITH upd AS (
UPDATE notis n
SET notis = '{}'::noti_record_type[] -- explicit cast optional
FROM (
SELECT user_id, notis
FROM notis
WHERE user_id = 2
FOR UPDATE
) old_n
WHERE old_n.user_id = n.user_id
RETURNING old_n.notis
)
SELECT n.*
FROM upd u, unnest(u.notis) n; -- implicit CROSS JOIN LATERAL
If the array can be empty and you want to preserve empty / NULL results use LEFT JOIN LATERAL ... ON true. See:
What is the difference between LATERAL JOIN and a subquery in PostgreSQL?
Call a set-returning function with an array argument multiple times
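A sketch of that variant, reusing the upd CTE from above (same hypothetical table and type names). With a LEFT JOIN LATERAL ... ON true, rows whose array is empty or NULL still appear in the result, with NULL in place of unnested elements:

```sql
WITH upd AS (
   UPDATE notis n
   SET    notis = '{}'::noti_record_type[]
   FROM  (SELECT user_id, notis FROM notis WHERE user_id = 2 FOR UPDATE) old_n
   WHERE  old_n.user_id = n.user_id
   RETURNING old_n.notis
   )
SELECT n.*
FROM   upd u
LEFT   JOIN LATERAL unnest(u.notis) n ON true;  -- preserves rows with empty / NULL arrays
```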
Also, multiple set-returning functions in the same SELECT can exhibit surprising behavior. Avoid that.
That behavior was sanitized in Postgres 10. See:
What is the expected behaviour for multiple set-returning functions in SELECT clause?
Alternative to unnest multiple arrays in parallel before and after Postgres 10:
Unnest multiple arrays in parallel
Related:
Return pre-UPDATE column values using SQL only
Behavior of composite / row values
Postgres has an oddity when assigning a row type (or composite or record type) from a set-returning function to a column list. One might expect that the row-type field is treated as one column and assigned to the respective column, but that is not so. It is decomposed automatically (one row-layer only!) and assigned element-by-element.
So this does not work as expected:
SELECT (my_row).*
FROM upd u, unnest(u.notis) n(my_row);
But this does (like #klin commented):
SELECT (my_row).*
FROM upd u, unnest(u.notis) my_row;
Or the simpler version I ended up using:
SELECT n.*
FROM upd u, unnest(u.notis) n;
Another oddity: A composite (or row) type with a single field is decomposed automatically. Thus, table alias and column alias end up doing the same in the outer SELECT list:
SELECT n FROM unnest(ARRAY[1,2,3]) n;
SELECT n FROM unnest(ARRAY[1,2,3]) n(n);
SELECT n FROM unnest(ARRAY[1,2,3]) t(n);
SELECT t FROM unnest(ARRAY[1,2,3]) t(n); -- except output column name is "t"
For more than one field, the row-wrapper is preserved:
SELECT t FROM unnest(ARRAY[1,2,3]) WITH ORDINALITY t(n); -- requires 9.4+
Confused? There is more. For composite types (the case at hand) like:
CREATE TYPE my_type AS (id int, txt text);
While this works as expected:
SELECT n FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) n;
You are in for a surprise here:
SELECT n FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) n(n);
And that's the error I had: When providing a column list, Postgres decomposes the row and assigns provided names one-by-one. Referring to n in the SELECT list does not return the composite type, but only the (renamed) first element. I had mistakenly expected the row type and tried to decompose with (my_row).* - which only returns the first element nonetheless.
Then again:
SELECT t FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]) t(n);
(Be aware that the first element has been renamed to "n"!)
With the new form of unnest() taking multiple array arguments (Postgres 9.4+):
SELECT *
FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n;
Column aliases only for the first two output columns:
SELECT *
FROM unnest(ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n(a, b);
Column aliases for all output columns:
SELECT *
FROM unnest(ARRAY[(1,'foo')::my_type, (2,'bar')::my_type]
, ARRAY[(3,'baz')::my_type, (4,'bak')::my_type]) n(a,b,c,d);
db<>fiddle here
Old sqlfiddle

Probably, for:
SELECT *
FROM unnest (ARRAY[(1, 'foo')::my_type, (2, 'bar')::my_type]
, ARRAY[(3, 'baz')::my_type, (4, 'bak')::my_type]) n(a, b);
Use:
SELECT *
FROM unnest (ARRAY[(1, 'foo')::text, (2, 'bar')::text]
, ARRAY[(3, 'baz')::text, (4, 'bak')::text]) WITH ORDINALITY AS t(first_col, second_col);

Related

Select rows according to another table with a comma-separated list of items

I have a table test.
select b from test
b is a text column and contains Apartment,Residential
The other table is a parcel table with a classification column. I'd like to use test.b to select the right classifications in the parcels table.
select * from classi where classification in(select b from test)
This returns no rows.
select * from classi where classification =any(select '{'||b||'}' from test)
Same story with this one.
I could write a function to loop through the b column, but I'm trying to find an easier solution.
Test case:
create table classi as
select 'Residential'::text as classification
union
select 'Apartment'::text as classification
union
select 'Commercial'::text as classification;
create table test as
select 'Apartment,Residential'::text as b;
You don't actually need to unnest the array:
SELECT c.*
FROM classi c
JOIN test t ON c.classification = ANY (string_to_array(t.b, ','));
db<>fiddle here
The problem is that = ANY takes a set or an array, and IN takes a set or a list, and your ambiguous attempts resulted in Postgres picking the wrong variant. My formulation makes Postgres expect an array as it should.
For a detailed explanation see:
How to match elements in an array of composite type?
IN vs ANY operator in PostgreSQL
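A minimal illustration of the difference (standalone queries, not from the question): IN compares against whole values in a list, while = ANY compares against the elements of an array.

```sql
SELECT 'Apartment' IN ('Apartment,Residential');                          -- false: compares whole strings
SELECT 'Apartment' = ANY (string_to_array('Apartment,Residential', ',')); -- true: compares array elements
```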
Note that my query also works for multiple rows in table test. Your demo only shows a single row, which is a corner case for a table ...
But also note that multiple rows in test may produce (additional) duplicates. You'd have to fold duplicates or switch to a different query style to de-duplicate. Like:
SELECT c.*
FROM classi c
WHERE EXISTS (
SELECT FROM test t
WHERE c.classification = ANY (string_to_array(t.b, ','))
);
This prevents duplicates originating from elements within a single test.b, as well as from multiple rows in test: by definition, EXISTS returns each row from classi at most once.
The most efficient query style depends on the complete picture.
You need to first split b into an array and then get the rows. A couple of alternatives:
select * from nj.parcels p where classification = any(select unnest(string_to_array(b, ',')) from test)
select p.* from nj.parcels p
INNER JOIN (select unnest(string_to_array(b, ',')) from test) t(classification) ON t.classification = p.classification;
Essential to both is the unnest surrounding string_to_array.

SQL Snowflake - Put an SQL list / array into a column

Based on a specific project architecture, I have a LIST ('Bob', 'Alice') that I want to SELECT as a column (and do a specific JOIN afterwards).
Right now, I did:
SELECT *
FROM TABLE(flatten(input => ('Bob', 'Alice'))) as v1
But this resulted in one row / two columns, and I need one column / two rows (to do the JOIN).
Same if I use:
select * from (values ('Bob', 'Alice'))
The basic idea would be to PIVOT, however, the list may be of arbitrary length so I can't manually list all column names in PIVOT query...
Also, I can't use the following (which would work):
select * from (values ('Bob'), ('Alice'))
because I inherit the list as a string and can't modify it on the fly.
If you have a fixed set of values that you want to JOIN against, then looking at some of the SQL you have tried, the correct form of VALUES is:
select * from (values ('Bob'), ('Alice'));
or
select * from values ('Bob'), ('Alice');
If you have an existing array, you can FLATTEN it, as in your first example:
SELECT v1.value::text
FROM TABLE(flatten(input => array_construct('Bob', 'Alice'))) as v1;
V1.VALUE::TEXT
Bob
Alice
Or if you have a string "Bob, Alice", then use SPLIT_TO_TABLE:
SELECT trim(v1.value)
FROM TABLE(split_to_table('Bob, Alice', ',')) as v1;
If the input is provided as ('Bob','Alice') then STRTOK_SPLIT_TO_TABLE could be used:
SELECT table1.value
FROM table(strtok_split_to_table($$('Bob','Alice')$$, '(),''')) AS table1;
Output:
VALUE
Bob
Alice

INSERT SELECT FROM VALUES casting

It's often desirable to INSERT from a SELECT expression (e.g. to qualify it with a WHERE clause), but this can get PostgreSQL confused about the column types.
Example:
CREATE TABLE example (a uuid primary key, b numeric);
INSERT INTO example
SELECT a, b
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL)) as data(a,b);
=> ERROR: column "a" is of type uuid but expression is of type text
This can be fixed by explicitly casting in the values:
INSERT INTO example
SELECT a, b
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a'::uuid, NULL::numeric)) as data(a,b);
But that's messy and a maintenance burden. Is there some way to make postgres understand that the VALUES expression has the same type as a table row, i.e. something like
VALUES('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL)::example%ROWTYPE
Edit:
The suggestion of using (data::example).* is neat, but unfortunately it seems to completely confuse the postgres query planner when combined with a WHERE clause like so:
INSERT INTO example
SELECT (data::example).*
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL)) as data
WHERE NOT EXISTS (SELECT * FROM example
WHERE (data::example)
IS NOT DISTINCT FROM example);
This takes minutes with a large table.
You can cast a record to a row type of your table:
INSERT INTO example
SELECT (data::example).*
FROM (
VALUES
('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL),
('54514c89-f188-490a-abbb-268f9154ab2c', 42)
) as data;
data::example casts the complete row to a record of type example. The (...).* then turns that record into the columns defined in the table type example.
You could use VALUES directly:
INSERT INTO example(a, b)
VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL);
DBFiddle Demo
Or just cast once:
INSERT INTO example(a, b)
SELECT a::uuid, b::numeric
FROM (VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL),
('bb53b5a8-d453-11e7-9296-cec278b6b50a',1) ) as data(a,b);
DBFiddle Demo2
Note: please always explicitly define the column list.
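A sketch of why: without an explicit column list, values are matched to columns by position, so the statement silently breaks if columns are later added or reordered in the table.

```sql
-- Fragile: relies on the table's current column order (a, b)
INSERT INTO example
VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL);

-- Robust: explicit column list survives added / reordered columns
INSERT INTO example (a, b)
VALUES ('d853b5a8-d453-11e7-9296-cec278b6b50a', NULL);
```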

in Postgres, use array of values for INSERT

I have an array of values. I wish to INSERT each of them as a record if it does not already exist. Can this be done in one SQL statement?
The traditional JavaScript approach:
[4,5,8].forEach( x => queryParams(
  `insert into t1( c1 )
   select $1
   where not exists( select 1 from t1 where c1 = $1 )`,
  [x]
));
What would be ideal is something like:
queryParams(
  `some fancy SQL here`,
  [ `{${[4,5,8].join()}}` ]
);
There are many reasons for this, including limiting the cost of server round trips and transactions.
You can use a correlated sub-query to find the values that don't exist matching a condition:
INSERT INTO Records (X)
SELECT X
FROM unnest(ARRAY[4,5,8]) T (X)
WHERE NOT EXISTS (SELECT * FROM Records WHERE X = T.X);
SQL Fiddle: http://sqlfiddle.com/#!15/e0334/29/0
Edited above to use unnest
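If X has a UNIQUE constraint (an assumption, not stated in the question), Postgres 9.5+ offers ON CONFLICT as an alternative sketch that also avoids a race condition between concurrent inserts:

```sql
INSERT INTO Records (X)
SELECT X
FROM   unnest(ARRAY[4,5,8]) T (X)
ON     CONFLICT (X) DO NOTHING;  -- requires a unique index / constraint on X
```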

Does PostgreSQL have a mechanism to update the same row multiple times in a single query?

Consider the following:
create table tmp.x (i integer, t text);
create table tmp.y (i integer, t text);
delete from tmp.x;
delete from tmp.y;
insert into tmp.x values (1, 'hi');
insert into tmp.y values(1, 'there');
insert into tmp.y values(1, 'wow');
In the above, there is one row in table x, which I want to update. In table y, there are two rows, both of which I want to "feed data into" the update.
Below is my attempt:
update tmp.x
set t = x.t || y.t
from ( select * from tmp.y order by t desc ) y
where y.i = x.i;
select * from tmp.x;
I want the value of x.t to be 'hiwowthere' but the value ends up being 'hiwow'. I believe the cause of this is that the subquery in the update statement returns two rows (the y.t value of 'wow' being returned first), and the where clause y.i = x.i only matches the first row.
Can I achieve the desired outcome using a single update statement, and if so, how?
UPDATE: The use of the text type above was for illustration purposes only. I do not actually want to modify textual content, but rather JSON content using the json_set function that I posted here (How do I modify fields inside the new PostgreSQL JSON datatype?), although I'm hoping the principle could be applied to any function, such as the fictional concat_string(column_name, 'string-to-append').
UPDATE 2: Rather than waste time on this issue, I actually wrote a small function to accomplish it. However, it would still be nice to know if this is possible, and if so, how.
What you can do is to build up a concatenated string using string_agg, grouped by the integer i, which you can then join onto during the update:
update tmp.x
set t = x.t || y.txt
from (
  select z.i, string_agg(z.t, '') as txt
  from (
    select tmp.y.i, tmp.y.t
    from tmp.y
    order by t desc
  ) z
  group by z.i
) y
where y.i = x.i;
In order to preserve the order, you may need an additional wrapping derived table. SqlFiddle here
Use string_agg, as follows:
update tmp.x x
set t = x.t || (
select string_agg(t,'' order by t desc)
from tmp.y where i = x.i
group by i
)
SQLFiddle
with cte as (
  select y.i, string_agg(t, '' order by t desc) as txt
  from tmp.y y
  group by y.i
)
update tmp.x x
set t = x.t || cte.txt
from cte
where cte.i = x.i;