One-statement Insert+delete in PostgreSQL
Suppose I have a PostgreSQL table t that looks like
 id | name | y
----+------+---
  0 | 'a'  | 0
  1 | 'b'  | 0
  2 | 'c'  | 0
  3 | 'd'  | 1
  4 | 'e'  | 2
  5 | 'f'  | 2
With id being the primary key and with a UNIQUE constraint on (name, y).
Suppose I want to update this table in such a way that the part of the data set with y = 0 becomes (without knowing what is already there)
 id | name | y
----+------+---
  0 | 'a'  | 0
  1 | 'x'  | 0
  2 | 'y'  | 0
I could use
DELETE FROM t WHERE y = 0 AND name NOT IN ('a', 'x', 'y');
INSERT INTO t (name, y) VALUES ('a', 0), ('x', 0), ('y', 0)
ON CONFLICT (name, y) DO NOTHING;
I feel like there must be a one-statement way to do this (like what upsert does for the task "update the existing entries and insert the missing ones", but for "insert the missing entries and delete the entries that should not be there"). Is there? I have heard that Oracle has something called MERGE, but I'm not sure exactly what it does.
This can be done with a single statement. But I doubt whether that classifies as "simpler".
Additionally, your expected output doesn't quite make sense as shown.
Your INSERT statement does not provide a value for the primary key column (id), so the id column is apparently a generated (identity/serial) column.
But in that case, the new rows can't keep the same IDs as the rows that were there before, because new IDs are generated when the new rows are inserted.
With that caveat about your expected output, the following does what you want:
with data (name, y) as (
  values ('a', 0), ('x', 0), ('y', 0)
), changed as (
  insert into t (name, y)
  select *
  from data
  on conflict (name, y) do nothing
)
delete from t
where y = 0
  and (name, y) not in (select name, y from data);
That is one statement, but certainly not "simpler". The only advantage I can see is that you do not have to specify the list of values twice.
Online example: https://rextester.com/KKB30299
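For completeness: PostgreSQL has since gained a MERGE statement (version 15), and version 17 adds WHEN NOT MATCHED BY SOURCE, which covers the "delete what shouldn't be there" side as well. A rough sketch of what that could look like, assuming PostgreSQL 17 or later and the same table t (not part of the original answer):
merge into t
using (values ('a', 0), ('x', 0), ('y', 0)) as d(name, y)
   on t.name = d.name and t.y = d.y
when not matched then
  insert (name, y) values (d.name, d.y)
when not matched by source and t.y = 0 then
  delete;
-- rows in t with y = 0 that are missing from the source are deleted,
-- source rows missing from t are inserted, matching rows are left alone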
Unless there's a tremendous number of rows to be updated, do it as three update statements.
update t set name = 'a' where id = 0;
update t set name = 'x' where id = 1;
update t set name = 'y' where id = 2;
This is simple. It's easily done in a loop with a SQL builder. There are no race conditions as there are with deleting and inserting. And it preserves the ids and other columns of those rows.
To demonstrate, here is some pseudo-Ruby code.
new_names = ['a', 'x', 'y']

# In a transaction
db.transaction {
  # Query the matching IDs in the same order as their new names
  ids_to_update = db.select("
    select id from t where y = 0 order by id
  ")

  # Iterate through the IDs and new names together
  ids_to_update.zip(new_names).each { |id, name|
    # Update the row with its new name
    db.execute("
      update t set name = ? where id = ?
    ", name, id)
  }
}
Fooling around some, here's how I did it in "one" statement, or at least one thing sent to the server, while preserving the IDs and no race conditions.
do $$
declare
  change text[];
  changes text[][];
begin
  select array_agg(array[id::text, name])
    into changes
  from unnest(
         (select array_agg(id order by id) from t where y = 0),
         array['a','y','z']
       ) with ordinality as a(id, name);

  foreach change slice 1 in array changes
  loop
    update t set name = change[2] where id = change[1]::int;
  end loop;
end $$;
The goal is to produce an array of arrays matching the id to its new name. That can be iterated over to do the updates.
unnest(
  (select array_agg(id order by id) from t where y = 0),
  array['a','y','z']
) with ordinality as a(id, name)
That bit produces rows with the IDs and their new names side by side.
select array_agg(array[id::text, name])
  into changes
from unnest(...) with ordinality as a(id, name);
Then those rows of IDs and names are turned into an array of arrays like: {{1,a},{2,y},{3,z}}. (There's probably a more direct way to do that)
foreach change slice 1 in array changes
loop
  update t set name = change[2] where id = change[1]::int;
end loop;
Finally we loop over the array and use it to perform each update.
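For what it's worth, the pairing and the loop can probably be collapsed into a single UPDATE ... FROM over the same unnest; a sketch under the same assumptions (new names 'a','y','z' for the y = 0 group), which I haven't benchmarked:
update t
   set name = u.new_name
  from unnest(
         (select array_agg(id order by id) from t where y = 0),
         array['a','y','z']
       ) with ordinality as u(id, new_name, ord)  -- pairs each ordered id with its new name
 where t.id = u.id;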
You can turn this into a proper function and pass in the y value to match and the array of names to change them to. You should verify that the number of matching IDs equals the number of names.
This might be faster, depending on how many rows you're updating, but it isn't simpler, and it took some time to puzzle out.
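A minimal sketch of such a function (the name rename_group and its signature are my own invention), assuming the same table t:
create or replace function rename_group(p_y int, p_names text[])
returns void
language plpgsql
as $$
declare
  ids int[];
begin
  -- collect the ids of the group in a stable order
  select array_agg(id order by id) into ids from t where y = p_y;

  -- verify the number of rows matches the number of new names
  if coalesce(array_length(ids, 1), 0) <> coalesce(array_length(p_names, 1), 0) then
    raise exception 'row count (%) does not match name count (%)',
      coalesce(array_length(ids, 1), 0), coalesce(array_length(p_names, 1), 0);
  end if;

  -- apply each new name to the id in the same position
  for i in 1 .. coalesce(array_length(ids, 1), 0) loop
    update t set name = p_names[i] where id = ids[i];
  end loop;
end;
$$;
Called as: select rename_group(0, array['a','x','y']);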
Related
Translating an Excel concept into SQL
Let's say I have the following range in Excel named MyRange (roughly, cells B3:D6 holding 1, 1, 1 / 2, other, 2 / TRUE, 3, 3 / 4, 4, 4). This isn't a table by any means; it's more a collection of Variant values entered into cells. Excel makes it easy to sum these values with =SUM(B3:D6), which gives 25. Let's not go into the details of type checking or anything like that and just figure that SUM will easily skip values that don't make sense.
If we were translating this concept into SQL, what would be the most natural way to do this? The few approaches that came to mind are (ignore type errors for now):
MyRange returns an array of values:
-- myRangeAsList = [1,1,1,2, ...]
SELECT SUM(elem) FROM UNNEST(myRangeAsList) AS r (elem);
MyRange returns a table-valued function of a single column (basically the opposite of a list):
-- myRangeAsCol = (SELECT 1 UNION ALL SELECT 1 UNION ALL ...
SELECT SUM(elem) FROM myRangeAsCol AS r (elem);
Or, perhaps more 'correctly', return a 3-column table such as:
-- myRangeAsTable = (SELECT 1,1,1 UNION ALL SELECT 2,'other',2 UNION ALL ...
SELECT SUM(a+b+c) FROM myRangeAsTable AS r (a,b,c);
Unfortunately, I think this makes things the most difficult to work with, as we now have to combine an unknown number of columns. Perhaps returning a single column is the easiest of the above to work with, but even that takes a very simple concept -- SUM(myRange) -- and converts it into something that is anything but that: SELECT SUM(elem) FROM myRangeAsCol AS r (elem). Perhaps this could also just be rewritten as a function for convenience.
Just a possible direction to think in (this is BigQuery syntax):
create temp function extract_values(input string) returns array<string>
language js as """
return Object.values(JSON.parse(input));
""";
with myrangeastable as (
  select '1' a, '1' b, '1' c union all
  select '2', 'other', '2' union all
  select 'true', '3', '3' union all
  select '4', '4', '4'
)
select sum(safe_cast(value as float64)) range_sum
from myrangeastable t,
     unnest(extract_values(to_json_string(t))) value
with output range_sum = 25.
Note: no columns are explicitly referenced, so this should work for any sized range without any changes to the code. Depending on the specific use case, I think the above can be wrapped into something more friendly for someone who knows Excel.
I'll try to pose atomic, pure SQL principles, starting with the obvious items and moving to the more complicated ones. The intention is that all items can be used in any RDBMS:
SQL is basically designed to query tabular data which has relations (hence the name, Structured Query Language). The range in Excel is a table for SQL. (Yes, you can have some other types in different DBs, but keep it simple so you can use the concept in different types of DBs.)
Now we accept that a range in Excel is a table in a database. The next step is how to map the columns and rows of an Excel range to a DB table. It is straightforward: an Excel range column is a column in the DB, and a row is a row. So why is this a separate item? Because the main difference between the two is that in DBs adding a new column is usually a pain; DB tables are almost exclusively designed for new rows, not for new columns. (Of course there are methods to add new columns, and there even exist column-based DBs, but these are out of the scope of this answer.)
Items 2 and 3 in Excel and in a DB:
/*
Item 2: Table
  the range in the excel is modeled as the below test_table
Item 3: Columns
  id keeps the excel row number
  b, c, d are the corresponding b, c, d columns of the excel
*/
create table test_table (
  id integer,
  b varchar(20),
  c varchar(20),
  d varchar(20)
);
-- Item 3: Adding the rows in the DB
insert into test_table values (3 /* same as excel row number */, '1', '1', '1');
insert into test_table values (4 /* same as excel row number */, '2', 'other', '2');
insert into test_table values (5 /* same as excel row number */, 'TRUE', '3', '3');
insert into test_table values (6 /* same as excel row number */, '4', '4', '4');
Now we have a similar structure. The first thing we want to do is to have an equal number of rows between the Excel range and the DB table. At the DB side this is called filtering, and your tool is the where condition. The where condition goes through all rows (or indexes for the sake of speed, but that is beyond this answer's scope) and filters out those which do not satisfy the boolean logic in the condition. (So, for example, where 1 = 1 brings all rows because the condition is always true for every row.)
The next thing to do is to sum the related columns. For this purpose you have two options: sum(column_a + column_b) (row-by-row summation) or sum(a) + sum(b) (column-by-column summation). If we assume none of the data is null, both give the same output.
Items 4 and 5 in Excel and in a DB:
select sum(b + c + d) -- Item 5, first option: we sum row by row
from test_table
where id between 3 and 6; -- Item 4: we simply get all rows, because every id above is between 3 and 6; if we had another row with id 7, it would be filtered out
+----------------+
| sum(b + c + d) |
+----------------+
|             25 |
+----------------+
select sum(b) + sum(c) + sum(d) -- Item 5, second option: we sum column by column
from test_table
where id between 3 and 6; -- Item 4: same filter as above
+--------------------------+
| sum(b) + sum(c) + sum(d) |
+--------------------------+
|                       25 |
+--------------------------+
At this point it is better to go one step further. In Excel you have the "pivot table" structure. The corresponding structure in SQL is the powerful group by mechanism. group by basically groups a table according to its condition, and each group behaves like a sub-table. For example, if you say group by column_a for a table, the rows are grouped according to the values of that column. SQL is so powerful that you can even filter the sub-groups using having clauses, which act the same as where but work over the columns in the group by or functions over those columns.
Items 6 and 7 in Excel and in a DB:
-- Item 6: We can use a group by clause to simulate a pivot table
insert into test_table values (7 /* same as excel row */, '4', '2', '2');
select b, sum(d), min(d), max(d), avg(d)
from test_table
where id between 3 and 7
group by b;
+------+--------+--------+--------+--------+
| b    | sum(d) | min(d) | max(d) | avg(d) |
+------+--------+--------+--------+--------+
| 1    |      1 |      1 |      1 |      1 |
| 2    |      2 |      2 |      2 |      2 |
| TRUE |      3 |      3 |      3 |      3 |
| 4    |      6 |      2 |      4 |      3 |
+------+--------+--------+--------+--------+
Beyond this point, the following are details which are not directly related to the question's purpose:
SQL has the ability to join tables (the relations). Joins can be thought of like the VLOOKUP functionality in Excel.
RDBMSs have indexing mechanisms to fetch rows as quickly as possible (here RDBMSs start to go beyond the purpose of Excel).
RDBMSs keep huge amounts of data (whereas Excel's maximum row count is limited).
Both RDBMSs and Excel can be used by most frameworks as a persistent data layer, but of course Excel is not the one you pick, because its reason for existence is more on the presentation layer.
The Excel file and the SQL used in this answer can be found in this GitHub repo: https://github.com/MehmetKaplan/stackoverflow-72135212/
PS: I used SQL for more than 2 decades and then reduced using it and started to use Excel much more frequently because of job changes. Each time I use Excel I still think of DBs and "relational algebra", which is the mathematical foundation of RDBMSs.
So in Snowflake:
Strings as input: if you have your data in an "order" table represented by this CTE, and the data was strings of comma-separated values:
WITH data(raw) as (
    select * from values
    ('null,null,null,null,null,null'),
    ('null,null,null,null,null,null'),
    ('null,1,1,1,null,null'),
    ('null,2, other,2,null,null'),
    ('null,true,3,3,null,null'),
    ('null,4,4,4,null,null')
)
this SQL will select the sub-part, try to parse it, and sum the valid values:
select
    sum(nvl(try_to_double(r.value::text), try_to_number(r.value::text))) as sum_total
from data as d
    ,table(split_to_table(d.raw,',')) r
where r.index between 2 and 4 /* the column B,C,D filter */
    and r.seq between 3 and 6 /* the row 3-6 filter */
;
giving:
SUM_TOTAL
25
Arrays as input: if you already have arrays... here I smash those strings through STRTOK_TO_ARRAY in the CTE to make me some arrays:
WITH data(_array) as (
    select STRTOK_TO_ARRAY(column1, ',') from values
    ('null,null,null,null,null,null'),
    ('null,null,null,null,null,null'),
    ('null,1,1,1,null,null'),
    ('null,2, other,2,null,null'),
    ('null,true,3,3,null,null'),
    ('null,4,4,4,null,null')
)
thus again almost the same SQL, but now the array indexes are 0-based, and I have used FLATTEN:
select
    sum(nvl(try_to_double(r.value::text), try_to_number(r.value::text))) as sum_total
from data as d
    ,table(flatten(input=>d._array)) r
where r.index between 1 and 3 /* the column B,C,D filter */
    and r.seq between 3 and 6 /* the row 3-6 filter */
;
gives:
SUM_TOTAL
25
With JSON-driven data: this time, using semi-structured data, we can include the filter ranges with the data, plus some extra out-of-bounds values just to show we are not simply converting it all:
WITH data as (
    select parse_json('{
        "col_from":2, "col_to":4,
        "row_from":3, "row_to":6,
        "data":[[101,102,null,104,null,null],
                [null,null,null,null,null,null],
                [null,1,1,1,null,null],
                [null,2, "other",2,null,null],
                [null,true,3,3,null,null],
                [null,4,4,4,null,null]
        ]}') as json
)
select
    sum(try_to_double(c.value::text)) as sum_total
from data as d
    ,table(flatten(input=>d.json:data)) r
    ,table(flatten(input=>r.value)) c
where r.index+1 between d.json:row_from::number and d.json:row_to::number
    and c.index+1 between d.json:col_from::number and d.json:col_to::number
;
Here is another solution using Snowflake Scripting (Snowsight format). This code can easily be wrapped as a stored procedure.
declare
  table_name := 'xl_concept'; -- input
  column_list := 'a,b,c'; -- input
  total resultset; -- result output
  pos int := 0; -- position for delimiter
  sql := ''; -- sql to be generated
  col := ''; -- individual column names
begin
  sql := 'select sum('; -- initialize sql
  loop -- repeat until column list is empty
    col := replace(split_part(:column_list, ',', 1), ',', ''); -- get the column name
    pos := position(',' in :column_list); -- find the delimiter
    sql := sql || 'coalesce(try_to_number('|| col ||'),0)'; -- add to the sql
    if (pos > 0) then -- more columns in the column list
      sql := sql || ' + ';
      column_list := right(:column_list, len(:column_list) - :pos); -- update column list
    else -- last entry in the column list
      break;
    end if;
  end loop;
  sql := sql || ') total from ' || table_name || ';'; -- finalize the sql
  total := (execute immediate :sql); -- run the sql and store the total value
  return table(total); -- return the total value
end;
Only these two variables need to be set: table_name and column_list.
It generates the following SQL to sum up the values:
select sum(coalesce(try_to_number(a),0) + coalesce(try_to_number(b),0) + coalesce(try_to_number(c),0)) total from xl_concept
Prep steps:
create or replace temp table xl_concept (a varchar, b varchar, c varchar);
insert into xl_concept
with cte as (
  select '1' a, '1' b, '1' c union all
  select '2', 'other', '2' union all
  select 'true', '3', '3' union all
  select '4', '4', '4'
)
select * from cte;
Result for the run with no change:
TOTAL
25
Result after changing the column list to column_list := 'a,c';:
TOTAL
17
Also, this can be enhanced by setting column_list to * and reading the column names from information_schema.columns to include all the columns of the table.
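For that last enhancement, the column list could plausibly be built from the catalog instead of being typed by hand; a sketch (my assumption, not part of the original answer), using the same xl_concept table:
select listagg(column_name, ',') within group (order by ordinal_position) as column_list
from information_schema.columns
where table_name = 'XL_CONCEPT';  -- unquoted identifiers are stored upper-case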
In PostgreSQL a regular expression can be used to filter out non-numeric values before summing:
select sum(e::Numeric)
from (
  select e
  from unnest(Array[['1','2w','1.2e+4'],['-1','2.232','zz']]) as t(e)
  where e ~ '^[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?$'
) a
The expression for validating a numeric value was taken from the post Return Just the Numeric Values from a PostgreSQL Database Column.
A more robust option is to define a function as in PostgreSQL alternative to SQL Server's try_cast function.
Function (simplified for this example):
create function try_cast_numeric(p_in text) returns Numeric as
$$
begin
  begin
    return $1::Numeric;
  exception when others then
    return 0;
  end;
end;
$$ language plpgsql;
Select:
select sum(try_cast_numeric(e))
from unnest(Array[['1','2w','1.2e+4'],['-1','2.232','zz']]) as t(e)
Most modern RDBMSs support lateral joins and table value constructors. You can use them together to convert arbitrary columns to rows (3 columns per row become 3 rows with 1 column), then sum. In SQL Server you would write:
create table t (
  id int not null primary key identity,
  a int,
  b int,
  c int
);
insert into t(a, b, c) values
  (   1,    1, 1),
  (   2, null, 2),
  (null,    3, 3),
  (   4,    4, 4);
select sum(value)
from t
cross apply (values (a), (b), (c)) as x(value);
Below are implementations of this concept in some popular RDBMSs: SQL Server, PostgreSQL, MySQL, a generic ANSI SQL solution, and an unpivot solution for Oracle.
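As a sketch of the PostgreSQL flavour of the same idea (assuming an equivalent table t; the identity syntax differs from SQL Server), CROSS JOIN LATERAL plus a VALUES constructor plays the role of CROSS APPLY:
create table t (
  id int generated always as identity primary key,
  a int,
  b int,
  c int
);

insert into t (a, b, c) values
  (   1,    1, 1),
  (   2, null, 2),
  (null,    3, 3),
  (   4,    4, 4);

-- each source row contributes three one-column rows, which are then summed
select sum(x.value)
from t
cross join lateral (values (t.a), (t.b), (t.c)) as x(value);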
Using a regular expression to extract all the number values from a row could be another option, I guess.
DECLARE rectangular_table ARRAY<STRUCT<A STRING, B STRING, C STRING>> DEFAULT [
  ('1', '1', '1'), ('2', 'other', '2'), ('TRUE', '3', '3'), ('4', '4', '4')
];
SELECT SUM(SAFE_CAST(v AS FLOAT64)) AS `sum`
  FROM UNNEST(rectangular_table) t,
       UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r':"?([-0-9.]*)"?[,}]')) v
output:
+------+------+
| Row  | sum  |
+------+------+
|  1   | 25.0 |
+------+------+
You could use a CTE with a SELECT FROM VALUES:
with xlary as (
  select val from (values
    ('1')
   ,('1')
   ,('1')
   ,('2')
   ,('OTHER')
   ,('2')
   ,('TRUE')
   ,('3')
   ,('3')
   ,('4')
   ,('4')
   ,('4')
  ) as tbl (val)
)
select sum(try_cast(val as number)) from xlary;
Filter a column based on another column in an Oracle query
I have a table like this:
ID | key | value
---+-----+------
 1 | A1  | o1
 1 | A2  | o2
 1 | A3  | o3
 2 | A1  | o4
 2 | A2  | o5
 3 | A1  | o6
 3 | A3  | o7
 4 | A3  | o8
I want to write an Oracle query that can filter the value column based on the key column, something like this:
select ID where if key = 'A1' then value = 'o1' and if key = 'A3' then value = 'o4'
Please help me write this query.
To clarify my question: I need the list of IDs in the result for which all of the (key, value) conditions are true. For each ID I should check the key-value pairs (with AND), and only if all conditions are true is that ID acceptable. Thanks.
IF means PL/SQL. In SQL, we use a CASE expression instead (or DECODE, if you want). Doing so, you'd move value out of the expression and use something like this:
where id = 1
  and value = case when key = 'A1' then 'o1'
                   when key = 'A3' then 'o4'
              end
You are mixing filtering and selection. List the columns that you want to display in the SELECT list and the columns used to filter in the WHERE clause:
SELECT key, value
FROM my_table
WHERE ID = 1 AND key IN ('A1', 'A2')
If there is no value column in your table, you can use the DECODE function:
SELECT key, DECODE(key, 'A1', 'o1',
                        'A2', 'o4',
                        key) AS value
FROM my_table
WHERE ID = 1
After the key, you must specify pairs of search and result values. The pairs can be followed by a default value. In this example, since we did not specify a result for 'A3', the result will be the key itself. If no default value were specified, NULL would be returned for missing search values.
Update: it seems that I misunderstood the question (see #mathguy's comment). You can filter the way you want by simply using the Boolean operators AND and OR:
SELECT *
FROM my_table
WHERE ID = 1 AND (
      key = 'A1' AND value = 'o1'
   OR key = 'A3' AND value = 'o4'
)
By using this pattern it is easy to add more constraints of this kind. Note that AND has precedence over OR (like * over +).
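Since the clarification asks for the list of IDs for which all of the key/value conditions hold, one more option (my own sketch, assuming each ID has at most one row per key) is to filter on the wanted pairs and then keep only the IDs that matched every pair:
SELECT id
FROM my_table
WHERE (key, value) IN (('A1', 'o1'), ('A3', 'o4'))  -- keep only the wanted pairs
GROUP BY id
HAVING COUNT(DISTINCT key) = 2;                     -- the ID must have matched both of them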
DB2 SELECT from UPDATE Options
I am currently trying to do:
SELECT DISTINCT * FROM FINAL TABLE (UPDATE mainTable SET value = 'N' WHERE value2 = 'Y')
However, the version of DB2 I have does not appear to support this:
SQL Error [42601]: [SQL0199] Keyword UPDATE not expected. Valid tokens: INSERT.
Is there any alternative in DB2 that could return the desired result, where in one query we can update and return the result?
EDIT - The SELECT statement is supposed to return the values that are about to be processed by a server application. When this happens, a column is updated to indicate that processing of that row has begun. A later command updates the row again when it is completed.
ORIGINAL DATA
ROW ID | COLUMN TWO | PROCESSING FLAG
-------+------------+----------------
     1 | TASK 1     | N
     2 | TASK 2     | N
     3 | TASK 3     | N
     4 | TASK 4     | N
After the optimistic select/update query, the data table is returned as:
ROW ID | COLUMN TWO | PROCESSING FLAG
-------+------------+----------------
     1 | TASK 1     | Y
     2 | TASK 2     | Y
     3 | TASK 3     | Y
     4 | TASK 4     | Y
This is being called by a .NET application, so the result would be converted into a List of the table object.
You can't specify UPDATE in the table-reference in DB2 for IBM i 7.3 (and even in 7.4 at the moment) as you can in Db2 for LUW. Only INSERT is available:
data-change-table-reference:
  --+-- FINAL --+-- TABLE ( INSERT statement ) correlation-clause
    +-- NEW ----+
A possible emulation is to use a dynamic compound statement, a positioned update, and a temporary table to save information on the updated rows.
--#SET TERMINATOR #
DECLARE GLOBAL TEMPORARY TABLE SESSION.MAINTABLE
(
  ID INT,
  COL VARCHAR (10),
  FLAG CHAR (1)
) WITH REPLACE ON COMMIT PRESERVE ROWS NOT LOGGED#

INSERT INTO SESSION.MAINTABLE (ID, COL, FLAG)
VALUES
  (1, 'TASK 1', 'N')
, (2, 'TASK 2', 'N')
, (3, 'TASK 3', 'N')
, (4, 'TASK 4', 'Y')
#

DECLARE GLOBAL TEMPORARY TABLE SESSION.UPDRES AS
(
  SELECT ID FROM SESSION.MAINTABLE
) DEFINITION ONLY WITH REPLACE ON COMMIT PRESERVE ROWS NOT LOGGED#

BEGIN
  FOR F1 AS C1 CURSOR FOR
    SELECT ID FROM SESSION.MAINTABLE WHERE FLAG = 'N' FOR UPDATE
  DO
    UPDATE SESSION.MAINTABLE SET FLAG = 'Y' WHERE CURRENT OF C1;
    INSERT INTO SESSION.UPDRES (ID) VALUES (F1.ID);
  END FOR;
END#

SELECT * FROM SESSION.MAINTABLE#

ID  COL     FLAG
1   TASK 1  Y
2   TASK 2  Y
3   TASK 3  Y
4   TASK 4  Y

SELECT * FROM SESSION.UPDRES#

ID
1
2
3
While you can't currently use SELECT FROM FINAL TABLE(UPDATE ...) on Db2 for IBM i...
You can, within the context of a transaction, do:
UPDATE mainTable SET value = 'Y' WHERE value2 = 'N' WITH RR
SELECT * FROM mainTable WHERE value2 = 'Y'
COMMIT
The use of RR (repeatable read) means that the entire table will be locked until you issue your commit. You may be able to use a lower isolation level if you have knowledge of, or control over, any other processes working with the table.
Or, if you're willing to do some extra work, the below only locks the rows being returned:
UPDATE mainTable SET value = '*' WHERE value2 = 'N' WITH CHG
SELECT * FROM mainTable WHERE value2 = '*'
UPDATE mainTable SET value = 'Y' WHERE value2 = '*' WITH CHG
COMMIT
The straightforward SQL way to do this is via a cursor and an UPDATE WHERE CURRENT OF CURSOR ....
Lastly, since you are using .NET, I suggest taking a look at the iDB2DataAdapter class in the IBM .NET Provider Technical Reference (part of the IBM ACS Windows Application package):
public void Example()
{
    //create table mylib.mytable (col1 char(20), col2 int)
    //insert into mylib.mytable values('original value', 1)
    iDB2Connection cn = new iDB2Connection("DataSource=mySystemi;");
    iDB2DataAdapter da = new iDB2DataAdapter();
    da.SelectCommand = new iDB2Command("select * from mylib.mytable", cn);
    da.UpdateCommand = new iDB2Command("update mylib.mytable set col1 = #col1 where col2 = #col2", cn);
    cn.Open();

    //Let the provider generate the correct parameter information
    da.UpdateCommand.DeriveParameters();

    //Associate each parameter with the column in the table it corresponds to
    da.UpdateCommand.Parameters["#col1"].SourceColumn = "col1";
    da.UpdateCommand.Parameters["#col2"].SourceColumn = "col2";

    //Fill the DataSet from the DataAdapter's SelectCommand
    DataSet ds = new DataSet();
    da.Fill(ds, "table");

    //Modify the information in col1
    DataRow dr = ds.Tables[0].Rows[0];
    dr["col1"] = "new value";

    //Write the information back to the table using the DataAdapter's UpdateCommand
    da.Update(ds, "table");
    cn.Close();
}
You may also find some good information in the Integrating DB2 Universal Database for iSeries with Microsoft ADO .NET Redbook.
How to combine two queries where one of them results in an array and the second is the element place in the array?
I have the following two queries:
Query #1:
(SELECT ARRAY (SELECT (journeys.id)
               FROM JOURNEYS
               JOIN RESPONSES ON scenarios[1] = responses.id) AS arry);
This one returns an array.
Query #2:
SELECT (journeys_index.j_index)
FROM journeys_index
WHERE environment = 'env1'
  AND for_channel = 'ch1'
  AND first_name = 'name1';
This second query returns the element index in the former array.
How do I combine the two to get only the element value?
I recreated a simpler example with a table containing an array column (the result of your first query):
create table my_array_test (id int, tst_array varchar[]);
insert into my_array_test values (1,'{cat, mouse, frog}');
insert into my_array_test values (2,'{horse, crocodile, rabbit}');
And another table containing the element position I want to extract for each row:
create table my_array_pos_test (id int, pos int);
insert into my_array_pos_test values (1,1);
insert into my_array_pos_test values (2,3);
e.g. from the row in my_array_test with id = 1 I want to extract the 1st item (pos = 1), and from the row in my_array_test with id = 2 I want to extract the 3rd item (pos = 3).
defaultdb=> select * from my_array_pos_test;
 id | pos
----+-----
  1 |   1
  2 |   3
(2 rows)
Now the resulting statement is:
select *, tst_array[my_array_pos_test.pos]
from my_array_test
join my_array_pos_test
  on my_array_test.id = my_array_pos_test.id
with the expected result:
 id |        tst_array         | id | pos | tst_array
----+--------------------------+----+-----+-----------
  1 | {cat,mouse,frog}         |  1 |   1 | cat
  2 | {horse,crocodile,rabbit} |  2 |   3 | rabbit
(2 rows)
Now, in your case I would probably do something similar to the below, assuming your first select statement returns one row only:
with array_sel as (
  SELECT ARRAY (SELECT (journeys.id)
                FROM JOURNEYS
                JOIN RESPONSES ON scenarios[1] = responses.id) AS arry
)
SELECT arry[journeys_index.j_index]
FROM journeys_index
cross join array_sel
WHERE environment = 'env1'
  AND for_channel = 'ch1'
  AND first_name = 'name1';
I can't fully validate the above SQL statement since we can't replicate your tables, but it should give you a hint on where to start.
How to update several rows with a PostgreSQL function
I have a table representing an object:
id | field1 | field2 | field3
---+--------+--------+-------
 1 | b      | f      | z
 2 | q      | q      | q
I want to pass several objects to a pg function which will update the corresponding rows. Right now I see only one way to do it: pass a jsonb containing an array of objects, for example:
[{"id":1, "field1":"foo", "field2":"bar", "field3":"baz"},
 {"id":2, "field1":"fooz", "field2":"barz", "field3":"bazz"}]
Is this the best way to perform the update? And what is the best way to do it with jsonb input? I don't really like converting the jsonb input to rows with select * from json_each('{"a":"foo", "b":"bar"}') and operating on that. I would prefer a way to execute a single UPDATE.
This can be achieved using a from clause in the update along with an on-the-fly table built from the input, assuming that the DB table (a placeholder name my_table is used here) and the custom input are matched on the basis of id:
update my_table as t1
set field1 = custom_input.field1::varchar,
    field2 = custom_input.field2::varchar,
    field3 = custom_input.field3::varchar
from (
  values
    (1, 'foo', 'bar', 'baz'),
    (2, 'fooz', 'barz', 'bazz')
) as custom_input(id, field1, field2, field3)
where t1.id = custom_input.id::int;
You can do it the following way (using json_populate_recordset):
update test
set field1 = data.field1,
    field2 = data.field2,
    field3 = data.field3
from (
  select *
  from json_populate_recordset(
    NULL::test,
    '[{"id":1, "field1":"f.1.2", "field2":"f.2.2", "field3":"f.3.2"},{"id":2, "field1":"f.1.4", "field2":"f.2.5", "field3":"f.3.6"}]'
  )
) data
where test.id = data.id;
PostgreSQL fiddle
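Since the question mentions jsonb input specifically, the jsonb_populate_recordset variant works the same way; a minimal sketch, assuming the same test table:
update test
set field1 = data.field1,
    field2 = data.field2,
    field3 = data.field3
from jsonb_populate_recordset(
       NULL::test,
       '[{"id":1, "field1":"foo", "field2":"bar", "field3":"baz"},
         {"id":2, "field1":"fooz", "field2":"barz", "field3":"bazz"}]'::jsonb
     ) as data  -- one record per array element, typed like table test
where test.id = data.id;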