How to update several rows with a PostgreSQL function

I have a table representing an object:
id|field1|field2|field3
--|------|------|------
1 | b | f | z
2 | q | q | q
I want to pass several objects to a Postgres function which will update the corresponding rows.
Right now I see only one way to do it: pass a jsonb with an array of objects. For example:
[{"id":1, "field1":"foo", "field2":"bar", "field3":"baz"},{"id":2, "field1":"fooz", "field2":"barz", "field3":"bazz"}]
Is this the best way to perform the update? And what is the best way to do it with jsonb input?
I don't really like converting the jsonb input to rows with select * from json_each('{"a":"foo", "b":"bar"}') and operating on those. I would prefer some way to execute a single UPDATE.

This can be achieved using a FROM clause in the UPDATE together with an on-the-fly table built from the input values, assuming that the DB table and the custom input are matched to each other on the basis of id (the target table is called test here, since table itself is a reserved word):
update test as t1
set field1 = custom_input.field1::varchar,
    field2 = custom_input.field2::varchar,
    field3 = custom_input.field3::varchar
from (
    values
        (1, 'foo', 'bar', 'baz'),
        (2, 'fooz', 'barz', 'bazz')
) as custom_input(id, field1, field2, field3)
where t1.id = custom_input.id::int;

You can do it the following way, using json_populate_recordset:
update test
set field1 = data.field1,
    field2 = data.field2,
    field3 = data.field3
from (select * from json_populate_recordset(
        NULL::test,
        '[{"id":1, "field1":"f.1.2", "field2":"f.2.2", "field3":"f.3.2"},{"id":2, "field1":"f.1.4", "field2":"f.2.5", "field3":"f.3.6"}]'
     )) data
where test.id = data.id;
PostgreSQL fiddle
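Putting the two answers together: if the goal is a function that takes the jsonb array and performs one UPDATE, a minimal sketch could look like the following (the function name update_test is illustrative, and the table is assumed to be named test as above):
create or replace function update_test(input jsonb)
returns void
language sql as $$
    update test
    set field1 = data.field1,
        field2 = data.field2,
        field3 = data.field3
    from jsonb_populate_recordset(null::test, input) as data
    where test.id = data.id;
$$;

-- usage: a single call, a single UPDATE inside
select update_test('[{"id":1, "field1":"foo", "field2":"bar", "field3":"baz"},
                     {"id":2, "field1":"fooz", "field2":"barz", "field3":"bazz"}]');
The jsonb is still expanded into rows internally, but from the caller's point of view it is one UPDATE statement.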

DB2 SELECT from UPDATE Options

I am currently trying to do a
SELECT DISTINCT * FROM FINAL TABLE
(UPDATE mainTable SET value = 'N' WHERE value2 = 'Y')
However, the version of DB2 I have does not appear to support this:
SQL Error [42601]: [SQL0199] Keyword UPDATE not expected. Valid tokens: INSERT.
Is there any alternative to this in DB2 that could return the desired result, where in one query we can update and return the result?
EDIT -
The SELECT statement is supposed to return the values that are about to be processed by a server application. When this happens, a column will be updated to indicate that processing of this row has begun. A later command will update the row again when it is completed.
ORIGINAL DATA
ROW ID | COLUMN TWO | PROCESSING FLAG
-------------------------------------------
1 | TASK 1 | N
2 | TASK 2 | N
3 | TASK 3 | N
4 | TASK 4 | N
After Optimistic Select/Update Query
Data Table returned as:
ROW ID | COLUMN TWO | PROCESSING FLAG
-------------------------------------------
1 | TASK 1 | Y
2 | TASK 2 | Y
3 | TASK 3 | Y
4 | TASK 4 | Y
This is being called by a .NET Application, so this would be converted into a List of the Table Object.
You can't specify UPDATE in the table-reference in Db2 for IBM i 7.3 (and even in 7.4 at the moment) as you can in Db2 for LUW.
Only INSERT is available:
data-change-table-reference:

>--+-- FINAL --+-- TABLE ( INSERT statement ) correlation-clause -->
   |           |
   +--- NEW ---+
A possible emulation is to use a dynamic compound statement, a positioned update, and a temporary table to save info on the updated rows.
--#SET TERMINATOR #
DECLARE GLOBAL TEMPORARY TABLE SESSION.MAINTABLE
(
ID INT, COL VARCHAR (10), FLAG CHAR (1)
) WITH REPLACE ON COMMIT PRESERVE ROWS NOT LOGGED#
INSERT INTO SESSION.MAINTABLE (ID, COL, FLAG)
VALUES
(1, 'TASK 1', 'N')
, (2, 'TASK 2', 'N')
, (3, 'TASK 3', 'N')
, (4, 'TASK 4', 'Y')
#
DECLARE GLOBAL TEMPORARY TABLE SESSION.UPDRES AS
(
SELECT ID FROM SESSION.MAINTABLE
) DEFINITION ONLY WITH REPLACE ON COMMIT PRESERVE ROWS NOT LOGGED#
BEGIN
  FOR F1 AS C1 CURSOR FOR
    SELECT ID FROM SESSION.MAINTABLE WHERE FLAG = 'N' FOR UPDATE
  DO
    UPDATE SESSION.MAINTABLE SET FLAG = 'Y' WHERE CURRENT OF C1;
    INSERT INTO SESSION.UPDRES (ID) VALUES (F1.ID);
  END FOR;
END#
SELECT * FROM SESSION.MAINTABLE#
ID | COL    | FLAG
---+--------+-----
 1 | TASK 1 | Y
 2 | TASK 2 | Y
 3 | TASK 3 | Y
 4 | TASK 4 | Y
SELECT * FROM SESSION.UPDRES#
ID
--
 1
 2
 3
While you can't use SELECT FROM FINAL TABLE(UPDATE ...) currently on Db2 for IBM i...
You can, within the context of a transaction, do:
UPDATE mainTable SET value = 'Y' WHERE value = 'N' WITH RR
SELECT * FROM mainTable WHERE value = 'Y'
COMMIT
The use of RR (repeatable read) means that the entire table will be locked until you issue your commit. You may be able to use a lower isolation level if you have knowledge of, or control over, any other processes working with the table.
Or, if you're willing to do some extra work, the below only locks the rows being returned:
UPDATE mainTable SET value = '*' WHERE value = 'N' WITH CHG
SELECT * FROM mainTable WHERE value = '*'
UPDATE mainTable SET value = 'Y' WHERE value = '*' WITH CHG
COMMIT
The straightforward SQL way to do this is via a cursor and an UPDATE ... WHERE CURRENT OF cursor, as shown above.
Lastly, since you are using .NET, I suggest taking a look at the iDB2DataAdapter class in the IBM .NET Provider Technical Reference (part of the IBM ACS Windows Application package):
public void Example()
{
    //create table mylib.mytable (col1 char(20), col2 int)
    //insert into mylib.mytable values('original value', 1)
    iDB2Connection cn = new iDB2Connection("DataSource=mySystemi;");
    iDB2DataAdapter da = new iDB2DataAdapter();
    da.SelectCommand = new iDB2Command("select * from mylib.mytable", cn);
    da.UpdateCommand = new iDB2Command("update mylib.mytable set col1 = #col1 where col2 = #col2", cn);
    cn.Open();
    //Let the provider generate the correct parameter information
    da.UpdateCommand.DeriveParameters();
    //Associate each parameter with the column in the table it corresponds to
    da.UpdateCommand.Parameters["#col1"].SourceColumn = "col1";
    da.UpdateCommand.Parameters["#col2"].SourceColumn = "col2";
    //Fill the DataSet from the DataAdapter's SelectCommand
    DataSet ds = new DataSet();
    da.Fill(ds, "table");
    //Modify the information in col1
    DataRow dr = ds.Tables[0].Rows[0];
    dr["col1"] = "new value";
    //Write the information back to the table using the DataAdapter's UpdateCommand
    da.Update(ds, "table");
    cn.Close();
}
You may also find some good information in the Integrating DB2 Universal Database for iSeries with Microsoft ADO .NET Redbook.

How to extract a value from json in Postgres based on a key pattern?

In Postgres v9.6.15, I need to get the SUM of all the values using text patterns of certain keys.
For example, having table "TABLE1" with 4 rows:
|group |col1
======================================
Row #1:|group1 |{ "json_key1.id1" : 1 }
Row #2:|group1 |{ "json_key1.id2" : 1 }
Row #3:|group1 |{ "json_key2.idX" : 1 }
Row #4:|group1 |{ "not_me" : 2 }
I'd like to get the int values using a pattern for the first part of the keys ("json_key1" and "json_key2") and SUM them all, using a CASE block like so:
SELECT table1.group as group,
COALESCE(sum(
CASE
WHEN table1.col1 = 'col1_val1' THEN (table1.json_col->>'json_key1.%')::bigint
WHEN table1.col1 = 'col1_val2' THEN (table1.json_col->>'json_key2.%')::bigint
ELSE 0::bigint
END), 0)::bigint AS my_result
FROM table1 as table1
GROUP BY table1.group;
I need "my_result" to look like:
|group |my_result
======================================
Row #1:|group1 |3
Is there a way to collect the values using a regex or something like that? Not sure if I am checking the right documentation (https://www.postgresql.org/docs/9.6/functions-json.html), but I am not finding anything that can help me achieve the above, or whether it is actually possible.
Use jsonb_each_text() in a lateral join to get (key, value) pairs from the json objects:
select
group_col as "group",
coalesce(sum(case when key like 'json_k%' then value::numeric end), 0) as my_result
from table1
cross join jsonb_each_text(col1)
group by group_col
Db<>fiddle.
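For reference, a minimal setup the query can be run against (a sketch assuming col1 is jsonb and the group column is named group_col, as the query above implies; for a plain json column, use json_each_text instead):
create table table1 (group_col text, col1 jsonb);

insert into table1 values
    ('group1', '{"json_key1.id1": 1}'),
    ('group1', '{"json_key1.id2": 1}'),
    ('group1', '{"json_key2.idX": 1}'),
    ('group1', '{"not_me": 2}');

-- the query above then returns:  group1 | 3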

One-statement Insert+delete in PostgreSQL

Suppose I have a PostgreSQL table t that looks like
id | name | y
----+------+---
0 | 'a' | 0
1 | 'b' | 0
2 | 'c' | 0
3 | 'd' | 1
4 | 'e' | 2
5 | 'f' | 2
With id being the primary key and with a UNIQUE constraint on (name, y).
Suppose I want to update this table in such a way that the part of the data set with y = 0 becomes (without knowing what is already there)
id | name | y
----+------+---
0 | 'a' | 0
1 | 'x' | 0
2 | 'y' | 0
I could use
DELETE FROM t WHERE y = 0 AND name NOT IN ('a', 'x', 'y');
INSERT INTO t (name, y) VALUES ('a', 0), ('x', 0), ('y', 0)
ON CONFLICT (name, y) DO NOTHING;
I feel like there must be a one-statement way to do this (like what upsert does for the task "update the existing entries and insert the missing ones", but then for "insert the missing entries and delete the entries that should not be there"). Is there? I heard rumours that Oracle has something called MERGE... I'm not sure what it does exactly.
This can be done with a single statement. But I doubt whether that qualifies as "simpler".
Additionally, your expected output doesn't make sense.
Your insert statement does not provide a value for the primary key column (id), so apparently the id column is a generated (identity/serial) column.
But in that case, new rows can't have the same IDs as the ones before, because when the new rows were inserted, new IDs were generated.
That caveat about your expected output aside, the following does what you want:
with data (name, y) as (
    values ('a', 0), ('x', 0), ('y', 0)
), changed as (
    insert into t (name, y)
    select *
    from data
    on conflict (name, y) do nothing
)
delete from t
where y = 0
  and (name, y) not in (select name, y from data);
That is one statement, but certainly not "simpler". The only advantage I can see is that you do not have to specify the list of values twice.
Online example: https://rextester.com/KKB30299
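As an aside on the MERGE mentioned in the question: it is the SQL-standard statement for applying a source table to a target with insert/update/delete actions. PostgreSQL did not have it at the time of this question (it arrived in version 15, and version 17 added the WHEN NOT MATCHED BY SOURCE clause that the delete side needs), so the following sketch assumes PostgreSQL 17 or later:
merge into t
using (values ('a', 0), ('x', 0), ('y', 0)) as data(name, y)
    on t.name = data.name and t.y = data.y
when not matched then
    insert (name, y) values (data.name, data.y)
when not matched by source and t.y = 0 then
    delete;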
Unless there's a tremendous number of rows to be updated, do it as three update statements.
update t set name = 'a' where id = 0;
update t set name = 'x' where id = 1;
update t set name = 'y' where id = 2;
This is simple. It's easily done in a loop with a SQL builder. There are no race conditions as there are with deleting and inserting. And it preserves the ids and other columns of those rows.
To demonstrate, some pseudo-Ruby code:
new_names = ['a', 'x', 'y']
# In a transaction
db.transaction {
# Query the matching IDs in the same order as their new names
ids_to_update = db.select("
select id from t where y = 0 order by id
")
# Iterate through the IDs and new names together
ids_to_update.zip(new_names).each { |id,name|
# Update the row with its new name
db.execute("
update t set name = ? where id = ?
", name, id)
}
}
Fooling around some, here's how I did it in "one" statement, or at least one thing sent to the server, while preserving the IDs and no race conditions.
do $$
declare
  change  text[];
  changes text[][];
begin
  select array_agg(array[id::text, name])
  into changes
  from unnest(
    (select array_agg(id order by id) from t where y = 0),
    array['a', 'x', 'y']
  ) with ordinality as a(id, name);

  foreach change slice 1 in array changes
  loop
    update t set name = change[2] where id = change[1]::int;
  end loop;
end$$;
The goal is to produce an array of arrays matching the id to its new name. That can be iterated over to do the updates.
unnest(
  (select array_agg(id order by id) from t where y = 0),
  array['a', 'x', 'y']
) with ordinality as a(id, name)
That bit produces rows with the IDs and their new names side by side.
select array_agg(array[id::text,name])
into changes
from unnest(...) with ordinality as a(id, name);
Then those rows of IDs and names are turned into an array of arrays like {{0,a},{1,x},{2,y}}. (There's probably a more direct way to do that.)
foreach change slice 1 in array changes
loop
update t set name = change[2] where id = change[1]::int;
end loop;
Finally we loop over the array and use it to perform each update.
You can turn this into a proper function and pass in the y value to match and the array of names to change them to. You should verify that the lengths of the ids and names match.
This might be faster, depending on how many rows you're updating, but it isn't simpler, and it took some time to puzzle out.
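For what it's worth, a sketch of such a function (the name rename_rows is made up here), under the same assumptions as the DO block above:
create or replace function rename_rows(match_y int, new_names text[])
returns void
language plpgsql as $$
declare
  changes text[][];
  change  text[];
begin
  -- guard: the number of rows to rename must equal the number of new names
  if (select count(*) from t where y = match_y)
     <> coalesce(array_length(new_names, 1), 0) then
    raise exception 'row count and name count do not match';
  end if;

  -- pair each id (in id order) with its new name
  select array_agg(array[id::text, name])
  into changes
  from unnest(
    (select array_agg(id order by id) from t where y = match_y),
    new_names
  ) with ordinality as a(id, name);

  -- nothing matched, nothing to do
  if changes is null then
    return;
  end if;

  foreach change slice 1 in array changes
  loop
    update t set name = change[2] where id = change[1]::int;
  end loop;
end;
$$;

-- usage: rename the y = 0 rows to a, x, y
select rename_rows(0, array['a', 'x', 'y']);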

Looking for a better way to dynamically pick a column source in SQL

To start with an example, let's say I need an SQL view with a structure like:
ID | Text01 | Text02 | Text03 | Text04 | Text05
Depending on the type of item, what is stored in each column can change; for example, the item with ID 1 may use Text01 to store Length, while ID 2 may use Text02 to store it.
Now assume there is another table that explains the mapping:
ID | Text01 | Text02
--------------------
1 | Length |
2 | | Length
I want a way to populate the query directly, based on the mapping.
I know I could use a CASE statement, e.g.
case when mapping.text01 = 'length' then sourcetable.length ...
However, my actual scenario consists of 40 dynamic columns and up to 150 fields which could be mapped to a column, which makes this option less viable.
Is there any way to convert the text "sourcetable.length" into a column reference, or do you have any other ideas that could simplify this process?
You have a lousy data structure because you are storing data across the tables in columns rather than in rows.
You can do what you want, basically by unpivoting the data and then joining:
with t1 as (
select t1.id, v.colname, v.colvalue
from table1 t1 cross apply
(values ('Text01', t1.Text01),
('Text02', t1.Text02),
('Text03', t1.Text03),
. . .
) v(colname, colvalue)
),
t2 as (
select m.id, v.colname, v.colfield
from mapping m cross apply
(values ('Text01', m.Text01),
('Text02', m.Text02),
('Text03', m.Text03),
. . .
) v(colname, colfield)
)
select t1.id, t2.colfield, t1.colvalue
from t1 join
t2
on t1.id = t2.id and t1.colname = t2.colname;
If you want the data in a single row, then you would have to re-pivot the results.
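For instance, replacing the final select in the query above with a conditional aggregation re-pivots one mapped field per output column (Length is taken from the mapping example; the other fields would follow the same pattern):
select t1.id,
       max(case when t2.colfield = 'Length' then t1.colvalue end) as Length
from t1 join
     t2
     on t1.id = t2.id and t1.colname = t2.colname
group by t1.id;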

SQL: update specific rows based on a condition

So I have two tables, as follows:
- T_Sample
ws_id|date|depth|number_l
and
- T_Sample_value
ws_id|parameter|value
I have some rows in the T_Sample table which have negative depth values, and in the T_Sample_value table these rows have some data. What I am trying to do, for these rows, is copy their data (present in T_Sample_value) over to the row of T_Sample which has a depth value of 0.
I tried to do an UPDATE ... SET query with subqueries, but I get the error that the subquery returns multiple rows and so cannot update the fields. What I tried looks pretty much like this:
UPDATE T_sample_value
SET T_sample_value.ws_id = (select blah blah where depth is <0)
WHERE T_sample_value.ws_id = (select blah blah where depth is = 0)
You would like to do an update-join, like:
UPDATE a
SET depth = b.value
FROM T_Sample a
JOIN T_Sample_value b
ON a.ws_id = b.ws_id
WHERE a.depth = 0;
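The above uses SQL Server's update-join syntax. If this happens to be PostgreSQL, the joined table goes in a FROM clause instead and the target table is not repeated; a sketch of the same update-join:
UPDATE T_Sample AS a
SET depth = b.value
FROM T_Sample_value AS b
WHERE a.ws_id = b.ws_id
  AND a.depth = 0;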