I'm trying to handle an array-of-counters column in Postgres.
For example, let's say I have this table:
name    counters
Joe     [1,3,1,0]
Now I'm adding two rows: ("Ben", [1,3,1,0]) and ("Joe", [2,0,2,1]).
I expect the query to sum the two counters vectors element-wise on conflict ([1,3,1,0] + [2,0,2,1] = [3,3,3,1]).
The expected result:
name    counters
Joe     [3,3,3,1]
Ben     [1,3,1,0]
I tried this query
insert into test (name, counters)
values ("Joe",[2,0,2,1])
on conflict (name)
do update set
counters = array_agg(unnest(test.counters) + unnest([2,0,2,1]))
but it didn't seem to work. What am I missing?
There are two problems with the expression:
array_agg(unnest(test.counters) + unnest([2,0,2,1]))
there is no + operator for arrays,
you cannot use set-valued expressions as an argument in an aggregate function.
You need to unnest both arrays in a single unnest() call placed in the from clause:
insert into test (name, counters)
values ('Joe', array[2,0,2,1])
on conflict (name) do
update set
counters = (
select array_agg(e1 + e2)
from unnest(test.counters, excluded.counters) as u(e1, e2)
)
Also pay attention to the corrected literal syntax in values (single quotes and an array[...] constructor) and the use of the special record excluded (you can find the relevant information in the documentation).
Test it in db<>fiddle.
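A self-contained script to try, if it helps (a minimal sketch, assuming the table layout from the question):

create table test (name text primary key, counters int[]);
insert into test values ('Joe', array[1,3,1,0]);

insert into test (name, counters)
values ('Ben', array[1,3,1,0]), ('Joe', array[2,0,2,1])
on conflict (name) do
update set
    counters = (
        select array_agg(e1 + e2)
        from unnest(test.counters, excluded.counters) as u(e1, e2)
    );
-- test now contains ('Joe', {3,3,3,1}) and ('Ben', {1,3,1,0})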
Based on your reply to my comments that there will always be four elements in the array and that the update is being done by a program of some type, I would suggest something like this:
insert into test (name, counters)
values (:NAME, :COUNTERS)
on conflict (name) do
update set
counters[1] = counters[1] + :COUNTERS[1],
counters[2] = counters[2] + :COUNTERS[2],
counters[3] = counters[3] + :COUNTERS[3],
counters[4] = counters[4] + :COUNTERS[4]
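If host-variable subscripts like :COUNTERS[1] are awkward in your driver, the same element-wise update can be written against the special excluded record instead (a sketch with literal values so it can be tested by hand; subscripted assignment in the set list is assumed to behave as in a plain update):

insert into test (name, counters)
values ('Joe', array[2,0,2,1])
on conflict (name) do
update set
    counters[1] = test.counters[1] + excluded.counters[1],
    counters[2] = test.counters[2] + excluded.counters[2],
    counters[3] = test.counters[3] + excluded.counters[3],
    counters[4] = test.counters[4] + excluded.counters[4];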
I want to update saboloo in the sagani table with the following query:
update sagani
set saboloo=sum(sagani.qula + sagani.shualeduri + sagani.finaluri)
where sagnis_id='9';
Aggregation is not allowed in an update, because update changes values in the rows in the table; once the data is aggregated, the connection to the original rows is lost.
I can imagine that you mean one of two things. The first would be a simple sum within the row:
update sagani
set saboloo = (sagani.qula + sagani.shualeduri + sagani.finaluri)
where sagnis_id = 9; -- looks like a number so I assume it is a number
Alternatively, you may want to update multiple rows with the same value added up from all those rows:
update sagani s
set saboloo = (select sum(s2.qula + s2.shualeduri + s2.finaluri)
from sagani s2
where s2.sagnis_id = s.sagnis_id
)
where s.sagnis_id = 9;
Your question doesn't have enough information to infer your intention, although the use of sagnis_id suggests that there is only one row and you don't want aggregation at all.
SUM is not applicable here, as your requirement is straightforward addition within one row. You can try the following script for your purpose:
UPDATE sagani
SET saboloo=(sagani.qula + sagani.shualeduri + sagani.finaluri)
WHERE sagnis_id='9';
I've got this query in a @SQLInsert annotation in Spring against an Oracle 11g database and, although it inserts properly, it is not updating the values and raises no error.
Any ideas? If not, any alternative ways of obtaining the same behaviour?
merge INTO ngram sn
USING(SELECT ? AS frequency,
? AS occurrences,
? AS ngram
FROM dual) src
ON (sn.ngram = src.ngram)
WHEN matched THEN
UPDATE SET sn.occurrences = sn.occurrences + src.occurrences,
sn.frequency = sn.frequency + 1
WHEN NOT matched THEN
INSERT (ngram,
frequency,
occurrences)
VALUES (src.ngram,
src.frequency,
src.occurrences)
Update1: I'm adding the Entity def in Java for clarification.
#Entity(name = "NGRAM")
public class Ngram
{
#Id
#JsonProperty("ngram")
private String ngram;
#JsonProperty("frequency")
#Column(name = "frequency")
private int frequency;
#JsonProperty("occurrences")
#Column(name = "occurrences")
private int occurrences;
Update2: Adding a run of the SQL query on an existing Ngram
sql> merge INTO NGRAM sn
USING(SELECT 1 AS frequency,
200 AS occurrences,
'year' AS ngram
FROM dual) src
ON (sn.ngram = src.ngram)
WHEN matched THEN
UPDATE SET sn.occurrences = sn.occurrences + src.occurrences,
sn.frequency = sn.frequency + 1
WHEN NOT matched THEN
INSERT (ngram,
frequency,
occurrences)
VALUES (src.ngram,
src.frequency,
src.occurrences)
[2019-01-29 12:09:10] 1 row affected in 19 ms
This actually modifies the row when run by hand, but not when the code runs over the entities, so it seems the SQL itself is right...
Update3: So, this is the piece of code that should be inserting or updating but it's only doing the insert:
List<Ngram> lNgrams = new ArrayList<>(lCollocationsMap.size());
lCollocationsMap.forEach((pKey, pValue) -> lNgrams.add( new Ngram(pKey, 1, pValue)));
mNgramRepo.saveAll(lNgrams);
So I'll be investigating the saveAll behaviour.
Update4: I tried using save one by one (which is much slower) instead of saveAll, but got the same behaviour. Changing ON (sn.ngram = src.ngram) to ON (sn.ngram LIKE src.ngram) makes some of the frequencies become 2 (so they seem to be updated), but not all of them: 'year', for example, appears more than 10 times, but its frequency remains 1 and its occurrences is just updated with the last value found.
So I'm now completely lost as to why this is failing, especially in this way.
This is my first db trigger. It compiles with warnings and therefore doesn't work. I've re-read the Oracle docs and searched online but can't work out where I'm going wrong. Any help with my trigger below would be gratefully received.
CREATE OR REPLACE TRIGGER oa_mhd_update AFTER
INSERT ON men_mhd FOR EACH row WHEN (new.mhd_tktc LIKE 'OA_A_%'
OR new.mhd_tktc LIKE 'OA_T_%'
OR new.mhd_tktc LIKE 'OA_M_%')
DECLARE seq_var NVARCHAR2 (20);
BEGIN
SELECT (MAX (seq) + 1) into seq_var FROM oa_mhd_data;
INSERT
INTO oa_mhd_data
(
mhd_code,
seq,
mhd_mst1,
mhd_mst2,
mhd_cred,
mhd_cret,
mhd_tsks,
mhd_msgs,
mhd_tktc,
mhd_tref,
mhd_actn,
mhd_eref,
mhd_subj,
mhd_udf1,
mhd_udf2,
mhd_udf3,
mhd_udf4,
mhd_udf5,
mhd_udf6,
mhd_udf7,
mhd_udf8,
mhd_udf9,
mhd_udfa,
mhd_udfb,
mhd_udfc,
mhd_udfd,
mhd_udfe,
mhd_udff,
mhd_udfg,
mhd_udfh,
mhd_udfi,
mhd_udfj,
mhd_udfk,
mhd_updd,
mhd_begd,
mhd_begt,
mhd_endd,
mhd_endt,
mhd_mrcc,
mhd_mhdc,
mhd_mscc,
mhd_pprc,
mhd_ppss,
mhd_inst
)
VALUES
(
:new.mhd_code
seq_var,
:new.mhd_mst1,
:new.mhd_mst2,
:new.mhd_cred,
:new.mhd_cret,
:new.mhd_tsks,
:new.mhd_msgs,
:new.mhd_tktc,
:new.mhd_tref,
:new.mhd_actn,
:new.mhd_eref,
:new.mhd_subj,
:new.mhd_udf1,
:new.mhd_udf2,
:new.mhd_udf3,
:new.mhd_udf4,
:new.mhd_udf5,
:new.mhd_udf6,
:new.mhd_udf7,
:new.mhd_udf8,
:new.mhd_udf9,
:new.mhd_udfa,
:new.mhd_udfb,
:new.mhd_udfc,
:new.mhd_udfd,
:new.mhd_udfe,
:new.mhd_udff,
:new.mhd_udfg,
:new.mhd_udfh,
:new.mhd_udfi,
:new.mhd_udfj,
:new.mhd_udfk,
:new.mhd_updd,
:new.mhd_begd,
:new.mhd_begt,
:new.mhd_endd,
:new.mhd_endt,
:new.mhd_mrcc,
:new.mhd_mhdc,
:new.mhd_mscc,
:new.mhd_pprc,
:new.mhd_ppss,
:new.mhd_inst
)
END;
/
You're missing a comma between the first two elements of the values clause, and a semi-colon at the end of the insert statement:
VALUES
(
:new.mhd_code
seq_var,
:new.mhd_mst1,
...
:new.mhd_ppss,
:new.mhd_inst
)
... should be:
VALUES
(
:new.mhd_code,
seq_var,
:new.mhd_mst1,
...
:new.mhd_ppss,
:new.mhd_inst
);
Odd that you can't see the error though.
Incidentally, the max(seq) + 1 from ... pattern isn't reliable in a multi-user environment. It would be more normal (and safer) to use a proper sequence to generate that value.
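For example (a sketch; the sequence name oa_mhd_data_seq is hypothetical, and the sequence would be created once, outside the trigger):

CREATE SEQUENCE oa_mhd_data_seq;

-- then, in the trigger body, instead of SELECT (MAX(seq) + 1):
SELECT oa_mhd_data_seq.NEXTVAL INTO seq_var FROM dual;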
Hi, there are two syntax errors.
First, please add a comma between the two values you are inserting:
VALUES
(
:new.mhd_code,
seq_var,
:new.mhd_mst1,...
And second, please add a semicolon at the end of the insert statement:
...
:new.mhd_pprc,
:new.mhd_ppss,
:new.mhd_inst
);
Hope this will solve your problem.
I have a SQLite database with table myTable and columns id, posX, posY. The number of rows changes constantly (might increase or decrease). If I know the value of id for each row, and the number of rows, can I perform a single SQL query to update all of the posX and posY fields with different values according to the id?
For example:
---------------------
myTable:
id    posX    posY
 1     35     565
 3     89     224
 6     11     456
14     87     475
---------------------
SQL query pseudocode:
UPDATE myTable SET posX[id] = #arrayX[id], posY[id] = #arrayY[id] "
#arrayX and #arrayY are arrays which store new values for the posX and posY fields.
If, for example, arrayX and arrayY contain the following values:
arrayX = { 20, 30, 40, 50 }
arrayY = { 100, 200, 300, 400 }
... then the database after the query should look like this:
---------------------
myTable:
id    posX    posY
 1     20     100
 3     30     200
 6     40     300
14     50     400
---------------------
Is this possible? I'm updating one row per query right now, but it's going to take hundreds of queries as the row count increases. I'm doing all this in AIR by the way.
There are a couple of ways to accomplish this decently efficiently.
First -
If possible, you can do some sort of bulk insert to a temporary table. This depends somewhat on your RDBMS/host language, but at worst it can be accomplished with simple dynamic SQL (using a VALUES() clause) followed by a standard update-from-another-table. Most systems provide utilities for bulk load, though.
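In SQLite that could look something like this (a sketch, assuming the myTable layout from the question; tmp_pos is a hypothetical name):

CREATE TEMP TABLE tmp_pos (id INTEGER PRIMARY KEY, posX INTEGER, posY INTEGER);
INSERT INTO tmp_pos VALUES (1, 20, 100), (3, 30, 200), (6, 40, 300), (14, 50, 400);
UPDATE myTable
SET posX = (SELECT posX FROM tmp_pos WHERE tmp_pos.id = myTable.id),
    posY = (SELECT posY FROM tmp_pos WHERE tmp_pos.id = myTable.id)
WHERE id IN (SELECT id FROM tmp_pos);
DROP TABLE tmp_pos;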
Second -
And this is somewhat RDBMS dependent as well, you could construct a dynamic update statement. In this case, where the VALUES(...) clause inside the CTE has been created on-the-fly:
WITH Tmp(id, px, py) AS (VALUES(id1, newPosX1, newPosY1),
(id2, newPosX2, newPosY2),
......................... ,
(idN, newPosXN, newPosYN))
UPDATE TableToUpdate SET posX = (SELECT px
FROM Tmp
WHERE TableToUpdate.id = Tmp.id),
posY = (SELECT py
FROM Tmp
WHERE TableToUpdate.id = Tmp.id)
WHERE id IN (SELECT id
FROM Tmp)
(According to the documentation, this should be valid SQLite syntax, but I can't get it to work in a fiddle)
One way: SET x=CASE..END (any SQL)
Yes, you can do this, but I doubt that it would improve performance, unless your query has really large latency.
If the search column is indexed (e.g. if id is the primary key), then locating the desired tuple is very, very fast, and after the first query the table will be held in memory.
So, multiple UPDATEs in this case aren't all that bad.
If, on the other hand, the condition requires a full table scan, and even worse, the table's memory impact is significant, then having a single complex query will be better, even if evaluating the UPDATE is more expensive than a simple UPDATE (which gets internally optimized).
In this latter case, you could do:
UPDATE table SET posX=CASE
WHEN id=id[1] THEN posX[1]
WHEN id=id[2] THEN posX[2]
...
ELSE posX END [, posY = CASE ... END]
WHERE id IN (id[1], id[2], id[3]...);
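With the sample data from the question, that template becomes (a concrete sketch):

UPDATE myTable SET
    posX = CASE WHEN id=1 THEN 20 WHEN id=3 THEN 30 WHEN id=6 THEN 40 WHEN id=14 THEN 50 ELSE posX END,
    posY = CASE WHEN id=1 THEN 100 WHEN id=3 THEN 200 WHEN id=6 THEN 300 WHEN id=14 THEN 400 ELSE posY END
WHERE id IN (1, 3, 6, 14);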
The total cost is given more or less by: NUM_QUERIES * ( COST_QUERY_SETUP + COST_QUERY_PERFORMANCE ). This way, you knock down on NUM_QUERIES (from N separate id's to 1), but COST_QUERY_PERFORMANCE goes up (about 3x in MySQL 5.28; haven't yet tested in MySQL 8).
Otherwise, I'd try with indexing on id, or modifying the architecture.
This is an example with PHP, where I suppose we have a condition that already requires a full table scan, and which I can use as a key:
// Multiple update rules
$updates = [
"fldA='01' AND fldB='X'" => [ 'fldC' => 12, 'fldD' => 15 ],
"fldA='02' AND fldB='X'" => [ 'fldC' => 60, 'fldD' => 15 ],
...
];
The fields updated in the right hand expressions can be one or many, must always be the same (always fldC and fldD in this case). This restriction can be removed, but it would require a modified algorithm.
I can then build the single query through a loop:
$where = [ ];
$set = [ ];
foreach ($updates as $when => $then) {
$where[] = "({$when})";
foreach ($then as $fld => $value) {
if (!array_key_exists($fld, $set)) {
$set[$fld] = [ ];
}
$set[$fld][] = $value;
}
}
$set1 = [ ];
foreach ($set as $fld => $values) {
$set2 = "{$fld} = CASE";
foreach ($values as $i => $value) {
$set2 .= " WHEN {$where[$i]} THEN {$value}";
}
$set2 .= ' END';
$set1[] = $set2;
}
// Single query
$sql = 'UPDATE table SET '
. implode(', ', $set1)
. ' WHERE '
. implode(' OR ', $where);
Another way: ON DUPLICATE KEY UPDATE (MySQL)
In MySQL, I think you could do this more easily with a multiple-row INSERT ... ON DUPLICATE KEY UPDATE, assuming that id is a primary key. Keep in mind that nonexistent ids ("id = 777" when there is no row 777) will get inserted into the table, and may cause an error if, for example, other required columns (declared NOT NULL) aren't specified in the query:
INSERT INTO tbl (id, posx, posy, bazinga)
VALUES (id1, posX1, posY1, 'DELETE'),
...
ON DUPLICATE KEY UPDATE posx=VALUES(posx), posy=VALUES(posy);
DELETE FROM tbl WHERE bazinga='DELETE';
The 'bazinga' trick above allows deleting any rows that might have been unwittingly inserted because their id was not present (in other scenarios you might want the inserted rows to stay, though).
For example, a periodic update from a set of gathered sensors, but some sensors might not have been transmitted:
INSERT INTO monitor (id, value)
VALUES (sensor1, value1), (sensor2, 'N/A'), ...
ON DUPLICATE KEY UPDATE value=VALUES(value), reading=NOW();
(This is a contrived case, it would probably be more reasonable to LOCK the table, UPDATE all sensors to N/A and NOW(), then proceed with INSERTing only those values we do have).
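That locking variant might look like this in MySQL (a sketch; monitor(id, value, reading) is the hypothetical table from the example above, with id as the primary key):

LOCK TABLES monitor WRITE;
UPDATE monitor SET value = 'N/A', reading = NOW();
INSERT INTO monitor (id, value)
VALUES (1, '20.5'), (2, '19.8')
ON DUPLICATE KEY UPDATE value = VALUES(value), reading = NOW();
UNLOCK TABLES;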
A third way: CTE (PostgreSQL, not sure about SQLite3)
This is conceptually almost the same as the INSERT MySQL trick. As written, it works in PostgreSQL 9.6:
WITH updated(id, posX, posY) AS (VALUES
(id1, posX1, posY1),
(id2, posX2, posY2),
...
)
UPDATE myTable
SET
posX = updated.posX,
posY = updated.posY
FROM updated
WHERE (myTable.id = updated.id);
Something like this might work for you:
"UPDATE myTable SET ... ;
UPDATE myTable SET ... ;
UPDATE myTable SET ... ;
UPDATE myTable SET ... ;"
If any of the posX or posY values are the same, then they could be combined into one query
UPDATE myTable SET posX='39' WHERE id IN('2','3','40');
In recent versions of SQLite (beginning with 3.24.0, released in 2018) you can use the UPSERT clause. Assuming that only existing rows need updating and that id is a unique column, you can use this approach, which is similar to @LSerni's ON DUPLICATE suggestion:
INSERT INTO myTable (id, posX, posY) VALUES
( 1, 35, 565),
( 3, 89, 224),
( 6, 11, 456),
(14, 87, 475)
ON CONFLICT (id) DO UPDATE SET
posX = excluded.posX, posY = excluded.posY
I could not make @Clockwork-Muse's version work, actually, but I could make this variation work:
WITH Tmp AS (SELECT * FROM (VALUES (id1, newPosX1, newPosY1),
(id2, newPosX2, newPosY2),
......................... ,
(idN, newPosXN, newPosYN)) d(id, px, py))
UPDATE t
SET posX = (SELECT px FROM Tmp WHERE t.id = Tmp.id),
posY = (SELECT py FROM Tmp WHERE t.id = Tmp.id)
FROM TableToUpdate t
I hope this works for you too!
Use a comma ","
e.g.:
UPDATE my_table SET rowOneValue = rowOneValue + 1, rowTwoValue = rowTwoValue + ( (rowTwoValue / (rowTwoValue) ) + ?) * (v + 1) WHERE value = ?
To update a table with different values for a column1, given values on column2, one can do as follows for SQLite:
"UPDATE table SET column1=CASE WHEN column2<>'something' THEN 'val1' ELSE 'val2' END"
Try with "update tablet set (row='value' where id=0001'), (row='value2' where id=0002'), ...
In SQLite, given this database schema
CREATE TABLE observations (
src TEXT,
dest TEXT,
verb TEXT,
occurrences INTEGER
);
CREATE UNIQUE INDEX observations_index
ON observations (src, dest, verb);
whenever a new observation tuple (:src, :dest, :verb) comes in, I want to either increment the "occurrences" column for the existing row for that tuple, or add a new row with occurrences=1 if there isn't already one. In concrete pseudocode:
if (SELECT COUNT(*) FROM observations
WHERE src == :src AND dest == :dest AND verb == :verb) == 1:
UPDATE observations SET occurrences = occurrences + 1
WHERE src == :src AND dest == :dest AND verb == :verb
else:
INSERT INTO observations VALUES (:src, :dest, :verb, 1)
I'm wondering if it's possible to do this entire operation in one SQLite statement. That would simplify the application logic (which is required to be fully asynchronous wrt database operations) and also avoid a double index lookup with exactly the same key. INSERT OR REPLACE doesn't appear to be what I want, and alas there is no UPDATE OR INSERT.
I got this answer from Igor Tandetnik on sqlite-users:
INSERT OR REPLACE INTO observations
VALUES (:src, :dest, :verb,
COALESCE(
(SELECT occurrences FROM observations
WHERE src=:src AND dest=:dest AND verb=:verb),
0) + 1);
It's slightly but consistently faster than dan04's approach.
Don't know of a way to do it in one statement, but you could try
BEGIN;
INSERT OR IGNORE INTO observations VALUES (:src, :dest, :verb, 0);
UPDATE observations SET occurrences = occurrences + 1 WHERE
src = :src AND dest = :dest AND verb = :verb;
COMMIT;
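On newer SQLite versions (3.24.0 and later), the UPSERT clause mentioned in the previous thread collapses this into one statement (a sketch; the unique index on (src, dest, verb) serves as the conflict target):

INSERT INTO observations (src, dest, verb, occurrences)
VALUES (:src, :dest, :verb, 1)
ON CONFLICT (src, dest, verb) DO UPDATE
SET occurrences = occurrences + 1;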