Given a table like this, where x and y each have a unique constraint.
id, x, y
 1, 1, 1
 2, 1, 2
 3, 2, 3
 4, 3, 5
...
I want to increase the values of x and y in a set of rows by a fixed amount with the UPDATE statement. Suppose I'm increasing them both by 1: it seems UPDATE follows id order and gives me an error after updating the row (1, 2), since its new values collide with the next row, (2, 3), which hasn't been updated to (3, 4) yet.
Googling around, I can't find a way to force UPDATE to follow an order. Doing it in reverse would be enough for my application, and I'm sure the set is consistent once the whole update completes.
Any solutions? Some way to force an order into the update, or any way to make it postpone the constraint check until it's finished?
This is meant for a Django application, and it has to be compatible with all the databases Django supports. I know some databases evaluate the whole UPDATE atomically so this problem won't happen there, and some have features to avoid it, but I need a strictly standard SQL solution.
For PostgreSQL you can define the primary key constraint as DEFERRABLE, so that it is only evaluated at commit time.
In PostgreSQL this looks like the following:
postgres=> create table foo (
postgres(> id integer not null,
postgres(> x integer,
postgres(> y integer
postgres(> );
CREATE TABLE
postgres=> alter table foo add constraint pk_foo primary key (id) deferrable initially deferred;
ALTER TABLE
postgres=> insert into foo (id, x,y) values (1,1,1), (2,1,1), (3,1,1);
INSERT 0 3
postgres=> commit;
COMMIT
postgres=> update foo set id = id + 1;
UPDATE 3
postgres=> commit;
COMMIT
postgres=> select * from foo;
id | x | y
----+---+---
2 | 1 | 1
3 | 1 | 1
4 | 1 | 1
(3 rows)
postgres=>
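The same technique applies to the unique constraints on x and y from the original question. A minimal sketch, with constraint names assumed:
alter table foo add constraint uq_foo_x unique (x) deferrable initially deferred;
alter table foo add constraint uq_foo_y unique (y) deferrable initially deferred;
begin;
update foo set x = x + 1, y = y + 1;
commit;  -- uniqueness is only checked here, after all rows are updated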
For Oracle this is not necessary as it will evaluate the UPDATE statement as a single atomic operation, so that works out of the box there.
For the sake of reference, MS SQL Server will not present a problem with this either. UPDATE is a single atomic operation.
ALTER TABLE [table_name]
ADD CONSTRAINT unique_constraint UNIQUE (x);

ALTER TABLE [table_name]
ADD CONSTRAINT unique_constraint2 UNIQUE (y);

UPDATE [table_name]
SET x = x + 1,
    y = y + 1;
This should pose no problem at all.
Related
How can I create a unique constraint in an Oracle DB over two columns, such that a value must not occur in either the one column or the other?
Assume this table
|id | A | B |
|---|---|---|
| 1 | 1 | 2 |
| 2 | 3 | 4 |
I want a new row to be rejected if its value in column "A" duplicates a value already present in column "A" or column "B".
In the example above: I am allowed to add 5 to column "A" but not 1, 2, 3, or 4.
My idea was to do something like:
CREATE UNIQUE INDEX crossTest ON test (
SELECT t.A AS x FROM test t
UNION ALL
SELECT t.B AS x FROM test t
)
but it does not work because Oracle does not accept this syntax.
The two classic approaches don't work:
two separate unique indexes, CREATE UNIQUE INDEX uidxA ON test (A) and CREATE UNIQUE INDEX uidxB ON test (B): this fails because I could then still add 2 and 4 to column "A"
one unique index over both columns, CREATE UNIQUE INDEX uidxAB ON test (A, B): this fails because it only checks existing pairs, not individual values
(Bonus question: "A" and "B" of the same row should be allowed to be equal)
SQL scripts for the example
CREATE TABLE test (id NUMBER (10) NOT NULL, a VARCHAR2(12), b VARCHAR2(12));
INSERT INTO test (id,a,b) VALUES(1, '1', '2');
INSERT INTO test (id,a,b) VALUES(2, '3', '4');
INSERT INTO test (id,a,b) VALUES(3, '4', 'x'); -- should fail
INSERT INTO test (id,a,b) VALUES(3, '5', 'x'); -- should work
@Tejash's answer gave me an idea for avoiding locking or serialization: create an auxiliary table duet_index that holds the extended data set with the values from both columns. A simple trigger will then do the trick, including your bonus question.
For example:
create table duet_index (
  n varchar2(12),
  constraint uq1 unique (n)
);
And then the trigger:
create or replace trigger test_trg
before insert on test
for each row
begin
  -- register both new values; the unique constraint on duet_index rejects duplicates
  insert into duet_index (n) values (:new.a);
  -- skip the second insert when a = b, so equal values within one row are allowed (bonus)
  if (:new.a <> :new.b) then
    insert into duet_index (n) values (:new.b);
  end if;
end;
/
Please consider I'm not proficient at writing Oracle triggers. The syntax can be wrong, but the idea should fly.
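One gap in this idea: rows deleted from test (or updated) would leave stale values behind in duet_index. A hedged sketch of a companion delete trigger, with the same caveat about my Oracle syntax:
create or replace trigger test_del_trg
after delete on test
for each row
begin
  -- release the values so they can be reused
  delete from duet_index where n = :old.a;
  if (:old.a <> :old.b) then
    delete from duet_index where n = :old.b;
  end if;
end;
/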
I've been working with Oracle for decades now and I don't recall having such a requirement. It makes me nervous about your data model.
What you want to do cannot be done with a single index. Trigger-based approaches are going to have trouble working correctly in all multi-user cases. A materialized-view approach seems promising.
My suggestion is to create a materialized view that refreshes on commit and that contains a concatenation (UNION ALL) of the column A and column B values.
Here is what I mean (see comments in code for more details):
create table test1 ( id number not null primary key, a number, b number );
insert into test1 values ( 1, 1, 2);
insert into test1 values ( 2, 3, 4);
commit;
-- Create a snapshot log, which is required for a REFRESH FAST ON COMMIT snapshot...
create snapshot log on test1 with primary key, rowid;
-- And create that snapshot... it will be refreshed whenever changes to TEST1 are committed
create materialized view test1_concat
refresh fast on commit
as
select t1.rowid row_id, 1 as marker, t1.a concatenation from test1 t1
union all
select t2.rowid row_id, 2 as marker, t2.b concatenation from test1 t2
-- this next bit allows a = b in single rows (i.e., bonus question)
where t2.a != t2.b;
-- Now, enforce the constraint on our snapshot to prevent cross-column duplicates
create unique index test1_concat_u1 on test1_concat ( concatenation );
-- Test #1 -- column a may equal column b without error (bonus!)
insert into test1 values ( 3, 5, 5);
commit;
-- Test #2 uniqueness enforced
insert into test1 values ( 4, 6, 1);
-- (no error at this point)
commit;
> ORA-12008: error in materialized view refresh path ORA-00001: unique
> constraint (APPS.TEST1_CONCAT_U1) violated
Drawbacks
There is a scalability issue here: Oracle will synchronize on the commit. Every working solution to your problem will have this drawback, I believe.
You do not get an error until the transaction tries to commit, at which point it is impossible to correct and recover the transaction. I believe you cannot solve this drawback in any solution without making drawback #1 much worse (i.e., without much more extensive and longer-lasting locks on your table).
I suggest fixing your data model, so the values are in rows rather than columns:
CREATE TABLE test (
id NUMBER (10) NOT NULL,
type varchar2(1) check (type in ('A', 'B')),
value varchar2(12),
unique (value),
unique (id, type)
);
The unique constraint is then easy.
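For instance, the sample data from the question would be loaded like this (a sketch; the failure relies on the unique (value) constraint):
INSERT INTO test (id, type, value) VALUES (1, 'A', '1');
INSERT INTO test (id, type, value) VALUES (1, 'B', '2');
INSERT INTO test (id, type, value) VALUES (2, 'A', '3');
INSERT INTO test (id, type, value) VALUES (2, 'B', '4');
INSERT INTO test (id, type, value) VALUES (3, 'A', '4'); -- fails: 4 already taken
INSERT INTO test (id, type, value) VALUES (3, 'A', '5'); -- works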
Not possible using INDEX or CONSTRAINT. You need a trigger, something like this:
CREATE OR REPLACE TRIGGER TEST_TRG
BEFORE INSERT ON TEST
FOR EACH ROW
DECLARE
CNT NUMBER := 0;
BEGIN
SELECT COUNT(1) INTO CNT from TEST
WHERE A = :NEW.A OR B = :NEW.A OR A = :NEW.B OR B = :NEW.B;
IF CNT > 0 THEN
raise_application_error(-20111,'This value is not allowed');
END IF;
END;
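One caveat: because the trigger queries the table it fires on, a multi-row INSERT ... SELECT would raise ORA-04091 (mutating table); single-row inserts like the ones in the question are fine. Against the sample data, the expected behavior would be (untested sketch):
INSERT INTO test (id,a,b) VALUES (3, '4', 'x'); -- ORA-20111: This value is not allowed
INSERT INTO test (id,a,b) VALUES (3, '5', 'x'); -- succeeds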
I need to create a CHECK constraint to verify that the integer entered in a column is less than or equal to the integer in a column of a different table.
For example, the following tables would be valid:
=# SELECT * FROM table1;
current_project_number
------------------------
12
=# SELECT * FROM table2;
project_name | project_number
--------------+----------------
Schaf | 1
Hase | 8
Hai | 12
And the following tables would NOT be valid:
=# SELECT * FROM table1;
current_project_number
------------------------
12
=# SELECT * FROM table2;
project_name | project_number
--------------+----------------
Schaf | 1
Hase | 8
Hai | 12
Erdmännchen | 71 <- error: table1.current_project_number is NOT >= 71
Please note this CHECK constraint is designed to make sure data like the above cannot be inserted. I'm not looking to SELECT values where current_project_number >= project_number; this is about INSERTing.
What would I need in order for such a CHECK to work? Thanks
Defining a CHECK constraint that references another table is possible, but a seriously bad idea that will lead to problems in the future.
CHECK constraints are only validated when the table with the constraint on it is modified, not when the other table referenced in the constraint is modified. So it is possible to render the condition invalid with modifications on that second table.
In other words, PostgreSQL will not guarantee that the constraint is always valid. This can and will lead to unpleasant surprises, like a backup taken with pg_dump that can no longer be restored.
Don't go down that road.
If you need functionality like that, define a BEFORE INSERT trigger on table2 that verifies the condition and throws an exception otherwise:
CREATE FUNCTION ins_trig() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
IF EXISTS (SELECT 1 FROM table1
WHERE NEW.project_number > current_project_number)
THEN
RAISE EXCEPTION 'project number must be less than or equal to values in table1';
END IF;
RETURN NEW;
END;$$;
CREATE TRIGGER ins_trig BEFORE INSERT ON table2
FOR EACH ROW EXECUTE PROCEDURE ins_trig();
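With current_project_number = 12 as in the example, the trigger should then reject the offending row. A sketch, with the first project name assumed for illustration:
INSERT INTO table2 (project_name, project_number) VALUES ('Igel', 5); -- OK: 5 <= 12
INSERT INTO table2 (project_name, project_number) VALUES ('Erdmännchen', 71);
-- ERROR: project number must be less than or equal to values in table1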
As far as I know, ClickHouse only allows inserting new data. But is it possible to delete blocks older than some period, to avoid overflowing the HDD?
Lightweight delete
Available since v22.8
Standard DELETE syntax for MergeTree tables was introduced in #37893.
SET allow_experimental_lightweight_delete = 1;
DELETE FROM merge_table_standard_delete WHERE id = 10;
Altering data using Mutations
See the docs on Mutations feature https://clickhouse.yandex/docs/en/query_language/alter/#mutations.
The feature was implemented in Q3 2018.
Delete data
ALTER TABLE <table> DELETE WHERE <filter expression>
"Dirty" delete all
You always have to specify a filter expression. If you want to delete all the data through a Mutation, specify something that's always true, e.g.:
ALTER TABLE <table> DELETE WHERE 1=1
Update data
It's also possible to mutate (UPDATE) in a similar way:
ALTER TABLE <table> UPDATE column1 = expr1 [, ...] WHERE <filter expression>
Mind it's async
Please note that none of the commands above executes the data mutation directly (synchronously). Instead, they schedule a ClickHouse Mutation that is executed independently (asynchronously) in the background. That is the reason the ALTER TABLE syntax was chosen instead of the typical SQL UPDATE/DELETE. You can check the progress of unfinished Mutations via:
SELECT *
FROM system.mutations
WHERE is_done = 0
...unless
you change the mutations_sync setting (a sketch follows below) to:
1, so it synchronously waits for the current server
2, so it waits for all replicas
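For example, a hedged sketch that makes a mutation wait until all replicas have applied it (table name reused from above):
SET mutations_sync = 2;
ALTER TABLE merge_table_standard_delete DELETE WHERE id = 10;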
Altering data without using Mutations
There's a TRUNCATE TABLE statement with syntax as follows:
TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
This truncates the table synchronously. It checks the table size first, so it won't allow the delete if the table size exceeds max_table_size_to_drop. See the docs here:
https://clickhouse.tech/docs/en/sql-reference/statements/truncate/
Example of creating and dropping a partition
CREATE TABLE test.partitioned_by_month(d Date, x UInt8) ENGINE = MergeTree
PARTITION BY toYYYYMM(d) ORDER BY x;
INSERT INTO test.partitioned_by_month VALUES ('2000-01-01', 1), ('2000-01-02', 2), ('2000-01-03', 3);
INSERT INTO test.partitioned_by_month VALUES ('2000-02-03', 4), ('2000-02-03', 5);
INSERT INTO test.partitioned_by_month VALUES ('2000-03-03', 4), ('2000-03-03', 5);
SELECT * FROM test.partitioned_by_month;
d          | x
-----------+---
2000-02-03 | 4
2000-02-03 | 5
-----------+---
2000-03-03 | 4
2000-03-03 | 5
-----------+---
2000-01-01 | 1
2000-01-02 | 2
2000-01-03 | 3
ALTER TABLE test.partitioned_by_month DROP PARTITION 200001;
SELECT * FROM test.partitioned_by_month;
d          | x
-----------+---
2000-03-03 | 4
2000-03-03 | 5
-----------+---
2000-02-03 | 4
2000-02-03 | 5
ClickHouse doesn't have an UPDATE/DELETE feature like the MySQL database, but we can still delete by organising the data into partitions. I don't know how you are managing your data, so as an example, suppose the data is stored in month-wise partitions.
Using the DROP PARTITION command you can then delete a month's data by dropping that month's partition. Here is a complete explanation of how to drop a partition: https://clickhouse.yandex/blog/en/how-to-update-data-in-clickhouse.
I have a table in which I have a numeric field A, which is set to be UNIQUE. This field is used to indicate an order in which some action has to be performed. I want to make an UPDATE of all the values that are greater, for example, than 3. For example,
I have
A
1
2
3
4
5
Now, I want to add 1 to all values of A greater than 3. So, the result would be
A
1
2
3
5
6
The question is, whether it is possible to be done using only one query? Remember that I have a UNIQUE constraint on the column A.
Obviously, I tried
UPDATE my_table SET A = A + 1 WHERE A > 3;
but it did not work as I have the constraint on this field.
PostgreSQL 9.0 and later
PostgreSQL 9.0 added deferrable unique constraints, which is exactly the feature you seem to need. This way, uniqueness is checked at commit-time rather than update-time.
Create the UNIQUE constraint with the DEFERRABLE keyword:
ALTER TABLE foo ADD CONSTRAINT foo_uniq UNIQUE (foo_id) DEFERRABLE;
Later, before running the UPDATE statement, you run in the same transaction:
SET CONSTRAINTS foo_uniq DEFERRED;
Alternatively you can create the constraint with the INITIALLY DEFERRED keyword on the constraint itself, so you don't have to run SET CONSTRAINTS, but this might affect the performance of your other queries which don't need to defer the constraint.
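For reference, a sketch of that INITIALLY DEFERRED variant (constraint name assumed):
ALTER TABLE foo ADD CONSTRAINT foo_uniq UNIQUE (foo_id) DEFERRABLE INITIALLY DEFERRED;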
PostgreSQL 8.4 and older
If you only want to use the unique constraint for guaranteeing uniqueness -- not as a target for a foreign key -- then this workaround might help:
First, add a boolean column such as is_temporary to the table that temporarily distinguishes updated and non-updated rows:
CREATE TABLE foo (value int not null, is_temporary bool not null default false);
Next create a partial unique index that only affects rows where is_temporary=false:
CREATE UNIQUE INDEX ON foo (value) WHERE is_temporary=false;
Now, every time you make the updates you described, you run them in two steps:
UPDATE foo SET is_temporary=true, value=value+1 WHERE value>3;
UPDATE foo SET is_temporary=false WHERE is_temporary=true;
As long as these statements occur in a single transaction, this will be totally safe -- other sessions will never see the temporary rows. The downside is that you'll be writing the rows twice.
Do note that this is merely a unique index, not a constraint, but in practice it shouldn't matter.
You can do it in 2 queries with a simple trick:
First, update your column with +1, but multiply the result by -1:
update my_table set A = (A + 1) * -1 where A > 3;
You will switch from 4, 5, 6 to -5, -6, -7.
Second, multiply by -1 again to restore the positive values:
update my_table set A = A * -1 where A < 0;
You will have: 5, 6, 7.
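Note that both statements should run inside one transaction, and the trick assumes A never holds negative values to begin with. A sketch:
begin;
update my_table set A = (A + 1) * -1 where A > 3;
update my_table set A = A * -1 where A < 0;
commit;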
You can do this with a loop. I don't like this solution, but it works:
CREATE TABLE update_unique (id INT NOT NULL);
CREATE UNIQUE INDEX ux_id ON update_unique (id);
INSERT INTO update_unique(id) SELECT a FROM generate_series(1,100) AS foo(a);
DO $$
DECLARE v INT;
BEGIN
FOR v IN SELECT id FROM update_unique WHERE id > 3 ORDER BY id DESC
LOOP
UPDATE update_unique SET id = id + 1 WHERE id = v;
END LOOP;
END;
$$ LANGUAGE 'plpgsql';
In PostgreSQL 8.4, you might have to create a function to do this since you can't arbitrarily run PL/PGSQL from a prompt using DO (at least not to the best of my recollection).
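A hedged sketch of that function-based variant for 8.4 (function name assumed):
CREATE OR REPLACE FUNCTION shift_ids() RETURNS void AS $$
DECLARE v INT;
BEGIN
  FOR v IN SELECT id FROM update_unique WHERE id > 3 ORDER BY id DESC
  LOOP
    UPDATE update_unique SET id = id + 1 WHERE id = v;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT shift_ids();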
I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values under this column for two rows. How could this be done? Two hacks I know are:
Delete both rows and re-insert them.
Update the rows with some other value, swap them, and then update to the actual values.
But I don't want to go for these as they do not seem to be the appropriate solution to the problem.
Could anyone help me out?
The magic word is DEFERRABLE here:
DROP TABLE ztable CASCADE;
CREATE TABLE ztable
( id integer NOT NULL PRIMARY KEY
, payload varchar
);
INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' );
SELECT * FROM ztable;
-- This works, because there is no constraint
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload)
DEFERRABLE INITIALLY DEFERRED
;
-- This should also work, because the constraint
-- is deferred until "commit time"
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
RESULT:
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable"
CREATE TABLE
INSERT 0 3
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
UPDATE 2
id | payload
----+---------
1 | one
2 | three
3 | two
(3 rows)
NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable"
ALTER TABLE
UPDATE 2
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of.
If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful.
But in short: there is no other solution than the ones you provided.
Further to Andy Irving's answer
this worked for me (on SQL Server 2005) in a similar situation
where I have a composite key and I need to swap a field which is part of the unique constraint.
key: pID, LNUM
rec1: 10, 0
rec2: 10, 1
rec3: 10, 2
and I need to swap LNUM so that the result is
key: pID, LNUM
rec1: 10, 1
rec2: 10, 2
rec3: 10, 0
the SQL needed:
UPDATE DOCDATA
SET LNUM = CASE LNUM
WHEN 0 THEN 1
WHEN 1 THEN 2
WHEN 2 THEN 0
END
WHERE (pID = 10)
AND (LNUM IN (0, 1, 2))
There is another approach that works with SQL Server: create a temp table and join to it in your UPDATE statement.
The problem is caused by having two rows with the same value at the same time, but if you update both rows at once (to their new, unique values), there is no constraint violation.
Pseudo-code:
-- setup initial data values:
insert into data_table(id, name) values(1, 'A')
insert into data_table(id, name) values(2, 'B')
-- create temp table that matches live table
select top 0 * into #tmp_data_table from data_table
-- insert records to be swapped
insert into #tmp_data_table(id, name) values(1, 'B')
insert into #tmp_data_table(id, name) values(2, 'A')
-- update both rows at once! No index violations!
update data_table set name = #tmp_data_table.name
from data_table join #tmp_data_table on (data_table.id = #tmp_data_table.id)
Thanks to Rich H for this technique.
- Mark
Assuming you know the PK of the two rows you want to update... This works in SQL Server, can't speak for other products. SQL is (supposed to be) atomic at the statement level:
CREATE TABLE testing
(
cola int NOT NULL,
colb CHAR(1) NOT NULL
);
CREATE UNIQUE INDEX UIX_testing_a ON testing(colb);
INSERT INTO testing VALUES (1, 'b');
INSERT INTO testing VALUES (2, 'a');
SELECT * FROM testing;
UPDATE testing
SET colb = CASE cola WHEN 1 THEN 'a'
WHEN 2 THEN 'b'
END
WHERE cola IN (1,2);
SELECT * FROM testing;
so you will go from:
cola colb
------------
1 b
2 a
to:
cola colb
------------
1 a
2 b
I also think that #2 is the best bet, though I would be sure to wrap it in a transaction in case something goes wrong mid-update.
An alternative (since you asked) to updating the unique-index values with different values would be to update all of the other values in each row to those of the other row. Doing this means you can leave the unique-index values alone, and in the end you have the data that you want. Be careful, though: if some other table references this table in a foreign-key relationship, make sure all of the relationships in the DB remain intact.
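As a sketch of that alternative (table and column names assumed; the unique column position stays put while the other data swaps):
-- rows: (id 1, position 5, name 'A') and (id 2, position 7, name 'B')
UPDATE t
SET name = CASE id WHEN 1 THEN 'B' WHEN 2 THEN 'A' END
WHERE id IN (1, 2);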
I have the same problem. Here's my proposed approach in PostgreSQL. In my case, my unique index is a sequence value, defining an explicit user-order on my rows. The user will shuffle rows around in a web-app, then submit the changes.
I'm planning to add a "before" trigger. In that trigger, whenever my unique index value is updated, I will look to see if any other row already holds my new value. If so, I will give them my old value, and effectively steal the value off them.
I'm hoping that PostgreSQL will allow me to do this shuffle in the before trigger.
I'll post back and let you know my mileage.
In SQL Server, the MERGE statement can update rows that would normally break a UNIQUE KEY/INDEX. (Just tested this because I was curious.)
However, you'd have to use a temp table or table variable to supply MERGE with the necessary rows.
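A hedged sketch of that MERGE approach (table and column names assumed):
CREATE TABLE #swap (id int, name varchar(10));
INSERT INTO #swap VALUES (1, 'B'), (2, 'A');

MERGE data_table AS t
USING #swap AS s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET name = s.name;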
For Oracle there is an option, DEFERRED, but you have to add it to your constraint: it must have been declared DEFERRABLE when created.
SET CONSTRAINT emp_no_fk_par DEFERRED;
To defer ALL constraints that are deferrable during the entire session, you can use the ALTER SESSION SET constraints=DEFERRED statement.
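Note that deferrability cannot be bolted onto an existing constraint with ALTER TABLE ... MODIFY CONSTRAINT; the constraint has to be created (or dropped and recreated) as DEFERRABLE. A sketch with assumed names:
ALTER TABLE emp DROP CONSTRAINT emp_no_uq;
ALTER TABLE emp ADD CONSTRAINT emp_no_uq UNIQUE (emp_no) DEFERRABLE INITIALLY IMMEDIATE;
SET CONSTRAINT emp_no_uq DEFERRED;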
I usually think of a value that absolutely no row in my table could have. Usually, for unique column values, that's easy. For example, for values of the column 'position' (information about the order of several elements) it's 0.
Then you can copy value A into a variable, update it with value B, and then set value B from your variable. Two queries; I know of no better solution, though.
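A sketch of that approach for swapping the position values of two rows (ids and values assumed; 0 is the impossible value):
-- row 1 holds position 5, row 2 holds position 7
UPDATE t SET position = 0 WHERE id = 1; -- park row 1 on the impossible value
UPDATE t SET position = 5 WHERE id = 2; -- give row 2 the old value of row 1
UPDATE t SET position = 7 WHERE id = 1; -- give row 1 the old value of row 2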
Oracle has deferred integrity checking which solves exactly this, but it is not available in either SQL Server or MySQL.
1) Swap the ids between each adjacent pair of names. Given:
id student
1 Abbot
2 Doris
3 Emerson
4 Green
5 Jeames
For the sample input, the output is:
id student
1 Doris
2 Abbot
3 Green
4 Emerson
5 Jeames
"in case n number of rows how will manage......"