Update using two unique fields - sql

There are two tables:
| table_1 | periodically TRUNCATEd and re-filled with updated information
| table_2 | rows are updated based on data from table_1
Both tables have a unique index on the (lap, tt) fields.
My current update runs via
INSERT INTO public.table_2 ... VALUES (...) ON CONFLICT (lap, tt) DO UPDATE SET ...
The second table has a field table_2.status, and I need the following:
if the ("lap", "tt") pair exists in the first table, then set
table_2.status = 1
if there is no such record, then set
table_2.status = 0
So far I first run UPDATE table_2 SET status = 0, and only then the
INSERT ... VALUES (...) ON CONFLICT ... DO UPDATE SET ...
If there were just one unique field, I would do the update via something like
... SELECT column1 FROM table1 WHERE column1 IN (SELECT column1 FROM table2) ...
but I can't work out how to do it with two unique fields.
Update: the unique indexes are
CREATE UNIQUE INDEX table_1_idx ON public.table_1 USING btree (lap, tt);
CREATE UNIQUE INDEX table_2_idx ON public.table_2 USING btree (lap, tt);
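One way to handle this - a minimal sketch, assuming only the tables and indexes above - is to compare ("lap", "tt") as a pair, for example with a correlated EXISTS, so status can be set in a single pass instead of the blanket status = 0 update:

-- set status in one pass, comparing the composite key as a pair
UPDATE public.table_2 t2
SET status = CASE
                 WHEN EXISTS (SELECT 1
                              FROM public.table_1 t1
                              WHERE t1.lap = t2.lap
                                AND t1.tt  = t2.tt)
                 THEN 1
                 ELSE 0
             END;

The same pair comparison also works with a row-value IN list, e.g. WHERE (lap, tt) IN (SELECT lap, tt FROM public.table_1), which is the two-column equivalent of the single-column IN (...) pattern from the question.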

Related

How to ignore duplicates without unique constraint in Postgres 9.4?

I am currently facing an issue in our old database (Postgres 9.4): a table contains some duplicate rows. I want to ensure that no more duplicate rows can be generated,
but I also want to keep the duplicate rows that have already been generated, so I cannot simply apply a unique constraint on those (multiple) columns.
I created a trigger that checks whether the row already exists and raises an exception accordingly, but it also fails when concurrent transactions are in progress.
Example:
TAB1
col1 | col2 | col3
-----+------+-----
   1 | A    | B
   2 | A    | B     -- already-present duplicates of (col2, col3); allowed
   3 | C    | D

INSERT INTO TAB1 VALUES (4, 'A', 'B');  -- this insert statement will not be allowed
Note: I cannot use ON CONFLICT due to the older database version.
Presumably, you don't want new rows to duplicate historical rows. If so, you can do this but it requires modifying the table and adding a new column.
alter table t add duplicate_seq int default 1;
Then update this column to identify existing duplicates:
update t
    set duplicate_seq = seqnum
from (select t.*,
             row_number() over (partition by col order by col) as seqnum
      from t
     ) tt
where t.<primary key> = tt.<primary key>;
Now, create a unique index or constraint:
alter table t add constraint unq_t_col_seq unique (col, duplicate_seq);
When you insert rows, do not provide a value for duplicate_seq. The default is 1. That will conflict with any existing values -- or with duplicates entered more recently. Historical duplicates will be allowed.
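Applied to the TAB1 example from the question - a sketch that assumes col1 is the primary key; the constraint name and subquery alias are made up - the steps would look like:

alter table TAB1 add duplicate_seq int default 1;

-- number the existing duplicates within each (col2, col3) group
update TAB1
set duplicate_seq = seqnum
from (select col1,
             row_number() over (partition by col2, col3 order by col1) as seqnum
      from TAB1
     ) numbered
where TAB1.col1 = numbered.col1;

alter table TAB1 add constraint unq_tab1_col23_seq unique (col2, col3, duplicate_seq);

-- existing duplicates keep seq values 1 and 2; a new duplicate defaults to 1 and is rejected
INSERT INTO TAB1 (col1, col2, col3) VALUES (4, 'A', 'B');

Because the constraint itself does the enforcement, concurrent inserts are handled by the database rather than by trigger logic.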
You can try to create a partial index to have the unique constraint only for a subset of the table rows:
For example:
create unique index on t(x) where (d > '2020-01-01');

Get last generated id asp.net

So I have 2 Tables in one Database (Table1 and Table2). What I want to do is to get the last generated ID (which is primary key) from the first table (Table1) and add it to another table (Table2).
For example: the last generated ID in Table1, column NRRENDOR, is 25 (I have deleted some rows, which is why the screenshot shows 22; it is the primary key). If I add a row to Table1, it will generate the number 26 in column NRRENDOR. When 26 is added to column NRRENDOR in Table1, I want it to be added to Table2, column NRD, as well.
You can use an SQL query like the one below after inserting the new element into the first table.
INSERT INTO table2 (id) SELECT TOP 1 ID FROM table1 ORDER BY ID DESC;
This should work in sql-server 2008 and newer.
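If both statements run in the same batch or stored procedure, SQL Server's SCOPE_IDENTITY() avoids relying on the sort order - a sketch, where SomeColumn and the inserted value are made up:

INSERT INTO Table1 (SomeColumn) VALUES ('value');     -- NRRENDOR is generated here
INSERT INTO Table2 (NRD) VALUES (SCOPE_IDENTITY());   -- reuse the value just generated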
From MySQL Reference Manual, 20.6.14.3 How to Get the Unique ID for the Last Inserted Row
INSERT INTO NRRENDOR (auto, field) VALUES (NULL, 'value');      -- generate ID by inserting NULL
INSERT INTO NRD (id, field) VALUES (LAST_INSERT_ID(), 'value'); -- use ID in second table
Or you can get the last insert id manually by executing the following query immediately after the INSERT into NRRENDOR:
SELECT last_insert_id()
and use it later in your second INSERT query for NRD.

Database Triggers: On Insert

This is a simple example.
I want to insert data into Table1 (Name, Age, Sex). This table has an automatically increasing serial # (int) generated on insert.
I want to put a trigger on Table1's insert so that, after inserting the data, it picks up the serial # from Table1 and puts the Serial # and Name into Table2, and the Serial # and some other data into Table3.
Is it possible via triggers?
Or should I pick the last Serial from Table1 and insert into the other tables by incrementing it manually, in the same SP I used to insert into Table1?
Which approach is better?
EDIT 1:
Suppose table:
Serial | UID | Name | Age | Sex | DateTimeStamp
(int | uniqueidentifier | nvarchar | smallint | nchar | DateTime )
With DEFAULT NEWID() on UID and DEFAULT GETDATE() on DateTimeStamp, would the INSERTED table contain the datetime of insertion in the DateTimeStamp field? In other words, if I don't supply Serial, UID or DateTimeStamp in the INSERT, will those values appear in the INSERTED table?
EDIT 2:
Can you point me towards good books/articles on triggers? I read Mastering SQL Server 2005 but didn't get much out of it. Thanks!
Sure you can do this with a trigger - something like:
CREATE TRIGGER trg_Table1_INSERT
ON dbo.Table1
AFTER INSERT
AS BEGIN
    INSERT INTO dbo.Table2 (SerialNo, Name)
        SELECT SerialNo, Name
        FROM Inserted;

    INSERT INTO dbo.Table3 (SomeOtherCol)
        SELECT SomeOtherCol
        FROM Inserted;
END
or whatever it is you need to do here....
It's important to understand that the trigger will be called once per statement - not once per row inserted. So if you have a statement that inserts 10 rows, your trigger gets called once, and the pseudo-table Inserted will contain those 10 rows that have been inserted in the statement.
Yes, this is possible with triggers.
When you use an INSERT trigger, you have access to the INSERTED logical table that represents the row being inserted, with the value of the new ID in it.
Yes, it is possible with a trigger, but keep in mind that a trigger doesn't take any input and doesn't provide any output, so you can only collect the data you need by querying inside the trigger. To handle the inserts into your Table2 and Table3:
CREATE TRIGGER tr_YourDesiredTriggerName ON Table1
FOR INSERT
AS
BEGIN
    -- Inserting data into Table2
    INSERT INTO Table2 (Serial, Name)
        SELECT i.Serial, i.Name
        FROM Table1 AS t1
        INNER JOIN Inserted AS i
            ON t1.Serial = i.Serial
           AND i.Serial NOT IN (SELECT t2.Serial FROM Table2 AS t2)

    -- Inserting data into Table3 (selecting from another table)
    INSERT INTO Table3 (Serial, OtherData)
        SELECT i.Serial, ot.OtherData
        FROM OtherTable AS ot
        INNER JOIN Inserted AS i
            ON ot.Serial = i.Serial
           AND i.Serial NOT IN (SELECT t3.Serial FROM Table3 AS t3)
END
If you don't have control over the source of the insert then use a trigger. If you do have control over the source of the inserts then you can modify your insert process to also add the secondary table rows.
Will you also have control over future inserts? What if another insert gets created for this table? If you use a trigger then the secondary inserts would also get handled automatically. If not then the second insert process could possibly leave out the secondary inserts. Maybe that would be good or bad depending on your application.
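If you do control the insert process, a minimal sketch of that path in SQL Server (the variable name and sample values are made up; Serial is assumed to be an IDENTITY column) could be:

DECLARE @newSerial int;

-- insert the main row and capture the generated Serial
INSERT INTO Table1 (Name, Age, Sex) VALUES ('Alice', 30, 'F');
SET @newSerial = SCOPE_IDENTITY();

-- reuse the generated Serial in the dependent tables
INSERT INTO Table2 (Serial, Name) VALUES (@newSerial, 'Alice');
INSERT INTO Table3 (Serial, OtherData) VALUES (@newSerial, 'other data');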

SQL Insert into 2 tables, passing the new PK from one table as the FK in the other

How can I write an insert query across 2 tables that inserts the primary key generated in one table as a foreign key into the second table?
Here's a quick example of what I'm trying to do, but I'd like this to be one query, perhaps a join.
INSERT INTO Table1 (col1, col2) VALUES (val1, val2);
INSERT INTO Table2 (foreign_key_column) VALUES (primary_key_from_table1_insert);
I'd like this to be one join query.
I've made some attempts but I can't get this to work correctly.
This is not possible to do with a single query.
The record in the PK table needs to be inserted before the new PK is known and can be used in the FK table, so at least two queries are required (though normally 3, as you need to retrieve the new PK value for use).
The exact syntax depends on the database being used, which you have not specified.
If you need this set of inserts to be atomic, use transactions.
Despite what others have answered, this absolutely is possible, although it takes 2 queries made consecutively with the same connection (to maintain the session state).
Here's the mysql solution (with executable test code below):
INSERT INTO Table1 (col1, col2) VALUES ( val1, val2 );
INSERT INTO Table2 (foreign_key_column) VALUES (LAST_INSERT_ID());
Note: These should be executed using a single connection.
Here's the test code:
create table tab1 (id int auto_increment primary key, note text);
create table tab2 (id int auto_increment primary key, tab2_id int references tab1, note text);
insert into tab1 values (null, 'row 1');
insert into tab2 values (null, LAST_INSERT_ID(), 'row 1');
select * from tab1;
select * from tab2;
mysql> select * from tab1;
+----+-------+
| id | note  |
+----+-------+
|  1 | row 1 |
+----+-------+
1 row in set (0.00 sec)

mysql> select * from tab2;
+----+---------+-------+
| id | tab2_id | note  |
+----+---------+-------+
|  1 |       1 | row 1 |
+----+---------+-------+
1 row in set (0.00 sec)
From your example, if the tuple (col1, col2) can be considered unique, then you could do:
INSERT INTO table1 (col1, col2) VALUES (val1, val2);
INSERT INTO table2 (foreign_key_column) VALUES ((SELECT id FROM table1 WHERE col1 = val1 AND col2 = val2));
There may be a few ways to accomplish this. Probably the most straightforward is to use a stored procedure that accepts as input all the values you need for both tables, then inserts into the first, retrieves the PK, and inserts into the second.
If your DB supports it, you can also tell the first INSERT to return a value:
INSERT INTO table1 ... RETURNING primary_key;
This at least saves the SELECT step that would otherwise be necessary. If you go with a stored procedure approach, you'll probably want to incorporate this into that stored procedure.
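In PostgreSQL specifically, the RETURNING clause can be combined with a data-modifying CTE so that both inserts are sent as one statement - a sketch, assuming table1 generates a primary key column named id (the CTE name is arbitrary):

WITH new_row AS (
    -- first insert, handing back the generated primary key
    INSERT INTO table1 (col1, col2)
    VALUES (val1, val2)
    RETURNING id
)
-- second insert reuses that key as the foreign key
INSERT INTO table2 (foreign_key_column)
SELECT id FROM new_row;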
It could also possibly be done with a combination of views and triggers--if supported by your DB. This is probably far messier than it's worth, though. I believe this could be done in PostgreSQL, but I'd still advise against it. You'll need a view that contains all of the columns represented by both table1 and table2, then you need an ON INSERT DO INSTEAD trigger with three parts--the first part inserts to the new table, the second part retrieves the PK from the first table and updates the NEW result, and the third inserts to the second table. (Note: This view doesn't even have to reference the two literal tables, and would never be used for queries--it only has to contain columns with names/data types that match the real tables)
Of course all of these methods are just complicated ways of getting around the fact that you can't really do what you want with a single command.

Swap unique indexed column values in database

I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values under this column for two rows. How could this be done? Two hacks I know are:
1. Delete both rows and re-insert them.
2. Update the rows with some other value, swap them, and then update back to the actual values.
But I don't want to go with these, as they don't seem to be appropriate solutions to the problem.
Could anyone help me out?
The magic word is DEFERRABLE here:
DROP TABLE ztable CASCADE;
CREATE TABLE ztable
( id integer NOT NULL PRIMARY KEY
, payload varchar
);
INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' );
SELECT * FROM ztable;
-- This works, because there is no constraint
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload)
DEFERRABLE INITIALLY DEFERRED
;
-- This should also work, because the constraint
-- is deferred until "commit time"
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
RESULT:
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable"
CREATE TABLE
INSERT 0 3
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
UPDATE 2
id | payload
----+---------
1 | one
2 | three
3 | two
(3 rows)
NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable"
ALTER TABLE
UPDATE 2
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of.
If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful.
But in short: there is no other solution than the ones you provided.
Further to Andy Irving's answer, this worked for me (on SQL Server 2005) in a similar situation,
where I have a composite key and I need to swap a field which is part of the unique constraint.
key: pID, LNUM
rec1: 10, 0
rec2: 10, 1
rec3: 10, 2
and I need to swap LNUM so that the result is
key: pID, LNUM
rec1: 10, 1
rec2: 10, 2
rec3: 10, 0
the SQL needed:
UPDATE DOCDATA
SET LNUM = CASE LNUM
WHEN 0 THEN 1
WHEN 1 THEN 2
WHEN 2 THEN 0
END
WHERE (pID = 10)
AND (LNUM IN (0, 1, 2))
There is another approach that works with SQL Server: use a temp table and join to it in your UPDATE statement.
The problem is caused by having two rows with the same value at the same time, but if you update both rows at once (to their new, unique values), there is no constraint violation.
Pseudo-code:
-- setup initial data values:
insert into data_table(id, name) values(1, 'A')
insert into data_table(id, name) values(2, 'B')
-- create temp table that matches live table
select top 0 * into #tmp_data_table from data_table
-- insert records to be swapped
insert into #tmp_data_table(id, name) values(1, 'B')
insert into #tmp_data_table(id, name) values(2, 'A')
-- update both rows at once! No index violations!
update data_table set name = #tmp_data_table.name
from data_table join #tmp_data_table on (data_table.id = #tmp_data_table.id)
Thanks to Rich H for this technique.
- Mark
Assuming you know the PK of the two rows you want to update... This works in SQL Server, can't speak for other products. SQL is (supposed to be) atomic at the statement level:
CREATE TABLE testing
(
cola int NOT NULL,
colb CHAR(1) NOT NULL
);
CREATE UNIQUE INDEX UIX_testing_a ON testing(colb);
INSERT INTO testing VALUES (1, 'b');
INSERT INTO testing VALUES (2, 'a');
SELECT * FROM testing;
UPDATE testing
SET colb = CASE cola WHEN 1 THEN 'a'
WHEN 2 THEN 'b'
END
WHERE cola IN (1,2);
SELECT * FROM testing;
so you will go from:
cola colb
------------
1 b
2 a
to:
cola colb
------------
1 a
2 b
I also think that #2 is the best bet, though I would be sure to wrap it in a transaction in case something goes wrong mid-update.
An alternative (since you asked) to updating the Unique Index values with different values would be to update all of the other values in the rows to that of the other row. Doing this means that you could leave the Unique Index values alone, and in the end, you end up with the data that you want. Be careful though, in case some other table references this table in a Foreign Key relationship, that all of the relationships in the DB remain intact.
I have the same problem. Here's my proposed approach in PostgreSQL. In my case, my unique index is a sequence value, defining an explicit user-order on my rows. The user will shuffle rows around in a web-app, then submit the changes.
I'm planning to add a "before" trigger. In that trigger, whenever my unique index value is updated, I will look to see if any other row already holds my new value. If so, I will give them my old value, and effectively steal the value off them.
I'm hoping that PostgreSQL will allow me to do this shuffle in the before trigger.
I'll post back and let you know my mileage.
In SQL Server, the MERGE statement can update rows that would normally break a UNIQUE KEY/INDEX. (Just tested this because I was curious.)
However, you'd have to use a temp table/variable to supply MERGE w/ the necessary rows.
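A sketch of that idea against the testing table from the earlier answer (the table variable and its contents are made up):

DECLARE @swap TABLE (cola int, colb char(1));
INSERT INTO @swap VALUES (1, 'a'), (2, 'b');   -- the values the two rows should end up with

MERGE testing AS t
USING @swap AS s
    ON t.cola = s.cola
WHEN MATCHED THEN
    UPDATE SET colb = s.colb;   -- both rows change within one statement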
For Oracle there is an option, DEFERRED, but you have to add it to your constraint.
SET CONSTRAINT emp_no_fk_par DEFERRED;
To defer ALL constraints that are deferrable during the entire session, you can use the ALTER SESSION SET constraints=DEFERRED statement.
I usually think of a value that absolutely no row in my table could have. For unique column values that's usually easy; for example, for values of a 'position' column (information about the order of several elements) it's 0.
Then you can copy value A to a variable, update it with value B, and then set value B from your variable. Two queries, but I know of no better solution.
Oracle has deferred integrity checking which solves exactly this, but it is not available in either SQL Server or MySQL.
Another example: swapping the names between each pair of adjacent ids.
id student
1 Abbot
2 Doris
3 Emerson
4 Green
5 Jeames
For the sample input, the output is:
id student
1 Doris
2 Abbot
3 Green
4 Emerson
5 Jeames
"in case n number of rows how will manage......"