How to ignore duplicates without a unique constraint in Postgres 9.4?

I am currently facing an issue in our old database (Postgres 9.4): a table contains some duplicate rows. I want to ensure that no more duplicate rows can be generated, but I also want to keep the duplicates that have already been generated. That is why I cannot apply a unique constraint on those columns (it spans multiple columns).
I have created a trigger which checks whether the row already exists and raises an exception accordingly, but it also fails when concurrent transactions are in flight.
Example:
TAB1
col1 | col2 | col3
-----+------+------
   1 | A    | B
   2 | A    | B    -- already present duplicate of (col2, col3): allowed
   3 | C    | D
INSERT INTO TAB1 VALUES (4, 'A', 'B'); -- this insert statement should not be allowed
Note: I cannot use ON CONFLICT because it was only introduced in Postgres 9.5.

Presumably, you don't want new rows to duplicate historical rows. If so, you can do this but it requires modifying the table and adding a new column.
alter table t add duplicate_seq int default 1;
Then update this column to identify existing duplicates:
update t
set duplicate_seq = tt.seqnum
from (select t.*,
             row_number() over (partition by col order by col) as seqnum
      from t
     ) tt
where t.<primary key> = tt.<primary key>;
Now, create a unique index or constraint:
alter table t add constraint unq_t_col_seq unique (col, duplicate_seq);
When you insert rows, do not provide a value for duplicate_seq. The default is 1. That will conflict with any existing values -- or with duplicates entered more recently. Historical duplicates will be allowed.
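Applied to the example table, the whole sketch would look like this (assuming col1 is the primary key, which the question does not state explicitly):
alter table TAB1 add duplicate_seq int default 1;
-- number the existing duplicates of (col2, col3)
update TAB1
set duplicate_seq = tt.seqnum
from (select col1,
             row_number() over (partition by col2, col3 order by col1) as seqnum
      from TAB1
     ) tt
where TAB1.col1 = tt.col1;
alter table TAB1 add constraint unq_tab1_col2_col3_seq
    unique (col2, col3, duplicate_seq);
-- new rows default to duplicate_seq = 1, which collides with any
-- existing row having the same (col2, col3):
INSERT INTO TAB1 (col1, col2, col3) VALUES (4, 'A', 'B'); -- rejected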

You can try to create a partial index so that the unique constraint applies only to a subset of the table's rows.
For example:
create unique index on t(x) where (d > '2020-01-01');
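For the question's table this would require a column that separates historical rows from new ones, for example an insert timestamp (an assumption; TAB1 as shown has no such column):
alter table TAB1 add inserted_at timestamp default now();
create unique index tab1_new_rows_unq
    on TAB1 (col2, col3)
    where (inserted_at > '2020-01-01');
Note that this only prevents duplicates among the new rows themselves; a new row that merely duplicates a historical row would still be accepted, which is what the duplicate_seq approach above avoids.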

Related

Update using two unique fields

There are two tables:
table_1 -- periodically TRUNCATEd, then re-filled via INSERT with updated information
table_2 -- rows are updated based on data from table_1
Both tables have a unique index on the (lap, tt) fields.
My current update query goes through
INSERT INTO public.table_2 ... VALUES (...) ON CONFLICT (lap,tt) DO UPDATE SET ...
There is a field table_2.status in the second table, and I need the following:
if the (lap, tt) entry is in the first table, then set
table_2.status = 1
if there is no such record, then set
table_2.status = 0
So far I first do an update setting table_2.status = 0 everywhere, and then run
INSERT ... VALUES (...) ON CONFLICT ... DO UPDATE SET ...
If there were just one unique field, I would do the update via
.... SELECT column1 FROM table1 WHERE column1 IN (SELECT column1 FROM table2) ....
but I can't work out how to do it with two unique fields.
Update:
UNIQUE INDEX table_1_idx ON public.table_1 USING btree (lap, tt);
UNIQUE INDEX table_2_idx ON public.table_2 USING btree (lap, tt);
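A row-value comparison generalizes the single-column IN trick to two fields: Postgres lets you compare (lap, tt) as a tuple. A minimal sketch using the names above, setting every row's status in one pass (an EXISTS subquery would work just as well):
UPDATE public.table_2 t2
SET status = CASE
                 WHEN (t2.lap, t2.tt) IN (SELECT t1.lap, t1.tt
                                          FROM public.table_1 t1)
                 THEN 1
                 ELSE 0
             END;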

Redshift Delete Duplicate Rows

I want to delete duplicates from a Redshift table that are true duplicates. Below is an example of two rows that are true duplicates.
Since it is Redshift, the table has no primary key. Any help is appreciated.
id | Col 1 | Col 2
---+-------+------
 1 | Val 1 | Val 2
 1 | Val 1 | Val 2
I tried using the window functions row_number() and rank(). Neither worked: when applying the DELETE command, SQL cannot differentiate between the two rows.
Trial 1:
The below command deletes both rows:
DELETE From test_table
where (id) IN
(
    select *, row_number() over (partition by id) as rownumber
    from test_table
    where row_number != 1
);
Trial 2:
The below command retains both rows:
DELETE From test_table
where (id) IN
(
    select *, rank() over (partition by id) as rownumber
    from test_table
    where row_number != 1
);
All the row values are identical, hence you are unable to delete specific rows in that table. In that case I would recommend creating a dummy table and loading only the unique records.
Steps to follow:
create table dummy as select * from main_table where 1=2;
insert into dummy (col1, col2, ... coln) select distinct col1, col2, ... coln from main_table;
Verify the dummy table.
alter table main_table rename to main_table_bk;
alter table dummy rename to main_table;
After completing your testing and verification, drop main_table_bk.
Hope it will help.
You cannot delete one without deleting the other, as they are identical. The way to do this is to:
make a temp table with one copy of each duplicated row
(within a transaction) delete all rows from the source table that match rows in the temp table
insert the temp table rows back into the source table (and commit)
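A minimal sketch of that recipe against the example table (test_table and the column names id, col1, col2 are assumed from the question):
begin;
-- one copy of each duplicated row
create temp table dupes as
select id, col1, col2
from test_table
group by id, col1, col2
having count(*) > 1;
-- remove every copy of those rows
delete from test_table
using dupes
where test_table.id = dupes.id
  and test_table.col1 = dupes.col1
  and test_table.col2 = dupes.col2;
-- put a single copy back
insert into test_table
select * from dupes;
commit;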

Why does SQL Server populate new fields in existing rows in some environments and not others?

I am using MS SQL Server 2012. I have this bit of SQL:
alter table SomeTable
add Active bit not null default 1
In some environments the default value is applied to existing rows and in other environments we have to add an update script to set the new field to 1. Naturally I am thinking that the difference is a SQL Server setting but my searches thus far are not suggesting which one. Any suggestions?
Let me know if the values of particular settings are desired.
Edit: In the environments that don't apply the default the existing rows are set to 0, which at least conforms to the NOT NULL.
If you add the column as not null it will be set to the default value for existing rows.
If you add the column as null it will be null despite having a default constraint when added to the table.
For example:
create table SomeTable (id int);
insert into SomeTable values (1);
alter table SomeTable add Active_NotNull bit not null default 1;
alter table SomeTable add Active_Null bit null default 1;
select * from SomeTable;
returns:
+----+----------------+-------------+
| id | Active_NotNull | Active_Null |
+----+----------------+-------------+
| 1 | 1 | NULL |
+----+----------------+-------------+
dbfiddle.uk demo: http://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=c4aeea808684de48097ff44d391c9954
The default value is applied to the existing rows to avoid violating the NOT NULL constraint.
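If the column is instead added as NULL (the second case above), existing rows must be backfilled with an explicit update script, as the question describes; for example:
UPDATE SomeTable SET Active_Null = 1 WHERE Active_Null IS NULL;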

SQL - drop a column in netezza

I have a table table1, shown below, from which I'm trying to drop a column.
table1
id  name  time   value
-----------------------
1   john  11:00  324
2   NULL  12:00  645
3   NULL  13:00  324
4   jane  11:00  132
5   NULL  12:00  30
A temp table is created because the original table cannot be altered due to permissions. This particular case could be handled simply by selecting everything except id, but what I really need is a way to get rid of one column when there is a large number of columns.
create temp table table2 as(
select * from table1
) distribute on random;
alter table table2 drop column id;
This gives the error: Drop behaviour (RESTRICT | CASCADE) needs to be specified when dropping a column or constraint.
What should the ALTER TABLE statement look like?
As the error message and documentation say, you need to specify either RESTRICT or CASCADE. However, note that you can't drop a column from a true TEMPORARY table, so this only applies to normal tables.
ALTER TABLE <table> <action> [ORGANIZE ON {(<columns>) | NONE}]
Where <action> can be one of:
ADD COLUMN <col> <type> [<col_constraint>][,…] |
ADD <table_constraint> |
ALTER [COLUMN] <col> { SET DEFAULT <value> | DROP DEFAULT } |
DROP [COLUMN] column_name[,column_name…] {CASCADE | RESTRICT } |
DROP CONSTRAINT <constraint_name> {CASCADE | RESTRICT} |
MODIFY COLUMN (<col> VARCHAR(<maxsize>)) |
OWNER TO <user_name> |
RENAME [COLUMN] <col> TO <new_col_name> |
RENAME TO <new_table> |
SET PRIVILEGES TO <table>
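Applied to the statement from the question this would be the following, with the caveat above that it will not work on a true temporary table:
alter table table2 drop column id restrict;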
Like this:
SYSTEM.ADMIN(ADMIN)=> create table t1 (col1 bigint, col2 varchar(5));
CREATE TABLE
SYSTEM.ADMIN(ADMIN)=> insert into t1 values (1,'One');
INSERT 0 1
SYSTEM.ADMIN(ADMIN)=> insert into t1 values (2,'Two');
INSERT 0 1
SYSTEM.ADMIN(ADMIN)=> insert into t1 values (3,'Three');
INSERT 0 1
SYSTEM.ADMIN(ADMIN)=> select * from t1;
COL1 | COL2
------+-------
3 | Three
1 | One
2 | Two
(3 rows)
SYSTEM.ADMIN(ADMIN)=> alter table t1 drop column col2 restrict;
ALTER TABLE
SYSTEM.ADMIN(ADMIN)=> select * from t1;
COL1
------
1
2
3
(3 rows)
As always, if you alter a table to drop or add a column, you should follow it up with a GROOM to clean up the versioned table:
SYSTEM.ADMIN(ADMIN)=> groom table t1 versions;
NOTICE: Groom will not purge records deleted by transactions that started after 2016-11-07 17:00:11.
NOTICE: If this process is interrupted please either repeat GROOM VERSIONS or issue 'GENERATE STATISTICS ON "T1"'
NOTICE: Groom processed 2 pages; purged 0 records; scan size unchanged; table size unchanged.
GROOM VERSIONS
SYSTEM.ADMIN(ADMIN)=>
This is the syntax for dropping a column in Netezza:
alter table <tablename> drop <columnname> restrict;
According to this: http://datawarehouse.ittoolbox.com/groups/technical-functional/netezza-l/netezza-issue-2467523
it seems that you can't DROP a column via ALTER TABLE, only a constraint.

Swap unique indexed column values in database

I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values in this column for two rows. How could this be done? Two hacks I know of are:
Delete both rows and re-insert them.
Update the rows with some placeholder value, swap, and then update to the actual values.
But I don't want to go for these, as they do not seem to be an appropriate solution to the problem.
Could anyone help me out?
The magic word is DEFERRABLE here:
DROP TABLE ztable CASCADE;
CREATE TABLE ztable
( id integer NOT NULL PRIMARY KEY
, payload varchar
);
INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' );
SELECT * FROM ztable;
-- This works, because there is no constraint
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload)
DEFERRABLE INITIALLY DEFERRED
;
-- This should also work, because the constraint
-- is deferred until "commit time"
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
RESULT:
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable"
CREATE TABLE
INSERT 0 3
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
UPDATE 2
id | payload
----+---------
1 | one
2 | three
3 | two
(3 rows)
NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable"
ALTER TABLE
UPDATE 2
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of.
If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful.
But in short: there is no other solution than the ones you provided.
Further to Andy Irving's answer
this worked for me (on SQL Server 2005) in a similar situation
where I have a composite key and I need to swap a field which is part of the unique constraint.
key: pID, LNUM
rec1: 10, 0
rec2: 10, 1
rec3: 10, 2
and I need to swap LNUM so that the result is
key: pID, LNUM
rec1: 10, 1
rec2: 10, 2
rec3: 10, 0
the SQL needed:
UPDATE DOCDATA
SET LNUM = CASE LNUM
WHEN 0 THEN 1
WHEN 1 THEN 2
WHEN 2 THEN 0
END
WHERE (pID = 10)
AND (LNUM IN (0, 1, 2))
There is another approach that works with SQL Server: use a temp table joined to your table in the UPDATE statement.
The problem is caused by having two rows with the same value at the same time, but if you update both rows at once (to their new, unique values), there is no constraint violation.
Pseudo-code:
-- setup initial data values:
insert into data_table(id, name) values(1, 'A')
insert into data_table(id, name) values(2, 'B')
-- create temp table that matches live table
select top 0 * into #tmp_data_table from data_table
-- insert records to be swapped
insert into #tmp_data_table(id, name) values(1, 'B')
insert into #tmp_data_table(id, name) values(2, 'A')
-- update both rows at once! No index violations!
update data_table set name = #tmp_data_table.name
from data_table join #tmp_data_table on (data_table.id = #tmp_data_table.id)
Thanks to Rich H for this technique.
- Mark
Assuming you know the PK of the two rows you want to update... This works in SQL Server, can't speak for other products. SQL is (supposed to be) atomic at the statement level:
CREATE TABLE testing
(
cola int NOT NULL,
colb CHAR(1) NOT NULL
);
CREATE UNIQUE INDEX UIX_testing_a ON testing(colb);
INSERT INTO testing VALUES (1, 'b');
INSERT INTO testing VALUES (2, 'a');
SELECT * FROM testing;
UPDATE testing
SET colb = CASE cola WHEN 1 THEN 'a'
WHEN 2 THEN 'b'
END
WHERE cola IN (1,2);
SELECT * FROM testing;
so you will go from:
cola colb
------------
1 b
2 a
to:
cola colb
------------
1 a
2 b
I also think that #2 is the best bet, though I would be sure to wrap it in a transaction in case something goes wrong mid-update.
An alternative (since you asked) to updating the Unique Index values with different values would be to update all of the other values in the rows to that of the other row. Doing this means that you could leave the Unique Index values alone, and in the end, you end up with the data that you want. Be careful though, in case some other table references this table in a Foreign Key relationship, that all of the relationships in the DB remain intact.
I have the same problem. Here's my proposed approach in PostgreSQL. In my case, my unique index is a sequence value, defining an explicit user-order on my rows. The user will shuffle rows around in a web-app, then submit the changes.
I'm planning to add a "before" trigger. In that trigger, whenever my unique index value is updated, I will look to see if any other row already holds my new value. If so, I will give them my old value, and effectively steal the value off them.
I'm hoping that PostgreSQL will allow me to do this shuffle in the before trigger.
I'll post back and let you know my mileage.
In SQL Server, the MERGE statement can update rows that would normally break a UNIQUE KEY/INDEX. (Just tested this because I was curious.)
However, you'd have to use a temp table/variable to supply MERGE w/ the necessary rows.
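A rough sketch of that (T-SQL, reusing the testing table from the answer above; the table variable is just one way to supply the rows):
DECLARE @swap TABLE (cola int NOT NULL, colb char(1) NOT NULL);
INSERT INTO @swap VALUES (1, 'a'), (2, 'b');
-- update both rows in a single atomic statement
MERGE testing AS t
USING @swap AS s
    ON t.cola = s.cola
WHEN MATCHED THEN
    UPDATE SET colb = s.colb;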
For Oracle there is an option, DEFERRED, but you have to add it to your constraint.
SET CONSTRAINT emp_no_fk_par DEFERRED;
To defer ALL constraints that are deferrable during the entire session, you can use the ALTER SESSION SET constraints=DEFERRED statement.
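For the swap use case the unique constraint itself has to be created deferrable in the first place (table and constraint names here are hypothetical):
ALTER TABLE emp ADD CONSTRAINT emp_position_uq
    UNIQUE (position) DEFERRABLE INITIALLY IMMEDIATE;
SET CONSTRAINT emp_position_uq DEFERRED;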
I usually think of a value that absolutely no row in my table could have. For unique column values this is usually really easy to find; for example, for values of a 'position' column (information about the order of several elements) it's 0.
Then you can park one row on that spare value, move the other row onto the freed value, and finally move the parked row onto its target value. A few quick queries; I know of no better solution, though.
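A minimal sketch of that dance (the elements table, the ids, and the position values are all made up for illustration):
begin;
update elements set position = 0 where id = 7; -- park row 7 on the spare value
update elements set position = 1 where id = 8; -- row 8 takes row 7's old position
update elements set position = 2 where id = 7; -- row 7 takes row 8's old position
commit;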
Oracle has deferred integrity checking which solves exactly this, but it is not available in either SQL Server or MySQL.
1) Switch the ids for the names:
id  student
1   Abbot
2   Doris
3   Emerson
4   Green
5   Jeames
For the sample input, the output is:
id  student
1   Doris
2   Abbot
3   Green
4   Emerson
5   Jeames
As for the comment "in case of n number of rows, how will it manage?":