I have a table that contains two columns: AccountID and PartnerAccountID. I need to prevent duplicates across both columns. That is, if this entry exists:
| AccountID | PartnerAccountID |
|-----------|------------------|
| 1         | 2                |
I need to make sure that the following can't also exist:
| AccountID | PartnerAccountID |
|-----------|------------------|
| 2         | 1                |
Any way to do that in a constraint?
It would be nice if you could create a unique index on an expression:
create unique index unq_t_AccountID_PartnerAccountID
    on t((case when AccountID < PartnerAccountID then AccountID else PartnerAccountID end),
         (case when AccountID < PartnerAccountID then PartnerAccountID else AccountID end)
        );
But you can do almost the same thing by creating the columns as computed columns and then creating the index:
alter table t add minid as (case when AccountID < PartnerAccountID then AccountID else PartnerAccountID end);
alter table t add maxid as (case when AccountID < PartnerAccountID then PartnerAccountID else AccountID end);
create unique index unq_t_minid_maxid on t(minid, maxid);
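For illustration, assuming t contains just these two integer columns, the index now rejects the reversed pair (the values are made up):
insert into t (AccountID, PartnerAccountID) values (1, 2);  -- succeeds
insert into t (AccountID, PartnerAccountID) values (2, 1);  -- fails: duplicate key in unq_t_minid_maxid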
An alternative approach is to create an INSTEAD OF trigger which simply ignores such duplicates across the two columns. The advantage is that transactions do not need to be aborted when the information is already stored, just the other way round. Here is an attempt at what such a trigger could look like:
CREATE TRIGGER tr_t ON t
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO t (AccountID, PartnerAccountID)
    SELECT i.AccountID, i.PartnerAccountID
    FROM inserted i
    WHERE NOT EXISTS (SELECT * FROM t
                      WHERE t.AccountID = i.PartnerAccountID
                        AND t.PartnerAccountID = i.AccountID);
END
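A quick check of the intended behavior (the values are made up): the second statement simply affects zero rows instead of raising an error:
INSERT INTO t (AccountID, PartnerAccountID) VALUES (1, 2);  -- 1 row affected
INSERT INTO t (AccountID, PartnerAccountID) VALUES (2, 1);  -- 0 rows affected, no error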
Related
I have a table that contains these columns:
ID (varchar)
SETUP_ID (varchar)
MENU (varchar)
LABEL (varchar)
The thing I want to achieve is to remove all duplicates from the table based on two columns (SETUP_ID, MENU).
Table I have:
id | setup_id | menu | label |
-------------------------------------
1 | 10 | main | txt |
2 | 10 | main | txt |
3 | 11 | second | txt |
4 | 11 | second | txt |
5 | 12 | third | txt |
Table I want:
id | setup_id | menu | label |
-------------------------------------
1 | 10 | main | txt |
3 | 11 | second | txt |
5 | 12 | third | txt |
You can achieve this with a common table expression (CTE):
with cte as (
    select id,
           row_number() over (partition by setup_id, menu order by id) as rownum
    from atable
)
delete from atable
where id in (select id from cte where rownum >= 2);
This will give you your desired output.
Common Table Expression docs
Assuming a table named tbl where both setup_id and menu are defined NOT NULL and id is the PRIMARY KEY.
EXISTS will do nicely:
DELETE FROM tbl t0
WHERE EXISTS (
SELECT FROM tbl t1
WHERE t1.setup_id = t0.setup_id
AND t1.menu = t0.menu
AND t1.id < t0.id
);
This deletes every row for which a dupe with a lower id exists, effectively keeping only the row with the smallest id from each set of dupes. An index on (setup_id, menu), or even (setup_id, menu, id), will help performance a lot on big tables.
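A sketch of such an index (the name is illustrative):
CREATE INDEX tbl_setup_id_menu_id_idx ON tbl (setup_id, menu, id);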
If there is no PK and no reliable UNIQUE (combination of) column(s), you can fall back to using the ctid. If NULL values can be involved, you need to specify how to deal with those.
Consider:
Delete duplicate rows from small table
How to delete duplicate rows without unique identifier
How do I (or can I) SELECT DISTINCT on multiple columns?
After cleaning up duplicates, add a UNIQUE constraint to prevent new dupes:
ALTER TABLE tbl ADD CONSTRAINT tbl_setup_id_menu_uni UNIQUE (setup_id, menu);
If you had an index on (setup_id, menu), drop that now. It's superseded by the UNIQUE constraint.
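For example, using the illustrative name from the sketch above:
DROP INDEX tbl_setup_id_menu_id_idx;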
I have found the solution that fits me best.
Here it is if anyone needs it:
DELETE FROM table_name
WHERE id IN
(SELECT id
FROM
(SELECT id,
ROW_NUMBER() OVER( PARTITION BY setup_id,
menu
ORDER BY id ) AS row_num
FROM table_name ) t
WHERE t.row_num > 1 );
Links:
https://www.postgresql.org/docs/current/queries-union.html
https://www.postgresql.org/docs/current/sql-select.html#SQL-DISTINCT
Let's say the table name is a:
select distinct on (setup_id, menu) a.* from a;
Key point: The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). The ORDER BY clause will normally contain additional expression(s) that determine the desired precedence of rows within each DISTINCT ON group.
This means that in this DISTINCT ON query, the ORDER BY clause has to start with setup_id, menu.
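For example, to deterministically keep the row with the smallest id per (setup_id, menu) group, a sketch following that rule:
select distinct on (setup_id, menu) a.*
from a
order by setup_id, menu, id;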
If you want the opposite:
EXCEPT returns all rows that are in the result of query1 but not in the result of query2. (This is sometimes called the difference between two queries.) Again, duplicates are eliminated unless EXCEPT ALL is used.
SELECT * FROM a
EXCEPT
select distinct on (setup_id, menu) a.* from a;
You can try something along these lines to delete all but the first row in case of duplicates (please note that this is not tested in any way!):
DELETE FROM your_table WHERE id IN (
    SELECT unnest(duplicate_ids[2:]) FROM (
        SELECT array_agg(id ORDER BY id) AS duplicate_ids
        FROM your_table
        GROUP BY setup_id, menu
        HAVING COUNT(*) > 1
    ) AS duplicates
);
The above collects the ids of the duplicate rows (COUNT(*) > 1) in an array (array_agg, ordered by id so the kept row is the one with the smallest id), then takes all but the first element of that array ([2:]) and "explodes" the id values into rows (unnest).
The outer query just deletes every id that ends up in that result.
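The slicing step can be checked in isolation (the values are made up):
select unnest((array[7, 9, 12])[2:]);  -- returns 9 and 12: everything but the first element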
For MySQL, a similar question has already been answered here: Find and remove duplicate rows by two columns
Try these approaches and see if either helps in this matter.
I like this one for MySQL (note that ALTER IGNORE was removed in MySQL 5.7, so it only works on older versions):
ALTER IGNORE TABLE your_table ADD UNIQUE (SETUP_ID, MENU);
DELETE t1
FROM table_name t1
JOIN table_name t2
  ON t2.setup_id = t1.setup_id
 AND t2.menu = t1.menu
 AND t2.id < t1.id;
There are many ways to find and delete duplicate rows based on conditions, but I like the inner join method, which works very fast even on a large amount of data. Please check the following:
DELETE T1 FROM <TableName> T1
INNER JOIN <TableName> T2
WHERE
T1.id > T2.id AND
T1.<ColumnName1> = T2.<ColumnName1> AND T1.<ColumnName2> = T2.<ColumnName2>;
In your case you can write it as follows:
DELETE T1 FROM <TableName> T1
INNER JOIN <TableName> T2
WHERE
T1.id > T2.id AND
T1.setup_id = T2.setup_id AND T1.menu = T2.menu;
Let me know if you face any issues or need more help.
I am using Postgresql. I would like to write a SELECT statement with a column value based on the value in the database.
For example.
| id | indicator |
|----|-----------|
| 1  | 0         |
| 2  | 1         |
indicator can only be 0 or 1, where 0 = manual and 1 = auto.
Expected output from a SELECT *:
1 manual
2 auto
You can use a case expression:
select id, case indicator
when 0 then 'manual'
when 1 then 'auto'
end as indicator
from the_table;
If you need that frequently, you could create a view for that.
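A minimal sketch of such a view (the view name is made up):
create view the_table_labeled as
select id,
       case indicator
         when 0 then 'manual'
         when 1 then 'auto'
       end as indicator
from the_table;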
In the long run, it might be better to create a proper lookup table and join to that:
create table indicator
(
id integer primary key,
name text not null
);
insert into indicator (id, name)
values (0, 'manual'), (1, 'auto');
alter table the_table
add constraint fk_indicator
foreign key (indicator) references indicator (id);
Then join to it:
select t.id, i.name as indicator
from the_table t
join indicator i on i.id = t.indicator;
I have a table and I want to make sure that no two rows can be alike.
So, for example, this table would be valid:
user_id | printer
---------+-------------
1 | LaserWriter
4 | LaserWriter
1 | ThinkJet
2 | DeskJet
But this table would not be:
user_id | printer
---------+-------------
1 | LaserWriter
4 | LaserWriter
1 | ThinkJet <--error (duplicate row)
2 | DeskJet
1 | ThinkJet <--error (duplicate row)
This is because the last table has two instances of 1 | ThinkJet.
So, user_id can be repeated (i.e. 1) and printer can be repeated (i.e. LaserWriter) but once a record like 1 | ThinkJet is entered once that combination cannot be entered again.
How can I prevent such occurrences in a Postgresql 11.5 table?
I would try experimenting with SQL code but alas I am still new on the matter.
Please note this is for INSERTing data into the table, not SELECTing it. Like a constraint iirc.
Thanks
Here's your script:
ALTER TABLE tableA ADD CONSTRAINT some_constraint PRIMARY KEY(user_id,printer);
INSERT INTO tableA(user_id, printer)
VALUES
(
1,
'LaserWriter'
)
ON CONFLICT (user_id, printer)
DO NOTHING;
You can use DISTINCT. For example:
SELECT DISTINCT user_id, printer FROM my_table;
That's all. Hope it helps!
You need a series of steps (assuming there is no already assigned unique key):
1. Add a temporary column to make each row unique.
2. Assign a value to the new column.
3. Remove the already existing duplicates.
4. Create a unique or primary key on the composite columns.
5. Remove the temporary column.
alter table your_table add temp_unique integer unique;
do $$
declare
    row_num integer := 1;
    c_assign cursor for
        select temp_unique
        from your_table
        for update;
begin
    for rec in c_assign
    loop
        update your_table
        set temp_unique = row_num
        where current of c_assign;
        row_num := row_num + 1;
    end loop;
end;
$$;
delete from your_table ytd
where exists ( select 1
from your_table ytk
where ytd.user_id = ytk.user_id
and ytd.printer = ytk.printer
and ytd.temp_unique > ytk.temp_unique
) ;
alter table your_table add constraint id_prt_uk unique (user_id, printer);
alter table your_table drop temp_unique;
I found the answer. When creating the table I needed to specify the two columns as UNIQUE. Observe:
CREATE TABLE foo (user_id INT, printer VARCHAR(20), UNIQUE (user_id, printer));
Now, here are my results:
=# INSERT INTO foo VALUES (1, 'LaserWriter');
INSERT 0 1
=# INSERT INTO foo VALUES (4, 'LaserWriter');
INSERT 0 1
=# INSERT INTO foo VALUES (1, 'ThinkJet');
INSERT 0 1
=# INSERT INTO foo VALUES (2, 'DeskJet');
INSERT 0 1
=# INSERT INTO foo VALUES (1, 'ThinkJet');
ERROR: duplicate key value violates unique constraint "foo_user_id_printer_key"
DETAIL: Key (user_id, printer)=(1, ThinkJet) already exists.
=# SELECT * FROM foo;
user_id | printer
---------+-------------
1 | LaserWriter
4 | LaserWriter
1 | ThinkJet
2 | DeskJet
(4 rows)
I have a table with some data. Now I want to make a non-unique column unique.
The thing is, I don't want to delete the duplicate data already present in the table, but I do want to restrict newly added data from being non-unique.
To be practical:
I have a table tbl with name,age as columns.
I have data in the table as follows:
name |age
-----------------------
kaushikC |21
mohan |27
kumar |29
mohan |31
karthik |55
karthik |76
Now I want to make the name column unique without deleting the duplicate entries of 'mohan' and 'karthik'.
How do I write such a constraint?
If you have a column in your table that allows you to identify the records you don't want to change, such as an identity column or a creation date, you can create a unique filtered index on the table, specifying in its WHERE clause that it should only cover the other records in the table.
Suppose you have an identity column called id:
id | name |age
-----------------------
1 | kaushikC |21
2 | mohan |27
3 | kumar |29
4 | mohan |31
5 | karthik |55
6 | karthik |76
You could create a unique filtered index on this table that will only be valid for rows where the id is greater than 6:
CREATE UNIQUE INDEX UX_YourTable_Name_WhereIdGreaterThanSix
ON YourTable (Name)
WHERE id > 6;
This will enable you to keep the uniqueness of other names on the table. However, it will not prevent you from inserting one more duplicate of any existing name, so you could still insert another mohan or another kumar into the table (but just one of each).
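To illustrate, assuming id is an identity column as in the example above (the inserted values are made up):
INSERT INTO YourTable (Name, Age) VALUES ('mohan', 33);  -- succeeds: the existing mohan rows have id <= 6
INSERT INTO YourTable (Name, Age) VALUES ('mohan', 44);  -- fails: a mohan row with id > 6 is now in the index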
If you want to exclude all duplicates, including duplicates of existing rows, your best option is probably an INSTEAD OF trigger for INSERT and UPDATE:
CREATE TRIGGER tr_YourTable ON YourTable
INSTEAD OF INSERT, UPDATE
AS
BEGIN
-- the statement that fired the trigger is an update statement
IF EXISTS(select 1 FROM deleted)
BEGIN
UPDATE T
SET name = I.Name
FROM YourTable AS T
JOIN Inserted AS I
ON T.Id = I.Id
WHERE NOT EXISTS
( -- make sure the name is unique
SELECT 1
FROM YourTable AS T1
WHERE T1.Name = I.Name
AND NOT EXISTS
( -- unless it is going to be updated
SELECT 1
FROM Deleted AS D
JOIN Inserted AS I
ON D.Id = I.Id
WHERE D.Id = T1.Id
AND T1.Name = D.Name
AND D.Name <> I.Name
)
)
END
ELSE -- the statement that fired the trigger is an insert statement
BEGIN
INSERT INTO YourTable(Name)
SELECT I.Name
FROM Inserted I
WHERE NOT EXISTS
( -- make sure the name is unique
SELECT 1
FROM YourTable AS T1
WHERE T1.Name = I.Name
)
END
END
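A quick way to exercise the insert path (the names are made up; mohan already exists in the table):
INSERT INTO YourTable (Name) VALUES ('mohan');    -- silently skipped: the name already exists
INSERT INTO YourTable (Name) VALUES ('prakash');  -- inserted: the name is new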
I have two tables, bank_data and sec_data. Table bank_data has the columns id, date, asset, and liability. The date column is divided into quarters.
id |    date    | asset  | liability
---+------------+--------+----------
 1 | 6/30/2001  | 333860 | 308524
 1 | 3/31/2001  | 336896 | 311865
 1 | 9/30/2001  | 349343 | 308524
 1 | 12/31/2002 | 353863 | 322659
 2 | 6/30/2001  | 451297 | 425156
 2 | 3/31/2001  | 411421 | 391846
 2 | 9/30/2001  | 430178 | 41356
 2 | 12/31/2002 | 481687 | 46589
 3 | 6/30/2001  | 106506 | 104532
 3 | 3/31/2001  | 104196 | 102983
 3 | 9/30/2001  | 106383 | 104865
 3 | 12/31/2002 | 107654 | 105867
Table sec_data has columns of id, date, and security. I combined the two tables into a new table named new_table in R using this code:
dbGetQuery(con, "CREATE TABLE new_table
AS (SELECT sec_data.id,
bank_data.date,
bank_data.asset,
bank_data.liability,
sec_data.security
FROM bank_data,bank_sec
WHERE (bank_data.id = sec_data.id) AND
(bank_data.date = sec_data.date))")
I would like to set two primary keys (id and date) in this R code without using pgAdmin. I want to use something like Constraint bankkey Primary Key (id, date) but the AS and SELECT functions are throwing me off.
First, your query is wrong: you say the table is named sec_data but you reference bank_sec. Let me rephrase your query:
CREATE TABLE new_table AS
SELECT
sec_data.id,
bank_data.date,
bank_data.asset,
bank_data.liability,
sec_data.security
FROM bank_data
INNER JOIN sec_data on bank_data.id = sec_data.id
and bank_data.date = sec_data.date
Avoid using implicit joins and use explicit joins instead. And as stated by a_horse_with_no_name, you can't define more than one primary key on a table. What you want is a composite primary key.
Definition:
a combination of two or more columns in a table that can be used to uniquely identify each row in the table
So you need to add the key with ALTER TABLE, because your CREATE statement is based on another table:
ALTER TABLE new_table
ADD PRIMARY KEY (id, date);
You may run two separate statements instead (CREATE TABLE and INSERT INTO):
CREATE TABLE new_table (
id int, date date, asset int, liability int, security int,
CONSTRAINT bankkey PRIMARY KEY (id, date)
) ;
INSERT INTO new_table (id,date,asset,liability,security)
SELECT s.id,
b.date,
b.asset,
b.liability,
s.security
FROM bank_data b JOIN sec_data s
ON b.id = s.id AND b.date = s.date;
To create the primary key you desire, run the following SQL statement after your CREATE TABLE ... AS statement:
ALTER TABLE new_table
ADD CONSTRAINT bankkey PRIMARY KEY (id, date);
That has the advantage that the primary key index won't slow down the data insertion.