Hi, I have a table which does not have any primary key or unique key.
How can I delete the duplicate records?
Can anyone tell me?
The easiest way would be to copy all of the duplicates into another identical table, delete them all from the original table, then put back the duplicates (just one copy of each, of course) from the holding table.
For example:
BEGIN TRANSACTION

-- capture one copy of each duplicated value
CREATE TABLE Holding_Table (my_string VARCHAR(20) NOT NULL)

INSERT INTO Holding_Table (my_string)
SELECT my_string
FROM My_Table
GROUP BY my_string
HAVING COUNT(*) > 1

-- remove every row that has a duplicate
DELETE MT
FROM Holding_Table HT
INNER JOIN My_Table MT ON MT.my_string = HT.my_string

-- put back exactly one copy of each
INSERT INTO My_Table (my_string)
SELECT my_string
FROM Holding_Table

DROP TABLE Holding_Table

COMMIT TRANSACTION
This is just a simple example with one column. You would need to adjust it for your table obviously. Then be sure to add a primary key to your table...
You would have to create a primary key first. Then you would be able to run an aggregate query to see how many duplicates there are, and delete based on the new ID. You could then remove the primary key and make another field the primary key if you so desired (or stick with the one you created).
I have done this many times when fixing ancient legacy databases.
If you use: SET ROWCOUNT 1
You can get SQL to delete only a single row, and use whatever technique you prefer to delete the identical rows one at a time.
To revert back to normal behaviour, use: SET ROWCOUNT 0
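For instance, a minimal sketch reusing the My_Table/my_string names from the first answer (the duplicated value is assumed known; rerun the DELETE until only one copy remains):

SET ROWCOUNT 1
-- even though many rows match, only a single row is deleted per execution
DELETE FROM My_Table WHERE my_string = 'duplicate_value'
SET ROWCOUNT 0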
However, it would be advisable to at least add a column that allows you to uniquely identify each row so that you can avoid this problem in future. The following does the trick:
ALTER TABLE TableName ADD TableName_ID int IDENTITY NOT NULL
Now you can simply: DELETE TableName WHERE TableName_ID = ? for each of your duplicates.
Check this site on support.microsoft.com: Site
It can tell you a lot about how to identify duplicates, etc.
Adding this as another answer since it's a different approach...
You could also add a new column to the table, make that one unique, and then use that to delete all but one of the duplicate rows. For example:
ALTER TABLE My_Table
ADD my_id INT IDENTITY NOT NULL

-- keep only the row with the lowest my_id in each group of duplicates
DELETE MT1
FROM My_Table MT1
WHERE EXISTS (
    SELECT *
    FROM My_Table MT2
    WHERE MT2.my_string = MT1.my_string
      AND MT2.my_id < MT1.my_id)

ALTER TABLE My_Table
DROP COLUMN my_id
I have a table which contains millions of rows.
I want to delete all the data which is over a week old based on the value of column last_updated.
So here are my two queries.
Approach 1:
Delete from A where to_date(last_updated, 'yyyy-mm-dd') < sysdate-7;
Approach 2:
l_lastupdated varchar2(255) := to_char(sysdate-nvl(p_days,7),'YYYY-MM-DD');
insert into B(ID) select ID from A where LASTUPDATED < l_lastupdated;
delete from A where id in (select id from B);
which one is better considering performance, safety and locking?
Assuming the delete removes a significant fraction of the data and millions of rows, approach three: create a new table holding only the rows you want to keep, then swap it in.
create table tmp as
  select * from a
  where to_date(last_updated, 'yyyy-mm-dd') >= sysdate-7;
drop table a;
rename tmp to a;
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2345591157689
Obviously you'll need to copy over all the indexes, grants, etc. But online redefinition can help with this https://oracle-base.com/articles/11g/online-table-redefinition-enhancements-11gr1
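To give a feel for the manual route, the copying can be as mundane as re-creating each index and grant on the new table (the index and grantee names below are hypothetical):

create index a_last_updated_ix on a (last_updated);
grant select, insert, update, delete on a to app_user;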
When you get to 12.2, there's another simpler option: a filtered move.
This is an alter table move operation, with an extra clause stating which rows you want to keep:
create table t (
c1 int
);
insert into t values ( 1 );
insert into t values ( 2 );
commit;
alter table t
move including rows where c1 > 1;
select * from t;
C1
2
While you're waiting to upgrade to 12.2+, and if you don't want to use the create-as-select method for some reason, then approach 1 is superior:
Both methods delete the same rows from A* => it's the same amount of work to do the delete
Option 1 has one statement; Option 2 has two statements; 2 > 1 => option 2 is more work
*Statement-level consistency means you might get different results running the two processes. Say another session tries to update an old row that your process will remove.
With just the delete, the update will be blocked until the delete finishes. At which point the row's gone, so the update does nothing.
Whereas if you do the insert first, the other session can update & commit the row before the insert completes. So the update "succeeds". But the delete will then remove it! Which can lead to some unhappy customers...
Your stored date format seems suitable for proper sorting, so you could go the other way round and convert sysdate to a string:
--this is false today
select * from dual where '2019-06-05' < to_char(sysdate-7, 'YYYY-MM-DD');
--this is true today
select * from dual where '2019-05-05' < to_char(sysdate-7, 'YYYY-MM-DD');
So it would be:
Delete from A where last_updated < to_char(sysdate-7, 'yyyy-mm-dd');
It has the benefit that your default index (if there is any) will be used.
It has the disadvantage of relying on the string/varchar ordering, which might change with NLS settings (if I remember right), so in any case you should do a little testing first...
In the long term, you should of course alter the column to a proper date datatype, but I guess that doesn't help you right now ;)
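As a sketch of that long-term fix (assuming the table/column names from the question and that every stored value parses cleanly):

-- add a real DATE column and backfill it from the string column
ALTER TABLE A ADD last_updated_dt DATE;
UPDATE A SET last_updated_dt = TO_DATE(last_updated, 'YYYY-MM-DD');
-- once verified, swap the columns
ALTER TABLE A DROP COLUMN last_updated;
ALTER TABLE A RENAME COLUMN last_updated_dt TO last_updated;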
If you are trying to delete most of the rows in the table, I would advise you go with a different approach, namely:
create <new table name> as
select *
from <old table name>
where <predicates for the data you want to keep>;
then
drop table <old table name>;
and finally you can rename the new table back to the old table.
You could always partition the new table (i.e. create the new table with a separate statement containing the partitioning clauses, and then have an insert as select into the new table from the old table).
That way, when you need to delete rows, it's a simple matter of dropping the relevant partition(s).
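A hedged sketch of that idea, assuming Oracle 11g+ interval partitioning and a proper DATE column (all names hypothetical):

create table a_new (
  id           number,
  last_updated date
)
partition by range (last_updated)
interval (numtodsinterval(7, 'DAY'))
(partition p0 values less than (date '2019-01-01'));

insert /*+ append */ into a_new
select id, to_date(last_updated, 'yyyy-mm-dd') from a;

-- deleting a week of data then becomes a metadata operation:
alter table a_new drop partition for (date '2019-05-01');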
I have had a look at similar problems; however, none of the answers helped in my case.
Just a little bit of background. I have two databases; both have the same table with the same fields and structure. Data already exists in both tables. I want to overwrite and add to the data in db1.table from db2.table; the primary ID is causing a problem with the update.
When I use the query:
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It works with a blank table, because none of the primary keys exist. As soon as a primary key exists, it fails.
Would it be easier for me to overwrite the primary keys, or to find the primary key and update the fields related to the field_id? I'm really not sure how to go ahead from here. The data needs to be migrated every 5 minutes, so possibly a stored procedure is required?
First you should add the new records, then update all records. You can create a procedure like the code below:
PROCEDURE sync_Data(a IN NUMBER) IS
BEGIN
  -- first insert the records that are missing from db2
  insert into db2.table
  select *
  from db1.table t
  where t.field_id not in (select tt.field_id from db2.table tt);

  -- then update every matching record with the latest values
  begin
    for t in (select * from db1.table) loop
      update db2.table aa
         set aa.field1 = t.field1,
             aa.field2 = t.field2
       where aa.field_id = t.field_id;
    end loop;
  end;
END sync_Data;
Set IsIdentity to No in the Identity Specification on the table into which you want to move data, and after executing your script, set it back to Yes.
I ended up just removing the data in the new database and sending it again.
DELETE FROM db2.table WHERE db2.table.field_id != 0;
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT table.field_id,table.field1,table.field2
FROM table;
It's not very efficient, but it gets the job done. I couldn't figure out the syntax to correctly do an UPDATE or to change the IsIdentity field within MariaDB, so I'm not sure whether those would work or not.
The overhead of deleting and replacing non-trivial amounts of data for an entire table will be prohibitive. That said, I'd prefer an update in place (merge) over delete/replace.
USE db1;
INSERT INTO db2.table(field_id,field1,field2)
SELECT t.field_id,t.field1,t.field2
FROM table t
ON DUPLICATE KEY UPDATE field1 = t.field1, field2 = t.field2
This can be used inside a procedure and called every 5 minutes (not recommended) or you could build a trigger that fires on INSERT and UPDATE to keep the tables in sync.
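A sketch of the trigger idea, assuming MariaDB, both schemas on the same server, and a hypothetical table name table_name (an analogous AFTER UPDATE trigger would be needed as well):

DELIMITER //
CREATE TRIGGER db1_table_sync
AFTER INSERT ON db1.table_name
FOR EACH ROW
BEGIN
  -- mirror the new row into db2, updating if the key already exists
  INSERT INTO db2.table_name (field_id, field1, field2)
  VALUES (NEW.field_id, NEW.field1, NEW.field2)
  ON DUPLICATE KEY UPDATE field1 = NEW.field1, field2 = NEW.field2;
END//
DELIMITER ;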
INSERT INTO database1.tabledata SELECT * FROM database2.tabledata;
But you have to make sure each VARCHAR length in database1 is greater than or equal to the corresponding one in database2, and keep the same column names.
In a database application, I want to insert, update and delete records in a table of database.
Table is as below:
In this table, Ga1_ID is Primary Key.
Suppose I insert 5 records, as currently shown.
In a second attempt, if I want to insert 5 other records and any of these new records contains a primary key value which is already present in the table, it shows an error. That's fine.
But when I insert 5 new records, how can I verify that their primary key values are not already present? I mean, how do I check against the primary key values already in the table and then insert the new records?
What is the best approach to manage this sort of situation?
Use the following query in a data adapter:

SqlDataAdapter da = new SqlDataAdapter("SELECT Ga1_ID FROM tableName WHERE Ga1_ID = @pkVal", conn);
// pass the parameter for @pkVal before filling
da.SelectCommand.Parameters.AddWithValue("@pkVal", pkValue);
DataSet ds = new DataSet();
da.Fill(ds);
if (ds.Tables[0].Rows.Count > 0)  // if the number of rows > 0, the record exists
{
    MessageBox.Show("Primary key present");
}

Hope it's helpful.
Do not check existing records in advance, i.e. do not SELECT and then INSERT. A better (and pretty common) approach is to try to INSERT and handle exceptions, in particular, catch a primary key violation if any and handle it.
Do the insert in a try/catch block, with different handling in case of a primary key violation exception and other sql exception types.
If there was no exception, then job's done, record was inserted.
If you caught a primary key violation exception, then handle it appropriately (your post does not specify what you want to do in this case, and it's completely up to you)
If you want to perform 5 inserts at once and want to make sure they all succeed or else roll back if any of them failed, then do the inserts within a transaction.
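A minimal T-SQL sketch of the pattern, reusing the question's column names (error 2627 is SQL Server's duplicate-key violation, THROW requires SQL Server 2012+, and the @new* parameters are hypothetical):

BEGIN TRY
    INSERT INTO tableName (GA1_ID, Ga1_docid, GA1_fieldNAme, Ga1_fieldValue)
    VALUES (@newId, @newdocID, @newName, @newVal);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 2627
        PRINT 'Primary key already present';  -- handle however you need
    ELSE
        THROW;  -- re-raise anything unexpected
END CATCH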
You can do a lookup first before inserting:

IF EXISTS (SELECT * FROM tableName WHERE GA1_id = @newId)
BEGIN
    UPDATE tableName
    SET Ga1_docid = @newdocID, GA1_fieldNAme = @newName, Ga1_fieldValue = @newVal
    WHERE GA1_id = @newId
END
ELSE
BEGIN
    INSERT INTO tableName (GA1_ID, Ga1_docid, GA1_fieldNAme, Ga1_fieldValue)
    VALUES (value1, value2, value3, value4)
END
If you're using SQL Server 2012, use a sequence object - CREATE SEQUENCE.
This way you can get the next value using NEXT VALUE FOR.
With an older SQL Server version, you need to create the primary key field as an IDENTITY field and use the SCOPE_IDENTITY function to get the last identity value and then increment it manually.
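A short sketch of both options (the sequence name and the literal values are placeholders):

-- SQL Server 2012+: a sequence object
CREATE SEQUENCE Ga1_seq AS INT START WITH 1 INCREMENT BY 1;
INSERT INTO tableName (GA1_ID, Ga1_docid) VALUES (NEXT VALUE FOR Ga1_seq, 42);

-- older versions: IDENTITY column, then read the generated value
INSERT INTO tableName (Ga1_docid) VALUES (42);
SELECT SCOPE_IDENTITY();  -- last identity value produced in this scope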
Normally, you would like to have a surrogate key, which is generally an identity column that will automatically increment when you are inserting rows, so that you don't have to care about knowing which ids already exist.
However, if you have to insert the id manually, there are a few alternatives, and knowing which SQL database you are using would help; but in most SQL implementations you should be able to do something like:
IF NOT EXISTS
IF NOT EXISTS(
SELECT 1
FROM your_table
WHERE Ga1_ID = 1
)
INSERT INTO ...
SELECT WHERE NOT EXISTS
INSERT INTO your_table (col_1, col_2)
SELECT col_1, col_2
FROM (
SELECT 1 AS col_1, 2 AS col_2
UNION ALL
SELECT 3, 4
) q
WHERE NOT EXISTS (
SELECT 1
FROM your_table
WHERE col_1 = q.col_1
)
For MS SQL Server, you can also look at the MERGE statement and for MySQL, you can use the INSERT IGNORE statement.
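For illustration, hedged sketches of both (reusing the your_table columns from above):

-- SQL Server: MERGE that inserts only unmatched keys
MERGE your_table AS t
USING (VALUES (1, 2)) AS q (col_1, col_2)
   ON t.col_1 = q.col_1
WHEN NOT MATCHED THEN
    INSERT (col_1, col_2) VALUES (q.col_1, q.col_2);

-- MySQL: silently skip rows that would violate the key
INSERT IGNORE INTO your_table (col_1, col_2) VALUES (1, 2);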
I've got some duplicate records in a table because, as it turns out, Netezza does not support constraint checks on primary keys. That being said, I have some records where the information is exactly the same, and I want to delete just ONE of them. I've tried doing
delete from table_name where test_id=2025 limit 1
and also
delete from table_name where test_id=2025 rowsetlimit 1
However neither option works. I get an error saying
found 'limit'. Expecting a keyword
Is there any way to limit the records deleted by this query? I know I could just delete the record and reinsert it but that is a little tedious since I will have to do this multiple times.
Please note that this is not SQL Server or MySQL. This is for Netezza.
If it doesn't support either "DELETE TOP 1" or the "LIMIT" keyword, you may end up having to do one of the following:
1) Add some sort of auto-incrementing column (like an ID), making each row unique. I don't know if you can do that in Netezza after the table has been created, though.
2) Programmatically read the entire table with some programming language, eliminate the duplicates programmatically, then delete all the rows and insert them again. This might not be possible if they are referenced by other tables, in which case you might have to temporarily remove the constraint.
I hope that helps. Please let us know.
And for future reference; this is why I personally always create an auto-incrementing ID field, even if I don't think I'll ever use it. :)
The query below works for deleting duplicates from a table:

DELETE FROM YOURTABLE
WHERE COLNAME1 = 'XYZ'
AND (COLNAME1, ROWID) NOT IN
(
    SELECT COLNAME1, MAX(ROWID)
    FROM YOURTABLE
    WHERE COLNAME1 = 'XYZ'
    GROUP BY COLNAME1
)
If the records are identical then you could do something like
CREATE TABLE DUPES AS
SELECT col1, col2, col3, ..., coln
FROM source_table
WHERE test_id = 2025
GROUP BY 1, 2, 3, ..., n

DELETE FROM source_table WHERE test_id = 2025

INSERT INTO source_table SELECT * FROM DUPES

DROP TABLE DUPES
You could even create a sub-query to select all the test_ids HAVING COUNT(*) > 1 to automatically find the dupes in steps 1 and 3
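That sub-query could look something like this (reusing the source_table/test_id names from the example above):

SELECT test_id
FROM source_table
GROUP BY test_id
HAVING COUNT(*) > 1;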
-- remove duplicates from the <<TableName>> table
delete from <<TableName>>
where rowid not in
(
select min(rowid) from <<TableName>>
group by col1, col2, col3
);
The GROUP BY 1,2,3,...,n will eliminate the dupes on the insert into the temp table.
Is the use of rowid even allowed in Netezza? As far as I know, this query will not execute in Netezza...
I have this table which doesn't have a primary key.
I'm going to insert some records into a new table to analyze them, and I'm thinking of creating a new primary key with the values from all the available columns.
If this were a programming language like Java I would:
int hash = column1 * 31 + column2 * 31 + column3*31
Or something like that. But this is SQL.
How can I create a primary key from the values of the available columns? It won't work for me to simply mark all the columns as the PK, for what I need to do is compare them with data from another DB table.
My table has 3 numbers and a date.
EDIT: What my problem is
I think a bit more background is needed. I'm sorry for not providing it before.
I have a database (dm) that is being updated every day from another db (the original source). It has records from the past two years.
Last month (July) the update process broke, and for a month no data was updated into dm.
I manually created a table with the same structure in my Oracle XE, and copied the records from the original source into my db (myxe). I copied only records from July, to create a report needed by the end of the month.
Finally, on Aug 8 the update process got fixed, and the records which had been waiting to be migrated by this automatic process got copied into the database (from the original source to dm).
This process cleans the data out of the original source once it is copied (into dm).
Everything looked fine, but we have just realized that some of the records got lost (about 25% of July's).
So what I want to do is use my backup (myxe) and insert into the database (dm) all those missing records.
The problems here are:
They don't have a well defined PK.
They are in separate databases.
So I thought that if I could compute the same unique PK from the columns in both tables, I could tell which records were missing and insert them.
EDIT 2
So I did the following in my local environment:
select a.* from the_table@PRODUCTION a, the_table b where
a.idle = b.idle and
a.activity = b.activity and
a.finishdate = b.finishdate
Which returns all the rows that are present in both databases (the intersection, really). I've got 2,000 records.
What I'm going to do next is delete them all from the target db and then insert them all from my db into the target table.
I hope I don't get into something worse :-S
The danger of creating a hash value by combining the 3 numbers and the date is that it might not be unique and hence cannot be used safely as a primary key.
Instead I'd recommend using an autoincrementing ID for your primary key.
Just create a surrogate key:
ALTER TABLE mytable ADD pk_col INT
UPDATE mytable
SET pk_col = rownum
ALTER TABLE mytable MODIFY pk_col INT NOT NULL
ALTER TABLE mytable ADD CONSTRAINT pk_mytable_pk_col PRIMARY KEY (pk_col)
or this:
ALTER TABLE mytable ADD pk_col RAW(16)
UPDATE mytable
SET pk_col = SYS_GUID()
ALTER TABLE mytable MODIFY pk_col RAW(16) NOT NULL
ALTER TABLE mytable ADD CONSTRAINT pk_mytable_pk_col PRIMARY KEY (pk_col)
The latter uses GUIDs, which are unique across databases but consume more space and are much slower to generate (your INSERTs will be slow).
Update:
If you need to create same PRIMARY KEYs on two tables with identical data, use this:
MERGE
INTO mytable v
USING (
SELECT rowid AS rid, rownum AS rn
FROM mytable
ORDER BY
col1, col2, col3
)
ON (v.rowid = rid)
WHEN MATCHED THEN
UPDATE
SET pk_col = rn
Note that the tables should be identical down to the individual row (i.e. have the same number of rows with the same data in them).
Update 2:
For your very problem, you don't need a PK at all.
If you just want to select the records missing in dm, use this one (on dm side)
SELECT *
FROM mytable@myxe
MINUS
SELECT *
FROM mytable
This will return all records that exist in mytable@myxe but not in mytable.
Note that it will collapse any duplicates.
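And since the end goal is to insert the missing rows into dm, the same MINUS can feed an INSERT directly; a sketch, assuming identical column order in both tables:

INSERT INTO mytable
SELECT * FROM mytable@myxe
MINUS
SELECT * FROM mytable;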
Assuming that you have ensured uniqueness...you can do almost the same thing in SQL. The only problem will be the conversion of the date to a numeric value so that you can hash it.
Select Table2.SomeFields
FROM Table1 LEFT OUTER JOIN Table2 ON
(Table1.col1 * 31) + (Table1.col2 * 31) + (Table1.col3 * 31) +
((DatePart(year,Table1.date) + DatePart(month,Table1.date) + DatePart(day,Table1.date) )* 31) = Table2.hashedPk
The above query would work for SQL Server, the only difference for Oracle would be in terms of how you handle the date conversion. Moreover, there are other functions for converting dates in SQL Server as well, so this is by no means the only solution.
And, you can combine this with Quassnoi's SET statement to populate the new field as well. Just use the left side of the Join condition logic for the value.
If you're loading your new table with values from the old table, and you then need to join the two tables, you can only "properly" do this if you can uniquely identify each row in the original table. Quassnoi's solution will allow you to do this, IF you can first alter the old table by adding a new column.
If you cannot alter the original table, generating some form of hash code based on the columns of the old table would work -- but, again, only if the hash codes uniquely identify each row. (Oracle has checksum functions, right? If so, use them.)
If hash code uniqueness cannot be guaranteed, you may have to settle for a primary key composed of as many columns as are required to ensure uniqueness (e.g. the natural key). If there is no natural key, well, I heard once that Oracle provides a rownum for each row of data; could you use that?
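Oracle does offer ORA_HASH, for instance; a sketch of such a comparison key using the column names from the question (beware that hash collisions make this a screening tool only, not a safe primary key):

SELECT ORA_HASH(idle || '|' || activity || '|' ||
                TO_CHAR(finishdate, 'YYYY-MM-DD HH24:MI:SS')) AS row_hash
FROM the_table;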