In SQL Server 2008 R2 I added two duplicate records (same ID) to my table. When I try to delete one of the last two records I receive the following error:
The row values updated or deleted either do not make the row unique or they alter multiple rows.
The data is:
7 ABC 6
7 ABC 6
7 ABC 6
8 XYZ 1
8 XYZ 1
8 XYZ 4
7 ABC 6
7 ABC 6
I need to delete the last two records:
7 ABC 6
7 ABC 6
I have been trying to delete the last two records using the "Edit Top 200 Rows" feature, but I get the error above.
Any help is appreciated. Thanks in advance :)
Since you have given no indication that there are other columns in the table, and assuming your data is in three columns A, B and C, you can delete the two rows using:
;with t as (
select top(2) *
from tbl
where A = 7 and B = 'ABC' and C = 6
)
DELETE t;
This will arbitrarily match two rows based on the conditions, and delete them.
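If you want to see which rows will be matched before actually deleting them, you can run the SELECT inside the CTE on its own first (a quick sanity check, reusing the hypothetical tbl table and columns A, B, C from above):

;with t as (
    select top(2) *
    from tbl
    where A = 7 and B = 'ABC' and C = 6
)
SELECT * FROM t;  -- preview the rows the DELETE would remove

Note that without an ORDER BY, TOP(2) picks rows arbitrarily, so the preview is only indicative; since the duplicate rows here are identical, it does not matter which two are removed.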
This is an outline of code I use to delete dups in tables that may have many dups.
/* I always put the rollback and commit up here in comments until I am sure I have
done what I wanted. */
BEGIN TRAN Jim1 -- ROLLBACK TRAN Jim1 -- COMMIT TRAN Jim1; DROP TABLE jt1.dbo.What_Jim_Deleted
/* This creates a table to put the deleted rows in just in case I'm really screwed up */
SELECT top 1 *, NULL dupflag
INTO jt1.dbo.What_Jim_Deleted --DROP TABLE jt1.dbo.What_Jim_Deleted
FROM jt1.dbo.tab1;
/* This removes the row without removing the table */
TRUNCATE TABLE jt1.dbo.What_Jim_Deleted;
/* The cte assigns a row number to each unique security for each day; dups will have a
rownumber > 1. The fields in the PARTITION BY are from the composite key for the
table (if one exists). This is the query that I ran to show them as dups:
SELECT compkey1, compkey2, compkey3, compkey4, COUNT(*)
FROM jt1.dbo.tab1
GROUP BY compkey1, compkey2, compkey3, compkey4
HAVING COUNT(*) > 1
ORDER BY 1 DESC
*/
with getthedups as
(SELECT *,
ROW_NUMBER() OVER
(partition by compkey1,compkey2, compkey3, compkey4
ORDER BY Timestamp desc) dupflag /*This can be anything that gives some order to the rows (even if order doesn't matter) */
FROM jt1.dbo.tab1)
/* This delete is deleting from the cte, which cascades to the underlying table.
The WHERE is part of the DELETE (even though it comes after the OUTPUT). The
OUTPUT takes all of the DELETED rows and inserts them into the "oh shit" table,
just in case. */
DELETE
FROM getthedups
OUTPUT DELETED.* INTO jt1.dbo.What_Jim_Deleted
WHERE dupflag > 1;
--Check the resulting tables here to ensure that you did what you think you did
/* If all has gone well then commit the tran and drop the "oh shit" table, or let it
hang around for a while. */
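As a rough illustration of that final check before committing (reusing the table names from the outline above), you might run something like:

-- Inspect what was removed
SELECT * FROM jt1.dbo.What_Jim_Deleted;

-- Confirm no duplicates remain
SELECT compkey1, compkey2, compkey3, compkey4, COUNT(*)
FROM jt1.dbo.tab1
GROUP BY compkey1, compkey2, compkey3, compkey4
HAVING COUNT(*) > 1;

-- If everything looks right:
-- COMMIT TRAN Jim1; DROP TABLE jt1.dbo.What_Jim_Deleted;
-- Otherwise:
-- ROLLBACK TRAN Jim1;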
Related
I have a table where records are inserted and updated. In case of updates, a new row is inserted into the table. In order to track updates for a given record, there's a column added to the table called root_record_id which holds the id of the very first record in the update chain.
For example, consider the record table schema as follows:
id  root_record_id  other columns
1   1               ...
2   2               ...
3   1               ...
4   1               ...
5   2               ...
In this case, a record with id=1 was inserted, which was then updated to id=3 and then to id=4. Similarly the record with id=2 was inserted and then updated to id=5.
I want to add a version column to this table, where version is incremented on each update and starts with 0.
id  root_record_id  version  other columns
1   1               0        ...
2   2               0        ...
3   1               1        ...
4   1               2        ...
5   2               1        ...
I tried writing queries using a GROUP BY clause on root_record_id but failed to accomplish the task.
If you are looking for the general sequence of steps to add the column and then pre-fill the values, then follow this fiddle: https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=5a04b49fbda3883a9605f5482e252a1b
Add the version column allowing nulls:
ALTER TABLE Records ADD version int null;
Update the version according to your logic:
UPDATE r
SET version = lkp.version
FROM Records r
INNER JOIN (
    SELECT Id, COUNT(root_record_id) OVER (PARTITION BY root_record_id ORDER BY Id ASC) - 1 AS version
    FROM Records
) lkp ON r.Id = lkp.Id;
Alter the version column to NOT allow nulls:
ALTER TABLE Records ALTER COLUMN version int not null;
Finally, ensure that you increment the version column during new row inserts.
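As a sketch of that last step (the Records table and column names come from the question; @RootId is a hypothetical variable identifying the chain being updated), the new row's version could be derived from the current maximum for its root:

DECLARE @RootId int = 1;  -- hypothetical: root_record_id of the chain being updated

INSERT INTO Records (root_record_id, version /*, other columns */)
SELECT @RootId,
       ISNULL(MAX(version), -1) + 1   -- next version for this root
FROM Records
WHERE root_record_id = @RootId;

Under concurrent inserts for the same root you would also need appropriate locking, or a unique constraint on (root_record_id, version), to avoid two rows getting the same version.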
DBFIDDLE
This query produces the version that you can use (in an update, or in a trigger):
SELECT
id,
root_record_id,
RANK() OVER (partition by root_record_id ORDER BY id ASC)-1 version
FROM table1
ORDER BY id;
output:
id  root_record_id  version
1   1               0
2   2               0
3   1               1
4   1               2
5   2               1
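If you prefer to backfill the column with this query rather than the COUNT() OVER approach in the previous answer, a joined UPDATE along the same lines should also work (a sketch, assuming the table1 name used in the query):

UPDATE t
SET t.version = v.version
FROM table1 t
INNER JOIN (
    SELECT id,
           RANK() OVER (PARTITION BY root_record_id ORDER BY id ASC) - 1 AS version
    FROM table1
) v ON v.id = t.id;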
I have a table SL_PROD which has the following columns: NUMBER, DEPTCODE, DISP_SEQ and SL_PROD_ID.
SL_PROD_ID is an identity column which increments with each row.
I need to write a query which updates the DISP_SEQ column with sequential numbers (1-X) for the rows which have a DEPTCODE of '725'. I've tried several things with no luck. Any ideas?
Try this:
A common table expression can be used in updates. This is extremely useful if you want to use the values of window functions (with OVER) as update values.
Attention: look carefully at what you are ordering by. I used NUMBER, but you might need some other sort column (maybe your IDENTITY column).
CREATE TABLE #SL_PROD(NUMBER INT,DEPT_CODE INT,DISP_SEQ INT,SL_PROD_ID INT IDENTITY);
INSERT INTO #SL_PROD(NUMBER,DEPT_CODE,DISP_SEQ) VALUES
(1,123,0)
,(2,725,0)
,(3,725,0)
,(4,123,0)
,(5,725,0);
WITH UpdateableCTE AS
(
SELECT ROW_NUMBER() OVER(ORDER BY NUMBER) AS NewDispSeq
,DISP_SEQ
FROM #SL_PROD
WHERE DEPT_CODE=725
)
UPDATE UpdateableCTE SET DISP_SEQ=NewDispSeq;
SELECT * FROM #SL_PROD;
GO
--Clean up
--DROP TABLE #SL_PROD;
The result (look at the rows with DEPT_CODE 725):
1 123 0 1
2 725 1 2
3 725 2 3
4 123 0 4
5 725 3 5
I've been struggling with sequences for a few days. I have a source table called "datos" with the following columns:
CENTRO
CODV
TEXT
INCIDENCY
And a destination table called "anda" with the following:
TIPO = 31 (for all rows)
DESCRI = 'Site' (for all rows)
SECU = sequence number generated with Myseq.NEXTVAL
CENTRO
CODV
TEXT
The last three columns must be filled in with data from the "datos" table.
When I execute my query, it all works fine: my table is filled and the sequence generates its values. But in the INSERT INTO ... SELECT I have the following conditions:
Every row in the source "datos" must not already exist in the destination "anda", so it won't be duplicated, and every row in "datos" must have its INCIDENCY flag set to 'N' or NULL.
If a row matches the conditions, it should be inserted.
The query works fine and I have tried it with many different values. Here comes the problem:
When a row has its INCIDENCY value set to 'Y' (so it must not be copied into the destination table), it doesn't appear, but the sequence DOES consume one value, and when I check Myseq.NEXTVAL its value is higher.
How can I prevent the sequence from consuming a value when a row doesn't match the conditions? I've read that Oracle first reserves all the possible values returned by the SELECT query, but I can't find how to prevent it.
Here's the SQL:
INSERT INTO anda (TIPO, DESCRI, SECU, CENTRO, CODV, TEXT)
SELECT 31 TIPO,
       'Site' DESCRI,
       Myseq.NEXTVAL,
       datos.CENTRO,
       datos.CODV,
       datos.TEXT
FROM datos
WHERE (CENTRO, CODV) NOT IN
      (SELECT CENTRO, CODV
       FROM anda)
  AND (datos.INCIDENCY = 'N' OR datos.INCIDENCY IS NULL)
Thanks in advance!!
Definition of MySeq:
CREATE SEQUENCE "BBDD"."MySeq" MINVALUE 800000000000 MAXVALUE 899999999999
INCREMENT BY 1 START WITH 800000000000 CACHE 20 ORDER NOCYCLE;
You might be able to trick Oracle into doing this with a CTE:
INSERT INTO anda (TIPO, DESCRI, SECU, CENTRO, CODV, TEXT)
WITH toinsert as (
SELECT d.*
FROM datos d
WHERE (CENTRO, CODV) NOT IN (SELECT CENTRO, CODV FROM anda) AND
(d.INCIDENCY = 'N' OR d.INCIDENCY IS NULL)
)
SELECT 31 as TIPO, 'Site' as DESCRI, Myseq.NEXTVAL,
d.CENTRO, d.CODV, d.TEXT
FROM toinsert d;
I'm not quite sure if that will work. A more guaranteed approach is to use a before insert trigger (or identity column if you are using 12c+). You would increment the value in the trigger.
However, I do agree with Hugh Jones. You should be confident using the sequence to add a unique value to each row and this value will be increasing. Gaps can appear for other reasons, such as deletes. Also, I know that SQL Server can create gaps when doing parallel inserts; I'm not sure if that also happens with Oracle.
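A rough sketch of the before-insert trigger idea (the trigger name is arbitrary; the anda table, SECU column and Myseq sequence come from the question; SELECT ... INTO is used so it also works on pre-11g versions):

CREATE OR REPLACE TRIGGER anda_bi_secu
BEFORE INSERT ON anda
FOR EACH ROW
BEGIN
  -- the sequence is only consumed for rows that are actually inserted
  SELECT Myseq.NEXTVAL INTO :NEW.SECU FROM dual;
END;
/

With the trigger in place you would drop Myseq.NEXTVAL from the INSERT ... SELECT and let the trigger populate SECU.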
I don't believe you have a real problem (the gaps are not really an issue), but you can put a before-insert (row-level) trigger on the anda table and set SECU there with your sequence-generated value.
But keep in mind that this will only keep the SECU numbers consecutive within a statement. You'll get gaps anyway for other reasons.
UPDATE: as Alex Poole has commented, the insert itself does not generate gaps.
See a test below:
> drop sequence tst_fgg_seq;
sequence TST_FGG_SEQ dropped.
> drop table tst_fgg;
table TST_FGG dropped.
> drop table tst_insert_fgg;
table TST_INSERT_FGG dropped.
> create sequence tst_fgg_seq start with 1 nocycle;
sequence TST_FGG_SEQ created.
> create table tst_fgg as select level l from dual connect by level < 11;
table TST_FGG created.
> create table tst_insert_fgg as
select tst_fgg_seq.nextval
from tst_fgg
where l between 3 and 5;
table TST_INSERT_FGG created.
> select * from tst_insert_fgg;
NEXTVAL
----------
1
2
3
> insert into tst_insert_fgg
select tst_fgg_seq.nextval
from tst_fgg
where l between 3 and 5;
3 rows inserted.
> select * from tst_insert_fgg;
NEXTVAL
----------
1
2
3
4
5
6
6 rows selected
I have a table from which I delete records.
The problem is that when I delete a certain record, its ID goes away with it, so the ID sequence is no longer contiguous within the table.
What I want is a SQL Server procedure to renumber the records after one of them is deleted.
Example :
Original IDs    After deleting record 2, I want this    and NOT this
1               1                                       1
2               2                                       3
3               3                                       4
4               4                                       5
5
You don't want to do this. The id should be a field that has no meaning other than identifying a row. You might have other tables that refer to the id and they would break.
Instead, just recalculate a sequential value when you query the table:
select t.*, row_number() over (order by id) as seqnum
from t;
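If you need that sequential number regularly, one option (a sketch; the view name is a placeholder and t stands for your table) is to wrap the query in a view:

CREATE VIEW t_with_seqnum AS
SELECT t.*, ROW_NUMBER() OVER (ORDER BY id) AS seqnum
FROM t;

Querying the view always gives you a gap-free numbering, while the real id stays stable for anything that references it.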
Using SQL Server 2000, I want to find the duplicate records in the table.
Table1
ID Transaction Value
001 020102 10
001 020103 20
001 020102 10 (Duplicate Records)
002 020102 10
002 020103 20
002 020102 10 (Duplicate Records)
...
...
Transaction and Value can repeat for different IDs, but not for the same ID...
Expected Output
Duplicate records are...
ID Transaction Value
001 020102 10
002 020102 10
...
...
How can I write a query to view the duplicate records? I need help with the query.
You can use
SELECT
    ID, [Transaction], Value
FROM
    Table1
GROUP BY
    ID, [Transaction], Value
HAVING COUNT(ID) > 1
SELECT Id, [Transaction], Value, COUNT(Id)
FROM Table1
GROUP BY Id, [Transaction], Value
HAVING COUNT(Id) > 1
This query will show you, alongside each entry, the number of times that combination has been repeated. If you don't need it, you can simply remove the COUNT(Id) column from the select clause.
Self join (with an additional PK, timestamp, or similar)
I can see that people have provided solutions using grouping, but nobody has provided the self-join solution. The only catch is that you need some other row descriptor that is unique for each record, be it a primary key, a timestamp, or anything else. Supposing that the unique column's name is Uniq, this would be the solution:
select distinct ID, [Transaction], Value
from Records r1
join Records r2
on ((r2.ID = r1.ID) and
(r2.[Transaction] = r1.[Transaction]) and
(r2.Value = r1.Value) and
(r2.Uniq != r1.Uniq))
The last join condition makes it possible to avoid joining each row to itself, so it only joins to other duplicates...
To find out which approach works best for you, you can compare their execution plans and do some testing.
You can do this:
SELECT ID, [Transaction], Value
FROM Table1
GROUP BY ID, [Transaction], Value
HAVING COUNT(*) > 1
To delete the duplicates, if you have no primary key then you need to select the distinct values into a separate table, delete everything from this one, then copy the distinct records back:
SELECT ID, [Transaction], Value
INTO #tmpDeduped
FROM Table1
GROUP BY ID, [Transaction], Value

DELETE FROM Table1

INSERT Table1
SELECT * FROM #tmpDeduped