Here's a strange one. I'm using SubSonic 2.0.3 to insert a new row into a given table.
The table includes an int identity field that is set up properly in the database (Identity seed = 1, Identity increment = 1). Obviously I do not explicitly set this value before calling .Save().
Ever since I rebuilt my development DB (copying from my prod DB), the .Save() fails with the message:
"The INSERT statement conflicted with the CHECK constraint \"repl_identity_range_tran_661577395\". The conflict occurred in database \"blah\", table \"dbo.ScheduledEmails\", column 'MyIdentity'"
The database is replicated, and I didn't explicitly create that constraint. To be honest, I don't understand the constraint, since the condition is ([MyIdentity]>(7) AND [MyIdentity]<(20000)). The constraint from the Prod DB has different numbers, but it's the same format as this one from my Dev DB.
Any clues about this bizarre issue?
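If it helps anyone diagnose the same thing, a quick check (a sketch only, reusing the table and constraint names from the error message above) is to compare the table's current identity value against the replication range enforced by that constraint:

-- Show the replication identity-range check constraint(s) on the table
SELECT name, definition
FROM sys.check_constraints
WHERE parent_object_id = OBJECT_ID('dbo.ScheduledEmails');

-- Report the current identity value without reseeding; if it falls outside
-- the range shown above, every insert will hit the constraint
DBCC CHECKIDENT ('dbo.ScheduledEmails', NORESEED);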
I added a simple check constraint to a table that contained some rows violating the constraint; I was expecting to be blocked by the engine due to an integrity check of old data but the schema was saved without warning. The constraint works as expected on new insertions or updates. Is it the expected behavior?
I'm using: XAMPP 3.3 (Windows 10), MariaDB 10.4.21, and SQLYog community edition as frontend
Yes, this is expected behaviour: a CHECK constraint is only evaluated when a row is inserted or updated. From the manual: "Before a row is inserted or updated, all constraints are evaluated in the order they are defined." Since the old rows were inserted before the constraint existed, you will not get a violation error for them.
You can run a SELECT to check for all violations and update them according to a plan, or you could rename the old table, create a new one with the old name, and insert all the data; that insert will stop as soon as it hits a constraint violation, but the first approach is safer.
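For example, with a hypothetical table orders and a constraint CHECK (qty > 0) (both names made up for illustration), the pre-existing offenders can be listed by negating the constraint expression:

-- Hypothetical constraint: ALTER TABLE orders ADD CONSTRAINT chk_qty CHECK (qty > 0);
-- Existing rows that would violate it:
SELECT * FROM orders WHERE NOT (qty > 0) OR qty IS NULL;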
I have an on-premises table of about 21 million rows with a primary key constraint, and when I search that table there are no duplicates. This table is in an OLTP application database that is constantly moving.
I have the exact same table in Azure which has the same primary key constraint. This table is not an application table, it's just a copy of the one that is on-premise (the goal is to use this one for ad hoc queries, as a source for other systems, etc.).
When I use Azure Data Factory to copy all columns from the on-premises table into the table in Azure, it returns a violation of the primary key constraint. No matter how many times I run this Data Factory pipeline, it comes back with a primary key violation for duplicate keys (though the offending keys are different every time).
So I dropped the primary key constraint in Azure and ran the pipeline again, and sure enough, duplication exists.
Upon investigation, it appears that the on-premises database inserts a new record and then updates the old record to inactivate it. So for a fraction of a second there are two active rows, which ADF grabs and then tries to insert into the table in Azure, which of course fails because of the duplicate primary keys.
Now to the best of my knowledge, this shouldn't be possible. You can't insert a new row that violates the primary key constraint. But ADF seems to be grabbing all the data and some of those rows are mid-flight where the insert has happened and the update to inactivate the old row hasn't happened yet.
For those who are curious, the insert and the update of the old row happen less than a second apart... typically 10-20 microseconds. I don't know how this is possible, and I don't know how to fix it (because I can't modify the application code). The on-premises database is SQL Server 2000, and the target is an Azure SQL Database.
Try the READPAST hint. It will not select rows that are currently locked.
SELECT * FROM yourtable WITH (readpast)
Since you have created_date and updated_date columns, you can select only rows older than 5 seconds to avoid the duplication.
select * from yourtable where created_date<=dateadd(second,-5,getdate()) and updated_date<=dateadd(second,-5,getdate());
You need to enable fault tolerance in the Azure Data Factory pipeline.
Copy data from a Source SQL to a Sink SQL database. A primary key is defined in the sink SQL database, but no such primary key is defined in the source SQL server. The duplicated rows that exist in the source cannot be copied to the sink. Copy activity copies only the first row of the source data into the sink. The subsequent source rows that contain the duplicated primary key value are detected as incompatible and are skipped.
To skip the incompatible rows, set "enableSkipIncompatibleRow": true in the copy activity's JSON definition.
Please Refer: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-fault-tolerance
If you can modify your application, check for the primary key before the insert or update using EXISTS().
Example:
IF EXISTS (SELECT 1 FROM Table_Name WHERE <primary key condition>)
BEGIN
    UPDATE Table_Name
    SET Col_Name = <value>
    WHERE <primary key condition>
END
ELSE
BEGIN
    INSERT INTO Table_Name (Col_Name1, Col_Name2)
    VALUES (<value1>, <value2>)
END
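Note that this check-then-insert pattern can race under concurrency: two sessions can both pass the EXISTS check and then both try to insert. If that matters for your workload, one hedged variant (Table_Name, Key_Col, and the @-parameters are placeholders, not names from your schema) is to hold key locks inside a transaction:

BEGIN TRAN
IF EXISTS (SELECT 1 FROM Table_Name WITH (UPDLOCK, HOLDLOCK) WHERE Key_Col = @key)
BEGIN
    UPDATE Table_Name SET Col_Name = @value WHERE Key_Col = @key
END
ELSE
BEGIN
    INSERT INTO Table_Name (Key_Col, Col_Name) VALUES (@key, @value)
END
COMMIT TRAN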
I am building a .NET disconnected client-server application that uses Entity Framework 5 (EF5) to generate a SQL Server CE 4.0 database from POCOs. The application allows the user to perform a bulk copy of data from the network SQL Server into the client's SQL Server CE database. This is very (VERY) slow, due to the constraints and indexes created by EF5. Temporarily dropping the constraints and indexes will reduce the 30-minute wait to 1 minute or less.
Before starting the bulk copy, the application executes queries to drop the constraints and indexes from the SQL Server CE tables. However, those commands fail, because the constraint names EF5 generates include the table schema name, a dot, and the table name, and the dot in the constraint name causes the DROP command to fail with a parsing error.
For example, POCO Customer creates table dbo.Customer with the primary key constraint PK_dbo.Customer_Id. The database performs as expected.
However, upon executing non-query:
ALTER TABLE Customer DROP CONSTRAINT PK_dbo.Customer;
SQL Server Compact ADO.NET Data Provider returns an error:
There was an error parsing the query.
[ Token line number = 1, Token line offset = 57, Token in error = . ]
Of course, using a secondary DataContext object that does not have the foreign keys to generate the database without the constraints, and then adding them later, works; but that requires maintaining two DataContext objects and hopefully never forgetting to keep both updated. Therefore, I am looking for one of two solutions:
Compose the DROP statement in such a way that the . character is parsed
Prevent EF5 from using the . character in the constraint and index names
Thank you in advance for your help!
Wrap that bad boy in []. It tells the parser that everything inside the brackets is a single identifier.
ALTER TABLE Customer DROP CONSTRAINT [PK_dbo.Customer];
Should run fine.
Personally I just wrap every identifier in brackets to avoid this exact issue. So I would write this query like this.
ALTER TABLE [Customer] DROP CONSTRAINT [PK_dbo.Customer];
I think it's more readable that way because you can instantly see identifiers.
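If you also need to discover the EF-generated names at run time before dropping them, SQL Server Compact exposes INFORMATION_SCHEMA views, so a sketch along these lines (untested here, and assuming TABLE_CONSTRAINTS is available in your CE 4.0 database) can feed the bracketed DROP statements:

-- Enumerate constraint names so each DROP can wrap them in brackets
SELECT TABLE_NAME, CONSTRAINT_NAME, CONSTRAINT_TYPE
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS;

-- For each row, issue something like:
-- ALTER TABLE [Customer] DROP CONSTRAINT [PK_dbo.Customer];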
I have a database that I migrated from MySql using SQL Server Migration Assistant and it is now stored in Azure.
SSMA apparently generated a new primary key column, named ssma$rowid, for one of the tables. I am trying to change the PK back to Card_Key, but I am getting the following error:
An error was encountered while applying the changes.
An exception occurred while executing the Transact-SQL statement:
ALTER TABLE [carddb].[Cards] ALTER COLUMN [Card_Key] INT NOT NULL.
The index 'Card_Key' is dependent on column 'Card_Key'.
ALTER TABLE ALTER COLUMN Card_Key failed because one or more objects
access this column.
How can I make Card_Key the PK again?
The easiest might be to create a new table [cards2] with the correct primary key and copy your data from [cards] into the new table (just run an INSERT INTO cards2 ... SELECT ... FROM cards). Once that's done, you can drop the original table [cards] (or rename it to [cardsold] to be on the safe side) and rename the new table to [cards]: sp_rename cards2, cards
This should work.
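A rough sketch of those steps (only Card_Key is spelled out; the remaining columns and the primary key name are placeholders you would fill in from the real table):

-- 1. New table with Card_Key as the primary key
CREATE TABLE [carddb].[Cards2]
(
    Card_Key INT NOT NULL CONSTRAINT PK_Cards2 PRIMARY KEY
    -- , ... remaining columns copied from [carddb].[Cards]
);

-- 2. Copy the data across
INSERT INTO [carddb].[Cards2] (Card_Key /*, ... */)
SELECT Card_Key /*, ... */ FROM [carddb].[Cards];

-- 3. Keep the original around just in case, then swap the names
EXEC sp_rename 'carddb.Cards', 'CardsOld';
EXEC sp_rename 'carddb.Cards2', 'Cards';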
I have never seen this before: the rows are normally sequential, but I have noticed that it skipped over a particular "ID"... 1 2 3 4 6 7 8... missing 5.
There are no transactions in the INSERT stored procedure, so there is nothing to roll back.
We do not allow the deletion of records.
What else can be the case?
Probably a failed insert, caused either by some other unique constraint on the table or by a foreign key reference where you tried to insert an invalid FK value.
The insert doesn't have to be in a transaction.
The identity value increments even on a failed insert.
Igor mentions an important point about identities and transactions. From the docs:
Failed statements and transactions can change the current identity for a table and create gaps in the identity column values. The identity value is never rolled back even though the transaction that tried to insert the value into the table is not committed. For example, if an INSERT statement fails because of an IGNORE_DUP_KEY violation, the current identity value for the table is still incremented.
Anyway, the identity counter does not restore your value (whether you executed inside a transaction or not). Oracle has the same behavior with sequences.
Identity is not transactional.
You may use your own primary key counter and control access to it.
A failed attempt still generates an identity value even though nothing is inserted into the data table, so that identity value is simply lost (it's a shame I can no longer find the post where I learned this!).
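If you want to watch it happen, here is a small throwaway sketch (table and column names made up) where the failed insert burns identity value 2, so the next successful row gets 3:

-- Demo: a failed insert still consumes an identity value
CREATE TABLE dbo.IdentityGapDemo
(
    Id   INT IDENTITY(1,1) PRIMARY KEY,
    Code VARCHAR(10) NOT NULL UNIQUE
);

INSERT INTO dbo.IdentityGapDemo (Code) VALUES ('A'); -- gets Id = 1
INSERT INTO dbo.IdentityGapDemo (Code) VALUES ('A'); -- fails on the UNIQUE constraint, Id 2 is burned
INSERT INTO dbo.IdentityGapDemo (Code) VALUES ('B'); -- gets Id = 3

SELECT Id, Code FROM dbo.IdentityGapDemo;            -- returns Ids 1 and 3; 2 is the gap

DROP TABLE dbo.IdentityGapDemo;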