Primary Key Identity Value Increments On Unique Key Constraint Violation

I have a SQL Server 2008 table that has a primary key (IsIdentity=Yes) and three other fields that make up a unique key constraint.
In addition, I have a stored procedure that inserts a record into the table, and I call the sproc from C# using a SqlConnection object.
The C# sproc call works fine; however, I have noticed interesting results when the call violates the unique key constraint.
When the sproc call violates the unique key constraint, a SqlException is thrown, which is no surprise and fine. However, I notice that the next record that is successfully added to the table has a PK value that is not exactly one more than the previous record.
For example, say the table has five records with PK values 1, 2, 3, 4, and 5. The sproc attempts to insert a sixth record, but the unique key constraint is violated, so the sixth record is not inserted. The sproc then attempts to insert another record, and this time it succeeds. This new record is given a PK value of 7 instead of 6.
Is this normal behavior? If so, can you give me a reason why this is so? (If a record fails to insert, why is the identity value incremented?)
If this is not normal behavior, can you give me any hints as to why I am seeing these symptoms?

Yes, this is normal.
Imagine concurrent transactions and this potential order of operations run against SQL Server:
IDs 1, 2, 3, 4, 5 are used.
Client A begins transaction.
Client A performs insert but does not commit (ID 6).
Client B begins transaction.
Client B performs insert but does not commit. (ID 7).
Client A rolls back.
Client B commits.
Because this interleaving is possible (not because it necessarily happened), identity values are handed out outside of transaction control and are never reclaimed; that is why ID 6 gets skipped when your insert fails.
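You can also reproduce the gap without any concurrency, since a failed insert still consumes the identity value. A minimal T-SQL sketch (table and column names are made up for illustration):
CREATE TABLE demo (
  id INT IDENTITY(1,1) PRIMARY KEY,
  code CHAR(1) NOT NULL,
  CONSTRAINT uq_demo_code UNIQUE (code)
);

INSERT INTO demo (code) VALUES ('A'); -- succeeds, id = 1
INSERT INTO demo (code) VALUES ('A'); -- unique key violation; identity value 2 is consumed anyway
INSERT INTO demo (code) VALUES ('B'); -- succeeds, id = 3, not 2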

Related

How to know if an uncommitted transaction tries to insert a specific unique key into SQL

I'm writing a program that inserts data into a MariaDB server and can be used by different people at the same time. The transactions take some time, so the following problem might occur: person A starts a transaction with primary key "c", and while that transaction is still uncommitted, person B wants to insert data with the same primary key "c". How can I prevent B from starting a transaction with a primary key that A is already using in its uncommitted transaction?
I use MariaDB as the database and InnoDB as the engine.
I've checked the isolation levels but couldn't figure out how to use them to solve my problem.
Thanks!
It has nothing to do with transaction isolation levels. It's about locking.
Any insert/update/delete of a specific entry in an index locks that entry. Locks are granted first-come, first-served. The next session that tries to insert/update/delete the same index entry will be blocked.
You can demo this yourself. Open two MySQL client windows side by side.
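The original answer doesn't show the table; a minimal hypothetical version for the demo would be:
mysql> CREATE TABLE mytable (c INT NOT NULL, PRIMARY KEY (c));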
First window:
mysql> START TRANSACTION;
mysql> INSERT INTO mytable SET c = 42;
Then don't commit yet.
Second window:
mysql> INSERT INTO mytable SET c = 42;
Notice that it hangs at this point, waiting for the lock.
First window:
mysql> commit;
Second window finally returns:
ERROR 1062 (23000): Duplicate entry '42' for key 'PRIMARY'
Every table should have a PRIMARY KEY. In MySQL, the PRIMARY KEY is, by definition, UNIQUE.
You can also have UNIQUE keys declared on the table.
Each connection should be doing this to demarcate a transaction:
BEGIN;
various SQL statements
COMMIT;
If any of those SQL statements inserts a row, the unique key(s) block other connections from inserting the same unique value into that table. The blocking can end in an error: a deadlock (fatal to the transaction) or a "lock wait timeout", which the application might recover from by retrying.
Note: If you have any SELECTs in the transaction, you may need to stick FOR UPDATE on the end of them. This signals what rows you might change in the transaction, thereby giving other connections a heads-up to stay out of the way.
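For example, a sketch with hypothetical table and column names:
mysql> START TRANSACTION;
mysql> SELECT qty FROM inventory WHERE item_id = 7 FOR UPDATE; -- row is now locked
mysql> UPDATE inventory SET qty = qty - 1 WHERE item_id = 7;
mysql> COMMIT;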
Can you find out if any of this is going on? Not really. But why bother? Simply plow ahead and do what you need to do. But check for errors to see if some other connection prevented you from doing it.
Think of it as "optimistic" coding.
Leave the isolation level alone; it only adds confusion to typical tasks.
Primary keys are internal values that ensure uniqueness of rows and are not meant to be exposed to the external world.
Generate your primary keys using AUTO_INCREMENT columns or SEQUENCEs. They handle multiple simultaneous inserts gracefully and assign each one a different value.
Using AUTO_INCREMENT:
CREATE TABLE house (
  id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
  address VARCHAR(40) NOT NULL
);
INSERT INTO house (address) VALUES ('123 Maple Street');
Using a SEQUENCE:
CREATE SEQUENCE myseq1;
CREATE TABLE house (
  id INTEGER NOT NULL PRIMARY KEY,
  address VARCHAR(40) NOT NULL
);
INSERT INTO house (id, address) VALUES (NEXT VALUE FOR myseq1, '123 Maple Street');

Postgresql turns update to insert and creates duplicate record

I'm not quite sure how to ask this question.
The table stores its main data in a JSONB column. The other columns are an integer primary key, a unique text secondary key, an application-generated integer transaction id, and the type of operation last performed (insert, update, delete).
There are 5 triggers:
1. On before insert and update, set the new.operation column to TG_OP (more on this later; a sketch follows this list).
2. On before insert, generate a unique 6-digit alphameric code for use in URLs.
3. On before insert, generate a unique, random 6-digit numeric code, avoiding the German Tank Problem.
4. On before insert, add the numeric and alphameric codes to the JSONB object.
5. On after update and delete, insert the old record, appended with the new tranid and operation values, into an unindexed archive table.
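For reference, the first trigger might look roughly like this (a sketch with hypothetical names; EXECUTE FUNCTION requires PostgreSQL 11+, older versions use EXECUTE PROCEDURE):
CREATE OR REPLACE FUNCTION set_operation() RETURNS trigger AS $$
BEGIN
  NEW.operation := TG_OP; -- 'INSERT' or 'UPDATE', depending on the firing statement
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_set_operation
  BEFORE INSERT OR UPDATE ON main_table
  FOR EACH ROW EXECUTE FUNCTION set_operation();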
All of the triggers seem to work and the records get created with the new ids and the ids in the JSONB column.
However, on an update, the new record's operation column gets set to "update" from the TG_OP variable, but the record gets inserted into the table, creating duplicate keys. Subsequent operations on that record fail because of the duplicate records.
I've stepped through it in the pgAdmin debugger. It seems to go through each trigger correctly. It completes with a record from the insert (e.g. tranid=254, operation=insert) and another from the update (e.g. tranid=256, operation=update). The archive table has one record added which shows the original info was 254/insert and it was replaced by 256/update.
But there are two records in the main table!!!
This is a violation of two uniqueness constraints which should have caused it to fail:
CONSTRAINT npprimarykey_id PRIMARY KEY (id),
CONSTRAINT npid_txt_unique UNIQUE (id_txt)
Beyond that, the command being executed was an UPDATE.
I'm not clear where to look or in which forum to ask this question. Which forum do the people building PostgreSQL frequent?
Thanks,
David

SQL Server breaks the assigned identity seed increment?

I have two tables: table1 (column id: seed 1000, increment 1) and table2 (column id: seed 2000, increment 1). First, I insert some records into table1 and table2. Second, I insert table2's records into table1 (using SET IDENTITY_INSERT table1 ON) and get something like:
1000 first_record_table1
1001 second_record_table1
1002 third_record_table1
2000 first_record_table2
2001 second_record_table2
Third: if I add a new record to table1, I expect to get 1003 as the new id, but I get 2002.
(1003 would not break the unique id rule and is the correct increment value for table1, so I don't see any reason to jump to the last record and increment by 1, as the database seems to do.)
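Restated as a runnable T-SQL sketch (table and column names are illustrative):
CREATE TABLE table1 (id INT IDENTITY(1000,1) PRIMARY KEY, txt VARCHAR(40));

INSERT INTO table1 (txt) VALUES ('first_record_table1');   -- id 1000
INSERT INTO table1 (txt) VALUES ('second_record_table1');  -- id 1001
INSERT INTO table1 (txt) VALUES ('third_record_table1');   -- id 1002

SET IDENTITY_INSERT table1 ON;
INSERT INTO table1 (id, txt) VALUES (2000, 'first_record_table2');
INSERT INTO table1 (id, txt) VALUES (2001, 'second_record_table2');
SET IDENTITY_INSERT table1 OFF;

-- SQL Server repositions the current identity value to the highest value seen,
-- so the next plain insert gets 2002, not 1003:
INSERT INTO table1 (txt) VALUES ('new_record');            -- id 2002
-- (DBCC CHECKIDENT ('table1', RESEED, 1002) would push the next value back
-- to 1003, at the cost of a collision once the column reaches 2000.)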
Question: How can I get 1003 as the new id?
The documentation on identity is quite clear:
The identity property on a column does not guarantee the following:
Uniqueness of the value – Uniqueness must be enforced by using a PRIMARY KEY or UNIQUE constraint or UNIQUE index.
Consecutive values within a transaction – A transaction inserting multiple rows is not guaranteed to get consecutive values for the rows because other concurrent inserts might occur on the table. If values must be consecutive then the transaction should use an exclusive lock on the table or use the SERIALIZABLE isolation level.
Consecutive values after server restart or other failures – SQL Server might cache identity values for performance reasons and some of the assigned values can be lost during a database failure or server restart. This can result in gaps in the identity value upon insert. If gaps are not acceptable then the application should use a sequence generator with the NOCACHE option or use their own mechanism to generate key values.
Reuse of values – For a given identity property with specific seed/increment, the identity values are not reused by the engine. If a particular insert statement fails or if the insert statement is rolled back then the consumed identity values are lost and will not be generated again. This can result in gaps when the subsequent identity values are generated.
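For reference, the sequence-with-NOCACHE alternative mentioned above might look like this on SQL Server 2012+ (a sketch; sequence, table, and column names are hypothetical):
CREATE SEQUENCE dbo.order_seq AS INT START WITH 1 INCREMENT BY 1 NO CACHE;

CREATE TABLE orders (
  id INT NOT NULL PRIMARY KEY DEFAULT (NEXT VALUE FOR dbo.order_seq),
  note VARCHAR(40) NOT NULL
);

INSERT INTO orders (note) VALUES ('first'); -- id = 1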
In most cases, an identity primary key column does just what it is intended to do. It creates a new value bigger than any previous value when a new row is inserted. There may be gaps, but that is not a problem for a primary key.
If you want a column that fills in gaps, then you will have to write a trigger to assign the values.
Alternatively, you could just use row_number() in queries to get sequential values.
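For example, a sketch against the table above:
SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS seq
FROM table1;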
Increment 1 just means the next record will have an ID that's one larger than the largest so far, not that all numbers will be used. If you want 1003 as the new ID, you'll have to do some programming yourself and handle the case where the new ID is already taken once you reach 2000.
But you should never ever rely on generated IDs being gap-free. Imagine you have 2 sessions. The first session inserts something and neither commits nor rolls back yet. The second session inserts something and commits. The first session rolls back. How do you want to handle this case? You needed to assign some ID to the first session and a different ID to the second one. Now the first session rolls back, so its ID is unused. Do you want to subtract one from the second session's ID? Very bad idea. Block the 2nd session until the 1st one has either rolled back or committed? Very bad idea as well.
So please forget about consecutive IDs, they will never work. Unique IDs are what you normally need, and SQL Server guarantees them.

Oracle - How does Oracle manage transaction specific DML statements

Imagine I have this simple table:
Table Name: Table1
Columns: Col1 NUMBER (Primary Key)
Col2 NUMBER
If I insert a record into Table1 with no commit...
INSERT INTO Table1 (Col1, Col2) Values (100, 1234);
How does Oracle know that this next INSERT statement violates the PK constraint, since nothing has been committed to the database yet?
INSERT INTO Table1 (Col1, Col2) Values (100, 5678);
Where/how does Oracle manage the transactions so that it knows I'm violating the constraint when I haven't even committed the transaction yet?
Oracle creates an index to enforce the primary key constraint (a unique index by default). When Session A inserts the first row, the index structure is updated, but the change is not committed. When Session B tries to insert the second row, the index maintenance operation notes that there is already a pending entry in the index with that particular key. Session B cannot acquire the lock on that index entry, so it blocks until Session A's transaction completes. At that point, Session B will either be able to make its own modification to the index (because A rolled back) or will see that the other entry has been committed and throw a unique constraint violation (because A committed).
It's because of the unique index that enforces the primary key constraint. Even though the insert into the data block is not yet committed, the attempt to add the duplicate entry into the index cannot succeed, even if it's done in another session.
Just because you haven't committed yet does not mean the first record hasn't been sent to the server. Oracle already knows about your intention to insert the first record. When you insert the second record in the same session, Oracle knows for sure there is no way this can ever succeed without a constraint violation, so it refuses.
If another user were to insert the second record while your first record is uncommitted, that session would block until you commit or roll back, as described above; only with a deferred constraint would the conflict surface at commit time.
Unless a particular constraint is "deferred", it will be checked at the point of the statement execution. If it is deferred, it will be checked at the end of the transaction. I'm assuming you did not defer your PRIMARY KEY and that's why you get a violation even before you commit.
How this is really done is an implementation detail and may vary between different database systems and even versions of the same system. The application developer should probably not make too many assumptions about it. In Oracle's case, PRIMARY KEY uses the underlying index for performance reasons, while there are systems out there that do not even require an index (if you can live with the corresponding performance hit).
BTW, a deferrable Oracle PRIMARY KEY constraint relies on a non-unique index (vs. a non-deferrable PRIMARY KEY, which uses a unique index).
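For illustration, a deferrable version of the example table (a sketch; the constraint name is made up):
CREATE TABLE table1 (
  col1 NUMBER,
  col2 NUMBER,
  CONSTRAINT table1_pk PRIMARY KEY (col1) DEFERRABLE INITIALLY DEFERRED
);

INSERT INTO table1 (col1, col2) VALUES (100, 1234); -- succeeds
INSERT INTO table1 (col1, col2) VALUES (100, 5678); -- also succeeds for now
COMMIT; -- the duplicate is detected here: ORA-02091 with ORA-00001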
--- EDIT ---
I just realized you didn't even commit the first INSERT. I think Justin's answer explains nicely how what is essentially lock contention causes one of the transactions to stall.