SQL Server: arbitrary auto-increment of primary key [duplicate]

This question already has answers here:
Identity increment is jumping in SQL Server database
(6 answers)
Closed 7 years ago.
We're running SQL Server 2012 SP1 x64 (11.0.3000.0).
I have the following table with the InvoiceId field as the auto-incrementing, primary key:
CREATE TABLE Orders(
InvoiceId bigint IDENTITY(1001,1) NOT FOR REPLICATION,
OrderId varchar(8) NOT NULL,
... -- other fields removed for brevity
CONSTRAINT [PK_ORDERS] PRIMARY KEY CLUSTERED (InvoiceId)
ON [PRIMARY]
)
New rows are inserted through a simple stored procedure like the following:
SET XACT_ABORT ON
SET NOCOUNT ON
BEGIN TRANSACTION
INSERT INTO Orders(
OrderId,
... -- other fields removed for brevity
)
VALUES (
@orderId,
...
)
SELECT @newRowId = SCOPE_IDENTITY()
COMMIT TRANSACTION
The above sproc returns the newly created row-id (Orders.InvoiceId) to the caller.
The code was working perfectly, with [InvoiceId] starting from 1001 and incrementing by 1 with each successive insert.
Our users inserted about 130 rows. [InvoiceId] was at 1130, then on the next insert its value jumped to 11091!
I'm baffled as to what just happened here. Why did the auto-inc counter suddenly skip nearly 10,000 points?
We're using the value of [InvoiceId] to generate barcodes, so we'd prefer the value to remain in a specific range, preferably in a contiguous series.
I've perused the T-SQL documentation but failed to find anything related to my issue. Is this the normal behavior (arbitrary population) of an identity field?

UPDATE: Thanks to Martin & Aaron, I've found a workaround. Here's the official response from Microsoft:
In SQL Server 2012 the implementation of the identity property has been changed to accommodate investments into other features. In previous versions of SQL Server the tracking of identity generation relied on transaction log records for each identity value generated. In SQL Server 2012 we generate identity values in batches and log only the max value of the batch. This reduces the amount and frequency of information written to the transaction log improving insert scalability.
If you require the same identity generation semantics as previous versions of SQL Server there are two options available:
• Use trace flag 272. This will cause a log record to be generated for each generated identity value. The performance of identity generation may be impacted by turning on this trace flag.
• Use a sequence generator with the NO CACHE setting (http://msdn.microsoft.com/en-us/library/ff878091.aspx). This will cause a log record to be generated for each generated sequence value. Note that the performance of sequence value generation may be impacted by using NO CACHE.
Example:
CREATE SEQUENCE s1 AS INT START WITH 1 NO CACHE;
CREATE TABLE t1 (Id INT PRIMARY KEY DEFAULT NEXT VALUE FOR s1, col INT NOT NULL);
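For illustration, a quick usage sketch against that example (same s1/t1 names); with NO CACHE each sequence value is logged individually, so values are not skipped in cached batches after a restart:
INSERT INTO t1 (col) VALUES (10) -- Id comes from sequence s1 via the DEFAULT
INSERT INTO t1 (col) VALUES (20)
SELECT Id, col FROM t1           -- returns Ids 1 and 2 on a fresh sequence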

Alternatively, you can keep a dedicated table with the counters.
It is not a good design pattern, but it puts you in full control of how the next value is generated; a rough sketch follows below.
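A minimal sketch of that counter-table approach (the InvoiceCounter table and NextValue column are hypothetical names, and it assumes InvoiceId is changed to a plain bigint rather than IDENTITY); the UPDATE locks the counter row, so concurrent inserts serialize on it and the increment is undone if the transaction rolls back:
CREATE TABLE InvoiceCounter (
    CounterName varchar(50) NOT NULL PRIMARY KEY,
    NextValue   bigint      NOT NULL
)

INSERT INTO InvoiceCounter (CounterName, NextValue) VALUES ('Invoice', 1000)

-- Inside the insert procedure:
DECLARE @newInvoiceId bigint

BEGIN TRANSACTION

-- Compound assignment: bumps the counter and captures the new value in one statement.
UPDATE InvoiceCounter
SET @newInvoiceId = NextValue = NextValue + 1
WHERE CounterName = 'Invoice'

-- ... INSERT INTO Orders (InvoiceId, OrderId, ...) using @newInvoiceId ...

COMMIT TRANSACTION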

Related

GORM Auto-increments primary key even if data wasn't inserted into DB [duplicate]

I'm using MySQL's AUTO_INCREMENT field and InnoDB to support transactions. I noticed that when I roll back a transaction, the AUTO_INCREMENT field is not rolled back. I found out that it was designed this way, but are there any workarounds?
It can't work that way. Consider:
Program one opens a transaction and inserts into a table FOO which has an auto-increment primary key (arbitrarily, say it gets 557 for its key value).
Program two starts, it opens a transaction and inserts into table FOO getting 558.
Program two inserts into table BAR which has a column which is a foreign key to FOO. So now the 558 is located in both FOO and BAR.
Program two now commits.
Program three starts and generates a report from table FOO. The 558 record is printed.
After that, program one rolls back.
How does the database reclaim the 557 value? Does it go into FOO and decrement all the other primary keys greater than 557? How does it fix BAR? How does it erase the 558 printed on the report program three output?
Oracle's sequence numbers are also independent of transactions for the same reason.
If you can solve this problem in constant time, I'm sure you can make a lot of money in the database field.
Now, if you have a requirement that your auto-increment field never have gaps (for auditing purposes, say), then you cannot roll back your transactions. Instead you need to have a status flag on your records. On the first insert, the record's status is "Incomplete"; then you start the transaction, do your work, and update the status to "Complete" (or whatever you need). When you commit, the record is live. If the transaction rolls back, the incomplete record is still there for auditing. This will cause you many other headaches, but it is one way to deal with audit trails. A rough sketch of the flow is shown below.
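A minimal sketch of that pattern in MySQL, assuming a hypothetical audit_records table; the reserving insert is committed on its own (autocommit), so the row and its auto-increment ID survive even if the later work rolls back:
CREATE TABLE audit_records (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  status VARCHAR(20) NOT NULL,
  payload TEXT NULL
);

-- Reserve the row (and its ID) outside the work transaction.
INSERT INTO audit_records (status) VALUES ('Incomplete');
SET @record_id = LAST_INSERT_ID();

START TRANSACTION;
-- ... do the real work here ...
UPDATE audit_records SET status = 'Complete', payload = 'result' WHERE id = @record_id;
COMMIT;
-- On ROLLBACK instead, the 'Incomplete' row remains as an audit trail entry.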
Let me point out something very important:
You should never depend on the numeric features of autogenerated keys.
That is, other than comparing them for equality (=) or inequality (<>), you should not do anything else: no relational operators (<, >), no sorting by them, etc. If you need to sort by "date added", have a "date added" column.
Treat them as apples and oranges: Does it make sense to ask if an apple is the same as an orange? Yes. Does it make sense to ask if an apple is larger than an orange? No. (Actually, it does, but you get my point.)
If you stick to this rule, gaps in the continuity of autogenerated indexes will not cause problems.
I had a client who needed the ID to roll back on a table of invoices, where the numbering had to be consecutive.
My solution in MySQL was to remove the AUTO_INCREMENT, pull the latest Id from the table, add one (+1), and then insert it manually.
If the table is named "TableA" and the auto-increment column is "Id":
INSERT INTO TableA (Id, Col2, Col3, Col4, ...)
VALUES (
(SELECT Id FROM TableA t ORDER BY t.Id DESC LIMIT 1)+1,
Col2_Val, Col3_Val, Col4_Val, ...)
Why do you care if it is rolled back? AUTO_INCREMENT key fields are not supposed to have any meaning so you really shouldn't care what value is used.
If you have information you're trying to preserve, perhaps another non-key column is needed.
I do not know of any way to do that. According to the MySQL Documentation, this is expected behavior and will happen with all innodb_autoinc_lock_mode lock modes. The specific text is:
In all lock modes (0, 1, and 2), if a transaction that generated auto-increment values rolls back, those auto-increment values are “lost.” Once a value is generated for an auto-increment column, it cannot be rolled back, whether or not the “INSERT-like” statement is completed, and whether or not the containing transaction is rolled back. Such lost values are not reused. Thus, there may be gaps in the values stored in an AUTO_INCREMENT column of a table.
If you set auto_increment to 1 after a rollback or deletion, on the next insert, MySQL will see that 1 is already used and will instead get the MAX() value and add 1 to it.
This will ensure that if the row with the last value is deleted (or the insert is rolled back), it will be reused.
To set the auto_increment to 1, do something like this:
ALTER TABLE tbl auto_increment = 1
This is not as efficient as simply continuing on with the next number because MAX() can be expensive, but if you delete/rollback infrequently and are obsessed with reusing the highest value, then this is a realistic approach.
Be aware that this does not prevent gaps from records deleted in the middle or if another insert should occur prior to you setting auto_increment back to 1.
INSERT INTO prueba(id)
VALUES (
(SELECT IFNULL(MAX(id), 0) + 1 FROM prueba target))
IFNULL(MAX(id), 0) + 1 covers the case where the table has no rows yet. The table alias (target) is added to work around MySQL's error about selecting from the same table you are inserting into.
If you need to have the ids assigned in numerical order with no gaps, then you can't use an autoincrement column. You'll need to define a standard integer column and use a stored procedure that calculates the next number in the insert sequence and inserts the record within a transaction. If the insert fails, then the next time the procedure is called it will recalculate the next id.
Having said that, it is a bad idea to rely on ids being in some particular order with no gaps. If you need to preserve ordering, you should probably timestamp the row on insert (and potentially on update).
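A rough sketch of that approach in MySQL, with hypothetical names (an invoices table and an insert_invoice procedure); the SELECT ... FOR UPDATE serializes concurrent callers on the current maximum, and a failed insert simply leaves the number unassigned for the next call:
CREATE TABLE invoices (
  id INT NOT NULL PRIMARY KEY,      -- plain integer, not AUTO_INCREMENT
  amount DECIMAL(10,2) NOT NULL
);

DELIMITER //
CREATE PROCEDURE insert_invoice(IN p_amount DECIMAL(10,2))
BEGIN
  DECLARE v_next_id INT;
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    ROLLBACK;   -- nothing is consumed; the next call recalculates the id
    RESIGNAL;
  END;

  START TRANSACTION;
  -- Lock the current maximum so concurrent callers queue up here.
  SELECT COALESCE(MAX(id), 0) + 1 INTO v_next_id FROM invoices FOR UPDATE;
  INSERT INTO invoices (id, amount) VALUES (v_next_id, p_amount);
  COMMIT;
END //
DELIMITER ;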
A concrete answer to this specific dilemma (which I also had) is the following:
1) Create a table that holds different counters for different documents (invoices, receipts, RMAs, etc.); insert a record for each of your document types and set the initial counter to 0.
2) Before creating a new document, do the following (for invoices, for example):
UPDATE document_counters SET counter = LAST_INSERT_ID(counter + 1) where type = 'invoice'
3) Get the last value that you just updated to, like so:
SELECT LAST_INSERT_ID()
or just use your PHP (or whatever) mysql_insert_id() function to get the same thing
4) Insert your new record along with the primary ID that you just got back from the DB. This overrides the normal auto-increment index and makes sure you have no ID gaps between your records.
This whole thing needs to be wrapped inside a transaction, of course. The beauty of this method is that when you roll back the transaction, the UPDATE from step 2 is rolled back too, so the counter does not move. Concurrent transactions block until the first transaction is either committed or rolled back, so they never see the old counter or a half-assigned new one. A consolidated sketch of the steps is below.
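Putting the steps together (a sketch; the document_counters table comes from the answer above, while the invoices table and its columns are assumed for illustration):
CREATE TABLE document_counters (
  type    VARCHAR(20) NOT NULL PRIMARY KEY,
  counter INT NOT NULL
);
INSERT INTO document_counters (type, counter) VALUES ('invoice', 0);

CREATE TABLE invoices (
  id       INT NOT NULL PRIMARY KEY,
  customer VARCHAR(50) NOT NULL
);

START TRANSACTION;
-- Step 2: bump the counter and stash the new value in LAST_INSERT_ID().
UPDATE document_counters
SET counter = LAST_INSERT_ID(counter + 1)
WHERE type = 'invoice';

-- Steps 3 and 4: read the value back and use it as the primary key.
INSERT INTO invoices (id, customer) VALUES (LAST_INSERT_ID(), 'ACME');
COMMIT;   -- a ROLLBACK here would undo the counter bump as well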
SOLUTION:
Let's use 'tbl_test' as an example table, and suppose the field 'Id' has the AUTO_INCREMENT attribute:
CREATE TABLE tbl_test (
Id int NOT NULL AUTO_INCREMENT,
Name varchar(255) NULL,
PRIMARY KEY (Id)
);
Let's suppose the table already has hundreds or thousands of rows and you don't want to use AUTO_INCREMENT anymore, because every time you roll back a transaction the AUTO_INCREMENT value behind 'Id' still advances by 1.
To avoid that, you can do the following.
First, remove the AUTO_INCREMENT attribute from column 'Id' (this won't delete your inserted rows):
ALTER TABLE tbl_test MODIFY COLUMN Id int(11) NOT NULL FIRST;
Finally, we create a BEFORE INSERT trigger to generate the 'Id' value automatically. Done this way, the Id value is unaffected even if you roll back a transaction.
DELIMITER $$
CREATE TRIGGER trg_tbl_test_1
BEFORE INSERT ON tbl_test
FOR EACH ROW
BEGIN
SET NEW.Id = COALESCE((SELECT MAX(Id) FROM tbl_test), 0) + 1;
END $$
DELIMITER ;
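A quick demonstration of the claim (a sketch using the tbl_test table above):
START TRANSACTION;
INSERT INTO tbl_test (Name) VALUES ('A');   -- the trigger assigns Id 1
ROLLBACK;                                   -- row is gone, MAX(Id) is NULL again
INSERT INTO tbl_test (Name) VALUES ('B');   -- the trigger assigns Id 1 again, no gap
SELECT * FROM tbl_test;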
That's it! You're done!
You're welcome.
// Demonstration (legacy mysql_* API): after each commit or rollback, reset
// auto_increment to 1 so the next insert reuses MAX(id) + 1 instead of leaving a gap.
$masterConn = mysql_connect("localhost", "root", '');
mysql_select_db("sample", $masterConn);
for($i=1; $i<=10; $i++) {
mysql_query("START TRANSACTION",$masterConn);
$qry_insert = "INSERT INTO `customer` (id, `a`, `b`) VALUES (NULL, '$i', 'a')";
mysql_query($qry_insert,$masterConn);
if($i%2==1) mysql_query("COMMIT",$masterConn);
else mysql_query("ROLLBACK",$masterConn);
mysql_query("ALTER TABLE customer auto_increment = 1",$masterConn);
}
echo "Done";

Regenerate lost identity value

Is there a way to recreate the identity value of a SQL Server table if the statements failed inside a transaction block?
Please go through the code below:
DECLARE @IdentityTable AS TABLE (ID INT IDENTITY(1, 1), Description VARCHAR(50))
INSERT INTO @IdentityTable (Description)
VALUES('Test1')
BEGIN TRY
BEGIN TRANSACTION IdentityTest
INSERT INTO @IdentityTable (Description)
VALUES('Test2')
INSERT INTO @IdentityTable (Description)
VALUES(1/0)
COMMIT TRANSACTION IdentityTest
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION IdentityTest
END CATCH
INSERT INTO @IdentityTable (Description)
VALUES('Test4')
SELECT * FROM @IdentityTable
Identity No 3 is lost due to ROLLBACK TRANSACTION. Is it possible to regain it?
You're trying to use the IDENTITY property to generate consecutive numbers and maintain that; this isn't what IDENTITY is for. It's designed to provide an incrementing value based on the current seed. On its own (without a PRIMARY KEY constraint or UNIQUE INDEX) it doesn't even guarantee uniqueness, as the seed could be changed (thanks HoneyBadger for reminding me so early in the morning).
If an INSERT fails, the value of the IDENTITY is still incremented. Also, if you were to DELETE a row from a table, that would not cause every later row to have its ID updated accordingly, so you would have a gap then as well.
The only guaranteed way of ensuring you get an incrementing value is by using a function like ROW_NUMBER at run time. For example:
SELECT ROW_NUMBER() OVER (ORDER BY ID) AS cID,
Description
FROM YourTable;
The Remarks section of the documentation specifically states that consecutive values are not guaranteed:
Identity columns can be used for generating key values. The identity property on a column guarantees the following:
...
Consecutive values within a transaction – A transaction inserting multiple rows is not guaranteed to get consecutive values for the rows because other concurrent inserts might occur on the table. If values must be consecutive then the transaction should use an exclusive lock on the table or use the SERIALIZABLE isolation level.
Consecutive values after server restart or other failures – SQL Server might cache identity values for performance reasons and some of the assigned values can be lost during a database failure or server restart. This can result in gaps in the identity value upon insert. If gaps are not acceptable then the application should use its own mechanism to generate key values. Using a sequence generator with the NOCACHE option can limit the gaps to transactions that are never committed.
Reuse of values – For a given identity property with specific seed/increment, the identity values are not reused by the engine. If a particular insert statement fails or if the insert statement is rolled back then the consumed identity values are lost and will not be generated again. This can result in gaps when the subsequent identity values are generated.

Identities Appearing in Tables have Gaps [duplicate]

This question already has answers here:
Identity increment is jumping in SQL Server database
(6 answers)
Closed 9 years ago.
We have begun to see that identities created in some of our tables are no longer precisely sequential. That is to say, they remain incrementally higher, but there are large gaps in the values.
For instance, the sequence is {1,2,3.. .. 97,98,99} and then a jump to {1092,1093,1094.. .. 1097,1098,1099} followed by another gap and then {4231,4232,4233.. .. 4257,4258,4259}.
Can anyone shed any light on this behaviour?
If you execute an insert to a table that has an identity column, and the insert fails (for ANY reason), the identity value is still incremented and the next insert will leave a gap. Also, if you delete rows there will obviously be gaps.
NEVER rely on, or use, the actual value of a surrogate key or identity for anything other than as a "connection value" to connect a row or rows in one table to rows in another table. Certainly never rely on the sequence of values being contiguous, nor even on them being chronologically increasing.
This is a known bug. Your large gaps are caused by things like failover, service restart, reboots, etc.
http://connect.microsoft.com/SQLServer/feedback/details/739013/failover-or-restart-results-in-reseed-of-identity
Until it is fixed, there's not much you can do about it except maybe have a startup procedure that reseeds the identity column on all affected tables.
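As a rough illustration of that workaround (the procedure name is hypothetical, and the Orders/InvoiceId names are borrowed from the first question above), a startup procedure can reseed each affected table back to its current maximum so the next insert continues from MAX + 1:
-- Note: a startup procedure must be created in the master database and should
-- reference user tables with three-part names; shortened here for readability.
CREATE PROCEDURE dbo.ReseedIdentitiesOnStartup
AS
BEGIN
    DECLARE @maxId bigint

    -- Reseed to the current maximum; the next identity issued will be MAX + 1.
    SELECT @maxId = ISNULL(MAX(InvoiceId), 1000) FROM dbo.Orders
    DBCC CHECKIDENT('dbo.Orders', RESEED, @maxId)

    -- Repeat for any other affected tables.
END
GO

-- Mark it to run automatically when the instance starts.
EXEC sp_procoption @ProcName = N'ReseedIdentitiesOnStartup',
                   @OptionName = 'startup',
                   @OptionValue = 'on'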
Anytime you insert into a table with an identity column it increments the identity. If you delete the rows, or even if you rollback an insert into that table the identity column stays at the new increment.
Here's a script to show the effect of rolling back a transaction:
create table #temp (TheKey int identity(1,1), TheValue int)
insert into #Temp (TheValue) values (1)
select max(TheKey) from #Temp --1 as expected
begin tran
insert into #Temp (TheValue) values (1)
select max(TheKey) from #Temp --2 as expected
rollback
select max(TheKey) from #Temp --1 as expected
insert into #Temp (TheValue) values (1)
select max(TheKey) from #Temp --3 a little bit of a surprise?
It appears that the server caches identity values for performance, so if the server is reset (after a power outage, for instance) those cached values can be lost. See this article, in particular the section "Consecutive values after server restart or other failures".

Auto Increment feature of SQL Server

I have created a table named ABC with three columns. The column Number_pk (int) is the primary key of my table, and I have turned the auto-increment feature on for that column.
Now I have deleted two rows from that table, say Number_pk = 5 and Number_pk = 6.
If I then enter two new rows into this table, the two new Number_pk values start from 7 and 8.
My question is: what is the logic behind this, since I have deleted those two rows from the table? I know that a simple answer is that I have set auto-increment on for the primary key of my table, but I want to know whether there is any way I can insert the two new entries starting from the last Number_pk without changing the design of my table.
And how does SQL Server manage this, since I have deleted the rows from the database?
The logic is guaranteeing that the generated numbers are unique. An ID field does not necessarily have to have a meaning, but rather is most often used to identify a unique record, thus making it easier to perform operations on it.
If your database is designed properly, ID values referenced by any other tables in a foreign key relationship would not have been possible to delete in the first place, which prevents records from being orphaned in that way.
If you absolutely want your entries to be sequential, you could consider issuing a RESEED, but as suggested, it would not really give you much advantage.
The identity record is "managed" because SQL Server will keep track of which numbers have been issued, regardless of whether they are still present or not.
Should you ever want to delete all records from a table, there are two ways to do so (provided no foreign key relations exist):
DELETE FROM Table
DELETE just removes the records, but the next inserted value will continue where the ID numbering left off.
TRUNCATE TABLE
TRUNCATE will actually RESEED the table, thus guaranteeing it starts again at the value you originally specified (most likely 1).
Although you should not do this unless there is a specific requirement.
1.) Get the max id:
Declare @id int
Select @id = Max(Number_pk) From ABC
2.) And reset the identity column (the next insert will then use Max(Number_pk) + 1):
DBCC CHECKIDENT('ABC', RESEED, @id)
DBCC CHECKIDENT (Transact-SQL)

Periodic restarting auto numbering for SQL Server

I have a requirement for a program I'm working on to store job numbers as YY-###### with incrementing numbers starting at 000001 in each year preceded by the last two digits of the year.
The only method I've been able to come up with is to make a CurrentJob table that uses an identity column along with the last two digits of the year, plus an ArchiveJob table, and then combine the two via a union in a view. Then I'd have to copy CurrentJob to ArchiveJob at the beginning of the year and truncate CurrentJob.
Is there an easier way to restart the numbering (obviously not having it be an Identity column) in one table?
The client is closed on New Years so there should be no data entry at the change of the year (for a school).
An identity column is by far the fastest and most concurrent solution to generating sequential numbers inside SQL Server. There's no need to make it too complicated though. Just have one table for generating the identity values, and reset it at the end of the year. Here's a quick example:
-- Sequence generating table
create table SequenceGenerator (
ID integer identity(1, 1) primary key clustered
)
-- Generate a new number
insert into SequenceGenerator default values
select @@identity
-- Reset the sequence
truncate table SequenceGenerator
if ident_current('SequenceGenerator') <> 1 begin
dbcc checkident('SequenceGenerator', reseed, 0)
dbcc checkident('SequenceGenerator', reseed)
end else begin
dbcc checkident('SequenceGenerator', reseed, 1)
end
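As a follow-up sketch (not part of the answer above), the generated number can then be combined with the two-digit year to build the YY-###### job number:
declare @seq int, @jobNumber char(9)

insert into SequenceGenerator default values
select @seq = scope_identity()

-- e.g. '25-000001' for the first job of 2025
set @jobNumber = right(convert(char(4), year(getdate())), 2) + '-'
               + right('000000' + cast(@seq as varchar(6)), 6)
select @jobNumber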
There is a similar question, #761378. (Note: it uses MySQL, but the principle is the same.)
The accepted answer suggested using a second table to manage the current ID.
However, the most popular answer there was to not do this! Please note HLGEM's answer on that post for reasons why not.
You can use the "reseed" command from here to reset the starting value.
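For illustration (a sketch, reusing the asker's CurrentJob table name), the yearly reset could look like this:
-- After copying the year's rows to ArchiveJob:
delete from CurrentJob
dbcc checkident('CurrentJob', reseed, 0)   -- the next identity issued will be 1
-- (If you TRUNCATE instead of DELETE, the identity is reset to its seed automatically.)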