SQL Server based Counter

What's the best way to implement a unique counter generator in SQL Server 2005?
I have a single field in a table holding a single value, which is the counter.
I want to read the current value and increment it by 1 in one go,
so the database always holds the next value to hand out. I have tried several SELECT + UPDATE combinations, but because this counter is hit very hard (roughly 5000 times per minute) by calls from independent applications, the whole thing has become slow. How can I build a better counter?
The values need to form a single serial sequence, so we cannot give each application its own table.
Someone suggested this approach: http://www.sqlines.com/mysql/how-to/select-update-single-statement-race-condition but I am not sure it is correct or whether it has other issues.
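For reference, the single-statement idea in that link amounts to doing the read and the increment in one UPDATE, so there is no window between a SELECT and an UPDATE for two callers to race over. A minimal sketch of that pattern (the table and column names here are made up):
-- Hypothetical single-row counter table
CREATE TABLE dbo.Counters (CounterValue int NOT NULL);
INSERT INTO dbo.Counters (CounterValue) VALUES (0);

-- Read and increment atomically: the variable gets the newly incremented value
DECLARE @NewValue int;
UPDATE dbo.Counters
SET @NewValue = CounterValue = CounterValue + 1;

SELECT @NewValue AS CounterValue;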

Can you not just insert a row into a table with an identity column? Once you retrieve the inserted value using SCOPE_IDENTITY(), you have your counter value. It'll be fast because it's not checking existing rows. If you don't need to keep per-request rows, you could clear them out occasionally.
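A rough sketch of that suggestion, assuming a throwaway log table (the names here are illustrative, not from the question):
-- Hypothetical table; the identity column does the counting
CREATE TABLE dbo.CounterLog (
    Id          int IDENTITY(1,1) PRIMARY KEY,
    RequestedAt datetime NOT NULL DEFAULT GETDATE()
);

-- Each caller inserts a row and reads back its own identity value
DECLARE @CounterValue int;
INSERT INTO dbo.CounterLog DEFAULT VALUES;
SET @CounterValue = SCOPE_IDENTITY();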

Related

SEQUENCE number on every INSERT in MS SQL 2012

I am in a situation where multiple users insert values from an application into the database via a web service, and we use a stored procedure to validate and insert the records.
The requirement is to create a unique number for each entry, but strictly in sequence. I added an identity column, but it skips some numbers in between, e.g. 25, 26, 27, 29, 34...
Our requirement is to strictly generate the next number only, as you would for an invoice number, order number, receipt number, etc.: 1, 2, 3, 4, 5...
I checked the link below about sequence numbers but am not sure it will resolve my issue. Can someone please assist?
Sequence Numbers
If you absolutely, positively cannot have gaps, then you need to use a trigger and your own logic. This puts a lot of overhead into inserts, but it is the only guarantee.
Basically, the parts of a database that protect the data get in the way of doing what you want. If a transaction uses a sequence number (or identity) and it is later rolled back, then what happens to the generated number? Well, that is one way that gaps appear.
Of course, you will have to figure out what to do in that case. I would just go for an identity column and work on educating users that gaps are possible. After all, if you don't want gaps on output, then row_number() is available to re-assign numbers.
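If you do go down the "your own logic" route, the usual shape is a counter table that is read and bumped inside the same transaction as the insert, with lock hints so concurrent callers queue up. Since the question already uses a stored procedure, here is a sketch along those lines; all table, column and procedure names (including dbo.Invoices) are invented:
-- Hypothetical counter table holding the last number handed out
CREATE TABLE dbo.InvoiceCounter (LastNumber int NOT NULL);
INSERT INTO dbo.InvoiceCounter (LastNumber) VALUES (0);
GO

CREATE PROCEDURE dbo.InsertInvoice
    @Amount money
AS
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

    -- UPDLOCK/HOLDLOCK makes concurrent callers queue on the counter row
    DECLARE @Next int;
    SELECT @Next = LastNumber + 1
    FROM dbo.InvoiceCounter WITH (UPDLOCK, HOLDLOCK);

    UPDATE dbo.InvoiceCounter SET LastNumber = @Next;

    -- The number is only consumed if this insert commits,
    -- so a rollback rolls the counter back too and leaves no gap
    INSERT INTO dbo.Invoices (InvoiceNumber, Amount)
    VALUES (@Next, @Amount);

    COMMIT TRANSACTION;
END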

Is it viable to have a SQL table with only one row and one column?

I'm currently working on my first application that uses a database so I'm very new to this. The database has multiple tables that are what you would expect to normally see.
However, I created one table which only has one row and one column used to keep a count of the total items processed by the program so it's available to access elsewhere. I can't just use
SELECT COUNT(*) FROM table_name
because I do not want to actually keep the items I am processing in a table.
It seems like a waste to use a table to store one value so I am wondering if there a better way to keep track of this value.
What is your table storing? It's storing a kind of processing audit. So make it a little more useful: add a column storing the last datetime the data was processed, a column for how long the processing took, and another column for the username (or some identifier) of whoever ran the process. Then add a row for every table that is processed (there's only one now, but there might be more in the future). Try to envisage how your processing is going to grow in the future.
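Sketched out, that suggestion might look something like the following; every name here is just illustrative:
-- One row per processing run instead of a single bare counter
CREATE TABLE dbo.ProcessingAudit (
    AuditId        int           IDENTITY(1,1) PRIMARY KEY,
    ProcessName    nvarchar(128) NOT NULL,               -- which table/feed was processed
    ItemsProcessed int           NOT NULL,
    ProcessedAt    datetime      NOT NULL DEFAULT GETDATE(),
    DurationMs     int           NULL,                   -- how long the run took
    ProcessedBy    nvarchar(128) NOT NULL DEFAULT SUSER_SNAME()
);

-- The single figure the one-cell table used to hold is still one query away
SELECT SUM(ItemsProcessed) AS TotalItemsProcessed FROM dbo.ProcessingAudit;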

SQL Server 2008 identity Key auto increment Issue

I recently had to move a database (SQL Server 2008) to a different server, and I have noticed that in one of the tables the identity column has started producing unexpected values. It is set as an identity column with identity increment 1 and identity seed 1. After every 10 consecutive entries or so, it jumps to a much higher number, increments by 1 for the next 10 entries or so, and then jumps again. I can't figure out the issue.
Sorry for the layman language. I am not a DB person.
This is likely not an issue with your identity key but an issue with a framework being used to insert data, or a SP. If you have a stored procedure that inserts data but then is ROLLBACK'd, the ID was reserved but the row is 'deleted'.
So two places to check: One on the frameworks you're using (NHibernate, or Entity Framework, etc?)... those frameworks might be inserting rows then deleting them. Second place to check is INSERT statements in SPROCs and other places you might expect a ROLLBACK.
See: SQL Identity (autonumber) is Incremented Even with a Transaction Rollback
Another possibility is that you are examining the data without sorting it: when you ported the data you may have assumed it would always be inserted or retrieved in ID order, but because the new table is not indexed the same way you won't necessarily see rows in primary-key order. This is less likely if rows appear mostly sequential with occasional gaps, but it's worth mentioning.
The following will shed some light on your issue:
Create a table with an identity auto-increment column.
Insert 10 rows of random data.
You will see that the ID values are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
Delete the row with id = 10 (delete from mytable where id = 10).
Insert another row into the table.
Now you will see that the ID values are 1, 2, 3, 4, 5, 6, 7, 8, 9, **11**.
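Spelled out as a script (mytable is just a scratch table; the multi-row insert syntax needs SQL Server 2008 or later):
-- Scratch table to demonstrate the gap
CREATE TABLE mytable (
    id   int IDENTITY(1,1) PRIMARY KEY,
    data varchar(50) NOT NULL
);

-- Insert 10 rows; the ids come out as 1 through 10
INSERT INTO mytable (data)
VALUES ('r1'), ('r2'), ('r3'), ('r4'), ('r5'),
       ('r6'), ('r7'), ('r8'), ('r9'), ('r10');

DELETE FROM mytable WHERE id = 10;

-- The identity counter does not rewind, so the next row gets 11
INSERT INTO mytable (data) VALUES ('r11');

SELECT id, data FROM mytable ORDER BY id;   -- ids: 1..9, 11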

A fresh SQL sequence per day

What would be a good way to create a fresh sequence of serial numbers on a per-day basis in a SQL database?
Something like the auto-increment feature of integer columns in some database systems.
I'm creating/recording transactions, and the transaction is identified by the pair 'date,serial no'.
That depends on what kind of database you have. If you have access to global variables or generators, just reset it each day and use it to seed your serial number column. If not, you can store the value in a table and look it up to seed your column, again resetting it each day.
Don't forget to increment the seeds manually if necessary. (Generators are a special kind of global variable that can auto-increment themselves if set up to do so. Other variables and certainly a record in a table do not.)
To reset the value, you can put a trigger on insert that checks whether any rows exist yet for today's date; if not, reset the counter.
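One way to realize the "store the value in a table and reset it each day" idea without a trigger is to key the counter table by date, so each new day simply starts from a fresh row. A rough sketch, with invented names:
-- Hypothetical counter table keyed by date, so each day starts fresh
CREATE TABLE dbo.DailyCounter (
    CounterDate datetime NOT NULL PRIMARY KEY,
    LastSerial  int      NOT NULL
);

DECLARE @Today datetime, @Serial int;
SET @Today = DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);  -- today at midnight

BEGIN TRANSACTION;

-- Create today's row the first time it is needed; the lock hints make
-- concurrent callers wait rather than insert duplicates
IF NOT EXISTS (SELECT 1 FROM dbo.DailyCounter WITH (UPDLOCK, HOLDLOCK)
               WHERE CounterDate = @Today)
    INSERT INTO dbo.DailyCounter (CounterDate, LastSerial) VALUES (@Today, 0);

-- Claim the next serial for today
UPDATE dbo.DailyCounter
SET @Serial = LastSerial = LastSerial + 1
WHERE CounterDate = @Today;

COMMIT TRANSACTION;

SELECT @Today AS TransactionDate, @Serial AS SerialNo;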
How about a specific table just for this purpose?
create table AvailableSerialNumbers (
    AvailableOn datetime primary key,
    NextAvailableNumber int
)
You'd have to populate this in advance, but that's pretty straight-forward (make sure this is done automatically and not manually).
Note that if a large number of serial numbers are being created at the same time, the creation logic will bottleneck on updating the AvailableSerialNumbers row. The easiest solution to that problem is to define multiple AvailableSerialNumbers records per day (say 100 of them) and randomly choose one to update. If you use this approach, then instead of a single NextAvailableNumber field the record should hold a from/to range; when the range is exhausted, delete the range record.
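Claiming a number from the table above could then look roughly like this, assuming one row per day populated with a midnight timestamp; the OUTPUT clause hands back the pre-update value, i.e. the number actually being given out:
DECLARE @Today datetime;
SET @Today = DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);  -- today at midnight

DECLARE @Claimed TABLE (SerialNumber int);

-- OUTPUT deleted.NextAvailableNumber returns the value before the increment
UPDATE AvailableSerialNumbers
SET NextAvailableNumber = NextAvailableNumber + 1
OUTPUT deleted.NextAvailableNumber INTO @Claimed
WHERE AvailableOn = @Today;

SELECT SerialNumber FROM @Claimed;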
The best way is to use the date as part of the number, such as having 080216001 through 080216999 for today. Can you have non-contiguous numbers?

SQL Identity Column out of step

We have a set of databases that have a table defined with an Identity column as the primary key. As a sub-set of these are replicated to other servers, a seed system was created so that they could never clash. That system was by using a starting seed with an increment of 50.
In this way the table on DB1 would generate 30001, 30051 etc, where Database2 would generate 30002, 30052 and so on.
I am looking at adding another database into this system (it is split for scaling/load purposes) and have discovered that the identities have got out of sync on one or two of the databases - i.e. database 3, which should have numbers ending in 3, doesn't anymore. The seeding and increments are still correct according to the table design.
I am obviously going to have to work around this problem somehow (probably by setting a high initial value), but can anyone tell me what would cause them to get out of sync like this? From a query on the DB I can see the sequence went as follows: 32403,32453, 32456, 32474, 32524, 32574 and has continued in increments of 50 ever since it went wrong.
As far as I am aware no bulk-inserts or DTS or anything like that has put new data into these tables.
Second (bonus) question - how to reset the identity so that it goes back to what I want it to actually be!
EDIT:
I know the design is in principle a bit ropey - I didn't ask for criticism of it, I just wondered how it could have got out of sync. I inherited this system and changing the column to a GUID - whilst undoubtedly the best theoretical solution - is probably not going to happen. The system evolved from a single DB to multiple DBs when the load got too large (a few hundred GBs currently). Each ID in this table will be referenced in many other places - sometimes a few hundred thousand times each (multiplied by about 40,000 for each item). Updating all those will not be happening ;-)
Replication = GUID column.
To set the value of the next ID to be 1000:
DBCC CHECKIDENT (orders, RESEED, 999)
If you want to actually use Primary Keys for some meaningful purpose other than uniquely identify a row in a table, then it's not an Identity Column, and you need to assign them some other explicit way.
If you want to merge rows from multiple tables, then you are violating the intent of Identity, which is for one table. (A GUID column will use values that are unique enough to solve this problem. But you still can't impute a meaningful purpose to them.)
Perhaps somebody used:
SET IDENTITY_INSERT {tablename} ON
INSERT INTO {tablename} (ID, ...)
VALUES(32456, ....)
SET IDENTITY_INSERT {tablename} OFF
Or perhaps they used DBCC CHECKIDENT to change the identity. In any case, you can use the same to set it back.
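For completeness, CHECKIDENT can also just report where the counter currently sits without changing it; the table name and reseed value below are only placeholders:
-- Report the current identity value, without changing anything
DBCC CHECKIDENT (orders, NORESEED);

-- Set it explicitly; the next insert gets the reseed value plus the table's
-- increment (50 here), so pick a value whose successor fits your ending-in-3 scheme
DBCC CHECKIDENT (orders, RESEED, 32553);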
It's too risky to rely on this kind of identity strategy, since it's (obviously) possible that it will get out of synch and wreck everything.
With replication, you really need to identify your data with GUIDs. It will probably be easier for you to migrate your data to a schema that uses GUIDs for PKs than to try and hack your way around IDENTITY issues.
To address your question directly,
Why it got out of sync may be interesting to discuss, but the only thing you could do with the answer is try to prevent it in the future, and that is a bad course of action: you will continue to have these and bigger problems unless you deal with the design, which has a fatal flaw.
How to set the existing values right is also (IMHO) an invalid question, because you need to do something other than set the values right - it won't solve your problem.
This isn't to disparage you, it's to help you the best way I can think of. Changing the design is less work both short term and long term. Not to change the design is the pathway to FAIL.
This doesn't really answer your core question, but one possibility for addressing the design would be to switch to a hi/lo algorithm. It wouldn't require changing the column away from an int, so it shouldn't be nearly as much work as changing to a GUID.
Hi/lo is used by the NHibernate ORM, but I couldn't find much documentation on it.
Basically, the way hi/lo works is that you have one central place where you keep track of your hi value: one table in one of the databases that every instance of your insert application can see. Then you need some kind of service (object, web service, whatever) whose lifetime is somewhat longer than a single entity insert. When this service starts up it goes to the hi table, grabs the current value, and then increments the value in that table; use a read committed lock to do this so that you won't get any concurrency issues with other instances of the service. Now you use that service to get your next id value. It internally starts at the number it got from the db and, each time it passes a value out, increments by 1, keeping track of the current value and the "range" it's allowed to pass out. A simplistic example would be this:
Service 1 gets 100 from the "hi_value" table in the db and increments the db value to 200.
Service 1 gets a request for a new ID and passes out 100.
Another instance of the service, service 2 (another thread, another middle-tier worker machine, etc.), spins up, gets 200 from the db, and increments the db value to 300.
Service 2 gets a request for a new id and passes out 200.
Service 1 gets a request for a new id and passes out 101.
If any instance ever passes out its full range of 100 values before dying, it goes back to the db, gets the current value, increments it, and starts over. Obviously there's some art to this: how big should your range be, etc.
A very simple variation on this is a single table in one of your databases that just contains the "nextId" value, basically manually reproducing Oracle's sequence concept.
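A bare-bones sketch of the central hi table and the range grab; all names here are invented, a single atomic UPDATE with OUTPUT stands in for the read-then-increment under a lock described above, and the handing out of individual ids would live in your service layer rather than in SQL:
-- Central hi table shared by every application/database instance
CREATE TABLE dbo.HiValue (CurrentHi int NOT NULL);
INSERT INTO dbo.HiValue (CurrentHi) VALUES (100);

-- Each service instance claims a block of 100 ids in one atomic statement;
-- OUTPUT deleted.CurrentHi returns the start of the block it just took
DECLARE @Range TABLE (RangeStart int);

UPDATE dbo.HiValue
SET CurrentHi = CurrentHi + 100
OUTPUT deleted.CurrentHi INTO @Range;

SELECT RangeStart      AS FirstId,
       RangeStart + 99 AS LastId
FROM @Range;
-- The service hands out FirstId, FirstId + 1, ... LastId from memory
-- and runs the UPDATE again once the block is exhausted.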