Ok so I used this code to make the table:
CREATE TABLE Clients
(
ID int IDENTITY(1,1) PRIMARY KEY,
NAME varchar(20) NOT NULL,
BALANCE int NOT NULL
)
It worked fine for the first few records, but now when I add a new record it gets what looks like a random ID - the value jumps far ahead of the previous one.
I don't really know what the problem is, so could someone tell me?
This is all perfectly normal. Microsoft added sequences in SQL Server 2012, and identity values are now generated through the same caching mechanism, so a cached block of values can be lost (for example after an unexpected restart), which shows up as a jump in the IDs.
By default, when you create a SEQUENCE, you can supply a CACHE size; if you don't, the Database Engine picks one for you. Caching is used to increase performance for applications that use sequence objects by minimizing the number of disk IOs required to generate sequence numbers.
To fix this issue, make sure you add the NO CACHE option when creating the sequence (or in its properties), like this:
CREATE SEQUENCE TEST_Sequence
    AS INT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 0
    NO MAXVALUE
    NO CACHE;
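As a usage sketch (ClientsSeq is a hypothetical table name), such a sequence can replace the IDENTITY property via a DEFAULT constraint:
-- Hypothetical table that draws its IDs from TEST_Sequence instead of IDENTITY
CREATE TABLE ClientsSeq
(
    ID int NOT NULL DEFAULT (NEXT VALUE FOR TEST_Sequence) PRIMARY KEY,
    NAME varchar(20) NOT NULL,
    BALANCE int NOT NULL
);
-- The ID comes from the sequence, with no caching gaps
INSERT INTO ClientsSeq (NAME, BALANCE) VALUES ('Alice', 100);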
Alternatively:
- use trace flag 272 (typically enabled with the -T272 startup parameter) - this will cause a log record to be generated for each generated identity value. The performance of identity generation may be impacted by turning on this trace flag.
- use a sequence generator with the NO CACHE setting (http://msdn.microsoft.com/en-us/library/ff878091.aspx)
I started using SQL Server Change Data Capture (CDC) tables on Microsoft SQL Server 2016. It looks like a fairly easy-to-use mechanism, but the tutorial I was following had no info about the fact that data is kept in those tables for a limited time only; I think the default is 3 days.
I was trying to find some info about it, but with no luck, so my question stands:
Is there a way to increase the time the change data is kept, or even turn the cleanup off?
You are looking for the Retention Period, which is indeed 3 days by default.
You can change it using sys.sp_cdc_change_job
USE [YourDatabase];
EXECUTE sys.sp_cdc_change_job
    @job_type = N'cleanup',
    @retention = 2880;
From the documentation:
[ @retention ] = retention
Number of minutes that change rows are to be retained in change tables. retention is bigint, with a default of NULL, which indicates no change for this parameter. The maximum value is 52494800 (100 years). If specified, the value must be a positive integer. retention is valid only for cleanup jobs.
Please note that this affects ALL tables marked for CDC tracking in the database; there is no way to configure it per table.
https://msdn.microsoft.com/en-us/library/bb510748(v=sql.105).aspx
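To verify the new setting, the CDC job parameters can be listed afterwards (a quick sketch; sys.sp_cdc_help_jobs returns one row per CDC job, including its retention):
USE [YourDatabase];
-- Check the retention column on the cleanup job's row
EXECUTE sys.sp_cdc_help_jobs;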
We are implementing a file upload and storage field in our DB2 database. Right now the file upload column is defined as BLOB(5242880) as follows:
CREATE TABLE MYLIB.MYTABLE (
REC_ID FOR COLUMN RECID DECIMAL(10, 0) GENERATED ALWAYS AS IDENTITY (
START WITH 1 INCREMENT BY 1
NO MINVALUE NO MAXVALUE
NO CYCLE NO ORDER
CACHE 20 )
,
[other fields snipped]
FORM_UPLOAD FOR COLUMN F00001 BLOB(5242880) DEFAULT NULL ,
CONSTRAINT MYLIB.MYCONSTRAINT PRIMARY KEY( REC_ID ) )
RCDFMT MYTABLER ;
Is this the correct way to do this? Should it be in its own table or defined a different way? I'm a little nervous that it's showing it as a five-megabyte column instead of a pointer to somewhere else, as SQL Server does (for example). Will we get into trouble defining it like this?
There's nothing wrong with storing the BLOB in a DB2 column, but if you prefer to store the pointer, look at DataLinks. http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/sqlp/rbafyusoocap.htm
Unless you specify the ALLOCATE clause, the data itself is stored in the "variable length" aka "overflow" space of the table. Not the fixed length space where the rest of the row is.
So if you don't have ALLOCATE and the file is only 1MB, you only use 1MB of space to store it, not the 5MB max you've defined.
Note this means the system has to do twice the I/O when accessing data from both areas.
Datalinks have the same I/O hit.
From a performance standpoint:
- Make sure you only read the BLOB if you need to.
- If 90% or more of the BLOBs are, say, < 1MB, you could improve performance at the cost of space by saying ALLOCATE(1048576) while still allowing for the full 5MB to be stored. The first 1MB would be in the row, the last 4MB in the overflow; see the sketch below.
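As a sketch, only the column definition from the question's DDL would change (same hypothetical names as above):
-- First 1 MB allocated inline in the fixed-length row space, remainder in overflow
FORM_UPLOAD FOR COLUMN F00001 BLOB(5242880) ALLOCATE(1048576) DEFAULT NULL ,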
Charles
I have a django app that uses MySQL as the database backend. It's been running for a few days now, and I'm up to ID 5000 in some tables already.
I'm concerned about what will happen when I overflow the datatype.
Is there any way to tell the auto increment to start over at some point? My data is very volatile, so by the time I overflow the ID, there is no possible way that ID 0, or anywhere near it, will still be in use.
Depending on whether you're using an unsigned integer or not and which version of MySQL you're running, you run the risk of getting nasty negative values for the primary key or (worse) the row simply won't be inserted and will throw an error.
That said, you can easily change the size/type of the integer in MySQL using an ALTER command to preemptively stop this from happening. The "standard" size for an INT being used as a primary key is an INT(11) (note the (11) is only a display width; it doesn't change the range), but the vast majority of DB applications don't need anything nearly that large. Try a MEDIUMINT.
MEDIUMINT - The signed range is -8388608 to 8388607. The unsigned range is 0 to 16777215.
As compared to...
INT or INTEGER - The signed range is -2147483648 to 2147483647. The unsigned range is 0 to 4294967295.
There's also the BIGINT, but to be honest you've probably got much larger scalability issues than your data types to worry about if you have a table with > 2 billion rows :)
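A minimal sketch of that ALTER, assuming a table named items whose AUTO_INCREMENT primary key is id (both names hypothetical):
-- Switching the key to unsigned doubles the usable positive range (MySQL)
ALTER TABLE items
    MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;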
Well, the default 32-bit INT goes up to about 2 billion. At 5000 IDs per day, that's about 1000 years till overflow. I don't think you have to worry yet...
Given the table:
CREATE TABLE Table1
(
UniqueID int IDENTITY(1,1)
...etc
)
Now why would you ever set the increment to something other than 1?
I can understand setting the initial seed value differently. For example if, say, you're creating one database table per month of data (e.g. Table1_082009, Table1_092009) and want to start the UniqueID of the new table where the old one left off. (I probably wouldn't use that strategy myself, but hey, I can see people doing it).
But for the increment? I can only imagine it being of any use in really odd situations, for example:
- after the initial data is inserted, maybe later someone will want to turn identity insert on and insert new rows in the gaps, but for efficient lookup on the index will want the rows to be close to each other?
- if you're looking up ids based directly off a URL, and want to make it harder for people to arbitrarily access other items (for example, instead of the user being able to work out that changing the URL suffix from /GetData?id=1000 to /GetData?id=1001 fetches the next item, you set an increment of 437 so that the next URL is actually /GetData?id=1437)? Of course, if this is your "security" then you're probably already in trouble...
I can't think of anything else. Has anyone used an increment that wasn't 1, and why? I'm really just curious.
One idea might be using this to facilitate partitioning of data (though there might be more "automated" ways to do that):
Considering you have two servers:
- On one server, you start at 1 and increment by 2.
- On the other server, you start at 2 and increment by 2.
Then, from your application, you send half the inserts to one server and the other half to the second server - some kind of software load-balancing.
This way, you still have the ability to identify your entries: the "UniqueID" is still unique, even if the data is split on two servers / tables. A sketch of the two definitions follows.
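A minimal sketch of that layout, with hypothetical table and column names:
-- Server A: produces odd IDs (1, 3, 5, ...)
CREATE TABLE Orders
(
    UniqueID int IDENTITY(1, 2) PRIMARY KEY,
    Payload varchar(100) NOT NULL
);
-- Server B: produces even IDs (2, 4, 6, ...)
CREATE TABLE Orders
(
    UniqueID int IDENTITY(2, 2) PRIMARY KEY,
    Payload varchar(100) NOT NULL
);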
But that's only a wild idea -- there are probably some other uses to that...
Once, for pure fun (oh yeah, we have a wild side to us), we decided to use a negative increment. It was strange to see the numbers grow in size and shrink in value at the same time.
I could hardly sit still in my chair.
edit (afterthought):
You think the creator of the IDENTITY was in love with FOR loops? You know..
for (i = 0; i<=99; i+=17)
or for those non semi-colon folks out there
For i = 0 to 100 step 17
Only for entertainment. And you have to be REALLY bored.
What happens when SQL Server 2005 reaches the maximum value for an IDENTITY column? Does it start from the beginning and refill the gaps?
What is the behavior of SQL Server 2005 when that happens?
You will get an overflow error when the maximum value is reached. If you use the bigint datatype, with its maximum value of 9,223,372,036,854,775,807, this will most likely never be the case.
The error message you get will look like this:
Msg 220, Level 16, State 2, Line 10
Arithmetic overflow error for data type tinyint, value = 256.
As far as I know, MS SQL provides no functionality to fill the identity gaps, so you will either have to do this yourself or change the datatype of the identity column.
In addition to this, you can set the start value to the smallest negative number to get an even bigger range of values to use, as sketched below.
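A minimal sketch of that negative seed (table name hypothetical):
-- Seeding at the smallest int roughly doubles the usable ID range
CREATE TABLE EventLog
(
    ID int IDENTITY(-2147483648, 1) PRIMARY KEY,
    Message varchar(200) NOT NULL
);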
It will not fill in the gaps. Instead, inserts will fail until you change the definition of the column: either drop the identity and find some other way of filling in the gaps, increase the size (go from int to bigint), or change the type of the data (from int to decimal) so that more identity values are available.
You will be unable to insert new rows and will receive the error message listed above until you fix the problem. You can do this in a number of ways. If you still have data using all the IDs below the max, you will have to change the datatype. If the data is getting purged on a regular basis and you have a large gap that is not going to be used, you can reseed the identity to the lowest number in that gap. For example, at a previous job we were logging transactions. We had maybe 40-50 million per month, but we were purging everything older than 6 months, so every few years the identity would get close to 2 billion while nothing had an id below 1.5 billion, and we would reseed back to 0. Again, it's possible that neither of these will work for you and you will have to find a different solution.
If the identity column is an Integer, then your max is 2,147,483,647. You will get an overflow error if you exceed it.
If you think this is a risk, just use the BIGINT datatype, which gives you up to 9,223,372,036,854,775,807. Can't imagine a database table with that many rows.
In the event that you do hit the maximum number for your identity column, you can move the data from that table into a secondary table with a bigger identity column type and specify the starting value for that new identity to be the maximum of the previous type. The new identity values will continue from that point, as sketched below.
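A minimal sketch of that migration, reusing the Clients table from the first question and a hypothetical ClientsBig:
-- New table whose bigint identity starts just past the old int maximum
CREATE TABLE ClientsBig
(
    ID bigint IDENTITY(2147483648, 1) PRIMARY KEY,
    NAME varchar(20) NOT NULL,
    BALANCE int NOT NULL
);
-- Copy the old rows, keeping their original IDs
SET IDENTITY_INSERT ClientsBig ON;
INSERT INTO ClientsBig (ID, NAME, BALANCE)
SELECT ID, NAME, BALANCE FROM Clients;
SET IDENTITY_INSERT ClientsBig OFF;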
If you delete "old values" from time to time, you just need to reset the seed using:
DBCC CHECKIDENT ('MyTable', RESEED, 0);