Is it possible in SQL (SQL Server) to retrieve the next ID (integer) from an identity column in a table before, and without actually, inserting a row? This is not necessarily the highest ID plus 1 if the most recent row was deleted.
I ask this because we occasionally have to update a live DB with new rows. The ID of the row is used in our code (e.g. switch (ID) { case ID: ... }) and must be the same in both databases. If our development DB and live DB get out of sync, it would be nice to be able to predict a row ID in advance of deployment.
I could of course use SET IDENTITY_INSERT ON, or run a transaction (does rolling it back roll back the ID?), etc., but I wondered if there is a function that returns the next ID without incrementing it.
try IDENT_CURRENT:
Select IDENT_CURRENT('yourtablename')
This works even if you haven't inserted any rows in the current session:
Returns the last identity value generated for a specified table or view. The last identity value generated can be for any session and any scope.
Edit:
After spending a number of hours comparing entire page dumps, I realised there is an easier way, and I should have stayed with the DMVs.
The value survives a backup/restore, which is a clear indication that it is stored somewhere. I dumped all the pages in the DB and couldn't find where it is altered when a record is added - comparing 200k-line page dumps isn't fun.
Using the dedicated admin console, I took a dump of every internal table exposed, inserted a row, and then took a further dump of the system tables. Both dumps were identical, which indicates that while the value survives (and therefore must be stored), it is not exposed even at that level.
So after going around in a circle I realised the DMV did have the answer.
create table foo (MyID int identity not null, MyField char(10))
insert into foo (MyField) values ('test')
go 10
-- Inserted 10 rows
select Convert(varchar(8), increment_value) as IncrementValue,
       Convert(varchar(8), last_value) as LastValue
from sys.identity_columns
where object_id = object_id('foo') and name = 'MyID'
-- insert another row
insert into foo (MyField) values ('test')
-- check the values again
select Convert(varchar(8), increment_value) as IncrementValue,
       Convert(varchar(8), last_value) as LastValue
from sys.identity_columns
where object_id = object_id('foo') and name = 'MyID'
-- delete the rows
delete from foo
-- check the DMV again
select Convert(varchar(8), increment_value) as IncrementValue,
       Convert(varchar(8), last_value) as LastValue
from sys.identity_columns
where object_id = object_id('foo') and name = 'MyID'
-- last_value is currently 11 and the increment is 1, so the next insert gets 12
insert into foo (MyField) values ('test')
select * from foo
Result:
MyID MyField
----------- ----------
12 test
(1 row(s) affected)
Even though the rows were removed, the last value was not reset, so last value + increment should be the right answer.
I'm also going to write up the episode on my blog.
Oh, and the short cut to it all:
select ident_current('foo') + ident_incr('foo')
So it actually turns out to be easy - but this all assumes no one else uses the ID between you reading it and the insert. Fine for investigation, but I wouldn't want to use it in code.
This is a little bit strange but it will work:
If you want to know the next value, start by getting the current greatest value plus one:
SELECT MAX(id) + 1 FROM yourtable
For the identity to actually hand out that value, you'll need to reseed it before the insert:
DECLARE @value INT
SELECT @value = MAX(id) FROM yourtable
DBCC CHECKIDENT ('yourtable', RESEED, @value)
-- the next identity generated will be @value + 1
INSERT INTO yourtable ...
Not exactly an elegant solution but I haven't had my coffee yet ;-)
(This also assumes that there is nothing done to the table by your process or any other process between the first and second blocks of code).
You can pretty easily determine the last value used:
SELECT
last_value
FROM
sys.identity_columns
WHERE
object_id = OBJECT_ID('yourtablename')
Usually, the next ID will be last_value + 1 - but there's no guarantee for that.
Marc
Rather than using an IDENTITY column, you could use a UNIQUEIDENTIFIER (GUID) column as the unique row identifier and insert known values.
The other option (which I use) is SET IDENTITY_INSERT ON, where the row IDs are managed in a single source-controlled 'document'.
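A minimal sketch of that approach (the table name, column names, and the ID value 101 are placeholders, not values from any real deployment document):
-- the explicit column list is required while IDENTITY_INSERT is ON
SET IDENTITY_INSERT dbo.StatusCode ON;

INSERT INTO dbo.StatusCode (Id, Name)
VALUES (101, N'New status');   -- 101 is the ID agreed in the source-controlled 'document'

SET IDENTITY_INSERT dbo.StatusCode OFF;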
Related
How to overwrite the table each time there is an insert statement in Vertica?
Consider:
INSERT INTO table1 VALUES ('My Value');
This will give, say:
| MyCol    |
------------
| My Value |
How can I overwrite the same table on the next insert statement, say
INSERT INTO table1 VALUES ('My Value2');
| MyCol     |
-------------
| My Value2 |
You can either DELETE or TRUNCATE your table; there is no overwrite mode in Vertica. Use TRUNCATE since you only ever want a single value.
Source
INSERT INTO table1 VALUES ('My Value');
TRUNCATE TABLE table1;
INSERT INTO table1 VALUES ('My Value2');
Or (if the connection is lost before you commit, the change will not take effect):
Rollback
An individual statement returns an ERROR message. In this case, Vertica rolls back the statement.
DDL errors, systemic failures, dead locks, and resource constraints return a ROLLBACK message. In this case, Vertica rolls back the entire transaction.
INSERT INTO table1 VALUES ('My Value');
DELETE FROM table1
WHERE MyCol !='My Value2';
INSERT INTO table1 VALUES ('My Value2');
COMMIT;
I might suggest that you don't do such a thing.
The simplest method is to populate the table with a row, perhaps:
insert into table1 (value)
values (null);
Then use update, not insert:
update table1
set value = ?;
That fixes your problem.
If you insist on using insert, you could insert values with an identity column and use a view to get the most recent value:
create table table1 (
table1_id identity(1, 1),
value varchar(255)
);
Then access the table using a view:
create view v_table1 as
select value
from table1
order by table1_id desc
limit 1;
If the view becomes inefficient, you can periodically empty the table.
One advantage of this approach is that the table is never empty and not locked for very long -- so it is generally available. Deleting rows and inserting rows can be tricky in that respect.
If you really like triggers, you can use a table as above. Then use a trigger to update the row in another table that has a single row. This also maximizes availability, without overhead for fetching the most recent value.
If it is a single-row table, then there's no risk whatsoever in filling it with a single row that can be NULL, as @Gordon Linoff suggests.
Internally, you should be aware that Vertica, in the background, always implements an UPDATE as a DELETE, by adding a delete vector for the row, and then applying an INSERT.
With a single-row table that's no problem. The Tuple Mover (the background daemon process that wakes up every 5 minutes to de-fragment the internal storage, to put it simply) will create a single ROS (Read Optimized Storage) container out of: the previous value; the delete vector pointing to that previous value, thus deactivating it; and the newly inserted value that it was updated to.
So:
CREATE TABLE table1 (
mycol VARCHAR(16)
) UNSEGMENTED ALL NODES; -- a small table, replicate it across all nodes
-- now you have an empty table
-- for the following scenario, I assume you commit the changes every time, as other connected
-- processes will want to see the data you changed
-- then, only once:
INSERT INTO table1 VALUES(NULL::VARCHAR(16));
-- now, you get a ROS container for one row.
-- Later:
UPDATE table1 SET mycol='first value';
-- a DELETE vector is created to mark the initial "NULL" value as invalid
-- a new row is added to the ROS container with the value "first value"
-- Then, before 5 minutes have elapsed, you go:
UPDATE table1 SET mycol='second value';
-- another DELETE vector is created, in a new delete-vector-ROS-container,
-- to mark "first value" as invalid
-- another new row is added to a new ROS container, containing "second value"
-- Now 5 minutes have elapsed since the start, the Tuple Mover sees there's work to do,
-- and:
-- - it reads the ROS containers containing "NULL" and "first value"
-- - it reads the delete-vector-ROS containers marking both "NULL" and "first value"
-- as invalid
-- - it reads the last ROS container containing "second value"
-- --> and it finally merges all of these into a brand new ROS container that only contains
-- "second value"; at the end, the four other ROS containers are deleted.
With a single-row table, this works wonderfully. Don't do it like that for a billion rows.
After deleting the duplicate records from the table, I want to update the identity column of the table with consecutive numbering starting at 1. Here are my table details:
id(identity(1,1)),
EmployeeID(int),
Punch_Time(datetime),
Deviceid(int)
I need to perform this action through a stored procedure.
When I tried the following statements in a stored procedure:
DECLARE @myVar int
SET @myVar = 0
set identity_insert TempTrans_Raw# ON
UPDATE TempTrans_Raw# SET @myVar = Id = @myVar + 1
set identity_insert TempTrans_Raw# off
I got an error like: Cannot update identity column 'Id'.
Can anyone suggest how to update the identity column of that table with consecutive numbering starting at 1?
-- before running this, make sure foreign key constraints that reference the ID have been removed
--insert everything into a temp table
SELECT (ColumnList) --except identity column
INTO #tmpYourTable
FROM yourTable
--clear your table
DELETE FROM yourTable
-- reseed identity
DBCC CHECKIDENT('yourTable', RESEED, 0) -- the next identity value handed out will be 1
--insert back all the values
INSERT INTO yourTable (ColumnList)
SELECT (ColumnList) FROM #tmpYourTable
--drop the temp table
DROP TABLE #tmpYourTable
GO
The IDENTITY keyword is used to generate a key, which can be used in combination with the PRIMARY KEY constraint to get a technical key. Such keys are technical; they are used to link table records and should have no other meaning (such as a sort order). SQL Server does not guarantee that the generated IDs are consecutive. It does, however, guarantee that you get them in order (so you might get 1, 2, 4, ..., but never 1, 4, 2, ...).
Here is the documentation for IDENTITY: https://msdn.microsoft.com/de-de/library/ms186775.aspx.
Personally, I don't like relying on the guarantee that the generated IDs are in order. A technical ID is supposed to have no meaning other than offering a reference to a record. You can rely on the order, but if order is information you are interested in, you should in my opinion store that information explicitly (in the form of a timestamp, for example).
If you want to have a number telling you that a record is the fifth or sixteenth or whatever record in order, you can always get that number on the fly using the ROW_NUMBER function. So there is no need to generate and store such a consecutive value (which could also be quite troublesome when it comes to concurrent transactions on the table). Here is how to get that number:
select
    row_number() over (order by id) as row_num,
    employeeid,
    punch_time,
    deviceid
from mytable;
Having said all this: it should never be necessary to change an ID. If you feel that need, it is a sign of inappropriate table design.
If you really need sequential numbers, may I suggest that you create a table ("OrderNumbers") with the valid numbers, and then make your program pick one row from OrderNumbers when you add a row to yourTable.
If you do everything in one transaction (i.e. with BEGIN TRAN and COMMIT), you can get one number per row with no gaps; see the sketch below.
You should have either primary keys or unique keys on this column in both tables to protect against duplicates.
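A minimal sketch of the idea (the OrderNumbers table, its column name, and the assumption that yourTable's id is a plain INT rather than an IDENTITY are all illustrative):
-- pre-filled with the valid numbers you are allowed to hand out
CREATE TABLE OrderNumbers (OrderNumber INT PRIMARY KEY);

DECLARE @picked TABLE (OrderNumber INT);

BEGIN TRAN;

-- take (and remove) the lowest available number inside the same transaction as the insert
DELETE FROM OrderNumbers
OUTPUT deleted.OrderNumber INTO @picked
WHERE OrderNumber = (SELECT MIN(OrderNumber) FROM OrderNumbers);

-- use the reserved number for the new row (other columns omitted for brevity;
-- assumes id is a plain INT column, i.e. you use OrderNumbers instead of IDENTITY)
INSERT INTO yourTable (id)
SELECT OrderNumber FROM @picked;

COMMIT;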
HTH,
Henrik
Check this function: DBCC CHECKIDENT('table', RESEED, new reseed value)
I have the following problem: I want to have a composite primary key like:
PRIMARY KEY (`base`, `id`);
where, when I insert a row for a given base, the id is auto-incremented based on the previous id for that same base.
Example:
base id
A 1
A 2
B 1
C 1
Is there a way when I say:
INSERT INTO table(base) VALUES ('A')
to insert a new record with id 3 because that is the next id for base 'A'?
The resulting table should be:
base id
A 1
A 2
B 1
C 1
A 3
Is it possible to do this in the DB itself? If done programmatically it could cause race conditions.
EDIT
The base currently represents a company and the id represents an invoice number. There should be auto-incrementing invoice numbers for each company, but two companies could have invoices with the same number. Users logged in under a company should be able to sort, filter and search by those invoice numbers.
Ever since someone posted a similar question, I've been pondering this. The first problem is that DBs don't provide "partitionable" sequences (ones that would restart/remember based on different keys). The second is that the SEQUENCE objects that are provided are geared around fast access and can't be rolled back (ie, you will get gaps). This essentially rules out using a built-in utility... meaning we have to roll our own.
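For illustration, here is a quick sketch of the gap problem with a plain SEQUENCE (SQL Server 2012+; the sequence name is made up, and note it would also be shared rather than per-base):
CREATE SEQUENCE dbo.InvoiceNumbers AS INT START WITH 1 INCREMENT BY 1;

SELECT NEXT VALUE FOR dbo.InvoiceNumbers;  -- 1

BEGIN TRAN;
SELECT NEXT VALUE FOR dbo.InvoiceNumbers;  -- 2
ROLLBACK;

SELECT NEXT VALUE FOR dbo.InvoiceNumbers;  -- 3, not 2: the rolled-back value leaves a gap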
The first thing we're going to need is a table to store our sequence numbers. This can be fairly simple:
CREATE TABLE Invoice_Sequence (base CHAR(1) PRIMARY KEY CLUSTERED,
invoiceNumber INTEGER);
In reality the base column should be a foreign-key reference to whatever table/id defines the business(es)/entities you're issuing invoices for. In this table, you want entries to be unique per issued-entity.
Next, you want a stored proc that will take a key (base) and spit out the next number in the sequence (invoiceNumber). The set of keys necessary will vary (ie, some invoice numbers must contain the year or full date of issue), but the base form for this situation is as follows:
CREATE PROCEDURE Next_Invoice_Number @baseKey CHAR(1),
    @invoiceNumber INTEGER OUTPUT
AS
BEGIN
    DECLARE @issued TABLE (invoiceNumber INTEGER);
    MERGE INTO Invoice_Sequence AS Stored
    USING (VALUES (@baseKey)) AS Incoming(base)
       ON Incoming.base = Stored.base
    WHEN MATCHED THEN UPDATE SET Stored.invoiceNumber = Stored.invoiceNumber + 1
    WHEN NOT MATCHED BY TARGET THEN INSERT (base, invoiceNumber) VALUES (@baseKey, 1)
    OUTPUT INSERTED.invoiceNumber INTO @issued;
    -- hand the issued number back through the OUTPUT parameter
    SELECT @invoiceNumber = invoiceNumber FROM @issued;
END
Note that:
You must run this in a serialized transaction
The transaction must be the same one that's inserting into the destination (invoice) table.
That's right, you'll still get blocking per-business when issuing invoice numbers. You can't avoid this if invoice numbers must be sequential, with no gaps - until the row is actually committed, it might be rolled back, meaning that the invoice number wouldn't have been issued.
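If you were calling the procedure by hand, the pattern would look roughly like this (a sketch only; the trigger below removes the need to remember it):
-- issue the number and insert the invoice in one serializable transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;

DECLARE @invoiceNumber INTEGER;
EXEC Next_Invoice_Number @baseKey = 'A', @invoiceNumber = @invoiceNumber OUTPUT;

INSERT INTO Invoice (base, invoiceNumber) VALUES ('A', @invoiceNumber);

COMMIT;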
Now, since you don't want to have to remember to call the procedure for the entry, wrap it up in a trigger:
CREATE TRIGGER Populate_Invoice_Number ON Invoice INSTEAD OF INSERT
AS
BEGIN
    DECLARE @base CHAR(1), @invoiceNumber INTEGER
    SELECT @base = base FROM inserted   -- note: assumes single-row inserts
    EXEC Next_Invoice_Number @base, @invoiceNumber OUTPUT
    INSERT INTO Invoice (base, invoiceNumber)
    VALUES (@base, @invoiceNumber)
END
(obviously, you have more columns, including others that should be auto-populated - you'll need to fill them in)
...which you can then use by simply saying:
INSERT INTO Invoice (base) VALUES('A');
So what have we done? Mostly, all this work was about shrinking the number of rows locked by a transaction. Until this INSERT is committed, there are only two rows locked:
The row in Invoice_Sequence maintaining the sequence number
The row in Invoice for the new invoice.
All other rows for a particular base are free - they can be updated or queried at will (deleting information out of this kind of system tends to make accountants nervous). You probably need to decide what should happen when queries would normally include the pending invoice...
You can use an INSTEAD OF INSERT trigger and assign the next value by taking MAX(id) with a filter on the incoming base ("A" in this case).
That gives you MAX(id) = 2; increment it to MAX(id) + 1 and push that new value into the id field before the row is inserted.
I think this may help you
MSSQL Triggers: http://msdn.microsoft.com/en-in/library/ms189799.aspx
Test Table
CREATE TABLE MyTable
( base CHAR(1),
id INT
)
GO
Trigger Definition
CREATE TRIGGER dbo.tr_Populate_ID
ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO MyTable (base,id)
SELECT i.base, ISNULL(MAX(mt.id),0) +1 AS NextValue
FROM inserted i left join MyTable mt
on i.base = mt.base
GROUP BY i.base
END
Test
Execute the following statement multiple times and you will see that the next value available in each group is assigned to id.
INSERT INTO MyTable (base) VALUES
('A'),
('B'),
('C')
GO
SELECT * FROM MyTable
GO
I ran about 1.2 million (12 lakh) INSERT commands against a single table, but after some time the query terminated. How can I find the last inserted record?
a) The table doesn't have a created-date column.
b) I can't apply an ORDER BY clause because the primary key values are generated manually.
c) LAST() is not a built-in function in MS SQL.
Is there any other way to find the last executed query? There must be some way, but I'm not able to figure it out.
The table contains only a primary key constraint, no other constraints.
As per the comment request, here is a quick and dirty manual solution, assuming you've got the list of INSERT statements (or the corresponding data) in the same sequence as the issued INSERTs. For this example I assume 1 million records.
INSERT ... VALUES (1, ...)
...
INSERT ... VALUES (250000, ...)
...
INSERT ... VALUES (500000, ...)
...
INSERT ... VALUES (750000, ...)
...
INSERT ... VALUES (1000000, ...)
You just have to find the last PK that was inserted; luckily, in this case there is one. So you start doing a manual binary search on the table, issuing:
SELECT pk FROM myTable WHERE pk = 500000
If you get a row back, you know the inserts got at least that far, so continue checking with pk = 750000 and then, if that is there, with pk = 875000. If 750000 is not there, the INSERTs must have stopped earlier, so check pk = 625000 instead. In this case the process stops after about 20 steps.
It's just plain manual divide and conquer.
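If you'd rather not do the halving by hand, the same divide and conquer can be expressed as a small loop (a sketch; the table and column names are assumed, and it relies on the PKs having been issued as 1..1,000,000 in order):
DECLARE @low INT = 1, @high INT = 1000000, @mid INT;

WHILE @low < @high
BEGIN
    SET @mid = (@low + @high + 1) / 2;              -- upper midpoint
    IF EXISTS (SELECT 1 FROM myTable WHERE pk = @mid)
        SET @low = @mid;                            -- this INSERT made it in; search higher
    ELSE
        SET @high = @mid - 1;                       -- missing; search lower
END;

SELECT @low AS LastInsertedPk;                      -- roughly 20 probes for 1 million rows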
There is a way, but unfortunately you have to set it up in advance for it to help you.
So if, by any chance, you still have the primary keys you inserted at hand, go ahead and delete all rows that have those keys:
DELETE FROM tableName WHERE ID IN (id1, id2, ...., idn)
Then you enable Change Data Capture for your database (with the database already selected):
EXEC sys.sp_cdc_enable_db;
Now you also need to enable Change Data Capture for that table, in an example that I've tried I could just run:
EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'tableName', @role_name = null
Now you are almost set up! You need to look into your system services and verify that SQL Server Agent is running for your DBMS; if it is not, capturing will not happen.
Now when you insert something into your table you can select data changes from a new table called [cdc].[dbo_tableName_CT]:
SELECT [__$start_lsn]
,[__$end_lsn]
,[__$seqval]
,[__$operation]
,[__$update_mask]
,[ID]
,[Value]
FROM [cdc].[dbo_tableName_CT]
GO
The output contains one row per change, together with the CDC metadata columns shown above. You can order by __$seqval, which should give you the order in which the rows were inserted.
NOTE: this feature seems not to be present in SQL Server Express
I have a table in a SQL Server database that has an auto-generated integer primary key. Without inserting a record into the table, I need to query the database and get what the next auto-generated ID number will be.
I think it's SQL Server version 2005, if that makes a difference.
Is there a way to do this?
Yes, but it's unreliable because another session might use the expected number.
If you still want to do this, use IDENT_CURRENT
Edit, as the comments have pointed out (improving my answer):
you need to add IDENT_INCR('MyTable') to this to get the potential next number
another process may roll back, and this number may not be the one used anyway
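For example (a minimal illustration; 'MyTable' is just a placeholder name):
SELECT IDENT_CURRENT('MyTable') + IDENT_INCR('MyTable') AS ProbableNextId;
-- last value handed out + increment = the value the next insert will probably get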
No, there is not. The ID will only ever be defined and handed out when the actual INSERT happens.
You can check the last given ID by using
DBCC CHECKIDENT('YourTableName')
but that's just the last one used - there's no guarantee that the next one is really going to be this value + 1. It could be, but there are no guarantees.
The only way to get a number that is guaranteed not to be used by another process (i.e., to avoid a race condition) is to do the insert - is there any reason you can't do a NULL insert (i.e., just insert into the table with NULLs or default values for all other columns) and then subsequently UPDATE it?
i.e.,
CREATE TABLE bob (
seq INTEGER IDENTITY (1,1) NOT NULL,
col1 INTEGER NULL
)
GO
DECLARE @seqid INTEGER
INSERT INTO bob DEFAULT VALUES
SET @seqid = SCOPE_IDENTITY()
-- do stuff with @seqid
UPDATE bob SET col1 = 42 WHERE seq = @seqid
GO
You shouldn't use the technique in code, but if you need to do it for investigative purposes:
select ident_current('foo') + ident_incr('foo')
That gives you the last value generated plus the increment for the identity, so it should represent the next choice SQL Server would make without inserting a row to find out. This is correct even if a rollback has pushed the value forwards - but again, this is investigative SQL, not stuff I would put in code.
The two values can also be found in the sys.identity_columns DMV; the fields are increment_value and last_value.
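For reference, reading them straight from the DMV looks like this (using the same demo table foo as above):
select last_value, increment_value
from sys.identity_columns
where object_id = object_id('foo');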
Another way, depending on what you're doing, is to insert whatever data goes into the table and then use @@IDENTITY to retrieve the id of the record just inserted.
example:
declare @table table (id int identity(1,1), name nvarchar(10))
insert into @table values ('a')
insert into @table values ('b')
insert into @table values ('c')
insert into @table values ('d')
select @@identity
insert into @table values ('e')
insert into @table values ('f')
select @@identity
This is pretty much a bad idea straight off the bat, but if you don't anticipate high volume and/or concurrency issues, you could just do something like this
select @nextnum = max(Id) + 1 from MyTable
I don't think that's possible out of the box in MS SQL (any version). You can do this with a column of type uniqueidentifier and the NEWID() function.
For an int column, you would have to implement your own sequential generator.
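A minimal sketch of the uniqueidentifier approach (the table and column names are made up for illustration):
CREATE TABLE MyLookup (
    Id   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    Name NVARCHAR(50)
);

-- generate the value up front, so you know the key before (and without) inserting
DECLARE @NewId UNIQUEIDENTIFIER = NEWID();
INSERT INTO MyLookup (Id, Name) VALUES (@NewId, N'example');
SELECT @NewId AS TheIdJustUsed;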