Custom ID column based on year and month - sql

I have the below create table code:
CTranID int identity(1,1) constraint pk_CTrainID Primary Key,
CustomerID AS 'PB' +
cast(datepart(yy,getdate()) as varchar(25)) +
cast(datepart(mm,getdate()) as varchar(25)) +
RIGHT('000000000' + CAST(CTranID AS VARCHAR(10)), 9),
When a record is inserted, the CustomerID reflects the current year and month, as expected.
But as a test I changed the system month, and when I query the table in T-SQL, the previously inserted records now show the changed month.
They should still show the month that was current when they were inserted, before I changed the system clock.

You are using a computed column. Every time you query that table the value of the column, for each row, will be calculated on the fly. That is why you are seeing changing values as you change the system clock.
Instead, store the date of creation in a column and base your calculated column on that column:
CREATE TABLE peter (
CTranID int identity(1,1) constraint pk_CTrainID Primary Key,
CreatedOn DateTime2 DEFAULT (SYSDATETIME()) NOT NULL, -- New Column
CustomerID AS 'PB' + cast(datepart(yy,CreatedOn) as varchar(25)) + cast(datepart(mm,CreatedOn) as varchar(25)) + RIGHT('000000000' + CAST(CTranID AS VARCHAR(10)), 9)
)
INSERT INTO peter
DEFAULT values
SELECT * FROM peter
CTranID CreatedOn CustomerID
1 2015-05-26 PB20155000000001
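If you also want the value physically stored instead of recomputed on every read, the expression only references CreatedOn and CTranID through deterministic functions, so it should be possible to mark the computed column PERSISTED - a minimal sketch of the same table with that change:
CREATE TABLE peter (
CTranID int identity(1,1) constraint pk_CTrainID Primary Key,
CreatedOn DateTime2 DEFAULT (SYSDATETIME()) NOT NULL,
-- PERSISTED stores the computed value at insert/update time instead of evaluating it per query
CustomerID AS 'PB' + cast(datepart(yy,CreatedOn) as varchar(25)) + cast(datepart(mm,CreatedOn) as varchar(25)) + RIGHT('000000000' + CAST(CTranID AS VARCHAR(10)), 9) PERSISTED
)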

The whole point of a computed column is that it gets computed every time it is accessed - if the underlying components of the computation change, so will the column's value.
You will need to change your CustomerId column to a normal, persisted column and find another way to auto-populate it.
CREATE TABLE...
CustomerID VARCHAR(50), -- i.e. not computed
Ideally, this will be done by the app inserting the row, but you could I guess also resort to a trigger, e.g.
CREATE TRIGGER T_Customer ON CUSTOMER AFTER INSERT
AS
BEGIN
UPDATE c
SET c.CustomerID = 'PB' + cast(datepart(yy,getdate()) as varchar(25)) +
cast(datepart(mm,getdate()) as varchar(25)) +
RIGHT('000000000' + CAST(ins.CTranID AS VARCHAR(10)), 9)
FROM Customer c
INNER JOIN Inserted ins on c.CTranID = ins.CTranID;
END;
GO
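To sanity-check the trigger, a small test script can help - this is a sketch that assumes a Customer table shaped like the question's (CTranID identity plus a plain CustomerID column), created before the trigger:
CREATE TABLE Customer (
CTranID int identity(1,1) constraint pk_Customer primary key,
CustomerID VARCHAR(50) NULL -- filled in by the trigger after the insert
);
GO
INSERT INTO Customer DEFAULT VALUES;
INSERT INTO Customer DEFAULT VALUES;
-- each row should now show a CustomerID such as PB20155000000001
SELECT CTranID, CustomerID FROM Customer;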

Related

Automatically generate unique id like UID 22-001

I want to automatically generate a unique id with a pre-defined code attached to it.
ex:
'UID 22-001..
'UID 22-002..
'UID 22-003 ('22' is year 2022)
and then when the year is 2023 it will be generated as;
'UID 23-001..
'UID 23-002..
'UID 23-003..
and so on. Thanks in advance for the help.
Consider that you have a table with these columns:
CREATE TABLE TABLE_NAME (
id varchar(max),
name varchar(20),
address varchar(20))
You are going to insert a value with the ID format that you mentioned.
Here is an example query:
INSERT INTO TABLE_NAME (id, name, address)
SELECT CONCAT('UID ',
              RIGHT(YEAR(GETDATE()), 2),
              '-',
              RIGHT('000' + CAST(MAX(CAST(RIGHT(id, 3) AS INT)) + 1 AS NVARCHAR(10)), 3)),
       'RAM',
       'INDIA'
FROM TABLE_NAME
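Two caveats with that query: MAX() returns NULL on an empty table, so the very first insert produces a NULL id, and the counter never resets when the year changes. A hedged variant that covers both, assuming the same TABLE_NAME layout (ISNULL and the year filter are the only additions):
INSERT INTO TABLE_NAME (id, name, address)
SELECT CONCAT('UID ',
              RIGHT(YEAR(GETDATE()), 2),
              '-',
              RIGHT('000' + CAST(ISNULL(MAX(CAST(RIGHT(id, 3) AS INT)), 0) + 1 AS NVARCHAR(10)), 3)),
       'RAM',
       'INDIA'
FROM TABLE_NAME
WHERE id LIKE CONCAT('UID ', RIGHT(YEAR(GETDATE()), 2), '-%')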

How to create custom, dynamic string sequence in SQL Server

Is there any way to dynamically build sequences containing dates/strings/numbers in SQL Server?
In my application, I want every order to have a unique identifier that is a sequence of: Type of order, Year, Month, Incrementing number
(ex: NO/2016/10/001, NO/2016/10/002)
where NO = "Normal order", 2016 is the year, 10 is the month and 001 is an incrementing number. The idea is that it is easier for employees to communicate using these identifiers (of course this sequence would not be the primary key of the table)
I know that I could create a stored procedure that would take Order type as an argument and return the sequence, but I'm curious if there is any better way to do it.
Cheers!
An IDENTITY column might have gaps. Just imagine an insert which is rolled back for whatever reason...
You could use ROW_NUMBER() OVER(PARTITION BY CONVERT(VARCHAR(6),OrderDate,112) ORDER BY OrderDate) in order to start a sorted numbering at 1 for each month. What works best depends on the following question: are there parallel insert operations?
As this order name should be unique, you might run into unique key violations, which you would need complex mechanisms to work around...
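A minimal sketch of that ROW_NUMBER() approach as a view, assuming a dbo.Orders table with an OrderDate column (table, view and column names are illustrative, and the 'NO' type prefix is hard-coded for brevity):
-- Number the orders within each month and expose the caption through a view,
-- so the identifier is always derived from the stored rows.
CREATE VIEW dbo.OrdersWithCaption
AS
SELECT o.*,
       'NO/' + CONVERT(VARCHAR(4), YEAR(o.OrderDate))
             + '/' + RIGHT('0' + CONVERT(VARCHAR(2), MONTH(o.OrderDate)), 2)
             + '/' + RIGHT('00' + CONVERT(VARCHAR(3),
                        ROW_NUMBER() OVER (PARTITION BY CONVERT(VARCHAR(6), o.OrderDate, 112)
                                           ORDER BY o.OrderDate)), 3) AS Caption
FROM dbo.Orders o;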
If it is possible for you to use the existing ID, you might use a scalar function together with a computed column (which might be declared persisted):
CREATE TABLE OrderType(ID INT,Name VARCHAR(100),Abbr VARCHAR(2));
INSERT INTO OrderType VALUES(1,'Normal Order','NO')
,(2,'Special Order','SO');
GO
CREATE FUNCTION dbo.OrderCaption(@OrderTypeID INT,@OrderDate DATETIME,@OrderID INT)
RETURNS VARCHAR(100)
AS
BEGIN
RETURN ISNULL((SELECT Abbr FROM OrderType WHERE ID=@OrderTypeID),'#NA')
+ '/' + CAST(YEAR(@OrderDate) AS VARCHAR(4))
+ '/' + REPLACE(STR(MONTH(@OrderDate),2),' ','0')
+ '/' + REPLACE(STR(@OrderID,5),' ','0')
END
GO
CREATE TABLE YourOrder
(
ID INT IDENTITY
,OrderDate DATETIME DEFAULT(GETDATE())
,OrderTypeID INT NOT NULL --foreign key...
,Caption AS dbo.OrderCaption(OrderTypeID,OrderDate,ID)
);
GO
INSERT INTO YourOrder(OrderDate,OrderTypeID)
VALUES({ts'2016-01-01 23:23:00'},1)
,({ts'2016-02-02 12:12:00'},2)
,(GETDATE(),1);
GO
SELECT * FROM YourOrder
The result
ID OrderDate OrderTypeID Caption
1 2016-01-01 23:23:00.000 1 NO/2016/01/00001
2 2016-02-02 12:12:00.000 2 SO/2016/02/00002
3 2016-10-23 23:16:23.990 1 NO/2016/10/00003
You could create a computed column in your table definition which concatenates other values in your database into the kind of Identifier you're looking for.
Try this for a simplified example:
CREATE TABLE Things (
[Type of Order] varchar(10),
[Year] int,
[Month] int,
[Inc Number] int identity(1,1),
[Identifier] as [Type of Order] + '/' + cast([Year] as varchar) + '/' + cast([Month] as varchar) + '/' + cast([Inc Number] as varchar)
)
insert into Things
values
('NO',2016,10)
select * from Things
If you wanted to do something more complex you could always use a trigger to update the column post insert or update.

Postgres insert on conflict update using other table

I am getting a syntax error for the following SQL; I spent an hour on it but can't find any answer in the Postgres documentation.
CREATE TABLE transaction (userid SMALLINT, point INT, timestamp TIMESTAMPTZ);
CREATE TABLE point (userid SMALLINT PRIMARY KEY, balance1 INT, balance2 INT);
CREATE TABLE settings (userid SMALLINT, bonus INT);
INSERT INTO settings VALUES (1, 10); -- sample data
WITH trans AS (
INSERT INTO transaction (userid, point, timestamp)
VALUES ($1, $2, NOW())
RETURNING *
)
INSERT INTO point (userid, balance1)
SELECT userid, point
FROM trans -- first time
ON CONFLICT (userid) DO UPDATE
SET balance1=point.balance1 + excluded.balance1,
balance2=point.balance2 + excluded.balance1 + settings.bonus
FROM settings -- tried USING, not work
WHERE settings.userid=excluded.userid;
My question is: how can I include an extra table without using FROM during the update? My goal is to keep it all in one query.
It's not totally clear to me what you are trying to do. The way I understand it is that you are trying to use the value from settings.bonus when updating the row in case the ON CONFLICT kicks in.
This can be done using a correlated sub-query in the UPDATE part:
WITH trans (userid, point, timestamp) AS (
INSERT INTO transaction (userid, point, timestamp)
VALUES (1, 1, NOW())
RETURNING *
)
INSERT INTO point (userid, balance1)
SELECT trans.userid, trans.point
FROM trans
ON CONFLICT (userid) DO UPDATE
SET balance1 = point.balance1 + excluded.balance1,
balance2 = point.balance2 + excluded.balance1 + (select s.bonus
from settings s
where s.userid = excluded.userid)
Note however that balance2 will never be incremented this way, because on the first insert it will be null and subsequent updates will try to add something to null, but any expression involving null yields null (null + 5 is null).
So you either need to insert 0 when doing the insert the first time, or use coalesce() when doing the update, e.g.
balance2 = coalesce(point.balance2, 0) + excluded.balance1 + (select ...);
If it's possible that there is no row in settings for that user, then you need to apply coalesce() on the result of the sub-query as well:
... excluded.balance1 + coalesce( (select ... ), 0)
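Putting those pieces together, the complete statement would look roughly like this (a sketch using the same literal values 1, 1 as above, and seeding balance2 with 0 on the first insert):
WITH trans (userid, point, timestamp) AS (
    INSERT INTO transaction (userid, point, timestamp)
    VALUES (1, 1, NOW())
    RETURNING *
)
INSERT INTO point (userid, balance1, balance2)
SELECT trans.userid, trans.point, 0
FROM trans
ON CONFLICT (userid) DO UPDATE
SET balance1 = point.balance1 + excluded.balance1,
    balance2 = coalesce(point.balance2, 0) + excluded.balance1
               + coalesce((SELECT s.bonus
                           FROM settings s
                           WHERE s.userid = excluded.userid), 0);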
Also: you used excluded.point in your question, which is incorrect. excluded exposes the row proposed for insertion using the columns of the target table point, and that table does not have a column named point - you probably meant excluded.balance1 (as that is the column into which the inserted trans.point will go).
Unrelated, but: timestamp is a horrible name for a column. For one because it's a keyword, but more importantly because it does not document what the column is for. Does it mean "created at"? "valid until"? "start at"? "obsolete from"? Something entirely different?

Incrementing custom primary key values in SQL

I am asked to generate custom ID values for primary key columns. The query is as follows,
SELECT * FROM SC_TD_GoodsInward WHERE EntityId = @EntityId
SELECT @GoodsInwardId=IIF((SELECT COUNT(*) FROM SC_TD_GoodsInward)>0,
    (Select 'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)
        +RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)
        +RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)
        +'_'+CONVERT(varchar,@EntityId)+'_'
        +(SELECT RIGHT('0000'+CONVERT(VARCHAR,CONVERT(INT,RIGHT(MAX(GoodsInwardId),4))+1),4) from SC_TD_GoodsInward)),
    (SELECT 'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)
        +RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)
        +RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)
        +'_'+CONVERT(varchar,@EntityId)+'_0001'))
Here the SC_TD_GoodsInward is a table, GoodsInwardId is the value to be generated. I am getting the desired outputs too. Examples.
GI_131118_1_0001
GI_131212_1_0002
GI_131212_1_0003
But, the above condition fails when the last digits reach 9999. I simulated the query and the results were,
GI_131226_1_9997
GI_140102_1_9998
GI_140102_1_9999
GI_140102_1_0000
GI_140102_1_0000
GI_140102_1_0000
GI_140102_1_0000
GI_140102_1_0000
After 9999, it goes to 0000 and does not increment thereafter. So, in the future, I will eventually run into a PK duplicate error. How can I recycle the values so that after 9999 it goes on as 0000, 0001 ... etc.? What am I missing in the above query?
NOTE: Please consider the @EntityId value to be 1 in the query.
I am using SQL SERVER 2012.
Before giving a solution, a few points on your question:
The custom primary key consists of three parts: the date (140102), the physical location where the transaction takes place (EntityId), and a 4-digit number (9999).
According to this design there cannot be more than 9999 transactions on a single date in a single physical location -- my solution carries the same limitation.
Some points on my solution:
The 4-digit part is tied to the date, which means that for a new date the count starts over from 0000. For example:
GI_140102_1_0001,
GI_140102_1_0002,
GI_140102_1_0003,
GI_140103_1_0000,
GI_140104_1_0000
Either way, this field will be unique.
The solution compares the latest date in the table to the current date.
The logic:
If the current date and the latest date in the table match,
then it increments the 4-digit part by 1.
If the current date and the latest date in the table do not match,
then it sets the 4-digit part to 0000.
The solution (the code below produces the value that will be the next GoodsInwardId; use it as needed to fit into your solution):
declare @previous nvarchar(30);
declare @today nvarchar(30);
declare @newID nvarchar(30);
select @previous=substring(max(GoodsInwardId),4,6) from SC_TD_GoodsInward;
Select @today=RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)
+RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2);
if @previous=@today
BEGIN
Select @newID='GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)
+RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)
+'_'+CONVERT(varchar,1)+'_'+(SELECT RIGHT('0000'+
CONVERT(VARCHAR,CONVERT(INT,RIGHT(MAX(GoodsInwardId),4))+1),4)
from SC_TD_GoodsInward);
END
else
BEGIN
SET @newID='GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)
+RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)
+'_'+CONVERT(varchar,1)+'_0000';
END
select @newID;
T-SQL to create the required structure (Probable Guess)
For the table:
CREATE TABLE [dbo].[SC_TD_GoodsInward](
[EntityId] [int] NULL,
[GoodsInwardId] [nvarchar](30) NULL
)
Sample records for the table:
insert into dbo.SC_TD_GoodsInward values(1,'GI_140102_1_0000');
insert into dbo.SC_TD_GoodsInward values(1,'GI_140101_1_9999');
insert into dbo.SC_TD_GoodsInward values(1,'GI_140101_1_0001');
It's a probable solution in your situation, although the cleaner solution would be to have an identity column (use reseed if required) and tie it to the current date in a computed column.
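A rough sketch of that identity-plus-computed-column shape (table and column names other than GoodsInwardId are illustrative; note that the 4-digit part is driven by the identity, so it does not restart per day and wraps after 9999 rows):
CREATE TABLE SC_TD_GoodsInward_v2 (
GoodsInwardNo int identity(1,1) primary key,
EntityId int not null,
CreatedOn date not null default (getdate()),
-- derived display id: GI_yymmdd_entity_nnnn
GoodsInwardId AS 'GI_'
    + RIGHT('00'+CONVERT(varchar,datepart(YY,CreatedOn)),2)
    + RIGHT('00'+CONVERT(varchar,datepart(MM,CreatedOn)),2)
    + RIGHT('00'+CONVERT(varchar,datepart(DD,CreatedOn)),2)
    + '_' + CONVERT(varchar,EntityId)
    + '_' + RIGHT('0000'+CONVERT(varchar,GoodsInwardNo),4)
);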
You get this problem because once the last 4 digits reach 9999, 9999 will remain the highest number no matter how many rows are inserted, and you are throwing away the most significant digit(s).
I would remodel this to track the last used INT portion value of GoodsInwardId in a separate counter table (as an INTEGER), and then MODULUS (%) this by 10000 if need be. If there are concurrent calls to the PK generator, remember to lock the counter table row.
Also, even if you kept all the digits (e.g. in another field), note that ordering a CHAR is as follows
1
11
2
22
3
and then applying MAX() will return 3, not 22.
Edit - Clarification of counter table alternative
The counter table would look something like this:
CREATE TABLE PK_Counters
(
TableName NVARCHAR(100) PRIMARY KEY,
LastValue INT
);
(Your @EntityID might be another candidate for the counter PK column.)
You then increment and fetch the applicable counter on each call to your custom PK Key generation PROC:
UPDATE PK_Counters
SET LastValue = LastValue + 1
WHERE TableName = 'SC_TD_GoodsInward';
Select
'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)
+RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)
+RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)+'_'
+CONVERT(varchar,@EntityId)+'_'
+(SELECT RIGHT('0000'+ CONVERT(NVARCHAR, LastValue % 10000),4)
FROM PK_Counters
WHERE TableName = 'SC_TD_GoodsInward');
You could also modulo the LastValue in the counter table (and not in the query), although I believe there is more information about the number of records inserted by leaving the counter un-modulo-ed.
Re : Performance - Selecting a single integer value from a small table by its PK and then applying modulo will be significantly quicker than selecting MAX from a SUBSTRING (which would almost certainly be a scan)
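To address the "lock the counter table row" point above, the increment and read can happen in one UPDATE inside the transaction that also inserts the new row; a sketch (the @LastValue variable is illustrative, and the surrounding INSERT is omitted):
DECLARE @LastValue INT;
BEGIN TRANSACTION;
-- the UPDATE takes an exclusive lock on the counter row, so concurrent callers
-- queue here instead of reading the same value twice
UPDATE PK_Counters
SET @LastValue = LastValue = LastValue + 1
WHERE TableName = 'SC_TD_GoodsInward';
-- ... build the key and INSERT the new SC_TD_GoodsInward row here ...
COMMIT TRANSACTION;
SELECT RIGHT('0000' + CONVERT(NVARCHAR, @LastValue % 10000), 4) AS CounterPart;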
DECLARE @entityid INT = 1;
SELECT ('GI_'
+ SUBSTRING(convert(varchar, getdate(), 112),3,6) -- yymmdd today DATE
+ '_' + CAST(@entityid AS VARCHAR(50)) + '_' -- @entityid parameter
+ CASE MAX(t.GI_id + 1) --take last number + 1
WHEN 10000 THEN
'0000' --reset
ELSE
RIGHT( CAST('0000' AS VARCHAR(4)) +
CAST(MAX(t.GI_id + 1) AS VARCHAR(4))
, 4)
END) PK
FROM
(
SELECT TOP 1
CAST(SUBSTRING(GoodsInwardId,11,1) AS INT) AS GI_entity,
CAST(SUBSTRING(GoodsInwardId,4,6) AS INT) AS GI_date,
CAST(RIGHT(GoodsInwardId,4) AS INT) AS GI_id
FROM SC_TD_GoodsInward
WHERE CAST(SUBSTRING(GoodsInwardId,11,1) AS INT) = @entityid
ORDER BY gi_date DESC, rowTimestamp DESC, gi_id DESC
) AS t
This should take the last GoodsInwardId record, ordered by date DESC, and take its numeric "id". Then add +1 to return the NEW id and combine it with today's date and the @entityid you passed. If >9999, start again from 0000.
You need a timestamp-type column though, to order two rows inserted on the same date at the same transaction time. Otherwise you could get duplicates.
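A minimal way to add such a column, assuming the rowTimestamp name used in the ORDER BY above:
-- rowversion is maintained by SQL Server on every insert/update and gives a
-- stable tie-breaker for rows created at the same datetime
ALTER TABLE SC_TD_GoodsInward ADD rowTimestamp rowversion;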
I have simplified the answer even more and arrived with the following query.
IF (SELECT COUNT(GoodsInwardId) FROM SC_TD_GoodsInward WHERE EntityId = @EntityId)=0
BEGIN
SELECT @GoodsInwardId= 'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)+
RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+
RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)+'_'+
CONVERT(varchar,@EntityId)+'_0001'
END
ELSE
BEGIN
SELECT * FROM SC_TD_GoodsInward WHERE EntityId = @EntityId AND CONVERT(varchar,CreatedOn,103) = CONVERT(varchar,GETDATE(),103)
SELECT @GoodsInwardId=IIF(@@ROWCOUNT>0,
(Select 'GI_'+
RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)+
RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+
RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)+'_'+
CONVERT(varchar,@EntityId)+'_'+
(SELECT RIGHT('0000'+CONVERT(VARCHAR,CONVERT(INT,RIGHT(MAX(GoodsInwardId),4))+1),4) from SC_TD_GoodsInward WHERE CONVERT(varchar,CreatedOn,103) = CONVERT(varchar,GETDATE(),103))),
(SELECT 'GI_'+RIGHT('00'+CONVERT(varchar,datepart(YY,getdate())),2)+
RIGHT('00'+CONVERT(varchar,datepart(MM,getdate())),2)+
RIGHT('00'+CONVERT(varchar,datepart(DD,getdate())),2)+'_'+
CONVERT(varchar,@EntityId)+'_0001'))
END
select * from SC_TD_GoodsInward

sql server: generate primary key based on counter and another column value

I am creating a customer table with a parent table that is company.
It has been dictated (chagrin) that I shall create a primary key for the customer table that is a combination of the company id, which is an existing varchar(4) column in the customer table, e.g. customer.company.
The rest of the varchar(9) primary key shall be a zero-padded counter incrementing through the number of customers within that company.
E.g. where company = MSFT and this is the first insert of an MSFT record: the PK shall be MSFT00001;
on subsequent inserts the PK would be MSFT00002, MSFT00003, etc.
Then when company = INTL and its first record is inserted, the first record would be INTL00001
I began with an instead of trigger and a udf that I created from other stackoverflow responses.
ALTER FUNCTION [dbo].[GetNextID]
(
@in varchar(9)
)
RETURNS varchar(9) AS
BEGIN
DECLARE @prefix varchar(9);
DECLARE @res varchar(9);
DECLARE @pad varchar(9);
DECLARE @num int;
DECLARE @start int;
if LEN(@in)<9
begin
set @in = Left(@in + replicate('0',9) , 9)
end
SET @start = PATINDEX('%[0-9]%',@in);
SET @prefix = LEFT(@in, @start - 1 );
declare @tmp int;
set @tmp = len(@in)
declare @tmpvarchar varchar(9);
set @tmpvarchar = RIGHT( @in, LEN(@in) - @start + 1 )
SET @num = CAST( RIGHT( @in, LEN(@in) - @start + 1 ) AS int ) + 1
SET @pad = REPLICATE( '0', 9 - LEN(@prefix) - CEILING(LOG(@num)/LOG(10)) );
SET @res = @prefix + @pad + CAST( @num AS varchar);
RETURN @res
END
How would I write my INSTEAD OF trigger to insert the values and increment this primary key? Or should I give it up and start a lawn-mowing business?
Sorry for that tmpvarchar variable; SQL Server was giving me strange results without it.
Whilst I agree with the naysayers, the principle of "accepting that which cannot be changed" tends to lower the overall stress level, IMHO. Try the following approach.
Disadvantages
Single-row inserts only. You won't be doing any bulk inserts to your new customer table as you'll need to execute the stored procedure each time you want to insert a row.
A certain amount of contention for the key generation table, hence a potential for blocking.
On the up side, though, this approach doesn't have any race conditions associated with it, and it isn't too egregious a hack to really and truly offend my sensibilities. So...
First, start with a key generation table. It will contain 1 row for each company, containing your company identifier and an integer counter that we'll be bumping up each time an insert is performed.
create table dbo.CustomerNumberGenerator
(
company varchar(8) not null ,
curr_value int not null default(1) ,
constraint CustomerNumberGenerator_PK primary key clustered ( company )
)
Second, you'll need a stored procedure like this (in fact, you might want to integrate this logic into the stored procedure responsible for inserting the customer record. More on that in a bit). This stored procedure accepts a company identifier (e.g. 'MSFT') as its sole argument. This stored procedure does the following:
Puts the company id into canonical form (e.g. uppercase and trimmed of leading/trailing whitespace).
Inserts the row into the key generation table if it doesn't already exist (atomic operation).
In a single, atomic operation (update statement), the current value of the counter for the specified company is fetched and then incremented.
The customer number is then generated in the specified way and returned to the caller via a 1-row/1-column SELECT statement.
Here you go:
create procedure dbo.GetNewCustomerNumber
@company varchar(8)
as
set nocount on
set ansi_nulls on
set concat_null_yields_null on
set xact_abort on
declare
@customer_number varchar(32)
--
-- put the supplied key in canonical form
--
set @company = ltrim(rtrim(upper(@company)))
--
-- if the name isn't already defined in the table, define it.
--
insert dbo.CustomerNumberGenerator ( company )
select id = @company
where not exists ( select *
from dbo.CustomerNumberGenerator
where company = @company
)
--
-- now, an interlocked update to get the current value and increment the table
--
update CustomerNumberGenerator
set @customer_number = company + right( '00000000' + convert(varchar,curr_value) , 8 ) ,
curr_value = curr_value + 1
where company = @company
--
-- return the new unique value to the caller
--
select customer_number = @customer_number
return 0
go
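Calling it looks like this; the value returned by the single-row SELECT is what the caller then uses as the new customer key:
-- first call for a new company seeds its counter row and returns MSFT00000001
EXEC dbo.GetNewCustomerNumber @company = 'MSFT';
-- the next call returns MSFT00000002, and so on
EXEC dbo.GetNewCustomerNumber @company = 'MSFT';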
The reason you might want to integrate this into the stored procedure that inserts a row into the customer table is that it lets you glob it all together into a single transaction; without that, your customer numbers may/will get gaps when an insert fails and gets rolled back.
As others said before me, using a primary key with calculated auto-increment values sounds like a very bad idea!
If you are allowed to and if you can live with the downsides (see at the bottom), I would suggest the following:
Use a normal numeric auto-increment key and a char(4) column which only contains the company id.
Then, when you select from the table, you use row_number on the auto-increment column and combine that with the company id so that you have an additional column with a "key" that looks like what you wanted (MSFT00001, MSFT00002, ...)
Example data:
create table customers
(
Id int identity(1,1) not null,
Company char(4) not null,
CustomerName varchar(50) not null
)
insert into customers (Company, CustomerName) values ('MSFT','First MSFT customer')
insert into customers (Company, CustomerName) values ('MSFT','Second MSFT customer')
insert into customers (Company, CustomerName) values ('ABCD','First ABCD customer')
insert into customers (Company, CustomerName) values ('MSFT','Third MSFT customer')
insert into customers (Company, CustomerName) values ('ABCD','Second ABCD customer')
This will create a table that looks like this:
Id Company CustomerName
------------------------------------
1 MSFT First MSFT customer
2 MSFT Second MSFT customer
3 ABCD First ABCD customer
4 MSFT Third MSFT customer
5 ABCD Second ABCD customer
Now run the following query on it:
select
Company + right('00000' + cast(ROW_NUMBER() over (partition by Company order by Id) as varchar(5)),5) as SpecialKey,
*
from
customers
This returns the same table, but with an additional column with your "special key":
SpecialKey Id Company CustomerName
---------------------------------------------
ABCD00001 3 ABCD First ABCD customer
ABCD00002 5 ABCD Second ABCD customer
MSFT00001 1 MSFT First MSFT customer
MSFT00002 2 MSFT Second MSFT customer
MSFT00003 4 MSFT Third MSFT customer
You could create a view with this query and let everyone use that view, to make sure everyone sees the "special key" column.
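A sketch of that view (the view name is illustrative):
CREATE VIEW dbo.CustomersWithSpecialKey
AS
SELECT
    Company + right('00000' + cast(ROW_NUMBER() over (partition by Company order by Id) as varchar(5)),5) as SpecialKey,
    Id,
    Company,
    CustomerName
FROM customers;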
However, this solution has two downsides:
You need at least SQL Server 2005 in order for row_number to work.
The numbers in the special key will change when you delete companies from the table. So, if you don't want the numbers to change, you have to make sure that nothing is ever deleted from that table.