How to Create a Custom Alphanumeric Primary Key in SQL Server 2005 - sql-server-2005

I wrote this algorithm and it works:
The table is:
create table PrimKeyTest (primarykeycolumn varchar(8), nextcolumn int)
GO
insert into PrimKeyTest values ('P09-0001', 1)
GO
and my function is:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
CREATE function [dbo].[GetSpecialPrimaryKey](@yearvalue int)
returns nvarchar(8)
as
begin
declare @maxkey varchar(4)
declare @maxLength int, @maxkeylength int
set @maxLength = 4
select @maxkey = ISNULL(cast(max(cast(substring(primaryKeycolumn, 5, 4) as integer)+1) as varchar),'1')
from PrimKeyTest
where substring(primaryKeycolumn, 2, 2) = substring(convert(varchar, @yearvalue), 3, 2)
set @maxkeylength = len(@maxkey)
while @maxkeylength < @maxLength
begin
set @maxkey = '0' + @maxkey
set @maxkeylength = len(@maxkey)
end
return 'P' + substring(convert(varchar, @yearvalue), 3, 2) + '-' + @maxkey
end
Now when I delete the last row of this table, the next record gets the correct number, e.g.
P09-0001 P09-0002 P09-0003 P09-0004 P09-0005
but when I delete the 2nd row of this table, the order of the primary key column becomes incorrect,
e.g. P09-0001 P09-0003 P09-0004 P09-0005
Can you help me?
I want this: P09-0001 P09-0002 P09-0003 P09-0004

This is actually not a great approach to primary keys. Doing this means "realigning" all of your pkeys whenever one is deleted (except when it's the last), which can be a complex, costly and error-prone process. For example, the pkey in this table will probably be referenced via a foreign key from other tables. If you change the value of the pkey in the first table then you also have to change it in all the other tables that reference it. This means dropping any constraints for the duration of the change, etc.
It looks like you're trying to create an identifier that will most likely be presented to the end user. You can go ahead and use your function to do that, BUT do not make it a primary key. Use an auto-incrementing column as the primary key and the 'P09-N' value as a separate field. Then, if you want to modify the values you can do so without affecting the rest of your table design.
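For example, a minimal sketch of that split (hypothetical column names, not your exact schema):
-- Surrogate auto-incrementing primary key plus a separate, display-only code column
CREATE TABLE PrimKeyTest2 (
Id int IDENTITY(1,1) NOT NULL PRIMARY KEY, -- real pkey, never renumbered
DisplayCode varchar(8) NULL,               -- e.g. 'P09-0001', safe to regenerate
nextcolumn int
)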
Now, to update the identifier values for the table whenever one is deleted, you'll probably need to use a cursor in a stored procedure. Here's a good overview on cursors. You could also use CTEs (Common Table Expressions) to do the updating; a set-based sketch follows the cursor example below.
Here is a cursor example where Col1 is your pkey and Col2 is the identifier you want to change:
begin tran -- it's important to wrap this in a transaction!
declare @counter int
set @counter = 1
declare @val varchar(50)
DECLARE crs CURSOR
FOR SELECT Col1 FROM TblTest ORDER BY Col1
OPEN crs
FETCH NEXT FROM crs INTO @val
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE TblTest
SET Col2 = 'P09-' + cast(@counter as varchar(50))
WHERE Col1 = @val
SET @counter = @counter + 1
FETCH NEXT FROM crs INTO @val
END
CLOSE crs
DEALLOCATE crs
commit tran
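If you'd rather avoid the cursor, the same renumbering can be done set-based through a CTE; this is just a sketch assuming the same TblTest layout (Col1 is the pkey, Col2 the identifier):
-- Renumber Col2 in one statement using ROW_NUMBER (SQL Server 2005 and later)
;WITH numbered AS (
SELECT Col2, ROW_NUMBER() OVER (ORDER BY Col1) AS rn
FROM TblTest
)
UPDATE numbered
SET Col2 = 'P09-' + cast(rn AS varchar(50))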
I didn't do the leading zero logic but you can Google for that pretty easily.
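For reference, the usual trick is to left-pad with RIGHT(); a quick sketch:
-- Pad a counter to four digits: 7 becomes '0007'
DECLARE @counter int
SET @counter = 7
SELECT 'P09-' + RIGHT('0000' + cast(@counter AS varchar(4)), 4) -- P09-0007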

Related

How to create a table with a variable name in SQL Server?

WHILE @i < @deptcount + 1
BEGIN
--creating dynamic tables
DECLARE @tablenames NVARCHAR(50)
SET @tablenames = 'dept' + Cast(@i AS NVARCHAR)
EXECUTE ('create table '+@tablenames+
' (deptno int, formno int, stdpr int, agg int)')
SET @i = @i + 1
END
Your code seems to work:
DECLARE @i INT = 0, @deptcount INT = 4;
while @i < @deptcount+1
Begin
--creating dynamic tables
declare @tablenames nvarchar(50)
set @tablenames = '##dept'+CAST(@i as nvarchar)
execute('create table '+@tablenames+' (deptno int, formno int, stdpr int, agg int)')
set @i = @i +1
End
SELECT *
FROM ##dept1
UNION ALL
SELECT *
FROM ##dept2
UNION ALL
SELECT *
FROM ##dept3;
LiveDemo
But reconsider your approach:
CREATE TABLE #tbl
The desire here is to create a table of which the name is determined
at run-time.
If we just look at the arguments against using dynamic SQL in stored
procedures, few of them are really applicable here. If a stored
procedure has a static CREATE TABLE in it, the user who runs the
procedure must have permissions to create tables, so dynamic SQL will
not change anything. Plan caching obviously has nothing to do with it.
Etc.
Nevertheless: Why? Why would you want to do this? If you are creating
tables on the fly in your application, you have missed some
fundamentals about database design. In a relational database, the set
of tables and columns are supposed to be constant. They may change
with the installation of new versions, but not during run-time.
Sometimes when people are doing this, it appears that they want to
construct unique names for temporary tables. This is completely
unnecessary, as this is a built-in feature in SQL Server. If you say:
CREATE TABLE #nisse (a int NOT NULL)
then the actual name behind the scenes will be something much longer,
and no other connections will be able to see this instance of #nisse.
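In other words, rather than creating deptN tables at run-time, a single permanent table with a department column usually does the job; a rough sketch:
-- One permanent table instead of dept0, dept1, ... built with dynamic SQL
CREATE TABLE dept_results (deptno int, formno int, stdpr int, agg int)
-- Reading one department is then a plain filter, no dynamic SQL needed:
SELECT formno, stdpr, agg FROM dept_results WHERE deptno = 1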

Insert a null record in the print label

I am trying to create an SP which prints labels for my vendors (vendor, vendor name). I want the user to set the start position; before the start position I simply insert null values, so that I can reuse a partially used label sheet.
I have the SP code like this:
Alter PROCEDURE [dbo].[z_sp_APVendorLabel]
(@VendorGroup bGroup ,
@StartPosition int)
AS
BEGIN
SET NOCOUNT ON;
Create table #data_null
(Vendor int,
Name varchar(60)null)
Declare @counter int
SET @counter = 0
WHILE @counter < @StartPosition
BEGIN
UPDATE #data_null SET Vendor='',Name=' '
SET @counter = @counter + 1
END
Create table #detial
(Vendor int,
Name varchar (60)null)
select Vendor, Name into #data from APVM
WHERE VendorGroup= @VendorGroup
select * from #data_null
Union All
select * from #detial
END
It is very simple, but when I test it, I did not get any data.
You're creating the table #data_null, and updating it, but never inserting any rows. If you inspect @@rowcount after each update, you'll see it's zero.
Before you change that loop to insert instead of update, please consider setting up a permanent table to select from. A loop to generate N values on every invocation of the procedure is really not the best use of your server's time, or yours. ;-)
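For illustration, here is the loop rewritten to insert the placeholder rows instead (a sketch that keeps your temp-table name):
-- Insert @StartPosition blank rows instead of updating an empty table
DECLARE @counter int
SET @counter = 0
WHILE @counter < @StartPosition
BEGIN
INSERT INTO #data_null (Vendor, Name) VALUES (NULL, NULL)
SET @counter = @counter + 1
END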

How to exec a stored procedure for each row in a select statement?

I have a stored procedure that returns a unique ID. I need to call this SP to get the unique ID for each row. I must use this SP because an application also uses it.
How can I select, for each row, an ID that is returned from the SP?
CREATE procedure [dbo].[SelectNextNumber]
@TableName nvarchar(255)
as
begin
declare @NewSeqVal int
set NOCOUNT ON
update Number --This is a table that holds for each table the max ID
set @NewSeqVal = Next = Next + Increase
where TableNaam= @TableName
if @@rowcount = 0
begin
Insert into Number VALUES (@TableName, 1, 1)
return 1
end
return @NewSeqVal
end
The number table:
CREATE TABLE [dbo].[Number](
[TableName] [varchar](25) NOT NULL,
[Next] [int] NULL,
[Increase] [int] NULL
)
I have seen that a WHILE loop can be used for this, but in my situation I don't know how to use one.
You can't use stored procedures inside a SELECT statement, only functions.
You can iterate on a resultset with a cursor if you really have to use a stored procedure:
http://msdn.microsoft.com/library/ms180169.aspx
EDIT:
To be honest I'm not very sure I have understood what you really need; it looks like you are building an IDENTITY by yourself ( http://msdn.microsoft.com/library/ms174639(v=sql.105).aspx );
still, if you really need to run a cursor here's an example which uses your stored procedure:
http://sqlfiddle.com/#!3/2b81a/1
Taking the singular INSERT INTO.. SELECT apart:
Temporarily store the SELECT results away
declare @rc int, @NewSeqVal int;
SELECT ..
INTO #tmp -- add this
FROM ..
Store the rowcount and get that many numbers
set @rc = @@rowcount;
For which you have to use the code in the SP directly:
update Number --This is a table that holds for each table the max ID
set @NewSeqVal = Next = Next + @rc
where TableNaam= 'sometbl';
Finally, the insert
INSERT ...
SELECT ID = @NewSeqVal + 1 - row_number() over (ORDER BY col1)
, {all the other columns}
FROM #tmp;
ORDER by Col1 is arbitrary, choose something sensible, or make it ORDER BY NEWID() if you don't care.
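Putting those pieces together, a rough end-to-end sketch (the table and column names here are made up; adjust them to your schema):
DECLARE @rc int, @NewSeqVal int
-- 1. Stage the rows you are about to insert
SELECT col1, col2
INTO #tmp
FROM SourceTable
WHERE SomeFilter = 1
SET @rc = @@rowcount
-- 2. Reserve @rc numbers in one update, the same trick the procedure uses
UPDATE Number
SET @NewSeqVal = Next = Next + @rc
WHERE TableNaam = 'TargetTable'
-- 3. Insert, handing each staged row one of the reserved numbers
INSERT INTO TargetTable (ID, col1, col2)
SELECT @NewSeqVal + 1 - ROW_NUMBER() OVER (ORDER BY col1), col1, col2
FROM #tmp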

How to copy a large amount of data from one table to another table in SQL Server

I want to copy a large amount of data from one table to another table. I used a cursor in a stored procedure to do this, but it only works for tables with few records. If the tables contain more records it runs for a long time and hangs. Please give some suggestions on how I can copy the data in a faster way. My SP is below:
--exec uds_shop
--select * from CMA_UDS.dbo.Dim_Shop
--select * from UDS.dbo.Dim_Shop
--delete from CMA_UDS.dbo.Dim_Shop
alter procedure uds_shop
as
begin
declare @dwkeyshop int
declare @shopdb int
declare @shopid int
declare @shopname nvarchar(60)
declare @shoptrade int
declare @dwkeytradecat int
declare @recordowner nvarchar(20)
declare @LogMessage varchar(600)
Exec CreateLog 'Starting Process', 1
DECLARE cur_shop CURSOR FOR
select
DW_Key_Shop,Shop_ID,Shop_Name,Trade_Sub_Category_Code,DW_Key_Source_DB,DW_Key_Trade_Category,Record_Owner
from
UDS.dbo.Dim_Shop
OPEN cur_shop
FETCH NEXT FROM cur_shop INTO @dwkeyshop,@shopid,@shopname,@shoptrade, @shopdb ,@dwkeytradecat,@recordowner
WHILE @@FETCH_STATUS = 0
BEGIN
Set @LogMessage = ''
Set @LogMessage = 'Records insertion/updation start date and time : ''' + Convert(varchar(19), GetDate()) + ''''
if (isnull(@dwkeyshop, '') <> '')
begin
if not exists (select crmshop.DW_Key_Shop from CMA_UDS.dbo.Dim_Shop as crmshop where (convert(varchar,crmshop.DW_Key_Shop)+CONVERT(varchar,crmshop.DW_Key_Source_DB)) = convert(varchar,(CONVERT(varchar, @dwkeyshop) + CONVERT(varchar, @shopdb))) )
begin
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record for shop table is inserting...'
insert into
CMA_UDS.dbo.Dim_Shop
(DW_Key_Shop,DW_Key_Source_DB,DW_Key_Trade_Category,Record_Owner,Shop_ID,Shop_Name,Trade_Sub_Category_Code)
values
(@dwkeyshop,@shopdb,@dwkeytradecat,@recordowner,@shopid,@shopname,@shoptrade)
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record successfully inserted in shop table for shop Id : ' + Convert(varchar, @shopid)
end
else
begin
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record for Shop table is updating...'
update
CMA_UDS.dbo.Dim_Shop
set DW_Key_Trade_Category=@dwkeytradecat,
Record_Owner=@recordowner,
Shop_ID=@shopid,Shop_Name=@shopname,Trade_Sub_Category_Code=@shoptrade
where
DW_Key_Shop=@dwkeyshop and DW_Key_Source_DB=@shopdb
Set @LogMessage = Ltrim(Rtrim(@LogMessage)) + ' ' + 'Record successfully updated for shop Id : ' + Convert(varchar, @shopid)
end
end
Exec CreateLog @LogMessage, 0
FETCH NEXT FROM cur_shop INTO @dwkeyshop,@shopid,@shopname,@shoptrade, @shopdb ,@dwkeytradecat,@recordowner
end
CLOSE cur_shop
DEALLOCATE cur_shop
End
Assuming targetTable and sourceTable have the same schema...
INSERT INTO targetTable
SELECT * FROM sourceTable
WHERE someCriteria
Avoid the use of cursors unless there is no other way (rare).
You can use the WHERE clause to filter out any duplicate records.
If you have an identity column, use an explicit column list that doesn't contain the identity column.
You can also try disabling constraints and removing indexes provided you replace them (and make sure the constraints are checked) afterwards.
If you are on SQL Server 2008 (onwards) you can use the MERGE statement.
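For example, a minimal MERGE sketch based on the tables in the question (SQL Server 2008 and later only):
-- Insert-or-update in one set-based statement instead of a row-by-row cursor
MERGE CMA_UDS.dbo.Dim_Shop AS target
USING UDS.dbo.Dim_Shop AS source
ON target.DW_Key_Shop = source.DW_Key_Shop
AND target.DW_Key_Source_DB = source.DW_Key_Source_DB
WHEN MATCHED THEN
UPDATE SET target.DW_Key_Trade_Category = source.DW_Key_Trade_Category,
target.Record_Owner = source.Record_Owner,
target.Shop_ID = source.Shop_ID,
target.Shop_Name = source.Shop_Name,
target.Trade_Sub_Category_Code = source.Trade_Sub_Category_Code
WHEN NOT MATCHED BY TARGET THEN
INSERT (DW_Key_Shop, DW_Key_Source_DB, DW_Key_Trade_Category, Record_Owner, Shop_ID, Shop_Name, Trade_Sub_Category_Code)
VALUES (source.DW_Key_Shop, source.DW_Key_Source_DB, source.DW_Key_Trade_Category, source.Record_Owner, source.Shop_ID, source.Shop_Name, source.Trade_Sub_Category_Code);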
From my personal experience, when you copy a huge amount of data from one table to another (with similar constraints), drop the constraints on the table the data is being copied into. Once the copy is done, reinstate all the constraints (one way to do this without actually dropping anything is sketched below the example).
I could reduce the copy time from 7 hours to 30 minutes in my case (100 million records with 6 constraints).
INSERT INTO targetTable
SELECT * FROM sourceTable
WHERE someCriteria -- based on the criteria you can copy/move the records
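Rough sketch of the disable/re-validate approach (hypothetical table name; NOCHECK disables FK and CHECK constraints rather than dropping them):
-- Disable FK and CHECK constraints on the target before the bulk copy
ALTER TABLE targetTable NOCHECK CONSTRAINT ALL
-- ... run the INSERT INTO ... SELECT here ...
-- Re-enable and re-validate them afterwards
ALTER TABLE targetTable WITH CHECK CHECK CONSTRAINT ALL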
Cursors are notoriously slow, and RAM can become a problem for very large datasets.
It does look like you are doing a good bit of logging in each iteration, so you may be stuck with the cursor, but I would instead look for a way to break the job up into multiple invocations so that you can keep your footprint small.
If you have an autonumber column, I would add a '@startIdx bigint' parameter to the procedure, and redefine your cursor statement to take the TOP 1000 rows WHERE [autonumberField] > @startIdx ORDER BY [autonumberField]. Then create a new stored procedure with something like:
DECLARE @startIdx bigint = 0
WHILE (SELECT COUNT(*) FROM <sourceTable>) > @startIdx
BEGIN
EXEC <your stored procedure> @startIdx
SET @startIdx = @startIdx + 1000
END
Also, make sure your database files are set to auto-grow, and that they do so in large increments, so you are not spending all your time growing your data files.
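For example, a hypothetical file-growth setting (substitute your own database and logical file names):
-- Grow the data file in fixed 512 MB steps rather than small default increments
ALTER DATABASE CMA_UDS
MODIFY FILE (NAME = CMA_UDS_Data, FILEGROWTH = 512MB)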

Using While Loop for SQL Server Update

I am trying to become more efficient in my SQL programming.
I am trying to run a loop to repeat an update command on field names that only change by a numerical suffix.
For example, instead of writing out x_1, y_1, then x_2, y_2 for each update:
DECLARE @a INT
DECLARE @b VARCHAR
SET @a = 1
WHILE @a < 30
set @b = @a
BEGIN
UPDATE source set h = h + "x_"+@b
where "y_"+@b = 'Sold'
SET @a = @a + 1
END
Let me know if I can clarify. I'm using SQL Server 2005.
Thanks for any guidance.
I'm trying to apply Adam's solution and need to understand the proper usage of N' in the following:
exec sp_executesql update source_temp set pmt_90_day = pmt_90_day + convert(money,'trans_total_'+@b'')
where convert(datetime,'effective_date_'+@b) <= dateadd(day,90,ORSA_CHARGE_OFF_DATE)
and DRC_FLAG_'+@b = 'C'
This won't actually work, as you can't have the column name in quotes. What you're essentially doing is having SQL compare two strings that will always be different, meaning you'll never perform an update.
If you must do it this way, you'd have to have something like...
DECLARE @a INT
DECLARE @b VARCHAR(2)
DECLARE @sql NVARCHAR(200)
SET @a = 1
WHILE @a < 30
BEGIN
set @b = cast(@a as varchar(2))
set @sql = N'UPDATE source set h = h + x_' + @b + N' where y_' + @b + N' = ''Sold'''
exec sp_executesql @sql
SET @a = @a + 1
END
In general, however, I'd discourage this practice. I'm not a fan of dynamic SQL being generated inside another SQL statement for any sort of production code. Very useful for doing one-off development tasks, but I don't like it for code that could get executed by a user.
Adam covered the problem itself pretty thoroughly, but I'm going to mention the underlying problem of which this is just a symptom. Your data model is almost certainly bad. If you plan to be doing much (any) SQL development you should read some introductory books on data modeling. One of the first rules of normalization is that entities should not contain repeating groups. For example, you shouldn't have columns called "phone_1", "phone_2", etc.
Here is a much better way to model this kind of situation:
CREATE TABLE Contacts (
contact_id INT NOT NULL,
contact_name VARCHAR(20) NOT NULL,
contact_description VARCHAR(500) NULL,
CONSTRAINT PK_Contacts PRIMARY KEY CLUSTERED (contact_id)
)
CREATE TABLE Contact_Phones (
contact_id INT NOT NULL,
phone_type VARCHAR(10) NOT NULL,
phone_number VARCHAR(20) NOT NULL,
CONSTRAINT PK_Contact_Phones PRIMARY KEY CLUSTERED (contact_id, phone_type),
CONSTRAINT CK_Contact_Phones_phone_type CHECK (phone_type IN ('HOME', 'FAX', 'MOBILE'))
)
Now, instead of trying to concatenate a string to deal with different columns you can deal with them as a set and get at the phone numbers that you want through business logic. (Sorry that I didn't use your example, but it seemed a bit too general and hard to understand).
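For instance, pulling a particular phone type back out is then a plain join and filter rather than picking a numbered column:
-- Get each contact's mobile number (if any) without phone_1/phone_2 style columns
SELECT c.contact_name, p.phone_number
FROM Contacts c
LEFT JOIN Contact_Phones p
ON p.contact_id = c.contact_id
AND p.phone_type = 'MOBILE'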
while @count < @countOfSession
begin
if @day = 'Saturday' or @day = 'Tuesday'
begin
if @day='Saturday'
begin
select @date
set @day='Tuesday'
set @count=@count+1
set @date=@date+3
end
else if @day='Tuesday'
begin
select @date
set @day='Saturday'
set @count=@count+1
set @date=@date+4
end
end
end