I want to achieve an auto-increment ID with a prefix, but with the number resetting for each different prefix.
The output I want looks like this:
ID PREFIX PROJECTID
1 PID_ PID_1
2 PID_ PID_2
3 RID_ RID_1
4 RID_ RID_2
But the result I got with my script is this:
ID PREFIX PROJECTID
1 PID_ PID_1
2 PID_ PID_2
3 RID_ RID_3
4 RID_ RID_4
Here's my script to create the table
CREATE TABLE PROJECTS
(ID INT IDENTITY(1,1) NOT NULL,
PREFIX NVARCHAR(10) NOT NULL,
PROJECTID AS ISNULL(PREFIX + CAST(ID AS NVARCHAR(10)), '') PERSISTED)
INSERT INTO PROJECTS(PREFIX) VALUES('PID_'),('PID_'),('RID_'),('RID_')
I'm using MS SQL 2012
You want something like this:
CREATE TABLE #PROJECTS
(
ID INT IDENTITY(1, 1) NOT NULL,
PREFIX NVARCHAR(10) NOT NULL,
PROJECTID NVARCHAR(11)
)
INSERT INTO #PROJECTS ( PREFIX )
VALUES ( 'PID_' ),
( 'PID_' ),
( 'RID_' ),
( 'RID_' )
Suppose you have the above data in your table. Now, if you want to perform an insert with DECLARE @PREFIX NVARCHAR(10) = 'RID_':
INSERT INTO #PROJECTS
( PREFIX ,
PROJECTID
)
SELECT @PREFIX ,
@PREFIX + CAST(( COUNT(tt.rn) + 1 ) AS NVARCHAR(10))
FROM ( SELECT ROW_NUMBER() OVER ( PARTITION BY P.PREFIX ORDER BY ( SELECT NULL ) ) AS rn
FROM #PROJECTS AS P
WHERE P.PREFIX = @PREFIX
) AS tt
The above query may help you.
Hi, I found the answer after working a couple of hours in MS SQL Server.
USE [StocksDB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[tb1_triger]
ON [dbo].[Table1]
INSTEAD OF INSERT
AS
DECLARE @name NCHAR(12)
SELECT TOP 1 @name = name FROM inserted
-- current highest ID in the table, e.g. 'tblo00000007'
DECLARE @maxid CHAR(12)
SELECT @maxid = MAX(id1) FROM Table1
BEGIN
SET NOCOUNT ON;
IF (@maxid IS NULL)
BEGIN
SET @maxid = 0
END
-- strip the 4-character prefix and increment the numeric part
SET @maxid = SUBSTRING(@maxid, 5, LEN(@maxid)) + 1
-- rebuild the ID: prefix 'tblo' plus the number, zero-padded to 12 characters
INSERT INTO Table1 (id1, name)
SELECT CONCAT_WS(REPLICATE('0', 12 - 4 - LEN(@maxid)), 'tblo', @maxid), i.name
FROM inserted i
END
You can do this with an INSTEAD OF trigger on the table rather than using a PERSISTED column. I have written the trigger so that it will correctly handle bulk inserts as this is something many people overlook. Also, for my solution it is not necessary to have an IDENTITY column on the table if you do not want it.
So the table has been defined with PROJECTID as a regular (non-computed) column. Also, you can get rid of the IDENTITY column as I mentioned above:
CREATE TABLE dbo.PROJECTS
(
ID INT IDENTITY(1, 1) NOT NULL,
PREFIX NVARCHAR(10) NOT NULL,
PROJECTID NVARCHAR(20) NOT NULL
);
One note - since the PREFIX column is NVARCHAR(10) and I do not know how big the numbers will get, the size of the PROJECTID column was increased to prevent overflow. Adjust the size as your data requires.
Here is the trigger:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER dbo.InsertProjects
ON dbo.PROJECTS
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @rowsAffected INT = (SELECT COUNT(*) FROM Inserted);
-- if there are no rows affected, no need to do anything
IF @rowsAffected = 0 RETURN;
DECLARE @ExistingCounts TABLE (
Prefix NVARCHAR(10) NOT NULL,
ExistingCount INT NOT NULL
);
-- get the count for each existing prefix
INSERT INTO @ExistingCounts(Prefix, ExistingCount)
SELECT PREFIX, COUNT(*) FROM dbo.PROJECTS GROUP BY PREFIX;
-- since this is an INSTEAD OF trigger, we must do the insert ourself.
-- a prefix might not exist, so use ISNULL() to get a zero in that case.
INSERT INTO dbo.PROJECTS
(
PREFIX, PROJECTID
)
SELECT sub.PREFIX,
-- the number after the prefix is the existing count for the prefix plus
-- the position of the prefix in the Inserted table
sub.PREFIX + CAST((sub.ExistingCount + sub.Number) AS NVARCHAR(10))
FROM
(SELECT i.PREFIX,
-- get the position (1, 2, 3...) of the prefix in the Inserted table
ROW_NUMBER() OVER(PARTITION BY i.PREFIX ORDER BY i.PREFIX) AS [Number],
-- get the existing count of the prefix
ISNULL(c.ExistingCount, 0) AS [ExistingCount]
FROM Inserted AS i
LEFT OUTER JOIN @ExistingCounts AS c ON c.Prefix = i.PREFIX) AS sub;
END
GO
I have included comments in the source code to explain the simple logic. Hopefully this helps and is what you are looking for :-)
Hey, use this query:
CREATE FUNCTION DBO.GET_NEX_P_ID(@PREF VARCHAR(4))
RETURNS NVARCHAR(24)
AS
BEGIN
RETURN (SELECT @PREF+CAST(COUNT(1)+1 AS VARCHAR) FROM PROJECTS WHERE PREFIX=@PREF)
END
GO
CREATE TABLE PROJECTS
(
PREFIX VARCHAR(8),
PROJECTID NVARCHAR(24)
)
GO
INSERT INTO PROJECTS
VALUES('PRJ_',DBO.GET_NEX_P_ID('PRJ_'))
GO
INSERT INTO PROJECTS
VALUES('PRQ_',DBO.GET_NEX_P_ID('PRQ_'))
GO
Thanks
Related
Suppose there is a two-column table MyTable with enough records that query optimization is relevant:
CorporationID int (unindexed)
BatchID int (indexed)
And let's assume there is always a one-to-many relationship between CorporationID and BatchID. In other words, for each BatchID there will be only one CorporationID, but for each CorporationID there will be many BatchID values.
We need to get all BatchID values where corporationID = 1.
I know the simplest solution may be to just add an index to CorporationID, but assuming that is not allowed, is there some other way to inform SQL that each BatchID corresponds to only 1 CorporationID, through a query or otherwise?
select distinct batchid from MyTable where corporationID = 1
It seems this is not efficient.
select batchid from (select min(corporationid) corporationid, batchid
from MyTable group by batchid) subselect where corporationid = 1
This is also not efficient, I assume because SQL still needs to scan all values of corporationid needlessly. (Does an aggregate function exist that selects any() value, one that would not have the overhead of min(), max(), sum(), or avg()?)
select batchid
from (
select corporationid, batchid
from (
select *, ROW_NUMBER() OVER (PARTITION BY batchid ORDER BY(SELECT NULL)) AS RowNumber
from mytable
) subselect
where RowNumber = 1
) subselect2
where corporationid = 1
Would this work? By arbitrarily selecting the corporationid related to row number 1 after partitioning by batchid with no order?
"assuming it is not allowed to create an index" - this is a highly unlikely assumption. Of course, you should create the index.
The most direct answer to the alternate questions within your question is "no". There is no function, subquery, view, or other "read" operation that will get you a list of the batches for a given CorpID without accessing the CorpID data. All your sample queries fail to help because, at some point, they still NEED to access the CorpIDs to know which rows to gather BatchIDs for. Any summary or "rollup" function that might exist would still NEED to access all the pages of data to "see" them. The reading of the pages cannot be avoided.
Without changes to your architecture, it's not physically possible to optimize your query further.
However, with some changes, you could have some options (but I'd guess they are much uglier than just adding the index). For instance, you could modify the structure of your BatchID to include data for both the BatchID and the CorpID. Something like "8888899999999"... the 9's are the BatchID and the 8's are the CorpID. This doesn't win you much though; you're not saving any index space, but at least you don't have to index the CorpID field :) Things like this could be done, but I won't share any others. I don't want the really experienced people here to see this stuff and get ill. :)
You need an index on CorpID if you want to improve performance.
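For reference, a minimal sketch of that index; the index name IX_MyTable_CorporationID is just illustrative, and I am assuming the table lives in dbo:
CREATE NONCLUSTERED INDEX IX_MyTable_CorporationID
ON dbo.MyTable (CorporationID)
INCLUDE (BatchID);
-- with the index in place, the original query becomes an index seek instead of a scan
SELECT DISTINCT BatchID
FROM dbo.MyTable
WHERE CorporationID = 1;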
If you don't have a lot of data, I suggest putting a single index on the CorporationID column. But if you have a lot of data, you can define a filtered index for each CorporationID value, as in the example below.
Part 01=>
/*01Create DB*/
IF DB_ID('Test01')>0
BEGIN
ALTER DATABASE Test01 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE Test01
END
GO
CREATE DATABASE Test01
GO
USE Test01
Go
Part 02=>
/*02Create table*/
CREATE TABLE Table01(
ID INT PRIMARY KEY IDENTITY,
Title NVARCHAR(100),
CreationDate DATETIME,
CorporationID INT ,
MyID INT ,
[GuidId1] [uniqueidentifier] NOT NULL,
[GuidId2] [uniqueidentifier] NOT NULL,
[Code] [nvarchar](50) NULL
)
ALTER TABLE [dbo].[Table01] ADD DEFAULT (GETDATE()) FOR [CreationDate]
GO
ALTER TABLE [dbo].[Table01] ADD DEFAULT (NEWSEQUENTIALID()) FOR [GuidId1]
GO
ALTER TABLE [dbo].[Table01] ADD DEFAULT (NEWID()) FOR [GuidId2]
GO
CREATE TABLE Table02(
ID INT PRIMARY KEY IDENTITY,
Title NVARCHAR(100),
CreationDate DATETIME,
CorporationID INT ,
MyID INT ,
[GuidId1] [uniqueidentifier] NOT NULL,
[GuidId2] [uniqueidentifier] NOT NULL,
[Code] [nvarchar](50) NULL
)
ALTER TABLE [dbo].[Table02] ADD DEFAULT (GETDATE()) FOR [CreationDate]
GO
ALTER TABLE [dbo].[Table02] ADD DEFAULT (NEWSEQUENTIALID()) FOR [GuidId1]
GO
ALTER TABLE [dbo].[Table02] ADD DEFAULT (NEWID()) FOR [GuidId2]
GO
Part 03=>
/*03Add Data*/
DECLARE @I INT = 1
WHILE @I < 1000000
BEGIN
DECLARE @Title NVARCHAR(100) = 'TITLE '+ CAST(@I AS NVARCHAR(10)),
@CorporationID INT = CAST((RAND()*20) + 1 AS INT),
@Code NVARCHAR(50) = 'CODE '+ CAST(@I AS NVARCHAR(10)) ,
@MyID INT = CAST((RAND()*50) + 1 AS INT)
INSERT INTO Table01 (Title , CorporationID , Code , MyID )
VALUES ( @Title , @CorporationID , @Code , @MyID)
SET @I += 1
END
INSERT INTO Table02 ([Title], [CreationDate], [CorporationID], [MyID], [GuidId1], [GuidId2], [Code])
SELECT [Title], [CreationDate], [CorporationID], [MyID], [GuidId1], [GuidId2], [Code] FROM Table01
Part 04=>
/*04 CREATE INDEX*/
CREATE NONCLUSTERED INDEX IX_Table01_ALL
ON Table01 (CorporationID) INCLUDE (MyID) ;
DECLARE @QUERY NVARCHAR(MAX) = ''
DECLARE @J INT = 1
WHILE @J < 21
BEGIN
SET @QUERY += '
CREATE NONCLUSTERED INDEX IX_Table02_'+CAST(@J AS NVARCHAR(5))+'
ON Table02 (CorporationID) INCLUDE (MyID) WHERE CorporationID = '+CAST(@J AS NVARCHAR(5))+';'
SET @J+= 1
END
EXEC (@QUERY)
Part 05=>
/*05 READ DATA => press Ctrl+M to include the actual execution plan */
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT * FROM [dbo].[Table01] WHERE CorporationID = 10 AND MyID = 25
SELECT * FROM [dbo].[Table01] WITH(INDEX(IX_Table01_ALL)) WHERE CorporationID = 10 AND MyID = 25
SELECT * FROM [dbo].[Table02] WITH(INDEX(IX_Table02_10)) WHERE CorporationID = 10 AND MyID = 25
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
Notice the IO and TIME statistics, and the execution plans.
Good luck
At times I need to store a temporary value in a field. I have a stored procedure that adds it by inserting the new record first, then:
SELECT @Record_Value = SCOPE_IDENTITY();
UPDATE ADMIN_Publication_JSON
SET NonPubID = CAST(@Record_Value as nvarchar(20)) + '_tmp'
WHERE RecID = @Record_Value
It simply takes the identity value and adds an '_tmp' to the end. Is there a way that I can create a default value in the table that would do that automatically if I did not insert a value into that field?
The NonPubID column is just a NVARCHAR(50).
Thanks
You could write a trigger that replaces NULL with that string upon INSERT.
CREATE TRIGGER admin_publication_json_bi
ON admin_publication_json
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
UPDATE apj
SET apj.nonpubid = concat(convert(varchar(20), i.id), '_tmp')
FROM admin_publication_json apj
INNER JOIN inserted i
ON i.id = apj.id
WHERE i.nonpubid IS NULL;
END;
db<>fiddle
Downside: You cannot explicitly insert NULLs for that column, should that be desired.
Check out the NewKey column below:
CREATE TABLE #Table
(
ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY CLUSTERED,
IDValue VARCHAR(1) ,
ModifiedDT DATETIME NULL,
NewKey AS ( CONVERT(VARCHAR(100),ID)+'_Tmp' )
)
INSERT #Table( IDValue, ModifiedDT )
SELECT 'A', GETDATE()
UNION ALL
SELECT 'Y', GETDATE() - 1
UNION ALL
SELECT 'N', GETDATE() - 5
SELECT * FROM #Table
I have a table that looks something like this:
UserID Email
-----------------------------------
1 1_0@email.com;1_1@email.com
2 2_0@email.com;2_1@email.com
3 3_0@email.com;3_3@email.com
And I need to create a temp table that will look like this:
UserID Email
-----------------------------------
1 1_0@email.com
1 1_1@email.com
2 2_0@email.com
2 2_1@email.com
3 3_0@email.com
3 3_1@email.com
The temp table will be used in an update trigger, and I was wondering if there is a more elegant approach than doing something like this:
-- Create temp table to hold the result table
CREATE TABLE #resultTable(
UserID int,
Email nvarchar(50)
)
-- Create temp table to help iterate through table
CREATE TABLE #tempTable(
ID int IDENTITY(1,1),
UserID int,
Email nvarchar(50)
)
-- Insert data from updated table into temp table
INSERT INTO #tempTable
SELECT [UserId], [Email]
FROM inserted
-- Iterate through temp table
DECLARE @count int = @@ROWCOUNT
DECLARE @index int = 1
WHILE (@index <= @count)
BEGIN
DECLARE @userID int
DECLARE @email nvarchar(50)
-- Get the user ID and email values
SELECT
@userID = [UserID], @email = [Email]
FROM #tempTable
WHERE [ID] = @index
-- Insert the parsed email address into the result table
INSERT INTO #resultTable([UserID], [Email])
SELECT @userID, [Data]
FROM myFunctionThatSplitsAColumnIntoATable(@email, ';')
SET @index = @index + 1
END
-- Do stuff with the result table
You should avoid iterative approaches in T-SQL unless strictly necessary, especially inside triggers.
You can use the APPLY operator.
From MSDN:
The APPLY operator allows you to invoke a table-valued function for each row returned by an outer table expression of a query.
So, you can try to replace all your code with this:
INSERT INTO #resultTable(UserID, Email)
SELECT T1.UserID
,T2.Data
FROM inserted T1
CROSS APPLY myFunctionThatSplitsAColumnIntoATable(T1.Email, ';') AS T2
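If you happen to be on SQL Server 2016 or later, the built-in STRING_SPLIT function can play the role of the custom splitter. A minimal sketch, assuming the same #resultTable and trigger context:
INSERT INTO #resultTable(UserID, Email)
SELECT i.UserID,
       s.value -- one row per delimited email address
FROM inserted AS i
CROSS APPLY STRING_SPLIT(i.Email, ';') AS s;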
I have a table that loads new data every day and another table that contains a history of changes to that table. What's the best way to check if any of the data have changed since the last time data was loaded?
For example, I have table #a with some strategies for different countries and table #b tracks the changes made to table #a. I can use a checksum() to hash the fields that can change, and add them to the table if the existing hash is different from the new hash. However, MSDN doesn't think this is a good idea since "collisions" can occur, e.g. two different values map to the same checksum.
MSDN link for checksum
http://msdn.microsoft.com/en-us/library/aa258245(v=SQL.80).aspx
Sample code:
declare @a table
(
ownerid bigint
,Strategy varchar(50)
,country char(3)
)
insert into @a
select 1,'Long','USA'
insert into @a
select 2,'Short','CAN'
insert into @a
select 3,'Neutral','AUS'
declare @b table
(
Lastupdated datetime
,ownerid bigint
,Strategy varchar(50)
,country char(3)
)
insert into @b
(
Lastupdated
,ownerid
,strategy
,country
)
select
getdate()
,a.ownerid
,a.strategy
,a.country
from @a a left join @b b
on a.ownerid=b.ownerid
where
b.ownerid is null
select * from @b
--get a different timestamp
waitfor delay '00:00:00.1'
--change source data
update @a
set strategy='Short'
where ownerid=1
--add newly changed data into @b
insert into @b
select
getdate()
,a.ownerid
,a.strategy
,a.country
from
(select *,checksum(strategy,country) as hashval from @a) a
left join
(select *,checksum(strategy,country) as hashval from @b) b
on a.ownerid=b.ownerid
where
a.hashval<>b.hashval
select * from @b
How about writing a query using EXCEPT? Just write queries for both tables and then add EXCEPT between them:
(SELECT * FROM table_new) EXCEPT (SELECT * FROM table_old)
The result will be the entries in table_new that aren't in table_old (i.e. that have been updated or inserted).
Note: To get rows recently deleted from table_old, you can reverse the order of the queries.
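For example, reversing the queries from above lists the rows that exist in table_old but are no longer in table_new:
(SELECT * FROM table_old) EXCEPT (SELECT * FROM table_new)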
There is no need to check for changes if you use a different approach to the problem.
On your master table create a trigger for INSERT, UPDATE and DELETE which tracks the changes for you by writing to table @b.
If you search the internet for "SQL audit table" you will find many pages describing the process, for example: Adding simple trigger-based auditing to your SQL Server database
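As a rough illustration only (the table and trigger names here are hypothetical, since the question uses table variables), an audit trigger on a permanent master table dbo.Strategies writing to a history table dbo.StrategiesAudit might look like this:
CREATE TRIGGER dbo.trg_Strategies_Audit
ON dbo.Strategies
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- log the new version of every inserted or updated row
    INSERT INTO dbo.StrategiesAudit (Lastupdated, ownerid, Strategy, country)
    SELECT GETDATE(), i.ownerid, i.Strategy, i.country
    FROM inserted AS i;
    -- deleted rows would be handled similarly via the deleted pseudo-table
END;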
Thanks to @newenglander I was able to use EXCEPT to find the changed row. As @Tony said, I'm not sure how multiple changes will work, but here's the same sample code reworked to use EXCEPT instead of CHECKSUM:
declare @a table
(
ownerid bigint
,Strategy varchar(50)
,country char(3)
)
insert into @a
select 1,'Long','USA'
insert into @a
select 2,'Short','CAN'
insert into @a
select 3,'Neutral','AUS'
declare @b table
(
Lastupdated datetime
,ownerid bigint
,Strategy varchar(50)
,country char(3)
)
insert into @b
(
Lastupdated
,ownerid
,strategy
,country
)
select
getdate()
,a.ownerid
,a.strategy
,a.country
from @a a left join @b b
on a.ownerid=b.ownerid
where
b.ownerid is null
select * from @b
--get a different timestamp
waitfor delay '00:00:00.1'
--change source data
update @a
set strategy='Short'
where ownerid=1
--add newly changed data using EXCEPT
insert into @b
select getdate(),
ownerid,
strategy,
country
from
(
(
select
ownerid
,strategy
,country
from @a changedtable
)
EXCEPT
(
select
ownerid
,strategy
,country
from @b historicaltable
)
) x
select * from @b
Let us say I have a table (everything is very much simplified):
create table OriginalData (
ItemName NVARCHAR(255) not null
)
And I would like to insert its data (set based!) into two tables which model inheritance
create table Statements (
Id int IDENTITY NOT NULL,
ProposalDateTime DATETIME null
)
create table Items (
StatementFk INT not null,
ItemName NVARCHAR(255) null,
primary key (StatementFk)
)
Statements is the parent table and Items is the child table. I have no problem doing this with one row which involves the use of IDENT_CURRENT but I have no idea how to do this set based (i.e. enter several rows into both tables).
Thanks.
Best wishes,
Christian
Another possible method that avoids the use of cursors (which are generally not a best practice in SQL) is listed below. It uses the OUTPUT clause to capture the insert results from the one table to be used in the insert into the second table.
Note this example makes one change: I moved your IDENTITY column to the Items table. I believe that would be acceptable, at least based on your original table layout, since the primary key of that table is the StatementFk column.
Note this example code was tested on SQL Server 2005.
IF OBJECT_ID('tempdb..#OriginalData') IS NOT NULL
DROP TABLE #OriginalData
IF OBJECT_ID('tempdb..#Statements') IS NOT NULL
DROP TABLE #Statements
IF OBJECT_ID('tempdb..#Items') IS NOT NULL
DROP TABLE #Items
create table #OriginalData
( ItemName NVARCHAR(255) not null )
create table #Statements
( Id int NOT NULL,
ProposalDateTime DATETIME null )
create table #Items
( StatementFk INT IDENTITY not null,
ItemName NVARCHAR(255) null,
primary key (StatementFk) )
INSERT INTO #OriginalData
( ItemName )
SELECT 'Shirt'
UNION ALL SELECT 'Pants'
UNION ALL SELECT 'Socks'
UNION ALL SELECT 'Shoes'
UNION ALL SELECT 'Hat'
DECLARE @myTableVar table
( StatementFk int,
ItemName nvarchar(255) )
INSERT INTO #Items
( ItemName )
OUTPUT INSERTED.StatementFk, INSERTED.ItemName
INTO @myTableVar
SELECT ItemName
FROM #OriginalData
INSERT INTO #Statements
( ID, ProposalDateTime )
SELECT
StatementFK, getdate()
FROM @myTableVar
You will need to write an ETL process to do this. You may want to look into SSIS.
This can also be done with T-SQL and possibly temp tables. You may need to store a unique key from OriginalData in the Statements table, and then, when you are inserting Items, join OriginalData with Statements on that unique key to get the ID (see the sketch below).
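A rough sketch of that idea, assuming a hypothetical SourceKey column has been added to both OriginalData and Statements to carry the correlation:
-- 1) insert the parent rows, carrying the source key along
INSERT INTO Statements (ProposalDateTime, SourceKey)
SELECT GETDATE(), o.SourceKey
FROM OriginalData AS o;
-- 2) insert the child rows, joining back to pick up the generated Id
INSERT INTO Items (StatementFk, ItemName)
SELECT s.Id, o.ItemName
FROM OriginalData AS o
INNER JOIN Statements AS s ON s.SourceKey = o.SourceKey;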
I don't think you could do it in one chunk, but you could certainly do it with a cursor loop:
DECLARE @bla NVARCHAR(255)
DECLARE @ID int
DECLARE c1 CURSOR
FOR
SELECT ItemName
FROM OriginalData
OPEN c1
FETCH NEXT FROM c1
INTO @bla
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO Statements(ProposalDateTime) VALUES(GETDATE())
SET @ID = SCOPE_IDENTITY()
INSERT INTO Items(StatementFk, ItemName) VALUES(@ID, @bla)
FETCH NEXT FROM c1
INTO @bla
END
CLOSE c1
DEALLOCATE c1