So I've got a table where I keep track of Items. There are 3 kinds of items:
Weapon
Armour
Potion
I created 3 check constraints for those:
CK_Weapon: CHECK ([Type]=(1) AND NOT ([PhysDamage]+[ElemDamage])<(1) AND [AttackSpeed]>(0.5))
CK_Armour: CHECK ([Type]=(2) AND NOT ([PhysReduction]+[ElemReduction]<(1)))
CK_Potion: CHECK ([Type]=(3) AND ([PhysDamage]+[ElemDamage])=(0) AND [AttackSpeed]=(0) AND ([PhysReduction]+[ElemReduction])=(0))
When I try to add a potion with the following insert:
DECLARE @num int, @Type int, @Name varchar(50), @Description varchar(50), @Gold int
SET @Type = 3
SET @Name = 'Spirit Potion'
SET @Description = 'Restores a bit of Spirit'
SET @Gold = 150
insert into Item(Type, Name, Description, GoldValue) VALUES(@Type, @Name, @Description, @Gold)
I get the following error:
The INSERT statement conflicted with the CHECK constraint "CK_Weapon". The conflict occurred in database "Database", table "dbo.Item".
But it shouldn't trigger this CHECK at all, because Potion Type should be 3!
Is there an easy way for me to alter those CHECKs so it'll only trigger when the Type is the same?
You need to reverse the first part of your checks so that they give a "pass" to rows they don't care about. So, e.g. for the Armour check, you should check either that the Type isn't 2 (so this check constraint doesn't care) or that the checks that apply to armour are passed:
CHECK ([Type]!=(2) OR (NOT ([PhysReduction]+[ElemReduction]<(1))))
Repeat for your other checks. At the moment, you cannot insert any rows, since the combination of check constraints requires that Type be simultaneously equal to 1, 2 and 3.
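For reference, here's a sketch of the full set rewritten that way, reusing the exact expressions from the question (this assumes the table is named Item, as in the INSERT statement):
ALTER TABLE Item DROP CONSTRAINT CK_Weapon, CK_Armour, CK_Potion;

ALTER TABLE Item ADD CONSTRAINT CK_Weapon
    CHECK ([Type]!=(1) OR (NOT ([PhysDamage]+[ElemDamage])<(1) AND [AttackSpeed]>(0.5)));
ALTER TABLE Item ADD CONSTRAINT CK_Armour
    CHECK ([Type]!=(2) OR (NOT ([PhysReduction]+[ElemReduction]<(1))));
ALTER TABLE Item ADD CONSTRAINT CK_Potion
    CHECK ([Type]!=(3) OR (([PhysDamage]+[ElemDamage])=(0) AND [AttackSpeed]=(0) AND ([PhysReduction]+[ElemReduction])=(0)));
Note that if the damage/reduction columns are NULL on a potion row (as in your INSERT), the expression evaluates to UNKNOWN and the row is still accepted, because CHECK constraints only reject rows for which the expression is FALSE.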
You are trying to put 3 constraints onto the same table, hoping that only the one matching the Type you are inserting will be checked. But it will check them all; that's why the CK_Weapon constraint is violated, as it expects Type = 1.
You might want to try to write a bit of case-logic inside your constraint, like this:
create table [RPGInventory]
(
[Type] tinyint not null
, [PhysDamage] int null
, [ElemDamage] int null
, [AttackSpeed] int null
, [PhysReduction] int null
, [ElemReduction] int null
, constraint ckInventoryType check (1 = iif([Type] = (1)
and not ([PhysDamage] + [ElemDamage]) < (1)
and [AttackSpeed] > (0.5), 1
, iif([Type] = (2)
and not ([PhysReduction] + [ElemReduction] < (1)), 1
, iif([Type] = (3)
and ([PhysDamage] + [ElemDamage]) = (0)
and [AttackSpeed] = (0)
and ([PhysReduction] + [ElemReduction]) = (0), 1, 0)))
)
)
go
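A quick, hypothetical smoke test against that table (values chosen only to exercise the constraint):
insert [RPGInventory] ([Type], [PhysDamage], [ElemDamage], [AttackSpeed], [PhysReduction], [ElemReduction])
values (3, 0, 0, 0, 0, 0)   -- potion with no combat stats: accepted
insert [RPGInventory] ([Type], [PhysDamage], [ElemDamage], [AttackSpeed], [PhysReduction], [ElemReduction])
values (3, 5, 0, 0, 0, 0)   -- potion that deals damage: rejected by ckInventoryType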
I am creating a stored procedure and the final "When not matched" statement is throwing an error for the tmp.DelDate and tmp.DelUser fields. The "tmp" table is a User-Defined Table Type and the definition is below the sp code. 99 times out of 100, the problem is a bad alias or other typo. I've been staring at this and I have to be missing something small. This last statement is almost identical to the first "When Matched" statement.
ALTER Procedure dbo.spInsertUpdateProtocolRiskStrats
@riskStratsTable ProtocolRiskStrats READONLY
WITH RECOMPILE
AS
BEGIN
WITH riskStrats as (
SELECT ol.StratId,
ol.LinkType,
ol.LinkId,
ol.add_user,
ol.add_date,
ol.del_user,
ol.del_date
FROM ots_StratTriggerOutcomesLinks ol
JOIN @riskStratsTable rst on ol.LinkId = rst.LinkId
WHERE ol.LinkId = rst.LinkId
AND ol.LinkType = 2
)
MERGE riskStrats
USING @riskStratsTable as tmp
ON riskStrats.LinkId = tmp.LinkId
WHEN MATCHED THEN
UPDATE SET riskStrats.add_date = tmp.AddDate,
riskStrats.add_user = tmp.AddUser,
del_date = null,
del_user= null
WHEN NOT MATCHED THEN
INSERT (StratId, LinkType, LinkId, add_user, add_date)
VALUES (tmp.StratId, tmp.LinkType, tmp.LinkId, tmp.AddUser, tmp.AddDate)
WHEN NOT MATCHED BY SOURCE THEN
UPDATE SET riskStrats.del_date = tmp.DelDate,
riskStrats.del_user = tmp.DelUser;
END
User-Defined Table Type definition
CREATE TYPE dbo.ProtocolRiskStrats AS TABLE
(
KeyId int null,
StratId int null,
LinkType int null,
LinkId int null,
AddUser int null,
AddDate datetime null,
DelUser int null,
DelDate datetime null
)
As noted by @AlwaysLearning, I was assigning values that couldn't exist because it was a "not matched" condition. I updated my last statement to use constant values. I had to add another parameter to pass in the user name. I could have also done a "Top 1" on my TVP, but my dev lead didn't like that.
UPDATE SET riskStrats.del_date = GETDATE(),
riskStrats.del_user = @userName;
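For context, the only other change was the extra parameter on the procedure header; the @userName name and its int type (picked to match the DelUser column in the table type) are my assumptions:
ALTER Procedure dbo.spInsertUpdateProtocolRiskStrats
@riskStratsTable ProtocolRiskStrats READONLY,
@userName int   -- assumed name/type; identifies the user performing the delete
WITH RECOMPILE
AS
-- body as before, with the final WHEN NOT MATCHED BY SOURCE clause replaced by the snippet above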
I'm writing a stored procedure to insert data from a form into two tables. One table has an autonumbered identity field. I need to insert the data into that table, find the newly created autonumber, and use that number to insert data into another table. So, to boil it down, I have a one-to-many link between the two tables and I need to make sure the identity field gets inserted.
Is this code the best way to do something like this, or am I missing something obvious?
CREATE PROCEDURE [dbo].[sp_Insert_CRT]
(
@TRACKING_ID int,
@CUST_NUM int,
@TRACKING_ITEM_ID int,
@STATEMENT_NUM nvarchar (200) = null,
@AMOUNT numeric (15, 2),
@BBL_ADJUSTED int = NULL,
@PAID_VS_BILLED int = NULL,
@ADJUSTMENT_TYPE int = NULL,
@ENTERED_BY nvarchar (10) = NULL,
@ENTERED_DATE date = NULL,
@AA_STATUS int = NULL
)
AS
BEGIN
-- Insert data into CRT_Main, where Tracking_ID is an autonumber field
INSERT into tbl_CRT_Main
(
-- TRACKING_ID
CUST_NUM
,TRACKING_ITEM_ID
,STATEMENT_NUM
,AMOUNT
)
VALUES
(
-- @TRACKING_ID
@CUST_NUM
,@TRACKING_ITEM_ID
,@STATEMENT_NUM
,@AMOUNT
)
-- Find the newly generated autonumber, and use it in another table
BEGIN TRANSACTION
DECLARE @TrackID int;
SELECT @TrackID = coalesce((select max(TRACKING_ID) from tbl_CRT_Main), 1)
COMMIT
INSERT into tbl_CRT_Admin_Adjustment
(
TRACKING_ID
,BBL_ADJUSTED
,PAID_VS_BILLED
,[ADJUSTMENT_TYPE]
,[ENTERED_BY]
,[ENTERED_DATE]
,AA_STATUS
)
VALUES
(
@TrackID
,@BBL_ADJUSTED
,@PAID_VS_BILLED
,@ADJUSTMENT_TYPE
,@ENTERED_BY
,@ENTERED_DATE
,@AA_STATUS
)
END
SELECT @TrackID = coalesce((select max(TRACKING_ID) from tbl_CRT_Main), 1)
No, don't do this. This will get you the maximum value of TRACKING_ID yes, but that doesn't mean that's the value that was created for your INSERT. If multiple INSERT statements were being run by different connections then very likely you would get the wrong value.
Instead, use SCOPE_IDENTITY to get the value:
SET @TrackID = SCOPE_IDENTITY();
Also, there is no need to wrap the above in an explicit transaction like you have with your SELECT MAX(). Instead, most likely, the entire batch in the procedure should be inside its own explicit transaction, with a TRY...CATCH so that you can ROLLBACK the whole batch in the event of an error.
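As a rough sketch of that shape (column lists taken from the question's procedure; THROW needs SQL Server 2012 or later, use RAISERROR on older versions):
BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO tbl_CRT_Main (CUST_NUM, TRACKING_ITEM_ID, STATEMENT_NUM, AMOUNT)
    VALUES (@CUST_NUM, @TRACKING_ITEM_ID, @STATEMENT_NUM, @AMOUNT);

    -- identity value generated by the INSERT above, within this scope
    DECLARE @TrackID int = SCOPE_IDENTITY();

    INSERT INTO tbl_CRT_Admin_Adjustment (TRACKING_ID, BBL_ADJUSTED, PAID_VS_BILLED,
        ADJUSTMENT_TYPE, ENTERED_BY, ENTERED_DATE, AA_STATUS)
    VALUES (@TrackID, @BBL_ADJUSTED, @PAID_VS_BILLED, @ADJUSTMENT_TYPE, @ENTERED_BY, @ENTERED_DATE, @AA_STATUS);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;   -- re-raise the original error to the caller
END CATCH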
I'm dealing with a table in which a bunch of arbitrary settings are stored as VARCHAR(255) values. The particular one I'm tasked with dealing with is a sequence number that needs to be incremented and returned to the caller. (Again, note that the sequence "number" is stored as VARCHAR, which is something I don't have any control over).
Because it's a sequence number, I don't really want to select and update in separate steps. When I've dealt with this sort of thing in the past with actual numeric fields, my method has been something like
UPDATE TABLE SET @SEQ_NUM = VALUE = VALUE + 1
which increments the value and gives me the updated value in one swell foop. I thought in this situation, I'd try the same basic thing with casts:
DECLARE @SEQ_NUM VARCHAR(255)
UPDATE SOME_TABLE
SET @SEQ_NUM = VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
WHERE NAME = 'SOME_NAME'
The actual update works fine so long as I don't try to assign the result to the variable; as soon as I do, I receive the following error:
Msg 549, Level 16, State 1, Line 4 The collation
'SQL_Latin1_General_CP1_CI_AS' of receiving variable is not equal to
the collation 'Latin1_General_BIN' of column 'VALUE'.
I understand what that means, but I don't understand why it's happening, or by extension, how to remedy the issue.
As an aside to fixing the specific error, I'd welcome suggestions for alternative approaches to incrementing a char sequence "number".
From one of the comments, it sounds like you may have already hit on this, but here's what I would recommend:
UPDATE TABLE
SET VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
OUTPUT inserted.VALUE
WHERE NAME = 'SOME_NAME'
This will output the new value the way a SELECT statement does. You can also cast inserted.VALUE to an int if you want to do that in the SQL.
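For example, something along these lines should hand back an int directly (same SOME_TABLE / VALUE / NAME placeholders as in the question):
UPDATE SOME_TABLE
SET VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR(255))
OUTPUT CAST(inserted.VALUE AS INT) AS NEW_SEQ_NUM
WHERE NAME = 'SOME_NAME'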
If you want to put the value into @SEQ_NUM instead of outputting it from the statement/stored procedure, you can't use a scalar variable, but you can pump it into a table variable, like so:
DECLARE @SEQ_NUM AS TABLE ( VALUE VARCHAR(255) );
UPDATE TABLE
SET VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
OUTPUT inserted.VALUE INTO @SEQ_NUM ( VALUE )
WHERE NAME = 'SOME_NAME'
SELECT VALUE FROM @SEQ_NUM
Maintaining a sequential number manually is by no means a solution I'd like to work with, but I can understand there might be constraints around this.
If you break it down into 2 steps, then you can work around the issue. Note I've replaced your WHERE clause for this example code to work:
CREATE TABLE #SOME_TABLE ( [VALUE] VARCHAR(255) )
INSERT INTO #SOME_TABLE
( VALUE )
VALUES ( '12345' )
DECLARE @SEQ_NUM VARCHAR(255)
UPDATE #SOME_TABLE
SET [VALUE] = CAST(( CAST([VALUE] AS INT) + 1 ) AS VARCHAR(255))
WHERE 1 = 1
SELECT *
FROM #SOME_TABLE
SELECT @SEQ_NUM = [VALUE]
FROM #SOME_TABLE
WHERE 1 = 1
SELECT @SEQ_NUM
DROP TABLE #SOME_TABLE
You can continue using the quirky update from the OP, but you have to split the triple assignment @Variable = Column = Expression in the UPDATE statement into two simple assignments, @Variable = Expression and Column = @Variable, like this:
CREATE TABLE #SOME_TABLE (
NAME VARCHAR(255)
, VALUE VARCHAR(255) COLLATE Latin1_General_BIN
)
INSERT #SOME_TABLE SELECT 'SOME_NAME', '42'
DECLARE @SEQ_NUM VARCHAR(255)
/*
-- this quirky update fails on COLLATION mismatch or data-type mismatch
UPDATE #SOME_TABLE
SET @SEQ_NUM = VALUE = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
WHERE NAME = 'SOME_NAME'
*/
-- this quirky update works in all cases
UPDATE #SOME_TABLE
SET @SEQ_NUM = CAST((CAST(VALUE AS INT) + 1) AS VARCHAR)
, VALUE = @SEQ_NUM
WHERE NAME = 'SOME_NAME'
SELECT *, @SEQ_NUM FROM #SOME_TABLE
This simple rewrite also prevents the db engine from complaining about a data-type difference between @Variable and Column (e.g. VARCHAR vs NVARCHAR), and seems like a more "portable" way of doing quirky updates (if there is such a thing).
Given a table Table1 in which a column Code accepts nullable values, how can we ensure that non-null values are unique, except for codes that start with 'A', which can be duplicated at most twice?
Table1
Id | Code
----------
1 | NULL --[ok]
2 | A123 --[ok]
3 | A123 --[ok]
4 | B100 --[ok]
5 | C200 --[ok]
6 | B100 --[not ok already used]
7 | NULL --[ok]
What I have tried is creating an indexed view; the solution works fine for NULL values, but not for the second case I mentioned (those codes are actually skipped):
Create view v_Table_unq with schemabinding as(
select code from
dbo.Table1
where code is not null and code not like 'A%'
)
go
create unique clustered index unq_code on v_Table_unq(code)
Thanks for the help.
Table Creation
CREATE TABLE CheckConstraint
(
Name VARCHAR(50)
)
GO
Function Creation
create FUNCTION CheckDuplicateWithA() RETURNS INT AS BEGIN
DECLARE @ret INT = 0;
SELECT @ret = ISNULL(COUNT(Name), 0) FROM CheckConstraint WHERE Name like '[A]%' group by Name having COUNT(name) >= 1;
RETURN ISNULL(@ret, 0);
END;
GO
create FUNCTION CheckDuplicateOtherThenA() RETURNS INT AS BEGIN
DECLARE @ret INT = 0;
SELECT @ret = ISNULL(COUNT(Name), 0) FROM CheckConstraint WHERE Name not like '[A]%' group by Name having COUNT(name) >= 1;
RETURN ISNULL(@ret, 0);
END;
GO
Constraints
alter TABLE CheckConstraint
add CONSTRAINT CheckDuplicateContraintWithA CHECK (NOT (dbo.CheckDuplicateWithA() > 2));
go
alter TABLE CheckConstraint
add CONSTRAINT CheckDuplicateConmstraintOtherThenA CHECK (NOT (dbo.CheckDuplicateOtherThenA() > 1));
go
Result Set
insert into CheckConstraint(Name)Values('b') -- Passed
insert into CheckConstraint(Name)Values('b') -- Failed
insert into CheckConstraint(Name)Values('a') -- Passed
insert into CheckConstraint(Name)Values('a') -- Passed
insert into CheckConstraint(Name)Values('a') -- Failed
Why would you want a unique constraint? Why can't you add this logic to the proc which inserts the data into the table? If you don't have a single point of insertion/updating, why can't you put it in an INSTEAD OF or AFTER trigger? That would be much better, as you can handle it properly and return proper error messages, and it has less overhead than an indexed view. If you need a unique constraint for the records which don't start with 'A', then you can have a persisted computed column and put a unique constraint on that.
Of course, you will have the overhead of a persisted computed column with an index, but if you just need the uniqueness, you can use that. For values which start with 'A' the computed column could be a null value.
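Here is a minimal sketch of that persisted-computed-column idea, with invented column/index names; I've used the Id column for the exempt rows rather than NULL, because a plain unique index in SQL Server only allows a single NULL:
ALTER TABLE Table1
    ADD CodeForUnique AS
        (CASE WHEN Code IS NULL OR Code LIKE 'A%'
              THEN 'ROW_' + CAST(Id AS VARCHAR(20))   -- exempt rows get a per-row value
              ELSE Code END) PERSISTED;

CREATE UNIQUE INDEX UX_Table1_CodeForUnique ON Table1 (CodeForUnique);
This only enforces the uniqueness of codes that don't start with 'A'; the "at most twice" rule for 'A' codes still needs the trigger or procedural check described above.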
I am creating a customer table with a parent table that is company.
It has been dictated (chagrin) that I shall create a primary key for the customer table that is a combination of the company id, which is an existing varchar(4) column in the customer table, e.g. customer.company.
The rest of the varchar(9) primary key shall be a zero-padded counter incrementing through the number of customers within that company.
E.g. where company = MSFT and this is the first insert of an MSFT record, the PK shall be MSFT00001;
on subsequent inserts the PK would be MSFT00002, MSFT00003, etc.
Then when company = INTL and its first record is inserted, that first record would be INTL00001.
I began with an INSTEAD OF trigger and a UDF that I created from other Stack Overflow responses.
ALTER FUNCTION [dbo].[GetNextID]
(
@in varchar(9)
)
RETURNS varchar(9) AS
BEGIN
DECLARE @prefix varchar(9);
DECLARE @res varchar(9);
DECLARE @pad varchar(9);
DECLARE @num int;
DECLARE @start int;
if LEN(@in)<9
begin
set @in = Left(@in + replicate('0',9) , 9)
end
SET @start = PATINDEX('%[0-9]%',@in);
SET @prefix = LEFT(@in, @start - 1 );
declare @tmp int;
set @tmp = len(@in)
declare @tmpvarchar varchar(9);
set @tmpvarchar = RIGHT( @in, LEN(@in) - @start + 1 )
SET @num = CAST( RIGHT( @in, LEN(@in) - @start + 1 ) AS int ) + 1
SET @pad = REPLICATE( '0', 9 - LEN(@prefix) - CEILING(LOG(@num)/LOG(10)) );
SET @res = @prefix + @pad + CAST( @num AS varchar);
RETURN @res
END
How would I write my instead of trigger to insert the values and increment this primary key. Or should I give it up and start a lawnmowing business?
Sorry for that tmpvarchar variable; SQL Server was giving me strange results without it.
Whilst I agree with the naysayers, the principle of "accepting that which cannot be changed" tends to lower the overall stress level, IMHO. Try the following approach.
Disadvantages
Single-row inserts only. You won't be doing any bulk inserts to your new customer table as you'll need to execute the stored procedure each time you want to insert a row.
A certain amount of contention for the key generation table, hence a potential for blocking.
On the up side, though, this approach doesn't have any race conditions associated with it, and it isn't too egregious a hack to really and truly offend my sensibilities. So...
First, start with a key generation table. It will contain 1 row for each company, containing your company identifier and an integer counter that we'll be bumping up each time an insert is performed.
create table dbo.CustomerNumberGenerator
(
company varchar(8) not null ,
curr_value int not null default(1) ,
constraint CustomerNumberGenerator_PK primary key clustered ( company )
)
Second, you'll need a stored procedure like this (in fact, you might want to integrate this logic into the stored procedure responsible for inserting the customer record; more on that in a bit). The stored procedure accepts a company identifier (e.g. 'MSFT') as its sole argument, and does the following:
Puts the company id into canonical form (e.g. uppercase and trimmed of leading/trailing whitespace).
Inserts the row into the key generation table if it doesn't already exist (atomic operation).
In a single, atomic operation (update statement), the current value of the counter for the specified company is fetched and then incremented.
The customer number is then generated in the specified way and returned to the caller via a 1-row/1-column SELECT statement.
Here you go:
create procedure dbo.GetNewCustomerNumber
@company varchar(8)
as
set nocount on
set ansi_nulls on
set concat_null_yields_null on
set xact_abort on
declare
@customer_number varchar(32)
--
-- put the supplied key in canonical form
--
set @company = ltrim(rtrim(upper(@company)))
--
-- if the name isn't already defined in the table, define it.
--
insert dbo.CustomerNumberGenerator ( company )
select id = @company
where not exists ( select *
from dbo.CustomerNumberGenerator
where company = @company
)
--
-- now, an interlocked update to get the current value and increment the table
--
update CustomerNumberGenerator
set @customer_number = company + right( '00000000' + convert(varchar,curr_value) , 8 ) ,
curr_value = curr_value + 1
where company = @company
--
-- return the new unique value to the caller
--
select customer_number = @customer_number
return 0
go
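An example call, assuming the table and procedure above are in place:
exec dbo.GetNewCustomerNumber @company = 'MSFT'
-- first call returns customer_number = 'MSFT00000001' (this version pads to 8 digits);
-- each subsequent call for 'MSFT' returns the next number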
The reason you might want to integrate this into the stored procedure that inserts a row into the customer table is that it lets you glob it all together into a single transaction; without that, your customer numbers may/will get gaps when an insert fails and gets rolled back.
As others said before me, using a primary key with calculated auto-increment values sounds like a very bad idea!
If you are allowed to and if you can live with the downsides (see at the bottom), I would suggest the following:
Use a normal numeric auto-increment key and a char(4) column which only contains the company id.
Then, when you select from the table, you use row_number on the auto-increment column and combine that with the company id, so that you have an additional column with a "key" that looks the way you wanted (MSFT00001, MSFT00002, ...).
Example data:
create table customers
(
Id int identity(1,1) not null,
Company char(4) not null,
CustomerName varchar(50) not null
)
insert into customers (Company, CustomerName) values ('MSFT','First MSFT customer')
insert into customers (Company, CustomerName) values ('MSFT','Second MSFT customer')
insert into customers (Company, CustomerName) values ('ABCD','First ABCD customer')
insert into customers (Company, CustomerName) values ('MSFT','Third MSFT customer')
insert into customers (Company, CustomerName) values ('ABCD','Second ABCD customer')
This will create a table that looks like this:
Id Company CustomerName
------------------------------------
1 MSFT First MSFT customer
2 MSFT Second MSFT customer
3 ABCD First ABCD customer
4 MSFT Third MSFT customer
5 ABCD Second ABCD customer
Now run the following query on it:
select
Company + right('00000' + cast(ROW_NUMBER() over (partition by Company order by Id) as varchar(5)),5) as SpecialKey,
*
from
customers
This returns the same table, but with an additional column with your "special key":
SpecialKey Id Company CustomerName
---------------------------------------------
ABCD00001 3 ABCD First ABCD customer
ABCD00002 5 ABCD Second ABCD customer
MSFT00001 1 MSFT First MSFT customer
MSFT00002 2 MSFT Second MSFT customer
MSFT00003 4 MSFT Third MSFT customer
You could create a view with this query and let everyone use that view, to make sure everyone sees the "special key" column.
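For example, a view along these lines would do it (the view name is made up):
create view dbo.CustomersWithSpecialKey
as
select
    Company + right('00000' + cast(ROW_NUMBER() over (partition by Company order by Id) as varchar(5)),5) as SpecialKey,
    Id,
    Company,
    CustomerName
from dbo.customers
go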
However, this solution has two downsides:
You need at least SQL Server 2005 in order for row_number to work.
The numbers in the special key will change when you delete companies from the table. So, if you don't want the numbers to change, you have to make sure that nothing is ever deleted from that table.