SQL custom logic in constraints

I need to create custom constraint logic: no duplicate combination within one period of time.
CREATE FUNCTION [dbo].[CheckPriceListDuplicates](
    @priceListId uniqueidentifier,
    @supplierId uniqueidentifier,
    @transportModeId uniqueidentifier,
    @currencyId uniqueidentifier,
    @departmentTypeId uniqueidentifier,
    @consolidationModeId uniqueidentifier,
    @importerId uniqueidentifier,
    @exporterId uniqueidentifier,
    @validFrom datetimeoffset(7),
    @validTo datetimeoffset(7))
RETURNS int
AS
BEGIN
    DECLARE @result int
    IF EXISTS (SELECT * FROM [dbo].[PriceListEntries] AS [Extent1]
        WHERE ([Extent1].[Id] <> @priceListId) AND
        ((([Extent1].[SupplierAddressBook_Id] IS NULL) AND (@supplierId IS NULL)) OR ([Extent1].[SupplierAddressBook_Id] = @supplierId)) AND
        ([Extent1].[TransportMode_Id] = @transportModeId) AND
        ([Extent1].[Currency_Id] = @currencyId) AND
        ([Extent1].[DepartmentType_Id] = @departmentTypeId) AND
        ((([Extent1].[ConsolidationMode_Id] IS NULL) AND (@consolidationModeId IS NULL)) OR ([Extent1].[ConsolidationMode_Id] = @consolidationModeId)) AND
        ((([Extent1].[Importer_Id] IS NULL) AND (@importerId IS NULL)) OR ([Extent1].[Importer_Id] = @importerId)) AND
        ((([Extent1].[Exporter_Id] IS NULL) AND (@exporterId IS NULL)) OR ([Extent1].[Exporter_Id] = @exporterId)) AND
        ((@validFrom >= [Extent1].[ValidFrom]) OR (@validTo <= [Extent1].[ValidTo]))
    )
    BEGIN
        SET @result = 0
    END
    ELSE
    BEGIN
        SET @result = 1
    END
    RETURN @result
END
GO
ALTER TABLE [dbo].[PriceListEntries]
ADD CONSTRAINT UniqueCombinations CHECK ([dbo].[CheckPriceListDuplicates](
    Id,
    SupplierAddressBook_Id,
    TransportMode_Id,
    Currency_Id,
    DepartmentType_Id,
    ConsolidationMode_Id,
    Importer_Id,
    Exporter_Id,
    ValidFrom,
    ValidTo) = 1)
Any idea how to do this without a function?

It's a generally accepted concept that business rules should not be enforced in the DB. This is also generally difficult to strictly enforce as there is a large amount of overlap between business rules and data integrity rules. A data integrity constraint may limit a field to an integer value between 5 and 20, but that is because some business rule somewhere stipulates those are the only valid values.
So the difference between a business rule and a constraint is usually de facto defined as: a business rule is something that can't be easily enforced with the built-in checks available in the database and a constraint can be.
But I would further narrow the definition to state that a business rule is liable to change and a constraint is more static. For example, the rule "a patron may have no more than 5 library items checked out at any one time" could well be easily enforced using database constraints. But the limit of 5 is arbitrary and could change at a moment's notice. Therefore it should be defined as a business rule that should not be enforced at the database level.
If a structural or modeling change or enhancement/addition of a database feature makes a "business rule" easily enforceable in the database where it wasn't before, you still have to consider if the rule is rigidly defined such that it is not expected to change. The database should be the bedrock, the foundation of your data edifice. You don't want it shifting around a lot.

One way to query multiple columns in a table and see if they are all (together) unique is to concatenate the tested data into a string.
SELECT CONCAT(Col1, Col2, Col3) AS ConcatString
FROM [TABLE_NAME]
WHERE CONCAT(Col1, Col2, Col3) = 'All_of_your_data_in_one_string';
https://msdn.microsoft.com/en-us/library/hh231515.aspx
(The CONCAT expression is repeated in the WHERE clause because a column alias cannot be referenced there.) If the query returns more than one row, the combination of the data is not unique.
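A variant of the same idea that finds every non-unique combination at once, as a sketch (table and column names are placeholders): group on the tested columns and keep only groups that occur more than once.
-- Lists every combination of the tested columns that appears more than once.
SELECT Col1, Col2, Col3, COUNT(*) AS Occurrences
FROM [TABLE_NAME]
GROUP BY Col1, Col2, Col3
HAVING COUNT(*) > 1;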


SQL - Unique key across 2 columns of same table?

I use SQL Server 2016. I have a database table called "Member".
In that table, I have these 3 columns (for the purpose of my question):
idMember [INT - Identity - Primary Key]
memEmail
memEmailPartner
I want to prevent a row from using an email that already exists in the table.
Both email columns are not mandatory, so they can be left blank (NULL).
If I create a new Member:
If not blank, the values entered for "memEmail" and "memEmailPartner" (independently) should not be found in any other row, in either memEmail or memEmailPartner.
So if I want to create a row with email (dominic@email.com) I must not find any occurrences of that value in memEmail or memEmailPartner.
If I update an existing Member:
I must not find any occurrences of that value in memEmail or memEmailPartner, except in the row (idMember) that I am updating, which may already have the value in memEmail or memEmailPartner.
--
From what I read on Google, it should be possible to do something with a function-based check constraint, but I can't make it work.
Anyone have a solution to my problem?
Thank you.
I may have misunderstood exactly what you were asking but it looks like you want a simple upsert query with IF EXISTS conditions.
DECLARE @emailAddress VARCHAR(255) = 'dominic@email.com', --dummy value
        @id INT = 2; --dummy value
IF NOT EXISTS
(
    SELECT 1
    FROM #Member
    WHERE memEmail = @emailAddress
       OR memEmailPartner = @emailAddress
)
BEGIN
    SELECT 'insert';
END
ELSE IF EXISTS
(
    SELECT 1
    FROM #Member
    WHERE idMember = @id
)
BEGIN
    SELECT 'update';
END;
A trigger is the traditional way of doing what you're asking for. Here's a simple demo:
--if object_id('member') is not null drop table member
go
create table member (
    idMember INT Identity Primary Key,
    memEmail varchar(100),
    memEmailPartner varchar(100)
)
go
create trigger trg_member on member after insert, update as
begin
    set nocount on
    if exists (select 1 from member m join inserted i on i.memEmail = m.memEmail and i.idMember <> m.idMember) or
       exists (select 1 from member m join inserted i on i.memEmail = m.memEmailPartner and i.idMember <> m.idMember) or
       exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmail and i.idMember <> m.idMember) or
       exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmailPartner and i.idMember <> m.idMember)
    begin
        raiserror('Email addresses must be unique.', 16, 1)
        rollback
    end
end
go
insert member(memEmail, memEmailPartner) values('a@a.com', null), ('b@b.com', null), (null, 'c@c.com'), (null, 'd@d.com')
go
select * from member
insert member(memEmail, memEmailPartner) values('a@a.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'a@a.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('c@c.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'c@c.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('e@e.com', null) -- should work
go
insert member(memEmail, memEmailPartner) values(null, 'f@f.com') -- should work
go
select * from member
-- Make sure updates still work!
update member set memEmail = memEmail, memEmailPartner = memEmailPartner
I've not tested this extensively but it should be enough to get you started if you want to try this approach.
StuartLC notes the potential for the UDF check constraint to fail in set-based updates and/or various other conditions; triggers don't have this problem.
Stuart also suggests reconsidering whether this should really be a database constraint or managed through business logic elsewhere. I'm inclined to agree - my gut feel here is that sooner or later you will come across a situation that requires email addresses to be reused, or in some other way not strictly unique.
TL;DR
The wisdom of applying this kind of business rule logic in the database needs to be reconsidered - this check is likely a better candidate for your application, or for a stored procedure which acts as an insert gatekeeper instead of direct new-row inserts into the table.
Ignoring the Warnings
That said, I do believe that what you want is possible in a constraint UDF, albeit with potentially atrocious performance consequences*1, and likely prone to race conditions in set-based updates.
Here's a user defined function which applies the unique email logic across both columns. Note that by the time the constraint is checked, the row is already IN the table, hence the new row itself needs to be excluded from the duplicate checks.
My code is also dependent on ANSI NULL behaviour, i.e. that the predicates NULL = NULL and X IN (NULL) both return NULL, and hence are excluded from the failure check (in order to meet your requirement that NULLs do not fail the rule).
We also need to check for an insert where BOTH new columns are non-null but duplicates of each other.
So here's a UDF doing the checking:
CREATE FUNCTION dbo.CheckUniqueEmails(@id int, @memEmail varchar(50),
    @memEmailPartner varchar(50))
RETURNS bit
AS
BEGIN
    DECLARE @retval bit;
    IF @memEmail = @memEmailPartner
        OR EXISTS (SELECT 1 FROM MyTable WHERE memEmail IS NOT NULL
            AND memEmail IN (@memEmail, @memEmailPartner) AND idMember <> @id)
        OR EXISTS (SELECT 1 FROM MyTable WHERE memEmailPartner IS NOT NULL
            AND memEmailPartner IN (@memEmail, @memEmailPartner) AND idMember <> @id)
        SET @retval = 0
    ELSE
        SET @retval = 1;
    RETURN @retval;
END;
GO
Which is then enforced in a CHECK constraint:
ALTER TABLE MyTable ADD CHECK (dbo.CheckUniqueEmails(
idMember, memEmail, memEmailPartner) = 1);
I've put a SQLFiddle up here
Uncomment the 'failed' test cases to ensure that the above check constraint is working.
I haven't tested this with updates, and as per Martin's advice on the link, this will likely break on an insert with multiple rows.
*1 - we'll need indexes on BOTH email address columns.
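A sketch of those two indexes (the index names are illustrative):
-- One index per email column so the UDF's EXISTS probes can seek instead of scan.
CREATE INDEX IX_MyTable_memEmail ON MyTable (memEmail);
CREATE INDEX IX_MyTable_memEmailPartner ON MyTable (memEmailPartner);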

Azure Sql Server fails to keep data consistency using a check constraint and a trigger

I'm trying to keep data consistency in my Azure SQL Server database and have implemented two approaches:
check constraint
insert/update trigger
Neither of them works, and I'm still able to reproduce a situation where my check is bypassed. The rule is simple: there can't be more than one active assignment for a user.
Tasks:
- Id
- UserId
- Status ('Active', 'Done')
User
- Id
- ...
Approach #1 - Check Constraints
I've implemented a function to ensure data consistency and applied it as a check constraint:
create function [dbo].[fnCheckUserConcurrentAssignment]
(
    @id nvarchar(128),
    @status nvarchar(50) -- not used, but required so the check constraint depends on it
)
returns bit
as
begin
    declare @result bit
    select @result = cast(case when (select count(t.Id)
                                     from dbo.Tasks t
                                     where t.[UserId] = @id
                                       and t.[Status] != 'Done') > 1
                               then 1
                               else 0
                          end as bit)
    return @result
end
go
alter table dbo.Tasks
with check add constraint [CK_CheckUserConcurrentAssignment]
check (dbo.fnCheckUserConcurrentAssignment([UserId], [Status]) = 0)
Approach #2 - Trigger
alter trigger dbo.[TR_CheckUserConcurrentAssignment]
on dbo.Tasks
for insert, update
as begin
    if (exists(select conflictId from
               (select (select top 1 t.Id from dbo.Tasks t
                        where t.UserId = i.UserId
                          and t.[Status] != N'Done'
                          and t.Id != i.Id) as conflictId
                from inserted i
                where i.UserId is not null) conflicts
               where conflictId is not null))
    begin
        raiserror ('Concurrent user assignment detected.', 16, 1);
        rollback;
    end
end
If I create lots of assignments in parallel (regularly > 10), some of them will be rejected by the constraint/trigger, but others are able to concurrently save the same UserId to the database. As a result, my database data becomes inconsistent.
I've verified both approaches in Management Studio, and there they prevent me from corrupting my data; I'm unable to assign multiple 'Active' tasks to a given user.
What is important to say: I'm using Entity Framework 6.x on my backend to save my data (SaveChangesAsync), and every save action is executed in a separate transaction with the default transaction isolation level READ COMMITTED.
What could be wrong in my approaches, and how do I keep my data consistent?
It is a classic race condition.
select @result = cast(case when (select count(t.Id)
                                 from dbo.Tasks t
                                 where t.[UserId] = @id
                                   and t.[Status] != 'Done') > 1
                           then 1
                           else 0
                      end as bit)
If two threads run the fnCheckUserConcurrentAssignment function at the same time, they will get the same count from Tasks. Each thread will then proceed with inserting its row, and the final state of the database will violate your constraint.
If you want to keep your approach with the function in the CHECK constraint, or the trigger, you should make sure that your transaction isolation level is set to SERIALIZABLE. Or use query hints to lock the table. Or use sp_getapplock to serialise calls to your function/trigger.
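As a sketch of that serialisation idea inside the trigger (the lock hints are just one of the options listed above; table and column names as in the question):
-- UPDLOCK + HOLDLOCK take key-range locks on the probed rows, so two concurrent
-- transactions inserting an active task for the same user serialise instead of
-- both passing the existence check.
if exists (select 1
           from dbo.Tasks t with (updlock, holdlock)
           join inserted i on t.UserId = i.UserId
           where t.[Status] != N'Done'
             and t.Id != i.Id)
begin
    raiserror('Concurrent user assignment detected.', 16, 1);
    rollback;
end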
In your case, the check is pretty simple, so it can be implemented without a trigger or function. I'd use a filtered unique index:
CREATE UNIQUE NONCLUSTERED INDEX [IX_UserID] ON [dbo].[Tasks]
(
[UserID] ASC
)
WHERE (Status = 'Active')
This unique index guarantees that no two rows with Status = 'Active' have the same UserID.
There is a similar question on dba.se, How are my SQL Server constraints being bypassed?, with more detailed explanations. It mentions another possible solution - indexed views - which again boils down to a unique index.
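For completeness, a sketch of that indexed-view variant (the view name is illustrative); the unique clustered index enforces the same rule on versions or editions without filtered indexes:
CREATE VIEW dbo.vActiveTasks WITH SCHEMABINDING
AS
SELECT UserId
FROM dbo.Tasks
WHERE Status = 'Active';
GO
-- At most one 'Active' row per user can exist once this unique index is in place.
CREATE UNIQUE CLUSTERED INDEX IX_vActiveTasks_UserId ON dbo.vActiveTasks (UserId);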

sql server cannot access inserted table in a trigger

I am trying to create a simple insert trigger that gets the count from a table and adds it to another, like this:
CREATE TABLE [poll-count](
    id VARCHAR(100),
    altid BIGINT,
    option_order BIGINT,
    uip VARCHAR(50),
    [uid] VARCHAR(100),
    [order] BIGINT,
    PRIMARY KEY NONCLUSTERED([order]),
    FOREIGN KEY ([order]) REFERENCES ord ([order])
)
GO
CREATE TRIGGER [get-poll-count]
ON [poll-count]
FOR INSERT
AS
BEGIN
    DECLARE @count INT
    SET @count = (SELECT COUNT(*) FROM [poll-count] WHERE option_order = i.option_order)
    UPDATE [poll-options] SET [total] = @count WHERE [order] = i.option_order
END
GO
Whenever I try to run this I get this error:
The multi-part identifier "i.option_order" could not be bound
What is the problem?
Thanks.
Your trigger currently assumes that there will always be one-row inserts. Have you tried your trigger with anything like this?
INSERT dbo.[poll-options](option_order --, ...)
VALUES(1 --, ...),
(2 --, ...);
Also, you say that SQL Server "cannot access inserted table" - yet your statement says this. Where do you reference inserted (even if this were a valid subquery structure)?
SET #count = (SELECT COUNT (*) FROM [poll-count]
WHERE option_order = i.option_order)
-----------------------^ "i" <> "inserted"
Here is a trigger that properly references inserted and also properly handles multi-row inserts:
CREATE TRIGGER dbo.pollupdate
ON dbo.[poll-options]
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    ;WITH x AS
    (
        SELECT option_order, c = COUNT(*)
        FROM dbo.[poll-options] AS p
        WHERE EXISTS
        (
            SELECT 1 FROM inserted
            WHERE option_order = p.option_order
        )
        GROUP BY option_order
    )
    UPDATE p SET total = x.c
    FROM dbo.[poll-options] AS p
    INNER JOIN x
    ON p.option_order = x.option_order;
END
GO
However, why do you want to store this data on every row? You can always derive the count at runtime, know that it is perfectly up to date, and avoid the need for a trigger altogether. If it's about the performance aspect of deriving the count at runtime, a much easier way to implement this write-ahead optimization for about the same maintenance cost during DML is to create an indexed view:
CREATE VIEW dbo.[poll-options-count]
WITH SCHEMABINDING
AS
SELECT option_order, c = COUNT_BIG(*)
FROM dbo.[poll-options]
GROUP BY option_order;
GO
CREATE UNIQUE CLUSTERED INDEX oo ON dbo.[poll-options-count](option_order);
GO
Now the index is maintained for you and you can derive very quick counts for any given (or all) option_order values. You'll have to test, of course, whether the improvement in query time is worth the increased maintenance (though you are already paying that price with the trigger, except that it can affect many more rows in any given insert, so...).
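For example, a count lookup against the view (on editions other than Enterprise you need the NOEXPAND hint for the optimizer to read the view's index directly):
-- Reads the pre-aggregated count straight from the indexed view.
SELECT c
FROM dbo.[poll-options-count] WITH (NOEXPAND)
WHERE option_order = 1;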
As a final suggestion, don't use special characters like - in object names. It just forces you to always wrap it in [square brackets] and that's no fun for anyone.

Fastest way to return a primary key value SQL Server 2005

I have a two-column table with a primary key (int) and a unique value (nvarchar(255)).
When I insert a value into this table, I can use SCOPE_IDENTITY() to return the primary key for the value I just inserted. However, if the value already exists, I have to perform an additional select to return the primary key for a follow-up operation (inserting that primary key into a second table).
I'm thinking there must be a better way to do this - I considered using covered indexes but the table only has two columns, most of what I've read on covered indexes suggests they only help where the table is significantly larger than the index.
Is there any faster way to do this? Would a covered index be faster even if its the same size as the table?
Building an index won't gain you anything since you have already created your value column as unique (which builds an index in the background). Effectively a full table scan is no different from an index scan in your scenario.
I assume you want to have a sort of insert-if-not-already-exists behaviour. There is no way of getting around a second select:
-- "YourTable" and @value are placeholders for your real table and parameter
if not exists (select ID from YourTable where name = @value)
begin
    insert into YourTable (name) values (@value)
    select SCOPE_IDENTITY()
end
else
    select ID from YourTable where name = @value
If the value happens to exist, the query will usually have been cached, so there should be no performance hit for the second ID select.
[Update statement here]
IF (@@ROWCOUNT = 0)
BEGIN
    [Insert statement here]
    SELECT Scope_Identity()
END
ELSE
BEGIN
    [Select id statement here]
END
I don't know about performance, but it has no big overhead.
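Filled in for the two-column table from the question, the pattern might look like this (dbo.Symbols, Name, and @name are made-up illustrative names):
-- Hypothetical table: dbo.Symbols(ID int identity primary key, Name nvarchar(255) unique).
DECLARE @name nvarchar(255)
SET @name = N'example'
-- The UPDATE is a no-op whose only job is to set @@ROWCOUNT when the row exists.
UPDATE dbo.Symbols SET Name = Name WHERE Name = @name
IF (@@ROWCOUNT = 0)
BEGIN
    INSERT dbo.Symbols (Name) VALUES (@name)
    SELECT SCOPE_IDENTITY() AS ID
END
ELSE
BEGIN
    SELECT ID FROM dbo.Symbols WHERE Name = @name
END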
As has already been mentioned this really shouldn't be a slow operation, especially if you index both columns. However if you are determined to reduce the expense of this operation then I see no reason why you couldn't remove the table entirely and just use the unique value directly rather than looking it up in this table. A 1-1 mapping like this is (theoretically) redundant. I say theoretically because there may be performance implications to using an nvarchar instead of an int.
I'll post this answer since everyone else seems to say you have to query the table twice in the event that the record exists... that's not true.
Step 1) Create a unique-index on the other column:
I recommend this as the index:
-- We're including the "ID" column so that SQL will not have to look far once the "WHERE" clause is finished.
CREATE INDEX MyLilIndex ON dbo.MyTable (Column2) INCLUDE (ID)
Step 2)
DECLARE @TheID INT
SELECT @TheID = ID FROM MyTable WHERE Column2 = 'blah blah'
IF (@TheID IS NOT NULL)
BEGIN
    -- See, you don't have to query the table twice!
    SELECT @TheID AS TheIDYouWanted
END
ELSE
BEGIN
    INSERT... -- insert the new row here
    SELECT SCOPE_IDENTITY() AS TheIDYouWanted
END
Create a unique index for the second entry, then:
if not exists (select null from ...)
insert into ...
else
select x from ...
You can't get away from the index, and it isn't really much overhead -- SQL Server supports index columns up to 900 bytes, and does not discriminate.
The needs of your model are more important than any perceived performance issues; symbolising a string (which is what you are doing) is a common method to reduce database size, and this indirectly (and generally) means better performance.
-- edit --
To appease timothy:
declare @x int;
select @x = x from ...;
if (@x is not null)
    select @x -- return the existing ID
else
    ...
You could use the OUTPUT clause to return the value in the same statement. Here is an example.
DDL:
CREATE TABLE ##t (
    id int PRIMARY KEY IDENTITY(1,1),
    val varchar(255) NOT NULL
)
GO
-- no need for INCLUDE as PK column is always included in the index
CREATE UNIQUE INDEX AK_t_val ON ##t (val)
DML:
DECLARE @id int, @val varchar(255)
SET @val = 'test' -- or whatever you need here
SELECT @id = id FROM ##t WHERE val = @val
IF (@id IS NULL)
BEGIN
    DECLARE @new TABLE (id int)
    INSERT INTO ##t (val)
    OUTPUT inserted.id INTO @new -- put new ID into table variable immediately
    VALUES (@val)
    SELECT @id = id FROM @new
END
PRINT @id

Accessing data with stored procedures

One of the "best practices" is accessing data via stored procedures. I understand why this scenario is good.
My motivation is splitting database and application logic (the tables can be changed if the behaviour of the stored procedures stays the same), defence against SQL injection (users cannot execute "select * from some_tables"; they can only call stored procedures), and security (a stored procedure can contain "anything" needed to ensure users cannot select/insert/update/delete data which is not for them).
What I don't know is how to access data with dynamic filters.
I'm using MSSQL 2005.
If I have table:
CREATE TABLE tblProduct (
ProductID uniqueidentifier -- PK
, IDProductType uniqueidentifier -- FK to another table
, ProductName nvarchar(255) -- name of product
, ProductCode nvarchar(50) -- code of product for quick search
, Weight decimal(18,4)
, Volume decimal(18,4)
)
then I should create 4 stored procedures ( create / read / update / delete ).
The stored procedure for "create" is easy.
CREATE PROC Insert_Product ( @ProductID uniqueidentifier, @IDProductType uniqueidentifier, ... etc ... ) AS BEGIN
    INSERT INTO tblProduct ( ProductID, IDProductType, ... etc ... ) VALUES ( @ProductID, @IDProductType, ... etc ... )
END
The stored procedure for "delete" is easy too.
CREATE PROC Delete_Product ( @ProductID uniqueidentifier, @IDProductType uniqueidentifier, ... etc ... ) AS BEGIN
    DELETE tblProduct WHERE ProductID = @ProductID AND IDProductType = @IDProductType AND ... etc ...
END
The stored procedure for "update" is similar as for "delete", but I'm not sure this is the right way, how to do it. I think that updating all columns is not efficient.
CREATE PROC Update_Product( @ProductID uniqueidentifier, @Original_ProductID uniqueidentifier, @IDProductType uniqueidentifier, @Original_IDProductType uniqueidentifier, ... etc ... ) AS BEGIN
    UPDATE tblProduct SET ProductID = @ProductID, IDProductType = @IDProductType, ... etc ...
    WHERE ProductID = @Original_ProductID AND IDProductType = @Original_IDProductType AND ... etc ...
END
And the last one - the stored procedure for "read" - is a little bit of a mystery for me. How do I pass filter values for complex conditions? I have a few suggestions:
Using an XML parameter for passing the where condition:
CREATE PROC Read_Product ( @WhereCondition XML ) AS BEGIN
    DECLARE @SELECT nvarchar(4000)
    SET @SELECT = 'SELECT ProductID, IDProductType, ProductName, ProductCode, Weight, Volume FROM tblProduct'
    DECLARE @WHERE nvarchar(4000)
    SET @WHERE = dbo.CreateSqlWherecondition( @WhereCondition ) -- dbo.CreateSqlWherecondition is some function which returns the text of a WHERE condition built from the passed XML
    DECLARE @LEN_SELECT int
    SET @LEN_SELECT = LEN( @SELECT )
    DECLARE @LEN_WHERE int
    SET @LEN_WHERE = LEN( @WHERE )
    DECLARE @LEN_TOTAL int
    SET @LEN_TOTAL = @LEN_SELECT + @LEN_WHERE
    IF @LEN_TOTAL > 4000 BEGIN
        -- raise some concrete error, because this dynamic SQL accepts max 4000 chars
        RAISERROR('WHERE condition is too long.', 16, 1)
        RETURN
    END
    DECLARE @SQL nvarchar(4000)
    SET @SQL = @SELECT + @WHERE
    EXEC sp_executesql @SQL
END
But I think the limitation of 4000 characters for one query is ugly.
The next suggestion is using filter tables for every column: insert filter values into the filter table, then call the stored procedure with the IDs of the filters:
CREATE TABLE tblFilter (
    PKID uniqueidentifier -- PK
    , IDFilter uniqueidentifier -- identification of filter
    , FilterType tinyint -- 0 = ignore, 1 = equals, 2 = not equals, 3 = greater than, etc ...
    , BitValue bit , TinyIntValue tinyint , SmallIntValue smallint, IntValue int
    , BigIntValue bigint, DecimalValue decimal(19,4), NVarCharValue nvarchar(4000)
    , GuidValue uniqueidentifier -- etc ...
)
CREATE PROC Read_Product ( @Filter_ProductID uniqueidentifier, @Filter_IDProductType uniqueidentifier, @Filter_ProductName uniqueidentifier, ... etc ... ) AS BEGIN
    SELECT ProductID, IDProductType, ProductName, ProductCode, Weight, Volume
    FROM tblProduct
    WHERE ( @Filter_ProductID IS NULL
            OR ( ProductID IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_ProductID AND FilterType = 1 )
                 AND NOT ( ProductID IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_ProductID AND FilterType = 2 ) ) ) )
      AND ( @Filter_IDProductType IS NULL
            OR ( IDProductType IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_IDProductType AND FilterType = 1 )
                 AND NOT ( IDProductType IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_IDProductType AND FilterType = 2 ) ) ) )
      AND ( @Filter_ProductName IS NULL OR ( ... etc ... ) )
END
But this suggestion is a little bit complicated, I think.
Is there some "best practice" for this type of stored procedure?
For reading data you do not need a stored procedure for security or to separate out logic; you can use views.
Just grant only select on the view.
You can limit the records shown, change field names, join many tables into one logical "table", etc.
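A minimal sketch of that idea against the tblProduct table from the question (the view and role names are made up):
CREATE VIEW dbo.vwProduct
AS
SELECT ProductID, ProductName, ProductCode
FROM dbo.tblProduct;
GO
-- Users get SELECT on the view only, never on the underlying table.
GRANT SELECT ON dbo.vwProduct TO SomeReadOnlyRole;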
First: for your delete routine, your where clause should only include the primary key.
Second: for your update routine, do not try to optimize before you have working code. In fact, do not try to optimize until you can profile your application and see where the bottlenecks are. I can tell you for sure that updating one column of one row and updating all columns of one row are nearly identical in speed. What takes time in a DBMS is (1) finding the disk block where you will write the data and (2) locking out other writers so that your write will be consistent. Finally, writing the code necessary to update only the columns that need to change will generally be harder to do and harder to maintain. If you really wanted to get picky, you'd have to compare the speed of figuring out which columns changed compared with just updating every column. If you update them all, you don't have to read any of them.
Third: I tend to write one stored procedure for each retrieval path. In your example, I'd make one by primary key, one by each foreign key and then I'd add one for each new access path as I needed them in the application. Be agile; don't write code you don't need. I also agree with using views instead of stored procedures, however, you can use a stored procedure to return multiple result sets (in some version of MSSQL) or to change rows into columns, which can be useful.
If you need to get, for example, 7 rows by primary key, you have some options. You can call the stored procedure that gets one row by primary key seven times. This may be fast enough if you keep the connection open between all the calls. If you know you never need more than a certain number (say 10) of IDs at a time, you can write a stored procedure that includes a where clause like "and ID in (arg1, arg2, arg3...)" and make sure that unused arguments are set to NULL. If you decide you need to generate dynamic SQL, I wouldn't bother with a stored procedure, because it is just as easy to make a mistake in TSQL as in any other language. Also, you gain no benefit from using the database to do string manipulation -- it's almost always your bottleneck, so there is no point in giving the DB any more work than necessary.
I disagree that Insert/Update/Select stored procedures are a "best practice". Unless your entire application is written in SPs, use a database layer in your application to handle these CRUD activities. Better yet, use an ORM technology to handle them for you.
My suggestion is that you don't try to create a stored procedure that does everything that you might now or ever need to do. If you need to retrieve a row based on the table's primary key then write a stored procedure to do that. If you need to search for all rows meeting a set of criteria then find out what that criteria might be and write a stored procedure to do that.
If you try to write software that solves every possible problem rather than a specific set of problems you will usually fail at providing anything useful.
Your select stored procedure can be done as follows, requiring only one stored proc but supporting any number of different items in the where clause. Pass in any one or a combination of the parameters and you will get ALL items which match - so you only need one stored proc.
CREATE PROC sp_ProductSelect
(
    @ProductID int = null,
    @IDProductType int = null,
    @ProductName varchar(50) = null,
    @ProductCode varchar(10) = null,
    ...
    @Volume int = null
)
AS
SELECT ProductID, IDProductType, ProductName, ProductCode, Weight, Volume FROM tblProduct
WHERE
    ((@ProductID is null) or (ProductID = @ProductID)) AND
    ((@ProductName is null) or (ProductName = @ProductName)) AND
    ...
    ((@Volume is null) or (Volume = @Volume))
SQL 2005 supports nvarchar(max), which has a 2 GB limit but accepts virtually all the string operations available on normal nvarchar. You may want to test whether this fits what you need in the first approach.
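For instance, a minimal sketch of the first approach with nvarchar(max) buffers, so the 4000-character guard is no longer needed (the WHERE text here is a placeholder):
DECLARE @WHERE nvarchar(max)
SET @WHERE = N' WHERE ProductCode = N''ABC123''' -- in practice built from the XML parameter
DECLARE @SQL nvarchar(max)
SET @SQL = N'SELECT ProductID, IDProductType, ProductName, ProductCode, Weight, Volume FROM tblProduct' + @WHERE
EXEC sp_executesql @SQL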