Update a variable number of parameters in a Stored Procedure - sql

Assume the following please:
I have a table that has ~50 columns. I receive an import file that has information that needs to be updated in a variable number of these columns. Is there a method, via a stored procedure, where I can update only the columns that need to be updated while the rest retain their current value unchanged? Keep in mind I am not saying that the unaffected columns return to some default value, but that they actually maintain the current value stored in them.
Thank you very much for any response on this.
Is COALESCE something that could be applied here as I have read in this thread: Updating a Table through a stored procedure with variable parameters
Or am I looking at a method more similar to what is being explained here:
SQL Server stored procedure with optional parameters updates wrong columns
For the record, my SQL skills are quite weak; I am just beginning to dive into this end of the pool.
JD

Yes, you can use COALESCE to do this. Basically, if a parameter is passed in as NULL, the original value is kept. Below is a general pattern that you can adapt.
DECLARE @LastName NVARCHAR(50) = 'John';
DECLARE @FirstName NVARCHAR(50) = NULL;
DECLARE @ID INT = 1;

UPDATE dbo.UpdateExample
SET LastName = COALESCE(@LastName, LastName),
    FirstName = COALESCE(@FirstName, FirstName)
WHERE ID = @ID;
Also, have a read of this article, titled: The Impact of Non-Updating Updates
http://web.archive.org/web/20180406220621/http://sqlblog.com:80/blogs/paul_white/archive/2010/08/11/the_2D00_impact_2D00_of_2D00_update_2D00_statements_2D00_that_2D00_don_2D00_t_2D00_change_2D00_data.aspx
Basically,
"SQL Server contains a number of optimisations to avoid unnecessary logging or page flushing when processing an UPDATE operation that will not result in any change to the persistent database."
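Wrapped up as a stored procedure, the pattern might look like this (the procedure, table, and column names here are placeholders for illustration):

CREATE PROCEDURE dbo.UpdateExample_Update
    @ID        INT,
    @LastName  NVARCHAR(50) = NULL,
    @FirstName NVARCHAR(50) = NULL
AS
BEGIN
    SET NOCOUNT ON;
    -- Any parameter left as NULL falls back to the column's current value.
    UPDATE dbo.UpdateExample
    SET LastName  = COALESCE(@LastName, LastName),
        FirstName = COALESCE(@FirstName, FirstName)
    WHERE ID = @ID;
END

Then EXEC dbo.UpdateExample_Update @ID = 1, @LastName = N'Smith' changes only LastName. One caveat: with this pattern you cannot set a column to NULL, because NULL is the "leave it alone" sentinel.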

Related

Prevent parameter sniffing in stored procedures SQL Server 2008?

I have started creating a stored procedure that will search through my database table based on the passed parameters. So far I have already heard about potential problems with "kitchen sink" parameter sniffing. A few articles helped me understand the problem, but I'm still not 100% sure that I have a good solution.

I have a few screens in the system that will search different tables in my database. All of them have three different criteria that the user will select and search on. The first criterion is Status, which can be Active, Inactive, or All. Next is Filter By, which can offer different options to the user depending on the table and the number of columns. Usually, users can select to filter by Name, Code, Number, DOB, Email, UserName, or Show All. Each search screen will have at least 3 filters, and one of them will be Show All.

I have created a stored procedure where the user can search by Status and filter by Name, Code, or Show All. One problem that I have is the Status filter. It seems that SQL checks all options in the WHERE clause, so if I pass 1 the SP returns all active records, and if I pass 0 then only inactive records. The problem is that if I pass 2 the SP should return all records (active and inactive), but I see only active records. Here is an example:
USE [TestDB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[Search_Master]
    @Status BIT = NULL,
    @FilterBy INT = NULL,
    @Name VARCHAR(50) = NULL,
    @Code CHAR(2) = NULL
WITH RECOMPILE
AS
DECLARE @MasterStatus INT;
DECLARE @MasterFilter INT;
DECLARE @MasterName VARCHAR(50);
DECLARE @MasterCode CHAR(2);

SET @MasterStatus = @Status;
SET @MasterFilter = @FilterBy;
SET @MasterName = @Name;
SET @MasterCode = @Code;

SELECT RecID, Status, Code, Name
FROM Master
WHERE
(
    (@MasterFilter = 1 AND Name LIKE '%' + @MasterName + '%')
    OR
    (@MasterFilter = 2 AND Code = @MasterCode)
    OR
    (@MasterFilter = 3 AND @MasterName IS NULL AND @MasterCode IS NULL)
)
AND
(
    (@MasterStatus != 2 AND MasterStatus = @Status)
    OR
    (@MasterStatus = 2 AND 1=1)
);
Other than the problem with the Status filter, I'm wondering if there are any other issues that I might have with parameter sniffing. I found a blog that talks about preventing sniffing, and one way to do that is by declaring local variables. If anyone has suggestions or a solution for the Status filter, please let me know.
On your Status issue, I believe the problem is that your BIT parameter isn't behaving as you're expecting it to. Here's a quick test to demonstrate:
DECLARE @bit BIT;
SET @bit = 2;
SELECT @bit AS [2=What?];
--Results
+---------+
| 2=What? |
+---------+
|    1    |
+---------+
From our friends at Microsoft:
Converting to bit promotes any nonzero value to 1.
When you pass in your parameter as 2, the engine does an implicit conversion from INTEGER to BIT, and your non-zero value becomes a 1.
You'll likely want to change the data type on that parameter, then use some conditional logic inside your procedure to deal with the various possible values as you want them to be handled.
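For example, here is a quick sketch (keeping the column name from the question) showing that a TINYINT keeps the value 2 intact, which also lets the status filter collapse to a single condition:

DECLARE @Status TINYINT = 2;
SELECT @Status AS [2=2];  -- returns 2, not 1

-- The status part of the WHERE clause then simplifies to:
-- AND (@Status = 2 OR MasterStatus = @Status)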
On the issue of parameter sniffing, 1) read the article Sean suggests in the comments, but 2) if you keep that WITH RECOMPILE on your procedure, parameter sniffing can't happen.
The issue (but still read the article) is that SQL Server stores an execution plan based on the first set of parameters you send through the proc, but subsequent parameter sets may require substantially different plans. Adding WITH RECOMPILE forces a new execution plan on every execution, which has some overhead, but may well be exactly what you want to do in your situation.
As a closing thought, SQL Server 2008 ended mainstream support in 2014 and extended support ends on 7/9/2019. An upgrade might be a good idea.

SQL Server setting multiple variables

I have 7 different stored procedures that take in a number of parameters and insert data into 7 different tables.
I am now creating a main stored procedure to execute these 7 procedures with all of the data they need. All of the data they need is ONE table (CommonImport).
Should I take all of the parameters I need in this main stored procedure?
Or
Only take in the ID of the row that needs to be inserted into these 7 separate tables and get the data directly from the table.
I think the second option is best. BUT, how do I set all the variables in the main stored procedure to all of the data from the (CommonImport) table?
Essentially, how do I set a bunch of declared variables to the values from a specific row in the CommonImport table?
Thanks
Passing an ID:
The benefit here is that you simplify all the interfaces to your stored procedures.
This makes it easier to code against. If you end up calling the SP from several places, you only need to pass a single parameter, rather than loading and passing several parameters.
Passing n Variables:
The benefit here is that you 'decouple' your stored procedures from the holding table.
This means that you could simply call the stored procedures directly, without having any data in the table. This may be useful in the future, if data arrives in a new way, or for unit testing, etc.
Which is best:
I don't think that there is a straight answer to this, it's more a case of preference and opinion.
My opinion is that the less tightly coupled things are, the better. It's more flexible in the face of changes.
The way I'd do it is as follows...
CREATE PROCEDURE main_by_variable @v1 INT, @v2 INT, ... AS
BEGIN
    EXEC sub_part_1 @v1, @v3
    EXEC sub_part_2 @v2
    EXEC sub_part_3 @v2, @v3
    ...
END
GO

CREATE PROCEDURE main_by_id @id INT AS
BEGIN
    DECLARE
        @v1 INT,
        @v2 INT,
        ...

    SELECT
        @v1 = field1,
        @v2 = field2
    FROM
        holding_table
    WHERE
        id = @id

    EXEC main_by_variable @v1, @v2, ...
END
GO
By having the main_by_variable procedure, you have the flexibility to, for example, test all the sub-procedures without actually having to enter any data into the holding table. And that flexibility extends to the sub-procedures as well.
But, for convenience, you may find that using main_by_id is more tidy. As this is just a wrapper around main_by_variable, all you are doing is encapsulating a single step in the process (getting the data out of the table).
It also allows you to put a transaction around the data gathering part, and delete the data out of the table. Or many other options. It's flexible, and I like flexible.
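As a sketch of that last idea (using the same hypothetical names as above), the body of main_by_id could become:

BEGIN TRANSACTION;

    SELECT @v1 = field1, @v2 = field2
    FROM holding_table
    WHERE id = @id;

    EXEC main_by_variable @v1, @v2;

    -- The row has been consumed, so clear it out within the same transaction.
    DELETE FROM holding_table WHERE id = @id;

COMMIT TRANSACTION;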
I would suggest accepting all variables as parameters and defining default values for them, so SP users can call it either with the single ID parameter, or with the others as well by specifying them directly:
CREATE PROCEDURE MainSP
    @ID int,
    @CustomParameter varchar(10) = NULL,
    @CustomParameter1 DateTime = NULL,
    ...
This way the SP would be pretty flexible.

Need example of Conditional update stored proc in SQL server

I am just at the starting levels of DB usage and have 2 basic questions.

1. I have a generic UPDATE stored proc which updates all columns of a table, but I need to make it conditional so that it does not SET a column when the corresponding parameter is NULL.
Usage: I want to use this as a single SP to UPDATE any subset of columns; the caller from C# will fill in the corresponding parameter values and leave the other parameters NULL.

2. In the case of "UPDATE selected records", do I need to use locking inside the stored proc? Why? Isn't the operation in itself locked and transactional?
I find the same question comes up when I need to UPDATE selected (conditional) records and then return the updated records.
UPDATE table
SET a = CASE WHEN @a IS NULL THEN a ELSE @a END
WHERE id = @id
OR
EXEC('update table set ' + @update + ' where id = ' + @id)
OR
Conditionally update one column at a time.
The first option would usually be preferable to me, as it is usually efficient enough and you do not need to worry about string escaping.
If I have understood the question properly, why can't you build the query on the fly in the SQL Server SP and use sp_executesql? When you build the query, you can ensure that only columns that have a value get updated.
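If you do go the dynamic-SQL route, sp_executesql lets you keep the values themselves parameterized even though the SET list is built as a string. A sketch (table and variable names are hypothetical; the column list in @setList must still come from a validated whitelist of column names, never from raw user input):

DECLARE @sql NVARCHAR(MAX) =
    N'UPDATE dbo.MyTable SET ' + @setList + N' WHERE id = @id';

-- Only the column list is concatenated; the values travel as parameters.
EXEC sys.sp_executesql
    @sql,
    N'@id INT, @a INT, @b NVARCHAR(50)',
    @id = @id, @a = @a, @b = @b;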
Does this answer your question?

What happens when I update SQL column to itself? -OR- simplified conditional updates?

I would like to call an "update" stored procedure which won't necessarily include all columns.
There is probably a better way to handle this....
As you can see, if I do not pass in the column parameters, their value is NULL. Then, using ISNULL, I set the columns either to their new values or to their existing values.
CREATE PROCEDURE [dbo].[spUpdateTable]
    @pPKID bigint = NULL,
    @pColumn1 int = NULL,
    @pColumn2 int = NULL
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE
        TableName
    SET
        [Column1] = ISNULL(@pColumn1, [Column1]),
        [Column2] = ISNULL(@pColumn2, [Column2])
    WHERE
        [PKID] = @pPKID
END
This is basically the same thing that the transactional replication stored procedures do when updating a table on a subscriber. If Microsoft does it themselves, it must be safe, right? :-)
Seriously, my primary concern here would be any update triggers that might exist on the table. You'd want to understand the impact of potentially firing those triggers on what could be a non-change. Otherwise I think your technique is fine.
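If such a trigger exists, one common guard (just a sketch, with hypothetical names) is to compare the inserted and deleted pseudo-tables inside the trigger and bail out early when nothing actually changed:

CREATE TRIGGER trg_TableName_Audit ON dbo.TableName
AFTER UPDATE AS
BEGIN
    -- EXCEPT returns no rows when the before and after values match,
    -- so a non-change "update" skips the audit work entirely.
    IF NOT EXISTS (SELECT Column1, Column2 FROM inserted
                   EXCEPT
                   SELECT Column1, Column2 FROM deleted)
        RETURN;

    -- ...audit/logging logic here...
END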

Updating a Table from a Stored Procedure

I am trying to learn database on my own; all of your comments are appreciated.
I have the following table.
CREATE TABLE AccountTable
(
AccountId INT IDENTITY(100,1) PRIMARY KEY,
FirstName NVARCHAR(50) NULL,
LastName NVARCHAR(50) NULL,
Street NVARCHAR(50) NULL,
StateId INT REFERENCES STATETABLE(StateId) NOT NULL
)
I would like to write a Stored procedure that updates the row. I imagine that the stored procedure would look something like this:
CREATE PROCEDURE AccountTable_Update
    @Id INT,
    @FirstName NVARCHAR(20),
    @LastName NVARCHAR(20),
    @StreetName NVARCHAR(20),
    @StateId INT
AS
BEGIN
    UPDATE AccountTable
    Set FirstName = @FirstName
    Set LastName = @LastName
    Set Street = @StreetName
    Set StateId = @StateId
    WHERE AccountId = @Id
END
The caller provides the new information that he wants the row to have. I know that some of the fields are not entirely accurate or precise; I am doing this mostly for learning.
I am having a syntax error with the SET commands in the UPDATE portion, and I don't know how to fix it.
Is the stored procedure I am writing a procedure that you would write in real life? Is this an antipattern?
Are there any grave errors I have made that just makes you cringe when you read the above TSQL?
Are there any grave errors I have made that just makes you cringe when you read the above TSQL?
Not really "grave," but I noticed your table's string fields are set up as the datatype of NVARCHAR(50) yet your stored procedure parameters are NVARCHAR(20). This may be cause for concern. Usually your stored procedure parameters will match the corresponding field's datatype and precision.
#1: You need commas between your columns:
UPDATE AccountTable SET
    FirstName = @FirstName,
    LastName = @LastName,
    Street = @StreetName,
    StateId = @StateId
WHERE
    AccountId = @Id
SET is only called once, at the very start of the UPDATE list. Every column after that is in a comma separated list. Check out the MSDN docs on it.
#2: This isn't an antipattern, per se, especially given user input. You want parameterized queries in order to avoid SQL injection. If you were to build the query as a string from user input, you would be very, very susceptible to SQL injection. However, by using parameters, you circumvent this vulnerability. Most RDBMSs make sure to sanitize the parameters passed to their queries automagically. There are a lot of opponents of stored procedures, but you're using one as a way to beat SQL injection, so it's not an antipattern.
#3: The only grave error I saw was the SET instead of commas. Also, as ckittel pointed out, your inconsistency in the length of your nvarchar columns.
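As a usage note, calling the procedure with named parameters keeps every value out of string concatenation entirely (sample values for illustration):

EXEC AccountTable_Update
    @Id = 100,
    @FirstName = N'Ada',
    @LastName = N'Lovelace',
    @StreetName = N'12 Example St',
    @StateId = 1;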