Update table using parameters - SQL

I have a stored procedure that takes these parameters:
@TaskId UNIQUEIDENTIFIER,
@ColumnName VARCHAR(255) = NULL,
@CheckBoxValue VARCHAR(255) = NULL
Then I have an UPDATE statement like this:
UPDATE [RedMarkItems]
SET @ColumnName = @CheckBoxValue
WHERE [TaskId] = @TaskId
If I run my stored procedure like:
exec usp_RedMarkItem_Insert
    @TaskId = '82ab0c4b-9342-46fa-acbe-c00b87571bf9',
    @ColumnName = Item7,
    @CheckBoxValue = 1,
    @CurrentUser = '6074caea-7a8e-4699-9451-16c2eaf394ef'
It does not affect the table; it just says
Commands completed successfully
but the values stay the same. However, if I replace the parameters in the UPDATE statement with literal values, like
UPDATE [RedMarkItems]
SET Item7 = 1
WHERE [TaskId] = '82ab0c4b-9342-46fa-acbe-c00b87571bf9'
it works! Why does it not work when I use parameters? Regards

The mistake you're making here is that you're under the impression that a variable/parameter can be used in place of an object's name. Simply put, it can't. An object's name must be a literal, so you can't do something like:
DECLARE @TableName sysname;
SET @TableName = N'MyTable';
SELECT *
FROM @TableName;
For things like this you need to use dynamic SQL, and (just as importantly) secure dynamic SQL.
Firstly, I would change @ColumnName to the datatype sysname (which is a synonym for nvarchar(128)) and @CheckBoxValue to a bit (a checkbox can have only 3 values: True/1, False/0 and NULL, so a varchar(255) is a very poor datatype choice). Then your stored procedure will look something like:
DECLARE @SQL nvarchar(MAX);
SELECT @SQL = N'UPDATE RedMarkItems' + NCHAR(10) +
              N'SET ' + QUOTENAME(c.[name]) + N' = @Checkbox' + NCHAR(10) +
              N'WHERE TaskID = @ID;'
FROM sys.tables t
     JOIN sys.columns c ON t.object_id = c.object_id
WHERE c.[name] = @ColumnName
  AND t.[name] = 'RedMarkItems';
EXEC sp_executesql @SQL, N'@Checkbox bit, @ID uniqueidentifier', @Checkbox = @CheckBoxValue, @ID = @TaskId;
The reason for the references to the sys tables is to ensure that the column does indeed exist. If it does not, then no SQL will be run. This is just an extra safety measure, alongside the use of QUOTENAME.

What you did was set @ColumnName equal to the value in @CheckBoxValue zero or more times (once for each row that satisfied the WHERE clause). Likely not what you intended...
Instead, you will either want to use dynamic SQL (SET @sql = 'UPDATE … ' + QUOTENAME(@ColumnName) + '<rest of sql>') or otherwise build a CASE expression to handle each column you might want to update dynamically. SQL needs to bind the statement at compile time, so the column name has to be available at compile time for the query processor to verify that the column is real, has the right types for type derivation, and so on. Using parameters as you did would prevent all of that logic from working (assuming the semantics were as you intended in your posted question).
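For instance, the CASE-expression route might look like the sketch below (a hedged example only; Item7 comes from the question, Item8 is a hypothetical second column):
UPDATE [RedMarkItems]
SET Item7 = CASE WHEN @ColumnName = 'Item7' THEN @CheckBoxValue ELSE Item7 END,
    Item8 = CASE WHEN @ColumnName = 'Item8' THEN @CheckBoxValue ELSE Item8 END
WHERE [TaskId] = @TaskId;
Every updatable column still has to be listed explicitly, which is why the dynamic SQL route usually scales better.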
Please be careful as SQL injection attacks are possible on non-validated parameters. You would want to make sure that the column name is a valid column name and not something that allows arbitrary SQL code execution under the context of the running transaction.

Related

SQL Server stored procedure select specific columns passed by arguments, and check other constraints

I want to create a stored procedure in SQL Server 2017 and call it from somewhere else (e.g., Python). It accepts three parameters: stkname - stock name, stktype - stock type, colname - output columns (only these columns are returned). @colname is a varchar storing all required column names, separated by commas.
If @colname is not specified, return all columns (*)
If stkname or stktype is not specified, do not filter by it
This is my code so far:
create procedure demo
    (@stkname varchar(max),
     @colname varchar(max),
     @stktype varchar(max))
as
begin
    ----These lines are pseudo code -----
    select
        (if @colname is not null, @colname, else *)
    from
        Datatable
    (if @stkname is not null, where stkname = @stkname)
    (if @stktype is not null, where stktype = @stktype)
    ---- pseudo code end here-----
end
The desired result is that
exec demo @colname = 'ticker,price', @stktype = 'A'
returns two columns - ticker and price, for all records with stktype = 'A'
I could imagine 'dynamic SQL' would be possible, but not that 'elegant' and I need to write 2*2*2 = 8 cases.
How can I actually implement it in a better way?
PS: This problem is not a duplicate, since I not only need to pass column names, but also need to 'generate a query by other parameters'.
You need dynamic SQL, but you don't need to write a case for every permutation, and it's really not all that hard to prevent malicious behavior.
CREATE PROCEDURE dbo.demo -- always use schema prefix
    @stkname  varchar(max)  = NULL,
    @colnames nvarchar(max) = NULL, -- always use Unicode + proper name
    @stktype  varchar(max)  = NULL
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SELECT @sql = N'SELECT '
        + STRING_AGG(QUOTENAME(LTRIM(RTRIM(x.value))), ',')
    FROM STRING_SPLIT(@colnames, ',') AS x
    WHERE EXISTS
    (
        SELECT 1 FROM sys.columns AS c
        WHERE LTRIM(RTRIM(x.value)) = c.name
          AND c.[object_id] = OBJECT_ID(N'dbo.DataTable')
    );
    SET @sql += N' FROM dbo.DataTable WHERE 1 = 1'
        + CASE WHEN @stkname IS NOT NULL THEN
            N' AND stkname = @stkname' ELSE N'' END
        + CASE WHEN @stktype IS NOT NULL THEN
            N' AND stktype = @stktype' ELSE N'' END;
    EXEC sys.sp_executesql @sql,
        N'@stkname varchar(max), @stktype varchar(max)',
        @stkname, @stktype;
END
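For example, the call from the question would then look like this (assuming dbo.DataTable really has ticker and price columns, as the question implies):
EXEC dbo.demo @colnames = N'ticker, price', @stktype = 'A';
-- builds and runs: SELECT [ticker],[price] FROM dbo.DataTable WHERE 1 = 1 AND stktype = @stktype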
It's not clear whether the stkname and stktype columns really are varchar(max) (I kind of doubt it); you should replace those declarations with the actual data types that match the column definitions. In any case, with this approach the user can't pass nonsense into the column list unless they somehow had the ability to add a column to this table that matched their nonsense pattern (and why are they typing this anyway, instead of picking the columns from a defined list?). And any data they pass as a string to the other two parameters can't possibly be executed here. The thing you are afraid of is probably the result of sloppy code that carelessly appends user input and executes it unchecked, like this:
SET @sql = N'SELECT ' + @weapon_from_user + '...';
For more on malicious behavior see:
Protecting Yourself from SQL Injection in SQL Server - Part 1
Protecting Yourself from SQL Injection in SQL Server - Part 2

How to Create DELETE Statement Stored Procedure Using TableName, ColumnName, and ColumnValue as Passing Parameters

Here is what I'm trying to do. I'm trying to create a stored procedure where I can just enter the name of the table, column, and column value, and it will delete any records associated with that value in that table. Is there a simple way to do this? I don't know too much about SQL and am still learning.
Here is what I have so far.
ALTER PROCEDURE [dbo].[name of stored procedure]
    @TABLE_NAME varchar(50),
    @COLUMN_NAME varchar(50),
    @VALUE varchar(5)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @RowsDeleted int;
    DECLARE @sql VARCHAR(500);
    SET @sql = 'DELETE FROM (name of table).' + @TABLE_NAME + ' WHERE ' + @COLUMN_NAME + '=' + '@VALUE'
    EXEC(@sql)
    SET @RowsDeleted = @@ROWCOUNT
END
GO
A couple of issues.
First, you don't need (name of table):
SET @sql = 'DELETE FROM ' + @TABLE_NAME + etc.
In general you should try to include the appropriate schema prefix:
SET @sql = 'DELETE FROM dbo.' + @TABLE_NAME + etc.
And in case your table name has special characters, perhaps it should be enclosed in brackets:
SET @sql = 'DELETE FROM dbo.[' + @TABLE_NAME + ']' + etc.
Since @VALUE is a string, you must surround it with single quotes when building the value of @SQL. To put a single quote inside a string literal you have to escape it by doubling it, like this:
SET @SQL = 'DELETE FROM dbo.[' + @TABLE_NAME + '] WHERE [' + @COLUMN_NAME + '] = ''' + @VALUE + ''''
If @VALUE itself contains a single quote, this whole thing will break, so you need to escape that as well:
SET @SQL = 'DELETE FROM dbo.[' + @TABLE_NAME + '] WHERE [' + @COLUMN_NAME + '] = ''' + REPLACE(@VALUE, '''', '''''') + ''''
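For example, with the hypothetical inputs @TABLE_NAME = 'Orders', @COLUMN_NAME = 'CustomerName' and @VALUE = 'O''Brien', the string that ends up being executed would be:
DELETE FROM dbo.[Orders] WHERE [CustomerName] = 'O''Brien'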
Also, @@ROWCOUNT will not populate from EXEC. If you want to be able to read @@ROWCOUNT, use sp_executesql instead:
EXEC sp_executesql @SQL
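A minimal sketch of that combination (illustrative only; note that sp_executesql requires an nvarchar statement, so @SQL is declared as nvarchar here, and a SET @rc = @@ROWCOUNT is appended inside the dynamic batch):
DECLARE @SQL nvarchar(500), @RowsDeleted int;
SET @SQL = N'DELETE FROM dbo.[' + @TABLE_NAME + '] WHERE [' + @COLUMN_NAME + '] = '''
         + REPLACE(@VALUE, '''', '''''') + N'''; SET @rc = @@ROWCOUNT;';
EXEC sp_executesql @SQL, N'@rc int OUTPUT', @rc = @RowsDeleted OUTPUT;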
And finally, let me editorialize for a minute--
This sort of stored procedure is not a great idea. I know it seems pretty cool because it is flexible, and that kind of thinking is usually smart when it comes to other languages, but in the database world this approach causes problems. There are security issues (e.g. injection, and the fact that you need elevated privileges to call sp_executesql), and there are issues with precompilation and performance (because the SQL isn't known ahead of time, SQL Server will need to generate a new query plan each and every time you call this). And since the caller can supply any value for the table and column name, you have no idea whether a given delete will be efficient and use indexes, or whether it will cause a huge performance problem because the table is large and the column is not indexed.
The proper approach is to have a series of appropriate stored procedures with strongly-typed inputs that are specific to each use case where you need to delete based on criteria. Database engineers should not be trying to make things flexible; you should be forcing people to think through exactly what they are going to need, and implement that and only that. That is the only way to ensure people are following the rules, keeping referential integrity intact, using indexes efficiently, etc.
Yes, this may seem like repetitive and redundant work, but c'est la vie. There are tools available to generate the code for CRUD operations if you don't like the extra typing.
In addition to some of the information John Wu provided, you have to worry about data types, and @@ROWCOUNT may not be accurate if there are triggers on your tables. You can get around both of those issues by casting to nvarchar(max) and using an OUTPUT clause with a temp table to do the COUNT().
So just for fun here is a way you can do it:
CREATE PROCEDURE dbo.[ProcName]
    @TableName SYSNAME
    ,@ColumnName SYSNAME
    ,@Value NVARCHAR(MAX)
    ,@RecordCount INT OUTPUT
AS
BEGIN
    DECLARE @SQL NVARCHAR(1000)
    SET @SQL = N'IF OBJECT_ID(''tempdb..#DeletedOutput'') IS NOT NULL
    BEGIN
        DROP TABLE #DeletedOutput
    END
    CREATE TABLE #DeletedOutput (
        ID INT IDENTITY(1,1),
        ColumnValue NVARCHAR(MAX)
    )
    DELETE FROM dbo.' + QUOTENAME(@TableName) + '
    OUTPUT deleted.' + QUOTENAME(@ColumnName) + ' INTO #DeletedOutput (ColumnValue)
    WHERE CAST(' + QUOTENAME(@ColumnName) + ' AS NVARCHAR(MAX)) = ' + CHAR(39) + @Value + CHAR(39) + '
    SELECT @RecordCountOUT = COUNT(ID) FROM #DeletedOutput
    IF OBJECT_ID(''tempdb..#DeletedOutput'') IS NOT NULL
    BEGIN
        DROP TABLE #DeletedOutput
    END'
    DECLARE @ParmDefinition NVARCHAR(200) = N'@RecordCountOUT INT OUTPUT'
    EXECUTE sp_executesql @SQL, @ParmDefinition, @RecordCountOUT = @RecordCount OUTPUT
END
So the use of QUOTENAME will help protect against injection attacks, but it is not perfect. I use CHAR(39) instead of the escape sequence for a single quote around the value because I find it easier when string building at that point. By using an OUTPUT parameter with sp_executesql you can still return your count.
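A call might then look like this (the table and column names are purely illustrative):
DECLARE @Deleted INT
EXECUTE dbo.[ProcName] @TableName = N'Orders', @ColumnName = N'Status', @Value = N'Cancelled', @RecordCount = @Deleted OUTPUT
SELECT @Deleted AS RecordsDeleted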
Keep in mind just because you can do something in SQL doesn't always mean you should.

SSIS Multiple Unknown Column Updates

I wonder if anyone has come across a similar situation before and could point me in the right direction..? I'll add that it's a bit frustrating, as someone has replaced the NULL value with a text string containing the word 'NULL' - which I need to remove.
I have 6 quite large tables, each with over 250 columns and in excess of 1 million records, and I need to update the columns where the word NULL appears in a row and replace it with a proper NULL value - the problem is that I have no idea which columns this appears in.
As a start, I've got some code that will list every column with a count of its values; for anything that looks to have a lower count than expected, I'll run a SQL query to check whether the column contains the string 'NULL' and, using the following code, replace it with NULL.
declare @tablename sysname
declare @ColName nvarchar(500)
declare @sql nvarchar(1000)
declare @sqlUpdate nvarchar(1000)
declare @ParmDefinition nvarchar(1000)
set @tablename = N'Table_Name'
Set @ColName = N'Column_Name'
set @ParmDefinition = N'@ColName nvarchar OUTPUT';
set @sql = 'Select ' + @ColName + ', Count(' + @ColName + ') from ' + @tablename + ' group by ' + @ColName + ''
Set @sqlUpdate = 'Update ' + @tablename + ' SET ' + @ColName + ' = NULL WHERE ' + @ColName + ' = ''NULL'''
print @sql
print @sqlUpdate
EXECUTE sp_executesql @sql, @ParmDefinition, @ColName=@ColName OUTPUT;
EXECUTE sp_executesql @sqlUpdate, @ParmDefinition, @ColName=@ColName OUTPUT;
What I'm trying to do with SSIS is to iterate through each column,
Select Column_Name from Table_Name where Column_Name = 'NULL'
run the appropriate query, and perform the update.
So far I can extract the column names from INFORMATION_SCHEMA and get a record count from the appropriate table, but when it comes to running the actual UPDATE statement (as above, @sqlUpdate) - there doesn't seem to be a component that's happy with the dynamic phrasing of the query.
I'm using a Conditional Split to determine where to go if there are records (which may be incorrect) and I've tried OLE DB Command for the update.
In short, I'm wondering whether SSIS is the best tool for this job or whether I'm looking in the wrong place!
I'm using SSIS 2005, which may well have limitations that I'm not yet aware of!
Any guidance would be appreciated.
Thanks,
Jon
The principle is basically sound, but I would leave SSIS out of it and do this with SSMS directly against the SQL Server, building the looping logic there, probably with a cursor.
I'm not sure whether you need to check the count of potential values first - you might just as well apply the update and accept that sometimes it will update no rows; that way the filtering is not duplicated.
Something like
declare columns cursor local read_only for
    select
        c.TABLE_CATALOG,
        c.TABLE_SCHEMA,
        c.TABLE_NAME,
        c.COLUMN_NAME
    from INFORMATION_SCHEMA.COLUMNS c
    inner join INFORMATION_SCHEMA.TABLES t
        on c.TABLE_CATALOG = t.TABLE_CATALOG
        and c.TABLE_SCHEMA = t.TABLE_SCHEMA
        and c.TABLE_NAME = t.TABLE_NAME
    where c.DATA_TYPE like '%varchar%'
open columns
declare @catalog varchar(100), @schema varchar(100), @table varchar(100), @column varchar(100)
fetch from columns into @catalog, @schema, @table, @column
while @@FETCH_STATUS = 0
begin
    -- construct update here and execute it.
    select @catalog, @schema, @table, @column
    fetch next from columns into @catalog, @schema, @table, @column
end
close columns
deallocate columns
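The "construct update here" step could be filled in with something like this sketch (it assumes the only cleanup needed is the 'NULL'-to-NULL replacement from the question, and it reuses the cursor variables above):
declare @sql nvarchar(max)  -- declare once, before the loop
set @sql = N'update ' + quotename(@schema) + N'.' + quotename(@table)
         + N' set ' + quotename(@column) + N' = NULL'
         + N' where ' + quotename(@column) + N' = ''NULL'';'
exec sp_executesql @sql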
You might also consider applying all the updates to a table in one hit, removing the filter and using NULLIF, depending on the density of the bad data.
eg:
update table
set
col1 = nullif(col1, 'null'),
col2 = nullif(col2, 'null'),
...
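If you do go that route, the one-hit statement can be generated from INFORMATION_SCHEMA rather than typed out for 250+ columns. A sketch (dbo.MyBigTable is a hypothetical table name; FOR XML PATH is used because STRING_AGG does not exist on SQL Server 2005):
declare @sql nvarchar(max)
select @sql = N'update dbo.MyBigTable set '
    + stuff((select N', ' + quotename(c.COLUMN_NAME)
                  + N' = nullif(' + quotename(c.COLUMN_NAME) + N', ''NULL'')'
             from INFORMATION_SCHEMA.COLUMNS c
             where c.TABLE_SCHEMA = 'dbo'
               and c.TABLE_NAME = 'MyBigTable'
               and c.DATA_TYPE like '%varchar%'
             for xml path(''), type).value('.', 'nvarchar(max)'), 1, 2, N'')
print @sql   -- review the generated statement before running it
-- exec sp_executesql @sql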
SSIS won't be the best option for you. Conceptually, you are performing updates, lots of updates. SSIS can do really fast inserts; updates, though, are fired off on a row-by-agonizing-row basis.
In a SQL-based approach, you'd be firing off about 1000 update statements to fix everything. In an SSIS-based scenario, using a data flow with an OLE DB Command, you're looking at 1000 * 1000000.
I would skip the cursor myself. This is an acceptable time to use one, but if your tables are as littered with 'NULL' as it sounds, just assume you're updating every row and fix all the fields in a given record at once instead of coming back to the same row for each thing that needs fixing.

Dynamically search columns for given table

I need to create a search for a Java app I'm building, where users can search through a SQL database based on the table they're currently viewing and a search term they provide. At first I was going to do something simple like this:
SELECT * FROM <table name> WHERE CAST((SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<table name>')
AS VARCHAR) LIKE '%<search term>%'
but that subquery returns more than one result, so then I tried to make a procedure to loop through all the columns in a given table and put any relevant fields in a results table, like this:
CREATE PROC sp_search
    @tblname VARCHAR(4000),
    @term VARCHAR(4000)
AS
SET NOCOUNT ON
SELECT COLUMN_NAME
INTO #tempcolumns
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @tblname
ALTER TABLE #tempcolumns
ADD printed BIT,
    num SMALLINT IDENTITY
UPDATE #tempcolumns
SET printed = 0
DECLARE @colname VARCHAR(4000),
    @num SMALLINT
WHILE EXISTS(SELECT MIN(num) FROM #tempcolumns WHERE printed = 0)
BEGIN
    SELECT @num = MIN(num)
    FROM #tempcolumns
    WHERE printed = 0
    SELECT @colname = COLUMN_NAME
    FROM #tempcolumns
    WHERE num = @num
    SELECT * INTO #results FROM @tblname WHERE CAST(@colname AS VARCHAR)
    LIKE '%' + @term + '%' --this is where I'm having trouble
    UPDATE #tempcolumns
    SET printed = 1
    WHERE @num = num
END
SELECT * FROM #results
GO
This has two problems: first, it gets stuck in an infinite loop somehow, and second, I can't select anything from @tblname. I tried using dynamic SQL as well, but I don't know how to get results from that, or if that's even possible.
This is for an assignment I'm doing at college and I've gotten this far after hours of trying to figure it out. Is there any way to do what I want to do?
You need to only search columns that actually contain strings, not all columns in a table (which may include integers, dates, GUIDs, etc).
You shouldn't need a #temp table (and certainly not a ##temp table) at all.
You need to use dynamic SQL (though I'm not sure if this has been part of your curriculum so far).
I find it beneficial to follow a few simple conventions, all of which you've violated:
use PROCEDURE not PROC - it's not a "prock," it's a "stored procedure."
use dbo. (or alternate schema) prefix when referencing any object.
wrap your procedure body in BEGIN/END.
use vowels liberally. Are you saving that many keystrokes, never mind time, by saying @tblname instead of @tablename or @table_name? I'm not fighting for a specific convention, but saving characters at the cost of readability lost its charm in the 70s.
don't use the sp_ prefix for stored procedures - this prefix has special meaning in SQL Server. Name the procedure for what it does. It doesn't need a prefix, just like we know they're tables even without a tbl prefix. If you really need a prefix there, use another one like usp_ or proc_ but I personally don't feel that prefix gives you any information you don't already have.
since table names are stored using Unicode (and some of your columns might be too), your parameters should be NVARCHAR, not VARCHAR. And identifiers are capped at 128 characters, so there is no reason to support more than 257 characters for @tablename (128 + 1 + 128 for a two-part name).
terminate statements with semi-colons.
use the catalog views instead of INFORMATION_SCHEMA - though the latter is what your professor may have taught and might expect.
CREATE PROCEDURE dbo.SearchTable
    @tablename NVARCHAR(257),
    @term NVARCHAR(4000)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql NVARCHAR(MAX);
    SET @sql = N'SELECT * FROM ' + @tablename + ' WHERE 1 = 0';
    SELECT @sql = @sql + '
        OR ' + c.name + ' LIKE ''%' + REPLACE(@term, '''', '''''') + '%'''
    FROM
        sys.all_columns AS c
    INNER JOIN
        sys.types AS t
        ON c.system_type_id = t.system_type_id
        AND c.user_type_id = t.user_type_id
    WHERE
        c.[object_id] = OBJECT_ID(@tablename)
        AND t.name IN (N'sysname', N'char', N'nchar',
            N'varchar', N'nvarchar', N'text', N'ntext');
    PRINT @sql;
    -- EXEC sp_executesql @sql;
END
GO
When you're happy that it's outputting the SELECT query you're after, comment out the PRINT and uncomment the EXEC.
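A hypothetical call (dbo.Customers is just an illustrative table name):
EXEC dbo.SearchTable @tablename = N'dbo.Customers', @term = N'smith';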
You get into an infinite loop because EXISTS(SELECT MIN(num) FROM #tempcolumns WHERE printed = 0) will always return a row even if there are no matches - you need to use EXISTS (SELECT * .... instead
To use dynamic SQL, you need to build up a string (varchar) of the SQL statement you want to run, then you call it with EXEC
eg:
declare @s varchar(max)
select @s = 'SELECT * FROM mytable '
Exec (@s)
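If you also need the results back in a table (for example the #results table from your procedure), INSERT ... EXEC is one way to capture them - a sketch, assuming #results already exists with columns that match the SELECT's output:
insert into #results
exec (@s)
select * from #results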

MSSQL: given a table's object_id, determine whether it is empty

For a bit of database-sanity checking code, I'd like to determine whether a particular object_id corresponds to an empty table.
Is there some way to (for instance) select count(*) from magic_operator(my_object_id) or similar?
I'd strongly prefer a pure-sql solution that can run on MS SQL server 2008b.
You can get a rough idea from
SELECT SUM(rows)
FROM sys.partitions p
WHERE index_id < 2 and p.object_id = @my_object_id
If you want guaranteed accuracy, you would need to construct and execute a dynamic SQL string containing the two-part object name. An example is below, though depending on how you are using this you may prefer to use sp_executesql and return the result as an output parameter instead.
DECLARE @DynSQL nvarchar(max) =
    N'SELECT CASE WHEN EXISTS(SELECT * FROM ' +
    QUOTENAME(OBJECT_SCHEMA_NAME(@my_object_id)) + '.' +
    QUOTENAME(OBJECT_NAME(@my_object_id)) +
    ') THEN 0 ELSE 1 END AS IsEmpty'
EXECUTE (@DynSQL)
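The output-parameter variant mentioned above might look like this sketch (assuming @my_object_id is already set):
DECLARE @IsEmpty bit, @DynSQL2 nvarchar(max);
SET @DynSQL2 = N'SELECT @IsEmpty = CASE WHEN EXISTS(SELECT * FROM ' +
    QUOTENAME(OBJECT_SCHEMA_NAME(@my_object_id)) + N'.' +
    QUOTENAME(OBJECT_NAME(@my_object_id)) +
    N') THEN 0 ELSE 1 END;';
EXECUTE sp_executesql @DynSQL2, N'@IsEmpty bit OUTPUT', @IsEmpty = @IsEmpty OUTPUT;
SELECT @IsEmpty AS IsEmpty;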
Well, it depends on what you consider 'pure SQL'.
I've come up with the following solution. It is written purely in T-SQL, but uses a dynamically built query:
-- Using variables just for better readability.
DECLARE @Name NVARCHAR(4000)
DECLARE @Schema NVARCHAR(4000)
DECLARE @Query NVARCHAR(4000)
-- Get the relevant data
SET @Schema = QUOTENAME(OBJECT_SCHEMA_NAME(613577224))
SET @Name = QUOTENAME(OBJECT_NAME(613577224))
-- Build query taking into consideration the schema and possible poor object naming
SET @Query = 'SELECT COUNT(*) FROM ' + @Schema + '.' + @Name + ''
-- execute it.
EXEC(@Query)
EDIT
The changes take into account the possible faulty cases described in the comments.
I've pulled the steps out into separate variables, because this is a convenient approach for me. Cheers.