How do I reference a field in a dynamic SQL query?

I have a dynamic SQL query in SQL Server. I build it, assign it to a variable, and then attempt to add a WHERE clause based on a date. However, when I do this, I get "The multi-part identifier 'FIELD NAME' could not be bound." I believe this is because the actual tables are referenced in a dynamic FROM clause, so they can't be seen until the statement is executed. Is there any way around this?
Here I am attempting to say: give me all Persons where DOB is between the YEAR+MONTH values specified; for example, 201001 and 201012 would cover the entire year of 2010. Here is the code in part:
ALTER PROCEDURE get_persons_by_search_criteria
@month_from as nvarchar(2) = null,
@year_from as nvarchar(4) = null,
@month_to as nvarchar(2) = null,
@year_to as nvarchar(4) = null
AS
declare @from_date varchar(10)
declare @to_date varchar(10)
declare @sqlstr varchar(5000)
set @sqlstr = ' SELECT
Person.PersonID,
Person.FirstName,
Person.LastName
FROM Person '
--Attempting to create a value like 201108 (year + month)
set @from_date = Convert(VarChar(10), @year_from) + Replace(Str(@month_from, 2), ' ', '0')
set @to_date = Convert(VarChar(10), @year_to) + Replace(Str(@month_to, 2), ' ', '0')
set @sqlstr = @sqlstr + ' WHERE '
set @sqlstr = @sqlstr + Convert(VarChar(10), Person.DOBYear) + Replace(Str(Person.DOBMonth, 2), ' ', '0')
set @sqlstr = @sqlstr + ' BETWEEN ' + @from_date + ' and ' + @to_date
exec(@sqlstr)

This line gives the error, because the Person table is not in scope when you build the dynamic string - the reference to Person.DOBYear is evaluated while the string is being built, not when it is executed.
set @sqlstr = @sqlstr + Convert(VarChar(10), Person.DOBYear) + Replace(Str(Person.DOBMonth, 2), ' ', '0')
Try this
set @sqlstr = @sqlstr + ' Convert(VarChar(10), Person.DOBYear) + Replace(Str(Person.DOBMonth, 2), '' '', ''0'') '
Should do the trick for you..
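For context, here is a minimal sketch of how the rest of the dynamic string might then be assembled, with the column expression kept inside the quotes so it is only resolved when the string is executed (this assumes, as in the question, that DOBYear and DOBMonth are character columns; the quoting around the date values is added here for safety and was not in the original):
set @sqlstr = @sqlstr + ' WHERE '
-- the Person.DOBYear/DOBMonth expression now lives inside the string literal
set @sqlstr = @sqlstr + ' Convert(VarChar(10), Person.DOBYear) + Replace(Str(Person.DOBMonth, 2), '' '', ''0'') '
set @sqlstr = @sqlstr + ' BETWEEN ''' + @from_date + ''' and ''' + @to_date + ''''
exec(@sqlstr)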

I realize you've already fixed your issue and accepted an answer, but I thought I would also point out a few other potential improvements (both for you and for any future readers of the question).
ALTER PROCEDURE dbo.get_persons_by_search_criteria
@month_from VARCHAR(2) = NULL,
@year_from VARCHAR(4) = NULL,
@month_to VARCHAR(2) = NULL,
@year_to VARCHAR(4) = NULL
AS
BEGIN
SET NOCOUNT ON;
SELECT
PersonID, DOBYear, DOBMonth
FROM
dbo.Person
WHERE
DOBYear + RIGHT('0' + DOBMonth, 2) + '01'
BETWEEN @year_from + RIGHT('0' + @month_from, 2) + '01'
AND @year_to + RIGHT('0' + @month_to, 2) + '01'
ORDER BY
PersonID, DOBYear, DOBMonth;
END
GO
Isn't that easier on the eyes, easier to follow, and easier to maintain?
Summary:
always use the schema prefix when creating, altering or referencing objects.
don't use Unicode (NCHAR/NVARCHAR) when you don't need to support Unicode data (numbers will never need to contain umlauts, for example). Choosing the right data type might not be that important in this specific case, but it can be crucial in others.
wrap your procedure body in BEGIN/END - this will prevent you from unknowingly picking up other unwanted code from the query window. And always use SET NOCOUNT ON at the beginning of your procedures. I address these and other issues in my "stored procedure best practices checklist."
to avoid changes in behavior, you should always include an ORDER BY clause. If today it orders by first name, and tomorrow it starts ordering by last name, someone is going to complain. See the second section of this post.
learn to write SQL without dynamic SQL, when possible. If you're going to continue using dynamic SQL, at least please try to use sp_executesql instead of EXEC(). I explained the reasons in another recent question: SQL Server use EXEC/sp_executesql or just plain sql in stored procedure?
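As a rough illustration of that last point, a parameterized sp_executesql version of this search might look roughly like the sketch below (the column names follow the question; everything else is an assumption, not the original author's code):
DECLARE @sql NVARCHAR(MAX), @from CHAR(6), @to CHAR(6);
SET @from = @year_from + RIGHT('0' + @month_from, 2);   -- e.g. 201001
SET @to   = @year_to   + RIGHT('0' + @month_to, 2);     -- e.g. 201012
SET @sql = N'SELECT PersonID, DOBYear, DOBMonth
FROM dbo.Person
WHERE DOBYear + RIGHT(''0'' + DOBMonth, 2) BETWEEN @from AND @to
ORDER BY PersonID;';
EXEC sys.sp_executesql @sql, N'@from char(6), @to char(6)', @from = @from, @to = @to;
Because the values travel as parameters rather than being pasted into the string, the plan can be reused and injection is not a concern.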
Even better would be to just store the date of birth as a DATE in the first place. Why would you store the year and month as separate strings? There must be some reason, but I can't imagine what it is. All it does is make this kind of string matching less efficient than working with actual dates, reduce your ability to perform any kind of date operations on the values, and make it very difficult to validate the values passed in. Right now your procedure is going to fail later than it should if someone calls the following:
EXEC get_persons_by_search_criteria
@month_from = '97',
@year_from = 'Audi',
@month_to = 'TT',
@year_to = 'Oy!!';
Which they could do, because you perform no validation whatsoever. With DATE variables at least the error message that comes back would make sense. Right now with either of our versions they'll just get an empty result set.
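To make the validation point concrete, here is a hedged sketch of what an up-front check could look like (TRY_CONVERT needs SQL Server 2012 or later, and the error text is made up for illustration):
-- Sketch: reject non-numeric or out-of-range inputs before touching any data.
IF TRY_CONVERT(date, @year_from + RIGHT('0' + @month_from, 2) + '01', 112) IS NULL
   OR TRY_CONVERT(date, @year_to + RIGHT('0' + @month_to, 2) + '01', 112) IS NULL
BEGIN
    RAISERROR('Invalid year/month parameters supplied.', 16, 1);
    RETURN;
END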


SSMS - MS SQL Server Query option set ON/OFF to display all columns in Shortdate format?

In SSMS, for MS SQL Server 2008 or newer versions, is there a general query option (or something like that) to set ON or OFF before running the query, in order to view all DATE columns as Shortdate (date only, without time)?
Something like SET ANSI_NULLS { ON | OFF } ?
I often use 'select * from table' or similar approaches, and the tables have many columns with the DATE columns in different places; I don't want to check where those columns are every time and explicitly CONVERT or CAST only them just to display them properly.
Thank you for any suggestion.
Personally, I would handle this kind of formatting on the interface (presentation) side only.
Also, regarding:
Because I often use 'select * from table', or different approaches
that habit is itself a problem; relying on SELECT * is not a good approach.
Nonetheless, in SQL we can do something like this:
USE AdventureWorks2012
GO
--proc parameters
DECLARE @tablename VARCHAR(50) = 'Employee'
DECLARE @table_schema VARCHAR(50) = 'HumanResources'
--local variables
DECLARE @Columnname VARCHAR(max) = ''
DECLARE @Sql VARCHAR(max) = ''
--build the column list, converting date/datetime columns to date
SELECT @Columnname = @Columnname + CASE
WHEN DATA_TYPE = 'date'
OR DATA_TYPE = 'datetime'
THEN 'cast(' + QUOTENAME(COLUMN_NAME) + ' as date) as ' + QUOTENAME(COLUMN_NAME)
ELSE QUOTENAME(COLUMN_NAME)
END + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @tablename
AND TABLE_SCHEMA = @table_schema
ORDER BY ORDINAL_POSITION
--strip the trailing comma
SET @Columnname = STUFF(@Columnname, LEN(@Columnname), 1, '')
--PRINT @Columnname
SET @Sql = 'select ' + @Columnname + ' from ' + QUOTENAME(@table_schema) + '.' + QUOTENAME(@tablename)
--PRINT @Sql
EXEC (@Sql)
This can be improved further as required. Also, please prefer sp_executesql over EXEC (a minimal sketch of that swap follows below).
You can customize the CASE condition as needed.
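For completeness, the final EXEC could be swapped for sp_executesql roughly like this (a sketch only; sp_executesql requires the statement to be NVARCHAR):
DECLARE @NSql NVARCHAR(MAX) = @Sql;   -- copy into an nvarchar variable
EXEC sys.sp_executesql @NSql;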
There is no "magic" display format button or function in SSMS no. When you execute a query, SSMS will display that column in the format that is appropriate for that data type; for a datetime field that will include the time.
If you don't want to include the time, then you have to either CAST or CONVERT the individual column(s), or format the data appropriately in your presentation layer (for example, if you're using Excel then dd/MM/yyyy may be appropriate).
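For example (a minimal illustration; the table and column names here are made up):
SELECT OrderID,
       CAST(OrderDate AS date)             AS OrderDate,     -- drops the time portion
       CONVERT(varchar(10), OrderDate, 23) AS OrderDateText  -- yyyy-mm-dd as text
FROM dbo.Orders;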
If all your columns have '00:00:00.000' at the end of their values, the problem isn't the display format, it's your data type choice. Clearly the problem isn't that SSMS is returning a time for a datetime column; it's that you've declared a column as datetime when it should have been date. You can change the data type of a column using ALTER. For example:
USE Sandbox;
Go
CREATE TABLE TestTable (ID smallint IDENTITY(1,1), DateColumn datetime);
INSERT INTO TestTable (DateColumn)
VALUES ('20180201'),('20180202'),('20180203'),('20180204'),('20180205');
SELECT *
FROM TestTable;
GO
ALTER TABLE TestTable ALTER COLUMN DateColumn date;
GO
SELECT *
FROM TestTable;
GO
DROP TABLE TestTable;
TL;DR: SSMS displays data in an appropriate format for the data you have. If you don't like it, you have to supply an alternate format for it to display for each appropriate column. If the issue is your data, change the data type.
Edit: I wanted to add a little more to this.
This question is very much akin to also asking "I would like to be able to run queries where decimals only return the integer part of the value. Can this be done automagically?". So, the value 9.1 would return 9, but also, the value 9.999999999 would return 9.
Now, I realise that you might be thinking "numbers aren't anything like dates", but really, they are. At the end of the day (especially in data) a date is just a number (indeed, a datetime in SQL Server is stored as the number of days after 1900-01-01, with the time as the fractional part of that number, so 43136.75 is actually 2018-02-07 18:00:00.000).
Now that we're talking in numbers, does it seem like a good idea to have all your decimals returned as their FLOOR value? I imagine the answer is "no". Imagine you were doing some kind of accounting and only summed the FLOOR value of each transaction; you could be losing thousands of £/$/€ (or more).
Think of the old example of people who stole money by skimming fractions of a penny from payments. The total they stole was huge, yet not one individual theft was worth as much as $0.01. The same principle applies here; precision is important, and if your column has that precision it is presumably there for a reason.
The same is true for dates. If you are storing times with dates, and the time isn't relevant for that specific query, change your query; having a setting to ignore times (or decimal points) is, in all honestly, just a bad idea.
I don't think there is an option like this in SSMS. The best thing I can come up with is to create views of the tables; that way you can do a
select convert(date, <date column>)
once, and those columns will appear as dates only in the views.
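A minimal sketch of that view approach, reusing the TestTable example from the answer above (the view name is illustrative):
CREATE VIEW dbo.vTestTable
AS
SELECT ID,
       CONVERT(date, DateColumn) AS DateColumn   -- exposed as date only
FROM dbo.TestTable;
GO
-- SELECT * FROM dbo.vTestTable now shows the dates without a time part.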

How to use LIKE in dynamic SQL in a stored procedure?

I wrote a stored procedure using dynamic SQL:
create procedure [dbo].[SearchProduct]
(@ProductId int = null, @ProductName nvarchar(50) = null)
as
declare @SqlStr nvarchar(max)
declare @ParaList nvarchar(2000)
set @SqlStr = 'select p.* from dbo.Product p where (1=1) '
if @ProductName is not null
set @SqlStr = @SqlStr + 'and(p.ProductName like '''%' + @ProductName2+'%''')'
set @ParaList='@ProductId2 int , @ProductName2 nvarchar(50)'
EXECUTE SP_EXECUTESQL @SqlStr,@ParaList,@ProductId,@ProductName
But I get an error:
Error in "Like operator" : The data types varchar and varchar are incompatible in the modulo operator.
If I change it to:
set @SqlStr = @SqlStr + 'and(p.ProductName like ''%' + @ProductName2+'%'')'
I get an error that @ProductName2 is not declared.
Since you look new to this, please accept these notes from me:
As for your question... your SELECT statement ends with where and you follow it directly with and:
select p.* from dbo.Product where '
Also, before the % you should have only two single quotes, not three: like ''%' + .... + '%''
When you write dynamic SQL procedures, always use PRINT first instead of EXEC to evaluate your SQL.
Use a CASE expression instead of IF statements; it will organize your code much better.
Since dynamic SQL is widely considered bad practice, your question should really be "how do I convert this procedure to plain SQL instead of dynamic SQL?"
Finally, please accept my apologies for the lack of samples, corrections and help links, as I am answering from my mobile phone.
You have a quote or two too many:
if @ProductName is not null
set @SqlStr = @SqlStr + 'and (p.ProductName like ''%' + @ProductName2+'%'')';
Within a string, two single quotes represent one single quote in the string. The third single quote then ends the string.
It should be as below; you have placed the single quotes wrongly:
if @ProductName is not null
set @SqlStr = @SqlStr + ' and (p.ProductName like ''%'' + @ProductName2 + ''%'')'
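Putting the pieces together, a corrected version of the whole procedure could look roughly like this (a sketch, not the poster's final code; it keeps the % wildcards inside the dynamic string and passes the search values themselves as parameters):
CREATE PROCEDURE dbo.SearchProduct
    @ProductId   INT = NULL,
    @ProductName NVARCHAR(50) = NULL
AS
BEGIN
    DECLARE @SqlStr NVARCHAR(MAX) =
        N'select p.* from dbo.Product p where (1 = 1) ';
    DECLARE @ParaList NVARCHAR(2000) =
        N'@ProductId2 int, @ProductName2 nvarchar(50)';

    IF @ProductId IS NOT NULL
        SET @SqlStr = @SqlStr + N' and (p.ProductId = @ProductId2) ';
    IF @ProductName IS NOT NULL
        SET @SqlStr = @SqlStr + N' and (p.ProductName like ''%'' + @ProductName2 + ''%'') ';

    EXEC sys.sp_executesql @SqlStr, @ParaList,
         @ProductId2 = @ProductId, @ProductName2 = @ProductName;
END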

SQL Server 2012 Using Declared Variables in a Join

I'm quite new to SQL Server so hopefully this makes sense :)
I'm trying to declare variables to be used in an INNER JOIN.
If you take a look at my code, you'll see what I'm trying to do, without me needing to go into too much detail. Let me know if you need more info. Is that syntax possible?
EDIT: See new attempt below
--State - If suburb/postcode, could use postcode lookup
Declare @Missing as nvarchar(255),
@MissingUpdate as nvarchar(255),
@MatchA as nvarchar(255),
@MatchB as nvarchar(255),
@Reason as nvarchar(255);
Set @Missing = '[StateEXPORT]'; -- field to update
Set @MissingUpdate = '[State]'; -- field in postcode lookup to pull in
Set @MatchA = '[PostcodeEXPORT]'; -- field in master table to match on
Set @MatchB = '[Pcode]'; -- field in postcode lookup to match on
Set @Reason = 'Contactable - Needs verification - @MissingUpdate taken from Lookup'; -- reason here
update [BT].[dbo].[test]
set @Missing = b.@MissingUpdate,
FinalPot = @Reason
FROM [BT].[dbo].[test] a
INNER JOIN [BT].[dbo].[Postcode Lookup] b
ON a.@MatchA = b.@MatchB
where (@Missing is null or @Missing = '0') and [AddressSource] != ('Uncontactable')
GO
EDIT: SECOND ATTEMPT:
set @sql = 'update [BT].[dbo].[test] set ' + quotename(@Missing) + '= b.' + quotename(@MissingUpdate) + ', FinalPot = ' + @Reason + 'FROM [BT].[dbo].[test] a INNER JOIN [BT].[dbo].[Postcode Lookup] b ON a.' + quotename(@MatchA) + ' = b.' + quotename(@MatchB) + 'where (' + quotename(@Missing) + 'is null or' + quotename(@Missing) + ' = 0 and [AddressSource] != "(Uncontactable)"'
exec (@sql)
Thanks for your help,
Lucas
No, this syntax is not possible, at least not directly: you need to specify the column name, not a string variable that has the name.
If you wish to decide the names of columns dynamically, you could make a SQL string that represents the statement that you wish to execute, and pass that string to EXECUTE command. You have to take extra care not to put any of the user-entered data into the generated SQL string, though, to avoid SQL injection attacks.
EDIT: The reason your second attempt may be failing is that you are passing names in square brackets to quotename. You should remove brackets from your variable declarations, like this:
Set @Missing = 'StateEXPORT'; -- field to update
Set @MissingUpdate = 'State'; -- field in postcode lookup to pull in
Set @MatchA = 'PostcodeEXPORT'; -- field in master table to match on
Set @MatchB = 'Pcode'; -- field in postcode lookup to match on
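Pulling those notes together, the dynamic UPDATE could then be built roughly like this (a sketch only; note the added spaces, QUOTENAME around the plain column names, and sp_executesql carrying the reason text as a parameter):
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'update a set ' + QUOTENAME(@Missing) + N' = b.' + QUOTENAME(@MissingUpdate)
         + N', FinalPot = @ReasonParam '
         + N'from [BT].[dbo].[test] a '
         + N'inner join [BT].[dbo].[Postcode Lookup] b on a.' + QUOTENAME(@MatchA) + N' = b.' + QUOTENAME(@MatchB) + N' '
         + N'where (a.' + QUOTENAME(@Missing) + N' is null or a.' + QUOTENAME(@Missing) + N' = ''0'') '
         + N'and a.[AddressSource] <> ''Uncontactable'';';
EXEC sys.sp_executesql @sql, N'@ReasonParam nvarchar(255)', @ReasonParam = @Reason;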
You can't use variable names as column names without dynamic SQL.
An example of a dynamic SQL query:
declare @ColumnName varchar(100) = 'col1'
declare @sql varchar(max)
set @sql = 'select ' + quotename(@ColumnName) + ' from dbo.YourTable'
exec (@sql)

What is preferred method for searching table data using stored procedure?

I have a customer table with Cust_Id, Name and City, and the search can be based on any or all of those three.
Which approach should I go for?
Dynamic SQL:
declare @str varchar(1000)
set @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State]
from Customer where 1 = 1'
if (@Cust_Id != '')
set @str = @str + ' and Cust_Id = ''' + @Cust_Id + ''''
if (@Name != '')
set @str = @str + ' and Name like ''' + @Name + '%'''
if (@City != '')
set @str = @str + ' and City like ''' + @City + '%'''
exec (@str)
Simple query:
select
[Sno],[Cust_Id],[Name],[City],[Country],[State]
from
Customer
where
(@Cust_Id = '' or Cust_Id = @Cust_Id) and
(@Name = '' or Name like @Name + '%') and
(@City = '' or City like @City + '%')
Which one should I prefer (1 or 2), and what are the advantages?
After going through everyone's suggestions, here is what I finally ended up with.
DECLARE @str NVARCHAR(1000)
DECLARE @ParametersDefinition NVARCHAR(500)
SET @ParametersDefinition = N'@InnerCust_Id varchar(10),
@InnerName varchar(30),@InnerCity varchar(30)'
SET @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State]
from Customer where 1 = 1'
IF(@Cust_Id != '')
SET @str = @str + ' and Cust_Id = @InnerCust_Id'
IF(@Name != '')
SET @str = @str + ' and Name like @InnerName'
IF(@City != '')
SET @str = @str + ' and City like @InnerCity'
-- ADD the % symbol for search based upon the LIKE keyword
SELECT @Name = @Name + '%', @City = @City + '%'
EXEC sp_executesql @str, @ParametersDefinition,
@InnerCust_Id = @Cust_Id,
@InnerName = @Name,
@InnerCity = @City;
Note: @Cust_Id, @Name and @City are parameters passed to the stored procedure.
References :
http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/changing-exec-to-sp_executesql-doesn-t-p
http://www.sommarskog.se/dynamic_sql.html
http://msdn.microsoft.com/en-us/library/ms175170.aspx
Dynamic SQL can be a little more difficult to write, and it is vulnerable to SQL injection if you are not careful. However, it generally outperforms the "non-dynamic"/catch-all OR query.
Read more about it here. http://blogs.lessthandot.com/index.php/DataMgmt/DBProgramming/do-you-use-column-param-or-param-is-null
Dynamic SQL is likely to be more performant, which is generally important in a search.
However, it is more difficult to write, debug and test. First you need to make sure it does not allow SQL injection attacks. Next you need to make sure that the variables you use are large enough to hold the largest possible final SQL statement you could create.
Then you need to create a good number of test cases to make sure there isn't some sort of subtle bug.
You will also need to grant read permissions on the underlying tables, which you normally don't need to do when you use stored procs.
Finally, when writing dynamic SQL in a stored proc, please add an input parameter called @debug as the last input parameter and give it a default value of 0. When a 1 is passed in, instead of executing the dynamic SQL, the proc prints the SQL that was built. This will help you debug the proc, and it is especially helpful when there is an error in some future search, because you can see exactly what SQL was run for those values.
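A rough sketch of that @debug pattern, layered onto the sp_executesql version shown earlier in this question (parameter names are illustrative):
-- ... build @str and @ParametersDefinition exactly as above ...
IF @debug = 1
    PRINT @str;   -- show the generated SQL instead of running it
ELSE
    EXEC sp_executesql @str, @ParametersDefinition,
         @InnerCust_Id = @Cust_Id,
         @InnerName = @Name,
         @InnerCity = @City;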
From my experience, dynamic SQL makes sense (gains performance) only if it decreases the number of JOINs.
Otherwise it only worsens code readability and maintainability.

Pass Datetime in SQL query in string format?

I build a query as a string and pass a @Date value into that string, but it gives an error. See the code below.
Declare @MidDate datetime, @MaxDate datetime
set @qrysales_trans_unit_26wks ='update historical_result
set sales_trans_unit_26wks = (
SELECT
SUM(sales_trans_unit)
FROM reg_summary_rowno WHERE
period_idx >= '+ @MidDate -- error
+' AND period_idx <'+ @MaxDate -- error
+' AND Client_id ='+ @Client_id
+' and historical_result.[store_idx] = reg_summary_rowno.[store_idx]
And [attributes] ='+ @attributes +')'
How do I pass a datetime value into the string query the proper way?
Try using two single quotes to escape the quote marks, so that in the generated SQL the dates end up quoted, like: period_idx >= '@MidDate'
set @qrysales_trans_unit_26wks ='update historical_result
set sales_trans_unit_26wks = (
SELECT
SUM(sales_trans_unit)
FROM reg_summary_rowno WHERE
period_idx >= '''+ @MidDate
+''' AND period_idx <'''+ @MaxDate
+''' AND Client_id ='+ @Client_id
+' and historical_result.[store_idx] = reg_summary_rowno.[store_idx]
And [attributes] ='+ @attributes +')'
A couple of better options, IMHO.
If you really want to use dynamic SQL, read up on sp_executesql - and use the ability to pass in parameters to the SQL. You'll prevent SQL injection attacks this way and will also avoid running into problems with having to string-ify parameter values.
Otherwise, use stored procedures - which I would consider the better option here.
To fix your error, you need to add single quotes ' around the dates within the string.
One more thing which improves clarity. Use the BETWEEN keyword:
WHERE period_idx BETWEEN #MinimumDate AND #MaximumDate
You can use a smalldatetime instead of a datetime.
And you may set the dates like this:
Declare @MidDate smalldatetime
set @MidDate = '20110317'
Hope it helps.
If you must pass a date in string format - first of all, put it in quotes, and second of all, I would strongly urge you to use the standard ISO-8601 date format (YYYYMMDD or YYYY-MM-DDTHH:MM:SS).
The big benefit of these ISO standard formats is that they'll work no matter what language and regional settings your SQL Server is set to. Any other string representation is language-dependent, e.g.
05/10/2010
will mean:
10th of May 2010 in the US
5th of October 2010 in pretty much all of the rest of the world
but 20101005 is clear and never ambiguous - it's always the 5th of October, 2010 - even in the US :-)
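A quick way to see the difference (the SET LANGUAGE values are just examples used to force different regional interpretations):
SET LANGUAGE british;
SELECT CONVERT(datetime, '20101005');      -- always 5 October 2010
SELECT CONVERT(datetime, '05/10/2010');    -- 5 October 2010 under British settings
SET LANGUAGE us_english;
SELECT CONVERT(datetime, '05/10/2010');    -- now 10 May 2010 - same string, different date
SELECT CONVERT(datetime, '20101005');      -- still 5 October 2010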
I think you should use CONVERT before concatenating the date variable into the statement:
Declare @MidDate datetime, @MaxDate datetime
set @qrysales_trans_unit_26wks = 'update historical_result
set sales_trans_unit_26wks = (
SELECT
SUM(sales_trans_unit)
FROM reg_summary_rowno
WHERE
period_idx >= ' + '''' + convert(varchar, @MidDate, 112) + '''' -- previously the error line
+' AND period_idx < ' + '''' + convert(varchar, @MaxDate, 112) + '''' -- previously the error line
+' AND Client_id ='+ @Client_id
+' and historical_result.[store_idx] = reg_summary_rowno.[store_idx]
And [attributes] ='+ @attributes +')'
I would really recommend that you shy away from concatenating SQL this way. It really opens you up to injection attacks, among other problems.
Look at this sample to see another approach that you might take.
use tempdb
create table foo (id int not null identity, data datetime)
insert foo(data) values
('1/1/2010'),('1/10/2010'),('3/31/2010')
Declare @SQLStr nvarchar(4000)
set @SQLStr = 'select * from foo where data = @Data'
exec sp_executeSQL @SQLStr, N'@Data datetime', '1/1/2010'