SQL Table "Pointer"? - sql

Using SQL Server 2000, I have a stored procedure that joins 2 tables and then returns the data. I want this SP to be able to do this for whatever table name I pass into it; otherwise I'll have the exact same code, with the exception of the table name, repeated 20 or so times in a giant if statement. Basically, how do I use a variable to point to a table, or is that even allowed? Thanks.

You need dynamic SQL. Start here, The Curse and Blessings of Dynamic SQL, to learn how to do it correctly, so that nobody drops your tables or does anything else possible with SQL injection.

Try building the SELECT as a string and then calling EXEC and passing the string.
e.g.
declare @sql varchar(500)
set @sql = 'select whatever from ' + @tableName
exec (@sql)
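For what it's worth, a somewhat safer variant of the same idea quotes the name and goes through sp_executesql; this is only a sketch, assuming the table name still arrives in a @tableName parameter:
declare @sql nvarchar(500)
set @sql = N'select whatever from ' + QUOTENAME(@tableName)
exec sp_executesql @sql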

One proc that does everything is usually a bad, bad idea. Twenty possible tables to hit depending on the circumstances almost always indicates a database design that is bad. Read the article Denis posted on the curse and blessings of dynamic SQL.

Related

Dynamic sql - Is this a good approach?

I have a table that has a column named category. The category column has values like "6,7,9,22,44". I would like to know whether this approach is an efficient one for SQL Server 2005 or 2008. I basically pass the "6,7,9,22,44" value to the stored procedure. I am open to links/suggestions for a better approach.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note: I should clarify that I am maintaining an app written by someone else and the category column already has comma separated values. I would have placed the data in a separate table but I won't be able to do that in the short term. With regards to SQL Injection - The column is populated by values checked from a checkbox list. Users cannot type in or enter any data that would affect that specific stored proc. Is SQL injection still possible with that scenario? Please explain how. Thanks.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CREATE PROCEDURE [dbo].[GetCategories]
-- Add the parameter that contains the string that is passed in
@catstring varchar(300)
AS
DECLARE @SQLQuery nvarchar(500);
BEGIN
/* Build Transact-SQL String with parameter value */
SET @SQLQuery = 'select id,category from categoryTable where id in ('+ @catstring +')'
/* Execute Transact-SQL String */
EXECUTE(@SQLQuery)
END
GO
This is a very bad idea because it leaves your system open to SQL injection. A malicious user might be able to pass in a string like "') DROP TABLE categoryTable --" (notice the single quote at the beginning of the string), or something worse.
There are plenty of ways to protect yourself against SQL injection, and there are plenty of resources out there. I would suggest you research it on your own since it's too large an area to cover in a single question.
But for your particular problem, I would suggest this website to learn how to create a Table Valued Parameter to pass in a list of integer IDs: http://www.sommarskog.se/arrays-in-sql-2008.html
It's only valid for SQL Server 2008; you should follow the links in that article for SQL Server 2005 solutions if you need them.
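As a rough sketch of what the table-valued-parameter approach looks like on 2008 (the type and procedure names below are made up for illustration, not taken from the article):
CREATE TYPE dbo.IntList AS TABLE (id int NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetCategoriesByIds
    @ids dbo.IntList READONLY
AS
BEGIN
    SELECT c.id, c.category
    FROM categoryTable AS c
    JOIN @ids AS i ON i.id = c.id;
END
GO
The caller fills a table variable of type dbo.IntList and passes it in; no string concatenation is involved, so there is nothing to inject.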
As Jack said, this creates a possibility of SQL injection. Sommarskog has an excellent look at different ways of handling this in The Curse and Blessings of Dynamic SQL.

Stored procedure, pass table name as a parameter

I have about half a dozen generic, but fairly complex stored procedures and functions that I would like to use in a more generic fashion.
Ideally I'd like to be able to pass the table name as a parameter to the procedure, as currently it is hard coded.
The research I have done suggests I need to convert all existing SQL within my procedures to dynamic SQL in order to splice in the dynamic table name from the parameter; however, I was wondering if there is an easier way, perhaps by referencing the table in another way?
For example:
SELECT * FROM @MyTable WHERE...
If so, how do I set the @MyTable variable from the table name?
I am using SQL Server 2005.
Dynamic SQL is the only way to do this, but I'd reconsider the architecture of your application if it requires this. SQL isn't very good at "generalized" code. It works best when it's designed and coded to do individual tasks.
Selecting from TableA is not the same as selecting from TableB, even if the select statements look the same. There may be different indexes, different table sizes, data distribution, etc.
You could generate your individual stored procedures, which is a common approach. Have a code generator that creates the various select stored procedures for the tables that you need. Each table would have its own SP(s), which you could then link into your application.
I've written these kinds of generators in T-SQL, but you could easily do it with most programming languages. It's pretty basic stuff.
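As a rough sketch of what such a generator can look like (the naming convention and the decision to create one "select all" proc per user table are my own illustration, not a prescription):
DECLARE @name sysname, @sql nvarchar(max);
DECLARE tbl CURSOR LOCAL FAST_FORWARD FOR
    SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE';
OPEN tbl;
FETCH NEXT FROM tbl INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'CREATE PROCEDURE dbo.' + QUOTENAME(N'Select_' + @name) +
               N' AS SELECT * FROM ' + QUOTENAME(@name) + N';';
    EXEC sp_executesql @sql;   -- or PRINT @sql to review the generated procedures first
    FETCH NEXT FROM tbl INTO @name;
END
CLOSE tbl;
DEALLOCATE tbl;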
Just to add one more thing since Scott E brought up ORMs... you should also be able to use these stored procedures with most sophisticated ORMs.
You'd have to use dynamic sql. But don't do that! You're better off using an ORM.
EXEC(N'SELECT * from ' + @MyTable + N' WHERE ... ')
You can use dynamic SQL, but check that the object exists first unless you can 100% trust the source of that parameter. It's likely that there will be a performance hit, as SQL Server won't be able to reuse the same execution plan for different parameters.
IF OBJECT_ID(@tablename, N'U') IS NOT NULL
BEGIN
--dynamic sql
END
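A slightly fuller sketch of that check, with an illustrative query body filled in where the placeholder comment sits above (the SELECT itself is made up, not from the original answer):
IF OBJECT_ID(@tablename, N'U') IS NOT NULL
BEGIN
    DECLARE @sql nvarchar(max)
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@tablename) + N';'
    EXEC sp_executesql @sql
END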
ALTER PROCEDURE [dbo].[test] (@table_name varchar(max))
AS
BEGIN
declare @tablename varchar(max) = @table_name;
declare @statement varchar(max);
set @statement = 'Select * from ' + @tablename;
execute (@statement);
END

How do I execute sql text passed as an sp parameter?

I have a stored procedure with an nvarchar parameter. I expect callers to supply the text for a sql command when using this SP.
How do I execute the supplied sql command from within the SP?
Is this even possible?
I thought it was possible using EXEC but the following:
EXEC @script
errors indicating it can't find a stored procedure by the given name. Since it's a script this is obviously accurate, but leads me to think it's not working as expected.
Use:
BEGIN
EXEC sp_executesql @nvarchar_parameter
END
...assuming the parameter is an entire SQL query. If not:
DECLARE @SQL NVARCHAR(4000)
SET @SQL = 'SELECT ...' + @nvarchar_parameter
BEGIN
EXEC sp_executesql @SQL
END
Be aware of SQL injection attacks, and I highly recommend reading The Curse and Blessings of Dynamic SQL.
You can just EXEC (@sqlStatement) from within your SP. Though it's not the best thing to do, because it opens you up to SQL injection. You can see an example here.
You use EXECUTE passing it the command as a string. Note this could open your system up to serious vulnerabilities given that it is difficult to verify the non-maliciousness of the SQL statements you are blindly executing.
How do I execute the supplied sql command from within the SP?
Very carefully. That code could do anything, including add or delete records, or even whole tables or databases.
To be safe about this, you need to create a separate user account that only has read (SELECT) permissions on just a small set of allowed tables/views, and use the EXECUTE AS clause to limit the execution context to that user.
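A minimal sketch of that idea, assuming a hypothetical low-privilege database user named LimitedReader already exists:
CREATE PROCEDURE dbo.RunSuppliedQuery
    @script nvarchar(max)
WITH EXECUTE AS 'LimitedReader'   -- the supplied SQL runs with only this user's permissions
AS
BEGIN
    EXEC sp_executesql @script;
END
Anything the supplied text tries to touch outside LimitedReader's grants fails with a permissions error instead of succeeding silently.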

How should I pass a table name into a stored proc?

I just ran into a strange thing...there is some code on our site that is taking a giant SQL statement, modifying it in code by doing some search and replace based on some user values, and then passing it on to SQL Server as a query.
I was thinking that this would be cleaner as a parameterized query to a stored proc, with the user values as the parameters, but when I looked more closely I saw why they might be doing it... the table that they are selecting from varies depending on those user values.
For instance, in one case if the values were ("FOO", "BAR") the query would end up being something like "SELECT * FROM FOO_BAR"
Is there an easy and clear way to do this? Everything I'm trying seems inelegant.
EDIT: I could, of course, dynamically generate the sql in the stored proc, and exec that (bleh), but at that point I'm wondering if I've gained anything.
EDIT2: Refactoring the table names in some intelligent way, say having them all in one table with the different names as a new column would be a nice way to solve all of this, which several people have pointed out directly, or alluded to. Sadly, it is not an option in this case.
First of all, you should NEVER do SQL command composition in a client app like this; that's what SQL injection is. (It's OK for an admin tool that has no privileges of its own, but not for a shared-use application.)
Secondly, yes, a parameterized call to a stored procedure is both cleaner and safer.
However, as you will need to use dynamic SQL to do this, you still do not want to include the passed string in the text of the executed query. Instead, you want to use the passed string to look up the names of the actual tables that the user should be allowed to query in this way.
Here's a simple naive example:
CREATE PROC spCountAnyTableRows( @PassedTableName as NVarchar(255) ) AS
-- Counts the number of rows from any non-system Table, *SAFELY*
BEGIN
DECLARE @ActualTableName AS NVarchar(255)
SELECT @ActualTableName = QUOTENAME( TABLE_NAME )
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = @PassedTableName
DECLARE @sql AS NVARCHAR(MAX)
SELECT @sql = 'SELECT COUNT(*) FROM ' + @ActualTableName + ';'
EXEC(@sql)
END
Some have fairly asked why this is safer. Hopefully, little Bobby Tables (xkcd's "Exploits of a Mom") can make this clearer.
Answers to more questions:
QUOTENAME alone is not guaranteed to be safe. MS encourages us to use it, but they have not given a guarantee that it cannot be out-foxed by hackers. FYI, real security is all about the guarantees. The table lookup combined with QUOTENAME is another story; it's unbreakable.
QUOTENAME is not strictly necessary for this example; the lookup against INFORMATION_SCHEMA alone is normally sufficient. QUOTENAME is in here because it is good form in security to include a complete and correct solution. QUOTENAME here is actually protecting against a distinct, but similar, potential problem known as latent injection.
I should note that you can do the same thing with dynamic Column Names and the INFORMATION_SCHEMA.COLUMNS table.
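A minimal sketch of the same lookup idea applied to a column name (the table name and the @PassedColumnName parameter here are purely illustrative):
DECLARE @ActualColumnName AS NVarchar(255)
SELECT @ActualColumnName = QUOTENAME( COLUMN_NAME )
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SomeKnownTable'
  AND COLUMN_NAME = @PassedColumnName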
You can also bypass the need for stored procedures by using a parameterized SQL query instead (see here: https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand.parameters?view=netframework-4.8). But I think that stored procedures provide a more manageable and less error-prone security facility for cases like this.
(Un)fortunately there's no way of doing this - you can't use a table name passed as a parameter to stored code other than for dynamic SQL generation. When it comes to deciding where to generate SQL code, I prefer application code rather than stored code. Application code is usually faster and easier to maintain.
In case you don't like the solution you're working with, I'd suggest a deeper redesign (i.e. change the schema/application logic so you no longer have to pass table name as a parameter anywhere).
I would argue against dynamically generating the SQL in the stored proc; that'll get you into trouble and could open an injection vulnerability.
Instead, I would analyze all of the tables that could be affected by the query and create some sort of enumeration that would determine which table to use for the query.
Sounds like you'd be better off with an ORM solution.
I cringe when I see dynamic sql in a stored procedure.
One thing you can consider is to write a case statement that contains the same SQL command you want, once for each valid table, then pass the table name as a string into this procedure and have the case choose which command to run.
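A minimal sketch of that idea, using made-up table names purely for illustration:
CREATE PROCEDURE dbo.GetRowsFromKnownTable
    @tableName sysname
AS
BEGIN
    IF @tableName = 'FOO_BAR'
        SELECT * FROM FOO_BAR;
    ELSE IF @tableName = 'FOO_BAZ'
        SELECT * FROM FOO_BAZ;
    ELSE
        RAISERROR('Unknown table name.', 16, 1);
END
Each branch is plain, precompilable SQL, and an unrecognized name simply raises an error instead of being spliced into a query.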
By the way, as a security person, the suggestion above telling you to select from the system tables in order to make sure you have a valid table seems like a wasted operation to me. If someone can inject past the QUOTENAME(), then the injection would work on the system table just as well as on the underlying table. The only thing this helps with is ensuring it is a valid table name, and I think the suggestion above is a better approach to that, since you are not using QUOTENAME() at all.
Depending on whether the set of columns in those tables is the same or different, I'd approach it in two ways in the longer term:
1) If they are the same, why not create a new column to be used as a selector, whose value is derived from the user-supplied parameters? (Is it a performance optimization?)
2) If they are different, chances are that the handling of them is also different. As such, it seems like splitting the select/handle code into separate blocks and then calling them separately would be the most modular approach to me. You will repeat the "select * from" part, but in this scenario the set of tables is hopefully finite.
Allowing the calling code to supply two arbitrary parts of the table name to do a select from feels very dangerous.
I don't know the reason why you have the data spread over several tables, but it sounds like you are breaking one of the fundamentals. The data should be in the tables, not as table names.
If the tables have more or less the same layout, consider if it would be best to put the data in a single table instead. That would solve your problem with the dynamic query, and it would make the database layout more flexible.
Instead of querying the tables based on user input values, you can pick the procedure instead.
That is to say:
1. Create a procedure FOO_BAR_prc and inside it put the query 'select * from foo_bar'; that way the query will be precompiled by the database.
2. Then, based on the user input, execute the correct procedure from your application code.
Since you have around 50 tables, this might not be a feasible solution though, as it would require a lot of work on your part.
In fact, I wanted to know how to pass a table name to create a table in a stored procedure. By reading some of the answers and attempting some modification on my end, I was finally able to create a table with the name passed as a parameter. Here is the stored procedure for others to check for any errors in it.
USE [Database Name]
GO
/****** Object: StoredProcedure [dbo].[sp_CreateDynamicTable] Script Date: 06/20/2015 16:56:25 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[sp_CreateDynamicTable]
@tName varchar(255)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SQL nvarchar(max)
SET @SQL = N'CREATE TABLE [DBO].['+ @tName + '] (DocID nvarchar(10) null);'
EXECUTE sp_executesql @SQL
END
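An example call, with a made-up table name:
EXEC [dbo].[sp_CreateDynamicTable] @tName = 'MyNewTable';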
@RBarry Young
You don't need to add the brackets around @ActualTableName in the query string, because the QUOTENAME() in the lookup against INFORMATION_SCHEMA.TABLES already includes them in the result. Otherwise, there will be errors when it is executed.
CREATE PROC spCountAnyTableRows( @PassedTableName as NVarchar(255) ) AS
-- Counts the number of rows from any non-system Table, SAFELY
BEGIN
DECLARE @ActualTableName AS NVarchar(255)
SELECT @ActualTableName = QUOTENAME( TABLE_NAME )
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = @PassedTableName
DECLARE @sql AS NVARCHAR(MAX)
--SELECT @sql = 'SELECT COUNT(*) FROM [' + @ActualTableName + '];'
-- changed to this
SELECT @sql = 'SELECT COUNT(*) FROM ' + @ActualTableName + ';'
EXEC(@sql)
END
I would avoid dynamic SQL at all costs. This isn't the most elegant solution, but it does the job perfectly.
PROCEDURE TABLE_AS_PARAMETER (
    p_table_name IN VARCHAR2
) AS
BEGIN
    CASE p_table_name
        WHEN 'TABLE1' THEN
            UPDATE TABLE1
            SET COLUMN1 = 1
            WHERE ID = 1;
        WHEN 'TABLE2' THEN
            UPDATE TABLE2
            SET COLUMN1 = 1
            WHERE ID = 2;
    END CASE;
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
END TABLE_AS_PARAMETER;

How do I run SQL queries on different databases dynamically?

I have a SQL Server stored procedure that I use to back up data from our database before doing an upgrade, and I'd really like to be able to run the stored procedure on multiple databases by passing in the database name as a parameter. Is there an easy way to do this? The best I can figure is to dynamically build the SQL in the stored procedure, but that feels like it's the wrong way to do it.
Build a procedure to back up the current database, whatever it is. Install this procedure on all databases that you want to back up.
Write another procedure that will launch the backups. This will depend on things that you have not mentioned, like whether you have a table containing the names of the databases to back up, or something like that. Basically all you need to do is loop over the database names and build a string like:
SET @ProcessQueryString =
'EXEC ' + DatabaseServer + '.' + DatabaseName + '.dbo.' + 'BackupProcedureName param1, param2'
and then just:
EXEC (@ProcessQueryString)
to run it remotely.
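A rough sketch of such a loop, assuming a hypothetical driver table dbo.DatabasesToBackup(DatabaseServer, DatabaseName) and a backup procedure that takes no parameters (all of these names are illustrative):
DECLARE @DatabaseServer sysname, @DatabaseName sysname, @ProcessQueryString nvarchar(max);
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT DatabaseServer, DatabaseName FROM dbo.DatabasesToBackup;
OPEN dbs;
FETCH NEXT FROM dbs INTO @DatabaseServer, @DatabaseName;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build the four-part EXEC call and run it
    SET @ProcessQueryString =
        'EXEC ' + QUOTENAME(@DatabaseServer) + '.' + QUOTENAME(@DatabaseName) + '.dbo.BackupProcedureName';
    EXEC (@ProcessQueryString);
    FETCH NEXT FROM dbs INTO @DatabaseServer, @DatabaseName;
END
CLOSE dbs;
DEALLOCATE dbs;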
There isn't any other way to do this; dynamic SQL is the only way. If you've got strict controls over DB names and who's running it, then you're okay just concatenating everything together, but if there's any doubt, use QUOTENAME to escape the parameter safely:
CREATE PROCEDURE doStuff
@dbName NVARCHAR(50)
AS
DECLARE @sql NVARCHAR(1000)
SET @sql = 'SELECT stuff FROM ' + QUOTENAME(@dbName) + '..TableName WHERE stuff = otherstuff'
EXEC sp_executesql @sql
Obviously, if there's anything more being passed through then you'll want to double-check any other input, and potentially use parameterised dynamic SQL, for example:
CREATE PROCEDURE doStuff
@dbName NVARCHAR(50),
@someValue NVARCHAR(10)
AS
DECLARE @sql NVARCHAR(1000)
SET @sql = 'SELECT stuff FROM ' + QUOTENAME(@dbName) + '..TableName WHERE stuff = @pOtherStuff'
EXEC sp_executesql @sql, N'@pOtherStuff NVARCHAR(10)', @someValue
This then makes sure that parameters for the dynamic SQL are passed through safely and the chances for injection attacks are reduced. It also improves the chances that the execution plan associated with the query will get reused.
Personally, I just use a batch file and shell out to sqlcmd for things like this. Otherwise, building the SQL in a stored proc (like you said) would work just fine. Not sure why it would be "wrong" to do that.
best regards,
don
MSSQL has an OPENQUERY(linkedserver, 'statement') function: if the server is linked, you specify it as the first parameter and it fires the statement against that server.
You could generate this OPENQUERY statement in a dynamic proc, and either have it fire the backup proc on each server or execute the statement directly.
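A rough sketch of the dynamic OPENQUERY idea, assuming a linked server named [RemoteServer] (the server, database, and query below are all made up); since OPENQUERY only accepts literal strings, the statement has to be assembled dynamically:
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT * FROM OPENQUERY([RemoteServer], ''SELECT name FROM SomeDb.sys.tables'')';
EXEC sp_executesql @sql;
For actually firing the backup procedure remotely, the four-part EXEC shown in the earlier answer is usually simpler, since OPENQUERY expects the remote statement to return a rowset.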
Do you use SSIS? If so, you could try creating a couple of SSIS packages and scheduling them or executing them remotely.