sql2008 - Better than multiple replace statements?

Further to my previous post, I would like to copy dependent objects (such as views or procedures) to a 'static' database. However, schema names and other object prefixes are not the same between Production and Static databases...
[I've read Aaron Bertrand's articles on setting up an Audit database, but this is a little much for our needs at this time.]
After extracting the object definitions into a variable using some dynamic sql, I am running multiple replace statements for each change so that the views/procedures still run, pulling data from the Static database.
The reason for the replace statements is that the views/procedures have been created using differing naming conventions. Sometimes I find <dbname>.dbo.<objectname>, other times it's <dbname>..<objectname> or even just dbo.<objectname>!
Instead of using multiple replace statements as below (I feel this may grow quite large!), is there a better method? Would a table-driven approach (using a cursor, as sketched after the code) be wiser?
[Database/object names have been modified in the code below for simplicity]
declare @sql nvarchar(500), @parmdef nvarchar(500),
@dbname varchar(20), @objname varchar(255), @ObjDef nvarchar(max);
set @dbname = 'ProdC';
--declare cursor; get object name using cursor on dbo.ObjectsToUpdate
--[code removed for simplicity]
set @sql = N'USE ' + quotename(@dbname) + '; ';
set @sql = @sql + N'SELECT @def = OBJECT_DEFINITION(OBJECT_ID(''dbo.' + @objname + '''));';
set @parmdef = N'@def nvarchar(max) OUTPUT';
exec sp_executesql @sql, @parmdef, @def = @ObjDef OUTPUT;
--Carry out object definition replacements
set @ObjDef = replace(@ObjDef, 'CREATE VIEW [dbo].[', 'ALTER VIEW [' + @dbname + '].[');
set @ObjDef = replace(@ObjDef, 'Prod1.dbo.', @dbname + '.'); --replace Prod1 with @dbname
set @ObjDef = replace(@ObjDef, ' dbo.', ' ' + @dbname + '.'); --replace all 'dbo.'
set @ObjDef = replace(@ObjDef, 'dbo.LookupTable1', @dbname + '.LookupTable1');
--[code removed for simplicity]
exec(@ObjDef);
--get next object name from cursor
--[remaining code removed for simplicity]
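For comparison, here is a minimal sketch of the table-driven approach I have in mind; the dbo.ObjectReplacements table and its columns are hypothetical:
declare @OldText varchar(255), @NewText varchar(255);
declare repl_cur cursor local fast_forward for
    select OldText, NewText from dbo.ObjectReplacements order by ApplyOrder;
open repl_cur;
fetch next from repl_cur into @OldText, @NewText;
while @@fetch_status = 0
begin
    --apply each stored replacement pair to the extracted definition
    set @ObjDef = replace(@ObjDef, @OldText, @NewText);
    fetch next from repl_cur into @OldText, @NewText;
end
close repl_cur;
deallocate repl_cur;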
Many thanks in advance.

Another problem to watch for is truncation of the object definition: INFORMATION_SCHEMA.ROUTINES, for example, returns only the first 4,000 characters of a definition, so long procedures come back incomplete. The same truncation bites several of the older metadata sources.
Check out this post for a discussion of the alternatives...
Dude, where's the rest of my procedure?
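If you hit that limit, one workaround is to read the definition column of sys.sql_modules, which is nvarchar(max). A minimal sketch (the object name is illustrative):
declare @def nvarchar(max);
select @def = m.definition
from sys.sql_modules as m
where m.object_id = object_id(N'dbo.SomeView'); --illustrative object name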

"However, schema names and other object prefixes are not the same between Production and Static databases"
That's your problem right there. Make them the same and your problem will go away. With SQL Server supporting multiple instances there should be no barrier to doing that.

@ben: the Static database pulls from several Production databases, and there is a desire to maintain the original 'source' by using the database name as the schema name in the Static database.
Closed as a duplicate of this post.

Related

Using single stored procedure for different data imports to different tables

I am in the design stage of an application with a large data-import requirement into a SQL Server database. As there are numerous tables in the database, I want to avoid the conventional approach of creating models and writing stored procedures for each import. Is there a way to create a single stored procedure for different tables and insert data into them?
Note: columns will vary from table to table.
Thanks in advance
Well, I would stick to the comments discouraging it, but on the other hand, if this procedure will be super simple and maintenance is transferred to the JSON creator, you can do it like this:
declare @tablename as nvarchar(max)
declare @json as nvarchar(max)
declare @query as nvarchar(max)
--only proceed if the requested table is on the allow-list
set @tablename = (SELECT TableName FROM YourAllowedTableNamesList WHERE TableName = @tablename)
set @query =
    'INSERT INTO ' + quotename(@tablename) +
    ' SELECT * FROM OPENJSON(@json)'
--pass the JSON in as a parameter instead of concatenating it into the string
exec sp_executesql @query, N'@json nvarchar(max)', @json = @json
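One caveat: without a WITH clause, OPENJSON returns generic key/value/type rows rather than the table's columns. For a hypothetical dbo.Customer (Id int, Name nvarchar(50)), the generated statement would really need to look something like:
INSERT INTO dbo.Customer (Id, Name)
SELECT Id, Name
FROM OPENJSON(@json)
WITH (Id int '$.Id', Name nvarchar(50) '$.Name');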
Yes, I have done something like this at my current shop. Your question is too broad, so I will give you only a broad overview of what we have done.
We wrote a console app that gets a SQL Command from a meta table and executes it on the source into an in-memory DataTable. It then bulk-inserts that data into a staging table on the destination database.
Then we run a generic merge proc that looks at the system tables to get the Primary Keys and datatypes of the final destination table and constructs INSERT and UPDATE statements using dynamic sql.
Despite the well-meaning warnings of others, it's working well for us, though it does have some limitations, such as an inability to handle BLOB datatypes in a generic way. There may be other limitations that we just haven't encountered yet as well.
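As a rough illustration of the metadata lookup only (not our actual code; the destination table name is illustrative), the primary key columns of the final table can be read like this:
SELECT c.name AS pk_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID(N'dbo.DestinationTable')
    AND i.is_primary_key = 1;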

stored procedure receiving DB name to work with

I am looking to write a stored procedure which receives a database name along with other parameters, and the stored procedure needs to work on the database which it receives.
Any thoughts please?
Something like the following should work, as long as the correct permissions are set up:
CREATE PROCEDURE dbo.sptest
    @DB VARCHAR(50)
AS
BEGIN
    DECLARE @sqlstmt NVARCHAR(MAX) --sp_executesql requires an nvarchar statement
    SET @sqlstmt = 'SELECT TOP 10 * FROM ' + QUOTENAME(@DB) + '.dbo.YourTableName'
    EXEC sp_executesql @sqlstmt
END
GO
As mentioned, be very careful when using dynamic SQL like this: only use it with trusted sources, because of the ability to wreak havoc on your DB. At a minimum, you should add some checking of the value of @DB passed in to make sure it matches a limited list of database names that it will work with.
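For example, a minimal version of that check, placed at the top of the procedure before the dynamic SQL is built (a sketch; an explicit allow-list table of your own would be even better):
IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = @DB)
BEGIN
    RAISERROR('Database %s is not allowed.', 16, 1, @DB);
    RETURN;
END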

Copy trigger from one database to another

Is it possible, in a script executed in MS SQL Server 2005, to copy a trigger from one database to another?
I've been asked to write a test script for a trigger my project is using. Our test structure is to create an empty database containing only the object under test, then execute a script on that database that creates all the other objects needed for the test, fills them, runs whatever tests are needed, compares the results against expected results, and then drops everything except the object under test.
I can't just create a database that is empty except for the trigger, because the trigger depends on several tables. My test script currently runs the CREATE TRIGGER after all the required tables are created, but this won't do because the test script isn't allowed to contain the object under test.
What's been suggested is that, instead of running a CREATE TRIGGER, I somehow copy the trigger at that point in the script from the live database to the test database. I've had a quick Google and haven't found a way to do this. Thus my question - is this even possible, and if so, how can I do it?
You could read the text of the trigger with sp_helptext 'triggername'.
Or you can select the text into a variable and execute that:
declare @sql nvarchar(max)
select @sql = object_definition(object_id)
from sys.triggers
where name = 'testtrigger'
exec (@sql) --parentheses are required; EXEC @sql would look for a proc with that name
I have a stored procedure that copies a bunch of tables to a test database. To make it less prone to mistakes that could potentially change the wrong database, I want to avoid using USE and instead explicitly specify per statement which database the trigger is copied from and to.
With the help of this answer, I came up with this solution:
DECLARE @sql NVARCHAR(MAX);
--read the trigger definition in the source database's context
EXEC SourceDB.sys.sp_executesql
    N'SELECT @output = (SELECT OBJECT_DEFINITION(OBJECT_ID(''TriggerName'')))',
    N'@output NVARCHAR(MAX) OUTPUT',
    @output = @sql OUTPUT;
--calling sp_executesql through DestDB makes the CREATE TRIGGER run in DestDB
EXEC DestDB.sys.sp_executesql @sql;

How should I pass a table name into a stored proc?

I just ran into a strange thing...there is some code on our site that is taking a giant SQL statement, modifying it in code by doing some search and replace based on some user values, and then passing it on to SQL Server as a query.
I was thinking that this would be cleaner as a parameterized query to a stored proc, with the user values as the parameters, but when I looked more closely I saw why they might be doing it... the table that they are selecting from varies depending on those user values.
For instance, in one case if the values were ("FOO", "BAR") the query would end up being something like "SELECT * FROM FOO_BAR"
Is there an easy and clear way to do this? Everything I'm trying seems inelegant.
EDIT: I could, of course, dynamically generate the sql in the stored proc, and exec that (bleh), but at that point I'm wondering if I've gained anything.
EDIT2: Refactoring the table names in some intelligent way, say having them all in one table with the different names as a new column would be a nice way to solve all of this, which several people have pointed out directly, or alluded to. Sadly, it is not an option in this case.
First of all, you should NEVER do SQL command composition in a client app like this; that's what SQL injection is. (It's OK for an admin tool that has no privileges of its own, but not for a shared-use application.)
Secondly, yes, a parameterized call to a stored procedure is both cleaner and safer.
However, as you will need to use dynamic SQL to do this, you still do not want to include the passed string in the text of the executed query. Instead, you want to use the passed string to look up the names of the actual tables that the user should be allowed to query in this way.
Here's a simple naive example:
CREATE PROC spCountAnyTableRows( @PassedTableName AS NVarchar(255) ) AS
-- Counts the number of rows from any non-system Table, *SAFELY*
BEGIN
    DECLARE @ActualTableName AS NVarchar(255)
    SELECT @ActualTableName = QUOTENAME( TABLE_NAME )
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_NAME = @PassedTableName
    DECLARE @sql AS NVARCHAR(MAX)
    SELECT @sql = 'SELECT COUNT(*) FROM ' + @ActualTableName + ';'
    EXEC(@sql)
END
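Usage is then just (the table name here is illustrative):
EXEC spCountAnyTableRows N'Customers';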
Some have fairly asked why this is safer. Hopefully, little Bobby Tables (xkcd's "Exploits of a Mom") can make this clearer.
Answers to more questions:
QUOTENAME alone is not guaranteed to be safe. MS encourages us to use it, but they have not given a guarantee that it cannot be out-foxed by hackers. FYI, real security is all about the guarantees. The table lookup combined with QUOTENAME, though, is another story: it's effectively unbreakable.
QUOTENAME is not strictly necessary for this example; the lookup translation against INFORMATION_SCHEMA alone is normally sufficient. QUOTENAME is in here because it is good form in security to include a complete and correct solution. QUOTENAME here is actually protecting against a distinct but similar potential problem known as latent injection.
I should note that you can do the same thing with dynamic column names and the INFORMATION_SCHEMA.COLUMNS table.
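A minimal sketch of that same pattern for a column name (names are illustrative):
DECLARE @ActualColumnName AS NVarchar(255)
SELECT @ActualColumnName = QUOTENAME( COLUMN_NAME )
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable' AND COLUMN_NAME = @PassedColumnName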
You can also bypass the need for stored procedures by using a parameterized SQL query instead (see here: https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand.parameters?view=netframework-4.8). But I think that stored procedures provide a more manageable and less error-prone security facility for cases like this.
(Un)fortunately there's no way of doing this directly: you can't use a table name passed as a parameter in stored code other than for dynamic SQL generation. When it comes to deciding where to generate the SQL, I prefer application code rather than stored code. Application code is usually faster and easier to maintain.
In case you don't like the solution you're working with, I'd suggest a deeper redesign (i.e. change the schema/application logic so you no longer have to pass a table name as a parameter anywhere).
I would argue against dynamically generating the SQL in the stored proc; that'll get you into trouble and could open up an injection vulnerability.
Instead, I would analyze all of the tables that could be affected by the query and create some sort of enumeration that would determine which table to use for the query.
Sounds like you'd be better off with an ORM solution.
I cringe when I see dynamic sql in a stored procedure.
One thing you can consider is writing a case statement that contains the same SQL command you want, once for each valid table, then passing the table name as a string into this procedure and having the case choose which command to run (see the sketch below).
By the way, as a security person, the suggestion above telling you to select from the system tables in order to make sure you have a valid table seems like a wasted operation to me. If someone can inject past the QUOTENAME(), the injection would work on the system table just as well as on the underlying table. The only thing it helps with is ensuring the name is a valid table name, and I think the case-statement approach is a better answer to that since you are not using QUOTENAME() at all.
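A minimal sketch of that idea; T-SQL has no procedural CASE statement, so IF/ELSE plays that role (procedure and table names are made up):
CREATE PROC dbo.SelectFromKnownTable( @TableName sysname ) AS
BEGIN
    IF @TableName = 'FOO_BAR'
        SELECT * FROM dbo.FOO_BAR;
    ELSE IF @TableName = 'FOO_BAZ'
        SELECT * FROM dbo.FOO_BAZ;
    ELSE
        RAISERROR('Unknown table: %s', 16, 1, @TableName);
END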
Depending on whether the set of columns in those tables is the same or different, I'd approach it in two ways in the longer term:
1) If they are the same, why not create a new column to be used as a selector, whose value is derived from the user-supplied parameters? (Is it a performance optimization?)
2) If they are different, chances are that the handling of them is also different. As such, it seems like splitting the select/handle code into separate blocks and then calling them separately would be the most modular approach to me. You will repeat the "select * from" part, but in this scenario the set of tables is hopefully finite.
Allowing the calling code to supply two arbitrary parts of the table name to do a select from feels very dangerous.
I don't know the reason why you have the data spread over several tables, but it sounds like you are breaking one of the fundamentals. The data should be in the tables, not as table names.
If the tables have more or less the same layout, consider if it would be best to put the data in a single table instead. That would solve your problem with the dynamic query, and it would make the database layout more flexible.
Instead of querying the tables based on user input values, you can pick the procedure instead.
That is to say:
1. Create a procedure FOO_BAR_prc that contains the query 'select * from foo_bar'; that way the query will be precompiled by the database (see the sketch below).
2. Then, based on the user input, execute the correct procedure from your application code.
Since you have around 50 tables this might not be a feasible solution, though, as it would require a lot of work on your part.
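The wrapper in step 1 would be nothing more than (a sketch, using the names from the question):
CREATE PROCEDURE dbo.FOO_BAR_prc AS
BEGIN
    SELECT * FROM dbo.FOO_BAR;
END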
In fact, I wanted to know how to pass a table name to create a table in a stored procedure. By reading some of the answers and attempting some modifications at my end, I was finally able to create a table with the name passed as a parameter. Here is the stored procedure for others to check for any errors in it.
USE [Database Name]
GO
/****** Object: StoredProcedure [dbo].[sp_CreateDynamicTable] Script Date: 06/20/2015 16:56:25 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[sp_CreateDynamicTable]
    @tName varchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @SQL nvarchar(max)
    SET @SQL = N'CREATE TABLE [DBO].[' + @tName + '] (DocID nvarchar(10) null);'
    EXECUTE sp_executesql @SQL
END
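Usage is then simply (the table name is whatever you want created):
EXEC dbo.sp_CreateDynamicTable @tName = 'MyNewTable';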
@RBarry Young
You don't need to add the brackets to @ActualTableName in the query string, because QUOTENAME already adds them in the lookup against INFORMATION_SCHEMA.TABLES. Otherwise, there will be errors when it is executed.
CREATE PROC spCountAnyTableRows( @PassedTableName AS NVarchar(255) ) AS
-- Counts the number of rows from any non-system Table, SAFELY
BEGIN
    DECLARE @ActualTableName AS NVarchar(255)
    SELECT @ActualTableName = QUOTENAME( TABLE_NAME )
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_NAME = @PassedTableName
    DECLARE @sql AS NVARCHAR(MAX)
    --SELECT @sql = 'SELECT COUNT(*) FROM [' + @ActualTableName + '];'
    -- changed to this
    SELECT @sql = 'SELECT COUNT(*) FROM ' + @ActualTableName + ';'
    EXEC(@sql)
END
I would avoid dynamic SQL at all costs.
It isn't the most elegant solution, but it does the job perfectly (note this example is Oracle PL/SQL):
CREATE OR REPLACE PROCEDURE TABLE_AS_PARAMETER (
    p_table_name IN VARCHAR2
) AS
BEGIN
    CASE p_table_name
        WHEN 'TABLE1' THEN
            UPDATE TABLE1
            SET COLUMN1 = 1
            WHERE ID = 1;
        WHEN 'TABLE2' THEN
            UPDATE TABLE2 --each branch targets its own table
            SET COLUMN1 = 1
            WHERE ID = 2;
    END CASE;
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
END TABLE_AS_PARAMETER;

How do I run SQL queries on different databases dynamically?

I have a SQL Server stored procedure that I use to back up data from our database before doing an upgrade, and I'd really like it to be able to run on multiple databases by passing in the database name as a parameter. Is there an easy way to do this? The best I can figure is to dynamically build the SQL in the stored procedure, but that feels like it's the wrong way to do it.
Build a procedure to back up the current database, whatever it is. Install this procedure on all databases that you want to back up.
Write another procedure that will launch the backups. This will depend on things that you have not mentioned, like whether you have a table containing the names of each database to back up. Basically, all you need to do is loop over the database names and build a string like:
SET @ProcessQueryString =
    'EXEC ' + @DatabaseServer + '.' + @DatabaseName + '.dbo.BackupProcedureName param1, param2'
and then just:
EXEC (@ProcessQueryString)
to run it remotely.
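A rough sketch of that loop for the local-server case, assuming a hypothetical dbo.DatabasesToBackup table holding the names:
DECLARE @DatabaseName sysname, @ProcessQueryString nvarchar(500);
DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT DatabaseName FROM dbo.DatabasesToBackup;
OPEN db_cur;
FETCH NEXT FROM db_cur INTO @DatabaseName;
WHILE @@FETCH_STATUS = 0
BEGIN
    --local three-part name; prepend a linked server name for remote databases
    SET @ProcessQueryString = N'EXEC ' + QUOTENAME(@DatabaseName) + N'.dbo.BackupProcedureName';
    EXEC (@ProcessQueryString);
    FETCH NEXT FROM db_cur INTO @DatabaseName;
END
CLOSE db_cur;
DEALLOCATE db_cur;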
There isn't any other way to do this; dynamic SQL is the only way. If you've got strict controls over DB names and who's running it, then you're okay just concatenating everything together, but if there's any doubt, use QUOTENAME to escape the parameter safely:
CREATE PROCEDURE doStuff
    @dbName NVARCHAR(50)
AS
DECLARE @sql NVARCHAR(1000)
SET @sql = 'SELECT stuff FROM ' + QUOTENAME(@dbName) + '..TableName WHERE stuff = otherstuff'
EXEC sp_executesql @sql
Obviously, if there's anything more being passed through, then you'll want to double-check any other input, and potentially use parameterised dynamic SQL, for example:
CREATE PROCEDURE doStuff
    @dbName NVARCHAR(50),
    @someValue NVARCHAR(10)
AS
DECLARE @sql NVARCHAR(1000)
SET @sql = 'SELECT stuff FROM ' + QUOTENAME(@dbName) + '..TableName WHERE stuff = @pOtherStuff'
EXEC sp_executesql @sql, N'@pOtherStuff NVARCHAR(10)', @pOtherStuff = @someValue
This then makes sure that parameters for the dynamic SQL are passed through safely and the chances for injection attacks are reduced. It also improves the chances that the execution plan associated with the query will get reused.
Personally, I just use a batch file and shell out to sqlcmd for things like this. Otherwise, building the SQL in a stored proc (like you said) would work just fine; not sure why it would be "wrong" to do that.
MSSQL has an OPENQUERY(linkedserver, statement) function: if the server is linked, you specify it as the first parameter and it fires the statement against that server.
You could generate this OPENQUERY statement in a dynamic proc, and either it could fire the backup proc on each server, or you could execute the statement directly.
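For example (LinkedServerName is a placeholder; note that OPENQUERY requires a literal query string, which is why the whole statement has to be built dynamically):
SELECT name
FROM OPENQUERY(LinkedServerName, 'SELECT name FROM master.sys.databases');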
Do you use SSIS? If so, you could try creating a couple of SSIS packages and scheduling them, or executing them remotely.