SQL Server Synonyms and Concurrency Safety With Dynamic Table Names

I am working with some commercial schemas, which have a set of similar tables that differ only in language name, e.g.:
Products_en
Products_fr
Products_de
I also have several stored procedures which I am using to access these to perform some administrative functions, and I have opted to use synonyms since there is a lot of code, and writing everything as dynamic SQL is just painful:
declare @lang varchar(50) = 'en'
if object_id('dbo.ProductsTable', 'SN') is not null drop synonym dbo.ProductsTable
exec('create synonym dbo.ProductsTable for dbo.Products_' + @lang)
/* Call the synonym table */
select top 10 * from dbo.ProductsTable
update ProductsTable set a = 'b'
My question is: how does SQL Server treat synonyms when it comes to concurrent access? My fear is that a procedure could start, then a second one could come along and change the table the synonym points to halfway through, causing major issues. I could wrap everything in a BEGIN TRAN and COMMIT TRAN, which should theoretically remove the risk of two processes changing a synonym, but the documentation is scarce on this matter and I cannot get a definitive answer.
Just to note, although this system is concurrent, it is not high traffic, so the performance hits of using synonyms/transactions are not really an issue here.
Thanks for any suggestions.

Your fear is correct. Synonyms are not intended to be used in this way. Wrapping it in a transaction (I'm not sure what isolation level would be required) might solve the issue, but only by making the system effectively single-user.
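If you do go the synonym route anyway, one way to make the swap atomic with respect to other callers is an application lock. This is an untested sketch using SQL Server's sp_getapplock, and it only helps if every procedure that swaps or uses the synonym takes the same lock:
declare @lang varchar(50) = 'en'
begin tran
-- serialize all synonym swaps/uses behind one named lock
exec sp_getapplock @Resource = 'ProductsTableSynonym', @LockMode = 'Exclusive'
if object_id('dbo.ProductsTable', 'SN') is not null
    drop synonym dbo.ProductsTable
exec('create synonym dbo.ProductsTable for dbo.Products_' + @lang)
-- ... work against dbo.ProductsTable here ...
select top 10 * from dbo.ProductsTable
commit tran -- the application lock is released with the transaction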
If I were dealing with this, I would probably have gone with dynamic SQL because I am familiar with it. However, having thought about it, I wonder if schemas could solve your problem.
If you created a schema for each language and then had a table called Products in each schema, your stored proc could reference an unqualified table name, and SQL Server should resolve the reference to the table in the default schema of the current user. You'd then need to either change which account your application authenticates as to determine which schema it uses, or use EXECUTE AS in a stored proc to decide which schema is the default.
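A minimal sketch of that idea (untested; all names are illustrative). One caveat worth flagging: static SQL inside a stored procedure resolves unqualified names against the procedure's own schema first, so the default-schema trick is most reliable for ad-hoc batches or dynamic SQL:
create schema lang_en
go
create table lang_en.Products (ProductID int, Name nvarchar(100))
go
-- a user whose default schema selects the language
create user app_en without login with default_schema = lang_en
go
-- under this context, the unqualified name resolves to lang_en.Products
execute as user = 'app_en'
select top 10 * from Products
revert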
I haven't tested this schema idea, I may not have thought of everything and I don't know enough about your application to know if it is actually workable in your case. Let us know if you decide to try it.

Related

How to tell if ALTER PROCEDURE worked?

I'm pretty new to SQL and SQL Server. I'm trying to run an ALTER PROCEDURE query from a .sql file called through C# code. Before I move on to making sure my query does what it's supposed to do, I want to verify that my ALTER PROCEDURE query actually altered the procedure, but I don't know how to verify that.
For example, in SQL Server, I can see where the stored procedure I'm trying to edit lives, in:
- database-name
- Programmability/
- Stored Procedures/
- dbo.MyStoredProcedure
If my ALTER TABLE query worked correctly, would I be able to see my procedure code here, or would I check somewhere else? Or am I thinking about this the wrong way?
Generally, we rely on error and exception messages to tell us when something like this has not worked. However, I suppose it is possible that the procedure that was ALTER-ed was not the one that was intended (implying bugs in the name/path/call construction, of course).
In that case, you can get the current text of any SQL module (procedure, view, trigger, etc. - anything script-based) from the sys.sql_modules catalog view:
SELECT definition FROM sys.sql_modules
WHERE object_id=OBJECT_ID('dbo.UserSamples_Insert')
You should note that usually when something like this happens without an error message it is because either:
You are executing in the wrong database (like PROD when you meant to be in DEV or vice-versa), or
You are not using the correct Schema (because you can make and use schemas other than 'dbo').
Wait, you say ALTER PROCEDURE twice, but then the third time you say ALTER TABLE. Which is it? I ask because, unlike almost every other SQL object, tables are not script-based and their definition cannot be found in any of the SQL script repositories like sys.sql_modules. I actually use either SMO (from a client) or a tool that @SeanLange wrote years ago for that (from the server itself).

Best practices of structuring stored procedures

As a developer mainly writing C#, I have adopted some good practices when writing C# code. When I occasionally write stored procedures, I have trouble applying those practices to the stored procedure code.
On several occasions I have inherited nightmare stored procedure code: first three or four layers of stored procedures setting up some temp tables and mostly calling each other, with no real work done and just a few lines of code each. Then at last there is a call to "the final" stored procedure, a big monster of 3000-5000 lines of SQL code. That code usually has a lot of code smells: code duplication, intricate control flows (a.k.a. spaghetti), and a method that does too many things stacked after each other with no clear separation of where one chunk of work starts and where it ends (not even a comment as a divider).
I have also noticed the use of commented-out select statements that select from intermediate temp tables. The selects can be turned back on for debug purposes, but need to be removed before running any calling code that expects a specific order of the returned result sets.
Apparently my fellow team mates also share my lack of good SQL writing practices.
So... ( and here comes the real question) ... what are good practices for writing modular maintainable stored procedures?
Both home made practices and references to books/blogs are welcome. Methods as well as tools that help with certain tasks.
Let's summarize some areas where I have not found good practices:
Modularization and encapsulation (is stored procedure communication via temp tables really the way to go?)
In C# I use assemblies, classes and methods decorated with access modifiers to accomplish this.
Debugging/testing (better than modifying the target of debugging?)
Debug tools?
Debug traces?
Test fixtures?
Emphasizing code/logic/data/control flow using the structure of the code
In C# I refactor and break out smaller methods that do just one logical task each.
Code duplication
Mostly I encounter SQL Server as the DBMS, but DBMS-agnostic answers, or answers pointing out features of other DBMSes that help in the above cases, are also welcome.
To give some background: Most large stored procedures I have encountered are in reporting scenarios where the base is to just create some summary values from a large table. But along the way you need to exclude some of the values that happen to be in some exception table, add some of the values in some not yet completed stuff table, compare with last year (can you imagine the ugly code that handles products changing department between years?), etc.
I write a lot of complex stored procs. Some things I would consider best practices:
Don't use dynamic SQL in a stored proc unless you are doing a search proc with lots of parameters which may or may not be needed (then it is currently one of the best solutions). If you must use dynamic SQL in a proc, always have a debug input parameter, and if the debug parameter is set, print the SQL statement created rather than executing it. This will save hours of debugging time!
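A hedged sketch of that pattern (the proc, table and parameter names are made up); note the dynamic part still keeps the value as a parameter via sp_executesql rather than concatenating it:
create procedure dbo.SearchProducts
    @name varchar(50) = null
    ,@debug bit = 0 -- defaulted and last, so existing calls keep working
as
declare @sql nvarchar(max) = 'select ProductID, Name from dbo.Products where 1 = 1'
if @name is not null
    set @sql = @sql + ' and Name like @name' -- the value stays a parameter
if @debug = 1
    print @sql -- show what would run instead of running it
else
    exec sp_executesql @sql, N'@name varchar(50)', @name = @name
go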
If you are performing more than one action query in a proc (insert/update/delete), use TRY CATCH blocks and transaction processing. Add a test parameter to the input parameters and when it is set to 1, always roll back the entire transaction. Before rolling back in test mode, I usually have a section that returns the values in the tables I'm affecting to ensure that what I think I am doing to the database is in fact what I did do. Or you could have checks as you go, as shown below. That is as simple as putting the following code around your currently commented-out selects (and uncommenting them) once you have the @test parameter.
If @test = 1
Begin
    Select * from table1 where field1 = @myfirstparameter
End
Now you don't have to go through and comment and uncomment each time you test.
@test or @debug should always be given a default value of 0 and placed last in the parameter list. That way adding them won't break existing calls of the proc.
Consider having logging and/or error logging tables for procs doing inserts/updates/deletes. If you record the steps and/or errors in table variables as you go, they are still available after a rollback to be inserted into the logging table. Knowing what part of a complex proc failed and what the error was can be invaluable later on.
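Putting the last three points together, a sketch (the proc, Products table and ProcLog logging table are invented for illustration) might look like:
create procedure dbo.AdjustPrices
    @percent decimal(5, 2)
    ,@test bit = 0
as
declare @log table (Step varchar(200), LoggedAt datetime default getdate())
begin try
    begin tran
    insert into @log (Step) values ('starting update')
    update dbo.Products set Price = Price * (1 + @percent / 100)
    insert into @log (Step) values ('update finished')
    if @test = 1
    begin
        select top 100 * from dbo.Products -- eyeball the affected data
        rollback tran -- test mode never commits
    end
    else
        commit tran
end try
begin catch
    if @@trancount > 0 rollback tran
    insert into @log (Step) values ('error: ' + error_message())
end catch
-- table variables keep their rows across a rollback
insert into dbo.ProcLog (Step, LoggedAt)
select Step, LoggedAt from @log
go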
Where possible do not nest stored procs. If you need to run multiple records in a loop, replace the stored proc with one that has a table-valued parameter and set up the proc to run in a set-based and not individual record fashion. This will work if the table-valued parameter has one record or many records.
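For example (the type and proc names are illustrative), a table-valued parameter version handles one row or thousands with the same set-based code:
create type dbo.ProductIdList as table (ProductID int primary key)
go
create procedure dbo.DeactivateProducts
    @ids dbo.ProductIdList readonly
as
update p
set p.Active = 0
from dbo.Products p
join @ids i on i.ProductID = p.ProductID
go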
If you have a complex select with a lot of subqueries or derived tables, consider using CTEs instead. Refactor any correlated subqueries or cursors to better performing set-based code. Always think in terms of sets of data not one record.
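As a small illustration (tables invented), a derived table rewritten as a CTE reads top-down instead of inside-out:
with YearlySales as (
    select ProductID, sum(Amount) as Total
    from dbo.Sales
    where SaleYear = 2012
    group by ProductID
)
select p.Name, ys.Total
from dbo.Products p
join YearlySales ys on ys.ProductID = p.ProductID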
Do not, under any conceivable circumstance, nest views. The performance hit is much worse than any small amount of saved development time. And trust me, nested views do not save maintenance time when the change needs to be to the view the furthest into the chain of views.
All stored procs (and other database code) should be in source control.
Table variables are good for smaller data sets, but temp tables (real ones that start with # or ## not staging tables) can be better for performance in large data sets. If using temp tables drop them when you don't need them anymore. Try to avoid the use of global temp tables.
Learn to write performant SQL. It is usually just as easy to write SQL that will perform well as SQL that will not, once you know the techniques. If you write complex stored procs, there is no excuse for not knowing which techniques work better than others. Learn how to make sure your query is sargable. Avoid cursors, correlated subqueries, scalar functions and other things which run row-by-agonizing-row.
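A classic sargability example (table and columns invented):
-- non-sargable: the function on the column prevents an index seek
select OrderID from dbo.Orders where year(OrderDate) = 2012
-- sargable rewrite: a plain range the optimizer can seek on
select OrderID from dbo.Orders where OrderDate >= '20120101' and OrderDate < '20130101'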
Communication via temp tables is sometimes a huge code smell. Such procedures often cannot be run by a user without interfering with each other (if you re-use a temp table name for different procedures' ins and outs and they aren't re-created or if you use the same name with two different table schemas). They can be hard to troubleshoot - like any feature, use them when necessary and better alternatives don't exist. Using real tables temporarily can also be problematic.
Stored procs which pass data to each other in SQL Server (beyond parameters) can be problematic. There are table-valued parameters now, and many things which previously would have been done with procs can now be done with inline table-valued functions (usually the preferred option) or multi-statement table-valued functions.
In SQL Server, avoid heavy use of scalar functions and multi-statement table-valued function on large rowsets - they do not perform very well, so modular techniques which may seem obvious in C# don't really apply here.
I would recommend you look at Ken Henderson's Guru's Guide to SQL Server Stored Procedures - published in 2002, it still has a wealth of useful information on database application design.
This is such a good question. As a C# dev myself who has to dabble in SQL, I find that SQL by its very nature gets in the way of the best practices I'm used to with C#.
Common Table Expressions are great for isolating queries in a stored procedure, but you can only use them once! That leads you to define views, but then you've lost your encapsulation.
A result set from one stored procedure is very difficult to use in another, so you might be tempted to write table-valued functions. That adds to your permissions-maintenance burden and forces you to write functions 'twice': once as a function and again as a procedure that calls the function. Otherwise, you have different interfaces to your DAL depending on whether it's a procedure or not.
All of this has caused me, over time, to stick to simple CRUD stored procedures (that do not call each other) in the database and few, isolated, queries when the relationships are complex. More BI-stuff. Everything else is in the BLL.
Physically, SQL is isolated in separate files by function or the table they revolve around, and managed in source control.
Avoid SELECT * and favor specifying columns. That saves you from run-time problems when you change a table and don't touch all the procs. Yes, there is a recompile for procs but it WILL miss some, especially if views are involved. Plus, SELECT * almost always returns more columns than you really need and that's a waste of bandwidth.
The comments above are great advice on dos and don'ts when it comes to writing SQL code. If I understand your question correctly, you are asking whether it is normal for a SQL developer to write hundreds or even thousands of lines of code in a single stored procedure. In C# this is a big no-no; you encapsulate logic into small chunks using methods, assemblies, and classes. SQL developers tend to write the entire logic in one stored procedure to accomplish a related task; as HLGEM mentioned above, "where possible, do not nest stored procedures". Do not nest views.
For example: A simple Get and Insert design in C# looks like this:
Call GetData Method
    Call Get Data Method
    Call Transform Data Method
        Call CheckAlphaNumeric Method
    Call Data Enrichment Method
Call Load Transformed Data Method
A SQL developer will design it like this:
In a single stored proc:
Get Data and Transform using either temp table or table variable, then Load it into the final table.
If you were to change the way the SQL is written to match the structure a C# developer would use, you would do this:
Execute Main Stored Procedure (which calls the sprocs below)
Execute GetData Stored Procedure and load into a stage table
Execute Transform Stored Procedure, which reads the stage table and transforms the data
Execute Load Data Stored Procedure to load the staged/transformed data into the final table

Can parameterized statement stop all SQL injection?

If yes, why are there still so many successful SQL injections? Just because some developers are too dumb to use parameterized statements?
When articles talk about parameterized queries stopping SQL attacks, they don't really explain why; it's often a case of "it does, so don't ask why" -- possibly because they don't know themselves. A sure sign of a bad educator is one who can't admit they don't know something. But I digress.
Why I find it totally understandable to be confused is simple. Imagine a dynamic SQL query:
sqlQuery='SELECT * FROM custTable WHERE User=' + Username + ' AND Pass=' + password
so a simple sql injection would be just to put the Username in as ' OR 1=1--
This would effectively make the sql query:
sqlQuery='SELECT * FROM custTable WHERE User='' OR 1=1-- ' AND PASS=' + password
This says select all customers where their username is blank ('') or 1=1, which is a boolean equating to true. Then it uses -- to comment out the rest of the query. So this will just print out the whole customer table, or do whatever you want with it; if logging in, it will log in with the first user's privileges, which is often the administrator.
Now parameterized queries do it differently, with code like:
sqlQuery='SELECT * FROM custTable WHERE User=? AND Pass=?'
parameters.add("User", username)
parameters.add("Pass", password)
where username and password are variables pointing to the associated inputted username and password
Now at this point, you may be thinking, this doesn't change anything at all. Surely you could still just put into the username field something like Nobody OR 1=1'--, effectively making the query:
sqlQuery='SELECT * FROM custTable WHERE User=Nobody OR 1=1'-- AND Pass=?'
And this would seem like a valid argument. But, you would be wrong.
The way parameterized queries work is that the sqlQuery is sent as a query, and the database knows exactly what this query will do; only then will it insert the username and password merely as values. This means they cannot affect the query, because the database already knows what the query will do. So in this case it would look for a username of "Nobody OR 1=1'--" and a blank password, which should come up false.
This isn't a complete solution, though, and input validation will still need to be done, since this doesn't address other problems such as XSS attacks, where you could still put JavaScript into the database. If that is read back out onto a page, it would be displayed as normal JavaScript, depending on any output validation. So really the best thing to do is still use input validation, but also use parameterized queries or stored procedures to stop any SQL attacks.
The links that I have posted in my comments to the question explain the problem very well. I've summarised my feelings on why the problem persists, below:
Those just starting out may have no awareness of SQL injection.
Some are aware of SQL injection, but think that escaping is the (only?) solution. If you do a quick Google search for php mysql query, the first page that appears is the mysql_query page, on which there is an example that shows interpolating escaped user input into a query. There's no mention (at least not that I can see) of using prepared statements instead. As others have said, there are so many tutorials out there that use parameter interpolation, that it's not really surprising how often it is still used.
A lack of understanding of how parameterized statements work. Some think that it is just a fancy means of escaping values.
Others are aware of parameterized statements, but don't use them because they have heard that they are too slow. I suspect that many people have heard how incredibly slow parameterized statements are, but have not actually done any testing of their own. As Bill Karwin pointed out in his talk, the difference in performance should rarely be used as a factor when considering the use of prepared statements. The benefits of prepare once, execute many, often appear to be forgotten, as do the improvements in security and code maintainability.
Some use parameterized statements everywhere, but with interpolation of unchecked values such as table and column names, keywords and conditional operators. Dynamic searches, such as those that allow users to specify a number of different search fields, comparison conditions and sort order, are prime examples of this.
False sense of security when using an ORM. ORMs still allow interpolation of SQL statement parts - see point 5 above.
Programming is a big and complex subject, database management is a big and complex subject, security is a big and complex subject. Developing a secure database application is not easy - even experienced developers can get caught out.
Many of the answers on stackoverflow don't help. When people write questions that use dynamic SQL and parameter interpolation, there is often a lack of responses that suggest using parameterized statements instead. On a few occasions, I've had people rebut my suggestion to use prepared statements - usually because of the perceived unacceptable performance overhead. I seriously doubt that those asking most of these questions are in a position where the extra few milliseconds taken to prepare a parameterized statement will have a catastrophic effect on their application.
Well good question.
The answer is more stochastic than deterministic and I will try to explain my view, using a small example.
There are many references on the net that suggest using parameters in our queries, or using stored procedures with parameters, in order to avoid SQL injection (SQLi). I will show you that stored procedures (for instance) are not a magic wand against SQLi. The responsibility still remains with the programmer.
Consider the following SQL Server Stored Procedure that will get the user row from a table 'Users':
create procedure getUser
@name varchar(20)
,@pass varchar(20)
as
declare @sql as nvarchar(512)
set @sql = 'select usrID, usrUName, usrFullName, usrRoleID '+
    'from Users '+
    'where usrUName = '''+@name+''' and usrPass = '''+@pass+''''
execute(@sql)
You can get the results by passing the username and the password as parameters. Supposing the password is in plain text (just for the simplicity of this example), a normal call would be:
DECLARE @RC int
DECLARE @name varchar(20)
DECLARE @pass varchar(20)
EXECUTE @RC = [dbo].[getUser]
    @name = 'admin'
    ,@pass = '!#Th1siSTheP#ssw0rd!!'
GO
But here we have a bad programming technique used by the programmer inside the stored procedure, so an attacker can execute the following:
DECLARE @RC int
DECLARE @name varchar(20)
DECLARE @pass varchar(20)
EXECUTE @RC = [TestDB].[dbo].[getUser]
    @name = 'admin'
    ,@pass = 'any'' OR 1=1 --'
GO
The above parameters will be passed as arguments to the stored procedure and the SQL command that finally will be executed is:
select usrID, usrUName, usrFullName, usrRoleID
from Users
where usrUName = 'admin' and usrPass = 'any' OR 1=1 --'
...which will return all rows from Users.
The problem here is that even though we follow the principle "create a stored procedure and pass the fields to search as parameters", the SQLi is still performed. This is because we just copied our bad programming practice inside the stored procedure. The solution to the problem is to rewrite our stored procedure as follows:
alter procedure getUser
@name varchar(20)
,@pass varchar(20)
as
select usrID, usrUName, usrFullName, usrRoleID
from Users
where usrUName = @name and usrPass = @pass
What I am trying to say is that developers must first learn what an SQLi attack is and how it can be performed, and then safeguard their code accordingly. Blindly following 'best practices' is not always the safest way... and maybe this is why we have so many 'best practices' failures!
Yes, the use of prepared statements stops all SQL injections, at least in theory. In practice, parameterized statements may not be real prepared statements, e.g. PDO in PHP emulates them by default so it's open to an edge case attack.
If you're using real prepared statements, everything is safe. Well, at least as long as you don't concatenate unsafe SQL into your query as reaction to not being able to prepare table names for example.
If yes, why are there still so many successful SQL injections? Just because some developers are too dumb to use parameterized statements?
Yes, education is the main point here, and legacy code bases. Many tutorials use escaping and those can't be easily removed from the web, unfortunately.
I avoid absolutes in programming; there is always an exception. I highly recommend stored procedures and command objects. The majority of my background is with SQL Server, but I do play with MySQL from time to time. There are many advantages to stored procedures, including cached query plans; yes, this can be accomplished with parameters and inline SQL, but that opens up more possibilities for injection attacks and doesn't help with separation of concerns. For me it's also much easier to secure a database, as my applications generally only have execute permission for said stored procedures. Without direct table/view access it's much more difficult to inject anything. If the application's user is compromised, one only has permission to execute exactly what was pre-defined.
My two cents.
I wouldn't say "dumb".
I think the tutorials are the problem. Most SQL tutorials, books, whatever explain SQL with inlined values, not mentioning bind parameters at all. People learning from these tutorials don't have a chance to learn it right.
Because most code isn't written with security in mind, and management, given a choice between adding features (especially something visible that can be sold) and security/stability/reliability (which is a much harder sell) they will almost invariably choose the former. Security is only a concern when it becomes a problem.
Can parameterized statement stop all SQL injection?
Yes, as long as your database driver offers a placeholder for every possible SQL literal. Most prepared statement drivers don't: say, you'd never find a placeholder for a field name or for an array of values. That forces a developer to fall back to tailoring the query by hand, using concatenation and manual formatting, with a predictable outcome.
That's why I made my MySQL wrapper for PHP that supports most of the literals that can be added to the query dynamically, including arrays and identifiers.
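In T-SQL specifically, the usual fallback for dynamic identifiers is to whitelist them against the catalog views and wrap them with QUOTENAME before concatenating. A sketch (the variable stands in for user input; the table name is illustrative):
declare @col sysname = 'Name' -- imagine this arrived from user input
declare @sql nvarchar(200)
if exists (select 1 from sys.columns
           where object_id = object_id('dbo.Products') and name = @col)
begin
    set @sql = 'select ' + quotename(@col) + ' from dbo.Products'
    exec(@sql)
end
else
    raiserror('Unknown column', 16, 1)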
If yes, why are there still so many successful SQL injections? Just because some developers are too dumb to use parameterized statements?
As you can see, in reality it's just impossible to have all your queries parameterized, even if you're not dumb.
First my answer to your first question: Yes, as far as I know, by using parameterized queries, SQL injections will not be possible anymore. As to your following questions, I am not sure and can only give you my opinion on the reasons:
I think it's easier to "just" write the SQL query string by concatenating the different parts (maybe even dependent on some logical checks) together with the values to be inserted.
It's just creating the query and executing it.
Another advantage is that you can print (echo, output or whatever) the SQL query string and then use this string for a manual query against the database engine.
When working with prepared statements, you always have at least one step more:
You have to build your query (including the parameters, of course)
You have to prepare the query on the server
You have to bind the parameters to the actual values you want to use for your query
You have to execute the query.
That's somewhat more work (and not so straightforward to program) especially for some "quick and dirty" jobs which often prove to be very long-lived...
SQL injection is a subset of the larger problem of code injection, where data and code are provided over the same channel and data is mistaken for code. Parameterized queries prevent this from occurring by forming the query using context about what is data and what is code.
In some specific cases, this is not sufficient. In many DBMSes, it's possible to dynamically execute SQL with stored procedures, introducing a SQL injection flaw at the DBMS level. Calling such a stored procedure using parameterized queries will not prevent the SQL injection in the procedure from being exploited. Another example can be seen in this blog post.
More commonly, developers use the functionality incorrectly. When done correctly, the code looks something like this:
db.parameterize_query("select foo from bar where baz = ?", user_input)
Some developers will concatenate strings together and then use a parameterized query, which doesn't actually make the aforementioned data/code distinction that provides the security guarantees we're looking for:
db.parameterize_query("select foo from bar where baz = '" + user_input + "'")
Correct usage of parameterized queries provides very strong, but not impenetrable, protection against SQL injection attacks.
To protect your application from SQL injection, perform the following steps:
Step 1. Constrain input.
Step 2. Use parameters with stored procedures.
Step 3. Use parameters with dynamic SQL.
Refer to http://msdn.microsoft.com/en-us/library/ff648339.aspx
even if prepared statements are properly used throughout the web application's own code, SQL injection flaws may still exist if database code components construct queries from user input in an unsafe manner.
The following is an example of a stored procedure that is vulnerable to SQL injection in the @name parameter:
CREATE PROCEDURE show_current_orders
(@name varchar(400) = NULL)
AS
DECLARE @sql nvarchar(4000)
SELECT @sql = 'SELECT id_num, searchstring FROM searchorders WHERE ' +
    'searchstring = ''' + @name + ''''
EXEC (@sql)
GO
Even if the application passes the user-supplied name value to the stored procedure in a safe manner, the procedure itself concatenates this directly into a dynamic query and therefore is vulnerable.
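The fix, if the query really must be dynamic, is to keep the value as a parameter inside the dynamic statement with sp_executesql. A sketch of the repaired procedure:
ALTER PROCEDURE show_current_orders
(@name varchar(400) = NULL)
AS
DECLARE @sql nvarchar(4000)
SET @sql = 'SELECT id_num, searchstring FROM searchorders WHERE searchstring = @p'
EXEC sp_executesql @sql, N'@p varchar(400)', @p = @name
GO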

Finding stored procedures with errors in SQL Server 2008?

I have a database which consists of almost 200 tables and 3000 stored procedures.
I have deleted some fields from some tables; how can I now find the stored procedures in which those deleted fields are referenced?
Have a look at the FREE Red-Gate tool called SQL Search which does this - it searches your entire database for any kind of string(s).
It's a great must-have tool for any DBA or database developer - did I already mention it's absolutely FREE to use for any kind of use??
So in your case, you could type in the column name you deleted, and select to search only your stored procedures - and within a second or so, you'll have a list of all stored procs that contain that particular column name. Absolutely great stuff!
You can use sys.sql_modules:
SELECT OBJECT_NAME(object_id)
FROM sys.sql_modules
WHERE definition LIKE '%MyDeletedColumn%'
Or OBJECT_DEFINITION
The INFORMATION_SCHEMA views are unreliable for this because the definition is split over several nvarchar(4000) rows. The two methods above return nvarchar(max).
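For completeness, the OBJECT_DEFINITION variant looks like this (the column name is illustrative):
SELECT OBJECT_NAME(object_id)
FROM sys.procedures
WHERE OBJECT_DEFINITION(object_id) LIKE '%MyDeletedColumn%'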
Edit: Given that SQL Search is free, as noted by marc_s, that will be a better solution.
select object_name(object_id), *
from sys.sql_modules
where definition like '%ColName%'
One possible approach is to call each stored procedure with dummy parameters with SET SHOWPLAN_XML ON active. This won't run the procedure, but will generate an .xml representation of the plan - and will fail if referenced columns are missing. If you make use of #temp tables, however, this'll fail regardless. :(
You'd most likely want to automate this process, rather than writing out 3000 procedure calls.
DISCLAIMER: This isn't a bulletproof approach to picking up on missing columns, but good luck finding anything better!
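Another automatable approach, offered as an untested sketch: re-bind every procedure with sys.sp_refreshsqlmodule and record the ones that fail, since re-binding should raise an error when a referenced column no longer exists. It shares the dynamic SQL blind spot mentioned above, and skips schema-bound modules:
declare @proc nvarchar(517)
declare @broken table (ProcName nvarchar(517), ErrorMessage nvarchar(2048))
declare c cursor local fast_forward for
    select quotename(schema_name(schema_id)) + '.' + quotename(name)
    from sys.procedures
open c
fetch next from c into @proc
while @@fetch_status = 0
begin
    begin try
        exec sys.sp_refreshsqlmodule @name = @proc
    end try
    begin catch
        insert into @broken values (@proc, error_message())
    end catch
    fetch next from c into @proc
end
close c
deallocate c
select * from @broken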

Cross-database queries with different DB names in different environments?

How would you handle cross database queries in different environments. For example, db1-development and db2-development, db1-production and db2-production.
If I want to do a cross-database query in development from db2 to db1 I could use the fully qualified name, [db1-development].[schema].[table]. But how do I maintain the queries and stored procedures between the different environments? [db1-development].[schema].[table] will not work in production because the database names are different.
I can see search and replace as a possible solution but I am hoping there is a more elegant way to solve this problem. If there are db specific solutions, I am using SQL Server 2005.
Why are the database names different between dev and prod? It'd, obviously, be easiest if they were the same.
If it's a single shared table, then you could create a view over it - which only requires that you change that view when moving to production.
Otherwise, you'll want to create a SYNONYM for the objects, and make sure to always reference that. You'll still need to change the SYNONYM creation scripts, but that can be done in a build script fairly easily, I think.
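For instance (names illustrative), the only per-environment artifact is then the synonym definition itself:
-- run by the development build script
CREATE SYNONYM dbo.Db1Products FOR [db1-development].[dbo].[Products]
-- run by the production build script instead:
-- CREATE SYNONYM dbo.Db1Products FOR [db1-production].[dbo].[Products]
-- all queries and procs simply reference dbo.Db1Products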
For this reason, it's not practical to use different names for development and production databases. Using the same db name on development, production, and optionally acceptance/QA environments makes your SQL code much easier to maintain.
However, if you really have to, you could get creative with views and dynamic SQL. For example, you put the actual data retrieval query inside a view, and then you select like this:
declare @environment varchar(10)
set @environment = 'db-dev' -- input parameter, comes from app layer
declare @sql varchar(8000)
set @sql = 'select * from [' + @environment + '].dbo.view'
execute(@sql)
But it's far from pretty...