Is there a way to reference a SQL statement to the C# EF code which generated the SQL? - sql

When I troubleshoot a large .NET app that uses only stored procedures, I capture the SQL, which includes the SP name, from SQL Server Profiler, and then it's easy to do a global search for the SP in the source files and find the exact line that produced the call.
When using Entity Framework this is not possible, because the SQL statements are created dynamically. However, there are times when I capture some problematic SQL statements from production and want to know where in the code they were generated.
I know one can have EF generate logs and tracing on demand. This would probably be taxing for a busy server and produce too much log output. I read some stuff about using MiniProfiler, but I'm not sure it fits my needs since I don't have access to the production server. I do, however, have access to attach SQL Server Profiler to the database server.
My idea is to find a way to have EF attach/inject a unique code into the generated SQL that doesn't affect the outcome of the query. I can then use it to cross-reference the statement to the line of code that injected it. The unique code is static, meaning a distinct static code is used for every EF LINQ statement - maybe sent as a dummy SQL statement or as a comment along with the real one.
I know this will add some extra traffic, but in my case it will add extra flexibility and cut a lot of troubleshooting time.
Any ideas of how to do this or any alternatives?

One very simple approach would be to execute something via ExecuteStoreCommand() (see: Refresh data from stored procedure). I'm not sure if you can "execute" just a comment, but at the very least you should be able to do something like:
ExecuteStoreCommand("DECLARE #MyTag VARCHAR(100) = 'some_unique_id';");
This is very simple, but you would have to find the association in two steps:
Get the SessionID (i.e. SPID) of the poorly performing query in SQL Server Profiler
Search the Profiler entries for the prior SQL statement for that same SPID
Another option that might be a little more complicated, but would remove that additional step when it comes to making the association, is to "intercept" the commands before they get executed and inject a comment with your unique id. Please see the following S.O. answer for details. You shouldn't need the full extent of what they did, but even if you do, it seems like all of the code (or at least all of the relevant stuff) is there:
Adding a query hint when calling Table-Valued Function
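If you go the interception route, a minimal sketch might look like the following. This assumes EF6's IDbCommandInterceptor; the class name and the tag text are placeholders I made up, and making the tag differ per LINQ statement would need something extra, such as a thread- or async-local value set just before each query runs.
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;

// Prepends a comment to every command EF issues, so the tag shows up
// next to the generated SQL in SQL Server Profiler.
public class QueryTagInterceptor : IDbCommandInterceptor
{
    private static void Tag(DbCommand command)
    {
        if (!command.CommandText.StartsWith("/*"))
            command.CommandText = "/* EF-TAG: hypothetical-unique-id */ " + command.CommandText;
    }

    public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) { Tag(command); }
    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { Tag(command); }
    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { Tag(command); }

    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) { }
    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
}

// Registered once at application startup, e.g. in Application_Start:
// DbInterception.Add(new QueryTagInterceptor());
Since the comment becomes part of the command text, Profiler captures it verbatim right next to the SQL that EF generated.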
By the way, this situation is a point in favor of using Stored Procedures instead of an ORM. And, what do you expect to be able to do in terms of performance tuning once you do find the offending app code? (another point in favor of using Stored Procedures instead of an ORM ;-).

Related

Oracle 8i trace of sql statements

I am investigating a legacy app that uses an Oracle 8i database in a test environment, specifically trying to find out what tables are accessed for read, insert, update or delete when the user performs an app function.
What is the best/easiest way to do this? Can I simply get a list of all sql statements sent to the database? Can I see when stored procedures are called?
Having little experience with Oracle but getting help from a DBA, I'm thinking I should either use a trace or look at the redo log with LogMiner, but how?
Thanks!
What you could do is harvest the SQL from v$sql. If the SQL is properly written - using bind variables - you should be able to catch most of the statements and store them in a table. I don't have a running v8 instance at hand right now, but this should be possible.
In order to get most of them, you probably need to repeat the harvesting during the various workloads that run on the database.
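If you end up doing the harvesting from the application side rather than in the database, a very rough sketch could look like this. It assumes the old System.Data.OracleClient provider (deprecated, but period-appropriate for 8i), the connection string is a placeholder, and the account needs access to the V$ views; you would run it repeatedly under the different workloads, as suggested above.
using System;
using System.Data.OracleClient;

class VSqlHarvester
{
    static void Main()
    {
        // Placeholder connection string - point it at the test instance.
        using (var conn = new OracleConnection("Data Source=LEGACY8I;User Id=scott;Password=tiger"))
        {
            conn.Open();
            var cmd = new OracleCommand(
                "SELECT sql_text, executions, first_load_time FROM v$sql", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}\t{1}\t{2}",
                        reader["executions"], reader["first_load_time"], reader["sql_text"]);
            }
        }
    }
}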

SQL Server : list all columns used in queries

Is there a way to detect which columns and which tables are used in a SQL Server database?
Just against SQL Server 2012 would be fine.
We can assume there is no '*' column usage in the legacy site.
Details:
I'm working on updating the table structure of a legacy system to work on a newer database (2005 to 2012)
There are a lot of bloated tables, with columns that are never used, and even tables that are never used. Identifying all of them by manually going through the code would be a pain.
(My assumption is that we can run SQL Server profiler while running a complete test pass on the app, but I don't know a convenient way to extract the columns)
Thanks.
You can list dependencies for a table in Mgmt Studio, which will show you which SPs, UDFs etc. depend on the table in question; you can't do that for a single field. However, that would only show the internal dependencies. SQL Profiler would theoretically show you all fields that get requested by your app, but it still would not really tell you much, as the app may not do anything with the values it retrieves. If you are going to change the db, it would only really make sense to put in the effort if you were also going to change the app, and then you should really get some input from users on what features are still useful and what is broken before you get too involved in a back-end refresh. IMHO.

Auditing execution of stored procedures in Sql Server

My boss and I have been trying to work out what sort of auditing plan we could use for our stored procedures. Currently there are two external applications taking information from our database through stored procedures, and we're interested in auditing when they're being executed and what values are passed as parameters. So far what I've done is simply create a table for the stored procedures one of the apps is using; since they use the same input parameters, there is one column per parameter. Obviously this isn't the best choice, but we wanted quick info to see if they were running batch processes and when they were running them. I've tried SQL Server Audit, but it doesn't catch the parameters unless you're executing the SP in a query.
SQL Server Profiler will do this for you; it's included for free. Set up a trace and let it run.
You can also apply quite a bit of filtering to the trace, so you don't need to track everything; you can also direct the output to a file or a SQL table for later analysis. This is probably your best bet for a time-limited audit.
I think I've used the SQL Server Profiler (http://msdn.microsoft.com/en-us/library/ms181091.aspx) in the past to audit SQL execution. It's not something you would run all the time, but you can get a snapshot of what's running and how it's being executed.
I haven't tried using them, but you might look at event notifications and see if they will work for you.
From BOL
Event notifications can be used to do the following:
Log and review changes or activity occurring on the database.

Disable all queries in SQL Server that don't use named parameters?

It seems that one could stop all threat of SQL injection once and for all by simply rejecting all queries that don't use named parameters. Is there any way to configure SQL Server to do that? Or else any way to enforce it at the application level by inspecting each query, without writing an entire SQL parser? Thanks.
Remove the grants for a role to be able to SELECT/UPDATE/INSERT/DELETE against the table(s) involved
Grant EXECUTE on the role for stored procedures/functions/etc
Associate the role with the database user(s) you want to secure
It won't stop an account that also has the ability to GRANT access, but it will stop the users associated with the role (assuming no other grants on a per-user basis) from being able to execute queries outside of the stored procedures/functions/etc. that exist.
There are only a couple of ways to do this. OMG Ponies has the best answer: don't allow direct SQL statements against your database and instead leverage the tools and security SQL Server can provide.
An alternative way would be to add an additional tier which all queries would have to go through. In short, you'd pass all queries (SOA architecture) to a new app which would evaluate the query before passing it on to SQL Server. I've seen exactly one company do this, in reaction to SQL injection issues their site had.
Of course, this is a horrible way of doing things, because SQL injection is only one potential problem.
Beyond SQL injection, you also have the issue of what happens when the site itself is cracked. Once you can write a new page to a web server, it becomes trivial to pass any query you want to the associated database server, which would easily bypass any code-level check you could put in place. It would allow the attacker to just write select * from ... or truncate table ... Heck, an internal person could potentially just connect directly to the SQL server using the site's credentials and run any query they wanted.
The point is, if you leverage the security built into SQL Server to prevent direct table access, then you can control through stored procedures the full range of actions available to anyone attempting to connect to the server.
And how do you want to check for that? Queries sometimes contain constant values that are just as easily written straight into the query text as passed as parameters. For instance, I have a database that is prepared to be multilingual, but not all of the code is, so my query looks like this:
SELECT NAME FROM SOMETABLE WHERE ID = :ID AND LANGUAGEID = 1
The ID is a parameter, but the language id isn't. Should this query be blocked?
You ask to block queries that don't use named parameters. That can be easily enforced: just block any query that doesn't specify any parameters. You can do this in your application layer. But it will be hard to block queries like the one above, where one value is a parameter and the other one isn't; you'd need to parse the query to detect that, and that will be hard too.
I don't think SQL Server has any built-in features to do this.
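To illustrate the simple application-layer check mentioned above, here is a sketch only - the wrapper name is made up, and as noted it cannot catch the mixed case where some values are parameters and some are hard-coded literals.
using System;
using System.Data.SqlClient;

static class GuardedDb
{
    // Rejects any command that has no parameters attached at all.
    // It cannot tell whether individual values inside the text are hard-coded literals.
    public static SqlDataReader ExecuteGuarded(SqlCommand cmd)
    {
        if (cmd.Parameters.Count == 0)
            throw new InvalidOperationException(
                "Blocked: query does not use named parameters: " + cmd.CommandText);
        return cmd.ExecuteReader();
    }
}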

Clone entire database with a SP

I'm trying to find out if this is possible, but so far I haven't found any good solutions. What I would like to achieve is to write a stored procedure that can clone a database, but without the stored data. That means all tables, views, constraints, keys and indexes should be included, but without any data. Can it be done?
Sure - your stored proc would have to read the system catalog views to find out what objects are in the database, determine their dependencies, then generate a single script (or a collection of SQL scripts) that re-creates the database, and execute those.
It's possible - but neither nice nor easy to do. The dependencies between objects in particular might cause more headaches than first meets the eye....
You could also:
use something like SQL Server Management Studio (if you're on SQL Server - you didn't specify) and create the scripts manually, and just re-execute them on a separate server
use a "diff" tool like Redgate SQL Compare to compare two servers and have the second one brought up to date
I've successfully used the Microsoft SQL Server Database Publishing Wizard for this purpose. It's pretty straightforward, no coding needed. Here's a sample call:
sqlpubwiz script -d DatabaseName -S ServerName -schemaonly C:\Projects2\Junk\DatabaseName.sql
I believe the default is to script both data and schema, but you can use the -schemaonly parameter.
Download it here
In SQL Server you can roll through the system views (sys.tables, sys.columns, etc.) and construct things one at a time. It's going to be very manual and error-prone at the beginning, but it should become systematic pretty quickly.
Another way to do it is to write something in .Net using SMO. Check out this link:
http://www.sqlteam.com/article/scripting-database-objects-using-smo-updated
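For example, a minimal schema-only SMO sketch might look like this. It assumes the Microsoft.SqlServer.Smo assemblies are referenced; the server and database names are placeholders, and views, procedures etc. would need similar loops over db.Views, db.StoredProcedures and so on.
using System;
using System.Collections.Generic;
using Microsoft.SqlServer.Management.Smo;

class SchemaOnlyScripter
{
    static void Main()
    {
        var server = new Server("MYSERVER");          // placeholder server name
        var db = server.Databases["SourceDb"];        // placeholder database name

        var scripter = new Scripter(server);
        scripter.Options.ScriptSchema = true;         // schema only...
        scripter.Options.ScriptData = false;          // ...no data
        scripter.Options.Indexes = true;              // include indexes
        scripter.Options.DriAll = true;               // include keys and constraints
        scripter.Options.WithDependencies = true;     // emit objects in dependency order

        var tables = new List<SqlSmoObject>();
        foreach (Table table in db.Tables)
            if (!table.IsSystemObject)
                tables.Add(table);

        // Script(...) returns the CREATE statements; run them against the empty target database.
        foreach (string line in scripter.Script(tables.ToArray()))
            Console.WriteLine(line);
    }
}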