SQL Server 2005 system stored procedure to find out the list of tables affected

Is there any system-defined stored procedure available in SQL Server 2005 to find out which tables get affected while the application is running and we navigate from one page to another?

There's really no easy way (if any at all) to find that out, unfortunately.
As SQL Server MVP Aaron Bertrand puts it in his excellent blog post When was my database / table last accessed?:
A frequently asked question that surfaced again today is, "how do I see when my data has been accessed last?" SQL Server does not track this information for you. SELECT triggers still do not exist. Third party tools are expensive and can incur unexpected overhead. And people continue to be reluctant or unable to constrain table access via stored procedures, which could otherwise perform simple logging. Even in cases where all table access is via stored procedures, it can be quite cumbersome to modify all the stored procedures to perform logging.
However, with the help of the sys.dm_db_index_usage_stats DMV (dynamic management view) and some clever T-SQL programming by Aaron, you can find out a few of those answers - check out his very enlightening blog post for details!
However: since this information is based on a DMV, and the "D" in DMV stands for dynamic, those values are only valid since the last server restart; they are wiped out and not preserved the next time you restart your SQL Server process or reboot your server machine.
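As a rough illustration of that approach (a sketch, not Aaron's actual code), a minimal query against the DMV could look like this - keeping in mind that the counters reset on every restart:

-- Last read/write activity per table in the current database,
-- based on sys.dm_db_index_usage_stats (reset at every SQL Server restart).
SELECT o.name AS table_name,
       MAX(u.last_user_seek)   AS last_seek,
       MAX(u.last_user_scan)   AS last_scan,
       MAX(u.last_user_lookup) AS last_lookup,
       MAX(u.last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats AS u
JOIN sys.objects AS o ON o.object_id = u.object_id
WHERE u.database_id = DB_ID()
GROUP BY o.name
ORDER BY o.name;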

I know of none, but Profiler offers a solution. Run Profiler (it can run from a developer box) and navigate through the application. It will create an output file of what is being run.
There are also code-analysis tools that show dependencies; I would imagine at least one of them shows dependencies on SQL objects.

I don't think so. You can run SQL Profiler to see which commands are fired against the SQL Server, but you will have to parse them yourself.
You could also try emptying the query cache and then looking at it once your navigation is done, but the cache will be contaminated by other queries running on the server (including the ones run by SQL Server itself).
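For instance, on a dev box you could clear the plan cache, exercise the application, and then inspect what was cached (a sketch only; DBCC FREEPROCCACHE drops all cached plans, so never run it on a production server):

DBCC FREEPROCCACHE;  -- dev box only: drops every cached plan

-- ... navigate through the application, then:
SELECT TOP (50)
       st.text,
       qs.execution_count,
       qs.last_execution_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.last_execution_time DESC;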

Related

SQL Server: list all columns used in queries

Is there a way to detect which columns and which tables are used in a SQL Server database?
Just against SQL Server 2012 would be fine.
We can assume there are no '*' for column usage in the legacy site.
Details:
I'm working on updating the table structure of a legacy system to work on a newer database (2005 to 2012)
There are a lot of bloated tables, with columns that are never used, and even tables that are never used. Identifying all of them by manually going through the code would be a pain.
(My assumption is that we can run SQL Server profiler while running a complete test pass on the app, but I don't know a convenient way to extract the columns)
Thanks.
You can list dependencies for a table in Management Studio, which will show you which stored procedures, UDFs, etc. depend on the table in question. You can't do that for a single field, and it would only show the internal dependencies.

SQL Profiler would theoretically show you all fields that get requested by your app, but even that would not really tell you much, as the app may not do anything with the values it retrieves.

If you are going to change the database, it would only really make sense to put in the effort if you were also going to change the app - and then you should really be getting input from users on what features are still useful and what is broken before you get too involved in a back-end refresh. IMHO.
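For the internal dependencies, a quick sketch using the dependency DMV (available from SQL Server 2008 onwards; the table name dbo.MyTable is a placeholder):

-- Objects (procedures, views, functions, ...) that reference a given table.
SELECT referencing_schema_name,
       referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.MyTable', 'OBJECT');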

Performance hit on DB2 transactional database after linking to SQL Server 2005

We have an AS400 mainframe running our DB2 transactional database. We also have a SQL Server setup that gets loaded nightly with data from the AS400. The SQL Server setup is for reporting.
I can link the two database servers, BUT, there's concern about how big a performance hit DB2 might suffer from queries coming from SQL Server.
Basically, the fear is that if we start hitting DB2 with queries from SQL Server we'll bog down the transactional system and screw up orders and shipping.
Thanks in advance for any knowledge that can be shared.
Anyone who has a pat answer for a performance question is wrong :-) The appropriate answer is always 'it depends.' Performance tuning is best done via measure, change one variable, repeat.
DB2 for i shouldn't even notice if someone executes a 1,000-row SELECT statement. Take Benny's suggestion and run one while the IBM i side watches. If they want a hint, use WRKACTJOB and sort on the Int column, which represents the interactive response time. I'd guess that the query will be complete before they have time to notice it was active.
If that seems unacceptable to the management, then perhaps offer to test it before or after hours, where it can't possibly impact interactive performance.
As an aside, the RPG guys can create Excel spreadsheets on the fly too. Scott Klement published some RPG wrappers over the Java POI/HSSF classes. Also, Giovanni Perrotti at Easy400.net has some examples of providing an Excel spreadsheet from a web page.
I'd mostly agree with Buck, a 1000 row result set is no big deal...
Unless of course the system is looking through billions of rows across hundreds of tables to get the 1000 rows you are interested in.
Assuming a useful index exists, 1,000 rows shouldn't be a big deal. If you have IBM i Access for Windows installed, there's a component of System i Navigator called "Run SQL Scripts" that includes "Visual Explain", which provides a visual explanation of the query execution plan. With that view you can ensure that an index is being used.
One key thing: make sure the work is being done on the i. When using a standard linked table, MS SQL Server will attempt to pull back all the rows and then apply its own WHERE locally:
select * from MYLINK.MYIBMI.MYLIB.MYTABLE where MYKEYFLD = '00335';
Whereas this format sends the statement to the remote server for processing and just gets back the results:
select * from openquery(MYLINK, 'select * from mylib.mytable where MYKEYFLD = ''00335''');
Alternately, you could ask the i guys to build you a stored procedure that you can call to get back the results you are looking for. Personally, that's my preferred method.
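A hedged sketch of what such a call could look like from the SQL Server side (the procedure name MYLIB.GET_ORDER and its parameter are made up for illustration; the linked server needs the "RPC Out" option enabled):

-- Execute a stored procedure on the remote IBM i over the linked server.
EXEC ('CALL MYLIB.GET_ORDER(?)', '00335') AT MYLINK;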
Charles

SQL Server 2012: A way to see if the database has been tampered with?

I have a delicate situation wherein some records in my database are inexplicably missing. Each record has a sequential number, and the number sequence skips over entire blocks. My server program also keeps a log file of all the transactions received and posted to the database, and those missing records do appear in the log, but not in the database. The gaps of missing records coincide precisely with the dates and times of the records that show in the log.
The project, still currently under development, consists of a server program (written by me in Visual Basic 2010) running on a development computer in my office. The system retrieves data from our field personnel via their iPhones (running a specialized app also developed by me). The database is located on another server in our server room.
No one but me has access to my development server, which holds the log files, but there is one other person who has full access to the server that hosts the database: our head IT guy, who has complained that he believes he should have been the developer on this project.
It's very difficult for me to believe he would sabotage my data, but so far there is no other explanation that I can see.
Anyway, enough of my whining. What I need to know is, is there a way to determine who has done what to my database?
If you are using an identity column for your "sequential number" and your insert statement errors out, the identity value will still be incremented even though no record has been inserted. Just another possible cause for this issue outside of "tampering".
Look at the transaction log if it hasn't been truncated yet:
How to view transaction logs in SQL Server 2008
How do I view the transaction log in SQL Server 2008?
If you want to catch the changes in real time, I suggest you consider using SqlDependency. This way, when data changes you will be alerted immediately and can check which user is using the database at that very moment (this could also be done in code).
Come to think of it, you could achieve the same effect using a trigger that writes to an "active users" table. Of course, if you suspect someone is tampering with data, SqlDependency might be the better way to go, as the information will be stored outside of the tampered database.
You can run a trace, for example a server-side Profiler trace, that captures all SQL queries containing the DELETE keyword. This way, nobody will be aware that the queries are being traced. You can also query the default trace regularly to see recent activity (note that the default trace records schema-level events such as object creation and deletion, not individual DML statements): Maintaining SQL Server default trace historical events for analysis and reporting
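A sketch of reading the default trace (the file path is discovered at runtime; again, this captures schema-level events, not row-level DELETEs):

-- Read the default trace into a result set and filter to object-level events.
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.LoginName, t.HostName, t.ObjectName, te.name AS event_name
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE te.name LIKE 'Object:%'
ORDER BY t.StartTime DESC;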

Auditing execution of stored procedures in Sql Server

My boss and I have been trying to work out what sort of auditing plan we could use for our stored procedures. Currently there are two external applications taking information from our database through stored procedures, and we're interested in auditing when they're being executed and what values are passed as parameters. So far what I've done is simply create a table for the stored procedures one of the apps is using; as they use the same input parameters, it has one column per parameter. Obviously this isn't the best choice, but we wanted quick information to see whether they were running batch processes and when they were running them. I've tried SQL Server Audit, but it doesn't catch the parameters unless you're executing the SP in a query.
SQL Server Profiler will do this for you; it's included for free. Set up a trace and let it run.
You can also apply quite a bit of filtering to the trace, so you don't need to track everything; you can also direct the output to a file or a SQL table for later analysis. This is probably your best bet for a time-limited audit.
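For a lighter-weight, scriptable version of the same idea, a minimal server-side trace could look like this (event 10 is RPC:Completed, whose TextData column contains the full EXEC text including parameter values; the file path is an assumption for illustration):

-- Minimal server-side trace capturing stored procedure calls with parameters.
DECLARE @TraceID INT;
DECLARE @maxfilesize BIGINT;
SET @maxfilesize = 50;  -- MB per trace file

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\sp_audit', @maxfilesize, NULL;

DECLARE @on BIT;
SET @on = 1;
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;  -- TextData: EXEC proc + parameter values
EXEC sp_trace_setevent @TraceID, 10, 11, @on;  -- LoginName
EXEC sp_trace_setevent @TraceID, 10, 14, @on;  -- StartTime

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace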
I think I've used the SQL Server Profiler (http://msdn.microsoft.com/en-us/library/ms181091.aspx) in the past to audit SQL execution. It's not something you would run all the time, but you can get a snapshot of what's running and how it's being executed.
I haven't tried using them, but you might look at event notifications and see if they will work for you.
From BOL
Event notifications can be used to do the following:
Log and review changes or activity occurring on the database.

Linked Tables - SQL Server 2008 to SQL Server 2005

I have a database server that currently has two databases, call them [A] and [B].
[A] is the standard database used for my application
[B] is used by another application (that puts considerable load on the server), however, we use a few tables sparingly (that are shared between my application and the primary application that utilizes [B]). Recently the use of [B] has been increasing and it's causing long wait periods and timeouts in my application.
I'm open to alternative ideas, but at the moment the quickest potential solution I've come up with is to get a second database server and move [A] to that. However, I do need access to some of the tables in [B] - ideally with as little code changes as possible.
We were thinking a setup something like:
Current:
DB Server 1 {[A],[B]} (SQL Server 2005)
New Setup
DB Server 1 {[B]} (SQL Server 2005)
DB Server 2 {[A], [B]} (SQL Server 2008)
Where in the new setup, DBServer2.[B] is a linked table (kind of) to DBServer1.[B]
I looked into linked servers; from my (limited) understanding, those work as an alias to a database on another server, which is almost what we want, except it adds the extra server qualifier, and it works at the database level. I'd like to do something like this, but ideally without the server qualifier and at the table level.
So for example, [B] has tables Users and Events. The Users table is updated weekly by batch, and we use it often, so we'd like to have a local copy on the new DBServer2. However, Events we use far less often and needs to be real-time between the two servers. In this case, it would be great to have Events as a linked table.
Further, it would be fantastic if we could use the same query syntax. Today, to query one database from the other, we'd do something like
select * from b.events join a.dates
We'd like to continue that, except have the database server know that when we touch events it's really located at dbserver1.b.events.
Does that make sense? I confuse myself sometimes.
Thanks for any help
~P
You can use synonyms for linked objects - http://msdn.microsoft.com/en-us/library/ms177544%28SQL.90%29.aspx
Unfortunately this only works for single objects; you CANNOT make a synonym for linkedserver.databasename and then reference synDBName.table.
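A quick sketch of the per-table approach, using the names from the question (the dbo schema is an assumption):

-- One synonym per remote table; local queries then need no server qualifier.
CREATE SYNONYM dbo.Events FOR DBSERVER1.B.dbo.Events;

-- Existing unqualified queries keep working unchanged:
select * from dbo.Events;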
"Alternative Idea" from me since you said you are open...
How about looking into the cause of the slowness? What is the "Load" you are measuring?
How are your disks laid out on the server?
Maybe you could use more memory, another CPU, or some SQL tuning?
Fixing your issues with a software or hardware fix MAY be faster than getting a server and doing all the installs and then working through the integration problems you may run into.