SQL Server 2012: A way to see if the database has been tampered with?

I have a delicate situation wherein some records in my database are inexplicably missing. Each record has a sequential number, and the number sequence skips over entire blocks. My server program also keeps a log file of all the transactions received and posted to the database, and those missing records do appear in the log, but not in the database. The gaps in the record sequence coincide precisely with the dates and times of the records that appear in the log.
The project, still currently under development, consists of a server program (written by me in Visual Basic 2010) running on a development computer in my office. The system retrieves data from our field personnel via their iPhones (running a specialized app also developed by me). The database is located on another server in our server room.
No one but me has access to my development server, which holds the log files, but there is one other person who has full access to the server that hosts the database: our head IT guy, who has complained that he believes he should have been the developer on this project.
It's very difficult for me to believe he would sabotage my data, but so far there is no other explanation that I can see.
Anyway, enough of my whining. What I need to know is, is there a way to determine who has done what to my database?

If you are using an IDENTITY column for your "sequential number" and an INSERT statement errors out, the identity value is still incremented even though no record was inserted (the same happens when a transaction is rolled back). Just another possible cause for this issue outside of "tampering".
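You can see this for yourself with a quick sketch (the table and column names here are just for illustration):

    -- A failed INSERT still consumes the identity value, leaving a gap.
    CREATE TABLE dbo.Orders (
        OrderID int IDENTITY(1,1) PRIMARY KEY,
        Amount  decimal(10,2) NOT NULL CHECK (Amount > 0)
    );

    INSERT INTO dbo.Orders (Amount) VALUES (10.00);     -- gets OrderID 1

    BEGIN TRY
        INSERT INTO dbo.Orders (Amount) VALUES (-5.00); -- violates the CHECK
    END TRY
    BEGIN CATCH
        PRINT 'Insert failed, but the identity value was consumed anyway.';
    END CATCH;

    INSERT INTO dbo.Orders (Amount) VALUES (20.00);     -- gets OrderID 3, not 2

    SELECT OrderID, Amount FROM dbo.Orders;             -- shows the gap at 2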

Look at the transaction log if it hasn't been truncated yet:
How to view transaction logs in SQL Server 2008
How do I view the transaction log in SQL Server 2008?
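If the log really hasn't been truncated, the undocumented fn_dblog function gives a quick first look without third-party tools. This is only a sketch; fn_dblog is unsupported and its output varies by version:

    -- Deleted rows appear as LOP_DELETE_ROWS records; the owning
    -- LOP_BEGIN_XACT record carries the start time and the login SID.
    SELECT d.[Transaction ID],
           b.[Begin Time],
           SUSER_SNAME(b.[Transaction SID]) AS LoginName,
           d.AllocUnitName
    FROM fn_dblog(NULL, NULL) AS d
    JOIN fn_dblog(NULL, NULL) AS b
      ON  b.[Transaction ID] = d.[Transaction ID]
      AND b.Operation = 'LOP_BEGIN_XACT'
    WHERE d.Operation = 'LOP_DELETE_ROWS';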

If you want to catch the changes in real time, I suggest you consider using SqlDependency. This way, when data changes, you will be alerted immediately and can check which user is using the database at the very moment (this could also be done using code).
You can use this code sample.
Come to think of it, you can achieve the same effect using a trigger that writes to an 'active users' table. Of course, if you suspect someone is tampering with the data, SqlDependency may be the better way to go, since the evidence will be stored outside of the tampered database.
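For the trigger route, here is a minimal sketch. The dbo.Records table and its RecordID column are hypothetical stand-ins for your own table; note that the audit table lives inside the same database, so someone with full access could tamper with it too:

    -- Audit table capturing who deleted what, when, and from which machine.
    CREATE TABLE dbo.DeleteAudit (
        AuditID   int IDENTITY(1,1) PRIMARY KEY,
        DeletedAt datetime      NOT NULL DEFAULT GETDATE(),
        LoginName nvarchar(128) NOT NULL DEFAULT SUSER_SNAME(),
        HostName  nvarchar(128) NULL    DEFAULT HOST_NAME(),
        RecordID  int           NOT NULL
    );
    GO
    CREATE TRIGGER trg_Records_AuditDelete ON dbo.Records
    AFTER DELETE
    AS
    BEGIN
        -- One audit row per deleted record.
        INSERT INTO dbo.DeleteAudit (RecordID)
        SELECT RecordID FROM deleted;
    END;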

You can run a trace, for example a Profiler trace run from a remote machine, that captures all SQL queries containing the DELETE keyword. This way, nobody will be aware that queries are being traced. You can also query the default trace regularly to get the most recent events: Maintaining SQL Server default trace historical events for analysis and reporting
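A sketch of reading the default trace (the file path differs per instance). Keep in mind the default trace records a fixed set of events, such as object drops and alterations, rather than arbitrary DML, so a custom server-side trace filtered on the DELETE keyword is still needed for individual statements:

    DECLARE @path nvarchar(260);
    SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

    -- Most recent events first; the default trace rolls over, so query
    -- (or archive) it regularly, as the linked article describes.
    SELECT t.StartTime, t.LoginName, t.HostName, t.EventClass, t.TextData
    FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
    ORDER BY t.StartTime DESC;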

Related

How to track which tables the database wrote data to?

I use "Lexware Warenwirtschaft Premium 2014" (a well-known merchandise management software in Germany). It uses Sybase as a database. I connect to the database by using a ODBC connection(SQL Anywhere driver). The database has 800+ tables. For example when Lexware creates a new Article, it writes data into different tables.
Is there a way to track into which tables Lexware wrote data?
As an ad-hoc measure you could switch on ODBC tracing, and then review the contents.
http://support.microsoft.com/kb/274551 tells you how to do this from a Windows client, and you can find similar information for Linux/Unix and other clients.
You'd then have to parse the trace file to see which tables were inserted into. The first step would probably be to isolate all the SQLPrepare and SQLExecDirect statements, and check them for INSERT, UPDATE and other relevant Sybase statements.
Note that this is not something you'd want as an ongoing solution, just a way to find out what an ODBC client does if you do not have access to e.g. logging information on the database itself. However, the trace slows down execution and would generate a very large trace file if you left it running for any significant period.
I don't think so. Whatever this program does behind the interface is hidden in its binaries and unreadable for humans, so you can't read the code to see which tables are altered.
You might be able to figure out which table was edited last, depending on the database server and its version.

SQL Server : list all columns used in queries

Is there a way to detect which columns and which tables are used in a SQL Server database?
Just against SQL Server 2012 would be fine.
We can assume there is no 'SELECT *' column usage in the legacy site.
Details:
I'm working on updating the table structure of a legacy system to work on a newer database (2005 to 2012)
There are a lot of bloated tables, with columns that are never used, and even tables that are never used. Identifying all of them by manually going through the code would be a pain.
(My assumption is that we can run SQL Server profiler while running a complete test pass on the app, but I don't know a convenient way to extract the columns)
Thanks.
You can list dependencies for a table in Management Studio, which will show you which stored procedures, UDFs, etc. depend on the table in question; you can't do that for a single field. However, that only shows the internal dependencies. SQL Profiler would theoretically show you all fields requested by your app, but even that would not really tell you much, as the app may not do anything with the values it retrieves. If you are going to change the DB, it only really makes sense to put in the effort if you are also going to change the app, and then you should really get some input from users on what features are still useful and what is broken before you get too involved in a back-end refresh. IMHO.
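For the dependency side, a sketch using the dependency views available from SQL Server 2008 onward ('dbo.SomeTable' is a placeholder; neither view sees ad-hoc SQL sent by the app, which is why a Profiler test pass is still needed):

    -- Procedures, views, UDFs and triggers that reference a given table.
    SELECT referencing_schema_name, referencing_entity_name
    FROM sys.dm_sql_referencing_entities(N'dbo.SomeTable', 'OBJECT');

    -- Or scan all recorded object-to-object dependencies at once.
    SELECT OBJECT_NAME(referencing_id) AS referencing_object,
           referenced_entity_name
    FROM sys.sql_expression_dependencies;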

SQL Server 2005 system stored procedure to find out the list of tables affected

Is there any system-defined stored procedure available in SQL Server 2005 to find out which tables are affected while the application is running and we navigate from one page to another?
There's really no easy way (if any at all) to find that out, unfortunately.
As SQL Server MVP Aaron Bertrand puts it in his excellent blog post When was my database / table last accessed?:
A frequently asked question that surfaced again today is, "how do I see when my data has been accessed last?" SQL Server does not track this information for you. SELECT triggers still do not exist. Third party tools are expensive and can incur unexpected overhead. And people continue to be reluctant or unable to constrain table access via stored procedures, which could otherwise perform simple logging. Even in cases where all table access is via stored procedures, it can be quite cumbersome to modify all the stored procedures to perform logging.
However, with the help of the sys.dm_db_index_usage_stats DMV (dynamic management view) and some clever T-SQL programming by Aaron, you can find out a few of those answers - check out his very enlightening blog post for details!
However: since this information comes from a DMV, and the "D" in DMV stands for dynamic, those values are only valid since the last server restart; they will be wiped out, not preserved, when you next restart your SQL Server process or reboot the server machine.
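As a starting point, a sketch of the kind of query Aaron's post builds on (run it in the database in question; remember the counters reset on service restart):

    -- Last read/write activity per table since the last restart.
    SELECT OBJECT_NAME(ius.object_id)  AS table_name,
           MAX(ius.last_user_seek)    AS last_seek,
           MAX(ius.last_user_scan)    AS last_scan,
           MAX(ius.last_user_update)  AS last_update
    FROM sys.dm_db_index_usage_stats AS ius
    WHERE ius.database_id = DB_ID()
    GROUP BY ius.object_id;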
I know of none, but Profiler offers a solution. Run Profiler (it can run on a developer box) and navigate the app; it will create an output file of what is being run.
There are also code tools that show dependencies. I would imagine at least one shows dependencies on SQL objects.
I don't think so. You can run the SQL-profiler to see which commands are fired against the SQL server but you will have to parse them yourself.
You could also try to empty the plan cache and then look at it when your navigation is done, but the cache will be contaminated by other queries running on the server (including those run by SQL Server itself).
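A sketch of that approach using the plan cache DMVs (only flush the cache on a development server, since DBCC FREEPROCCACHE affects the whole instance):

    -- Clear the plan cache, then exercise the application.
    DBCC FREEPROCCACHE;

    -- Afterwards, list the statements compiled and executed since the flush.
    SELECT st.text AS query_text, cp.usecounts
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    ORDER BY cp.usecounts DESC;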

What strategies are available for migrating Access databases to SQL server-based applications?

I'm considering undertaking a project to migrate a very large MS Access application to a new system based on SQL Server. The existing system is essentially an ERP application with a couple of dozen users, all sharing the Access database over the network. The database has around 300 tables and lots of messy VBA code. This system is beginning to break down (actually, it's amazing it has worked as long as it has).
Due to the size and complexity of the Access application, a 'big bang' approach is not really feasible. It seems sensible to rope off chunks of functionality and migrate them piecemeal to the new system. During the migration process, which I expect to take several months, there may be a need for both databases to be in operation and be able to query and modify data in both systems.
I have considered using something like the ADO.NET Entity Framework to implement a data abstraction layer to handle this, but as far as I can tell, the Entity Framework has no Access provider.
Does my approach seem reasonable? What other strategies have people used to accomplish similar goals?
You may find that the main problem is using the MS Access JET engine as the backend. I'm assuming that you do have an Access FE (frontend) with all objects except tables, and a BE (backend - tables only).
You may find that migrating the data to SQL Server, and linking the Access FE to that, would help alleviate problems immediately.
Then, if you don't want to continue using MS Access as the FE, you could consider breaking it up into 'modules' and redesigning them one by one on a separate development platform.
We faced a similar situation a few years ago, but we knew from the beginning that we would have to switch to SQL Server one day, so all the code was written so the Access client could work against both Access AND SQL Server databases.
The idea of a 'one-step' migration to SQL Server is certainly the easier way to manage this on the database side, and there are many tools for it. But depending on the way your client app talks to the database, your code might then not work properly. If, for example, your code includes a lot of SQL instructions (or generates them on the fly, say by adding filters to SELECT instructions), your syntax might not be SQL Server compatible: Access wildcards, date literals, and functions will not work on SQL Server.
In addition to this, and as said by @mjv, the other drawback of a one-time switch to MS SQL is that you will inherit many of the problems of the original database: wrong or inappropriate field names, inappropriate primary/foreign key policies, hidden one-to-many relations that you'd like to implement in the new database model, etc.
I'll propose here some principles and rules for implementing a 'soft transition' solution, which clearly fits your case best. Just to say it's not going to be easy, but it's definitely very interesting, particularly when dealing with 300 tables! Lucky you!
I assume here that you have the ability to update the client code, and that you'd prefer to keep the same client interface at all times. It is of course possible to have two different interfaces at transition time, one for each database, but this would be very confusing for the users and a permanent source of frustration for them.
In my view, the best solution strongly depends on:
* The original connection technology, and the way data is managed in your client's code: Access linked tables, ODBC, ADODB, recordsets, local tables, form recordsources, batch updating, etc.
* The possibilities to split your tables and your app into 'mostly independent' modules.
And you will not be spared the following mandatory activities:
* Set up a transfer procedure from the Access database to SQL Server. You can use existing tools (the Access Upsizing Wizard is very poor, so do not hesitate to buy a real one, like SSW or EMS SQL Manager, both very powerful) or build your own with Visual Basic. If your plan is to make changes to the data definition, you'll definitely have to write some code. Keep in mind that you will run this code many, many times, so make sure it includes all the time-saving instructions that will allow you to restart the process from scratch as many times as you want. You will have to choose between two basic strategies when importing data (see the MERGE sketch after this list):
a - DELETE the existing record, then INSERT the imported record
b - UPDATE the existing record from the imported record
If you plan to switch to new primary/foreign key types, you'll have to keep track of the old identifiers in your new database model during the transition period. Do not hesitate to switch to GUID primary keys at this stage, especially if the plan is to replicate data across multiple sites one of these days.
This transfer procedure will be divided into modules corresponding to the 'logical' modules defined previously, and you should be able to run any of these modules independently (keeping in mind, of course, that they'll probably have to run in a specific order, where the 'Customers' module has to run before the 'Invoicing' module).
* Implement in your client's code the possibility to connect to both the original MS Access database and the new SQL Server database. Ideally, you should be able to manage both connections from within your code for displaying and validating data.
This will be implemented module by module, where each module gets a 'trial period', i.e. the possibility to choose at testing time between the Access connection and the SQL connection. Once testing is complete, the module can be run in exclusive SQL Server mode.
During the transfer period, which can last a few months, you will have to manage programmatically the database constraints that span 'SQL Server' modules and 'Access' modules. Going back to our Customers/Invoicing example: the Customers module will be switched to MS SQL first. Before the Invoicing module can be switched, you'll have to enforce programmatically the one-to-many relation between Customers and Invoices, where each of the tables lives in a different database. Such a constraint can be implemented on the Invoice form by populating the Customers combobox from the Customers recordset on the SQL server.
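For import strategy (b) above, a minimal sketch of a rerunnable upsert using T-SQL MERGE (SQL Server 2008 and later; dbo.Customers and dbo.Customers_Staging are illustrative names, the staging table holding the rows just pulled from Access):

    -- Update records that already exist on the server, insert the ones that
    -- don't. Safe to rerun every time the transfer procedure runs.
    MERGE dbo.Customers AS target
    USING dbo.Customers_Staging AS source
       ON target.CustomerID = source.CustomerID
    WHEN MATCHED THEN
        UPDATE SET target.Name      = source.Name,
                   target.CountryID = source.CountryID
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, Name, CountryID)
        VALUES (source.CustomerID, source.Name, source.CountryID);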
My proposal is to build your modules following your database model, always beginning with the 'one' side of your 'one-to-many' relations: basic lists like 'Units', 'Currencies', and 'Countries' should be switched first. You'll gain first hands-on experience in writing data-transfer code and managing a second connection in your client interface. You'll then be able to 'go up' in your database model, switching the 'Products' and 'Customers' tables (where units, countries and currencies are foreign keys) to the new server.
Good luck!
I would second the suggestion to upsize the back end to SQL Server as step 1.
I would never go to the suggested Step 2, though (i.e., replacing the Access front end with something else). I would instead suggest investing the effort in fixing the flaws of the schema, and adjusting the Access app to work with the new schema.
Obviously, it is never the case that everything just works hunky-dory when you upsize: some things that were previously quite fast will be dogs, and some things that were previously quite slow will be fast. And I've found that the problems are very often not where you anticipate they will be. You can only figure out what needs to be fixed by testing.
Basically, anything that works poorly gets re-architected, or moved entirely server-side.
Leverage the investment in the existing Access app rather than tossing all that out and starting from scratch. Access is a fine front end for a SQL Server back end as long as you don't assume it's going to work just the same way as it would with a Jet/ACE back end.
...thinking out loud... I think this may work.
It appears that the complexity of the application resides in the various VBA modules rather than in the database tables/schema themselves. A possible migration path could therefore be to first migrate the data storage to SQL Server, exactly as-is, as follows:
prevent any change to the data for a few hours
duplicate all tables to the SQL server; be sure to create the same indexes as well.
create linked tables via an ODBC source pointing to the newly created tables on SQL Server
these linked tables should have the very same names as the original tables (which therefore may need to be renamed, say with a leading underscore, so they can still be referenced).
Now the application can be restarted and should be using the SQL tables rather than the Access tables. All logic should work as previously (right...); some slowness is to be expected, depending on the distance between the two machines.
All the above could be tested in about a day's work or so, the most tedious part being the creation of the tables on SQL Server (much of that can be automated, I'm sure). The next most tedious task is to verify that the application effectively works as previously, but with its storage on SQL.
EDIT: As suggested by a comment, I should stress that there is a fair possibility the application would not readily work so smoothly against a SQL Server back-end, and could require weeks of hard work in testing and fixing. However, unless some of these difficulties can be anticipated because of insight into the application not expressed in the question, I propose that attempting the "as-is" migration to SQL Server be considered; after all, it may just work with minimal effort, and if it doesn't, we'd know this very quickly. This is therefore a high-return, low-risk proposal.
The main advantage sought with this approach is that there will be a single data store during the (as the OP expects) longer period in which the old Access application will co-exist with the new application.
The drawback of this approach is that, at least at first, the schema of the original database is reproduced verbatim, i.e. including some of its known quirks and legacy-inherited idiosyncrasies. These schema issues (and the underlying application logic) can be corrected over time, but this is of course less easy than if the new application started ab initio with its own separate storage and a distinct schema.
After the storage is moved to SQL Server, the most used and/or most independent modules of the Access application can be re-written in the new application, and as significant portions of the original application are ported, effective usage, by select beta testers or by actual users, can start to be switched to the new application.
Possibly, some kind of screen-scraping-based logic or some other system could be used to produce a hybrid application that would provide the end users with one comprehensive application, which sometimes works from the new logic and sometimes from the original MS Access program.

SQL Server 2005 multiple database deployment/upgrading software suggestions

We've got a product which utilizes multiple SQL Server 2005 databases with triggers. We're looking for a sustainable solution for deploying and upgrading the database schemas on customer servers.
Currently, we're using Red Gate's SQL Packager, which appears to be the wrong tool for this particular job. Not only does SQL Packager appear to be geared toward individual databases, but the particular (old) version we own has some issues with SQL Server 2005. (Our version of SQL Packager worked fine with SQL Server 2000, even though we had to do a lot of workarounds to make it handle multiple databases with triggers.)
Can someone suggest a product which can create an EXE or a .NET project to do the following things?
* Create a main database with some default data.
* Create an audit trail database.
* Put triggers on the main database so audit data will automatically be inserted into the audit trail database.
* Create a secondary database that has nothing to do with the main database and audit trail database.
And then, when a customer needs to update their database schema, the product can look at the changes between the original set of databases and the updated set of databases on our server. Then the product can create an EXE or .NET project which can, on the customer's server...
* Temporarily drop triggers on the main database so alterations can be made.
* Alter database schemas, triggers, stored procedures, etc. on any of the original databases, while leaving the customer's data alone.
* Put the triggers back on the main database.
Basically, we're looking for a product similar to SQL Packager, but one which will handle multiple databases easily. If no such product exists, we'll have to make our own.
Thanks in advance for your suggestions!
I was looking for this product myself, knowing that the Red Gate solution worked fine for a single DB; unfortunately I have been unable to find such a tool :(
In the end, I had to roll my own solution to do something "similar". It was a pain in the… but it worked.
My scenario was way simpler than yours, as we didn't have triggers and T-SQL.
Later, I decided to take a different approach:
Every DB change had a SCRIPT. Numbered. 001_Create_Table_xXX.SQL, 002_AlterTable_whatever.SQL, etc.
No matter how small the change is, there's got to be a script. The new version of the updater does this:
Makes a backup of the customer's DB (just in case)
Starts executing the scripts in alphabetical order (001, 002, ...)
If a script fails, it drops the DB, logs the script error and script number, etc., and restores the customer's DB from the backup
If it finishes, it makes another backup of the customer's DB (after the "migration") and updates the table where we store the DB version; the app checks this table to make sure the DB and the app are in sync (a sketch of that table follows this list)
Shows a nice success msg.
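A minimal sketch of that version bookkeeping, with illustrative names (the real updater also wraps this in the backup/restore logic described above):

    -- Schema-version table the app and the updater both check.
    IF OBJECT_ID('dbo.SchemaVersion') IS NULL
        CREATE TABLE dbo.SchemaVersion (
            ScriptNumber int           NOT NULL PRIMARY KEY,  -- 001, 002, ...
            ScriptName   nvarchar(260) NOT NULL,
            AppliedAt    datetime      NOT NULL DEFAULT GETDATE()
        );
    GO
    -- Each numbered script ends by recording itself, so the updater can skip
    -- scripts that were already applied.
    INSERT INTO dbo.SchemaVersion (ScriptNumber, ScriptName)
    VALUES (2, N'002_AlterTable_whatever.SQL');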
This turned out to be a little more "manual", but it has been working with little effort for three years now.
The secret lies in keeping a few testing DBs to test the "upgrade" before deploying. But apart from a few isolated DBs where some scripts failed because of data inconsistency, this worked fine.
Since your scenario is a bit more complex, I don't know if this kind of approach can be ok with you.
As of this writing (June 2009) there's still no product on the market that'll do all this for multiple databases. I work for Quest Software, makers of Change Director for SQL Server, another database change automation system. Ours doesn't handle multiple databases like you're after, and I've seen the others out there. No dice.
I wouldn't hold out hope for it either, given the directions I've seen in SQL Server management. Things are going more toward packaged applications being contained in a single database, and most of the code is focusing on that.