Product version in SQL Server

Simple situation. I've created an application which uses SQL Server as database. I could add a table to this database which contains the version number of my application, so my application can check if it's talking to the correct version of the database. But since there are no other settings that I store inside a database, this would mean that I would add a single table with a single field, which contains only one record.
What a waste of a good resource...
Is there another way that I can tell the SQL Server database about the product version that it's linked to?
I'm not interested in the version of SQL Server itself but of the database that it's using.
(Btw, this applies to both SQL Server 2000 and 2005.)

If you're using SQL 2005 and up, you can store version info as an Extended Property of the database itself and query the sys.extended_properties view to get the info, e.g.:
EXEC sys.sp_addextendedproperty @name=N'CurrentDBVersion', @value=N'1.4.2'
SELECT Value FROM sys.extended_properties WHERE name = 'CurrentDBVersion' AND class_desc = 'DATABASE'
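When a later release changes the version, the matching sys.sp_updateextendedproperty call replaces the stored value in place (the version string here is just an example):
EXEC sys.sp_updateextendedproperty @name=N'CurrentDBVersion', @value=N'1.4.3'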
If SQL 2000, I think your only option is your own table with one row. The overhead is almost non-existent.

I'd go with the massive overhead of a varchar(5) field with a tinyint PK. It makes the most sense if you're talking about a product that already uses the SQL Server database.
The overhead you're worried about is such a small part of the system that it becomes negligible.

I would put the connection settings in the application or a config file that the application reads. Have the app check the version number in the connection settings.

Even if there was such a feature in SQL Server, I wouldn't use it. Why?
* Adding a new table to store the information is negligible to both the size and speed of the application and database.
* A new table could store other configuration data related to the application, and you've already got a mechanism in place for it (and if your application is that large, you will have other configuration data).
* Coupling the application to a specific database engine (especially this way) is very rarely a good thing.
* Not standard practice, and not obvious to someone new looking at the system for the first time.

I highly recommend writing the data base version into the database.
In an application we maintained over a decade or so, we updated the database schema with every release.
When the user started the application after installing an update, it could detect whether the database was too old and convert it to the newer schema. We actually did incremental updates: in order to get from 7 to 10 we did 7 -> 8, 8 -> 9, 9 -> 10.
Also imagine the scenario when somebody restores the database to an older state from a backup.
Don't agonize over adding a single table; just do it (and think about the use cases).
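A minimal sketch of that version check, assuming a one-row SchemaVersion table; the table and script layout are illustrative, not prescriptive:
-- One-row table holding the current schema version.
CREATE TABLE SchemaVersion (Version int NOT NULL);
INSERT INTO SchemaVersion (Version) VALUES (7);
-- At startup the application reads the version and applies one upgrade
-- script per step (7 -> 8, 8 -> 9, 9 -> 10), bumping the number each time:
UPDATE SchemaVersion SET Version = 8;  -- last statement of the 7 -> 8 upgrade script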

Related

Storing database version in the database itself

I'm writing a program that uses an h2 database to store data.
The database will be evolving as we add features to our software, but we still want users to be able to use an older version of the database with a newer version of the program. This way the program could automatically upgrade the database to the newer version (maybe asking first for confirmation from user).
To write this "database upgrader" we need to store the database version inside the database itself, so that it is possible to just move the database file (we're using file mode of the h2 database engine).
We tried doing something like this:
CREATE TABLE configuration (databaseVersion INT NOT NULL);
but this would mean having a table where only a single row is ever used without explicit checking of the row count.
Is there any better way to do this?
Thanks in advance for your help.
I think this is a good solution, if you just need to persist the database version.
Sometimes you need to persist more than one such 'global' settings, for example if your application consists of multiple modules, and each module has its own version. Or other things, like the location of the last backup. What I usually use is a settings table with a key/value pair, where both the key (the primary key of that table) and the value are of type varchar.
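A sketch of such a settings table; the column names and sizes here are just an example:
CREATE TABLE settings (
    setting_name varchar(100) NOT NULL PRIMARY KEY,
    setting_value varchar(255) NOT NULL
);
INSERT INTO settings (setting_name, setting_value) VALUES ('databaseVersion', '3');
INSERT INTO settings (setting_name, setting_value) VALUES ('lastBackupLocation', 'C:\backups\latest');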

Detecting modified pages in SQL Server tables

Is there a way to check if an SQL Server table (or even better, a page in that table) was modified since a certain moment? E.g. SQL differential backup uses dirty flags to know which parts of data were changed since last backup, and resets these flags after a successful backup.
Is there any way to get this functionality from MS SQL Server? I.e. if I want to cache certain aggregate values on a database table which sometimes changes, how would I know when to invalidate the cache? Or is the only way to do it to implement it programmatically and keep track of this while writing to the database?
I am using C# .NET 4.5 to access SQL Server 2008 R2 through NHibernate.
I suggest you think about your problem in terms of application-layer data caching instead of SQL Server low-level data pages. You can use SqlDependency or QueryNotification in your C# code to get notified of changes to the underlying data. Note that this requires Service Broker to be enabled in the SQL Server database, and there are some restrictions on the queries that qualify for notification.
See http://www.codeproject.com/Articles/529016/NHibernate-Second-Level-Caching-Implementation for an example of using this with NHibernate.
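For reference, enabling Service Broker on the database is a single statement (MyDatabase is a placeholder for the actual database name):
ALTER DATABASE MyDatabase SET ENABLE_BROKER;
-- If the statement blocks on existing connections, append WITH ROLLBACK IMMEDIATE to force it through.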

Keeping database structure compatible between MS-Access and SQL Server

I'm working on a legacy project, written for the most part in Delphi 5 before it was upgraded to Delphi 2007. A lot has changed after this upgrade, except the database that's underneath. It still uses MS-Access for data storage.
Now we want to support SQL Server as an alternate database. Still just for single-user situations, although multi-user support will be a feature for the future. And although there won't be many migration problems (see below) when it needs to use a different database, keeping two database structures synchronized is a bit of a problem.
If I created a SQL script to generate the SQL Server database, then I would need a second script to keep the Access database up to date too. They don't speak the same dialect. (At least, not for our purposes.) So I need a way to maintain the database structure in a simple way, making sure it can generate both a valid SQL Server database and a valid Access database. I could write my own tool that stores the database structure in an XML file, which, combined with some smart code and ADOX, would generate both database types.
But isn't there already a good tool that can do this?
Note: the application also uses ADO and all queries are just simple select statements. Although it has 50+ tables, there's one root "Document" table and the user selects one of the "documents" in this table. It then collects all records from all tables that are related to this document record and stores them in an in-memory structure. When the user saves the data, it just writes the document record and all changed data back to the database again. Basically, this read/write mechanism of documents is the only database interaction in the whole application. So using a different database is not a big problem.
We will drop the MS-Access database in the future but for now we have 4000 customers using this application. We first need to make sure the whole thing works with SQL Server and we need to continue to maintain the current code. As a result, we will have to support both databases for at least a year.
Take a look at DB Explorer; there is a trial download too.
OR
* Use the migration wizard from MS Access to SQL Server.
* After development in Access (schema changes), use the wizard again.
* Use a tool to compare SQL Server schemata.

Queries for migrating data in live database?

I am writing code to migrate data from our live Access database to a new Sql Server database which has a different schema with a reorganized structure. This Sql Server database will be used with a new version of our application in development.
I've been writing migration code in C# that calls SQL Server and Access and transforms the data as required. The first table I migrated has entries related to new entries in another table that I had not migrated recently, and that caused an error because the record in the corresponding SQL Server table could not be found.
So my SQL Server production tables have data only up to 1/14/09, and I'm continuing to migrate more tables from Access. I want to write an update method that can figure out what new data is in Access that hasn't been reflected in SQL Server yet.
My current idea is to write a query on the SQL side which does SELECT Max(RunDate) FROM ProductionRuns, to give me the latest date in that field in the table. On the Access side, I would write a query that does SELECT * FROM ProductionRuns WHERE RunDate > ?, where the parameter is that max date found in SQL Server, and perform my translation step in code, and then insert the new data in Sql Server.
What I'm wondering is, do I have the syntax right for getting the latest date in that Sql Server table? And is there a better way to do this kind of migration of a live database?
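For reference, the two queries described above could look like this (the second one as a parameterized query run against Access):
-- SQL Server side: latest date already migrated.
SELECT MAX(RunDate) FROM ProductionRuns;
-- Access side: rows added since then; the parameter is the max date found above.
SELECT * FROM ProductionRuns WHERE RunDate > ?;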
Edit: What I've done is make a copy of the current live database. Which I can then migrate without worrying about changes, then use that to test during development, and then I can migrate the latest data whenever the new database and application go live.
I personally would divide the process into two steps:
* Create an exact copy of the Access DB in SQL Server and copy all the data into it.
* Copy the data from this temporary SQL Server DB to your destination database.
That way you can write a set of SQL statements to accomplish the second-step task.
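A sketch of what that second step could look like; the staging database, destination table, and columns are invented for illustration:
-- Transform and copy from the temporary copy of the Access data into the reorganized destination schema.
INSERT INTO dbo.ProductionRuns (RunDate, LineId, Quantity)
SELECT s.RunDate, l.LineId, s.Quantity
FROM AccessStaging.dbo.ProductionRuns AS s
JOIN dbo.Lines AS l ON l.LegacyCode = s.LineCode;  -- example of reshaping to the new schema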
Alternatively use SSIS
Generally when you convert data to a new database that will take its place in production, you shut out all users of the database for a period of time, run the migration, and turn on the new database. This ensures no changes to the data are made while doing the conversion. Of course I never would have done this using C# either. Data migration is a database task and should have been done in SSIS (or DTS if you have an older version of SQL Server).
If the database you are converting to is just in development, I would create a backup of the Access database and load the data from there to test the data-loading process and to get the data in so you can do the application development. Then when it is time to do the real load, you just close down the real database to users and use it to load from. If you are trying to keep both in sync while you develop, well, I wouldn't do that, but if you must, make a nightly backup of the file and load it first thing in the morning using your process.
You may want to look at investing in a tool like SQL Data Compare.
I believe it has support for access databases too, and you can download a trial.
If you are happy with your C# code but it fails because of the constraints in your destination database, you can temporarily disable them and then re-enable them after you copy the whole lot.
I am assuming that your destination database is a brand new DB with no data, and not used by anyone when the transfer happens.
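A sketch of that temporary disable/re-enable, using standard ALTER TABLE syntax; the table name is a placeholder, and you would repeat this for each affected table:
-- Before the copy: stop enforcing foreign key and check constraints.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT ALL;
-- ... run the C# copy ...
-- After the copy: re-enable the constraints and re-validate the rows that were loaded.
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL;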
It sounds like you have two problems:
You're migrating data from one database to another.
You're changing your schema.
Doing either of these things is tricky if you are trying to migrate the data while people are using the data.
The simplest approach is to migrate the data based on a static copy of the data, and also to queue updates to that data from the moment you captured the static copy. I don't know how easy this is in Access, but in SQLServer or Oracle you can use the redo logs for this or a manual solution using triggers. The poor-man's way of doing this is to make triggers for all the relevant tables that log the primary key of the records that have changed. Then after the old database is shut off you can iterate over those keys and get those records from the old database and put them into the new database. Just copy the whole record; if the record was deleted then delete it from the new database.
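On the SQL Server side, that poor-man's change log could look roughly like this; the ChangeLog table, trigger, and Id column are invented for the sketch:
CREATE TABLE ChangeLog (
    TableName sysname NOT NULL,
    KeyValue int NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_ProductionRuns_ChangeLog ON dbo.ProductionRuns
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record the primary keys of inserted or updated rows.
    INSERT INTO ChangeLog (TableName, KeyValue)
    SELECT 'ProductionRuns', i.Id FROM inserted AS i;
    -- Record the primary keys of deleted rows (updates were already logged above).
    INSERT INTO ChangeLog (TableName, KeyValue)
    SELECT 'ProductionRuns', d.Id FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted AS i WHERE i.Id = d.Id);
END;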
Your problem is compounded by the fact that you can't simply copy the data, you have to transform it. This means you probably have to shut down both databases and re-migrate the records based on the change list. It will take a lot of planning to ensure you get things right and I'd recommend writing a testing script that can validate that the resulting data is correct.
Also I'd ensure that the code for the migration runs inside one of the databases if possible. Otherwise you are copying the data twice and this will significantly harm the performance.

SQL Server 2005 multiple database deployment/upgrading software suggestions

We've got a product which utilizes multiple SQL Server 2005 databases with triggers. We're looking for a sustainable solution for deploying and upgrading the database schemas on customer servers.
Currently, we're using Red Gate's SQL Packager, which appears to be the wrong tool for this particular job. Not only does SQL Packager appear to be geared toward individual databases, but the particular (old) version we own has some issues with SQL Server 2005. (Our version of SQL Packager worked fine with SQL Server 2000, even though we had to do a lot of workarounds to make it handle multiple databases with triggers.)
Can someone suggest a product which can create an EXE or a .NET project to do the following things?
* Create a main database with some default data.
* Create an audit trail database.
* Put triggers on the main database so audit data will automatically be inserted into the audit trail database (a rough sketch of such a trigger follows this list).
* Create a secondary database that has nothing to do with the main database and audit trail database.
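For illustration, a trigger of that kind might look roughly like this; the database, table, and column names are all made up:
CREATE TRIGGER trg_Customers_Audit ON dbo.Customers
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy the new or changed rows into a table in the separate audit trail database.
    INSERT INTO AuditTrailDB.dbo.CustomersAudit (CustomerId, Name, AuditedAt)
    SELECT i.CustomerId, i.Name, GETDATE() FROM inserted AS i;
END;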
And then, when a customer needs to update their database schema, the product can look at the changes between the original set of databases and the updated set of databases on our server. Then the product can create an EXE or .NET project which can, on the customer's server...
* Temporarily drop triggers on the main database so alterations can be made.
* Alter database schemas, triggers, stored procedures, etc. on any of the original databases, while leaving the customer's data alone.
* Put the triggers back on the main database.
Basically, we're looking for a product similar to SQL Packager, but one which will handle multiple databases easily. If no such product exists, we'll have to make our own.
Thanks in advance for your suggestions!
I was looking for this product myself, knowing that the Red Gate solution worked fine for "one" DB; unfortunately I have been unable to find such a tool :(
In the end, I had to roll my own solution to do something "similar". It was a pain in the… but it worked.
My scenario was way simpler than yours, as we didn't have triggers and T-SQL.
Later, I decided to take a different approach:
Every DB change had a SCRIPT. Numbered. 001_Create_Table_xXX.SQL, 002_AlterTable_whatever.SQL, etc.
No matter how small the change is, there's got to be a script. The new version of the updater does this:
* Makes a backup of the customer's DB (just in case).
* Starts executing the scripts in alphabetical order (001, 002, ...).
* If a script fails, it drops the DB, logs the script error, script number, etc., and restores the customer's DB.
* If it finishes, it makes another backup of the customer's DB (after the "migration") and updates a table where we store the DB version (a sketch of that table follows this list); this table is checked by the app to make sure that the DB and the app are in sync.
* Shows a nice success message.
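A sketch of what such a version/history table could look like; the structure and names are invented for illustration:
CREATE TABLE SchemaChangeLog (
    ScriptNumber int NOT NULL PRIMARY KEY,
    ScriptName varchar(255) NOT NULL,
    AppliedAt datetime NOT NULL DEFAULT GETDATE()
);
-- After 001_Create_Table_xXX.SQL runs successfully, the updater records it:
INSERT INTO SchemaChangeLog (ScriptNumber, ScriptName) VALUES (1, '001_Create_Table_xXX.SQL');
-- At startup the app compares MAX(ScriptNumber) against the version it expects.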
This turned out to be a little bit more "manual", but it has been working with very little effort for three years now.
The secret lies in keeping a few testing DBs to test the "upgrade" before deploying. But apart from a few isolated DBs where some scripts failed because of data inconsistency, this worked fine.
Since your scenario is a bit more complex, I don't know if this kind of approach can be ok with you.
As of this writing (June 2009) there's still no product on the market that'll do all this for multiple databases. I work for Quest Software, makers of Change Director for SQL Server, another database change automation system. Ours doesn't handle multiple databases like you're after, and I've seen the others out there. No dice.
I wouldn't hold out hope for it either, given the directions I've seen in SQL Server management. Things are going more toward packaged applications being contained in a single database, and most of the code is focusing on that.