Sync stored procedures with software updates - sql

I develop a software system that runs on multiple computers that connect to a centralized database.
At this point, all SQL queries are inline in the application's source code. I would like to start migrating these into stored procedures.
If I need to make a change to a stored procedure that also requires a software change, how can I synchronize the updates? For example: I change sp_SelectRecordByID and publish an update for the software. Immediately, every running copy of the old software receives an error when it calls sp_SelectRecordByID. Only once they crash and pick up the update is all well again.
How do I prevent this scenario?
I've come up with a few ideas:
Make a new stored procedure and let the old one die off slowly
Add version checking to the stored procedure. This is highly undesirable.
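(To make the second idea concrete, it would mean something like the sketch below; the @ClientVersion parameter and the dbo.Records table are hypothetical.)

CREATE PROCEDURE dbo.sp_SelectRecordByID
    @ID int,
    @ClientVersion int = 1
AS
BEGIN
    SET NOCOUNT ON;
    -- Reject callers that announce a version older than the procedure expects.
    IF @ClientVersion < 2
    BEGIN
        RAISERROR(N'Client version too old for sp_SelectRecordByID; please update.', 16, 1);
        RETURN;
    END;
    SELECT * FROM dbo.Records WHERE RecordID = @ID;
END;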
Are there more effective methods or am I stuck with these options?

From what I understood, this is more of a deployment issue. The way I see it, there are a couple of options.
If you can deploy SQL Server and application changes simultaneously (or at least near to that) then you can just publish both at the same time but I guess it depends on the system.
Can you do this over the weekend when there is no risk of applications crashing?
Do you deploy the new application version to all users at once? If so, you can just create the new SP first, deploy the new application version, and then delete the old SP.
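As a sketch of that last approach (the _v2 suffix and the dbo.Records table are hypothetical):

CREATE PROCEDURE dbo.sp_SelectRecordByID_v2
    @ID nvarchar(10)  -- the changed signature lives only in the new proc
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * FROM dbo.Records WHERE RecordID = @ID;
END;
GO
-- Old clients keep calling the untouched dbo.sp_SelectRecordByID.
-- Once every machine runs the new software version:
-- DROP PROCEDURE dbo.sp_SelectRecordByID;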
Anyway, hope this helps. If not, please provide more details on the number of servers, number of client applications, and so on.

Are you using Visual Studio? You can keep your stored procedures in a database project and get version control that way.
More info in here: http://msdn.microsoft.com/en-us/library/xee70aty.aspx

Related

How to set up Redgate SQL Source Control with Continuous Integration

My question is this:
What is the best setup for managing SQL changes in a development team?
Our team consists of 4 developers, each with their own copy of a database.
When committing SQL/Application changes to our TFS server, we wish to ensure that any build errors do not get propagated to other developers. So, we are going to implement continuous integration to assist with this.
The idea is that
1. SQL and application code changes are committed to TFS.
2. A central database gets the SQL updates, and we build the application.
3. Unit tests are executed on the build server.
4. If any of these steps fail, the checkin is rejected and the database gets rolled back to the state it was in before the commit.
What is the best way to set up Redgate SQL Source Control to implement this?
If you want to use SQL Source Control, based on your requirements this is a possible setup to consider.
For each developer machine:
Install SQL Source Control
Link each developer database to your TFS repository, using the dedicated database development model
Install SQL Prompt to write your SQL more easily
Configure SQL Test for writing unit tests for SQL Server
On the build server:
Install Redgate DLM Automation (there are Add-Ons to simplify setup)
Configure a Validate build task to validate the schema by checking the database can be built successfully from scratch
Configure a Test build task to run SQL tests
Configure a Sync build task to update your central database with the SQL updates
Run your application unit tests
If the tests fail, you can run a custom script that reverts the last check-in, use the Sync build task again to roll back the database changes, and trigger a new build. You can use the Redgate DLM Automation PowerShell cmdlets to do this.
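An alternative, tool-agnostic way to get the rollback behaviour is a database snapshot taken just before the Sync step; if anything fails, a single RESTORE reverts the schema changes. This is a sketch only: the database, logical file, and path names are hypothetical, and snapshot support depends on your SQL Server edition.

-- Taken before the Sync build task applies the SQL updates.
CREATE DATABASE CentralDb_PreSync ON
    (NAME = CentralDb_Data, FILENAME = N'D:\Snapshots\CentralDb_PreSync.ss')
AS SNAPSHOT OF CentralDb;

-- If the build or the unit tests fail, revert in one statement
-- (all other snapshots of CentralDb must be dropped first):
RESTORE DATABASE CentralDb FROM DATABASE_SNAPSHOT = 'CentralDb_PreSync';
DROP DATABASE CentralDb_PreSync;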
The last step could be tricky. I honestly prefer, and recommend, using branches instead of relying on a single central database. That way, each developer can work fully independently, and you merge new changes into the master branch only once the work has been validated on each individual branch.
If you want to go further and also implement deployment you can use Redgate DLM Automation Deployment to create a release database package and deploy your database changes to production directly from your build server or using a release tool like Octopus Deploy.
Finally, I would also advise you to have a look at Redgate ReadyRoll, especially if you are considering a migrations-first approach to database changes.
As you can see, there are different ways of using Redgate tools to manage database changes, and there is no single best way of setting them up. It always depends on the specific requirements and problems you need to solve.
Hope this helps.
You can use a Database Project. It can contain the entire database schema plus stored procedures. During a build, it will verify that the stored procedures match the schema.
Then enable the Gated Check-in option in the build definition; it accepts check-ins only if the submitted changes merge and build successfully.
As for the data written to the database, it depends on your test method: you can have the test delete its data on failure, or better yet, you shouldn't be writing to a real database at all. Instead, mock the database classes. That way you never actually connect to or modify the database, and therefore no cleanup is needed.
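On the SQL side, a similar isolation idea is available through the open-source tSQLt framework (the engine behind Redgate SQL Test, mentioned above). A minimal sketch, assuming tSQLt is installed; the table, column, and test names are hypothetical:

EXEC tSQLt.NewTestClass 'testRecords';
GO
CREATE PROCEDURE testRecords.[test SelectRecordByID returns the matching row]
AS
BEGIN
    -- FakeTable swaps dbo.Records for an empty, constraint-free copy,
    -- so the test never touches real data and needs no cleanup.
    EXEC tSQLt.FakeTable 'dbo.Records';
    INSERT INTO dbo.Records (RecordID, Name) VALUES (N'1', N'expected');

    -- (in a real test you would call the procedure under test here)
    DECLARE @actual nvarchar(50);
    SELECT @actual = Name FROM dbo.Records WHERE RecordID = N'1';

    EXEC tSQLt.AssertEquals N'expected', @actual;
END;
GO
EXEC tSQLt.Run 'testRecords';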
For more information you can reference below articles:
How To Unit Test – Interacting with the Database
Database cleanup after Junit tests

How to migrate shared database from Access to SQL Express

I have been using MS Access databases via DAO for many years, but feel that I ought to embrace newer techniques.
My main application runs on end user PCs (no server) and uses a shared database that is created and updated on-the-fly. When the application is first run it detects the absence of a database and creates a new empty one.
Any local user running the application is allowed to add or update records in this shared database. We have a couple of other shared databases, that contain templates, regional information, etc., but these are not updated directly by the application.
Updates of the application are released from time to time and each new update checks the main database version and if necessary executes code to bring the database up to the latest specification. This may involve the creation or deletion of tables and/or columns. New copies of the template databases are also included as part of the update.
Our users are not required to be computer-literate and should not need to run any sort of database management software beyond those facilities provided by the application.
It all works very nicely with DAO/Access, but I'm struggling to find out how to do it with SQL Express. The databases seem to be squirrelled away in user-specific locations, and database creation and updates seem at best awkward to do by program code alone.
I came across some references to "Xcopy deployment" that look promising, but they also mention "user instances", which sound suspiciously like something that's not shared. I'd appreciate advice from anyone who has done it.
It sounds to me like you haven't fully absorbed the fundamental difference between the Access Database Engine (ACE/Jet) and SQL Server:
When your users launch your Access application it connects to the Access Database Engine that has been installed on their machine. Their copy of ACE/Jet opens the shared database file (.accdb or .mdb) in the network folder. The various instances of ACE/Jet work together to manage concurrent updates, record locking, and so on. This is sometimes called a "peer-to-peer" or "shared-file" database architecture.
With an application that uses a SQL Server back-end, the copies of your application on each user's machine connect over the network to the same instance of SQL Server (that's why it's called "SQL Server"), and that instance of SQL Server manipulates the database (which is stored on its local hard drive) on behalf of all of the clients. This is called "client-server" or "server-based" database architecture.
Note that for a multi-user database you do not install SQL Server on the client machines, you only install the SQL Server Client components (OleDb and ODBC drivers). SQL Server itself is only installed in one place: the machine that will act as the SQL... Server.
re: "database creation and update seems at best awkward to do by program code alone" -- Not at all, it's just "different". Once again, you pass all of your commands to the SQL Server and it takes care of creating the actual database files. For example, once you've connected to the SQL Server if you tell it to
CREATE DATABASE NewDatabase
it will create the database files (NewDatabase.mdf and NewDatabase_log.LDF) in whatever local folder it uses to store such things, which is usually something like
C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA
on the server machine.
Note that your application never accesses those files directly. In fact it almost certainly cannot do so, and indeed your application does not even care where those files reside or what they are called. Your app simply talks to the SQL Server (e.g. ServerName\SQLEXPRESS) and the server takes care of the details.
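For example, the "create the database on first run" behaviour from the question just becomes asking the server whether the database exists before creating it. A minimal sketch, run against the master database (NewDatabase reuses the name above):

-- DB_ID returns NULL when no database with that name exists on the server.
IF DB_ID(N'NewDatabase') IS NULL
    CREATE DATABASE NewDatabase;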
Just to update on my progress. Inspired by suggestions here and this article on code project:
http://www.codeproject.com/Articles/63147/Handling-database-connections-more-easily,
I've created a wrapper for the ADO.NET methods that looks quite similar to the DAO stuff that I am familiar with.
I have a class that I can use just like a DAO Database. It wraps ADO methods like ExecuteReader, ExecuteNonQuery, etc. with overloads that can accept a SQL parameter. This allows me to directly replace DAO Recordsets with readers, OpenRecordset with ExecuteReader and Execute with ExecuteNonQuery.
Each method obtains and releases the connection from its parent class instance. These in turn open or close the underlying connection as required depending on the transaction state, if any. So a connection is held open for method calls that are part of a transaction, but closed immediately for a single call.
This has greatly simplified the migration of my program since much of the donkey work can be done by a simple "find and replace". The remaining issues are then relatively easy to find and sort out.
Thanks, once again to Gord and Maxwell for your advice.
This answer is too long to write down here, but Microsoft explains how to do it on their page about the Upsizing Wizard: http://office.microsoft.com/en-us/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx
I hope this helps!

Building SQL deployment scripts into the application?

We currently have a rather manual, fiddly, messy and error-prone way of running SQL deployment scripts when we update our clients' software installations. We're considering finding a 3rd party SQL deployment tool to automate this process.
However, I'm pushing the idea of building our own SQL deployment tool into the application itself. It would be simple - on application startup, it would:
1) Check the existing database schema version (e.g. "35")
2) Check against the "up to date" database schema version (e.g. "38")
3) Retrieve the relevant SQL deployment scripts from resource files (e.g. "36", "37", "38")
4) Lock the database and run each required SQL deployment script
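A minimal sketch of what steps 1) to 4) could look like on the SQL side; the dbo.SchemaVersion table and the version numbers are hypothetical:

-- Step 1: the application reads the current version (creating the
-- bookkeeping table on the very first run).
IF OBJECT_ID(N'dbo.SchemaVersion') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaVersion (Version int NOT NULL);
    INSERT INTO dbo.SchemaVersion (Version) VALUES (0);
END;

SELECT Version FROM dbo.SchemaVersion;

-- Step 4: each embedded script runs in a transaction and bumps the
-- version number as its last action, e.g. for script "36":
BEGIN TRANSACTION;
    -- ...schema changes for version 36 go here...
    UPDATE dbo.SchemaVersion SET Version = 36;
COMMIT TRANSACTION;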
Note that this would still be run by an IT technician in case any errors occurred, not by end users.
It seems unorthodox but I don't really see any problem. Your thoughts?
I don't see anything inherently wrong with this.
At a company I've worked for, they built a custom SQL-script installer that would allow them to automatically apply changes to the database, roll back the changes if necessary, and keep tabs on the version of what's been applied.
Whatever the tool ends up looking like, you'll need to set conventions (e.g. a standard folder structure for database releases) and identify the needs and processes involved in running it (i.e. just how automated you'll make it).
Don't build your own. This is far too common a problem for a bespoke solution.
You're looking for a database migration tool; my recommendation would be Liquibase. It can be run from the command line or integrated into the build process. A feature that is especially valuable to me is the generation of SQL upgrade (and downgrade) scripts, which are often demanded of us when supporting production installs.
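For a taste of what this looks like, Liquibase can read plain SQL files annotated with changeset comments; the sketch below uses a hypothetical author, table, and column:

--liquibase formatted sql

--changeset jane:38
ALTER TABLE dbo.Records ADD RegionCode nvarchar(10) NULL;
--rollback ALTER TABLE dbo.Records DROP COLUMN RegionCode;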
For a more detailed listing of alternative migration tools, see the following answer:
Migrations for Java

How to automatically export stored procedures for a release - SQL Server 2008

Our team was releasing a new version of our system yesterday and we came across some issues with stored procedures. To cut a long story short, we had to upload the old stored procedures to fix the issues.
I have now been given the task of automatically backing up the stored procedures for our database before we release a build. I have gone through a lot of sites and looked at generating scripts, making batch files, doing whole backups, scheduling tasks and so on, but none of these solutions would automatically back up only the stored procedures.
Any help in this case would be greatly appreciated; thanks in advance.
In Management Studio, right click on your database in the Object Explorer window, go to Tasks -> Generate Scripts... and follow the wizard.
You need to use the SMO libraries to create your scripts, and you can call them from command-line batch files. Read more at http://msdn.microsoft.com/en-us/library/ms162153.aspx
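If a full SMO setup is more than you need, the current text of every procedure is also exposed through the catalog views, so a pre-release job could simply dump it to a file. A minimal sketch:

SELECT p.name,
       m.definition
FROM sys.procedures AS p
JOIN sys.sql_modules AS m
    ON m.object_id = p.object_id
ORDER BY p.name;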
Before running the script generator, set the option "Continue scripting on Error"; otherwise the script will not be generated.
If the "DROP and CREATE" option is chosen, also set the "Script Object-Level Permissions" option for stored procedures.
Is your software source code checked into source control? It might be of benefit if your database is as well. This is the method software has used to manage versions and releases for years, and it's about time databases joined the party.
I suggest you look into a database project (available in the free 2015 version of SQL Server Data Tools), which gives you a way of checking your objects in and out of a repository. It's a more complete way of managing database objects and fits into the software lifecycle. You can release your database codebase in conjunction with your software codebase and manage it all in one place.

Strange SQL Server 2005 behavior

Background:
I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data-security reasons there is no remote desktop access and no remote SQL Server connection; if I have to service the database I have to be at the terminal. I do have FTP access to update ASP code.
Problem:
I was contacted yesterday about an issue with the system. When I looked into it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter, but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure had somehow changed back to taking an int instead of an nvarchar.
There is an external hard drive connected to the server that copies data periodically and can restore the server in case of failure. I would have assumed that an older version of the database had somehow been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database.
Question:
Is there any way that the structure of a SQL Server 2005 database can revert to, or be restored to, a previous version without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened.
Any ideas?
Using SQL Server's built-in backup and restore mechanism, there is no way to pick only certain objects to restore. With transaction log backups, you can restore to a point in time, which might be before a certain transaction or ALTER statement was made, but that's the closest you get. There are tools that will let you pick certain objects to restore; however, they work either by restoring the database to a copy and copying over the objects you want, or by reading the backup directly and copying out those objects. In other words, this is not something that could have happened accidentally using the built-in tools. My guess is that someone accidentally ran an old script of the stored proc(s) that reverted it.
It would be trivial to change a stored procedure without touching any data, or any other stored procedure. How, who, why, when: that's the problem.
One suggestion: run
select * from sys.procedures
and check the create_date and modify_date columns, for both your problem procedure and all other procedures in the database.
I've witnessed similar things happening with an app I have installed at one client location. Every so often the s'procs revert to an older version.
It's just one client; the app is installed at several others that have never had this issue, and they happen to be a school district as well. It happens about once every 3 months or so, and no one should be touching that machine. I'm not even sure they have anyone in-house who would know how to open Enterprise Manager.
Out of curiosity, what backup software is your client using? And, after checking the creation/modify dates on the procedures, did a server reboot occur around that time?
The reason I ask is that my client has backup software that does some really weird things on that server. For example, on reboot it has to "play back" changes, including file operations, since the last successful backup. Also, is it installed in a VM?
Through Data Transformation Services (DTS)? Or perhaps the scripts that set up the database are available someplace and were re-run.