My company currently has the following setup:
127 SQL Servers (2000 and 2005) and over 700 databases. We are in the process of server consolidation and are planning on a master server / target servers setup to enable centralized administration. As part of this project, I have been given the responsibility of creating a script-based automated backup/maintenance solution.
Thanks to Ola Hallengren's script available here I have made a lot of progress.
Here is what I plan:
I have a database on the master server which holds details of the SQL instances, databases and backup paths.
I am in the process of modifying Hallengren's script to read from this database and create jobs dynamically.
Users are allowed to define what kind of backup they want, how often and how long the backup needs to be kept.
I am also looking at having the ability to spread out jobs, so that I do not have too many jobs running at the same time.
My thoughts are to create tables that have the data needed to be passed as parameters to sp_add_job, sp_add_jobstep and sp_add_jobschedule.
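As a rough sketch of that table-driven approach (the dbo.BackupJobConfig table and its columns are names I've made up for illustration; the msdb procedures are the standard ones, and the retention logic, error handling and schedule staggering would still need to be layered on top):

    DECLARE @JobName   sysname,
            @BackupCmd nvarchar(max),
            @StartTime int,               -- HHMMSS, e.g. 230000 for 23:00:00
            @JobId     uniqueidentifier;

    DECLARE cfg CURSOR LOCAL FAST_FORWARD FOR
        SELECT JobName, BackupCommand, ActiveStartTime
        FROM   dbo.BackupJobConfig         -- hypothetical config table on the master server
        WHERE  IsEnabled = 1;

    OPEN cfg;
    FETCH NEXT FROM cfg INTO @JobName, @BackupCmd, @StartTime;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC msdb.dbo.sp_add_job
             @job_name = @JobName,
             @enabled  = 1,
             @job_id   = @JobId OUTPUT;

        EXEC msdb.dbo.sp_add_jobstep
             @job_id        = @JobId,
             @step_name     = N'Backup',
             @subsystem     = N'TSQL',
             @database_name = N'master',
             @command       = @BackupCmd;      -- e.g. a call to the Hallengren backup procedure

        EXEC msdb.dbo.sp_add_jobschedule
             @job_id            = @JobId,
             @name              = N'Daily',
             @freq_type         = 4,           -- daily
             @freq_interval     = 1,
             @active_start_time = @StartTime;  -- stagger per job to spread the load

        EXEC msdb.dbo.sp_add_jobserver
             @job_id      = @JobId,
             @server_name = N'(local)';

        FETCH NEXT FROM cfg INTO @JobName, @BackupCmd, @StartTime;
    END
    CLOSE cfg; DEALLOCATE cfg;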
Please share your experiences in the pitfalls and hurdles with this setup. All ideas welcome.
Thanks,
Raj
You might also consider the approach of creating a full backup job and a transaction log backup job on each server, which retrieve the databases from the master database and feed them into a procedure that performs the backup. You could run the jobs every 5 minutes, and the procedure would work out what time it is and what type of backup is required.
I say this because things could get messy creating massive amounts of jobs in an automated manner - but maybe you have it worked out well. Be sure to enforce uniqueness (a 'primary key') on the job name if you take your original approach.
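Something along these lines for the per-server procedure (the CentralAdmin.dbo.BackupConfig table and its columns are made up for illustration, the backup path is just an example, and it assumes the databases are in full recovery; the agent job would simply call this every 5 minutes):

    CREATE PROCEDURE dbo.usp_RunScheduledBackups
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @db sysname, @FullBackupHour int, @sql nvarchar(max);

        DECLARE c CURSOR LOCAL FAST_FORWARD FOR
            SELECT DatabaseName, FullBackupHour
            FROM   CentralAdmin.dbo.BackupConfig      -- hypothetical central table
            WHERE  ServerName = @@SERVERNAME;

        OPEN c;
        FETCH NEXT FROM c INTO @db, @FullBackupHour;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            IF DATEPART(hour, GETDATE()) = @FullBackupHour
               AND DATEPART(minute, GETDATE()) < 5     -- first run in that hour = full backup
                SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@db)
                         + N' TO DISK = N''\\backupshare\' + @db + N'_full.bak''';
            ELSE
                SET @sql = N'BACKUP LOG ' + QUOTENAME(@db)
                         + N' TO DISK = N''\\backupshare\' + @db + N'_log.trn''';

            EXEC (@sql);
            FETCH NEXT FROM c INTO @db, @FullBackupHour;
        END
        CLOSE c; DEALLOCATE c;
    END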
Spreading out jobs should be easy if you keep a record in the database and find available windows to put new jobs in. I've seen scripts for this.
You could also do this in an SSIS package that runs from your admin server. It would iterate over the databases and connect to the servers to perform the backups.
Check out the first 3 articles here: http://www.sqlmag.com/Authors/AuthorID/1089/1089.html
Moving to 2005, we were not happy with the Integration Services maintenance plans.
We wrote stored procedures to do full and tran backups as well as reindexing, aggregate management, archiving, etc.
A daily and an hourly job would step through master..sysdatabases and, in a try/catch block,
apply the necessary maintenance. It would not be hard to read from any table and check user-defined conditions.
Unfortunately there was no way to tie the output to msdb..sysjobhistory, so we logged to a central table. The upside was we had a lot more control over what was logged.
It is also simpler to read than going through the history of 700 jobs.
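A skeleton of that loop, for anyone heading the same way (dbo.MaintenanceLog is a hypothetical central table and the backup path is illustrative):

    DECLARE @db sysname, @sql nvarchar(max);

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name FROM master..sysdatabases
        WHERE  name <> 'tempdb';

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@db)
                     + N' TO DISK = N''\\backupshare\' + @db + N'_full.bak''';
            EXEC (@sql);

            INSERT dbo.MaintenanceLog (DatabaseName, Action, CompletedAt, ErrorMessage)
            VALUES (@db, 'FULL BACKUP', GETDATE(), NULL);
        END TRY
        BEGIN CATCH
            INSERT dbo.MaintenanceLog (DatabaseName, Action, CompletedAt, ErrorMessage)
            VALUES (@db, 'FULL BACKUP', GETDATE(), ERROR_MESSAGE());
        END CATCH;
        FETCH NEXT FROM dbs INTO @db;
    END
    CLOSE dbs; DEALLOCATE dbs;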
I have two databases for my customers, a LIVE database and a TEST database. Periodically my customers would like the LIVE data copied over to the TEST database so it has current data that they can mess around with.
I am looking for a simple way, maybe a command I could run from my application, to copy all the data from one database into the other.
Currently I have to remote in with their IT department or consultant and restore a backup of LIVE over TEST. I would prefer to have a button in my application that says RESTORE TEST and does that for me.
Any suggestions on how I could do this? Is it possible in SQL? Is there a tool out there in .NET that could help?
Thanks
If you have a backup plan, which I hope you do, you could simply restore the latest full .bak, if it is accessible to your application. However, this would require some tunneling for your application to access the latest backup file and this is generally a no-no for zones containing database servers.
Perhaps you could set up a scheduled delivery of a backup from machine to machine. Does the LIVE server have access to your TEST server? I wouldn't think that a DBA would be inclined to set up delivery of a backup unless it was to a remote backup location for disaster recovery, and that is usually a dedicated location, not a testing server. Maybe you could work out a scenario where your TEST server doubles as an extra remote backup location for the production environment.
It would be better to have a backup shipped and then periodically or manually start a job that restores from the local copy. This would take the burden off your application. Then you would only need to kick off the SQL job from within your app as needed.
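For example, a job on the TEST server could contain a step along these lines (the logical file names, paths and job name are illustrative, not taken from your environment):

    -- Optionally drop existing connections first:
    -- ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    RESTORE DATABASE TestDB
    FROM DISK = N'\\fileshare\backups\LiveDB_latest.bak'
    WITH REPLACE,
         MOVE 'LiveDB_Data' TO N'D:\Data\TestDB.mdf',
         MOVE 'LiveDB_Log'  TO N'E:\Logs\TestDB_log.ldf',
         RECOVERY;

The RESTORE TEST button in your application would then only need to start that job:

    EXEC msdb.dbo.sp_start_job @job_name = N'Refresh TestDB from LIVE';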
Here is the use case: we need to back up some of the tables from a client server, copy them to our servers, restore them, and then run some queries using ODBC.
I managed to do this process for the entire database by using probkup for backup, prorest for restore and proserve to make it accessible for SQL queries.
However, some of the databases are big (> 8GB), so we are looking for a solution that backs up only the tables we need. I didn't find anything in the probkup documentation about how this can be done.
Progress only supports full database backups.
To get the effect that you are looking for you could dump (export) the tables that you want and then load them into an empty database.
"proutil dump" and "proutil load" are where you want to start digging.
The details will vary depending on exactly what you want to do and what resources and capabilities you have available to you.
Another option would be to replicate the tables in question to a partial database. Progress has a product called "pro2" that can help with that. It is usually pointed at SQL targets but you could also point it at a Progress database.
Or, if you have programming skills, you could put together a solution using replication triggers (under the covers that's what pro2 does...)
probkup and prorest are block-level programs and can't do a backup or restore by table.
To do what you're asking for, you'll need to dump the data from the source db's tables and then load it into the target db.
If your objective is simply to maintain a copy of the DB, you might also try incremental backups. Depending upon your situation, that might speed things up a bit.
Other options include various forms of DB replication, which allow you to keep real- or near-real-time copies of your database.
OpenEdge Replication. With the correct license, you can do query-only access on the replication target, which is good for reporting and analysis.
Third-party replication products. These can be more flexible in terms of both target DBs and limiting the tables to be replicated.
Home-grown replication (by copying and applying AI files). This is not terribly complicated, but you have to factor in the cost of doing the work and maintaining the system. There are some scripts out there that can get you started.
Or, as Tom said, you can get clever with replication via triggers.
I hope someone can give me advice or point me to some reading on this. I generate business reports for my team. We host a subscription website, so we need to track several things, sometimes on a daily basis. A lot of SQL queries are involved. The problem is that querying a large volume of information from the live database slows down our website or causes timeouts.
My current solution requires me to run bcp scripts daily that copy new rows to a backup database (that I use purely for reports). Then I use an application I made to generate reports from there. The output is ultimately an Excel file or several (for the benefit of the business teams, it's easier for them to read). There are several problems with my temporary solution, though:
It only adds new rows (updates to previous rows are not copied), and
It doesn't seem very efficient.
Is there another way to do this? My main concern is that the generation or the querying should not slow down our site.
I can think of three options for you, each of which could have various implementation methods. The first one is Azure SQL Data Sync Services, the second is the AS COPY OF operation, and the third rides on top of a backup.
The Sync Services are a good option if you need more real-time reporting capability; meaning if you need to run your reports multiple times a day, at just about any time, and you need your data as real time as you can get it. Sync Services could have a performance impact on your primary database because it runs based off of triggers, but with this option you can choose what to sync; in other words, you can replicate a filtered set of data, which minimizes the performance impact. Then you can report off the synced database. Another important shortcoming of this approach is that you would end up maintaining a sync service; if your primary database schema changes, you may need to recreate some or all of the sync configuration.
The second option, AS COPY OF, is simply a database copy operation which essentially gives you a clone of your primary database. Depending on the size of the database, this could take some time, so testing is key. However, if you are producing a morning report of yesterday's activities and having the very latest data is not as important, then you could run the AS COPY OF operation on a schedule after hours (or when the activity on your database is lowest) and run your report on the secondary database. You may need to build a small script, or use third-party tools, to help you automate this. There would be little to no performance impact on your primary database. In addition, the AS COPY OF operation provides transactional consistency, if this is important to you.
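For reference, a minimal sketch of that second option (the server and database names are made up; the copy runs asynchronously, so you would poll until it finishes before pointing your reports at it):

    -- Run on the target server, while connected to master:
    CREATE DATABASE ReportingCopy AS COPY OF myprodserver.ProductionDb;

    -- Check progress; the copy is ready once state_desc changes from 'COPYING' to 'ONLINE':
    SELECT name, state_desc
    FROM sys.databases
    WHERE name = 'ReportingCopy';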
The third option could be to use a backup mechanism (such as the Azure Export, or Azure backup tools), and restore the latest backup before running your reports. This has the advantage to leverage your backup strategy without much additional effort.
I have two MS SQL 2005 servers, one for production and one for test and both have a Recovery Model of Full. I restore a backup of the production database to the test server and then have users make changes.
I want to be able to:
Roll back all the changes made to the test SQL server
Apply all the transactions that have occurred on the production SQL server since the test server was originally restored so that the two servers have the same data
I do not want to do a full database restore from backup file as this takes far too long with our +200GB database especially when all the changed data is less than 1GB.
EDIT
Based on the suggestions below I have tried restoring a database with NoRecovery but you cannot create a snapshot of a database that is in that state.
I have also tried restoring it in Standby / Read-Only mode, which works: I can take a snapshot of the database and still apply transaction logs to the original db. However, I cannot make the database writable again as long as there are snapshots against it.
Running:
restore database TestDB with recovery
Results in the following error:
Msg 5094, Level 16, State 2, Line 1 The operation cannot be performed on a database with database snapshots or active DBCC replicas
First off, once you've restored the backup and set the database to "recovered", that's it -- you will never be able to apply another transaction log backup to it.
However, there are database snapshots. I've never used them, but I believe you could use them for this purpose. I think you need to restore the database, leave it in "not restored" mode -- definitely not standby -- and then generate snapshots based on that. (Or was that mirroring? I read about this stuff years ago, but never had reason to use it.)
Then when you want to update the database, you drop the snapshot, restore the "next" set of transaction log backups, and create a fresh snapshot.
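A rough sketch of that cycle (file names are illustrative; note that, per the edit in the question, the snapshot could only be created with the source restored in STANDBY rather than NORECOVERY):

    -- Create the snapshot that testers/reports will use:
    CREATE DATABASE TestDB_Snapshot ON
        (NAME = TestDB_Data, FILENAME = N'D:\Snapshots\TestDB_Data.ss')
    AS SNAPSHOT OF TestDB;

    -- To refresh: drop the snapshot, apply the next log backup(s), re-create the snapshot.
    DROP DATABASE TestDB_Snapshot;

    RESTORE LOG TestDB
    FROM DISK = N'\\backups\TestDB_log.trn'
    WITH STANDBY = N'D:\TestDB_undo.dat';

    CREATE DATABASE TestDB_Snapshot ON
        (NAME = TestDB_Data, FILENAME = N'D:\Snapshots\TestDB_Data.ss')
    AS SNAPSHOT OF TestDB;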
However, I don't think this would work very well. Above and beyond the management and maintenance overhead of doing this, if the testers/developers do a lot of modifications, your database snapshot could get very big -- up to the size of the original database at the time the snapshot was taken -- and that's hard drive space used in addition to the "original" database. For infrequently modified databases this could work, but for large OLTP systems, I have serious doubts.
So what you really want is a copy of Production to be made in Test. First, you must have a current backup of production somewhere? Usually on a database this size, full backups are made Sunday nights and then differential backups are made each night during the week.
Take the Sunday backup copy and restore it as a different database name on your server, say TestRestore. You should be able to kick this off at 5:00 pm and it should take about 10 hours. If it takes a lot longer see Optimizing Backup and Restore Performance in SQL Server.
When you get in in the morning, restore the last differential backup from the previous night; this shouldn't take long at all.
Then kick the users off the Test database and rename Test to TestOld (someone will need something), then rename your TestRestore database to be the Test database. See How to rename a SQL Server Database.
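In T-SQL the whole refresh boils down to something like this (backup paths and logical file names are illustrative):

    -- Evening: restore the Sunday full backup under a new name.
    RESTORE DATABASE TestRestore
    FROM DISK = N'\\backups\Prod_full_sunday.bak'
    WITH MOVE 'Prod_Data' TO N'D:\Data\TestRestore.mdf',
         MOVE 'Prod_Log'  TO N'E:\Logs\TestRestore_log.ldf',
         NORECOVERY;

    -- Morning: apply last night's differential and bring the database online.
    RESTORE DATABASE TestRestore
    FROM DISK = N'\\backups\Prod_diff_latest.bak'
    WITH RECOVERY;

    -- Swap the databases.
    ALTER DATABASE Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kick the users off
    ALTER DATABASE Test MODIFY NAME = TestOld;                    -- TestOld stays single-user until you change it back
    ALTER DATABASE TestRestore MODIFY NAME = Test;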
The long-range solution is to do log shipping from Production to TestRestore. Then at a moment's notice you can rename things and have a fresh Test database.
For the rollback, the easiest way is probably using a virtual machine and not saving changes when you close it.
For copying changes across from the production to the test, could you restore the differential backups or transaction log backups from production to the test db?
After having tried all of the suggestions offered here I have not found any means of accomplishing what I outlined in the question through SQL. If someone can find a way and post it or has another suggestion I would be happy to try something else but at this point there appears to be no way to accomplish this.
Storage vendors (such as NetApp) provide the ability to have writeable snapshots.
It gives you the ability to create a snapshot within seconds on the production, do your tests, and drop/recreate the snapshot.
It's a long-term solution, but... it works.
On Server1, a job exists that compresses the latest full backup
On Server2, there's a job that performs the following steps:
Copies the compressed file to a local drive
Decompresses the file to make the full backup available
Kills all sessions to the database that is about to be restored
Restores the database
Sets the recovery model to Simple
Grants db_owner privileges to the developers
Ref: http://weblogs.sqlteam.com/tarad/archive/2009/02/25/How-to-refresh-a-SQL-Server-database-automatically.aspx
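Condensed into T-SQL, those steps amount to roughly the following (the database name, backup path and developer group are illustrative):

    ALTER DATABASE DevDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kill existing sessions

    RESTORE DATABASE DevDB
    FROM DISK = N'D:\Staging\Prod_full.bak'
    WITH REPLACE, RECOVERY;

    ALTER DATABASE DevDB SET RECOVERY SIMPLE;
    ALTER DATABASE DevDB SET MULTI_USER;

    USE DevDB;
    EXEC sp_addrolemember N'db_owner', N'DOMAIN\Developers';  -- assumes a developer AD group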
We have 18 databases that should have identical schemas, but don't. In certain scenarios, a table was added to one, but not the rest. Or, certain stored procedures were required in a handful of databases, but not the others. Or, our DBA forgot to run a script to add views on all of the databases.
What is the best way to keep database schemas in sync?
For legacy fixes/cleanup, there are tools, like SQLCompare, that can generate scripts to sync databases.
For .NET shops running SQL Server, there is also the Visual Studio Database Edition, which can create change scripts for schema changes that can be checked into source control, and automatically built using your CI/build process.
SQL Compare by Red Gate is a great tool for this.
SQLCompare is the best tool that I have used for finding differences between databases and getting them synced.
To keep the databases synced up, you need to have several things in place:
1) You need policies about who can make changes to production. Generally this should only be the DBA (DBA team for larger orgs) and 1 or 2 backups. The backups should only make changes when the DBA is out, or in an emergency. The backups should NOT be deploying on a regular basis. Set database rights according to this policy.
2) A process and tools to manage deployment requests. Ideally you will have a development environment, a test environment, and a production environment. Developers should do initial development in the dev environment, and have changes pushed to test and production as appropriate. You will need some way of letting the DBA know when to push changes. I would NOT recommend a process where you holler to the next cube. Large orgs may have a change control committee and changes only get made once a month. Smaller companies may just have the developer request testing, and after testing is passed a request for deployment to production. One smaller company I worked for used Problem Tracker for these requests.
Use whatever works in your situation and budget, just have a process, and have tools that work for that process.
3) You said that sometimes objects only need to go to a handful of databases. With only 18 databases, probably on one server, I would recommend making each database's objects match exactly. Only 5 DBs need usp_DoSomething? So what? Put it in every database. This will be much easier to manage. We did it this way on a 6-server system with around 250-300 DBs. There were exceptions, but they were grouped: databases on server C got this extra set of objects; databases on server L got this other set.
4) You said that sometimes the DBA forgets to deploy change scripts to all the DBs. This tells me that s/he needs tools for deploying changes. S/he is probably taking a SQL script, opening it in Query Analyzer or Management Studio (or whatever you use) and manually going to each database and executing the SQL. This is not a good long-term (or short-term) solution. Red Gate (makers of SQLCompare above) have many great tools. MultiScript looks like it may work for deployment purposes. I worked with a DBA who wrote his own tool for SQL Server 2000 using osql. It would take an SQL file and execute it on each database on the server. He had to execute it on each server, but it beat executing on each DB. I also helped write a VB.NET tool that would do the same thing, except it would also go through a list of servers, so it only had to be executed once. (A rough sketch of this run-on-every-database idea follows after point 5.)
5) Source Control. My current team doesn't use source control, and I don't have enough time to tell you how many problems this causes. If you don't have some kind of source control system, get one.
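And here is that rough sketch of the run-on-every-database idea from point 4 (the @cmd text is just an example of a change you might deploy):

    DECLARE @db sysname, @cmd nvarchar(max), @sql nvarchar(max);
    SET @cmd = N'CREATE PROCEDURE dbo.usp_DoSomething AS SELECT 1;';  -- the change to deploy

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name FROM sys.databases WHERE database_id > 4;         -- skip the system databases

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'USE ' + QUOTENAME(@db) + N'; EXEC sp_executesql @stmt;';
        EXEC sp_executesql @sql, N'@stmt nvarchar(max)', @stmt = @cmd;
        FETCH NEXT FROM dbs INTO @db;
    END
    CLOSE dbs; DEALLOCATE dbs;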
I haven't got enough reputation to comment on the above answer, but the Pro version of SQL Compare has a scriptable API. Given that you have to replicate stuff to all of these databases, you could use this to make an automated job to either generate the change scripts or validate that the databases are all in sync. It's also not much more expensive than the standard version.
Aside from using database comparison tools, with 18 databases you should have a DBA, so enforce a policy that only the DBA can change tables at the database level by restricting access to CREATE and ALTER to the DBA only. On both your test and live databases. The dev database shouldn't have this, of course! Make the developers who have been creating or altering the schemas willy-nilly go via the DBA.
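If you go that route, the restriction itself is only a couple of DENY statements (the Developers role name below is illustrative):

    DENY CREATE TABLE, CREATE PROCEDURE, CREATE VIEW TO [Developers];
    DENY ALTER ON SCHEMA::dbo TO [Developers];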
Create a single source-controlled DDL/SQL script for each release and only use it to update the databases. The diff tools can be useful but mainly for checking that you haven't made a mistake and getting out of trouble when the policies fail. Combine the DDL, SQL, and stored procedure scripts into a single script so that it's not easy to "forget" to run one of the scripts.
We have a tool called DB Schema Difftective that can compare and sync database schemas. With our other tool, DB MultiRun, you can easily deploy generated (sync) scripts to multiple db servers (project based).
I realize this post is old, but TurnKey is correct. If you are a developer working in a team environment, the best way to maintain a database schema for a large application is to make updates to a master schema in whatever source control system you use. Simply write your own scripting class and your database will be perfect every time.