I'm in a rather odd situation. At my work, we have two MSSQL 2012 servers, one physically hosted here, one virtual. Through a long, frustrating set of circumstances, our migration plans fell apart and we now have both servers with different data on each. I have to take a database, let's call it cms1, from the physical server and move it to the virtual server. However, I have to also make sure the virtual server's copy of cms1 remains intact, then run a script to move the changed tables from one to the other.
What I've tried so far is:
Make a full backup of the physical server's copy into cms1.bak, then copy that .bak file over to the virtual server.
Rename the virtual server's version of the database with "alter database cms1 modify name = cms1_old". Good so far.
Take the newly renamed cms1_old db offline, then restore from my .bak file. I get an error that the file for cms1 (NOT cms1_old) is in use.
I went to the actual location on disk and renamed the two files associated with cms1 to be cms1_old. I closed SSMS and re-opened it, and tried the restore again. I got the same error, that the file for cms1 (again, NOT the file for cms1_old) was in use.
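In case it helps, here is roughly what I ran, as T-SQL (the paths are made up, not my real ones):

    -- On the physical server: full backup to a file.
    BACKUP DATABASE cms1 TO DISK = N'D:\backups\cms1.bak' WITH INIT;

    -- On the virtual server: rename the existing copy out of the way.
    ALTER DATABASE cms1 MODIFY NAME = cms1_old;

    -- Then the restore that fails with the "file in use" error.
    RESTORE DATABASE cms1 FROM DISK = N'D:\backups\cms1.bak';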
(update) I have since discovered detaching databases and tried to use that. When re-attaching after renaming the files for cms1_old, though, SSMS says that the files cannot be found. I believe I've gotten every path correct, so I'm not sure why this is happening.
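The detach/re-attach attempt looked roughly like this (again, paths made up):

    -- Detach the renamed database.
    EXEC sp_detach_db @dbname = N'cms1_old';

    -- After renaming the physical files on disk, try to re-attach.
    CREATE DATABASE cms1_old
    ON (FILENAME = N'D:\Data\cms1_old.mdf'),
       (FILENAME = N'D:\Data\cms1_old_log.ldf')
    FOR ATTACH;  -- SSMS says the files cannot be found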
Was my mistake in not taking the cms1 database offline BEFORE renaming it? If so, is there a way to fix this, or should I start again? This cms1 database is a test, not the real thing, but I want to get the procedure nailed down before working on our live database. How would I move a copy of cms1 from physical to virtual, keeping cms1 on the virtual server, so both can exist side by side while I move data from certain tables of one to the other? I really hope I'm making sense; I've been fighting with this for two hours straight. Thanks for any suggestions. I'm not too experienced in this sort of thing; I know SQL reasonably well, but dealing with physical DB files, backups, etc. is new to me.
Due to a lightning strike on my house, my old computer was recently fried. But I bought a new one and, much to my delight, the C: SSD filesystem from the old machine was still working after I ported it to the new one, albeit now as a D: drive.
Now I'm getting ready to install PostgreSQL and would like to be able to access the old database that resides on the D: drive. I am stumped as to how to proceed.
There does not seem to be any way to tell a running PostgreSQL instance, "Hey, look over there on the D: drive; that's a database you can use." There is a CREATE DATABASE and a DROP DATABASE, but no "use this database". I should say I was running version 14 on the old machine and could certainly install that same version again on the new one before upgrading, if there were a way to add to its catalogue.
There is no database dump/conversion utility that works without going through a running PostgreSQL server instance, so I see no way to convert the old data out of its proprietary format and reload it into the new PostgreSQL instance.
The only thought that occurs to me is to install a version as close to the old version 14 as possible, then CREATE a second database somewhere new (perhaps on the D: drive), then stop the PostgreSQL server instance, copy the old data over top of the new data (with all subdirectories), then restart the server and hope for the best. Sounds like a 50-50 proposition at best.
Anyone else have any other thoughts/ideas?
So, just in case someone else has this problem and finds this question, here is what I found.
The installer for PostgreSQL has a prompt for what data directory to use. After making a backup copy of the data, I told it to use D:\Program Files\PostgreSQL\14\data, and it recognized that this was an existing PostgreSQL data repository and preserved all my tables.
As an experiment afterward, I copied the backup data back into the data directory (after stopping the DB), restarted the DB, and everything was fine after PostgreSQL complained a little about the log file locations. I would say this can work as long as you are running the same version of PostgreSQL that last worked with the database on your old computer.
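If you already have matching version-14 binaries installed and don't want to rerun the installer, I believe the same thing can be done by pointing the server at the old data directory by hand, something like this (the service name and path are examples, and pg_ctl needs to run as a user with access to that directory):

    pg_ctl -D "D:\Program Files\PostgreSQL\14\data" start

    REM Or register it as a Windows service pointing at that directory:
    pg_ctl register -N "postgresql-x64-14" -D "D:\Program Files\PostgreSQL\14\data"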
I make a copy of an .mdb database (and its other partition) every night, and test it by opening it up to see if it works.
By "make a copy" I mean:
I kick all the users out of the database who are connected via RDP (not automated...)
Rename both backend files...and then proceed to make a copy of the files. (automated...)
And by "see if it works" I mean:
Relink a frontend file (.mde) to both files (this is automated)
Open it (and its other partition) with a frontend (.mde) and workgroup security file (.mdw) on my local machine to see if it works. (This is not automated, and is the part I am focusing on here...)
There are only two tables in the other partition, so I run the part of the frontend file I know uses that partition to test if the backup is going to work.
Would connecting to the backup of the files and doing a query on some table in both partitions be enough to prove that the backup is good without actually looking at it with human eyes?
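For instance, would running a trivial query like this through the relinked frontend, once against a table in each backend, be sufficient (the table name is just a placeholder)?

    SELECT COUNT(*) FROM SomeTable;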
I have also automated the process of compacting the live database, but I don't feel safe automating this part until I have verified that the backups indeed work.
Also, before I get any posts along the lines of "Why are you still using Access?", let me just state that I don't get to make those decisions, and this database was here a long time before I got here.
(Please Note: if you feel I have posted this on SO in error please feel free to migrate to the DBA SE or to Serverfault)
I have backed up a database I created on another machine running SQL Server 2012 Express Edition, and I wanted to restore it on my machine, which is running the same. I ticked the checkbox to overwrite the existing database, and got this error:
    Backup mediaset is not complete. Files: D:\question.bak. Family count:2. Missing family sequence number:1
This happens if, when you made the backup, you had multiple files listed in the backup destination textbox. Go back to your source server and create the backup again; this time, make sure there's only one destination file listed.
If you had more than one file listed as the backup destination, the backup is striped across them; you'll need all the files to perform the restore.
You can verify this by performing a RESTORE LABELONLY against the single file you copied to your destination server.
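For example (path illustrative):

    -- Shows the media set metadata for the file; a FamilyCount greater than 1
    -- means the backup was striped across that many files.
    RESTORE LABELONLY FROM DISK = N'D:\question.bak';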
Sandra Walter's answer provides an accurate description of what has happened, but I found it a bit lacking.
To make a backup which isn't striped (striping is what has occurred in this situation), go back to the window where you set up the backup of your database. At the bottom is a list of paths where the different stripes will go.
Go to each of the listed paths and delete the stripe of the backup.
Then remove all but one of the paths from the list in the window and click the "OK" button. Your unstriped backup will be created at that one path.
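Alternatively, if you'd rather skip the dialog entirely, an unstriped backup is just a single-destination BACKUP statement (the database name and path here are examples):

    BACKUP DATABASE question TO DISK = N'D:\question.bak' WITH INIT;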
Hope that helps.
My backup was scheduled to go to two different locations. Once I selected both files during the restore, it worked for me.
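In T-SQL terms, restoring a striped backup just means listing every file of the media set in one RESTORE (paths are examples):

    RESTORE DATABASE question
    FROM DISK = N'D:\question_1.bak',
         DISK = N'D:\question_2.bak'
    WITH REPLACE;  -- same effect as ticking the overwrite checkbox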
Currently we have a lot of mailfiles in different directories on the same server; some are located in data\mail and others are located in data\mail\DK... or data\mail\USA...
These mailfiles are also replicated to other servers, and we have noticed that on the other servers the mailfiles have a different file structure.
This makes administration very difficult, so we would like to move all our mailfiles to the data\mail... directories on all servers.
(Some clients have local replicas)
What is the best practice for doing this?
Can the admin process do this: move the file, update the person record, and update the clients?
AdminPs "Move to Another Server" functionality works fine for that job (watch out for the delete requests, though).
My guess is that the original administrator set up the system so that each user's mailfile on the home mail server is in the root of the \mail directory, and that the subdirectories contain replicas of mailfiles from other servers as a means of cheap backup.
I'd suggest looking at the NAB and seeing if this is indeed the case; if it is, then you are in luck. All you will need to do in this case is bring the server down, move all the mailfiles in the subdirectories into the main mail directory, and restart the server. Once the server comes back up, it will continue to replicate these mailfiles with the other servers.
I would check the replication connection documents to see if any special replication schedules have been set up for those subdirectories; if so, you'll have to adjust them to ensure proper replication.
If the users' home mail server is not using the root mail directory as the mailfile storage area, then it is a longer process. You can use AdminP to do it, but it COULD cause you issues if you accidentally come back in at a later date and approve the deletion requests, or if the server doesn't have enough disk space to double all the mailfiles. Also, having two replicas of a mailfile on a single server is not a good idea either.
If you need to do the long process, I'd look at doing it manually. Bring the server down, move the mailfiles, bring the server up, and edit each person doc to set the correct location for the mailfile; then visit each user machine to edit their location document to point to the correct location. It is the only safe way to do it.
The last option is to buy a new server and then use AdminP to move all the users to that server, making sure the mailfiles are stored in the /mail directory. There is no risk of duplicate replicas on a single server, AdminP looks after adjusting the settings on all the users' machines, and you end up with a nice, clean, new server (on which you could implement things like transaction logging and DAOS).
As for the safe way to go on this one:
Correcting the physical mailfile location should be quite easy (bring server down, move mailfiles, start server again), but modifying all those person documents could be quite complicated if you go one by one.
I would encourage you to use Ytria's ScanEZ software, so that you can mass-modify all person docs in the NAB using a simple Replace Substring formula to correct the MailFile path information at once.
This is an incredibly fast process; it should not take more than 10-20 seconds.
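The formula in question would be a one-liner along these lines, assuming the person documents' MailFile item and a made-up subdirectory name (backslashes are doubled because \ is an escape character in formula language):

    FIELD MailFile := @ReplaceSubstring(MailFile; "mail\\DK\\"; "mail\\");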
I have got a database on MS SQL Server 2005 named mydb, which is being accessed by 7 applications from different locations.
I have created a copy of it named mydbNew and tuned it by applying primary keys and indexes and changing queries in stored procedures.
Now I want to replace the old db "mydb" with the new db "mydbNew".
Please tell me the best approach to do it. I thought of making changes in web.config, but the applications accessing the database are not accessible to me, so I can't go that route.
Please provide an expert opinion, so that I can replace the database in minimum time without affecting the other databases and all the applications.
To clarify what I mean by replacing the old db with the new db: I want to rename the old db "mydb" to "mydbOld" and then rename my new db "mydbNew" to "mydb".
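In T-SQL, what I have in mind is roughly this (I don't know if it is safe to run while the applications are connected):

    USE master;
    -- Get exclusive access to the old database, then rename it out of the way.
    ALTER DATABASE mydb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE mydb MODIFY NAME = mydbOld;
    -- Rename the tuned copy into place.
    ALTER DATABASE mydbNew MODIFY NAME = mydb;
    -- Reopen the renamed original in case it is still needed.
    ALTER DATABASE mydbOld SET MULTI_USER;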
Thanks.
Your plan will work, but it carries a high risk, especially since I'm assuming this is a system where users actively change data, which means your copy won't have the same level of updated content unless you do a cutover right before go-live. Your best bet is to migrate your changes carefully into the live system during a low-traffic / maintenance period and test extensively once you're done. Prior to doing this, or the method you mentioned previously, back up everything.
All of the changes you described above can be made to an online database without the need to actually bring it down. However, some of those activities will change the way in which the data is affected by certain actions (changes to stored procs); that means that during the transition the behaviour of the system or systems may be unpredictable, so you should either complete this update at a low point in day-to-day operations or take the system down for a maintenance window.
SQL Server comes with a function to generate a script file from your database; you can also do this manually by clicking on the object you want to script and selecting the Script -> CREATE option. Depending on the number of changes you have to make, it may be worthwhile to script your whole new database (by clicking on the new database and selecting Tasks -> Generate Scripts... and selecting the items needed).
If you want to script out just the new things individually, then you simply click on the object you want to script, select Script <object> as ->, and then choose DROP and CREATE if you want to kill the original version (like replacing a stored proc), or CREATE if you're adding new stuff.
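The generated DROP and CREATE script for a stored procedure comes out looking roughly like this (the object name and body here are made-up examples):

    IF OBJECT_ID(N'dbo.usp_GetOrders', N'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetOrders;
    GO
    CREATE PROCEDURE dbo.usp_GetOrders
    AS
    BEGIN
        SELECT OrderID, OrderDate FROM dbo.Orders;  -- illustrative body
    END
    GO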
Once you have everything you want to add/update as a script, you're then ready to execute it against the live database. This is the part where you back up everything. Once you're happy everything is backed up and the system is in a maintenance or low-traffic period, you execute the script. There may be a few problems when you do this, and you will need to fix them as quickly as possible (usually mostly just "already exists" errors; that's why DROP and CREATE scripts are good). If anything goes really wrong, restore your backups and try again (after figuring out what happened and how to fix it).
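The backup itself can be a one-liner per database (the path is an example; CHECKSUM is available from SQL Server 2005 onward):

    BACKUP DATABASE mydb TO DISK = N'E:\backups\mydb_preswap.bak' WITH INIT, CHECKSUM;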
Make no mistake: if you have a lot of changes to make, this could be a long process, or it could take mere minutes. You just need to adapt if things go wrong, and be sure to cover yourself with backups/extensive prayer. Good luck!