Why aren't my SQL Server 2005 backups being deleted? - sql-server-2005

I've got a maintenance plan that executes weekly in the off hours. It's always reporting success, but the old backups don't get deleted. I don't want the drive filling up.
DB Server info: SQL Server Standard Edition 9.00.3042.00
There is a "Maintenance Cleanup Task" set to
"Search folder and delete files based on an extension"
and "Delete files based on the age of the file at task run time" is checked and set to 4 weeks.
The only thing I can see is that my backups are each given their own subfolder and that this is not recursive. Am I missing something?
Also: I have seen reports of this issue pre-SP2, but I am running Service Pack 2.

If you make your backups in subfolders, you have to specify the exact subfolder for deleting.
For example:
You make the backup by choosing the option that says something like "Make one backup file for each database" and check the box that says "Create subfolder for each database".
(I work with a German version of SQL Server, so I'm translating the option names into English myself.)
The specified folder is H:\Backup, so the backups will actually be created in the folder H:\Backup\DatabaseName.
And if you want the Maintenance Cleanup Task to delete the backups via "Delete files based on the age of the file at task run time", you have to specify the folder H:\Backup\DatabaseName, not H:\Backup !!!
This is the mistake that I made when I started using SQL Server 2005 - I put the same folder in both fields, Backup and Cleanup.
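Incidentally, what the cleanup task generates under the hood is a call to the undocumented xp_delete_file procedure. A rough sketch of such a call (the parameter meanings are my assumptions based on what SQL Server 2005 maintenance plans emit; the path and cutoff date are examples):
EXECUTE master.dbo.xp_delete_file
    0,                          -- file type: 0 = backup files, 1 = report files
    N'H:\Backup\DatabaseName',  -- folder to search: the database subfolder, not H:\Backup
    N'bak',                     -- extension, without the leading dot
    N'2008-06-01T00:00:00',     -- delete files older than this cutoff
    1;                          -- 1 = also search first-level subfolders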

My understanding is that you can only include the first level of subfolders. I am assuming that you have that check-box checked already.
Are your backups nested deeper than just one level?
Another thought: do you have one single maintenance plan that deletes the backups of multiple databases? I ask because the only way I can see to do that would be to point the task at a folder one level higher, in which case "include first-level subfolders" would not reach deep enough.
The way I have mine set up, the Maintenance Cleanup Task is part of my backup process. So once the backup completes for a specific database, the Maintenance Cleanup Task runs on that same database's backup files. This lets me be more specific about the directory, so I don't run into the directory structure being too deep. Since I have the criteria set the way I want, items still don't get deleted until I am ready for them to be.
Tim

Make sure your maintenance plan does not have any errors associated with it. You can check the error log under the SQL Server Agent area in SQL Server Management Studio. If there are errors during your maintenance plan, it is probably quitting before it reaches the step that deletes the outdated backups.
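You can also check this from T-SQL instead of the GUI; a sketch that lists recent job outcomes (step_id = 0 is the overall outcome row; run_status 0 = failed, 1 = succeeded):
SELECT j.name, h.run_date, h.run_time, h.run_status, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = h.job_id
WHERE h.step_id = 0
ORDER BY h.run_date DESC, h.run_time DESC;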

Another issue could be the "workflow" of the maintenance plan.
If your plan consists of more than one task, you have to connect the tasks with arrows to define the order in which they will run.
Possible issue #1:
You forgot to connect them with arrows. I just tested that - the job runs without any error or warning, but it executes only the first task.
Possible issue #2:
You defined the workflow in a way that means the cleanup task will never run. If you connect two tasks with an arrow, you can right-click on the arrow and specify whether the second task runs always, or only when the first one succeeds or fails (this changes the color of the arrow: green for success, red for failure, blue for completion). Maybe the backup works, and the cleanup never runs because it is set to run only when the backup fails?

Related

TFS 2015 The item is locked in workspace (null);(null)

We recently upgraded from TFS 2010 to TFS 2015. Everything appears to be fine post-upgrade, but we are getting the error "The item is locked in workspace (null);(null)." on some source control files. It looks like we have some orphaned locks that need to be tracked down and cleaned up, but the tbl_Lock table does not exist in the database, so the following select query won't work:
select * FROM tbl_Lock l
LEFT JOIN tbl_PendingChange pc
ON l.PendingChangeId = pc.PendingChangeId
WHERE pc.PendingChangeId IS NULL
Does anyone know how to detect and remove these locks in TFS 2015?
I also installed the TFS power tools, and neither Visual Studio 2015 nor the power tools are picking up the locks.
Updated:
BTW, when I run the SELECT query to find rows where PendingChangeId is NULL, I get back no rows. I think the trick is the LEFT JOIN: pc.PendingChangeId is NULL when tbl_PendingChange has no record for the PendingChangeId held in tbl_Lock (and thus the lock is orphaned). So I'd still need to know what PendingChangeId should normally join to in TFS 2015, in order to identify which files have a bad lock. (Or where a workspace no longer exists, which may be another possible source of the issue.)
And I also still need to know how to clean up those bad locks. I'd prefer to do this using the tools, either via the GUI or the command line, but could also do this programmatically either using the API or the TFS Object Model files for TFS 2015.
I really would rather touch the database directly only as a last resort. And I would also rather use tf vc destroy on the item only as a last resort, since that would wipe out all history on the files.
Update 2
Aha! I think I found a way to identify the files, and it looks like my thinking for what happened may be correct. Unfortunately, I had to probe the database using a READ UNCOMMITTED query to find the information. I couldn't get at this information programmatically or using the tools. (They all showed or acted like the file is not checked out.) The query that I used on TFS 2015 was:
select pc.* from tbl_PendingChange pc
left join tbl_Workspace ws on pc.WorkspaceId = ws.WorkspaceId
where ws.WorkspaceId is null
This returned the three files that have the (null);(null) lock in our database, because the WorkspaceId listed in tbl_PendingChange no longer exists in tbl_Workspace.
How did this happen? Our CI server uses temporary TFS workspaces. I think what happened after the upgrade is that our CI server went to check out the file and apply an update to it (for example, to increment version numbers as part of the build process). It checked out the file, but failed to apply the update. (Our tools like working with Server workspaces, but the process may have ended up with a Local workspace, so the file was still checked in locally but checked out on the server, and the change to the file couldn't be applied.) The code that we are using performs a workspace.Delete operation when the process completes, so the workspace was deleted - even though it still had the file checked out! This created an orphan record in tbl_PendingChange that isn't linked to any workspace, so the file is still locked with pending changes. But the GUI and tools don't see it as such, because they don't realize the pending change's workspace no longer exists.
So this brings me back around to how do I fix this? If someone knows of a way to get at these orphaned pending changes, I'd appreciate it. I tried using:
TfsTeamProjectCollection tfsTeamProjectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(szProjectUri));
VersionControlServer versionControlServer = tfsTeamProjectCollection.GetService<VersionControlServer>();
string[] items = new[] { ... server item path ... };
// Passing null for the workspace and owner filters queries pending sets across all workspaces/users:
PendingSet[] queryPendingSets = versionControlServer.QueryPendingSets(items, RecursionType.None, null, null);
PendingSet[] getPendingSets = versionControlServer.GetPendingSets(items, RecursionType.None);
but these aren't finding the orphans.
Update 3
I finally installed Team Foundation Sidekicks 2015 and gave it a try - the Status Sidekick specifically, then the other tools. It finds pending changes, but not the orphaned ones.
You can use Team Foundation Sidekicks to search for and undo locks with the following steps:
Install the tool and launch it.
Select TFS server to connect.
Select "Tools\Status Sidekick".
Set the "Search criteria" for the information you want.
Click "Search" button.
Select the locked file and click the "Unlock lock" button.
You can use the command below to undo the pending changes:
tf undo "file_path" /workspace:workspace_name
Or you can just use the command below to delete the old workspace:
tf workspace /delete /server:your_tfs_server workspace;username
From the Visual Studio 2015 GUI:
File -> Source Control -> Advanced -> Workspaces...
In the dialog that comes up, check "Show remote workspaces"; the locked workspace will appear in the window. Select it and click "Remove".
For details, see this blog post; for more ways to resolve this, refer to the similar question: What do you do if the file in TFS is locked by someone else?
Update:
According to the SQL query, it's looking for PendingChangeId IS NULL. You can query tbl_PendingChange under the collection database similarly. However, this is not a recommended method, since operating directly on the TFS database is unsupported.
The following command has cleared up the pending changesets that were orphaned:
tf vc destroy <itemspec> /startcleanup
After running this command, the file was able to be added back to TFS, and the file could be checked in and out and edited as normal. Running the query:
select pc.* from tbl_PendingChange pc
left join tbl_Workspace ws on pc.WorkspaceId = ws.WorkspaceId
where ws.WorkspaceId is null
also showed that the pending changeset record related to this file was gone as well.
Microsoft's documentation on this command can be found at https://msdn.microsoft.com/en-us/library/bb386005.aspx. Before using this command, you should review the documentation carefully and be sure to understand the consequences of using it.
Because this command permanently removes files and potentially all their history from TFS - and does so recursively - you need to take precautions and be absolutely certain that you are targeting the command correctly. So before using this command, I would recommend taking the following additional precautions:
Stop all user and external access to TFS and any other software that may be running on the machine.
Make sure to run a full backup of TFS and any other databases located on the machine (a T-SQL sketch follows after this list).
If you can, take a point-in-time snapshot of the server.
That way if something goes horribly wrong, you will have one or more points to fall back on.
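For the database-backup precaution, a minimal T-SQL sketch (the collection database name and path are hypothetical; TFS's own scheduled-backup tooling is the supported route, since the configuration and collection databases need to stay consistent):
BACKUP DATABASE Tfs_DefaultCollection       -- hypothetical collection database name
TO DISK = N'D:\Backups\Tfs_DefaultCollection.bak'
WITH CHECKSUM, INIT;                        -- CHECKSUM verifies pages; INIT overwrites the file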

SQL Server 2012 installed copy showing problems

My problem is like this: I have a copy of SQL Server 2012 installed on my machine. It's been there for over 3 years without any glitches at all. Just 4-5 days ago, a problem sprouted up. When I started Management Studio, it told me that
msdb got corrupted so it cannot be opened.
The complete message is something like this:
Cannot display policy health state at the server level, because the user doesn't have permission. Permission to the database msdb is required for this feature to work correctly.
So what could be wrong here? What sudden changes or anomalies could have crept in to make this unstable? Someone told me it could be due to a wide range of possibilities - the reason could be anything; even some NuGet packages can affect the database. Initially I thought this could have been an issue with logins, permissions, etc., so I also tried running as administrator. No, that did not cure the problem. If I try to create a new database, it simply tells me that I can't. The message is something like this:
An exception occurred while executing a T-SQL statement or batch.[Microsoft.SqlServer.ConnectionInfo]. Database msdb cannot be opened. It has been marked as SUSPECT by recovery. [Microsoft Sql Server, Error:926]
How do I recover from this? Can you provide me some guidance, or a clue about where precisely to look for the source of the problem? All my work is stalled. Any kind of assistance in recovering my ailing SQL Server installation will be humbly received.
So, I'm requesting you all to show me the way.
Thanks in anticipation.
I fixed mine with Solution C from the following website. My MSDB was corrupt and not loading, so I stopped the services and replaced it with the files from the template in the SQL Server directory.
https://www.mssqltips.com/sqlservertip/3191/how-to-recover-a-suspect-msdb-database-in-sql-server/
"The templates are saved in "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\Templates" (the path varies by version and install choices, this is the default for SQL Server 2012). By shutting down the instance and replacing the bad MSDB data (msdbdata.mdf) and transaction log (msdblog.ldf) files with the template files I was able to restart the instance without error!" (just incase the website link doesn't work I have quoted it here).
Fissh
If your MSDB is corrupted, restore from your most recent backup. That's the safest thing to do and that's why we have backups to begin with.
If you do not have a backup of MSDB, you have a couple of options.
Recreate it. Detailed instructions here: https://msdn.microsoft.com/en-us/library/dd207003(v=sql.110).aspx#CreateMSDB. This is the best way to ensure you get a clean, functional msdb, and it is the fastest way to get up and running again. IMPORTANT: Doing this means you lose all jobs, backup history, etc. that are stored in msdb. Remember to recreate all maintenance jobs after you're done, or else you're just waiting for the next thing to fall over (e.g. transaction log backups no longer run, the transaction logs grow until you run out of disk space, and now you can't run any queries that commit transactions).
DBCC CHECKDB WITH REPAIR_ALLOW_DATA_LOSS is another option, which you'll probably find if you google/bing the issue. This might work, but it is not recommended. The problem is you don't really know what will be lost: it works by deleting what it can't read and then fixing the links to get the database physically functional again. Once that's done, you'll have to go back and figure out what remains and is still functional. That is tedious and error prone. Besides, if you're going to do that very thorough manual check to ensure all your jobs are intact, you're better off just re-creating them on a new, clean msdb.
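For completeness, if you do go that route anyway, the usual sequence is roughly the following sketch - a SUSPECT database has to be put into EMERGENCY and SINGLE_USER mode before CHECKDB is allowed to repair it:
ALTER DATABASE msdb SET EMERGENCY;
ALTER DATABASE msdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (msdb, REPAIR_ALLOW_DATA_LOSS);  -- may silently discard unreadable data
ALTER DATABASE msdb SET MULTI_USER;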

How does one test if a backup of an `.mdb` file is good in an automated manner?

I make a copy of an .mdb database (and its other partition) every night, and test it by opening it up to see if it works.
By "make a copy" I mean:
I kick all the users out of the database who are connected via RDP (not automated...)
Rename both backend files...and then proceed to make a copy of the files. (automated...)
And by "see if it works" I mean:
Relink a frontend file (.mde) to both files (this is automated)
Open it (and its other partition) with a frontend (.mde) and workgroup security file (.mdw) on my local machine to see if it works. (This is not automated, and is the part I am focusing on here...)
There are only two tables in the other partition, so I run the part of the frontend file I know uses that partition to test if the backup is going to work.
Would connecting to the backup of the files and doing a query on some table in both partitions be enough to prove that the backup is good without actually looking at it with human eyes?
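For example, I was picturing an automated probe along these lines from a SQL Server box (just a sketch: it assumes the 32-bit Jet OLE DB provider is installed and ad hoc distributed queries are enabled, the path and table name are placeholders, and a workgroup-secured backend would need extra connection properties for the .mdw file):
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\Backups\Backend.mdb'; 'Admin'; '',
                'SELECT TOP 1 * FROM SomeTable');
-- Repeat with a table from the other partition; an error here would flag a bad copy.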
I have also automated the process of compacting the live database, but I don't feel safe automating this part until I have verified that the backups indeed work.
Also before I get any posts along the lines of "Why are you still using access?", let me just state that I don't get to make those decisions and this database was here a long time before I got here.
(Please Note: if you feel I have posted this on SO in error please feel free to migrate to the DBA SE or to Serverfault)

sql server restoring backup error

I backed up a database I had created on another machine running SQL Server 2012 Express Edition, and I wanted to restore it on my machine, which runs the same. I ticked the checkbox to overwrite the existing database, and got this error:
Backup mediaset is not complete. Files: D:\question.bak. Family count:2. Missing family sequence number:1
This happens if, when you made the backup, you had multiple files listed in the backup destination textbox. Go back to your source server and create the backup again; this time, make sure there's only one destination file listed.
If you had more than one file listed as the backup destination, the backup is striped across them; you'll need all the files to perform the restore.
You can verify this by performing a RESTORE LABELONLY against the single file you copied to your destination server.
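A sketch of both operations (the stripe file names are placeholders): RESTORE LABELONLY reports the FamilyCount and FamilySequenceNumber for the media set a file belongs to, and restoring a striped backup simply lists every member file:
RESTORE LABELONLY FROM DISK = N'D:\question.bak';
-- FamilyCount = 2 means the backup was striped across two files,
-- so both stripes must be listed to restore:
RESTORE DATABASE question
FROM DISK = N'D:\question_stripe1.bak',
     DISK = N'D:\question_stripe2.bak'
WITH REPLACE;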
Sandra Walter's answer provides an accurate description of what has happened, but I found it a bit lacking.
To make a backup which isn't striped (striping is what has occurred in this situation), go back to the window where you set up the backup of your database. At the bottom is a list of paths where the different stripes go.
Go to each of the listed paths and delete the stripe of the backup.
Then remove all but one of the paths from the list in the window and click the "OK" button. Your unstriped backup will be created at that one path.
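Equivalently, in T-SQL, an unstriped backup just names a single destination file (the path is a placeholder, the database name is as in the question):
BACKUP DATABASE question
TO DISK = N'D:\question.bak'  -- one destination file = no striping
WITH INIT;                    -- INIT overwrites any existing backup sets in the file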
Hope that helps.
My backup was scheduled to go to two different locations. Once I selected both files during the restore, it worked for me.

sql server batch database alters, batch database changes - best and safest way

We have a small development team of 5 developers working on a large enterprise level web based asp.net/c# system.
We do a lot of database updates which include stored procedure creations and alters as well as new table creation, column creation, record inserts, record updates and so on and so forth.
Today all of the developers place their change scripts in one large SQL change script file that gets run on our Test and Production environments. So this single file contains stored proc alters, record inserts, updates, etc. The file can end up being quite lengthy, as we may only do a test or production release every 1 to 2 months.
The problem that I am currently facing is this:
Once in a while a script error occurs at some location in this large "batch change script" - perhaps an insert fails, or an alter fails for a proc, for instance.
When this occurs, it is very difficult to tell what changes succeeded and what failed on the database.
Sometimes, even if one alter fails, the script continues executing to the end; other times it stops and nothing further gets run.
So today I end up manually checking procs and records to see what actually worked and what did not, which is a bit painstaking.
I was hoping I could roll this entire change script up into one big transaction so that if any problem occurred I could just roll every change back, but that does not appear to be possible with batch scripts like this in SQL Server.
So then I tried backing up the databases before running the scripts, so that if an error occurred I could simply restore the db, fix the problem, and re-run the fixed script. However, in order to restore a database I have to turn off our database mirroring, so this is not ideal either.
So my question is: what is the safest way to run batch scripts on a production database?
Is there some way to wrap the entire script in a transaction that I can roll back that I am not seeing?
Would it possibly be better for us to track and run separate script files so that if 1 file fails we can just shove it off in a failed directory to be looked at and continue running all other files?
Looking for advice and expertise.
thank you for your time.
Matt
The batch script should be run on your QC database first so that any errors are picked up before production.
The QC database should be identical to production or as close as it can be to identical.
Each script should trap errors and report the name of the script along with the location of the error using PRINT statements; then, if an error occurs when applying to production, you at least have the name of the script and the location of the error within it.
If your QC database is identical or very close, production errors should be very rare.
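On the "one big transaction" question: within a single batch (no GO separators) it can be done with SET XACT_ABORT ON and an explicit transaction; the caveat is that CREATE/ALTER PROCEDURE must be the only statement in its batch, which you can work around by wrapping it in EXEC. A minimal sketch with hypothetical object names, combining this with the PRINT-based reporting described above:
SET XACT_ABORT ON;  -- any run-time error dooms the transaction so nothing half-applies
BEGIN TRY
    BEGIN TRANSACTION;

    PRINT 'Script 001: schema changes';
    ALTER TABLE dbo.Orders ADD AuditedOn datetime NULL;  -- hypothetical change

    PRINT 'Script 002: proc changes';
    -- EXEC gives the proc its own batch inside the transaction:
    EXEC (N'ALTER PROCEDURE dbo.GetOrders AS SELECT OrderId FROM dbo.Orders;');

    COMMIT TRANSACTION;
    PRINT 'All changes committed.';
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    PRINT 'Failed: error ' + CAST(ERROR_NUMBER() AS varchar(10))
        + ' at line ' + CAST(ERROR_LINE() AS varchar(10))
        + ': ' + ERROR_MESSAGE();
END CATCH;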