xp_delete_file not removing old backups - sql-server-2012

This command is not deleting backups:
EXEC xp_delete_file 0,N'F:\path\cms',N'*.bak',N'2014-01-30T21:08:04'
Also tried
EXEC xp_delete_file 0,N'F:\path\cms',N'bak',N'2014-01-30T21:08:04'
and
EXEC xp_delete_file 0,N'F:\path\cms',N'.bak',N'2014-01-30T21:08:04'
SQL Server Agent has permissions on the folder.

Did you try:
EXEC xp_delete_file 0,N'F:\path\cms\',N'bak',N'2014-01-30T21:08:04';
--- this slash may be important ---^
That said, you should simply not be using this stored procedure to clean up your backup folders. It is undocumented and unsupported. Take a look at this Connect item, which complains about exactly the same problem, seven years ago to the day. Note that it is closed as "won't fix", and of particular interest is this official statement from Terrence Nevins of Microsoft:
The stored procedure was never intended to be called by an end user and the chances that you could ever be successful in having it do what you intend are almost zero. I took a quick peek at the implementation we are using and it takes some very specialized arguments for a very specific task.
If you determine that you do need to access the file system directly from a stored proc, then I would imagine you would need to write that yourself in .NET. Or perhaps there is already a vendor that provides this.
We don't document this XP or promote its usage by anyone for very good reasons. Most of all, they can 'go away' from release to release.

Solved: both the SQL Server Agent and SQL Server service accounts need read/write/delete permissions on the backup folder.

The path name must end with a \ and the extension must not contain the dot. Then it will work.
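For illustration, a minimal sketch combining those two fixes, using the values from the question. xp_delete_file is undocumented, so the argument meanings below are as commonly reported rather than guaranteed:
EXEC xp_delete_file
    0,                          -- 0 = backup files
    N'F:\path\cms\',            -- folder path ending with a backslash
    N'bak',                     -- extension without the dot
    N'2014-01-30T21:08:04';     -- delete files older than this date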

Make sure that you have "Full Control" on the directory where your backups are. Unfortunately, running xp_delete_file doesn't return an error if you don't have the correct privileges, nor do you see anything in your SQL Server Agent log files.


Can I undo the last few transaction, or revert to yesterday in MariaDB?

I have an SQL file containing several commands. When I need to make a correction to my application database that the application can't yet do itself, I use DBVis to select and execute the command I need (e.g. to delete an incorrect entry). The problem is that the button to run the whole page is right next to the button to run a selected command, so I just dropped and re-created my table, losing all my data. Is there a way to undo this?
I'm looking to either 'undo' each command until I get back to the right place, or revert back to yesterday, where I know everything was correct.
Thanks!
Yes, you can, if...
your administration tool set autocommit=OFF by default: then you can just execute a ROLLBACK (or simply shut down your administration tool).
If that doesn't work, check whether your binary log was enabled, and restore with the mysqlbinlog tool.
If none of the above solutions works, use your (probably nonexistent) backup for restoring.
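As a minimal sketch of checking whether the binary-log route is even available before reaching for the mysqlbinlog utility (note that DROP TABLE is implicitly committed in MariaDB, so a plain ROLLBACK usually won't undo it):
SHOW VARIABLES LIKE 'log_bin';   -- ON means binary logging is enabled
SHOW BINARY LOGS;                -- lists the binlog files you would replay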

SQL Server 2012 installed copy showing problems

My problem is like this: I had a copy of SQL Server 2012 installed on my machine. It's been there for over 3 years without any glitches at all. Just 4-5 days ago, a problem sprouted up. When I started Management Studio it told me that
msdb got corrupted so it cannot be opened.
The complete message is something like this:
Cannot display policy health state at the server level, because the user doesn't have permission. Permission to the database msdb is required for this feature to work correctly.
So what could be wrong here? What sudden changes/anomalies could have crept in that have made this unstable? Someone told me it could be due to a wide range of possibilities; the reason could be anything. Even some NuGet packages affect the database. Initially I thought this could have been an issue with logins, permissions, etc., so I tried running as administrator as well. No, it did not cure the problem. If I try to create a new database, it simply tells me that I can't do it. The message is something like this:
An exception occurred while executing a T-SQL statement or batch.[Microsoft.SqlServer.ConnectionInfo]. Database msdb cannot be opened. It has been marked as SUSPECT by recovery. [Microsoft Sql Server, Error:926]
How do I recover from this? Can you provide me some guidance, or a clue where precisely to look for hints of the problem? All my work is stalled. Any kind of assistance in recovering my ailing SQL Server installation will be humbly received.
So, I'm requesting you all to show me the way.
Thanks in anticipation.
I fixed mine with Solution C from the following website. My MSDB was corrupt and not loading, so I stopped the services and replaced it with the files from the template in the SQL Server directory.
https://www.mssqltips.com/sqlservertip/3191/how-to-recover-a-suspect-msdb-database-in-sql-server/
"The templates are saved in "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\Templates" (the path varies by version and install choices, this is the default for SQL Server 2012). By shutting down the instance and replacing the bad MSDB data (msdbdata.mdf) and transaction log (msdblog.ldf) files with the template files I was able to restart the instance without error!" (just incase the website link doesn't work I have quoted it here).
Fissh
If your MSDB is corrupted, restore from your most recent backup. That's the safest thing to do and that's why we have backups to begin with.
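A minimal sketch of that restore, assuming a recent full backup of msdb exists (the file path is a placeholder) and that SQL Server Agent has been stopped first:
-- Restore msdb over the corrupt copy; REPLACE overwrites the suspect database.
RESTORE DATABASE msdb
FROM DISK = N'F:\backups\msdb_full.bak'
WITH REPLACE;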
If you do not have a backup of MSDB, you have a couple of options.
Recreate it. Detailed instructions here: https://msdn.microsoft.com/en-us/library/dd207003(v=sql.110).aspx#CreateMSDB. This is the best way to ensure you get a clean, functional MSDB and is the fastest way to get up and running again. IMPORTANT: Doing this means you lose all jobs, backup history, etc. that are stored in MSDB. Remember to recreate all maintenance jobs after you're done, or else you're just waiting for the next thing to fall over (e.g. transaction log backups no longer run, the transaction logs grow until you run out of disk space, and now you can't run any queries that commit transactions).
DBCC CHECKDB WITH REPAIR_ALLOW_DATA_LOSS is another option, which you'll probably find if you google/bing the issue. This might work, but it is not recommended. The problem is you don't really know what will be lost: it works by deleting whatever it can't read and then fixing the links to get the database physically functional again. Once that's done, you'll have to go back and figure out what remains and is still functional. That is tedious and error prone. Besides, if you're going to do this very thorough manual check to ensure all your jobs are intact, you're better off just re-creating them on a new, clean MSDB.
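If you do go down that road, a hedged sketch of the usual sequence (last resort only; REPAIR_ALLOW_DATA_LOSS can silently discard data, and SQL Server Agent should be stopped first):
-- Check the current state, then force the suspect database into a repairable mode.
SELECT name, state_desc FROM sys.databases WHERE name = 'msdb';

ALTER DATABASE msdb SET EMERGENCY;
ALTER DATABASE msdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('msdb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE msdb SET MULTI_USER;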

Another Oracle SQL monitoring tool

This has probably been asked before, but I'm looking for a utility which can:
Identify a particular session and record all activity.
Able to identify the sql that was executed under that session.
Identify any stored procedures/functions/packages that were executed.
And able to show what was passed as parameters into the procs/funcs.
I'm looking for an IDE that's lightweight, fast, and available, and won't take 2 days to install, i.e. something I can download, install, and use in the next hour.
Bob.
If you have a license for the Oracle Diagnostics/Tuning Packs, you can use the Oracle Active Session History (ASH) feature.
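For example, a hedged sketch of pulling recent activity for one session from ASH (requires Diagnostics Pack licensing; the SID and serial# bind values are placeholders):
SELECT sample_time, sql_id, event, session_state
FROM   v$active_session_history
WHERE  session_id = :sid
AND    session_serial# = :serial#
ORDER  BY sample_time;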
The easiest way I can think of to do this is probably already installed in your database - it's the DBMS_MONITOR package, which writes trace files to the location identified by user_dump_dest. As such, you'd need help from someone with access to the database server to access the trace files.
But once you've identified the SID and SERIAL# of the session you want to trace, you can just call:
EXEC dbms_monitor.session_trace_enable (:sid, :serial#, FALSE, TRUE);  -- waits => FALSE, binds => TRUE
To capture all the SQL statements being run, including the values passed in as binds.
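A few companion steps, as a sketch: find the target session, stop the trace when you're done, and locate the trace directory mentioned above (the username filter is a placeholder).
-- Find the SID and SERIAL# of the session to trace.
SELECT sid, serial#, username, program
FROM   v$session
WHERE  username = :app_user;

-- Stop tracing when finished.
EXEC dbms_monitor.session_trace_disable (:sid, :serial#);

-- The trace files are written to the directory shown by:
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';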

undo changes to a stored procedure

I altered a stored procedure and unknowingly overwrote some changes that were made to it by another developer. Is there a way to undo the changes and get the old script back?
Unfortunately I do not have a backup of that database, so that option is ruled out.
The answer is YES, you can get it back, but it's not easy. All databases log every change made to them. You need to:
Shutdown the server (or at least put it into read-only mode)
Take a full backup of the server
Get a copy of all the db log files going back to before the accident happened
Restore the back up onto another server
Using db admin tools, roll back through the log files until you "undo" the accident
Examine the restored code in the stored proc and code it back into your current version
And most importantly: GET YOUR STORED PROCEDURE CODE UNDER SOURCE CONTROL
Many people don't grok this concept: You can only make changes to a database; you can't roll back the stored proc version like you can with application code by replacing files with their previous versions. To "roll back", you must make more changes that drop/define your stored proc.
Note to nitpickers: By "roll back" I do not mean "transaction roll back". I mean you've made your changes and decide once the server is back up that the change is no good.
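In SQL Server terms, the log roll-forward in the steps above amounts to a point-in-time restore of a copy of the database. A hedged sketch, assuming the database is in full recovery and you have log backups covering the time of the accident; database names, logical file names, paths, and the STOPAT time are all placeholders:
RESTORE DATABASE MyDb_PointInTime
FROM DISK = N'F:\backups\MyDb_full.bak'
WITH NORECOVERY,
     MOVE 'MyDb'     TO N'F:\data\MyDb_PointInTime.mdf',
     MOVE 'MyDb_log' TO N'F:\data\MyDb_PointInTime_log.ldf';

-- Roll forward to just before the accidental ALTER PROCEDURE.
RESTORE LOG MyDb_PointInTime
FROM DISK = N'F:\backups\MyDb_log.trn'
WITH RECOVERY, STOPAT = '2014-01-30T20:55:00';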
"Is there a way to undo the changes and get the old script back?"
Short answer: Nope.
:-(
In addition to the sound advice to either use a backup or recover from source control (and if you're doing neither of those things, you need to start), you could also consider getting SSMS Tools Pack from @MladenPrajdic. His Management Studio add-in allows you to keep a running history of all the queries you've worked on or executed, so it is very easy to go back in time and see previous versions. While that doesn't help you if someone else worked on the last known good version, if your entire team is using it, anyone can go back and see any version that was executed. You can dictate where it is saved (to your own file system, a network share, or a database), and fine-tune how often auto-save kicks in. Really priceless functionality, especially if you're lazy about backups and/or source control (though again, I stress, you should be doing these things before you touch your production server again).
You could look through the cached execution plans and try to find the one where your colleague made his changes and run the relevant parts again.
EDIT
Although Bohemian looks to have a good answer if you've got the changes in the transaction log, this is what I'm talking about. Review the SQL text for the plan.
-- Search the plan cache for the text of previously executed statements.
SELECT cached.*,
       sqltext.*
FROM sys.dm_exec_cached_plans AS cached
CROSS APPLY sys.dm_exec_sql_text (cached.plan_handle) AS sqltext;
But as squillman points out, there is no execution plan for DDL.
You won't be able to get it back from the database side of things. Your options at this point are pretty much limited to 1) recover from backup, 2) go to source control or 3) hope that someone else has a copy still up in an editor somewhere or saved to a file.
If none of these is an option for you, then here's the obligatory "you should take regular backups and use source control"....
I'm way late to the game on this, but I did this same thing this morning and found I had forgotten to save my script at some point in the past and needed to recover it. (It will be in source control after I get done fixing this!!!)
Some people mentioned restoring from a backup, but no one really mentioned how easy this is if you have a backup. Moreover, you aren't locked into rolling back the production database. I think this is key, and assuming you have a backup, I would say this is a much better alternative to what has been voted up as the best answer.
All you have to do is take your backup and restore it to a new database. Pull out the stored procedure you are looking for and voila, you've recovered the missing code.
Don't forget to drop the newly created database after you've recovered the missing file.
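A hedged sketch of that side-by-side restore (database name, logical file names, paths, and the procedure name are placeholders):
RESTORE DATABASE MyDb_Recovery
FROM DISK = N'F:\backups\MyDb_full.bak'
WITH MOVE 'MyDb'     TO N'F:\data\MyDb_Recovery.mdf',
     MOVE 'MyDb_log' TO N'F:\data\MyDb_Recovery_log.ldf';

-- Read the old procedure text out of the restored copy.
USE MyDb_Recovery;
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.MyProcedure'));

-- Clean up when done.
USE master;
DROP DATABASE MyDb_Recovery;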
I had the same problem, and I don't have the confidence to go restoring from log files to another server. I was pretty distraught until I realised the solution was very simple...
Press Ctrl-Z over and over until I had undone my changes, and then run the ALTER PROCEDURE again.
Admittedly I was pretty lucky that I still had it there to revert to but it really is the easiest fix. Probably a bit late now though.
If you have the stored procedure scripted out from the Management Studio Object Explorer, this will work.
Before you expand, collapse, or refresh Object Explorer, scroll to the stored procedure you had opened and script it again as CREATE or ALTER. Because Object Explorer hasn't refreshed yet, you get the previous version of the procedure. This has always been my life saver.

Why aren't my SQL Server 2005 backups being deleted?

I've got a maintenance plan that executes weekly in the off hours. It's always reporting success, but the old backups don't get deleted. I don't want the drive filling up.
DB Server info: SQL Server Standard Edition 9.00.3042.00
There is a "Maintenance Cleanup Task" set to
"Search folder and delete files based on an extension"
and "Delete files based on the age of the file at task run time" is checked and set to 4 weeks.
The only thing I can see is that my backups are each given their own subfolder and that this is not recursive. Am I missing something?
Also: I have seen the issues pre-SP2, but I am running service pack 2.
If you make your backups in subfolders, you have to specify the exact subfolder for deleting.
For example:
You make the backup by choosing the option that says something like "Make one backup file for each database" and checking the box that says "Create subfolder for each database".
(I work with a German version of SQL Server, so I am translating the option names into English here.)
The specified folder is H:\Backup, so the backups will actually be created in the folder H:\Backup\DatabaseName.
And if you want the Maintenance Cleanup Task to delete the backups via "Delete files based on the age of the file at task run time", you have to specify the folder H:\Backup\DatabaseName, not H:\Backup !!!
This is the mistake that I made when I started using SQL Server 2005 - I put the same folder in both fields, Backup and Cleanup.
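To double-check which folder the backups actually land in (and therefore what to put in the cleanup task), a quick sketch against the backup history stored in msdb:
SELECT TOP (20)
       bs.database_name,
       bs.backup_finish_date,
       bmf.physical_device_name   -- full path of each backup file
FROM   msdb.dbo.backupset AS bs
JOIN   msdb.dbo.backupmediafamily AS bmf
       ON bmf.media_set_id = bs.media_set_id
ORDER  BY bs.backup_finish_date DESC;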
My understanding is that you can only include the first level of subfolders. I am assuming that you have that check-box checked already.
Are your backups deeper than the just one level?
Another thought: do you have one single maintenance plan that deletes backups for multiple databases? I ask because the only way I can see to do that would be to point the task at a folder one level higher, in which case "include first-level subfolders" would not reach deep enough.
The way I have mine set up is that the Maintenance Cleanup Task is part of my backup process. So once the backup completes for a specific database, the Maintenance Cleanup Task runs on that same database's backup files. This allows me to be more specific about the directory, so I don't run into the directory structure being too deep. Since I have the criteria set the way I want, items don't get deleted until I am ready for them to be deleted either way.
Tim
Make sure your maintenance plan does not have any errors associated with it. You can check the error log under the SQL Server Agent area in SQL Server Management Studio. If there are errors during your maintenance plans, then it is probably quitting before it starts to delete the outdated backups.
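For example, a sketch of checking recent job history in msdb for failed steps:
SELECT TOP (20)
       j.name, h.step_name, h.run_date, h.run_time, h.run_status, h.message
FROM   msdb.dbo.sysjobhistory AS h
JOIN   msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE  h.run_status <> 1            -- 1 = Succeeded
ORDER  BY h.instance_id DESC;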
Another issue could be the "workflow" of the maintenance plan.
If your plan consists of more than one task, you have to connect the tasks with arrows to define the order in which they will run.
Possible issue #1:
You forgot to connect them with arrows. I just tested that - the job runs without any error or warning, but it executes only the first task.
Possible issue #2:
You defined the workflow in a way that the cleanup task will never run. If you connect two tasks with an arrow, you can right-click on the arrow and specify whether the second task will run always, or only when the first one does/does not complete successfully (this changes the color of the arrow; possible colors are red/green/blue). Maybe the backup works, and then the cleanup never runs because it will only run when the backup fails?