I got the error below during my scheduled backup. The backup is stored on my server itself, not on S3. I am on OpsCenter 5.2.2.
This does not happen daily. It happened the first time I scheduled the backup, and today is the second time. Also, the troubleshooting link shown in the image is broken. Why does this occur, and how can I prevent it from happening?
We are trying to restore RavenDB from the backup file. We are using Raven Studio. The restore process copied the index files from the backup to the new location, but it's stuck at the step below:
Esent Restore: Restore Begin
Esent Restore: 18 1001
I couldn't see any other logs or exceptions.
The backup size is around 123 GB.
How do I fix this stuck process?
After lots of investigation, I found the issue.
It turns out the IIS application pool was configured to recycle itself every 20 minutes, so after 20 minutes IIS recycled the worker process and killed the restore.
I found the issue by watching Resource Monitor -> CPU -> Processes. You should be able to see the Raven process doing loads of write operations during the restore, and you can see the moment the service gets stopped.
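If you hit the same problem, one way to keep IIS from recycling the pool mid-restore is to disable the periodic restart and idle timeout on the application pool. This is only a rough sketch with appcmd; the pool name "RavenDB" is a placeholder for whatever your site actually uses, and 00:00:00 disables the timer:
%windir%\system32\inetsrv\appcmd.exe set apppool "RavenDB" -recycling.periodicRestart.time:00:00:00
%windir%\system32\inetsrv\appcmd.exe set apppool "RavenDB" -processModel.idleTimeout:00:00:00
Remember to put the original values back once the restore has finished.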
I lost my system drive a day ago and with it all my Duplicati settings/jobs (not the backups - they are OK).
Restoring works fine, but I'd like to restore/recreate my jobs from an existing backup and continue backing up to that location.
Is there a way to do that? (I couldn't find it in the web interface or the online documentation.)
I created the job with the same settings (default settings and the same AES password) and tried to start a backup; Duplicati complained about a missing database and suggested a DB repair.
After the DB repair ran, everything was back to normal.
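For anyone else in this situation, the repair can also be run from the command line. This is only a sketch - the backend URL, database path and passphrase below are placeholders for your own job settings:
Duplicati.CommandLine.exe repair "file://D:\Backups\MyJob" --dbpath="C:\Users\me\AppData\Local\Duplicati\MYJOB.sqlite" --passphrase="my-aes-password"
As far as I can tell, this rebuilds the local job database from the dlist/dindex files at the destination, which is what the repair button in the web UI does as well.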
I am trying to backup a large filesystem (~800 GB) from Ubuntu 16.04 to Amazon S3 using Duplicity. It looks like it backed up most of the filesystem, but keeps getting stuck towards the end.
I have run this command several times now and it keeps failing/aborting in the same place (about 8 hours into the backup):
$ duplicity --no-encryption --s3-use-ia --archive-dir /var/abc/tmp --tempdir /var/abc/tmp --exclude /var/abc/tmp /var/abc s3://s3-us-west-2.amazonaws.com/mybucket
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup left a partial set, restarting.
Last full backup date: Tue Jul 25 11:13:45 2017
RESTART: Volumes 32085 to 32085 failed to upload before termination.
Restarting backup at volume 32085.
Restarting after volume 32084, file backups/resourcespace.20170730.sql.gz, block 399
Attempt 1 failed. error: [Errno 104] Connection reset by peer
Attempt 2 failed. error: [Errno 104] Connection reset by peer
Attempt 3 failed. error: [Errno 104] Connection reset by peer
Attempt 4 failed. error: [Errno 104] Connection reset by peer
Giving up after 5 attempts. error: [Errno 104] Connection reset by peer
After my first attempt I tried upgrading duplicity to the latest PPA and am now running 0.7.13.1. Tried again -- same failure.
Next I upgraded boto from 2.38.0 to 2.48.0 (via PIP) and am still seeing the same failure.
I found some older posts suggesting that this used to happen due to some sort of 5 GB limitation on the Amazon side; however, those posts also claim it was supposed to be fixed in the 0.7 series of Duplicity (which I am running).
Any suggestions on how to proceed with further troubleshooting would be much appreciated, thanks!
Wanted to post a follow-up here. I did manage to get this working finally, although the precise cause is a little unclear.
Originally I had a partial/aborted backup, I upgraded duplicity and then boto, and then tried to resume the aborted backup to see if I could get it to complete.
After giving up on that, I did the following:
Deleted the original backup to start over with all updated code.
Added --volsize 1024 to Duplicity to reduce the number of volumes being recorded.
Added --s3-use-multiprocessing to try to make things go faster.
One or more of those steps cured the Errno 104 problem and now my backups complete. Things looked different in my temp dirs when I ran the new backup, so I am highly suspicious that perhaps resuming the old backup from the older code was causing issues. But it could have easily been something related to the volsize (it went from 20MB to 1024MB).
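For reference, the command ended up looking roughly like this - the original invocation plus the two new options (bucket name and paths are of course placeholders):
$ duplicity --no-encryption --s3-use-ia --s3-use-multiprocessing --volsize 1024 --archive-dir /var/abc/tmp --tempdir /var/abc/tmp --exclude /var/abc/tmp /var/abc s3://s3-us-west-2.amazonaws.com/mybucket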
My signature file is still quite large at 7.7GB but is no longer causing issues.
Everything appears to be working fine now with the clean backup.
I was having the same exact issue. It turns out my AWS access & secret keys were wrong. Updating them fixed the problem.
That may not be your case, but it could help others hitting the same error.
I'm using the latest stable version (3.0.3660) on a VM on Windows Azure and would like to enable periodic backup. I have tried to enable both local backup and backup to Azure, but the GUI doesn't seem to persist the changes. The modal dialog says "Saving..." but nothing more.
Is there a log for this so that I can troubleshoot what doesn't work?
/Erik
I tried it too, and the database was non-responsive for several minutes (a co-worker was waiting for tens of minutes). But after waiting a while it actually does something. I configured the Azure backup, and that went wrong because it couldn't upload a blob of that size. The error was logged and can be found in the Studio under Status > Logs.
Running the server standalone (instead of running as a service) doesn't give any additional feedback either.
I managed to get it to work by setting "Raven/AnonymousAccess" to Admin and then saving the changes; not sure why, since I connected with an API key that should have full access.
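For what it's worth, the setting can also be changed in the server's configuration file (Raven.Server.exe.config when running as a service, or web.config when hosted under IIS); something like:
<add key="Raven/AnonymousAccess" value="Admin" />
Consider setting it back to its previous value once the periodic backup is configured, since Admin-level anonymous access opens the server up.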
I have a database showing up in SQL Enterprise Manager as "(Restoring...)"
If I run sp_who, there is no restore process.
The disk and CPU activity on the server is very low.
I think it is not restoring at all.
How can I get rid of this?
I've tried renaming the underlying MDF file, but even when I do "NET STOP MSSQLSERVER" it tells me the file is open.
I've tried using PROCEXP to find what process has the file open, but even the latest PROCEXP can't seem to do that on Windows Server 2003 R2 x64. The lower pane view is blank.
In the SQL Server log it says "the database is marked RESTORING and is in a state that does not allow recovery to be run"
SQL Server has two backup types:
Full backup: contains the entire database
Transaction log backup: contains only the changes since the previous log backup (or since the full backup, if no log backup has been taken yet)
When restoring, SQL Server asks whether you want to restore additional logs after the full backup. If you choose that option, called WITH NORECOVERY, the database will be left in the Restoring state, waiting for more transaction logs to be restored.
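A minimal restore sequence using both backup types might look like this (database name and file paths are placeholders):
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb_full.bak' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb_log1.trn' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = N'C:\Backups\MyDb_log2.trn' WITH RECOVERY
The database stays in the Restoring state until the final restore runs WITH RECOVERY.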
You can force it out of Restoring mode with:
RESTORE DATABASE <DATABASE_NAME> WITH RECOVERY
If this command gives an error, detach the database, remove the MDF files, and start the restore from scratch. If it keeps failing, your backup file might be corrupted.
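You can check which state the database is actually in before and after with something like the following (the database name is a placeholder):
SELECT DATABASEPROPERTYEX('YourDB', 'Status') AS DatabaseStatus
It should report RESTORING while stuck and ONLINE once recovery has completed.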
Here's a screenshot of the restore options, with the default selected. The second option will leave the database in Restoring state.
Image of the restore options http://img193.imageshack.us/img193/8366/captureu.png
P.S.1. Are you running the 64-bit version of Process Explorer? Verify that you see procexp64.exe in Task Manager.
P.S.2. This is more like a question for Server Fault.
The WITH RECOVERY option is used by default when the RESTORE DATABASE / RESTORE LOG commands are executed. If you're stuck in the "restoring" state, you can bring the database back online by executing:
RESTORE DATABASE YourDB WITH RECOVERY
GO
You can look for more options and some third-party tools in this SO post: https://stackoverflow.com/a/21192066/2808398
If you are trying to get rid of the lock on the file, I would recommend getting Unlocker: http://www.emptyloop.com/unlocker/
It'll give you the option to unlock the file or kill the process that has locked it. Run this on the MDF and LDF files.
Another option is to try to detach the files from Enterprise Manager or SQL Server Management Studio and then reattach the database. You can try this before running Unlocker to see if SQL Server will just release the MDF and LDF files.
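The T-SQL equivalent of the detach/reattach described above is roughly as follows (database name and file paths are placeholders):
EXEC sp_detach_db 'YourDB'
EXEC sp_attach_db 'YourDB', N'C:\Data\YourDB.mdf', N'C:\Data\YourDB_log.ldf'
Whether the detach succeeds while the database is marked as restoring may vary, so treat this as a sketch rather than a guaranteed fix.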
CAUTION: If you kill the process, you might lose data or the data might get corrupted, so use this only if you just want to get rid of the database and you have a good, tested backup.