MARS Backup - system state backup failing
I am having issues with a few servers where the system state is not getting backed up.
I followed the article below and tried changing the scratch folder to a different location, but it made no difference.
How do I change the cache location for the MARS agent?
Run this command in an elevated command prompt to stop the Backup engine:
Net stop obengine
If you have configured System State backup, open Disk Management and unmount the disk(s) with names in the format "CBSSBVol_".
By default, the scratch folder is located at \Program Files\Microsoft Azure Recovery Services Agent\Scratch
Copy the entire \Scratch folder to a different drive that has sufficient space. Ensure the contents are copied, not moved.
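For example, a sketch of the copy using robocopy from an elevated prompt (E:\Scratch is an assumed target; adjust the source drive to match your install):
rem Copy, not move: /E includes subfolders, including empty ones
robocopy "C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch" "E:\Scratch" /E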
Update the following registry entries with the path of the newly moved scratch folder.
Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config
Registry key: ScratchLocation
Value: New scratch folder location

Registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider
Registry key: ScratchLocation
Value: New scratch folder location
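If you prefer to script the registry change, something like this should work from an elevated prompt (a sketch; E:\Scratch is an assumed example path, and the value type is assumed to be REG_SZ):
reg add "HKLM\SOFTWARE\Microsoft\Windows Azure Backup\Config" /v ScratchLocation /t REG_SZ /d "E:\Scratch" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows Azure Backup\Config\CloudBackupProvider" /v ScratchLocation /t REG_SZ /d "E:\Scratch" /f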
Restart the Backup engine at an elevated command prompt:
Net stop obengine
Net start obengine
Run an on-demand backup. After the backup finishes successfully using the new location, you can remove the original cache folder.
Can I please have some advice? I have a few servers where the system state is not being backed up.
This is actually a failure in the Windows Server Backup operation. See this KB article for steps on how you can modify required registry keys to fix the Windows Server Backup failures.
https://support.microsoft.com/en-us/help/4053355/microsoft-azure-recovery-services-agent-system-state-backup-failure
I am having an issue with Azure Storage Emulator. I tried to re-initialise the database and got the error below.
This was after installing Visual Studio 2019 Preview, but this may just be a coincidence. I tried for an hour or so to get it running, then gave up and reset my machine with the "keep my files" option, re-installed Visual Studio 2017 and the Azure Tools, but still see the same problem.
I know a reset sounds a bit drastic, but VS 2019 broke my Azure Functions in VS 2017; they would not launch, so I wanted a clean install.
If I manually create the DB with sqllocaldb create (version 13.1.4001.0), the DB gets created fine but the init still fails with the same message.
Any ideas?
C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator>AzureStorageEmulator.exe init
Windows Azure Storage Emulator 5.7.0.0 command line tool
Found SQL Instance (localdb)\MSSQLLocalDB.
Creating database AzureStorageEmulatorDb57 on SQL instance '(localdb)\MSSQLLocalDB'.
Cannot create database 'AzureStorageEmulatorDb57' : The database 'AzureStorageEmulatorDb57' does not exist. Supply a valid database name. To see available databases, use sys.databases..
One or more initialization actions have failed. Resolve these errors before attempting to run the storage emulator again.
Error: Cannot create database 'AzureStorageEmulatorDb57' : The database 'AzureStorageEmulatorDb57' does not exist. Supply a valid database name. To see available databases, use sys.databases..
After resetting my machine (and keeping files), I ran into this issue. For me, I was unable to run an Azure function in Visual Studio 2019 due to an error around being unable to start the emulator.
It looks like I had the same permissions issue: (I presume) my new account after the reset did not have permission to touch the DB.
I resolved this by:
Deleting the Azure Storage Emulator DB file: %USERPROFILE%\AzureStorageEmulatorDb[number].mdf
Then running AzureStorageEmulator.exe start with admin rights
I was then able to run the Azure Function without issue.
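For reference, the two steps look roughly like this from an elevated command prompt (a sketch; the number in the DB file name varies by emulator version, hence the wildcard):
rem Remove the stale emulator database file, then start the emulator
del "%USERPROFILE%\AzureStorageEmulatorDb*.mdf"
cd /d "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator"
AzureStorageEmulator.exe start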
Stop the Azure Emulator if it is running.
Open SSMS and connect to your (localdb) instance.
Manually create the "AzureStorageEmulatorDb57".
Open a command prompt as Administrator.
Run the "AzureStorageEmulator.exe init".
Run your VS project.
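If you prefer the command line over SSMS, the same steps can be sketched like this (assuming sqlcmd is available; instance and database names as above):
cd /d "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator"
AzureStorageEmulator.exe stop
rem Manually create the database the emulator expects
sqlcmd -S "(localdb)\MSSQLLocalDB" -Q "CREATE DATABASE AzureStorageEmulatorDb57"
AzureStorageEmulator.exe init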
I was running into this same issue after installing LocalDb for SQL Server 2017. These steps helped me to resolve the problem I was facing:
Open a command line in C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator
Run AzureStorageEmulator.exe init /forceCreate
From checking my error logs (located at %USERPROFILE%\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\MSSQLLocalDB), I saw
2018-12-21 15:41:13.47 spid65 CREATE FILE encountered operating system error 5(Access is denied.) while attempting to open or create the physical file 'C:\Users\{username}\AzureStorageEmulatorDb59.mdf'.
This error led me to the following post: https://dba.stackexchange.com/questions/191393/localdb-v14-creates-wrong-path-for-mdf-files
From reading the answers there, I gathered that this is a bug in SQL Server 2017. Without access to the patch, the solution that worked for me was granting Everyone access to modify C:\Users. This was only an issue on my development laptop, so I could afford to make that security change.
Alternatively, as Andrii commented, install the CU13 hotfix for SQL Server 2017. After that, AzureStorageEmulatorDb<xxx>.mdf will be created in your user directory as it should.
I had this problem and I don't know why an AzureStorageEmulatorDb57_log.ldf was still present in my %USERPROFILE% directory when I deleted my MSSQLLocalDB instance, but after dropping that file the problem went away.
I came across this issue after I changed the user login on my machine. I had created the database under my previous user account. I copied the database files to the new user account, but it gave me this error. It seems to be a permission issue.
You need to find the saved location of the mdf and ldf file of this database. In my case it was stored in 'C:\Users\yourUserName'
Simply delete these files and run AzureStorageEmulator.exe init again and it will create the new mdf and ldf files for you.
After manually upgrading my MSSQL 2016 LocalDB to MSSQL 2019 following these instructions, I got the error mentioned as I was unaware that the Azure Storage Emulator uses LocalDB internally.
To fix it, I simply had to manually re-attach the database located in %UserProfile% with the following SQL command:
CREATE DATABASE [AzureStorageEmulatorDb510]
ON (FILENAME = 'C:\Users\<username>\AzureStorageEmulatorDb510.mdf'),
(FILENAME = 'C:\Users\<username>\AzureStorageEmulatorDb510_log.ldf')
FOR ATTACH;
Worked for me:
Delete any storage/SQL database related to the Azure emulator.
Run this command in the Storage Emulator path:
AzureStorageEmulator.exe init /server .
(Or your SQL instance; mine was ".".)
Check that you installed the Azure SDK with Visual Studio; if you didn't, you can add the feature.
You can locate the MDF and LDF files in your user profile directory. Just stop the emulator, copy those files to some other place, and delete them from the user profile directory.
Then run the emulator again and it will create new MDF and LDF files.
Then stop the emulator, copy the old files back, and restart the emulator. This way you won't lose any data. (A sketch of these steps follows.)
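As a sketch of the above (C:\EmulatorBackup is an assumed temporary location; move /Y overwrites the freshly created files when the old ones are put back):
cd /d "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator"
mkdir C:\EmulatorBackup
AzureStorageEmulator.exe stop
move /Y "%USERPROFILE%\AzureStorageEmulatorDb*.mdf" C:\EmulatorBackup\
move /Y "%USERPROFILE%\AzureStorageEmulatorDb*.ldf" C:\EmulatorBackup\
AzureStorageEmulator.exe start
AzureStorageEmulator.exe stop
move /Y C:\EmulatorBackup\AzureStorageEmulatorDb*.mdf "%USERPROFILE%"
move /Y C:\EmulatorBackup\AzureStorageEmulatorDb*.ldf "%USERPROFILE%"
AzureStorageEmulator.exe start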
I will help you with this. First of all, create a SQL Server LocalDB instance.
Then go to the Storage Emulator folder. The Storage Emulator is installed by default to C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator.
Then run AzureStorageEmulator.exe init /server with your instance name.
From the docs: AzureStorageEmulator.exe init /server localhost\SQLEXPRESS01
Open SSMS and connect to your (localdb) instance.
Manually create the "AzureStorageEmulatorDb...".
To add yet another answer, I did not have any MDF or LDF files. Instead, I only had a config file at %USERPROFILE%\AppData\Local\AzureStorageEmulator\AzureStorageEmulator.5.10.config. I also could not connect to my local (localdb) instance with SSMS.
I changed the SQLInstance value in that config file to be localhost rather than (localdb)\MSSQLLocalDB, and it started working.
You should have an app called Microsoft Azure Storage Emulator.
Start this application.
If the application indicates that it is running, run AzureStorageEmulator.exe stop first; otherwise run AzureStorageEmulator.exe start directly. It should create your database automatically; at least it did for me.
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-emulator
This seems to be because the mdf file already exists but LocalDB doesn't have it attached. You can delete and recreate as others have mentioned, but in my case I was able to just re-attach it and it worked fine.
Open SSMS to (localdb)\mssqllocaldb
Right click Databases
Choose Attach
Click Add
Select the existing MDF file (mine was in my user profile and named AzureStorageEmulatorDb510.mdf)
Click Ok
Then try running the emulator again.
This solution is not generally recommended, but you can try it.
I think the Azure Storage Emulator somehow cannot get full access to the LocalDB files, which are set up in a directory with limited permissions.
You can go to the folder's Properties > Security > Edit and grant full permission (for me the directory was under the user's AppData).
Then restart the emulator from the command line.
Now it works, but keep in mind that this is less secure later on.
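If you'd rather script the permission change than click through the Properties dialog, here is a hedged icacls sketch (the path is an assumption, based on the default LocalDB instance location mentioned in an earlier answer):
rem Grant the current user full control, inherited by new files and subfolders
icacls "%USERPROFILE%\AppData\Local\Microsoft\Microsoft SQL Server Local DB" /grant "%USERNAME%":(OI)(CI)F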
I initialized the DB instance and it succeeded, but my SQL Server is 2017.
Then I searched for a solution, and the doc said deleting the troublesome database will solve the problem. Maybe you can try following the steps in the doc.
I have a directory on a remote Linux machine where files are archived and kept for a certain period of time. I want to delete a file from the remote (Linux) machine using a Kettle transformation, based on some condition.
If the file does not exist, the job should not throw any error; but if the file exists at the remote location, the job should delete it, or raise an error if the deletion fails for some other reason, e.g. a permission issue.
Here, the file name will be retrieved as a variable from previous steps of the transformation, and the directory path of the archived files will be a fixed one.
How can I achieve this in Pentaho Kettle transformation?
Make use of the "Run SSH commands" utility to pass commands to your remote server.
Assuming you do a rm -f /path/file, it won't error for a non-existent file.
You can capture the output and perform error handling as well (Filter Rows and trigger the appropriate course of action).
Or you can mount the remote directory on the machine where Kettle runs, and try to delete the file as a regular local file.
Using SSH is, I think, non-trivial. It takes a lot of experimenting to find out the error types and to find a way to distinguish them: it might be an error with the SSH connection or an error deleting the file.
I have an intermittent issue when distributing indexes :( All servers are Windows Server 2008.
I have two servers to distribute to and one of them has failed twice with this error:
INFO: [MDEXHost1] Starting shell utility 'move_dgraph-input_to_dgraph-input-old'.
10-Jun-2015 06:08:36 com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript
SEVERE: Utility 'move_dgraph-input_to_dgraph-input-old' failed.
With a bit of further digging I've found this error in a log file in the PlatformServices\workspace\logs\shell folder:
Failed to move D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input to D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old: No such file or directory at -e line 1.
The state of the server is that it has a dgraph_input_new folder, but it's struggling to create the dgraph_input_old folder. The dgraph_input folder does exist, so the 'No such file or directory' is interesting.
The server has plenty of disk space for the operation and as it's intermittent I don't think it's file/folder permissions (otherwise it would fail all the time). I've even asked for on-access virus scanning to be disabled for those folders in case our virus scanner was locking files/folders.
I'm struggling to come up with a resolution to the issue, HALP!
EDIT: The forge process did stop the dgraph, but the Tomcat 6 process is still running. Is that normal? Could Tomcat be locking the folder?
EDIT: The task to move the folder is a bit of Perl that looks like this:
perl.exe -e "use strict; use File::Spec; use File::Copy; use File::Glob qw/:glob/; my $source = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input'; $source =~ s/[\\\/]+$//; my @sources = bsd_glob($source); foreach my $file (@sources) { my @fromPath = File::Spec->splitdir($file); if (scalar @fromPath eq 0) { die \"Failed to split path: $!\"; } my $fromRelative = $fromPath[scalar @fromPath - 1]; my $toFile = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old'; if ( -d $toFile ) { $toFile = File::Spec->catdir($toFile, $fromRelative); } my $res = move($file, $toFile); if (! $res) { die \"Failed to move $file to $toFile: $!\"; } }"
EDIT: It seems to be a plain permission issue: I can't rename the folder without elevating myself to administrator. The service is running as a user who is in the Administrators group.
What could happen to make this folder admin only?
I know this question dates back three years, but I recently experienced the same issue and thought this could help others.
The logs were interesting, as GogLlundain pointed out.
The way to solve this is:
Stop the MDEX server in the Workbench, which will in parallel kill the dgraph process too.
If you open Task Manager on the server where the MDEX is defined, you may find two dgraph.exe processes running.
Kill the older task (i.e. dgraph.exe), then run the baseline script; your process will run smoothly. (The commands below show one way to find and kill the older process.)
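A sketch of those commands from an elevated prompt (1234 is a placeholder; replace it with the PID of the older instance):
rem List running dgraph.exe processes and their PIDs
tasklist /FI "IMAGENAME eq dgraph.exe"
rem Force-kill the older instance by PID
taskkill /PID 1234 /F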
I am using the 7zip standalone .exe to unzip a file. I am using the Execute Process task for this. I have tested this over and over again on multiple machines and I know it works (at least in debug mode/Visual Studio). I have uploaded this package to the server. I have created a job that calls said package from the Package Store. The package is not able to find the .exe no matter where I put it.
My first thought was to put the .exe on the C:\ drive, which failed. I have also failed in my attempts to place the .exe on a network location that the account the package is running under has full control over.
Basically, has anybody else had issues getting the Execute Process Task to find an executable when the package is uploaded to the server?
The error message is
Can't find 7za.exe in directory C:\7zip
I'll risk a downvote for being wrong, but I believe you have a permission issue.
You say it runs fine on other servers from BIDS; try it without BIDS. Call it from the command line on a box where it works.
dtexec.exe /file C:\HereComesTheUnzipper.dtsx
If that works, then repeat the step on the troublesome server. RDC into the box and try again
dtexec.exe /ser localhost /sq HereComesTheUnzipper
If that still works, then you are looking at an issue with the job. What account is the SQL Agent service running as? Is the SSIS job step running as a particular set of credentials? If so, is it a SQL Server login (which wouldn't map to anything on the physical box)? Regardless of what your answer is, the resolution will be to ensure the account has access to the following (a hedged example follows the list):
7z.exe
whatever scratch area 7zip may use while unpacking files (I assume %temp%)
the output folder (C:\bin\7z.exe -e e:\data\MyThing.7z)
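As a sketch, granting that access could look like the following (DOMAIN\SqlAgentSvc is a hypothetical account; substitute whatever account the job step actually runs as, and adjust paths to your layout):
rem Read/execute on the folder holding the 7-Zip executable (path from the error message)
icacls "C:\7zip" /grant "DOMAIN\SqlAgentSvc":(OI)(CI)RX
rem Modify on the extraction output folder (D:\output is an assumed example)
icacls "D:\output" /grant "DOMAIN\SqlAgentSvc":(OI)(CI)M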