Restore Embedded RavenDB on top of existing data

I'm trying to do RavenDB backup/restore from within the application. Raven runs in embedded mode. I've got the backups working fine; now I need to do a restore.
The problem with the restore is that I'd like to restore into the same location that is currently used by the embedded storage. My plan was to shut down the embedded storage, delete all the files in that location, and restore there:
var configuration = embeddedStore.Configuration;
var backupLocation = // location of a backup folder
var databaseLocation = // location of current DB
// shutdown the storage???
documentStore.Dispose();
// now, trying to delete the directory, I get exception "Can't delete the folder it is in use by other process"
Directory.Delete(databaseLocation, true); // <-- exception here
Directory.CreateDirectory(databaseLocation);
DocumentDatabase.Restore(configuration, backupLocation, databaseLocation, s => { }, defrag: true);
(The full source on GitHub)
The problem is with shutting down the storage. Judging from the exception I get, the engine is not actually shut down, because I can't delete the folder:
The process cannot access the file '_0.cfs' because it is being used by another process.
The application is MVC5. The issue is that the DB location is set in web.config and I don't really want to modify it in any way, so the restore has to go into the same location as the existing DB.
What is the correct way to restore an embedded database into the same location as the existing DB?

One workaround is to initialize RavenDB (properly, in Application_Start) behind a condition that can prevent RavenDB from starting:
if (WebConfigurationManager.AppSettings["RavenDBStartMode"] == "Start")
    RavenDB.initialize();
To restore the DB, change "RavenDBStartMode" to "DontStart" in web.config so that the application pool restarts, and then start your restore operation:
string dbLocation = Server.MapPath("your database location");
string backupLocation = Server.MapPath("your backup location");
System.IO.Directory.Delete(dbLocation, true); // the DB location is a directory, so Directory.Delete rather than File.Delete
DocumentDatabase.Restore(new RavenConfiguration() { DataDirectory = dbLocation },
    backupLocation, dbLocation, x => { }, defrag: false);
RavenDB.initialize();
Finally, change "RavenDBStartMode" back to "Start" so that RavenDB can start again on subsequent application restarts.
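For completeness, a minimal sketch of flipping that setting from code; saving web.config also recycles the application pool, which is what triggers the restart. The helper below is an assumption, not part of the original answer:
using System.Web.Configuration;

// Hypothetical helper: rewrites the appSetting in web.config.
// Saving the configuration file also recycles the application pool.
static void SetRavenStartMode(string mode)
{
    var config = WebConfigurationManager.OpenWebConfiguration("~");
    config.AppSettings.Settings["RavenDBStartMode"].Value = mode;
    config.Save();
}

// Usage: SetRavenStartMode("DontStart") before the restore,
// then SetRavenStartMode("Start") once the restore has completed.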

Related

Core Data + CloudKit Migration: Cannot create or modify field [...] in record [...] in production schema

I use NSPersistentCloudKitContainer to sync Core Data with CloudKit. To prepare for a new migration, I have created a new model version of the xcdatamodel and marked it as "current". I created a new entity and added a relationship from another entity. Nothing spectacular, and suitable for a lightweight migration, I thought.
Let's name this new entity: EntityNew
This is my code to initialize the NSPersistentCloudKitContainer:
lazy var persistentContainer: NSPersistentContainer = {
    let container = NSPersistentCloudKitContainer(name: "MyContainerName")
    container.loadPersistentStores(completionHandler: { _, error in
        guard let error = error as NSError? else { return }
        fatalError("###\(#function): Failed to load persistent stores: \(error)")
    })
    container.viewContext.automaticallyMergesChangesFromParent = true
    return container
}()
shouldMigrateStoreAutomatically and shouldInferMappingModelAutomatically are set to true by default.
Everything worked fine locally. No errors occurred during the migration.
The problems started when I created a new instance of EntityNew:
let newItem = EntityNew(context: context)
newItem.someAttribute = "..." // someAttribute is a placeholder; the original sets one of EntityNew's fields here
saveContext()
newItem was created locally without any problems, but iCloud sync stopped working from that moment on. The following error appeared in the console:
"<CKRecordID: 0x283fb1460; recordName=2E2209A1-F9F6-4DF2-960D-2C31F764ED05, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a5950: \"Batch Request Failed\" (22/2024); server message = \"Atomic failure\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[MyContainerID]\">"
"<CKRecordID: 0x283fb1a00; recordName=3145C837-D80D-47E0-B944-DBC6576A9B0A, zoneID=com.apple.coredata.cloudkit.zone:__defaultOwner__>" = "<CKError 0x2830a4000: \"Invalid Arguments\" (12/2006); server message = \"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema\"; uuid = ADA626F4-160E-49FE-A0BD-2198E5FBD09A; container ID = \"iCloud.[ContainerID]\">";
"Cannot create or modify field 'CD_[Fieldname in EntityNew]' in record 'CD_[OtherEntityName]' in production schema"
CloudKit tries to modify the field CD_[Fieldname in EntityNew] (which is correct) on the record CD_[OtherEntityName], which is not the entity I created above! So Core Data tries to modify the wrong entity! This behavior does not happen for all fields (approx. 5 out of 10). I checked the local SQLite file on my iPhone, but the local tables seem correct. The phenomenon can be observed in both the Development and the Production iCloud container environments. If I start with an empty database (which already contains the new entity, so no migration is necessary), the synchronization works.
What did I miss? Any ideas?
Thank you!

ArcGIS Offline map layer changes synchronization

In my WPF application I'm trying to use offline map functionality. Right now my feature service is configured for data sync, and I'm able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);
_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += (object sender, EventArgs e) =>
{
    UpdateProgressBar();
};
_generateGdbJob.Start();
After the initial synchronization, I'm able to successfully work with the map in offline mode. This includes operations like adding new geometries or editing existing polygons inside the local DB.
However, when I try to synchronize changes back to the server, I get no results.
To perform data synchronization with the local database, I'm using the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
    GeodatabaseSyncDirection = SyncDirection.Bidirectional,
    RollbackOnFailure = false
};
Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());
foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
    long id = table.ServiceLayerId;
    SyncLayerOption option = new SyncLayerOption(id);
    option.SyncDirection = SyncDirection.Bidirectional;
    parameters.LayerOptions.Add(option);
}
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well. The synchronization ends with status "Succeeded", and the messages logged by the SyncGeodatabaseJob all indicate success (screenshot omitted).
However, when I open the edited feature layer from the server inside the map web client, I cannot find any of my local changes. In the server database I can also see that no new records were created during synchronization.
An interesting thing is that when I open the "Replica" data in the web client, I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The "Last Sync Date" is equal to the replica "Creation Date". However, the replica log in ArcMap shows additional information (screenshot omitted).
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or maybe some configuration feature is missing on the server? It looks like data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that work automatically?).
I'm new to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning was enabled on the server database, and the offline, versioned changes were not reconciled to the server.
After running a reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the offline changes became visible to other system users.
The code looks OK at first glance, so I would assume that there is something going on in the setup.
What do you get back from the sync operation after it has completed? Note that you can just use await syncJob.GetResultsAsync() to start the job and await its results.
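For example, a minimal sketch of inspecting the per-layer results (the types and property names follow the ArcGIS Runtime .NET API as I understand it; verify against your SDK version):
// Start the job and await its results; each layer reports its own edit results.
IReadOnlyList<SyncLayerResult> results = await job.GetResultsAsync();
foreach (SyncLayerResult layerResult in results)
{
    foreach (FeatureEditResult editResult in layerResult.EditResults)
    {
        // Any edit the server rejected shows up here with an error attached.
        if (editResult.CompletedWithErrors)
            Console.WriteLine(editResult.Error.Message);
    }
}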
How is the Feature Service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.

Azure: How to synchronize log files without overwriting old ones?

I have an issue with synchronizing a log file to blob storage. I can synchronize the log file to blob storage, but when I make a new deployment of my project to Azure, my project files change and the log file's contents change too, although the file name stays the same. The WebRole then synchronizes the log file again, and because the name is unchanged the blob is overwritten, so all the data previously in blob storage is gone. How can I keep separate log files for different deployments? I hope I have explained it well; sorry for my English.
You can change the file name before it is used. By overriding the File property you can add any unique prefix (deployment ID, time ticks, a GUID, ...) to the file name:
public class AzureLocalStorageAppender : RollingFileAppender
{
    public override string File
    {
        get
        {
            return base.File;
        }
        set
        {
            // Prefix the configured file name with a unique value so that
            // each deployment writes to its own file.
            base.File = RoleEnvironment.GetLocalResource("LocalResourceNameHere").RootPath + @"\"
                + Guid.NewGuid().ToString() + "_"
                + new FileInfo(value).Name;
        }
    }
}
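If you would rather have one file per deployment than one per role restart, the deployment ID can serve as the prefix instead of a GUID. A sketch of the setter, assuming Microsoft.WindowsAzure.ServiceRuntime is referenced:
set
{
    // RoleEnvironment.DeploymentId is stable for the lifetime of a deployment,
    // so all restarts within one deployment append to the same file.
    base.File = RoleEnvironment.GetLocalResource("LocalResourceNameHere").RootPath + @"\"
        + RoleEnvironment.DeploymentId + "_"
        + new FileInfo(value).Name;
}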

How to update database file over internet?

My app has a database file in its NSBundle. I want to fetch an updated database file from the internet whenever a new one is available, and this should happen before the app displays data from the database file.
Here is the logic I am trying to use. I don't know if it makes sense or whether there is a better way to do it. An example would be awesome.
if (file is available in the Documents Directory)
{
    if (internet is available)
    {
        1. get file from network
        2. store it in Documents Directory
        if (contents of the old & new file are identical)
        {
            delete downloaded file
        } else {
            move or delete old file & rename new file (so that the new file's data can be accessed)
        }
    } else {
        use old file in Documents Directory
    }
} else {
    copy file from bundle to Documents Directory
}
Ideally you should have a timestamp or version number for the copy of the DB you have on the phone, and transmit that to the server. Then have the server send a new copy only if there is a newer version. That saves the user a lot of data charges.
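To illustrate the flow, here is a minimal sketch of such a version check (written in C# purely for illustration; the endpoint URLs and the plain-integer version format are hypothetical, and the same logic applies in an iOS client):
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class DbUpdater
{
    // Hypothetical endpoints: one returns the latest version number as plain text,
    // the other serves the database file itself.
    const string VersionUrl = "https://example.com/db/version";
    const string DatabaseUrl = "https://example.com/db/latest.sqlite";

    public static async Task UpdateIfNewerAsync(string localDbPath, int localVersion)
    {
        using (var http = new HttpClient())
        {
            int serverVersion = int.Parse(await http.GetStringAsync(VersionUrl));
            if (serverVersion <= localVersion)
                return; // already up to date: nothing is downloaded

            byte[] db = await http.GetByteArrayAsync(DatabaseUrl);
            File.WriteAllBytes(localDbPath, db); // replace the old copy
        }
    }
}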

Copy SQL Server MDF and LDF files while server is in use

I am using the following code to copy files from one folder to another...
Public Shared Sub CopyFlashScriptFile(ByVal SourceDirectory As String, ByVal DestinationDirectory As String)
    Try
        Dim f() As String = Directory.GetFiles(SourceDirectory)
        For i As Integer = 0 To UBound(f)
            File.Copy(f(i), DestinationDirectory & "\" & System.IO.Path.GetFileName(f(i)), True)
        Next
    Catch ex As Exception
        MsgBox(ex.Message)
    End Try
End Sub
The files I am copying are database files (.mdf and .ldf) which are being used by the application. The problem is that when I try to copy the files, it throws an error:
file is being used by another process
Can anyone help me with this?
Is there any way I can programmatically stop SQL Server, copy the files, and then start the server again?
To expand on my comment: I would build a .sql file with the T-SQL command to back up the database to another location, and then use sqlcmd from the command line to run that backup .sql file.
So to build the .sql file I would go through the process of backing up the database via SQL Server Management Studio. Here is a tutorial on how to do this:
http://www.serverintellect.com/support/sqlserver/database-backup-ssmse.aspx
Then, before clicking OK to perform the backup, click on the "Script" button on the backup window and choose "Script Action To New Query Window". This will generate the SQL of your settings from the backup database window. Save that SQL into a file and you're done.
Next, use sqlcmd.exe to execute the .sql file to back up the database whenever you want. There is a very good example of using sqlcmd.exe from C# code here:
http://geekswithblogs.net/thomasweller/archive/2009/09/08/automating-database-script-execution.aspx
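For instance, a minimal sketch of shelling out to sqlcmd from C# (the server name and script path are assumptions; adjust them to your setup):
using System;
using System.Diagnostics;

// Runs a backup .sql script against a local SQL Server instance via sqlcmd.
var psi = new ProcessStartInfo
{
    FileName = "sqlcmd",
    Arguments = @"-S .\SQLEXPRESS -E -i ""C:\Scripts\backup.sql""",
    UseShellExecute = false,
    RedirectStandardOutput = true
};
using (var process = Process.Start(psi))
{
    Console.WriteLine(process.StandardOutput.ReadToEnd());
    process.WaitForExit();
}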
I always prefer doing stuff like this without affecting the running SQL Server (unless it's one running on my dev machine, where I'll happily stop/start the service). You just never know what might happen if you stop a production SQL Server service to copy some files. It could be very costly, so better to be safe.
Depending on which version of SQL Server you are using, you could make use of the Microsoft.SqlServer.Management.Smo.Wmi.Service objects to start and stop the service that runs the SQL instance.
After doing this you should be able to simply copy the files as needed.
For SQL Server 2008
// Requires a reference to the SMO WMI management assembly:
// using Microsoft.SqlServer.Management.Smo.Wmi;

// Declare and create an instance of the ManagedComputer
// object that represents the WMI Provider services.
ManagedComputer mc = new ManagedComputer();

// Iterate through each service registered with the WMI Provider.
foreach (Service svc in mc.Services)
{
    Console.WriteLine(svc.Name);
}

// Reference the Microsoft SQL Server service.
Service Mysvc = mc.Services["MSSQLSERVER"];

// Stop the service if it is running and report on the status
// continuously until it has stopped.
if (Mysvc.ServiceState == ServiceState.Running)
{
    Mysvc.Stop();
    Console.WriteLine("{0} service state is {1}", Mysvc.Name, Mysvc.ServiceState);
    while (Mysvc.ServiceState != ServiceState.Stopped)
    {
        Console.WriteLine("{0}", Mysvc.ServiceState);
        Mysvc.Refresh();
    }
    Console.WriteLine("{0} service state is {1}", Mysvc.Name, Mysvc.ServiceState);

    // Start the service and report on the status continuously
    // until it has started.
    Mysvc.Start();
    while (Mysvc.ServiceState != ServiceState.Running)
    {
        Console.WriteLine("{0}", Mysvc.ServiceState);
        Mysvc.Refresh();
    }
    Console.WriteLine("{0} service state is {1}", Mysvc.Name, Mysvc.ServiceState);
    Console.ReadLine();
}
else
{
    Console.WriteLine("SQL Server service is not running.");
    Console.ReadLine();
}
From MSDN.
I am using the .mdf file in my application. In case of a system crash or format, the user is going to lose the data. So if the user copies the data (.mdf) to some other drive, he/she can replace the new .mdf file with the old one, which has all their data. Correct me if I am wrong. Thanks.
That's exactly what "normal" backups are for.
As you noticed yourself, you can back up a SQL Server database by simply copying the .mdf and .ldf files, but the downside is that you can only do this when the SQL Server service is not running.
And stopping the SQL Server service just to backup the database is not a good idea, because your users can't access the database while the service is stopped.
Taking a "normal" backup (usually a .bak file) can be done while the database is running, so there's no need to stop SQL Server every time you want to make a backup.
There are several ways how to do a backup:
a) Manually in SQL Server Management Studio:
see the first link in Jason Evans' answer
b) If you want to take a backup regularly (say, once a day), you need to use sqlcmd.
Jason Evans described this in his answer as well, but IMO there's an easier way - you need only two files with one line each. See How to create jobs in SQL Server Express edition.
(If you were using a full SQL Server edition and not just Express, you could set up a Maintenance Task in Management Studio instead, but that's not possible in SQL Server Express, so you have to do it manually as described above.)
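As a simple illustration, the backup can also be issued directly from application code while the database stays online; a sketch in C# (the connection string, database name, and target path are all assumptions):
using System;
using System.Data.SqlClient;

// Takes a .bak backup without stopping the SQL Server service.
var connectionString = @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // [MyDatabase] and the target path are placeholders; adjust to your setup.
    var sql = @"BACKUP DATABASE [MyDatabase] TO DISK = 'C:\Backups\MyDatabase.bak' WITH INIT";
    using (var command = new SqlCommand(sql, connection))
    {
        command.ExecuteNonQuery(); // can take a while for large databases
    }
}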