How do you restore a "versioned node" in a jackrabbit 2.1 repository? - jcr

Jackrabbit 2.1 has versioned nodes. We want to support the "undo" of a delete of one of these nodes. "Finding it" seems to be the tricky part.

I'm not sure how to iterate over the version tree directly - it should be possible by walking /jcr:system/jcr:versionStorage, see JCR 1.0 section 8.2.2.1 and JCR 2.0 section 15.10 - but you can query the version tree with a query like
SELECT * FROM nt:frozenNode WHERE prop = 'value'
(provided a search index is configured for the version storage in Jackrabbit, which it is by default).
The nodes returned are the frozen nodes; get the parent node to retrieve the Version:
QueryManager qm = session.getWorkspace().getQueryManager();
Query query = qm.createQuery(
        "SELECT * FROM nt:frozenNode WHERE prop = 'value'", Query.SQL);
QueryResult res = query.execute();
NodeIterator iter = res.getNodes();
while (iter.hasNext()) {
    Node frozenNode = iter.nextNode();
    Version v = (Version) frozenNode.getParent();
    // ...
}
It makes sense to store the (parent) path of the node as a property whenever you create a version in the first place, so that you can query for it and also know where to restore the node later (see below).
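As a minimal sketch of that idea - the property name app:restorePath and the helper class below are illustrative, not part of the JCR API:

```java
public class VersioningHelper {

    // Derives the absolute parent path to record on the node before
    // check-in, so each frozen node carries the location to restore to.
    static String parentPathOf(String absPath) {
        int idx = absPath.lastIndexOf('/');
        return idx <= 0 ? "/" : absPath.substring(0, idx);
    }

    // With a live JCR session, the property would be written before
    // creating the version, along these lines (illustration only):
    //
    //   node.setProperty("app:restorePath", parentPathOf(node.getPath()));
    //   session.save();
    //   session.getWorkspace().getVersionManager().checkin(node.getPath());
}
```

Because the property is set before check-in, it is copied onto the frozen node and can later be read back from the query results above.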
You know that a node has been deleted when the jcr:frozenUuid of its frozen node can't be found in the session:
boolean deleted = false;
try {
    session.getNodeByUUID(
        frozenNode.getProperty(JcrConstants.JCR_FROZENUUID).getString());
} catch (ItemNotFoundException e) {
    deleted = true;
} catch (RepositoryException e) {
    continue; // inside the iteration loop above: skip this result
}
To restore it, take the Version and pass it to the version manager, along with the absolute path to restore it to (which could come from the property saved on the version's frozen node):
VersionManager vMgr = session.getWorkspace().getVersionManager();
vMgr.restore(path, v, true);
If you already know the version's UUID without needing to search for it, you can also fetch the Version directly:
Version v = (Version) session.getNodeByUUID(versionUUID);
(In JCR 2.0, session.getNodeByIdentifier() is the non-deprecated equivalent.)

Related

Provided schema version 1 is less than last set version 0 in React native

Although the question looks like a duplicate of an existing question, I could not solve my problem with it. In order to change the type of one property in a schema, I added the schemaVersion as required (e.g., let realm = new Realm({schemaVersion: 1, schema: [PersonSchema]})), but it shows:
Provided schema version 1 is less than last set version 0
I tried both deleting the added line and editing it, but it shows the same error, as though the version has already been remembered.
How can I fix this?
The problem can occur if your database is set to readOnly. I changed that to false and the migration block started getting called.
In AppDelegate.m:
#import <Realm/RLMRealmConfiguration.h>

// Inside your [AppDelegate didFinishLaunchingWithOptions:].
// Realm detects new properties and removed properties.
RLMRealmConfiguration *config = [RLMRealmConfiguration defaultConfiguration];

// Set the new schema version. This must be greater than the previously used
// version (if you've never set a schema version before, the version is 0).
config.schemaVersion = 1;

// Set the block which will be called automatically when opening a Realm with a
// schema version lower than the one set above.
config.migrationBlock = ^(RLMMigration *migration, uint64_t oldSchemaVersion) {
    // We haven't migrated anything yet, so oldSchemaVersion == 0.
    if (oldSchemaVersion < 1) {
        // Nothing to do!
        // Realm will automatically detect new properties and removed properties
        // and will update the schema on disk automatically.
    }
};

// Tell Realm to use this new configuration object for the default Realm.
[RLMRealmConfiguration setDefaultConfiguration:config];

// Now that we've told Realm how to handle the schema change, opening the file
// will automatically perform the migration.
[RLMRealm defaultRealm];

Delete build definitions in TFS via API

When creating TFS build definitions via the API, I need to first delete the definition if it pre-exists:
if (BuildServer.QueryBuildDefinitions(teamProject).Any(d => d.Name == buildDefinitionName))
{
    buildDefinition = BuildServer.GetBuildDefinition(teamProject, buildDefinitionName);
    var builds = BuildServer.QueryBuilds(buildDefinition);
    if (builds != null && builds.Any())
    {
        Console.WriteLine("delete {0} builds for build definition: {1}", builds.Count(), buildDefinition.Name);
        BuildServer.DeleteBuilds(builds);
    }
    if (buildDefinition.Workspace.Mappings.Any())
    {
        var mappings = buildDefinition.Workspace.Mappings.Select(m => m.ServerItem).ToArray();
        foreach (var mapping in mappings)
        {
            Console.WriteLine("remove workspace mapping: {0}", mapping);
            buildDefinition.Workspace.RemoveMapping(mapping);
        }
    }
    Console.WriteLine("delete build definition: {0}", buildDefinition.Name);
    BuildServer.DeleteBuildDefinitions(new[] { buildDefinition });
}
This works, as does the subsequent:
buildDefinition = BuildServer.CreateBuildDefinition(teamProject);
buildDefinition.Name = buildDefinitionName;
However, when the first build gets run, it throws an error about conflicting workspaces:
Exception Message: Unable to create the workspace 'some-new-workspace' due to a mapping conflict. You may need to manually delete an old workspace. You can get a list of workspaces on a computer with the command 'tf workspaces /computer:%COMPUTERNAME%'.
Details: The path C:\some-path is already mapped in workspace some-old-workspace. (type MappingConflictException)
As you can see in the first snippet, my attempt to delete workspaces with .Workspace.RemoveMapping() has no effect. The workspaces still exist on the build controller. I can delete them manually, but they really should get deleted when I delete the build definition. Is there some other DeleteWorkspace() mechanism in the API?
A more complete code gist is here: https://gist.github.com/grenade/cce374cb4e27e366bc5b
It turns out that the reason it's complicated is that the owner of the various workspaces created by the build could be some other user (the one the build agent runs under).
I found a way to do it by relying on the previous build definition id, which is used in the workspace naming convention [build definition id]_[build agent id]_[workspace host]:
var workspaceNamePrefix = string.Concat(buildDefinition.Id, '_');
var workSpaces = VersionControlServer.QueryWorkspaces(null, null, null)
    .Where(w => w.Name.StartsWith(workspaceNamePrefix)).ToArray();
for (var i = workSpaces.Count() - 1; i > -1; i--)
{
    try
    {
        workSpaces[i].Delete();
        Console.WriteLine("delete workspace: {0}", workSpaces[i].Name);
    }
    catch (ResourceAccessException rae)
    {
        Console.ForegroundColor = ConsoleColor.Yellow;
        Console.WriteLine(rae.Message);
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("workspace needs to be deleted by an administrator using the following command:");
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("tf workspace /delete {0};{1}", workSpaces[i].Name, workSpaces[i].OwnerName);
        Console.ResetColor();
    }
}
I have updated the gist: https://gist.github.com/grenade/cce374cb4e27e366bc5b

How to get local path for payload in WiX/Burn Managed Bootstrapper Application?

I am currently working in a WiX/Burn Managed Bootstrapper Application and cannot figure out how to get the local path for a payload (MSI).
I let the user select which applications they want to install in my custom UI, and I want to not show applications for which the MSI is missing. I also need to see information in the MSI's database.
I know I can determine missing payloads by handling "ResolveSource", but that doesn't happen until right before the application is installed.
I deserialize the BootstrapperApplicationData.xml file first thing so I have information about which MSIs MIGHT be installed, but it still doesn't help me determine the source of the MSIs.
Does anyone know how to determine the local path to a payload?
EDIT: Here is an example for how I reference all the installers:
<MsiPackage Id="AppName"
            SourceFile="$(var.ProjectName.TargetDir)ProjectName.msi"
            Name="MSI\ProjectName.msi"
            Compressed="no" />
In the GetLastUsedSourceFolder function in cache.cpp, you can see that the engine gets the source folder from the WixBundleLastUsedSource variable, and the parent directory of the WixBundleOriginalSource variable if WixBundleLastUsedSource isn't set.
You can use this along with the Name attribute of the WixPayloadProperties element in the BootstrapperApplicationData.xml file to predetermine where the engine will look for a payload. Note that the engine will actually look in the cache first.
The MSI files are embedded into the bundle .exe and aren't extracted from the bundle until right before the application is installed, which corresponds to when the ResolveSource event fires. However, if you really want to get this information, you can programmatically extract the MSI files yourself and inspect them using the WiX DTF library (wix.dll in the /bin folder of your WiX install).
using Microsoft.Tools.WindowsInstallerXml;

private void ExtractEmbeddedMsiInstallers()
{
    var tmpFolder = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
    var bundlePath = Engine.StringVariables["WixBundleOriginalSource"];
    Unbinder unbinder = null;
    try
    {
        unbinder = new Unbinder();
        // The next line extracts the MSIs into tmpFolder, in a subfolder named "AttachedContainer".
        unbinder.Unbind(bundlePath, OutputType.Bundle, tmpFolder);
    }
    finally
    {
        if (null != unbinder)
            unbinder.DeleteTempFiles();
    }
}
You also mentioned needing to inspect data in the MSI database. Here's a sample of how to do that:
using (var database = new InstallPackage(msiFilePath, DatabaseOpenMode.Transact) { WorkingDirectory = _someTempFolder })
{
    if (database.Tables.Contains("CustomAction"))
    {
        using (View view = database.OpenView("SELECT `Action`, `Type`, `Source`, `Target` FROM `CustomAction`"))
        {
            view.Execute();
            foreach (Record rowRecord in view)
            {
                using (rowRecord)
                {
                    var actionName = rowRecord.GetString(1);
                    var actionType = rowRecord.GetInteger(2);
                    var binaryName = rowRecord.GetString(3);
                    var methodName = rowRecord.GetString(4);
                    // Do something with the values
                }
            }
        }
    }
}

RavenDB fails with ConcurrencyException when using new transaction

This code always fails with a ConcurrencyException:
[Test]
public void EventOrderingCode_Fails_WithConcurrencyException()
{
    Guid id = Guid.NewGuid();
    using (var scope1 = new TransactionScope())
    using (var session = DataAccess.NewOpenSession)
    {
        session.Advanced.UseOptimisticConcurrency = true;
        session.Advanced.AllowNonAuthoritativeInformation = false;
        var ent1 = new CTEntity
        {
            Id = id,
            Name = "George"
        };
        using (var scope2 = new TransactionScope(TransactionScopeOption.RequiresNew))
        {
            session.Store(ent1);
            session.SaveChanges();
            scope2.Complete();
        }
        var ent2 = session.Load<CTEntity>(id);
        ent2.Name = "Gina";
        session.SaveChanges();
        scope1.Complete();
    }
}
It fails at the last session.SaveChanges, stating that it is using a non-current etag. If I use Required instead of RequiresNew for scope2 - i.e., using the same transaction - it works.
Now, since I load the entity (ent2) again, it should be using the newest etag, unless there is some cached value attached to scope1 in play (but I have disabled caching). So I do not understand why this fails.
I really need this setup. In the production code the outer TransactionScope is created by NServiceBus, and the inner is for controlling an aspect of event ordering. It cannot be the same Transaction.
And I need the optimistic concurrency too - if other threads uses the entity at the same time.
BTW: This is using Raven 2.0.3.0
Since no one else has answered, I had better give it a go myself.
It turns out this was a human error. Due to a bad configuration of our IoC container, DataAccess.NewOpenSession gave me the same session every time (across other tests). In other words, Raven works as expected :)
Before I found out about this, I also experimented with using TransactionScopeOption.Suppress instead of RequiresNew. That also worked; I then just had to make sure that whatever I did in the suppressed scope could not fail, which was a valid option in my case.

Autoupgrade of an Eclipse (Lotus Notes 8.5.2) plugin

I am trying to provide an autoupdate feature for a Lotus Notes 8.5.2 plugin. The plugin is being developed under Eclipse 3.4.2. So far I haven't managed to find a standard way of doing this by hooking into the Lotus Notes API. Two approaches come to mind:
use the Eclipse p2 SDK to perform the autoupgrade at runtime (at early startup the plugin checks for new versions and updates itself). This entry describes the approach -> http://help.eclipse.org/indigo/index.jsp?topic=%2Forg.eclipse.platform.doc.isv%2Fguide%2Fp2_api_overview.htm. Unfortunately the SDK is not part of Eclipse 3.4.2, and I didn't manage to make this approach work with 3.4.2.
use an external process that closes Lotus Notes, removes the old version of the plugin from the Lotus plugin directory, copies the new version in, starts Lotus Notes again, and terminates.
The second approach seems workable but requires closing Lotus Notes during the upgrade. So my question is: is there any approach similar to the first one above, or any other standard procedure for Lotus Notes? Thanks in advance.
Have a read of the widget catalog. It will do what you want.
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.help.domino.admin85.doc/H_MANAGING_CLIENTS_USING_WIDGETS_AND_THE_WIDGETS_CATALOG_OVER.html
You will still need to restart the client after any plugin updates though.
Thanks for the suggestion Simon - I have found a more direct way using the suggestion from this post -> http://www.eclipsezone.com/eclipse/forums/t97689.html
with the addition of a configure operation (IConfigFeatureOperation) to update the version of the feature in the platform.xml file of Lotus Notes.
Here is a sample snippet that illustrates the approach:
String updateSiteUrl = configuration.getUpdateSiteUrl();
IProgressMonitor monitor = new NullProgressMonitor();
ISite updateSite = SiteManager.getSite(new URL(updateSiteUrl), monitor);
IFeatureReference[] siteFeatures = updateSite.getFeatureReferences();
ILocalSite localSite = SiteManager.getLocalSite();
List<IInstallFeatureOperation> installOps = new ArrayList<IInstallFeatureOperation>();
List<IConfigFeatureOperation> configOps = new ArrayList<IConfigFeatureOperation>();
IConfiguredSite[] configuredSites = localSite.getCurrentConfiguration().getConfiguredSites();
for (IConfiguredSite configuredSite : configuredSites) {
    IFeatureReference[] localSiteFeatures = configuredSite.getConfiguredFeatures();
    for (IFeatureReference siteFeature : siteFeatures) {
        for (IFeatureReference localSiteFeature : localSiteFeatures) {
            VersionedIdentifier featureVi = siteFeature.getVersionedIdentifier();
            VersionedIdentifier localFeatureVi = localSiteFeature.getVersionedIdentifier();
            if (featureVi.getIdentifier().equals(localFeatureVi.getIdentifier())
                    && featureVi.getVersion().isGreaterThan(localFeatureVi.getVersion())) {
                installOps.add(OperationsManager.getOperationFactory().createInstallOperation(
                        configuredSite, siteFeature.getFeature(monitor), null, null, null));
                configOps.add(OperationsManager.getOperationFactory().createConfigOperation(
                        configuredSite, siteFeature.getFeature(monitor), null, null));
            }
        }
    }
}
if (installOps.size() > 0) {
    // install new features
    for (IInstallFeatureOperation op : installOps) {
        op.execute(monitor, null);
    }
    // configure new features
    for (IConfigFeatureOperation op : configOps) {
        op.execute(monitor, null);
    }
    localSite.save();
}