Raven DB patch request not running on start - ravendb

I currently have a RavenDB database with a model that has a specific set of fields I have been working with. I realized there were a field or two I needed to add, and I successfully used RavenDB's patch request once to initialize those fields on all the pre-existing documents in my database. Now I want to add another field, but I cannot get the patch code to run again to update my documents a second time. Is there any documentation, or a method, to check the database at deploy time and see whether the stored documents match the current model, patch the ones that don't, leave the ones that do alone, and ensure that after an update the pre-existing documents are not reset to the values the patch sets?
private void updateDb(IDocumentStore store)
{
    store.DatabaseCommands.UpdateByIndex("Interviews_ByCandidateInterviewAndDate",
        new IndexQuery
        {
            Query = "Candidate:"
        },
        new[]
        {
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "IsArchived",
                Value = true
            },
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "ArchiveDate",
                Value = null
            },
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "TestingField",
                Value = 14
            }
        },
        new BulkOperationOptions
        {
            AllowStale = false
        }
    );
}
The first two patch requests went through and show up in the database, but one thing I cannot tell is: if I were to run this patch again to get that third field into the model, would it change all the values already in the database for the first two fields back to true and null, or would it leave them the way they are? And more importantly, I cannot get this code to run again at all.
Any pointers in the right direction would be greatly appreciated! Thanks.

Your query is wrong:
Query = "Candidate:"
That should return no results (it is an invalid query).
Use:
Query = "Candidate:*"
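Putting that back into the method from the question, a minimal sketch of the corrected call might look like this. Only the query changes; note that a Set patch overwrites the current value, so if you only want to add the new field, drop the earlier patch requests from the array as shown here:
private void updateDb(IDocumentStore store)
{
    store.DatabaseCommands.UpdateByIndex("Interviews_ByCandidateInterviewAndDate",
        new IndexQuery
        {
            // Wildcard so the query actually matches documents
            Query = "Candidate:*"
        },
        new[]
        {
            // Only the new field is patched here, so the values already
            // set for IsArchived and ArchiveDate are left untouched.
            new PatchRequest
            {
                Type = PatchCommandType.Set,
                Name = "TestingField",
                Value = 14
            }
        },
        new BulkOperationOptions
        {
            AllowStale = false
        });
}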

Related

Existing saga instances after applying the [Unique] attribute to IContainSagaData property

I have a bunch of existing sagas in various states of a long running process.
Recently we decided to make one of the properties on our IContainSagaData implementation unique by using the Saga.UniqueAttribute (more on that here: http://docs.particular.net/nservicebus/nservicebus-sagas-and-concurrency).
After deploying the change, we realized that all our old saga instances were not being found, and after further digging (thanks Charlie!) discovered that by adding the unique attribute, we were required to data fix all our existing sagas in Raven.
Now, this is pretty poor, kind of like adding an index to a database column and then finding that the table data is no longer selectable, but being what it is, we decided to create a tool to do the data fix.
So after creating and running this tool we've now patched up the old sagas so that they now resemble the new sagas (sagas created since we went live with the change).
However, despite all the data now looking right we're still not able to find old instances of the saga!
The tool we wrote does two things. For each existing saga, the tool:
Adds a new RavenJToken called "NServiceBus-UniqueValue" to the saga metadata, setting the value to the same value as our unique property for that saga, and
Creates a new document of type NServiceBus.Persistence.Raven.SagaPersister.SagaUniqueIdentity, setting the SagaId, SagaDocId, and UniqueValue fields accordingly.
My questions are:
Is it sufficient to simply make the data look correct or is there something else we need to do?
Another option we have is to revert the change which added the unique attribute. However in this scenario, would those new sagas which have been created since the change went in be OK with this?
Code for adding metadata token:
var policyKey = RavenJToken.FromObject(saga.PolicyKey); // This is the unique field
sagaDataMetadata.Add("NServiceBus-UniqueValue", policyKey);
Code for adding new doc:
var policyKeySagaUniqueId = new SagaUniqueIdentity
{
    Id = "Matlock.Renewals.RenewalSaga.RenewalSagaData/PolicyKey/" + Guid.NewGuid().ToString(),
    SagaId = saga.Id,
    UniqueValue = saga.PolicyKey,
    SagaDocId = "RenewalSaga/" + saga.Id.ToString()
};
session.Store(policyKeySagaUniqueId);
Any help much appreciated.
EDIT
Thanks to David's help on this we have fixed our problem - the key difference was that we used SagaUniqueIdentity.FormatId() to generate our document IDs rather than a new Guid - this was trivial to do since we were already referencing the NServiceBus and NServiceBus.Core assemblies.
The short answer is that it is not enough to make the data resemble the new identity documents. Where you are using Guid.NewGuid().ToString(), that data is important! That's why your solution isn't working right now. I spoke about the concept of identity documents (specifically about the NServiceBus use case) during the last quarter of my talk at RavenConf 2014 - here are the slides and video.
So here is the long answer:
In RavenDB, the only ACID guarantees are on the Load/Store by Id operations. So if two threads are acting on the same Saga concurrently, and one stores the Saga data, the second thread can only expect to get back the correct saga data if it is also loading a document by its Id.
To guarantee this, the Raven Saga Persister uses an identity document like the one you showed. It contains the SagaId, the UniqueValue (mostly for human comprehension and debugging; the database doesn't technically need it), and the SagaDocId (which is a little duplication, as it's only {SagaTypeName}/{SagaId} when we already have the SagaId).
With the SagaDocId, we can use the Include feature of RavenDB to do a query like this (which is from memory, probably wrong, and should only serve to illustrate the concept as pseudocode)...
var identityDocId = // some value based on incoming message

var idDoc = RavenSession
    // Look at the identity doc's SagaDocId and pull back that document too!
    .Include<SagaIdentity>(identityDoc => identityDoc.SagaDocId)
    .Load(identityDocId);

var sagaData = RavenSession
    .Load(idDoc.SagaDocId); // Already in-memory, no 2nd round-trip to database!
So the identityDocId is very important because it describes the uniqueness of the value coming from the message; not just any old Guid will do. So what we really need to know is how to calculate that.
For that, the NServiceBus saga persister code is instructive:
void StoreUniqueProperty(IContainSagaData saga)
{
    var uniqueProperty = UniqueAttribute.GetUniqueProperty(saga);

    if (!uniqueProperty.HasValue) return;

    var id = SagaUniqueIdentity.FormatId(saga.GetType(), uniqueProperty.Value);

    var sagaDocId = sessionFactory.Store.Conventions.FindFullDocumentKeyFromNonStringIdentifier(saga.Id, saga.GetType(), false);

    Session.Store(new SagaUniqueIdentity
    {
        Id = id,
        SagaId = saga.Id,
        UniqueValue = uniqueProperty.Value.Value,
        SagaDocId = sagaDocId
    });

    SetUniqueValueMetadata(saga, uniqueProperty.Value);
}
The important part is the SagaUniqueIdentity.FormatId method from the same file.
public static string FormatId(Type sagaType, KeyValuePair<string, object> uniqueProperty)
{
    if (uniqueProperty.Value == null)
    {
        throw new ArgumentNullException("uniqueProperty", string.Format("Property {0} is marked with the [Unique] attribute on {1} but contains a null value. Please make sure that all unique properties are set on your SagaData and/or that you have marked the correct properties as unique.", uniqueProperty.Key, sagaType.Name));
    }

    var value = Utils.DeterministicGuid.Create(uniqueProperty.Value.ToString());

    var id = string.Format("{0}/{1}/{2}", sagaType.FullName.Replace('+', '-'), uniqueProperty.Key, value);

    // raven has a size limit of 255 bytes == 127 unicode chars
    if (id.Length > 127)
    {
        // generate a guid from the hash:
        var key = Utils.DeterministicGuid.Create(sagaType.FullName, uniqueProperty.Key);

        id = string.Format("MoreThan127/{0}/{1}", key, value);
    }

    return id;
}
This relies on Utils.DeterministicGuid.Create(params object[] data) which creates a Guid out of an MD5 hash. (MD5 sucks for actual security but we are only looking for likely uniqueness.)
static class DeterministicGuid
{
    public static Guid Create(params object[] data)
    {
        // use MD5 hash to get a 16-byte hash of the string
        using (var provider = new MD5CryptoServiceProvider())
        {
            var inputBytes = Encoding.Default.GetBytes(String.Concat(data));
            var hashBytes = provider.ComputeHash(inputBytes);

            // generate a guid from the hash:
            return new Guid(hashBytes);
        }
    }
}
That's what you need to replicate to get your utility to work properly.
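Applied to the tool from the question, the identity document might then be stored along these lines. This is only a sketch: RenewalSagaData, PolicyKey, and the SagaDocId format come from the question's own code, and the exact fix the poster shipped may differ.
// saga is the existing RenewalSagaData instance being fixed up by the tool.
// Deterministic id derived from the saga type and the unique property value,
// instead of Guid.NewGuid().
var id = SagaUniqueIdentity.FormatId(
    saga.GetType(),
    new KeyValuePair<string, object>("PolicyKey", saga.PolicyKey));

session.Store(new SagaUniqueIdentity
{
    Id = id, // deterministic, so the persister can compute it again from an incoming message
    SagaId = saga.Id,
    UniqueValue = saga.PolicyKey,
    SagaDocId = "RenewalSaga/" + saga.Id.ToString()
});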
What's really interesting is that this code made it all the way to production - I'm surprised you didn't run into trouble before this, with messages creating new saga instances when they really shouldn't because they couldn't find the existing Saga data.
I almost think it might be a good idea if NServiceBus would raise a warning any time you tried to find Saga Data by anything other than a [Unique] marked property, because it's an easy thing to forget to do. I filed this issue on GitHub and submitted this pull request to do just that.

Yii CacheDependency based on cache value

This seems like a simple thing and should be part of the base code for Yii, but I can't find a solution anywhere. Here is my scenario.
1) User updates their record (beforeSave sets a cache value that changes with each new save, using PHP's uniqid())
public function beforeSave()
{
    Yii::app()->cache->set('userupdate'.$this->id, uniqid());
    return parent::beforeSave();
}
2) User data is cached using the cache value in step one as a dependency in the loadModel function of the model.
$model=Users::model()->cache(1800, $dependency)->findByPk($id);
3) User views a page that retrieves their data. Yii evaluates the request to see if the cached value from step 1 has changed; if it has not, pull from the cache, and if it has, pull from the db.
While reading this page (http://www.yiiframework.com/doc/guide/1.1/en/caching.data) I see there is a dependency for when a file's date changes, but not one for when a variable changes. Any help in this matter would be great as I am at a loss as to how to implement this.
NOTE: I need to use cache to hold the variable as I'm running multiple instances of my application and the value needs to be shared across every server and all users (thus session won't work).
After fighting with this I found a solution. I don't feel it's completely pretty, but it does work. Any feedback on a cleaner way is much appreciated.
$cache = Yii::app()->cache;
$key1 = 'userupdate'.$id;  // main cache value
$key2 = '2userupdate'.$id; // will equal main cache value when query is cached
$cache1 = $cache['userupdate'.$id];
$cache2 = $cache['2userupdate'.$id];

$dependency = new CExpressionDependency("Yii::app()->cache->get('$key1') == Yii::app()->cache->get('$key2')");
$model = Users::model()->cache(1800, $dependency)->findByPk($id);

if ($cache1 != $cache2)
    $cache['2userupdate'.$id] = $cache['userupdate'.$id];
One of the dependency options is CExpressionDependency. You could compare the currently cached beforeSave value to the value you get from the loadModel call.

Checking if certain key exists in database

I have saved certain MDX queries and I run them using ADOMD.NET. I get a CellSet back which I convert into a dataset. All this is working fine. Now the DB team has changed the cube structure. They have updated the dimension names, attribute names, etc. Some dimensions got renamed and some got deleted. Because of this I am unable to run my saved queries. I want to create a console application which will take a list of keys (in [DimensionName].[AttributeName] or [DimensionName].[AttributeName].[MemberName] format) and tell me which keys do not exist.
Please let me know if this is possible programmatically. I don't want to check it manually.
Kindly share a link or code which will help me achieve this.
Thank you.
If you're using ADOMD already this should be no problem, just use the metadata queries:
http://msdn.microsoft.com/en-us/library/ms123485.aspx
Alternatively, AMO is nice http://msdn.microsoft.com/en-us/library/microsoft.analysisservices.aspx
I use it in SSIS for processing; you could easily use it in .NET to test the existence of elements:
using Microsoft.AnalysisServices;
...

Server server = new Server();
server.Connect(cubeConnectionString);
Database database = server.Databases.FindByName(databaseName);
Cube cube = database.Cubes.FindByName(cubeName);

foreach (MeasureGroup measureGroup in cube.MeasureGroups)
{
    foreach (Partition partition in measureGroup.Partitions)
    {
        ...
    }
}

foreach (CubeDimension cubeDimension in cube.Dimensions)
{
    Dimension dimension = cubeDimension.Dimension;
    var dimName = dimension.Name;
    ...
}
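Building on that loop, a rough sketch of how you might test one [DimensionName].[AttributeName] key against the cube; the key parsing and the helper name are illustrative rather than part of the original answer, and role-playing dimensions or member-level keys would need extra handling:
// Returns true if "[DimensionName].[AttributeName]" (or just "[DimensionName]")
// can be found in the cube's metadata.
static bool KeyExists(Cube cube, string key)
{
    var parts = key.Trim('[', ']').Split(new[] { "].[" }, StringSplitOptions.None);
    var dimensionName = parts[0];
    var attributeName = parts.Length > 1 ? parts[1] : null;

    CubeDimension cubeDimension = cube.Dimensions.FindByName(dimensionName);
    if (cubeDimension == null)
        return false; // dimension renamed or deleted

    if (attributeName == null)
        return true;

    // Look the attribute up on the underlying database dimension
    return cubeDimension.Dimension.Attributes.FindByName(attributeName) != null;
}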
Finding the names in advance for all the elements you need is probably the hard part (and keeping it all up to date).
Would it not be easier to fire all the queries at the cube and try to trap the "no such thing" response?
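If you go that route, a rough sketch of trapping the error with ADOMD.NET (which the question already uses); the connection string and the list of saved queries are placeholders:
using Microsoft.AnalysisServices.AdomdClient;
...

using (var connection = new AdomdConnection(cubeConnectionString))
{
    connection.Open();
    foreach (string savedQuery in savedQueries) // your stored MDX strings
    {
        try
        {
            var command = new AdomdCommand(savedQuery, connection);
            command.ExecuteCellSet();
        }
        catch (AdomdErrorResponseException ex)
        {
            // The server's "no such thing" response lands here;
            // log the query together with the message naming the missing element.
            Console.WriteLine("Query failed: {0}", ex.Message);
        }
    }
}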

Check if property exists in RavenDB

I want to add a property to an existing document (using clues from http://ravendb.net/docs/client-api/partial-document-updates). But before adding it I want to check whether that property already exists in my database.
Is there any "special, proper RavenDB way" to achieve that?
Or should I just load the document and check whether the property is null or not?
You can do this using a set based database update. You carry it out using JavaScript, which fortunately is similar enough to C# to make it a pretty painless process for anybody. Here's an example of an update I just ran.
Note: You have to be very careful doing this because errors in your script may have undesired results. For example, in my code CustomId contains something like '1234-1'. In my first iteration of writing the script, I had:
product.Order = parseInt(product.CustomId.split('-'));
Notice I forgot the indexer after split. The result? An error, right? Nope. Order had the value of 12341! It is supposed to be 1. So be careful and be sure to test it thoroughly.
Example:
Job has a Products property (a collection) and I'm adding the new Order property to existing Products.
ravenSession.Advanced.DocumentStore.DatabaseCommands.UpdateByIndex(
    "Raven/DocumentsByEntityName",
    new IndexQuery { Query = "Tag:Jobs" },
    new ScriptedPatchRequest { Script =
        @"
        this.Products.Map(function(product) {
            if (product.Order == undefined)
            {
                product.Order = parseInt(product.CustomId.split('-')[1]);
            }
            return product;
        });"
    }
);
I referenced these pages to build it:
set based ops
partial document updates (in particular the Map section)
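Going back to the narrower question (adding one new property to a document only when it is missing), a stripped-down sketch along the same lines; the 'Users' tag and the property name are placeholders, not from the original post:
ravenSession.Advanced.DocumentStore.DatabaseCommands.UpdateByIndex(
    "Raven/DocumentsByEntityName",
    new IndexQuery { Query = "Tag:Users" }, // placeholder collection
    new ScriptedPatchRequest { Script =
        @"
        if (this.IsArchived === undefined) {
            // property does not exist yet, so initialize it
            this.IsArchived = false;
        }"
    }
);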

Sencha Touch 2 Beta 2 Store Sync Issues

In ST 1.x I had no problem syncing an online store to an offline store with the method below; now it seems that sync doesn't work in ST2 Beta 2. I can see the records being output on the console. Anyone else having this issue? I believe it may be a bug...
var remoteStore = Ext.getStore('UpdateConfig');
var localStore = Ext.getStore('UpdateLocalConfig');

remoteStore.each(function (record) {
    localStore.add(record.data);
    console.log(record.data);
});

localStore.sync();
Same question + answer on the Sencha Forum
...and same user??? XD
This was answered on the Sencha Touch 2 Forums by TommyMaintz, but I wanted to give the answer here as well.
"One thing I think I see which is wrong is that you are adding a record to the LocalStore using the record.data. In ST2 we now have a Model cache. This means that if you create two instances with the exact same model and id, the second time you create that instance it will just return the already existing instance. This means that if you sync your local store, it won't recognize that record as a 'phantom' record because it already has an id. What you would have to do in your case if you want to make a "copy" of your record by using all the data but removing the id. This will generate a new simple id for it and when you save it to your local storage it will generate a proper local id for it.
When I tried doing this I noticed the "copy" method on Model hasn't been updated to handle this. If you apply the following override you should be able to do localStore.add(record.copy()); localStore.sync()"
Ext.define('Ext.data.ModelCopyFix', {
    override: 'Ext.data.Model',

    /**
     * Creates a copy (clone) of this Model instance.
     *
     * @param {String} id A new id. If you don't specify this a new id will be generated for you.
     * To generate a phantom instance with a new id use:
     *
     *     var rec = record.copy(); // clone the record with a new id
     *
     * @return {Ext.data.Model}
     */
    copy: function(newId) {
        var me = this,
            idProperty = me.getIdProperty(),
            raw = Ext.apply({}, me.raw),
            data = Ext.apply({}, me.data);

        delete raw[idProperty];
        delete data[idProperty];

        return new me.self(null, newId, raw, data);
    }
});