Updating entities with primary key and alternate composite key - sql

I have a series of metric snapshot data I am uploading into my database on a daily basis. I take the input and check to determine if it is already in the database, and if it's not I add it. Each record uses a composite key made up of three columns, and also has a primary key.
I have since tried to add logic so that I can optionally force an update on records that already exist in the database, in addition to adding those that don't yet exist. However, I run into an error that stops me, saying that an object with the specified key is already being tracked.
The instance of entity type 'MembershipSnapshot' cannot be tracked
because another instance of this type with the same key is already
being tracked. When adding new entities, for most key types a unique
temporary key value will be created if no key is set (i.e. if the key
property is assigned the default value for its type). If you are
explicitly setting key values for new entities, ensure they do not
collide with existing entities or temporary values generated for other
new entities. When attaching existing entities, ensure that only one
entity instance with a given key value is attached to the context.
Here's a snippet of my code.
// Get the composite keys from the supplied list
var snapshotKeys = snapshots.Select(s => new { s.MembershipYear, s.DataDate, s.Aggregate }).ToArray();
// Find which records already exist in the database, pulling their composite keys
var snapshotsInDb = platformContext.MembershipSnapshots.Where(s => snapshotKeys.Contains(new { s.MembershipYear, s.DataDate, s.Aggregate }))
.Select(s => new { s.MembershipYear, s.DataDate, s.Aggregate }).ToArray();
// And filter them out, so we remain with the ones that don't yet exist
var addSnapshots = snapshots.Where(s => !snapshotsInDb.Contains(new { s.MembershipYear, s.DataDate, s.Aggregate }))
.ToList();
// Update the ones that already exist
var updateSnapshots = snapshots.Where(s => snapshotsInDb.Contains(new { s.MembershipYear, s.DataDate, s.Aggregate }))
.ToList();
platformContext.MembershipSnapshots.AddRange(addSnapshots);
platformContext.MembershipSnapshots.UpdateRange(updateSnapshots);
platformContext.SaveChanges();
How do I go about accomplishing this task?
I don't have a compelling reason for having an auto-increment primary key, other than perhaps whatever performance implications it might have for SQL internally.
EDIT: The way I've currently solved this issue is by removing my surrogate key, which I'm not using for anything. Still, it would be nice to know a workaround that doesn't require removing it, since a surrogate key could come in handy in the future.
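One possible workaround that keeps the surrogate key is sketched below. It is only a hedged sketch and assumes the names from the snippet above plus a surrogate key property called Id (hypothetical; substitute your actual key property): look up the database-generated key for each row that already exists and copy it onto the matching incoming snapshot before calling UpdateRange, so every updated instance carries a real, unique key and the change tracker has nothing to collide on.
// Hedged sketch: "Id" is an assumed name for the auto-increment surrogate key.
// (For large tables you would filter this query by the same composite keys as above.)
var existingRows = platformContext.MembershipSnapshots
    .Select(s => new { s.Id, s.MembershipYear, s.DataDate, s.Aggregate })
    .ToList(); // projection only, so nothing is tracked by the context

foreach (var snapshot in updateSnapshots)
{
    // Copy the real primary key onto the detached entity so UpdateRange
    // marks the correct row as Modified instead of colliding on a default key.
    snapshot.Id = existingRows
        .Single(k => k.MembershipYear == snapshot.MembershipYear
                  && k.DataDate == snapshot.DataDate
                  && k.Aggregate == snapshot.Aggregate)
        .Id;
}

platformContext.MembershipSnapshots.AddRange(addSnapshots);
platformContext.MembershipSnapshots.UpdateRange(updateSnapshots);
platformContext.SaveChanges();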

Related

Find namespace records in Redis

How do I find all the records in Redis?
USER_EXCHANGE = table
USER_ID = User ID (primary key)
UID = relationship
The key is stored with the following structure
USER_EXCHANGE:USER_ID:4030:UID:63867a4c6948e9405f4dd73bd9eaf8782b7a6667063dbd85014bd02046f6cc2e
I am trying to find all the records of the user 4030...
using (var redisClient = new RedisClient())
{
    List<object> ALL_UID = redisClient.Get<List<object>>("USER_EXCHANGE:USER_ID:4030:UID:*");
}
What am I doing wrong? Thank you all for your help.
Hi, as you're trying to fetch all keys matching a pattern, you should use KEYS.
GET won't match patterns; it only retrieves a value by its complete key name.
Caution: KEYS is a debug command and not meant for production use.
Doc: https://redis.io/commands/keys
A simple, production-friendly solution recommended for you is:
store the UIDs for each user in a Redis LIST
USER_EXCHANGE:USER_ID:4030 => [ uid1, uid2, uid3 .... ]
get the UIDs for a specific user ID by fetching that list.
This is good practice in Redis.
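As a rough illustration of both approaches, here is a hedged sketch using the ServiceStack.Redis client from the question (SearchKeys, AddItemToList and GetAllItemsFromList are that client's standard helpers; adjust names if your version differs, and "example-uid" is just a placeholder value):
using (var redisClient = new RedisClient())
{
    // Debug only: SearchKeys wraps the KEYS command and matches the pattern.
    List<string> matchingKeys = redisClient.SearchKeys("USER_EXCHANGE:USER_ID:4030:UID:*");

    // Production: keep the UIDs for each user in a single Redis LIST ...
    redisClient.AddItemToList("USER_EXCHANGE:USER_ID:4030", "example-uid");

    // ... and read them back with one list fetch instead of scanning the keyspace.
    List<string> allUids = redisClient.GetAllItemsFromList("USER_EXCHANGE:USER_ID:4030");
}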

NHibernate: Bag performance confusion. Is documentation outdated?

If you look at the documentation on collection performance:
http://nhibernate.info/doc/nhibernate-reference/performance.html#performance-collections-taxonomy
It says:
Bags are the worst case. Since a bag permits duplicate element values and has no index column, no primary key may be defined. NHibernate has no way of distinguishing between duplicate rows. NHibernate resolves this problem by completely removing (in a single DELETE) and recreating the collection whenever it changes. This might be very inefficient.
However, I cannot confirm this. For example, if I have a simple parent-child relation mapped as a bag with cascade="all", and run the following code:
using (var sf = NHibernateHelper.SessionFactory)
using (var session = sf.OpenSession())
{
    var trx = session.BeginTransaction();
    var par = session.Query<Parent>().First();
    var c = new Child { Id = 4, Name = "Child4" };
    par.Children.Add(c);
    trx.Commit();
}
I don't see any deletes, just an insert into the child table and an update of the parent id. This actually makes sense, but it seems to contradict the docs. What am I missing?
The example you give is almost exactly like the efficient case documented in the NHibernate reference at section 19.5.3, "Bags and lists are the most efficient inverse collections".
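If you want to confirm for yourself exactly which statements NHibernate issues for the code above, a minimal sketch (assuming the usual hibernate.cfg.xml setup) is to enable SQL logging on the configuration before building the session factory:
var cfg = new NHibernate.Cfg.Configuration();
cfg.Configure(); // reads hibernate.cfg.xml as usual
cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");   // print each statement
cfg.SetProperty(NHibernate.Cfg.Environment.FormatSql, "true"); // make the output readable
var sessionFactory = cfg.BuildSessionFactory();
With that enabled, the single child INSERT plus parent-id UPDATE (and the absence of any DELETE) described in the question shows up directly in the console output.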

Existing saga instances after applying the [Unique] attribute to IContainSagaData property

I have a bunch of existing sagas in various states of a long running process.
Recently we decided to make one of the properties on our IContainSagaData implementation unique by using the Saga.UniqueAttribute (about which more here http://docs.particular.net/nservicebus/nservicebus-sagas-and-concurrency).
After deploying the change, we realized that all our old saga instances were not being found, and after further digging (thanks Charlie!) discovered that by adding the unique attribute, we were required to data fix all our existing sagas in Raven.
Now, this is pretty poor, kind of like adding an index to a database column and then finding that all the table data is no longer selectable, but it is what it is, so we decided to create a tool for doing the data fix.
So after creating and running this tool we've now patched up the old sagas so that they now resemble the new sagas (sagas created since we went live with the change).
However, despite all the data now looking right we're still not able to find old instances of the saga!
The tool we wrote does two things. For each existing saga, the tool:
Adds a new RavenJToken called "NServiceBus-UniqueValue" to the saga metadata, setting the value to the same value as our unique property for that saga, and
Creates a new document of type NServiceBus.Persistence.Raven.SagaPersister.SagaUniqueIdentity, setting the SagaId, SagaDocId, and UniqueValue fields accordingly.
My questions are:
Is it sufficient to simply make the data look correct or is there something else we need to do?
Another option we have is to revert the change which added the unique attribute. However in this scenario, would those new sagas which have been created since the change went in be OK with this?
Code for adding metadata token:
var policyKey = RavenJToken.FromObject(saga.PolicyKey); // This is the unique field
sagaDataMetadata.Add("NServiceBus-UniqueValue", policyKey);
Code for adding new doc:
var policyKeySagaUniqueId = new SagaUniqueIdentity
{
    Id = "Matlock.Renewals.RenewalSaga.RenewalSagaData/PolicyKey/" + Guid.NewGuid().ToString(),
    SagaId = saga.Id,
    UniqueValue = saga.PolicyKey,
    SagaDocId = "RenewalSaga/" + saga.Id.ToString()
};
session.Store(policyKeySagaUniqueId);
Any help much appreciated.
EDIT
Thanks to David's help on this we have fixed our problem. The key difference was that we used SagaUniqueIdentity.FormatId() to generate our document IDs rather than a new GUID, which was trivial to do since we were already referencing the NServiceBus and NServiceBus.Core assemblies.
The short answer is that it is not enough to make the data resemble the new identity documents. Where you are using Guid.NewGuid().ToString(), that data is important! That's why your solution isn't working right now. I spoke about the concept of identity documents (specifically about the NServiceBus use case) during the last quarter of my talk at RavenConf 2014 - here are the slides and video.
So here is the long answer:
In RavenDB, the only ACID guarantees are on the Load/Store by Id operations. So if two threads are acting on the same Saga concurrently, and one stores the Saga data, the second thread can only expect to get back the correct saga data if it is also loading a document by its Id.
To guarantee this, the Raven Saga Persister uses an identity document like the one you showed. It contains the SagaId, the UniqueValue (mostly for human comprehension and debugging; the database doesn't technically need it), and the SagaDocId (which is a little duplication, as it's only {SagaTypeName}/{SagaId} when we already have the SagaId).
With the SagaDocId, we can use the Include feature of RavenDB to do a query like this (which is from memory, probably wrong, and should only serve to illustrate the concept as pseudocode)...
var identityDocId = /* some value based on the incoming message */;
var idDoc = RavenSession
    // Look at the identity doc's SagaDocId and pull back that document too!
    .Include<SagaIdentity>(doc => doc.SagaDocId)
    .Load<SagaIdentity>(identityDocId);
var sagaData = RavenSession
    .Load<SagaData>(idDoc.SagaDocId); // Already in memory, no 2nd round-trip to the database!
So the identityDocId is very important because it encodes the uniqueness of the value coming from the message; not just any old Guid will do. What we really need to know is how to calculate it.
For that, the NServiceBus saga persister code is instructive:
void StoreUniqueProperty(IContainSagaData saga)
{
    var uniqueProperty = UniqueAttribute.GetUniqueProperty(saga);
    if (!uniqueProperty.HasValue) return;

    var id = SagaUniqueIdentity.FormatId(saga.GetType(), uniqueProperty.Value);
    var sagaDocId = sessionFactory.Store.Conventions.FindFullDocumentKeyFromNonStringIdentifier(saga.Id, saga.GetType(), false);

    Session.Store(new SagaUniqueIdentity
    {
        Id = id,
        SagaId = saga.Id,
        UniqueValue = uniqueProperty.Value.Value,
        SagaDocId = sagaDocId
    });

    SetUniqueValueMetadata(saga, uniqueProperty.Value);
}
The important part is the SagaUniqueIdentity.FormatId method from the same file.
public static string FormatId(Type sagaType, KeyValuePair<string, object> uniqueProperty)
{
    if (uniqueProperty.Value == null)
    {
        throw new ArgumentNullException("uniqueProperty", string.Format("Property {0} is marked with the [Unique] attribute on {1} but contains a null value. Please make sure that all unique properties are set on your SagaData and/or that you have marked the correct properties as unique.", uniqueProperty.Key, sagaType.Name));
    }

    var value = Utils.DeterministicGuid.Create(uniqueProperty.Value.ToString());
    var id = string.Format("{0}/{1}/{2}", sagaType.FullName.Replace('+', '-'), uniqueProperty.Key, value);

    // raven has a size limit of 255 bytes == 127 unicode chars
    if (id.Length > 127)
    {
        // generate a guid from the hash:
        var key = Utils.DeterministicGuid.Create(sagaType.FullName, uniqueProperty.Key);
        id = string.Format("MoreThan127/{0}/{1}", key, value);
    }

    return id;
}
This relies on Utils.DeterministicGuid.Create(params object[] data) which creates a Guid out of an MD5 hash. (MD5 sucks for actual security but we are only looking for likely uniqueness.)
static class DeterministicGuid
{
    public static Guid Create(params object[] data)
    {
        // use MD5 hash to get a 16-byte hash of the string
        using (var provider = new MD5CryptoServiceProvider())
        {
            var inputBytes = Encoding.Default.GetBytes(String.Concat(data));
            var hashBytes = provider.ComputeHash(inputBytes);

            // generate a guid from the hash:
            return new Guid(hashBytes);
        }
    }
}
That's what you need to replicate to get your utility to work properly.
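Concretely, here is a hedged sketch of what the migration tool's document creation (shown earlier in the question) might look like once it reuses FormatId instead of Guid.NewGuid(); the saga type and property names are taken from the question and may need adjusting to your actual types:
// "RenewalSagaData" and "PolicyKey" come from the question; adjust to your actual saga data type.
var id = SagaUniqueIdentity.FormatId(
    typeof(RenewalSagaData),
    new KeyValuePair<string, object>("PolicyKey", saga.PolicyKey));

session.Store(new SagaUniqueIdentity
{
    Id = id,                             // deterministic, derived from the unique value
    SagaId = saga.Id,
    UniqueValue = saga.PolicyKey,
    SagaDocId = "RenewalSaga/" + saga.Id // same document id convention as the original tool
});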
What's really interesting is that this code made it all the way to production - I'm surprised you didn't run into trouble before this, with messages creating new saga instances when they really shouldn't because they couldn't find the existing Saga data.
I almost think it might be a good idea if NServiceBus would raise a warning any time you tried to find Saga Data by anything other than a [Unique] marked property, because it's an easy thing to forget to do. I filed this issue on GitHub and submitted this pull request to do just that.

Laravel Eloquent last inserted object wrong properties

I've got a problem while using Laravel Eloquent ORM:
When inserting a new Eloquent Model in the database, the data is corrupted.
To be concrete:
$newItem = new NotificationNewItem;
$newItem->item_id = $item->id; // item_id is the primary key (returned by getKeyName())
$newItem->save();
return NotificationNewItem::find($item->id);
This code does not return the same as
$newItem = new NotificationNewItem;
$newItem->item_id = $item->id;
$newItem->save();
return $newItem;
whereas the two items should be the same, shouldn't they?
The weird part is that the returned JSON object (I display it directly in my browser) in the first case is exactly what is inserted in the database, while in the second case the JSON object's primary key (here item_id) is equal to 0, even though the corresponding entry in the database has a primary key equal to 3 (or some other value).
Here's the Laravel code if you want to see the error in context: http://pastebin.com/9wcsnvSq
There are two "returns" in the model function insertAndGetElement() and those return items with different primary keys (the first one in that pastebin is returning a primary key equal to 0).
Help will be much appreciated.
Thanks in advance,
Robin.
The solution to this problem (the primary key being set to 0 after calling save()) is to explicitly define the model as not auto-incrementing the primary key.
To do so, just use
public $incrementing = false;
in the model declaration. Thanks to AndreasLutro on #laravel!
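For context, a minimal sketch (class, table and column names are assumptions based on the question, written Laravel 4-style) of where that flag lives:
class NotificationNewItem extends Eloquent
{
    protected $table = 'notification_new_items'; // assumed table name
    protected $primaryKey = 'item_id';           // matches getKeyName() in the question
    public $incrementing = false;                // key is assigned manually, not auto-incremented
}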
I don't know exactly what you want, but to get the last inserted ID use this:
$newItem = new NotificationNewItem;
$newItem->item_id = $item->id;
$newItem->save();
// $newItem->id now holds the last inserted id
return $newItem->id;
// or: NotificationNewItem::find($newItem->id);

Invalid index N for this SqlParameterCollection with Count=N only when associated table has null record

I have a rather complex entity which will not save when a particular database table is missing a record. When the record exists the entity saves correctly. When the record does not I receive the exception:
Invalid index N for this SqlParameterCollection with Count=N
After reading a bunch of solutions found via Google and the most closely related questions on Stack Overflow:
What's causing “Invalid index nn for this SqlParameterCollection with Count=nn” when a column is Null in the database?
"Invalid Index n for this SqlParameterCollection with Count=n" OR "foreign key cannot be null"
I believe my issue has to do with the way I have my mapping files set up. The Customer entity has a reference to the Person entity. Person maps to a table which we have read, but not write, access to. It is when a record for the Person entity does not exist that I get the exception; if the record exists there is no issue. I've set the reference to Person from Customer to Nullable(). I have also double-checked to ensure I do not have a property mapped twice from either entity.
Here is what I feel is the pertinent mapping information, but can provide more as needed:
Customer
//more mapping code...
References(x => x.Person, "snl_id").Nullable();
//more mapping code...
Person
//more mapping code...
ReadOnly();
Id(x => x.SnlId).Column("SNL_ID");
//more mapping code...
To further complicate matters we have some painful code to make NHibernate perform better when Person does not exist. I am not sure it applies here, but thought it pertinent enough to include in my question. We are using the code below because without it NHibernate will create tons of queries (a known issue in the NHibernate JIRA). This solution is outlined in this Stack Overflow answer.
Customer's person property
public virtual Person Person
{
    get
    {
        try
        {
            var snlId = per.Name;
            return per;
        }
        catch
        {
            return null;
        }
    }
    set
    {
        per = value;
    }
}
private EPerson per;
What am I missing in my mappings that would cause this exception? Is there another piece of this problem that I am not seeing?
While Scott's solution of removing the snl_id property from the Customer class fixes the issue, it causes problems that I cannot get around: the snl_id can exist in the Customer table even when there is no corresponding Person table record. Since that is the case, there are times when I need access to the snl_id but cannot get to it via the associated Person property.
I considered several alternative solutions but settled on creating a view over the Customer table containing the Customer table's primary key and the snl_id column, then mapping the SnlId property via a join to that view.
Join("v_cust_id_snl_id", j => { j.KeyColumn("cust_id"); j.Map(x => x.SnlId, "snl_id"); });
This change allowed me to have my cake and eat it too. I was able to keep the SnlId property on Customer while no longer throwing the exception when saving.
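For illustration, here is a hedged sketch of how that join might sit in the full Customer mapping (Fluent NHibernate; the class names, the CustId property, and the table name are assumptions based on the question):
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Table("Customer");
        Id(x => x.CustId, "cust_id"); // assumed primary key property

        // Optional reference to the read-only Person table, as in the question.
        References(x => x.Person, "snl_id").Nullable();

        // Pull snl_id back in via the view, so it stays readable even when
        // no corresponding Person row exists.
        Join("v_cust_id_snl_id", j =>
        {
            j.KeyColumn("cust_id");
            j.Map(x => x.SnlId, "snl_id");
        });
    }
}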
Do you have the snl_id referenced as a property in Customer as well as being the primary key for the child object? If so, this is causing the error you are receiving. Remove the property from Customer and use Person to get the value.