Out of the blue, I am getting this error when doing a number of updates using NHibernate.
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [MyDomainObject]
There is no additional information in the error. Is there a recommended way to identify the root issue, or can someone give me a better explanation of what this error indicates or what it is a symptom of?
Some additional info
I looked at the object and all of the data looks fine; it has an ID, etc.
Note this is running in a single call stack from an ASP.NET MVC website, so I wouldn't expect any threading issues to worry about in terms of concurrency.
NHibernate has an object, let's call it theObject. theObject.Id has a value of 42. NHibernate notices that the object is dirty. The object's Id is different than the unsaved-value, which is zero (0) for integer primary keys. So NHibernate issues an update statement, but no rows are updated, which means that there is no row in the database for that type of object with an Id of 42. So the object has been deleted without NHibernate knowing about it. This could happen inside another transaction (e.g. you have threading issues) or if someone (or another application) deleted/altered the row using SQL directly against the database.
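A minimal sketch of that first scenario (the property name is illustrative, not from the question): if the row disappears between the load and the flush, the flush throws.
// theObject was loaded earlier in this ISession; Id = 42
var theObject = session.Get<MyDomainObject>(42);
theObject.Name = "changed"; // the object is now dirty
// meanwhile, another connection runs:
// DELETE FROM MyDomainObject WHERE Id = 42
session.Flush(); // UPDATE ... WHERE Id = 42 affects 0 rows
// -> StaleObjectStateException: Row was updated or deleted by another transaction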
The other possibility is that your unsaved-value is wrong, e.g. you are using -1 to indicate an unsaved entity, but your mapping has an unsaved-value of zero. This is unlikely, as your application is generally working by the sounds of it. If the unsaved-value were wrong, you wouldn't have been able to save any entities to the database at all, as NHibernate would have been issuing UPDATE statements when it should have been issuing INSERTs.
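For reference, the unsaved-value is declared on the id mapping. A minimal hbm.xml fragment (property and column names assumed) looks like:
<id name="Id" column="Id" type="int" unsaved-value="0">
  <generator class="native" />
</id>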
It means that you have multiple transactions accessing the same data, thus producing concurrency issues. You should improve your data access handling; you are probably updating data from multiple threads. Funnel the changed data through a queue first, and let a single consumer handle all access to the DB.
An old post, but hopefully my info will help someone. I was getting a similar error but only when persisting associations, after I had added in a new object. The error was of the form:
NHibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) [My.Entity#0]
Note the zero on the end, which is my identifier property. It should not be trying to save with key zero, as I was using identity specification in SQL Server (generator class="native"). I had not changed my unsaved-value in my XML, so I had no idea what the problem was; for some reason NHibernate was trying to do an update using a key value of 0 instead of a save (and getting the next identity key) for my new object.
In the end I found the cause: I was initialising the Version number to 1 for the new object in my constructor! Even though my identifier property was zero, NHibernate was also looking for a version property of zero to identify it as an unsaved transient instance. The book "NHibernate in Action" actually mentions this on page 120, but for some reason my objects were fine when persisting normally with a version number of 1, and only failed when saving a new object through an association.
So make sure you do not set your Version value (leave as zero or null).
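A minimal sketch of what that means (class and property names assumed):
public class MyEntity
{
    public virtual int Id { get; set; }      // stays 0 until NHibernate assigns it
    public virtual int Version { get; set; } // leave at the default of 0

    public MyEntity()
    {
        // Version = 1; // <-- this was the bug: with a non-zero version,
        // NHibernate no longer treated the instance as transient
    }
}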
You say that your data is OK, but check whether, for example, you are mapping the ID as self-generated. I had the exact same problem, but I was sending an object with an ID different from 0.
Hope it helps!
My problem was this:
[Bind(Include="Name")] EventType eventType
Should have been:
[Bind(Include="EventTypeId,Name")] EventType eventType
Just as the other answers suggest, NHibernate was using zero as the id for my entity: because EventTypeId was missing from the Bind include list, the model binder left it at its default of 0.
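For context, the attribute sits on the action parameter. A hedged sketch (the action name and repository call are assumed):
[HttpPost]
public ActionResult Edit([Bind(Include = "EventTypeId,Name")] EventType eventType)
{
    // Without EventTypeId in the Include list, the model binder leaves the id
    // at its default of 0, which triggered the stale-object error described above.
    repository.Update(eventType); // assumed repository call
    return RedirectToAction("Index");
}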
If you have a trigger on the table, it can be the cause: the trigger's own statements change the rows-affected count that NHibernate checks after its UPDATE. In this case, add the following inside the trigger:
SET ROWCOUNT 0;
SET NOCOUNT ON;
This error happened to me in the following way:
List<Device> allDevices = new List<Device>();
// Add devices to the list
allDevices.Add(aDevice);
// Save allDevices to the database -- this works fine
// allDevices.Clear(); // should be called here
// Later we add more devices
allDevices.Add(anotherDevice);
// Save allDevices to the database again -> we get the error, because the
// already-persisted aDevice is saved a second time.
// Solution: clear the list before reusing it for a new transaction
allDevices.Clear();
I created a custom table to store reasons for modifying an object. I'm doing a POC with BOPF in order to learn, even if it may not make sense to use it here.
This is what the persistent structure looks like (simplified):
define type zobject_modifications {
object_id : zobject_id;
#EndUserText.label : 'Modification Number'
mod_num : abap.numc(4);
reason_id : zreason_id;
#EndUserText.label : 'Modification Comments'
comments : abap.string(256);
}
The alternative key consists of object_id + mod_num. The mod_num should be an auto-generated counter, always adding 1 to the last modification for that object_id.
I created a before_save determination to generate it, checking the MAX mod_num from the database BOs and from the currently instantiated BOs and increasing it by 1.
But when I try to create 2 BOs for the same object in a single transaction, I get an error because of the duplicate alternative key, since the field MOD_NUM is still initial and the before_save determination is only triggered later. I tried to change the determination to "After Modify", but I still get the same problem.
The question is: When and how should I generate the next MOD_NUM to be able to create multiple nodes for the same object ID safely?
This must be a very common problem so there must be a best practice way to do it, but I was not able to find it.
Use a number range to produce sequential identifiers. They ensure that you won't get duplicates if there are ongoing and concurrent transactions.
If you want to insist on determining the next identifier on your own, use the io_read input parameter of the determination to retrieve the biggest mod_num:
The database contains only those nodes that have already been committed. Your new nodes are not committed yet, so you won't get them.
io_read, in contrast, accesses BOPF's temporary buffer, which also contains the nodes you just created, hence it sees more up-to-date data.
I have been reading about MediatR and CQRS lately, and I saw many people saying that commands shouldn't return domain objects. They can return values, but these are limited to error values, failure/success information, and the Id of newly created entities.
My question is how to return this new object to the client if the command can return only the Id of the new entity.
1) Should I query the database again with this new Id? If so, isn't it bad that I'm making a new trip to the database to get an object that was in memory a few seconds ago?
2) What's the correct way of returning the entities created by the commands?
I think the more important question is why you shouldn't return domain objects from commands. If the reason for that seems like a valid reason for you, you should look into alternatives such as executing a query right after the command to fetch the domain object.
If, however, returning the domain object from the command fits your needs and does not impose any direct problems, then why not just do it and keep things simple and straightforward?
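A hedged sketch of the conventional split (all type names here are illustrative, not a prescribed API): the command returns only the new Id, and the client follows up with a query when it needs the full object.
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

public class CreateOrderCommand : IRequest<Guid>
{
    public string CustomerName { get; set; }
}

public class CreateOrderHandler : IRequestHandler<CreateOrderCommand, Guid>
{
    private readonly IOrderRepository _orders; // assumed persistence abstraction

    public CreateOrderHandler(IOrderRepository orders) => _orders = orders;

    public async Task<Guid> Handle(CreateOrderCommand request, CancellationToken ct)
    {
        var order = new Order(request.CustomerName); // assumed domain type
        await _orders.AddAsync(order, ct);
        return order.Id; // only the identifier crosses the command boundary
    }
}

// Client side: send the command, then query for the object if it is needed.
// GetOrderByIdQuery is an assumed query type.
Guid id = await mediator.Send(new CreateOrderCommand { CustomerName = "Ted" });
OrderDto dto = await mediator.Send(new GetOrderByIdQuery(id));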
We are going to implement GemFire for our project. We are currently syncing the GemFire cache with our DB2 database, and we are facing an issue when putting DB data into the cache.
To put DB data into the region, I have implemented com.gemstone.gemfire.cache.CacheLoader and overridden its load method. As written in the Javadoc, the load method returns only one object, but for our requirement we have to return multiple VOs from the load method:
public List<CmDvceInvtrGemfireBean> load(LoaderHelper<CmDvceInvtrGemfireBean, CmDvceInvtrGemfireBean> helper)
throws CacheLoaderException
When returning multiple VOs in the form of a List<CmDvceInvtrGemfireBean>, the GemFire region considers the list a single value.
So, when I invoke
System.out.println("return COUNT" + cmDvceInvtrRecord.query("SELECT COUNT(*) FROM /cmDvceInvtrRecord"));
it returns a count of one, but I can see a total of 7 records in it.
So, I want to implement a mechanism that will put all 7 values into the region as separate VOs.
Is there any way to do this using Gemfire CacheLoader?
A CacheLoader is meant to load the value for a single entry in a GemFire Region on a cache miss. As the Javadoc states, it
"...creates the value for the desired key..."
While a key can map to a multi-valued (e.g. an array/Collection) value, the CacheLoader can only populate a single entry.
You will have to resort to other means of populating the cache with multiple "entries" in a single operation.
Out of curiosity, why do you need (requirement?) to load multiple entries (from the DB) at once? Are you trying to minimize the number of round trips to the DB?
Also, what logic are you using to decide what VO from the DB will be loaded based on the information (i.e. key) provided in the CacheLoader?
For instance, are you somehow trying to predictably select values from the DB based on the CacheLoader key that would subsequently minimize cache misses on future Region.get(key) calls?
Sorry, I don't have a better answer for you right now, but answers to some of these questions may help me give you some ideas for alternatives.
Cheers,
John
There are HotelComment and CommentPhoto (1:n); a user can add some photos to their own comment. I'm loading a slice of comments with one query and want to load the photos for these comments using another query (using WHERE IN).
$comments = $commentsRepo->findByHotel($hotel);
$comments->loadPhotos(); // of course $comments is still a plain array at this point
Loading the photos is needed on demand, not on the PostLoad event.
So the question is: how can I associate the loaded photos with the HotelComment objects? Using ReflectionProperty: setAccessible() + setValue()? Is there a simpler solution? I'm also afraid that the UoW will detect the HotelComment entities as modified and send updates to the DB.
If you want to hydrate the related objects this one time only, and not every time the object is loaded, you need to use DQL:
$em->createQuery("SELECT comments, photos FROM HotelComment comments JOIN comments.photos photos");
You can put this in a method on the repository.
This will issue a single SELECT statement, with an INNER JOIN to the comment photos table.
You have to configure your relation as "LAZY". See the Doctrine documentation:
ManyToOne
ManyToMany
OneToOne
Then you'll be able to load it lazily with $comments->loadPhotos(), at least the documentation says so.
UPDATE: I think you don't have to do anything special to keep your entities from being flushed to the DB. In fact, when you query your entries with DQL, they are in the managed state, so attaching them to another managed entity's collection does not change their state, and they are not flushed unless you have modified them.
However, that doesn't help at all, because associations are fetched before first use, so adding an entity to the collection with the following code will result in an implicit database query:
$comment->addPhoto($photo);
// in the Comment class
public function addPhoto(Photo $photo)
{
    // var_dump(count($this->photos)); // if you have any, they are already here
    $this->photos->add($photo);
}
Maybe declaring your collection as public (or the trick with ReflectionProperty) would help fool Doctrine, but those are dirty hacks, so I haven't even tried them.
Detaching the parent entity doesn't help either. I've run out of ideas for now...
Using NHibernate I usually query for single records using the Get() or Load() methods (depending on if I need a proxy or not):
SomeEntity obj = session.Get<SomeEntity>(new PrimaryKeyId(1));
Now, if I execute this statement twice, like in the example below, I only see one query being executed in my unit tests:
SomeEntity obj1 = session.Get<SomeEntity>(new PrimaryKeyId(1));
SomeEntity obj2 = session.Get<SomeEntity>(new PrimaryKeyId(1));
So far, so good. But I noticed some strange behaviour when getting the same object using a ICriteria query. Check out my code below: I get the first object instance. I then change the value of a property to 10 (the value in the database is 8), get another instance and finally check the values of the second object instance.
//get the first object instance.
SomeEntity obj1 = session.CreateCriteria(typeof(SomeEntity))
.Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
.UniqueResult<SomeEntity>();
//the value in the database and the property is 8 at this point. Let's set it to 10.
obj1.SomeValue = 10;
//get the second object instance.
SomeEntity obj2 = session.CreateCriteria(typeof(SomeEntity))
.Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
.UniqueResult<SomeEntity>();
//check if the values match.
Assert.AreEqual(8, obj2.SomeValue);
Now, for some reason the assert fails, because obj2.SomeValue is 10, even though I asked for the object with a new query. The funny thing is, two identical SELECT queries are executed according to my unit test output window. My question: why are two queries executed if the second object is fetched from the first-level cache?
Am I missing something, or is this a bug?
Regards, Ted
edit #1: using NHibernate v2.1.2GA
edit #2: I added some extra explanation about the 2 queries being executed to the last paragraph.
Well, having learned a lot more about NHibernate I can now answer this question myself:
The ICriteria query returns a list of objects fetched by NHibernate. NHibernate does not know which objects are returned until they are matched one by one against the objects in the first-level cache. If an item is already in the first-level cache (the identity map), the item read from the database is discarded; if it is not, the item is put into the first-level cache.
Another "a-ha!" moment: suppose you run the query for the first time while there are 5 rows in the database all rows are fetched and put into first level cache. now over time 5 more records are added to the table and you rerun the query. Now all 10 records are fetched, but NHibernate sees 5 of them are already in the cache and will only add the 5 latter records. So basically you fetched 5 records for nothing (just to match the identifiers with the object identifiers in the identity map).
Get/Load use the first-level cache, which is why you don't see the second call go out to the DB. Queries do not use the first-level cache for lookups. However, you can set up queries to use the second-level cache. See details here.
UPDATE: What's likely happening is that the query does a two-phase load. It gets the result set, but also checks the first-level cache to see if any of the entities already exist there. If they do, it returns the cached object. See the NHibernate.Loader.Loader.GetRow method.
Here is the relevant line:
//If the object is already loaded, return the loaded one
obj = session.GetEntityUsingInterceptor(key);
AFAIK, only Get (and maybe Load) use the first-level cache.
Using the Criteria API always results in a query hitting the DB, unless the second-level cache is enabled.
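If the second-level cache is enabled, the criteria query still has to opt in to the query cache. A minimal sketch (the cache region name is assumed):
SomeEntity obj = session.CreateCriteria(typeof(SomeEntity))
    .Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
    .SetCacheable(true)                  // opt in to the query cache
    .SetCacheRegion("SomeEntityQueries") // assumed region name
    .UniqueResult<SomeEntity>();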
Edit: more information can be found here
I am not sure why a second query is run, but the expected behavior of NHibernate is that if you ask for the same object by ID from the same session, you get the first-level cache.
In my understanding, when using a Criteria, you are basically saying to NHibernate: "I want to filter rows based on expressions".
When seen that way, NHibernate has no way of knowing if the query will always return the same filtered row(s) from the database, so it has to query it again.
Also, you can use query caching only with second-level caching, as per the documentation:
So the query cache should always be used in conjunction with the second-level cache.
From here
NHibernate is probably issuing an update between the first and second queries to protect you from a concurrency problem. As Frederik pointed out, you should always use Get to retrieve an object by its key.
I'm curious, what is the PrimaryKeyId wrapper adding?
EDIT:
However it's working (my money's still on an update before the select), this behavior is by design. If you want to discard your in-memory object and load a fresh instance from the session, Evict the original from the session first. There is also a Refresh method you could try.
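A minimal sketch of both options, reusing the entities from the question:
// Option 1: evict the stale in-memory instance, then query again
session.Evict(obj1);
SomeEntity fresh = session.CreateCriteria(typeof(SomeEntity))
    .Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
    .UniqueResult<SomeEntity>(); // hydrated from the database row this time

// Option 2: overwrite the in-memory state from the database in place
session.Refresh(obj1); // obj1.SomeValue is 8 again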