I have a bidirectional mapping between Family and Person:
@Entity class Family {
    @OneToMany(mappedBy = "family", ...)
    Set<Person> persons;
    ...
}

@Entity class Person {
    @ManyToOne
    Family family;
    ...
}
My problem is that elements can be added to and/or removed from the collection concurrently, which breaks atomicity of the updates: adding a new Person generates an INSERT INTO Person ... statement that does not collide with the other insert. A @Version field does not help, since I am not updating the Family entity as a row in the table, only logically.
What should I do to allow only atomic updates of the collection? I have tried loading Family with LockMode.PESSIMISTIC_WRITE, which does synchronize the updates on the Family itself, but that prevents second-level caching for reads of Family. I would prefer optimistic transactions with a kind of predicate constraint in the DB.
You can use OPTIMISTIC_FORCE_INCREMENT:
entityManager.lock(family, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
This way, the version will be checked and incremented on the corresponding Family entity instance.
EDIT
Since you are using L2 cache, and for some reason (bug, design decision, or just a missing feature) Hibernate does not update the version in the L2 cache after the force increment, you will have to actually load the entity with this lock for everything to work properly:
entityManager.find(Family.class, id, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
However, you will now have to take care to always load Family entities this way prior to updating them (to avoid working with stale version values, because the L2 cache is not in sync with the database).
To overcome this, you could add an artificial column to the Family entity, like lastUpdateTime or similar, and update the Family instance without using any explicit lock. Then the regular version check would occur and everything would stay synchronized with the L2 cache.
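A minimal sketch of that last variant (assuming Family has a @Version field; the lastUpdateTime property and the addPerson helper, including Person.setFamily, are illustrative, not from the original question):

import java.time.Instant;
import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
public class Family {

    @Id @GeneratedValue
    private Long id;

    @Version
    private long version;

    // Artificial column: touching it turns a logical collection change into a
    // real UPDATE on the Family row, so the regular version check applies and
    // the L2 cache stays in sync.
    private Instant lastUpdateTime;

    @OneToMany(mappedBy = "family")
    private Set<Person> persons = new HashSet<>();

    public void addPerson(Person person) {
        persons.add(person);
        person.setFamily(this);
        lastUpdateTime = Instant.now(); // forces a version-checked UPDATE
    }
}

Concurrent modifications of the collection then fail with an OptimisticLockException instead of silently interleaving.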
Let's say we have two entities, A and B. B has a many-to-one relationship to A, as follows:
@Entity
public class A {
    @OneToMany(mappedBy = "a")
    private List<B> children;
}

@Entity
public class B {
    @ManyToOne
    @JoinColumn(name = "a_id")
    private A a;

    private String data;
}
Now, I want to delete the A object and cascade the deletion to all its children B. There are two ways to do this (see the sketch after this list):
Add cascade=CascadeType.ALL, orphanRemoval=true to the @OneToMany annotation, letting JPA remove all children before removing the A object from the database.
Leave the classes as they are and simply let the database cascade the deletion.
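For concreteness, the two options might look roughly like this (a sketch; the constraint and column names are illustrative, and B is as defined above):

// Option 1: JPA cascades the removal, child by child.
@Entity
public class A {
    @OneToMany(mappedBy = "a", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<B> children;
}

// Option 2: keep the mapping as it is and declare the cascade in the schema
// instead, e.g. with DDL like:
//   ALTER TABLE b ADD CONSTRAINT fk_b_a
//       FOREIGN KEY (a_id) REFERENCES a (id) ON DELETE CASCADE;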
Is there any problem with using the latter option? Will it cause the entity manager to keep references to already-deleted objects? My reason for choosing option two over option one is that option one generates n+1 SQL queries for a removal, which can take a long time when an A object contains a lot of children, while option two generates only a single SQL query and then moves on happily. Is there any "best practice" regarding this?
I'd prefer the database. Why?
The database is probably a lot faster at doing this.
The database should be the primary place to hold integrity and relationship information; JPA just reflects that information.
If you connect with a different application/platform (i.e. without JPA), you can still delete records cascadingly, which helps preserve data integrity.
In EclipseLink you can use both if you use the @CascadeOnDelete annotation. EclipseLink will also generate the cascade DDL for you.
See http://wiki.eclipse.org/EclipseLink/Examples/JPA/DeleteCascade
This optimizes the deletion by letting the database do it, but also maintains the cache and the persistence unit by removing the objects.
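A sketch of the combined approach (assuming EclipseLink's org.eclipse.persistence.annotations.CascadeOnDelete; the entities follow the question's example):

import java.util.List;
import javax.persistence.*;
import org.eclipse.persistence.annotations.CascadeOnDelete;

@Entity
public class A {
    @Id @GeneratedValue
    private Long id;

    // The JPA-side rules keep the persistence context and caches consistent;
    // @CascadeOnDelete additionally makes EclipseLink emit ON DELETE CASCADE
    // DDL so the database does the actual row removal.
    @OneToMany(mappedBy = "a", cascade = CascadeType.ALL, orphanRemoval = true)
    @CascadeOnDelete
    private List<B> children;
}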
Note that orphanRemoval=true will also delete objects removed from the collection, which a database cascade constraint will not do for you, so having the rules in JPA is still necessary. There are also some relationships the database cannot handle deletion for, because the database can only cascade in the inverse direction of the constraint: a OneToOne with a foreign key, or a OneToMany with a join table, cannot be cascaded on the database.
This answer raises some really strong arguments about why it should be JPA that handles the cascade, not the database.
Here's the relevant quote:
...if you would make cascades on the database, and not declare them in Hibernate (for performance reasons), you could in some circumstances get errors. This is because Hibernate stores entities in its session cache, so it would not know about the database deleting something in cascade.
When you use a second-level cache, your situation is even worse, because this cache lives longer than the session, and such changes on the DB side will be invisible to other sessions as long as the old values are stored in this cache.
I'm trying to use the 'adonet.batch_size' property in NHibernate. I'm creating entities across multiple sessions at a high rate (hence the batch inserting), so I'm keeping these entities in a buffer and flushing them out all at once periodically.
However, I need the IDs as soon as I create the entities. So I want to create an entity (in any session) and have its ID generated immediately (I'm using the HiLo generator). Then, at a later time (and in another session), I want to flush that buffer and ensure that those IDs do not change.
Is there any way to do this?
Thanks
Guido
I find it odd that you need many sessions to do a single job. Normally a single session is enough to do all work.
That said, the HiLo generator sets the id property on the entity when you call nhSession.Save(object), without necessarily requiring a round-trip to the database, and nhSession.Flush() will flush the pending inserts to the database.
UPDATE
This is a method I used in a specific case to do pure SQL inserts while maintaining NHibernate compatibility:
// This will get the next value and update the hi-lo value repository in the datastore.
public static void GenerateIdentifier(object target)
{
    var targetType = target.GetType();
    var classMapping = NHibernateSessionManager.Instance.Configuration.GetClassMapping(targetType);
    var impl = NHibernateSessionManager.Instance.GetSession().GetSessionImplementation();
    var newId = classMapping.Identifier
        .CreateIdentifierGenerator(impl.Factory.Dialect, classMapping.Table.Catalog,
                                   classMapping.Table.Schema, classMapping.RootClazz)
        .Generate(impl, target);
    classMapping.IdentifierProperty.GetSetter(targetType).Set(target, newId);
}
So, this method takes your newly constructed entity, like so:
var myEnt = new MyEnt(); //has default identifier
GenerateIdentifier(myEnt); //now has identifier injected based on nhibernate's mapping
Note that this call does not place the entity in any kind of NHibernate-managed space, so you still have to keep track of your objects and save each one. Also note that I used this with pure SQL inserts; unless you specify generator="assigned" in your entity mapping (which will then require some custom HiLo generator), NHibernate may require a different mechanism to persist it.
All in all, what you want is to generate an ID for an object that will be persisted at some time in the future. This brings up problems such as handling non-existent entries caused by rollbacks and failed commits. Additionally, in my opinion, NHibernate is not the tool for this particular job: you don't need NHibernate to do your bulk insert unless there is some complex entity logic that is too costly (in dev time) to implement on your own.
Also note that you are implying that you need transient detached entities, which cannot be used unless you call nhSes.Save(obj) on the first session and flush its contents, so that when the second session calls Load on the transient object there is an existing row in the database; this contradicts what you want to achieve.
In my opinion, don't be afraid of storming the database; just optimize the procedure top to bottom so it can handle the volume. Using NHibernate just to do an insert seems counter-productive when you can achieve the same result with four times the performance using plain ADO.NET or even a wrapped ISQLQuery (combined with the method I provided above).
I have an object called "Customer" which is used in other tables as a foreign key.
The problem is that I want to know whether a "Customer" can be deleted (i.e., it is not being referenced in any other tables).
Is this possible with NHibernate?
What you are asking is to find out whether the Customer PK value exists in the referencing tables' FK columns.
There are many ways you can go about this:
1. As kgiannakakis noted, try to do the delete and roll back if an exception is thrown. Effective but ugly, and it requires that your database constraints restrict the delete (e.g. ON DELETE RESTRICT). The drawback is that you have to attempt the delete to find out that you can't.
2. Map the entities that reference Customer as collections; then, for each collection, if its Count > 0, do not allow the delete. Good because it is safe against schema changes as long as the mapping is complete; bad because additional selects have to be made.
3. Have a method that performs a query like bool IsReferenced(Customer cust). Good because you have a single query which you use whenever you want; not so good because it may be susceptible to errors due to schema and/or domain changes (depending on the type of query you use: SQL/HQL/criteria).
4. A computed property on the class itself, with a mapping element like <property name="IsReferenced" type="long" formula="SQL query that sums the Customer id usage in the referencing tables" />. Good because it's fast (at least as fast as your DB is) and needs no additional queries; not so good because it is susceptible to schema changes, so when you change your DB you mustn't forget to update this formula.
5. A crazy solution: create a schema-bound view that makes the calculation, and query it when needed. Good because it's schema-bound and less susceptible to schema changes, and the query is quick; not so good because you still have to issue an additional query (or you map the view's result as in solution 4).
Options 2, 3 and 4 are also good because you can project this behavior to your UI (don't allow the delete in the first place).
Personally, I would go for 4, then 3, then 5, in that order of preference.
I want to know if a "Customer" can be deleted (ie, it is not being referenced in any other tables).
It is not really the database's responsibility to determine whether the Customer can be deleted; that is part of your business logic.
You are asking to check referential integrity on the database. That is fine in a non-OOP world, but when dealing with objects (as you do), you'd better add the logic to your objects (objects have state and behavior; the DB has only the state).
So I would add a method to the Customer class that determines whether it can be deleted. This way you can properly unit-test the functionality.
For example, let's say we have a rule that a Customer can only be deleted if he has no orders and has not participated in the forum.
Then you would have a Customer object similar to this (simplest possible case):
public class Customer
{
    public virtual ISet<Order> Orders { get; protected set; }
    public virtual ISet<ForumPost> ForumPosts { get; protected set; }

    public virtual bool CanBeDeleted
    {
        get { return Orders.Count == 0 && ForumPosts.Count == 0; }
    }
}
This is a very clean and simple design that is easy to use and test, and it does not rely heavily on NHibernate or the underlying database.
You can use it like this:
if (myCustomer.CanBeDeleted)
    session.Delete(myCustomer);
In addition to that you can fine-tune NHibernate to delete related orders and other associations if required.
Note: the example above is just the simplest possible illustrative solution. You might want to make such a rule part of the validation that is enforced when deleting the object.
Thinking in entities and relations instead of tables and foreign keys, there are these different situations:
Customer has a one-to-many relation that forms a part of the customer, for instance his phone numbers. These should also be deleted, by means of cascading.
Customer has a one-to-many or many-to-many relation which is not part of the customer, but the related objects are known/reachable from the customer.
Some other entity has a relation to the Customer. It could also be an any-type (which is not a foreign key in the database), for instance the customer's orders. The orders are not known by the customer. This is the hardest case.
As far as I know, there is no direct solution from NHibernate. There is the metadata API, which allows you to explore the mapping definitions at runtime; IMHO, this is the wrong way to do it.
In my opinion, it is the responsibility of the business logic to validate whether an entity can be deleted or not. (Even if there are foreign keys and constraints that ensure the integrity of the database, it is still business logic.)
We implemented a service which is called before the deletion of an entity. Other parts of the software register for certain types and can veto the deletion (e.g. by throwing an exception).
For instance, the order system registers for the deletion of customers. If a customer is about to be deleted, the order system searches for orders by this customer and throws if it finds one.
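The shape of such a veto service, as a language-neutral sketch (written in Java here; all names are illustrative, not an existing API):

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A check registered by some subsystem; it throws to veto a deletion.
interface DeletionVeto<T> {
    void check(T candidate);
}

class DeletionService {

    private final Map<Class<?>, List<DeletionVeto<?>>> vetoes = new HashMap<>();

    <T> void register(Class<T> type, DeletionVeto<T> veto) {
        vetoes.computeIfAbsent(type, k -> new ArrayList<>()).add(veto);
    }

    // Runs every veto registered for the entity's type before the actual delete.
    @SuppressWarnings("unchecked")
    <T> void checkDeletable(T entity) {
        List<DeletionVeto<?>> registered =
                vetoes.getOrDefault(entity.getClass(), Collections.emptyList());
        for (DeletionVeto<?> veto : registered) {
            ((DeletionVeto<T>) veto).check(entity); // throws if deletion is vetoed
        }
    }
}

The order system from the example would register a veto for Customer that queries for the customer's orders and throws if any exist.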
It's not possible directly. Presumably your domain model includes Customer's related objects, such as Addresses, Orders, etc. You could use the specification pattern for this.
public class CustomerCanBeDeleted
{
    public bool IsSatisfiedBy(Customer customer)
    {
        // Check that related objects are null and related collections are empty,
        // plus any business logic that determines whether a Customer can be
        // deleted. An illustrative check (collections assumed from your model):
        return customer.Orders.Count == 0 && customer.Addresses.Count == 0;
    }
}
Edited to add:
Perhaps the most straightforward method would be to create a stored procedure that performs this check and to call it before deleting. You can obtain an IDbCommand from NHibernate (ISession.Connection.CreateCommand()) so that the call is database-agnostic.
See also the responses to this question.
It might be worth looking at the cascade property, in particular all-delete-orphan, in your hbm.xml files; this may take care of it for you.
See here, 16.3 - Cascading Lifecycle
A naive solution would be to use a transaction: start a transaction and delete the object. An exception will inform you that the object can't be deleted. In either case, do a rollback.
Map the entities that reference Customer as collections, and name each collection in your Customer class with a particular suffix. For example, if your Customer entity has some Orders, name the Orders collection as below:
public virtual ISet<Order> Orders_NHBSet { get; set; } // add "_NHBSet" at the end
Now, using reflection, you can get all properties of Customer at runtime, pick those whose names end with the defined suffix (in this case "_NHBSet"), and check whether each such collection contains any elements; if one does, avoid deleting the customer.
public static void DeleteCustomer(Customer customer)
{
    using (var session = sessions.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        var properties = typeof(Customer).GetProperties();
        foreach (var property in properties)
        {
            if (property.Name.EndsWith("_NHBSet"))
            {
                dynamic collection = property.GetValue(customer, null);
                // Check whether the collection contains any element
                if (Enumerable.FirstOrDefault(collection) != null)
                {
                    MessageBox.Show("Customer cannot be deleted");
                    return;
                }
            }
        }
        session.Delete(customer);
        transaction.Commit();
    }
}
The advantage of this approach is that you don't have to change your code later if you add new collections to your Customer class, and you don't need to change your SQL query as Jaguar suggested.
The only thing you must take care of is to add the particular suffix to any newly added collections.
I have two domain classes with a many-to-many relationship in Grails: decks and cards.
The setup looks like this:
class Deck {
    static hasMany = [cards: Card]
}

class Card {
    static hasMany = [decks: Deck]
    static belongsTo = Deck
}
After I delete a deck, I also want to delete all cards which no longer belong to any deck. The easiest way to accomplish this would be to write something like the following SQL:
delete from card where card.id not in(select card_id from deck_cards);
However, I can't figure out how to write an HQL query which resolves to this SQL, because the join table, deck_cards, does not have a corresponding Grails domain class. I can't write this statement using normal joins because HQL doesn't allow joins in delete statements, and if I use a subquery to get around this restriction, MySQL complains because you're not allowed to refer to the table you're deleting from in the subquery's FROM clause.
I also tried using Hibernate's "delete-orphan" cascade option, but that results in all cards being deleted when a deck is deleted, even if those cards also belong to other decks. I'm going crazy; this seems like it should be a simple task.
edit
There seems to be some confusion about this specific use of "decks" and "cards". In this application the "cards" are flashcards, and there can be tens of thousands of them in a deck. It is sometimes necessary to make a copy of a deck so that users can edit it as they see fit. In this scenario, rather than copying all the cards over, the new deck just references the same cards as the old deck, and only when a card is changed is a new card created. Also, while I can do this delete in a loop in Groovy, it would be very slow and resource-intensive, since it would generate tens of thousands of SQL delete statements rather than just one (using the SQL above). Is there no way to access a property of the join table in HQL?
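(For what it's worth, the one-statement cleanup can be issued from Hibernate as native SQL, side-stepping HQL entirely; a sketch, assuming an open org.hibernate.Session and the table/column names from the SQL above:)

// Native SQL bypasses HQL's inability to reference the join table directly.
int removed = session.createSQLQuery(
        "delete from card where card.id not in (select card_id from deck_cards)")
    .executeUpdate();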
First, I don't see the point of your entities. It is illogical for a card to belong to more than one deck, and it is illogical to have both belongsTo and hasMany.
Anyway, don't use HQL for deletes.
If you actually need a OneToMany, use session.remove(deck) and set the cascade of cards to REMOVE or ALL.
If you really want a ManyToMany, do the checks manually on the entities. In pseudocode (since I don't know Grails):
for (Card card : deck.cards) {
    card.decks.remove(deck);
    if (card.decks.isEmpty()) {
        session.remove(card);
    }
}
I won't be answering the technical side, but challenging the model. I hope this will also be valuable to you :-)
Functionally, it seems to me that your two objects don't have the same lifecycle:
Decks are changing: they are created, filled with Cards, modified, and deleted. They certainly need to be persisted to your database, because you wouldn't be able to recreate them from code otherwise.
Cards are constant: the set of all cards is known from the beginning, and they keep existing. If you delete a Card from the database, you will need to recreate the same Card later when someone puts it in a Deck, so in any case you will have a data structure that is responsible for providing the list of possible Cards. If they are not saved in your database, you can recreate them...
In the model you give, the cards hold a set of the Decks containing them. But that information has the same lifecycle as the Decks' (changing), so I suggest holding the association only on the Deck's side (a unidirectional many-to-many relationship).
Once you've done that, your Cards are really constant information, so they don't even need to be persisted to the database. You would still have a second table (in addition to Deck), but that Card table would only contain the identifying information for the Card (which could be a simple integer 1 to 52, or two values, depending on what you need to "select" in your queries), and no other fields (an image, the strength, some points, etc.).
In Hibernate, these choices turn the many-to-many relationship into a collection of values (see the Hibernate reference documentation).
With a collection of values, Card is not an entity but a component, and you don't have to delete the cards: everything is automatically taken care of by Hibernate.
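A sketch of what that can look like with JPA-style annotations (assuming cards are identified by a simple value; the table and column names are illustrative):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
public class Deck {

    @Id @GeneratedValue
    private Long id;

    // Cards as a collection of values: Hibernate owns the deck_cards table,
    // and deleting a Deck automatically deletes its rows there. There is no
    // Card entity, so there is nothing to orphan.
    @ElementCollection
    @CollectionTable(name = "deck_cards",
                     joinColumns = @JoinColumn(name = "deck_id"))
    @Column(name = "card_id")
    private Set<Integer> cards = new HashSet<>();
}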
I am wondering how one can delete an entity given just its ID and type (as in the mapping), using NHibernate 2.1.
If you are using lazy loading, Load only creates a proxy.
session.Delete(session.Load(type, id));
With NH 2.1 you can use HQL. I'm not sure how it actually looks, but something like this (note that this is subject to SQL injection; if possible, use parameterized queries with SetParameter() instead):
session.Delete(string.Format("from {0} where id = {1}", type, id));
Edit:
For Load, you don't need to know the name of the Id column.
If you need to know it, you can get it by the NH metadata:
sessionFactory.GetClassMetadata(type).IdentifierPropertyName
Another edit.
session.Delete() instantiates the entity
When using session.Delete(), NH loads the entity anyway. At the beginning I didn't like that; then I realized the advantages. If the entity is part of a complex structure using inheritance, collections, or "any" references, it is actually more efficient.
For instance, if classes A and B both inherit from Base, NH doesn't try to delete data in table B when the actual entity is of type A. This wouldn't be possible without loading the actual object, and it is particularly important when there are many inherited types, each of which also consists of many additional tables.
The same situation arises when you have a collection of Bases which happen to all be instances of A. When loading the collection into memory, NH knows that it doesn't need to remove any B-stuff.
If entity A has a collection of Bs, which contain Cs (and so on), NH doesn't try to delete any Cs when the collection of Bs is empty. This is only possible when reading the collection, and it is particularly important when C is complex on its own, aggregating even more tables.
The more complex and dynamic the structure is, the more efficient it is to load actual data instead of "blindly" deleting it.
HQL Deletes have pitfalls
HQL deletes do not load data into memory, but they aren't that smart: they basically translate the entity name to the corresponding table name and delete the rows from that table, plus some aggregated collection data.
In simple structures this may work well and efficiently. In complex structures, not everything gets deleted, leading to constraint violations or "database memory leaks".
Conclusion
I also tried to optimize deletion with NH. I gave up in most cases, because NH is still smarter: it "just works" and is usually fast enough. One of the most complex deletion algorithms I wrote analyzes NH mapping definitions and builds delete statements from them. And, no surprise, it is not possible without reading data from the database before deleting. (I just reduced it to loading only the primary keys.)