NHibernate: why field.camelcase?

Can someone tell me why, in an NHibernate mapping, we can set access="field.camelcase", since we already have access="field" and access="property"?
EDIT: my question is "why can we do this", not "what does it mean". I think this can be a source of errors for developers.

I guess you wonder what use field.camelcase has when we can do the same with just field? That's true, but that would give the (NH) properties unintuitive names when, for example, writing queries or referencing the property from other mappings.
Let's say you have something you want to map using the field, e.g.
private string _name;
public string Name { get { return _name; } }
You certainly can map the field using "field", but then you would have to write "_name" when, for example, writing HQL queries.
select a from Foo a where a._name = ...
If you instead use field.camelcase, the same query would look like
select a from Foo a where a.Name...
EDIT
I just saw that you wrote "field.camelcase" but my answer is about "field.camelcase-underscore". The principles are the same and I guess you get the point ;)

The portion after the '.' is the so-called naming strategy, which you specify when the name you write in the hbm differs from the backing field. In the case of field.camelcase you are allowed to write CustomerName in the hbm, and NHibernate will look for a field named customerName in the class. The reason for this is that NHibernate does not force you to adopt a particular naming convention in order to be compliant; NH works with almost any naming convention.
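A rough illustration (class and field names here are invented): with a mapping like the one below, the hbm keeps the property-style name while NHibernate reads and writes the camel-cased backing field directly.
<!-- the C# class is assumed to declare: private string customerName; -->
<class name="Customer" table="Customer">
  <id name="Id" column="Id" type="Int64">
    <generator class="identity" />
  </id>
  <!-- access="field.camelcase" makes NHibernate resolve CustomerName to the field customerName -->
  <property name="CustomerName" access="field.camelcase" column="CustomerName" />
</class>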

There are cases where the properties are not suitable for NH to set values.
They may
have no setter at all
call validation on the data being set, which should not run when loading from the database
do some other stuff that should only happen when the value is changed by the business logic (e.g. set other properties)
convert the value in some way, which would cause NH to perform unnecessary updates.
Then you don't want NH to call the property setter. Instead of mapping the field, you still map the property, but tell NH to use the field when reading / writing the value. Roger has a good explanation of why mapping the property is a good thing.
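As a minimal sketch (the class and property names are invented), this is the kind of property you would not want NHibernate to go through when hydrating an object, so the mapping names the property but points NHibernate at the backing field:
public class Customer
{
    private string _email;

    public virtual string Email
    {
        get { return _email; }
        set
        {
            // business-rule validation that should not run when loading from the database
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("Email is required");
            _email = value;
        }
    }
}
In the hbm you would then map it as something like <property name="Email" access="field.camelcase-underscore" />, so queries and other mappings still refer to Email.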

Related

How can I store Enum value as a String with Hibernate to SQL Server?

So I have an Enum property in an Entity bean:
@Entity
@Table(name = "fileAttachment")
public class FileAttachment {

    // other properties..

    @Enumerated(EnumType.STRING)
    FileAttachmentType type;

    // getters and setters
}
However, when I persist the bean, the value in that column is shown as a number such as 0 or 1 or 2.
If I println the value of the enum just before persisting the bean with EntityManager, the value prints out as String, such as INVOICE but in the SQL Server table that row has value 2 for example on the fileAttachmentType-column. What else do I need to configure? I thought the EnumType.STRING would do the trick.
Do you create the table in the DB yourself, or do you rely on Hibernate to do it?
If the former, make sure the column type is suitable for storing strings.
If the latter, try using an annotation like
@Column(columnDefinition = "enum('VALUE1','VALUE2')")
Ok, in this case things worked out when I added the annotation @Enumerated(EnumType.STRING)
to the getter of that field and NOT to the actual field.
In another project it works when the annotation is on the field and not anywhere else... so, as far as I comprehend, the answer is the good old "for some reason", but it works now.
If someone would comment the reason for this, I'll update the answer.
EDIT: The reason was found. There was already an annotation on a getter in that Entity class. That's why the annotation on the field didn't work. It turns out you should have annotations either ONLY on fields OR ONLY on getters, not on both.

AliasToBean DTO with known type

All the examples I am finding for using the AliasToBean transformer use the session's CreateSqlQuery method rather than the CreateQuery method. They also only return basic value types, and not any objects of the existing mapped types.
I was hoping my DTO could have a property of one of my mapped domain objects, like below, but I am not getting traction. I get the following exception:
Could not find a setter for property '0' in class 'namespace.DtoClass'
My select looks like the following on my mapped classes (I have confirmed the mappings pull correctly):
SELECT
fcs.MeasurementPoint,
fcs.Form,
fcs.MeasurementPoint.IsUnscheduled as "IsVisitUnscheduled",
fcs.MultipleEntryAllowed
FROM FormCollectionSchedule fcs
My end query will be more complex, but I wanted to confirm if this AliasToBean method can return mapped domain objects as well as basic field values from tables retrieved via sql.
the query execution looks like the following:
var result = session.CreateQuery(hqlQuery.ToString())
.SetResultTransformer(NHibernate.Transform.Transformers.AliasToBean(typeof (VisitFormCollectionResult)))
.List<VisitFormCollectionResult>();
note: the VisitFormCollectionResult DTO has more properties, but I wanted to know if I could populate the domain object properties matching the names
Update: found my problem! I have to explicitly alias each of the fields. Once I added an alias, even though the member property on the class matched my DTO's property name, the hydration of the object worked correctly.
The answer to my own question was that each of the individual fields in the select needed an explicit alias matching the property, regardless of whether the field name already matched the property name of the DTO object:
SELECT
fcs.MeasurementPoint as "MeasurementPoint",
fcs.Form as "Form",
fcs.MeasurementPoint.IsUnscheduled as "IsVisitUnscheduled",
fcs.MultipleEntryAllowed as "MultipleEntryAllowed"
FROM FormCollectionSchedule fcs
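For reference, the DTO that AliasToBean fills could look something like the sketch below (the property types are assumptions based on the query above); each alias in the select must match a property with a setter:
public class VisitFormCollectionResult
{
    // mapped domain types returned directly from the HQL projection
    public MeasurementPoint MeasurementPoint { get; set; }
    public Form Form { get; set; }

    // plain values
    public bool IsVisitUnscheduled { get; set; }
    public bool MultipleEntryAllowed { get; set; }
}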

CF9 ORM Populating an entity with an object

I am using Model-Glue/Coldspring for a new application and I thought I would throw CF9 ORM into the mix.
The only issue I am having right now is with populating an entity with an object. More or less the code below verifies that only one username can exist. There is some other logic that is not displayed.
My first thought was to using something like this:
var entity = entityLoad('UserAccount', {UserName=arguments.UserAccount.getUserName()}, "true");
entity = arguments.UserAccount;
However, this does not work the way that I expected. Is it even possible to populate an entity with an object, or do I need to use the setters?
Not sure if this is what you're looking for. If you have...
component persistent="true" entityName="Foo"
{
property a;
property b;
}
You can pass a struct in the 2nd param to init the entity (added in CF9.0.1 I believe)
EntityNew("Foo", {a="1",b="2"});
To populate Foo with another object, you can use the Memento pattern, and implement a GetMemento() function to your object that returns a struct of all its properties.
EntityNew("Foo", bar.getMemento());
However, CF does NOT call your custom setters! If you want to set them using setters, you may add calls to the setters in your init() constructor, or use your MVC framework of choice to populate the bean. In Model-Glue, it is makeEventBean().
Update: Or... Here's a hack...
EntityNew("Foo", DeserializeJSON(SerializeJSON(valueObject)));
Use this at your own risk. JSON might do weird things to your numbers and the 'yes','no','true','false' strings. :)
Is it even possible to populate an entity with an object or do I need to use the setters?
If you mean "Is it possible to load an ORM Entity from an instance of that persistent CFC that already exists and has properties set?", then yes you can, using EntityLoadByExample( object, [unique] )
entity = EntityLoadByExample( arguments.userAccount,true );
This assumes the userAccount CFC has been defined as persistent, and its username value has been set before being passed in (which seems to be the case in your situation).
Bear in mind that if any other properties have been set in the object you are passing, including empty strings, they will be used as filters to load the entity, so if they do not exactly match a record in your database, nothing will be loaded.

Set Accessor on class does not appear to work with TextInfo and TitleCase

Whilst playing around with an NHibernate mapping, I noticed that a property setter I had was being overridden (or ignored). This is expected default behaviour with an NHibernate mapping.
So I changed it to use field.camelcase, so NHibernate would set the private field of the entity class rather than go through the property getter/setter, and I could then use the getter to implement
get { return CultureInfo.CurrentCulture.TextInfo.ToTitleCase(_property); }
I noticed that the output was still what was persisted and this method did not work.
I changed it to _property.ToLower(); and the output was lower case text, as expected.
So it appears that there is something I have not done quite right with TextInfo. NHibernate was working correctly (NB NHibernate rocks)
Any ideas why TextInfo is doing this? Probably something trivial I have missed..
For some reason it doesn't work with upper-case strings: ToTitleCase leaves words that are entirely upper case unchanged (it treats them as acronyms). Uhmmmm, Microsoft ;P
Your solution will be to lower case the input first:
get { return CultureInfo.CurrentCulture.TextInfo.ToTitleCase(_property.ToLower()); }
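A quick, self-contained illustration of the behaviour (note that TextInfo has no public constructor, so it is obtained from a culture here):
using System;
using System.Globalization;

class TitleCaseDemo
{
    static void Main()
    {
        TextInfo ti = CultureInfo.CurrentCulture.TextInfo;

        // Words that are entirely upper case are treated as acronyms and left unchanged
        Console.WriteLine(ti.ToTitleCase("HELLO WORLD"));   // HELLO WORLD
        // Lower-casing the input first gives the expected result
        Console.WriteLine(ti.ToTitleCase("hello world"));   // Hello World
    }
}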

NHibernate - Changing sub-types

How do you go about changing the subtype of a row in NHibernate? For example, if I have a Customer entity and a subclass TierOneCustomer, I have a case where I need to change a Customer into a TierOneCustomer, but the TierOneCustomer should have the same Id (PK) as the original Customer entity.
The mapping looks something like this:
<class name="Customer" table="SiteCustomer" discriminator-value="C">
<id name="Id" column="Id" type="Int64">
<generator class="identity" />
</id>
<discriminator column="CustomerType" />
... properties snipped ...
<subclass name="TierOneCustomer" discriminator-value="P">
... more properties ...
</subclass>
</class>
I'm using the table-per-class-hierarchy model, so using plain SQL it would just be a matter of updating the discriminator (CustomerType) and setting the columns relevant to the type. I can't find the solution in NHibernate, so I would appreciate any pointers.
I'm also thinking whether the model is correct considering this use-case, but before I go down that route, I want to make sure doing as described above is actually possible in the first place. If not, I'll almost certainly think about changing the model.
Short answer is yes, you can change the discriminator value for the particular row(s) using native SQL.
However, I don't think NHibernate is intended to work this way, since the discriminator is generally "invisible" to the Java layer, where its value is supposed to be set initially according to the class of the persisted object and never changed.
I recommend looking into a cleaner approach. From the standpoint of the object model, you're trying to convert a superclass object into one of its subclass types while not changing the identity of its persisted instance, and that's where the conflict is (the converted object isn't really supposed to be the same thing). Two alternative approaches are:
Create a new instance of TierOneCustomer based on the information in the original Customer object, then delete the original object (a rough sketch follows after this list). If you were relying on the Customer's primary key for retrieval, you'll need to take note of the new PK.
or
Change your approach so the object type (discriminator) doesn't need to change. Instead of relying on a subclass to distinguish TierOneCustomer from Customer, you can use a property that you can modify freely at any time, i.e. Customer.Tier = 1.
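A minimal sketch of the first alternative (CopyFrom is a hypothetical helper; error handling and the actual property copying are omitted):
// Promote by replacement: copy data into a new TierOneCustomer, remove the old row.
using (var tx = session.BeginTransaction())
{
    var customer = session.Get<Customer>(customerId);

    var tierOne = new TierOneCustomer();
    tierOne.CopyFrom(customer);   // copy the relevant properties across

    session.Delete(customer);
    session.Save(tierOne);        // note: the identity generator assigns a NEW primary key
    tx.Commit();
    // keep tierOne.Id around if anything relied on the old PK
}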
Here are some related discussions on the Hibernate Forums that may be of interest:
Can we update the discriminator column in Hibernate
Table-per-Class Problem: Discriminator and Property
Converting a persisted instance into a subclass
You're doing something wrong.
What you are trying to do is to change the type of an object. You can't do that in .NET or in Java. That simply doesn't make sense. An object is of exactly one concrete type, and its concrete type cannot be changed from the time the object is created until the time the object is destroyed (black magic notwithstanding). In order to accomplish what you are trying to do, but with the class hierarchy you laid out, you would have to destroy the customer object which you want to turn into a tier-one customer object, create a new tier-one customer object, and copy all the relevant properties from the customer object to the tier-one customer object. That is how you do it with objects, in object-oriented languages, with your class hierarchy.
Obviously, the class hierarchy you have isn't working for you. You don't destroy customers in real life when they become tier-one customers! So don't do it with objects either. Instead, come up with a class hierarchy that makes sense, given the scenarios you need to implement. Your use scenarios include:
A customer who previously is not tier-one status now becomes tier-one status.
That means you need a class hierarchy which can accurately capture this scenario. As a hint, you should favor composition over inheritance. That means, it may be a better idea to have a property named IsTierOne, or a property named DiscountStrategy, etc., depending on what works best.
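For example (a sketch only, using the property names suggested above), the tier becomes plain state instead of a subclass:
public class Customer
{
    public virtual long Id { get; set; }

    // "promoting" a customer is now just a property change, no type change needed
    public virtual bool IsTierOne { get; set; }

    // or, more flexibly, something like:
    // public virtual DiscountStrategy DiscountStrategy { get; set; }
}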
The entire purpose of NHibernate (and Hibernate for Java) is to make the database invisible. To allow you to work with objects natively, with the database magically there behind the scenes to make your objects persistent. NHibernate will let you work with the database natively, but that's not the type of scenario which NHibernate is built for.
This is REALLY late, but may be of use to the next person looking to do something similar:
While the other answers are correct that you shouldn't change the discriminator in most cases, you can do it purely within the scope of NH (no native SQL), with some clever use of mapped properties. Here's the gist of it using FluentNH:
public enum CustomerType //not sure it's needed
{
Customer,
TierOneCustomer
}
public class Customer
{
//You should be able to use the Type name instead,
//but I know this enum-based approach works
public virtual CustomerType Type
{
get {return CustomerType.Customer;}
set {} //small code smell; setter exists, no error, but it doesn't do anything.
}
...
}
public class TierOneCustomer:Customer
{
public override CustomerType Type {get {return CustomerType.TierOneCustomer;} set{}}
...
}
public class CustomerMap:ClassMap<Customer>
{
public CustomerMap()
{
...
DiscriminateSubClassesOnColumn<string>("CustomerType");
DiscriminatorValue(CustomerType.Customer.ToString());
//here's the magic; make the discriminator updatable
//"Not.Insert()" is required to prevent the discriminator column
//showing up twice in an insert statement
Map(x => x.Type).Column("CustomerType").Update().Not.Insert();
}
}
public class TierOneCustomerMap:SubclassMap<TierOneCustomer>
{
public TierOneCustomerMap()
{
//same idea, different discriminator value
...
DiscriminatorValue(CustomerType.TierOneCustomer.ToString());
...
}
}
The end result is that the discriminator value is specified for inserts, and used to determine the instantiated type on retrieval, but then if a record of a different subtype with the same Id is saved (as if the record was cloned or un-bound from the UI to a new type), the discriminator value is updated on the existing record with that ID as an object property, so that future retrievals of that type are as the new object. The setter is required on the properties because AFAIK NHibernate can't be told that a property is read-only (and thus "write-only" to the DB); in NHibernate's world, if you write something to the DB, why wouldn't you want it back?
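To make that concrete, here is a rough, untested sketch of how such a promotion might be triggered with the mappings above (it assumes Id and the copied properties have public setters, and that the old instance is evicted so the session does not track two objects with the same Id):
using (var tx = session.BeginTransaction())
{
    var plain = session.Get<Customer>(customerId);
    session.Evict(plain);                     // detach the old Customer instance

    var upgraded = new TierOneCustomer { Id = customerId };
    // ... copy the relevant property values over from 'plain' ...

    session.Update(upgraded);                 // the UPDATE rewrites CustomerType via the mapped Type property
    tx.Commit();
}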
I used this pattern recently to allow users to change the basic type of a "tour", which is in reality a set of rules governing the scheduling of the actual "tour" (a single digital "visit" to a client's on-site equipment to ensure it all works properly). While they're all "tour schedules" and need to be collectable in lists/queues etc as such, the different types of schedules require very different data and very different processing, calling for a similar data structure as the OP has. I therefore completely understand the OP's desire to treat a TierOneCustomer in a substantially different way while minimizing the effect at the data layer, so, here ya go.
If you're doing it offline (e.g. in a DB upgrade script), just use SQL and ensure consistency yourself.
If this is something you plan will happen while the app is running, I think your requirements are wrong, just like keeping the same pointer address for a different object is wrong.
If you save the ID and use it to access the customer again (e.g. in a URL), consider adding a new field that contains a token which will serve as the business key. Since it's not the ID, it's easy to create a new entity instance and copy over the token (you'll probably need to remove the token from the old one).
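A small sketch of that idea (the Token property name is invented): the token travels with the data as a business key, while the database PK is free to change when the entity is recreated as a different subtype.
public class Customer
{
    public virtual long Id { get; protected set; }   // DB identity, changes if the row is recreated
    public virtual Guid Token { get; set; }          // stable business key, e.g. used in URLs
}

// when "converting" a customer to a tier-one customer:
var tierOne = new TierOneCustomer { Token = customer.Token /* plus the other copied data */ };
customer.Token = Guid.Empty;   // remove the token from the old record so lookups resolve to the new one
session.Save(tierOne);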