How to add custom (user defined) properties to entities with EclipseLink?

I'd like to add user-defined custom fields to an existing entity in EclipseLink. For performance reasons, I want them to be stored directly in the entity's table, and I also want them to be "first class citizens", i.e. usable in queries.
From an implementation standpoint, the entity should have two methods to set and get custom fields:
public Object getCustomProperty(String key) { ... }
public void setCustomProperty(String key, Object value) { ... }
When setting a custom property foo, EclipseLink should store the value in the entity's table in a field named custom_foo.
From an end user standpoint, I would like to provide a GUI where the user can define and manage custom fields, which are then dynamically added to or removed from the database.
Is this possible in EclipseLink?
Regards,
Jochen

Check out EclipseLink's Extensibility feature
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Advanced_JPA_Development/Extensible_Entities
This, combined with support for adding columns:
http://wiki.eclipse.org/EclipseLink/DesignDocs/368365
seems somewhat like what you are asking for, except for the conflicting statements: that it shouldn't store in the main table, and then later that it should store in a "custom_foo" column in the entity's table.
You will need to create the GUI that creates the mappings yourself. EclipseLink ships with a metadata-source implementation that reads from an orm.xml file, so if your GUI cannot write to an orm.xml file you may have to write your own implementation for EclipseLink to use.
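To make that concrete, here is a minimal sketch of an extensible entity along the lines of those docs, assuming EclipseLink's @VirtualAccessMethods annotation with its default get/set accessor names (the names can be changed via the annotation's get and set attributes if you prefer getCustomProperty/setCustomProperty):

import java.util.HashMap;
import java.util.Map;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Transient;

import org.eclipse.persistence.annotations.VirtualAccessMethods;

@Entity
@VirtualAccessMethods // virtual attributes are read and written through get(String) / set(String, Object)
public class Customer {

    @Id
    @GeneratedValue
    private long id;

    // holds the values of the user-defined fields; not mapped as a column itself
    @Transient
    private Map<String, Object> customProperties = new HashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T get(String name) {
        return (T) customProperties.get(name);
    }

    public Object set(String name, Object value) {
        return customProperties.put(name, value);
    }
}

Each custom field is then declared as a virtual mapping through the metadata source (for example a <basic> mapping with access="VIRTUAL" and a column name such as CUSTOM_FOO in an eclipselink-orm.xml), and that mapping is also what makes the field usable in JPQL queries; whether its column sits in the entity's own table or elsewhere is decided by the mapping, not by the code above.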

Related

JPA persist generic entity

I need some help with JPA 2.1 (and Hibernate ORM).
I have a few entities (3-4) mapped to database tables, and I can see their rows in separate PrimeFaces datatables. I also export these data to an .xls file with Apache POI.
Everything works perfectly.
Now I need to import and read an Excel file (which I have already done) and insert the new information into a table.
Can I implement a generic JPA method to insert (make persistent) a series of data?
Something like this:
EntityManager em = getEntityManager();
em.getTransaction().begin();
Employee employee = new Employee();
employee.setFirstName("Bob");
....
em.persist(employee);
em.getTransaction().commit();
But with "generic" and not specific entity (in this case is "Employee"), so as to create an unique persistent method for all entities, and not several specific method for each entity? (Whereas they also have different names of columns).
Thank you all!
You need a way to determine the following:
Which entity type an Excel row should be turned into.
How the various Excel columns map to the entity's properties.
With those two pieces of information, you could easily use a no-arg constructor for each of your entity classes to construct a new instance, use a library like BeanUtils (or equivalent) to set the property values based on your column-to-property mappings, and then persist it.
From a JPA persistence perspective, there is nothing special involved: once the instance is populated, you persist it like any other entity.
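As a rough illustration, here is a minimal sketch of such a generic importer, assuming the two pieces of information above are supplied as maps and using Apache Commons BeanUtils to set the properties; the GenericImporter class and persistRow method names are made up for this example, not part of any framework.

import java.util.Map;

import javax.persistence.EntityManager;

import org.apache.commons.beanutils.BeanUtils;

public class GenericImporter {

    private final EntityManager em;

    public GenericImporter(EntityManager em) {
        this.em = em;
    }

    // entityClass:      which entity type the Excel row should become (e.g. Employee.class)
    // columnToProperty: maps Excel column names to entity property names
    // row:              maps Excel column names to the cell values of one row
    public <T> T persistRow(Class<T> entityClass,
                            Map<String, String> columnToProperty,
                            Map<String, Object> row) throws Exception {
        // requires a public no-arg constructor on every entity class
        T entity = entityClass.getDeclaredConstructor().newInstance();

        // copy each cell value onto the matching entity property
        for (Map.Entry<String, Object> cell : row.entrySet()) {
            String property = columnToProperty.get(cell.getKey());
            if (property != null) {
                BeanUtils.setProperty(entity, property, cell.getValue());
            }
        }

        em.persist(entity);
        return entity;
    }
}

The caller still begins and commits the transaction around persistRow, exactly as in the Employee snippet above.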

How is idProperty used for more complex schema? Dojo dmodel

It’s not very clear how idProperty is used in the data store when building a data model. The documentation says: “If the store has a single primary key, this indicates the property to use as the identity property. The values of this property should be unique. This defaults to 'id'.”
Is this assuming the schema from which the model is based, has a mostly flat structure? For example an array of objects – each with an identity property?
What if the schema is not a simple array but has a more complex structure, starting from a single object that contains several sub-levels of properties within properties? Or what if it is just multiple arrays on the same level, where each array's identity property is independent of the others?
A store is an extension of a collection.
A collection is the interface for a collection of items (your object with a potentially complex schema).
You can use Custom Querying on a collection to define special queries to find your data with any subset of properties.
In short, yes, you can query your data even if it has a custom schema, but you need to define Custom Querying.
More info can be found here at the end of the article: https://github.com/SitePen/dstore/blob/master/docs/Collection.md

Class that represents parts of another object?

I have a class called "EntryData" with a couple of fields in it: "name", "entrydate", "enteredby", and "key", a foreign key. The key points to the "DataEntered" class with "key", "startdate", "enddate", and "values", an array of doubles. This matched the layout of the db we were using, which stored the data in two tables.
Now we are adapting this to a db that stores all the same data, but in a single table. We would like the API to remain the same.
So, I hope this makes sense:
Can I make a new class called "DataEntered" that has no instances of its own and consists solely of pointers to particular fields in "EntryData"? That is, there would be no objects of this class; it would simply be a sort of wrapper that always refers to the underlying EntryData it was called on, like this…
myEntryDataInstance.DataEntered.startdate
In this case there is no instance of DataEntered.
The easy way to solve my problem would be if I could put periods in my method names, but that doesn't seem to work. :-)
What you're asking for is called inheritance, but VB does not allow you to inherit from multiple classes. You would have to inherit from one class and then write pointers to the second class, which would have to be saved as a variable in your class.
Problem is, what do you plan to do with this? Are you using the Entity Framework or your own home-brewed data manager? If you are using something generic, it won't know how to handle your new class with multiple tables.

How To Override Default LINQ to SQL Association Name

I am working on a pretty straightforward C# application that uses LINQ to SQL for database access. The application is a non-web (i.e. thick client) application.
The problem that I have recently run into is with the default association name that LINQ to SQL is creating for fields that are foreign keys to another table. More specifically, I have provided an example below:
Example of Problem
The majority of my combo boxes are filled using values from a reference data table (i.e. RefData) that stores a type, description, and a few other fields. When the form initially loads, it fills the combo boxes with values based on a query by type. For example, I have a form that allows the user to add customers. On this form, there is a combo box for state. The stateComboBox is filled by running a query against the RefData table where type = stateType. Then, when the user saves the customer with a selected state the id of the RefData column for the selected state is stored in the state column of the customer table. All of this works as expected. However, if my customer table has more than one column that is a foreign key to the RefData table it quickly becomes very confusing because the association name(s) created by LINQ are Customer.RefData, Customer.RefData1, Customer.RefData2, etc... It would be much easier if I could override the name of the association so that accessing the reference data would be more like Customer.State, Customer.Country, Customer.Type, etc...
I have looked into changing this information in the DBML that is generated by VS but, my database schema is still very immature and constantly requires changes. Right now, I have been deleting the DBML every day or two to regenerate the LINQ to SQL files after making changes to the database. Is there an easy way to create these associations with meaningful names that will not be lost while I frequently re-create the DBML?
I am not sure LINQ to SQL is the best method of accessing data, period, but I find it even more problematic in your case.
Your real issue is that the concept of your domain objects is fairly static (you know what the program needs in order to get its work done), but you are not sure how you are persisting the data, as your schema is in flux. This is not a good scenario for automagic updates.
If it were me, I would code the domain models so they do not change except when you desire change. I would then determine how to link to the persistent schema (database in this case). If you like a bit more automagic, then I would consider Entity Framework, as you can use code first and map to the schema as it changes.
If you find this still does not help, because your database schema changes are incompatible with the domain models, you need to get away from coding and go into a deeper planning mode. Otherwise, you are going to continue to beat your head against the proverbial wall of change.
Create a partial class definition for your Customer table and add more meaningful getter properties for the LINQ to SQL generated member names:
using Newtonsoft.Json; // assuming Json.NET's [JsonIgnore]; adjust to whichever serializer you use

// must sit in the same namespace as the LINQ to SQL generated Customer class
public partial class Customer
{
    public string Name { get; set; }

    // friendlier aliases for the generated association members
    [JsonIgnore]
    public RefData State => this.RefData;

    [JsonIgnore]
    public RefData Country => this.RefData1;
}
I blogged about this here

NHibernate HiLo ID Generator. Generating an ID before saving

I'm trying to use the 'adonet.batch_size' property in NHibernate. I'm creating entities across multiple sessions at a high rate (hence the batch inserting), so what I'm doing is creating a buffer where I keep these entities and then flush them out all at once periodically.
However, I need the IDs as soon as I create the entities. So I want to create an entity (in any session) and then have its ID generated (I'm using the HiLo generator). And then at a later time (and in another session) I want to flush that buffer and ensure that those IDs do not change.
Is there anyway to do this?
Thanks
Guido
I find it odd that you need many sessions to do a single job. Normally a single session is enough to do all work.
That said, the HiLo generator sets the id property on the entity when you call nhSession.Save(object), without necessarily requiring a round-trip to the database, and nhSession.Flush() will flush the inserts to the database.
UPDATE ===========================================================================
This is a method I used in a specific case that did pure SQL inserts while maintaining NHibernate compatibility.
// this will get the value and update the hi-lo value repository in the datastore
public static void GenerateIdentifier(object target)
{
    var targetType = target.GetType();
    var classMapping = NHibernateSessionManager.Instance.Configuration.GetClassMapping(targetType);
    var impl = NHibernateSessionManager.Instance.GetSession().GetSessionImplementation();
    var newId = classMapping.Identifier
        .CreateIdentifierGenerator(impl.Factory.Dialect, classMapping.Table.Catalog, classMapping.Table.Schema, classMapping.RootClazz)
        .Generate(impl, target);
    classMapping.IdentifierProperty.GetSetter(targetType).Set(target, newId);
}
So, this method takes your newly constructed entity, like so:
var myEnt = new MyEnt(); //has default identifier
GenerateIdentifier(myEnt); //now has identifier injected based on nhibernate's mapping
Note that this call does not place the entity in any kind of NHibernate-managed space, so you still have to keep your objects somewhere and call Save on each one. Also note that I used this with pure SQL inserts; unless you specify generator="assigned" in your entity mapping (which will then require some custom hi-lo generator), NHibernate may require a different mechanism to persist it.
All in all, what you want is to generate an id for an object that will be persisted at some time in the future. This brings up problems such as handling non-existent entries due to rollbacks and failed commits. Additionally, IMO NHibernate is not the tool for this particular job; you don't need NHibernate to do your bulk insert unless there is some complex entity logic that is too costly (in dev time) to implement on your own.
Also note that you are implying you need transient, detached entities. Those cannot be used unless you call nhSes.Save(obj) on the first session and flush its contents, so that when the second session calls Load on the transient object there is an existing row in the database, which contradicts what you want to achieve.
IMO, don't be afraid of storming the database; just optimise the procedure top to bottom to handle the volume. Using NHibernate just to do an insert seems counter-productive when you can achieve the same result with 4 times the performance using ADO.NET or even an ISQLQuery-wrapped query (and use the method I provided above).