With EclipseLink, is it possible to create orm.xml mappings at runtime for a class?

I am new to EclipseLink. I am trying to generate ORM mappings for a class at runtime and use them for persistence.
Is it possible at all?
I see examples where a class is generated at runtime, but that doesn't fit my situation.
Thanks.

It could be possible, depending on what you are trying to do and when. Persistence units are fairly static creations that should be known up front, just like Java classes themselves. So unless you are using dynamic entities, why wouldn't you know up front that the class should be a part of the persistence unit?
While it is not a great idea, you could create a static persistence unit and specify that it use a customizer, as described at http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Advanced_JPA_Development/Customizers, with which you could add descriptors or mappings to the persistence unit. The customizer is only run once, though, during initialization. So if you want to make changes later on, you need to refresh the persistence unit using refreshMetadata on the EntityManagerFactory so that it reloads the persistence unit. Running EntityManagers will not be affected by the changes.
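As a rough illustration, a session customizer along these lines could add a mapping during initialization (MyEntity, extraField and EXTRA_COLUMN are hypothetical names; the entity must already be part of the unit, and the customizer is registered via the eclipselink.session.customizer property in persistence.xml):

import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.sessions.Session;

public class AddMappingCustomizer implements SessionCustomizer {
    // Runs once, while the persistence unit is being initialized.
    public void customize(Session session) throws Exception {
        // Look up the descriptor EclipseLink built for the (hypothetical) entity...
        ClassDescriptor descriptor = session.getClassDescriptor(MyEntity.class);
        // ...and add a basic attribute-to-column mapping to it.
        descriptor.addDirectMapping("extraField", "EXTRA_COLUMN");
    }
}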
Using the EMF refreshMetadata, you could also use a MetadataRepository to pick up different or extended ORM.xml files for your entities, so you could incorporate changes made to the XML instead of using a customizer. This is described somewhat here:
http://www.eclipse.org/eclipselink/documentation/2.5/solutions/extensible001.htm#CIAIJHAG
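The refresh call itself is small. A hedged sketch, assuming EclipseLink's JpaEntityManagerFactory interface and passing null because no property overrides are needed:

import javax.persistence.EntityManagerFactory;

import org.eclipse.persistence.jpa.JpaEntityManagerFactory;

public final class MetadataRefresher {
    // Reloads the persistence unit's mapping metadata in place; only
    // EntityManagers created after this call see the refreshed mappings.
    public static void refresh(EntityManagerFactory emf) {
        emf.unwrap(JpaEntityManagerFactory.class).refreshMetadata(null);
    }
}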

Related

How to do Object–Relational mapping in ABAP?

I'm currently trying to port an application that uses Hibernate to ABAP.
The short version: I have (at least) two tables, say Entity(entity_id, ...) and SubEntity(sub_entity_id, entity_id).
In ABAP OO I'm representing these entities as classes, like zcl_app_entity. Now I'm wondering how I could use ABAP to persist these entities and relationships.
I have use cases like:
Lock entity, and then add subentity
Get all subEntities and send them via HTTP as JSON
In Java with JPA I'd do something like
Entity entity = userRepository.findById(entityId);
entity.lock(); // granted, this would work at the DB level here, while ABAP needs ABAP locks
entity.getSubEntities().add(address);
The repository call automatically gives you a session with a UnitOfWork. But as far as I'm aware, ABAP doesn't offer a repository pattern that automatically turns classes into managed entities.
I could of course add INSERTs etc. directly into the add calls, or create a load/persist method on every class. But then I lose testability.
I could create repositories myself, passing in the objects. But then my objects are repository-aware themselves (addAddress would call the repository).
Another way would be to call the repository from the service class and then pass the object to the add method after it's persisted. Quite error-prone.
Also, lazy loading of e.g. xstrings (around 50 MB) would be great, but this won't work when the object has no access to the repository/SQL interface to load on demand.
I'd be super surprised if there isn't something like this (JPA/Hibernate) for ABAP, since these are common patterns:
IEntityRepository, ISubEntityRepository, IMyService (calls the repo interfaces)
All calls make objects managed, perhaps with OneToMany etc. relationships, rollbacks and lazy loading.
Weirdly, the most ABAP-like approach I found is to have some logic in classes (e.g. entity->lock( ), entity->add_subentity( xyz )) but then use a plain SQL persistence interface to fetch all data and return structures. There wouldn't really be OO relationships; at most, a class would serve as a short-lived wrapper around a struct. And when I want to update all sub_entities, it would look more like data_provider->get_sub_entity( entity_id ), which returns an internal table; the caller then has to persist it again if required: data_provider->update_sub_entity_status( entity_id, 'R' ).
So how do I use Object–Relational mapping in ABAP, e.g. when I want to update the status of all sub_entities of entity X to 'DONE', while keeping it testable?
You can take a look at the basic ABAP Persistence Service.
In SE24, when you create a class, you can mark it as "persistent".
E.g. SFLIGHT.
https://blogs.sap.com/2012/04/18/abap-persistent-object-services-demystified/
There is a generated example on every system: CL_SPFLI_PERSISTENT.
If you have used ORMs in other languages like C#, this will be a very disappointing experience.
This toolset appeared around 20 years ago, but it offers a questionable return on investment if you ask me, quite apart from the fact that the approach doesn't conform to traditional OO principles.
When it first came out, SAP already had 3GL code updating traditional tables, and most developers, even inside SAP, had no clear idea how to implement an ABAP OO model.
90% of the code SAP delivered didn't use this type of model, so there were no good examples to base your own work on.
Unfortunately it never took off and was never extended to have the functionality expected of a true ORM persistence tool.
I don't recall seeing anything inside the toolset that manages things like cascade delete, nor a proper class-relationship model; please correct me if I missed that.
If you ask me, SAP generated a class with GET and SET methods and a PERSIST method, and that is where it stopped.
Actually using classes rather than DDIC structures as the model, and implementing things like cascade delete, remain outstanding.
If you google SAP and ORM you will see a JavaScript tool using Hibernate with HANA as the DB, not an ABAP-based tool; or you will see ORM meaning Operational Risk Management.
That pretty much says it all. The ABAP layer has a toy class generator but no true ORM tool like Entity Framework on .NET.

Analyzing dependencies of a class in IDEA or in another Java tool

I'm looking for a way to understand the full impact of changing a class. For example, I added a new field to a DTO class, and now I need to find all web services and methods affected by this change. For now, that means finding all classes that extend the changed class, all classes that aggregate this class or any of its children, all classes that use the type of the changed class or of its children, etc. The risk of missing an affected web service is very high, and if I miss even one, it means a bug during the integration process.
Is there a tool that makes this process automatic?
It sounds like you want the Dependency Matrix, which is available in IDEA under the Analyze menu via the action Analyze Dependency Matrix. This shows you efferent and afferent dependencies, along with metrics that can be used to infer the strength of the dependency.

What criteria should one use to determine if a Dependency Injection framework should be used? [duplicate]

These last couple of days I've had the feeling that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). It is often said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
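A minimal Java sketch of that point, with hypothetical names throughout: because the repository arrives through the constructor, a test can hand in an in-memory fake instead of anything database-backed.

import java.util.HashMap;
import java.util.Map;

interface UserRepository {
    String findNameById(long id);
}

class GreetingService {
    private final UserRepository repository;

    GreetingService(UserRepository repository) { // the dependency is injected here
        this.repository = repository;
    }

    String greet(long userId) {
        return "Hello, " + repository.findNameById(userId);
    }
}

class GreetingServiceTest {
    void greetReadsFromTheRepository() {
        // In-memory fake: no database connection, no test-data setup scripts.
        Map<Long, String> rows = new HashMap<>();
        rows.put(1L, "Ada");
        GreetingService service = new GreetingService(rows::get);
        assert "Hello, Ada".equals(service.greet(1L));
    }
}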
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis of the problem domain to discover which components are most likely to change and which components in your system have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing, zero reuse from one project to the next for the majority of the code you're writing might actually be quite acceptable.
An example of the use of DI is creating an application that can be deployed for several clients, using DI to inject each client's customisations; this could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development, you will find you need to access several subsystems from different parts of your program. With DI, each of your classes just asks for the services it needs, and you're freed from having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally, it also just saves a LOT of boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I merely CONFIGURED it to exist only once, a test can instantiate an alternative implementation).
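To make the singleton point concrete, here is a hedged sketch using Spring as one possible container (the class names are invented): beans are singleton-scoped by default, so there is no getInstance() boilerplate, and a test remains free to new up its own instance.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

class PriceCalculator {
    // A plain class: no private constructor, no static getInstance().
}

@Configuration
class AppConfig {
    @Bean // singleton scope is the default: the container creates exactly one
    PriceCalculator priceCalculator() {
        return new PriceCalculator();
    }
}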
By the way, before I used DI I didn't really understand its worth, but trying it was a real eye-opener: my designs are a lot more object-oriented than they were before.
And with my current application I DON'T unit-test (bad, bad me), but I STILL couldn't live without DI anymore. It is so much easier to move things around and to keep classes small and simple.
While I semi-agree with you on the DB example, one of the big things I found DI helpful for is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing or demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design-by-Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods of the objects you're injecting into it. While it may not truly "process" them, it runs the methods that have a different implementation in each object you provide.
I saw an example of this where every business domain object had a Save function that was called after the object was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
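A small sketch of that processing-engine idea, in Java with invented names: the processor knows only the interface, and each injected object brings its own save implementation.

import java.util.List;

interface Persistable {
    void save();
}

class Processor {
    private final List<Persistable> components;

    Processor(List<Persistable> components) { // the objects are injected
        this.components = components;
    }

    void run() {
        for (Persistable component : components) {
            // Configuration of the component would happen here; then each
            // object persists its own primary state polymorphically.
            component.save();
        }
    }
}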
Dependency Injection gives you the ability to test specific units of code in isolation.
Say, for example, I have a class Foo that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated with its properties set from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on data from the datasource its properties are set from. What we would like is some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is some way of faking the instance of Bar passed to Foo such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect, i.e. returns true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake object: mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test so it can decide pass/fail.
Martin Fowler has a great article on the difference between mocks and stubs.
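Rendered in Java here as a runnable sketch (the C# idea above is identical, and every name is hypothetical): reducing Bar to an interface lets the test hand Foo a stub with a canned property value.

enum PropertyEnum { ValidProperty, OtherProperty }

interface Bar {
    PropertyEnum getSomeProperty();
}

class Foo {
    private final Bar bar;

    Foo(Bar bar) {
        this.bar = bar;
    }

    boolean isPropertyOfBarValid() {
        return bar.getSomeProperty() == PropertyEnum.ValidProperty;
    }
}

class FooTest {
    void validAndInvalidProperties() {
        // Stubs: canned inputs, no datasource involved.
        Bar validBar = () -> PropertyEnum.ValidProperty;
        Bar otherBar = () -> PropertyEnum.OtherProperty;
        assert new Foo(validBar).isPropertyOfBarValid();
        assert !new Foo(otherBar).isPropertyOfBarValid();
    }
}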
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I just understood it tonight.
For me, dependency injection is a method for instantiating objects that require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection if you instantiate an object in a static way. For example, take a class that can convert objects into an XML file or a JSON file, when you only ever need the XML output: without dependency injection, you have to instantiate it and configure a lot of things yourself every time.
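A hedged sketch of what is meant, with an invented XmlConverter: without DI every caller repeats the setup, while with DI a fully configured instance is simply handed in.

class XmlConverter {
    private String encoding = "UTF-8";

    void setEncoding(String encoding) { this.encoding = encoding; }

    String toXml(Object value) {
        // Real serialization elided; a stand-in result keeps the sketch runnable.
        return "<?xml version=\"1.0\" encoding=\"" + encoding + "\"?><object/>";
    }
}

class ReportService {
    private final XmlConverter converter;

    // With DI the container builds and configures the converter once and
    // injects it; without DI, every user of ReportService repeats the setup.
    ReportService(XmlConverter converter) {
        this.converter = converter;
    }

    String export(Object report) {
        return converter.toXml(report);
    }
}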
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because then the object is not instantiated in a static way.

NHibernate: completely overriding base domain entity

I have a situation where I have a Common.Domain.Person and Specific.Domain.Person.
First one should be provided as a part of a common package.
Second one appears when common package has to be customized to fit the needs of specific project.
In the object model, it can be easily implemented with inheritance.
In the NH mapping, however, I have encountered a small problem.
I can create an NHibernate <subclass> mapping, but that would require me to use a discriminator. However, I know that if the specific person class is inherited, then common class instances will never be used within this specific project.
What is the best way to implement this without adding discriminator column to the base class (since there are no different cases to discriminate)?
This is what I wanted, and NHibernate supports it using XML entities. Unfortunately this feature has been borked since (at least) NH v2++.
See also Using Doctype in Nhibernate.
A work-around could be to inject these properties programmatically when you create the SessionFactory (dynamic mapping);
see also http://ayende.com/Blog/archive/2008/05/01/Dynamic-Mapping-with-NHibernate.aspx
Just map the Specific.Domain.Person and leave Common.Domain.Person unmapped.
If you are not saving instances of it, NHibernate does not need to know about it.

Extensible domain model with NHibernate

I'm currently designing a solution where the domain model and the repository can be extended by application plugins. I have come across a few issues, which I list below.
My first issue is making the domain model extensible. I was thinking about using inheritance here, but honestly I have no idea how I could handle multiple plugin assemblies extending the same domain object. I'm leaning toward making every domain object partial and letting plugins extend it that way. If I have multiple plugins extending the same domain object, I won't have to worry about loading a different extended domain assembly for each plugin; I would still have only one merged domain object at run time. Any ideas on this?
Another issue is extending the NHibernate mapping file. I could have each assembly embed a mapping file for the domain object it extends and have my NHibernate manager load it instead of the one provided in the core domain. Once again, the issue is what happens if I have multiple plugins extending the same domain object: one plugin could override the mapping file of another.
The solution I have for the last problem is not so great, but I was thinking about including a checksum in the plugin assembly as a signature of the original mapping file it extended. I can verify this checksum during load and only load the plugin map if the checksums match. Pretty ugly, but at least I won't be overriding any maps that differ from the base map the plugin extended.
Anyway, I'd like to hear what you think about this. Thanks!
The good news is that what you are asking for is possible and not that difficult to manage.
About plugin management, you can take a look at Microsoft Prism (http://msdn.microsoft.com/fr-fr/magazine/cc785479.aspx), which has several nice features for modular application development.
About 1: You can map subclasses in separate mapping files, in separate assemblies; look at the NH documentation. A separate mapping file for a subclass looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <subclass name="YourClassFullName, YourPluginAssemblyName"
            extends="YourParentClassFullName, TheAssemblyWhereYourBaseClassIsDefined"
            discriminator-value="whateveryouwant">
    <!-- add your subclass mapping here -->
  </subclass>
</hibernate-mapping>
About 2: You can keep your core domain mapping. A simpler way would be to create a service (say, IMappingLoader) that your plugins can use to register extra mappings (without overriding the base-class mapping). Your implementation of this service would add the mappings to the NH Configuration class. For example, in Microsoft Prism all plugins must implement the IModule interface, whose Initialize() function is called when the module is loaded. That function is the ideal place to call your IMappingLoader service.
Hope this helps.
To make the domain model extensible, I would use lots of factories. Factories can be swapped in and out via dependency injection, and domain objects should be coded against interfaces.
Mapping could be done, for instance, via Fluent NHibernate, and those mappings could live in the plugin assembly.
Finally, I would add a loadable configuration to the plugin assembly, which sets up the DI container and loads the new mappings. In the main assembly there could be a scanner for plugin configurations. Maybe MEF could be helpful here, or you could roll your own, which should not be complicated.