How do I configure persistence.xml for ORM on GlassFish v3 with Derby and EclipseLink

I'm using the internal GlassFish 3.1 plugin for Eclipse, along with a Derby database I installed (it shows up in the Data Source Explorer in the Database Development view in Eclipse), and I'm fumbling at the "last" step of getting the ORM working so that I can develop an app that persists data with EJBs, using EclipseLink as the JPA implementation.
I know I need to configure the persistence.xml file, but I'm at a loss for what needs to be in it and what the individual field names mean. I feel like the purpose of persistence.xml is to tell GlassFish where to find the database to store everything in, and which JPA implementation to use to do that storing.
I have a bunch of questions.
Do I have to have a persistence entry for each class that represents an object in the database? So if I had a Book class and a Library class, would I need two entries in persistence.xml, or could I use one entry that services them both?
Where can I find more information about how to configure the persistence.xml file IN GENERAL? I have found tons of very specific tutorials on how to configure it in X, Y, or Z setting, but nothing that explains the individual bits and how you'd configure them from a high level.
Once I've set up my persistence.xml file correctly, what else do I need to do to ensure that my annotated classes are going to be serviced by the ORM implementation correctly? Is there something I need to configure in GlassFish?

I'm not an expert but...
1) Yes, in my experience you need an entry (a <class> element) for each entity class. There could be exceptions to this, but I am unfamiliar with them.
2) http://wiki.eclipse.org/EclipseLink/ is a good place to start.
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Configuration/JPA/persistence.xml has some details that you may already know. I've had trouble finding a perfect resource myself; I've tended to find the information fragmented all over the place.
3) In general, most of my persistence.xml file has been generated automatically by EclipseLink.
After creating a connection pool and a JDBC resource in the GlassFish Administration Console, I had to add the matching data source to persistence.xml [1]:
    <jta-data-source>jdbc/your_name</jta-data-source>
I also added these properties so my identity columns would auto-increment using JPA:
    <property name="eclipselink.ddl-generation" value="create-tables"/>
    <property name="eclipselink.ddl-generation.output-mode" value="database"/>
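Regarding question 3: beyond persistence.xml and the JDBC resource, the usual ingredients are just an annotated entity class and a bean that gets an EntityManager injected. Here is a minimal sketch; the class names and the persistence-unit name "myPU" are placeholders, not something taken from the question:
    // Book.java - one @Entity class per persistent type (e.g. Book, Library)
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Book {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY) // identity column generated by Derby
        private Long id;

        private String title;

        public Long getId() { return id; }
        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }
    }

    // BookService.java - a session bean; the container injects an EntityManager
    // for the persistence unit declared in persistence.xml ("myPU" is a placeholder name)
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class BookService {
        @PersistenceContext(unitName = "myPU")
        private EntityManager em;

        public void addBook(String title) {
            Book book = new Book();
            book.setTitle(title);
            em.persist(book); // written to Derby when the JTA transaction commits
        }
    }
With a JTA data source declared in persistence.xml and the entity classes included in the unit, the container injects the EntityManager and manages the transaction around addBook, so nothing extra usually needs to be configured in GlassFish itself.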
Try these two tutorials to get a better understanding of how it works:
[1] http://programming.manessinger.com/tutorials/an-eclipse-glassfish-java-ee-6-tutorial/#heading_toc_j_24
http://itsolutionsforall.com/index.php
[*apologies I can't post more than 2 links at the moment]

Related

Reading different properties for different cluster/node

I have developed a hybrid Worklight app and everything is set up. Now my situation is that I have a load balancer and two clusters. These two clusters have been synchronized with a single WAR file. For certain reasons, we have a server-side Java file in the WAR that shares some global variables with the Worklight adapters.
The problem is that these two clusters work independently (requests are redirected by the load balancer), so the global variables in the Java file inside each WAR are not shared. How can we maintain only one set of global variables in this case?
Or is there any way for the Java code to read the current cluster's details (for example, a cluster ID or IP address) so that I can write logic that points to different properties in worklight.properties?
[PS: not good at English. I will clarify more if you guys don't understand me]
What you actually need here is to stop using static variables to share this information.
I suggest using Redis or Memcached (or some other free solution) to share information across the cluster.
A simpler (but less efficient) solution can be to use an SQL database to store and load those shared properties. You can create a "configuration" adapter (an SQL adapter) which is called by the other adapters to read and write the configuration properties.
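For the SQL route, here is a rough sketch of such a shared store in plain Java; the table shared_config, its columns, and the JNDI name jdbc/sharedConfig are assumptions, and in Worklight you would typically wrap the same queries in an SQL adapter that the other adapters call:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    // Both cluster members read and write the same table instead of JVM-local static fields.
    public class SharedConfig {

        private DataSource lookupDataSource() throws NamingException {
            // assumed JNDI name; adjust to how the shared datasource is bound on your server
            return (DataSource) new InitialContext().lookup("jdbc/sharedConfig");
        }

        public String get(String key) throws SQLException, NamingException {
            String sql = "SELECT prop_value FROM shared_config WHERE prop_key = ?";
            try (Connection con = lookupDataSource().getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }

        public void put(String key, String value) throws SQLException, NamingException {
            // naive delete-then-insert; fine for low-frequency configuration writes
            try (Connection con = lookupDataSource().getConnection()) {
                try (PreparedStatement del = con.prepareStatement(
                        "DELETE FROM shared_config WHERE prop_key = ?")) {
                    del.setString(1, key);
                    del.executeUpdate();
                }
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT INTO shared_config (prop_key, prop_value) VALUES (?, ?)")) {
                    ins.setString(1, key);
                    ins.setString(2, value);
                    ins.executeUpdate();
                }
            }
        }
    }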

How to set up a project and break it into sub-projects, and how to use Slick in this setup

This is a brand new project, so I can use the latest version of Play.
I am using IntelliJ 13.
I want to break out the models/db/service layer because I will also have a job service (reading messages off a queue, for example) that will need this service layer as well.
Since Slick is outside of Play, how do I set up the data source for this project, keeping in mind that I will be connecting to multiple databases?
Do I need to create a custom config file for this?
web-app (play2!)
  - service
service (models + dao)
  - models
  - dao
jobs (service)
I don't see any examples like this, which I find strange, because I think pretty much any real-world project (beyond simple examples) would have to be set up this way.
Can someone show me sample code where things are broken down like this?
This example isn't broken into sub-projects, but it is very split up and would allow you to specify multiple databases.
https://github.com/geigerma/play-cake

EclipseLink without persistence.xml and multiple persistence.xml files

Hi, I am trying to develop a JPA application using EclipseLink.
I am trying to create it without a persistence.xml, or at least so that the persistence unit is created at runtime. So I have an empty (or rather dummy) persistence.xml to start with.
I looked at this post (eclipselink without persistence.xml), but after debugging I noticed that EclipseLink forces you to declare at least the persistence unit name; all the other parameters (including the provider) can be set at runtime.
Is there a way the persistence unit can be created entirely at runtime? I am trying to create multiple persistence units based on the number of categories in the application, which is known only at runtime.
Currently I am overriding all the other parameters except the name (which I can't override) and creating an EntityManagerFactory per category.
Thanks in advance,
Gopi
JPA defines a createContainerEntityManagerFactory(PersistenceUnitInfo info, Map properties) in PersistenceProvider that does not require a persistence.xml (the container is responsible for processing it). You could probably call this directly on the EclipseLink PersistenceProvider, and pass your own PersistenceUnitInfo.
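A rough, untested sketch of that approach follows; every name here (the classes, the entity, the Derby URL, the credentials) is a placeholder rather than something from the question:
    // RuntimePersistenceUnit.java - a minimal PersistenceUnitInfo assembled at runtime
    // instead of being read from persistence.xml.
    import java.net.URL;
    import java.util.Collections;
    import java.util.List;
    import java.util.Properties;
    import javax.persistence.SharedCacheMode;
    import javax.persistence.ValidationMode;
    import javax.persistence.spi.ClassTransformer;
    import javax.persistence.spi.PersistenceUnitInfo;
    import javax.persistence.spi.PersistenceUnitTransactionType;
    import javax.sql.DataSource;

    public class RuntimePersistenceUnit implements PersistenceUnitInfo {
        private final String name;
        private final List<String> entityClassNames;
        private final Properties properties;

        public RuntimePersistenceUnit(String name, List<String> entityClassNames, Properties properties) {
            this.name = name;
            this.entityClassNames = entityClassNames;
            this.properties = properties;
        }

        public String getPersistenceUnitName() { return name; }
        public String getPersistenceProviderClassName() { return "org.eclipse.persistence.jpa.PersistenceProvider"; }
        public PersistenceUnitTransactionType getTransactionType() { return PersistenceUnitTransactionType.RESOURCE_LOCAL; }
        public DataSource getJtaDataSource() { return null; }
        public DataSource getNonJtaDataSource() { return null; }
        public List<String> getMappingFileNames() { return Collections.emptyList(); }
        public List<URL> getJarFileUrls() { return Collections.emptyList(); }
        public URL getPersistenceUnitRootUrl() { return null; }
        public List<String> getManagedClassNames() { return entityClassNames; }
        public boolean excludeUnlistedClasses() { return true; }
        public SharedCacheMode getSharedCacheMode() { return SharedCacheMode.UNSPECIFIED; }
        public ValidationMode getValidationMode() { return ValidationMode.AUTO; }
        public Properties getProperties() { return properties; }
        public String getPersistenceXMLSchemaVersion() { return "2.0"; }
        public ClassLoader getClassLoader() { return Thread.currentThread().getContextClassLoader(); }
        public void addTransformer(ClassTransformer transformer) { /* no weaving in this sketch */ }
        public ClassLoader getNewTempClassLoader() { return getClassLoader(); }
    }

    // CategoryBootstrap.java (separate file) - one EntityManagerFactory per category,
    // with all parameters supplied at runtime and no entry in persistence.xml.
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import javax.persistence.EntityManagerFactory;
    import org.eclipse.persistence.jpa.PersistenceProvider;

    public class CategoryBootstrap {
        public static EntityManagerFactory createFactoryFor(String category) {
            Properties jdbc = new Properties();
            jdbc.put("javax.persistence.jdbc.driver", "org.apache.derby.jdbc.ClientDriver"); // placeholder driver
            jdbc.put("javax.persistence.jdbc.url", "jdbc:derby://localhost:1527/" + category + ";create=true"); // placeholder URL
            jdbc.put("javax.persistence.jdbc.user", "app");
            jdbc.put("javax.persistence.jdbc.password", "app");

            Map<String, Object> overrides = new HashMap<String, Object>();
            overrides.put("eclipselink.weaving", "false"); // avoid load-time weaving outside a container

            return new PersistenceProvider().createContainerEntityManagerFactory(
                    new RuntimePersistenceUnit(category, Arrays.asList("com.example.Book"), jdbc),
                    overrides);
        }
    }
Each call to createFactoryFor would then give you one EntityManagerFactory per category without any corresponding persistence unit being named in persistence.xml.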
You may want to look into EclipseLink multitenancy support:
http://www.eclipse.org/eclipselink/documentation/2.4/solutions/multitenancy.htm#CHDBJCJA

Migration path from NServiceBus 2.6 to NServiceBus 3.0

I have an existing NServiceBus 2.6 application that I want to start moving to 3.0. I'm looking for the minimum-change upgrade in the first instance. Is this as simple as replacing the 2.6 DLLs with the 3.0 NuGet packages, or are there other considerations?
For the most part the application migration is quite straightforward, but depending on your configuration and environment, you may need to make the following changes:
The new convention over configuration for endpoints may mean you will need to rename your endpoints to match your queue names (@andreasohlund has a good post about this).
Persistence of sagas, timeouts, subscriptions, etc. now defaults to RavenDB, so if you use SQL Server to persist data, you need to make sure you have the correct profile and endpoint configuration. For SQL Server storage, make sure you add a reference to NServiceBus.NHibernate, as it is no longer part of the core.
Error queues are now referenced differently, using different configuration, i.e. use MessageForwardingInCaseOfFaultConfig instead of the error property on the regular MsmqTransportConfig. You should still be able to use the latter, but NServiceBus will look for MessageForwardingInCaseOfFaultConfig first.
Other than that, I don't think you need to do anything else to get your upgrade working. I modified some of my message definitions to take advantage of the new ICommand and IEvent interfaces as a way of communicating intent more clearly.
Anyway, I'm sure there will be some cases specific to your environment that will require different changes, but I hope this helps a bit.

Alternative to property files in IBM WebSphere environment (WAS)

I'm looking for an alternative way to store "environment variables" in the environment that I'm currently working in (IBM WebSphere).
We currently have so many property files that they have become difficult to manage.
I'm looking for a way to consolidate all these properties into a central place that is easily administered by the admin team.
Some of the options I have explored include:
Storing the properties in a database.
Consolidating them into a single text file.
Any other suggestions would be most welcome!
@Nic Willemse: you may want to use the JNDI namespace bindings provided by WebSphere: go to the admin console -> Environment and specify a new namespace binding key/value pair.
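For example, if the admin team creates a cell-scoped string binding whose name in the name space is config/smtpHost (both the name and the scope are assumptions here), application code could read it via JNDI roughly like this:
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class NamespaceBindingReader {
        // Cell-scoped persistent bindings are typically visible under cell/persistent/...
        public String readSmtpHost() throws NamingException {
            InitialContext ctx = new InitialContext();
            return (String) ctx.lookup("cell/persistent/config/smtpHost");
        }
    }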
It depends on the kinds of properties you want to be edited by the admin team. Usually for us those are the items that should be managed as JEE Resources in web.xml, which allows them to be configured in standard locations in the WebSphere admin console.
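As an illustration of that approach, an env-entry declared in web.xml (here a hypothetical config/batchSize of type java.lang.Integer) can be injected into a servlet or EJB, and, as noted above, its value is then something the admin team can manage through the WebSphere console:
    import javax.annotation.Resource;
    import javax.servlet.http.HttpServlet;

    public class ReportServlet extends HttpServlet {
        // Injected from the env-entry named config/batchSize declared in web.xml;
        // the web.xml value acts as the default.
        @Resource(name = "config/batchSize")
        private Integer batchSize;
    }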
We keep most of our property files in c:\cfg\<>\
This makes it easy to have different configurations for different servers.
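A small sketch of how such per-server files might be loaded at startup; the directory layout and file name below are placeholders following that pattern:
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class ExternalConfig {
        // Loads app.properties from an external config directory kept outside the EAR,
        // so each server can carry its own copy (paths here are placeholders).
        public static Properties load(String baseDir, String appName) throws IOException {
            Properties props = new Properties();
            InputStream in = new FileInputStream(baseDir + "/" + appName + "/app.properties");
            try {
                props.load(in);
            } finally {
                in.close();
            }
            return props;
        }
    }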