Can a space in jboss-deployment-structure.xml make a Spring Framework JDBC update fail? - jboss7.x

Here is my jboss-deployment-structure.xml:
the original xml
My question is: can the whitespace or the order of the tags in jboss-deployment-structure.xml cause a Spring Framework JDBC update to fail?
With this XML, I executed the update and it didn't work. However, after I changed the XML to this one:
the edited xml, the data I wanted to update in the database was successfully updated.

Related

Apache Ignite: can I create cache tables manually, using DDL statements?

I'm asking about the latest version (currently 2.3); the old way seems somewhat obsolete now.
If it is possible to create table(s) manually, another question follows: how do I map the model POJO's fields to column names so that I can fill the cache using DataStreamers? (@QuerySqlField's name attribute, isn't it?)
If you are going to populate the cache with data using a DataStreamer, then you should create it using the Java API, or by configuring it in the Ignite XML configuration.
Tables can be configured using the indexedTypes or queryEntities properties of CacheConfiguration. Take a look at the documentation: https://apacheignite.readme.io/docs/cache-queries#section-query-configuration-by-annotations
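As a rough sketch of the queryEntities approach in the Ignite XML configuration (the cache name, the com.example.Person value type, and the field names here are assumptions for illustration, not from the question):

```xml
<!-- Sketch only: cache name, value type, and fields are illustrative. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Long"/>
                <property name="valueType" value="com.example.Person"/>
                <!-- Maps SQL column names to field types for the model POJO. -->
                <property name="fields">
                    <map>
                        <entry key="id" value="java.lang.Long"/>
                        <entry key="name" value="java.lang.String"/>
                    </map>
                </property>
            </bean>
        </list>
    </property>
</bean>
```

With indexedTypes, the same mapping comes from @QuerySqlField annotations on the POJO instead of being listed in the XML.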

Can I dynamically read or update multiple Mule app properties without reloading or restarting?

We are planning to create an application that reads all the properties files associated with the Mule applications running on the server.
Our intention is to read and update the properties from the custom application instead of providing access to the MMC.
Users can update the Quartz time schedule, and reschedule, pause, and resume jobs and triggers.
Can we create an application that runs in parallel to the Mule instance deployment, reads all application properties, and updates them dynamically without affecting the deployment (no restart and redeployment)?
The short answer is no:
The reason you can't do this is that many components have a special lifecycle and bring up server sockets or connect to JMS queues based on that configuration. So even if you change the properties, you would, at a bare minimum, have to stop and start the respective components so that the previous resources are released and new ones acquired.
A change to the properties file cannot be detected until the Mule XML is changed. So an alternative way to get changes to the
properties file picked up is to change something in the Mule XML and save it.
Whenever the Mule server detects a change in the XML file,
the XML is read and the Mule context is recreated. This has already been discussed here: http://forum.mulesoft.org/mulesoft/topics/can-we-reload-properties-without-restarting-application
So if you want to create an application in Mule that reads updated values from the properties, you always need that application to make some change to your Mule XML.
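A minimal sketch of such a trigger, assuming the external application can reach the deployed Mule XML on disk (the marker-comment approach and file handling here are illustrative; this is plain file I/O, not a Mule API):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class TouchMuleConfig {
    // Rewrite the Mule config XML with a fresh marker comment so the
    // hot-deployment scanner sees a change and redeploys the app,
    // which in turn re-reads the updated .properties values.
    static void touch(Path muleXml) throws IOException {
        String content = new String(Files.readAllBytes(muleXml), StandardCharsets.UTF_8);
        // Drop any previous marker, then append a new timestamped one.
        content = content.replaceAll("\\n?<!-- touched: \\d+ -->\\s*$", "");
        content = content + "\n<!-- touched: " + System.currentTimeMillis() + " -->\n";
        Files.write(muleXml, content.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        // Demonstration with a temporary stand-in for the real Mule config.
        Path xml = Files.createTempFile("mule-config", ".xml");
        Files.write(xml, "<mule/>".getBytes(StandardCharsets.UTF_8));
        touch(xml);
        String result = new String(Files.readAllBytes(xml), StandardCharsets.UTF_8);
        System.out.println(result.contains("<!-- touched:"));
        Files.deleteIfExists(xml);
    }
}
```

The XML comment keeps the document well-formed while still changing the file, so the redeploy is triggered without altering the flow configuration itself.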

Relationship between the configuration under bootstrap and the CMS console?

Whenever I change the configuration, e.g. under hst:pages in the CMS console, will it be reflected in the XML files under the bootstrap configuration?
On the other hand, when I update the XML file under the bootstrap configuration, is it supposed to be reflected in the CMS console as well?
I've tried this (I increased the version number in the hippoecm-extension.xml file, then cleaned and rebuilt the project), but it doesn't seem to happen.
1) Your changes will be reflected in the XML files only if you have the auto-export feature activated. More information on this feature can be found here:
http://www.onehippo.org/library/development/automatic-export-add-on.html
2) You have to do several things for this to work: update the version number in the XML file, mark the item you want to reload as reload on startup, and add the flag -Drepo.bootstrap=true when you start the server. You can find more detailed information on this topic here:
http://www.onehippo.org/library/concepts/update/deploying-content-and-configuration-updates.html
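As a sketch of point 2), an initialize item in hippoecm-extension.xml marked for reload might look roughly like this (the node name, sequence number, resource path, and version value are assumptions for illustration; check the linked documentation for the exact properties your Hippo version expects):

```xml
<!-- Sketch only: names and values are illustrative. -->
<sv:node sv:name="myproject-content" xmlns:sv="http://www.jcp.org/jcr/sv/1.0">
  <sv:property sv:name="jcr:primaryType" sv:type="Name">
    <sv:value>hippo:initializeitem</sv:value>
  </sv:property>
  <sv:property sv:name="hippo:sequence" sv:type="Double">
    <sv:value>30000.1</sv:value>
  </sv:property>
  <sv:property sv:name="hippo:contentresource" sv:type="String">
    <sv:value>content.xml</sv:value>
  </sv:property>
  <!-- Mark the item for reload and bump the version on each change. -->
  <sv:property sv:name="hippo:reloadonstartup" sv:type="Boolean">
    <sv:value>true</sv:value>
  </sv:property>
  <sv:property sv:name="hippo:version" sv:type="String">
    <sv:value>2</sv:value>
  </sv:property>
</sv:node>
```

The server would then be started with -Drepo.bootstrap=true so the reload-on-startup items are actually processed.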

EclipseLink and GlassFish: initial DDL creation

I am trying to use Flyway to update my databases. In order to set up Flyway I need the current DDL. Normally one would use EclipseLink's schema generation mechanism by configuring:
<property name="eclipselink.ddl-generation" value="drop-and-create-tables" />
<property name="eclipselink.ddl-generation.output-mode" value="sql-script" />
But I am unable to get this working on GlassFish 3.1.2.
Is there any possibility to achieve what I want, or am I on the wrong track?
GlassFish overrides the EclipseLink DDL properties with its own DDL generation feature, described here: http://docs.oracle.com/cd/E18930_01/html/821-2418/gbwlh.html
It forces writing out scripts that it then uses to create tables, and to drop them if required. I don't know the location they are written to, but check whether you can control it through GlassFish. Otherwise, try specifying the "eclipselink.create-ddl-jdbc-file-name" property to define the file name and the location it should be written to. You can also access the persistence unit from a simple Java main class outside GlassFish if you need the script just once and want to store it with the persistence unit.
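As a sketch, the persistence.xml properties that name the generated scripts explicitly could look like this (the file names and the output directory here are examples, not required values):

```xml
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
<property name="eclipselink.ddl-generation.output-mode" value="sql-script"/>
<!-- Explicit script names and output directory (illustrative values). -->
<property name="eclipselink.create-ddl-jdbc-file-name" value="create.sql"/>
<property name="eclipselink.drop-ddl-jdbc-file-name" value="drop.sql"/>
<property name="eclipselink.application-location" value="/tmp/ddl"/>
```

Running the unit once outside GlassFish (e.g. from a plain Java main via Persistence.createEntityManagerFactory) with these properties should leave the scripts in the configured directory.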

eclipselink without persistence.xml and multiple persistence.xml

Hi, I am trying to develop a JPA application using EclipseLink.
I am trying to create it without a persistence.xml, or at least so that the persistence unit is created at run time. So I have an empty (or rather dummy) persistence.xml to start with.
I looked at this post: eclipselink without persistence.xml, but after debugging I noticed that EclipseLink forces you to declare at least the PU name; all other parameters (including the provider) can be set at runtime.
Is there a way a PU can be created only at runtime? I am trying to create multiple PUs based on the number of categories in the application, which is known only at run time.
Currently I am overriding all other parameters except the name (which I can't override) and creating an EM factory per category.
Thanks in advance,
Gopi
JPA defines createContainerEntityManagerFactory(PersistenceUnitInfo info, Map properties) on PersistenceProvider, which does not require a persistence.xml (the container is responsible for processing it). You could probably call this directly on the EclipseLink PersistenceProvider and pass your own PersistenceUnitInfo.
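A minimal sketch of that approach (CategoryPersistenceUnitInfo is a hypothetical class you would write yourself, implementing javax.persistence.spi.PersistenceUnitInfo to return the unit name, data source, and managed classes for a category):

```java
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.spi.PersistenceUnitInfo;
import org.eclipse.persistence.jpa.PersistenceProvider;

public class RuntimePersistenceUnits {
    // Build one EntityManagerFactory per category, entirely at runtime,
    // bypassing persistence.xml via the container SPI.
    public static EntityManagerFactory emfFor(String category) {
        // Hypothetical: your own PersistenceUnitInfo implementation.
        PersistenceUnitInfo info = new CategoryPersistenceUnitInfo("pu-" + category);
        Map<String, Object> overrides = new HashMap<>();
        return new PersistenceProvider()
                .createContainerEntityManagerFactory(info, overrides);
    }
}
```

Since the unit name comes from your PersistenceUnitInfo rather than from persistence.xml, this sidesteps the restriction that the PU name cannot be overridden at runtime.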
You may also want to look into EclipseLink's multitenancy support:
http://www.eclipse.org/eclipselink/documentation/2.4/solutions/multitenancy.htm#CHDBJCJA