I want the internal logging statements from EclipseLink (version 2.7.x) to be logged through my Log4j2 setup.
I know there's a bug report on this (https://issues.apache.org/jira/browse/LOG4J2-444) that was never resolved. I also know there is a wiki page describing how to implement your own logger (https://wiki.eclipse.org/EclipseLink/Examples/Foundation/Logging).
Since someone must have already run into this problem (there should be thousands of EclipseLink + Log4j2 setups out there) and come up with a reliable solution, my question is:
Where can I find it?
I am not sure this is the best solution, but this library contains an EclipseLink logger implementation for SLF4J:
https://search.maven.org/artifact/org.eclipse.persistence/org.eclipse.persistence.extension
You tell EclipseLink to use it in persistence.xml:
<property name="eclipselink.logging.logger" value="org.eclipse.persistence.logging.slf4j.SLF4JLogger" />
Now you need the slf4j -> log4j2 bridge:
https://search.maven.org/artifact/org.apache.logging.log4j/log4j-slf4j-impl
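In a Maven build, pulling in the two artifacts would look something like this (the version numbers are placeholders; pick the ones matching your EclipseLink and Log4j2 versions):
<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>org.eclipse.persistence.extension</artifactId>
    <version>2.7.4</version> <!-- placeholder: match your EclipseLink version -->
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.17.2</version> <!-- placeholder: match your Log4j2 version -->
</dependency>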
And finally, in log4j2.xml: <Logger name="eclipselink.logging" level="ALL"> ... </Logger>
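For completeness, a minimal log4j2.xml could look like the following (the appender name and pattern are just examples, not part of the original answer):
<Configuration>
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5level [%logger] %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <!-- route all EclipseLink output to the console appender -->
        <Logger name="eclipselink.logging" level="ALL" additivity="false">
            <AppenderRef ref="Console"/>
        </Logger>
        <Root level="INFO">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>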
level="ALL" because what exactly gets logged is configured in persistence.xml, e.g. via the eclipselink.logging.level property and so on:
https://wiki.eclipse.org/EclipseLink/Examples/JPA/Logging
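A persistence.xml fragment could then look something like this (the FINE levels and the sql category are only illustrations; see the wiki page above for the full list of categories):
<property name="eclipselink.logging.logger" value="org.eclipse.persistence.logging.slf4j.SLF4JLogger"/>
<property name="eclipselink.logging.level" value="FINE"/>
<property name="eclipselink.logging.level.sql" value="FINE"/>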
We are using JBoss Fuse 6.2 with a technical stack of Blueprint, Camel, ActiveMQ and MyBatis.
We need to know how to configure property files in OSGi.
As far as I know we can configure .cfg files, but is there a simpler way, similar to how configuration is done in Spring?
In our code we are reading from property files using the ext:property-placeholder namespace element, giving the bean id and the values.
Is there a simpler way to read the property files?
There are several ways to add configuration, because OSGi services can access configuration via the ConfigurationAdmin service. Blueprint can also resolve property values through it.
JBoss Fuse is based on Karaf, so you can use the following approach.
(Some of the following is quoted from http://www.liquid-reality.de/display/liquid/2011/09/23/Karaf+Tutorial+Part+2+-+Using+the+Configuration+Admin+Service)
Configuration with Blueprint
The integration with our bean class is mostly a simple bean definition where we define the title property and assign the placeholder, which will be resolved using the Config Admin service. The only special thing is the init-method, which gives us the chance to react after all changes have been made, as in the pure OSGi example.
For Blueprint we do not need any Maven dependencies, as our Java code is a plain Java bean. The Blueprint context is activated simply by putting it in the OSGI-INF/blueprint directory and having the Blueprint extender loaded. As Blueprint is always loaded in Karaf, we do not need anything else.
<cm:property-placeholder persistent-id="ConfigApp" update-strategy="reload">
    <cm:default-properties>
        <cm:property name="title" value="Default Title"/>
    </cm:default-properties>
</cm:property-placeholder>

<!-- the class name is a placeholder; point it at your own bean class -->
<bean id="myApp" class="com.example.MyApp" init-method="refresh">
    <property name="title" value="${title}"/>
</bean>
After that you can put a cfg file (a standard Java properties file) into Karaf's etc or deploy directory, named after the given persistent-id, which is ConfigApp in our example (for example: etc/ConfigApp.cfg):
title=Configured title
I'm trying to set up the following broker using JDBC persistence:
<amq:broker id="activeMQBroker" brokerName="activeMQBroker" useJmx="false" persistent="true">
<amq:transportConnectors>
<amq:transportConnector name="vm" uri="vm://activeMQBroker" />
</amq:transportConnectors>
<amq:persistenceAdapter>
<amq:jdbcPersistenceAdapter dataSource="#dataSource" />
</amq:persistenceAdapter>
</amq:broker>
On startup, I get:
java.lang.NoClassDefFoundError: org/apache/kahadb/page/Transaction$Closure
If I add the KahaDB JAR to the classpath, all is well and the ActiveMQ database tables get created (in Postgres). I'd rather not have this additional dependency, though, since I'm not using it.
Any idea why ActiveMQ is still looking for KahaDB, even though I'm using JDBC? I tried setting schedulerSupport="false", as described in this question, but no luck.
P.S. Could someone with enough rep please create a "KahaDB" tag?
Current versions of ActiveMQ are tied pretty hard to KahaDB. The TempStore uses a paged list that relies on KahaDB underneath as well. It's simplest to just include the library.
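If you do just include it, the Maven coordinates would be roughly the following (this assumes the standalone kahadb artifact that the older 5.x releases depend on; the version is a placeholder, match it to your ActiveMQ release):
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>kahadb</artifactId>
    <version>5.7.0</version> <!-- placeholder: use the version matching your ActiveMQ -->
</dependency>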
How can I generate two log files: one for the Hibernate SQL messages produced by the show-sql=true property, and another for the rest of the Hibernate logs?
I have configured logback.xml as shown below:
<logger name="org.hibernate" level="debug" additivity="false">
<appender-ref ref="hibernate" />
</logger>
<logger name="org.hibernate.SQL" additivity="false">
<appender-ref ref="hibernate-sql" />
</logger>
It is generating two log files as expected. However, it is duplicating the Hibernate SQL log messages across the Tomcat console, the hibernate appender, and the hibernate-sql appender.
How can I restrict logback so that the Hibernate SQL logs go to the hibernate-sql appender only?
Hibernate writes the generated SQL in two distinct and completely separate ways. When you set the property hibernate.show_sql to true, it tells Hibernate to write generated SQL to stdout. No logging framework is involved in this in any way. That's why you should pretty much never use it. Just remove that property from your config, and the SQL in the Tomcat console will go away.
The second way Hibernate writes the SQL, and the way you should use, is that it sends it to the logging framework under the org.hibernate.SQL logging category. It has no connection at all with hibernate.show_sql.
As an additional tidbit, in case you don't know, Hibernate also logs all values bound to parameters of the prepared statements using the org.hibernate.type category. This is something you can't get with hibernate.show_sql, so using Hibernate's logging instead of show_sql is both more flexible and more informative.
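Putting that together, a logback configuration along these lines keeps the SQL (and, optionally, the bound parameter values) in the hibernate-sql appender only; the appender names are the ones from your configuration, and the debug/trace levels are the usual ones for these categories:
<logger name="org.hibernate" level="debug" additivity="false">
    <appender-ref ref="hibernate" />
</logger>
<logger name="org.hibernate.SQL" level="debug" additivity="false">
    <appender-ref ref="hibernate-sql" />
</logger>
<logger name="org.hibernate.type" level="trace" additivity="false">
    <appender-ref ref="hibernate-sql" />
</logger>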
I'm trying to get integration testing working for a GlassFish 2.x project, using Maven2 and Cargo. I finally have Cargo attempting to deploy my EAR but it fails to start because the data source is not configured. The app also depends on a few JMS queues and a connection factory - how do I add these?
The Cargo Glassfish 2.x plugin says existing configurations are not supported, so I can't do that.
Using the maven-glassfish-plugin is an option, but we also run OC4J so a Cargo solution would be preferred.
edit: The resources are: 1 JDBC connection pool, 1 JDBC resource, 4 JMS queues, 2 JMS connection factories and a custom security realm (pear tree optional). The realm needs an entry in the login.conf like:
myRealm {
uk.co.mycom.MyGlassFishLoginModule required;
};
I'm not sure (I never used this) but IIRC, you should be able to put your datasource configuration in a sun-resources.xml file and package it as META-INF/sun-resources.xml in your EAR; GlassFish is supposed to create the resources at deploy time.
Here is an example sun-resources.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems Inc.//DTD Application Server 9.0 Domain//EN" "sun-resources_1_3.dtd">
<resources>
<jdbc-connection-pool name="SPECjPool" steady-pool-size="100"
max-pool-size="150" max-wait-time-in-millis="60000"
pool-resize-quantity="2" idle-timeout-in-seconds="300"
is-isolation-level-guaranteed="true"
is-connection-validation-required="false"
connection-validation-method="auto-commit"
fail-all-connections="false"
datasource-classname="oracle.jdbc.pool.OracleDataSource">
<property name="URL"
      value="jdbc:oracle:thin:@iasperfsol12:1521:specdb"/>
<property name="User" value="spec"/>
<property name="Password" value="spec"/>
<property name="MaxStatements" value="200"/>
<property name="ImplicitCachingEnabled" value="true"/>
</jdbc-connection-pool>
<jdbc-resource enabled="true" pool-name="SPECjPool"
jndi-name="jdbc/SPECjDB"/>
</resources>
Give it a try.
Resources
The sun-resources.xml File
Thanks, that worked. The datasource seems to have gone in okay and the app has deployed. However, from the doc you linked I can't see how to add the other things I need (I've edited more detail about these into my question). This solution also means that I will have to build my EAR differently for IT (using profiles?), which is imperfect.
I somehow missed that you wanted to create resources other than datasources, and I've seen several threads reporting that the suggested approach won't work with GlassFish v2 for those (e.g. JMS resources). My bad.
So, given the current state, your options are (IMO):
contribute to Cargo to provide an "existing" configuration implementation for GlassFish v2
use the maven-glassfish-plugin as you suggested
I don't have any better suggestions.
I'm currently trying to 'port' my Java EE 5 application from JBoss 6 M2 to GlassFish 3.0.1.
JBoss used to create my JMS destination queues at deployment time thanks to the *-service.xml files. I really liked this feature and I would like to find a way to do the same thing on GlassFish. Is this even possible?
I'm not sure of the exact status with GlassFish 3.0.1 but according to these threads:
http://markmail.org/thread/cqj56ehulg7qdenp
http://markmail.org/thread/zs4naxy534ijbpic
creating JMS destinations at deploy time was not supported. But these threads are pretty old and things might have changed (see below).
You can however declare them in a sun-resources.xml file and pass it to the asadmin add-resources command.
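For example, a sun-resources.xml declaring a queue could look roughly like this (a sketch only; the JNDI name and the physical destination name are placeholders):
<resources>
    <admin-object-resource res-adapter="jmsra" res-type="javax.jms.Queue" jndi-name="jms/MyQueue">
        <property name="Name" value="PhysicalQueue"/>
    </admin-object-resource>
</resources>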
That being said, several documents (like this one or this one) mention the deployment of application-scoped-resources defined in a sun-resources.xml bundled in the application (that will become glassfish-resources.xml in GlassFish 3.1) as part of the deploy/undeploy of the app but:
I don't know if this is relevant for 3.0.1.
I don't know the exact status, especially for JMS resources.
This would require testing.
With GlassFish 4.x, connection factories and destinations (i.e. queues and topics) can be configured in the domain.xml file under glassfish/domains/your-domain-name.
E.g.:
<resources>
<connector-connection-pool resource-adapter-name="jmsra" max-pool-size="250" steady-pool-size="1" name="jms/DurableConnectionFactory-Connection-Pool" description="connection factory for durable subscriptions" connection-definition-name="javax.jms.ConnectionFactory">
<property name="ClientId" description="MyID" value="MyID"></property>
</connector-connection-pool>
<connector-resource pool-name="jms/DurableConnectionFactory-Connection-Pool" description="connection factory for durable subscriptions" jndi-name="jms/DurableConnectionFactory"></connector-resource>
<admin-object-resource res-adapter="jmsra" description="PhysicalQueue" res-type="javax.jms.Queue" jndi-name="jms/MyQueue">
    <property name="Name" value="PhysicalQueue"/>
</admin-object-resource>
</resources>