I'm trying to set up the following broker using JDBC persistence:
<amq:broker id="activeMQBroker" brokerName="activeMQBroker" useJmx="false" persistent="true">
    <amq:transportConnectors>
        <amq:transportConnector name="vm" uri="vm://activeMQBroker" />
    </amq:transportConnectors>
    <amq:persistenceAdapter>
        <amq:jdbcPersistenceAdapter dataSource="#dataSource" />
    </amq:persistenceAdapter>
</amq:broker>
On startup, I get:
java.lang.NoClassDefFoundError: org/apache/kahadb/page/Transaction$Closure
If I add the KahaDB JAR to the classpath, all is well and the ActiveMQ database tables get created (in Postgres). I'd rather not have this additional dependency, though, since I'm not using it.
Any idea why ActiveMQ is still looking for KahaDB, even though I'm using JDBC? I tried setting schedulerSupport="false", as described in this question, but no luck.
P.S. Could someone with enough rep please create a "KahaDB" tag?
Current versions of ActiveMQ are tied fairly tightly to KahaDB; the temp store uses a paged list that is backed by KahaDB as well. It's simplest to just include the library.
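For what it's worth, the same broker can be configured programmatically; the sketch below mirrors the XML above (class names are from activemq-core, and the KahaDB jar still has to be on the classpath for startup to succeed). Nothing in it references KahaDB directly, which illustrates that the dependency comes from the broker's internals (the temp store), not from your configuration.

    import javax.sql.DataSource;

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;

    public class JdbcBrokerExample {
        public static BrokerService startBroker(DataSource dataSource) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("activeMQBroker");
            broker.setUseJmx(false);
            broker.setPersistent(true);

            // JDBC persistence only; KahaDB is not named anywhere in this setup.
            JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
            jdbc.setDataSource(dataSource);
            broker.setPersistenceAdapter(jdbc);

            broker.addConnector("vm://activeMQBroker");
            broker.start();
            return broker;
        }
    }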
Related
I was following a tutorial for setting up a WebSphere Liberty server (here) and didn't really understand what one part of the tutorial did. I completed the tutorial and it works fine.
In step 3 it has me modify the server.xml with these two lines, and I don't really know what they do.
<applicationMonitor updateTrigger="mbean" />
<feature>localConnector-1.0</feature>
I found the documentation for localConnector-1.0, but it's a little over my head:
https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.liberty.autogen.nd.doc/ae/rwlp_feature_localConnector-1.0.html
I think localConnector allows IntelliJ to run the server somehow, but I don't know what updateTrigger="mbean" does.
If anyone has an explanation that would be great. Thanks!
The localConnector-1.0 feature enables the local JMX connector on Liberty so that a JMX client (IntelliJ in this case) can connect to and administer Liberty.
You can find more documentation on the feature here: https://www.ibm.com/support/knowledgecenter/en/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/twlp_admin_localconnector.html
updateTrigger="mbean" sets application updates to occur only when triggered by an MBean call (whereas the default is to poll for file changes).
You can find more documentation here:
https://www.ibm.com/support/knowledgecenter/SSAW57_liberty/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/twlp_setup_dyn_upd.html
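To give an idea of what the local connector enables, here is a sketch of a standalone JMX client that reads the connector address Liberty writes to disk and connects to it. The file path is an assumption for a server named defaultServer; adjust it to your server name and output directory.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class LibertyLocalJmxClient {
        public static void main(String[] args) throws Exception {
            // Once localConnector-1.0 is enabled, Liberty writes the JMX address here
            // (path assumed for a server named defaultServer).
            String address = new String(Files.readAllBytes(Paths.get(
                    "wlp/usr/servers/defaultServer/logs/state/com.ibm.ws.jmx.local.address")),
                    StandardCharsets.UTF_8);

            try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(address))) {
                MBeanServerConnection mbeans = connector.getMBeanServerConnection();
                // With updateTrigger="mbean", application updates are kicked off through
                // an MBean call over a connection like this one, instead of by polling.
                System.out.println("MBeans registered: " + mbeans.getMBeanCount());
            }
        }
    }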
I have a problem with my publish/subscribe implementation. I'm upgrading from NServiceBus version 2.6 to 4.0.4, and everything seems okay as far as I can tell from the logs, but the messages are processed really slowly by the subscriber. I use NServiceBus.Host.exe.
In the old implementation I have configured threads as follows:
<MsmqTransportConfig
    ErrorQueue="error"
    NumberOfWorkerThreads="40"
    MaxRetries="5" />
And the messages go through with nice speed.
In the new implementation I've made what I believe are the corresponding configuration changes:
<TransportConfig
    MaximumConcurrencyLevel="10"
    MaxRetries="5"
    MaximumMessageThroughputPerSecond="500"/>
Am I missing something critical?
I have a valid license, so I should have the maximum number of threads available. I'm not using RavenDB or SQL; the implementation uses MSMQ, and I've disabled Sagas and the TimeoutManager in my subscriber's configuration code:
// Route NServiceBus logging through the existing log4net configuration.
NServiceBus.SetLoggingLibrary.Log4Net(log4net.Config.XmlConfigurator.Configure);

// Sagas are not used by this endpoint.
Configure.Features.Disable<Sagas>();

NServiceBus.Configure.With()
    .DefaultBuilder()
    .UseTransport<Msmq>()
    .DisableTimeoutManager() // no deferred messages, so the timeout manager is off
    .UnicastBus()
    .LoadMessageHandlers();
I did a crude test, and the difference in my development environment is that the 2.6 version handled approximately 80 messages per second while the 4.0.4 version handled approximately 8 messages per second, which is really bad. So something's wrong here and I just can't seem to find what it is.
Edit: It looks like the problem came from our project structure. For some reason the older version of NServiceBus didn't mind our structural approach, with a generic subscriber that uses MEF to load the actual subscriber assemblies, but the new one went to sleep. I changed the folder structure and now the subscriber works as intended. So the configuration I was using works just fine, but I did delete MaximumMessageThroughputPerSecond from my settings so that it won't present a future problem, since the aim is to be as fast as possible.
I'm using neo4j in a Glassfish server through a modified version of Alex Smirnov's neo4j JCA connector.
My version is available here : https://github.com/Riduidel/neo4j-connector
I'm using this connector with neo4j 1.8.
As a consequence, when I want to use it, I first install the connector in my Glassfish application server, then use it in the applications that wish to connect to Neo4j.
It works OK when using it with fresh stores.
But when using it with stores created by a previous version, I encounter weird bugs.
Typically, today I got the following stack trace:
javax.resource.spi.ResourceAllocationException: Error in allocating a connection. Cause: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@3bbd53b1 from NONE to STOPPED
...
...
.../* JCA internal exception stack */
...
...
Caused by: com.sun.appserv.connectors.internal.api.PoolingException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@494b584c from NONE to STOPPED
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:924)
at com.sun.enterprise.resource.pool.ConnectionPool.createResource(ConnectionPool.java:1185)
at com.sun.enterprise.resource.pool.datastructure.RWLockDataStructure.addResource(RWLockDataStructure.java:98)
... 66 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Failed to transition org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader@494b584c from NONE to STOPPED
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:388)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:82)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:116)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:227)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:79)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:70)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:165)
at com.netoprise.neo4j.AbstractNeo4jManagedConnectionFactory.createDatabase(AbstractNeo4jManagedConnectionFactory.java:127)
at com.netoprise.neo4j.Neo4jManagedConnectionFactory.createManagedConnection(Neo4jManagedConnectionFactory.java:163)
at com.sun.enterprise.resource.allocator.ConnectorAllocator.createResource(ConnectorAllocator.java:160)
at com.sun.enterprise.resource.pool.ConnectionPool.createSingleResource(ConnectionPool.java:907)
... 68 more
Caused by: java.lang.AssertionError
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:265)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.cleanWriteLocks(LuceneDataSource.java:260)
at org.neo4j.index.impl.lucene.LuceneDataSource.<init>(LuceneDataSource.java:185)
at org.neo4j.index.lucene.LuceneIndexProvider.load(LuceneIndexProvider.java:72)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.loadIndexImplementations(InternalAbstractGraphDatabase.java:1171)
at org.neo4j.kernel.InternalAbstractGraphDatabase$DefaultKernelExtensionLoader.init(InternalAbstractGraphDatabase.java:1143)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:382)
... 78 more
A quick inspection reveals that this exception is linked to an undeletable "write.lock" file. My write.lock file can't be deleted because, I guess, the migration is not over.
How can I make sure the migration is done before the store is used, without migrating it outside of Glassfish?
Is there a way to have exclusive store migrations in that context? And if so, how?
And is that the solution to my problem?
EDIT 1 Added exception message.
EDIT 2 All this only happens when the loaded graph was previously used with Neo4j 1.5 and is now used with a Neo4j 1.8 connector. When the graph is created by the connector, absolutely no error happens.
EDIT 3 Strangely enough, this only happens as long as there is no debugger plugged into that code: as soon as I try to debug it, the issue stops appearing. This makes me think there may be a migration cleanup mechanism that removes the write lock once migration is done, and that this cleanup is not performed when using my neo4j JCA connector. Is that a valid observation?
I am not too familiar with the JCA connector, but to be safe I would just write a very small migration Java class that opens the database, lets it migrate, and shuts down. Then try again with the JCA connector.
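Something like the following could serve as that one-shot migration tool (a minimal sketch for Neo4j 1.8; the store path is hypothetical, and allow_store_upgrade must be enabled for the store-version bump):

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.factory.GraphDatabaseFactory;
    import org.neo4j.graphdb.factory.GraphDatabaseSettings;

    public class OneShotStoreMigration {
        public static void main(String[] args) {
            // Opening the store with upgrades allowed performs the migration in place.
            GraphDatabaseService db = new GraphDatabaseFactory()
                    .newEmbeddedDatabaseBuilder("/path/to/store") // hypothetical path
                    .setConfig(GraphDatabaseSettings.allow_store_upgrade, "true")
                    .newGraphDatabase();

            // A clean shutdown releases write.lock, so the JCA connector can then
            // open the already-migrated store.
            db.shutdown();
        }
    }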
After further investigation, the root cause turned out not to be multiple calls to the EmbeddedGraphDatabase constructor, but rather multiple identical IndexProviders being loaded.
I use neo4j embedded in an open-source JCA connector.
In this connector, the org.neo4j.kernel.Service class is replaced by a custom one which contains a workaround regarding service loading for JBoss non-shared libraries.
Unfortunately, in our context, this workaround implies loading the index provider twice:
once using the EAR classloader
once using the Glassfish library classloader.
Why?
Because our neo4j instance is used both for application data AND for authentication, the neo4j connector jar is put in ${domain}/lib. As a consequence, due to classloader delegation in the application server, the EAR classloader delegates to the Glassfish library classloader and finds the LuceneIndexProvider that way. Then the Glassfish library classloader is directly used to load the same LuceneIndexProvider class.
This leaves us with two LuceneIndexProvider objects, both trying to migrate the Lucene index, which leads to the AssertionError: the write.lock file created by the first object should be deleted by the second one, which cannot do so.
I then slightly changed that very specific class to use the JBoss workaround only when the default loading mechanism does not return any class (see commit here). This small change worked like a charm, so I think you can consider this issue fixed.
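The shape of the change is roughly the following (a hypothetical sketch rather than the actual commit; loadWithJBossWorkaround stands in for the connector's custom mechanism):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.ServiceLoader;

    public class ServiceLookup {
        // Prefer the standard ServiceLoader lookup and fall back to the
        // JBoss-specific workaround only when it finds nothing, so the same
        // provider is not loaded once per classloader.
        public static <T> Iterable<T> loadProviders(Class<T> type) {
            List<T> found = new ArrayList<T>();
            for (T provider : ServiceLoader.load(type)) {
                found.add(provider);
            }
            if (!found.isEmpty()) {
                return found;
            }
            return loadWithJBossWorkaround(type);
        }

        private static <T> Iterable<T> loadWithJBossWorkaround(Class<T> type) {
            // JBoss VFS-aware lookup elided; see the linked commit for the real code.
            return new ArrayList<T>();
        }
    }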
This question is now up for Bounty! First answer that solves this problem wins.
So I've recently discovered that bundles in OSGi are not 100% isolated from each other, especially when your bundles share a common bundle that contains a singleton, which can result in two unrelated bundles overwriting the singleton. This issue has manifested itself with the CXF libraries. Let me give a detailed example of what is happening:
We have bundles A and B and the shared bundle CXF, all in a FuseESB ServiceMix (an OSGi platform). CXF's Bus class is a singleton, and because OSGi loads each class through a single bundle classloader, that singleton is shared with every other bundle that uses CXF. So I seem to be unable to create different buses for bundle A and bundle B, which matters because bundle A should be using SSL and bundle B should not. This is all the more frustrating given that bundle A and bundle B have nothing to do with each other, other than that they must be deployed together on the same ServiceMix.
I've been at this problem for a while now (1-2 months) and I've read up on a lot of different solutions. The problem, however, is that a lot of the solutions require me to have complete control over the source code, and in this case I do not. Bundle A that I'm creating uses a proprietary third-party non-OSGi library, called Xenara, which uses CXF. For business reasons beyond my control I MUST use this third-party library. Fortunately, I do have access to the CXF Spring bean file that this library uses.
My guess for solving this problem is that I need to somehow make it so that bundle A can use its own private instance of CXF, or at least instantiate a CXF Bus that isn't shared with other bundles. Here are the methods I've tried or considered:
I embedded CXF into bundle A, but unfortunately the classloader kept fetching CXF from outside of bundle A instead of looking on the bundle's own classpath. I never figured out how to force it to search inside bundle A before searching outside.
Suggestions were made to turn bundle A into a service. I think there were some misunderstandings, and people thought the singleton was in A rather than in CXF. Regardless, I tried it and it didn't solve the problem: the CXF bus was still shared between bundles A and B.
Override the classloading so that bundle A uses a different classloader for loading the CXF classes. I don't fully understand the logic of this, but I'm sure it will be very tricky, given that a Spring bean is being used to create the CXF bus and http-conduit. See (4) below to get a better idea.
In CXF there is a way to set the CXF bus and http-conduit for a given thread context. I really want to use this solution, but I can't figure out how to translate the CXF bean file into equivalent Java code (a rough sketch of what this might look like follows the bean file below). The CXF Spring bean file is provided here. Note that I don't have access to the source code using this http-conduit, which is why I haven't used the examples shown in this link at "Using Java Code": I don't have access to the SOAPService, the WSDL, etc.
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE" />
    <property name="searchSystemEnvironment" value="true" />
    <property name="ignoreUnresolvablePlaceholders" value="true" />
</bean>

<cxf:bus>
    <cxf:outInterceptors>
        <bean class="com.xenara.messaging.security.IdentityAssertingOutInterceptor" scope="singleton" />
    </cxf:outInterceptors>
    <cxf:features>
        <wsa:addressing xmlns:wsa="http://cxf.apache.org/ws/addressing"/>
    </cxf:features>
</cxf:bus>

<http-conf:conduit name="*.http-conduit">
    <http-conf:client AllowChunking="false" Connection="Keep-Alive" />
    <http-conf:tlsClientParameters disableCNCheck="true" secureSocketProtocol="TLS">
        <sec:keyManagers keyPassword="${javax.net.ssl.keyStorePassword}">
            <sec:keyStore type="JKS" password="${javax.net.ssl.keyStorePassword}"
                file="${javax.net.ssl.keyStore}" />
        </sec:keyManagers>
        <sec:trustManagers>
            <sec:keyStore type="JKS" password="${javax.net.ssl.trustStorePassword}" file="${javax.net.ssl.trustStore}" />
        </sec:trustManagers>
        <sec:cipherSuitesFilter>
            <sec:include>SSL_RSA_WITH_3DES_EDE_CBC_SHA</sec:include>
            ...
        </sec:cipherSuitesFilter>
    </http-conf:tlsClientParameters>
</http-conf:conduit>
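To give an idea of what (4) could look like in code, here is a rough sketch of the bean file translated to Java. It assumes the third-party calls can be wrapped on the calling thread and that your CXF version ships org.apache.cxf.transport.http.HTTPConduitConfigurer (worth verifying); the key/trust manager wiring from the javax.net.ssl system properties is elided.

    import org.apache.cxf.Bus;
    import org.apache.cxf.BusFactory;
    import org.apache.cxf.configuration.jsse.TLSClientParameters;
    import org.apache.cxf.transport.http.HTTPConduit;
    import org.apache.cxf.transport.http.HTTPConduitConfigurer;
    import org.apache.cxf.transports.http.configuration.ConnectionType;
    import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;
    import org.apache.cxf.ws.addressing.WSAddressingFeature;

    public class PrivateBusExample {
        public static void callXenaraWithPrivateBus(Runnable xenaraCall) {
            Bus bus = BusFactory.newInstance().createBus();

            // <cxf:outInterceptors> and <cxf:features> from the bean file.
            bus.getOutInterceptors().add(
                    new com.xenara.messaging.security.IdentityAssertingOutInterceptor());
            new WSAddressingFeature().initialize(bus);

            // Rough equivalent of the *.http-conduit block: configure every
            // conduit created on this bus.
            bus.setExtension(new HTTPConduitConfigurer() {
                public void configure(String name, String address, HTTPConduit conduit) {
                    HTTPClientPolicy client = new HTTPClientPolicy();
                    client.setAllowChunking(false);
                    client.setConnection(ConnectionType.KEEP_ALIVE);
                    conduit.setClient(client);

                    TLSClientParameters tls = new TLSClientParameters();
                    tls.setDisableCNCheck(true);
                    tls.setSecureSocketProtocol("TLS");
                    // tls.setKeyManagers(...) / tls.setTrustManagers(...) built from
                    // the javax.net.ssl.* system properties, omitted for brevity.
                    conduit.setTlsClientParameters(tls);
                }
            }, HTTPConduitConfigurer.class);

            // Make the private bus the default for this thread only, then restore.
            Bus original = BusFactory.getThreadDefaultBus(false);
            BusFactory.setThreadDefaultBus(bus);
            try {
                xenaraCall.run(); // third-party code picks up the thread-default bus
            } finally {
                BusFactory.setThreadDefaultBus(original);
            }
        }
    }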
This sounds like the basic premise of OSGi to me: isolation is provided, but you can still do much of what you can in regular Java, such as modify static members of a class; and since you all share that class (the CXF bundle presumably exports it, A and B import it), the others will notice.
In most situations I would advise you not to use static class state, since it is bound to mess something up for other bundles.
In your situation, it seems to me that the CXF library has no real use being shared in the framework. If you need real isolation, I would package the library inside both of the using bundles and not worry too much about the overhead.
For the record: this situation has nothing to do with ServiceMix, it's basic Java: if we're talking about the same class and someone changes a static property, others will notice. If this situation confuses you, you could read up a bit on the classloading and sharing mechanisms in OSGi.
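To make that concrete, here is a tiny invented illustration of the plain-Java behaviour at play:

    // Exported by one bundle, imported by both A and B: there is exactly one
    // copy of this class, so there is exactly one copy of its static state.
    public class SharedConfig {
        public static boolean useSsl;
    }

    // In bundle A:  SharedConfig.useSsl = true;
    // In bundle B:  SharedConfig.useSsl = false;  // silently clobbers A's setting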
The problem you are facing is fairly essential and basic. You have static state in a supporting library, CXF, while you still want shared instances of the libraries that use CXF. You cannot modify the shared libraries (due to their sheer size), nor can you modify CXF (closed source?). Let's call these shared libraries Foo and Bar.
Suppose you have the following classes:
CXF#1
Foo#1, using CXF#1
Bar#1, using CXF#1
WebApp#1, using Foo#1 and Bar#1
If I understand correctly, you now want another application to use the same instances of Foo and Bar, without using the same underlying library CXF#1. This amounts to the following situation.
CXF#2
CXF#1
Foo#1, using CXF#1 when called by App#1, using CXF#2 when called by App#2
Bar#1, using CXF#1 when called by App#1, using CXF#2 when called by App#2
WebApp#1, using Foo#1 and Bar#1
WebApp#2, using Foo#1 and Bar#1
This is just not possible, not in OSGi and not in any Java framework: an existing class cannot dynamically bind to another class based on which bundle is calling. The only way to do this without modifying the libraries is to duplicate the supporting libraries:
CXF#2
CXF#1
Foo#1, using CXF#1
Bar#1, using CXF#1
Foo#2, using CXF#2
Bar#2, using CXF#2
WebApp#1, using Foo#1 and Bar#1
WebApp#2, using Foo#2 and Bar#2
Indeed, this is a lot of effort and will explode the number of packages on disk and in memory. If the CXF package can only be used by a single application, the most logical solution is to duplicate the package and embed it everywhere you use it. Yes, this includes any and all libraries the package depends on.
A hacky/risky way to resolve this is as follows: you should be able to decompile the CXF class, which would let you modify it along these lines:
class CXF {
    [...]
    public static CXF getInstance() {
        // Based on the current stack frame, determine which instance to return.
        // Remember: the instance should be chosen per WebApp bundle
        // (while you still have shared libraries in between!)
    }
}
This is not foolproof. Suppose your WebApp starts a callback thread originating from library A; when that thread calls CXF.getInstance(), the getInstance() method has no way of determining which WebApp started the callback thread.
The correct solution is to modify all libraries not to use the Singleton pattern. You can probably hack your way around the problem by implementing a special classloader, but this opens a whole other can of worms.
-- EDIT --
After reading up on CXF, it seems very strange that CXF exposes a singleton class; the thing is made for OSGi! You are probably better off asking the question on the CXF mailing list: they will know all the special sugar and the reasons for having a singleton instance, and have probably already thought about this use case.
I am trying to connect to GlassFish 3's JMS service from a standalone remote client. However I am getting a java.lang.ClassNotFoundException: com.sun.messaging.jms.ra.ResourceAdapter. Any ideas on how to fix this?
Here's my setup so far:
Glassfish 3 JMS Service in LOCAL mode (I am assuming that EMBEDDED mode will not work in this case because it bypasses the network stack)
JNDI properties are specified as follows:
java.naming.factory.initial=com.sun.enterprise.naming.SerialInitContextFactory
java.naming.factory.url.pkgs=com.sun.enterprise.naming
java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl
gf-client-module.jar (in GLASSFISH_HOME/modules) added to the standalone application's classpath. I also tried adding other jars present in the modules directory (such as jms-core.jar), but I'm still getting the same ClassNotFoundException.
Any help would be much appreciated.
Instead of using all of the individual Glassfish jar files that you might need (such as gf-client-module.jar, imqjmsra.jar, and imqbroker.jar), the preferred method is to use the gf-client.jar file. It can be found in $GLASSFISH_HOME/lib.
There is more information at http://glassfish.java.net/javaee5/ejb/EJB_FAQ.html#StandaloneRemoteEJB. That document pertains to using EJBs in standalone clients, but the solution is the same for using JMS.
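With gf-client.jar on the classpath, a minimal standalone client looks roughly like this (the JNDI names are hypothetical; use the resources defined in your domain):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class StandaloneJmsClient {
        public static void main(String[] args) throws Exception {
            // The no-arg InitialContext picks up the GlassFish naming provider
            // (localhost:3700 by default; override via org.omg.CORBA.ORBInitialHost/Port).
            InitialContext ctx = new InitialContext();

            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyQueue");

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("hello from a standalone client"));
            } finally {
                connection.close();
            }
        }
    }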
OK, I found a solution. See here for details, but the short answer is that I needed to add two jars to the classpath: imqjmsra.jar and imqbroker.jar. These were available inside a RAR called imqjmsra.rar, which can be found under Glassfish's mq directory. I had to extract the two jars from this RAR!
This is the complete list of client jars for Glassfish 3:
auto-depends.jar
bean-validator.jar
common-util.jar
config-api.jar
config-types.jar
config.jar
connectors-internal-api.jar
container-common.jar
deployment-common.jar
dol.jar
ejb-container.jar
ejb.security.jar
glassfish-api.jar
glassfish-corba-asm.jar
glassfish-corba-codegen.jar
glassfish-corba-csiv2-idl.jar
glassfish-corba-internal-api.jar
glassfish-corba-newtimer.jar
glassfish-corba-omgapi.jar
glassfish-corba-orb.jar
glassfish-corba-orbgeneric.jar
glassfish-naming.jar
gmbal.jar
hk2-core.jar
internal-api.jar
javax.ejb.jar
javax.jms.jar
javax.resource.jar
javax.servlet.jar
javax.transaction.jar
jta.jar
kernel.jar
management-api.jar
orb-connector.jar
orb-iiop.jar
security.jar
ssl-impl.jar
transaction-internal-api.jar
As mentioned in Ivan A Krizsan's notes for the EJB certification, and depending on the Glassfish version, this should be enough:
GlassFish 3 (and GlassFish 4 too, I've just tested it): $GLASSFISH_HOME/lib/gf-client.jar
GlassFish 2: $GLASSFISH_HOME/lib/appserv-rt.jar and $APS_HOME/lib/javaee.jar