OSB WLS Initialisation issue - weblogic

I am facing some strange behavior in OSB. I have configured WLS with MQ in client mode and am running a minor test to check the connection: a proxy service reads the message from Q1 and a Business Service (BS) routes it to Q2. The proxy is able to read the message, but the BS is throwing this:
JMSPool BEA-169807 There was an error while making the initial connection to the JMS resource named ALSB_JMS_SessionPool_491704821 from within an EJB or a servlet. The server will attempt the connection again later. The error was javax.jms.JMSException: [JMSPool:169803]JNDI lookup of the JMS connection factory AKBConnFact failed: javax.naming.NoInitialContextException: Cannot instantiate class: com.sun.jndi.fscontext.RefFSContextFactory [Root exception is java.lang.ClassNotFoundException: com.sun.jndi.fscontext.RefFSContextFactory
Note: the RefFSContextFactory class is available either on the classpath or in the domain/lib folder.
Any ideas gang..? TIA

The answer: this is a bug in OSB and needs to be reported.
As a workaround, you need to add the jars individually to the WebLogic classpath in your domain/server/bin folder. See the link below for more details; a minimal sketch of the classpath change follows it:
http://forums.oracle.com/forums/thread.jspa?threadID=2135523&start=0&tstart=0
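For example, assuming a standard IBM MQ client install where fscontext.jar and providerutil.jar supply the com.sun.jndi.fscontext classes, the jars can be prepended to the server classpath in setDomainEnv.cmd roughly like this (the install path is a placeholder; add whatever other MQ client jars your setup needs in the same way):
set PRE_CLASSPATH=C:\IBM\MQ\java\lib\fscontext.jar;C:\IBM\MQ\java\lib\providerutil.jar;%PRE_CLASSPATH%
Restart the managed servers afterwards so the new PRE_CLASSPATH is picked up.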

Related

ActiveMQ integration with Oracle Service Bus (OSB) 12c

We are trying to assess ActiveMQ for JMS-based integration in OSB 12c. I followed a few blogs, such as https://bizzperform.com/blog/?p=686, but they did not help and I am getting the error below.
Has anyone come across this scenario and implemented it? Kindly advise.
<Failed to check whether connection factory LocalConnectionFactory supports XA. Will assume it does not: javax.naming.NoInitialContextException: Cannot instantiate class: org.apache.activemq.jndi.ActiveMQInitialContextFactory [Root exception is java.lang.ClassNotFoundException:
The ActiveMQ client jar is missing from the domain classpath; you need to download it and add it to PRE_CLASSPATH.
Thanks, I finally got it working.
Two quick changes and it worked.
I added the jar file in setDomainEnv.cmd like this:
set PRE_CLASSPATH=%DOMAIN_HOME%\lib\activemq-all-5.16.3.jar;%PRE_CLASSPATH%
(Alternatively, you can use the full path to the domain home instead of %DOMAIN_HOME%.)
While configuring JMS in OSB, getting the JNDI settings right is always tricky; I ended up using the URL below:
jms://localhost:7001//
That worked and the connection was established.
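If you want to verify the JNDI side outside OSB, here is a minimal sketch of a lookup through ActiveMQ's JNDI context factory, assuming a local broker on the default port 61616 and the default ConnectionFactory binding (no authentication):

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ActiveMqJndiCheck {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        // The factory class that was reported missing; it lives in the ActiveMQ client jar.
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        env.put(Context.PROVIDER_URL, "tcp://localhost:61616"); // assumption: default broker port
        Context ctx = new InitialContext(env);

        // "ConnectionFactory" is the default name exposed by ActiveMQ's JNDI provider.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Connection connection = cf.createConnection();
        connection.start();
        System.out.println("JNDI lookup and connection OK");
        connection.close();
    }
}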

JMX connection to Gemfire over SSL

I used GFSH to start the locator as shown below:
start locator --name=gemfire_locator --security-properties-file="../config/gfsecurity.properties" --J=-Dgemfire.ssl-enabled-components=all --mcast-port=0 --J=-Dgemfire.jmx-manager-ssl=true
I also started the server:
start server --name=server1 --security-properties-file="../config/gfsecurity.properties" --J=-Dgemfire.ssl-enabled-components=all --mcast-port=0 --J=-Dgemfire.jmx-manager-ssl=true
Connecting to Gemfire as a ClientCache works perfectly fine over SSL. But when I connect as a JMX client, I get the error below, both from Java code and from JConsole.
Error:
Exception in thread "main" java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: non-JRMP server at remote endpoint]
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369)
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270)
at SamplePlugin.main(SamplePlugin.java:101)
Am I missing any other configuration?
Here is my JAVA_TOOL_OPTIONS:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=true
-Djava.rmi.server.hostname=myhostname
You will also need to add the geode-core jar to your classpath for jvisualvm. Use the --cp:a option. I would suggest just using geode-dependencies.jar as that will get everything you might need.
The reason this is required is explained a bit in the comments for ContextAwareSSLRMIClientSocketFactory. Basically, when RMI uses SSL, the necessary RMIClientSocketFactory is exported from the server to the client for use there. In general this would simply be SslRMIClientSocketFactory, but in our case we have a custom socket factory, so the client (jvisualvm in this case) needs to have access to it.
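For the plain Java client side, here is a minimal sketch of a JMX-over-SSL connection, assuming the JMX manager listens on port 1099 and using placeholder host and trust store values (the Geode socket factory mentioned above must also be on this client's classpath):

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.rmi.ssl.SslRMIClientSocketFactory;

public class JmxSslClient {
    public static void main(String[] args) throws Exception {
        // Trust store holding the JMX manager's certificate (placeholder path/password).
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        // Assumption: JMX manager on port 1099; adjust host/port to your setup.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://myhostname:1099/jmxrmi");

        Map<String, Object> env = new HashMap<String, Object>();
        // Use an SSL socket factory for the RMI registry lookup as well,
        // otherwise the client falls back to plain JRMP and fails.
        env.put("com.sun.jndi.rmi.factory.socket", new SslRMIClientSocketFactory());

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + mbsc.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}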

Apache camel / MQTT through SSL : Failed to create Producer for endpoint (java.lang.NullPointerException)

I'm trying to publish to an MQTT topic using the Apache Camel MQTT component.
In my Spring context XML I have the following:
<camel:to uri="mqtt:test?host=ssl://myhost:8883&publishTopicName=test&userName=test&password=test"/>
But I'm getting the following error at startup:
Failed to create Producer for endpoint:
Endpoint[mqtt:test?host=ssl://myhost:8883&publishTopicName=test&userName=test&password=test]. Reason: java.lang.NullPointerException
Everything works fine when not using SSL; the following configuration (plain TCP instead of SSL) works well:
<camel:to uri="mqtt:test?host=tcp://myhost:1883&publishTopicName=test&userName=test&password=test"/>
I've added the javax.net.ssl.trustStore JVM property pointing to my certificate store, but it has no effect.
Has anyone already run into this issue? Is there something specific to add to the Spring DSL configuration file when using the Camel MQTT component with SSL?
EDIT:
The stack trace of the NPE:
Caused by: java.lang.NullPointerException
at org.fusesource.hawtdispatch.transport.SslTransport.connecting(SslTransport.java:194)
at org.fusesource.mqtt.client.CallbackConnection.createTransport(CallbackConnection.java:285)
at org.fusesource.mqtt.client.CallbackConnection.connect(CallbackConnection.java:138)
at org.apache.camel.component.mqtt.MQTTEndpoint.connect(MQTTEndpoint.java:305)
at org.apache.camel.component.mqtt.MQTTProducer.doStart(MQTTProducer.java:38)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.startService(DefaultCamelContext.java:3219)
at org.apache.camel.impl.DefaultCamelContext.doAddService(DefaultCamelContext.java:1209)
at org.apache.camel.impl.DefaultCamelContext.addService(DefaultCamelContext.java:1170)
at org.apache.camel.impl.ProducerCache.doGetProducer(ProducerCache.java:442)
... 33 more
Debugging with javax.net.debug=ssl was useful.
It turned out the issue was in the java.security file, where the security.provider properties were not set properly; they had been changed manually for testing purposes related to another application.
Since fixing that, everything works fine. Sorry for a post about an internal, application-specific mistake.
Alex.
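As a side note, here is a minimal way to check which security providers the running JVM actually loaded from java.security (handy when a hand-edited security.provider list is suspected):

import java.security.Provider;
import java.security.Security;

public class ListSecurityProviders {
    public static void main(String[] args) {
        // Prints the providers in the order the JVM loaded them from java.security.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " - " + p.getInfo());
        }
    }
}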

ActiveMQ KahaDB compatibility

We want to upgrade ActiveMQ 5.3 to 5.6 and also keep the connection information.
Here is what we do:
1. Back up the files under data\kahadb\ and uninstall ActiveMQ 5.3.
2. Install ActiveMQ 5.6.
3. Overwrite the files under data\kahadb with the backed-up files.
With about 20 clients, it works well.
With more than 100 clients, I can't connect to my broker anymore.
Here are the logs in wrapper.log:
Failed to load: class path resource [activemq.xml], reason: Error creating bean with name 'org.apache.activemq.xbean.XBeanBrokerService#0' defined in class path resource [activemq.xml]: Invocation of init method failed; nested exception is org.apache.kahadb.page.Transaction$InvalidPageIOException: Page id is not valid
We also got an exception when we created a consumer:
Apache.NMS.ActiveMQ.BrokerException: java.io.EOFException :
Apache.NMS.ActiveMQ.Connection.SyncRequest(Command command, TimeSpan requestTimeout)
Apache.NMS.ActiveMQ.Session.CreateConsumer(IDestination destination, String selector, Boolean noLocal)
Apache.NMS.ActiveMQ.Session.CreateConsumer(IDestination destination)
Is this a DB compatibility issue? Or how can we keep the connection data after upgrading ActiveMQ?
According to this thread: http://activemq.2283324.n4.nabble.com/Migrate-existing-kahaDB-to-a-new-version-of-ActiveMQ-possible-td4486455.html, if overwriting the DB does not work, you can simply create a Camel route that consumes messages from the old broker instance and loads them onto the new broker. Some properties such as timestamps or message IDs will change, however.
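A minimal sketch of such a drain route, assuming the old and new brokers run side by side on placeholder URLs and the queue is named MY.QUEUE:

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class DrainOldBroker {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Register one JMS component per broker (placeholder broker URLs).
        context.addComponent("oldmq", ActiveMQComponent.activeMQComponent("tcp://oldhost:61616"));
        context.addComponent("newmq", ActiveMQComponent.activeMQComponent("tcp://newhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Consume every message from the old broker's queue and send it to the new one.
                from("oldmq:queue:MY.QUEUE").to("newmq:queue:MY.QUEUE");
            }
        });

        context.start();
        Thread.sleep(60000); // give the route time to drain the queue, then shut down
        context.stop();
    }
}

Repeat the from/to pair for each queue that needs to be migrated.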

Glassfish Bean Validation weird error

I am using Bean Validation in my application. When there are no constraint validation errors, everything works nicely. Every time there is a validation error, GlassFish throws the following exception:
Caused by: java.lang.ClassNotFoundException: javax.validation.groups.Default: java.net.MalformedURLException: Unknown protocol: osgi
at com.sun.corba.ee.impl.util.JDKBridge.loadClassM(JDKBridge.java:325)
at com.sun.corba.ee.impl.util.JDKBridge.loadClass(JDKBridge.java:228)
at com.sun.corba.ee.impl.javax.rmi.CORBA.Util.loadClass(Util.java:640)
at com.sun.corba.ee.impl.util.RepositoryId.getClassFromType(RepositoryId.java:628)
at com.sun.corba.ee.impl.orbutil.RepIdDelegator.getClassFromType(RepIdDelegator.java:169)
at com.sun.corba.ee.impl.encoding.CDRInputStream_1_0.readClass(CDRInputStream_1_0.java:1439)
The bean-validation.jar is present in the glassfish/modules folder, and startup doesn't throw any exceptions related to validation.
PS: note that we are using remote beans over CORBA.
Reproducible on GlassFish 3.1.2.2 and 3.1.1.
I traced this problem to serialization of the ConstraintViolationException through CORBA. Somehow the bean-validation module is not loaded properly by OSGi, and the javax.validation.groups.Default class cannot be resolved. As a quick workaround, the ConstraintViolationException is intercepted and never sent through CORBA; instead, the validation error information is gathered into a custom exception class that can actually be serialized through the remote services. A sketch of such an interceptor follows.
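Here is a minimal sketch of that kind of interceptor; RemoteValidationException is a hypothetical custom exception that carries only strings, so nothing from javax.validation has to cross the CORBA boundary:

import java.util.ArrayList;
import java.util.List;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;

public class ConstraintViolationMappingInterceptor {

    // Apply to the remote bean with
    // @Interceptors(ConstraintViolationMappingInterceptor.class).
    @AroundInvoke
    public Object mapViolations(InvocationContext ctx) throws Exception {
        try {
            return ctx.proceed();
        } catch (ConstraintViolationException e) {
            // Copy only plain strings so the exception serializes cleanly over CORBA.
            List<String> messages = new ArrayList<String>();
            for (ConstraintViolation<?> violation : e.getConstraintViolations()) {
                messages.add(violation.getPropertyPath() + ": " + violation.getMessage());
            }
            throw new RemoteValidationException(messages);
        }
    }
}

// Hypothetical custom exception: holds only Strings, so no javax.validation
// classes need to be deserialized on the client side.
class RemoteValidationException extends Exception {
    private final List<String> violations;

    RemoteValidationException(List<String> violations) {
        super("Validation failed: " + violations);
        this.violations = violations;
    }

    public List<String> getViolations() {
        return violations;
    }
}

With this applied to the remote bean, the CORBA layer only ever sees RemoteValidationException and the original ConstraintViolationException stays on the server side.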