How to connect to JMX service on Glassfish from within an EJB? - glassfish

I have a message-driven EJB deployed to a Glassfish 2.x system. When I get a message that causes an exception or isn't able to be sent or consumed, I would like to do one of the following things:
Pause the EJB's subscription to the Topic/Queue
Shut down the EJB itself
Cease consuming messages until I give an 'all clear' or something equivalent
This is all so that I can stop repeatedly throwing exceptions after calling context.setRollbackOnly() on the message.
I've tried connecting to the server via JMX, but from what I've seen in the documentation it looks like I'd have to persist:
username
password
jmx url
in my EJB somewhere. Can't I access the JMX server from within the EJB in Glassfish without having to know that?
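One option that avoids persisting the username, password and JMX URL is to skip the remote JMX connector entirely and look up an MBean server in-process, since the MDB runs in the same JVM as GlassFish. Below is a minimal sketch of that idea; the com.sun.appserv domain pattern and whatever pause/stop operation you would then invoke are assumptions to verify against your GlassFish 2.x installation:

import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class InProcessJmxLookup {

    // Collect every MBeanServer visible inside the container JVM, starting with
    // the platform server. No JMX URL, username or password is needed in-process.
    public List<MBeanServer> findServers() {
        List<MBeanServer> servers =
                new ArrayList<MBeanServer>(MBeanServerFactory.findMBeanServer(null));
        MBeanServer platform = ManagementFactory.getPlatformMBeanServer();
        if (!servers.contains(platform)) {
            servers.add(platform);
        }
        return servers;
    }

    // Dump the GlassFish admin MBeans. "com.sun.appserv:*" is the domain GlassFish
    // 2.x uses for many of its MBeans, but verify the names on your server before
    // invoking any operation on them.
    public void listGlassFishMBeans() throws Exception {
        ObjectName pattern = new ObjectName("com.sun.appserv:*");
        for (MBeanServer server : findServers()) {
            for (ObjectName name : server.queryNames(pattern, null)) {
                System.out.println(name);
            }
        }
    }
}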

Related

Consumer Proxy unable to pick up messages from queue due to service configuration in flux

The consumer proxy is not picking up messages from the queue. We have redeployed the service and restarted the servers, but it did not help. I am attaching the logs here.
<01-Mar-2019 10:39:53 o'clock GMT>
<01-Mar-2019 10:39:53 o'clock GMT>
According to Oracle support document 1573359.1:
CAUSE
The service has been re-deployed/changed while there were messages being processed. Review Doc ID 1571958.1 "OSB SBConsole Activation - Limitations for configuration or deployment changes in production" for other reasons that this error can occur.
SOLUTION
Stop consumption on the JMS queue, then delete and re-deploy the service.
Log in to the WebLogic Console
Expand Services -> Messaging -> JMS Modules and select the queue your service is interacting with.
Select the Control tab
For both production and consumption, select pause.
Wait a short while (5 minutes) and restart the queue
Re-deploy your Proxy Services
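If you prefer to script the pause/resume rather than click through the console, the same pause operations are exposed over JMX on WebLogic's domain runtime MBean server. A rough sketch follows; the host, port, credentials and ObjectName pattern are placeholders, and it assumes a WebLogic client JAR (e.g. wlthint3client.jar) on the classpath for the t3 protocol:

import java.util.Hashtable;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class PauseQueueViaJmx {

    public static void main(String[] args) throws Exception {
        // Placeholder admin server host, port and credentials.
        JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
                "/jndi/weblogic.management.mbeanservers.domainruntime");

        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            // The query pattern below matches every destination runtime; inspect the
            // actual ObjectNames (e.g. with JConsole) and narrow it to your queue.
            ObjectName pattern = new ObjectName("com.bea:Type=JMSDestinationRuntime,*");
            for (ObjectName destination : conn.queryNames(pattern, null)) {
                conn.invoke(destination, "pauseProduction", null, null);
                conn.invoke(destination, "pauseConsumption", null, null);
            }
        } finally {
            connector.close();
        }
    }
}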
If messages still persist, check config.xml and make sure that there is the correct number of applications with names starting with "ALSB". The correct number depends on the kind of services you have deployed: JMS request-response, plain JMS request, JMS topic, etc.
The easiest way to make sure that config.xml is correct is to do the following:
Delete all the JMS proxies from OSB configuration
Open the WLS console, go to "Deployments" and make sure that there are no applications named "_ALSB_xyz" deployed. If any are present, delete them.
Re-deploy JMS proxies
Alternatively, check Note 1382976.1 to locate the related deployments. Delete any application deployments starting with "ALSB" which are not related to any actively deployed JMS proxy service.

Ping Connection Pool failed for jmspool. Could not create connection. Please check the server.log for more details

I'm using GlassFish 3.1.2, ActiveMQ 5.1.1 and the ActiveMQ 5.8 resource adapter (activemq-rar-5.1.1). I have created a GlassFish cluster with 2 instances. I followed http://geertschuring.wordpress.com/2012/04/20/how-to-connect-glassfish-3-to-activemq-5/ for the initial deployment/configuration of the ActiveMQ resource adapter. After configuring the Connector Connection Pool via the GlassFish Admin Console and enabling Ping, an error occurs when Ping is executed: Ping Connection Pool for jms/ActiveMQConnectionFactory is Failed. Ping failed Exception - This pool is not registered with the runtime environment : jms/ActiveMQConnectionFactory. Please check the server.log for more details.
I'd appreciate your reply.
By any chance, is your GlassFish instance in a cluster? In our case it was failing with the same error message, and we figured out that if the instance is in a cluster it doesn't work, though it works if it is not in a cluster.
We also verified this by checking in the ActiveMQ admin console that the consumers were active and receiving messages, even though the ping command was failing.

asadmin start-domain fails when remote JMS queue is unreachable

I have 2 servers A and B running a glassfish 3.1.2.2 application server on them. Both use a JMS queue for communication, which works fine so far. If the network connection breaks for any reason, I can see in the logs of server B (the one configured to connect to the remote queue of A) that it tries to reconnect and is actually always successful in doing so as soon as A is up again.
But the problem is that if I try to restart the GlassFish instance on B while server A is unreachable, the startup process fails after some retries and remains stuck in a kind of undefined/unusable state, i.e. the Java process is started and some ports are open, but the applications are not started - not even the administration console.
IMHO the GlassFish startup process should not wait for the queues to connect; this should be done in some kind of background process.
Has anyone of you experienced something similar? Is there anything I can configure/tune to fix this behaviour?
Never mind, it seems to have fixed itself :(
After restarting the computer, removing the deployed EAR and deploying it again, it just worked. I haven't experienced this behaviour since then.

How to debug ActiveMQ client?

I'm a fairly new user of ActiveMQ and I'm looking for a way to get detailed debug information on the client side of a queue connection. My problem is this: I have a server that is sending a message through a queue to a client. Using the admin web page associated with the broker, I can verify the following: the queue was created, there is a consumer associated with the queue, the message has been enqueued, the message has been dispatched, the dispatched queue size is 1, the message has not been dequeued. This setup was working yesterday but mysteriously stopped working today even though I did a restart of the activemq service. The log file at /var/log/activemq.log does not contain any useful information.
At this point I'm stumped; I'm assuming that there is some sort of problem with the configuration, but it hasn't changed since yesterday. Does anybody have a suggestion about what my next step should be?
First of all, turn on debug (or even trace) logging in the broker in conf/log4j.properties:
log4j.logger.org.apache.activemq=DEBUG
Restart the broker and re-run your scenario. The logging will hopefully provide you with some information.
JConsole is also a useful tool for monitoring the running broker.
Does your client use any message filters?
You can also enable remote debugging and then connect with an IDE.
To start remote debugging, execute
$ ACTIVEMQ_DEBUG=true bin/activemq
and then attach a remote debugger to port 5005.
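On the client side of the connection, it can also help to register a JMS ExceptionListener so asynchronous failures are not silently swallowed, and to enable wire-level tracing on the transport. A small sketch of both ideas; the broker URL and the trace=true option are assumptions, and trace output only appears if org.apache.activemq.transport logging is turned up in the client's log4j configuration:

import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

import org.apache.activemq.ActiveMQConnectionFactory;

public class DebuggableConsumer {

    public static void main(String[] args) throws Exception {
        // trace=true asks the transport to log every command it sends and receives.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616?trace=true");

        Connection connection = factory.createConnection();

        // Surface asynchronous client-side problems instead of losing them.
        connection.setExceptionListener(new ExceptionListener() {
            public void onException(JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start();
        // ... create the session and consumer as usual ...
    }
}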

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. Messages sent to either server flow as expected; I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below:
<start-mdbs-with-application>false</start-mdbs-with-application>
in weblogic-application.xml. Setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot-up process.
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This remains applicable through 12c and later versions.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in a warning state. Once JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.