asadmin start-domain fails when remote JMS queue is unreachable - glassfish

I have two servers, A and B, each running a GlassFish 3.1.2.2 application server. Both use a JMS queue for communication, which has worked fine so far. If the network connection breaks for any reason, I can see in the logs of server B (the one configured to connect to A's remote queue) that it tries to reconnect, and it always succeeds as soon as A is up again.
The problem is that if I try to restart the GlassFish instance on B while server A is unreachable, the startup process fails after some retries and remains stuck in an undefined/unusable state: the Java process is started and some ports are open, but the applications are not started, not even the administration console.
IMHO the GlassFish startup process should not wait for the queues to connect; this should happen in some kind of background process.
Has anyone experienced something similar? Is there anything I can configure or tune to fix this behaviour?

Never mind, it seems to have fixed itself :(
After restarting the computer, removing the deployed EAR, and deploying it again, it just worked. I haven't experienced this behaviour since then.
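For anyone who hits the same hang: GlassFish's JMS service exposes reconnect tunables that may be worth a look. The sketch below is untested on 3.1.2.2, and the exact dotted attribute names should be verified on your own install first:
asadmin get server.jms-service.*
asadmin set server.jms-service.reconnect-enabled=true
asadmin set server.jms-service.reconnect-attempts=3
asadmin set server.jms-service.reconnect-interval-in-seconds=5
The first command lists the current JMS service settings; the three set commands (the values are only an example) bound the reconnect loop so a broken broker connection fails fast instead of blocking. A domain restart is needed for the change to take effect.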

Related

Glassfish 4.1 does not stop

I have a problem where a GlassFish instance does not stop from the GUI or the command line. When I try, a message appears saying there is no server instance running, but there is: I can log in to localhost:4848. I have killed all the processes on port 8080 but the behavior is still the same. Any help would be much appreciated.
Thanks
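One thing worth trying when stop-domain and the GUI disagree about whether the domain is running (an untested suggestion; domain1 is the default domain name and may differ on your install):
asadmin stop-domain --kill domain1
The --kill option makes asadmin terminate the domain's JVM directly instead of asking it to shut down. Failing that, jps -l should list the GlassFish process (its main class is com.sun.enterprise.glassfish.bootstrap.ASMain) so it can be killed by PID.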

How to use rotate_logs on a log file that is 80+ GB for RabbitMQ on Windows Server

I need to run rabbitmqctl rotate_logs on a RabbitMQ log file that is over 80 GB in size. The first time I tried to run this, it froze Rabbit and no messages could be received. The freeze lasted 20 minutes before I had to kill the command and restart the RabbitMQ server.
This is a production server, and completing this in a small amount of time without losing messages or killing the broker would be optimal.
Would it be possible to shut down the service, move the current log file to another location, restart the service, and then run the rotate_logs command?
I'm fairly new to RabbitMQ and I am not sure what the best way to handle this would be.
This is installed on a Windows Server 2008 machine as a service for a heavy-traffic production site (however, the message queue itself has a small load and only affects the administrative side of things).
Any help or insight would be appreciated.
I ran into a similar situation, but with only about 4 GB of log file instead of 80.
The workaround I used was pretty much what you suggested: stop the service, move the log file, and restart the service as quickly as possible.
For me specifically, instead of moving the file while the service was stopped, I just renamed it, and I wrote a command-line script to do the work for me (a sketch follows below).
This allowed me to stop the service, rename the file, and restart the service in a matter of seconds.
Once the service was back up and running, I was free to move, rename, or otherwise deal with the large log file as needed.
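A minimal Windows batch sketch of such a script; the service name, hostname, and log path are assumptions (the defaults are the RabbitMQ service and %APPDATA%\RabbitMQ\log), so check yours before running:
net stop RabbitMQ
ren "%APPDATA%\RabbitMQ\log\rabbit@MYHOST.log" rabbit@MYHOST.log.old
net start RabbitMQ
With the broker back up and writing to a fresh, small file, rotate_logs should return quickly, and the renamed 80 GB file can be moved or compressed at leisure.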

Jconsole randomly stops connecting

We have a JBoss 7 instance running and hosting a web application. JMX remote has been turned on with username/password authentication, and we are able to connect to it fine. Kindly note we are using Jboss/bin/jconsole.bat to connect.
However, at times we notice that after the following two cases it stops allowing any more JMX connections unless we restart the JBoss server. The cases are:
1) we attempt a heap dump of the JVM using JConsole
2) we invoke a softreset method on a c3p0 datasource object that has been exposed via Spring JMX
It does not always stop working after either of the two: at times it stops taking new connections after one heap dump attempt, at times only after 3-4 successful attempts.
Any clue on this random behaviour of JConsole?
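For reference, AS 7's bundled JConsole wrapper is needed because the server speaks the remoting-jmx protocol, whose classes are not in a stock JDK JConsole. A typical remote connection, with host and port as assumptions (9999 was the default native management port), looks like:
Jboss/bin/jconsole.bat
service:jmx:remoting-jmx://localhost:9999
The first line launches the wrapper so jboss-client.jar is on the classpath; the second is the URL to enter in JConsole's remote process dialog.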
I think you were bitten by a connection leak bug that AS 7.1.x had; it is fixed in the 7.2.x versions.
I would recommend you take EAP 6.1.0.Alpha1 (the same as AS 7.2.0.Final) and try again.
If I recall correctly, this was the original issue: https://issues.jboss.org/browse/REMJMX-45

How to debug ActiveMQ client?

I'm a fairly new user of ActiveMQ and I'm looking for a way to get detailed debug information on the client side of a queue connection. My problem is this: I have a server that is sending a message through a queue to a client. Using the admin web page associated with the broker, I can verify the following: the queue was created, there is a consumer associated with the queue, the message has been enqueued, the message has been dispatched, the dispatched queue size is 1, and the message has not been dequeued. This setup was working yesterday but mysteriously stopped working today, even though I restarted the activemq service. The log file at /var/log/activemq.log does not contain any useful information.
At this point I'm stumped. I'm assuming there is some sort of problem with the configuration, but it hasn't changed since yesterday. Does anybody have a suggestion about what my next step should be?
First of all, turn on debug (or even trace) logging in the broker, in conf/log4j.properties:
log4j.logger.org.apache.activemq=DEBUG
Restart the broker and re-run your scenario. The logging will hopefully provide you with some information.
JConsole is also a useful tool for monitoring the running broker.
Does your client use any message filters?
You can also enable remote debugging and then connect with an IDE.
To start remote debugging, execute
$ ACTIVEMQ_DEBUG=true bin/activemq
and then attach a remote debugger to port 5005.
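Since the question asks about the client side specifically: the ActiveMQ client library logs through the same logger names, so a log4j.properties on the client's classpath with org.apache.activemq at DEBUG or TRACE works there too. Wire-level command logging can additionally be switched on per connection via the transport's trace option; the host and port below are assumptions:
log4j.logger.org.apache.activemq.transport.TransportLogger=DEBUG
tcp://localhost:61616?trace=true
With both in place, every command the client sends and receives is written to its log, which usually shows whether the dispatched message ever reaches the consumer.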

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. And messages sent to either server all flow as expected. I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by setting the parameter below in weblogic-application.xml:
<start-mdbs-with-application>false</start-mdbs-with-application>
Setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot process.
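For context, a minimal weblogic-application.xml carrying just this flag might look like the sketch below; the namespace shown is the one used by the 10.3.x (11g) releases and varies by version:
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
    <start-mdbs-with-application>false</start-mdbs-with-application>
</weblogic-application>
The descriptor lives in the EAR's META-INF directory alongside application.xml.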
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
Although the link is for 8.1, this is still applicable in versions up through 12c and later.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in a warning state. Once JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.