JConsole randomly stops connecting - JBoss 7.x

We have a JBoss 7 instance running and hosting a web application. JMX remote has been turned on with username/password authentication, and we are able to connect to it fine. Kindly note that we are using Jboss/bin/jconsole.bat to connect.
However, at times we notice that it stops allowing any more connections to JMX unless we restart the JBoss server, after either of the following 2 cases:
1) We attempt a heap dump of the JVM using JConsole
2) We invoke a softReset method on a c3p0 datasource object that has been exposed via Spring JMX
It does not always stop working after either of these actions: at times it stops taking new connections after a single heap dump attempt, at other times only after 3-4 successful attempts.
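For reference, the heap dump button in JConsole ultimately invokes the dumpHeap operation on the com.sun.management:type=HotSpotDiagnostic MBean. A minimal sketch of triggering the same operation programmatically (HotSpot JVMs only; the output path is a placeholder) looks like this:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            // Same MBean operation that JConsole's heap dump triggers
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName diag = new ObjectName("com.sun.management:type=HotSpotDiagnostic");
            // dumpHeap(outputFile, dumpOnlyLiveObjects)
            server.invoke(diag, "dumpHeap",
                    new Object[] { "/tmp/heap.hprof", Boolean.TRUE },
                    new String[] { "java.lang.String", "boolean" });
        }
    }

Over a remote connection, JConsole issues the same invoke call against the MBeanServerConnection obtained from the JMX connector.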
Any clue on this random behaviour of jconsole?

I think you were bitten by a connection-leak bug that AS 7.1.x had; it is fixed in the 7.2.x versions.
I would recommend that you take EAP 6.1.0.Alpha1 (same as 7.2.0.Final) and try again.
If I recall correctly, this was the original issue: https://issues.jboss.org/browse/REMJMX-45
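For completeness, a minimal client-side sketch of connecting to the AS 7 remoting-jmx endpoint with credentials and explicitly closing the connector afterwards (host, port, and credentials are placeholders, and the jboss-client libraries must be on the classpath; REMJMX-45 was a server-side leak, but closing the connector explicitly is still good hygiene):

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxClient {
        public static void main(String[] args) throws Exception {
            // AS 7 exposes JMX over the native management port (9999 by default)
            JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999");
            Map<String, Object> env = new HashMap<>();
            env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "secret" });
            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                System.out.println("MBean count: " + mbsc.getMBeanCount());
            } finally {
                connector.close(); // release the remoting connection explicitly
            }
        }
    }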

Related

Behavior of WL.server.createEventSource on a Worklight Cluster Environment

Let's assume I have a cluster of 2 Worklight servers sharing the same WL runtime.
On that runtime, I've installed an application with an adapter that has a createEventSource function,
just like in this IBM article:
https://www.ibm.com/developerworks/community/blogs/worklight/entry/configuring_a_polling_event_source_to_send_push_notifications?lang=en
My question is: what will happen in a cluster environment?
Will duplicate work ensue?
In other words, will both of my WL Servers be polling for events?
Or does that functionality write a task to the WL DB that the WL Servers poll regularly to check for work if no instance is taking care of it, so that only one server at a time would be "the event source"?
I'm working with IBM Worklight 6.2 and WebSphere Liberty Profile 8.5.5.
Thanks in advance!
Here's my attempt to answer this after some consultation:
My question is: what will happen in a cluster environment? Will
duplicate work ensue? In other words, will both of my WL Servers be
polling for events?
While the Worklight servers share the same runtime, they are still considered to be 2 separate instances. This means that each of them will attempt to perform the polling action. This is considered OK.
However, it is important to note that the backend system being polled should be smart enough to handle a situation where 2 polling attempts are made for the same message.
If the backend doesn't handle polling properly, the same message can be pulled more than once. This is true even if you have a single event source running, so it is something to keep in mind.
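To illustrate that last point, a backend could guard against duplicate pulls by recording which message IDs have already been claimed. The class below is a hypothetical, in-memory sketch (a real backend would need shared, persistent state rather than a per-process set):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical guard: each message is handed out at most once, even if
    // two Worklight servers poll for the same message at the same time.
    public class DeliveredMessageGuard {
        private final Set<String> claimedIds = ConcurrentHashMap.newKeySet();

        // Returns true only for the first poller to claim this message ID.
        public boolean claim(String messageId) {
            return claimedIds.add(messageId);
        }
    }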

asadmin start-domain fails when remote JMS queue is unreachable

I have 2 servers, A and B, each running a GlassFish 3.1.2.2 application server. Both use a JMS queue for communication, which has worked fine so far. If the network connection breaks for any reason, I can see in the logs of server B (the one configured to connect to the remote queue of A) that it tries to reconnect, and it is in fact always successful in doing so as soon as A is up again.
But the problem is that if I try to restart the GlassFish instance on B while server A is unreachable, the startup process fails after some retries and remains stuck in a kind of undefined/unusable state, i.e. the Java process is started and some ports are open, but the applications are not started - not even the administration console.
IMHO the GlassFish startup process should not wait for the queues to connect; this should be done in some kind of background process.
Has anyone experienced something similar? Is there anything I can configure/tune to fix this behaviour?
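I have not found a GlassFish switch for this, but as a workaround idea the application itself can defer the JMS connection to a background thread with retries, so deployment does not block while the remote broker is unreachable. A rough sketch against the standard javax.jms API (the JNDI name is a placeholder):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.naming.InitialContext;

    // Rough sketch: connect to JMS in the background with retries instead of
    // blocking application startup while the remote broker is unreachable.
    public class BackgroundJmsConnector implements Runnable {
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    InitialContext ctx = new InitialContext();
                    ConnectionFactory cf =
                            (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory"); // placeholder
                    Connection connection = cf.createConnection();
                    connection.start();
                    // hand the connection to the rest of the application here
                    return;
                } catch (Exception e) {
                    try {
                        Thread.sleep(10_000); // retry every 10 seconds until the broker is back
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        }

        public static void start() {
            Thread t = new Thread(new BackgroundJmsConnector(), "jms-reconnect");
            t.setDaemon(true);
            t.start();
        }
    }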
Never mind, it seems to have fixed itself :(
After restarting the computer, removing the deployed EAR and deploying it again, it just worked. I haven't experienced this behaviour since then.

How to handle human tasks without a Mina server in jBPM 5

I'm using an Apache Mina server to process the human tasks in my workflow.
But when too many processes are launched, the Mina server occupies much of the JVM heap and I can't progress further.
One instance of "org.apache.mina.transport.socket.nio.NioSocketSession" loaded by
"org.jboss.classloader.spi.base.BaseClassLoader # 0xb9b10d58" occupies 685,361,840 (68.96%) bytes.
The memory is accumulated in one instance of "java.lang.Object[]" loaded by "<system class loader>".
1. So is there any other alternative to Mina?
2. How can I handle my human task without Mina?
Kindly suggest a solution.
There are two alternatives to Apache Mina currently supported in jBPM 5.2:
- LocalTaskService: runs locally, next to your process engine
- HornetQ: uses HornetQ messages for communication between client and server
Kris
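For reference, wiring the LocalTaskService in jBPM 5.2 looks roughly like the sketch below. This is an approximation from the 5.2 API (class names moved around between 5.x releases, and the persistence unit name and the pre-existing ksession are assumptions):

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;
    import org.drools.SystemEventListenerFactory;
    import org.jbpm.process.workitem.wsht.LocalHTWorkItemHandler;
    import org.jbpm.task.service.TaskService;
    import org.jbpm.task.service.local.LocalTaskService;

    // In-JVM task service: no Mina socket server, so no NioSocketSession buildup
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.task");
    TaskService taskService =
            new TaskService(emf, SystemEventListenerFactory.getSystemEventListener());
    LocalTaskService localTaskService = new LocalTaskService(taskService);

    // Register the local handler on your existing StatefulKnowledgeSession
    LocalHTWorkItemHandler handler = new LocalHTWorkItemHandler(localTaskService, ksession);
    ksession.getWorkItemManager().registerWorkItemHandler("Human Task", handler);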

ODBC connection re-establishes after application pool recycle

I have a web service application which connects to databases through the ODBC SQL Native Client and SQL Server drivers. All of a sudden the application stopped connecting to the database, throwing error 08001. But when I did an application pool recycle it started working. Now it is happening intermittently and has become a headache for me. It can't be a memory problem, as it once happened immediately after an app pool recycle, but again got corrected after one more app pool recycle. I don't know what is happening, as none of the error logs give any clue :(. Please help me...
The first step is to be able to diagnose what is going on; you cannot fix what you cannot measure. To do this, I would enable pooling in the data source console for the driver, then add the counters to Performance Monitor to see what the connection pool is doing.
I'm not sure what the relationship between IIS application pool processes and ODBC connections is, but we are seeing some unexpected behaviour in this area. Also, the ODBC connection performance counters are visible if I connect to the driver through a locally installed console application, but I cannot see any performance counter activity for connections made via the web service app pool in IIS. Odd!?

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. Messages sent to either server all flow as expected; I just don't know why these deployments show in a Warning state.
WebLogic 11g
This can be avoided by using the parameter below in weblogic-application.xml:
<start-mdbs-with-application>false</start-mdbs-with-application>
Setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot-up process.
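For context, the element goes under <ejb> in the EAR's META-INF/weblogic-application.xml; a minimal sketch (the namespace URI shown may differ by WebLogic version):

    <weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
        <ejb>
            <!-- defer MDB startup until the server's listen port is open -->
            <start-mdbs-with-application>false</start-mdbs-with-application>
        </ejb>
    </weblogic-application>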
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This is applicable to versions through 12c and later.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in warning state. Once the JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.