Error generating thread dump from console in WebLogic

I am trying to generate a thread dump from the WebLogic console (Server -> ... -> Monitoring -> Threads -> Dump Thread Stacks).
I am getting the following message: Server must be running before thread stacks can be displayed.
But when I try to generate a thread dump using kill -3 <PID>, it gets generated.
OS: CentOS
WebLogic Server version: 10.3.6.0
Can anyone please help me understand why the thread dump does not get generated from the console, and why I am getting the message saying the server must be running?
NOTE: The server is in the RUNNING state.

As you are executing the thread dump command from the console, there might be an issue with communication between the Admin Server and the managed server.
The console uses WLST to capture thread dumps, and before generating them it checks the managed server's status. The Admin Server may be unable to get the current state of the managed server, hence you're seeing the error.
The recommended way to take thread dumps is an OS command (kill -3 <PID>) or the JDK tools: jstack for HotSpot and jrcmd for JRockit. Thread dumps taken from the console might not have lock-related information and might get truncated if the dump is too long.
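For example, assuming the server's process id is <PID> (the output paths are only placeholders):
kill -3 <PID>                                      # dump is written to the server's stdout/stderr log
jstack <PID> > /tmp/threaddump.txt                 # HotSpot JDK
jrcmd <PID> print_threads > /tmp/threaddump.txt    # JRockit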

I guess you were using JDK 7. This is a known bug in WLS 10.3.6.0 when using JDK 7. You can either downgrade to JDK 6 or patch WebLogic.

Related

Mule-EE Windows service not starting

I was trying to start the Mule MMC and mule-ee using the start launcher provided in the Mule MMC distribution package. When I run start.bat, it says that the Mule instance is already running.
When I checked the Windows processes I was not able to see any related Java process running, and in Services I could see mule-ee in the stopped state. When I try to start the mule-ee service, it says that the file path cannot be identified.
I even tried to remove the service, which was also not possible.
Could you please help in finding what the issue might be?
Regards
Arun
I have resolved the issue temporarily by deleting the mule-ee service.
Below is the command used :
sc delete mule-ee
Regards
Arun
The same issue occurred for me as well. I found the port on which the Mule instance was running, killed that process, and ran start.bat. I am not sure that mule-ee is the problem.
My guess is that you did not stop your Mule and MMC instances before shutting down the PC.
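A rough sketch of that approach on Windows (the port and PID below are placeholders for whatever your Mule instance is using):
netstat -ano | findstr :<port>      REM note the PID in the last column
taskkill /PID <pid> /F              REM kill the stale process, then re-run start.bat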

Trying to view stack of unresponsive JVM: both jvisualvm and jconsole fail to connect

I have a Java program on my local machine that becomes unresponsive after some time and appears to freeze without making further progress. I guess it blocks somewhere (it is accessing remote resources over both HTTP and JDBC, so a blocking situation is likely). I am trying to connect to it to see a view of the main thread's stack, so as to understand where the block occurred. Both jvisualvm and jconsole list the JVM in question (among others running on my system), but both fail to connect.
jconsole balks with "connection failed" (even when I try the insecure option).
jvisualvm appears to connect, but when I hit the 'sampler' tab to see the stack it complains with the error shown in the screenshot below:
The thing is, I am using the same utilities (jconsole and jvisualvm) to connect to other JVMs on my system, which I have invoked without using any of the JMX options mentioned in this answer, and I don't have any issues. How can I get the stack of this unresponsive JVM to see where it blocks?
I faced a similar issue today with a JVM that was completely stuck, and I was unable to properly attach jconsole/jvisualvm to it. kill -3 <PID> was also unsuccessful (no thread dump).
I was able to trigger a core dump of the JVM using kill -11 <PID> and feed that into jstack as follows: jstack /path/to/java /path/to/core.file. From the jstack output I was able to extract some useful stack information.
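Something like the following, assuming core dumps are enabled (ulimit -c unlimited) and with the paths as placeholders:
kill -11 <PID>                                         # SIGSEGV crashes the JVM and lets it write a core file
jstack /path/to/java /path/to/core.file > /tmp/stuck-threads.txt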
You could just collect a thread dump with kill -3 <PID>.
This will show you all the threads and where they are blocked.

How to swap a server in and out of a cluster during runtime

I am implementing session replication in my application. This is an old application.
I made all the changes and now need to test the server switch and confirm that the objects in the session are properly carried over to the other server in the server list.
I have 1 Admin Server and 2 managed servers, so the cluster is made up of the 2 managed servers.
While testing I always have to bounce a server and then test the flow of my application. This process is very time consuming, so I am looking for another way to swap a server in and out of the cluster during runtime. I asked on the Oracle support website, but they said the only way is to bounce the server.
How can I write a script for this?
Is there a parameter in WebLogic or the wlproxy plugin config file that helps with this switch?
Your help is appreciated.
Using the WebLogic Scripting Tool (WLST) in script mode, you can write a script to automate the shutdown/startup of the managed server that you would like to remove temporarily from the cluster.
You create a file with a .py extension which contains the WebLogic commands that you would like to run.
shutdown.py:
connect('username','password','t3://adminIP:port')
shutdown('servername')
disconnect()
startup.py:
connect('username','password','t3://adminIP:port')
start('servername')
disconnect()
To run the script from the command line:
java weblogic.WLST c:\myscripts\shutdown.py
You can put this line in a shell/batch script.
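Note that weblogic.WLST needs weblogic.jar on the classpath. A common way to set this up (assuming a standard 10.3.x install layout, so treat the path as an assumption) is to source the environment script first:
. $WL_HOME/server/bin/setWLSEnv.sh        # sets CLASSPATH for WLST (use setWLSEnv.cmd on Windows)
java weblogic.WLST /path/to/shutdown.py
java weblogic.WLST /path/to/startup.py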
Another way is to write a Java program or an Ant script that invokes the commands using the weblogic.jar file that comes with WebLogic.
If you change the state of a WebLogic managed server from RUNNING to ADMIN mode, you can also test session replication that way.
You can do this from the admin console by selecting the managed server, going to the Control tab, and changing the state of the server to Admin. You can change it back to Running from the same place.
Using WLST you can use the suspend and resume commands:
http://docs.oracle.com/cd/E11035_01/wls100/server_start/server_life.html
http://docs.oracle.com/cd/E14571_01/web.1111/e13813/quick_ref.htm
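A minimal WLST sketch of this (the server name and connection details are placeholders for your environment):
connect('username','password','t3://adminIP:port')
suspend('managedServerName')    # RUNNING -> ADMIN; new requests fail over to the other server
resume('managedServerName')     # ADMIN -> RUNNING
disconnect()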
Suspending and resuming managed servers is quicker than shutting them down and restarting them again.
I have tested this at my end and it works fine, i.e. when I change the state to Admin, my request goes to the other managed server and the session is also replicated.
I used the sample WLS cluster replication example available in the WLS installation.

Jconsole randomly stops connecting

We have a JBoss 7 instance running and hosting a web application. JMX remote has been turned on with username/password authentication and we are able to connect to it fine. Kindly note we are using jboss/bin/jconsole.bat to connect.
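(For reference, connecting this way typically means launching jconsole via jboss/bin/jconsole.bat, so the remoting-jmx client jars are on the classpath, and entering a remote process URL of the form below with a management-realm user; host and port are placeholders, 9999 being the usual native management port on AS 7:
service:jmx:remoting-jmx://<host>:9999 )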
However, at times we notice that after the following 2 cases it stops allowing any more connections to JMX unless we restart the JBoss server. The cases are:
1) we attempt a heap dump of the JVM using jconsole
2) we invoke a softreset method on a c3p0 datasource object that has been exposed via Spring JMX
It does not necessarily stop working after either of the 2: at times it stops taking new connections after one heap dump attempt, and at times only after 3-4 successful attempts.
Any clue on this random behaviour of jconsole?
I think you were bitten by a connection leak bug that AS 7.1.x had; it is fixed in the 7.2.x versions.
I would recommend you take EAP 6.1.0.Alpha1 (same as 7.2.0.Final) and try again.
If I recall correctly this was the original issue https://issues.jboss.org/browse/REMJMX-45

How to fix the ZooKeeper error for HBase

The main OS is Windows 7 64-bit. I used VM Player to create two CentOS 5.6 VMs; the network connection is bridged. I installed HBase on both CentOS systems, one as the master and the other as the slave. When I enter the shell and run status 'details':
The error from the master is
zookeeper.ZKConfig: no valid quorum servers found in zoo.cfg
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: An error is preventing HBase from connecting to ZooKeeper
And the error from slave is
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
Please give me some suggestions.
Thanks a lot
Check if the following is in your .bashrc; if not, add these lines and restart all HBase services (do not forget to run the exports manually in your current shell as well). That did it for me with a pseudo-distributed installation. My problem (and maybe yours as well) was that HBase wasn't detecting its configuration.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HBASE_CONF_DIR=/etc/hbase/conf
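To sanity-check that the configuration HBase now picks up actually defines a ZooKeeper quorum, something like the following should print a value on both machines (the path matches the export above; the expected value is whatever host you configured as the quorum):
grep -A1 hbase.zookeeper.quorum /etc/hbase/conf/hbase-site.xml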
I see this very often on my machine. I don't have a failsafe cure, but I end up running stop-all.sh and deleting every place where Hadoop and DFS (it's a DFS failure) store their temp files. It seems to happen after my computer goes to sleep while DFS is running.
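A rough sketch of that cleanup, assuming the default temp locations (check hadoop.tmp.dir / hbase.tmp.dir in your configs before deleting anything, since this wipes DFS state):
stop-all.sh
rm -rf /tmp/hadoop-$USER /tmp/hbase-$USER
hadoop namenode -format      # usually required after wiping the DFS dirs
start-all.sh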
I am going to experiment with single-user mode to avoid this. I don't need distribution while developing.