Monitoring a JVM with SNMP - jvm

I'm using SNMP to monitor some servers (win2k3 mostly), and while searching the internet I found a MIB published by Oracle for monitoring a JVM, JVM-MANAGEMENT-MIB. What I have done so far to use it:
Configure the JVM with snmp.acl and management.properties
Compile the JVM-MANAGEMENT-MIB with mibcc and replace the mib.bin.
With those steps I think I should be good to go. So I wrote a Java program with snmp4j, but when I try to query an OID from the JVM MIB I get a "Request timed out" error.
The weird part is that I only compiled the JVM-MANAGEMENT-MIB, so I should only have access to those OIDs, right? But that is not the case: I still have access to CPU usage, number of processes, and so on.
So what did I miss? Thanks.

Add the following three parameters to the JVM and that should be all you need (the interface to bind to, disabling the ACL check, and the agent port):
-Dcom.sun.management.snmp.interface=127.0.0.1 \
-Dcom.sun.management.snmp.acl=false \
-Dcom.sun.management.snmp.port=16500 \
You do not need to write a Java program to verify that it works; net-snmp or any MIB browser is enough:
snmpwalk -v2c -c public 127.0.0.1:16500 SNMPv2-SMI::enterprises.42.2.145.3.163.1.1.4
SNMPv2-SMI::enterprises.42.2.145.3.163.1.1.4.2.0 = STRING: "Java HotSpot(TM) 64-Bit Server VM"
SNMPv2-SMI::enterprises.42.2.145.3.163.1.1.4.3.0 = STRING: "Sun Microsystems Inc."
SNMPv2-SMI::enterprises.42.2.145.3.163.1.1.4.4.0 = STRING: "20.10-b01"
SNMPv2-SMI::enterprises.42.2.145.3.163.1.1.4.5.0 = STRING: "Java Virtual Machine Specification"
SNMPv2-SMI::enterprises.42.2.145.3.163.1.1.4.6.0 = STRING: "Sun Microsystems Inc."
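If you do want to query it from Java with snmp4j (as in the question), a minimal sketch might look like this. It assumes the agent listens on 127.0.0.1:16500 with community "public" and queries the VM-name OID shown in the snmpwalk output above; the class name is just an example.

import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.UdpAddress;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class JvmSnmpGet {
    public static void main(String[] args) throws Exception {
        // enterprises.42.2.145.3.163.1.1.4.2.0 -> VM name, as seen in the snmpwalk above
        OID vmNameOid = new OID("1.3.6.1.4.1.42.2.145.3.163.1.1.4.2.0");

        DefaultUdpTransportMapping transport = new DefaultUdpTransportMapping();
        Snmp snmp = new Snmp(transport);
        transport.listen();

        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));
        target.setAddress(new UdpAddress("127.0.0.1/16500"));
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(1);
        target.setTimeout(3000);

        PDU pdu = new PDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(vmNameOid));

        ResponseEvent event = snmp.send(pdu, target);
        if (event.getResponse() == null) {
            System.out.println("Request timed out");   // same symptom as in the question
        } else {
            System.out.println(event.getResponse().get(0));
        }
        snmp.close();
    }
}

If this still times out while snmpwalk succeeds, the problem is on the snmp4j side (wrong port, community, or SNMP version) rather than in the JVM agent configuration.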

Related

AttachNotSupportedException when trying to start a JFR recording

I'm receiving AttachNotSupportedException when trying to start a JFR recording.
It was working normally until now.
jcmd 3658 JFR.start maxsize=100M filename=jfr_1.jfr dumponexit=true settings=profile
Output:
3658:
com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
What might be happening?
OS: Oracle Linux Server release 6.7
$ java -version
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
One of the probable reasons is that the /tmp/.java_pid1234 file has been deleted (where 1234 is the PID of the Java process).
Tools that depend on the Dynamic Attach Mechanism (jstack, jmap, jcmd, jinfo) communicate with the JVM through a UNIX domain socket created in /tmp.
This socket is created by the JVM lazily on the first attach attempt, or eagerly at JVM initialization if the -XX:+StartAttachListener flag is specified.
Once the file backing the socket is deleted, tools cannot connect to the target process, and unfortunately there is no way to re-create the communication socket without restarting the JVM.
For the description of Dynamic Attach Mechanism see this answer.
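If you want to check this from Java, a minimal sketch using the Attach API looks like the following (on JDK 8 it needs tools.jar on the classpath, and it assumes the default /tmp socket location):

import com.sun.tools.attach.VirtualMachine;
import java.io.File;

public class AttachCheck {
    public static void main(String[] args) throws Exception {
        String pid = args[0];
        // The UNIX domain socket file that the attach tools look for
        File socket = new File("/tmp/.java_pid" + pid);
        System.out.println(socket + " exists: " + socket.exists());

        // Attaching triggers the lazy creation of the socket on the first attempt;
        // this fails with AttachNotSupportedException in the same cases as jcmd
        VirtualMachine vm = VirtualMachine.attach(pid);
        System.out.println("Attached to " + vm.id());
        vm.detach();
    }
}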
From personal experience: this problem also occurs when the development environment is on a different partition than the operating system. For example, the operating system partition is EXT4 while the development environment partition (where the JVM runs) is NTFS. The problem occurs because the file "/tmp/.java_pid6024" cannot be created (where 6024 is the PID of the Java process).
To work around this, add -XX:+StartAttachListener to the startup options of the JVM or application server.
Another possibility: your app is running under systemd with 'PrivateTmp=yes'. This prevents the /tmp/.java_pid1234 file from being found.

Sat4j Remote Control window doesn't open

What happens:
I execute the following command.
java -jar sat4j-sat.jar -remote
No window opens, and I get the same console output as without the -remote flag, which begins:
c SAT4J: a SATisfiability library for Java (c) 2004-2013 Artois (...)
c This is free software under the dual EPL/GNU LGPL licenses.
c See www.sat4j.org for details.
c version 2.3.4.v20130419
c java.runtime.name OpenJDK Runtime Environment
c java.vm.name OpenJDK Client VM
c java.vm.version 24.65-b04
c java.vm.vendor Oracle Corporation
c sun.arch.data.model 32
c java.version 1.7.0_65
c os.name Linux
c os.version 3.2.0-4-686-pae
(...)
What is expected:
From readme.txt:
To run sat4j with on the fly configuration:
java -jar sat4j-sat.jar -remote
These instructions should open a java window named Remote Control. We
assume that the 1.5 version of the java command is in your path. If
it isn’t, then you should either specify the complete path to the java
command or update your PATH environment variable as described in the
installation instructions for the Java 2 SDK.
Other details
I have tried multiple versions of the library, up to 2.3.4.
My system is Debian 7 with Gnome 2.
My default Java installation is OpenJDK 1.7.0_65.
My secondary Java installation is Oracle Java 1.8.0_45 (with the same issue).
Gnuplot 4.6 is installed.
My first machine has a 32 bit dual core CPU with 2GB of RAM.
My second machine has a 64 bit quad core CPU with 8GB of RAM with nearly identical software.
Question
Has anyone used SAT4J's remote control feature? What is the problem with my method?
Update
On another machine (64-bit Debian 7) the window opens. After starting, the dat files are created, but plotting does not start.
Update 2
I ran the generated instance.dimacs-gnuplot.gnuplot file manually from a gnuplot terminal, and I got the message unknown or ambiguous terminal type for the x11 type. I installed the gnuplot-x11 package, and now it works on the workplace machine: I can see the diagrams (wow!). Unfortunately on my home machines the Remote Control window still doesn't open.
The -remote parameter is used to display the remote control, i.e. to set up the various parameters of the solver.
If you want to continuously monitor what the solver is doing, you need to use the -r parameter in conjunction with it.
So the complete command line should be:
java -jar sat4j-sat.jar -r -remote file.cnf
You can get a fresh snapshot of Sat4j Sat on our continuous integration server:
http://bamboo.ow2.org/browse/SAT4J-DEF2-41/artifact/JOB1/nightly_build/
This might solve the issue you encountered with the 2.3.4 release.
Cheers,
Daniel

How do I get the following information out of a Solaris SPARC machine?

- Processor ID
- Baseboard manufacturer
- Serial number of the BIOS
For x86 Solaris I got it from smbios, but when I run smbios on Solaris SPARC it gives me an error message:
smbios: failed to load SMBIOS: System does not export an SMBIOS table
I would also like to get the information programmatically.
Any help would be appreciated.
This is no surprise. SPARC machines do not use a BIOS but what is called the OpenBoot PROM.
Here are some commands that will help you gather information from your machine:
prtdiag
prtconf
prtfru
psrinfo -v
sneep
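Since you also want the information programmatically, one option (just a sketch, assuming the commands above are on the PATH of the target machine) is to run them from your own code and parse their output, e.g. from Java:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class SparcInfo {
    // Runs a command and returns its standard output (and standard error) as one string.
    static String run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run("psrinfo", "-v"));  // processor details
        System.out.println(run("prtconf"));        // system configuration
        System.out.println(run("prtfru"));         // FRU data, including serial numbers where available
    }
}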

Find the command the JVM was started with at run time (1.6.0_12)

Full story:
I am trying to start up an instance of Hudson with a larger memory allocation, and I'm currently using scripts owned by root that I can't modify directly to pass arguments. However, the script passes the $JAVA_ARGS variable when starting up the service. I have exported the required parameters in JAVA_ARGS, but the application still appears to be bound by the old memory restrictions.
Question:
Is there a way to find out which command-line parameters were used to start up the instance? More specifically, I'm looking for the values that were passed (if any) to -Xmx and -Xms.
java version "1.6.0_12"
Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
Java HotSpot(TM) 64-Bit Server VM (build 11.2-b01, mixed mode)
After some searching I came across a pretty simple solution (which I'm a little embarrassed to have missed for so long). You can see the command line of any process running on Linux with ps, as long as you pass the correct flags. I just ran ps -fHu hudson and was able to see the full command line of the java process, including the parameters that were passed in.
Since you can export $JAVA_ARGS, maybe you can override $PATH to trick the script into running another program instead of the JVM, which could be a program that simply writes its arguments somewhere.
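If you can execute code inside the running instance (for example through Hudson's script console, if it is accessible), the standard management API also reports the JVM arguments directly; a minimal sketch:

import java.lang.management.ManagementFactory;
import java.util.List;

public class ShowJvmArgs {
    public static void main(String[] args) {
        // Arguments passed to the JVM itself (-Xmx, -Xms, -D..., etc.), not the program arguments
        List<String> jvmArgs = ManagementFactory.getRuntimeMXBean().getInputArguments();
        for (String arg : jvmArgs) {
            System.out.println(arg);
        }
        // The max heap actually in effect, in bytes
        System.out.println("maxMemory = " + Runtime.getRuntime().maxMemory());
    }
}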

Xcode distributed build failure

I am trying to do distributed builds with Xcode, but I see this error while building from my build server (the build server is the host, my dev machine is the client).
When I try it the other way around, I am able to distribute builds (my dev machine as the host and the build server as the client).
Any thoughts?
[14:44:47]: Step 2/3 (6m:10s)
[14:44:57]: [Step 2/3] distcc[95606] (dcc_parse_multiplier) ERROR: bad multiplier "/0,lzo,cpp" in host specification
[14:44:57]: [Step 2/3] distcc[95606] (dcc_show_hosts) CRITICAL! Failed to get host list
[14:44:57]: [Step 2/3] /usr/bin/pump: error: pump mode requested, but distcc hosts list does not contain any hosts with ',cpp' option
Your mileage may vary with this solution, but we've had to hack the distcc that comes with Xcode to force pump mode off to fix this problem.
Remove pump from /Developer/usr/bin and /usr/bin, and write out an empty file named pump in its place.
Don't forget to chmod a+x your pump and distcc (from the next step).
In /Developer/usr/bin, rename distcc to distcc.bin and write out this distcc:
#!/bin/bash
# Strip the ",cpp" option from every host entry so pump mode is never used
hosts=$DISTCC_HOSTS
hosts=${hosts//,cpp/}
export DISTCC_HOSTS=$hosts
echo Modified DISTCC_HOSTS=\"$DISTCC_HOSTS\"
# Pass all original arguments through to the real distcc
exec /Developer/usr/bin/distcc.bin "$@"
Apologies, this is a quick and dirty solution. There is probably a cleaner way to do this.
Please restart the build server and your own computer; that usually does the trick for me. Also, update to the latest Xcode 4.