Does the current HotSpot JVM run in parallel by default? - jvm

I'm using this Java version:
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.3) (suse-9.1-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
When I start a Java program, e.g.
java TestApp
will the JVM run in parallel by default?
If so, which parts run in parallel?
I am interested in this because I found that if I use taskset -c 0 java TestApp to bind TestApp to processor 0, the first run is much slower than with a plain java TestApp. Does this imply something?

There are a number of single-threaded tasks inside the JVM, each with a thread of its own:
the main thread, which runs your program
the background bytecode-to-native (JIT) compiler thread
the finalizer thread (which calls finalize() on objects)
the GC thread pool
Your own code will only use as many threads as you create (plus "main", which is created for you).
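If you want to see some of these threads for yourself, here is a minimal sketch (the class name is illustrative) that enumerates the live Java threads via the standard management API. Note that GC worker threads are native VM threads and typically do not appear here, while threads such as "Finalizer", "Reference Handler" and the HotSpot compiler threads do:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ListJvmThreads {
        public static void main(String[] args) {
            // Enumerate all live Java threads, including JVM-internal ones
            // such as "Finalizer" and the JIT compiler threads.
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : bean.getThreadInfo(bean.getAllThreadIds())) {
                if (info != null) { // a thread may have terminated in the meantime
                    System.out.println(info.getThreadName());
                }
            }
        }
    }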

The JVM uses native threads and has no global interpreter lock, if that's what you're asking.

The first run is probably dominated by JIT compilation of the bytecode to machine code. I would suspect very strongly that this process is optimized to take advantage of multiple cores.
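A rough way to see how much JIT work happens during that first run is to enable HotSpot's compilation log (a sketch; TestApp stands for the class from the question):

    java -XX:+PrintCompilation TestApp

Each printed line is a method being compiled in the background; pinning the process to a single core with taskset makes that work compete with the application thread for the same CPU.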

Related

Does Vxworks support multiprogramming?

Earlier versions of VxWorks didn't support multiprocessing, but I read that VxWorks 6.6 and above support SMP (symmetric multiprocessing). SMP allows processes to run in parallel on multiple cores. But does VxWorks support multiprogramming? One of the issues with VxWorks was that, since the whole software is one program, if one thread crashes the whole software crashes! Is that still the case?
The answer is yes. There are several specific ways in which the answer is yes:
VxWorks 6.x and 7.x have a process model (called RTPs)
VxWorks 6.6+ and 7.x have SMP.
VxWorks 7.x has a memory model more like Unix.
VxWorks 6.x and 7.x both have POSIX pthreads and a native multitasking API, including a processor-affinity API.
I am using VxWorks 6.8 and it supports multiple threads well. A crash in one thread is isolated to that thread: the crashed thread terminates while the others keep executing.

JVisualVM lies: "Not supported for this JVM" while it is

App A, run from IntelliJ, shows CPU usage and I can look at its threads. App B has no Threads tab, and CPU usage shows "Not supported for this JVM". Both apps are run with the same JVM. Why, and what happened?
JVM: Java HotSpot(TM) 64-Bit Server VM (25.101-b13, mixed mode)
Java: version 1.8.0_101, vendor Oracle Corporation
Java Home: /usr/lib/jvm/java-8-oracle/jre
I can take a thread dump with jstack.
Apps are simple command-line programs, not larger than 10 classes. One app uses finalization, if it matters.
I started both apps from IntelliJ IDEA, via Run. When App A was started from the CLI (via java FinalizerTest) and inspected with jvisualvm, it had the Threads tab and CPU usage, despite not having them previously. It is still the same JVM running it, so I guess this is an IntelliJ problem?
If I can provide more information, feel free to ask.

Jprofiler and G1 GC

I was trying to profile my Spark application (which uses the G1 GC) with JProfiler. I came across their website, where they mention that JProfiler remote profiling works reliably only with the standard garbage collector:
http://resources.ej-technologies.com/jprofiler/help/doc/index.html
(Under section Probe Settings/Starting Remote Sessions)
"Please note that the profiling interface JVMTI only runs reliably with the standard garbage collector. If you have VM parameters on your command line that change the garbage collector type such as
-Xincgc
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
please make sure to remove them. It might be a good idea to remove all -XX options if you have problems with profiling."
Is this true for the latest version of JProfiler (9.0) as well? Does this affect CPU profiling as well?
I am able to do memory profiling with VisualVM; I am just wondering why JProfiler has this limitation (if it is one at all).
It's not a limitation, it's just advice. Some of the alternative GCs are not well tested with JVMTI (the profiling interface of the JVM). G1 GC will become the standard GC, so for it the situation is different.
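If you want to confirm which collector the profiled JVM actually ends up using, here is a minimal sketch (the class name is illustrative) based on the standard management beans; the names printed, e.g. "G1 Young Generation", are whatever the running VM reports:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class ShowActiveGc {
        public static void main(String[] args) {
            // Prints the garbage collectors the running JVM is using, which
            // reflects flags such as -XX:+UseG1GC or -XX:+UseConcMarkSweepGC.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName());
            }
        }
    }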

JRE Architecture Dependencies (Running on MIPS)

OpenJDK currently does not have support for a JRE on MIPS processors (there's a port in progress, but who knows how long that will take).
I'm trying to understand how the JRE works and what is standing in the way of using OpenJDK on our embedded system running Linux on a MIPS processor. If I have a custom JVM that is capable of running on MIPS and designed to work with OpenJDK (in this case, I'm referring to JamVM 2.0), is there anything else preventing the JRE from running on the MIPS board? Are any other parts of the JRE platform-dependent?
My understanding is that the JRE is (mostly) composed of two units: the virtual machine, which abstracts the hardware and which is platform-dependent, and the collection of Java libraries which run on the virtual machine and which are not platform-dependent.
To be clear, my question is: aside from the JVM, is any part of the Java Runtime Environment platform-dependent?
"aside from the JVM, is any part of the Java Runtime Environment platform-dependent?"
That depends on where you place the boundary where the VM ends and the JRE begins. I would consider memory management and code execution to be 'the VM', and everything more specific to be part of the JRE.
That means every binding to the operating system, be it I/O, graphics, etc., is part of the JRE. Thus the JRE has many platform-dependent parts; you usually just don't notice them because your code uses their abstractions (e.g. File, Socket, Window).
So when you say "a port to MIPS", that doesn't mean anything without specifying an OS (OK, your link says Linux); a VM ported to a processor architecture by itself does not make a working Java environment. It also requires a port of the native parts of the JRE that allow the Java program to actually communicate with things outside the VM; that's where the OS platform comes in.
Since Linux on x64 is already supported, the MIPS port should be able to reuse most of the JRE-to-platform bindings from it.
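To make the platform-dependent parts a bit more concrete, here is a small runnable sketch (the class name is illustrative) showing how the running JRE identifies the OS/CPU it was built for and how it maps a native library name; library classes like File or Socket ultimately call into such per-platform native libraries:

    public class PlatformBindings {
        public static void main(String[] args) {
            // The class libraries are portable bytecode, but they are backed by
            // native libraries compiled separately for each OS/CPU combination.
            System.out.println("os.name = " + System.getProperty("os.name"));
            System.out.println("os.arch = " + System.getProperty("os.arch"));
            // How this platform names a native library called "net"
            // (e.g. libnet.so on Linux, net.dll on Windows):
            System.out.println(System.mapLibraryName("net"));
        }
    }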

jBoss slowness in a VMWare ESXI Virtual Machine

We have a Java EE application that runs in JBoss 7.1.1, and we must run it in virtual machines (such as VMware ESXi).
The thing is, when we run our app in the VM, performance drops by roughly 50%.
It seems like the GC goes crazy; as far as I can tell, when the GC runs it takes much longer than normal to finish, and it blocks the application in the meantime.
Has anyone else had an experience like that? Any tips, tuning advice, or pointers I could follow?
Thanks in advance.
EDIT
JVM has Xmx and Xms = 1 GB
VM has 4 GB RAM
Ubuntu Server, 64-bit
Oracle JVM, 64-bit
I would say that before you moved your app to the VM with the configuration you posted, it was running on a 32-bit system with a 32-bit JVM, using the same JVM parameters.
The trick is that you moved to 64-bit with a 64-bit Java but still assigned the same heap size to your application. What has happened in reality is that your app now has roughly half the memory available that it used to have:
references and object headers on a 64-bit JVM are twice the size of their 32-bit counterparts, so the same objects occupy considerably more heap.
Given the configuration you have, I would suggest a few solutions (rough command-line sketches follow below):
increase the heap size to 2 GB
or use compressed oops (-XX:+UseCompressedOops)
or install a 32-bit JVM
Given that your application does not have more than 1.3 GB assigned, I think the best performance would be achieved by installing a 32-bit JVM and running with -Xms1300m -Xmx1300m.
You can go even a step further and use a 32-bit VM with a 32-bit Linux installation.
A 64-bit JVM is only useful if you need more than about 1.3 GB of heap; otherwise it just adds overhead.
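Roughly, those options translate into startup flags like these (sketches only; the rest of the JBoss command line is omitted):

    java -Xms2g -Xmx2g ...                        # larger heap on the 64-bit JVM
    java -XX:+UseCompressedOops -Xms1g -Xmx1g ... # 64-bit JVM with compressed oops
    java -Xms1300m -Xmx1300m ...                  # 32-bit JVM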
Also, you can run the JVM with
-verbose:gc -XX:+PrintGCDetails
which will show you what is happening with the GC; this can further help you tune your JVM.
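For example (a sketch; the log path is a placeholder, and in JBoss AS 7 these flags are commonly added via JAVA_OPTS in standalone.conf):

    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/path/to/gc.log"

The resulting log shows each collection and how long it took, which makes it much easier to tell whether the pauses in the VM really come from the GC.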