We have been monitoring an application in our laboratory recently. We found that minor GCs occur frequently and the total GC time is about 15 seconds over 20 minutes.
For comparison, we measured the total GC time of a similar product from another company at about 8 seconds over the same period.
We want to know which parts of our code lead to the frequent allocation and reclamation of objects, so that we can optimize our code and close the gap as much as we can.
We've tried using jvisualvm to create heap dumps and compare them. However, the timing of the dumps is hard to control and creating them is also very slow.
Is there any tool or method to know which classes are collected most during each minor GC?
Thank you!
Java profilers such as JMC, JProfiler or YourKit provide allocation recording or class-histogram diffing, which should tell you which classes are allocated most frequently. Some can even show you the allocation sites.
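For example, you can diff two class-histogram snapshots with the stock JDK tools; <pid> below is a placeholder for your process id, and jmap -histo <pid> works similarly on older JDKs:

    jcmd <pid> GC.class_histogram > before.txt
    # let the application run for a while
    jcmd <pid> GC.class_histogram > after.txt
    diff before.txt after.txt

Classes whose instance counts jump between the two snapshots are your main allocators; allocation recording in one of the profilers above will then show you the call sites.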
Alternatively, you could try tuning the GC to collect the young generation less often (by making it larger or relaxing pause-time goals), which might improve its efficiency.
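For illustration only, something along these lines (standard HotSpot flags; the sizes are placeholders you would tune for your own application):

    java -Xmx2g -Xmn1g -XX:MaxGCPauseMillis=500 -jar yourapp.jar

Here -Xmn enlarges the young generation and a higher -XX:MaxGCPauseMillis relaxes the pause-time goal; fewer, larger young collections are often more efficient overall.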
"total GC time occupies a relatively large proportion of the running time"
This is odd. That would normally trigger a "GC overhead limit exceeded" OutOfMemoryError, at least with the parallel collector, which is the default.
I have a question about how scheduling is done. I know that when a system has multiple CPUs, scheduling is usually done on a per-processor basis: each processor runs its own scheduler and accesses a ready list containing only the processes that run on it.
So what would be the pros and cons when compared to an approach where there is a single ready list that all processors share?
For example, what issues come up when assigning processes to processors, and what issues might be caused if a process always lives on one processor? And in terms of mutex locking of the data structures and the time spent waiting for those locks, are there any issues there?
Generally there is one giant problem when it comes to multi-core CPU systems: cache coherency.
What does cache coherency mean?
Access to main memory is slow. Depending on the memory frequency, it can take hundreds of CPU cycles to access data in RAM - that's a whole lot of time the CPU is doing no useful work. It would be significantly better if we minimized this time as much as possible, but the hardware required to do so is expensive, and typically must be in very close proximity to the CPU itself (we're talking within a few millimeters of the core).
This is where the cache comes in. The cache keeps a small subset of main memory in close proximity to the core, allowing accesses to this memory to be several orders of magnitude faster than main memory. For reading this is a simple process - if the memory is in the cache, read from cache, otherwise read from main memory.
Writing is a bit more tricky. Writing to the cache is fast, but now main memory still holds the original value. We can update that memory, but that takes a while, sometimes even longer than reading depending on the memory type and board layout. How do we minimize this as well?
The most common way to do so is with a write-back cache: when written to, the cache flushes the data back to main memory at some later point, when the CPU is idle or otherwise not busy. Depending on the CPU architecture, this could happen during idle periods, interleaved with CPU instructions, or on a timer (this is up to the designer/fabricator of the CPU).
Why is this a problem?
In a single-core system, there is only one path for reads and writes to take - they must go through the cache on their way to main memory - so the programs running on the CPU only see what they expect: if they read a value, modify it, then read it back, they see the change.
In a multi-core system, however, there are multiple paths for data to take on its way back to main memory, depending on the CPU that issued the read or write. This presents a problem with write-back caching, since that "later time" introduces a window in which one CPU might read memory that hasn't been updated yet.
Imagine a dual-core system. A job starts on CPU 0 and reads a memory block. Since the block isn't in CPU 0's cache, it's read from main memory. Later, the job writes to that memory. Since the cache is write-back, that write goes into CPU 0's cache and will be flushed back to main memory later. If CPU 1 then attempts to read that same memory, it reads from main memory again, since the block isn't in CPU 1's cache. But the modification from CPU 0 hasn't left CPU 0's cache yet, so the data CPU 1 gets back is not valid - the modification hasn't gone through yet. Your program could now break in subtle, unpredictable, and potentially devastating ways.
Because of this, cache synchronization is done to alleviate the problem. Application IDs, address monitoring, and other hardware mechanisms exist to synchronize the caches between multiple CPUs. All of these methods share one drawback: they force the CPU to spend time on bookkeeping rather than on actual, useful computation.
The best method of avoiding this is actually keeping processes on one processor as much as possible. If the process doesn't migrate between CPUs, you don't need to keep the caches synchronized, as the other CPUs won't be accessing that memory at the same time (unless the memory is shared between multiple processes, but we'll not go into that here).
Now we come to the issue of how to design our scheduler, and the three main problems there - avoiding process migration, maximizing CPU utilization, and scalability.
Single-Queue Multiprocessor Scheduling (SQMS)
Single-queue multiprocessor schedulers are what you suggested: one queue containing all runnable processes, with each core taking its next job from that queue. This is fairly simple to implement, but it has two major drawbacks - it can cause a whole lot of process migration, and it does not scale well to larger systems with more cores.
Imagine a system with four cores and five jobs, each of which takes about the same amount of time to run, and each of which is rescheduled when completed. On the first pass, CPU 0 takes job A, CPU 1 takes B, CPU 2 takes C, and CPU 3 takes D, while E is left on the queue. CPU 0 then finishes job A, puts it at the back of the shared queue, and looks for another job. E is now at the front of the queue, so CPU 0 takes E and carries on. Next, CPU 1 finishes job B, puts B at the back of the queue, and looks for its next job. It sees A and starts running A. But since A was on CPU 0 before, CPU 1 now needs to sync its cache with CPU 0, resulting in lost time for both CPU 0 and CPU 1.

In addition, if two CPUs finish their jobs at the same time, they both need to write to the shared list, which has to be done sequentially or the list will get corrupted (just like in multi-threading). This requires one of the two CPUs to wait for the other to finish its writes and sync its cache back to main memory, since the list lives in shared memory. This problem gets worse and worse the more CPUs you add, causing major problems on large servers (where there can be 16 or even 32 CPU cores) and making the approach completely unusable on supercomputers (some of which have upwards of 1000 cores).
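To make the locking point concrete, here is a minimal, illustrative Java sketch (obviously not how a kernel scheduler is actually written) of a single shared ready queue that every core has to lock before it can take or return a job:

    import java.util.ArrayDeque;
    import java.util.Deque;

    class SingleQueueScheduler {
        private final Deque<Runnable> readyQueue = new ArrayDeque<>();

        // Every core must acquire this one lock to fetch or return work,
        // so cores serialize here and the queue's memory bounces between their caches.
        synchronized Runnable nextJob() {
            return readyQueue.pollFirst();   // null means "nothing to run"
        }

        synchronized void submit(Runnable job) {
            readyQueue.addLast(job);
        }
    }

With four cores that one lock is rarely contended; with 64 or more it becomes the bottleneck, which is exactly the scaling problem described above.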
Multi-Queue Multiprocessor Scheduling (MQMS)
Multi-queue multiprocessor schedulers have one queue per CPU core, ensuring that all local scheduling can be done without taking a shared lock or synchronizing caches. This allows systems with hundreds of cores to operate without the cores interfering with one another at every scheduling interval, which can happen hundreds of times a second.
The main issues with MQMS are CPU utilization, where one or a few CPU cores end up doing the majority of the work, and scheduling fairness, where one process gets scheduled more often than other processes with the same priority.
CPU utilization is the biggest issue - no CPU should ever be idle while a job is waiting to run. However, if all CPUs are busy when we queue a job on some CPU, and a different CPU later becomes idle, the idle CPU should "steal" the queued job from the original CPU to ensure every CPU is doing real work. Doing so, however, requires locking both CPUs' queues and potentially syncing caches, which may eat into whatever speedup we gained by stealing the job.
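For contrast, a minimal sketch of the multi-queue idea in the same illustrative style - one queue per core, with an idle core stealing from a neighbour only as a last resort (Java's ForkJoinPool uses a far more refined version of this scheme):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedDeque;

    class MultiQueueScheduler {
        private final List<ConcurrentLinkedDeque<Runnable>> queues = new ArrayList<>();

        MultiQueueScheduler(int cores) {
            for (int i = 0; i < cores; i++) queues.add(new ConcurrentLinkedDeque<>());
        }

        void submit(int core, Runnable job) {
            queues.get(core).addLast(job);               // enqueue on the submitting core's own queue
        }

        Runnable nextJob(int core) {
            Runnable job = queues.get(core).pollFirst(); // fast path: local queue, no shared lock
            if (job != null) return job;
            for (int i = 0; i < queues.size(); i++) {    // slow path: steal from another core's queue
                if (i == core) continue;
                job = queues.get(i).pollLast();          // steal from the far end to reduce collisions
                if (job != null) return job;
            }
            return null;                                 // genuinely nothing to run anywhere
        }
    }

The fast path never touches another core's queue, so in the common case there is no shared lock and no forced cache synchronization.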
In conclusion
Both methods exist in the wild - Linux actually has three mainstream scheduler algorithms, one of which is an SQMS. Which scheduler is best really depends on how it is implemented, the hardware you plan to run it on, and the types of jobs you intend to run. If you know you only have two or four cores to run jobs on, SQMS is likely perfectly adequate. If you're running a supercomputer where overhead is a major concern, an MQMS is probably the way to go. For a desktop user, just trust the distro, whether that's a Linux OS, Mac, or Windows. Generally, the programmers of the operating system you've got have done their homework on exactly which scheduler will be the best option for the typical use case of their system.
This whitepaper describes the differences between the two types of scheduling algorithms in more detail.
I'm trying to work around an issue which has been bugging me for a while. In a nutshell: on what basis should one assign the max heap size for a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical datasets, which can eat up to several gigabytes of memory if several imaging volumes are opened side by side. Caching the data being viewed is essential for a fluent workflow. The software runs on Windows workstations and is started via a bootloader, which assigns the heap size and then launches the main application. The memory actually needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would ultimately consume too much time.
So, to ensure that the JVM has enough memory, at launch we set -Xmx as large as we dare - by the current design, based on the workstation's maximum physical memory. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to hog up excess heap space, which can lead to memory errors at runtime. Should I maybe also check the free virtual memory or paging-file size prior to assigning the heap space? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I put it quite right, but my problem arose from the fact that I have zero knowledge of the hardware this software will run on, yet I would nevertheless like to assign as much heap space to the software as possible.
I came to the solution of assigning a heap of 70% of physical memory IF a sufficient amount of virtual memory is available - and less otherwise.
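For reference, a rough sketch of how a launcher could derive that figure on HotSpot (the 70%/50% split and the swap check are just my own assumptions; getTotalPhysicalMemorySize() comes from the com.sun.management extension):

    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapSizer {
        public static void main(String[] args) {
            OperatingSystemMXBean os = (OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();

            long physical = os.getTotalPhysicalMemorySize();   // bytes of RAM on the box
            long swapFree = os.getFreeSwapSpaceSize();          // rough proxy for virtual-memory headroom

            // 70% of RAM if there is swap headroom, otherwise back off to 50% (assumed policy).
            long heapBytes = (long) (physical * (swapFree > physical / 2 ? 0.70 : 0.50));

            System.out.println("-Xmx" + (heapBytes >> 20) + "m");  // the flag to pass to the real launch
        }
    }

The bootloader would then pass the printed -Xmx flag to the main application's JVM.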
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can hurt GC pause times).
Heap sizes of 100 GB are possible but have downsides, mostly because they can have high pause times. If you use Azul Zing, it can handle much larger heap sizes significantly more gracefully.
The main limitation is the size of your physical memory. If your heap exceeds that, your application and your computer will run very slowly or become unusable.
A standard way around these issues in mapping software (which has to be able to map the whole world, for example) is to break your images into tiles. This way you only load and display the portions of the image that are on the screen. If you need to zoom in and out, you might need to store the data at two to four levels of scale. Using this approach, you can view a map of the whole world on your phone.
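A toy Java sketch of the tiling idea - given a viewport, work out which fixed-size tiles you actually need to load (the 512-pixel tile size is an arbitrary assumption, and the actual loading/caching is left out):

    import java.util.ArrayList;
    import java.util.List;

    class TileIndex {
        static final int TILE = 512;   // assumed tile edge length, in pixels

        // Returns the (col, row) index of every tile intersecting the viewport.
        // Assumes non-negative viewport coordinates.
        static List<int[]> visibleTiles(int viewX, int viewY, int viewW, int viewH) {
            List<int[]> tiles = new ArrayList<>();
            for (int row = viewY / TILE; row <= (viewY + viewH - 1) / TILE; row++) {
                for (int col = viewX / TILE; col <= (viewX + viewW - 1) / TILE; col++) {
                    tiles.add(new int[] { col, row });   // only these tiles need to be in memory
                }
            }
            return tiles;
        }
    }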
It's best not to set the JVM max heap to more than 60-70% of workstation memory, in some cases even lower, for two main reasons. First, what the JVM consumes on the physical machine can be 20% or more greater than the heap, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, as the OS keeps caches and buffers around the various I/O devices from which it reads those objects.
I know external programs can be called, but I don't know how expensive it is compared to, say, calling a subroutine. By the cost of calling, I mean the overhead of starting the program, rather than the cost of executing the program's code itself. I know the cost probably varies greatly depending on the language and operating system used and other factors, but I would appreciate some ballpark estimates.
I am asking in order to gauge the plausibility of emulating self-modifying code in languages that don't allow it, by having processes modify other processes.
Like I said in my comment above, perhaps it would be best if you simply tried it and did some benchmarking. I'd expect this to depend primarily on the OS you're using.
That being said, starting a new process generally is many orders of magnitude slower than calling a subroutine (I'm tempted to say something like "at least a million times slower", but I couldn't back up such a claim with any measurements).
Possible reasons why starting a process is much slower:
Disk I/O (the OS has to load the process image file into memory) — this is going to be a big factor because I/O is many orders of magnitude slower than a simple CPU jump/call instruction.
To give you a rough idea of the orders of magnitude involved, let me quote this 2011 blog article (which is about memory access vs HDD access, not CPU jump instruction vs HDD access):
"Disk latency is around 13ms, but it depends on the quality and rotational speed of the hard drive. RAM latency is around 83 nanoseconds. How big is the difference? If RAM was an F-18 Hornet with a max speed of 1,190 mph (more than 1.5x the speed of sound), disk access speed is a banana slug with a top speed of 0.007 mph."
You do the math.
allocations of memory & other kernel data structures
laying out the process image in memory & performing relocations
creation of a new OS thread
context switches
etc.
All of the above means that your OS is likely to perform lots of internal subroutine calls just to start a new process, so doing one subroutine call yourself instead of having the OS do hundreds of them is bound to be comparatively super-cheap.
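If you want ballpark numbers for your own machine, here is a crude, deliberately unscientific Java sketch that times a trivial method call against spawning a short-lived external command ("hostname" is just a placeholder; substitute any cheap command available on your OS):

    public class SpawnCost {
        static int trivial(int x) { return x + 1; }   // stand-in for a subroutine call

        public static void main(String[] args) throws Exception {
            long t0 = System.nanoTime();
            int acc = 0;
            for (int i = 0; i < 1_000_000; i++) acc += trivial(i);
            long perCall = (System.nanoTime() - t0) / 1_000_000;   // average ns per call
            System.out.println("method call ~" + perCall + " ns (acc=" + acc + ")");

            long t1 = System.nanoTime();
            Process p = new ProcessBuilder("hostname").start();     // placeholder command
            p.waitFor();
            System.out.println("process spawn ~" + (System.nanoTime() - t1) + " ns");
        }
    }

Expect the process spawn to land somewhere in the millisecond range and the method call in the low nanoseconds, though the exact ratio depends heavily on your OS, hardware, and what the JIT does with the loop.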
Is it possible to recover memory lost from w3wp.exe? I thought a Session.Abandon() should clear up resources like that? The thing is, I have a web application, and certain pages make w3wp.exe grow significantly - from, say, 40 MB to 400 MB. Now I am definitely going to optimize my code to reduce this; however, for whatever amount w3wp.exe grows, is there really no way to recover that memory, even after the user has logged out and closed the browser?
I know this worker process will recycle after 30 minutes of idle time (the default), but what if there is no idle period for a long time and the worker process still holds that memory - does it just keep on growing? Any thoughts, people?
The garbage collector will take care of whatever memory needs to be freed, provided that you dispose things correctly, etc. The GC doesn't immediately kick in every time you call Session.Abandon(), as that would be a major performance hit.
That said, every application has a "normal" memory usage, i.e. a stable memory usage (again, provided you don't have leaks), and this figure is different for every application. 400MB can be a lot or it can be nothing, depending on what your app does. I have apps that hover around 400MB and others around 1.5GB and that's OK as long as memory usage stabilizes somewhere. If you see unbounded memory usage then you most likely have a leak somewhere in your app.
Storing large amounts of data in the in-proc session can also quickly rack up the memory usage. Instead, use a file or a database to store this data.
Unless you are leaking memory, the memory manager will reuse it, so you should not see the process's memory keep growing.
I have a WebSphere Portal application running four instances on a single box, and after about 7 days of runtime there is only 130-150 MB of address space free in native memory (measured with pmap). Somewhere in another 7-10 days the figure drops well below 100 MB (which we deem dangerous, and we start to recycle the JVM). If we don't recycle, the JVM eventually crashes with a SIGSEGV signal.
We've done some initial investigation into class counts and the size of JIT code. Class counts grow, but slowly, from 50k onwards - about a couple of hundred per day. The JIT code size gets to about 210 MB after 7 days and grows about 1 MB per day after that. In our previous experience these are not sinister values.
What we need is to be able to break down what is in the native heap, whether it is threads (all thread counts appear normal and we use fixed thread pools), string pools, constant pools, bytecode, or anything else.
One lead we are trying now is reducing the reflection threshold to 0 (shutting off the bytecode accessors for reflectively created classes). This app uses a lot of pointcutting and a lot of reflection, so we're hoping there's a good chance this helps.
Any advice is welcome.
It might be a bit of back-and-forth, but have you enabled GC logging and made sure the Java heap isn't growing over time? Have you looked at your perm space? The SIGSEGV is an interesting one, though - I'd expect a more JVM-ish crash for any in-Java memory issue.
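For reference, the sort of thing I mean (option names vary by JVM vendor and version, so double-check against your JVM's documentation; -verbose:gc is universal, and I believe the IBM JDK that WebSphere ships accepts -Xverbosegclog to send the output to a file):

    -verbose:gc -Xverbosegclog:gc.log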
After lengthy investigation, this ended up being a WebSphere bug: PK72252: CALLS TO CLASSLOADER.GETRESOURCEASSTREAM ARE SLOW. Fixed in 6.0.2.33.