Does JProfiler analyze GC log files to show things like eden space utilization vs tenured utilization, etc.?

JClarity claims to be able to analyze GC log files and show interesting and useful facts such as eden space utilization over time. This could, presumably, help me tune my GC settings. But I already bought JProfiler. Does JProfiler provide similar functionality? (I couldn't find any tutorial or examples that would let me do things similar to JClarity, such as showing eden space utilization over time.)

As of version 10.1, JProfiler does not analyze GC log files. However, the "Memory" VM telemetry can show all the memory pools separately. You can select a memory pool in the drop-down at the top.
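If you do want to chart eden vs. tenured utilization over time, one option is to have the JVM write a GC log and feed it to a dedicated log analyzer. As a sketch (the gc.log file name is just an example), these HotSpot options enable GC logging:

For Java 9 and later (unified logging):
-Xlog:gc*:file=gc.log:time,uptime,level,tags
For Java 8:
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log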

Related

IntelliJ uses more memory than allocated

My IntelliJ runs unbearably slowly, so I was fiddling with memory settings. If you select Help -> Change Memory Settings, you can set the max heap size for IntelliJ. But even after restarting and then checking Mac's Activity Monitor, I see it using 5.5GB even though I set the heap to 4092MB.
It's using 1.5GB more than allocated for the heap. That's a lot of memory for permgen + stack, don't you think? Or could it be that this memory setting actually doesn't have any effect on the program?
What you see is virtual memory; it can include memory-mapped files and many other things used by the JVM internals, plus the native libraries for a dozen Apple frameworks loaded into the process. There is nothing to worry about unless you get an OOM or the IDE becomes slow.
If that happens, refer to the KB documents and report the issue to YouTrack with CPU/memory snapshots.
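If you want to see where the footprint beyond the heap goes, one rough approach (assuming you can restart the IDE with an extra VM option) is the JVM's Native Memory Tracking; the <pid> placeholder below stands for the IDE's process id:

Add to the .vmoptions file:
-XX:NativeMemoryTracking=summary
Then query the running process:
jcmd <pid> VM.native_memory summary

This breaks the JVM's own usage down into heap, metaspace, code cache, threads, GC structures and so on; memory-mapped files and native libraries still come on top of that.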

How to properly assign huge heap space for JVM

I'm trying to work around an issue which has been bugging me for a while. In a nutshell: on what basis should one assign a max heap size to a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical data sets, which can eat up to several gigabytes of memory if several imaging volumes are opened side by side. Caching the data to be viewed is essential for a fluent workflow. The software runs on Windows workstations and is started with a bootloader, which assigns the heap size and launches the main application. The actual memory needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would ultimately consume too much time.
So, to ensure that the JVM has enough memory during launch, we set -Xmx as large as we dare, which by the current design is based on the maximum physical memory of the workstation. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to hog excess heap space, which can lead to memory errors during runtime. Should I maybe also check the free virtual memory or paging file size prior to assigning heap space? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I put my words right, but my problem arose from the fact that I have zero knowledge of the hardware this software will be run on, but would nevertheless like to assign as much heap space to the software as possible.
I settled on assigning a heap of 70% of physical memory IF there is a sufficient amount of virtual memory available, and less otherwise.
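A minimal sketch of what that calculation might look like in the bootloader (the class name is hypothetical, the 70%/50% thresholds are just the assumption above, and it relies on the HotSpot-specific com.sun.management extension of OperatingSystemMXBean):

import java.lang.management.ManagementFactory;

// Hypothetical bootloader helper: derive an -Xmx value from physical memory,
// backing off when little virtual memory (swap/page file) is free.
public class HeapSizer {
    public static void main(String[] args) {
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        long physical = os.getTotalPhysicalMemorySize(); // total RAM in bytes
        long freeSwap = os.getFreeSwapSpaceSize();       // rough proxy for free paging space
        double fraction = (freeSwap > physical / 2) ? 0.7 : 0.5; // assumed thresholds
        long maxHeapMb = (long) (physical * fraction) >> 20;
        System.out.println("-Xmx" + maxHeapMb + "m");    // e.g. passed to the launched JVM
    }
}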
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can increase GC pause times).
Heap sizes of 100 GB are possible but have downsides, mostly because they can lead to high pause times. Azul Zing can handle much larger heap sizes significantly more gracefully.
The main limitation is the size of your physical memory. If your heap exceeds that, your application and your computer will slow to a crawl or become unusable.
A standard way around these issues with mapping software (which has to be able to map the whole world, for example) is to break your images into tiles. This way you only display the part of the image which is on the screen (or the portions which are on the screen). If you need to be able to zoom in and out, you might need to store the data at two to four levels of scale. Using this approach you can view a map of the whole world on your phone.
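As a rough illustration of the tiling idea (the tile size and class name here are made up, not taken from any particular imaging or mapping library):

// Hypothetical sketch: given a viewport rectangle (non-negative pixel coordinates),
// compute which fixed-size tiles intersect it, so only those tiles need to be
// decoded and kept in memory.
import java.util.ArrayList;
import java.util.List;

public class TileViewport {
    static final int TILE_SIZE = 256; // assumed tile edge length in pixels

    // Tile (column, row) pairs covering [x, x + width) x [y, y + height).
    public static List<int[]> visibleTiles(int x, int y, int width, int height) {
        List<int[]> tiles = new ArrayList<>();
        int firstCol = x / TILE_SIZE, lastCol = (x + width - 1) / TILE_SIZE;
        int firstRow = y / TILE_SIZE, lastRow = (y + height - 1) / TILE_SIZE;
        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                tiles.add(new int[] { col, row });
            }
        }
        return tiles;
    }
}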
It is best not to set the JVM max memory to greater than 60-70% of workstation memory, in some cases even lower, for two main reasons. First, what the JVM consumes on the physical machine can be 20% or more greater than the heap, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, as the OS has caches and buffers and so forth around the various I/O devices from which it grabs these objects.

Intellij IDEA: Improving performance due to being incredibly slow at times

I have these settings:
-server
-Xms2048m
-Xmx8096m
-XX:MaxPermSize=2048m
-XX:ReservedCodeCacheSize=2048m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=512
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-Dawt.useSystemAAFontSettings=lcd
Yes, they are maxed out.
I have also changed idea.max.intellisense.filesize from 2500 to:
idea.max.intellisense.filesize=500
I am developing in a Java project which mostly works fine, although in some Java classes the IDE is slow at times, for example when just editing a String.
However, today I am touching some HTML, CSS and JavaScript files and it is just getting slower and slower.
The CPU level is not increasing considerably; the IDE is just slow.
I am in debug mode most of the time, but I don't have auto build on save.
What other parameters can I increase/decrease to get it to run more smoothly?
Right now it is not able to provide me with any help.
I have 24 GB of RAM and an i7-4810MQ, so it's a pretty powerful laptop.
According to this JetBrains blog post, you can often double the performance of IDEA by fixing various NTFS-disk-related issues:
If you are running a Windows machine with NTFS disks, there is a good chance to double the performance of IntelliJ IDEA by optimizing the MFT tables, disk folder structure and Windows paging file.
We have used the Diskeeper 2007 Pro Trial version tool to carry out the following procedure. You may, of course, repeat this with your favorite defragmenter, provided it supports equivalent functionality.
Free about 25% space on the drive you are optimizing.
Turn off any real-time antivirus protection and reboot your system.
Defragment files.
Defragment the MFT (do a Frag Shield, if you are using Diskeeper). Note that this is quite a lengthy process which also requires your machine to reboot several times.
Defragment the folder structure (perform the Directory consolidation).
Defragment the Windows paging file.
The above optimizations have a positive impact not only on IntelliJ IDEA, but on general system performance as well.
You could open VisualVM, YourKit or other profiler and see what exactly is slow.
Or just report a performance problem.
VisualVM alone would tell you if the CPU is spending time with garbage collecting or normal stuff.
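If you prefer checking from the command line, jstat (shipped with the JDK) shows the same kind of information; a sketch, where <pid> is the IDE's process id and 1000 is the sampling interval in milliseconds:

jstat -gcutil <pid> 1000

The E, O and M columns show eden, old-generation and metaspace occupancy in percent, and GCT shows the accumulated GC time, so you can see at a glance whether GC is the culprit.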
A large heap provides a considerable benefit only when garbage collection causes lags or eats too much CPU. Also, if you enable the memory indicator via Settings | Show Memory Indicator, you will see how much of the heap is occupied and when GC clears it.
BTW you absolutely need an SSD.

How to generate metrics or reports using JProfiler?

I have successfully started my application in profiling mode, but I am not sure how to generate reports or metrics from JProfiler.
I can see Live memory (all objects, recorded objects, instance counts, etc.), the heap walker, and so on, but I am not sure what JProfiler concludes or recommends about my application.
Can someone help?
The profiling approach you're describing is JProfiler's live profiling session. The objective is pretty much to look at the charts it produces and identify anomalies.
For example, in CPU profiling you will be looking at CPU hot spots, i.e. individual methods that consume a disproportionate amount of time.
In the memory profiler, you will be able to identify the objects that occupy the most memory (also hot spots).

Is it meaningful to monitor physical memory usage on AIX?

Due to AIX's special memory-usage algorithm, is it meaningful to monitor physical memory usage in order to find the memory bottleneck during performance tuning?
If not, then what kind of KPI am I supposed to keep an eye on to determine whether we need to enlarge the RAM capacity?
Thanks
If a program requires more memory than is available as RAM, the OS will start swapping memory sections to disk as it sees fit. You'll need to monitor the output of vmstat and look for paging activity. I don't have access to an AIX machine right now to illustrate with an example, but I recall the man page is pretty good at explaining what data is represented there.
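For example (column names quoted from memory, so verify them against the man page on your AIX level), sampling every five seconds:

vmstat 5

On AIX, the pi and po columns under the page group show pages paged in from and out to paging space; sustained non-zero values there are the classic sign that the machine is short on RAM.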
Also, this looks to be a good writeup about another AIX-specific systems monitoring tool for watching your system's overall memory (svmon):
http://www.aixhealthcheck.com/blog.php?id=255
To track the size of your individual application instance(s), there are several options, with the most common being ps. Again, you'll have to check the man page for information on which options to use. There are several columns for memory size per process. You can compare those values to the overall memory available on your machine and, by tracking them over time, understand whether your application only ever grows in memory or whether it releases memory when it is done with a task.
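As an illustration (AIX's ps accepts both BSD-style and System V-style flags, and the exact columns differ, so treat this as an assumption to verify against the man page):

ps v <pid>

The SIZE and RSS columns report, roughly, the process's virtual data size and its resident set size in kilobytes; sampling them periodically shows whether the application keeps growing or levels off.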
Finally, there's quite a body of information from IBM on performance tuning for AIX, but I was never able to find a roadmap guide to reading that information. A lot of it assumes you know facts and features that aren't explained in the current doc set, so you then have to try to find an explanation, which often leads to searching for yet another layer of explanations. :^/
IHTH.