NetBeans out of memory exception
I increased the -Xmx value in the netbeans.conf file, but the IDE still keeps acquiring more memory while scanning projects. Memory usage climbs, and the process becomes slow and unresponsive.
Sounds like your system is thrashing. The heap size is now so large that there is not enough physical memory on your system to hold it ... and all of the other things you are running.
The end result is that your system has to copy memory pages between physical memory and the disc page file. Too much of that and the system performance will drop dramatically. You will see that the disc activity light is "on" continually. (The behaviour is worst during Java garbage collection, because that entails accessing lots of VM pages in essentially random order.)
If this is your problem then there is no easy solution:
You could reduce the -Xmx a bit (see the example after this list) ...
You could stop other applications running; e.g. quit your web browser, email client, etc.
You could buy more memory. (This only works up to a point if you are using a 32-bit system / 32-bit OS / 32-bit JVM, since a 32-bit JVM cannot address more than about 4GB anyway.)
You could switch to a less memory-hungry operating system or distro.
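For example, NetBeans reads its JVM flags from netbeans_default_options in netbeans.conf (typically under etc/ in the install directory; the exact path and the other flags vary by installation, so treat this as a sketch and keep whatever flags your file already has). The -J prefix passes a flag through to the IDE's JVM:

    # etc/netbeans.conf -- preserve the other flags already present
    netbeans_default_options="-J-Xms256m -J-Xmx768m"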
Related
My IntelliJ is unbearably slow, so I was fiddling with the memory settings. If you select Help -> Change Memory Settings, you can set the max heap size for IntelliJ. But even after restarting and then running Mac's Activity Monitor, I see it using 5.5GB even though I set the heap to 4092MB.
It's using 1.5GB more than is allocated for the heap. That's a lot of memory for permgen + stack, don't you think? Or could it be that this memory setting actually has no effect on the program?
What you are seeing is virtual memory; it can also include memory-mapped files and many other things occupied by the JVM internals, plus the native libraries for a dozen Apple frameworks loaded into the process. There is nothing to worry about unless you get an OOM or the IDE becomes slow.
If that happens, refer to the KB documents and report the issue to YouTrack with CPU/memory snapshots.
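If you want to see the split for yourself, a minimal sketch using the standard Runtime API (the printed numbers are illustrative):

    public class HeapVsProcess {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            System.out.println("max heap (-Xmx) : " + rt.maxMemory() / mb + " MB");
            System.out.println("committed heap  : " + rt.totalMemory() / mb + " MB");
            System.out.println("used heap       : " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
            // Activity Monitor reports the whole process: heap plus metaspace,
            // thread stacks, memory-mapped files and native libraries, which is
            // why its figure can exceed -Xmx by a wide margin.
        }
    }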
I am wondering what's the JVM behaviour for the following situation:
JVM minimum heap size = 500MB
JVM maximum heap size = 2GB
OS has 1GB memory
After the JVM has started and the program has run for a period of time, it uses more than 1GB of memory. I wonder whether an OOM will happen immediately, or whether the JVM will try to GC first.
It depends on how much swap space you have.
If you don't have enough free swap, the JVM won't start as it can't allocate enough virtual memory.
If you have enough free swap, your program could start and run. However, once a JVM starts swapping its heap, GC times rise dramatically, because the GC assumes it can access the heap more or less randomly.
If your heap can't fit in main memory, the program, and possibly the machine becomes unusable. In my experience, on Windows, a reboot is needed at this point. On Linux, I usually find I can kill the process.
In short, you might be able to start the JVM, but it's a bad idea.
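As for the GC-before-OOM part of the question: the JVM runs (possibly repeated) full collections first, and only throws OutOfMemoryError once GC cannot free enough space. A minimal sketch to watch this happen (run with a small cap such as -Xmx64m plus -verbose:gc, and you will see full GCs logged before the error):

    import java.util.ArrayList;
    import java.util.List;

    public class FillHeap {
        public static void main(String[] args) {
            List<byte[]> chunks = new ArrayList<>();
            int mb = 0;
            try {
                while (true) {
                    chunks.add(new byte[1024 * 1024]); // 1 MB per iteration
                    mb++;
                }
            } catch (OutOfMemoryError e) {
                chunks = null; // drop the reference so the println below can allocate
                System.out.println("OutOfMemoryError after ~" + mb + " MB");
            }
        }
    }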
We migrated a web application from JSF 1.0 to 1.2 and deployed it on WebSphere 8.5; earlier the application was deployed on WebSphere 6.0. We are facing a performance issue during soak testing. We got some thread-hung messages in the sysout logs, and I also observe a lot of blocked threads in the thread dump file, although they are released over time.
Application performance degrades over time. I can see that the performance issue remains even after the application has been idle for a day.
The main issue is high CPU usage and high JVM memory use even when the application has been idle for a day. The application is fast after a restart of the server. Will the GC not clear the JVM memory over a whole day, and why is the CPU so high?
High CPU with low or declining application throughput is typical of Java heap exhaustion, where the JVM spends most of its time running GC, trying to clear space in the heap, instead of doing real work. You should enable verbose GC logging; the GC log will show the heap state and GC activity. If the heap is below 10% tenured/OldGen free (assuming the default gencon collector) after a global/full GC, you are in a heap-exhaustion state.
You could try increasing the heap size; maybe it just needs more space than currently provided. If the heap use (used tenured space after a global GC) continues to climb over time while the offered workload is steady/constant, then the app probably has a memory leak. The objects accumulating in the heap can be seen by taking a core/system dump when the server is near the heap-exhaustion state and examining the dump with e.g. Eclipse Memory Analyzer.
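For illustration, a hypothetical minimal example of the kind of leak that shows up as a steadily climbing used-after-GC figure in a verbose GC log: a static collection that is filled but never evicted (class name and sizes are made up):

    import java.util.ArrayList;
    import java.util.List;

    public class LeakyCache {
        private static final List<byte[]> CACHE = new ArrayList<>(); // never evicted

        static void handleRequest() {
            CACHE.add(new byte[64 * 1024]); // retained forever, so tenured use climbs
        }

        public static void main(String[] args) throws InterruptedException {
            while (true) { // steady workload, steadily growing heap
                handleRequest();
                Thread.sleep(1);
            }
        }
    }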
I would appreciate it if any expert here could advise on the JVM and swap-space related queries below. Thanks in advance.
1) Am I right that the operating system will use swap space when OutOfMemory occurs in the JVM Java heap, permanent generation, or native heap? Or is swap space only used for OutOfMemory in the native heap?
2) Am I right that the native heap size is not configurable in the JVM, because the OS assigns available RAM to the JVM at runtime?
3) How can we enable swap space for the JVM, or is swap space enabled by default for all processes on Unix and Windows?
4) I understand that swap space can affect application performance; is it best practice to disable swap space for the JVM? If not, what is the reason?
5) How can we disable swap space, or change the swap-space size, for a particular JVM on both Unix and Windows, or is it only configurable at the OS level, applying to all processes in the OS?
There are a lot of questions here... Operating systems indeed use swap space to create so-called virtual memory (which is obviously bigger than the RAM you might have). It is usually enabled by default, but you need to check.
You cannot instruct the JVM to use only physical RAM, AFAIK; that is controlled by the OS itself and not by the JVM (this should answer 5).
You can disable swap (again, for the OS, not the JVM), but that is a bad idea. There are multiple processes running in the operating system, and they each need space to run in (which at some point might exceed your actual RAM). Swapping indeed affects performance, but which is worse: some performance penalty (I assume the OS has many tricks to soften it) or the death of the application? (This should answer 4.)
Regarding (2), there are two parameters that control how much heap you will have: -Xmx, the maximum heap the JVM process will use, and -Xms, the initial heap. Actually, just recently there was a very good talk about this: here.
I think -Xmx and -Xms configure how much heap is available to the Java program that runs inside the virtual machine. The virtual machine itself is a native process that requires additional memory of its own. The JVM process can therefore consume more memory than indicated by the -Xmx option.
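You can observe that split from inside a running program with the standard management API; a small sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class HeapVsNonHeap {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            // Heap usage is bounded by -Xmx; non-heap (permgen/metaspace, code
            // cache, ...) is extra, and thread stacks and other JVM internals
            // come on top of both.
            System.out.println("heap    : " + mem.getHeapMemoryUsage());
            System.out.println("non-heap: " + mem.getNonHeapMemoryUsage());
        }
    }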
I'm trying to work out an issue that has been bugging me for a while. In a nutshell: on what basis should one assign a max heap size to a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical datasets, which can eat up several gigabytes of memory if several imaging volumes are opened side by side. Caching the data being viewed is essential for a fluent workflow. The software runs on Windows workstations and is started with a bootloader, which assigns the heap size and launches the main application. The actual memory needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would, ultimately, consume too much time.
So, to ensure that the JVM has enough memory at launch, we currently set -Xmx as large as we dare, based on the maximum physical memory of the workstation. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to hog excess heap space, which can lead to memory errors at runtime. Should I maybe also check free virtual memory or the paging-file size before assigning heap space? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I put it into words well, but my problem arose from the fact that I have zero knowledge of the hardware this software will run on but would, nevertheless, like to assign as much heap space to the software as possible.
I settled on assigning a heap of 70% of physical memory if there is a sufficient amount of virtual memory available, and less otherwise.
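A sketch of that bootloader idea, using the JDK-specific com.sun.management extension to read physical RAM (the jar and main-class names are placeholders):

    import java.lang.management.ManagementFactory;
    import com.sun.management.OperatingSystemMXBean;

    public class Bootloader {
        public static void main(String[] args) throws Exception {
            OperatingSystemMXBean os =
                    (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            long heapMb = (long) (os.getTotalPhysicalMemorySize() * 0.7) / (1024 * 1024);
            // Launch the main application with the computed 70% cap.
            new ProcessBuilder("java", "-Xmx" + heapMb + "m",
                    "-cp", "viewer.jar", "com.example.Viewer")
                    .inheritIO().start().waitFor();
        }
    }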
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can increase GC pause times).
Heap sizes of 100 GB are possible but have downsides, mostly because they can have high pause times. If you use Azul Zing, it can handle much larger heap sizes significantly more gracefully.
The main limitation is the size of your memory. If your heap exceeds that, your application and your computer will become very slow or unusable.
A standard way around these issues, used by mapping software (which has to be able to map the whole world, for example), is to break your images into tiles. This way you only display the portions of the image that are on the screen. If you need to be able to zoom in and out, you might need to store the data at two to four levels of scale. Using this approach you can view a map of the whole world on your phone; see the sketch below.
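A toy sketch of that tiling idea (the tile size and the stubs are arbitrary; a real cache would also evict, e.g. LRU, to bound memory):

    import java.awt.Rectangle;
    import java.util.HashMap;
    import java.util.Map;

    public class TileCache {
        static final int TILE = 256; // tile edge in pixels (arbitrary choice)
        final Map<Long, byte[]> cache = new HashMap<>();

        // Only tiles intersecting the viewport are ever loaded into memory.
        void drawViewport(Rectangle view) {
            for (int tx = view.x / TILE; tx <= (view.x + view.width) / TILE; tx++)
                for (int ty = view.y / TILE; ty <= (view.y + view.height) / TILE; ty++)
                    render(tile(tx, ty));
        }

        byte[] tile(int tx, int ty) {
            long key = ((long) tx << 32) | (ty & 0xffffffffL);
            return cache.computeIfAbsent(key, k -> loadTile(tx, ty));
        }

        byte[] loadTile(int tx, int ty) { return new byte[TILE * TILE]; } // stub: read from disk
        void render(byte[] tile) { /* stub: blit to the screen */ }
    }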
It is best not to set the JVM max memory to more than 60-70% of workstation memory, in some cases even lower, for two main reasons. First, what the JVM consumes on the physical machine can be 20% or more above the heap size, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, as the OS keeps caches and buffers around the various IO devices from which it reads these objects.