Using large codebases with JetBrains WebStorm 9 - JVM

I am working on a large WebGL application, and I am using very large JS files that contain all the information about a specific model. I keep getting out-of-memory errors in WebStorm, and when I try to set -Xms higher, I get a "JVM failed to start" error. I reset it back to 512 MB, and now WebStorm will open, but it immediately gives me the "out of memory" error. Do I also need to set something for the JVM?

Go to Appearance & Behaviour | Appearance and enable the "Show Memory Indicator" checkbox.
A memory indicator will appear in the status bar at the bottom of the window.
Here's the nice thing: clicking the indicator runs a garbage collection of the currently used memory. It is good practice to free up memory once usage gets close to the limit; doing so has saved me from crashes many times.
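Since the question mentions -Xms, here is a hedged sketch of the options involved, assuming your WebStorm installation reads them from a webstorm.vmoptions file (the exact file name and location vary by version and OS). -Xmx caps the maximum heap and is usually the value to raise for out-of-memory errors; -Xms is only the initial heap size, and setting it above -Xmx is one common reason the JVM refuses to start. The numbers below are placeholders, not recommendations:
-Xms256m
-Xmx1024m
After editing the file, restart WebStorm and watch the memory indicator to confirm the new limit is in effect.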

Related

VSCode OpenOCD Debugging - inspecting a really large array?

I need to inspect the content of a large array I have in my program's RAM.
The array is 1536 elements long, and its contents are ever changing.
Every time I inspect the array in my debug window, I just get a "circle of death" / loading spinner beside the variable, which never seems to complete. The debugger eventually bogs down / freezes, forcing me to exit the debugger and restart the whole process.
Is there a different way to inspect large arrays like this? I assume OpenOCD needs to fetch all the data from the MCU and is breaking in the process 🤷‍♂️.
Perhaps there are some OpenOCD settings I could adjust? 🤔
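As a hedged workaround sketch, assuming the debug session exposes a GDB console (for example, the cpptools debugger forwards commands typed in the Debug Console when they are prefixed with -exec; buffer below is a placeholder for the real array name), you can ask GDB for a slice or a raw memory dump instead of expanding the whole variable:
set print elements 64
print buffer[0]@16
print buffer[512]@16
x/32xh &buffer[1024]
The first command caps how many elements GDB prints at once; print arr[index]@count uses GDB's artificial-array operator to show a window of count elements; and x/32xh dumps 32 halfwords of raw memory starting at a given element. Each request then only transfers a few dozen bytes from the target instead of the whole array.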

IntelliJ uses more memory than allocated

My IntelliJ runs unbearably slowly, so I was fiddling with memory settings. If you select Help -> Change Memory Settings, you can set the max heap size for IntelliJ. But even after restarting and then checking Mac's Activity Monitor, I see it using 5.5GB even though I set the heap to 4092MB.
It's using 1.5GB more than allocated for the heap. That's a lot of memory for permgen + stack, don't you think? Or could it be that this memory setting actually doesn't have any effect on the program?
What you see is the virtual memory; it may also include memory-mapped files and many other things used by JVM internals, plus the native libraries of a dozen Apple frameworks loaded into the process. There is nothing to worry about unless you get OOMs or the IDE becomes slow.
If that happens, refer to the knowledge base documents and report the issue to YouTrack with CPU/memory snapshots.
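To make the distinction concrete, here is a minimal Java sketch (illustrative, not tied to IntelliJ itself) that prints what the JVM considers heap versus non-heap. The -Xmx limit only caps the heap pool; metaspace/permgen, the code cache, thread stacks, GC bookkeeping, and mapped files all come on top of it, which is why Activity Monitor shows a larger number:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapVsProcessMemory {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        // The heap max reflects -Xmx; the non-heap pools are tracked separately,
        // and native allocations outside the JVM's pools are not reported here at all.
        System.out.printf("heap:     used=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);
        System.out.printf("non-heap: used=%d MB%n", nonHeap.getUsed() >> 20);
    }
}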

Why does IntelliJ not release memory after closing a project?

I had three projects open. One of them - Spark - was very large. Upon closing Spark there was NO difference in memory usage - as reported by OS X's Activity Monitor. Note: all projects are opened within the same IntelliJ instance.
It is in fact using just over 4GB, and I now have only two projects open. Those two projects take up only 1.5GB if I shut IntelliJ down and start it up again.
So... what can I do to "encourage" IntelliJ to release the memory it is using? It is running very, very slowly (it cannot keep up with my typing, for example).
Update: I just closed the larger of the two remaining projects. STILL no reduction in memory usage. The remaining project is a single Python file, so IntelliJ should be using under 512MB at this point!
Following up on @PeterGromov's answer, it seems that it is difficult to get the memory back. In addition, @KevinKrumwiede mentioned -XX:MaxHeapFreeRatio, which appears to be an avenue worth pursuing.
Here are a couple of those ideas taken a bit farther, from Does GC release back memory to OS?
The HotSpot JVM does release memory back to the OS, but does so reluctantly.
You can make it more aggressive by setting -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30, which will allow it to spend more CPU time on collecting and will constrain the amount of allocated-but-unused heap memory after a GC cycle.
Additionally, with Java 9 the -XX:-ShrinkHeapInSteps option can be used to apply the shrinking caused by the previous two options more aggressively. Relevant OpenJDK bug.
Do note that the shrinking ability and behavior depend on the chosen garbage collector. For example, G1 only gained the ability to yield back unused chunks in the middle of the heap with JDK 8u20.
So if heap shrinking is needed, it should be tested for a particular JVM version and GC configuration.
and from How to free memory in Java?
To extend upon the answer and comment by Yiannis Xanthopoulos and Hot Licks (sorry, I cannot comment yet!), you can set VM options like this example:
-XX:+UseG1GC -XX:MinHeapFreeRatio=15 -XX:MaxHeapFreeRatio=30
In my JDK 7 this will then release unused VM memory if more than 30% of the heap becomes free after GC when the VM is idle. You will probably need to tune these parameters.
While I didn't see it emphasized in the link below, note that some garbage collectors may not obey these parameters, and by default Java may pick one of these for you, should you happen to have more than one core (hence the UseG1GC argument above).
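To check whether a given flag combination actually gives memory back on a particular JVM, a small self-contained Java sketch like the one below can help (the class name and sizes are illustrative; whether the committed heap shrinks depends on the JVM version and collector, as the quotes above point out):
// Run with, for example:
//   java -XX:+UseG1GC -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 HeapShrinkDemo
public class HeapShrinkDemo {
    public static void main(String[] args) throws InterruptedException {
        System.out.printf("committed before allocation: %d MB%n", committedMb());

        byte[][] blocks = new byte[32][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[8 * 1024 * 1024];   // grow the heap by roughly 256 MB
        }
        System.out.printf("committed after allocation:  %d MB%n", committedMb());

        blocks = null;        // drop all references
        System.gc();          // request a full collection
        Thread.sleep(2000);   // give the collector a moment to uncommit memory
        System.out.printf("committed after GC:          %d MB%n", committedMb());
    }

    // Runtime.totalMemory() is the heap the JVM has currently committed,
    // which is the part of the process footprint these flags can shrink.
    private static long committedMb() {
        return Runtime.getRuntime().totalMemory() >> 20;
    }
}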
I am going to add -XX:MaxHeapFreeRatio to IJ and will report back if it helps.
Our application presently only runs on Java 7, so the first approach above is not yet viable - but there is hope, since our app is moving to JDK 8 soon.
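As a sketch of where such a flag would go for the IDE itself (assuming your IntelliJ version reads custom options from an idea.vmoptions or idea64.vmoptions file, for example one created via Help | Edit Custom VM Options; the values here are placeholders, not recommendations):
-Xmx2048m
-XX:+UseG1GC
-XX:MinHeapFreeRatio=15
-XX:MaxHeapFreeRatio=30
IntelliJ needs a restart before new options take effect; the memory indicator mentioned below is a quick way to confirm the heap limit afterwards.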
https://www.jetbrains.com/help/idea/status-bar.html
I used this:
Shows the current heap level and memory usage. Visibility of this section in the Status bar is defined by the Show memory indicator check box in the Appearance page of the Settings/Preferences dialog. It is not shown by default.
Click the memory indicator to run the garbage collector.
The underlying Java virtual machine only supports growing its heap. So even if the IDE doesn't need all of it after closing all projects, it is still allocated and counted as used by the OS.

Meaning of generation-analysis memory growth clearing on the simulator's "simulate memory warning" event

I was trying to debug the memory growth in generation analysis and was getting frustrated (lots of objects that were the result of calls to CGGlyphBitmapCreate were not being released). Then I ran the program in the simulator, captured many generation snapshots, and triggered a simulated memory warning. Almost every generation cleared to zero (a few had a few bytes here and there). Does that mean my code is fine and I should not worry about it? How can I prevent the growth, so that it doesn't have to wait for a simulated memory warning event to be cleared? (By the way, all this growth was caused by system libraries.)
If the memory is getting released upon memory warning, then you're probably OK. The OS will cache all sorts of stuff (that it will free/reuse as it sees fit) that you don't generally have to be concerned about.
Still, I would run the code through the static analyzer (press shift+command+B in Xcode or select "Analyze" on the Xcode "Product" menu) just to be safe.

Memory usage drops after a week

I have this app written in VB.Net with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the correct usage, but I just wanted to check whether I had any little memory leaks - which I did, but they are solved now. In the first week the memory usage was around 100MB, but in the second week it showed around 50MB.
So why did it suddenly drop while still running the exact same code?
I can hardly imagine that the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time in between those periods to do its thing.
Or perhaps there is just a better, more reliable way to get the memory usage of a process.
Process.WorkingSet64 doesn't report the full memory usage; it omits the memory that is swapped to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks, you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code, and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.