I'm currently using several Advantech devices with LabVIEW.
When I call one of the functions in the DLL it consumes memory, and if I call it several times the memory usage grows and grows until LabVIEW crashes or the machine bluescreens.
I've tried flushing the memory after each call, which helps (it now crashes after 30 minutes; before, it was 10 seconds).
Problem solved after a few weeks:
There was a memory leak in the manufacturer's DLL. I changed my way of measuring to avoid the problem.
Only a few percent of users were hitting this.
I have only one project open (an ordinary Spring Framework project), and the IDE is using a crazy amount of CPU:
JVisualVM CPU sample:
Note that this started happening only recently.
Any ideas?
The correct answer was posted by @matt-helliwell if you're coming from a version older than 2016.2.
File -> Invalidate Caches and Restart
If the above doesn't fix your problem, track this issue:
https://youtrack.jetbrains.com/issue/IDEA-157837
I invalidated the caches and it solved the problem for some time, but after a couple of days IDEA (my version is 2017.1.3) started to work slowly, with freezing delays, again. Finally I increased the maximum available memory to 2 GB (the -Xmx parameter in the idea.exe.vmoptions/idea64.exe.vmoptions file) and now it works perfectly.
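For reference, the change amounts to editing (or adding) the -Xmx line in that vmoptions file, one JVM option per line. 2048m is simply the value that worked above; tune it to your machine's RAM:

-Xmx2048m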
I solved the problem by running the 64-bit version of IDEA:
JetBrains\IntelliJ IDEA 2016.2.4\bin\idea64.exe
Another possible solution: my IDEA was very slow because a huge SQL file was open, and it was consuming all my CPU.
It took me a long time to notice this was happening only when opening a specific utility class with more than 1000 lines of code.
This class had maybe 50 public static methods (the reason why it is a utility class...), all pure.
At first, I thought it was stuck in a loop of "Performing code analysis", because that was the task running heavily in the background, as shown when hovering the mouse over the green check at the top of the offending class's editor window, but in reality it was slowly scanning every place in the entire source tree where the class's code was used.
It took about 45 minutes to scan the entire class, with CPU usage maxed out (100%) the whole time.
Once the class is closed, the usage stops.
The issue (at least with AS Dolphin 2022-23) is that the analysis is never persisted, so if the window is closed and reopened later, the analysis starts again from zero. It never gets cached...
I've written a small, single-line-oriented, UDP-based display service to support the Raspberry Pi projects I frequently work on, where it's nice to see the results of the sensor data being captured. This is a rewrite of that program using GTK3 v3.18.9 and GLib2 v2.46.2. I'm developing on OS X El Capitan.
The process seems to double in real memory size every 30 minutes or so, depending on traffic, so I'm presuming I have a memory leak somewhere. But for the life of me I can't see where in the code it could possibly be. Valgrind did not initially work for me, so I have some studying to do to resolve whatever that issue is.
Meanwhile, I was hoping a different pair of eyes might suggest a coding cause for this traffic-driven (at least I think so) memory leak.
The program starts up using about 10 MB of real memory, jumps to 14 MB once it has received 64 messages in total, then slowly grows from there. After 64 messages, I start deleting the oldest (65th) message off the end of the list, on the assumption that this saves memory, since the program might run for weeks.
Here is the code for the test client and the display service:
https://gist.github.com/skoona/c1218919cf393c98af474a4bf868009f
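One classic GLib pitfall worth ruling out in exactly this kind of list-trimming code: unlinking a node from the list frees the node, but not the payload it pointed to, so every trimmed message leaks. Below is a minimal sketch of a leak-free trim, assuming the messages are heap-allocated strings held in a GQueue; the names and structures here are hypothetical, and the actual code in the gist may differ.

#include <glib.h>

#define MAX_MESSAGES 64

/* Hypothetical sketch; the real display service may use different types. */
static GQueue *messages = NULL;

static void add_message(const gchar *text) {
    if (messages == NULL)
        messages = g_queue_new();

    /* Keep our own copy; the caller's datagram buffer gets reused. */
    g_queue_push_head(messages, g_strdup(text));

    /* g_queue_pop_tail() only unlinks the oldest entry; the payload
     * must be freed explicitly, or the process grows with traffic,
     * starting only once the list first fills, which would match the
     * jump observed at message 64. */
    while (g_queue_get_length(messages) > MAX_MESSAGES)
        g_free(g_queue_pop_tail(messages));
}

int main(void) {
    for (int i = 0; i < 1000; i++)
        add_message("sensor reading");
    g_queue_free_full(messages, g_free);  /* frees payloads and the queue */
    return 0;
}

The same rule applies to plain GList/GSList code: g_list_remove() and friends unlink the node but never free the data.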
I have an app written in VB.Net with WinForms that shows some stats and pictures on a big-screen monitor. I also monitor the memory usage of said app by using this:
Process.WorkingSet64
I know Windows does not always report the correct usage, but I just wanted to check that I didn't have any little memory leaks (I had some, but they are solved now). The first week the memory usage was around 100 MB, and the second week it showed around 50 MB.
So why did it all of a sudden drop while still running the exact same code?
I can hardly imagine that the garbage collector kicked in this late, since the app refreshes every 10 seconds and has ample time between those refreshes to do its thing.
Or perhaps there is a more reliable way to get the memory usage of a process.
Process.WorkingSet64 doesn't report the full memory usage; it omits the memory that is swapped to disk:
The value returned by this property represents the current size of working set memory used by the process. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. (MSDN)
Even if your system was never low on free memory, you may have minimized the application window, which caused Windows to trim its working set.
If you want to look for memory leaks you should probably use Process.PrivateMemorySize64 instead. Your shared memory is going to contain just executable code and it's going to remain more or less constant throughout the life of the process, so you should focus on the private memory.
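If it helps to see the distinction concretely, the same two counters can be read from native code through the Win32 psapi API. A small hedged C++ sketch (the Win32 calls are standard; the logging around them is illustrative only):

#include <windows.h>
#include <psapi.h>   // may require linking psapi.lib on older toolchains
#include <iostream>

int main() {
    PROCESS_MEMORY_COUNTERS_EX pmc{};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS *>(&pmc),
                             sizeof(pmc))) {
        // WorkingSetSize is what Process.WorkingSet64 reports; it drops
        // when Windows trims the process (for example after the window
        // is minimized). PrivateUsage, the Process.PrivateMemorySize64
        // counterpart, does not, which makes it the better leak metric.
        std::cout << "working set:   " << pmc.WorkingSetSize << " bytes\n";
        std::cout << "private bytes: " << pmc.PrivateUsage << " bytes\n";
    }
    return 0;
}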
Just as the title suggests, I've come across an issue with boost::serialization when serializing a huge amount of data to a file. The problem is that the serialization part of the application has a memory footprint of around 3 to 3.5 times that of the objects being serialized.
It is important to note that the data structure I have is a three-dimensional vector of base-class pointers, plus a pointer to that structure. Like this:
using namespace std;
vector<vector<vector<MyBase*> > >* data;
This is later serialized with code analogous to this:
ar & BOOST_SERIALIZATION_NVP(data);
boost/serialization/vector.hpp is included.
Classes being serialised all inherit from "MyBase".
Now, since the start of my project I've used different archives for serialization, from the typical binary archive to text, XML, and finally the polymorphic binary/XML/text archives. Every single one of them behaves in exactly the same way.
Typically this wouldn't be a problem if I had to serialize small amounts of data, but the number of objects I have is in the millions (ideally around 10 million), and my tests consistently show that the memory allocated by the boost::serialization part of the code is around 2/3 of the application's whole memory footprint while writing the file.
This amounts to around 13.5 GB of RAM for 4 million objects, where the objects themselves take 4.2 GB. That is as far as I've been able to push my code, since I don't have access to a machine with more than 8 GB of physical RAM. I should also note that this is a 64-bit application running on Windows 7 Professional x64, but the situation is similar on an Ubuntu box.
Does anyone have any idea how I would go about troubleshooting this? It is unacceptable for me to have such high memory requirements for an application that will not use as much memory while running as it does while serializing.
Deserialization isn't as bad: it allocates around 1.5 times the needed memory. That is something I could live with.
I tried turning tracking off with boost::archive::archive_flags::no_tracking, but it behaves exactly the same.
Does anyone have any idea what I should do?
Using Valgrind, I found that the main cause of the memory consumption is a map inside the library that is used to track pointers. If you are certain that you do not need pointer tracking (meaning you are sure there is no pointer aliasing), disable tracking. You can find the main concepts of disabling tracking here. In short, you must do something like this:
BOOST_CLASS_TRACKING(vector<vector<vector<MyBase*> > >, boost::serialization::track_never)
In my question I wrote a version of this macro that lets you disable tracking for a template class. This should have a significant impact on your memory consumption.
Also note that there are pointers inside the containers; if you want track_never, you must disable tracking for those types too. Currently I have not found any way to do this properly.
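If it's useful, here is roughly what such a macro expands to, generalized by hand so it covers every std::vector<T> instantiation at once. This follows the pattern of BOOST_CLASS_TRACKING's own expansion, but it is a sketch; verify it against your Boost version before relying on it:

#include <vector>
#include <boost/config.hpp>
#include <boost/mpl/int.hpp>
#include <boost/mpl/integral_c_tag.hpp>
#include <boost/serialization/tracking.hpp>

namespace boost {
namespace serialization {

// Hand-written equivalent of BOOST_CLASS_TRACKING(std::vector<T>, track_never),
// templated over T so it applies to every instantiation, including the
// nested vectors above.
template <class T>
struct tracking_level< std::vector<T> > {
    typedef mpl::integral_c_tag tag;
    typedef mpl::int_<track_never> type;
    BOOST_STATIC_CONSTANT(int, value = type::value);
};

} // namespace serialization
} // namespace boost

Note that this only switches off tracking for the vectors themselves; the MyBase* elements are still tracked unless their class is marked track_never as well, which is exactly the open problem mentioned above.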
I have a WebSphere Portal application running four instances on a single box, and after about 7 days of runtime there is only 130-150 MB of address space free in native memory (measured with pmap). Somewhere in the next 7-10 days the figure drops well below 100 MB (which we deem dangerous, at which point we start to recycle the JVM). If we don't recycle, the JVM eventually crashes with a SIGSEGV signal.
We've done some initial investigation into class counts and the size of JIT code. Class counts grow, but slowly, from 50k onwards: a couple hundred per day. The JITC size reaches about 210 MB after 7 days and grows about 1 MB per day after that. In our previous experience these are not sinister values.
What we need is a way to break down what is in the native heap, whether it is threads (all thread counts appear normal, and we have fixed-size thread pools), String pools, constant pools, bytecode, or something else.
One lead we are trying now is reducing the reflection inflation threshold to 0 (shutting off the generated bytecode accessors for reflectively accessed classes). This app uses a lot of pointcutting and a lot of reflection, so we're hoping there's a good chance this helps.
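For anyone trying the same experiment: the knob in question is the sun.reflect.inflationThreshold system property, passed as a generic JVM argument. As far as I recall, the IBM JDK underneath WebSphere treats 0 as "never inflate" (the behavior described above), while HotSpot interprets 0 differently, so confirm against your SDK's documentation:

-Dsun.reflect.inflationThreshold=0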
Any advice is welcome.
It might be a bit of back-and-forth, but have you enabled GC logging and made sure the Java heap isn't growing over time? Have you looked at your perm space? The SIGSEGV is an interesting one, though; I'd expect a more JVM-ish crash for any in-Java memory issue.
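For the GC-logging suggestion: on the IBM JVM that WebSphere runs on, generic JVM arguments along these lines should produce a verbose GC log that can be trended over time (the log path is just an example; check the flag spellings against your JVM version):

-verbose:gc
-Xverbosegclog:/tmp/portal_gc.log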
After a lengthy investigation, this turned out to be a WebSphere bug: PK72252 (CALLS TO CLASSLOADER.GETRESOURCEASSTREAM ARE SLOW). Fixed in 6.0.2.33.