I recently used Valgrind with glib (with GObject), and it doesn't work very well.
I added G_SLICE=always-malloc G_DEBUG=gc-friendly on the command line,
but Valgrind still reports many "possibly lost" blocks.
Since I use Valgrind in an automated test suite, I also pass --error-exitcode=1,
but those "possibly lost" reports make Valgrind exit with 1, which
causes my test to fail.
Does anyone know how to make Valgrind not treat "possibly lost" as errors?
With Valgrind 3.7.0, use:
--show-possibly-lost=no|yes    show possibly lost blocks in leak check? [yes]
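For example, combined with the glib variables from the question, an automated run might look like this sketch (the test binary name is a placeholder):
G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --leak-check=full --show-possibly-lost=no --error-exitcode=1 ./mytest
With --show-possibly-lost=no, "possibly lost" blocks are no longer reported, so they cannot trigger the non-zero exit code.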
Check https://live.gnome.org/Valgrind for tips on how to use Valgrind with glib/gtk+/gnome. You might be interested in the Suppressions section.
Does anyone know how to make Valgrind not treat "possibly lost" as errors?
Use --errors-for-leak-kinds=definite for this. See the Memcheck Command-Line Options section of the Valgrind User Manual.
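A sketch of how this combines with an exit-code-based test suite (binary name assumed):
valgrind --leak-check=full --errors-for-leak-kinds=definite --error-exitcode=1 ./mytest
Only "definitely lost" blocks are then counted as errors; "possibly lost" and "still reachable" blocks are still listed in the report but do not affect the exit code.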
Is there a parameter in valgrind massif that allows me to only track allocations made by certain functions and classes? I would like to make a run that only traces (de)allocations made by std::vector.
Using the option --xtree-memory=full, you might be able to visualise the stack traces that interest you.
See http://www.valgrind.org/docs/manual/manual-core.html#manual-core.xtree for more details.
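A sketch of such a run (the output file name and the viewer are assumptions; by default Valgrind writes xtmemory.kcg.<pid>):
valgrind --tool=massif --xtree-memory=full --xtree-memory-file=xtmemory.kcg ./myprog
kcachegrind xtmemory.kcg
In the resulting execution tree you can then narrow down to the allocation stacks you care about, e.g. those going through std::vector.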
I want to suppress Valgrind's reporting of some "definitely lost" memory by the library I'm using. I have tried valgrind --gen-suppressions=yes ./a, but it only prompts for errors such as "conditional jump or move depends on uninitialised value".
Is there a way to generate suppressions for straight-up memory leaks? If not, is it difficult to write them by hand? Valgrind's manpage seems to discourage it, at least for errors.
Run valgrind with the --gen-suppressions=all and --log-file=memcheck.log options, and manually copy/paste the logged suppressions into the suppression file.
valgrind --leak-check=full --gen-suppressions=all --log-file=memcheck.log ./a
If you find the output is mixed with the application's output, redirect Valgrind's output to a separate file descriptor instead: --log-fd=9 9>>memcheck.log
valgrind --leak-check=full --gen-suppressions=all --log-fd=9 ./a 9>>memcheck.log
To be prompted for leaks that aren't generating errors, you have to run
valgrind --leak-check=full --gen-suppressions=yes ./a
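A generated leak suppression looks roughly like the following sketch (the function and library names here are placeholders; real entries come straight from the log):
{
   suppress_libfoo_definite_leak
   Memcheck:Leak
   match-leak-kinds: definite
   fun:malloc
   fun:foo_create
   obj:/usr/lib/libfoo.so.1
}
Replace the <insert_a_suppression_name_here> placeholder in the logged block with a descriptive name before adding it to your suppression file.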
There is a page on how to generate such a file based on your errors: https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto
It's not perfect, but it's a reasonable starting point.
You can write a suppression file of your own (though the format doesn't seem obvious at first):
--suppressions=<filename> [default: $PREFIX/lib/valgrind/default.supp]
If the question is how to disable an entire library, see this.
Valgrind's man page.
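A minimal sketch of wiring a hand-written suppression file into a run (the file name is a placeholder):
valgrind --leak-check=full --suppressions=my_leaks.supp ./a
You can pass --suppressions multiple times to stack several files on top of the default one.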
When I run code in OMNeT++ (an Eclipse-based IDE), the simulation crashes after a certain number of events. To check for a memory leak, I ran the code under Valgrind, and under Valgrind my simulation runs perfectly fine. I don't know the reason for this peculiar behavior. Can someone explain it?
Probably a "heisenbug", i.e. an issue that changes its behavior when you try to examine it. It could be an uninitialized variable or some other obscure bug that does not surface when the program starts with a different memory layout (as it does under Valgrind).
I would still look into the Valgrind logs, even if the crash does not occur, as the logs may contain some hints.
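If you suspect an uninitialised-memory heisenbug, Memcheck's fill options can make the layout difference deterministic; a sketch (the fill values are arbitrary, and the binary name is a placeholder):
valgrind --malloc-fill=0xAA --free-fill=0xDD ./simulation
--malloc-fill writes a fixed byte into freshly allocated blocks and --free-fill into freed ones, which often makes a crash that depends on leftover memory contents reproducible.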
When I write a routine to compare the performance of two approaches, which optimization flags should I use: -O0, -O2, or -g?
You should test the performance of your code with each of the settings. Ideally a larger number (-O0, -O1, -O2, -O3) means better performance, since more or better optimization is applied, but that is not always the case.
Likewise, depending on how your code is written, some of it may be removed in a way you didn't expect, by the language rules, the compiler, or both (see the sketch after this answer). So not only do you need to test the performance of your code, you need to test the generated program to verify that it actually does what you think it does.
There is definitely no single optimization setting that gives the best performance for every piece of code a compiler can build. You have to test the settings and the compiler on a particular system to verify that, for that system, the code does indeed run faster. And measuring performance is full of traps that make it easy to misread the results, so be careful about how you benchmark.
For gcc, folks usually say -O3 is risky and -O2 is the best performance/safety trade-off, and for the most part that is the case: -O2 is used widely enough that many bugs have been flushed out. -O2 does not always produce the fastest code, but it generally produces faster code than -O0 and -O1. Using a debugger can defeat the optimization or remove it altogether, so never measure performance with a debugger-oriented build or under a debugger. Test the way the user will run the program; if the user runs it under a debugger, test that way, otherwise don't.
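As a hedged illustration of the "code may be removed" point above, here is a minimal C sketch (not from the original post) of a micro-benchmark that -O2/-O3 is allowed to gut, because the timed loop's result is never used:
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start = clock();
    long sum = 0;
    for (long i = 0; i < 100000000L; i++)
        sum += i;                        /* dead at -O2: 'sum' is never used below */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("elapsed: %f s\n", secs);     /* may report ~0 under optimization */
    /* Printing or otherwise consuming 'sum' would force the compiler to keep the loop. */
    return 0;
}
Compiled with -O0 the loop actually runs; with -O2 or -O3 the compiler may delete it entirely, so the two builds are not measuring the same work.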
In GCC, -O0 disables compiler optimizations entirely. -g adds debugging information to the executable so you can use a debugger.
If you want to enable speed optimizations, use -O1 or -O2. See man gcc(1) for more information.
If you want to measure the performance of your code, use a profiler such as Valgrind or gprof.
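A sketch of that workflow (file names assumed; callgrind is Valgrind's profiling tool):
gcc -O2 -o bench bench.c
valgrind --tool=callgrind ./bench        # writes callgrind.out.<pid>
callgrind_annotate callgrind.out.*       # per-function cost summary
For gprof you would instead compile with -pg, run the program to produce gmon.out, and inspect it with gprof.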
Actually, if you care about performance you should definitely test -O3. Why give away potential optimisations?
And yes, there is often a small but measurable difference between -O2 and -O3.
-g is not an optimisation flag. In GCC it does not by itself disable optimisations, but debug build configurations typically default to -O0, so make sure a representative benchmark build actually has optimisation enabled.
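A quick sketch for checking the -O2 vs -O3 claim on your own machine (source and binary names are placeholders):
gcc -O2 -o bench_O2 bench.c
gcc -O3 -o bench_O3 bench.c
time ./bench_O2
time ./bench_O3
Run each several times and compare the best-of-N timings, since single runs are noisy.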
I have a program which is an XMPP client that connects to a server.
I use the gloox library to do that.
When I run the program, it runs ok and connects to the server.
But when I run it under valgrind, the program never sends
<iq id='uid:4efa1893:327b23c6' type='set' from='user#server/ressource' xmlns='jabber:client'><session xmlns='urn:ietf:params:xml:ns:xmpp-session'/></iq>
to the server.
Has anybody experienced such a problem?
Are there any parameters I specifically need to run Valgrind with to make sure it provides the same environment as a normal program execution?
The very first question is: did Valgrind report any errors in the execution of your program?
If your program is well-defined, and Valgrind didn't report any errors in it, then the program is supposed to behave exactly the same way under Valgrind as without it (only slower); no special settings required.
It is somewhat more likely that Valgrind did report some errors, and if so, your program is probably not well-defined, in which case your question is moot: your program doesn't behave the same because it is not well-defined (i.e. it depends on undefined behavior).
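As a first diagnostic pass, something like the following sketch is reasonable (the binary name is a placeholder):
valgrind --track-origins=yes --leak-check=full ./xmpp_client
--track-origins=yes makes "uninitialised value" reports point back to where the uninitialised data was created, which helps decide whether the client really is well-defined. One caveat: Valgrind also slows the program down considerably, so network timeouts in the client or on the server can fire differently even for a well-defined program.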