I am trying to find the correct tool for displaying the results contained in the file generated by:
valgrind --tool=memcheck --xtree-memory=full --xml=yes --xml-file=memcheck_result.xml [prog]
This produces a file named xtmemory.kcg.[pid], which is largely unreadable... at least to me!
Since we have kcachegrind for visualising callgrind trees and graphs, is there really no tool for the xtmemory file?
kcachegrind is the tool to use to visualise xtmemory files.
See http://www.valgrind.org/docs/manual/manual-core.html#manual-core.xtree
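For example, a minimal session could look like this (the xtree file is written in callgrind format, so kcachegrind can open it directly; the pid suffix below is just a placeholder for whatever your run produces):

valgrind --tool=memcheck --xtree-memory=full ./prog
kcachegrind xtmemory.kcg.1234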
I've compiled a program using the -pg switch and linked it using the -pg switch. When my program is executed, a file "gmon.out" is produced. However, after running gprof on the file, there is no data other than the standard explanatory text describing the columns.
Why would there be nothing in the gmon.out file? The program is obviously compiled and linked correctly, as a new "gmon.out" file is generated; it just has no data.
This happens with recent versions of the GNU C compiler, which build position-independent executables (PIE) by default; -pg profiling produces an empty profile in that mode.
You can use the -no-pie option as a workaround:
gcc -no-pie
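A minimal sketch of the full workflow, assuming a single-file program (names are placeholders):

gcc -pg -no-pie -o myprog myprog.c     # compile and link with profiling, PIE disabled
./myprog                               # running the program writes gmon.out
gprof ./myprog gmon.out > report.txt   # the report should now contain actual data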
When researching how to compress a bunch of PDFs with pictures inside (ideally in a lossless fashion, but I'll settle for lossy), I found that a lot of people recommend doing this:
$ pdf2ps file.pdf
$ ps2pdf file.ps
This works! The resulting file is smaller and looks at least good enough.
How / why does this work?
Which settings can I tweak in this process?
If there is some lossy conversion, which one is that?
Where is the catch?
People who recommend this procedure rarely do so from a background of expertise or knowledge -- it's rather based on gut feelings.
The detour of generating a new PDF via PostScript and back (also called "refrying" a PDF) is never going to give you optimal results. Sometimes it is useful, e.g. in cases where the original PDF doesn't print at all or cannot be processed by another application. But these cases are very rare.
In any case, this "roundtrip" conversion will never reproduce the original PDF exactly.
Also, pdf2ps and ps2pdf aren't independent tools at all: they are just simple wrapper scripts around a Ghostscript (gs, or gswin32c.exe on Windows) command line. You can check that yourself by doing:
cat $(which ps2pdf)
cat $(which pdf2ps)
This will also reveal the (default) parameters these simple wrappers use for the respective conversions.
If you are unlucky, you will have an ancient Ghostscript installed. The PostScript generated by pdf2ps will then be Level 1 PS, which is "lossy" for many fonts that more modern PDF files may use: previously vector-based fonts get rasterized. Not exactly the output you'd like to look at...
Since both tools use Ghostscript anyway (just behind your back), you are better off running Ghostscript yourself. This gives you more control over the parameters it uses. Especially advantageous is the fact that this way you can get a direct PDF->PDF conversion, without any detour via an intermediate PostScript file.
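As a rough sketch, such a direct invocation could look like this (input.pdf and output.pdf are placeholders; /ebook is one of Ghostscript's predefined quality presets, next to /screen, /printer and /prepress):

gs -sDEVICE=pdfwrite \
   -dCompatibilityLevel=1.4 \
   -dPDFSETTINGS=/ebook \
   -dNOPAUSE -dBATCH -dQUIET \
   -sOutputFile=output.pdf \
   input.pdf

The -dPDFSETTINGS presets mainly control how aggressively images are downsampled and recompressed, which is also where the lossy part of the size reduction comes from.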
Here are a few answers which would give you some hints about what parameters you could use in order to drive the file size down in a semi-controlled way in your output PDF:
Optimize PDF files (with Ghostscript or other) (StackOverflow)
Remove / Delete all images from a PDF using Ghostscript or ImageMagick (StackOverflow)
Using the different Valgrind tools for profiling and debugging individual programs is easy. However, I am working on a big project with a lot of modules and packages (for a router SoC).
When building the model, how do I use Valgrind for debugging the entire model? Should I include Valgrind in the makefile (because I don't want to run Valgrind separately for every individual file)? All I want is that, while building the entire big model, Valgrind produces output log files for every individual C program.
You may try something like valgrind --trace-children=yes --log-file="log.%p" the-whole-shebang. The %p is substituted with the pid. You can throw in some --trace-children-skip or --trace-children-skip-by-arg options if you wish to skip certain uninteresting parts; see the valgrind man page for details. Or perhaps a combination of --log-socket and valgrind-listener? It would be easier to filter the output that way, if you are up for a little scripting.
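A rough sketch of what that could look like (run_model stands in for whatever command starts your build and tests):

valgrind --trace-children=yes \
         --log-file=valgrind-%p.log \
         --trace-children-skip='*/sh,*/make' \
         ./run_model

For the socket variant, start the listener in one terminal and point valgrind at it from another:

valgrind-listener
valgrind --trace-children=yes --log-socket=127.0.0.1 ./run_model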
I am totally new to Flash development; I don't even know ActionScript yet.
I have to improve some existing flash application, so at first I need to understand the code.
I want to use some tool for code analysis, something to visualize class dependencies and code structure. I googled and found out about the Apparat tool. Now I'm struggling with it because I cannot find documentation that describes how to use it. I'm frustrated, but it seems to be the only such tool.
So I started with example.
I've set up apparat running on FDT following this guide:
http://www.webdevotion.be/blog/2010/06/02/how-to-get-up-and-running-with-apparat/
The example (http://blog.joa-ebert.com/2010/05/26/new-apparat-example/) builds well and creates two SWF files. (I'm using an Ant builder.)
Now I want to analyze an existing SWF and see a PNG with class dependencies.
How should I do that?
What do I have to add and where?
Or maybe someone can explain how to use dump from the Windows command line? Something like:
dump example.swf exampleAnalysis.png
After resolving all dependencies (which was tricky), I managed to get dump running
dump -i example.swf -uml
But it saves the UML diagram in .DOT format, which is really hard to view: Graphviz GVEdit cannot zoom and exports to PNG only what you currently see (a messy, impossible-to-read zoomed-out graph), Smyrna doesn't work, and ZGRViewer fails to load some of the files.
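One workaround, assuming the Graphviz command-line tools are installed, is to skip the interactive viewers and render the .DOT file to a scalable format yourself:

dot -Tsvg example.dot -o example.svg   # SVG zooms freely in any browser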
I'm trying to integrate custom dynamic analysis tools into CDash, such as KWStyle, CppCheck and Visual Leak Detector.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works:
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml I submit?
The extreme solution I see is that I'd need to write a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer on Google, and even the XML schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to Valgrind, for instance. I'm not so certain that I should use dynamic analysis. Maybe a custom build type (alongside Continuous / Experimental / Nightly) would work better, like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
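For example, outside of a ctest -S script the equivalent would be roughly the following (paths are placeholders, and this assumes the project does include(CTest); in non-script mode the cache variable is called MEMORYCHECK_COMMAND):

cmake -DMEMORYCHECK_COMMAND=/usr/bin/valgrind /path/to/source
ctest -D ExperimentalMemCheck    # writes Testing/<tag>/DynamicAnalysis.xml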
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" xml elements.
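For example, from the top of a CMake source checkout:

grep -n cmCTestMemCheckResultLongStrings Source/CTest/cmCTestMemCheckHandler.cxx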
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far: for a tool that runs on the tests defined in the CMake script, Dynamic Analysis is the thing.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files with the ctest_submit command by using the FILES parameter.
I also found out that you can add custom "build names" alongside Continuous, Nightly and the others,
and that you can set the builds from certain machines to be automatically placed under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.