I use gprof2dot to profile my application:
./gprof2dot.py -f callgrind callgrind.out.x | dot -Tsvg -o output.svg
Although it gives me a beautiful graphical profile, the name of each function in each box is very long and extends far beyond the screen, since the Boost library makes heavy use of templates. Just look at one of the function names:
std::reference_wrapper<boost::numeric::odeint::controlled_runge_kutta<boost::numeric::odeint::runge_kutta_dopri5, boost::numeric::odeint::default_error_checker<double, boost::numeric::odeint::range_algebra, boost::numeric::odeint::default_operations>, boost::numeric::odeint::initially_resizer, boost::numeric::odeint::explicit_error_stepper_fsal_tag> > std::ref<boost::numeric::odeint::controlled_runge_kutta<boost::numeric::odeint::runge_kutta_dopri5, boost::numeric::odeint::default_error_checker<double, boost::numeric::odeint::range_algebra, boost::numeric::odeint::default_operations>, boost::numeric::odeint::initially_resizer, boost::numeric::odeint::explicit_error_stepper_fsal_tag> >(boost::numeric::odeint::controlled_runge_kutta<boost::numeric::odeint::runge_kutta_dopri5, boost::numeric::odeint::default_error_checker<double, boost::numeric::odeint::range_algebra, boost::numeric::odeint::default_operations>, boost::numeric::odeint::initially_resizer, boost::numeric::odeint::explicit_error_stepper_fsal_tag>&)
Is there any way to strip out the namespaces, template parameters, and even the function arguments to make the names smaller in the graph?
PS: The image is very big and I could not convert it to PNG. I do not know if you can download and open this 10 MB image (link).
./gprof2dot.py has two related options:
-s, --strip strip function parameters, template parameters, and
const modifiers from demangled C++ function names
-w, --wrap wrap function names
I personally prefer -w as I can still tell templates apart.
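For example, applied to the pipeline from the question (a sketch; the file names are taken from the question):
./gprof2dot.py -f callgrind -s callgrind.out.x | dot -Tsvg -o output.svg
./gprof2dot.py -f callgrind -w callgrind.out.x | dot -Tsvg -o output.svg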
SYNOPSIS
From man(1):
-l
Format and display local manual files instead of
searching through the system's manual collection.
-t
Use groff -mandoc to format the manual page to stdout.
From groff_tmac(5):
papersize
This macro file is already loaded at start-up by troff so it
isn't necessary to call it explicitly. It provides an interface
to set the paper size on the command line with the option
-dpaper=size. Possible values for size are the same as
the predefined papersize values in the DESC file (only
lowercase; see groff_font(5) for more) except a7–d7.
An appended l (ell) character denotes landscape orientation.
Examples: a4, c3l, letterl.
Most output drivers need additional command-line switches -p
and -l to override the default paper length and orientation
as set in the driver-specific DESC file. For example, use the
following for PS output on A4 paper in landscape orientation:
sh# groff -Tps -dpaper=a4l -P-pa4 -P-l -ms foo.ms > foo.ps
THE PROBLEM
I would like to use these to format local and system man pages to print out, but want to switch the paper size from letter to A4. Unfortunately I couldn't find anything in man(1) about passing options to the underlying roff formatter.
Right now I can use
zcat `man -w man` | groff -tman -dpaper=a4 -P-pa4
to format man(1) on stdout, but that's kind of long and I'd rather have man build the pipeline for me if I can. In addition the above pipeline might need changing for more complicated man pages, and while I could use grog, even it doesn't detect things like accented characters (for groff's -k option), while man does (perhaps using locale settings).
The man command is typically intended only for searching for and displaying manual pages on a TTY device, not for producing typeset, printed output.
Depending on the host system and/or the programs of interest, a fully typeset printable form of a manual page can sometimes be generated when a program (or the whole system) is compiled. This is more common for system documents and less common for manual pages, though.
Note that depending on which manual pages you are trying to print there may be additional steps required. Traditionally the following pipeline would be used to cover all the bases:
grap $MANFILE | pic | tbl | eqn /usr/pub/eqnchar | troff -tman -Tps | lpr -Pps
Your best solution for simplifying your command line would probably be to write a small script which encapsulates what you're doing. Note that man -w might find several different filenames, so you would probably want to print each separately (or maybe only print the first one).
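A minimal sketch of such a script, built around the pipeline already shown in the question (the A4 settings and the assumption that the page source is gzip-compressed are carried over from there; only the first match from man -w is used):
#!/bin/sh
# Typeset the first matching manual page on A4 paper (sketch).
# Assumes the page source is gzip-compressed, as in the question's pipeline.
page=$(man -w "$1" | head -n 1)
zcat "$page" | groff -tman -dpaper=a4 -P-pa4 > "$1.ps"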
I was using the Valgrind tool Callgrind and KCachegrind for profiling a large project and was wondering if there is a way to make Callgrind report the stats from all the functions (not just the most expensive ones).
To be specific: when I visualized the call graph in KCachegrind, it included only those functions that are quite expensive, but I was wondering if there is a way to include all the functions from the project in the call graph. The command used for generating the profiling info is given below:
valgrind --dsymutil=yes --tool=callgrind $EXE
I am not sure if I have to give any options to valgrind, or maybe compile the application with a different optimization level. This might be something trivial but I couldn't find a solution. Any pointers regarding this are highly appreciated.
Thanks!
It occurred to me yesterday. As shown in the picture, in the call graph view of KCachegrind there is a right-click menu in which you can set the threshold above which a node will be visualized.
There is also an option "no minimum"; however, it cannot be chosen. I think that may be because, if every function, no matter how trivial, took up a node, the graph might become too large to handle.
I just found that the script gprof2dot can handle this.
The script can convert the output of callgrind to dot, which can be visualized as a graph. The script has two relevant parameters:
-n PERCENTAGE, --node-thres=PERCENTAGE to eliminate nodes below this threshold [default: 0.5]. In order to visualize all nodes in the graph, you can set the parameter like -n0
-e PERCENTAGE, --edge-thres=PERCENTAGE to eliminate edges below this threshold [default: 0.1]. In order to visualize all edges in the graph, you can set the parameter like -e0
In order to generate the complete call graph you would use both of the options (-n0 and -e0).
I've tried this; however, as the generated graph was too large, dot warned me that "graph is too large for cairo-renderer bitmaps. Scaling by 0.328976 to fit." You can set the output format to EPS, which can handle this, and you can also change the parameters to suit your objective.
Example
Let's say that you have a callgrind output file called callgrind.out.1992. To generate a complete call graph you would use:
gprof2dot.py -n0 -e0 ./callgrind.out.1992 -f callgrind
To generate a PNG output image of the graph, you could run the following commands:
gprof2dot -n0 -e0 ./callgrind.out.1992 -f callgrind > out.dot
dot -Tpng out.dot -o out.png
Now you have an out.png image with the full graph.
Note the usage of the -f parameter to specify the profile format (callgrind in our case).
The command I am using is
valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes $EXE
and as far as I have seen it includes all the functions in the call graph.
Hope it helps.
I'm going to complete rengar's answer with information that will allow you to generate the complete call graph, as well as give an example of the full process.
You can use gprof2dot to show all functions in a call graph. The script can convert the output of callgrind to dot, which can be visualized as a graph. The script has two relevant parameters:
-n PERCENTAGE, --node-thres=PERCENTAGE to eliminate nodes below this threshold [default: 0.5]. In order to visualize all nodes in the graph you should set this parameter to -n0
-e PERCENTAGE, --edge-thres=PERCENTAGE to eliminate edges below this threshold [default: 0.1]. In order to visualize all edges in the graph you should set this parameter to -e0
In order to generate the complete call graph you would use both of the options: -n0 and -e0.
Example
Let's say that you have a callgrind output file called callgrind.out.1992. To generate a complete call graph you would use:
gprof2dot -n0 -e0 ./callgrind.out.1992 -f callgrind
To generate a PNG output image of the graph, you could run the following commands:
gprof2dot -n0 -e0 ./callgrind.out.1992 -f callgrind > out.dot
dot -Tpng out.dot -o out.png
Now you have an out.png image with the full graph.
Note the usage of the -f parameter to specify the profile format (callgrind in our case).
I have a LaTeX file which needs to include snippets of Lua code (for display, not execution), so I used the minted package. It requires latex to be run with the -shell-escape flag.
I am trying to upload a PDF submission to arXiv. The site requires these to be submitted as .tex, .sty and .bbl, which they will automatically compile to PDF from latex. When I tried to submit to arXiv, I learned that there was no way for them to activate the -shell-escape flag.
So I was wondering if any of you knew a way to highlight Lua code in latex without the -shell-escape flag. I tried the listings package, but I can't get it to work for Lua on my Ubuntu computer.
You can set whichever style you want inline using listings. Its predefined Lua language has all the keywords and associated styles identified, so you can just change it to suit your needs:
\documentclass{article}
\usepackage{listings,xcolor}
\lstdefinestyle{lua}{
  language=[5.1]Lua,
  basicstyle=\ttfamily,
  keywordstyle=\color{magenta},
  stringstyle=\color{blue},
  commentstyle=\color{black!50}
}
\begin{document}
\begin{lstlisting}[style=lua]
-- defines a factorial function
function fact (n)
  if n == 0 then
    return 1
  else
    return n * fact(n-1)
  end
end
print("enter a number:")
a = io.read("*number") -- read a number
print(fact(a))
\end{lstlisting}
\end{document}
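Since listings does all the highlighting inside TeX itself, no -shell-escape flag is needed; a plain run such as the following compiles the example (the file name is a placeholder), which is exactly the constraint arXiv imposes:
pdflatex file.tex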
Okay, so lhf found a good solution by suggesting the GNU source-highlight package. I basically took each snippet of Lua code out of the LaTeX file, put it into an appropriately named [snippet].lua file, and ran the following on it to generate a [snippet]-lua.tex:
source-highlight -s lua -f latex -i [snippet].lua -o [snippet]-lua.tex
And then I included each such file into the main LaTeX file using:
\input{[snippet]-lua}
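If there are many snippets, a small shell loop can automate the conversion step (a sketch; the snippets/ directory and the naming scheme are assumptions):
for f in snippets/*.lua; do
    source-highlight -s lua -f latex -i "$f" -o "${f%.lua}-lua.tex"
done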
The result really isn't as nice as that of the minted package, but I am tired of trying to convince the arXiv admin to support minted...
I am working on a project where our verification test scripts need to locate symbol addresses within the build of software being tested. This might be used for setting breakpoints or reading static data from memory. What I am after is to create a map file containing symbol names, base address in memory, and size. Our build outputs an ELF file which has the information I want. I've been trying to use the readelf, nm, and objdump tools to try and to gain the symbol addresses I need.
I originally tried readelf -s file.elf and that seemed to access some symbols, particularly those which were written in assembler. However, many of the symbols that I wanted were not in there - specifically those that originated within our Ada code.
I used readelf --debug-dump file.elf to dump all debug information. From that I do see all symbols, including those that were in the Ada code. However, the format seems to be in the DWARF format. Does anyone know why these symbols would not be output by readelf when I ask it to list the symbolic information? Perhaps there is simply an option I am missing.
Now I could go to the trouble of writing a custom DWARF parser to get the information, but if I can get it using one of the binutils (nm, readelf, objdump) then I'd really prefer a standard solution.
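For the symbols that do appear in the ELF symbol table, nm can already produce something close to the map I want (a sketch; file.elf is the placeholder name used above):
nm --print-size --defined-only file.elf | sort > symbols.map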
DWARF is the debug information and tries to reflect the relation to the original source code. Take the following code as an example:
static int one() {
// something
return 1;
}
int main(int ac, char **av) {
return one();
}
After you compile it using gcc -O3 -g, the static function one will be inlined into main. So when you use readelf -s, you will never see the symbol one. However, when you use readelf --debug-dump, you can see that one is a function which has been inlined.
So, in this example, the compiler does not prohibit you from using optimization together with -g, so you can still debug the executable. Even though the function is optimized away and inlined, gdb can still use the DWARF information to identify the function and the source/line of the current code block inside the inlined function.
The above is just one case of compiler optimization. There might be plenty of other reasons that could lead to mismatches between the symbols and addresses reported by readelf -s and those in the DWARF information.
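As a quick way to see the inlining effect (a sketch; assumes GCC and binutils, with the example above saved as demo.c):
gcc -O3 -g -o demo demo.c
readelf -s demo | grep one   # typically no match: one() was inlined away
gcc -O0 -g -o demo demo.c
readelf -s demo | grep one   # the local symbol for one() now appears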
I want to convert PDF to SVG; please suggest some libraries/executables that can do this efficiently. I have written my own Java program using the Apache PDFBox and Batik libraries:
PDDocument document = PDDocument.load( pdfFile );
DOMImplementation domImpl =
    GenericDOMImplementation.getDOMImplementation();
// Create an instance of org.w3c.dom.Document.
String svgNS = "http://www.w3.org/2000/svg";
Document svgDocument = domImpl.createDocument(svgNS, "svg", null);
SVGGeneratorContext ctx = SVGGeneratorContext.createDefault(svgDocument);
ctx.setEmbeddedFontsOn(true);
// Ask the test to render into the SVG Graphics2D implementation.
for (int i = 0; i < document.getNumberOfPages(); i++) {
    String svgFName = svgDir + "page" + i + ".svg";
    (new File(svgFName)).createNewFile();
    // Create an instance of the SVG Generator.
    SVGGraphics2D svgGenerator = new SVGGraphics2D(ctx, false);
    Printable page = document.getPrintable(i);
    page.print(svgGenerator, document.getPageFormat(i), i);
    svgGenerator.stream(svgFName);
}
This solution works great, but the size of the resulting SVG files is huge (many times greater than the PDF). I have figured out where the problem is by looking at the SVG in a text editor: it encloses every character in the original document in its own block, even if the font properties of the characters are the same. For example, the word hello will appear as 6 different text blocks. Is there a way to fix the above code? Or please suggest another solution that will work more efficiently.
Inkscape can also be used to convert PDF to SVG. It's actually remarkably good at this, and although the code that it generates is a bit bloated, at the very least, it doesn't seem to have the particular issue that you are encountering in your program. I think it would be challenging to integrate it directly into Java, but inkscape provides a convenient command-line interface to this functionality, so probably the easiest way to access it would be via a system call.
To use Inkscape's command-line interface to convert a PDF to an SVG, use:
inkscape -l out.svg in.pdf
Which you can then probably call using:
Runtime.getRuntime().exec("inkscape -l out.svg in.pdf")
http://download.oracle.com/javase/1.4.2/docs/api/java/lang/Runtime.html#exec%28java.lang.String%29
Note that exec() returns as soon as the process has been started; if you need to wait for Inkscape to finish before reading "out.svg", call waitFor() on the Process object it returns. In any case, Googling "java system call" will yield more info on how to do that part correctly.
Take a look at pdf2svg (also on github):
To use
pdf2svg <input.pdf> <output.svg> [<pdf page no. or "all" >]
When using all, give a filename with %d in it (which will be replaced by the page number).
pdf2svg input.pdf output_page%d.svg all
And for some troubleshooting see:
http://www.calcmaster.net/personal_projects/pdf2svg/
pdftocairo can be used to convert PDF to SVG. pdftocairo is part of poppler-utils.
For example, to convert the first page of a PDF, the following command can be run:
pdftocairo -svg -f 1 -l 1 input.pdf
pdftk 82page.pdf burst
sh to-svg.sh
contents of to-svg.sh
#!/bin/bash
FILES=burst/*
for f in $FILES
do
inkscape -l "$f.svg" "$f"
done
I have encountered issues with the suggested inkscape, pdf2svg, pdftocairo, as well as the not suggested convert and mutool when trying to convert large and complex PDFs such as some of the topographical maps from the USGS. Sometimes they would crash, other times they would produce massively inflated files. The only PDF to SVG conversion tool that was able to handle all of them correctly for my use case was dvisvgm. Using it is very simple:
dvisvgm --pdf --output=file.svg file.pdf
It has various extra options for handling how elements are converted, as well as for optimization. Its resulting files can further be compacted by svgcleaner if necessary without perceptual quality loss.
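svgcleaner's basic invocation just takes the input and output file names (a sketch; the file names are placeholders):
svgcleaner file.svg file-cleaned.svg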
Inkscape (#jbeard4) for me produced SVGs with no text in them at all, but I was able to make it work by going to PostScript as an intermediary using Ghostscript.
for page in $(seq 1 `pdfinfo $1.pdf | awk '/^Pages:/ {print $2}'`)
do
pdf2ps -dFirstPage=$page -dLastPage=$page -dNoOutputFonts $1.pdf $1_$page.ps
inkscape -z -l $1_$page.svg $1_$page.ps
rm $1_$page.ps
done
However this is a bit cumbersome, and the winner for ease of use has to go to pdf2svg (#Koen.) since it has that all flag so you don't need to loop.
However, pdf2svg isn't available on CentOS 8, and to install it you need to do the following:
git clone https://github.com/dawbarton/pdf2svg.git && cd pdf2svg
#if you dont have development stuff specific to this project
sudo dnf config-manager --set-enabled powertools
sudo dnf install cairo-devel poppler-glib-devel
#git repo isn't quite ready to ./configure
touch README
autoreconf -f -i
./configure && make && sudo make install
It produces SVGs that actually look nicer than the Ghostscript-Inkscape ones above; the fonts seem to rasterize better.
pdf2svg $1.pdf $1_%d.svg all
But that installation is a bit much, too much even if you don't have sudo. On top of that, pdf2svg doesn't support stdin/stdout, so the readily available pdftocairo (#SuperNova) worked a treat in these regards, and here's an example of "advanced" use below:
for page in $(seq 1 `pdfinfo $1.pdf | awk '/^Pages:/ {print $2}'`)
do
pdftocairo -svg -f $page -l $page $1.pdf - | gzip -9 >$1_$page.svg.gz
done
This produces files of the same quality and size (before compression) as pdf2svg, although not binary-identical (and even visually, jumping between the output of the two, some pixels of letters shift, but neither looks wrong or bad the way Inkscape's output did).
Inkscape does not work with the -l option any more. It said "Can't open file: /out.svg (doesn't exist)". The long form of that option is in the man page as --export-plain-svg and works, but shows a deprecation warning. I was able to fix and update the command by using the -o option on Inkscape 1.1.2-3ubuntu4:
inkscape in.pdf -o out.svg
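If the PDF has more than one page, recent Inkscape versions can also select which page to import (a sketch; the --pdf-page option is assumed to be available in your build):
inkscape --pdf-page=2 in.pdf -o out.svg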