Does GDB have a built-in scripting mechanism, should I code up an expect script, or is there an even better solution out there?
I'll be sending the same sequence of commands every time and I'll be saving the output of each command to a file (most likely using GDB's built-in logging mechanism, unless someone has a better idea).
Basically, in this example I wanted to get some variable values at particular places in the code, and have them printed repeatedly until the program crashes. So here is, first, a little program which is guaranteed to crash in a few steps, test.c:
#include <stdio.h>
#include <stdlib.h>

int icount = 1; // default value

int main(int argc, char *argv[])
{
    int i;

    if (argc == 2) {
        icount = atoi(argv[1]);
    }

    i = icount;
    while (i > -1) {
        int b = 5 / i;
        printf(" 5 / %d = %d \n", i, b );
        i = i - 1;
    }
    printf("Finished\n");
    return 0;
}
The only reason the program accepts command-line arguments is to be able to choose the number of steps before crashing, and to show that gdb ignores --args in batch mode. I compile it with:
gcc -g test.c -o test.exe
Then, I prepare the following script - the main trick here is to attach a command list to each breakpoint, each ending in continue so the program keeps running (see also Automate gdb: show backtrace at every call to function puts). I call this script test.gdb:
# http://sourceware.org/gdb/wiki/FAQ: to disable the
# "---Type <return> to continue, or q <return> to quit---"
# in batch mode:
set width 0
set height 0
set verbose off
# at entry point - cmd1
b main
commands 1
print argc
continue
end
# printf line - cmd2
b test.c:17
commands 2
p i
p b
continue
end
# int b = line - cmd3
b test.c:16
commands 3
p i
p b
continue
end
# show arguments for program
show args
printf "Note, however: in batch mode, arguments will be ignored!\n"
# note: even if arguments are shown;
# must specify cmdline arg for "run"
# when running in batch mode! (then they are ignored)
# below, we specify command line argument "2":
run 2 # run
#start # alternative to run: runs to main, and stops
#continue
Note that, if you intend to use it in batch mode, you have to "start up" the script at the end, with run or start or something similar.
With this script in place, I can call gdb in batch mode - which will generate the following output in the terminal:
$ gdb --batch --command=test.gdb --args ./test.exe 5
Breakpoint 1 at 0x804844d: file test.c, line 10.
Breakpoint 2 at 0x8048485: file test.c, line 17.
Breakpoint 3 at 0x8048473: file test.c, line 16.
Argument list to give program being debugged when it is started is "5".
Note, however: in batch mode, arguments will be ignored!
Breakpoint 1, main (argc=2, argv=0xbffff424) at test.c:10
10 if (argc == 2) {
$1 = 2
Breakpoint 3, main (argc=2, argv=0xbffff424) at test.c:16
16 int b = 5 / i;
$2 = 2
$3 = 134513899
Breakpoint 2, main (argc=2, argv=0xbffff424) at test.c:17
17 printf(" 5 / %d = %d \n", i, b );
$4 = 2
$5 = 2
5 / 2 = 2
Breakpoint 3, main (argc=2, argv=0xbffff424) at test.c:16
16 int b = 5 / i;
$6 = 1
$7 = 2
Breakpoint 2, main (argc=2, argv=0xbffff424) at test.c:17
17 printf(" 5 / %d = %d \n", i, b );
$8 = 1
$9 = 5
5 / 1 = 5
Breakpoint 3, main (argc=2, argv=0xbffff424) at test.c:16
16 int b = 5 / i;
$10 = 0
$11 = 5
Program received signal SIGFPE, Arithmetic exception.
0x0804847d in main (argc=2, argv=0xbffff424) at test.c:16
16 int b = 5 / i;
Note that although we specify the command-line argument 5, the loop still spins only two times (because run 2 in the gdb script takes precedence); if run had no arguments, it would spin only once (the program's default value), confirming that --args ./test.exe 5 is ignored.
However, since all of this is now produced by a single call, without any user interaction, the command-line output can easily be captured in a text file using bash redirection, say:
gdb --batch --command=test.gdb --args ./test.exe 5 > out.txt
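If you also want to watch the run while capturing it, a tee variant works too (gdb writes some messages to stderr, which 2>&1 folds in as well):
gdb --batch --command=test.gdb --args ./test.exe 5 2>&1 | tee out.txt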
There is also an example of using Python for automating gdb in C - GDB auto stepping - automatic printout of lines, while free running?
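For a taste of that approach, here is a minimal, untested sketch using gdb's Python API (it assumes a gdb built with Python support and reuses test.exe from above). Save it as, say, auto.py and run gdb --batch -x auto.py:
import gdb  # only available inside gdb's embedded Python interpreter

gdb.execute("file ./test.exe")
bp = gdb.Breakpoint("test.c:16")
gdb.execute("run 2")
try:
    while True:
        print("i =", gdb.parse_and_eval("i"))  # inspect a variable at each stop
        gdb.execute("continue")
except gdb.error:
    pass  # evaluation/continue fails once the inferior is gone (SIGFPE here)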
gdb executes the file .gdbinit at startup.
So you can add your commands to this file and see if it works for you.
This is an example .gdbinit that prints a backtrace for every call to f():
set pagination off
set logging file gdb.txt
set logging on
file a.out
b f
commands
bt
continue
end
info breakpoints
r
set logging off
quit
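Note that recent gdb versions refuse to auto-load .gdbinit from the current directory unless that directory is whitelisted (see add-auto-load-safe-path); an easy workaround is to give the file another name (my_commands.gdb below is made up) and pass it explicitly:
gdb -x my_commands.gdb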
If -x with a file is too much for you, just use multiple -ex options.
This is an example that tracks a running program, showing (and saving) the backtrace when it crashes:
sudo gdb -p "$(pidof my-app)" -batch \
    -ex "set logging on" \
    -ex continue \
    -ex "bt full" \
    -ex quit
Related
I've got this:
try { run 'tar', '-zxvf', $path.Str, "$dir/META6.json", :err }
Despite being in a try{} block, this line is still causing my script to crash:
The spawned command 'tar' exited unsuccessfully (exit code: 1, signal: 0)
in block at ./all-versions.raku line 27
in block at ./all-versions.raku line 16
in block <unit> at ./all-versions.raku line 13
Why isn't the try{} block allowing the script to continue and how can I get it to continue?
That's because the run didn't fail (yet). run returns a Proc object. And that by itself doesn't throw (yet).
try just returns that Proc object. As soon as the returned value is used, however (for instance, by being sunk), it will throw.
Compare (with immediate sinking):
$ raku -e 'run "foo"'
The spawned command 'foo' exited unsuccessfully (exit code: 1, signal: 0)
with:
$ raku -e 'my $a = run "foo"; say "ran, going to sink"; $a.sink'
ran, going to sink
The spawned command 'foo' exited unsuccessfully (exit code: 1, signal: 0)
Now, what causes the Proc object to be used in your code is unclear; you'd have to show more code.
A way to check for success is to check the exit code:
$ raku -e 'my $a = run "foo"; say "failed" if $a.exitcode > 0'
failed
$ raku -e 'my $a = run "echo"; say "failed" if $a.exitcode > 0'
Or, alternatively, use Jonathan's solution:
$ raku -e 'try sink run "foo"'
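Applied to the code in the question, that would be something like (untested):
try sink run 'tar', '-zxvf', $path.Str, "$dir/META6.json", :err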
I am transitioning a bash script to snakemake and I would like to parallelize a step I was previously handling with a for loop. The issue I am running into is that instead of running parallel processes, snakemake ends up trying to run one process with all parameters and fails.
My original bash script runs a program multiple times for a range of values of the parameter K.
for num in {1..3}
do
structure.py -K $num --input=fileprefix --output=fileprefix
done
There are multiple input files that start with fileprefix, and there are two main outputs per run; e.g. for K=1 they are fileprefix.1.meanP and fileprefix.1.meanQ. My config and Snakefile are as follows.
Config:
cat config.yaml
infile: fileprefix
K:
- 1
- 2
- 3
Snakemake:
configfile: 'config.yaml'

rule all:
    input:
        expand("output/{sample}.{K}.{ext}",
               sample = config['infile'],
               K = config['K'],
               ext = ['meanQ', 'meanP'])

rule structure:
    output:
        "output/{sample}.{K}.meanQ",
        "output/{sample}.{K}.meanP"
    params:
        prefix = config['infile'],
        K = config['K']
    threads: 3
    shell:
        """
        structure.py -K {params.K} \
        --input=output/{params.prefix} \
        --output=output/{params.prefix}
        """
This was executed with snakemake --cores 3. The problem persists when I only use one thread.
I expected the outputs described above for each value of K, but the run fails with this error:
RuleException:
CalledProcessError in line 84 of Snakefile:
Command ' set -euo pipefail; structure.py -K 1 2 3 --input=output/fileprefix \
--output=output/fileprefix ' returned non-zero exit status 2.
File "Snakefile", line 84, in __rule_Structure
File "snake/lib/python3.6/concurrent/futures/thread.py", line 56, in run
When I set K to a single value such as K = ['1'], everything works. So the problem seems to be that {params.K} is being expanded to all values of K when the shell command is executed. I started teaching myself snakemake today, and it works really well, but I'm hitting a brick wall with this.
You need to retrieve the argument for -K from the wildcards, not from the config file. The config file will simply return your whole list of possible values; it is a plain Python dictionary.
configfile: 'config.yaml'

rule all:
    input:
        expand("output/{sample}.{K}.{ext}",
               sample = config['infile'],
               K = config['K'],
               ext = ['meanQ', 'meanP'])

rule structure:
    output:
        "output/{sample}.{K}.meanQ",
        "output/{sample}.{K}.meanP"
    params:
        prefix = config['infile']
    threads: 3
    shell:
        "structure.py -K {wildcards.K} "
        "--input=output/{params.prefix} "
        "--output=output/{params.prefix}"
Note that there are more things to improve here. For example, the rule structure does not define any input file, although it uses one.
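For instance, a sketch of what declaring the input could look like (the .bed file name is made up; list whatever files structure.py actually reads):
rule structure:
    input:
        "output/{sample}.bed"  # hypothetical; adjust to your real input files
    output:
        "output/{sample}.{K}.meanQ",
        "output/{sample}.{K}.meanP"
    shell:
        "structure.py -K {wildcards.K} "
        "--input=output/{wildcards.sample} "
        "--output=output/{wildcards.sample}"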
There is an option now for parameter space exploration
https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#parameter-space-exploration
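A minimal sketch of that feature, adapted from the docs (the params.tsv file, its column name K, and the results/ paths are assumptions for illustration):
from snakemake.utils import Paramspace
import pandas as pd

# one row per parameter combination; here a single column "K" with values 1..3
paramspace = Paramspace(pd.read_csv("params.tsv", sep="\t"))

rule all:
    input:
        # one instance pattern per row of the dataframe
        expand("results/{params}.meanQ", params=paramspace.instance_patterns)

rule structure:
    output:
        f"results/{paramspace.wildcard_pattern}.meanQ"
    params:
        # look up the K value belonging to this particular instance
        k = lambda wildcards: paramspace.instance(wildcards)["K"]
    shell:
        "structure.py -K {params.k} --input=fileprefix --output=fileprefix"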
My problem is that each error message gets split into single characters, and those become the lines in my error_log. For example, if the error message of my CGI application is "Error", I see 5 lines of text, one line for every character of the message, each appended with referrer and timestamp information. The error messages come from a forked cURL process and contain \r (carriage return) characters because of the download progress indicator. What can I do to get the error output / stderr of my cURL processes logged properly, line by line?
Fortunately I managed to find the solution with Open3.popen3 from the stdlib:
require "open3"
cmd=%Q{curl -b cookie.txt 'http://www.example.com/file/afancyfileid/yetanotherfilename.avi' -L -O -J}
Open3.popen3(cmd) {|stdin, stdout, stderr, wait_thr|
pid = wait_thr.pid # pid of the started process.
line_no=0
stderr.each_char do |c|
if c=="\r" then
line_no+=1
STDERR.puts if line_no % 5 == 0
else
STDERR.print c if line_no % 5 == 0
end
end
exit_status = wait_thr.value
}
I print only every 5th line so that my error_log doesn't grow so fast.
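A possibly simpler variant of the same idea (a sketch; it relies on IO#each_line accepting a custom separator):
require "open3"

Open3.popen3(cmd) do |stdin, stdout, stderr, wait_thr|
  # let Ruby split stderr on "\r" directly instead of scanning characters
  stderr.each_line("\r").with_index do |line, i|
    STDERR.puts line.chomp("\r") if i % 5 == 0 # keep every 5th progress line
  end
  wait_thr.value # wait for curl to finish
end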
I am working with a blind student. She can run gdb from the command line to debug a window-based program, but the program takes the focus from gdb, so if a breakpoint is hit or the program crashes, the screen reader does not read the gdb output. Ideally, the focus would go to the terminal whenever there is gdb output; but short of that, is there a way to run a Linux command when gdb hits a breakpoint or when the program crashes? Then I could run "espeak gdb" and she would know that gdb needs to get focus.
Seems there should be an easy way to do this using scripting in .gdbinit.
Edited later:
I figured out that you can put this code into .gdbinit:
python
import os

def stop_handler(event):
    os.system("espeak gdb")

gdb.events.stop.connect(stop_handler)
end
You can install the stop hook hook-stop and use shell followed by a command, so that it gets executed whenever the debugger stops; for example, running cmd (on Windows) to echo some string from the shell on stop:
define hook-stop
shell cmd /c echo "hello"
end
Replace cmd /c echo "hello" with the command you want, and copy and paste the definition into the debugger. Now, when my program crashes:
#include <stdio.h>

int main(int argc, char **argv) {
    int *p = NULL;
    printf("%d\n", *p);
    return 0;
}
I should see "hello":
> gdb -q a.exe
Reading symbols from a.exe...done.
(gdb) define hook-stop
Type commands for definition of "hook-stop".
End with a line saying just "end".
> shell cmd /c echo "hello"
>end
(gdb) run
Starting program: a.exe
[New Thread 420.0x430]
Program received signal SIGSEGV, Segmentation fault.
"hello"
0x004013a6 in main ()
(gdb)
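For the Linux/espeak case in the question, the same idea would presumably be:
define hook-stop
shell espeak gdb
end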
Is there a way to trigger the PDF export feature in PhantomJS without specifying an output file with the .pdf extension? We'd like to use stdout to output the PDF.
You can output directly to stdout without the need for a temporary file.
page.render('/dev/stdout', { format: 'pdf' });
See here for history on when this was added.
If you want to get HTML from stdin and output the PDF to stdout, see here.
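As a complete minimal example (a sketch; the script name topdf.js and the URL are placeholders), invoked as phantomjs topdf.js > out.pdf:
var page = require('webpage').create();

page.open('http://example.com/', function () {
    // render as PDF to stdout; the format option avoids needing a .pdf file name
    page.render('/dev/stdout', { format: 'pdf' });
    phantom.exit();
});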
Sorry for the extremely long answer; I have a feeling that I'll need to refer to this method several dozen times in my life, so I'll write "one answer to rule them all". I'll first babble a little about files, file descriptors, (named) pipes, and output redirection, and then answer your question.
Consider this simple C99 program:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    if (argc < 2) {
        printf("Usage: %s file_name\n", argv[0]);
        return 1;
    }

    FILE* file = fopen(argv[1], "w");
    if (!file) {
        printf("No such file: %s\n", argv[1]);
        return 2;
    }

    fprintf(file, "some text...");
    fclose(file);
    return 0;
}
Very straightforward. It takes an argument (a file name) and prints some text into it. Couldn't be any simpler.
Compile it with clang write_to_file.c -o write_to_file.o or gcc write_to_file.c -o write_to_file.o.
Now, run ./write_to_file.o some_file (which prints into some_file). Then run cat some_file. The result, as expected, is some text...
Now let's get more fancy. Type (./write_to_file.o /dev/stdout) > some_file in the terminal. We're asking the program to write to its standard output (instead of a regular file), and then we're redirecting that stdout to some_file (using > some_file). We could've used any of the following to achieve this:
(./write_to_file.o /dev/stdout) > some_file, which means "use stdout"
(./write_to_file.o /dev/stderr) 2> some_file, which means "use stderr, and redirect it using 2>"
(./write_to_file.o /dev/fd/2) 2> some_file, which is the same as above; stderr is the third file descriptor assigned to Unix processes by default (after stdin and stdout)
(./write_to_file.o /dev/fd/5) 5> some_file, which means "use your sixth file descriptor, and redirect it to some_file"
In case it's not clear, we're using a Unix pipe instead of an actual file (everything is a file in Unix, after all). We can do all sorts of fancy things with this pipe: write it to a file, or write it to a named pipe and share it between different processes.
Now, let's create a named pipe:
mkfifo my_pipe
If you type ls -l now, you'll see:
total 32
prw-r--r-- 1 pooriaazimi staff 0 Jul 15 09:12 my_pipe
-rw-r--r-- 1 pooriaazimi staff 336 Jul 15 08:29 write_to_file.c
-rwxr-xr-x 1 pooriaazimi staff 8832 Jul 15 08:34 write_to_file.o
Note the p at the beginning of the second line. It means that my_pipe is a (named) pipe.
Now, let's specify what we want to do with our pipe:
gzip -c < my_pipe > out.gz &
It means: gzip whatever I put into my_pipe and write the result to out.gz. The & at the end asks the shell to run this command in the background. You'll get something like [1] 10449 and control returns to the terminal.
Then, simply redirect the output of our C program to this pipe:
(./write_to_file.o /dev/fd/5) 5> my_pipe
Or
./write_to_file.o my_pipe
You'll get
[1]+ Done gzip -c < my_pipe > out.gz
which means the gzip command has finished.
Now, do another ls -l:
total 40
prw-r--r-- 1 pooriaazimi staff 0 Jul 15 09:14 my_pipe
-rw-r--r-- 1 pooriaazimi staff 32 Jul 15 09:14 out.gz
-rw-r--r-- 1 pooriaazimi staff 336 Jul 15 08:29 write_to_file.c
-rwxr-xr-x 1 pooriaazimi staff 8832 Jul 15 08:34 write_to_file.o
We've successfully gzipped our text!
Execute gzip -d out.gz to decompress this gzipped file. It will be deleted and a new file (out) will be created. cat out gets us:
some text...
which is what we expected.
Don't forget to remove the pipe with rm my_pipe!
Now back to PhantomJS.
This is a simple PhantomJS script (render.coffee, written in CoffeeScript) that takes two arguments: a URL and a file name. It loads the URL, renders it and writes it to the given file name:
system = require 'system'

renderUrlToFile = (url, file, callback) ->
  page = require('webpage').create()
  page.viewportSize = { width: 1024, height : 800 }
  page.settings.userAgent = 'Phantom.js bot'
  page.open url, (status) ->
    if status isnt 'success'
      console.log "Unable to render '#{url}'"
    else
      page.render file
    delete page
    callback url, file

url = system.args[1]
file_name = system.args[2]

console.log "Will render to #{file_name}"

renderUrlToFile "http://#{url}", file_name, (url, file) ->
  console.log "Rendered '#{url}' to '#{file}'"
  phantom.exit()
Now type phantomjs render.coffee news.ycombinator.com hn.png in the terminal to render Hacker News front page into file hn.png. It works as expected. So does phantomjs render.coffee news.ycombinator.com hn.pdf.
Let's repeat what we did earlier with our C program:
(phantomjs render.coffee news.ycombinator.com /dev/fd/5) 5> hn.pdf
It doesn't work... :( Why? Because, as stated in PhantomJS's manual:
render(fileName)
Renders the web page to an image buffer and save it
as the specified file.
Currently the output format is automatically set based on the file
extension. Supported formats are PNG, JPEG, and PDF.
It fails simply because neither /dev/fd/5 nor /dev/stdout ends in .pdf, .png, etc.
But no fear, named pipes can help you!
Create another named pipe, but this time use the extension .pdf:
mkfifo my_pipe.pdf
Now, tell it to simply cat its input to hn.pdf:
cat < my_pipe.pdf > hn.pdf &
Then run:
phantomjs render.coffee news.ycombinator.com my_pipe.pdf
And behold the beautiful hn.pdf!
Obviously you want to do something more sophisticated than just cat-ing the output, but I'm sure it's clear now what you should do :)
TL;DR:
Create a named pipe, using the ".pdf" file extension (so it fools PhantomJS into thinking it's a PDF file):
mkfifo my_pipe.pdf
Do whatever you want to do with the contents of the file, like:
cat < my_pipe.pdf > hn.pdf
which simply cats it to hn.pdf
In PhantomJS, render to this file/pipe.
Later on, you should remove the pipe:
rm my_pipe.pdf
As pointed out by Niko, you can use renderBase64() to render the web page to an image buffer and return the result as a base64-encoded string. But for now this will only work for PNG, JPEG and GIF.
To write something from a phantomjs script to stdout just use the filesystem API.
I use something like this for images:
var base64image = page.renderBase64('PNG');
var fs = require("fs");
fs.write("/dev/stdout", base64image, "w");
I don't know if the PDF format for renderBase64() will be in a future version of phantomjs, but as a workaround, something along these lines may work for you:
page.render(output);
var fs = require("fs");
var pdf = fs.read(output);
fs.write("/dev/stdout", pdf, "w");
fs.remove(output);
Where output is the path to the pdf file.
I don't know if it would address your problem, but you may also check the new renderBase64() method added to PhantomJS 1.6: https://github.com/ariya/phantomjs/blob/master/src/webpage.cpp#L623
Unfortunately, the feature is not documented on the wiki yet :/