Accessing frame info in gdb - scripting

In gdb, is there a way to access the contents of info frame in a script?
I'm debugging a problem somewhere between Apache, PHP, APC and my own code, and I have about a hundred cores to choose from. Following the instructions here
http://bugs.php.net/bugs-generating-backtrace.php
I end up with a stacktrace like:
#0 0x0121a31a in do_bind_function (opline=0xa94dd750, function_table=0x9b9cf98, compile_time=0 '\0') at /usr/src/debug/php-5.2.7/Zend/zend_compile.c:2407
#1 0x0124bb2e in ZEND_DECLARE_FUNCTION_SPEC_HANDLER (execute_data=0xbfef7990) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:498
#2 0x01249dfa in execute (op_array=0xb79d5d3c) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:92
#3 0x01261e31 in ZEND_INCLUDE_OR_EVAL_SPEC_VAR_HANDLER (execute_data=0xbfef80d0) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:7809
#4 0x01249dfa in execute (op_array=0xb79d55ec) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:92
...
#26 0x09caa894 in ?? ()
#27 0x00000000 in ?? ()
The stack will always look similar, with function execute and ZEND_something interleaved several times. I need to go up to the last instance of execute (up 2 in this case) and print myVar.
Obviously gdb knows the function names, but does it surface them in any user variables I could access?
Typing frame 2 shows a one-line version, and info frame shows a single stack frame in detail. I want to do something like
while ($current_frame.function_name != "execute") { up; }
print myVar
but I don't see how to do it strictly within gdb.
Is there a variable / structure / special memory location / something that allows access to gdb's information on either the whole stack (like bt) or to the current stack frame (like info frame)?

GDB 7.1 has support for accessing frame information from Python, for exactly this kind of scripting.
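For example, here is a minimal sketch using the Python frame API (not a polished tool; "myVar" is the variable from the question, and exact API details may vary between GDB versions) that walks up until it reaches the execute frame and prints myVar there:

import gdb

def print_var_in_frame(func_name="execute", var_name="myVar"):
    # Walk outward from the innermost selected frame.
    frame = gdb.selected_frame()
    while frame is not None and frame.name() != func_name:
        frame = frame.older()
    if frame is None:
        print("no frame named %s found" % func_name)
        return
    frame.select()                    # equivalent to typing "frame N"
    gdb.execute("print " + var_name)  # or: print(frame.read_var(var_name))

print_var_in_frame()

Save it as, say, walk_frames.py (a name made up here) and load it with source walk_frames.py from the gdb prompt.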

Related

How to break the gem5 executable in GDB at the nth instruction?

Using --debug-flags ExecAll tracing, I found that there is a bug at the Nth instruction, which happens at the Nth line of the log.
Is there an easy way to break specifically at that instruction to debug it in GDB and view gem5's internal state?
The simplest approach is to use --debug-break as shown at: schedBreak(<tick>) gdb debugging function not working
That makes gem5 raise a signal at a given simulation tick, which GDB stops at by default. You can determine which simulation time corresponds to your instruction by looking at an --debug-flags ExecAll trace beforehand.
You will usually want to break on a tick rather than on the Nth instruction, in particular since gem5 simulates the instruction pipeline, and therefore there can be multiple instructions in flight at the same time.
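As a sketch of that workflow (the build path and config script here are assumptions about your setup, not something from the original question):
# make gem5 raise the signal at tick 1000 and run it under GDB so you stop there
gdb --args build/X86/gem5.opt --debug-break=1000 configs/example/se.py -c ./hello
(gdb) run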
Alternatively, if your point of interest in GDB can see the ExecutionContext object, which is often called xc, you can just add a conditional breakpoint like:
b MyClass::myFunction if xc->numInsts.data()->value() == <n> - 2
The -2 is needed because this index is zero-based, and because the count is incremented after the instruction executes.
You can also find the tick time rather than instruction count with:
p xc->cpu->tick
or from the other commonly available ThreadContext object with:
p tc->baseCpu->tick
You generally want to do this from the ::tick() function of your CPU model of interest.
For AtomicSimpleCPU::tick() you could also break just before the second instruction with:
b AtomicSimpleCPU::tick if (*threadInfo[curThread]).numInst == 1
Or, to break at a given tick, say 1000 (500 being the tick just before it, since this CPU ticks every 500 ticks at the default clock):
b AtomicSimpleCPU::tick if tick == 500
Two other important break locations are at the main event loop when an event is executed:
b EventQueue::serviceOne() if head->when() == 1000
and the event scheduling target point:
b EventQueue::schedule if when == <target-time>
b EventQueue::reschedule if when == <target-time>
or for the time of schedule itself:
b EventQueue::schedule if _curTick == 1000
b EventQueue::reschedule if _curTick == 1000
Together with reverse debugging and:
--debug-flags Event
these event breakpoints will actually allow you to understand what gem5 is doing.
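For example (again, the paths are assumptions about your setup), you might combine them like this:
gdb --args build/X86/gem5.opt --debug-flags Event configs/example/se.py -c ./hello
(gdb) b EventQueue::serviceOne if head->when() == 1000
(gdb) run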
Note however that conditional breakpoints significantly slow down simulation unfortunately... arghh.
Another useful technique to have in mind is that you can do a run that stops shortly after the point of interest with:
-m <tick>
and then reverse debug back to the exact point of interest, possibly with conditional breakpoints, since now you will be close to the point of interest and the performance loss will not be a huge problem. You can then just keep going back towards the root cause.
Tested in gem5 9f247403e558977738b5911a45e5776afff87b1a.

"Bad permissions for mapped region at address" Valgrind error for memset

I am running into a problem that appears to be due to a stack overflow. When I run the application under Valgrind, I get the following errors:
Thread 75:
Invalid write of size 4
at 0x833FBF6: <Class Name>::<Method Name>(short, short&) (<File Name>:692)
Address 0x222d75c0 is on thread 75's stack
Process terminating with default action of signal 11 (SIGSEGV): dumping core
Bad permissions for mapped region at address 0x222D6000
at 0x4022BA3: memset (mc_replace_strmem.c:586)
by 0x833FC80: <Class Name>::<Method Name>(short, short&) (<File Name>:708)
If I open the core file in gdb, go to frame 1 where the memset is being called, and do an "info registers", it shows that $esp = 0x222d5210 and $ebp = 0x222d75c8.
Doesn't that seem to indicate that the stack would include memory at address 0x222D6000? If that's true, then why would we get the "Bad permissions" error?
The other odd thing is that line 692 of the source file is the very first line of the method (i.e., "void <Class Name>::<Method Name>(short var1, short &var2)"). So, why would we get an invalid write at that point?
As I said, it seems to be a case of running out of stack space, but even if we use the "limit stacksize" command to increase the amount of allocated stack space, we still encounter the same problem.
I've been beating my head against the wall for several days trying to debug this problem. Any advice would be appreciated.
It turns out that this problem was due to a stack overflow after all. I didn't realize that the code that spawned the problematic thread explicitly set the size of the stack to be used by that thread. That's why changing the value used by the "limit stacksize" command didn't make a difference. Once I modified the code that set the stack size to increase the amount of memory allocated, the problem went away.
What you could do is activate the Valgrind gdbserver and attach to your program running under Valgrind using gdb + vgdb. You can then use the various Valgrind monitor commands to get more information about the problem, e.g. look again at the register values, or use 'monitor v.info scheduler' to see the stack trace and stack size of each thread.
The full list of monitor commands for memcheck + Valgrind can be found at
http://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
and
http://www.valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.valgrind-monitor-commands
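A rough sketch of that workflow (./myprog is a stand-in for your program; the options shown are the standard ones for enabling Valgrind's embedded gdbserver):
valgrind --vgdb=yes --vgdb-error=0 ./myprog
# then, from a second terminal:
gdb ./myprog
(gdb) target remote | vgdb
(gdb) monitor v.info scheduler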

Creating robust real-time monitors for variables

We can create a real-time monitor for a variable like this:
CreatePalette@Panel@Row[{"x = ", Dynamic[x]}]
(This is more interesting and useful if x happens to be something like $Assumptions. It's so easy to set a value and then forget about it.)
Unfortunately this stops working if the kernel is re-launched (Quit[], then evaluate something). The palette won't show changes in the value of x any more.
Is there a way to do this so it keeps working even across kernel sessions? I find myself restarting the kernel quite often. (If the resulting palette causes the kernel to be automatically started after Quit that's fine.)
Update: As mentioned in the comments, it turns out that the palette ceases working only if we quit by evaluating Quit[]. When using Evaluation -> Quit Kernel -> Local, it will keep working.
Link to same question on MathGroup.
I can only guess, because on my Ubuntu machine the situation seems buggy. The trick with Quit from the menu, as Leonid suggested, did not work here. Another observation: in a fresh Mathematica session with only one notebook open, evaluating
Dynamic[x]
x = 1
Dynamic[x]
x = 2
gives as expected
2
1
2
2
Typing Quit on the next line, evaluating it, and then typing x=3 updates only the first of the Dynamic[x] outputs.
Nevertheless, have you checked the command
Internal`GetTrackedSymbols[]
This gives not only the tracked symbols but also some kind of ID telling where the dynamic content belongs. If you can find out what exactly these numbers are, and investigate the other functions you find in the Internal` context, you may be able to re-register your palette's Dynamic content manually after restarting the kernel.
I thought I had something like that with
Internal`SetValueTrackExtra
but I'm currently not able to reproduce the behavior.
@halirutan's answer jarred my memory...
Have you ever come across: Experimental/ref/ValueFunction? (documentation address)
Although the documentation contains no examples, the 'more information' section provides the following tidbit:
The assignment ValueFunction[symb] = f specifies that whenever symb gets a new value val, the expression f[symb, val] should be evaluated.
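So, as a minimal sketch built only on that description (the printed message is arbitrary, and I have not verified the exact evaluation semantics):
Experimental`ValueFunction[x] = Function[{sym, val}, Print["x is now ", val]];
x = 5   (* should print: x is now 5 *)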

set a breakpoint, when called: return and continue

I know how to do this in gdb. I'd attach, and follow with:
break myfunction
commands
return
cont
end
cont
I'm wondering if there's a way of doing this in C. I already have code working for reading and writing memory addresses, and it automatically finds the pid and does related stuff. I'm stuck on implementing that use of breakpoints.
If you are talking about some sort of hand-written debugger, you can use the IP (instruction pointer) value to set a breakpoint: when the IP hits a certain value, you stop the program being debugged and hand control to the debugger process to perform some routine. To use function names, you need to consult symbol tables, as GDB does.
It's not quite clear what you are trying to achieve.
The GDB sequence you've shown will simply make myfunction return immediately.
Assuming you want your mini-debugger to have the same effect, simply write the opcode for ret (0xC3 on x86) to the address of myfunction; no need to do the breakpoint at all.
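Since you already have ptrace-style read/write working, the patch itself is only a few lines. A sketch in C (names are illustrative; it assumes x86/x86-64, that you are already attached and the tracee is stopped, and that you have resolved the address of myfunction from its symbol table yourself):

#include <errno.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>

/* Overwrite the first byte of the function at `addr` in process `pid`
 * with 0xC3 (x86 `ret`), so every call returns immediately. */
int patch_ret(pid_t pid, unsigned long addr)
{
    errno = 0;
    long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
    if (word == -1 && errno != 0) {
        perror("PTRACE_PEEKTEXT");
        return -1;
    }
    /* x86 is little-endian, so the first instruction byte is the low byte. */
    long patched = (word & ~0xffL) | 0xc3L;
    if (ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched) == -1) {
        perror("PTRACE_POKETEXT");
        return -1;
    }
    return 0;
}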

How does reverse debugging work?

GDB has a new version out that supports reverse debugging (see http://www.gnu.org/software/gdb/news/reversible.html). I got to wondering how that works.
To get reverse debugging to work, it seems to me that you would need to store the entire machine state, including memory, for each step. That would make performance incredibly slow, not to mention using a lot of memory. How are these problems solved?
I'm a gdb maintainer and one of the authors of the new reverse debugging. I'd be happy to talk about how it works. As several people have speculated, you need to save enough machine state that you can restore it later. There are a number of schemes; one of them is to simply save the registers or memory locations that are modified by each machine instruction. Then, to "undo" that instruction, you just revert the data in those registers or memory locations.
Yes, it is expensive, but modern cpus are so fast that when you are interactive anyway (doing stepping or breakpoints), you don't really notice it that much.
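As a toy illustration of that save-and-revert idea (this is not GDB's actual implementation, just a sketch of the principle in Python):

# A toy "machine" whose state is a dict of registers. Before applying each
# instruction we log the old value it is about to clobber; reverse_step
# simply pops the log and restores that value.
undo_log = []

def execute(regs, dest, value):
    undo_log.append((dest, regs[dest]))
    regs[dest] = value

def reverse_step(regs):
    dest, old = undo_log.pop()
    regs[dest] = old

regs = {"eax": 0, "ebx": 0}
execute(regs, "eax", 1)
execute(regs, "eax", 2)
reverse_step(regs)
print(regs["eax"])   # back to 1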
Note that you must not forget the use of simulators, virtual machines, and hardware recorders to implement reverse execution.
Another way to implement it is to trace the execution on physical hardware, as is done by Green Hills and Lauterbach in their hardware-based debuggers. Based on such a fixed trace of the action of each instruction, you can then move to any point in the trace by removing the effects of each instruction in turn. Note that this assumes that you can trace everything that affects the state visible in the debugger.
Another way is to use a checkpoint + re-execution method, which is used by VMware Workstation 6.5 and Virtutech Simics 3.0 (and later), and which seems to be coming with Visual Studio 2010. Here, you use a virtual machine or a simulator to get a level of indirection on the execution of a system. You regularly dump the entire state to disk or memory, and then rely on the simulator being able to deterministically re-execute the exact same program path.
Simplified, it works like this: say that you are at time T in the execution of a system. To go to time T-1, you pick up some checkpoint from a point t < T, and then execute (T-t-1) cycles to end up one cycle before where you were. This can be made to work very well, and applies even to workloads that do disk IO, consist of kernel-level code, and perform device driver work. The key is to have a simulator that contains the entire target system, with all its processors, devices, memories, and IOs. See the discussion on the gdb mailing list for more details. I use this approach myself quite regularly to debug tricky code, especially in device drivers and early OS boots.
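The bookkeeping in that description can be sketched like this (a toy model in Python; it assumes step() is perfectly deterministic, which is exactly what the simulator has to guarantee):

# Save a snapshot of the state every `interval` ticks while running forward;
# to travel back to an earlier tick, restore the nearest checkpoint at or
# before it and deterministically re-execute the remaining ticks.
def run_forward(state, step, ticks, checkpoints, interval=100):
    for tick in range(ticks):
        if tick % interval == 0:
            checkpoints[tick] = dict(state)
        step(state)

def travel_to(target_tick, step, checkpoints):
    base = max(t for t in checkpoints if t <= target_tick)
    state = dict(checkpoints[base])
    for _ in range(target_tick - base):
        step(state)
    return state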
Another source of information is a Virtutech white paper on checkpointing (which I wrote, in full disclosure).
During an EclipseCon session we also asked how they do this with the Chronon Debugger for Java. That one does not allow you to actually step back, but can play back a recorded program execution in such a way that it feels like reverse debugging. (The main difference is that you cannot change the running program in the Chronon debugger, while you can do that in most other Java debuggers.)
If I understood it correctly, it manipulates the bytecode of the running program such that every change of the program's internal state is recorded. External state doesn't need to be recorded separately: if it influences your program in some way, then you must have an internal variable matching that external state (and therefore that internal variable is enough).
During playback time they can then basically recreate every state of the running program from the recorded state changes.
Interestingly, the state changes are much smaller than one would expect at first glance. So if you have a conditional "if" statement, you would think that you need at least one bit to record whether the program took the then- or the else-branch. In many cases you can avoid even that, for example when both branches end in a return statement: then it is enough to record the return value (which would be needed anyway) and to recalculate the decision about the executed branch from the return value itself.
Although this question is old, most of the answers are too, and as reverse-debugging remains an interesting topic, I'm posting a 2015 answer. Chapters 1 and 2 of my MSc thesis, Combining reverse debugging and live programming towards visual thinking in computer programming, covers some of the historical approaches to reverse debugging (especially focused on the snapshot-(or checkpoint)-and-replay approach), and explains the difference between it and omniscient debugging:
The computer, having forward-executed the program up to some point, should really be able to provide us with information about it. Such an improvement is possible, and is found in what are called omniscient debuggers. They are usually classified as reverse debuggers, although they might more accurately be described as "history logging" debuggers, as they merely record information during execution to view or query later, rather than allow the programmer to actually step backwards in time in an executing program. "Omniscient" comes from the fact that the entire state history of the program, having been recorded, is available to the debugger after execution. There is then no need to rerun the program, and no need for manual code instrumentation.
Software-based omniscient debugging started with the 1969 EXDAMS system where it was called "debug-time history-playback". The GNU debugger, GDB, has supported omniscient debugging since 2009, with its 'process record and replay' feature. TotalView, UndoDB and Chronon appear to be the best omniscient debuggers currently available, but are commercial systems. TOD, for Java, appears to be the best open-source alternative, which makes use of partial deterministic replay, as well as partial trace capturing and a distributed database to enable the recording of the large volumes of information involved.
Debuggers that do not merely allow navigation of a recording, but are actually able to step backwards in execution time, also exist. They can more accurately be described as back-in-time, time-travel, bidirectional or reverse debuggers.
The first such system was the 1981 COPE prototype ...
Mozilla rr is a more robust alternative to GDB's built-in reverse debugging:
https://github.com/mozilla/rr
GDB's built-in record and replay has severe limitations, e.g. no support for AVX instructions: gdb reverse debugging fails with "Process record does not support instruction 0xf0d at address"
Upsides of rr:
- It is much more reliable currently. I have tested it on relatively long runs of several pieces of complex software.
- It also offers a GDB interface via the gdbserver protocol, making it a great replacement.
- There is only a small performance drop for most programs; I haven't noticed it myself without doing measurements.
- The generated traces are small on disk, because only the very few non-deterministic events are recorded; I've never had to worry about their size so far.
rr achieves this by first running the program in a way that records what happened on every single non-deterministic event such as a thread switch.
Then during the second replay run, it uses that trace file, which is surprisingly small, to reconstruct exactly what happened on the original non-deterministic run but in a deterministic way, either forwards or backwards.
rr was originally developed by Mozilla to help them reproduce timing bugs that showed up on their nightly testing the following day. But the reverse debugging aspect is also fundamental for when you have a bug that only happens hours inside execution, since you often want to step back to examine what previous state led to the later failure.
The following example showcases some of its features, notably the reverse-next, reverse-step and reverse-continue commands.
Install on Ubuntu 18.04:
sudo apt-get install rr linux-tools-common linux-tools-generic linux-cloud-tools-generic
sudo cpupower frequency-set -g performance
# Overcome "rr needs /proc/sys/kernel/perf_event_paranoid <= 1, but it is 3."
echo 'kernel.perf_event_paranoid=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Test program:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int f() {
    int i;
    i = 0;
    i = 1;
    i = 2;
    return i;
}

int main(void) {
    int i;
    i = 0;
    i = 1;
    i = 2;
    /* Local call. */
    f();
    printf("i = %d\n", i);
    /* Is randomness completely removed?
     * Recently fixed: https://github.com/mozilla/rr/issues/2088 */
    i = time(NULL);
    printf("time(NULL) = %d\n", i);
    return EXIT_SUCCESS;
}
compile and run:
gcc -O0 -ggdb3 -o reverse.out -std=c89 -Wextra reverse.c
rr record ./reverse.out
rr replay
Now you are left inside a GDB session, and you can properly reverse debug:
(rr) break main
Breakpoint 1 at 0x55da250e96b0: file a.c, line 16.
(rr) continue
Continuing.
Breakpoint 1, main () at a.c:16
16 i = 0;
(rr) next
17 i = 1;
(rr) print i
$1 = 0
(rr) next
18 i = 2;
(rr) print i
$2 = 1
(rr) reverse-next
17 i = 1;
(rr) print i
$3 = 0
(rr) next
18 i = 2;
(rr) print i
$4 = 1
(rr) next
21 f();
(rr) step
f () at a.c:7
7 i = 0;
(rr) reverse-step
main () at a.c:21
21 f();
(rr) next
23 printf("i = %d\n", i);
(rr) next
i = 2
27 i = time(NULL);
(rr) reverse-next
23 printf("i = %d\n", i);
(rr) next
i = 2
27 i = time(NULL);
(rr) next
28 printf("time(NULL) = %d\n", i);
(rr) print i
$5 = 1509245372
(rr) reverse-next
27 i = time(NULL);
(rr) next
28 printf("time(NULL) = %d\n", i);
(rr) print i
$6 = 1509245372
(rr) reverse-continue
Continuing.
Breakpoint 1, main () at a.c:16
16 i = 0;
When debugging complex software, you will likely run up to a crash point and then find yourself inside a deep frame. In that case, don't forget that in order to reverse-next on higher frames, you must first:
reverse-finish
up to that frame; just doing the usual up is not enough.
The most serious limitations of rr in my opinion are:
https://github.com/mozilla/rr/issues/2089 you have to do a second replay from scratch, which can be costly if the crash you are trying to debug happens, say, hours into execution
https://github.com/mozilla/rr/issues/1373 x86 only
UndoDB is a commercial alternative to rr (https://undo.io). Both are trace/replay based, but I'm not sure how they compare in terms of features and performance.
Nathan Fellman wrote:
But does reverse debugging only allow you to roll back next and step commands that you typed, or does it allow you to undo any number of instructions?
You can undo any number of instructions. You're not restricted to, for instance, only stopping at the points where you stopped when you were going forward. You can set a new breakpoint and run backwards to it.
For instance, if I set a breakpoint on an instruction and let it run until then, can I then roll back to the previous instruction, even though I skipped over it?
Yes. So long as you turned on recording mode before you ran to the breakpoint.
Here is how another reverse-debugger called ODB works. Extract:
Omniscient Debugging is the idea of collecting "time stamps" at each "point of interest" (setting a value, making a method call, throwing/catching an exception) in a program and then allowing the programmer to use those time stamps to explore the history of that program run.
The ODB ... inserts code into the program's classes as they are loaded, and when the program runs, the events are recorded.
I'm guessing the gdb one works in the same kind of way.
Reverse debugging means you can run the program backwards, which is very useful to track down the cause of a problem.
You don't need to store the complete machine state for each step, only the changes. It is probably still quite expensive.