Print complete control flow through gdb including values of variables - dynamic

The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, like classes and their variables. Is there a straightforward way to do this? Can it be done by scripting gdb, or does it require modifying gdb?
OK, the reason for this question is an idea for a debugging tool. Given two different inputs to a program, one causing an incorrect output and the other a correct one, it will tell you where their control flows differ.
So what I think is needed is a complete dump of these two control flows going into a diff engine. If the two inputs follow similar control flows, their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool, with many features built on top of it.
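For the diff stage itself, here is a minimal sketch using Python's difflib, assuming two plain-text dumps with one executed source line per line (trace_good.txt and trace_bad.txt are hypothetical file names, e.g. produced by a gdb script like the one suggested in the answers below):

# a minimal sketch of the diff stage; the file names are hypothetical
import difflib

with open("trace_good.txt") as f:
    good = f.readlines()
with open("trace_bad.txt") as f:
    bad = f.readlines()

# unified_diff shows exactly where the two executions diverge
for line in difflib.unified_diff(good, bad,
                                 fromfile="correct input",
                                 tofile="incorrect input"):
    print(line, end="")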

Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this in Solaris or Leopard. gprof is another possibility.
A bare-bones version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
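For illustration, a minimal sketch of such a script, assuming a GDB built with Python support and a program compiled with -g (load it inside gdb with "source trace.py"):

# trace.py -- single-step from main, logging each source line and the
# locals in scope: a crude version of the control-flow dump asked for
import gdb

def dump_step():
    frame = gdb.selected_frame()
    sal = frame.find_sal()
    where = sal.symtab.filename if sal.symtab else "??"
    print("%s:%d in %s()" % (where, sal.line, frame.name()))
    try:
        for sym in frame.block():
            if sym.is_argument or sym.is_variable:
                print("    %s = %s" % (sym.name, frame.read_var(sym)))
    except RuntimeError:
        pass  # no symbol information for this frame

gdb.execute("break main", to_string=True)
gdb.execute("run", to_string=True)
for _ in range(100000):  # safety cap on the number of steps
    try:
        dump_step()
        gdb.execute("step", to_string=True)
    except gdb.error:
        break  # the inferior has exited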

What you are describing sounds a bit like gdb's "tracepoint debugging". See gdb's internal help ("help tracepoint"); you can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.

Check this out: unlike Coverity, Fenris is free and widely used.
How to print the next N executed lines automatically in GDB?


Show me your ID(E)!

I often work on very small pieces of code, on the order of 100 lines at most, especially when I'm learning something new and just playing with the code, or when I'm debugging.
Because I frequently change the code and want to see how that changes the contents of my variables and the output, it is tedious to either
1) hit the debug button, wait for the debugger to start (in my case I use PyCharm as an IDE) and then inspect the output,
or
2) insert some prints for the variables that I want to observe and compile the code (slightly faster than starting the debugger).
To eliminate this time-consuming workflow, where I constantly hit the compile or debug button every few seconds: is there an IDE where I can set a watch on a few variables, and then each time I change a single character in my source code (or, alternatively, every half second) the IDE automatically compiles everything and shows me the new values of my variables?
(Of course, while I'm mid-change the compilation will give errors, but that is OK. This feature would be a big time saver. Maybe PyCharm has it already implemented? If not, I would ideally hope for a language-agnostic IDE, similar to PyCharm, where variants for Java etc. also exist. If not, since I code in Python, a Python IDE would also be great.)
This might not be exactly what you are looking for, but PyCharm (and IntelliJ, and probably others) can run tests automatically when code changes.
In the PyCharm Run toolbar, look for the "Toggle auto-test" button.
For example, in PyCharm you can create test cases that just run the code you're interested in and print the variables you need.
Then create a run configuration that runs only those tests and set it to run automatically.
For more details see PyCharm documentation on rerunning tests.
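As a rough sketch (mymodule and transform are hypothetical stand-ins for whatever code you are playing with):

# runs on every save when "Toggle auto-test" is enabled; the prints
# take the place of manually watched variables
from mymodule import transform  # hypothetical module under study

def test_watch_variables():
    data = [1, 2, 3]
    result = transform(data)
    print("data   =", data)    # appears in the test runner's output
    print("result =", result)
    assert result is not None  # keep the test trivially green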
The Scala plugin for IntelliJ has exactly what you need in the form of "worksheets", where every expression is automatically re-evaluated whenever it or anything it references changes.
Since (based on your usage of PyCharm) I assume you're primarily using Python, I think Jupyter notebook is your best bet. Jupyter is language-agnostic, but it began as Python-specific (it was called IPython notebook for this reason).
I have not tried it, but this guide purports to show how to get Jupyter to work with PyCharm.
EDIT: Here is another possibility called vim worksheet; I haven't tried it either, but it purports to do the same thing as Scala worksheets, in vim, for a number of languages including Python.
The Python Spyder IDE (it comes with Anaconda) has this feature. When you hit run, you can see all of the variables at the top right of the screen, and you can click on them to see their values (this is very helpful with NumPy arrays too!).
If your interest is in improving the actual workflow:
I used to program like you, looking at what my variables changed to and designing or debugging my code based on those changes. However, it is far too inefficient and costly to set up watches on variables over and over again, and besides, when something breaks you have to go through the whole debugging process again.
I changed my design process to improve my workflow and adopted Test-Driven Development (TDD). You can look for tools for your specific implementation or IDE, but the principles and workflow stay with you. With TDD you stop looking at how the variables change and instead focus on what the functions should do, which means faster iteration (with real-time testing tools), easier debugging, and far better, safer refactoring.
My favorite tool for this is Cucumber, an agnostic tool (with respect to IDE and programming language) which helps you test your code scenarios and document your features at the same time.
Hope it helps. I know it's a very opinionated answer, but it's honest advice for improving one's workflow.
You should try Thonny. It is developed by the Institute of Computer Science at the University of Tartu.
The 4 features which might be of help to you are below (verbatim from the website):
No-hassle variables.
Once you're done with hello-worlds, select View → Variables and see how your programs and shell commands affect Python variables.
Simple debugger.
Just press Ctrl+F5 instead of F5 and you can run your programs step-by-step, no breakpoints needed. Press F6 for a big step and F7 for a small step. Steps follow program structure, not just code lines.
Stepping through statements
Step through expression evaluation. If you use small steps, then you can even see how Python evaluates your expressions. You can think of this light-blue box as a piece of paper where Python replaces subexpressions with their values, piece-by-piece.
Visualization of expression evaluation
Faithful representation of function calls.
Stepping into a function call opens a new window with separate local variables table and code pointer. Good understanding of how function calls work is especially important for understanding recursion.

How to find the size of a reg in verilog?

I was wondering if there is a way to compute the size of a reg in Verilog. I researched it quite a bit and found $size(a), but it's SystemVerilog-only and won't work in my Verilog program.
Does anyone know an alternative?
As a side note, I'm also having some trouble with my test bench: when I update a value in the file, that change is not taken into consideration when I simulate. I've been told I might be using an old test bench, but the one I am continuously simulating is the only one in this project.
EDIT:
To give you an idea of the problem: in my code there is a "start" signal, and when it is set to 1 the operation starts; otherwise it stays idle. I began writing the test bench with start=0, tested and simulated it, then edited the test bench to set start to 1. But when I simulate, the start signal remains 0 in the waveform. I checked whether I was using another test bench, but it is the only one in this project.
Given that I was on a deadline, I adapted the code to the "frozen" test bench. I am now getting all the results I want, but I wanted to test some other features of my code, so I created a new project and copy-pasted the code into new files (including the same test bench). But when I ran a simulation, the waveform displayed wrong results (even though I was using the exact same code in all modules and the test bench). Any idea why?
Any help would be appreciated :)
There is a standardised way to do this, but it requires you to use the VPI, which I don't think you get in Modelsim's student edition. In short, you have to write C code and dynamically link it into the simulator. In the C code, you can get object properties using routines such as vpi_get. Useful properties might be vpiSize (which is what you want), vpiLeftRange, vpiRightRange, and so on.
Having said all that, Verilog is essentially a static language, and objects have to be declared with a static width using constant expressions. Having a run-time method to determine an object's size is therefore of pretty limited value (since you should already know it), and may not solve whatever problem you actually have. Your question would make more sense for VHDL (and SystemVerilog?), which are much more dynamic.
Note on Icarus: the developers have pushed lots of SystemVerilog stuff back into the main language. If you take advantage of this, you may find that your code is not portable.
For the second part of your question: you need to be specific about what your problem actually is.

Sizeable screenshot UI code

I need some way to access the UI for the screenshot command in OS X (Cmd+Shift+4), and I would like to be able to activate that UI with a button that will screenshot the selected region and save it to a temp location.
Thanks in advance ;)
If there's a direct way to do this from Cocoa, maybe someone will chime in... but I doubt it exists. You can, however, get any behavior you want from the "screencapture" command line utility; it does exactly the same as Cmd-Shift-3 or 4 with a gazillion options. Just type "man screencapture" in Terminal to see all the flags.
But this would require you to run a bash script from your app. If you haven't done that before, google it or check out the many threads here on SO... Opinions vary on how complicated it should be, from a one-liner call to system() to a fully thread-safe, error-reporting NSTask, with all kinds of answers in between.
I'd recommend using one of the NSTask answers which keep themselves to half a dozen lines, but YMMV.
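For illustration, the same idea sketched in Python with subprocess instead of NSTask; the -i flag brings up the interactive region selector, and the cancel behavior (non-zero exit, empty file) is an assumption worth verifying on your OS version:

import os
import subprocess
import tempfile

def capture_region():
    fd, path = tempfile.mkstemp(suffix=".png")
    os.close(fd)
    # -i gives the same crosshair region selector as Cmd+Shift+4
    result = subprocess.run(["screencapture", "-i", path])
    if result.returncode != 0 or os.path.getsize(path) == 0:
        os.unlink(path)  # the user cancelled; nothing was captured
        return None
    return path

print(capture_region())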

How to encourage positive developer behavior with an IDE?

The goal of IDEs is to increase productivity. They do a great job at that. Refactoring, navigation, inline documentation and auto-completion help increase productivity immensely.
But: every tool is a weapon. The very same IDE helps to produce junk code. Some IDE features are an invitation to produce bad code: code generation, code formatting tools, refactoring tools.
IDE overuse tends to isolate developers from the necessary details. It is a good thing that you can start working right away, but at some point in your career you have to be able to figure out how to start a process yourself. You can ignore such details for some time; in the end they are important for writing a working product (vs. bolted-together stuff that works 90% of the time).
How do you encourage positive behavior of other developers working with an IDE? This is a question as old as copy and paste.
To get the right impression: developers have to have the maximum freedom to mobilize their maximum creativity and motivation. They may use IDEs and all the related tools as they see fit. Nobody should impose draconian measures on them. I don't want to demotivate anyone or force someone to do something. Good behavior has to be encouraged. It has to itch a little bit if you do the wrong thing. This is in the same vein as the SO "accept rate" metric (and reputation): you can ignore it, but life is better if you follow the rules.
(The solution should work in a given setting. You can ignore reviews, changing the staffing or more education as potential solutions.)
Train your IDE, instead of being trained by it.
Set up code formatting the way you (or your team) want it. Heck, even disable it in cases where it makes sense. I've never seen an IDE align something like the following with a sensible combination of tabs and spaces (where \t is obviously the tab character):
{
\tcout << "Hello "
\t     << (some + long + expression +
\t         to_produce_the_word(world))
\t     << endl;
}
In languages like Java, you cannot avoid boilerplate. The best option you have is to check generated code, ensuring that it is the same as what you'd have written by hand. Modify it as necessary. Configure your IDE to generate the exact code that you need, if possible. Eclipse is pretty good at this.
Know what's going on under the hood.
Know that your IDE is actually invoking the compiler. Have some insight into the flags that it passes. Be able to invoke the compiler from the command line.
Know about the runtime system. Be aware of the flags that are used or needed to launch your program. Be able to launch the program from a command line.
I think before anyone uses a RAD tool of any type, they should be able to write the application from scratch ("scratch" being wiring together the framework components) in Notepad, potentially on a computer that is 10 years older than current technology :P. Not knowing the ins and outs of a paradigm/framework leads to bad code from novice developers who only learn things from a mile-high view of the platforms they develop for. Perhaps they should do this in a few technologies -- e.g., GTK programming is completely different from MVC, which is in turn different from Swing and .NET.
I think the end result should be a developer who thinks about the finer details of a problem before jumping to thinking of how they will write an interface to it in a specific RAD environment.
It's an open-ended question, but...
We have an Eclipse format file that everyone shares, so that we all format the code in the same manner (except for the one lone IntelliJ guy we have).
Everyone shares a dictionary file. It helps to remove all the red lines from the code, making it look cleaner and more readable.
I run EMMA over the code to find out who isn't testing their code, and then moan at them.
The main problem we face is that most of the team don't know all the features/power of the IDE (Eclipse). They didn't know about Ctrl+O (twice), or auto code generation. All I can do as a 'hot key wizard' is keep sharing my knowledge with them to help them become more productive.
I look forward to the day when my problem is that they auto-generate as much as possible, rather than me finding bugs where the wrong value is returned from a getter method due to a typo.
Attempt development (at least occasionally) using only a text editor and launching the compilation, testing, etc. from the command line.
Typing the commands will get tedious very quickly, so create scripts or (even better) learn rake, ant or msbuild.
If the IDE does code generation for you and that code generation is really important (such as generating classes from an XSD or proxy classes from a WSDL), try to find out how to run the code generation from the command line - then hook the code generation into the build (so you'll never be tempted to edit the generated code).
The idea of autoformatting code is great, but it usually just turns your code into a mess. If you have less code, minor formatting inconsistencies are just not a big deal.
Adding code quality tools into your build - style checks, class and method sizes, complexity, code duplication, test coverage, etc. (complexian, simian, flog, flay, ndepend, ncover, etc.) - will discourage IDE-generated code.

How would one go about testing an interpreter or a compiler?

I've been experimenting with creating an interpreter for Brainfuck, and while it is quite simple to make and get up and running, part of me wants to be able to run tests against it. I can't seem to fathom how many tests one might have to write to test all the possible instruction combinations and ensure that the implementation is correct.
Obviously, with Brainfuck, the instruction set is small, but I can't help but think that as more instructions are added, your test code would grow exponentially. More so than your typical tests at any rate.
Now, I'm about as newbie as you can get in terms of writing compilers and interpreters, so my assumptions could very well be way off base.
Basically, where do you even begin with testing on something like this?
Testing a compiler is a little different from testing some other kinds of apps, because it's OK for the compiler to produce different assembly-code versions of a program as long as they all do the right thing. However, if you're just testing an interpreter, it's pretty much the same as any other text-based application. Here is a Unix-centric view:
You will want to build up a regression test suite. Each test should have
Source code you will interpret, say test001.bf
Standard input to the program you will interpret, say test001.0
What you expect the interpreter to produce on standard output, say test001.1
What you expect the interpreter to produce on standard error, say test001.2 (you care about standard error because you want to test your interpreter's error messages)
You will need a "run test" script that does something like the following
function fail {
  echo "Unexpected differences on $1:"
  diff $2 $3
  exit 1
}

for testname
do
  tmp1=$(tempfile)   # tempfile(1) is Debian-specific; mktemp(1) is the portable equivalent
  tmp2=$(tempfile)
  brainfuck $testname.bf < $testname.0 > $tmp1 2> $tmp2
  # cmp -s is silent; a non-zero exit status means the files differ
  cmp -s $testname.1 $tmp1 || fail "stdout" $testname.1 $tmp1
  cmp -s $testname.2 $tmp2 || fail "stderr" $testname.2 $tmp2
done
You will find it helpful to have a "create test" script that does something like
brainfuck $testname.bf < $testname.0 > $testname.1 2> $testname.2
You run this only when you're totally confident that the interpreter works for that case.
You keep your test suite under source control.
It's convenient to embellish your test script so you can leave out files that are expected to be empty.
Any time anything changes, you re-run all the tests. You probably also re-run them all nightly via a cron job.
Finally, you want to add enough tests to get good test coverage of your compiler's source code. The quality of coverage tools varies widely, but GNU Gcov is an adequate coverage tool.
Good luck with your interpreter! If you want to see a lovingly crafted but not very well documented testing infrastructure, go look at the test2 directory for the Quick C-- compiler.
I don't think there's anything 'special' about testing a compiler; in a sense it's almost easier than testing some programs, since a compiler has such a basic high-level summary - you hand in source, it gives you back (possibly) compiled code and (possibly) a set of diagnostic messages.
Like any complex software entity, there will be many code paths, but since it's all very data-oriented (text in, text and bytes out) it's straightforward to author tests.
I’ve written an article on compiler testing, the original conclusion of which (slightly toned down for publication) was: It’s morally wrong to reinvent the wheel. Unless you already know all about the preexisting solutions and have a very good reason for ignoring them, you should start by looking at the tools that already exist. The easiest place to start is Gnu C Torture, but bear in mind that it’s based on Deja Gnu, which has, shall we say, issues. (It took me six attempts even to get the maintainer to allow a critical bug report about the Hello World example onto the mailing list.)
I’ll immodestly suggest that you look at the following as a starting place for tools to investigate:
Software: Practice and Experience, April 2007. (Payware, not available to the general public; free preprint at http://pobox.com/~flash/Practical_Testing_of_C99.pdf.)
http://en.wikipedia.org/wiki/Compiler_correctness#Testing (Largely written by me.)
Compiler testing bibliography (Please let me know of any updates I’ve missed.)
In the case of Brainfuck, I think testing it should be done with Brainfuck scripts. I would test the following, though:
1: Are all the cells initialized to 0?
2: What happens when you decrement the data pointer when it's currently pointing to the first cell? Does it wrap? Does it point to invalid memory?
3: What happens when you increment the data pointer when it's pointing at the last cell? Does it wrap? Does it point to invalid memory?
4: Does output function correctly?
5: Does input function correctly?
6: Does the [ ] stuff work correctly?
7: What happens when you increment a byte more than 255 times? Does it wrap to 0 properly, or is it incorrectly treated as an integer or other value?
More tests are possible too, but this is probably where I'd start. I wrote a BF compiler a few years ago, and it had a few extra tests. In particular, I tested the [ ] stuff heavily by putting a lot of code inside the block, since an early version of my code generator had issues there (on x86, using a jxx, I had issues when the block produced more than 128 bytes or so of code, resulting in invalid x86 asm).
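For instance, a few of the checks above written as pytest cases, assuming a hypothetical bf module exposing interpret(code, stdin="") that returns the program's output and uses 8-bit wrapping cells:

from bf import interpret  # hypothetical interpreter under test

def test_cells_start_at_zero():
    # '.' prints the current cell; a fresh cell must be 0
    assert interpret(".") == "\x00"

def test_input_is_echoed():
    # ',' reads one byte, '.' writes it back out
    assert interpret(",.", stdin="A") == "A"

def test_loop_skipped_on_zero_cell():
    # '[' must jump past its matching ']' when the cell is 0
    assert interpret("[+].") == "\x00"

def test_cell_wraps_after_256_increments():
    # 256 '+' ops should take an 8-bit cell back to 0
    assert interpret("+" * 256 + ".") == "\x00"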
You can also test with some already-written apps.
The secret is to:
Separate the concerns
Observe the law of Demeter
Inject your dependencies
Well, software that is hard to test is a sign that the developer wrote it like it's 1985. Sorry to say that, but by applying the three principles I presented here, even line-numbered BASIC would be unit-testable (it IS possible to inject dependencies into BASIC, because you can do "goto variable").
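As a minimal sketch of the injection idea applied to an interpreter (the Interpreter class below is a made-up toy, not any particular implementation): pass the I/O streams in rather than hard-wiring sys.stdin and sys.stdout, so a test can substitute in-memory fakes:

import io
import sys

class Interpreter:
    def __init__(self, stdin=None, stdout=None):
        # dependencies are injected; the real streams are only a default
        self.stdin = stdin if stdin is not None else sys.stdin
        self.stdout = stdout if stdout is not None else sys.stdout

    def run(self, program):
        # toy semantics: copy one input byte to output per '.' in the program
        for op in program:
            if op == ".":
                self.stdout.write(self.stdin.read(1))

# in a test, no console is needed at all:
fake_in, fake_out = io.StringIO("hi"), io.StringIO()
Interpreter(stdin=fake_in, stdout=fake_out).run("..")
assert fake_out.getvalue() == "hi"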