My main program is written in C. The C code calls an interpreted language, say Python, through its C API. The interpreted language then calls back into other C APIs. All the C code is debuggable, and the interpreted language's C interface is also debuggable.
I found that callgrind cannot profile the C code called from the interpreted language: its call tree stops at the C code that calls into the interpreter.
Is this a known limitation of callgrind? Oracle Solaris Studio handles this case.
callgrind maintains the call stack incrementally. This implies that it has to understand the calling conventions of everything on the stack, and detect the calls and the returns.
You might compare the valgrind unwinder and the gdb unwinder using gdb+vgdb.
Start valgrind with: valgrind --vgdb-error=0 --vgdb=full ....
Put a breakpoint in the C code called by the interpreted language,
and continue the execution.
When the breakpoint is hit, compare the output of the following two
gdb commands:
backtrace
monitor v.info scheduler
This will show whether the gdb and/or valgrind unwinders are working properly.
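For example, a session could look like this (myprog and my_c_callback are placeholder names for your binary and for a C function reached from the interpreter):

$ valgrind --vgdb-error=0 --vgdb=full ./myprog
# in a second terminal:
$ gdb ./myprog
(gdb) target remote | vgdb
(gdb) break my_c_callback
(gdb) continue
(gdb) backtrace                  # gdb's unwinder
(gdb) monitor v.info scheduler   # valgrind's unwinder

If backtrace shows the full interpreter-to-C chain but monitor v.info scheduler stops early (or vice versa), you know which unwinder is losing the trail.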
You can try to understand what callgrind does by using some valgrind
debugging options, e.g.
valgrind --tool=callgrind -v -v -v -d -d -d --ct-verbose=3
(adjust the number of -v/-d options and the verbosity level to your taste).
And of course, if you have an old version of valgrind, you might
try the latest release or even the git repository, though I doubt
anything has changed recently in this area.
Related
I am working on a project in which Lua (more specifically LuaJIT) is the scripting language but most of the heavy lifting is performed in C code. The C code is compiled into a .so file and LuaJIT's ffi capabilities are used to load the library and access the functions.
Let's say I set a breakpoint in the Lua code at the point where the C function is invoked. Can I "step into" the C code at that point and continue stepping through the C code as if I were using gdb?
No; the ZeroBrane Studio debugger (it uses MobDebug) only supports stepping through Lua code. I don't think there is a debugger that integrates stepping through Lua and C code. You may be able to use two debuggers, though: one for the Lua code and one for the C code.
How can I use Perl 6 to automate interactions with command-line programs that expose a text terminal interface, for testing purposes?
If you want to use Perl 6 to automate execution or testing of console applications, I think you're going to want to use NativeCall to interact with the expect library. Once expect is installed, man libexpect will show its API documentation, though the way of accessing the documentation (such as the manpage name) may differ between package distributions.
Expect has APIs to launch a program, wait for text to appear on the (emulated) console (to "expect" text), and send text to the console (to emulate typing). The most common use case is automating programs which require password input. Expect is often scripted (it is an interpreter), but there's no reason not to use it from a higher-level programming language.
Edit: I somewhat answered the wrong question. The OP is interested in testing Perl 6 modules with Perl 6. That said, using expect to launch a second Perl 6 interpreter which uses the module is still the strongest, strictest way to test the application. You don't need to know what kind of terminal library the module uses, because expect should be compatible with nearly all of them. You can send text to the STDIN pipe of a subprocess, but that's not as strong as the subprocess (console) communication you get from expect. I don't know if there's a way to hijack whichever terminal library the module uses and communicate with it directly.
If it's just a plain interface, you could simply run the program and collect its output. The currently experimental Testo module has an is-run routine. You could use that directly, or, if the experimental status is bothersome, copy its guts into your own helper routine.
Take a look at the Sparrow6 Task Check Language, a Perl 6 based DSL for verifying text output. I've done a lot of terminal application testing with it.
I've got working multiplatform Hello World code in Gas, NASM, and YASM, and I would like to shrink the corresponding executables from 76 KB to something more reasonable for an assembly Hello World program, seeing as a basic Hello World C program produces an 80 KB executable and assembly should be much smaller. I believe the bulk of each executable is junk pulled in by the linker options.
Trace:
LIBS=c:/strawberry/c/i686-w64-mingw32/lib/crt2.o -Lc:/strawberry/c/i686-w64-mingw32/lib -lmingw32 -lmingwex -lmsvcrt
ld -o $(EXECUTABLE) hello.o $(LIBS)
hello.exe
Hello World!
Code:
.data
msg: .ascii "Hello World!\0"
.text
.global _main
_main:
pushl $msg
call _puts
leave
movl $0, %eax
ret
If I remove any of the options in LIBS, either the link fails or the resulting executable raises a Windows error when run. So the logical thing to do is replace the puts call with something simpler, like sys_write, but I don't know how to do this in a multiplatform way. The little documentation online says to use int 0x80 to make a call to the kernel, but that only works on Linux, not on Windows, and I want my assembly code to be multiplatform.
Your program bloat comes mostly from the C runtime library. In Windows, a simple hello world program can be < 5K if you write your own "tiny" CRT. Here is a link to a project which explains all of the details about how to shrink your EXE to its smallest possible size:
http://www.codeproject.com/Articles/15156/Tiny-C-Runtime-Library
For Windows, you can call the native Win32 API functions, such as GetStdHandle() and WriteFile() to write directly to stdout.
For Unix-like systems, you can call the write() syscall with file descriptor 1 for stdout.
The details of exactly how you do each of these will depend on which assembler and OS you are using.
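For illustration, here is a minimal C sketch of those two paths; GetStdHandle/WriteFile and write are the real APIs, while the #ifdef arrangement is just one way of keeping a single multiplatform source (the same split works for the assembly version, with one file per OS):

#include <stddef.h>

#ifdef _WIN32
#include <windows.h>
static void print(const char *s, DWORD len) {
    DWORD written;
    /* Win32: fetch the stdout handle and write raw bytes to it */
    WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), s, len, &written, NULL);
}
#else
#include <unistd.h>
static void print(const char *s, size_t len) {
    /* Unix: file descriptor 1 is stdout */
    write(1, s, len);
}
#endif

int main(void) {
    print("Hello World!\n", 13);   /* 13 bytes, including the newline */
    return 0;
}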
You should be able to link dynamically to the C runtime library instead of including it statically. I don't know how to do it in Linux, but in Windows you can use msvcrt.dll.
The executable bloat is most likely coming from the C library dependencies, especially for puts. Refactoring the code to print Hello World without a C library call will most likely require OS-specific assembly code, as the Unix approach involves interrupts that make calls to the kernel, while Windows has its own VB-like API for such tasks.
I did manage to find a solution that creates a small executable while still maintaining platform agnosticism. Ordinarily, C preprocessor directives would do the trick, but I'm not sure which assembly languages even have preprocessor syntax. A similar effect can be achieved with controlled, included assembly files: a collection of wrapper files handles the OS-specific assembly code, while a shared included file does the rest, and a simple Makefile runs the respective build commands referencing the right wrapper for the desired platform.
For example, I was able to quickly construct FASM code that works this way. (Though I have yet to get it to actually bypass puts with something less bloaty.) Anyway, it's progress.
Because almost all C functions use the CDECL calling convention, in which the caller, not the callee (the function), adjusts the stack.
You will get into trouble if you don't learn how to do things correctly now: read harder-to-track-down bugs.
Try this:
push szLF        ; arg 3: newline string
push esp         ; arg 2: the current value of the stack pointer
push fmtint2     ; arg 1: format string
call printf      ; print esp so it can be compared later
add esp, 4 * 3   ; CDECL: the caller removes its three arguments
push msg
call puts        ; note: the pushed argument is never removed here
push szLF        ; print esp again, same three-argument sequence
push esp
push fmtint2
call printf
add esp, 4 * 3
Run it and note the numbers printed before and after your call to puts. They are different, no? Well, they are supposed to be the same. Now add:
add esp, 4
after your call to puts and run it again. The numbers are the same now? That means you have a balanced stack pointer, and it confirms that the function uses the CDECL calling convention.
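For the snippet to assemble on its own, it assumes data definitions along these lines (NASM syntax; the exact format string is a guess, any integer-plus-string format will do):

section .data
msg     db "Hello World!", 0
szLF    db 10, 0              ; a newline string
fmtint2 db "%u%s", 0          ; unsigned integer followed by a string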
I am new to Prolog, and the cycle of launching the Prolog interpreter from the terminal, typing consult('some_prolog_program.pl'), and then testing the predicate I just wrote is very time-consuming. Is there a way to run a scripted test to speed up development?
For example, in C I can write a main that exercises the functions I defined and then execute:
make && ./a.out
to test the code. Can I do something similar with Prolog?
You can have the interpreter always open and then recompile the file.
You can auto-run a predicate after compiling the file:
:- foo(4,2).
This will run foo(4,2) when the line is encountered in the file.
There are flags that can be used while launching (most) Prolog interpreters that allow you to compile a file and run predicates (check the man page). This way you could make a Bash script. The following will consult file.pl and run foo/0 using SWI-Prolog:
#!/bin/sh
exec swipl -q -f none -g "load_files([file],[silent(true)])" \
-t foo -- "$@"
This predicate will unify Arguments with a list of the flags you gave at the command line:
current_prolog_flag(argv, Arguments)
But unless you are going to run a lot of tests, I don't think that writing all this extra code will be faster.
Personally I really like the flexibility of testing any predicate at any time with or without tracing (see trace/0) without having to write extra code to call them (unlike in C).
P.S. about reloading the file without leaving the interpreter: you might have problems if you use dynamic predicates or global variables; you will have to do some cleanup.
You can invoke a test file from the command-line with prolog +l <file>
Also, you can build a single run_tests predicate that exercises a series of calls and validates the actual results against expected results. Here's an article with a good worked-out example: http://kenegozi.com/blog/2008/07/24/unit-testing-in-prolog
In SWI, you can load things as usual. Then, when you edit your files, you simply type make. at the toplevel and it checks all dependencies automatically and reloads only the modified files.
For bigger projects it does make a lot of sense to use makefiles. In particular to do unit testing. See SWI's package plunit.
For simple scripts in SWI-Prolog, using the REPL to test the code manually is usually good enough. Changed files can be reloaded via make/0 (?- make. at the toplevel). Just keep the Prolog REPL running while editing, save the edits, run make. in the REPL, and hit ↑, ↑, Enter to re-run the last query before the make. from history.
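A typical round trip looks like this (factorial.pl as the example file; the compilation message and path are abbreviated):

?- [factorial].
true.

% ... edit factorial.pl in your editor and save ...

?- make.
% /home/you/factorial.pl compiled 0.00 sec, 1 clauses
true.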
The main benefit of REPL is its interactivity:
You may fiddle with the arguments.
Transition to debugging or tracing (both command line and graphical) is easy.
You don't need to perform I/O to print the result. Output is handled by the toplevel, which prints the substitution. You see the whole substitution, not only the part of it you happen to print (and so you won't accidentally overlook other parts).
You may interactively choose how many substitutions you want to see for a goal that succeeds multiple times.
It is obvious if there is a choice point left after the last result returned by a non-deterministic predicate, which is hard to observe otherwise. In that case, false. is printed when backtracking beyond the last result, as the transcript below shows.
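For example, in SWI-Prolog (pressing ; after each answer):

?- member(a, [a, b, a]).
true ;
true ;
false.

The final false. reveals the leftover choice point after the second solution.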
If you need to preserve the test calls to repeat them later, create a protocol (a transcript or "log" of the interactive session) and edit it into a script, or even a test suite (see below). The protocol is a plain text file with terminal escape sequences, containing a verbatim copy of what you see during the interactive session. View the protocol using cat protocol.txt on Linux (and other *nixes) or type protocol.txt on Windows.
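In SWI-Prolog, recording is started and stopped like this (session.txt is just an example file name):

?- protocol('session.txt').
true.

?- append([a], [b], Xs).
Xs = [a, b].

?- noprotocol.
true.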
If interactivity is not needed, perform the test calls from the command line non-interactively. Let's test the CLP(FD) factorial example n_factorial/2, saved in factorial.pl (don't forget to add :- use_module(library(clpfd)). when copying the code):
$ swipl -q -t "between(0, 9, N), n_factorial(N, F), format('~D ', F), fail." factorial.pl
1 1 2 6 24 120 720 5,040 40,320 362,880
On Windows, you may need to specify the full path to swipl.exe, as it's probably not on the PATH.
If the call is always the same, you may save it to a shell script or Makefile (run would be a good name for the target).
In your current workflow for testing functions in C, you create a new program and call the function under test from its entry point (the main function). Prolog scripts can have an entry point, too; see library(main) and the sketch below. Prolog does not require compilation, so you can just call the script directly (./test.pl) without running make first.
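A minimal sketch (test.pl is a placeholder name, and my_pred/1 stands for whatever predicate you are testing):

#!/usr/bin/env swipl
:- use_module(library(main)).
:- initialization(main, main).

% hypothetical predicate under test
my_pred(hello).

main(_Argv) :-
    (   my_pred(hello)
    ->  writeln(passed)
    ;   writeln(failed),
        halt(1)
    ).

Make it executable with chmod +x test.pl and run it as ./test.pl.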
For larger projects, you may want to create a less ad-hoc test suite. A unit testing framework like PlUnit is needed. Its use is beyond the scope of this answer; see the documentation.
I handed in a C program which contained a lot of verbose printf debug lines. I always compiled it command line with gcc.
Now it's been turned into an Eclipse CDT (Helios) project, and my \n no longer produce carriage returns. I get an unreadable "staircase" in my console.
RCINAHFM. Is there a check box in the IDE I need to modify or do I need to go back and carefully modify hundreds of lines of code?
Any help greatly appreciated.
Bert
RCINAHFM=Remaining calm / I need a hug from Mom
Eclipse does not compile C all by itself. It uses an external compiler for that, usually gcc. So it’s highly unlikely that the compiled program is incorrect, unless the compiler configuration within Eclipse does something very, very weird.
If you get a “staircase”, it sounds as if the new line part is carried out, but no carriage return happens. This might happen under systems that use CR/LF as their line ending, such as DOS/Windows.
Unfortunately, you give way too little detail. Are you using Unix or Windows? Where does the program run: in an xterm, a Windows DOS console, or within the Eclipse console? If the answer is “Eclipse console”, have you tried running it in another terminal instead, or tried running your original program in the Eclipse console? Are you using printf or some other function?
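One quick experiment to confirm the diagnosis (a minimal sketch, not a fix for the existing code): print an explicit CR+LF and see whether the staircase disappears.

#include <stdio.h>

int main(void) {
    printf("plain LF\n");        /* staircases if the console does not translate \n */
    printf("explicit CRLF\r\n"); /* carriage return + line feed spelled out */
    return 0;
}

If the second line starts at the left margin while the first does not, the console is not translating LF to CR/LF, and the fix belongs in the console or terminal configuration, not in your hundreds of printf lines.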