Installing Valgrind Patches

I am trying to use Valgrind to tell me about memory usage in my C programs. I have a very simple program:
int main() {
return 0;
}
I would expect no memory leaks. But when I run valgrind ./myFile I get this:
Syscall param msg->desc.port.name points to uninitialised byte(s)
The trace it prints out doesn't have anything to do with my source code.
I've found sites that refer to a bug and patches to fix it, but I'm wondering: aside from manually typing in the changes to Valgrind's files, how do I get the changes onto my Mac?
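For reference, applying such a patch normally means rebuilding Valgrind from source rather than editing an installed copy. A rough sketch, assuming you have downloaded a source tarball and saved the patch as fix-macos.patch (both file names here are placeholders, and the -p level depends on how the patch was made):
tar xf valgrind-X.Y.Z.tar.bz2
cd valgrind-X.Y.Z
patch -p1 < ../fix-macos.patch
./configure --prefix=$HOME/valgrind-patched
make
make install
The patched binary then lives under the chosen prefix and can be run directly, leaving any system-installed Valgrind untouched.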

Related

Using Vector Offset table with MBED library on Eclipse IDE

I'm a newly graduated electronics engineer, and one of my first tasks at my new job is to import some code into the Mbed compiler.
I'm trying to run the Mbed Blinky example on my custom hardware with an LPC1769 chip. I've exported the Blinky app from the online Mbed compiler to GNU Eclipse and imported it into the IDE.
The Mbed Blinky code runs fine when I set the appropriate LED pin (changing LED1 in PinNames.h from 1.10 to 2.13 for my hardware) and flash it directly, so Mbed and my custom HW aren't the problem. However, my firm has a custom bootloader that must be used with every application, and it requires the program to start at 0x4000.
For this, my firm previously added the line below to their code, flashed the bootloader, and uploaded the IDE's output .bin file to the board with a custom FW loading program.
SCB->VTOR = (0x4000) & 0x1FFFFF80;
When I follow the same steps, the build completes without complaint, but I see no blinking when I upload the program on top of my bootloader.
I suspect I have to make some changes to the built-in CMSIS library and/or the startup_LPC17XX.o and system_LPC17xx.o files that come with the Mbed export, but I'm confused. Any help would be much appreciated.
Also, I'm using the automatically generated makefile, in case that matters.
Most importantly, you need to adjust the code location in the linker script, for example:
MEMORY {
FLASH : ORIGIN = 0x4000, LENGTH = 0x7C000
}
Check the startup code and linker script for any further absolute addresses in flash memory.
Adjusting the VTOR is required for interrupts, if the bootloader doesn't already do that. The & operation looks weird; it should be sufficient to simply write 0x4000, or, even better, something like:
SCB->VTOR = (uint32_t) &_IsrVector;
This assumes you have defined _IsrVector in your linker script or startup code to refer to the very first byte of the vector table, i.e. the definition of the initial stack pointer. That way you don't have to adjust the code if the memory layout changes in the linker script, and you avoid magic numbers.
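To illustrate, a minimal sketch of one way to wire that up, assuming the vector table sits at the very start of FLASH as usual (the header name and function name here are my own choices; _IsrVector is the symbol from the snippet above):
In the linker script, after the MEMORY block shown earlier:
_IsrVector = ORIGIN(FLASH);
And in the C startup/initialisation code:
#include "LPC17xx.h"               /* CMSIS device header, assumed to be present in the export */
extern uint32_t _IsrVector;        /* linker-script symbol; only its address is meaningful */
void relocate_vector_table(void)
{
    SCB->VTOR = (uint32_t)&_IsrVector;   /* point VTOR at the application's vector table at 0x4000 */
    __DSB();                             /* ensure the write completes before interrupts are enabled */
}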

GNU compile-time stack checking

Our application crashed immediately upon entering main with the following:
0x0000000000492148 in main (argc=Cannot access memory at address 0x7fffff7689fc)
After using objdump, we realized several very large objects were being created on the stack. So far, so good.
Now we are trying to get g++ to warn us about such huge stack allocations in the function prologue, but using -fstack-check doesn't do it in this case, possibly because the problem is in main.
I read about STACK_CHECK_BUILTIN, but is that a flag meant for building g++ itself rather than for compiling my application? The documentation is vast, but not concise.
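For what it's worth, here is a minimal sketch of the scenario being described, together with one GCC option, -Wstack-usage (not mentioned above), that reports oversized frames at compile time. The file name and the 64 KiB threshold are arbitrary:
/* big_stack.cpp -- compile with, e.g.:  g++ -Wstack-usage=65536 -c big_stack.cpp
   GCC should then warn that main's stack usage exceeds the given limit. */
#include <cstring>

int main()
{
    char huge[64 * 1024 * 1024];          /* ~64 MiB automatic object, far beyond a typical 8 MiB stack limit */
    std::memset(huge, 0, sizeof huge);    /* touch it so the frame is actually used at run time */
    return huge[0];
}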

Generating suppressions for memory leaks

I want to suppress Valgrind's reporting of some "definitely lost" memory from the library I'm using. I have tried valgrind --gen-suppressions=yes ./a, but it only prompts for errors such as "conditional jump or move depends on uninitialized value".
Is there a way to generate suppressions for straight-up memory leaks? If not, is it difficult to write them by hand? Valgrind's manpage seems to discourage it, at least for errors.
Run valgrind with the --gen-suppressions=all and --log-file=memcheck.log options, and manually copy/paste the logged suppressions into the suppression file.
valgrind --leak-check=full --gen-suppressions=all --log-file=memcheck.log ./a
If you find the Valgrind output mixed in with the application's output, redirect Valgrind's output to a separate file descriptor instead: --log-fd=9 9>>memcheck.log
valgrind --leak-check=full --gen-suppressions=all --log-fd=9 ./a 9>>memcheck.log
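For reference, a generated leak suppression in the log looks roughly like the following (the suppression name and the fun:/obj: frames here are invented placeholders; your log will contain the real call stack). It can be pasted verbatim into a .supp file and passed back with --suppressions=memcheck.supp:
{
   ignore_libfoo_definitely_lost
   Memcheck:Leak
   match-leak-kinds: definite
   fun:malloc
   ...
   obj:*/libfoo.so*
}
The "..." line is a frame wildcard, and depending on your Valgrind version the match-leak-kinds: line may or may not be emitted.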
To be prompted for leaks that aren't generating errors, you have to run
valgrind --leak-check=full --gen-suppressions=yes ./a
There is a page explaining how to generate such a file from your errors: https://wiki.wxwidgets.org/Valgrind_Suppression_File_Howto
It's not perfect, but it's a good starting point.
You can write a suppression file of your own (though it doesn't seem obvious):
--suppressions=<filename> [default: $PREFIX/lib/valgrind/default.supp]
If the question was to disable an entire library, see this.
Valgrind's man page.

SIGSEGV in optimized ifort

If I compile with -O0 in ifort, the program runs correctly. But as soon as I enable optimization, with -O, -O3, or -fast, I get a SIGSEGV segmentation fault.
The error occurs in a subroutine named maketable(). Here is what I have observed:
(1) I call the FFTW library in this subroutine. If I comment out the FFTW calls, everything is fine. But I don't think FFTW itself is at fault, because I also use FFTW in other places in this code without problems.
(2) FFTW is called in a loop, and the loop runs several times before the program crashes. The segfault does not happen on the first pass through the loop.
(3) I considered a stack overflow, but I no longer think that's it. I have an executable compiled by others a long time ago, and it runs on my computer, which suggests the problem is not a system stack overflow.
The ifort version is 10.0, the FFTW version is 2.1.5, and the CPU is an Intel Xeon 5130. Thanks a lot.
There are two common causes of segmentation faults in Fortran programs:
Attempting to access an element outside the bounds of an array.
Mismatching actual and dummy arguments in a procedure call.
Both are relatively easy to find (example ifort invocations are shown after these two points):
Your compiler will have an option to generate code that performs array bounds checking at run time. Check your compiler documentation, rebuild your code, and rerun it. If this is the cause of the problem, you will get an error message identifying where your code goes awry.
Write explicit interfaces for the subroutines and functions in your program, or use modules so that the compiler generates such interfaces for you, or use a compiler option (see the documentation) to check that argument types match at compile time.
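As a concrete illustration of both points for ifort (option spellings are from Intel's documentation; double-check them against the ifort 10.0 manual, since availability varies between releases), run-time bounds checking with a readable traceback looks something like
ifort -g -O0 -check bounds -traceback myprog.f90
and compile-time checking of actual against dummy arguments looks something like
ifort -c -gen-interfaces -warn interfaces myprog.f90
where myprog.f90 stands in for your own source files.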
It's not unusual that such errors (seem to) arise only when optimisation is turned up high.
EDIT
Note that I'm not suggesting that optimisation causes the error you observe, but that it causes the error to affect the execution of your program and become evident.
It's not unknown for incorrect programs to run many times apparently without fault only for, say, recompilation with a new compiler version to create an executable which crashes every time.
Your wish to switch off optimisation only for the subroutine where the segmentation fault seems to arise is, I suggest, completely wrong-headed. I expect my programs to execute correctly at any level of optimisation (save for clear evidence of a compiler bug; such things are not unknown). I think that by turning off optimisation you are sweeping a real problem with your program under the carpet, as it were.

How do I cut out assembler executable bloat?

I've got working multiplatform Hello World code in GAS, NASM, and YASM, and I would like to shrink the corresponding executables from 76KB to something more reasonable for a Hello World assembly program, seeing as a basic Hello World C program produces an 80KB executable and assembly should be much smaller. I believe the bulk of the executables is junk pulled in by the linker options.
Trace:
LIBS=c:/strawberry/c/i686-w64-mingw32/lib/crt2.o -Lc:/strawberry/c/i686-w64-mingw32/lib -lmingw32 -lmingwex -lmsvcrt
ld -o $(EXECUTABLE) hello.o $(LIBS)
hello.exe
Hello World!
Code:
.data
msg: .ascii "Hello World!\0"
.text
.global _main
_main:
pushl $msg
call _puts
leave
movl $0, %eax
ret
If I remove any of the options in LIBS, either the link fails or the resulting executable raises a Windows error when it runs. So the logical thing to do is replace the puts call with something simpler, like sys_write, but I don't know how to do that in a multiplatform way. The little documentation I've found online says to use int 0x80 to make a call to the kernel, but that only works on Linux, not on Windows, and I want my assembly code to be multiplatform.
Your program bloat comes mostly from the C runtime library. In Windows, a simple hello world program can be < 5K if you write your own "tiny" CRT. Here is a link to a project which explains all of the details about how to shrink your EXE to its smallest possible size:
http://www.codeproject.com/Articles/15156/Tiny-C-Runtime-Library
For Windows, you can call the native Win32 API functions, such as GetStdHandle() and WriteFile() to write directly to stdout.
For Unix-like systems, you can call the write() syscall with file descriptor 1 for stdout.
The details of exactly how you do each of these will depend on which assembler and OS you are using.
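To make the Unix-like half concrete, here is a rough sketch in GAS syntax for 32-bit Linux only (it will not run on Windows, and the syscall numbers and registers differ on 64-bit Linux and the BSDs):
.data
msg2:    .ascii "Hello World!\n"
msg2_len = . - msg2
.text
.global _start
_start:
    movl $4, %eax            # syscall number for write on 32-bit Linux
    movl $1, %ebx            # file descriptor 1 = stdout
    movl $msg2, %ecx         # buffer address
    movl $msg2_len, %edx     # buffer length
    int  $0x80
    movl $1, %eax            # syscall number for exit
    xorl %ebx, %ebx          # exit status 0
    int  $0x80
Assembled and linked without the C runtime (as --32 hello.s -o hello.o, then ld -m elf_i386 -o hello hello.o), this produces a very small executable because none of the LIBS above are needed.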
You should be able to link dynamically to the C runtime library instead of including it statically. I don't know how to do it in Linux, but in Windows you can use msvcrt.dll.
The executable bloat is most likely coming from the C library dependencies, especially puts. Refactoring the code to print Hello World without a C call will most likely require OS-specific assembly code, since Unix systems use interrupts to make calls into the kernel, while Windows has its own VB-like API for such tasks.
I did manage to find a solution that creates a small executable while still maintaining platform agnosticism. Ordinarily, C preprocessor directives would do the trick, but I'm not sure which assemblers even have preprocessor syntax. A similar effect can be achieved with included assembly files: a collection of wrapper files handles the OS-specific assembly, an included file does the rest, and a simple Makefile runs the respective build commands against the appropriate wrapper for the desired platform.
For example, I was able to quickly construct FASM code that works this way. (Though I have yet to make it actually bypass puts with something less bloated.) Anyway, it's progress.
Because almost all C functions use the CDECL calling convention, in which the caller adjusts the stack, not the callee (the function).
You will get into trouble if you don't learn how to do things correctly now: harder-to-track-down bugs.
Try this:
push szLF            ; szLF and fmtint2 are assumed to be defined in .data (a newline string and an integer format string)
push esp
push fmtint2
call printf          ; print the current value of the stack pointer
add esp, 4 * 3       ; caller removes the three pushed arguments (CDECL)
push msg
call puts            ; note: the argument pushed for puts is not cleaned up here
push szLF
push esp
push fmtint2
call printf          ; print the stack pointer again
add esp, 4 * 3
Run it and note the numbers printed before and after your call to puts. They are different, no? Well, they are supposed to be the same. Now add:
add esp, 4
after your call to puts and run it again. Are the numbers the same now? That means you have a balanced stack pointer, and it confirms that the function uses the CDECL calling convention.