Dynamic Loader of Executable in Valgrind Ignoring LD_LIBRARY_PATH

LD_LIBRARY_PATH=/otherRoot/lib/ valgrind myProg
chroot /otherRoot valgrind myProg
When running the first command, valgrind gives me errors about a stripped dynamic linker because it apparently is not using the one in /otherRoot/lib. Using the second command, it finds the appropriate .so and works.
For reference, I have valgrind installed both in the normal root and in /otherRoot.
Why does valgrind/myProg not search for the .so in /otherRoot/lib first?

When running the first command, valgrind gives me errors about a stripped dynamic linker because it apparently is not using the one in /otherRoot/lib.
The dynamic loader your program uses doesn't (and can't) depend on LD_LIBRARY_PATH, because it is the kernel that loads the dynamic loader (and the kernel generally doesn't care about environment variables). More info here.
Outside of chroot, the "wrong" dynamic loader is used with or without valgrind. You can confirm this by pausing myProg and examining /proc/$PID/maps for it.
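For example, something along these lines shows which loader was actually mapped (a rough sketch; it assumes myProg is on the PATH and stays alive long enough to be stopped):
myProg &
pid=$!
kill -STOP $pid
grep 'ld-' /proc/$pid/maps                                     # the ld-*.so line is the loader the kernel mapped
kill -CONT $pid
readelf -l "$(which myProg)" | grep -i 'program interpreter'   # the loader path baked into the ELF itself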

LD_BIND_NOW doesn’t take effect on ELF PIE executable

Observations
When the Linux executable is compiled as PIE (Position Independent Executable, the default on Ubuntu 18.04), the symbols from shared libraries (e.g. libc) are resolved when the program starts executing; setting the LD_BIND_NOW environment variable to null will not defer this process.
However, if the executable is compiled with the -no-pie flag, the symbol resolution can be controlled by LD_BIND_NOW.
Question
Is it possible to control when the symbols from shared libraries are resolved for an ELF PIE executable?
Below are the code and system info used in my test:
ubuntu: 18.04
kernel: Linux 4.15.0-50-generic #54-Ubuntu SMP Mon May 6 18:46:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
gcc: gcc (Ubuntu 7.4.0-1ubuntu1~18.04) 7.4.0
#include <stdio.h>
int main() {
    printf("Hello world!\n");
}
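The two test binaries were presumably built roughly like this (the exact commands are not shown in the question, so the flags below are an assumption based on the description above):
gcc -o hello-pie hello.c            # PIE is the default on Ubuntu 18.04
gcc -no-pie -o hello-nopie hello.c  # explicitly non-PIE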
Details of the experiments (which lead to the above conclusions).
The experiments were carried out in gdb-peda. Searching for gdb-peda in the output will reveal the commands used for each step.
The address stored at the GOT entry for puts is displayed (via disp) whenever execution proceeds in gdb, so the stage at which it is patched with the real address can easily be spotted.
Outputs for -no-pie binary test.
Outputs for pie binary test.
BTW, the same question was originally posted on Reverse Engineering Stack Exchange, and the answer there confirmed the above observations.
When the Linux executable is compiled as PIE (Position Independent Executable, the default on Ubuntu 18.04), the symbols from shared libraries (e.g. libc) are resolved when the program starts executing; setting the LD_BIND_NOW environment variable to null will not defer this process.
However, if the executable is compiled with the -no-pie flag, the symbol resolution can be controlled by LD_BIND_NOW.
You are mistaken: the symbol resolution happens only when the program starts executing in either case, regardless of whether LD_BIND_NOW is defined or not.
What LD_BIND_NOW controls is whether all functions are resolved at once (during program startup), or whether such symbols are resolved lazily (the first time a program calls a given unresolved function). More on lazy resolution here.
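One way to observe the difference from the shell, without gdb, is glibc's LD_DEBUG facility (a rough sketch; the exact wording of the log lines varies between glibc versions, and the -no-pie build matches the experiment above):
gcc -no-pie -o hello hello.c
# lazy binding (the default): only symbols that are actually called get bound, so few lines are logged
LD_DEBUG=bindings ./hello 2>&1 | grep -c 'binding file'
# eager binding: with LD_BIND_NOW set, everything is bound during startup, so noticeably more lines are logged
LD_BIND_NOW=1 LD_DEBUG=bindings ./hello 2>&1 | grep -c 'binding file'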
As far as I understand, nothing in the above picture changes between PIE and non-PIE binaries. I'd be interested to know how you came to your conclusion that there is a difference.

How to run a dynamically linked executable in syscall emulation mode (se.py) in gem5?

After How to solve "FATAL: kernel too old" when running gem5 in syscall emulation SE mode? I managed to run a statically linked hello world under certain conditions.
But if I try to run an ARM dynamically linked one against the stdlib with:
./out/common/gem5/build/ARM/gem5.opt ./gem5/gem5/configs/example/se.py -c ./a.out
it fails with:
fatal: Unable to open dynamic executable's interpreter.
How can I make it find the interpreter? Hopefully without copying my cross toolchain's interpreter into my host's root.
For x86_64 it works if I use my native compiler, and as expected strace says that it is using the native interpreter, but it does not work if I use a cross compiler.
The current FAQ says it is not possible to use dynamic executables: http://gem5.org/Frequently_Asked_Questions but I don't trust it, and these presentations mention dynamic linking:
http://www.gem5.org/wiki/images/0/0c/2015_ws_08_dynamic-linker.pdf
http://research.cs.wisc.edu/multifacet/papers/learning_gem5_tutorial.pdf
but they don't explain how to actually use it.
QEMU user mode has the -L option for that.
Tested in gem5 49f96e7b77925837aa5bc84d4c3453ab5f07408e
https://www.mail-archive.com/gem5-users@gem5.org/msg15582.html
Support for dynamic linking was added in November 2019
At: https://gem5-review.googlesource.com/c/public/gem5/+/23066
It was definitely working at that point, but it later broke and needs fixing:
arm 32-bit https://gem5.atlassian.net/browse/GEM5-461
arm 64-bit https://gem5.atlassian.net/browse/GEM5-828
If you have a root filesystem to use, for example one generated by Buildroot, you can do:
./build/ARM/gem5.opt configs/example/se.py \
--redirects /lib=/path/to/build/target/lib \
--redirects /lib64=/path/to/build/target/lib64 \
--redirects /usr/lib=/path/to/build/target/usr/lib \
--redirects /usr/lib64=/path/to/build/target/usr/lib64 \
--interp-dir /path/to/build/target \
--cmd /path/to/build/target/bin/hello
Or, if you are using an Ubuntu cross-compiler toolchain, for example on Ubuntu 18.04:
sudo apt install gcc-aarch64-linux-gnu
aarch64-linux-gnu-gcc -o hello.out hello.c
./build/ARM/gem5.opt configs/example/se.py \
--interp-dir /usr/aarch64-linux-gnu \
--redirects /lib=/usr/aarch64-linux-gnu/lib \
--cmd hello.out
You also have to add any path that might contain dynamic libraries as a separate --redirects. The ones above are enough for C executables.
--interp-dir sets the root directory under which the dynamic loader will be searched for, based on the ELF metadata that gives the path of the loader. For example, Buildroot ELF files set that path to /lib/ld-linux-aarch64.so.1, and the loader is then the file present at /path/to/build/target/lib/ld-linux-aarch64.so.1. As mentioned by Brandon, this path can be found with:
readelf -a $bin_name | grep interp
The main difficulty with dynamic linking in syscall emulation is that we somehow want:
linker file accesses to go to a magic directory to find libraries there
other file accesses from the main application to go to normal paths, e.g. to read an input file in the current working directory
and it is hard to detect if we are in the loader or not, especially because this can happen via dlopen in the middle of a program.
The --redirects option is a simple solution for that.
For example /lib=/path/to/build/target/lib makes it so that if the guest would access the C standard library /lib/libc.so.6, then gem5 sees that this is inside /lib and redirects the path to /path/to/build/target/lib/libc.so.6 instead.
The slight downside is that it becomes impossible to actually access files in the /lib directory of the host, but this is not common, so it works in most cases.
If you miss any --redirects, the dynamic linker will likely complain that the library was not found, with a message like:
hello.out: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
If that happens, you have to find the libstdc++.so.6 library in the target filesystem / toolchain and add the missing --redirects.
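For example, something like this locates the library and tells you which directory still needs to be mapped (a sketch; the search paths are just the ones used in the examples above):
# find where the missing library lives in the target filesystem / cross toolchain
find /path/to/build/target /usr/aarch64-linux-gnu -name 'libstdc++.so*' 2>/dev/null
# then redirect the guest directory the loader was searching to that location, e.g.:
#   --redirects /usr/lib=/path/to/build/target/usr/lib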
It later broke at https://gem5.atlassian.net/browse/GEM5-430 but was fixed again.
Downsides of dynamic linking
Once I got dynamic linking to work, I noticed that it actually has the following downsides, which may or may not matter depending on the application:
the dynamic linker has to run some instructions, and if you have a very minimal userland test executable and are running on a detailed CPU model like O3, then this startup can dominate the runtime, so watch out for that
ExecAll does not show symbol names for stdlib functions; you just get offsets from some random nearest symbol, e.g. @__end__+274873692728. Maybe something along the lines of Debugging shared libraries with gdbserver would work, but I'm not sure
dynamically jumping to a stdlib function for the first time requires going through the dynamic linking machinery, which can create problems if you are trying to control a microbench.
I actually already hit this once: the dynamic version of a program was doing something extra, and that, compounded with a gem5 bug, broke my experiment and cost me a few hours of debugging.
Interpreters like Python and Java
Python and Java are just executables, and the script to execute is just an argument to the executable.
So in theory, you can run them in syscall emulation mode e.g. with:
build/ARM/gem5.opt configs/example/se.py --cmd /usr/bin/python --options='hello.py arg1 arg2'
In practice, however, hugely complex executables like interpreters are likely to use syscalls that are not yet implemented, given the state of gem5 as of November 2019; see also: When to use full system FS vs syscall emulation SE with userland programs in gem5?
Generally it is not hard to implement / ignore unneeded calls though, so give it a shot. Related threads:
Java: Running Java programs in gem5 (or any language which is not C)
Python: 3.6.8 aarch64 fails with "fatal: syscall unused#278 (#278) unimplemented.", test setup
Old answer
I have been told that as of 49f96e7b77925837aa5bc84d4c3453ab5f07408e (May 2018) there is no convenient / well tested way of running dynamically linked cross-arch executables in syscall emulation: https://www.mail-archive.com/gem5-users@gem5.org/msg15585.html
I suspect however that it wouldn't be very hard to patch gem5 to support it. QEMU user mode already supports that, you just have to point to the root filesystem with -L.
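For reference, the QEMU user-mode equivalent looks roughly like this (assuming an aarch64 binary and the Ubuntu cross-toolchain layout used above):
qemu-aarch64 -L /usr/aarch64-linux-gnu ./hello.out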
The cross-compiled binary should have an .interp entry if it's a dynamic executable.
Verify it with readelf:
readelf -a $bin_name | grep interp
The simulator is set up to find this section in the main executable when it loads the executable into the simulated address space. If this section exists, a C string is set to point to that text (normally something like /lib64/ld-linux-x86-64.so.2). The simulator then calls glibc's open function with that C string as the parameter. Effectively, this opens the dynamic linker-loader for the simulator as a normal file. The simulator then maps the file into the simulated address space in phases with mmap and mmap_fixed.
For cross compilation, this code must be what fails; the error message you see follows directly from the simulator being unable to open the file. To make this work, you need to debug the opening process and possibly the subsequent pasting of the loader into the address space. There is a mechanism to set the program's entry point into the loader rather than directly into the code section of the main binary. (It's done through the auxiliary vector on the stack.) You may need to play around with that as well.
The interesting code is (as of 05/29/19) in src/base/loader/elf_object.cc.
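To see exactly which path that open call will receive, and to change it if necessary, something like the following should work (patchelf is a separate tool, not part of gem5, and the interpreter path shown is the Ubuntu cross-toolchain one, so treat it as an assumption):
# the PT_INTERP program header holds the interpreter path the simulator tries to open
readelf -l ./a.out | grep -A1 INTERP
# optional workaround: rewrite that path to point at a loader that actually exists on the host
patchelf --set-interpreter /usr/aarch64-linux-gnu/lib/ld-linux-aarch64.so.1 ./a.out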
I encountered this problem after I had just cross-compiled the code. You can try adding -static to the compile command.
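For example, with the Ubuntu cross compiler from the earlier answer (a sketch; static linking sidesteps the interpreter entirely):
aarch64-linux-gnu-gcc -static -o hello.out hello.c
./build/ARM/gem5.opt configs/example/se.py --cmd hello.out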

CMake: detect if no C++ compiler is present

I have a project that first uses a C++ program to process some template files (set up as a subdirectory) and then needs to cross-compile to run on VxWorks. The cross-compile part will be done via a custom command and .bat file, but the first part will vary depending on the available options.
If the computer has an appropriate compiler, it should compile the template processor program as necessary before running it. Some computers, though, won't have a regular C++ compiler. In this case I want to assume that the template processor program is installed at a specific location and continue using that prebuilt version.
How would I go about tackling this with CMake?
You can give try_run a try. It compiles and executes C++ source code; if execution fails, that is indicated in the run-result variable you pass as the first argument.
The advantage of using this is that you don't have to take care of the right compiler call or any flags.
Documentation: https://cmake.org/cmake/help/v3.4/command/try_run.html
See especially the section about cross compilation.

CMake apparently ignoring CMAKE_BUILD_TYPE?

So I'm using CMake for a project.
It consists of a set of shared libraries linked to one executable. All are generated in the project (there are no external targets). Each subproject lives in its own directory, with its own CMakeLists file.
So I make an out-of-source build, taking care to set CMAKE_BUILD_TYPE to Debug, and run cmake, and then make. I use GNU make 3.81, GCC 4.8.1, binutils 2.23.2 and CMake 3.2.3 on a Windows box using MSYS/MINGW.
The problem is that, when I load this executable in gdb (version 7.6), place a breakpoint on a function from one of the shared libraries, and then try to single step, gdb skips the whole function saying it has no line number information.
According to my understanding, line number information is part of the debugging information, so I expected it to be generated during compilation (as per CMAKE_BUILD_TYPE), but it wasn't. I would therefore like to know how I can get CMake to generate this line number information properly (that is, without manually adding compiler-specific options in the CMake files, although I would take that if it's the only solution).
I've tried setting CMAKE_BUILD_TYPE from the command line (when invoking the cmake utility), inside the CMakeLists, and even by modifying CMakeCache.txt, and restarting the build from an empty directory, with no success. I then made sure that CMAKE_BUILD_TYPE was effectively set to Debug by using the MESSAGE command to print its value, and it was correctly set to Debug. So I then executed 'make VERBOSE=1' to see if the correct compiler option was added, and found it correctly used the "-g" option (although I would have expected -ggdb, but more on this later). The CMake documentation and Google did not bring me any answers.
My hypothesis is that the -g option only generates basic debugging information (such as the mappings between functions and their memory addresses, and how to access their arguments), whereas -ggdb would generate more detailed debugging information in a gdb-specific format, including said line number information. But a troubling fact is that, when running the executable in gdb, the functions defined inside the executable do have line number information; only the shared libraries don't, hence my confusion.
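One way to narrow this down would be to check whether the shared library files themselves contain DWARF line tables (a sketch; libmylib.so is a placeholder for one of the project's libraries):
# if this prints a decoded line table, the line info is present in the .so and the problem is on the gdb side;
# if it prints nothing, -g never reached the library's compile step or the library was stripped afterwards
readelf --debug-dump=decodedline libmylib.so | head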

Ignore LD_LIBRARY_PATH and stick with library given through -rpath at link time

I'm sitting in an environment which I have no real control over (it's not just me, so basically I can't change the environment or it won't work for anyone else); the only thing I can affect is how the binary is built.
My problem is that the environment specifies an LD_LIBRARY_PATH containing a libstdc++ which is not compatible with the compiler being used. I tried compiling it statically, but that doesn't seem possible for g++ (version 4.2.3; there seems to have been work done in this direction in later versions, -static-libstdc++ or something like that, but those are not available to me).
Now I've arrived at using rpath to bake the absolute path name into the executable (which would work, since all the machines it's supposed to run on are identical). Unfortunately it appears as though LD_LIBRARY_PATH takes precedence over rpath (clearing LD_LIBRARY_PATH confirmed that the binary is able to find the library, but as stated above, LD_LIBRARY_PATH will be set for everyone, and I cannot change that).
Is there any way I can make rpath take precedence over LD_LIBRARY_PATH, or are there any other possible solutions to my problem? Note that I'm talking about dynamic linking at runtime, I am able to control the command line at compile and link time.
Thanks.
Maybe you can use a shell wrapper which modifies the environment for just this program?
#!/bin/sh
# prepend the correct library directory so it wins over the inherited value
LD_LIBRARY_PATH="/path/to/your/lib:${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec /path/to/binary "$@"
This should override LD_LIBRARY_PATH before execution and then replace the wrapper with your binary via exec.
Would this help?
Linking against libstdc++.a is definitely possible, albeit tricky. Instructions here.
I am a bit skeptical of your claim that LD_LIBRARY_PATH takes precedence over the "baked in" DT_RPATH though -- at least on Linux and (I believe) Solaris, LD_LIBRARY_PATH is only looked at after DT_RPATH lookup has failed.
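One concrete thing to check on the link line: newer GNU toolchains often emit DT_RUNPATH rather than DT_RPATH, and DT_RUNPATH is consulted only after LD_LIBRARY_PATH, which would explain the behaviour you see. Forcing the old-style tag looks roughly like this (flag names are GNU ld's; whether your g++ 4.2.3 toolchain already defaults to DT_RPATH is something to verify):
# bake the library path in as DT_RPATH, which is searched before LD_LIBRARY_PATH
g++ main.cpp -o prog -Wl,-rpath,/abs/path/to/libs -Wl,--disable-new-dtags
# confirm which tag actually ended up in the binary
readelf -d prog | grep -E 'RPATH|RUNPATH'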