How do I compile NetBSD kernel and userland with full debugging symbols - netbsd

I am compiling the NetBSD kernel and userland for ARM on NetBSD 5.1.
The libraries that get generated have no debugging symbols. Is there any build option I can use to enable debugging for userland?
For example, libc.so.12 is stripped; I want it unstripped, with full debugging symbols.

To build with debugging symbols, add -VMKDEBUG=yes and -VMKDEBUGLIB=yes to your build.sh command line.
See share/mk/bsd.README for all the available build flags.
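A minimal sketch of what that can look like for an ARM target (the evbarm machine type and the distribution target here are only placeholders for whatever you already pass to build.sh; the two -V settings are the relevant part):
./build.sh -m evbarm -V MKDEBUG=yes -V MKDEBUGLIB=yes distribution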

Related

CMake: How to test for a compiler before enabling a language

In my project some code can optionally be compiled in a different language (nasm & fortran), but it's also fine to compile the project without having these compilers installed, e.g. on Windows.
I would like to check whether the compilers are installed before enabling the languages with enable_language:
enable_language(ASM_NASM)
enable_language(Fortran)
If I use enable_language without an additional check, CMake stops with an error message.
(At the moment I check for if (MSVC) as workaround.)
Btw, I have a similar problem with the check for Qt. That check doesn't stop with an error, but it generates a lot of noisy warnings.
Use check_language (from the CheckLanguage module) to test whether a language can be enabled before calling enable_language.
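A minimal sketch of how that looks in a CMakeLists.txt, using Fortran as the example (the status message text is just illustrative):
include(CheckLanguage)
check_language(Fortran)
if(CMAKE_Fortran_COMPILER)
  enable_language(Fortran)
else()
  message(STATUS "No Fortran compiler found, building without Fortran sources")
endif()
check_language(ASM_NASM) works the same way: CMAKE_ASM_NASM_COMPILER is set when a usable assembler is found, so the if() test is false otherwise.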

Error during cmake build: CXX compiler must support Cilk

I am trying to install cilk++ according to this website and am at the steps in section "Cilk Plus Runtime". When I go to build, I get the following output:
$ cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_INSTALL_PREFIX=./install ..
CMake Error at CMakeLists.txt:132 (message):
CXX compiler must support Cilk.
-- Configuring incomplete, errors occurred!
See also "/Users/anthonymcknight/Documents/cubing/bfs/lab4/cilk/cilkrts-0.1.2/build/CMakeFiles/CMakeOutput.log".
See also "/Users/anthonymcknight/Documents/cubing/bfs/lab4/cilk/cilkrts-0.1.2/build/CMakeFiles/CMakeError.log".
I thought clang and clang++ (which, as I checked with --version, are indeed installed) would be sufficient. Do I need to update clang and clang++? There are no troubleshooting steps on the instructions website, so I'm not sure what I need to do to finally get cilk++ up and running on my laptop.
Thanks in advance,
Anthony
Expanding on my comment:
When you configure this Cilk Plus Runtime with CMake, CMake first verifies the compiler by attempting to compile a simple test program (see here). If the compilation fails, CMake prints the error you see:
CMake Error at CMakeLists.txt:132 (message):
CXX compiler must support Cilk.
The Intel Cilk Plus runtime GitHub page (cilkrts) lists compiler requirements for anyone trying to build this library:
You need the CMake tool and a C/C++ compiler that supports the Cilk language extensions. The requirements for each operating system are:
Common: CMake 3.4.3 or later, make tools such as make
Linux: Tapir/LLVM compiler, or GCC* 4.9.2 or later (deprecated), or a Cilk-enabled branch of Clang*/LLVM* (http://cilkplus.github.io), or Intel(R) C++ Compiler v12.1 or later (deprecated)
OS X: Tapir/LLVM compiler, or a Cilk-enabled branch of Clang*/LLVM* (http://cilkplus.github.io), or Intel C++ Compiler v12.1 or later (deprecated)
Since you are using Clang as your compiler, make sure it is a Cilk-enabled branch of Clang, as specified in the requirements. Alternatively, you can try the Tapir/LLVM compiler.
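If you want to probe by hand whether your compiler accepts Cilk at all, independent of CMake, one option is a tiny test program like the sketch below (the -fcilkplus flag and the <cilk/cilk.h> header assume a Cilk Plus-enabled GCC or Clang branch; a stock Apple clang will reject them):
/* cilk_check.c -- quick manual probe for Cilk Plus keyword support */
#include <cilk/cilk.h>
#include <stdio.h>
int fib(int n) {
    if (n < 2) return n;
    int a = cilk_spawn fib(n - 1);   /* spawn a parallel strand */
    int b = fib(n - 2);
    cilk_sync;                       /* wait for the spawned strand */
    return a + b;
}
int main(void) {
    printf("fib(10) = %d\n", fib(10));
    return 0;
}
Compile it with something like clang -fcilkplus -c cilk_check.c (compile-only, so it does not need the runtime you are still trying to build); if that fails with unknown-flag or undeclared-identifier errors, your clang is not Cilk-enabled and the CMake check fails for the same reason.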

How to run a dynamically linked executable syscall emulation mode se.py in gem5?

After How to solve "FATAL: kernel too old" when running gem5 in syscall emulation SE mode? I managed to run a statically linked hello world under certain conditions.
But if I try to run an ARM dynamically linked one against the stdlib with:
./out/common/gem5/build/ARM/gem5.opt ./gem5/gem5/configs/example/se.py -c ./a.out
it fails with:
fatal: Unable to open dynamic executable's interpreter.
How can I make it find the interpreter? Hopefully without copying my cross toolchain's interpreter into my host's root.
For x86_64 it works if I use my native compiler, and as expected strace says that it is using the native interpreter, but it does not work if I use a cross compiler.
The current FAQ says it is not possible to use dynamic executables: http://gem5.org/Frequently_Asked_Questions but I don't trust it, and these presentations do mention it:
http://www.gem5.org/wiki/images/0/0c/2015_ws_08_dynamic-linker.pdf
http://research.cs.wisc.edu/multifacet/papers/learning_gem5_tutorial.pdf
but not how to actually use it.
QEMU user mode has the -L option for that.
Tested in gem5 49f96e7b77925837aa5bc84d4c3453ab5f07408e
https://www.mail-archive.com/gem5-users@gem5.org/msg15582.html
Support for dynamic linking was added in November 2019
at: https://gem5-review.googlesource.com/c/public/gem5/+/23066
It was definitely working at that point, but it later broke at some point and needs fixing:
arm 32-bit https://gem5.atlassian.net/browse/GEM5-461
arm 64-bit https://gem5.atlassian.net/browse/GEM5-828
If you have a root filesystem to use, for example one generated by Buildroot, you can do:
./build/ARM/gem5.opt configs/example/se.py \
--redirects /lib=/path/to/build/target/lib \
--redirects /lib64=/path/to/build/target/lib64 \
--redirects /usr/lib=/path/to/build/target/usr/lib \
--redirects /usr/lib64=/path/to/build/target/usr/lib64 \
--interp-dir /path/to/build/target \
--cmd /path/to/build/target/bin/hello
Or, if you are using an Ubuntu cross compiler toolchain, for example on Ubuntu 18.04:
sudo apt install gcc-aarch64-linux-gnu
aarch64-linux-gnu-gcc -o hello.out hello.c
./build/ARM/gem5.opt configs/example/se.py \
--interp-dir /usr/aarch64-linux-gnu \
--redirects /lib=/usr/aarch64-linux-gnu/lib \
--cmd hello.out
You have to add any other path that might contain dynamic libraries as a separate --redirects as well. The paths above are enough for plain C executables.
--interp-dir sets the root directory where the dynamic loader will be searched for, based on the ELF metadata that gives the path of the loader. For example, Buildroot ELF files set that path to /lib/ld-linux-aarch64.so.1, and the loader is then the file present at /path/to/build/target/lib/ld-linux-aarch64.so.1. As mentioned by Brandon, this path can be found with:
readelf -a $bin_name | grep interp
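For an aarch64 binary, the relevant line of the readelf output typically looks like the following (the exact loader path varies by architecture and toolchain):
[Requesting program interpreter: /lib/ld-linux-aarch64.so.1]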
The main difficulty with syscall emulation dynamic linking is that we want somehow:
linker file accesses to go to a magic directory to find libraries there
other file accesses from the main application to go to normal paths, e.g. to read an input file in the current working directory
and it is hard to detect if we are in the loader or not, especially because this can happen via dlopen in the middle of a program.
The --redirects option is a simple solution for that.
For example, --redirects /lib=/path/to/build/target/lib makes it so that if the guest accesses the C standard library at /lib/libc.so.6, gem5 sees that the path is inside /lib and redirects it to /path/to/build/target/lib/libc.so.6 instead.
The slight downside is that it becomes impossible to actually access files in the /lib directory of the host, but this is not common, so it works in most cases.
If you miss any --redirects, the dynamic linker will likely complain that a library was not found, with a message of the type:
hello.out: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
If that happens, you have to find the libstdc++.so.6 library in the target filesystem / toolchain and add the missing --redirects.
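A sketch of that workflow with the Ubuntu cross toolchain from above (the paths are illustrative; use whatever find reports on your system):
find /usr/aarch64-linux-gnu -name 'libstdc++.so*'
./build/ARM/gem5.opt configs/example/se.py \
    --interp-dir /usr/aarch64-linux-gnu \
    --redirects /lib=/usr/aarch64-linux-gnu/lib \
    --redirects /usr/lib=/usr/aarch64-linux-gnu/lib \
    --cmd hello.out
Redirecting both /lib and /usr/lib to the same host directory is fine; the guest just finds the library under whichever path the loader tries first.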
It later broke at https://gem5.atlassian.net/browse/GEM5-430 but was fixed again.
Downsides of dynamic linking
Once I got dynamic linking to work, I noticed that it actually has the following downsides, which might or not be considerable depending on the application:
the dynamic linker has to run some instructions, and if you have a very minimal userland test executable and are running on a detailed (and therefore slow to simulate) CPU model like O3, then this startup can dominate the runtime, so watch out for that
ExecAll does not show symbol names for stdlib functions, you just get offsets from some random nearest symbol e.g. #__end__+274873692728. Maybe something along these lines would work: Debugging shared libraries with gdbserver but not sure
dynamically jumping to a stdlib function for the first time requires going through the dynamic linking machinery, which can create problems if you are trying to control a microbenchmark.
I actually already hit this once: the dynamic version of a program was doing something extra which, compounded with a gem5 bug, broke my experiment and cost me a few hours of debugging.
Interpreters like Python and Java
Python and Java interpreters are just executables, and the script to execute is just an argument to the executable.
So in theory, you can run them in syscall emulation mode e.g. with:
build/ARM/gem5.opt configs/example/se.py --cmd /usr/bin/python --options='hello.py arg1 arg2'
In practice, however, hugely complex executables like interpreters are likely to use syscalls that are not yet implemented, given the current state of gem5 as of November 2019; see also: When to use full system FS vs syscall emulation SE with userland programs in gem5?
Generally it is not hard to implement / ignore unneeded calls though, so give it a shot. Related threads:
Java: Running Java programs in gem5(or any language which is not C)
Python: 3.6.8 aarch64 fails with "fatal: syscall unused#278 (#278) unimplemented.", test setup
Old answer
I have been told that as of 49f96e7b77925837aa5bc84d4c3453ab5f07408e (May 2018) there is no convenient / well tested way for running dynamically linked cross arch executables in syscall emulation: https://www.mail-archive.com/gem5-users@gem5.org/msg15585.html
I suspect however that it wouldn't be very hard to patch gem5 to support it. QEMU user mode already supports that, you just have to point to the root filesystem with -L.
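For comparison, the QEMU user-mode equivalent of the setup above is a single flag (assuming the Ubuntu aarch64 cross toolchain from earlier; point -L at wherever your target root actually lives):
qemu-aarch64 -L /usr/aarch64-linux-gnu ./hello.out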
The cross-compiled binary should have an .interp entry if it's a dynamic executable.
Verify it with readelf:
readelf -a $bin_name | grep interp
The simulator is set up to find this section in the main executable when it loads the executable into the simulated address space. If this section exists, a C string is set to point to that text (normally something like /lib64/ld-linux-x86-64.so.2). The simulator then calls glibc's open function with that C string as the parameter. Effectively, this opens the dynamic linker-loader for the simulator as a normal file. The simulator then maps the file into the simulated address space in phases with mmap and mmap_fixed.
For cross compilation, this code must be what is failing. The error message follows it directly if the simulator cannot open the file. To make this work, you need to debug the opening process and possibly the subsequent pasting of the loader into the address space. There is a mechanism to set the program's entry point to the loader rather than directly into the code section of the main binary. (It's done through the auxiliary vector on the stack.) You may need to play around with that as well.
The interesting code is (as of 05/29/19) in src/base/loader/elf_object.cc.
I encountered this problem right after cross compiling the code. You can try adding "--static" to the compile command.
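In other words, if you don't actually need dynamic linking, a statically linked build sidesteps the interpreter lookup entirely. A sketch with the aarch64 cross compiler used earlier (gcc documents the flag as -static):
aarch64-linux-gnu-gcc -static -o hello.out hello.c
./build/ARM/gem5.opt configs/example/se.py -c ./hello.out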

Chromium Ninja build fails (Illegal Instruction output)

I followed the Linux build instructions and when I try running "ninja -C out/Debug chrome", I just get the output "Illegal Instruction (core dumped)". Now, I wish I could actually find where the core dump is located to see if there is more specific information in there...
For reference, I am trying to run Ninja on Ubuntu 13.10.
Has anyone else experienced this while building Chromium or while trying to build anything else using Ninja? Also, where could I find the core dump?
The error message "Illegal Instruction (core dumped)" indicates that the binary being run uses an instruction that is not supported by your CPU.
Please check whether the software used for compilation (compiler, linker, ar, ninja-build, etc.) matches your CPU architecture. Unless you have a fancy system like ARM or POWER, you have most likely mixed up 32-bit (e.g. i586) and 64-bit (x86-64).
Or you are compiling for the wrong target. Do your compiler flags include flags beginning with -m, like "-march="? That could lead to the same error, but only once the compiled code is executed.
Have you built gyp or ninja-build yourself? That would be another place where such a mistake could have been made.
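A quick sanity check along those lines (a sketch; adjust the binary names to whatever you actually invoke) is to compare your CPU architecture against the tools you are running:
uname -m                # architecture the kernel reports, e.g. x86_64
file "$(which ninja)"   # architecture the ninja binary was built for
file "$(which g++)"     # same for the compiler driver
If any of these reports an architecture your machine cannot execute, that mismatch is the likely source of the "Illegal Instruction".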

Building a cross-platform application (using Rust)

I started to learn Rust programming language and I use Linux. I'd like to build a cross-platform application using this language.
The question might not be related to Rust language in particular, but nonetheless, how do I do that? I'm interested in building a "Hello World" cross-platform application as well as for more complicated ones. I just need to get the idea.
So what do I do?
UPDATE:
What I want to do is to be able to run a program on 3 different platforms without changing the sources. Do I have to build a new binary file for each platform from the sources, just like I would in C?
To run on multiple platforms you need to build an executable for each, as @huon-dbauapp commented.
This is fairly straightforward with Rust. You use "--target=" with rustc to tell it what you want to build. The same flag works with Cargo.
For example, this builds for an ARM target:
cargo build --target=arm-unknown-linux-gnueabihf
See the Rust Flexible Target Specification for more about targets.
However, Rust doesn't ship with the std Crate compiled for ARM (as of June 2015). If this is the case for your target, you'll first need to compile the std Crates for the target yourself, which involves compiling the Rust compiler from source, and specifying the target for that build!
For information, most of this is copied from: https://github.com/japaric/ruststrap/blob/master/1-how-to-cross-compile.md
The following instructions are for gcc, so if you don't have this you'll need to install it. You'll also need the corresponding cross compiler tools, so for gcc:
sudo apt-get install gcc-arm-linux-gnueabihf
Compile Rust std Crate For ARM
The following example assumes you've already installed the current Rust Nightly, so we'll just get the sources and compile for ARM. If you are using a different version of the compiler, you'll need to get that to ensure your ARM libraries match the version of the compiler you're using to build your projects.
mkdir ~/toolchains
cd ~/toolchains
git clone https://github.com/rust-lang/rust.git
cd rust
git pull
Build rustc for ARM
cd ~/toolchains/rust
./configure --target=arm-unknown-linux-gnueabihf,x86_64-unknown-linux-gnu
make -j4
sudo make install
Note "-j4" needs at least 8GB RAM, so if you hit a problem above try "make" instead.
Install ARM rustc libraries in native rustc build
sudo ln -s $HOME/src/rust/arm-unknown-linux-gnueabihf /usr/lib/rustlib/arm-unknown-linux-gnueabihf
Create hello.rs containing:
pub fn main() {
println!("Hello, world!");
}
Compile hello.rs, and tell rustc the name of the cross-compiler (which must be in your PATH):
rustc -C linker=arm-linux-gnueabihf-gcc-4.9 --target=arm-unknown-linux-gnueabihf hello.rs
Check that the produced binary is really an ARM binary:
$ file hello
hello: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), (..)
SUCCESS!!!:
Check: the binary should work on an ARM device
$ scp hello me@arm:~
$ ssh me@arm ./hello
Hello, world!
I've used this to build and link a Rust project with a separate C library as well. Instructions similar to the above on how to do this, dynamically or statically, are in a separate post, but I've used up my link quota already!
The best way to figure this out is to download the source code for Servo and explore it on your own. Servo is absolutely a cross-platform codebase, so it will have to address all of these questions, whether they be answered in build/configuration files, or the Rust source itself.
It looks like the rust compiler might not be ready to build standalone binaries for windows yet (see the windows section here), so this probably can't be done yet.
For posix systems it should mostly Just Work unless you're trying to do GUI stuff.
Yes, you won't need to change the source, unless you are using specific libraries that are not cross-platform.
But as @dbaupp said, native executables are different on each platform: *nix uses ELF, Windows uses PE, and OS X uses Mach-O. So you will need to compile it for each platform.
I don't know the state of cross-compiling in Rust, but if it's already implemented, then you should be able to build all the binaries on the same platform; if not, you will have to build each binary on its own platform.
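When some source genuinely has to differ per platform, the usual Rust pattern is to isolate that code behind cfg attributes so the rest of the program stays identical everywhere. A minimal sketch (the function and strings are purely illustrative):
// main.rs -- the same source compiles unchanged on Linux, Windows and OS X
#[cfg(target_os = "linux")]
fn platform_name() -> &'static str { "Linux" }
#[cfg(target_os = "windows")]
fn platform_name() -> &'static str { "Windows" }
#[cfg(target_os = "macos")]
fn platform_name() -> &'static str { "OS X" }
fn main() {
    println!("Hello from {}!", platform_name());
}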