Building a kernel module on CentOS 7 with a CMake file

Sorry for the length. I have tried to include as much information as possible.
A device I work with randomly fails to start at boot. This is a well-known issue with the device, and there are lots of posts on the web about it with no known solution except rebooting.
So the task is to look in dmesg for a certain string whose presence means the device has failed to start and the system needs rebooting. A simple call to system() with reboot seems to do the job.
A unit test that proves this would be nice. The idea is to search the dmesg log for a non-existent UUID to prove the search finds nothing, then write a different UUID to the log and search for that, proving it works in both cases.
First thing was to hit up Google: you can write to the kernel log with echo '<4>Foo: Message' | sudo tee /dev/kmsg, which works from a terminal, but the sudo may cause issues in the unit test.
The next thing I looked at was accessing it via code. The unit tests are written in C++ and the library is googletest.
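To make the idea concrete, here is a minimal sketch of the kind of test I have in mind. The dmesg_contains helper is a made-up name, and it scans the output of the dmesg command via popen rather than reading /dev/kmsg directly:

#include <gtest/gtest.h>
#include <cstdio>
#include <string>

// Hypothetical helper: returns true if `needle` appears anywhere in
// the current dmesg output.
static bool dmesg_contains(const std::string& needle)
{
    FILE* pipe = popen("dmesg", "r");
    if (!pipe)
        return false;
    bool found = false;
    char buf[4096];
    while (fgets(buf, sizeof(buf), pipe)) {
        if (std::string(buf).find(needle) != std::string::npos) {
            found = true;
            break;
        }
    }
    pclose(pipe);
    return found;
}

TEST(DmesgScan, DoesNotFindNonExistentUuid)
{
    // Any UUID that is guaranteed not to be in the log will do here.
    EXPECT_FALSE(dmesg_contains("1b4e28ba-2fa1-11d2-883f-b9a761bde3fb"));
}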
Most posts talk about writing a Makefile and using Kbuild. I am working in a build system where CMake is called from a shell script.
After several hours of searching and trying things, I decided to ask here.
I have installed
kernel.x86_64 3.10.0-1062.el7 anaconda
kernel.x86_64 3.10.0-1160.21.1.el7 updates
kernel-devel.x86_64 3.10.0-1160.21.1.el7 updates
kernel-devel.x86_64 3.10.0-1160.24.1.el7 updates
kernel-headers.x86_64 3.10.0-1160.21.1.el7 updates
kernel-tools.x86_64 3.10.0-1160.21.1.el7 updates
kernel-tools-libs.x86_64 3.10.0-1160.21.1.el7
uname -r gives 3.10.0-1160.21.1.el7.x86_64 which seems to suggest I have the kernel headers and devel files installed.
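As a quick cross-check, the build symlink under /lib/modules should point at the headers tree that matches the running kernel:

$ ls -l /lib/modules/$(uname -r)/build
# expected to point at /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64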
Doing a find /. -name module.h lists:
...
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/arch/x86/include/asm/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/asm-generic/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/linux/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/trace/events/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/uapi/linux/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/arch/x86/include/asm/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/asm-generic/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/trace/events/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/uapi/linux/module.h
...
It may be that I am trying to use files in /3.10.0-1160.24.1.el7.x86_64/ when I should be using 3.10.0-1160.21.1.el7.x86_64/. Listing installed yum packages via sudo yum list | grep linux-d returns
libselinux-devel.x86_64 2.5-15.el7 base
libhbalinux-devel.i686 1.0.17-2.el7 base
libhbalinux-devel.x86_64 1.0.17-2.el7 base
libselinux-devel.i686 2.5-15.el7 base
syslinux-devel.x86_64 4.05-15.el7 base
My CMakeLists.txt looks like:
project( X_test )
set( TEST_SOURCE
    X_test.cpp
)
execute_process(COMMAND uname -r OUTPUT_VARIABLE uname_r OUTPUT_STRIP_TRAILING_WHITESPACE)
include_directories(/usr/src/kernels/${uname_r}/include)
link_directories(/lib/modules/${uname_r}/build)
add_library(source-lib STATIC source.c)
Everything else in there has been commented out to prevent confusion.
Without the include_directories and link_directories lines I get an error at
#include <linux/module.h>
because the compiler cannot find the header.
With those lines in I get the error:
In file included from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/kernel.h:6:0,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/cache.h:4,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/time.h:4,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/stat.h:18,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/module.h:10,
from /home/user/git/asdo/Services/DCO-3303/test/source.c:1:
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/linkage.h:7:25: fatal error: asm/linkage.h: No such file or directory
#include <asm/linkage.h>
The code I am compiling is the standard "hello world" module built around printk(KERN_INFO "Hello world\n");.
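For reference, that module is essentially the canonical hello-world example (reproduced here from memory as a sketch):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "Hello world\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "Goodbye world\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");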
How do I go about compiling code that uses a kernel call through CMake?
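One direction I am considering, as an untested sketch: kernel modules are normally compiled by the kernel's own Kbuild system, which supplies the arch-specific include paths such as arch/x86/include (where asm/linkage.h actually lives) as well as the generated headers, so rather than compiling source.c with the userspace toolchain, CMake could delegate the build to Kbuild. Assuming a Kbuild file containing obj-m := source.o sits next to source.c, something like:

# Untested sketch: drive the kernel's Kbuild from CMake instead of
# compiling kernel source with the host compiler. Assumes make is
# available and a Kbuild file with "obj-m := source.o" exists in the
# current source directory.
execute_process(COMMAND uname -r OUTPUT_VARIABLE uname_r OUTPUT_STRIP_TRAILING_WHITESPACE)
set(KDIR /lib/modules/${uname_r}/build)
add_custom_target(source_kmod ALL
    COMMAND make -C ${KDIR} M=${CMAKE_CURRENT_SOURCE_DIR} modules
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
    COMMENT "Building source.ko via Kbuild"
)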

Related

Apache log4cxx 0.12.0 cmake scripts fail during test configuration

Version: apache-log4cxx-0.12.0.tar.gz
Configuration:
mkdir build; cd build && cmake -DBUILD_SHARED_LIBS=off -DAPR_STATIC=yes -DAPU_STATIC=yes ..
Symptoms (log snippet):
32882 error: downloading 'https://www-us.apache.org/dist/logging/log4j/1.2.17/log4j-1.2.17.tar.gz' failed
32883 status_code: 6
32884 status_string: "Couldn't resolve host name"
32885 log:
32886 --- LOG BEGIN ---
This was building just last month. I can't attest 100% that the build procedure has not changed (since it was done manually), but I don't believe it was significantly different.
At first, I thought my DNS resolver was just out of date, but after some dig-ing and fiddling with /etc/resolv.conf, it became apparent that the www-us.apache.org URL that log4cxx was using to fetch the tarball has disappeared from the face of the earth.
Here are two methods you can use to fix this (which I wish someone had posted before me).
First
Hack your /etc/hosts file to spoof www-us.apache.org to actually go to www.apache.org (where you will find a redirect for the link). To give neophytes an idea of what I'm talking about, here is kinda how I did it on Debian.
echo '151.101.2.132 www-us.apache.org' | sudo tee -a /etc/hosts
(Note that sudo echo '...' >> /etc/hosts would fail with permission denied, because the redirection is performed by your unprivileged shell, not by sudo; tee -a does the appending with elevated rights.)
Second
Fix the cmake script src/test/java/CMakeLists.txt line 3 to point to the right link. The broken one is
https://www-us.apache.org/dist/logging/log4j/1.2.17/log4j-1.2.17.tar.gz
The right one is
https://downloads.apache.org/logging/log4j/1.2.17/log4j-1.2.17.tar.gz
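If you'd rather script that edit, a sed one-liner along these lines should do it, assuming the URL appears verbatim in the file:

sed -i 's|www-us.apache.org/dist/logging|downloads.apache.org/logging|' src/test/java/CMakeLists.txt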
Digging around the GitHub repository, we found this fix merged and (I assume) ready to go out with the next release, whenever that is.
https://github.com/apache/logging-log4cxx/commit/341a23aa0d13278c8ae85b6017d49de9790f00fe
Here's hoping this helps someone not remain stuck, expecting the build to work like it did a month ago.

Install mozroot-certdata package on a read only root file system

I have an established Yocto build which I'm now trying to switch over to having a read-only root file system (e.g. EXTRA_IMAGE_FEATURES += "read-only-rootfs").
However, I'm running into an issue with a recipe in the meta-mono layer: mozroot-certdata. I see the culprit is the pkg_postinst script (http://git.yoctoproject.org/cgit/cgit.cgi/meta-mono/tree/recipes-mono/mozroot-certdata/mozroot-certdata_1.0.0.bb), which needs to modify the root file system on first boot and which the build system is correctly flagging as impossible with a read-only root file system:
ERROR: The following packages could not be configured offline and rootfs is read-only: ['mozroot-certdata']
My question is: is there a way to get these mozroot certs installed and configured with mono during the build process, such that the root file system does not need to be modified at boot/run time?
Well, I had a brief look at this late this summer, as I'm also using a read-only rootfs. The problem is that mozroot.exe is hardcoded to write into /usr/share/.mono/certs and does not respect your sysroot. You could probably hack mozroot.exe to actually write the imported files into the sysroot, though my time limit didn't allow me to try this (and neither have I ever looked into mono at all...).
My solution was instead to do the import at every boot. (It could also be done only once, but then the issue of updates comes along.) To achieve this I made a bind mount on the directory where mozroot.exe wants to write the certdata.
Details of my solution
Add a file volatile-binds.bbappend with the following contents:
VOLATILE_BINDS += "\
/tmp/mono-certs /usr/share/.mono/certs \n\
"
That will make a bind mount from /tmp/mono-certs to /usr/share/.mono/certs, thus you'll be able to import the certs.
Then I added a service file and a mozroot-certdata_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
DEPENDS += "mono-native"
SRC_URI += "file://mozroot-certdata.service \
"
inherit systemd
SYSTEMD_SERVICE_${PN} = "mozroot-certdata.service"
do_install_append() {
    mkdir -p ${D}${datadir}/.mono/certs
    mkdir -p ${D}${systemd_system_unitdir}
    install -m 440 ${WORKDIR}/mozroot-certdata.service ${D}${systemd_system_unitdir}/mozroot-certdata.service
}
FILES_${PN} += "${datadir}"
# Empty the postinstallation script, as we can import the cert offline.
pkg_postinst_${PN} () {
    # mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file $D/${sysconfdir}/ssl/certdata.txt
}
The service file mozroot-certdata.service:
[Unit]
Description=Import certificates to Mono
After=tmp-mono-certs.service
[Service]
Type=oneshot
ExecStart=/usr/bin/mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
[Install]
WantedBy=multi-user.target
is there a way to get these mozroot certs installed and configured with mono during the build process
Yes, but it requires the mozroots binary to be executable at rootfs creation time. See Post-Installation Scripts in the documentation.
The 'else' branch in pkg_postinst is what gets executed at that time, and if it succeeds, the delayed postinst is not needed (and you shouldn't get a build error). A mono-native recipe already exists, so you should be able to depend on that and fix the else branch in the pkg_postinst function so that it finds the native mono and mozroots.exe and writes to the correct place under $D.
As Anders mentioned this alone is not enough if you care about package-based upgrades.
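For reference, the general shape such a postinst takes is sketched below; the mozroots invocation is copied from the service file above, and the else branch is left as a stub, so treat this as an illustration of the mechanism rather than the actual recipe code:

pkg_postinst_${PN} () {
    if [ x"$D" = "x" ]; then
        # $D is unset: running on the target at first boot, which a
        # read-only rootfs forbids.
        mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
    else
        # $D is set: running on the build host at rootfs creation time.
        # This branch would need to invoke the native mono and write
        # under $D; returning non-zero is what defers the script to
        # first boot (and what triggers the read-only-rootfs error).
        exit 1
    fi
}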

apache2 build fails in yocto - "/usr/local/include" is unsafe for cross-compilation [-Wpoison-system-directories]

I was trying to build apache2 on Yocto, but I was getting the errors below.
ERROR: This autoconf log indicates errors, it looked at host include and/or library paths while determining system capabilities.
Rerun configure task after fixing this.
Some googling led me to
https://lists.yoctoproject.org/pipermail/yocto/2012-March/005125.html
So I looked into conf.log and found these lines:
cc1: warning: include location "/usr/local/include" is unsafe for
cross-compilation [-Wpoison-system-directories]
arm-poky-linux-gnueabi/4.9.2/ld: warning: library search path "/usr/local/lib"
is unsafe for cross-compilation
I googled again, but I still couldn't understand three things:
Why has the path been set to a host-local path?
Why does this error only come up when building apache2? (I can build nginx, cryptsetup, etc.)
How can I fix it?
Usually these types of errors come from configure scripts that have paths (like /usr/local/include, /usr/include and all sorts of other variations) hardcoded into them. So the way to fix it is to patch configure.ac (if there is one in the package, of course; configure otherwise), removing these paths.
For example, take a look at this patch for pure-ftpd from current meta-oe, which solves a similar problem:
--- a/configure.ac
+++ b/configure.ac
@@ -100,18 +100,6 @@ AC_ARG_VAR(PYTHON,local path to the python interpreter)
python_possible_path="/usr/bin:/usr/local/bin:/bin:/opt/python/bin:/opt/python/usr/bin:/opt/python/usr/local/bin"
AC_PATH_PROG(PYTHON,python,/usr/bin/env python,$python_possible_path)
-if test -d /usr/local/include; then
- CPPFLAGS="$CPPFLAGS -I/usr/local/include"
-fi
-
-if test -d /usr/kerberos/include; then
- CPPFLAGS="$CPPFLAGS -I/usr/kerberos/include"
-fi
-
-if test -d /usr/local/lib; then
- LDFLAGS="$LDFLAGS -L/usr/local/lib"
-fi
-
CPPFLAGS="$CPPFLAGS -D_FORTIFY_SOURCE=2"
dnl Checks for header files

iOS Symbolication Server-side

How can I symbolicate an iOS crash report after uploading it to a server in a Linux environment, where iOS development tools and scripts are not available? I know Apple uses atos and some other tools to map the hex addresses to symbols with the help of the .dSYM file.
I can upload the .dSYM file along with the crash report to the server. I referred to QuincyKit, but they do symbolication locally, whereas others like HockeyApp and Crittercism do it remotely.
Please recommend possible ways to do this on the server.
It is possible. You can take a look at https://github.com/facebook/atosl
I got it working under Linux. (Ubuntu Server) However, it takes some time to get it up and running.
Installing atosl
First, you need to install libdwarf-dev, dwarfdump, binutils-dev and libiberty-dev.
E.g. on Ubuntu:
$ sudo apt-get install libdwarf-dev dwarfdump binutils-dev libiberty-dev
Download or clone the atosl repo from GitHub:
$ git clone https://github.com/facebook/atosl.git
CD to the atosl dir
$ cd atosl
Create a local config config.mk.local containing a flag with the location of your binutils apps (on Ubuntu, by default, that's /usr/bin). If you're not sure, you can find out by executing cat /var/lib/dpkg/info/binutils.list | less and copying the path of the objdump file. E.g. if the entry is /usr/bin/objdump, your path is /usr/bin.
So in the end, your config.mk.local should look like this:
LDFLAGS += -L/usr/bin
Compile it:
$ make
Now you can start using it:
$ ./atosl --help
Symbolicating example
To show how atosl is used, I'll provide a simple example.
Now let's take a look at a line from the crash log:
13 ErrorApp 0x000ea294 0xe3000 + 29332
To symbolicate this, we will need the load address, and the runtime address.
In this example the runtime address is 0x000ea294, and the load address is 0xe3000.
Now we have everything we need:
$ ./atosl -o [YOUR_dSYM_FILE] -l [LOAD_ADDRESS] [RUNTIME_ADDRESS]
In this example:
$ ./atosl -o ErrorApp.app.dSYM/Contents/Resources/DWARF/ErrorApp -l 0xe3000 0x000ea294
Which returns the symbolicated line:
main (in ErrorApp) (main.m:16)
FYI
You can find your vmaddr, which is usually 0x00001000, by looking at the segname __TEXT Mach-O load command of your binary. In my example it happens to be different, namely 0x00004000.
To find the address, we need to do some math.
The address is found by the following formula:
address = vmaddr + ( runtime_address - load_address )
In this example our address is:
0x00004000 + ( 0x000ea294 - 0xe3000 ) = 0xB294
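You can sanity-check that arithmetic in any shell:

$ printf '0x%X\n' $(( 0x00004000 + (0x000ea294 - 0x000e3000) ))
0xB294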
I haven't played around with this that much yet, but for now it seems to give me the results I needed. Maybe it will work for you too.
You need to implement your own Linux-compatible versions of atos, otool and dwarfdump (at least the functionality needed for symbolication). The Apple tools are not open source and only run on Mac OS X.
None of the services provides a solution that can be used by third parties on non-OS X systems. So your only chance, besides implementing the required functionality to run on your Linux system, is to do it on a Mac like QuincyKit does (see https://github.com/TheRealKerni/QuincyKit/wiki/Remote-symbolication) or to use a third-party service.
Note: I am the creator of QuincyKit and Co-Founder of HockeyApp.

building apache from source on debian

I'm trying to build apache from source on debian. The only reason I'm not using apt-get install is because the Apache Cookbook recommends installing from source. I get the following error when I ./configure:
configure: error: invalid variable name: ' --with-mpm'
I also saw some warnings when I ran ./buildconf. Is this something I should be concerned about? This is my first attempt at compiling from source, and I'd really appreciate any help.
I'm using the ./configure arguments directly from the apache cookbook:
./configure --prefix=/usr/local/apache --with-layout=Apache --enable-modules=most --enable-mods-shared=all \ --with-mpm=prefork
I'm running a minimum debian install in virtual box to train myself for deploying in the rackspace cloud soon.
EDIT: I'm building Apache 2.2.16
I suspect you are typing that entire build line you provided on one line, complete with the '\' in the middle.
You should get rid of the '\'. In bash, a backslash at the end of a line continues the command onto the next line, but only when it is the very last character on the line. Here it is followed by a space, so bash instead treats it as an escape for that space; the escaped space is glued onto the next word, and configure receives the argument ' --with-mpm=prefork', which it tries to parse as a variable assignment with the invalid name ' --with-mpm', producing exactly the message you saw.
This should be the correct line in your case.
./configure --prefix=/usr/local/apache --with-layout=Apache --enable-modules=most --enable-mods-shared=all --with-mpm=prefork
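If you do want to keep the command on multiple lines, the backslash must be the very last character on each continued line:

./configure --prefix=/usr/local/apache \
    --with-layout=Apache \
    --enable-modules=most \
    --enable-mods-shared=all \
    --with-mpm=prefork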
On a side note, doesn't the Apache Cookbook say that building from source is one possibility for installing it, in addition to installing from a pre-packaged build like you can get from Debian's repositories? If you really want a far newer build, or a repeatable process that ensures consistency across a variety of distributions, building from scratch will do that for you; otherwise I would utilize the distribution's package management as much as possible. Building from source cuts you off from the security patches and ease-of-upgrade path that Debian's APT gives you.