I want to extract execution traces (e.g., visited basic blocks) when testing the Apache server (httpd). Since my work is based on the LLVM infrastructure, I chose to use clang's instrumentation-based profiling as follows:
clang -fprofile-instr-generate ${options to compile httpd} -o httpd
export LLVM_PROFILE_FILE=code-%p.profraw
sudo -E ./httpd -k start # output a .profraw
curl ${url} # send a request
sudo -E ./httpd -k stop # output another .profraw
The compilation of instrumented httpd works well.
However, I want to track httpd's request handling, which is executed in a separate child process. The output .profraw does not record any execution from child processes. As a result, I can only access the execution traces of starting and stopping the server. How can I get a .profraw that includes the request handling?
I'm not restricted to clang profiling; any solution compatible with LLVM is great. Thanks!
Update
From the logs, it turns out the child process, which runs as "daemon", has no write permission to the file:
LLVM Profile Error: Failed to write file "code-94752.profraw": Permission denied
Problem solved
The problem is a collision of profile file names. The process httpd -k start creates multiple child processes as workers. When using LLVM_PROFILE_FILE=code-%p.profraw, %p resolves to the same value for all of them, so they share one file name. The main process, owned by root, creates the profile file first; the later processes, owned by daemon, then cannot write to that file.
Solution: Use LLVM_PROFILE_FILE=code-%9m.profraw (%Nm instead of %p) to avoid name collisions.
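For reference, a sketch of the adjusted workflow, assuming httpd was also built with -fcoverage-mapping (so llvm-cov can map counters back to source regions) and that the URL and file names below are placeholders:
export LLVM_PROFILE_FILE=code-%9m.profraw    # %Nm uses a merge pool, so no name collisions
sudo -E ./httpd -k start
curl http://localhost:80/                    # placeholder request to exercise the worker processes
sudo -E ./httpd -k stop
# Merge the raw profiles from all processes and dump per-region (roughly per-basic-block) counts.
llvm-profdata merge -sparse code-*.profraw -o httpd.profdata
llvm-cov show ./httpd -instr-profile=httpd.profdata -show-regions > trace.txt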
Related
Imagine we have a statically linked Linux executable.
How should I name it in the imported tar.gz so that WSL 1 will run it by default when the distro is created and started like this:
# import an archive as a WSL distro
wsl --import static tmp-root-dir static.tar.gz
# boot distro to a default app??
wsl -d static
PS: WSL uses its own proprietary boot process and doesn't seem to use the traditional Unix /sbin/init.
Short answer:
The smallest bootable (without errors or warnings) WSL rootfs will consist of three files:
/main: Your statically-linked application. It can be named whatever you want, as long as the name matches what is in passwd.
/etc/passwd: Defines the application (i.e. shell) to load for the default user.
/etc/wsl.conf: To suppress normal WSL functionality and (optionally) define the non-root user.
More detail:
This probably isn't exactly what you are wanting, but it will hopefully meet your needs.
To start with, the entry point for WSL (the first time a Linux ELF binary is started inside the instance) seems to be its /init binary, which, in addition to some "normal" Linux init process tasks, sets up some of the Windows-interop functionality. To my knowledge, it cannot currently be changed. As far as I can tell, for WSL1, it is injected into the instance by the LXSS manager when starting a WSL instance.
Note: WSL2 might be slightly different in this regard, as it does seem to use a kernel-processed initrd to load /init. It is possible to override the kernel command-line, but that would impact all WSL2 instances, so it's probably not a practical solution.
It's not quite clear from your question whether you want the "default application" to:
Run as the default application/shell every time wsl -d static is run, even if it was already running.
Or just run once when starting the WSL1 instance for the first time.
I believe you are looking for the first option.
Run as the default application
In the first case, the standard WSL1 /init process might get you to where you need to be. As part of the startup, as you would expect, it reads /etc/passwd to determine the user shell to start. It also reads /etc/wsl.conf to determine the default user ID (but falls back to the registry if there is no default user set in wsl.conf).
So, to start a different application (let's call it main), you can:
Place the binary in the root directory of your image.
Set the application as the "shell" of the default user in a single-line /etc/passwd:
user:x:1000:1000:user:/:/main
Side-note: this also sets the home directory to /, so we don't have to create another directory.
Define an /etc/wsl.conf with the following contents:
[user]
default=user
[automount]
enabled=false
mountFsTab=false
[interop]
appendWindowsPath=false
This will prevent WSL from performing the following startup tasks, which would produce an error without additional image support:
Mounting Windows drives into the instance
Attempting to process /etc/fstab (since we have no mount command in the image).
Appending Windows paths (since our instance won't have access to the Windows drives)
It also sets the default user to the UID 1000 user we created in /etc/passwd. This isn't strictly necessary; there's likely no concern with running as root in this single-use instance, but I've included a non-root user as a "best practice".
That should be it. The smallest bootable WSL rootfs will consist of just those three files:
/etc/wsl.conf
/etc/passwd
/main
This will work on WSL1 as well as WSL2, although for WSL2, you should invoke with wsl ~ -d static to make sure that it doesn't try to start on a Windows drive that it can't access. Otherwise, you'll receive an init error, but your application will still be invoked.
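Putting it together, here's a sketch of how the three files could be staged, packed, and imported from a Unix-like shell. The rootfs/ staging directory name and the --version 1 flag are my assumptions; the distro and tarball names are taken from the question:
# Stage the three files described above, pack them, and import the distro.
mkdir -p rootfs/etc
cp main rootfs/ && chmod +x rootfs/main          # keep the executable bit
printf 'user:x:1000:1000:user:/:/main\n' > rootfs/etc/passwd
cat > rootfs/etc/wsl.conf <<'EOF'
[user]
default=user
[automount]
enabled=false
mountFsTab=false
[interop]
appendWindowsPath=false
EOF
tar -C rootfs -czf static.tar.gz .
wsl --import static tmp-root-dir static.tar.gz --version 1
wsl -d static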
Run once
If you are looking for something that will, for instance, start up a daemon when the instance is started for the first time, then there are a few alternatives that I document in this answer. If you are on Windows 11, then there's a built-in mechanism via /etc/wsl.conf. Otherwise, on Windows 10, you'll probably need to include some binary that can handle conditional logic. Something like execline would probably be perfect for this, but I've had issues with it under WSL2, at least, and I'm not sure that it would run under WSL1 (but it might).
Side-note for WSL1/musl
musl is a commonly used alternative libc implementation. For instance, Rust (AFAICT) can only generate truly statically linked executables using musl. Note, however, that WSL1 cannot run musl-based statically linked binaries.
WSL2 can handle them just fine.
I managed to get it working. Initially I had missed the executable bit on the app when creating the TAR archive.
Take a standard 64-bit assembly program:
.data
msg:
    .ascii "Hello, world!\n"
    .set len, . - msg

.text
.globl _start
_start:
    # write
    mov $1, %rax
    mov $1, %rdi
    mov $msg, %rsi
    mov $len, %rdx
    syscall

    # exit
    mov $60, %rax
    xor %rdi, %rdi
    syscall
and create a minimal WSL system:
wsl as -64 -o minimal.o minimal.s
wsl ld -melf_x86_64 -o minimal minimal.o
tar czf minimal.tar.gz \
--mode=a=rx \
--xform='s#^minimal#/\0#' minimal
wsl --import minimal rootfs-minimal minimal.tar.gz --version 1
wsl --list
wsl -d minimal -e /minimal
To make the executable the default (shortening wsl -d minimal -e /minimal to wsl -d minimal), we need an extra file, /etc/passwd:
root:x:0:0:root:/root:/minimal
The first line of this file determines the default user, and thus the path to the executable (entry point), unless you override the user with /etc/wsl.conf:
[user]
default=user
Basically WSL 1 treats only 2 files as magical (in addition to ignoring /sbin/init):
/etc/wsl.conf
/etc/passwd
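For completeness, a sketch of rebuilding the archive from the example above with /etc/passwd included, so that plain wsl -d minimal works (wsl --unregister and --version 1 are my additions):
# Rebuild the archive with /etc/passwd; drop the previously imported distro first.
mkdir -p etc
printf 'root:x:0:0:root:/root:/minimal\n' > etc/passwd
tar czf minimal.tar.gz \
    --mode=a=rx \
    --xform='s#^minimal#/\0#' minimal etc/passwd
wsl --unregister minimal
wsl --import minimal rootfs-minimal minimal.tar.gz --version 1
wsl -d minimal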
I installed Slurm on a workstation and it seemed to work; I can use the Slurm commands, and srun works too.
But when I try to launch a job from a script using sbatch test.sh, I get the following error: Batch job submission failed: I/O error writing script/environment to file, even if the script is as simple as:
#!/bin/bash
srun hostname
Make sure slurmd is running as root. See the SlurmdUser parameter in slurm.conf; its default value is root and it should stay that way.
Note that this is different from the SlurmUser parameter, which defines the user that runs the controller processes; that one is preferably not root.
If the configuration is correct, then you might have a faulty filesystem at the location referred to in the SlurmdSpoolDir parameter, where slurmd writes the submission script and environment for jobs assigned to the node.
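For example, a quick check of both parameters and of the spool directory on the compute node (the awk extraction is just a convenience; the actual path comes from your slurm.conf):
scontrol show config | grep -Ei 'slurmduser|slurmuser|slurmdspooldir'
ps -o user= -C slurmd                     # should print root when SlurmdUser is root
SPOOL=$(scontrol show config | awk -F= '/SlurmdSpoolDir/ {gsub(/ /,"",$2); print $2}')
sudo touch "$SPOOL/.write-test" && sudo rm "$SPOOL/.write-test" && echo "spool dir is writable"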
I have an established Yocto build which I'm now trying to switch over to a read-only root file system (e.g. EXTRA_IMAGE_FEATURES += "read-only-rootfs").
However, I'm running into an issue with a recipe in the meta-mono layer: mozroot-certdata. I see the culprit is the pkg_postinst script (http://git.yoctoproject.org/cgit/cgit.cgi/meta-mono/tree/recipes-mono/mozroot-certdata/mozroot-certdata_1.0.0.bb), which needs to modify the root file system on first boot, which the build system correctly flags as impossible with a read-only root file system:
ERROR: The following packages could not be configured offline and rootfs is read-only: ['mozroot-certdata']
My question is: is there a way to get these mozroot certs installed and configured with mono during the build process, such that the root file system does not need to be modified at boot/run time?
Well, I had a brief look at this late this summer, as I'm also using a read-only rootfs. The problem is that mozroots.exe is hardcoded to write into /usr/share/.mono/certs and does not respect your sysroot. You could probably hack mozroots.exe to actually write the imported files into the sysroot, though I didn't have the time to try this (and I've never really looked into mono at all...).
My solution was instead to do the import at every boot. (It could also be done only once, but then the issue of updates comes along.) To achieve this I made a bind mount on the directory where mozroots.exe wants to write the certdata.
Details of my solution
Add a file volatile-binds.bbappend with the following contents:
VOLATILE_BINDS += "\
/tmp/mono-certs /usr/share/.mono/certs \n\
"
That will make a bind mount from /tmp/mono-certs to /usr/share/.mono/certs, thus you'll be able to import the certs.
Then I added a service file and a mozroot-certdata_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
DEPENDS += "mono-native"
SRC_URI += "file://mozroot-certdata.service \
"
inherit systemd
SYSTEMD_SERVICE_${PN} = "mozroot-certdata.service"
do_install_append() {
mkdir -p ${D}${datadir}/.mono/certs
mkdir -p ${D}${systemd_system_unitdir}
install -m 440 ${WORKDIR}/mozroot-certdata.service ${D}${systemd_system_unitdir}/mozroot-certdata.service
}
FILES_${PN} += "${datadir}"
# Empty the post-installation script, as we can't import the certs offline.
pkg_postinst_${PN} () {
# mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask remove --file $D/${sysconfdir}/ssl/certdata.txt
}
The service file mozroot-certdata.service:
[Unit]
Description=Import certificates to Mono
After=tmp-mono-certs.service
[Service]
Type=oneshot
ExecStart=/usr/bin/mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
[Install]
WantedBy=multi-user.target
is there a way to get these mozroot certs installed and configured with mono during the build process
Yes, but it requires the mozroots binary to be executable at rootfs creation time. See Post-Installation Scripts in the documentation.
The 'else' branch in pkg_postinst is what gets executed at that time, and if it succeeds, the delayed postinst is not needed (and you shouldn't get a build error). A mono-native recipe already exists, so you should be able to depend on that and fix the else branch in the pkg_postinst function so it finds native mono & mozroots.exe and writes to the correct place under $D (see the sketch below).
As Anders mentioned, this alone is not enough if you care about package-based upgrades.
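A structural sketch of what such a bbappend could look like. The mono invocation, paths and flags are assumptions, and, as the other answer notes, mozroots.exe may still need patching before it honours $D; a non-zero exit in the else branch falls back to the delayed postinst, which the read-only-rootfs check then rejects:
DEPENDS += "mono-native"

pkg_postinst_${PN} () {
    if [ x"$D" = "x" ]; then
        # On the target at first boot (only possible on a writable rootfs).
        mono ${libdir}/mono/4.5/mozroots.exe --import --machine --ask-remove \
            --file ${sysconfdir}/ssl/certdata.txt
    else
        # At rootfs creation time: assumes native mono is reachable here and
        # that mozroots writes below $D; otherwise exit 1 defers the postinst.
        mono $D${libdir}/mono/4.5/mozroots.exe --import --machine --ask-remove \
            --file $D${sysconfdir}/ssl/certdata.txt || exit 1
    fi
}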
I'd really appreciate any help in tracking down and diagnosing an umask issue on Ubuntu:
I'm running php5-fpm with Apache via proxy_fcgi. The process is running with a umask of 0022 (confirmed by having PHP send the results of umask() into a file [the result is '18' == 0022]). I'd like to change this to 0002, but can't track down where the umask is coming from.
Apache is set with umask 0002, and as a test, if I disable proxy_fcgi and run my test above, I get a file with u+g having rw access (and the file contents confirm the umask as '2' == 0002).
If I sudo -iu fpmuser and run umask the results are 0002.
System info:
PHP: 5.5.3-1ubuntu2.1
Apache: 2.4.6
Ubuntu: 13.10
PHP-FPM is listening on TCP ports (as Unix sockets aren't yet working/supported)
So far I've tried the following (each followed by a system restart and a retest):
adding umask 0002 to the start of /etc/init.d/php5-fpm
adding --umask 0002 into the start-stop-daemon calls in /etc/init.d/php5-fpm
adding umask 0002 to .profile in the home of the fpm user
Something is clearly adjusting the umask of the php-fpm process - so, how can I begin tracing what is forcing the umask 0022 onto the php-fpm process?
EDIT (1):
adjusting the system-wide umask via /etc/login.defs (see How to set system wide umask?) affects the umask elsewhere (e.g. commands via sudo now have a umask of 0002), but php-fpm still creates files with a umask of 0022. Note that I verified that session optional pam_umask.so was also present in /etc/pam.d/common-session-noninteractive and I tested umasks of 002 and 0002.
EDIT (2):
I have been able to replicate the issue using nginx and php5-fpm (using unix sockets set to listen mode '0666').
I would love to trace where the umask is coming from but I'd settle for some way to force it to what I want.
I should add that the first test was done on an Amazon Ubuntu 13.10 image. My tests in 'edit 2' were completed using a copy of the Ubuntu 13.10 server ISO set up from scratch in a virtual machine. All installations were completed via apt-get rather than by downloading the source and building.
EDIT (3):
I have confirmed I can manipulate the umask manually by either of the following (verified by checking the permissions on the test file created):
a. In a shell, set a umask then run /usr/sbin/php5-fpm from the shell
b. In a shell, run the following with whatever umask value I like:
start-stop-daemon --start --quiet --umask 0002 --pidfile /var/run/php5-fpm.pid --exec /usr/sbin/php5-fpm -- --daemonize --fpm-config /etc/php5/fpm/php-fpm.conf
However this exact same command in the /etc/init.d/php5-fpm file fails to adjust the umask when running sudo service php5-fpm stop; sudo service php5-fpm start or at reboot.
Not a solution for generically tracing where umask settings come from on Ubuntu (the only way I've found so far is the good old hard-work approach of replicating the issue, attempting to isolate it to a script or a function, then stepping back through each script/function that is called recursively), but a solution to the php5-fpm umask issue. I've found a lot of hits on Google, Stack Overflow, and elsewhere for the problem, but so far no solution. Hopefully this is useful for people.
Edit /etc/init/php5-fpm.conf to include the line umask 0002 (or whatever umask you wish). My version of the file now looks like this:
# php5-fpm - The PHP FastCGI Process Manager
description "The PHP FastCGI Process Manager"
author "Ondřej Surý <ondrej#debian.org>"
start on runlevel [2345]
stop on runlevel [016]
### my edit - change umask setting
umask 0002
pre-start exec /usr/lib/php5/php5-fpm-checkconf
respawn
exec /usr/sbin/php5-fpm --nodaemonize --fpm-config /etc/php5/fpm/php-fpm.conf
Explanation
Having traced through the service command which launches php5-fpm at startup, it runs some checks (line 118 on my copy) for /etc/init/${SERVICE}.conf, along with verifying that initctl is present and can report its version. If these tests pass, then upstart is used, which in the case of php5-fpm means the /etc/init/php5-fpm.conf file.
The Ubuntu upstart site gives pretty clear instructions. In particular you can check out the upstart cookbook for the specifics you need.
As best I can work out, that means the service command was never actually running the start-stop-daemon … commands found in /etc/init.d/php5-fpm, which is why my previous edits had no effect. Instead it hands off to upstart (actually initctl) when you use something like service php5-fpm start, etc.
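As a quick sanity check (a sketch; job name as used above), you can confirm that upstart rather than the init.d script owns the service:
initctl version                 # upstart present and answering
initctl list | grep php5-fpm    # the job is known to upstart
status php5-fpm                 # e.g. "php5-fpm start/running, process 1234"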
If you use systemd, in the /etc/systemd/system directory, create a new directory called php7.2-fpm.service.d. The name of this directory will vary depending on your distro and PHP version. Run systemctl list-units --type=service | grep --ignore-case php to find out what to call it. Inside of this directory, place a file called umask.conf with the contents:
# /etc/systemd/system/php7.2-fpm.service.d/umask.conf
[Service]
UMask=0002
For the changes to take effect, run:
systemctl daemon-reload && systemctl restart php7.2-fpm
The benefit of this solution is that your customizations are not lost when packages get updated.
Explanation of how this works from the systemd manual:
Along with a unit file foo.service, a "drop-in" directory foo.service.d/ may exist. All files with the suffix ".conf" from this directory will be parsed after the file itself is parsed. This is useful to alter or add configuration settings for a unit, without having to modify unit files. Each drop-in file must have appropriate section headers. Note that for instantiated units, this logic will first look for the instance ".d/" subdirectory and read its ".conf" files, followed by the template ".d/" subdirectory and the ".conf" files there.
In addition to /etc/systemd/system, the drop-in ".d" directories for system services can be placed in /usr/lib/systemd/system or /run/systemd/system directories. Drop-in files in /etc take precedence over those in /run which in turn take precedence over those in /usr/lib. Drop-in files under any of these directories take precedence over unit files wherever located. Multiple drop-in files with different names are applied in lexicographic order, regardless of which of the directories they reside in.
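To confirm the drop-in is actually being applied (the unit name may differ on your distro):
systemctl cat php7.2-fpm.service             # umask.conf should be listed after the main unit file
systemctl show php7.2-fpm.service -p UMask   # should print UMask=0002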
Better to copy the systemd unit file before editing php5-fpm.service, or it will be overwritten on the next update:
cp /lib/systemd/system/php5-fpm.service /etc/systemd/system/
vi /etc/systemd/system/php5-fpm.service
Add UMask=0002 in the [Service] section.
systemctl daemon-reload
systemctl restart php5-fpm
Source: https://ispire.me/running-php-fpm-with-different-user-group-using-umask/
Okay, but this applies to all the pools.
Would be handy to be able to set it with something like
env[umask] = 0002
(no chance for this to work)
I've been googling, but there doesn't seem to be a way to do this on a per-host basis.
I am currently setting up a web service powered by Apache and running on CentOS 6.4.
This service uses Perl scripts (cgi-bin) that launch, in particular, external homemade compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a kernel segfault) when called by my Perl scripts.
If I manually restart the httpd service (at the command line: service httpd restart), the issue is completely fixed.
I examined apache/system logs and nothing suspicious can be found.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried to change the launch order of httpd (S85httpd by default) to any other position, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched at the command line once the server has fully booted up!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions requiring an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the "apache/perl/fortran binary" chain was not aware of it (causing my binary to crash each time it was called).
On the contrary, when I manually restarted Apache at the shell prompt, the stack size limit was correctly passed on (.bashrc sets 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
Thus, this new stack size limit is now directly passed from my perl script to my binary.
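An alternative sketch (not the route taken above) would be to raise the limit where the boot-time start of httpd picks it up, assuming the stock CentOS 6 init script, which sources /etc/sysconfig/httpd:
# Run as root; verify that /etc/init.d/httpd really sources /etc/sysconfig/httpd first.
echo 'ulimit -S -s unlimited' >> /etc/sysconfig/httpd
service httpd restart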
I've run into similar problems before. Maybe one way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script, to start the binary in X minutes.