I'm writing a kernel character device driver for which I've implemented fops.read and the FIONREAD (0x541B) ioctl. The data returned by read is an ELF executable. ls -l confirms that the device has r-x permissions, and both of the following commands let me execute the contained ELF binary:
# cp /dev/foo0 /tmp/bar && /tmp/bar
-or-
# cat /dev/foo0 > /tmp/bar && /tmp/bar
foo_open
foo_ioctl 0x0000541B
foo_read size=131072 off=0
foo_ioctl 0x0000541B
foo_read size=131072 off=13096
foo_release
Hello from /tmp/bar!
...
Note that the kernel messages above show the driver functions as they are called. When I try to run the device directly, however, I get an error:
# /dev/foo0
foo_open
foo_release
/bin/sh: 6: /dev/foo0: Permission denied
What check might be causing the permissions error, and is it possible to override it without fundamentally breaking Linux? I'm using the 4.18.3 kernel with a minimal sysroot image.
From man 2 execve:
EACCES The file or a script interpreter is not a regular file.
The Linux kernel only allows regular files to be executed, not character devices or any other special files. The kernel does that check in the do_open_execat function in fs/exec.c:
if (!S_ISREG(file_inode(file)->i_mode))
goto exit;
You can rebuild the kernel without that check if you really want, but it's probably there for a good reason.
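If you really do want to experiment, the change would be along these lines. This is only a sketch against the 4.18 source (the exact context in do_open_execat() may differ), and loosening the check has obvious security implications:
err = -EACCES;
/* sketch: hypothetically also allow character devices to be exec'd */
if (!S_ISREG(file_inode(file)->i_mode) &&
    !S_ISCHR(file_inode(file)->i_mode))
	goto exit;
Note that even without this check, binfmt_elf maps the executable with mmap, so a character device would likely also need a working mmap implementation for the ELF to actually run.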
Short answer: you’re not (ever) allowed to exec device files.
The execve man page lists
EACCES
The file or a script interpreter is not a regular file.
in the ERRORS section.
I have Homebrew Apache installed and am trying to connect ColdFusion Server 2016 with Tomcat's mod_jk.
I downloaded the source code from https://tomcat.apache.org/download-connectors.cgi
I followed the directions to compile it and tried a few different ways, but when I get to the make command, I keep getting the same error:
In file included from jk_ajp12_worker.c:26:
In file included from ./jk_ajp12_worker.h:26:
In file included from ./jk_logger.h:26:
In file included from ./jk_global.h:340:
./jk_types.h:56:2: error: Can not determine the proper size for pid_t
#error Can not determine the proper size for pid_t
^
./jk_types.h:62:2: error: Can not determine the proper size for pthread_t
#error Can not determine the proper size for pthread_t
^
2 errors generated.
make[1]: *** [jk_ajp12_worker.lo] Error 1
make: *** [all-recursive] Error 1
These are the different configure commands I've tried:
./configure --with-apxs=/opt/homebrew/bin/apxs
./configure CFLAGS='-arch arm64e' APXSLDFLAGS='-arch arm64e' --with-apxs=/opt/homebrew/bin/apxs
./configure CFLAGS='-arch arm64e' APXSLDFLAGS='-arch arm64e' --with-apxs=/opt/homebrew/bin/apxs --host=arm
I recently got this new MacBook Pro 16" and migrated everything over from my 2017 MacBook Pro (Intel chip). I was running stock Apache with ColdFusion Server 2016, but when I tried to start up Apache on the new MacBook, it didn't like my mod_jk.so file and threw an error:
httpd: Syntax error on line 542 of /opt/homebrew/etc/httpd/httpd.conf: Syntax error on line 2 of /opt/homebrew/etc/httpd/mod_jk.conf:
Cannot load /Applications/ColdFusion2016/config/wsconfig/2/mod_jk.so into server: dlopen(/Applications/ColdFusion2016/config/wsconfig/2/mod_jk.so, 0x000A):
tried: '/Applications/ColdFusion2016/config/wsconfig/2/mod_jk.so'
(mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))
I appreciate any help or input. Thank you.
I've finally installed Apache2 with Tomcat on my M1 and it all works.
The one thing you must do is install a fresh Apache from MacPorts or Homebrew. This is because most old installations copied from your old Mac to your new one will now be in the read-only part of your file system, and SIP won't let you near them. You will find weird and wonderful workarounds (apachectl told me I had to codesign mod_jk.so, for example, and I wasted a lot of time doing it and in the end it was pointless), and you will be tempted to get the old installation to work, but trust me, it's not worth it.
You will need to compile a fresh jk_module (mod_jk.so). This is what I did:
Download the latest connector from https://dlcdn.apache.org/tomcat/tomcat-connectors/jk/tomcat-connectors-1.2.48-src.tar.gz, save the .tar.gz and extract it.
Change directory to the native folder.
Run which apxs to find the path to apxs for the ./configure command.
The path mine gave was /opt/local/bin/apxs. Use it as the path in the ./configure command below.
The commands are as follows (actually don't bother running them yet because they will fail):
./configure --with-apxs=/opt/local/bin/apxs
make
However make will fail with:
./jk_types.h:56:2: error: Can not determine the proper size for pid_t
#error Can not determine the proper size for pid_t
^
./jk_types.h:62:2: error: Can not determine the proper size for pthread_t
#error Can not determine the proper size for pthread_t
^
2 errors generated.
make[1]: *** [jk_ajp12_worker.lo] Error 1
make: *** [all-recursive] Error 1
This is a known problem on M1 Macs that has since been fixed upstream. So for the moment we will abandon the 1.2.48 source and download the source that contains the fix.
But don't delete the 1.2.48 source, because the fixed source is missing a few files which you will copy straight over from the 1.2.48 source.
The page to download the fix for macOS is https://github.com/apache/tomcat-connectors, which at the time of writing is commit e719874 from Jun 30, 2021.
Click on the green 'Code' button and then on 'Download ZIP'.
Unzip the new source and cd to 'native'
Run the commands:
./configure --with-apxs=/opt/local/bin/apxs
make
And whenever make stops and complains that something is missing, find it in the 1.2.48 source, copy it over to the same position in the new source, and try again. This will happen two or three times.
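For example, if make stops because a header such as jk_util.h cannot be found (that file name is just an illustration; copy whichever file make actually reports as missing), copy it across from the 1.2.48 tree, adjusting the paths to wherever you extracted the two sources:
# run from the 'native' folder of the new (GitHub) source
cp ../../tomcat-connectors-1.2.48-src/native/common/jk_util.h common/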
I got this error at one point:
/home/myuser/source/mod_auth_cas/mod_auth_cas/missing: line 81: aclocal-1.15: command not found
WARNING: 'aclocal-1.15' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
<http://www.gnu.org/software/automake>
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
<http://www.gnu.org/software/autoconf>
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [aclocal.m4] Error 127
Then I read somewhere to run autoreconf -f -i (which fixed it).
When make finishes, find your nice new mod_jk.so file in the native/apache-2.0 folder and copy it to where all your other modules are. I have a MacPorts installation, so Homebrew is probably different, but my modules are in /opt/local/lib/apache2/modules.
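For example, from the native folder (MacPorts paths; adjust for your own layout):
sudo cp apache-2.0/mod_jk.so /opt/local/lib/apache2/modules/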
Don't forget to add the LoadModule line in httpd.conf if it isn't already there:
LoadModule jk_module /opt/local/lib/apache2/modules/mod_jk.so
You might have some trouble working out which apache2 folders contain the new install rather than an old installation; I found two other installations knocking about trying to confuse me.
My config is here: /opt/local/etc/apache2/httpd.conf
apachectl is very useful for configuration.
apachectl -t -D DUMP_INCLUDES will find all the configuration files it is using. This totally saved me because it showed me that my httpd.conf file, which I had copied from elsewhere, was still pointing via 'Include' commands at other old config files in the wrong place.
apachectl configtest will test your config for you and print out any mistakes it finds. It pointed at 4 modules that it didn't like, so I just excluded them. Though obviously read the messages carefully and google if you are not sure why apachectl doesn't like something. If it replies 'Syntax OK', you are ready to go.
This is a mysterious message I got a lot until I worked out that it was because httpd.conf was pointing at the wrong modules folder (an old install of apache2) for each module, so it was loading modules that presumably were not compiled for 64-bit:
httpd: Syntax error on line 76 of /opt/local/etc/apache2/httpd.conf:
Cannot load libexec/apache2/mod_authz_owner.so into server:
dlopen(/usr/libexec/apache2/mod_authz_owner.so, 0x000A): symbol not
found in flat namespace '_apr_stat$INODE64'
This is my launch command using the plist which Macports automatically created:
sudo launchctl load -w /opt/local/etc/LaunchDaemons/org.macports.apache2/org.macports.apache2.plist
And to unload:
sudo launchctl unload /opt/local/etc/LaunchDaemons/org.macports.apache2/org.macports.apache2.plist
Run ps ax | grep httpd to see if it's running.
Logging: Don't forget to create the jk folder in /var/log/apache2 (with sudo) if it doesn't already exist; otherwise Apache or Tomcat will have mysterious problems or won't start (the /var/log/apache2/jk folder is needed for jk.log).
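For example:
sudo mkdir -p /var/log/apache2/jk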
Another problem cropped up just as I thought I had it made: Apache was unable to write its pid file on startup. Again, this was because the pid file location set in my config came from the configuration on my old Mac, and the location chosen was read-only on the new one.
To change this you need to set the PidFile parameter, which I found in the following file:
/opt/local/etc/apache2/extra/httpd-mpm.conf
and it looks like this:
# PidFile: The file in which the server should record its process
# identification number when it starts.
#
# Note that this is the default PidFile for most MPMs.
#
<IfModule !mpm_netware_module>
PidFile "local/run/apache2/httpd.pid"
</IfModule>
Don't worry about what the IfModule block is doing; just set PidFile to a writable location. As you can see, the value is a relative path, so you may be wondering what goes in front of the local folder.
What goes in front is the ServerRoot parameter set in httpd.conf:
ServerRoot "/usr"
So my pid file will be written to /usr/local/run/apache2/httpd.pid. I had to create the run and apache2 folders.
That's about it. There are various logs that might indicate errors if you are stuck:
/var/log/apache2/error_log
And the jk.log for the apache/tomcat connector:
/var/log/apache2/jk/jk.log
And there's always the system log which just might tell you something:
/var/log/system.log
I hope very much that this helps someone. However it was very long and complicated and I have surely missed something that I did along the way, so if you come across some new problem I will see if I can help.
Running ColdFusion on a Mac is consistently a PITA. It doesn't matter if it's CF 9, 10, 11, all the way to current, especially when you're dealing with a non-Intel-based chipset. You are also trying to get an older, custom build of Tomcat running on a chipset that likely isn't supported. You're also not the only one having this issue with CF 2016 on the M1 chip (they didn't find a solution either).
Try using CommandBox to run CF. It will download the server as a JAR file and run it on the Glassfish servlet container (IIRC). You won't need Apache either. It's really quite simple to get up and running.
https://commandbox.ortusbooks.com/embedded-server/multi-engine-support
Once you have it installed, go to your application's root folder in the CLI and run:
start cfengine=adobe#2016
It will download & install the server, then start the application.
Check the docs for more info.
I am trying to run an application which uses pagemap in gem5 FS mode.
But I am not able to use pagemap in gem5. It throws the error below:
"assert(pagemap>=0) failed"
The relevant lines of code are:
int pagemap = open("/proc/self/pagemap", O_RDONLY);
assert(pagemap >= 0);
Also, if I try to run my application in the gem5 terminal with sudo, it throws the error:
sudo command not found
How can I use sudo in gem5?
These problems are not gem5 specific, but rather image/Linux specific, and would likely happen on any simulator or real hardware. So I recommend that you remove gem5 from the equation completely and ask a Linux or image specific question next time, saying exactly what image you are using and which kernel configs, and providing a minimal C example that reproduces the problem: this will greatly improve the probability that you will get help.
I have just done open("/proc/self/pagemap", O_RDONLY) successfully with this program and on this fs.py setup on aarch64; see also these comments.
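For reference, a minimal program along these lines (a sketch, not the exact program linked above) is enough to reproduce the open() from the question and read one pagemap entry:
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* fails if procfs is not mounted or CONFIG_PROC_PAGE_MONITOR is disabled */
    int pagemap = open("/proc/self/pagemap", O_RDONLY);
    assert(pagemap >= 0);

    /* each pagemap entry is 8 bytes, indexed by virtual page number */
    long page_size = sysconf(_SC_PAGESIZE);
    uint64_t entry;
    off_t offset = (off_t)((uintptr_t)&entry / page_size) * sizeof(entry);
    ssize_t ret = pread(pagemap, &entry, sizeof(entry), offset);
    assert(ret == sizeof(entry));

    printf("pagemap entry for &entry: 0x%016llx\n", (unsigned long long)entry);
    close(pagemap);
    return 0;
}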
If /proc/<pid>/pagemap is not present, do the following:
ensure that procfs is mounted on /proc. This is normally done with an fstab entry of type:
proc /proc proc defaults 0 0
but your init script needs to use fstab as well.
Alternatively, you can mount proc manually with:
mount -t proc proc proc/
you will likely want to ensure that /sys and /dev are mounted as well.
grep the kernel to see if there is some config controlling the file creation.
These kinds of things are often easy to find without knowing anything about the kernel.
If I do:
git grep '"pagemap'
to find the pagemap string, which is likely the creation point. On v4.18 this leads me to fs/proc/base.c, which contains:
#ifdef CONFIG_PROC_PAGE_MONITOR
REG("pagemap", S_IRUSR, proc_pagemap_operations),
#endif
so make sure CONFIG_PROC_PAGE_MONITOR is set.
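If the kernel of your image exposes its configuration (this requires CONFIG_IKCONFIG_PROC, which not every image enables), you can check this directly from the gem5 terminal:
zcat /proc/config.gz | grep PROC_PAGE_MONITOR
Otherwise, grep the .config that was used to build the image's kernel.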
sudo: most embedded/simulator images don't have it; you just log in as root directly and can do anything by default without it. You can tell from the conventional # in the prompt instead of $.
I have an established Yocto build which I'm now trying to switch over to having a read-only root file system (e.g. EXTRA_IMAGE_FEATURES += "read-only-rootfs").
However, I'm running into an issue with a recipe in the meta-mono layer: mozroot-certdata. I see the culprit is the pkg_postinst script (http://git.yoctoproject.org/cgit/cgit.cgi/meta-mono/tree/recipes-mono/mozroot-certdata/mozroot-certdata_1.0.0.bb), which needs to modify the root file system on first boot, which the build system is correctly flagging as impossible with a read-only root file system:
ERROR: The following packages could not be configured offline and rootfs is read-only: ['mozroot-certdata']
My question is: is there a way to get these mozroot certs installed and configured with mono during the build process, such that the root file system does not need to be modified at boot/run time?
Well, I had a brief look at this late this summer, as I'm also using a read-only rootfs. The problem is that mozroots.exe is hardcoded to write into /usr/share/.mono/certs and does not respect your sysroot. You could probably hack mozroots.exe to actually write the imported files into the sysroot, though my time limit didn't allow me to try this (and neither have I ever looked into mono at all...).
My solution was instead to do the import at every boot. (It could also be done only once, but then the issue of updates comes along.) To achieve this I made a bind mount on the directory where mozroots.exe wants to write the certdata.
Details of my solution
Add a file volatile-binds.bbappend with the following contents:
VOLATILE_BINDS += "\
/tmp/mono-certs /usr/share/.mono/certs \n\
"
That will make a bind mount from /tmp/mono-certs to /usr/share/.mono/certs, thus you'll be able to import the certs.
Then I added a service file and a mozroot-certdata_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
DEPENDS += "mono-native"
SRC_URI += "file://mozroot-certdata.service \
"
inherit systemd
SYSTEMD_SERVICE_${PN} = "mozroot-certdata.service"
do_install_append() {
mkdir -p ${D}${datadir}/.mono/certs
mkdir -p ${D}${systemd_system_unitdir}
install -m 440 ${WORKDIR}/mozroot-certdata.service ${D}${systemd_system_unitdir}/mozroot-certdata.service
}
FILES_${PN} += "${datadir}"
# Empty the postinstallation script; the certs are imported at boot by the service instead.
pkg_postinst_${PN} () {
# mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask remove --file $D/${sysconfdir}/ssl/certdata.txt
}
The service file mozroot-certdata.service:
[Unit]
Description=Import certificates to Mono
After=tmp-mono-certs.service
[Service]
Type=oneshot
ExecStart=/usr/bin/mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
[Install]
WantedBy=multi-user.target
is there a way to get these mozroot certs installed and configured with mono during the build process
Yes, but it requires the mozroots binary to be executable at rootfs creation time. See Post-Installation Scripts in the documentation.
The 'else' branch in pkg_postinst is what gets executed at that time, and if it succeeds, then the delayed postinst is not needed (and you shouldn't get a build error). The mono-native recipe already exists, so you should be able to depend on that and fix the else branch in the pkg_postinst function so it finds the native mono and mozroots.exe and writes to the correct place under $D.
As Anders mentioned, this alone is not enough if you care about package-based upgrades.
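A rough skeleton of what such a reworked pkg_postinst might look like (untested; as noted in the other answer, mozroots.exe writes to a hardcoded location, so getting it to write under $D is an assumption that may need extra patching):
pkg_postinst_${PN} () {
    if [ -n "$D" ]; then
        # Rootfs creation time: this branch must succeed on a read-only rootfs.
        # 'mono' is assumed to resolve to the mono-native binary here.
        mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove \
            --file $D${sysconfdir}/ssl/certdata.txt
    else
        # First boot on the target (not possible with a read-only rootfs).
        mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove \
            --file ${sysconfdir}/ssl/certdata.txt
    fi
}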
Can anybody please describe the procedure to set up a toolchain for the MSP430 on Linux (particularly Ubuntu)? I am using the MSP430 LaunchPad (MSP-EXP430G2), and I need to set up the compiler/build tools and debugger drivers.
If you install Texas Instruments' CCS IDE, Linux version, it will install the toolchain. There are, however, other problems in developing for the MSP430 on Linux. The bugs and fixes are detailed in my post here:
MSP430 / eZ430-RF2500 Linux support Guide
"Compile code using Code Composer Studio (CCS)
Download CCS for Linux.
Create a new CCS project with a Custom MSP430 Device or any other.
Compile the code. The resulting binary image will be in the workspace. The workspace path can be found in “File” / “Switch workspace”.
The file that should be programmed to the device is the project-name.out file.
Program and run device using mspdebug
Download and Install mspdebug
From the directory with the file project-name.out run:
$ sudo mspdebug rf2500
Now you are in mspdebug’s command line shell. Run the following to program and run the device:
(mspdebug) prog project-name.out
(mspdebug) run
Use Ctrl+c to pause run and get command line back.
Fix a Linux kernel bug that prevents Minicom from communicating with the device
The device path in /dev is /dev/ttyACM0. Currently, connecting to it
serially using utilities such as minicom is not possible, and you get the message “/dev/ttyACM0: No such file or directory”.
The bug is in the kernel module “cdc_acm”. The solution is to fix the bug in the source code, recompile the module, and load it in place of the existing one.
Find out Linux version:
$ uname -r
cdc_acm’s source is the files cdc-acm.c and cdc-acm.h. They are under the Linux path drivers/usb/class/.
Download these two files from a repository that matches your Linux version. Such repos are available in lxr.free-electrons.com and www.kernel.org.
Create a new directory and move the files to it. There are two code segments that need to be removed or commented out:
The next lines appear in function “acm_port_activate()” on newer versions and in “acm_tty_open()” in older ones:
// if (0 > acm_set_control(acm, acm->ctrlout = ACM_CTRL_DTR | ACM_CTRL_RTS) &&
// (acm->ctrl_caps & USB_CDC_CAP_LINE))
// goto bail_out;
The next line appears in function “acm_port_shutdown()” on newer versions and in “acm_port_down()” on older ones:
// acm_set_control(acm, acm->ctrlout = 0);
Create a Makefile and compile:
$ echo 'obj-m += cdc-acm.o' > Makefile
$ make -C /lib/modules/`uname -r`/build M=$PWD modules
You should now have a new cdc-acm.ko file in the directory.
Replace the existing module (This change will be discarded after boot):
$ sudo rmmod cdc-acm
$ sudo insmod ./cdc-acm.ko
Communicate via the serial port using Minicom
Launch minicom setup from command line:
$ minicom -s
In the menu, choose:
Serial port setup
Press ‘A’ (for “Serial Device”).
Replace Current device path with:
/dev/ttyACM0
Press ‘E’ (for “Bps/Par/Bits”).
Set the correct data rate for your device. To lower the rate (to 1200, for instance), keep pressing ‘B’ (for “previous”) until the top line shows:
Current: 1200 8N1
Press “Enter” until you return to the main menu; there, press “Exit”. This will exit the setup menu and start running on the device. From now on you should see messages over the serial connection: it is up to you to program the device to send such messages."
Download the pre-compiled toolchain (.run file) from http://www.ti.com/tool/msp430-gcc-opensource
Unzip
Execute chmod +x <downloaded file>
Run the installer
enjoy!
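For example (the installer file name is a placeholder for whatever version you downloaded):
chmod +x msp430-gcc-full-linux-x64-installer-x.y.z.run
./msp430-gcc-full-linux-x64-installer-x.y.z.run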
I am trying to run a GNU makefile with multiple jobs.
When I try executing 'make.exe -r -j3', I receive the following two errors:
make.exe: Do not specify -j or --jobs if sh.exe is not available.
make.exe: Resetting make for single job mode.
Do I have to add '$(SH) -c' somewhere in the makefile? If so, where?
The error message suggests that make cannot find sh.exe. The file names indicate you are probably on Cygwin. I would investigate setting the PATH to include the location of sh.exe, or defining the value of SHELL to the name (or, even, full path) of your shell.
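If sh.exe is installed but make still cannot find it, one option is to set SHELL at the top of the makefile (the path below is only an example; point it at wherever your Cygwin or MSYS sh.exe actually lives):
# let make find a POSIX shell so parallel jobs (-j) work
SHELL := C:/cygwin64/bin/sh.exe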
Are you running this on Windows (more specifically, in the Windows shell)? If you are, you might want to read this:
http://www.gnu.org/software/make/manual/make.html#Parallel
more specifically:
On MS-DOS, the ‘-j’ option has no effect, since that system doesn't support multi-processing.
Once again, assuming you're running on Windows, you should get MinGW or Cygwin.