I am running QNX OS 6.5.0 SP1 in VMware Player. I would like to load the devnp-ncm.so driver.
I have tried the following:
1) # io-pkt-v4-hc -d /lib/dll/devnp-ncm.so path=/dev/io-usb/io-usb -ptcpip verbose &
2) # io-pkt-v4-hc &
# mount -T io-pkt devnp-ncm.so
Please give me some suggestions on how to load it.
I used the following command and it worked:
# io-pkt-v4-hc -d ncm pnp verbose &
Or you can specify the path, i.e.:
# io-pkt-v4-hc -d ncm pnp path=/lib/dll/devnp-ncm.so verbose &
But make sure of the option order; it matters.
If you give "pnp" after "path", it will not work.
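For example, only the option order differs between these two invocations; per the note above, the first works and the second does not (a restatement of the answer, not independently verified):
# io-pkt-v4-hc -d ncm pnp path=/lib/dll/devnp-ncm.so verbose &
# io-pkt-v4-hc -d ncm path=/lib/dll/devnp-ncm.so pnp verbose &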
I am setting up a foreign chroot environment to build for architectures other than amd64 from a GitLab CI image. Steps were mostly taken from https://www.hellion.org.uk/blog/posts/foreign-chroots-with-schroot-and-qemu/, except that I am skipping the schroot/sbuild part.
- export CROSS_ARCH=armhf
- export CROSS_ROOT=/opt/chroot/$CROSS_ARCH
- export DISTRO=stretch
- export CROSS_MIRROR=http://deb.debian.org/debian/
- apt-get update
- apt-get -y install debootstrap qemu-user-static binfmt-support
- mkdir -p $CROSS_ROOT
- debootstrap --variant=buildd --include=fakeroot,build-essential --arch=$CROSS_ARCH --foreign $DISTRO $CROSS_ROOT $CROSS_MIRROR
- mkdir -p $CROSS_ROOT/usr/bin
- cp /usr/bin/qemu-arm-static $CROSS_ROOT/usr/bin/
- chroot $CROSS_ROOT ./debootstrap/debootstrap --second-stage
When I now try to run a command in the target environment like this:
chroot $CROSS_ROOT qemu-arm-static uname -a
the command exits with an error (nonzero exit status), but no error message is printed. It works, however, if I specify the path:
chroot $CROSS_ROOT qemu-arm-static /bin/uname -a
And it gives me the following output, which indicates I am running inside the armhf environment:
Linux runner--azerasq-project-40807358-concurrent-0 5.4.109+ #1 SMP Wed Jun 16 20:00:10 PDT 2021 armv7l GNU/Linux
Oddly, the following works:
chroot $CROSS_ROOT qemu-arm-static /bin/bash -c "uname -a"
i.e. full path to bash, but no path for the command after -c.
Suspecting that there could be something wrong with $PATH, I ran:
chroot $CROSS_ROOT qemu-arm-static /bin/bash -c set
I get all of the GitLab-specific variables, as well as a bunch of others, including the following ones:
MACHTYPE=arm-unknown-linux-gnueabihf
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
When I run
chroot $CROSS_ROOT qemu-arm-static /usr/bin/env
some variables (such as $MACHTYPE) are missing but $PATH is the same. So $PATH seems to be set correctly, and a diff of the outputs (after sorting) does not indicate anything that looks related – the extra variables for bash -c set look mostly bash-specific.
Why won’t qemu-arm-static accept binaries without a full path if they are on $PATH? Where else should I look to debug?
Why won’t qemu-arm-static accept binaries without a full path if they are on $PATH?
Because qemu-user is not a shell, it has no code that would search $PATH. The piece of qemu-user code that opens the executable image when it is invoked as in your examples takes exec_path directly from the command line.
On the other hand, you can install qemu-user as a binfmt_misc handler, in which case the shell does the $PATH search and the kernel invokes qemu-user with an open file descriptor for the executable file in the AT_EXECFD entry of the auxiliary vector.
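For example, a rough sketch, assuming the binfmt-support and qemu-user-static packages from the CI steps above have registered a qemu-arm handler and that qemu-arm-static has already been copied into the chroot:
# the kernel now invokes qemu-arm-static transparently, so the shell inside the
# chroot performs the $PATH search and a bare command name works
chroot $CROSS_ROOT /bin/bash -c 'uname -a'
# list the registered handlers (the ARM entry is usually named qemu-arm)
ls /proc/sys/fs/binfmt_misc/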
I am trying to run gem5 in FS mode using the command:
build/ARM/gem5.opt configs/example/fs.py --disk-image=/home/coep/gem5%202/full_system_images/aarch32-ubuntu-natty-headless.img --arm=/home/coep/gem5 2/full_system_images/vmlinux.arm.smp.fb.3.2/vmlinux.arm.smp.fb.3.2
and I get the error:
Usage: fs.py [options] fs.py: error: option --arm-iset: invalid choice: '/home/coep/gem5' (choose from 'arm', 'thumb', 'aarch64')
Please help me solve this error.
Thank you.
I assume the --arm=/home/coep/gem5...vmlinux.arm.smp.fb.3.2 argument specifies the path to the guest kernel, in which case it should be --kernel=...:
build/ARM/gem5.opt \
configs/example/fs.py \
--disk-image=/home/coep/gem5\ 2/full_system_images/aarch32-ubuntu-natty-headless.img \
--kernel=/home/coep/gem5\ 2/full_system_images/vmlinux.arm.smp.fb.3.2/vmlinux.arm.smp.fb.3.2
Arguments and their explanations are found in configs/common/Options.py
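Note also that the space in the "gem5 2" directory name is what makes the shell split the argument in the first place (the option parser then expands the abbreviated --arm to --arm-iset and rejects the path as its value). Quoting the paths is an alternative to escaping the space; a sketch, with the same assumption that --kernel is the intended option:
build/ARM/gem5.opt configs/example/fs.py \
  --disk-image="/home/coep/gem5 2/full_system_images/aarch32-ubuntu-natty-headless.img" \
  --kernel="/home/coep/gem5 2/full_system_images/vmlinux.arm.smp.fb.3.2/vmlinux.arm.smp.fb.3.2"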
There can be multiple reasons why you are getting this error; one of them can be an incorrect path to the disk image files.
I have run gem5 in FS mode and booted Linux on top of it on Ubuntu 18.04 LTS.
You can follow the steps below. The first step is to download and install the full-system binary and disk image files.
1. $ mkdir full_system_image
2. $ cd full_system_image/
3. $ wget http://www.m5sim.org/dist/current/arm/aarch-system-2014-10.tar.bz2
4. $ tar jxf aarch-system-2014-10.tar.bz2
5. $ echo "export M5_PATH=/Path to the full_system_image directory/full_system_images/" >> ~/.bashrc
6. $ source ~/.bashrc
7. $ echo $M5_PATH (check that the path is set correctly)
Now that the path has been set, the next step is to run gem5 in FS mode.
1. cd to the gem5 base directory
2. $ ./build/ARM/gem5.opt configs/example/fs.py --disk-image=/home/full_system_image/disks/aarch32-ubuntu-natty-headless.img
3. Note: --disk-image= is the path to full_system_image/disks/aarch32-ubuntu-natty-headless.img
4. Open a new terminal and connect to port 3456
5. $ telnet localhost 3456
6. Here 3456 is the port number reported on the gem5 terminal
7. This will take around 30 minutes, depending on the machine's performance
8. After this, at the end you will see something like this
input: AT Raw Set 2 keyboard as /devices/smb.14/motherboard.15/iofpga.17/1c060000.kmi/serio0/input/input0
input: touchkitPS/2 eGalax Touchscreen as
/devices/smb.14/motherboard.15/iofpga.17/1c070000.kmi/serio1/input/input2
kjournald starting. Commit interval 5 seconds
EXT3-fs (sda1): using internal journal
EXT3-fs (sda1): mounted filesystem with writeback data mode
VFS: Mounted root (ext3 filesystem) on device 8:1.
Freeing unused kernel memory: 292K (806aa000 - 806f3000)
random: init urandom read with 14 bits of entropy available
Ubuntu 11.04 gem5sim ttySA0
9. Log in as root
Voilà, you have run gem5 in FS mode.
System release: CoreOS 2135.5.0
Kernel: 4.19.50-coreos-r1
System install method: VMware
When I try to change the port for sshd.service, it displays:
CoreOS-234 ssh # echo "Port 10000" >> /usr/share/ssh/sshd_config ;systemctl mask sshd.socket;systemctl enable sshd.service;systemctl restart sshd.service
-bash: /usr/share/ssh/sshd_config: Read-only file system
The file system that you are working in is currently mounted read-only. Remounting the file system read-write should resolve the issue. You will need root privileges:
$ mount -o remount,rw /
Occasionally a file system ends up in read-only mode due to kernel issues, in which case there may be further problems with the system that need to be debugged. Regarding kernel errors, you may want to have a look at the following link: https://unix.stackexchange.com/questions/436483/is-remounting-from-read-only-to-read-write-potentially-dangerous?rq=1
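Before remounting, it can help to confirm which filesystem the file actually lives on and whether that mount is read-only; a sketch, assuming findmnt from util-linux is available:
# shows the source device, mount point and options (ro/rw) for the mount holding the file
findmnt -T /usr/share/ssh/sshd_config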
In CoreOS, /usr is designed to be a read-only file system. Remounting /usr is theoretically feasible, but it is not officially recommended.
I used the following command to solve this problem instead (editing /etc/ssh/sshd_config, which is writable):
sudo sed -i '$a\Port=60022' /etc/ssh/sshd_config && \
sudo systemctl mask sshd.socket && \
sudo systemctl enable sshd.service && \
sudo systemctl start sshd.service
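To verify that sshd picked up the new port afterwards, something like the following should work (a sketch; assumes ss from iproute2 is available):
sudo systemctl status sshd.service
ss -tln | grep 60022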
I was trying to build Apache Impala from source (newest version on GitHub).
I followed these instructions to build Impala:
(1) clone Impala
> git clone https://git-wip-us.apache.org/repos/asf/incubator-impala.git
> cd Impala
(2) configure environment variables
> export JAVA_HOME=/usr/lib/jvm/java-7-oracle-amd64
> export IMPALA_HOME=<path to Impala>
> export BOOST_LIBRARYDIR=/usr/lib/x86_64-linux-gnu
> export LC_ALL="en_US.UTF-8"
(3) build
${IMPALA_HOME}/buildall.sh -noclean -skiptests -build_shared_libs -format
(4) errors are shown below:
Help is needed to find the cause. It looks like the compiler does not support GLIBCXX_3.4.21, but GCC is automatically downloaded by the build script.
I appreciate your help!
Starting from this commit https://github.com/apache/impala/commit/d5cefe07c931a0d3bf02bca97bbba05400d91a48 , Impala has shipped with a development bootstrap script.
I tried the master branch in a fresh Ubuntu 16.04 Docker image and it works fine. Here is what I did.
Check out the latest Impala code base and run:
docker run --rm -it --privileged -v /home/amos/git/impala/:/root/Impala ubuntu:16.04
Inside the container, run:
apt-get update
apt-get install sudo
cd /root/Impala
Comment this out in bin/bootstrap_system.sh if you don't need test data:
# if ! [[ -d ~/Impala-lzo ]]
# then
# git clone https://github.com/cloudera/impala-lzo.git ~/Impala-lzo
# fi
# if ! [[ -d ~/hadoop-lzo ]]
# then
# git clone https://github.com/cloudera/hadoop-lzo.git ~/hadoop-lzo
# fi
# cd ~/hadoop-lzo/
# time -p ant package
Also add this line before ssh localhost whoami:
echo "source ${IMPALA_HOME}/bin/impala-config-local.sh" >> ~/.bashrc
Change the build command in bin/bootstrap_development.sh to whatever you like:
${IMPALA_HOME}/buildall.sh -noclean -skiptests -build_shared_libs -format
Then run bin/bootstrap_development.sh.
You'll be prompted for some input; just accept the default values and it will work.
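Inside the container, the whole sequence then boils down to roughly this (a sketch restating the steps above; the path assumes the volume mount shown earlier):
cd /root/Impala
# after editing bin/bootstrap_system.sh and bin/bootstrap_development.sh as described
./bin/bootstrap_development.sh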
I'm running Apache2 in a docker container and want to write nothing to the disk, writing logs to stdout and stderr. I've seen a few different ways to do this (Supervisord and stdout/stderr, Apache access log to stdout) but these seem like hacks. Is there no way to do this by default?
To be clear, I do not want to tail the log, since that will result in things being written to the disk in the container.
The "official" version checked into Docker Hub (https://hub.docker.com/_/httpd/) still write to disk.
Also, what do I need to do to stop Apache from failing when it tries to roll the logs?
One other thing - ideally, I'd really like to do this without another add-on. nginx can do this trivially.
I'm not positive that this won't mess with httpd's logging at all (e.g. if it tries to seek within the file), but you can set up symlinks from the log paths to /dev/stdout and /dev/stderr, like so:
ln -sf /dev/stdout /path/to/access.log
ln -sf /dev/stderr /path/to/error.log
The entry command for the vanilla httpd container from Docker Hub could then be something like
ln -sf /dev/stdout /path/to/access.log && ln -sf /dev/stderr /path/to/error.log && /path/to/httpd
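Put into a Dockerfile, the same idea might look roughly like this (a sketch; the log paths below are the defaults under /usr/local/apache2 and may need adjusting to wherever your httpd.conf actually writes):
FROM httpd:2.4
RUN ln -sf /dev/stdout /usr/local/apache2/logs/access_log && \
    ln -sf /dev/stderr /usr/local/apache2/logs/error_log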
According to the apache mailing list, you can just directly write to /dev/stdio (on Unix like systems) as that's just a regular ol' file handle. Easy! Pasting...
The most efficient answer depends on your operating system. If you're
on a UNIX like system which provides /dev/stdout and /dev/stderr (or
perhaps /dev/fd/1 and /dev/fd/2) then use those file names. If that
isn't an option use the piped output feature. For example, from my
config:
CustomLog "|/usr/sbin/rotatelogs -c -f -l -L
/private/var/log/apache2/test-access.log
/private/var/log/apache2/test-access.log.%Y-%m-%d 86400 "
krader_custom ErrorLog "|/usr/sbin/rotatelogs -c -f -l -L
/private/var/log/apache2/test-error.log
/private/var/log/apache2/test-error.log.%Y-%m-%d 86400"
Obviously you'll want to substitute another program for
/usr/sbin/rotatelogs in the example above that writes the data where
you want it to go.
https://mail-archives.apache.org/mod_mbox/httpd-users/201508.mbox/%3CCABx2=D-wdd8FYLkHMqiNOKmOaNYb-tAOB-AsSEf2p=ctd6sMdg#mail.gmail.com%3E
I know it's an old question, but I had this need today.
On Alpine 3.6, the following directives in httpd.conf work:
Errorlog /dev/stderr
Transferlog /dev/stdout
I add them to my container this way:
FROM alpine:3.6
RUN apk --update add apache2
RUN sed -i -r 's#Errorlog .*#Errorlog /dev/stderr#i' /etc/apache2/httpd.conf
RUN echo "Transferlog /dev/stdout" >> /etc/apache2/httpd.conf
...
I adjusted the config the same way the Dockerfile recipe for httpd does: they use sed to change ErrorLog and CustomLog as follows:
sed -ri ' \
s!^(\s*CustomLog)\s+\S+!\1 /proc/self/fd/1!g; \
s!^(\s*ErrorLog)\s+\S+!\1 /proc/self/fd/2!g; \
' /usr/local/apache2/conf/httpd.conf \
See https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile (towards the end of the file)
You can send your ErrorLog to syslog directly, and you can send any CustomLog (access log) to any executable that reads from stdin. There are log aggregation tools, or you can again use syslog with e.g. /usr/bin/logger.
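For example, in httpd.conf (a sketch; the syslog facility, logger tag and log format here are arbitrary choices):
ErrorLog syslog:local1
CustomLog "|/usr/bin/logger -t httpd -p local1.info" combined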
You could try using the dockerize tool. With that you could wrap the httpd-foreground command and redirect its log files to stdout/stderr (I don't know the exact httpd log file paths; simply adjust them to your needs):
CMD ["dockerize", "-stdout", "/var/log/httpd.log", "-stderr", "/var/log/httpd.err", "httpd-foreground"]
In addition to that, you could then grab that container's stdout/stderr by specifying a syslog log driver, redirecting them to the /var/log/syslog log file on the Docker host:
docker run -d --log-driver=syslog ...