Why remount the filesystem read-only before unmounting in the umountfs script? - ext4

On an embedded Linux distribution with ext4, I have the following umountfs script:
#!/bin/sh
### BEGIN INIT INFO
# Provides: umountfs
# Required-Start:
# Required-Stop:
# Default-Start:
# Default-Stop: 0 6
# Short-Description: Turn off swap and unmount all local file systems.
# Description:
### END INIT INFO
PATH=/sbin:/bin:/usr/sbin:/usr/bin
echo "Deactivating swap..."
[ -x /sbin/swapoff ] && swapoff -a
# We leave /proc mounted.
echo "Unmounting local filesystems..."
grep -q /mnt/ram /proc/mounts && mount -o remount,ro /mnt/ram
mount -o remount,ro /
umount -f -a -r > /dev/null 2>&1
: exit 0
I have a question about the following lines:
mount -o remount,ro /
umount -f -a -r > /dev/null 2>&1
The question is: why do we need to remount the rootfs read-only before unmounting?
I have seen an explanation claiming that we need to remount the rootfs read-only in order to force all pending write requests to be flushed to disk. But this does not satisfy me, because flushing pending write requests is already part of what the umount command does.
So the question is: does somebody understand why we need to remount the rootfs read-only before unmounting it?

Typically you can't unmount the root filesystem, because at least one currently-running process is still using it: init (or systemd). Remounting the root filesystem read-only flushes all dirty data and prevents it from being modified again, so that the filesystem is consistent. Typically, at that point the kernel reboots without ever having actually unmounted the root filesystem.
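You can see both points on a live system (a quick illustrative sketch, not part of the shutdown script):
umount /                                  # fails: the root filesystem is busy (init/systemd runs from it)
mount -o remount,ro /                     # succeeds once nothing holds files open for writing; flushes dirty data
awk '$2 == "/" {print $4}' /proc/mounts   # mount options for / now begin with "ro"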

Related

s2e-block: dirty sectors on close:11104 Terminating node id 0 (instance slot 0)

I am trying to test OpenVSwitch using S2E. I put the OpenVSwitch installation steps into a script invoked from bootstrap.sh. The image in the QEMU virtual machine is the same as the image on the host machine, so an executable compiled on the host should also run in the virtual machine. So after I installed OpenVSwitch and started ovsdb-server and ovs-vsctl, ovs-vswitchd should have been able to execute successfully, but I got the following error:
18 [State 0] BaseInstructions: Killing state 0
18 [State 0] Terminating state: State was terminated by opcode
message: "bootstrap terminated"
status: 0x0
18 [State 0] TestCaseGenerator: generating test case at address 0x40717d
18 [State 0] TestCaseGenerator: All states were terminated
qemu-system-x86_64: terminating on signal 15 from pid 42128 (/home/lz/s2e/install/bin/qemu-system-x86_64)
s2e-block: dirty sectors on close:11104
Terminating node id 0 (instance slot 0)
bootstrap.sh and the installation script install_ovs.sh are as follows:
bootstrap.sh
#!/bin/bash
#
# This file was automatically generated by s2e-env at 2022-09-29 14:22:53.271106
#
# This bootstrap script is used to control the execution of the target program
# in an S2E guest VM.
#
# When you run launch-s2e.sh, the guest VM calls s2eget to fetch and execute
# this bootstrap script. This bootstrap script and the S2E config file
# determine how the target program is analyzed.
#
set -x
mkdir -p guest-tools32
TARGET_TOOLS32_ROOT=guest-tools32
mkdir -p guest-tools64
TARGET_TOOLS64_ROOT=guest-tools64
# 64-bit tools take priority on 64-bit architectures
TARGET_TOOLS_ROOT=${TARGET_TOOLS64_ROOT}
# To save the hassle of rebuilding guest images every time you update S2E's guest tools,
# the first thing that we do is get the latest versions of the guest tools.
function update_common_tools {
    local OUR_S2ECMD
    OUR_S2ECMD=${S2ECMD}
    # First, download the common tools
    for TOOL in ${COMMON_TOOLS}; do
        ${OUR_S2ECMD} get ${TARGET_TOOLS_ROOT}/${TOOL}
        if [ ! -f ${TOOL} ]; then
            ${OUR_S2ECMD} kill 0 "Could not get ${TOOL} from the host. Make sure that guest tools are installed properly."
            exit 1
        fi
        chmod +x ${TOOL}
    done
}
function update_target_tools {
    for TOOL in $(target_tools); do
        ${S2ECMD} get ${TOOL} ${TOOL}
        chmod +x ${TOOL}
    done
}
function prepare_target {
    # Make sure that the target is executable
    chmod +x "$1"
}
function get_ramdisk_root {
    echo '/tmp/'
}
function copy_file {
    SOURCE="$1"
    DEST="$2"
    cp ${SOURCE} ${DEST}
}
# This prepares the symbolic file inputs.
# This function takes as input a seed file name and makes its content symbolic according to the symranges file.
# It is up to the host to prepare all the required symbolic files. The bootstrap file does not make files
# symbolic on its own.
function download_symbolic_file {
    SYMBOLIC_FILE="$1"
    RAMDISK_ROOT="$(get_ramdisk_root)"
    ${S2ECMD} get "${SYMBOLIC_FILE}"
    if [ ! -f "${SYMBOLIC_FILE}" ]; then
        ${S2ECMD} kill 1 "Could not fetch symbolic file ${SYMBOLIC_FILE} from host"
    fi
    copy_file "${SYMBOLIC_FILE}" "${RAMDISK_ROOT}"
    SYMRANGES_FILE="${SYMBOLIC_FILE}.symranges"
    ${S2ECMD} get "${SYMRANGES_FILE}" > /dev/null
    # Make the file symbolic
    if [ -f "${SYMRANGES_FILE}" ]; then
        export S2E_SYMFILE_RANGES="${SYMRANGES_FILE}"
    fi
    # The symbolic file will be split into symbolic variables of up to 4k bytes each.
    ${S2ECMD} symbfile 4096 "${RAMDISK_ROOT}${SYMBOLIC_FILE}" > /dev/null
}
function download_symbolic_files {
    for f in "$@"; do
        download_symbolic_file "${f}"
    done
}
# This function executes the target program given in arguments.
#
# There are two versions of this function:
# - without seed support
# - with seed support (-s argument when creating projects with s2e_env)
function execute {
    local TARGET
    TARGET="$1"
    shift
    execute_target "${TARGET}" "$@"
}
###############################################################################
# This section contains target-specific code
function make_seeds_symbolic {
    echo 1
}
# This function executes the target program.
# You can customize it if your program needs special invocation,
# custom symbolic arguments, etc.
function execute_target {
    local TARGET
    TARGET="$1"
    shift
    # added by me
    sudo ./install_ovs.sh
    S2E_SO="${TARGET_TOOLS64_ROOT}/s2e.so"
    # ovs-vswitchd is dynamically linked, so s2e.so has been preloaded to
    # provide symbolic arguments to the target if required. You can do so by
    # using the ``S2E_SYM_ARGS`` environment variable as required
    S2E_SYM_ARGS="" LD_PRELOAD="${S2E_SO}" "${TARGET}" "$@" > /dev/null 2> /dev/null
}
# Nothing more to initialize on Linux
function target_init {
    # Start the LinuxMonitor kernel module
    sudo modprobe s2e
}
# Returns Linux-specific tools
function target_tools {
    echo "${TARGET_TOOLS32_ROOT}/s2e.so" "${TARGET_TOOLS64_ROOT}/s2e.so"
}
S2ECMD=./s2ecmd
COMMON_TOOLS="s2ecmd"
###############################################################################
update_common_tools
update_target_tools
# Don't print crashes in the syslog. This prevents unnecessary forking in the
# kernel
sudo sysctl -w debug.exception-trace=0
# Prevent core dumps from being created. This prevents unnecessary forking in
# the kernel
ulimit -c 0
# Ensure that /tmp is mounted in memory (if you built the image using s2e-env
# then this should already be the case. But better to be safe than sorry!)
if ! mount | grep "/tmp type tmpfs"; then
    sudo mount -t tmpfs -osize=10m tmpfs /tmp
fi
# Need to disable swap, otherwise there will be forced concretization if the
# system swaps out symbolic data to disk.
sudo swapoff -a
target_init
# Download the target file to analyze
${S2ECMD} get "ovs-vswitchd"
# added by me
#${S2ECMD} get "ovsdb-server"
#${S2ECMD} get "ovs-vsctl"
${S2ECMD} get "openvswitch-3.0.0.tar.gz"
${S2ECMD} get "install_ovs.sh"
download_symbolic_files
# Run the analysis
TARGET_PATH='./ovs-vswitchd'
prepare_target "${TARGET_PATH}"
# added by me
#prepare_target "./ovsdb-server"
#prepare_target "./ovs-vsctl"
prepare_target "openvswitch-3.0.0.tar.gz"
prepare_target "install_ovs.sh"
execute "${TARGET_PATH}" --pidfile --detach --log-file
install_ovs.sh
#!/bin/bash
tar zxvf openvswitch-3.0.0.tar.gz
cd openvswitch-3.0.0
./configure
make -j4
sudo make install
export PATH=$PATH:/usr/local/share/openvswitch/scripts
sudo mkdir -p /usr/local/etc/openvswitch
sudo ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
#/usr/local/share/openvswitch/scripts/ovs-ctl --no-ovs-vswitchd start
sudo ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
sudo ovs-vsctl --no-wait init
#sudo ovs-vswitchd --pidfile --detach
Can anybody tell me how to fix this? Or is OpenVSwitch simply not testable with S2E?

can't change sshd's port in CoreOS

System release: CoreOS 2135.5.0
Kernel: 4.19.50-coreos-r1
Installation method: VMware
When I try to change the port used by sshd.service, it displays:
CoreOS-234 ssh # echo "Port 10000" >> /usr/share/ssh/sshd_config ;systemctl mask sshd.socket;systemctl enable sshd.service;systemctl restart sshd.service
-bash: /usr/share/ssh/sshd_config: Read-only file system
The file system that you are working in is currently mounted read-only. Remounting the file system read-write should resolve the issue. You will need root privileges:
$ mount -o remount,rw /
Occasionally a file system ends up mounted read-only because of kernel issues, in which case there may be further problems with the system that need to be debugged. Regarding the kernel errors, you may want to have a look at the following link: https://unix.stackexchange.com/questions/436483/is-remounting-from-read-only-to-read-write-potentially-dangerous?rq=1
In CoreOS, /usr is designed to be a read-only file system. Remounting /usr is theoretically feasible, but it is not officially recommended.
You can refer to this.
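For reference, the remount itself would just be the following (a sketch only; not recommended, since changes under /usr can conflict with how CoreOS manages and updates that partition):
sudo mount -o remount,rw /usr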
Instead, I used the following commands to solve the problem, putting the new port into /etc/ssh/sshd_config, which is writable:
sudo sed -i '$a\Port=60022' /etc/ssh/sshd_config && \
sudo systemctl mask sshd.socket && \
sudo systemctl enable sshd.service && \
sudo systemctl start sshd.service
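To confirm that sshd picked up the new port (my addition, not part of the original answer):
sudo systemctl status sshd.service    # service should be active (running)
sudo ss -tlnp | grep 60022            # sshd should now be listening on 60022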

monit: use hostname in logfile path

I am new to monit and want to use a different logfile path for monit (not the default one):
set logfile /x/home/xxxx/yyyy/monit/monit-5.20.0/logs/monit_$HOST.log
In place of $HOST, I want the hostname of the machine where monit is running.
Any idea how we can achieve this? Similarly, I want to use the hostname for the idfile and statefile as well.
Note: /x/home/xxxx/yyyy/monit/monit-5.20.0 is a common mount for all the machines, and I want to run monit on each of them, but not with a shared log file.
Finally I found a way to get the hostname into the logfile, pidfile and state file.
I created a wrapper script start.sh as follows and started monit by passing the control file, logfile, pidfile and state file on the command line.
#!/bin/bash
BASEDIR=$(dirname "$0")
HOST=$(hostname)
MONIT_BIN=$BASEDIR/bin/monit
CTRL_FILE=$BASEDIR/conf/monitrc
LOG_FILE=$BASEDIR/logs/monit_$HOST.log
PID_FILE=$BASEDIR/run/monit_$HOST.pid
STATS_FILE=$BASEDIR/run/.monit_$HOST.state
mkdir -p "$BASEDIR/run"
mkdir -p "$BASEDIR/logs"
touch "$PID_FILE"
touch "$STATS_FILE"
touch "$LOG_FILE"
nohup "$MONIT_BIN" -c "$CTRL_FILE" -l "$LOG_FILE" -p "$PID_FILE" -s "$STATS_FILE" &> /dev/null &
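Run the wrapper on every machine that shares the mount; each host then writes to its own set of files. For example (the hostname below is a made-up example):
./start.sh
# On a host named "web01", monit now uses:
#   logs/monit_web01.log
#   run/monit_web01.pid
#   run/.monit_web01.state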

Redis sentinel docker image / Dockerfile

I'm looking to deploy high-availability Redis on a CoreOS cluster, and I need a Redis Sentinel Docker image (i.e. Dockerfile) that works. I've gathered enough information/expertise to create one (I think)... but my limited knowledge/experience with advanced networking is the only thing keeping me from building and sharing it.
Can someone who is an expert here help me develop a Redis Sentinel Dockerfile (none exists right now)? The Redis/Docker community would really benefit from this.
Here's the broader issue and context:
https://github.com/antirez/redis/pull/1908
I think the solution is right here specifically:
https://github.com/antirez/redis/pull/1908#issuecomment-54380876
Here's the Dockerfile I've been using... but if you read the thread above, you'll see my comments (joshula); it lacks the networking fixes that mattsta is talking about. Note that because I'm using this on CoreOS, any config settings in sentinel.conf are set at run time via the command line (hence ENTRYPOINT).
# Pull base image.
FROM dockerfile/ubuntu:latest
# Install Redis.
RUN \
    cd /tmp && \
    wget http://download.redis.io/redis-stable.tar.gz && \
    tar xvzf redis-stable.tar.gz && \
    cd redis-stable && \
    make && \
    make install && \
    cp -f src/redis-sentinel /usr/local/bin && \
    mkdir -p /etc/redis && \
    cp -f *.conf /etc/redis && \
    rm -rf /tmp/redis-stable* && \
    sed -i 's/^\(bind .*\)$/# \1/' /etc/redis/redis.conf && \
    sed -i 's/^\(daemonize .*\)$/# \1/' /etc/redis/redis.conf && \
    sed -i 's/^\(dir .*\)$/# \1\ndir \/data/' /etc/redis/redis.conf && \
    sed -i 's/^\(logfile .*\)$/# \1/' /etc/redis/redis.conf
# Define mountable directories.
VOLUME ["/data"]
# Define working directory.
WORKDIR /data
# Expose ports.
EXPOSE 26379
# Define default command.
ENTRYPOINT redis-sentinel /etc/redis/sentinel.conf
After a ton of work, I ended up figuring this out. Here's to making it simple for anyone else who wants to deploy a highly available redis instance via Docker:
https://registry.hub.docker.com/u/joshula/redis-sentinel/
There is no need for a custom sentinel image, or messing with the network. See my redis-ha-learning project using Spring Data Redis, bitnami/redis and bitnami/redis-sentinel images. The Docker Compose file is here.
My code auto-detects the sentinels based on the Docker Compose container names.
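For anyone who wants a quick start without Compose, here is a minimal sketch of the same idea with plain docker run. This is not the author's setup; the container names are my own, and the environment variables are the ones documented for the bitnami images, so double-check against the image README:
# Create a user-defined network so the containers resolve each other by name.
docker network create redis-ha
# One master...
docker run -d --name redis-master --network redis-ha \
    -e REDIS_REPLICATION_MODE=master -e ALLOW_EMPTY_PASSWORD=yes bitnami/redis
# ...one replica pointing at the master...
docker run -d --name redis-replica --network redis-ha \
    -e REDIS_REPLICATION_MODE=slave -e REDIS_MASTER_HOST=redis-master \
    -e ALLOW_EMPTY_PASSWORD=yes bitnami/redis
# ...and a sentinel watching the master (sentinel's default port is 26379).
docker run -d --name redis-sentinel --network redis-ha \
    -e REDIS_MASTER_HOST=redis-master bitnami/redis-sentinel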

Scripted truecrypt mount, without using /dev/ or UUID

I have 5 TrueCrypt-encrypted drives, running Ubuntu 13.04. I'm trying to run the following command in a script to mount my drives:
truecrypt -t /dev/disk/by-uuid/25f8c629-d0c8-4c39-b4c2-aacba38b5882 /media/P --password="$password" -k "" --protect-hidden=no
Because of the way TrueCrypt works I can't use this: the UUID is only accessible once the drive is mounted.
Is it possible to do the same thing but with hard-drive serial numbers, or model numbers? Something a bit more permanent?
I can't use the /dev/ device names, as they change nearly every time I reboot the PC. This is because 2 of my drives are connected via a PCI card.
Use Disk ID instead:
#!/bin/bash
# Run this script as root to avoid entering the root password twice
secret=0xa52f2c38
# Generate tempfile
tempfile=fdisk.tmp
sudo fdisk -l > $tempfile
# --------------------------------------------------------------------------
# Locate secret drive and mount it
# --------------------------------------------------------------------------
# In fdisk's output the "Disk /dev/sdX: ..." line sits 5 lines above the
# matching "Disk identifier:" line.
num=$(( $(grep -n "^Disk identifier: $secret" $tempfile | cut -f1 -d:) - 5 ))
if [ $num -gt 0 ]   # num will be greater than 0 if the drive exists
then
    # Get the line containing /dev
    # ----------------------------------------------------------------------
    dev=$(sed -n "${num}p" $tempfile | cut -f2 -d' ' | sed 's/://')
    truecrypt $dev /media/secret
    # Check (create .truecrypt on the mounted volume beforehand)
    # ----------------------------------------------------------------------
    if [ ! -f /media/secret/.truecrypt ]
    then
        zenity --error --text="There was a problem mounting secret"
    fi
fi
rm $tempfile
The source of the script is: http://delightlylinux.wordpress.com/2012/05/21/mounting-truecrypt-volumes-by-disk-id/
I recommend reading it through if you have difficulty understanding what the script does; the explanation is thorough.
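As a side note on the "serial numbers or model numbers" part of the question: udev also creates /dev/disk/by-id/ symlinks, which are built from the drive's model and serial number, exist before the TrueCrypt volume is mounted, and are stable across reboots. A sketch (the disk name below is a made-up example; list yours with ls -l /dev/disk/by-id/):
#!/bin/bash
# /dev/disk/by-id names encode model and serial number, so they do not
# change when the kernel reorders /dev/sdX across reboots.
disk=/dev/disk/by-id/ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1234567   # hypothetical name
truecrypt -t "$disk" /media/P --password="$password" -k "" --protect-hidden=no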