I've installed RVM in Mixed Mode and have Phusion Passenger running in standalone mode.
I've found this init script to start my Phusion Passenger standalone server on startup: http://memcloud.com/note/show/167
After modifying only the prescribed values, it gave me the following error, but the server would still run:
-su: /home/myuser/.rvm/bin/rvm: No such file or directory
I ran which rvm as myuser and found out that RVM is in /usr/local/rvm/bin/rvm, so I updated the script, changing RVM="$USER_HOME/.rvm/bin/rvm" to RVM="/usr/local/rvm/bin/rvm". Now it's giving me the following message, but it still runs.
RVM is not a function, selecting rubies with 'rvm use ...' will not work.
I'm not sure whether this is actually a problem, since the system is running, but I'd just like to be sure.
I would say this script is wrong; you should use something more like this:
#!/usr/bin/env bash
### BEGIN INIT INFO
# Provides: my-app passenger in standalone
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start/stop my-app web site
### END INIT INFO
# BEGIN MINIMAL CHANGES
USER=www-data
USER_HOME=/var/www
APP_PATH=/var/www/my-app/current
GEM_SET=ruby-1.8.7-p330@my-app
ADDRESS=127.0.0.1
PORT=3000
ENVIRONMENT=production
# END MINIMAL CHANGES
RVM="/usr/local/rvm/bin/rvm"
PASSENGER="$USER_HOME/.rvm/gems/$GEM_SET/bin/passenger"
PASSENGER="cd $APP_PATH; $RVM $GEM_SET do $PASSENGER"
CMD_START="$PASSENGER start -a $ADDRESS -p $PORT -e $ENVIRONMENT -d"
CMD_STOP="$PASSENGER stop -p $PORT"
. /lib/lsb/init-functions
case "$1" in
start)
echo "Starting myapp passenger"
echo $CMD_START
su - $USER -c "$CMD_START"
;;
stop)
echo "Stopping myapp passenger"
echo $CMD_STOP
su - $USER -c "$CMD_STOP"
;;
*)
echo "Usage: $0 start|stop" >&2
exit 3
;;
esac
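To have this actually run at boot on a Debian/Ubuntu-style system (which the LSB header above targets), the script would typically be installed along these lines; the file name my-app here is just an assumed choice:
sudo cp my-app /etc/init.d/my-app
sudo chmod +x /etc/init.d/my-app
sudo update-rc.d my-app defaults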
You could also replace the GEM_SET value with . (i.e. GEM_SET=.) to make rvm use the ruby stored in .rvmrc, but this requires that $USER has trusted that .rvmrc, which could also be done in this script with:
su - $USER -c "rvm rvmrc trust $APP_PATH"
called as the first line of the start) case.
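As a minimal sketch (an addition to the script above, not part of the original), the start) branch would then begin with the trust call:
    start)
        su - $USER -c "rvm rvmrc trust $APP_PATH"
        echo "Starting myapp passenger"
        echo $CMD_START
        su - $USER -c "$CMD_START"
        ;;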
I installed the older Codeception 2.5 (because of a module for the PHP framework Yii1) like this:
composer require codeception/codeception:2.5.*
And then executed:
php vendor/bin/codecept run unit --coverage-html
And nothing happened. I discovered that all 3 files (carbon, codecept, phpunit) in the vendor/bin folder contain only the following shell script instead of PHP code:
#!/usr/bin/env sh
dir=$(cd "${0%[/\\]*}" > /dev/null; cd '../codeception/codeception' && pwd)
if [ -d /proc/cygdrive ]; then
case $(which php) in
$(readlink -n /proc/cygdrive)/*)
# We are in Cygwin using Windows php, so the path must be translated
dir=$(cygpath -m "$dir");
;;
esac
fi
"${dir}/codecept" "$#"
Why is that? I am using Ubuntu 16 in Vagrant (CognacBox image). If I use XAMPP on Windows 10 it works correctly. I tried Composer v1 and v2, both with the same problem.
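For what it's worth, the wrapper shown above only resolves the package directory and forwards its arguments; judging from the relative path inside it, the real entry point would be vendor/codeception/codeception/codecept, so as a rough sanity check (a sketch, not a confirmed fix) it can be invoked directly:
php vendor/codeception/codeception/codecept run unit --coverage-html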
I found a similar case described elsewhere, but here is mine: I am using Molecule to test my Ansible roles, but for some reason it is skipping the "create" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create step (Skipping, instances already created), even though nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstall molecule
This issue is only with one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker.
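Spelled out as commands (ns.myrole is just a placeholder namespace/role name):
pip install --upgrade molecule-docker
molecule init role --driver-name docker ns.myrole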
If you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS; I switched to Ubuntu in there and created a Dockerfile to provision the container with the things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04  # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python executable (only python3), so I added an alias at the end of what Molecule renders so that Ansible can use python and have it redirected to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
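With the molecule.yml and Dockerfile above in place, the usual workflow applies (assuming the default scenario):
molecule create     # build the image from the Dockerfile and start the container
molecule converge   # run the role against it
molecule test       # full create/converge/verify/destroy sequence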
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context change performed at the start of Docker Desktop. Despite this, Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is only suitable as a one-off, since the context will need to be switched back every time Docker Desktop is started. The Docker Desktop service itself will continue to work using the desktop-linux context.
Issue with the request to add context switching to Docker Desktop - https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
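Whichever option you choose, you can confirm which context the CLI is actually using:
docker context show   # prints the name of the active context
docker context ls     # the active context is marked with an asterisk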
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq
I was trying to build Apache Impala from source (newest version on GitHub).
I followed these instructions to build Impala:
(1) clone Impala
> git clone https://git-wip-us.apache.org/repos/asf/incubator-impala.git
> cd Impala
(2) configure environment variables
> export JAVA_HOME=/usr/lib/jvm/java-7-oracle-amd64
> export IMPALA_HOME=<path to Impala>
> export BOOST_LIBRARYDIR=/usr/lib/x86_64-linux-gnu
> export LC_ALL="en_US.UTF-8"
(3) build
${IMPALA_HOME}/buildall.sh -noclean -skiptests -build_shared_libs -format
(4) errors are shown below:
Help is needed to find the cause. It looks like the compiler does not support GLIBCXX_3.4.21, but GCC is automatically downloaded by the build script.
I'd appreciate your help!
Starting from commit https://github.com/apache/impala/commit/d5cefe07c931a0d3bf02bca97bbba05400d91a48, Impala has shipped with a development bootstrap script.
I tried the master branch in a fresh ubuntu 16.04 docker image and it works fine. Here is what I did.
Check out the latest Impala code base and run:
docker run --rm -it --privileged -v /home/amos/git/impala/:/root/Impala ubuntu:16.04
Inside the container, run:
apt-get update
apt-get install sudo
cd /root/Impala
Comment this out in bin/bootstrap_system.sh if you don't need test data:
# if ! [[ -d ~/Impala-lzo ]]
# then
# git clone https://github.com/cloudera/impala-lzo.git ~/Impala-lzo
# fi
# if ! [[ -d ~/hadoop-lzo ]]
# then
# git clone https://github.com/cloudera/hadoop-lzo.git ~/hadoop-lzo
# fi
# cd ~/hadoop-lzo/
# time -p ant package
Also add this line before ssh localhost whoami:
echo "source ${IMPALA_HOME}/bin/impala-config-local.sh" >> ~/.bashrc
Change the build command in bin/bootstrap_development.sh to whatever you like:
${IMPALA_HOME}/buildall.sh -noclean -skiptests -build_shared_libs -format
Then run bin/bootstrap_development.sh.
You'll be prompted for some input. Just fill in the default values and it'll work.
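Putting the steps together (after making the edits to bin/bootstrap_system.sh and bin/bootstrap_development.sh described above), a session looks roughly like this; the host path is the one used above, so substitute your own checkout:
docker run --rm -it --privileged -v /home/amos/git/impala/:/root/Impala ubuntu:16.04
# inside the container:
apt-get update
apt-get install sudo
cd /root/Impala
./bin/bootstrap_development.sh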
How can I automate patching and re-building gems after running a gem/bundle update?
(I want to patch gem-ctags to use ripper-tags instead of ctags)
The following script achieves this, and has some smarts to work around patch returning a failure code when some or all of a patch has already been applied:
#!/bin/bash
# Usage: gem-patch
# Code updates: https://gist.github.com/HaleTom/275f28403828b9b9b93d313990fc94f4
# Features:
# Work around `patch` returning non-zero if some patch hunks are already applied
# Apply all patches in $patch_dir (in order) to their corresponding gem(s)
# Build a gem only after all patches have been applied
# Only build the gem if it was patched
# Robust error handling
patch_dir="$HOME/lib/gem-patch"
# Patches are assumed to be made with git patch
# Files are to be named gem-name or gem-name.patch-explanation
# Multiple patches are applied in filename order
set -e -u
shopt -s nullglob # Globs are '' when no files match a pattern
gems_dir="$(gem environment gemdir)/gems"
if ! compgen -G "$patch_dir/*" > /dev/null; then
    echo "Couldn't find any patches located in $patch_dir. Quitting." >&2
    exit 1
fi
# Save the current "gemname-1.2.3" so that when it changes to a new one
# (ie, all patches have been applied) it can be built only once
function build_prev_if_needed {
    if [[ ${prev_gem_ver:="${gem_ver:=''}"} != "$gem_ver" ]]; then
        # We've moved on to another gem, build the last one
        ( cd "$gems_dir/$prev_gem_ver" &&
            gem build "${prev_gem_ver%%-[-0-9.]*}.gemspec"
        )
    fi
    prev_gem_ver="$gem_ver"
}
for patch in "$patch_dir"/*; do
    gem_name=$(basename "${patch%%[.]*}") found_one=false
    # $gem_dir becomes "rails-5.0.0.1" from find at end of loop
    while read -d '' -r gem_dir; do
        found_one=true
        # Build the previously seen gem if we've moved on to a new one
        gem_ver=${gem_dir##$gems_dir/}
        echo -n "$gem_ver (patch $(basename "$patch")): "
        # If we could reverse the patch, then it has already been applied; skip it
        if patch --dry-run --reverse -d "$gem_dir" -fp1 --ignore-whitespace -i "$patch" >/dev/null 2>&1; then
            echo "skipping (already applied)"
            continue
        else # patch not yet applied
            echo "patching..."
            # Patch returns non-zero if some hunks have already been applied
            if ! patch -d "$gem_dir" -fsp1 --ignore-whitespace -i "$patch"; then
                # Check that the patch was fully applied by pretending to reverse it
                if patch --dry-run --reverse -d "$gem_dir" -fp1 --ignore-whitespace -i "$patch" >/dev/null 2>&1; then
                    echo "Ignoring failure: hunk(s) were already applied"
                else
                    echo "Patch failed for $gem_dir" >&2; exit 1;
                fi
            fi
            build_prev_if_needed
        fi
    done < <(find "$gems_dir" -maxdepth 1 -type d -regex ".*/$gem_name-[-0-9.]+" -print0)
    if [[ $found_one != true ]]; then
        echo "Fatal: Patch file '$(basename "$patch")': Couldn't find any gem sources named $gems_dir/$(basename "$patch")*" >&2; exit 1
    fi
done # $gem_dir is now blank
gem_ver=''
build_prev_if_needed
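To tie this back to the question, automating it is then just a matter of chaining the script onto the update (assuming it is saved as gem-patch on your PATH, and that bundler installs into the default gem directory the script scans):
gem update && gem-patch
bundle update && gem-patch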
If I ssh into my VPS as the deployment user and run bundle -v I get Bundler version 1.1.5 as expected.
If I run ssh deployment@123.123.123.123 bundle -v, then I see bash: bundle: command not found
Why isn't bundle found when running commands via ssh?
More Info
$ cat ~/.bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
if [ -d "${RBENV_ROOT}" ]; then
    export PATH="${RBENV_ROOT}/bin:${PATH}"
    eval "$(rbenv init -)"
fi
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
When you run:
ssh deployment@123.123.123.123
You get a login shell on the remote host, which means that your shell will run (for bash) .bash_profile or .profile or equivalent AS WELL AS your per-shell initialization file.
When you run:
ssh deployment@123.123.123.123 some_command
This does not start a login shell, so it only runs the per-shell initialization file (e.g., .bashrc).
The problem you've described typically means that something you need lives only in your .profile file (typically an environment variable setting, here most likely RBENV_ROOT, which guards the rbenv block in your .bashrc), so it is not set when a command is run over ssh without a login shell.
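Two hedged sketches of ways to deal with this (neither is from the original answer); the rbenv path is the common default install location, so adjust it if yours differs:
# Option 1: set RBENV_ROOT before the guard in ~/.bashrc so non-login shells find rbenv
export RBENV_ROOT="$HOME/.rbenv"   # assumed install location

# Option 2: force a login shell for the remote command so ~/.profile is read
ssh deployment@123.123.123.123 'bash -lc "bundle -v"'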