How do I start a Plack application on boot - Apache

Does anyone know how to start a Plack application on boot?
The OS is Raspbian (Raspberry Pi).
I think I have to run it as a normal user (pi); that's how I start it manually.
I have tried adding something like this to rc.local, but without success:
su pi -c 'cd /path/to/app && plackup -d -p 5000 -r -R ./lib,./t -a ./bin/app.psgi &'
This will in turn be used by Apache, and the app is written in Dancer2, if that makes any difference.

On a Raspberry Pi I use systemd to create and start a service, in the file /etc/systemd/system/dancer.service:
[Unit]
Description=NCI Starman Dancer App
After=syslog.target
[Service]
Type=forking
ExecStart=/usr/local/bin/starman --daemonize -l 127.0.0.1:3004 \
--user myuser --group myuser --workers 8 -D -E production \
--pid /var/run/dancer.pid -I/home/myuser/webservers/Dancer/lib \
--error-log=/home/myuser/logs/dancer_error.log \
/home/myuser/webservers/Dancer/bin/app.psgi
Restart=always
[Install]
WantedBy=multi-user.target
And then I enable this with systemctl enable dancer.service
Or start it manually with systemctl start dancer.service.
Instead of Starman, you can of course use plackup.
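For a plackup-based unit, one option is a simple (non-forking) service. This is only a rough sketch; the plackup path, user, working directory, and port are placeholders you would adapt to your setup:
[Unit]
Description=Dancer2 app via plackup
After=network.target
[Service]
Type=simple
User=pi
WorkingDirectory=/path/to/app
ExecStart=/usr/bin/plackup -p 5000 ./bin/app.psgi
Restart=always
[Install]
WantedBy=multi-user.target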

The issue was that the Perl 5 environment variables were not initialised (they are set in .bashrc).
So the solution was either to run the plackup command inside bash -i, so that it reads .bashrc, or to set PERL5LIB before invoking plackup.
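Concretely, either variant of the rc.local line below should work; the PERL5LIB path here is an assumption (point it at wherever your local::lib or site modules actually live):
# set PERL5LIB explicitly before invoking plackup
su pi -c 'export PERL5LIB=/home/pi/perl5/lib/perl5; cd /path/to/app && plackup -d -p 5000 -r -R ./lib,./t -a ./bin/app.psgi &'
# or run the command through an interactive bash so it reads ~/.bashrc first
su pi -c 'bash -i -c "cd /path/to/app && plackup -d -p 5000 -r -R ./lib,./t -a ./bin/app.psgi &"'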

You may also want to use monit or supervisord to make sure your app is always running and gets restarted if it is killed for any reason, for example by the OOM killer.
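For example, a rough monit stanza for the systemd unit above might look like this; the pidfile path comes from the --pid option in the unit, and the rest is only a sketch:
check process dancer with pidfile /var/run/dancer.pid
  start program = "/bin/systemctl start dancer.service"
  stop program  = "/bin/systemctl stop dancer.service"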


PID recv: short read in CRIU

I am receiving a "PID recv: short read" error while using lazy-pages migration with CRIU.
At the source, I run the following commands:
memhog -r1000 64m
cd /tmp/dump
sudo -H -E criu dump -t $(pidof memhog) -D /tmp/dump --lazy-pages --address 10.237.23.102 --port 1234 --shell-job --display-stats -vvvv -o d.log
Then, in a separate terminal on the source machine itself:
scp -r /tmp/dump/ dst:/tmp/
Now, on the destination machine I start the daemon:
cd /tmp/dump
criu lazy-pages --page-server --address $(gethostip -d src) --port 1234 --display-stats -vvvvv
And finally, the restore command:
cd /tmp/dump
criu restore -D /tmp/dump/ --shell-job --lazy-pages -vvvv --display-stats -o restore.log
The error is thrown by the lazy server daemon on the destination machine.
Furthermore, it works fine for the memhog binary installed from the numactl package. However, it does not if I build memhog from source.
Any suggestions for solving this will be appreciated.
Update: solved, see the answer below.
Found the issue:
I was building memhog separately on the two machines, so the build-ids did not match. Solution: build it on one machine and then just scp it over to the other machine.
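As a quick sanity check, you can compare the GNU build-id of the binary on both machines before dumping and restoring; readelf ships with binutils, and the path to memhog below is whatever you built:
# print the .note.gnu.build-id of the locally built memhog; the hashes must match on both hosts
readelf -n ./memhog | grep -i "build id"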

RuntimeWarning: You're running the worker with superuser privileges: this is absolutely not recommended

I am using Django + Celery + Redis (celery==4.4.0). Locally it works fine, but when I dockerize it I get the above warning.
I am using the following commands to run it locally as well as inside the container.
Commands:
celery -A nrn worker -l info
docker run -d -p 6379:6379 redis
flower -A nrn --port=5555
Any help is highly appreciated.
settings.py:
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_BROKER_URL = os.environ.get('redis', 'redis://127.0.0.1:6379/')
Take a look at the Celery documentation. It's a warning, though, not an error. Running Celery under root is a hard error only when you allow pickle serialization, which is not enabled by default.
However, it's still best practice to run Celery with lower privileges. In Docker (with a Debian-based image), I choose to run Celery as nobody:nogroup, using this Dockerfile:
FROM python:3.6
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1
WORKDIR /srv/celery
COPY ./app app
COPY ./requirements.txt /tmp/requirements.txt
COPY ./celery.sh celery.sh
RUN pip install --no-cache-dir \
-r /tmp/requirements.txt
VOLUME ["/var/log/celery", "/var/run/celery"]
CMD ["./celery.sh"]
where celery.sh looks as follows:
#!/usr/bin/env bash
mkdir -p /var/run/celery /var/log/celery
chown -R nobody:nogroup /var/run/celery /var/log/celery
exec celery --app=app worker \
--loglevel=INFO --logfile=/var/log/celery/worker-example.log \
--statedb=/var/run/celery/worker-example#%h.state \
--hostname=worker-example#%h \
--queues=celery.example -O fair \
--uid=nobody --gid=nogroup
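For completeness, one possible way to build and run this setup; the image tag, network name, and redis URL are assumptions, and the redis environment variable matches the os.environ.get('redis', ...) lookup in the settings.py above:
docker build -t celery-worker .
docker network create celery-net
docker run -d --name redis --network celery-net redis
docker run -d --name worker --network celery-net -e redis=redis://redis:6379/ celery-worker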

How to keep a Windows Container running?

I need to keep my Windows Container up so I can run further commands on it using docker exec.
On Linux, I'd start it to run either sleep infinity, or tail -f /dev/null. Alternatively, I could borrow pause.c from Kubernetes.
What does this look like on Windows?
Using ping -t localhost will do it.
A full run command would be:
docker run -d --name YourContainer mcr.microsoft.com/windows/nanoserver:1809 ping -t localhost
Note: Make sure 1809 matches your own Windows version, which you can check via [WIN]+[R] -> winver.
You should then be able to step into the running container instance with the name YourContainer:
docker exec -it YourContainer cmd
Kubernetes on Windows used to use ping:
cmd /c ping -t localhost
This prints a lot of unnecessary output, so a good improvement is to discard it:
cmd /c ping -t localhost > NUL
What Kubernetes does now is to run a custom pauseloop.exe binary.
In late 2022, the current home for wincat/pauseloop is https://github.com/kubernetes/kubernetes/tree/master/build/pause/windows/wincat. The move was implemented in https://github.com/kubernetes-sigs/sig-windows-tools/pull/270.
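If you would rather bake the ping approach into an image instead of passing it on docker run, a minimal Dockerfile sketch could look like this (shell form, so cmd handles the redirection; adjust the tag to your Windows version):
FROM mcr.microsoft.com/windows/nanoserver:1809
CMD ping -t localhost > NUL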

Running apache and cron in docker

I understand there should be only one process running in the foreground in a Docker container. Is there any way to run both Apache and cron together in the foreground? A quick search says there is something called supervisord to achieve this, but is there any other method, using an ENTRYPOINT script or CMD?
Here is my Dockerfile:
FROM alpine:edge
RUN apk update && apk upgrade
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add \
bash \
apache2 \
php7-apache2 \
php7 \
curl \
php7-mysqli \
php7-pdo \
php7-pdo_mysql
RUN cp /usr/bin/php7 /usr/bin/php
RUN mkdir /startup
COPY script.sh /startup
RUN chmod 755 /startup/script.sh
ENTRYPOINT ["/startup/script.sh"]
The content of script.sh is pasted below:
#!/bin/bash
# start cron
/usr/sbin/crond -f -l 8
# start apache
httpd -D FOREGROUND
When a container is run from this image, only crond is running, and most interestingly, when I kill crond, Apache starts and runs in the foreground.
I am using AWS ECS (EC2 launch type) to run the container, with a task definition and a service.
A Docker container keeps running only while its main process is running. So if you want to run two services inside a Docker container, one of them has to run in the background.
I suggest getting rid of script.sh altogether and replacing it with a single CMD layer:
CMD ( crond -f -l 8 & ) && httpd -D FOREGROUND
The final Dockerfile is:
FROM alpine:edge
RUN apk update && apk upgrade
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add \
bash \
apache2 \
php7-apache2 \
php7 \
curl \
php7-mysqli \
php7-pdo \
php7-pdo_mysql
RUN cp /usr/bin/php7 /usr/bin/php
CMD ( crond -f -l 8 & ) && httpd -D FOREGROUND
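To try it out, something along these lines should work; the image tag and host port are arbitrary, and Apache listens on port 80 inside the container by default:
docker build -t apache-cron .
docker run -d --name web -p 8080:80 apache-cron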
The problem is that you're running crond -f without telling bash to run it in the background, so bash waits for crond to exit before continuing with the script. There are two solutions for this:
Remove the -f flag (that flag makes crond run in the foreground).
Add & at the end of the crond line, after -l 8 (I wouldn't recommend this).
Also, I'd start apache with exec:
exec httpd -D FOREGROUND
Otherwise /startup/script.sh will keep running even though it's no longer doing anything useful. exec tells bash to replace the current process with the command being executed.
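Putting both suggestions together, a revised script.sh would look roughly like this:
#!/bin/bash
# start cron; without -f, crond daemonizes itself and runs in the background
/usr/sbin/crond -l 8
# replace this shell with Apache so httpd becomes the container's foreground process
exec httpd -D FOREGROUND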

Within a Docker image, "Slapd stop" succeeds but slapd is still running

I'm trying to create an OpenLDAP Docker image with a custom schema, and I would like to have a working LDAP service before modifying it.
I installed slapd and ldap-utils in my Docker image by putting this in the Dockerfile:
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y slapd ldap-utils
At this point, when I open a bash shell in a new container, service slapd status and /etc/init.d/slapd status both output "slapd is not running". Indeed, policy-rc.d denies the execution of start right after the installation of a package.
Well, no problem: service slapd start returns OK and starts the slapd service. I can search in my LDAP, modify it, everything is fine.
The problem comes when I want to restart the slapd service. service slapd restart, service slapd force-reload, or service slapd stop followed by service slapd start all fail at the "start" command. The "stop" command returns OK. However, this time service slapd status returns "slapd is running". Also, I can still search in my LDAP.
To find out a bit more about what was happening, I tried to start the slapd service with the debug option:
slapd -h 'ldap:/// ldapi:///' -g openldap -u openldap -F /etc/ldap/slapd.d -d stats
Unfortunately, this hangs at "slapd starting" and never finishes.
Thanks for any help :)
I have the same issue. When inside the container, the only way I found to stop slapd is pkill slapd.
Nevertheless, this does not work from a Dockerfile with RUN pkill slapd.
I just encountered the same issue in a docker image based on minideb (Debian Buster).
When executing service slapd stop, the stop_slapd shell function of the /etc/init.d/slapd script is invoked, which in turn executes this command:
start-stop-daemon --stop --quiet --oknodo --retry TERM/10 \
--pidfile "/var/run/slapd/slapd.pid" \
--exec /usr/sbin/slapd 2>&1
When you execute this command in a root shell and omit the --quiet flag, the following error is shown:
root@4d1b74229670:/# start-stop-daemon --stop --oknodo --retry TERM/10 --pidfile /var/run/slapd/slapd.pid --exec /usr/sbin/slapd
No /usr/sbin/slapd found running; none killed.
The /var/run/slapd/slapd.pid file exists, the /usr/sbin/slapd executable path is correct too and the process is visible like this:
root@4d1b74229670:/# ps -efww | grep slapd
openldap 764 1 0 20:13 ? 00:00:00 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d
root 779 1 0 20:22 pts/0 00:00:00 grep slapd
To work around this, I changed the stop_slapd function in /etc/init.d/slapd and replaced --exec $SLAPD with --name slapd:
stop_slapd() {
reason="`start-stop-daemon --stop --quiet --oknodo --retry TERM/10 \
--pidfile "$SLAPD_PIDFILE" \
--name slapd 2>&1`"
}
I applied the change using sed:
sed -i 's/--exec $SLAPD 2/--name slapd 2/' /etc/init.d/slapd
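If you want to bake the workaround into the image itself, the same sed line can go into a Dockerfile layer (a sketch, assuming a Debian-based image where slapd is already installed; the single quotes keep $SLAPD literal for sed):
RUN sed -i 's/--exec $SLAPD 2/--name slapd 2/' /etc/init.d/slapd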
This is another way to fix the issue:
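# read slapd's pid from its pidfile and send SIGTERM for a graceful shutdown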
SLAPD_PID=$(cat /run/slapd/slapd.pid)
kill -15 $SLAPD_PID
while [ -e /proc/$SLAPD_PID ]; do sleep 0.1; done # wait until slapd is terminated