RuntimeWarning: You're running the worker with superuser privileges: this is absolutely not recommended - redis

I am using Django + Celery + Redis (celery==4.4.0). Locally everything works fine, but when I dockerize the app I get the warning above.
I am using the following commands both locally and inside the container:
**CMDs**
celery -A nrn worker -l info
docker run -d -p 6379:6379 redis
flower -A nrn --port=5555
Any help is highly appreciated
**settings.py**
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_BROKER_URL = os.environ.get('redis', 'redis://127.0.0.1:6379/')
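Since settings.py falls back to 127.0.0.1 when the redis environment variable is unset, a containerized worker can be pointed at the real Redis host at run time; a sketch, where the image tag nrn-app and the host redis-host are placeholders:
# "nrn-app" and "redis-host" are placeholders, not names from this project.
docker run -d --name worker \
    -e redis=redis://redis-host:6379/ \
    nrn-app celery -A nrn worker -l info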

Take a look at the documentation. It's a warning, though, not an error (see the code). Running Celery under root is a hard error only when you allow pickle serialization, which is not enabled by default (see here).
However, it is still best practice to run Celery with lower privileges. In Docker (with a Debian-based image), I choose to run Celery under nobody:nogroup. I use this Dockerfile:
FROM python:3.6
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
WORKDIR /srv/celery
COPY ./app app
COPY ./requirements.txt /tmp/requirements.txt
COPY ./celery.sh celery.sh
RUN pip install --no-cache-dir -r /tmp/requirements.txt
VOLUME ["/var/log/celery", "/var/run/celery"]
CMD ["./celery.sh"]
where celery.sh looks as follows:
#!/usr/bin/env bash

# Create the state/log directories and hand them over to the unprivileged user.
mkdir -p /var/run/celery /var/log/celery
chown -R nobody:nogroup /var/run/celery /var/log/celery

# exec replaces the shell with the worker; --uid/--gid drop privileges
# to nobody:nogroup so the worker does not run as root.
exec celery --app=app worker \
    --loglevel=INFO --logfile=/var/log/celery/worker-example.log \
    --statedb=/var/run/celery/worker-example#%h.state \
    --hostname=worker-example#%h \
    --queues=celery.example -O fair \
    --uid=nobody --gid=nogroup
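To confirm the worker actually dropped privileges, one quick check (the image/container name celery-example is just a placeholder):
docker build -t celery-example .
docker run -d --name celery-example celery-example
# docker top lists the container's processes with their users;
# the celery processes should be owned by "nobody", not root.
docker top celery-example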


How to make tensorflow-serving example work

I am trying out the TensorFlow Serving example from the tutorial page. At the third step,
# Start TensorFlow Serving container and open the REST API port
docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving &
I get the following error message:
2020-07-19 11:54:52.858203: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: /models/half_plus_two; Permission denied
This is continuously repeated. I have installed the demo model as mentioned in the tutorial.
git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
Can someone please help me figure out what I am missing? I am just starting off on the serving part.
Thanks
Krishnan
The problem could be with your -v parameter where you are binding the path.
Try the following instead (change the source parameter):
docker run -p 8501:8501 --mount type=bind,\
source=/path/to/yourmodels/,\
target=/models/half_plus_two/1 \
-e MODEL_NAME=half_plus_two -t tensorflow/serving
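For the tutorial's test data specifically, that would look something like this (assuming the serving repo's testdata ships the model under the version folder 00000123; with type=bind the source path must already exist):
docker run -p 8501:8501 --mount type=bind,\
source="$TESTDATA/saved_model_half_plus_two_cpu/00000123",\
target=/models/half_plus_two/1 \
-e MODEL_NAME=half_plus_two -t tensorflow/serving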

Running apache and cron in docker

I understand that there should be only one process running in the foreground in a Docker container. Is there any chance of running both Apache and cron together in the foreground? A quick search says there is something called supervisord to achieve this, but is there any other method using an ENTRYPOINT script or CMD?
Here is my Dockerfile
FROM alpine:edge
RUN apk update && apk upgrade
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add \
    bash \
    apache2 \
    php7-apache2 \
    php7 \
    curl \
    php7-mysqli \
    php7-pdo \
    php7-pdo_mysql
RUN cp /usr/bin/php7 /usr/bin/php
RUN mkdir /startup
COPY script.sh /startup
RUN chmod 755 /startup/script.sh
ENTRYPOINT ["/startup/script.sh"]
The content of script.sh is pasted below
#!/bin/bash
# start cron
/usr/sbin/crond -f -l 8
# start apache
httpd -D FOREGROUND
When a container is run from this image, only crond is running; most interestingly, when I kill crond, Apache starts and runs in the foreground.
I am using AWS ECS on EC2 to run the container, via a task definition and a service.
A Docker container keeps running only while its main process is running, so if you want to run two services inside one container, one of them has to run in the background.
I suggest getting rid of script.sh altogether and replacing it with a single CMD layer:
CMD ( crond -f -l 8 & ) && httpd -D FOREGROUND
The final Dockerfile is:
FROM alpine:edge
RUN apk update && apk upgrade
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk add \
    bash \
    apache2 \
    php7-apache2 \
    php7 \
    curl \
    php7-mysqli \
    php7-pdo \
    php7-pdo_mysql
RUN cp /usr/bin/php7 /usr/bin/php
CMD ( crond -f -l 8 & ) && httpd -D FOREGROUND
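To check that both services came up, something along these lines (image/container names are arbitrary):
docker build -t web-cron .
docker run -d -p 8080:80 --name web-cron web-cron
# Both crond and httpd should appear in the container's process list:
docker top web-cron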
The problem is that you're running crond -f without telling bash to run it in the background, so bash waits for crond to exit before continuing with the rest of the script. There are two solutions for this:
Remove the -f flag (that flag causes crond to run in the foreground).
Add & at the end of the crond line, after -l 8 (I wouldn't recommend this).
Also, I'd start apache with exec:
exec httpd -D FOREGROUND
Otherwise /startup/script.sh remains running even though it is no longer doing anything useful. exec tells bash to replace the current process with the command to execute.
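Putting those fixes together, a revised script.sh could look like this (a sketch that keeps the original log level):
#!/bin/bash
# Start cron in the background: without -f, crond daemonizes itself.
/usr/sbin/crond -l 8
# Replace the shell with Apache so it stays in the foreground and
# receives signals directly (e.g. on "docker stop").
exec httpd -D FOREGROUND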

How do I start plack application on boot

Does anyone know how to start a Plack application on boot?
The OS is Raspbian (Raspberry Pi).
I think I have to run it as a normal user (pi); that's how I start it manually.
I have tried adding something like this to rc.local, but without success:
su pi -c 'cd /path/to/app && plackup -d -p 5000 -r -R ./lib,./t -a ./bin/app.psgi &'
This will in turn be used by Apache, and the app is written in Dancer2, if that makes any difference.
On a Raspberry Pi I use systemd to create and start a service, in the file:
/etc/systemd/system/dancer.service
[Unit]
Description=NCI Starman Dancer App
After=syslog.target

[Service]
Type=forking
ExecStart=/usr/local/bin/starman --daemonize -l 127.0.0.1:3004 \
    --user myuser --group myuser --workers 8 -D -E production \
    --pid /var/run/dancer.pid -I/home/myuser/webservers/Dancer/lib \
    --error-log=/home/myuser/logs/dancer_error.log \
    /home/myuser/webservers/Dancer/bin/app.psgi
Restart=always

[Install]
WantedBy=multi-user.target
I then enable this with systemctl enable dancer.service,
or start it manually with systemctl start dancer.service.
Instead of starman, you can of course use plackup.
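If you do switch to plackup, note that it does not daemonize by default, so the unit would use Type=simple instead of Type=forking; a sketch (the binary and app paths are assumptions):
[Service]
Type=simple
ExecStart=/usr/local/bin/plackup -p 5000 -a /home/myuser/webservers/Dancer/bin/app.psgi
Restart=always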
The issue was that the Perl 5 environment variables were not initialised (they are set in .bashrc).
So the solution was to run the plackup command inside bash -i so that it reads .bashrc, or to set PERL5LIB before invoking plackup.
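In rc.local terms that could look like either of the following (the PERL5LIB path is a placeholder for wherever your Perl libraries live):
# Option 1: let an interactive shell read .bashrc first
su pi -c 'bash -i -c "cd /path/to/app && plackup -d -p 5000 -r -R ./lib,./t -a ./bin/app.psgi &"'
# Option 2: set PERL5LIB explicitly
su pi -c 'cd /path/to/app && PERL5LIB=/home/pi/perl5/lib/perl5 plackup -d -p 5000 -r -R ./lib,./t -a ./bin/app.psgi &'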
You may also want to use monit or supervisord to make sure your app is always running and gets restarted if it is killed for any reason, for example by the OOM killer.

Add Ruby SDK from Docker container as a remote SDK on RubyMine

RubyMine has options to add remote SDKs using Vagrant and SSH; however, I decided to go with Docker. I already created a Ruby container, but I don't know how to enable SSH access to it so RubyMine can set it as the remote SDK.
Is it possible?
I tried to follow this article, but the Ruby image doesn't have yum, and the epel-release package is for Fedora/RedHat.
Hey, are you using this official Ruby docker image?
If so, it's based on Debian and you'll have to use apt-get to install packages.
Here's a handy snippet for installing openssh-server and configuring a user in a Dockerfile:
FROM ruby:2.1.9

#======================
# Install OpenSSH server (sshd)
#======================
RUN apt-get update -qqy \
    && apt-get -qqy install openssh-server \
    && echo "PidFile /var/run/sshd.pid" >> /etc/ssh/sshd_config \
    && sed -i 's|session required pam_loginuid.so|session optional pam_loginuid.so|g' /etc/pam.d/sshd \
    && mkdir -p /var/run/sshd \
    && rm -rf /var/lib/apt/lists/*

# Add user rubymine with password rubymine and give ownership of rubymine home dir
RUN adduser --quiet --disabled-password --gecos "" rubymine \
    && echo "rubymine:rubymine" | chpasswd \
    && chown -R rubymine:rubymine /home/rubymine

EXPOSE 22

# Run sshd in the foreground so the container stays up
CMD ["/usr/sbin/sshd", "-D"]
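With that image built, you can expose sshd on a host port and point RubyMine at it (the tag and host port here are arbitrary):
docker build -t ruby-ssh .
docker run -d -p 2222:22 --name ruby-ssh ruby-ssh
# Then add a remote SDK in RubyMine over SSH:
#   host: localhost, port: 2222, user: rubymine, password: rubymine
ssh rubymine@localhost -p 2222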
I'm not sure of the exact configurations you can perform with RubyMine, but it's possible to open a tty in the container without the need for SSH:
# run it as a daemon (-it keeps the image's default irb process alive)
docker run -d -it --name=myruby ruby:2.1.9
# connect to it
docker exec -it myruby /bin/bash
UPDATE:
Try setting the DOCKER_HOST environment variable so the Docker client connects over a TCP port:
export DOCKER_HOST='tcp://localhost:2376'

Redis sentinel docker image / Dockerfile

I'm looking to deploy highly available Redis on a CoreOS cluster, and I need a Redis Sentinel docker image (i.e. a Dockerfile) that works. I've gathered enough information/expertise to create one (I think)... but my limited knowledge/experience with advanced networking is the only thing keeping me from building and sharing it.
Can someone who is an expert here help me develop a Redis Sentinel Dockerfile (none exists right now)? The Redis/Docker community would really benefit from it.
Here's the broader issue and context:
https://github.com/antirez/redis/pull/1908
I think the solution is right here specifically:
https://github.com/antirez/redis/pull/1908#issuecomment-54380876
Here's the Dockerfile I've been using... but if you read the thread above, you'll see my comments (joshula)... it lacks the networking fixes that mattsta is talking about. Note that because I'm using this on CoreOS, any config settings in sentinel.conf are set at run time via the command line (hence the ENTRYPOINT).
# Pull base image.
FROM dockerfile/ubuntu:latest

# Install Redis.
RUN \
    cd /tmp && \
    wget http://download.redis.io/redis-stable.tar.gz && \
    tar xvzf redis-stable.tar.gz && \
    cd redis-stable && \
    make && \
    make install && \
    cp -f src/redis-sentinel /usr/local/bin && \
    mkdir -p /etc/redis && \
    cp -f *.conf /etc/redis && \
    rm -rf /tmp/redis-stable* && \
    sed -i 's/^\(bind .*\)$/# \1/' /etc/redis/redis.conf && \
    sed -i 's/^\(daemonize .*\)$/# \1/' /etc/redis/redis.conf && \
    sed -i 's/^\(dir .*\)$/# \1\ndir \/data/' /etc/redis/redis.conf && \
    sed -i 's/^\(logfile .*\)$/# \1/' /etc/redis/redis.conf

# Define mountable directories.
VOLUME ["/data"]

# Define working directory.
WORKDIR /data

# Expose ports.
EXPOSE 26379

# Define default command (exec form, so that arguments passed to
# `docker run` are appended and reach redis-sentinel).
ENTRYPOINT ["redis-sentinel", "/etc/redis/sentinel.conf"]
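For example, with the exec-form ENTRYPOINT the run-time settings mentioned above can be appended to docker run (the image tag is arbitrary; --port is just an illustrative override):
docker build -t redis-sentinel .
# Everything after the image name is passed to redis-sentinel
# after the config file:
docker run -d -p 26379:26379 redis-sentinel --port 26379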
After a ton of work, I ended up figuring this out. Here's to making it simple for anyone else who wants to deploy a highly available Redis instance via Docker:
https://registry.hub.docker.com/u/joshula/redis-sentinel/
There is no need for a custom Sentinel image or for messing with the network. See my redis-ha-learning project using Spring Data Redis and the bitnami/redis and bitnami/redis-sentinel images. The Docker Compose file is here.
My code auto-detects the Sentinels based on the Docker Compose container names.
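For reference, the same pairing can be started directly with docker run; a minimal sketch for local experimentation (the env var names are from the Bitnami image docs as I recall them, so double-check them against the images; the empty password is for testing only):
docker network create redis-ha
docker run -d --name redis-master --network redis-ha \
    -e ALLOW_EMPTY_PASSWORD=yes bitnami/redis:latest
docker run -d --name redis-sentinel --network redis-ha \
    -e REDIS_MASTER_HOST=redis-master \
    -e ALLOW_EMPTY_PASSWORD=yes bitnami/redis-sentinel:latest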