How to run Redis on Docker with a different configuration file?

I would like to set a password on my Redis server running in Docker. I have followed the instructions at https://registry.hub.docker.com/_/redis/:
1. I have created a folder with a Dockerfile containing:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
2. I have added a redis.conf file with:
requirepass thepassword
3. I built the image using:
docker build -t ouruser/redis .
4. I started the container:
docker run --name my-redis -p 0.0.0.0:6379:6379 -d ouruser/redis redis-server --appendonly yes
The Redis server does not have any password! I do not understand why.

The run command:
docker run --name my-redis -p 0.0.0.0:6379:6379 -d ouruser/redis redis-server --appendonly yes
overrides the CMD defined in the Dockerfile with redis-server --appendonly yes, so your conf file is being ignored. Just add the path to the conf file to your run command:
docker run --name my-redis -p 0.0.0.0:6379:6379 -d ouruser/redis redis-server /usr/local/etc/redis/redis.conf --appendonly yes
Alternatively, set up an entrypoint script or add --appendonly yes to the CMD instruction.
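For instance, a minimal sketch of the CMD route, based on the Dockerfile above (the extra flag is the only change):
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
# load the custom config and enable AOF persistence in one place
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf", "--appendonly", "yes" ]
With that image, a plain docker run --name my-redis -p 6379:6379 -d ouruser/redis is enough, and the requirepass setting from redis.conf is applied.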

Docker entrypoint initdb PERMISSION DENIED

I am getting a permission denied error when I run docker-compose up. Thanks a lot for your help.
I resolved this problem by adding the following to the Dockerfile after it copies the scripts to /docker-entrypoint-initdb.d/:
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
Example Dockerfile:
FROM mysql:latest
ENV MYSQL_DATABASE NAME_DATABASE
ENV MYSQL_ROOT_PASSWORD ***********
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
EXPOSE 3306
CMD ["mysqld", "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
The next step is to build the image:
docker build -t image-db:latest .
The next step is to create the container:
docker run -d -p 3306:3306 --name container-db image-db:latest
You should not override the postgres image entrypoint. It is designed to look for .sql files in the /docker-entrypoint-initdb.d/ directory (see the corresponding check in the image's entrypoint script).
You should just mount your .sql files into /docker-entrypoint-initdb.d/ and they should be processed on startup (but only if the database does not already exist).
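For illustration, a bind mount along these lines is usually all that is needed (the host path and password here are placeholders, not taken from the question):
docker run -d --name my-db -e POSTGRES_PASSWORD=changeme -v /path/to/sql-scripts:/docker-entrypoint-initdb.d -p 5432:5432 postgres:latest
The scripts are executed only on the first start, while the data directory is still empty.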
I had the same issue, but in my case it was caused by the Linux user. I am using root as the runner, so the problem happened because the mounted volume on the local machine did not have the right permissions. I ran chmod -R 777 scripts and it worked fine. Technically, you need to set permissions both on the local machine and in your container.
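A rough sketch of what that can look like, assuming the scripts live in ./sql-scripts on the host (the directory name, password, and image are only examples):
# on the host: make the scripts readable before mounting (777, as above, also works but is broader than necessary)
chmod -R 755 ./sql-scripts
# mount them so the database user inside the container can read them on first start
docker run -d --name container-db -e MYSQL_ROOT_PASSWORD=changeme -v "$PWD/sql-scripts":/docker-entrypoint-initdb.d mysql:latest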

rsync not finding local directory when sending through SSH on pipeline

We are using Bitbucket Pipelines to push to our remote server from the build that the pipeline produces.
This is a snippet of the bitbucket-pipelines.yml file:
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and the command does get executed once it has connected to the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine:
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server rather than in the Docker container from Bitbucket (or the files on my local machine when using $PWD).
If I try to run the command without the quotes, it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an accepted pattern for the pipe, but I can't use my environment variables in the .sh file.
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration will look roughly like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in Pipelines properly (here is a link to our docs on configuring SSH keys: https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html).
I've found another way around this that doesn't need a pipe: running rsync as a script step.
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the Atlassian image, as the step fails to find the rsync command otherwise. I found this info in a post on Atlassian Community.
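For context, those two lines might sit in bitbucket-pipelines.yml roughly like this (the step name is only illustrative, and the SSH key still has to be configured in the repository settings):
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        name: Deploy via rsync
        script:
          - rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER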
This seems to work pretty well for me
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: avoid using the sudo command in pipeline scripts.
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync
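In pipeline terms that just means installing rsync at the start of the step's script, before calling it (a sketch; the variable names follow the earlier example and the target path is a placeholder):
- step:
    script:
      - apt-get update && apt-get install -y rsync
      - rsync -azv ./project_path/ $SSH_USER@$SSH_HOST:/remote/path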

Apache Tomcat 8 not starting within a docker container

I am experimenting with Docker and am very new to it. I have been stuck at one point for a long time and am not finding a way through, hence this question...
Problem Statement:
I am trying to create an image from a Dockerfile containing an Apache Tomcat and lynx installation. Once done, I try to access Tomcat on port 8080 of the container, which is in turn forwarded to port 8082 of the host. But when running the image I never get Tomcat started in the container.
The Dockerfile:
FROM ubuntu:16.10
#Install Lynx
RUN apt-get update
RUN apt-get install -y lynx
#Install Curl
RUN apt-get install -y curl
#Install tools: jdk
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk wget
#Install apache tomcat
RUN groupadd tomcat
RUN useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat
RUN cd /tmp
RUN curl -O http://apache.mirrors.ionfish.org/tomcat/tomcat-8/v8.5.12/bin/apache-tomcat-8.5.12.tar.gz
RUN mkdir /opt/tomcat
RUN tar xzvf apache-tomcat-8*tar.gz -C /opt/tomcat --strip-components=1
RUN cd /opt/tomcat
RUN chgrp -R tomcat /opt/tomcat
RUN chmod -R g+r /opt/tomcat/conf
RUN chmod g+x /opt/tomcat/conf
RUN chown -R tomcat /opt/tomcat/webapps /opt/tomcat/work /opt/tomcat/temp /opt/tomcat/logs
RUN cd /opt/tomcat/bin
EXPOSE 8080
CMD /opt/tomcat/bin/catalina.sh run && tail -f /opt/tomcat/logs/catalina.out
Once the image was built, I tried running the container in the two ways below.
docker run -d -p 8082:8080 imageid tail -f /dev/null
With the above, the container is running but Tomcat is not started inside it, so it is not accessible from localhost:8082. Also, I do not see anything when I run docker logs longcontainerid.
docker run -d -p 8082:8080 imageid /path/to/catalina.sh start tail -f /dev/null
With the above, the container starts and then stops immediately (docker ps shows it is not running), so again it is not accessible from localhost:8082, even though I do see Tomcat starting when I run docker logs longcontainerid.
Can anyone please tell me where I am going wrong?
P.S. I searched a lot on the internet but could not get this working. Maybe there is some concept that I am not getting clearly.
Looking at the docker run command documentation, it states that any command passed to docker run will override the original CMD in your Dockerfile:
As the operator (the person running a container from the image), you can override that CMD instruction just by specifying a new COMMAND
1/ Then when you run:
docker run -d -p 8082:8080 imageid tail -f /dev/null
The container is run with the command tail -f /dev/null, so the original command starting Tomcat is overridden.
To resolve your problem, try to run:
docker run -d -p 8082:8080 imageid
and
docker logs -f containerId
to see if Tomcat started correctly.
2/ You should not use the start argument with catalina.sh. Have a look at this official Tomcat Dockerfile; the team uses:
CMD ["catalina.sh", "run"]
to start Tomcat (when you use start, Docker ends the container at the end of the shell script: Tomcat starts, but there is no foreground process keeping the container running).
3/ Finally, why don't you use the official tomcat image to build your container? You could just use the:
FROM tomcat:latest
directive at the beginning of your Dockerfile, and add your required elements (new files, webapp WAR files, settings) to the image.
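A minimal sketch of that approach (the WAR file name is only an example; the base image already declares CMD ["catalina.sh", "run"], so nothing else is needed to start Tomcat):
FROM tomcat:8.5
# the official image serves applications placed in /usr/local/tomcat/webapps
COPY ./mywebapp.war /usr/local/tomcat/webapps/
EXPOSE 8080
Built and run with docker run -d -p 8082:8080 imageid, the application would then be reachable at http://localhost:8082/mywebapp.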

Docker HTTPS access - ONLYOFFICE

I'm following the ONLYOFFICE Docker documentation (GitHub: ONLYOFFICE Docker HTTPS access) to get the ONLYOFFICE documentserver and communityserver running with HTTPS.
What I've tried:
1. I've created the cert files (.crt, .key, .pem) as mentioned in the documentation. After that I created a file named env.list in my home dir /home/jw/data/ with the following content:
SSL_CERTIFICATE_PATH=/opt/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/opt/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/opt/onlyoffice/Data/certs/dhparam.pem
SSL_VERIFY_CLIENT=true
2. After that I added the directory /home/jw/data/ to my $PATH environment variable:
PATH=$PATH:/home/jw/data/; export PATH
3. On the same shell I started the docker container like this:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
4. The documentserver is running fine. After that I've started the communityserver with:
sudo docker run -i -t -d --link onlyoffice-document-server:document_server --env-file /home/jw/data/env.list onlyoffice/communityserver
5. With the command docker ps -a I see both Docker containers running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f573111f2e5 onlyoffice/communityserver "/bin/sh -c 'bash -C " 29 seconds ago Up 28 seconds 80/tcp, 443/tcp, 5222/tcp lonely_mcnulty
23543300fa51 onlyoffice/documentserver "/bin/sh -c 'bash -C " 42 seconds ago Up 41 seconds 80/tcp, 0.0.0.0:443->443/tcp onlyoffice-document-server
But when I try to access https://localhost there is an error "Secure Connection Failed" in Firefox.
Did I miss something?
Okay got it:
I've changed the environment variables in env.list to:
SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt
SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key
SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem
After that I used the following command to run ONLY the documentserver:
sudo docker run -i -t -d --name onlyoffice-document-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/jw/data/env.list onlyoffice/documentserver
The ONLYOFFICE OnlineEditor API is now available over HTTPS:
https://localhost/OfficeWeb/apps/api/documents/api.js
If you want to use CommunityServer with HTTPS just change the run command above to:
sudo docker run -i -t -d --name onlyoffice-community-server -p 443:443 -v /opt/onlyoffice/Data:/var/www/onlyoffice/Data --env-file /home/<username>/env.list onlyoffice/communityserver
Thank you anyway!

Docker CentOS image does not auto start httpd

I'm trying to run a simple Docker image with Apache and a PHP program. It works fine if I run
docker run -t -i -p 80:80 my/httpd /bin/bash
then manually start Apache
service httpd start
however I can't get httpd to start automatically when running
docker run -d -p 80:80 my/httpd
Apache will start up, then the container exits. I have tried a bunch of different CMDs in my Dockerfile:
CMD /etc/init.d/httpd start
CMD ["service" "httpd" "start"]
CMD ["/bin/bash", "/etc/init.d/httpd start"]
ENTRYPOINT /etc/init.d/httpd CMD start
CMD ./start.sh
start.sh is
#!/bin/bash
/etc/init.d/httpd start
However, every time, the container exits right after Apache starts.
Am I missing something really obvious?
You need to run Apache (httpd) directly; you should not use the init.d script.
Two options:
run Apache in the foreground: /usr/sbin/apache2 -DFOREGROUND ... (or /usr/sbin/httpd on CentOS), or
start all services (including Apache configured to auto-start) by executing /sbin/init as the entrypoint (a rough sketch of this option follows below).
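For completeness, a very rough sketch of the second option on CentOS 7. This is an assumption-heavy setup: running systemd inside a container usually also needs extra run-time privileges such as --privileged and a cgroup mount, so the foreground option is normally the simpler choice.
FROM centos:7
RUN yum install -y httpd && systemctl enable httpd
# /sbin/init (systemd) then brings up all enabled services, including httpd
ENTRYPOINT ["/sbin/init"]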
Add this line at the bottom of your Dockerfile to run Apache in the foreground on CentOS:
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
A simple Dockerfile to run httpd on CentOS:
FROM centos:latest
RUN yum update -y
RUN yum install httpd -y
ENTRYPOINT ["/usr/sbin/httpd","-D","FOREGROUND"]
Commands for building the image and running a container:
Build:
docker build . -t chttpd:latest
Run a container using the new image:
docker container run -d -p 8000:80 chttpd:latest
At the end of the Dockerfile, insert the command below to start httpd:
# Start httpd
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
This works for me
ENTRYPOINT /usr/sbin/httpd -D start && /bin/bash