Docker-compose not attaching redis: redis does not start

I have an issue with my docker-compose configuration that I cannot pinpoint: redis won't start.
My docker-compose.yml:
web:
  build: ./web
  links:
    - db
    - redis
  ports:
    - "8080:8080"
db:
  image: mysql
  ports:
    - "3307:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: bignibou_dev
redis:
  build: ./redis
  ports:
    - "63790:6379"
My ./web/Dockerfile:
FROM java:8
ADD ./bignibou-server-1.0.jar /app/bignibou-server-1.0.jar
ADD ./spring-cloud.properties /app/spring-cloud.properties
ENV SPRING_CLOUD_PROPERTIESFILE=/app/spring-cloud.properties
ENV SPRING_PROFILES_ACTIVE=cloud
ENV SPRING_CLOUD_APP_NAME=bignibou
ENV CLEARDB_DATABASE_URL=mysql://root:root@localhost:3307/bignibou_dev
ENV REDISCLOUD_URL=redis://dummy:dummy@localhost:63790
ENV DYNO=dummy
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "/app/bignibou-server-1.0.jar" ]
My ./redis/Dockerfile:
FROM redis
COPY redis.conf /usr/local/etc/redis/redis.conf
EXPOSE 6379
ENTRYPOINT [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
When I run sudo docker-compose up, redis is not started by Docker although mysql/db starts properly.
Can anyone please help?

Instead of localhost, use your Redis service name, which in your case is redis. Since the web container reaches Redis over the link rather than through the published host port, use the container port as well, so the connection URL becomes:
redis://dummy:dummy@redis:6379
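For completeness, here is a minimal sketch of how the two connection ENV lines in ./web/Dockerfile might look after that change; applying the same idea to the database URL (service name db, container port 3306) is an assumption on top of the answer above, not something it states:
ENV CLEARDB_DATABASE_URL=mysql://root:root@db:3306/bignibou_dev
ENV REDISCLOUD_URL=redis://dummy:dummy@redis:6379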

Related

How to set up a RabbitMQ service with GitHub Actions?

I am trying to set up GitHub Actions CI for an app that uses RabbitMQ.
RabbitMQ container is started using:
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - 5672:5672
But now I need to configure it with something like rabbitmqctl add_user user password.
How can that be done? Should I be using the rabbitmq container here at all?
As this is using the rabbitmq Docker image, you can configure user credentials by passing in the RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS environment variables.
rabbitmq:
  image: rabbitmq
  env:
    RABBITMQ_DEFAULT_USER: craiga
    RABBITMQ_DEFAULT_PASS: security_is_important
  ports:
    - 5672:5672
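The job's steps can then reach the broker on the runner with those credentials; for example, a connection URL of the following shape would be expected to work (the default vhost is assumed, since none is configured above):
amqp://craiga:security_is_important@localhost:5672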
If you have trouble connecting to RabbitMQ, try with a dynamic port.
Use this:
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      rabbitmq:
        image: rabbitmq:3.8
        env:
          RABBITMQ_DEFAULT_USER: guest
          RABBITMQ_DEFAULT_PASS: guest
        ports:
          - 5672
    steps:
      - name: Run Tests
        run: |
          python manage.py test
        env:
          RABBITMQ_HOST: 127.0.0.1
          RABBITMQ_PORT: ${{ job.services.rabbitmq.ports['5672'] }}

Running multiple docker-compose files with nginx reverse proxy

I asked a question here and got part of my problem solved, but I was advised to create another question because it started to get a bit lengthy in the comments.
I'm trying to use docker to run multiple PHP,MySQL & Apache based apps on my Mac, all of which would use different docker-compose.yml files (more details in the post I linked). I have quite a few repositories, some of which communicate with one another, and not all of them are the same PHP version. Because of this, I don't think it's wise for me to cram 20+ separate repositories into one single docker-compose.yml file. I'd like to have separate docker-compose.yml files for each repository and I want to be able to use an /etc/hosts entry for each app so that I don't have to specify the port. Ex: I would access 2 different repositories such as http://dockertest.com and http://dockertest2.com (using /etc/hosts entries), rather than having to specify the port like http://dockertest.com:8080 and http://dockertest.com:8081.
Using the accepted answer from my other post I was able to get one app running at a time (one docker-compose.yml file), but if I try to launch another with docker-compose up -d it fails because port 80 is already taken. How can I run multiple docker apps at the same time, each with its own docker-compose.yml file and without having to specify the port in the URL?
Here's a docker-compose.yml file for the app I made. In my /etc/hosts I have 127.0.0.1 dockertest.com
version: "3.3"
services:
php:
build: './php/'
networks:
- backend
volumes:
- ./public_html/:/var/www/html/
apache:
build: './apache/'
depends_on:
- php
- mysql
networks:
- frontend
- backend
volumes:
- ./public_html/:/var/www/html/
environment:
- VIRTUAL_HOST=dockertest.com
mysql:
image: mysql:5.6.40
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=rootpassword
nginx-proxy:
image: jwilder/nginx-proxy
networks:
- backend
ports:
- 80:80
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
networks:
frontend:
backend:
I would suggest extracting the nginx-proxy into a separate docker-compose.yml and creating a repository for the "reverse proxy" configuration with the following:
A file with extra contents to add to /etc/hosts
127.0.0.1 dockertest.com
127.0.0.1 anothertest.com
127.0.0.1 third-domain.net
And a docker-compose.yml that contains only the reverse proxy:
version: "3.3"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- 80:80
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
Next, as you already mentioned, create a docker-compose.yml for each of your repositories that acts as a web endpoint. You will need to add the VIRTUAL_HOST env var to the services that serve your applications (e.g. Apache).
The nginx-proxy container can run in "permanent mode", as it has a small footprint. This way whenever you start a new container with VIRTUAL_HOST env var, the configuration of nginx-proxy will be automatically updated to include the new local domain. (You will still have to update /etc/hosts with the new entry).
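A rough sketch of that workflow, where the directory names are only placeholders for your repositories:
# start the reverse proxy once and leave it running
cd nginx-proxy && docker-compose up -d
# bring each application stack up (or down) independently
cd ../dockertest && docker-compose up -d
cd ../anothertest && docker-compose down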
If you decide to use networks, your web endpoint containers will have to be in the same network as nginx-proxy, so your docker-compose files will have to be modified similar to this:
# nginx-proxy/docker-compose.yml
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - 80:80
    networks:
      - reverse-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
networks:
  reverse-proxy:

# service1/docker-compose.yml
version: "3.3"
services:
  php1:
    ...
    networks:
      - backend1
  apache1:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend1
    environment:
      - VIRTUAL_HOST=dockertest.com
  mysql1:
    ...
    networks:
      - backend1
networks:
  backend1:
  nginx-proxy_reverse-proxy:
    external: true

# service2/docker-compose.yml
version: "3.3"
services:
  php2:
    ...
    networks:
      - backend2
  apache2:
    ...
    networks:
      - nginx-proxy_reverse-proxy
      - backend2
    environment:
      - VIRTUAL_HOST=anothertest.com
  mysql2:
    ...
    networks:
      - backend2
networks:
  backend2:
  nginx-proxy_reverse-proxy:
    external: true
The reverse-proxy network that is created in nginx-proxy/docker-compose.yml is referred to as nginx-proxy_reverse-proxy in the other docker-compose files because whenever you define a network, its final name will be {{folder name}}_{{network name}}.
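If you are unsure what name Compose generated, you can list the Docker networks after bringing the proxy up and confirm it exists before starting the service stacks:
docker network ls | grep reverse-proxy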
If you want to have a look at a solution that relies on browser proxy extension instead of /etc/hosts, check out mitm-proxy-nginx-companion

Reverse proxy Docker container to two other Docker containers: how to run multiple instances on a single computer

In this project I have an Apache Docker container (called loadbalancer) which points to one of two other Apache Docker containers: if the path matches "/support*" the request goes to the support container, otherwise it goes to webapp. Currently, to achieve this I have hard-coded my Docker Compose network's subnet and each container's IPv4 address, and an Apache conf file simply points at those hard-coded IPs. This works great for local development environments.
However, it doesn't work for staging servers, which need to host multiple instances of the project: I can't spin up more than one instance of this docker-compose network due to the hard-coded subnet/IPv4 addresses. How can I achieve this load-balancer setup without hard-coding the subnet, so I can run multiple instances? Or is there a better way to host many copies on a single server, such as many vhosts in an Apache container? What would you suggest? I have no clue as to what best practice would be here.
loadbalancer.conf
<VirtualHost *:80>
    TimeOut -1
    ProxyPass "/support" "http://172.20.0.5/support"
    ProxyPassReverse "/support" "http://172.20.0.5/support"
    ProxyPass "/" "http://172.20.0.2/"
    ProxyPassReverse "/" "http://172.20.0.2/"
    ProxyPreserveHost On
    TimeOut -1
</VirtualHost>
docker-compose.yml
version: '3.7'
networks:
  pi-net:
    ipam:
      config:
        - subnet: 172.20.0.0/24
services:
  cli:
    container_name: cli
    build: ./docker/cli
    networks:
      pi-net:
        ipv4_address: 172.20.0.3
    volumes:
      - type: bind
        source: .
        target: /srv/www
      - type: bind
        source: $HOME/.gitconfig
        target: /home/developer/.gitconfig
    extra_hosts:
      - "pi.docker:172.20.0.2"
    user: developer
    stdin_open: true
    tty: true
    environment:
      GIT_PAGER: cat
  webapp:
    container_name: webapp
    build:
      context: ./docker/web-server
      args:
        - vhostsFileName=webapp.conf
    networks:
      pi-net:
        ipv4_address: 172.20.0.2
    ports:
      - 80
    volumes:
      - type: bind
        source: .
        target: /srv/www
    # depends on cli because cli entrypoint.sh is creating var/ files needed by httpd
    depends_on:
      - "cli"
  support:
    container_name: support
    build:
      context: ./docker/web-server
      args:
        - vhostsFileName=support.conf
    networks:
      pi-net:
        ipv4_address: 172.20.0.5
    ports:
      - 80
    volumes:
      - type: bind
        source: .
        target: /srv/www
    # depends on cli because cli entrypoint.sh is creating var/ files needed by httpd
    depends_on:
      - "cli"
  loadbalancer:
    container_name: loadbalancer
    build:
      context: ./docker/web-server
      args:
        - vhostsFileName=loadbalancer.conf
    networks:
      pi-net:
        ipv4_address: 172.20.0.6
    ports:
      - 80:80
  db:
    container_name: db
    build: ./docker/mysql
    networks:
      pi-net:
        ipv4_address: 172.20.0.4
    ports:
      - 3306:3306
    volumes:
      - db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: pi
      MYSQL_USER: root
      MYSQL_PASSWORD: root
    restart: always
volumes:
  db:
    driver: local
Docker provides an internal DNS service to resolve container names as host names, and Docker Compose provides a network for you. You should make two changes:
In your Apache configuration, replace the explicit IP addresses with the name of the corresponding service block in the docker-compose.yml: http://support/support, for example.
Delete all of the networks: and container_name: settings in the docker-compose.yml, since they're redundant and limit reuse of the file. (Docker will assign IP addresses for you and Docker Compose will pick container names, but there's nothing wrong with these defaults.)
(Many questions of this form also use the outdated links: functionality; it's safe to delete all of the links: blocks too.)
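Applied to the loadbalancer.conf above, the proxy rules would then look roughly like this (the host names are simply the service names from the docker-compose.yml, and Apache inside each container is assumed to listen on port 80):
<VirtualHost *:80>
    TimeOut -1
    ProxyPreserveHost On
    ProxyPass "/support" "http://support/support"
    ProxyPassReverse "/support" "http://support/support"
    ProxyPass "/" "http://webapp/"
    ProxyPassReverse "/" "http://webapp/"
</VirtualHost>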

Expose and publish a port with specified host port number inside Dockerfile

Suppose for example that I want to make an SSH host in Docker. I understand that I can EXPOSE 22 inside the Dockerfile. I also understand that I can use -p 22222:22 so I can SSH into that Docker container from another physical machine on my LAN on port 22222, as ssh my_username@docker_host_ip -p 22222. But suppose that I'm so lazy that I can't be bothered to docker run the container with the option -p 22222:22 every time. Is there a way that the option -p 22222:22 can be automated in a config file somewhere? In the Dockerfile, maybe?
You can use Docker Compose.
You can define the listening port in the docker-compose.yml file as below:
version: '2'
services:
  web:
    image: ubuntu
  ssh_service:
    build: .
    command: ssh ....
    volumes:
      - .:/code
    ports:
      - "22222:22"
    depends_on:
      - web
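With that compose file in place, a typical session might look like this (the username and host IP are the placeholders from the question):
# build and start the container, publishing host port 22222 to container port 22
docker-compose up -d
# then, from another machine on the LAN
ssh my_username@docker_host_ip -p 22222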

Deploy/run a Redis service using Ansible and Docker

I'm using the Ansible docker module to set up a Redis service (see the Ansible role below):
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        volumes_from:
          - redis-data
After provisioning, the redis-service container is up, but when I try to connect to Redis using redis-cli I get the following error:
vagrant@dev1:~$ redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
NOTE: redis-service seems up and running:
vagrant@dev1:~$ docker ps
CONTAINER ID        IMAGE       COMMAND                  CREATED             STATUS              PORTS       NAMES
3e8f27b14479        redis:3     "/entrypoint.sh redis"   12 minutes ago      Up 12 minutes       6379/tcp    redis-service
vagrant@dev1:~$ docker logs 3e8f27b14479
...
1:M 02 Sep 15:41:16.532 * The server is now ready to accept connections on port 6379
Do you have any idea of what might cause the problem?
I finally found the problem: the ports attribute must be set too (not only expose):
- hosts: redis
  roles:
    - role: angstwad.docker_ubuntu
      sudo: true
  tasks:
    - name: data container
      sudo: true
      docker:
        name: redis-data
        image: busybox
        state: started
        volumes:
          - /data/redis
    - name: redis container
      sudo: true
      docker:
        name: redis-service
        image: redis:3
        command: redis-server --appendonly yes
        state: started
        expose: 6379
        ports:
          - 6379:6379
        volumes_from:
          - redis-data
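After re-provisioning with the ports mapping in place, the connection test from the question should succeed, for example:
vagrant@dev1:~$ redis-cli -h 127.0.0.1 -p 6379 ping
PONG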