I use DDEV as a development environment for a TYPO3 project. I want to have a Redis server available (for caching).
How can I achieve that?
In order to have Redis available for TYPO3 you need two things:
Redis server
To create a Redis server for your project, just create a file .ddev/docker-compose.redis.yaml with the following content:
# ddev redis recipe file
#
version: '3.6'
services:
  redis:
    container_name: ddev-${DDEV_SITENAME}-redis
    image: redis:4
    restart: always
    ports:
      - 6379
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=6379
    volumes: []
  web:
    links:
      - redis:$DDEV_HOSTNAME
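Optionally, and purely as an assumption on my part (it is not part of the original recipe), you can add a healthcheck under the redis service above so that docker ps reports when Redis is ready to accept connections:

    healthcheck:                       # nests under services -> redis in the file above
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5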
Configure your application to use Redis
Use redis as the host and 6379 as the port.
FYI! DDEV added PHP-Redis to the web container, as of DDEV v1.1.0 on 15 Aug.
https://www.drud.com/ddev-local/ddev-v1-1-0/
"More services! We’ve added PHP-Redis to the web container. We heard repeatedly that not having Redis was a major hurdle for people who wanted to use DDEV. We hope this helps!"
You can get redis with ddev get drud/ddev-redis. There's also ddev get drud/ddev-redis-commander for use with the ddev redis service.
https://ddev.readthedocs.io/en/latest/users/extend/additional-services/
Related
I have several VMs running gitlab-runner, and I'm using GitLab CI to deploy microservices onto those VMs. Now I want to monitor those VMs with Prometheus and Grafana, so I need to set up node-exporter/cadvisor etc. on each of them.
My idea is to use GitLab CI to define a common job for those VMs.
I have already written the docker-compose.worker.yaml and .gitlab-ci.yml.
version: '3.8'
services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
  cadvisor:
    image: google/cadvisor
    container_name: cadvisor
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    ports:
      - "8080:8080"
deploy-workers:
  tags:
    - worker
  stage: deploy-workers
  script:
    - docker-compose -f docker-compose.worker.yaml pull
    - docker-compose -f docker-compose.worker.yaml down
    - docker-compose -f docker-compose.worker.yaml up -d
Then I registered the runner on all my VMs with the 'worker' tag.
However, only one worker job is triggered during CI.
I have about 20 VMs to cover.
Does anyone have suggestions?
This is probably not a good way to be deploying your services onto virtual machines. You don't want to just launch your GitLab CI job and then hope that it results in what you want. Managing each VM separately is going to be both tedious and error-prone.
What you probably want is a declarative way to define/describe your infrastructure, the state that infrastructure should be configured in, and the applications running on it.
For example, you could:
Use a proper orchestrator, such as Docker Swarm or Kubernetes, AND/OR
Use a provisioning tool, such as Ansible, connected to each VM, or, if your VMs run in the cloud, Terraform or similar (see the Ansible sketch below).
In both these examples, you can leverage these tools from a single GitLab CI job and deploy changes to all of your VMs/clusters at once.
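For illustration only, here is a minimal sketch of the Ansible route. The inventory group (monitoring_vms), the deploy directory, and the playbook name are all illustrative assumptions, not from the original answer:

# deploy-monitoring.yml -- hypothetical playbook; names and paths are illustrative
- hosts: monitoring_vms            # inventory group containing all ~20 VMs
  become: true
  tasks:
    - name: Ensure the deploy directory exists
      ansible.builtin.file:
        path: /opt/monitoring
        state: directory

    - name: Copy the compose file to each VM
      ansible.builtin.copy:
        src: docker-compose.worker.yaml
        dest: /opt/monitoring/docker-compose.worker.yaml

    - name: Start or update the monitoring stack
      ansible.builtin.command:
        cmd: docker-compose -f docker-compose.worker.yaml up -d
        chdir: /opt/monitoring

Running this playbook from one GitLab CI job (for example with ansible-playbook -i inventory deploy-monitoring.yml) touches every host in the group at once, so you don't need a runner per VM.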
Using docker swarm
For example, instead of running your docker-compose on 20 hosts, you can join all 20 VMs to the same docker swarm.
Then in your compose file, you create a deploy key specifying how many replicas you want across the swarm, including numbers per node. Or use mode: global to simply specify you want one container of the service per host in your cluster.
services:
  node-exporter:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
    # ...
  cadvisor:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
Then running docker stack deploy from any manager node will do the right thing to all your swarm worker nodes. Docker swarm will also automatically restart your containers if they fail.
See deploy reference.
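As a rough sketch (the job name, stage, runner tag, and stack name are illustrative assumptions, not from the original answer), the whole swarm can then be updated from a single CI job that runs against a manager node:

deploy-monitoring:
  stage: deploy
  tags:
    - swarm-manager              # illustrative: a runner that can reach a swarm manager node
  script:
    - docker stack deploy --compose-file docker-compose.worker.yaml monitoring

Because the swarm itself schedules the containers onto every node, this single job replaces the per-VM deploy-workers jobs.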
Using swarm (or any orchestrator) has a lot of other benefits, too, like health checking, rollbacks, etc. that will make your deployment process a lot safer and more maintainable.
If you must use a job per host
Set a unique tag for each runner on each VM. Then use a parallel matrix with a job set to each tag.
job:
  parallel:
    matrix:
      - RUNNER: [vm1, vm2, vm3, vm4, vm5] # etc.
  tags:
    - $RUNNER
See run a matrix of parallel jobs
You want to make sure the tag is unique and covers all your hosts, or you may run the same job on the same host multiple times.
This will let you do what you were seeking to do. However, it's not an advisable practice. As a simple example: there's no guarantee that your docker-compose up will succeed and you may just take down your entire cluster all at once.
I use Docker version 1.13.1, build 092cba3, on Windows 10.
I have a custom Jenkins container that builds code from GitHub into a volume.
The volume is /var/jenkins_home/workspace/myjob.
I also have an Apache container that I want to share the volume with.
The docker-compose.yml file is:
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: jenkins:v1
    environment:
      # headless Java; start Jenkins unlocked (setup wizard disabled)
      JAVA_OPTS: "-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"
    ports:
      # - "50000:50000" # jenkins nodes
      - "8686:8080" # jenkins UI
    volumes:
      - myjob_volume:/var/jenkins_home/workspace/myjob
  apache:
    container_name: httpd
    image: httpd:2.2
    volumes_from:
      - jenkins
volumes:
  myjob_volume:
I basically want the Jenkins container to fetch the code into a volume, which is then visible to the Apache (httpd) container. That way, every change I make to the code in my IDE and push to GitHub becomes visible in the Apache container. The volume is created in the Apache container, but when I successfully build the code in the Jenkins container, it does not appear in the volume in Apache.
EDIT:
After launching the 2 containers with docker-compose up -d,
I enable their volumes from Kitematic,
I change the volume path for Apache to point to the Jenkins volume,
and when I build the code from Jenkins, Apache sees it as I would like.
So... how should I do the same from the docker-compose file?
You are using volumes_from, which "copies" the mount definition from the container you specify. As a result, myjob_volume will be mounted at /var/jenkins_home/workspace/myjob inside the Apache container as well. The official Apache image from Docker Hub (https://hub.docker.com/_/httpd/) uses /usr/local/apache2/htdocs/ as the webroot.
To mount the volume at that location instead, update the docker-compose file to look like this:
version: '2'
services:
  jenkins:
    container_name: jenkins
    image: jenkins:v1
    environment:
      # headless Java; start Jenkins unlocked (setup wizard disabled)
      JAVA_OPTS: "-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"
    ports:
      # - "50000:50000" # jenkins nodes
      - "8686:8080" # jenkins UI
    volumes:
      - myjob_volume:/var/jenkins_home/workspace/myjob
  apache:
    container_name: httpd
    image: httpd:2.2
    volumes:
      - myjob_volume:/usr/local/apache2/htdocs/
volumes:
  myjob_volume:
What I want to do is take a dump.rdb from a production server and use it in my development environment, which is defined by a very simple compose file.
For simplicity, assume that my app is the same as this compose example from the Docker docs for redis and flask, so the docker-compose.yml looks like:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    depends_on:
      - redis
  redis:
    image: redis
This persists Redis data between restarts, but you cannot access the Redis files, as there is no volume mounted for redis in the docker-compose.yml.
So I change my compose file to mount a volume for redis. I also want to force Redis to persist data, and the official redis image docs say that happens if I use 'appendonly':
redis:
  image: redis
  command: redis-server --appendonly yes
  volumes:
    - ./redis:/data
If I do this, my data are persisted, as they were in the original example, and I can now see a dump.rdb and an appendonly.aof in the ./redis path. The problem is, if I want to restore from a dump.rdb I need to turn off appendonly (for example, see Digital Ocean's how-to-back-up-and-restore-your-redis-data-on-ubuntu-14-04), and without append-only I cannot see how to get the compose file to write to the volume.
How can I produce a docker compose that will persist redis in a volume where I can switch the dump.rdb files, and therefore insert the production snapshot into my development environment?
Update
The following compose works, but be patient when testing, as the creation of the dump.rdb is not instant (so it can look like it failed). Also, the official redis image docs imply you have to use appendonly when you don't:
redis:
  image: redis
  volumes:
    - ./redis:/data
The appendonly part is just to make sure that you don't lose data, but since you already have the dump.rdb from your server you don't need to worry about that: you can either remove the appendonly flag or remove 'command' entirely, since it will then fall back to the image default, which is just 'redis-server'.
I have a similar setup and it writes/loads the dump.rdb files fine.
I have an Apache server installed and running 3 PHP websites. I also developed a mobile API in Django, running in 4 Docker containers (django, redis, elasticsearch, rabbitmq, using fig.sh).
Because Apache is already running, I want to keep it and configure it to serve the web app running in the Docker containers. If it were a plain Django app I would configure mod_wsgi for that, but it is not, so I don't know how.
Any ideas? Thanks a lot.
Note: I am using Docker 1.5 and Apache 2.2 on CentOS 6.6.
Edit:
Apache contains 3 <VirtualHost *:80> blocks for the 3 websites' domains:
1 website1.com
2 website2.com
3 website3.com
and the API I want to deploy runs on api.website1.com, a subdomain of website1.com.
fig.yml
db:
  image: mysql
  volumes:
    - /var/lib/mysql:/var/lib/mysql
  volumes_from:
    - mysql_data
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: 123
  # command:
redis:
  image: redis:3
elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
    - "9300:9300"
rabbitmq:
  image: tutum/rabbitmq
  environment:
    - RABBITMQ_PASS=123456
  ports:
    - "5672:5672" # we forward this port because it's useful for debugging
    - "15672:15672" # here, we can access rabbitmq management plugin
web:
  build: .
  command: python3 manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db:db
    - elasticsearch:elasticsearch
    - rabbitmq:rabbit
    - redis:redis
# container with redis worker
worker:
  build: .
  command:
  volumes:
    - .:/code/mobile_api
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis
For more information about the general issues around proxying Apache to backend Python web sites which use mod_wsgi, see:
http://blog.dscpl.com.au/2015/06/proxying-to-python-web-application.html
My Redis container is defined as a standard image in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379"
I guess it's using standard settings, like binding Redis to localhost.
I need to bind it to 0.0.0.0. Is there any way to add a local redis.conf file to change the binding and have docker-compose use it?
Thanks for any trick...
Yes. Just mount your redis.conf over the default with a volume:
redis:
  image: redis
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379"
Alternatively, create a new image based on the redis image with your conf file copied in. Full instructions are at: https://registry.hub.docker.com/_/redis/
However, the redis image does bind to 0.0.0.0 by default. To access it from the host, you need to use the port that Docker has mapped to the host for you, which you can find with docker ps or the docker port command; you can then access it at localhost:32678, where 32678 is the mapped port. Alternatively, you can specify a specific port to map to in the docker-compose.yml.
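For example, a fixed mapping would look like this (a minimal sketch; the choice of 6379 as the host port is mine, not something the answer prescribes):

redis:
  image: redis
  ports:
    - "6379:6379" # host:container -- reachable from the host at localhost:6379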
As you seem to be new to Docker, this might all make a bit more sense if you start by using raw Docker commands rather than starting with Compose.
Old question, but if someone still wants to do that, it is possible with volumes and command:
command: redis-server /usr/local/etc/redis/redis.conf
volumes:
  - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
Unfortunately with Docker, things become a little tricky when it comes to the Redis configuration file, and the answer voted as best (I'm sure by people that didn't actually test it) DOESN'T work.
But what DOES work, fast and without hassle, is this:
command: redis-server --bind redis-container-name --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
You can pass all the options you want in the command section of the compose file by prefixing each with "--", followed by its value.
Never forget to set a password, and if possible close port 6379.
Thank me later.
PS: If you noticed, in the command I didn't use the typical 127.0.0.1 but the Redis container name instead. This is because Docker assigns IP addresses internally via its embedded DNS server. In other words, this bind address becomes dynamic, hence adding an extra layer of security.
If your Redis container is called "redis" and you execute the command docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' redis (to verify the running container's internal IP address), then as far as Docker is concerned, the command given in the compose file will be translated internally to something like: redis-server --bind 172.19.0.5 --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
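Put into a compose service block (a sketch reusing the placeholder names from the command above), that looks like:

redis:
  image: redis
  container_name: redis-container-name # the name referenced by --bind
  command: redis-server --bind redis-container-name --requirepass some-long-password --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes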
Based on David's answer, but in a more "Docker Compose" way:
redis:
  image: redis:alpine
  command: redis-server --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
That way, you include the .conf file from the docker-compose.yml file and don't need a custom image.
Mount your config at /usr/local/etc/redis/redis.conf, and
add a command to execute redis-server with your config:
redis:
  image: redis:7.0.4-alpine
  restart: unless-stopped
  volumes:
    - ./redis.conf:/usr/local/etc/redis/redis.conf
  command: redis-server /usr/local/etc/redis/redis.conf
  ########################################
  # or use this command if the mount does not work
  ########################################
  # command: >
  #   redis-server --bind 127.0.0.1
  #   --appendonly no
  #   --save ""
  #   --protected-mode yes
It is an old question, but I have a solution that seems elegant and means you don't have to execute commands every time ;).
1 Create your Dockerfile like this:
#/bin/redis/Dockerfile
FROM redis
CMD ["redis-server", "--include /usr/local/etc/redis/redis.conf"]
What we are doing is telling the server to include that file in the Redis configuration. The settings you put there will override the defaults Redis has.
2 Create your docker-compose
redisall:
  build:
    context: ./bin/redis
  container_name: 'redisAll'
  restart: unless-stopped
  ports:
    - "6379:6379"
  volumes:
    - ./config/redis:/usr/local/etc/redis
3 Create your configuration file; it must use the same name that the Dockerfile references (redis.conf):
# config/redis/redis.conf
requirepass some-long-password
appendonly yes
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1

# ...plus any other directives you need; what you put here overrides the
# container's default settings.
I had the same problem when using Redis in a Docker environment: Redis could not save data to disk in dump.rdb.
The problem was that Redis could not read the configuration in redis.conf; I solved it by passing the required configuration with the command in docker-compose, as below:
redis19:
  image: redis:5.0
  restart: always
  container_name: redis19
  hostname: redis19
  command: redis-server --requirepass some-secret --stop-writes-on-bgsave-error no --save 900 1 --save 300 10 --save 60 10000
  volumes:
    - $PWD/redis/redis_data:/data
    - $PWD/redis/redis.conf:/usr/local/etc/redis/redis.conf
    - /etc/localtime:/etc/localtime:ro
and it works fine.
I think it will be helpful to share the working code from my local setup:
redis:
  container_name: redis
  hostname: redis
  image: redis
  command: >
    --include /usr/local/etc/redis/redis.conf
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"