How to change the RabbitMQ default guest user/password
docker-compose.yaml
version: "3.8"
services:
  rabbitmq3:
    container_name: "rabbitmq"
    image: rabbitmq:3.8-management-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=myuser
      - RABBITMQ_DEFAULT_PASS=mypassword
    ports:
      # AMQP protocol port
      - '5672:5672'
      # HTTP management UI
      - '15672:15672'
The RabbitMQ container keeps getting restarted after updating the username and password in the docker-compose file. I have also changed the default username and password in rabbitmq.conf, but the issue remains the same.
I have set the following in settings.py:
amqp://password:user#localhost:5672/
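For reference, the AMQP URI format puts the username first and uses @ before the host: amqp://<user>:<password>@<host>:<port>/<vhost>. A minimal settings.py sketch matching the compose file above (this assumes a Celery-style broker setting; the variable name is illustrative, not from the original question):

# settings.py (sketch) — credentials must match RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS above
CELERY_BROKER_URL = "amqp://myuser:mypassword@localhost:5672/"  # user:password@host:port/vhost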
Related
I am trying to deploy a Node.js app integrated with Redis on a Jenkins server. The application is working fine, but Redis on the Jenkins server does not seem right: it does not appear to be serving data from the cache on the deployment server, although it works fine locally. In the Node application I integrated Redis to optimize API response time, but I am still getting API responses in the same time as without Redis. Can you please suggest why this is happening on the deployment server?
Do I need to do some other configuration on the Jenkins server for Redis?
This is my Docker Compose file:
services:
  redis:
    image: redis:6.2-alpine
    container_name: redis
    restart: unless-stopped
    network_mode: bridge
    expose:
      - 6379
  server:
    build: .
    container_name: qa
    restart: unless-stopped
    network_mode: bridge
    command: npm start
    ports:
      - 8081:8081
    volumes:
      - ./:/data
    links:
      - redis
Redis connection:
import { createClient } from 'redis';
const client = createClient({ url: 'redis://localhost:6379'});
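A likely cause, assuming the Node app runs in the server container above: inside a container, localhost points at the container itself, not at the Redis container, so the client should use the linked service name instead. A minimal sketch (node-redis v4 also needs an explicit connect call):

import { createClient } from 'redis';

// 'redis' resolves to the Redis container via the links/bridge setup in the compose file above
const client = createClient({ url: 'redis://redis:6379' });
client.on('error', (err) => console.error('Redis client error', err));
await client.connect(); // node-redis v4+ requires an explicit connect()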
I am considering replacing Apache with Traefik for my web project (Kestrel / .NET Core). After reading the documentation, a few things remain unclear to me regarding Traefik:
1/ Does Traefik automatically handle Let's Encrypt certificate renewal, or does it need to be done manually or via an external script? The docs say this is performed when adding a new host or restarting, but what happens after 3 months of Traefik running without any restart or new host added?
2/ When a Docker backend becomes unreachable, how is it possible to serve a custom static HTML page? I can see how to set a specific error page in the documentation, but not how to redirect traffic to it when a given backend becomes unavailable.
3/ When a Docker backend needs to be updated, are there any steps that need to be performed on Traefik prior to performing the Docker stop/restart?
4/ It seems I can't get two Docker backends running at the same time; see the configuration file below. If I uncomment the second backend (api.mydomain.io), the first one is no longer reachable. Am I missing something here?
version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --docker # Enables the web UI and tells Træfik to listen to docker
    ports:
      - "80:80" # The HTTP port
      - "443:443" # The HTTPS port
      - "8080:8080" # The Web UI (enabled by --api)
    networks:
      - proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/traefik.toml:/etc/traefik/traefik.toml
      - $PWD/acme.json:/acme.json
      - /root/mydomain_prod/cert/:/certs/
      - /root/mydomain_prod/503.html:/503.html
    container_name: traefik-reverse-proxy

  ##############################
  # Front - www.mydomain.io
  ##############################
  mydomain-front:
    image: mydomain-front
    labels:
      - traefik.enable=true
      - traefik.backend=mydomain-front
      - traefik.frontend.rule=Host:traefik.mydomain.io
      - traefik.port=8084
    networks:
      - internal
      - proxy
    container_name: mydomain-front

  ##############################
  # API - api.mydomain.io
  # Note: If I uncomment this one, then www.mydomain.io won't work anymore
  ##############################
  #mydomain-api:
  #  image: mydomain-api
  #  labels:
  #    - traefik.enable=true
  #    - traefik.backend=mydomain-api
  #    - traefik.frontend.rule=Host:api.mydomain.io
  #    - traefik.port=8082
  #  networks:
  #    - internal
  #    - proxy
  #  container_name: mydomain-api
Many thanks,
Flo
1/ Traefik can handle the Let's Encrypt certificate renewal. Just remember to create a volume to store the acme.json file. When the certificate expires, Traefik will do the renewal without asking.
2/ I don't know if it's possible. If you find a solution, please share it.
3/ When you need to update a Docker container, just update it. Traefik will be triggered by that change and will update its own configuration.
4/ You can have two backends running at the same time. Below is a docker-compose.yml configuration:
version: '3'
services:
  two-backend-service:
    restart: always
    image: ……..
    labels:
      - traefik.enable=true
      - traefik.service1.frontend.rule=Host:service1.exemple.com
      - traefik.service1.frontend.passHostHeader=true
      - traefik.service1.port=8082
      - traefik.Service2.backend=service2
      - traefik.Service2.frontend.rule=Host:service2.exemple.com
      - traefik.Service2.frontend.passHostHeader=true
      - traefik.Service2.port=8081

  traefik:
    build:
      context: ./traefik
      dockerfile: Dockerfile
    restart: always
    ports:
      - 80:80
      - 443:443
    labels:
      - traefik.enable=false
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_letsencrypt:/etc/traefik/acme/

volumes:
  traefik_letsencrypt:
    driver: local
I'm having a hard time trying to configure one Redis container for all my applications using Traefik. This is my configuration:
1 - Docker compose for Traefik and Redis:
version: '2'
services:
  proxy:
    container_name: traefik
    image: traefik:1.3.6-alpine
    command: --docker
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.toml:/traefik.toml
    networks:
      - proxy
    labels:
      - traefik.frontend.rule=Host:monitor.company.dev
      - traefik.port=8080

  redis:
    container_name: main_redis
    image: redis:3.2
    restart: always
    volumes:
      - ./data/redis:/data
    networks:
      - proxy
    labels:
      - traefik.backend=main-redis
      - traefik.default.protocol=http
      - traefik.frontend.rule=Host:main-redis.company.dev
      - traefik.docker.network=proxy
      - traefik.port=6379

networks:
  proxy:
    external: true
2 - Docker compose for my PHP Application.
version: '2'
services:
  ...
  php:
    container_name: myapp_php
    build: ./docker/php # php:7.1-fpm base image
    networks:
      - internal
      - proxy
    labels:
      - traefik.enable=false
      - traefik.docker.network=proxy
    expose:
      - 9000

networks:
  proxy:
    external: true
  internal:
    external: false
I tried to connect my PHP application to main-redis.company.dev on both ports 6379 and 80, but I get a Redis::connect(): connect() failed: Connection refused message.
I also changed the following in my redis.conf:
Commented out the bind 127.0.0.1 line
Changed protected-mode to no
My Docker containers are on the same network, so I think it should work. Does anyone know why I am having this problem?
2022 update to @djeeg's answer
For some time now you can use TCP mode for your routers. To do that you need to define the labels with tcp instead of http:
labels:
  - "traefik.enable=true"
  - "traefik.tcp.routers.redis.rule=HostSNI(`redis.example.com`)"
  - "traefik.tcp.routers.redis.entrypoints=redis"            # the 6379 entrypoint
  - "traefik.tcp.routers.redis.tls.certresolver=myresolver"  # Let's Encrypt resolver
  - "traefik.tcp.routers.redis.service=redis"
  - "traefik.tcp.services.redis.loadbalancer.server.port=6379"
Once you have that working, when you try to connect (assuming you are using TLS) you will get the error Error: Protocol error, got "H" as reply type byte.
To prevent this you need to do two things:
Allow a TLS connection in the connection string
Set up SNI for your DNS name, or provide a public certificate file via cert or cacert
redis-cli -u redis://redis.example.com:6379 --tls --sni redis.example.com
First off, remove the traefik labels from your redis service definition; Traefik is currently (Nov 2017) an HTTP proxy, so you can't expose the endpoint like that.
See here:
https://github.com/containous/traefik/issues/10
https://github.com/containous/traefik/issues/1611
Then, to connect the php service to the redis service: it looks like you are trying to do that within the same Docker host (rather than externally).
Instead of main-redis.company.dev:6379, it should be one of these:
redis:6379
main_redis:6379
%PROJECT_NAME%_redis:6379
depending on how you are deploying the containers.
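For example, with the compose files above (both containers joined to the external proxy network and the Redis container named main_redis), a phpredis connection would look roughly like this sketch:

// Sketch using the phpredis extension; 'main_redis' is the container_name from the compose file above
$redis = new Redis();
$redis->connect('main_redis', 6379);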
I am running the Drone server and the Drone agent on the same instance, and I am trying to connect using the private IP of the instance.
If I curl the IP with the port, I get a proper HTML page.
But in the Drone agent logs I get this continuously:
drone-agent_1 | INFO: 2017/10/03 14:02:37 transport: http2Client.notifyError got notified that the client transport was broken unexpected EOF.
Since it is all on the same instance, it should work, and the Drone server should be configured for gRPC as well.
Two Docker Compose files, one for the agent and one for the server:
version: '2'
services:
  drone-agent:
    image: drone/agent:0.8
    command: agent
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=172.30.1.169:9456
      - DRONE_SECRET=secret
Docker Compose file for the server:
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 9456:8000
      - 8502:9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=false
      - DRONE_HOST=https://subdomain.somehost.com:9876/
      - DRONE_BITBUCKET=true
      - DRONE_BITBUCKET_CLIENT=secretc
      - DRONE_BITBUCKET_SECRET=secretb
      - DRONE_SECRET=secret
      - DRONE_ADMIN=user1
In your example you expose the Drone server gRPC endpoint at 8502:9000, but you provide the agent with port 9456. Providing the agent with the correct port should resolve this issue for you:
-DRONE_SERVER=172.30.1.169:9456
+DRONE_SERVER=172.30.1.169:8502
I have a microservices application which has two services and a RabbitMQ instance used as a message queue for communication between them. Now I want to deploy them with Docker. I have the following code in the docker-compose.yml file:
version: "3"
services:
  rabbitmq:
    build: ./Rabbit
    hostname: "rabbitmq"
    container_name: "rabbitmq"
    environment:
      RABBITMQ_ERLANG_COOKIE: "cookie"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "pass"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "15672:15672"
      - "5672:5672"
    # labels:
    #   NAME: "rabbit1"
    volumes:
      - "/opt/rabbitmq:/var/lib/rabbitmq"

  service1:
    build: ./service1
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "8181:80"
    depends_on:
      - rabbitmq
    links:
      - rabbitmq
    networks:
      - webnet
So here I build the RabbitMQ image in a container and then link this container to the service1 container. Since service1 is an ASP.NET Core Web API, I use the following setup to connect to the message queue:
// Establish the connection
var factory = new ConnectionFactory
{
    HostName = "rabbitmq",
    Port = 5672,
    UserName = "user",
    Password = "pass",
    VirtualHost = "/",
    AutomaticRecoveryEnabled = true,
    NetworkRecoveryInterval = TimeSpan.FromSeconds(15)
};
But when I try to run docker-compose up, I receive the following error message:
Unhandled Exception:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
 ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed
 ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: No such device or address
Maybe I have a mistake in the HostName but I am not sure how to correct it.
There are two problems that need to be fixed:
The two services do not belong to the same network, so you need to add the rabbitmq service to the webnet network (or create a new network for the two services).
RabbitMQ may take some time to become fully available (i.e. to start listening on port 5672), so you need to make the service1 service wait for the rabbitmq service; see this question about that issue. A sketch of both changes follows below.
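A minimal sketch of both fixes in the compose file. The healthcheck assumes the ./Rabbit image is based on the official rabbitmq image (which ships rabbitmq-diagnostics), and the depends_on condition assumes a Compose version that supports conditions (the old version "3" file format dropped them, but current Docker Compose accepts them again):

services:
  rabbitmq:
    # ... as before ...
    networks:
      - webnet                        # join the same network as service1
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  service1:
    # ... as before ...
    depends_on:
      rabbitmq:
        condition: service_healthy    # wait until the broker is actually accepting connections
    networks:
      - webnet

networks:
  webnet: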