Traefik - Unable to expose redis docker containers with the same port for different domains

I'm trying to set up a Redis with docker-compose for different environments.
Therefore I need to expose two domains with traefik on the same port:
domain.com:6379
domain-dev.com:6379
I can't simply publish the same port on both containers, because they are running on the same server.
My docker-compose file (for domain-dev) looks like this:
version: '2'
services:
  redis:
    container_name: redis-signalr-dev
    image: redis
    volumes:
      - ./redis-signalr-data:/data
    restart: always
    labels:
      - traefik.enable=true
      - traefik.backend=redis-signalr-dev
      - traefik.frontend.rule=Host:domain-dev.com
      - traefik.port=6379
      - traefik.docker.network=traefik_default
      - traefik.frontend.entryPoints=redis
    networks:
      - traefik_default
volumes:
  redis-signalr-data:
networks:
  traefik_default:
    external: true
I also tried to configure traefik to use the following entrypoint:
--entrypoints='Name:redis Address::6379'
When connecting to "domain-dev.com:6379", a connection cannot be established.
Does anyone know a solution to this problem?

Traefik is a reverse proxy for HTTP, not a TCP load balancer. Traefik itself (usually) opens ports 80 and 443 for ingress and forwards incoming HTTP requests to the configured HTTP backends. The port you specify in your compose service labels is the container port that the traffic should be forwarded to.
So if you run a nodejs (http) server on port 3000, you would connect to http://yourdomain:80 and traefik would forward the requests to your nodejs container on port 3000. This means that specifying a port in a compose service's labels does not open that port on your host.
In your example, running redis with its custom protocol, traefik is not a solution, as traefik only does HTTP proxying. To expose redis on your host (if you really want to do that), just use regular docker port mappings and point your domains to your docker host. Done this way, there is no way to use the same port with different domains; just specify two different ports for your two instances, as sketched below. For HTTP this works only because traefik inspects the requests and routes based on the Host header.
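For example, a minimal compose sketch (service names and host ports are illustrative, not taken from the question) that publishes the two instances on different host ports could look like this:
version: '2'
services:
  redis-signalr:
    image: redis
    ports:
      # clients for the production instance connect to domain.com:6379
      - "6379:6379"
  redis-signalr-dev:
    image: redis
    ports:
      # clients for the dev instance connect to domain-dev.com:6380
      - "6380:6379"
Both domains can then point to the same host IP; only the port differs.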

Traefik 2.0 will have TCP support: https://github.com/containous/traefik/pull/4587
Until then you'd have to use NGINX or similar.
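Once Traefik 2.x is available, a TCP router for redis could be declared roughly like this (a sketch; the entrypoint and router names are illustrative). Note that routing plain, non-TLS TCP traffic by hostname is still not possible, so this does not solve the one-port-per-domain problem either:
# static configuration
--entryPoints.redis.address=:6379
# labels on the redis service
labels:
  - traefik.enable=true
  # HostSNI with a real hostname only works for TLS connections;
  # plain redis traffic needs the catch-all rule
  - traefik.tcp.routers.redis.rule=HostSNI(`*`)
  - traefik.tcp.routers.redis.entrypoints=redis
  - traefik.tcp.services.redis.loadbalancer.server.port=6379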

Related

Traefik entrypoint redirect to scheme and port

I'm running traefik in docker-compose with network_mode: host to get an accurate remote_ip. Ports 80 and 443 on my docker host are occupied, so traefik uses 5080 and 5443 as the web and websecure entry points. I've forwarded 5080/5443 to my router's 80/443, so my.domain.me routes to traefik. https://my.domain.me works correctly, but http://my.domain.me redirects to port 5443. How can I configure traefik to redirect to port 443?
version: '3.3'
services:
  traefik:
    image: traefik:v2.4
    # use host network for accurate remote_ip
    network_mode: host
    command: # CLI arguments
      - --providers.docker=true
      # ports 80 and 443 are used by another process.
      - --entryPoints.web.address=:5080
      - --entryPoints.websecure.address=:5443
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
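One commonly suggested fix (a sketch based on Traefik v2's redirection options, not verified against this exact setup): the redirection target can be given as a port instead of an entry point name, so the redirect uses the externally visible port 443 rather than the entry point's internal port 5443:
      - --entrypoints.web.http.redirections.entrypoint.to=:443
      - --entrypoints.web.http.redirections.entrypoint.scheme=https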

traefik v2 forwarding to external host. Non container host

I am looking for examples of traefik v2 forwarding to another host, such as a virtual machine (i.e. a non-container host).
Kind Regards,
Edward
That should look like this in the file-provider:
http:
  routers:
    ...
    ...
  services:
    somename:
      loadBalancer:
        servers:
          - url: http://yourserverip
The Traefik container has to be able to reach your server (test with curl or ping).
Further information: Traefik Docs - Routers
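For completeness, a slightly fuller sketch of such a dynamic configuration file, with an illustrative router filled in (the router name and domain are assumptions; only somename and the server URL come from the snippet above):
http:
  routers:
    somename-router:
      rule: "Host(`yourdomain.example`)"
      service: somename
  services:
    somename:
      loadBalancer:
        servers:
          - url: http://yourserverip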

asp.net core application docker swarm hosted client IP

I want to log my clients' request IP addresses, and I have an asp.net core docker service on Linux.
Right now we always get the docker network IP address!
How can I get my clients' real IP address?
You can get the real IP address of the clients if you change the port publishing mode to host. Below is an example of how to do this:
traefikedge:
  image: traefik:1.4.3-alpine
  ports:
    - target: 80
      published: 80 # for redirect to HTTPS
      protocol: tcp
      mode: host # to bypass ingress mesh, to preserve client ip
    - target: 443
      published: 443
      protocol: tcp
      mode: host # to bypass ingress mesh, to preserve client ip
There is an open issue here about this.

How to add a simple routing rule to traefik

I'm trying to get started with traefik in the hopes I can replace my current reverse proxy (pound) with traefik.
How do I add a simple routing rule so that mysubdomain.mydomain.com routes to http://192.168.x.x:8080?
I'm following the quickstart here. I created the following docker compose yml file and started it with docker-compose up -d reverse-proxy
version: '3'
services:
  reverse-proxy:
    container_name: reverse-proxy
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Træfik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
Great, the container is running, but now what? How would I go about adding the simple routing rule?
If my backend web service isn't one of these supported backends, will it not work? Surely traefik can simply route http requests to any http backend, right?
For example, my backend web service is the web interface of my Synology NAS at home. Traefik should be able to route to this, right? If so, how?
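Since this question uses Traefik 1.x, one possible approach (a sketch, not verified against this setup) is the file provider with a rules file, which lets Traefik route to any plain HTTP backend such as a NAS web interface:
# rules.toml, loaded e.g. with --file.filename=rules.toml
[backends]
  [backends.nas]
    [backends.nas.servers.server1]
      url = "http://192.168.x.x:8080"
[frontends]
  [frontends.nas]
    backend = "nas"
    [frontends.nas.routes.route1]
      rule = "Host:mysubdomain.mydomain.com"
The backend and frontend names here (nas, server1, route1) are arbitrary labels.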

browse postgres in a docker container

I am using docker-compose to work across multiple docker containers; these containers are mostly individual django rest framework applications. I have downloaded all the containers and am able to build the whole application from them.
Each container has a postgres db running. I now want to browse the db using a UI tool. I know pgadmin can do the job here, but how can I configure pgadmin to show the postgres database of any of these containers?
It should be possible to expose your database port to your local network as well.
Normally you connect your application containers internally to the database container. In that case there is no need to declare a ports section for the database in your compose file, but if you add that entry you bind your database to your local host in addition.
Once you have exposed the postgres port to a host port, it should be no problem to connect with the GUI tool of your choice.
version: '3.2'
services:
  httpd:
    image: "oth/d_apache2.4:0.2"
    ports:
      # container port 80 of the webserver to localhost 80
      - "80:80"
  keycloak:
    # keycloak uses keycloak_db
    image: "jboss/keycloak-postgres:3.2.1.Final"
    environment:
      # internal network reference to db container
      - POSTGRES_PORT_5432_TCP_ADDR=keycloak_db
      - POSTGRES_PORT_5432_TCP_PORT=5432
  keycloak_db:
    image: "postgres:alpine"
    ports:
      # container port 5432 to localhost 5432
      # the port also stays reachable inside the stack
      - "5432:5432"
Make sure that the port of the postgres container is mapped to the host system. The default postgres port is 5432. You can do that with the ports directive in your docker-compose.yml. Each host port can only be mapped once, so your config file would look like:
services:
  postgres_1:
    ports:
      - "49000:5432"
    [...]
  postgres_2:
    ports:
      - "49001:5432"
    [...]
After that you should be able to access the desired database with the IP of your docker host and the above specified port.
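With the mapping above, a connection attempt from outside the containers could then look like this (host, user and password depend on your setup):
psql -h <docker-host-ip> -p 49000 -U postgres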
If you still encounter problems connecting with a client like pgadmin, check the following configuration files inside your container.
Is there anything blocking your connection attempt? Is your docker host behind a firewall?
postgresql.conf, under the section "Connections and Authentication" (a short sketch follows below):
listen_addresses
port
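A minimal sketch of those two settings (the official postgres image accepts connections on all interfaces by default, so usually only the host port mapping matters):
listen_addresses = '*'   # accept connections on all interfaces
port = 5432              # default postgres port inside the container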
Check your pg_hba.conf, which controls client authentication.
For debugging purposes you can set it to the following (never do this in production):
host all all all trust