Is there an equivalent to ProxyPassReverse for Apache in Traefik?

I have set up Traefik to work in Docker Swarm mode. I have deployed Portainer into the cluster with the following command:
docker service create \
--label "traefik.port=9000" \
--label "traefik.docker.network=traefik-net" \
--label "traefik.frontend.rule=Host:`hostname -f`;PathPrefixStrip:/portainer" \
--label "traefik.backend=portainer" \
--network traefik-net \
--constraint "node.role == manager" \
-p 9000:9000 \
--mount "type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock" \
--name portainer \
portainer/portainer
As can be seen, I have configured Traefik, through the use of labels, to proxy requests for /portainer to the Portainer service. However, the links served by Portainer are relative to /, as it does not know it is being proxied, so the application does not work: Traefik does not know how to route each link.
I am trying to avoid having to change the deployments of services to work with Traefik, as I want it to be transparent. To that end, is it possible to get Traefik to rewrite the links from the service, like ProxyPassReverse does for Apache?
I know that Traefik now sets the X-Forwarded-Prefix header, but I am not sure how to get things like Portainer to use it out of the box, or indeed other services installed from the Docker Store, for example.

My mistake, this is working. I was omitting the trailing / from the request. When I add this, it all works.
So now I call:
http://dummy.localhost/portainer/
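A quick way to see the difference from the command line (a sketch based on the behaviour described above):
# Without the trailing slash the request was not routed as expected:
curl -I http://dummy.localhost/portainer
# With it, Traefik strips the prefix and Portainer serves the UI:
curl -I http://dummy.localhost/portainer/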

Related

Restart pod depending on health check

I am using Azure Kubernetes service, I found sometimes I'm getting failing health checks to SQL Server, then my API is responding to any request with code 400.
In this case, a simple pod restart usually helps; I thought that liveness / readyness probes will manage that in such scenario, but it's not.
Any ideas how may i automatize restarts on pods if this happened again?
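One thing to note: a liveness probe only triggers a restart when the probed endpoint itself starts failing, so it helps here only if the health endpoint reflects the SQL dependency. A minimal sketch (the image name and /healthz endpoint are hypothetical):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: myregistry/my-api        # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz              # should return non-200 while SQL is unreachable
        port: 80
      periodSeconds: 10
      failureThreshold: 3           # kubelet restarts the container after 3 failures
EOF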
Monitor and restart unhealthy docker containers. This functionality was proposed for inclusion with the addition of HEALTHCHECK, but didn't make the cut. This container is a stand-in until there is native support for --exit-on-unhealthy: https://github.com/docker/docker/pull/22719
A sample docker run command is:
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
Then configure which containers to watch:
a) Apply the label autoheal=true to your container to have it watched, or
b) set ENV AUTOHEAL_CONTAINER_LABEL=all to watch all running containers, or
c) set ENV AUTOHEAL_CONTAINER_LABEL to an existing label name whose value is true.
Refer to the official documentation at https://hub.docker.com/r/willfarrell/autoheal/ for more details.
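For option (a), a minimal sketch (the health check command is hypothetical): give the container a HEALTHCHECK and the autoheal=true label so the watcher restarts it once Docker marks it unhealthy:
docker run -d \
  --label autoheal=true \
  --health-cmd="curl -f http://localhost/ || exit 1" \
  --health-interval=30s \
  --health-retries=3 \
  nginx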

Static script files for Angular 5 stop working with Traefik proxy on Docker Swarm

I have an nginx container which serves an Angular app on port 80 for all domains.
When I start the container in Docker Swarm as a service and bind it to port 80, the application runs fine.
But when I tried to use the Traefik reverse proxy, only the HTML loads and none of the scripts load, with these errors:
Refused to execute script from '' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
Refused to execute script from 'http://app.local/runtime.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
app.local/:1 Refused to execute script from 'http://app.local/polyfills.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
app.local/:1 Refused to execute script from 'http://app.local/styles.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
app.local/:1 Refused to execute script from 'http://app.local/vendor.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
app.local/:1 Refused to execute script from 'http://app.local/main.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
For Traefik I followed the steps from https://jmkhael.io/traefik-as-a-dynamic-reverse-proxy-for-docker-swarm/
docker network create --driver=overlay traefik-net
docker service create \
--name traefik \
--constraint 'node.role==manager' \
--publish 80:80 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik \
--docker \
--docker.swarmmode \
--docker.domain=jmkhael.io \
--docker.watch \
--logLevel=DEBUG \
--web
and for the app service
docker service create \
--name web \
--label 'traefik.port=80' \
--label traefik.frontend.rule="app.local; Path: /" \
--network traefik-net \
app
Is this the proper way, or am I missing some other configuration?
Many thanks in advance.
The app started working after removing the
Path: /
part in the web container label.
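For reference, the working service definition then looks something like this, keeping only the host rule (app.local stands in for your domain):
docker service create \
  --name web \
  --label traefik.port=80 \
  --label traefik.frontend.rule="Host:app.local" \
  --network traefik-net \
  app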

Wrong password with basic auth and Docker features from Traefik 1.3.0

I tried release 1.3.0 of Traefik but I was unable to make basic auth work. Here is what I did; can you point out my mistake(s), if any, please?
I'm working on a MacBook Pro, with Docker 17.03.1-ce, build c6d412e.
I followed the Docker Swarm mode tutorial from the Traefik documentation, with one node on my localhost (no docker-machine):
docker swarm init
docker network create --driver=overlay traefik-net
docker service create \
--name traefik \
--constraint=node.role==manager \
--publish 80:80 --publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik \
--docker \
--docker.swarmmode \
--docker.domain=traefik \
--docker.watch \
--web
docker service create \
--name whoami \
--label traefik.port=80 \
--network traefik-net \
emilevauge/whoami
http://localhost:8080 gives me the Traefik dashboard with whoami added as frontend and backend
curl -H Host:whoami.traefik http://localhost gives the expected result
Hostname: d0ad61fcffa6 ...
I deleted and recreated whoami with the basic auth label from the documentation:
docker service rm whoami
docker service create \
--name whoami \
--label traefik.port=80 \
--label traefik.frontend.auth.basic=test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/,test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0 \
--network traefik-net \
emilevauge/whoami
http://localhost:8080 gives me the Traefik dashboard with whoami added as frontend and backend
curl -H Host:whoami.traefik http://localhost gives me 401 as expected
curl -H Host:whoami.traefik -u test:test http://localhost gives me 401 which is not expected
curl -H Host:whoami.traefik -u test2:test2 http://localhost gives me 401 which is not expected
Any idea why basic auth doesn't work in my case?
Regards
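One thing worth ruling out (a common pitfall with these labels, though not confirmed as the cause here): when the auth.basic label is unquoted or double-quoted, the shell expands the $apr1$... segments of the htpasswd hashes before Docker sees them, mangling the stored hashes. Single-quoting the label value preserves them:
docker service create \
  --name whoami \
  --label traefik.port=80 \
  --label 'traefik.frontend.auth.basic=test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/,test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0' \
  --network traefik-net \
  emilevauge/whoami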

Azure ACS - Kubernetes inter-pod communication

I've made an ACS instance.
az acs create --orchestrator-type=kubernetes \
--resource-group $group \
--name $k8s_name \
--dns-prefix $kubernetes_server \
--generate-ssh-keys
az acs kubernetes get-credentials --resource-group $group --name $k8s_name
I ran helm init and it provisioned the tiller pod fine. I then ran helm install stable/redis and got a redis deployment up and running (seemingly).
I can kubectl exec -it into the redis pod and see that it's binding on 0.0.0.0; I can log in with redis-cli -h localhost and redis-cli -h <pod_ip>, but not redis-cli -h <service_ip> (from kubectl get svc).
If I run up another pod (which is how I ran into this issue), I can ping redis.default and the DNS resolves to the correct service IP, but I get no response. When I telnet <service_ip> 6379 or redis-cli -h <service_ip>, it hangs indefinitely.
I'm at a bit of a loss as to how to debug further. I can't ssh into the node to see what Docker is doing.
Also, I'd initially tried this with a standard Alpine-based Redis image, so the Helm chart was a fallback. I tried it yesterday and the Helm one worked but the manual one didn't; today (on a newly built ACS cluster) neither works.
I'm going to spin up the cluster again to see if it reproduces consistently, but I'm pretty confident something fishy is going on.
PS - I have a VNet with an overlapping subnet 10.0.0.0/16 in a different region; when I go into the address range I do get a warning that there is a clash. Could that affect it?
<EDIT>
Some new insight... It's something to do with Alpine-based images (which we've been aiming to use)...
So kubectl run a --image=nginx (which is Ubuntu-based), and I can shell in, install telnet, and connect to the redis service.
But with, e.g., kubectl run c --image=rlesouef/alpine-redis, I shell in and telnet cannot reach the same redis service.
</EDIT>
There was a similar issue, https://github.com/Azure/acs-engine/issues/539, that has been fixed recently. One thing to verify is whether nslookup works in the container.
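A quick way to run that check (the pod name is arbitrary and the service FQDN assumes the usual defaults; treat this as a sketch):
# Throwaway busybox pod; succeeds only if cluster DNS resolves the service:
kubectl run -it dnstest --image=busybox --restart=Never -- nslookup redis.default.svc.cluster.local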

Anonymous pull on Docker repo in Artifactory

I am on Artifactory version 4.6 and have the following requirement for the Docker registry:
Allow anonymous pulls on a Docker repository
Force authentication on the SAME Docker repository
I know this is available out of the box in later versions of Artifactory; however, upgrading isn't an option for us for a while.
Does the following workaround work?
Create a virtual Docker repository on port 8443 that doesn't force authentication; call it docker-virtual
Create a local Docker repository that forces authentication on port 8444; call it docker-local
Configure docker-virtual with docker-local as its default deployment repository
docker pull docker-virtual should work
docker push docker-virtual should ask for credentials
Upon failure, I should be able to docker login docker-virtual
and then docker push docker-virtual/myImage
Not sure about the Artifactory side, but perhaps the following Docker advice helps.
You can run two registries in Docker: one read-write with authentication, and a second read-only without any authentication:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=My Registry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
docker run -d -p 5001:5000 --restart=always --name registry-ro \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry:ro \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
Note the volume settings for /var/lib/registry in each container. Then to pull from the anonymous registry, you'd just need to change the port. Since the filesystem is RO, any attempt to push to 5001 will fail.
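Usage then differs only by port (registry.example.com is a hypothetical host):
# Anonymous pull from the read-only instance:
docker pull registry.example.com:5001/myimage
# Pushes must go through the authenticated read-write instance:
docker login registry.example.com:5000
docker push registry.example.com:5000/myimage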
The closest thing you can achieve is failing on docker push without credentials (while succeeding with pull).
No idea if this works with Artifactory, sorry, but you could try this handy project for Docker registry auth.
Configure the registry to use https://hub.docker.com/r/cesanta/docker_auth/:
# registry config.yml
...
auth:
  token:
    # can be the same as your docker registry if you use nginx to proxy /auth to docker_auth
    # https://docs.docker.com/registry/recipes/nginx/
    realm: "example.com:5001/auth"
    service: "Docker registry"
    issuer: "Docker Registry auth server"
    rootcertbundle: /certs/domain.crt
And allow anonymous access with the corresponding ACL:
# cesanta/docker_auth auth_config.yml
...
users:
  # Password is specified as a BCrypt hash. Use htpasswd -B to generate.
  "admin":
    password: "$2y$05$LO.vzwpWC5LZGqThvEfznu8qhb5SGqvBSWY1J3yZ4AxtMRZ3kN5jC" # badmin
  "": {} # Allow anonymous (no "docker login") access.
ldap_auth:
  # See: https://github.com/cesanta/docker_auth/blob/master/examples/ldap_auth.yml
acl:
  # See https://github.com/cesanta/docker_auth/blob/master/examples/reference.yml#L178
  - match: {account: "/.+/"}
    actions: ["*"]
    comment: "Logged in users do anything."
  - match: {account: ""}
    actions: ["pull"]
    comment: "Anonymous users can pull anything."
  # Access is denied by default.
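With that ACL in place, a quick sanity check looks like this (hypothetical registry host):
docker pull registry.example.com:5000/myimage    # anonymous pull succeeds
docker push registry.example.com:5000/myimage    # denied until...
docker login registry.example.com:5000           # ...you log in (e.g. as admin)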