Overriding default traefik frontend rule when used with docker compose - traefik

By default, Traefik creates a frontend rule for newly started docker containers:
https://docs.traefik.io/configuration/backends/docker/
traefik.frontend.rule=EXPR | Overrides the default frontend rule. Default: Host:{containerName}.{domain} or Host:{service}.{project_name}.{domain} if you are using docker-compose.
I am using docker-compose, and this default behavior is not useful to me. I want to use a docker label to change the frontend rule to Host:{hostname}.{domain}, or even the default non-compose Host:{containerName}.{domain}, but this does not work: the label does not get parsed, and the rule ends up being the literal label string.
I do not understand the documentation. What exactly is the EXPR in traefik.frontend.rule=EXPR?

The {hostname} and {domain} are just there to show that the default takes its values from Docker. Unfortunately, Traefik doesn't do substitution from global values in labels you write yourself, as that wording might suggest.
The EXPR would be any sort of traefik rule expression like "Host: myapp.example.com". More examples can be seen in the documentation here: https://docs.traefik.io/basics/#examples
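For example, a minimal docker-compose sketch (the service and domain names here are made up) that sets the rule explicitly instead of relying on the default:

services:
  myapp:
    image: nginx
    labels:
      # Traefik v1: the label value is a literal rule expression, not a template
      - "traefik.frontend.rule=Host:myapp.example.com"
      - "traefik.port=80"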

Related

Avoid setting `.tls=true` for every route

I'm using traefik as a reverse proxy. Clients connect to Traefik via HTTPS, but Traefik connects to the service via HTTP.
I decided to add a test service to my docker compose file:
test:
  image: hashicorp/http-echo
  command: -text="Hello, World!"
  labels:
    - "traefik.http.routers.test-domain.rule=Host(`test.localhost`)"
    - "traefik.http.routers.test-domain.tls=true"
Everything works and I can see "Hello, World!" at https://test.localhost. However, if I remove traefik.http.routers.test-domain.tls=true, it no longer works and Traefik starts returning 404 at that URL.
I can see how the .rule label would need to be provided for every single service, because in each case the domain would be different. But the .tls label would always be exactly the same, since all of my services will use TLS termination with HTTP to backend. It seems tedious to keep adding traefik.http.routers.[ ... ].domain.tls=true to all my services. Is there a way to have traefik just assume all services will be .tls=true?
According to Ldez, this can be done by setting tls to true on the :443 entrypoint:
traefik:
  # ...
  command:
    # ...
    - --entrypoints.websecure.address=:443
    - --entrypoints.websecure.http.tls=true
    # ...
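With TLS enabled on the entrypoint, the per-router tls labels can then be dropped; a sketch of the test service from above under that assumption:

test:
  image: hashicorp/http-echo
  command: -text="Hello, World!"
  labels:
    # No .tls=true label needed; the websecure entrypoint terminates TLS for every router on it
    - "traefik.http.routers.test-domain.rule=Host(`test.localhost`)"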

Traefik: How do I best configure many websites (static virtual hosts) on the same backend container?

I have a webserver (currently nginx, but could just as well be Apache) which hosts many static websites as virtual hosts. With "many", I mean dozens, and I keep adding and removing them.
In my setup, I have a docker container with traefik, and a docker container with nginx. The same nginx container serves all these websites (that point is key to my question).
What is the best way to tell traefik about these host names, so that traefik can create Let's Encrypt certificates for them and route traffic to this container?
The standard way seems to be to use a label on the nginx container, e.g.
docker run ... \
  -l traefik.backend=webserver \
  -l traefik.port=80 \
  -l traefik.frontend.rule="Host:example.com,www.example.com,docs.example.com,example.net,www.example.net,docs.example.net,example.org,www.example.org,example.de,www.example.de,development.com,www.development.com"
and so on. That list goes on and on and on. This works, but:
This is not very maintainable.
Worse, Traefik seems to pull one single cert for all these names. Let's say development.com is a completely different entity from example.com; I don't want both of them listed in the same cert.
Even worse, let's say I made a mistake somewhere. I misconfigured docs.example.net. Or, worse, they all work, but at some point in the future I forget to renew example.net, and my Let's Encrypt cert needs to be renewed. That renewal will fail, because if any one of the host names fails to verify, Let's Encrypt will refuse the certificate, which is totally correct. But it means that all my websites will go down, suddenly and at an unforeseeable time, if any one of the hostnames has a problem. That's a big risk, and one I shouldn't take: the websites should be independent in terms of certificates.
It appears I am not using this right. So, my question is: How can I better configure this, so that each website is independent (in the configuration of traefik, and esp. in the SSL certificate), but I still use only one webserver container for all of them?
Here's what I tried:
I tried to manually configure the certificates in [acme] sections:
[[acme.domains]]
main = "example.com"
sans = [ "www.example.com" ]
[[acme.domains]]
main = "example.org"
sans = [ "www.example.org" ]
That looks more sane to me than the long label line on docker run. traefik apparently tries to get these certs, and writes them to acme.json. But it doesn't seem to use them. Even with these lines, traefik still uses the cert that has all the hostnames from the traefik.frontend.rule instead of the manually configured, more specific cert. That seems ill-advised.
Also, if I remove the hostname from the traefik.frontend.rule, traefik doesn't find the backend and returns a 404 to the client. That's logical, because traefik doesn't know where to route the traffic for this host.
I tried to set up [frontend] rules.
[frontends]
  [frontends.example]
    backend = "webserver"
    [frontends.example.routes.com]
      rule = "Host:example.com,www.example.com,docs.example.com"
    [frontends.example.routes.org]
      rule = "Host:example.org,www.example.org,docs.example.org"
...
That seems to be the right direction, although the configuration directives are very chatty, esp. all the section headers.
But I couldn't get this to work; all I got was "backend not found" in the traefik access log.
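For what it's worth, a likely cause of that "backend not found" error: in Traefik v1, a frontend defined in the file configuration can only reference backends defined in that same file configuration, not backends created by the docker provider. A minimal sketch of defining the backend alongside the frontends (the container URL here is an assumption):

[backends]
  [backends.webserver]
    [backends.webserver.servers.server1]
      url = "http://nginx:80"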

Can not link a HTTP Load Balancer to a backend (502 Bad Gateway)

I have on the backend a Kubernetes node running on port 32656 (Kubernetes Service of type NodePort). If I create a firewall rule for the <node_ip>:32656 to allow traffic, I can open the backend in the browser on this address: http://<node_ip>:32656.
What I try to achieve now is creating an HTTP Load Balancer and link it to the above backend. I use the following script to create the infrastructure required:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the Load Balancer accessed through the public IP in the Frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left with default values: (/ and 80) and the backend service responds quickly with a status 200.
I have also created the firewall rule to accept any source and all ports (tcp) and no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose in the instance group, the problem should be somewhere in the configuration of the HTTP Load Balancer (something with the health checks, maybe?).
What am I missing to complete the link between the frontend and the backend?
I assume you actually have instances in the instance group, and that the firewall rule is not restricted to a specific source range. Can you check your logs for a Google health check? (The User-Agent will have "google" in it.)
What version of Kubernetes are you running? FYI, there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/; just make sure you do these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, a Service of type=NodePort (both of which you already seem to have), and a health check on that service at "/" (which you don't have; this requirement is relaxed in 1.3, but 1.3 is not out yet).
Also note that you can't put the same instance into two load-balanced instance groups, so to use the Ingress mentioned above you will have to clean up your existing load balancer (or at least remove the instances from the instance group and free up enough quota so the Ingress controller can do its thing).
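For context, a minimal Ingress sketch from that era (extensions/v1beta1; the service name and port are assumptions, not taken from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    # Default backend: all traffic is routed to this NodePort service
    serviceName: my-nodeport-service
    servicePort: 80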
There can be a few things wrong among those mentioned:
firewall rules need to apply to all hosts, or they need to have the same network tag as the machines in the instance group
by default, the node should return 200 at /; configuring readiness and liveness probes to use a different path did not work for me
It seems you are doing by hand things that are otherwise automated, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
This shows the steps that set up the firewall and port forwarding for you, which may also show you what you are missing.
I noticed myself, when using an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until I had / returning 200 (and /healthz, which I added too). So basically that container now exposes a webserver on port 8080 returning that, and the other config wires it up to port 80.
When it comes to firewall rules, make sure they apply to all machines or that the network tag matches, or they won't work.
The 502 error usually comes from the load balancer, which will not pass your request on if the health check does not pass.
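One way to check this directly is to ask GCE for the health of the backends, using the backend service name from the script above:

gcloud compute backend-services get-health test-backend-service

If the instances show up as UNHEALTHY, the 502 is expected until the health check passes.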
Could you make your service type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer), which would set all of this up automatically? This assumes you have the cloud provider flag set for Google Cloud.
After you deploy, describe the service; it should show you the endpoint that was assigned.
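A minimal sketch of such a service (the names and ports here are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # Asks the cloud provider (GCE here) to provision an external load balancer
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

kubectl describe service my-service should then show the external IP once it has been assigned.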

Problems setting up artifactory as a docker registry

I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4).
I've set up a local, remote and virtual docker repository, added Apache as a reverse proxy, and added a DNS entry for the virtual "docker" repo.
The reverse proxy is working, but if I try something like:
docker pull docker.my.company.com/ubuntu:16.04
I'm getting:
https://docker.my.company.com/v1/_ping: x509: certificate is valid for
*.company.com, company.com, not docker.my.company.com
My Artifactory URL is "my.company.com/artifactory", and I want the repositories to be accessible at repo.my.company.com/artifactory.
I also have a wildcard certificate for company.com, so I don't understand what the problem is here.
Or is there a way to access Artifactory over plain HTTP, without SSL?
Any ideas?
According to RFC 2818, a wildcard certificate matches only domains one level down, but not deeper:
E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
In this case, what you should do is use ports to map repositories instead of subdomains, so the docker repository will be accessible under, for example, my.company.com:5001/ instead of docker.my.company.com.
You can find an explanation of the change, and how to do it using the Artifactory proxy settings generator, in the User Guide.
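For illustration, a rough sketch of what the port-based Apache mapping can look like (the port, paths and repository name are assumptions; the generator in the User Guide produces the exact config for your setup):

Listen 5001
<VirtualHost *:5001>
    ServerName my.company.com
    # Forward the Docker v2 API on port 5001 to the virtual "docker" repository in Artifactory
    ProxyPass /v2 http://localhost:8081/artifactory/api/docker/docker/v2
    ProxyPassReverse /v2 http://localhost:8081/artifactory/api/docker/docker/v2
</VirtualHost>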
If you are prepared to live with the certificate-name mismatch for-now, and understand the security implications of ignoring the name-mismatch and accessing the repo insecurely, you can apply the following workaround:
Edit /etc/default/docker and add the option DOCKER_OPTS="--insecure-registry docker.my.company.com".
Restart docker: [sudo] service docker restart.

HAProxy 1.5.12 hdr(<name>)/hdr_?

I am trying to figure out which of the hdr variants to use in this situation. According to the documentation (http://www.haproxy.org/download/1.5/doc/configuration.txt), the following is stated:
hdr(<name>) The HTTP header <name> will be looked up in each HTTP
request. Just as with the equivalent ACL 'hdr()' function,
the header name in parenthesis is not case sensitive. If the
header is absent or if it does not contain any value, the
roundrobin algorithm is applied instead.
An optional 'use_domain_only' parameter is available, for
reducing the hash algorithm to the main domain part with some
specific headers such as 'Host'. For instance, in the Host
value "haproxy.1wt.eu", only "1wt" will be considered.
This algorithm is static by default, which means that
changing a server's weight on the fly will have no effect,
but this can be changed using "hash-type".
1) Where is the list of different <name>s?
2) Which one do I use when trying to use HAProxy as a reverse proxy in this case (subdomains)? Would I use hdr() or hdr_dom(), for example:
acl host_deusexmachina hdr(<name>) -i deus.ex.machina.mydomain.com
acl host_fela hdr(<name>) -i fela.mydomain.com
acl host_mydomain hdr(<name>) -i mydomain.com
The different <name>s are just the header names available in the HTTP protocol.
You should probably use Host.
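A sketch of how that looks in a frontend section (the backend names here are made up):

frontend http-in
    bind *:80
    # hdr(host) matches the full Host header value; -i makes the match case-insensitive
    acl host_deusexmachina hdr(host) -i deus.ex.machina.mydomain.com
    acl host_fela          hdr(host) -i fela.mydomain.com
    acl host_mydomain      hdr(host) -i mydomain.com
    use_backend bk_deusexmachina if host_deusexmachina
    use_backend bk_fela          if host_fela
    use_backend bk_mydomain      if host_mydomain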