Traefik: How to configure route to direct every request to http service? - traefik

I'm too dumb and Google is not helpful.
I'm trying to set up the simplest configuration:
Traefik 2 (in latest Docker container), handling incoming requests...
should direct all incoming requests (http, https) to another service, the Traefik whoami demo container (which I've got already running)...
while terminating the SSL connection, calling the service via http on port 80...
while using a configuration file with explicitly defined routes
How would I configure this? Here's my try:
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
log:
  filePath: "/home/LogFiles/traefik.log"
  level: DEBUG
accessLog:
  filePath: "/home/LogFiles/trafik-access.log"
providers:
  file:
    filename: "/home/traefik.yml"
http:
  routers:
    route-https:
      rule: "Host(`traefik-test.azurewebsites.net`) && PathPrefix(`/whoami`)"
      service: "whoami"
      tls: {}
    route-http:
      rule: "Host(`traefik-test.azurewebsites.net`) && PathPrefix(`/whoami`)"
      service: "whoami"
  services:
    whoami:
      loadBalancer:
        servers:
          - url: "http://whoami-test.azurewebsites.net/"
I am not sure how the HTTPS-to-HTTP conversion works. The documentation says it happens automatically. Another part of the docs says you need two routers, and that the tls: {} part tells Traefik to terminate the TLS connection. That's what I'm doing above. (Is that correct?)
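One thing that may also help is pinning each router to a single entrypoint, so the HTTP and HTTPS routers don't both match on both ports. A sketch, reusing the router and service names from the config above (the entryPoints lists are my addition, not part of the original config):

```yaml
http:
  routers:
    route-https:
      entryPoints:
        - websecure            # only match requests arriving on :443
      rule: "Host(`traefik-test.azurewebsites.net`) && PathPrefix(`/whoami`)"
      service: "whoami"
      tls: {}                  # terminate TLS here; the backend is called over plain HTTP
    route-http:
      entryPoints:
        - web                  # only match requests arriving on :80
      rule: "Host(`traefik-test.azurewebsites.net`) && PathPrefix(`/whoami`)"
      service: "whoami"
```

With this layout, the tls: {} router terminates TLS on :443 and forwards to the service's http:// URL, while the plain router serves :80.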
The whoami service URL can be accessed in the browser without problems, via both http and https. But when calling it via Traefik (for the above sample this would be https://traefik-test.azurewebsites.net/whoami) I get a 400 and the browser shows "Bad Request". I suspect the https->http part is not working.
Samples on the web commonly show how to orchestrate multiple containers that get discovered by Traefik. That's not what I'm doing here. I just want to tell Traefik about my already running service: take every request and route everything to my service via http. Should be simple?
Any hints are appreciated.

There were two errors preventing my configuration from working.
Number one: nasty YAML. See the two spaces before the "-"?
servers:
  - url: "http://whoami-test.azurewebsites.net/"
They have to go for this to be valid:
servers:
- url: "http://whoami-test.azurewebsites.net/"
Number two: the Host header set by Traefik (which is set to the proxy's host) makes the target web app redirect back to my proxy. Adding the passHostHeader: false configuration was necessary:
services:
  whoami:
    loadBalancer:
      passHostHeader: false # <------ added this
      servers:
      - url: "http://whoami-test.azurewebsites.net/"
Passing the proxy's host as the "Host" header causes some services to reply with a 301 redirect back to the proxy, creating a redirect loop between proxy and service. A .NET Core app (Kestrel) responds with "Bad Request" instead. Omitting the header was the solution in my case.
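To see why the header matters, here is a tiny Python sketch (a hypothetical model, not Traefik code) of how a name-based virtual host backend reacts to the two passHostHeader settings:

```python
# Hypothetical model of a name-based virtual-host backend: it serves requests
# whose Host header matches its canonical name, and 301-redirects everything
# else back to that name -- which, behind a proxy, becomes a loop.
def backend_status(host_header, canonical="whoami-test.azurewebsites.net"):
    return 200 if host_header == canonical else 301

# passHostHeader: true (the default) -> Traefik forwards the client's Host,
# i.e. the proxy's own name:
print(backend_status("traefik-test.azurewebsites.net"))  # 301: redirect loop

# passHostHeader: false -> Traefik sends the host from the backend URL:
print(backend_status("whoami-test.azurewebsites.net"))   # 200: served directly
```

A backend like Kestrel that rejects unknown hosts outright would return 400 instead of 301, which matches the "Bad Request" seen above.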

Related

Avoid setting `.tls=true` for every route

I'm using traefik as a reverse proxy. Clients connect to Traefik via HTTPS, but Traefik connects to the service via HTTP.
I decided to add a test service to my docker compose file:
test:
  image: hashicorp/http-echo
  command: -text="Hello, World!"
  labels:
    - "traefik.http.routers.test-domain.rule=Host(`test.localhost`)"
    - "traefik.http.routers.test-domain.tls=true"
Everything works and I can see "Hello, World!" at https://test.localhost. However, if I remove traefik.http.routers.test-domain.tls=true, it no longer works, and Traefik starts returning 404 at that URL.
I can see how the .rule label would need to be provided for every single service, because in each case the domain would be different. But the .tls label would always be exactly the same, since all of my services will use TLS termination with HTTP to the backend. It seems tedious to keep adding traefik.http.routers.[ ... ].domain.tls=true to all my services. Is there a way to have Traefik just assume all services are .tls=true?
According to Ldez, this can be done by setting tls to true on the :443 entrypoint:
traefik:
  # ...
  command:
    # ...
    - --entrypoints.websecure.address=:443
    - --entrypoints.websecure.http.tls=true
    # ...
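If you configure Traefik via a static configuration file instead of command-line flags, the same entrypoint-level default can be expressed there. A sketch, assuming Traefik v2's static file format:

```yaml
entryPoints:
  websecure:
    address: ":443"
    http:
      tls: {}   # every router attached to this entrypoint gets TLS by default
```

With this in place, routers attached to websecure no longer need an individual tls=true label.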

traefik route configuration with http and https

Our web application is already running on an on-prem Kubernetes setup with the following Traefik configuration. The HTTPS endpoints are working fine, and now we need to add two services that run on HTTP with their own specific ports.
So basically we need to do the following routing:
[existing setup]
HTTPS adminapp.mydomain.com -> Admin UI App
HTTPS myapp.mydomain.com -> UI App
HTTPS api.mydomain.com -> Backend API
[new services]
HTTP api.mydomain.com:8111 -> Service1 API Integration with HTTP
HTTP api.mydomain.com:9111 -> Service2 API Integration with HTTP
Service1 and Service2 are intranet systems that will send the data to their own specific ports.
Here is the traefik configuration:
## Entrypoint Configurations
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
  service1:
    address: ":8111"
  service2:
    address: ":9111"
----
## Service1 IngressRoute
entryPoints:
  - service1
routes:
  - match: Host(`api.mydomain.com`)
    kind: Rule
    services:
      - name: service1-clusterip-service
        port: 8111
----
## Service2 IngressRoute
entryPoints:
  - service2
routes:
  - match: Host(`api.mydomain.com`)
    kind: Rule
    services:
      - name: service2-clusterip-service
        port: 9111
When we try to call Service1 at http://api.mydomain.com:8111/path/arg/item over plain HTTP, we get this error:
upstream connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111
There is not much detail in the access logs either to identify where the request is breaking.
We have a middleware that force-redirects HTTP to HTTPS, but it was removed while testing the above configuration.
Any idea why the configuration is not working as expected?
Issue resolved. We found a typo in the service that was pointing to the wrong pod selector.
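For anyone hitting the same error: "delayed connect error: 111" is a refused upstream connection, and a selector pointing at the wrong pods is a common cause. A quick way to check is whether the Service has any endpoints at all (service name taken from the configuration above; the namespace is a placeholder):

```shell
# An empty ENDPOINTS column means the Service's selector matches no pods.
kubectl get endpoints service1-clusterip-service -n <namespace>

# Compare the Selector line against the labels on the intended pods.
kubectl describe service service1-clusterip-service -n <namespace>
```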
Also, our setup changed a bit, so I'm putting it here in case anyone else faces the same issue.
[existing setup]
HTTPS adminapp.mydomain.com -> Admin UI App
HTTPS myapp.mydomain.com -> UI App
HTTPS api.mydomain.com -> Backend API
[new services]
HTTP api.mydomain.com:8111 -> Service1 API Integration with HTTP
TCP api.mydomain.com:9111 -> Service2 API Integration with TCP
For TCP integration, make sure of the following:
The entrypoint is defined with :port/tcp.
The router is defined as an IngressRouteTCP.
If you are doing a host check, use HostSNI(`*`) when TLS is disabled.
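Putting those points together, a minimal IngressRouteTCP for Service2 might look like this (a sketch assuming the traefik.containo.us/v1alpha1 CRDs and the entrypoint/service names from the configuration above; the metadata name is hypothetical):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: service2-tcp
spec:
  entryPoints:
    - service2
  routes:
    - match: HostSNI(`*`)    # required when TLS is disabled: no SNI to match on
      services:
        - name: service2-clusterip-service
          port: 9111
```

HostSNI(`*`) is needed because without TLS there is no SNI in the connection, so a specific hostname can never match.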

Cloudflare SSL Does Not Work With Elixir/Phoenix Backend

I am using Cloudflare's flexible SSL to secure my website, and it works fine with my Vue.js frontend. Essentially, it shares an SSL cert among ~50 random customers by piping my DNS through their edge network. It did the thing I needed and was fine, but now that I am trying to tie it to a Phoenix/Elixir backend, it is breaking.
The problem is that you can't make an http request from inside an ssl page, because you'll get this error:
Blocked loading mixed active content
This makes sense - if it's ssl on load, it needs to be ssl all the way down. So now I need to add SSL to elixir.
This site (https://elixirforum.com/t/run-phoenix-https-behind-cloudflare-without-key-keyfile/12660/2) seemed to have the solution! Their answer was:
configs = Keyword.put(config, :http, [:inet6, port: "80"])
          |> Keyword.put(:url, [scheme: "https", host: hostname, port: "443"])
So I made my config like this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  url: [scheme: "https", host: "my.website", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
That only allows me to get to http:etc! So I've also tried this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  https: [:inet6, port: "4443"],
  url: [scheme: "https", host: "my.website", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
Which doesn't work, of course, because there are no PEM files. Since I'm only using Elixir as an API (and not serving it from a domain name), I can't use solutions like this (http://51percent.tech/blog/uncategorized/serving-phoenix-apps-ssl-and-lets-encrypt/), because Let's Encrypt does not allow IP-address-only auth (https://www.digitalocean.com/community/questions/ssl-for-ip-address).
So at this point I'm very confused. Does anyone have any advice?
EDIT:
Someone mentioned that you can go to Cloudflare and generate TLS certs via Crypto > Origin Certificates > Create Certificate. I did that, downloaded the files, saved them in my project, and ran this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  https: [port: "4443",
          keyfile: "priv/ssl/cloudflare/private.key",
          certfile: "priv/ssl/cloudflare/public.pem"],
  url: [scheme: "https", host: "website.me", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
So what are the results of all the possible ways to query the backend?
Well I'm running docker-compose so https://backendservice:4443 is what I query from the frontend. That gives me -
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://backendservice:4443/getComments?postnum=6. (Reason: CORS request did not succeed).[Learn More]
value of error : Error: "Network Error"
exports https://mywebsi.te/js/chunk-vendors.4345f11a.js:49:15423
onerror https://mywebsi.te/js/chunk-vendors.4345f11a.js:65:538540
actions.js:61:12
So that clearly doesn't work.
I can go to http://my.ip.address:4000, but I cannot go to https://my.ip.address:4443.
As far as I can tell cloudflare TLS certificates do not work.
Or, more likely, I am doing something stupid in writing the elixir config.
FURTHER CLARIFICATION:
Yes, there is a CORS error above. However, please note that it only fires for HTTPS requests and NOT HTTP requests. Why this is happening is very confusing. I have a CORS plugin for Elixir in the entrypoint of my application that currently allows all (*) incoming requests. This is it; it should be pretty straightforward:
plug CORSPlug, origin: "*"
More information can be found here (https://github.com/mschae/cors_plug).
I seem to have been able to make my website work with the following configuration:
config :my_app, MyAppWeb.Endpoint,
  force_ssl: [hsts: true],
  url: [host: "my.website", port: 443],
  http: [:inet6, port: 4000],
  https: [
    :inet6,
    port: 8443,
    cipher_suite: :strong,
    keyfile: "private.key.pem",
    certfile: "public.cert.pem",
    cacertfile: "intermediate.cert.pem"
  ],
  cache_static_manifest: "priv/static/cache_manifest.json"
where private.key.pem and public.cert.pem are the origin certificate downloaded from Cloudflare.
(Note that the origin certificate you can download from Cloudflare is only useful for encrypting the connection between your website and Cloudflare.)
I also had to add routing rules via iptables.
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 4000
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
This configuration also worked before for a setup using letsencrypt certificates.
I'm not sure if the cacertfile: "intermediate.cert.pem" part is needed. I obtained it from "Step 4" of the Cloudflare documentation.
I never used Cloudflare, but I know that Phoenix can pretend it is serving HTTPS content if x-forwarded-proto is set in the request headers. Your configuration is OK (without the https part; you don't need it).
Try adding the force_ssl option to the endpoint configuration. This instructs Plug.SSL to force SSL and redirect any HTTP request to HTTPS.
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  url: [scheme: "https", host: "my.website", port: "443"],
  force_ssl: [rewrite_on: [:x_forwarded_proto], host: nil],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
In my experience with Cloudflare and web backends (and without seeing the exact request that is causing the issue), this is usually caused by hard-coding a CSS/JS dependency with http:// or by making an AJAX request with http:// hard-coded. If the back end you are requesting with http:// doesn't automatically redirect to https://, you will get the error you're seeing.
If you're using Cloudflare's "One-Click SSL", this is how requests are being made to your server:
[ Client Browser ] <-- HTTPS --> [ Cloudflare ] <-- HTTP --> [ Your Server ]
Since the communication between Cloudflare and your server is all over http://, you should not need to change any Phoenix configuration at all, but there are 2 potential sources of error:
Automatic redirection to https:// is not being done by Cloudflare (or Cloudflare is not serving your content)
You have some CSS/JS dependencies or Ajax requests that are hard-coded to http:// and the server responding to the requests is not automatically redirecting to https://.
Troubleshooting point 1: According to Cloudflare's docs for their "One-Click SSL", they should automatically redirect http:// requests to https:// (although this might be a setting that you need to change). If a request comes to your domain over http://, Cloudflare should automatically redirect to https://. You can check whether this is happening by requesting a page from your backend in your browser with http:// explicitly included at the front of the URL. If you are not redirected to https://, it could mean that a) Cloudflare is not actually in between your browser and the backend or b) that Cloudflare is not automatically redirecting to https://. In either case, you have some hints about your next step in solving the problem.
Troubleshooting point 2: You can check this by using the "Network" tab in the developer tools in your browser, or by manually going through your website code looking for CSS/JS dependencies or Ajax requests hard-coded with http://. In the developer tools, you can check for exactly which request(s) are causing the problem (the line should be red and have an error about mixed content). From there you can check where this request is coming from in your code. When searching through your code for hard-coded http:// requests, most of the time you can simply replace it with https:// directly, but you should double-check the URL in your browser to make sure that the URL is actually available over https://. If the CSS/JS dependency or Ajax endpoint is not actually available over https:// then you will need to remove/replace it.
Hope this helps.

HTTPS endpoints for local kubernetes backend service addresses, after SSL termination

I have a k8s cluster that sits behind a load balancer. A request for myapisite.com passes through the LB and is routed by k8s to the proper deployment, with the SSL cert coming from the k8s load balancer ingress, which then routes to the service ingress, like so:
spec:
  rules:
  - host: myapisite.com
    http:
      paths:
      - backend:
          serviceName: ingress-605582265bdcdcee247c11ee5801957d
          servicePort: 80
        path: /
  tls:
  - hosts:
    - myapisite.com
    secretName: myapisitecert
status:
  loadBalancer: {}
So my myapisite.com resolves on HTTPS correctly.
My problem is that, while maintaining the above setup (if possible), I need to be able to go to my local service endpoints within the same namespace on HTTPS, i.e. from another pod I should be able to curl or wget the following without a cert error:
https://myapisite.namespace.svc.cluster.local
Even if I were interested in not terminating SSL until the pod level, creating a SAN entry on the cert for a .local address is not an option, so that solution is not viable.
Is there some simple way I'm missing to make all local DNS trusted in k8s? Or some other solution here that's hopefully not a reinvention of the wheel? I am using kubernetes version 1.11 with CoreDNS.
Thanks, and sorry in advance if this is a dumb question.
If your application can listen on both HTTP and HTTPS, you can configure both, meaning you will be able to access it over either protocol, whichever you prefer. Now, how you create and distribute the certificate is a different story, and you must solve that on your own (probably by using your own CA and storing the cert/key in a Secret), unless you want to use something like Istio and its mutual TLS support to secure traffic between services.
While you describe what you want to achieve, we don't really know why. Knowing the reason behind this need might actually help in suggesting the best solution.

IP/hostname whitelist way for a call to API from openshift

This is more of a how-to question, as I am still exploring OpenShift.
We have an orchestrator running on OpenShift which calls a REST API written in Flask, hosted on Apache/RHEL.
While our endpoint is token-authenticated, we wanted to add a second level of restriction by allowing access only from a whitelisted set of hosts.
But given the nature of OpenShift, a container can be scheduled on any (number of) servers across the cluster.
What is the best way to go about whitelisting traffic from a cluster of computers?
I tried taking a look at an External Load Balancer for the orchestrator service:
clusterIP: 172.30.65.163
externalIPs:
- 10.198.40.123
- 172.29.29.133
externalTrafficPolicy: Cluster
loadBalancerIP: 10.198.40.123
ports:
- nodePort: 30768
  port: 5023
  protocol: TCP
  targetPort: 5023
selector:
  app: dbrun-x2
  deploymentconfig: dbrun-x2
sessionAffinity: None
type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 172.29.29.133
What I am unsure of is: what IP should I expect to see on the other side [in my API's Apache access logs] with this setup?
Or does this LoadBalancer act as a gateway only for incoming calls to OpenShift?
Sorry about the long post; I would appreciate some input.