Don't prepend http:// to Endpoint Subset IP - ssl

I have a Kubernetes Ingress pointing to a headless Service, which in turn points to an Endpoints object that routes to an external IP address. The following is the configuration for the Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-chart
subsets:
- addresses:
  - ip: **.**.**.**
  ports:
  - port: 443
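For context, the headless Service sitting in front of this Endpoints object looks roughly like the following (a sketch; the name and port are assumed to mirror the Endpoints above, with our real values omitted):
apiVersion: v1
kind: Service
metadata:
  name: my-chart  # must match the Endpoints object's name
spec:
  clusterIP: None  # headless, no selector; the Endpoints above supply the address
  ports:
  - port: 443
    targetPort: 443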
However, the upstream connection fails with 'connection reset by peer', and on looking at the logs I see the following error in the Kubernetes nginx-ingress-controller:
2020/01/15 14:39:50 [error] 24546#24546: *240425068 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: *****, server: dev.somehost.com, request: "GET / HTTP/1.1", upstream: "http://**.**.**.**:443/", host: "dev.somehost.com"
My theory is that the combination of http:// and port 443 is what triggers this (tested with cURL commands). How do I either 1) specify a different protocol for the Endpoints object, or 2) just prevent the prepending of http://?
Additional notes:
1) SSL is enabled on the target IP, and if I curl it I can set up a secure connection
2) SSL passthrough doesn't really work here. The incoming and outgoing requests will use two different SSL connections with two different certificates.
3) I want the Ingress host to be the SNI (and it looks like this may default to being the case)
Edit: Ingress controller version: 0.21.0-rancher3

We were able to solve this by adding the following to the metadata of our Ingress:
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  nginx.ingress.kubernetes.io/configuration-snippet: |-
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
The first annotation switches the backend protocol to HTTPS, and the second enables SNI by sending the request host as the proxied TLS server name.
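For completeness, a minimal sketch of an Ingress carrying those annotations (the host, service name and port here are placeholders standing in for our real values, and the apiVersion may differ depending on your cluster version):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-chart
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      proxy_ssl_server_name on;
      proxy_ssl_name $host;
spec:
  rules:
  - host: dev.somehost.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-chart
          servicePort: 443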

Related

TLS handshake fails intermittently when using HAProxy Ingress Controller

I'm using the HAProxy Ingress Controller (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress) for TLS termination for my app.
I have a simple Node.JS server listening on port 8080 for HTTP, and on port 1935 as a simple echo server (not HTTP).
And I use HAProxy Ingress controller to wrap the ports in TLS. (8080 -> 443 (HTTPS), 1935 -> 1936 (TCP + TLS))
I installed HAProxy Ingress Controller with
helm upgrade --install haproxy-ingress incubator/haproxy-ingress \
--namespace test \
-f ./haproxy-ingress-values.yaml \
--version v0.0.27
where the content of haproxy-ingress-values.yaml is:
controller:
  ingressClass: haproxy
  replicaCount: 1
  service:
    type: LoadBalancer
  tcp:
    1936: "test/simple-server:1935:::test/ingress-cert"
  nodeSelector:
    "kubernetes.io/os": linux
defaultBackend:
  nodeSelector:
    "kubernetes.io/os": linux
And here's my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    secretName: ingress-cert
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: "simple-server"
          servicePort: 8080
The cert is self-signed.
If I test the TLS handshake with
echo | openssl s_client -connect "<IP>":1936
Sometimes (about one in three attempts) it fails with:
CONNECTED(00000005)
139828847829440:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:332:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 316 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
The same problem doesn't happen for 443 port.
See here for the details of the settings to reproduce the problem.
[edit]
As pointed out by JoaoMorais, it's because the default stats port is 1936.
Although I didn't turn on statistics, it still interferes with the behavior.
There are two solutions that worked for me:
1) Change my service's 1936 port to another one.
2) Change the stats port by adding values like the below when installing the haproxy-ingress chart.
controller:
  stats:
    port: 5000
HAProxy by default allows the same port number to be reused across the same or other frontend/listen sections, and also across other haproxy processes. This can be changed by adding noreuseport in the global section.
The default HAProxy Ingress configuration uses port number 1936 to expose stats. If that port number is reused by, e.g., a TCP proxy, incoming requests will be distributed between both frontends: sometimes your service will be called, sometimes the stats page. Changing the TCP proxy or the stats page (doc here) to another port should solve the issue.
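For example, the first workaround (moving the TCP proxy off 1936) only needs the tcp mapping in haproxy-ingress-values.yaml to use a free port; 1937 below is just an arbitrary choice for illustration:
controller:
  tcp:
    # expose the echo server on 1937 so it no longer collides with the stats frontend on 1936
    1937: "test/simple-server:1935:::test/ingress-cert"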

How to configure ssl for ldap/opendj while using ISTIO service mesh

I have a couple of microservices and our backend is OpenDJ/LDAP. It has been configured to use SSL. Now we are trying to use Istio as our k8s service mesh. Every other service works fine, but the LDAP server, OpenDJ, is not. My guess is it's because of the SSL configuration; it's meant to use a self-signed cert.
I have a script that creates a self-signed cert in the istio namespace, and I have tried to use it like this in the gateway.yaml:
- port:
    number: 4444
    name: tcp-admin
    protocol: TCP
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
I also have tried to use
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: opendj-istio-mtls
spec:
  host: opendj.{{.Release.Namespace }}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
      credentialName: tls-certificate
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: opendj-receive-tls
spec:
  targets:
  - name: opendj
  peers:
  - mtls: {}
for the LDAP server, but it's not connecting. While trying to use the tls spec in gateway.yaml I am getting this error:
Error: admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: server cannot have TLS settings for non HTTPS/TLS ports
And the logs from opendj server
INFO - entrypoint - 2020-06-17 12:49:44,768 - Configuring OpenDJ.
WARNING - entrypoint - 2020-06-17 12:49:48,987 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
WARNING - entrypoint - 2020-06-17 12:49:53,293 -
Unable to connect to the server at
"oj-opendj-0.opendj.default.svc.cluster.local" on port 4444
Can someone please help me out with how I should approach this?
To enable non-HTTPS traffic over TLS connections you have to use protocol TLS. TLS implies the connection will be routed based on the SNI header to the destination without terminating the TLS connection. You can check this.
- port:
    number: 4444
    name: tls
    protocol: TLS
  hosts:
  - "*"
  tls:
    mode: SIMPLE # enable https on this port
    credentialName: tls-certificate # fetch cert from k8s secret
Please check this istio documentation also.
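It is also worth making sure the OpenDJ Service names its port with a tls- (or at least tcp-) prefix, since older Istio releases rely on the port-name prefix for protocol selection; a rough sketch, with the selector label assumed rather than taken from the question:
apiVersion: v1
kind: Service
metadata:
  name: opendj
spec:
  selector:
    app: opendj  # assumed label
  ports:
  - name: tls-admin  # the name prefix tells Istio to treat this as TLS traffic
    port: 4444
    targetPort: 4444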

Phoenix in Production on EC2 not rendering in HTTPS with AWS Load Balancer

I have followed this tutorial to set up my phoenix app on EC2, and later I added the load balancer for SSL.
I used ACM (Amazon Certificate Manager) to get the public certificate and applied on the Amazon Load Balancer (ALB).
I'm still a bit fuzzy on the port mapping, so I suppose it might be the cause.
# config/prod.exs
host = System.get_env("HOST") || "example.com"
config :app_web, AppWeb.Endpoint,
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  load_from_system_env: true,
  http: [port: 80],
  url: [host: host, port: 80],
  url: [host: host, port: 443, scheme: "https"],
  server: true,
  secret_key_base: System.get_env("SECRET_KEY_BASE")
# docker-compose.yml
version: '2'
services:
  kroo:
    image: [image url]
    environment:
      - HOST=0.0.0.0
    ports:
      - '443:443'
      - '80:80'
$ docker ps
PORTS
0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
$ docker logs
01:56:30.177 [info] Running AppWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:80 (http)
01:56:30.177 [info] Access AppWeb.Endpoint at https://example.com
Running Release tasks
[]
01:56:31.316 [info] Already up
01:56:33.085 [info] Plug.SSL is redirecting GET / to https://example.com with status 301
When I don't include force_ssl: [rewrite_on: [:x_forwarded_proto]], the page displays fine over http, but when I include force_ssl, the redirect to https works, yet I then get an "unable to connect" error.
My confusion is that, since the load balancer is taking care of the SSL, I don't have the key and certificate for SSL, which is why I don't have an https: [] option in prod.exs.
Could someone point out what I'm doing wrong here?
Thanks
UPDATE: I finally got it working; below are my working configs in case anyone finds them helpful.
# config/prod.exs
# https config is not needed since ALB is handling the SSL
# Phoenix app serving in http is fine
config :app_web, AppWeb.Endpoint,
  load_from_system_env: true,
  http: [port: 8080],
  url: [host: "example.com"],
  server: true,
  secret_key_base: System.get_env("SECRET_KEY_BASE")
# docker-compose.yml
# map phoenix port 8080 to docker 8080
ports:
  - '8080:8080'
Since I'm not providing SSL certificates but still want to force SSL, I did what jamesvl suggested in the answer: use your load balancer to redirect http traffic to https.
If you need help setting up SSL on ALB, I followed this guide.
If your app is somehow still not showing up under your domain, make sure that you have an A record with an alias mapped to the DNS name of your load balancer.
I would suggest setting the listen port of your docker container to something other than 80, and not listening on 443 at all.
Rationale
I think the issue may lie in the fact that your http: configuration is listening on port 80.
With force_ssl: enabled, you're indicating that you want http connections to go to port 443, but when something arrives on 443 (via the load balancer), you send it to your (listening) port 80... which redirects it back to 443?
Fix
Let Phoenix listen on an arbitrary port (say... 4010) for http only connections. (Since the load balancer does your SSL termination, all your communication with the load balancer will be over http.) This involves changing your Docker container to forward connections to that port as well - you don't want to listen on 80 or 443 at all in your container.
Your url: configuration would then be looking only at headers, redirecting http requests to https as needed.
By the way, Amazon's ALB can also do 80 -> 443 redirection for you if you set up the rules; this saves Phoenix from even having to have a url: config for port 80 at all.
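To make that concrete, a rough docker-compose sketch of the idea above (4010 is just the example port, and the service name and image are placeholders):
# docker-compose.yml
version: '2'
services:
  app:
    image: [image url]
    environment:
      - HOST=0.0.0.0
    ports:
      - '4010:4010'  # the ALB target group forwards plain HTTP here; no 80 or 443 published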

Cloudflare SSL Does Not Work With Elixir/Phoenix Backend

I am using Cloudflare's flexible SSL to secure my website, and it works fine when running my vue.js frontend. Essentially what it is doing is sharing the SSL cert with 50 random customers, and it does so by piping my DNS through their edge network. Basically, it did the thing I needed and was fine, but now that I am trying to tie it to a phoenix/elixir backend it is breaking.
The problem is that you can't make an http request from inside an ssl page, because you'll get this error:
Blocked loading mixed active content
This makes sense - if it's ssl on load, it needs to be ssl all the way down. So now I need to add SSL to elixir.
This site (https://elixirforum.com/t/run-phoenix-https-behind-cloudflare-without-key-keyfile/12660/2) seemed to have the solution! Their answer was:
configs = Keyword.put(config, :http, [:inet6, port: "80"])
|> Keyword.put(:url, [scheme: "https", host: hostname, port: "443"])
So I made my config like this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  url: [scheme: "https", host: "my.website", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
That only allows me to get to http:etc! So I've also tried this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  https: [:inet6, port: "4443"],
  url: [scheme: "https", host: "my.website", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
Which of course doesn't work because there are no PEM files. Since I'm only using Elixir as an API (and not a DNS name), I can't use solutions like this (http://51percent.tech/blog/uncategorized/serving-phoenix-apps-ssl-and-lets-encrypt/), because Let's Encrypt does not allow IP-address-only auth (https://www.digitalocean.com/community/questions/ssl-for-ip-address).
So at this point I'm very confused. Does anyone have any advice?
EDIT:
Someone mentioned that you can go onto Cloudflare and generate TLS certs by going to Crypto > Origin Certificates > Create Certificate. I did that, downloaded the files, saved them in my project and ran this:
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  https: [port: "4443",
          keyfile: "priv/ssl/cloudflare/private.key",
          certfile: "priv/ssl/cloudflare/public.pem"],
  url: [scheme: "https", host: "website.me", port: "443"],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
So what are the results of all the possible ways to query the backend?
Well I'm running docker-compose so https://backendservice:4443 is what I query from the frontend. That gives me -
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://backendservice:4443/getComments?postnum=6. (Reason: CORS request did not succeed).[Learn More]
value of error : Error: "Network Error"
exports https://mywebsi.te/js/chunk-vendors.4345f11a.js:49:15423
onerror https://mywebsi.te/js/chunk-vendors.4345f11a.js:65:538540
actions.js:61:12
So that clearly doesn't work.
I can go to http://my.ip.address:4000, but I cannot go to https://my.ip.address:4443.
As far as I can tell cloudflare TLS certificates do not work.
Or, more likely, I am doing something stupid in writing the elixir config.
FURTHER CLARIFICATION:
Yes, there is a CORS header error above. However, please note that it only fires for https requests and NOT http requests. Why this is happening is very confusing. I have a CORS plug for Elixir in the entrypoint of my application that currently allows * incoming requests. This is it - it should be pretty straightforward:
plug CORSPlug, origin: "*"
More information can be found here (https://github.com/mschae/cors_plug).
I seem to have been able to make my website work with the following configuration:
config :my_app, MyAppWeb.Endpoint,
  force_ssl: [hsts: true],
  url: [host: "my.website", port: 443],
  http: [:inet6, port: 4000],
  https: [
    :inet6,
    port: 8443,
    cipher_suite: :strong,
    keyfile: "private.key.pem",
    certfile: "public.cert.pem",
    cacertfile: "intermediate.cert.pem"
  ],
  cache_static_manifest: "priv/static/cache_manifest.json"
where private.key.pem and public.cert.pem are the origin certificate downloaded from Cloudflare.
(Note that the origin certificate you can download from Cloudflare is only useful for encrypting the connection between your website and Cloudflare.)
I also had to add routing rules via iptables.
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 4000
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
This configuration also worked before for a setup using letsencrypt certificates.
I'm not sure if the cacertfile: "intermediate.cert.pem" part is needed. I obtained it from "Step 4" of the Cloudflare documentation.
I never used Cloudflare, but I know that Phoenix can pretend it is serving https content if x-forwarded-proto is set in the request headers. Your configuration is OK (without the https part; you don't need it).
Try adding the force_ssl option to the endpoint configuration. This will instruct Plug.SSL to force SSL and redirect any http request to https.
config :albatross, AlbatrossWeb.Endpoint,
  http: [:inet6, port: "4000"],
  url: [scheme: "https", host: "my.website", port: "443"],
  force_ssl: [rewrite_on: [:x_forwarded_proto], host: nil],
  secret_key_base: "SUPERSECRET",
  render_errors: [view: AlbatrossWeb.ErrorView, accepts: ~w(html json)],
  pubsub: [name: Albatross.PubSub,
           adapter: Phoenix.PubSub.PG2]
In my experience with Cloudflare and web backends (and without seeing the exact request that is causing the issue), this is usually caused by hard-coding a CSS/JS dependency with http:// or by making an AJAX request with http:// hard-coded. If the back end you are requesting with http:// doesn't automatically redirect to https://, you will get the error you're seeing.
If you're using Cloudflare's "One-Click SSL", this is how requests are being made to your server:
[ Client Browser ] <-- HTTPS --> [ Cloudflare ] <-- HTTP --> [ Your Server ]
Since the communication between Cloudflare and your server is all over http://, you should not need to change any Phoenix configuration at all, but there are 2 potential sources of error:
1) Automatic redirection to https:// is not being done by Cloudflare (or Cloudflare is not serving your content).
2) You have some CSS/JS dependencies or Ajax requests that are hard-coded to http://, and the server responding to the requests is not automatically redirecting to https://.
Troubleshooting point 1: According to Cloudflare's docs for their "One-Click SSL", they should automatically redirect http:// requests to https:// (although this might be a setting that you need to change). If a request comes to your domain over http://, Cloudflare should automatically redirect to https://. You can check whether this is happening by requesting a page from your backend in your browser with http:// explicitly included at the front of the URL. If you are not redirected to https://, it could mean that a) Cloudflare is not actually in between your browser and the backend or b) that Cloudflare is not automatically redirecting to https://. In either case, you have some hints about your next step in solving the problem.
Troubleshooting point 2: You can check this by using the "Network" tab in the developer tools in your browser, or by manually going through your website code looking for CSS/JS dependencies or Ajax requests hard-coded with http://. In the developer tools, you can check for exactly which request(s) are causing the problem (the line should be red and have an error about mixed content). From there you can check where this request is coming from in your code. When searching through your code for hard-coded http:// requests, most of the time you can simply replace it with https:// directly, but you should double-check the URL in your browser to make sure that the URL is actually available over https://. If the CSS/JS dependency or Ajax endpoint is not actually available over https:// then you will need to remove/replace it.
Hope this helps.

Ensure SSL all the way to the pod using Google Container Engine Ingress

I have a pod that serves HTTPS traffic on 443 using a self-signed certificate with a CN of app that matches the service name, app. It also serves HTTP on 80.
I also have an Ingress object that exposes the pod externally on foo.com and I use kube-lego to dynamically fetch and configure a LetsEncrypt certificate for foo.com:
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - backend:
          serviceName: kube-lego-gce
          servicePort: 8080
        path: /.well-known/acme-challenge/*
      - backend:
          serviceName: app
          servicePort: 80
        path: /*
  tls:
  - hosts:
    - foo.com
    secretName: app-tls-staging
This means there are two levels of SSL. If I set the servicePort of the Ingress to 443 to ensure traffic between Google's LB and my pod is encrypted, the endpoint returns 502 Server Error. Is this because it doesn't know to trust my self-signed cert? Is there a way to specify a CA?
Once I set it back to 80 the Ingress starts working normally again after a few minutes.
What you're looking for is "re-encryption" for Load Balancer <=> cluster traffic. If you see here, re-encryption is listed as "Future Work".
But I see this feature is merged: https://github.com/kubernetes/ingress/pull/519/commits
It looks like you can find an example here and it's described as "Backend HTTPS" in this document: https://github.com/kubernetes/ingress/tree/master/controllers/gce#backend-https
However today this feature seems to be in alpha, so it may be subject to change and you may need to create an alpha GKE cluster (which is short-lived) to use it.
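If memory serves, the "Backend HTTPS" support in that document is driven by an annotation on the backend Service rather than on the Ingress; a sketch of roughly what it looked like at the time (the annotation key and port name should be double-checked against the linked doc, since the feature was alpha and may have changed):
apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    # tells the GCE ingress controller to speak HTTPS to this named port
    service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'
spec:
  selector:
    app: app  # assumed label
  ports:
  - name: my-https-port
    port: 443
    targetPort: 443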