I am very new to using Helm charts for deploying containers, and I have also never worked with nginx or ingress controllers.
However, I am being asked to look into enabling SSL-passthrough on our internal nginx ingress controllers.
Right now we have external (public-facing) and internal controllers: the public ones allow SSL-passthrough, while the internal ones do SSL-termination.
I have also been told that nginx is a reverse proxy, and that it routes requests based on headers such as the Host header.
I am hoping someone can help me out on this helm chart that I have for the internal ingress controllers.
Currently I am under the impression that having SSL-termination as well as SSL-passthrough on the same ingress controller would not be possible.
Answered this one myself: https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru
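For anyone landing here, the technique in that link boils down to nginx's stream module: plain TCP load balancing, so the TLS handshake passes through to the backends untouched. A minimal sketch of the idea (the upstream name and IPs are placeholders, not from our setup):

```nginx
stream {
    # Backends that terminate TLS themselves (placeholder addresses)
    upstream tls_backends {
        server 10.0.0.10:443;
        server 10.0.0.11:443;
    }

    server {
        listen 443;
        # No ssl_certificate here: bytes are forwarded as-is,
        # so the client's TLS session ends at the backend.
        proxy_pass tls_backends;
    }
}
```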
Our current (internal) ingress code:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu:110:certificate/62-b3
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: !!str 443
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: !!str 3600
    targetPorts:
      https: 80
  replicaCount: 3
defaultBackend:
  replicaCount: 3
Can I simply add the following?
controller:
  extraArgs:
    enable-ssl-passthrough: ""
Note: the above snippet is what we use on our external ingress controllers.
Additionally, I found this:
Ingress and SSL Passthrough
Can I just go and mix the annotations? Or do annotations only care about the 'top-level domain' they come from?
eg:
service.beta.kubernetes.io
nginx.ingress.kubernetes.io
Both come from the domain kubernetes.io, or does the sub-domain make a difference?
I mean: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
That page doesn't show any of the service.beta annotations.
What's the difference between the extraArg ssl-passthrough configuration and the ssl-passthrough configuration in the annotations?
I'm looking mostly for an answer on how to get the SSL-passthrough working without breaking the SSL-termination on the internal ingress controllers.
However, any extra information to gain more insight and knowledge as far as my other questions go would also be very appreciated :)
So I found the answer to my own question(s):
The annotations appear to be 'configuration items'. I'm using quotes because I can't find a better term.
The extraArgs parameter is where you can pass any parameter to the controller as if it were a command-line parameter.
And I think it is also safe to say that annotations from the same top-level domain can be mixed; I have not found any that come from a domain other than kubernetes.io. (In practice the full prefix matters: service.beta.kubernetes.io annotations are read by the cloud provider's Service controller, while nginx.ingress.kubernetes.io annotations are read by the nginx ingress controller.)
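To make the extraArgs behaviour concrete: the chart turns each key into a flag on the controller container. A hedged sketch of roughly what gets rendered into the controller Deployment (the image tag and exact argument list are assumptions on my part):

```yaml
containers:
  - name: controller
    image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # tag is an assumption
    args:
      - /nginx-ingress-controller
      - --ingress-class=nginx-internal
      - --enable-ssl-passthrough   # rendered from extraArgs: enable-ssl-passthrough: ""
```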
To get my ingress controller to work side by side with the SSL-termination controller, the helm chart looks as follows:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal-ssl-passthrough
  service:
    annotations:
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "tag3=value3, tag3=value3, tag3=value3, tag3=value3"
    targetPorts:
      https: 443
  replicaCount: 2
  extraArgs:
    enable-ssl-passthrough: ""
defaultBackend:
  replicaCount: 2
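For completeness, a backend then opts in to passthrough on its own Ingress resource pointed at this controller class. A hedged sketch with placeholder names and hosts (note that with ssl-passthrough the proxying happens at the TLS/SNI level, and the pod terminates TLS itself):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-app                  # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx-internal-ssl-passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: secure-app.internal.example.com   # placeholder host, matched via SNI
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: secure-app
                port:
                  number: 443       # the pod terminates TLS itself
```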
It took me about two days of researching/searching the web and six deployments to get the whole setup working with an AWS NLB, ssl-passthrough enabled, cross-zone load balancing, etc. But after having found the following pages it went pretty fast:
https://kubernetes.github.io/ingress-nginx/deploy/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
https://kubernetes.io/docs/concepts/services-networking/service/
This last page helped me a lot. If someone else needs to deploy SSL-termination and SSL-passthrough side by side, for either public or private connections, I hope this helps too.
From there you can find out how to redirect the HTTPS traffic to the pod without SSL-termination:
https://stackoverflow.com/a/66767691/1938507
I was using Istio 1.8.6, and now we have migrated to 1.14.5.
After this upgrade the AuthorizationPolicy stopped working as it did previously.
In my case, I have 2 namespaces, and I want to restrict my namespace-1 to only accept requests coming from namespace-2. Services in namespace-1 must not call other services in that same namespace-1.
This is the AuthorizationPolicy:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-only-ns-1
  namespace: namespace-1
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            namespaces: ["namespace-2"]
I have an API gateway running in namespace-2 to map/route all services in namespace-1.
So, if a service in namespace-1 needs to call another service in that namespace, it must call it via the API gateway running in namespace-2.
This is an example of an allowed flow:
service-1.namespace-1 -> api-gateway.namespace-2 -> service-2.namespace-1
This is an example of a flow that is NOT allowed:
service-1.namespace-1 -> service-2.namespace-1
After this Istio upgrade (1.14.5), the AuthorizationPolicy has stopped working. The new version blocks those requests with the error 403 Forbidden (RBAC); the services are not allowed to receive requests from anywhere.
The old version (1.8.6) was working correctly in namespace-1, blocking requests coming from namespace-1 and allowing requests from namespace-2.
Any idea what is going on?
I am running Traefik, and I first configured Cloudflare as my certresolver for domain1.com. But I also have domain2.net hosted on Route 53.
This is what I have so far:
--entrypoints.websecure.http.tls.certresolver=cloudflare
--entrypoints.websecure.http.tls.domains[0].main=local.domain1.com
--entrypoints.websecure.http.tls.domains[0].sans=*.local.domain1.com
--certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare
--certificatesresolvers.cloudflare.acme.email=myemail@gmail.com
--certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1
--certificatesresolvers.cloudflare.acme.storage=/certs/acme.json
--entrypoints.websecure.web.tls.domains[1].main=local.domain2.net
--entrypoints.websecure.web.tls.domains[1].sans=*.local.domain2.net
--certificatesresolvers.route53.acme.dnschallenge.provider=route53
--certificatesresolvers.route53.acme.email=myemail@gmail.com
--certificatesresolvers.route53.acme.storage=/certs/acme.json
But when I set it up this way, only route53 ends up configured as a certificate resolver. That's because it's defined last.
Is there a way to make this work with multiple certificate resolvers?
Thanks!
I figured this out and forgot to update.
Just create additional args on the Traefik deployment; note that each resolver now gets its own storage file instead of both sharing /certs/acme.json:
- --certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare
- --certificatesresolvers.cloudflare.acme.email=myemail@gmail.com
- --certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1
- --certificatesresolvers.cloudflare.acme.storage=/certs/cloudflare.json
- --certificatesresolvers.route53.acme.dnschallenge.provider=route53
- --certificatesresolvers.route53.acme.email=myemail@gmail.com
- --certificatesresolvers.route53.acme.storage=/certs/route53.json
Then, for each app deployment, you reference the matching certresolver and entrypoint in the annotations of its own Ingress, with its own domain.
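Concretely, with Traefik's Kubernetes Ingress provider you can select the resolver per app through router annotations. A sketch with placeholder names and hosts:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1                                   # placeholder name
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    # Pick the resolver that matches this domain's DNS provider:
    traefik.ingress.kubernetes.io/router.tls.certresolver: cloudflare  # or route53
spec:
  rules:
    - host: app1.local.domain1.com             # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 80
```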
We are leveraging Kubernetes ingress with external-service JWT authentication, using auth-url as part of the ingress.
Now we want to use the auth-cache-key annotation to control the caching of the JWT token. Currently our external auth service just responds with 200/401 by looking at the token. All our components are backend micro-services with REST APIs, and an incoming request may not be a UI request. How do we fill in the auth-cache-key for an incoming JWT token?
annotations:
  nginx.ingress.kubernetes.io/auth-url: http://auth-service/validate
  nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  nginx.ingress.kubernetes.io/auth-cache-key: '$remote_user$http_authorization'
  nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
  kubernetes.io/ingress.class: "nginx"
Looking at the example, $remote_user$http_authorization is what the K8s documentation gives. However, I'm not sure $remote_user will be set in our case, because this is not external basic auth. How do we decide on the auth cache key in this case?
There aren't many examples or much documentation around this.
Posting a general answer, as no further details or explanation were provided.
It's true that there is not much documentation around this, so I decided to dig into the NGINX Ingress source code.
The value set in the annotation nginx.ingress.kubernetes.io/auth-cache-key ends up in the template variable $externalAuth.AuthCacheKey:
{{ if $externalAuth.AuthCacheKey }}
set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}';
set $cache_key '';
As you can see, $externalAuth.AuthCacheKey is embedded in $tmp_cache_key, which is then SHA-1 hashed, base64-encoded, and assigned to $cache_key via the NGINX Lua module:
rewrite_by_lua_block {
  ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key))
}
Then $cache_key is used to set the variable $proxy_cache_key, which defines the key for caching:
proxy_cache_key "$cache_key";
Based on the above code, we can assume that any NGINX variable can be used in the nginx.ingress.kubernetes.io/auth-cache-key annotation. Please note that some variables are only available if the corresponding module is loaded.
Example: I set the following auth-cache-key annotation:
nginx.ingress.kubernetes.io/auth-cache-key: '$proxy_host$request_uri'
Then, on the NGINX Ingress controller pod, the file /etc/nginx/nginx.conf contains the following line:
set $tmp_cache_key '{my-host}/_external-auth-Lw-Prefix$proxy_host$request_uri';
If you set the auth-cache-key annotation to a nonexistent NGINX variable, NGINX will throw the following error:
nginx: [emerg] unknown "nonexistent_variable" variable
It's up to you which variables you need.
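For the JWT scenario in the question, one option (my suggestion, not something the docs prescribe) is to drop $remote_user, which is only populated by basic auth, and key the cache on the Authorization header alone:

```yaml
annotations:
  # $remote_user stays empty for bearer tokens, so it adds nothing here;
  # $http_authorization makes the cache entry unique per JWT.
  nginx.ingress.kubernetes.io/auth-cache-key: '$http_authorization'
  nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
```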
Please also check the following articles and topics:
A Guide to Caching with NGINX and NGINX Plus
external auth provider results in a lot of external auth requests
Are there any node modules for the Sails.js framework to create an SSL certificate using Let's Encrypt?
There is a middleware that enables the http->https redirect and also handles the ACME-validation requests from Let's Encrypt. As far as I can tell, it does not actually trigger the renewal, nor write anything; I believe the ACME scripts handle that as cron jobs every 3 months or so, allowing your app to just validate automatically when they run. I haven't implemented this myself yet, though.
I would also ask you to really consider using CloudFlare or some other SSL-termination service, as that also gives you a lot of other benefits like DDoS protection, some CDN-features etc.
Docs: sailshq/lifejacket
As has been mentioned, you should consider the best overall solution in terms of CloudFlare or SSL-offload via nginx etc.
However, you can use greenlock-express.js for this to achieve SSL with LetsEncrypt directly within the Sails node environment.
The example below:
- Configures an HTTP express app using greenlock on port 80 that handles the redirects to HTTPS and the LetsEncrypt business logic.
- Uses the greenlock SSL configuration to configure the primary Sails app as HTTPS on port 443.
Sample configuration for config/local.js:
// returns an instance of greenlock.js with additional helper methods
var glx = require('greenlock-express').create({
server: 'https://acme-v02.api.letsencrypt.org/directory'
, version: 'draft-11' // Let's Encrypt v2 (ACME v2)
, telemetry: true
, servername: 'domainname.com'
, configDir: '/tmp/acme/'
, email: 'myemail@somewhere.com'
, agreeTos: true
, communityMember: true
, approveDomains: [ 'domainname.com', 'www.domainname.com' ]
, debug: true
});
// handles acme-challenge and redirects to https
require('http').createServer(glx.middleware(require('redirect-https')())).listen(80, function () {
console.log("Listening for ACME http-01 challenges on", this.address());
});
module.exports = {
  port: 443,
  ssl: true,
  http: {
    serverOptions: glx.httpsOptions,
  },
};
Refer to the greenlock documentation for fine-tuning configuration detail, but the above gets an out-of-the-box LetsEncrypt working with Sails.
Note also that you may wish to place this configuration somewhere like config/env/production.js as appropriate.
We have an AWS Elastic Beanstalk application. There are various environments, some load-balanced and some not.
At the moment, for the load-balanced ones the SSL is configured manually in the console, whereas the single-instance ones are configured via the .ebextensions file (and only for the single-instance deployments).
Is there a way to configure the SSL for load balancers via the .ebextensions file as well, so we can keep it all in one place, and automate it?
I have not tried this yet, but while reading the documentation I discovered that it is possible to automate it.
If you have any luck following the instructions in the documentation, please let me know.
Update:
I actually tested it, and yes, it is possible. Here follows a configuration example:
option_settings:
  - namespace: aws:elb:listener:443
    option_name: ListenerProtocol
    value: HTTPS
  - namespace: aws:elb:listener:443
    option_name: InstancePort
    value: 80
  - namespace: aws:elb:listener:443
    option_name: InstanceProtocol
    value: HTTP
  - namespace: aws:elb:listener:443
    option_name: SSLCertificateId
    value: arn:aws:iam::<your arn cert id here>
  - namespace: aws:elb:listener:80
    option_name: ListenerEnabled
    value: true
  - namespace: aws:elb:listener:443
    option_name: ListenerEnabled
    value: true
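Note that the aws:elb:* namespaces above target a Classic Load Balancer. If the environment uses an Application Load Balancer instead, the equivalent settings live under the aws:elbv2:* namespaces; to the best of my knowledge, something like this (the certificate ARN is a placeholder):

```yaml
option_settings:
  - namespace: aws:elbv2:listener:443
    option_name: ListenerEnabled
    value: true
  - namespace: aws:elbv2:listener:443
    option_name: Protocol
    value: HTTPS
  - namespace: aws:elbv2:listener:443
    option_name: SSLCertificateArns
    value: arn:aws:acm:<region>:<account-id>:certificate/<cert-id>  # placeholder
```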