Login page inside ingress in kubernetes - authentication

How can I have a login page inside my ingress (nginx)? I know I can use basic authentication or OAuth, but I want a login page with just one user, and I don't want it to be like basic authentication. I want it to be a dedicated page.

As per the official NGINX Ingress Controller documentation, you can create a custom nginx page for OAuth or basic authentication with the NGINX ingress controller. For this you have to use a volume, and if you are using a new template, the ConfigMap also needs to be updated.
By using a volume you can add your custom template to the nginx deployment like this:
volumeMounts:
  - mountPath: /etc/nginx/template
    name: nginx-template-volume
    readOnly: true
volumes:
  - name: nginx-template-volume
    configMap:
      name: nginx-template
      items:
        - key: custom-nginx.tmpl
          path: custom-nginx.tmpl
For more detailed information on how to use custom templates, refer to the linked documentation.
Try this tutorial for more details (refer to the custom templates section).
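As a side note (my own sketch, not from the answer above): if the goal is a real login page rather than the browser's basic-auth prompt, the controller's external-auth annotations can redirect unauthenticated users to a page you host yourself. All names and hosts below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                       # placeholder name
  annotations:
    # Every request is first checked against this endpoint (2xx = allowed)
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/verify"
    # Unauthenticated users are redirected to this custom login page
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/login"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

The service behind auth-url can be as small as a single-user login form that sets a session cookie, which avoids touching the nginx template at all.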

Related

A few questions about authentication and authorization with Kong jwt in microservices architecture

As a newbie in microservices architecture, I need to ask a few questions about implementing JWT authentication using Kong.
The architecture of my application looks like in the picture below:
So far I have only used Kong as a proxy and load balancer. The Authentication Service was responsible for creating the token, which happened during registration and login. During registration or login, the authentication service asked the user service, and the user service checked the user's data in the MongoDB database. Each endpoint of the other services had to receive a JWT in the header and had a function, along with a secret, that decoded the token. However, it seems to me that this is unnecessary duplication of code, and the whole process of creating and decoding the JWT could, or even should, be done in Kong with the JWT plugin.
I tried to follow a couple of tutorials and YouTube guides just like this one:
JWT Kong Gateway
Unfortunately, each of the tutorials shows how to create a JWT only for a single consumer, without Kong being connected to the database.
My kong.yml file:
_format_version: "3.0"
_transform: true

services:
  - name: building_service
    url: http://building_service/building
    routes:
      - name: building_service_route
        paths:
          - /building
  - name: user_service
    url: http://user_service/user
    routes:
      - name: user_service_route
        paths:
          - /user
  - name: role_service
    url: http://role_service/role
    routes:
      - name: role_service_route
        paths:
          - /role
  - name: task_service
    url: http://task_service/task
    routes:
      - name: task_service_route
        paths:
          - /task
  - name: authorization_service
    url: http://authorization_service/authorization
    routes:
      - name: authorization_service_route
        paths:
          - /authorization

plugins:
  - name: jwt
    route: building_service_route
    enabled: true
    config:
      key_claim_name: kid
      claims_to_verify:
        - exp
  # consumers:
  #   - username: login_server_issuer
  # jwt_secrets:
  #   - consumer: login_server_issuer
  #     secret: "secret-hash-brown-bear-market-rate-limit"
  - name: bot-detection
  - name: rate-limiting
    config:
      minute: 60
      policy: local
Kong service in docker-compose.yml:
services:
  kong:
    build: ./App/kong
    volumes:
      - ./App/kong/kong.yml:/usr/local/kong/declarative/kong.yml
    container_name: kong
    environment:
      KONG_DATABASE: 'off'
      KONG_PROXY_ACCESS_LOG: '/dev/stdout'
      KONG_ADMIN_ACCESS_LOG: '/dev/stdout'
      KONG_PROXY_ERROR_LOG: '/dev/stderr'
      KONG_ADMIN_ERROR_LOG: '/dev/stderr'
      KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
      KONG_DECLARATIVE_CONFIG: "/usr/local/kong/declarative/kong.yml"
    command: "kong start"
    networks:
      - api-network
    ports:
      - "8000:8000"
      - "8443:8443"
      - "127.0.0.1:8001:8001"
      - "127.0.0.1:8444:8444"
My list of questions:
How should the authentication service connect to Kong and create a JWT with the chosen user's (as I understand it, consumer's) data?
Should Kong somehow be connected to the database to get the required user data and create the secret?
How do I decode the JWT with Kong and pass it on to the other services in a header?
Can anyone provide an example of how to achieve the desired result?
Have I misunderstood something about JWT or Kong, and is what I want to achieve impossible?
If you can consider using Keycloak for user management, then you can have a look at the jwt-keycloak plugin:
https://github.com/gbbirkisson/kong-plugin-jwt-keycloak
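To make the flow concrete, here is a sketch (my own, not from the question) of how an authentication service can mint a token that the kong.yml above would accept, assuming the commented-out consumer is enabled and its jwt_secrets entry has key login_server_issuer. This builds the HS256 JWT with only the standard library; in practice a library like PyJWT would do the same:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_kong_jwt(key: str, secret: str) -> str:
    """Build an HS256 JWT that Kong's jwt plugin can verify.

    The 'kid' claim must equal the consumer's jwt_secrets key because
    the plugin is configured with key_claim_name: kid.
    """
    header = {"typ": "JWT", "alg": "HS256"}
    payload = {
        "kid": key,                      # identifies the Kong consumer
        "exp": int(time.time()) + 3600,  # checked due to claims_to_verify: [exp]
    }
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = make_kong_jwt("login_server_issuer", "secret-hash-brown-bear-market-rate-limit")
print(len(token.split(".")))  # header, payload, signature
```

Kong itself never creates tokens; your service signs them with the consumer's secret, and Kong only verifies the signature and forwards the original Authorization header upstream, so the other services receive the claims without needing the secret.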

Secure mTLS communication within Istio-knative services + external requests

We are converting existing k8s services to use istio & knative. The services receive requests from external users as well as from within the cluster. We are trying to setup Istio AuthorizationPolicy to achieve the below requirements:
Certain paths (like docs/healthchecks) should not require any special header or anything and must be accessible from anywhere
Health and metric collection paths that knative requires must be accessible only by the knative controllers
Any request coming from outside the cluster (through knative-serving/knative-ingress-gateway basically) must contain a key header matching a pre-shared key
Any request coming from any service within the cluster can access all the paths
Below is a sample of what I am trying. I am able to get the first 3 requirements working but not the last one...
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: "ALLOW"
  rules:
    - to:
        - operation:
            methods:
              - "GET"
            paths:
              - "/docs"
              - "/openapi.json"
              - "/redoc"
              - "/rest/v1/healthz"
    - to:
        - operation:
            methods:
              - "GET"
            paths:
              - "/healthz*"
              - "/metrics*"
      when:
        - key: "request.headers[User-Agent]"
          values:
            - "Knative-Activator-Probe"
            - "Go-http-client/1.1"
    - to:
        - operation:
            paths:
              - "/rest/v1/myapp*"
      when:
        - key: "request.headers[my-key]"
          values:
            - "asjhfhjgdhjsfgjhdgsfjh"
    - from:
        - source:
            namespaces:
              - "*"
We have made no changes to the mTLS configuration provided by default by istio-knative setup, so assume that the mtls mode is currently PERMISSIVE.
Details of tech stack involved
AWS EKS - Version 1.21
Knative Serving - Version 1.1 (with Istio 1.11.5)
I'm not an Istio expert, but you might be able to express the last policy based on either the ingress gateway (have one which is listening only on a ClusterIP address), or based on the SourceIP being within the cluster. For the latter, I'd want to test that Istio is using the actual SourceIP and not substituting in the Forwarded header's IP address (a different reasonable configuration).
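Another option (my own sketch, not from the answer above): "any request from inside the cluster" can be expressed via the peer certificate identity. Note that source.namespaces and source.principals are both derived from the client's mTLS certificate, so under PERMISSIVE mode a plaintext in-cluster caller presents neither, which may be why the namespaces: "*" rule never matches. With STRICT mTLS between the workloads, a rule like this should work:

```yaml
# Sketch, assuming STRICT mTLS: the principal comes from the peer
# certificate, so plaintext traffic carries no principal at all.
- from:
    - source:
        principals: ["cluster.local/*"]   # prefix match on any in-mesh identity
```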

Kubernetes internal nginx ingress controller with SSL termination & ssl-passthrough

I am very new to using helm charts for deploying containers, and I have also never worked with nginx controllers or ingress controllers.
However, I am being asked to look into improving our internal nginx ingress controllers to allow for SSL-passthrough.
Right now we have external (public-facing) and internal controllers, where the public ones allow SSL-passthrough and the internal ones do SSL-termination.
I have also been told that nginx is a reverse proxy, and that it works based on headers in the URL.
I am hoping someone can help me out on this helm chart that I have for the internal ingress controllers.
Currently I am under the impression that having SSL termination as well as SSL-passthrough on the same ingress controllers would not be possible.
Answered this one myself: https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru
Our current (internal) ingress code:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu:110:certificate/62-b3
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: !!str 443
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: !!str 3600
    targetPorts:
      https: 80
  replicaCount: 3
defaultBackend:
  replicaCount: 3
Can I simply add the following?
controller:
  extraArgs:
    enable-ssl-passthrough: ""
Note: The above piece of code is what we use on our external ingress controller.
Additionally, I found this:
Ingress and SSL Passthrough
Can I just go and mix the annotations? Or do annotations only care about the 'top domain level' where the annotation comes from?
eg:
service.beta.kubernetes.io
nginx.ingress.kubernetes.io
Both come from the domain kubernetes.io, or does the sub-domain make a difference?
I mean: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
That page doesn't show any of the service.beta annotations.
What's the difference between the extraArg ssl-passthrough configuration and the ssl-passthrough configuration in the annotations?
I'm looking mostly for an answer on how to get the SSL-passthrough working without breaking the SSL-termination on the internal ingress controllers.
However, any extra information to gain more insight and knowledge as far as my other questions go would also be very appreciated :)
So I found the answer to my own question(s):
The annotations appear to be 'configuration items'. I'm using quotes because I can't find a better term.
The extraArgs parameter is where you can pass any parameter to the controller as if it were a command-line parameter.
And I think it is also safe to say that the annotations can all share the same top-level domain; I have not found any that came from a domain other than kubernetes.io.
To get my ingress controller to work side-by-side with the SSL-termination controller the helm chart looks as following:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal-ssl-passthrough
  service:
    annotations:
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "tag3=value3, tag3=value3, tag3=value3, tag3=value3"
    targetPorts:
      https: 443
  replicaCount: 2
  extraArgs:
    enable-ssl-passthrough: ""
defaultBackend:
  replicaCount: 2
Took me about 2 days of researching/searching the web and 6 deployments to get the whole setup working with the AWS NLB, ssl-passthrough enabled, cross-zone load balancing, etc. But after having found the following pages it went pretty fast:
https://kubernetes.github.io/ingress-nginx/deploy/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
https://kubernetes.io/docs/concepts/services-networking/service/
This last page helped me a lot. If someone else gets to deploy SSL-termination and SSL-passthrough for either public or private connections, I hope this helps too.
From here you can find out how to redirect the HTTPS traffic to the pod without SSL-termination
https://stackoverflow.com/a/66767691/1938507
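One detail worth adding (my own sketch; all names below are placeholders): enable-ssl-passthrough only switches the feature on in the controller. Each Ingress that should bypass termination then requests it with the ssl-passthrough annotation on the Ingress resource itself, for example:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-tls-app              # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx-internal-ssl-passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: app.internal.example.com   # passthrough is SNI-based, so a host is required
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-tls-app
                port:
                  number: 443
```

Ingresses without the annotation keep getting terminated as before, which is how termination and passthrough coexist on the same controller.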

Istio Authorization with JWT

I am running Istio 1.0.2 and am unable to configure service authorization based on JWT claims against Azure AD.
I have successfully configured and validated Azure AD OIDC JWT end-user authentication, and it works fine.
Now I'd like to configure RBAC authorization using the request.auth.claims["preferred_username"] attribute.
I've created a ServiceRole and ServiceRoleBinding like below:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: service-reader
  namespace: default
spec:
  rules:
    - services: ["myservice.default.svc.cluster.local"]
      methods: ["GET"]
      paths: ["*/products"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: service-reader-binding
  namespace: default
spec:
  subjects:
    - properties:
        source.principal: "*"
        request.auth.claims["preferred_username"]: "user#company.com"
  roleRef:
    kind: ServiceRole
    name: "service-reader"
However, I keep getting 403 Forbidden from the service proxy, even though the preferred_username claim in the token is correct.
If I comment out the request.auth.claims["preferred_username"]: "user#company.com" line, the request succeeds.
Can anyone point me in the right direction regarding configuring authorization based on oidc and jwt?
Never mind, I found the problem.
I was missing the user: "*" check to allow all users.
So under subjects it should say:
subjects:
  - user: "*"
    properties:
      source.principal: "*"
      request.auth.claims["preferred_username"]: "user#company.com"
That fixes it.

Can you configure ElasticBeanstalk Loadbalanced SSL cert via the .ebexensions file?

We have an AWS ElasticBeanstalk application. There are various environments, some load balanced some not.
At the moment, for the load-balanced ones the SSL is configured manually in the console, whereas the single-instance ones are configured via the .ebextensions file (and only for the single-instance deployments).
Is there a way to configure the SSL for load balancers via the .ebextensions file as well, so we can keep it all in one place, and automate it?
I have not tried this yet, but while reading the documentation I discovered that it is possible to automate it.
If you have any luck following the instructions in the documentation, please let me know.
Update:
I actually tested it, and yes, it is possible. Here follows a configuration example:
option_settings:
  - namespace: aws:elb:listener:443
    option_name: ListenerProtocol
    value: HTTPS
  - namespace: aws:elb:listener:443
    option_name: InstancePort
    value: 80
  - namespace: aws:elb:listener:443
    option_name: InstanceProtocol
    value: HTTP
  - namespace: aws:elb:listener:443
    option_name: SSLCertificateId
    value: arn:aws:iam::<your arn cert id here>
  - namespace: aws:elb:listener:80
    option_name: ListenerEnabled
    value: true
  - namespace: aws:elb:listener:443
    option_name: ListenerEnabled
    value: true
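For what it's worth, if the environment uses an Application Load Balancer rather than the classic ELB addressed above, the aws:elbv2:listener namespaces are the equivalent. An untested sketch (substitute your own certificate ARN):

```yaml
option_settings:
  - namespace: aws:elbv2:listener:443
    option_name: Protocol
    value: HTTPS
  - namespace: aws:elbv2:listener:443
    option_name: SSLCertificateArns
    value: arn:aws:acm:<region>:<account>:certificate/<id>
  - namespace: aws:elbv2:listener:default
    option_name: ListenerEnabled
    value: true
```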