We have an AWS Elastic Beanstalk application with various environments, some load balanced and some not.
At the moment, SSL for the load-balanced environments is configured manually in the console, whereas the single-instance environments are configured via the .ebextensions file (and only for the single-instance deployments).
Is there a way to configure SSL for the load balancers via the .ebextensions file as well, so we can keep it all in one place and automate it?
I have not tried this yet, but while reading the documentation I discovered that it is possible to automate it.
If you have any luck following the instructions in the documentation, please let me know.
Update:
I actually tested it, and yes, it is possible. Here is a configuration example:
option_settings:
  - namespace: aws:elb:listener:443
    option_name: ListenerProtocol
    value: HTTPS
  - namespace: aws:elb:listener:443
    option_name: InstancePort
    value: 80
  - namespace: aws:elb:listener:443
    option_name: InstanceProtocol
    value: HTTP
  - namespace: aws:elb:listener:443
    option_name: SSLCertificateId
    value: arn:aws:iam::<your arn cert id here>
  - namespace: aws:elb:listener:80
    option_name: ListenerEnabled
    value: true
  - namespace: aws:elb:listener:443
    option_name: ListenerEnabled
    value: true
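One way to verify the listeners after deployment is the AWS CLI (the application and environment names below are placeholders):
aws elasticbeanstalk describe-configuration-settings \
  --application-name my-app \
  --environment-name my-env \
  --query "ConfigurationSettings[].OptionSettings[?Namespace=='aws:elb:listener:443']"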
I have an instance of MuleSoft's Flex Gateway (v 1.2.0) installed on a Linux machine in a Podman container. I am trying to forward container as well as API logs to Splunk. Below is my log.yaml file in the /home/username/app folder. Not sure what I am doing wrong, but the logs are not getting forwarded to Splunk.
apiVersion: gateway.mulesoft.com/v1alpha1
kind: Configuration
metadata:
  name: logging-config
spec:
  logging:
    outputs:
      - name: default
        type: splunk
        parameters:
          host: <instance-name>.splunkcloud.com
          port: "443"
          splunk_token: xxxxx-xxxxx-xxxx-xxxx
          tls: "on"
          tls.verify: "off"
          splunk_send_raw: "on"
    runtimeLogs:
      logLevel: info
      outputs:
        - default
    accessLogs:
      outputs:
        - default
Please advise.
The endpoint for Splunk's HTTP Event Collector (HEC) is https://http-input.<instance-name>.splunkcloud.com:443/services/collector/raw. If you're using a free trial of Splunk Cloud then change the port number to 8088. See https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform for details.
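To rule out connectivity or token problems before changing the gateway config, you can post a test event straight to the raw endpoint with curl (the instance name, token, and channel GUID below are placeholders):
curl -k "https://http-input.<instance-name>.splunkcloud.com:443/services/collector/raw?channel=0aa34abc-84ab-4abc-9d89-123451234abc" \
  -H "Authorization: Splunk xxxxx-xxxxx-xxxx-xxxx" \
  -d "test event from the flex gateway host"
If that returns something like {"text":"Success","code":0}, the HEC side is fine and the problem is on the gateway side.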
I managed to get this to work. The issue was that I had to give full permissions to the app folder using the chmod command. After that was done, the fluent-bit.conf file had an entry for Splunk and the logs started flowing.
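For reference, the permission change was along these lines (a blunt sketch; the exact mode was not given, and something narrower scoped to the gateway user is preferable):
chmod -R 777 /home/username/app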
If I understand right from Apply Pod Security Standards at the Cluster Level, in order to have a PSS (Pod Security Standard) as default for the whole cluster I need to create an AdmissionConfiguration in a file that the API server needs to consume during cluster creation.
I don't see any way to configure or provide the AdmissionConfiguration at CreateCluster, and I'm also not sure how to provide this AdmissionConfiguration in a managed EKS cluster.
From the tutorials that use KinD or minikube it seems that the AdmissionConfiguration must be in a file that is referenced in the cluster-config.yaml, but if I'm not mistaken the EKS API server is managed and does not allow you to change or even see this file.
The GitHub issue aws/container-roadmap "Allow Access to AdmissionConfiguration" seems to suggest that there is currently no way of providing an AdmissionConfiguration at creation, but on the other hand aws-eks-best-practices says "These exemptions are applied statically in the PSA admission controller configuration as part of the API server configuration".
So, is there a way to provide a PodSecurityConfiguration for the whole cluster in EKS, or am I forced to just use per-namespace labels?
See also Enforce Pod Security Standards by Configuring the Built-in Admission Controller and the EKS Best Practices on PSS and PSA.
I don't think there is any way currently in EKS to provide configuration for the built-in PSA controller (Pod Security Admission controller).
But if you want to implement a cluster-wide default for PSS (Pod Security Standards) you can do that by installing the official pod-security-webhook as a Dynamic Admission Controller in EKS.
git clone https://github.com/kubernetes/pod-security-admission
cd pod-security-admission/webhook
make certs
kubectl apply -k .
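Optionally, check that the webhook is running before changing its configuration (the namespace name comes from the bundled manifests):
kubectl -n pod-security-webhook get pods
kubectl get validatingwebhookconfigurations | grep pod-security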
The default podsecurityconfiguration.yaml in pod-security-admission/webhook/manifests/020-configmap.yaml allows EVERYTHING, so you should edit it and write something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-security-webhook
  namespace: pod-security-webhook
data:
  podsecurityconfiguration.yaml: |
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["policy-test2"]
then
kubectl apply -k .
kubectl -n pod-security-webhook rollout restart deployment/pod-security-webhook # otherwise the pods won't reread the configuration changes
After those changes you can verify that the default forbids privileged pods with:
kubectl --context aihub-eks-terraform create ns policy-test1
kubectl --context aihub-eks-terraform -n policy-test1 run --image=ecerulm/ubuntu-tools:latest --rm -ti rubelagu-$RANDOM --privileged
Error from server (Forbidden): admission webhook "pod-security-webhook.kubernetes.io" denied the request: pods "rubelagu-32081" is forbidden: violates PodSecurity "restricted:latest": privileged (container "rubelagu-32081" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "rubelagu-32081" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "rubelagu-32081" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "rubelagu-32081" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "rubelagu-32081" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Note that you get the error forbidding privileged pods even though the namespace policy-test1 has no pod-security.kubernetes.io/enforce label, so you know that this rule comes from the pod-security-webhook that we just installed and configured.
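For comparison, the per-namespace approach mentioned in the question (the built-in PSA mechanism) would set that label explicitly, for example:
kubectl label namespace policy-test1 \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest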
Now if you want to create a pod, you will be forced to create it in a way that complies with the restricted PSS, by specifying runAsNonRoot, seccompProfile.type, and capabilities. For example:
apiVersion: v1
kind: Pod
metadata:
  name: test-1
spec:
  restartPolicy: Never
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: test
      image: ecerulm/ubuntu-tools:latest
      imagePullPolicy: Always
      command: ["/bin/bash", "-c", "--", "sleep 900"]
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
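Assuming that spec is saved as test-1.yaml (the file name is just an example), it should now be admitted into the same namespace:
kubectl --context aihub-eks-terraform -n policy-test1 apply -f test-1.yaml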
We are converting existing k8s services to use Istio & Knative. The services receive requests from external users as well as from within the cluster. We are trying to set up an Istio AuthorizationPolicy to achieve the requirements below:
Certain paths (like docs/healthchecks) should not require any special header or anything and must be accessible from anywhere
Health & metric collection paths required by Knative must be accessible only by the Knative controllers
Any request coming from outside the cluster (through knative-serving/knative-ingress-gateway basically) must contain a key header matching a pre-shared key
Any request coming from any service within the cluster can access all the paths
Below is a sample of what I am trying. I am able to get the first 3 requirements working but not the last one...
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-svc
  namespace: my-ns
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: my-svc
  action: "ALLOW"
  rules:
    - to:
        - operation:
            methods:
              - "GET"
            paths:
              - "/docs"
              - "/openapi.json"
              - "/redoc"
              - "/rest/v1/healthz"
    - to:
        - operation:
            methods:
              - "GET"
            paths:
              - "/healthz*"
              - "/metrics*"
      when:
        - key: "request.headers[User-Agent]"
          values:
            - "Knative-Activator-Probe"
            - "Go-http-client/1.1"
    - to:
        - operation:
            paths:
              - "/rest/v1/myapp*"
      when:
        - key: "request.headers[my-key]"
          values:
            - "asjhfhjgdhjsfgjhdgsfjh"
    - from:
        - source:
            namespaces:
              - "*"
We have made no changes to the mTLS configuration provided by default by istio-knative setup, so assume that the mtls mode is currently PERMISSIVE.
Details of tech stack involved:
AWS EKS - Version 1.21
Knative Serving - Version 1.1 (with Istio 1.11.5)
I'm not an Istio expert, but you might be able to express the last policy based on either the ingress gateway (have one which is listening only on a ClusterIP address), or based on the SourceIP being within the cluster. For the latter, I'd want to test that Istio is using the actual SourceIP and not substituting in the Forwarded header's IP address (a different reasonable configuration).
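A rough sketch of the SourceIP variant, as a replacement for the last rule above (the CIDR is a placeholder for your cluster/VPC range, and it only helps if Istio sees the original client address rather than a forwarded one):
    - from:
        - source:
            ipBlocks:
              - "10.0.0.0/16" # placeholder: pod/VPC CIDR of the cluster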
I'm trying to use the Istio end-user authentication example with the latest Rancher, but I'm getting the below error
unable to recognize "STDIN": no matches for kind "RequestAuthentication" in version "security.istio.io/v1beta1"
when I use the below command
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
  name: "jwt-example"
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
    - issuer: "testing#secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json"
EOF
According to this support matrix from the Rancher website, the Istio version shipped is 1.4.7.
The RequestAuthentication kind was introduced in Istio 1.5, so you might be applying a resource that does not exist in your version. See Istio's upgrade notes for 1.5. Since Rancher does not ship the latest version, you will have to apply the old policy resources. You can find the 1.4 docs at https://archive.istio.io/v1.4/docs/
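As a rough sketch, the pre-1.5 equivalent used the authentication.istio.io/v1alpha1 Policy kind, roughly like this (double-check the exact fields against the 1.4 docs; the issuer and jwksUri here mirror the official sample):
kubectl apply -f - <<EOF
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: jwt-example
  namespace: foo
spec:
  targets:
    - name: httpbin
  origins:
    - jwt:
        issuer: "testing@secure.istio.io"
        jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.4/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN
EOF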
Hope this helps.
I am very new to using helm charts for deploying containers, and I have also never worked with nginx controllers or ingress controllers.
However, I am being asked to look into improving our internal nginx ingress controllers to allow for SSL-passthrough.
Right now we have external (public-facing) and internal controllers, where the public ones allow SSL-passthrough and the internal ones do SSL-termination.
I have also been told that nginx is a reverse proxy, and that it works based on headers in the URL.
I am hoping someone can help me out on this helm chart that I have for the internal ingress controllers.
Currently I am under the impression that having SSL termination as well as SSL-passthrough on the same ingress controllers would not be possible.
Answered this one myself: https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru
Our current (internal) ingress code:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu:110:certificate/62-b3
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: !!str 443
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: !!str 3600
    targetPorts:
      https: 80
  replicaCount: 3
defaultBackend:
  replicaCount: 3
Can I simply add the following?
controller:
  extraArgs:
    enable-ssl-passthrough: ""
Note: The above piece of code is what we use on our external ingress controller.
Additionally, I found this:
Ingress and SSL Passthrough
Can I just go and mix the annotations? Or do annotations only care about the 'top domain level' where the annotation comes from?
e.g.:
service.beta.kubernetes.io
nginx.ingress.kubernetes.io
Both come from the domain kubernetes.io, or does the sub-domain make a difference?
I mean: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
That page doesn't show any of the service.beta annotations on it.
What's the difference between the extraArg ssl-passthrough configuration and the ssl-passthrough configuration in the annotations?
I'm looking mostly for an answer on how to get the SSL-passthrough working without breaking the SSL-termination on the internal ingress controllers.
However, any extra information to gain more insight and knowledge as far as my other questions go would also be very appreciated :)
So I found the answer to my own question(s):
The annotations appear to be 'configuration items'. I'm using quotes because I can't find a better term.
The extraArgs parameter is where you can pass any parameter to the controller as if it were a commandline parameter.
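For example, an entry like enable-ssl-passthrough: "" under controller.extraArgs ends up as a flag on the controller container, rendered roughly like this (exact names vary by chart version):
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      - --enable-ssl-passthrough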
And I think it is also safe to say that the annotations can come from either sub-domain as long as they share the same top-level domain; I have not found any that came from a domain other than kubernetes.io.
To get my ingress controller to work side by side with the SSL-termination controller, the Helm chart looks as follows:
---
rbac:
  create: true
controller:
  ingressClass: nginx-internal-ssl-passthrough
  service:
    annotations:
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "tag3=value3, tag3=value3, tag3=value3, tag3=value3"
    targetPorts:
      https: 443
  replicaCount: 2
  extraArgs:
    enable-ssl-passthrough: ""
defaultBackend:
  replicaCount: 2
Took me about 2 days of researching/searching the web and 6 deployments to get the whole setup working with an AWS NLB, SSL-passthrough enabled, cross-zone load balancing, etc. But after having found the following pages it went pretty fast:
https://kubernetes.github.io/ingress-nginx/deploy/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
https://kubernetes.io/docs/concepts/services-networking/service/
This last page helped me a lot. If someone else gets to deploy SSL-termination and SSL-passthrough for either public or private connections, I hope this helps too.
From here you can find out how to redirect the HTTPS traffic to the pod without SSL-termination
https://stackoverflow.com/a/66767691/1938507