K3s: TLS-enabled ingress accessing a TLS-enabled backend, how to?

I have a local K3s Kubernetes cluster with its Traefik ingress controller.
(Mac OSX, Multipass Hyper-V based local VMs: v1.18.3+k3s1, Ubuntu 16.04.6 LTS, 4.4.0-179-generic, containerd://1.3.3-k3s2)
What I want is an ingress that is TLS-enabled AND forwards to Vault's port 8200 via TLS/HTTPS.
k get -n kube-system svc traefik
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
traefik   LoadBalancer   10.43.105.6   192.168.64.5   80:30303/TCP,443:30142/TCP   4h21m
$ kubectl get all
NAME          READY   STATUS    RESTARTS   AGE
pod/vault-0   1/1     Running   0          4h31m

NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/vault-internal   ClusterIP   None          <none>        8200/TCP,8201/TCP   4h31m
service/vault            ClusterIP   10.43.8.235   <none>        8200/TCP,8201/TCP   4h31m

NAME                     READY   AGE
statefulset.apps/vault   1/1     4h31m
I deployed Vault via the Helm chart, standalone (non-dev mode) and with TLS enabled (values.yaml: see below).
Vault's cert is signed by k3s itself: kubectl -n "${NAMESPACE}" certificate approve "${CSR_NAME}"
certinfo tmp/localK3s/certs/vault/vault.crt
    Version: 3 (0x2)
    Serial Number:
        ed:8f:07:da:0d:3d:8d:55:3d:73:aa:93:9d:98:d2:69
    Signature Algorithm: ecdsa-with-SHA256
    Issuer: CN=k3s-server-ca@1591718124
    Validity
        Not Before: Jun  9 15:53:56 2020 GMT
        Not After : Jun  9 15:53:56 2021 GMT
    Subject: CN=vault.vault.svc
    X509v3 extensions:
        X509v3 Key Usage: critical
            Digital Signature, Key Encipherment
        X509v3 Extended Key Usage:
            TLS Web Server Authentication
        X509v3 Basic Constraints: critical
            CA:FALSE
        X509v3 Subject Alternative Name:
            DNS:vault, DNS:vault.vault, DNS:vault.vault.svc, DNS:vault.vault.svc.iac.local, DNS:localhost, IP Address:127.0.0.1
Now I can access the Vault service directly, e.g.:
$ kubectl -n vault port-forward service/vault 8200:8200 &
$
$ export VAULT_ADDR=https://127.0.0.1:8200
$ export VAULT_CAPATH=$(pwd)/tmp/localK3s/certs/localK3s_root.ca
$ export VAULT_CLIENT_CERT=$(pwd)/tmp/localK3s/certs/vault/vault.crt
$ export VAULT_CLIENT_KEY=$(pwd)/tmp/localK3s/certs/vault/vault.key
$
$ vault status
Handling connection for 8200
Handling connection for 8200
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.4.2
Cluster Name    vault-cluster-5bc9e954
Cluster ID      ca5496a6-525d-2b86-22dd-f771da82d5e0
HA Enabled      false
Now, what I want is an ingress that is TLS-enabled AND forwards to Vault port 8200 via TLS/HTTPS, so I have:
$ kubectl get ingress vault -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vault
  namespace: vault
  annotations:
    meta.helm.sh/release-name: vault
    meta.helm.sh/release-namespace: vault
  labels:
    helm.sh/chart: vault-0.6.0
spec:
  rules:
  - host: vault.iac.local
    http:
      paths:
      - backend:
          serviceName: vault
          servicePort: 8200
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - vault.iac.local
    secretName: vault-tls
status:
  loadBalancer: {}
$ export VAULT_ADDR=https://vault.iac.local
$ export VAULT_CAPATH=$(pwd)/tmp/localK3s/certs/localK3s_root.ca
$ export VAULT_CLIENT_CERT=$(pwd)/tmp/localK3s/certs/vault/vault.crt
$ export VAULT_CLIENT_KEY=$(pwd)/tmp/localK3s/certs/vault/vault.key
$
$ vault status
$ vault status -tls-skip-verify
Error checking seal status: Error making API request.
URL: GET https://vault.iac.local/v1/sys/seal-status
Code: 404. Raw Message:
404 page not found
The Helm vault values.yaml:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: false
server:
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-tls/vault.ca
  extraVolumes:
    - type: secret
      name: vault-tls
  standalone:
    enabled: true
    config: |
      ui = true
      listener "tcp" {
        tls_disable = "false" # 1
        # address = "0.0.0.0:8200"
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-tls/vault.crt"
        tls_key_file = "/vault/userconfig/vault-tls/vault.key"
        tls_client_ca_file = "/vault/userconfig/vault-tls/vault.ca"
      }
      storage "file" {
        path = "/vault/data"
      }
  ingress:
    enabled: true
    hosts:
      - host: vault.iac.local
    tls:
      - secretName: vault-tls
        hosts:
          - vault.iac.local
Any ideas, anybody?

OK, it's always helpful to read the logs (sigh), e.g. those of the ingress controller itself:
INGCTRL=traefik && \
kubectl -n kube-system logs \
pod/$(kubectl -n kube-system get pods -l app=$INGCTRL | sed -n -E "s/^($INGCTRL-[a-z0-9-]+).*$/\1/p")
If you use generic secrets for ingress TLS, beware: the secret keys have to be tls.crt and tls.key (or use kubectl create secret tls instead of generic in the first place), for example:
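A minimal example, using the cert/key paths from above (adjust names and paths to your setup):
kubectl -n vault create secret tls vault-tls \
  --cert=tmp/localK3s/certs/vault/vault.crt \
  --key=tmp/localK3s/certs/vault/vault.key
# this creates a secret of type kubernetes.io/tls with exactly the keys tls.crt and tls.key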
Also check that your target service has endpoints at all, and not:
k describe svc theService
...
Endpoints: <none>
...

Related

Kubectl Ingress without IP

I have developed a very small service on .NET 6, running on Windows 10 and Docker 20.10.17. I want to expose it as a service in Kubernetes on my local machine as "http://localhost:15001/Calculator/sum/1/1".
I am running a script like:
docker build -f API/CalculadoraREST/Dockerfile . --tag calculadorarestapi:v1.0
kubectl config set-context --current --namespace=calculadora
kubectl apply -f kubernetes/namespace.yml --overwrite=true
kubectl apply -f kubernetes --overwrite=true
When it finishes and I run kubectl get ingress -n calculadora, I get the ingress object, but it has no IP to access:
NAME                  CLASS    HOSTS   ADDRESS   PORTS   AGE
calculadora-ingress   <none>   *                 80      5s
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal AS base
WORKDIR /app
EXPOSE 15001
ENV ASPNETCORE_URLS=http://+:15001
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
FROM mcr.microsoft.com/dotnet/sdk:6.0-focal AS build
WORKDIR /src
COPY ["API/CalculadoraREST/CalculadoraRestAPI.csproj", "API/CalculadoraREST/"]
RUN dotnet restore "API/CalculadoraREST/CalculadoraRestAPI.csproj"
COPY . .
WORKDIR "/src/API/CalculadoraREST"
RUN dotnet build "CalculadoraRestAPI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CalculadoraRestAPI.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CalculadoraRestAPI.dll"]
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /Calculator/
        pathType: Prefix
        backend:
          service:
            name: calculadorarestapi-service
            port:
              number: 15001
Service:
apiVersion: v1
kind: Service
metadata:
  name: calculadorarestapi-service
  namespace: calculadora
spec:
  selector:
    app: calculadorarestapi
  ports:
    - protocol: TCP
      port: 15001
      targetPort: 15001
      name: http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calculadorarestapi-deployment
  namespace: calculadora
spec:
  selector:
    matchLabels:
      app: calculadorarestapi
  replicas: 2
  template:
    metadata:
      labels:
        app: calculadorarestapi
    spec:
      containers:
      - name: calculadorarestapi
        image: calculadorarestapi:v1.0
        ports:
        - containerPort: 15001
        resources:
          requests:
            memory: "150Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      imagePullSecrets:
      - name: regsecret
Any ideas? I would really appreciate your comments. :-)
Could you add the kubernetes.io/ingress.class: "nginx" annotation to the Ingress resource, or set the class field, as per this Git link? For example:
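Something like this, assuming the installed controller is ingress-nginx (pick one of the two mechanisms; both are shown here only for illustration):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    kubernetes.io/ingress.class: "nginx"    # older, annotation-based selection
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx                   # newer spec field
  rules:
  - http:
      paths:
      - path: /Calculator/
        pathType: Prefix
        backend:
          service:
            name: calculadorarestapi-service
            port:
              number: 15001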

How to use a .pfx certificate in Kubernetes?

I have a .pfx file that a Java container needs to use.
I have created a tls secret using the command
kubectl create secret tls secret-pfx-key --dry-run=client --cert tls.crt --key tls.key -o yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: secret-pfx-key
  namespace: default
data:
  # cat tls.crt | base64
  tls.crt: base64-gibberish....
  # cat tls.key | base64
  tls.key: base64-gibberish....
However, now I cannot understand how to use it. When I add the secret as a volume in the pod, I can see the two files that are created, but I need the combination of the two in one .pfx file.
Am I missing something? Thanks.
Note: I have read the related Stack Overflow questions but could not understand how to use it.
You can convert to .pfx first (see the openssl example below), then: kubectl create secret generic mypfx --from-file=pfx-cert=<converted pfx file>
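One possible way to do the conversion (assuming tls.crt and tls.key are the PEM files from the question; the export password is only an example):
# combine the PEM certificate and key into a single PKCS#12 (.pfx) file
openssl pkcs12 -export -in tls.crt -inkey tls.key -out cert.pfx -passout pass:changeit
# then store the .pfx as a generic secret
kubectl create secret generic mypfx --from-file=pfx-cert=cert.pfx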
Mount the secret as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-mypfx
spec:
  restartPolicy: OnFailure
  volumes:
    - name: pfx-volume
      secret:
        secretName: mypfx
  containers:
    - name: busybox
      image: busybox
      command: ["ash", "-c", "cat /path/in/the/container/pfx-cert; sleep 5"]
      volumeMounts:
        - name: pfx-volume
          mountPath: /path/in/the/container
The above example dumps the cert, waits 5 seconds, and exits.
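For the Java side, this is just one common pattern and not something from the question: the mounted .pfx can be handed to the JVM as a PKCS12 keystore via system properties, e.g. through the JAVA_TOOL_OPTIONS environment variable. The relevant containers: fragment of the pod spec might look like this (image name and password are placeholders):
containers:
- name: my-java-app             # hypothetical Java container
  image: my-java-app:latest     # placeholder image
  env:
  - name: JAVA_TOOL_OPTIONS
    value: "-Djavax.net.ssl.keyStore=/path/in/the/container/pfx-cert -Djavax.net.ssl.keyStoreType=PKCS12 -Djavax.net.ssl.keyStorePassword=changeit"
  volumeMounts:
  - name: pfx-volume
    mountPath: /path/in/the/container
    readOnly: true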

Cert-manager fails to complete dns01 challenge with cloudflare

Cert-manager (various versions, 15 and 16) installed on both k3s version v1.18.8+k3s1 and docker-desktop version v1.16.6-beta.0, using the following command:
helm install cert-manager \
--namespace cert-manager jetstack/cert-manager \
--version v0.16.1 \
--set installCRDs=true \
--set 'extraArgs={--dns01-recursive-nameservers=1.1.1.1:53}'
I applied the following test yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: test
type: Opaque
stringData:
  api-token: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt
  namespace: test
spec:
  acme:
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - dns01:
        cloudflare:
          email: user@example.com
          apiTokenSecretRef:
            name: cloudflare-api-token-secret
            key: api-token
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: example.com
  namespace: test
spec:
  secretName: example.com-tls
  issuerRef:
    name: letsencrypt
  dnsNames:
  - example.com
Result (I have waited even for hours):
kubectl -n test get certs,certificaterequests,order,challenges,ingress -o wide
NAME                                      READY   SECRET            ISSUER        STATUS                                          AGE
certificate.cert-manager.io/example.com   False   example.com-tls   letsencrypt   Issuing certificate as Secret does not exist   57s

NAME                                                    READY   ISSUER        STATUS                                                                                    AGE
certificaterequest.cert-manager.io/example.com-rx7jg    False   letsencrypt   Waiting on certificate issuance from order test/example.com-rx7jg-273779930: "pending"   56s

NAME                                                      STATE     ISSUER        REASON   AGE
order.acme.cert-manager.io/example.com-rx7jg-273779930    pending   letsencrypt            55s

NAME                                                                    STATE     DOMAIN        REASON                                                                                AGE
challenge.acme.cert-manager.io/example.com-rx7jg-273779930-625151916    pending   example.com   Cloudflare API error for POST "/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records"   53s
The Cloudflare settings are the ones from https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/, and I have tried with both a token and a key.
Cert-manager pod logs:
I0828 08:34:51.370299 1 dns.go:102] cert-manager/controller/challenges/Present "msg"="presenting DNS01 challenge for domain" "dnsName"="example.com" "domain"="example.com" "resource_kind"="Challenge" "resource_name"="example.com-m72dq-3139291111-641020922" "resource_namespace"="test" "type"="dns-01"
E0828 08:34:55.251730 1 controller.go:158] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="Cloudflare API error for POST \"/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records\"" "key"="test/example.com-m72dq-3139291111-641020922"
I0828 08:35:35.251982 1 controller.go:152] cert-manager/controller/challenges "msg"="syncing item" "key"="test/example.com-m72dq-3139291111-641020922"
I0828 08:35:35.252131 1 dns.go:102] cert-manager/controller/challenges/Present "msg"="presenting DNS01 challenge for domain" "dnsName"="example.com" "domain"="example.com" "resource_kind"="Challenge" "resource_name"="example.com-m72dq-3139291111-641020922" "resource_namespace"="test" "type"="dns-01"
E0828 08:35:38.797954 1 controller.go:158] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="Cloudflare API error for POST \"/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records\"" "key"="test/example.com-m72dq-3139291111-641020922"
What's wrong?
Thank you!
Not 100% sure if it'll resolve your issue, but I did come across this thread: https://github.com/jetstack/cert-manager/issues/1163. They show helm being invoked like this and report that it worked.
$ helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.7.0 \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerName=letsencrypt-staging-issuer \
--set extraArgs='{--dns01-recursive-nameservers-only,--dns01-self-check-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
jetstack/cert-manager
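As a side note (an assumption on my part, not from the linked thread): this kind of Cloudflare API error often comes down to the API token permissions; the cert-manager docs linked in the question describe the required zone/DNS scopes. The full error text can usually be pulled from the Challenge and Order resources, e.g.:
kubectl -n test describe challenge
kubectl -n test describe order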

Kubernetes Redis Cluster issue

I'm trying to create a Redis cluster using Kubernetes on CentOS. I have my Kubernetes master running on one host and Kubernetes slaves on two different hosts.
etcdctl get /kube-centos/network/config
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
Here is my replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        command:
        - "redis-server"
        args:
        - "/redis-master/redis.conf"
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /redis-master
          name: config
        - mountPath: /redis-master-data
          name: data
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: redis-config
          items:
          - key: redis-config
            path: redis.conf
kubectl create -f rc.yaml
NAME                 READY   STATUS    RESTARTS   AGE   IP            NODE
redis-master-149tt   1/1     Running   0          8s    172.30.96.4   centos-minion-1
redis-master-14j0k   1/1     Running   0          8s    172.30.79.3   centos-minion-2
redis-master-3wgdt   1/1     Running   0          8s    172.30.96.3   centos-minion-1
redis-master-84jtv   1/1     Running   0          8s    172.30.96.2   centos-minion-1
redis-master-fw3rs   1/1     Running   0          8s    172.30.79.4   centos-minion-2
redis-master-llg9n   1/1     Running   0          8s    172.30.79.2   centos-minion-2
Redis config file used:
appendonly yes
cluster-enabled yes
cluster-config-file /redis-master/nodes.conf
cluster-node-timeout 5000
dir /redis-master
port 6379
I used the following command to create the kubernetes service.
kubectl expose rc redis-master --name=redis-service --port=6379 --target-port=6379 --type=NodePort
Name:              redis-service
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
Selector:          app=redis,role=master,tier=backend
Type:              NodePort
IP:                10.254.229.114
Port:              <unset>  6379/TCP
NodePort:          <unset>  30894/TCP
Endpoints:         172.30.79.2:6379,172.30.79.3:6379,172.30.79.4:6379 + 3 more...
Session Affinity:  None
No events.
Now I have all the pods and the service up and running. I'm using a redis-trib pod to create the Redis cluster:
kubectl exec -it redis-trib bash
./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379
The Redis cluster is created as expected, with the message below:
[OK] All 16384 slots covered.
Now I should be able to access my redis-cluster on the Kubernetes node IP (192.168.240.116) and NodePort (30894) from any host within my network. Everything works as expected when I execute the command below from one of the Kubernetes nodes.
redis-cli -p 30894 -h 192.168.240.116 -c
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
OK
172.30.79.4:6379>
When I run the same command from a different (non-Kubernetes) node within the same network, I see a connection timed out error.
redis-cli -c -p 30894 -h 192.168.240.116
192.168.240.116:30894> set foo bar
-> Redirected to slot [12182] located at 172.30.79.4:6379
Could not connect to Redis at 172.30.79.4:6379: Connection timed out
Is it not possible to access the redis-cluster from outside the Kubernetes cluster network when it is exposed using the NodePort service type?
Also, I cannot use the LoadBalancer service type as I'm not hosting it in the cloud.
I have been stuck on this issue for quite a while. Can someone suggest what approach I should use to access my redis-cluster from outside my network?
Thanks
Running ./redis-trib.rb create --replicas 1 172.30.79.2:6379 172.30.79.3:6379 172.30.79.4:6379 172.30.96.2:6379 172.30.96.3:6379 172.30.96.4:6379 doesn't make sense with this setup.
Port 6379 is only accessible through the service which you brought up, but never directly as you are trying to do. That's why you run into issues with your setup.
What you can do is expose each pod with its own service and have one additional cluster service to load-balance external requests, as shown in the example repository from Kelsey Hightower (a rough sketch follows at the end of this answer). This way the pods can communicate through the internally exposed ports and (external) clients can use the load-balanced cluster port. The implication is that each pod then requires its own ReplicaSet (or Deployment). There's a long talk from Kelsey explaining the setup - YouTube / Slideshare.
An alternative would be to use a single redis master as shown in other examples.
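A rough sketch of the per-pod service idea mentioned above (not taken from Kelsey's repository; the name and the instance label are made up, and each instance would need its own Deployment/ReplicaSet that sets that label):
apiVersion: v1
kind: Service
metadata:
  name: redis-node-1            # hypothetical per-instance service
spec:
  selector:
    app: redis
    instance: redis-node-1      # unique label set only by this instance's pod template
  ports:
  - name: client
    port: 6379
    targetPort: 6379
  - name: gossip                # Redis cluster bus port (client port + 10000)
    port: 16379
    targetPort: 16379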

Kubernetes: hostPath volume does not mount

I want to create a web app using the Apache server with HTTPS, and I have generated certificate files using Let's Encrypt. I have already verified that cert.pem, chain.pem, fullchain.pem, and privkey.pem are stored on the host machine. However, I cannot map them into the pod. Here is the web-controller.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: <my-web-app-image>
        command: ['/bin/sh', '-c']
        args: ['sudo a2enmod ssl && service apache2 restart && sudo /usr/sbin/apache2ctl -D FOREGROUND']
        name: web
        ports:
        - containerPort: 80
          name: http-server
        volumeMounts:
        - mountPath: /usr/local/myapp/https
          name: test-volume
          readOnly: false
      volumes:
      - hostPath:
          path: /etc/letsencrypt/live/xxx.xxx.xxx.edu
        name: test-volume
After kubectl create -f web-controller.yaml, the error log says:
AH00526: Syntax error on line 8 of /etc/apache2/sites-enabled/000-default.conf:
SSLCertificateFile: file '/usr/local/myapp/https/cert.pem' does not exist or is empty
Action 'configtest' failed.
This is why I think the problem is that the certificates are not mapped into the container.
Could anyone help me on this? Thanks a lot!
I figured it out: I have to mount it at /etc/letsencrypt/live/<host> rather than /usr/local/myapp/https (see the snippet below).
This is probably not the root cause, but it works now.
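In other words, presumably only the mountPath in the manifest above changes (using the same redacted hostname as in the hostPath):
volumeMounts:
- mountPath: /etc/letsencrypt/live/xxx.xxx.xxx.edu   # mountPath now mirrors the hostPath location
  name: test-volume
  readOnly: false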