How to use a .pfx certificate in Kubernetes? - ssl

I have a .pfx file that a Java container needs to use.
I have created a TLS secret using the command:
kubectl create secret tls secret-pfx-key --dry-run=client --cert tls.crt --key tls.key -o yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: secret-pfx-key
  namespace: default
data:
  # cat tls.crt | base64
  tls.crt: base64-gibberish....
  # cat tls.key | base64
  tls.key: base64-gibberish....
However, now I cannot understand how to use it. When I add the secret as a volume in the pod, I can see the two files that are created, but I need the two combined into a single .pfx file.
Am I missing something? Thanks.
Note: I have read the related Stack Overflow questions but could not understand how to apply them.

You can convert the certificate and key to a .pfx file first, then create a generic secret from it: kubectl create secret generic mypfx --from-file=pfx-cert=<converted pfx file>
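For the conversion step, a minimal sketch with openssl (the output file name cert.pfx and the export password are placeholders, not from the original answer):
# combine the PEM certificate and key into a PKCS#12 (.pfx) bundle
openssl pkcs12 -export -in tls.crt -inkey tls.key -out cert.pfx -passout pass:changeit
# store the resulting file in a generic secret under the key pfx-cert
kubectl create secret generic mypfx --from-file=pfx-cert=cert.pfx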
Mount the secret as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-mypfx
spec:
  restartPolicy: OnFailure
  volumes:
    - name: pfx-volume
      secret:
        secretName: mypfx
  containers:
    - name: busybox
      image: busybox
      command: ["ash", "-c", "cat /path/in/the/container/pfx-cert; sleep 5"]
      volumeMounts:
        - name: pfx-volume
          mountPath: /path/in/the/container
The above example dumps the cert, waits for 5 seconds, and exits.
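Since the question mentions a Java container, one way to consume the mounted file is to point the JVM at it as a PKCS#12 keystore. A sketch, assuming the mount path from the pod above; the keystore password and app.jar are placeholders for your own values:
java -Djavax.net.ssl.keyStore=/path/in/the/container/pfx-cert \
     -Djavax.net.ssl.keyStoreType=PKCS12 \
     -Djavax.net.ssl.keyStorePassword=changeit \
     -jar app.jar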

Related

Kubectl Ingress without IP

I have developed a very small service on .NET 6, running on Windows 10 and Docker 20.10.17. I want to expose it as a service in Kubernetes on my local machine as "http://localhost:15001/Calculator/sum/1/1".
I am running a script like:
docker build -f API/CalculadoraREST/Dockerfile . --tag calculadorarestapi:v1.0
kubectl config set-context --current --namespace=calculadora
kubectl apply -f kubernetes/namespace.yml --overwrite=true
kubectl apply -f kubernetes --overwrite=true
When it finishes and I run kubectl get ingress -n calculadora, I get the ingress object, but it has no IP to access:
NAME                  CLASS    HOSTS   ADDRESS   PORTS   AGE
calculadora-ingress   <none>   *                 80      5s
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0-focal AS base
WORKDIR /app
EXPOSE 15001
ENV ASPNETCORE_URLS=http://+:15001
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
FROM mcr.microsoft.com/dotnet/sdk:6.0-focal AS build
WORKDIR /src
COPY ["API/CalculadoraREST/CalculadoraRestAPI.csproj", "API/CalculadoraREST/"]
RUN dotnet restore "API/CalculadoraREST/CalculadoraRestAPI.csproj"
COPY . .
WORKDIR "/src/API/CalculadoraREST"
RUN dotnet build "CalculadoraRestAPI.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CalculadoraRestAPI.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CalculadoraRestAPI.dll"]
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /Calculator/
            pathType: Prefix
            backend:
              service:
                name: calculadorarestapi-service
                port:
                  number: 15001
Service:
apiVersion: v1
kind: Service
metadata:
  name: calculadorarestapi-service
  namespace: calculadora
spec:
  selector:
    app: calculadorarestapi
  ports:
    - protocol: TCP
      port: 15001
      targetPort: 15001
      name: http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calculadorarestapi-deployment
  namespace: calculadora
spec:
  selector:
    matchLabels:
      app: calculadorarestapi
  replicas: 2
  template:
    metadata:
      labels:
        app: calculadorarestapi
    spec:
      containers:
        - name: calculadorarestapi
          image: calculadorarestapi:v1.0
          ports:
            - containerPort: 15001
          resources:
            requests:
              memory: "150Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
      imagePullSecrets:
        - name: regsecret
Any ideas? I would really appreciate your comments. :-)
Could you add the kubernetes.io/ingress.class: "nginx" annotation to the Ingress resource, or set the class field, as per this Git link?
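On the Ingress from the question, that could look roughly like this (a sketch; whether you use the annotation or the newer spec.ingressClassName field depends on your ingress-nginx version):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: calculadora-ingress
  namespace: calculadora
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # on newer controllers, "ingressClassName: nginx" here can replace the annotation above
  rules:
    # ... unchanged from the question ...
Note that the ADDRESS column is only filled in once an ingress controller actually picks up the Ingress, so an NGINX ingress controller has to be running in the cluster as well.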

Cert-manager fails to complete dns01 challenge with cloudflare

Cert-manager (various versions, 0.15 and 0.16) installed on both k3s version v1.18.8+k3s1 and docker-desktop version v1.16.6-beta.0 using the following command:
helm install cert-manager \
--namespace cert-manager jetstack/cert-manager \
--version v0.16.1 \
--set installCRDs=true \
--set 'extraArgs={--dns01-recursive-nameservers=1.1.1.1:53}'
I applied the following test yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: test
type: Opaque
stringData:
  api-token: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt
  namespace: test
spec:
  acme:
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - dns01:
          cloudflare:
            email: user@example.com
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: example.com
  namespace: test
spec:
  secretName: example.com-tls
  issuerRef:
    name: letsencrypt
  dnsNames:
    - example.com
Result (I have waited for hours):
kubectl -n test get certs,certificaterequests,order,challenges,ingress -o wide
NAME                                      READY   SECRET            ISSUER        STATUS                                          AGE
certificate.cert-manager.io/example.com   False   example.com-tls   letsencrypt   Issuing certificate as Secret does not exist   57s

NAME                                                    READY   ISSUER        STATUS                                                                                    AGE
certificaterequest.cert-manager.io/example.com-rx7jg    False   letsencrypt   Waiting on certificate issuance from order test/example.com-rx7jg-273779930: "pending"   56s

NAME                                                     STATE     ISSUER        REASON   AGE
order.acme.cert-manager.io/example.com-rx7jg-273779930   pending   letsencrypt            55s

NAME                                                                    STATE     DOMAIN        REASON                                                                                AGE
challenge.acme.cert-manager.io/example.com-rx7jg-273779930-625151916   pending   example.com   Cloudflare API error for POST "/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records"   53s
The Cloudflare settings are the ones from https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/, and I have tried with both an API token and an API key.
Cert-manager pod logs:
I0828 08:34:51.370299 1 dns.go:102] cert-manager/controller/challenges/Present "msg"="presenting DNS01 challenge for domain" "dnsName"="example.com" "domain"="example.com" "resource_kind"="Challenge" "resource_name"="example.com-m72dq-3139291111-641020922" "resource_namespace"="test" "type"="dns-01"
E0828 08:34:55.251730 1 controller.go:158] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="Cloudflare API error for POST \"/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records\"" "key"="test/example.com-m72dq-3139291111-641020922"
I0828 08:35:35.251982 1 controller.go:152] cert-manager/controller/challenges "msg"="syncing item" "key"="test/example.com-m72dq-3139291111-641020922"
I0828 08:35:35.252131 1 dns.go:102] cert-manager/controller/challenges/Present "msg"="presenting DNS01 challenge for domain" "dnsName"="example.com" "domain"="example.com" "resource_kind"="Challenge" "resource_name"="example.com-m72dq-3139291111-641020922" "resource_namespace"="test" "type"="dns-01"
E0828 08:35:38.797954 1 controller.go:158] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="Cloudflare API error for POST \"/zones/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/dns_records\"" "key"="test/example.com-m72dq-3139291111-641020922"
What's wrong?
Thank you!
Not 100% sure it'll resolve your issue, but I did come across this thread: https://github.com/jetstack/cert-manager/issues/1163. They show helm being invoked like this and report that it worked.
$ helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.7.0 \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerName=letsencrypt-staging-issuer \
--set extraArgs='{--dns01-recursive-nameservers-only,--dns01-self-check-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
jetstack/cert-manager
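Independent of the helm flags, it may also be worth confirming that the token stored in cloudflare-api-token-secret is valid and has the permissions the cert-manager Cloudflare docs ask for (Zone:Read plus DNS:Edit). A quick check against the Cloudflare API, assuming the raw token value is in $CF_TOKEN:
curl -s -H "Authorization: Bearer $CF_TOKEN" \
     -H "Content-Type: application/json" \
     https://api.cloudflare.com/client/v4/user/tokens/verify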

Kubernetes: How to give another person access to the cluster with edit ClusterRole permissions?

I did this:
I created a service account
cat <<EOF | kubectl create -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myname
...
I generated the token from the secret created for the service account:
token=$(kubectl get secrets myname-xxxx-xxxx -o jsonpath={.data.token} | base64 --decode)
I set credentials for the ServiceAccount myname that was created:
kubectl config set-credentials myname --token=$token
I created a context
kubectl config set-context myname-context --cluster=my-cluster --user=myname
Then I created a copy of ~/.kube/config and deleted the cluster-admin entries (leaving only the user myname).
I bound the user to a specific namespace with the edit ClusterRole permissions:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-access
  namespace: my-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: myname
EOF
I sent the edited ~/.kube/config to the person who wants to access the cluster. Now they can list the pods but cannot exec into them:
Error (Forbidden): pods "pod-xxxxx-xxxx" is forbidden: User "system:serviceaccount:default:myname" cannot create resource "pods/exec" in API group "" in the namespace "my-ns"
I want to do that from a non-master machine which has the master's ~/.kube/config copied onto it.
Thanks
The RoleBinding that you have binds the ClusterRole to a User and not to a ServiceAccount. The error clearly shows a ServiceAccount, system:serviceaccount:default:myname, so the RoleBinding should be as below:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-access
  namespace: my-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: myname
    namespace: default
EOF
To verify all permissions of the ServiceAccount myname, use the command below:
kubectl auth can-i --list --as=system:serviceaccount:default:myname
To verify the specific pods/exec permission of the ServiceAccount myname, use the command below:
kubectl auth can-i create pods/exec --as=system:serviceaccount:default:myname
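If the User-based RoleBinding named dev-access from the question has already been created, one straightforward approach is to delete it first and then apply the corrected version above:
kubectl -n my-ns delete rolebinding dev-access
# then re-run the cat <<EOF | kubectl create -f - block above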

K3s ingress TLS enabled accessing TLS enabled backend, how to?

I have a local K3s kubernetes cluster with its traefik ingress controller.
(Mac OSX, Multipass Hyper-V based local VMs: v1.18.3+k3s1 Ubuntu 16.04.6 LTS 4.4.0-179-generic containerd://1.3.3-k3s2)
What I want is an ingress that is TLS enabled AND forwards to Vault on port 8200 via TLS/HTTPS.
k get -n kube-system svc traefik
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
traefik   LoadBalancer   10.43.105.6   192.168.64.5   80:30303/TCP,443:30142/TCP   4h21m
$ kubectl get all
NAME          READY   STATUS    RESTARTS   AGE
pod/vault-0   1/1     Running   0          4h31m

NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/vault-internal   ClusterIP   None          <none>        8200/TCP,8201/TCP   4h31m
service/vault            ClusterIP   10.43.8.235   <none>        8200/TCP,8201/TCP   4h31m

NAME                     READY   AGE
statefulset.apps/vault   1/1     4h31m
I deployed Vault via the Helm chart as a standalone Vault (non-dev mode) with TLS enabled (see values.yaml below).
vault's cert is signed by k3s itself: kubectl -n "${NAMESPACE}" certificate approve "${CSR_NAME}"
certinfo tmp/localK3s/certs/vault/vault.crt
Version: 3 (0x2)
Serial Number:
    ed:8f:07:da:0d:3d:8d:55:3d:73:aa:93:9d:98:d2:69
Signature Algorithm: ecdsa-with-SHA256
Issuer: CN=k3s-server-ca@1591718124
Validity
    Not Before: Jun  9 15:53:56 2020 GMT
    Not After : Jun  9 15:53:56 2021 GMT
Subject: CN=vault.vault.svc
X509v3 extensions:
    X509v3 Key Usage: critical
        Digital Signature, Key Encipherment
    X509v3 Extended Key Usage:
        TLS Web Server Authentication
    X509v3 Basic Constraints: critical
        CA:FALSE
    X509v3 Subject Alternative Name:
        DNS:vault, DNS:vault.vault, DNS:vault.vault.svc, DNS:vault.vault.svc.iac.local, DNS:localhost, IP Address:127.0.0.1
Now I can access the Vault service directly, e.g.:
$ kubectl -n vault port-forward service/vault 8200:8200 &
$
$ export VAULT_ADDR=https://127.0.0.1:8200
$ export VAULT_CAPATH=$(pwd)/tmp/localK3s/certs/localK3s_root.ca
$ export VAULT_CLIENT_CERT=$(pwd)/tmp/localK3s/certs/vault/vault.crt
$ export VAULT_CLIENT_KEY=$(pwd)/tmp/localK3s/certs/vault/vault.key
$
$ vault status
Handling connection for 8200
Handling connection for 8200
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.4.2
Cluster Name    vault-cluster-5bc9e954
Cluster ID      ca5496a6-525d-2b86-22dd-f771da82d5e0
HA Enabled      false
Now, what I want is an ingress that is TLS enabled AND forwards to Vault on port 8200 via TLS/HTTPS, so I have:
$ kubectl get ingress vault -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vault
  namespace: vault
  annotations:
    meta.helm.sh/release-name: vault
    meta.helm.sh/release-namespace: vault
  labels:
    helm.sh/chart: vault-0.6.0
spec:
  rules:
    - host: vault.iac.local
      http:
        paths:
          - backend:
              serviceName: vault
              servicePort: 8200
            path: /
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - vault.iac.local
      secretName: vault-tls
status:
  loadBalancer: {}
$ export VAULT_ADDR=https://vault.iac.local
$ export VAULT_CAPATH=$(pwd)/tmp/localK3s/certs/localK3s_root.ca
$ export VAULT_CLIENT_CERT=$(pwd)/tmp/localK3s/certs/vault/vault.crt
$ export VAULT_CLIENT_KEY=$(pwd)/tmp/localK3s/certs/vault/vault.key
$
$ vault status
vault status -tls-skip-verify
Error checking seal status: Error making API request.
URL: GET https://vault.iac.local/v1/sys/seal-status
Code: 404. Raw Message:
404 page not found
The Vault Helm chart values.yaml:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: false
server:
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-tls/vault.ca
  extraVolumes:
    - type: secret
      name: vault-tls
  standalone:
    enabled: true
    config: |
      ui = true
      listener "tcp" {
        tls_disable = "false" # 1
        # address = "0.0.0.0:8200"
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-tls/vault.crt"
        tls_key_file = "/vault/userconfig/vault-tls/vault.key"
        tls_client_ca_file = "/vault/userconfig/vault-tls/vault.ca"
      }
      storage "file" {
        path = "/vault/data"
      }
  ingress:
    enabled: true
    hosts:
      - host: vault.iac.local
    tls:
      - secretName: vault-tls
        hosts:
          - vault.iac.local
Any ideas, anybody?
OK, it's always helpful to read the logs (sigh), for example those of the ingress controller itself:
INGCTRL=traefik && \
kubectl -n kube-system logs \
pod/$(kubectl -n kube-system get pods -l app=$INGCTRL | sed -n -E "s/^($INGCTRL-[a-z0-9-]+).*$/\1/p")
If you use generic secrets for ingress TLS, beware: the secret keys have to be tls.crt and tls.key (or use kubectl create secret tls rather than generic in the first place).
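A sketch of creating such a secret, reusing the Vault cert and key paths that appear earlier in the question; the name vault-ingress-tls is a made-up example so the existing vault-tls secret (which the Vault pod mounts under its own key names) is left untouched, and the Ingress secretName would then point at the new secret:
kubectl -n vault create secret tls vault-ingress-tls \
  --cert=tmp/localK3s/certs/vault/vault.crt \
  --key=tmp/localK3s/certs/vault/vault.key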
Also check that your target service has endpoints at all, and not:
k describe svc theService
...
Endpoints: <none>
...
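One more point, not covered in the answer above and only my assumption: since Vault itself only speaks HTTPS on 8200, the ingress controller also has to use TLS towards the backend. With the Traefik 1.x bundled in K3s v1.18, that is typically requested with an annotation on the Ingress, roughly:
metadata:
  annotations:
    ingress.kubernetes.io/protocol: https   # tell Traefik 1.x to connect to the backend over HTTPS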

Kubernetes: hostPath volume does not mount

I want to create a web app using an Apache server with HTTPS, and I have generated certificate files using Let's Encrypt. I have already verified that cert.pem, chain.pem, fullchain.pem, and privkey.pem are stored on the host machine. However, I cannot map them into the pod. Here is the web-controller.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
        - image: <my-wen-app-image>
          command: ['/bin/sh', '-c']
          args: ['sudo a2enmod ssl && service apache2 restart && sudo /usr/sbin/apache2ctl -D FOREGROUND']
          name: web
          ports:
            - containerPort: 80
              name: http-server
          volumeMounts:
            - mountPath: /usr/local/myapp/https
              name: test-volume
              readOnly: false
      volumes:
        - hostPath:
            path: /etc/letsencrypt/live/xxx.xxx.xxx.edu
          name: test-volume
After kubectl create -f web-controller.yaml, the error log says:
AH00526: Syntax error on line 8 of /etc/apache2/sites-enabled/000-default.conf:
SSLCertificateFile: file '/usr/local/myapp/https/cert.pem' does not exist or is empty
Action 'configtest' failed.
This is why I think the problem is that the certificates are not mapped into the container.
Could anyone help me on this? Thanks a lot!
I figured it out: I have to mount it to /etc/letsencrypt/live/host rather than /usr/local/myapp/https
This is probably not the root cause, but it works now.
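For reference, a sketch of that change in the ReplicationController above. Note that /etc/letsencrypt/live/<domain> normally only contains symlinks into /etc/letsencrypt/archive, so mounting the whole /etc/letsencrypt directory at the same path inside the container (and pointing the Apache config at /etc/letsencrypt/live/xxx.xxx.xxx.edu/cert.pem etc.) is the safer variant; this is my assumption, not part of the original answer:
          volumeMounts:
            - mountPath: /etc/letsencrypt   # same path inside the container as on the host
              name: test-volume
              readOnly: true
      volumes:
        - hostPath:
            path: /etc/letsencrypt
          name: test-volume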