PHP cURL fails with operation timeout in Kubernetes CronJob

I have a CronJob that runs at a fixed interval to download images from remote servers, using the php:7.2-fpm-alpine Docker image. It works fine for some URLs but fails for others.
Here is the cURL code:
$fp = fopen($fileNameWithPath, 'w');
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL            => $url,
    CURLOPT_FILE           => $fp,
    CURLOPT_ENCODING       => "",
    CURLOPT_MAXREDIRS      => 10,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HTTP_VERSION   => CURL_HTTP_VERSION_1_1,
    CURLOPT_CUSTOMREQUEST  => "GET",
    CURLOPT_CONNECTTIMEOUT => 90,
    CURLOPT_TIMEOUT        => 180,
    CURLOPT_SSL_VERIFYHOST => 0,
    CURLOPT_SSL_VERIFYPEER => 0,
    CURLOPT_VERBOSE        => 1
));
$result = curl_exec($ch);
$statusCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
fclose($fp);
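For reference, a minimal diagnostic variant of the same download (a sketch, reusing the $url and $fileNameWithPath from above) that logs curl_errno()/curl_error() so the failing URLs can be identified; the timeout seen in the verbose log below surfaces as cURL error 28 (CURLE_OPERATION_TIMEDOUT):
$fp = fopen($fileNameWithPath, 'w');
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL            => $url,
    CURLOPT_FILE           => $fp,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_CONNECTTIMEOUT => 90,
    CURLOPT_TIMEOUT        => 180,
));
$result = curl_exec($ch);
if ($result === false) {
    // Log which URL failed and why (e.g. 28 = CURLE_OPERATION_TIMEDOUT).
    error_log(sprintf('Download failed for %s: [%d] %s', $url, curl_errno($ch), curl_error($ch)));
}
$statusCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
fclose($fp);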
I enabled verbose output, and the logs in the Kubernetes pod show the following:
* TCP_NODELAY set
* Connected to images.asos-media.com (23.32.5.80) port 443 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=GB; L=London; O=ASOS.com Limited; CN=*.asos-media.com
* start date: Feb 26 00:00:00 2020 GMT
* expire date: May 27 12:00:00 2021 GMT
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert Secure Site ECC CA-1
* SSL certificate verify ok.
> GET /products/wonderbra-new-ultimate-strapless-bra-a-g-cup/5980845-1-beige?$XXL$ HTTP/1.1
Host: images.asos-media.com
Accept: */*
Accept-Encoding: deflate, gzip
* old SSL session ID is stale, removing
* Operation timed out after 180000 milliseconds with 0 bytes received
* Closing connection 0
If I run this code from the Docker image locally, it works fine.
Kubernetes Deployment Files
CronJob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: scheduleApp
  name: imagedownlload
  labels:
    app: scheduleApp
spec:
  schedule: "5 */4 * * *" # Specify schedule using linux cron syntax
  concurrencyPolicy: Allow
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      parallelism: 1 # Number of Pods to start together with the Job
      template:
        metadata:
          labels:
            tier: cronservice
        spec:
          volumes:
            - name: pv-restorage
              persistentVolumeClaim:
                claimName: pipeline-volumeclaim
          containers:
            - name: imagedownload
              image: gcr.io/{project_id}/{image_name}:v1.0.2 # Set the image to be used in the container with the full repository URL
              envFrom:
                - configMapRef:
                    name: app-config
                - secretRef:
                    name: app-secret
              volumeMounts:
                - name: pv-restorage
                  mountPath: /var/www/html/restorage
          restartPolicy: Never
Service file
apiVersion: v1
kind: Service
metadata:
  name: cron-loadbalancer
  namespace: scheduleApp
spec:
  selector:
    tier: cronservice
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  sessionAffinity: None
  type: LoadBalancer
Dockerfile
FROM php:7.2-fpm-alpine
RUN apk update && apk add \
        libzip-dev \
        unzip \
    && docker-php-ext-configure zip --with-libzip \
    && docker-php-ext-install mysqli zip \
    && rm -rf /var/cache/apk/*
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
COPY composer.* /var/www/html/
RUN cd /usr/local/etc/php/conf.d/ \
    && echo 'memory_limit = -1' >> /usr/local/etc/php/conf.d/docker-php-memlimit.ini
WORKDIR /var/www/html
RUN composer install && composer clear-cache
COPY . /var/www/html/
ENTRYPOINT ["php","console"]
CMD ["-V"]

Related

Traefik listens on port 80 and forwards requests to the MinIO console (5000), but returns 404

I deployed MinIO and its console in K8s, using a ClusterIP Service to expose ports 9000 and 5000.
Traefik listens on ports 80 and 5000 and forwards requests to the minio Service (ClusterIP).
Requesting the console through port 5000 works fine.
Requesting the console through port 80 shows the console page, but the request then returns 404 in the browser.
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Release.Namespace }}
  name: minio-headless
  labels:
    app: minio-headless
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: server
      port: 9000
      targetPort: 9000
    - name: console
      port: 5000
      targetPort: 5000
  selector:
    app: minio
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingress-route-minio
  namespace: {{ .Release.Namespace }}
spec:
  entryPoints:
    - minio
    - web
  routes:
    - kind: Rule
      match: Host(`minio-console.{{ .Release.Namespace }}.k8s.zszc`)
      priority: 10
      services:
        - kind: Service
          name: minio-headless
          namespace: {{ .Release.Namespace }}
          port: 5000
          responseForwarding:
            flushInterval: 1ms
          scheme: http
          strategy: RoundRobin
          weight: 10
traefik access log
{
  "ClientAddr": "192.168.4.250:55485",
  "ClientHost": "192.168.4.250",
  "ClientPort": "55485",
  "ClientUsername": "-",
  "DownstreamContentSize": 19,
  "DownstreamStatus": 404,
  "Duration": 688075,
  "OriginContentSize": 19,
  "OriginDuration": 169976,
  "OriginStatus": 404,
  "Overhead": 518099,
  "RequestAddr": "minio-console.etb-0-0-1.k8s.zszc",
  "RequestContentSize": 0,
  "RequestCount": 1018,
  "RequestHost": "minio-console.etb-0-0-1.k8s.zszc",
  "RequestMethod": "GET",
  "RequestPath": "/api/v1/login",
  "RequestPort": "-",
  "RequestProtocol": "HTTP/1.1",
  "RequestScheme": "http",
  "RetryAttempts": 0,
  "RouterName": "traefik-traefik-dashboard-6e26dcbaf28841493448@kubernetescrd",
  "StartLocal": "2023-01-27T13:20:06.337540015Z",
  "StartUTC": "2023-01-27T13:20:06.337540015Z",
  "entryPointName": "web",
  "level": "info",
  "msg": "",
  "time": "2023-01-27T13:20:06Z"
}
It looks to me like the request for /api is conflicting with rules for the Traefik dashboard. If you look at the access log in your question, we see:
"RouterName": "traefik-traefik-dashboard-6e26dcbaf28841493448#kubernetescrd",
If you have installed Traefik from the Helm chart, it installs an IngressRoute with the following rules:
- kind: Rule
  match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
  services:
    - kind: TraefikService
      name: api@internal
In theory those are bound only to the traefik entrypoint, but it looks like you may have customized your entrypoint configuration.
Take a look at the IngressRoute resource for your Traefik dashboard and ensure that it's not sharing an entrypoint with minio.
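For example, a sketch of a dashboard IngressRoute pinned to the dedicated traefik entrypoint (the name and namespace here are illustrative, not taken from your cluster), so its /api rule can no longer shadow requests arriving on web:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - traefik   # only the dashboard entrypoint, not "web"
  routes:
    - kind: Rule
      match: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
      services:
        - kind: TraefikService
          name: api@internal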

GKE Ingress TLS vs jupyter TLS but not both?

I'm setting up a jupyter-lab container in a Kubernetes cluster and want to enable TLS. I have successfully done this in two ways:
1. Include the certificate and key files inside the container and enable TLS when running the jupyter command. Add a LoadBalancer Service to expose the container.
#Dockerfile
...
CMD jupyter-lab --no-browser --allow-root --ip 0.0.0.0 --port=443 --certfile=<crt path> --keyfile=<key path>
#yaml
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  type: LoadBalancer
  selector:
    app: <app-name>
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
2. Run jupyter with no TLS. Add the certificate and key in base64 to a Secret. Add NodePort Service, Ingress, and BackendConfig YAMLs.
#Dockerfile
...
CMD jupyter-lab --no-browser --allow-root --ip 0.0.0.0 --port=443
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: http-hc-config
spec:
  healthCheck:
    checkIntervalSec: 300
    timeoutSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /login
    port: 443
---
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
  annotations:
    cloud.google.com/backend-config: '{"ports": {"443":"http-hc-config"}}'
spec:
  type: NodePort
  selector:
    app: <app-name>
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <ingress-name>
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - secretName: <secret-name>
  defaultBackend:
    service:
      name: <service-name>
      port:
        number: 443
---
apiVersion: v1
data:
  tls-crt: <base64 crt>
  tls-key: <base64 key>
kind: Secret
metadata:
  name: <secret-name>
type: kubernetes.io/tls
However, when I try to combine both (follow the steps in 2, but also enable TLS in jupyter-lab), I get 502 errors. Why is this?
Also, which setup is better?
If you want TLS between the HTTP(S) load balancer created by the Ingress and your backend, you'll need to modify your BackendConfig to specify HTTPS for the health check:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: http-hc-config
spec:
  healthCheck:
    checkIntervalSec: 300
    timeoutSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 5
    type: HTTPS
    requestPath: /login
    port: 443
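The load balancer may also need to be told to use HTTPS when talking to the backend itself; on GKE that is done with the cloud.google.com/app-protocols annotation on the Service. A sketch of the Service from option 2 with that annotation (the port name https-port is an assumption, not from the original manifests):
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
  annotations:
    cloud.google.com/backend-config: '{"ports": {"443":"http-hc-config"}}'
    # app-protocols maps the port *name* to the protocol the LB uses towards the backend
    cloud.google.com/app-protocols: '{"https-port":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: <app-name>
  ports:
    - name: https-port
      protocol: TCP
      port: 443
      targetPort: 443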

Securing the grafana ingress with TLS in the kube-prometheus-stack values.yaml and making grafana available via https

I am using kube-prometheus-stack to monitor my system in GCP. Due to new requirements, all my ingresses need to be secured with TLS. As a first step I wanted to make the Grafana web page available via HTTPS.
I created a TLS secret and updated my values.yaml. After helm upgrade everything seems to work fine, but the page is still available via HTTP only.
Hope you can support me here.
grafana:
  enabled: true
  namespaceOverride: ""

  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true

  adminPassword: prom-operator

  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true

    ## Annotations for Grafana Ingress
    ##
    # annotations: {
    #   kubernetes.io/ingress.class: gce-internal
    #   kubernetes.io/tls-acme: "true"
    # }

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enable.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []

    ## Path for grafana ingress
    # path: /*

    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls:
      - secretName: monitoring-tls-secret
      # hosts:
      #   - grafana.example.com
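For reference, a TLS secret of this shape is what the ingress expects to find in the namespace (a sketch with placeholder data; the namespace matches the one used further down):
apiVersion: v1
kind: Secret
metadata:
  name: monitoring-tls-secret
  namespace: monitoring-cl2
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>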
In the meantime I decided to create the ingress a different way: I created an ssl-certificate and am trying to use that instead.
When starting up I get the failure shown below, which is strange as kubernetes.io/ingress.allow-http is configured.
kubectl describe ingress monitoring-cl2-grafana -n monitoring-cl2
Name: monitoring-cl2-grafana
Namespace: monitoring-cl2
Address: x.x.x.x
Default backend: default-http-backend:80 (y.y.y.y:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* monitoring-cl2-grafana:80 (<deleted>)
Annotations: ingress.gcp.kubernetes.io/pre-shared-cert: monitoring-ssl
ingress.kubernetes.io/backends:
{"k8s1-613c3440-kube-system-default-http-backend-80-240d1018":"HEALTHY","k8s1-613c3440-mtx-monitoring--mtx-monitoring-cl2-gra-8-f146f2b2":...
ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
ingress.kubernetes.io/https-target-proxy: k8s2-ts-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
ingress.kubernetes.io/ssl-cert: monitoring-ssl
ingress.kubernetes.io/url-map: k8s2-um-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy
kubernetes.io/ingress.allow-http: false
kubernetes.io/ingress.class: gce-internal
kubernetes.io/ingress.global-static-ip-name: grafana-cl2
meta.helm.sh/release-name: monitoring-cl2
meta.helm.sh/release-namespace: monitoring-cl2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Sync 34m (x12 over 35m) loadbalancer-controller Error syncing to GCP: error running load balancer syncing routine: loadbalancer 3s1rnwzg-mtx-monitoring--monitoring-cl2-gr-hgx28ojy does not exist: error invalid internal ingress https config
Warning WillNotConfigureFrontend 26m (x18 over 35m) loadbalancer-controller gce-internal Ingress class does not currently support both HTTP and HTTPS served on the same IP (kubernetes.io/ingress.allow-http must be false when using HTTPS).
Normal Sync 3m34s loadbalancer-controller TargetProxy "k8s2-ts-3s1rnwzg-monitoring--monitoring-cl2-gr-hgx28ojy" certs updated
Normal Sync 3m29s (x9 over 35m) loadbalancer-controller Scheduled for sync
grafana:
  image:
    repository: europe-west3-docker.pkg.dev/<deleted info>/grafana
    tag: 7.5.5
    sha: ""
  sidecar:
    image:
      repository: europe-west3-docker.pkg.dev/<deleted info>/prometheus/k8s-sidecar
      tag: 1.10.7
      sha: ""
    imagePullPolicy: IfNotPresent
  service:
    enabled: true
    type: NodePort
    annotations: {
      cloud.google.com/neg: '{"ingress": true}'
    }
    labels: {}
    portName: service
  ingress:
    enabled: true
    path: /*
    pathType: ImplementationSpecific
    annotations: {
      ingress.gcp.kubernetes.io/pre-shared-cert: "monitoring-ssl",
      kubernetes.io/ingress.allow-http: "false",
      kubernetes.io/ingress.class: "gce-internal",
      kubernetes.io/ingress.global-static-ip-name: "grafana-cl2"
    }
WORKING NOW WITH THE FOLLOWING CONFIG
grafana:
  image:
    repository: europe-west3-docker.pkg.dev/del/mtx-monitoring/prometheus/grafana
    tag: 7.5.5
    sha: ""
  sidecar:
    image:
      repository: europe-west3-docker.pkg.dev/del/mtx-monitoring/prometheus/k8s-sidecar
      tag: 1.10.7
      sha: ""
    imagePullPolicy: IfNotPresent
  service:
    enabled: true
    type: NodePort
    # port: 80
    # targetPort: 3000
    annotations: {
      cloud.google.com/neg: '{"ingress": true}'
    }
    labels: {}
    portName: service
  ingress:
    enabled: true
    path: /*
    pathType: ImplementationSpecific
    annotations: {
      ingress.gcp.kubernetes.io/pre-shared-cert: "monitoring-ssl",
      kubernetes.io/ingress.allow-http: "false",
      kubernetes.io/ingress.class: "gce-internal",
      kubernetes.io/ingress.global-static-ip-name: "grafana-cl2"
    }
spec:
  rules:
    - host: grafana.monitoring.com
      http:
        paths:
          - backend:
              service:
                name: mtx-monitoring-cl2-grafana
                port:
                  number: 80

K3s Vault Cluster -- http: server gave HTTP response to HTTPS client

I am trying to set up a 3-node Vault cluster with Raft storage enabled. I am currently at a loss as to why the readiness probe (and the liveness probe as well) is returning:
Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
I am using Helm 3: helm install vault hashicorp/vault --namespace vault -f override-values.yaml
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: false

server:
  image:
    repository: "hashicorp/vault"
    tag: "1.5.5"

  resources:
    requests:
      memory: 1Gi
      cpu: 2000m
    limits:
      memory: 2Gi
      cpu: 2000m

  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  # extraEnvironmentVars is a list of extra environment variables to set on the server pods.
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    # holds the cert file and the key file
    - type: secret
      name: tls-server
    # holds the ca certificate
    - type: secret
      name: tls-ca

  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true

      config: |
        ui = true

        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
        }

        storage "raft" {
          path = "/vault/data"

          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
        }

        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
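For context, the tls-server and tls-ca volumes above are secrets that the chart mounts under /vault/userconfig/<name>/; a sketch of their expected shape (placeholder data, key names taken from the listener paths in the config):
apiVersion: v1
kind: Secret
metadata:
  name: tls-server
  namespace: vault
type: Opaque
data:
  tls.crt: <base64 server certificate>
  tls.key: <base64 server private key>
---
apiVersion: v1
kind: Secret
metadata:
  name: tls-ca
  namespace: vault
type: Opaque
data:
  ca.crt: <base64 CA certificate>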
Output from kubectl describe pod vault-0:
Name: vault-0
Namespace: vault
Priority: 0
Node: node4/10.211.55.7
Start Time: Wed, 11 Nov 2020 15:06:47 +0700
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-5c4b47bdc4
helm.sh/chart=vault-0.8.0
statefulset.kubernetes.io/pod-name=vault-0
vault-active=false
vault-initialized=false
vault-perf-standby=false
vault-sealed=true
vault-version=1.5.5
Annotations: <none>
Status: Running
IP: 10.42.4.82
IPs:
IP: 10.42.4.82
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: containerd://6dfde76051f44c22003cc02a880593792d304e74c56d717eef982e0e799672f2
Image: hashicorp/vault:1.5.5
Image ID: docker.io/hashicorp/vault@sha256:90cfeead29ef89fdf04383df9991754f4a54c43b2fb49ba9ff3feb713e5ef1be
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
[ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
[ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Wed, 11 Nov 2020 15:25:21 +0700
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 11 Nov 2020 15:19:10 +0700
Finished: Wed, 11 Nov 2020 15:20:20 +0700
Ready: False
Restart Count: 8
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 1Gi
Liveness: http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
Readiness: http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: vault (v1:metadata.namespace)
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
VAULT_RAFT_NODE_ID: vault-0 (v1:metadata.name)
HOME: /home/vault
VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-lfgnj (ro)
/vault/audit from audit (rw)
/vault/config from config (rw)
/vault/data from data (rw)
/vault/userconfig/tls-ca from userconfig-tls-ca (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-vault-0
ReadOnly: false
audit:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: audit-vault-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-tls-ca:
Type: Secret (a volume populated by a Secret)
SecretName: tls-ca
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-lfgnj:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-lfgnj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned vault/vault-0 to node4
Warning Unhealthy 17m (x2 over 17m) kubelet Liveness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true": http: server gave HTTP response to HTTPS client
Normal Killing 17m kubelet Container vault failed liveness probe, will be restarted
Normal Pulled 17m (x2 over 18m) kubelet Container image "hashicorp/vault:1.5.5" already present on machine
Normal Created 17m (x2 over 18m) kubelet Created container vault
Normal Started 17m (x2 over 18m) kubelet Started container vault
Warning Unhealthy 13m (x56 over 18m) kubelet Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
Warning BackOff 3m41s (x31 over 11m) kubelet Back-off restarting failed container
Logs from vault-0
2020-11-12T05:50:43.554426582Z ==> Vault server configuration:
2020-11-12T05:50:43.554524646Z
2020-11-12T05:50:43.554574639Z Api Address: https://10.42.4.85:8200
2020-11-12T05:50:43.554586234Z Cgo: disabled
2020-11-12T05:50:43.554596948Z Cluster Address: https://vault-0.vault-internal:8201
2020-11-12T05:50:43.554608637Z Go Version: go1.14.7
2020-11-12T05:50:43.554678454Z Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
2020-11-12T05:50:43.554693734Z Log Level: info
2020-11-12T05:50:43.554703897Z Mlock: supported: true, enabled: false
2020-11-12T05:50:43.554713272Z Recovery Mode: false
2020-11-12T05:50:43.554722579Z Storage: raft (HA available)
2020-11-12T05:50:43.554732788Z Version: Vault v1.5.5
2020-11-12T05:50:43.554769315Z Version Sha: f5d1ddb3750e7c28e25036e1ef26a4c02379fc01
2020-11-12T05:50:43.554780425Z
2020-11-12T05:50:43.672225223Z ==> Vault server started! Log data will stream in below:
2020-11-12T05:50:43.672519986Z
2020-11-12T05:50:43.673078706Z 2020-11-12T05:50:43.543Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-11-12T05:51:57.838970945Z ==> Vault shutdown triggered
I am running a 6-node Rancher k3s cluster (v1.19.3+k3s2) on my Mac.
Any help would be appreciated.

Can't get kubernetes to pass my tls certificate to browsers

I've been struggling for a while trying to get HTTPS access to my Elasticsearch cluster in Kubernetes.
I think the problem is that Kubernetes doesn't like the TLS certificate I'm trying to use, which is why it's not passing it all the way through to the browser.
Everything else seems to work, since when I accept the Kubernetes Ingress Controller Fake Certificate, the requests go through as expected.
In my attempt to do this I've set up:
The cluster itself
An nginx-ingress controller
An ingress resource
Here's the related yaml:
Cluster:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-03T03:20:47Z
  labels:
    run: my-es
  name: my-es
  namespace: default
  resourceVersion: "3159488"
  selfLink: /api/v1/namespaces/default/services/my-es
  uid: 373047e0-96cc-11e8-932b-42010a800043
spec:
  clusterIP: 10.63.241.39
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 9200
  selector:
    run: my-es
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
    nginx.ingress.kubernetes.io/cors-origins: http://localhost:3425 https://mydomain.ca
      https://myOtherDomain.ca
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: 2018-08-12T08:44:29Z
  generation: 16
  name: es-ingress
  namespace: default
  resourceVersion: "3159625"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/es-ingress
  uid: ece0071d-9e0b-11e8-8a45-42001a8000fc
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: my-es
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - mydomain.ca
      secretName: my-tls-secret
status:
  loadBalancer:
    ingress:
      - ip: 130.211.179.225
The nginx-ingress controller:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-08-12T00:41:32Z
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.23.0
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "2781955"
  selfLink: /api/v1/namespaces/default/services/nginx-ingress-controller
  uid: 755ee4b8-9dc8-11e8-85a4-4201a08000fc
spec:
  clusterIP: 10.63.250.256
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 32084
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 31182
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 35.212.6.131
I feel like I'm missing something basic, because it doesn't seem like it should be this hard to expose something this simple...
To get my certificate, I just requested one for mydomain.ca from godaddy.
Do I need to somehow get a certificate using my ingress resource's cluster IP as the common name?
It doesn't seem possible to verify ownership of an IP.
I've seen people mention ways for Kubernetes to automatically create certificates for ingress resources, but those seem to be self signed.
Here are some logs from the nginx-controller:
This one is talking about a PEM with the tls-secret, but it's only a warning.
{
  insertId: "1kvvhm7g1q7e0ej"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T02:58:42.135388365Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "WARNING"
  textPayload: "error obtaining PEM from secret default/my-tls-cert: error retrieving secret default/my-tls-cert: secret default/my-tls-cert was not found"
  timestamp: "2018-08-14T02:58:37Z"
}
I have a few occurrences of this handshake error, which may be a result of the last warning...
{
  insertId: "148t6rfg1xmz978"
  labels: {
    compute.googleapis.com/resource_name: "fluentd-gcp-v2.0.17-5b82n"
    container.googleapis.com/namespace_name: "default"
    container.googleapis.com/pod_name: "nginx-ingress-controller-58f57fc597-zl25s"
    container.googleapis.com/stream: "stderr"
  }
  logName: "projects/project-7d320/logs/nginx-ingress-controller"
  receiveTimestamp: "2018-08-14T15:55:52.438035706Z"
  resource: {
    labels: {
      cluster_name: "my-elasticsearch-cluster"
      container_name: "nginx-ingress-controller"
      instance_id: "2341889542400230234"
      namespace_id: "default"
      pod_id: "nginx-ingress-controller-58f57fc597-zl25s"
      project_id: "project-7d320"
      zone: "us-central1-a"
    }
    type: "container"
  }
  severity: "ERROR"
  textPayload: "2018/08/14 15:55:50 [crit] 1548#1548: *860 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:442"
  timestamp: "2018-08-14T15:55:50Z"
}
The above logs make it seem like my TLS secret isn't working, but when I run kubectl describe ingress, it says my secret terminates.
aaronmw#project-7d320:~$ kubectl describe ing
Name: es-ingress
Namespace: default
Address: 130.221.179.212
Default backend: default-http-backend:80 (10.61.3.7:8080)
TLS:
my-tls-secret terminates mydomain.ca
Rules:
Host Path Backends
---- ---- --------
*
/ my-es:8080 (<none>)
Annotations:
Events: <none>
I figured it out!
What I ended up doing was adding a default ssl certificate to my nginx-ingress controller on creation using the following command
helm install --name nginx-ingress --set controller.extraArgs.default-ssl-certificate=default/search-tls-secret stable/nginx-ingress
Once I had that, it was passing the cert as expected, but I still had the wrong cert as the CN didn't match my load balancer IP.
So what I did was:
Make my load balancer IP static
Add an A record to my domain, to map a subdomain to that IP
Re-key my cert to match that new subdomain
And I'm in business!
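A sketch of the first step (spec.loadBalancerIP is the standard way to pin a Service to a reserved address; the subdomain in the comment is just an example):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  loadBalancerIP: 35.212.6.131   # reserved static IP; an A record (e.g. search.mydomain.ca) points here
  ports:
    - name: https
      port: 443
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress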
Thanks to @Crou, whose comment reminded me to look at the logs and got me on the right track.