Why Aren't My Environment Variables Set in Kubernetes Pods From ConfigMap? - express

I have the following configmap spec:
apiVersion: v1
data:
  MY_NON_SECRET: foo
  MY_OTHER_NON_SECRET: bar
kind: ConfigMap
metadata:
  name: web-configmap
  namespace: default
$ kubectl describe configmap web-configmap
Name: web-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
MY_NON_SECRET:
----
foo
MY_OTHER_NON_SECRET:
----
bar
Events: <none>
And the following pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: kahunacohen/hello-kube:latest
      envFrom:
        - configMapRef:
            name: web-configmap
      ports:
        - containerPort: 3000
$ kubectl describe pod web-deployment-5bb9d846b6-8k2s9
Name: web-deployment-5bb9d846b6-8k2s9
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 12 Jul 2021 12:22:24 +0300
Labels: app=web-pod
pod-template-hash=5bb9d846b6
service=web-service
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/web-deployment-5bb9d846b6
Containers:
web:
Container ID: docker://8de5472c9605e5764276c345865ec52f9ec032e01ed58bc9a02de525af788acf
Image: kahunacohen/hello-kube:latest
Image ID: docker-pullable://kahunacohen/hello-kube@sha256:930dc2ca802bff72ee39604533342ef55e24a34b4a42b9074e885f18789ea736
Port: 3000/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 12 Jul 2021 12:22:27 +0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcqwz (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-tcqwz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned default/web-deployment-5bb9d846b6-8k2s9 to minikube
Normal Pulling 19m kubelet Pulling image "kahunacohen/hello-kube:latest"
Normal Pulled 19m kubelet Successfully pulled image "kahunacohen/hello-kube:latest" in 2.3212119s
Normal Created 19m kubelet Created container web
Normal Started 19m kubelet Started container web
The pod has a container running Express.js with this code, which tries to print the env vars set in the ConfigMap:
const process = require("process");
const express = require("express");

const app = express();

app.get("/", (req, res) => {
  res.send(`<h1>Kubernetes Expressjs Example 0.3</h1>
  <h2>Non-Secret Configuration Example</h2>
  <p>This uses ConfigMaps as env vars.</p>
  <ul>
    <li>MY_NON_SECRET: "${process.env.MY_NON_SECRET}"</li>
    <li>MY_OTHER_NON_SECRET: "${process.env.MY_OTHER_NON_SECRET}"</li>
  </ul>
  `);
});

app.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
When I deploy these pods, the env vars are undefined.
When I run $ kubectl exec {POD_NAME} -- env
I don't see my env vars.
What am I doing wrong? I've tried killing the pods, waiting until they restart, and then checking again, to no avail.

It looks like your pods are managed by the web-deployment Deployment. You cannot patch such pods directly.
If you run kubectl get pod <pod-name> -n <namespace> -oyaml, you'll see a block called ownerReferences under the metadata section. This tells you who is the owner/manager of this pod.
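For a pod managed by a Deployment, that block looks roughly like this (trimmed; the ReplicaSet name is taken from your describe output above):
metadata:
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: web-deployment-5bb9d846b6
      controller: true
      blockOwnerDeletion: true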
In case of a deployment, here is the ownership hierarchy:
Deployment -> ReplicaSet -> Pod
i.e. a Deployment creates a ReplicaSet, and the ReplicaSet in turn creates the Pods.
So, if you want to change anything in the pod spec, you should make that change in the Deployment, not in the ReplicaSet or the Pod directly, as those changes will get overwritten.
Patch your Deployment either by running the following and editing the environment field there:
kubectl edit deployment.apps <deployment-name> -n <namespace>
or update the deployment yaml with your changes and run
kubectl apply -f <deployment-yaml-file>
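For reference, here is a minimal sketch of what the change looks like in the Deployment manifest (the Deployment name, labels, and container details are assumed from your pod output above; the envFrom block has to live in the Deployment's pod template, not in a standalone Pod manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      containers:
        - name: web
          image: kahunacohen/hello-kube:latest
          # Import every key of the ConfigMap as an env var
          envFrom:
            - configMapRef:
                name: web-configmap
          ports:
            - containerPort: 3000
Once applied, the replacement pods should list the ConfigMap under "Environment Variables from" in kubectl describe pod, instead of Environment: <none>.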

Related

problem to schedule pod in fargate with error "Pod not supported on Fargate: volumes not supported: dir-authentication not supported because:"

New to AWS EKS Fargate.
I created a cluster on AWS EKS Fargate and then proceeded to install a Helm chart; the pods are all stuck in the Pending state. Looking at the pod description, I noticed errors like this:
eksctl create cluster -f cluster-fargate.yaml
k -n bd describe pod bd-blackduck-authentication-6c8ff5cc85-jwr8m
Name: bd-blackduck-authentication-6c8ff5cc85-jwr8m
Namespace: bd
Priority: 2000001000
Priority Class Name: system-node-critical
Node: <none>
Labels: app=blackduck
component=authentication
eks.amazonaws.com/fargate-profile=fp-bd
name=bd
pod-template-hash=6c8ff5cc85
version=2021.10.5
Annotations: checksum/blackduck-config: 6c1796e5e4218c71ea2ae7a1249fefbb6f7c216f702ea38919a0bb9751b06922
checksum/postgres-config: f21777c0b5bf24b5535a5b4a8dbf98a5df9c9dd2f4a48e5219dcccf46301a982
kubernetes.io/psp: eks.privileged
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/bd-blackduck-authentication-6c8ff5cc85
Init Containers:
bd-blackduck-postgres-waiter:
Image: docker.io/blackducksoftware/blackduck-postgres-waiter:1.0.0
Port: <none>
Host Port: <none>
Environment Variables from:
bd-blackduck-config ConfigMap Optional: false
Environment:
POSTGRES_HOST: <set to the key 'HUB_POSTGRES_HOST' of config map 'bd-blackduck-db-config'> Optional: false
POSTGRES_PORT: <set to the key 'HUB_POSTGRES_PORT' of config map 'bd-blackduck-db-config'> Optional: false
POSTGRES_USER: <set to the key 'HUB_POSTGRES_USER' of config map 'bd-blackduck-db-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85q7d (ro)
Containers:
authentication:
Image: docker.io/blackducksoftware/blackduck-authentication:2021.10.5
Port: 8443/TCP
Host Port: 0/TCP
Limits:
memory: 1Gi
Requests:
memory: 1Gi
Liveness: exec [/usr/local/bin/docker-healthcheck.sh https://127.0.0.1:8443/api/health-checks/liveness /opt/blackduck/hub/hub-authentication/security/root.crt /opt/blackduck/hub/hub-authentication/security/blackduck_system.crt /opt/blackduck/hub/hub-authentication/security/blackduck_system.key] delay=240s timeout=10s period=30s #success=1 #failure=10
Environment Variables from:
bd-blackduck-db-config ConfigMap Optional: false
bd-blackduck-config ConfigMap Optional: false
Environment:
HUB_MAX_MEMORY: 512m
DD_ENABLED: false
HUB_MANAGEMENT_ENDPOINT_PROMETHEUS_ENABLED: false
Mounts:
/opt/blackduck/hub/hub-authentication/ldap from dir-authentication (rw)
/opt/blackduck/hub/hub-authentication/security from dir-authentication-security (rw)
/tmp/secrets/HUB_POSTGRES_ADMIN_PASSWORD_FILE from db-passwords (rw,path="HUB_POSTGRES_ADMIN_PASSWORD_FILE")
/tmp/secrets/HUB_POSTGRES_USER_PASSWORD_FILE from db-passwords (rw,path="HUB_POSTGRES_USER_PASSWORD_FILE")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85q7d (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
dir-authentication:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: bd-blackduck-authentication
ReadOnly: false
db-passwords:
Type: Secret (a volume populated by a Secret)
SecretName: bd-blackduck-db-creds
Optional: false
dir-authentication-security:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-85q7d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 58s fargate-scheduler Pod not supported on Fargate: volumes not supported: dir-authentication not supported because: PVC bd-blackduck-authentication not bound
My storageClass is currently set to gp2 in my values.yaml.
What can I do next to troubleshoot this?
Currently, Fargate does not support PersistentVolumes backed by EBS. You can use EFS instead.
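As a rough sketch (this assumes the AWS EFS CSI driver is installed in the cluster, and <EFS_FILESYSTEM_ID> is a placeholder for your EFS file system ID), you would define an EFS-backed StorageClass and point the chart's storageClass value at it instead of gp2:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com   # AWS EFS CSI driver
parameters:
  provisioningMode: efs-ap     # dynamic provisioning via EFS access points
  fileSystemId: <EFS_FILESYSTEM_ID>
  directoryPerms: "700"
Then set storageClass: efs-sc in values.yaml so the bd-blackduck-authentication PVC can bind.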

.Net Core API with Redis on Kubernetes cluster(minikube)

I have implemented a .NET Core API which caches and accesses data from a Redis cache. I have created images for the API and Redis and deployed them in two different pods. Currently I am using minikube as the Kubernetes cluster on my local machine.
.Net API code:
Controller method code:
[HttpGet]
public IActionResult Get()
{
    var cacheKey = "weatherList";
    var rng = new Random();
    IEnumerable<WeatherForecast> weatherList = new List<WeatherForecast>();
    var redisWeatherList = _distributedCache.Get(cacheKey);
    if (redisWeatherList != null)
    {
        var serializedWeatherList = Encoding.UTF8.GetString(redisWeatherList);
        weatherList = JsonConvert.DeserializeObject<List<WeatherForecast>>(serializedWeatherList);
        return Ok(new Tuple<IEnumerable<WeatherForecast>, string>(weatherList, "Data fetched from cache"));
    }
    else
    {
        weatherList = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        }).ToArray();
        var serializedWeatherList = JsonConvert.SerializeObject(weatherList);
        redisWeatherList = Encoding.UTF8.GetBytes(serializedWeatherList);
        var options = new DistributedCacheEntryOptions()
            .SetAbsoluteExpiration(DateTime.Now.AddSeconds(5));
        _distributedCache.Set(cacheKey, redisWeatherList, options);
        return Ok(new Tuple<IEnumerable<WeatherForecast>>(weatherList));
    }
}
Redis cache configuration in the Startup class
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = Configuration.GetSection("AppConfiguration").GetSection("RedisConfiguration").Value;
});
For deployment I have created Deployment and Service YAML files for both of them. Since the API has to be accessible from outside, its Service type is LoadBalancer; Redis is only accessed by the API, so its Service uses the default ClusterIP type.
YML files:
API YML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-deployment
  template:
    metadata:
      labels:
        app: api-deployment
    spec:
      volumes:
        - name: appsettings-volume
          configMap:
            name: appsettings-configmap
      containers:
        - name: api-deployment
          image: testapiimage
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: appsettings-volume
              mountPath: /app/appsettings.json
              subPath: appsettings.json
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 200m
              memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api-service
spec:
  ports:
    - port: 8080
      targetPort: 80
  type: LoadBalancer
  selector:
    app: api-deployment
Redis YML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  labels:
    app: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-deployment
  template:
    metadata:
      labels:
        app: redis-deployment
    spec:
      containers:
        - name: redis-deployment
          image: redis
          ports:
            - containerPort: 6379
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 200m
              memory: 200Mi
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis-service
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis-deployment
Now the issue and my confusion is: how will the API know how to connect to the ClusterIP of the Redis pod? In the local code it connects through localhost, so when the API container runs in the cluster it still tries to reach the Redis container through localhost, just like in the code.
Is there any way for the API container to connect automatically to the ClusterIP of the Redis container, for example via environment variables? In my case I have to depend on the value of the Redis host, which in the container environment is the ClusterIP; that IP is not static and can't be injected through appsettings.
Please guide me here.
To reach other pods in the cluster that are not exposed publicly, you can use the fully qualified domain name (FQDN) assigned by the Kubernetes internal DNS. Kubernetes internal DNS will resolve this to the necessary ClusterIP value to reach the service.
The FQDN is in the format:
<service-name>.<namespace>.svc.<cluster-domain>
The default value of cluster-domain is cluster.local, and if no namespace was specified when deploying Redis, it will be running in the default namespace.
Assuming all defaults, the FQDN of the redis service would be:
redis-service.default.svc.cluster.local.
Kubernetes will automatically resolve this to the appropriate ClusterIP address.
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
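In your case, a minimal sketch of wiring this up would be to inject the service name into the api-deployment container as an env var (this assumes the default ASP.NET Core host builder, which maps __ in environment variable names to the : configuration separator, so the variable below overrides AppConfiguration:RedisConfiguration from appsettings.json):
# under the api-deployment container spec
env:
  - name: AppConfiguration__RedisConfiguration
    value: "redis-service.default.svc.cluster.local:6379"
Within the same namespace the short name redis-service:6379 also resolves; either way you stop depending on localhost or on a hard-coded ClusterIP.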

Tekton - mount path workspace issue - Error of path

Currently, I am trying to deploy tutum-hello-world. I have written a script for it, but it does not work as it is supposed to.
I am certain that this issue is related to the workspace.
UPDATE
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/
Error -
root@master1:~/tekton-scripts# tkn taskrun logs tutum-deploy-run-8sq8s -f -n default
[tutum-deploy] + kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
[tutum-deploy] error: the path "/root/tekton-scripts/tutum-deploy.yaml" cannot be accessed: stat /root/tekton-scripts/tutum-deploy.yaml: permission denied
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-14T12:54:01.096Z","type":"InternalTektonResult"}]
PS - I have placed my script on the master node at - /root/tekton-scripts/tutum-deploy.yaml
root@master1:~/tekton-scripts# ls -l tutum-deploy.yaml
-rwxrwxrwx 1 root root 626 Jun 11 11:31 tutum-deploy.yaml
OLD SCRIPT
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "./tutum-deploy.yaml"
Here is my code for tutum-deploy.yaml, which is present on the master node of the Kubernetes cluster with read, write and execute permissions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-tutum
  labels:
    service: hello-world-tutum
spec:
  replicas: 1
  selector:
    matchLabels:
      service: hello-world-tutum
  template:
    metadata:
      labels:
        service: hello-world-tutum
    spec:
      containers:
        - name: tutum-hello-world
          image: tutum/hello-world:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-tutum
spec:
  type: NodePort
  selector:
    service: hello-world-tutum
  ports:
    - name: "80"
      port: 80
      targetPort: 80
      nodePort: 30050
I ran the following commands from the master node of my Kubernetes cluster:
1. kubectl apply -f task-tutum-deploy.yaml
2. tkn task start tutum-deploy
Error -
Using the tkn command: $ tkn taskrun logs tutum-deploy-run-tvlll -f -n default
task tutum-deploy has failed: "step-tutum-deploy" exited with code 1 (image: "docker-pullable://bitnami/kubectl@sha256:b83299ee1d8657ab30fb7b7925b42a12c613e37609d2b4493b4b27b057c21d0f"); for logs run: kubectl -n default logs tutum-deploy-run-tvlll-pod-vbl5g -c step-tutum-deploy
[tutum-deploy] error: the path "./tutum-deploy.yaml" does not exist
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-11T14:01:49.786Z","type":"InternalTektonResult"}]
The error is from this part of your YAML:
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
spec.workspaces.mountPath expects a directory, rather than a file as you have specified here. You probably mean /root/tekton-scripts/ instead, but I am unfamiliar with tutum-hello-world.
If you look at the documentation you will see that all references to mountPath are directories rather than files.
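Applied to your old script, the change is just making the workspace mountPath a directory and pointing kubectl at the file inside it (a sketch; whether tutum-deploy.yaml actually ends up inside that workspace volume at run time is a separate question):
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/   # a directory, not a file
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      command: ["kubectl"]
      args: ["apply", "-f", "/root/tekton-scripts/tutum-deploy.yaml"]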

K3s Vault Cluster -- http: server gave HTTP response to HTTPS client

I am trying to set up a 3-node Vault cluster with Raft storage enabled. I am currently at a loss as to why the readiness probe (and the liveness probe as well) is returning:
Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
I am using Helm 3 to install: helm install vault hashicorp/vault --namespace vault -f override-values.yaml. Here is my override-values.yaml:
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: false

server:
  image:
    repository: "hashicorp/vault"
    tag: "1.5.5"

  resources:
    requests:
      memory: 1Gi
      cpu: 2000m
    limits:
      memory: 2Gi
      cpu: 2000m

  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    # holds the cert file and the key file
    - type: secret
      name: tls-server
    # holds the ca certificate
    - type: secret
      name: tls-ca

  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true

        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
        }

        storage "raft" {
          path = "/vault/data"

          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
        }

        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
Return from describe pod vault-0
Name: vault-0
Namespace: vault
Priority: 0
Node: node4/10.211.55.7
Start Time: Wed, 11 Nov 2020 15:06:47 +0700
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-5c4b47bdc4
helm.sh/chart=vault-0.8.0
statefulset.kubernetes.io/pod-name=vault-0
vault-active=false
vault-initialized=false
vault-perf-standby=false
vault-sealed=true
vault-version=1.5.5
Annotations: <none>
Status: Running
IP: 10.42.4.82
IPs:
IP: 10.42.4.82
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: containerd://6dfde76051f44c22003cc02a880593792d304e74c56d717eef982e0e799672f2
Image: hashicorp/vault:1.5.5
Image ID: docker.io/hashicorp/vault@sha256:90cfeead29ef89fdf04383df9991754f4a54c43b2fb49ba9ff3feb713e5ef1be
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
[ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
[ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Wed, 11 Nov 2020 15:25:21 +0700
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 11 Nov 2020 15:19:10 +0700
Finished: Wed, 11 Nov 2020 15:20:20 +0700
Ready: False
Restart Count: 8
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 1Gi
Liveness: http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
Readiness: http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: vault (v1:metadata.namespace)
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
VAULT_RAFT_NODE_ID: vault-0 (v1:metadata.name)
HOME: /home/vault
VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-lfgnj (ro)
/vault/audit from audit (rw)
/vault/config from config (rw)
/vault/data from data (rw)
/vault/userconfig/tls-ca from userconfig-tls-ca (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-vault-0
ReadOnly: false
audit:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: audit-vault-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-tls-ca:
Type: Secret (a volume populated by a Secret)
SecretName: tls-ca
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-lfgnj:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-lfgnj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned vault/vault-0 to node4
Warning Unhealthy 17m (x2 over 17m) kubelet Liveness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true": http: server gave HTTP response to HTTPS client
Normal Killing 17m kubelet Container vault failed liveness probe, will be restarted
Normal Pulled 17m (x2 over 18m) kubelet Container image "hashicorp/vault:1.5.5" already present on machine
Normal Created 17m (x2 over 18m) kubelet Created container vault
Normal Started 17m (x2 over 18m) kubelet Started container vault
Warning Unhealthy 13m (x56 over 18m) kubelet Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
Warning BackOff 3m41s (x31 over 11m) kubelet Back-off restarting failed container
Logs from vault-0
2020-11-12T05:50:43.554426582Z ==> Vault server configuration:
2020-11-12T05:50:43.554524646Z
2020-11-12T05:50:43.554574639Z Api Address: https://10.42.4.85:8200
2020-11-12T05:50:43.554586234Z Cgo: disabled
2020-11-12T05:50:43.554596948Z Cluster Address: https://vault-0.vault-internal:8201
2020-11-12T05:50:43.554608637Z Go Version: go1.14.7
2020-11-12T05:50:43.554678454Z Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
2020-11-12T05:50:43.554693734Z Log Level: info
2020-11-12T05:50:43.554703897Z Mlock: supported: true, enabled: false
2020-11-12T05:50:43.554713272Z Recovery Mode: false
2020-11-12T05:50:43.554722579Z Storage: raft (HA available)
2020-11-12T05:50:43.554732788Z Version: Vault v1.5.5
2020-11-12T05:50:43.554769315Z Version Sha: f5d1ddb3750e7c28e25036e1ef26a4c02379fc01
2020-11-12T05:50:43.554780425Z
2020-11-12T05:50:43.672225223Z ==> Vault server started! Log data will stream in below:
2020-11-12T05:50:43.672519986Z
2020-11-12T05:50:43.673078706Z 2020-11-12T05:50:43.543Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-11-12T05:51:57.838970945Z ==> Vault shutdown triggered
I am running a 6-node Rancher k3s cluster (v1.19.3ks2) on my Mac.
Any help would be appreciated.

How to connect Redis cluster from application in Kubernetes cluster?

This is my application code:
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host=os.getenv("REDIS", "redis"), db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Here I want to connect to a Redis host in the Kubernetes cluster, so I use an environment variable REDIS whose value is set in the Kubernetes manifest file.
This is the Kubernetes deployment manifest file:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ template "chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: REDIS
              value: {{ template "fullname" . }}-master-svc
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
In order to connect to the Redis host in the same cluster, I set the environment variable as:
env:
  - name: REDIS
    value: {{ template "fullname" . }}-master-svc
The Redis cluster was installed with the official redis-ha chart. Its master service manifest file is:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}-master-svc
  annotations:
{{ toYaml .Values.servers.annotations | indent 4 }}
spec:
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: "redis-ha"
    redis-node: "true"
    redis-role: "master"
    release: "{{ .Release.Name }}"
  type: "{{ .Values.servers.serviceType }}"
But it seems that the application pod didn't connect to the Redis master service successfully. When I accessed the URL:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services wishful-rabbit-mychart)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
from the browser, I got this result:
Hello World!
Hostname: wishful-rabbit-mychart-85dc7658c6-9blg5
Visits: cannot connect to Redis, counter disabled
More information about pods and services:
kubectl get po
NAME READY STATUS RESTARTS AGE
wishful-rabbit-mychart-bdfdf6558-8fjmb 1/1 Running 0 8m
wishful-rabbit-mychart-bdfdf6558-9khfs 1/1 Running 0 7m
wishful-rabbit-mychart-bdfdf6558-hgqxv 1/1 Running 0 8m
wishful-rabbit-mychart-sentinel-8667dd57d4-9njwq 1/1 Running 0 37m
wishful-rabbit-mychart-sentinel-8667dd57d4-jsq6d 1/1 Running 0 37m
wishful-rabbit-mychart-sentinel-8667dd57d4-ndqss 1/1 Running 0 37m
wishful-rabbit-mychart-server-746f47dfdd-2fn4s 1/1 Running 0 37m
wishful-rabbit-mychart-server-746f47dfdd-bwgrq 1/1 Running 0 37m
wishful-rabbit-mychart-server-746f47dfdd-spkkm 1/1 Running 0 37m
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h
wishful-rabbit-mychart NodePort 10.101.103.224 <none> 80:30033/TCP 37m
wishful-rabbit-mychart-master-svc ClusterIP 10.108.128.167 <none> 6379/TCP 37m
wishful-rabbit-mychart-sentinel ClusterIP 10.107.63.208 <none> 26379/TCP 37m
wishful-rabbit-mychart-slave-svc ClusterIP 10.99.211.111 <none> 6379/TCP 37m
What's the right usage?
Redis Server Pod Env
kubectl exec -it wishful-rabbit-mychart-server-746f47dfdd-2fn4s -- sh
/ # printenv
KUBERNETES_PORT=tcp://10.96.0.1:443
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PORT=6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_ADDR=10.108.128.167
KUBERNETES_SERVICE_PORT=443
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PROTO=tcp
WISHFUL_RABBIT_MYCHART_SERVICE_HOST=10.101.103.224
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PORT=6379
HOSTNAME=wishful-rabbit-mychart-server-746f47dfdd-2fn4s
SHLVL=1
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PROTO=tcp
HOME=/root
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_HOST=10.107.63.208
WISHFUL_RABBIT_MYCHART_PORT=tcp://10.101.103.224:80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_SERVICE_PORT=80
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP=tcp://10.108.128.167:6379
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_ADDR=10.101.103.224
REDIS_CHART_PREFIX=wishful-rabbit-mychart-
TERM=xterm
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PORT=80
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_ADDR=10.107.63.208
WISHFUL_RABBIT_MYCHART_PORT_80_TCP=tcp://10.101.103.224:80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_HOST=10.99.211.111
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_HOST=10.108.128.167
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_PORT=6379
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_SERVICE_PORT_HTTP=80
REDIS_SENTINEL_SERVICE_HOST=redis-sentinel
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_ADDR=10.99.211.111
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT=tcp://10.108.128.167:6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_PORT=6379
Application Pod Env
kubectl exec -it wishful-rabbit-mychart-85dc7658c6-8wlq6 -- sh
# printenv
KUBERNETES_SERVICE_PORT=443
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PORT=6379
KUBERNETES_PORT=tcp://10.96.0.1:443
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_ADDR=10.108.128.167
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PROTO=tcp
WISHFUL_RABBIT_MYCHART_SERVICE_HOST=10.101.103.224
HOSTNAME=wishful-rabbit-mychart-85dc7658c6-8wlq6
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PORT=6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PROTO=tcp
PYTHON_PIP_VERSION=9.0.1
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_HOST=10.107.63.208
HOME=/root
GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF
REDIS=wishful-rabbit-mychart-master-svc
WISHFUL_RABBIT_MYCHART_SERVICE_PORT=80
WISHFUL_RABBIT_MYCHART_PORT=tcp://10.101.103.224:80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP=tcp://10.108.128.167:6379
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_ADDR=10.101.103.224
NAME=World
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PORT=80
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
LANG=C.UTF-8
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_ADDR=10.107.63.208
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_HOST=10.99.211.111
WISHFUL_RABBIT_MYCHART_PORT_80_TCP=tcp://10.101.103.224:80
PYTHON_VERSION=2.7.14
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_HOST=10.108.128.167
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/app
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_SERVICE_PORT_HTTP=80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_PORT=6379
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_ADDR=10.99.211.111
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_PORT=6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT=tcp://10.108.128.167:6379
All Endpoints
kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 4h
wishful-rabbit-mychart 172.17.0.5:80,172.17.0.6:80,172.17.0.7:80 1h
wishful-rabbit-mychart-master-svc <none> 1h
wishful-rabbit-mychart-sentinel 172.17.0.11:26379,172.17.0.12:26379,172.17.0.8:26379 1h
wishful-rabbit-mychart-slave-svc <none>
Describe Redis Master Service
kubectl describe svc wishful-rabbit-mychart-master-svc
Name: wishful-rabbit-mychart-master-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=redis-ha,redis-node=true,redis-role=master,release=wishful-rabbit
Type: ClusterIP
IP: 10.108.128.167
Port: <unset> 6379/TCP
TargetPort: 6379/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
Describe Redis Server Pod
kubectl describe po wishful-rabbit-mychart-server-746f47dfdd-2fn4s
Name: wishful-rabbit-mychart-server-746f47dfdd-2fn4s
Namespace: default
Node: minikube/192.168.99.100
Start Time: Fri, 02 Feb 2018 15:28:42 +0900
Labels: app=mychart
chart=mychart-0.1.0
heritage=Tiller
name=redis-server
pod-template-hash=3029038988
podIP=172.17.0.10
redis-node=true
redis-role=master
release=wishful-rabbit
runID=cbf8e0
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wishful-rabbit-mychart-server-746f47dfdd","uid":"4fcb0dfc-07e2-1...
Status: Running
IP: 172.17.0.10
Controlled By: ReplicaSet/wishful-rabbit-mychart-server-746f47dfdd
Containers:
redis:
Container ID: docker://2734d60bd44a1446ec6556369359ed15b866a4589abe1e6d394f9252114c6d4d
Image: quay.io/smile/redis:4.0.6r2
Image ID: docker-pullable://quay.io/smile/redis@sha256:8948a952920d4495859c984546838d4c9b4c71e0036eef86570922d91cacb3df
Port: 6379/TCP
State: Running
Started: Fri, 02 Feb 2018 15:28:44 +0900
Ready: True
Restart Count: 0
Environment:
REDIS_SENTINEL_SERVICE_HOST: redis-sentinel
REDIS_CHART_PREFIX: wishful-rabbit-mychart-
Mounts:
/redis-master-data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wfv2q (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-wfv2q:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wfv2q
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
There is a discrepancy between your wishful-rabbit-mychart-master-svc service's Selector and your master redis pods' labels.
Your service is searching for pods with the following labels:
app=redis-ha
redis-node=true
redis-role=master
release=wishful-rabbit
While your pods have the following labels:
app=mychart
chart=mychart-0.1.0
heritage=Tiller
name=redis-server
pod-template-hash=3029038988
podIP=172.17.0.10
redis-node=true
redis-role=master
release=wishful-rabbit
runID=cbf8e0
As you can see the app label is different (redis-ha in your service, mychart in your pods).
This leaves the service unbound, with no endpoints (it doesn't know where to forward the incoming traffic).
While I don't know the actual cause of this configuration error, I can suggest a solution to make it work.
You have to edit your redis service and change its selector attribute in order to match the pod's labels. Just run:
kubectl edit svc wishful-rabbit-mychart-master-svc
and change app: "redis-ha" to app: "mychart".
Your application should then be able to reach your redis instance.
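Based on the labels shown above, the corrected selector block would look roughly like this (only the app value changes; the other labels stay as they are):
spec:
  selector:
    app: "mychart"
    redis-node: "true"
    redis-role: "master"
    release: "wishful-rabbit"
After that, the wishful-rabbit-mychart-master-svc entry in kubectl get ep should show pod IPs instead of <none>.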
A Kubernetes Service is an abstraction which defines a logical set of Pods. The set of Pods targeted by a Service is (usually) determined by a Label Selector.
So, Pods can be targeted by a Service with the help of a selector.
The Service's selector is evaluated continuously, and the results are POSTed to an Endpoints object.
If no Pod is found matching the provided selector, no valid Endpoints will be created.
In that case, using the Service to access the Pods will not work, because DNS will not get any IP address from the Endpoints object.
To learn more, read about how Services work.