Conftest Fails For a Valid Kubernetes YAML File

I have the following simple Kubernetes YAML Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  namespace: {{ .Values.app.namespace }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: {{ .Values.app.name }}
        image: {{ .Values.plantSimulatorService.image.repository }}:{{ .Values.plantSimulatorService.image.tag }}
        ports:
        - containerPort: {{ .Values.plantSimulatorService.ports.containerPort }} # Get this value from ConfigMap
I have the following in my test.rego:
package main

import data.kubernetes

name = input.metadata.name

deny[msg] {
  kubernetes.is_deployment
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg = sprintf("Containers must not run as root in Deployment %s", [name])
}
When I ran this using the following command:
joesan@joesan-InfinityBook-S-14-v5:~/Projects/Private/infrastructure-projects/plant-simulator-deployment$ helm conftest helm-k8s -p test
WARN - Found service plant-simulator-service but services are not allowed
WARN - Found service plant-simulator-grafana but services are not allowed
WARN - Found service plant-simulator-prometheus but services are not allowed
FAIL - Containers must not run as root in Deployment plant-simulator
FAIL - Deployment plant-simulator must provide app/release labels for pod selectors
As you can see, I'm indeed not running the container as root, but despite that I get this error message: Containers must not run as root in Deployment plant-simulator
Any ideas what the reason could be?

You need to add runAsNonRoot to your securityContext:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  runAsNonRoot: true
The Rego rule only validates the YAML structure - it is not clever enough to work out that your configuration effectively runs as a non-root user.
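If you would rather have the policy accept an explicit non-zero runAsUser as well, a variation of the rule along these lines could work (a sketch only, assuming the same kubernetes helper package used above):

package main

import data.kubernetes

name = input.metadata.name

# Deny only if neither runAsNonRoot nor a non-zero runAsUser is set.
deny[msg] {
  kubernetes.is_deployment
  not input.spec.template.spec.securityContext.runAsNonRoot
  not non_zero_run_as_user
  msg = sprintf("Containers must not run as root in Deployment %s", [name])
}

# Treat an explicit non-zero runAsUser as "not root" as well.
non_zero_run_as_user {
  input.spec.template.spec.securityContext.runAsUser > 0
}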

helm config map from external yaml file

I want to update my Helm dependencies with configurations declared in a central folder shared among microservices.
I have the following folder tree:
- config-repo
  - application.yml
  - specific.yml
- kubernetes
  - helm
    - common
    - components
      - microservice#1 (templates relating to it)
        - config-repo
          - application.yml
          - specific.yml
        - templates
          - configmap_from_file.yaml
        - values.yaml
        - Chart.yaml
      - microservice#2
      - ...
Here is the template of microservice#1's configmap_from_file.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "common.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "common.name" . }}
    helm.sh/chart: {{ include "common.chart" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
{{ (.Files.Glob "config-repo/*").AsConfig | indent 2 }}
{{- end -}}
and the files inside microservice#1's config-repo are:
specific.yml
../../../../../config-repo/specific.yml
application.yml
../../../../../config-repo/application.yml
Both of them are just references to the other files.
When I run helm dep update . and then helm template . -s templates/configmap_from_file.yaml,
I expect the following output:
apiVersion: v1
kind: ConfigMap
metadata:
  name: review
  labels:
    app.kubernetes.io/name: review
    helm.sh/chart: review-1.0.0
    app.kubernetes.io/managed-by: Helm
data:
  application.yml: CONTENTS OF THE FILE IN ADDRESS
  specific.yml: CONTENTS OF THE FILE IN ADDRESS
but the following appears instead:
apiVersion: v1
kind: ConfigMap
metadata:
  name: review
  labels:
    app.kubernetes.io/name: review
    helm.sh/chart: review-1.0.0
    app.kubernetes.io/managed-by: Helm
data:
  application.yml: ../../../../../config-repo/application.yml
  specific.yml: ../../../../../config-repo/specific.yml
Why is only the path injected and not the file contents?
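For reference, .Files.Glob can only read files that physically live inside the chart directory, and .AsConfig simply renders whatever bytes those files contain. A minimal sketch of that behaviour, with a hypothetical chart layout and file contents:

# Chart layout (hypothetical):
#   mychart/
#     config-repo/
#       application.yml    <- contains real settings, not a path
#     templates/
#       configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
{{ (.Files.Glob "config-repo/*").AsConfig | indent 2 }}
# Rendered output contains the literal file contents, e.g.:
#   data:
#     application.yml: |-
#       server:
#         port: 8080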

Tekton - mount path workspace issue - Error of path

Currently, I am trying to deploy tutum-hello-world. I have written a script for the same, but it does not work as it is supposed to.
I am certain that this issue is related to workspace.
UPDATE
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/
Error -
root@master1:~/tekton-scripts# tkn taskrun logs tutum-deploy-run-8sq8s -f -n default
[tutum-deploy] + kubectl apply -f /root/tekton-scripts/tutum-deploy.yaml
[tutum-deploy] error: the path "/root/tekton-scripts/tutum-deploy.yaml" cannot be accessed: stat /root/tekton-scripts/tutum-deploy.yaml: permission denied
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-14T12:54:01.096Z","type":"InternalTektonResult"}]
PS - I have placed my script on the master node at - /root/tekton-scripts/tutum-deploy.yaml
root@master1:~/tekton-scripts# ls -l tutum-deploy.yaml
-rwxrwxrwx 1 root root 626 Jun 11 11:31 tutum-deploy.yaml
OLD SCRIPT
Here is my code for task-tutum-deploy.yaml-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "./tutum-deploy.yaml"
Here is my code for tutum-deploy.yaml, which is present on the master node of the Kubernetes cluster with read, write and execute permissions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-tutum
  labels:
    service: hello-world-tutum
spec:
  replicas: 1
  selector:
    matchLabels:
      service: hello-world-tutum
  template:
    metadata:
      labels:
        service: hello-world-tutum
    spec:
      containers:
        - name: tutum-hello-world
          image: tutum/hello-world:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-tutum
spec:
  type: NodePort
  selector:
    service: hello-world-tutum
  ports:
    - name: "80"
      port: 80
      targetPort: 80
      nodePort: 30050
I ran the following commands from my master node of Kubernetes cluster -
1. kubectl apply -f task-tutum-deploy.yaml
2. tkn task start tutum-deploy
Error -
Using tekton command - $ tkn taskrun logs tutum-deploy-run-tvlll -f -n default
task tutum-deploy has failed: "step-tutum-deploy" exited with code 1 (image: "docker-pullable://bitnami/kubectl@sha256:b83299ee1d8657ab30fb7b7925b42a12c613e37609d2b4493b4b27b057c21d0f"); for logs run: kubectl -n default logs tutum-deploy-run-tvlll-pod-vbl5g -c step-tutum-deploy
[tutum-deploy] error: the path "./tutum-deploy.yaml" does not exist
container step-tutum-deploy has failed : [{"key":"StartedAt","value":"2021-06-11T14:01:49.786Z","type":"InternalTektonResult"}]
The error is from this part of your YAML:
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/tutum-deploy.yaml
spec.workspaces.mountPath expects a directory rather than a file, which is what you have specified here. You may mean /root/tekton-scripts/ instead, but I am unfamiliar with tutum-hello-world.
If you look at the documentation, you will see that all references to mountPath are directories rather than files.
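For illustration, the workspace could be declared against a directory and the file then addressed inside it, for example via the $(workspaces.<name>.path) variable (a sketch only, reusing the names from the question):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: tutum-deploy
spec:
  workspaces:
    - name: messages
      optional: true
      mountPath: /root/tekton-scripts/   # a directory, not a file
  steps:
    - name: tutum-deploy
      image: bitnami/kubectl
      script: |
        # the manifest is a file inside the mounted directory
        kubectl apply -f $(workspaces.messages.path)/tutum-deploy.yaml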

sslCertificateCouldNotParseCert - Error syncing to GCP:

I have Terraform code that deploys the frontend application, and the Helm chart includes an ingress.yaml template.
ingress.yaml
{{- if .Values.ingress.enabled -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ .Values.global.namespace }}-ingress
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "test-frontend.labels" . | nindent 4 }}
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            backend:
              serviceName: {{ .servicename }}
              servicePort: {{ .serviceport }}
          {{- end }}
    {{- end }}
{{- end }}
values.yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "ingress-internal-static-ip"
    kubernetes.io/ingress.allow-http: "false"
  hosts:
    - host: test-dev.test.com
      paths:
        - path: "/*"
          servicename: test-frontend-service
          serviceport: 80
        - path: "/api/*"
          servicename: test-backend-service
          serviceport: 80
  tls:
    - hosts:
        - test-dev.test.com
      secretName: ingress-tls-credential-file
      type: kubernetes.io/tls
      crt: <<test.pem value>>
      key: <<test.key value>>
The terraform apply command ran successfully. In GCP the certificate is also accepted, and the Ingress is up and running inside the Kubernetes service in GCP. But if I pass the .crt and .key as files in values.yaml in the Terraform code,
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "ingress-internal-static-ip"
    kubernetes.io/ingress.allow-http: "false"
  hosts:
    - host: test-dev.test.com
      paths:
        - path: "/*"
          servicename: test-frontend-service
          serviceport: 80
        - path: "/api/*"
          servicename: test-backend-service
          serviceport: 80
  tls:
    - hosts:
        - test-dev.test.com
      secretName: ingress-tls-credential-file
      type: kubernetes.io/tls
      crt: file(../../.secret/test.crt)
      key: file(../../.secret/test.key)
The values.yaml passes the certificate to helm -> templates -> secret.yaml, which creates the secret (ingress-tls-credential-file).
secret.yaml
{{- if .Values.ingress.tls }}
{{- $namespace := .Values.global.namespace }}
{{- range .Values.ingress.tls }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .secretName }}
  namespace: {{ $namespace }}
  labels:
    {{- include "test-frontend.labels" $ | nindent 4 }}
type: {{ .type }}
data:
  tls.crt: {{ toJson .crt | b64enc | quote }}
  tls.key: {{ toJson .key | b64enc | quote }}
{{- end }}
{{- end }}
We are getting the error below in GCP -> Kubernetes Engine -> Services & Ingress. How can I pass the files to the values.yaml file?
Error syncing to GCP: error running load balancer syncing routine:
loadbalancer 6370cwdc-isp-isp-ingress-ixjheqwi does not exist: Cert
creation failures - k8s2-cr-6370cwdc-q0ndkz9m629eictm-ca5d0f56ba7fe415
Error:googleapi: Error 400: The SSL certificate could not be parsed.,
sslCertificateCouldNotParseCert
For Google to accept your cert and key files, you need to make sure they have the proper format, as per the next steps.
You first need to create a self-managed SSL certificate resource from your existing files, using the GCP Cloud Shell:
gcloud compute ssl-certificates create CERTIFICATE_NAME \
--certificate=CERTIFICATE_FILE \
--private-key=PRIVATE_KEY_FILE \
--region=REGION \
--project=PROJECT_ID
Then you need to complete a few more steps to make sure you have all the parameters required in your .yaml file and that you have the proper services enabled to accept the information coming from it (you may already have completed them):
Enable Kubernetes Engine API by running the following command:
gcloud services enable container.googleapis.com \
--project=PROJECT_ID
Create a GKE cluster:
gcloud container clusters create CLUSTER_NAME \
--release-channel=rapid \
--enable-ip-alias \
--network=NETWORK_NAME \
--subnetwork=BACKEND_SUBNET_NAME \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--region=REGION --machine-type=MACHINE_TYPE \
--project=PROJECT_ID
- The cluster is created in the BACKEND_SUBNET_NAME.
- The cluster uses GKE version 1.18.9-gke.801 or later.
- The cluster is created with the Cloud Platform scope.
- The cluster is created with the desired service account you would like to use to run the application.
- The cluster uses the n1-standard-4 machine type or better.
Enable IAP by doing the following steps:
Configure the OAuth consent screen.
Create OAuth credentials.
Convert the ID and Secret to base64 by running the following commands:
echo -n 'CLIENT_ID' | base64
echo -n 'CLIENT_SECRET' | base64
Create an internal static IP address and reserve it for your load balancer:
gcloud compute addresses create STATIC_ADDRESS_NAME \
--region=REGION --subnet=BACKEND_SUBNET_NAME \
--project=PROJECT_ID
Get the static IP address by running the following command:
gcloud compute addresses describe STATIC_ADDRESS_NAME \
--region=REGION \
--project=PROJECT_ID
Create the values YAML file by copying gke_internal_ip_config_example.yaml and renaming it to PROJECT_ID_gke_config.yaml, then fill in the following (see the sketch after this list):
- clientIDEncoded: the Base64-encoded CLIENT_ID from the earlier step.
- clientSecretEncoded: the Base64-encoded CLIENT_SECRET from the earlier step.
- certificate.name: the CERTIFICATE_NAME that you created earlier.
- initialEmail: the INITIAL_USER_EMAIL of the initial user who will set up Custom Governance.
- staticIpName: the STATIC_ADDRESS_NAME that you created earlier.
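A sketch of what that file might end up looking like (the exact key layout comes from the example file you copy, so treat the nesting below as illustrative only):

# PROJECT_ID_gke_config.yaml (illustrative placeholders)
clientIDEncoded: BASE64_CLIENT_ID
clientSecretEncoded: BASE64_CLIENT_SECRET
certificate:
  name: CERTIFICATE_NAME        # assumed nesting for the certificate.name key
initialEmail: INITIAL_USER_EMAIL
staticIpName: STATIC_ADDRESS_NAME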
Try your deployment again after completing the above steps.
You seem to be mixing a Secret and a direct definition.
You first need to create the ingress-tls-credential-file Secret, then reference it in your Ingress definition, as in the example at https://kubernetes.io/fr/docs/concepts/services-networking/ingress/#tls:
apiVersion: v1
data:
  tls.crt: file(../../.secret/test.crt)
  tls.key: file(../../.secret/test.key)
kind: Secret
metadata:
  name: ingress-tls-credential-file
  namespace: default
type: kubernetes.io/tls
Then clean up your ingress values:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.regional-static-ip-name: "ingress-internal-static-ip"
    kubernetes.io/ingress.allow-http: "false"
  hosts:
    - host: test-dev.test.com
      paths:
        - path: "/*"
          servicename: test-frontend-service
          serviceport: 80
        - path: "/api/*"
          servicename: test-backend-service
          serviceport: 80
  tls:
    - hosts:
        - test-dev.test.com
      secretName: ingress-tls-credential-file
      type: kubernetes.io/tls
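Note that file(...) is Terraform syntax, not Helm, so it only does anything where Terraform itself evaluates it. If the chart is installed through the Terraform Helm provider, one way to feed the real PEM contents into the chart values is roughly the following (a sketch under that assumption; the resource name and chart path are illustrative):

resource "helm_release" "test_frontend" {
  name  = "test-frontend"
  chart = "./helm/test-frontend"

  # Merge the certificate material into the chart values so that
  # secret.yaml receives the actual PEM contents rather than a path string.
  values = [
    yamlencode({
      ingress = {
        tls = [{
          hosts      = ["test-dev.test.com"]
          secretName = "ingress-tls-credential-file"
          type       = "kubernetes.io/tls"
          crt        = file("${path.module}/../../.secret/test.crt")
          key        = file("${path.module}/../../.secret/test.key")
        }]
      }
    })
  ]
}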

Problems with Traefik/Keycloak (and Gatekeeper) in front of Kibana

I want to use Keycloak as a standard way of authenticating users to applications running in our Kubernetes clusters. One of the clusters is running the Elastic ECK component (v1.1.1) and we use the operator to deploy Elastic clusters and Kibana as a frontend. In order to keep things as simple as possible I’ve done the following.
Deployed Kibana
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: {{ .Values.kibana.name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
spec:
  version: {{ .Values.kibana.version }}
  count: {{ .Values.kibana.instances }}
  elasticsearchRef:
    name: {{ .Values.kibana.elasticCluster }}
    namespace: {{ .Release.Namespace }}
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: SERVER_BASEPATH
          value: {{ .Values.kibana.serverBasePath }}
        resources:
          requests:
            {{- if not .Values.kibana.cpu.enableBurstableQoS }}
            cpu: {{ .Values.kibana.cpu.requests }}
            {{- end }}
            memory: {{ .Values.kibana.memory.requests }}Gi
          limits:
            {{- if not .Values.kibana.cpu.enableBurstableQoS }}
            cpu: {{ .Values.kibana.cpu.limits }}
            {{- end }}
            memory: {{ .Values.kibana.memory.limits }}Gi
  http:
    tls:
      selfSignedCertificate:
        disabled: true
Created Ingress
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: kibana-{{ .Values.kibana.name }}-stripprefix
  namespace: {{ .Release.Namespace }}
spec:
  stripPrefix:
    prefixes:
      - {{ .Values.kibana.serverBasePath }}
    forceSlash: true
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.kibana.name }}-ingress
  namespace: {{ .Release.Namespace }}
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: http
    traefik.ingress.kubernetes.io/router.middlewares: {{ .Release.Namespace }}-kibana-{{ .Values.kibana.name }}-stripprefix@kubernetescrd
spec:
  rules:
    - http:
        paths:
          - path: {{ .Values.kibana.serverBasePath }}
            backend:
              servicePort: {{ .Values.kibana.port }}
              serviceName: {{ .Values.kibana.name }}-kb-http
Result
Deploying the above works perfectly fine. I’m able to reach the Kibana UI using the external IP exposed by our MetalLB component. I simply enter http://external IP/service/logging/kibana, I’m presented with the Kibana login screen, and I can log in using the “built-in” authentication process.
Adding the Keycloak Gatekeeper
Now, if I add the following to the Kibana manifest, effectively adding the Keycloak Gatekeeper sidecar to the Kibana Pod:
- name: {{ .Values.kibana.name }}-gatekeeper
  image: "{{ .Values.kibana.keycloak.gatekeeper.repository }}/docker-r/keycloak/keycloak-gatekeeper:{{ .Values.kibana.keycloak.gatekeeper.version }}"
  args:
    - --config=/etc/keycloak-gatekeeper.conf
  ports:
    - containerPort: 3000
      name: proxyport
  volumeMounts:
    - name: gatekeeper-config
      mountPath: /etc/keycloak-gatekeeper.conf
      subPath: keycloak-gatekeeper.conf
volumes:
  - name: gatekeeper-config
    configMap:
      name: {{ .Release.Name }}-gatekeeper-config
with the following ConfigMap which is "mounted":
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-gatekeeper-config
  namespace: {{ .Release.Namespace }}
data:
  keycloak-gatekeeper.conf: |+
    redirection-url: {{ .Values.kibana.keycloak.gatekeeper.redirectionUrl }}
    discovery-url: https://.../auth/realms/{{ .Values.kibana.keycloak.gatekeeper.realm }}
    skip-openid-provider-tls-verify: true
    client-id: kibana
    client-secret: {{ .Values.kibana.keycloak.gatekeeper.clientSecret }}
    enable-refresh-tokens: true
    encryption-key: ...
    listen: :3000
    tls-cert:
    tls-private-key:
    secure-cookie: false
    upstream-url: {{ .Values.kibana.keycloak.gatekeeper.upstreamUrl }}
    resources:
      - uri: /*
        groups:
          - kibana
The upstream-url points to http://127.0.0.1:5601
and add an intermediary service:
In order to explicitly address the Gatekeeper proxy I added another service, “keycloak-proxy” as such:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.kibana.name }}-keycloak-proxy
  namespace: {{ .Release.Namespace }}
spec:
  type: ClusterIP
  selector:
    common.k8s.elastic.co/type: kibana
    kibana.k8s.elastic.co/name: cap-logging
  ports:
    - name: http
      protocol: TCP
      port: 8888
      targetPort: proxyport
and change the backend definition in the Kibana Ingress to:
servicePort: 8888
serviceName: {{ .Values.kibana.name }}-keycloak-proxy
and then issue the same URL as above, http://external IP/service/logging/kibana, I’m redirected to http://external IP/oauth/authorize?state=0db97b79-b980-4cdc-adbe-707a5e37df1b and get a “404 Page not found” error.
If I reconfigure the “keycloak-proxy” service, convert it into a NodePort, expose it on, say, port 32767, and issue http://host IP:32767, I’m presented with the Keycloak login screen on the Keycloak server!
If I look into the Gatekeeper startup log I find the following:
1.6018108005048046e+09 info starting the service {"prog": "keycloak-gatekeeper", "author": "Keycloak", "version": "7.0.0 (git+sha: f66e137, built: 03-09-2019)"}
1.6018108005051787e+09 info attempting to retrieve configuration discovery url {"url": "https://.../auth/realms/...", "timeout": "30s"}
1.601810800537417e+09 info successfully retrieved openid configuration from the discovery
1.6018108005392597e+09 info enabled reverse proxy mode, upstream url {"url": "http://127.0.0.1:5601"}
1.6018108005393562e+09 info using session cookies only for access and refresh tokens
1.6018108005393682e+09 info protecting resource {"resource": "uri: /*, methods: DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT,TRACE, required: authentication only"}
1.6018108005398147e+09 info keycloak proxy service starting {"interface": ":3000"}
This is what I get when I try to access Kibana through the Gatekeeper proxy:
http://host/service/logging/kibana (gets redirected to) http://host/oauth/authorize?state=4dbde9e7-674c-4593-83f2-a8e5ba7cf6b5
and the Gatekeeper log:
1.601810963344485e+09 error no session found in request, redirecting for authorization {"error": "authentication session not found"}
I've been struggling with this for some time now and seem to be stuck! If anybody here knows what's going on I'd be very grateful.

How to connect Redis cluster from application in Kubernetes cluster?

This is my application code:
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host=os.getenv("REDIS", "redis"), db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Here I want to connect to a Redis host in the Kubernetes cluster, so I use an environment variable REDIS whose value is set in the Kubernetes manifest file.
This is the Kubernetes deployment manifest file:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ template "chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: REDIS
              value: {{ template "fullname" . }}-master-svc
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
In order to connect to the Redis host in the same cluster, I set the environment value as:
env:
  - name: REDIS
    value: {{ template "fullname" . }}-master-svc
As for the Redis cluster, it was installed with the official redis-ha chart. Its master service manifest file is:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}-master-svc
  annotations:
{{ toYaml .Values.servers.annotations | indent 4 }}
spec:
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: "redis-ha"
    redis-node: "true"
    redis-role: "master"
    release: "{{ .Release.Name }}"
  type: "{{ .Values.servers.serviceType }}"
But it seems that the application pod doesn't connect to the Redis master service successfully. When I get the access URL:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services wishful-rabbit-mychart)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
and open it in a browser, I get this result:
Hello World!
Hostname: wishful-rabbit-mychart-85dc7658c6-9blg5
Visits: cannot connect to Redis, counter disabled
More information about pods and services:
kubectl get po
NAME READY STATUS RESTARTS AGE
wishful-rabbit-mychart-bdfdf6558-8fjmb 1/1 Running 0 8m
wishful-rabbit-mychart-bdfdf6558-9khfs 1/1 Running 0 7m
wishful-rabbit-mychart-bdfdf6558-hgqxv 1/1 Running 0 8m
wishful-rabbit-mychart-sentinel-8667dd57d4-9njwq 1/1 Running 0 37m
wishful-rabbit-mychart-sentinel-8667dd57d4-jsq6d 1/1 Running 0 37m
wishful-rabbit-mychart-sentinel-8667dd57d4-ndqss 1/1 Running 0 37m
wishful-rabbit-mychart-server-746f47dfdd-2fn4s 1/1 Running 0 37m
wishful-rabbit-mychart-server-746f47dfdd-bwgrq 1/1 Running 0 37m
wishful-rabbit-mychart-server-746f47dfdd-spkkm 1/1 Running 0 37m
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h
wishful-rabbit-mychart NodePort 10.101.103.224 <none> 80:30033/TCP 37m
wishful-rabbit-mychart-master-svc ClusterIP 10.108.128.167 <none> 6379/TCP 37m
wishful-rabbit-mychart-sentinel ClusterIP 10.107.63.208 <none> 26379/TCP 37m
wishful-rabbit-mychart-slave-svc ClusterIP 10.99.211.111 <none> 6379/TCP 37m
What's the right usage?
Redis Server Pod Env
kubectl exec -it wishful-rabbit-mychart-server-746f47dfdd-2fn4s -- sh
/ # printenv
KUBERNETES_PORT=tcp://10.96.0.1:443
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PORT=6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_ADDR=10.108.128.167
KUBERNETES_SERVICE_PORT=443
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PROTO=tcp
WISHFUL_RABBIT_MYCHART_SERVICE_HOST=10.101.103.224
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PORT=6379
HOSTNAME=wishful-rabbit-mychart-server-746f47dfdd-2fn4s
SHLVL=1
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PROTO=tcp
HOME=/root
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_HOST=10.107.63.208
WISHFUL_RABBIT_MYCHART_PORT=tcp://10.101.103.224:80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_SERVICE_PORT=80
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP=tcp://10.108.128.167:6379
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_ADDR=10.101.103.224
REDIS_CHART_PREFIX=wishful-rabbit-mychart-
TERM=xterm
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PORT=80
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_ADDR=10.107.63.208
WISHFUL_RABBIT_MYCHART_PORT_80_TCP=tcp://10.101.103.224:80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_HOST=10.99.211.111
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_HOST=10.108.128.167
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_PORT=6379
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_SERVICE_PORT_HTTP=80
REDIS_SENTINEL_SERVICE_HOST=redis-sentinel
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_ADDR=10.99.211.111
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT=tcp://10.108.128.167:6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_PORT=6379
Application Pod Env
kubectl exec -it wishful-rabbit-mychart-85dc7658c6-8wlq6 -- sh
# printenv
KUBERNETES_SERVICE_PORT=443
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PORT=6379
KUBERNETES_PORT=tcp://10.96.0.1:443
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_ADDR=10.108.128.167
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_PROTO=tcp
WISHFUL_RABBIT_MYCHART_SERVICE_HOST=10.101.103.224
HOSTNAME=wishful-rabbit-mychart-85dc7658c6-8wlq6
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PORT=6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP_PROTO=tcp
PYTHON_PIP_VERSION=9.0.1
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_HOST=10.107.63.208
HOME=/root
GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF
REDIS=wishful-rabbit-mychart-master-svc
WISHFUL_RABBIT_MYCHART_SERVICE_PORT=80
WISHFUL_RABBIT_MYCHART_PORT=tcp://10.101.103.224:80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT_6379_TCP=tcp://10.108.128.167:6379
WISHFUL_RABBIT_MYCHART_SENTINEL_SERVICE_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_ADDR=10.101.103.224
NAME=World
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PORT=80
WISHFUL_RABBIT_MYCHART_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
LANG=C.UTF-8
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_ADDR=10.107.63.208
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_HOST=10.99.211.111
WISHFUL_RABBIT_MYCHART_PORT_80_TCP=tcp://10.101.103.224:80
PYTHON_VERSION=2.7.14
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PORT=26379
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_HOST=10.108.128.167
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/app
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT=tcp://10.99.211.111:6379
WISHFUL_RABBIT_MYCHART_SERVICE_PORT_HTTP=80
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_SERVICE_PORT=6379
WISHFUL_RABBIT_MYCHART_SLAVE_SVC_PORT_6379_TCP_ADDR=10.99.211.111
WISHFUL_RABBIT_MYCHART_SENTINEL_PORT_26379_TCP=tcp://10.107.63.208:26379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_SERVICE_PORT=6379
WISHFUL_RABBIT_MYCHART_MASTER_SVC_PORT=tcp://10.108.128.167:6379
All Endpoints
kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 4h
wishful-rabbit-mychart 172.17.0.5:80,172.17.0.6:80,172.17.0.7:80 1h
wishful-rabbit-mychart-master-svc <none> 1h
wishful-rabbit-mychart-sentinel 172.17.0.11:26379,172.17.0.12:26379,172.17.0.8:26379 1h
wishful-rabbit-mychart-slave-svc <none>
Describe Redis Master Service
kubectl describe svc wishful-rabbit-mychart-master-svc
Name: wishful-rabbit-mychart-master-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=redis-ha,redis-node=true,redis-role=master,release=wishful-rabbit
Type: ClusterIP
IP: 10.108.128.167
Port: <unset> 6379/TCP
TargetPort: 6379/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
Describe Redis Server Pod
kubectl describe po wishful-rabbit-mychart-server-746f47dfdd-2fn4s
Name: wishful-rabbit-mychart-server-746f47dfdd-2fn4s
Namespace: default
Node: minikube/192.168.99.100
Start Time: Fri, 02 Feb 2018 15:28:42 +0900
Labels: app=mychart
chart=mychart-0.1.0
heritage=Tiller
name=redis-server
pod-template-hash=3029038988
podIP=172.17.0.10
redis-node=true
redis-role=master
release=wishful-rabbit
runID=cbf8e0
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wishful-rabbit-mychart-server-746f47dfdd","uid":"4fcb0dfc-07e2-1...
Status: Running
IP: 172.17.0.10
Controlled By: ReplicaSet/wishful-rabbit-mychart-server-746f47dfdd
Containers:
redis:
Container ID: docker://2734d60bd44a1446ec6556369359ed15b866a4589abe1e6d394f9252114c6d4d
Image: quay.io/smile/redis:4.0.6r2
Image ID: docker-pullable://quay.io/smile/redis@sha256:8948a952920d4495859c984546838d4c9b4c71e0036eef86570922d91cacb3df
Port: 6379/TCP
State: Running
Started: Fri, 02 Feb 2018 15:28:44 +0900
Ready: True
Restart Count: 0
Environment:
REDIS_SENTINEL_SERVICE_HOST: redis-sentinel
REDIS_CHART_PREFIX: wishful-rabbit-mychart-
Mounts:
/redis-master-data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wfv2q (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-wfv2q:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wfv2q
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
There is a discrepancy between your wishful-rabbit-mychart-master-svc service's Selector and your master redis pods' labels.
Your service is searching for pods with the following labels:
app=redis-ha
redis-node=true
redis-role=master
release=wishful-rabbit
While your pods have the following labels:
app=mychart
chart=mychart-0.1.0
heritage=Tiller
name=redis-server
pod-template-hash=3029038988
podIP=172.17.0.10
redis-node=true
redis-role=master
release=wishful-rabbit
runID=cbf8e0
As you can see the app label is different (redis-ha in your service, mychart in your pods).
This causes the service to be unbound (it doesn't know where to forward the incoming traffic).
While I don't know the actual cause of this configuration error, I can suggest a solution to make it work.
You have to edit your redis service and change its selector attribute in order to match the pod's labels. Just run:
kubectl edit svc wishful-rabbit-mychart-master-svc
and change app: "redis-ha" with app: "mychart".
Your application should suddenly be able to reach your redis instance.
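Alternatively, if you prefer a non-interactive change, a patch along these lines should have the same effect (a sketch; the strategic merge patch only overrides the mismatched app key and leaves the other selector entries in place):

kubectl patch svc wishful-rabbit-mychart-master-svc \
  -p '{"spec":{"selector":{"app":"mychart"}}}'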
A Kubernetes Service is an abstraction which defines a logical set of Pods. The set of Pods targeted by a Service is (usually) determined by a Label Selector.
So Pods are targeted by a Service with the help of a selector.
The Service’s selector is evaluated continuously and the results are POSTed to an Endpoints object.
If no Pod is found with your provided selector, no valid Endpoints object is created.
In that case, using this Service to access the Pods will not work, because DNS will not get any IP address from the Endpoints object.
To learn more, read about how Services work.
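As a minimal illustration of that selector-to-Endpoints relationship (hypothetical names), the Service below only gets an Endpoints entry because its selector matches the Pod's labels exactly:

apiVersion: v1
kind: Pod
metadata:
  name: demo-redis
  labels:
    app: mychart          # the Service selector below must match these labels
    redis-role: master
spec:
  containers:
    - name: redis
      image: redis:4.0
      ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: demo-redis-svc
spec:
  selector:
    app: mychart          # matches the Pod above, so an Endpoints object is populated
    redis-role: master
  ports:
    - port: 6379
      targetPort: 6379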