ElastiCache Redis Cluster and Istio - redis

I'm trying to connect to my ElastiCache Redis Cluster 5.0 from a container in EKS that has Istio as a sidecar proxy, but I constantly get a MOVED error loop.
I have one shard with two replicas, and I have added a ServiceEntry and a VirtualService for each shard as well as for the configuration endpoint.
Example config used for Istio routing:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  ports:
  - number: 6379
    name: tcp
    protocol: TCP
  resolution: NONE
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  http:
  - timeout: 30s
    route:
    - destination:
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com

Note that the Redis protocol is not HTTP, so you cannot route it with an http VirtualService.
To control egress access for a TCP protocol like Redis, see the Egress Control for TLS section of the Consuming External MongoDB Services blog post, and use a tcp route instead of an http one.
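As a rough, unverified sketch of that approach for the configuration endpoint above (keep the ServiceEntry, and replace the http section of the VirtualService with a tcp one; note that tcp routes have no HTTP-level timeout):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  tcp:
  - match:
    - port: 6379
    route:
    - destination:
        # sketch only: same hostname as in the question, routed as plain TCP
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
        port:
          number: 6379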

Related

istio egress tracing / metrics for a mitm https connection

I want to get egress traces and metrics for traffic from a pod whose code I don't control much, going to a third-party endpoint that I don't control at all. You can think of it as, e.g., traffic from a WordPress installation to api.wordpress.org.
I plan to terminate the TLS on the egress gateway and then create a new TLS session from there. For that, I generate a certificate for api.wordpress.org from a CA that I can inject into the pod.
I have the following configuration:
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: egress-api-wordpress-org
spec:
  hosts:
  - api.wordpress.org
  gateways:
  - mesh
  - egress-api-wordpress-org
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - api.wordpress.org
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        port:
          number: 443
  http:
  - match:
    - gateways:
      - egress-api-wordpress-org
      port: 443
    route:
    - destination:
        host: api.wordpress.org
        port:
          number: 443
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: egress-api-wordpress-org
spec:
  hosts:
  - api.wordpress.org
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egress-api-wordpress-org
spec:
  host: api.wordpress.org
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE
With this setup I see the traffic passing through the egress gateway (and I have the metrics and traces on the egress side). However, there are no details about the origin, which makes sense, since the sidecar's Envoy can't see what the traffic inside the TLS stream is.
Is there any way to provide the origin details to the egress gateway without hacking on the origin pod's source code? I'm generally fine with odd setups like TLS-in-TLS if it can be set up (I'm not sure I can terminate TLS on the egress twice, once for the ISTIO_MUTUAL layer and once for the SIMPLE layer).
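The egress Gateway that the VirtualService above references is not shown; under the setup described (TLS terminated on the egress gateway with the certificate issued for api.wordpress.org by the injected CA), it would presumably look roughly like this sketch, where the credentialName is a hypothetical secret name:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-api-wordpress-org
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - api.wordpress.org
    tls:
      mode: SIMPLE
      # hypothetical secret holding the certificate signed by the injected CA
      credentialName: api-wordpress-org-mitm-cert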

GCP Health Checks with SSL enabled

I'm kind of new to Kubernetes and I'm trying to improve a system we currently have here.
The application is developed using Spring Boot, and until now it was using HTTP (port 8080) without any encryption. The system requirement is to enable end-to-end encryption for all data in transit. So here is the problem.
Currently, we have a GCE Ingress with TLS enabled, using Let's Encrypt to provide the certificates at the cluster entrance. This is working fine. Our Ingress has some path rules to route traffic to the correct microservice, and those microservices do not use TLS for that communication.
I managed to create a self-signed certificate and embed it inside the WAR, and this works fine on my local machine (with certificate validation disabled). When I deploy this on GKE, the GCP health check and the Kubernetes probes do not work at all (I can't see any connection attempt in the application logs).
When I try to reconfigure the backend service and health check on GCP, changing both to HTTPS, they don't show any error, but after some time they quietly switch back to HTTP.
Here are my YAML files:
admin-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: admin
  ports:
  - port: 443
    targetPort: 8443
    name: https
    protocol: TCP
admin-deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "admin"
  namespace: "default"
  labels:
    app: "admin"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "admin"
  template:
    metadata:
      labels:
        app: "admin"
    spec:
      containers:
      - name: "backend-admin"
        image: "gcr.io/my-project/backend-admin:X.Y.Z-SNAPSHOT"
        livenessProbe:
          httpGet:
            path: /actuator/health/liveness
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 8
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /actuator/health/readiness
            port: 8443
            scheme: HTTPS
          initialDelaySeconds: 8
          periodSeconds: 30
        env:
        - name: "FIREBASE_PROJECT_ID"
          valueFrom:
            configMapKeyRef:
              key: "FIREBASE_PROJECT_ID"
              name: "service-config"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "admin-etu-vk1a"
  namespace: "default"
  labels:
    app: "admin"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "admin"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 80
ingress.yaml
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-ingress-addr
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    acme.cert-manager.io/http01-edit-in-place: "true"
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - hosts:
    - my-domain.com
    secretName: mydomain-com-tls
  rules:
  - host: my-domain.com
    http:
      paths:
      - path: /admin/v1/*
        backend:
          serviceName: admin-service
          servicePort: 443
status:
  loadBalancer:
    ingress:
    - ip: XXX.YYY.WWW.ZZZ
Reading this document from GCP, I understood that the load balancer is compatible with self-signed certificates.
I would appreciate any insight or new directions you guys can provide.
Thanks in advance.
EDIT 1: I've added the Ingress YAML file here, which may help with a better understanding of the issue.
EDIT 2: I've updated the deployment YAML with the solution I found for the liveness and readiness probes (scheme).
EDIT 3: I've found the solution for the GCP health checks using an annotation on the Service declaration. I will put all the details in the answer to my own question.
Here is what I found on how to fix the issue.
After reading a lot of documentation related to Kubernetes and GCP, I found a document on GCP explaining how to use annotations on the Service declaration. Take a look at lines 7-8.
---
apiVersion: v1
kind: Service
metadata:
  name: admin-service
  namespace: default
  annotations:
    cloud.google.com/app-protocols: '{"https":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: iteam-admin
  ports:
  - port: 443
    targetPort: 8443
    name: https
    protocol: TCP
This annotation tells GCP to create the backend service and health check using HTTPS, and everything works as expected.
Reference: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#https_tls_between_load_balancer_and_your_application
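As a side note (not part of the original fix), newer GKE versions also let you pin the protocol, port, and path of the health check explicitly with a BackendConfig referenced from the Service; a rough sketch, using a hypothetical name and the actuator path from the deployment above:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: admin-backendconfig  # hypothetical name
  namespace: default
spec:
  healthCheck:
    type: HTTPS
    requestPath: /actuator/health/readiness
    port: 8443
The Service would then reference it with the annotation cloud.google.com/backend-config: '{"default": "admin-backendconfig"}' alongside the app-protocols annotation.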

Unable to access Kibana after successful authentication with IBM Cloud AppID

I have a k8s deployment of Kibana in IBM Cloud. It is exposed through a ClusterIP k8s Service and a k8s Ingress, and it is accessible to a single Cloud Directory user authenticated through IBM Cloud App ID.
Kubernetes correctly redirects to the App ID login screen. The issue is that the Kibana deployment is not accessible after successful App ID authentication; I get 301 Moved Permanently in a loop.
The same k8s deployment, exposed through a NodePort instead, works fine.
The same setup as above works correctly for a simple hello-world app with authentication.
I followed this tutorial.
In App ID Authentication Settings, the redirect URL is:
https://our-domain/app/kibana/appid_callback
Here are the relevant portions of the k8s definitions:
---
kind: Service
apiVersion: v1
metadata:
  name: kibana-sec
  namespace: default
  labels:
    app: kibana-sec
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 5601
  selector:
    app: kibana-sec
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.bluemix.net/redirect-to-https: "True"
    ingress.bluemix.net/appid-auth: "bindSecret=<our-bindSecret> namespace=default requestType=web serviceName=kibana-sec"
  ...
spec:
  rules:
  - host: <our-domain>
    http:
      paths:
      ...
      - backend:
          serviceName: kibana-sec
          servicePort: 8080
        path: /app/kibana/
  tls:
  - hosts:
    - <our-domain>
    secretName: <our-secretName>
status:
  loadBalancer:
    ingress:
    - ip: <IPs>
    - ip: <IPs>
There is no "ingress.bluemix.net/rewrite-path" annotation for our service.

Kubernetes can't connect redis on Cluster-IP of service

I have this setup on Google Cloud:
Pod and service with (php) web app
Pod and service with mysql server
Pod and service with redis server
The Kubernetes configuration files for the MySQL server and the Redis server are almost identical; only the name, port, and image differ.
I can connect to the MySQL server from the web app, but I can't connect to the Redis server.
I also can't reach the Redis server from the web app on its Service CLUSTER-IP, but I can reach it on its pod IP.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: launcher.gcr.io/google/redis4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        env:
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  selector:
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379
    targetPort: 6379
The Deployment's pod template is missing some of the labels that the Service selects on, so the Service is not selecting its pods.
Current pod template metadata:
metadata:
  labels:
    app: redis
Include the other labels required by the Service:
metadata:
  labels:
    app: redis
    role: master
    tier: backend
Or, depending on how you want to look at it, the Service spec is trying to match labels that don't exist; in that case you can change the Service selector from:
selector:
  app: redis
  role: master
  tier: backend
to:
selector:
  app: redis
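Put together, the first option means the Deployment's pod template metadata would look like this (a sketch that assumes you keep the Service selector unchanged):
spec:
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend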

Port 443 times out on kubernetes in GCE

I have created a Kubernetes cluster where I'm currently running only a single Docker service that serves a static web page. It works, exposed on the standard port 80.
Now I want to attach an SSL certificate to the domain, and I have managed to do so running locally. But when I publish my service to the Kubernetes cluster, https://my.domain.com times out. It appears that the service does not receive the request, but is blocked by Kubernetes or GCE.
Do I need to open up a firewall, or set up my cluster deployment to open port 443? What might be the issue?
I have heard that Ingress and Kubernetes Secrets are the way to go, but everything I find uses ingress-nginx, and since I'm only running a single Docker service I do not use Nginx. To me it seems like letting the 443 call reach the service would be the easiest solution. Or am I wrong?
Below is my setup:
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: client-pods
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 1
  revisionHistoryLimit: 0
  template:
    metadata:
      labels:
        name: client-pods
    spec:
      containers:
      - image: <CONTAINER>
        name: client-container
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 1
I have also enabled HTTPS traffic on the GKE VM running the cluster, and the Dockerfile exposes both 80 and 443. I'm at a loss. Does anyone know what I'm doing wrong?
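Regarding the Ingress-and-Secrets route mentioned above: a GCE Ingress does not require nginx. With a TLS secret it would look roughly like the sketch below, where the Ingress name and secret name are hypothetical, the secret holds the certificate and key, and the GCE load balancer terminates TLS and forwards plain HTTP to the Service on port 80:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: client-ingress            # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - hosts:
    - my.domain.com
    secretName: my-domain-tls     # hypothetical secret containing tls.crt and tls.key
  backend:
    serviceName: client
    servicePort: 80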