I have this setup on Google Cloud:
Pod and service with a (PHP) web app
Pod and service with a MySQL server
Pod and service with a Redis server
The Kubernetes configuration files for the MySQL server and the Redis server are almost identical; only the name, port and image differ.
I can connect to the MySQL server from the web app, but I can't connect to the Redis server.
More precisely, I can't reach the Redis server from the web app via its service CLUSTER-IP, but I can reach it via its pod IP.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: launcher.gcr.io/google/redis4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
        env:
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
    role: master
    tier: backend
spec:
  selector:
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379
    targetPort: 6379
The deployment's pod template is missing some labels, so the service is not selecting its pods.
Current deployment spec:
metadata:
  labels:
    app: redis
Include the other labels required by the service:
metadata:
  labels:
    app: redis
    role: master
    tier: backend
Or, looked at the other way, the service spec is trying to match labels that don't exist, so you can change the service selector from:
selector:
  app: redis
  role: master
  tier: backend
to:
selector:
  app: redis
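Either way, you can verify that the selector and the pod labels now line up by checking that the service has endpoints, for example (assuming the service and deployment are both named redis in the current namespace):
# If the selector matches the pod labels, the Redis pod IP should show up here
kubectl get endpoints redis
# Compare the pod labels against the service selector
kubectl get pods -l app=redis --show-labels
kubectl describe service redis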
I'm trying to deploy my first ASP.NET app (a sample VS 2019 project) to AKS.
I was able to create a docker container, run it locally and access it via http://localhost:8000/weatherforecast.
However, I'm not able to access the endpoint when it's deployed in AKS.
YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetdemo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnetdemo
  template:
    metadata:
      labels:
        app: aspnetdemo
    spec:
      containers:
      - name: mycr
        image: mycr.azurecr.io/aspnetdemo:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetdemo-service
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: aspnetdemo
  type: LoadBalancer
I verified that the pod is running -
kubectl get pods
NAME READY STATUS RESTARTS AGE
aspnetdemo-deployment-* 2/2 Running 0 21m
and the service too -
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aspnetdemo-service LoadBalancer 10.0.X.X 13.89.X.X 80:30635/TCP 22m
I am getting an error when I try to access 13.89.X.X/weatherforecast:
"this site can't be reached - the connection was reset"
Any ideas?
When I run the following command, it returns an endpoint -
kubectl describe service aspnetdemo-service | select-string Endpoints
Endpoints: 10.244.X.X:8080
I also tried port forwarding and that didn't work either.
kubectl port-forward service/aspnetdemo-service 3000:80
http://localhost:3000/weatherforecast
E0512 15:16:24.429387 21356 portforward.go:400] an error occurred forwarding 3000 -> 8080: error forwarding port 8080 to pod a87ebc116d0e0b6e7066f32e945661c50d745d392c76844de084c7da96a874b8, uid : exit status 1: 2020/05/12 22:16:24 socat[18810] E write(5, 0x14da4c0, 535): Broken pipe
Thanks in advance!
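For what it's worth, both the reset connection and the broken pipe on 8080 usually mean nothing in the container is listening on that port; the default ASP.NET Core image often listens on 80 instead. A possible way to check, sketched with kubectl (the exact pod and container names may differ):
# See which URL/port Kestrel reports on startup (add -c <container> if the pod has more than one)
kubectl logs deployment/aspnetdemo-deployment
# Talk to the workload directly on the suspected port, bypassing the service
kubectl port-forward deployment/aspnetdemo-deployment 8080:8080
curl http://localhost:8080/weatherforecast
# If the app actually listens on 80, change containerPort and targetPort to 80 (or set ASPNETCORE_URLS)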
I have deployed a few microservices on a Kubernetes cluster. The problem is that these microservices cannot communicate with the Redis server, regardless of whether it is deployed in the same Kubernetes cluster or hosted outside. At the same time, if I start a microservice outside the cluster, it can access both kinds of Redis, hosted remotely or deployed in the k8s cluster. I don't see what I am doing wrong; here are my deployment and service YAML files.
Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Service yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  ports:
  - nodePort: 30011
    port: 80
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
  type: NodePort
I have followed multiple blogs and tried to get help from other sources, but nothing worked. I also forgot to mention that I am able to access the Redis deployed in the k8s cluster from remote machines using Redis Desktop Manager.
The k8s cluster is running on-prem on Ubuntu 16.04.
The microservices, when run outside k8s on a Windows machine, can access Redis both inside the k8s cluster and outside of it with the same code.
When the microservices try to communicate with Redis from inside the cluster, I get the following log:
2019-03-11 10:28:50,650 [main] INFO org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer - Tomcat started on port(s): 8082 (http)
2019-03-11 10:29:02,382 [http-nio-8082-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'dispatcherServlet': initialization started
2019-03-11 10:29:02,396 [http-nio-8082-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'dispatcherServlet': initialization completed in 14 ms
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Print : JedisConnectionException - redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
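For reference, from inside the cluster the microservices should address Redis by the service DNS name and the service port (80 here, mapped to targetPort 6379), not by the nodePort. A quick in-cluster check, as a sketch (assuming the default namespace; the throwaway pod name is arbitrary):
# Start a temporary pod in the cluster and ping Redis through the service
kubectl run redis-test --rm -it --image=redis --restart=Never -- \
  redis-cli -h redis-service.default.svc.cluster.local -p 80 ping
# Confirm the service actually has endpoints behind its selector
kubectl get endpoints redis-service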
I'm trying to connect to my ElastiCache Redis Cluster 5.0 from within a container in EKS that has Istio as a sidecar proxy, but I constantly get a MOVED error loop.
I have 1 shard with 2 replicas and I have added a ServiceEntry and a VirtualService for each of the shards + the configuration endpoint.
Example config used for Istio routing:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  ports:
  - number: 6379
    name: tcp
    protocol: TCP
  resolution: NONE
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  http:
  - timeout: 30s
    route:
    - destination:
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
Note that the Redis protocol is not HTTP, so you cannot use an http VirtualService.
To control egress access for a TCP protocol like Redis, check the Egress Control for TLS section of the Consuming External MongoDB Services blog post.
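Since the traffic is raw TCP, the ServiceEntry above (with protocol: TCP) is generally what matters; if a routing rule is needed at all, it would go under a tcp: section rather than http:. A minimal sketch under that assumption (the resource name redis-test-cluster-tcp is made up here):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-test-cluster-tcp
spec:
  hosts:
  - redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
  tcp:
  - match:
    - port: 6379
    route:
    - destination:
        host: redis-cluster-test.XXXX.clustercfg.euw1.cache.amazonaws.com
        port:
          number: 6379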
I have two Pods in the same Kubernetes cluster, and Pod1 should communicate with Pod2 over HTTPS.
I use the internal domain name: backend-srv.area.cluster.local
But how do I generate a certificate and integrate it into Pod2 (Apache)?
Your certificates should be generated and passed to Apache via a Kubernetes Secret resource:
apiVersion: v1
kind: Secret
metadata:
  name: apache-secret
data:
  cacerts: your_super_long_string_with_certificate
In your pod YAML configuration you then mount that secret:
volumes:
- name: certs
  secret:
    secretName: apache-secret
    items:
    - key: cacerts
      path: cacerts
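The volume above still has to be mounted into the Apache container, and the secret itself can be created straight from the certificate file. A rough sketch, where the file name ca.crt, the image and the mount path are assumptions:
# Create the secret from an existing certificate file (the key name matches the volume item above)
kubectl create secret generic apache-secret --from-file=cacerts=./ca.crt
And in the container spec of the same pod:
containers:
- name: apache
  image: httpd   # assumed image
  volumeMounts:
  - name: certs
    mountPath: /usr/local/apache2/conf/certs   # assumed path; point your Apache SSL config here
    readOnly: true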
I suggest you use a Service to connect to your pods:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apache
  name: apache
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: apache
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: apache
  type: NodePort
Make the proper adjustments to my examples.
I intend to deploy my online service that depends on Redis servers in Kubernetes. So far I have:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "redis"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
I can also expose redis as a service with:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
  selector:
    app: redis
With this, I can run a single pod with a Redis server and expose it.
However, my application needs to contact several Redis servers (this can be configured, but ideally should not change live). It does care which Redis server it talks to, so I can't just use replicas behind a single service, as the service abstraction would not let me know which instance I am talking to. I understand that my application's need to know this hinders scalability, and I am happy to lose some flexibility as a result.
I thought of deploying the same deployment several times and declaring a service for each, but I could not find anything to do that in the Kubernetes documentation. I could of course copy-paste my deployment and service YAML files and add a suffix to each, but that seems silly, way too manual, and frankly too complex.
Is there anything in Kubernetes to help me achieve my goal?
You should look into pet sets. A pet set gives each instance a unique but predictable name, like redis-0, redis-1, redis-2.
They are also treated as pets, as opposed to deployment pods, which are treated as cattle.
Be advised that pet sets are harder to upgrade, delete and handle in general, but that's the price of the added reliability and predictability.
Your deployment as a pet set:
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
          protocol: TCP
Also consider using volumes to make it easier to access data in the pods.
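Note that the serviceName: "redis" field refers to a governing headless service, which the pet set needs for its per-instance DNS names (redis-0.redis, redis-1.redis, ...). A minimal sketch of that headless service, assuming the labels from the pet set above; it could replace your earlier redis service or be given a different name:
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  clusterIP: None
  ports:
  - port: 6379
    protocol: TCP
  selector:
    app: redis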