AWS EKS with Fargate pod status pending due to PersistentVolumeClaim not found - amazon-eks

I have deployed an EKS cluster with Fargate and alb-ingress-access using the following command:
eksctl create cluster --name fargate-cluster --version 1.17 --region us-east-2 --fargate --alb-ingress-access
A Fargate namespace has also been created.
The application being deployed has four containers, namely mysql, nginx, redis and web.
The YAML files have been applied to the correct namespace.
The issue I am having is that after applying the YAML files, when I get the pod status I see the following:
NAMESPACE   NAME                              READY   STATUS    RESTARTS   AGE
flipkicks   flipkicksdb-7669b44bbb-xww26      0/1     Pending   0          112m
flipkicks   flipkicksredis-74bbf9bd8c-p59hb   1/1     Running   0          112m
flipkicks   nginx-5b46fd5977-9d8wk            0/1     Pending   0          112m
flipkicks   web-56666f5d8-64w4d               1/1     Running   0          112m
The MySQL and Nginx pods go into Pending status. The deployment YAMLs for both have the following volumeMounts values:
MYSQL
volumeMounts:
  - mountPath: /var/lib/mysql
    name: mysql-db
NGINX
volumeMounts:
  - mountPath: "/etc/nginx/conf.d"
    name: nginx-conf
  - mountPath: "/var/www/html"
    name: admin-panel
The output from the events part of the kubectl describe command for both pods is:
MYSQL
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  fargate-scheduler  Pod not supported on Fargate: volumes not supported: mysql-db not supported because: PVC mysql-db not bound
NGINX
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  fargate-scheduler  Pod not supported on Fargate: volumes not supported: admin-panel is of an unsupported volume Type
Would really appreciate any help in understanding this problem and how to resolve it.

Since your NGINX and MYSQL pods require volumeMounts, you will need a PersistentVolumeClaim, which is a request for storage that is satisfied by a PersistentVolume resource. Your pods can then use the claim as a volume; for more info see Kubernetes Persistent Volumes.
For a long time EKS Fargate did not support persistent storage at all; that changed on Aug 17, 2020, when support for the AWS EFS CSI driver on Fargate was introduced.
You will need to deploy the AWS EFS CSI driver and update your manifests to create the PersistentVolume and PersistentVolumeClaim and have your pods use the claim as a volume. I would suggest starting with the Amazon EFS CSI driver guide to deploy the CSI driver into your EKS Fargate cluster and updating your manifests to match the examples provided here.
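As a rough sketch of the static provisioning pieces (not a drop-in fix: fs-12345678 is a placeholder for your EFS file system ID, and the claim name/namespace simply mirror the mysql-db volume and flipkicks namespace from your question), the manifests would look something like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-db-pv
spec:
  capacity:
    storage: 5Gi                  # required field; EFS itself is elastic
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678     # placeholder: your EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-db
  namespace: flipkicks
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
The MySQL deployment then consumes the bound claim through its volumes section, matching the existing volumeMount name:
volumes:
  - name: mysql-db
    persistentVolumeClaim:
      claimName: mysql-db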

Related

Need to collect logs from an application running inside an AWS EKS Pod

I would like to achieve something similar to the below, but in AWS EKS.
In my kubeadm k8s cluster, if I have to collect application logs from a pod, I run the command below:
kubectl cp podName:/path/to/application/logs/logFile.log /location/on/master/
But with AWS I am not sure how to collect logs like that.
One workaround is to persist the logs on S3 with a PV and PVC and then get them from there.
volumeMounts:
  - name: logs-sharing
    mountPath: /opt/myapp/AppServer/logs/container
volumes:
  - name: logs-sharing
    persistentVolumeClaim:
      claimName: logs-sharing-pvc
Another way would be to use a sidecar container with a logging agent.
But I would appreciate it if there is any workaround which is as easy as the one I mentioned above that I follow with kubeadm.
...as easy as the one I mentioned above that I follow with kubeadm
kubectl cp is a standard Kubernetes command; you can use it with EKS, AKS, GKE, etc.
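For example (namespace, pod and container names are placeholders; the mount path is borrowed from the question above):
# copy a log file out of a running pod on EKS, same as on kubeadm
kubectl cp my-namespace/my-pod-id:/opt/myapp/AppServer/logs/container/logFile.log ./logFile.log -c my-app-container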

Not able to access the RabbitMQ cluster which is set up using the RabbitMQ cluster operator

I have an AWS instance where I have minikube installed. I have also added the RabbitMQ cluster operator to it. After that I started a RabbitMQ cluster with 3 nodes. I am able to see the 3 pods, and their logs show no errors. The service for RabbitMQ is started as a LoadBalancer. When I list the URLs for the service I get the RabbitMQ, RabbitMQ management UI and Prometheus ports. No external IP was generated for the service, so I used the patch command to assign one.
My issue is that the RabbitMQ cluster is running fine with no errors, but I am not able to access it using the public IP of the AWS instance, so other services cannot send messages to it.
Here are all the files --
clientq.yml file
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: clientq
spec:
  replicas: 3
  image: rabbitmq:3.9-management
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1
      memory: 2Gi
  rabbitmq:
    additionalConfig: |
      log.console.level = info
      channel_max = 1700
      default_user = guest
      default_pass = guest
      default_user_tags.administrator = true
  service:
    type: LoadBalancer
Here is the whole setup --
kubectl get all
NAME                   READY   STATUS    RESTARTS   AGE
pod/clientq-server-0   1/1     Running   0          11m
pod/clientq-server-1   1/1     Running   0          11m
pod/clientq-server-2   1/1     Running   0          11m

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
service/clientq         LoadBalancer   10.108.225.186   12.27.54.12   5672:31063/TCP,15672:31340/TCP,15692:30972/TCP
service/clientq-nodes   ClusterIP      None             <none>        4369/TCP,25672/TCP
service/kubernetes      ClusterIP      10.96.0.1        <none>        443/TCP

NAME                              READY   AGE
statefulset.apps/clientq-server   3/3     11m

NAME                                   ALLREPLICASREADY   RECONCILESUCCESS   AGE
rabbitmqcluster.rabbitmq.com/clientq   True               True               11m
Here 12.27.54.12 is the public IP of my instance, which I patched in using:
kubectl patch svc clientq -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["12.27.54.12"]}}'
The URLs for the service are --
minikube service clientq --url
http://192.168.49.2:31063
http://192.168.49.2:31340
http://192.168.49.2:30972
I am able to curl these from the instance itself, but I am not able to access them from the public IP of the instance. Did I miss something, or is there a way to expose these ports? Please let me know.
I have enabled all ports for inbound and outbound traffic.

Error from server (Forbidden): pods "my-pod" is forbidden: User "system:node:i

I'm getting an error while running the command kubectl get all within an EC2 managed worker node.
I've installed the SSM agent on the EKS node using a DaemonSet in AWS EC2 and connected to the worker node using Systems Manager.
I'm getting the same error while trying to interact with a running pod using the command
kubectl exec -it my-pod-id -- sh
Error from server (Forbidden): pods "my-pod-id" is forbidden: User "system:node:ec2-ip-address.aws-region.compute.internal" cannot create resource "pods/exec" in API group "" in the namespace "default"
You can edit the ClusterRoleBinding (system:node) and add this subject:
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: system:node:*
You can also customize the ClusterRole it references.
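A sketch of what the edited binding could end up looking like (this assumes the default system:node ClusterRoleBinding that ships with the cluster; the wildcard user is the one from the snippet above):
# open the existing binding for editing
kubectl edit clusterrolebinding system:node

# resulting object, roughly:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:node:*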

Why do I get Kubernetes error request: Forbidden from the IntelliJ Kubernetes plugin?

I have a ~/.kube/config that works on the command line. I can run any kubectl command with no problem. The config points to an AWS EKS cluster and follows the AWS guide for creating a kubeconfig.
I can see that the Kubernetes plugin is able to parse the ~/.kube/config because the cluster name shows up in the Kubernetes service view.
But any attempt to get any information from this view will result in a Kubernetes Request Error: Forbidden.
Any idea on what is the cause or how to troubleshoot?
The ~/.kube/config for an AWS EKS cluster usually includes a section like this:
- name: arn:aws:eks:eu-west-1:xxxxx:cluster/yyyyyy
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
        - --region
        - eu-west-1
        - eks
        - get-token
        - --cluster-name
        - yyyyyy
      command: aws
      env:
        - name: AWS_PROFILE
          value: eks
This works in the CLI because you probably have the aws executable in your shell PATH, but when IntelliJ IDEA tries to find aws it will fail. An easy fix is to modify ~/.kube/config so that it points to the absolute path of aws, like so:
command: /usr/local/bin/aws
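If you are unsure of the absolute path on your machine, check it from the same shell where kubectl already works:
which aws
# prints e.g. /usr/local/bin/aws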
In order to troubleshoot problems with the Kubernetes plugin you can go to Help > Debug Log Settings... and add
#com.intellij.kubernetes
Restart the IDE; Help > Show Log in Finder will then take you to the log.

Making logs available to Stackdriver from a Custom Kubernetes docker container running Apache and PHP-FPM

We are running a small test cluster of custom Kubernetes pods on Google Cloud that internally run Apache and PHP-FPM.
The cluster has the following key config:
Master version: 1.10.6-gke.2
Kubernetes alpha features: Disabled
Total size: 3
StackDriver Logging: Enabled
StackDriver Monitoring: Enabled
Once the cluster comes up, kubectl get pods --all-namespaces shows the fluentd and heapster services running alongside our services, as I would expect.
kube-system   event-exporter-v0.2.1-5f5b89fcc8-r89d5   2/2   Running   0   13d
kube-system   fluentd-gcp-scaler-7c5db745fc-gbrqx      1/1   Running   0   21d
kube-system   fluentd-gcp-v3.1.0-76mr4                 2/2   Running   0   13d
kube-system   fluentd-gcp-v3.1.0-kl4xp                 2/2   Running   0   13d
kube-system   fluentd-gcp-v3.1.0-vxsq5                 2/2   Running   0   13d
kube-system   heapster-v1.5.3-95c7549b8-fdlmm          3/3   Running   0   13d
kube-system   kube-dns-788979dc8f-c9v2d                4/4   Running   0   99d
kube-system   kube-dns-788979dc8f-rqp7d                4/4   Running   0   99d
kube-system   kube-dns-autoscaler-79b4b844b9-zjtwk     1/1   Running   0   99d
We can get the logging from our application code (that runs inside our pods) to show up in Stackdriver Logging, but we want to aggregate the logging for Apache (/var/log/httpd/access_log and error_log) and PHP-FPM in Stackdriver as well.
This page from Google's Docs implies that this should be enabled by default.
https://cloud.google.com/kubernetes-engine/docs/how-to/logging
Note: Stackdriver Logging is enabled by default when you create a new cluster using the gcloud command-line tool or Google Cloud Platform Console.
However, that is obviously not the case for us. We have tried a few different approaches to get this to work (listed below), but without success, including:
redirecting the log output from Apache to stdout and/or stderr, as described in this post.
https://serverfault.com/questions/711168/writing-apache2-logs-to-stdout-stderr
Installing the stackdriver agent inside each pod as described in https://cloud.google.com/monitoring/agent/plugins/apache#configuring
It didn't appear that this step should be required, as the documentation implies that you only need to do this on a VM instance, but we tried it anyway on our k8s pods. As part of this step we made sure that Apache has mod_status enabled (/server-status) and PHP-FPM has /fpm-status enabled, and then installed the Apache plugin module following the docs.
Piping the Apache logging to STDOUT (a sketch of this approach follows this list)
How to Redirect Apache Logs to both STDOUT and Apache Log File
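For reference, the redirection described in those posts boils down to something like the following (a sketch only; the log paths are the ones mentioned above and may differ in your image, and the commands would normally go in the image's Dockerfile or entrypoint):
# point Apache's log files at the container's stdout/stderr so the
# fluentd-gcp DaemonSet can pick them up
ln -sf /dev/stdout /var/log/httpd/access_log
ln -sf /dev/stderr /var/log/httpd/error_log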
This seems like it should be a simple thing to do, but we have obviously missed something. Any help would be most appreciated.
Cheers, Julian Cone