Need to collect logs from an application running inside an AWS EKS pod

I would like to achieve something similar to the below, but in AWS EKS.
In my kubeadm k8s cluster, if I have to collect an application log from a pod, I run the command below:
kubectl cp podName:/path/to/application/logs/logFile.log /location/on/master/
But with AWS I am not sure how to collect logs like this.
One workaround is to persist the logs on S3 with a PV and PVC and then get them from there:
volumeMounts:
  - name: logs-sharing
    mountPath: /opt/myapp/AppServer/logs/container
volumes:
  - name: logs-sharing
    persistentVolumeClaim:
      claimName: logs-sharing-pvc
Another way would be to use a sidecar container with a logging agent.
But I would appreciate any workaround that is as easy as the one I mentioned above that I follow with kubeadm.

...as easy as the one I mentioned above that I follow with kubeadm
kubectl cp is a standard k8s command; you can use it with EKS, AKS, GKE, etc.
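For example (the namespace, pod name, and paths below are placeholders), the same command works unchanged against an EKS cluster as long as your kubeconfig points at it:

# copy a log file out of a pod in a specific namespace
kubectl cp my-namespace/myapp-pod:/opt/myapp/AppServer/logs/container/logFile.log ./logFile.log

# or stream the file without an intermediate copy
kubectl exec -n my-namespace myapp-pod -- cat /opt/myapp/AppServer/logs/container/logFile.log > logFile.log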

Why is there an aws command embedded in the EKS kube config file?

I'm curious about something in the kube config file generated by the aws eks update-kubeconfig command. At the bottom of the file, there is this construct:
- name: arn:aws:eks:us-west-2:redacted:cluster/u62d2e14b31f0270011485fd3
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - u62d2e14b31f0270011485fd3
      command: aws
It is clearly an invocation of the aws eks get-token command. Why is this here? Does this command get automatically called?
Why is this here?
The command gets a token using your IAM account and passes it along to EKS via the HTTP header Authorization: Bearer <token> for authentication. See here for details.
Does this command get automatically called?
Yes, by kubectl.
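If you want to see what kubectl receives, you can run the same command by hand; the region and cluster name below come from the kubeconfig above, and the output shown is only a rough sketch of the ExecCredential JSON:

aws eks get-token --region us-west-2 --cluster-name u62d2e14b31f0270011485fd3

# prints something like:
# {"kind": "ExecCredential", "apiVersion": "client.authentication.k8s.io/v1beta1",
#  "spec": {}, "status": {"expirationTimestamp": "...", "token": "k8s-aws-v1..."}}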

How to deploy the same job on all my runners?

I have several VMs running gitlab-runner, and I'm using gitlab-ci to deploy microservices onto those VMs. Now I want to monitor those VMs with Prometheus and Grafana, but I need to set up node-exporter, cadvisor, and similar services on each of them.
My idea is to use gitlab-ci to define a common job for those VMs.
I have already written the docker-compose.yml and .gitlab-ci.yml.
version: '3.8'
services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
  cadvisor:
    image: google/cadvisor
    container_name: cadvisor
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    ports:
      - "8080:8080"
deploy-workers:
  tags:
    - worker
  stage: deploy-workers
  script:
    - docker-compose -f docker-compose.worker.yaml pull
    - docker-compose -f docker-compose.worker.yaml down
    - docker-compose -f docker-compose.worker.yaml up -d
Then I registered the runner on all my VMs with the 'worker' tag.
However, only one worker job is triggered during CI.
I have about 20 VMs to go.
Does anyone have suggestions?
This is probably not a good way to be deploying your services onto virtual machines. You don't want to just launch your GitLab CI job and then hope that it results in what you want. Managing each VM separately is both going to be tedious and error-prone.
What you probably want is a declarative way to define/describe your infrastructure, the state that infrastructure should be configured in, and the applications running on it.
For example, you could:
Use a proper orchestrator, such as docker swarm or Kubernetes AND/OR
Use a provisioning tool, such as Ansible connected to each VM, or if your VMs run in the cloud, Terraform or similar.
In both these examples, you can leverage these tools from a single GitLab CI job and deploy changes to all of your VMs/clusters at once.
Using docker swarm
For example, instead of running your docker-compose on 20 hosts, you can join all 20 VMs to the same docker swarm.
Then in your compose file, you create a deploy key specifying how many replicas you want across the swarm, including numbers per node. Or use mode: global to simply specify you want one container of the service per host in your cluster.
services:
  node-exporter:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
    # ...
  cadvisor:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
Then running docker stack deploy from any manager node will do the right thing to all your swarm worker nodes. Docker swarm will also automatically restart your containers if they fail.
See deploy reference.
Using swarm (or any orchestrator) has a lot of other benefits, too, like health checking, rollbacks, etc. that will make your deployment process a lot safer and more maintainable.
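As a sketch of what the CI side could look like (the job name, stage, and runner tag below are placeholders), the whole rollout then becomes a single job that runs docker stack deploy against a manager node:

deploy-monitoring:            # hypothetical job name
  stage: deploy
  tags:
    - swarm-manager           # a runner that can reach a swarm manager node
  script:
    - docker stack deploy --compose-file docker-compose.worker.yaml monitoring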
If you must use a job per host
Set a unique tag for each runner on each VM. Then use a parallel matrix with a job set to each tag.
job:
  parallel:
    matrix:
      - RUNNER: [vm1, vm2, vm3, vm4, vm5] # etc.
  tags:
    - $RUNNER
See run a matrix of parallel jobs
You want to make sure the tag is unique and covers all your hosts, or you may run the same job on the same host multiple times.
This will let you do what you were seeking to do. However, it's not an advisable practice. As a simple example: there's no guarantee that your docker-compose up will succeed and you may just take down your entire cluster all at once.

How to access an EKS cluster from a local machine

I have created an EKS cluster and am able to run kubectl commands from my EC2 instance. I then copied the config file from the ~/.kube/config location to my local machine, but I am not able to run kubectl commands there and I get an authentication error.
What is the right way to access an EKS cluster from a local machine?
Try looking into the users section in ~/.kube/config: check the user under the name of the cluster and make sure your local machine has the same working AWS profile as the EC2 instance.
...
      command: aws
      env:
      - name: AWS_PROFILE
        value: <make sure this entry is valid on your local machine>
If this doesn't work, please briefly describe in your question how you configured kubeconfig on the EC2 instance.
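Alternatively, instead of copying the kubeconfig from the EC2 instance, you can regenerate it locally with your own AWS credentials (region and cluster name below are placeholders):

aws eks update-kubeconfig --region <region> --name <cluster-name>
kubectl get nodes   # should now authenticate using your local AWS credentials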

AWS EKS with Fargate pod status pending due to PersistentVolumeClaim not found

I have deployed an EKS cluster with Fargate and alb-ingress-access using the following command:
eksctl create cluster --name fargate-cluster --version 1.17 --region us-east-2 --fargate --alb-ingress-access
A Fargate namespace has also been created.
The application being deployed has four containers, namely mysql, nginx, redis, and web.
The YAML files have been applied to the correct namespace.
The issue I am having is that after applying the YAML files, when I check the pod status I see the following:
NAMESPACE   NAME                              READY   STATUS    RESTARTS   AGE
flipkicks   flipkicksdb-7669b44bbb-xww26      0/1     Pending   0          112m
flipkicks   flipkicksredis-74bbf9bd8c-p59hb   1/1     Running   0          112m
flipkicks   nginx-5b46fd5977-9d8wk            0/1     Pending   0          112m
flipkicks   web-56666f5d8-64w4d               1/1     Running   0          112m
MySQL and Nginx pods go into pending status. The deployment YAML for both have the following volumeMounts values:
MYSQL
volumeMounts:
  - mountPath: /var/lib/mysql
    name: mysql-db
NGINX
volumeMounts:
  - mountPath: "/etc/nginx/conf.d"
    name: nginx-conf
  - mountPath: "/var/www/html"
    name: admin-panel
The output from the events part of the kubectl describe command for both pods is:
MYSQL
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  fargate-scheduler  Pod not supported on Fargate: volumes not supported: mysql-db not supported because: PVC mysql-db not bound
NGINX
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  fargate-scheduler  Pod not supported on Fargate: volumes not supported: admin-panel is of an unsupported volume Type
Would really appreciate any help in understanding this problem and how to resolve it.
Since your NGINX and MySQL pods require volumes for their volumeMounts, you will need a PersistentVolumeClaim, which is a request for storage satisfied by a PersistentVolume resource. Your pods can then use the claim as a volume; for more info see Kubernetes Persistent Volumes.
For a long time EKS Fargate did not support persistent storage; that changed on Aug 17, 2020, when support for the AWS EFS CSI driver was introduced.
You will need to deploy the AWS EFS CSI driver, add PersistentVolume and PersistentVolumeClaim manifests, and get your pods to use the claim as a volume. I would suggest starting with the Amazon EFS CSI driver guide to deploy the CSI driver into your EKS Fargate cluster and then updating your manifests to match the examples provided here.
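As a rough sketch of what the statically provisioned manifests look like (the EFS file system ID, storage class name, and size below are placeholders you would replace with your own), note that the PVC name must match the claim name your MySQL deployment already references:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-db-pv
spec:
  capacity:
    storage: 5Gi              # EFS ignores the size, but the field is required
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678 # your EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-db              # matches the PVC name in the scheduling error above
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

The MySQL deployment then needs a volumes entry pointing at that claim (claimName: mysql-db) alongside the existing volumeMounts, and the NGINX config and static-content volumes would need similar treatment, since Fargate only supports EFS-backed persistent volumes.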

Why do I get Kubernetes error request: Forbidden from the IntelliJ Kubernetes plugin?

I have a ~/.kube/config that works on the command line. I can run any kubectl command with no problem. The config points to an AWS EKS cluster and follows the AWS guide to create a kubeconfig.
I can see that the Kubernetes plugin is able to parse the ~/.kube/config because the cluster name shows up in the Kubernetes service view.
But any attempt to get information from this view results in a Kubernetes Request Error: Forbidden.
Any idea on what is the cause or how to troubleshoot?
The ~/.kube/config for an AWS EKS cluster usually includes a section like this:
- name: arn:aws:eks:eu-west-1:xxxxx:cluster/yyyyyy
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - yyyyyy
      command: aws
      env:
      - name: AWS_PROFILE
        value: eks
This works in the CLI because you probably have the aws executable in your shell PATH, but when IntelliJ IDEA tries to find aws it will fail. An easy fix is to modify ~/.kube/config so that it points to the absolute path of aws, like so:
command: /usr/local/bin/aws
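If you are not sure what the absolute path is, running which aws in the same shell where kubectl works will print it (the /usr/local/bin/aws above is just a common install location):

which aws
# e.g. /usr/local/bin/aws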
In order to troubleshoot problems with the Kubernetes plugin, you can go to Help > Debug Log Settings... and add
#com.intellij.kubernetes
Restart the IDE, then Help > Show Log in Finder will take you to the log.