GitLab pipeline fails to use an AWS service - gitlab-ci

The GitLab CI pipeline fails with the error "An HTTP Client raised an unhandled exception: Invalid header value b'AWS4-HMAC-SHA256 Credential=********************************************".
I have already created the AWS variables in GitLab CI and created an S3 bucket in the AWS console. My GitLab CI config is:
upload to s3:
  image:
    name: banst/awscli
    entrypoint: [""]
  script:
    - aws configure set region us-east-1
    - aws s3 ls
Any help would be appreciated!

How are you doing?
Next I will present, step by step, how to list the buckets through GitLab CI.
1- Create a repository in GitLab.
2- In your GitLab project, go to Settings > CI/CD. Set the following CI/CD variables:
AWS_ACCESS_KEY_ID: Your Access key ID.
AWS_SECRET_ACCESS_KEY: Your secret access key.
AWS_DEFAULT_REGION: Your region code.
3- Create a file called .gitlab-ci.yml in your repository. Below is an example pipeline that lists and creates S3 buckets. AWS requires bucket names to be unique across all accounts, so be creative or specific with the name.
4- When you commit to the repository, a pipeline will start. I made the steps manual, so you need to trigger each one to run it.
I hope I have helped, if you have any questions I am at your disposal.
stages:
  - build_s3
  - create_s3

create-s3:
  stage: create_s3
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  when: manual
  script:
    - aws s3api create-bucket --bucket my-bucket-stackoverflow-mms --region us-east-1

build-s3:
  stage: build_s3
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  when: manual
  script:
    - aws s3 ls
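As an aside on the original error: "Invalid header value" is raised by Python's HTTP client when a header contains a control character, and a common trigger is a stray newline or space pasted into a CI variable such as AWS_SECRET_ACCESS_KEY. A small sketch of the check (the secret value below is made up):

```python
# "Invalid header value" comes from Python's http.client rejecting control
# characters such as "\n" in a header; a trailing newline pasted into
# AWS_SECRET_ACCESS_KEY ends up inside the Authorization header.
secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\n"  # made-up value

has_control_chars = any(c in secret for c in "\r\n")
clean_secret = secret.strip()  # re-save the CI variable without the newline
```

If the check fires, re-enter the variable in Settings > CI/CD without the trailing whitespace.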


Why is there an aws command embedded in the EKS kube config file?

I'm curious about something in the kube config file generated by the aws eks update-kubeconfig command. At the bottom of the file, there is this construct:
- name: arn:aws:eks:us-west-2:redacted:cluster/u62d2e14b31f0270011485fd3
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
        - --region
        - us-west-2
        - eks
        - get-token
        - --cluster-name
        - u62d2e14b31f0270011485fd3
      command: aws
It is clearly an invocation of the aws eks get-token command. Why is this here? Does this command get automatically called?
Why is this here?
The command obtains a token based on your IAM identity and passes it to the EKS cluster's API server via the HTTP header Authorization: Bearer <token> for authentication. See the Amazon EKS documentation on cluster authentication for details.
Does this command get automatically called?
Yes, by kubectl.
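As a sketch of what happens under the hood: the exec plugin prints an ExecCredential JSON document to stdout, and kubectl sends status.token as the Bearer token on each API request. The token value below is a made-up placeholder:

```python
import json

# Hypothetical ExecCredential output, shaped like what an exec plugin
# such as `aws eks get-token` prints to stdout; the token is a placeholder.
plugin_output = json.dumps({
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "status": {
        "token": "k8s-aws-v1.EXAMPLE",
        "expirationTimestamp": "2030-01-01T00:00:00Z",
    },
})

# kubectl parses the document and builds the Authorization header from it.
cred = json.loads(plugin_output)
auth_header = "Bearer " + cred["status"]["token"]
```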

Need to collect logs from an application running inside an AWS EKS pod

I would like to achieve something similar to the below, but in AWS EKS.
In my kubeadm Kubernetes cluster, if I have to collect an application log from a pod, I run the command below:
kubectl cp podName:/path/to/application/logs/logFile.log /location/on/master/
But with AWS I am not sure how to collect logs in the same way.
One workaround is to persist the logs on S3 with a PV and PVC, and then fetch them from there.
volumeMounts:
  - name: logs-sharing
    mountPath: /opt/myapp/AppServer/logs/container
volumes:
  - name: logs-sharing
    persistentVolumeClaim:
      claimName: logs-sharing-pvc
Another way would be to use a sidecar container with a logging agent.
But I would appreciate a workaround as easy as the one I mentioned above, which I follow with kubeadm.
...as easy as the one I mentioned above that I follow with kubeadm
kubectl cp is a standard Kubernetes command; you can use it with EKS, AKS, GKE, etc.

AWS EKS custom AMI managed node group: bootstrap file does not exist

Below are the steps I performed to use a custom AMI with an EKS managed node group.
The bootstrap_user_data file was created and converted to base64 format, as per the standard:
#!/bin/bash
set -ex
B64_CLUSTER_CA=<My EKS cluster certificate authority value>
API_SERVER_URL=<My EKS cluster API server URL>
/etc/eks/bootstrap.sh <cluster-name> --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL
cat bootstrap_user_data | base64
The launch template was created via a custom configuration JSON file with the data below:
cat config_custom_ami.json
{
  "LaunchTemplateData": {
    "EbsOptimized": false,
    "ImageId": "ami-0e00c1f097aff7fe8",
    "InstanceType": "t3.small",
    "UserData": "bootstrap_user_data",
    "SecurityGroupIds": [
      "sg-0e9b58499f42bcd4b"
    ]
  }
}
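One thing worth noting: the UserData field of a launch template is expected to carry the base64-encoded script content itself, not a filename. A minimal sketch of producing that value, with placeholder cluster details:

```python
import base64

# The base64 of the bootstrap script itself goes into "UserData";
# the cluster name, CA value, and endpoint below are placeholders.
bootstrap_script = """#!/bin/bash
set -ex
/etc/eks/bootstrap.sh my-cluster --b64-cluster-ca MY_CA --apiserver-endpoint https://example.eks.amazonaws.com
"""

user_data_b64 = base64.b64encode(bootstrap_script.encode()).decode()
```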
The security group selected is the EKS cluster security group, which was created automatically when the EKS cluster was first created.
Creating the launch template using the AWS CLI:
aws ec2 create-launch-template --region eu-central-1 --launch-template-name my-template-name --version-description "first version " --cli-input-json file://custom.config.json
Creating the node group using the AWS CLI:
aws eks create-nodegroup --region eu-central-1 --cluster-name my-cluster --nodegroup-name my-node-group --subnets subnet-<subnet1> subnet-<subnet2> --node-role 'arn:aws:iam::123456789:role/EKSNODEGROUP' --launch-template name=my-template-name
After executing the node group creation command, it took about 20 minutes to create the node group; the desired VM was created as part of the auto scaling group, but the node group was not able to join the cluster after those 20 minutes.
Connect to your Amazon EKS worker node instance with SSH and check the kubelet agent logs:
ssh -i my.key ec2-user@1.2.3.4
sudo -i
cd /etc/eks/bootstrap.sh
-bash: cd: /etc/eks: No such file or directory
Could someone please help me understand why the bootstrap.sh file does not exist in /etc/eks? On the other hand, in the AWS console, under the launch template's Advanced tab, I can see my user data in decoded format.

Why do I get Kubernetes error request: Forbidden from the IntelliJ Kubernetes plugin?

I have a ~/.kube/config that works on the command line. I can run any kubectl command with no problem. The config points to an AWS EKS cluster and follows the AWS guide to create a kubeconfig.
I can see that the Kubernetes plugin is able to parse the ~/.kube/config because the cluster name shows up in the Kubernetes service view.
But any attempt to get any information from this view results in a Kubernetes Request Error: Forbidden.
Any idea on what is the cause or how to troubleshoot?
The ~/.kube/config for an AWS EKS cluster usually includes a section like this:
- name: arn:aws:eks:eu-west-1:xxxxx:cluster/yyyyyy
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
        - --region
        - eu-west-1
        - eks
        - get-token
        - --cluster-name
        - yyyyyy
      command: aws
      env:
        - name: AWS_PROFILE
          value: eks
This works in the CLI because you probably have the aws executable in your shell PATH, but when IntelliJ IDEA tries to find aws it will fail, since GUI applications do not necessarily inherit your shell's PATH. An easy fix is to modify ~/.kube/config so that it points to the absolute path of aws, like so:
command: /usr/local/bin/aws
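You can find that absolute path with `which aws` in a shell. As a sketch of the idea (using "sh" as a stand-in executable so the example resolves on any system without the AWS CLI):

```python
import shutil

# shutil.which performs the same PATH lookup the shell does; GUI apps
# like IntelliJ may not inherit your shell's PATH, hence the absolute path.
abs_path = shutil.which("sh")  # stand-in for shutil.which("aws")

kubeconfig_line = "      command: aws\n"
patched = kubeconfig_line.replace("command: aws", "command: " + abs_path)
```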
To troubleshoot problems with the Kubernetes plugin, go to Help > Debug Log Settings... and add
#com.intellij.kubernetes
Restart the IDE; then Help > Show Log in Finder will take you to the log.

Get S3 object size with Ansible

There's a backup script that dumps some databases and uploads the backups to S3.
I'm writing an Ansible playbook to check the S3 backup sizes independently, from some other host. It would alert me if a size is less than X GiB, as that would indicate a failed backup. Nothing unusual so far, but...
I don't seem to be able to get the size of the requested object from the S3 bucket with the aws_s3 module. Any ideas?
I don't know whether there is an S3 module available that allows running ls-style commands on S3 buckets, but what you can do is run an aws s3api command using the command module.
---
- name: Get S3 object size
  hosts: all
  connection: local
  gather_facts: no
  vars_files:
    - ./secret.yml
  tasks:
    - name: Get the `list-objects` result for the `object`
      command: >
        aws s3api list-objects
        --bucket {{ bucket }}
        --prefix {{ object }}
      register: output

    - name: Parse the `list-objects` output
      set_fact:
        object_size: '{{ output.stdout | from_json | json_query("Contents[0].Size") }}'
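For reference, the json_query filter above pulls Size out of JSON shaped like the example below (the key and size are made-up values):

```python
import json

# Made-up example of `aws s3api list-objects` output; json_query's
# "Contents[0].Size" selects the first object's size in bytes.
s3api_output = json.dumps({
    "Contents": [
        {"Key": "backups/db.dump", "Size": 5368709120},
    ],
})

object_size = json.loads(s3api_output)["Contents"][0]["Size"]
alert = object_size < 4 * 1024**3  # e.g. alert when the backup is under 4 GiB
```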
I hope this helps.