There's a backup script that dumps some databases and uploads the backups to S3.
I'm writing an Ansible playbook to check the S3 backup sizes independently, from some other host. It would alert me if the size is less than X GiB, as that would indicate a failed backup. Nothing unknown so far, but...
I don't seem to be able to get the requested object's size from the S3 bucket with the aws_s3 module. Any ideas?
I don't know of an S3 module that allows running ls-style commands over S3 buckets. What you could do is run an `aws s3api` command, using the `command` module:
---
- name: Get S3 object size
  hosts: all
  connection: local
  gather_facts: no
  vars_files:
    - ./secret.yml
  tasks:
    - name: Get the `list-object` result for the `object`
      command: >
        aws s3api list-objects
        --bucket {{ bucket }}
        --prefix {{ object }}
      register: output

    - name: Parse the `list-object` output
      set_fact:
        object_size: '{{ output.stdout | from_json | json_query("Contents[0].Size") }}'
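Since you want an alert when the backup is smaller than X GiB, you could follow that with a task along these lines (a sketch; `min_size_gib` is an assumed variable you would define yourself):

    - name: Fail if the backup object is smaller than the expected minimum
      fail:
        msg: "Backup {{ object }} is only {{ object_size }} bytes"
      when: (object_size | int) < (min_size_gib | int) * 1024 * 1024 * 1024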
I hope it helps
In my GitLab CI pipeline I get this error: "An HTTP Client raised an unhandled exception: Invalid header value b'AWS4-HMAC-SHA256 Credential=********************************************"
I have already created the AWS variables in GitLab CI and created the S3 bucket in the AWS console. My GitLab CI config is:
upload to s3:
  image:
    name: banst/awscli
    entrypoint: [""]
  script:
    - aws configure set region us-east-1
    - aws s3 ls
Please answer me!
How are you doing?
Next I will present the step-by-step to list the buckets through GitLab CI.
1- Create a repository in GitLab.
2- In your GitLab project, go to Settings > CI/CD. Set the following CI/CD variables:
AWS_ACCESS_KEY_ID: Your Access key ID.
AWS_SECRET_ACCESS_KEY: Your secret access key.
AWS_DEFAULT_REGION: Your region code.
3- Create a file called .gitlab-ci.yml in your repository. Below (after these steps) is an example pipeline that lists and creates buckets in S3. AWS requires bucket names to be globally unique across all accounts, so be creative or specific with the name.
4- When you commit to the repository, it will start a pipeline. I left the steps as manual, so you need to press the button to perform each step.
I hope I have helped; if you have any questions, I am at your disposal.
stages:
  - build_s3
  - create_s3

create-s3:
  stage: create_s3
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  when: manual
  script:
    - aws s3api create-bucket --bucket my-bucket-stackoverflow-mms --region us-east-1

build-s3:
  stage: build_s3
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  when: manual
  script:
    - aws s3 ls
I am using localstack version 0.12.19.4 on a Mac.
I have created an s3 bucket called mybucket
localstack start (S3 runs on port 4566)
http://localhost:4566/health shows that everything is running
awslocal s3 mb s3://mybucket
awslocal s3api put-bucket-acl --bucket mybucket --acl public-read
I add some files to my s3 bucket and then I check with awslocal and aws
aws --endpoint-url=http://127.0.0.1:4566 s3 ls
awslocal s3 ls
Both show that my bucket exists.
Now, from a Docker image, when I try to access one of the files in the mybucket S3 bucket, I get the following error:
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:4566/mybucket/dev/us/2020_08_11/eea9efc9-5970-426b-b867-9f57d6d9548f/850f35c8-0ada-44e4-96e1-e050e3040609"
When I check the contents of the S3 bucket, I do see that specific file.
One more fact: when I retrieve the Docker ports for localstack, I see
4566/tcp -> 127.0.0.1:4566
4571/tcp -> 127.0.0.1:4571
Any ideas as to what I am doing wrong or missing?
I have a playbook to download a file from an S3 bucket to a target host. I am using the aws_s3 module in Ansible. The block looks something like this:
- name: Get backup file from s3
  aws_s3:
    bucket: "{{ bucket_name }}"
    object: "{{ object_name }}"
    dest: /usr/local/
    mode: get
My question is whether this will download the file to the Ansible control host or to the target host. Is there any other specification I should be giving to address this difference?
Unless you delegate this task to another host, it will download the object to the managed nodes (i.e. the target hosts).
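For example, to pull the object down to the Ansible control node instead of the target host, you could delegate the task (a sketch using the same variables as your block):

- name: Get backup file from s3 on the control node instead
  aws_s3:
    bucket: "{{ bucket_name }}"
    object: "{{ object_name }}"
    dest: /usr/local/
    mode: get
  delegate_to: localhost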
It's been decreed that all our S3 buckets should have access logs and versioning enabled. Unfortunately I have a lot of S3 buckets. Is there an efficient way of doing this that doesn't involve setting the attributes on each one individually in the console?
You can also develop your own custom AWS Config rule to manage the compliance of your S3 buckets (versioning and logging enabled):
https://aws.amazon.com/config/
You can check a lot of examples here:
https://github.com/awslabs/aws-config-rules
You can adapt this one to your needs:
https://github.com/awslabs/aws-config-rules/blob/master/python/s3_bucket_default_encryption_enabled.py
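For instance, the core compliance check could look roughly like this (a minimal sketch using boto3, leaving out the Config rule boilerplate from that repo, and extended here to cover logging as well):

import boto3

s3 = boto3.client("s3")

def bucket_is_compliant(bucket_name):
    # Compliant only if the bucket has both versioning and access logging enabled.
    versioning = s3.get_bucket_versioning(Bucket=bucket_name)
    logging = s3.get_bucket_logging(Bucket=bucket_name)
    return versioning.get("Status") == "Enabled" and "LoggingEnabled" in logging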
For most tasks on AWS, the simplest way is using the AWS CLI, especially for repetitive things.
You can use the AWS CLI and a simple bash script like this one, by rtrouton:
#!/bin/bash

# This script is designed to check the object versioning status of all S3 buckets associated with an AWS account
# and enable object versioning on any S3 buckets where object versioning is not enabled.

# Get list of S3 buckets from Amazon Web Services
s3_bucket_list=$(aws s3api list-buckets --query 'Buckets[*].Name' | sed -e 's/[][]//g' -e 's/"//g' -e 's/,//g' -e '/^$/d' -e 's/^[ \t]*//;s/[ \t]*$//')

# Loop through the list of S3 buckets and check the individual bucket's object version status.
for bucket in $(echo "$s3_bucket_list")
do
  version_status=$(aws s3api get-bucket-versioning --bucket "$bucket" | awk '/Status/ {print $2}' | sed 's/"//g')
  if [[ "$version_status" = "Enabled" ]]; then
    # If the object version status is Enabled, report that the S3 bucket has object versioning enabled.
    echo "The $bucket S3 bucket has object versioning enabled."
  elif [[ "$version_status" != "Enabled" ]]; then
    # If the object version is a status other than Enabled, report that the S3 bucket does not have
    # object versioning enabled, then enable object versioning
    echo "The $bucket S3 bucket does not have object versioning enabled. Enabling object versioning on the $bucket S3 bucket."
    aws s3api put-bucket-versioning --bucket "$bucket" --versioning-configuration Status=Enabled
  fi
done
For more information you can check the following document on the AWS website:
https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-versioning.html
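Since the question also asks for access logs, the same loop could be extended with a put-bucket-logging call (a sketch; my-s3-access-logs is a placeholder target bucket that you would need to create beforehand and grant the S3 log delivery permission to write to):

# Inside the loop above: enable access logging for each bucket, writing logs
# to a central log bucket ("my-s3-access-logs" is a placeholder name).
aws s3api put-bucket-logging --bucket "$bucket" \
  --bucket-logging-status "{\"LoggingEnabled\":{\"TargetBucket\":\"my-s3-access-logs\",\"TargetPrefix\":\"$bucket/\"}}"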
I'm trying to create an EC2 instance and I'm running into the following problem:
msg: Instance creation failed => UnauthorizedOperation:
You are not authorized to perform this operation.
Encoded authorization failure message: ....very long encoded message.
Update: This only happens when using the secret and access key for a specific user on my account. If I use the access keys for root, then it works. But that's not what I want to do. I guess I'm missing something about how users are authorized to use EC2.
My Ansible YAML uses the AWS access and secret key, in that order:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - test_vars.yml
  tasks:
    - name: Spin up Ubuntu Server 14.04 LTS (PV) instance
      local_action:
        module: ec2
        region: 'us-west-1'
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        instance_type: 't1.micro'
        image: ami-f1fdfeb4
        wait: yes
        count: 1
      register: ec2
You need to go into the AWS IAM console (https://console.aws.amazon.com/iam) and give that user (the one the access key in your script belongs to) permissions (a policy) to create EC2 instances.
It sounds like your 'root' user account in AWS already has those permissions, which may help when comparing the two users to figure out what policy you need to add. You could just create an EC2 group with the right policy from the policy generator and add that user to that group.
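If it helps, one quick way to do this from the AWS CLI is to attach the AWS-managed EC2 policy directly to the user (a sketch; my-ansible-user is a placeholder, and you may want a narrower policy than full EC2 access):

# Attach the AWS-managed AmazonEC2FullAccess policy to the IAM user whose
# access key the playbook uses ("my-ansible-user" is a placeholder name).
aws iam attach-user-policy \
  --user-name my-ansible-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess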
It looks like a permissions issue with AWS. The root user has full permissions, so it will definitely work with that. Check whether your specific AWS user has permission to launch an instance.
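One way to check what's missing is to decode the encoded authorization failure message from the error; it spells out which action and resource were denied (a sketch; the placeholder stands for the long encoded string from your error, and the caller needs the sts:DecodeAuthorizationMessage permission):

# Decode the "Encoded authorization failure message" from the error output.
# Replace the placeholder with the long encoded string from your own error.
aws sts decode-authorization-message --encoded-message '<encoded-message-from-the-error>'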