Create multiple buckets in S3 using ansible-playbook - amazon-s3

I want to create multiple S3 buckets using Ansible, then upload an object/directory into each of the created buckets (which works in Terraform) and a file (which I don't think works in Terraform with more than one bucket).
Is there any way to do this in Ansible?
I'm new to Ansible; I've just been reading the documentation and watching some videos.
Here is the basic playbook I have so far.
---
- name: create s3 bucket
  hosts: localhost
  connection: local
  vars:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  vars_files:
    - creds.yml
  tasks:
    - name: create a simple s3 bucket
      amazon.aws.s3_bucket:
        name: kevs-task2-ansible
        state: present
        region: ap-southeast-1
        acl: public-read
        versioning: true

    - name: create folder in the bucket
      amazon.aws.aws_s3:
        bucket: kevs-task2-ansible
        object: /public
        mode: create
        permission: public-read

    - name: create file in the folder
      amazon.aws.aws_s3:
        bucket: kevs-task2-ansible
        object: /public/info.txt
        src: info.txt
        mode: put

In a nutshell and super simple:
- name: create a simple s3 bucket
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    region: ap-southeast-1
    acl: public-read
    versioning: true
  loop:
    - kevs-task2-ansible
    - an_other_name
    - yet_an_other_one
To go further, please read up on Ansible loops.
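To cover the rest of the original question (upload an object into each created bucket), a minimal sketch could reuse the same loop for the upload task. This is only an illustration, not part of the answer above: bucket_names is a hypothetical variable, the amazon.aws collection is assumed to be installed, and info.txt is assumed to exist next to the playbook.

# bucket_names is a hypothetical variable, e.g. defined under vars:
#   bucket_names:
#     - kevs-task2-ansible
#     - an_other_name
- name: create each bucket
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    region: ap-southeast-1
  loop: "{{ bucket_names }}"

- name: upload info.txt into each bucket
  amazon.aws.aws_s3:
    bucket: "{{ item }}"
    object: /public/info.txt
    src: info.txt
    mode: put
  loop: "{{ bucket_names }}"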

Related

Ansible variables and tags

I have a playbook that calls 2 roles with shared variables. I'm using the roles to create some level of abstraction.
The problem happens when I call the playbook with tags: the tasks that use variables registered by the other role fail with an error. I also tried role dependencies, which didn't work.
Let me paste the code here to explain.
I have a role, my_key, where I make my API calls to my 2 different platforms. As listed below, I register the results to user_result1 and user_result2.
First role, my_key.yml:
# tasks file for list_users
- name: List Users platform 1
  uri:
    url: 'http://myhttpage.example.platform1'
    method: GET
    headers:
      API-KEY: 'SOME_API_KEY'
  register: user_result1

- name: List Users platform 2
  uri:
    url: 'http://myhttpage.example.platform2'
    method: GET
    headers:
      API-KEY: 'SOME_API_KEY'
  register: user_result2
Second role: list_users
- name: List users platform1
  set_fact:
    user: '{{ user | default([]) + [ item.email ] }}'
  loop: "{{ user_result1.json }}"

- debug:
    msg: "{{ user }}"
  tags:
    - user_1

- name: List users Cloudflare
  set_fact:
    name: "{{ name | default([]) + [item.user.email] }}"
  loop: "{{ user_result2.result }}"

- debug:
    msg: "{{ name }}"
  tags:
    - user_2
Playbook.yml
---
- name: Users
  gather_facts: no
  hosts: localhost
  roles:
    - my_key
    - list_users
When I run the playbook without --tags user_1 or user_2, it works fine.
However, when I run it with one of the tags, I get an error saying that the variable user_result1 or user_result2 doesn't exist.
Any ideas, please?
Thanks, Joe.
(@U880D basically answered the question, but the OCD in me wants to mark this as fixed, so I'm typing this.)
This is working as expected: --tags basically lets you skip every task except those with the specified tag. See the official docs for more info on tags:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_tags.html
Echoing what @zeitounator said: if you want a task to run unconditionally when --tags is used, tag it with always.
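As a sketch of that suggestion (adapted from the role above, not the asker's exact file): tagging the registering tasks in my_key with always ensures user_result1 and user_result2 are populated even when the play is limited with --tags user_1 or --tags user_2.

# my_key tasks, with the always tag added so they run under any --tags selection
- name: List Users platform 1
  uri:
    url: 'http://myhttpage.example.platform1'
    method: GET
    headers:
      API-KEY: 'SOME_API_KEY'
  register: user_result1
  tags:
    - always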

serverless remove never works because a bucket I never created does not exist

I have a Lambda S3 trigger and a corresponding S3 bucket in my serverless.yaml, and it works perfectly when I deploy it via serverless deploy.
However, when I want to remove everything with serverless remove, I always get the same error (even without changing anything in the AWS console):
An error occurred: DataImportCustomS31 - Received response status [FAILED] from custom resource. Message returned: The specified bucket does not exist
This is strange because I never specified a bucket with that name in my serverless.yaml. I assume it somehow comes from the existing: true property of my S3 trigger, but I can't fully explain it, nor do I know how to fix it.
This is my serverless.yaml:
service: myTestService

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-central-1
  profile: myprofile
  stage: dev
  stackTags:
    owner: itsme

custom:
  testDataImport:
    bucketName: some-test-bucket-zxzcq234

# functions
functions:
  dataImport:
    handler: src/functions/dataImport.handler
    events:
      - s3:
          bucket: ${self:custom.testDataImport.bucketName}
          existing: true
          event: s3:ObjectCreated:*
          rules:
            - suffix: .xlsx
    tags:
      owner: itsme

# Serverless plugins
plugins:
  - serverless-plugin-typescript
  - serverless-offline

# Resources your functions use
resources:
  Resources:
    TestDataBucket:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: Private
        BucketEncryption:
          ServerSideEncryptionConfiguration:
            - ServerSideEncryptionByDefault:
                SSEAlgorithm: AES256
        BucketName: ${self:custom.testDataImport.bucketName}
        VersioningConfiguration:
          Status: Enabled

Serverless Framework - Create a Lambda and an S3 bucket, upload a file to S3, then extract it to DynamoDB with the Lambda

It is my first time using the Serverless Framework, and my task is to create a Lambda, an S3 bucket, and a DynamoDB table with Serverless, and then invoke the Lambda to transfer data from S3 to DynamoDB.
I am trying to get the bucket name generated by Serverless into my Lambda, but I have had no luck with that.
This is what my serverless.yml looks like:
service: fetch-file-and-store-in-s3
frameworkVersion: ">=1.1.0"

custom:
  bucket:
    Ref: Outputs.AttachmentsBucketName

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: "arn:aws:s3:::${self:custom.bucket.Ref}/*"

functions:
  save:
    handler: handler.save
    environment:
      BUCKET: ${self:custom.bucket.Ref}

resources:
  # S3
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
            AllowedHeaders:
              - '*'
            AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
            MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
and here is the part where the S3 bucket is created:
Resources:
  # S3
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
          - AllowedHeaders:
              - '*'
          - AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
          - MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
and this is the error I am currently getting:
λ sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service fetch-file-and-store-in-s3.zip file to S3 (7.32 MB)...
Serverless: Validating template...
Error --------------------------------------------------
Error: The CloudFormation template is invalid: Invalid template property or properties [AttachmentsBucket, Type, Properties]
You have some issues with indentation:
resources:
  Resources:
    # S3
    AttachmentsBucket:
        Type: AWS::S3::Bucket
        Properties:
            # Set the CORS policy
            CorsConfiguration:
              CorsRules:
                - AllowedOrigins:
                    - '*'
                - AllowedHeaders:
                    - '*'
                - AllowedMethods:
                    - GET
                    - PUT
                    - POST
                    - DELETE
                    - HEAD
                - MaxAge: 3000
  # Print out the name of the bucket that is created
  Outputs:
    AttachmentsBucketName:
      Value:
        Ref: AttachmentsBucket
Indentation matters in a serverless.yml file. In this case, AttachmentsBucket is a resource, so it should be a sub-section under Resources indented by one level, and Type and Properties should be indented one level from the resource name AttachmentsBucket, whereas they are indented two levels in the sample provided. CloudFormation cannot process this resource because it cannot identify a resource with a proper name and properties.
See the updated sample:
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Set the CORS policy
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - '*'
          - AllowedHeaders:
              - '*'
          - AllowedMethods:
              - GET
              - PUT
              - POST
              - DELETE
              - HEAD
          - MaxAge: 3000
# Print out the name of the bucket that is created
Outputs:
  AttachmentsBucketName:
    Value: !Ref AttachmentsBucket
You can validate CloudFormation templates with the AWS CLI.
But your question is about making the Lambda and DynamoDB load work, while in your description you are asking about the deployment part. Can you update your question and tags?
I was able to figure out a solution. As I am very new and this was my first project, I wasn't very familiar with the terms at the beginning. What I did was name my bucket here:
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.bucket} # Getting the name of the bucket I defined under custom in serverless.yml
  # Make Bucket publicly accessible
  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref Bucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal: '*' # public access to the bucket files
            Action: s3:GetObject
            Resource: 'arn:aws:s3:::${self:custom.bucket}/*'
Then, to upload a file along with the deploy, I found a plugin called serverless-s3bucket-sync.
I added the bucket name under the custom attribute, together with the location of my files under folder:
custom:
  bucket: mybucketuniquename # unique global name it will create for the bucket
  s3-sync:
    - folder: images
      bucket: ${self:custom.bucket}
And added the IAM role statements:
iamRoleStatements:
  # S3 Permissions
  - Effect: Allow
    Action:
      - s3:*
    Resource: "arn:aws:s3:::${self:custom.bucket}"

serverless error: "bucket already exists" while deploying from GitLab

I am a newbie to the serverless stack. Below is my serverless.yml file. On deploying it from GitLab, I get this error:
Serverless Error ---------------------------------------
An error occurred: S3XPOLLBucket - bucket already exists.
The serverless.yml file is:
service: sa-s3-resources

plugins:
  - serverless-s3-sync
  - serverless-s3-remover

custom:
  basePath: sa-s3-resources
  environment: ${env:ENV}

provider:
  name: aws
  stage: ${env:STAGE}
  region: ${env:AWS_DEFAULT_REGION}
  environment:
    STAGE: ${self:provider.stage}

resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-xpoll-file-${self:custom.environment}-${self:provider.stage}
    S3JNLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-jnl-file-${self:custom.environment}-${self:provider.stage}
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted.
That means you have to choose a unique name that has not already been chosen by someone else (or even by you in a different development stack) globally.
More details:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html
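One common way to guarantee uniqueness, shown here only as a sketch (not part of the original answer), is to append an account-specific or otherwise unique suffix to the bucket names. The ${aws:accountId} variable assumes Serverless Framework v3 or later; on older versions you would have to pass a suffix in yourself, e.g. via ${env:...}.

custom:
  environment: ${env:ENV}
  # hypothetical unique suffix; ${aws:accountId} requires Serverless Framework v3+
  bucketSuffix: ${aws:accountId}

resources:
  Resources:
    S3XPOLLBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: gs-sa-xpoll-file-${self:custom.environment}-${self:provider.stage}-${self:custom.bucketSuffix}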

Get an entire bucket or more than one object from an AWS S3 bucket through Ansible

As far as I know, the Ansible S3 module can only get one object at a time.
My question is: what if I want to download/get an entire bucket, or more than one object from an S3 bucket, at once? Is there any hack?
I was able to achieve it like so:
- name: get s3_bucket_items
  s3:
    mode: list
    bucket: MY_BUCKET
    prefix: MY_PREFIX/
  register: s3_bucket_items

- name: download s3_bucket_items
  s3:
    mode: get
    bucket: MY_BUCKET
    object: "{{ item }}"
    dest: /tmp/
  with_items: "{{ s3_bucket_items.s3_keys }}"
Notes:
- Your prefix should not have a leading slash.
- The {{ item }} value will have the prefix already.
You first have to list the files into a variable and then copy the files using that variable.
- name: List files
  aws_s3:
    aws_access_key: 'YOUR_KEY'
    aws_secret_key: 'YOUR_SECRET'
    mode: list
    bucket: 'YOUR_BUCKET'
    prefix: 'YOUR_BUCKET_FOLDER'  # Remember to add trailing slashes
    marker: 'YOUR_BUCKET_FOLDER'  # Remember to add trailing slashes
  register: 's3BucketItems'

- name: Copy files
  aws_s3:
    aws_access_key: 'YOUR_KEY'
    aws_secret_key: 'YOUR_SECRET'
    bucket: 'YOUR_BUCKET'
    object: '{{ item }}'
    dest: 'YOUR_DESTINATION_FOLDER/{{ item | basename }}'
    mode: get
  with_items: '{{ s3BucketItems.s3_keys }}'
The Ansible S3 module currently has no built-in way to synchronize buckets to disk recursively.
In theory, you could try to collect the keys to download with a list task and then fetch them one by one:
- name: register keys for synchronization
  s3:
    mode: list
    bucket: hosts
    object: /data/*
  register: s3_bucket_items

- name: sync s3 bucket to disk
  s3:
    mode: get
    bucket: hosts
    object: "{{ item }}"
    dest: /etc/data/conf/
  with_items: "{{ s3_bucket_items.s3_keys }}"
While I often see this solution, it does not seem to work with current ansible/boto versions, due to a bug with nested S3 'directories' (see this bug report for more information) and the Ansible S3 module not creating subdirectories for keys.
I believe it is also possible that you would run into memory issues using this method when syncing very large buckets.
I would also like to add that you most likely do not want to use credentials hard-coded into your playbooks - I suggest you use IAM EC2 instance profiles instead, which are much more secure and convenient.
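As a sketch of that last point (assuming the play runs on an EC2 instance whose instance profile grants S3 read access), the S3 tasks can simply omit the key parameters and let boto pick the credentials up from the instance metadata:

- name: list keys using the instance profile credentials
  aws_s3:
    mode: list
    bucket: hosts
  register: s3_bucket_items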
A solution that works for me would be this:
- name: Sync directory from S3 to disk
  command: "s3cmd sync -q --no-preserve s3://hosts/{{ item }}/ /etc/data/conf/"
  with_items:
    - data
It will be able to list the keys, create the matching local directories, and download each object:
- name: Get s3 objects
  s3:
    bucket: your-s3-bucket
    prefix: your-object-directory-path
    mode: list
  register: s3_object_list

- name: Create download directory
  file:
    path: "/your/destination/directory/path/{{ item | dirname }}"
    state: directory
  with_items:
    - "{{ s3_object_list.s3_keys }}"

- name: Download s3 objects
  s3:
    bucket: your-s3-bucket
    object: "{{ item }}"
    mode: get
    dest: "/your/destination/directory/path/{{ item }}"
  with_items:
    - "{{ s3_object_list.s3_keys }}"
As of Ansible 2.0 the S3 module includes the list action, which lets you list the keys in a bucket.
If you're not ready to upgrade to Ansible 2.0 yet then another approach might be to use a tool like s3cmd and invoke it via the command module:
- name: Get objects
  command: s3cmd ls s3://my-bucket/path/to/objects
  register: s3objects
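A possible follow-up step, shown only as a sketch (it assumes each line of the s3cmd ls output ends with the object's s3:// URL as its last whitespace-separated field), would then fetch every listed object:

- name: Download each listed object
  command: "s3cmd get {{ item.split() | last }} /tmp/"
  with_items: "{{ s3objects.stdout_lines }}"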
This is a non-Ansible solution, but I finally got it working on an instance running with an assumed role that has S3 bucket access (or with the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables set):
---
- name: download fs s3 bucket
  command: aws s3 sync s3://{{ s3_backup_bucket }} {{ dst_path }}
The following code will list every file in every S3 bucket in the account. It is run as a role, with a group_vars/localhost/vault.yml containing the AWS keys.
I still haven't found out why the second, more straightforward Method II doesn't work, but maybe someone can enlighten us.
- name: List S3 Buckets
  aws_s3_bucket_facts:
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    # region: "eu-west-2"
  register: s3_buckets
# - debug: var=s3_buckets

- name: Iterate buckets
  set_fact:
    app_item: "{{ item.name }}"
  with_items: "{{ s3_buckets.ansible_facts.buckets }}"
  register: app_result
# - debug: var=app_result.results  # .item.name <= does not work??

- name: Create Fact List
  set_fact:
    s3_bucketlist: "{{ app_result.results | map(attribute='item.name') | list }}"
# - debug: var=s3_bucketlist

- name: List S3 Bucket files - Method I - works
  local_action:
    module: aws_s3
    bucket: "{{ item }}"
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    mode: list
  with_items:
    - "{{ s3_bucketlist }}"
  register: s3_list_I
# - debug: var=s3_list_I

- name: List S3 Bucket files - Method II - does not work
  aws_s3:
    aws_access_key: "{{ aws_access_key_id }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    bucket: "{{ item }}"
    mode: list
  with_items: "{{ s3_bucketlist }}"
  register: s3_list_II
Maybe you could change your with_items; then it should work:
- name: get list to download
  aws_s3:
    region: "{{ region }}"
    bucket: "{{ item }}"
    mode: list
  with_items: "{{ s3_bucketlist }}"
  register: s3_bucket_items
But maybe faster is:
- name: Sync directory from S3 to disk
  command: "aws --region {{ region }} s3 sync s3://{{ bucket }}/ /tmp/test"