Access S3 bucket without running aws configure with kubernetes - amazon-s3

I have an S3 bucket with some sql scripts and some backup files using mysqldump.
I also have a .yaml file that deploys a fresh mariadb image.
As I'm not very experienced with Kubernetes yet, if I want to restore one of those backup files into the pod, I have to bash into it, run the aws cli, enter my credentials, sync the bucket locally, and then run mysql < backup.sql.
This, obviously, destroys the concept of full automated deployment.
So, the question is... how can I securely make this pod immediately configured to access S3?

I think you should consider mounting the S3 bucket inside the pod.
This can be achieved with, for example, s3fs-fuse.
There are two nice articles about Mounting a S3 bucket inside a Kubernetes pod and Kubernetes shared storage with S3 backend; I do recommend reading both to understand how this works.
You basically have to build your own image from a Dockerfile and supply the necessary S3 bucket info and AWS security credentials.
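For illustration, here is a minimal sketch of what such an image's entrypoint might do; the environment variable names and mount point are assumptions of mine, and the pod generally needs access to /dev/fuse (for example a privileged container) for the mount to work:
#!/bin/sh
# write the key pair in the format s3fs expects: ACCESS_KEY_ID:SECRET_ACCESS_KEY
echo "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
mkdir -p /var/s3
# mount the bucket named in S3_BUCKET at /var/s3
s3fs "${S3_BUCKET}" /var/s3 -o passwd_file=/etc/passwd-s3fs
# hand off to whatever command the pod was started with
exec "$@"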
Once you have the storage mounted, you will be able to call scripts from it in the following way:
apiVersion: v1
kind: Pod
metadata:
  name: test-world
spec:                      # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: debian
    command: ["/bin/sh", "-c"]
    args: ["command one; command two && command three"]

Related

Set mfsymlinks when mounting Azure File volume to ACI instance

Is there a way to specify the mfsymlinks option when mounting an Azure Files share to an ACI container instance?
As shown on learn.microsoft.com, symlinks can be supported in Azure Files when mounted on Linux, with the mfsymlinks option enabling Minshall+French symlinks.
I would like to use an Azure Files share mounted to an Azure Container Instance, but I need to be able to use symlinks in the mounted file system and I cannot find a way to specify this. Does anyone know of a way to do this?
Unfortunately, as far as I know, when you create the container and mount the Azure File Share through the CLI command az container create with parameters such as
--azure-file-volume-account-key
--azure-file-volume-account-name
--azure-file-volume-mount-path
--azure-file-volume-share-name
you cannot set the symlinks option as you want, and there is also no parameter for you to set it.
In addition, if you take a look at the template for Azure Container Instance, you will find that no property exposes a symlinks setting. In my opinion, this means you cannot set symlinks for an Azure Container Instance the way you want. Hope this helps.
As a workaround that suits my use case, once the file structure, including symlinks, has been created on the container's local FS, I tar up the files onto the Azure Files share:
tar -cpzf /mnt/letsencrypt/etc.tar.gz -C / etc/letsencrypt/
Then, when the container runs again, it extracts from the tarball, preserving the symlinks:
tar -xzf /mnt/letsencrypt/etc.tar.gz -C /
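A rough sketch of how those two commands could be wired into the container's startup script (the paths are the ones from the commands above; the rest is my own framing):
#!/bin/sh
# restore any previously saved state, symlinks included, before doing real work
if [ -f /mnt/letsencrypt/etc.tar.gz ]; then
    tar -xzf /mnt/letsencrypt/etc.tar.gz -C /
fi
# ... run the main workload here ...
# save the local file structure back onto the Azure Files share on the way out
tar -cpzf /mnt/letsencrypt/etc.tar.gz -C / etc/letsencrypt/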
I'll leave this open for now to see if ACI comes to support the option natively.
Update from Azure docs (azure-files-volume#mount-options):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
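To actually consume that PersistentVolume from a pod, you still need a claim and a volume mount; a minimal sketch, with the pod name and mount path being placeholders of mine:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: azurefile
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: symlink-test
spec:
  containers:
  - name: app
    image: debian
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    persistentVolumeClaim:
      claimName: azurefile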

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files to s3 from GCS to leverage an AWS lambda function. I have edited ~/.boto.cfg and commented out the 2 aws authentication parameters but a simple gsutil ls s3://mybucket fails from either an GCE or EC2 VM.
The error is: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version: 4.28 and locations of GCS and S3 bucket are respectively US-CENTRAL1 and US East (Ohio) - in case this is relevant.
I am clueless, as the AWS key is valid and I enabled http/https. Downloading from GCS and uploading to S3 using Cyberduck on my laptop is impractical (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures by adding to ~/.boto a new [s3] section like
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
That section is inherited from boto but is currently not created by gsutil config, so it needs to be added explicitly for the target endpoint.
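With that in place, the copy can run bucket-to-bucket; for example (the GCS bucket name is a placeholder, the S3 bucket name is the one from the question):
gsutil -m cp -r "gs://my-gcs-bucket/*" s3://mybucket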
For s3-to-GCS, I will consider the more server-less Storage Transfer Service API.
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied files from GCS to my GCE hard drive
Step 2: Using aws cli (aws s3 cp ...), I copied files from GCE hard drive to s3 bucket
The above methodology has worked reliably for me. I tried using gsutil rsync but it failed unexpectedly.
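As a rough sketch of those two steps (bucket names and the staging directory are placeholders):
# step 1: GCS -> the GCE VM's local disk
gsutil -m cp -r "gs://my-gcs-bucket/*" /data/staging/
# step 2: local disk -> S3
aws s3 cp /data/staging/ s3://mybucket/ --recursive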
Hope this helps

GoReplay - Upload to S3 does not work

I am trying to capture all incoming traffic on a specific port using GoReplay and to upload it directly to S3 servers.
I am running a simple file server on port 8000 and a gor instance using the (simple) command
gor --input-raw :8000 --output-file s3://<MyBucket>/%Y_%m_%d_%H_%M_%S.log
It does create a temporary file at /tmp/, but other than that, it does not upload anything to S3.
Additional information:
The OS is Ubuntu 14.04.
AWS cli is installed.
The AWS credentials are defined within the environment.
It seems the information you are providing, or the scenario you explained, is not complete; however, uploading a file from your EC2 machine to S3 is as simple as the command below.
aws s3 cp yourSourceFile s3://yourbucket
To see your file, use the command below:
aws s3 ls s3://yourbucket
However, S3 is object storage, so you can't use it for files that are continually being edited, appended to, or updated.
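If the goal is simply to get the finished capture files off the box, a periodic sync is one possible workaround; this is an assumption on my part, with /tmp standing in for whatever directory gor actually writes to:
# run from cron or a loop; pushes completed capture files to the bucket
aws s3 sync /tmp/ s3://<MyBucket>/ --exclude "*" --include "*.log"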

How to use S3 adapter cli for snowball

I'm using s3 adapter to copy files from a snowball device to local machine.
Everything appears to be in order as I was able to run this command and see the bucket name:
aws s3 ls --endpoint http://snowballip:8080
But besides this, AWS doesn't offer any examples of calling the cp command. How do I provide the bucket name and the key with this --endpoint flag?
Further, when I ran this:
aws s3 ls --endpoint http://snowballip:8080/bucketname
It returned 'Bucket'... Not sure what that means, because I expected to see the files.
I can confirm the following is correct for Snowball and Snowball Edge, as sqlbot says in the comment:
aws s3 ls --endpoint http://snowballip:8080 s3://bucketname/[optionalprefix]
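The cp command takes the same --endpoint flag; for example (the file name and key are placeholders):
aws s3 cp ./backup.tar.gz s3://bucketname/backup.tar.gz --endpoint http://snowballip:8080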
References:
http://docs.aws.amazon.com/cli/latest/reference/
http://docs.aws.amazon.com/snowball/latest/ug/using-adapter-cli.html
Just got one in the post

How do I design a Bootup script for setting up an apache web server running on a brand new AWS EC2 Ubuntu instance?

I want to configure an EC2 instance so that it installs, configures and starts an Apache web server without my (human) intervention.
To this end, I am taking advantage of the "User Data" section and I have written the following script:
#!/bin/bash
sudo apt-get upgrade
sudo apt-get install -y apache2
sudo apt-get install -y awscli
while [ ! -e /var/www/html/index.html ]; do aws s3 cp s3://vietnhi-s-bucket/index.html /var/www/html; done
sudo systemctl restart apache2
Description of the Functionality of the Bootup script:
The script upgrades the Ubuntu instance's packages from whatever state they were in when the AMI image was created to the current versions at the time the EC2 instance is launched from the image.
The script installs the Apache 2 server.
The script installs the AWS CLI, because the aws s3 cp command on the next line is not going to work without it.
The script copies the sample index.html file from the vietnhi-s-bucket S3 bucket to the /var/www/html directory of the Apache web server and overwrites its default index.html file.
The script restarts the Apache web server. I could have used "Start" but I chose to use "restart".
Explanatory Notes:
The script assumes that I have created an IAM role that permits AWS to copy the file index.html from an S3 bucket called "vietnhi-s-bucket". I have given the name "S3" to the IAM role and assigned the "S3ReadAccess" policy to that role.
The script assumes that I have created an S3 bucket called "vietnhi-s-bucket" where I have stashed a sample index.html file.
For reference, here are the contents of the sample index.html file:
<html>
<body>
This is a test
</body>
</html>
Does the bootup script work as intended?
The script works as-is.
To arrive at that script, I had to overcome three challenges:
Create an appropriate IAM role. The minimum viable role MUST include the "S3ReadAccess" policy. This role is absolutely necessary for AWS to be able to use the public and private access keys for your AWS account that are loaded in your environment. Copying the index.html file from the vietnhi-s-bucket S3 bucket is not feasible if AWS does not have access to your AWS account keys.
Install the AWS CLI interface (awscli). For whatever reason, I never saw that line included in any of the AWS official documentation or any of the support offered on the web including the AWS forums. You can't run the AWS CLI s3 cp command if you don't install the AWS CLI interface.
I originally used "aws s3 cp s3://vietnhi-s-bucket/index.html /var/www/html" on its own as my copy-from-S3 instruction. Bad call. https://forums.aws.amazon.com/thread.jspa?threadID=220187
The link above refers to a timing issue that AWS hasn't resolved and the only workaround is to wrap retries around the aws s3 cp command.
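A sketch of such a retry wrapper, bounded so the user data script cannot hang forever (the attempt count and sleep interval are arbitrary choices of mine):
for attempt in $(seq 1 30); do
    # retry until the instance's IAM role credentials are available and the copy succeeds
    aws s3 cp s3://vietnhi-s-bucket/index.html /var/www/html/ && break
    sleep 5
done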