How do I design a bootup script for setting up an Apache web server running on a brand new AWS EC2 Ubuntu instance?

I want to configure an EC2 instance so that it installs, configures and starts an Apache web server without my (human) intervention.
To this end, I am taking advantage of the "User Data" section and I have written the following script:
#!/bin/bash
# Refresh the package index, then upgrade non-interactively
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get install -y apache2
sudo apt-get install -y awscli
# Retry until the copy from S3 succeeds (see the timing issue discussed in the answer),
# staging in /tmp because Apache ships its own /var/www/html/index.html
while [ ! -e /tmp/index.html ]; do aws s3 cp s3://vietnhi-s-bucket/index.html /tmp/; sleep 5; done
sudo cp /tmp/index.html /var/www/html/index.html
sudo systemctl restart apache2
Description of the Functionality of the Bootup script:
The script refreshes the package index and upgrades the Ubuntu instance from whatever state it was in when the AMI image was created to current, at the time the EC2 instance is launched from the image.
The script installs the Apache 2 server.
The script installs the AWS CLI (awscli), because the aws s3 cp command later in the script will not work without it.
The script copies the sample index.html file from the vietnhi-s-bucket S3 bucket to the /var/www/html directory of the Apache web server and overwrites its default index.html file.
The script restarts the Apache web server. I could have used "Start" but I chose to use "restart".
Explanatory Notes:
The script assumes that I have created an IAM role that permits the instance to copy the file index.html from an S3 bucket called "vietnhi-s-bucket". I have given the name "S3" to the IAM role and assigned the "S3ReadAccess" policy to that role.
The script assumes that I have created an S3 bucket called "vietnhi-s-bucket" where I have stashed a sample index.html file; a sketch of creating and populating such a bucket with the AWS CLI follows the sample file below.
For reference, here are the contents of the sample index.html file:
<html>
<body>
This is a test
</body>
</html>
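For completeness, a hedged sketch of creating and populating that bucket with the AWS CLI, run from the machine where the sample index.html lives (the bucket name is the one assumed throughout this question):
aws s3 mb s3://vietnhi-s-bucket
aws s3 cp index.html s3://vietnhi-s-bucket/index.html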
Does the bootup script work as intended?

The script works as-is.
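A quick way to confirm that on a freshly launched instance is to check the user-data output log and hit the web server locally; the log path below is the standard cloud-init location on Ubuntu images:
# run on the instance after boot
cat /var/log/cloud-init-output.log
curl http://localhost/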
To arrive at that script, I had to overcome three challenges:
Create an appropriate IAM role. The minimum viable role MUST include the "S3ReadAccess" policy. This role is what supplies the instance with temporary credentials for your AWS account, so the AWS CLI can read from S3 without you placing your access keys on the instance; copying the index.html file from the vietnhi-s-bucket S3 bucket is not feasible if the instance has no such credentials. (A sketch of creating this role with the AWS CLI follows this list.)
Install the AWS CLI (awscli). For whatever reason, I never saw that step included in any of the official AWS documentation or in any of the support offered on the web, including the AWS forums. You can't run the aws s3 cp command if you don't install the AWS CLI first.
I originally used "aws s3 cp s3://vietnhi-s-bucket/index.html /var/www/html", with no retries, as my copy-from-S3 instruction. Bad call. https://forums.aws.amazon.com/thread.jspa?threadID=220187
The link above refers to a timing issue that AWS hasn't resolved; the only workaround is to wrap retries around the aws s3 cp command.
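A hedged sketch of creating such a role and its instance profile with the AWS CLI, assuming the role name "S3" used above and AWS's managed AmazonS3ReadOnlyAccess policy as the read-only policy:
# trust policy that lets EC2 assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{ "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }]
}
EOF
aws iam create-role --role-name S3 --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name S3 --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# the instance profile is what actually gets attached to the EC2 instance at launch
aws iam create-instance-profile --instance-profile-name S3
aws iam add-role-to-instance-profile --instance-profile-name S3 --role-name S3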

Related

aws s3 ls gives error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>

Recently I installed the AWS CLI on a Linux machine following the documentation on the official AWS website. On the first go, I was able to run the s3 commands without any issue. As part of my development, I uninstalled aws-cli and re-installed it. After that, I was getting the error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00> when I execute aws s3 ls.
I figured it out: I just needed to add the region when running aws configure:
aws configure
AWS Access Key ID [******************RW]:
AWS Secret Access Key [******************7/]:
Default region name [None]: us-east-1
Then it works!
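If you want to skip the interactive prompts, the same fix can be applied directly, either by writing the region into the CLI config or by exporting it for the current shell (us-east-1 is just the region from the example above):
aws configure set region us-east-1
# or, per shell session:
export AWS_DEFAULT_REGION=us-east-1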
Thanks.

How to download a file from S3 onto an EC2 instance using Packer to build a custom AMI

I am trying to create a custom AMI using Packer.
I want to install some specific software on the custom AMI, and my setup files are stored in an S3 bucket. But it seems there is no direct way to download an S3 file in Packer the way cfn-init does.
So, is there any way to download a file onto the EC2 instance using Packer?
Install the awscli in the instance and use iam_instance_profile to give the instance permissions to get the files from S3.
I can envisage a case where this is ineffective: when building the image on AWS you use your local credentials, but while the image is building, the builder instance runs under Packer's own temporary user rather than as you, so it does not have your credentials and cannot access the S3 bucket (if it is private).
Option one: use https://github.com/enmand/packer-provisioner-s3.
Option two: use the shell-local provisioner to pull the S3 files down to your machine with aws s3 cp, then the file provisioner to upload them to the correct folder in the builder image; you can then use a shell provisioner to do any other work on the files. I chose this because, although it is more code, it is more universal: when I share my build, others have no need to install anything extra. (A sketch of the shell-local step follows this list.)
Option three: wait. An enhancement offering an S3 passthrough using local credentials was discussed on the Packer GitHub in 2019, but it isn't on the official roadmap.
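A hedged sketch of the shell-local step from option two (the bucket and folder names are hypothetical); the downloaded files are then pushed into the image with the file provisioner:
# runs on the machine driving the Packer build, with your local AWS credentials
aws s3 cp s3://my-setup-bucket/installers/ ./installers --recursive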
Assuming the awscli is already installed on the EC2 builder instance, use a sample command like the one below in a shell provisioner.
sudo aws s3 cp s3://bucket-name/path_to_folder/file_name /home/ec2-user/temp
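For a slightly fuller picture, a minimal provisioning script run by Packer's shell provisioner might look like the sketch below, combining this with the iam_instance_profile suggestion above; the bucket and file names are hypothetical and credentials come from the instance profile:
#!/bin/bash
# runs inside the builder instance; awscli and an instance profile with S3 read access are assumed
sudo aws s3 cp s3://my-setup-bucket/install.sh /tmp/install.sh
sudo bash /tmp/install.sh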

gsutil cannot copy to s3 due to authentication

I need to copy many (1000+) files from GCS to S3 to leverage an AWS Lambda function. I have edited ~/.boto.cfg and commented out the 2 aws authentication parameters, but a simple gsutil ls s3://mybucket fails from either a GCE or an EC2 VM.
The error is: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I use gsutil version: 4.28 and locations of GCS and S3 bucket are respectively US-CENTRAL1 and US East (Ohio) - in case this is relevant.
I am clueless, as the AWS key is valid and I have enabled http/https. Downloading from GCS and uploading to S3 using Cyberduck on my laptop is impractical (>230 GB).
As per https://issuetracker.google.com/issues/62161892, gsutil v4.28 does support AWS v4 signatures; you enable them by adding a new [s3] section to ~/.boto, like this:
[s3]
# Note that we specify region as part of the host, as mentioned in the AWS docs:
# http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
host = s3.us-east-2.amazonaws.com
use-sigv4 = True
The use of that section is inherited from boto3 but is currently not created by gsutil config so it needs to be added explicitly for the target endpoint.
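If you prefer not to edit ~/.boto permanently, gsutil's -o flag overrides boto config values for a single invocation, so the same two settings can be passed on the command line; a sketch, with the endpoint matching the Ohio bucket from the question:
gsutil -o "s3:host=s3.us-east-2.amazonaws.com" -o "s3:use-sigv4=True" ls s3://mybucket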
For S3-to-GCS transfers, I will consider the serverless Storage Transfer Service API instead.
I had a similar problem. Here is what I ended up doing on a GCE machine:
Step 1: Using gsutil, I copied files from GCS to my GCE hard drive
Step 2: Using aws cli (aws s3 cp ...), I copied files from GCE hard drive to s3 bucket
The above methodology has worked reliably for me. I tried using gsutil rsync but it failed unexpectedly.
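A minimal sketch of those two steps, with hypothetical bucket names and a /tmp staging directory (-m parallelises the copy):
# step 1: GCS to the GCE machine's disk
gsutil -m cp -r gs://my-gcs-bucket/data /tmp/transfer
# step 2: local disk to S3
aws s3 cp /tmp/transfer s3://my-s3-bucket/data --recursive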
Hope this helps

GoReplay - Upload to S3 does not work

I am trying to capture all incoming traffic on a specific port using GoReplay and to upload it directly to S3 servers.
I am running a simple file server on port 8000 and a gor instance using the (simple) command
gor --input-raw :8000 --output-file s3://<MyBucket>/%Y_%m_%d_%H_%M_%S.log
It does create a temporary file under /tmp/, but other than that, it does not upload anything to S3.
Additional information :
The OS is Ubuntu 14.04
AWS cli is installed.
The AWS credentials are defined in the environment.
The information and the scenario you have provided seem incomplete; however, uploading a file from your EC2 machine to S3 is as simple as the command below:
aws s3 cp yourSourceFile s3://your-bucket/
To verify the upload, you can list the bucket:
aws s3 ls s3://your-bucket
However, S3 is object storage, so you can't use it for files that are continually being edited, appended to, or updated.
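If gor's direct S3 output keeps failing, one hedged workaround, relying on the awscli and the environment credentials already mentioned, is to let gor write rotated files locally and push them to S3 separately, for example from cron:
# capture to local rotated files instead of writing to S3 directly
gor --input-raw :8000 --output-file /var/log/gor/%Y_%m_%d_%H_%M_%S.log
# periodically (e.g. from cron): push the capture files to the bucket
aws s3 sync /var/log/gor/ s3://<MyBucket>/gor/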

ssh-error when using Ansible with AWS EC2

The scenario is:
AWS EC2 instances have two users, namely root and ubuntu. Using root is not advisable; AWS recommends ubuntu as the default user, and this user has full sudo permissions.
Ansible's controlling machine runs on an EC2 instance. The Ansible playbook bootstraps another EC2 instance and installs certain software on it.
A Node.js web app triggers these Ansible scripts as the root user.
The setup works well when all the Ansible and Node.js files are kept in the same folder, but when they are organised into different folders, Ansible gives an SSH error.
Error:
So, when the files are organised into separate folders, the Node.js app triggers the Ansible scripts and the new EC2 instance is bootstrapped, but once the SSH port is ready, the playbook cannot install the required software because it gets an SSH "permission denied" error.
The Node.js code that triggers the Ansible scripts is executed as
child.exec("ansible-playbook ../playbook.yml");
The only change in the code after organising into folders is the addition of the "../" path.
Debugging:
As I said, there are two users on the EC2 instance. While bootstrapping, the ec2_key module stores root's SSH key on the newly bootstrapped instance, but while installing the software on that instance, ubuntu's SSH key is used for access. This conflict between the keys produces the SSH "permission denied" error, and it occurs specifically after organising the Ansible and Node.js files into separate folders; if all the Ansible and Node.js files are put in the same folder, no error is raised. FYI: all files are stored under the ubuntu user.
Just puzzled about this!
As said by @udondan, adding the cwd option to the exec function works.
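In shell terms, the fix amounts to starting ansible-playbook from the directory that the relative path (and the SSH key lookup) expects; the folder path below is hypothetical:
# equivalent of passing a cwd option to exec
cd /home/ubuntu/app && ansible-playbook ../playbook.yml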