Move Amazon EC2 AMIs between regions via web-interface? - amazon-s3

Is there any easy way to move a custom AMI image between regions? (Tokyo -> Singapore)
I know you can mess around with the API and S3 to get it done, but is there any easier way to do it?

As of December 2012, Amazon supports migrating an AMI to another region through the UI (AWS Management Console). See their documentation here
So, here is how I've done it:
1. From the AMI, find the snapshot ID and how it is attached (e.g. /dev/sda1).
2. Select the snapshot, click "Copy", set the destination region and make the copy (it takes a while!).
3. Select the new snapshot, click "Create Image", and fill in:
Architecture: choose 32- or 64-bit
Name/Description: give it one
Kernel ID: when migrating a Linux AMI, choosing "default" may fail. What worked for me was to go to the Amazon Kernels listing here to find the kernels Amazon supports, then specify one when creating the image.
Root Device Name: /dev/sda1
Click "Yes, Create".
4. Launch an instance from the new AMI and test that you can connect.
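If you prefer the command line, here is a minimal sketch of the same snapshot-copy flow with the aws CLI (not part of the original answer; the snapshot IDs and AMI name are placeholders, regions match the Tokyo -> Singapore example):
aws ec2 copy-snapshot --region ap-southeast-1 --source-region ap-northeast-1 --source-snapshot-id snap-aaaaaaaa --description "root volume copy"   # run against the destination region
aws ec2 register-image --region ap-southeast-1 --name my-migrated-ami --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-bbbbbbbb"}}]'   # snap-bbbbbbbb is the new snapshot ID returned by copy-snapshot
# paravirtual AMIs may also need --kernel-id aki-xxxxxxxx, per the kernel note above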

You can do it using Eric's post:
http://alestic.com/2010/10/ec2-ami-copy

The following assumes the EC2 command-line tools are installed in /opt/aws/bin/, JAVA_HOME=/usr, and you are running the i386 architecture; otherwise replace i386 with x86_64.
1) Take a live snapshot (bundle the running volume). This assumes your image can fit in 1.5GB and that you have that much to spare in /mnt (check by running df):
/opt/aws/bin/ec2-bundle-vol -d /mnt -k /home/ec2-user/.ec2/pk-XXX.pem -c /home/ec2-user/.ec2/cert-XXX.pem -u 123456789 -r i386 -s 1500
2) Upload to current region's S3 bucket
/opt/aws/bin/ec2-upload-bundle -b S3_BUCKET -m /mnt/image.manifest.xml -a abcxyz -s SUPERSECRET
3) Transfer the image to EU S3 bucket
/opt/aws/bin/ec2-migrate-image -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem -o abcxyz -w SUPERSECRET --bucket S3_BUCKET_US --destination-bucket S3_BUCKET_EU --manifest image.manifest.xml --location EU
4) Register your AMI so you can fire up the instance in Ireland
/opt/aws/bin/ec2-register -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem http://s3.amazonaws.com:80/S3_BUCKET/image.manifest.xml --region eu-west-1 --name DEVICENAME -a i386 --kernel aki-xxx
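Once registered, you can fire up an instance from the new AMI to confirm it works; for example (the AMI ID, key pair and security group below are placeholders and must already exist in eu-west-1):
/opt/aws/bin/ec2-run-instances ami-yyyyyyyy -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem --region eu-west-1 -n 1 -k EU_KEYPAIR -g EU_SECURITY_GROUP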

There are API tools for this. http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-MigrateImage.html

I think that is now outdated by ec2-bundle-vol and ec2-migrate-image. By the way, you can also take a look at this Perl script by Lincoln D. Stein:
http://search.cpan.org/~lds/VM-EC2/bin/migrate-ebs-image.pl
Usage:
$ migrate-ebs-image.pl --from us-east-1 --to ap-southeast-1 ami-123456

Amazon has just announced support for this functionality in this blog post. Note that the answer from dmohr relates to copying EBS snapshots, not AMIs.
In case the blog post is unavailable, quoting the relevant parts:
To use AMI Copy, simply select the AMI to be copied from within the
AWS Management Console, choose the destination region, and start the
copy. AMI Copy can also be accessed via the EC2 Command Line
Interface or EC2 API as described in the EC2 User’s Guide. Once the
copy is complete, the new AMI can be used to launch new EC2 instances
in the destination region.
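For the command line, the equivalent call with the aws CLI looks roughly like this (the AMI ID and regions are placeholders; run it against the destination region):
aws ec2 copy-image --region ap-southeast-1 --source-region ap-northeast-1 --source-image-id ami-xxxxxxxx --name "my-ami-copy"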

AWS now supports copying an EBS snapshot to another region via the UI/CLI/API. You can copy the snapshot and then make an AMI from it. Direct AMI copy is coming - from AWS:
"We also plan to launch Amazon Machine Image (AMI) Copy as a follow-up
to this feature, which will enable you to copy both public and
custom-created AMIs across regions."
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html

Ylastic allows you to move EBS-backed Linux images between regions.
It's $25 or $50 per month, but it looks like you can evaluate it for a week.

I just did this using a script on CloudyScripts, worked fantastically: https://cloudyscripts.com/tool/show/5 (and it's free).

As of 2017, it's pretty simple: in the EC2 console, select the AMI, choose Copy AMI, and pick the destination region.

I'll add Scalr to the list of tools you can use (disclaimer: I work there). Within Scalr, you can create your own AMI (we call them roles). Once your role is ready, you just have to choose where you want to deploy it (in any region).
Scalr is open source, released under the Apache 2 license: you can download it and install it yourself. Otherwise, it is also available as a hosted version that includes support. Alternatives to Scalr include RightScale and enStratus.

Related

How to download a file from S3 into an EC2 instance using Packer to build a custom AMI

I am trying to create a custom AMI using Packer.
I want to install some specific software on the custom AMI, and my setup files are in an S3 bucket. But it seems there is no direct way to download an S3 file in Packer the way cfn-init can.
So is there any way to download a file onto the EC2 instance using Packer?
Install the awscli in the instance and use iam_instance_profile to give the instance permissions to get the files from S3.
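As a rough sketch, the shell provisioner could then run something like the following, assuming the builder's iam_instance_profile grants s3:GetObject on the bucket (the bucket and paths here are made up):
sudo apt-get update && sudo apt-get install -y awscli   # Ubuntu base image; adjust for other distros
aws s3 cp s3://my-setup-bucket/installers/ /tmp/installers/ --recursive   # uses the instance profile credentials, not your local keys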
I can envisage a case where this is ineffective.
When building the image on AWS, you use your local creds. But while the image is building, the Packer build instance runs as its own packer user - it is not you, so it doesn't have your creds and can't access the S3 bucket (if it is private).
Option one: https://github.com/enmand/packer-provisioner-s3
Option two: use a shell-local provisioner to pull the S3 files down to your machine with aws s3 cp, then a file provisioner to upload them to the correct folder in the builder image; you can then use a remote shell provisioner to do any other work on the files (see the sketch after this list). I chose this because, although it's more code, it is more universal when I share my build - others have no need to install extra tooling.
Option three: wait and wait. There is an enhancement spoken of in the 2019 Packer GitHub issues to offer an S3 passthrough using local creds, but it isn't on the official roadmap.
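For option two, the local download step is just a plain aws s3 cp run on your machine (hypothetical bucket and path) before the file provisioner uploads the directory into the builder image:
aws s3 cp s3://my-setup-bucket/installers/ ./installers/ --recursive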
Assuming awscli is already installed on the EC2 instance, use the below sample command in a shell provisioner.
sudo aws s3 cp s3://bucket-name/path_to_folder/file_name /home/ec2-user/temp

youtube-dl not able to authenticate from amazon ec2 machine

I installed youtube-dl on my local machine using curl as mentioned in the official README here.
sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
sudo chmod a+rx /usr/local/bin/youtube-dl
Now when I run below command on my local machine
youtube-dl --cookies cookie.txt https://www.youtube.com/watch?v=x-5V_RS3Q48 -u my_account@gmail.com -p my_pass_word
I am able to download the video without any hassle.
But when I try to download the same video on one of my EC2 instances, it fails with an authentication error.
The installation procedure on both machines is exactly the same, the youtube-dl version is exactly the same (2017.08.18), and the Python version is the same (2.7.6).
The only difference I could figure out is the kernel versions on both machines:
On My Local: Linux-3.19.0-25-generic-x86_64-with-Ubuntu-14.04-trusty
On EC2 Instance: Linux-3.13.0-74-generic-x86_64-with-Ubuntu-14.04-trusty
Also, the video I am trying to download is private and was uploaded by the same user whose credentials I am providing.
One important point to note is that the EC2 machine is able to download the video without any trouble if I am not using the username & password (which is only possible for videos that are not private).
Thanks
Posting the answer in case someone else is stuck with a similar issue.
The issue was with the cookies not being generated on the server-edition Linux OS on the EC2 instance provided by AWS.
From what I learnt, these machines don't have the Firefox browser (at least by default), and that's why youtube-dl was failing to create the cookie file.
Solution
I created the cookie file locally, set its expiry time to 20 years in the future, moved that cookie to the EC2 instance, and used it to sign in rather than creating one on the server.
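A minimal sketch of that workflow, assuming the cookie file was exported locally as cookie.txt (the host name and video URL are placeholders):
scp cookie.txt ec2-user@my-ec2-host:~/cookie.txt   # copy the locally generated cookie file to the instance
youtube-dl --cookies ~/cookie.txt https://www.youtube.com/watch?v=XXXXXXXXXXX   # on the instance, no -u/-p needed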
Thanks

Where should a dockerized web application store uploaded files?

I'm building a web application that needs to allow users to upload profile pictures. I want the application to be self-contained, so that people don't need to have an s3 or other cloud storage service account.
It's best to keep Docker containers as disposable as possible, so I guess I should create a volume. I want the volume to be created automatically, so people don't have to specify one when running the container, but the documentation for the VOLUME instruction in Dockerfiles confuses me.
The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.
What does it mean to be marked as such? The data is to be written by the application; it's not coming from an external source.
When you mark a volume in the Dockerfile, say VOLUME /site/uploads, it makes it very easy to later run another container with --volumes-from <container-name> and have /site/uploads available in the new container, with all the data that has been written (and that will be written, if the first container is still running).
Also, you'll be able to see that volume with docker volume ls after you start the container the first time.
The only problem that you might have if you delete the container, is that you will lose the mapping provided by docker inspect <container-name> that tells you which volume your container created. To see the volume your container created really clearly and quickly, try docker inspect <container-name> | jq '.[].Mounts' if you have jq installed. Otherwise, docker inspect <container-name> | grep Mounts -A 10 might be enough when you only have one volume. (you can also just wade through all the json yourself)
Even if you remove the container that created the volume, the volume will remain on your system, viewable with docker volume ls unless you run docker volume rm <volume-name>
Note: I'm using docker version 1.10.3
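To make that concrete, here is a quick sketch using a hypothetical image whose Dockerfile contains VOLUME /site/uploads:
docker build -t myapp .
docker run -d --name app1 myapp                         # Docker creates an anonymous volume for /site/uploads
docker volume ls                                        # the new volume shows up here
docker inspect app1 | grep -A 10 Mounts                 # maps the volume name to /site/uploads
docker run -d --name app2 --volumes-from app1 myapp     # second container shares the same uploads data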
You will not have problems with that; the images will be uploaded to the mounted filesystem without issues.
You may have to loosen the permissions on the uploads folder so that the application can write to it.

how to run an instance in a different region through aws api tools

When I execute the command ec2-describe-availability-zones, it shows
AVAILABILITYZONE us-east-1a available us-east-1
AVAILABILITYZONE us-east-1c available us-east-1
AVAILABILITYZONE us-east-1d available us-east-1
so I can only run instances in us-east-1.
How can I use another region, like us-west-1, if I want to run an instance there?
I copied the AMI from us-east-1 to us-west-1
and executed the command
ec2-run-instances ami-526a0662 -n 1 -k USweastOregon -g launch-wizard-2 --monitor
it shows
Client.InvalidAMIID.NotFound: The image id '[ami-526a0662]' does not exist
AMIs are region-specific. If you want to use an AMI in a different region, you must first copy it:
To use AMI Copy, simply select the AMI to be copied from within the
AWS Management Console, choose the destination region, and start the
copy. AMI Copy can also be accessed via the EC2 Command Line
Interface or EC2 API as described in the EC2 User’s Guide. Once the
copy is complete, the new AMI can be used to launch new EC2 instances
in the destination region.
The AMI in the new region will have a different AMI ID.
You should use the --region option and specify eu-west-1 or us-west-1.
--region REGION
Specify REGION as the web service region to use.
This option will override the URL specified by the "-U URL" option
and EC2_URL environment variable.
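For example, using a placeholder for the new AMI ID that the copy produced in us-west-1 (the key pair and security group must also exist in that region):
ec2-describe-images -o self --region us-west-1                 # confirm the copied AMI's new ID
ec2-run-instances ami-xxxxxxxx -n 1 -k USweastOregon -g launch-wizard-2 --monitor --region us-west-1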

getting large datasets onto amazon elastic map reduce

There are some large datasets (25gb+, downloadable on the Internet) that I want to play around with using Amazon EMR. Instead of downloading the datasets onto my own computer, and then re-uploading them onto Amazon, what's the best way to get the datasets onto Amazon?
Do I fire up an EC2 instance, download the datasets (using wget) from within the instance, upload them to S3, and then access S3 when I run my EMR jobs? (I haven't used Amazon's cloud infrastructure before, so I'm not sure if what I just said makes any sense.)
I recommend the following...
fire up your EMR cluster
elastic-mapreduce --create --alive --other-options-here
log on to the master node and download the data from there
wget http://blah/data
copy into HDFS
hadoop fs -copyFromLocal data /data
There's no real reason to put the original dataset through S3. If you want to keep the results you can move them into S3 before shutting down your cluster.
If the dataset is represented by multiple files you can use the cluster to download it in parallel across the machines. Let me know if this is the case and I'll walk you through it.
Mat
If you're just getting started and experimenting with EMR, I'm guessing you want these on s3 so you don't have to start an interactive Hadoop session (and instead use the EMR wizards via the AWS console).
The best way would be to start a micro instance in the same region as your S3 bucket, download to that machine using wget, and then use something like s3cmd (which you'll probably need to install on the instance). On Ubuntu:
wget -O dataset http://example.com/mydataset
sudo apt-get install s3cmd
s3cmd --configure
s3cmd put dataset s3://mybucket/
The reason you'll want your instance and S3 bucket in the same region is to avoid extra data transfer charges. Although you'll be charged for inbound bandwidth to the instance for the wget, the transfer to S3 will be free.
I'm not sure about it, but to me it seems like Hadoop should be able to download files directly from your sources.
Just enter http://blah/data as your input, and Hadoop should do the rest. It certainly works with S3, so why should it not work with HTTP?