Does Amazon EC2 get created automatically if I use S3?
I only use S3.

No, using S3 will not automatically create an Amazon EC2 instance, if that is what you are referring to. Can you clarify your question?

An AWS EC2 instance/server is different from S3.
If you use AWS S3 to upload, download, or store files, no EC2 servers will be launched.
You can access these files through the AWS console or through the AWS CLI on your local machine.
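For example, here is a minimal sketch of working with S3 objects from your local machine using the AWS CLI; the bucket name my-bucket and the file names are hypothetical. No EC2 instance is involved at any point.

# list the objects in a bucket
aws s3 ls s3://my-bucket/
# download one object to the current directory
aws s3 cp s3://my-bucket/report.pdf .
# upload a local file to the bucket
aws s3 cp ./report.pdf s3://my-bucket/report.pdf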

Related

AWS S3 file transfer from a non-AWS Linux server

Was wondering if anyone has a solution to transfer files from a non-AWS Linux Server A to an AWS S3 bucket location by using/running commands from a non-AWS Linux Server B? Is it possible to avoid doing two hops? The future plan is to automate the process on Server B.
New info:
I am able to upload files to S3 from Server A, such as:
aws s3 sync /path/files s3://bucket/folder
But I'm not sure how to run/execute it from a different Linux server (Server B)?
There are several steps to using the aws s3 sync command from any server that supports the AWS CLI, Linux or otherwise (a short end-to-end sketch follows these steps):

1. Enable programmatic access for the IAM user/account you will use with the AWS CLI and download the credentials.
docs: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console

2. Download and install the AWS CLI for your operating system. Instructions are available for Docker, Linux, macOS, and Windows.
docs: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

3. Configure your AWS credentials for the CLI, e.g. aws configure
docs: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html

4. Create the bucket you will sync to and allow your AWS user/identity access to this bucket.
docs: https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html

5. Run the aws s3 sync command according to the rules outlined in the official documentation, e.g. aws s3 sync myfile s3://mybucket
docs: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html#examples
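Put together, a minimal sketch of what you would run on Server B, once for setup and then on a schedule; the profile name, local path, and bucket are placeholders taken from the question:

# one-time setup on Server B: store the IAM user's access key, secret key, and region
aws configure --profile serverb-sync

# recurring upload, suitable for a cron job or systemd timer
aws s3 sync /path/files s3://bucket/folder --profile serverb-sync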

How to set up a volume linked to S3 in Docker Cloud with AWS?

I'm running my Play! webapp with Docker Cloud (could also use Rancher) and AWS and I'd like to store all the logs in S3 (via volume). Any ideas on how I could achieve that with minimal effort?
Use Docker volumes to store the logs on the host system.
Then use the AWS CLI's s3 sync command to sync the local log directory with an S3 bucket:
aws s3 sync /var/logs/container-logs s3://bucket/
Create a cron job to run it every minute or so (see the crontab sketch below).
Reference: s3 aws-cli
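As a rough sketch, assuming the AWS CLI is installed and credentials are configured for the user that owns the crontab, and reusing the hypothetical paths from above, the cron entry could look like this:

# edit with: crontab -e
# sync container logs to S3 every minute; adjust the aws path for your install (check with: which aws)
* * * * * /usr/local/bin/aws s3 sync /var/logs/container-logs s3://bucket/ >> /var/log/s3-log-sync.log 2>&1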

How to specify different AWS credentials for EMR and S3 when using MRJob

I can specify which AWS credentials to use to create an EMR cluster via environment variables. However, I would like to run a MapReduce job on another AWS user's S3 bucket, for which they gave me a different set of AWS credentials.
Does MRJob provide a way to do this, or would I have to copy the bucket using my account first so that the bucket and EMR keys are the same?

Web hosting on AWS - Where do we create the database and how to link S3 to EC2

I usually use web hosts with cPanel.
I'm trying a larger-scale website, so I thought I'd give AWS a try.
I've been googling and reading the documentation but still can't find a step-by-step guide to get my website live.
I've transferred all my files to S3.
What do I do with EC2?
How do I create a MySQL database?
Thank you for your help.
David
For setting up a MySQL database, there are two options:

1. Set up MySQL on your EC2 instance yourself. This is the same as you would do in a normal, non-AWS scenario.

2. Use Amazon RDS for the database. Launch an Amazon RDS instance and grant your EC2 security group access in the RDS security group, then access the new RDS instance by its endpoint name (see the connection sketch below). The advantage of an RDS instance is that you don't have to worry about backups and version upgrades.

For storage, attach an EBS volume to your EC2 instance and keep everything on it. EBS volumes can be attached on the fly, and your data persists even if the EC2 instance crashes. You can create snapshots of an EBS volume and store them in S3 for backups.
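To illustrate the RDS option, here is a minimal connection sketch from the EC2 instance once the security groups allow it; the endpoint, port, and username are hypothetical placeholders:

# install a MySQL-compatible client first (package name varies by distro), e.g. on Amazon Linux:
sudo yum install -y mysql
# connect using the endpoint shown in the RDS console
mysql -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u admin -p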
Regards,
Sanket Dangi

Hadoop upload files from local machine to amazon s3

I am working on a Java MapReduce app that has to provide an upload service for some pictures from the user's local machine to an S3 bucket.
The thing is, the app must run on an EC2 cluster, so I am not sure how I can refer to the local machine when copying the files. The method copyFromLocalFile(..) needs a path on the local machine, which will be the EC2 cluster...
I'm not sure if I stated the problem correctly; can anyone understand what I mean?
Thanks
You might also investigate s3distcp: http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_s3distcp.html
Apache DistCp is an open-source tool you can use to copy large amounts of data. DistCp uses MapReduce to copy in a distributed manner—sharing the copy, error handling, recovery, and reporting tasks across several servers. S3DistCp is an extension of DistCp that is optimized to work with Amazon Web Services, particularly Amazon Simple Storage Service (Amazon S3). Using S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into HDFS where it can be processed by your Amazon Elastic MapReduce (Amazon EMR) job flow. You can also use S3DistCp to copy data between Amazon S3 buckets or from HDFS to Amazon S3.
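As a rough sketch of what that looks like in practice, an S3DistCp run on the EMR master node copying pictures from S3 into HDFS might be invoked like this; the bucket and paths are hypothetical:

# s3-dist-cp ships with EMR; run it on the master node or add it as an EMR step
s3-dist-cp --src s3://my-bucket/pictures/ --dest hdfs:///input/pictures/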
You will need to get the files from the user's machine to at least one node before you will be able to use them in a MapReduce job.
The FileSystem and FileUtil functions refer to paths either on HDFS or on the local disk of one of the nodes in the cluster.
They cannot reference the user's local system (maybe with some SSH setup... maybe?).
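A rough sketch of the two-hop approach this implies, with hypothetical hostnames, paths, and bucket names; the first command runs on the user's machine, the rest on the cluster node:

# 1. copy the pictures from the user's machine to one cluster node
scp -r ~/pictures ec2-user@ec2-node-public-dns:/tmp/pictures

# 2. on that node, either push them into HDFS for the MapReduce job...
hadoop fs -copyFromLocal /tmp/pictures /user/hadoop/pictures
# ...or upload them straight to the S3 bucket with the AWS CLI
aws s3 cp /tmp/pictures s3://my-bucket/pictures/ --recursive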