Running AWS S3 locally to test saving of images

Hi everyone, we implemented AWS S3 for uploading images and for generating PDFs from those images. However, on our local testing server we cannot use AWS S3, so I am looking for ways to run S3 locally, maybe using Docker or other software.
Do you have any recommendations?
EDIT: I use Docker for the local DB and Tomcat for the local server

You can NOT run S3 itself locally.
What you can do is mimic the S3 API calls.
See https://github.com/spulec/moto
https://medium.com/@l.peppoloni/how-to-mock-s3-services-in-python-tests-dd5851842946
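
For illustration, here is a minimal Python sketch of the moto approach (the bucket and key names are made up; in moto 5+ the decorator is called mock_aws rather than mock_s3):

    import boto3
    from moto import mock_s3  # in moto >= 5.0 use: from moto import mock_aws

    @mock_s3
    def test_upload_image():
        # All S3 calls inside this function hit moto's in-memory fake, not real AWS.
        s3 = boto3.client("s3", region_name="us-east-1")
        s3.create_bucket(Bucket="my-test-bucket")
        s3.put_object(Bucket="my-test-bucket", Key="images/cat.png", Body=b"fake-bytes")
        body = s3.get_object(Bucket="my-test-bucket", Key="images/cat.png")["Body"].read()
        assert body == b"fake-bytes"

    test_upload_image()

Since your application itself runs on Tomcat, note that moto can also be run as a standalone server process, so a Java S3 client could be pointed at that local endpoint instead of the real S3 URL.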

Related

.NET Core not able to connect to AWS S3 bucket

I have .NET Core code that writes a file to an AWS S3 bucket. The code is called from Hangfire as a job. It works fine on my machine, but when I deploy the solution to Cloud Foundry as a Docker container, the code throws a "connection refused" error at the line where it connects to AWS. The exception doesn't carry much information.
I did set up the proxy environment in the Dockerfile, and I can verify the proxy by SSHing into the container on Cloud Foundry. When I run a curl command against the AWS S3 bucket from that shell, it returns an "Access Denied" error, which confirms that the Docker container has no firewall issue reaching AWS.
I am not able to figure out why the Hangfire job can't connect to AWS. Any suggestions?
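
One thing worth checking is whether the Hangfire worker process actually inherits the proxy environment variables verified over SSH. As a rough illustration only (Python/boto3 shown, with a placeholder proxy URL and bucket; the .NET SDK exposes an equivalent proxy setting on its client configuration), the proxy can be pinned explicitly on the S3 client instead of being read from the environment:

    import boto3
    from botocore.config import Config

    # Placeholder proxy address; use the same proxy configured in the Dockerfile.
    proxy_config = Config(proxies={"https": "http://proxy.example.com:8080"})

    s3 = boto3.client("s3", region_name="us-east-1", config=proxy_config)

    # Placeholder bucket/key; if this call succeeds, the proxy path works for the SDK.
    s3.put_object(Bucket="my-bucket", Key="healthcheck.txt", Body=b"ok")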

AWS S3 and AWS ELB instead of AWS Elastic Beanstalk for SPA Angular 6 application

I am creating an Angular 6 frontend application. My backend APIs are written in .NET. Assume the application is similar to https://www.amazon.com/.
My question relates only to deploying the frontend portion on AWS. A large number of users, with a variable traffic pattern, is expected on my portal. I thought of using AWS Elastic Beanstalk as a PaaS web server.
Can AWS S3/ELB be used instead of PaaS Beanstalk without any limitations?
I'm not 100% sure what you mean by combining an Elastic Load Balancer with S3. I think you may be confused about the purpose of the ELB, which is to distribute requests across multiple servers (e.g. Node.js servers); it cannot be used with S3, which is already highly available.
There are numerous options when serving an Angular app:
You could serve the files using a Node.js app, but unless you are doing server-side rendering (using Angular Universal), I don't see the point, because you are just serving static files (files that don't get stitched together by a server, as they would with PHP). It is more complicated to deploy and maintain a server, even using Elastic Beanstalk, and it is probably difficult to match the performance you can get with the other setups below.
What I suspect most people would do is configure an S3 bucket to host and serve the static files of your Angular app (https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html). You basically configure your domain name to resolve to the S3 bucket's URL. This is extremely cheap, as you are not paying for a server running constantly; you only pay the small storage cost plus a data transfer fee that is directly proportional to your traffic.
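As a rough boto3 sketch of that S3 static-hosting setup (the bucket name is a placeholder, and pointing the error document at index.html is an SPA convention so deep links reach the Angular router, not something required by S3):

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    bucket = "my-angular-app-bucket"  # placeholder; must match your domain setup

    s3.create_bucket(Bucket=bucket)

    # Enable static website hosting on the bucket.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "index.html"},
        },
    )

    # Upload the built app (one file shown for brevity).
    s3.upload_file("dist/index.html", bucket, "index.html",
                   ExtraArgs={"ContentType": "text/html"})

You would also need a bucket policy allowing public reads of the objects, which is omitted here.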
You can further improve on the S3 setup by creating a CloudFront distribution that uses your S3 bucket as its origin (the location it gets files from). When you configure your domain name to resolve to your CloudFront distribution, a user's request no longer goes to the S3 bucket (which could be in a region on the other side of the world, and therefore slower); instead it is directed to the closest "edge location", which will be much nearer to your user and which serves files from its cache when possible. It is basically a global content delivery network for your files. This is a bit more expensive than S3 on its own. See https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/.
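
And a similarly rough sketch of putting CloudFront in front of that bucket (all names are placeholders; it uses the legacy ForwardedValues cache settings to keep the example short):

    import time
    import boto3

    cf = boto3.client("cloudfront")
    bucket_domain = "my-angular-app-bucket.s3.amazonaws.com"  # placeholder REST endpoint

    dist = cf.create_distribution(
        DistributionConfig={
            "CallerReference": str(time.time()),  # any unique string
            "Comment": "Angular SPA distribution",
            "Enabled": True,
            "DefaultRootObject": "index.html",
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "s3-origin",
                    "DomainName": bucket_domain,
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "s3-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "TrustedSigners": {"Enabled": False, "Quantity": 0},
                "ForwardedValues": {"QueryString": False,
                                    "Cookies": {"Forward": "none"}},
                "MinTTL": 0,
            },
        },
    )
    print(dist["Distribution"]["DomainName"])  # point your DNS at this domain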

Access S3 in cron job in docker on Elastic Beanstalk

I have a cron job in a Docker image that I deploy to Elastic Beanstalk. In that job I want to perform read and write operations on S3, and I have included the AWS CLI tools for that purpose.
But the AWS CLI isn't very useful without credentials. How can I securely include the AWS credentials in the Docker image so that the AWS CLI will work? Or should I take some other approach?
Always try to avoid setting credentials on machines if they run within AWS.
Do the following:
Go into the IAM console and create an IAM role, then edit the policy of that role to grant the appropriate S3 read/write permissions.
Then go to the Elastic Beanstalk console, find your environment and go to the configuration/instances section. Set the "instance profile" to use the role you created (a profile is associated with a role; you can see it in the IAM console when you're viewing the role).
This means that each Beanstalk EC2 instance will have the permissions you set in the IAM role (the AWS CLI automatically uses the instance profile of the current machine when one is available).
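If you prefer scripting those console steps, a rough boto3 sketch looks like this (role, profile, and bucket names are placeholders; you still select the resulting instance profile in the Beanstalk console or via your deployment tooling):

    import json
    import boto3

    iam = boto3.client("iam")
    role_name = "beanstalk-s3-role"          # placeholder
    profile_name = "beanstalk-s3-profile"    # placeholder

    # Trust policy so EC2 instances (the Beanstalk hosts) can assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(RoleName=role_name,
                    AssumeRolePolicyDocument=json.dumps(trust_policy))

    # Inline policy granting read/write on one bucket only.
    s3_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        }],
    }
    iam.put_role_policy(RoleName=role_name, PolicyName="s3-read-write",
                        PolicyDocument=json.dumps(s3_policy))

    # The instance profile is what Beanstalk/EC2 actually attaches.
    iam.create_instance_profile(InstanceProfileName=profile_name)
    iam.add_role_to_instance_profile(InstanceProfileName=profile_name,
                                     RoleName=role_name)

Once the instances run under that profile, the AWS CLI inside the container needs no credentials at all; it picks up temporary ones from the instance metadata automatically.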
More info:
http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html#use-roles
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html

Secure way to access Amazon S3

I have been using the Transmit FTP program to access my Amazon S3 storage buckets. I've just been reading on here that this isn't that secure.
I'm not a command-line person, as you can probably tell, so what would be the best way for me to access my S3 storage on my Mac?
I'm using it to store image files that I make available for download on my website.
Thanks
FTP isn't secure, but it sounds like you are confusing that fact with the fact that you are using a multiprotocol client to access S3, a client that also happens to support FTP.
S3 cannot be accessed directly over the FTP protocol, so the client you are using can't actually be talking to S3 via FTP... hence, your security concern appears to be unfounded.

404 redirect with cloud storage

I'm hoping to reach someone with experience using a service like Amazon's S3 with this question. On my site we have a dedicated image server, and on that server we have an automatic 404 redirect configured through Apache so that, if a user tries to access an image that doesn't exist, they'll see a snazzy "Image Not Available" image.
We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's Cloud Files), and I'm wondering whether anyone has had success replicating this behavior on a cloud storage service and, if so, how they did it.
The Amazon instances are just like normal hosted server instances once they are up and running, so your Apache configuration could presumably be identical to what you currently have.
Your only issue will be where to store the images. The new Amazon Elastic Block Store makes it easy to mount a drive whose data is backed up to S3 via snapshots. You could store all your images on such a volume and use it with your Apache instance.