I'm planning to use Next.js SSR/SSG/ISR on Amazon EC2, store images in an S3 bucket, and add a CloudFront CDN on top of it.
The question is:
Should I cache images from S3 in Next.js (which runs on EC2), thus "doubling" the images (originals in S3, optimised copies in the Next.js cache on EC2), or does that make no sense, since everything is located within one cloud (AWS) and covered by a CDN layer (CloudFront)?
Or is there a way to move the Next.js caching to CloudFront?
I do understand that next/image provides image optimisation (different sizes and quality), but I'm bothered by "doubling" the images and thus paying more for storage.
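For reference, the image-related part of my next.config would look roughly like this (bucket and region names are placeholders):

```ts
// next.config.ts — a rough sketch (Next.js 15 style); bucket and region are placeholders
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  images: {
    // Let next/image optimise images served from the S3 bucket
    remotePatterns: [
      {
        protocol: "https",
        hostname: "my-image-bucket.s3.eu-central-1.amazonaws.com",
        pathname: "/**",
      },
    ],
    // How long an optimised copy stays in the Next.js cache on EC2 (seconds)
    minimumCacheTTL: 60 * 60 * 24,
  },
};

export default nextConfig;
```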
P.S. I've seen this question; I'm just not experienced with Lambda, so I'm currently looking for something I already understand.
CloudFront gives you the option to have a different origin for different behaviours, and you can also apply a different cache policy per behaviour. What you can do is have a behaviour for /images which goes to S3, while the default behaviour points to the EC2 origin.
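As an illustration, roughly how that could look in the AWS CDK (TypeScript); the CDK itself, the origin domain, and the exact path pattern are my assumptions, not part of the answer:

```ts
// A rough AWS CDK sketch of the two-origin setup described above;
// the stack layout, domain name, and path pattern are placeholders.
import * as cdk from "aws-cdk-lib";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as s3 from "aws-cdk-lib/aws-s3";

const app = new cdk.App();
const stack = new cdk.Stack(app, "CdnStack");

const imagesBucket = new s3.Bucket(stack, "ImagesBucket");

new cloudfront.Distribution(stack, "SiteDistribution", {
  // Default behaviour: everything goes to the Next.js server on EC2
  defaultBehavior: {
    origin: new origins.HttpOrigin("nextjs.example.com", {
      protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY, // if EC2 only serves HTTP
    }),
    cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
  },
  additionalBehaviors: {
    // /images/* is served straight from S3 with long-lived caching
    "/images/*": {
      origin: new origins.S3Origin(imagesBucket),
      cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
    },
  },
});
```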
I want to append my pre-signed URL to a CloudFront URL and use that instead.
Any idea how to achieve this?
Use an Amazon CloudFront Signed URL instead of attempting to use an Amazon S3 pre-signed URL with CloudFront.
See: Using Signed URLs - Amazon CloudFront
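A minimal sketch of generating such a signed URL with the AWS SDK for JavaScript v3 (@aws-sdk/cloudfront-signer); the domain, key pair ID, and key file are placeholders:

```ts
// A minimal sketch with @aws-sdk/cloudfront-signer; domain, key pair ID,
// and the private key path are placeholders.
import { readFileSync } from "node:fs";
import { getSignedUrl } from "@aws-sdk/cloudfront-signer";

const signedUrl = getSignedUrl({
  url: "https://d111111abcdef8.cloudfront.net/private/photo.jpg",
  keyPairId: "K2JCJMDEHXQW5F", // ID of the public key registered with CloudFront
  privateKey: readFileSync("./cloudfront_private_key.pem", "utf8"),
  dateLessThan: new Date(Date.now() + 60 * 60 * 1000).toISOString(), // expires in 1 hour
});

console.log(signedUrl);
```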
I find the question relevant; it matches my needs. I have files stored in S3 Singapore and external consumers in Europe. AWS default bandwidth quality is quite poor (it takes several minutes to download a 50 MB file for quite a few of my end users), so I'd like to optimize their network path through a layer of "dumb" CDN (not leveraging any caching, just using it for better-quality network paths).
Turns out "Amazon S3 Transfer Acceleration" does exactly that:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
============
Why Use Amazon S3 Transfer Acceleration?
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
You have customers that upload to a centralized bucket from all over the world.
You transfer gigabytes to terabytes of data on a regular basis across continents.
You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon S3.
Getting Started with Amazon S3 Transfer Acceleration
To get started using Amazon S3 Transfer Acceleration, perform the following steps:
Enable Transfer Acceleration on a bucket
Transfer data to and from the acceleration-enabled bucket by using one of the following s3-accelerate endpoint domain names:
bucketname.s3-accelerate.amazonaws.com – to access an acceleration-enabled bucket.
============
Remarks:
It's more expensive than S3 + Cloudfront. You pay normal S3 bandwidth + something like 0.04 USD / GB for the acceleration (whereas when using Cloudfront, the S3 <> Cloudfront bandwidth is free)
You will probably need to re-sign the URLs. Usually the host is part of the signature, and acceleration requires using a different host. However, this is just normal S3 signing, not the completely different Cloudfront signing.
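For the re-signing part, a minimal sketch using the AWS SDK for JavaScript v3, where bucket, key, and region are placeholders:

```ts
// A minimal sketch of re-signing against the accelerate endpoint with
// AWS SDK v3; bucket, key, and region are placeholders.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({
  region: "ap-southeast-1",
  useAccelerateEndpoint: true, // signs for bucketname.s3-accelerate.amazonaws.com
});

const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "my-bucket", Key: "files/report.zip" }),
  { expiresIn: 3600 } // seconds
);

console.log(url);
```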
We have a bunch of images that are loaded on Amazon S3. Right now we directly call these images with the S3 URL. I would like to install Cloudfront CDN and mod_pagespeed to resize these images and optimize them. The web server itself isn't hosted on Amazon at all.
How can I get Cloudfront to cache the mod_pagespeed's resized images? My idea was to spin up an EC2 instance and use it as a reverse proxy to S3. This EC2 instance would have mod_pagespeed installed. So it would go Cloudfront -> EC2 proxy -> S3. So far I haven't been able to get this to resize the images. It all works, just not the mod_pagespeed part.
We don't want the images to be pulled out of Amazon to the web server, as that would waste a lot of bandwidth. I want the images from S3 to be resized either on the new EC2 instance or some other way inside of Amazon.
Anyone have any recommendations?
Take a look at AWS Lambda - there are a number of examples of doing exactly what you're trying to do with it.
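For instance, a resize function in Lambda might look roughly like this; the sharp library, the bucket name, and the event shape are my assumptions, not anything the answer specifies:

```ts
// A rough sketch of a resizing Lambda, assuming the "sharp" library and a
// simple event shape; the bucket name is a placeholder.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import sharp from "sharp";

const s3 = new S3Client({});
const BUCKET = "my-image-bucket";

export const handler = async (event: { key: string; width?: number }) => {
  // Fetch the original image from S3
  const original = await s3.send(
    new GetObjectCommand({ Bucket: BUCKET, Key: event.key })
  );
  const input = Buffer.from(await original.Body!.transformToByteArray());

  // Resize and re-encode
  const resized = await sharp(input)
    .resize({ width: event.width ?? 800 })
    .jpeg({ quality: 80 })
    .toBuffer();

  return {
    statusCode: 200,
    headers: { "Content-Type": "image/jpeg" },
    body: resized.toString("base64"),
    isBase64Encoded: true,
  };
};
```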
I've read a lot of articles stating that I should be using Amazon S3 in conjunction with the CDN Cloudfront. I'm currently not doing this. I'm simply using Cloudfront with my standard shared hosting package.
Is it OK to use CloudFront on its own with my standard shared hosting package? Surely there is no added benefit to using S3 as well, since the files are already served from CloudFront.
Any enlightenment on this is much appreciated.
Leigh
S3 allows you to do things like static web hosting, with logging and redirection, e.g. www.example.com redirects to example.com. You can then use CloudFront to place your assets as close to the end user as possible (the "nearest edge location"). An excellent guide on how to do this is in the AWS docs. Two other points: S3 supports HTTPS, and changes to files in S3 are reflected instantly. Because CloudFront is a CDN, you have to manually expire files if you change them, otherwise it could take up to 24 hours for your changes to be reflected.
http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html
A quick comparison between the two is given here:
http://www.bucketexplorer.com/documentation/cloudfront--amazon-s3-vs-amazon-cloudfront.html
There is no problem with using CloudFront against your own origin server compared to an S3 origin.
There are some benefits of using S3:
Data transfer is faster between S3 and CloudFront
You don't need to worry about the stability and maintenance of the origin server
Multiple origin regions
There are also benefits if you use your own server:
Saving the cost of S3 hosting (this depends on whether you need to pay for your own server anyway)
Easy for customization should you need it
Data storage location for company/country regulation
So it all depends on your specific circumstances, such as how much you pay for your hosting package, whether you need low-level configuration of your origin server, and how sensitive your data is.
I would say that for the majority of small/medium projects, S3 is a perfect place to store data.
Our current plan for a site is to use Amazon's Cloudfront service as a CDN for asset files such as CSS, JavaScript, and Images, and any other static files.
We currently have 1 bucket in S3 that contains all of these static files. The files are separated into different folders depending on what they are, "Scripts" are JS files, "Images" are Images, etc yadda yadda yadda.
So, what I didn't realize from the start was that once you deploy a bucket from S3 to a CloudFront distribution, subsequent updates to the bucket don't automatically propagate to that same distribution. So, it looks as if you have to redeploy the bucket to another CloudFront distribution every time you have a static file update.
That's fine for images, because we can easily make sure that if there is a change to an image, then we just create a new image. But, that's difficult to do for CSS and JS.
So, that gets me to the Best Practice questions:
Is it best practice to create another CloudFront distribution for every production deployment? The problem here would be that it causes trouble with CNAME records.
Is it best practice to NOT warehouse CSS and JS in Cloudfront because of the nature of those files, and their need to be easily modified? Seems like the answer to this would be NO because that's the purpose of a CDN.
Is there some other method with Cloudfront that I don't know about?
You can issue invalidation requests to CloudFront.
http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
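For example, an invalidation can be issued programmatically; a minimal sketch with the AWS SDK for JavaScript v3, where the distribution ID and paths are placeholders:

```ts
// A minimal sketch of issuing an invalidation with AWS SDK v3;
// the distribution ID and paths are placeholders.
import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";

const cloudfront = new CloudFrontClient({});

await cloudfront.send(
  new CreateInvalidationCommand({
    DistributionId: "E1ABCDEFGHIJKL",
    InvalidationBatch: {
      CallerReference: Date.now().toString(), // must be unique per request
      Paths: { Quantity: 2, Items: ["/css/style.css", "/js/app.js"] },
    },
  })
);
```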
Instead of an S3 bucket, though, we use our own server as a custom origin. We have .htaccess alias style_*.css to style.css, and we inject the file modification time for style.css in the HTML. As CloudFront sees a totally different URL, it'll fetch the new version.
(Note: Some CDNs let you do that via query string, but CloudFront ignores all query string data for caching, hence the .htaccess solution.)
edit: CloudFront can be (optionally) configured to use query strings now.
CloudFront has started supporting query strings, which you can use to invalidate cache.
http://aws.typepad.com/aws/2012/05/amazon-cloudfront-support-for-dynamic-content.html
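With query strings in the cache key, the modification-time trick mentioned above can be done without the .htaccess rewrite; a minimal Node sketch, where the CDN host and asset directory are placeholders:

```ts
// A minimal Node sketch of the "inject the file modification time" idea,
// now done via a query string; CDN host and asset directory are placeholders.
import { statSync } from "node:fs";
import { join } from "node:path";

const CDN_HOST = "https://d111111abcdef8.cloudfront.net";
const ASSET_ROOT = "./public";

// Returns e.g. https://d111111abcdef8.cloudfront.net/css/style.css?v=1717171717
function assetUrl(relativePath: string): string {
  const mtime = Math.floor(statSync(join(ASSET_ROOT, relativePath)).mtimeMs / 1000);
  return `${CDN_HOST}/${relativePath}?v=${mtime}`;
}

console.log(assetUrl("css/style.css"));
```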
We want to be able to have a folder that can securely serve images across a cluster of web servers. What's the best way to handle this with Amazon Web Services (AWS)? Amazon S3? Amazon Elastic Block Store (EBS)? Amazon Cloudfront?
EDIT: Answer no longer needed...thanks.
I'm not sure what your main goal is, or whether you have read about the services you ask about, but I will try to explain them as far as I understand AWS and your choices:
S3 is STORAGE (buckets and objects, a sort of folder structure with metadata and access control)
EBS is a VOLUME (attached to an EC2 instance as an extra drive you can access like a local hard drive)
CloudFront is a WEB CACHE (you point it at an S3 bucket and Amazon replicates the content to edge locations close to your users for you)
So we only need to figure out what you mean by "securely" as there are two options as I see it:
You can protect buckets in S3 or set up access levels with accounts, e.g. "administrator access" only versus publicly readable...
You can store the data in an EBS volume and keep it there; then it is very secure and NOT public, but shareable (I believe) among the servers (I plan to check this out myself within the next week)
You cannot protect CloudFront data separately, as it's controlled by the bucket permissions from S3...
Hope you can use this a little. I've not stated anything regarding SPEED nor COST, that's for you to benchmark/test with your data requirements. :o)