I'm serving large images and resizing them on the client. Today I found that PageSpeed can resize the image and serve the resized version to the client.
That sounded great, but it isn't that easy to set up.
Here's what happens now:
1. The client requests a page.
2. The image URL is provided via AJAX or JSON inside the initial page response.
3. The image URL points to AWS S3 (or I could point it at my own server and proxy the image).
How do I make it work as follows?
1. The client requests a page.
2. AJAX, as above.
3. The image URL points to my server.
4. Upon the image request, my server fetches the image from S3.
5. It performs the optimization as if the image were local (resize, cache).
6. It serves the result to the user.
PageSpeed and CDN images
Google mod_pagespeed for ajax loaded content
https://superuser.com/questions/768040/https-proxy-s3-aws-via-nginx-with-pagespeed
https://code.google.com/p/modpagespeed/issues/detail?id=599
These seem to point to an answer, but it's hard to get my head around.
All you need is an Nginx S3 proxy and the Nginx image filter module: http://nginx.org/en/docs/http/ngx_http_image_filter_module.html
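If it helps to see the fetch/resize/cache flow from steps 4-6 in application code rather than Nginx configuration, here is a minimal Python sketch (Flask + boto3 + Pillow); the bucket name, cache directory, and route are assumptions, not part of either setup above:

```python
# A minimal sketch of steps 4-6, assuming Flask, boto3 and Pillow; the bucket
# name, cache directory and resize width parameter are placeholders.
import io
import os

import boto3
from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "my-image-bucket"       # assumption: your S3 bucket
CACHE_DIR = "/var/cache/images"  # assumption: local cache directory

@app.route("/img/<path:key>")
def resized_image(key):
    width = int(request.args.get("w", 800))
    cache_path = os.path.join(CACHE_DIR, f"{width}-{key.replace('/', '_')}.jpg")
    if not os.path.exists(cache_path):
        # 4. fetch the original object from S3
        obj = s3.get_object(Bucket=BUCKET, Key=key)
        img = Image.open(io.BytesIO(obj["Body"].read()))
        # 5. resize and cache locally
        img.thumbnail((width, width))
        img.convert("RGB").save(cache_path, format="JPEG")
    # 6. serve the cached copy with browser caching enabled (Flask >= 2.0)
    return send_file(cache_path, mimetype="image/jpeg", max_age=86400)
```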
Related
I have an S3 bucket used for media file storage, most importantly mp4 videos, with a web front end on top of it. (There's no CloudFront involved, and I don't want to sign up for it because of the extra cost.) When a browser hits the raw URL of an mp4 video, the default behavior is to play the file on click (either in place or in a new tab, depending on the target HTML tag). You can then download the video from the native browser player, but that takes many clicks, and with sometimes 40 videos in a playlist it would be a huge productivity hit compared to the videos simply downloading.
So, understandably, a user need emerged for a download button that downloads the video instead of playing it. After hours of research I stumbled upon StreamSaver.js, which works great, and I can even give the downloaded video a meaningful name (athlete name, position, jersey #, etc.) instead of the GUIDs. But there's a downside: I hit CORS policy errors like Access to fetch at 'https://sportsboardmedia.s3.amazonaws.com/uploads/video/{GUID}_{GUID}.mp4' from origin 'https://app.sportsboard.io' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
My bucket has been publicly accessible from the get-go. After some research I switched the bucket over to static website hosting mode (as per this SO entry), and I also applied a CORS policy advised by this SO entry.
The current situation is that some video files are downloadable now, but other files (in the same bucket) still throw CORS errors. How is that even possible? As far as I know, the CORS policy is supposed to apply to the whole bucket, and every file should be treated the same from a CORS point of view. Where do I even start to fix this?
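For reference, the policy I applied is roughly equivalent to this boto3 call (the bucket name and exact values here are stand-ins, not a copy of my configuration):

```python
# A sketch of a bucket-wide CORS rule applied with boto3; the origin comes from
# the error message above, the bucket name and other values are assumptions.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="sportsboardmedia",  # assumption: bucket name
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://app.sportsboard.io"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "ExposeHeaders": ["Content-Length", "Content-Type"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```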
I applied this CORS policy a few days ago, and my bucket contains about 5 TB of videos; I think that's not a big deal for Amazon. Here is an example athlete locker where some of the videos throw CORS errors: https://app.sportsboard.io/playerlocker/media/event/6272C5BE-CE03-4344-BED1-29369306C831/5e267f70-f1de-11e7-b8d3-f23c91087063/videos/
I opened an AWS Forum post as well, but all I hear there is crickets too: https://forums.aws.amazon.co/thread.jspa?messageID=991094#991094
Now I opened an AWS IQ request as well: https://iq.aws.amazon.com/p/N3SFMRLKRH
I recorded a video too: https://youtu.be/Dy38JRI5-oU
Maybe I should record a TikTok video as well???
I'm trying to add a test coverage badge to the Readme of a private repository on GitHub. Our continuous integration process saves the image to a secured Google Cloud Storage bucket that isn't accessible to the public, and it should remain that way.
Google's authorization layer is smart enough that if I go to the URL for the image, I'm automatically redirected to the resource with a valid auto-generated signed URL.
E.g., if I go to http://storage.cloud.google.com/secret-files/mysecretfile.png, then if I'm logged in and allowed to view it, I'm automatically redirected to something like https://blahblah-apidata.googleusercontent.com/download/storage/v1/b/secret-files/o/mysecretfile.png?key=verylongkey, where I can load the image.
This seemed perfect. Reference the canonical path in the GitHub Readme, authenticated users see the image, unauthenticated users are still blocked, we don't have to make the file public, and we don't have to do anything complicated.
Except that GitHub is proxying the image request, meaning that it will always be unauthenticated. My browser is loading something like https://camo.githubusercontent.com/mysecretimage.png.
Is there a clever way to work around this? Or do I need to go back to the drawing board?
All images on github.com are proxied through the Camo image proxy. There are a couple of reasons for this:
It preserves the privacy of users. It isn't possible for a document to track users by directing them to a different site or using cookies to track them.
It means images can be cached and served at an appropriate size.
GitHub can have a very strict content security policy that does not allow loading from untrusted sites, which means that any sort of accidental security problem (like an XSS) is a lot less likely to work.
Note the last part. Even if you found some sneaky way to get another image URL to render properly in the website, your browser wouldn't load it because it violates the Content-Security-Policy header the site sent, and moreover, your browser would tattle about that to the reporting URL that GitHub provided.
So any image URL you provide will need to be readable by GitHub's image proxy and it won't be possible to serve different content to different users.
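One practical consequence: a quick way to check whether a badge URL can work behind Camo is to fetch it with no credentials at all. A small Python sketch, where the public-style object URL is only a placeholder built from the bucket and file names in the question:

```python
# If an anonymous GET doesn't return 200, Camo won't be able to serve the image.
import requests

resp = requests.get(
    "https://storage.googleapis.com/secret-files/mysecretfile.png",  # placeholder URL
    allow_redirects=True,
    timeout=10,
)
print(resp.status_code)
```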
I have a static website that is currently hosted on Apache servers. I have an Akamai configuration that routes requests for my site to those servers. I want to move the static website to Amazon S3 so I no longer have to host those static files on my own servers.
I created an S3 bucket in Amazon and gave it the appropriate policies. I also set up the bucket for static website hosting. It told me I can access the site at
http://my-site.s3-website-us-east-1.amazonaws.com
I modified my Akamai property to point to this URL as my origin server. When I go to my website, I get HTTP 504 errors.
What am I missing here?
S3 buckets don't support HTTPS?
Buckets support HTTPS, but not directly in conjunction with the static web site hosting feature.
See Website Endpoints in the S3 Developer Guide for discussion of the feature set differences between the REST endpoints and the web site hosting endpoints.
Note that if you try to connect to your website hosting endpoint over HTTPS directly in your browser, you will get a timeout error.
The REST endpoint https://your-bucket.s3.amazonaws.com will work for providing HTTPS between the bucket and the CDN, as long as there are no dots in the name of your bucket.
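To make the difference concrete, here is a small sketch using the bucket name from the question (a dot-free bucket name is assumed):

```python
# Illustration of the HTTPS behavior described above; "my-site" is the bucket
# name from the question and is only a placeholder.
import requests

# REST endpoint: the TLS connection succeeds (you may still get 403 without
# public read access, but you get an answer).
r = requests.get("https://my-site.s3.amazonaws.com/index.html", timeout=5)
print(r.status_code)

# Website hosting endpoint: there is no HTTPS listener, so this times out.
try:
    requests.get(
        "https://my-site.s3-website-us-east-1.amazonaws.com/index.html", timeout=5
    )
except requests.exceptions.RequestException as exc:
    print("HTTPS to the website endpoint failed:", exc)
```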
Or, if you need the website hosting features (index documents and redirects), you can place CloudFront between Akamai and S3, so the traffic is encrypted inside CloudFront as it leaves the AWS network on its way to Akamai (it would still be in the clear from S3 to CloudFront, but that is internal traffic on the AWS network). CloudFront automatically provides HTTPS support on the dddexample.cloudfront.net hostname it assigns to each distribution.
I admit it sounds a bit silly at first to put CloudFront behind another CDN, but it's actually quite sensible: CloudFront was designed in part to augment the capabilities of S3. CloudFront also provides Lambda@Edge, which allows injection of logic at four trigger points in the request processing cycle (before and after the CloudFront cache, during the request and during the response), where you can modify request and response headers, generate dynamic responses, and make external network requests if needed to implement processing logic.
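As a flavor of what that looks like, here is a minimal sketch of a Python Lambda@Edge handler attached to the origin-response trigger; the header and max-age value are arbitrary examples, not part of the setup described above:

```python
# A minimal Lambda@Edge origin-response handler that adds a Cache-Control
# header to every response; the max-age value is an arbitrary example.
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    response["headers"]["cache-control"] = [
        {"key": "Cache-Control", "value": "public, max-age=86400"}
    ]
    return response
```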
I faced this problem recently, and as mentioned by Michael - sqlbot, putting CloudFront between Akamai and the S3 bucket could be a workaround, but then you're running one CDN behind another. I strongly recommend configuring the redirects, and customizing the response on origin errors, directly in Akamai (using the REST API endpoint of your bucket). You'll need to create three rules. First, go to CDN > Properties and select your property, click Edit New Version based on the last one, and click Add Rule in the Property Configuration Settings section. The first rule is responsible for redirecting empty paths to index.html; create it just like the image below:
builtin.AK_PATH is an Akamai variable. The next rule is responsible for redirecting paths other than the static ones (html, ico, json, js, css, jpg, png, gif, etc.) to /index.html:
The last rule is responsible for customizing the error response when the origin returns an HTTP error code (much like CloudFront's Error Pages). When the origin returns a 404 or 403 HTTP status code, Akamai calls the Failover Hostname Edge Server (which is inside the Akamai network) with the /index.html path. This setup is triggered when refreshing pages in the browser and when the application follows redirection links (which open new tabs, for example). In the Property Hostnames section, add a new hostname that will act as the Failover Hostname Edge Server; the name should have fewer than 16 characters, and you then add the -a.akamaihd.net suffix to it (that's the Akamai pattern). For example: failover-a.akamaihd.net:
Finally, create a new empty rule just like the image below (enter the hostname you just created in the Alternate Hostname in This Property section):
Since you are already using Akamai as a CDN, you could simply use their NetStorage product line to achieve this in a simplified manner.
All you would need to do is move the content from S3 to Akamai, and NetStorage would take care of the rest (hosting, distribution, scaling, security, redundancy).
The origin settings in the Luna control panel could simply point to the NetStorage FTP location. This also removes the network latency otherwise present when accessing the S3 bucket from the Akamai network.
I have the following requirements for a (Rails) web application that uses S3/CloudFront for image storage:
A user may only see an image if they are logged in. If the user sends an image URL to a friend, it will not work.
If a user has seen an image, it should be cached by their browser, so they don't have to download it again.
…
Requirement 1 can be solved with S3's Query String Authentication (QSA), e.g. with a 30-second expiry.
Requirement 2 can be solved using HTTP caching.
Is it possible to use them both together?
The challenge I'm facing is that QSA effectively changes the URL of the image after expiry, even though a perfectly good copy may reside in the browser cache.
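For context, requirement 1 looks roughly like this in boto3 (the bucket and key are placeholders; the 30-second expiry is the one mentioned above):

```python
# A sketch of generating a short-lived presigned (QSA) URL with boto3.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-images", "Key": "photos/1234.jpg"},  # placeholders
    ExpiresIn=30,  # the 30-second expiry from the question
)
```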
I have an image fetched "dynamically" from an Amazon S3 bucket; here is an example source:
src = http://s3.amazonaws.com/kidslink_assets/logos/5082f5d279216d14d000001e/original.png?1350759890
I need this to be an HTTPS request.
If I understand S3 correctly, S3 always serves responses over HTTPS as well. I tried to access your image over HTTP, and it redirects to HTTPS by default (https://s3.amazonaws.com/kidslink_assets/logos/5082f5d279216d14d000001e/original.png?1350759890).
So you don't need to worry about HTTPS/SSL.
Since the client needs to see it as an HTTPS request, I added the "s" manually when the page renders the image; that is, I changed the link from http to https before rendering the page, since both serve the same content.
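The "add the s manually" step is just a string rewrite before the page is rendered; a tiny Python sketch using the URL from the question:

```python
# Rewrite the scheme before rendering the image tag.
src = "http://s3.amazonaws.com/kidslink_assets/logos/5082f5d279216d14d000001e/original.png?1350759890"
secure_src = src.replace("http://", "https://", 1)
```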