How to cache on-the-fly generated images with Remix on Vercel?

I'm trying to cache an on-the-fly generated image in Vercel's edge cache.
I've created an API route in my Remix project (/routes/api/image.ts) with a loader that returns an image and a cache-control header (s-maxage=1, stale-while-revalidate). Everything looks fine while running in dev mode.
But running this on Vercel just gives me MISSes.
Other content routes that return HTML or JSON are served from the cache (I've set the same cache-control header there).
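For reference, the loader looks roughly like this (a minimal sketch; the image generator is a hypothetical placeholder, not my actual code):

// app/routes/api/image.ts
import type { LoaderFunctionArgs } from "@remix-run/node";

// Hypothetical stand-in: a real implementation would render the image.
async function generateImage(_request: Request): Promise<Uint8Array> {
  return new Uint8Array([]);
}

export async function loader({ request }: LoaderFunctionArgs) {
  const image = await generateImage(request);
  return new Response(image, {
    headers: {
      "Content-Type": "image/png",
      // Cache at the edge for 1s, then serve stale while revalidating.
      "Cache-Control": "s-maxage=1, stale-while-revalidate",
    },
  });
}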

Related

PWA Caching Issue

I have a PWA which has been developed in ASP.NET Core and is hosted on an Azure App Service (Linux).
When a new version of the PWA was released, I found that devices failed to update without clearing the browser cache.
To resolve this, I discovered a tag helper called asp-append-version that busts the cache for a specific file. I also discovered that I can append a version to the src attribute that specifies the URL of a file, to trigger the browser to retrieve the latest file. For example, src="/scripts/pwa.js?v=1". Each time I update the pwa.js file I also change the version, i.e. v=2.
I've now discovered that my PWA is caching other JavaScript files in my application, which results in the app not working on devices that have updated to the new version but failed to clear the cache for specific files.
I believed that if I didn't specify any cache-control headers, the browser would not cache any files; however, this appears not to be the case.
To resolve this issue, is the recommended approach to add the appropriate headers (Cache-Control, Pragma, and Expires) to prevent browser caching, or should I only add the asp-append-version tag helper to, for example, script tags to automatically bust the cache for those specific files?
I would prefer the browser to store, for example, images rather than going to the server each time to retrieve them. I believe setting the header Cache-Control: no-cache would work, as this checks whether the file has changed before retrieving an updated version?
Thanks.
Thanks @SteveSandersonMS for your insights: if your web server returns correct HTTP cache-control headers, browsers will know not to re-use cached resources.
Refer to link 1 & link 2 for cache-control headers on a Linux App Service.
For example, if you use the "ASP.NET Core hosted" version of the Blazor WebAssembly template, the server will return Cache-Control: no-cache headers, which means the browser will always check with the server whether updated content is present (this uses ETags, so the server will return 304, meaning "keep using your cached content", if nothing has changed since the browser last updated its content).
If you use a different web server or service, you need to configure the web server to return correct caching headers. Blazor WebAssembly can't control or even influence that.
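A minimal sketch of that no-cache + ETag flow (plain Node, not Blazor-specific; the asset content is a stand-in):

import http from "http";
import crypto from "crypto";

const body = 'console.log("pwa");'; // stand-in for a static asset
const etag = '"' + crypto.createHash("sha1").update(body).digest("hex") + '"';

http.createServer((req, res) => {
  res.setHeader("Cache-Control", "no-cache"); // cache, but revalidate on every use
  res.setHeader("ETag", etag);
  if (req.headers["if-none-match"] === etag) {
    res.statusCode = 304; // "keep using your cached content"
    return res.end();
  }
  res.setHeader("Content-Type", "application/javascript");
  res.end(body);
}).listen(3000);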

Nuxtjs - Axios API calls in asyncData after the first SSR load fail on Safari

All API calls made using the nuxt-axios module (@nuxtjs/axios) fail with an error only in the Safari browser when you navigate through pages with client-side loading (the first SSR load works fine).
The error given in the Safari console is vague, and there's not much to be extracted from it.
Has anyone had this issue before? It is consistent across different API calls, and they all follow the same pattern: using $axios in asyncData and failing after the first SSR load.
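The failing pattern, roughly (the endpoint is a placeholder): $axios inside asyncData, which runs on the server for the first load and on the client for subsequent navigations, which is where the Safari failures occur:

export default {
  async asyncData({ $axios }) {
    const items = await $axios.$get("/api/items");
    return { items };
  },
};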
All other browsers work, even IE11. I've tried Nuxt's modern mode with both the 'client' and 'server' settings, to no avail.
Any thoughts?
Turns out it was unrelated to Nuxt. It was my nginx server sending the following headers in responses to OPTIONS requests, and Safari apparently cannot handle that (note that 'text/plain charset=UTF-8' is missing the semicolon before charset, so it isn't a valid media type, which may be what trips Safari up):
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;

How to enable Safari browser caching for URLs which are 302 redirects

I have a single-page application which depends on a JavaScript bundle to work. To fetch this bundle's CDN (CloudFront) URL, I make a call to an AWS API Gateway endpoint, which returns an HTTP 302 response with the Location header set to the CDN URL. The CDN URL responds with cache-control headers carrying a sufficiently large max-age value. All the other browsers, like Chrome and Firefox, seem to honor this and cache the CDN URL's response for further requests, but Safari isn't doing so (version 12). However, it does cache the response when I make the request to the CDN URL directly. Do I need to add some more headers or some additional metadata to the 302 response to make it work for Safari?
I tried fiddling with the cache-control parameters, like adding 'immutable', but nothing worked. I googled quite a lot about this issue, but nothing concrete turned up.
I expected Safari to work with just the max-age parameter present in the CDN's response, but it never caches it.
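For context, the redirect endpoint's response looks roughly like this (a Lambda proxy-style sketch; the URL and max-age are placeholders). One thing worth trying is attaching Cache-Control to the 302 itself rather than only to the CDN response it points to, since a 302 is not cacheable by default and needs its own explicit freshness information:

export const handler = async () => ({
  statusCode: 302,
  headers: {
    Location: "https://dxxxxxxxxxxxx.cloudfront.net/bundle.abc123.js", // placeholder
    "Cache-Control": "public, max-age=86400", // hypothetical addition for the redirect itself
  },
  body: "",
});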

CloudFront and Lambda@Edge - fetch from custom origin depending on user agent

I'm serving my JavaScript app (SPA) by uploading it to S3 (default root object index.html with Cache-Control: max-age=0, no-cache, pointing to fingerprinted js/css assets) and configuring it as an origin of a CloudFront distribution. My domain name, let's say SomeMusicPlatform.com, has a CNAME entry in Route53 containing the distribution URL. This is working great and all is well cached.
Now I want to serve a prerendered HTML version for purposes of bots and social network crawlers. I have set up a server that responds with a pre-rendered version of the JavaScript app (SPA) at the domain prerendered.SomeMusicPlatform.com.
What I'm trying to do in the lambda function is to detect the user agent, identify bots and serve them the prerendered version from my custom server (and not the JavaScript contents from S3 as I would normally serve to regular browsers).
I thought I could achieve this with a Lambda@Edge function, following "Using an Origin-Request Trigger to Change From an Amazon S3 Origin to a Custom Origin": a function that switches the origin to my custom prerender server when it identifies a crawler bot in the request headers (or, in the testing phase, via a prerendered=true query parameter).
The problem is that the Origin Request trigger with the Lambda@Edge function is not firing, because CloudFront still has the Default Root Object index.html cached and tends to return the content from the edge cache. I get X-Cache: RefreshHit from CloudFront using both SomeMusicPlatform.com/?prerendered=true and SomeMusicPlatform.com, even though there is Cache-Control: max-age=0, no-cache on the Default Root Object, index.html.
How can I keep the well-cached serving and low latency of my JavaScript SPA with CloudFront and add serving content from my custom prerender server just for crawler bots?
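The kind of origin-request function in question looks roughly like this (testing-phase variant keyed on the prerendered=true query string; the domain name mirrors the made-up example above, and the timeouts are assumptions):

import type { CloudFrontRequestHandler } from "aws-lambda";

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  if (/\bprerendered=true\b/.test(request.querystring)) {
    // Switch from the S3 origin to the custom prerender server.
    request.origin = {
      custom: {
        domainName: "prerendered.SomeMusicPlatform.com",
        port: 443,
        protocol: "https",
        path: "",
        sslProtocols: ["TLSv1.2"],
        readTimeout: 30,
        keepaliveTimeout: 5,
        customHeaders: {},
      },
    };
    request.headers["host"] = [{ key: "Host", value: "prerendered.SomeMusicPlatform.com" }];
  }
  return request;
};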
The problem with caching (getting the same hit for both mywebsite.com/?prerendered=true and mywebsite.com) was solved by adding prerendered to the query-string whitelist in the CloudFront distribution. CloudFront now correctly maintains both the normal and the prerendered version of the website content, depending on the presence of the parameter (without it, cached content from the S3 origin is served; with it, cached content from the custom origin specified in the Lambda function is served).
This was enough for the testing phase, to ensure the mechanism works correctly. I then followed Michael's advice and added another Lambda function in the Viewer Request trigger, which adds a custom Is-Bot header when a bot is detected in the User-Agent. Again whitelisting was needed, this time for the custom header (to maintain separate caches for both origins depending on it). The other Lambda function, later in the Origin Request trigger, then decides which origin to use based on the Is-Bot header.
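A sketch of that Viewer Request function (the bot regex and exact header name are my placeholders, not the exact implementation):

import type { CloudFrontRequestHandler } from "aws-lambda";

const BOT_UA = /bot|crawler|spider|facebookexternalhit|twitterbot|slackbot/i;

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  const ua = request.headers["user-agent"]?.[0]?.value ?? "";
  // Tag the request; the whitelisted Is-Bot header varies the cache and
  // lets the later Origin Request trigger pick the prerender origin.
  request.headers["is-bot"] = [
    { key: "Is-Bot", value: BOT_UA.test(ua) ? "true" : "false" },
  ];
  return request;
};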

Sails 0.10.5 compress middleware and serving gzipped assets

In Sails 0.10.5, Express compression is supposed to be included in the middleware for production mode by default, according to the issues on GitHub, but none of the response headers carry the Content-Encoding that would suggest they have been gzipped. Furthermore, the sizes of the assets all match the uncompressed assets.
After searching for other issues related to this, I found an SO question which was effectively the opposite of my problem: the asker had the gzipped files in place and needed the middleware, while I have the middleware (supposedly, by default) but no files. His problem was apparently solved by adding the middleware config that was required for compression before 0.10.5. So I npm-installed grunt-contrib-compress and set up the config file. Now the gzipped files are produced successfully, but they're not being served. I tried manually requesting the gzipped version of an asset by injecting it in sails-linker instead of the regular JS, but the Content-Type on the response header was 'application/octet-stream'.
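The grunt-contrib-compress setup looks roughly like this (a sketch; the paths assume a default Sails asset layout):

// Gruntfile.js
module.exports = function (grunt) {
  grunt.loadNpmTasks("grunt-contrib-compress");
  grunt.initConfig({
    compress: {
      assets: {
        options: { mode: "gzip" },
        files: [{
          expand: true,
          cwd: ".tmp/public",
          src: ["**/*.js", "**/*.css"],
          dest: ".tmp/public",
          // keep the original name and append .gz alongside it
          rename: function (dest, src) { return dest + "/" + src + ".gz"; },
        }],
      },
    },
  });
};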
Has anyone successfully served gzipped static assets from a sails app? Am I doing anything obviously incorrectly? Even an outline of the general process would be appreciated.
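In case it helps frame the question: what I'm effectively after is middleware along these lines (an Express-style sketch, not working Sails config; the paths and extensions are assumptions), serving the precompressed file with the original asset's Content-Type plus Content-Encoding: gzip:

import { Request, Response, NextFunction } from "express";
import fs from "fs";
import path from "path";

// staticDir must be an absolute path, e.g. path.join(__dirname, ".tmp/public")
export function serveGzipped(staticDir: string) {
  return (req: Request, res: Response, next: NextFunction) => {
    if (!/\.(js|css)$/.test(req.path)) return next();
    const accepts = String(req.headers["accept-encoding"] || "");
    if (!/\bgzip\b/.test(accepts)) return next();
    const gzPath = path.join(staticDir, req.path + ".gz");
    fs.stat(gzPath, (err) => {
      if (err) return next(); // no precompressed file; fall through
      res.setHeader("Content-Encoding", "gzip");
      // Content-Type must describe the original asset, not the .gz wrapper,
      // otherwise the browser sees application/octet-stream.
      res.type(path.extname(req.path));
      fs.createReadStream(gzPath).pipe(res);
    });
  };
}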