Force reload cached image with same url after dynamic DOM change - http-headers

I'm developing an Angular 2 application (single page application). My page is never "reloaded", but its content changes according to user interactions.
I'm having some cache problems, especially with images.
Context:
My page contains an editable image list:
<ul>
  <li><img src="myImageController/1">Edit</li>
  <li><img src="myImageController/2">Edit</li>
  <li><img src="myImageController/3">Edit</li>
</ul>
When I want to edit an image (Edit link), my DOM content is completely changed to show another Angular component with a file-upload component.
The myImageController returns the Last-Modified header, and Cache-Control: no-cache, must-revalidate.
After a refresh (hitting F5), my page makes a request for every img src, which is correct: if an image has been modified it is downloaded, otherwise I just get a 304, which is fine.
Note: my images are stored in the database as blob fields.
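For reference, a hypothetical Express-style sketch of what that controller's caching behavior amounts to (the framework, route, and getImageBlob lookup here are illustrative placeholders, not my actual controller):
// Illustrative sketch only: a handler with the same conditional-GET behavior.
// getImageBlob() and its metadata are placeholders.
app.get('/myImageController/:id', function (req, res) {
    var image = getImageBlob(req.params.id);          // { data: Buffer, lastModified: Date }
    var lastModified = image.lastModified.toUTCString();

    res.set('Cache-Control', 'no-cache, must-revalidate');
    res.set('Last-Modified', lastModified);

    // Conditional GET: answer 304 when the browser's copy is still current.
    if (req.headers['if-modified-since'] === lastModified) {
        return res.status(304).end();
    }
    res.type('image/png').send(image.data);
});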
Problem:
When my page content is dynamically reloaded by my single page app and contains img tags, the browser does not issue a GET request but immediately takes the image from cache. I assume this is a browser optimization to avoid fetching the same resource on the same page multiple times.
Wrong solutions:
The first solution is to append something like ?time=(new Date()).getTime() to generate unique URLs and avoid the browser cache. But this never sends the If-Modified-Since header, so I download the full image every time.
Do a "real" refresh: the first page load in Angular apps is quite slow, and I don't want to reload everything.
Tests
To simplify the problem, I tried creating a static HTML page containing 3 images with the exact same link to my controller: /myImageController/1. With the Chrome developer tools, I can see that only one GET request is made. If I manage to get multiple server calls in this case, it would probably solve my problem.
Thank you for your help.

The HTML5 specification describes this behavior: the browser may reuse images regardless of cache-related HTTP headers. Check this answer for more information. You probably need to use XMLHttpRequest and blobs. In this case you also need to consider the same-origin policy.
You can use the following function to make sure the user agent performs every request:
var downloadImage = function (imgNode, url) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.responseType = "blob";
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
            if (xhr.status == 200 || xhr.status == 304) {
                var blobUrl = URL.createObjectURL(xhr.response);
                imgNode.src = blobUrl;
                // You can also use imgNode.onload callback to release blob resources.
                setTimeout(function () {
                    URL.revokeObjectURL(blobUrl);
                }, 1000);
            }
        }
    };
    xhr.send();
};
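For example, to apply it to the image list from the question (the selector is an assumption about your markup; adjust it to your template):
// Hypothetical usage: re-request every image in the list through XHR.
var imgs = document.querySelectorAll('img[src^="myImageController/"]');
for (var i = 0; i < imgs.length; i++) {
    downloadImage(imgs[i], imgs[i].getAttribute("src"));
}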
For more information check the New Tricks in XMLHttpRequest2 article by Eric Bidelman, the Working with files in JavaScript, Part 4: Object URLs article by Nicholas C. Zakas, the URL.createObjectURL() MDN page and the Same-origin policy MDN page.

You can use the random ID trick. This changes the URL so that the browser reloads the image. Note that this can be done in the query parameters to force a full cache break, or in the hash to allow the browser to re-validate the image against the cache (and avoid re-downloading it if unchanged).
function reloadWithCache(img: HTMLImageElement, url: string) {
    // Changing only the fragment keeps the request URL the same, so the cache can still be revalidated.
    img.src = url.replace(/#.*/, "") + "#" + Math.random();
}

function reloadBypassCache(img: HTMLImageElement, url: string) {
    // Appending a query parameter makes the URL unique, so the cache is bypassed entirely.
    let sep = url.indexOf("?") == -1 ? "?" : "&";
    img.src = url + sep + "nocache=" + Math.random();
}
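For example, after an edit succeeds you might refresh just the affected image (the element lookup and image id below are hypothetical):
// Hypothetical usage: re-validate image 2 after its edit has been saved.
var img = document.querySelector('img[src^="myImageController/2"]');
reloadWithCache(img, "myImageController/2");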
Note that if you are using reloadBypassCache regularly, you are better off fixing your cache headers. This function will always hit your origin server, leading to higher running costs and making CDNs ineffective.

Related

Cloudfront serving png/jpg vs webp based on request headers

I have Cloudfront in front of S3 serving images (png and jpg).
I have all png and jpg images in webp format in the same directory with .webp extension. For example:
png: /path/to/file.png
webp: /path/to/file.png.webp
I'd like to serve the webp file dynamically without changing the markup.
Since browsers flag webp support via the Accept header, what I need to do is: if the user has support for webp (via the Accept header), CloudFront should pull the webp version (filename.png.webp); if not, it should serve the original file (filename.png).
Is this possible to achieve?
Making CloudFront serve different resources is easy (once you have done it a couple of times), but my concern is whether the entity making the request (i.e. the browser) and possible caching elements in between (proxies etc.) expect different media types on the same request URI. But that is a bit beyond your question. I believe the usual way to handle this problem is with a <picture> element, where the browser is free to choose an image from different media types like this:
<picture>
  <source type="image/svg+xml" srcset="pyramid.svg" />
  <source type="image/webp" srcset="pyramid.webp" />
  <img src="pyramid.png"
       alt="regular pyramid built from four equilateral triangles" />
</picture>
But if you still want to serve different content from CloudFront for the same URL, this is how you do it:
CloudFront has 4 different points where you can inject a Lambda function for request manipulation (Lambda@Edge).
For your use case we need to create a Lambda@Edge function at the Origin Request location, then associate this function with your CloudFront distribution.
Below is an example from the AWS docs that looks at the device type and does URL manipulation. For your use case, something similar can be done by looking at the "Accept" header instead; see the sketch after this example.
'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Serve different versions of an object based on the device type.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Is-*-Viewer headers. For more information, see
     *          the following documentation:
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
     *       2. CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const desktopPath = '/desktop';
    const mobilePath = '/mobile';
    const tabletPath = '/tablet';
    const smarttvPath = '/smarttv';

    if (headers['cloudfront-is-desktop-viewer']
        && headers['cloudfront-is-desktop-viewer'][0].value === 'true') {
        request.uri = desktopPath + request.uri;
    } else if (headers['cloudfront-is-mobile-viewer']
        && headers['cloudfront-is-mobile-viewer'][0].value === 'true') {
        request.uri = mobilePath + request.uri;
    } else if (headers['cloudfront-is-tablet-viewer']
        && headers['cloudfront-is-tablet-viewer'][0].value === 'true') {
        request.uri = tabletPath + request.uri;
    } else if (headers['cloudfront-is-smarttv-viewer']
        && headers['cloudfront-is-smarttv-viewer'][0].value === 'true') {
        request.uri = smarttvPath + request.uri;
    }

    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};
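For the WebP case, a minimal origin-request sketch might look like the following. This is my adaptation rather than AWS sample code; it assumes the .webp sibling naming from the question (file.png -> file.png.webp):
'use strict';

/* Origin request sketch: rewrite the URI to the .webp sibling when the viewer accepts WebP. */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    const accept = headers['accept'] && headers['accept'][0] ? headers['accept'][0].value : '';

    // Only rewrite image requests that have a WebP sibling (file.png -> file.png.webp).
    if (accept.indexOf('image/webp') !== -1 && /\.(png|jpe?g)$/.test(request.uri)) {
        request.uri = request.uri + '.webp';
    }

    callback(null, request);
};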
Next you need to tell CloudFront that you want to use the Accept header as part of your cache key (otherwise CloudFront would only execute your Origin Request Lambda once per cached object and would not expose this header to your function).
You do this nowadays with cache and origin request policies, or with the legacy settings (Edit Behaviour under your CloudFront distribution settings) by adding the Accept header to the whitelisted headers.
Worth noting here is that if you get a low cache hit ratio due to the many variants of the Accept header, you need to pre-process / normalize it. The way I would do it is with a Viewer Request Lambda that gets executed for each request. This new Lambda would check whether the Accept header supports WebP and add a single NEW header to the request, which it passes on to the Origin Request above. That way the Origin Request stage can cache on this new header (which only has two possible values); a sketch follows below.
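A minimal sketch of such a normalizing Viewer Request function, assuming a made-up header name of X-Supports-WebP (you would also need to include that header in the cache policy):
'use strict';

/* Viewer request sketch: collapse the Accept header into a single boolean custom header. */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const accept = request.headers['accept'] && request.headers['accept'][0]
        ? request.headers['accept'][0].value : '';

    // Hypothetical header name; only two possible values keeps the cache key small.
    request.headers['x-supports-webp'] = [{
        key: 'X-Supports-WebP',
        value: accept.indexOf('image/webp') !== -1 ? 'true' : 'false'
    }];

    callback(null, request);
};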
There's more config/setup needed, such as IAM policies to get Lambda to run etc., but there's lots of great material out there that walks you through the steps. Maybe start here?

kinvey rest api upload

I'm trying to upload to Kinvey using the REST API method.
I can successfully get the Google Storage URL link provided after sending a 'POST' request to https://baas.kinvey.com/blob/:myAppId
The problem is that when I send a 'PUT' request to the Google Storage URL, I get this error:
XMLHttpRequest cannot load (my storage.google URL). Response to
preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin (my localhost) is therefore not allowed access.
This appears to be a fairly standard CORS error (which you can read a lot more about here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS), which takes place when you are making a cross-origin request. There are a lot of different ways you can approach this issue, but the easiest would probably be to use one of our SDKs to help you. If you take a look at http://devcenter.kinvey.com/html5/downloads you will find an SDK that you can include in your projects, and guides / documentation for it in the top navigation.
File uploads using the HTML5 library are fairly trivial as well. Here's some sample code that I have whipped up:
HTML portion:
<input type="file" name="_file" id="_file" onchange="fileSelected();" />
<div id="fileinfo">
  <div id="filename"></div>
  <div id="filetype"></div>
</div>
Javascript portion:
function fileSelected() {
    var oFile = document.getElementById('_file').files[0];
    var oReader = new FileReader();
    oReader.onload = function (e) {
        document.getElementById('fileinfo').style.display = 'block';
        document.getElementById('filename').innerHTML = 'Name: ' + oFile.name;
        document.getElementById('filetype').innerHTML = 'Type: ' + oFile.type;
    };
    oReader.readAsDataURL(oFile);
    fileUpload(oFile);
}

function fileUpload(file) {
    // Use the file's own name and type rather than the display divs.
    var promise = Kinvey.File.upload(file, {
        filename: file.name,
        mimetype: file.type
    });
    promise.then(function () {
        alert("File Uploaded Successfully");
    }, function (error) {
        alert("File Upload Failure: " + error.description);
    });
}
This will be slightly different for each of Kinvey's JavaScript libraries, but should follow roughly the same outline: get the file, call Kinvey.File.upload asynchronously, and let the SDK do its magic. This should handle all the ugliness of CORS for you.
Thanks,

What's the easiest way to display content depending on the URL parameter value in DocPad

I'd like to check for a URL parameter and then display a confirmation message depending on it.
E.g. if a GET request is made to /form?c=thankyou, DocPad shows the form with a thank-you message.
I think there are two basic ways to do this:
Look at the URL on the server side (routing) and display differing content according to URL parameters.
Look at the parameter on the client side using JavaScript and either inject or show a DOM element (e.g. a div) that acts as a message box.
To do this on the server side you would need to intercept incoming requests in the docpad.coffee file in the serverExtend event. Something like this:
events:
    # Server Extend
    # Used to add our own custom routes to the server before the docpad routes are added
    serverExtend: (opts) ->
        # Extract the server from the options
        {server} = opts
        docpad = @docpad

        # As we are now running in an event,
        # ensure we are using the latest copy of the docpad configuration
        # and fetch our urls from it
        latestConfig = docpad.getConfig()
        oldUrls = latestConfig.templateData.site.oldUrls or []
        newUrl = latestConfig.templateData.site.url

        # Express matches routes on the path only, so check the query string inside the handler
        server.get "/form", (req, res, next) ->
            return next() unless req.query.c is 'thankyou'
            document = docpad.getCollection('documents').findOne({relativeOutPath: 'index.html'})
            docpad.serveDocument({
                document: document,
                req: req,
                res: res,
                next: next,
                statusCode: 200
            })
Similar to an answer I gave at how to handle routes in Docpad
But I think what you are suggesting is more commonly done on the client side, so it is not really specific to DocPad. Something like this (assumes jQuery):
if (location.search == "?c=thankyou") {
    $('#message-sent').show(); // show hidden div
    setTimeout(function () {
        $('#message-sent').fadeOut(1000); // fade it out after a period of time
    }, 1000);
}
This is similar to an answer I gave in the following: Docpad: show error/success message on contact form
Edit
A third possibility I've just realised is setting the document to be dynamically generated on each request by setting the metadata property dynamic: true. This will also add the request object (req) to the template data passed to the page. See the DocPad documentation on this: http://docpad.org/docs/meta-data.
One gotcha that gets everyone when setting the page to dynamic is that you must have docpad-plugin-cleanurls installed, or nothing will happen. Your metadata might look something like this:
---
layout: 'default'
title: 'My title'
dynamic: true
---
And perhaps on the page (html.eco):
<% if @req.url == '/?c=thankyou': %>
  <h1>Got It!!!</h1>
<% end %>

XmlHttpRequest origin header is null in webkit

I am trying to integrate 2 webapps so that one (the new one) can send a URL to the other (the old one), which will load it into a text area. The URL actually points to a file in S3 (with the multiple redirects that come with that), and I am trying to do this in the receiver using an XMLHttpRequest like so:
var client = new XMLHttpRequest();
client.withCredentials = true;
client.open('GET', geneIdFileUrl, true);
client.setRequestHeader("Content-type", "text/plain");
client.onreadystatechange = function () {
    document.getElementById('geneList').value = client.responseText;
};
client.send();
This works fine in Firefox, but in WebKit browsers (Chrome, Safari) the request is sent out with the request header
Origin: null
instead of the real origin of the page making the request. The GET from S3 comes back as a Content-Disposition attachment, so I can't just dump it into an iframe and read the contents from there.
Why do the WebKit browsers not set the origin properly, and is it possible to coerce them into doing so?

Using Node JS to proxy http and modify response

I'm trying to write a front end to an API service with Node JS.
I'd like to have a user point their browser at my Node server and make a request. The Node script would modify the input to the request, call the API service, then modify the output and pass it back to the user.
I like the solution here (with Express JS and node-http-proxy) as it passes the cookies and headers directly from the user through my site to the API server.
proxy request in node.js / express
I see how to modify the input to the request, but I can't figure out how to modify the response. Any suggestions?
transformer-proxy could be useful here. I'm the author of this plugin and I'm answering here because I found this page when looking into the same question and wasn't satisfied with harmon, as I don't want to manipulate HTML.
Maybe someone else is looking for this and finds it useful.
Harmon is designed to plug into node-http-proxy: https://github.com/No9/harmon
It uses trumpet and so is stream-based, to work around any buffering problems.
It uses an element and attribute selector to enable manipulation of a response.
This can be used to modify the output response, as in the sketch below.
See here: https://github.com/nodejitsu/node-http-proxy/issues/382#issuecomment-14895039
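For completeness, a rough sketch of how harmon is typically wired together with connect and node-http-proxy. The CSS selector, replacement text, and upstream target are placeholders, and the exact harmon options may differ; treat this as an approximation of its README usage rather than a verified setup:
var http = require('http');
var connect = require('connect');
var httpProxy = require('http-proxy');
var harmon = require('harmon');

var proxy = httpProxy.createProxyServer({});

// Response selectors: rewrite the contents of every element matching the query.
var responseSelectors = [{
    query: '.badge',                          // placeholder selector
    func: function (node) {
        var stream = node.createStream();     // trumpet stream for the matched element
        stream.end('rewritten by the proxy'); // placeholder replacement text
    }
}];

var app = connect();
app.use(harmon([], responseSelectors));
app.use(function (req, res) {
    proxy.web(req, res, { target: 'http://localhost:9000' }); // placeholder upstream
});

http.createServer(app).listen(8000);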
http-proxy-interceptor is a middleware I wrote for this very purpose. It allows you to modify the HTTP response using one or more transform streams. There are tons of stream-based packages available (like trumpet, which harmon uses), and by using streams you can avoid buffering the entire response; a minimal transform-stream sketch follows.
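To illustrate the transform-stream idea itself (this does not reproduce http-proxy-interceptor's specific API; how the stream gets attached to the proxied response depends on the middleware you use):
var stream = require('stream');

// A trivial Transform stream: upper-cases whatever body chunks pass through it.
var upperCase = new stream.Transform({
    transform: function (chunk, encoding, callback) {
        callback(null, chunk.toString().toUpperCase());
    }
});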
Here is an example using http-proxy-response-rewrite:
var httpProxy = require('http-proxy');
var modifyResponse = require('http-proxy-response-rewrite');

var proxy = httpProxy.createServer({
    target: 'target server IP here',
});
proxy.listen(8001);

proxy.on('error', function (err, req, res) {
    res.writeHead(500, {
        'Content-Type': 'text/plain'
    });
    res.end('Something went wrong. And we are reporting a custom error message.');
});

proxy.on('proxyRes', function (proxyRes, req, res) {
    modifyResponse(res, proxyRes.headers['content-encoding'], function (body) {
        if (body && (body.indexOf("<process-order-response>") != -1)) {
            var beforeTag = "</receipt-text>"; // tag after which data is added to the response
            var beforeTagBody = body.substring(0, body.indexOf(beforeTag) + beforeTag.length);
            var requiredXml = " <ga-loyalty-rewards>\n" +
                "<previousBalance>0</previousBalance>\n" +
                "<availableBalance>0</availableBalance>\n" +
                "<accuruedAmount>0</accuruedAmount>\n" +
                "<redeemedAmount>0</redeemedAmount>\n" +
                "</ga-loyalty-rewards>";
            var afterTagBody = body.substring(body.indexOf(beforeTag) + beforeTag.length);
            var parts = [beforeTagBody, requiredXml, afterTagBody];
            console.log(parts.join(""));
            return parts.join("");
        }
        return body;
    });
});
});