ImageResizer DiskCache+AzureReader strange behaviour

I'm using ImageResizer with the DiskCache plugin and I'm having trouble getting the cache to work properly. Either the images are cached forever (regardless of how many times I upload a new image), or, with some settings changed, I get the old image in some browsers/computers and the new one in others.
This is what I have now in my web.config:
<add name="AzureReader2" connectionString="blahblahblah" endpoint="http://blahblahblah.blob.core.windows.net/" prefix="~/" redirectToBlobIfUnmodified="false" requireImageExtension="false" checkForModifiedFiles="true" cacheMetadata="true"/>
and:
<diskcache dir="~/imagecache" autoclean="true" hashModifiedDate="true" subfolders="8192" asyncWrites="true" asyncBufferSize="10485760" cacheAccessTimeout="15000" logging="true" />
I'm not sure if this is something I can achieve using the existing parameters. My goal is to invalidate the cache, preferably as soon as a new image has been uploaded, without having to change the query string used to serve the image.
I was thinking:
Maybe a blob storage trigger that, when a replacement image has been uploaded, fires a webhook that deletes the cache entry for that image?
Or a web request to my ImageResizer app that preloads the new image into the cache so it replaces the old cached one?
I've seen some posts about using IVirtualFileWithModifiedDate, but from what I understand that would have a big performance impact? Probably only 5% of our image requests involve someone uploading a new image and expecting to see it right away, since most of the images barely change, but it's really frustrating if the new image still isn't showing even a day after it was uploaded!
Could I use IVirtualFileWithModifiedDate to invalidate the cache only when the image has changed, rather than on every image request? Would that be possible?

I get the old image in some browsers/computers and the new one in others.
The fact that different browsers/computers are displaying different versions indicates that either browser caching or proxy/CDN caching is at fault.
ImageResizer's DiskCache hashes the modified date, so it is always as correct as the storage provider.
Regarding your expectations around server-side invalidation:
You're using checkForModifiedFiles="true" cacheMetadata="true", which means that Azure is queried for the latest modified date, but that metadata is cached with a sliding expiration window of one hour. I.e., if a URL hasn't been accessed in the last hour, the next request will cause the modified date to be checked again. See StandardMetadataCache.
You can change this behavior by implementing IMetadataCache yourself and assigning that cache to the .MetadataCache member of the storage provider you're using.
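IMetadataCache itself is a .NET interface, so a real replacement would be written in C#; purely as a language-neutral sketch of the behaviour you'd be aiming for (a much shorter sliding window, plus an explicit invalidate you could call from, say, an upload webhook), it would look something like this:
// Sketch only: a sliding-expiration metadata cache with explicit invalidation.
// The 5-minute window, key format and invalidate() hook are illustrative assumptions.
function createMetadataCache(windowMs) {
  var entries = new Map(); // key -> { value, expiresAt }
  return {
    get: function (key) {
      var entry = entries.get(key);
      if (!entry || Date.now() > entry.expiresAt) {
        entries.delete(key);
        return null; // caller must re-query Azure for the blob's modified date
      }
      entry.expiresAt = Date.now() + windowMs; // slide the window on every access
      return entry.value;
    },
    put: function (key, value) {
      entries.set(key, { value: value, expiresAt: Date.now() + windowMs });
    },
    invalidate: function (key) {
      entries.delete(key); // e.g. called from a webhook when a new image is uploaded
    }
  };
}

var metadataCache = createMetadataCache(5 * 60 * 1000); // 5 minutes instead of 1 hour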

Related

Leverage browser caching for external files

I'm trying to get my Google PageSpeed Insights rating to be decent, but there are some external files that I would like to be cached as well. Does anyone know the best way to deal with this?
https://s.swiftypecdn.com/cc.js (5 minutes)
https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js (60 minutes)
https://pagead2.googlesyndication.com/pagead/osd.js (60 minutes)
https://www.google-analytics.com/plugins/ua/linkid.js (60 minutes)
https://hey.hellobar.com/…d5837892514411fd16abbb3f71f0d400607f8f0b (2 hours)
https://www.google-analytics.com/analytics.js (2 hours)
Copy them to your server and serve them locally or from your CDN, with your own browser cache settings. Update the GA scripts periodically with a cronjob or something similar (a sketch of such a script follows this answer).
On WordPress there are plugins that can do this for you, such as Above The Fold; they call this feature JavaScript localization.
On the other hand, I use the Google PageSpeed Module on the server and its MapProxyDomain directive in combination with the alternative async tracking snippet. That seems the most elegant option to me.
This should be enough for you to start solving your problem.
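For the "copy to your server and update with a cronjob" part, here is a minimal sketch in Node.js (assuming Node 18+ for the built-in fetch; the file list and output folder are just examples):
// mirror-scripts.js - refresh local copies of third-party scripts so you can
// serve them yourself with long browser-cache headers. Run it from cron.
const fs = require('fs/promises');
const path = require('path');

const SCRIPTS = {
  'analytics.js': 'https://www.google-analytics.com/analytics.js',
  'adsbygoogle.js': 'https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js'
};
const OUT_DIR = path.join(__dirname, 'public', 'vendor');

(async () => {
  await fs.mkdir(OUT_DIR, { recursive: true });
  for (const [name, url] of Object.entries(SCRIPTS)) {
    const res = await fetch(url); // global fetch, Node 18+
    if (!res.ok) throw new Error('Failed to fetch ' + url + ': ' + res.status);
    await fs.writeFile(path.join(OUT_DIR, name), await res.text());
    console.log('Updated ' + name);
  }
})();
A crontab entry like 0 3 * * * node /path/to/mirror-scripts.js would refresh the copies nightly; remember that you're now responsible for keeping them up to date.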
Can I set Cache-Control on external resources?
You can't control the headers sent from a server that you don't control.
In other words, either host a copy yourself or there's nothing you can do about it.
Thanks
There is no solution for those files. If they are served from a CDN, like the Bootstrap CDN, you can copy them locally onto your host, but if the requests are generated at runtime then there is nothing you can do about it. :)
You can make your own cache
Place files in the browser's localStorage (after the first time they arrive from the remote server) and serve them from the local copy on subsequent visits. This way you store things right where they're needed; the only thing to be careful with is updating them, since you need a way to replace these files when it's time for a new version (a minimal sketch of this follows the lscache examples below).
If you don't want to do this from scratch, here are some Javascript libraries:
https://www.sitepoint.com/9-javascript-libraries-working-with-local-storage/
Check out lscache, for example; it looks very practical:
lscache.set('greeting', 'Hello World!', 2); // 2 minute expiration
lscache.get('greeting'); // returns "Hello World!"
lscache.remove('greeting'); // remove
lscache.flush(); // flush the entire cache
lscache.flushExpired(); // flush only expired items
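If you do want to roll it yourself rather than use a library, here is a minimal sketch of the localStorage approach described above (the URL, cache duration and key prefix are placeholders, and the remote server must allow CORS for fetch() to read the script):
// Cache a script's source in localStorage with an expiry, then inject it (sketch).
function loadCachedScript(url, maxAgeMs) {
  var key = 'scriptcache:' + url;
  var cached = JSON.parse(localStorage.getItem(key) || 'null');
  var fresh = cached && (Date.now() - cached.storedAt) < maxAgeMs;

  function inject(source) {
    var el = document.createElement('script');
    el.text = source;
    document.head.appendChild(el);
  }

  if (fresh) {
    inject(cached.source); // serve the local copy
    return Promise.resolve('cache');
  }
  return fetch(url) // refresh from the network and update the stored copy
    .then(function (res) { return res.text(); })
    .then(function (source) {
      localStorage.setItem(key, JSON.stringify({ source: source, storedAt: Date.now() }));
      inject(source);
      return 'network';
    });
}

// Example with a placeholder URL, cached for 2 hours:
loadCachedScript('https://example.com/widget.js', 2 * 60 * 60 * 1000);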

Worklight - Updatable static content

I have this requirement: my Worklight application has a set of static pages that might be updated at any time. The source of all static content is a desktop page that is transformed by XSL into mobile-friendly content. The problem is that I don't want to do that transformation on each request (an HA requirement).
I'd like some inspiration on how to architect this without using the Direct Update mechanism (I don't want the end user to be notified of these updates).
I should note that the pages will change rarely, maybe every few months.
I'm thinking about two ways of doing this:
1- Doing the transformation on the adapter side and relying on WL caching so that the transformation isn't done on each request (does that exist?). But how would the adapter get notified of a page change and flush the cache? Should I write an advanced Java-based adapter? (Storing the result in a cache and having some kind of job that scans for content changes every day?)
2- Doing it on the mobile side, but I don't know how to get notified of changes!
Is your only problem with Worklight's Direct Update that the user is being notified and is required to explicitly approve the transfer?
In this case why not use the option of Silent Direct Update?
The property you're looking for is updateSilently, set to true in initOptions.js.
For this to work it is of course required that connectOnStartup is set to true as well.
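In initOptions.js that would look roughly like the following; this is a sketch, so check the option names against the initOptions.js generated for your Worklight version:
// initOptions.js (sketch): enable silent Direct Update
var wlInitOptions = {
  connectOnStartup: true, // required: the app must contact the server on startup
  updateSilently: true    // apply Direct Update without prompting the user
};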
Perhaps what is doable is to use an adapter to fetch the HTML (or whatever it is) and save it to the device's local storage, and then have the app display that content; this way you do not alter the app's web resources and do not trigger Direct Update.
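A rough sketch of that idea, assuming a hypothetical adapter named StaticContentAdapter with a getPage procedure that returns an html field; the storage key and display logic are placeholders:
// Fetch the latest HTML via an adapter and cache it on the device (sketch).
function refreshStaticPage(pageId) {
  WL.Client.invokeProcedure(
    { adapter: 'StaticContentAdapter', procedure: 'getPage', parameters: [pageId] },
    {
      onSuccess: function (response) {
        // Assumes the adapter returns { html: "..." } in its invocation result.
        localStorage.setItem('page:' + pageId, response.invocationResult.html);
        displayPage(pageId);
      },
      onFailure: function () {
        displayPage(pageId); // fall back to whatever copy is already stored
      }
    }
  );
}

function displayPage(pageId) {
  var html = localStorage.getItem('page:' + pageId);
  if (html) {
    document.getElementById('content').innerHTML = html;
  }
}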

Use data from NSURLCache

I have set up an NSURLCache and assigned it as the shared cache.
I can see the cache size growing when I log the size in the NSURLConnection delegates.
For some reason, when requesting the same data, a request is made to the server again, and the logged cache size stays the same.
This indicates to me that the cache is working correctly, but for some reason the URL Loader is not pulling from the cache.
I use the default cache policy on the requests and have double checked the headers in the response.
After reading around on here I see other people mentioning the same issue; setting the memory capacity and disk capacity to the same value solves it for them, but it does not work for me. New requests are made every time rather than being served from the cache.
Would it be wiser to check the cache for the data myself before making the request? Otherwise I don't see the point of the cachePolicy parameter of the NSURLRequest method.
Well, HTTP also lets the server tell you how to cache: it can disallow caching entirely, and it can also set a maximum cache time.
That may influence SDURLCache.
Anyway, also look at "UIWebView and NSURLCache have a troubled relationship", where it is said that UIWebView doesn't work with it.
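For reference, this is what "the server tells you how to cache" looks like in practice; a tiny Node.js server (illustrative only, unrelated to the iOS code above) serving one cacheable and one uncacheable response:
// Cache-Control decides whether a client cache (NSURLCache included) may reuse a response.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/cacheable.json') {
    // Clients may reuse this response for up to an hour.
    res.writeHead(200, { 'Content-Type': 'application/json', 'Cache-Control': 'max-age=3600' });
    res.end('{"cached":true}');
  } else {
    // no-store disallows caching entirely.
    res.writeHead(200, { 'Content-Type': 'application/json', 'Cache-Control': 'no-store' });
    res.end('{"cached":false}');
  }
}).listen(8080);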

Checking filesize before download of multiple files

I'm currently implementing an update feature in an app that I'm building. It uses NSURLConnection and the NSURLConnectionDelegate to download the files and save them to the user's device.
At the moment, one update item downloads multiple files, but I want to display the download of those multiple files using one UIProgressView. So my problem is: how do I get the expected content length of all the files I'm about to download? I know I can get the expectedContentLength of the NSURLResponse object that gets passed into didReceiveResponse, but that's just for the file currently being downloaded.
Any help is much appreciated. Thanks.
How about having some kind of information file on your server that gives you the total number of bytes? You could load that first and then load your files, subtracting the amount loaded for each file from the total.
Another method would be to connect to all the files first and cancel each connection once you have received its response. Add up the expected bytes of all the files and use that as the basis for showing the total progress while downloading.
Downside of #1: you have to manually keep track of the bytes.
Downside of #2: you'll make twice as many requests, even though the extra ones get cancelled after the response.
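Note that for #2 you don't actually have to open full downloads and cancel them: a HEAD request returns only the headers. Here is a platform-neutral sketch of the idea in JavaScript (the question itself is about NSURLConnection on iOS, and the URLs are placeholders):
// Sum the Content-Length of several files before downloading them (sketch).
async function totalDownloadSize(urls) {
  let total = 0;
  for (const url of urls) {
    const res = await fetch(url, { method: 'HEAD' }); // headers only, nothing to cancel
    const length = res.headers.get('content-length');
    if (length === null) throw new Error('No Content-Length for ' + url);
    total += Number(length);
  }
  return total; // use this as the denominator for the overall progress bar
}

// Example with placeholder URLs:
totalDownloadSize(['https://example.com/a.zip', 'https://example.com/b.zip'])
  .then(function (bytes) { console.log('Expected total bytes:', bytes); });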
Use the ASIHTTPRequest open-source framework, which is widely used for this purpose.
Here you just need to set the progress view delegate, and it will keep updating your progress view.
Try this:
http://allseeing-i.com/ASIHTTPRequest/

Dynamic Image Caching

I have a CherryPy app that dynamically generates images; those images are re-used a lot but are regenerated on every request. The image is generated from a query string containing the variables, so the same query string will always return the same image (until I rewrite the generation code), and the images are not user-specific.
It occurred to me that I should be aggressively caching those images, but I have no idea how I would go about doing that. Should I be memoizing in CherryPy? Should I do something like this for Apache? Another layer entirely?
Your first attempt should be to use the HTTP cache that comes with CherryPy. See http://docs.cherrypy.org/dev/refman/lib/caching.html for an overview. From there, it's usually a "simple" matter of balancing computational expense versus RAM consumption via cache timeouts.