I am testing DiskCache with S3Reader2 for an image collection, and it caches very nicely and seamlessly.
However, I am having a hard time finding any info on clearing the disk cache for a single image, which I need to do after the image is updated.
Is that possible via the querystring, or via C# code?
Thanks
S3Reader2 supports invalidation in V4+ with a sliding expiration window.
You can also take the easy approach and append "&cachebreaker=x" when your underlying file changes. This has the benefit of working immediately and across CDNs.
Across many different Objective-C iOS projects I have frequently run into the issue of keeping data accessible after I initially fetch it.
For example, I am currently reading from the Stack Overflow API. I do this with a session and get a dictionary back (my JSON response).
But outside the scope of the session, the dictionary is unavailable! I can't copy the contents to a different dictionary that I've defined globally, or anything. It's like it disappears outside of the session.
So I am wondering, what's the best way to save this data that I want to use? From what I've been reading it seems like NSUserDefaults or maybe creating a plist file, although admittedly I've been having trouble with both options. If there is a method that is best for this then I can concentrate on that.
Thank you!
It depends on how persistent you need the data to be.
If you save this dictionary into a global variable, it is stored in the part of the device's RAM that is reserved for the running app. When the app stops running (it gets killed by the OS or removed by the user), or if the device reboots, this memory is lost.
If you save this dictionary to the device's flash storage (its file system), it will survive app restarts and device reboots.
Usually people combine the two approaches: when you get the data from the network, you keep it in a global variable and also save it to the file system. After an app restart, you first try to load the data from the file system. The reason for not using the file system all the time is that it is much slower than RAM access. What I'm describing is, essentially, caching.
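A minimal sketch of that combined approach (cachePath, self.questions, and fetchQuestionsFromNetwork are illustrative names, not anything from your project):

    // On launch: prefer the cached copy on disk; fall back to the network.
    NSDictionary *cached = [NSDictionary dictionaryWithContentsOfFile:cachePath];
    if (cached != nil) {
        self.questions = cached;          // fast path: restore the RAM copy from disk
    } else {
        [self fetchQuestionsFromNetwork]; // slow path: hit the API, then write to disk
    }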
Note that you can implement caching manually (using plain data or text files, NSUserDefaults, Core Data, or other libraries), but you can also use the built-in HTTP cache, NSURLCache. If you create a session with NSURLSession.sharedSession, it will use the default NSURLCache and respect the caching policy dictated by the server side.
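For example, with the shared session (the URL here is just a placeholder):

    NSURL *url = [NSURL URLWithString:@"https://api.example.com/questions"];
    NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil || data == nil) { return; }
        // The raw response is stored in [NSURLCache sharedURLCache] according to
        // whatever Cache-Control headers the server sent back.
        NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data
                                                             options:0
                                                               error:NULL];
        NSLog(@"%@", json);
    }];
    [task resume];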
For more control and full offline support, I'd recommend implementing caching manually. See the documentation on reading and writing plists and writeToFile:atomically:.
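A minimal sketch of the plist route, assuming the dictionary you got back contains only plist-compatible types (NSString, NSNumber, NSArray, NSDictionary, NSData, NSDate); jsonDictionary stands in for the dictionary from your completion handler:

    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                          NSUserDomainMask, YES) firstObject];
    NSString *path = [docs stringByAppendingPathComponent:@"questions.plist"];

    // Save; returns NO if the dictionary contains non-plist types.
    BOOL saved = [jsonDictionary writeToFile:path atomically:YES];

    // Load on the next launch (returns nil if the file doesn't exist yet):
    NSDictionary *cached = [NSDictionary dictionaryWithContentsOfFile:path];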
I am using ImageResizer (http://imageresizing.net/) and was curious whether anyone has found a way to clear the cache for a particular item.
This would help me out greatly, as I have some legacy systems that need to receive updated images, and I cannot add any query string parameters to the image URLs to refresh the cache.
In order to scale to millions of items, DiskCache does not maintain a cross-reference table between source files and cached images - it instead uses a one-way hash function that combines the source file, modified date, and commands.
If you want to do invalidation, your provider needs to support it via IVirtualFileWithModifiedDate. There is a cost associated with invalidation checks on every request, so some form of windowed caching is suggested.
You can also use URL rewriting to map "legacy URLs" to new immutable URLs. For small numbers of images, this is the most performant approach. Keep in mind that even if ImageResizer is serving the right image, there are other layers of caching (browser, proxy, etc.) that will get in the way.
I have a scenario in which multiple lines of text need to be appended to an existing text file. Is it possible to do this using jclouds? (That would be ideal for me, as jclouds supports a lot of cloud providers.)
Even if this is not doable using jclouds, does the native API of Amazon S3, Rackspace Cloud Files, or Azure Storage support appending content to existing blobs?
If this is doable, please point me to good working examples.
This is not possible in the underlying blob stores I know of. Blobs in these stores are written whole and are effectively immutable, so the usual workaround is to download the blob, append locally, and re-upload it (or to write each chunk as a separate blob and concatenate on read).
So I am in the process of moving all the thumbnails of my major sites to S3, and now I am thinking about how I can consistently push all the CSS/JS/images that power the actual sites to it as well. It's easy enough to upload everything the first time, but I am trying to think of a way to automate the process every time I push out to production.
Does anyone have any clever ways of doing this?
I used to use s3sync to compare and update the assets just before uploading the site files, using a bash script to iterate through my files.
This works well, but when the number of files to compare gets big (let's say thousands), the process starts to get really slow. If you have a small architecture (in terms of assets), this will do the trick.
To make this better, I would recommend Capistrano or some other deployment assistant; this way you can run everything at once:
upload assets
deploy your files
On the other hand, you could take a look at CloudFront (Amazon's CDN) and set it up with your web server as a custom origin; this way you don't need to worry about uploading the files to S3, since they will be pulled automatically on demand. The downside of this approach is caching: if you need to update a file and keep the same name, you have to expire (invalidate) the object. You can do this in CloudFront, but you will need a script for the task.
Depending on the traffic (and other factors, of course), one path or the other will fit best.
I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user selectable bundles.
As I see it I have a number of options but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad.
Put the content in a SQLite database. This seems to require some 3rd party libs like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, what option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is the content creation and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creation of tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and the third options. If the content is created by you (like levels, etc.), then simply store it on the server, download it as a ZIP, and store it locally. Use Core Data to keep an index of the things you've downloaded, like the path of the folder each bundle is stored in and its name/origin/etc., but not the raw data. Databases are not meant to hold massive amounts of raw content, but rather structured data. And even if they can, I'd not do so.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower as they grow large. If you rarely scan through the data, use the file system directly; it may prove faster and cheaper.
The index in Core Data could store all the relevant metadata, giving you very easy and very quick access. Opening a piece of content then loads it from the file system, which is quick, cheap, and doesn't strain the index.
Why would you do so? Redirecting your web view to a file:// URL will have the same effect, won't it? (See the sketch after this list.)
Should be answered by now.
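To illustrate the file:// point from above, a sketch of loading downloaded content directly in a UIWebView (the "bundle1/index.html" layout is an assumption):

    // No local web server needed: relative references to CSS/JS/images next to
    // index.html resolve normally under a file:// URL.
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                          NSUserDomainMask, YES) firstObject];
    NSURL *indexURL = [NSURL fileURLWithPath:
        [docs stringByAppendingPathComponent:@"bundle1/index.html"]];
    [self.webView loadRequest:[NSURLRequest requestWithURL:indexURL]];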
If you don't have control over the content, then use the same approach as above, but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a folder common to all bundles. When downloading, fetch and parse the XML first, then download each resource one by one. This will spare you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the folder structure of the bundle there. This way the content will work online and offline without changes.
With a little effort, you could even keep track of file versions by including version numbers in the XML file for each resource, so if your content has been partially updated, only the files with changed version numbers have to be downloaded again.
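A rough sketch of that version check, assuming the manifest XML has already been parsed (e.g. with NSXMLParser) into an array of entry dictionaries; the entry keys, versionsPath, manifestEntries, and downloadResource:toRelativePath: are all made-up names:

    // localVersions maps relative paths to the version last downloaded,
    // persisted between runs as a plist.
    NSMutableDictionary *localVersions =
        [[NSMutableDictionary alloc] initWithContentsOfFile:versionsPath]
            ?: [NSMutableDictionary dictionary];

    for (NSDictionary *entry in manifestEntries) {
        NSString *relativePath = entry[@"path"];
        NSNumber *remoteVersion = entry[@"version"];
        NSNumber *localVersion = localVersions[relativePath];

        if (localVersion == nil ||
            [localVersion compare:remoteVersion] == NSOrderedAscending) {
            // New file, or its version was bumped: re-download just this resource.
            [self downloadResource:entry[@"url"] toRelativePath:relativePath];
            localVersions[relativePath] = remoteVersion;
        }
    }
    [localVersions writeToFile:versionsPath atomically:YES];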