Is IndexedDB on Safari guaranteed to be persistent?

Similar to this question, is IndexedDB guaranteed to be persistent? I.e., will Safari refrain from reclaiming its disk space when the device is low on storage?

Safari has a "no eviction" policy, meaning it will not automatically clear IndexedDB under low disk pressure; only the user can delete it manually.
IndexedDB is a fast-evolving feature, and you can expect the eviction policy to change at any time with no announcement. You should always build with fallback options.
Chrome has an explicit persistent-storage option that guarantees no eviction once the user approves persistent storage, and we can expect Safari to do the same at some point, based on its track record of following Chrome in implementing PWA features (though it is taking years, with very poor documentation).
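For completeness, here's a minimal sketch of how a web app can request that persistence guarantee via the standard StorageManager API (`navigator.storage.persist()`). The storage object is passed in as a parameter and the return labels are my own, so the function can be exercised outside a browser:

```javascript
// Sketch: ask the browser to exempt this origin's storage (including
// IndexedDB) from automatic eviction. `storage` is expected to look
// like the standard StorageManager (i.e. navigator.storage); passing
// it in keeps the function testable outside a browser.
async function ensurePersistentStorage(storage) {
  if (!storage || typeof storage.persist !== "function") {
    return "unsupported"; // e.g. older Safari: no StorageManager API
  }
  if (await storage.persisted()) {
    return "already-persistent";
  }
  // In Chrome this may prompt the user or be decided by heuristics;
  // a refusal just means storage stays "best effort" (evictable).
  return (await storage.persist()) ? "granted" : "best-effort";
}

// In a browser: ensurePersistentStorage(navigator.storage)
```

Even when `persist()` resolves to true, treat it as a strong hint rather than an absolute guarantee, and keep a server-side fallback for anything you cannot afford to lose.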

According to this blog post from the WebKit team, IndexedDB is not guaranteed to be persistent as of iOS and iPadOS 13.4 and Safari 13.1 on macOS. Safari will delete it after seven days of Safari usage without interaction with the site:
Now ITP has aligned the remaining script-writable storage forms with
the existing client-side cookie restriction, deleting all of a
website’s script-writable storage after seven days of Safari use
without user interaction on the site. These are the script-writable
storage forms affected (excluding some legacy website data types):
Indexed DB
LocalStorage
Media keys
SessionStorage
Service Worker registrations and cache
However, IndexedDB is pretty much guaranteed to be persistent if your web app is installed on the Home Screen, as the web app will have its own usage context and, by its very nature, it would be impossible to use it for seven days without accessing the site it came from:
[...] Web applications added to the home screen are not part of Safari
and thus have their own counter of days of use. Their days of use will
match actual use of the web application which resets the timer. We do
not expect the first-party in such a web application to have its
website data deleted.
Regardless of the above, I would personally not trust IndexedDB for any kind of long-term data storage. I've found it quite ropey, and not long ago it broke altogether in Safari 14.1.1.

I have no definitive answer, but after using IndexedDB for over two years in a big browser/desktop (Electron-based) application, I would attribute multiple data losses to IndexedDB, or at least to IndexedDB in Chrome. So my answer would be no. Don't rely on it.

Related

Safari ITP 2.3: Capped Lifetime For All Script-Writeable Website Data

I am a little confused about Safari's ITP 2.3 policy which caps the lifetime of script-writable storage in the browser to 7 days.
The official article states that:
After seven days of Safari use without the user interacting with a web page on website.example, all of website.example’s non-cookie website data is deleted.
This definitely includes localStorage. Does someone know for sure whether
IndexedDB
CacheStorage
Service Worker
are cleared as well?
Looking at the relevant Webkit commit, it clearly includes IndexedDB - it does not seem to include CacheStorage or ServiceWorker.
This post of March 24th lists what is affected.
On that list, I can see IndexedDB and something they call "Service Worker registrations".
https://webkit.org/blog/10218/full-third-party-cookie-blocking-and-more/

How big can an iPad app be within its sandbox?

I will be making an iPad app with many embedded images, and it will be around 2 GB. I have made apps this size before for B2B with no issues. My question is: does the 2 GB limit Apple imposes extend to downloading new content packages? So the app ships much like a magazine app, at about 30-50 MB say, and then you can download packages that might grow the entire app to 12 GB.
Apple's app size limit (currently 50 MB) only applies to cellular downloads, not to Wi-Fi downloads or purchases on your computer. There are a number of apps available today that exceed this limit (>1 GB for some navigation packages that ship with custom offline map data).
However, that's not really a good user experience. Since there is still no incremental update system available on the App Store, the users will have to download the full app after every update.
Downloading this content from your own server after purchase might be a better option, if you can afford the traffic, since that data won't be cleared during an update. Make sure you disable iCloud backups for it though.
That being said, I wonder what kind of app requires 2 GB of essential images. Are you sure every user needs all of them at the same time? Can't you just download what is necessary?
The size limit of 2 GB only applies to the app itself, i.e. what's downloaded from the App Store. The size of things that the app downloads once it's installed is not really limited, but you must make sure that large data is not backed up via iTunes/iCloud, otherwise your app will very likely be rejected for violation of the iOS data storage guidelines.

Proper handling of NSUbiquityKeyValueStore updates across devices?

My app stores a single key-value pair, an array of objects, in iCloud using NSUbiquityKeyValueStore. The entire array is saved to iCloud when a change is made to any object in the array. This works great as long as each device has an opportunity to pull down the latest update before a change is made locally. Otherwise the local change can get pushed up to iCloud before other devices' latest updates have been pulled down, and those updates get lost across all devices. Is this my app's shortcoming or iCloud's shortcoming, and how can I prevent this scenario from occurring?
Otherwise the local change can get pushed up to iCloud before other devices' latest updates have been pulled down
I ran into a similar issue this week with a project I'm working on. I just made sure that I didn't push anything up to the iCloud server until I received my first update from iCloud. Also, FWIW, I set a fake key-value pair right after initialization so that it updates immediately.
HackyStack's idea of a local flag is also a good solution; if a change comes in you can ask the user if they want to use it or not. (sorta like how Kindle asks if you want to update to the latest page).
I'm not sure I fully understand the exact issue, but I believe the answer is either a category on NSObject (where you could have a "version" property) to check the version of the object, or another key-value pair stored in iCloud for "version" that can be compared to one stored locally on the device (lastUpdateVersion) so you know where you stand. If you could give me an exact real-world example of your problem, I could answer better. It could be that you don't even need a "version" but rather a flag (BOOL).
You should read the documentation for -[NSUbiquitousKeyValueStore synchronize]. It gives you a decent idea of when to use it and what its limits are. In particular, pay attention to the fact that it makes no promises about when it actually synchronises the data, and implies that updates are uploaded to iCloud only a few times a minute, at most (and that may apply to the device as a whole, not just your app).
The key-value storage mechanism is intended to be very simple and used only for non-essential data, typically configuration information about your app. You shouldn't store user data in it, basically, or anything that resembles it. For that kind of data, use the file-based iCloud APIs. They're more complicated, but with them you have more insight into the sync state of your data, and most importantly you can be notified of conflicts and provide your own merge handler.
Is this my app's shortcoming or iCloud's shortcoming, and how can I prevent this scenario from occurring?
This is an app shortcoming and expected behaviour from iCloud. You can account for this in various ways, but in general, this won't be easy. Especially with >2 devices, there are scenarios where conflicting changes will never be presented to a device to do resolution, as generally speaking the iCloud behaviour is "last change wins" (see my longer description below). Some thoughts:
instead of using an array of objects, use individual keys for each object. Obviously this depends on the semantics of your app, but if the objects are essentially independent, then this generally will give your app the behaviour it expects 🎉
if all the items are interlinked, then you will have to do your own conflict resolution. The best way to do this will depend heavily on your app + data semantics. E.g. maybe you could add a timestamp to your array, or to some objects in the array. You could use new key names for every save so that all devices eventually get all keys and can resolve conflicts (obviously this could chew through storage quickly!). Resolving conflicts might not be worth doing depending what you're already storing locally to help with this
Background
I recently had reason to research the topic of NSUbiquitousKeyValueStore change conflicts in some (tedious) depth. I found some information in two old WWDC videos that expand on current Apple documentation, specifically WWDC11 Adopting iCloud Storage, part 1 (currently available here, found via here) at locations 17:38 and subsequently (e.g. 19:27). Another is a WWDC12 iCloud Storage Overview talk (here originally via here) at 6:30 and 10:55. I subsequently verified the behaviour described below by running two devices, an iPhone 8 running iOS 15.2 and an iPad Air 2 running iOS 12.4 with a test program and lots of console logging in Xcode. What follows is my best guess of the intended behaviour and mechanism for conflict resolution.
Summary
When an individual key is saved by a device using NSUbiquitousKeyValueStore.default.set(value, forKey: key), a hidden timestamp is included with the key with the device time of that call. If/when the operating system syncs with the iCloud replica of the key value store, it examines the timestamps for each key and, if the iCloud timestamp is earlier in time, it saves the new key value and timestamp into the iCloud key value store. If the key value is saved, devices that are currently registered and online to receive notifications will be notified that this key has changed and can fetch the new value if they wish. If iCloud does NOT save the key value, NO notification will happen on any device, and the change is simply dropped.
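The mechanism described above can be modelled as a tiny last-writer-wins merge. This is a hypothetical JavaScript sketch of the observed behaviour, not Apple's actual implementation:

```javascript
// Hypothetical model of the "last writer wins" merge described above:
// each key carries a hidden timestamp, and an incoming write only
// replaces the stored value if its timestamp is strictly newer.
function mergeKeyValue(store, key, value, timestamp) {
  const existing = store.get(key);
  if (existing && existing.timestamp >= timestamp) {
    // iCloud already holds a newer (or equal) value: the change is
    // silently dropped and no device is notified.
    return false;
  }
  store.set(key, { value, timestamp });
  return true; // accepted: subscribed devices would be notified
}
```

This is why an offline device's stale write can vanish without any conflict callback: by the time it syncs, its timestamp is already older than what iCloud holds.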
Notes
If all devices on this iCloud account are online while in use (caveat low power mode, poor internet connection etc.), the result is generally exactly what you want: the app makes a change, it is saved in iCloud, it propagates to other devices. Notifications happen as expected, if a device has registered for them.
If device A saves a value while it is offline, and another device B later saves a value while it is online, then device A goes online, the change from device A is ignored, as iCloud now has a newer value with a later timestamp. B will never be notified of A's change. However, if A has registered for changes, A will get notified of the newer B value and can then decide if it should re-submit its value.
Because of this "last in wins" behaviour, multiple values that belong together should thus be saved together as a dictionary or array, as suggested in various Apple docs and talks.
Values that don't interact should be saved as individual keys - thus allowing most recent changes from multiple devices to successfully intermingle.
There is no automated way to test these behaviours. Back in Xcode 9 days, it was possible to UI script two simulators to verify sync worked as expected, but that hasn't worked in a while, which leaves manual testing as a poor and tedious substitute.
NSUbiquitousKeyValueStore is a great solution for many scenarios beyond simple app settings. Personally, I'd like to see more keys (e.g. 10k instead of 1k), but the general ease of setup and separated storage from a customer's iCloud quota is generally a joy.
There's no perfect solution in a real world environment where devices are not always reliably connected. Indeed, some customers may intentionally keep, say, an older iPad, mostly offline to save battery between intermittent usage. If you can keep your synced data in small discrete units and save it one value per key, sync will generally work as expected.

How does HTTP Live Streaming work?

I have created a sample application to demonstrate how HTTP Live Streaming works.
What I have done is this: I have one library that takes a video file (AVI, MPEG, MOV, .ts) as input and generates segment (.ts) and playlist (.m3u8) files for it. I store each playlist (as a string) in a linked list as I receive playlist data from the library.
I have written a basic web server that serves the requested segment and playlist files. I request the playlist.m3u8 file from the iPhone Safari browser, and it launches the QuickTime player, which requests the segment.ts files listed in the received playlist. After playing every segment listed in the current playlist, it requests the playlist again, and I respond with the next playlist file, which contains the next set of segment.ts files.
Is this what we call HTTP live streaming?
Is there anything else, other that this i need to do for implementing HTTP live streaming?
Thanks.
Not much more. If you are taking input streams of media, encoding them, encapsulating them in a format suitable for delivery, and preparing the encapsulated media for distribution by placing it such that it can be requested from an HTTP server, you are done. The idea behind HTTP Live Streaming is that it leverages existing Internet architecture that is already optimized for serving HTTP requests for reasonably sized resources.
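Concretely, the playlists being served are just text files. Here is a hypothetical JavaScript sketch of generating a live media playlist; the segment names, durations, and tag set are illustrative, not a complete HLS implementation:

```javascript
// Sketch of building a live HLS media playlist (.m3u8) for a sliding
// window of segments. Each segment is { uri, duration } (seconds).
function buildMediaPlaylist(firstSequence, segments, targetDuration) {
  const lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    `#EXT-X-TARGETDURATION:${targetDuration}`,
    `#EXT-X-MEDIA-SEQUENCE:${firstSequence}`,
  ];
  for (const seg of segments) {
    lines.push(`#EXTINF:${seg.duration.toFixed(1)},`, seg.uri);
  }
  // A live playlist omits #EXT-X-ENDLIST; the player keeps re-fetching
  // it to discover newly appended segments.
  return lines.join("\n") + "\n";
}
```

The re-request loop the asker describes is exactly the player polling this file; each refresh bumps `#EXT-X-MEDIA-SEQUENCE` and drops the oldest segments from the window.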
HTTP streaming renders many existing CDN solutions obsolete with their custom streaming protocols, custom routing and custom content caching.
You can also use the mediastreamvalidator command-line application on Mac OS X to validate the streams generated by your HTTP web server.
More or less, but there's also adaptive bit-rate streaming to take care of if you want your server to push files to iOS devices. That means your scope expands from a single "index.m3u8" file that tracks all the TS files to a master index that tracks the index files for each bit-rate you want to support, which in turn individually track the TS files encoded at their respective bit-rates.
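A master playlist is itself just another .m3u8 that points at one variant playlist per bit-rate. A hypothetical sketch (bandwidth values and URIs are made up for illustration):

```javascript
// Sketch of a master playlist referencing one variant media playlist
// per bit-rate. Each variant is { bandwidth, resolution, uri }.
function buildMasterPlaylist(variants) {
  const lines = ["#EXTM3U"];
  for (const v of variants) {
    lines.push(
      `#EXT-X-STREAM-INF:BANDWIDTH=${v.bandwidth},RESOLUTION=${v.resolution}`,
      v.uri
    );
  }
  return lines.join("\n") + "\n";
}
```

The player fetches this once, then switches between the listed variant playlists as its measured throughput changes.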
It's a good amount of work, but mostly routine/repetitive once you've got the hang of the basics.
For more on streaming, your bible, from the iOS standpoint, should ALWAYS be TN2224. Adhering closely to the specs in the Technote, is your best chance of getting through the App Store approval process vis-a-vis streaming.
Some people don't bother (I've been building a streaming app over the past couple of months and have looked at the HTTP logs of a whole bunch of video apps that don't quite stick to the rules). Sometimes Apple notices, sometimes they don't, and sometimes the player is just too big for Apple to interfere.
So it's not very different there from every other aspect of your app's functionality that undergoes Apple's scrutiny. It's just that there are ways you can be sure you're on the right track.
And of course, from a purely technical standpoint, as #psp1 mentioned, the mediastreamvalidator tool can help you figure out whether your streams are, at their very core (even if not in terms of their overall capabilities), compatible with what's expected of HLS implementations.
Note: you can either roll your own encoding solution (with ffmpeg; the plus is that you have more control, the minus is that it takes time to configure and get working just right. Plus, once you start talking even the smallest amount of scale, you run into a whole host of other problems. And once you're done with all the technical hard work, you'll find that it was the easy part: now you have to figure out which license you need for shipping an H.264 encoder and jump through all the legal/procedural hoops to get one).
The easier solution for a developer without a legal/accounting team that could fill a football field is, IMO, to go third-party with services like Encoding.com or Zencoder, which provide encoding a la carte or for a monthly fee. The plus is that they've taken care of all the licensing hassle and simply provide a pay-to-use service, which can also be extremely useful when you're building a project for a client. The minus is that you're now dependent on Zencoder/Encoding.com, the flip side of which you'll discover when your encoding jobs fail for a whole day because their servers are down, or when the API doesn't act as you expect or as documented!
But anyhow, those are about all the factors you have to grok before pushing an HLS server into production!

YSlow alternatives - optimisations for small websites

I am developing a small intranet-based web application. I have YSlow installed, and it suggests I do several things, but they don't seem relevant for me,
e.g. I do not need a CDN.
My application is slow so I want to reduce the bandwidth of requests.
What rules of YSlow should I adhere to?
Are there alternative tools for smaller sites?
What is the check list I should apply before rolling out my application?
I am using ASP.NET.
Bandwidth on intranet sites shouldn't be an issue at all (unless you have VPN users, that is). If you don't and it's still crawling, it probably has more to do with the backend than with the front-facing structure.
If you are trying to optimise for remote users, some of the same things apply to try and optimise the whole thing:
Don't use 30 stylesheets - concatenate them into one
Don't use 30 JS files - concatenate them into one
Consider compressing both JS and CSS using minifiers or the YUI compressor.
Consider using sprites (single images containing multiple states, e.g. button-up and button-down, one above the other)
Obviously, massive images are a no-no
Make sure you send expires headers to make sure stylesheets/js/images/etc are all cached for a sensible amount of time.
Make sure your pages aren't ridiculously large. If you're in a controlled environment and you can guarantee JS availability, you might want to page data with AJAX.
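On the expires-headers point above, the goal is simply to send far-future caching headers with static assets. Here is a hypothetical helper, shown in JavaScript for illustration (the equivalent headers can be set from ASP.NET):

```javascript
// Sketch: build far-future caching headers for static assets.
// maxAgeSeconds is how long browsers may reuse the cached copy
// without revalidating (e.g. 30 days for versioned CSS/JS/images).
function buildCacheHeaders(maxAgeSeconds, now = new Date()) {
  const expires = new Date(now.getTime() + maxAgeSeconds * 1000);
  return {
    "Cache-Control": `public, max-age=${maxAgeSeconds}`,
    // Legacy header for HTTP/1.0 clients; Cache-Control wins when both exist.
    "Expires": expires.toUTCString(),
  };
}
```

The usual caveat applies: only send far-future headers for assets whose filenames change when their content does, otherwise users can get stuck on stale versions.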
To begin, limit the number of HTTP requests made for images, scripts and other resources by combining where possible. Consider minifying them too. I would recommend Fiddler for debugging HTTP.
Be mindful of the size of Viewstate; set EnableViewState = false where possible. E.g. for dropdown list controls that never have their list of items changed, disable Viewstate and populate them in Page_Init or override OnLoad. "TRULY understanding Viewstate" is a must-read article on the subject.
Oli posted an answer while I was writing this, and I have to agree that bandwidth considerations should be secondary or tertiary for an intranet application.
I've discovered Page Speed since asking this question. It's not really for smaller sites but is another great Firebug plug-in.
Update: as of June 2015, the Page Speed plugins for Firefox and Chrome are no longer maintained or available; instead, Google suggests the web version.
Pingdom tools provides a quick test for any publicly accessible web page.