Safari ITP 2.3: Capped Lifetime For All Script-Writeable Website Data

I am a little confused about Safari's ITP 2.3 policy which caps the lifetime of script-writable storage in the browser to 7 days.
The official article states that:
After seven days of Safari use without the user interacting with a web page on website.example, all of website.example’s non-cookie website data is deleted.
This definitely includes localStorage. Does anyone know for sure whether
IndexedDB
CacheStorage
Service Worker
are cleared as well?

Looking at the relevant WebKit commit, it clearly includes IndexedDB; it does not seem to include CacheStorage or Service Workers.

This post from March 24th lists what is affected.
On that list, I can see IndexedDB and something they call "Service Worker Registrations".
https://webkit.org/blog/10218/full-third-party-cookie-blocking-and-more/

Related

Workarounds for Safari ITP 2.3

I am very confused as to how Safari’s ITP 2.3 works in certain respects, and why sites can’t easily circumvent it. I don’t understand under what circumstances limits are applied, what the exact limits are, what they are applied to, and for how long.
To clarify my question I broke it down into several cases. I will be referring to Apple’s official blog post about ITP 2.3 [1] which you can quote from, but feel free to link to any other authoritative or factually correct sources in your answer.
For third-party sites loaded in iframes:
Why can’t they just use localStorage to store the values of cookies, and send this data along not as actual browser cookies 🍪, but as data in the body of the request? Similarly, they can parse the response to update localStorage. What limits does ITP actually place on localStorage in third-party iframes?
If the localStorage is frequently purged (see question 1), why can’t they simply use postMessage to tell a script on the enclosing website to store some information (perhaps encrypted) and then spit it back whenever it loads an iframe? (A sketch of this relay idea appears after this list of questions.)
For sites that use link decoration:
I still don’t understand what the limits on localStorage are for third-party sites in iframes which did NOT get classified as link-decorating sites. But let’s say they are. According to [1], Apple only starts limiting things further if there is a query string or fragment. But can’t a website rather trivially store this information in the URL path before the query string, i.e. /in/here instead of ?in=here … surely large companies like Google can trivially choose to do that?
In the case where a site has been labeled as a tracking site, does that mean all its non-cookie data is limited to 7 days? What about cookies set by the server, aren’t they exempt? If so, simply make a request to your server to set the cookie instead of using JavaScript. After all, the operator of the site is very likely to also have access to its HTTP server and app code.
For all sites:
Why can’t a service like Google Analytics or Facebook’s widgets simply convince a site to additionally add a CNAME to their DNS and put Google’s and Facebook’s servers under a subdomain like gmail.mysite.com or analytics.mysite.com? And then boom, they can read and set cookies again, in some cases even on the top-level domain for website owners who don’t know better. Doesn’t this completely defeat the goals of Apple’s ITP, since Google and Facebook have now become a “second party” in some sense?
Here on Stack Overflow, when we log out on iOS Safari the Stack Overflow network is able to log us out of multiple sites at once … how is that even accomplished if no one can track users across websites? I have heard it said that “second-party cookies” can still be stored, but what exactly makes a second-party cookie different from a third-party one?
My question is broken down into 6 cases, but the overall theme is the same for each: how does Apple’s latest ITP work in that case, and how does it actually block potentially malicious tracking (to the point where a well-funded company can’t just use the workarounds above) while still allowing legitimate use cases?
[1] https://webkit.org/blog/9521/intelligent-tracking-prevention-2-3/
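To make question 2 concrete, here is a minimal sketch of the postMessage relay the question describes. The origins publisher.example and tracker.example, the message shapes, and the /collect endpoint are all hypothetical; whether ITP limits this pattern is exactly what is being asked.

    // Script running inside the third-party iframe (served from tracker.example).
    // It asks the embedding page to hold an identifier in *its* first-party
    // localStorage and hand it back on each load.
    const PARENT_ORIGIN = "https://publisher.example"; // hypothetical embedding site

    window.addEventListener("message", (event: MessageEvent) => {
      if (event.origin !== PARENT_ORIGIN) return;
      if (event.data?.type === "id-value") {
        const id: string = event.data.value ?? crypto.randomUUID();
        // Persist (or re-persist) the id on the parent side.
        window.parent.postMessage({ type: "set-id", value: id }, PARENT_ORIGIN);
        // Send the id to the tracker's server in the request body, not as a cookie.
        void fetch("https://tracker.example/collect", {
          method: "POST",
          body: JSON.stringify({ id }),
        });
      }
    });
    window.parent.postMessage({ type: "get-id" }, PARENT_ORIGIN);

    // Script running on the embedding page (publisher.example): a storage relay.
    const IFRAME_ORIGIN = "https://tracker.example"; // hypothetical third party

    window.addEventListener("message", (event: MessageEvent) => {
      if (event.origin !== IFRAME_ORIGIN) return;
      if (event.data?.type === "get-id") {
        (event.source as Window | null)?.postMessage(
          { type: "id-value", value: localStorage.getItem("relayed-id") },
          IFRAME_ORIGIN
        );
      } else if (event.data?.type === "set-id") {
        localStorage.setItem("relayed-id", event.data.value);
      }
    });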

Is IndexedDB on Safari guaranteed to be persistent?

Similar to this question, is IndexedDB guaranteed to be persistent, i.e. will Safari not reclaim its disk space if the device is low on storage?
Safari has a "no eviction" policy, meaning it will not automatically clear IndexedDB under low-disk pressure unless the user does so manually.
IndexedDB is a fast-evolving feature, and you can expect the eviction policy to change at any time with no announcement. You should always build with fallback options.
Chrome has an explicit persistent-storage option that guarantees no eviction once the user approves persistent storage, and we can expect Safari to do the same at some point, based on its track record of following Chrome in implementing PWA features (though it is taking years, with very poor documentation).
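For reference, the explicit persistent-storage request mentioned above is exposed through the StorageManager API. A minimal, feature-detected sketch (Safari's support for persist() has lagged, so treat a missing API or a false result as "eviction is possible"):

    // Ask the browser not to evict this origin's storage under disk pressure.
    // Widely supported in Chromium-based browsers; feature-detect before calling.
    async function requestPersistence(): Promise<boolean> {
      if (!navigator.storage || !navigator.storage.persist) {
        console.log("StorageManager.persist() not available; plan for eviction.");
        return false;
      }
      const alreadyPersisted = await navigator.storage.persisted();
      const granted = alreadyPersisted || (await navigator.storage.persist());
      console.log(`Persistent storage granted: ${granted}`);
      return granted;
    }

    void requestPersistence();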
According to this blog post from the WebKit team, IndexedDB is not guaranteed to be persistent as of iOS and iPadOS 13.4 and Safari 13.1 on macOS. Safari will delete it after seven days of Safari usage without interaction with the site:
Now ITP has aligned the remaining script-writable storage forms with
the existing client-side cookie restriction, deleting all of a
website’s script-writable storage after seven days of Safari use
without user interaction on the site. These are the script-writable
storage forms affected (excluding some legacy website data types):
Indexed DB
LocalStorage
Media keys
SessionStorage
Service Worker registrations and cache
However, IndexedDB is pretty much guaranteed to be persistent if your Web app is added to the Home Screen, as the Web app will have its own usage context and, due to its very nature, it'd be impossible to use it for seven days without accessing the site where it came from:
[...] Web applications added to the home screen are not part of Safari
and thus have their own counter of days of use. Their days of use will
match actual use of the web application which resets the timer. We do
not expect the first-party in such a web application to have its
website data deleted.
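A rough way to check at runtime whether you are in that separate Home Screen context is to test the display mode; note that navigator.standalone is a non-standard, iOS-Safari-only property, so the cast below is an assumption for typing purposes:

    // Returns true when the page is running as a Home Screen / standalone web app,
    // which (per the quote above) keeps its own counter of days of use.
    function isHomeScreenApp(): boolean {
      const standaloneDisplayMode = window.matchMedia("(display-mode: standalone)").matches;
      const iosStandalone =
        (navigator as Navigator & { standalone?: boolean }).standalone === true;
      return standaloneDisplayMode || iosStandalone;
    }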
Regardless of the above, I would personally not trust IndexedDB for any kind of long-term data storage. I've found it quite ropey, and not long ago it broke altogether in Safari 14.1.1.
I have no definitive answer, but after using IndexedDB for over 2 years in a big browser/desktop (Electron-based) application, I would attribute multiple data losses to IndexedDB, or at least to IndexedDB in Chrome. So my answer would be no. Don't rely on it.
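Echoing the advice above to build with fallback options, here is a minimal sketch of treating IndexedDB purely as a cache. The object store name and the /api/profile endpoint are made up for illustration; any read failure or missing record falls back to refetching from the server.

    // Treat IndexedDB as a cache, never as the source of truth. If the database is
    // unavailable, corrupted, or has been cleared (e.g. by ITP), refetch instead.
    async function loadProfile(db: IDBDatabase): Promise<unknown> {
      try {
        const cached = await new Promise<unknown>((resolve, reject) => {
          const tx = db.transaction("profiles", "readonly");
          const request = tx.objectStore("profiles").get("me");
          request.onsuccess = () => resolve(request.result);
          request.onerror = () => reject(request.error);
        });
        if (cached !== undefined) return cached;
      } catch {
        // Fall through to the network on any IndexedDB error.
      }
      const response = await fetch("/api/profile");
      return response.json();
    }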

How to get third-party API up-to-date?

So, I once ran into this problem. I had delivered a website that used the SoundCloud API. Everything worked properly: content was extracted from the JSON and placed in the layout of the website. However, one day I received an email from the owner of the website indicating that the site no longer worked properly. I investigated and concluded that the "problem" was not on my side but on SoundCloud's side. I studied the SoundCloud API page and concluded that the API had received a major update, which broke the connection between SoundCloud and the site.
Lately I have been trying out many new APIs too, including those from Instagram and Dribbble. I was therefore wondering whether it is at all possible to reduce such problems in the future, or whether it would be appropriate to monitor the API pages of these third-party APIs?
There's no "right" answer. After many years of using and maintaining many APIs here are some of the conclusions I've come to:
The best providers let you work with a specific version of their API whose interface and expected behavior never change. They might release bug fixes and new endpoints, but you can be confident that as long as that API version is supported it will not break your system (a small illustration of version pinning follows this answer).
A good provider will provide an end-of-life date for each version of their API. It's up to you to keep track of when you need to update.
Paid services will often be supported longer than free services. Plus the contract / SLA will guarantee it remains available for a specific amount of time.
The most popular APIs often have mailing lists and/or blogs. For those that offer it, sign up to be notified of updates. For those that don't you'll have to monitor their blogs or news posts. And I suggest not using any service that would drop support for an API version without warning.
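As a small illustration of the first point above, here is a sketch of pinning a provider's API version explicitly. The base URL, path, and version header are hypothetical; real providers document their own versioning scheme, e.g. a /v2/ path segment or an Accept header.

    // Pin the API version so a provider-side upgrade does not silently change the
    // interface underneath you. Endpoint, path, and header are illustrative only.
    const API_BASE = "https://api.example.com/v2"; // version pinned in the path

    async function getUserTracks(userId: string): Promise<unknown> {
      const response = await fetch(`${API_BASE}/users/${encodeURIComponent(userId)}/tracks`, {
        headers: { Accept: "application/json; version=2" }, // some providers version via headers
      });
      if (!response.ok) {
        throw new Error(`API request failed with status ${response.status}`);
      }
      return response.json();
    }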

Delicious API feeds.delicious.com no more?

In the past, I've been using the Delicious API available under feeds.delicious.com. When running this code today, I found that the corresponding hostname is no longer available (I first noticed this a few days ago). I've already asked Delicious support directly about the state of the API but haven't received an answer yet. So I thought somebody here might have more recent information on whether this is a temporary outage or the API has been shut down completely?
This was likely part of the rollback to Delicious's old architecture in January 2016:
Fortunately for us, the version that the javascript site replaced has been kept alive at previous.delicious.com. This was built using a much more traditional framework, and it’s great! In fact, many of our longtime users have continued to prefer it over the main site, and frankly, so do we. Therefore, we are switching to this platform for our main site, and this transition will position us to quickly iterate in our ongoing efforts to keep Delicious thriving.
The auth URL on the documentation's OAuth page (delicious.com/auth/authorize) 404s for me as well, so I have a feeling this has indeed been retired.

Is it possible to be notified when W3C specs get updated?

WHATWG recently announced that it's now possible to get notified about changes to some sections of its specifications (with more on the way). Is there a similar mechanism for W3C specs? Is it possible to get notified of updates by email or through a feed?
If the specs are posted online (and I have to assume that W3C specs are), then why not use a service like http://www.changedetection.com/ ?
There are several similar services which will notify you whenever a web page changes, and you can even limit the checking to certain parts of the page (to avoid reacting to changing banner headlines, for instance).
Hope this helps.
Many of the W3C specs are managed with version control systems like Git or Mercurial. Apart from being able to actually clone the repository and regularly pull in the changes, one can simply subscribe to the updates in RSS or Atom format. Every spec has its own channel, but here's the page that seemingly lists them all, along with the corresponding links - to repositories, zip/tarballs, or RSS/Atom feeds.
https://dvcs.w3.org/hg/
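As a quick sketch of consuming one of those feeds from a browser context: the repository name in the URL below is a placeholder, so copy the real atom-log link for the spec you care about from the index page above.

    // Fetch a per-spec Atom feed from the hgweb index linked above and print the
    // most recent entry titles. Replace SOME-SPEC with a real repository name.
    const FEED_URL = "https://dvcs.w3.org/hg/SOME-SPEC/atom-log"; // placeholder repo name

    async function listRecentChanges(): Promise<void> {
      const response = await fetch(FEED_URL);
      const xml = new DOMParser().parseFromString(await response.text(), "application/xml");
      const titles = Array.from(xml.querySelectorAll("entry > title")).map(
        (node) => node.textContent ?? ""
      );
      console.log(titles.slice(0, 10));
    }

    void listRecentChanges();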
While this is definitely great, some of the specifications, like the File API, are still missing from the list.