Persist file descriptor across page reload in Chrome with FileSystemFileHandle - native-file-system-api-js

As we all know, browsers do NOT support repopulating the file path of an `<input type="file">` across a page reload, for security reasons. But Chrome 86 recently shipped the File System Access API, which simplifies things and lets a page read the contents of a user-selected file. If we pair it with the older File System API (do not confuse the two), then with the aid of window.requestFileSystem we can persist files between loads, which partially solves the Remember and Repopulate File Input Stack Overflow question.
I wonder whether we can repopulate a file using FileSystemFileHandle across a tab reload. Doing it via requestFileSystem limits us on file size, since we copy the file on every selection. To be specific, I want to be able to upload and read a user-selected file after the page has been reloaded, without saving a copy to the sandboxed local file system.
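For context, the basic flow with the new API looks roughly like this (a minimal sketch; showOpenFilePicker must be called from a user gesture, and the function name is mine):

```js
// Minimal sketch: pick a file and read it with the File System Access API.
// No copy is written anywhere; we only hold a handle to the user's file.
async function pickAndRead() {
  const [handle] = await window.showOpenFilePicker(); // FileSystemFileHandle
  const file = await handle.getFile();                // a regular File object
  const text = await file.text();
  console.log(file.name, text.length);
  return handle;          // this handle is what we'd like to keep across reloads
}
```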
I also posted this question to GitHub issues.

The GitHub issue had an answer, reposting slightly edited here to save others a click:
You can store the file handle you get from showOpenFilePicker in IndexedDB and read that back on subsequent page loads. In general regaining access might require the user to accept another permission prompt, but for the case of a page reload the current Chrome implementation will likely keep the permission grant around long enough to not require extra prompts. We have some ideas to extend that to session restore as well, but nothing concrete currently.
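In code, that suggestion looks roughly like the sketch below (the database and store names are arbitrary; queryPermission/requestPermission are the Chrome permission calls mentioned above):

```js
// Store the FileSystemFileHandle in IndexedDB so it survives a page reload.
const DB_NAME = 'file-handle-db';   // arbitrary names for illustration
const STORE = 'handles';

function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () => req.result.createObjectStore(STORE);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveHandle(handle) {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE, 'readwrite');
    tx.objectStore(STORE).put(handle, 'lastFile');
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

async function loadHandle() {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction(STORE).objectStore(STORE).get('lastFile');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// After a reload: fetch the stored handle and re-check permission before reading.
async function restoreFile() {
  const handle = await loadHandle();
  if (!handle) return null;
  if (await handle.queryPermission({ mode: 'read' }) !== 'granted' &&
      await handle.requestPermission({ mode: 'read' }) !== 'granted') {
    return null;             // user declined the (possible) extra prompt
  }
  return handle.getFile();   // the original file, no copy stored anywhere
}
```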

Related

Writing files safely (WinRT)

What approach do you use to write critical app files like settings, configuration files, or user files in WinRT, or in general?
To illustrate my concern: in my app I save the list of user-selected data sources as a JSON file. When the user updates the list and saves it, I simply overwrite the current file with the newly serialized JSON list. But if the app were killed from Task Manager, or the computer lost power at the very moment the file was being written, the file would be left in an inconsistent state and would probably prevent the app from launching, or at the very least the user would lose data.
I have considered writing to a different file and then swapping the two when finished. Is that the best possible solution?
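For what it's worth, the "write to a different file, then swap" pattern from the question looks roughly like this; the sketch below uses Node.js purely to illustrate the general idea (the WinRT equivalent would write to a temporary StorageFile and then move/replace it):

```js
// General sketch of the safe-save pattern: write the new content to a temp file,
// then swap it into place in one step. A crash leaves either the old file or the
// new file on disk, never a half-written one.
const fs = require('fs');

function saveSettingsSafely(filePath, settingsObject) {
  const tmpPath = filePath + '.tmp';                       // hypothetical temp-file name
  fs.writeFileSync(tmpPath, JSON.stringify(settingsObject));
  fs.renameSync(tmpPath, filePath);                        // replaces the target in one step
}
```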

Word Files are opening in Readonly mode

I have a program which gets a Word file from a server and, after editing, saves it back to the server. It was working and I was able to save files back to the server, but suddenly files are opening in read-only mode. I have searched a lot on Google and tried every option, but nothing seems to work. Any ideas or clues?
If the file displays as read-only in Word, that means Word thinks the file is read-only, which means it has seen something to make it think that. Since Word does not integrate with access control methods over WebDAV, it can only be that Word has determined that the server does not support the operations required for writing. This can be that the server:
does not support uploading (i.e. PUT); the server reports this in the response to an OPTIONS request, so please check for that
does not support locking (i.e. LOCK)
does support locking, and the file is locked. But this usually gives the user a specific warning
Locking comes into play in different ways depending on how you're connecting to the server (on Windows you can use either a mapped drive or a network location), the means of opening (clicking a link in a web page, which uses the SharePoint DLL; opening directly from a mapped drive; or opening from the file-open dialog in MS Office), and of course the application doing the opening (i.e. MS Office, OpenOffice, etc.).
Depending on what combination of the above you're using, locking might or might not be required to edit.
WebDAV indicates locking support in the following ways:
- the supported levels header
- the MS-Author-Via header, which should return "webdav"
- the presence of LOCK in the OPTIONS response.
So you might need to check for the presence of those, probably by using Wireshark or similar.
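If Wireshark feels like overkill, a rough way to eyeball those headers is an OPTIONS request from the browser's dev-tools console (a sketch only; the URL is a placeholder and cross-origin restrictions may hide the headers):

```js
// Inspect the WebDAV server's OPTIONS response for write/lock support.
const res = await fetch('https://server.example/dav/document.docx', { method: 'OPTIONS' });
console.log('Allow:', res.headers.get('Allow'));                 // should include PUT and LOCK
console.log('MS-Author-Via:', res.headers.get('MS-Author-Via'));
console.log('DAV:', res.headers.get('DAV'));                     // compliance classes, e.g. "1,2"
```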

Prevent file being overwritten

Imagine there are three or more independent locations where a file can be modified. These locations communicate with each other through email or physical mail (direct flash drive exchange). Although there is plenty of room for things to go wrong (simultaneous edits to the file that mess everything up), this client won't change his process much. He would rather call everyone to say that he is working on the latest update, or tell the other guys that he is waiting for the third guy's latest update. Anyway, at some point after several exchanges, due to one participant's unintentional error, THE LATEST VERSION of the file eventually gets mixed up. From that point on, everyone searches for the latest version BY LOOKING AT THE CONTENT of the file.
This client wants to have a central location (he actually has one, a folder on his PC) and to let everybody (including himself) copy any new or presumably new file to this location, while preventing the file's latest version from being copied over (overwritten). From this location he has to be able to easily copy, send or open the file and work on it.
So, here is my concept (2 steps):
Step 1: I wrote an add-in for the main application where this file is created or edited. The add-in prompts the user to give the file a version number with every save command invoked from the editing application. In fact the file can be re-saved multiple times without being considered modified (file attributes such as creation and save times do not mean much here). That said, the user can cancel my add-in's prompt and still save the file without recording a new file version.
Step 2: multiple possible solutions:
Solution A: I'm thinking of having a folder/file watcher and preventing the latest version of the file from being overwritten. As you know, FileSystemWatcher fires the change/delete etc. events AFTER THE FACT, so I would have to copy the overwritten file back after the fact (with some tricks; see the sketch after this list).
Solution B: have a database that stores all versions of the files, plus some shell extension to extract/view files from the database. Move all copied/pasted files into the database (my program's folder) and restore the latest file in the working folder after the watcher fires a change/delete event.
Solution C: find built-in Windows tools (APIs etc.) to rely on as much as possible, with some programming around them.
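To make solution A concrete, here is a rough sketch of the watch-and-restore idea (illustrated with Node.js rather than .NET's FileSystemWatcher; the paths are placeholders, and deciding whether a change is a legitimate new version is the tricky part left out here):

```js
// Watch the central folder and, after the fact, restore the backed-up latest
// version of a file that was overwritten. Assumes a backup copy of the current
// latest version is kept in backupDir.
const fs = require('fs');
const path = require('path');

const watchedDir = 'C:/central-location';        // placeholder paths
const backupDir = 'C:/central-location-backup';

fs.watch(watchedDir, (eventType, filename) => {
  if (eventType === 'change' && filename) {
    console.log(`${filename} changed; restoring the backed-up latest version`);
    // A real implementation would first check whether the incoming copy is
    // actually older, and ignore the change event caused by the restore itself.
    fs.copyFileSync(path.join(backupDir, filename), path.join(watchedDir, filename));
  }
});
```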
Any ideas?
Thanks in advance.

How to tell Windows Explorer not to request file details and thumbnails in certain folder?

Is there a way (via shell extension or registry setting) to tell Windows Explorer that it shouldn't read files in the folder being shown in order to extract metadata or create thumbnails?
The problem is that when the user navigates to the folder, Windows Explorer attempts to read all files in the folder and extract certain metadata from them. If the medium is slow, this takes ages and causes unnecessary load on the file system. This is especially true in the case of thumbnails, where the whole graphic file is read.
I am looking for ways to do this (restrict Explorer) in code, so "don't use Thumbnail mode" is not an acceptable answer :).
Update: per-user settings unfortunately won't work, because we as a disk provider can deal only with our own disk (and the user might want separate settings for regular disks and virtual disks). I believe there must be some way to "explain" to the OS that the drive is slow.
Maybe there's some IRP at the driver level that we need to handle to tell the OS that the medium is slow?
Is there a way (via shell extension or registry setting) to tell Windows Explorer that it shouldn't read files in the folder being shown in order to extract metadata or create thumbnails?
Not that I know of, but depending on your priorities regarding the use-case details you outlined, there are still two options that might approximate the desired result:
Via group policy
Note that this essentially expands on the network-folder-related aspect of Fred's answer, which you dismissed in your update; however, you state that you are able to deploy shell extensions or registry settings, and the following two group policies simply apply the latter by administrative means:
User Configuration -> Administrative Templates -> Windows Components -> Windows Explorer:
Turn off the display of thumbnails and only display icons **on network folders**
Turns off the caching of thumbnails in hidden thumbs.db files.
This boils down to the following registry settings:
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Explorer]
"DisableThumbnailsOnNetworkFolders"=dword:00000001
"DisableThumbsDBOnNetworkFolders"=dword:00000001
Of course this is still not per folder, but at least it is limited to network folders and leaves regular disks and virtual disks alone.
Via hackish workaround
Given your statement that "we as a disk provider can deal only with our own disk", there might be a hackish workaround, though I'm afraid it lacks the last mile (untested by myself).
Starting from Chris W. Rea's own answer to How can I suppress those annoying Thumbs.db files in Windows Vista and Windows 7?:
Also worth knowing: In Vista and Windows 7, Thumbs.db applies to network folders only. For local folders, Vista and Windows 7 instead save thumbnail cache information to a database in a local folder at "%userprofile%\AppData\Local\Microsoft\Windows\Explorer"
Continuing from there, Wil claims the following potentially clever solution works on a per-folder basis:
Go to the drive and create a file called thumbs.db (in notepad or anything), then change the permissions on the file for everyone (including SYSTEM) to deny all.
Unfortunately, aside from the automation required to create the dummy thumbs.db in each folder, the outcome depends on how Explorer reacts to the inaccessible file: because caching is optional as per group policy, it might well display thumbnails without caching them, making the bandwidth issue even worse in turn ...
Good luck!
I'm not sure if you can disable thumbnail generation/display for certain folders, but this article talks about a script which can quickly disable it via a context menu entry.
The script modifies a value in the registry key HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\. I suppose you could find something similar in that key for the other metadata. ShowInfoTip sounds promising. There might be relevant information in other nearby keys.
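If ShowInfoTip turns out to be the right value, the registry change would presumably look like this (untested, applies globally rather than per folder, shown in the same format as the policy settings above):
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"ShowInfoTip"=dword:00000000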
This may be a complete non-answer depending on your needs, but how about storing the files without the file extensions the OS wants to make thumbnails of? Call a file file.jpg.abc and Explorer won't be reading it for thumbnails, for sure.

iPad - how should I distribute offline web content for use by a UIWebView in application?

I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user selectable bundles.
As I see it I have a number of options but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad.
Put the content in a SQLite database. This seems to require some third-party libraries like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, what option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is the content creation and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creation of tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and the third options. If the content is created by you (like levels, etc.), then simply store it on the server, download it as a ZIP and store it locally. Use Core Data to store an index of the things you've downloaded, like the path of the folder each item is stored in and its name/origin/etc., but not the raw data. Databases are not meant to hold massive amounts of raw content, but rather structured data. And even if they can, I would not do so.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower as they grow large. If you barely scan through the data, use the file system directly; it may prove faster and cheaper.
The index in Core Data could store all relevant data. You will have very easy and very quick access. Opening a piece of content will load it from the file system, which is quick, cheap and doesn't strain the index.
Why would you do that? Redirecting your web view to a file:// URL will have the same effect, won't it?
Should be answered by now.
If you don't have control, then do the same as above but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a folder common to all bundles. When downloading, download and parse the XML first, then download each resource one by one. This spares you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the folder structure of the bundle there. This way the content will work both online and offline without changes.
With a little effort, you could even keep track of file versions by including a version number in the XML file for each resource, so that if your content has been partially updated, only the files with changed version numbers have to be downloaded again.
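A hypothetical manifest for one bundle could look something like this (the element and attribute names are made up for illustration):

```xml
<bundle name="chapter-1" version="3">
  <resource path="index.html" version="3"/>
  <resource path="css/style.css" version="2"/>
  <resource path="images/figure1.png" version="1"/>
</bundle>
```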