How do I clear the cache in OSXFUSE? I'm using it to mount a custom remote filesystem, and OSXFUSE caches file content. That is great for fast access to the same file, but sometimes I need to re-read the file content from the remote server.
I am not sure that you can clear this cache, but you can disable the UBC (Unified Buffer Cache) by specifying 'noubc' in the mount options (see the OSXFUSE mount options).
This disables caching of your responses to 'read' calls, so osxfuse will call into your filesystem on every read instead of serving data from the cache.
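For example, a minimal sketch assuming an sshfs-style filesystem backed by OSXFUSE (the binary name, remote path, and mount point are placeholders; a custom filesystem can pass the same option in its own mount arguments):

# Pass noubc at mount time so reads always go to the remote filesystem
sshfs user@remote:/data /Volumes/remote -o noubc
# If your OSXFUSE version supports it, 'nolocalcaches' also disables the attribute and name caches:
# sshfs user@remote:/data /Volumes/remote -o nolocalcaches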
Related
I'm wondering if there is any way to practically speed up directory listings of an s3fs mount? I have a WebDAV server, used only for read operations, that basically accesses my s3fs mount. The problem is that listing directories is slow, while transfer speed is fine.
So I started to look around the web a bit and stumbled across "JuiceFS"; sadly this was also not an option, for several reasons. Then I tried "vmtouch" to index the mounted S3 storage into local memory, but this does not work either, as it's a shared resource managed by the FUSE kernel extension.
Even using s3fs's built-in cache does not solve the issue; instead it makes things even worse, as each file first gets downloaded from S3 into the local cache and only then served via WebDAV ...
Is there no way to just speed up directory listings using S3? Basically, this is all I need in the end, and no fancy POSIX-compatible block device like JuiceFS, which basically creates its own logic on top of your S3 bucket ... Not what I was searching for.
Unfortunately s3fs 1.91 has poor readdir performance. There are a few open issues and pull requests that track future improvements:
1. Option to not use head requests
2. Consider changing -o notsup_compat_dir default
3. Consider changing -o noobj_cache default
4. Increase -o multireq_max
5. Issue parallel requests in get_object_attribute
You can toggle #2-#4 via command-line flags today, but #5 is still in progress. #1 is the big win that would give a 100x speedup, but it trades away some POSIX compatibility, e.g., no UID/GID and no permissions. One alternative that you can try today is goofys, which implements #1.
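A hedged sketch of both approaches (the bucket name and mount point are placeholders, and the s3fs flag names are as of 1.91, so check your version's man page):

# s3fs: reduce the number of requests made per directory listing
s3fs mybucket /mnt/s3 -o notsup_compat_dir -o enable_noobj_cache -o multireq_max=50

# goofys: skips the per-object HEAD requests entirely, at the cost of POSIX features
goofys mybucket /mnt/s3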
I have two machines in our datacenter: a public server and an internal storage server.
The public server exposes part of the internal server's storage through FTP. When files are uploaded to the FTP server, they in fact end up on the internal storage. But when watching the inotify events on the internal server's storage, I notice the file gets written in chunks, probably due to buffering on the client side. The software on the internal server watches the inotify events to determine whether new files have arrived, but due to the way NFS writes the files, there is no good way of telling when a file is complete. Is there a way of telling the NFS client to write files in only one operation, or is there a workaround for this behaviour?
EDIT:
The events I get on the internal server, when uploading a file of around 900 MB, are:
./ CREATE big_buck_bunny_1080p_surround.avi
# after the CREATE I get around 250K MODIFY and CLOSE_WRITE,CLOSE events:
./ MODIFY big_buck_bunny_1080p_surround.avi
./ CLOSE_WRITE,CLOSE big_buck_bunny_1080p_surround.avi
# when the upload finishes I get a CLOSE_NOWRITE,CLOSE
./ CLOSE_NOWRITE,CLOSE big_buck_bunny_1080p_surround.avi
Of course, I could listen to the CLOSE_NOWRITE event, but the inotify documentation says:
close_nowrite
A watched file or a file within a watched directory was closed, after being opened in read-only mode.
Which is not exactly the same as 'the file is complete'. The only workaround I see is to use .part or .filepart files, rename them to the original filename once the upload finishes, and ignore the .part files in my storage watcher. The disadvantage is that I'd have to explain to customers how to upload with .part, and not many FTP clients support this by default.
Basically, if you want to know when the write operation is completed, monitor the IN_CLOSE_WRITE event.
IN_CLOSE_WRITE gets "fired" when a file that was open for writing gets closed. Even if the file is transferred in chunks, the FTP server will close the file only after the whole file has been transferred.
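A minimal sketch using inotifywait from inotify-tools (the watched directory is a placeholder; from C you would watch for the IN_CLOSE_WRITE mask instead):

# React only when a file that was open for writing is closed
inotifywait -m -e close_write --format '%w%f' /srv/storage/uploads |
while read -r file; do
    echo "upload finished: $file"
    # hand the completed file to the processing step here
done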
My question is similar to this one, but the solutions provided haven't helped me: Force applicationCache to reload cached files
Here's the rundown. I currently have a Sencha Touch application hosted on S3, and there's a problem which requires an update to the index.html file.
In order to enable offline access to the app, I've cached index.html in cache.appcache. Below is my cache.appcache file:
CACHE MANIFEST
# 127476e50461cf415c27fb33d81914faab1fc687
index.html
# 364c8e0f0cc7c9922d0019d083b4abba7d519e1c
resources/images/ajax-loader.gif
# 4028c1082f32387af25e2399aae7173ed0a51cf4
resources/images/cloud_download.png
# 40454710d633ca15b65d891d3842d3ef8b2136bf
resources/images/delete1.png
# 62c6a1ec578fa7d1d7a3117c2a84c5195c33ddb8
resources/images/loading.png
# ad85882c6285881966307da8da97ff597de9a486
resources/images/loadingbg.gif
# d2abb7549cd282c1e3fec6e9249d1e51ad5ec75d
resources/images/logo.png
FALLBACK:
NETWORK:
*
In hindsight, to enable offline access I should probably have left index.html as a non-cached network file with a fallback to some 'offline.html' file. But the app has been deployed for a while, I need to make a change to index.html, and I just can't get the file to update, not even on my local machine by clearing the cache, and not by using private browsing as per the link above. I need to be able to change the file without the user having to do anything to receive the changes.
Here's what I've tried:
1) I removed index.html from the cache manifest and uploaded it. When I did that and reloaded, the browser picked up the updated cache manifest and downloaded the files:
Application Cache Progress event (0 of 6) http://m.example.com/resources/images/loading.png (index):1
Application Cache Progress event (1 of 6) http://m.example.com/resources/images/ajax-loader.gif (index):1
Application Cache Progress event (2 of 6) http://m.example.com/resources/images/cloud_download.png (index):1
Application Cache Progress event (3 of 6) http://m.example.com/resources/images/loadingbg.gif (index):1
Application Cache Progress event (4 of 6) http://m.example.com/resources/images/delete1.png (index):1
Application Cache Progress event (5 of 6) http://m.example.com/resources/images/logo.png (index):1
Application Cache Progress event (6 of 6) http://m.example.com/ (index):1
Thankfully that means the browser isn't caching the manifest file, but unfortunately, even though it downloaded the index, the file didn't update. I've confirmed the file has been properly uploaded to the S3 bucket, and if I download the file the changes are there, but even after reloading the browser and clearing the browser cache multiple times, viewing the source still shows the old index.html. Note that if I go to http://m.example.com/?bla, it works, so I know S3 is serving the correct file (although I haven't ruled out an S3 request cache), but http://m.example.com/ is still broken.
I'm guessing that, although the appcache is re-downloading the file, it's still cached at the browser level, so the appcache is just picking up the browser's cached version, even though clearing the browser cache doesn't fix the issue.
2) I never set any Expires headers on the file in S3, so I'm not sure whether S3 sets very long expiration headers by default, but I've tried adding Expires: -1 to index.html and it doesn't help (see the header sketch below, after point 3).
3) I've also tried uploading a new file called index2.html and changing the index document in the S3 bucket to index2.html, but I'm still getting the old copy.
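For point 2, a hedged sketch of how the headers can be inspected and set, assuming the AWS CLI is available and the bucket is named after the domain (both assumptions; the same metadata can also be edited in the S3 console):

# Check what cache headers S3 is actually returning for the page
curl -I http://m.example.com/index.html

# Re-upload index.html with explicit no-cache headers
aws s3 cp index.html s3://m.example.com/index.html \
    --content-type "text/html" \
    --cache-control "no-cache, no-store, must-revalidate"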
Not only do I need to get this working on my dev machine, but I also need to fix the issue in existing users' browsers, ideally without them having to do anything. I'm starting to think my only option is changing the app URL, which I'd really rather not do. The index page seems so hard-wired into the browser that I'm not sure even pointing m.example.com to a new IP address would help. Does anyone have any ideas how I could solve this?
Update: I tried watching the Network tab in the Chrome console while at the same time pushing up a new cache manifest and reloading the page. Unfortunately, cache manifest requests don't seem to show up in the Network tab even when they are being re-downloaded.
OK, so after a bit more searching I discovered some idiosyncrasies in appcache, like the fact that whatever page includes your cache manifest is auto-cached by default, regardless of your settings. I would have thought that, if the manifest was updated, the browser would at least re-download index.html, but apparently not. I used the solution found in the link below, which describes a workaround: you attach your manifest to another page and reference that page from your source page via an iframe, so index.html itself is no longer cached.
My HTML5 Application Cache Manifest is caching everything
After doing that, to avoid future caching issues I made a duplicate page at 'offline.html' and added a fallback pointing index.html to offline.html. So basically index.html is now never cached and isn't going to leave me stranded with an unusable domain, and when offline it will redirect to the offline page, which can be happily cached for offline access. Phew!
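For reference, a minimal sketch of what such a manifest can look like (the entries here are illustrative, not my exact file); it is attached to the helper page loaded in the iframe, not to index.html itself:

CACHE MANIFEST
# bump this comment on every deploy so clients refetch the cached entries

CACHE:
offline.html
resources/images/logo.png

NETWORK:
*

FALLBACK:
/ offline.html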
I have a problem viewing video from my bucket on S3.
I'm using an EC2 instance, with the bucket mounted as a folder via s3fs. When I try to load a big file there is a pause before the download starts. During this pause I can see the file being downloaded (cached) to the EC2 instance; only once it has been cached does the file start to download in the browser.
I tried to configure s3fs to disable the cache, but the option -o use_cache="" doesn't work. I also tried s3fslite, but it likewise caches files before sending them to the user.
How can I disable caching? Or maybe there is some faster solution that would let me use an S3 bucket like a folder on EC2?
You don't need to download the files: either serve them directly from S3, or use CloudFront.
If you are trying to control access to the files, use signed URLs, which give the user a certain amount of time to access the file before the link expires.
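A minimal sketch using the AWS CLI (the bucket and key names are placeholders; the AWS SDKs can generate the same kind of URL from application code):

# Generate a URL that stays valid for one hour, then hand it to the browser
aws s3 presign s3://my-video-bucket/videos/big-file.mp4 --expires-in 3600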
I've used Apache's content expire times for caching static assets. The only way I know of to force it to update the content on demand is to restart Apache. Is there a better way?
Caching on our server, as it turns out, was data stored in a file, so we had to clear the cache file's contents and, voilà, the cache was cleared.
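For example, assuming the cache really does live in a single file (the path here is a placeholder), truncating it in place is enough:

# Empty the cache file without deleting it or restarting Apache
: > /var/www/app/cache/data.cache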