Are the browser cache and our system cache (the one that resides between the CPU and main memory) different, or are they the same? - browser-cache

When we revisit a website that we have browsed previously, the browser loads it immediately from its cache. Similarly, when we access some files from our local system's drive, the system shows the recent files (see the image attached), so I assume these files are being served from our system cache. Please clarify: are the browser cache and the system cache different caches, or is there only one cache in the whole system?

The browser has its own logical "cache". Every file is stored to the disk and read from there.
The system handles all "file caches" and therefore some data may (still) be in memory when the browser tries to access the file.
These are different types of "caching" that you're trying to understand.
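To see the system's file cache in action, here is a minimal sketch (Node.js/TypeScript, outside the browser) that reads the same file twice and times each read; the second read is typically much faster because the kernel still holds the file's pages in memory. The file name is a hypothetical placeholder.

```typescript
import { readFileSync } from "fs";

// Time a full read of a file, in milliseconds.
function timedRead(path: string): number {
  const start = process.hrtime.bigint();
  readFileSync(path); // goes through the OS file cache
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6;
}

const file = "sample.bin"; // hypothetical file; substitute any large local file
console.log(`first read:  ${timedRead(file).toFixed(2)} ms`); // may hit the disk
console.log(`second read: ${timedRead(file).toFixed(2)} ms`); // usually served from memory
```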

Related

Uploaded file is not visible in browser unless I force no cache browser reload

I am facing a weird issue with file uploads. When I upload a new file to a publicly visible folder, I can see it instantly in anonymous mode. But if I try to access it in non-anonymous mode, the server responds with 404 unless I do a hard refresh (i.e. Ctrl + F5 in Mozilla).
I have already disabled cache-control headers for that folder in Apache, but that did not seem to resolve the issue. It seems to me that Apache is storing the information that "there is actually no file at the requested URL" and serves it to the user unless the user clears the cache, even if the file has since been uploaded at that location. Has anyone run into a similar issue in the past?
By default, most browsers cache images, styles and scripts automatically. The easiest way to bypass this for development environments is to set the caching headers detailed here.
Another common way to bypass caching is to set a random query parameter (usually ?v=<random value here>).
Chromium-based browsers also have a "Disable cache" option in the dev tools.
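For the query-parameter trick, here is a minimal sketch (TypeScript, runnable in the browser) that appends a random value to a URL so the browser treats each request as new and does not serve it from its cache; the /uploads/report.pdf path is a hypothetical example.

```typescript
// Append a cache-busting query parameter to any URL.
function bustCache(url: string): string {
  const separator = url.includes("?") ? "&" : "?";
  return `${url}${separator}v=${Date.now()}-${Math.random().toString(36).slice(2)}`;
}

// Example: re-fetch a freshly uploaded file, bypassing any stale cached response.
fetch(bustCache("/uploads/report.pdf"))
  .then((res) => console.log("status:", res.status))
  .catch((err) => console.error(err));
```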

How to get around storage quota limits in a progressive web app

My question is essentially: how can I get around the storage quota limits enforced on a PWA? A little background...
I am hoping to create an offline-ready, line-of-business progressive web app that would ideally push about 2 GB of images and video resources onto my users' phones or tablets - well beyond the current storage quota for caches and IndexedDB. What I'd like to be able to do is have my users (we all work at the same company) do a one-time download of a zip file or directory and have them store it on their phone/tablet's file system in a well-known directory. As the online version of the app treats these files as URLs, the Fetch API would seem ideal, since I could serve from online if connected or from the local service-worker-managed cache if not. But the quota limits have me stumped. None of the files is larger than 15 MB, but there's no way to know which files are needed before a user goes offline.
Can I use something like an HTML input type=file tag to load files into the cache at runtime and then treat them as URLs? Of course I would remove other files to make room. But since these files wouldn't be coming from "the origin" with its secure https address (a PWA requirement, I think), but rather from the local file system, I'm not sure this will work. If it is workable, would my users be forced to browse to the files manually?
If it's an option, you can have a native Android service do the caching part to avoid the space constraint and then serve the data from native code to the PWA using WebSockets/secure WebSockets.
No pure-PWA solution is possible for now. The File API has this limitation because it is sandboxed.
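For reference, here is a minimal service-worker sketch (TypeScript) of the "serve from online if connected, from the cache if not" pattern the question describes. The cache name is a hypothetical example, and the browser's storage quota still applies to whatever is stored this way, so it does not lift the limits discussed above.

```typescript
const MEDIA_CACHE = "media-cache-v1"; // hypothetical cache name

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Online: keep a copy for later offline use, then return the live response.
        const copy = response.clone();
        caches.open(MEDIA_CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      })
      .catch(() =>
        // Offline: fall back to the cached copy if we have one.
        caches.match(event.request).then(
          (cached) => cached ?? new Response("Offline and not cached", { status: 503 })
        )
      )
  );
});
```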

Does the Google Drive Android API help my application work in the background?

I have read the Google Drive API documentation, but I'm not able to understand the following:
Can files and folders be created and modified on Drive in the background of the app?
My application needs Drive to work in the background.
For Files:
If you check "Working with File Contents":
Lifecycle of a Drive file
The Drive Android API lets your app access files even if the device is offline. To support offline cases, the API implements a sync engine, which runs in the background to upstream and downstream changes as network access is available and to resolve conflicts.
Check this image from the document.
The lifecycle of a DriveFile object:
Perform an initial download request if the file is not yet synced to the local context but the user wants to open the file. The API handles this automatically when a file is requested.
Open the contents of a file. This creates a temporary duplicate of the file's binary stream which is only available to your application.
Read or modify the file contents, making changes to the temporary duplicate.
Commit or discard any file content changes that have been made.
If there are changes, the file contents are queued for upload to sync them back to the server.
So the Drive Android API does support running in the background. For folders, there is no documentation saying whether folder creation can also be done in the background, but I think the same implementation can be used.
I hope this helps.

How to get Apache to cache video files in memory?

I'm hosting an HLS stream with XAMPP / Apache, which basically means I have a folder in my document root that contains a couple of incrementally numbered 10-second video files.
Every 10 seconds, a new video file is saved into the folder and the oldest video file in the folder is deleted.
Apart from these video files, the document root also contains some other files, such as PHP scripts and playlist files.
My server has plenty of RAM and a pretty fast CPU, but is using a comparatively slow hard disk.
Given the fact that the constant downloading of these video files is likely what's going to make or break the server performance, it seems like a good idea to cache these files in memory.
If Apache were to keep all video files (with a .ts extension) that are downloaded by a user's video player in its memory for about 60 seconds, the next user would then be able to download the file much faster. Apache could rely on the files not changing after the first open and on the fact that the files won't be requested anymore after those 60 seconds.
All other files do not (necessarily) have to be cached, since they're rather small and are regularly modified.
Is anyone able to give me directions on how to get started?
Modern operating systems already cache accessed files in memory. The whole process is managed by the kernel automatically.
Apache's in-memory caching won't help you, since it needs all the files to be known at start-up.
If you want some level of control over the caching, you could use vmtouch. Check its manual.
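As an illustration only, here is a minimal sketch (Node.js/TypeScript) that uses vmtouch to keep the current segment files resident in memory. It assumes vmtouch is installed, and the segment directory path is a hypothetical placeholder for your document-root folder.

```typescript
import { execFile } from "child_process";

const SEGMENT_DIR = "/opt/lampp/htdocs/hls"; // hypothetical HLS segment folder

function touchSegments(): void {
  // "vmtouch -t" reads ("touches") the pages of the files into the OS page cache.
  execFile("vmtouch", ["-t", SEGMENT_DIR], (err, stdout) => {
    if (err) console.error("vmtouch failed:", err.message);
    else console.log(stdout.trim());
  });
}

// Re-touch every 10 seconds, matching the rate at which new segments appear.
touchSegments();
setInterval(touchSegments, 10_000);
```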

How do I make Phantom.js cache resources like a normal browser?

Chrome doesn't re-download JavaScript files on every request; it caches them.
However, when my PhantomJS instance hits pages, it downloads the JavaScript every single time. Is there a setting that can make it act like a browser?
PhantomJS already supports an in-memory cache; this means that if you browse multiple pages inside the same running instance, PhantomJS will not re-download resources that are already in the cache.
You can also turn on the disk cache; this will store web resources (JS, CSS, images, ...) on the physical disk.
This is controlled by command-line parameters:
--disk-cache=[true|false] enables the disk cache (at the desktop services cache storage location; default is false). Also accepted: [yes|no].
--max-disk-cache-size=size limits the size of the disk cache (in KB).
From this link, the cache seems to be stored under %AppData%/Local/Ofi Labs/PhantomJS/cache/http on Windows.
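For completeness, here is a minimal sketch (Node.js/TypeScript) of launching PhantomJS with those flags from a wrapper script; scrape.js is a hypothetical PhantomJS script, and the cache size is an arbitrary example value.

```typescript
import { spawn } from "child_process";

const phantom = spawn("phantomjs", [
  "--disk-cache=true",            // persist downloaded resources to the disk cache
  "--max-disk-cache-size=102400", // cap the cache at roughly 100 MB (value is in KB)
  "scrape.js",                    // hypothetical script that opens the pages to scrape
]);

phantom.stdout.on("data", (chunk) => process.stdout.write(chunk));
phantom.stderr.on("data", (chunk) => process.stderr.write(chunk));
```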