In the past I had problems with the browser cache serving older versions of the xap file.
To overcome this, I dynamically add a query string parameter containing the last modified date of the xap file to the Silverlight source parameter in the aspx page. This guarantees that clients receive the latest xap file rather than a stale cached version.
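For context, a minimal sketch of that cache-busting idea, assuming an ASP.NET code-behind helper called from the source param in the aspx markup (the xap path and names are illustrative):

// In the aspx code-behind: build a xap URL that changes whenever the file does.
protected string GetXapSource()
{
    // Assumed location of the xap; adjust to your project layout.
    string physicalPath = Server.MapPath("~/ClientBin/MyApp.xap");
    long stamp = System.IO.File.GetLastWriteTimeUtc(physicalPath).Ticks;
    // A changed xap produces a new URL, so the browser cannot serve a stale copy.
    return "ClientBin/MyApp.xap?v=" + stamp.ToString();
}

In the markup, the source param would then be something like <param name="source" value="<%= GetXapSource() %>" />.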
I am now using the DeploymentCatalog functionality in MEF in a Silverlight app to download some xap files.
Does anyone know how this works under the hood?
Will it use the browser cache, or does it download a fresh copy every time?
If it does go through the browser cache, how can I prevent stale cached versions from being served? (as described above).
Thank you!
The DeploymentCatalog just uses the WebClient class to download the xap, so whether it is cached depends on your browser. From the server side, you should be able to control whether the browser caches the file by using the HTTP Expires header. Here is a question with some information about this: Browser Caching in ASP.NET application
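If the xap is served through code rather than as a plain static file, a minimal sketch of setting those headers in an ASP.NET handler might look like this (the handler name and path are illustrative assumptions; for a static file you would configure the headers in IIS instead):

using System;
using System.Web;

// Serves the xap with headers that stop the browser from caching it.
public class XapHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/x-silverlight-app";
        // Expire in the past and mark as no-cache so the browser always revalidates.
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.WriteFile(context.Server.MapPath("~/ClientBin/MyApp.xap"));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}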
I have created a web application and set up an endpoint which returns a FileStreamResult; these can be large zip or PDF files, for example. The main issue I'm facing is that if a download gets interrupted (for example, the client's internet connection drops), the temporary file generated by the browser is immediately deleted.
I'm aware of range requests, but to use them I would have to inspect the partially downloaded file on the client to determine how much progress the download made, which is not possible from the server side; and the same problem with temporary file deletion persists.
So this seems like a browser limitation to me, but please correct me if I'm wrong; I would appreciate any ideas.
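For context, the kind of endpoint described might look roughly like this in ASP.NET Core (a sketch only; the controller, route, and storage path are assumptions). Enabling range processing lets the server honour Range headers, but resuming still depends on the browser keeping its partial temp file, which is exactly the limitation described above:

using System.IO;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class DownloadController : ControllerBase
{
    // Illustrative storage location.
    private readonly string _storageRoot = @"C:\files";

    [HttpGet("download/{name}")]
    public IActionResult Download(string name)
    {
        var path = Path.Combine(_storageRoot, Path.GetFileName(name)); // avoid path traversal
        var stream = System.IO.File.OpenRead(path);
        // enableRangeProcessing: true makes the server answer Range requests,
        // so a client that kept its partial file could resume the download.
        return File(stream, "application/octet-stream", name, enableRangeProcessing: true);
    }
}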
I have a VueJS application that is deployed to a local IIS 10 server for intranet use.
Trouble is, the index.html file is getting cached, and a forced manual clearing of the browser cache is needed to see updates. I understand there are server-side ways to prevent this, but based on what I've read so far I'm unclear on the proper way to make sure the HTML file isn't cached. (JS, CSS and the like are, of course, not a problem, since they have a hash value appended to the file name during build.)
I'm very much a novice when it comes to the server side of things, so any insight would be greatly appreciated. Thanks!
Which packaging tool do you use in your project? Generally speaking, Webpack/Vue-CLI has settings to prevent files from being served stale from the browser cache on the client side. In other words, it adds a hash to the output file names that identifies the latest build, which forces the client browser to request the new version of each file. (For the cached index.html itself, see the note after the links below.)
In webpack.config.js:
output: {
filename: '[name].[contenthash].bundle.js'
}
See the webpack caching guide for more details: https://webpack.js.org/guides/caching/
These related questions may also be useful:
Browser cache problems with webpack chunks and vue.js components
how to force clearing cache in chrome when release new Vue app version
VueJS/browser caching production builds
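Since index.html itself cannot be fingerprinted the way the JS/CSS bundles are, the usual server-side companion to the hashing above is to tell IIS to send no-cache headers for that one file. A sketch in web.config (an assumption to adapt; the path must match your deployment):

<location path="index.html">
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Cache-Control" value="no-cache, no-store, must-revalidate" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</location>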
I accidentally ran two sample Blazor WebAssembly apps on the same port at https://localhost:44381, and now things are messed up. One of the apps is erroring out because it tried and failed to load DLLs from the other sample app. I tried the browser devtools' Application > Clear storage, but no help. How do I totally clean out the DLLs of a Blazor WebAssembly app from the browser so that I can start fresh again?
Blazor WebAssembly applications (from version 3.1) download a file called blazor.boot.json, which lists the assemblies along with a SHA-256 hash of each to indicate the version. The assemblies themselves are downloaded into the browser's Cache Storage (see the illustrative example below).
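For illustration, a trimmed blazor.boot.json looks roughly like this (the assembly names and hash values here are made up):

{
  "entryAssembly": "MyBlazorApp",
  "cacheBootResources": true,
  "resources": {
    "assembly": {
      "MyBlazorApp.dll": "sha256-illustrativehashvalue1=",
      "System.Text.Json.dll": "sha256-illustrativehashvalue2="
    },
    "runtime": {
      "dotnet.wasm": "sha256-illustrativehashvalue3="
    }
  }
}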
Application -> Clear storage should work - check that Cache storage is selected on the Application -> Clear storage page.
Using Empty Cache and Hard Reload will not clear out this cache, but it will reload the blazor.boot.json file, and if the cached files have changed (the hashes are different) they should be reloaded.
You can also clear out individual assemblies from the Cache Storage view - right-click and you can delete them. When you refresh the application, Blazor will download the latest version.
In Chrome and the new Edge, press F12. This opens the developer tools. While they are open, right-click the refresh icon in the browser and choose Empty cache and hard refresh from the menu. This is the only way to clear everything, including icons and PWA settings.
Just press Ctrl+F5; it clears the cache and fetches the files again.
In the .csproj file for your WASM site you can force the app to download its resources each time it is requested. It's a bit of a performance hit on first load, but it gets you past the current problem.
<PropertyGroup>
<BlazorCacheBootResources>false</BlazorCacheBootResources>
</PropertyGroup>
There are some caveats - see the documentation here: https://learn.microsoft.com/en-us/aspnet/core/blazor/host-and-deploy/webassembly?view=aspnetcore-5.0#disable-integrity-checking-for-non-pwa-apps-1
Folks:
I'm creating an app using node-webkit. The purpose of this app is to display images and PDFs. The app needs to download those files from a central repository and cache them locally. When the app runs offline, the files should still be available and displayed.
On the face of it, this sounds like appcache is the answer - and that is indeed where I was heading when this was a pure web app in a browser. However, now I've discovered node-webkit, and here we are.
node-webkit's GitHub wiki states:
"However, application cache is designed for browser use, for apps using node-webkit, it's less useful than the other two method, read HTML5 Application Cache if you want to use it."
But it doesn't say why.
I've also researched the node.js filesystem module - but that seems like a whole order of magnitude more complexity than I need.
Can anyone point me in a sensible direction?
Thanks.
It has to do with the nature of App Cache itself.
You specify a manifest file that lists all the static assets required for your app to run offline. You don't have any programmatic access to the cache to add and remove files via JS.
So for a node-webkit app, it'd make more sense to fetch these files and store them in the Application Support folder (or AppData, depending on the platform). That's where the node.js part is really useful: the file IO stuff.
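A minimal sketch of that approach, assuming node-webkit's App.dataPath is available; the URL and file names are illustrative:

// Download a file once and serve it from local disk afterwards,
// so it is still available when the app runs offline.
var fs = require('fs');
var path = require('path');
var https = require('https');
var gui = require('nw.gui');

var cacheDir = path.join(gui.App.dataPath, 'asset-cache');
if (!fs.existsSync(cacheDir)) {
    fs.mkdirSync(cacheDir);
}

function getCached(url, fileName, callback) {
    var localPath = path.join(cacheDir, fileName);
    if (fs.existsSync(localPath)) {
        return callback(null, localPath); // already cached - works offline
    }
    var file = fs.createWriteStream(localPath);
    https.get(url, function (res) {
        res.pipe(file);
        file.on('finish', function () {
            file.close(function () { callback(null, localPath); });
        });
    }).on('error', function (err) {
        fs.unlink(localPath, function () {}); // drop the partial download
        callback(err);
    });
}

// Usage: getCached('https://example.com/repo/manual.pdf', 'manual.pdf', function (err, p) { /* display p */ });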
I have a PHP application running on a Microsoft IIS 7 server. The application shows PDF files in an iframe; the files contain users' sensitive data, which I wouldn't like to be directly accessible to anyone who knows the file address.
So basically, I'm looking for a way to protect the files from direct browser access or download, but still be able to show them in the application's iframe.
I did some research into rewrite rules, but since the "HTTP_REFERER" of an iframe is empty, I couldn't find a good solution.
Any suggestions for this?
Thanks in advance
Without seeing any of your code or how your application works, I can only offer suggestions based on how I think it works.
Rather than showing the files themselves, with links directly to those files, you should consider changing your application so that the PHP reads the directory and displays the file names (however you want them to appear), with links that go to a download.php page. The download page (after checking whether the user has permission to access the file) then loads the file and serves it out as the response, with appropriate Content-Disposition and Content-Type headers. A sketch of that page follows.
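A minimal sketch of that download.php idea - check_permission() and the protected-files directory are illustrative assumptions, and the directory is assumed to be blocked from direct web access (see the next paragraph):

<?php
session_start();

// Strip any path components so the client can only name a file, not a path.
$name = basename(isset($_GET['file']) ? $_GET['file'] : '');
$path = __DIR__ . '/protected-files/' . $name;

// check_permission() stands in for your own authorization logic.
if ($name === '' || !is_file($path) || !check_permission($_SESSION, $name)) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}

header('Content-Type: application/pdf');
// "inline" lets the PDF render inside the iframe instead of forcing a download.
header('Content-Disposition: inline; filename="' . $name . '"');
header('Content-Length: ' . filesize($path));
readfile($path);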
Since your PHP application can read files directly within the web directory, you can set up rewrite rules to prevent those files from being accessed from the web; that way, the files can only be reached through the PHP application, which reads the disk directly and doesn't rely on rewrite rules.
This is how sites like SourceForge can display an advertisement with a countdown saying that your file download will begin in 5 seconds.