I am using the Twisted Web static.File resource for the static part of the web server.
For development I would like to be able to add new files or modify the current static files without having to restart the Twisted web server.
I am looking at the source code for static.File, at the getChild method, and I cannot see how the resources are cached:
http://twistedmatrix.com/trac/browser/tags/releases/twisted-11.0.0/twisted/web/static.py#L280
From my understanding, the getChild method returns a new resource on each call.
Any help in creating a non-cached static.File resource is much appreciated.
Many thanks,
Adi
twisted.web.static.File serves straight from the filesystem. It does not have a cache. Your web browser probably has a cache, though.
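For illustration, here is a minimal server using static.File (a sketch; the path and port are placeholders). Because the resource looks up children on the filesystem on every request, adding or editing files takes effect on the next request, with no restart:

    # Minimal Twisted static file server (sketch; path and port are placeholders).
    # static.File resolves children on the filesystem per request, so new or
    # modified files are served without restarting the reactor.
    from twisted.web.static import File
    from twisted.web.server import Site
    from twisted.internet import reactor

    root = File("/path/to/static")  # directory to serve
    reactor.listenTCP(8080, Site(root))
    reactor.run()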
I have some React code (written by someone else) that needs to be served. The preferred method is via a Google Storage Bucket, fronted by their Cloud CDN, and this works. However, due to some quirks in the code, there is a requirement to override 404s with 200s and serve content from the homepage instead (i.e. if a request would 404, don't serve a 404; serve the content of the homepage with a 200 status instead).
(If anyone is interested, this override is currently implemented in CloudFront on AWS. Google CDN does not provide this functionality yet.)
So, if the code is served at "www.mysite.com/app/" and someone hits "www.mysite.com/app/not-here" (which would return a 404), what should happen is that the response should NOT be 404, but a 200 with the content being served from index.html instead.
I was able to get this working by bundling all the code inside a docker container and then using the solution here. However, this setup means if we have a code change, all the running containers need to be restarted, and the customer expects zero downtime, hence the bucket solution.
So I now need to do the same thing but with the files being proxied in (with the upstream being the CDN).
I cannot use the original solution since the files are no longer local, and httpd can't check for existence of something that is not local.
I've tried things like ProxyErrorOverride and ErrorDocument, and managed to get it to redirect, but that is not what is needed.
Does anyone know how/if this can be done?
If the question is how to catch the 404 returned by Cloud Storage for a missing file with httpd/Apache: I don't know.
However, I think that isn't the best solution. Serving files directly from Cloud Storage is convenient but not industrial-grade.
Imagine you deploy several broken files in succession: how do you roll back to a stable state?
The best approach is to package each code release as an atomic unit, a container for instance. Each version lives in a different container, and performing a rollback is easier and more consistent.
Now, your "container restart" issue. I don't know which platform you are running your containers on. If you run them on Compute Engine (a VM), that is perhaps the worst option. Today there are container orchestration systems that let you deploy, scale containers up and down, and perform progressive rollouts, replacing the running containers with a newer version without downtime.
Cloud Run is a wonderful serverless solution for that; you also have Kubernetes (GKE on Google Cloud), which you can use with Knative for a better developer experience.
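For completeness, the in-container fallback the question refers to can be done in httpd with FallbackResource (a sketch; the directory and the /app prefix are placeholders). It serves index.html with a 200 status whenever the requested file does not exist on disk:

    # httpd SPA fallback (sketch; paths are placeholders).
    # Any request under /app that does not match a real file is answered
    # with the contents of /app/index.html and a 200 status.
    <Directory "/usr/local/apache2/htdocs/app">
        FallbackResource /app/index.html
    </Directory>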
I recently came across a situation where I needed to clear my dispatcher cache manually. For instance, if I modify any JS/CSS files, I need to clear the dispatcher cache manually in order to get the modified JS/CSS; otherwise it serves the old version of the code. I just heard that ACS developed Versioned ClientLibs, which helps us do versioning. I have several questions on this.
Before versioned clientlibs, how did AEM manage?
Isn't AEM intelligent enough to manage versioned clientlibs?
Is this the correct way of handling it?
Can we create a script which will back up the existing JS/CSS files before clearing them?
What are the other options we have?
Versioned clientlibs is the correct solution, but you'll need a bit more:
Versioned Clientlibs is a client-side cache-busting technique, used to bust the browser cache.
It will NOT bust the dispatcher cache. Pages cached at the dispatcher continue to be served unless manually cleared.
Refer here for a similar question.
To answer your queries:
Before versioned clientlibs, how did AEM manage? As #Subhash points out, it is part of the prod deployment scripts (Bamboo or Jenkins) to clear the dispatcher cache when clientlibs change.
Isn't AEM intelligent enough to manage versioned clientlibs? This is the same as with any CMS tool. Caching strategy has to be the responsibility of the HTTP servers, NOT AEM. Moreover, when you deploy JS code changes, you need to clear the dispatcher cache for pages to reflect the new JS changes.
Is this the correct way of handling it? For client-side busting, versioned clientlibs is a 100% foolproof technique. For dispatcher cache busting, you'll need a different method.
Can we create a script which will back up the existing JS/CSS files before clearing them? This should be part of your CI process, defined in Jenkins/Bamboo jobs. It is not a responsibility of AEM.
What are the other options we have? For dispatcher cache clearance, try dispatcher-flush-rules. You can configure it so that when /etc designs paths are published, the entire tree is automatically cleared, so that subsequent requests hit the publisher and pick up the updated clientlibs (see the sketch below).
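To sketch the idea in dispatcher.any terms (the globs are illustrative, not a drop-in config): the /invalidate section controls which cached files are automatically invalidated when content is activated, so allowing the designs path means the next request re-fetches the clientlibs from the publisher:

    # Sketch only: auto-invalidate cached design files on activation.
    /invalidate
      {
      /0000 { /glob "*" /type "deny" }
      /0001 { /glob "*.html" /type "allow" }
      /0002 { /glob "/etc/designs/*" /type "allow" }
      }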
Recommended:
Use Versioned Clientlibs + CI driven dispatcher cache clearance.
Since clientlibs are modified by the IT team and require deployment, make clearing the cache part of the CI process. Dispatcher flush rules might help, but it is NOT a use case for someone to modify JS/CSS and hit the publish button in production; the production deployment cycle should perform this task. Reference links for dispatcher cache clear scripts: 1. Adobe documentation, 2. Jenkins way, 3. Bamboo way
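As an illustration, the flush step in such a CI job often boils down to a request against the dispatcher's invalidation handler (a sketch; the host and handle are placeholders):

    # Sketch: ask the dispatcher to drop a cached subtree (host/handle are placeholders).
    curl -s \
      -H "CQ-Action: Delete" \
      -H "CQ-Handle: /etc/designs/myapp" \
      -H "Content-Length: 0" \
      http://dispatcher-host/dispatcher/invalidate.cache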
Before versioned clientlibs -
You usually wire dispatcher invalidation into the build and deployment pipeline: in the last phase, after the packages have been deployed to the author and publish instances, the dispatcher is invalidated. This still leaves the issue of the browser cache not getting cleared (in cases where the clientlib name has not changed).
To overcome this, you can build custom cache-busting techniques where you maintain a naming scheme for clientlibs for each release - e.g. /etc/designs/homepageclientlib.v1.js or /etc/designs/homepageclientlib.<<timestamp>>.js. This is just to make the browser trigger a fresh download of the file from the server. But with CI/CD and frequent releases these days, this is just overhead.
There are also inelegant ways of forcing a bypass of the dispatcher using query params. In fact, even now if you open any AEM page, you might notice the cq_ck query param, which is there to disable caching.
Versioned clientlibs from ACS Commons is now the way to go. It is hassle-free: the config generates a unique MD5 hash for each clientlib, thereby busting not just the dispatcher cache but also the browser-level cache.
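The effect, roughly (the paths and hash here are made up for illustration): a reference like

    <script src="/etc/designs/myapp/clientlib-site.min.js"></script>

is rewritten at delivery time to something like

    <script src="/etc/designs/myapp/clientlib-site.min.abc123def456.js"></script>

so each release gets a new URL, and both the dispatcher and the browser treat it as a brand-new resource.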
There is an add-on for Adobe AEM that does resource fingerprinting (not limited to clientlibs, basically for all static website content), Cache-Control header management and true resource-only flushing of the AEM dispatcher cache. It also deletes updated resources from the dispatcher cache that are not covered by AEM's authoring process (e.g. when you deploy your latest code). A free trial version is available from https://www.browsercachebooster.com/
I read that using a service worker for offline caching is similar to browser caching. If so, then why would you prefer a service worker for this caching? Browser caching will check if the file is modified or not and then serve from the cache, and with a service worker we are handling the same thing from our code. By default, the browser has that feature so why prefer a service worker?
Service Workers give you complete control over network requests. You can return anything you want for the fetch event, it does not need to be the past or current contents of that particular file.
However, if the HTTP cache handles your needs, you are under no obligation to use Service Workers.
They are also used for things such as push notifications.
Documentation: https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API, https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers
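To make that concrete, a minimal cache-first service worker looks something like this (a sketch; the cache name and URL list are placeholders). The fetch handler decides what to return for every request, which is exactly the control the HTTP cache does not give you:

    // sw.js - minimal cache-first service worker (sketch; names are placeholders).
    self.addEventListener('install', (event) => {
      // Pre-cache a few known URLs so they are available offline.
      event.waitUntil(
        caches.open('static-v1').then((cache) =>
          cache.addAll(['/', '/index.html', '/app.js'])
        )
      );
    });

    self.addEventListener('fetch', (event) => {
      // Serve from the cache when possible; fall back to the network.
      event.respondWith(
        caches.match(event.request).then((cached) => cached || fetch(event.request))
      );
    });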
I wanted to share the points I observed while going through the service worker documentation and implementing it.
The browser cache is different: a service worker supports an offline cache, so the web app can access the cached content even when the network is unavailable.
A service worker helps give a native app-like experience.
A service worker cannot modify DOM content, but it can still serve the pages within its scope. With the help of messaging (postMessage), it can communicate with the page, and the page can then change the DOM.
A service worker does not require user interaction or an open web page; it runs in the background.
Actually, responding to a request is slower when you use a service worker instead of the HTTP cache...
Because a service worker uses the Cache API to store its content, it is really slower than the browser cache (the memory cache and disk cache).
It's not designed to be faster than the HTTP cache. However, with a service worker you can fully customize the response, and I think that full customizability is the reason why you should use it.
If your situation is not complicated enough, you should not use it.
I'm throwing together a quick data service in WCF to be accessed by a public Silverlight 2.0 application. As my data is very static and relatively simple I'd like to just store it in local XML files (which is made easier as there are a VERY limited number of people who will ever edit it).
I'm wondering what the best way is to find a relative path from within my service. In traditional ASP.NET I could use Server.MapPath; within this WCF service nothing similar is available. This solution will ultimately be hosted at a hosting provider I have no control over, so I can't hardcode any fixed locations. I'd much rather just get a relative path to some XML files in my AppData folder.
Any suggestions?
You could try using Environment.CurrentDirectory or AppDomain.CurrentDomain.BaseDirectory
Try using HostingEnvironment.ApplicationPhysicalPath.
The WCF services still have access to a lot of the same things as your ASP.NET pages (since, in the end, there is still an HTTP request and response). You can still use Server.MapPath like so:
HttpContext.Current.Server.MapPath(...)
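Note that HttpContext.Current is only populated in WCF when ASP.NET compatibility mode is enabled; a sketch (the service and file names are hypothetical):

    // Sketch; assumes IIS hosting with ASP.NET compatibility turned on in web.config:
    //   <system.serviceModel>
    //     <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
    //   </system.serviceModel>
    using System.ServiceModel.Activation;
    using System.Web;

    [AspNetCompatibilityRequirements(
        RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class DataService : IDataService  // IDataService is a placeholder contract
    {
        public string GetData()
        {
            // Resolves relative to the web application root, as in ASP.NET.
            string path = HttpContext.Current.Server.MapPath("~/App_Data/data.xml");
            return System.IO.File.ReadAllText(path);
        }
    }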
You could store the files in IsolatedStorage instead of in your folder for the application. Look at the example on the linked page to see how it works.
First, add an operation to the service to return the current directory. Have the new operation just return Environment.CurrentDirectory. In the client, check to see if you are surprised by what the current directory was. Adjust as needed.
I cannot use the Resource File API from within a file system plugin due to a PlatSec issue:
*PlatSec* ERROR - Capability check failed - Can't load filesystemplugin.PXT because it links to bafl.dll which has the following capabilities missing: TCB
My understanding of the issue is that:
File system plugins are DLLs which are executed within the context of the file system process. Therefore all file system plugins must have the TCB PlatSec privilege, which in turn means they cannot link against a DLL that is not in the TCB.
Is there a way around this (without resorting to a text file or an intermediate server)? I suspect not - but it would be good to get a definitive answer.
The Symbian file server has the following capabilities:
TCB ProtServ DiskAdmin AllFiles PowerMgmt CommDD
So any DLL being loaded into the file server process must have at least these capabilities. There is no way around this, short of writing a new proxy process as you allude to.
However, there is a more fundamental reason why you shouldn't be using bafl.dll from within a file server plugin: this DLL provides utility functions which interface to the file server's client API. Attempting to use it from within the file server will not work; at best, it will lead to the file server deadlocking as it attempts to connect to itself.
I'd suggest rethinking what you're trying to do, and investigating an internal file-server API to achieve it instead.
Using RFs/RFile/RDir APIs from within a file server plugin is not safe and can potentially lead to deadlock if you're not very careful.
Symbian 9.5 will introduce new APIs (RFilePlugin, RFsPlugin and RDirPlugin) which should be used instead.
There's a proper mechanism for communicating with plugins: RPlugin.
Do not use RFile. I'm not even sure it would work, as the path is checked in the Initialise step of the RFile functions, which is called before the plugin stack.
Tell us what kind of data you are storing in the resource file.
Things that usually go into resource files have no place in a file server plugin, even if that means hardcoding a few values.
Technically, you can send data to a file server plugin using RFile.Write(), but that's not a great solution (intercept RFile.Open("invalid file name that only your plugin understands") in the plugin).
EDIT: Someone indicated that using an invalid file name will not let you send data to the plugin. Hey, I didn't like that solution either. For the sake of completeness, I should clarify: make up a filename that looks OK enough to go through to your plugin, like one using a drive letter that doesn't have a real drive attached to it (but will still be considered correct by filename-parsing code).
Writing code to parse the resource file binary format in the plugin, while theoretically possible, isn't a great solution either.