I'm working on an existing Dojo application which requests the same static content on every page change.
Is there a way to configure the application so this content is cached, i.e. so every HTTP response carries cache-control headers?
This is not related to Dojo or any other JavaScript framework.
You can define the cache-control configuration either in the content server (Apache/Nginx) or in the application server (if the content is generated by the backend).
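For example, a minimal sketch for Nginx; the /static/ path and the one-day lifetime are assumptions, so adjust them to wherever your Dojo assets actually live:

```nginx
# Sets both an Expires header and "Cache-Control: max-age=86400"
# on every response served from this location.
location /static/ {
    expires 1d;
}
```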
I have a PWA which has been developed in ASP.NET Core and is hosted on an Azure App Service (Linux).
When a new version of the PWA was released, I found that devices failed to update without clearing the browser cache.
To resolve this, I discovered a tag helper called asp-append-version that busts the cache for a specific file. I also discovered that I can append a version to the URL in the src attribute to trigger the browser to retrieve the latest file, for example src="/scripts/pwa.js?v=1". Each time I update the pwa.js file I also change the version, i.e. v=2.
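For reference, the tag helper is applied like this (a sketch using the script path from above; the rendered hash is illustrative):

```html
<!-- asp-append-version hashes the file's contents and appends the
     hash as a ?v= query string, so the URL changes whenever the
     file changes: -->
<script src="~/scripts/pwa.js" asp-append-version="true"></script>
<!-- renders roughly as:
     <script src="/scripts/pwa.js?v=Kl_dqr9NVtnMdsM2MUg4qthUnW..."></script> -->
```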
I've now discovered that my PWA is caching other JavaScript files in my application, which results in the app not working on devices that have updated to the new version but failed to clear the cache for specific files.
I believed that if I didn't specify any cache control headers such as Cache-Control, the browser would not cache any files; however, this appears not to be the case.
To resolve this issue, is the recommended approach to add the appropriate headers (Cache-Control, Pragma, and Expires) to prevent browser caching, or should I only add the asp-append-version tag helper to, for example, script tags to bust the cache for those specific files?
I would prefer the browser to store, for example, images rather than going to the server each time to retrieve them. I believe setting the header Cache-Control: no-cache would work, as this makes the browser check whether the file has changed before retrieving an updated version?
Thanks.
Thanks @SteveSandersonMS for your insights. If your web server returns correct HTTP cache-control headers, browsers will know not to re-use cached resources.
Refer to link 1 & link 2 for cache-control headers on a Linux App Service.
For example, if you use the "ASP.NET Core hosted" version of the Blazor WebAssembly template, the server will return Cache-Control: no-cache headers, which means the browser will always check with the server whether updated content is present (and this uses ETags, so the server will return 304, meaning "keep using your cached content", if nothing has changed since the browser last updated its content).
If you use a different web server or service, you need to configure the web server to return the correct caching headers. Blazor WebAssembly can't control or even influence that.
Refer here
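As a sketch of that configuration when ASP.NET Core itself serves the static files (this is an assumption about your pipeline, not the template's exact code):

```csharp
// In Program.cs / Startup.Configure: return "Cache-Control: no-cache"
// for static files so the browser revalidates (and gets a 304 via
// ETags) instead of silently reusing a stale copy.
app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        ctx.Context.Response.Headers["Cache-Control"] = "no-cache";
    }
});
```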
I am integrating 3DS for Spartacus. The payment provider requires a POST back from an iframe they serve, and I post back to an endpoint in OCC. I have added the origin to the allowed origins: corsfilter.commercewebservices.allowedOrigins=http\://localhost\:4200 https\://localhost\:4200 https\://test.domain.com
The XSSFilter is blocking the request because of the configuration xss.filter.header.X-Frame-Options=SAMEORIGIN; this property is set in the hybris platform. When I remove the property manually, the POST works properly. When I set it to an empty string (xss.filter.header.X-Frame-Options=), Chrome rejects the request because of an erroneous header.
How can I remove this property in configuration, without manually removing it every time the server restarts?
I am running locally at the moment, but this should run both on CCv2 and on custom infrastructure, preferably without changes to the HTTP server (nginx/apache), as this is part of a library that we want to publish for Spartacus.
I'm serving my JavaScript app (SPA) by uploading it to S3 (default root object index.html with Cache-Control: max-age=0, no-cache, pointing to fingerprinted js/css assets) and configuring the bucket as an origin of a CloudFront distribution. My domain name, let's say SomeMusicPlatform.com, has a CNAME entry in Route53 containing the distribution URL. This is working great and everything is well cached.
Now I want to serve a prerendered HTML version for purposes of bots and social network crawlers. I have set up a server that responds with a pre-rendered version of the JavaScript app (SPA) at the domain prerendered.SomeMusicPlatform.com.
What I'm trying to do in the lambda function is to detect the user agent, identify bots and serve them the prerendered version from my custom server (and not the JavaScript contents from S3 as I would normally serve to regular browsers).
I thought I could achieve this by using Lambda@Edge (Using an Origin-Request Trigger to Change From an Amazon S3 Origin to a Custom Origin): a function that switches the origin to my custom prerender server if it identifies a crawler bot in the request headers (or, in the testing phase, with a prerendered=true query parameter).
The problem is that the Origin-Request trigger with the Lambda@Edge function is not firing, because CloudFront still has the Default Root Object index.html cached and returns the content from the edge cache. I get X-Cache: RefreshHit from CloudFront using both SomeMusicPlatform.com/?prerendered=true and SomeMusicPlatform.com, even though there is a Cache-Control: max-age=0, no-cache on the Default Root Object, index.html.
How can I keep the well-cached serving and low latency of my JavaScript SPA with CloudFront and add serving content from my custom prerender server just for crawler bots?
The problem with caching (getting the same hit with both mywebsite.com/?prerendered=true and mywebsite.com) was solved by adding prerendered to the query-string whitelist in the CloudFront distribution. CloudFront now correctly maintains both the normal and the prerendered version of the website content, depending on the presence of the parameter (without the parameter, cached content from the S3 origin is served; with the parameter, cached content from the custom origin specified in the lambda function is served).
This was enough for the testing phase, to ensure the mechanism works correctly. Then I followed Michael's advice and added another lambda function in the Viewer Request trigger which adds a custom header Is-Bot when a bot is detected in User-Agent. Again, whitelisting was needed, this time for the custom header (to maintain separate caches for the two origins depending on the header). The other lambda function, later in the Origin Request trigger, then decides which origin to use depending on the Is-Bot header. A sketch of both functions follows.
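Here is a minimal sketch of the two triggers; the bot regex, the Is-Bot header name, and the prerender hostname are assumptions to adapt:

```javascript
'use strict';

// Viewer-request trigger: tag bot traffic with a custom Is-Bot header.
// (Whitelist this header in the cache behavior so CloudFront keeps
// separate cache entries for bots and regular browsers.)
exports.viewerRequest = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const ua = (request.headers['user-agent'] || [{ value: '' }])[0].value;
    const isBot = /bot|crawler|spider|facebookexternalhit|twitterbot/i.test(ua);
    request.headers['is-bot'] = [{ key: 'Is-Bot', value: isBot ? 'true' : 'false' }];
    callback(null, request);
};

// Origin-request trigger: when Is-Bot is true, swap the S3 origin for
// the custom prerender server (per the AWS origin-switch example).
exports.originRequest = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const header = request.headers['is-bot'];
    if (header && header[0].value === 'true') {
        request.origin = {
            custom: {
                domainName: 'prerendered.SomeMusicPlatform.com',
                port: 443,
                protocol: 'https',
                path: '',
                sslProtocols: ['TLSv1.2'],
                readTimeout: 30,
                keepaliveTimeout: 5,
                customHeaders: {},
            },
        };
        request.headers['host'] = [{ key: 'Host', value: 'prerendered.SomeMusicPlatform.com' }];
    }
    callback(null, request);
};
```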
For a web application, we depend on a CMS deployed on WebLogic and a web app deployed on Tomcat. When a user accesses a page, dynamic content is rendered from Tomcat (sticky sessions are enabled) and static content (JS, CSS, etc.) is served from the CMS (on WebLogic). This leads to a conflict on the JSESSIONID cookie: the WebLogic JSESSIONID overrides the Tomcat JSESSIONID, and the user loses the contents saved in the session when moving to and from various parts of the site.
The request flow is as below: http://i.stack.imgur.com/17Ft5.png
As a band-aid, we wrote a rule on the load balancer to drop JSESSIONID from all responses coming from the CMS.
Though it works, we are looking for a better way to handle this.
Why is your CMS setting a cookie? Does it need sessions to serve those files?
Usually static files do not need a session. One should allow them to be cached on proxies and on the client.
Configure your CMS appropriately. If it is a web application, you may add a Filter that removes the Set-Cookie header from its responses (like you are doing on your LB); a sketch follows.
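A minimal sketch of such a Filter, assuming a Servlet 3.0+ container (the class name is arbitrary; map it to the CMS's static paths in web.xml):

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

// Wraps the response so that any attempt to set a cookie or a
// Set-Cookie header downstream is silently dropped.
public class DropSetCookieFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse wrapped = new HttpServletResponseWrapper((HttpServletResponse) res) {
            @Override
            public void addCookie(Cookie cookie) { /* drop all cookies */ }

            @Override
            public void addHeader(String name, String value) {
                if (!"Set-Cookie".equalsIgnoreCase(name)) super.addHeader(name, value);
            }

            @Override
            public void setHeader(String name, String value) {
                if (!"Set-Cookie".equalsIgnoreCase(name)) super.setHeader(name, value);
            }
        };
        chain.doFilter(req, wrapped);
    }

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
}
```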
It is possible to change the name of a session cookie. This is configurable using <session-config>/<cookie-config>/<name> element in web.xml in web applications that adhere to Servlet 3.0 (or later) specification.
(It is also configurable as sessionCookieName attribute on Context element in META-INF/context.xml, but using web.xml is the recommended way).
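A minimal web.xml sketch (the cookie name is an arbitrary example):

```xml
<session-config>
    <cookie-config>
        <name>CMSSESSIONID</name>
    </cookie-config>
</session-config>
```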
Note that cookies can have a Path attribute. A browser won't send a cookie if its Path does not match the URL of the request, so cookies with Path=/web and Path=/content can happily coexist.
Tomcat supports requests that have several JSESSIONID cookies. It just chooses the one that matches an existing session. All the others are ignored.
By default, browsers cache static files like images, JS, and CSS, and they also cache HTTP GET responses. If this feature is already there, why do we need the ASP.NET output caching feature?
Thanks.
ASP.NET caching creates the output once on the server and serves it to multiple clients; the browser cache is a single client caching for itself.
ASP.NET caching can also cache individual parts of a larger output and change just the bits that are required to service a particular client, e.g. changing the greeting at the top of the page, or making the "Top sellers" region relative.
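A minimal sketch using the OutputCache attribute in ASP.NET MVC (controller and action names are hypothetical):

```csharp
using System;
using System.Web.Mvc;

public class ProductsController : Controller
{
    // The rendered output is built once and then served from the
    // server-side cache to *all* clients for 60 seconds -- unlike the
    // browser cache, which only helps the client that fetched it.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult TopSellers()
    {
        ViewBag.GeneratedAt = DateTime.UtcNow; // same value for everyone within the window
        return View();
    }
}
```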