Cache-Control header and some other doubts for Citrix ADC

So I'm configuring a new Citrix Gateway to provide external access for one of our clients, and they're complaining about a duplicate Cache-Control entry, basically like this:
Cache-Control: "no-cache, no-store, must-revalidate"
Cache-Control: "no-cache"
Now I don't really know whether that is acceptable or not, but I also don't know where the 2nd header is coming from, as I only have one Cache-Control action/policy configured in this gateway. Something else I'm noticing is that, when running an SSL test (Qualys'), the results say I don't have Strict-Transport-Security and Content-Security-Policy configured, yet I do have policies for those headers bound to my VS:
[screenshot: Bindings]
These are the actions:
[screenshot: Actions]
And I did notice that there are no hits for most of these policies for some reason:
[screenshot: hits]
Is there anything wrong with my config?
Thanks and Regards

Regarding the cache, you will need to run a network trace on the NetScaler (set the capture size to 0 and capture the SSL keys) and observe where the header comes from.
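A minimal sketch from the NetScaler CLI, assuming a firmware recent enough to support SSL key capture (option names can vary by version):
# capture full packets (-size 0) and the SSL session keys for decryption
start nstrace -size 0 -capsslkeys ENABLED
# reproduce the request, then stop; the trace lands in /var/nstrace/
stop nstrace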
For the no hits, try binding a NOCACHE policy (expression true) with priority 1 ahead of the cache policies on the vserver, then flush the NetScaler's cache and try again; see the sketch below. I don't know your setup, but this might also fix the Cache-Control header if the NetScaler is the one adding it.
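A sketch only: pol_nocache and vs_gateway are placeholder names, and the exact bind syntax depends on your vserver type:
# never serve anything matched by this policy from the integrated cache
add cache policy pol_nocache -rule true -action NOCACHE
# bind it at priority 1 to the (hypothetical) vserver
bind lb vserver vs_gateway -policyName pol_nocache -priority 1 -type REQUEST
# then flush the integrated cache (repeat per content group as needed)
flush cache contentGroup DEFAULT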

Just adding a small but essential setting:
enableStaticPageCaching, for authentication and VPN portal web pages:
https://developer-docs.citrix.com/projects/netscaler-command-reference/en/12.0/aaa/aaa-parameter/aaa-parameter/
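Per that reference, the setting lives on the global AAA parameter, e.g.:
# allow the static pages of the auth/VPN portal to be cached
set aaa parameter -enableStaticPageCaching YES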

Related

Does the must-revalidate cache-control header tell the browser to only download a cached file if it has changed?

If I want browsers to load PDF files from cache until they change on the server, do I have to use max-age=0 and must-revalidate as Cache-Control headers?
If I used another value (larger than 0) for max-age, would that mean the revalidation would only happen once the max-age value was exceeded?
What would happen if I only set the must-revalidate header without max-age?
I was reading through this question and I am not 100% sure.
Also, what exactly does revalidate mean? Does it mean the client asks the server if the file has changed?
On the contrary, I've read that Cache-Control: no-cache pretty much does what I want to achieve: cache, and check with the server if there is a new version... so what's the correct way?
I assume you are asking which headers you should configure to be sent from your server, and by "client" you mean "modern web browser"? Then the quoted question/answer is correct, so:
Yes, you should set both, but max-age=0 is enough (must-revalidate is the default behavior).
Yes, correct: the response would be served from local cache until max-age expires; after that it would be revalidated (once), then again served from local cache, and so on.
It is kind of undefined, and differs between browsers and the way you send the request (clicking a link in HTML, hitting the reload button, typing directly in the address bar and hitting Enter). Generally, the response should not be served directly from cache, but it could either just be revalidated or the full response can be requested from the server.
Revalidate means that the client asks the server to send the content only if it has changed since it was last retrieved. For this to work, in response to the initial request the server will send one or both of:
ETag header (which contains a hashed value of the content), which the client will cache and send back in the revalidation request as the If-None-Match header, so the server can compare the client's cached ETag value with the current ETag on the server side. If the value did not change, the server will respond with 304 Not Modified (and an empty body); if the value changed, the server will respond with 200 and the full (new) content.
Last-Modified header (which contains the timestamp of the last content modification), which the client will send back in the revalidation request as the If-Modified-Since header, and which the server will use to determine the response (304 or 200).
Cache-Control: no-cache might achieve the same effect in most (simple) cases. Things get complicated when there are intermediate caches between the client and the server, or when you want to tweak client behavior (for example when sending AJAX requests); that is when most of the other caching directives come into use.
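To see that revalidation handshake concretely, here is a sketch with curl (the URL and the ETag value are made up):
# first request: the server returns the body plus a validator
curl -i https://example.com/report.pdf
#   HTTP/1.1 200 OK
#   Cache-Control: max-age=0, must-revalidate
#   ETag: "abc123"
# revalidation: send the cached validator back
curl -i -H 'If-None-Match: "abc123"' https://example.com/report.pdf
#   HTTP/1.1 304 Not Modified   (empty body; keep using the cached copy)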

Cloudflare scrubbing Access-Control-Allow-Origin header when in Under Attack Mode

I need to be able to set the Access-Control-Allow-Origin response header with my server; however, when I switch to Under Attack Mode (which I need right now because I'm being DDoSed), Cloudflare scrubs this and a bunch of other headers, which breaks some site functionality I need.
I sent a message to Cloudflare and am waiting to hear back, any ideas in the meantime?

How to enable Keep Alive connection in AWS S3 or CloudFront?

How to enable Keep-Alive connections in AWS S3 or CloudFront? I uploaded images to S3 and found that the URLs don't have a keep-alive connection. They cannot be cached by the client application even though I added Cache-Control headers to each image file.
From the tag wiki for Keep-Alive:
A feature of HTTP where the same connection is used for multiple requests, speeding up downloading of web pages with multiple resources.
I'm not aware of any relation that this has to cache behavior. I usually see mentions of Keep-Alive headers in relation to long-polling, which wouldn't make any sense to enable on S3.
I think you are incorrectly linking keep-alive headers with your browser's ability to cache static content. The cache-control headers should be all that is needed for caching of static content in the browser.
Are you verifying that the response from CloudFront includes the Cache-Control headers you have set on the S3 objects? Perhaps you need to invalidate the CloudFront cache after updating the headers.
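If you need to fix the metadata, a sketch with the AWS CLI (the bucket, key, domain, and distribution ID are placeholders); note that S3 only applies metadata at write time, so the object has to be copied over itself:
# verify what CloudFront actually returns
curl -I https://d111111abcdef8.cloudfront.net/img/logo.png
# rewrite the object's metadata in place
aws s3 cp s3://my-bucket/img/logo.png s3://my-bucket/img/logo.png \
  --metadata-directive REPLACE --cache-control "max-age=86400"
# evict the stale copy from CloudFront
aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/img/logo.png"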
Related to your question, I think the problem is in setting a correct TTL (>0) on your origin/behaviors in CloudFront.
Also, AWS CloudFront (since 30 March 2017) lets you set custom read and keep-alive timeouts for custom origins.

phantomjs/casperjs force page caching

I am trying to force PhantomJS to cache in memory some web page (GET) that sends a "Cache-Control: no-cache, must-revalidate" header to us.
I've tried to do this by modifying the Cache-Control header in casper.options.onResourceReceived, but it seems the headers are read-only in this callback?!
I would appreciate some directions for investigating this problem.
If the server doesn't want a response cached, then there is nothing you can do. PhantomJS is just another browser, so it will follow those instructions.

BlazeDS data push over SSL

I have an application that uses the data push technology of BlazeDS to send data to a Flex client every 5 seconds. The application works fine when I run it via HTTP, with or without a proxy. When I run it via HTTPS, the data push doesn't work anymore. I get the following error:
rootCause [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2
text="Error #2032: Stream Error.
URL: https://localhost/admin/messagebroker/streamingamfsecure?command=open&version=1
Has anyone successfully got streaming to work over SSL?
Thanks,
Pratima
Questions to ask yourself (and post here):
Is the request showing up in your access logs?
Does Tomcat/whatever serve up normal HTML pages via HTTPS?
What do the response headers look like? Does clearing your cache change anything?
What browser are you using?
Can you set explicit caching headers?
Try one of these:
Cache-Control: no-store
Cache-Control: no-store, must-revalidate
Cache-Control: no-store,max-age=0,must-revalidate
Cache-Control: max-age=0,must-revalidate
Cache-Control: must-revalidate
2032 is a bit of a vague error from the framework.
However, things to check (in addition to Stu's list)
Can you hit the https:// page in a browser directly?
I notice in your example that you haven't specified the port number for SSL. Unless you've gone to the trouble of setting up some Apache SSL redirects, chances are this is a mistake.
If you paste the URL into a browser, you should be able to hit it and get an empty response. Anything else, and you've got a problem (often one that doesn't relate to BlazeDS).
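For example, from the command line (port 8443 is an assumption; substitute whatever your SSL connector actually uses):
# -v shows the TLS handshake; -k skips validation for a self-signed dev cert
curl -vk "https://localhost:8443/admin/messagebroker/streamingamfsecure?command=open&version=1"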
Is your certificate valid?
If you're using a self-signed cert (as is common in development), does your browser have a security exception defined? Various browsers will block attempts to hit invalid certs in different ways, but no self-respecting browser would allow this call through until an exception has been set up.
Is your channel defined correctly?
When switching from http:// to https://, you need to update your Channel class on the Flex client to SecureAMFChannel, and the endpoint class in your services-config.xml to SecureAMFEndpoint.
Broadly speaking, https with BlazeDS (either push, or RPC) works just fine, assuming you configure it properly.