I’ve created a Cloudflare page rule to cache a static page as follows:
domain.com/
Cache Level: cache everything
Edge Cache TTL: 2 hours
There’s generally no problem, but several times the server has returned an error, or there’s been a connection timeout, and Cloudflare has cached this result. How can I test for a valid response before caching the page?
You should be able to check the status of the response before you do any caching.
response.ok returns true if the status code is in the range 200 to 299; you can also interrogate the status directly with response.status.
const response = await fetch(event.request);
if (response.ok) {
  // ... do caching logic
}
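A fuller sketch of how that check might gate the caching (respondWithCache, fetchFn, and cache are illustrative names; in a real Worker you would use the global fetch, caches.default, and event.waitUntil for the put):

```javascript
// Only cache successful responses; errors and timeouts pass through uncached.
// fetchFn and cache are injected so the logic can run outside the Workers
// runtime -- in a Worker they would be the global fetch and caches.default.
async function respondWithCache(request, fetchFn, cache) {
  const cached = await cache.match(request);
  if (cached) return cached;

  const response = await fetchFn(request);
  if (response.ok) {
    // A 2xx response: safe to store. 5xx errors never reach the cache.
    await cache.put(request, response.clone());
  }
  return response;
}
```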
I set up a Cloudflare Worker to redirect to our API gateway, since we don't have control of the DNS and can't just set up a CNAME. The redirect works and passes along the body and all the headers except Authorization. The Worker receives that header, but when I look at the Worker console it is listed as redacted. The user_key param I'm passing is shown as redacted too, yet that one does get passed through.
const base = 'https://myurl.com';
const statusCode = 308;

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url);
  const { pathname, search } = url;
  const destinationURL = base + pathname + search;
  return Response.redirect(destinationURL, statusCode);
}
First, note that the redactions you are seeing are purely for display in the workers console. This is a feature to protect sensitive secrets from being logged, but it doesn't affect the content of any live request.
Now, with regard to what your Worker is actually doing:
This worker returns a 308 redirect response back to the client. It is then up to the client to follow the redirect, sending the same request to the new URL.
It is the client, then, that decides whether to send the Authorization header to the new location; this behavior is NOT controlled by Cloudflare Workers. As it turns out, many clients intentionally drop the Authorization header when following redirects to a different domain name. For example, the Go HTTP client library does this, and node-fetch recently started doing this as well. (I happen to disagree with this change, for reasons I explained in a comment.)
If the client is a web browser, then the behavior is complicated. If the Authorization header was added to the request as part of HTTP basic auth (i.e. the user was prompted by the browser for a username and password), then the header will be removed when following the redirect. However, if the Authorization header was provided by client-side JavaScript code when it called fetch(), then the header will be kept through the redirect.
Probably the best way to solve this is: Don't use a 3xx redirect. Instead, have the Worker directly forward the request to the new URL. That is, instead of this:
return Response.redirect(destinationURL, statusCode);
Try this:
return fetch(destinationURL, request);
With this code, the client will not receive a redirect. Instead, the Worker will directly forward the request to the new URL, and then forward the response back to the client. The Worker acts as a middleman proxy in this case. From the client's point of view, no forwarding took place; the original URL simply handled the request.
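Put together, the proxying version of the handler might look like the sketch below. The fetchFn parameter is only there so the logic can be exercised outside the Workers runtime; in the Worker itself you would call the global fetch directly:

```javascript
const base = 'https://myurl.com';

// Forward the incoming request to the destination instead of redirecting.
// Passing the original request as the init argument carries over the
// method, body, and headers (including Authorization).
async function handleRequest(request, fetchFn) {
  const { pathname, search } = new URL(request.url);
  const destinationURL = base + pathname + search;
  return fetchFn(destinationURL, request);
}
```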
I'm using Vue CLI and axios.
I have a searchbar where the user can input (potentially) any website and read info about the HTTP request and response.
Some of the information I need to get are: HTTP protocol, Status code, Location (if redirected), Date and Server.
What I'm doing is a simple axios GET request taking the input from the searchbar.
I'm trying to get my head around the CORS domain issues, but even when I input a CORS-supported site like myjson, I can access only the CORS-safelisted response headers, which are not what I'm looking for.
This is the axios call:
axios
  .get(url)
  .then((r) => {
    console.log(r);
    console.log(r.headers.server); // undefined
  })
  .catch((e) => {
    console.error(e);
  });
Is the brief I'm presenting even possible?
UPDATE
I then tried removing the Chrome extension I was using to enable CORS requests and installed the Moesif Origin & CORS Changer extension. After restarting my PC, I now have access to the remaining response headers.
I don't really know exactly what went wrong with the previous extension, but hopefully this helps somebody.
It's also worth pointing out that, as of the date I'm writing this edit, the myjson site has been flagged by Chrome as unsafe for privacy reasons. I've simply made HTTP requests to other sites and got the response headers as described.
The response to a cross-origin request for https://myjson.dit.upm.es/about contains the CORS-related headers
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, PATCH, PUT, DELETE, POST, OPTIONS
but no Access-Control-Expose-Headers. Without that, a cross-origin client cannot access the Server header, because it is not CORS-safelisted.
It would work if you had your server make the request and evaluate the headers, not the axios client.
I was under the impression that cross-origin requests getting blocked was primarily a security thing that prevents evil websites from getting or updating information on your web service.
I've noticed however that even though a request gets blocked on my front-end, the code still executes in the backend.
For example:
import express = require('express')
const app = express()
const port = 80

app.use(function (req, res, next) {
  // res.header("Access-Control-Allow-Origin", "*")
  // res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept")
  next()
});

app.get('/foo', (req, res) => {
  console.log("test1")
  res.json({
    data: "Hello"
  })
  console.log("test2")
})

app.listen(port, () => {
  console.log(`app listening at http://localhost:${port}`)
})
If my frontend then makes a request to the Express service, the request will fail because of cross-origin restrictions, but the Express service will still log
test1
test2
Should Express block the handler from continuing to run, since the origin is not permitted? Isn't it a security threat if the Express code still executes even though the front-end gets an error?
There are two types of CORS requests in the browser implementation, and which one is used depends on the specific request being made.
For requests that do not require preflight, the request is made to the back-end, and then the browser examines the resulting headers to see if the CORS request is permitted or not. If it's not permitted (as in the case you show), then the result of the request is blocked from the client, so it can't see the result (even though the server fully processed it).
As best I can tell from what you show, everything here is working as expected. The request is considered a "simple request" and does not require preflight. Browser-based Javascript will not be allowed to get results from this route if it's not the same origin.
For requests that do require preflight (there's a list of things in a request that can trigger preflight), then the browser will first ask the server if the request is permitted by sending an OPTIONS request to the same route. The server then decides whether to allow the request or not. If the browser gets permission from the server, then it sends the real request.
You can see what triggers preflight here.
Presumably, the browser doesn't use preflight all the time because it's less efficient (requiring double the requests).
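The decision can be sketched as a predicate. This is a simplification (needsPreflight is an illustrative name, and the real spec also covers details like ReadableStream bodies and upload event listeners), but it captures the method and header checks described above:

```javascript
// Methods and headers that a "simple" request may use without preflight.
const SIMPLE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SIMPLE_HEADERS = ['accept', 'accept-language', 'content-language'];
const SIMPLE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
]);

function needsPreflight(method, headers = {}) {
  if (!SIMPLE_METHODS.has(method)) return true;
  for (const [name, value] of Object.entries(headers)) {
    const n = name.toLowerCase();
    if (n === 'content-type') {
      // Only three content types are allowed without preflight.
      if (!SIMPLE_CONTENT_TYPES.has(value.split(';')[0].trim())) return true;
    } else if (!SIMPLE_HEADERS.includes(n)) {
      // Any custom header (e.g. X-Requested-With) triggers preflight.
      return true;
    }
  }
  return false;
}
```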
We have a Worklight app with app security defined in application-descriptor.xml. We have a challenge handler to handle the challenges. In the wlCommonInit() function, we call the WL.Client.connect() function, which in turn triggers the challenge handler. The user can type in a user ID / password and authenticate successfully. All good up to this point.
In the challenge handler, after successful authentication, we call the ChallengeHandler.submitSuccess() method to inform Worklight of the successful authentication.
This call should result in WL.Client.connect()'s onSuccess callback being invoked, but instead the app makes lots of requests to the URL ../App/iphone/init, which return 401. Eventually, after 1-2 minutes, one of these requests gets an HTTP 200 and the app then enters onSuccess().
Any idea why there are so many requests resulting in 401?
Below is a code snippet. In main.js:
WL.Client.connect({
  onSuccess : callbackOnSuccess,
  onFailure : callbackOnFailure
});
In challengeHandler.js:
$('#loginButton').bind('click', function () {
  var reqURL = '/j_security_check';
  var options = {};
  options.parameters = {
    j_username : $('#username').val(),
    j_password : $('#password').val()
  };
  options.headers = {};
  ChallengeHandler.submitLoginForm(reqURL, options, ChallengeHandler.submitLoginFormCallback);
});

ChallengeHandler.submitLoginFormCallback = function(response) {
  WASLTPARealmChallengeHandler.submitSuccess();
};
Theory:
Do you have a single MobileFirst Server or multiple?
If you have only one server, it would be helpful to get a network traffic log from a tool such as Wireshark.
If you have multiple servers, do you then also happen to have a load balancer involved?
For authentication to pass successfully there will be several requests: the first triggers the challenge handler and the second carries the user credentials. These need to reach the same server.
If the load balancer is misconfigured, requests may hit different MobileFirst Servers. It does sound like the requests are getting bounced between servers, meaning that the authentication request hits one server but the credentials request hits another...
So, in the case of multiple servers, you need to make sure that the sticky sessions option is enabled in the load balancer.
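As an illustration only (the hostnames and ports below are made up, and the right syntax depends on your load balancer product): with an nginx load balancer, sticky routing can be approximated with ip_hash, which keeps each client IP on the same backend:

```nginx
upstream mobilefirst_servers {
    ip_hash;                   # same client IP always routes to the same backend
    server mf-node-1:9080;
    server mf-node-2:9080;
}

server {
    listen 80;
    location / {
        proxy_pass http://mobilefirst_servers;
    }
}
```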
So: if I visit a URL at my remote server via my browser, example.host.com, I get some JSON back. Great.
If I put that exact same URL into some javascript that makes a XMLHttpRequest from a page being served from a server on my local machine, I get nothing, with a status=0 and a statusText=null. Pertinent facts:
The remote server's response header has access-control-allow-origin: '*'
When I make the XMLHttpRequest, it adds referer: "http://localhost:2154/HV" and origin: "http://localhost:2154" to its request header. These of course weren't there when I just put the URL into my browser.
MDN says the status reporting I described above usually happens when a request is unsent.
I've built my local server with node + express
The code for my XHR is as follows:
function fetchit(host, n) {
  var xmlhttp = new XMLHttpRequest();
  xmlhttp.onreadystatechange = function () {
    if (this.readyState == 4) {
      // do cool stuff
    }
  };
  xmlhttp.withCredentials = true;
  xmlhttp.open('GET', 'http://' + host + '/?cmd=getMsg&n=' + n);
  xmlhttp.send();
}
I get the impression that this has something to do with the origin header that XHR is adding to the request, but I thought the access-control-allow-origin line in the response header would make the origin not matter. Clearly there is something I don't understand about CORS. Thanks in advance for any ideas.
Alright, I figured this out. The problem is that credentialed requests are a bigger pain in the butt than non-credentialed requests. This MDN page explains what needed to happen. TL;DR:
The browser will not expose the response text of a credentialed request if the response header doesn't have access-control-allow-credentials: true
access-control-allow-origin can't be wildcarded in credentialed requests.