Varnish, using URL for backend instead of IP address - reverse-proxy

I'm setting up a reverse proxy with Varnish; I'm not experienced with it.
I am trying to use the backend's hostname (URL) instead of its IP address, with no luck:
1- Approach a:
backend default {
    .host = "www.backend.mysite.com";
    .port = "80";
}
Issue a: Restarting varnish keeps failing.
2- Approach b:
sub vcl_recv {
set req.http.Host = "www.backend.mysite.com";
...
}
Issue b: with this approach, when I enter mysite.com in the browser bar, it gets redirected to www.backend.mysite.com.
I don't think this is the expected behavior for this rule. Correct me if I am wrong.
Thanks,
Shab

Your first try should work, but your Varnish server needs access to the internet, or at least to a DNS server.
When Varnish starts, it performs a DNS lookup and replaces www.backend.mysite.com with the first IP address the DNS server returns.
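To see what that startup lookup amounts to, here is a rough Python illustration (not Varnish code; the function name is made up). Note that recent Varnish versions refuse to load a VCL whose backend hostname resolves to more than one address per address family, which is a common reason the restart fails:

```python
import socket

def resolve_backend(host, port):
    """Roughly what Varnish does when loading a VCL that names a backend
    by hostname: resolve the name once and keep the resulting addresses."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Collect the distinct (family, address) pairs returned by DNS.
    return sorted({(info[0], info[4][0]) for info in infos})

# If this lookup fails (no internet access, no reachable DNS server),
# the VCL cannot be compiled and the Varnish restart fails, as in "Issue a".
print(resolve_backend("localhost", 80))
```

If the name resolves to several addresses per family, resolve it yourself and put the IP in the VCL, or pin it via a hosts entry.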

Related

ktor redirecting to 0.0.0.0 when doing an https redirect

I've added an HTTP redirect to my Ktor application and it's redirecting to https://0.0.0.0 instead of to the actual domain's HTTPS URL.
@ExperimentalTime
fun Application.module() {
    if (ENV.env != LOCAL) {
        install(ForwardedHeaderSupport)
        install(XForwardedHeaderSupport)
        install(HttpsRedirect)
    }
Intercepting the route and printing out the host:
routing {
    intercept(ApplicationCallPipeline.Features) {
        val host = this.context.request.host()
I seem to be getting 0:0:0:0:0:0:0:0 for the host.
Do I need to add any special headers to Google Cloud's load balancer for this HTTPS redirect to work correctly? It seems like it's not picking up the correct host.
As your Ktor server is hidden behind a reverse proxy, it isn't tied to the "external" host of your site. Ktor has a specific feature for working behind a reverse proxy, so it should be as simple as install(XForwardedHeaderSupport) during configuration and referencing request.origin.remoteHost to get the actual host.
Let's try to see what's going on.
You create a service under http://example.org. On port 80 of the host for example.org, there is a load balancer. It handles all the incoming traffic, routing it to the servers behind itself.
Your actual application is running on another virtual machine. It has its own IP address, internal to your cloud, and accessible by the load balancer.
Let's see a flow of HTTP request and response for this system.
An external user sends an HTTP request to GET / with Host: example.org on port 80 of example.org.
The load balancer gets the request, checks its rules and finds an internal server to direct the request to.
The load balancer crafts a new HTTP request, mostly copying the incoming data, but updating the Host header and adding several X-Forwarded-* headers to preserve information about the proxied request (see here for GCP-specific details).
The request hits your server. At this point you can analyze X-Forwarded-* headers to see if you are behind a reverse proxy, and get needed details of the actual query sent by the actual user, like original host.
You craft the HTTP response, and your server sends it back to the load balancer.
The load balancer passes this response to the external user.
Note that although there is RFC 7239 for specifying information on request forwarding, GCP load balancer seems to use de-facto standard X-Forwarded-* headers, so you need XForwardedHeaderSupport, not ForwardedHeaderSupport (note additional X).
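For illustration, a proxied request arriving at the backend might look roughly like this (addresses are made up; GCP's load balancer appends the client and proxy addresses to X-Forwarded-For and records the original scheme in X-Forwarded-Proto):

```
GET / HTTP/1.1
Host: example.org
X-Forwarded-For: 203.0.113.7, 130.211.0.1
X-Forwarded-Proto: http
```

An HTTPS redirect behind the proxy should therefore key off X-Forwarded-Proto rather than the scheme of the connection the server itself sees.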
So it seems either Google Cloud Load Balancer is sending the wrong headers or Ktor is reading the wrong headers or both.
I've tried
install(ForwardedHeaderSupport)
install(XForwardedHeaderSupport)
install(HttpsRedirect)
or
//install(ForwardedHeaderSupport)
install(XForwardedHeaderSupport)
install(HttpsRedirect)
or
install(ForwardedHeaderSupport)
//install(XForwardedHeaderSupport)
install(HttpsRedirect)
or
//install(ForwardedHeaderSupport)
//install(XForwardedHeaderSupport)
install(HttpsRedirect)
All these combinations are working on another project, but that project is using an older version of Ktor (this being the one that was released with 1.4 rc) and that project is also using an older Google Cloud load balancer setup.
So I've decided to roll my own.
This line will log all the headers coming in with your request,
log.info(context.request.headers.toMap().toString())
then just pick the relevant ones and build an https redirect:
routing {
    intercept(ApplicationCallPipeline.Features) {
        if (ENV.env != LOCAL) {
            log.info(context.request.headers.toMap().toString())
            // workaround for call.request.host containing the wrong host
            // and not redirecting properly to the correct https url
            val proto = call.request.header("X-Forwarded-Proto")
            val host = call.request.header("Host")
            val path = call.request.path()
            if (host == null || proto == null) {
                log.error("Unknown host / proto")
            } else if (proto == "http") {
                val newUrl = "https://$host$path"
                log.info("https redirecting to $newUrl")
                // redirect the browser
                this.context.respondRedirect(url = newUrl, permanent = true)
                this.finish()
            }
        }
    }
}
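The decision in that interceptor is plain header logic; as a language-neutral sketch, here is the same rule in Python (the function name is hypothetical, the header names are the ones used above):

```python
def https_redirect_target(headers, path):
    """Return the https:// URL to redirect to, or None when no redirect applies.

    Mirrors the interceptor above: redirect only when the load balancer
    reports, via X-Forwarded-Proto, that the original request was plain http.
    """
    proto = headers.get("X-Forwarded-Proto")
    host = headers.get("Host")
    if host is None or proto is None:
        return None  # unknown host / proto: nothing sensible to do
    if proto == "http":
        return f"https://{host}{path}"
    return None  # already https, no redirect needed

# prints https://example.org/login
print(https_redirect_target({"X-Forwarded-Proto": "http", "Host": "example.org"}, "/login"))
```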

cloudflare worker rewrite Host Header

How do I set up another Host Header in the cloudflare worker?
For example, I have set up 1.2.3.4 as the IP for my site's www record.
By default, www requests are sent upstream with the Host header www.ex.com, but I want to send them with the Host header new.ex.com.
You need to configure a DNS record for new.ex.com so that it points to the same IP address. Then, you can make a fetch() request with new.ex.com in the URL.
If you cannot make new.ex.com point at the right IP, another alternative is to make a fetch() request using the resolveOverride option to specify a different hostname's IP address to use:
fetch("https://new.ex.com", {cf: {resolveOverride: "www.ex.com"}});
Note that this only works if both hostnames involved are under your zone. Documentation about resolveOverride can be found here.
You cannot directly set the Host header because doing so could allow bypassing of security settings when making requests to third-party servers that also use Cloudflare.
// Parse the URL.
let url = new URL(request.url);
// Change the hostname.
url.hostname = "check-server.example.com";
// Construct a new request and send it upstream.
request = new Request(url, request);
return fetch(request);
Note that this will affect the Host header seen by the origin
(it'll be check-server.example.com). Sometimes people want the Host header to remain the same.
// Tell Cloudflare to connect to `check-server.example.com`
// instead of the hostname specified in the URL.
request = new Request(request,
{cf: {resolveOverride: "check-server.example.com"}})

Nginx - SSL - Redirect IP to domain issue

I have a Django app running on Ubuntu 18.04 and nginx at DigitalOcean.
I've pointed a domain at the app and enabled SSL using Certbot (by following this).
I have SECURE_SSL_REDIRECT = True in settings.py
I'm trying to redirect all requests to the IP address to the domain name.
I've added
server {
    listen 80;
    server_name x.x.x.x;
    return 301 $scheme://mydomain.com$request_uri;
}
When I access the app as https://x.x.x.x, it's not redirected. Instead it shows a privacy error, and after accepting it I can access the app with the IP in the URL.
(I have not done the SSL steps for IP)
Do I need to redo all the steps to enable SSL for the IP address too, to get a redirect regardless of http or https? (I'm not sure whether this will work.)
Thanks for any help.
Edit:
Thanks @Richard Smith for the comments. I've got it working.
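For reference, the usual fix is to give the IP-based catch-all server a 443 listener as well, reusing the domain's certificate. A sketch (the certificate paths are the typical Certbot locations and are an assumption; adjust to your setup):

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name x.x.x.x;
    # Reuse the domain's certificate. Browsers will still warn once,
    # because the certificate does not cover the bare IP address.
    ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    return 301 https://mydomain.com$request_uri;
}
```

Without a 443 listener on the IP-based server block, an https://x.x.x.x request is handled by the domain's SSL server block and never reaches the redirect.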

Configuring varnish with multiple domains + SSL support

I am currently involved in implementing Varnish with a load balancer as its backend, which forwards traffic to multiple web servers.
I am trying to achieve:
Public traffic -> haproxy/DNS -> [Varnish (x2) / nginx (ssl)] -> load balancer -> web servers (x4)
I am able to configure Varnish and nginx as an SSL/443 terminator for one domain
(i.e. if I point DNS at the Varnish interface, the web server serves the page).
varnish config
backend loadbalancer { .host = "xxx.xxx.xxx.xxx"; .port = "80"; }
backend loadbalancer_ssl { .host = "xxx.xxx.xxx.xxx"; .port = "443"; }
sub vcl_recv {
    # Set the director to cycle between web servers.
    if (server.port == 443) {
        set req.backend = loadbalancer_ssl;
    }
    else {
        set req.backend = loadbalancer;
    }
}
# And other VCL rules for security, etc.
Nginx Config
location / {
    # Pass the request on to Varnish.
    proxy_pass http://127.0.0.1;
    proxy_http_version 1.1;
}
# SSL certificate and config ...
=> How would I configure Varnish as the DNS entry point, with SSL termination for multiple domains?
=> Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web servers directly? (so that I don't have to worry about multiple interfaces for SSL support)
=> Or is there any standard approach to achieve this with a 443 terminator?
Note: why I am trying to achieve this: to create multiple layers of security and to use existing hardware devices.
Already in place:
All servers have multiple interfaces for SSL, using lighttpd.
Load balancer -> hardware -> which balances load between those web servers.
Any experts sharing their views would be great.
I decided to go with nginx as the SSL terminator, and answered my own questions as below. I'm updating this in case anyone finds it useful.
From my query above:
How would I configure Varnish as the DNS entry point, with SSL termination for multiple domains?
=> We need a socket listening for HTTPS: nginx, Pound, or anything else that can terminate SSL.
(I was not previously convinced about using this point as the SSL terminator, but I think I'm fine with it now, since everything beyond that level is planned as an internal zone.)
=> Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web servers directly? (so that I don't have to worry about multiple interfaces for SSL support)
One can achieve this with multiple interfaces (if you have multiple domains),
or with the same interface if you are dealing with subdomains,
or you can create one secure page for all SSL-required pages (it depends on the trade-off).
=> Or is there any standard approach to achieve this with a 443 terminator?
I decided to go with nginx, to use the features it provides in terms of a security layer.
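A rough sketch of that layout (server names and certificate paths are placeholders, not from the original setup): nginx terminates SSL per domain and hands the decrypted traffic to Varnish on port 80, passing the original host and scheme along:

```nginx
server {
    listen 443 ssl;
    server_name www.domain-a.com;            # one server block per domain/certificate
    ssl_certificate     /etc/ssl/domain-a/fullchain.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/domain-a/privkey.pem;

    location / {
        # Hand the decrypted request to Varnish listening on port 80.
        proxy_pass http://127.0.0.1:80;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Varnish (and the web servers behind it) can then distinguish HTTPS traffic by inspecting X-Forwarded-Proto instead of the port, since Varnish itself never sees the TLS connection.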

Apache reverse proxy forwarding https header

I have successfully installed a reverse proxy on Apache. It works like a charm. I'm using it to proxy HTTPS requests to HTTP. My problem is that I need to forward the HTTPS server variable to my end server, to indicate whether a visitor is using an SSL connection or plain HTTP. I have found one way to do it:
I can forward the X-Forwarded-Proto header and check it on the end server:
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
$_SERVER['HTTPS']='on';
But this variant is not good for me, because I can't edit the end server's scripts. Let's say I don't even have access to them, but I know how to check whether the header is forwarded. So, generally, my question is: is there any way to forward this variable? I have seen one more variant, with the rewrite engine, but it didn't work for me and there is no detailed information. Maybe if I set up my server as Nginx + Apache, it will send this header variable?
If you can edit the end server's configuration, take a look at mod_rpaf and its RPAF_SetHTTPS option:
https://github.com/gnif/mod_rpaf
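Roughly, the two halves could look like this (the proxy address is a placeholder; the directive names follow the gnif/mod_rpaf README, and you should check it for the exact forwarded-header names mod_rpaf reads):

```apache
# On the proxy, inside the HTTPS virtual host: record the original scheme.
RequestHeader set X-Forwarded-Proto "https"

# On the end server, with mod_rpaf loaded:
RPAF_Enable   On
RPAF_ProxyIPs 10.0.0.1        # the reverse proxy's address (placeholder)
RPAF_Header   X-Forwarded-For
RPAF_SetHTTPS On              # have mod_rpaf set HTTPS=on for forwarded SSL requests
```

This keeps the end server's scripts untouched: mod_rpaf rewrites the request environment before the scripts run, so $_SERVER['HTTPS'] appears as it would on a direct SSL connection.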