Say I have a load balancer (Nginx or another, it doesn't matter) and I want it to route to a machine based on IP. The IP is not known at configuration time. So, for example, I have a load balancer in front of machines m1 and m2. A request comes from IP10 and it gets routed to m1, all subsequent requests from IP10 also get routed to m1. Another request comes from IP11 and it gets routed to m2, all subsequent requests from IP11 also get routed to m2.
Is this possible, if so how?
From your description I understand that you don't have a specific requirement for where the first request from a given IP will be routed, as long as all subsequent requests follow the same route.
If that's the case, what you want is a load-balancing method called session persistence (also known as sticky sessions).
In nginx you can achieve that with the following configuration:
http {
    upstream mybackend {
        # ip_hash routes every request from the same client IP
        # to the same backend server
        ip_hash;
        server m1.ltd;
        server m2.ltd;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://mybackend;
        }
    }
}
See the nginx documentation on the ip_hash directive for the details.
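Note that ip_hash keys on only the first three octets of an IPv4 client address. If you need stickiness keyed on the full address, a possible alternative is the generic hash directive; a minimal sketch, assuming nginx 1.7.2 or later:

upstream mybackend {
    # hash on the full client address; "consistent" enables ketama-style
    # consistent hashing, so few clients are remapped when servers change
    hash $remote_addr consistent;
    server m1.ltd;
    server m2.ltd;
}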
I've added an HTTP redirect to my Ktor application, and it's redirecting to https://0.0.0.0 instead of to the actual domain's HTTPS URL.
@ExperimentalTime
fun Application.module() {
    if (ENV.env != LOCAL) {
        install(ForwardedHeaderSupport)
        install(XForwardedHeaderSupport)
        install(HttpsRedirect)
    }
Intercepting the route and printing out the host
routing {
    intercept(ApplicationCallPipeline.Features) {
        val host = this.context.request.host()
I seem to be getting 0:0:0:0:0:0:0:0 for the host.
Do I need to add any special headers to Google Cloud's load balancer for this HTTPS redirect to work correctly? It seems like it's not picking up the correct host.
As your Ktor server is hidden behind a reverse proxy, it isn't tied to the "external" host of your site. Ktor has a specific feature for working behind a reverse proxy, so it should be as simple as install(XForwardedHeaderSupport) during configuration and referencing request.origin.remoteHost to get the actual host.
Let's try to see what's going on.
You create a service under http://example.org. On port 80 of the host for example.org, there is a load balancer. It handles all the incoming traffic, routing it to the servers behind it.
Your actual application is running on another virtual machine. It has its own IP address, internal to your cloud, and accessible by the load balancer.
Let's see a flow of HTTP request and response for this system.
1. An external user sends an HTTP request to GET / with Host: example.org on port 80 of example.org.
2. The load balancer receives the request, checks its rules, and finds an internal server to direct the request to.
3. The load balancer crafts a new HTTP request, mostly copying the incoming data, but updating the Host header and adding several X-Forwarded-* headers to preserve information about the proxied request (see the GCP load balancing documentation for the details specific to GCP).
4. The request hits your server. At this point you can analyze the X-Forwarded-* headers to see whether you are behind a reverse proxy, and recover the details of the query the actual user sent, such as the original host.
5. You craft the HTTP response, and your server sends it back to the load balancer.
6. The load balancer passes this response to the external user.
Note that although there is RFC 7239 for specifying information about request forwarding, the GCP load balancer appears to use the de-facto standard X-Forwarded-* headers, so you need XForwardedHeaderSupport, not ForwardedHeaderSupport (note the additional X).
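For illustration, this is roughly the header rewriting a reverse proxy performs before passing the request upstream; a minimal nginx sketch (the GCP load balancer does the equivalent internally, and the upstream address is hypothetical):

server {
    listen 80;
    server_name example.org;
    location / {
        # keep the original Host and record the original client and scheme,
        # so the upstream application can reconstruct the external URL
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://10.0.0.2:8080;  # hypothetical internal app server
    }
}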
So it seems that either Google Cloud Load Balancer is sending the wrong headers, or Ktor is reading the wrong headers, or both.
I've tried
install(ForwardedHeaderSupport)
install(XForwardedHeaderSupport)
install(HttpsRedirect)
or
//install(ForwardedHeaderSupport)
install(XForwardedHeaderSupport)
install(HttpsRedirect)
or
install(ForwardedHeaderSupport)
//install(XForwardedHeaderSupport)
install(HttpsRedirect)
or
//install(ForwardedHeaderSupport)
//install(XForwardedHeaderSupport)
install(HttpsRedirect)
All these combinations work in another project, but that project uses an older version of Ktor (the one released with the 1.4 RC) and an older Google Cloud load balancer setup.
So I've decided to roll my own.
This line will log all the headers coming in with your request,
log.info(context.request.headers.toMap().toString())
then just pick the relevant ones and build an https redirect:
routing {
    intercept(ApplicationCallPipeline.Features) {
        if (ENV.env != LOCAL) {
            log.info(context.request.headers.toMap().toString())
            // workaround for call.request.host containing the wrong host
            // and not redirecting properly to the correct https url
            val proto = call.request.header("X-Forwarded-Proto")
            val host = call.request.header("Host")
            val path = call.request.path()
            if (host == null || proto == null) {
                log.error("Unknown host / proto")
            } else if (proto == "http") {
                val newUrl = "https://$host$path"
                log.info("https redirecting to $newUrl")
                // redirect the browser and stop processing this call
                this.context.respondRedirect(url = newUrl, permanent = true)
                this.finish()
            }
        }
    }
}
I am adding an nginx reverse proxy in front of my existing nextcloudpi server in order to be able to route traffic to different servers in my network depending on the domain that is used. Currently the nextcloudpi server is the only one running, so all traffic is routed directly to it.
The server is only accessible via HTTPS, and Let's Encrypt handles the certificates on the nextcloudpi server.
In order to route traffic from my reverse proxy to the nextcloudpi server via https, I have set up the default.conf file to look like this:
server {
    listen 443;
    listen [::]:443;
    server_name <my-public-url>;
    location / {
        proxy_pass https://<hostname-of-my-nextcloudpi-server>;
    }
}
Unfortunately that doesn't work: Firefox returns SSL_ERROR_RX_RECORD_TOO_LONG and Chrome returns ERR_SSL_PROTOCOL_ERROR.
I have also not seen an example anywhere of traffic being proxied to an https location. I am aware that within my internal network I could just route to the target on port 80, but since the server is already set up to use HTTPS, I want to keep it that way.
Thanks for your help
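That error usually means nginx is answering plain HTTP on port 443: without the ssl parameter on the listen directive, nginx never performs a TLS handshake, which is exactly what SSL_ERROR_RX_RECORD_TOO_LONG and ERR_SSL_PROTOCOL_ERROR indicate. A minimal sketch of the likely fix, assuming the proxy host has (or obtains) a certificate of its own; the certificate paths are placeholders:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name <my-public-url>;

    # the reverse proxy must terminate TLS itself; placeholder paths below
    ssl_certificate     /etc/letsencrypt/live/<my-public-url>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my-public-url>/privkey.pem;

    location / {
        proxy_pass https://<hostname-of-my-nextcloudpi-server>;
        proxy_set_header Host $host;
        proxy_ssl_server_name on;  # send SNI to the upstream https server
    }
}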
I'm having some difficulty understanding the difference between a reverse proxy (i.e. using the proxy_pass directive with a given upstream server) and a 301 permanent redirect. How are they similar/different?
Reverse Proxy
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
}
server {
    location / {
        proxy_pass http://backend;
    }
}
HTTP Redirect
Apache Example: http://www.inmotionhosting.com/support/website/htaccess/redirect-without-changing-url
NGINX example:
server {
    listen 80;
    server_name domain1.com;
    return 301 $scheme://domain2.com$request_uri;
}
Hence, it seems that the two approaches are indistinguishable from an end-user perspective. I want to ensure efficient bandwidth usage while using SSL. Currently, the app server uses its own self-signed SSL certificate with Nginx. What is the recommended approach for sending users from a website hosted by a standard web hosting company (HostGator, GoDaddy, etc.) to a separate app server?
With a redirect the server tells the client to look elsewhere for the resource. The client will be aware of this new location. The new location must be reachable from the client.
A reverse proxy instead forwards the request of the client to some other location itself and sends the response from this location back to the client. This means that the client is not aware of the new location and that the new location does not need to be directly reachable by the client.
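A small nginx sketch of that distinction (the backend address is hypothetical): with proxy_pass the backend can live on a private address the client can never reach, whereas a return 301 target must be publicly resolvable:

server {
    listen 80;
    server_name app.example.com;
    location / {
        # the client keeps talking to this server; the private backend
        # address never appears in the browser
        proxy_pass http://10.0.0.5:8080;  # hypothetical internal-only backend
    }
}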
I am trying to build fleet-management software along the lines of Google Maps or Bing Maps, and I need the GPS devices to send messages to the server and have the server store them (MySQL).
I have an Apache server (let's say "myserver.com") which only processes/accepts HTTP requests for security reasons. The problem with this configuration is that it does not process the GPS messages, because the device does not include HTTP headers in its messages by default.
So I was thinking of putting an nginx server in between them and making the GPS devices send their messages to the nginx server, which then adds HTTP headers to the original message and forwards it to the Apache server.
I have tried to find a good tutorial online, but so far I haven't been able to.
Can anyone help me? Thank you.
I'm a bit confused about what you mean by 'GPS messages'. Is it just HTTP traffic without the appropriate headers? If so, you want to use the proxy module; see the current nginx documentation for it.
Here is an example:
http {
    upstream backend_apache {
        server apache_server1_ip:80;
        server apache_server2_ip:80;
    }
    server {
        listen 80;
        server_name myserver.com;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://backend_apache;
        }
    }
}
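Note that if the devices send raw, non-HTTP payloads, an http proxy cannot help: nginx does not wrap arbitrary TCP data in HTTP headers. What it can do is plain TCP forwarding via the stream module (nginx 1.9.0+, when compiled in); a minimal sketch with a hypothetical port, though Apache would then still receive non-HTTP traffic:

stream {
    upstream gps_backend {
        server apache_server1_ip:5000;  # hypothetical TCP port on the backend
    }
    server {
        listen 5000;            # raw TCP; no HTTP framing is added
        proxy_pass gps_backend;
    }
}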
I am currently involved in an implementation of Varnish with a load balancer as the back end, which forwards traffic to multiple web servers.
I am trying to achieve:
Public Traffic -> haproxy/DNS -> [Varnish (x2) / nginx(ssl) ] -> Loadbalancer -> Web server(x4)
I am able to configure Varnish and nginx as an SSL/443 terminator for one domain (i.e. if I point DNS at the Varnish interface and access it, the web server serves the page).
Varnish config
backend loadbalancer { .host = "xxx.xxx.xxx.xxx"; .port = "80"; }
backend loadbalancer_ssl { .host = "xxx.xxx.xxx.xxx"; .port = "443"; }
sub vcl_recv {
    # Set the director to cycle between web servers.
    if (server.port == 443) {
        set req.backend = loadbalancer_ssl;
    } else {
        set req.backend = loadbalancer;
    }
}
# ... and other VCL rules for security, etc.
Nginx config
location / {
    # Pass the request on to Varnish.
    proxy_pass http://127.0.0.1;
    proxy_http_version 1.1;
}
# SSL certificate and config ...
=> How would I configure Varnish as the DNS entry point, with SSL termination for multiple domains?
=> Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web servers directly? (So that I don't have to worry about multiple interfaces for SSL support.)
=> Or is there any standard approach to achieve this with a 443 terminator?
Note on why I am trying to achieve this: to create multiple layers of security and to use existing hardware devices.
Already in place:
All servers have multiple interfaces for SSL (using lighttpd).
Load balancer -> hardware, which will balance the load between those web servers.
Any experts sharing their views would be great.
I have decided to go with nginx as the SSL terminator, and I answer my own questions below. I decided to post this update in case anyone finds it useful.
From my above query:
How would I configure Varnish as the DNS entry point, with SSL termination for multiple domains?
=> The way it works is that we need a socket listening for HTTPS: nginx, Pound, or anything else that can terminate SSL. (I was previously not quite convinced about using this point as the SSL terminator; however, I think I am fine with it, as I now plan to make everything beyond that level an internal zone.)
Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web servers directly? (So that I don't have to worry about multiple interfaces for SSL support.)
=> One can achieve this either with multiple interfaces (if you have multiple domains), or with the same interface if you are dealing with subdomains. Or you can create one secure page for all SSL-required pages (it depends on the trade-offs).
Or is there any standard approach to achieve this with a 443 terminator?
=> I decided to go with nginx, to use the features nginx provides (in terms of the security layer).
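For completeness, a minimal sketch of what such an nginx SSL terminator for multiple domains can look like, handing decrypted traffic to Varnish on port 80; the domain and certificate paths are placeholders:

server {
    listen 443 ssl;
    server_name domain1.com;
    ssl_certificate     /etc/ssl/domain1.com.crt;   # placeholder
    ssl_certificate_key /etc/ssl/domain1.com.key;   # placeholder
    location / {
        # Pass the decrypted request on to Varnish.
        proxy_pass http://127.0.0.1;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;  # tell the backend TLS was used
    }
}
# one such server block per domain; SNI selects the matching certificate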