NGINX - Add http headers and forward messages to Apache server?

I am trying to build fleet management software along the lines of Google Maps or Bing Maps, and I need the GPS devices to send messages to the server and have the server store them (MySQL).
I have an Apache server (let's say "myserver.com") which only accepts/processes HTTP requests, for security reasons. The problem with this configuration is that it does not process the GPS messages, because the device does not include HTTP headers in its messages by default.
So I was thinking of putting an nginx server between them and making the GPS devices send their messages to the nginx server, which would then add HTTP headers to the original message and forward it to the Apache server.
I have tried to find a good tutorial online, but so far haven't been able to.
Can anyone help me? Thank you.

I'm a bit confused about what you mean by 'GPS messages'. Is it just HTTP traffic without the appropriate header? If so, you want to use the proxy module. You can find the current documentation for it here.
Here is an example:
http {
    upstream backend_apache {
        server apache_server1_ip:80;
        server apache_server2_ip:80;
    }

    server {
        listen 80;
        server_name myserver.com;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://backend_apache;
        }
    }
}
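If, on the other hand, the devices send raw TCP payloads that are not HTTP at all, the HTTP proxy module will not help: nginx cannot wrap an arbitrary byte stream in an HTTP request by itself. What the stream module can do is forward the raw TCP to a small translator service that speaks HTTP to Apache. A minimal sketch (the listen port and translator address are assumptions, and the translator itself is a hypothetical service you would have to write):

```nginx
# Goes at the top level of nginx.conf, outside the http {} block.
stream {
    server {
        listen 5000;                 # port the GPS devices are pointed at
        proxy_pass 127.0.0.1:8080;   # hypothetical translator that wraps payloads in HTTP
    }
}
```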

Related

Header Enrichment not working on HTTPs

I have multiple questions regarding header enrichment with SSL + nginx.
Why does header enrichment not work with HTTPS?
One of my projects has HE (header enrichment) enabled on plain HTTP, but when we look for specific headers like msisdn over HTTPS, they are missing.
I am using nginx, so I tried to add the headers and redirect the request from HTTP to HTTPS, but with no result. How can I achieve this? The following is a sample of the nginx code block.
server {
    listen 80;        ## listen for ipv4; this line is default and implied
    listen [::]:80;   ## listen for ipv6

    location / {
        add_header X-my-header my-header-content;
        return 301 https://$host$request_uri?msisdn=$http_MSISDN;
    }
}
I have tried passing it as a query parameter and that works fine, but I am more interested in doing it via headers.
Thank you.
(1),(2)
ISPs implement HE by injecting some headers into the request. This is possible with HTTP, since they can easily inspect the request, but not with HTTPS, because the request is encrypted.
There have been some attempts to provide alternative solutions, but there is no solid or standardized one so far.
See more: https://blog.apnic.net/2016/10/13/challenges-of-https/
(3)
I suggest asking this in a separate question. However, I noticed that you didn't configure an HTTPS endpoint in nginx. Please refer to:
http://nginx.org/en/docs/http/configuring_https_servers.html
(4)
Query parameters are part of the URL, which can be HTTP or HTTPS or any other protocol. They differ from headers, which are part of the message itself.
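To make the distinction concrete, this is what the two variants look like on the wire (hypothetical host and msisdn value). The first request carries the value in the URL itself; the second carries it as a header inside the message:

```http
GET /landing?msisdn=491701234567 HTTP/1.1
Host: example.com

GET /landing HTTP/1.1
Host: example.com
MSISDN: 491701234567
```

With TLS, both the URL and the headers are encrypted in transit, which is exactly why an ISP sitting in the middle can inject neither.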

Reverse proxy - route to the same machine based on IP

Say I have a load balancer (Nginx or another, it doesn't matter) and I want it to route to a machine based on IP. The IP is not known at configuration time. So, for example, I have a load balancer in front of machines m1 and m2. A request comes from IP10 and it gets routed to m1, all subsequent requests from IP10 also get routed to m1. Another request comes from IP11 and it gets routed to m2, all subsequent requests from IP11 also get routed to m2.
Is this possible, if so how?
From your description, I understand that you don't have a specific requirement for where the first request from a given IP is routed, as long as all subsequent requests follow the same route.
If that's the case, what you want is a load-balancing feature called session stickiness, or persistent sessions.
In nginx you can achieve that with the following configuration:
http {
    upstream mybackend {
        ip_hash;
        server m1.ltd;
        server m2.ltd;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://mybackend;
        }
    }
}
Here is the link to the specific nginx docs.
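One caveat worth knowing: for IPv4 clients, ip_hash keys on the first three octets of the address, so an entire /24 lands on the same backend. If you need the full client address hashed, the generic hash directive (available since nginx 1.7.2) is an alternative; a sketch using the same assumed backends as above:

```nginx
upstream mybackend {
    hash $remote_addr consistent;   # hash the full client address; 'consistent' enables ketama hashing
    server m1.ltd;
    server m2.ltd;
}
```

The consistent variant also minimizes remapping when a backend is added or removed.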

Difference HTTP Redirect vs Reverse Proxy in NGINX

I'm having some difficulty in understanding the difference between reverse proxy (i.e. using proxy_pass directive with a given upstream server) and a 301 permanent redirect. How are they similar/different?
Reverse Proxy
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
HTTP Redirect
Apache Example: http://www.inmotionhosting.com/support/website/htaccess/redirect-without-changing-url
NGINX example:
server {
    listen 80;
    server_name domain1.com;
    return 301 $scheme://domain2.com$request_uri;
}
Hence, it seems that both approaches have no difference from an end-user perspective. I want to ensure efficient bandwidth usage, while utilizing SSL. Currently, the app server uses its own self-signed SSL certificate with Nginx. What is the recommended approach for redirecting users from a website hosted by a standard web hosting company (hostgator, godaddy, etc.) to a separate server app server?
With a redirect the server tells the client to look elsewhere for the resource. The client will be aware of this new location. The new location must be reachable from the client.
A reverse proxy instead forwards the request of the client to some other location itself and sends the response from this location back to the client. This means that the client is not aware of the new location and that the new location does not need to be directly reachable by the client.
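The difference is visible in the raw exchange. With the redirect configuration above, the client receives a 3xx status and a Location header, and must then make a second request to the new host itself (hypothetical hosts):

```http
GET / HTTP/1.1
Host: domain1.com

HTTP/1.1 301 Moved Permanently
Location: http://domain2.com/
```

With proxy_pass, by contrast, the same request simply comes back as a final response from the upstream; there is no Location header and the address in the browser never changes.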

Front end servers return themselves as visiting host

I have tested quite a few front-end proxies (HAProxy, Apache, Nginx, Lighttpd), but in the logs of my Tornado backend servers the only visitor I see is the front-end server itself. I would like to know the real IP of the client that is visiting, so that my log analyzer sees more than one visitor.
What would be the simplest way to do this?
The backend is Tornado (Python); the frontend could be any of the above, but I currently have nginx configured.
You have a couple of options. The easiest to implement is to simply consume the X-Forwarded-For header:
http://en.wikipedia.org/wiki/X-Forwarded-For
To enable X-Forwarded-For in HAProxy, simply add:
option forwardfor
http://cbonte.github.io/haproxy-dconv/configuration-1.4.html#4.2-option%20forwardfor
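Since you currently have nginx in front, the equivalent there is a proxy_set_header in the location that proxies to Tornado; a sketch (the backend address is an assumption):

```nginx
location / {
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8000;   # hypothetical Tornado backend
}
```

$proxy_add_x_forwarded_for appends $remote_addr to any X-Forwarded-For value already present, preserving the chain when there are multiple proxies.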
If you do not want to consume the X-Forwarded-For header, then you can push for having the "PROXY protocol" implemented in Tornado, or look at using something like gunicorn (http://gunicorn.org/). The PROXY protocol works by prepending the original L4 connection information to the L7 data. The receiving server must understand the PROXY protocol, or the connection just looks like a corrupt packet.
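On the Tornado side, consuming the header amounts to preferring the leftmost entry of X-Forwarded-For over the socket peer address (Tornado's HTTPServer can also do this for you if you construct it with xheaders=True). The logic itself is small; a sketch, where the function name is mine:

```python
def real_client_ip(headers, peer_ip):
    """Return the originating client IP.

    X-Forwarded-For may hold a comma-separated chain
    (client, proxy1, proxy2); the leftmost entry is the
    original client. Fall back to the TCP peer address
    when the header is absent.
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return peer_ip
```

One caution: only trust the header when the request actually arrived from your own proxy, since any client can send a forged X-Forwarded-For.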

Configuring Varnish with multiple domains + SSL support

I am currently involved in an implementation of Varnish with a load balancer as back-end, which forwards traffic to multiple web servers.
I am trying to achieve:
Public traffic -> haproxy/DNS -> [Varnish (x2) / nginx (SSL)] -> load balancer -> web servers (x4)
I have been able to configure Varnish with nginx as the SSL/443 terminator for one domain
(i.e. if I point DNS at the Varnish interface and access it, the web server serves the page).
varnish config
backend loadbalancer     { .host = "xxx.xxx.xxx.xxx"; .port = "80"; }
backend loadbalancer_ssl { .host = "xxx.xxx.xxx.xxx"; .port = "443"; }

sub vcl_recv {
    # Set the director to cycle between web servers.
    if (server.port == 443) {
        set req.backend = loadbalancer_ssl;
    } else {
        set req.backend = loadbalancer;
    }
}
# And other vcl rules for security and other.
Nginx Config
location / {
    # Pass the request on to Varnish.
    proxy_pass http://127.0.0.1;
    proxy_http_version 1.1;
}
# SSL certificate and config omitted
=> How would I configure Varnish as the DNS entry point, with SSL termination for multiple domains?
=> Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web server directly (so that I don't have to worry about multiple interfaces for SSL support)?
=> Or is there any standard approach to achieve this with a 443 terminator?
Note (why I am trying to achieve this): to create multiple layers of security and to use the existing hardware devices.
Already in place:
All servers have multiple interfaces for SSL (using lighttpd).
Load balancer -> hardware -> which balances the load between those web servers.
Any experts sharing their views would be great.
I decided to go with nginx as the SSL terminator, and answer my own questions below. I am posting this update in case anyone finds it useful.
From my query above:
How would I configure Varnish as the DNS entry point, with SSL termination for multiple domains?
=> What works is to have a socket listening for HTTPS: either nginx, Pound, or anything else that can terminate SSL.
(I was not previously convinced about using this point as the SSL terminator; however, I think it is fine now, since everything beyond that level is planned to be an internal zone.)
=> Is it possible to somehow configure Varnish to accept all connections and pass SSL through to the web server directly (so that I don't have to worry about multiple interfaces for SSL support)?
One can achieve this either with multiple interfaces (if you have multiple domains),
or with the same interface if you are dealing with subdomains.
Or you can create one secure page for all SSL-required pages (it depends on the trade-off).
=> Or is there any standard approach to achieve this with a 443 terminator?
I decided to go with nginx, to use the features nginx provides (in terms of the security layer).
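For reference, a multi-domain nginx terminator in front of Varnish is just one server block per domain, each with its own certificate, all proxying to the same Varnish listener. A minimal sketch (hostnames, certificate paths, and the Varnish port are all assumptions):

```nginx
server {
    listen 443 ssl;
    server_name example-one.com;
    ssl_certificate     /etc/ssl/example-one.com.crt;   # hypothetical paths
    ssl_certificate_key /etc/ssl/example-one.com.key;

    location / {
        proxy_set_header Host $host;                # lets Varnish route per domain
        proxy_set_header X-Forwarded-Proto https;   # tell the backend the original scheme
        proxy_pass http://127.0.0.1:6081;           # assumed Varnish listen address
    }
}

server {
    listen 443 ssl;
    server_name example-two.com;
    ssl_certificate     /etc/ssl/example-two.com.crt;
    ssl_certificate_key /etc/ssl/example-two.com.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://127.0.0.1:6081;
    }
}
```

Because the Host header is preserved, a single Varnish instance can serve all the domains behind one plain-HTTP listener, which avoids the multiple-interface problem entirely.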