I need to count the incoming and outgoing traffic on my nginx proxy server (nginx-reverse-proxy).

I am using the $bytes_sent and $request_length variables to calculate the bandwidth used.
I have enabled them in the log format (in nginx.conf) as below:

```
"bytes_sent": "$bytes_sent",
```

which gives me values like the following in the log file:

```
"request_data_received": "1111", "bytes_sent": "140",
```

But when I try to send these values to my proxied (upstream) server, they come through as 0:

```
proxy_set_header BYTES_SENT $bytes_sent;
proxy_set_header BYTES_RECEIVED $request_length;
```
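For reference, a minimal log_format along these lines might look as follows; this is only a sketch, and the format name access_json plus the mapping of request_data_received to $request_length are assumptions, not taken from the original config:

```nginx
# hypothetical log_format; only $bytes_sent and $request_length come from the question
log_format access_json escape=json
    '{ "request_data_received": "$request_length", '
    '"bytes_sent": "$bytes_sent" }';

access_log /var/log/nginx/access.json access_json;
```

Note that $bytes_sent only reaches its final value after the response has been sent to the client, which is why it works in access_log but evaluates to 0 at the moment proxy_set_header is processed (the request is forwarded upstream before anything has been sent back).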

Related

Safari doesn't display video, even though the servers' partial responses look okay and other browsers work

I'm having an issue with Safari not displaying a video in an HTML page I'm returning, whereas it works in other browsers. There are multiple parts of the system, but the concept is that there is an NGINX proxy in front of an express web server, which can further proxy users to a different location for their resources using http-proxy.
I have looked at many different discussions about this but none of them have managed to resolve my issue.
I tried curling all those different parts of the system to make sure they all support byte-range requests and partial responses and everything seems fine to me:
Curling the NGINX:
curl -k --range 0-1000 https://nginx-url.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   1082      0 --:--:-- --:--:-- --:--:--  1080
Curling the express (to bypass nginx):
curl -k --range 0-1000 http://express-url.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   2166      0 --:--:-- --:--:-- --:--:--  2161
Curling the final resource:
curl -k --range 0-1000 https://final-resource.com/videos/elements-header.mp4 -o /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1001  100  1001    0     0   1041      0 --:--:-- --:--:-- --:--:--  1040
When I run curl with verbose output, all of them return a 206 status code as well, so it looks to me as if the requirements for Safari are satisfied.
Also here's my nginx configuration:
location / {
    server_tokens off;
    proxy_pass http://localhost:8085;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    if ( $request_filename ~ "^.*/(.+\.(mp4))$" ){
        set $fname $1;
        add_header Content-Disposition 'inline; filename="$fname"';
    }
    more_clear_headers 'access-control-allow-origin';
    more_clear_headers 'x-powered-by';
    more_clear_headers 'x-trace';
}
and there isn't any additional http or server configuration that might mess any of this up. The server is listening on 443 using SSL with HTTP/2. I also tried adding add_header Accept-Ranges bytes; and proxy_force_ranges on; but that didn't help either.
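For clarity, those two directives would be written roughly like this; this is only a sketch (the location and upstream are copied from the config above), not a confirmed fix:

```nginx
location / {
    # advertise byte-range support to the client on every response
    add_header Accept-Ranges bytes always;
    # enable byte-range support regardless of the upstream's Accept-Ranges header
    proxy_force_ranges on;
    proxy_pass http://localhost:8085;
}
```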
Does anyone have any idea what I'm doing wrong? Attached are the Safari requests and responses I'm getting for the initial byte-range request (which checks whether my servers are compatible with it) and the follow-up that is supposed to fetch the data but fails for some reason. Any help is appreciated.
Initial request
Followup

traefik reverse proxy set Host instead of X-Forwarded-Host

I want to configure traefik to forward a request to another host, but instead of setting X-Forwarded-Host to host.name I want it to set the header field Host to host.name while still opening the connection to my.ip.
This is the relevant part of my current traefik TOML:
[frontends]
  [frontends.mypath]
  backend = "backendhost"
  passHostHeader = true
    [frontends.mypath.routes.test]
    rule = "Host:host.name;Path:/my/path/"

[backends]
  [backends.backendhost]
    [backends.backendhost.servers.myip]
    url = "http://my.ip:80"
Basically I want traefik to behave the way this curl command does:
curl -L -H "Host: host.name" http://my.ip/my/path
so the requested server thinks it is requested as http://host.name/my/path.
The answer needs to be applicable directly to the traefik configuration. It should not include using further services/containers/reverse proxies.
The configuration that sets Host in addition to X-Forwarded-Host will look like this:
[frontends]
  [frontends.mypath]
  backend = "backendhost"
    [frontends.mypath.headers.customrequestheaders]
    Host = "host.name"
    [frontends.mypath.routes.test]
    rule = "Host:host.name;Path:/my/path/"

[backends]
  [backends.backendhost]
    [backends.backendhost.servers.myip]
    url = "http://my.ip:80"

Hashicorp Vault behind nginx reverse proxy

I am trying to use Vault behind an nginx proxy, using the AppRole auth method within Vault. I need to apply secret_id_bound_cidrs as one of the restrictions for the role so only specific hosts can log in and access the Vault APIs. I have tried everything, and the closest I got was to use the proxy protocol options in Vault. However, when I send a request to Vault from a host, the remote_addr seen by Vault is set to the server hosting Vault and not the actual client IP, so the validation fails.
My nginx.conf is as follows:

location /vault/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header REMOTE_ADDR $remote_addr;
    proxy_pass http://vault:8200/;
}
My vault config is as follows:
Please note, I am running Consul and Vault as Docker services, which allows me to refer to Consul by just the service name here, hence consul:8500.
{
  "backend": {
    "consul": {
      "address": "consul:8500",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "proxy_protocol_behavior": "use_always",
  "ui": true
}
My role is configured as follows where x.x.x.x is the IP I need to allow access to:
bind_secret_id false
local_secret_ids false
policies [test-policy]
secret_id_bound_cidrs [ x.x.x.x/32]
secret_id_num_uses 0
secret_id_ttl 0s
token_bound_cidrs []
token_explicit_max_ttl 0s
token_max_ttl 30m
token_no_default_policy false
token_num_uses 0
token_period 0s
token_policies [test-policy]
token_ttl 20m
token_type default
Can someone please help with any pointers on what I am missing here?
The proxy_protocol_behavior field belongs inside the listener/tcp block, but you have it out on its own at the top level.
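A sketch of the corrected config, with the field moved into the tcp listener and everything else kept as you posted it:

```json
{
  "backend": {
    "consul": {
      "address": "consul:8500",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1,
      "proxy_protocol_behavior": "use_always"
    }
  },
  "ui": true
}
```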
Aside from that, I'm not 100% certain that NGINX will send the PROXY protocol at all with the way you have it set up - see these pages for more comments:
https://github.com/hashicorp/vault/issues/3196
https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/#proxy_protocol
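As far as I know, nginx can only emit the PROXY protocol towards an upstream from the stream module, not from an http location block, so a sketch of that approach would look like the following (the port and the vault service name are copied from your config, the rest is an assumption, and you would lose the /vault/ path routing):

```nginx
stream {
    server {
        listen 8200;
        # prepend the PROXY protocol header so Vault sees the real client IP
        proxy_protocol on;
        proxy_pass vault:8200;
    }
}
```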

JAX-RS injected UriInfo returning localhost for REST requests behind a reverse proxy

I have a set of IBM WebSphere Liberty profile servers behind a HAProxy reverse proxy. Everything works OK, but HAProxy is doing something to the requests so I can't get the correct URL using uriInfo.getBaseUri() or uriInfo.getRequestUriBuilder().build("whatever path"): they both return localhost:9080 as host and port, so I can't build correct URLs pointing to the service. (The request is a standard http://api.MYHOST.com/v1/... REST request.)
Of course, I get a UriInfo object using @Context in the method so it gets the request information.
Front end configuration:
reqadd X-Forwarded-Proto:\ http
# Add CORS headers when Origin header is present
capture request header origin len 128
http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Methods:\ GET,\ HEAD,\ OPTIONS,\ POST,\ PUT if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Credentials:\ true if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
And Back-end configuration is:
option forwardfor
http-request set-header Host api.MYHOST.com
http-request set-header X-Forwarded-Host %[dst]
http-request set-header X-Forwarded-Port %[dst_port]
Any ideas on how to get the real request?
The only way I managed to get the correct host used in the request was by injecting the HttpServletRequest object into the method parameters.
I also inject the UriInfo, which has all the valid information except the host name:
@Context UriInfo uriInfo, @Context HttpServletRequest request
After that I use URIBuilder (not UriBuilder) from the Apache HttpClient utils to change the host to the correct one, as the JAX-RS UriBuilder is immutable:
new URIBuilder(uriInfo.getBaseUriBuilder().path("/MyPath").queryParam("MyParameter", myParameterValue).build()).setHost(request.getServerName()).toString()
I also had to include setPort() and setScheme() to make sure the correct port and scheme are used (the correct ones are in the HttpServletRequest, not the UriInfo).
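Pieced together, a minimal sketch of that approach might look like the following; the resource paths, class name and query parameter are placeholders, not taken from the original code:

```java
import java.net.URI;

import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.UriInfo;

import org.apache.http.client.utils.URIBuilder;

@Path("/v1")
public class MyResource {

    @GET
    @Path("/things")
    public String listThings(@Context UriInfo uriInfo,
                             @Context HttpServletRequest request) throws Exception {
        // Build the link from UriInfo, then override host, port and scheme with the
        // values the servlet container saw (which reflect HAProxy's forwarded headers).
        URI selfLink = new URIBuilder(uriInfo.getBaseUriBuilder()
                    .path("/things")
                    .queryParam("page", 1)
                    .build())
                .setHost(request.getServerName())
                .setPort(request.getServerPort())
                .setScheme(request.getScheme())
                .build();
        return selfLink.toString();
    }
}
```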
I just faced this very issue in my Jersey-based application. I used uriInfo.getBaseUriBuilder() to get a UriBuilder and figured out that it's possible to change the hostname from localhost by using the .host() method:
.host(InetAddress.getLocalHost().getHostName())
And you can remove the port part by setting it to -1
.port(-1)
So from a URL that looks like
https://127.0.0.1:8443/hello
I got
https://yourhostname/hello
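Put together, a rough sketch of that (the LinkHelper class name is just a placeholder; the .host() and .port(-1) calls are the ones from the snippets above):

```java
import java.net.InetAddress;
import java.net.URI;
import javax.ws.rs.core.UriInfo;

public final class LinkHelper {

    // uriInfo is the @Context-injected UriInfo from the resource method
    static URI publicBaseUri(UriInfo uriInfo) throws Exception {
        return uriInfo.getBaseUriBuilder()
                .host(InetAddress.getLocalHost().getHostName()) // swap 127.0.0.1 for the machine's hostname
                .port(-1)                                       // -1 drops the explicit port
                .build();
    }
}
```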

HAProxy ACL: block all connections to HAProxy by default and allow only a specific IP

I am trying to solve a scenario using HAProxy. The scenario is as below:
Block all IPs by default
Allow only connections from a specific IP address
If any connections come from a whitelisted IP, it should reject them if they exceed more than 10 concurrent connections in 30 sec
I want to do this to reduce the number of API calls into my server. Could anyone please help me with this?
Thanks
The first two things are easy: simply allow only the whitelisted IP.
acl whitelist src 10.12.12.23
use_backend SOMESERVER if whitelist
The third, throttling, requires the use of stick-tables (there are many data types: conn, sess and http counters, rates, ...) as a rate counter:
# up to 200k entries, tracking the HTTP request rate over 60s periods
stick-table type ip size 200k expire 100s store http_req_rate(60s)
Next you have to fill the table by tracking each request, e.g. by IP:
tcp-request content track-sc0 src
# more info at http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4.2-tcp-request%20connection
And finally the ACL:
# is there more than 5 req/min from this IP?
acl http_rate_abuse sc0_http_req_rate gt 5
# update use_backend condition
use_backend SOMESERVER if whitelist !http_rate_abuse
For example, here is a working config file with customized errors:
global
    log /dev/log local1 debug

defaults
    log global
    mode http
    option httplog
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http
    bind *:8181
    stick-table type ip size 200k expire 100s store http_req_rate(60s)
    tcp-request content track-sc0 src
    acl whitelist src 127.0.0.1
    acl http_rate_abuse sc0_http_req_rate gt 5
    use_backend error401 if !whitelist
    use_backend error429 if http_rate_abuse
    use_backend realone

backend realone
    server local stackoverflow.com:80

# too many requests
backend error429
    mode http
    errorfile 503 /etc/haproxy/errors/429.http

# unauthenticated
backend error401
    mode http
    errorfile 503 /etc/haproxy/errors/401.http
Note: the error handling is a bit tricky. Because the error backends above have no server entries, HAProxy will throw HTTP 503; errorfile catches those and sends different errors (with different codes).
Example /etc/haproxy/errors/401.http content:
HTTP/1.0 401 Unauthenticated
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>401 Unauthenticated</h1>
</body></html>
Example /etc/haproxy/errors/429.http content:
HTTP/1.0 429 Too many requests
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>429 Too many requests</h1>
</body></html>