Hashicorp Vault behind nginx reverse proxy - authorization

I am trying to use Vault behind an nginx reverse proxy, using the AppRole auth method within Vault. I need to apply secret_id_bound_cidrs as one of the restrictions on the role so that only specific hosts can log in and access the Vault APIs. I have tried everything, and the closest I got was using the proxy protocol options in Vault. However, when I send a request to Vault from a host, the remote_addr seen by Vault is set to the server hosting Vault rather than the actual client IP, so the validation fails.
My nginx.conf is as follows:
location /vault/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header REMOTE_ADDR $remote_addr;
    proxy_pass http://vault:8200/;
}
My Vault config is as follows:
Please note, I am running Consul and Vault as Docker services, which allows me to refer to Consul by its service name here. Hence consul:8500.
{
  "backend": {
    "consul": {
      "address": "consul:8500",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "proxy_protocol_behavior": "use_always",
  "ui": true
}
My role is configured as follows, where x.x.x.x is the IP I need to allow access from:
bind_secret_id false
local_secret_ids false
policies [test-policy]
secret_id_bound_cidrs [x.x.x.x/32]
secret_id_num_uses 0
secret_id_ttl 0s
token_bound_cidrs []
token_explicit_max_ttl 0s
token_max_ttl 30m
token_no_default_policy false
token_num_uses 0
token_period 0s
token_policies [test-policy]
token_ttl 20m
token_type default
Can someone please help with any pointers on what I am missing here?

The proxy_protocol_behavior field belongs inside the listener/tcp block, but you have it at the top level of the config on its own.
Aside from that, I'm not 100% certain that NGINX will actually send the PROXY protocol header the way you have set it up - see these links for more details:
https://github.com/hashicorp/vault/issues/3196
https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/#proxy_protocol
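For reference, a listener block with the field moved inside (a sketch based on the config in the question; whether nginx actually emits PROXY protocol for an http proxy_pass is a separate issue, per the links above) would look like:

```json
"listener": {
  "tcp": {
    "address": "0.0.0.0:8200",
    "tls_disable": 1,
    "proxy_protocol_behavior": "use_always"
  }
}
```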

Related

I need to count the incoming and outgoing traffic on my nginx proxy server

I am using the $bytes_sent and $request_length variables to calculate the bandwidth used.
I have enabled these in the log format (nginx.conf file) as below:
"bytes_sent": "$bytes_sent",
Which gives me the below values in the log file:
"request_data_received": "1111", "bytes_sent": "140",
But when I try to send these values to my proxy server as headers, they come through as 0:
proxy_set_header BYTES_SENT $bytes_sent;
proxy_set_header BYTES_RECEIVED $request_length;
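One thing worth noting: $bytes_sent only reaches its final value after the response has been written to the client, so it is still 0 at the moment proxy_set_header is evaluated (when the request is forwarded upstream). Counting in the access log, after the response completes, does work; a sketch of such a log format (the log_format name, file path, and JSON field names here are illustrative, not from the question):

```nginx
log_format bandwidth escape=json
    '{ "request_data_received": "$request_length", '
    '"bytes_sent": "$bytes_sent" }';

access_log /var/log/nginx/bandwidth.log bandwidth;
```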

traefik reverse proxy set Host instead of X-Forwarded-Host

I want to configure traefik to forward a request to another host, but instead of setting X-Forwarded-Host to host.name I want it to set the Host header field to host.name while still opening the connection to my.ip.
This is the relevant part of my current traefik toml:
[frontends]
  [frontends.mypath]
  backend = "backendhost"
  passHostHeader = true
    [frontends.mypath.routes.test]
    rule = "Host:host.name;Path:/my/path/"
[backends]
  [backends.backendhost]
    [backends.backendhost.servers.myip]
    url = "http://my.ip:80"
Basically, I want traefik to behave the way I can make curl behave:
curl -L -H "Host: host.name" http://my.ip/my/path
so the requested server thinks it is requested as http://host.name/my/path.
The answer needs to be applicable directly to the traefik configuration. It should not include using further services/containers/reverse proxies.
The configuration to set Host in addition to X-Forwarded-Host will look like this:
[frontends]
  [frontends.mypath]
  backend = "backendhost"
    [frontends.mypath.headers.customrequestheaders]
    Host = "host.name"
    [frontends.mypath.routes.test]
    rule = "Host:host.name;Path:/my/path/"
[backends]
  [backends.backendhost]
    [backends.backendhost.servers.myip]
    url = "http://my.ip:80"

Is it possible to implement OIDC in front of Nginx Stream with OpenResty?

I would like to know if it is possible to use the OpenResty OIDC module as an authentication proxy within an NGINX stream configuration.
(I don't have access to NGINX Plus, unfortunately)
I have used NGINX with Stream configurations in the past to proxy access to upstream tcp resources and it works like a charm.
I am currently looking at implementing an OIDC proxy in front of various resources, both static html and dynamic apps, because we have an in-house OIDC IDAM provider. I came across OpenResty, and in particular the lua-resty-oidc module, and thanks to some wonderful guides (https://medium.com/@technospace/nginx-as-an-openid-connect-rp-with-wso2-identity-server-part-1-b9a63f9bef0a , https://developers.redhat.com/blog/2018/10/08/configuring-nginx-keycloak-oauth-oidc/), I got this working in no time for static pages, using an http server nginx config.
I can't get it working for stream configurations though. It looks like the stream module is enabled as standard for OpenResty, but from digging around I don't think the 'access_by_lua_block' function is allowed in the stream context.
This may simply not be supported, which is fair enough when building on other people's great work, but I wondered if there was any intention to include support within OpenResty / lua-resty-oidc in the future, or whether anyone knew of a good workaround.
This was my naive attempt to get it working, but the server complains about the 'access_by_lua_block' directive at run time:
2019/08/22 08:20:44 [emerg] 1#1: "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
nginx: [emerg] "access_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:49
events {
    worker_connections 1024;
}

stream {
    lua_package_path "/usr/local/openresty/?.lua;;";
    resolver 168.63.129.16;
    lua_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    lua_ssl_verify_depth 5;

    # cache for discovery metadata documents
    lua_shared_dict discovery 1m;
    # cache for JWKs
    lua_shared_dict jwks 1m;

    upstream geyser {
        server geyser-api.com:3838;
    }

    server {
        listen 443 ssl;
        ssl_certificate /usr/local/openresty/nginx/ssl/nginx.crt;
        ssl_certificate_key /usr/local/openresty/nginx/ssl/nginx.key;

        access_by_lua_block {
            local opts = {
                redirect_uri_path = "/redirect_uri",
                discovery = "https://oidc.provider/discovery",
                client_id = "XXXXXXXXXXX",
                client_secret = "XXXXXXXXXXX",
                ssl_verify = "no",
                scope = "openid",
                redirect_uri_scheme = "https",
            }
            local res, err = require("resty.openidc").authenticate(opts)
            if err then
                ngx.status = 500
                ngx.say(err)
                ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
            end
            ngx.req.set_header("X-USER", res.id_token.sub)
        }

        proxy_pass geyser;
    }
}
Anyone have any advice?
I don't think that's possible.
However, to be sure, you should try creating an issue on the official GitHub repo:
https://github.com/zmartzone/lua-resty-openidc/issues
They helped me solve a similar issue before
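For what it's worth, OpenResty's stream subsystem does offer Lua hooks, just not access_by_lua_block; the closest is preread_by_lua_block, which runs on the raw TCP connection before proxying. At that point there is no HTTP request, cookie, or redirect for resty.openidc to drive its browser login flow with, which is why the OIDC flow can't run there. A minimal sketch of what is available in that phase (assuming the bundled stream-lua module; the port and upstream are taken from the question):

```nginx
stream {
    upstream geyser {
        server geyser-api.com:3838;
    }

    server {
        listen 8443;
        # Runs before bytes are proxied; only TCP-level data is visible here.
        preread_by_lua_block {
            ngx.log(ngx.INFO, "client ", ngx.var.remote_addr, " connected")
        }
        proxy_pass geyser;
    }
}
```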

JAX-RS injected UriInfo returning localhost for REST requests behind a reverse proxy

I have a set of IBM WebSphere Liberty profile servers behind an HAProxy reverse proxy. Everything works OK, but HAProxy is doing something to the requests so that I can't get the correct URL using uriInfo.getBaseUri() or uriInfo.getRequestUriBuilder().build("whatever path")... they both return localhost:9080 as host and port, so I can't build correct URLs pointing to the service. (The request is a standard http://api.MYHOST.com/v1/... REST request.)
Of course, I get a UriInfo object using @Context in the method so it gets the request information.
Front end configuration:
reqadd X-Forwarded-Proto:\ http
# Add CORS headers when Origin header is present
capture request header origin len 128
http-response add-header Access-Control-Allow-Origin %[capture.req.hdr(0)] if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Methods:\ GET,\ HEAD,\ OPTIONS,\ POST,\ PUT if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Credentials:\ true if { capture.req.hdr(0) -m found }
rspadd Access-Control-Allow-Headers:\ Origin,\ X-Requested-With,\ Content-Type,\ Accept if { capture.req.hdr(0) -m found }
And Back-end configuration is:
option forwardfor
http-request set-header Host api.MYHOST.com
http-request set-header X-Forwarded-Host %[dst]
http-request set-header X-Forwarded-Port %[dst_port]
Any ideas on how to get the real request?
The only way I managed to get the correct host used in the request is by injecting the HttpServletRequest object into the method parameters.
I also inject the UriInfo, which has all valid information except the host name:
@Context UriInfo uriInfo, @Context HttpServletRequest request
After that I use URIBuilder (not UriBuilder) from the Apache HttpClient utils to change the host to the correct one, as the JAX-RS UriBuilder is immutable:
new URIBuilder(uriInfo.getBaseUriBuilder().path("/MyPath").queryParam("MyParameter", myParameterValue).build()).setHost(request.getServerName()).toString()
I also had to include setPort() and setScheme() to make sure the correct port and scheme are used (the correct ones are in HttpServletRequest, not UriInfo).
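The same rebuild can be sketched with only the JDK, no Apache HttpClient (the class and helper names here are illustrative, not the poster's actual code): java.net.URI's multi-argument constructor lets you swap in the externally visible scheme, host, and port while keeping path and query.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UriRewrite {

    // Rebuild a URI with the scheme/host/port the client actually used,
    // e.g. values taken from HttpServletRequest.getScheme()/getServerName()/getServerPort().
    static URI withRealHost(URI original, String scheme, String host, int port)
            throws URISyntaxException {
        return new URI(scheme, original.getUserInfo(), host, port,
                original.getPath(), original.getQuery(), original.getFragment());
    }

    public static void main(String[] args) throws URISyntaxException {
        // What UriInfo reports behind the proxy:
        URI behindProxy = new URI("http://localhost:9080/v1/resource?id=42");
        // What the client should see (port -1 drops the explicit port):
        URI external = withRealHost(behindProxy, "http", "api.MYHOST.com", -1);
        System.out.println(external); // http://api.MYHOST.com/v1/resource?id=42
    }
}
```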
I just faced this very issue in my Jersey-based application. I used uriInfo.getBaseUriBuilder() to get a UriBuilder and figured out that it's possible to change the hostname from localhost by using the .host() method:
.host(InetAddress.getLocalHost().getHostName())
And you can remove the port part by setting it to -1
.port(-1)
So from a URL that looks like
https://127.0.0.1:8443/hello
I got
https://yourhostname/hello

Why does Icecast2 not serve the stream over HTTPS?

On a server running Ubuntu 14.04 LTS I installed Icecast2 2.4.1 with SSL support. An HTTPS website also runs on this server.
I want to insert on the page an HTML5 player that also takes the stream over SSL (otherwise there is a mixed-content error).
The site has a commercial SSL certificate; Icecast has a self-signed one.
Icecast config file:
<icecast>
  <location>****</location>
  <admin>admin@*************</admin>
  <limits>
    <clients>1000</clients>
    <sources>2</sources>
    <threadpool>5</threadpool>
    <queue-size>524288</queue-size>
    <source-timeout>10</source-timeout>
    <burst-on-connect>0</burst-on-connect>
    <burst-size>65535</burst-size>
  </limits>
  <authentication>
    <source-password>*****</source-password>
    <relay-password>*****</relay-password>
    <admin-user>*****</admin-user>
    <admin-password>*****</admin-password>
  </authentication>
  <hostname>************</hostname>
  <listen-socket>
    <port>8000</port>
    <ssl>1</ssl>
  </listen-socket>
  <mount>
    <mount-name>/stream</mount-name>
    <charset>utf-8</charset>
  </mount>
  <mount>
    <mount-name>/ogg</mount-name>
    <charset>utf-8</charset>
  </mount>
  <fileserve>1</fileserve>
  <paths>
    <basedir>/usr/share/icecast2</basedir>
    <logdir>/var/log/icecast2</logdir>
    <webroot>/usr/share/icecast2/web</webroot>
    <adminroot>/usr/share/icecast2/admin</adminroot>
    <alias source="/" dest="/status.xsl"/>
    <ssl-certificate>/etc/icecast2/icecast2.pem</ssl-certificate>
  </paths>
  <logging>
    <accesslog>access.log</accesslog>
    <errorlog>error.log</errorlog>
    <loglevel>4</loglevel>
  </logging>
  <security>
    <chroot>0</chroot>
    <changeowner>
      <user>icecast2</user>
      <group>icecast</group>
    </changeowner>
  </security>
</icecast>
Certificate for Icecast (/etc/icecast2/icecast2.pem) generated by:
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout icecast2.pem -out icecast2.pem
I expect to get the output stream at https://domain.name:8000/stream and https://domain.name:8000/ogg for insertion into the player via the audio tag, but in response there is silence. The same addresses over plain http work fine.
I cannot figure out what the mistake is...
Thanks in advance for your help!
I ran into this issue recently and didn't have a lot of time to solve it, nor did I see much documentation for doing so. I assume it's not the most widely used Icecast config, so I just proxied mine with nginx and it works fine.
Here's an example nginx vhost. Be sure to change the domain, check your paths, and think about the location you want the mount proxied to and how you want to handle ports.
Please note this will make your stream available on port 443 instead of 8000. Certain clients (such as facebookexternalhit/1.1) may try to hang onto the stream as though it's an https url waiting to connect. This may not be the behavior you expect or desire.
Also, if you want no plain-http access at all, be sure to change the bind-address back to the local host, e.g.:
<bind-address>127.0.0.1</bind-address>
www.example.com.nginx.conf
server {
    listen 80;
    server_name www.example.com;

    location /listen {
        if ($ssl_protocol = "") {
            rewrite ^ https://$server_name$request_uri? permanent;
        }
    }
}

#### SSL
server {
    ssl on;
    ssl_certificate_key /etc/sslmate/www.example.com.key;
    ssl_certificate /etc/sslmate/www.example.com.chained.crt;

    # Recommended security settings from https://wiki.mozilla.org/Security/Server_Side_TLS
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /usr/share/sslmate/dhparams/dh2048-group14.pem;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:5m;

    # Enable this if you want HSTS (recommended)
    add_header Strict-Transport-Security max-age=15768000;

    listen 443 ssl;
    server_name www.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The icecast2 package provided for Debian-based distributions doesn't include SSL support (so it has no https:// support), since that comes from the OpenSSL libraries, which have licensing difficulties with the GNU GPL.
To know whether icecast2 was compiled with OpenSSL support, run this:
ldd /usr/bin/icecast2 | grep ssl
If it was compiled with it, then a line like this one should be displayed:
libssl.so.1.1 => /usr/lib/x86_64-linux-gnu/libssl.so.1.1 (0x00007ff5248a4000)
If instead you see nothing, you have no support for it.
To get the correct version you may want to obtain it from xiph.org directly:
https://wiki.xiph.org/Icecast_Server/Installing_latest_version_(official_Xiph_repositories)
The issue is related to the certificate file.
First of all, you need to have, for example,
<paths>
  <ssl-certificate>/usr/share/icecast2/icecast.pem</ssl-certificate>
</paths>
and
<listen-socket>
  <port>8443</port>
  <ssl>1</ssl>
</listen-socket>
in your configuration. But that is not everything you need!
If you get your certificate, for example, from Let's Encrypt or SSL For Free, you will have a certificate file and a private key file.
But for Icecast, you need both files together.
What you should do:
1. Open the private key file and copy its content.
2. Open the certificate file, paste the content of the private key you copied at the end of the file, and save it as icecast.pem.
Then use this file and you should be fine.
Thanks to the person who introduces it here:
Icecast 2 and SSL
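The concatenation described above can be sketched like this (the file names follow the Let's Encrypt layout and are illustrative; substitute your real certificate and key files):

```shell
# Dummy stand-ins for the two files your CA issues; use your real
# fullchain.pem and privkey.pem instead.
printf -- '-----BEGIN CERTIFICATE-----\n...cert data...\n-----END CERTIFICATE-----\n' > fullchain.pem
printf -- '-----BEGIN PRIVATE KEY-----\n...key data...\n-----END PRIVATE KEY-----\n' > privkey.pem

# Icecast wants a single PEM: certificate first, private key appended.
cat fullchain.pem privkey.pem > icecast.pem
chmod 600 icecast.pem   # the private key is inside, so restrict access
```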
In your icecast2.xml file:
If <ssl> is set to 1, it will enable HTTPS on that listen-socket. Icecast must have been compiled against OpenSSL to be able to do so.
<paths>
  <basedir>./</basedir>
  <logdir>./logs</logdir>
  <pidfile>./icecast.pid</pidfile>
  <webroot>./web</webroot>
  <adminroot>./admin</adminroot>
  <allow-ip>/path/to/ip_allowlist</allow-ip>
  <deny-ip>/path_to_ip_denylist</deny-ip>
  <tls-certificate>/path/to/certificate.pem</tls-certificate>
  <ssl-allowed-ciphers>ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS</ssl-allowed-ciphers>
  <alias source="/foo" dest="/bar"/>
</paths>
<listen-socket>
  <port>8000</port>
  <bind-address>127.0.0.1</bind-address>
</listen-socket>
<listen-socket>
  <port>8443</port>
  <tls>1</tls>
</listen-socket>
<listen-socket>
  <port>8004</port>
  <shoutcast-mount>/live.mp3</shoutcast-mount>
</listen-socket>