nginx ldap auth bypass for specific networks

I am running nginx v1.6.3 on Debian Jessie 8.5 with this module compiled in: https://github.com/kvspb/nginx-auth-ldap
When connecting to a site from different subnets I want the following behaviour:
Subnet A: needs auth via ldap
Subnet B: no auth
I tried the geo module to turn on auth_ldap only if subnet A matches, but it still requires auth.
Parts of my config:
geo $val {
    default 0;
    10.0.0.0/24 1;
}

server {
    ...
    location / {
        if ($val) {
            auth_ldap ....
        }
    }
}
error.log:
2016/06/23 23:48:50 [emerg] 3307#0: "auth_ldap" directive is not allowed here in /etc/nginx/sites-enabled/proxy:32
I thought about adding a switch like auth_ldap_bypass to the nginx-auth-ldap module, but I have no experience writing nginx modules. Maybe there is a solution out there.
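One approach that avoids patching the module: nginx's satisfy any directive grants access when either an access rule or an auth handler passes. This is documented behavior for the stock access/auth modules; whether it also covers the third-party auth_ldap handler depends on how the module hooks the access phase, so treat this as a sketch to test (the subnet B range and the LDAP server name "myldap" are assumptions):

server {
    ...
    location / {
        satisfy any;            # allow if EITHER rule below passes
        allow 192.168.1.0/24;   # hypothetical subnet B: no auth required
        deny all;               # everyone else must authenticate
        auth_ldap "Restricted";
        auth_ldap_servers myldap;
    }
}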

Related

Varnish 6.0 LTS won't handle secure websockets on a remote proxy?

I'm having a hard time with this setup. I have a node.js box serving HTTP on 3000, websockets on 3001, and secure websockets on 3002. Out in front of that I have a remote Hitch/Varnish caching proxy on its own server that's listening on 443/80 and connecting to the first server as its default backend via port 3000. A user who visits the site URL https://foo.tld hits the Varnish proxy and sees the site, where some JavaScript on the site tells their browser to connect to wss://foo.tld:3002 for secure websockets.
My problem is getting websockets to pass transparently through to the backend. In the VCL I have the standard
if (req.http.upgrade ~ "(?i)websocket") {
    return (pipe);
}
and
sub vcl_pipe {
    # Declare pipe handler for websockets
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
}
This doesn't work in this case. To list what I have tried so far with no success:
1: Creating a second backend in VCL named "websockets" that is the same backend IP but on either port 3001 or 3002, and adding "set req.backend_hint = websockets;" before the return (pipe) in the first snippet above.
2: Turning off HTTPS and trying to connect over pure HTTP.
3: Modifying varnish.service to make Varnish listen on ports other than, or in addition to, -a :80 and -a :8443,proxy; in each case Varnish simply refuses to start. One attempt was to use HTTP only and run Varnish on 3001 to get ws:// working without SSL, but again it refused to start.
4: Most recently I attempted the following in VCL to try and pick up client connections coming in on 3001:
if (std.port(server.ip) == 3001) {
    set req.backend_hint = websockets;
}
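Note that std.port() lives in the bundled std vmod, so the VCL needs import std; at the top or it won't compile. A minimal sketch of that attempt:

import std;

sub vcl_recv {
    # Route traffic that arrived on the listener bound to port 3001.
    if (std.port(server.ip) == 3001) {
        set req.backend_hint = websockets;
        return (pipe);
    }
}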
My goal is for the Varnish box to pick up secure websocket traffic (wss://) on 3002 (via hitch at 443 using the normal secure websocket connection protocol) and have that passed transparently to the backend websocket server, whether SSL encrypted across that leg of the connection or not. I have set up other, smaller servers like this before and getting websockets working is trivial if Varnish and the backend service are either on the same machine or behind a regulating CDN like Cloudflare, so it has been extra frustrating trying to figure out just what this remote proxy setup needs. I feel like part of the solution is having Varnish or Hitch (not sure) listening on 3002 to accept the connections at which point the normal req.http.upgrade and pipe functions would come into play, but the software refuses to cooperate.
Update
I have broken down the problem into the simplest form I can. The main server (backend) is now serving plain HTTP on 8080 and WS:// on 6081. I have removed hitch and TLS from the equation entirely, but even in this simplified form it is impossible to connect to websockets through Varnish. I can verify that the Websocket server is working correctly on the backend. Connecting to the backend IP address with a browser shows websockets functioning perfectly there. It's Varnish that's the problem.
My current hitch.conf (not relevant here but provided per request):
frontend = "[*]:443"
frontend = "[*]:3001"
backend = "[127.0.0.1]:8443" # 6086 is the default Varnish PROXY port.
workers = 4 # number of CPU cores
daemon = on
# We strongly recommend you create a separate non-privileged hitch
# user and group
user = "redacted"
group = "redacted"
# Enable to let clients negotiate HTTP/2 with ALPN. (default off)
# alpn-protos = "h2, http/1.1"
# run Varnish as backend over PROXY; varnishd -a :80 -a localhost:6086,PROXY ..
write-proxy-v2 = on # Write PROXY header
syslog = on
log-level = 1
# Add pem files to this directory
# pem-dir = "/etc/pki/tls/private"
pem-file = "/redacted/hitch-bundle.pem"
Current default.vcl (stripped down to almost nothing just for testing this. The backend is NOT running on the same machine, it is remote):
# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;
# Default backend definition. Set this to point to your content server.
backend default {
    .host = "remote.server.ip";
    .port = "8080";
}

backend websockets {
    .host = "remote.server.ip";
    .port = "6081";
}

sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.

    # Allow websockets to pass through the cache (summons the pipe handler below)
    if (req.http.Upgrade ~ "(?i)websocket") {
        set req.backend_hint = websockets;
        return (pipe);
    } else {
        set req.backend_hint = default;
    }
}

sub vcl_pipe {
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
    return (pipe);
}
Varnish's systemd exec parameters:
ExecStart=/usr/sbin/varnishd \
-a http=:80 \
-a proxy=localhost:8443,PROXY \
-a ws=:6081 \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m \
-p pipe_timeout=1800
Working in plain HTTP and insecure websockets like this, it should be very simple to get a working model. I don't understand what could possibly be going wrong.
Varnish Cache, the open source version of Varnish, doesn't support backend connections over TLS.
While you can offload TLS using Hitch, the connection to your websocket server will not be encrypted.
Basic VCL example
Here's a very basic VCL example where web & websocket requests are split and sent to separate backends:
vcl 4.1;

backend web {
    .port = "3000";
}

backend ws {
    .port = "3001";
}

sub vcl_recv {
    # Pipe websocket upgrade requests straight to the websocket backend;
    # everything else goes to the regular web backend.
    if (req.http.Upgrade ~ "(?i)websocket") {
        set req.backend_hint = ws;
        return (pipe);
    } else {
        set req.backend_hint = web;
    }
}

sub vcl_pipe {
    # Carry the Upgrade header over to the backend request.
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    return (pipe);
}
Need more input
However, I'm probably missing a lot of context. I also didn't specify a .host parameter in the backends, so the assumption is that all services are hosted locally.
Please add your full VCL, your Hitch config and the varnishd runtime parameters to your question. This will add context and allow me to come up with a better solution.
What about Hitch?
If you terminate TLS in Hitch, both HTTPS & secure websockets will be handled by Hitch, while plain-text HTTP & websockets will still be handled directly by Varnish.
See https://www.varnish-software.com/developers/tutorials/terminate-tls-varnish-hitch for a Hitch tutorial that also explains how Varnish should be configured.
I'm a big advocate of using the PROXY protocol in Varnish. The hitch tutorial has a specific section about this: https://www.varnish-software.com/developers/tutorials/terminate-tls-varnish-hitch/#enable-the-proxy-protocol-in-varnish
Custom ports
The standard ports to access the service are 80 for HTTP and insecure websockets and 443 for HTTPS and secure websockets.
If you want to use custom ports for the websockets, it is possible to configure them in Hitch and Varnish.
Let's say you want to keep ports 3001 and 3002 for your websockets. This means you need 2 frontends in Hitch:
One for HTTPS on 443
One for secure WS on 3002
See https://www.varnish-software.com/developers/tutorials/terminate-tls-varnish-hitch/#listening-address for more information about the frontend config.
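As a sketch, that part of hitch.conf could look like this (the PROXY backend port and the pem path are assumptions to adapt):

frontend = "[*]:443"             # HTTPS
frontend = "[*]:3002"            # secure websockets
backend = "[127.0.0.1]:8443"     # Varnish PROXY listener
write-proxy-v2 = on
pem-file = "/path/to/bundle.pem" # hypothetical path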
Varnish on the other hand needs to have 3 listening addresses:
One for HTTP on port 80 (-a http=:80)
One for offloaded HTTPS & secure WS with PROXY support on port 8443 (-a proxy=:8443,PROXY)
One for insecure WS on port 3001 (-a ws=:3001)
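Put together, the varnishd invocation might look like this sketch (the flags mirror the three listening addresses above; the VCL path and storage settings are taken from your unit file):

ExecStart=/usr/sbin/varnishd \
  -a http=:80 \
  -a proxy=:8443,PROXY \
  -a ws=:3001 \
  -f /etc/varnish/default.vcl \
  -s malloc,256m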
Next steps
Please use this information and see if it helps you find a solution. If not, please share your VCL file, your Hitch config and your varnishd runtime parameters.
Update
Now that you have provided more input, the picture starts to become clearer. The fact that you eliminated the TLS part for now will make it a lot easier to debug.
Assuming the names of your listening interfaces for varnishd are http and ws (as mentioned in your systemd unit file), we can use the following varnishlog commands to debug:
varnishlog -g request -q "ReqStart[3] eq 'http'"
This command will show logs for all log transactions where the http listening interface is used.
If you want to make it more granular, you can also add the request URL as a filtering criterion. This will narrow down the number of transactions:
varnishlog -g request -q "ReqStart[3] eq 'http' and ReqUrl eq '/'"
Please add a complete log transaction for one of the failed requests. This will help us understand why requests are failing.
You can do the same for requests on the ws listening interface by using the commands below:
varnishlog -g request -q "ReqStart[3] eq 'ws'"
varnishlog -g request -q "ReqStart[3] eq 'ws' and ReqUrl eq '/'"
I'm assuming you're successful at starting the varnishd program but unsuccessful at getting decent output out of Varnish. The varnishlog program will provide the insight we need. Please add the logging output to your question so I can look into it.

How to Correct 'nginx: [emerg] "stream" directive is not allowed here'

The Question
Why does the following Nginx configuration return nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/sites-enabled/default:1?
Nginx Configuration...
stream {
    map $ssl_preread_server_name $upstream {
        example.com 1051;
    }

    upstream 1051 {
        server 127.0.0.1:1051;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
Version / Build information...
OS: Debian 10
Here is the stripped down nginx -V output confirming the presence of the modules I understand I need...
nginx version: nginx/1.14.2
TLS SNI support enabled
configure arguments: ... --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module ...
The Context
I have a single static IP address. At the static IP address, I am setting up a reverse proxy Nginx server to forward traffic to a variety of backend services. Several of the services are websites with unique domain names.
+-----+ +----------------------+ +---------+
| WAN | <----> | Nginx Reverse Proxy | <----> | Service |
+-----+ +----------------------+ +---------+
At boot, the service uses systemd to run this port-forwarding ssh command to connect to the reverse proxy: ssh -N -R 1051:localhost:443 tunnel@example.com (That is working well.)
I want the certificate to reside on the service, not the reverse proxy. From what I understand, I need to leverage SNI on Nginx to pass through the SSL connections based on domain name. But I cannot get the Nginx reverse proxy to pass through SSL.
Resources
Here are a few of the resources I have pored over...
https://serverfault.com/questions/625362/can-a-reverse-proxy-use-sni-with-ssl-pass-through
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
https://www.amitnepal.com/nginx-ssl-passthrough-reverse-proxy
https://serverfault.com/questions/1049158/nginx-how-to-combine-ssl-preread-protocol-with-ssl-preread-server-name-ssh-mul
The problem was that I tried to embed a stream block inside an http block. I was not properly accounting for the include in the /etc/nginx/nginx.conf file.
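On Debian, /etc/nginx/nginx.conf includes /etc/nginx/sites-enabled/* from inside the http block, so anything placed there ends up nested in http {}. A stream block must sit at the top level of the configuration; a sketch (the streams-enabled directory is a hypothetical convention, not a stock path):

# /etc/nginx/nginx.conf
http {
    # ...
    include /etc/nginx/sites-enabled/*;    # files here are parsed inside http {}
}

stream {
    include /etc/nginx/streams-enabled/*;  # hypothetical directory for stream configs
}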

Nomad High Availability with Traefik

I've decided to give Nomad a try, and I'm setting up a small environment for side projects in my company.
Although the documentation on Nomad/Consul is nice and detailed, it doesn't cover the simple task of exposing a small web service to the world.
Following this official tutorial to use Traefik as a load balancer, how can I make those exposed services reachable?
The tutorial has a footnote stating that the services could be accessed from outside the cluster by port 8080.
But in a cluster where I have 3 servers and 3 clients, where should I point my DNS to?
Should a DNS with failover pointing to the 3 clients be enough?
Do I still need a load balancer for the clients?
There are multiple ways you could handle distributing the requests across your servers; some may be preferable to others depending on your deployment environment.
The Fabio load balancer docs have a section on deployment configurations which I'll use as a reference.
Direct with DNS failover
In this model, you could configure DNS to point to the IPs of all three servers. Clients would receive all three IPs back in response to a DNS query, and randomly connect to one of the available instances.
If an IP is unhealthy, the client should retry the request against one of the other IPs, but clients may experience slower response times if a server is unavailable for an extended period of time and they occasionally route requests to that unavailable IP.
You can mitigate this issue by configuring your DNS server to perform health checking of backend instances (assuming it supports it). AWS Route 53 provides this functionality (see Configuring DNS failover). If your DNS server does not support health checking, but provides an API to update records, you can use Consul Terraform Sync to automate adding/removing server IPs as the health of the Fabio instances changes in Consul.
Fabio behind a load balancer
As you mentioned, the other option would be to place Fabio behind a load balancer. If you're deploying in the cloud, this could be the cloud provider's LB. The LB would give you better control over traffic routing to Fabio, provide TLS/SSL termination, and other functionality.
If you're on-premises, you could front it with any available load balancer like F5, A10, nginx, Apache Traffic Server, etc. You would need to ensure the LB is deployed in a highly available manner. Some suggestions for doing this are covered in the next section.
Direct with IP failover
Whether you're running Fabio directly on the Internet, or behind a load balancer, you need to make sure the IP which clients are connecting to is highly available.
If you're deploying on-premises, one method for achieving this would be to assign a common loopback IP to each of the Fabio servers (e.g., 192.0.2.10), and then use an L2 redundancy protocol like the Virtual Router Redundancy Protocol (VRRP) or an L3 routing protocol like BGP to ensure the network routes requests to available instances.
L2 failover
Keepalived is a VRRP daemon for Linux. You can find many tutorials online for installing and configuring it.
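A minimal keepalived sketch for the shared-VIP idea above (the interface name, router ID, and the 192.0.2.10 address from the earlier example are assumptions to adjust):

# /etc/keepalived/keepalived.conf
vrrp_instance fabio_vip {
    state MASTER            # use BACKUP on the standby nodes
    interface eth0          # assumed NIC name
    virtual_router_id 51    # must match on all nodes sharing the VIP
    priority 100            # highest priority holds the VIP
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24
    }
}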
L3 failover w/ BGP
GoCast is a BGP daemon built on GoBGP which conditionally advertises IPs to the upstream network based on the state of health checks. The author of this tool published a blog post titled BGP based Anycast as a Service which walks through deploying GoCast on Nomad, and configuring it to use Consul for health information.
L3 failover with static IPs
If you're deploying on-premises, a simpler configuration than the two aforementioned solutions might be to configure your router to install/remove static routes based on health checks of your backend instances. Cisco routers support this through their IP SLA feature. This tutorial walks through a basic configuration: http://www.firewall.cx/cisco-technical-knowledgebase/cisco-routers/813-cisco-router-ipsla-basic.html
As you can see, there are many ways to configure HA for Fabio or an upstream LB. It's hard to give a good recommendation without knowing more about your environment. Hopefully one of these suggestions is useful to you.
In the following case, a network of Nomad nodes is in the 192.168.8.140-250 range, and a floating IP, 192.168.8.100, is the one DNATed from the firewall for ports 80/443.
Traefik is coupled to keepalived in the same group. Keepalived will assign its floating IP to the node where Traefik is running; there will be only one keepalived in MASTER state.
It's not keepalived's usual use case, but it's good at broadcasting gratuitous ARP when it comes alive.
job "traefik" {
datacenters = ["dc1"]
type = "service"
group "traefik" {
constraint {
operator = "distinct_hosts"
value = "true"
}
volume "traefik_data_le" {
type = "csi"
source = "traefik_data"
read_only = false
attachment_mode = "file-system"
access_mode = "multi-node-multi-writer"
}
network {
port "http" {
static = 80
}
port "https" {
static = 443
}
port "admin" {
static = 8080
}
}
service {
name = "traefik-http"
provider = "nomad"
port = "http"
}
service {
name = "traefik-https"
provider = "nomad"
port = "https"
}
task "keepalived" {
driver = "docker"
env {
KEEPALIVED_VIRTUAL_IPS = "192.168.8.100/24"
KEEPALIVED_UNICAST_PEERS = ""
KEEPALIVED_STATE = "MASTER"
KEEPALIVED_VIRTUAL_ROUTES = ""
}
config {
image = "visibilityspots/keepalived"
network_mode = "host"
privileged = true
cap_add = ["NET_ADMIN", "NET_BROADCAST", "NET_RAW"]
}
}
task "server" {
driver = "docker"
config {
image = "traefik:2.9.6"
network_mode = "host"
ports = ["admin", "http", "https"]
args = [
"--api.dashboard=true",
"--entrypoints.web.address=:${NOMAD_PORT_http}",
"--entrypoints.websecure.address=:${NOMAD_PORT_https}",
"--entrypoints.traefik.address=:${NOMAD_PORT_admin}",
"--certificatesresolvers.letsencryptresolver.acme.email=email#email",
"--certificatesresolvers.letsencryptresolver.acme.storage=/letsencrypt/acme.json",
"--certificatesresolvers.letsencryptresolver.acme.httpchallenge.entrypoint=web",
"--entrypoints.web.http.redirections.entryPoint.to=websecure",
"--entrypoints.web.http.redirections.entryPoint.scheme=https",
"--providers.nomad=true",
"--providers.nomad.endpoint.address=http://192.168.8.140:4646" ### IP to your nomad server
]
}
volume_mount {
volume = "traefik_data_le"
destination = "/letsencrypt/"
}
}
}
}
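Assuming the job is saved as traefik.nomad.hcl (hypothetical filename), it can be submitted with the Nomad CLI:

nomad job run traefik.nomad.hcl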
For keepalived to run, you need to allow privileged containers and the required capabilities in the Docker driver plugin config:
plugin "docker" {
config {
allow_privileged = true
allow_caps = [...,"NET_ADMIN","NET_BROADCAST","NET_RAW"]
}
}

NGINX RP as gateway to all my LAN services

I'm trying to set up a reverse proxy based on Nginx on a Raspberry Pi.
What I have:
- 1 Synology server at home (location 1)
- 1 Synology server at one of my friends' homes (location 2)
- 1 Raspberry Pi with Raspbian & Nginx RP (RPi_NGinx)
- 1 Raspberry Pi with Raspbian & a self-hosted Jitsi Meet server (Rpi_Jitsi)
- 1 Raspberry Pi with Raspbian & PiVPN (OpenVPN server)
- 1 Asus router
I only have one external IP and one domain name (let's say: myowndomain.com), and I can set as many CNAMEs as I want.
See Diagram
What I want to do is set up Nginx so I can:
- connect from the internet to my Synology NAS (SynoHome), using dsm.myowndomain.com
- connect from the internet to my router, using rtr.myowndomain.com
- connect from the internet to my self-hosted Jitsi Meet server, using jitsi.myowndomain.com
- connect from the internet over VPN to other resources on my LAN, using vpn.myowndomain.com
- make sure my other Synology (SynoBackup) continues to replicate with SynoHome
What I already did:
- Set up Nginx
- Configured some /etc/nginx/sites-available/xxx.myowndomain.com.conf files
- Created links in /etc/nginx/sites-enabled/xxx.myowndomain.com.conf
- Modified Windows' system32/drivers/etc/hosts in order to test my setup from inside my network
All my xxx.myowndomain.com.conf look like:
server {
    listen 80;
    server_name dsm.myowndomain.com;

    location / {
        proxy_pass https://192.168.200.200:5001;
    }
}
So far I can only access my Synology admin UI. All other use-case tests lead either to 502 Bad Gateway or to a loop (the Asus router web GUI) that endlessly reloads the same page.
Some NGINX expert who wants to help a noob?
Thank you
Try it like this:
server {
    listen 80;
    server_name dsm.myowndomain.com;

    location / {
        proxy_pass https://192.168.200.200:5001/;
    }
}
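502s and redirect loops behind a reverse proxy are often caused by the backend not seeing the original host or scheme. A hedged sketch of commonly added directives (standard nginx variables, but each backend may need different tuning):

location / {
    proxy_pass https://192.168.200.200:5001/;
    # Forward the original request details to the backend:
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}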

How to authenticate Logstash output to a secure Elasticsearch URL (version 5.6.5)

I am using Logstash and Elasticsearch version 5.6.5. So far I have used the elasticsearch output with the HTTP protocol and no authentication. Now Elasticsearch is being secured using basic authentication (user/password) and a CA-certified HTTPS URL. I don't have any control over the Elasticsearch server; I just use it as an output from Logstash.
Now when I try to configure the HTTPS URL of elasticsearch with basic authentication, it fails to create the pipeline.
Output Configuration
output {
    elasticsearch {
        hosts => ["https://myeslasticsearch.server.io"]
        user => "esusername"
        password => "espassword"
        ssl => true
    }
}
Errors
1. Error registering plugin {:plugin=>"#<LogStash::OutputDelegator:0x50aa9200
2. Pipeline aborted due to error {:exception=>#<URI::InvalidComponentError: bad component(expected user component):
How to fix this?
I notice that there is a field called cacert which requires a PEM file, but I am not sure what to put there, since the Elasticsearch server is using a CA-certified SSL certificate, not a self-signed one.
Additional question: I don't have X-Pack installed. Does X-Pack need to be purchased for HTTPS output to Elasticsearch from Logstash?
I found the root cause of the issue. There were three things to fix:
The Logstash version I tested with was wrong (5.5.0). I downloaded the correct version to match Elasticsearch 5.6.5.
The host I used was serving on port 443. When I didn't specify the port, as below, Logstash appended the default 9200 to it, which made the connection fail.
hosts => ['https://my.es.server.com']
The configuration below corrected the port used by Logstash:
hosts => ['https://my.es.server.com:443']
I was missing proxy connection settings.
proxy => 'http://my.proxy.com:80'
Overall settings that worked:
output {
    elasticsearch {
        hosts => ['https://my.es.server.com:443']
        user => 'esusername'
        password => 'espassword'
        proxy => 'http://my.proxy:80'
        index => "my-index-%{+YYYY.MM.dd}"
    }
}
There is no need for the 'ssl' field; it is implied by the https URL.
There is also no need for an X-Pack installation for this requirement.