I currently have an NGINX configuration with many subdomains. I started a Mumble server on port 27845; it works if I access it from a Mumble client at <ip_address>:27845.
I tried to use NGINX to provide a subdomain mumble.example.com on port 80, which uses proxy_pass http://localhost:27845. But when I try to connect to mumble.example.com from my Mumble client, it says:
This server is using an older encryption standard, and is no longer supported by modern versions of Mumble.
How can I use a valid SSL configuration for NGINX on port 80? I can only use ports 80 and 443, and 443 is already used for OpenVPN.
NGINX subdomain conf:
server {
    server_name mumble.example.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        proxy_pass http://localhost:27845;
    }
}
Nginx's proxy_pass inside an http context is an HTTP proxy, but the Mumble protocol runs over raw TCP (control channel) and UDP (voice), so an HTTP location block cannot proxy the Murmur service.
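That said, nginx ships a stream module (TCP proxying since 1.9.0, UDP in later releases) that forwards raw connections without touching HTTP. A minimal sketch, assuming the stream module is compiled in and that port 80 can be given over entirely to TCP forwarding (the port and backend address are taken from the question):

```nginx
# Goes at the top level of nginx.conf, NOT inside the http {} block.
stream {
    server {
        listen 80;                   # TCP control channel entry point
        proxy_pass 127.0.0.1:27845;  # local Murmur instance
    }
}
```

Note that a stream server cannot share port 80 with the existing HTTP server blocks on the same address, and Mumble's UDP voice traffic would still need to reach the backend separately.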
So I have an Nginx server working as a reverse proxy with this configuration:
http {
    server {
        listen 80;
        listen [::]:80;
        server_name www.example.tdl;
        access_log logs/reverse-access.log;
        error_log logs/reverse-error.log;

        location / {
            proxy_pass https://127.0.0.1:443;
        }
    }
}
My only problem is that when I connect to example.tdl it redirects correctly, but not over HTTPS. It is proxying to the HTTPS port of my Apache server, so I don't know where the problem is.
I have never used nginx before, so this is probably a configuration error.
How can I make nginx redirect to that port with HTTPS?
This may sound obvious, but what you've got is nginx working as a proxy: you connect to nginx via HTTP, nginx forwards your request to Apache via HTTPS, then gets the answer back and transmits it to you over plain HTTP.
You need to make nginx listen for HTTPS with listen 443 ssl; and supply it with a certificate (ssl_certificate <path>;) and a key (ssl_certificate_key <path>;). After that, add the following piece of code to the server block:
if ($scheme != "https") {
    return 301 https://$host$request_uri;
}
This will redirect everything to HTTPS.
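Putting the pieces together, the server block might look something like this (a sketch only; the certificate paths are placeholders, and it assumes Apache is bound only to 127.0.0.1:443 so the two servers do not clash on that port):

```nginx
server {
    listen 80;
    listen 443 ssl;
    server_name www.example.tdl;

    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder path

    # Send any plain-HTTP request to the HTTPS version of the same URL
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass https://127.0.0.1:443;
    }
}
```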
So I'm having some issues with SSL certificates.
I have a React app running on port 80, and a Node backend running on port 443.
I have a domain pointing to the IP (xx.xx.xxx.xx), which directs to the React app. I'm using nginx to proxy the requests from frontend to backend, as I have both on the same server.
Here is the nginx config:
server {
    listen 80 ssl;
    server_name xx.xx.xxx.xx;
    ssl_client_certificate /etc/letsencrypt/live/domain.com/cert.pem;
    ssl_certificate /etc/letsencrypt/live/domain.com/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    root /home/ubuntu/build;
    index index.html;
    access_log /var/log/nginx/build.access.log;
    error_log /var/log/nginx/build.error.log;

    location / {
        try_files $uri /index.html =404;
    }
}
upstream backend {
    server 127.0.0.1:443;
    server 127.0.0.1:443 max_fails=1 fail_timeout=30s backup;
    keepalive 64;
}
server {
    listen 443 ssl;
    server_name xx.xx.xxx.xx;
    ssl_client_certificate /etc/letsencrypt/live/domain.com/cert.pem;
    ssl_certificate /etc/letsencrypt/live/domain.com/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
    keepalive_timeout 10;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_set_header Connection '';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
I'm receiving the following error when a request is made to the backend:
net::ERR_CERT_COMMON_NAME_INVALID
This is because the certificate is valid for 'domain.com' and not for the IP address that the backend is operating on (I know a cert must be issued for a fully qualified domain name).
My question is: what can I do differently (with nginx) that will allow my requests to be made over HTTPS through a reverse proxy?
You're using the standard ports 80 and 443 for the apps themselves. These ports are the public entry points to your server, and it's not advisable to run the proxied applications on them.
When using a reverse proxy, we map other ports to port 80 or port 443 so they can be publicly accessible via HTTP or HTTPS respectively.
If we want everything accessible over HTTPS, we need to map both the React and Node apps to 443 via the reverse proxy, and redirect all HTTP access to HTTPS.
So, as suggested steps to fix:
1) Use different ports, say 3000 for React and 3001 for Node.
2) Configure your server block listening on port 80 to redirect to HTTPS, e.g. return 301 https://<yourdomainhere.com>$request_uri;
3) Remove the ssl lines from your port 80 server block. Only use them inside server blocks listening on port 443.
4) Modify your upstream {} block to use port 3001 for the Node app. Retain the use of proxy_pass http://backend;, it's fine as it is.
5) Add a new location block with proxy_pass http://localhost:3000; inside the server block that listens on port 443. You will now have two location blocks, one for React and one for Node.
6) Define your server_name per block with yourdomainhere.com, since SSL certificates are generally not issued for bare IP addresses. I suggest using a separate server block to redirect the IP address to your domain with the https:// prefix.
7) Check for errors, then restart nginx.
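Applied to the config in the question, the result could look roughly like this. This is a sketch only: ports 3000/3001 and yourdomainhere.com follow the suggestions above, and the /api/ prefix for the backend is an assumption that depends on how the frontend calls the backend:

```nginx
upstream backend {
    server 127.0.0.1:3001;   # Node app moved off port 443
    keepalive 64;
}

server {
    listen 80;
    server_name yourdomainhere.com;
    return 301 https://yourdomainhere.com$request_uri;
}

server {
    listen 443 ssl;
    server_name yourdomainhere.com;
    ssl_certificate     /etc/letsencrypt/live/domain.com/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location /api/ {                       # assumed backend prefix
        proxy_pass http://backend;         # Node on 3001
    }

    location / {
        proxy_pass http://localhost:3000;  # React on 3000
    }
}
```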
I have set up an Nginx Ingress to proxy traffic to a Kubernetes cluster I built with kubeadm. This seems to be working well.
On the host (where the master node runs) I have a number of other services that are being proxied by another, publicly facing Nginx.
What I want to achieve is to route all the traffic for a specific domain (pointing at the cluster) from the first, public-facing Nginx to the Nginx running in the cluster.
Internet -----> Nginx Public -----> Nginx Ingress -----> Cluster
The Nginx Ingress is listening for TLS/SSL traffic, so I want to pass the SSL traffic through to it via the public Nginx.
I attempted it with the following, which didn't seem to work:
upstream cluster {
    server 10.109.70.33:443 max_fails=10 fail_timeout=10s;
}
server {
    listen 80;
    listen [::]:80;
    listen 443;
    listen [::]:443;
    server_name *.dev-new.test.co;
    access_log /var/log/nginx/cluster-access.log;
    error_log /var/log/nginx/cluster-error.log;

    location / {
        proxy_pass https://cluster;
    }
}
You need to add
proxy_set_header Host $host;
inside the location block that does the proxy_pass. This is needed so the upstream server knows which virtual host you are requesting.
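Applied to the config in the question, the location block would look like this:

```nginx
location / {
    proxy_pass https://cluster;
    proxy_set_header Host $host;   # tell the Ingress which vhost is wanted
}
```

Note that this is still TLS termination and re-encryption at the public Nginx rather than true SSL passthrough; passing the raw TLS stream through untouched would require nginx's stream module instead of an http server block.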
This is my setup:
Got a domain: domain.com.
Within my local network I have DNS provided by an Active Directory box,
an IIS web server running on port 80, host name iis.domain.com,
which hosts the sites iis1.domain.com and iis2.domain.com,
and an Apache web server running on port 80, host name apache.domain.com,
with the sites apache1.domain.com and apache2.domain.com.
Within my local network I can access all these sites just fine.
I also have external DNS entries for iis1, iis2, apache1 and apache2.
I only have one public IP address, and I would like to set up another box which would be port forwarded to the internet (ports 80 and 443).
I would like to know what to install on that box and how to configure it.
I have looked at nginx, haproxy and IIS ARR, but I would like to know which of these are the easiest to setup and have the least overhead.
In my mind I would like to specify something like: if it's a request for site iis1.domain.com, then send it to the IIS web server, and if it is for apache1.domain.com, then send it to the Apache web server.
I would like to go with a Linux solution, but I am not sure which one or how to set it up.
Thank you in advance.
P.S.
I saw a possible solution here.
Would something like this work ?
server {
    listen 80 default_server;
    server_name iis1.domain.com;

    location / {
        proxy_pass http://iis1.domain.com/;
    }
}
server {
    listen 80 default_server;
    server_name apache1.domain.com;

    location / {
        proxy_pass http://apache1.domain.com/;
    }
}
I would go with HAProxy (easiest in my opinion).
Just be very careful with your external vs. internal DNS.
The example you have in your question forwards to a DNS name... which points back to the proxy (externally)... which points to the DNS name again; I think you get my meaning: it's a loop.
HAProxy would point at your backends' IP addresses, so both internal and external DNS would point to your proxy and requests would get routed fine to their intended backend.
The HAProxy config would look something like this:
global
    # default globals
defaults
    # default globals
frontend http-in
    bind YOUR.IP.GOES.HERE:80
    bind YOUR.IP.GOES.HERE:443 ssl crt PATH/TO/CERT-FILE.PEM no-sslv3
    mode http
    option httplog
    option httpclose
    option forwardfor

    acl iis1 hdr(Host) -i iis1.domain.com
    acl iis2 hdr(Host) -i iis2.domain.com
    acl apache1 hdr(Host) -i apache1.domain.com
    acl apache2 hdr(Host) -i apache2.domain.com

    use_backend iis if iis1
    use_backend iis if iis2
    use_backend apache if apache1
    use_backend apache if apache2
backend iis
    server IIS xxx.xxx.xxx.xxx:80 check
backend apache
    server APACHE xxx.xxx.xxx.yyy:80 check
I managed to actually get this to work by installing a Linux box with nginx. Port 80 on this box is forwarded to the internet.
In /etc/nginx/nginx.conf I added a line to look for other config files: include /etc/nginx/sites-enabled/*.conf;.
Then in /etc/nginx/sites-enabled/ I created one config file with this info:
server {
    listen 80;
    server_name apache1.domain.com;

    location / {
        proxy_pass http://apache1.domain.com;
    }
}
server {
    listen 80;
    server_name apache2.domain.com;

    location / {
        proxy_pass http://apache2.domain.com;
    }
}
server {
    listen 80;
    server_name iis1.domain.com;

    location / {
        proxy_pass http://iis1.domain.com;
    }
}
server {
    listen 80;
    server_name iis2.domain.com;

    location / {
        proxy_pass http://iis2.domain.com;
    }
}
I'm serving two sites with Nginx. The first site (say A) has an SSL certificate and the second site (say B) doesn't. Site A works fine when opened over https and B over http. But when I access site B over https, nginx serves the SSL cert and the contents of site A under B's domain, which shouldn't happen.
Nginx config for site A is as follows. For site B, it's just a reverse proxy to a Flask app.
server {
    listen 80;
    server_name siteA.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name siteA.com;
    ssl_certificate /path/to/cert.cert
    ssl_certificate_key /path/to/cert_key.key;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:RC4-SHA:AES256-GCM-SHA384:AES256-SHA256:CAMELLIA256-SHA:ECDHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:CAMELLIA128-SHA;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    keepalive_timeout 70;

    # and then the `location /` serving static files
}
I can't figure out what is wrong here.
Apparently I need a dedicated IP for site A.
Quoting from What exactly does "every SSL certificate requires a dedicated IP" mean?
When securing some connection with TLS, you usually use the certificate to authenticate the server (and sometimes the client). There's one server per IP/Port, so usually there's no problem for the server to choose what certificate to use. HTTPS is the exception -- several different domain names can refer to one IP and the client (usually a browser) connects to the same server for different domain names. The domain name is passed to the server in the request, which goes after TLS handshake. Here's where the problem arises - the web server doesn't know which certificate to present. To address this a new extension has been added to TLS, named SNI (Server Name Indication). However, not all clients support it. So in general it's a good idea to have a dedicated server per IP/Port per domain. In other words, each domain, to which the client can connect using HTTPS, should have its own IP address (or different port, but that's not usual).
Nginx was listening on port 443 and when request for site B went on https, the TLS handshake took place and the certificate of site A was presented before serving the content.
The ssl_certificate parameter should be terminated with a ; to get the expected output.
Also make sure that you have followed the correct syntax in all the config file parameters by using the following command, and then restart or reload the service:
sudo nginx -t
NGINX supports SNI, so it's possible to serve different domains with different certificates from the same IP address. This can be done with multiple server blocks; NGINX documents this at
http://nginx.org/en/docs/http/configuring_https_servers.html
For me HTTP/2 and IPv6 are important, so I listen on [::] and set ipv6only=off. Apparently this option should only be set in the first server block; otherwise NGINX will not start:
duplicate listen options for [::]:443
These server blocks work for me:
server {
    listen [::]:443 ssl http2 ipv6only=off;
    server_name siteA.com www.siteA.com;
    ssl_certificate /path/to/certA.cert;
    ssl_certificate_key /path/to/certA_key.key;
}
server {
    listen [::]:443 ssl http2;
    server_name siteB.com www.siteB.com;
    ssl_certificate /path/to/certB.cert;
    ssl_certificate_key /path/to/certB_key.key;
}
If you host multiple sites on your server and one Nginx config has listen 443 ssl http2 default_server;, the default_server block will serve the same cert for all domains. Removing the modifier will fix the problem.
While following this tutorial I totally missed this part:
Note: You may only have one listen directive that includes the default_server modifier for each IP version and port combination. If you have other server blocks enabled for these ports that have default_server set, you must remove the modifier from one of the blocks.
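As a before/after sketch of the fix described above (siteA.com and siteB.com stand in for the actual domains, and the certificate paths are placeholders):

```nginx
# Before: with default_server, this block answers for every name
# on port 443, so every domain is served siteA's certificate.
server {
    listen 443 ssl http2 default_server;
    server_name siteA.com;
    ssl_certificate     /path/to/certA.cert;
    ssl_certificate_key /path/to/certA_key.key;
}

# After: drop default_server and give each domain its own block;
# SNI then selects the matching certificate per hostname.
server {
    listen 443 ssl http2;
    server_name siteA.com;
    ssl_certificate     /path/to/certA.cert;
    ssl_certificate_key /path/to/certA_key.key;
}
server {
    listen 443 ssl http2;
    server_name siteB.com;
    ssl_certificate     /path/to/certB.cert;
    ssl_certificate_key /path/to/certB_key.key;
}
```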