I have a web app consisting of front-end and back-end services. I want to secure my front-end service with a Let's Encrypt certificate, but then I also have to use a secured connection between the front-end and the back-end. The back-end service is served on a custom port. To secure the back-end I want to use nginx as a proxy in front of it. However, I am struggling to get it right. Here is my nginx configuration:
server {
    listen 8082;
    server_name <my_domain_name>;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/<my_domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my_domain>/privkey.pem;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 SSLv3;

    location / {
        proxy_pass http://0.0.0.0:8081;
    }
}
First, I just wanted to get the proxying to work without SSL, but it does not work like this: nothing is served on 8082. Once it works, I thought I could use my Let's Encrypt certificates here, though I'm not sure whether that is possible and whether I understand things correctly.
I would appreciate any help! Thanks a lot in advance!
Update
I figured out that the problem was in iptables. After I added port 8082 to the rules, it worked. What I don't understand is why I can connect to port 8081, although it is not in the iptables rules.
However, now I get ERR_SSL_PROTOCOL_ERROR when I try https://my_domain:8082.
I also tried to add ssl to the listen directive, like listen 8082 ssl;. Then I get ERR_CONNECTION_RESET.
Just for the record: the problem was indeed in the listen directive.
Adding
listen 8082 ssl;
and removing
ssl on;
solved it.
It is a mystery why it gave me ERR_CONNECTION_RESET before, but now it works.
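For reference, here is a minimal sketch of the question's server block with just those two changes applied (the domain, certificate paths, and backend address are the same placeholders as above; I have also dropped SSLv3 from ssl_protocols, since it is long deprecated):
server {
    listen 8082 ssl;
    server_name <my_domain_name>;

    ssl_certificate /etc/letsencrypt/live/<my_domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my_domain>/privkey.pem;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        # plain-HTTP back-end on the custom port
        proxy_pass http://0.0.0.0:8081;
    }
}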
location @backend {
    proxy_pass http://backend;
}
@backend is a named location, which allows you to reference it like a variable, e.g.:
location / {
    error_page 404 = @backend;
}
For your problem, try something like:
location / {
    proxy_pass http://backend;
}
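Note that proxy_pass http://backend; assumes an upstream group named backend is defined elsewhere in the config. A minimal sketch of one, with a hypothetical back-end address matching the question's custom port:
upstream backend {
    # hypothetical address of the back-end service
    server 127.0.0.1:8081;
}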
Related
I know that NGINX is not supposed to be used as a forward proxy, but I have a requirement to do so ... Anyway, it is obviously not too hard to get HTTP to work as a forward proxy, but issues arise when trying to configure HTTPS. I generated some self-signed certs and then tried to connect to https://www.google.com, and it gives me the error ERR_TUNNEL_CONNECTION_FAILED. The issue has to do with my certs somehow, but I have no idea how to fix it. Does anyone know how to achieve this functionality?
Here is my config
server {
    listen 443 ssl;
    root /data/www;

    ssl on;
    ssl_certificate /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    location / {
        resolver 8.8.8.8;
        proxy_pass https://$http_host$uri$is_args$args;
    }
}
The reason NGINX does not support HTTPS forward proxying is that it doesn't support the CONNECT method. However, if you are interested in using it as an HTTPS forward proxy, you can use the ngx_http_proxy_connect_module.
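As a rough sketch only (it assumes nginx has been rebuilt with ngx_http_proxy_connect_module patched in; the directive names below are taken from that module's documentation and the listen port is arbitrary), a CONNECT-capable forward proxy server block might look like this:
server {
    listen 3128;
    resolver 8.8.8.8;

    # ngx_http_proxy_connect_module directives: allow CONNECT tunnelling to port 443
    proxy_connect;
    proxy_connect_allow            443;
    proxy_connect_connect_timeout  10s;
    proxy_connect_read_timeout     10s;
    proxy_connect_send_timeout     10s;

    # plain HTTP forward-proxy requests still go through proxy_pass
    location / {
        proxy_pass http://$host;
        proxy_set_header Host $host;
    }
}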
I was able to configure SSL/TLS forward proxying with this configuration, using the stream module.
stream {
    upstream web_server {
        server my_server_listening_on:443;
    }

    server {
        listen 443;
        proxy_pass web_server;
    }
}
Resources:
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru
First of all my problem is different.
I have tried listen 443 default ssl; and also listen 443 ssl;, as well as commenting lines out with #, but nothing seems to work. Port 80 works fine, but on port 443 I get this error.
Currently this is the default file for nginx.
server {
    listen 80;
    listen 443 ssl;
    #listen 443 default ssl;
    server_name .******.org;
    keepalive_timeout 70;

    #ssl on;
    ssl_certificate /etc/ssl/private/lol/www.*******.crt;
    ssl_certificate_key /etc/ssl/private/lol/www.********.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
For ssl_protocols I also tried using only SSLv3 TLSv1, but it's the same. My nginx version is 1.2.1.
I have gone through many online sites, including this one, but my problem is not solved by any of the methods suggested there.
So finally I am here.
Any suggestions?
P.S.: I am using Cloudflare, but I have turned Universal SSL off there, as I want to use another SSL certificate.
You should write two server blocks, one for HTTP and one for HTTPS, like:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/public/;
    index index.html;
    #other settings
}

server {
    listen 443;
    server_name localhost;
    root /var/www/public/test/;
    index index.html;
    ssl on;
    ssl_certificate /etc/nginx/certs/wss.pem;
    ssl_certificate_key /etc/nginx/certs/wss.pem;
    #other settings
}
I have tried it with the default nginx settings and both ports work fine.
If you are experiencing this issue with Google Compute Engine / the Google HTTP load balancer, ensure your instance group is set up with separate named ports for http: 80 and https: 443. Otherwise it will randomly select a port.
This came about in my case because I originally set up the HTTP load balancer while it was still in beta. When I later added another load balancer, it refreshed the settings and started randomly failing.
It failed 50% of the time, because I only had Nginx set up with a vhost for port 80, and it was trying to push HTTP requests to port 80 on the web boxes.
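As a hedged example (the instance group name and zone are placeholders, and the exact gcloud syntax may differ between versions), the named ports can be set with:
gcloud compute instance-groups set-named-ports my-instance-group \
    --named-ports=http:80,https:443 \
    --zone=us-central1-a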
The error you get most likely occurs because you sent an unencrypted HTTP request to the SSL port.
Something like
wget http://example.com:443/
This is a client problem (the server just tells you that it refuses to answer unencrypted messages on a channel that is supposed to be encrypted).
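For comparison, the same request with the scheme matching the port would be (example.com is just the placeholder host from above):
wget https://example.com/
wget https://example.com:443/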
It is a client problem. I was having the same issue; it turned out the https prefix was being dropped from the URL.
In the browser, inspect the network traffic to verify that the browser is sending an http request rather than https. Issue found!
Manually type in the wanted URL with https to retrieve the page successfully. Now you can go about applying a focused fix to your client.
I have a site which was running perfectly with Apache on an old Ubuntu server, including HTTPS. Now, for various reasons, I need to move to a different server (a new Ubuntu server with a higher specification) and I am trying to serve the site using Nginx, so I installed nginx (nginx/1.4.6 (Ubuntu)). Below are my nginx.conf settings:
server {
    listen 8005;

    location / {
        proxy_pass http://127.0.0.1:8001;
    }

    location /static/ {
        alias /root/apps/project/static/;
    }

    location /media/ {
        alias /root/apps/media/;
    }
}
# Https Server
server {
    listen 443;

    location / {
        # proxy_set_header Host $host;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Protocol $scheme;
        # proxy_set_header X-Url-Scheme $scheme;
        # proxy_redirect off;
        proxy_pass http://127.0.0.1:8001;
    }

    server_tokens off;

    ssl on;
    ssl_certificate /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache shared:SSL:10m; # ~ 40,000 sessions
    ssl_protocols SSLv3 TLSv1; # SSLv2
    ssl_ciphers ALL:!aNull:!eNull:!SSLv2:!kEDH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP:@STRENGTH;
    ssl_prefer_server_ciphers on;
}
Since I already had the HTTPS certificate (project.com.crt) and key (www.project.com.key) in use on the other server, I simply copied them to the new server (which has no domain yet, only an IP), placed them at /etc/ssl/certificates/, and tried to use them directly. I then restarted Nginx and tried to access my IP 23.xxx.xxx.xx:8005 as https://23.xxx.xxx.xx:8005, and I get the error below in Firefox:
Secure Connection Failed
An error occurred during a connection to 23.xxx.xxx.xx:8005. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.
But when I access the IP without https, the site is served fine.
So what is wrong with the HTTPS settings in the above nginx conf file?
Can't the certificate files be served by simply copying them into some folder? Do I need to create an extra certificate for my new server?
Change
listen 443;
to
listen 443 ssl;
and get rid of this line
ssl on;
That should fix your SSL issue, but it looks like you have several issues in your configuration.
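As a rough sketch (keeping the certificate paths and backend port from the question), the HTTPS server block would then start like this:
server {
    listen 443 ssl;   # ssl on the listen directive, no separate "ssl on;"

    ssl_certificate     /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;

    location / {
        proxy_pass http://127.0.0.1:8001;
    }
}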
So what is wrong with the HTTPS settings in the above nginx conf file?
You don't have an SSL/TLS server listening on the port the client is trying to connect to. The ssl_error_rx_record_too_long occurs because the client's SSL stack is trying to interpret an HTTP response as SSL/TLS data. A Wireshark trace should confirm the issue; look at the raw bytes (follow the stream).
I don't know why the configuration is not correct. Perhaps someone with Nginx config experience can help, or the folks on Server Fault or Webmasters Stack Exchange.
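A quicker check than Wireshark, as a suggestion, is to ask OpenSSL whether anything on that port actually speaks TLS (host and port below are the placeholders from the question):
# prints a certificate dump if a TLS server is listening,
# fails immediately if the port only speaks plain HTTP
openssl s_client -connect 23.xxx.xxx.xx:8005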
This problem happens when the client gets non-SSL content over an SSL connection: the server sends HTTP content but the client expects HTTPS content. You can check two main things to fix it, although it can also be caused by other side effects.
Make sure you put ssl on the listen directive:
listen [PORT_NUMBER] ssl;
Check the host IP address you are connecting to. DNS may be correct, but perhaps you have a different entry in your hosts file or on your local DNS server (see the lookup commands sketched below).
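As an example of that second check (example.com stands in for the real domain), you can compare what the system resolver and DNS itself return:
getent hosts example.com     # what the system resolver (including /etc/hosts) returns
dig +short example.com       # what DNS returns
cat /etc/hosts               # look for a stale manual entry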
I'm serving two sites with Nginx. The first site (say A) has an SSL certificate and the second site (say B) doesn't. Site A works fine over https and B over http. But when I access site B over https, nginx serves the SSL cert and the contents of site A under the domain of B, which shouldn't happen.
Nginx config for site A is as follows. For site B, it's just a reverse proxy to a Flask app.
server {
    listen 80;
    server_name siteA.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name siteA.com;

    ssl_certificate /path/to/cert.cert
    ssl_certificate_key /path/to/cert_key.key;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:RC4-SHA:AES256-GCM-SHA384:AES256-SHA256:CAMELLIA256-SHA:ECDHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:CAMELLIA128-SHA;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    keepalive_timeout 70;

    # and then the `location /` serving static files
}
I can't figure out what is wrong here.
Apparently I need a dedicated IP for site A.
Quoting from What exactly does "every SSL certificate requires a dedicated IP" mean?
When securing some connection with TLS, you usually use the certificate to authenticate the server (and sometimes the client). There's one server per IP/Port, so usually there's no problem for the server to choose what certificate to use. HTTPS is the exception -- several different domain names can refer to one IP and the client (usually a browser) connects to the same server for different domain names. The domain name is passed to the server in the request, which goes after TLS handshake. Here's where the problem arises - the web server doesn't know which certificate to present. To address this a new extension has been added to TLS, named SNI (Server Name Indication). However, not all clients support it. So in general it's a good idea to have a dedicated server per IP/Port per domain. In other words, each domain, to which the client can connect using HTTPS, should have its own IP address (or different port, but that's not usual).
Nginx was listening on port 443, and when a request for site B came in over https, the TLS handshake took place and the certificate of site A was presented before the content was served.
The ssl_certificate directive should be terminated with ; to get the expected output.
Also make sure that you have followed the correct syntax in all the config file parameters by using the following command, and then restart or reload the service:
sudo nginx -t
NGINX supports SNI, so it's possible to serve different domains with different certificates from the same IP address. This can be done with multiple server blocks. NGINX has documented this in
http://nginx.org/en/docs/http/configuring_https_servers.html
For me, HTTP/2 and IPv6 are important, so I listen on [::] and set ipv6only=off. Apparently this option should only be set for the first server block, otherwise NGINX will not start:
duplicate listen options for [::]:443
These are the server blocks:
server {
    listen [::]:443 ssl http2 ipv6only=off;
    server_name siteA.com www.siteA.com;
    ssl_certificate /path/to/certA.cert;
    ssl_certificate_key /path/to/certA_key.key;
}

server {
    listen [::]:443 ssl http2;
    server_name siteB.com www.siteB.com;
    ssl_certificate /path/to/certB.cert;
    ssl_certificate_key /path/to/certB_key.key;
}
If you host multiple sites on your server and one server block in your Nginx config has listen 443 ssl http2 default_server;, that default_server block will serve the same cert for all domains. Removing it will fix the problem.
While following this tutorial, I totally missed this part:
Note: You may only have one listen directive that includes the default_server modifier for each IP version and port combination. If you have other server blocks enabled for these ports that have default_server set, you must remove the modifier from one of the blocks.
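A minimal sketch of the result (siteA/siteB and the certificate paths are placeholders): with default_server removed, only explicitly named server blocks remain, so each domain gets its own certificate.
server {
    listen 443 ssl http2;              # no default_server here
    server_name siteA.com;
    ssl_certificate     /path/to/certA.cert;
    ssl_certificate_key /path/to/certA_key.key;
}

server {
    listen 443 ssl http2;
    server_name siteB.com;
    ssl_certificate     /path/to/certB.cert;
    ssl_certificate_key /path/to/certB_key.key;
}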
I need to use Nginx as an SSL proxy, which forwards traffic to different back ends depending on the subdomain.
I have seen everywhere that I should define multiple "server {" sections, but that doesn't work correctly for SSL. Doing that, the SSL would always be processed in the first virtual host, as the server name is unknown until the https traffic is processed.
Scenario:
One IP address
One wildcard SSL certificate
Multiple backends which need to be accessed like the following:
https://one.mysite.com/ -> http://localhost:8080
https://two.mysite.com/ -> http://localhost:8090
Nginx says "if" is evil: http://wiki.nginx.org/IfIsEvil, but what else can I do?
I have tried this, but it doesn't work; I get a 500 error but nothing in the error logs.
server {
    listen 443;
    server_name *.mysite.com;

    ssl on;
    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        if ($server_name ~ "one.mysite.com") {
            proxy_pass http://localhost:8080;
        }
        if ($server_name ~ "two.mysite.com") {
            proxy_pass http://localhost:8090;
        }
    }
}
Has anyone managed to accomplish this with Nginx? Any help, alternatives, or links would be much appreciated.
I found the solution, which is basically to define the SSL options and the SSL certificate outside the "server" blocks:
ssl_certificate ssl/mysite.com.crt;
ssl_certificate_key ssl/mysite.com.key;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP;
ssl_prefer_server_ciphers on;

server {
    listen 80;
    server_name *.mysite.com;
    rewrite ^ https://$host$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name one.mysite.com;
    ssl on;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    server_name two.mysite.com;
    ssl on;

    location / {
        proxy_pass http://localhost:8090;
    }
}
Key things:
"ssl on;" is the only thing that needs to be within the "server" blocks that listen in https, you can put it outside too, but what will make the "server" blocks that listen in port 80 to use https protocol and not the expected http.
Because the "ssl_certificate", "ssl_ciphers: and other "ssl_*" are outside the "server" block, Nginx does the SSL offloading without a server_name. Which is what it should do, as the SSL decryption cannot happen based on any host name, as at this stage the URL is encrypted.
JAVA and curl don't fail to work now. There is no server_name - host miss match.
The short answer is to use Server Name Indication. This should work by default in common browsers and cURL.
According to http://www.informit.com/articles/article.aspx?p=1994795, you should indeed have two "server" sections, with two different server names.
In each one, you should include your ssl_* directives.
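For the scenario in this question, a rough sketch of that layout (reusing the wildcard certificate files and backend ports from above, and relying on SNI to pick the right block):
server {
    listen 443 ssl;
    server_name one.mysite.com;
    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    server_name two.mysite.com;
    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        proxy_pass http://localhost:8090;
    }
}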