Reverse proxy in nginx for nextcloud? - ssl

How do I set up a reverse proxy for Nextcloud?
This is my current config, but it doesn't work:
server {
    listen 8000;
    server_name cloud.prjctdesign.com;
    return 301 https://$host$request_uri;
}

server {
    listen 4430 ssl http2;
    server_name cloud.prjctdesign.com;

    ssl_certificate /certs/cloud.prjctdesign.com.crt;
    ssl_certificate_key /certs/cloud.prjctdesign.com.key;
    include /etc/nginx/conf/ssl_params.conf;

    client_max_body_size 10G; # change this value according to $UPLOAD_MAX_SIZE

    location / {
        proxy_pass http://192.168.178.32;
        include /etc/nginx/conf/proxy_params;
    }
}
I also enabled SSL using a Let's Encrypt cert. I run Nextcloud in the official VM image provided by Nextcloud / Techandme.
I believe there is something wrong with HSTS, but I have no idea how it works. I also based my forwarding on this.
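For reference, HSTS on the nginx side is just a response header set in the HTTPS server block; a minimal sketch, with an illustrative max-age value:

# illustrative HSTS header; max-age here is an example value, not a recommendation
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains";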

I figured it out.
The reference to the SSL certificate is incorrect. Either run NGINX on the same server that runs Nextcloud and point nginx at the location of the certificate files, as in these lines:
ssl_certificate /certs/cloud.prjctdesign.com.crt;
ssl_certificate_key /certs/cloud.prjctdesign.com.key;
or generate a new cert on the nginx server and point the config at it.
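A minimal sketch of the second option, assuming the new certificate lives under /etc/nginx/certs/ on the proxy host (the paths and the forwarding headers here are illustrative, not taken from the VM image):

server {
    listen 443 ssl http2;
    server_name cloud.prjctdesign.com;

    # certificate generated or copied onto the nginx host itself (illustrative paths)
    ssl_certificate /etc/nginx/certs/cloud.prjctdesign.com.crt;
    ssl_certificate_key /etc/nginx/certs/cloud.prjctdesign.com.key;

    client_max_body_size 10G;

    location / {
        # forward to the Nextcloud VM and tell it the original host and scheme
        proxy_pass http://192.168.178.32;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}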

Related

How to listen on a different port in Nginx and proxy the request?

I am a newbie to Nginx config. I have an Express app running on port 3000 under pm2; I have allowed port 3000 through ufw as well, and I have made a server block in Nginx to proxy it:
server {
    # SSL configuration
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name .mysite.co;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/django/mysite;
    }

    proxy_cache mysite;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/django/mysite/mysite.sock;
    }

    gzip_comp_level 3;
    gzip_types text/plain text/css image/*;

    ssl_certificate /etc/letsencrypt/live/mysite.co/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mysite.co/privkey.pem; # managed by Certbot
}

server {
    if ($host = www.mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mysite.co) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
    server_name .mysite.co;
    return 404; # managed by Certbot
}

server {
    listen 3000;
    listen 443 ssl http2;
    server_name .mysite.co:3000;

    location / {
        proxy_pass https://localhost:3000;
    }
}
I ran netstat -napl | grep 3000 and could confirm that the process is running; pm2 status also says it's running, and there are no errors in the log either.
How could I make this work? Thanks for the help in advance.
You won't be able to have nginx listen on port 3000 as well as your node process, as only one service can listen on a port at once. So you'll need to ensure nginx listens for connections on a different port. I imagine what you're trying to do is listen on port 80 / 443 and then pass the request on to your Express service, which is listening on port 3000?
In this case your bottom server block is nearly correct. To get this working without TLS/SSL (just on port 80) you'll want to use something like this:
server {
    listen 80;
    server_name node.mysite.co;

    location / {
        proxy_pass http://localhost:3000;
    }
}
The above is a very basic example, and you'll probably want to toggle some other settings. It will make "http://node.mysite.co" proxy through to whatever service (in this case an Express server) is listening on port 3000 locally.
You do not need to make a firewall (ufw) exception for port 3000 in this case, as it's a local proxy pass. You should close the port on the firewall so people can't access it directly; this way they must go through nginx.
If you want to get SSL/TLS working, you'll want another block that'll look something like the following. Again, this is very basic and doesn't have a lot of settings you probably want to research and set (such as cipher choices).
server {
    listen 443 ssl;
    server_name node.mysite.co;

    ssl_certificate certs/mysite/server.crt;
    ssl_certificate_key certs/mysite/server.key;

    location / {
        proxy_pass http://localhost:3000;
    }
}
You'll need to replace the cert and key paths to point to your SSL/TLS certificate and key respectively. This will enable you to access https://node.mysite.co, and it'll be proxied on to the service on port 3000 as well.
Once you've done that, you might then choose to go back and change the http (port 80) server into a redirect to https to force HTTPS-only connections.
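A minimal sketch of that redirect, reusing the node.mysite.co example name:

server {
    listen 80;
    server_name node.mysite.co;

    # send all plain-HTTP requests to the HTTPS server block above
    return 301 https://$host$request_uri;
}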
Also note that I've ensured the server_name is different to your existing django server_name with a subdomain (node.mysite.co). You might wish to change this value but you can't have two server blocks listening on the same port and server_name, otherwise nginx would have no idea what to do with the request. I'm sure you're doing this anyway but I wanted to make sure it was explicit and would work with your existing setup.
If you wish the site to be served only on mysite.co:3000
If for some reason you want the user to go to port 3000 on the domain mysite.co, then you will need to set "listen" to 3000 and keep the server name as "mysite.co" (sketched below). This will allow someone to go to mysite.co:3000 in their browser and hit your node service. I imagine this isn't really what you want for a public-facing website, though, and it also won't line up very nicely with your port 443 version.
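A sketch of that variant; note that nginx would then own port 3000, so the Express app itself would have to bind a different port (3001 here is purely an assumption):

server {
    listen 3000;
    server_name mysite.co;

    location / {
        # the node process can no longer sit on 3000 itself, so proxy to wherever it moved
        proxy_pass http://localhost:3001;
    }
}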
Note: I don't claim to be an nginx expert, but I've used it for all my node projects for the past few years and I find this setup to be pretty clear. There might be some nicer syntax you can use.

How to configure NGINX SSL (SNI)

I have this NGINX configuration as follows:
# jelastic is a wildcard certificate for *.shared-hosting.xyz
server {
    listen 443;
    server_name _;
    ssl on;
    ssl_certificate /var/lib/jelastic/SSL/jelastic.chain;
    ssl_certificate_key /var/lib/jelastic/SSL/jelastic.key;
}

# fullchain2 is a certificate for custom domain
server {
    listen 443 ssl;
    server_name my-custom-domain-demo.xyz www.my-custom-domain-demo.com;
    ssl_certificate /var/lib/nginx/ssl/my-custom-domain-demo.xyz/fullchain2.pem;
    ssl_certificate_key /var/lib/nginx/ssl/my-custom-domain-demo.xyz/privkey2.pem;
}
# additional configuration for other custom domains follows
The NGINX server receives requests whose host matches the pattern *.shared-hosting.xyz, e.g. website1.shared-hosting.xyz or website2.shared-hosting.xyz,
and also requests for other domains such as my-custom-domain-demo.xyz, another-custom-domain-demo.xyz, etc.
Now the problem is that the lower server configuration overrides the upper one. The upper block no longer works:
accessing *.shared-hosting.xyz returns a certificate error, and the browser reports that the certificate is for my-custom-domain-demo.xyz only.
What can be done so that the wildcard configuration is used for *.shared-hosting.xyz domains and none of the other server configurations are triggered
when the host matches *.shared-hosting.xyz?
The server_name _; is irrelevant (and is not required in modern versions of nginx). If a server with a matching listen and server_name cannot be found, nginx will use the default server.
In the absence of a default_server suffix to the listen directive, nginx will use the first server block with a matching listen.
If your configurations are spread across multiple files, their evaluation order will be ambiguous, so you need to mark the default server explicitly.
Try this for the jelastic server block:
server {
    listen 443 ssl default_server;
    ssl_certificate /var/lib/jelastic/SSL/jelastic.chain;
    ssl_certificate_key /var/lib/jelastic/SSL/jelastic.key;
    ...
}
See this document for more.

How to install a letsencrypt cert with nginx?

I've used letsencrypt to install an SSL cert for the latest nginx on Ubuntu.
The setup is fine and works great with one exception:
I don't know enough about SSL to know what's going on, but I have a suspicion:
I installed the SSL cert for Apache a while back and just now moved to Nginx for its http/2 support. As the nginx plugin is not stable yet, I had to install the cert myself, and this is what I did:
In my nginx config (/etc/nginx/conf/default.conf) I added:
server {
    listen 80;
    server_name [domain];
    return 301 https://$host$request_uri;
}

server {
    listen 443 http2;
    listen [::]:443 http2;
    server_name [domain];

    ssl on;
    ssl_certificate /etc/letsencrypt/live/[domain]/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/[domain]/privkey.pem;
}
Is it possible that this breaks the chain somehow? What is the proper way here?
Thanks guys
1) For strong Diffie-Hellman parameters and to avoid the Logjam attack, see this great manual.
You need to extend your nginx config with these directives (after you generate the dhparams.pem file, e.g. with openssl dhparam -out /etc/nginx/ssl/dhparams.pem 2048):
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/ssl/dhparams.pem;
2) For a correct certificate chain, use fullchain.pem rather than cert.pem; see this great tutorial for details, and the snippet after these points for what changes in the config above.
And you will get an A grade :)
3) And as a bonus, try this great service:
"Generate Mozilla Security Recommended Web Server Configuration Files".

nginx proxy based on host when using https

I need to use Nginx as an SSL proxy, which forwards traffic to different back ends depending on the subdomain.
I have seen everywhere that I should define multiple "server {" sections, but that doesn't work correctly for SSL: doing that, the SSL would always be processed by the first virtual host, since the server name is unknown until you process the https traffic.
Scenario:
One IP address
One wildcard SSL certificate
Multiple backends which need to be accessed like the following:
https://one.mysite.com/ -> http://localhost:8080
https://two.mysite.com/ -> http://localhost:8090
Nginx says "if" is evil: http://wiki.nginx.org/IfIsEvil, but what else can I do?
I have tried this, but it doesn't work; I get a 500 error but nothing in the error logs.
server {
    listen 443;
    server_name *.mysite.com;

    ssl on;
    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        if ($server_name ~ "one.mysite.com") {
            proxy_pass http://localhost:8080;
        }
        if ($server_name ~ "two.mysite.com") {
            proxy_pass http://localhost:8090;
        }
    }
}
Has anyone managed to accomplish this with Nginx? Any help, alternatives, or links would be much appreciated.
I found the solution, which is basically to define the SSL options and the SSL certificate outside the "server" blocks:
ssl_certificate ssl/mysite.com.crt;
ssl_certificate_key ssl/mysite.com.key;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP;
ssl_prefer_server_ciphers on;

server {
    listen 80;
    server_name *.mysite.com;
    rewrite ^ https://$host$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name one.mysite.com;
    ssl on;
    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    server_name two.mysite.com;
    ssl on;
    location / {
        proxy_pass http://localhost:8090;
    }
}
Key things:
"ssl on;" is the only SSL directive that needs to stay inside the "server" blocks that listen for https. You could put it outside too, but that would make the "server" blocks that listen on port 80 speak the https protocol instead of the expected http.
Because "ssl_certificate", "ssl_ciphers" and the other "ssl_*" directives are outside the "server" blocks, Nginx does the SSL offloading without a server_name. Which is what it should do, as the SSL decryption cannot be based on any host name: at this stage the URL is still encrypted.
Java and curl no longer fail to work. There is no server_name / host mismatch.
The short answer is to use Server Name Indication. This should work by default in common browsers and cURL.
According to http://www.informit.com/articles/article.aspx?p=1994795, you should indeed have two "server" sections, with two different server names.
In each one, you should include your ssl_* directives.
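A sketch of that layout for the scenario in the question, with the wildcard certificate declared inside each block (paths as in the original config):

server {
    listen 443 ssl;
    server_name one.mysite.com;

    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    server_name two.mysite.com;

    ssl_certificate ssl/mysite.com.crt;
    ssl_certificate_key ssl/mysite.com.key;

    location / {
        proxy_pass http://localhost:8090;
    }
}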

Why does listen `443 default_server ssl` work for multiple server names in nginx?

I run nginx for static content and as a proxy to Apache/mod_wsgi serving django. I have example.com and test.example.com as proxy to Apache/Django and static.example.com which serves all static files directly through nginx. I have a wildcard SSL cert so that each of these sub-domains can use SSL (and I only have one IP).
Why is it that when I use listen 443 default_server ssl; in either test.example.com or example.com, SSL works for both, yet I have to explicitly listen on 443 for static.example.com?
ssl_certificate /etc/ssl/certs/example.chained.crt;
ssl_certificate_key /etc/ssl/private/example.key;

server {
    listen 80;
    listen 443;
    server_name static.example.com;
    # ... serves content ...
}

server {
    listen 80;
    listen 443 default_server ssl;
    server_name example.com;
    # ... proxy pass to http://example.com:8080 (apache) ...
}

server {
    listen 80;
    # why don't I need `listen 443;` here?
    server_name test.example.com;
    # ... proxy pass to http://test.example.com:8080 (apache) ...
}
The SSL protocol by itself (without the SNI extension) uses the IP address of the server to select the SSL certificate. With SNI it also passes the hostname (this doesn't work on Windows XP), but that shouldn't be relevant here.
Server blocks are not matched exactly; nginx picks the "closest" match. It may appear to "work", but the request may be ending up in the wrong server block. It's hard to tell without more information, such as the server root.
The point is that something will always respond, since you appear to be using a single IP address.
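If you want the matching to be explicit rather than relying on the default server, a minimal sketch is simply to add the HTTPS listener to every block (names and comments taken from the question):

server {
    listen 80;
    listen 443 ssl;   # explicit, so this block no longer falls through to the default server
    server_name test.example.com;
    # ... proxy pass to http://test.example.com:8080 (apache) ...
}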