I'm using nginx as the proxy server. My application has a feature where users can use their own domain instead of mine; to do so, they point a CNAME record at my domain.
This is my Nginx configuration:
server {
    server_name scan.mydomain.com anonymous.mydomain.com "";

    access_log /etc/nginx/log/local-wc.access.log;
    error_log /etc/nginx/log/local-wc.error.log;

    location / {
        root /var/www/html/qcg-scanning-frontend/dist/webapp/;
        index index.html;
        try_files $uri $uri/ /index.html;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/anonymous.mydomain.com-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/anonymous.mydomain.com-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = scan.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = anonymous.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name scan.mydomain.com anonymous.mydomain.com "";
    listen 80;
    return 404; # managed by Certbot
}
This configuration works fine when browsing with my own domains scan.mydomain.com and anonymous.mydomain.com, but any pointed domain such as new.example.com gets a 404 page (probably because of the return 404 statement).
For SSL, I'm using Let's Encrypt with Certbot.
How can I configure nginx to:
allow traffic from all CNAME-pointed domains to my server?
provide an SSL certificate for all of those domains?
I ended up using Caddy, which handles this far better than nginx and satisfies all of the requirements.
https://caddyserver.com/
Features of Caddy:
Support for third-party domains pointed via CNAME
JSON-based configuration
An API for managing the configuration
On-demand TLS (see the sketch below)
Serves SSL/TLS to all domains by default in production
No hassle installing and managing SSL certificates for the domains.
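For reference, a minimal Caddyfile sketch of on-demand TLS, assuming the application listens on localhost:3000 and that a hypothetical internal endpoint at http://localhost:5555/check approves which hostnames may receive certificates:

{
    # Global option: before issuing a certificate on demand, Caddy asks this
    # (hypothetical) endpoint whether the hostname is allowed; respond 200 to allow.
    on_demand_tls {
        ask http://localhost:5555/check
    }
}

# Catch-all HTTPS site: any domain whose CNAME/A record resolves to this server
# gets a certificate issued on the fly and is then proxied to the app.
https:// {
    tls {
        on_demand
    }
    reverse_proxy localhost:3000
}

The ask endpoint is what prevents strangers from pointing random domains at the server and burning through certificate issuance rate limits.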
Related
I am building a multi-tenant platform. My main URL is example.com, and every new user gets a subdomain at username.example.com; this is working. It runs on an Ubuntu droplet on DigitalOcean.
I want to go one step further and let users add custom domains that point to my app by creating an A record in their DNS. I got this working as well by setting things up manually and writing an additional server block for each custom domain. I started with Certbot for generating the certificates but then modified a lot of the configuration by hand.
Here is what my nginx file at /nginx/sites-available/example.com looks like:
server {
    server_name example.com *.example.com;

    # pass to NODEJS app running at :3000
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/example.com-0001/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com-0001/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    server_name customdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/customdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/customdomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
    listen 80;
    listen [::]:80;
    server_name ~^(?<subdomain>.+)\.example.com$;
    return 301 https://$subdomain.example.com$request_uri;
}
server {
    listen 80;
    listen [::]:80;
    server_name customdomain.com;
    return 301 https://customdomain.com$request_uri;
}
So my questions are:
Is there a way to do this automatically, i.e. get the certificate for a custom domain on the fly and route it to my Node.js app?
Should I be creating multiple files under sites-available instead of multiple server blocks in the same file?
Should I just put it in the default file instead?
The location block repeats in every server block; is it possible to do this in a more DRY way? (A sketch of one include-based approach follows below.)
I am very new to all this, so is there a better way to do a multi-tenant setup with SSL and custom domains?
Thank you.
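On the DRY question, one common pattern is to move the repeated location block into a snippet file and include it from each server block; a minimal sketch, assuming a hypothetical file /etc/nginx/snippets/node-proxy.conf:

# /etc/nginx/snippets/node-proxy.conf (hypothetical path)
# Shared proxy settings for the Node.js app on :3000
location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

Each server block then replaces its own location block with a single line: include /etc/nginx/snippets/node-proxy.conf;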
I've been trying to set up SSL for my websites to no avail. I'm using NGINX on Ubuntu 18.04 as a reverse proxy for two NodeJS Express web servers. I used Certbot following these instructions. However, when trying to access my site via HTTPS, I get a "Site can't be reached"/"Took too long to respond" error.
Here's what my NGINX config in /etc/nginx/sites-available looks like:
server {
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    server_name MYURL.com www.MYURL.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/MYURL.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/MYURL.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    access_log /var/log/nginx/MYURL.access.log;
    error_log /var/log/nginx/MYURL.error.log;

    client_max_body_size 50M;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://localhost:3001;
    }
}
When I replace the listen [::]:443 ssl and listen 443 ssl lines with listen 80; and try to access the site with HTTP, it works fine.
Any idea what the problem might be?
EDIT: Also, I feel I should mention that my UFW status has 22/tcp (LIMIT), OpenSSH (ALLOW), and Nginx Full (ALLOW), as well as their v6 counterparts.
It turns out the DigitalOcean cloud firewall was not allowing HTTPS connections. I allowed HTTPS there and switched proxy_pass https://localhost:3001; to http://, and everything works now!
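For completeness, a sketch of the adjusted location block under that setup, with TLS terminating at nginx and the Express server speaking plain HTTP on port 3001:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # TLS ends at nginx; the upstream Express app listens on plain HTTP.
    proxy_pass http://localhost:3001;
}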
Good day everyone,
I'm trying to publish my sample ASP.NET Core application on Ubuntu 16.04, with nginx as the proxy server.
My server has an SSL certificate provided by Let's Encrypt, and everything else is working properly. But when I try to reach the web application that is served on port 8080, it doesn't work, and the default nginx page still shows even though I already commented it out in the default file.
server {
    if ($host = www.mywebsite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = mywebsite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }

    server_name mywebsite.com www.mywebsite.com;
    return 404; # managed by Certbot
}
Full default file
(I've redacted the exact domain name for privacy.)
By the way, my real domain is working properly, and localhost:8080 responds properly from inside the server.
You have to declare your location block inside the server {} block that listens on 443, like this:
server {
    server_name mywebsite.com www.mywebsite.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mywebsite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mywebsite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Save your default file, then restart nginx:
sudo systemctl restart nginx
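Optionally, validate the configuration first so a typo doesn't take the site down (standard nginx/systemd commands):

sudo nginx -t && sudo systemctl restart nginx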
I have 3 servers: 1 nginx and 2 Apache web servers.
All traffic needs to go through nginx to the Apache servers:
nginx (192.168.1.100)
web1 (192.168.1.101)
web2 (192.168.1.201)
I am having difficulty passing "development" subdomains to the correct servers for clients whose domains are not yet pointed at my nginx server.
I have a root domain for the business, "mydomain.com", such that "web1.mydomain.com" should point directly to "web1" and "web2.mydomain.com" should point to "web2".
Further, if I add another sub-domain to the front of web1.mydomain.com or web2.mydomain.com, it should forward the request to the correct server.
Example: test.net.web1.mydomain.com should forward to web1 and be served by the test.net vhost on web1.
I have tried several server_name configurations but cannot get the subdomains to route correctly:
upstream web1 {
    server 192.168.1.101:80;
}

server_name web1.mydomain.com;
proxy_pass http://web1;

server_name *.web1.mydomain.com;
proxy_pass http://$1.web1;

server_name (.*?).web1.mydomain.com;
proxy_pass http://$1.web1;

server_name (.*?).web1.mydomain.com;
proxy_pass http://web1;

server_name .web1.mydomain.com;
proxy_pass http://web1;
Neither "web1.mydomain.com" or "test.net.web1.mydomain.com" will forward to the apache server. I either get a "This site can't be reached" or the default test page for nginx.
Also I have used mxtools and the domain web1.mydomain.com and web2.mydomain.com are pointed at the nginx server ip address.
current .conf file for web1:
upstream web1 {
    server 192.168.1.101:80;
}

server {
    listen 80;
    server_name .web1.mydomain.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto http;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_pass http://web1;
    }
}
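A minimal sketch of one direction for the nested dev subdomains, assuming *.web1.mydomain.com resolves to the nginx server and Apache on web1 has vhosts named after the client domains (e.g. test.net): keep a plain block for the exact hostname, and use a named regex capture to pass the client domain upstream as the Host header.

# Exact name: plain pass-through, as before.
server {
    listen 80;
    server_name web1.mydomain.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://web1;
    }
}

# Nested dev subdomains: capture everything before ".web1.mydomain.com" and send
# it upstream as the Host header so Apache's matching vhost (e.g. test.net) is
# selected. Note that a wildcard or ".web1.mydomain.com" server_name would take
# priority over this regex, which is why the block above uses the exact name only.
server {
    listen 80;
    server_name ~^(?<clientdomain>.+)\.web1\.mydomain\.com$;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $clientdomain;
        proxy_http_version 1.1;
        proxy_pass http://web1;
    }
}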
I have multiple sites on subdomains that I am trying to serve using nginx; some of the subdomains are served with SSL and some are not. I'm having trouble getting the non-SSL sites to serve properly: whenever I try to access them, they immediately redirect (with the correct host) to the HTTPS version. I've attached my server blocks below. I've read the nginx documentation on request processing (http://nginx.org/en/docs/http/request_processing.html) but can't figure out how to stop the unencrypted hosts from being redirected.
server {
    listen 80;
    server_name dev.example.ca dev.example.server2.example.tl;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/example/example/socket.sock;
    }

    location /static {
        autoindex on;
        alias /home/litobro/example/example/static/;
    }
}

server {
    listen 80;
    server_name dutyroster.example.ca;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/example2/dutyroster/socket.sock;
    }

    location /static {
        autoindex on;
        alias /home/example/example2/static/;
    }

    location /socket.io {
        proxy_pass http://unix:/home/example/example2/socket.sock;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}

server {
    listen 80;
    server_name ex3.server2.example.tl example.ca www.example.ca;

    location ~ .well-known/acme-challenge/ {
        root /var/www/letsencrypt;
        default_type text/plain;
    }

    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.ca www.example.ca example.server2.example.tl;

    ssl_certificate /etc/letsencrypt/live/example.ca/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.ca/privkey.pem;
    include snippets/ssl-params.conf;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/example/example3/socket.sock;
    }
}
I ordered the server blocks so that the port 80 requests would hopefully be matched first, but this still does not work. Thanks in advance for the help! (The server blocks are stripped of the actual domains, though I think I kept the replacements consistent.)
The cause of the problem is the HSTS header, which is set inside snippets/ssl-params.conf. This header tells the browser that the website will connect only over HTTPS. Here is an example of how this header is set with Nginx:
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
If the header value contains the includeSubDomains flag, as in the example above, then the HSTS policy also applies to all subdomains of the main domain. That is why your browser tried to send all requests to the subdomains over HTTPS.
Keep in mind that modern browsers store the list of HSTS websites in a special cache, so simply removing or modifying the header in Nginx may not have any immediate effect. You will need to clear the HSTS cache manually, in a manner specific to your browser.
It is also worth mentioning that the includeSubDomains flag is considered good practice, so keeping it and instead issuing certificates for your subdomains might be the better idea. There are several certificate authorities, such as Let's Encrypt, that provide free, easy-to-install certificates.
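For example, a hedged sketch of requesting a certificate that also covers the subdomains with Certbot, assuming you can create DNS TXT records for example.ca (wildcard certificates require the DNS-01 challenge):

sudo certbot certonly --manual --preferred-challenges dns \
    -d example.ca -d "*.example.ca"

The subdomain server blocks can then point their ssl_certificate and ssl_certificate_key directives at the resulting files.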
Sorry, I don't have much experience with nginx, but you have two server blocks at the end of your conf file. In the first one, listening on port 80, you have "return 301 https://$host$request_uri;"; is this line needed?
Try commenting out that line and see if your non-SSL server names still redirect to HTTPS.