I have this nginx configuration
server {
listen 80;
server_name app.com www.app.com;
rewrite ^ https://$server_name$request_uri? permanent;
}
server {
listen 443;
server_name app.com www.app.com;
ssl on;
ssl_certificate /etc/nginx/ssl/app.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
location = /favicon.ico {
root /opt/myapp/app/programs/web.browser/app;
access_log off;
expires 1w;
}
location ~* "^/[a-z0-9]{40}\.(css|js)$" {
root /opt/myapp/app/programs/web.browser;
access_log off;
expires max;
}
location ~ "^/packages" {
root /opt/myapp/app/programs/web.browser;
access_log off;
}
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
and deployed it to EC2 using mup with the default settings.
The app is deployed and I can access the site at app.com.
But https://app.com is not working, even though the config file rewrites all requests to https.
What is happening here?
On one hand, I can access the site when I enter app.com, which suggests nginx is forwarding app.com to https://app.com.
On the other hand, I cannot access https://app.com directly, which suggests nginx is not working.
Which of these two scenarios is true?
I'm out of options. I checked with SSL checkers and they report that no SSL certificate is installed.
Then why does my app work when I enter app.com?
Now Meteor Up has the built in SSL Support. No more hard work.
Just add the SSL certificates and the key and do mup setup.
We use stud to terminate SSL
I'm not an NGINX expert, but looking at my working production configs I see a number of parameters you have not included in yours.
In particular you may need the following at the top in order to proxy websocket connections:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
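If you add that map, the Connection header in your proxy block would then reference the variable it defines instead of the hard-coded "upgrade", for example:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;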
My 443 server also includes the following in addition to what you already have:
server {
ssl_stapling on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
add_header Strict-Transport-Security "max-age=31536000;";
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
Finally, I would try commenting out your location directives for debugging. The issue should not be with your SSL certificate itself: even a self-signed or misconfigured certificate should still let you reach the site (with a browser warning). Hope this helps.
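For example, a stripped-down 443 block for testing, reusing only the certificate paths and proxy target from your own config, might look like this (just a debugging sketch, not a final config):
server {
    listen 443 ssl;
    server_name app.com www.app.com;
    ssl_certificate /etc/nginx/ssl/app.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        proxy_pass http://localhost:3000;
    }
}
If this minimal block works over https, add your location directives back one at a time to find the one that breaks it.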
Now this might be a very simple issue, but I can't seem to figure out how to get SSL to work with Nginx. I will list what I have done so far:
Used certbot to create a fullchain.pem and privkey.pem file
Added the following code to /etc/nginx/conf.d/pubgstats.info
server {
listen 80;
server_name pubgstats.info www.pubgstats.info;
location '/.well-known/acme-challenge' {
root /srv/www/pubg-stats;
}
location / {
proxy_pass http://localhost:4200;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /secure {
auth_pam "Secure zone";
auth_pam_service_name "nginx";
}
}
server {
listen 443;
ssl on;
ssl_certificate /srv/www/pubg-stats/certs/fullchain.pem;
ssl_certificate_key /srv/www/pubg-stats/certs/privkey.pem;
server_name pubgstats.info www.pubgstats.info;
location / {
root /srv/www/pubg-stats/;
}
}
From what I understand, the configuration listens on port 80 and upgrades an HTTP request to HTTPS. The code was mostly taken from this article, and I added the SSL part of the configuration as stated here. Now visiting the site over HTTP works, but on HTTPS the connection is reset. What am I missing in the configuration, and what's the best way to configure SSL with Nginx in this case?
I don't understand why you didn't add this to /etc/nginx/nginx.conf, but the issue appears to be that you've declared multiple server blocks for the same server. In that case, nginx will usually pick the first matching block, depending on several criteria.
With this configuration, nginx will serve SSL by default on this server block. If that is not what you want, remove default_server. You don't need ssl on, as that directive is now obsolete and has been replaced by the ssl parameter of the listen directive.
server {
listen 80;
listen 443 default_server ssl;
ssl_certificate /srv/www/pubg-stats/certs/fullchain.pem;
ssl_certificate_key /srv/www/pubg-stats/certs/privkey.pem;
server_name pubgstats.info www.pubgstats.info;
location '/.well-known/acme-challenge' {
root /srv/www/pubg-stats;
}
location / {
proxy_pass http://localhost:4200;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /secure {
auth_pam "Secure zone";
auth_pam_service_name "nginx";
}
}
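After merging the blocks, it is worth validating the configuration and reloading nginx; on a typical systemd-based install that would be something like:
sudo nginx -t
sudo systemctl reload nginx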
I am trying to configure SonarQube so that it works with SSL. I followed these instructions:
https://docs.sonarqube.org/latest/setup/operate-server/
Below is my configuration:
server {
listen 443 ssl;
root /opt/sonarqube/sonarqube-6.7.7/web/;
index index.html index.htm;
server_name sonar;
location / {
root /var/www/sonar;
proxy_pass http://localhost:9000;
}
}
I have tested my SSL certificate and it works fine with a website that I have created, but with Sonar it is not working.
Below is the error that I get in Firefox:
Error code: SSL_ERROR_RX_RECORD_TOO_LONG
Thank you for your answers. @Steffen Ullrich: you are right. Below is the working configuration:
server {
listen 9090 ssl;
ssl_certificate <CERT_NAME>.pem;
ssl_certificate_key <DOMAIN>.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ...;
ssl_dhparam <DHPARAM>.pem;
ssl_prefer_server_ciphers on;
server_name sonar;
location / {
proxy_pass http://localhost:9000;
proxy_redirect http://localhost:9000 https://<DOMAIN.net>:9090;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_request_buffering off;
}
}
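To double-check that the listener is now actually speaking TLS, an openssl probe against the new port can help (the domain is a placeholder, as above):
openssl s_client -connect <DOMAIN.net>:9090 -servername <DOMAIN.net>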
Thank you all for your help.
I plan to manage multiple websites on the same server, and I'm currently accepting the HTTP requests with nginx and then handing them off to Apache.
This is the configuration I currently have for my first website:
# Force HTTP requests to HTTPS
server {
listen 80;
server_name myfirstwebsite.net;
return 301 https://myfirstwebsite.ne$request_uri;
}
server {
listen 443 ssl;
root /var/opt/httpd/ifdocs;
server_name myfirstwebsite.ne ;
# add Strict-Transport-Security to prevent man in the middle attacks
add_header Strict-Transport-Security "max-age=31536000" always;
ssl on;
ssl_certificate /etc/pki/tls/certs/cert.pem;
ssl_certificate_key /etc/pki/tls/certs/cert.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
access_log /var/log/nginx/iflogs/http/access.log;
error_log /var/log/nginx/iflogs/http/error.log;
###include rewrites/default.conf;
index index.php index.html index.htm;
# Make nginx serve static files instead of Apache
# NOTE this will cause issues with bandwidth accounting as files wont be logged
location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
expires max;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass https://127.0.0.1:4433;
}
# proxy the PHP scripts to Apache listening on <serverIP>:8080
location ~ \.php$ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass https://127.0.0.1:4433;
}
location ~ /\. {
deny all;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
Now, my question is: for the second, third website and so on, I'm thinking of changing the line
proxy_pass https://127.0.0.1:4433;
to
proxy_pass https://secondwebsite.net:4433;
but what I don't want is for the request to go out to the internet, look up that DNS name, and come back to the same server; I want it served on the same server (which is why I used localhost:4433 for the first website), so that I don't run into latency issues.
Is there any solution for this?
Also, I want to know whether there will be issues if I serve multiple websites on the same port (in this case 4433), or whether I have to use a different port for each website.
Thank you in advance.
Multiple server confs
One way to do this would be to have multiple server blocks, ideally in separate conf files. Something like this would do for your second server in a new file (e.g. /etc/nginx/sites-available/mysecondwebsite):
# Force HTTP requests to HTTPS
server {
listen 80;
server_name mysecondwebsite.net;
access_log off; # No need for logging on this
error_log off;
return 301 https://mysecondwebsite.net$request_uri;
}
server {
listen 443 ssl;
root /var/opt/httpd/ifdocs;
server_name mysecondwebsite.net ;
# add Strict-Transport-Security to prevent man in the middle attacks
add_header Strict-Transport-Security "max-age=31536000" always;
ssl on;
ssl_certificate /etc/pki/tls/certs/cert.pem;
ssl_certificate_key /etc/pki/tls/certs/cert.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
access_log /var/log/nginx/iflogs/http/access.log;
error_log /var/log/nginx/iflogs/http/error.log;
###include rewrites/default.conf;
index index.php index.html index.htm;
# Make nginx serve static files instead of Apache
# NOTE this will cause issues with bandwidth accounting as files wont be logged
location ~* \.(gif|jpg|jpeg|png|wmv|avi|mpg|mpeg|mp4|htm|html|js|css)$ {
expires max;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass https://127.0.0.1:4434;
}
# proxy the PHP scripts to Apache listening on <serverIP>:8080
location ~ \.php$ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_pass https://127.0.0.1:4434;
}
location ~ /\. {
deny all;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
You would then create a symlink using ln -s /etc/nginx/sites-available/mysecondwebsite /etc/nginx/sites-enabled/ and restart nginx. To answer your question about ports, only one TCP application can listen on any single port. This post provides a few more details about that.
You could also define an upstream in your server block like so:
upstream mysecondwebsite {
server 127.0.0.1:4434; # Or whatever port you use
}
And then reference this upstream using proxy pass like so:
proxy_pass https://mysecondwebsite;
This way if you change the port, you will only have to change it in one place in your server conf. Also, this is how you would scale your application with multiple Apache servers and implement load balancing.
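For example, a load-balanced upstream might look like the following sketch; the second backend is hypothetical, and nginx round-robins between the listed servers by default:
upstream mysecondwebsite {
    server 127.0.0.1:4434;   # first Apache instance
    server 127.0.0.1:4435;   # hypothetical second Apache instance
}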
I have set up www.myapp.io, which connects to a MEAN-stack application hosted behind nginx. It works; now I want to add SSL to it. I have followed this link to secure it with Let's Encrypt.
However, after the configuration, https://www.myapp.io isn't working: "www.myapp.io redirected you too many times. ERR_TOO_MANY_REDIRECTS".
The following is /etc/nginx/sites-enabled/myapp.io; does anyone know what is wrong?
server {
listen 80;
server_name myapp.io www.myapp.io;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name myapp.io www.myapp.io;
ssl_certificate /etc/letsencrypt/live/myapp.io/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.io/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:EC$
ssl_session_timeout 1d;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
location ~ /.well-known {
allow all;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Accept-Encoding "";
proxy_set_header Proxy "";
proxy_pass https://127.0.0.1:3000;
}
}
(I did not put ssl_session_cache shared:SSL:50m;, because I already have ssl_session_cache shared:SSL:10m; in /etc/nginx/nginx.conf.)
The config file before adding ssl, which worked:
server {
listen 80;
server_name myopp.io *.myopp.io;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Accept-Encoding "";
proxy_set_header Proxy "";
proxy_pass http://127.0.0.1:3000;
}
}
PS: The site is managed via Cloudflare; at the moment, the SSL setting on Cloudflare is Flexible. I don't know if I need to change it.
As @dave_thompson_085 suggested in his comment, changing Flexible to Full in Cloudflare will make https://www.myapp.io reachable. With Flexible SSL, Cloudflare connects to the origin over plain HTTP on port 80, so the origin's 301 redirect to HTTPS fires on every request and the browser ends up in a redirect loop; with Full, Cloudflare connects to the origin over HTTPS on port 443 and the loop disappears.
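If you also want to confirm that the origin itself serves HTTPS correctly, independent of Cloudflare, a curl probe that pins the hostname to the origin IP can help (<origin-ip> is a placeholder for the server's public address):
curl -vk --resolve www.myapp.io:443:<origin-ip> https://www.myapp.io/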
I've been looking for a solution for this for quite a few hours already. I'm rather new to Nginx as well, so if someone could help me with a demo config, it would be superb.
1 public IP address (this is what's causing so much trouble)
Nginx as proxy
Exchange 2013
Current situation:
http: apps.domain.org, video.domain.org, geo.domain.org. Traffic on port 80 goes to the Nginx server.
https: mail.domain.org. Traffic on port 443 goes straight to Exchange 2013.
Now, we need https/SSL on our apps.domain.org.
Our firewall only checks the IP addresses and forwards traffic.
So basically, my idea is to have all traffic go to Nginx.
There, I need to detect what is meant for mail.domain.org and forward it to Exchange. Specifically, I need everything to work: OWA and autodiscover are OK, but I'm struggling with what seems to be RPC.
Someone mentioned I should use a stream config in Nginx to manage that.
But I don't know how to differentiate, so that only mail.domain.org goes through a stream block while apps.domain.org stays in an http config.
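For reference, my understanding of the suggested split is roughly the following sketch, using nginx's stream module with ssl_preread to route by SNI without terminating TLS; the backend addresses and the internal HTTPS port are placeholders, and the build needs the stream ssl_preread module:
stream {
    # route incoming TLS connections by the SNI name in the ClientHello
    map $ssl_preread_server_name $backend {
        mail.domain.org  exchange;     # Exchange 2013 keeps handling its own TLS
        default          local_https;  # everything else goes to the local http{} server
    }

    upstream exchange    { server 192.0.2.10:443; }  # placeholder Exchange address
    upstream local_https { server 127.0.0.1:8443; }  # placeholder internal port for the http{} server

    server {
        listen 443;
        ssl_preread on;       # inspect SNI without decrypting
        proxy_pass $backend;
    }
}
The http{} server blocks for apps.domain.org and the other sites would then listen on the internal port (8443 ssl in this sketch) instead of 443.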
My current config (thanks to the links below, but in particular tigunov's comment about getting Outlook Anywhere, a.k.a. RPC, to work) gets me further than before. It currently fails at the FolderSync attempt when I try Microsoft's Remote Connectivity Analyzer, and in Outlook the credentials box still pops up.
server {
(server_name , SSL-certs etc)
# Set global proxy settings
proxy_pass_header Date;
proxy_pass_header Server;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Accept-Encoding "";
keepalive_timeout 3h;
proxy_read_timeout 3h;
#reset_timedout_connection on;
tcp_nodelay on;
client_max_body_size 3G;
#proxy_pass_header Authorization;
proxy_pass_request_headers on;
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_buffering off;
proxy_set_header Connection "Keep-Alive";
}
The test now reports everything as fine, including ActiveSync (OPTIONS), but:
Attempting the FolderSync command on the Exchange ActiveSync session.
The test of the FolderSync command failed.
Exception details:
Message: The request was aborted: The request was canceled.
Type: System.Net.WebException
Stack trace:
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Exchange.Tools.ExRca.Extensions.RcaHttpRequest.GetResponse()
Elapsed Time: 526 ms.
No further details to be seen in the connectivity tool.
This configuration is based on Tad DeVries' configuration found here and Daniel Kempkens' fix for autodiscover and RPC issues found here.
Note that since I don't have an Exchange environment to test against, I'm not sure if this configuration will work properly, but it's worth a try.
server {
listen 80;
#listen [::]:80;
server_name mail.gwtest.us autodiscover.gwtest.us;
return 301 https://$host$request_uri;
}
server {
listen 443;
#listen [::]:443 ipv6only=on;
ssl on;
ssl_certificate /etc/ssl/nginx/mail.gwtest.us.crt;
ssl_certificate_key /etc/ssl/nginx/mail.gwtest.us.open.key;
ssl_session_timeout 5m;
server_name mail.gwtest.us;
location / {
return 301 https://mail.gwtest.us/owa;
}
proxy_http_version 1.1;
proxy_read_timeout 360;
proxy_pass_header Date;
proxy_pass_header Server;
proxy_pass_header Authorization;
proxy_set_header Accept-Encoding "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
more_set_input_headers 'Authorization: $http_authorization';
more_set_headers -s 401 'WWW-Authenticate: Basic realm="exch1.test.local"';
location ~* ^/owa { proxy_pass https://exch1.test.local; }
location ~* ^/Microsoft-Server-ActiveSync { proxy_pass https://exch1.test.local; }
location ~* ^/ecp { proxy_pass https://exch1.test.local; }
location ~* ^/rpc { proxy_pass https://exch1.test.local; }
#location ~* ^/mailarchiver { proxy_pass https://mailarchiver.local; }
error_log /var/log/nginx/owa-ssl-error.log;
access_log /var/log/nginx/owa-ssl-access.log;
}