Nginx proxy_ssl_certificate not working as expected

I'm using nginx as a proxy to a backend server.
The backend server is also using nginx and enforcing client certificate authentication using the ssl_client_certificate and ssl_verify_client directives.
In my nginx server, I set the following according to the nginx docs:
location /proxy {
    proxy_pass https://www.backend.com;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_ssl_certificate /etc/nginx/cert/client.crt;
    proxy_ssl_certificate_key /etc/nginx/cert/client.key;
}
However, the backend still responds with a 400 response code: "No required SSL certificate was sent".
Note that when issuing requests to the backend server using wget with the client certificate, I get a valid 200 OK response:
wget --certificate=/etc/nginx/cert/client.crt --private-key=/etc/nginx/cert/client.key https://www.backend.com
What am I missing in my nginx configuration?

This post seemed to solve the problem for me.
I had to add the following setting to get things working:
proxy_ssl_server_name on;
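In case it helps others: proxy_ssl_server_name on; makes nginx pass the upstream host name to the backend via SNI during the TLS handshake. Without it, a name-based backend can end up handshaking against a default vhost that never asks for the client certificate. A sketch of the full working location block, using the paths from the question:

location /proxy {
    proxy_pass https://www.backend.com;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # client certificate presented to the upstream
    proxy_ssl_certificate /etc/nginx/cert/client.crt;
    proxy_ssl_certificate_key /etc/nginx/cert/client.key;

    # send the upstream host name via SNI during the TLS handshake
    proxy_ssl_server_name on;
}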

Related

LDAP as a Certificate authority

I am working on a system on a closed network that has very limited access to the internet. We set up an nginx Docker container (config at the bottom) to handle SSL. The system has its own Certificate Authority, so we submitted a certificate signing request and got it back from them. When I put the cert and key into nginx, the cert was served, but the browser reported ERR_CERT_AUTHORITY_INVALID. When I look at the CA URI, it is an LDAP URL: ldap://{ldap stuff}
I can do an LDAP search, with the data from the cert's CA URI, that returns the cACertificate attribute.
The cert has an LDAP string as the certificate authority, which is not something I have worked with before; I think I have all the pieces I need, but I am unsure how to put them together.
My best guess is that there is some issue with needing an LDAP client installed in the Docker container running nginx to resolve the CA? Or do I have to use an LDAP search to get the other certs and install them?
server {
    listen 443 default_server ssl ipv6only=off;
    server_name some.thing.edu;
    ssl_certificate /etc/ssl/certs/sigend.crt;
    ssl_certificate_key /etc/ssl/private/private.key;
    location / {
        proxy_pass http://app_web;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
I have worked through this; my misunderstanding was that the CA certs need to be trusted on the browser side, not validated on the server side.
However, we still had an issue because we were connecting via a VPN and did not have SSO set up on our machines. We got around this by using an LDAP search to pull back the certs, then installing them directly into the key store and trusting them.
ldapsearch -x -D $adminDN -w $pass -h $ldapHost -b $base
The base variable was the value of the Certificate Authority LDAP URI (without the ldap:// prefix and with some URL decoding).
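For anyone following along, a minimal sketch of that extraction step, assuming the cACertificate attribute comes back base64-encoded in LDIF (the attribute name is from the answer above; the openssl conversion is my assumption, not part of the original post):

# fetch only the cACertificate attribute, with LDIF line-wrapping disabled
ldapsearch -x -D "$adminDN" -w "$pass" -h "$ldapHost" -b "$base" \
    -o ldif-wrap=no cACertificate > ca.ldif

# binary attributes come back as "cACertificate:: <base64>";
# decode to DER, then convert to PEM for the trust store / browser import
grep '^cACertificate:: ' ca.ldif | sed 's/^cACertificate:: //' | base64 -d > ca.der
openssl x509 -inform DER -in ca.der -out ca.pem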

Unable to make NginX load balancing work

I am new to the nginx config.
I am trying to do a load balancing example with nginx and a WCF REST service on the Windows platform.
Here is what I have in my conf/nginx.conf file:
upstream servers_customserver {
    server 127.0.0.1:62133;
    server 127.0.0.1:64897;
    server 127.0.0.1:64921;
}
server {
    listen 8070;
    location /test {
        proxy_pass http://servers_customserver/;
    }
}
My intent is that whenever a request URL contains "/test", it is proxied to one of the servers in servers_customserver.
Nginx is fine at localhost:8070.
But whenever I hit localhost:8070/test, I get "404 Not Found nginx/1.12.0" in the browser. I am sure that my services are up.
Do I need to host my services in IIS or another web server to make this work?
Could someone guide me in solving this error?
Thanks.
Luckily, after adding the following directives to the location block, the load balancing works for me.
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
proxy_redirect off;
Thanks.
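Putting the question's config together with this fix, the working location block looks something like the sketch below. The Host header is usually the decisive one: IIS/WCF site bindings route by host name, and without it the backend can answer with a 404.

location /test {
    proxy_pass http://servers_customserver/;

    # forward the original request details to the WCF services
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_redirect off;
}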

NGINX ignore bad certificate and configuration and just run?

We have an app that uploads automatically generated SSL certificates to our NGINX load balancers. One time a "bad certificate" got uploaded and an automated nginx reload was executed right after; our server went offline for a while, causing DNS issues (DNS not found) for our server domain and a huge downtime for our clients.
However, it is a feature of our application to let apps upload SSL certificates, which our backend server then installs automatically. Is there a way to tell NGINX to ignore bad conf files and crt/key pairs altogether? Looking at the logs from before the incident, I remember seeing something like an SSL handshake error.
Here's what our main nginx-jelastic.conf looks like:
######## HTTP SECTION PROTOTYPE ########
http {
    server_tokens off;
    ### other settings hidden for simplicity
    include /etc/nginx/conf.d/*.conf;
}
######## TCP SECTION PROTOTYPE ########
So what I am wondering is whether it's possible for nginx to just ignore any bad conf files located there. Here's a sample of what gets uploaded to the conf.d folder:
#
# www.example-domain.com HTTPS server configuration
#
server {
    listen 443 ssl;
    server_name www.example-domain.com;
    ssl_certificate /var/lib/nginx/ssl/www.example-domain.com.crt;
    ssl_certificate_key /var/lib/nginx/ssl/www.example-domain.com.key;
    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;
    proxy_temp_path /var/nginx/tmp/;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
    location / {
        set $upstream_name common;
        include conf.d/ssl.upstreams.inc;
        proxy_pass http://$upstream_name;
        proxy_next_upstream error;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Host $http_host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-URI $uri;
        proxy_set_header X-ARGS $args;
        proxy_set_header Refer $http_refer;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
For some reason the certificate and key indicated in the configuration could be wrong, and that is going to wreck the nginx server; since our domain points to this server via an A record, it is a total disaster if nginx fails, as DNS issues occur and it can take 24-48 hours for DNS to come back.
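For what it's worth, nginx has no option to skip over a broken conf, crt, or key file; the configuration parses as a whole or not at all. The usual safeguard (a sketch of the general practice, not something from this thread) is to gate the reload on a config test, so a bad upload leaves the running server untouched:

# validate everything, including files pulled in by include directives;
# reload only if the test passes
nginx -t && nginx -s reload

# or, on a systemd host:
# nginx -t && systemctl reload nginx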

Gitlab behind Nginx and HTTPS -> insecure or bad gateway

I'm running Gitlab behind my Nginx.
Server 1 (reverse proxy): Nginx with HTTPS enabled and the following config for /git:
location ^~ /git/ {
    proxy_pass http://134.103.176.101:80;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Ssl on;
}
If I don't change anything in my GitLab settings, this works but is not secure, because of external HTTP requests like
'http://www.gravatar.com/avatar/c1ca2b6e2cd20fda9d215fe429335e0e?s=120&d=identicon' ("This content should also be served over HTTPS").
So if I change the GitLab config on the hidden server 2 (HTTP GitLab):
external_url 'https://myurl'
nginx['listen_https'] = false
as described in the docs, I get a 502 Bad Gateway error with no page loaded.
What can I do?
EDIT: Hacked around it by setting:
gitlab_rails['gravatar_plain_url'] = 'https://www.gravatar.com/avatar/%{hash}?s=%{size}&d=identicon'
This works, but is not a clean solution (the clone URL is still http://).
I run a similar setup and I ran into this problem as well. According to the docs:
By default, when you specify an external_url starting with 'https', Nginx will no longer listen for unencrypted HTTP traffic on port 80.
I see that you are forwarding your traffic over HTTP on port 80, but telling GitLab to use an HTTPS external URL. In this case, you need to set the listening port:
nginx['listen_port'] = 80 # or whatever port you're using.
Also, remember to reload the gitlab configuration after making changes to gitlab.rb. You do that with this command:
sudo gitlab-ctl reconfigure
For reference, here is how I do the redirect:
Nginx config on the reverse proxy server:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_pass http://SERVER_2_IP:8888;
}
The GitLab config file, gitlab.rb, on the GitLab server:
external_url 'https://gitlab.domain.com'
nginx['listen_addresses'] = ['SERVER_2_IP']
nginx['listen_port'] = 8888
nginx['listen_https'] = false

nginx location directive: authentication happening in wrong location block?

I'm flummoxed.
I have a server that is primarily running couchdb over ssl (using nginx to proxy the ssl connection) but also has to serve some apache stuff.
Basically I want everything that DOESN'T start with /www to be sent to the couchdb backend. If a URL DOES start with /www, it should be mapped to the local Apache server on port 8080.
My config below works, with the exception that I'm getting prompted for authentication on the /www paths as well. I'm a bit more used to configuring Apache than nginx, so I suspect I'm misunderstanding something, but if anyone can see what is wrong in my configuration (below) I'd be most grateful.
To clarify my use scenario:
https://my-domain.com/www/script.cgi should be proxied to http://localhost:8080/script.cgi
https://my-domain.com/anythingelse should be proxied to http://localhost:5984/anythingelse
ONLY the second should require authentication. It is the authentication issue that is causing problems; as I mentioned, I am being challenged on https://my-domain.com/www/anything as well :-(
Here's the config, thanks for any insight.
server {
    listen 443;
    ssl on;

    # Any url starting /www needs to be mapped to the root
    # of the back end application server on 8080
    location ^~ /www/ {
        proxy_pass http://localhost:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else has to be sent to the couchdb server running on
    # port 5984 and for security, this is protected with auth_basic
    # authentication.
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /path-to-passwords;
        proxy_pass http://localhost:5984;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
Maxim helpfully answered this for me by mentioning that browsers accessing the favicon would trigger this behaviour and that the config was correct in other respects.
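In other words, the browser's automatic request for /favicon.ico does not match location ^~ /www/, falls through to location /, and triggers the auth prompt. One way to silence it, as a small sketch of my own (not from the thread):

# answer the favicon request directly so it never hits the
# auth_basic-protected location /
location = /favicon.ico {
    auth_basic off;
    access_log off;
    return 204;
}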