LDAP as a Certificate Authority - ldap

I am working on a system on a closed network that has very limited access to the internet. We set up an nginx Docker container (config at the bottom) to handle SSL. The system has its own Certificate Authority, so we submitted a certificate signing request and got the signed cert back. When I put the cert and key into nginx, the cert was served, but with an ERR_CERT_AUTHORITY_INVALID error. When I look at the CA URI in the cert, it is an LDAP URL: ldap://{ldap stuff}
I can do an ldapsearch with the data from the cert's CA URI that returns the cACertificate attribute.
The cert has an LDAP string as its certificate authority, which is not something I have worked with before, so I think I have everything I need but am unsure how to put it together.
My best guess is that there is some issue with needing an LDAP client installed on the container running nginx to resolve the CA? Or do I have to use ldapsearch to fetch the other certs and install them?
server {
    listen 443 default_server ssl ipv6only=off;
    server_name some.thing.edu;

    ssl_certificate /etc/ssl/certs/sigend.crt;
    ssl_certificate_key /etc/ssl/private/private.key;

    location / {
        proxy_pass http://app_web;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
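
A useful first check for ERR_CERT_AUTHORITY_INVALID is to inspect the chain the server actually presents; a minimal sketch, using the server_name from the config above:

# Show every certificate nginx sends during the TLS handshake
openssl s_client -connect some.thing.edu:443 -showcerts </dev/null
# If only the leaf cert appears, the intermediate/CA certs are not being
# served and must be trusted on the client side instead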

I have worked through this; my misunderstanding was that the CA certs need to be trusted on the browser side, not validated on the server side.
However, we still had an issue because we were connecting via a VPN and did not have SSO set up on our machines. We got around this by using an ldapsearch to pull back the certs, then installing them directly into the key store and trusting them.
ldapsearch -x -D $adminDN -w $pass -h $ldapHost -b $base
The base variable was the value of the cert's Certificate Authority LDAP URI (without the ldap:// prefix and after some URL decoding).
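
For anyone repeating this, here is a rough sketch of the whole retrieval-and-trust step; the attribute name cACertificate comes from the search above, while the output file names and the update-ca-certificates step are assumptions that vary by OS/distro:

# Fetch only the CA certificate attribute; -t writes the binary (DER)
# value to a temp file and prints a file:// URL pointing at it
ldapsearch -x -D "$adminDN" -w "$pass" -h "$ldapHost" -b "$base" -t cACertificate
# Convert the dumped DER file (use the path printed above) to PEM
openssl x509 -inform der -in /tmp/ldapsearch-cACertificate-XXXXXX -out internal-ca.pem
# Trust it system-wide (Debian/Ubuntu layout; other systems differ)
sudo cp internal-ca.pem /usr/local/share/ca-certificates/internal-ca.crt
sudo update-ca-certificates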

Related

GoLang HTTPS API

I have a server on Windows using Plesk.
I have a domain (example.com), and I also bought an SSL certificate for this domain.
I installed and configured the domain and SSL on my server successfully, so now I can reach my site using https://www.example.com, or example.com, which gets redirected to https://....
Up to that point everything works fine.
But now I have developed an API in GoLang which can't start listening on port 443 for some reason (I thought maybe because that port is already in use?), so I changed it to port 8081. Now when I want to make a request to my API I have to use, for example, https://www.example.com:8081/api/v1/users.
The problem is that some applications show me a "Certificate invalid" error, which I think is because the port is not 443. Is there any way I can run Go on 443?
The code in Go is this (the crt and key are the ones provided by GoDaddy, where I bought the SSL):
func main() {
    router := NewRouter()
    handler := cors.AllowAll().Handler(router)
    // Serve TLS directly from Go on port 8081 with the purchased cert/key
    log.Fatal(http.ListenAndServeTLS(":8081", "tls.crt", "tls.key", handler))
}
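
Before anything else, it is worth confirming what already holds port 443; on this setup it is most likely Plesk's web server. A quick sketch:

# On Windows (this question's server), find the PID bound to 443
netstat -ano | findstr ":443"
# The Linux equivalent would be
sudo ss -ltnp | grep ':443'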
Run the whole Golang application behind nginx (reverse proxy):
Create a Virtual Host Server Block in Nginx using your domain.
https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04
Set up your SSL certs
Point that domain to your Golang App
server {
    server_name example.com;

    location / {
        proxy_pass http://localhost:8081;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass_request_headers on;
        proxy_read_timeout 150;
    }

    ssl_certificate /path/to/chainfile/example.com/abcd.pem;
    ssl_certificate_key /path/to/privatekeyfile/example.com/abcd.pem;

    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    }
}
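
As a side note, if the goal were to have the Go binary listen on 443 directly (only possible once nothing else holds the port), a non-root process on Linux would also need permission to bind a privileged port; a sketch, where /opt/api/app is a hypothetical path to the compiled binary:

# Allow a non-root binary to bind ports below 1024 (Linux only)
sudo setcap 'cap_net_bind_service=+ep' /opt/api/app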

Nginx proxy over https to server - different hostname on cert than Host header

I want to receive traffic at https://example.com on server 1, then proxy that traffic over HTTPS to server 2. Server 2 has nginx set up with the exact same TLS certificate and key as server 1, so it should theoretically be able to serve the requests. However, when nginx on server 1 tries to proxy a request to server 2, it sends it to server2.example.com, which differs from the common name on the cert, which is just example.com.
Is there a way to configure nginx to expect the name on the TLS cert offered by the host to which it is proxying (during the TLS handshake) to be different from the address of that host?
Example config on server 1:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /srv/tls/example.com.crt;
    ssl_certificate_key /srv/tls/example.com.key;

    location / {
        proxy_pass https://server2.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Example config on server 2:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /srv/tls/example.com.crt;
    ssl_certificate_key /srv/tls/example.com.key;

    location / {
        proxy_pass http://localhost:12345;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Example curl from server 1:
$ curl https://server2.example.com/chat -H "Host: example.com"
curl: (51) Unable to communicate securely with peer: requested domain name does not match the server's certificate.
If need be, I could generate a new self-signed cert and use that on server 2. However, I assumed it would be faster to just change the Nginx configuration. If the config change is not possible, I'll create a new cert.
You can use the proxy_ssl_name directive to specify the server name of the proxied host's certificate.
For example:
location / {
    proxy_pass https://server2.example.com;
    proxy_set_header Host $host;
    proxy_ssl_name $host;
    ...
}
See the nginx documentation for proxy_ssl_name for details.
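
To confirm the mismatch from server 1 before and after the change, an openssl probe helps; a sketch using this question's hosts (note that nginx only checks the upstream certificate against proxy_ssl_name when proxy_ssl_verify is enabled):

# See which certificate server 2 presents for a given SNI name
openssl s_client -connect server2.example.com:443 -servername example.com </dev/null \
    | openssl x509 -noout -subject
# The subject should now read CN=example.com, the name proxy_ssl_name expects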

NGINX ignore bad certificate and configuration and just run?

We have an app that uploads automatically generated SSL certificates to our nginx load balancers. One time a "bad certificate" got uploaded, an automated nginx reload was executed afterwards, and our server went offline for a while, causing DNS-not-found errors for our server domain and a huge downtime for our clients.
However, it is a feature of our application to let apps upload SSL certificates, which our backend server then installs automatically. Is there a way to tell nginx to ignore bad conf files and crt/key pairs altogether? Looking back at the logs, I remember seeing something like an SSL handshake error before the incident.
Here's what our main nginx-jelastic.conf looks like:
######## HTTP SECTION PROTOTYPE ########
http {
    server_tokens off;
    ### other settings hidden for simplicity
    include /etc/nginx/conf.d/*.conf;
}
######## TCP SECTION PROTOTYPE ########
So what I am thinking is: can nginx just ignore any bad conf files located there? Here's a sample of what gets uploaded into the conf.d folder:
#
# www.example-domain.com HTTPS server configuration
#
server {
    listen 443 ssl;
    server_name www.example-domain.com;

    ssl_certificate /var/lib/nginx/ssl/www.example-domain.com.crt;
    ssl_certificate_key /var/lib/nginx/ssl/www.example-domain.com.key;

    access_log /var/log/nginx/localhost.access_log main;
    error_log /var/log/nginx/localhost.error_log info;
    proxy_temp_path /var/nginx/tmp/;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    location / {
        set $upstream_name common;
        include conf.d/ssl.upstreams.inc;
        proxy_pass http://$upstream_name;
        proxy_next_upstream error;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Host $http_host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-URI $uri;
        proxy_set_header X-ARGS $args;
        proxy_set_header Referer $http_referer;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
For some reason the certificate and key indicated in an uploaded configuration could be wrong, and that wrecks the nginx server. Since our domain points to this server via an A record, it is a total disaster if nginx fails: DNS-not-found issues appear, and it can take 24-48 hours for DNS to recover.
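
A common safeguard for this kind of upload pipeline is to validate every change before reloading, and to verify that an uploaded cert and key actually match; a sketch using this question's paths:

# Validate the full configuration; reload only if the test passes
nginx -t && nginx -s reload
# Check that an uploaded certificate and key belong together:
# the two public key hashes must be identical
openssl x509 -noout -pubkey -in /var/lib/nginx/ssl/www.example-domain.com.crt | sha256sum
openssl pkey -pubout -in /var/lib/nginx/ssl/www.example-domain.com.key | sha256sum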

Nginx proxy_ssl_certificate not working as expected

I'm using nginx as a proxy to a backend server.
The backend server is also using nginx and enforcing client certificate authentication using the ssl_client_certificate and ssl_verify_client directives.
In my nginx server I set the following:
location /proxy {
    proxy_pass https://www.backend.com;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_ssl_certificate /etc/nginx/cert/client.crt;
    proxy_ssl_certificate_key /etc/nginx/cert/client.key;
}
according to the nginx docs.
However, the backend is still responding with a 400 response code: "No required SSL certificate was sent".
Note that when issuing requests to the backend server using wget with the client certificate, I get a valid 200 OK response:
wget --certificate=/etc/nginx/cert/client.crt --private-key=/etc/nginx/cert/client.key https://www.backend.com
What am I missing in my nginx configuration?
This post seemed to solve the problem for me.
I had to add the following setting to get things working:
proxy_ssl_server_name on;
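
For context, proxy_ssl_server_name on makes nginx send the upstream hostname via SNI during the proxied TLS handshake; without it, a name-based backend may terminate TLS in a different server block than the one enforcing ssl_verify_client. A sketch comparing the two handshakes against this question's backend:

# Handshake without SNI (roughly what nginx sent before the fix)
openssl s_client -connect www.backend.com:443 </dev/null
# Handshake with SNI (what it sends with proxy_ssl_server_name on)
openssl s_client -connect www.backend.com:443 -servername www.backend.com </dev/null
# If only the second requests a client certificate (look for
# "Acceptable client certificate CA names"), missing SNI was the problem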

nginx location directive: authentication happening in wrong location block?

I'm flummoxed.
I have a server that is primarily running CouchDB over SSL (using nginx to proxy the SSL connection) but also has to serve some Apache stuff.
Basically I want everything that DOESN'T start with /www to be sent to the CouchDB backend. If a URL DOES start with /www then it should be mapped to the local Apache server on port 8080.
My config below works, with the exception that I'm getting prompted for authentication on the /www paths as well. I'm a bit more used to configuring Apache than nginx, so I suspect I'm misunderstanding something, but if anyone can see what is wrong in my configuration (below) I'd be most grateful.
To clarify my use scenario:
https://my-domain.com/www/script.cgi should be proxied to
http://localhost:8080/script.cgi
https://my-domain.com/anythingelse should be proxied to
http://localhost:5984/anythingelse
ONLY the second should require authentication. It is the authentication issue that is causing problems; as I mentioned, I am being challenged on https://my-domain.com/www/anything as well :-(
Here's the config, thanks for any insight.
server {
    listen 443;
    ssl on;

    # Any url starting /www needs to be mapped to the root
    # of the back end application server on 8080
    location ^~ /www/ {
        proxy_pass http://localhost:8080/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else has to be sent to the couchdb server running on
    # port 5984 and for security, this is protected with auth_basic
    # authentication.
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /path-to-passwords;
        proxy_pass http://localhost:5984;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}
Maxim helpfully answered this for me by pointing out that the config was correct in other respects, and that browsers automatically requesting /favicon.ico (which matches location /, not /www/) would trigger the authentication prompt.
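
The favicon effect is easy to reproduce from the command line; a sketch against the hostname used above:

# The /www/ location proxies without credentials
curl -i https://my-domain.com/www/script.cgi
# The favicon a browser fetches automatically falls under "location /",
# so it answers 401 with WWW-Authenticate: Basic realm="Restricted"
curl -i https://my-domain.com/favicon.ico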