Support for two-way TLS/HTTPS with ELB

One-way (or server-side) TLS/HTTPS with Amazon Elastic Load Balancing is well documented.
Support for two-way (or client-side) TLS/HTTPS is not as clear from the documentation.
Assuming ELB is terminating a TLS/HTTPS connection:
Does ELB support client-authenticated HTTPS connections?
If so, does a server behind ELB receive an X-Forwarded-* header identifying the client authenticated by ELB?
ELB does support TCP forwarding, so an EC2-hosted server can establish a two-way TLS/HTTPS connection itself, but in this case I am interested in ELB terminating the TLS/HTTPS connection and identifying the client.

I don't see how it could, in double-ended HTTPS mode, because the ELB is establishing a second TCP connection to the back-end server, and internally it's decrypting/encrypting the payload to/from the client and server... so the server wouldn't see the client certificate directly, and there are no documented X-Forwarded-* headers other than -For, -Proto, and -Port.
With an ELB running in TCP mode, on the other hand, the SSL negotiation is done directly between the client and server with ELB blindly tying the streams together. If the server supports the PROXY protocol, you could enable that functionality in the ELB so that you could identify the client's originating IP and port at the server, as well as identifying the client certificate directly because the client would be negotiating directly with you... though this means you are no longer offloading SSL to the ELB, which may be part of the point of what you are trying to do.
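As a sketch of how the PROXY protocol is switched on for a classic ELB (assuming the AWS CLI; the load balancer name "my-elb" is a placeholder): it is a backend policy applied to the instance port, not a listener setting.
aws elb create-load-balancer-policy \
    --load-balancer-name my-elb \
    --policy-name EnableProxyProtocol \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name my-elb \
    --instance-port 443 \
    --policy-names EnableProxyProtocol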
Update:
It doesn't look like there's a way to do everything you want to do -- offload SSL and identify the client certificate -- with ELB alone. The information below is presented "for what it's worth."
Apparently HAProxy has support for client-side certificates in version 1.5 and passes the certificate information in X- headers. HAProxy also supports the PROXY protocol via configuration (something along the lines of tcp-request connection expect-proxy), so it seems conceivable that you could use HAProxy behind a TCP-mode ELB, with HAProxy terminating the SSL connection and forwarding both the client IP/port information from ELB (via the PROXY protocol) and the client cert information to the application server... thus allowing you to still maintain SSL offload.
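A minimal HAProxy 1.5 sketch of that arrangement, with file paths, header names, and the backend address as illustrative assumptions rather than anything from the original answer:
frontend fe_https
    mode http
    # accept-proxy reads the PROXY protocol header prepended by the TCP-mode ELB;
    # verify required enforces a client certificate signed by ca.pem
    bind *:443 accept-proxy ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/ca.pem verify required
    # hand the verified client-certificate details to the application as headers
    http-request set-header X-SSL-Client-Verify %[ssl_c_verify]
    http-request set-header X-SSL-Client-S-DN   %{+Q}[ssl_c_s_dn]
    http-request set-header X-SSL-Client-Serial %{+Q}[ssl_c_serial,hex]
    default_backend app

backend app
    mode http
    server app1 10.0.0.10:8080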
I mention this because it seems to be a complementary solution, perhaps more feature-complete than either platform alone, and, at least in 1.4, the two products work flawlessly together -- I am using HAProxy 1.4 behind ELB successfully for all requests in my largest web platform (in my case, ELB is offloading the SSL -- there aren't client certs), and it seems to be a solid combination in spite of the apparent redundancy of cascaded load balancers. I like having ELB be the only thing out there on the big bad Internet, though I have no reason to think that directly-exposed HAProxy would be problematic on its own. In my application, the ELBs are there to balance between the HAProxies in the Availability Zones (which I had originally intended to also auto-scale, but CPU utilization stayed so low even during our busy season that I never had more than one per Availability Zone, and I've never lost one, yet...). The HAProxies can then do some filtering, forwarding, and munging of headers before delivering the traffic to the actual platform, in addition to giving me some logging, rewriting, and traffic-splitting control that I don't have with ELB on its own.

If your back end can handle client-authenticated HTTPS connections itself, you can configure the ELB listener as TCP on port 443 forwarding to TCP on whatever port your back end listens on. ELB then passes the still-encrypted traffic straight through to your back end without decrypting it, and this configuration doesn't require installing an SSL certificate on the load balancer.
Update: with this solution, the X-Forwarded-* headers are not set.
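As a sketch of that listener setup with the AWS CLI (the load balancer name, subnet, and back-end port 8443 are placeholders):
aws elb create-load-balancer \
    --load-balancer-name my-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=8443" \
    --subnets subnet-0123456789abcdef0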

You can switch to a single-instance environment on Elastic Beanstalk and use ebextensions to upload the certs and configure nginx for mutual TLS.
Example .ebextensions/setup.config:
files:
  "/etc/nginx/conf.d/00_elastic_beanstalk_ssl.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      server {
        listen 443;
        server_name example.com;
        ssl on;
        ssl_certificate /etc/nginx/conf.d/server.crt;
        ssl_certificate_key /etc/nginx/conf.d/server.key;
        ssl_client_certificate /etc/nginx/conf.d/ca.crt;
        ssl_verify_client on;
        gzip on;
        send_timeout 300s;
        client_body_timeout 300s;
        client_header_timeout 300s;
        keepalive_timeout 300s;
        location / {
          proxy_pass http://127.0.0.1:5000;
          proxy_http_version 1.1;
          proxy_set_header Connection "";
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-SSL-client-serial $ssl_client_serial;
          proxy_set_header X-SSL-client-s-dn $ssl_client_s_dn;
          proxy_set_header X-SSL-client-i-dn $ssl_client_i_dn;
          proxy_set_header X-SSL-client-session-id $ssl_session_id;
          proxy_set_header X-SSL-client-verify $ssl_client_verify;
          proxy_connect_timeout 300s;
          proxy_send_timeout 300s;
          proxy_read_timeout 300s;
        }
      }
  "/etc/nginx/conf.d/server.crt":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      MIJDkzCCAvygAwIBAgIJALrlDwddAmnYMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD
      ...
      LqGyLiCzbVtg97mcvqAmVcJ9TtUoabtzsRJt3fhbZ0KKIlzqkeZr+kmn8TqtMpGn
      r6oVDizulA==
      -----END CERTIFICATE-----
  "/etc/nginx/conf.d/server.key":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      MIJCXQIBAAKBgQCvnu08hroXwnbgsBOYOt+ipinBWNDZRtJHrH1Cbzu/j5KxyTWF
      ...
      f92RjCvuqdc17CYbjo9pmanaLGNSKf0rLx77WXu+BNCZ
      -----END RSA PRIVATE KEY-----
  "/etc/nginx/conf.d/ca.crt":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      MIJCizCCAfQCCQChmTtNzd2fhDANBgkqhkiG9w0BAQUFADCBiTELMAkGA1UEBhMC
      ...
      4nCavUiq9CxhCzLmT6o/74t4uCDHjB+2+sIxo2zbfQ==
      -----END CERTIFICATE-----
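Once the application bundle containing .ebextensions/ is deployed, a quick way to check the mutual-TLS setup (a sketch; client.crt, client.key, and the host name are placeholders) is:
# should succeed when a certificate signed by ca.crt is presented
curl --cert client.crt --key client.key https://example.com/
# should be rejected by nginx with "400 No required SSL certificate was sent"
curl https://example.com/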

Related

Pass authentication to the backend through NGINX using a user certificate and private key

My Go program's HTTP client can't work with the user certificate due to unsupported old TLS algorithms. I want to solve this with a reverse proxy using nginx. Here is a picture describing the intended scheme: https://ibb.co/Jxcy52G
[client] ----> [NGINX:80] ----(proxy pass using cert,privkey)----> [TOMCAT:8443]
https://TOMCAT:8443 requires authentication with a client certificate. I want to hide this fact from my app; the app must not be required to provide a client certificate. Instead, I would like NGINX to use a certificate that's stored on the server.
My nginx.conf config:
server {
    listen 80;
    # server_name _;

    location / {
        proxy_ssl_certificate "cert.pem";
        proxy_ssl_certificate_key "key.pem";
        proxy_ssl_server_name on;
        # proxy_ssl_verify off;
        proxy_ssl_name "iscs.telecomtest.ru";
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        proxy_pass https://195.11.xx.16:8443;
        proxy_buffering off;
    }
}
When I try to open the page http://my-nginx:80/, I get:
2019/06/18 15:16:22 [error] 25896#14556: *133 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 127.0.0.1, server: , request: "GET /? HTTP/1.1", upstream: "https://195.11.xx.16/?", host: "127.0.0.1:19101"
Thank you in advance.

Configure Nginx to forward client certificate to backend

I have a Spring Boot service configured for two-way SSL to verify clients using certificates. It is behind an nginx proxy server. The requirement is to configure nginx to provide a transparent HTTPS connection from the client and forward the client certificate to the web service (backend) for verification, and also to configure one-way SSL for other services that don't require client authentication.
Something like:
|Client| -->httpS + Client Cert--->|NGINX|--->httpS + Client Cert--->|Service 1|
|Client| ------------>httpS----------->|NGINX| ------------>http------------>|Service 2|
My nginx config:
server {
    listen 443;
    server_name xx.xx.xx.xxx;

    ssl on;
    ssl_certificate /path/to/server/cert.crt;
    ssl_certificate_key /path/to/server/key.key;
    ssl_client_certificate /path/to/ca.crt;
    ssl_verify_client optional;

    location /service1/ {
        proxy_pass https://service1:80/;
        # Config to forward client certificate or to forward the SSL connection (handshake) to service1
    }

    location /service2/ {
        proxy_pass http://service2:80/;
        # plain http connection
    }
}
Also, is there a way to get the common name from the certificate so the client can be verified and decisions taken in nginx? Checking against the CA alone is not enough.
Thanks.
This is not possible. What you are attempting to do is turn the nginx proxy into a "man in the middle", which TLS is designed to prevent: nginx terminates the client's TLS connection and cannot re-present the client's certificate on the upstream connection, because it does not hold the client's private key.
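What is usually done instead (a sketch that is not part of the original answer; header names and the CN handling are illustrative) is to terminate the client TLS at nginx and forward the certificate details to the backend as headers, the same pattern the Elastic Beanstalk example above uses:
server {
    listen 443 ssl;
    server_name xx.xx.xx.xxx;
    ssl_certificate /path/to/server/cert.crt;
    ssl_certificate_key /path/to/server/key.key;
    ssl_client_certificate /path/to/ca.crt;
    ssl_verify_client optional;

    location /service1/ {
        # reject requests whose certificate did not verify against the CA
        if ($ssl_client_verify != SUCCESS) { return 403; }
        proxy_pass https://service1:80/;
        # subject DN (which contains the CN) and verification result for the backend
        proxy_set_header X-SSL-Client-S-DN   $ssl_client_s_dn;
        proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
    }
}
The backend then trusts these headers instead of doing its own TLS client authentication, so this only helps if the Spring Boot service can be switched to header-based verification.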

Client-side SSL not working with AWS API Gateway

I generated a client-side SSL Certificate on API Gateway and added it to my nginx configuration as below:
listen *:443;
ssl on;
server_name api.xxxx.com;
ssl_certificate /etc/letsencrypt/live/api.xxxx.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.xxxx.com/privkey.pem;
ssl_verify_client on;
ssl_client_certificate /etc/nginx/ssl/awsapigateway.crt;
location /home/ubuntu/api {
    # if ($ssl_client_verify != SUCCESS) { return 403; }
    # proxy_pass http://my.http.public.endpoint.com;
    # proxy_set_header X-Client-Verify $ssl_client_verify;
}
The client certificate doesn't work after testing via the AWS API gateway test console. It ends up with Error 400 - No required SSL certificate was sent. API Gateway should be sending its client cert to my server with each request, so that I can validate that requests are genuinely coming from API Gateway.
I believe the reason it is not working is I am adding the PEM-encoded public key from the AWS API gateway console directly to awsapigateway.crt. Is that correct?
Additionally, does nginx support self-signed client SSL certificates, which is what AWS is providing us?
API Gateway team here.
It looks like the nginx configuration is correct. For our simple test case we use a Node server and simply write the PEM certificate from the console directly to the .crt file that is set as the CA, in this case the ssl_client_certificate.
I'd also test using the actual deployed API, in case the test function in the console has an issue. Make sure to use the stage settings to specify the certificate.
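A sketch of that step on the nginx host (the path is the one from the question; paste whatever PEM the API Gateway console shows between the markers):
sudo tee /etc/nginx/ssl/awsapigateway.crt > /dev/null <<'EOF'
-----BEGIN CERTIFICATE-----
...PEM body copied from the API Gateway console...
-----END CERTIFICATE-----
EOF
sudo nginx -t && sudo nginx -s reload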

Nginx proxy pass certificate authentication to MS IIS

Nginx 1.9.5 (Linux CentOS 7) --> MS IIS 8.5.
So I am trying to use nginx as a reverse proxy in front of IIS, where client certificate authentication is required at the IIS level:
nginx:443 ->> IIS:443 + client certificate authentication.
Example location proxy pass; the commented-out directives are ones I have also tried:
location ^~ /test/ {
    #proxy_buffering off;
    #proxy_http_version 1.0;
    #proxy_request_buffering off;
    #proxy_set_header Connection "Keep-Alive";
    #proxy_set_header X-SSL-CERT $ssl_client_cert;
    #proxy_ssl_name domain.lv;
    #proxy_ssl_trusted_certificate /etc/nginx/ssl/root/CA.pem;
    #proxy_ssl_verify_depth 2;
    proxy_set_header HOST domain.com;
    proxy_ssl_certificate /etc/nginx/ssl/test.pem;
    proxy_ssl_certificate_key /etc/nginx/ssl/test_key.pem;
    proxy_ssl_verify off;
    proxy_pass https://10.2.4.101/;
}
On the IIS side it is simple:
create a new website,
import the CA cert into the trusted roots,
set client SSL certificates to required.
Test results:
Browser directly to IIS with client cert required -- worked.
Nginx to another nginx with client cert required -- worked.
Nginx to IIS with client cert ignored -- worked.
Nginx to IIS with client cert required or accepted -- does NOT work.
ERROR:
Nginx side:
*4622 upstream timed out (110: Connection timed out) while reading response header from upstream
IIS side:
500 0 64 119971
So I hope someone knows why?
EDIT
1. Also tried from a different server with nginx 1.8; nothing helped:
proxy_ssl_verify off;
proxy_ssl_certificate /etc/nginx/ssl/test/test.pem;
proxy_ssl_certificate_key /etc/nginx/ssl/test/test_key.pem;
proxy_pass https://domain.com;
2. Tried the same with Apache 2.4; everything worked with:
SSLProxyEngine On
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
SSLProxyMachineCertificateFile /etc/httpd/ssl/test.pem
ProxyPass "/test" "https://domain.com"
Maybe it is something with SSL renegotiation in nginx?
Your hunch about TLS renegotiation is correct. Nginx has not allowed TLS renegotiation since version 0.8.23 (see http://nginx.org/en/CHANGES). However, by default IIS will use TLS renegotiation when requesting a client certificate. (I haven't been able to find the reasons for this - I would be grateful if someone could enlighten me!)
You can use a packet sniffer such as Wireshark to see this in action:
IIS and Nginx first perform a TLS handshake using the server certificate only.
Nginx requests the resource.
The resource requires client authentication, so IIS sends a 'Hello Request' message to Nginx to initiate TLS renegotiation.
Nginx doesn't respond to the Hello Request as TLS renegotiation has been disabled.
IIS then closes the connection as it gets no response. (See the section on renegotiation at https://technet.microsoft.com/en-us/library/cc783349(v=ws.10).aspx)
To solve this problem, you must force IIS to request a client certificate on the initial TLS handshake. You can do this using the netsh utility from powershell or the command line:
Open a powershell prompt with administrator rights.
Enter netsh
Enter http
Enter show sslcert. You should see a list of all current SSL bindings on your machine.
Make a note of the IP:port and certificate hash of the certificate that you want to enable client certificate negotiation for. We are now going to delete this binding and re-add it with the Negotiate Client Certificate property set to enabled. In this example, the IP:port is 0.0.0.0:44300 and the certificate hash is 71472159d7233d56bc90cea6d0c26f7a29db1112.
Enter delete sslcert ipport=[IP:port from above]
Enter add sslcert ipport=[IP:port from above] certhash=[certificate hash from above] appid={[any random GUID (can be the same one from the show sslcert output)]} certstorename=MY verifyclientcertrevocation=enable verifyrevocationwithcachedclientcertonly=disable clientcertnegotiation=enable (a worked example with these values appears after these steps)
You can now confirm that this has worked by running show sslcert again. You should see an almost identical output, but with Negotiate Client Certificate set to Enabled.
Note that this method only works for individual certificates - if you need to change or renew the certificate you will have to run these steps again. Of course, you should wrap these up in a batch script or MSI installer custom action for ease of deployment and maintenance.
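Using the example values from the steps above (the appid GUID is just an arbitrary placeholder), the commands inside the netsh http context would look something like:
delete sslcert ipport=0.0.0.0:44300
add sslcert ipport=0.0.0.0:44300 certhash=71472159d7233d56bc90cea6d0c26f7a29db1112 appid={12345678-90ab-cdef-1234-567890abcdef} certstorename=MY verifyclientcertrevocation=enable verifyrevocationwithcachedclientcertonly=disable clientcertnegotiation=enable
show sslcert ipport=0.0.0.0:44300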

How to pass a client certificate through two nginx servers?

I have reasons to use two nginx servers in front of the application server.
Both nginx servers use SSL connections.
Nginx1 (SSL 443 and ssl_verify_client on) -> Nginx2 (SSL 443) -> App (9000).
On the first Nginx1 server I use the option: proxy_set_header client_cert $ssl_client_cert;
On the second server Nginx2 I use the option: underscores_in_headers on;
The problem is that the second server (Nginx2) receives only the first line of the certificate: "-----BEGIN CERTIFICATE-----".
How to pass a client certificate to the application server?
Nginx terminates SSL without exception, so if you want this configuration anyway you will need a second SSL configuration and certificates kept on that server (here is a relevant SO answer), or, per an Nginx support discussion, use HAProxy in TCP mode. Here is a sample configuration article.
I found a workaround for proxying the client certificate: the map below splits the PEM into individual lines, and the header re-joins the base64 body (without the BEGIN/END lines and without newlines) into a single header value.
# NGINX1
...
map $ssl_client_raw_cert $a {
    "~^(-.*-\n)(?<1st>[^\n]+)\n((?<b>[^\n]+)\n)?((?<c>[^\n]+)\n)?((?<d>[^\n]+)\n)?((?<e>[^\n]+)\n)?((?<f>[^\n]+)\n)?((?<g>[^\n]+)\n)?((?<h>[^\n]+)\n)?((?<i>[^\n]+)\n)?((?<j>[^\n]+)\n)?((?<k>[^\n]+)\n)?((?<l>[^\n]+)\n)?((?<m>[^\n]+)\n)?((?<n>[^\n]+)\n)?((?<o>[^\n]+)\n)?((?<p>[^\n]+)\n)?((?<q>[^\n]+)\n)?((?<r>[^\n]+)\n)?((?<s>[^\n]+)\n)?((?<t>[^\n]+)\n)?((?<v>[^\n]+)\n)?((?<u>[^\n]+)\n)?((?<w>[^\n]+)\n)?((?<x>[^\n]+)\n)?((?<y>[^\n]+)\n)?((?<z>[^\n]+)\n)?((?<ab>[^\n]+)\n)?((?<ac>[^\n]+)\n)?((?<ad>[^\n]+)\n)?((?<ae>[^\n]+)\n)?((?<af>[^\n]+)\n)?((?<ag>[^\n]+)\n)?((?<ah>[^\n]+)\n)?((?<ai>[^\n]+)\n)?((?<aj>[^\n]+)\n)?((?<ak>[^\n]+)\n)?((?<al>[^\n]+)\n)?((?<am>[^\n]+)\n)?((?<an>[^\n]+)\n)?((?<ao>[^\n]+)\n)?((?<ap>[^\n]+)\n)?((?<aq>[^\n]+)\n)?((?<ar>[^\n]+)\n)?((?<as>[^\n]+)\n)?((?<at>[^\n]+)\n)?((?<av>[^\n]+)\n)?((?<au>[^\n]+)\n)?((?<aw>[^\n]+)\n)?((?<ax>[^\n]+)\n)?((?<ay>[^\n]+)\n)?((?<az>[^\n]+)\n)*(-.*-)$"
    $1st;
}

server {
    ...
    location / {
        ...
        proxy_set_header client_cert $a$b$c$d$e$f$g$h$i$j$k$l$m$n$o$p$q$r$s$t$v$u$w$x$y$z$ab$ac$ad$ae$af$ag$ah$ai$aj$ak$al$am$an$ao$ap$aq$ar$as$at$av$au$aw$ax$ay$az;
        ...
    }
    ...
}
# NGINX 2
server {
    ...
    underscores_in_headers on;
    ...
    location / {
        proxy_pass_request_headers on;
        proxy_pass http://app:9000/;
    }
    ...
}