aws api gateway client-side ssl certificate verification with nginx - ssl

Similar to this other question here, I'm attempting to verify SSL client certificates with nginx that have been sent via AWS API Gateway.
I noticed that, per the documentation, AWS API Gateway only sends the client certificate along with HTTP requests. Does this mean that HTTPS should not be configured?
Unlike the question linked above, the domain that nginx is hosted on does not have HTTPS certificates set up.
Any help, or a link to a working configuration using ssl_verify_client without ssl configured for the domain would be greatly appreciated.
Here is the nginx configuration I'm working with currently:
daemon off;

events {
    worker_connections 4096;
}

http {
    server {
        listen 2345 default_server;

        ssl_trusted_certificate /certs/api-gateway.crt;
        ssl_client_certificate /certs/api-gateway.crt;
        ssl_verify_client on;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
        ssl_prefer_server_ciphers on;

        location /ping {
            proxy_pass http://my.http.public.endpoint.com;
        }

        location / {
            if ($ssl_client_verify != SUCCESS) { return 403; }
            proxy_pass http://my.http.public.endpoint.com;
            proxy_set_header X-Client-Verify $ssl_client_verify;
        }
    }
}

You're misinterpreting the docs, though the reason is easily understandable.
API Gateway will use the certificate for all calls to HTTP integrations in your API.
The phrase to parse is "HTTP integrations" -- as opposed to Lambda or AWS Service proxy -- not "HTTP" as in "HTTP without SSL". They're using "HTTP" in a generic sense to describe a type, not the specific details of the transport.
SSL client certificates do not work without HTTPS, and won't work without an SSL certificate configured on the server.
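For illustration, a minimal sketch of what the server block would need once a server certificate is in place (the server certificate paths below are placeholders, not values from the question):
server {
    # Client certificate verification only happens inside a TLS handshake,
    # so the server needs its own certificate and key.
    listen 2345 ssl default_server;
    ssl_certificate        /certs/my-domain.crt;   # placeholder, not from the question
    ssl_certificate_key    /certs/my-domain.key;   # placeholder, not from the question

    # Trust the certificate that API Gateway presents as a client certificate.
    ssl_client_certificate /certs/api-gateway.crt;
    ssl_verify_client      on;

    location / {
        proxy_pass http://my.http.public.endpoint.com;
        proxy_set_header X-Client-Verify $ssl_client_verify;
    }
}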

Related

Can you terminate SSL to serve custom error pages, but re-encrypt before passing to the target server?

I have a development server that's down a lot, and I'm trying to use my stable static web server to provide custom error pages for connections that error out. However, I don't feel comfortable leaving clear-text communication between the proxy/load-balancer and the dev server. How can I (if at all) decrypt and re-encrypt communications between client and proxy and between proxy and dev server, while intercepting any error responses?
I have a sample config, but I'm pretty sure I'm misunderstanding it.
server {
    listen 443;
    #send to the dev server
    proxy_pass 192.168.1.2:443;

    #decrypt downstream ssl
    ssl_certificate /etc/ssl/certs/frontend.crt;
    ssl_certificate_key /etc/ssl/certs/frontend.key;

    #Serve custom error page
    error_page 500 502 503 504 /custom_50x.html;
    location = /custom_50x.html {
        root /var/www/errors/html;
        internal;
    }

    #Encrypt upstream communication to the dev server
    proxy_ssl on;
    proxy_ssl_certificate /etc/ssl/certs/backend.crt;
    proxy_ssl_certificate_key /etc/ssl/certs/backend.key;
}
The Nginx http server cannot pass through SSL connections (AFAIK), so you must terminate SSL at this server. An upstream SSL connection is established by using https:// in the proxy_pass statement. See this document for details.
For example:
server {
    listen 443 ssl;

    #decrypt downstream ssl
    ssl_certificate /etc/ssl/certs/frontend.crt;
    ssl_certificate_key /etc/ssl/certs/frontend.key;

    location / {
        #send to the dev server
        proxy_pass https://192.168.1.2;

        # Using `https` with an IP address, you will need to provide
        # the correct hostname and certificate name to the upstream
        # server. Use `$host` if it's the same name as this server.
        proxy_set_header Host $host;
        proxy_ssl_server_name on;
        proxy_ssl_name $host;
    }

    #Serve custom error page
    error_page 500 502 503 504 /custom_50x.html;
    location = /custom_50x.html {
        root /var/www/errors/html;
        internal;
    }
}
The proxy_ssl directive relates to the stream server only. The proxy_ssl_certificate directives relate to client certificate authentication, which you may or may not require. Also, you were missing an ssl suffix on the listen statement. See this document for more.
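If the dev server does require a client certificate from the proxy, a hedged sketch of how those directives would sit inside the same location (the CA bundle path is hypothetical, the backend.crt/backend.key names are reused from the question):
location / {
    proxy_pass https://192.168.1.2;
    proxy_set_header Host $host;
    proxy_ssl_server_name on;
    proxy_ssl_name $host;

    # Only needed if the dev server demands a client certificate from the proxy.
    proxy_ssl_certificate         /etc/ssl/certs/backend.crt;
    proxy_ssl_certificate_key     /etc/ssl/certs/backend.key;

    # Optionally verify the dev server's own certificate as well.
    proxy_ssl_trusted_certificate /etc/ssl/certs/dev-server-ca.crt;   # hypothetical CA bundle
    proxy_ssl_verify              on;
}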

NGINX SSL Forward Proxy Config

I know that NGINX is not supposed to be used as a forward proxy, but I have a requirement to do so ... Anyway, obviously it is not too hard to get HTTP to work as a forward proxy, but issues arise when trying to configure HTTPS. I generated some self-signed certs and then tried to connect to https://www.google.com, and it gives me the error ERR_TUNNEL_CONNECTION_FAILED. The issue has to do with my certs somehow, but I have no idea how to fix it. Does anyone know how to achieve this functionality?
Here is my config
server {
    listen 443 ssl;
    root /data/www;

    ssl on;
    ssl_certificate /etc/ssl/certs/server.crt;
    ssl_certificate_key /etc/ssl/certs/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;

    location / {
        resolver 8.8.8.8;
        proxy_pass https://$http_host$uri$is_args$args;
    }
}
The reason NGINX does not support HTTPS forward proxying is that it doesn't support the CONNECT method. However, if you are interested in using it as an HTTPS forwarding proxy, you can use the ngx_http_proxy_connect_module.
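Assuming that third-party module is compiled in, a minimal sketch along the lines of its documented example (the listening port here is illustrative):
server {
    listen 3128;                          # illustrative proxy port, not from the question

    resolver 8.8.8.8;

    # Directives provided by ngx_http_proxy_connect_module
    proxy_connect;
    proxy_connect_allow            443 563;
    proxy_connect_connect_timeout  10s;
    proxy_connect_read_timeout     10s;
    proxy_connect_send_timeout     10s;

    # Plain-HTTP requests are still forwarded the usual way
    location / {
        proxy_pass http://$host;
        proxy_set_header Host $host;
    }
}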
I was able to configure SSL/TLS forward proxying with this configuration, using the stream module.
stream {
    upstream web_server {
        server my_server_listening_on:443;
    }

    server {
        listen 443;
        proxy_pass web_server;
    }
}
Resources:
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru
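As a further sketch, not from the original answer: the stream module's ssl_preread can route by SNI instead of a fixed upstream, which behaves more like a generic TLS pass-through proxy (assuming nginx was built with ngx_stream_ssl_preread_module):
stream {
    resolver 8.8.8.8;

    server {
        listen 443;
        # Inspect the TLS ClientHello without terminating TLS
        ssl_preread on;
        # Route the connection to whatever host the client named via SNI
        proxy_pass $ssl_preread_server_name:443;
    }
}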

Secure TCP traffic to backend server with nginx

I have a web app consisting of front-end and back-end services. I want to secure my front-end service with a Let's Encrypt certificate, but then I have to use a secured connection between front-end and back-end. The back-end service is served on a custom port. To secure the back-end, I want to use nginx to proxy my server. However, I am struggling to get it right. Here is my nginx configuration:
server {
    listen 8082;
    server_name <my_domain_name>;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/<my_domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my_domain>/privkey.pem;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 SSLv3;

    location / {
        proxy_pass http://0.0.0.0:8081;
    }
}
First, I just wanted to get it through without SSL, but it does not work like this; nothing is served on 8082. If that works, I thought I could use my Let's Encrypt certificates here, though I'm not sure whether that is possible and whether I understand things correctly.
I would appreciate any help! Thanks a lot in advance!
Update
I figured out the problem was in iptables. After I added port 8082 to the rules, it worked. What I don't understand is why I can connect to port 8081, although it is not in the iptables rules.
However, now I get ERR_SSL_PROTOCOL_ERROR when I try https://my_domain:8082.
I also tried to add ssl to the listen directive, like listen 8082 ssl;. Then I get ERR_CONNECTION_RESET.
Just for the record: the problem was indeed in the listen directive.
Adding
listen 8082 ssl;
and removing
ssl on;
solved it.
It is a mystery why it didn't work and gave me ERR_CONNECTION_RESET before, but now it works.
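Putting the fix together, a minimal sketch of the resulting server block (the protocol list and back-end address are assumptions; everything else is from the question):
server {
    listen 8082 ssl;                      # 'ssl' on the listen directive instead of 'ssl on;'
    server_name <my_domain_name>;

    ssl_certificate     /etc/letsencrypt/live/<my_domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<my_domain>/privkey.pem;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_protocols       TLSv1.2;          # dropping SSLv3 is an assumption, not from the question

    location / {
        proxy_pass http://127.0.0.1:8081; # assuming the back-end listens locally on 8081
    }
}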
location @backend {
    proxy_pass http://backend;
}
@backend is a named location (note the @ prefix), which lets you reference it like a variable, e.g.:
location / {
    error_page 404 = @backend;
}
For your problem, try something like:
location / {
    proxy_pass http://backend;
}

Different ssl_verify_client for different SSL ports in NGINX not working

I have a setup in nginx where, for different ports, I need different SSL client verify options.
When I connect to :443/location1, nginx will request a client cert but then fail with "HTTP 400, Bad Request, Require Client Cert". It seems as if nginx uses the server rule for port 444, which has ssl_verify_client off, on connect, but on the route nginx checks whether a client cert was given, since its rule for port 443 says client verification is required, and then fails the actual HTTP request.
I dug around and can't seem to find any docs on this. Clearly the same IP:PORT would be an issue, but everything thus far indicates I can change the config per port, yet that doesn't seem to be the case.
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/ssl-certs/a.cert;
    ssl_certificate_key /etc/nginx/ssl-certs/a.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_client_certificate /etc/nginx/ssl-certs/ca.pem;
    ssl_verify_client on;

    location /location1 {
        [..]
    }
}

server {
    listen 444;
    ssl on;
    ssl_certificate /etc/nginx/ssl-certs/a.cert;
    ssl_certificate_key /etc/nginx/ssl-certs/a.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_verify_client off;

    location /location2 {
        [..]
    }
}
I eventually figured it out.
Rejecting an unverified client is mandatory, but it can happen either during the handshake or after the connection has been made.
NGINX will allow the handshake to complete, then enforce whether the client was verified.
APACHE (at least the last version I used) fails the handshake.
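Given that behaviour, a common pattern (not from the original answer, just a sketch) is to make verification optional at the TLS layer and enforce it per location, so the handshake itself never fails:
server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/ssl-certs/a.cert;
    ssl_certificate_key    /etc/nginx/ssl-certs/a.key;
    ssl_client_certificate /etc/nginx/ssl-certs/ca.pem;

    # Request a client certificate but do not abort the handshake if it is missing.
    ssl_verify_client optional;

    location /location1 {
        # Enforce verification only where it is actually required.
        if ($ssl_client_verify != SUCCESS) { return 403; }
        # [..]
    }
}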

Error code: ssl_error_rx_record_too_long on nginx ubuntu server

I have a site which was running perfectly with Apache on an old Ubuntu server, and it also has HTTPS. But now, for some reasons, I need to move to a different server (a new Ubuntu server with a higher configuration) and am trying to serve my site using Nginx, so I installed nginx (nginx/1.4.6 (Ubuntu)). Below are my nginx.conf settings:
server {
    listen 8005;
    location / {
        proxy_pass http://127.0.0.1:8001;
    }
    location /static/ {
        alias /root/apps/project/static/;
    }
    location /media/ {
        alias /root/apps/media/;
    }
}

# Https Server
server {
    listen 443;
    location / {
        # proxy_set_header Host $host;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Protocol $scheme;
        # proxy_set_header X-Url-Scheme $scheme;
        # proxy_redirect off;
        proxy_pass http://127.0.0.1:8001;
    }

    server_tokens off;
    ssl on;
    ssl_certificate /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache shared:SSL:10m; # ~ 40,000 sessions
    ssl_protocols SSLv3 TLSv1; # SSLv2
    ssl_ciphers ALL:!aNull:!eNull:!SSLv2:!kEDH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+EXP:#STRENGTH;
    ssl_prefer_server_ciphers on;
}
Since I already had the HTTPS certificate (project.com.crt) and key (www.project.com.key) running on the other server, I just copied them to the new server (which does not have any domain as of now, only an IP), placed them at /etc/ssl/certificates/, and am trying to use them directly. Now I have restarted Nginx and tried to access my IP 23.xxx.xxx.xx:8005 with https://23.xxx.xxx.xx:8005, and I get the below error in Firefox:
Secure Connection Failed
An error occurred during a connection to 23.xxx.xxx.xx:8005. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.
But when I access the IP without HTTPS, I can serve my site.
So what's wrong with my HTTPS settings in the above nginx conf file?
Can't we serve the certificate files by simply copying them into some folder? Do we need to create any extra certificate for my new server?
Change
listen 443;
to
listen 443 ssl;
and get rid of this line
ssl on;
That should fix your SSL issue, but it looks like you have several issues in your configuration.
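For illustration, a minimal sketch of the HTTPS server block with those two changes applied (everything else kept from the question's config):
server {
    listen 443 ssl;                       # 'ssl' goes on the listen directive
    server_tokens off;

    ssl_certificate     /etc/ssl/certificates/project.com.crt;
    ssl_certificate_key /etc/ssl/certificates/www.project.com.key;
    ssl_session_timeout 20m;
    ssl_session_cache   shared:SSL:10m;

    location / {
        proxy_pass http://127.0.0.1:8001;
    }
}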
So what's wrong with my HTTPS settings in the above nginx conf file?
You don't have an SSL/TLS server listening on the port the client is trying to connect to. The ssl_error_rx_record_too_long error occurs because the client's SSL stack is trying to interpret a plain HTTP response as SSL/TLS data. A Wireshark trace should confirm the issue; look at the raw bytes (follow the stream).
I don't know why the configuration is not correct. Perhaps someone with Nginx config experience can help, or the folks on Server Fault or Webmaster Stack Exchange.
This problem happens when the client gets non-SSL content over an SSL connection: the server sends HTTP content, but the client expects HTTPS content. There are two main things to check, although it can be caused by other side effects too.
Make sure you put ssl on the listen directive:
listen [PORT_NUMBER] ssl;
Check the host IP address you are trying to connect to. DNS may be correct, but perhaps you have another entry in your hosts file or on your local DNS server.