I am using a self-signed certificate on the upstream. The upstream is reachable with cURL but not from NGINX. Here is the process I followed.
I edited the hosts file and added the upstream IP with a domain name:
10.0.1.2 xxx.yyy.com
Then I used the command below to access the application, and it was successful:
curl -X GET "https://xxx.yyy.com/test" --cacert /etc/upstream.ca-cert.crt -v
Then I wanted to access the application through NGINX, so I need a secure connection between the client and the NGINX server, and also between the NGINX server and the application. The connection between the client and NGINX works fine, but the handshake between the NGINX server and the application does not work properly.
This is the configuration:
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/server.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
server_name xxx.yyy.com;

location / {
    include /etc/nginx/proxy_params;
    proxy_pass https://backend-server;
    proxy_ssl_certificate /etc/nginx/ssl/upstream.ca-cert.crt;
    proxy_ssl_certificate_key /etc/nginx/ssl/upstream.ca-cert.key;
    proxy_ssl_server_name on;
    rewrite ^(.*):(.*)$ $1%3A$2;
}

upstream backend-server {
    ip_hash;
    zone backend 64k;
    server 10.0.1.2:443 max_fails=1000 fail_timeout=30s;
}
Below is the error log from NGINX:
2019/12/05 06:46:40 [error] 5275#0: *2078 peer closed connection in SSL handshake while SSL handshaking to upstream, client: xxx.xxx.xxx.xxx, server: xxx.yyy.com, request: "GET /test HTTP/1.1", upstream: "https://10.0.1.2:443/carbon", host: "xxx.yyy.com"
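For reference, proxy_ssl_certificate/proxy_ssl_certificate_key configure a client certificate that nginx presents to the upstream; trusting a self-signed upstream certificate is normally done with proxy_ssl_trusted_certificate plus proxy_ssl_verify, and proxy_ssl_name controls the SNI/verification name. A minimal sketch of that variant, assuming the upstream certificate is issued for xxx.yyy.com (not a confirmed fix, just the directives that usually apply here):

location / {
    include /etc/nginx/proxy_params;
    proxy_pass https://backend-server;

    # Trust the upstream's self-signed certificate/CA and verify against it
    proxy_ssl_trusted_certificate /etc/nginx/ssl/upstream.ca-cert.crt;
    proxy_ssl_verify on;

    # Send the name the upstream certificate expects (SNI and verification name)
    proxy_ssl_server_name on;
    proxy_ssl_name xxx.yyy.com;
}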
My Go program (an HTTP client) can't work with a user certificate due to unsupported old TLS algorithms. I want to solve this with a reverse proxy using nginx. Here is a picture describing the intended scheme: https://ibb.co/Jxcy52G
[client] ----> [NGINX:80] ----(proxy pass using cert,privkey)----> [TOMCAT:8443]
https://TOMCAT:8443 requires authentication with a client certificate. I want to hide this fact from my app: the app must not be required to provide a client certificate. Instead, I would like NGINX to use a certificate that's stored on the server.
My nginx.conf config:
server {
    listen 80;
    # server_name _;

    location / {
        proxy_ssl_certificate "cert.pem";
        proxy_ssl_certificate_key "key.pem";
        proxy_ssl_server_name on;
        # proxy_ssl_verify off;
        proxy_ssl_name "iscs.telecomtest.ru";
        proxy_read_timeout 300;
        proxy_send_timeout 300;
        proxy_pass https://195.11.xx.16:8443;
        proxy_buffering off;
    }
}
When I try to open the page http://my-nginx:80/, I get:
2019/06/18 15:16:22 [error] 25896#14556: *133 peer closed connection in SSL handshake while SSL handshaking to upstream, client: 127.0.0.1, server: , request: "GET /? HTTP/1.1", upstream: "https://195.11.xx.16/?", host: "127.0.0.1:19101"
Thank you in advance.
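Since the question mentions that the backend only works with old TLS algorithms, the nginx-to-Tomcat handshake can also fail on the protocol or cipher negotiation, independent of the client certificate. A minimal sketch of the relevant directives, where the protocol list and cipher string are assumptions to be matched to what the Tomcat connector actually accepts:

location / {
    proxy_pass https://195.11.xx.16:8443;

    # Client certificate presented to Tomcat on behalf of the app
    proxy_ssl_certificate "cert.pem";
    proxy_ssl_certificate_key "key.pem";
    proxy_ssl_server_name on;
    proxy_ssl_name "iscs.telecomtest.ru";

    # Allow older protocol versions / ciphers if the upstream requires them (assumed values)
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
}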
I have configured Nginx as a reverse proxy, and each client call is validated using certificates. But when I browse from the client machine I get "400 Bad Request No required SSL certificate was sent".
I enabled the error log and it says: "client sent no required SSL certificate while reading client request headers, client: x.x.x.x, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "y.y.y.y", referrer: "https://y.y.y.y/"
I am not able to make out what problem it is trying to indicate.
My Nginx config changes:
server {
    error_log "C:/Error/error.log" debug;
    listen 443 ssl;
    server_name localhost;
    #ssl_protocols TLSv1 TLSv1.1;
    ssl_certificate "C:/Test/server.crt";
    ssl_certificate_key "C:/Test/server.key";
    ssl_client_certificate "C:/Test/ca.crt";
    ssl_verify_client on;
    #ssl_session_cache off;
    #proxy_ssl_server_name on;
    #proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #proxy_ssl_session_reuse off;

    location / {
        root html;
        index index.html index.htm;
        proxy_pass https://10.10.10.10/webservice;
    }
}
Thanks,
Vinod G
Your configuration tries to authenticate the client using its certificate, and it looks like the client is not sending one.
**ssl_client_certificate** indicates that you want to validate the client certificate against the trusted CAs you're pointing to. The server then asks the client to send a certificate, and the request fails when it doesn't receive one.
A pictorial guide of the process can be read here for a better understanding:
https://comodosslstore.com/blog/what-is-ssl-tls-client-authentication-how-does-it-work.html
To debug further:
- Tools like Wireshark can be used to examine whether the client is actually sending a certificate: https://www.linuxbabe.com/security/ssltls-handshake-process-explained-with-wireshark-screenshot
- Use a tool like Postman to set the client certificate and check whether the server responds as expected: https://blog.getpostman.com/2017/12/05/set-and-view-ssl-certificates-with-postman/
- Common issues in this area and how to resolve them: https://www.thesslstore.com/blog/tls-handshake-failed/
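In the same spirit, it can help to make the certificate requirement non-fatal while debugging, so the verification result becomes visible instead of the generic 400. A minimal sketch based on the configuration above (ssl_verify_client optional and the $ssl_client_verify variable are standard nginx features; the 403 body text is just illustrative):

server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate "C:/Test/server.crt";
    ssl_certificate_key "C:/Test/server.key";
    ssl_client_certificate "C:/Test/ca.crt";

    # 'optional' requests a client certificate but lets the request through,
    # so the verification result can be inspected instead of getting the 400.
    ssl_verify_client optional;

    location / {
        # Reject with a readable reason when verification did not succeed
        if ($ssl_client_verify != SUCCESS) {
            return 403 "client certificate verification: $ssl_client_verify";
        }
        proxy_pass https://10.10.10.10/webservice;
    }
}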
I have some reason to use two nginx servers before the application server.
Both nginx servers use an SSL connection.
Nginx1 (SSL 443 and ssl_verify_client on) -> Nginx2 (SSL 443) -> App (9000).
On the first Nginx1 server I use the option: proxy_set_header client_cert $ssl_client_cert;
On the second server Nginx2 I use the option: underscores_in_headers on;
The problem is that only the first line of the certificate, "-----BEGIN CERTIFICATE-----", is sent to the second Nginx2 server.
How to pass a client certificate to the application server?
Nginx terminates SSL without exception, so if you want this configuration anyway you will need to set up SSL again and keep the certificates on the server (here is a relevant SO answer), or, based on an Nginx support discussion, use HAProxy in TCP mode. Here is the sample configuration article.
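For completeness: nginx can also avoid terminating TLS on the first hop altogether by proxying at the TCP level with the stream module (similar to HAProxy's TCP mode), in which case the client certificate is negotiated directly with the second server. A minimal sketch with a placeholder backend address, and with the caveat that the first server can then no longer inspect or modify the HTTP traffic:

# Requires nginx built with the stream module; goes at the top level of nginx.conf
stream {
    upstream nginx2 {
        server 10.0.0.2:443;   # placeholder: address of the second nginx
    }

    server {
        listen 443;
        proxy_pass nginx2;     # plain TCP pass-through, TLS ends at nginx2
    }
}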
I found a workaround for proxying the client certificate:
# NGINX1
...
map $ssl_client_raw_cert $a {
"~^(-.*-\n)(?<1st>[^\n]+)\n((?<b>[^\n]+)\n)?((?<c>[^\n]+)\n)?((?<d>[^\n]+)\n)?((?<e>[^\n]+)\n)?((?<f>[^\n]+)\n)?((?<g>[^\n]+)\n)?((?<h>[^\n]+)\n)?((?<i>[^\n]+)\n)?((?<j>[^\n]+)\n)?((?<k>[^\n]+)\n)?((?<l>[^\n]+)\n)?((?<m>[^\n]+)\n)?((?<n>[^\n]+)\n)?((?<o>[^\n]+)\n)?((?<p>[^\n]+)\n)?((?<q>[^\n]+)\n)?((?<r>[^\n]+)\n)?((?<s>[^\n]+)\n)?((?<t>[^\n]+)\n)?((?<v>[^\n]+)\n)?((?<u>[^\n]+)\n)?((?<w>[^\n]+)\n)?((?<x>[^\n]+)\n)?((?<y>[^\n]+)\n)?((?<z>[^\n]+)\n)?((?<ab>[^\n]+)\n)?((?<ac>[^\n]+)\n)?((?<ad>[^\n]+)\n)?((?<ae>[^\n]+)\n)?((?<af>[^\n]+)\n)?((?<ag>[^\n]+)\n)?((?<ah>[^\n]+)\n)?((?<ai>[^\n]+)\n)?((?<aj>[^\n]+)\n)?((?<ak>[^\n]+)\n)?((?<al>[^\n]+)\n)?((?<am>[^\n]+)\n)?((?<an>[^\n]+)\n)?((?<ao>[^\n]+)\n)?((?<ap>[^\n]+)\n)?((?<aq>[^\n]+)\n)?((?<ar>[^\n]+)\n)?((?<as>[^\n]+)\n)?((?<at>[^\n]+)\n)?((?<av>[^\n]+)\n)?((?<au>[^\n]+)\n)?((?<aw>[^\n]+)\n)?((?<ax>[^\n]+)\n)?((?<ay>[^\n]+)\n)?((?<az>[^\n]+)\n)*(-.*-)$"
$1st;
}
server {
    ...
    location / {
        ...
        proxy_set_header client_cert $a$b$c$d$e$f$g$h$i$j$k$l$m$n$o$p$q$r$s$t$v$u$w$x$y$z$ab$ac$ad$ae$af$ag$ah$ai$aj$ak$al$am$an$ao$ap$aq$ar$as$at$av$au$aw$ax$ay$az;
        ...
    }
    ...
}

# NGINX 2
server {
    ...
    underscores_in_headers on;
    ...
    location / {
        proxy_pass_request_headers on;
        proxy_pass http://app:9000/;
    }
    ...
}
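As a side note, on nginx 1.13.5 and newer the regex map is not needed: the built-in $ssl_client_escaped_cert variable contains the whole client certificate in url-encoded PEM form, so the newlines survive being put into a header and the receiving side only has to url-decode it. A minimal sketch (the upstream host name is a placeholder):

# NGINX1, nginx >= 1.13.5
location / {
    proxy_set_header client_cert $ssl_client_escaped_cert;
    proxy_pass https://nginx2-host;   # placeholder for the second server's address
}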
I have set up a Docker private registry (v2) on a CentOS 7 box following the official documentation: https://docs.docker.com/registry/deploying/
I am running Docker 1.6.0 on a Fedora 21 box.
The registry is running on port 5000 and is using an SSL key signed by a trusted CA. I set a DNS record for 'docker-registry.example.com' to the internal IP of the server. Running 'docker pull docker-registry.example.com:5000/tag/image' works as expected.
I set up an nginx server running nginx/1.8.0, created a DNS record for 'nginx-proxy.example.com' pointing to the nginx server, and set up a site. Here is the config:
server {
    listen 443 ssl;
    server_name nginx-proxy.example.com;

    add_header Docker-Distribution-Api-Version: registry/2.0 always;

    ssl on;
    ssl_certificate /etc/ssl/certs/cert.crt;
    ssl_certificate_key /etc/ssl/certs/key.key;

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Original-URI $request_uri;
    proxy_set_header Docker-Distribution-Api-Version registry/2.0;

    location / {
        proxy_pass http://docker-registry.example.com:5000;
    }
}
When I try to run 'docker pull nginx-proxy.example.com/tag/image' I get the following error:
FATA[0001] Error response from daemon: v1 ping attempt failed with error: Get https://nginx-proxy.example.com/v1/_ping: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
My question is twofold:
Why is the Docker client looking for /v1/_ping?
Why am I seeing the 'malformed HTTP response'?
If I run 'curl -v nginx-proxy.example.com/v2' I see:
[root@alex amerenda] $ curl -v https://nginx-proxy.example.com/v2/
* Hostname was NOT found in DNS cache
* Trying 10.1.43.165...
* Connected to nginx-proxy.example.com (10.1.43.165) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=*.example.com,O="example, Inc.",L=New York,ST=New York,C=US
* start date: Sep 15 00:00:00 2014 GMT
* expire date: Sep 15 23:59:59 2015 GMT
* common name: *.example.com
* issuer: CN=GeoTrust SSL CA - G2,O=GeoTrust Inc.,C=US
> GET /v2/ HTTP/1.1
> User-Agent: curl/7.37.0
> Host: nginx-proxy.example.com
> Accept: */*
> \x15\x03\x01\x00\x02\x02
If I do 'curl -v docker-registry.example.com' I get a 200 OK response. So nginx has to be responsible for this. Does anyone have an idea why this is happening? It is driving me insane!
proxy_pass http://docker-registry.example.com:5000;
you are passing the request with plain HTTP (i.e. no https)
\x15\x03\x01\x00\x02\x02
And you are getting an SSL response back (the leading 0x15 byte is the TLS alert record type). So it looks like you must use https:// and not http:// to access port 5000. And you even know that you are using SSL:
The registry is running on port 5000, and is using an SSL key signed by a trusted CA...
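So the location block presumably needs to talk TLS to the registry as well, roughly along these lines (the proxy_ssl_verify and CA bundle lines are optional, and the bundle path is an assumption; they only matter if nginx should verify the registry's certificate):

location / {
    proxy_pass https://docker-registry.example.com:5000;

    # Optional: verify the registry certificate against a CA bundle (path assumed)
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-bundle.crt;
    proxy_ssl_server_name on;
}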
Apart from that: please use the names reserved for examples, like example.com, and don't use domain names in your example which don't belong to you.
I'm using nginx, I recently added a certificate to a website, and I'm getting a strange error.
Here is a part of my access.log:
x.y.z.w - - [12/Nov/2014:15:16:09 +0100] "-" 400 0 "-" "-" Host : -
x.y.z.w - - [12/Nov/2014:15:16:09 +0100] "-" 400 0 "-" "-" Host : -
I see nothing in error.log, but when I make error.log more verbose, I get:
2014/11/12 15:16:09 [info] 16027#0: *24870 client closed prematurely connection while SSL handshaking, client: x.y.z.w, server: sub.domain.com
2014/11/12 15:16:09 [info] 16027#0: *24871 client closed prematurely connection while SSL handshaking, client: x.y.z.w, server: sub.domain.com
Here is a part of my nginx config file:
server
{
    listen 80;
    server_name sub.domain.com;
    root /var/www;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server
{
    listen 443 ssl;
    server_name sub.domain.com;
    root /var/www;
    ssl_certificate /var/server.crt;
    ssl_certificate_key /var/server.key;
    ...
There is no error on the client side.
Is it normal? Where does it come from?
Is it normal? Where does it come from?
It might be from clients which close the connection before finishing the handshake. This can happen if they receive the certificate during the handshake, fail to verify it (because it is self-signed, or for other reasons), and have to check with the user whether they should continue.
Assuming you are using nginx's default access log format, this means the handshake completed but the client didn't send any valid request after that (HTTP 400 code -> invalid request).
This could, for instance, be due to some SSL scanner (not really surprising considering the current context).
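If the scanner noise is a nuisance, those empty-request 400s can be filtered out of the access log with a map and the if= parameter of access_log (available since nginx 1.7.0). A minimal sketch; the $loggable variable name is arbitrary:

# Log only entries that have an actual request line
map $request $loggable {
    default 1;
    ""      0;   # no request sent (aborted handshakes, scanners, monitors)
}

server {
    listen 443 ssl;
    server_name sub.domain.com;
    access_log /var/log/nginx/access.log combined if=$loggable;
    ...
}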