Hi all, I have a problem passing TCP communication through to a Synology NAS with SSL.
I want to connect to the NAS with the Synology Drive Client, and the Drive Client software communicates with the NAS over TCP port 6690.
When I try to connect I get a 500 error.
Without SSL it works fine, but then the Synology encrypts the communication with its own untrusted certificate, which should not be the solution.
The setup:
Internet --> Router (port forwarding 6690) --> nginx --> NAS (192.168.10.2)
Nginx:
stream {
    log_format log_stream '$remote_addr [$time_local] $protocol [$ssl_preread_server_name] [$ssl_preread_alpn_protocols]'
                          '$status $bytes_sent $bytes_received $session_time';
    access_log /var/log/nginx/access.log log_stream;

    ssl_certificate /etc/letsencrypt/live/{mydomain}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{mydomain}/privkey.pem;
    ssl_protocols TLSv1.1 TLSv1.2;

    server {
        listen 6690 ssl;
        proxy_pass 192.168.10.2:6690;
    }
}
Log:
xx.xx.xxx.xxx [08/Nov/2019:15:09:37 +0100] TCP [-] [-]500 0 0 0.000
xx.xx.xxx.xxx [08/Nov/2019:15:09:37 +0100] TCP [-] [-]500 0 0 0.000
xx.xx.xxx.xxx [08/Nov/2019:15:10:37 +0100] TCP [-] [-]500 0 0 0.000
xx.xx.xxx.xxx [08/Nov/2019:15:10:37 +0100] TCP [-] [-]500 0 0 0.000
xx.xx.xxx.xxx [08/Nov/2019:15:11:37 +0100] TCP [-] [-]500 0 0 0.000
xx.xx.xxx.xxx [08/Nov/2019:15:11:37 +0100] TCP [-] [-]500 0 0 0.000
I also tried to check the SSL handshake with:
openssl s_client -host mydomain.net -port 6690
and that works fine.
Does somebody have any idea where my mistake is? :-(
I have a similar setup, with the goal of providing access to Synology Drive behind a "safer" reverse proxy.
The problem is not the SSL handshake.
The problem is that there is no HTTP protocol running on port 6690 for Synology Drive. (Reference in German, contains assumptions about the non-HTTP protocol on 6690: https://www.synology-forum.de/showthread.html?74773-Cloud-Station-über-Reverse-Proxy/page3)
In addition, when I point the Drive Client at my nginx reverse proxy (which works in general), I get the following in access.log:
172.18.0.1 - - [09/Nov/2019:10:17:26 +0000] "%R\x18\x14F\x0B\x00\x00" 400 157 "-" "-" "-"
So your approach (and mine) is not sufficient.
A possible path to a solution, which is beyond my current knowledge:
"Upgrade" nginx to a reverse TCP proxy; if that works with Synology Drive, use the "fanciness" of nginx in conjunction with a custom auth script within and for the TCP proxy (i.e. "abuse" and extend the TCP load balancer shown here: https://www.debinux.de/2014/12/nginx-als-tcp-proxy-beispiel-dovecot/).
I am configuring readiness and liveness probes for my Kubernetes deployment.
Here is how I added them:
ports:
  - name: http
    containerPort: {{ .Values.service.internalPort }}
    protocol: TCP
livenessProbe:
  tcpSocket:
    port: http
readinessProbe:
  tcpSocket:
    port: http
But this causes the following error logs in the pod:
2021/03/24 03:23:06 http: TLS handshake error from 10.244.0.1:48476: EOF
If I remove the probes and create the deployment, these logs do not appear.
I have an ingress set up so that all HTTP requests to that container are sent as HTTPS, because my container only accepts HTTPS requests.
I think these error logs appear because the TCP probes are not sending HTTPS requests.
Is there some other way to set up the probes without these error logs?
If you want to send HTTPS requests to the service, you have to change the probe scheme.
livenessProbe:
  httpGet:
    path: /
    port: 443
    scheme: HTTPS
readinessProbe:
  httpGet:
    path: /
    port: 443
    scheme: HTTPS
You can read more at: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#http-probes
scheme: Scheme to use for connecting to the host (HTTP or HTTPS). Defaults to HTTP.
If HTTPS is set, the kubelet sends an HTTPS request; otherwise it defaults to HTTP.
If the request is failing, you will see logs like a 400 Bad Request:
10.165.18.52 - - [24/March/2021:17:06:40 +0000] "GET / HTTP/1.1" 400 271 "-" "kube-probe/1.16"
For a successful request, it will be a 200 response:
10.165.18.52 - - [24/March/2021:18:10:06 +0000] "GET / HTTP/1.1" 200 "-" "kube-probe/1.16"
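Note that the kubelet skips certificate verification for HTTPS probes, so a self-signed certificate works with the httpGet approach above. If curl is available inside the container image, an exec probe is another option; this is only a sketch and assumes the app serves / over HTTPS on port 443 inside the pod, as in the probes above:

livenessProbe:
  exec:
    command:
      - sh
      - "-c"
      # -k: accept the self-signed certificate, -f: fail the probe on HTTP errors
      - curl -fks https://localhost:443/ > /dev/null
readinessProbe:
  exec:
    command:
      - sh
      - "-c"
      - curl -fks https://localhost:443/ > /dev/null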
I'm using HAProxy Ingress Controller (https://github.com/helm/charts/tree/master/incubator/haproxy-ingress) for TLS-termination for my app.
I have a simple Node.js server listening on 8080 for HTTP, and on 1935 as a simple echo server (not HTTP).
And I use HAProxy Ingress controller to wrap the ports in TLS. (8080 -> 443 (HTTPS), 1935 -> 1936 (TCP + TLS))
I installed HAProxy Ingress Controller with
helm upgrade --install haproxy-ingress incubator/haproxy-ingress \
--namespace test \
-f ./haproxy-ingress-values.yaml \
--version v0.0.27
where the content of haproxy-ingress-values.yaml is:
controller:
  ingressClass: haproxy
  replicaCount: 1
  service:
    type: LoadBalancer
  tcp:
    1936: "test/simple-server:1935:::test/ingress-cert"
  nodeSelector:
    "kubernetes.io/os": linux
defaultBackend:
  nodeSelector:
    "kubernetes.io/os": linux
And here's my ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
    - hosts:
      secretName: ingress-cert
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: "simple-server"
              servicePort: 8080
The cert is self-signed.
If I test the TLS handshake with
echo | openssl s_client -connect "<IP>":1936
Sometimes (about a third of the time) it fails with:
CONNECTED(00000005)
139828847829440:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:332:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 316 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
The same problem doesn't happen on port 443.
See here for the details of the settings to reproduce the problem.
[edit]
As pointed out by JoaoMorais, it's because the default statistics port is 1936.
Although I didn't turn on statistics, it still seems to interfere with the behavior.
There are two solutions that work for me:
Change my service's 1936 port to another one (a sketch of this is shown after the explanation below).
Change the stats port by adding values like the ones below when installing the haproxy-ingress chart:
controller:
  stats:
    port: 5000
By default HAProxy allows the same port number to be reused across the same or other frontend/listen sections, and also across other HAProxy processes. This can be changed by adding noreuseport in the global section.
The default HAProxy Ingress configuration uses port 1936 to expose stats. If that port number is reused by, e.g., a TCP proxy, incoming requests will be distributed between both frontends: sometimes your service will be called, sometimes the stats page. Changing the TCP proxy or the stats page (doc here) to another port should solve the issue.
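If you move the TCP proxy itself instead of the stats page, that is just a change to the tcp mapping in the chart values. A sketch, assuming the controller.tcp layout used in the values file above; 1937 is an arbitrary free port and the mapping string is unchanged:

controller:
  tcp:
    1937: "test/simple-server:1935:::test/ingress-cert"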
I am using a self-signed certificate on the upstream. The upstream is reachable with cURL but not from nginx. Here is the process I followed.
I changed the hosts file and added the upstream IP with a domain name:
10.0.1.2 xxx.yyy.com
Then I used the command below to access the application, and it was successful:
curl "https://xxx.yyy.com/test" --cacert /etc/upstream.ca-cert.crt -v
Then I wanted to access the application through nginx, so I want a secure connection between the client and the nginx server as well as between the nginx server and the application. The connection between the client and nginx works fine, but the handshake between the nginx server and the application does not work properly.
This is the configuration:
server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    server_name xxx.yyy.com;

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass https://backend-server;
        proxy_ssl_certificate /etc/nginx/ssl/upstream.ca-cert.crt;
        proxy_ssl_certificate_key /etc/nginx/ssl/upstream.ca-cert.key;
        proxy_ssl_server_name on;
        rewrite ^(.*):(.*)$ $1%3A$2;
    }
}

upstream backend-server {
    ip_hash;
    zone backend 64k;
    server 10.0.1.2:443 max_fails=1000 fail_timeout=30s;
}
Below is the error log from nginx:
2019/12/05 06:46:40 [error] 5275#0: *2078 peer closed connection in SSL handshake while SSL handshaking to upstream, client: xxx.xxx.xxx.xxx, server: xxx.yyy.com, request: "GET /test HTTP/1.1", upstream: "https://10.0.1.2:443/carbon", host: "xxx.yyy.com"
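For comparison with the curl call above (which passes the CA with --cacert): the nginx counterpart of --cacert is proxy_ssl_trusted_certificate together with proxy_ssl_verify, whereas proxy_ssl_certificate/proxy_ssl_certificate_key configure a client certificate, which is a different thing. Purely as a hedged sketch of what the location block could look like under that assumption (paths and the xxx.yyy.com name reused from the post above):

location / {
    include /etc/nginx/proxy_params;
    proxy_pass https://backend-server;

    # CA bundle used to verify the upstream's self-signed certificate
    # (the nginx equivalent of curl's --cacert)
    proxy_ssl_trusted_certificate /etc/nginx/ssl/upstream.ca-cert.crt;
    proxy_ssl_verify on;

    # send and verify the name the upstream certificate was issued for
    proxy_ssl_server_name on;
    proxy_ssl_name xxx.yyy.com;
}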
I'm trying to get a websocket/push-server to run with SSL and can't get it to work.
Here is some information about what I'm using and what I've done so far:
Ubuntu 16.04.6 LTS server with Plesk 17.8.11
Apache version 2.4.18
Ratchet and zmq extension installed
autobahn.js for websocket connection
PHP-FPM (7.2.19) served by Apache
Firewall disabled for testing
Apache mods enabled: proxy, proxy_balancer, proxy_fcgi, proxy_http, proxy_wstunnel
When I use a non-SSL connection via autobahn.js to ws://my.domain.net:8888/, everything works perfectly fine.
As soon as I try to use wss://my.domain.net/wss/ I get 504 connection timeouts in my browser.
Part of my push-server.php:
$context = new React\ZMQ\Context($loop);
$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://0.0.0.0:5555'); // set to 0.0.0.0 for testing, was 127.0.0.1 before but didn't work either
$pull->on('message', array($pusher, 'onMessage'));
$webSock = new React\Socket\Server('0.0.0.0:8888', $loop);
$webServer = new Ratchet\Server\IoServer(
    new Ratchet\Http\HttpServer(
        new Ratchet\WebSocket\WsServer(
            new Ratchet\Wamp\WampServer(
                $pusher
            )
        )
    ),
    $webSock
);
$loop->run();
The server is listening on both ports:
php 3957 root 17u IPv4 823972456 0t0 TCP *:8888 (LISTEN)
ZMQbg/0 3957 3958 root 17u IPv4 823972456 0t0 TCP *:8888 (LISTEN)
ZMQbg/1 3957 3959 root 17u IPv4 823972456 0t0 TCP *:8888 (LISTEN)
php 3957 root 16u IPv4 823972455 0t0 TCP *:5555 (LISTEN)
ZMQbg/0 3957 3958 root 16u IPv4 823972455 0t0 TCP *:5555 (LISTEN)
ZMQbg/1 3957 3959 root 16u IPv4 823972455 0t0 TCP *:5555 (LISTEN)
My apache config:
SSLProxyEngine on
ProxyRequests Off
SetEnv proxy-initial-not-pooled 1
ProxyPass /wss/ https://my.domain.net:8888/
ProxyPassReverse /wss/ https://my.domain.net:8888/
I also tried to use
ProxyPass /wss/ wss://my.domain.net:8888/
but then I get the following error message
[proxy:warn] AH01144: No protocol handler was valid for the URL /wss/. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
Now when I try to open a connection with autobahn.js to wss://my.domain.net/wss/, the request loads for a while, ends in a 504 Gateway Time-out, and I get the following error messages:
[proxy_http:error] (103)Software caused connection abort: [client x.x.x.x:33362] AH01102: error reading status line from remote server my.domain.net:8888
[proxy:error] [client x.x.x.x:33362] AH00898: Error reading from remote server returned by /wss/
Part of the Request Headers:
Cache-Control: no-cache
Connection: Upgrade
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
Sec-WebSocket-Key: WG/ajAsTaVhBgEeEz5wiUg==
Sec-WebSocket-Protocol: wamp
Sec-WebSocket-Version: 13
Upgrade: websocket
Part of the Response Headers:
Connection: keep-alive
Content-Length: 578
Content-Type: text/html
Server: nginx
Opening https://my.domain.net/wss/ in a browser ends in a Gateway Time-out as well, served by nginx.
I've already spent about two days googling and trying different solutions, but nothing seems to work.
If you need more information, please let me know.
Thanks in advance!
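Not an answer from the original thread, just a hedged observation: the Ratchet server above listens for plain, unencrypted WebSocket traffic on 8888, so proxying to https:// or wss:// asks Apache to speak TLS to a backend that does not. With mod_proxy_wstunnel loaded, the usual pattern is a plain ws:// target, so that TLS only exists between the browser and Apache (behind Plesk's nginx). A sketch of that variant of the config shown earlier:

SSLProxyEngine on
ProxyRequests Off
SetEnv proxy-initial-not-pooled 1

# Ratchet speaks plain WebSocket on 8888; terminate TLS at the proxy only
ProxyPass /wss/ ws://my.domain.net:8888/
ProxyPassReverse /wss/ ws://my.domain.net:8888/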
I'm using nginx, and I recently added a certificate to a website and started getting a strange error.
Here is a part of my access.log:
x.y.z.w - - [12/Nov/2014:15:16:09 +0100] "-" 400 0 "-" "-" Host : -
x.y.z.w - - [12/Nov/2014:15:16:09 +0100] "-" 400 0 "-" "-" Host : -
I see nothing in error.log, but when I make error.log more verbose, I get:
2014/11/12 15:16:09 [info] 16027#0: *24870 client closed prematurely connection while SSL handshaking, client: x.y.z.w, server: sub.domain.com
2014/11/12 15:16:09 [info] 16027#0: *24871 client closed prematurely connection while SSL handshaking, client: x.y.z.w, server: sub.domain.com
Here is a part of my nginx config file:
server
{
    listen 80;
    server_name sub.domain.com;
    root /var/www;
    rewrite ^ https://$server_name$request_uri? permanent;
}
server
{
    listen 443 ssl;
    server_name sub.domain.com;
    root /var/www;
    ssl_certificate /var/server.crt;
    ssl_certificate_key /var/server.key;
    ...
There is no error on the client side.
Is this normal? Where does it come from?
Is this normal? Where does it come from?
It might come from clients that close the connection before finishing the handshake. This can happen if they receive the certificate during the handshake, fail to verify it (because it is self-signed, or for other reasons), and have to check with the user whether they should continue.
Assuming you are using nginx's default access log format, this means the handshake completed but the client didn't send any valid request afterwards (HTTP 400 -> invalid request).
This could, for instance, be due to some SSL scanner (not really surprising considering the current context).
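If you confirm these entries are just scanners or aborted handshakes and want to keep them out of the access log, one option (a sketch, not part of the original answer) is nginx's conditional access_log, available since 1.7.0. Replace "combined" with whatever log_format the server block actually uses:

# in the http {} block: skip logging of requests that ended in a 400
map $status $loggable {
    400      0;
    default  1;
}
access_log /var/log/nginx/access.log combined if=$loggable;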