Is it possible to identify TLS info in a requests response? - ssl

I am using Python's requests module. I can get the server's response headers and application-layer data like this:
import requests
r = requests.get('https://yahoo.com')
print(r.url)
My question: does requests allow retrieving transport-layer data (the TLS version the server selected, the negotiated cipher suite, etc.)?

Here is a quick and ugly monkey-patching version that works:
import requests
from requests.packages.urllib3.connection import VerifiedHTTPSConnection

SOCK = None

_orig_connect = requests.packages.urllib3.connection.VerifiedHTTPSConnection.connect

def _connect(self):
    global SOCK
    _orig_connect(self)
    SOCK = self.sock

requests.packages.urllib3.connection.VerifiedHTTPSConnection.connect = _connect

requests.get('https://yahoo.com')

tlscon = SOCK.connection
print 'Cipher is %s/%s' % (tlscon.get_cipher_name(), tlscon.get_cipher_version())
print 'Remote certificates: %s' % (tlscon.get_peer_certificate())
print 'Protocol version: %s' % tlscon.get_protocol_version_name()
This yields:
Cipher is ECDHE-RSA-AES128-GCM-SHA256/TLSv1.2
Remote certificates: <OpenSSL.crypto.X509 object at 0x10c60e310>
Protocol version: TLSv1.2
However it is bad, because it relies on monkey patching and on a single global variable, which also means you cannot inspect what happened at intermediate redirect steps, and so on.
Maybe with some work this could be turned into a Transport Adapter, to expose the underlying connection as a property of the request (or more likely of the session). That may create leaks though, because in the current implementation the underlying socket is thrown away as quickly as possible (see How to get the underlying socket when using Python requests).
Update: now using a Transport Adapter
This works and stays in line with the framework (no global variable, redirects should be handled, etc.; there may be something more to do for proxies, like also overriding proxy_manager_for), but it is a lot more code.
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.connectionpool import HTTPSConnectionPool
from requests.packages.urllib3.poolmanager import PoolManager, SSL_KEYWORDS


class InspectedHTTPSConnectionPool(HTTPSConnectionPool):
    @property
    def inspector(self):
        return self._inspector

    @inspector.setter
    def inspector(self, inspector):
        self._inspector = inspector

    def _validate_conn(self, conn):
        r = super(InspectedHTTPSConnectionPool, self)._validate_conn(conn)
        if self.inspector:
            self.inspector(self.host, self.port, conn)
        return r


class InspectedPoolManager(PoolManager):
    @property
    def inspector(self):
        return self._inspector

    @inspector.setter
    def inspector(self, inspector):
        self._inspector = inspector

    def _new_pool(self, scheme, host, port):
        if scheme != 'https':
            return super(InspectedPoolManager, self)._new_pool(scheme, host, port)
        kwargs = self.connection_pool_kw
        if scheme == 'http':
            kwargs = self.connection_pool_kw.copy()
            for kw in SSL_KEYWORDS:
                kwargs.pop(kw, None)
        pool = InspectedHTTPSConnectionPool(host, port, **kwargs)
        pool.inspector = self.inspector
        return pool


class TLSInspectorAdapter(HTTPAdapter):
    def __init__(self, inspector):
        self._inspector = inspector
        super(TLSInspectorAdapter, self).__init__()

    def init_poolmanager(self, connections, maxsize, block=False, **pool_kwargs):
        self.poolmanager = InspectedPoolManager(num_pools=connections, maxsize=maxsize, block=block, strict=True, **pool_kwargs)
        self.poolmanager.inspector = self._inspector


def connection_inspector(host, port, connection):
    print 'host is %s' % host
    print 'port is %s' % port
    print 'connection is %s' % connection
    sock = connection.sock
    sock_connection = sock.connection
    print 'socket is %s' % sock
    print 'Protocol version: %s' % sock_connection.get_protocol_version_name()
    print 'Cipher is %s/%s' % (sock_connection.get_cipher_name(), sock_connection.get_cipher_version())
    print 'Remote certificate: %s' % sock.getpeercert()


url = 'https://yahoo.com'

s = requests.Session()
s.mount(url, TLSInspectorAdapter(connection_inspector))
r = s.get(url)
Yes, there is a lot of naming confusion between sockets and connections here: requests uses a "connection pool" holding connection objects; for HTTPS with the PyOpenSSL backend, each connection's sock attribute is in fact a urllib3 WrappedSocket, which itself wraps the real TLS connection (a PyOpenSSL Connection object). Hence the strange attribute chains in connection_inspector.
This returns the expected output:
host is yahoo.com
port is 443
connection is <requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x10bb372d0>
socket is <requests.packages.urllib3.contrib.pyopenssl.WrappedSocket object at 0x10bb37410>
Protocol version: TLSv1.2
Cipher is ECDHE-RSA-AES128-GCM-SHA256/TLSv1.2
Remote certificate: {'subjectAltName': [('DNS', '*.www.yahoo.com'), ('DNS', 'add.my.yahoo.com'), ('DNS', '*.amp.yimg.com'), ('DNS', 'au.yahoo.com'), ('DNS', 'be.yahoo.com'), ('DNS', 'br.yahoo.com'), ('DNS', 'ca.my.yahoo.com'), ('DNS', 'ca.rogers.yahoo.com'), ('DNS', 'ca.yahoo.com'), ('DNS', 'ddl.fp.yahoo.com'), ('DNS', 'de.yahoo.com'), ('DNS', 'en-maktoob.yahoo.com'), ('DNS', 'espanol.yahoo.com'), ('DNS', 'es.yahoo.com'), ('DNS', 'fr-be.yahoo.com'), ('DNS', 'fr-ca.rogers.yahoo.com'), ('DNS', 'frontier.yahoo.com'), ('DNS', 'fr.yahoo.com'), ('DNS', 'gr.yahoo.com'), ('DNS', 'hk.yahoo.com'), ('DNS', 'hsrd.yahoo.com'), ('DNS', 'ideanetsetter.yahoo.com'), ('DNS', 'id.yahoo.com'), ('DNS', 'ie.yahoo.com'), ('DNS', 'in.yahoo.com'), ('DNS', 'it.yahoo.com'), ('DNS', 'maktoob.yahoo.com'), ('DNS', 'malaysia.yahoo.com'), ('DNS', 'mbp.yimg.com'), ('DNS', 'my.yahoo.com'), ('DNS', 'nz.yahoo.com'), ('DNS', 'ph.yahoo.com'), ('DNS', 'qc.yahoo.com'), ('DNS', 'ro.yahoo.com'), ('DNS', 'se.yahoo.com'), ('DNS', 'sg.yahoo.com'), ('DNS', 'tw.yahoo.com'), ('DNS', 'uk.yahoo.com'), ('DNS', 'us.yahoo.com'), ('DNS', 'verizon.yahoo.com'), ('DNS', 'vn.yahoo.com'), ('DNS', 'www.yahoo.com'), ('DNS', 'yahoo.com'), ('DNS', 'za.yahoo.com')], 'subject': ((('commonName', u'*.www.yahoo.com'),),)}
Other ideas:
You can remove a lot of code by monkey patching as in https://stackoverflow.com/a/22253656/6368697, essentially poolmanager.pool_classes_by_scheme['http'] = MyHTTPConnectionPool (see the sketch just after this list); but this is still monkey patching, and it is a pity that PoolManager does not offer a proper API to override pool_classes_by_scheme.
A PyOpenSSL ssl_context might be able to hold callbacks that are invoked during the TLS handshake and expose the underlying data; in init_poolmanager you would then only need to put the ssl_context into the kwargs before calling the superclass, as in https://gist.github.com/aiguofer/1eb881ccf199d4aaa2097d87f93ace6a. Or maybe not: in that example the context comes from ssl.create_default_context, and the standard ssl module is far less powerful than PyOpenSSL; I see no way to add such callbacks with ssl, whereas they exist in PyOpenSSL. YMMV.
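For the first idea, here is a minimal sketch of what that monkey patching could look like for HTTPS (assuming the bundled urllib3 still exposes the module-level pool_classes_by_scheme dict used in the linked answer):
import requests
from requests.packages.urllib3 import poolmanager
from requests.packages.urllib3.connectionpool import HTTPSConnectionPool

class TLSLoggingHTTPSConnectionPool(HTTPSConnectionPool):
    def _validate_conn(self, conn):
        # After the super() call the TLS handshake is done and conn.sock
        # is the wrapped socket (see connection_inspector above).
        result = super(TLSLoggingHTTPSConnectionPool, self)._validate_conn(conn)
        print('TLS connection established to %s:%s' % (self.host, self.port))
        return result

# Still monkey patching: every PoolManager created from now on uses this class.
poolmanager.pool_classes_by_scheme['https'] = TLSLoggingHTTPSConnectionPool

requests.get('https://yahoo.com')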
PS:
Once you find out that _validate_conn exists and can be overridden (it receives the proper connection object), life gets easier.
Also make sure the imports at the top are correct: you need to use the urllib3 packages distributed inside requests, not the "real" standalone urllib3, otherwise you get a lot of strange errors, because the same methods do not have the same signatures in both.
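For example, just to illustrate the import distinction (assuming a requests version that still vendors urllib3 under requests.packages):
# use the copy of urllib3 bundled with requests...
from requests.packages.urllib3.poolmanager import PoolManager
from requests.packages.urllib3.connectionpool import HTTPSConnectionPool

# ...not the standalone package, whose classes may have different signatures:
# from urllib3.poolmanager import PoolManager
# from urllib3.connectionpool import HTTPSConnectionPool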

Related

Datastax Driver giving connection error after enabling client to node SSL on Cassandra port 9142

I enabled SSL on the Cassandra nodes on port 9142. The service runs fine when tested locally, but I get an AllNodesFailedException when deploying to the ECS cluster, even though I use the same keystore. The non-SSL port 9042 works fine.
Failed to instantiate [com.datastax.oss.driver.api.core.CqlSession]:
Factory method 'session' threw exception; nested exception is
com.datastax.oss.driver.api.core.AllNodesFailedException: Could not
reach any contact point, make sure you've provided valid addresses
(showing first 3 nodes, use getAllErrors() for more):
Node(endPoint=ip-10-18-28-203.us-west-2.compute.internal/10.18.28.203:9142,
hostId=null, hashCode=6551c917):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-28-203.us-west-2.compute.internal/10.18.28.203:9142],
Node(endPoint=ip-10-18-8-110.us-west-2.compute.internal/10.18.8.110:9142,
hostId=null, hashCode=36985f57):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-8-110.us-west-2.compute.internal/10.18.8.110:9142],
Node(endPoint=ip-10-18-7-47.us-west-2.compute.internal/10.18.7.47:9142,
hostId=null, hashCode=8eab7e9):
[io.netty.channel.ConnectTimeoutException: connection timed out:
ip-10-18-7-47.us-west-2.compute.internal/10.18.7.47:9142]
cassandra.yaml properties
server_encryption_options:
  internode_encryption: none
  keystore: /etc/cassandra/conf/casskeystore
  keystore_password: changeit
  truststore: conf/.truststore
  truststore_password: cassandra
client_encryption_options:
  enabled: true
  optional: true
  keystore: /etc/cassandra/conf/casskeystore
  keystore_password: changeit
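Since the driver reports io.netty.channel.ConnectTimeoutException (a network-level failure rather than a TLS handshake failure), a quick probe run from inside the ECS task can help tell a blocked port apart from an SSL misconfiguration. A minimal sketch in Python, with the node address taken from the error above and certificate verification deliberately disabled (this only checks reachability and that a handshake completes):
import socket
import ssl

HOST = "ip-10-18-28-203.us-west-2.compute.internal"  # node from the error above
PORT = 9142

# Plain TCP first: a timeout here points at security groups / network ACLs,
# not at the keystore or truststore configuration.
sock = socket.create_connection((HOST, PORT), timeout=5)

# Then a TLS handshake, without verifying the certificate chain.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
with context.wrap_socket(sock, server_hostname=HOST) as tls:
    print("handshake OK:", tls.version(), tls.cipher())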

How do you make an SSL server with a Pre-Shared Key in Erlang?

I'm trying to create a simple SSL server that uses a Pre-Shared Key to authenticate a client (no ssl certs). It seems this functionality exists in the Erlang ssl module, but it's unclear to me how to use it effectively. It's worth mentioning I'm attempting to use it in Elixir (eventually as a GenServer).
I've made a simple example that should just accept a connection, and then proceed with a handshake when a client connects.
# Server
defmodule SimpleSSLServer do
  def start() do
    :ssl.start()

    # Limit ciphers to those that support PSKs
    ciphers_suites =
      :ssl.cipher_suites(:all, :"tlsv1.3")
      |> Enum.filter(&match?(%{cipher: cip} when cip in [:aes_128_gcm], &1))

    {:ok, listen_socket} =
      :ssl.listen(8085,
        reuseaddr: true,
        user_lookup_fun:
          {&user_lookup/3, "001122334455ff001122334455ff001122334455ff001122334455ff"},
        ciphers: ciphers_suites
      )

    {:ok, transport_socket} = :ssl.transport_accept(listen_socket)
    {:ok, _socket} = :ssl.handshake(transport_socket)
  end

  def user_lookup(:psk, _id, state) do
    {:ok, state}
  end
end

SimpleSSLServer.start()
# Client
openssl s_client -connect 127.0.0.1:8085 -psk 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f -tls1_3 -ciphersuites TLS_AES_128_GCM_SHA256
The application compiles, and when you run the server and connect a client, the following error occurs:
** (exit) exited in: :gen_statem.call(#PID<0.187.0>, {:start, :infinity}, :infinity)
** (EXIT) an exception was raised:
** (MatchError) no match of right hand side value: {:state, {:static_env, :server, :gen_tcp, :tls_gen_connection, :tcp, :tcp_closed, :tcp_error, :tcp_passive, 'localhost', 8085, ...super long amount of state
(ssl 10.6.1) tls_handshake_1_3.erl:652: :tls_handshake_1_3.do_start/2
(ssl 10.6.1) tls_connection_1_3.erl:269: :tls_connection_1_3.start/3
(stdlib 3.17) gen_statem.erl:1203: :gen_statem.loop_state_callback/11
(ssl 10.6.1) tls_connection.erl:154: :tls_connection.init/1
(stdlib 3.17) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
(stdlib 3.17) gen.erl:220: :gen.do_call/4
(stdlib 3.17) gen_statem.erl:693: :gen_statem.call_dirty/4
(ssl 10.6.1) ssl_gen_statem.erl:1187: :ssl_gen_statem.call/2
(ssl 10.6.1) ssl_gen_statem.erl:223: :ssl_gen_statem.handshake/2
test.exs:18: SimpleSSLServer.start/0
Any help is appreciated; this is likely something very obvious I'm missing!

How to force disconnect from ActiveMQ connection with Stomp.py

When listening to a message queue using a durable connection, I get an error in the listener. I simulate this by hitting CTRL-Z to quit the program. Trying to reconnect then gives me an error that says:
on_error! : "javax.jms.InvalidClientIDException: Broker: BMRSBROKER - Client: <Client-id> already connected from tcp://10.18.57.69:4241
at org.apache.activemq.broker.region.RegionBroker.addConnection(RegionBroker.java:255)
at org.apache.activemq.broker.jmx.ManagedRegionBroker.addConnection(ManagedRegionBroker.java:227)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:98)
at org.apache.activemq.advisory.AdvisoryBroker.addConnection(AdvisoryBroker.java:116)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:98)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:98)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:98)
at org.apache.activemq.security.JaasAuthenticationBroker.addConnection(JaasAuthenticationBroker.java:75)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:98)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:98)
at org.apache.activemq.plugin.AbstractRuntimeConfigurationBroker.addConnection(AbstractRuntimeConfigurationBroker.java:118)
at org.apache.activemq.broker.BrokerFilter.addConnection(BrokerFilter.java:98)
at org.apache.activemq.broker.MutableBrokerFilter.addConnection(MutableBrokerFilter.java:103)
at org.apache.activemq.broker.TransportConnection.processAddConnection(TransportConnection.java:849)
at org.apache.activemq.broker.jmx.ManagedTransportConnection.processAddConnection(ManagedTransportConnection.java:77)
at org.apache.activemq.command.ConnectionInfo.visit(ConnectionInfo.java:139)
at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:333)
at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:197)
at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:45)
at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:300)
at org.apache.activemq.transport.stomp.StompTransportFilter.sendToActiveMQ(StompTransportFilter.java:97)
at org.apache.activemq.transport.stomp.ProtocolConverter.sendToActiveMQ(ProtocolConverter.java:202)
at org.apache.activemq.transport.stomp.ProtocolConverter.onStompConnect(ProtocolConverter.java:774)
at org.apache.activemq.transport.stomp.ProtocolConverter.onStompCommand(ProtocolConverter.java:265)
at org.apache.activemq.transport.stomp.StompTransportFilter.onCommand(StompTransportFilter.java:85)
at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
at org.apache.activemq.transport.tcp.SslTransport.doConsume(SslTransport.java:108)
at org.apache.activemq.transport.stomp.StompSslTransportFactory$1$1.doConsume(StompSslTransportFactory.java:70)
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:233)
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)
at java.lang.Thread.run(Thread.java:745)
"
I have tried to unsubscribe and disconnect my connection using the method below but it won't disconnect me.
import stomp


class MyListener(stomp.ConnectionListener):
    """This is a listener class that listens for new messages using the STOMP protocol"""

    def __init__(self, conn, client_id):
        self.conn = conn
        self.client_id = client_id

    def on_error(self, headers, message):
        self.disconnect()

    def on_message(self, headers, message):
        ...

    def on_disconnected(self):
        self.disconnect()

    def disconnect(self):
        try:
            self.conn.unsubscribe(
                destination="/topic/bmrsTopic",
                id=self.client_id
            )
        except:
            print('unsubscribe failed')
        # first disconnect before trying to reconnect
        print('first disconnect before trying to reconnect')
        try:
            self.conn.disconnect()
        except:
            print('disconnect failed')
How can I force the AMQ server to forget my previous connection?
You cannot force the broker to disconnect another client's resources from a different connection. Instead, you should configure an idle timeout on both the client and the broker so that each side can detect that the remote has dropped even when the socket never reports a proper close.
You can also configure the broker transport connector with heart-beat grace period values for STOMP clients that do not advertise heart-beats; refer to the ActiveMQ broker STOMP documentation.
The STOMP specification outlines how heart-beats work.
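On the client side, stomp.py can request heart-beats when the connection is created; a minimal sketch (broker address, credentials, client id and the 10-second intervals are placeholder values):
import stomp

# Ask for heart-beats in both directions (values in milliseconds). With
# heart-beating in place the broker should detect a dead client and release
# its client-id, so a later reconnect no longer fails with
# InvalidClientIDException.
conn = stomp.Connection12([("broker-host", 61613)], heartbeats=(10000, 10000))
conn.set_listener("", MyListener(conn, "my-client-id"))
conn.connect("user", "password", wait=True, headers={"client-id": "my-client-id"})
conn.subscribe(destination="/topic/bmrsTopic", id="my-client-id", ack="auto")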

TLS-Encrypted Connection with RabbitMQ Using pika

I am finding it impossible to set up an encrypted connection with a RabbitMQ broker using Python's pika library on the client side. My starting point was the pika tutorial example here, but I cannot make it work. I have proceeded as follows.
(1) The RabbitMQ configuration file was:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = false
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = EXTERNAL
(2) The rabbitmq-auth-mechanism-ssl plugin was enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling was confirmed by checking the enable status through: rabbitmq-plugins list.
(3) The correctness of the TLS certificates was verified by using openssl tools as described here.
(4) The client-side program to set up the connection was:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(
    cafile="/Xyz/sampleNodeCert/tms.crt")
context.load_cert_chain("/Xyz/sampleNodeCert/node.crt",
                        "/Xyz/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, '127.0.0.1')
conn_params = pika.ConnectionParameters(host='127.0.0.1',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials())

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
(5) The client-side program failed with the following error message:
pika.exceptions.ProbableAuthenticationError: ConnectionClosedByBroker: (403) 'ACCESS_REFUSED - Login was refused using authentication mechanism EXTERNAL. For details see the broker logfile.'
(6) The log message in the RabbitMQ broker was:
2019-10-15 20:17:46.028 [info] <0.642.0> accepting AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
2019-10-15 20:17:46.032 [error] <0.642.0> Error on AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671, state: starting):
EXTERNAL login refused: user 'CN=www.node.com,O=Node GmbH,L=NodeTown,ST=NodeProvince,C=DE' - invalid credentials
2019-10-15 20:17:46.043 [info] <0.642.0> closing AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
(7) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
Questions: Does anyone have any idea as to why my test fails? Where can I find a complete working example of setting up an encrypted connection to RabbitMQ using pika?
NB: I am familiar with this post but none of the tips in the post helped me.
After studying the link provided above by Luke Bakken, I am now in a position to answer my own question. The main change with respect to my original example is that I configure the RabbitMQ broker with a passwordless user whose name equals the CN field of the TLS certificates used on both the server and the client side. To illustrate, I go through my example again in detail below:
(1) The RabbitMQ configuration file is:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_cert_login_from = common_name
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = EXTERNAL
auth_mechanisms.2 = PLAIN
auth_mechanisms.3 = AMQPLAIN
Note that, with the ssl_cert_login_from configuration option, I am asking for the username of the RabbitMQ account to be taken from the "common name" (CN) field of the TLS certificate.
(2) The rabbitmq-auth-mechanism-ssl plugin is enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling can be confirmed by checking the enable status through command: rabbitmq-plugins list.
(3) The signed TLS certificate must have the issuer and subject CN fields equal to each other and equal to the hostname of the RabbitMQ broker node. In my case, inspection of the RabbitMQ log file (in /var/log/rabbitmq) shows that the broker is running on a node called: rabbit#pnp-vm2. The host name is therefore pnp-vm2. In order to check the CN fields of the client-side certificate, I use the following command:
ap#pnp-vm2:openssl x509 -noout -text -in /etc/cert/node.crt | fgrep CN
Issuer: C = CH, ST = CH, L = Location, O = Organization GmbH, CN = pnp-vm2
Subject: C = DE, ST = NodeProvince, L = NodeTown, O = Node GmbH, CN = pnp-vm2
As you can see, both the Issuer CN field and the Subject CN field are equal to "pnp-vm2" (the hostname of the RabbitMQ broker, see above). I tried using this name for only one of the two CN fields, but then the connection to the broker could not be established. In my test environment it was easy to create a client certificate with identical CN names, but in an operational environment this may be a lot harder to do. Also, I do not quite understand the reason for this constraint: is it a bug or a feature? And does it originate in the particular client library I am using (Python's pika) or in the AMQP protocol? These questions probably deserve a dedicated post.
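The same check can also be done from Python (assuming a reasonably recent cryptography package is installed; the certificate path is the one used in the openssl command above):
from cryptography import x509

with open("/etc/cert/node.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Issuer: ", cert.issuer.rfc4514_string())
print("Subject:", cert.subject.rfc4514_string())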
(4) The client-side program to set up the connection is:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(cafile="/home/ap/RocheTe/cert/sampleNodeCert/tms.crt")
context.load_cert_chain("/home/ap/RocheTe/cert/sampleNodeCert/node.crt",
                        "/home/ap/RocheTe/cert/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, 'pnp-vm2')
conn_params = pika.ConnectionParameters(host='a.b.c.d',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials(),
                                        heartbeat=0)

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))

input("Press Enter to continue...")
Here, "a.b.c.d" is the IP address of the machine on which the RabbitMQ broker is running.
(5) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
One final word of warning: with this configuration, I was able to establish a secure connection to the RabbitMQ Broker but, for reasons which I still do not understand, it became impossible to start the RabbitMQ Web Management Tool...

Erlang SSL server stops accepting connections

Setup:
Erlang cluster with two Erlang nodes, different names, identical SSL setup (certificates, keys, authority)
the two nodes are listening for connections on the same port
the accept scheme is simple, with no acceptor pool in front: ListenSocket = ssl:listen() when the app starts; then, in the children, I do AcceptSock = ssl:transport_accept(ListenSocket), ssl:ssl_accept(AcceptSock) and mysup:start_child(), which starts a new gen_server to listen on ListenSocket (in the gen_server's init() I use timeout == 0, btw, so that the gen_server receives a timeout message handled in handle_info(timeout, ...), which performs the accept scheme above).
Expected behavior:
I expect all of this to work all the time :)
Observed behavior:
from time to time, one or both servers stop accepting SSL connections from the iOS apps. telnet to that port still works, and the connection even passes transport_accept().
from the iOS app, I get an "SSLHandshake failed, error -9806", and it does not look like transport_accept() was successful (I have error logging before and after that line and I see no error messages printed in the log; in theory it looked like the iOS app was not even trying to connect to that port, but it did try, because it reports SSLHandshake failed).
I followed this thread and got the following:
openssl s_client -connect myserver:4321 -servername myserver -ssl3 -tls1 -prexit
CONNECTED(00000003)
write:errno=60
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Start Time: 1460057622
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
The same command executed against the second server (which is still accepting connections) returns a lot more information and does not time out.
Any help is appreciated, thank you.