What is SSL context?

When programming with SSL, no matter which language you choose (C++, Java, Ruby, etc.), you will probably encounter an SSLContext object. I do not know what SSLContext semantically means. When I search Google for it, I just find many pages explaining the syntactic usage of such an object in various programming languages.
My question: what does an SSLContext mean/do in terms of SSL, regardless of the language that implements it?

An SSL context is a collection of ciphers, protocol versions, trusted certificates, TLS options, TLS extensions, etc. Since it is very common to have multiple connections with the same settings, they are put together in a context, and the relevant SSL connections are then created based on this context. To create a new connection you need only refer to the context, which saves time and memory compared with re-creating all of these settings for each connection.
EDIT: @EJP nicely describes this "collection" as a factory. An SSL context is not the same as an SSL session, even though both are collections of settings. A session is what you get after the SSL handshake; it covers only the cipher and protocol version both parties agreed on, and also the exchanged key. The context, in contrast, covers all the ciphers and protocol versions, and also the list of trusted certificates, that the local system (client or server) is willing to support when establishing a new TLS connection. Thus an SSL session describes an established SSL relation, while the SSL context describes what you need to establish an SSL relation.

From the Java documentation:
SSLContext: Instances of this class represent a secure socket protocol implementation which acts as a factory for secure socket factories or SSLEngines. This class is initialized with an optional set of key and trust managers and a source of secure random bytes.
SSLSession: In SSL, sessions are used to describe an ongoing relationship between two entities. Each SSL connection involves one session at a time, but that session may be used on many connections between those entities, simultaneously or sequentially. The session used on a connection may also be replaced by a different session. Sessions are created, or rejoined, as part of the SSL handshaking protocol. Sessions may be invalidated due to policies affecting security or resource usage, or by an application explicitly calling invalidate. Session management policies are typically used to tune performance.
SSLSessionContext: An SSLSessionContext represents a set of SSLSessions associated with a single entity. For example, it could be associated with a server or client that participates in many sessions concurrently. An SSLSessionContext can be configured with a session timeout.
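To make the factory relationship concrete, here is a minimal Java sketch; the JSSE class names are real, but the host and port are placeholders:

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class ContextDemo {
        public static void main(String[] args) throws Exception {
            // The context bundles the shared settings: enabled protocols,
            // key/trust material (null = platform defaults), randomness source.
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, null, null);

            // The context acts as a factory: many connections, one configuration.
            SSLSocketFactory factory = ctx.getSocketFactory();
            try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
                socket.startHandshake();
                // The session is the handshake's result: the one cipher suite and
                // protocol version both sides agreed on, not the full lists above.
                System.out.println(socket.getSession().getCipherSuite());
            }
        }
    }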

Related

How long is an established SSL connection valid?

Suppose I am sending "hello" to an API over SSL. My understanding is that a symmetric key would be established during the SSL handshake, and the message "hello" would then be encrypted using that symmetric key and sent over to the other server.
Now my question is, the next time I send a "hello 2", does the symmetric key exchange happen again? My guess would be that if it's a persistent connection, there would be no need for the key exchange again. Can someone confirm?
Meta: this doesn't appear to me to be programming, although it might be development, and it is mostly a dupe of How long does SSL connection between a client and a server persist?.
It depends on the application protocol used on top of SSL (which since 1999 is really TLS, although many things, e.g. implementation classes, still use the old name) and usually on the implementations at both ends. For example, HTTP/1.1 defaults to connection persistence (which was often done in 1.0 as an extension called keep-alive), but either endpoint can change this by specifying Connection: close, and even if the connection is kept open either can choose to close it anytime later, perhaps after a minute or two, perhaps after a day or a week. The HTTPS implementation in browsers usually keeps connections open for a little while, but has limits on the total connections open, so those that haven't been used recently may need to be closed when others are opened. Other applications, libraries, and platforms vary. Other protocols also vary; for example, an email agent using SMTPS would normally make a connection, transmit one or more emails, and then disconnect.
In addition, SSL-now-TLS through 1.2 supports session resumption, which allows the key exchange (and other handshake results) performed on one connection to be saved (at both endpoints, or with the 'ticket' option, at the client only) and reused on a new connection, for as long as the endpoint(s) agree; implementations usually call this session caching. See e.g. RFC 5246 section 7.3, specifically the part starting in the middle of page 36, and for one fairly common server (Apache) see the SSLSessionCache and SSLSessionCacheTimeout directives. Resumption uses a new handshake but not a full key exchange on that handshake.
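A rough Java sketch of what session caching looks like from the client side; example.com is a placeholder, whether the second connection actually resumes depends on the server, and note that TLS 1.3 reports session IDs differently:

    import java.util.Arrays;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class ResumptionDemo {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory f = SSLContext.getDefault().getSocketFactory();
            byte[] first, second;
            // First connection: full handshake, including the key exchange.
            try (SSLSocket s = (SSLSocket) f.createSocket("example.com", 443)) {
                s.startHandshake();
                first = s.getSession().getId();
            }
            // Second connection from the same context: the client offers the
            // cached session; if the server still has it, no full key exchange.
            try (SSLSocket s = (SSLSocket) f.createSocket("example.com", 443)) {
                s.startHandshake();
                second = s.getSession().getId();
            }
            System.out.println("same session: " + Arrays.equals(first, second));
        }
    }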
However, this creates a security vulnerability if an endpoint's session cache is compromised, so TLS 1.3 replaces it with a different method using dynamically created PSKs; see RFC 8446 section 2.2. This allows either a partial handshake (doing the actual key exchange with [EC]DHE, but with authentication tied to the previous session by the PSK rather than full certificate-based authentication), which provides forward secrecy, or a minimal handshake (using the PSK both as the new initial secret and for authentication), which does not.
If you want an answer for specific software, and specific server(s), you need to look at the capabilities and configuration, and often also the current status, of that software and those server(s).

ssl connection, using a hostname that is not in the SAN list of the host's certificate

I am quite new to SSL stuff, but I am afraid I can guess the final answer to the following problem/question:
We are building hardware (let's call them servers) that WILL have IP address modifications over their lifetime. Each server must be reachable in a secured manner. We are planning to use a TLS 1.3 secured connection to perform some actions on the servers (update firmware, change configuration and so on). As a consequence, we need to provide the servers with one certificate (each) so that they can state their identity. The PKI issue is out of the scope of this question (we suppose), and we can take for granted that the clients and the servers will share a common trusted CA to ensure the SSL handshake goes OK. The servers will serve HTTP connections on their configured (changeable) IP addresses only. There is no DNS involved in the loop.
We are wondering how to set the servers' certificates appropriately.
As the IP will change, it cannot be used as the common name in the server's certificate.
Therefore, we are considering using something more persistent, such as a serial number or a MAC address.
The problem is, as there is no DNS in the loop, the client cannot issue an HTTP request to www.serialNumberOfServer.com and must connect to http://x.y.z.t (which will change frequently, or at least frequently enough that we don't want to issue a new server certificate each time).
If we get it right, certificate validation requires the hostname (the one in the URL we are connecting to) to match either the commonName of the server's certificate or one of its Subject Alternative Names (SANs). Right? Here, it would be x.y.z.t.
So we think we are stuck in a situation in which the server cannot use its IP to prove its identity, while the client wants to use that IP exclusively to connect to the server.
Is there any workaround?
Are we missing something?
Any help would be very (VERY) appreciated. Do not hesitate to ask in case you need a more detailed explanation!
For what it's worth, the development environment will be Qt, using the QNetworkAccessManager/QSsl* classes.
If you're not having the client use DNS at all, then you do have a problem. The right solution is to use DNS or static hostname lists (e.g. /etc/hosts on Unix or the hosts file on Windows). That will let you set names appropriately.
If you can only use IP addresses, another option is to put all of your IP addresses into the certificate that the server might use. This is only doable if you have a reasonably small number of addresses that they might get assigned.
Or you could keep a cache of certificates on the server, one per address, and have part of the web server's startup process select the right certificate. This requires a somewhat more complex startup.
Edit: Finally, some SSL stacks (e.g. OpenSSL) let you decide, for each particular verification error, whether it should be treated as an error or ignored. This would let you override the errors on the client side. However, this is hard to implement properly and very prone to security issues: if you don't bind the remote certificate properly, you're subjecting yourself to man-in-the-middle or other attacks by blindly accepting any old certificate. I don't remember if Qt's SSL library gives you this level of flexibility or not (I don't believe so, but I didn't go pull up the documentation).
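For comparison, in Java the safe version of this idea keeps full CA chain validation and overrides only the name check. A hedged sketch follows; the expected CN "server-1234" and the IP URL are made-up placeholders, and real code should parse the SAN list rather than substring-match the DN:

    import javax.net.ssl.HostnameVerifier;
    import javax.net.ssl.HttpsURLConnection;
    import java.net.URL;

    public class PinnedNameDemo {
        public static void main(String[] args) throws Exception {
            // CA chain validation still happens as usual; only the
            // URL-host-vs-certificate-name comparison is replaced.
            HostnameVerifier pinned = (hostname, session) -> {
                try {
                    // Naive check against the expected subject CN; production
                    // code should properly parse the DN and the SAN entries.
                    String dn = session.getPeerPrincipal().getName();
                    return dn.contains("CN=server-1234");
                } catch (Exception e) {
                    return false;
                }
            };

            HttpsURLConnection conn = (HttpsURLConnection)
                    new URL("https://203.0.113.7/").openConnection();
            conn.setHostnameVerifier(pinned); // per-connection override
            System.out.println(conn.getResponseCode());
        }
    }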
Came back to this subject 9 months later!
It turns out there is an easy solution (at least with the Qt framework).
Qt's QNetworkRequest::setPeerVerifyName does the job for us. It allows connecting to a host using its IP address while verifying a given CN during the SSL handshake.
See Qt's documentation extract below:
    void QNetworkRequest::setPeerVerifyName(const QString &peerName)
Sets peerName as host name for the certificate validation, instead of the one used for the TCP connection.
This function was introduced in Qt 5.13.
See also peerVerifyName.
Just tested it positively right now

Can I use kafka over Internet?

Is Kafka suitable for Internet use?
More precisely, what I want is to expose kafka topics as "public interface", then external consumers (or producers) can connect to it. Is it possible?
I hear there are problems if I want to use the cluster in both internal and external networks, because it is then hard to configure advertised.host.name. Is that true?
And do I have to expose ZooKeeper as well? I think the new consumer/producer APIs no longer need that.
Kafka's wire protocol is TCP-based and works fine over the public internet. In the latest versions of Kafka you can configure multiple interfaces for both internal and external traffic. Examples of Kafka over the internet in production include several Kafka-as-a-Service offerings from Heroku, IBM MessageHub, and Confluent Cloud.
You do not need to expose ZooKeeper if the Kafka clients use the new consumer API.
You may also choose to expose a REST proxy, such as the open source Confluent REST Proxy, as a more firewall-friendly interface for clients, since it runs over HTTP(S) and will not be blocked by most corporate or personal firewalls.
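As an illustrative sketch of the dual-listener setup mentioned above (the listener names, hosts, and ports are placeholders, not required values), a broker's server.properties might contain something like:

    # Internal listener for the private network, external one for the Internet.
    listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
    advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://203.0.113.10:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
    inter.broker.listener.name=INTERNAL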
I would personally not expose the Kafka servers directly to clients via TCP, for these reasons, to name only a few:
If a bad client opens too many connections, this may affect the stability of the Kafka platform and may affect other clients too
Too many open files on the Kafka server; HW/SW settings and OS tuning are needed to limit uncontrolled clients
If you need to add a Kafka server to increase scalability, you may need to go through a lot of low-level configuration (firewall, IP visibility, certificates, etc.) on both the client and server side. Other products address these problems using gateways or proxies: Coherence uses extend proxy clients, TIBCO EMS uses routed destinations, other software (many JMS servers) uses store-and-forward mechanisms, etc.
Maintenance of the Kafka nodes, if clients attach directly to the Kafka servers, will also have to consider the needs of the clients and the SLA (service level agreement) that has been defined with them (e.g. 24x7x365)
If you also use Kafka as a back-end service, a multi-layered architecture should be taken into consideration: FE gateways and BE services, etc.
Other considerations require understanding what exactly you consider to be an external (over the Internet) consumer/producer in your system. Is it a component of your system that needs to access the Kafka servers? Is it internal or external to your organization? Etc.
...
Naturally, all these considerations can also be correctly addressed using a direct TCP connection to the Kafka servers, but I would personally use a different solution.
HTTP proxies
Or at least I would use a dedicated FE Kafka server (or a couple of servers for HA) for each client, forwarding the messages to the main group of Kafka servers
It is possible to expose Kafka over the internet (in fact, that's how managed Kafka providers such as Aiven and Instaclustr make their money) but you have to ensure that it is adequately secured. At minimum:
ZooKeeper nodes should reside in a private subnet and not be routable from outside. ZK's security is inadequate and, at any rate, it is no longer required to bootstrap Kafka clients with ZK address(es).
Limit access to the brokers at the network level. If all your clients connect from a trusted network, then set appropriate firewall rules. If in AWS, use VPC peering or Direct Connect if you are connecting cloud-to-cloud or cloud-to-ground. If most of your clients are on a trusted network but a relative minority are not, force the latter to go via a VPN tunnel. Finally, if you want to allow connectivity from arbitrary locations, you'll just have to allow * on port 9092 (or whichever port you configure the brokers to listen on); just make sure that the other ports are closed.
Enable TLS (SSL) for client-broker connections. This is easily configured with a self-signed CA. Depending on how you expose your listeners, you may need to disable SSL hostname verification on the client. (The certificate chain of trust breaks if the advertised host names don't match the certificate's common name.) The clients will need the CA certificate installed. (Same CA that signed the brokers' certs.)
Optionally, you may enable mutual TLS authentication; however, this is logistically more taxing, as it requires each client to have its own private key that is signed by a CA trusted by the broker.
Use SASL to authenticate the client to the broker and create individual users for each application and each person that is expected to access the cluster.
Issue minimally-sufficient cluster- and topic-level access privileges in the ACLs for each user, following the Principle of Least Privilege (PoLP).
One other thing to bear in mind: not all tooling supports SASL/SSL connectivity, and some tools actually require a connection to ZooKeeper nodes (which will not be reachable in the above setup). Make sure any tooling you rely on uses the 'new' style of connectivity directly to the Kafka brokers and does not require a ZooKeeper connection.
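To illustrate the client side of the TLS + SASL setup described in the list above, here is a minimal, hedged Java producer sketch; the bootstrap address, truststore path, credentials, and topic name are placeholders, while the configuration keys are standard Kafka client settings:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SecureProducerDemo {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "broker.example.com:9093");
            // TLS for transport security, SASL for per-user authentication.
            p.put("security.protocol", "SASL_SSL");
            p.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
            p.put("ssl.truststore.password", "changeit");
            p.put("sasl.mechanism", "SCRAM-SHA-256");
            p.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"app1\" password=\"app1-secret\";");
            p.put("key.serializer", StringSerializer.class.getName());
            p.put("value.serializer", StringSerializer.class.getName());

            // close() on the producer flushes the pending record.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                producer.send(new ProducerRecord<>("public-topic", "hello"));
            }
        }
    }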
Beyond configuring client TLS, brokers have to have public IPs, which we try to avoid. Normally for other services we hide everything behind load balancers. Would this be possible with Kafka?
I'm not sure the Confluent REST Proxy hosted on a public server is a real option when you need the high-performance batching of the Java producer client.

Private key change on SSL certificate renewal

When renewing a certificate with a new private key, what happens with browsers that connected previously? Will the old certificate be cached and requests encrypted incorrectly? Is it possible at all to run multiple servers load-balanced at layer 4, with some of them having new and others old certificates, without causing connections to fail, assuming no sticky sessions are used?
Clients usually do not cache SSL/TLS certificates. Only if you use the Public Key Pinning Extension for HTTP (HPKP) do clients cache and check the provided certificate (or, to be exact, certain properties of that certificate). To accommodate changing the certificate, HPKP can "allow" multiple certificates (e.g. one old and one new).
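As a hedged illustration of that last point (the pin values below are placeholders, and note that HPKP has since been deprecated by browsers), such a response header could pin both the current and the next key:

    Public-Key-Pins: pin-sha256="<current-key-hash>"; pin-sha256="<backup-key-hash>"; max-age=5184000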
Regarding the load balancer: if it works at OSI layer 4, I assume it works at the TCP level. Therefore each server behind the balancer establishes its own SSL/TLS session. If the sessions are not shared among the servers, there should not be a problem even if not all servers use the same certificate, as long as all certificates are valid.
Clients can provide an SSL/TLS session ID when starting the SSL/TLS connection, but the server decides whether the session is known or not. Therefore, if the client references a session from a different server, nothing bad happens; the client and server just establish a new session.

Two-way SSL Verifications

I'm trying to find out more information on the details of two-way SSL authentication. What I want to know is what verifications are done when one client receives another's certificate. (See the Verify Circle in the image below)
[Image: two-way SSL verification with CA certificates, http://publib.boulder.ibm.com/infocenter/tivihelp/v5r1/topic/com.ibm.itim.infocenter.doc/images/imx_twowaysslcacert.gif]
Does someone have a list of all of the steps? Is there a standards document I can be pointed to? Does each server implement it differently?
Mainly what I'm asking is... does the server do a verification of the other server's hostname vs the certificate's Common Name (CN)?
As @user384706 says, it's entirely configurable.
The scenario you're talking about is one where a machine is both a server and a client (and is the client as far as the SSL/TLS connection is concerned).
You don't necessarily gain much more security by verifying that the connection originates from the CN (or perhaps Subject Alternative Name) of the certificate that is presented.
There are a couple of issues:
If the SSL/TLS server is meant to be used by clients that are both end-users and servers themselves, you're going to have two different rules depending on which type of client you're expecting for a particular certificate. You could have a rule based on whether the client certificate has the "server" extended key usage extension or only the client one, but this can get a bit complex (why not).
The client (which is also a server) may be coming through a proxy, depending on the network where it is, in which case the source IP address will not match what you'd expect.
Usually, client-certificate authentication relies on the fact that private keys are assumed to be kept protected. If a private key is compromised by an attacker on the server, the attacker may also have the ability to spoof the origin IP address when making the connection (or to make the connection from the compromised server directly). This being said, servers tend to have private keys that are not password-protected, so the check may help a little bit in case the key was copied discreetly.
I think some tools are so strict that they not only verify that the CN matches the FQDN of the incoming connection: they also check that it is the reverse DNS entry for the source IP address. This can cause a number of problems in practice, since some servers may have multiple CNAME entries in DNS, in which case the CN would be legitimate, but not necessarily the primary FQDN for that IP address.
It all really depends on the overall protocol and general architecture of the system.
RFC 6125 (Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS)), recently published, considers this scenario out of scope.
The closest reference I can think of is SIP.
Mainly what I'm asking is... Does the server do a verification against the other server's hostname vs the certificate's Common Name (CN)?
This is configurable.
It is possible to configure strict checking and not accept connections from entities sending a certificate whose CN does not match the FQDN, despite the fact that the certificate is considered trusted (e.g. signed by a trusted CA).
It is possible to relax this and not perform the check, and either accept the certificate or delegate the decision to the user. E.g. IE shows a pop-up warning saying that the certificate's name does not match the FQDN: do you want to proceed anyway?
From a security perspective, the safest approach is to do strict verification.
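As a hedged server-side sketch in Java of what such strict checking could look like (the key/trust stores are assumed to be set via the standard javax.net.ssl system properties, and the DN handling is deliberately simplified):

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLServerSocket;
    import javax.net.ssl.SSLSocket;

    public class StrictMutualAuthDemo {
        public static void main(String[] args) throws Exception {
            // Assumes javax.net.ssl.keyStore / trustStore system properties are set.
            SSLContext ctx = SSLContext.getDefault();
            SSLServerSocket server = (SSLServerSocket)
                    ctx.getServerSocketFactory().createServerSocket(8443);
            server.setNeedClientAuth(true); // two-way SSL: require a client certificate

            try (SSLSocket client = (SSLSocket) server.accept()) {
                client.startHandshake();
                // The chain was already validated against our trust store.
                // Strict mode additionally compares the certificate identity
                // with the reverse DNS name of the connecting peer.
                String dn = client.getSession().getPeerPrincipal().getName();
                String host = client.getInetAddress().getCanonicalHostName();
                if (!dn.contains("CN=" + host)) {
                    client.close(); // reject: certificate name does not match peer
                }
            }
        }
    }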