Is this possible?
My understanding goes as far as this:
A non-proxy-aware client will negotiate SSL directly with a listener, without ever sending a CONNECT request to identify the destination host. A transparent proxy would need the destination host in order to forge a certificate with that hostname for the client.
I've read that some browsers support the "server_name" extension in the ClientHello message, which identifies the destination host, and that when the extension is present this can be done. However, I'm unaware which browsers, if any, support this extension.
I would think that this should be possible, but my efforts so far using Squid and Burp haven't been successful.
I understand that there's no way to obtain the destination host in the initial connection phase, but I would think that, with the correct configuration, it would be possible to forward the connection in the initial phase, capture the returned certificate in order to read the destination host, and then inject the proxy's own CA-signed certificate with the hostname derived from the legitimate certificate.
I think the best bet to get this working (if it's at all possible) is Squid's bump-server-first method: http://wiki.squid-cache.org/Features/BumpSslServerFirst
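For reference, the kind of squid.conf I've been experimenting with looks roughly like this (a sketch only; helper paths and the CA file vary by distribution, and the CA would have to be installed in the client's trust store):

    # Intercept port for transparently redirected HTTPS traffic (e.g. an iptables
    # REDIRECT of TCP 443 to 3129). server-first connects upstream first, reads the
    # real certificate, then mimics it with one signed by the proxy's CA.
    https_port 3129 intercept ssl-bump generate-host-certificates=on \
        dynamic_cert_mem_cache_size=4MB cert=/etc/squid/proxyCA.pem
    ssl_bump server-first all
    # Helper that generates the forged certificates (path varies by distribution)
    sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB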
I'm interested to hear if anyone has ever successfully gotten this to work.
Related
I am quite new to SSL matters, but I am afraid I can guess the final answer to the following problem/question:
We are building hardware (let's call them servers) whose IP addresses WILL change over their lifetime. Each server must be reachable in a secure manner. We are planning to use a TLS 1.3 secured connection to perform some actions on the servers (update firmware, change configuration and so on). As a consequence we need to provide each server with a certificate so that it can state its identity. The PKI issue is out of the scope of this question (we suppose), and we can take for granted that the clients and the servers will share a common trusted CA to ensure the SSL handshake goes OK. The servers will serve HTTP connections on their configured (changeable) IP addresses only. There is no DNS involved in the loop.
We are wondering how to set the servers' certificates appropriately.
As the IP will change, it cannot be used as the Common Name in the server's certificate.
Therefore, we are considering using something more persistent such as a serial number or a MAC address.
The problem is, as there is no DNS in the loop, the client cannot issue an HTTP request to www.serialNumberOfServer.com and must connect to http://x.y.z.t (which will change frequently, or at least frequently enough that we don't want to issue a new server certificate each time).
If we get it right, the SSL handshake requires the hostname (in the URL we are connecting to) to match either the commonName of the server's certificate or one of its Subject Alternative Names (SANs). Right? Here, that would be x.y.z.t.
So we think we are stuck in a situation in which the server cannot use its IP to prove its identity, while the client wants to use exactly that to connect to the server.
Is there any work around?
Are we missing something?
Any help would be very (VERY) appreciated. Do not hesitate to ask in case you need a more detailed explanation!
For what it's worth, the development environment will be Qt, using the QNetworkAccessManager/QSsl* framework.
If the client is not using DNS at all, then you do have a problem. The right solution is to use DNS or static hostname lists (e.g. /etc/hosts on Unix, or the hosts file on Windows). That will let you assign names appropriately.
If you can only use IP addresses, another option is to put all of the IP addresses the server might use into its certificate. This is only doable if there is a reasonably small number of addresses it might get assigned.
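A sketch of what that looks like with the OpenSSL command line (requires OpenSSL 1.1.1+ for -addext; the name and addresses are placeholders):

    # Self-signed certificate listing every address this server may be assigned
    openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
        -keyout server.key -out server.crt \
        -subj "/CN=server-SN0042" \
        -addext "subjectAltName = IP:192.0.2.10, IP:192.0.2.11, IP:198.51.100.7"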
Or you could keep a cache of certificates on the server, one per address, and have the web server's startup process select the right one. This requires somewhat more complex startup logic.
Edit: Finally, some SSL stacks (e.g. OpenSSL) let you decide, for each particular verification error, whether it should be treated as an error or can be ignored. This would let you override the errors on the client side. However, this is hard to implement properly and very prone to security issues: if you don't pin the remote certificate correctly, you're subjecting yourself to man-in-the-middle or other attacks by blindly accepting any old certificate. I don't remember if Qt's SSL library gives you this level of flexibility or not (I don't believe so, but I didn't go pull up the documentation).
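For illustration only, a sketch of what such selective overriding looks like with Qt's QNetworkReply (assumes Qt 5 and a reply obtained from QNetworkAccessManager::get(); the "server.crt" pinning file and the overall approach are placeholders, not a vetted solution):

    // Sketch: ignore only a host-name mismatch, and only when the presented
    // certificate equals one pinned out-of-band.
    const QSslCertificate pinned =
        QSslCertificate::fromPath(QStringLiteral("server.crt")).value(0);

    QObject::connect(reply, &QNetworkReply::sslErrors,
                     [reply, pinned](const QList<QSslError> &errors) {
        QList<QSslError> ignorable;
        for (const QSslError &e : errors) {
            if (e.error() == QSslError::HostNameMismatch && e.certificate() == pinned)
                ignorable.append(e);
        }
        reply->ignoreSslErrors(ignorable); // anything else still aborts the handshake
    });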
Came back to the subject 9 months later!
It turns out there is an easy solution (at least with the Qt framework):
Qt's QNetworkRequest::setPeerVerifyName does the job for us. It allows connecting to a host by its IP address while verifying a given CN during the SSL handshake.
See Qt's documentation extract below:
void QNetworkRequest::setPeerVerifyName(const QString &peerName)
Sets peerName as host name for the certificate validation, instead of the one used for the TCP connection.
This function was introduced in Qt 5.13.
See also peerVerifyName.
Just tested it successfully right now.
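A minimal sketch of the usage (Qt 5.13+; the IP address and the peer name are placeholders for your deployment):

    #include <QCoreApplication>
    #include <QDebug>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QNetworkAccessManager manager;

        // Connect using the server's current (changeable) IP address...
        QNetworkRequest request(QUrl(QStringLiteral("https://192.0.2.10/")));
        // ...but validate the certificate against the persistent name it was issued for.
        request.setPeerVerifyName(QStringLiteral("server-SN0042"));

        QNetworkReply *reply = manager.get(request);
        QObject::connect(reply, &QNetworkReply::finished, [&]() {
            qDebug() << "result:" << reply->errorString();
            reply->deleteLater();
            app.quit();
        });
        return app.exec();
    }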
We have a client whose code was written in Java 1.7. Java 1.7 by default refuses to connect via HTTPS to servers that return an SNI unrecognized_name warning. It's possible to turn off this behavior, but (of course) our client can't do that. Most other clients just ignore the warning.
We have a valid wildcard certificate for our domain, let's call it *.widgets.com. Anything in the domain widgets.com resolves to our HAProxy load balancer. We've installed that cert onto the load balancer, and we specify it in the front-end that listens on port 443. The cert is current and checks out fine when we test it from Qualys... except for that SNI warning.
Our client makes a call to a specific subdomain, say foo.widgets.com. The service is working fine, serving up content to anyone who calls it. Except for our client, of course, who won't connect to us after we return the SNI warning.
I've found lots of articles about how to solve this problem on Apache, but those don't help me with HAProxy. On HAProxy, I see that I can specify more than one cert, and I am told that HAProxy will "choose the right one". Do I need to get a separate, non-wildcard cert for foo.widgets.com? I don't want to buy another cert only to find out that that was not the solution.
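(For reference, the multi-cert setup I'm referring to looks roughly like this in haproxy.cfg; HAProxy selects among the loaded certificates based on the SNI name the client sends. The frontend/backend names and paths here are placeholders.)

    frontend https-in
        # 'crt' pointing at a directory loads every certificate in it; the one
        # matching the client's SNI is chosen, the first loaded is the default
        bind *:443 ssl crt /etc/haproxy/certs/
        default_backend widgets-servers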
It turns out the problem had little to do with HAProxy. Apparently we had an intrusion detection system in place that terminated TLS before relaying down to HAProxy.
There is probably a way to make the IDS behave properly, presenting the correct certificate to the client. But we don't really need IDS on our non-prod environments, anyway. So we left it switched off, and the problem went away.
So if you're having a similar issue, after making sure that your certificate is good for the request you're testing, my advice would be to check whether you have any security software that could be intercepting traffic before it reaches your LB.
For my current project I need to implement HTTP over TLS on the client side. For this I need a local server that can simulate it. Is there any online or offline tool I can use so that I can see the handshake in Wireshark?
For watching the TLS handshake you might not get far enough with Wireshark alone. For that kind of monitoring you would have to keep the security low enough that Wireshark (given the server's key) can recover the session key from watching the handshake and decode the later parts. Thus, you need to avoid any cipher suite with forward secrecy.
Otherwise, any (HTTPS) server that is accessible (and willing to talk) to your client will do.
In case you are in a supported environment (e.g. any Unix/Linux), you might try using openssl. It lets you set up a server that will do the handshake, and it will log the handshake so that you can look at what is going on. This can eliminate the need for Wireshark when debugging.
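For example, a sketch using the openssl command-line tools (file names are placeholders; the cipher choice deliberately avoids forward secrecy, per the caveat above, so captured traffic would stay decodable given server.key):

    # One-off self-signed certificate for the local test server
    openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
        -keyout server.key -out server.crt -subj "/CN=localhost"

    # Local TLS server: -msg prints every handshake message, -www serves a
    # status page; AES128-SHA forces plain RSA key exchange (no forward
    # secrecy), and TLS 1.3 is disabled for the same reason
    openssl s_server -accept 4433 -cert server.crt -key server.key \
        -www -msg -no_tls1_3 -cipher AES128-SHA

    # In another terminal: a test client that also prints the handshake
    openssl s_client -connect localhost:4433 -msg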
To achieve this you need a web server accepting connections over TLS. I have achieved this with the Apache Tomcat web server.

The TLS configuration is done in the server.xml file in the conf directory of the Tomcat web server. A Connector tag needs to be added to server.xml containing information such as: the TLS version to be used, the port, the list of supported cipher suites, the keystore path and password, and the truststore path and password.
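A hedged example of such a Connector (Tomcat 8.5-style JSSE attributes; the port, paths, passwords and cipher list are placeholders to adapt):

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               SSLEnabled="true" scheme="https" secure="true"
               sslEnabledProtocols="TLSv1.2"
               ciphers="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
               keystoreFile="conf/keystore.jks" keystorePass="changeit"
               truststoreFile="conf/truststore.jks" truststorePass="changeit"
               clientAuth="false" />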
Any regular REST client, such as the Postman client, can be used to make the call. But to use it over TLS/SSL, the certificate needs to be installed in the Chrome browser.
Hope this answers your doubt.
I am using the Let's Encrypt IIS client from https://github.com/Lone-Coder/letsencrypt-win-simple to generate a certificate for a server. Since the certificate is only valid for three months, I want it to auto-renew.
But the server for which I need that auto-renewing certificate is only bound to https://mysubdomain.mydomain.com:443 and smtp://mysubdomain.mydomain.com:25.

Both http://mysubdomain.mydomain.com:80 and ftp://mysubdomain.mydomain.com:21 point to a different server.

As you may have guessed, the error that is now thrown during the process is "The ACME server was probably unable to reach http://mysubdomain.mydomain.com:80/.well-known/acme-challenge/abcdefgh...xyz".

It is completely clear to me why, but I can't fix it, because http://mysubdomain.mydomain.com has to point to the other server. If the ACME server would try https://mysubdomain.mydomain.com:443/.well-known/acme-challenge/abcdefgh...xyz, ignoring any certificate issues, it would successfully find the challenge.
Is there anything I can do, any feature I have overlooked, that would help me to get automated renewal working?
There are multiple options:
http-01
Redirect http://example.com/.well-known/acme-challenge/* to https://example.com/.well-known/acme-challenge/*; Boulder will happily follow such a redirect and ignore the provided certificate. That's the simplest way if you have access to the other server and can configure that redirect. Since it's a permanent redirect that you don't have to adjust, it'll be just fine every three months.
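For example, if the other server happens to run nginx, the redirect could look like this (a sketch; adjust to whatever actually serves port 80 there):

    # On the other server, the one that actually answers on port 80
    server {
        listen 80;
        server_name mysubdomain.mydomain.com;

        location /.well-known/acme-challenge/ {
            return 301 https://mysubdomain.mydomain.com$request_uri;
        }
    }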
The option to use HTTPS directly has been removed due to security issues with some popular server software: when the requested name isn't configured on any virtual host, that software serves the first host defined, which might lead to wrong issuances in multi-user environments, a.k.a. shared hosting.
tls-sni-01
If you want to use just port 443, you can use another challenge type called tls-sni-01. But I think there's no client for Windows available yet that supports that challenge type.
dns-01
If you have control over the DNS via a simple API, you could also use the DNS challenge; it's completely independent of the ports you can use.
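With dns-01, the client proves control by publishing a TXT record along these lines (the value is a per-renewal token computed by the ACME client; shown here as a placeholder):

    _acme-challenge.mysubdomain.mydomain.com. 300 IN TXT "<per-renewal-token-digest>"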
I don't know exactly how to ask it, so I will try to explain with an example.
I have these resources on example.com, an HTTP/2 enabled server:
//example.com/css/file.css
//example.com/js/file.js
//example.com/images/file.png
What I want is to load one of these files through an alias domain, cdn.example2.com, that points to the domain example.com. So the actual resources inside the HTML would look like:
//example.com/css/file.css
//cdn.example2.com/js/file.js -> points to //example.com/js/file.js
//example.com/images/file.png
My question here is: will all the resources in the second example be loaded by the browser over a single connection, as they would be when there is no alias domain?
Thanks for the help.
If the aliases resolve to different IPs, there is no way the resources can be loaded over the same connection (HTTP/2 calls this "connection re-use", if I'm not mistaken). That's a problem you'll have with CDNs from here on.
But for your peace of mind, and to the utter rejoicing of CDNs, connection re-use is a tricky thing, and you may not get it even if all your domains resolve to the same IP, as is the case in your question.
To be future-proof, you may want to ensure that your sites have the certificate extensions configured correctly to enable connection re-use.
In the current versions of Firefox and Chrome, I haven't observed connection re-use, even after crafting the certificates with all due care and, of course, making sure that the two domains point to the same IP.
And just some food for thought: HTTP/2 over TLS requires SNI, which happens only when opening a connection. So when you connect for the first time to one domain, say example.com, the server obtains SNI data. But the server won't obtain such data if the same connection is re-used for a request to cdn.example.com. Some servers or usage scenarios may be sensitive to this asymmetry, and that may have something to do with the way browsers implement (or don't implement) connection re-use. But these are only speculations of yours truly...
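Concretely, the certificate extension in question is the SAN list: re-use is only permitted when one certificate covers both names. A sketch with OpenSSL 1.1.1+'s -addext (self-signed purely for illustration; domains as in the question):

    # One certificate covering both the origin and the alias
    openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
        -keyout site.key -out site.crt \
        -subj "/CN=example.com" \
        -addext "subjectAltName = DNS:example.com, DNS:cdn.example2.com"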
The specification doesn't require reuse, but it does explicitly describe when reuse is acceptable, such as when two hosts resolve to the same IP address.
https://www.rfc-editor.org/rfc/rfc7540#section-9.1.1
Connections that are made to an origin server, either directly or
through a tunnel created using the CONNECT method (Section 8.3), MAY
be reused for requests with multiple different URI authority
components. A connection can be reused as long as the origin server
is authoritative (Section 10.1). For TCP connections without TLS,
this depends on the host having resolved to the same IP address.
For "https" resources, connection reuse additionally depends on
having a certificate that is valid for the host in the URI. The
certificate presented by the server MUST satisfy any checks that the
client would perform when forming a new TLS connection for the host
in the URI.
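A quick way to check that last condition from the client side is to pull the certificate served for one host and confirm the other host appears in its SAN list (OpenSSL 1.1.1+; domains as in the question):

    # Fetch the certificate served for example.com and print its SAN list;
    # cdn.example2.com must appear there for re-use to be permitted
    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
        | openssl x509 -noout -ext subjectAltName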