SSL Wildcard Certificate

I'm looking for a wildcard certificate for a company domain, *.domain.com.
I don't need extended validation.
Entry-level questions:
Should I buy a level 2 or a level 3 certificate? What's the main difference?
Can I install (use) the same certificate/key pair on different machines (different IPs)? Some CAs list a dedicated IP as a requirement, but I'd like to use SNI for multiple virtual hosts.
In general, is it a good idea to rely on SNI support?

Should I buy a level 2 or a level 3 certificate? What's the main difference?
The difference is the number of intermediate CAs, which should not matter. Different CAs might have additional differences between their certificates, like lifetime, but this depends on the CA.
Can I install (use) the same certificate/key pair on different machines (different IPs)? Some CAs list a dedicated IP as a requirement, but I'd like to use SNI for multiple virtual hosts.
There is no restriction on using the same certificate for different IPs, different ports, multiple machines, etc., as long as the host name in each case matches the certificate. Of course, each of the machines needs access to the private key, so more machines means a larger attack surface.
In general, is it a good idea to rely on SNI support?
It depends on what kind of systems you need to support. If you expect only newer browsers as clients, SNI is fine. But SNI is not supported by Internet Explorer on Windows XP (any version, including IE8), by some Android applications (because of an old version of the Apache HTTP client library), by older versions of Java, or by older versions of scripting languages like Python and Perl, which often get used to automate tasks.
If you want to use the certificates not only for the web but also for mail, the situation might be even worse.
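For illustration, here is a minimal sketch (using Python's standard ssl module; the host name is a hypothetical placeholder) of what SNI is from the client side: a host name sent in the ClientHello so the server can pick the matching certificate.

    import socket
    import ssl

    HOSTNAME = "www.example.com"  # hypothetical; any SNI-enabled HTTPS host

    context = ssl.create_default_context()
    with socket.create_connection((HOSTNAME, 443)) as sock:
        # server_hostname is what puts the SNI extension on the wire; clients
        # too old to support SNI simply omit it, and the server falls back to
        # its default certificate, which then often fails name verification.
        with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
            print(tls.version(), tls.getpeercert()["subject"])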

Related

SSL Certificate issue with using HTTP/2 as a communication mechanism inside LAN

We are running servers which each run multiple processes, which in turn currently communicate with each other over HTTP (using 1-2 physical servers per client; there are multiple clients, each with separate servers).
The servers are hosted locally per client.
We're thinking of migrating our nginx service, which serves static files (multiple images, videos), to HTTP/2 in order to speed things up, as it is very common to request 1000 images at a time, an area where HTTP/2 excels.
For the client side we're using a chromium-based (Electron) client.
A problem arises from the above: a TLS certificate is required to use HTTP/2 in the version of Chromium we're using. Since this is a LAN, there's no domain name, and even the IP addresses are not guaranteed to be static.
Note: using TLS is just a bonus; our main goal is the latency improvement from HTTP/2.
Is there a way around this?
The solution was to issue a self-signed certificate for a domain, which was added to the hosts file of all affected clients. The certificate authority was manually declared as trusted on all client machines.
For a more general solution, one could use any DNS resolution option, so long as it is consistent across all clients, together with any signed certificate; a self-signed certificate would additionally require manually adding the CA file to all the clients.
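As a sketch of the certificate-issuing step, here is one way to generate such a self-signed certificate with Python's third-party cryptography package (the name internal.example.lan and the file names are hypothetical; the name must match the hosts-file entry):

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    NAME = "internal.example.lan"  # hypothetical; match your hosts-file entry

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, NAME)])

    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(subject)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(
            # Chromium validates against the SAN, not the CN, so include it.
            x509.SubjectAlternativeName([x509.DNSName(NAME)]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    with open("internal.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("internal.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

internal.crt is what gets imported as trusted on the client machines; internal.key stays on the server.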

HTTP/2 for applications deployed on the intranet / Lack of SSL possibilities

So HTTP/2 adds performance I'd love to harness. I don't like concatenating my JavaScript for various reasons, and HTTP/2 would make that unnecessary anyway.
BUT: I'm developing a webapp which is going to be deployed inside customers' local networks. Thus I cannot have SSL (neither domains nor IP addresses are fixed/known). Now Mozilla and Chrome have said they will only support HTTP/2 with TLS. To have that without browser warnings I need proper certificates, which I can't get. So does this mean HTTP/2 is dead for intranet applications?
There are a few scenarios where using HTTP/2 without SSL makes a lot of sense: relatively secure intranets are one of them, and website development is another. That said, you can still use HTTP/2 in your intranet by deploying SSL, and doing so is actually easier and cheaper in an intranet.
Usually you have much more control in an intranet, without any implied monetary cost. For example, you can set up a simple local DNS server (like dnsmasq, or the one built into Windows) to point domain names at IP addresses, and configure those addresses to be static through DHCP.
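For instance, a two-line dnsmasq configuration along these lines (the name, address, and MAC are hypothetical placeholders) pins a name to an address and hands that address out statically over DHCP:

    # Resolve the internal name to a fixed LAN address.
    address=/internal.example.lan/192.168.1.10
    # Always give this machine (identified by MAC) the same address via DHCP.
    dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.10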
The certificate issue is certainly trickier. You can use your client's internal Certificate Authority, if they already have one, or set one up for them.
And finally, if your client is so small that all the computers that will use your application are in the same building as the server, don't bother with concatenating the files or with HTTP/2.

Can I put multiple alternative certificates for a host, in a single certificate file?

I have a web service which is secured through HTTPS. I also have client software which talks to this web service, using libcurl (which may be linked to OpenSSL, or linked to GnuTLS; I don't know which one, it depends on how the user installed libcurl). Because the web service is only ever accessed through the client software and never through the browser, the web service utilizes a self-signed certificate. The client software, in turn, has a copy of this self-signed certificate and explicitly checks the connection against that certificate.
Because of Heartbleed, I want to change the private key and certificate. However I want my users to experience as little service disruption as possible.
For this reason, I cannot change the key/certificate on a fixed date and time. If I do this then all users must upgrade their client software at that exact date and time. Otherwise, the upgraded client software won't work before the server change, while old versions of the client software won't work after the server change.
Ideally, I want to tell my users that I'm going to change the certificate in 1 month, and that they have 1 month time to upgrade the client software. The client software should be compatible with both the old and the new certificate. Then, after 1 month, I can issue another client software update which removes support for the old certificate.
So now we've come to my question: can I append the old certificate and the new certificate into a single .crt file? Will this cause libcurl to accept both certificates? If not, what should I do instead? Does the behavior depend on the SSL library or version?
Tests on OS X seem to indicate that appending both certificates into a single file works, but I don't know whether this is OS X-specific behavior, or whether it works everywhere. My client software has to support a wide range of Unix systems, including Linux (multiple distros) and FreeBSD.
Short answer: You can't.
Long answer:
Yes, you can put multiple certificates in a single .crt file, regardless of platform.
However, an HTTPS server can only present one certificate per connection, not a whole .crt file. So it's not the file that is limiting you; it's the protocol.
You could have a look at SNI (https://en.wikipedia.org/wiki/Server_Name_Indication) to serve a different certificate based on the SNI information sent by the client at the beginning of the TLS handshake.
Alternatively, you could use a separate TCP port (or IP address, or both) that serves the new certificate.
But you say:
The client software, in turn, has a copy of this self-signed certificate and explicitly checks the connection against that certificate.
This already requires you to release a version of your software so that your clients at least have a copy of the new certificate you are going to use.
You would be better off with a certificate signed by a well-known CA, to decouple your server certificate from its validation chain, but that indeed means paying.
Yes, a .crt (PEM) file can hold multiple certificates; the CA bundles shipped with operating systems are exactly that, so I would expect this to be broadly supported.
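To make that concrete, here is a minimal sketch (Python's ssl module standing in for libcurl's CA-bundle handling; the file and host names are hypothetical) where the trust file simply contains the old and the new certificate concatenated:

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # "both-certs.pem" holds the old and the new self-signed certificate,
    # appended back to back; a server presenting either one should verify.
    context.load_verify_locations(cafile="both-certs.pem")

    with socket.create_connection(("service.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="service.example.com") as tls:
            print("server certificate expires:", tls.getpeercert()["notAfter"])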

SSL certificate for intranet web servers

I'm interested in purchasing a wildcard SSL certificate for my public domain (say example.com), so that we can run intranet web servers using a universally recognized CA (e.g., GoDaddy). I do plan to publish the DNS names publicly (e.g., internal.example.com), but their IP addresses are actually LAN addresses (e.g., 192.168.*.*). We want to use public DNS because these web servers may actually be development laptops which travel around, so we will use dynamic DNS to update the records. It's our intention that these web servers will only be available on the LAN each one is currently running on.
Will that work universally with all clients, e.g., TLS v1.2 ?
Thanks.
As long as the clients can route their traffic to these IP addresses, it will work (otherwise you won't get the connection, of course).
Certificate verification relies on two points:
Verifying that the certificate is genuine, trusted and valid in time.
Verifying that the identity of the certificate matches what you were looking for (host name verification).
This does not depend on the DNS resolution mechanism. These mechanisms are also orthogonal to the SSL/TLS specifications (although they do recommend verifying the remote party's identity).
I've seen this sort of setup used on various clients and platforms (IE, Chrome, FF, Java clients on Windows/Linux/Mac) and it worked fine.
Of course, whether all implementations do this well is hard to guarantee. There might be some implementation that thinks it's a good idea to perform a reverse DNS lookup, for example.
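As an illustration of that orthogonality, a minimal sketch (Python's ssl module; the name and address are hypothetical): the client connects to a private address however that address was obtained, and host name verification only cares about the name it was given.

    import socket
    import ssl

    LAN_IP = "192.168.1.23"        # where the laptop happens to be today
    NAME = "internal.example.com"  # the name covered by the wildcard cert

    context = ssl.create_default_context()
    with socket.create_connection((LAN_IP, 443)) as sock:
        # Verification checks NAME against the certificate; it does not care
        # whether NAME came from public DNS, dynamic DNS, or a hosts file.
        with context.wrap_socket(sock, server_hostname=NAME) as tls:
            print("verified", NAME, "over", tls.version())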

Self signed certificate for machine to machine https connection

I need to set up https communication between a Tomcat application server and a back end system. The web server hosts a public website, so is in a DMZ.
My question is whether there is any advantage in using official CA certificates over self-signed certificates in this situation (machine-to-machine communication).
I keep hearing self signed certificates should not be used on production systems, but I'm not sure I understand why (for machine to machine communication).
The risk lies in how effective the defenses protecting the hosts in question are, including the network connection between them. Given that weaknesses and exploits are found all the time, it is reasonable to say there could be issues with self-signed certificates used in a production environment, which includes hosts in a DMZ.
Here's the reason: man-in-the-middle. In short, if either host, or the network between them, becomes compromised, the traffic will still be encrypted, but because the certificate is self-signed (and therefore not anchored to a trusted third party), a man-in-the-middle (aka "MITM") could introduce a transparent proxy presenting its own self-signed certificate, which would be accepted by both sides.
If instead your hosts use a public CA, this MITM approach cannot work, because the proxy cannot present a certificate for the right name that chains to a trusted CA.
If the annual $15-50 investment per host costs more than the information on and between those hosts is worth, including what they could be turned into if compromised (e.g., serving malware), then the choice is simple: don't worry about buying certs. Otherwise, it's important to look into them.
The comment by Adam Hupp on this webpage provides a good, simple scenario:
http://www.vedetta.com/self-signed-ssl-certificates-vs-commercial-ssl-certificates-how-mozilla-is-killing-self-signed-certificates
And here's a more fleshed out description of the risk:
http://blog.ivanristic.com/2008/07/vast-numbers-of.html
And finally a balanced look at the two scenarios, though this article only considers self-signed OK when there is a fully-functional, properly protected and implemented Certificate Authority server installed:
http://www.networkworld.com/news/tech/2012/021512-ssl-certificates-256189.html
I see no advantage in using official certificates for this task, besides the fact that your marketing dept. could claim your infrastructure is "100% certified by $CA". Encryption algorithm and strength, and certificate lifetime, can be the same, depending on how you configure it.
The recommendations you hear probably focus on the far more common use of HTTPS for communication with browsers, which nowadays complain about self-signed certs. For data transfer between servers, I think it's good practice to encrypt traffic the way you plan to!
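One hedged sketch of that point (Python's ssl module; the host and file names are hypothetical): pinning the peer's known self-signed certificate makes the MITM scenario described above fail, because only that exact certificate verifies.

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Trust only the back end's own self-signed certificate, distributed out
    # of band; any other certificate, self-signed or CA-issued, is rejected.
    context.load_verify_locations(cafile="backend.crt")

    with socket.create_connection(("backend.internal", 8443)) as sock:
        with context.wrap_socket(sock, server_hostname="backend.internal") as tls:
            tls.sendall(b"GET /health HTTP/1.1\r\nHost: backend.internal\r\n\r\n")
            print(tls.recv(4096).decode(errors="replace"))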