SSL certificates with unknown domain name

We're having an issue with securing an intranet/internet website with SSL where
we can't know the fully qualified domain name in advance.
Basically, I'm trying to make a program that will be installed on a webserver
outside my direct control, to be accessible over intranet or internet. In either case
I want it to be secured via SSL (https). To do this, I would like to include and
install an SSL certificate on the target machine. My installer is fully prepackaged
and should not require any during- or post-install intervention from my
end. The problem is, I can't know the target machine's host name or domain
name ahead of time, so as far as I can tell the SSL connection will return warnings (or
worse?) when accessed, since the certificate I include will (must) have a different
name on it.
I really want to avoid those warnings, but I still want to keep it secure. Is there
any way to set up an SSL connection without certificate warnings when the domain
name isn't known ahead of time?
Thanks for any help you folks can give.

What you want to do is not possible. Here's why.
A certificate will include a set of names (Common Name, possibly along with Subject Alternative Names, possibly including wildcard names).
The client's web browser will do the following:
The user wanted to visit "https://myapp.mydomain.com/blog/posts/1".
The request is via SSL and the domain name in the request is "myapp.mydomain.com".
Get the certificate from the Web server.
Ensure that at least one of the names in the certificate is exactly equal to, or wildcard-matches, the domain name in the request.
Display the page.
Therefore, you need a certificate with the exact domain name (or a wildcard matching the exact domain name) by which the application will be used. And the certificate can only be obtained at or after the time when the exact domain name of the website becomes known, not any earlier.
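As an aside, you can check exactly which names a deployed certificate carries with OpenSSL; a quick sketch (the host name is just an example, and the -ext flag assumes OpenSSL 1.1.1 or later):
$ openssl s_client -connect myapp.mydomain.com:443 -servername myapp.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName
The browser accepts the certificate only if the host name in the URL matches one of the names this prints.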
You seem to be under the misapprehension that somehow a certificate can "create" or "install" an SSL connection. That is false. The Web server - Apache, IIS, Nginx, lighttpd, or whichever one you happen to use - is the program that handles every aspect of SSL connectivity. The certificate is just a file that the Web server sends along to the client; the server doesn't otherwise use it.
Additionally, the author of a webapp to be distributed is not responsible for creating or distributing certificates, and should not be under the misapprehension that he is. Only the website maintainer should be responsible for obtaining a certificate for his website. As another answer remarked, in your installation process, or perhaps in a post-installation step, you may ask the person installing the webapp for a certificate. But that is the best you can do.

The best you can do is to buy a wildcard SSL certificate - but wait, it's not what you think. You still need to know the second-level domain (the TLD being ".com") ahead of time. You can effectively ask for a cert that covers *.foo.com - then any site a.foo.com or b.foo.com will be covered. Of course, these certs are more expensive than single-FQDN certs, because you are doing the buggers out of some extra coin.
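For illustration, a CSR for such a wildcard cert could be generated like this (file names and domain are placeholders; your CA will have its own submission process):
$ openssl req -new -newkey rsa:2048 -nodes -keyout wildcard.foo.com.key -out wildcard.foo.com.csr -subj "/CN=*.foo.com"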
-Oisin

Each of those sites should have their own SSL certificate. Why not prompt the user to provide the cert file during installation?

In most (if not all) cases, the SSL certificate is associated with the webserver (Apache, IIS, etc.) and is not part of your application. It's up to the admin of the web server to install the certificate, not you as the author of the program.
If your installation program does have the ability to modify the web server configuration, and you are willing to use a self-signed certificate, you can script the creation of the certificate so that the domain name can be input at install time. However, I sense this is not really available to you. Also, a self-signed certificate will generally still cause certificate warnings.
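For what it's worth, a minimal sketch of such a scripted self-signed certificate, with the domain supplied at install time (assumes OpenSSL 1.1.1+ for -addext; all names are hypothetical):
$ DOMAIN=myapp.example.com
$ openssl req -x509 -newkey rsa:2048 -nodes -days 825 -keyout "$DOMAIN.key" -out "$DOMAIN.crt" -subj "/CN=$DOMAIN" -addext "subjectAltName=DNS:$DOMAIN"
The subjectAltName extension matters: modern browsers match the requested host against the SAN, not the CN.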

If I understand you correctly, there might be a solution to your problem now. This solution won't help you, however, if you have no control over specifying which SSL certificates are served from the web server where your program is installed (as mentioned by someone else). If your program itself contains a web server, you won't have this issue.
If you start with a trusted https website, you can make cross-domain TLS (SSL) XmlHttpRequests to the web servers that are running your application. This is made possible by the open-source Forge project. The project uses a TLS implementation written in JavaScript and a small Flash swf to handle the cross-domain requests. Your program will need to serve an XML Flash policy file that grants the trusted website access to the web server running the application (sketched below).
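For reference, such a policy file is a small XML document; a sketch with an assumed trusted-site name (the exact attributes depend on your Forge setup):
$ cat > crossdomain.xml <<'EOF'
<?xml version="1.0"?>
<cross-domain-policy>
  <allow-access-from domain="trusted.example.com" secure="true"/>
</cross-domain-policy>
EOF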
Your program will also need to generate a self-signed SSL certificate and upload it to the trusted website. From there, each program's certificate can be included as trusted via the JavaScript TLS implementation. Alternatively, you can have your program upload its certificate to be signed by a CA you create, using a common or subject alternative name that is appropriate for your use (it doesn't have to be the domain name). Then you can use JavaScript to trust the CA certificate and look for the correct name on each certificate.
For more details check out the Forge project at github:
http://github.com/digitalbazaar/forge/blob/master/README
The links to the blog posts at the end provide more in-depth information about how it works.

Related

Automated ACME subdomain SSL certificate generation for resources on different IP addresses

I've been investigating the possibility of migrating to using Let's Encrypt to maintain the SSL certificates we have in place for the various resources we use for our operations. We have the following resources using SSL certificates:
Main website (www.example.com / example.com) - Hosted and maintained by a 3rd party who also maintains the SSL certificate
Client portal website (client.example.com) - IIS site hosted and maintained by us on a server located in a remote data center
FTP server (ftp.example.com) - WS_FTP Server hosted and maintained by us on a server located in a remote data center
Hardware firewall (firewall.example.com) - Local security appliance for our internal network
Remote Desktop Gateway (rd.example.com) - RDP server hosted and maintained by us on a server located locally
As indicated above, the SSL certificate for the main website (www) is maintained by the 3rd-party host, so I don't generally mess with that one. However, as you can tell, the DNS records for each of these endpoints point to a variety of different IP addresses. This is where my inexperience with the overall process of issuing and deploying SSL certificates has me a bit confused.
First of all, since I don't manage or maintain the main website, I'm currently manually generating the CSRs for each of the endpoints from the server/service that provides the endpoint - one from the IIS server, a different one from the RDP server, another from the WS_FTP server, and one from the hardware firewall. The manual process, while not excessively time-consuming, still requires me to go through several steps on different server systems, each requiring a different process.
I've considered using one of Let's Encrypt's free wildcard SSL certificates to cover all four of these endpoints (*.example.com), but I don't want to "interfere" with what our main website host is doing on that end. I realize the actual certificate itself is presented by the server to which the client is connecting, so it shouldn't matter (right?), but I'd probably still be more comfortable with individual SSL certificates for each of the subdomain endpoints.
So, I've been working on building an application using the Certes ACME client library in an attempt to automatically handle the entire SSL process from CSR to deployment. However, I've run into a few snags:
The firewall is secured against connections on port 80, so I wouldn't be able to serve up the HTTP-01 validation file for that subdomain (firewall.example.com) on the device itself. The same is true for the FTP server's subdomain (ftp.example.com).
My DNS is hosted with a provider that does not currently offer an API (they say they're working on one), so I can't automate the process of the DNS-01 validation by writing the TXT record to the zone file.
I found the TLS-ALPN-01 validation method, but I'm not sure whether or not it is appropriate for the use case I'm trying to implement. According to the description of this method from Let's Encrypt (emphasis mine):
This challenge is not suitable for most people. It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns. Right now that mainly means large hosting providers, but mainstream web servers like Apache and Nginx could someday implement this (and Caddy already does).
Pros:
It works if port 80 is unavailable to you.
It can be performed purely at the TLS layer.
Cons:
It’s not supported by Apache, Nginx, or Certbot, and probably won’t be soon.
Like HTTP-01, if you have multiple servers they need to all answer with the same content.
This method cannot be used to validate wildcard domains.
So, based on my research so far and my environment, my three biggest questions are these:
Would the TLS-ALPN-01 validation method be an effective - or even available - option for generating the individual SSL certificates for each subdomain? Since the firewall and FTP server cannot currently serve up the appropriate files on port 80, I don't see any way to use the HTTP-01 validation for these subdomains. Not being able to use an API to automate a DNS-01 validation would make that method generally more trouble than it's worth. While I could probably do the HTTP-01 validation for the client portal - and maybe the RDP server (I haven't gotten that far in my research yet) - I'd still be left with handling the other two subdomains manually.
Would I be better off trying to do a wildcard certificate for the subdomains? Other than "simplifying" the process by reducing the number of SSL certificates that need to be issued, is there any inherent benefit to going this route versus using individual certificates for each subdomain? Since the main site is hosted/managed by a 3rd-party and (again) I can't currently use an API to automate a DNS-01 validation, I suppose I would need to use an HTTP-01 validation. Based on my understanding, that means that I would need to get access/permission to create the response file, along with the appropriate directories on that server.
Just to be certain, is there any chance of causing some sort of "conflict" if I were to generate/deploy a wildcard certificate to the subdomains while the main website still used its own SSL certificate for the www? Again, I wouldn't think that to be the case, but I want to do my best to avoid introducing more complexity and/or problems into the situation.
I've responded to your related question at https://community.certifytheweb.com/t/tls-alpn-01-validation/1444/2, but the answer is to use DNS validation, and my suggestion is to use Certify DNS (https://docs.certifytheweb.com/docs/dns/providers/certifydns), a managed cloud implementation of acme-dns (CNAME delegation of DNS challenge responses).
Certify DNS is compatible with most existing acme-dns clients, so it can be used with those as well as with Certify The Web (https://certifytheweb.com).
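In case it helps, the CNAME delegation amounts to one manual record per name in the API-less zone, pointing at a zone the ACME client can update (all names below are made up):
_acme-challenge.client.example.com.  CNAME  <your-id>.auth.certifydns.example.
_acme-challenge.ftp.example.com.     CNAME  <your-id>.auth.certifydns.example.
After that one-time setup, each DNS-01 renewal writes its TXT record only to the delegated zone. You can verify the delegation resolves with:
$ dig +short CNAME _acme-challenge.client.example.com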

Can I put multiple alternative certificates for a host, in a single certificate file?

I have a web service which is secured through HTTPS. I also have client software which talks to this web service, using libcurl (which may be linked to OpenSSL, or linked to GnuTLS; I don't know which one, it depends on how the user installed libcurl). Because the web service is only ever accessed through the client software and never through the browser, the web service utilizes a self-signed certificate. The client software, in turn, has a copy of this self-signed certificate and explicitly checks the connection against that certificate.
Because of Heartbleed, I want to change the private key and certificate. However I want my users to experience as little service disruption as possible.
For this reason, I cannot change the key/certificate on a fixed date and time. If I do this then all users must upgrade their client software at that exact date and time. Otherwise, the upgraded client software won't work before the server change, while old versions of the client software won't work after the server change.
Ideally, I want to tell my users that I'm going to change the certificate in 1 month, and that they have 1 month to upgrade the client software. The client software should be compatible with both the old and the new certificate. Then, after 1 month, I can issue another client software update which removes support for the old certificate.
So now we've come to my question: can I append the old certificate and the new certificate into a single .crt file? Will this cause libcurl to accept both certificates? If not, what should I do instead? Does the behavior depend on the SSL library or version?
Tests on OS X seem to indicate that appending both certificates into a single file works, but I don't know whether this is OS X-specific behavior, or whether it works everywhere. My client software has to support a wide range of Unix systems, including Linux (multiple distros) and FreeBSD.
Short answer: You can't.
Long answer:
Yes, you can put multiple certificates in a single .crt file, regardless of platform.
However, an HTTPS server can only present one certificate in a handshake, not a whole .crt file. So it's not the file that is limiting you, it's the protocol.
You could have a look at SNI (https://en.wikipedia.org/wiki/Server_Name_Indication) to serve a different certificate based on the SNI information sent by the client at the beginning of the SSL handshake.
Alternatively, you could use a separate TCP port (or IP, or both) that serves the new certificate.
But you say
The client software, in turn, has a copy of this self-signed certificate and explicitly checks the connection against that certificate.
This then requires you to release a version of your software so that your clients at least have a copy of the new certificate you are going to use.
I guess you would be better off using a certificate signed by a well-known CA, to decouple your server certificate from its validation chain, but that indeed means paying.
Yes, a cert file should be able to hold multiple certificates. I would expect this to be broadly supported.
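A quick sketch of both halves, with assumed file and host names - a PEM bundle is just concatenation, and curl (OpenSSL-backed) will accept a server presenting either certificate when handed the bundle as its CA file:
$ cat old-server.crt new-server.crt > trusted-bundle.crt
$ curl --cacert trusted-bundle.crt https://yourservice.example.com/
Since libcurl may sit on top of OpenSSL or GnuTLS, as the question notes, it's worth testing the bundle against both backends.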

Can I reuse SSL certificate on a local machine with the same (locally configured) URL?

Here's a possible scenario.
Let's say I have a website "https://www.mywebsite.com" and there is a valid SSL certificate purchased for this domain.
I want to "mimick" this website on my LOCAL machine for a testing purpose.
So let's say I set up a locally-configured "https://www.mywebsite.com" (which is in essence https://localhost/mywebsite or something similar).
Would I be able to re-use the SSL certificate on my local testing website?
You can re-use your SSL certificate if you configure your DNS so that your test machine has the same domain name as the server, which is probably a bad idea.
You can also re-use it on your test machine if you don't mind clicking through the "accept this whacked-out SSL cert" box... So I suppose the answer is technically yes, although I wouldn't personally do it.
It depends what you are trying to test and why you need a certificate for testing.
If you use the certificate, it will correctly encrypt connections using SSL, but any client will get a certificate mismatch error. If you use a self-signed certificate instead, most clients will give you a warning about that, so it might be just as annoying - or not.
If you are testing, for instance, a deployment script to make sure everything gets installed in the right place, it will work. If you are testing to make sure your code correctly redirects a non-secure connection to a secure one, it will work.
If you want to test your website for functionality, usability, bugs, etc., then your testers will likely complain about the certificate warnings or errors, and you're probably better off doing something else.
I am not sure, since the SSL certificate is bound to the domain name that was registered with the certificate. But you may be able to dupe the certificate by adding an entry to your hosts file that points www.mywebsite.com at 127.0.0.1... in theory at least. If not, this is a question for serverfault.com.
Hope this helps,
Best regards,
Tom.
You can't use it since the SSL cert is tied to the domain www.mywebsite.com unless you do a bit of trickery.
You can put an entry in your hosts file saying that the domain is at 127.0.0.1, but that's not ideal, as you could no longer reach the real website from that machine.
If you just need a valid cert to test with, then a better alternative is to self-sign using the IIS Resource Kit.
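For completeness, the hosts-file trick mentioned in both answers is a single line (requires admin rights; on Windows the file is C:\Windows\System32\drivers\etc\hosts):
$ echo "127.0.0.1   www.mywebsite.com" | sudo tee -a /etc/hosts
Remember to remove the entry afterwards, or the real site will stay unreachable from that machine.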
I'm no expert on DNS, but this would introduce a pretty major vulnerability.
Basically, if this were allowed, DNS poisoning could be used to defeat the whole purpose of third-party trust.
Think about it:
I infect your computer so that when you go to www.amazon.com, the name resolves to a different server. That server uses Amazon's SSL cert to fool you into thinking it's legit, so you send me your credit card information.
So, the answer to your question is: no, you can't do this. You will still get errors. My guess is that somewhere in the verification chain, the browser compares the domain the request was made to against the names on the certificate, to verify there is a match.
As others have said, you can test SSL with a self-signed cert; you just have to instruct your testers to import the cert, or go to the trouble of building your own trusted CA and have testers add that CA as a trusted CA.
There is no point in stealing another site's SSL cert.
Of course, you could use the vulnerability in MD5 to create your own valid SSL cert:
http://www.digicert.com/news/2009-01-05-md5-ssl.htm

Silly SSL cert question for Windows 2000/ASP/IIS

I've got an SSL certificate for what I think is my domain, and I want to apply it to two separate applications in that domain that run under classic ASP in IIS on Windows 2000.
I have the following stupid questions:
Are certificates issued for URLs or domains? Or subdomains?
Can I use the same cert for multiple websites (applications) within that domain, or do I need a separate one?
Can I inspect the cert file to determine for what or to whom it's issued?
Thanks!
1) Web certs are issued to a domain. Specifically, the CN attribute of the certificate must match the domain used to access your site.
2) Certs are usually installed per host (or virtual host). If you had a cert for the domain wwwapps.domain.tld, you could have one app at /calendar and one app at /contacts.
3) Yes; depending on the format and where it is, this can be easy or hard. If you have a .crt file and you are running Windows, just click on it. You should see the details.
If you want to inspect a certificate that is installed on a site, you usually have to click on the padlock icon.
On Windows you can also open up the MMC, add the Certificates snap-in, and see any/all installed certificates on the local machine or in your profile.
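If OpenSSL happens to be available, the same inspection works from the command line (cert.crt is an assumed file name):
$ openssl x509 -in cert.crt -noout -subject -issuer -dates
That prints whom the certificate was issued to (question 3), who issued it, and its validity period.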
They are issued for domains. Subdomains require their own certs. You can buy a special wildcard cert for your domain that covers its subdomains, but they are more expensive.
If you buy a cert for mydomain.com, you can use it for anything that starts with https://mydomain.com/
Yes, you can do this for any cert. Check out the lock icon in your browser's address bar.
It's usually issued to a single web server host (basically a computer's CNAME or A record), like foo.bar.com, where foo is a name of the host the certificate request was generated for and bar.com is its domain.
Thus it will work for any application or virtual directory that responds to https://foo.bar.com - like https://foo.bar.com/planner/ - but nothing more.
For https://*.bar.com you can get a wildcard certificate that lets you handle any number of hosts without any hassle - at a greater cost.
There are also multiple-SAN (UCC) certificates that can contain a specific number of host names in a single certificate like webmail.bar.com and autodiscover.bar.com for an Exchange 2007 server serving both web access and Outlook Anywhere from the same physical machine and NIC.
If it's in .cer format, simply opening it in Windows will show the details; if it's a .pfx or in some other transport format, you'd need to import it first.
You basically install the certificate on a Web Site node in IIS, and anything you can fit beneath that (or anything you can make respond to the issued common name foo.bar.com, say by putting a modern firewall in front of it) will work.
Thanks! I enabled port 443 for the site at the domain on the cert, loaded the cert via directory security in IIS for each subfolder, and enabled 128-bit encryption. Worked like a champ!

HTTPS Certificate for internal use

I'm setting up a webserver for a system that needs to be used only through HTTPS, on an internal network (no access from the outside world).
Right now I have it set up with a self-signed certificate, and it works fine, except for a nasty warning that all browsers throw, as the CA used to sign it is naturally not trusted.
Access is provided by a local DNS domain name resolved on local DNS server (example: https://myapp.local/), that maps that address to 192.168.x.y
Is there some provider that can issue me a proper certificate for use on an internal domain name (myapp.local)? Or is my only option to use an FQDN on a real domain and map it to a local IP address?
Note: I would like an option where it's not necessary to mark the server's public key as trusted in each browser, as I have no control over the workstations.
You have two practical options:
Stand up your own CA. You can do it with OpenSSL and there's a lot of Google info out there.
Keep using your self-signed cert, but add the public key to your trusted certs in the browser. If you're in an Active Directory domain, this can be done automatically with group policy.
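A minimal sketch of option 1 with OpenSSL, under assumed names (the <(...) syntax requires bash):
$ openssl req -x509 -newkey rsa:4096 -nodes -days 3650 -keyout ca.key -out ca.crt -subj "/CN=My Internal CA"
$ openssl req -newkey rsa:2048 -nodes -keyout myapp.local.key -out myapp.local.csr -subj "/CN=myapp.local"
$ openssl x509 -req -in myapp.local.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 -extfile <(printf "subjectAltName=DNS:myapp.local") -out myapp.local.crt
You would then distribute ca.crt as a trusted root (e.g. via Group Policy, as option 2 describes) and install myapp.local.key/.crt on the web server.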
I did the following, which worked nicely for me:
I got a wildcard SSL cert for *.mydomain.com (Namecheap, for example, provide this cheaply)
I created a CNAME DNS record pointing "mybox.mydomain.com" at "mybox.local".
I hope that helps - unfortunately you'll have the expense of a wildcard cert for your domain name, but you may already have that.
You'd have to ask the typical cert people for that. For ease of use I'd go with the FQDN though; you might use a subdomain of your already registered one: https://mybox.example.com
Also you might want to look at wildcard certificates, providing a blanket cert for (e.g.) https://*.example.com/ - even usable for virtual hosting, should you need more than just this one cert.
Certifying sub- or sub-sub-domains of an FQDN should be standard business - maybe not for the point&click big guys that pride themselves on providing certificates in just 2 minutes.
In short: To make the cert trusted by a workstation you'd have to either
change settings on the workstations (which you don't want) or
use an already trusted party to sign your key (which you're looking for a way around).
Those are all your choices. Choose your poison.
I would have added this as a comment but it was a bit long..
This is not really an answer to your questions, but in practice I've found that it's not recommended to use a .local domain - even if it's on your "local" testing environment, with your own DNS Server.
I know that Active Directory uses the .local name by default when you install DNS, but even people at Microsoft say to avoid it.
If you have control over the DNS Server you can use a .com, .net, or .org domain - even if it's internal and private only. This way, you could actually buy the domain name that you are using internally and then buy a certificate for that domain name and apply it to your local domain.
I had a similar requirement: have our company's browsers trust our internal websites.
I didn't want our public DNS to publish records for our internal sites, so the only way I found to make this work was to use an internal CA.
Here's the writeup for this:
https://medium.com/@mike.reider/getting-firefox-chrome-to-trust-your-internal-websites-internal-certificate-authority-a53ba2d4c2af
I think the answer is no.
Out of the box, browsers won't trust certificates unless they've ultimately been verified by someone pre-programmed into the browser, e.g. VeriSign or register.com.
You can only get a verified certificate for a globally unique domain.
So I'd suggest that instead of myapp.local you use myapp.local.yourcompany.com, for which you should be able to get a certificate, provided you own yourcompany.com. It'll cost you though - possibly several hundred per year.
Also be warned that wildcard certificates might only go down one level - so you could use one for a.yourcompany.com and local.yourcompany.com, but maybe not for b.a.yourcompany.com or myapp.local.yourcompany.com, unless you pay more.
(Does anyone know - does it depend on the type of wildcard certificate? Are sub-sub-domains trusted by the major browsers?)
Development purposes only
This docker image solves the problem (thanks to local-ip.co): https://github.com/medic/nginx-local-ip
It launches a reverse proxy on port 443 with a public cert that works for any *.my.local-ip.co domain. E.g., if your local IP is 192.168.10.10, then 192-168-10-10.my.local-ip.co already points to it (it's a public domain)! Assuming the app is running on your computer on port 8080, you only need to execute the following to proxy your app and expose it at https://192-168-10-10.my.local-ip.co:
$ APP_URL=http://192.168.10.10:8080 docker-compose up
The domain is resolved through whatever public DNS the client devices are configured with, but the traffic stays local between your app and the client (through the proxy), so you can even use it to connect to devices within the same LAN without any of the traffic going out to the internet.
The reason this is mostly useful for development is that anybody can launch an application with this same certificate, so it's not really secure. It is helpful, though, when you need to expose your app over HTTPS while developing or testing (e.g. HTML5 apps in Android that are loaded in a WebView).