I'm investigating a little problem for my employer. My company runs a website under an SSL certificate for the domain www.domainone.net.nz (yes, New Zealand).
However, there's been a high-level marketing decision to change our primary domain to www.domaintwo.co.nz.
So, easy, right? Buy a new SSL cert for www.domaintwo.co.nz and get it running on IIS. Easy.
However, we have a few WebServices published that need to be accessed over HTTPS - there are systems in place out in the wild that are using https://www.domainone.net.nz/
I would like to run BOTH certs at the same time, and give our partners and clients that are using these WebServices a set timeframe (six months, say) to roll over to the new domain, before revoking the www.domainone.net.nz cert.
This is a bit fiddly to search for - I keep getting explanations of wildcard SSL domains, which wouldn't help in this particular case, as the central domain name has changed.
Is this possible under IIS? My asp.dll shows version 6.0.3790.4195 (i.e. IIS 6.0 on Windows Server 2003).
It's possible. If you have separate IP addresses for the two sites, simply create two sites, one with each SSL certificate, and point the directories of both sites to the same place.
But with a single website, no, it's not possible.
You should be able to do this as long as you have two different IPs, one for each of the SSL certs. You may have to set up two sites that point to the same location to get it working properly, but I'm not sure.
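If the box ever moves to IIS 7 or later (the asp.dll version above is IIS 6, where the equivalent is done per-site in IIS Manager or with httpcfg), the two-site/two-IP layout can be scripted. A rough sketch, where the IPs, site names, thumbprints, and the appid GUID are all placeholders:

REM bind each site to its own IP on port 443
appcmd set site /site.name:"domainone" /+"bindings.[protocol='https',bindingInformation='203.0.113.10:443:']"
appcmd set site /site.name:"domaintwo" /+"bindings.[protocol='https',bindingInformation='203.0.113.11:443:']"
REM attach each certificate (by thumbprint) to its IP
netsh http add sslcert ipport=203.0.113.10:443 certhash=<thumbprint-one> appid={11111111-2222-3333-4444-555555555555}
netsh http add sslcert ipport=203.0.113.11:443 certhash=<thumbprint-two> appid={11111111-2222-3333-4444-555555555555}

Both sites can share the same physical home directory, so the web services answer identically on either name during the transition.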
I've been investigating the possibility of migrating to using Let's Encrypt to maintain the SSL certificates we have in place for the various resources we use for our operations. We have the following resources using SSL certificates:
Main website (www.example.com / example.com) - Hosted and maintained by a 3rd party who also maintains the SSL certificate
Client portal website (client.example.com) - IIS site hosted and maintained by us on a server located in a remote data center
FTP server (ftp.example.com) - WS_FTP Server hosted and maintained by us on a server located in a remote data center
Hardware firewall (firewall.example.com) - Local security appliance for our internal network
Remote Desktop Gateway (rd.example.com) - RDP server hosted and maintained by us on a server located locally
As indicated above, the SSL certificate for the main website (www) is maintained by the 3rd-party host, so I don't generally mess with that one. However, as you can tell, the DNS records for each of these endpoints point to a variety of different IP addresses. This is where my inexperience with the overall process of issuing and deploying SSL certificates has me a bit confused.
First of all, since I don't manage or maintain the main website, I'm currently manually generating the CSRs for each of the endpoints from the server/service that provides the endpoint - one from the IIS server, a different one from the RDP server, another from the WS_FTP server, and one from the hardware firewall. The manual process, while not excessively time-consuming, still requires me to go through several steps, with different server systems requiring different processes.
I've considered using one of Let's Encrypt's free wildcard SSL certificates to cover all four of these endpoints (*.example.com), but I don't want to "interfere" with what our main website host is doing on that end. I realize the actual certificate itself is presented by the server to which the client is connecting, so it shouldn't matter (right?), but I'd probably still be more comfortable with individual SSL certificates for each of the subdomain endpoints.
So, I've been working on building an application using the Certes ACME client library in an attempt to automatically handle the entire SSL process from CSR to deployment. However, I've run into a few snags:
The firewall is secured against connections on port 80, so I wouldn't be able to serve up the HTTP-01 validation file for that subdomain (firewall.example.com) on the device itself. The same is true for the FTP server's subdomain (ftp.example.com).
My DNS is hosted with a provider that does not currently offer an API (they say they're working on one), so I can't automate the process of the DNS-01 validation by writing the TXT record to the zone file.
I found the TLS-ALPN-01 validation method, but I'm not sure whether or not it's appropriate for the use case I'm trying to implement. According to the description of this method from Let's Encrypt (emphasis mine):
This challenge is not suitable for most people. It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns. Right now that mainly means large hosting providers, but mainstream web servers like Apache and Nginx could someday implement this (and Caddy already does).
Pros:
It works if port 80 is unavailable to you.
It can be performed purely at the TLS layer.
Cons:
It’s not supported by Apache, Nginx, or Certbot, and probably won’t be soon.
Like HTTP-01, if you have multiple servers they need to all answer with the same content.
This method cannot be used to validate wildcard domains.
So, based on my research so far and my environment, my three biggest questions are these:
Would the TLS-ALPN-01 validation method be an effective - or even available - option for generating the individual SSL certificates for each subdomain? Since the firewall and FTP server cannot currently serve up the appropriate files on port 80, I don't see any way to use the HTTP-01 validation for these subdomains. Not being able to use an API to automate a DNS-01 validation would make that method generally more trouble than it's worth. While I could probably do the HTTP-01 validation for the client portal - and maybe the RDP server (I haven't gotten that far in my research yet) - I'd still be left with handling the other two subdomains manually.
Would I be better off trying to do a wildcard certificate for the subdomains? Other than "simplifying" the process by reducing the number of SSL certificates that need to be issued, is there any inherent benefit to going this route versus using individual certificates for each subdomain? Since the main site is hosted/managed by a 3rd-party and (again) I can't currently use an API to automate a DNS-01 validation, I suppose I would need to use an HTTP-01 validation. Based on my understanding, that means that I would need to get access/permission to create the response file, along with the appropriate directories on that server.
Just to be certain, is there any chance of causing some sort of "conflict" if I were to generate/deploy a wildcard certificate to the subdomains while the main website still used its own SSL certificate for the www? Again, I wouldn't think that to be the case, but I want to do my best to avoid introducing more complexity and/or problems into the situation.
I've responded to your related question at https://community.certifytheweb.com/t/tls-alpn-01-validation/1444/2, but the short answer is to use DNS validation, and my suggestion is to use Certify DNS (https://docs.certifytheweb.com/docs/dns/providers/certifydns), which is a managed cloud implementation of acme-dns (CNAME delegation of DNS challenge responses).
Certify DNS is compatible with most existing acme-dns clients, so it can be used with those as well as with Certify The Web (https://certifytheweb.com).
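To make the CNAME-delegation idea concrete: with acme-dns (or a compatible service like Certify DNS), you create one static CNAME per hostname in your ordinary zone, pointing the ACME challenge label into a zone that does have an API. The target zone and record IDs below are hypothetical placeholders handed out when you register with the service:

_acme-challenge.client.example.com.    CNAME    <record-uuid>.auth.example.org.
_acme-challenge.ftp.example.com.       CNAME    <record-uuid>.auth.example.org.
_acme-challenge.firewall.example.com.  CNAME    <record-uuid>.auth.example.org.
_acme-challenge.rd.example.com.        CNAME    <record-uuid>.auth.example.org.

At renewal time the ACME client writes the TXT answer into the delegated zone over HTTPS (so no API at your own DNS host is needed, and port 80 on the firewall and FTP server never comes into play), and Let's Encrypt follows the CNAME. You can verify a delegation with:

$ dig +short CNAME _acme-challenge.client.example.com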
Good Afternoon,
I wanted to ask this question regarding SSL certificates. Our company manages several servers. For example:
location1.domain.com
location2.domain.com
location3.domain.com
Each of these hostnames goes to a different server with a different IP as far as connections from the outside world are concerned. And at each location, there are browsers on the local network that connect to the local server directly by its internal address.
For example:
192.168.2.130
The server is an Apache 2 running Ubuntu Server 14. In addition, in all the tutorials that I have looked at, one needs to know the IP address of the machine. At many of these locations the IP address often changes; they have dynamic IPs. What I was wondering is: what kind of SSL certificate do I need? I thought about a wildcard certificate but didn't know if it was overkill. I would also like the users at each location not to see the error message that comes from not having a correctly signed SSL certificate. Thanks in advance.
George
Unless the number of locations is constantly changing, you don't need a wildcard certificate; just get one per location. Certificates should always be assigned to a name, not an IP, so how the request is routed doesn't really matter.
If the internal users actually connect via IPs rather than names, that's something you need to fix, because you have to bind the certificate to a stable name. If you want the internal users to skip the global routing, you can use something like split-horizon DNS, as sketched below (basically, you serve your local users different DNS answers than the ones you publish to the internet).
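A minimal sketch of the split-horizon idea, assuming (hypothetically) that each location runs dnsmasq as its LAN resolver; the public zone keeps the public IP, and the LAN resolver overrides just the local name:

# /etc/dnsmasq.conf at location 1 (hostname and IP from the question's example)
# answer with the LAN address for the local server, forward everything else upstream
address=/location1.domain.com/192.168.2.130

Internal browsers then reach the server by the same name the certificate is issued to, so there's no warning, while the outside world still gets the public record.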
The context:
I currently have a multitenant site (sub1.maindomain.com) and I am working on adding several other sites. Some of the new sites (sub2.maindomain.com, secdomain.com, ...) will probably also be multitenant.
I have certificates for each site I add, but only one IP address.
I'm working on Windows Server 2012, IIS 8.5.
The problem:
In order to allow multiple certificates I have to enable SNI on the HTTPS binding. Once I enable SNI for the multitenant site (which means setting a host name on the binding), subdomains are no longer recognized (and therefore no multitenancy).
Changing/Renaming/Restructuring the sub1.maindomain.com domain is not a real option, since it's being used by active clients for hosted pages among other things.
So far:
I am considering a wildcard certificate on which I can have the domains for all sites (*.sub1.maindomain.com, *.maindomain.com, *.secdomain.com, ...), but I have read that some browsers might have an issue with it and that it is not recommended.
EDIT: It's been confirmed to me that I cannot consider the wildcard certificate option, mainly because of the price.
I have also tried using Application Request Routing to solve the issue as described here, but so far it hasn't panned out.
From what I've tried so far I am either getting certificate errors in some or all of my sites, or "turning off" the multitenancy for the multitenant sites.
Any ideas on how to proceed?
Since we have a single multitenant app, we allocated a second IP, the cost being acceptable. The multitenant app is on one IP; the single-tenant apps are all hosted on the other IP, using the SNI feature to enable the use of multiple certificates.
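A hedged sketch of that layout using appcmd and netsh (IPs, site names, thumbprints, and the appid GUID are placeholders; sslFlags='1' is the SNI flag, available since IIS 8):

REM multitenant site: catch-all binding (no host name, so every subdomain matches) on the first IP
appcmd set site /site.name:"sub1.maindomain.com" /+"bindings.[protocol='https',bindingInformation='203.0.113.10:443:']"
netsh http add sslcert ipport=203.0.113.10:443 certhash=<sub1-thumbprint> appid={11111111-2222-3333-4444-555555555555}
REM each single-tenant site: an SNI binding on the second IP
appcmd set site /site.name:"secdomain.com" /+"bindings.[protocol='https',bindingInformation='203.0.113.11:443:secdomain.com',sslFlags='1']"
netsh http add sslcert hostnameport=secdomain.com:443 certhash=<sec-thumbprint> appid={11111111-2222-3333-4444-555555555555} certstorename=MY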
So, as I understand it, intranet SSL certs will no longer be available from 2015. To get around this, I could instead generate my own certificates and install them on the networked machines. My question is: in that case, would issuing my own certificates be bad practice? I can't think of any other solutions.
Presumably, by "intranet certificate", you mean a certificate that's issued to a local host name (e.g. "sqlserver" or "mail") or a private IP address.
There's one simple solution to this: use fully qualified domain names, even in an intranet. The clients connecting to your intranet servers will need to use the FQDNs too, but that's generally not very difficult. There's also nothing to prevent you from making your DNS resolve myinternalserver.mycompany.com to any IP address you'd like, including private IP addresses, even if the DNS servers are hosted outside your company's network. (For SSL/TLS verification, you don't even need reverse DNS to work, so that's not a problem.)
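For example (hypothetical name and address), the public zone simply answers with a private IP, which you can confirm from any client:

$ dig +short myinternalserver.mycompany.com
192.168.10.5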
Managing your own CA is also a solution, but it can be quite a bit of administrative burden (depending on the size of your environment).
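If you do stand up your own CA, here's a minimal OpenSSL sketch (all file names and the FQDN are placeholders; the subjectAltName extension matters because modern clients ignore the CN):

# create the CA key and a self-signed root valid for ten years
$ openssl genrsa -out myca.key 4096
$ openssl req -x509 -new -sha256 -days 3650 -key myca.key -subj "/CN=MyCompany Internal CA" -out myca.crt
# issue a server certificate for an internal FQDN
$ openssl genrsa -out server.key 2048
$ openssl req -new -key server.key -subj "/CN=myinternalserver.mycompany.com" -out server.csr
$ openssl x509 -req -in server.csr -CA myca.crt -CAkey myca.key -CAcreateserial -sha256 -days 825 -out server.crt -extfile <(printf "subjectAltName=DNS:myinternalserver.mycompany.com")

The remaining burden is distributing myca.crt to every client's trust store and keeping track of what you've issued.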
(From a security point of view, I think these intranet certificates shouldn't really exist anyway, since two completely different entities may be issued with distinct certificates valid for the same (relative) identity.)
I'm setting up a webserver for a system that needs to be used only through HTTPS, on an internal network (no access from the outside world).
Right now I have it set up with a self-signed certificate, and it works fine, except for a nasty warning that all browsers display, as the CA used to sign it is naturally not trusted.
Access is provided by a local DNS domain name resolved on local DNS server (example: https://myapp.local/), that maps that address to 192.168.x.y
Is there some provider that can issue me a proper certificate for use on an internal domain name (myapp.local)? Or is my only option to use a FQDN on a real domain, and later map it to a local IP address?
Note: I would like an option where it's not necessary to mark the server's public key as trusted in each browser, as I have no control over the workstations.
You have two practical options:
Stand up your own CA. You can do it with OpenSSL and there's a lot of Google info out there.
Keep using your self-signed cert, but add the public key to your trusted certs in the browser. If you're in an Active Directory domain, this can be done automatically with group policy.
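For option 2, the trust-store step itself is a single command per machine, which the Trusted Root Certification Authorities group policy automates domain-wide. A sketch, assuming the exported certificate is named myca.crt:

REM run elevated on the workstation; adds the cert to the machine-wide trusted roots
certutil -addstore -f Root myca.crt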
I did the following, which worked nicely for me:
I got a wildcard SSL cert for *.mydomain.com (Namecheap, for example, provide this cheaply)
I created a CNAME DNS record pointing "mybox.mydomain.com" at "mybox.local".
I hope that helps - unfortunately you'll have the expense of a wildcard cert for your domain name, but you may already have that.
You'd have to ask the typical cert people for that. For ease of use I'd go with the FQDN, though; you might use a subdomain of your already-registered one: https://mybox.example.com
Also you might want to look at wildcard certificates, providing a blanket cert for (e.g.) https://*.example.com/ - even usable for virtual hosting, should you need more than just this one cert.
Certifying sub- or sub-sub-domains of a FQDN should be standard business - maybe not for the point&click big guys that pride themselves on providing certificates in just 2 minutes.
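For reference, requesting such a wildcard starts from an ordinary CSR with the wildcard as the common name; a sketch where the key and file names are placeholders:

$ openssl req -new -newkey rsa:2048 -nodes -keyout wildcard.key -subj "/CN=*.example.com" -out wildcard.csr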
In short: To make the cert trusted by a workstation you'd have to either
change settings on the workstations (which you don't want) or
use an already trusted party to sign your key (which you're looking for a way around).
That's all your choices. Choose your poison.
I would have added this as a comment, but it was a bit long.
This is not really an answer to your questions, but in practice I've found that it's not recommended to use a .local domain - even if it's on your "local" testing environment, with your own DNS Server.
I know that Active Directory uses the .local name by default when you install DNS, but even people at Microsoft say to avoid it.
If you have control over the DNS Server you can use a .com, .net, or .org domain - even if it's internal and private only. This way, you could actually buy the domain name that you are using internally and then buy a certificate for that domain name and apply it to your local domain.
I had a similar requirement: getting our company's browsers to trust our internal websites.
I didn't want our public DNS to publish records for our internal sites, so the only way I found to make this work was to use an internal CA.
Here's the writeup for this:
https://medium.com/@mike.reider/getting-firefox-chrome-to-trust-your-internal-websites-internal-certificate-authority-a53ba2d4c2af
I think the answer is no.
Out of the box, browsers won't trust certificates unless they ultimately chain to someone pre-programmed into the browser, e.g. VeriSign or register.com.
You can only get a verified certificate for a globally unique domain.
So I'd suggest that instead of myapp.local you use myapp.local.yourcompany.com, for which you should be able to get a certificate, provided you own yourcompany.com. It'll cost you though, several hundred per year.
Also be warned: wildcard certificates might only go down one level -- so you could use one for a.yourcompany.com and local.yourcompany.com, but maybe not for b.a.yourcompany.com or myapp.local.yourcompany.com, unless you pay more.
(Does anyone know, does it depend on the type of wildcard certificate? Are sub-sub-domains trusted by the major browsers?)
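To answer that parenthetical: a wildcard matches exactly one label (per RFC 6125), so *.yourcompany.com covers a.yourcompany.com but never b.a.yourcompany.com, whatever the certificate type or browser; covering deeper names requires additional SAN entries. You can check exactly which names a given certificate covers by dumping them:

$ openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"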
Development purposes only
This docker image solves the problem (thanks to local-ip.co): https://github.com/medic/nginx-local-ip.
It launches a reverse proxy on port 443 with a public cert that works with any *.my.local-ip.co domain. E.g. if your local IP is 192.168.10.10, then 192-168-10-10.my.local-ip.co already points to it (it's a public domain)! Assuming the app is running on your computer on port 8080, you only need to execute this to proxy your app and expose it at the URL https://192-168-10-10.my.local-ip.co:
$ APP_URL=http://192.168.10.10:8080 docker-compose up
The domain is resolved by whatever public DNS is configured on the devices from which you want to access the app, but your traffic stays local between your app and the client (through the proxy), so you can even use it to connect from devices within the same LAN, without any of the traffic going out to the internet.
The reason this is mostly useful for development is that anybody can launch an application with this same certificate, so it's not really secure, but it is helpful when you need to expose your app over HTTPS while developing or testing (e.g. HTML5 apps in Android that are loaded in a WebView).