I am trying to set up NGINX to perform client certificate authentication against multiple clients. The problem I have is that those clients will have different certificates, basically different Root CAs:
[clientA.crt] ClientA > IntermediateA > RootA
[clientB.crt] ClientB > IntermediateB1 > IntermediateB2 > RootB
I looked at the NGINX documentation and noticed the ssl_client_certificate directive. However, that directive alone does not seem to be enough; for example, if I configure it to work only for clientA for now:
ssl_client_certificate /etc/nginx/ssl/clientA.crt;
ssl_verify_client on;
Then I received a 400 error code back. By looking at other questions, I figured out that I also have to use ssl_verify_depth 3. Therefore, if I want to concatenate both clientA and clientB into a bundled PEM to allow both clients, will I need to use a high value? What's the purpose of this directive, and what are the implications of setting it to a high number with a bundled PEM?
The http://nginx.org/r/ssl_client_certificate directive is used to specify which certificates you trust for client-based authentication. Note that the list of CAs in that file is basically advertised to the client on every connection attempt (use ssl_trusted_certificate as per the docs if that's not desired).
As per the above, note that ssl_verify_depth basically controls how easy it would be to get into your system: if you set it to a high enough value, and someone is able to obtain a certificate from one of the CAs that you trust, or through one of the intermediaries those CAs trust to issue their own certificates, then they'd be able to authenticate with your nginx, whether or not that's what you intended.
As such, the normal practice is that all certificates used for client-based authentication are issued by a privately sanctioned CA, so there normally shouldn't be much length to the chain. If you want to equalise the depth between the two CAs, to get the best protection from ssl_verify_depth, it's conceivable to create an extra CA to add to the depth, and then add that CA to the trusted list instead of what's now an actual intermediary. (Note that it gets complicated once you involve a few intermediaries: the browser would need to know of their existence, which is usually cached, and that can result in a number of ghost issues when not cached.)
Also, note that you don't actually have to have a single CA in the specified file — it can include multiple unrelated "root" CAs, so, if you want to add multiple independent CAs, you don't actually have to bother creating another CA to certify them — you can just include such independent CAs as-is.
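As a concrete sketch of the bundled setup (the file names and paths here are assumptions, not something from the question): concatenate the trusted roots for both client populations into one file, and pick a depth large enough for the longer chain, which in your case is clientB's.

# roots (and, if the clients don't send them in the handshake, the intermediates) for both client populations
cat RootA.crt RootB.crt > /etc/nginx/ssl/client-ca-bundle.pem

ssl_client_certificate /etc/nginx/ssl/client-ca-bundle.pem;
ssl_verify_client      on;
ssl_verify_depth       3;   # enough to cover clientB's chain: ClientB > IntermediateB1 > IntermediateB2 > RootB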
Be sure that you use two different root certificates with actually different DNs. If you are trying this locally with certificates that have the same DN, it will not work, as stated by nginx here:
https://trac.nginx.org/nginx/ticket/1871#ticket
We were stuck on this for a long time before I found this old issue, also in Kubernetes:
https://github.com/kubernetes/ingress-nginx/issues/4234
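To confirm that the two roots really do have distinct DNs, you can print and compare their subjects (a small sketch; rootA.crt and rootB.crt are placeholder file names):

openssl x509 -noout -subject -issuer -in rootA.crt
openssl x509 -noout -subject -issuer -in rootB.crt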
I'm in the process of connecting to an external server and am making a CSR to receive some certificates from them, and I have some questions regarding this.
Some tutorials state that you should save the private key, as it will be used during installation of the certificate. However, when using the Windows certificate manager (certmgr.msc), I think it generates the private key under the hood, and the resulting CSR file does not contain any private key. So in that case I won't have access to any private key at all, unless I can export it from the certificate I receive later? I was also under the impression that a private key is not needed for installation of the certificate, as it is just imported into the certificate store? If that's the case, does the private key have any use besides generating the public key?
I was also wondering about where the certificate can be used. It seems that the certificate can only be used on the server where the CSR was created. However, my application will run on Azure, so how can I get a certificate that can be used in the cloud?
Last question: The certificate provider supplies three certificates, one root, one intermediate and one "actual" certificate. What is the purpose of these different certificates?
I'd appreciate any insight or guidance on this process. There are tons of guides out there, but many of them seem to contradict each other in some way or another.
(certmgr.msc) I think [it] generates the private key under the hood,
Correct. You generate the key and CSR, send the latter to the CA, and (we hope!) get back a cert containing your publickey and identity (for SSL/TLS your identity is your domain name or names), plus any needed chain certs (usually one intermediate and a root, but this can vary). You import the cert to certmgr, which matches it up with the existing, stored but hidden privatekey to produce a pair of cert+privatekey which is now visible and usable.
To use this in a Windows program, like IIS, you also need the chain cert(s), see below, in your store -- for these just the cert(s) not the privatekey(s), which you don't have and can't get. If you use an established public CA like Comodo, GoDaddy, LetsEncrypt their root is usually already in your store, and if you use a CA run by your employer their root may well be already in your store for other reasons such as email; if not you should add it. The intermediate(s?) may or may not already be in your store and if not you should add it(them).
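If you do need to add the chain certs on Windows, one way is certutil from an elevated command prompt (a sketch; the file names are placeholders):

certutil -addstore Root root-ca.crt            # put the root in the Trusted Root store
certutil -addstore CA intermediate-ca.crt      # put the intermediate in the Intermediate CA store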
I was also under the impression that a private key is not needed for installation of the certificate as it is just imported into the certificate store?
It is needed, but you don't provide it, because it's already there.
It seems that the certificate can only be used on the server where the CSR was created. However, my application will run on Azure, so how can I get a certificate that can be used in the cloud?
Initially, it is usable only on the system where the CSR and privatekey were generated. But using certmgr you can export the combination of the certificate and privatekey, and optionally the cert chain (which export wizard calls 'path'), to a PKCS12/PFX file. That file can be copied to and imported on other Windows systems and/or used by or imported to other types of software like Java (e.g. Tomcat and Jboss/Wildfly), Apache, Nginx, etc.
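If the target software wants separate PEM files rather than a PFX (Apache, nginx, etc.), you can split the exported file with openssl; a sketch, where site.pfx is a placeholder name:

openssl pkcs12 -in site.pfx -clcerts -nokeys -out server.crt   # the end-entity certificate
openssl pkcs12 -in site.pfx -cacerts -nokeys -out chain.crt    # the intermediate/root certs
openssl pkcs12 -in site.pfx -nocerts -nodes  -out server.key   # the private key, unencrypted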
Note however that the domain name or names, or possibly a range of names matching a (single-level) wildcard, that you can use the cert for is determined when the cert is issued and can't be subsequently changed (except by getting a new cert).
The certificate provider supplies three certificates, one root, one intermediate and one "actual" certificate. What is the purpose of these different certificates?
Certificate Authorities are arranged in a hierarchy. Running -- particularly securing -- a root CA is difficult and expensive. As a result certs for end-entities (like you) are not issued directly by the root, but by a subordinate or intermediate CA. Sometimes there is more than one level of subordinate or intermediate. Thus when your server uses this certificate to prove its identity, in order for the browser or other client to validate (and thus accept) your cert you need to provide a 'chain' of certificates, each one signed by the next, which links your cert to the trusted root. As I said, one intermediate is common; this means your server needs to send its own cert, which is signed by the key in the intermediate, plus the intermediate cert, which is signed by the key in the root. The root needn't actually be sent, because the client already has it in their truststore, but it may be, and it is also desirable to validate the chain yourself before using it and for that you need to have the root even if you don't send it.
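If you want to validate the chain yourself before deploying it, openssl can do that; a sketch, assuming the three files your provider sent are named root.pem, intermediate.pem and server.pem:

openssl verify -CAfile root.pem -untrusted intermediate.pem server.pem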
First let me state that I am a Linux noob; I am learning as I go here. Here is my situation: I have an Ubuntu 16 LTS server with Apache. The software we just installed comes with "samples". These samples are stored in the same directory structure as the program. The instructions have you add an alias and a directory to the apache2 config file, like so:
Alias /pccis_sample /usr/share/prizm/Samples/php
This actually worked :)
However, now we want to make sure this site uses SSL. I did manage to use openssl to import the certificates we wanted to use into Ubuntu. (I am open to using self-signed, though; at this point it's non-prod so I don't care.)
In trying to find out the right way to tell Apache I want to use SSL for this directory, and which cert I want to use, things went wonky on me. I did manage to get it to use SSL, but with a browser warning, as one would expect with a self-signed cert. I had thought that I could just install the cert on our devs' machines and that warning would go away, but no dice. Now, in trying to fix all that, I just done broke it. So what I am looking for is not necessarily a spoon-fed answer, but rather any good tools, scripts, articles, tips, tricks, and gotchas that I can use to get this sucker done.
Thanks
You need to import your certificate(s) into the browser's trusted store. For each browser on each machine you test with. "What a pain!" you probably think. You are right.
Make it less painful - go through it once. Create your own Certificate Authority, and add that to your browsers trusted certificates/issuers listing. This way, you modify each one once, but then any certificate created by your CA certificate's key will be considered valid by those clients.
https://deliciousbrains.com/ssl-certificate-authority-for-local-https-development/
Note that when configuring Apache or other services, they will still need an issued/signed certificate that corresponds correctly to the hostname that is being used to address them.
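A minimal openssl sketch of that workflow (the file names and the dev.example.local hostname are assumptions for illustration):

# 1. Create the CA key and a self-signed CA certificate; this is what the browsers will trust.
#    You will be prompted for a passphrase - keep it safe, per the warning below.
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 \
    -keyout dev-ca.key -out dev-ca.crt -subj "/CN=My Dev CA"

# 2. Create the server key and a CSR for the hostname the clients will actually use.
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=dev.example.local"

# 3. Sign the CSR with the CA, putting the hostname in a SAN (modern browsers require it).
printf "subjectAltName=DNS:dev.example.local" > san.ext
openssl x509 -req -in server.csr -CA dev-ca.crt -CAkey dev-ca.key \
    -CAcreateserial -days 825 -sha256 -extfile san.ext -out server.crt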
Words of warning - consider these to be big, red, bold, and blinking.
DO NOT take the lazy way and do a wildcard, etc. DO keep your key and passphrase under strict control. Remember - your clients will implicitly trust any certificate signed by this key, so it is possible for someone to use the key and create certificates for other domains and effectively MITM the clients.
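On the Apache side, once you have a cert signed for the right hostname, it is mostly a matter of pointing a :443 virtual host at the files; a sketch, with assumed paths and the alias from the question (mod_ssl needs to be enabled first, e.g. with a2enmod ssl on Ubuntu):

<VirtualHost *:443>
    ServerName dev.example.local
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key

    Alias /pccis_sample /usr/share/prizm/Samples/php
</VirtualHost>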
My Apache HTTPS server returns the following header in response to a request to https://lab20.example.com:
Public-Key-Pins:pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY="; pin-sha256="633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q="; max-age=2592000; includeSubDomains
Those pins are invalid on purpose, but Google Chrome 52 still allows connections to my labs. It looks like HPKP is not working. I have also tested:
chrome://net-internals/#hsts - after querying lab20.example.com I do indeed see HSTS (confirmed working fine), but not HPKP; I do not see any dynamic_spki_hashes. Why?
Do I need to activate something in Chrome in order for it to understand and process HPKP headers?
Thanks,
The header will ONLY be accepted if it's valid, and only then will it be used for future visits (within the max-age time).
This is specified in the spec:
The UA MUST note the Pins for a Host if and only if
...snip...
o The TLS connection was authenticated with a certificate chain
containing at least one of the SPKI structures indicated by at
least one of the given SPKI Fingerprints (see Section 2.6).
This is to stop you accidentally bricking your site and is a GREAT feature that reduces the danger of an accidentally botched HPKP deployment.
It does, however, make testing bad HPKP quite difficult. Either add the pins manually using that internals page you cited, get two different certs for your page, or set a valid header at the top level (with includeSubDomains) and use a different cert for a subdomain to test.
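For reference, a matching pin value can be computed from the certificate's public key like this (a sketch; server.crt is a placeholder file name):

openssl x509 -in server.crt -pubkey -noout \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256 -binary \
    | openssl enc -base64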
OK, I found the reason: I had used my enterprise CA, but:
Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.
I would like to have an SSL cert for *. That is, for any domain whatsoever. The idea is to sign it with a custom CA cert, install that CA cert on a target device, and use that device to connect to ip-based addresses (e.g. https://192.168.1.123). There's no DNS available, and the address may be different each time. The browser on the target device should work without warnings, but only as long as the wildcard cert presented is signed by the known CA (which is our custom CA the cert of which was installed on the device), to prevent any possible MITM attacks.
Would browsers understand such a wildcard cert? Is there any other workaround possible to allow using browsers to connect to arbitrary IP-based SSL servers without warning and with MITM protection at the same time (assuming that it's possible to customize the client list of CAs)?
There are two specifications about certificate identity validations for HTTPS: RFC 2818 (the HTTPS specification itself) Section 3.1 and RFC 6125 (a more recent RFC trying to harmonise how this is done for any protocol).
As far as I can interpret RFC 2818, it doesn't forbid it (although I guess it doesn't consider the use case at all):
Names may contain the wildcard
character * which is considered to match any single domain name
component or component fragment. E.g., *.a.com matches foo.a.com but
not bar.foo.a.com. f*.com matches foo.com but not bar.com.
RFC 6125 generally discourages the use of wildcard certificates altogether (section 7.2) and only allows them, in principle, in the left-most label (section 6.4.3). This more or less implies that there should be more than one label.
There is an additional errata for RFC 6125 on the subject: "RFC6125 bug: Checking of Wildcard Certs lacks spec of how many labels in presented identifier":
Note also that this issue begs the question of being able to determine what constitutes a so-called domain name "public suffix" (e.g. ".com", ".co.uk") -- we can't simply write into the spec "the wildcard must be in the left-most label position and there must be at least one? two? three? labels to the right of the wildcard's position".
Specifications aside, this is unlikely to work in practice. No commercial CA should ever issue one of those. They make mistakes once in a while perhaps, but hopefully nothing so foolish. You might be able to achieve this by deploying your own CA (or perhaps with a local automated CA within a MITM proxy you'd approve). Even if this was the case, some clients will simply refuse to validate such certificates. For example, Chromium even forbids *.com or *.co.uk.
Note that in both cases, wildcards are for DNS names, not IP addresses. Having a certificate for * as you would like wouldn't work for your use-case anyway. Perhaps looking into alternative DNS solutions (e.g. mDNS) to be able to use a name might help.
Yes, there are wildcard certs. You may want to check with an SSL cert provider to get more details. I doubt this can be based on an IP address, though.
While the public key infrastructure is broken, it is not so broken that you will get a certificate for * and the browser will accept it. The relevant RFC is RFC 2818, and if you read it you'll see that browsers accept *.example.com and www*.example.com, but not www.*.example.com, www.*.com or even *.
I need to see how a web application will work with HTTPS, but I can't really find much information about it. I tried to set up my local Apache, but I can't find a CA authority to sign my certificate... Hints? Suggestions?
The possibilities to consider are:
Generate your own certificate (self-signed certificate)
Get a certificate issued by a known issuer
Get a certificate issued by an issuer not recognised by the browser
Nr. 1 is probably the most widely used solution (see the sketch below, after the three options). You can find instructions here. The only disadvantage is that browsers will complain about the unknown CA. In Firefox, you can just add a permanent exception and get rid of the warning. (Neither Chrome nor Internet Explorer seems to provide such an option.)
Nr. 2 normally costs money so it isn't a popular choice for dev environments.
Nr. 3 can be obtained for free (see https://www.cacert.org/) but they also trigger a browser warning. A difference with nr. 1 is that you have the possibility of adding the CA to your browser's trusted authorities; however, that's a serious decision that requires serious consideration because of its security implications. In general, I would not recommend it for mere testing.
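For nr. 1, a one-line sketch (localhost is an assumption; adjust the CN to the hostname you actually use):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout localhost.key -out localhost.crt -subj "/CN=localhost"
# with OpenSSL 1.1.1+ you can also add: -addext "subjectAltName=DNS:localhost", which modern browsers expect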
Self-signed certificates (as already mentioned) are probably the easiest option for a single host.
If there are a few hosts, you could create a mini CA of your own. There are tools for this, for example:
CA.pl: a script provided with OpenSSL.
TinyCA: a tool with a GUI.