My Apache HTTPS server returns the following header in response to a request to https://lab20.example.com:
Public-Key-Pins:pin-sha256="klO23nT2ehFDXCfx3eHTDRESMz3asj1muO+4aIdjiuY="; pin-sha256="633lt352PKRXbOwf4xSEa1M517scpD3l5f79xMD9r9Q="; max-age=2592000; includeSubDomains
Those pins are deliberately invalid, yet Google Chrome 52 still allows connections to my lab. It looks like HPKP is not working. I have also tested:
chrome://net-internals/#hsts - after querying lab20.example.com I do see HSTS (confirmed working fine) but no HPKP - there are no dynamic_spki_hashes. Why?
Do I need to activate something in Chrome for it to understand and process HPKP headers?
Thanks,
The header will ONLY be accepted if it's valid and then used for future visits (within the max-age time).
This is specified in the spec:
The UA MUST note the Pins for a Host if and only if
...snip...
o The TLS connection was authenticated with a certificate chain
containing at least one of the SPKI structures indicated by at
least one of the given SPKI Fingerprints (see Section 2.6).
This is to stop you accidentally bricking your site, and it is a GREAT feature for reducing the dangers of accidentally misconfigured HPKP.
It does, however, make testing bad HPKP quite difficult. Either manually add the pins using the internals page you cited, get two different certs for your site, or set a valid header at the top level (with includeSubDomains) and use a different cert on a subdomain to test.
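A quick sanity check is to compute the SPKI pin of the certificate the server actually serves and compare it with the header. A minimal sketch, assuming openssl is available and using the hostname from the question:

# Extract the served certificate's public key and hash it the way HPKP expects
openssl s_client -connect lab20.example.com:443 -servername lab20.example.com </dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | base64

The header is only noted if the base64 value printed here matches one of the pin-sha256 values it contains.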
OK, I found the reason. I had used my enterprise CA, but:
Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.
Related
dig, wget, nslookup and curl all work perfectly for a specific URL that I pointed to another server less than 24 hours ago.
The problem is that it simply refuses to load in a browser (Chrome, Safari and Firefox). The strangest part is that it resolves successfully in Postman (testing the OPTIONS and GET methods separately), but still doesn't return a proper response in the browser.
DNS checks come back positive, so I started to suspect the problem is actually in the HTTP request headers, especially since requests without the default browser headers (issued through the command-line tools and Postman) get different responses than requests that include them (issued by the browsers automatically, or manually via the dev tools).
After fully flushing the local system's DNS cache, including the browsers', and even trying another device on another network, I still get no response in the browser.
I kept going and tried to verify this through a local VPN (which didn't work) and an online web proxy tool (which did work).
Finally, I extracted the router's default DNS server address and used nslookup to look up the URL again, this time explicitly specifying that DNS server. After getting a successful response with the correct values, I am now fairly sure the HTTP request itself is causing the problem.
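For reference, this is roughly the kind of check I ran (the domain and resolver address here are placeholders, not the real ones):

# Query a specific resolver directly to rule out the local DNS cache
nslookup example.dev 192.168.1.1
# or the dig equivalent
dig @192.168.1.1 example.dev +short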
The URL is hosted using the Amazon S3 static website hosting option, which I have used many times before, with this exact configuration, without any problems. Reading up on recently added changes/features suggested that I may need to explicitly set a CORS policy for the newly created bucket, on top of the usual public access policy that is needed.
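The CORS policy I mean is something along these lines, applied with the AWS CLI (a sketch only; the bucket name is a placeholder):

# Write a permissive CORS configuration and attach it to the bucket
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket my-static-site-bucket --cors-configuration file://cors.json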
After applying that as well, it still doesn't seem to work.
As a quick change of direction that may clarify what's going on: I started to think the browser might not be getting the correct Content-Type header in the response (which should be text/html) and therefore might not handle the URL as expected. So I replaced the static file hosting on the S3 bucket with a 301 redirect, and again everything works perfectly through the command-line tools, but not through the browsers.
Anyway, the browser just doesn't seem to complete any of the requests being sent to the URL.
It might be that the OPTIONS pre-flight request fails, so the browser never goes on to issue the GET request, or that the URL is simply not found via whatever DNS route the browser is taking; it is currently unclear to me which of these it is.
Any ideas? (Besides the fact that it sometimes just takes longer for some DNS servers along the chosen route to update/refresh their cache, which does not appear to affect my local machine's DNS route in this case. That, said with caution, was verified by checking the different parts of the DNS configuration and prioritization across my system (Mac OS X), and the response does come back with the correct address.)
Found my answer here:
https://serverfault.com/questions/942030/aws-s3-static-hosting-how-to-debug-connection-timeout
As linked there, more details can be found here:
Non-Authoritative-Reason header field [HTTP]
Solution & explanation: because of the domain extension I purchased (.dev), Chrome was silently using HTTPS, since the entire .dev extension is part of Chrome's preloaded HTTP Strict Transport Security (HSTS) list and all .dev domains must be served over HTTPS only. That is why the issue kept showing up even when explicitly typing http:// into the address bar.
This can be resolved by putting a CloudFront distribution with HTTPS support in front of the S3 static hosting, as usual (but note that HSTS listings can cause this in other cases too; the .dev domain extension is just one of them).
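This also explains the symptom above: command-line tools do not consult the browsers' HSTS preload list, so plain HTTP still reaches the bucket endpoint even while the browser silently rewrites the URL to https:// and then fails. A sketch, with the .dev hostname as a placeholder:

# curl ignores the HSTS preload list, so this works even though browsers refuse to use plain HTTP
curl -sI http://example.dev/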
Useful Resources (for debugging purposes)
In addition to what is stated here:
https://gist.github.com/stollcri/7c09bafc97223481920e
You can issue a lookup query (and also add or delete entries from your local set of HSTS listings) through Chrome's internal settings page, chrome://net-internals/#hsts.
You can also check the current listings here: https://hstspreload.org/
I am using Firefox 59.0.1 on Ubuntu, and I am seeing the following error when accessing my development environment, which uses a self-signed SSL cert.
Your connection is not secure
The owner of crmpicco.dev has configured their website improperly. To
protect your information from being stolen, Firefox has not connected
to this website.
This site uses HTTP Strict Transport Security (HSTS) to specify that
Firefox may only connect to it securely. As a result, it is not
possible to add an exception for this certificate.
Learn more…
Report errors like this to help Mozilla identify and block malicious
sites
crmpicco.dev uses an invalid security certificate.
The certificate is not trusted because it is self-signed.
Error code: SEC_ERROR_UNKNOWN_ISSUER
I have added "crmpicco.dev" to security.tls.insecure_fallback_hosts and set security.enterprise_roots.enabled to true, restarted Firefox but this has had no effect.
I know Chrome has its "badidea"/"thisisnotsafe" workaround, which isn't ideal but at least works, whereas I have yet to find a Firefox equivalent.
What is the solution to this? Do I need to generate new self-signed certs, even though the cert I have only dates from Feb 2018?
I have tried the suggestions from numerous questions on here and from Mozilla support, to no effect.
The top-level domain *.dev is owned by Google. For some time already, Chrome has shipped a preloaded HSTS policy which makes it impossible to use self-signed certificates for this domain. Firefox recently added such a policy too, so you now get the same behavior there.
There are several ways to deal with this. The best way is not to use any current or future public top-level domains for your private purposes. By using such domains you risk getting into conflict with usage policies enforced by the domain owner, like the enforced HSTS in the case of *.dev, and it might even cause security problems. Instead, use either domains you actually own or top-level domains which are reserved for internal and test use, like *.test, *.invalid or *.example.
If you really want to use *.dev internally (again, a bad idea) you can do it by following the policy of this domain: don't use a self-signed certificate, but a certificate issued by a CA trusted by your browser. This means creating your own CA, adding it as trusted to the browser, and then issuing the certificates you want from this CA. But again, using public domains you don't own (top-level or not) is a recipe for trouble.
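A minimal sketch of that own-CA approach with openssl (file names, subject values and the certificate lifetimes are placeholders for illustration):

# 1. Create a private CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=Local Dev CA" -keyout devca.key -out devca.crt
# 2. Create a key and CSR for the development host
openssl req -newkey rsa:2048 -nodes -subj "/CN=crmpicco.dev" -keyout crmpicco.key -out crmpicco.csr
# 3. Sign the CSR with the CA, adding the subjectAltName browsers require
printf 'subjectAltName=DNS:crmpicco.dev\n' > san.ext
openssl x509 -req -in crmpicco.csr -CA devca.crt -CAkey devca.key -CAcreateserial -days 825 -extfile san.ext -out crmpicco.crt
# 4. Import devca.crt as a trusted authority in Firefox (Preferences > Privacy & Security > View Certificates > Authorities)
#    and configure the web server with crmpicco.key and crmpicco.crt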
I am trying to set up NGINX to perform client authentication for multiple clients. The problem I have is that those clients will have different certificates, i.e. different Root CAs:
[clientA.crt] ClientA > IntermediateA > RootA
[clientB.crt] ClientB > IntermediateB1 > IntermediateB2 > RootB
I looked at the NGINX documentation and noticed the ssl_client_certificate directive. However, that directive does not seem to work by itself; for example, if I configure it to accept only clientA for now:
ssl_client_certificate /etc/nginx/ssl/clientA.crt;
ssl_verify_client on;
Then I receive a 400 error code back. By looking at other questions, I figured out that I also have to set ssl_verify_depth 3. So if I want to concatenate both the clientA and clientB chains into a bundled PEM to allow both clients, will I need to use a higher value? What is the purpose of this directive, and what are the implications of setting it to a high number with a bundled PEM?
The http://nginx.org/r/ssl_client_certificate directive is used to specify which certificates you trust for client-based authentication. Note that the list of these CAs is basically sent to the client every time a connection is attempted (use ssl_trusted_certificate, as per the docs, if that is not desired).
As per the above, note that ssl_verify_depth basically controls how easy it would be to get into your system. If you set it to a high enough value, and someone is able to obtain a certificate from one of the CAs you trust, or from one of the intermediaries those CAs trust to issue their own certificates, then they would be able to authenticate with your nginx, whether or not that is what you want.
As such, it is normal practice for all certificates used for client-based authentication to be issued by a privately sanctioned CA, so normally the chain should not be very long. If you want to equalise the depth between the two CAs, to get the best protection from ssl_verify_depth, it is conceivable to create an extra CA to add depth, and then add that CA to the trusted list instead of what is currently an actual intermediary. (Note that this gets complicated once a few intermediaries are involved: the client needs to know of their existence, which is usually cached, and that can result in a number of ghost issues when not cached.)
Also, note that the specified file does not have to contain a single CA: it can include multiple unrelated "root" CAs. So, if you want to accept multiple independent CAs, you do not have to bother creating another CA to certify them; you can just include the independent CAs as-is.
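For example, a bundle for the two chains in the question could be assembled like this (paths are placeholders; include the intermediates if the clients present only their leaf certificates):

# Concatenate every CA that may appear in a client's chain into one PEM bundle
cat /etc/nginx/ssl/rootA.crt /etc/nginx/ssl/intermediateA.crt \
    /etc/nginx/ssl/rootB.crt /etc/nginx/ssl/intermediateB1.crt /etc/nginx/ssl/intermediateB2.crt \
    > /etc/nginx/ssl/client-ca-bundle.pem
# Point ssl_client_certificate at this bundle and set ssl_verify_depth to cover the longest
# chain (clientB: leaf > IntermediateB1 > IntermediateB2 > RootB), e.g. ssl_verify_depth 3;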
Be sure to use two different root certificates with actually different DNs. If you are trying this locally with certificates that have the same DN, it will not work, as stated by nginx here:
https://trac.nginx.org/nginx/ticket/1871#ticket
We were stuck on this for a long time, until I found this old issue in Kubernetes as well:
https://github.com/kubernetes/ingress-nginx/issues/4234
We are having problems with some browsers attempting to get Bootstrap 3 (JS and CSS) from the documented CDN (https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/js/bootstrap.min.js). The main two browsers are IE8 and IE9, and we don't have the option of telling the client to upgrade. More recent browsers (even IE11) seem to work properly.
We've had to resort to hosting files ourselves, but obviously I would much rather reference the CDN.
For a simple example, in IE11, if I do the following:
1) Type the following in the URL...
2) Hit enter...
3) Click Run...
Now, IE11 will actually continue and run (obviously the script will error out), but all these warnings are my best guess as to what might be tripping up IE8/9 (and maybe other older browsers). As I said, I've temporarily hosted the files on our own secure.benefittech.com domain, and no warnings occur when I perform the same steps.
Here are some screenshots from the client's browser (IE8) when attempting to run the real site referencing the CDN URLs.
The first one shows the debugger not recognising the .tooltip() method (from bootstrap.min.js).
Finally, this is the IE security bar warning they get when hitting the site.
Any ideas on how this might be resolved, or what info I could supply to MaxCDN to help resolve it, would be greatly appreciated. Or do we have to continue hosting the files ourselves?
I realize IE8/9 are old browsers (neither of which I'd be running at this point), but as mentioned earlier, I don't have the option to force the client to upgrade, and I'm surprised no one else has raised this issue. (When I contacted MaxCDN they were surprised by the issue too, but not being experienced in certificate technology/terminology, I didn't really know what to provide them with.)
Do you have a test environment with IE8/9 where you could do some tests? It could be a problem with certificate chain building. Maybe some certs in the chain are not trusted.
Could you import the SubCA certificate from http://secure.globalsign.com/cacert/gsdomainvalsha2g2r1.crt into the intermediate CA store, and the Root CA from http://secure.globalsign.net/cacert/Root-R1.crt?
The SubCA certificate (GlobalSign Domain Validation CA - SHA256 - G2) is fairly new (issued 20.02.2014). So if IE8/9 does not follow the Authority Information Access extension from the end-entity certificate (to build the certificate chain), or does not handle the fact that the SubCA certificate at http://secure.globalsign.com/cacert/gsdomainvalsha2g2r1.crt is served in PEM format (it should be DER, IMO), or if by any chance the GlobalSign Root CA is not trusted by IE8/9, then I believe this could be the reason for the IE warnings.
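To see which chain the CDN actually serves, something like this can help (just a sketch; inspect the "Certificate chain" section of the output):

# Show the certificate chain presented by the CDN host from the question
openssl s_client -connect maxcdn.bootstrapcdn.com:443 -servername maxcdn.bootstrapcdn.com -showcerts </dev/null
# If the SubCA file linked above really is PEM, it can be converted to DER for a manual import:
openssl x509 -in gsdomainvalsha2g2r1.crt -outform der -out gsdomainvalsha2g2r1.der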
I would like to have an SSL cert for *. That is, for any domain whatsoever. The idea is to sign it with a custom CA cert, install that CA cert on a target device, and use that device to connect to IP-based addresses (e.g. https://192.168.1.123). There is no DNS available, and the address may be different each time. The browser on the target device should work without warnings, but only as long as the wildcard cert presented is signed by the known CA (i.e. our custom CA whose cert was installed on the device), to prevent any possible MITM attacks.
Would browsers understand such a wildcard cert? Is there any other workaround that would allow browsers to connect to arbitrary IP-based SSL servers without warnings and with MITM protection at the same time (assuming the client's list of CAs can be customized)?
There are two specifications about certificate identity validations for HTTPS: RFC 2818 (the HTTPS specification itself) Section 3.1 and RFC 6125 (a more recent RFC trying to harmonise how this is done for any protocol).
As far as I can interpret RFC 2818, it doesn't forbid it (although I guess it doesn't consider the use case at all):
Names may contain the wildcard character * which is considered to match any single domain name component or component fragment. E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
RFC 6125 generally discourages the use of wildcard certificates altogether (section 7.2) and only allows them in principle in the left-most label (section 6.4.3). This more or less implies that there should be more than one label.
There is an additional errata for RFC 6125 on the subject: "RFC6125 bug: Checking of Wildcard Certs lacks spec of how many labels in presented identifier":
Note also that this issue begs the question of being able to determine what constitutes a so-called domain name "public suffix" (e.g. ".com", ".co.uk") -- we can't simply write into the spec "the wildcard must be in the left-most label position and there must be at least one? two? three? labels to the right of the wildcard's position".
Specifications aside, this is unlikely to work in practice. No commercial CA should ever issue one of those. They make mistakes once in a while perhaps, but hopefully nothing so foolish. You might be able to achieve this by deploying your own CA (or perhaps with a local automated CA within a MITM proxy you'd approve). Even if this was the case, some clients will simply refuse to validate such certificates. For example, Chromium even forbids *.com or *.co.uk.
Note that in both cases, wildcards are for DNS names, not IP addresses. Having a certificate for * as you would like wouldn't work for your use-case anyway. Perhaps looking into alternative DNS solutions (e.g. mDNS) to be able to use a name might help.
Yes, there are wildcard certs. Please check with an SSL cert provider for more details. I doubt this can be based on an IP.
While the public key infrastructure is broken, it is not so broken that you will get a certificate for * and the browser will accept it. The relevant RFC is RFC 2818, and if you read it you'll see that browsers accept *.example.com and www*.example.com, but not www.*.example.com, www.*.com or even *.