Puppet after 5 years, when certificates expire?

I'm reading that the default expiry for Puppet certificates is 5 years, and that it can be set with the ca_ttl attribute in puppet.conf.
I have 2 questions, given a setup of many agents connecting to a puppet master.
What happens when an agent's certificate expires? Does it automatically create a new one on check-in to the master, or does this need to be done manually?
What happens when the CA certificate expires? Does the setup become completely disconnected, requiring you to SSH into each agent to remove expired certificates?

Agent Certificate Expiry
When an agent's certificate expires, future agent check-ins will fail very early on. I can't remember the exact error, but it'll be something like:
err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed.
When that happens, you have to delete the cert from the master, regenerate the certificate on the agent, and then re-sign it on the master. This will only affect the one agent.
The full process is documented here: https://docs.puppet.com/pe/latest/agent_cert_regen.html
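For a pre-Puppet-6 master, the flow is roughly as follows (a sketch only: agent01.example.com is a placeholder, the SSL directory path varies by version, and the puppet cert subcommands were replaced by puppetserver ca in Puppet 6, so check the linked docs for your version):

# On the master: revoke and remove the agent's old certificate
puppet cert clean agent01.example.com

# On the agent: remove the local SSL state (path is the Puppet 4+ default)
rm -rf /etc/puppetlabs/puppet/ssl

# On the agent: check in, which generates a new key and CSR
puppet agent -t

# On the master: sign the new request (not needed if autosign is enabled)
puppet cert sign agent01.example.com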
NB: This is actually fairly rare, as most people aim for a "cattle, not pets" estate, where machines are spun up and torn down frequently enough that agent machines don't exist for over 5 years.
PuppetServer/master Certificate Expiry
When the CA certificate itself expires, everything stops: no communication is possible, because the authority itself has expired. This is the more common case, as a Puppet master is more likely to exist for over 5 years.
But yes: if the CA certificate has already expired, you'd need another way to configure things, such as SSH, console access or WinRM.
Puppet actually created a helper module for this process, as the OpenSSL steps are a little fiddly to do manually:
https://github.com/puppetlabs/puppetlabs-certregen
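Installing it is the standard module workflow; the subcommands it adds are documented in its README (the healthcheck command below is from that README as I recall it, so verify against the repo):

# Install the helper module on the CA/master
puppet module install puppetlabs-certregen

# Report on certificates that are expired or close to expiry
puppet certregen healthcheck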
The manual process is also documented here:
https://docs.puppet.com/puppet/latest/ssl_regenerate_certificates.html
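If you just want to know how long you have left, you can ask openssl for the CA certificate's expiry directly (the path below is the open source Puppet 4+ default and is an assumption; newer puppetserver installs keep the CA under /etc/puppetlabs/puppetserver/ca instead):

# Print the CA certificate's notAfter date
openssl x509 -enddate -noout -in /etc/puppetlabs/puppet/ssl/ca/ca_crt.pem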

Related

(60) SSL peer certificate or SSH remote key was not OK

Having problems installing PayPalCommerce in OpenCart.
After installing and trying to connect to PayPal I get this error:
"(60) SSL peer certificate or SSH remote key was not OK"
Has anybody else come across this problem? The server certs are just fine.
As stated, the server certificates are fine. I was thinking of changing the cURL SSL verify option to false, but that would defeat the whole purpose. And I'm on the latest TLS 1.2 (whatever the abbreviation is).
Update your certificate authority bundle so that your HTTPS connection to the PayPal API endpoint can verify the connection is trusted.
One can be downloaded here, among other places.
If your attempted connection uses a specific certificate file rather than a CA bundle, delete the old certificate and either obtain the endpoint's current certificate to use instead, or switch to CA verification.
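As a quick sanity check from the command line, you can point curl at an explicit CA bundle and see whether verification succeeds; error 60 disappearing confirms the bundle was the problem (the bundle path is a placeholder, and api-m.paypal.com is just an example endpoint):

# Verify TLS against a specific CA bundle; prints the HTTP status on success
curl --cacert /path/to/cacert.pem -sS -o /dev/null -w "%{http_code}\n" https://api-m.paypal.com/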

SSL certificate is valid but browsers say invalid

I have been looking for a solution for hours but can't find one. I am using a Let's Encrypt SSL certificate via certbot.
My domain is ektaz.com. When I check the certificate in a browser it says:
Expires: 8 November 2021 Monday 16:24:33 GMT+03:00
When I check it from the server side with certbot certificates, I get this result:
Expiry Date: 2021-11-08 13:24:33+00:00 (VALID: 39 days)
But all browsers say the certificate is invalid, and I don't understand why.
I have renewed this certificate many times using certbot renew and have had no issues so far. I have cleared all caches and tried again; the result is the same. I restarted Apache many times, and even restarted the server, but nothing changed.
Server OS: Ubuntu 20.04 LTS
Your certificate is likely not invalid at all.
There is a simple fix. I'm using nginx configuration style for this example:
ssl_certificate /usr/local/etc/letsencrypt/live/domain.com/cert.pem;
Lines like that need to be replaced by lines like this:
ssl_certificate /usr/local/etc/letsencrypt/live/domain.com/fullchain.pem;
Then refresh your server's configuration.
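With nginx on a typical systemd-based install, that refresh would look something like this (an assumption about your setup; adjust to taste):

# Validate the new configuration, then reload without dropping connections
sudo nginx -t && sudo systemctl reload nginx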
This problem is popping up all over the place, including with both small and large websites.
The root cause is older tutorials for configuration of webservers that served the cert.pem file (because it worked) rather than the fullchain.pem file which makes sure a browser gets the full chain needed to validate the certificate.
Unfortunately, Apple, Mozilla, and some others have dropped the ball and are still using the same expired certificate (IdenTrust's DST Root CA X3, which expired yesterday afternoon at 2:21:40 pm CST) to check certificates that chained to it before. iOS 15.0 (19A346) is the only released Apple software version that automatically uses the new intermediate certificate even when the server doesn't send the full chain.
The actual intermediate certificate used by the server is the one issued to R3 by ISRG Root X1, but unless you configure your server to explicitly send it to browsers by using fullchain.pem in the server configuration, many clients sadly won't work out the correct chain on their own.
But once again, this is an easy fix: just make that slight change in your server's configuration file ("cert.pem" -> "fullchain.pem") and you should be fine.
And there's no reason not to keep on using the fullchain.pem file permanently. In fact, even prior to this situation, various networks (college campus WiFi networks are notorious for this) will screw up your certificate's chain of authority unless you use the fullchain.pem file anyway. Let's Encrypt even recommends this now as the only proper way to configure your web server to use certificates.
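To confirm what chain your server actually sends after the change, you can inspect the handshake with openssl (using ektaz.com from the question):

# List the subject (s:) and issuer (i:) of every certificate the server presents
echo | openssl s_client -showcerts -connect ektaz.com:443 -servername ektaz.com 2>/dev/null | grep -E " s:| i:"

With cert.pem you will see only the leaf certificate; with fullchain.pem the R3 intermediate should appear as well.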

LDAPS Microsoft Active Directory Multiple Certificates RFC6125

We have a Microsoft Active Directory domain with a large pool of domain controllers (DCs) that are set up for LDAP. These are all set up with LDAPS and use Certificate Services via a template to issue a certificate with the domain name (i.e. test.corp) in the Subject Alternative Name (SAN) for the LDAPS server to serve.
Since these are DCs, DNS is set up in a pool so that each of these systems responds to requests for test.corp in a round-robin fashion.
Each of these DCs has multiple templates and multiple certificates in the Local Computer\Personal certificate store.
In testing with a Node.js module, ldapjs, we noticed that when making an LDAPS request using the domain name test.corp, a handful of servers fail with the following message:
Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match
certificate's altnames: Host: test.corp. is not in the cert's
altnames: othername:, DNS:.test.corp
As we investigated, we found that this handful of LDAPS servers is serving the incorrect certificate. We determined this by using the following command:
openssl s_client -connect .test.corp:636
If you take the certificate section of the output, put it in a file, and read the file with a tool such as Certificate Manager or certutil, you can see the certificate is not the correct one (it does not have the "test.corp" SAN). We also verified this by comparing the serial numbers.
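A quicker way to grab the serial of whatever certificate a DC is presenting, without saving anything to a file (dc1.test.corp is a placeholder for one of the affected DCs):

# Print the serial number of the certificate served on port 636
echo | openssl s_client -connect dc1.test.corp:636 2>/dev/null | openssl x509 -noout -serial

You can then compare that against certutil -store MY on the DC, which lists the serial of each certificate in the Local Computer\Personal store.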
As we investigated further, given that our DCs have multiple certificates in the Local Computer\Personal certificate store, we came across the following article:
https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx
It suggests copying the certificate from the Local Computer\Personal certificate store into the Active Directory Domain Services\Personal store. We followed the steps outlined, but found the same results.
Upon further investigation, it was suggested that we use a tool called ldp or adsiedit. We spoofed the hosts file of the local machine we were testing from, pointing the domain (test.corp) at the IP of one of the DCs giving us trouble. After a restart to clear any cache, we used the "ldp" and "adsiedit" tools to connect to test.corp. Neither tool reported any errors.
We found this odd, so we ran the openssl command from that same system to see what certificate it was being served, and found it was still the incorrect certificate.
Upon further research, it appears that "ldp" (with the SSL checkbox selected) and "adsiedit" are not compliant with RFC6125, specifically appendix B.3 (https://www.rfc-editor.org/rfc/rfc6125#appendix-B.3), which basically states that the identity of the certificate must match the identity of the request, otherwise the handshake fails. This identity verification is done using the certificate's common name (CN) or the SAN.
Based on this, it appears the tools "ldp" and "adsiedit" do not conform to the RFC6125 standard.
All this to say: first, we need to fix the handful of domain controllers that are serving the incorrect certificate, and we are open to suggestions, since we have been working on this problem for the past few months. Second, is there a way to get the MS tools in question to work to the RFC6125 standard?
This has been moved to:
https://serverfault.com/questions/939515/ldaps-microsoft-active-directory-multiple-certificates-rfc6125
RFC6125 specifically states that it does not supersede existing RFCs. LDAP cert handling is defined in RFC4513. Outside of that, RFC6125 has significant flaws. See also https://bugzilla.redhat.com/show_bug.cgi?id=1740070#c26
LDP will supposedly validate the SSL certificate against the client store if you toggle the SSL checkbox on the connection screen.
That said, I'm not surprised that neither it nor ADSI Edit enforces that part of the standard, given they are often used to configure or repair broken configurations. Out of the box, and without Certificate Services, DCs use self-signed certs for LDAPS. I would wager 80% of DCs never get a proper certificate for LDAP; if these tools enforced validation, most admins wouldn't be able to connect. A better design decision would have been a toggle to turn off the validation.
I use a similar openssl command to verify my own systems, and I think it's superior to LDP even if LDP were to validate the certificate. To save you some effort, I suggest this variant of the openssl command:
echo | openssl s_client -connect .test.corp:636 2>/dev/null | openssl x509 -noout -dates -issuer -subject -text
That should save you from having to write the output to a file and read it with other tools.
I've found LDAPS on AD to be a huge pain for exactly the reasons you describe. It just seems to pick up the first valid cert it can find. If you've already added the right cert to the AD DS personal store, I'm not sure what else to suggest other than removing some of the other certs from the DC's computer store.

Error: Could not run: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished A

I am trying to copy a current Puppet master server on one domain and move it to another. I'm finding it very hard to change all the configuration remnants. Is there an easy way to do this, or a step-by-step best practice? I have grepped for the old FQDN and changed most occurrences to the new one, yet when I delete all certs and re-issue new ones on the master, it keeps pulling a cert for the old FQDN.
Edit 1: I have resolved many of the issues I was previously getting. However, I cannot get past this SSL issue for the life of me.
[root@puppet lib]# puppet resource service apache2 ensure=running
Error: Could not run: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed: [unable to get local issuer certificate for /CN=puppet.foundry.test]
I have attempted to completely purge all certs from the master, using this link, and then regenerate them all. But I still keep getting the same errors:
Error: Could not run: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished A
Now I'm not sure whether I am having Puppet SSL issues, or SSL issues in general.
Most likely you're connecting to the wrong server (the default server hostname is puppet).
Check your agent's config; you're mostly interested in the server variable:
puppet config print --section agent | grep "server = "
It's also good to know where the puppet agent is looking for its config:
$ puppet config print --section agent | grep "^config = "
config = /etc/puppetlabs/puppet/puppet.conf
Edit your config and set the correct puppet master:
[agent]
server=puppet4.example.com
Just to be sure, you can clean your certificate (on the agent):
find /etc/puppetlabs/puppet/ssl -name $(hostname -f).pem -delete
And on the puppet server:
puppet cert clean {broken hostname}
And finally run puppet agent -t
You can use this link: http://bitcube.co.uk/content/puppet-errors-explained
Did you try to change the puppet master DNS?
Check whether the puppet master's cert matches what you're setting as server on the node.
If not, you can always use dns_alt_names = puppet_hostname.your_domain plus all the other names you want for the puppet master & CA, as sketched below.
Then restart the puppet master service, clean the slave's certname from the master, remove the whole /var/lib/puppet/ssl/ folder from the slave, and run puppet again.
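A rough sketch of that flow on a pre-6 Puppet master (hostnames are placeholders, and the subcommands differ in Puppet 6+, where puppetserver ca replaces puppet cert):

# /etc/puppetlabs/puppet/puppet.conf on the master
[main]
dns_alt_names = puppet,puppet.example.com,puppet4.example.com

# Regenerate the master's certificate with the alt names, then restart the service
puppet cert clean puppet4.example.com
puppet cert generate puppet4.example.com --dns_alt_names=puppet,puppet.example.com,puppet4.example.com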
What puppet isn't telling you is that there is a cert mismatch. The master disconnects as soon as it determines that the cert is invalid or mismatched. Because the disconnect is so sudden, puppet isn't told why it happened.
When this happens, puppet could, for example, change that error message to, "Hey! Here's a list of things you might check," and then suggest things like verifying the cert expiration date, checking for a cert mismatch, etc. However, why would anyone do that?
Here's one way you can get into this situation: Set up two puppet client machines with the same name by mistake. The second machine to use that name will work, but the first machine will no longer work.
How might someone get into that situation? Two machines can't have the same name! Of course not. But we have seen situations like this:
Machine A, B, C, D, E are all Puppet clients.
Machine C gets wiped and reloaded. The technician accidentally calls it "B". To get it working with Puppet, they run "puppet cert clean B".
The technician realizes their mistake and reconfigures machine C with the proper name, performs "puppet cert clean C", and machine C now works fine.
A week later someone notices that machine B hasn't been able to talk to the master, and it gets this error message. After hours of debugging they see that the client cert has one serial number but the master expects that client to have a very different serial number. Machine B's cert is cleaned, regenerated, etc., and everything continues.
Should Puppet Labs update the error message to hint that this may be the problem? They could, but then I wouldn't get rep points for writing this awesome answer. Besides, technicians should never make such a mistake, so why handle a case that obviously should never happen... except when it does.
Make sure that you are running puppet as root, or with sudo. I have received this exact error when I was running as my normal user and ran "puppet agent -t" without elevating my privileges.

SSL Certificate Expires

My idea of SSL communication is a bit hazy and I need some clarification.
Architecture of my application: an internal machine running the application is exposed to the internet via a BIG-IP server.
The certificate hierarchy on my website:
Root "R" (expires in 2040)
Intermediate "I" (expires in 2036)
xxxx.com "F" (expires in 2 days)
I have a new certificate created with the same root and intermediate CAs. It was created with a different key, and I have that key as well.
My questions are :
1) When I perform an HTTP POST using a standalone application from computer X (some random machine on the internet) to the exposed URL, the SSL handshake should occur in two places: a) between computer X and the BIG-IP, and b) between the BIG-IP and the internal machine running the application. The standalone application should have the public certificates of the URL, i.e. R and I, in its key store. Correct? Or should I have the xxxx.com certificate, i.e. F, as well? Who decides this?
2) This is a different scenario. I have placed the newly created certificate for xxxx.com (with the same root and intermediate certificates R and I) on the BIG-IP server. The start date of this certificate is 1st Aug 2014. My internal instance, though, still has the old certificate, which expires on 3rd Sept 2014. I am able to POST successfully even in this scenario. Why is that? Since the keys differ between the new and old certificates, shouldn't the requests fail during the SSL handshake between the BIG-IP and the internal instance?
Kindly help me understand these two scenarios. I will be grateful.
Thanks
The root CAs should be in the client machine's trusted certificate repository. The server (BIG-IP) should have the intermediate cert and the cert for the FQDN (or a SAN/wildcard cert) if you are offloading. If you are offloading SSL at the BIG-IP and not re-encrypting to the origin server, then you don't need any certificates on the origin server. If you are re-encrypting, then the origin needs the same setup as the BIG-IP: intermediate plus server cert.
In the event your intermediate and server certs are signed by a root NOT trusted by your client (internal certs or custom clients without the standard trusted CAs), you'll need to make sure your clients install the root CA manually or push it.
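On the asker's second scenario: the two TLS handshakes are independent, so the BIG-IP validates the origin's certificate on its own; as long as the old certificate on the internal instance is unexpired and trusted, the connection succeeds, and the keys on the two hops never need to match. You can see what each hop serves with openssl (hostnames are placeholders):

# Certificate the BIG-IP presents to internet clients
echo | openssl s_client -connect xxxx.com:443 -servername xxxx.com 2>/dev/null | openssl x509 -noout -subject -dates

# Certificate the internal instance presents to the BIG-IP
echo | openssl s_client -connect internal.example.local:443 2>/dev/null | openssl x509 -noout -subject -dates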