I'm a newbie to Apache. I have tried to implement multiple SSL CA certificates on my HP-UX Apache 2.2 web server, but whenever I try to hit my site with a certificate from the secondary CA, the site doesn't recognize it.
For example, I have DoD root certificates and ORC root certificates that I need in order to access my site. I have tried the SSLCACertificateFile directive with the certificates concatenated (with the DoD certs first, and then with the ORC root certs first), and the site only recognizes the DoD certs both times. I have tried the SSLCACertificatePath directive, creating the hash-file links with sequential extensions, and only the DoD certs are seen. The kicker is that I have tried using the ORC certs ONLY with the SSLCACertificateFile directive, and the DoD certs are still the only ones that are seen.
I am stopping and starting my Apache process with each change. My permissions for these tests are 777. I am not seeing anything in the logs (another question in itself: I expect my logs to be as verbose as IIS's, but they sit at 0 bytes with a current timestamp).
What am I missing? Thanks.
There wasn't an issue with the certificates per se; the server wasn't releasing the root httpd process even though the 'stop' command returned successfully.
I therefore had to run 'sudo pkill -9 httpd' after a 'stop' to release the root process. When Apache was started again, all certificates were seen.
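For anyone who hits the same thing, the sequence looked roughly like this (the apachectl path is only an example; point it at your own HP-UX Apache install):

# Stop Apache, confirm nothing survived, force-kill stragglers, then start again
/opt/hpws/apache/bin/apachectl stop     # example path; use your install's apachectl
ps -ef | grep '[h]ttpd'                 # should print nothing if the stop really worked
sudo pkill -9 httpd                     # force-kill the leftover root process
/opt/hpws/apache/bin/apachectl start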
I have been looking for a solution for hours but can't find one. I am using a Let's Encrypt SSL certificate via certbot.
My domain is ektaz.com. When I check the certificate in the browser, it says:
Expires: 8 November 2021 Monday 16:24:33 GMT+03:00
When I check it from the server side with certbot certificates, I get:
Expiry Date: 2021-11-08 13:24:33+00:00 (VALID: 39 days)
But all browsers say the certificate is invalid, and I don't understand why.
Also, I have renewed this certificate many times using certbot renew and had no issues so far. I have cleared all caches and tried again; the result is the same. I restarted Apache many times and even restarted the server, but nothing changed.
Server OS: Ubuntu 20.04 LTS
Your certificate is likely not invalid at all.
There is a simple fix. I'm using nginx configuration style for this example:
ssl_certificate /usr/local/etc/letsencrypt/live/domain.com/cert.pem;
Lines like that need to be replaced by lines like this:
ssl_certificate /usr/local/etc/letsencrypt/live/domain.com/fullchain.pem;
Then refresh your server's configuration.
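If you want to confirm what chain the server is actually sending after the reload, a quick check with openssl s_client works; this is just a generic sketch, with domain.com standing in for your own hostname:

# Lists every certificate the server presents; after the fix you should see
# the leaf certificate followed by the R3 intermediate, not the leaf alone
openssl s_client -connect domain.com:443 -servername domain.com -showcerts </dev/null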
This problem is popping up all over the place, on both small and large websites.
The root cause is older web server configuration tutorials that pointed at the cert.pem file (because it worked) rather than the fullchain.pem file, which makes sure a browser gets the full chain needed to validate the certificate.
Unfortunately, Apple, Mozilla, and some others have dropped the ball and are still relying on the old IdenTrust DST Root CA X3 certificate, which expired yesterday afternoon at 2:21:40 pm CST, to check certificates that were using it before. iOS 15.0 (19A346) is the only released Apple software version that automatically uses the new intermediate certificate even when the server doesn't send the full chain.
The actual intermediate certificate used by the server is issued to R3 by ISRG Root X1, but unless you configure your server to tell browsers this explicitly by serving fullchain.pem, many clients sadly won't work it out on their own.
But once again, this is an easy fix. Just make that slight change in your server's configuration, "cert.pem" -> "fullchain.pem", and you should be fine.
And there's no reason not to keep on using the fullchain.pem file permanently. In fact, even prior to this situation, various networks (college campus WiFi networks are notorious for this) will screw up your certificate's chain of authority unless you use the fullchain.pem file anyway. Let's Encrypt even recommends this now as the only proper way to configure your web server to use certificates.
I'm not even sure I asked the question right...
I have three servers running minio in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/ and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works, and I am able to connect to all three servers from a web browser over HTTPS with good certs. However, users will connect to the server using the parent domain "files.example.com" (using distributed DNS).
I already ran certbot and generated the certs for the parent domain, and I copied the certs into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (naming the files "files.example.com-public.crt" and "files.example.com-public.key" respectively). This did not work: when I try to open the parent domain "files.example.com" I get a cert error (which I can bypass) indicating the certificate is for the node I connected to and not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have the idea that there might be a way to issue a cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up minio's Slack channel and posted on their GitHub, but no one is replying to me, not even to say "this won't work."
Any ideas?
I gave up and ran certbot in manual mode. I had to install Apache on one of the nodes, then certbot had me jump through a couple of minor hoops (namely, it had me create a new TXT record with my DNS provider and then create a file with a text string on the server for verification). I then copied the created certs into my minio config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin, as it allows for automated cert renewal, but I'll live with this for now.
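Roughly, the manual run and the copy step looked like this; the --preferred-challenges choice and the destination filenames are assumptions on my part (MinIO normally looks for public.crt and private.key in its certs directory), so adjust for your setup:

# Issue a cert for the parent domain using a manual DNS-01 challenge
sudo certbot certonly --manual --preferred-challenges dns -d files.example.com

# Copy the results into minio's certs directory on each node
sudo cp /etc/letsencrypt/live/files.example.com/fullchain.pem /etc/minio/certs/public.crt
sudo cp /etc/letsencrypt/live/files.example.com/privkey.pem /etc/minio/certs/private.key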
You could also run all of them behind a reverse proxy that handles TLS termination with a wildcard certificate (i.e. *.files.example.com). The reverse proxy would centralize the certificates, DNS, and certbot scripting on a single node, essentially load balancing TLS and DNS for the minio nodes. The performance hit of "load-balancing" TLS like this may be acceptable depending on your workload, considering the simplification it brings to your current DNS and TLS cert setup.
[Digital Ocean example using nginx and certbot plugins] https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
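If you go the wildcard route, the certificate has to be issued via a DNS-01 challenge; with a supported DNS provider plugin the request looks something like the sketch below (the DigitalOcean plugin and credentials path are placeholders for whatever your provider offers):

# Request a wildcard certificate; wildcards require the DNS-01 challenge
sudo certbot certonly \
  --dns-digitalocean \
  --dns-digitalocean-credentials ~/.secrets/certbot/digitalocean.ini \
  -d 'files.example.com' -d '*.files.example.com'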
Previously I used a RapidSSL certificate. After it expired I moved to Let's Encrypt (free SSL) and installed the new certificate on my server. But the site still uses the old SSL certificate; only after a couple of refreshes does it pick up the new SSL certificate, and resources (CSS, images, scripts) fail to load with a NET::ERR_CERT_DATE_INVALID error.
I restarted Apache a couple of times.
I'm using Ubuntu 16.04.
NET::ERR_CERT_DATE_INVALID means your SSL certificate's date is invalid, which here is because your old certificate has expired. Check your Apache config to make sure the certificate files it references are the desired ones. For detailed debugging, look at your Apache server logs, which are most likely located under /var/log/apache2.
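A few commands that help with that check; the paths assume a stock Debian/Ubuntu Apache layout:

# Which certificate files is Apache actually configured to serve?
grep -Ri 'SSLCertificate' /etc/apache2/sites-enabled/

# Does the configured certificate carry the expiry date you expect?
openssl x509 -noout -subject -enddate -in /path/to/your/certificate.pem

# Sanity-check the config and watch the error log while restarting
sudo apache2ctl configtest
sudo tail -f /var/log/apache2/error.log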
I recently changed my certificate to Let's Encrypt's.
I placed the new certificate in the location of the old one:
cat /etc/haproxy/certs/fullchain.pem /etc/haproxy/certs/privkey.pem > /etc/haproxy/certs/mydomain.com.pem
And in my haproxy.cfg I have:
frontend https
bind :::8443 v4v6 ssl crt /etc/haproxy/certs/mydomain.com.pem no-sslv3
Then I ran systemctl reload haproxy, but it still serves the old one when I access it from my browser or test with SSL Labs.
If I use curl -kv mydomain.com it shows the correct certificate.
I had this same issue: even after reloading the config, haproxy would randomly serve old certs. After looking around for many days, I found that the "reload" operation creates a new process without killing the old one, and the old processes were still serving the outdated certs. You can check this with "ps aux | grep haproxy".
Fix
If your environment allows for a few seconds of downtime, run "service haproxy stop" until no haproxy processes are left, and then start haproxy.
**OR**
Sort the processes by start time and kill the old ones, checking in between that the service is still running.
1 year later EDIT
Instead of manually applying the fix above after every reload, we added a "hard-stop-after" of 600 seconds. We made sure to kill all old processes after adding the parameter and verified with ps aux that none remained. Now older processes are forced to die after 600 seconds and cannot keep serving outdated certs.
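For reference, this is roughly how we checked for and cleared the stragglers (a sketch; service names may differ on your distro, and the hard-stop-after directive itself goes in the global section of haproxy.cfg):

# List haproxy processes with their start times; anything old is a leftover worker
ps -eo pid,lstart,cmd | grep '[h]aproxy'

# If a few seconds of downtime are acceptable: stop, confirm nothing is left, start again
sudo systemctl stop haproxy
ps -eo pid,cmd | grep '[h]aproxy'    # should print nothing
sudo systemctl start haproxy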
If the old .pem file is still in /etc/haproxy/certs, HAProxy might be using it instead of the new one.
I had a similar problem. HAProxy was using an expired certificate that had originally been created only for dev.domain.com with Let's Encrypt. Later I changed the certificate creation process to include multiple domains:
domain.com, www.domain.com, and dev.domain.com.
The old dev.domain.com.pem was still in the /etc/haproxy/certs folder. When I visited https://dev.domain.com, HAProxy used the old .pem certificate file and Chrome issued an expired-certificate warning.
When I deleted the dev.domain.com.pem file and reloaded HAProxy, it started using the new certificate and SSL worked correctly again.
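A quick way to spot which .pem in that directory is stale before deleting anything:

# Print the subject and expiry date of every certificate HAProxy can pick up
for f in /etc/haproxy/certs/*.pem; do
    echo "== $f"
    openssl x509 -noout -subject -enddate -in "$f"
done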
My problem was a historic, outdated wildcard cert that HAProxy (HA-Proxy version 1.8.19-1+deb10u3 2020/08/01) erroneously picked up and spat out as an outdated subdomain cert, both in the browser and in cURL.
Reloading, restarting, stopping and starting, and even upgrading Debian did not help. What did help was removing the outdated wildcard cert and reloading.
On Debian, using GitLab, I ran into issues with my self-signed certificate.
Reading through the code after a lot of searching on the Internet (I guess it's the last resort; FOSS is helpful), I found the following lines in gitlab-shell/lib/gitlab_net.rb, which left me... perplexed.
if config.http_settings['self_signed_cert']
http.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
Most Stack Overflow answers about the various issues I've had until now have led me to believe that VERIFY_NONE, as you'd expect, doesn't verify anything. VERIFY_PEER seems, based on my reading, to be the correct setting for a self-signed certificate.
As I read it, it feels like taking steps to secure my connection using a certificate and then just deciding not to use it. Is it a bug, or am I misreading the source?
gitlab-shell (on the GitLab server) has to communicate with the GitLab instance through an HTTPS or SSH URL API.
If it is a self-signed certificate, gitlab-shell doesn't want any error or warning when trying to access those GitLab URLs, hence the SSL::VERIFY_NONE.
But that same certificate is also used by clients (outside the GitLab server) accessing those same GitLab HTTPS URLs from their browsers.
For them, the self-signed certificate is useful, provided they install it in their browser keystore.
For those transactions (clients to GitLab), the certificate will be "verified".
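For clients on Debian-based machines that reach GitLab with git or curl rather than a browser, adding the self-signed certificate to the system trust store is usually enough; a sketch, with the certificate filename as an example only:

# Add the self-signed certificate to the system trust store (file must end in .crt)
sudo cp gitlab.example.com.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates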
The OP Kheldar points out Mislav's post:
OpenSSL expects to find each certificate in a file named by the certificate subject’s hashed name, plus a number extension that starts with 0.
That means you can’t just drop My_Awesome_CA_Cert.pem in the directory and expect it to be picked up automatically.
However, OpenSSL ships with a utility called c_rehash which you can invoke on a directory to have all certificates indexed with appropriately named symlinks.
(See for instance OpenSSL Verify location)
cd /some/where/certs
c_rehash .
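If c_rehash isn't available, the same symlink can be created by hand with openssl itself, for example:

# Compute the subject hash and link the certificate under <hash>.0
cd /some/where/certs
ln -s My_Awesome_CA_Cert.pem "$(openssl x509 -hash -noout -in My_Awesome_CA_Cert.pem).0"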