Installing a Let's Encrypt Certificate on CentOS - Apache

I am trying to understand the process of installing a Let's Encrypt certificate for Apache on CentOS.
I have read the installation instructions, cloned the Git repository, and there I'm stuck.
Has anybody had experience with this, and can you tell me what to do next?
Thanks

You didn't really make it clear what your error was, but I'll take a guess and say that you left off with cloning the Git repository.
From here, you'll need to run some commands with the letsencrypt-auto program you just cloned to actually obtain a certificate and install it. Let's Encrypt's automatic configuration feature isn't necessarily stable yet, so I recommend running the command to only obtain a certificate and then configuring SSL manually yourself. Change into the directory where you cloned the Git repository and run the following commands:
chmod +x letsencrypt-auto
./letsencrypt-auto certonly
Let's Encrypt will begin downloading its dependencies, and a prompt will eventually appear asking which domains you want a certificate for. Fill it in and press Enter. If all goes well, you'll get output that looks similar to this:
- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/example.com/fullchain.pem. Your
cert will expire on 2016-03-08. To obtain a new version of the
certificate in the future, simply run Let's Encrypt again.
The path may differ slightly from mine since I'm running Ubuntu 14.04. Note the folder it points to, which holds all of the files you need. Now open your Apache configuration, edit the relevant configuration file to point to the SSL certificates you just created, restart Apache, and you should be good to go! A sketch of what that looks like is below.
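For reference, here is a minimal sketch of the relevant Apache virtual-host directives, assuming mod_ssl is installed, example.com and the DocumentRoot are placeholders for your own domain and site, and the certificate landed under the default /etc/letsencrypt/live/ path shown above:
<VirtualHost *:443>
    # placeholder values - adjust ServerName, DocumentRoot, and the domain folder to your setup
    ServerName example.com
    DocumentRoot /var/www/html
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>
On Apache older than 2.4.8 you may need to point SSLCertificateFile at cert.pem and add SSLCertificateChainFile chain.pem instead of using fullchain.pem. On CentOS, restart with sudo systemctl restart httpd (or sudo service httpd restart on older releases).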
If you need any further instructions, let me know.

Related

Bitnami SSL bncert-tool failed for Gcloud

I am trying to renew my SSL cert on a Gcloud VM instance over SSH with Bitnami, but it gives me
"Please type a directory that contains a Bitnami installation. The default installation directory for Linux installers is a directory inside /opt."
every time I run the bncert-tool.
I have followed the steps to try and revert to the backup files as directed in this post (thinking I might have done it poorly last time). I copied the backup file over bitnami.conf and httpd.conf, but I still get the same error.
Is copying the contents of the backup file the right approach?
Please help, my SSL certificate expires in 15 days! Would it not be easier to just get SSL through a WordPress plugin? Is it possible to remove this Bitnami SSL setup completely?
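(For anyone hitting the same wall, a rough, hedged sketch of the restore-and-retry sequence; the paths and backup file names below are assumptions for an older Bitnami LAMP/WordPress stack and need to be adapted to your installation:)
# confirm the installation directory bncert-tool is asking for actually exists
ls /opt/bitnami
# restore the backups that were made before the failed run (backup names here are hypothetical)
sudo cp /opt/bitnami/apache2/conf/bitnami/bitnami.conf.back.<timestamp> /opt/bitnami/apache2/conf/bitnami/bitnami.conf
sudo cp /opt/bitnami/apache2/conf/httpd.conf.back.<timestamp> /opt/bitnami/apache2/conf/httpd.conf
sudo /opt/bitnami/ctlscript.sh restart apache
# then re-run the tool and enter /opt/bitnami when prompted for the installation directory
sudo /opt/bitnami/bncert-tool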

AWS Linux 2 - Lets Encrypt Multi Domain

I have already successfully installed certbot and have a working certificate. I was wondering how I go about adding domain names to the certificate, or whether I need to recreate the certificate.
I don't want to mess up the existing certificate. I haven't tried running this command yet; I want to verify the process before I continue. I tried searching this site and Google, and my results were kind of confusing.
sudo certbot --apache -d mydomain.xyz -d mydomain2.xyz -d www.mydomain.xyz
SSL certificates cannot be modified once issued. They can be replaced with new certificates.
If you run the identical or a modified certbot command, your existing certificate will not be modified or deleted. Certbot will create a new certificate and store it under a different name. Certbot keeps certificates and related files under the directory tree /etc/letsencrypt; you can archive or back up those files. Look at the archive and live folders.
Typically, your web server configuration will use symbolic links pointing into the Let's Encrypt live folder rather than a copy of the certificate in an Apache/Nginx folder.
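As a sketch, assuming the default /etc/letsencrypt layout and the domain names from the question, you can see what certbot currently manages and how the live symlinks resolve:
sudo certbot certificates
ls -l /etc/letsencrypt/live/mydomain.xyz/
# fullchain.pem and privkey.pem are symlinks into /etc/letsencrypt/archive/mydomain.xyz/
Apache would then reference the live paths, so renewals are picked up without further config changes:
SSLCertificateFile /etc/letsencrypt/live/mydomain.xyz/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.xyz/privkey.pem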

Installing Zscaler Certificate to Anaconda3

After the company-wide rollout of Zscaler, my Anaconda started giving me SSL verification errors while installing modules and when using requests to fetch URLs:
Error(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)'))': /simple/'some_module'/
SSLError: HTTPSConnectionPool(host='www.amazon.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
With Zscaler turned off it all works great, but company policy does not allow that.
I found some workarounds like setting verify to False, but that is not what I want.
I would like to install the Zscaler certificate (which was provided to me by our IT department) into Anaconda.
The problem seems to be that it uses conda's generic certificates:
import ssl
print(ssl.get_default_verify_paths())
Output :
DefaultVerifyPaths(cafile=None, capath=None, openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='C:\ci\openssl_1581353098519\_h_env\Library/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='C:\ci\openssl_1581353098519\_h_env\Library/certs')
Any idea what I could possibly do to point conda to the Zscaler certificate that I have?
System info: Windows 10, Anaconda3 2020.02, Python 3.7
Thanks a lot in advance.
What you can do is:
Open a browser and go to www.google.com
Next to the reload-page button you will see a lock icon. Click on it.
Click on: Certificate
Click on the tab: Certification Path
Select the Zscaler Root CA and then click on the View Certificate button
Click on the tab: Details and then click on the Copy to File button
Export the certificate, choosing the Base-64 encoded X.509 (.CER) format
Choose a path where to save the file
Open the Anaconda Prompt and run:
conda config --set ssl_verify path_of_the_file_that_you_just_saved
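For example, the full command and a quick check that conda picked the setting up might look like this (the path below is a placeholder for wherever you saved the exported certificate):
conda config --set ssl_verify C:\certs\ZscalerRootCA.cer
conda config --show ssl_verify
If pip gives the same SSL error, it has an equivalent setting: pip config set global.cert C:\certs\ZscalerRootCA.cer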
Background
I ran into a similar issue with my work laptop, where Zscaler blocked my curl, git, and Anaconda traffic. The temporary fix was to disable SSL verification, but this introduces a number of security vulnerabilities, such as man-in-the-middle attacks.
From what I could gather in my limited research, WSL2 doesn't have an automatic way of importing SSL certificates from the host system.
https://github.com/microsoft/WSL/issues/5134
Solution
The long-term solution is to get the Zscaler certificate and add it to your shell configuration file. Run the following command in WSL after obtaining the certificate and navigating to the directory that contains it.
echo "export SSL_CERT_FILE=<Path to Certificate>/ZscalerRootCA.pem" >> $HOME/.bashrc
which I got from
https://help.zscaler.com/zia/adding-custom-certificate-application-specific-trusted-store#curl-SSL_CERT_FILE
They list similar commands for other applications as well.
If you use a different shell, make sure to change .bashrc to that shell's configuration file. In my case I use fish, so I replaced $HOME/.bashrc with $HOME/.config/fish/config.fish:
echo "export SSL_CERT_FILE=<Path to Certificate>/ZscalerRootCA.pem" >> $HOME/.config/fish/config.fish
After adding the certificate, make sure to reload the shell. In my case, following the instructions from jeffmcneil, I ran
source ~/.config/fish/config.fish
For bash, you would want to run
source ~/.bashrc
or
. ~/.bashrc
from
https://stackoverflow.com/a/2518150/16150356
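To confirm the variable took effect, a quick check (assuming the certificate path from above) is to echo it and make a test request; a normal HTTP status line instead of a certificate error means the TLS handshake now succeeds:
echo $SSL_CERT_FILE
curl -sI https://www.google.com | head -n 1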
Solution for Windows OS
After your Zscaler root cert is installed in the Windows trusted root store, just install pip-system-certs, the successor to python-certifi-win32 (which is no longer maintained). Both packages are available from either PyPI or conda-forge, so use pip, conda, or mamba to install pip-system-certs into every Python environment in which you use the Requests package. The pip-system-certs package patches certifi at runtime to use the Windows trusted root store. This solves the issue for Requests without resorting to setting $REQUESTS_CA_BUNDLE and/or editing your cacert.pem files.
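A minimal sketch of that setup, per environment (the test URL is just an example):
pip install pip-system-certs
# or: conda install -c conda-forge pip-system-certs
python -c "import requests; print(requests.get('https://www.google.com').status_code)"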
Solution for Ubuntu
Copy the Zscaler root certificate file (it must have a .crt extension and be in PEM format) to /usr/local/share/ca-certificates and run sudo update-ca-certificates to update your /etc/ssl/certs/ca-certificates.crt file. However, even then pip-system-certs doesn't quite seem to work, so add export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt to your .profile and restart your shell.
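Put together, the Ubuntu steps look roughly like this (the certificate filename is a placeholder):
sudo cp ZscalerRootCA.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo 'export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt' >> ~/.profile
. ~/.profile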
For more information read the following:
Requests uses the certifi CA Certificate bundle
Certifi, a "carefully curated" bundle of CA certs
install CA certificate in Ubuntu trusted root store
Zscaler help, adding custom certificate root stores
installing custom root stores
WARNING: I do not recommend editing any Python cacert.pem files. Note that they are all linked, so editing one edits all, and your mamba/conda solver may complain that your package cache is invalid because the file size changed due to your edits. Look in each environment's ssl/ folder, including the base env, and in the base env's pkgs/ca-certificates-<date> files. On Windows, cacert.pem is in Library\ssl instead of ssl/. Finally, the cacert.pem file will be overwritten if/when you install or update the Python certifi package, so editing it is really not the ideal solution. A better alternative is to put your Zscaler root cert in a new ssl/ folder in your home directory and set $REQUESTS_CA_BUNDLE to that location. If your company is using Zscaler, then I think it's the only root cert you need.
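A sketch of that alternative for a Linux shell (the folder and filename are just examples; on Windows you would set the variable through System Properties or setx instead):
mkdir -p ~/ssl
cp ZscalerRootCA.pem ~/ssl/
echo 'export REQUESTS_CA_BUNDLE=$HOME/ssl/ZscalerRootCA.pem' >> ~/.bashrc
source ~/.bashrc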

SSL verification behind McAfee Proxy on LAMP VM

I've been trying off and on to get a LAMP development server operational behind my corporate firewall (McAfee Web Gateway). I have an Ubuntu/Trusty64 image in a VirtualBox VM provisioned through Vagrant. I cannot get "some" {most} repositories to load for a proper sudo apt-get update. I'm getting a 401 authentication required error on all 'security.ubuntu.com trusty-security/*' sources and 'archive.ubuntu.com trusty/*' sources, and all fail to fetch. Therefore almost every sudo apt-get install {whatever} fails, and I cannot add the necessary PPA repository to install the LAMP environment I want.
I can turn off SSL verification for some things and can get many things installed - but I need SSL working correctly within this environment.
Digging deeper, I find that if I curl -v https://url.com:443, I get the
curl(60): ssl certificate error: unable to get local issuer certificate.
I have the generic bundle 'ca-bundle.crt' installed locally in /usr/local/share/ca-certificates/ and ran sudo update-ca-certificates, which seemed to update ca-certificates.crt in /etc/ssl/certs/.
I ran strace -o stracker.out curl -v https://url.com:443 and searched for the failing stat() as suggested here by No-Bugs_Hare, and found that curl was looking for 'c099e901.0' in /etc/ssl/certs/ and it isn't there. Googling that particular hex ID turned up nothing, so I'm stuck at this step.
Next I tried strace -o traceOpenSSL.out openssl s_client -connect url.com:443 to see if I could get more detail, but I can't see what causes the
verify error:num=20:unable to get local issuer certificate
followed by two other errors (I'm sure all related to the first one); it then displays the "Server Certificate" within a BEGIN/END block, followed by a bunch of other metadata. The entire session ends with
Verify return code: 21 (unable to verify the first certificate).
So, this is not my forte, and I'm doing what I can to get this VM operational. Like I said earlier, I've been trying many things, and I understand most of the issue is the fact that I'm behind a McAfee firewall within my corporate structure. I don't know how to troubleshoot much beyond what I've explained above, but I'm willing to dig deeper.
I have a few questions. Why is curl looking for that particular hex ID, and where would I find or generate the beast? Are there other troubleshooting steps I should try? The VM is a server-class Ubuntu install, so I only have an SSH CLI terminal and no window-manager GUI to work with.
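(A clarifying note on the hex ID: that filename is OpenSSL's "subject hash" of the issuer certificate curl could not find; OpenSSL-based tools look in /etc/ssl/certs for a file or symlink named <hash>.0. A hedged sketch of how you could check whether a CA certificate exported from the proxy matches that hash and install it follows; proxy-ca.crt is a placeholder for the McAfee Web Gateway CA certificate from your IT department.)
openssl x509 -noout -subject_hash -in proxy-ca.crt
# if this prints c099e901, proxy-ca.crt is the missing issuer
sudo cp proxy-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# update-ca-certificates regenerates the <hash>.0 symlinks in /etc/ssl/certs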

HAProxy and SSL Certification

So I want to set up SSL on HAProxy to make the connection secure. I started off by downloading HAProxy through the app store, but later found out that the installation package doesn't support SSL. So I downloaded HAProxy 1.5.14 and compiled it with USE_OPENSSL=1. When I run haproxy -vv, I can see that SSL is enabled.
The issue I am facing is that when I compile and then install it (using sudo make install), I am unable to find haproxy.cfg. I don't know where it is, so I am unable to configure it to my requirements.
The installation package I got is from the official HAProxy site, and I would appreciate someone's help. Please advise me on how to solve this issue.
Thank you,
Safiul Hasan
The default config file location is:
/etc/haproxy/haproxy.cfg
You can also search your system for the file with this command:
find / -name 'haproxy.cfg'
If haproxy is already running successfully you can find out what config file it is using by looking at the command that is used to run it:
ps x | grep haproxy
This will result in output like this:
28548 ? S 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
The part after the "-f" is the path to the config file haproxy is currently using.
There is no default haproxy.cfg file; you have to create it from scratch.
Look for some samples on the internet to get one fitting your needs.
You can put your configuration file anywhere and tell haproxy to use it with the "-f" parameter.
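As a starting point, here is a minimal sketch of an haproxy.cfg with an SSL frontend; the paths, names, timeouts, and backend address are placeholders to adapt:
global
    maxconn 256
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
frontend https-in
    bind *:443 ssl crt /etc/ssl/private/example.com.pem
    default_backend servers
backend servers
    server app1 127.0.0.1:8080 check
The .pem file referenced by crt should contain the certificate chain and the private key concatenated into one file. Save the configuration as /etc/haproxy/haproxy.cfg, or anywhere you like, and start haproxy with -f pointing at it.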