SSL verification behind McAfee Proxy on LAMP VM

I've been trying off and on to get a LAMP development server operational behind my corporate firewall (McAfee Web Gateway). I have an Ubuntu Trusty64 image on a VirtualBox VM provisioned through Vagrant. I cannot get some (in fact most) repositories to load for a proper sudo apt-get update: I get a 401 Authentication Required error on all 'security.ubuntu.com trusty-security/*' and 'archive.ubuntu.com trusty/*' sources, and they all fail to fetch. As a result, almost every sudo apt-get install {whatever} fails, and I cannot add the PPA repository I need in order to install the LAMP environment I want.
I can turn off SSL verification for some tools and get many things installed, but I need SSL working correctly within this environment.
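For reference, the proxy settings I've been experimenting with look roughly like this; the host, port, and credentials here are placeholders, not my real values:
echo 'Acquire::http::Proxy "http://user:pass@proxy.example.corp:9090/";' | sudo tee /etc/apt/apt.conf.d/95proxy
echo 'Acquire::https::Proxy "http://user:pass@proxy.example.corp:9090/";' | sudo tee -a /etc/apt/apt.conf.d/95proxy
export http_proxy=http://user:pass@proxy.example.corp:9090
export https_proxy=http://user:pass@proxy.example.corp:9090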
Digging deeper, I find that if I run curl -v https://url.com:443, I get:
curl: (60) SSL certificate problem: unable to get local issuer certificate
I have the generic bundle 'ca-bundle.crt' installed locally in /usr/local/share/ca-certificates/ and ran sudo update-ca-certificates, which seemed to update ca-certificates.crt in /etc/ssl/certs/.
I ran strace -o stracker.out curl -v https://url.com:443 and searched for the failing stat() as suggested here by No-Bugs_Hare, and found that curl was looking for 'c099e901.0' in /etc/ssl/certs/, which isn't there. Googling that particular hex ID is no joy, and I am stuck at this step.
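If I understand the certificate layout correctly, that hex name is the OpenSSL subject hash of the issuing CA, so exporting the proxy's root certificate and rehashing should produce exactly that file. A rough sketch of what I think is needed (the .crt filename is a placeholder; I don't actually have the McAfee root cert yet):
openssl x509 -noout -subject -hash -in mcafee-root.crt        # should print the hash curl is stat()ing, e.g. c099e901
sudo cp mcafee-root.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates                                   # rebuilds /etc/ssl/certs and its <hash>.0 symlinks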
Next I tried strace -o traceOppenSSL.out openssl s_client -connect url.com:443 to see if I could get more detail, but I can't see what causes the
verify error:num=20:unable to get local issuer certificate
followed by two other errors (I'm sure they all relate to the first one). The output then displays the server certificate within a BEGIN/END block, followed by a bunch of other metadata. The entire session ends with
Verify return code: 21 (unable to verify the first certificate).
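Something I haven't ruled out yet is pointing s_client straight at the generated bundle, to separate a lookup-path problem from a missing-issuer problem. As far as I understand, a check like this (same placeholder hostname as above) should verify cleanly once the issuer is trusted:
openssl s_client -connect url.com:443 -CAfile /etc/ssl/certs/ca-certificates.crt -showcerts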
So, this is not my forte, and I'm doing what I can to get this VM operational. Like I said earlier, I've tried many things, and I understand that most of the issue stems from being behind a McAfee firewall within my corporate network. I don't know how to troubleshoot beyond what I've described above, but I'm willing to dig deeper.
I have a few questions. Why is curl looking for that particular hex ID, and where would I find or generate the beast? Are there other troubleshooting steps I should try? The VM is a server-class Ubuntu install, so I only have an SSH CLI terminal and no window-manager GUI to work with.

Related

Compute Engine - GCP Click to Deploy solution crashed

I am using Google Cloud deployment manager - Wordpress click to deploy solution.
I installed a certificate through the virtual machine SSH on the Compute Engine page using Certbot. Immediately after I installed the certificate, the page started showing
"ssl_error_bad_cert_domain" and didn't open.
I went back to the SSH session and deleted the certificate using the command sudo certbot delete. Since that didn't solve the error, I tried turning the VM off and on, and also restarting it, which didn't resolve the issue either.
I could see in the Logs Explorer that there was an error coming from the VM saying: Invalid ssh key entry - expired key:[expired_key], so I requested a new one through the VM SSH using:
ssh-keygen -t rsa -f ~/.ssh/gcloud_instance1 -C username
I then printed the contents with cd ~/.ssh && cat gcloud_instance1.pub and added that to the VM ssh-keys text area. That did stop the errors in the Logs Explorer, but it didn't solve the issue, since the WordPress deployment still doesn't open.
Another thing to add: when the VM was turned off and on, the IP address changed, and I am not sure whether that is also causing the crash.
This is what the page currently looks like: webpage failing (screenshot).
This is the Logs Explorer output: https://docs.google.com/spreadsheets/d/1cKXWkfaFbmUFakomwM_-TtoSelxWKBDxNBK7fnMy_PA/edit?usp=sharing
Any ideas what could be happening ?
Thanks
The "ssl_error_bad_cert_domain" error can happen when the SSL certificate is not properly installed or is not issued for the correct domain.
My guess is that you had a typo in the domain name or you need a static ip.
You can use this "-sudo certbot certificates" command to verify after the installation.
It is recommended to have a static IP for better stability, especially if you are using SSL certificates.
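As a rough sketch of what I mean (the address name and region are placeholders, and the reserved IP still has to be attached to the instance and to your DNS A record afterwards):
sudo certbot certificates                                          # confirm which domains the installed cert actually covers
gcloud compute addresses create wordpress-ip --region=us-central1  # reserve a regional static external IP
gcloud compute addresses list                                      # note the reserved IP, then point the domain's A record at it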

Cannot connect to cluster in Amazon DocumentDB

I have been struggling with this issue for a few days. I am trying to connect to my database from Robo 3T and Studio 3T, but I get the same error with both programs:
Note: I can access the EC2 host by SSH from my terminal, which means the certificate, the EC2 endpoint, the port, etc. are fine, so the problem should be somewhere else, right?
SSH Tunnel error: I/O error: Not ASN.1 data
Stacktrace:
|/ SSH Tunnel error: I/O error: Not ASN.1 data
|___/ I/O error: Not ASN.1 data
But as I said before, I can connect by SSH without any issue:
ssh -i "cert.pem" ec2-muyser@ec2-54-244-36-226.us-west-2.compute.amazonaws.com
I checked all the steps described in the AWS article below, and I also disabled TLS in the cluster parameter group, as suggested in point 5, but I am still having the issue.
https://aws.amazon.com/es/premiumsupport/knowledge-center/documentdb-cannot-connect/
I just edited the post to add a few screenshots of my Robo 3T config.
Regards.
I verified the same steps and I am able to connect successfully.
It looks like you are on macOS and didn't select Self-signed Certificate as recommended in the documentation:
https://docs.aws.amazon.com/documentdb/latest/developerguide/robo3t.html
There are two additional settings you need on macOS:
i) If you are on a Linux/macOS client machine, you might have to change the permissions of your private key using the following command:
chmod 400 /fullPathToYourPemFile/.pem
ii) If you are on macOS Catalina or above, choose Self-signed Certificate as the Authentication Method, because macOS does not accept certificates with a validity greater than 825 days.
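If you want to rule out the client settings entirely, the tunnel can also be tested by hand; a rough sketch with placeholder cluster and user names (TLS left off here to match the disabled cluster parameter):
# forward a local port to the DocumentDB endpoint through the EC2 jump host
ssh -i "cert.pem" -N -L 27017:mycluster.cluster-xxxxxxxxxxxx.us-west-2.docdb.amazonaws.com:27017 ec2-user@ec2-54-244-36-226.us-west-2.compute.amazonaws.com
# in a second terminal, connect through the tunnel
mongo --host 127.0.0.1:27017 --username myuser --password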

Digital Ocean CyberPanel (on Ubuntu 18.04): ACME certificates blocked forbidden - 283 Failed to obtain SSL for domain. [issueSSLForDomain]

I installed a brand-new DigitalOcean droplet using a marketplace base image (so on paper everything should be OK out of the box).
When trying to issue certificates, I am getting this error:
[11.13.2019_04-48-28] /root/.acme.sh/acme.sh --issue -d thehouseinkorazim.co.il -d www.thehouseinkorazim.co.il --cert-file /etc/letsencrypt/live/thehouseinkorazim.co.il/cert.pem --key-file /etc/letsencrypt/live/thehouseinkorazim.co.il/privkey.pem --fullchain-file /etc/letsencrypt/live/thehouseinkorazim.co.il/fullchain.pem -w /home/thehouseinkorazim.co.il/public_html --force
[11.13.2019_04-48-28] [Errno 2] No such file or directory [Failed to obtain SSL. [obtainSSLForADomain]]
[11.13.2019_04-48-28] 283 Failed to obtain SSL for domain. [issueSSLForDomain]
[11.13.2019_04-48-34] Trying to obtain SSL for: thehouseinkorazim.co.il and: www.thehouseinkorazim.co.il
I checked, and UFW is not installed.
I do have a network firewall, but it is the same one (same rules) as on another droplet that does allow certificates, so I don't think it is the cause.
I searched all the answers online with no luck.
I even installed certbot to issue the certificate manually, but got the same error (I did it because I know you need to register initially to get certificates, and I hadn't, so I thought that was the cause).
Any ideas? Thanks!
Update: I created a clean droplet again; this is the error without anything I did manually:
Cannot issue SSL. Error message: ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.crt': No such file or directory ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.key': No such file or directory 0,283 Failed to obtain SSL for domain. [issueSSLForDomain]
I checked, and there is no "cert" folder under "conf" in the path above.
There's a known SSL issue in the recent version due to some environment/code changes. We are already aware of it and have submitted a new version with that issue fixed. Please give it a day or two and you should be able to launch the new version from the marketplace, which comes with CyberPanel v1.9.2.
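In the meantime, a manual workaround that may help (untested, and assuming the default paths shown in the error above) is to recreate the missing directory and retry the issue command:
sudo mkdir -p /usr/local/lsws/admin/conf/cert
# then re-run the SSL issue from CyberPanel, or manually with acme.sh, e.g.:
sudo /root/.acme.sh/acme.sh --issue -d example.com -d www.example.com -w /home/example.com/public_html --force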
Best

Wget fails with certificate error

As part of an automated build, we download some code from GitHub. Minimal example:
wget github.com
Recently, the command started failing with a certificate error:
URL transformed to HTTPS due to an HSTS policy
--2017-10-05 11:43:45-- https://github.com/
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
ERROR: cannot verify github.com's certificate, issued by 'CN=DigiCert SHA2 Extended Validation Server CA,OU=www.digicert.com,O=DigiCert Inc,C=US':
Unable to locally verify the issuer's authority.
I tried updating the certificate store, and wget itself:
update-ca-certificates
apt-get install wget
The error is still the same.
My wget version is GNU Wget 1.17.1, and the OS is Ubuntu 16.04.3.
You can avoid checking the validity of the certificate by adding the --no-check-certificate option to the wget command line.
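A minimal usage sketch; note this skips verification entirely, so it only hides the underlying trust problem:
wget --no-check-certificate https://github.com
# a safer alternative is to point wget at a known-good CA bundle explicitly
wget --ca-certificate=/etc/ssl/certs/ca-certificates.crt https://github.com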
The answer turned out to lie somewhere in package configuration. Unfortunately, I am unable to tell exactly why. The suspicion is that some Mono version installed from a PPA was messing with our certificate store.

Installing a Let's Encrypt certificate on CentOS

I am trying to understand the process of installing a letsencrypt certificate on Apache on Centos.
I have read the installation instructions, cloned the git repository, and there I’m stuck.
Has anybody had experience with this, and what should I do next?
Thanks
You didn't really make it clear what your error was, but I'll take a guess and say that you left off with cloning the Git repository.
From here, you'll need to run some commands with the letsencrypt-auto program you just cloned to actually obtain a certificate and install it. Let's Encrypt's automatic configuration feature isn't necessarily stable yet, so I recommend running the command to only obtain a certificate and then configuring SSL manually yourself. Head into the directory you cloned the Git repository into and run the following commands:
chmod +x letsencrypt-auto
./letsencrypt-auto certonly
Let's Encrypt will begin to download its dependencies and a prompt will finally appear requesting which domains you want a certificate for. Just fill it in and press enter. If all goes well, you'll get an output that looks similar to this:
- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/example.com/fullchain.pem. Your
cert will expire on 2016-03-08. To obtain a new version of the
certificate in the future, simply run Let's Encrypt again.
This path may differ from mine, since I'm running Ubuntu 14.04. Note the path to the folder, which holds all of the files you need. Now head into your Apache configuration, edit the configuration file to point to the SSL certificates you just created, restart Apache, and you should be good to go!
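For example, on CentOS 7 the SSL virtual host typically lives in mod_ssl's config file; a rough sketch of the remaining steps, assuming the default httpd layout (your domain replaces example.com):
sudo yum install -y mod_ssl
# edit /etc/httpd/conf.d/ssl.conf so that SSLCertificateFile and SSLCertificateKeyFile
# point at /etc/letsencrypt/live/example.com/fullchain.pem and privkey.pem
sudo vi /etc/httpd/conf.d/ssl.conf
sudo systemctl restart httpd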
If you need any further instructions, let me know.