Cannot connect to cluster in Amazon DocumentDB - SSH

I have been struggling with this issue for a few days. I am trying to connect to my DB from Robo 3T and Studio 3T, but I get the same error with both programs:
Note: I can connect by SSH from my terminal, which means the certificate is fine, the EC2 endpoint is fine, the port, etc., so the problem should be somewhere else, right?
SSH Tunnel error: I/O error: Not ASN.1 data
Stacktrace:
|/ SSH Tunnel error: I/O error: Not ASN.1 data
|___/ I/O error: Not ASN.1 data
But as I said before, I can connect by SSH without any issue:
ssh -i "cert.pem" ec2-muyser@ec2-54-244-36-226.us-west-2.compute.amazonaws.com
I checked all the steps described in the AWS article below, and I also disabled TLS in the cluster parameter group, as suggested in point 5, but I am still having the issue.
https://aws.amazon.com/es/premiumsupport/knowledge-center/documentdb-cannot-connect/
I have just edited the post to add a few screenshots of my Robo 3T config:
Regards.

I verified the same steps and I am able to connect successfully.
It looks like you are on macOS and you didn't select Self-signed Certificate as recommended in the documentation:
https://docs.aws.amazon.com/documentdb/latest/developerguide/robo3t.html
These are the two additional settings you need to apply on macOS:
i) If you are on a Linux/macOS client machine, you might have to change the permissions of your private key using the following command:
chmod 400 /fullPathToYourPemFile/.pem
ii) If you are on macOS Catalina or later, choose Self-signed Certificate as the Authentication Method, because macOS does not accept certificates with a validity period longer than 825 days.
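If the GUI tunnel still fails after these changes, one way to narrow the problem down is to build the tunnel manually and point the client at localhost. A minimal sketch, assuming the default DocumentDB port 27017 and a placeholder cluster endpoint (docdb-cluster.cluster-xxxx.us-west-2.docdb.amazonaws.com is hypothetical):

# Forward local port 27017 through the EC2 host to the DocumentDB cluster endpoint
ssh -i "cert.pem" -N -L 27017:docdb-cluster.cluster-xxxx.us-west-2.docdb.amazonaws.com:27017 ec2-muyser@ec2-54-244-36-226.us-west-2.compute.amazonaws.com

If Robo 3T then connects to localhost:27017 with its own SSH tunnel option disabled, the problem is in the GUI tunnel settings rather than in the cluster.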


Compute Engine - GCP Click to Deploy solution crashed

I am using Google Cloud Deployment Manager - the WordPress Click to Deploy solution.
I installed a certificate through the virtual machine's SSH console on the Compute Engine page using Certbot. Immediately after I installed the certificate, the page started showing
"ssl_error_bad_cert_domain" and didn't open.
I went back to the SSH console and deleted the certificate using the Certbot command sudo certbot delete. Since that didn't solve the error, I tried turning the VM off and on, and also restarting it, but neither resolved the issue.
I could see in the Logs Explorer an error coming from the VM saying: Invalid ssh key entry - expired key:[expired_key], so I requested a new one through the VM's SSH console using
ssh-keygen -t rsa -f ~/.ssh/gcloud_instance1 -C username, printed the contents of the key with cd ~/.ssh && cat gcloud_instance1.pub, and then added that to the VM's ssh-keys text area. That did stop the errors in the Logs Explorer but didn't solve the issue, since the WordPress deployment still doesn't open.
Another thing to add: when the VM was turned off and on, the IP address changed; I am not sure whether that is also contributing to the crash.
This is how the page currently looks: webpage failing
This is the Logs Explorer output: https://docs.google.com/spreadsheets/d/1cKXWkfaFbmUFakomwM_-TtoSelxWKBDxNBK7fnMy_PA/edit?usp=sharing
Any ideas what could be happening?
Thanks
The "ssl_error_bad_cert_domain" error can happen when the SSL certificate is not properly installed or is not issued for the correct domain.
My guess is that you had a typo in the domain name or you need a static ip.
You can use this "-sudo certbot certificates" command to verify after the installation.
It is recommended to have a static IP for better stability, especially if you are using SSL certificates.
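A sketch of how the check and a reissue might look (example.com stands in for the real domain, and the --apache flag assumes the Apache stack that the Click to Deploy WordPress image ships with):

# List the certificates Certbot manages and the domains they cover
sudo certbot certificates

# Reissue for the correct names if the domain was mistyped or missing
sudo certbot --apache -d example.com -d www.example.com

If the ephemeral IP changed when the VM was restarted, you can promote the current address to a static one (the name, address, and region below are placeholders) and point the domain's A record at it:

gcloud compute addresses create wordpress-ip --addresses=203.0.113.10 --region=us-central1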

Use certificates from host inside ddev environment to connect to a remote system

I am trying to connect to a remote Elastic cluster that is reachable from the host (Windows 10 Enterprise) system.
I tested the host's connection via curl https://url.to.target:443 and got the expected 'For sure, it's search' response.
When I try the same from inside the webserver container (Debian GNU/Linux 10 (buster)), it fails with:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it.
Is there a simple way to use the host's certificate store?
Copy yourcert.crt to the .ddev/web-build folder.
Create a custom .ddev/web-build/Dockerfile, for example:
ARG BASE_IMAGE
FROM $BASE_IMAGE
# Copy the host certificate into the container's local CA directory
COPY ./yourcert.crt /usr/local/share/ca-certificates/
# Rebuild the CA bundle so the new certificate is trusted
RUN update-ca-certificates --fresh
When referencing the cert in your code, use:
$myCert='/usr/local/share/ca-certificates/yourcert.crt';
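After adding the Dockerfile, rebuild the project and verify from inside the container; ddev restart and ddev exec are standard ddev commands, and the URL is the placeholder from the question:

ddev restart
ddev exec curl -I https://url.to.target:443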
Have you tried adding the insecure option to the .curlrc file in your home directory?
echo insecure >> $HOME/.curlrc
This shouldn't be used in production!
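If you only need it for a single call, a less invasive variant is to disable verification per command, or better, to point curl at the specific CA file (the path assumes the cert location from the other answer):

curl --insecure https://url.to.target:443
curl --cacert /usr/local/share/ca-certificates/yourcert.crt https://url.to.target:443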

DigitalOcean CyberPanel (on Ubuntu 18.04): ACME certificates blocked/forbidden - 283 Failed to obtain SSL for domain. [issueSSLForDomain]

I installed a brand new DigitalOcean droplet using a Marketplace image (so on paper everything should be OK out of the box).
When trying to issue certificates, I am getting this error:
[11.13.2019_04-48-28] /root/.acme.sh/acme.sh --issue -d thehouseinkorazim.co.il -d www.thehouseinkorazim.co.il --cert-file /etc/letsencrypt/live/thehouseinkorazim.co.il/cert.pem --key-file /etc/letsencrypt/live/thehouseinkorazim.co.il/privkey.pem --fullchain-file /etc/letsencrypt/live/thehouseinkorazim.co.il/fullchain.pem -w /home/thehouseinkorazim.co.il/public_html --force
[11.13.2019_04-48-28] [Errno 2] No such file or directory [Failed to obtain SSL. [obtainSSLForADomain]]
[11.13.2019_04-48-28] 283 Failed to obtain SSL for domain. [issueSSLForDomain]
[11.13.2019_04-48-34] Trying to obtain SSL for: thehouseinkorazim.co.il and: www.thehouseinkorazim.co.il
I checked and UFW is not installed.
I do have a network firewall, but it is the same one as on another droplet that does allow issuing certificates (same rules), so I don't think it is the cause.
I searched all the answers online with no luck.
I even installed certbot to manually issue a certificate, but got the same error (I did it because I know you need to register initially to get certificates, and I hadn't, so I thought that was the cause).
Any ideas? Thanks!
Update: I created a clean droplet again; this is the issue without anything done manually:
Cannot issue SSL. Error message: ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.crt': No such file or directory ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.key': No such file or directory 0,283 Failed to obtain SSL for domain. [issueSSLForDomain]
I checked and there is no "cert" folder under "conf" at the path written above.
There's a known SSL issue in the recent version due to some environment/code changes. We are already aware of it and have submitted a new version that includes a fix. Please give it a day or two and you should be able to launch the new version from the Marketplace, which comes with CyberPanel v1.9.2.
Best
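Until the fixed image lands, a possible manual workaround for the missing admin certificate, based on the paths in the error message (a sketch only; the self-signed parameters are assumptions), is to create the directory and generate a temporary self-signed pair:

mkdir -p /usr/local/lsws/admin/conf/cert
# Temporary self-signed cert/key so the symlinks can be created; subject is a placeholder
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /usr/local/lsws/admin/conf/cert/admin.key -out /usr/local/lsws/admin/conf/cert/admin.crt -subj "/CN=localhost"

Then retry issuing the domain certificate from the CyberPanel UI.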

Can't connect Filebeat to Logstash

I am new to Elasticsearch and I am following the tutorial here:
I have hit a stumbling block, as I cannot connect the server that is logging activity with Filebeat to the server where the ELK stack is configured.
I have narrowed it down to an issue with the SSL certificates copied from the ELK server, as when I check /var/log/messages I get the following error:
usr/bin/filebeat[13730]: transport.go:125: SSL client failed to
connect with: x509: certificate signed by unknown authority (possibly
because of "crypto/rsa: verification error" while trying to verify
candidate authority certificate "serial:16193853809450343771")
However, the keys have been copied over and this file is the same on both servers:
cat /etc/pki/tls/certs/logstash-forwarder.crt
When I try to read the syslogs, I get the following message :
sudo tail /var/log/syslog | grep filebeat:
tail: cannot open ‘/var/log/syslog’ for reading: No such file or directory.
I would appreciate any pointers on this.
I found a similar issue in the Elastic forum at the following link.
In summary, you should add to your Filebeat config:
insecure: true
and then see if you manage to connect. If you do, you can use these guidelines for how to configure your SSL connection.
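For orientation, a sketch of where that flag lives in filebeat.yml, assuming the older Filebeat style tls block used by that generation of tutorials (newer Filebeat versions spell it ssl.verification_mode: none instead; the host name is a placeholder):

output:
  logstash:
    hosts: ["your-elk-server:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
      insecure: true

insecure: true only skips certificate verification so you can confirm connectivity; once the connection works, fix the CA setup and remove it.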

SSL verification behind McAfee Proxy on LAMP VM

I've been trying off and on to get a LAMP development server operational behind my corporate firewall (McAfee Web Gateway). I have an Ubuntu/Trusty64 image on a VirtualBox VM provisioned through Vagrant. I cannot get some (most) repositories to load for a proper sudo apt-get update. I'm getting a 401 authentication required error on all 'security.ubuntu.com trusty-security/*' sources and 'archive.ubuntu.com trusty/*' sources, and all fail to fetch. Therefore almost every sudo apt-get install {whatever} fails, and I cannot add the PPA repository necessary to install the LAMP environment I want.
I can turn off SSL verification for some things and can get many things installed - but I need SSL working correctly within this environment.
Digging deeper, I find that if I curl -v https://url.com:443, I get
curl: (60) SSL certificate problem: unable to get local issuer certificate.
I have the generic bundle 'ca-bundle.crt' installed locally in /usr/local/share/ca-certificates/ and ran sudo update-ca-certificates, which seemed to update ca-certificates.crt in /etc/ssl/certs/.
I ran strace -o stracker.out curl -v https://url.com:443 and searched for the failing stat() as suggested here by No-Bugs_Hare, and found that curl was looking for 'c099e901.0' in /etc/ssl/certs/ and it isn't there. Googling that particular hex ID is no joy, and I am stuck at this step.
Next I tried strace -o traceOpenSSL.out openssl s_client -connect url.com:443 to see if I could get more detail, but I can't see what causes the
verify error:num=20:unable to get local issuer certificate
followed by two other errors (I'm sure all relating to the first one); it then displays the server certificate within a BEGIN/END block, followed by a bunch of other metadata. The entire session ends with
Verify return code: 21 (unable to verify the first certificate).
So, this is not my forte, and I'm doing what I can to get this VM operational. Like I said earlier, I've tried many things and understand that most of the issue is the fact that I'm behind a McAfee firewall within my corporate structure. I don't know how to troubleshoot beyond what I've described above, but I'm willing to dig deeper.
I have a few questions. Why is curl looking for that particular hex ID, and where would I find or generate the beast? Are there other troubleshooting steps I should try? The VM is a server-class Ubuntu install, so I only have an SSH CLI terminal and no window-manager GUI to work with.
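For the hex ID question: OpenSSL looks up CAs in /etc/ssl/certs/ by a hash of the certificate's subject name, so c099e901.0 is the subject hash of whichever CA issued the certificates curl is seeing, here presumably the McAfee gateway's interception CA. A sketch of how to verify and fix this, assuming you can export that root CA from the gateway or a browser (mcafee-ca.crt is a placeholder name):

# Print the subject hash OpenSSL computes for the CA certificate
openssl x509 -noout -subject_hash -in mcafee-ca.crt

# If it prints c099e901, this is the CA curl is searching for; install it
sudo cp mcafee-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

update-ca-certificates appends the certificate to /etc/ssl/certs/ca-certificates.crt and creates the hash symlink that curl's stat() was failing to find.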