I am new to Elasticsearch and I am following the tutorial here:
I have hit a stumbling block: I cannot connect the server that is shipping logs with Filebeat to the server where the ELK stack is configured.
I have narrowed it down to an issue with the SSL certificates copied from the ELK server, as when I check /var/log/messages I get the following error:
usr/bin/filebeat[13730]: transport.go:125: SSL client failed to
connect with: x509: certificate signed by unknown authority (possibly
because of "crypto/rsa: verification error" while trying to verify
candidate authority certificate "serial:16193853809450343771")
However, the keys have been copied over and these files are identical on both servers:
cat /etc/pki/tls/certs/logstash-forwarder.crt
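A quick way to confirm they really are identical (a sketch, run on both machines):
# The SHA-256 fingerprints should be identical on the ELK server and the Filebeat client
openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -fingerprint -sha256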
When I try to read the syslogs, I get the following message:
sudo tail /var/log/syslog | grep filebeat:
tail: cannot open ‘/var/log/syslog’ for reading: No such file or directory.
I would appreciate any pointers on this.
I found a similar issue on the Elastic forum at the following link.
In summary, you should add the following to your Filebeat config:
insecure: true
Then see if you manage to connect. If you do, you can use these guidelines for how to configure your SSL connection.
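For reference, here is a minimal sketch of where that flag lives, assuming a Filebeat 1.x-style config with a tls section under the logstash output (host, port, and CA path are placeholders):
# filebeat.yml sketch; host, port, and CA path are placeholders
output:
  logstash:
    hosts: ["your-elk-server:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
      # Skips server certificate verification; use only to isolate the problem
      insecure: true
In Filebeat 5 and later, the equivalent setting is ssl.verification_mode: none under output.logstash.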
I have been struggling with this issue for a few days. I am trying to connect to my DB from Robo 3T and Studio 3T, but I get the same error with both programs:
Note: I can access the instance over SSH from my terminal, which means the certificate, the EC2 endpoint, the port, etc. are fine... so the problem should be somewhere else, right?
SSH Tunnel error: I/O error: Not ASN.1 data
Stacktrace:
|/ SSH Tunnel error: I/O error: Not ASN.1 data
|___/ I/O error: Not ASN.1 data
But as I said before, I can connect over SSH without any issue:
ssh -i "cert.pem" ec2-muyser#ec2-54-244-36-226.us-west-2.compute.amazonaws.com
I checked all the steps described in the AWS article below, and I also disabled TLS via the cluster parameter group, as suggested in point 5, but I am still having the issue.
https://aws.amazon.com/es/premiumsupport/knowledge-center/documentdb-cannot-connect/
I have just edited the post to add a few screenshots of my Robo 3T config.
Regards.
I verified the same steps and I am able to connect successfully.
It looks like you are on macOS and you didn't select Self-signed Certificate as recommended in the documentation:
https://docs.aws.amazon.com/documentdb/latest/developerguide/robo3t.html
These are two additional settings you need to apply on macOS.
i) If you are on a Linux/macOS client machine, you might have to change the permissions of your private key using the following command:
chmod 400 /fullPathToYourPemFile/.pem
ii) If you are on macOS Catalina or above, choose Self-signed Certificate as the Authentication Method, because macOS does not accept certificates with a validity longer than 825 days.
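As a hedged sanity check, you can read a certificate's validity window with openssl (the file path is a placeholder):
# Certificates spanning more than 825 days are rejected by macOS 10.15 and later
openssl x509 -in /path/to/your/certificate.pem -noout -startdate -enddate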
I am using a Docker container to run a bunch of services, all of which use certificates to communicate with each other.
When starting up those services, one in particular complains with the following error:
> discovery_1 | INFO ttn: Got public keys for token validation
> discovery_1 | DEBUG Connected to gRPC server Address=localhost:1900
> discovery_1 | FATAL Could not start client for gRPC proxy error=x509: certificate is valid for discovery, not localhost
> ttnbackbone_discovery_1 exited with code 1
I have created the certificate for the "discovery" user, but the service still connects to localhost somehow, which I don't understand... I have also followed this Docker tutorial on certificate usage, but I still get the same error.
What else can I do?
Thanks in advance,
Regards!
I encountered this today. x509 certificates have a Common Name attribute that some software uses to match against the DNS hostname of a server. Here was my error, with a certificate whose CN was localhost and a DNS hostname of docker1-staging:
error during connect: Get https://docker1-staging:2376/v1.26/containers/json: x509: certificate is valid for localhost, not docker1-staging
I'll have to regenerate the certificate used by the Docker server and make sure it has a CN value of docker1-staging. You'll have to do the same with a CN value of localhost.
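For anyone regenerating, here is a sketch with openssl, assuming the usual Docker TLS file names (ca.pem, ca-key.pem, server-key.pem); adding the hostname to subjectAltName matters too, since modern clients ignore CN when a SAN is present:
# Issue a CSR whose CN matches the hostname clients will dial
openssl req -new -key server-key.pem -subj "/CN=docker1-staging" -out server.csr
# List every name clients may use, including localhost if you connect locally
echo "subjectAltName = DNS:docker1-staging,DNS:localhost" > extfile.cnf
# Sign the new server certificate with your CA
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -days 365 -extfile extfile.cnf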
I have set up private_pub with SSL according to https://github.com/ryanb/private_pub#serving-faye-over-https-with-thin, also adding daemonize: true (tested with and without).
I can browse to https://mydomain.com:4443/faye.js and that loads.
There are no errors on the page.
However, nothing is actually working, i.e. no real-time events trigger. When trying to call PrivatePub.publish_to in the console I get:
OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
When I run the thin server un-daemonized I can see it returns <SSL_incomp> when trying to publish_to.
The SSL on the server is working correctly, how do I go about fixing this?
I managed to solve this by appending the contents of the CA bundle to the .crt file specified in the thin config.
Here is the proper approach to resolve this issue.
When you use only the yourdomain.crt file, private_pub won't work while it is doing the handshake with the Rails server.
Your SSL certificate provider will have given you either an intermediate.crt file or a CA-bundle file.
Just do the following.
If you have a ca-bundle file provided by your CA:
*cat yourdomain.crt whatever.ca-bundle > yourdomainfinal.crt*
If you have an intermediate certificate:
*cat yourdomain.crt intermediate.crt > yourdomainfinal.crt*
Then use yourdomainfinal.crt together with your private key yourdomain.key as the SSL files when running the server.
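You can sanity-check the combined file before wiring it in (file names as above):
# The first certificate in the combined file should verify against the bundle
openssl verify -CAfile whatever.ca-bundle yourdomainfinal.crt
# The combined file should now contain more than one certificate
grep -c "BEGIN CERTIFICATE" yourdomainfinal.crt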
Here is the config block for the thin server:
---
chdir: "/home/your/project/path"
environment: "your environment"
timeout: 30
log: "/home/your/project/path/log/thin.log"
pid: /home/your/project/path/tmp/pids/thin.pid
max_conns: 1024
require: []
max_persistent_conns: 1000
wait: 30
threadpool_size: 20
servers: 1
threaded: true
socket: /tmp/thin.sock
ssl: true
ssl_key_file: /home/your/project/path/ssl/yourdomain.key
ssl_cert_file: /home/your/project/path/ssl/yourdomainfinal.crt
For private_pub
To use private_pub over SSL, use the configuration below in private_pub_thin.yml:
---
port: 4443
ssl: true
ssl_key_file: /path/to/yourdomain.key
ssl_cert_file: /path/to/yourdomainfinal.crt
environment: "your environment"
rackup: private_pub.ru
And then run the server with the following command:
*thin -C config/private_pub_thin.yml start*
If you are using Bundler, please don't forget to use
*RAILS_ENV="your environment" bundle exec thin -C config/private_pub_thin.yml start*
The above command is important when you are using Bundler: if you skip it, private_pub will start and the server will run without issues, but it won't publish messages. That's what I observed.
Also note: please check whether port 4443 is allowed in your server's firewall settings using **sudo ufw status**.
That's it! If you followed all the steps above, you should have private_pub working over SSL in production or UAT.
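As a final hedged check once thin is up, confirm the full chain is actually served on the Faye port (domain and port are placeholders matching the config above):
# Open the port first if ufw shows it blocked
sudo ufw allow 4443/tcp
# Should print your certificate plus the chain and end with "Verify return code: 0 (ok)"
openssl s_client -connect mydomain.com:4443 -showcerts </dev/null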
An SSL error occurs when we use the knife command to verify a successful setup of the Chef Workstation, or when we try to upload a Chef cookbook. Using the following commands:
knife client list
knife node list
knife cookbook upload cookbookname
we get the following error on the Chef Workstation:
OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol
To resolve this error, we tried using the rackfile software to create the following 3 files:
hostname.key
hostname.pem
hostname.crt
on the Chef Server.
We placed hostname.pem inside the chef folder on the server itself and inside the certs folder on the workstation. Finally, we tried to run the commands once again but did not succeed. Any help resolving the SSL error would be sincerely appreciated.
The Chef Server certificate has not yet been pulled into the workstation's trusted_certs directory.
Run the command
knife ssl fetch
from your Chef Workstation.
This will pull the certificate from the Chef Server and place it in the Workstation's trusted_certs directory. By default, this is the .chef/trusted_certs directory within your chef-repo directory.
Then run
knife ssl check
to verify the certificate.
Certificates that are in the trusted_certs directory will be trusted by any execution of the knife command.
https://docs.chef.io/workstation/getting_started/#get-ssl-certificates
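As a hedged follow-up check (paths assume a standard chef-repo layout; knife ssl fetch names the file after the server's hostname):
# List the fetched certificates
ls ~/chef-repo/.chef/trusted_certs/
# Confirm the subject matches your chef_server_url host; the file name below is a placeholder
openssl x509 -in ~/chef-repo/.chef/trusted_certs/your_chef_server.crt -noout -subject -enddate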
You need to register that certificate on each workstation. Also, make sure the certificate matches the correct URL (i.e. the API endpoint, not the web interface).
I just want to say that this is not normally something I do, but I have been tasked with it recently...
I have followed the heroku documentation for setting up SSL closely, but I am still encountering a problem.
I have added my cert to heroku using the following command:
heroku certs:add path_to_crt path_to_key
This part seems to work. I receive a message saying:
Adding SSL Endpoint to my_app ... done
I have also set up a CNAME with my hosting service to point at the endpoint associated with the cert command above. However, when I browse to the site I still receive an SSL error. It says my certificate isn't trusted and points to the *.heroku.com certificate, not the one I have just uploaded.
I have noticed that when I execute the following command:
heroku ssl
I receive the following:
my_domain_name has no certificate
My assumption is that there should be a certificate associated with this domain at this point.
Any ideas?
Edit: It appears that I did not wait long enough for the certificate stuff to trickle through the internets... however, my question regarding the "heroku ssl" command still puzzles me.
The Heroku ssl command is for legacy certificates:
$ heroku ssl -h
Usage: heroku ssl
list legacy certificates for an app
The command you need is heroku certs, which will output the relevant certificate info for that project.
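For example (the app name is a placeholder):
# List SSL endpoints and their certificates for the app
heroku certs --app my_app
# Show detailed info for the endpoint, including common name and expiry
heroku certs:info --app my_app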