Filebeat: Certificate signed by unknown authority - ssl

I am getting this error from filebeat:
Failed to connect to backoff(elasticsearch(https://elk.example.com:9200)): Get https://elk.example.com:9200: x509: certificate signed by unknown authority
INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(https://elk.example.com:9200)) with 1468 reconnect attempt(s)
INFO [publish] pipeline/retry.go:189 retryer: send unwait-signal to consumer
INFO [publish] pipeline/retry.go:191 done
INFO [publish] pipeline/retry.go:166 retryer: send wait signal to consumer
INFO [publish] pipeline/retry.go:168 done
However, Elasticsearch has a valid SSL certificate from Let's Encrypt (this is not a self-signed certificate).
Filebeat kubernetes config:
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
I tried adding this parameter to the config file and it worked. But why do I need to bypass verification if the certificate is valid?
ssl.verification_mode: "none"

The reason is one of two things: either the operating system's default truststore is old and does not include your perfectly valid and well-known trusted CA chain, or the Elasticsearch certificate is in fact self-signed or signed by a private CA.
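To confirm which case applies, you can inspect the chain the server actually presents (a quick check, assuming openssl is available on a host that can reach Elasticsearch):
# Print the certificate chain Elasticsearch serves on its TLS endpoint
openssl s_client -connect elk.example.com:9200 -servername elk.example.com -showcerts </dev/null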
You can choose from a number of solutions:
Run your filebeat in an environment (server, container, etc.) with an updated default truststore that knows the CA that signed your certificate, e.g. upgrade to a newer version of the operating system or an updated container image.
Remove your ssl.verification_mode: "none" configuration and add an ssl.certificate_authorities entry pointing to one or more PEM files containing the CA certificates to trust (a note on mounting such a file in Kubernetes follows the examples below).
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
ssl.certificate_authorities: ["/path/to/ca.pem"]
Remove your ssl.verification_mode: "none" configuration and embed the to-be-trusted CA certificate directly in the YAML configuration under ssl.certificate_authorities.
Example from the filebeat configuration documentation:
certificate_authorities:
- |
-----BEGIN CERTIFICATE-----
MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
/D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
sxSmbIUfc2SGJGCJD4I=
-----END CERTIFICATE-----
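Since your filebeat runs in Kubernetes, note that for the file-based variant the referenced PEM file must also exist inside the filebeat pod. A minimal sketch of one way to get it there (all names are illustrative): store the CA in a Secret, e.g. kubectl create secret generic elk-ca --from-file=ca.pem, then mount it into the filebeat DaemonSet:
containers:
- name: filebeat
  # (rest of the container spec unchanged)
  volumeMounts:
  - name: elk-ca
    mountPath: /etc/filebeat/certs
    readOnly: true
volumes:
- name: elk-ca
  secret:
    secretName: elk-ca
The filebeat config would then point at ssl.certificate_authorities: ["/etc/filebeat/certs/ca.pem"].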
Disclaimer: you have not provided a filebeat version, so I assumed the latest one. Nevertheless, this kind of configuration will probably be the same across filebeat versions.

Related

Cassandra.yaml settings client SSL

Please help me understand one thing.
There is a block of settings (client_encryption_options) in cassandra.yaml for client SSL that takes paths to a keystore and truststore, i.e. on the Cassandra server node I am supposed to configure paths to the CLIENT keystore.
But how does this work? It would seem correct to locate the client's keystore/truststore on the application host (a pod, Docker container, or application server) and use that keystore to connect to Cassandra. How can the client use its keystore if it is located on the server?
client_encryption_options:
    enabled: true
    keystore: E:/apache-cassandra-2.1.4/conf/.keystore
    keystore_password: cassandra
    # require_client_auth: false
    # Set truststore and truststore_password if require_client_auth is true
    # truststore: conf/.truststore
    # truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
This block is used to check that clients have a valid certificate (one signed by a trusted authority), use a correct cipher, etc., or even to enforce that the client's certificate itself is registered in the truststore (if you set require_client_auth: true).
Basically, what you need is for your clients to use a certificate signed by a valid authority known to Cassandra. DataStax has very detailed documentation on how to set up client-to-server SSL. The Cassandra site also covers a lot, but some things may be specific to the 4.0 release.
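To the actual question: each client keeps its own key material on its own host (pod, container, or application server); the keystore path in cassandra.yaml is the server's certificate, not the client's. For illustration, a sketch of the client side using cqlsh, which reads an [ssl] section from ~/.cassandra/cqlshrc (paths are hypothetical and live on the client machine):
[connection]
ssl = true

[ssl]
; CA certificate used to validate the server's certificate
certfile = /home/app/certs/rootca.crt
validate = true
; only needed when the server sets require_client_auth: true
userkey = /home/app/certs/client.key
usercert = /home/app/certs/client.crt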

Setting custom CA cert with cloud_sql_proxy

I'm trying to use GCP's cloud_sql_proxy through one of our proxy servers, which uses certificates signed by a custom CA. I tried setting the cert using the custom_ca_certs_file property of gcloud config, and double-checked that the CA cert is set using the command gcloud config list.
In spite of that, I get the error below from cloud_sql_proxy when trying to connect my SQL client through it.
2020/08/19 11:37:36 Listening on 0.0.0.0:<My local port> for <Instance_connnection_name>
2020/08/19 11:37:36 Ready for new connections
2020/08/19 11:39:11 New connection for "<Instance_connnection_name>"
2020/08/19 11:39:12 couldn't connect to "<Instance_connnection_name>": x509: certificate signed by unknown authority
2020/08/19 11:40:08 Received TERM signal. Waiting up to 0s before terminating.
It seems like cloud_sql_proxy is not respecting the CA cert in gcloud config. How do I configure the cert for cloud_sql_proxy?
The error message indicates that your client is not able to trust the certificate of https://www.googleapis.com.
This can happen due to:
The client does not know what root certificates to trust.
The outbound traffic goes through a proxy server that uses a different, untrusted certificate.
The 'ca-certificates.crt' file is missing from /etc/ssl/certs, which is one of the locations some languages look for certificates in.
I found more about this here:
Failure to connect to proxy "Certificate signed by unknown authority"
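One workaround sketch: cloud_sql_proxy is a Go binary, and Go's TLS stack honors the SSL_CERT_FILE environment variable on Linux, so you can point it at a bundle that includes your proxy's CA (the file names here are assumptions):
# Build a bundle containing the system roots plus your proxy's CA
cat /etc/ssl/certs/ca-certificates.crt my-proxy-ca.pem > /tmp/ca-bundle.pem
# Run the proxy against the combined bundle
SSL_CERT_FILE=/tmp/ca-bundle.pem ./cloud_sql_proxy -instances=<Instance_connnection_name>=tcp:3306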

Allow kubernetes storageclass resturl HTTPS with self-signed certificate

I'm currently trying to set up GlusterFS integration for a Kubernetes cluster. Volume provisioning is done with Heketi.
The GlusterFS cluster has a pool of 3 VMs.
The 1st node has the Heketi server and client configured. The Heketi API is secured with a self-signed OpenSSL certificate and can be accessed, e.g.
e.g. curl https://heketinodeip:8080/hello -k
returns the expected response.
The StorageClass definition sets the "resturl" to the Heketi API, https://heketinodeip:8080 (sketched below).
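For reference, the definition looks roughly like this (every value except resturl is illustrative, since the original manifest is not shown):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "https://heketinodeip:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"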
The storageclass was created successfully, but when I try to create a PVC, it fails with:
"x509: certificate signed by unknown authority"
This is expected, as usually one has to either allow this insecure HTTPS connection or explicitly import the issuer CA (e.g. a file simply containing the PEM string).
But how is this done for Kubernetes? How do I allow this insecure connection to Heketi from Kubernetes (allowing insecure self-signed-cert HTTPS), or where/how do I import a CA?
It is not a DNS/IP problem; that was resolved with correct subjectAltName settings.
(It seems everybody is using Heketi, and it still seems to be a standard use case for GlusterFS integration, but always without SSL when connected to Kubernetes.)
Thank you!
To skip verification of the server certificate, the caller just needs to specify InsecureSkipVerify: true. Refer to this GitHub issue for more information: https://github.com/heketi/heketi/issues/1467
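For illustration, this is what InsecureSkipVerify means at the Go level; a minimal sketch, not the heketi client's actual code:
package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
)

func main() {
    // An HTTP client that accepts any server certificate.
    // This disables authentication of the server, so use it with care.
    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    resp, err := client.Get("https://heketinodeip:8080/hello")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
}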
This page describes a way to use a self-signed certificate. It is not explained thoroughly, but it can still be useful: https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/tls-security.md#self-signed-keys

How to configure Mosca for mqtts without the client having a certificate?

I have a Mosca MQTT broker running on a Node instance and I would like to encrypt all incoming communications with SSL/TLS (the MQTTS protocol), but without the client having to attach any certificate to the connection (I guess it has to do with self-signed certificates), just as HTTPS works. I want all my clients to connect with just credentials, specifying the MQTTS protocol, and have the communication encrypted. I was using Amazon MQ before and that's how it works, so I want the same.
I can't figure out how to configure Mosca properly to do this; I don't know what kind of certificate I must use.
I added the secure field to the configuration as shown here.
For the certificate, I tried to create a self-signed certificate as shown here.
I also tried with certbot certificates (Let's Encrypt) registered for my domain name: mq.xxx.com.
I'm running everything on an EC2 instance (Ubuntu 18) and my network and firewall are open for 1883 and 8883. My key and cert are at the root of my project, where the daemon is running, with correct rights and ownership. I know my instance accesses them correctly.
const mosca = require('mosca');

const server = new mosca.Server({
  port: 1883,        // plain MQTT listener
  secure: {          // TLS (MQTTS) listener (defaults to port 8883)
    keyPath: "./privkey.pem",
    certPath: "./cert.pem"
  },
  backend: {         // pub/sub backend shared between broker instances
    type: 'redis',
    redis: require('redis'),
    host: "localhost",
    port: 6379,
    db: 0,
    return_buffers: true,  // required to handle binary payloads
  },
  persistence: {     // offline-message / subscription storage
    factory: mosca.persistence.Redis
  }
});
My server is running and working with plain MQTT on port 1883, but when I try to connect with SSL/TLS on port 8883 with a client configured for self-signed certificates (I tried with MQTT.fx), it fails saying: "unable to find valid certification path to requested target".
I can't get my head around this issue; I think the client somehow cannot "accept" or "verify" the certificate provided. Maybe I'm providing the wrong key or certificate to Mosca, but there is only one of each produced by openssl or certbot. Maybe I created them incorrectly, but I followed many tutorials on this very subject, such as this one.
What kind of certificate do I need?
Is there something more to do with them?
Thank you.
If you are using a self-created certificate, then the client will need a copy of the certificate that signed the broker's certificate. This certificate is added to the client's list of trusted sources, so the broker can prove it is who it claims to be.
If you do not want to / cannot distribute a certificate, then you will need to use a certificate for your broker issued by a CA (Certificate Authority) whose signing certificate you already have (bundled into the OS/client that you are using).
The Let's Encrypt signing certificates should be bundled into most OSes by now, and they are also cross-signed by IdenTrust, whose certs should likewise be bundled with most OSes. If you are having problems with the Let's Encrypt certs, then I suggest you ask a new question with the exact details of how you configured Mosca with those certs, more details of how you are configuring MQTT.fx, and the errors you receive.
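One thing worth checking first with the certbot route (an assumption, since the exact files in use were not shown): clients can only build the trust chain if the broker also serves the intermediate certificate, so certPath should point at certbot's fullchain.pem rather than cert.pem. A minimal sketch, assuming certbot's standard layout for mq.xxx.com:
const mosca = require('mosca');

const server = new mosca.Server({
  port: 1883,
  secure: {
    keyPath: "/etc/letsencrypt/live/mq.xxx.com/privkey.pem",
    certPath: "/etc/letsencrypt/live/mq.xxx.com/fullchain.pem" // full chain, not cert.pem
  }
});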

Use Berkshelf with custom CA certificate

I have a custom Chef server on premises with a TLS certificate that is signed by our own CA server. I added the CA certificate to .chef/trusted_certs and now knife ssl verify works fine.
But when I try to upload cookbooks using Berksfile I run into the following error:
$ berks upload
E, [2016-03-26T15:02:18.290419 #8629] ERROR -- : Ridley::Errors::ClientError: SSL_connect returned=1 errno=0 state=error: certificate verify failed
E, [2016-03-26T15:02:18.291025 #8629] ERROR -- : /Users/chbr/.rvm/gems/ruby-2.3-head#global/gems/celluloid-0.16.0/lib/celluloid/responses.rb:29:in `value'
I have tried appending the CA certificate to /opt/chefdk/embedded/ssl/certs/cabundle.pem, but it made no difference.
Create a custom CA bundle file and then set $SSL_CERT_FILE (or $SSL_CERT_DIR if you want to use that format) in your environment; see the sketch after this list.
Or use --no-ssl-verify; Berkshelf does not respect Chef's trusted certs.
Alternatively, there is an option to specify this in the Berkshelf config file.
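A minimal sketch of the bundle approach (assuming your CA certificate is in my-ca.pem; OpenSSL accepts a multi-certificate PEM bundle via SSL_CERT_FILE):
# Append your CA to a copy of the system bundle
cat /etc/ssl/certs/ca-certificates.crt my-ca.pem > ~/.chef/custom-bundle.pem
export SSL_CERT_FILE=~/.chef/custom-bundle.pem
berks upload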
Don't ignore certificate validation. That is not the safest choice, especially with news about attackers having recently inserted malware in places like Node Package Manager. You can easily configure Berkshelf to trust the same certificates you trust with Chef.
In your ~/chef-repo/.berkshelf/config.json file, make sure ca_path points at your Chef trusted certificates, like this (assuming your Chef repo is located at ~/chef-repo):
{
"ssl": {
"verify": true,
"ca_path": "~/chef-repo/.chef/trusted_certs"
}
}
Then, use knife to manage your Chef certificates (like this):
$ cd ~/chef-repo
$ knife ssl fetch https://supermarket.chef.io/
$ knife ssl fetch https://my.chef.server.example.org/
All the certificates you trust with Chef will also be trusted by Berks.