Configuring filebeat to graylog with TLS (connection reset)

I have successfully created a graylog server (in docker container) that ingests logs from filebeat on a separate machine.
However, I would of course like the messages to be encrypted. I am attempting to set this up, but I cannot get Graylog to accept the connection; it is always reset by peer:
{"log.level":"error","#timestamp":"2023-01-04T15:08:57.746+0100","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(async(tcp://<graylog_ip>:5044)): read tcp 192.168.178.99:54372-\><graylog_ip>:5044: read: connection reset by peer","service.name":"filebeat","ecs.version":"1.6.0"}
(Without TLS, the connection works as intended: a new line appears in Graylog every time one is added to my test log file.)
Setup Details
I created a filebeat.crt and filebeat.key file with openssl. I confirmed that the hostname in the certificate is the same as the hostname of the server Graylog runs on:
openssl genrsa -out filebeat.key 2048
openssl req -new -x509 -key filebeat.key -out filebeat.crt -days 3650
As I understand it, a CA should not be required since I copied the key myself: filebeat can encrypt the data it sends with filebeat.crt, and the server can then decrypt it with filebeat.key (perhaps this is not correct of me to imagine?).
I then copied both files to the server and the local machine. In my compose file I mounted the key into the graylog container and restarted. Then I updated the previously working input configuration to:
bind_address: 0.0.0.0
charset_name: UTF-8
no_beats_prefix: false
number_worker_threads: 12
override_source: <empty>
port: 5044
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: /etc/graylog/server/filebeat.crt
tls_client_auth: disabled
tls_client_auth_cert_file: <empty>
tls_enable: true
tls_key_file: /etc/graylog/server/filebeat.key
tls_key_password: ********
Then in filebeat I have the following configuration (I also tried converting and using filebeat.pem for the certificate, but no change):
output.logstash:
  hosts: ["<graylog_ip>:5044"]
  ssl.certificate: '/etc/pki/tls/certs/filebeat.crt'
  ssl.key: '/etc/pki/tls/private/filebeat.key'
I really cannot see the issue; any help would be greatly appreciated!

First, try to debug filebeat using
/usr/bin/filebeat -e -d '*' -c filebeat_5.1.2.conf
Probably you will discover that a CA is needed or something like that.
But my best guess is that filebeat tries to verify the hostname against the certificate name, and your generated certificate may not have a CN identical to the hostname.
A quick fix is to disable that verification:
ssl.verification_mode: none
Well, this solution works for me.
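A note on the config above: in Beats, ssl.certificate and ssl.key configure a client certificate for mutual TLS; what makes filebeat trust a self-signed server certificate is ssl.certificate_authorities. If you would rather keep verification on, a minimal sketch (assuming OpenSSL 1.1.1+ for -addext, and using graylog.example.org as a placeholder for the real Graylog hostname):
# Regenerate the self-signed cert so its CN/SAN match the Graylog hostname
openssl req -new -x509 -key filebeat.key -out filebeat.crt -days 3650 \
  -subj "/CN=graylog.example.org" \
  -addext "subjectAltName=DNS:graylog.example.org"
output.logstash:
  hosts: ["graylog.example.org:5044"]
  # Trust our self-signed server cert instead of disabling verification
  ssl.certificate_authorities: ["/etc/pki/tls/certs/filebeat.crt"]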

Related

certificates: openssl create CA and create client.crt and client.key in graylog APP

Good morning, evening, night.
My name is José Manuel and I am trying to encrypt the communications between the Graylog server that receives log inputs and a client, another machine that sends log inputs via an agent called Sidecar, which manages filebeat, NXLog, metricbeat, or whatever else parses the logs received on its own machine. The problem: I made the CA certificate in PEM format with openssl, called graylog-certificate.pem, along with the key graylog-key.pem, and put them in server.conf. But in the Graylog frontend, which controls the sidecar in remote mode and writes its configuration file automatically, you have to supply the authority cert (.pem) plus a client.crt and client.key, and the Graylog manual doesn't say how to do this... so can anybody help me with this, please?
This is the Graylog manual --> https://go2docs.graylog.org/4-x/setting_up_graylog/https.html?TocPath=Setting%20up%20Graylog%7CSecuring%20Graylog%7C_____2
That's all... Thanks a lot.
I tried putting graylog-certificate.pem and graylog-key.pem on the server and the client, but nothing...
I hope for a solution. Graylog staff have said nothing to me.
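For reference, a typical openssl flow for what is being asked here (a private CA that signs a client certificate) looks roughly like the following sketch; all file names and CNs are illustrative, not taken from the Graylog manual:
# 1. Create a CA key and a self-signed CA certificate
openssl genrsa -out ca.key 4096
openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/CN=my-graylog-ca"
# 2. Create a client key and a certificate signing request
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=sidecar-client"
# 3. Sign the client CSR with the CA
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.crt -days 365
ca.crt would then be the authority cert the frontend asks for, and client.crt/client.key the client pair.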

HTTPS for Prometheus with self-signed SSL certificate

Trying to set up SSL for Prometheus (started via Docker). I generated the key and crt myself using openssl; the key/crt pair works ok.
When I execute this command on my host:
openssl s_server -cert prometheus.crt -key prometheus.key
It's saying "ACCEPT"
Here is my Dockerfile for prometheus container:
https://pastebin.com/4wGtCGp6
When I build the image and start it, it's saying:
level=error ts=2021-09-24T20:44:11.649Z caller=stdlib.go:105 component=web caller="http: TLS handshake error from 127.0.0.1:50458" msg="remote error: tls: bad certificate"
(it prints this constantly)
In web.yml I configure SSL in the following way:
tls_server_config:
  cert_file: /etc/prometheus/prometheus.crt
  key_file: /etc/prometheus/prometheus.key
In prometheus.yml I configure SSL in the following way:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/prometheus.crt
      cert_file: /etc/prometheus/prometheus.crt
      key_file: /etc/prometheus/prometheus.key
What could be the reason for this error?
If it's self-signed, you shouldn't need a CA file, so try deleting that line in the tls_config and restarting the container.
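Either way, it helps to test the TLS endpoint directly from the host, independent of the scrape config (assuming Prometheus listens on its default port 9090):
curl --cacert /etc/prometheus/prometheus.crt https://localhost:9090/metrics
If this succeeds, the server side (web.yml) is fine and the problem is confined to the scrape tls_config.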
I know this is old, so apologies if it's bad to answer an old question. Feel free to delete.

How to enable SSL on Apache Airflow?

I am using Airflow 1.7.0 with a LocalExecutor, and the documentation suggests that to enable SSL we need to pass the cert and key paths and change the port to 443, as below:
[webserver]
web_server_ssl_cert = <path to cert>
web_server_ssl_key = <path to key>
# Optionally, set the server to listen on the standard SSL port.
web_server_port = 443
base_url = http://<hostname or IP>:443
I have created a cert and key using OpenSSL. The details supplied while creating the cert/key are right, too.
However, the Airflow UI is still http and not https.
Any pointers would help!
Thank you!
Solved in this question How to enable SSL on Airflow Webserver? and answer https://stackoverflow.com/a/56760375/512111.
In short: generate a key/crt pair with
openssl req \
  -newkey rsa:2048 -nodes -keyout airflow.key \
  -x509 -days 365 -out airflow.crt
and set in airflow.cfg like
web_server_ssl_cert = /path/to/airflow.crt
web_server_ssl_key = /path/to/airflow.key
Leave the webserver port unchanged. Restart the airflow webserver, go to https://hostname:port et voilà.
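To confirm TLS is actually being served (assuming the default webserver port 8080; adjust to your web_server_port), a quick check from the host:
curl -kI https://localhost:8080
The -k flag accepts the self-signed certificate; if the webserver is still plain HTTP, the TLS handshake fails instead of returning headers.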
Airflow 1.7.0 doesn't support SSL. I just checked the webserver code of Airflow 1.7.0, given below. The function just starts the Flask/gunicorn application over plain HTTP on the given host and port; it doesn't accept an SSL key or certificate. If you provide the certificate and set the port to 443, it will simply start the application on http://<host>:443.
The SSL feature is available in the latest version of Apache Airflow; please use the latest version for SSL support.
def webserver(args):
    print(settings.HEADER)
    from airflow.www.app import cached_app
    app = cached_app(configuration)
    workers = args.workers or configuration.get('webserver', 'workers')
    if args.debug:
        print(
            "Starting the web server on port {0} and host {1}.".format(
                args.port, args.hostname))
        app.run(debug=True, port=args.port, host=args.hostname)
    else:
        print(
            'Running the Gunicorn server with {workers} {args.workerclass}'
            'workers on host {args.hostname} and port '
            '{args.port}...'.format(**locals()))
        sp = subprocess.Popen([
            'gunicorn', '-w', str(args.workers), '-k', str(args.workerclass),
            '-t', '120', '-b', args.hostname + ':' + str(args.port),
            'airflow.www.app:cached_app()'])
        sp.wait()
Go to AIRFLOW_HOME -> airflow.cfg. It has a section named [webserver], under which there are two config properties like below:
web_server_ssl_cert =
web_server_ssl_key =
If there is no value like above, the Airflow webserver is running on http (without a certificate).
To enable SSL, use a .p12 certificate (one you must have ordered) and use openssl to extract the certificate and private key from the .p12 file. openssl mostly ships with Linux, so you can run it directly in a Linux terminal.
Step 1: Extract the certificate using the command below
openssl pkcs12 -in /path/cert.p12 -nokeys -clcerts -out /path/mycert.crt
Step 2: Extract the key using the command below
openssl pkcs12 -in /path/cert.p12 -nocerts -out /path/mykey.key
Step 3: Once the certificate and key are generated, update airflow.cfg with web_server_ssl_cert and web_server_ssl_key, restart the Airflow webserver, and you are done. Browse the Airflow UI with https.
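If the extraction fails, it can help to first sanity-check the .p12 bundle itself (you will be prompted for its password; -noout keeps the key material off the terminal):
openssl pkcs12 -in /path/cert.p12 -info -noout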

Why does Kubernetes' "kubectl" abort with "Authorization error"?

After having changed the IP configuration of the cluster (all external IPs changed; the internal private IPs remained the same), some kubectl commands no longer work for any container. The pods are all up and running, and seem to find each other without problems. Here is the output:
bronger#penny:~$ time kubectl logs jb-plus--prod-615777041-71s09
Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)
real 0m30,539s
user 0m0,441s
sys 0m0,021s
Apparently, there is a 30-second timeout, and after that the authorisation error.
What may cause this?
I run Kubernetes 1.8 with Weave Net.
Based on the symptom, the new IP is probably missing from the API server certificate. Use the command below to validate:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep DNS
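The DNS and IP SAN entries are usually printed on the same line, so the grep above shows both; a slightly more explicit filter over the same output is:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
If the new external IP is not listed there, the apiserver certificate needs to be regenerated with the new address in its SANs.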

SSL renew certificate on Apache keeps using old certificate file

I'm trying to renew my SSL certificate, but there is some problem I'm probably missing. After I've done the following steps, the server keeps using the old certificate and I don't know why.
Here's what I have done:
Create a new csr file (domain.csr) + key file (domain.key):
openssl req -new -newkey rsa:2048 -nodes -keyout domain.key -out domain.csr
Copy the csr file content and paste it to my SSL provider + get approval.
Get 5 files from them and upload them to the server (domain.der, domain.pem, domain.cer, chain.cer, domain.p7b).
Set in the apache ssl.conf file:
SSLCertificateFile (domain.cer), SSLCertificateKeyFile (domain.key).
Restart apache.
For some reason my server is still using my old certificate.
Is there something I'm doing wrong?
Well, you figured it out yourself, but in case anyone else is in the same situation, here are some of the things you can check.
First up, check locally whether this works by running the following openssl command on the server (a crucial step we skipped!):
openssl s_client -connect localhost:443
This will show the cert presented to the client from Apache. If that's not the right one, then you know Apache config is at fault. If it is the right one then something downstream is a problem.
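One caveat with that command: if the server hosts several TLS vhosts, add SNI so you see the certificate for the right one (the hostname below is a placeholder):
openssl s_client -connect localhost:443 -servername www.example.com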
In your case, you terminate SSL at the load balancer and forgot to change the cert there. Another issue could be the browser caching the SSL cert (restart it, Ctrl+F5 to force a refresh, or better yet try another browser or a third-party website like ssllabs.com).
Assuming it's a problem with Apache, you need to check the config to make sure all instances of the cert have been replaced. The command below will show all the vhosts and which config file they are configured in:
/usr/local/apache2/bin/apachectl -S
Alternatively just use standard find and grep unix commands to search your Apache config for the old or new cert:
find /usr/local/apache2/conf -name "*.conf" -exec grep olddomain.cer {} \; -print
Both those commands assume apache is installed in /usr/local/apache2 but change the path as appropriate.
If all looks good and you've definitely restarted Apache, then you can try a full stop and start, as I have noticed that a graceful restart of Apache doesn't always pick up new config. Before starting the web server back up again, check that you can't connect from your browser (to ensure you're connecting to the server you think you're connecting to) and that the process is down, with the following command:
ps -ef | grep httpd
and then finally start.
Another thing to check is that the cert you are installing is the one you think it is, using this openssl command to print out the cert details (assuming the cert is in x509 format but there are similar commands for other formats):
openssl x509 -in domain.cer -text
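A variant that prints just the subject and validity dates makes it easy to confirm the file really contains the new cert and not the old one:
openssl x509 -in domain.cer -noout -subject -dates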
And last but not least, check the Apache log files to see if there are any errors in there, though I would expect errors there to mean no cert is loaded rather than just the old one.
Good answer from @Barry.
Another aspect: Apache may not be the front-most web server. From this conversation, it is possible that there are other web servers in front of Apache, something like nginx. In our case it was an AWS ELB; we had to change the cert in the ELB for the change to take effect.
We had a similar problem to what @Akshay describes above.
In order for the server to update the certificates, we had to run some commands for the Google Cloud Compute Engine load balancer:
gcloud compute target-https-proxies update
Hope this helps someone who is using GCP to host apache.
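For reference, the full form of that command takes the target proxy name and the new certificate resource (both names below are placeholders, not from the original answer):
gcloud compute target-https-proxies update my-https-proxy --ssl-certificates=my-new-cert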