How to secure ELK and Filebeat? - ssl

I have a server A on which I installed ELK by following these instructions:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-18-04
ELK can be accessed from the IP address of my server, and I have created Let's Encrypt certificates to secure my domain on Nginx.
server {
    listen 80;
    listen [::]:80;
    server_name monitoring.example.com;

    location / {
        return 301 https://monitoring.example.com$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name monitoring.example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /var/www/monitoring-example-com/web/.htpasswd;

    ssl_certificate /etc/letsencrypt/live/monitoring.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/monitoring.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I have a server B that I want to monitor and on which I have installed Filebeat.
How do I secure the exchanges between ELK and Filebeat?
Do I need to create an OpenSSL certificate, or can I use the certificates generated with Let's Encrypt for Nginx?

Do you use logstash in your pipeline, or does filebeat output data directly into elasticsearch? Depending on this, the answer changes slightly. Other aspects of your cluster setup also matter.
I'll assume that you are outputting data directly into elasticsearch.
The method you described, putting nginx in front of elasticsearch and doing basic authentication, is OK for a developer/test environment with a one-node cluster. I suspect this is all you want since you are monitoring just one server. If that is all you need, you can stop reading.
You should, however, never use a one-node setup in production. Elasticsearch is distributed storage, and you should always use at least three nodes in production environments.
Why does this matter for security? In a multi-node cluster you have to secure both the REST API (default port 9200) and the transport layer (the inter-node traffic, default ports 9300-9400). You may also want to be sure that only trusted nodes are connected to the cluster. Nginx is not sufficient for this. One solution is to put the inter-node traffic into a full-mesh VPN set up between the cluster nodes; I recommend tinc for this. The second is to set up TLS with one of the several security plugins available.
The best approach is to use both, because you'll probably want not just encryption but also user management, role separation, audit logging, etc.
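For the VPN option, here is a minimal tinc sketch, assuming a net named "elastic", nodes named node1..node3, and 10.0.0.0/24 as the mesh range (all names, addresses, and paths are illustrative):
# /etc/tinc/elastic/tinc.conf on node1
Name = node1
AddressFamily = ipv4
Interface = tun0
ConnectTo = node2
ConnectTo = node3

# /etc/tinc/elastic/hosts/node1 (the public key is appended when the keypair is generated)
Address = 203.0.113.1
Subnet = 10.0.0.1/32

# /etc/tinc/elastic/tinc-up
#!/bin/sh
ip addr add 10.0.0.1/24 dev $INTERFACE
ip link set $INTERFACE up

# Generate the keypair, then copy the hosts/* files to every node:
tincd -n elastic -K
Elasticsearch's transport address is then bound to the 10.0.0.x addresses so the inter-node traffic never leaves the mesh.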
There are several plugins you can use. The most obvious is to set up X-Pack Security; in that case please refer to the X-Pack documentation, where the whole process is described.
X-Pack is quite expensive. Luckily there are several alternatives, the most prominent being Search Guard. The Community edition is missing a few features like LDAP or field-level security, but it should be sufficient for most common use cases. The documentation is not always straightforward, so I recommend doing a few test deployments.
Other alternatives include ReadonlyREST, which has both an enterprise and a free version, and the newest option, Open Distro, which maintains compatibility only with the OSS version of elasticsearch (it may break the basic licence features).
Edit: 11/18/2019
X-Pack under the basic license now offers free basic security features, pretty much the same as Search Guard Community, with the addition that you can manage roles and users from the Kibana GUI. My personal opinion is that Search Guard Community is now obsolete, because X-Pack provides better features and you have one less dependency in your cluster, which makes updates and administration easier. For commercial use cases Search Guard may still be the more sensible option, especially for large clusters.
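For reference, a minimal sketch of turning the free basic-license security on for a single node; the settings and the password tool are standard X-Pack pieces, while the single-node discovery setting is an assumption about this particular setup:
# elasticsearch.yml
xpack.security.enabled: true
discovery.type: single-node

# set passwords for the built-in users (elastic, kibana, logstash_system, beats_system, ...)
bin/elasticsearch-setup-passwords interactive
Kibana then needs matching elasticsearch.username and elasticsearch.password entries in kibana.yml to keep connecting.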

Check the following page, which describes how to configure TLS to keep all data private from Filebeat -> Logstash -> Elasticsearch -> Kibana -> your web browser:
TLS for the Elastic Stack: Elasticsearch, Kibana, Beats, and Logstash
Elasticsearch
Basically, on Elasticsearch enable transport SSL (in elasticsearch.yml) as follows:
# Enables security.
xpack.security.enabled: true
# Enables transport SSL (inter-node traffic).
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: "certs/elastic-certificates.p12"
xpack.security.transport.ssl.truststore.path: "certs/elastic-certificates.p12"
xpack.security.transport.ssl.verification_mode: certificate
# Enables SSL on the HTTP (REST) layer.
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.client_authentication: optional
xpack.security.http.ssl.keystore.path: "certs/elastic-certificates.p12"
xpack.security.http.ssl.truststore.path: "certs/elastic-certificates.p12"
For more details, read: Encrypting communications in Elasticsearch.
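If you do not have the elastic-certificates.p12 keystore referenced above yet, it can be generated with the certutil tool bundled with recent Elasticsearch releases; a sketch using the tool's default output file names:
bin/elasticsearch-certutil ca                              # writes elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12  # writes elastic-certificates.p12
mkdir -p config/certs
mv elastic-certificates.p12 config/certs/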
Filebeat
And enable TLS on Filebeat hosts. Example filebeat.yml:
filebeat.prospectors:
- type: log
  paths:
    - logstash-tutorial-dataset
output.logstash:
  hosts: ["logstash.local:5044"]
  ssl.certificate_authorities:
    - certs/ca.crt
Read more:
Secure communication with Elasticsearch (to secure communication between Filebeat and Elasticsearch)
Secure communication with Logstash (to secure communication between Filebeat and Logstash)
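If Filebeat ships directly to Elasticsearch instead of Logstash, the equivalent output section looks roughly like this (host, credentials, and the CA path are placeholders):
output.elasticsearch:
  hosts: ["https://node1.local:9200"]
  username: "elastic"
  password: "CHANGEME"
  ssl.certificate_authorities:
    - certs/ca.crt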
Logstash
Then you need to enable TLS in Logstash (if in use); example logstash.yml:
node.name: logstash.local
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: 'CHANGEME'
xpack.monitoring.elasticsearch.url: https://node1.local:9200
xpack.monitoring.elasticsearch.ssl.ca: config/certs/ca.crt
Read more: Secure communication with Logstash.
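Note that logstash.yml only covers the monitoring connection to Elasticsearch; the Beats listener and the indexing output are secured in the pipeline configuration. A rough sketch, with placeholder certificate paths and credentials:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.pkcs8.key"   # the beats input expects a PKCS#8 key
  }
}
output {
  elasticsearch {
    hosts => ["https://node1.local:9200"]
    user => "logstash_writer"
    password => "CHANGEME"
    cacert => "/etc/logstash/certs/ca.crt"
  }
}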

Related

SSL Client Authentication with Google Cloud Run

I'm trying to move an existing backend over to Google Cloud Run. Some of the endpoints (under a specific subdomain) require SSL client authentication. The way this is handled at the moment is at the Nginx configuration level:
server {
    listen 443 ssl http2;
    server_name secure.subdomain.example.com;
    [...]
    # SSL Client Certificate:
    ssl_client_certificate xxx.pem;
    ssl_verify_client on;
    [...]
    location / {
        if ($ssl_client_verify != "SUCCESS") { return 403 $ssl_client_verify; }
        [...]
    }
}
What would be the best approach to handle SSL client certificate authentication with Google Cloud Run? I assume this would need some sort of load balancer on the correct network layer, with support for Cloud Run?
Of course there is always the option to authenticate in the ExpressJS app, but if possible I would prefer it to happen before the request even reaches Cloud Run.
What would be the best approach to handle SSL client certificate authentication with Google Cloud Run?
Cloud Run does not support SSL client certificate authentication. The GFE (Google Front End) proxies requests for Cloud Run applications and does not pass requests through. The only Google Cloud load balancers that support SSL client certificates are based on Google Maglev.
None of the Google Cloud managed compute services support SSL client certificate authentication (mutual TLS authentication).
Consider using Compute Engine instead of Cloud Run. Then configure Nginx to handle client authentication. For load balancing, use a pass-through load balancer such as the External TCP/UDP Network Load Balancer.
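A rough sketch of fronting such a Compute Engine instance with a pass-through network load balancer using gcloud (instance, pool, rule names, region, and zone are placeholders):
gcloud compute target-pools create nginx-mtls-pool --region=us-central1
gcloud compute target-pools add-instances nginx-mtls-pool \
    --instances=nginx-mtls-vm --instances-zone=us-central1-a
gcloud compute forwarding-rules create nginx-mtls-rule \
    --region=us-central1 --ports=443 --target-pool=nginx-mtls-pool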
You can't achieve that with Cloud Run. The SSL connection is terminated at the load balancer (on an HTTPS load balancer, or on the Cloud Run built-in load balancer). You only receive HTTP traffic at your service.
Indeed, you can add additional security information in the request headers, but you lose the SSL client-certificate layer.

How do I configure elastalert so it will connect to any server in the cluster?

How do I configure elastalert so it will connect to any available server in the cluster? The docs say:
es_host is the address of an Elasticsearch cluster where ElastAlert
will store data about its state, queries run, alerts, and errors. Each
rule may also use a different Elasticsearch host to query against.
but every example I can find just points to one IP address or hostname.
I have tried using a list of hostnames such as [elasticserver1, elasticserver2, elasticserver3], but that just causes elastalert to fail to start.
I guess you would need an upstream load balancer to wrap up those es nodes.
In my case, I use nginx to do load balancing for my es nodes. So the topology is something like this:
ElastAlert -> Nginx -> ES node 1
                    -> ES node 2
                    ...
                    -> ES node n
Sample nginx config
upstream elasticsearch {
    server {node 1}:9200;
    server {node 2}:9200;
    server {node n}:9200;
    keepalive 15;
}

server {
    listen 8080;
    location / {
        proxy_pass http://elasticsearch;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
Sample elastalert config.yml
es_host: "{nginx ip}"
es_port: "8080"
Here is the article I read about how to do this with nginx:
https://www.elastic.co/blog/playing-http-tricks-nginx
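To sanity-check the proxy before wiring ElastAlert to it, the cluster health endpoint should answer through nginx (the IP is a placeholder):
curl "http://{nginx ip}:8080/_cluster/health?pretty"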
As you identified in your answer, elastalert targets a cluster, not a node: "The hostname of the Elasticsearch cluster the rule will use to query."

Can I use Squid to upgrade client TLS connections?

I'm trying to allow legacy systems (CentOS 5.x) to continue making connections to services which will shortly allow only TLS v1.1 or TLS v1.2 connections (Salesforce, various payment gateways, etc.)
I have installed Squid 3.5 on a CentOS 7 server in a docker container, and am trying to configure squid to bump the SSL connections. My thought was that, since squid acts as a MITM and opens one connection to the client and one to the target server, it would negotiate a TLS 1.2 connection to the target while the client connects with SSLv3 or TLS 1.0.
Am I totally off-base here, or is this something that should be possible? If Squid can't do this, are there other proxies which can?
My current squid configuration looks like this:
access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
cache deny all
http_access allow all
http_port 3128 ssl-bump cert=/etc/squid/ssl_cert/myCA.pem generate-host-certificates=on version=1
ssl_bump stare all
ssl_bump bump all
I was able to get this working by only bumping at step1, and not peeking or staring. The final configuration that I used (with comments) is below:
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
# Write access and cache logs to disk immediately using the stdio module.
access_log stdio:/var/log/squid/access.log
cache_log /var/log/squid/cache.log
# Define ACLs related to ssl-bump steps.
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
# The purpose of this instance is not to cache, so disable that.
cache_store_log none
cache deny all
# Set up http_port configuration. All clients will be explicitly specifying
# use of this proxy instance, so https_port interception is not needed.
http_access allow all
http_port 3128 ssl-bump cert=/etc/squid/certs/squid.pem \
generate-host-certificates=on version=1
# Bump immediately at step 1. Peeking or staring at steps one or two will cause
# part or all of the TLS HELLO message to be duplicated from the client to the
# server; this includes the TLS version in use, and the purpose of this proxy
# is to upgrade TLS connections.
ssl_bump bump step1 all
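On the legacy clients the bumping CA then has to be trusted and the proxy configured explicitly; a rough sketch for a CentOS 5-era machine (the bundle path, proxy address, and test URL are assumptions):
# Append the squid CA certificate to the system bundle (location varies by distribution).
cat squid-ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
# Route outbound HTTPS through the bumping proxy and test.
export https_proxy=http://squid.internal:3128
curl -v https://example.com/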

OpenShift SSL cipher preferences configuration

I have a question regarding the configuration of SSL preferences on OpenShift.
As far as I know, the SSL termination in OpenShift is executed on HAProxy, which serves as a reverse proxy to route to the user gears.
Is there a way to configure the SSL preferences to use a user-specific order of preferred ciphers, and also to turn off some SSL/TLS versions, as is possible for instance in Tomcat, or is the SSL cipher and version configuration platform-specific and not changeable by the user?

Support for two-way TLS/HTTPS with ELB

One-way (or server-side) TLS/HTTPS with Amazon Elastic Load Balancing is well documented.
Support for two-way (or client-side) TLS/HTTPS is not as clear from the documentation.
Assuming ELB is terminating a TLS/HTTPS connection:
Does ELB support client authenticated HTTPS connections?
If so, does a server served by ELB receive an X-Forwarded-* header to identify the client authenticated by ELB?
ELB does support TCP forwarding so an EC2 hosted server can establish a two-way TLS/HTTPS connection but in this case I am interested in ELB terminating the TLS/HTTPS connection and identifying the client.
I don't see how it could, in double-ended HTTPS mode, because the ELB is establishing a second TCP connection to the back-end server, and internally it's decrypting/encrypting the payload to/from the client and server... so the server wouldn't see the client certificate directly, and there are no documented X-Forwarded-* headers other than -For, -Proto, and -Port.
With an ELB running in TCP mode, on the other hand, the SSL negotiation is done directly between the client and server with ELB blindly tying the streams together. If the server supports the PROXY protocol, you could enable that functionality in the ELB so that you could identify the client's originating IP and port at the server, as well as identifying the client certificate directly because the client would be negotiating directly with you... though this means you are no longer offloading SSL to the ELB, which may be part of the point of what you are trying to do.
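If you go the TCP-mode route, the PROXY protocol has to be enabled explicitly for the back-end port of a classic ELB; a sketch with the aws CLI (the load balancer name and port are placeholders):
aws elb create-load-balancer-policy --load-balancer-name my-elb \
    --policy-name EnableProxyProtocol --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb \
    --instance-port 443 --policy-names EnableProxyProtocol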
Update:
It doesn't look like there's a way to do everything you want to do -- offload SSL and identify the client certificate -- with ELB alone. The information below is presented "for what it's worth."
Apparently HAProxy has support for client-side certificates in version 1.5 and passes the certificate information in X- headers. HAProxy also supports the PROXY protocol via configuration (something along the lines of tcp-request connection expect-proxy), so it seems conceivable that you could use HAProxy behind a TCP-mode ELB, with HAProxy terminating the SSL connection and forwarding both the client IP/port information from ELB (via the PROXY protocol) and the client cert information to the application server... thus allowing you to still maintain SSL offload.
I mention this because it seems to be a complementary solution, perhaps more feature-complete than either platform alone, and, at least in 1.4, the two products work flawlessly together. I am using HAProxy 1.4 behind ELB successfully for all requests in my largest web platform (in my case, ELB is offloading the SSL -- there aren't client certs) and it seems to be a solid combination in spite of the apparent redundancy of cascaded load balancers. I like having ELB be the only thing out there on the big bad Internet, though I have no reason to think that directly-exposed HAProxy would be problematic on its own. In my application, the ELBs are there to balance between the HAProxies in the A/Zs (which I had originally intended to also auto-scale, but the CPU utilization stayed so low even during our busy season that I never had more than one per Availability Zone, and I've never lost one yet). The HAProxies can then do some filtering, forwarding, and munging of headers before delivering the traffic to the actual platform, in addition to giving me some logging, rewriting, and traffic-splitting control that I don't have with ELB on its own. A rough sketch of the ELB + HAProxy client-certificate setup is below.
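A rough sketch of that combination, assuming HAProxy 1.5+, a TCP-mode ELB with the PROXY protocol enabled, and placeholder certificate paths and addresses:
frontend fe_mtls
    # accept-proxy reads the PROXY protocol header prepended by the TCP-mode ELB
    bind :443 accept-proxy ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-ca.pem verify required
    # forward the verified client certificate details to the application in X- headers
    http-request set-header X-SSL-Client-Verify %[ssl_c_verify]
    http-request set-header X-SSL-Client-S-DN   %{+Q}[ssl_c_s_dn]
    http-request set-header X-SSL-Client-I-DN   %{+Q}[ssl_c_i_dn]
    default_backend be_app

backend be_app
    server app1 10.0.0.10:8080 check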
In case your back end can support client-authenticated HTTPS connections itself, you can configure ELB to forward TCP on port 443 to TCP on the port your back end listens on. This makes ELB simply pass the still-encrypted stream straight through to your back end. This config also doesn't require installing an SSL certificate on the load balancer.
Update: with this solution the x-forwarded-* headers are not set.
You can switch to a single instance on Elastic Beanstalk, and use ebextensions to upload the certs and configure nginx for mutual TLS.
Example
.ebextensions/setup.config
files:
  "/etc/nginx/conf.d/00_elastic_beanstalk_ssl.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      server {
        listen 443;
        server_name example.com;
        ssl on;
        ssl_certificate /etc/nginx/conf.d/server.crt;
        ssl_certificate_key /etc/nginx/conf.d/server.key;
        ssl_client_certificate /etc/nginx/conf.d/ca.crt;
        ssl_verify_client on;
        gzip on;
        send_timeout 300s;
        client_body_timeout 300s;
        client_header_timeout 300s;
        keepalive_timeout 300s;
        location / {
          proxy_pass http://127.0.0.1:5000;
          proxy_http_version 1.1;
          proxy_set_header Connection "";
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-SSL-client-serial $ssl_client_serial;
          proxy_set_header X-SSL-client-s-dn $ssl_client_s_dn;
          proxy_set_header X-SSL-client-i-dn $ssl_client_i_dn;
          proxy_set_header X-SSL-client-session-id $ssl_session_id;
          proxy_set_header X-SSL-client-verify $ssl_client_verify;
          proxy_connect_timeout 300s;
          proxy_send_timeout 300s;
          proxy_read_timeout 300s;
        }
      }
  "/etc/nginx/conf.d/server.crt":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      MIJDkzCCAvygAwIBAgIJALrlDwddAmnYMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD
      ...
      LqGyLiCzbVtg97mcvqAmVcJ9TtUoabtzsRJt3fhbZ0KKIlzqkeZr+kmn8TqtMpGn
      r6oVDizulA==
      -----END CERTIFICATE-----
  "/etc/nginx/conf.d/server.key":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      MIJCXQIBAAKBgQCvnu08hroXwnbgsBOYOt+ipinBWNDZRtJHrH1Cbzu/j5KxyTWF
      ...
      f92RjCvuqdc17CYbjo9pmanaLGNSKf0rLx77WXu+BNCZ
      -----END RSA PRIVATE KEY-----
  "/etc/nginx/conf.d/ca.crt":
    mode: "000400"
    owner: root
    group: root
    content: |
      -----BEGIN CERTIFICATE-----
      MIJCizCCAfQCCQChmTtNzd2fhDANBgkqhkiG9w0BAQUFADCBiTELMAkGA1UEBhMC
      ...
      4nCavUiq9CxhCzLmT6o/74t4uCDHjB+2+sIxo2zbfQ==
      -----END CERTIFICATE-----
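Once the environment is deployed, the mutual-TLS handshake can be verified from a machine that holds a certificate signed by the uploaded CA; a sketch with placeholder file names:
curl -v https://example.com/ \
    --cert client.crt --key client.key \
    --cacert ca.crt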