How do I configure elastalert so it will connect to any server in the cluster?

How do I configure elastalert so it will connect to any available server in the cluster? The docs say:
es_host is the address of an Elasticsearch cluster where ElastAlert
will store data about its state, queries run, alerts, and errors. Each
rule may also use a different Elasticsearch host to query against.
but every example I can find just points to one IP address or hostname.
I have tried using a list of hostnames such as [elasticserver1, elasticserver2, elasticserver3], but that just causes elastalert to fail to start.

I guess you would need an upstream load balancer to wrap up those ES nodes.
In my case, I use nginx to do the load balancing for my ES nodes, so the topology is something like this:
ElastAlert -> Nginx -> ES node 1
-> ES node 2
...
-> ES node n
Sample nginx config:
upstream elasticsearch {
    server {node 1}:9200;
    server {node 2}:9200;
    server {node n}:9200;
    keepalive 15;
}
server {
    listen 8080;
    location / {
        proxy_pass http://elasticsearch;
        proxy_http_version 1.1;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Proxy-Connection "Keep-Alive";
    }
}
Sample elastalert config.yml (note that es_port must be an integer, not a quoted string):
es_host: "{nginx ip}"
es_port: 8080
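Before starting ElastAlert, you can confirm that nginx is actually balancing requests across the cluster by querying Elasticsearch through the proxy (substitute your real address for the {nginx ip} placeholder from the config above):
curl -s "http://{nginx ip}:8080/_cluster/health?pretty"
If this returns the cluster status regardless of which backend node serves it, the es_host/es_port settings above will work.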
Here is the article I read about putting nginx in front of Elasticsearch:
https://www.elastic.co/blog/playing-http-tricks-nginx

As you identified in your answer, ElastAlert targets a cluster, not a node: "The hostname of the Elasticsearch cluster the rule will use to query."

Related

How to Correct 'nginx: [emerg] "stream" directive is not allowed here'

The Question
Why does the following Nginx configuration return nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/sites-enabled/default:1?
Nginx Configuration...
stream {
    map $ssl_preread_server_name $upstream {
        example.com 1051;
    }
    upstream 1051 {
        server 127.0.0.1:1051;
    }
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
Version / Build information...
OS: Debian 10
Here is the stripped-down nginx -V output confirming the presence of the modules I understand I need...
nginx version: nginx/1.14.2
TLS SNI support enabled
configure arguments: ... --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module ...
The Context
I have a single static IP address. At the static IP address, I am setting up a reverse proxy Nginx server to forward traffic to a variety of backend services. Several of the services are websites with unique domain names.
+-----+ +----------------------+ +---------+
| WAN | <----> | Nginx Reverse Proxy | <----> | Service |
+-----+ +----------------------+ +---------+
At boot, the service uses systemd to run this port-forwarding SSH command to connect to the reverse proxy: ssh -N -R 1051:localhost:443 tunnel@example.com (that is working well).
I want the certificate to reside on the service, not on the reverse proxy. From what I understand, I need to leverage SNI on Nginx to pass the SSL connections through based on domain name. But I cannot get the Nginx reverse proxy to pass SSL through.
Resources
Here are a few of the resources I have pored over...
https://serverfault.com/questions/625362/can-a-reverse-proxy-use-sni-with-ssl-pass-through
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
https://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html
https://www.amitnepal.com/nginx-ssl-passthrough-reverse-proxy
https://serverfault.com/questions/1049158/nginx-how-to-combine-ssl-preread-protocol-with-ssl-preread-server-name-ssh-mul
The problem was that I had embedded the stream block inside an http block: I was not properly accounting for the include directive in the /etc/nginx/nginx.conf file, which pulls /etc/nginx/sites-enabled/* into the http context. The stream block has to sit at the top level of nginx.conf, as a sibling of http, not inside it. See the sketch below.
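A minimal sketch of the corrected layout (the module and include paths are Debian defaults and may differ on other systems):
# /etc/nginx/nginx.conf
# stream was built as a dynamic module (--with-stream=dynamic), so it must be
# loaded first; on Debian this is usually done via /etc/nginx/modules-enabled/*.conf.
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

http {
    # sites-enabled/* is included here, inside the http context,
    # so it may only contain http-level configuration.
    include /etc/nginx/sites-enabled/*;
}

# The stream block belongs here, at the top level, as a sibling of http.
stream {
    map $ssl_preread_server_name $upstream {
        example.com 1051;
    }
    upstream 1051 {
        server 127.0.0.1:1051;
    }
    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}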

How to secure ELK and Filebeat?

I have a server A on which I installed ELK by following these instructions:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-18-04
ELK can be accessed from the IP address of my server and I have created Let's Encrypt certificates to secure my domain on Nginx.
server {
    listen 80;
    listen [::]:80;
    server_name monitoring.example.com;
    location / {
        return 301 https://monitoring.example.com$request_uri;
    }
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name monitoring.example.com;
    auth_basic "Restricted Access";
    auth_basic_user_file /var/www/monitoring-example-com/web/.htpasswd;
    ssl_certificate /etc/letsencrypt/live/monitoring.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/monitoring.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I have a server B that I want to monitor and on which I installed Filebeat.
How do I secure the exchanges between ELK and Filebeat?
Do I need to create an OpenSSL certificate, or can I use the certificates generated with Let's Encrypt for Nginx?
Do you use Logstash in your pipeline, or does Filebeat output data directly into Elasticsearch? Depending on this, the answer changes slightly. Other aspects of your cluster setup also matter.
I'll assume that you are outputting data directly into Elasticsearch.
The method you described, putting nginx in front of Elasticsearch and doing basic authentication, is OK for a developer/test environment with a one-node cluster. I suspect this is all you want, since you are monitoring just one server. If this is all you need, you can stop reading.
You should, however, never use a one-node setup in production. Elasticsearch is a distributed store, and you should always use at least three nodes in production environments.
Why does this matter for security? In a multi-node cluster you have to secure both the REST API (default port 9200) and the transport layer (the inter-node traffic, default ports 9300-9400). You may also want to be sure that only trusted nodes join the cluster. Nginx is not sufficient for this. One solution is to put the inter-node traffic into a full-mesh VPN set up between the cluster nodes; I recommend tinc for this. The second is to set up TLS with one of the several security plugins available.
Best is to use both, because you will probably want not just encryption but also user management, role separation, audit logging, etc.
There are several plugins you can use. The most obvious is to set up X-Pack Security; in that case, please refer to the X-Pack documentation, where the whole process is described.
X-Pack is quite expensive. Luckily there are several alternatives, the most prominent being Search Guard. The Community edition is missing a few features such as LDAP or field-level security, but it should be sufficient for the most common use cases. The documentation is not always straightforward, so I recommend doing a few test deployments.
Other alternatives include ReadonlyREST, which has both an enterprise and a free version, and the newest, Open Distro; that one maintains compatibility only with the OSS version of Elasticsearch (it may break the basic-licence features).
Edit: 11/18/2019
X-Pack under the basic license now offers free basic security features, pretty much the same as Search Guard Community, with the addition that you can manage roles and users from the Kibana GUI. My personal opinion is that Search Guard Community is now obsolete, because X-Pack provides better features and you have one less dependency in your cluster, which makes updates and administration easier. For commercial use cases Search Guard may still be the more sensible option, especially for large clusters.
Check the following page, which describes how to configure TLS to keep all data private along the path Filebeat -> Logstash -> Elasticsearch -> Kibana -> your web browser:
TLS for the Elastic Stack: Elasticsearch, Kibana, Beats, and Logstash
Elasticsearch
Basically, on Elasticsearch, enable transport SSL in elasticsearch.yml as follows (note: the original snippet set xpack.security.http.ssl.enabled to false, which contradicts its own comment; it should be true to encrypt the REST layer):
# Enables security.
xpack.security.enabled: true
# Enables transport SSL.
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: "certs/elastic-certificates.p12"
xpack.security.transport.ssl.truststore.path: "certs/elastic-certificates.p12"
xpack.security.transport.ssl.verification_mode: certificate
# Enables SSL on the HTTP (REST) layer.
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.client_authentication: optional
xpack.security.http.ssl.keystore.path: "certs/elastic-certificates.p12"
xpack.security.http.ssl.truststore.path: "certs/elastic-certificates.p12"
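The certs/elastic-certificates.p12 keystore referenced above can be generated with the certutil tool that ships with Elasticsearch; a sketch, using the conventional file names from the Elastic documentation:
bin/elasticsearch-certutil ca --out elastic-stack-ca.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --out certs/elastic-certificates.p12
The same elastic-certificates.p12 can be copied to every node, since verification_mode: certificate above checks the certificate and its CA rather than the node's hostname.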
For more details, read: Encrypting communications in Elasticsearch.
Filebeat
And enable TLS on the Filebeat hosts. Example filebeat.yml:
filebeat.prospectors:   # renamed to 'filebeat.inputs' in newer Filebeat versions
- type: log
  paths:
    - logstash-tutorial-dataset
output.logstash:
  hosts: ["logstash.local:5044"]
  ssl.certificate_authorities:
    - certs/ca.crt
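Before shipping data, Filebeat's built-in connectivity check is a quick way to confirm that the TLS handshake with Logstash succeeds:
filebeat test output
This resolves logstash.local, connects to port 5044, and validates the server certificate against the CA configured above.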
Read more:
Secure communication with Elasticsearch (to secure communication between Filebeat and Elasticsearch)
Secure communication with Logstash (to secure communication between Filebeat and Logstash)
Logstash
Then you need to enable TLS in Logstash (if it is in use). Example logstash.yml:
node.name: logstash.local
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: 'CHANGEME'
xpack.monitoring.elasticsearch.url: https://node1.local:9200
xpack.monitoring.elasticsearch.ssl.ca: config/certs/ca.crt
Read more: Secure communication with Logstash.
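Note that logstash.yml only covers the monitoring connection to Elasticsearch; the Beats listener itself is secured in the pipeline configuration. A sketch of the matching input, with certificate file names that are assumptions and must correspond to the CA configured in Filebeat above:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "config/certs/logstash.local.crt"
    ssl_key => "config/certs/logstash.local.pkcs8.key"  # logstash-input-beats expects a PKCS#8 key
  }
}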

How to specify ssl connection with Nginx stream?

I'm trying to define a reverse proxy with nginx.
I have a server which listens on port 943 (TCP with SSL). I use the tekn0ir/nginx-stream Docker image. I have the following definitions in the myotherservice.conf file:
upstream backend {
    hash $remote_addr consistent;
    server myserverip:943;
}
server {
    listen localhost:943;
    proxy_connect_timeout 300s;
    proxy_timeout 300s;
    proxy_pass backend;
}
When I try to connect to localhost:943, the connection is refused. I suspect it is related to my SSL definitions. How should I define it?
Working with Docker, you must bind the port on all container interfaces in order to be able to expose it:
...
server {
    listen *:943;
...
(See the nginx documentation for the listen directive.)
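With the listen directive bound to all interfaces, the port must also be published when the container is started, for example (a sketch; the in-container config path is an assumption about the tekn0ir/nginx-stream image):
docker run -d --name nginx-stream \
    -p 943:943 \
    -v $(pwd)/myotherservice.conf:/etc/nginx/stream.d/myotherservice.conf:ro \
    tekn0ir/nginx-stream
The -p 943:943 mapping is what actually makes the stream listener reachable from the host.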

nginx ldap auth bypass for specific networks

I am running nginx v1.6.3 on Debian Jessie 8.5 with this module compiled in: https://github.com/kvspb/nginx-auth-ldap
When connecting to a site from different subnets I want the following behaviour:
Subnet A: needs auth via ldap
Subnet B: no auth
I tried the geo module to turn on auth_ldap only if subnet A matches, but it still requires auth.
Parts of my config
geo $val {
default 0;
10.0.0.0/24 1;
}
server {
...
location / {
if ($val) {
ldap_auth ....
}
}
error.log:
2016/06/23 23:48:50 [emerg] 3307#0: "auth_ldap" directive is not allowed here in /etc/nginx/sites-enabled/proxy:32
I thought about adding a switch such as auth_ldap_bypass to the nginx-auth-ldap module, but I'm not into programming modules for nginx. Maybe there is a solution out there; one configuration-only idea is sketched below.
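Instead of patching the module, nginx's satisfy directive applies to all access-phase modules, so combining it with allow/deny can express "trusted subnet OR LDAP auth". A sketch, assuming subnet B is 10.0.1.0/24 (a placeholder, since the question only names subnet A) and that your build of nginx-auth-ldap behaves like the stock access-phase auth modules, which is worth verifying:
server {
    ...
    location / {
        satisfy any;            # pass if EITHER the IP check OR LDAP auth succeeds
        allow 10.0.1.0/24;      # subnet B: no auth required
        deny all;               # everyone else falls through to LDAP
        auth_ldap "Restricted";
        auth_ldap_servers my_ldap_server;   # defined in an ldap_server block at http level
    }
}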

How to pass a client certificate through two nginx servers?

I have some reasons to use two nginx servers in front of the application server.
Both nginx servers use SSL:
Nginx1 (SSL 443 and ssl_verify_client on) -> Nginx2 (SSL 443) -> App (9000).
On the first server, Nginx1, I use the option: proxy_set_header client_cert $ssl_client_cert;
On the second server, Nginx2, I use the option: underscores_in_headers on;
The problem is that only the first line of the certificate, "-----BEGIN CERTIFICATE-----", reaches Nginx2: the PEM value in $ssl_client_cert spans multiple lines, which does not survive as a request header.
How do I pass the client certificate through to the application server?
Nginx terminates SSL, with no exceptions, so if you want this configuration anyway you will need an SSL configuration, and the certificates, on the second server as well (here is the relevant SO answer), or, based on an Nginx support discussion, you can use HAProxy in TCP mode instead. Here is the sample configuration article; a minimal TCP-mode sketch follows.
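For reference, TCP-mode passthrough in haproxy.cfg looks roughly like this (a sketch; hostnames and backend names are placeholders):
frontend fe_tls
    mode tcp
    bind *:443
    default_backend be_nginx2

backend be_nginx2
    mode tcp
    server nginx2 nginx2.internal:443
Because HAProxy only forwards the raw TCP stream here, the TLS handshake, including the client certificate exchange, happens end-to-end with the backend that actually terminates SSL.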
I found a workaround for proxying the client certificate:
# NGINX1
...
map $ssl_client_raw_cert $a {
    # PCRE subpattern names must start with a non-digit,
    # so the first capture is named "first" rather than "1st".
    "~^(-.*-\n)(?<first>[^\n]+)\n((?<b>[^\n]+)\n)?((?<c>[^\n]+)\n)?((?<d>[^\n]+)\n)?((?<e>[^\n]+)\n)?((?<f>[^\n]+)\n)?((?<g>[^\n]+)\n)?((?<h>[^\n]+)\n)?((?<i>[^\n]+)\n)?((?<j>[^\n]+)\n)?((?<k>[^\n]+)\n)?((?<l>[^\n]+)\n)?((?<m>[^\n]+)\n)?((?<n>[^\n]+)\n)?((?<o>[^\n]+)\n)?((?<p>[^\n]+)\n)?((?<q>[^\n]+)\n)?((?<r>[^\n]+)\n)?((?<s>[^\n]+)\n)?((?<t>[^\n]+)\n)?((?<v>[^\n]+)\n)?((?<u>[^\n]+)\n)?((?<w>[^\n]+)\n)?((?<x>[^\n]+)\n)?((?<y>[^\n]+)\n)?((?<z>[^\n]+)\n)?((?<ab>[^\n]+)\n)?((?<ac>[^\n]+)\n)?((?<ad>[^\n]+)\n)?((?<ae>[^\n]+)\n)?((?<af>[^\n]+)\n)?((?<ag>[^\n]+)\n)?((?<ah>[^\n]+)\n)?((?<ai>[^\n]+)\n)?((?<aj>[^\n]+)\n)?((?<ak>[^\n]+)\n)?((?<al>[^\n]+)\n)?((?<am>[^\n]+)\n)?((?<an>[^\n]+)\n)?((?<ao>[^\n]+)\n)?((?<ap>[^\n]+)\n)?((?<aq>[^\n]+)\n)?((?<ar>[^\n]+)\n)?((?<as>[^\n]+)\n)?((?<at>[^\n]+)\n)?((?<av>[^\n]+)\n)?((?<au>[^\n]+)\n)?((?<aw>[^\n]+)\n)?((?<ax>[^\n]+)\n)?((?<ay>[^\n]+)\n)?((?<az>[^\n]+)\n)*(-.*-)$"
    $first;
}
server {
    ...
    location / {
        ...
        proxy_set_header client_cert $a$b$c$d$e$f$g$h$i$j$k$l$m$n$o$p$q$r$s$t$v$u$w$x$y$z$ab$ac$ad$ae$af$ag$ah$ai$aj$ak$al$am$an$ao$ap$aq$ar$as$at$av$au$aw$ax$ay$az;
        ...
    }
    ...
}
# NGINX 2
server {
    ...
    underscores_in_headers on;
    ...
    location / {
        proxy_pass_request_headers on;
        proxy_pass http://app:9000/;
    }
    ...
}
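On nginx 1.13.5 and newer this workaround is unnecessary: the built-in variable $ssl_client_escaped_cert carries the client certificate URL-encoded on a single line, so it survives as a header value (the header name is kept from the workaround above; the application must URL-decode the PEM):
# NGINX1, for nginx >= 1.13.5
proxy_set_header client_cert $ssl_client_escaped_cert;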