I have a redis server, logstash indexer server, and an elasticsearch server.
How can I have the indexer server or even the shipper servers include the IPs in the log so that it's easier to sort in Kibana?
Or is this something that is done in the elasticsearch config?
When you ship a log into Logstash, it creates an event and adds the hostname to the event. Logstash uses the hostname instead of the IP because one server can have several IPs (for example 127.0.0.1, a public IP, etc.), so it doesn't know which IP to use. That is why Logstash uses the hostname.
Did it. I added this:
filter {
  dns {
    add_field => [ "IPs", "Logs, from %{host}" ]
  }
}
filter {
  dns {
    type => [ "MESSAGES" ]
    resolve => [ "host" ]
    action => [ "replace" ]
  }
}
The reason I used a double filter was so that I could still keep the hostname after "replace" overwrote the host value with the IP address.
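For comparison, a single filter block can achieve the same thing by copying the host into a new field before the dns filter rewrites it. This is only a sketch (the field name "hostname" is an arbitrary choice, not from the original setup):

filter {
  mutate {
    # keep the original hostname before the dns filter overwrites "host"
    add_field => { "hostname" => "%{host}" }
  }
  dns {
    resolve => [ "host" ]
    action  => "replace"
  }
}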
I have a RabbitMQ server which receives messages to an exchange within a virtual host called "ce_func"; this exchange is bound to a queue called "azure_trigger".
I'd like to use Azure Functions' new RabbitMQ binding to collect from Rabbit. Unfortunately, this is limited to collecting only from the virtual host '/'. I was hoping that I could use Rabbit's federation functionality to automatically route to an "azure_trigger" queue within the "/" virtual host of the same server, but so far I've failed.
I created a Rabbit "upstream" and a "policy" applied to that upstream, but I can't figure out the configuration. I have a Federation Status of "Running", but it's only checking the "ce_func" virtual host; I can't see where I can set the target exchange as the "/" virtual host.
Does anyone have any pointers please?
If I understand correctly, you want to deliver messages between queues in different vhosts.
The RabbitMQ community recommends using the Shovel plugin to handle this situation:
The source and destination can be on the same broker (typically in different vhosts) or distinct brokers.
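For what it's worth, here is a rough sketch of a dynamic shovel that would move messages from the azure_trigger queue in ce_func to a queue of the same name in the default "/" vhost (the shovel name is made up, the rabbitmq_shovel plugin must be enabled, and the default vhost is written %2F in the URI):

rabbitmqctl set_parameter shovel azure-trigger-shovel \
  '{"src-uri": "amqp://localhost:5672/ce_func", "src-queue": "azure_trigger",
    "dest-uri": "amqp://localhost:5672/%2F", "dest-queue": "azure_trigger"}'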
It is possible to reference any virtual host (vhost) in the uri field of the federation-upstream's configuration in the form:
"amqp://" [ username [ ":" password ] "@" ] host [ ":" port ] [ "/" vhost ]
So in simple terms you can whack the vhost on the end of the uri, e.g. amqp://localhost:5672/myvhost... If your vhost name is blank then just make sure you include the trailing slash '/', e.g. amqp://localhost:5672/.
A note specific to the blank vhost from the RabbitMQ docs (https://www.rabbitmq.com/uri-spec.html):
The vhost component may be absent; this is indicated by the lack of a "/" character following the amqp_authority. An absent vhost component is not equivalent to an empty (i.e. zero-length) vhost name.
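For illustration (the upstream name here is invented, and this assumes the federation plugin is enabled), the vhost simply goes on the end of the upstream URI. Declared in the downstream "/" vhost and pointing at the ce_func vhost on the same broker, it could look like:

rabbitmqctl set_parameter -p / federation-upstream ce-func-upstream \
  '{"uri": "amqp://localhost:5672/ce_func"}'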
How does Consul resolve the new Redis master IP address after the old Redis master is taken down?
For example:
I ran while true; do dig redis.service.google.consul +short; sleep 2; done and the response is:
192.168.248.43
192.168.248.41
192.168.248.42
192.168.248.41
192.168.248.42
192.168.248.43
...
My expectation was that it would resolve only to 192.168.248.41, because it is the master. And when the master goes down, Consul should resolve to 192.168.248.42 or 192.168.248.43, according to which one becomes master.
Here is my consul services config in 192.168.248.41
...
"services": [
  {
    "id": "redis01-xxx",
    "name": "redis",
    "tags": ["staging"],
    "address": "192.168.248.41",
    "port": 6379
  }
]
...
Here is my consul services config in 192.168.248.42
...
"services": [
  {
    "id": "redis01-xxx",
    "name": "redis",
    "tags": ["staging"],
    "address": "192.168.248.42",
    "port": 6379
  }
]
...
And same thing with 192.168.248.43.
My expectation is like this video: https://www.youtube.com/watch?v=VHpZ0o8P-Ts
When I do a dig, Consul resolves to only one IP address (the master). When the master is down and Redis Sentinel selects a new master, Consul resolves to the new Redis master's IP address.
I am very new to Consul, so I would really appreciate it if someone could give a short example and point out which Consul feature to use, so I can catch on faster.
When you register multiple services with the same "name", they all appear under <name>.service.consul (if they are healthy, of course). In your case, you didn't define any health checks for your Redis services, which is why you see all of the service IPs in your DNS query response.
For a use case where you only need the IP of the Redis master, I would recommend registering the service with a health check. I'm not familiar with how one would query Redis to find out if a node is the master, but you could use, for example, a script check whose script returns 0 if the current Redis node was elected master by Redis Sentinel. Consul will automatically recognize that service instance as healthy, and you will see only its IP in the dig redis.service.consul query against Consul's DNS interface.
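A minimal sketch of such a registration (assuming a Consul version where script checks use "args" and script checks are enabled on the agent; the check name and the redis-cli command are only illustrative):

"services": [
  {
    "id": "redis01-xxx",
    "name": "redis",
    "tags": ["staging"],
    "address": "192.168.248.41",
    "port": 6379,
    "check": {
      "name": "redis is master",
      "args": ["/bin/sh", "-c", "redis-cli -h 127.0.0.1 -p 6379 info replication | grep -q '^role:master'"],
      "interval": "10s"
    }
  }
]

With a check like this, only the node currently reporting role:master is healthy, so redis.service.consul resolves to a single IP.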
I am using Logstash and Elasticsearch version 5.6.5. So far I have used the elasticsearch output with the HTTP protocol and no authentication. Now Elasticsearch is being secured using basic authentication (user/password) and a CA-certified HTTPS URL. I don't have any control over the Elasticsearch server; I just use it as an output from Logstash.
Now when I try to configure the HTTPS URL of elasticsearch with basic authentication, it fails to create the pipeline.
Output Configuration
output {
  elasticsearch {
    hosts => ["https://myeslasticsearch.server.io"]
    user => "esusername"
    password => "espassword"
    ssl => true
  }
}
Errors
1. Error registering plugin {:plugin=>"#<LogStash::OutputDelegator:0x50aa9200
2. Pipeline aborted due to error {:exception=>#<URI::InvalidComponentError: bad component(expected user component):
How to fix this?
I notice that there is a field called cacert which requires a PEM file. But I am not sure what to put there, since the Elasticsearch server is using a CA-certified SSL certificate, not a self-signed one.
Additional question: I don't have any xpack installed. Is 'xpack' required to be purchased for HTTPS output to Elasticsearch from Logstash?
I found the root cause of the issue. There were three things to fix:
The Logstash version I tested with was wrong (5.5.0). I downloaded the correct version to match Elasticsearch version 5.6.5.
The host I used was running on port 443. When I didn't specify the port, as below, Logstash appended 9200 to it, which caused the connection to fail.
hosts => ['https://my.es.server.com']
The configuration below corrected the port used by Logstash.
hosts => ['https://my.es.server.com:443']
I was missing proxy connection settings.
proxy => 'http://my.proxy.com:80'
Overall settings that worked.
output {
  elasticsearch {
    hosts => ['https://my.es.server.com:443']
    user => 'esusername'
    password => 'espassword'
    proxy => 'http://my.proxy:80'
    index => "my-index-%{+YYYY.MM.dd}"
  }
}
There is no need for the 'ssl' field (it is implied by the https:// URL). There is also no need to install 'xpack' for this requirement.
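To sanity-check the credentials, proxy, and TLS setup outside of Logstash, a quick curl call against the same placeholders from above can help:

curl -u esusername:espassword -x http://my.proxy:80 https://my.es.server.com:443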
I installed RabbitMQ via a Docker image on a machine, including the management and rabbitmq_auth_backend_ip_range plugins. I want to restrict access to ports 5671/2 and 15672 so that only certain IPs are allowed to access them.
As 15672 is the web interface, I have no current solution for that. Any ideas?
For 5671/2 (which one is the secure one?) I want to use the rabbitmq_auth_backend_ip_range plugin because, as far as I understood, that is its purpose.
My current rabbitmq.config looks like this:
[
  {rabbit, [
    {auth_backends, [{rabbit_auth_backend_ip_range}]}
  ]},
  {rabbitmq_auth_backend_ip_range, [
    {tag_masks,
      [{'administrator', [<<"::FFFF:192.168.0.0/112">>]}]
    }
  ]}
].
According to the documentation, that allows access only for accounts tagged with administrator. But if I do a telnet, nothing has changed:
telnet ip-address 5672
I can still access it. How do you pass credentials via telnet? How is IP restriction done with RabbitMQ?
rabbitmq-auth-backend-ip-range only provides an authentication/authorization mechanism for logging in to and talking to the RabbitMQ server. It doesn't mean your 5672 port is not open.
You will still be able to telnet to 5672, but if an administrator user tries to connect to the RabbitMQ server, the connection has to come from the configured IP range; otherwise an authentication failure is returned.
For the RabbitMQ management interface you can define the listener IP address something like this:
{rabbitmq_management, [
  {listener, [{port, 15672}, {ip, "127.0.0.1"}]}
]}
rabbitmq-auth-backend-ip-range is a community plugin for client authorization based on source IP address. With this community plugin, we can restrict client access on the basis of IP address.
Steps to configure the plugin in RabbitMQ version 3.6.x:
wget https://dl.bintray.com/rabbitmq/community-plugins/3.6.x/rabbitmq_auth_backend_ip_range/rabbitmq_auth_backend_ip_range-20180116-3.6.x.zip
Unzip the content to /usr/lib/rabbitmq/lib/rabbitmq_server-3.x/plugins
Enable the plugin: rabbitmq-plugins enable rabbitmq_auth_backend_ip_range
Set a custom tag to which this plugin will apply IP address restrictions:
rabbitmqctl set_user_tags custom_user custom_tag
Configure the RabbitMQ configuration file:
vi /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [5672]},
    {auth_backends, [
      {rabbit_auth_backend_internal,
        [rabbit_auth_backend_internal, rabbit_auth_backend_ip_range]}
    ]}
  ]},
  {rabbitmq_auth_backend_ip_range, [
    {tag_masks,
      [{'customtag', [<<"::FFFF:172.xx.xx.xxx">>]}]},
    {default_masks, [<<"::0/0">>]}
  ]}
].
This configuration has the effect that a user with the tag customtag will only be able to connect to the RabbitMQ server from IP address 172.xx.xx.xxx, while users with all other tags can connect from any IP address.
sudo service rabbitmq-server restart
PS: As there is no valid link online for configuring the rabbitmq_auth_backend_ip_range plugin, I answered this question with the configuration steps.
I am running nginx v1.6.3 on debian jessie 8.5 with this module compiled: https://github.com/kvspb/nginx-auth-ldap
When connecting to a site from different subnets I want the following behaviour:
Subnet A: needs auth via ldap
Subnet B: no auth
I tried the geo module to turn on auth_ldap only if subnet A matches, but it still requires auth.
Parts of my config
geo $val {
  default 0;
  10.0.0.0/24 1;
}

server {
  ...
  location / {
    if ($val) {
      auth_ldap ....
    }
  }
error.log:
2016/06/23 23:48:50 [emerg] 3307#0: "auth_ldap" directive is not allowed here in /etc/nginx/sites-enabled/proxy:32
I thought about adding a switch like auth_ldap_bypass to the nginx-auth-ldap module, but I'm not into programming nginx modules. Maybe there is a solution out there.
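In case it helps, one pattern that is often used for this kind of split (only a sketch; the subnet for B and the LDAP server name are placeholders, and it assumes the module's auth_ldap/auth_ldap_servers directives run in nginx's access phase so that satisfy applies to them): combine satisfy any with allow/deny, so requests from subnet B are let through without authentication while everything else has to pass the LDAP auth.

location / {
    satisfy any;

    # Subnet B: allowed through without authentication
    allow 10.0.1.0/24;
    deny  all;

    # Everyone else (including subnet A) must authenticate via LDAP
    auth_ldap "Restricted";
    auth_ldap_servers my_ldap_server;
}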