Splunk UF not sending data to indexer

I have a Splunk Universal Forwarder (UF) and a Splunk Enterprise server, both v8.2.1, running in Docker containers, but I am unable to see any data on the Enterprise server in the new index I created, 'mytest'.
The Enterprise server has the default port 9997 active as a receiving port.
Both containers are connected to the 'splunk' network I created:
"Containers": {
"0f9e44620ce9fba16df21af6d2253c4b02b9714cb3ea126a616f10d06f836eb9": {
"Name": "dspinelli-uf",
"EndpointID": "0e1dd065ee3d815c943a8b52e6107e53a4b57d9e3103b17d1461611543769869",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"3a1a084561eda8013baa8847f4ca30fd68eb74468ff666195bf1c15e0f8a280f": {
"Name": "dspinelli-ent",
"EndpointID": "7159b1a41840f9dfae04b50bb61386f8c3ac2233aee334026b9f1d685cfcf571",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
inputs.conf on the UF:
[splunktcp://9997]
disabled = 0
[http://hec-uf]
description = UF HTTP Event Collector
disabled = 0
token = 4022d42f-9132-442a-8a79-5d3eea1ad40d
index = mytest
indexes = mytest
outputgroup = tcpout
outputs.conf on the UF:
[indexAndForward]
index = false
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = dspinelli-ent:9997
[tcpout-server://dspinelli-ent:9997]
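For reference, the settings Splunk actually loads on the UF can be double-checked with btool (a sketch, run from the UF's Splunk home):
# Show the effective HEC input and tcpout settings, with the file each one comes from
./bin/splunk btool inputs list http --debug
./bin/splunk btool outputs list tcpout --debug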
Communication between the UF and Enterprise Server is established:
netstat -an | grep 9997
tcp 0 0 0.0.0.0:9997 0.0.0.0:* LISTEN
tcp 0 0 172.18.0.3:44420 172.18.0.2:9997 ESTABLISHED
./bin/splunk list forward-server
Active forwards:
dspinelli-ent:9997
Configured but inactive forwards:
None
An attempt to curl the UF with some test data succeeds:
curl -k https://x.x.x.x:8087/services/collector \
> -H 'Authorization: Splunk 4022d42f-9132-442a-8a79-5d3eea1ad40d' \
> -d '{"sourcetype": "demo", "event":"Hello, I was sent from UF"}'
{"text":"Success","code":0}
However, no data ever appears in the index on the Enterprise server.
Does anyone know what could be wrong here, or what the next steps would be?

The issue was with inputs.conf. I updated it as follows:
[http://hec-uf]
description = UF HTTP Event Collector
disabled = 0
token = 4022d42f-9132-442a-8a79-5d3eea1ad40d
_TCP_ROUTING = *
index = _internal
After the update and a restart, the messages started to be received on the Enterprise server. Presumably _TCP_ROUTING = * routes the HEC events to every configured tcpout group (here default-autolb-group), whereas the original outputgroup = tcpout did not name an existing group.

Related

Cannot send stream messages to RabbitMQ running in Docker

I have a RabbitMQ container in Docker and another service that sends stream-type messages to it. It only works when the service runs outside Docker: if I build the service as a container, run it in Docker, and send stream messages, it always fails with "System.Net.Sockets.SocketException (111): Connection refused". Sending a classic-type message, however, succeeds.
rabbitmq:
  container_name: rabbitmq
  image: rabbitmq:3-management
  environment:
    RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_stream advertised_host localhost
    RABBITMQ_DEFAULT_USER: "admin"
    RABBITMQ_DEFAULT_PASS: "admin"
    RABBITMQ_DEFAULT_VHOST: "application"
  ports:
    - 5672:5672
    - 5552:5552
    - 15672:15672
  volumes:
    - ./conf/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    - ./conf/rabbitmq/enabled_plugins:/etc/rabbitmq/enabled_plugins
  healthcheck:
    test: rabbitmq-diagnostics -q ping
    interval: 15s
    timeout: 15s
    retries: 5
  env_file:
    - ./.local.env
./conf/rabbitmq/rabbitmq.conf
stream.listeners.tcp.1 = 5552
stream.tcp_listen_options.backlog = 4096
stream.tcp_listen_options.recbuf = 131072
stream.tcp_listen_options.sndbuf = 131072
stream.tcp_listen_options.keepalive = true
stream.tcp_listen_options.nodelay = true
stream.tcp_listen_options.exit_on_close = true
stream.tcp_listen_options.send_timeout = 120
./conf/rabbitmq/enabled_plugins
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_stream,rabbitmq_stream_management].
The other service's configuration in Docker:
# RabbitMQ
Host = "host.docker.internal",
VirtualHost = "application",
Port= 5672,
StreamPort = 5552,
User= "admin",
Password = "admin",
UseSSL = false
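Worth noting: the stream protocol behaves differently from classic AMQP here. After the initial handshake the broker sends the client an advertised host and port to reconnect to, taken from advertised_host; with advertised_host localhost, a client running in its own container is told to reconnect to itself, which fails with exactly this "Connection refused". A sketch of the change, assuming the client container can reach the broker by its compose service name rabbitmq:
environment:
  RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_stream advertised_host rabbitmq
The client's Host and StreamPort would then point at rabbitmq:5552 as well.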

How to set up FreeIPA + RabbitMQ

RabbitMQ version: 3.10.0
How do I write rabbitmq.conf correctly without using advanced.config?
Working BindDN on another server: uid=myuserinfreeipa,cn=users,cn=accounts,dc=mydc1,dc=mydc2
Working SearchFilter on another server: "(&(uid=%u)(memberOf=cn=mygroupinfreeipa,cn=groups,cn=accounts,dc=mydc1,dc=mydc2)(!(nsaccountlock=TRUE)))"
Working BaseDN on another server: "cn=users,cn=accounts,dc=mydc1,dc=mydc2"
rabbitmq.conf
auth_backends.1 = ldap
auth_ldap.servers.1 = my.server.com
auth_ldap.timeout = 500
auth_ldap.port = 389
auth_ldap.user_dn_pattern = CN=${username},OU=Users,dc=mydc1,dc=mydc2
auth_ldap.use_ssl = false
ssl_options.cacertfile = /etc/rabbitmq/ca.crt
auth_ldap.dn_lookup_bind.user_dn = test
auth_ldap.dn_lookup_bind.password = password
auth_ldap.dn_lookup_attribute = distinguishedName
auth_ldap.dn_lookup_base = cn=users,cn=accounts,dc=mydc1,dc=mydc2
auth_ldap.log = network
advanced.config
[
  {rabbitmq_auth_backend_ldap, [
    {tag_queries, [
      {administrator, {in_group, "CN=mygroupinfreeipa,dc=mydc1,dc=mydc2", "member"}},
      {management, {constant, true}}
    ]}
  ]}
].
tail -f /var/log/rabbitmq/rabbit@amqptest.log
LDAP CHECK: login for test
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP network traffic: bind reply = {ok,
{'LDAPMessage',1,
{bindResponse,
{'BindResponse',invalidCredentials,
[],[],asn1_NOVALUE,asn1_NOVALUE}},
asn1_NOVALUE}}
LDAP bind returned "invalid credentials": xxxx
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP bind error: "xxxx" {'EXIT',
{{badmatch,
{error,
{asn1,
{function_clause,
[{'ELDAPv3',encode_restricted_string,
[{refused,"test",[]},[<<4>>]]

Terraform - Failed to set up SSH tunneling for host

Hello, I am trying to deploy an RKE Kubernetes cluster with Terraform, but I am not able to connect to the desired host via SSH:
time="2022-02-28T11:17:38+01:00" level=warning msg="Failed to set up SSH tunneling for host [poc-k8s.my-domain.com]: Can't retrieve Docker Info: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info\": Unable to access node with address [poc-k8s.my-domain.com:22] using SSH. Please check if you are able to SSH to the node using the specified SSH Private Key and if you have configured the correct SSH username. Error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
and this is the .tf file I am using:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.3.0"
    }
  }
}
provider "rke" {
  log_file = "rke_debug.log"
}
resource "rke_cluster" "cluster" {
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["controlplane", "worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  addons_include = [
    "https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml",
    "https://gist.githubusercontent.com/superseb/499f2caa2637c404af41cfb7e5f4a938/raw/930841ac00653fdff8beca61dab9a20bb8983782/k8s-dashboard-user.yml",
  ]
}
resource "local_file" "kube_cluster_yaml" {
  filename          = "~/.kube/kube_config_cluster.yml"
  sensitive_content = "rke_cluster.cluster.kube_config_yaml"
}
The key is of course correct and I am able to connect to the desired host:
ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com
what am I missing here?
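One thing worth ruling out: RKE's bundled SSH client is stricter than the ssh binary and, per the RKE docs, does not use passphrase-protected keys without ssh-agent; keys in the newer OpenSSH container format have also been reported to trip it up. If that applies, the key can be rewritten as an unencrypted PEM (a sketch; ssh-keygen -p rewrites the file in place, so back it up first):
ssh-keygen -p -m PEM -N "" -f ~/.ssh/root_key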
[Update]
The cluster resource has a delay_on_creation property that can be used:
resource "rke_cluster" "cluster" {
delay_on_creation = 180
(...)
}
I'm facing a similar issue: on the second run of terraform apply it works correctly. In my case the issue is that Docker is not up fast enough for the RKE provider. I've found the following workaround in citynetwork/citycloud-examples:
resource "rke_cluster" "cluster" {
(...)
depends_on = [null_resource.wait-for-docker]
}
resource "null_resource" "wait-for-docker" {
provisioner "local-exec" {
command = "sleep 180"
}
depends_on = [
# list of servers docker being installed on
(...)
]
}
It waits for 180s, which is not ideal, though.
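A variant that polls the node instead of sleeping blindly, assuming the same host and key as in the question:
resource "null_resource" "wait-for-docker" {
  provisioner "local-exec" {
    # Poll until the remote Docker daemon responds instead of waiting a fixed 180s
    command = "until ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com docker info >/dev/null 2>&1; do sleep 5; done"
  }
}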

How to expose RSK node to an external network?

I am having problems exposing my RSK node to an external IP.
My startup command looks as follows:
java \
-cp $HOME/Downloads/rskj-core-3.0.1-IRIS-all.jar \
-Drsk.conf.file=/root/bitcoind-lnd/rsk/rsk.conf \
-Drpc.providers.web.cors=* \
-Drpc.providers.web.ws.enabled=true \
co.rsk.Start \
--regtest
This is my rsk.conf:
rpc {
    providers {
        web {
            cors: "*",
            http {
                enabled = true
                bind_address = "0.0.0.0"
                hosts = ["localhost", "0.0.0.0"]
                port: 4444
            }
        }
    }
}
The API is accessible from localhost, but from an external network I get error 400. How do I expose it to the external network?
You should add your external IP to hosts. Adding just 0.0.0.0 is not enough to mark all IPs as valid. Port forwarding also needs to be enabled for the port configured in rsk.conf, which in this case is the default value of 4444.
rpc {
    providers {
        web {
            cors: "*",
            http {
                enabled = true
                bind_address = "0.0.0.0"
                hosts = ["localhost", "0.0.0.0", "216.58.208.100"]
                port: 4444
            }
        }
    }
}
where 216.58.208.100 is your external IP
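Once the external IP is in hosts, a quick test from outside is a standard JSON-RPC call, since RSK exposes the Ethereum-compatible API (216.58.208.100 again standing in for your external IP):
curl http://216.58.208.100:4444 \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'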

icinga2 notifications to cachet

I would like to share a way to send notifications from Icinga2 to Cachet via the API.
Icinga2 version : 2.4.10-1
Cachet version : 2.3.9
First of all, you need to know the ID of the component you want to update (you can also update a component by name).
To get the component ID, you can use curl:
curl --insecure --request GET --url https://URL/api/v1/components -H "X-Cachet-Token: TOKEN"
URL : The URL of your cachet installation
TOKEN : The Token of the member in Cachet
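If jq is available, the IDs and names can be extracted directly from the response (a sketch; Cachet wraps the components in a data array):
curl --insecure -s -H "X-Cachet-Token: TOKEN" https://URL/api/v1/components | jq '.data[] | {id, name}'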
Create command in /etc/icinga2/conf.d/commands.conf
object NotificationCommand "cachet-incident-notification-v2" {
  import "plugin-notification-command"
  command = [ PluginDir + "/cachet-notification-v2.sh" ]
  env = {
    "SERVICESTATE" = "$service.state$"
  }
}
Create notification template in /etc/icinga2/conf.d/templates.conf
template Notification "cachet-incident-notification-v2" {
  command = "cachet-incident-notification-v2"
  states = [ OK, Warning, Critical, Unknown ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]
  /*
  period = "24x7"
  */
  interval = 0
}
Create notification in /etc/icinga2/conf.d/notifications.conf
apply Notification "cachet-incident-notification-v2" to Service {
  import "cachet-incident-notification-v2"
  user_groups = host.vars.notification.pager.groups
  assign where service.vars.cachetv2 == "1" && host.vars.cachetv2 == "1"
  interval = 0 # Disable re-notification
}
Add the variable to your service check in /etc/icinga2/conf.d/service/your/service.conf:
[...]
vars.cachetv2 = "1"
[...]
Add the variable to your host config file in /etc/icinga2/conf.d/hosts/your/host:
[...]
vars.cachetv2 = "1"
[...]
Create the script in /usr/lib/nagios/plugins/cachet-notification-v2.sh
#!/bin/bash
# Some constants
NOW="$(date +'%d/%m/%Y')"
CACHETAPI_URL="https://URL/api/v1/components/<COMPONENT ID>"
CACHETAPI_TOKEN="<TOKEN>"
# Map Icinga2 notification states to Cachet component statuses
# OK       - 1 Operational
# Warning  - 3 Partial outage
# Critical - 4 Major outage
# Unknown  - 2 Performance issues
case "$SERVICESTATE" in
  'OK')
    COMPONENT_STATUS=1
    ;;
  'WARNING')
    COMPONENT_STATUS=3
    ;;
  'CRITICAL')
    COMPONENT_STATUS=4
    ;;
  'UNKNOWN')
    COMPONENT_STATUS=2
    ;;
esac
curl -X PUT -H "Content-Type: application/json;" -H "X-Cachet-Token: ${CACHETAPI_TOKEN}" -d '{"status": "'"${COMPONENT_STATUS}"'"}' "${CACHETAPI_URL}" -k
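The script can be tested by hand before wiring it into Icinga2, since it only reads SERVICESTATE from the environment (a sketch, using the CRITICAL mapping):
# Should switch the component to "Major outage" (status 4) in Cachet
SERVICESTATE=CRITICAL /usr/lib/nagios/plugins/cachet-notification-v2.sh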
PS: give the script execute permission (chmod +x /usr/lib/nagios/plugins/cachet-notification-v2.sh).
Check the syntax and reload
/etc/init.d/icinga2 checkconfig && /etc/init.d/icinga2 reload
The result :
When your check results in "CRITICAL", the status in Cachet will be MAJOR ISSUE
When your check results in "WARNING", the status in Cachet will be PARTIAL ISSUE
When your check results in "OK", the status in Cachet will be OPERATIONAL
When your check results in "UNKNOWN", the status in Cachet will be PERFORMANCE DELAY
I hope it will help.
Nicolas B.