How to delete sensu client from web monitoring - redis

Hi, I am trying to delete a Sensu client from monitoring. I have removed the Sensu package and folder from the client, but it still shows up in the web monitor. I tried curl and also removed it directly from the web monitoring, still no luck.
I have googled and tried deleting the Redis keys from redis-cli, and the client still shows in the web monitor. Please guide me on how to delete it from monitoring.

curl -s -i -X DELETE http://<sensu-server>:4567/clients/<client-name>
The above will delete the client from Sensu. This information was obtained here: Clients API - Sensu Docs - https://docs.sensu.io/sensu-core/1.4/api/clients/
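To confirm the deletion actually took, you can query the same endpoint again; a minimal sketch against the Sensu Core 1.x clients API, assuming the default API port:
curl -s -i http://<sensu-server>:4567/clients/<client-name>
# A 404 response means the client is gone; a 200 with a JSON body means it
# is still registered. Note that if the Sensu agent is still running on the
# client machine, it will simply re-register itself via keepalives after
# the delete.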

You may also have a look at the documentation:
http://sensuapp.org/docs/0.11/faq#how-do-i-delete-a-client

Related

How to send notification to Telegram from GitLab pipeline?

In our small startup we use GitLab for development and Telegram for internal communication between developers and the PO. Since the PO would like to see progress immediately, we have set up the GitLab pipeline so that a preview version is deployed to the web server after each commit. Now we want to extend the pipeline so that after the deployment a notification is sent via the Telegram group.
So the question - is that possible, and if so, how?
EDIT: since I've already implemented this, it's not a real question. I wanted to post the answer here so that others can use it as well.
So, we'll go through it step by step:
1. Create a Telegram bot
2. Add bot to Telegram group
3. Find out Telegram group Id
4. Send message via GitLab Pipeline
1. Create a Telegram bot
There are good instructions from Telegram itself for this:
https://core.telegram.org/bots#6-botfather
The instructions do not say it explicitly, but to generate the token you have to open a chat with the BotFather.
At the end you get a bot token, something like 110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw
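Before wiring the token into the pipeline, you can sanity-check it directly against the Bot API; a quick sketch using the placeholder from above:
curl -s https://api.telegram.org/bot<YourBOTToken>/getMe
# A valid token returns {"ok":true,"result":{...}} including the bot's
# username; an invalid one returns {"ok":false,"error_code":401,...}.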
2. Add bot to Telegram group
Switch to the Telegram group, and add the created bot as a member (look for the bot by name).
3. Find out Telegram group Id
Get the update status for the bot in the browser:
https://api.telegram.org/bot<YourBOTToken>/getUpdates
Find the chat-id in the response:
... "chat": {"id": <YourGroupID>, ...
For more details, see: Telegram Bot - how to get a group chat id?
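The same lookup can also be scripted from the command line; a small sketch, assuming jq is installed (note that group chat IDs are negative numbers):
curl -s "https://api.telegram.org/bot<YourBOTToken>/getUpdates" | jq '.result[].message.chat.id'
# If the result array is empty, post a message in the group first and call
# getUpdates again - the bot only sees updates from after it was added.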
4. Send message via GitLab Pipeline
Send the message with a curl command. For example, an existing stage in the GitLab pipeline can be extended for this purpose:
upload:
  stage: deploy
  image: alpine:latest
  script:
    - 'apk --no-cache add curl'
    - 'curl -X POST -H "Content-Type: application/json" -d "{\"chat_id\": \"<YourGroupID>\", \"text\": \"CI: new version was uploaded, see: https://preview.startup.com\"}" https://api.telegram.org/bot<YourBOTToken>/sendMessage'
  only:
    - main
Remember to adapt YourBOTToken and YourGroupID, as well as the text of the message.
*) We use the alpine Docker image here, so curl has to be installed ('apk --no-cache add curl'). With other images this may have to be done in a different way.
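Rather than hard-coding the token in .gitlab-ci.yml, you can also store it in CI/CD variables (Settings > CI/CD > Variables); the variable names below are hypothetical. The curl call then becomes:
curl -s -X POST -H "Content-Type: application/json" \
  -d "{\"chat_id\": \"${TELEGRAM_CHAT_ID}\", \"text\": \"CI: new version was uploaded\"}" \
  "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage"
# TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID are assumed to be defined as
# masked CI/CD variables, which keeps the token out of the repository.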
One easy way to send notifications (particularly if you're using multiple services or chats) is to use apprise.
To send to one Telegram channel:
apprise -vv --body="Notify telegram chat" \
  tgram://bottoken/ChatID1
This makes it easy to notify many services from your pipeline all at once, without needing to write code against the API of each service (apprise handles this for you).
image: python:3.9-slim # or :3.9-alpine if you prefer a smaller image
before_script:
  - pip install apprise # consider caching PIP_CACHE_DIR for performance
script: |
  # Send a notification to multiple telegram chats, a yahoo email account,
  # Slack, and a Kodi server with a bit of added verbosity:
  apprise -vv --body="Notify more than one service" \
    tgram://bottoken/ChatID1/ChatID2/ChatIDN \
    mailto://user:password@yahoo.com \
    slack://token_a/token_b/token_c \
    kodi://example.com

Thruk cgi authentication override

I have the latest version of Thruk installed with Naemon and livestatus. I want to be able to post commands from a Python script to cmd.cgi on the same server without the interference of authentication. I have tried the settings
use_authentication=0
default_user_name=thrukadmin
but it doesn't seem to work in the Thruk GUI. When trying to post to the CGI from the Thruk GUI I get the error, "I'm sorry Dave......"
Any thoughts on why this is not working right? The Apache server on that system uses LDAP to authenticate to the GUI, could this be an issue?
Other thoughts?
It's much easier than that, you don't even need Thruk in the middle. You can simply write to Naemon's command_file.
The external command list at https://www.naemon.org/documentation/developer/externalcommands/ contains an example for every possible command.
Here is a shell snippet which schedules a host downtime:
printf "[%lu] SCHEDULE_HOST_DOWNTIME;host1;1478638441;1478648441;1;0;3600;naemonadmin;This is an example comment.\n" `date +%s` > /var/lib/naemon/naemon.cmd
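The same pattern works for any command from that list; here is a small sketch, assuming the default command file path, that schedules a fixed one-hour downtime for a single service starting now (host 'host1' and service 'ping' are placeholders):
NOW=$(date +%s)
END=$((NOW + 3600))
# Format: SCHEDULE_SVC_DOWNTIME;host;service;start;end;fixed;trigger_id;duration;author;comment
printf "[%lu] SCHEDULE_SVC_DOWNTIME;host1;ping;%lu;%lu;1;0;3600;naemonadmin;scripted downtime\n" \
  "$NOW" "$NOW" "$END" > /var/lib/naemon/naemon.cmd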
When using Thruk, you can use Thruk's CLI script to send commands:
thruk r -d comment_data=test /hosts/localhost/cmd/schedule_host_downtime
Authentication is only required if you want to send commands over HTTP.
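For completeness, the HTTP variant would look roughly like this; a sketch, assuming Thruk's REST API is reachable at the usual /thruk/r/ path and that the credentials satisfy the Apache/LDAP authentication in front of it:
curl -s -u thrukadmin:password \
  -d comment_data=test \
  https://<thruk-server>/thruk/r/hosts/localhost/cmd/schedule_host_downtime
# The endpoint mirrors the 'thruk r' call above; only this HTTP path
# requires authentication.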

splunk search head cluster joining indexer cluster

Just trying out Splunk, and I have had an issue with integrating a search head cluster with an indexer cluster.
I have 3 machines in a search head cluster and 3 machines in an indexer cluster. These are all on CentOS 7, no firewall installed, and all machines are able to ping / view each other's Splunk instances (ip:8000 / ip:8089).
When following https://docs.splunk.com/Documentation/Splunk/6.6.2/DistSearch/SHCandindexercluster, specifically
splunk edit cluster-config -mode searchhead -master_uri 10.152.31.202:8089 -secret newsecret123
I get an error of
Could not contact master. Check that the master is up, the master_uri=10.152.31.202:8089 and secret are specified correctly
I have removed the https:// part from the URIs above, as I couldn't post with them included.
I have set the pass4SymmKey to be the same on all servers.
Thanks
Please check the shclustering pass4SymmKey on both the search head cluster members and the master.
I suspect a pass4SymmKey issue.
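If in doubt, set it explicitly on every search head and on the master and restart; a sketch, assuming default config locations (Splunk encrypts the value in place after a restart):
# $SPLUNK_HOME/etc/system/local/server.conf
[shclustering]
pass4SymmKey = <the same plaintext secret on every node>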
You should check splunkd.log to see what the error is. I would recommend not setting the pass4SymmKey at first and verifying that it works; if it does, then you have found your issue.
Also, you did not mention having an extra server acting as the cluster master. This should be a server independent from your indexers. You have one, right?
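For reference, a dedicated cluster master is enabled with the same CLI; a sketch, assuming a separate machine and typical replication/search factors:
splunk edit cluster-config -mode master -replication_factor 3 -search_factor 2 -secret newsecret123
splunk restart
# The search heads' -master_uri must then point at this machine's
# management port (8089) and include the https:// scheme.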

Expose service in OpenShift Origin Server - router is not working

Our team decided to try using OpenShift Origin server to deploy services.
We have a separate VM with OpenShift Origin server installed and working fine. I was able to deploy our local Docker images, and those services are running fine as well - Pods are up and running, they get their own IPs, and I can reach service endpoints from the VM.
The issue is that I can't get the services exposed outside the machine. I read about routers, which are supposed to be the right way of exposing services, but I just can't get it to work. Now some details.
Let's say my VM is 10.48.1.1. The service for the Pod with the Docker container running one of my services has the cluster IP 172.30.67.15:
~$ oc get svc
NAME      CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
my-svc    172.30.67.15   <none>        8182/TCP   4h
The service is a simple Spring Boot app with a REST endpoint exposed at port 8182.
When I call it from the VM hosting it, it works just fine:
$ curl -H "Content-Type: application/json" http://172.30.67.15:8182/home
{"valid":true}
Now I wanted to expose it outside, so I created a router:
oc adm router my-svc --ports='8182'
I followed the steps from the OpenShift dev docs, both from the CLI and the Console UI. The router gets created fine, but when I check its status, I get this:
$ oc status
In project sample on server https://10.48.3.161:8443
...
Errors:
* route/my-svc is routing traffic to svc/my-svc, but either the administrator has not installed a router or the router is not selecting this route.
I couldn't find anything about this error that could help me solve the issue - has anyone had a similar issue? Is there any other (better/proper?) way of exposing a service endpoint? I am new to OpenShift, so any suggestions would be appreciated.
If anyone is interested, I finally found the "solution".
The issue was that there was no "router" service created - I didn't know it had to be created.
Step by step: in order to create this service I followed the instructions from the OpenShift doc page, which were pretty easy, but I couldn't log in using the admin account.
I used the default admin account
$ oc login -u system:admin
But instead of using the available certificate, it kept asking me for a password, which it shouldn't. What was wrong? My env variables had been reset, and I had to set them again:
$ export KUBECONFIG="$(pwd)"/openshift.local.config/master/admin.kubeconfig
$ export CURL_CA_BUNDLE="$(pwd)"/openshift.local.config/master/ca.crt
$ sudo chmod +r "$(pwd)"/openshift.local.config/master/admin.kubeconfig
This was one of the first steps described in the OpenShift docs. After that the certificate is set correctly and login works as expected. As an admin I then created the router service (1st link) and the route started working - no more errors.
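For anyone following along, the router creation itself boils down to a few commands; a sketch, assuming OpenShift Origin defaults (the route hostname is a placeholder):
oc adm policy add-scc-to-user hostnetwork -z router     # let the router pod use the host network
oc adm router router --replicas=1                       # create the default HAProxy router
oc expose svc/my-svc --hostname=my-svc.apps.example.com # create a route for the service
# Once the router pod is running, 'oc status' should no longer complain
# that no router is selecting the route.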
So in the end it turned out to be pretty simple, but given that I didn't have any experience with OpenShift, it was hard for me to find out what was going on. I hope this helps if someone has the same issue.

ECONNREFUSED on redis what to do?

I have been working on this for days now, and I can't figure out what is wrong.
Everything else is working, but I get "ECONNREFUSED" on Redis.
I have the following instances running:
app01     ROLE: app
web01     ROLE: web
db01      ROLE: db:primary
redis01   ROLE: redis_master
redis02   ROLE: redis_slave
sidekiq01 ROLE: redis
Here is the error from the production log:
Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (ECONNREFUSED)):
app/models/user.rb:63:in `send_password_reset'
app/controllers/password_resets_controller.rb:10:in `create'
Everything is set up using the rubber gem.
I have tried to remove all instances and start from scratch twice. I have also tried to make a custom security rule, but I'm not sure if I did it right.
Please help me!
Bringing this post back from the dead because I found it when I was struggling with the same problem today. I resolved my problem by doing the following:
I added redis_slave or redis_master roles to the servers using cap rubber:add_role. I found this will add both the specified role and the generic "redis" role. Assuming that you want redis01 to be the only redis_master after adding roles, I'd expect your environment to have:
app01     ROLE: app
web01     ROLE: web
db01      ROLE: db:primary
redis01   ROLE: redis_master
redis01   ROLE: redis
redis02   ROLE: redis_slave
redis02   ROLE: redis
sidekiq01 ROLE: redis_slave
sidekiq01 ROLE: redis
After setting up the roles, I updated the servers with cap rubber:bootstrap.
In my environment I'm deploying code from git, so I had to commit these changes and run cap -s branch="branch_name_or_sha" deploy to get rubber/deploy-redis.rb onto the servers with the new roles and execute it.
After doing all this, Redis runs on all my nodes without throwing the Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (ECONNREFUSED)) error on any of them.
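As a quick sanity check after the bootstrap, you can verify that each node actually has a reachable Redis; a minimal sketch, assuming the default port:
redis-cli -h 127.0.0.1 -p 6379 ping
# A healthy instance answers PONG; ECONNREFUSED here means nothing is
# listening on that port, i.e. the redis role was never provisioned on
# this node.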
Good Luck!