Set Splunk alert to send alerts for each host

I am setting up a Splunk search/alert to look for an error across a group of servers. How can I set the alert to mail me when the error happens on multiple hosts, but only once per host?
For example, if the error is continuously happening on two hosts out of a group of 6, I need to get two alerts (one for each host), but only once until the next day. (For the once-per-day part I am using the throttling feature.)
Is that possible with Splunk?

I figured it out. In the throttle settings, I set the suppress field value to host and set the alert to trigger for every occurrence. This way, it sent one alert per host and then suppressed everything until the suppress threshold expired.

To throttle by specific fields you must select the "For each result" option in the Trigger Conditions section. Then put "host" in the "Suppress results containing field value" box.
See http://docs.splunk.com/Documentation/Splunk/7.2.0/Alert/ThrottleAlerts
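For reference, the same setup can be expressed in savedsearches.conf. This is only a sketch; the stanza name and the search itself are placeholders:

[per_host_error_alert]
search = index=main sourcetype=app_logs "ERROR" | stats count by host
# trigger for each result rather than one digest alert
alert.digest_mode = 0
# throttle per host value, once per day
alert.suppress = 1
alert.suppress.fields = host
alert.suppress.period = 24h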

Related

Can you combine multivalue fields to form a consolidated Splunk alert?

I have a Splunk search which returns several logs of the same exception, one for each ID number (from a batch process). I can extract the field from the log with a regex and can easily build a separate alert for each ID number.
Slack Message: "Reference number $result.extractedField$ has failed processing."
Since the error happens in batches, sending out an alert for every reference ID that failed would clutter up my Slack channel very quickly. Is it possible to collect all of the extracted fields and set the alert to send only one message? Like this...
Slack Message: "Reference numbers $result.listOfExtractedFields$ have failed to process."
To have a consolidated alert you need consolidated search results. Do that like this:
index=the_index_youre_searching "the class where the error occurs" "the exception you're looking for"
| stats values(*) as * by referenceID
Be sure to select the "Once" Trigger Condition in the alert setup.
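If you would rather flatten the IDs into a single field, so that one message token expands to the whole list, a variation like this sketch should work (failedIDs is a made-up field name):

index=the_index_youre_searching "the class where the error occurs" "the exception you're looking for"
| stats values(referenceID) as failedIDs
| eval failedIDs=mvjoin(failedIDs, ", ")

Slack Message: "Reference numbers $result.failedIDs$ have failed to process."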

Can a telegram bot block a specific user?

I have a Telegram bot that, for any received message, runs a program on the server and sends its result back. But there is a problem! If a user sends too many messages to my bot (spamming), it makes the server very busy!
Is there any way to block people who send more than 5 messages in a second and stop receiving their messages (using the Telegram API)?
Firstly, I have to say that the Telegram Bot API does not have such a capability itself, so you will need to implement it on your own. All you need to do is:
Count the number of messages that a user sends within a second, which won't be easy without a database. If you have a database with a table called Black_List and save all messages with their sent time in another table, you can count the messages sent by one specific ChatID in a predefined time period (in your case, 1 second) and check whether the count is bigger than 5. If it is, insert that ChatID into the Black_List table.
Every time the bot receives a message, it must run a database query to check whether the sender's ChatID exists in the Black_List table. If it does, the bot should ignore the message (or even send an alert to the user saying "You're blocked", which I think can be time consuming) and continue its own job.
Note that, as far as I know, the current Telegram Bot API doesn't have a feature to stop receiving messages, but as mentioned above you can ignore the messages from spammers.
To save time, you should avoid making a database connection every time the bot receives an update (message). Instead, load the ChatIDs that exist in the Black_List into a DataSet and update that DataSet right after inserting a new spammer ChatID into the Black_List table. This way the number of queries is reduced noticeably.
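A minimal sketch of that flow in Python with SQLite (the table and function names are made up for illustration; any database would do):

import time
import sqlite3

conn = sqlite3.connect("bot.db")
conn.execute("CREATE TABLE IF NOT EXISTS black_list (chat_id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE IF NOT EXISTS messages (chat_id INTEGER, sent_time REAL)")

# In-memory copy of the Black_List table, loaded once at startup.
blacklist = {row[0] for row in conn.execute("SELECT chat_id FROM black_list")}

def is_spamming(chat_id, limit=5, window=1.0):
    # Record this message, then count messages from chat_id in the last second.
    now = time.time()
    conn.execute("INSERT INTO messages VALUES (?, ?)", (chat_id, now))
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM messages WHERE chat_id = ? AND sent_time > ?",
        (chat_id, now - window),
    ).fetchone()
    return count > limit

def handle_update(message):
    chat_id = message.chat.id
    if chat_id in blacklist:  # no database round-trip for known spammers
        return
    if is_spamming(chat_id):
        conn.execute("INSERT OR IGNORE INTO black_list VALUES (?)", (chat_id,))
        blacklist.add(chat_id)  # keep the in-memory set in sync
        return
    # ...normal processing of the message goes here...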
I achieved it this way:
# Use a TTLCache as a time-limited dict; adjust maxsize and ttl as needed.
import cachetools
from datetime import datetime, timedelta

ttl_cache = cachetools.TTLCache(maxsize=128, ttl=60)

def check_user_msg_frequency(message):
    # Restrict users who sent more than 3 messages within the TTL window.
    # `bot` is the telebot.TeleBot instance.
    msg_cnt = ttl_cache.get(message.from_user.id, 0)
    if msg_cnt > 3:
        until = datetime.now() + timedelta(seconds=60 * 10)  # 10-minute restriction
        bot.restrict_chat_member(message.chat.id, message.from_user.id, until_date=until)

def set_user_msg_frequency(message):
    # Count this message against the user's total for the current window.
    if not ttl_cache.get(message.from_user.id):
        ttl_cache[message.from_user.id] = 1
    else:
        ttl_cache[message.from_user.id] += 1
With these two functions, you can track how many messages each user sent in the period. If a user sends more messages than expected, they are restricted.
Then every handler you register should call these two functions:
@bot.message_handler(commands=['start', 'help'])
def handle_start_help(message):
    set_user_msg_frequency(message)
    check_user_msg_frequency(message)
I'm using the pyTelegramBotAPI module to handle updates.
I know I'm late to the party, but here is another simple solution that doesn't use a database:
Create a ConversationState class to attach to each Telegram ID when they start to chat with the bot.
Then add a LastMessage DateTime variable to the ConversationState class.
Now every time you receive a message, check whether enough time has passed since the LastMessage DateTime; if not enough time has passed, answer with a warning message.
You can also implement a timer that deletes the conversation state class if you are worried about performance. A sketch of the idea follows.
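A rough sketch of that idea in Python (the class and variable names follow the description above; the one-second cooldown is an assumption):

from datetime import datetime, timedelta

class ConversationState:
    # Per-user state, created the first time a Telegram ID talks to the bot.
    def __init__(self):
        self.last_message = datetime.min

states = {}  # Telegram ID -> ConversationState
COOLDOWN = timedelta(seconds=1)

def accept_message(user_id):
    # True if enough time has passed since this user's last message.
    state = states.setdefault(user_id, ConversationState())
    now = datetime.now()
    if now - state.last_message < COOLDOWN:
        return False  # too soon: reply with a warning instead of processing
    state.last_message = now
    return True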

Where can I increase the default max defer limit value in cPanel?

I have a cPanel server and am getting this error while sending email:
Domain XYZ.com has exceeded the max defers and failures per hour (5/5 (100%)) allowed. Message discarded.
The maximum email sending limit is already set to 500 in Tweak Settings and in the account package.
Where can I increase the default max defer limit value in cPanel?
Thanks
On the Tweak Settings page, there is an option under the Mail tab for:
Number of failed or deferred messages a domain may send before protections can be triggered

Need help in Apache Camel multicast/parallel/concurrent processing

I am trying to achieve concurrent/parallel processing for my requirement, but I have not had any luck in my multiple attempts so far.
I have 5 remote directories (which may be added or removed) containing log files. I want to download them to my local directory every 15 minutes and perform Lucene indexing after the FTP transfer job completes. I also want to add routes dynamically.
Since all those remote machines are different endpoints with different routes, I don't have any single endpoint to kick all of this off.
Start
<parallel>
<download remote dir from: sftp1>
<download remote dir from: sftp2>
....
</parallel>
<After above task complete>
<start Lucene indexing>
<end>
Repeat the above every 15 minutes.
I want to download all folders in parallel. Kindly suggest a solution if anybody has worked on a similar requirement.
I would like to know how to start/initiate these multiple routes (for these multiple remote directories) when I don't have a starting endpoint. I would like to run all FTP operations in parallel and, on completing those, do the indexing. Thanks for taking the time to read this post; I really appreciate your help.
I tried like this,
from("bean:foo?method=start").multicast().to("direct:a").to("direct:b")...
from("direct:a").from("sftp:xxx").to("file:localdir")
from("direct:b").from("sftp:xxx").to("file:localdir")
camel-ftp supports periodic polling via the consumer.delay property.
Add camel-ftp consumer routes dynamically, one for each server, as shown in this unit test.
You can then aggregate your results based on a size or timeout value to initiate the Lucene indexing, etc.
[todo - put together an example]
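Pending that example, here is a rough sketch of those three steps in Java (the endpoint URIs, the 60-second quiet period, and the luceneIndexer bean are placeholders, not from the original answer):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy;

public class LogDownloadRoutes extends RouteBuilder {
    @Override
    public void configure() {
        String[] servers = {
            "sftp://user@host1/logs?password=secret",
            "sftp://user@host2/logs?password=secret"
        };
        for (String server : servers) {
            // one consumer route per remote directory, polling every 15 minutes
            from(server + "&delay=900000")
                .to("file:target/local-logs")
                .to("direct:downloaded");
        }
        // collect downloaded files and start indexing once the batch goes quiet
        from("direct:downloaded")
            .aggregate(constant(true), new GroupedExchangeAggregationStrategy())
            .completionTimeout(60000)
            .to("bean:luceneIndexer?method=index");
    }
}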

How to get priority of current job?

In beanstalkd
telnet localhost 11300
USING foo
put 0 100 120 5
hello
INSERTED 1
How can I know what the priority of this job is when I reserve it? And can I release it by setting the new priority equal to the current priority + 100?
Beanstalkd doesn't return the priority with the data, but you could easily add it as metadata in your own message body. For example, with JSON as a message wrapper:
{"priority": 100, "timestamp": 1302642381, "job": "download http://example.com/"}
The next message that will be reserved will be the next available entry from the selected tubes, according to priority and time - subject to any delay that you had requested when you originally sent the message to the queue.
Addition: You can get the priority of a beanstalkd job (as well as a number of other pieces of information, such as how many times it has previously been reserved), but it's an additional call, to the stats-job command. Called with the job ID, it returns about a dozen different pieces of information. See the protocol document and your library's docs.
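For illustration, here is roughly how both steps look with the beanstalkc Python client (assuming beanstalkd on localhost:11300; the +100 release mirrors the question):

import beanstalkc

beanstalk = beanstalkc.Connection(host='localhost', port=11300)
beanstalk.watch('foo')

job = beanstalk.reserve()
pri = int(job.stats()['pri'])  # stats-job under the hood: current priority
# Re-queue the job with priority lowered by 100 (a higher number = lower priority).
job.release(priority=pri + 100, delay=0)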