Trigger Azure Log Analytics alert based on log file append

I need to create an alert for when a new entry is added to an application log file. Each new entry is timestamped. I have set up and imported the custom log as timestamped, and tested it with a dummy app log file by manually adding entries. I initially set up the alert to trigger when the number of results is greater than 0. This appears to work, but depending on the time intervals I set, it keeps emailing me alerts. Is there any way I can get it to alert just once each time a new entry is added?
Alert logic
Based on: Number of results
Operator: Greater than
Threshold value: 0
Evaluation based on
Period (in minutes): 1440
Frequency (in minutes): 240
I have set these values to cut down on the alert emails. Ideally I'd like it to check every hour and alert only once when a new entry is added. I'm not sure if that can be done. Are there any tweaks to the Kusto query that would let it alert based on a row-number increase? With the alert set to greater than 0, I have a feeling it will always fire, because any new entry means the count is above that value.
My basic Kusto query just returns lines that list a document number:
LogAppend_CL
| where RawData contains "for Document number"

I'm not sure I understood your question properly. Do you want to get all the new records inserted every hour?
Doesn't this alert condition work for you? You configure an alert which fires every 60 minutes, looks back over the last 60 minutes, checks whether any records match your query, and returns them in the email.
Alert logic
Based on: Number of results
Operator: Greater than
Threshold value: 0
Evaluation based on
Period (in minutes): 60
Frequency (in minutes): 60
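For reference, the same one-hour window can be pushed into the query itself; a sketch, assuming the default TimeGenerated ingestion timestamp column:

LogAppend_CL
| where TimeGenerated > ago(1h)
| where RawData contains "for Document number"

With Period and Frequency both set to 60 minutes, the evaluation windows do not overlap, so each new entry should trigger at most one alert.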
Regards
Arun

Related

Laravel where clause based on conditions from value in database

I am building an event reminder page where people can set a reminder for certain events. There is an option for the user to set the amount of time before they need to be notified. It is stored in notification_time and notification_unit. notification_time keeps track of the time before they want to be notified, and notification_unit keeps track of the PHP date format in which they selected the time, e.g. i for minutes, H for hours.
For example, notification_time = 2 and notification_unit = H means they need to be notified 2 hours before.
I have cron jobs running in the background to handle the notifications. The following function is hit once every minute:
Reminder::where(function ($query) {
    // 'i', 60 is hard-coded here; it should come from notification_unit / notification_time
    $query->where('event_time', '>=', now()->subMinutes(Carbon::createFromFormat('i', 60)->diffInMinutes() - 1)->format('H:i:s'));
    $query->where('event_time', '<=', now()->subMinutes(Carbon::createFromFormat('i', 60)->diffInMinutes())->format('H:i:s'));
})->get();
In this function, I am hard-coding the 'i', 60, while it should be fetched from the database. event_time is also part of the same table.
The table looks something like this:
id event_time ... notification_unit notification_time created_at updated_at
Is there any way to solve this issue? Is it possible to do the same logic with SQL instead?
A direct answer to this question is not possible, but I found two ways to resolve my issue.
First solution
MySQL has DATEDIFF and DATE_SUB to get a timestamp difference and to subtract an interval from a timestamp. In my case, the function runs every minute. To use them, I would have to refactor my database to store the notification offset in seconds, and then do the calculation in the query. I chose not to go this way because both operations are somewhat heavy on the server side, given that I am running the function every minute.
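For illustration, the first approach might have looked something like this (a sketch only; it assumes event_time is a full DATETIME and the refactored offset column is called notification_seconds):

-- find reminders whose notification moment falls inside the current minute
SELECT *
FROM reminders
WHERE DATE_SUB(event_time, INTERVAL notification_seconds SECOND)
      BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 59 SECOND);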
Second Solution
This is the solution I actually used. Here I do the calculation while storing the reminder in the database. Meaning? Let me explain. I created a new table, notification_settings, which is linked to the reminder (one-to-one relation). The table looks like this:
id, unit, time, notify_at, repeating, created_at, updated_at
The unit and time columns are only used when displaying the reminder. What I did is pre-calculate when to notify and store it in the notify_at column, so in the event scheduler I only need to check for reminders that are due right now (since I am running it every minute). The repeating column keeps track of whether the reminder repeats; if it does, I re-calculate the notify_at column at scheduling time. Once the user is notified, notify_at is set to null.
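Roughly, in code (names here are illustrative, not the exact ones from my project):

// when saving a reminder, pre-compute the notification moment
$eventTime = Carbon::parse($reminder->event_time);
$notifyAt  = $unit === 'H'
    ? $eventTime->subHours($time)
    : $eventTime->subMinutes($time);

$reminder->notificationSetting()->create([
    'unit'      => $unit,
    'time'      => $time,
    'notify_at' => $notifyAt,
]);

// the every-minute scheduler then only fetches rows that are due
$due = NotificationSetting::whereNotNull('notify_at')
    ->where('notify_at', '<=', now())
    ->get();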

Splunk Failed Login Report

I am relatively new to Splunk and I am trying to create a report that will display a hostname and the number of times that host failed to log in within the past five minutes, when it failed 3 or more times. The only way I was able to get the initial search results I want is to look only within the past 5 minutes, as you can see in my query:
index="wineventlog" EventCode=4625 earliest=-5min | stats count by host,_time | stats count by host | search count > 2
This returns the host and the count. The issue is that if I use this query in my report, it can run every five minutes, but the hosts that were listed previously get removed, as they are no longer included in the search results.
I found ways to generate logs that I can then search for separately (http://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/LogEvents), but it didn't work the way I expected.
I am looking for an answer to any of these questions that can help me get the intended results:
Can my original search be improved to still only get results where the failed logins were within 5 minutes but be able to search over any time period?
Is there a way to send the results from the query I already have to a report, where the results will not be cleared out when the search is run again?
Is there any other option I haven't considered to achieve the desired result?
If you only care about the last 5 minutes then search only the last 5 minutes. Searching more is just wasting resources.
Consider writing your results to a summary index (using collect) with a scheduled search and have your report/dashboard display values from the summary index.
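For example, a scheduled search along these lines would preserve each run's results (the summary index name is just a placeholder):

index="wineventlog" EventCode=4625 earliest=-5min
| stats count by host
| where count > 2
| collect index=failed_logins_summary

Your report then reads from index=failed_logins_summary over whatever time range you like, so earlier hosts are never lost.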

Set up email notification every time a threshold value is exceeded within a table in SQL

I have a table where I store errors calculated from a bunch of .csv files that contain predictions and measurements. What I want is to create an automatic process that sends an email to my account whenever any row in the errors table contains a value that exceeds a predefined threshold. I have gone through a lot of documentation online, and as far as I can tell I either need to create a trigger or set up a job on the server. However, I am new to this, so I am having real trouble implementing what I want. Any example would thus be greatly appreciated :)
Sending e-mail from a trigger is not a good idea.
A trigger is part of the transaction: your transaction will not commit until the e-mail has been sent, and if sending fails, your transaction will be rolled back.
As you said, you can use a job to do this.
In the job, you just check for the records of interest and, if any row is found, send the e-mail from the job. Alternatively, you can raise an error if any row is found and have the job e-mail you on failure.
The job can be scheduled to execute every 5 minutes, and you can check for new rows using a datetime field; new rows in the table are the ones where
datetime_field >= dateadd(minute, -5, getdate())
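A sketch of such a job step (table, column, threshold, and Database Mail profile names are all assumptions):

-- runs every 5 minutes; mails only if a fresh row exceeds the threshold
IF EXISTS (SELECT 1
           FROM dbo.prediction_errors
           WHERE error_value > 100.0  -- your predefined threshold
             AND created_at >= DATEADD(MINUTE, -5, GETDATE()))
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'AlertProfile',
         @recipients   = 'you@example.com',
         @subject      = 'Error threshold exceeded';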

How to alert on an event that normally happens once a day?

I have a batch job that runs once per day.
At the end of the job I submit a meter metric with a count of the items processed.
I want to alert if one day this metric is not updated.
On http://metrics.librato.com the maximum time I can check "not reported for" when creating an alert is 60 minutes.
I thought maybe I can create a composite metric and take the avg rate of change over the past 24 hours, and alert if that reaches zero.
I've been trying:
derive(s("my.metric", "%", {function:"sum", period:"86400"}))
However it seems that, because I log only a single event, above quite small values of period (~250s) my rate of change simply drops to zero... I guess the low frequency means my single value is completely lost in the sampling.
Maybe I am using the wrong tool for the job...
Is there a way to achieve this in Librato?
There currently is not a way to achieve this, as composite metrics are subject to the 60-minute limitation of alerts as well (as of 5/15/2015). You may need to look into configuring the metric (or a similar metric) to report within the 60-minute range, if possible.

Delete record 24 hours after insert

Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
I'm making a site (learning experience) where the user needs to click a validation link sent by e-mail once they register. I want the users to validate themselves within 24 hours.
I suppose what I'd need is a trigger, but I'm really not sure of the syntax, nor whether it is even possible.
I'm not sure of your schema, but I would do it a different way: keep a date/time on the database record that corresponds to the validation link. When they click the link, verify that the date and time of the record is within 24 hours of the current time. If so, allow it; otherwise reject it.
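In T-SQL terms, the click-time check would be something like this (table and column names assumed):

-- reject validation links older than 24 hours
SELECT CASE
         WHEN created_at >= DATEADD(HOUR, -24, GETDATE()) THEN 'valid'
         ELSE 'expired'
       END AS link_status
FROM dbo.pending_users
WHERE validation_token = @token;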
Q: Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
A: Sure. Write a "sqlcmd" script, wrap it in a .bat file, and invoke it from Windows Scheduled Tasks:
http://windows.microsoft.com/en-US/windows7/schedule-a-task
Alternatively, depending on your version, you could schedule the same SQL script from SQL Server Agent:
http://msdn.microsoft.com/en-us/library/ms189237.aspx
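Either way, the script itself is little more than one statement (table and column names assumed):

-- purge registrations that were never validated within 24 hours
DELETE FROM dbo.pending_users
WHERE created_at < DATEADD(HOUR, -24, GETDATE());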
Putting a different spin on things:
When the user clicks your link, you can check whether the current time (with respect to MSSQL) is more than 24 hours past the record's creation. If so, you'll reply with a "Too late" message (rather than validating the entry).
In any case - you absolutely, completely, totally, do NOT want to use a trigger!