Getting the average of the number of attempts using Splunk

Whenever there is an attempt at creating a template, the message "Show Template Creation Dialog" gets logged. I want to get the total number of times that message got logged over a specific period of time, and also the average number of times it got logged over that period. How do I do this in Splunk? I want the result as a single number, not a chart.
index="projectm-$env$-ue1" "Show Template Creation Dialog"
| mvexpand "meta.adobeguid" limit=1
| stats count as "Total", distinct_count("meta.adobeguid") AS "Unique Users"
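One possible approach, sketched below: bucket the events by day, then compute the total and the per-day average in a second stats pass. The span=1d bucket size is an assumption about the desired averaging period; adjust it as needed.
index="projectm-$env$-ue1" "Show Template Creation Dialog"
| bin _time span=1d
| stats count as daily_count by _time
| stats sum(daily_count) as "Total", avg(daily_count) as "Average per Day"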

Related

How to calculate the time duration between two events in Splunk which don't have a common element

First Event
06:09:17:362 INFO com.x.y.ConnApp - Making a GET Request
Second Event
06:09:17:480 INFO com.a.b.Response - Output Status Code: 200
Now I want to calculate the duration between these two events for every request. I went over the solutions on Splunk Answers and Stack Overflow, but still can't get the proper result.
The easy answer is the transaction command, although it has a couple of drawbacks. The first is that the command can be a resource hog. The other is that it can be "greedy", in that multiple requests might be taken to be a single transaction. We'll take care of the second issue with the maxevents option. There's not much we can do about the first except avoid using transaction.
index=foo ("Making a GET Request" OR "Output Status Code:")
| transaction maxevents=2 startswith="Making a GET Request" endswith="Output Status Code:"
| table duration
Another option uses the streamstats command to calculate the difference between adjacent events. This should perform better than transaction.
index=foo ("Making a GET Request" OR "Output Status Code:")
| streamstats window=2 range(_time) as duration
``` Erase the duration field for start events. ```
| eval duration = if(searchmatch("Making a GET Request"),"", duration)
| table _raw duration
Both queries assume the start and end events for different requests are not intermingled.
With the current logging messages, it will be tricky to group logs that are linked to the same source (imagine multiple calls generating successive "Making a GET Request" messages).
In this case, I suggest propagating a correlation ID in the logging messages.
Then you can identify exactly which messages were triggered by the same source.
This involves a change to the app's logging (look into libraries such as Log4j with MDC, or Spring Cloud Sleuth).
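Once such an ID is in the events, the duration can be computed with a plain stats instead of transaction, even when requests interleave. A sketch, assuming the ID appears in the raw text as correlationId=<value> (a hypothetical format; adjust the rex pattern to your actual logs):
index=foo ("Making a GET Request" OR "Output Status Code:")
| rex field=_raw "correlationId=(?<corr_id>\S+)"
| stats earliest(_time) as start latest(_time) as end by corr_id
| eval duration = end - start
| table corr_id duration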

Splunk query using time an event occurs in one index and using it as a starting point to filter events in another index

What's the most efficient way to perform the following search?
Event occurs on index A at X time
Take X time and use it as a start point in index B
Search all occurrences of a field within index B, with additional filters, 5 minutes after that initial event time that occurred from index A
Example using Windows logs: after every successful login via event ID 4624 (index="security") for a particular user on a host, search all Sysmon event ID 1 (index="sysmon") process creation events on that specific host that occurred in a 5 minute window after the login event. My vision is to examine user logins on a particular host and correlate subsequent process creation events over a short period of time.
I've been trying to play with join, stats min(_time), and eval starttimeu, but haven't had any success. Any help/pointers would be greatly appreciated!
Have you tried map? The map command runs a search for each result of another search. For example:
index=security sourcetype=wineventlog EventCode=4624
```Set the latest time for the map to event time + 5 minutes (300 seconds)```
| eval latest=_time+300
| map search="search index=sysmon host=$host$ earliest=$_time$ latest=$latest$"
Names within $ are field names from the main search.
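For the Sysmon scenario described in the question, the inner search can carry the extra filters. A sketch, assuming the account name is extracted as a user field ("jsmith" is a placeholder) and Sysmon process creation shows up as EventCode=1 in your environment; note that map stops after 10 searches by default, so raise maxsearches if the outer search returns more rows:
index=security sourcetype=wineventlog EventCode=4624 user="jsmith"
| eval latest=_time+300
| map maxsearches=100 search="search index=sysmon EventCode=1 host=$host$ earliest=$_time$ latest=$latest$"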

Trigger Azure Log analytics alert based on log file append

I need to create an alert for when a new entry is added to an application log file. Each new entry is time-stamped. I have set up/imported the custom log as timestamped and tested with a dummy app log file and manually added entries. I initially set up the alert to trigger when the number of results is greater than 0. This appears to work, but depending on the time intervals I set, it will keep emailing me the alerts. Is there any way I can get it to alert just once each time a new entry is added?
Alert logic
Based on - Number of results
Operator Greater than
Threshold value 0
Evaluation based on
Period(in Minutes)
1440
Frequency(in minutes)
240
I have set these to cut down on the alert emails. Ideally I'd like it to check every hour and alert when a new entry is added, but only alert once. Not sure if it can be done. Are there any tweaks to the Kusto query where I can get it to alert based on a row-number increase? With the alert set to greater than 0, I have a feeling it will always alert, because any new entries mean the count is higher than that value.
My basic Kusto query just returns lines that list a document number
LogAppend_CL
| where RawData contains "for Document number"
I'm not sure I understood your query properly. Do you want to get all the new records inserted every hour?
Doesn't this alert condition work for you? You configure an alert which fires every 60 minutes, looks back over the last 60 minutes, checks whether there are any records matching your query, and returns them in the email.
Alert logic
Based on - Number of results
Operator Greater than
Threshold value 0
Evaluation based on
Period(in Minutes) 60
Frequency(in minutes) 60
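If you also want the query itself to bound the look-back window instead of relying only on the alert period, a minimal tweak (assuming the standard TimeGenerated column on the custom log table):
LogAppend_CL
| where TimeGenerated > ago(1h)
| where RawData contains "for Document number"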

Splunk - counting numeric information in events

I'm very new to Splunk and wanted to know if the following was possible: I'm trying to set up a dashboard of how many times we had to retry a call to a service. I am currently logging the following text:
number of retries required 0
The number of retries required can vary from 0 to 3.
Is there an easy way to query this and display how many times it was either 0, 1, 2 or 3?
Thanks.
The gist of it is that you need to extract that piece of information into a field and then analyze that field according to your wishes (e.g., via timechart, chart, stats, etc.). Here are two different ways:
You can use the Field Extractor to extract and create a new field from the retries count. This is the recommended long-term option.
Or use the rex command to extract and define a new field inline:
search * | rex field=_raw ".+retries required (?<retries>\d)$"
Then you can chart the counts over time by appending | timechart count by retries, or use the stats command to do some other calculations.
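Putting it together, a sketch of the full inline query for the "how many times was it 0, 1, 2 or 3" question (the search terms and rex pattern assume the exact message shown above):
search "number of retries required"
| rex field=_raw "retries required (?<retries>\d)$"
| stats count by retries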

Splunk Failed Login Report

I am relatively new to Splunk and I am trying to create a report that will display a hostname and the number of times that host failed to log in within the past five minutes, when it failed 3 or more times. The only way I was able to get the initial search results I want is to look only within the past 5 minutes, as you can see in my query:
index="wineventlog" EventCode=4625 earliest=-5min | stats count by host,_time | stats count by host | search count > 2
This returns the host and the count. The issue is that if I use this query in my report, it can run every five minutes, but the hosts that were listed previously get removed, as they are no longer included in the search results.
I found ways to generate logs that I can then search for separately (http://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/LogEvents) but it didn't work the way I expected.
I am looking for an answer to any of these questions that can help me get the intended results:
Can my original search be improved to still only get results where the failed logins were within 5 minutes but be able to search over any time period?
Is there a way to send the results from the query I already have to a report, where the results will not be cleared out when the search is run again?
Is there any other option I haven't considered to achieve the desired result?
If you only care about the last 5 minutes then search only the last 5 minutes. Searching more is just wasting resources.
Consider writing your results to a summary index (using collect) with a scheduled search and have your report/dashboard display values from the summary index.
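A sketch of what that could look like, assuming a search scheduled every 5 minutes and a summary index named summary (the index name, source, and schedule are placeholders):
index="wineventlog" EventCode=4625 earliest=-5m@m latest=@m
| stats count by host
| where count > 2
| collect index=summary source=failed_logins
The report or dashboard then reads from the summary index, so earlier results persist across runs:
index=summary source=failed_logins
| table _time host count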