Splunk interesting field exclusion

I have 4 fields (Name, age, class, subject) in one index (Student_Entry) and I want to count the total events, but I want to exclude the events that have any value in the subject field.
I tried the two ways below:
index=Student_Entry Subject !=* | stats count by event
index=Student_Entry NOT Subject= * | stats count by event

The NOT and != operators are similar, but not equivalent. NOT will return events with no value in the Subject field, whereas != will not. In your case, use NOT. See https://docs.splunk.com/Documentation/Splunk/8.0.4/Search/NOTexpressions
stats count by event produces no results because there is no field called 'event'. To count events, just use stats count.

It looks like you were right using index=Student_Entry Subject !=*
Then you just need to append | stats count

You can do it this way, too:
index=Student_Entry
| where isnull(subject)
| stats count
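A quick way to sanity-check the split, assuming the field is extracted as Subject (SPL field names are case-sensitive), is to count both populations in one search:
index=Student_Entry
| eval has_subject=if(isnotnull(Subject), "with subject", "no subject")
| stats count by has_subject
The "no subject" row should match the count from the searches above.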

Related

How to apply group_concat in Splunk SPL

I want to implement group_concat-like behavior in Splunk. Here, as in the table, serviceA has 2 entries which need to be combined with a delimiter, and the count needs to be added. Is there any way we can achieve this functionality using SPL? Any help is appreciated. Thanks!!
... | stats count, values(Status) by Service_Name
You need to create a multivalue field, and values() is an appropriate way to do that in your case.
To combine the Status values with comma separators, add these commands to your query.
| stats count as Count, values(Status) as Status by Service_name
| eval Status = mvjoin(Status, ",")
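As a self-contained illustration with made-up values (the makeresults and streamstats lines only generate sample rows; your real search would start from your own data):
| makeresults count=3
| streamstats count as row
| eval Service_name=if(row<=2, "serviceA", "serviceB")
| eval Status=case(row=1, "Success", row=2, "Failed", row=3, "Success")
| stats count as Count, values(Status) as Status by Service_name
| eval Status = mvjoin(Status, ",")
This returns serviceA with Count=2 and Status="Failed,Success", and serviceB with Count=1 and Status="Success".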

Splunk: count by Id

I did a query in Splunk which looks like this:
source="/log/ABCDE/cABCDEFGH/ABCDE.log" doSomeTasks
I now want to count the entries in the logfile by Id (Id is an extracted field). But I only want to count each Id once, not every time doSomeTasks is executed. How can I do this?
To count unique instances of field values, use the distinct_count or dc function.
source="/log/ABCDE/cABCDEFGH/ABCDE.log" doSomeTasks
| stats dc(Id) as IdCount
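If you also want to see which Ids were counted, a dedup-based sketch (same assumption that Id is the extracted field name) keeps one event per Id before counting:
source="/log/ABCDE/cABCDEFGH/ABCDE.log" doSomeTasks
| dedup Id
| stats count as IdCount, values(Id) as Ids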

Filter out values using mstats

I am trying to filter out all negative values in my metrics. I would like to know if filtering within the mstats call itself is possible, i.e., adding something like AND metrics_name:data.value > 0 to the query below:
| mstats avg(_value) WHERE metric_name="data.value" AND index="my_metrics" BY data.team
Currently, I am using msearch and then filtering out the events, so my query is something like the one below, but it's too slow as I am pulling all the events:
| msearch index=my_metrics
| fields "metrics_name:data.value"
| where mvcount(mvfilter(tonumber('metrics_name:data.value') > 0)) >= 1 OR isnull('metrics_name:data.value')
Unfortunately, you cannot filter or group by the _value field with Metrics.
You may be able to speed up your search with msearch by including the metric_name in the filter.
| msearch index=my_metrics filter="metric_name=data.value"
Note that using msearch returns a sample of the metric values, not all of them, unless you specify target_per_timeseries=0.
Refer to https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Mstats
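Putting both suggestions together, a rough sketch might look like the following. The field names are assumptions: msearch/mpreview typically returns the measure as a field named metric_name:data.value and the dimension as data.team, so adjust them to whatever your events actually contain.
| msearch index=my_metrics filter="metric_name=data.value" target_per_timeseries=0
| rename "metric_name:data.value" as value
| where value > 0
| stats avg(value) as avg_value by data.team
This is still an event-by-event search, so it will be slower than mstats, but the filter argument narrows what msearch has to pull back.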

How to aggregate logs by field and then by bin in AWS CloudWatch Insights?

I'm trying to write a query that will first aggregate a count by field and then by bin(1h). For example, I would like to get a result like this:
# Date Field Count
1 2019-01-01T10:00:00.000Z A 123
2 2019-01-01T11:00:00.000Z A 456
3 2019-01-01T10:00:00.000Z B 567
4 2019-01-01T11:00:00.000Z B 789
I'm not sure if it's possible, though. The query should be something like:
fields Field
| stats count() by Field by bin(1h)
Any ideas how to achieve this?
Is this what you need?
fields Field | stats count() by Field, bin(1h)
If you want to create a line chart, you can do it by separately counting each value that your field could take.
fields
Field = 'A' as is_A,
Field = 'B' as is_B
| stats sum(is_A) as A, sum(is_B) as B by bin(1hour)
This solution requires your query to include a string literal of each value ('A' and 'B' in OP's example). It works as long as you know what those possible values are.
This might be what Hugo Mallet was looking for, except the avg() function won't work here, so he'd have to calculate the average by dividing by a total.
I'm not able to group by a certain field and create visualizations.
fields Field
| stats count() by Field, bin(1h)
I keep getting this message:
No visualization available. Try this to get started:
stats count() by bin(30s)

How do I create a Splunk query for unused event types?

I have found that I can create a Splunk query to show how many times events of a certain event type appear in the results.
severity=error | stats count by eventtype
This creates a table like so:
eventtype | count
------------------------
myEventType1 | 5
myEventType2 | 12
myEventType3 | 30
So far so good. However, I would like to find event types with zero results. Unfortunately, those with a count of 0 do not appear in the query above, so I can't just filter by that.
How do I create a Splunk query for unused event types?
There are lots of different ways to do that, depending on what you mean by "event types". Somewhere, you have to get a list of whatever you are interested in and roll it into the query.
Here's one version, assuming you had a csv that contained a list of eventtypes you wanted to see...
severity=error
| stats count as mycount by eventtype
| inputcsv append=t mylist.csv
| eval mycount=coalesce(mycount,0)
| stats sum(mycount) as mycount by eventtype
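If the list you want is every event type defined on the search head rather than a csv, a variation using the saved/eventtypes REST endpoint could supply it (this assumes your role is allowed to run the rest command):
severity=error
| stats count as mycount by eventtype
| append [| rest /services/saved/eventtypes | rename title as eventtype | fields eventtype | eval mycount=0]
| stats sum(mycount) as mycount by eventtype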
Here's another version, assuming that you wanted a list of all eventtypes that had occurred in the last 90 days, along with the count of how many had occurred yesterday:
earliest=-90d@d latest=@d severity=error
| addinfo
| stats count as totalcount count(eval(_time>=info_max_time-86400)) as yesterdaycount by eventtype
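The addinfo command adds the search's time boundaries as fields (info_min_time and info_max_time), so the eval inside count() keeps only events from the final 86400 seconds, i.e. the last day, of the 90-day window.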