Splunk Query Recommendation

I have the below log from my application:
BookData, {
id: 12312
}, appID : 'APP1', Relation_ID : asdas-12312
host = aws#asd. sourcetype=service_name
The entire log above is a single string. I want to create a table with the number of times each appID has hit the service, i.e. count the number of events and group them by appID.
Basically, something like:
appID Count
APP1 23
APP2 25
APP3 100
I tried the below query, but it is not working; it returns "0 records found".
index=my_index sourcetype=service_name * | table appID Count | addColTotals labelfield=appID label="appID" count
As per my understanding, the above query is not working because appID is not a label, but in that case, how do I go about forming a query that gives my desired result?

The query doesn't work in part because there is no Count field for the table command to display, and no count field for the addcoltotals command to add to the results. To get a count, you must tell Splunk to count events using the stats, eventstats, streamstats, or timechart command.
Try this:
index=my_index sourcetype=service_name
| stats count as Count by appID
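If you also want the busiest appIDs first, appending a sort is enough; a minimal extension of the query above:
index=my_index sourcetype=service_name
| stats count as Count by appID
| sort - Count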

Related

KQL: return Security Event IDs and their count by computer

I can't figure out how to write a KQL query that would list all the Security Event IDs and their count for each computer.
This query returns all the Event IDs per computer.
This query returns the count by Computer, EventID and Activity.
Could anyone provide some hints, please?
Thank you!
UPDATE:
This is close to what I'd like to have.
However, is there a way to get something like this, basically without any kung-fu within Excel?
You could try using the count() aggregation function, with both Computer and EventId as the aggregation keys:
SecurityEvent
| where Timestamp > ago(12h)
| summarize count() by Computer, EventId
or, based on my understanding of the later comment, you could try this:
SecurityEvent
| where Timestamp > ago(12h)
| summarize count() by Computer, EventId
| summarize count_by_event_id = make_bag(pack(tostring(EventId), count_)) by Computer
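The second query returns one row per Computer with a dynamic bag such as {"4624": 100, "4625": 5} (illustrative values). If you then need one specific event ID's count as its own column, indexing into the bag works; a sketch (event ID 4625 is just an example, not from the original question):
SecurityEvent
| where Timestamp > ago(12h)
| summarize count() by Computer, EventId
| summarize count_by_event_id = make_bag(pack(tostring(EventId), count_)) by Computer
| extend failed_logons = tolong(count_by_event_id["4625"])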

Splunk: count by Id

I did a query in Splunk which looks like this:
source="/log/ABCDE/cABCDEFGH/ABCDE.log" doSomeTasks
I now want to count the entries in the logfile by Id (Id is an extracted field), but I only want to count every Id once, not every time doSomeTasks is executed. How could I do this?
To count unique instances of field values, use the distinct_count or dc function.
source="/log/ABCDE/cABCDEFGH/ABCDE.log" doSomeTasks
| stats dc(Id) as IdCount
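If you instead want the breakdown per Id rather than a single distinct total, grouping by Id gives one row per Id; a sketch using the same source and field name as the question:
source="/log/ABCDE/cABCDEFGH/ABCDE.log" doSomeTasks
| stats count by Id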

splunk join 2 search queries

I am writing a Splunk query to find the top exceptions that are impacting the client. I have 2 queries, one for client logs and another for server logs, joined on a common field. These are production logs, so I am changing the names. I am trying to find the top 5 failures that are impacting the client. Below is my query.
index=pirs sourcetype=client-* env=* (type=Error error_level=fatal) error_level=fatal serviceName=FailedServiceEndpoint
| table _time, serviceName, xab, endpoint, statusCode
| join left=L right=R where L.xab = R.xab
    [search index=zirs sourcetype=server-*
    | rex mode=sed field=span_name "s#\..*$##"
    | search span_success = false spanName=FailedServiceEndpoint
    | table _time, spanName, xab]
| chart count over L.serviceName
I explicitly mentioned a service name here; in the final query there won't be a service name, because we need the top 5 failures that are impacting the client.
This query gives me the service name and count, but I also need other columns like endpoint name and httpStatusCode. I am not sure how to do that, and is there any refactoring required for the Splunk query?
That's an odd use of join. I don't see that particular syntax documented, but apparently it works for you.
To get more fields, use stats instead of chart.
| stats count, values(endpointName) as endpointName, values(httpStatusCode) as httpStatusCode by serviceName
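Since the goal is the top 5 failures, sorting and truncating after the stats should finish the job; a sketch (field names as in the answer above):
| stats count, values(endpointName) as endpointName, values(httpStatusCode) as httpStatusCode by serviceName
| sort - count
| head 5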

How to aggregate logs by field and then by bin in AWS CloudWatch Insights?

I'm trying to write a query that will aggregate first by a field's count and then by bin(1h). For example, I would like to get a result like:
# Date Field Count
1 2019-01-01T10:00:00.000Z A 123
2 2019-01-01T11:00:00.000Z A 456
3 2019-01-01T10:00:00.000Z B 567
4 2019-01-01T11:00:00.000Z B 789
Not sure if it's possible though; the query should be something like:
fields Field
| stats count() by Field by bin(1h)
Any ideas how to achieve this?
Is this what you need?
fields Field | stats count() by Field, bin(1h)
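If you also want the rows ordered by count, aliasing the aggregate makes it sortable; a small variation (the alias cnt is my choice, not from the original answer):
fields Field
| stats count() as cnt by Field, bin(1h)
| sort cnt desc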
If you want to create a line chart, you can do it by separately counting each value that your field could take.
fields
Field = 'A' as is_A,
Field = 'B' as is_B
| stats sum(is_A) as A, sum(is_B) as B by bin(1hour)
This solution requires your query to include a string literal of each value ('A' and 'B' in OP's example). It works as long as you know what those possible values are.
This might be what Hugo Mallet was looking for, except the avg() function won't work here, so he'd have to calculate the average by dividing by a total.
I'm not able to group by a certain field and create visualizations.
fields Field
| stats count() by Field, bin(1h)
I keep getting this message:
No visualization available. Try this to get started:
stats count() by bin(30s)
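A likely reason for that message: the visualization tab needs a single time-based grouping producing numeric series, and grouping by both Field and bin() isn't directly chartable. One workaround is to filter to a single value first (Field = 'A' is just an example value), or pivot each value into its own column as shown earlier:
filter Field = 'A'
| stats count() as A_count by bin(1h)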

Stats Count Splunk Query

I wonder whether someone can help me please.
I'd made the following post about Splunk query I'm trying to write:
https://answers.splunk.com/answers/724223/in-a-table-powered-by-a-stats-count-search-can-you.html
I received some great help, but despite working on this for a few days now, concentrating on using eval if statements, I still have the same issue: the "Successful" and "Unsuccessful" columns show blank results. So I thought I'd cast the net a little wider and ask whether someone may be able to look at this and offer some guidance on how I can get around the problem.
Many thanks and kind regards
Chris
I tried exploring your use case with the splunkd_access log and came up with a simple SPL query to help you.
In this query I am actually joining the output of 2 searches which aggregate the required results (not concerned about search performance here).
Give it a try. If you have access to the _internal index, this will work as-is. You should be able to easily modify it to suit your events (e.g. replace user with ClientID).
index=_internal source="/opt/splunk/var/log/splunk/splunkd_access.log"
| stats count as All sum(eval(if(status <= 303,1,0))) as Successful sum(eval(if(status > 303,1,0))) as Unsuccessful by user
| join user type=left
[ search index=_internal source="/opt/splunk/var/log/splunk/splunkd_access.log"
| chart count BY user status ]
I updated your search from splunk community answers (should look like this):
`w2_wmf(RequestCompleted)` request.detail.Context="*test"
| dedup eventId
| rename request.ClientID as ClientID detail.statusCode AS statusCode
| stats count as All sum(eval(if(statusCode <= 303,1,0))) as Successful sum(eval(if(statusCode > 303,1,0))) as Unsuccessful by ClientID
| join ClientID type=left
[ search `w2_wmf(RequestCompleted)` request.detail.Context="*test"
| dedup eventId
| rename request.ClientID as ClientID detail.statusCode AS statusCode
| chart count BY ClientID statusCode ]
I answered this on Splunk Answers:
https://answers.splunk.com/answers/724223/in-a-table-powered-by-a-stats-count-search-can-you.html?childToView=729492#answer-729492
but using dummy encoding, it looks like
`w2_wmf(RequestCompleted)` request.detail.Context="*test"
| dedup eventId
| rename request.ClientId as ClientID, detail.statusCode as Status
| eval X_{Status}=1
| stats count as Total sum(X_*) as X_* by ClientID
| rename X_* as *
This will give you ClientID, a count, and then a column for each status code found, with the sum of each code in that column.
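If you want to see the dummy-encoding trick work without touching any index, here is a fully synthetic sketch using makeresults (all values below are made up for illustration):
| makeresults count=5
| streamstats count as n
| eval Status=case(n<=3,"200", n==4,"404", true(),"500")
| eval X_{Status}=1
| stats count as Total sum(X_*) as X_*
| rename X_* as *
This should return Total=5 with columns 200=3, 404=1 and 500=1.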
As I gather you can't get this working, this query should show dummy encoding in action:
index=_internal sourcetype=*access
| eval X_{status}=1
| stats count as Total sum(X_*) as X_* by source, user
| rename X_* as *
This would give an output of something like:
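(illustrative values only, showing the shape: one row per source/user pair, a Total column, then one column per status code found)
source                                   user    Total  200  404
/opt/splunk/var/log/splunk/splunkd.log   admin   40     36   4
/opt/splunk/var/log/splunk/metrics.log   splunk  12     12   0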