I want to search for the log event
"Closure request counts: startAssets: "
and find occurrences where startAssets is larger than 50.
How would I do that?
Something like:
Closure request counts: startAssets: 51
would maybe give a search similar to
"Closure request counts: startAssets: {num} AND num >=50"
perhaps?
What does that look like in SPL?
That's pretty simple, but you'll need to extract the number to do it. I like to use the rex command to do that, but there may be other ways.
index=foo "Closure request counts: startAssets: *"
| rex "startAssets: (?<startAssets>\d+)"
| where startAssets > 50
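If the comparison ever misbehaves because rex extracts startAssets as a string, a hedged variant forces the numeric conversion explicitly (tonumber is a standard eval function):
index=foo "Closure request counts: startAssets: *"
| rex "startAssets: (?<startAssets>\d+)"
| where tonumber(startAssets) > 50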
First Event
17:09:05:362 INFO com.a.b.App - Making a GET Request and req-id: [123456]
Second Event
17:09:06:480 INFO com.a.b.App - Output Status Code: 200 req-id:"123456"
I tried to use index="xyz" container="service-name" | transaction "req-id" startswith="Making a GET Request" endswith="Output Status Code" | table duration but it is also not working.
I want to calculate the duration between these two events for every request. I went over some solutions in Splunk and on Stack Overflow, but still can't get the proper result.
Try doing it with stats instead:
index=ndx sourcetype=srctp
| rex field=_raw "req\-id\D+(?<req_id>\d+)"
| rex field=_raw "(?<sequence>Making a GET Request)"
| rex field=_raw "(?<sequence>Output Status Code)"
| eval sequence=sequence.";"._time
| stats values(sequence) as sequence by req_id
| mvexpand sequence
| rex field=sequence "(?<sequence>[^;]+);(?<time>\d+)"
| eval time=strftime(time,"%c")
This will extract the "req-id" into a field named req_id, and the start and end of the sequence into a field named sequence.
Presuming the sample data you shared is correct, when you stats values(sequence) as sequence, it will put the "Making..." entry first and the "Output..." entry second.
Because values() sorts this way, when you mvexpand and then split the values()'d field into sequence and time, they'll be in the proper order.
If the sample data is incomplete, you may need to tweak the regexes that populate sequence.
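If the end goal is just a duration per request, a shorter untested sketch (assuming the same req-id pattern and that _time is set correctly on both events) can skip the sequence bookkeeping entirely, since range(_time) is simply max minus min:
index=ndx sourcetype=srctp ("Making a GET Request" OR "Output Status Code")
| rex field=_raw "req\-id\D+(?<req_id>\d+)"
| stats range(_time) as duration by req_id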
It seems you’re going with my previously suggested approach 😉
Now you have 2 possibilities
1. SPL
Below is the simplest query, invoking only one rex and assuming the _time field is correctly filled:
index=<your_index> source=<your_source>
("*Making a GET Request*" OR "*Output Status Code*")
| rex field=_raw "req\-id\D+(?<req_id>\d+)"
| stats max(_time) as end, min(_time) as start by req_id
| eval duration = end - start
| table req_id duration
Note that depending on the amount of data to scan, this one can be resource-consuming for your Splunk cluster.
2. Log the response time directly in the API (more efficient)
It seems you are working on an API. You should have the ability to get the response time of each call and trace it directly in your log.
Then you can exploit it easily in SPL without any calculation.
It is always preferable to persist data at index time vs. performing systematic calculation at search time.
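For example, if each completed call logged a field such as response_time_ms (an illustrative name, not something in your current logs), the search reduces to a plain aggregation; avg and perc95 are standard stats functions:
index=<your_index> source=<your_source> "response_time_ms"
| rex field=_raw "response_time_ms=(?<response_time_ms>\d+)"
| stats avg(response_time_ms) as avg_ms, perc95(response_time_ms) as p95_ms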
Small question about a Splunk query, please.
May I ask if there is a way to search for “a first log that got printed, but the second was not printed” please? Background: I have a very simple piece of Java logic as follows:
LOGGER.info("START/END compute something that might result in a bad exception for id START " + id);
invoke_method_which_can_fail(id);
LOGGER.info("START/END compute something that might result in a bad exception for id END " + id);
Which results in something like (snippet from a million):
START/END compute something that might result in a bad exception for id START 12345
START/END compute something that might result in a bad exception for id END 12345
START/END compute something that might result in a bad exception for id START 88888
START/END compute something that might result in a bad exception for id START 98765
START/END compute something that might result in a bad exception for id END 98765
As you can see, the id 88888 in my example got the start statement printed, but not the end statement, because something bad happened in the Java code. (The question is not about how to make the Java code reliable.)
May I ask if there is a Splunk query which can find me those ids please?
What I tried: So far, I am downloading the search result containing all the starts, then downloading the search result with all the ends. Once I have both, I run another offline script to find all the ids from the first search result that are not in the second...
I do not think this is "the smart thing to do" and was wondering if there is a smarter query which can give me the expected result directly in Splunk.
Thank you
You can try something along these lines (with rex and stats):
index=... "START/END compute something that might result in a bad exception for id"
| rex "(?<operation>(START|END))\s+(?<id>\d+)"
| stats count(eval(operation="START")) as start count(eval(operation="END")) as end by id
| where NOT start=end
I have not tested this SPL code
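One caveat: NOT start=end will also flag an id that somehow logged an END without a START. If you only want ids whose start has no matching end, a hedged variant of the last line is:
| where start > end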
I am using AWS Cloudwatch Insights and running a query like this:
fields @message, @timestamp
| filter strcontains(@message, "Something of interest happened")
| stats count() as interestCount by bin(10m) as tenMinuteTime
| stats max(interestCount) by datefloor(tenMinuteTime, 1d)
However, on the last line, I get the following error:
mismatched input 'stats' expecting {K_PARSE, K_SEARCH, K_FIELDS, K_DISPLAY, K_FILTER, K_SORT, K_ORDER, K_HEAD, K_LIMIT, K_TAIL}
It would seem to mean from this that I cannot take multiple layers of stat queries in Insights, and thus cannot take a statistic of a statistic. Is there a way around this?
You cannot currently use multiple stats commands, and from what I know there is no direct way around that at this time. You can, however, thicken up your single stats command and separate the aggregations by comma, like so:
fields @message, @timestamp
| filter strcontains(@message, "Something of interest happened")
| stats count() as interestCount,
    max(@timestamp) as lastInterestTime
    by bin(10m) as tenMinuteTime
You define the aggregation functions after stats, separated by commas, and then process those result fields.
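Since stats cannot be chained, one hedged workaround for "the daily maximum of 10-minute counts" is to sort the single stats output and read off the top rows; sort is a supported Insights command:
fields @message, @timestamp
| filter strcontains(@message, "Something of interest happened")
| stats count() as interestCount by bin(10m) as tenMinuteTime
| sort interestCount desc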
I want to find a string (driving factor) and, if found, only then look for another string with the same x-request-id and extract some details out of it.
x-request-id=12345 "InterestingField=7850373" [this one is subset of very specific request]
x-request-id=12345 "veryCommonField=56789" [this one is a superSet of all kind of requests]
What I've tried:
index=myindex "InterestingField" OR "veryCommonField"
| transaction x-request-id
But the problem with the above is that this query also joins the requests which have only veryCommonField in them.
I want to avoid join since it performs pretty poorly.
What I need:
list InterestingField, veryCommonField
Example:
Below represents the beginning of all kinds of requests. We get thousands of such requests in a day.
index=myIndex xrid=12345 "Request received for this. field1: 123 field2: test"
Out of all the above requests, the below category falls under 100.
index=myIndex xrid=12345 "I belong to blahBlah category. field3: 67583, field4: testing"
I don't want to search in a super-set of 1000k+ requests but only in the matching 100, because with an increased time span this search query will take very long.
If I'm understanding your use-case, the following may be helpful.
Using stats
index=myindex "InterestingField" OR "veryCommonField" | stats values(InterestingField), values(veryCommonField) by x-request-id
Using subsearch
index=myindex [ index=myindex InterestingField=* | fields x-request-id | format ]
Depending on the number of results that match InterestingField, you can also use map, https://docs.splunk.com/Documentation/Splunk/8.0.3/SearchReference/Map
index=myindex InterestingField="*" | map maxsearches=0 "search index=myindex x-request-id=$x-request-id$ | stats values(InterestingField), values(veryCommonField) by x-request-id"
If you provide more thorough example events, we can assist you further.
I am trying to query using the REST API on Splunk with the following:
curl -u "<user>":"<pass>" -k https://splunkserver.com:8089/services/search/jobs/export -d'search=search index%3d"<index_name>" sourcetype%3d"access_combined_wcookie" starttime%3d06/02/2013:0:0:0 endtime%3d06/10/2013:0:0:0 uri_path%3d"<uri1>" OR uri_path%3d"<uri2>" user!%3d"-" referer!%3d"-" | eval Time %3d request_time_length%2f1000000 | stats stdev%28Time%29 as stdev, mean%28Time%29 as mean, count%28uri_path%29 as count by uri_path'
However I do not get the computed mean and stdev, I only see count. How can I add the mean and stdev?
The query looks about right. I tried a similar query on my end and it seemed to give me all 3 aggregates. The only thing I can think of is to make sure you have events that match the search criteria. It could be your time boundaries; try expanding those, or maybe removing one or both of them, to see if you get any data for mean and stdev.
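For reference, a hedged variant of that call which avoids hand-encoding the SPL by using curl's --data-urlencode, and which passes the time window through the export endpoint's earliest_time/latest_time parameters (placeholders are illustrative):
curl -u "<user>":"<pass>" -k https://splunkserver.com:8089/services/search/jobs/export \
    --data-urlencode 'search=search index="<index_name>" sourcetype="access_combined_wcookie" (uri_path="<uri1>" OR uri_path="<uri2>") user!="-" referer!="-" | eval Time=request_time_length/1000000 | stats stdev(Time) as stdev, mean(Time) as mean, count(uri_path) as count by uri_path' \
    -d earliest_time=2013-06-02T00:00:00 \
    -d latest_time=2013-06-10T00:00:00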