I am trying to write a Splunk SPL query that will show me the most popular search terms that users are looking for in one of my web apps. I already have the logs in Splunk, but I am having a hard time extracting the search parameter from the event. The event contains the full SQL select statement, which looks like the query below:
select result from table where search_term = 'searched for this text'
How can I have something like this:
index=my_app search_term | top result
How do I actually capture the search term?
Thank you
You can use rex to extract the search term. Something like this:
index=my_app | rex "search_term = '(?<search_term>[^']+)"
If you want individual words then use the split function followed by mvexpand to make each word a separate event.
... | eval words=split(search_term, " ") | mvexpand words
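Putting it together, a minimal sketch (assuming the SQL statement appears in the raw event exactly as shown above; adjust the index and field names to your data):
index=my_app
| rex "search_term = '(?<search_term>[^']+)'"
| top limit=20 search_term
Or, to rank individual words instead of whole phrases:
index=my_app
| rex "search_term = '(?<search_term>[^']+)'"
| eval words=split(search_term, " ")
| mvexpand words
| top limit=20 words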
I am creating a dashboard for our service, and I want to create metrics for URL requests.
Let's say I have a URL like this one:
/api/v1/users/{userId}/settings
And I have following query in Splunk
url=*/api/v1/users/*/settings
| stats avg(timeTaken) as avg_latency, p99(timeTaken) as "p99(ms)", perc75(timeTaken) as "p75(ms)", count as total_requests, count(eval(responseStatus=500)) as failed_requests by url
| eval "success_rate"=round((total_requests - failed_requests)/total_requests*100,2)
| eval avg = round(avg)
| sort success_rate
All I want is a table with one common URL showing all the metrics. Instead, I get a table listing every URL with its different parameters.
You want to create a field which is the URL minus the userId part, so that the stats are grouped by which endpoint is called.
You can do this by using split(url,"/") to turn the URL into a multivalue field, and then take the userId out in one of two ways, depending on your URLs (see the sketch below):
mvfilter: e.g. mvfilter(NOT match(url_parts, "^\d+$")) to drop the all-numeric userId segment
Or create a new multivalue field with the userId removed by its index in the multivalue field, as described in: Add/Edit/Delete mvfield
Instead of removing it, you could also choose to replace the userId with "{userId}", so long as you do the same for all URLs.
Then you can rejoin the URL using mvjoin(url,"/")
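A minimal sketch of that approach, assuming the userId is the only all-numeric segment in the path (keep the rest of your stats/eval lines as in your original query):
index=my_app url="*/api/v1/users/*/settings"
| eval url_parts=split(url, "/")
| eval url_parts=mvfilter(NOT match(url_parts, "^\d+$"))
| eval url=mvjoin(url_parts, "/")
| stats avg(timeTaken) as avg_latency, count as total_requests by url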
I hope I understood your question correctly and this helps you!
You could try doing a replace() on your URL field with eval before calling stats:
| eval url=replace(url,"\/\d+\/settings","/settings")
If it turns out the userid is important to hold onto, pull it into its own field prior to running replace():
| rex field=url "\/(?<userid>\d+)\/settings"
Edit (expansion for a comment):
For multiple possible endings of your URL, try something like this:
index=ndx sourcetype=srctp url IN("*/api/v1/users/*/settings","*/api/v1/users/*/logout","*/api/v1/users/*/profile")
| rex field=url "\/(?<url_type>\w+)$"
| eval url=replace(url,"\/\d+\/\w+$","")
| stats avg(timeTaken) as avg_latency, p99(timeTaken) as "p99(ms)", perc75(timeTaken) as "p75(ms)", count as total_requests, count(eval(responseStatus=500)) as failed_requests by url url_type
| eval "success_rate"=round((total_requests - failed_requests)/total_requests*100,2)
| eval avg = round(avg)
| sort success_rate
This will extract the "type" (logout, profile, settings) into a new url_type field, then clean up the URL by removing everything from the userid to the end.
In Splunk, I need to get the count of events where the msg field value below matches factType=COMMERCIAL and has filters.
Using a basic Splunk query with wildcards does not work efficiently. Could you please assist?
app_name="ABC" cf_space_name=prod msg="*/facts?factType=COMMERCIAL&sourceSystem=ADMIN&sourceOwner=ABC&filters*"
msg: abc.asia - [2021-08-23T00:27:08.152+0000] "GET
/facts?factType=COMMERCIAL&sourceSystem=ADMIN&sourceOwner=ABC&filters=%257B%2522stringMatchFilters%2522:%255B%257B%2522key%2522:%2522BFEESCE((json_data-%253E%253E'isNotSearchable')::boolean,%2520false)%2522,%2522value%2522:%2522false%2522,%2522operator%2522:%2522EQ%2522%257D%255D,%2522multiStringMatchFilters%2522:%255B%257B%2522key%2522:%2522json_data-%253E%253E'id'%2522,%2522values%2522:%255B%25224970111%2522%255D%257D%255D,%2522containmentFilters%2522:%255B%255D,%2522nestedMultiStringMatchFilter%2522:%255B%255D,%2522nestedStringMatchFilters%2522:%255B%255D%257D&sorts=%257B%2522sortOrders%2522:%255B%257B%2522key%2522:%2522id%2522,%2522order%2522:%2522DESC%2522%257D%255D%257D&pagination=null
Try this:
index=ndx sourcetype=srctp msg=*
| rex field=msg "factType=(?<facttype>\w+).(?<params>.+)"
| stats count by facttype params
| fields - count
| search facttype="commercial"
The rex will extract the facttype and any following parameters (note - if the URL is submitted with the arguments in a different order, you'll need to adjust the regular expression)
Then use a | stats count by to bin them together
Lastly, search only where there is both a facttype="commercial" and the URL has additional parameters
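If all you need is the total number of matching events, a minimal variant of the above (assuming the same rex extraction and the same placeholder index/sourcetype) would be:
index=ndx sourcetype=srctp msg=*
| rex field=msg "factType=(?<facttype>\w+).(?<params>.+)"
| search facttype="COMMERCIAL" params="*filters=*"
| stats count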
I want to perform a search where I need to use a static search string + input from a csv file with usernames:
Search query:
index=someindex host=host*p* "STATIC_SEARCH_STRING"
Values from users.csv, where the list is like this. Please note that User/UserList is NOT a field in my Splunk data:
**UserList**
User1
User2
User3
.
.
UserN
I have tried multiple queries, one of them being:
| inputlookup users.csv | join [search index=someindex host=host*p* "STATIC_SEARCH_STRING"] | lookup users.csv UserList OUTPUT UserList as User| stats count by User
The above one just outputs the list of users with count as '1' - which I assume it is getting from the table itself.
When I try searching events for a single user, like:
index=someindex host=host*p* "User1" "STATIC_SEARCH_STRING", I get hundreds of events for that user.
Can someone please help me with this?
Sorry if this is a noob question, I have been trying to learn splunk in order to reduce my workload and am stuck here.
Thanks in advance!
index=someindex host=host*p* "STATIC_SEARCH_STRING" [ | inputlookup users.csv | fields UserList | rename UserList as query]
What is happening here is that there is a sub-search, which does an inputlookup on the users.csv file. We then use fields to ensure there is only a single field (UserList) in the data. We then rename that field to query. This is a special field in sub-searches; when the sub-search returns the field query, it is expanded out into the expression (field_value_1) OR (field_value_2) OR ....
This expression is then appended to the original search string, so the final search that Splunk executes is index=someindex host=host*p* "STATIC_SEARCH_STRING" ("alice") OR ("bob") OR ("charlie")
This approach is outlined at https://docs.splunk.com/Documentation/Splunk/8.0.3/Search/Changetheformatofsubsearchresults
You can also look at the Splunk format command, https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Format if you need to alter the sub-search's expression format, for example, adding * around each returned expression.
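For example, if the usernames only appear as substrings inside larger tokens in the raw events, one way to add wildcards is to build the query value yourself inside the subsearch (a sketch, assuming the same users.csv):
index=someindex host=host*p* "STATIC_SEARCH_STRING"
    [ | inputlookup users.csv
      | eval query="*" . UserList . "*"
      | fields query ]
Each row then expands to ( *User1* ) OR ( *User2* ) OR ... in the outer search.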
I think you're doing the search inside out
What I think you may want is the following:
index=ndx sourcetype=srctp host=host*p* User=*
| search
[| inputlookup users.csv ]
| stats count by User
If I understand your question correctly, you want to use the values in your lookup as a filter on the data (ie, only where User is in that list)
If that is the case, the above will do just that
If you need to make the fieldnames match because the lookup table has a different name, change the subsearch to the following:
[| inputlookup users.csv
| rename lookup_field_name as User ]
I am sending some data to Splunk which looks like:
"Start|timestamp:1552607877702|type:counter|metricName:cache|count:34488378|End"
And then extracting the fields using a regex:
search "attrs.name"="service" | regex (Start)(.*)(End) | extract pairdelim="\"{|}" kvdelim=":"
After extraction, I can see the fields (type, metricName, count) under "INTERESTING FIELDS". How do I go about using these fields in a dashboard?
Thanks
search "attrs.name"="service" | regex (Start)(.*)(End) | extract pairdelim="\"{|}" kvdelim=":" | stats count by metricName
Or
search "attrs.name"="service" | regex (Start)(.*)(End) | extract pairdelim="\"{|}" kvdelim=":" | stats count by type
Or
search "attrs.name"="service" | regex (Start)(.*)(End) | extract pairdelim="\"{|}" kvdelim=":" | table type, metricName, count
should all give you a table, which can also be represented as a visualization. You can save any of these, or the original events, as a dashboard panel.
If you see a field listed in either the "Selected fields" or "Interesting fields" list then that means Splunk has extracted them and made them available for use. Use them by mentioning them by name in an SPL command such as table type, metricName, count or stats max(count) by metricName. Once you have the fields the rest is up to your imagination (and the rules of SPL).
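For instance, a dashboard panel that trends the extracted counter over time might look like this (a sketch; it assumes the extracted count value is numeric once converted with tonumber, and the 5-minute span is arbitrary):
search "attrs.name"="service" | regex (Start)(.*)(End) | extract pairdelim="\"{|}" kvdelim=":"
| eval count=tonumber(count)
| timechart span=5m max(count) by metricName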
I have 3 fields in my Splunk results: message, id and docId.
I need to group the results by id and docId for events that have specific messages.
message="successfully added" id=1234 docId =1345
message="removed someUniqueId" id=1234 docId =1345
I have to group the results by both ids for events that have the specific messages.
search query | rex "message=(?<message[\S\s]*>)" | where message="successfully added"
which gives results for the first search. When I tried the second search query, it did not give results because of the someUniqueId:
search query | rex "message=(?<message[\S\s]*>)" | where match(message, "removed *")
Could you please help me filter the results which have the 2 messages and group by id and docId?
The match function expects a regular expression, not a pattern, as the second argument. Try search query | rex "message=(?<message>[\S\s]*)" | where match(message, "removed .*").
BTW, the regex strings in the rex commands are invalid, but that may be a typing error in the question.
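To then keep only those two messages and group by the ids, a minimal sketch (assuming the raw events look like the two samples above; the rex patterns here are illustrative):
search query
| rex "message=\"(?<message>[^\"]+)\""
| rex "\bid=(?<id>\d+)\s+docId\s*=\s*(?<docId>\d+)"
| where message="successfully added" OR match(message, "^removed ")
| stats count, values(message) as messages by id, docId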