How to get the list of values for dropdown in Splunk Dashboard?

I am making a dashboard and want to create a chart. To display the metrics, I must choose a cache from a dropdown. The problem is how to build the list of caches when I have the following log:
Message= [CACHE_NAME=<HERE IS THE NAME OF CACHE> method=GET <SOME_OTHER_STRINGS> found=true]
Desired result:

Get a list of caches by searching for them. Read the log, extract cache names from it, then remove duplicates.
index=foo "CACHE_NAME"
```Extract the cache name from the event```
| rex "CACHE_NAME=(?<cache_name>\S+)"
```Filter out repeated names```
| dedup cache_name
| fields cache_name
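For readers who want to sanity-check the extraction logic outside Splunk, here is a minimal Python sketch of what the rex and dedup steps do (the sample events and cache names are made up):

```python
import re

# Made-up sample events in the same shape as the question's log line
events = [
    "Message= [CACHE_NAME=user_cache method=GET found=true]",
    "Message= [CACHE_NAME=session_cache method=GET found=true]",
    "Message= [CACHE_NAME=user_cache method=GET found=false]",
]

# Equivalent of: | rex "CACHE_NAME=(?<cache_name>\S+)"
pattern = re.compile(r"CACHE_NAME=(\S+)")

# Equivalent of: | dedup cache_name (keeps the first occurrence, preserves order)
seen = set()
cache_names = []
for event in events:
    match = pattern.search(event)
    if match and match.group(1) not in seen:
        seen.add(match.group(1))
        cache_names.append(match.group(1))

print(cache_names)  # ['user_cache', 'session_cache']
```

In a Simple XML dashboard the SPL search above can then feed the dropdown's dynamic options, with fieldForLabel and fieldForValue both set to cache_name.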

Syntax highlighting of Splunk search results is missing when grouping events via Transactions command

I use List view in Splunk website.
Normally in the Splunk search results, the fields are highlighted with colours (red and green). Also, if one event has a white background then the next event has a dark background, which makes the results readable.
Here is the Screenshot where I blurred the data as it has company information:
But when I use the transaction command to group events together, there is no syntax highlighting available even though I am using List view. The alternating backgrounds still work, though: the first group of events is coloured together on a light background and the second group on a dark background.
Command:
application_name=appname | transaction startswith="This is the start of the transaction" endswith="This is the end of the transaction"
Screenshot:
One transaction taken from the search results (the grouped events):
{"cf_app_id":"uuid","cf_app_name":"app-name","deployment":"cf","event_type":"LogMessage","info_splunk_index":"splunk-index","ip":"ipaddr","message_type":"OUT","msg":"2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : This is the start of the transaction","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","timestamp":1671732690242714069}
{"cf_app_id":"uuid","cf_app_name":"app-name","deployment":"cf","event_type":"LogMessage","info_splunk_index":"splunk-index","ip":"ipaddr","message_type":"OUT","msg":"2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : app log text","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","timestamp":1671732690243292964}
{"cf_app_id":"uuid","cf_app_name":"app-name","deployment":"cf","event_type":"LogMessage","info_splunk_index":"splunk-index","ip":"ipaddr","message_type":"OUT","msg":"2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : another app log","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","timestamp":1671732690243306564}
{"cf_app_id":"uuid","cf_app_name":"app-name","deployment":"cf","event_type":"LogMessage","info_splunk_index":"splunk-index","ip":"ipaddr","message_type":"OUT","msg":"2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : {\"data\":{\"fields\":[{\"__typename\":\"name\",\"field\":\"value\",\"field2\":\"value2\",\"field3\":\"value 3\",\"field4\":\"value4\",\"field5\":\"value5\",\"field6\":\"value6\",\"field7\":\"value7\",\"field8\":null,\"field9\":\"value9\",\"field10\":null,\"field11\":111059.0,\"field12\":111059.0,\"field13\":null,\"field14\":\"value14\",\"field15\":\"2018-10-01\",\"field16\":null,\"field17\":false,\"field18\":{\"field19\":\"value19\",\"fieldl20\":\"value20\",\"field21\":2.6,\"field22\":\"2031-10-31\",\"field23\":\"2017-11-06\"},\"field24\":{\"field25\":\"\",\"field26\":\"\"},\"field27\":{\"field28\":{\"field29\":0.0,\"field30\":0.0,\"field31\":240.63,\"field32\":\"2022-12-31\",\"field33\":0.0,\"field34\":\"9999-10-31\"}},\"field35\":[{\"field36\":{\"field37\":\"value37\"}},{\"field38\":{\"field39\":\"value39\"}}],\"field40\":{\"__typename\":\"value40\",\"field41\":\"value41\",\"field42\":\"value 
42\",\"field43\":111059.0,\"field44\":\"2031-04-01\",\"field45\":65204.67,\"field46\":null,\"field47\":\"value47\",\"field48\":\"value48\",\"field49\":null,\"field50\":\"value50\",\"field51\":null,\"field52\":null}},{\"__typename\":\"value53\",\"field54\":\"value54\",\"field55\":\"value55\",\"field56\":\"value56\",\"field57\":\"value57\",\"field58\":\"value58\",\"field59\":\"9\",\"field60\":\"value60\",\"field61\":null,\"field62\":\"value62\",\"field63\":null,\"field64\":88841.0,\"field65\":38841.0,\"field66\":null,\"field67\":\"value67\",\"field68\":\"2018-10-01\",\"field69\":null,\"field70\":false,\"field71\":{\"field72\":\"value72\",\"field73\":\"value73\",\"field74\":2.6,\"field75\":\"2031-10-31\",\"field76\":\"2017-11-06\"},\"field77\":{\"field78\":\"\",\"field79\":\"\"},\"field80\":{\"field81\":{\"field82\":0.0,\"field83\":0.0,\"field84\":84.16,\"field85\":\"2022-12-31\",\"field86\":0.0,\"field87\":\"9999-10-31\"}},\"field88\":[{\"field89\":{\"field90\":\"value90\"}},{\"field91\":{\"field92\":\"value92\"}}],\"field93\":null},{\"__typename\":\"value94\",\"field95\":\"value95\",\"field96\":\"value96\",\"field97\":\"value97\",\"field98\":\"value98\",\"field99\":\"value99\",\"field100\":\"1\",\"field101\":\"value101\",\"field102\":null,\"field103\":\"value103\",\"field104\":\"359\",\"field105\":88025.0,\"field106\":79316.87,\"field107\":\"309\",\"field108\":\"value108\",\"field109\":\"2018-10-01\",\"field110\":\"2048-09-30\",\"field111\":false,\"field112\":{\"field113\":\"value113\",\"field114\":\"value114\",\"field115\":2.35,\"field116\":\"2031-10-31\",\"field117\":\"2017-11-06\"},\"field118\":{\"field119\":\"\",\"field120\":\"\"},\"field121\":{\"field122\":{\"field123\":341.58,\"field124\":0.0,\"field125\":155.33,\"field126\":\"2022-12-31\",\"field127\":186.25,\"field128\":\"2022-12-31\"}},\"field129\":[{\"field130\":{\"field131\":\"value131\"}},{\"field132\":{\"field133\":\"value133\"}}],\"field134\":null}]}}","origin":"rep","source_instance":"0","source_type":
"APP/PROC/WEB","timestamp":1671732690243306564}
{"cf_app_id":"uuid","cf_app_name":"app-name","deployment":"cf","event_type":"LogMessage","info_splunk_index":"splunk-index","ip":"ipaddr","message_type":"OUT","msg":"2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : This is the end of the transaction","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","timestamp":1671732690870483226}
So even though I am using the List view, it feels like I am seeing the Raw view.
Is there something I can change in the search query so that syntax highlighting is applied to the results when searching via the transaction command?
EDIT:
The initial issue is resolved, but that led to another annoyance.
The transaction command is used to group Splunk events. In my case, each Splunk event is JSON, so I converted all the Splunk events of the transaction into a JSON array and now syntax highlighting is applied. The problem is that I need to manually expand each JSON using the plus icon. Is there an "expand all" option?
Syntax highlighting is not standard for all Splunk events. It is, however, done for JSON events. The output of the transaction command is not valid JSON, so no syntax highlighting is done.
I want to preface this by saying this wouldn't be good practice, but regardless you could do the following:
| eval _raw=_raw+","
| transaction id
| eval _raw="["+rtrim(_raw,",")+"]"
where the transaction id line stands in for your own transaction command.
The reason for this behaviour is that the syntax highlighter simply checks whether _raw is valid JSON (or XML) and highlights it if so.
When events are grouped with the transaction command, _raw becomes the individual events appended onto one another. That does not produce valid JSON, so it won't be highlighted.
The eval commands manipulate the events so that, after the transaction, _raw is valid JSON: an array of the individual JSON events.
If you are not using JSON but XML instead, you may be able to use a very similar method, but I can't tell due to the blurred pictures; it looks like JSON, however.
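To see why the two eval commands fix the highlighting, here is a small Python check (the JSON content is made up) showing that concatenated JSON objects are invalid while comma-joined objects wrapped in brackets parse as a JSON array, which is exactly what appending a comma to each event and then wrapping with [ and ] achieves:

```python
import json

# Two standalone JSON events, made up for illustration
events = ['{"msg": "start of transaction"}', '{"msg": "end of transaction"}']

# What transaction produces: events appended onto one another -> NOT valid JSON
raw = "".join(events)
try:
    json.loads(raw)
    valid_before = True
except json.JSONDecodeError:
    valid_before = False

# What the two evals produce: comma-joined events wrapped in brackets -> a valid JSON array
wrapped = "[" + ",".join(events) + "]"
parsed = json.loads(wrapped)

print(valid_before, len(parsed))  # False 2
```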

How to format splunk graphs to show multiple lines (one line for each method)?

I am new to Splunk reports and I am trying to achieve the following:
I want to generate a graphical report of Splunk logs for API performance, with execution time on the x-axis and method names on the y-axis. I am trying to run the following query:
cs_dataowner_id="ICTO-31263" cs_stage = UAT
| search cs_component_id="icomply-gpat-api-buslogs"
| search Action=API_PERFORMANCE
| table Message Execution_Time
| sort by Execution_Time desc
The expected line graph should show one line per method (API) extending with time on the x-axis, so the number of lines should equal the number of APIs/methods called in that time range.
Current output: a single line for all the methods (here I have 2 APIs).
I tried all the formatting options but nothing worked.
Screenshot:
Instead of piped search commands, do it all on the first line:
cs_dataowner_id="ICTO-31263" cs_stage=UAT cs_component_id="icomply-gpat-api-buslogs" Action=API_PERFORMANCE
Instead of the sort and table commands, use chart:
| chart count(Message) as Messages over Execution_Time by Message
This command graphs the number of calls to each API with Execution_Time on the X-axis and separate lines for each API (Message).
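As an illustration of what chart produces, here is a small Python sketch (toy data, made-up method names) of counting messages per Execution_Time value with one series per Message; each series becomes one line in the chart:

```python
from collections import defaultdict

# Toy results: (Message, Execution_Time) pairs as the base search might return
rows = [
    ("getUser", 120), ("getUser", 120), ("getOrder", 120),
    ("getUser", 250), ("getOrder", 250), ("getOrder", 250),
]

# chart count(Message) over Execution_Time by Message:
# one row per Execution_Time, one column (line) per Message
chart = defaultdict(lambda: defaultdict(int))
for message, exec_time in rows:
    chart[exec_time][message] += 1

for exec_time in sorted(chart):
    print(exec_time, dict(chart[exec_time]))
```

Depending on the goal, plotting execution time as a value over time (for example, timechart avg(Execution_Time) by Message) may also be worth trying.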

How do I transform array in search or elsewhere in dashboard

I have a search that is working fine
index=event_db environment=prod release = 2020150015
| timechart count as Events
However, I'd like to modify this to search for any release in an array of releases. I'm aware of the "in" operator.
The catch is that the array of releases I've been provided ("Releases") is formatted slightly differently like so:
[ver2020.15.0015, ver2020.15.0016, ver2020.22.0019] // in general, many more than 3!
Is there a way to use the in operator and some mapping to get
release in
[2020150015, 2020150016, 2020220019] ?
Can this be put in the search?
This is part of a panel so if it's simpler I could have code elsewhere to convert [ver2020.15.0015, ver2020.15.0016, ver2020.22.0019] into [2020150015, 2020150016, 2020220019]
However, as mentioned I'm a newbie so my knowledge of where to put code to transform an array is limited :)
I have a fieldset section and a panel with a query in it.
The "Releases" array is populated in the fieldset section as so:
<input type="text" token="Releases">
<label>Release or Releases</label>
<default>*</default>
</input>
The user enters ver2020.15.0015 or perhaps ver2020.15.*.
I can't just have the user enter 2020150015 as the ver2020.15.0015 format is used elsewhere.
Perhaps there's a way to create new field Releases_Alt right after getting this?
Let me know of any other info I can provide. As I said, I'm new to Splunk so I'm still struggling with terminology.
Try this query. It uses a subsearch to build the IN argument. Subsearches in Splunk run before the main search and the output of the subsearch replaces the subsearch itself.
index=event_db environment=prod release IN (
[ | makeresults
| eval Releases=replace($Releases|s$, "[ver\.]+", "")
| return $Releases ] )
| timechart count as Events
The makeresults command is there because even subsearches have to start with a generating command. makeresults creates a "dummy" event that allows other commands to work.
The eval command does the work of converting release versions into the desired format. Note the use of |s with the Releases token. This construct ensures the contents of the token are enclosed in quotation marks, which is expected by the replace function.
Finally, the return command with $ returns the result of the eval without the field name. Without it, the subsearch would return Releases="2020150015, 2020150016, 2020220019", which wouldn't work.
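The conversion the eval performs can be checked outside Splunk; this Python sketch applies the same character-class substitution to a made-up token value:

```python
import re

# Token value as the user might enter it: a comma-separated list of releases
releases = "ver2020.15.0015, ver2020.15.0016, ver2020.22.0019"

# Same substitution as the SPL replace(): strip any run of 'v', 'e', 'r', or '.'
converted = re.sub(r"[ver.]+", "", releases)

print(converted)  # 2020150015, 2020150016, 2020220019
```

Note that the character class removes any run of the letters v, e, r and dots, which is why ver2020.15.0015 collapses to 2020150015; it would mangle release names that contain those letters elsewhere, so this relies on the fixed verYYYY.NN.NNNN format.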

How to link the events of a search used as an alert in Splunk

I have a query that I created that looks like this:
index="someindex" Level=Error | rex field=_raw "\"Exception\":\"(?<ExceptionType>.*?):"
| eval ExceptionType = if(isnull(ExceptionType), "Custom log",ExceptionType) | search ExceptionType="Custom log"
And I saved it as an alert that sends a message to Slack that looks like this:
Here's the problem. When I run this search normally I get the results like so:
And I can click on the "events" tab to see the individual events that are aggregated by the "stats" command.
However, when I click the link generated by the alert, I only get the aggregated results. I can't view the individual events. So my question is: is there any way to create a link that will allow to expand the events from which the results are aggregated?
Looks like the "eventstats" command was exactly what I needed. Unlike stats, eventstats adds the aggregate values as new fields on each event rather than replacing the events, so the individual events remain available.

Scrapy: Item discrepancy

Scenario: a page with multiple items, each consisting of a title, description, and image. What happens when one of the items is missing its title? How does Scrapy handle it? It seems that Scrapy blindly selects all titles with //div[@id='content']/ul/li/div[@id='title']/text().
The expected output is that that row has a missing title. But since this blindly selects all titles on the page without considering the item context, if the 5th item is missing its title, wouldn't it mistakenly use the 6th item's title instead?
title1 | description | image
.
.
title4 | description | image
title6 | description | image <--- it's supposed to be missing the title.
| description | image
Does scrapy have a way to deal with this problem?
A workaround I was thinking of would be to select the parent item element first and then look inside that item; if something is missing, don't show it.
There are a variety of ways you can handle this situation:
1) you can implement a pipeline that skips items that are not required
2) you can add a check in your extraction code to only yield/return an item that is required
You need to understand that Scrapy is a high-level crawling framework that also provides built-in support for data extraction; you can use any library for extraction you would like.
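To make the workaround from the question concrete: select each item element first, then query relative to it, so a missing title becomes None for that row instead of shifting later titles up. Here is a minimal sketch using only Python's standard library (the markup and class names are made up); in Scrapy itself the same idea is iterating over response.xpath("//li") and using relative paths like ./div[@id='title']/text() inside the loop:

```python
import xml.etree.ElementTree as ET

# Hypothetical page markup: the second item has no title element
html = """
<ul>
  <li><div class="title">title1</div><div class="desc">d1</div></li>
  <li><div class="desc">d2</div></li>
  <li><div class="title">title3</div><div class="desc">d3</div></li>
</ul>
"""

root = ET.fromstring(html)

# Select each item first, then query relative to that item:
# a missing title yields None for that row instead of borrowing the next title
items = []
for li in root.findall("li"):
    title = li.findtext("div[@class='title']")  # None if absent
    desc = li.findtext("div[@class='desc']")
    items.append({"title": title, "description": desc})

print(items)
```

With the absolute //...//text() approach from the question, the two title lists and three description lists would misalign; the per-item loop keeps each title paired with its own row.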