I have used this query, but I cannot get the timechart to show the CPU trend; it appears to only show the current CPU value. My objective is to show the trend for each computer (host):
Telemetry_system
| where hostname_s contains "computer-A"
| where TimeGenerated > ago(5m)
| summarize by hostname, callBackUrl, cpu_d
| summarize Aggregatedcpu = avg(cpu_d) by hostname, callBackUrl
| render timechart
If I understand your intention correctly, try this. A timechart needs a datetime column to plot against, so bin TimeGenerated and aggregate per bin:
Telemetry_system
| where hostname_s contains "computer-A"
| where TimeGenerated > ago(5m)
| summarize Aggregatedcpu = avg(cpu_d) by strcat(hostname, "_", callBackUrl), bin(TimeGenerated, 1s)
| render timechart
Related
I have a table generated by chart that lists the results of a compliance scan
These results are typically Pass, Fail, and Error - but sometimes there is "Unknown" as a response
I want to show the percentage of each (Pass, Fail, Error, Unknown), so I do the following:
| fillnull value=0 Pass Fail Error Unknown
| eval _total=Pass+Fail+Error+Unknown
<calculate percentages for each field>
<append "%" to each value (Pass, Fail, Error, Unknown)>
What I want to do is eliminate a "totally" empty column, and only display it if it actually exists somewhere in the source data (not merely because of the fillnull command).
Is this possible?
I was thinking something like this, but cannot figure out the second step:
| eventstats max(Unknown) as _unk
| <if _unk is 0, drop the field>
edit
This could just as easily be reworded to:
if every entry for a given field is identical, remove it
Logically, this would look something like:
if(mvcount(values(fieldname))<2), fields - fieldname
Except, of course, that's not valid SPL
Could you try this logic after the chart:
``` fill empty cells with null values ```
| fillnull value=null()
``` rotate 90° twice, dropping empty/null columns ```
| transpose 0 include_empty=false | transpose 0 header_field=column | fields - column
[edit:] It works when I do the following, but I am not sure it is easy to make it work under all conditions:
| stats count | eval keep=split("1 2 3 4 5"," ") | mvexpand keep
| table keep nokeep
| fillnull value=null()
| transpose 0 include_empty=false | transpose 0 header_field=column | fields - column
[edit2:] And if you need to add more null() values, it could be done like this:
| stats count | eval keep=split("1 2 3 4 5"," "), nokeep=0 | mvexpand keep
| table keep nokeep
| foreach nokeep [ eval nokeep=if(nokeep==0,null(),nokeep) ]
| transpose 0 include_empty=false | transpose 0 header_field=column | fields - column
When I search "SearchText1" then lets say there are 20 records.
When I search "SearchText2" then there are 10 results
Then I need to display a single value "2" in the dashboard
How do I formulate the Splunk query?
I tried below query where the numerator count is evaluated correctly but something is wrong with the denominator count related part:
index=something "searchText1"
| stats count as NumeratorCount
| eval numerator=NumeratorCount
| append [ | search index=something "searchText2"
| stats count as DenominatorCount
| eval denominator=DenominatorCount ]
| eval result=round(if(denominator==0,0,numerator/denominator), 2)
| table result
When you remove the table command, you'll see the numerator and denominator are in separate results. That means the eval command computing 'result' is dividing numerator by NULL and NULL by denominator.
The fix is to combine the two rows using appendcols as in this example.
index=_internal "service_health_monitor"
| stats count as NumeratorCount
| eval numerator=NumeratorCount
| appendcols [ | search index=_internal "service_health_metrics_monitor"
| stats count as DenominatorCount
| eval denominator=DenominatorCount ]
| eval result=round(if(denominator==0,0,numerator/denominator), 2)
| table result
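If both strings live in the same index, you could also avoid the subsearch entirely with a single search. A sketch (reusing index=something from the question):
index=something ("searchText1" OR "searchText2")
| stats sum(eval(if(searchmatch("searchText1"), 1, 0))) as numerator,
        sum(eval(if(searchmatch("searchText2"), 1, 0))) as denominator
| eval result=round(if(denominator==0, 0, numerator/denominator), 2)
| table result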
I am new to Splunk dashboards. I need some help with this kind of data:
2020-09-22 11:14:33.328+0100 org{abc} INFO 3492 --- [hTaskExecutor-1] c.j.a.i.p.v.b.l.ReadFileStepListener : [] read-feed-file-step ended with status exitCode=COMPLETED;exitDescription= with compositeReadCount 1 and other count status as: BatchStatus(readCount=198, multiEntityAccountCount=0, readMultiAccountEntityAdjustment=0, accountFilterSkipCount=7, broadRidgeFilterSkipCount=189, writeCount=2, taskCreationCount=4)
I wanted to have statistics in a dashboard showing all the integer values in the above log.
Edit 1:
I tried this, but it is not working:
index=abc xyz| rex field=string .*readCount=(?P<readCount>\d+) | table readCount
See if this run-anywhere example helps.
| makeresults
| eval _raw="2020-09-22 11:14:33.328+0100 org{abc} INFO 3492 --- [hTaskExecutor-1] c.j.a.i.p.v.b.l.ReadFileStepListener : [] read-feed-file-step ended with status exitCode=COMPLETED;exitDescription= with compositeReadCount 1 and other count status as: BatchStatus(readCount=198, multiEntityAccountCount=0, readMultiAccountEntityAdjustment=0, accountFilterSkipCount=7, broadRidgeFilterSkipCount=189, writeCount=2, taskCreationCount=4)"
`comment("Everything above just sets up test data")`
| extract pairdelim="," kvdelim="="
| timechart span=1h max(*Count) as *Count
I solved this using:
index=xyz
| rex ".*fileName=\s*(?P<fileName>\S+)"
| rex ".*compositeReadCount=(?P<compositeReadCount>\d+)"
| rex ".*readCount=(?P<readCount>\d+)"
| rex ".*multiEntityAccountCount=(?P<multiEntityAccountCount>\d+)"
| rex ".*readMultiAccountEntityAdjustment=(?P<readMultiAccountEntityAdjustment>\d+)"
| rex ".*accountFilterSkipCount=(?P<accountFilterSkipCount>\d+)"
| rex ".*broadRidgeFilterSkipCount=(?P<broadRidgeFilterSkipCount>\d+)"
| rex ".*writeCount=(?P<writeCount>\d+)"
| rex ".*taskCreationCount=(?P<taskCreationCount>\d+)"
| rex ".*status=\s*(?P<status>\S+)"
| table _time fileName compositeReadCount readCount multiEntityAccountCount readMultiAccountEntityAdjustment accountFilterSkipCount broadRidgeFilterSkipCount writeCount taskCreationCount status
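If the BatchStatus fields always appear in that order (an assumption based on the sample event), the separate count extractions could also be collapsed into a single rex call, something like:
| rex "readCount=(?P<readCount>\d+), multiEntityAccountCount=(?P<multiEntityAccountCount>\d+), readMultiAccountEntityAdjustment=(?P<readMultiAccountEntityAdjustment>\d+), accountFilterSkipCount=(?P<accountFilterSkipCount>\d+), broadRidgeFilterSkipCount=(?P<broadRidgeFilterSkipCount>\d+), writeCount=(?P<writeCount>\d+), taskCreationCount=(?P<taskCreationCount>\d+)"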
I am new to Splunk dashboard development; so far I have been creating KPIs using just 'single value' panels.
I have three KPIs with results 600, 250, and 150.
KPI 1 search expression - Result is 600 (example)
index=indexname kubernetes.container_name=tpt
MESSAGE = "Code request"
| spath output=message path=MESSAGE
| table _time message
| stats count as count1
KPI 2 search expression - Result is 250 (example)
index=indexname kubernetes.container_name=rsv
MESSAGE = "pin in email"
| spath output=message path=MESSAGE
| table _time message
| stats count as count2
KPI 3 search expression - Result is 150 (example)
index=indexname kubernetes.container_name=rsv
MESSAGE = "pin in sms"
| spath output=message path=MESSAGE
| table _time message
| stats count as count3
I have shown the above KPIs as numbers in the dashboard. However, I would like to show a pie chart with a 60%, 25%, and 15% share for the above numbers. What would be the search expression to create this chart?
You could achieve it by making it a single query, extracting the fields, and appending the results using Splunk's append. Below are the queries:
index=indexname kubernetes.container_name=tpt MESSAGE = "*Code request*"
| spath output=msg path=MESSAGE
| eval counts=case(msg=="Code request", "count1", msg=="pin in email", "count2", msg=="pin in sms", "count3")
| stats count by counts
| append [search index=indexname kubernetes.container_name=rsv MESSAGE = "*pin in email*"
| spath output=msg path=MESSAGE
| eval counts=case(msg=="Code request", "count1", msg=="pin in email", "count2", msg=="pin in sms", "count3")
| stats count by counts
| append [search index=indexname kubernetes.container_name=rsv MESSAGE = "*pin in sms*"
| spath output=msg path=MESSAGE
| eval counts=case(msg=="Code request", "count1", msg=="pin in email", "count2", msg=="pin in sms", "count3")
| stats count by counts ]]
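Alternatively, if all three messages are reachable from one search (the index and container names below come from the question), the appends can be dropped entirely. A sketch:
index=indexname (kubernetes.container_name=tpt OR kubernetes.container_name=rsv)
    ("*Code request*" OR "*pin in email*" OR "*pin in sms*")
| spath output=msg path=MESSAGE
| eval counts=case(msg=="Code request", "count1", msg=="pin in email", "count2", msg=="pin in sms", "count3")
| stats count by counts
A pie chart rendered from these three rows shows the 60%/25%/15% shares automatically.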
NOTE: I am using a graph database (OrientDB to be specific). This gives me the freedom to write a server-side function in JavaScript or Groovy rather than limit myself to SQL for this issue.
NOTE 2: Since this is a graph database, the arrows below simply describe the flow of data. I do not literally need the arrows to be returned in the query. The arrows represent relationships.
I have data that is represented in a time-flow manner; i.e. EventC occurs after EventB, which occurs after EventA, etc. This data is coming from multiple sources, so it is not completely linear. It needs to be consolidated, which is where I'm having the issue.
Currently the data looks something like this:
# | event | next
--------------------------
12:0 | EventA | 12:1
12:1 | EventB | 12:2
12:2 | EventC |
12:3 | EventA | 12:4
12:4 | EventD |
Where "next" is the out() edge to the event that comes next in the time-flow. On a graph this comes out to look like:
EventA-->EventB-->EventC
EventA-->EventD
Since this data needs to be consolidated, I need to merge duplicate events but preserve their edges. In other words, I need a select query that will result in:
        -->EventB-->EventC
EventA--|
        -->EventD
In this example, since EventB and EventD both occurred after EventA (just at different times), the select query will show two branches off EventA as opposed to two separate time-flows.
EDIT #2
If an additional set of data were to be added to the data above, with EventB->EventE, the resulting data/graph would look like:
# | event | next
--------------------------
12:0 | EventA | 12:1
12:1 | EventB | 12:2
12:2 | EventC |
12:3 | EventA | 12:4
12:4 | EventD |
12:5 | EventB | 12:6
12:6 | EventE |
EventA-->EventB-->EventC
EventA-->EventD
EventB-->EventE
I need a query to produce a tree like:
                   -->EventC
        -->EventB--|
        |          -->EventE
EventA--|
        -->EventD
EDIT #3 and #4
Here is the data with edges shown, as opposed to the "next" column above. I also added a couple of additional columns here to hopefully clear up any confusion about the data:
# | event | ip_address | timestamp | in | out |
----------------------------------------------------------------------------
12:0 | EventA | 123.156.189.18 | 2015-04-17 12:48:01 | | 13:0 |
12:1 | EventB | 123.156.189.18 | 2015-04-17 12:48:32 | 13:0 | 13:1 |
12:2 | EventC | 123.156.189.18 | 2015-04-17 12:48:49 | 13:1 | |
12:3 | EventA | 103.145.187.22 | 2015-04-17 14:03:08 | | 13:2 |
12:4 | EventD | 103.145.187.22 | 2015-04-17 14:05:23 | 13:2 | |
12:5 | EventB | 96.109.199.184 | 2015-04-17 21:53:00 | | 13:3 |
12:6 | EventE | 96.109.199.184 | 2015-04-17 21:53:07 | 13:3 | |
The data is saved like this to preserve each individual event and the flow of a session (labeled by the ip address).
TL;DR
Got lots of events, some duplicates, and need them all organized into one neat time-flow graph.
Holy cow.
After wrestling with this for over a week, I think I FINALLY have a working function. It isn't optimized for performance (oh, the loops!), but it gets the job done for the time being while I work on performance. The resulting OrientDB server-side function (written in JavaScript):
// Clear previous runs
db.command("truncate class tmp_Then");
db.command("truncate class tmp_Events");
// Get all distinct events
var distinctEvents = db.query("select from Events group by event");
// Send 404 if null, otherwise proceed
if (distinctEvents == null) {
    response.send(404, "Events not found", "text/plain", "Error: events not found");
} else {
    var edges = [];
    // Loop through all distinct events
    distinctEvents.forEach(function(distinctEvent) {
        var rid = distinctEvent.field("@rid");
        var eventType = distinctEvent.field("event");
        // The main query that finds all *direct* descendants of the distinct event
        var result = db.query("select from (traverse * from (select from Events where event = ?) where $depth <= 2) where @class = 'Events' and $depth > 1 and @rid in (select from Events group by event)", [eventType]);
        // Save the distinct event in a temp table to create temp edges
        db.command("create vertex tmp_Events set rid = ?, event = ?", [rid, eventType]);
        edges.push(result);
    });
    // The edges array defines which edges should exist for a given event
    edges.forEach(function(edge, index) {
        edge.forEach(function(e) {
            // Create the temp edge that corresponds to its distinct event
            db.command("create edge tmp_Then from (select from tmp_Events where rid = " + distinctEvents[index].field("@rid") + ") to (select from tmp_Events where rid = " + e.field("@rid") + ")");
        });
    });
    var result = db.query("select from tmp_Events");
    return result;
}
Takeaways:
Temp tables appeared to be necessary. I tried to do this without temp tables (classes), but I'm not sure it could be done. I needed to mock edges that didn't exist in the raw data.
Traverse was very helpful in writing the main query. Traversing through an event to find its direct, unique descendents was fairly simple.
Having the ability to write stored procs in JavaScript is freaking awesome. This would have been a nightmare in SQL.
omfg, loops. I plan to optimize this and keep making it better, so hopefully other people can find some use for it.