Splunk conditional search

I want to do this.
if scope == 'request':
    search request_type=*
elif scope == 'site':
    search request_type=* site=*
elif scope == 'zone':
    search request_type=* site=* zone=*
elif scope == 'cluster':
    search request_type=* site=* zone=* cluster=*
And I just can't make it happen. Why is this so hard? I tried generating a search string. I tried a multisearch. I don't want a chart per scope type; that is ugly. I can't do something like this:
eval search_string="request_type=* site=* zone=* cluster=*" | search $search_string$
I also tried a conditional multi-search. I get no filtering from that.
| multisearch
[search $request_type_token$ | where "$scope_token$" == "request_type" ]
[search $request_type_token$ $site_token$ | where "$scope_token$" == "site"]
[search $request_type_token$ $site_token$ $zone_token$ | where "$scope_token$" == "zone"]
[search scope=$scope_token$ $request_type_token$ $site_token$ $zone_token$ $cluster_token$ | where "$scope_token$" == "cluster"]

multisearch is not the right approach as it will run all 4 searches simultaneously.
You should be able to build the search string in a subsearch, something like this:
index=foo request_type=* [| makeresults
    | eval search=case("$token$"=="site", "site=*",
        "$token$"=="zone", "site=* zone=*",
        "$token$"=="cluster", "site=* zone=* cluster=*",
        1==1, "")
    | fields search]
The subsearch evaluates the token and sets the search string based on the selected value. The 1==1 case catches any unexpected values.
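For example, with $token$ set to zone, the subsearch returns search="site=* zone=*" and the outer search expands to index=foo request_type=* site=* zone=*.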

Since this is taking place on a dashboard (else you wouldn't have tokens), you may be best off building the possible searches into separate panels and displaying only the chosen one by using the depends="$token$" attribute on each panel, setting the controlling tokens with conditional <change> handlers when a dropdown item is chosen:
https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML
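A minimal Simple XML sketch of that pattern (the index, token names, and panel contents here are hypothetical, and only two scopes are shown):

<input type="dropdown" token="scope_token">
  <label>Scope</label>
  <choice value="request">request</choice>
  <choice value="site">site</choice>
  <change>
    <condition value="request">
      <set token="show_request">true</set>
      <unset token="show_site"></unset>
    </condition>
    <condition value="site">
      <set token="show_site">true</set>
      <unset token="show_request"></unset>
    </condition>
  </change>
</input>
<row>
  <panel depends="$show_request$">
    <table><search><query>index=foo request_type=*</query></search></table>
  </panel>
  <panel depends="$show_site$">
    <table><search><query>index=foo request_type=* site=*</query></search></table>
  </panel>
</row>

Each <change> handler sets the token its panel depends on and unsets the others, so exactly one panel renders per selection.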

Related

How to filter a date field with a Swift Vapor/Fluent query

To avoid multiple inserts of the same person in a database, I wrote the following function:
func anzahlDoubletten(_ req: Request, nname: String, vname: String, gebTag: Date) async throws -> Int {
    try await Teilnehmer.query(on: req.db)
        .filter(\.$nname == nname)
        .filter(\.$vname == vname)
        .filter(\.$gebTag == gebTag)
        .count()
}
The function always returns 0, even if there are multiple records with the same surname, first name, and birthday in the database.
Here is the resulting SQL query:
[ DEBUG ] SELECT COUNT("teilnehmer"."id") AS "aggregate" FROM "teilnehmer" WHERE "teilnehmer"."nname" = $1 AND "teilnehmer"."vname" = $2 AND "teilnehmer"."geburtstag" = $3 ["neumann", "alfred e.", 1999-09-09 00:00:00 +0000] [database-id: psql, request-id: 1AC70C41-EADE-43C2-A12A-99C19462EDE3] (FluentPostgresDriver/FluentPostgresDatabase.swift:29)
[ INFO ] anzahlDoubletten=0 [request-id: 1AC70C41-EADE-43C2-A12A-99C19462EDE3] (App/Controllers/TeilnehmerController.swift:49)
If I query directly, I obtain:
lwm=# select nname, vname, geburtstag from teilnehmer;
nname | vname | geburtstag
---------+-----------+------------
neumann | alfred e. | 1999-09-09
neumann | alfred e. | 1999-09-09
neumann | alfred e. | 1999-09-09
neumann | alfred e. | 1999-09-09
So count() should return 4, not 0:
lwm=# select count(*) from teilnehmer where nname = 'neumann' and vname = 'alfred e.' and geburtstag = '1999-09-09';
count
-------
4
My DateFormatter is defined like so:
let dateFormatter = ISO8601DateFormatter()
dateFormatter.formatOptions = [.withFullDate, .withDashSeparatorInDate]
And finally the attribute "birthday" in my model:
...
@Field(key: "geburtstag")
var gebTag: Date
...
I inserted the 4 alfreds into my database using the model and Fluent, passing the birthday "1999-09-09" as a String, and Fluent inserted all records correctly.
But .filter(\.$gebTag == gebTag) seems to always return 'false'.
Is it at all possible to use .filter() with data types other than String?
And if so, what am I doing wrong?
Many thanks for your help
Michael
The problem you've hit is that you're storing only dates whereas you're filtering on dates with times. Unfortunately there's no native way to store just a date. However there are a few options.
The easiest way is to change the date field to a String and then use your date formatter (make sure you remove the time part) to convert the query option to a String.
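A minimal sketch of that approach, assuming the model's gebTag field is redeclared as a String (a hypothetical change to the model):

// Hypothetical: the model now declares @Field(key: "geburtstag") var gebTag: String
let dateFormatter = ISO8601DateFormatter()
dateFormatter.formatOptions = [.withFullDate, .withDashSeparatorInDate]  // date only, no time part
let gebTagString = dateFormatter.string(from: gebTag)                    // e.g. "1999-09-09"
let anzahl = try await Teilnehmer.query(on: req.db)
    .filter(\.$gebTag == gebTagString)  // plain string comparison, no time component involved
    .count()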
I am guessing slightly here, but I suspect that your table was not created by a Migration? If it had been, your geburtstag field would include a time component as this is the default and you would have spotted the problem quickly.
In any event, the filter is actually filtering on the time component of gebTag as well as the date. This is why it is returning zero.
I suggest converting the geburtstag to a type that includes the time and ensuring that the time component is set to 0:00:00 when you store it. You can reset the time component to 'midnight' using something like this:
extension Date {
    var midnight: Date {
        return Calendar.current.date(bySettingHour: 0, minute: 0, second: 0, of: self)!
    }
}
Then change your filter to:
.filter(\.$gebTag == gebTag.midnight)
Alternatively, just use Calendar's startOfDay(for:) method:
.filter(\.$gebTag == Calendar.current.startOfDay(for: gebTag))
I think this is the most straightforward way of doing it.
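Applied to the original function, that would look like this (a sketch, assuming the stored geburtstag values themselves sit at midnight, as your inserts suggest):

func anzahlDoubletten(_ req: Request, nname: String, vname: String, gebTag: Date) async throws -> Int {
    try await Teilnehmer.query(on: req.db)
        .filter(\.$nname == nname)
        .filter(\.$vname == vname)
        .filter(\.$gebTag == Calendar.current.startOfDay(for: gebTag))  // normalize to 00:00:00 before comparing
        .count()
}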

Need help forming a Splunk query that takes a sessionId from a particular set of logs and uses it to form the next query

I need to form a Splunk query to find a particular sessionId for which log a is available but log b is not. Both are part of the same transaction, but the code is breaking somewhere in between.
LOGGER.info("Log a:: setting some details in session");
Response response = handler.transactionMethod(token); //throws some exception
LOGGER.info("Log b:: getting details in session");
So in the success scenario, both Log a and Log b will be printed. But when transactionMethod throws an exception, only Log a will be printed for that sessionId and not Log b.
The requirement is I need to find any of the sessionId for which only Log a is present, not Log b.
Assuming that you have 2 fields TEXT and SessionID already defined, we will use the following test data:
SessionID=1001 TEXT="setting"
SessionID=1001 TEXT="getting"
SessionID=1002 TEXT="setting"
SessionID=1003 TEXT="getting"
Splunk query:
| makeresults count=4
| streamstats count
| eval TEXT=case(count=1 OR count=3, "setting", count=2 OR count=4, "getting")
| eval SessionID=case(count=1 OR count=2, 1001, count=3, 1002, count=4, 1003)
``` The above just sets up the test data ```
``` Below is the actual SPL for the task ```
| stats count(eval(TEXT="setting")) as LogA count(eval(TEXT="getting")) as LogB by SessionID
| search LogA > 0 AND LogB = 0
As you can see, I specifically excluded the case where only a "LogB" record is present (SessionID=1003).
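As a hedged alternative (the index name, and the assumption that TEXT is extracted on the raw events, are hypothetical), the same result can be reached by collecting each session's TEXT values and keeping sessions that never logged "getting":

index=foo ("setting" OR "getting")
| stats values(TEXT) as texts by SessionID
| where isnull(mvfind(texts, "getting"))  ``` no "getting" value means Log b never fired ```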

Process fields with nested arrays into strings with strcat_array for output in Kusto

I would like to process Azure AD audit logs into HTML tables/CSV files. The data contains nested sets of arrays that I would like to summarise into a comma-separated string,
e.g. data that looks like this:
{
"TargetResources": [{"displayName": "Policy",
"modifiedProperties": [{"displayname": "PolicySetting1"},
{"displayname": "PolicySetting2"}]
}]
}
Would be processed into
TargetResource | Policy
modifiedProps  | PolicySetting1, PolicySetting2
mv-expand doesn't seem to work because some rows do not have modifiedProperties, so those rows get eliminated.
The only solution I have been able to find that gets close to what I am trying to do looks like this:
AuditLogs
| extend TargetResource = tostring(TargetResources[0].displayName)
| extend ModifiedProperty0 = tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[0].displayName)
| extend ModifiedProperty1 = tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[1].displayName)
| extend ModifiedProperty2 = tostring(parse_json(tostring(TargetResources[0].modifiedProperties))[2].displayName)
| extend ModifiedProperties = strcat(ModifiedProperty0,", ",ModifiedProperty1,", ",ModifiedProperty2)
This solution is limited in that it cannot handle arbitrary numbers of modifiedProperty values (it only works properly for exactly 3), which is a requirement for my purposes: I would like the solution to work if modifiedProperties does not exist and if there are 0-15 values.
Thank you for any help you can provide
If I understood your description correctly, you could use mv-apply (twice) to achieve that:
datatable(d: dynamic)
[
dynamic({"TargetResources":[{"displayName": "Policy0","someOtherProperty":"hello world"}]}),
dynamic({"TargetResources":[{"displayName": "Policy1","modifiedProperties":[{"displayname":"PolicySetting1"},{"displayname":"PolicySetting2"}]}]}),
dynamic({"TargetResources":[{"displayName": "Policy2","modifiedProperties":[{"displayname":"PolicySetting3"},{"displayname":"PolicySetting4"}]}, {"displayName":"Policy3","modifiedProperties":[{"displayname":"PolicySetting5"},{"displayname":"PolicySetting6"}]}]}),
]
| mv-apply tr = d.TargetResources on (
extend TargetResource = tr.displayName
| mv-apply mp = tr.modifiedProperties on (
extend propertyName = mp.displayname
| summarize modifiedProps = strcat_array(make_set(propertyName), ", ")
)
)
| project TargetResource, modifiedProps
TargetResource | modifiedProps
---------------+-------------------------------
Policy0        |
Policy1        | PolicySetting1, PolicySetting2
Policy2        | PolicySetting3, PolicySetting4
Policy3        | PolicySetting5, PolicySetting6

How to track SLA of VM availability set (or availability zone) through heartbeats with Log Analytics (KQL)

I want to track the SLAs of our VMs in a Monitor Workbook using a Log Analytics query.
For this, I use the 'Heartbeat' table, which gives the heartbeats of each VM.
However, some of our VMs are in an availability set/zone, and as such the SLA is only broken if, in an interval of 1 minute, both heartbeats are missing.
As such I need to be able to group the heartbeats by availability set/zone in the query, but there doesn't seem to be such a property on the heartbeat.
I can use a separate Azure Resource Graph query to search for which VMs are in an availability set/zone, but when I merge this query with my Log Analytics query, I can't do any further Kusto Query Language processing on the query (I can only merge the tables).
For information, these are my Log Analytics Heartbeat query and my Resource Graph SLA query:
let timeRangeStart = {TimeRange:start};
let timeRangeEnd = {TimeRange:end};
Heartbeat
| where ResourceType == "virtualMachines"
| extend ResourceGroup = case(ResourceGroup != "", ResourceGroup, "On-Prem")
| where TimeGenerated > timeRangeStart and TimeGenerated < timeRangeEnd and Computer in ({Servers})
| extend Resource=tolower(iff(isempty(_ResourceId), Resource, _ResourceId))
| summarize heartbeat_tot = count() by Resource,ResourceGroup, SubscriptionId
| extend total_number_of_buckets=round((timeRangeEnd-timeRangeStart)/1m)
| extend availability_rate = round(heartbeat_tot*100/total_number_of_buckets, 2)
| extend availability_rate = min_of(availability_rate, 100)
| order by availability_rate asc
Resources // VMs
| where type == 'microsoft.compute/virtualmachines'
| extend AvSet = properties.availabilitySet.id
| extend AvZone = properties.availabilityZone.id
| extend VMname_SLA = iff(isnotempty(AvZone), AvZone, iff(isnotempty(AvSet), AvSet, id))
| extend SLA_VM = iff(isnotnull(AvZone), '99.99%', iff(isnotnull(AvSet), '99.95%', ''))
| extend managedBy = tolower(id)
| join kind = leftouter (
Resources // Disks
| where type == 'microsoft.compute/disks'
| where isnotempty(managedBy)
| extend managedBy = tolower(managedBy)
// What do Standard HDD disks have as SKU tag??? I used StandardHDD for the time being
| extend Tier_disk = sku.tier
| extend SLA_disk = iff(Tier_disk == 'StandardHDD', '95%', iff(Tier_disk == 'Standard', '99.5%', '99.9%'))
) on managedBy
| extend SLA_tot = iff(isnotempty(SLA_VM), SLA_VM, SLA_disk)
| project managedBy, VMname_SLA, SLA_tot
| order by managedBy asc
How many resources is it?
If it is not a large number of resources, a workaround would be:
1. Run your ARG query in a text parameter, and format the results of the query to effectively generate a JSON array of objects with the id, location, etc. that you need. Then mark this parameter as hidden.
2. In your Logs query, reference that parameter's JSON text before the query, and use KQL operators to turn that JSON structure into a table (see the sketch below). Then you can join/filter on that table in the query.
It isn't optimal, and it won't work well if there are large numbers of resources, since every time you run your query you're effectively "uploading" a JSON blob and then immediately parsing it apart again.
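A minimal sketch of step 2, assuming a hidden text parameter {SLAGroups} that expands to a JSON array such as [{"id":"/subscriptions/.../vm1","slaGroup":"avset-a"}, ...] (the parameter and property names are hypothetical):

let vmGroups = print raw = dynamic({SLAGroups})
    | mv-expand raw
    | project Resource = tolower(tostring(raw.id)), SlaGroup = tostring(raw.slaGroup);
Heartbeat
| where ResourceType == "virtualMachines"
| extend Resource = tolower(iff(isempty(_ResourceId), Resource, _ResourceId))
| join kind=inner vmGroups on Resource
| summarize heartbeat_tot = count() by SlaGroup

From there, the availability rate can be computed per SlaGroup the same way as in the per-VM query above, so a missed minute only counts against the SLA when no VM in the group reported.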

How to display a user determined by the maximum number of rows

I'm using Laravel 4.2 and I have a table that looks like this:
ID | chosen (boolean 0/1) | subject | user_id
---+----------------------+---------+--------
 1 | 1                    | cricket | 2
 2 | 0                    | cricket | 3
 3 | 1                    | rugby   | 2
I would like to choose the user_id that has the most chosen rows, or in other words, the user that has the most 1s in their chosen column. So in this case, user 2 would be the correct result.
I believe this is the answer
$table = DB::table('tablename')
->select(array(DB::raw('count(user_id) AS CountUser')))
->where('chosen', 1)
->orderBy('CountUser', 'desc')
->groupBy('user_id')
->first();
However, how do I display the user_id in my view? For example:
{{$table->user_id}} //should give me '2'
Error message reads:
Undefined property: stdClass::$user_id
You are selecting only the count. Try this:
$table = DB::table('tablename')
->select(DB::raw('count(user_id) AS CountUser, user_id'))
->where('chosen', 1)
->orderBy('CountUser', 'desc')
->groupBy('user_id')
->first();
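With user_id included in the select list, the row returned by first() now carries that property, so {{$table->user_id}} in the view resolves to 2 for the sample data instead of throwing "Undefined property".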