Hi, I am trying to create a Splunk query to get the daily disk usage of our fileshare servers using sourcetype="PerfmonMk:LogicalDisk".

This is what I currently have and it shows me the usage, but I would like to have a panel that shows how much was used each day.
index=perfmon (host=server1 OR host=server2) sourcetype="PerfmonMk:LogicalDisk" instance="F:\\Data_2022_00_T3"
| bin _time span=24h aligntime=@d
| eval GB_Free=(Free_Megabytes/1024)
| dedup _time
| table _time, GB_Free
I'm new to Splunk. I've looked into the diff command, but I'm not sure if that is the right approach.
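One possible direction (a sketch, not a tested answer for this environment): SPL's delta command subtracts a field's value in the previous result from its value in the current result, so it can turn the daily snapshots above into a day-over-day change. The sign of GB_Change depends on the sort order of the results, so you may want an explicit sort first:

index=perfmon (host=server1 OR host=server2) sourcetype="PerfmonMk:LogicalDisk" instance="F:\\Data_2022_00_T3"
| bin _time span=24h aligntime=@d
| eval GB_Free=(Free_Megabytes/1024)
| dedup _time
| sort _time
| delta GB_Free as GB_Change
| table _time, GB_Free, GB_Change

streamstats with a window of 2 can do the same thing if you need per-host grouping, which delta does not support directly.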

Related

get size of my kube audit log ingested daily in azure

I would like to know how I can get the size (in GB) of my kube-audit log ingested on a daily basis. Is there a KQL query I can run in my Log Analytics workspace to find that out?
The reason I ask is that I would like to calculate our Azure consumption. Thanks
By using the Usage table, it is possible to review how much data was ingested into a Log Analytics workspace.
Its scope spans from solutions to data types (which usually correlate to the destination table, but not always).
Kube-audit is exportable by default only to the AzureDiagnostics table, a table shared among many Azure resources; hence, it is impossible to differentiate the source of each record within the total count.
For example, I've been using the following query to review how much data was ingested at the scope of my AzureDiagnostics table over the last 10 days:
Usage
| where TimeGenerated > startofday(ago(10d))
| where DataType == 'AzureDiagnostics'
| summarize IngestedGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1h)
| render timechart
In my case all data originated from kube-audit logs, but that shouldn't be the case for most users:
AzureDiagnostics
| where TimeGenerated > startofday(ago(10d))
| summarize count() by bin(TimeGenerated, 1h), Category
| render timechart
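As a rough per-category alternative (a sketch, assuming the _BilledSize standard column is populated for your AzureDiagnostics records), you can sum the billed size of each record directly instead of counting rows, which gets closer to a per-category GB estimate:

AzureDiagnostics
| where TimeGenerated > startofday(ago(10d))
| summarize IngestedGB = sum(_BilledSize) / 1e9 by bin(TimeGenerated, 1d), Category
| render timechart

This is an estimate of billed volume per category, not an exact match for the Usage table's accounting.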

Few hourly events are missing in the splunk dashboard

I need help with the weird issue below.
We have a Splunk query built on our logs. Below is the query used.
index=aws_lle_airflow "INFO - Source count for aws-bda-lle.marketing.bq_mktg_campaign:"
| rex field=_raw "INFO - Source count for aws-bda-lle.marketing.bq_mktg_campaign: (?<bqTableRecordCount>[^\"]+)"
| table _time bqTableRecordCount
| sort _time
The problem is that, as per the table inserts, 10 events should be displayed in the dashboard, but only 8 are shown.
Even though the logs are written, I'm not sure why the dashboard is not reflecting them. Could someone help me understand what the issue might be and what needs to be done to resolve it?
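A common first step when events seem to be missing is to count the raw matches per hour without the extraction and table stages, which shows whether the two events are absent from the index (e.g. outside the dashboard's time range, or delayed indexing) or are being lost later in the pipeline. A minimal sketch:

index=aws_lle_airflow "INFO - Source count for aws-bda-lle.marketing.bq_mktg_campaign:"
| timechart span=1h count

If all 10 events appear here over the expected window, the dashboard's time picker or the rex extraction is the likelier culprit; if only 8 appear, the events never reached the index in that window.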

Splunk query to get user, saved search name, last time the query ran

From Splunk, I am trying to get the user, the saved search name, and the last time the query ran.
A single Splunk query would be nice.
I am very new to Splunk and I have tried this query:
index=_audit action=search info=granted search=*
| search IsNotNull(savedsearch_name) user!="splunk-system-user"
| table user savedserach_name user search _time
With the above query, savedsearch_name is always empty.
Splunk's audit log leaves a bit to be desired. For better results, search the internal index.
index=_internal savedsearch_name=* NOT user="splunk-system-user"
| table user savedsearch_name _time
You won't see the search query, however. For that, use REST.
| rest /services/saved/searches | fields title search
Combine them something like this (there may be other ways):
index=_internal savedsearch_name=* NOT user="splunk-system-user"
| fields user savedsearch_name _time
| join savedsearch_name [| rest /services/saved/searches
| fields title search | rename title as savedsearch_name]
| table user savedsearch_name search _time
Note that you have a typo in your query. "savedserach_name" should be "savedsearch_name".
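If you only want the most recent run per search rather than every run, a stats rollup over the internal-index results would look something like this sketch (field names as in the queries above):

index=_internal savedsearch_name=* NOT user="splunk-system-user"
| stats latest(_time) as last_run by user savedsearch_name
| convert ctime(last_run)

The join with | rest /services/saved/searches can then be applied on top of this to pull in the search text.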
But I also recommend a free app that has a dedicated search tool for this purpose.
https://splunkbase.splunk.com/app/6449/
Specifically the "user activity" view within that app.
Why it's a complex problem: part of the puzzle is in the audit log's info="granted" event, another part is in its info="completed" event, and still more is over in the introspection index. You need those three stitched together, and the audit log is plagued with parsing problems; autokv compounds the problem by extracting fields from the SPL itself.
That User Activity view will do all of this for you, sidestepping some pretty thorny autokv problems in the audit data, and it not only gives you all of this per search but also presents stats and rollups by user, app, dashboard, and even by the sourcetypes that were actually searched.
It also has a macro called "calculate pain" that scores a "pain" number for each search and then sums up all the "pain" in the by-user, by-app, by-sourcetype rollups, etc., so that admins can try to pick off the worst offenders first.
It's up on Splunkbase here and approved for both Cloud and on-prem: https://splunkbase.splunk.com/app/6449/
(There's also a #sideview_ui channel for it in the community Slack.)

how can I find all dashboards in splunk, with usage information?

I need to locate data that has become stale in our Splunk instance, so that I can remove it.
I need a way to find all the dashboards and sort them by usage. From the audit logs I've been able to find all the actively used ones, but since my goal is to remove data, what I most need is the dashboards not in use.
Any ideas?
You can get a list of all dashboards using | rest /services/data/ui/views | search isDashboard=1. Try combining that with your search for active dashboards to get those that are not active.
| rest /services/data/ui/views | search isDashboard=1 NOT [<your audit search> | fields id | format]
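For the &lt;your audit search&gt; placeholder, dashboard views in Splunk Web are typically recorded in the web-access logs in _internal rather than in _audit. A hedged sketch of one possible usage search (the URI layout and extracted names can vary by deployment, so treat the rex pattern as an assumption to verify against your own logs):

index=_internal sourcetype=splunk_web_access status=200 uri_path="*/app/*"
| rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
| stats count by dashboard

The dashboard field produced here would then need to be renamed to match the title field returned by | rest /services/data/ui/views before it can feed the NOT subsearch above.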

Dedup field by timeslice in splunk

I am looking to see how many servers are reporting into splunk over time. This is a query similar to the one I have tried:
sourcetype=defined | dedup host | timechart count by pop
What is happening is that the hosts get deduped before the timechart (obviously), so I'm not getting the results I'm looking for.
How can I deduplicate the server list per time slice in the timechart?
Please let me know if further clarification is necessary.
It looks like the option I was looking for is the distinct_count (dc) function for charts. This is the final query that returns the results I want:
sourcetype=defined | timechart dc(host) by pop
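One note on bucketing: timechart picks a bucket size automatically from the search time range, so if the goal is specifically one count per day, the span can be pinned explicitly:

sourcetype=defined | timechart span=1d dc(host) by pop

dc(host) counts each host at most once per time bucket, which is exactly the per-timeslice dedup the question asked for.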