How do I create a dashboard that searches for recently modified or updated dashboards in Splunk?

Is it possible to create a simple XML or HTML dashboard that searches for all other recently modified or updated dashboards in Splunk?
And if so, when I search for these updated dashboards, I would like to know the indexes and datasets that those dashboards use.
Requested table format:
Dashboard Name, Index, Timestamp (shows when the dashboard was last updated)
Hopefully that makes sense... Please let me know if it's possible, or about similar ways I can find this! Thanks

You could search the _audit index:
index=_audit | table _time user action info
The _internal index also has some sources on which to do username analytics, e.g. searches.log:
index=_internal

Related

How do I query Splunk dashboard information?

We currently send ADO (Azure DevOps) pipeline artifacts to Splunk for build step reviews. I am working on a project to migrate Splunk dashboards to Power BI. I need a query that lists the dashboards and reports from Splunk against the criteria below, so we can identify which ones to recreate in Power BI:
List of dashboards / reports (possibly with author details)
Frequency of usage - e.g. how many times the dashboard / report was viewed in the last 30 days
I tried a few queries on my Splunk instance but they did not give the result I need. Thanks for any inputs / suggestions.
It would help to know what you've tried so far.
To get a list of dashboards, try this query:
| rest /servicesNS/-/-/data/ui/views
To see which dashboards have been viewed, search the Splunk UI access log.
index=_internal sourcetype=splunkd_ui_access
| rex "\\/data\\/ui\\/views\\/(?<dashboard>[^\?]+)"
| stats count by dashboard
The queries below worked for me.
To list the Dashboards
| rest /servicesNS/-/-/data/ui/views | table title,app,owner,eai:data,description,updated,version
To list the reports
| rest /servicesNS/-/-/saved/searches | table title,app,owner
Please note this will also list the default dashboards that ship with Splunk, which you may want to filter out.
I am still working on the query for the usage frequency. Thanks.
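For the usage frequency, one way to combine the two answers above is to join the REST listing with view counts from the UI access log. A hedged sketch, assuming the dashboard name captured from the URL matches the view's title (a join like this can be slow in large environments, and views_30d is just an illustrative field name):
| rest /servicesNS/-/-/data/ui/views
| fields title app owner
| join type=left title
    [ search index=_internal sourcetype=splunkd_ui_access earliest=-30d
      | rex "/data/ui/views/(?<title>[^?\s]+)"
      | stats count as views_30d by title ]
| fillnull value=0 views_30d
| table title app owner views_30d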

Splunk query to get user, saved search name, last time the query ran

From Splunk, how can I get the user, the saved search name, and the last time the query ran?
A single Splunk query would be nice.
I am very new to Splunk and I have tried this query:
index=_audit action=search info=granted search=*
| search IsNotNull(savedsearch_name) user!="splunk-system-user"
| table user savedserach_name user search _time
The above query always comes back empty for savedsearch_name.
Splunk's audit log leaves a bit to be desired. For better results, search the internal index.
index=_internal savedsearch_name=* NOT user="splunk-system-user"
| table user savedsearch_name _time
You won't see the search query, however. For that, use REST.
| rest /services/saved/searches | fields title search
Combine them with something like this (there may be other ways):
index=_internal savedsearch_name=* NOT user="splunk-system-user"
| fields user savedsearch_name _time
| join savedsearch_name [| rest /services/saved/searches
| fields title search | rename title as savedsearch_name]
| table user savedsearch_name search _time
Note that you have a typo in your query. "savedserach_name" should be "savedsearch_name".
But I also recommend a free app that has a dedicated search tool for this purpose.
https://splunkbase.splunk.com/app/6449/
Specifically the "user activity" view within that app.
Why it's a complex problem: part of the puzzle is in the audit log's info="granted" event, another part is in the audit log's info="completed" event, and still more of it is over in the introspection index. You need those three stitched together, and the audit log is plagued with parsing problems; autokv compounds them by extracting fields from the SPL itself.
That User Activity view will do all of this for you, sidestep the thorny autokv problems in the audit data, and not just give you all of this per search, but also present stats and rollups by user, app, dashboard, even by the sourcetypes that were actually searched.
It also has a macro called "calculate pain" that scores a "pain" number for each search and then sums up all the "pain" in the by-user, by-app, by-sourcetype rollups, so that admins can try to pick off the worst offenders first.
It's up on Splunkbase and approved for both Cloud and on-prem: https://splunkbase.splunk.com/app/6449/
(There's also a #sideview_ui channel for it in the community Slack.)
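If you would rather attempt the stitching by hand than install the app, a rough sketch of the idea (hedged: exact field availability varies by Splunk version, and this does nothing about the autokv pitfalls described above):
index=_audit action=search (info=granted OR info=completed) search_id=*
| stats min(_time) as submitted max(_time) as finished values(user) as user values(savedsearch_name) as savedsearch_name by search_id
| eval duration=finished-submitted
| table user savedsearch_name submitted finished duration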

Splunk: select only matching JSON data

I load JSON reports into Splunk and those reports have many arrays. When I search:
source=test| search "behavior.system.processes.process{}.fileactivities.fileCreated.call{}.path"="C:\\Windows*"
I often like to show the matching data. I use table to do so:
source=test| search "behavior.system.processes.process{}.fileactivities.fileCreated.call{}.path"="C:\\Windows*" | table "behavior.system.processes.process{}.fileactivities.fileCreated.call{}.path"
However, the issue is that this shows me all fileCreated paths of the matching event, not only the ones starting with C:\Windows.
How do I filter that?
I posted an answer on answers.splunk.com. Please check the link below.
https://answers.splunk.com/answers/745093/only-select-matching-json-data.html
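In case that link becomes unavailable: a common way to keep only the matching values of a multivalue field is eval with mvfilter. A sketch under the assumption that the path field is multivalue (the field name is copied from the question, matching_paths is an illustrative name, and the backslash escaping in the regex may need adjusting for your data):
source=test "behavior.system.processes.process{}.fileactivities.fileCreated.call{}.path"="C:\\Windows*"
| eval matching_paths=mvfilter(match('behavior.system.processes.process{}.fileactivities.fileCreated.call{}.path', "^C:\\\\Windows"))
| table matching_paths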

Finding the query that created a table in BigQuery

I am a new employee at the company. The person before me built some tables in BigQuery. I want to investigate the CREATE TABLE query for a particular table.
Things I would want to check using the query is:
What joins were used?
What are the other tables used to make the table in question?
I have not worked with BigQuery before but I did my due diligence by reading tutorials and the documentation. I could not find anything related there.
A brief outline of the steps:
Step 1 - gather all query jobs of that user using the Jobs.list API - you must have the Is Owner permission on the respective projects to see someone else's jobs
Step 2 - extract only those jobs run by the user in question that reference the table of interest - using the destination table attribute
Step 3 - for those extracted jobs, simply check the respective queries, which will show you how the table was populated
Hth!
I had been looking for an answer for a long time.
Finally found it:
Go to the three-bars menu at the top left.
From there, go to the Analytics tab.
Select BigQuery, under which you will find the Scheduled queries option; click on that.
In the filter tab you can enter keywords and get the required query for the table.
For me, I was able to go through my query history and find the query I used.
Step 1.
Go to the BigQuery UI; at the bottom there are Personal history and Project history tabs. If you can use the same account that executed the query, I recommend Personal history.
Step 2.
Click on the tab and there will be a list of queries ordered from most recently run. Check the time the table was created and find a query that ran just before that creation time.
Since the query runs first and then creates the table, there will be a slight difference between the two timestamps; for me it was within a few seconds.
Step 3.
After you find the query used to create the table, simply copy it. And you're done.

Labeled text dataset for evaluating information retrieval results

I have created a semantic text search engine. However, I cannot find a labeled dataset with which to evaluate my system's information retrieval.
Is there any publicly available labeled text corpus? I need such documents to evaluate the retrieval results (recall, precision, F1 score...).
Thanks.
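For reference, the metrics named in the question are computed per query from true positives (TP), false positives (FP), and false negatives (FN) over the retrieved documents:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 * precision * recall / (precision + recall)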
I do research in this direction. In all my research, I have used the AOL dataset, which consists of ~20M web queries collected from ~650k users over three months (March 01, 2006 to May 31, 2006). The data is sorted by anonymous user ID and sequentially arranged.
The data set includes {AnonID, Query, QueryTime, ItemRank, ClickURL}. More details can be found in the link mentioned above. I am interested to know how you implemented it and, if possible, whether you could share your engine's code. I am also interested to know your search engine's performance on the AOL dataset.
You can find the dataset in my git repository. Thanks!