Azure Sentinel Kusto query table with data from another query - kql

I'm trying to find a way to use Azure Sentinel to pull all DNS results for a domain based upon a Security Alert.
In the SecurityAlert table, the domain name for an event is provided as part of a JSON payload; here is the query for extracting that data.
SecurityAlert
| where parse_json(ExtendedProperties).AnalyticDescription == "Usage of digital currency mining pool"
| extend DomainName_ = tostring(parse_json(ExtendedProperties).DomainName);
What I would like to do is take that query and then query the DnsEvents table to find all records whose Name column matches the domain name from the first query. An example of the query is
DnsEvents
| where Name contains "xmr-au1.nanopool.org"
How can I perform the second query but use the data from the first query to filter?

You could try something like this:
let domain_names =
SecurityAlert
| where ExtendedProperties has 'Usage of digital currency mining pool' // this line is optional, but may improve performance
| extend props = parse_json(ExtendedProperties)
| where props.AnalyticDescription == "Usage of digital currency mining pool"
| project DomainName_ = tostring(props.DomainName)
;
DnsEvents
| where Name has_any (domain_names)
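If you prefer an exact equality match between the alert's domain and the Name column in DnsEvents, rather than the term matching that has_any performs, a join-based sketch could look like this (it assumes DomainName_ holds exactly the value stored in Name):
let domain_names =
SecurityAlert
| where ExtendedProperties has 'Usage of digital currency mining pool'
| extend props = parse_json(ExtendedProperties)
| where props.AnalyticDescription == "Usage of digital currency mining pool"
| project DomainName_ = tostring(props.DomainName)
;
DnsEvents
| join kind=inner (domain_names) on $left.Name == $right.DomainName_ // assumes DomainName_ equals the full Name value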

Related

get size of my kube audit log ingested daily in azure

I would like to know how I can get the size (in GB) of my kube-audit log ingested on a daily basis. Is there a KQL query I can run in my Log Analytics workspace to find that out?
The reason I want this is to calculate the Azure consumption. Thanks
By using the Usage table, it is possible to review how much data was ingested into a Log Analytics (LA) workspace.
The scope spans from solutions to data types (which usually correlate to the destination table, but not always).
Kube-audit is only exportable by default to the AzureDiagnostics table, a table shared among many Azure resources; hence, it is impossible to differentiate the source of each record within the total count.
For example, I've been using the following query to review how much data was ingested at the scope of my AzureDiagnostics table in the last 10 days (Quantity in the Usage table is reported in MB, so dividing by 1000 gives an approximate GB figure):
Usage
| where TimeGenerated > startofday(ago(10d))
| where DataType == 'AzureDiagnostics'
| summarize IngestedGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1h)
| render timechart
In my case all the data originated from kube-audit logs, but that won't be the case for most users; you can break the AzureDiagnostics table down by category to check:
AzureDiagnostics
| where TimeGenerated > startofday(ago(10d))
| summarize count() by bin(TimeGenerated, 1h), Category
| render timechart
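If kube-audit shows up as its own category in your workspace, a rough per-day estimate of just that category is also possible. This is a sketch that assumes the Category value is "kube-audit" and uses the _BilledSize column (billed bytes per record):
AzureDiagnostics
| where TimeGenerated > startofday(ago(10d))
| where Category == "kube-audit" // assumes kube-audit records carry this Category value
| summarize IngestedGB = sum(_BilledSize) / 1e9 by bin(TimeGenerated, 1d)
| render timechart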

Access Issue and best function to accomplish task

New user here and I've read many threads, but can't seem to figure out the best way to accomplish my task.
Current issue: I'm using a Switch function in Access to accomplish my goal. Here is what I have, but I'm getting a syntax error:
UPDATE all_rugs_prod
SET construction_facet =
Switch(
construction = Machine Woven, Machine Made,
construction = Machine Made, Machine Made,
construction = Printed, Printed,
construction = Hand Hooked, Hand Hooked
)
all_rugs_prod is the table,
construction_facet is the field I want the value returned in,
and construction is the field it is going to search in.
I'm very new to all this, so I need as much help as I can get.
Backdrop: I'm taking, say, database 1 and mapping/matching its fields to database 2. Database 2 has many other fields, added in database 2, that require data to be populated.
I created an append from database 1 into database 2 and matched the fields that were appended from database 1 to the corresponding fields in database 2.
My biggest issue is that I need to normalize/map data in database 2. Example: database 2 has a field from database 1 that contains many different text values. I need to search that field and bring back a predetermined text value based on a predetermined list it fits into. So if database 2.field7 contains the text "aqua blue", I need to normalize/map it so that database 2.field8 returns "blue", and so on and so forth. What is the best way to accomplish this? The list, in some cases of various colors, is very long. Thanks!
The syntax error arises because you need to enclose literal strings in double quotes, e.g.
"Machine Woven"
Otherwise each word separated by whitespace will be interpreted as a field (as opposed to a literal string), which, if not found in the source dataset, will result in the fields being interpreted as parameters requiring a value to be supplied by the user; but more critically, this will result in too many arguments being supplied to the Switch function.
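For reference, the Switch expression from the question with the string literals quoted would look something like this (a sketch based on the mappings shown above; records whose construction value matches none of the expressions will have construction_facet set to Null):
UPDATE all_rugs_prod
SET construction_facet = Switch(
    construction = "Machine Woven", "Machine Made",
    construction = "Machine Made", "Machine Made",
    construction = "Printed", "Printed",
    construction = "Hand Hooked", "Hand Hooked"
)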
However, since you are only updating the value of records which contain the value "Machine Woven" in the construction field, your query could be simplified to:
update all_rugs_prod
set construction_facet = "Machine Made"
where construction = "Machine Woven"
For a situation in which many possible values in place of "Machine Woven" are being mapped to "Machine Made", I would suggest creating a separate mapping table, e.g.:
Mapping_Table
+---------------------+--------------+
| map_from | map_to |
+---------------------+--------------+
| Machine Woven | Machine Made |
| Machine Built | Machine Made |
| Machine Constructed | Machine Made |
+---------------------+--------------+
And then use a simple update query with an inner join to the above mapping table to perform an implicit selection and apply the new value, e.g.:
update
all_rugs_prod inner join mapping_table on
all_rugs_prod.construction = mapping_table.map_from
set
all_rugs_prod.construction_facet = mapping_table.map_to

Splunk query to get user, saved search name, last time the query ran

In Splunk, I am trying to get the user, saved search name, and the last time a query ran.
A single Splunk query would be nice.
I am very new to Splunk and I have tried this query:
index=_audit action=search info=granted search=*
| search IsNotNull(savedsearch_name) user!="splunk-system-user"
| table user savedserach_name user search _time
The above query is always empty for savedsearch_name.
Splunk's audit log leaves a bit to be desired. For better results, search the _internal index.
index=_internal savedsearch_name=* NOT user="splunk-system-user"
| table user savedsearch_name _time
You won't see the search query, however. For that, use REST.
| rest /services/saved/searches | fields title search
Combine them with something like this (there may be other ways):
index=_internal savedsearch_name=* NOT user="splunk-system-user"
| fields user savedsearch_name _time
| join savedsearch_name [| rest /services/saved/searches
| fields title search | rename title as savedsearch_name]
| table user savedsearch_name search _time
Note that you have a typo in your query. "savedserach_name" should be "savedsearch_name".
But I also recommend a free app that has a dedicated search tool for this purpose.
https://splunkbase.splunk.com/app/6449/
Specifically the "user activity" view within that app.
Why it's a complex problem: part of the puzzle is in the audit log's info="granted" event, another part is in the audit log's info="completed" event, and even more of it is over in the introspection index. You need those three stitched together; the audit log is plagued with parsing problems, and autokv compounds the problem by extracting fields from the SPL itself.
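To give a rough idea of the stitching involved, here is a sketch that correlates the granted and completed audit events by search ID. It assumes the usual search_id, info, user, and savedsearch_name fields are extracted from the _audit index; adjust to taste:
index=_audit action=search (info=granted OR info=completed) user!="splunk-system-user"
| stats min(_time) as granted_time max(_time) as completed_time values(user) as user values(savedsearch_name) as savedsearch_name by search_id
| where isnotnull(savedsearch_name)
| table user savedsearch_name granted_time completed_time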
That User Activity view will do all of this for you, sidestep the pretty thorny autokv problems in the audit data, and not just give you all of this per search, but also present stats and rollups by user, app, dashboard, and even by the sourcetypes that were actually searched.
It also has a macro called "calculate pain" that scores a "pain" number for each search and then sums up all the "pain" in the by-user, by-app, by-sourcetype rollups, etc., so that admins can try to pick off the worst offenders first.
It's up on Splunkbase and approved for both Cloud and on-prem: https://splunkbase.splunk.com/app/6449/
(There's also a #sideview_ui channel for it in the community Slack.)

input certain field from dropdown and then use another field from that row (csv file) in the search

I'm new to Splunk and I am in the process of writing a Splunk query that takes a key from a dropdown option on a dashboard. I then want to extract a different field (specifically domain) associated with that key in the CSV file and use it in the search to filter by domain.
The query which I currently wrote is:
basequery
| lookup tenant.csv key as tenant_key output domain as Domain
| search tenant_key = $selected_client$
| stats count
I just want to display a count filtered by domain associated with the key provided by the dropdown. I'm not quite sure what is wrong or how to go about it.
Try this query. Putting tenant_key in the base query limits the search to events with the selected client. Then the lookup maps tenant to domain. Finally, do the count.
basequery tenant_key=$selected_client$
| lookup tenant.csv key as tenant_key output domain as Domain
| stats count

Is it important to have an automated acceptance tests to test whether a field saves to a database?

I'm using SpecFlow as the automated acceptance testing framework and NHibernate for persistence. Many of the UI pages for an intranet application that I'm working on are basic data entry pages. Obviously adding a field to one of these pages is considered a "feature", but I can't think of any scenarios for this feature other than
Given that I enter data X for field Y on Record 1
And I click Save
When I edit Record 1
Then I should see data X for field Y
How common and necessary is it to automate tests like this? Additionally, I'm using NHibernate, so it's not like I'm hand-rolling my own data persistence layer. Once I add a property to my mapping file, there is a high chance that it won't get deleted by mistake. Considering this, isn't a "one-time" manual test enough? I'm eager to hear your suggestions and experience in this matter.
I usually have scenarios like "successful creation of ..." that test the success case (you fill in all required fields, all input is valid, you confirm, and finally it is really saved).
I don't think that you can easily define a separate scenario for one single field, because usually the scenario of successful creation requires several other criteria to be met "at the same time" (e.g. all required fields must be filled).
For example:
Scenario: Successful creation of a customer
Given I am on the customer creation page
When I enter the following customer details
| Name | Address |
| Cust | My addr |
And I save the customer details
Then I have a new customer saved with the following details
| Name | Address |
| Cust | My addr |
Later I can add additional fields to this scenario (e.g. the billing address):
Scenario: Successful creation of a customer
Given I am on the customer creation page
When I enter the following customer details
| Name | Address | Billing address |
| Cust | My addr | Bill me here |
And I save the customer details
Then I have a new customer saved with the following details
| Name | Address | Billing address |
| Cust | My addr | Bill me here |
Of course there can be more scenarios related to the new field (e.g. validations) that you have to define or extend.
I think if you take this approach you can avoid having a lot of "trivial" scenarios. And I would argue that this is the success case of the "create customer" feature, which deserves at least one test.
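If it helps to see where the database check would actually live, here is a minimal sketch of a SpecFlow binding for the "Then" step above. It assumes an NHibernate ISessionFactory is made available to the binding class and that a mapped Customer entity with Name and Address properties exists; all of these names are illustrative, not from the question:
using System.Linq;
using NHibernate;
using NHibernate.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

// Illustrative entity; in the real project this would be the mapped NHibernate entity.
public class Customer
{
    public virtual string Name { get; set; }
    public virtual string Address { get; set; }
}

[Binding]
public class CustomerCreationSteps
{
    // Assumption: the session factory is registered with SpecFlow's container so it can be injected here.
    private readonly ISessionFactory _sessionFactory;

    public CustomerCreationSteps(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    [Then(@"I have a new customer saved with the following details")]
    public void ThenIHaveANewCustomerSavedWithTheFollowingDetails(Table table)
    {
        // Turn the Gherkin table row into an expected Customer instance.
        var expected = table.CreateInstance<Customer>();

        // Open a fresh session so the entity is reloaded from the database,
        // not served back from a first-level cache.
        using (var session = _sessionFactory.OpenSession())
        {
            var saved = session.Query<Customer>()
                               .Single(c => c.Name == expected.Name);

            Assert.AreEqual(expected.Address, saved.Address);
        }
    }
}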