KQL: Kusto query across multiple tables using the same variable

Regarding the Kusto Query Language for advanced hunting in Defender ATP:
I'm looking to query the information for one computer, but across multiple tables.
I use a let statement to assign the computer name to a variable, and this works, but only for the first table, in this case DeviceNetworkInfo.
The results only include the first table, DeviceNetworkInfo; there are no results for the other two.
I'm looking to get the results from all tables using the single hostname variable.
So there are no results for lines 2 and 3. I only want to type the hostname once.
What I'm using is below.
Thanks
0 Let hostname = "Computer1";
1 DeviceNetworkInfo | where DeviceName contains hostname;
2 DeviceProcessEvents | where DeviceName contains hostname;
3 DeviceAlertEvents | where DeviceName contains hostname;

Your query should work as is. Make sure the other two tables actually have relevant data.
P.S. I think in your case you can use has instead of contains, as it's much more performant.

I'd try something like this:
let host = "computername";
search in (DeviceNetworkInfo, DeviceProcessEvents, AlertInfo)
    DeviceName =~ host
// put in a timestamp filter as needed

Related

KQL - Searching in lists/strings

I'm setting up an Azure workbook that allows VMs to be selected using a parameter that I have called virtual_machines. When I select multiple VM resources using this parameter, it behaves in a query as below (I'm just using :names):
print ('{virtual_machines:names}')
vm-test-01, vm-test-02
I'm now trying to use this in a query but I'm getting no results. I think it's struggling with the fact the values aren't quoted individually. For example, if I use it in my usual where clause I get no results.
| where Computer in ('{virtual_machines:names}')
The query returned no results.
Should I be using a different method to filter other than in or should I be manipulating my {virtual_machines:names} list somehow? I'm new to KQL and am a little confused by the data types I'm dealing with here in terms of lists, dictionaries etc...
Any pointers?
Thanks
Multi-value parameter
print Computer = "vm_test_01"
| where Computer in (dynamic([{virtual_machines_names}]))
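The root cause, as the asker suspected, is that the expanded parameter is one comma-separated string rather than a list of individually quoted values. Outside of KQL, the transformation the query needs can be illustrated with a short Python sketch (the function name quote_names is my own, for illustration only):

```python
# Turn a raw comma-separated parameter value into individually
# quoted strings, the shape a `where ... in (...)` clause expects.
def quote_names(raw: str) -> str:
    names = [n.strip() for n in raw.split(",") if n.strip()]
    return ", ".join(f"'{n}'" for n in names)

raw = "vm-test-01, vm-test-02"
print(quote_names(raw))  # 'vm-test-01', 'vm-test-02'
```

In the workbook itself this quoting is normally done by the parameter's own formatting options rather than by hand; the sketch only shows what the quoted form should look like.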

Do I have problem with some hidden characters in MS Access?

I use data from MS Excel further in MS Access where I try to create several Queries. My Table contains the following numbers:
Table:
"111_1"
"222_2"
"123_3"
So, when I build those queries, I always receive incorrect results:
Query:
like "###_#" | 0 Results
not like "###_#" | 3 Results
Why do I experience this kind of behavior? How can I avoid it? It seems like my inputs contain some hidden characters.
Thank you
The octothorpe designates a digit, so try with:
"###?#"
or:
"###[_]#"

Postgres: select using column as regex, when only some rows are valid regexes

TLDR
How to do a regex-match query with a column value ('input' ~ t.somecolumn), where just a known subset of rows has a valid regex in that column?
Full example
there is a blocked_items table including two varchar columns: type and value,
one of the types is DOMAIN_REGEX, and then the value always includes a correct regex,
but: for other types value doesn't need to be a regex and can cause errors when treated as one.
To check if a domain is blocked, I'm calling this query and passing the URL in question as $1 parameter:
SELECT 1 FROM blocked_items WHERE type = 'DOMAIN_REGEX' AND $1 ~ value LIMIT 1
The problem: on some database instances the query fails if rows with another type have a value that's not a valid regex. On one database this query runs correctly; on another instance, regardless of the input, it throws: invalid regular expression: quantifier operand invalid.
Example test data:
| type | value |
|--------------+---------------------|
| EMAIL        | test+++1@test.com   |
| DOMAIN_REGEX | test\d\.com |
Question
I know the reason for my error is that the db engine can choose to check the second condition ($1 ~ value) first -- I've checked the EXPLAIN for my query and indeed it's different on these two database instances.
Is there a way I can
force the db to check the type column first, so the regex filter is always valid?
form the query differently to ignore the error for non-regex value entries? Or check if it's a valid regex first?
work around this issue in another way?
// I know changing the schema or using LIKE instead will probably suffice, but now that I stumbled upon this I'm curious if there is a solution using regexes like this :)
You should be able to force the order of operations using case:
SELECT 1
FROM blocked_items
WHERE (CASE WHEN type <> 'DOMAIN_REGEX' THEN false
ELSE $1 ~ value
END)
LIMIT 1;
In general, SQL (and Postgres) provide little control over the order of evaluation of expressions. However, CASE should provide that control under many circumstances.
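One of the alternatives the question asks about (checking whether value is a valid regex first) can also be done client-side. Below is a minimal Python sketch, modeled loosely on the question's test data; the rows list and function names are illustrative, and Python's re engine only approximates Postgres's POSIX regex behavior:

```python
import re

# Rows as (type, value), modeled on the question's test data.
rows = [
    ("EMAIL", "test+++1@test.com"),     # not a valid regex: stacked '+' quantifiers
    ("DOMAIN_REGEX", r"test\d\.com"),
]

def is_valid_regex(pattern: str) -> bool:
    """Return True if the pattern compiles as a regular expression."""
    try:
        re.compile(pattern)
        return True
    except re.error:
        return False

def is_blocked(url: str) -> bool:
    # Only rows of the regex type, with a known-good pattern, are matched.
    return any(
        re.search(value, url)
        for type_, value in rows
        if type_ == "DOMAIN_REGEX" and is_valid_regex(value)
    )

print(is_blocked("test1.com"))    # True
print(is_blocked("example.com"))  # False
```

The same pre-validation idea could live in the database as a function that traps the regex error, but the CASE approach above keeps it in a single query.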
You are right, the schema is not great. If you still really have to keep the schema, you could try CASE/WHEN, https://www.postgresqltutorial.com/postgresql-case/

Read sql query via xml file

I have a financial application that has a large set of rules to check. The file is stored in a sql server. This is a web application using C#. Each file must be checked for these rules and there are hundreds of rules to consider. These rules change every few weeks to months. My thought was to store these rules in an xml file and have my code behind read the xml and dynamically generate the sql queries on the file. For testing purposes we are hard coding these rules, but would like to move to an architecture that is more accommodating of these rules changes. I'd think that xml is a good way to go here, but I'd appreciate advice of those that have gone down similar roads before.
The complexity of each rule check is small; generally they are just simple statements such as: "If A && B && (C || D), then write output string to log file".
My thought would be to code up the query in XML (A && B && (C || D)) and attach a string to that node in the XML. If the query succeeds, the string is written; if not, no string is written.
Thoughts?
In response to a comment, here is a more specific example:
The database has an entity called 'assets'. There are a number of asset types supported, such as checking, savings, 401k, IRA, etc. An example of a rule we want to check would be: "If the file has a 401k, append warning text to the report saying '...'". That example is for a really simple case.
We also get into more complex and dynamic cases where, for a short period of time, a rule may be applied to deny files with clients in specific states with specific property types. A classic example is to not allow condominiums in Florida. This rule may exist for a while, then be removed.
The pool of rules is constantly changing at the discretion of large lending banks. We need to be able to make these rule changes outside of the site's source code. Thus my idea of using XML and having the C# parse the XML and apply the rules dynamically. Does this help clarify the application and its needs?
Could you just have a table with SQL in it? You could then formalise it a bit by having the SQL return a particular structure.
So your table of checks might be:
id | checkGroup    | checkName      | sql
1  | '401k checks' | '401k present' | select
   |               |                |     '401k present'
   |               |                |     ,count(*)
   |               |                |     ,'remove 401k'
   |               |                | from
   |               |                |     assets
   |               |                | where
   |               |                |     x like '401k%'
You could insist that the SQL in the sql column returns something in the format:
ruleName      | count | comment
'401k present'| 85    | 'remove 401k'
You could have different types of rules. When I have done something similar to this, I have not returned totals; instead I have returned something more like:
table    | id | ruleBroken     | comment
'assets' | 1  | '401k present' | 'remove 401k'
This obviously would have a query more like:
select
'assets'
,id
,'401k present'
,'remove 401k'
from
assets
where
x like '401k%'
This makes it easier to generate interactive reports where the aggregate functions are done by the report (e.g. SSRS), allowing drill-down to problem records.
The queries that validate the rules can either be run within a stored procedure that selects the queries out and uses EXEC to execute them, or they can be run from your application code one by one.
Some of the columns (e.g. rule name) can be populated by the calling stored procedure or code.
The comments and rule name in this example are basically the same, but it can be handy to keep the comments separate and put a case statement in there, e.g. when failing validation rules, say on fields that should not be blank if you have a 401k, you can have a case statement that says in the comments which fields are missing.
If you want end users or non-devs to create the rules, you could look at generating the where clause in code: let the user select the table and rule name and build a where clause through some interface, then save it to your rule table and you are good to go.
If all of your rules return a set format, it allows you to have one report template for all rules; equally, if you have exactly three types of rule, you could have three return formats and three report formats. Basically, I like formalizing the result structure as it allows much more reuse elsewhere.
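As a proof of concept of the table-of-checks idea, here is a minimal Python sketch using sqlite3 in place of SQL Server and C# (the schema and sample data are illustrative): each row of the checks table stores a query returning (ruleName, count, comment), and a small runner executes them one by one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE assets (id INTEGER PRIMARY KEY, x TEXT);
    INSERT INTO assets (x) VALUES ('401k - employer'), ('checking'), ('401k - rollover');
    CREATE TABLE checks (id INTEGER PRIMARY KEY, checkGroup TEXT, checkName TEXT, sql TEXT);
""")

# Each check's SQL must return rows of (ruleName, count, comment).
conn.execute(
    "INSERT INTO checks (checkGroup, checkName, sql) VALUES (?, ?, ?)",
    ("401k checks", "401k present",
     "SELECT '401k present', count(*), 'remove 401k' FROM assets WHERE x LIKE '401k%'"),
)

def run_checks(conn):
    """Execute every stored check and collect its (ruleName, count, comment) rows."""
    results = []
    for (sql,) in conn.execute("SELECT sql FROM checks"):
        results.extend(conn.execute(sql).fetchall())
    return results

for rule, count, comment in run_checks(conn):
    if count:  # rule fired: write the comment to the report/log
        print(f"{rule}: {count} -> {comment}")
```

In the real application the runner would live in the C# layer (or a stored procedure using EXEC, as described above), and the rules table, not source code, is what changes when the banks change the rules.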

Matching similarly ending strings with SQL sort/group by

I have a massive table of emails and would like to sort by domain (and count up the # in each domain)
Example output:
@gmail.com = 1000
@aol.com = 790
@hotmail.com = 550
@somethingweird.com = 2
The regex would be for all strings that match from "@" to the final character in the string.
Any ideas how I could do this?
If you can change your design, you may try changing the way you store email addresses in the db, or add an additional column. This will perform much better with indexing than having to do a table scan through your whole table to generate the list of groupings.
If it's massive then you need a scalable solution.
Add a computed column (or separate domain column) to split the email address on @ and index that.
Then it's a simple COUNT.. GROUP BY
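To illustrate the split-and-group approach, here is a small Python sketch using sqlite3 (table and column names are mine; a real deployment would index the derived domain column, as suggested above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (address TEXT)")
conn.executemany(
    "INSERT INTO emails VALUES (?)",
    [("alice@gmail.com",), ("bob@gmail.com",), ("carol@aol.com",)],
)

# Derive the domain (everything from '@' onward) and group by it.
rows = conn.execute("""
    SELECT substr(address, instr(address, '@')) AS domain, count(*) AS n
    FROM emails
    GROUP BY domain
    ORDER BY n DESC
""").fetchall()

for domain, n in rows:
    print(domain, "=", n)
```

The substr/instr pair here plays the role of the computed column; in SQL Server you'd use CHARINDEX and SUBSTRING (or a persisted computed column) so the grouping can hit an index instead of scanning the table.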
If you use Oracle you can GROUP BY regexp_substr(mail_column,'@.*')