Firebase query does not allow order on multiple properties? - objective-c

I'm trying to fetch some data from firebase. In my object, I have
Recent
|_ UniversityId
   |_ objectiId1
   |  |_ userId: 1
   |  |_ timestamp: 143242344
   |_ objectiId2
      |_ userId: 1
      |_ timestamp: 143243222
My querying path is http://firbasedbname.com/Recent/UniversityId. I need to fetch the entries whose userId is equal to 1 and order that set by timestamp. I tried the following:
FIRDatabaseQuery *query = [[firebase queryOrderedByChild:@"userId"] queryEqualToValue:@"1"];
This fetches the users correctly. But is there a possible way to order this set by timestamp? To do that, I tried putting in another queryOrderedByChild, but it says it can be used only once. How can I fix this?

queryOrderedByChild can be used only once.
A workaround would be to store an extra composite child such as userId_timestamp:
|_ objectiId1
|  |_ userId: 1
|  |_ timestamp: 143242344
|  |_ userId_timestamp: 1_143242344
|_ objectiId2
   |_ userId: 1
   |_ timestamp: 143243222
   |_ userId_timestamp: 1_143243222
Then try:
FIRDatabaseQuery *query = [[firebase queryOrderedByChild:@"userId_timestamp"] queryEqualToValue:@"1_143242344"];
check this out https://youtu.be/sKFLI5FOOHs?t=541
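If you want every entry for userId 1 ordered by timestamp, rather than one exact match, a range over the composite key is a common variant; a minimal sketch, assuming the same userId_timestamp child as above:
// Order by the composite key, then bound the range to user 1's entries;
// \uf8ff sorts after any other character, so this spans all of user 1's timestamps.
FIRDatabaseQuery *query =
    [[[firebase queryOrderedByChild:@"userId_timestamp"]
        queryStartingAtValue:@"1_"]
        queryEndingAtValue:@"1_\uf8ff"];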
Another way to do it would be to restructure the data:
|_objectiId1
|_ userId1:
| |_ objectId11:143242344
| |_ objectId12:143243222
|_ userId2:
Then the querying path is http://firbasedbname.com/Recent/UniversityId/userId1, and you order by value.
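A minimal sketch of that second approach in Objective-C (the path and node names above are assumed for illustration):
// Each child under userId1 is objectId -> timestamp, so ordering by value
// returns that user's objects sorted by timestamp.
FIRDatabaseReference *userRef =
    [[FIRDatabase database] referenceWithPath:@"Recent/UniversityId/userId1"];
FIRDatabaseQuery *query = [userRef queryOrderedByValue];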

Related

Defender KQL to show blocked Bluetooth Devices with all relevant fields

I'm trying to write a query to report on BluetoothPolicyTriggered events that will return all the details showing when a device was blocked by policy AND the details of that device.
Our BT policy should basically allow everything but block file transfer over BT. That seems to be working as expected, but before rolling it out wider, I want a quick way to 'see' whether any other devices are being blocked incorrectly, or to be able to refer back to it if a user reports an issue, so we can get all the details of the blocked device to add an exception, etc.
However (and I'm new to KQL), it seems that once I filter a table on an 'ActionType', the columns available to report on are restricted, and in this case we lose the details of the BT device that has been blocked.
This shows all events that have triggered the policy, and whether each was 'accepted' or 'blocked', but not the details of the device:
search in (DeviceEvents) ActionType == "BluetoothPolicyTriggered"
| extend parsed=parse_json(AdditionalFields)
| extend Result = tostring(parsed.Accepted)
| extend BluetoothMACAddress = tostring(parsed.BluetoothMacAddress)
| extend PolicyName = tostring(parsed.PolicyName)
| extend PolicyPath = tostring(parsed.PolicyPath)
| summarize arg_max(Timestamp, *) by DeviceName, BluetoothMACAddress
| sort by Timestamp desc
| project Timestamp, DeviceName, DeviceId, Result, ActionType, BluetoothMACAddress, PolicyPath, PolicyName, ReportId
Then I have this, which shows every BT connection and the device details I'm looking for, but not whether it was blocked or accepted:
DeviceEvents
| extend parsed=parse_json(AdditionalFields)
| extend MediaClass = tostring(parsed.ClassName)
| extend MediaDeviceId = tostring(parsed.DeviceId)
| extend MediaDescription = tostring(parsed.DeviceDescription)
| extend MediaSerialNumber = tostring(parsed.SerialNumber)
| where MediaClass == "Bluetooth"
| project Timestamp, DeviceId, DeviceName, MediaClass, MediaDeviceId, MediaDescription, parsed
| order by Timestamp desc
I've been trying to somehow join these together (despite both coming from the same DeviceEvents table) with not much success. I don't trust the output, as I'm seeing entries saying a device was blocked when I know it wasn't.
DeviceEvents
| where ActionType == "BluetoothPolicyTriggered"
| extend parsed=parse_json(AdditionalFields)
| extend Result = tostring(parsed.Accepted)
| extend BluetoothMACAddress = tostring(parsed.BluetoothMacAddress)
| extend PolicyName = tostring(parsed.PolicyName)
| extend PolicyPath = tostring(parsed.PolicyPath)
| project Timestamp, DeviceName, DeviceId, Result, ActionType, BluetoothMACAddress, PolicyPath, PolicyName, ReportId
| join kind=inner (DeviceEvents
| extend parsed=parse_json(AdditionalFields)
| extend MediaClass = tostring(parsed.ClassName)
| extend MediaDeviceId = tostring(parsed.DeviceId)
| extend MediaDescription = tostring(parsed.DeviceDescription)
| extend MediaSerialNumber = tostring(parsed.SerialNumber)
) on DeviceName
| where MediaClass == "Bluetooth"
| project Timestamp, DeviceName, Result, ActionType, MediaClass, MediaDeviceId, MediaDescription,BluetoothMACAddress
| sort by Timestamp desc
Am I going about this completely wrong?
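One pattern that might help, sketched under the assumption that the event carrying the device details is logged within a short window of the BluetoothPolicyTriggered event (the 60-second window is arbitrary), is to join on DeviceName and then keep only the pairs whose timestamps are close together:
DeviceEvents
| where ActionType == "BluetoothPolicyTriggered"
| extend parsed = parse_json(AdditionalFields)
| extend Result = tostring(parsed.Accepted)
| extend BluetoothMACAddress = tostring(parsed.BluetoothMacAddress)
| project PolicyTime = Timestamp, DeviceName, Result, BluetoothMACAddress
| join kind=inner (
    DeviceEvents
    | extend parsed = parse_json(AdditionalFields)
    | where tostring(parsed.ClassName) == "Bluetooth"
    | project MediaTime = Timestamp, DeviceName,
        MediaDeviceId = tostring(parsed.DeviceId),
        MediaDescription = tostring(parsed.DeviceDescription)
) on DeviceName
// keep only the device-detail rows logged close to each policy event
| where abs(datetime_diff('second', PolicyTime, MediaTime)) <= 60
| project PolicyTime, DeviceName, Result, BluetoothMACAddress, MediaDeviceId, MediaDescription
| sort by PolicyTime desc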

Create a Join in Azure Resource Graph

I'm new to KQL and wondering if anyone knows how to do a join using the query tables below?
policyresources
| where type == "microsoft.authorization/policysetdefinitions"
| join advisorresources
Just looking at the graph resources
https://learn.microsoft.com/en-us/azure/governance/resource-graph/samples/samples-by-table?tabs=azure-cli#advisorresources
policyresources
| where type == "microsoft.authorization/policysetdefinitions"
| extend resourceId = tostring(properties.resourceId)
| join (advisorresources | extend resourceId = tostring(properties.resourceMetadata.resourceId)) on resourceId
may work. I'm not 100% sure resourceId is the common key, but it's worth a try.
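If that join comes back empty, it may be worth inspecting what each side's resourceId actually looks like before joining; a quick sketch (Resource Graph supports take for sampling a few rows, and the projected columns are only illustrative):
advisorresources
| extend resourceId = tostring(properties.resourceMetadata.resourceId)
| project name, type, resourceId
| take 5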

Convert disk size from megabytes to gigabytes in KQL query

I have the following query that gets me data from a VM disk:
InsightsMetrics
| where Namespace == "LogicalDisk"
| extend Tags = todynamic(Tags)
| extend Drive=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
| extend DiskSize=tostring(todynamic(Tags)["vm.azm.ms/diskSizeMB"])
| summarize
Free_space_percentage = avgif(Val, Name == 'FreeSpacePercentage'),
Free_Gigabytes = avgif(Val, Name == 'FreeSpaceMB') /1024
by Computer, Drive
| join (
InsightsMetrics
| where Namespace == "LogicalDisk"
| extend Tags = todynamic(Tags)
| extend DiskSize=tostring(todynamic(Tags)["vm.azm.ms/diskSizeMB"])
| extend Drive=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
) on Computer, Drive
| where DiskSize has "."
| summarize by Computer,Drive , Free_space_percentage, Free_Gigabytes, DiskSize
The issue is that DiskSize is displayed in megabytes while everything else is in gigabytes. I have spent several hours trying to convert it to gigabytes without luck. Could someone show me where and how I should do the conversion in my query?
It seems your issue is not with converting MB to GB, but with structuring a query that will give you the average values as well as the disk size.
Assuming the disks' sizes are not changed during the query period, take_any() will do the trick.
InsightsMetrics
// | where TimeGenerated between(datetime(2022-04-01) .. datetime(2022-04-01 00:00:10))
| where Namespace == "LogicalDisk"
| extend Tags = todynamic(Tags)
| extend Drive = tostring(Tags["vm.azm.ms/mountId"])
| extend diskSizeGB = Tags["vm.azm.ms/diskSizeMB"]/1024.0
| summarize
avg_FreeSpacePercentage = avgif(Val, Name == 'FreeSpacePercentage')
,avg_FreeSpaceGB = avgif(Val, Name == 'FreeSpaceMB') /1024
,take_any(diskSizeGB)
by Computer, Drive
Computer                     | Drive | avg_FreeSpacePercentage | avg_FreeSpaceGB | diskSizeGB
DC00.na.contosohotels.com    | C:    | 74.9538803100586        | 94.8232421875   | 126.50976181030273
DC00.na.contosohotels.com    | D:    | 91.4168853759766        | 14.6240234375   | 15.998043060302734
SQL12.na.contosohotels.com   | C:    | 57.7019577026367        | 72.998046875    | 126.50976181030273
SQL12.na.contosohotels.com   | D:    | 92.02197265625          | 29.4443359375   | 31.998043060302734
SQL12.na.contosohotels.com   | F:    | 99.9144668579102        | 127.7626953125  | 127.87304306030273
AppBE01.na.contosohotels.com | C:    | 73.2973098754883        | 92.7275390625   | 126.50976181030273
AppBE01.na.contosohotels.com | D:    | 91.3375244140625        | 14.611328125    | 15.998043060302734
Fiddle
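If the long decimals are distracting in the output above, rounding can be folded into the same summarize; a small sketch of just that part of the query above, using KQL's round():
| summarize
    avg_FreeSpacePercentage = round(avgif(Val, Name == 'FreeSpacePercentage'), 2)
    ,avg_FreeSpaceGB = round(avgif(Val, Name == 'FreeSpaceMB') / 1024, 2)
    ,take_any(diskSizeGB)
    by Computer, Drive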

Deep dive Azure Log analytics cost using KQL query

I'm running the following Log Analytics Kusto query to see what uses, and thus generates, our Log Analytics cost:
Usage
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) by Solution, DataType
| sort by Solution asc, DataType asc
and the output is the following:
What kind of query should I use if I want to dive deeper, e.g. into ContainerInsights/InfrastructureInsights/ServiceMap/VMInsights/LogManagement, to get more detailed data on which names or namespaces really drive the cost?
The InsightsMetrics table has, for example, these names and namespaces.
I was maybe able to get something out using the following query, but something is still missing. I'm not totally sure whether I'm on the right track:
union withsource = tt *
| where _IsBillable == true
| extend Namespace, Name
Here is the code for getting the name and namespace details using a Kusto query:
let startTimestamp = ago(1h);
KubePodInventory
| where TimeGenerated > startTimestamp
| project ContainerID, PodName=Name, Namespace
| where PodName contains "name" and Namespace startswith "namespace"
| distinct ContainerID, PodName
| join
(
ContainerLog
| where TimeGenerated > startTimestamp
)
on ContainerID
// at this point before the next pipe, columns from both tables are available to be "projected". Due to both
// tables having a "Name" column, we assign an alias as PodName to one column which we actually want
| project TimeGenerated, PodName, LogEntry, LogEntrySource
| summarize by TimeGenerated, LogEntry
| order by TimeGenerated desc
For more information you can go through the Microsoft documentation, and here is the Kusto Query Tutorial.
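If the goal is first to see which tables drive the billable volume before drilling into names and namespaces, a sketch along these lines may also help, assuming the standard _IsBillable and _BilledSize columns are populated in your workspace:
union withsource = TableName *
| where _IsBillable == true
// _BilledSize is in bytes, so convert to GB
| summarize BillableDataGB = sum(_BilledSize) / pow(1024, 3) by TableName
| sort by BillableDataGB desc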

How to configure DBT sources from Big Query EXTERNAL_QUERY

In BigQuery, I am using an external connection / federated SQL query (Cloud SQL) from which I can get data with SELECT * FROM EXTERNAL_QUERY("gcp-name.europe-west3.friendly_name", "SELECT * FROM database_name.external_table;")
Now my question is: in dbt, how do I define this source in my schema.yml file, and what should my FROM {{source(...,...)}} statement look like?
From my perspective right now, given my comments above on the current state of the dbt-external-tables package (which I don't believe meets your needs), I would say you have two options:
Define your external dependencies as static views in a custom schema and then import as dbt sources.
Define your external dependencies within dbt using something like Evaluate(<select *>) and then ref() those like normal in your transform / stage layer.
Example of #1
* my-bq-project-id
  |
  |_ dbt_schema
  |
  |_ external_db_schema
     |
     |_ external_table_1
     |_ external_table_2
  etc.
And then you'd have:
* my-dbt-project-dir
|
|_ analysis
|_ data
|_ models
| |_ sources
| | |
| | |> my_external_table_1.yml
| | |> my_external_table_2.yml
| |
| |_ transforms
| |_ final
|_ dbt_project.yml
|_ readme.md
Where "my_external_table_1.yml" looks like:
sources:
  - name: external_db_schema
    database: my-bq-project-id
    tables:
      - name: my_external_table_1
        description: "Lorem Ipsum"
And your static view is defined by running a query like:
create view if not exists `my-bq-project-id.external_db_schema.my_external_table_1` as
( SELECT * FROM EXTERNAL_QUERY("gcp-name.europe-west3.friendly_name",
"SELECT * FROM database_name.external_table;"))
Example of #2
Just make a base-level dbt model that does exactly what you are describing, with a 1:1 object mapping:
my_external_table_1.sql
execute immediate (
SELECT * FROM EXTERNAL_QUERY("gcp-name.europe-west3.friendly_name",
"SELECT * FROM database_name.external_table;")
)
And then from here you'll be able to ref('my_external_table_1') in your transform layer etc.
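Since EXTERNAL_QUERY can also be selected from directly, a plain select in the model body may be enough in practice, because dbt wraps the model SQL in a view or table for you; a sketch of my_external_table_1.sql under that assumption, reusing the connection and query strings from above:
-- my_external_table_1.sql: dbt materializes this as a view/table in its target schema
SELECT *
FROM EXTERNAL_QUERY(
  "gcp-name.europe-west3.friendly_name",
  "SELECT * FROM database_name.external_table;"
)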