Azure Stream Analytics alternative to Spark's mapWithState - azure-stream-analytics

Is there a way in Azure Stream Analytics to create some aggregation with a custom state, like Spark's mapWithState does?
Here is my scenario:
I have data from IoT devices containing the following fields:
DeviceId
Position
Value
The data may arrive out of order.
Whenever a new packet arrives for a given DeviceId, I want to output the last n positions and values for that device. For example:
Input:
{ "DeviceId": "A", "Position": 10, "Value": 100}
Output:
{ "DeviceId": "A", "Positions": [10], "Value": [100]}
Next Input:
{ "DeviceId": "A", "Position": 11, "Value": 101}
Output:
{ "DeviceId": "A", "Positions": [10, 11], "Value": [100, 101]}
Next Input:
{ "DeviceId": "A", "Position": 9, "Value": 99}
Output:
{ "DeviceId": "A", "Positions": [9, 10, 11], "Value": [9, 100, 101]}
In Spark Structured Streaming I would implement this using groupBy and mapWithState. Is there a way to implement this in ASA?

In ASA, you can use one of the following methods to do this:
If you have an additional column that can be used as a timestamp, you can use TIMESTAMP BY and ASA will reorder the events. Then you can use LAG to fetch the latest events for this particular device.
Without any timestamp column, you can use the COLLECTTOP operator and order the events according to your "Position" column.
Alternatively, you can implement your own stateful logic using user-defined aggregates (UDA), as described here.
Let me know if you need help to implement one of these 3 methods. I'll be happy to provide further details.
Thanks,
JS

Related

Azure Data Factory JSON syntax

In Azure Data Factory, I have a copy activity. The data source is the response body from a REST API POST request.
The sink is a SQL table. The problem is that, even though my JSON data contains multiple rows, only the first row is getting copied.
The source data looks like the following:
{
"offset": 0,
"limit": 1000,
"total": 65,
"loaded": 34,
"unloaded": 31,
"cubeCaches": [
{
"id": "MxMUVDN0Q1MzAk5MDg6RDkxREQxMUU5RDBDNzR2NMTk6YWNsZGxwMTJtc3QuY2952aXppZW50aW5==",
"projectId": "15D91DD11E9D0C74B3319",
"source": {
"name": "12302021",
"id": "07EF95111EC7F954158",
"type": "cube"
},
"state": {
"active": true,
"dirty": false,
"infoDirty": false,
"persisted": true,
"processing": false,
"loadedState": "loaded"
},
"lastUpdateTime": "2022-01-24T14:22:30Z",
"lastHitTime": "2022-02-14T20:02:02Z",
"hitCount": 1,
"size": 798720,
"creatorId": "D4E8BFD56085",
"lastUpdateJob": 18937,
"openViewCount": 0,
"creationTime": "2022-01-24T15:07:24Z",
"historicHitCount": 22,
"dataLanguages": [],
"rowCount": 2726,
"columnCount": 9
},
{
"id": "UYwMTIxMUFNjkxMUU5RDBDMTRCNkMwMDgwRUYzNUQ0MUI6YWNsZjLmNvbQ==",
"projectId": "120D0C1480EF35D41B",
"source": {
"name": "All Clients (YTD)",
"id": "49E5B13466251CD0B54E8F",
"type": "cube"
},
"state": {
"active": true,
"dirty": false,
"infoDirty": false,
"persisted": true,
"processing": false,
"loadedState": "loaded"
},
"lastUpdateTime": "2022-01-03T01:00:01Z",
"hitCount": 0,
"size": 82488152,
"creatorId": "1E2AFB011E80EF35FF14",
"lastUpdateJob": 364091,
"openViewCount": 0,
"creationTime": "2022-02-14T01:04:55Z",
"historicHitCount": 0,
"dataLanguages": [],
"rowCount": 8146903,
"columnCount": 13
}
]
}
I want to add a row in the Sink table (SQL) for every "id" in the JSON. However, when I run the activity, only the first record gets copied. It's mapped correctly, but I want it to copy all rows in the JSON, not just 1.
My Mapping tab in Azure Data Factory looks like this:
What am I doing wrong here? I'm thinking there is something wrong with my "Source" syntax for each of the columns...
In $cubeCaches[0][...] you are explicitly mapping the first element of this array into columns, and that's why only one row lands in the Sink.
I don't know a way to achieve what you intend with the Copy activity alone. I would use a Mapping Data Flow here, and inside it I would flatten your data (Flatten transformation) to get one row per object in the array.
Then, from this flattened dataset, you could use a Derived Column transformation to map the JSON fields into the columns of your target, a Select transformation to remove the unwanted original fields, and a Sink to write the result to your target location.

Django rest_framework dynamic selecting other objects

I want to create an API for an order system. I have Ingredients and Products; both have categories which have to match if you want to combine them. So if a user selects a pizza, how can I load only the ingredients which are available for pizza, so the user can't select pasta as a topping on his pizza?
So if the user selects the product pizza, then in extra and extraWo only the ingredients that are available for pizza should show up.
Thank you for your help
It depends on the structure of your API.
For example:
[
{
"Crust": "NORMAL",
"Flavor": "BEEF-NORMAL",
"Order_ID": 1,
"Size": "M",
"Table_No": 1,
"Timestamp": "2018-12-12T13:42:13.704148+00:00"
},
{
"Crust": "THIN",
"Flavor": "CHEESE",
"Order_ID": 2,
"Size": "S",
"Table_No": 5,
"Timestamp": "2018-12-12T13:42:13.704148+00:00"
},
{
"Crust": "NORMAL",
"Flavor": "CHICKEN-FAJITA",
"Order_ID": 3,
"Size": "L",
"Table_No": 3,
"Timestamp": "2018-12-12T13:42:13.720690+00:00"
}
]
Each pizza would be unique, so foo[0]["Crust"] will give you "NORMAL". Add logic such as checking availability. Have a look at https://docs.djangoproject.com/en/3.1/topics/db/queries/#querying-jsonfield
For example do:
object.filter(pizza__normal='available')
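If the goal is simply to restrict the ingredient list to the selected product's category, another option is a filtered queryset in DRF rather than JSONField lookups. Below is a rough sketch; the model, serializer, and URL parameter names are all assumptions about your setup:
# models.py -- hypothetical models; your field names may differ
from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=50)

class Product(models.Model):
    name = models.CharField(max_length=50)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

class Ingredient(models.Model):
    name = models.CharField(max_length=50)
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

# views.py -- list only the ingredients whose category matches the chosen product
from rest_framework import generics
from .models import Ingredient, Product
from .serializers import IngredientSerializer  # assumed to exist

class AvailableIngredientsView(generics.ListAPIView):
    serializer_class = IngredientSerializer

    def get_queryset(self):
        product = Product.objects.get(pk=self.kwargs["product_id"])
        return Ingredient.objects.filter(category=product.category)
Wired to a URL such as /products/<int:product_id>/ingredients/, this returns only the toppings that are valid for the selected pizza.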

Scale Fargate service tasks to match CloudWatch metric

I'm using CloudWatch Metric Math to calculate the number of workers (tasks) that I want my Fargate service to be scaled to. I planned on creating an alarm in CloudWatch to trigger the scaling action once it rose above or below the target number of 0. However, it doesn't look like there is a way to create an alarm based on CloudWatch Metric Math, or an alarm that does any type of comparison between two numbers (number of tasks needed vs. number of tasks existing).
How can I set up a Fargate scaling policy to scale based on my existing metric of 'Workers Needed'?
Metric Math
Formula: m1-m2-3 == desired scale offset
m1: Active Workers (tasks)
m2: Workers Needed (tasks)
{
"type": "metric",
"x": 0,
"y": 0,
"width": 24,
"height": 6,
"properties": {
"metrics": [
[ { "expression": "m1-m2-3", "label": "Workers/Needed difference", "id": "e1" } ],
[ "AWS/ECS", "MemoryUtilization", "ServiceName", "worker-service", "ClusterName", "my-cluster", { "period": 60, "stat": "SampleCount", "id": "m1", "label": "Active Workers" } ],
[ "LogMetrics", "Workers Needed", { "period": 60, "stat": "Maximum", "id": "m2" } ]
],
"view": "timeSeries",
"stacked": false,
"region": "us-east-1",
"title": "Worker/Lab difference",
"period": 300
}
}
Edit: alarms based on metric math are now a thing; a boto3 sketch is just below, and the original answer follows it.
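Something along these lines should work with boto3 now; the alarm name, comparison direction, and the scaling-policy wiring are assumptions, while the metric queries mirror the widget from the question:
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm directly on the metric-math expression m1 - m2 - 3.
# This alarm covers the "too few workers" direction; a second alarm with the
# opposite ComparisonOperator would cover scale-in.
cloudwatch.put_metric_alarm(
    AlarmName="worker-scale-out",              # hypothetical name
    ComparisonOperator="LessThanThreshold",
    Threshold=0,
    EvaluationPeriods=1,
    Metrics=[
        {
            "Id": "e1",
            "Expression": "m1 - m2 - 3",
            "Label": "Workers/Needed difference",
            "ReturnData": True,                # the alarm evaluates this expression
        },
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/ECS",
                    "MetricName": "MemoryUtilization",
                    "Dimensions": [
                        {"Name": "ServiceName", "Value": "worker-service"},
                        {"Name": "ClusterName", "Value": "my-cluster"},
                    ],
                },
                "Period": 60,
                "Stat": "SampleCount",
            },
            "ReturnData": False,
        },
        {
            "Id": "m2",
            "MetricStat": {
                "Metric": {"Namespace": "LogMetrics", "MetricName": "Workers Needed"},
                "Period": 60,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        },
    ],
    AlarmActions=[],  # attach your scaling policy ARN here
)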
This doc page about metric math doesn't mention alarms at all; expressions seem to be more about visualizing with a dashboard. I also don't see anything about metric math in the SDK or CLI documentation as far as alarms are concerned.
Your next-simplest solution is probably paying homage to the great catch-all for all shortcomings of AWS and writing a Lambda that pulls the metrics, does the calculation, then publishes the result as a custom metric with PutMetricData. You can trigger this with a CloudWatch Event if you want a cron-like thing, or in many, many other ways by integrating it with SNS or just invoking it directly.
It's not the answer you want, but unfortunately I think it's the simplest way to get the functionality you want.
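A minimal sketch of that Lambda; the metric names and dimensions come from the widget in the question, while the custom namespace and metric name are assumptions:
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

def _latest(datapoints, stat):
    """Return the most recent datapoint value, or 0 if there is none."""
    if not datapoints:
        return 0
    return sorted(datapoints, key=lambda d: d["Timestamp"])[-1][stat]

def handler(event, context):
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(minutes=5)

    # m1: number of running tasks, approximated by the sample count of MemoryUtilization.
    active = cloudwatch.get_metric_statistics(
        Namespace="AWS/ECS",
        MetricName="MemoryUtilization",
        Dimensions=[
            {"Name": "ServiceName", "Value": "worker-service"},
            {"Name": "ClusterName", "Value": "my-cluster"},
        ],
        StartTime=start, EndTime=end, Period=60, Statistics=["SampleCount"],
    )["Datapoints"]

    # m2: the "Workers Needed" custom metric.
    needed = cloudwatch.get_metric_statistics(
        Namespace="LogMetrics",
        MetricName="Workers Needed",
        StartTime=start, EndTime=end, Period=60, Statistics=["Maximum"],
    )["Datapoints"]

    # m1 - m2 - 3: the desired scale offset from the question.
    offset = _latest(active, "SampleCount") - _latest(needed, "Maximum") - 3

    # Publish the offset so a plain threshold alarm can watch it.
    cloudwatch.put_metric_data(
        Namespace="Custom/Workers",               # hypothetical namespace
        MetricData=[{"MetricName": "WorkerScaleOffset", "Value": offset}],
    )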

AWS boto3 page_iterator.search can't compare datetime.datetime to str

I am trying to capture delta files (files created after the last processing run) sitting on S3. To do that I am using a boto3 filter iterator to query the LastModified value, rather than returning the whole list of files and filtering on the client side.
According to http://jmespath.org/?, the query below is valid and filters the following JSON response:
filtered_iterator = page_iterator.search(
    "Contents[?LastModified>='datetime.datetime(2016, 12, 27, 8, 5, 37, tzinfo=tzutc())'].Key")
for key_data in filtered_iterator:
    print(key_data)
However, it fails with:
RuntimeError: xxxxxxx has failed: can't compare datetime.datetime to str
Sample paginator response:
{
"Contents": [{
"LastModified": "datetime.datetime(2016, 12, 28, 8, 5, 31, tzinfo=tzutc())",
"ETag": "1022dad2540da33c35aba123476a4622",
"StorageClass": "STANDARD",
"Key": "blah1/blah11/abc.json",
"Owner": {
"DisplayName": "App-AWS",
"ID": "bfc77ae78cf43fd1b19f24f99998cb86d6fd8220dbfce0ce6a98776253646656"
},
"Size": 623
}, {
"LastModified": "datetime.datetime(2016, 12, 28, 8, 5, 37, tzinfo=tzutc())",
"ETag": "1022dad2540da33c35abacd376a44444",
"StorageClass": "STANDARD",
"Key": "blah2/blah22/xyz.json",
"Owner": {
"DisplayName": "App-AWS",
"ID": "bfc77ae78cf43fd1b19f24f99998cb86d6fd8220dbfce0ce6a81234e632c5a8c"
},
"Size": 702
}
]
}
The boto3 JMESPath implementation does not support filtering on dates (it will mark them as the incompatible types "unicode" and "datetime" in your example). But because of the way dates are parsed by Amazon, you can perform a lexicographical comparison of them using the to_string() method of JMESPath.
Something like this:
"Contents[?to_string(LastModified)>='\"2015-01-01 01:01:01+00:00\"']"
But keep in mind that it is a lexicographical comparison and not a date comparison. It works most of the time, though.
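For completeness, here is roughly what that looks like in a full paginator search; the bucket name is a placeholder:
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects")
page_iterator = paginator.paginate(Bucket="mytestbucket")  # placeholder bucket

# Stringify LastModified and compare it lexicographically against an ISO-like date.
filtered_iterator = page_iterator.search(
    "Contents[?to_string(LastModified)>='\"2016-12-27 08:05:37+00:00\"'].Key"
)
for key in filtered_iterator:
    print(key)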
After spending a few minutes with the boto3 paginator documentation, I realised it is actually a syntax problem that I had overlooked.
Actually, the quote that embraces the comparison value on the right is a backquote/backtick, the symbol [ ` ]. You cannot use a single quote [ ' ] for the comparison values/objects.
After inspecting the JMESPath examples, I noticed they use a backquote for the comparison value. So the boto3 paginator implementation does indeed comply with the JMESPath standard.
Here is the code I ran without error using the backquote:
import boto3
s3 = boto3.client("s3")
s3_paginator = s3.get_paginator('list_objects')
s3_iterator = s3_paginator.paginate(Bucket='mytestbucket')
filtered_iterator = s3_iterator.search(
    "Contents[?LastModified >= `datetime.datetime(2016, 12, 27, 8, 5, 37, tzinfo=tzutc())`].Key"
)
for key_data in filtered_iterator:
    print(key_data)

BigQuery: Create column of JSON datatype

I am trying to load json with the following schema into BigQuery:
{
  key_a: value_a,
  key_b: {
    key_c: value_c,
    key_d: value_d
  },
  key_e: {
    key_f: value_f,
    key_g: value_g
  }
}
The keys under key_e are dynamic, i.e. in one response key_e will contain key_f and key_g, and in another response it will instead contain key_h and key_i. New keys can be created at any time, so I cannot create a record with nullable fields for all possible keys.
Instead I want to create a column with JSON datatype that can then be queried using the JSON_EXTRACT() function. I have tried loading key_e as a column with STRING datatype but value_e is analysed as JSON and so fails.
How can I load a section of JSON into a single BigQuery column when there is no JSON datatype?
Having your JSON as a single string column inside BigQuery is definitely an option. If you have a large volume of data, this can end up with a high query price, as all your data will end up in one column, and the actual querying logic can become quite messy.
If you have the luxury of slightly changing your "design", I would recommend considering the one below, where you can employ REPEATED mode.
Table schema:
[
  { "name": "key_a", "type": "STRING" },
  { "name": "key_b", "type": "RECORD", "mode": "REPEATED",
    "fields": [
      { "name": "key",   "type": "STRING" },
      { "name": "value", "type": "STRING" }
    ]
  },
  { "name": "key_e", "type": "RECORD", "mode": "REPEATED",
    "fields": [
      { "name": "key",   "type": "STRING" },
      { "name": "value", "type": "STRING" }
    ]
  }
]
Example of JSON to load
{"key_a": "value_a1", "key_b": [{"key": "key_c", "value": "value_c"}, {"key": "key_d", "value": "value_d"}], "key_e": [{"key": "key_f", "value": "value_f"}, {"key": "key_g", "value": "value_g"}]}
{"key_a": "value_a2", "key_b": [{"key": "key_x", "value": "value_x"}, {"key": "key_y", "value": "value_y"}], "key_e": [{"key": "key_h", "value": "value_h"}, {"key": "key_i", "value": "value_i"}]}
Please note: it should be newline-delimited JSON, so each row must be on one line.
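If it helps, here is a rough sketch of loading that newline-delimited JSON with the google-cloud-bigquery Python client; the file name and the dataset/table id are placeholders:
from google.cloud import bigquery

client = bigquery.Client()

# Mirrors the table schema above: dynamic keys stored as REPEATED key/value RECORDs.
schema = [
    bigquery.SchemaField("key_a", "STRING"),
    bigquery.SchemaField("key_b", "RECORD", mode="REPEATED", fields=[
        bigquery.SchemaField("key", "STRING"),
        bigquery.SchemaField("value", "STRING"),
    ]),
    bigquery.SchemaField("key_e", "RECORD", mode="REPEATED", fields=[
        bigquery.SchemaField("key", "STRING"),
        bigquery.SchemaField("value", "STRING"),
    ]),
]

job_config = bigquery.LoadJobConfig(
    schema=schema,
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)

with open("data.json", "rb") as source_file:  # placeholder file name
    job = client.load_table_from_file(
        source_file, "my_dataset.my_table", job_config=job_config
    )
job.result()  # wait for the load job to finish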
You can't do this directly with BigQuery, but you can make it work in two passes:
(1) Import your JSON data as a CSV file with a single string column.
(2) Transform each row to pack your "any-type" field into a string. Write a UDF that takes a string and emits the final set of columns you would like. Append the output of this query to your target table.
Example
I'll start with some JSON:
{"a": 0, "b": "zero", "c": { "woodchuck": "a"}}
{"a": 1, "b": "one", "c": { "chipmunk": "b"}}
{"a": 2, "b": "two", "c": { "squirrel": "c"}}
{"a": 3, "b": "three", "c": { "chinchilla": "d"}}
{"a": 4, "b": "four", "c": { "capybara": "e"}}
{"a": 5, "b": "five", "c": { "housemouse": "f"}}
{"a": 6, "b": "six", "c": { "molerat": "g"}}
{"a": 7, "b": "seven", "c": { "marmot": "h"}}
{"a": 8, "b": "eight", "c": { "badger": "i"}}
Import it into BigQuery as a CSV with a single STRING column (I called it 'blob'). I had to set the delimiter character to something arbitrary and unlikely (thorn -- 'þ') or it tripped over the default ','.
Verify your table imported correctly. You should see your simple one-column schema and the preview should look just like your source file.
Next, we write a query to transform it into your desired shape. For this example, we'd like the following schema:
a (INTEGER)
b (STRING)
c (STRING -- packed JSON)
We can do this with a UDF:
// Map a JSON string column ('blob') => { a (integer), b (string), c (json-string) }
bigquery.defineFunction(
  'extractAndRepack',                  // Name of the function exported to SQL
  ['blob'],                            // Names of input columns
  [{'name': 'a', 'type': 'integer'},   // Output schema
   {'name': 'b', 'type': 'string'},
   {'name': 'c', 'type': 'string'}],
  function (row, emit) {
    var parsed = JSON.parse(row.blob);
    var repacked = JSON.stringify(parsed.c);
    emit({a: parsed.a, b: parsed.b, c: repacked});
  }
);
And a corresponding query:
SELECT a, b, c FROM extractAndRepack(JsonAnyKey.raw)
Now you just need to run the query (selecting your desired target table) and you'll have your data in the form you like.
Row a b c
1 0 zero {"woodchuck":"a"}
2 1 one {"chipmunk":"b"}
3 2 two {"squirrel":"c"}
4 3 three {"chinchilla":"d"}
5 4 four {"capybara":"e"}
6 5 five {"housemouse":"f"}
7 6 six {"molerat":"g"}
8 7 seven {"marmot":"h"}
9 8 eight {"badger":"i"}
One way to do it is to load this file as CSV instead of JSON (and quote the values or eliminate the newlines in the middle); then it will become a single STRING column inside BigQuery.
P.S. You are right that having a native JSON data type would have made this scenario much more natural, and the BigQuery team is well aware of it.