Stream Analytics: small rules on a high volume of device data - azure-stream-analytics

We have the following situation.
We have multiple devices sending data to an event hub (the interval is one second).
We have a lot of small Stream Analytics rules for alarm checks. Each rule applies to a small subset of the devices.
Example:
10,000 devices sending data every second.
Rules for roughly 10 devices.
Our problem:
Each Stream Analytics query processes all of the input data, although the job only needs to handle a small subset of it. Each query filters on device id and discards most of the data, so we need a large number of streaming units, which leads to high Stream Analytics costs.
Our first idea was to create an event hub for each query. However, each event hub requires at least one throughput unit, which also leads to high costs.
What is the best solution in our case?

One possible solution would be to use IoT Hub and create a dedicated endpoint with a specific route for the devices you want to monitor.
Have a look at this blog post to see if this will work for your particular scenario: https://azure.microsoft.com/en-us/blog/azure-iot-hub-message-routing-enhances-device-telemetry-and-optimizes-iot-infrastructure-resources/
Then, in Azure Stream Analytics, you can use this endpoint as the input.
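For illustration, the route condition could be built from the list of monitored devices roughly like this (the device ids are placeholders, and the resulting string would go into the route's query via the portal, CLI, or an ARM template; $connectionDeviceId is the system property IoT Hub stamps on device-to-cloud messages):

    # Sketch: build an IoT Hub route condition that matches only the ~10
    # monitored devices, so only their telemetry reaches the alarm endpoint.
    monitored_devices = ["dev-0001", "dev-0002", "dev-0003"]  # placeholder ids

    # OR-ing simple equality checks keeps the condition within the routing
    # query syntax; $connectionDeviceId identifies the sending device.
    condition = " OR ".join(
        f"$connectionDeviceId = '{device_id}'" for device_id in monitored_devices
    )

    print(condition)
    # $connectionDeviceId = 'dev-0001' OR $connectionDeviceId = 'dev-0002' OR ...

The Stream Analytics job then only receives messages from those devices, so the filter no longer has to run over the full 10,000-device stream.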
Thanks,
JS (Azure Stream Analytics team)

Related

How to enrich events using a very large database with Azure Stream Analytics?

I'm in the process of evaluating Azure Stream Analytics as a replacement for a stream processing solution based on NiFi and some REST microservices.
One step is the enrichment of sensor data from a very large database of sensors (>120 GB).
Is it possible with Azure Stream Analytics? I tried with a very small subset of the data (60 MB) and couldn't even get it to run.
The job logs give me warnings about memory usage being too high. I tried scaling to 36 streaming units to see if it was even possible, to no avail.
What strategies do I have to make it work?
If I deterministically (via a hash function) partition the input stream into N partitions by ID, and partition the database using the same hash function (so that the ID on the stream and the ID in the database land in the same partition), can I make this work? Do I need to create several separate Stream Analytics jobs to be able to do that?
I suppose I can use 5 GB chunks, but I could not get it to work with an ADLS Gen2 data lake. Does it really only work with Azure SQL?
Stream Analytics supports reference datasets of up to 5 GB. Please note that large reference datasets come with the downside of making job/node restarts very slow (up to 20 minutes for the reference data to be distributed; restarts may be user initiated, due to service updates, or due to various errors).
If you can downsize that 120 GB to 5 GB (scoping only the columns and rows you need, and converting to types that are smaller in size), then you should be able to run that workload. Sadly we don't support partitioned reference data yet. This means that as of now, if you have to use ASA and can't reduce those 120 GB, you will have to deploy one distinct job for each subset of stream/reference data.
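If you do go with one job per subset, a sketch like the following could do the deterministic split of the reference data (the column name, bucket count, and file layout are assumptions; the same hash function would have to be applied on the stream side so matching IDs end up in the same job):

    import csv
    import hashlib

    N_BUCKETS = 32  # assumption: enough buckets to keep each slice well under 5 GB

    def bucket_for(sensor_id: str) -> int:
        """Deterministic bucket for an ID, stable across runs and machines."""
        digest = hashlib.sha256(sensor_id.encode("utf-8")).hexdigest()
        return int(digest, 16) % N_BUCKETS

    # Split a hypothetical sensors.csv (with an "id" column) into one
    # reference file per bucket, i.e. one file per Stream Analytics job.
    writers, files = {}, {}
    with open("sensors.csv", newline="") as src:
        reader = csv.DictReader(src)
        for row in reader:
            b = bucket_for(row["id"])
            if b not in writers:
                files[b] = open(f"sensors_bucket_{b:02d}.csv", "w", newline="")
                writers[b] = csv.DictWriter(files[b], fieldnames=reader.fieldnames)
                writers[b].writeheader()
            writers[b].writerow(row)

    for f in files.values():
        f.close()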
Now, I'm surprised you couldn't get 60 MB of reference data to run; if you have details on what exactly went wrong, I'm happy to provide guidance.

Best technology for building race simulation application

I am trying to do something new, something I have never done before. I am looking for advice, or for someone to point me in the right direction on how to choose the technology. I am trying to build a race simulation app that will have thousands of IoT devices streaming data into a central platform. While I understand that I can use some sort of IoT hub with cloud providers, what technology do I choose for storing the data?
An example is an online indoor biking app. There are apps where you can connect your indoor bike online and take part in a simulated race. For my project I am trying to build something similar. Do I use a NoSQL DB in this scenario? What technology will allow an application like this to scale better, since it could be millions of devices around the world in a "simulated" race? I am not worried about the front end and things like that, but about the backend, the IoT hub, storing the data, and presenting it in real time.
At this point it is important to understand what kind of data your IoT devices will stream, and at what rate. This will have a significant impact on the answer.
That is, if it's just location information and some other small data sent, let's say, once a second, and you're talking about tens of thousands of devices, this is not a big load of information, and any standard database, like MySQL, will be able to deal with it. You will of course need multi-threaded server(s) capable of handling many requests in parallel.
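As a rough sketch of that kind of write path (the table, columns, and credentials are made up, and mysql-connector-python is just one possible driver), batching a second's worth of location pings into a single multi-row insert keeps the per-request overhead low:

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(
        host="localhost", user="race", password="secret", database="race_sim"
    )
    cursor = conn.cursor()

    # Hypothetical batch: one second's worth of pings from many devices.
    pings = [
        ("device-001", 51.5074, -0.1278, 34.2, "2024-01-01 12:00:00"),
        ("device-002", 48.8566, 2.3522, 29.8, "2024-01-01 12:00:00"),
    ]

    # One batched INSERT per interval instead of one round trip per device.
    cursor.executemany(
        "INSERT INTO pings (device_id, lat, lon, speed_kmh, recorded_at) "
        "VALUES (%s, %s, %s, %s, %s)",
        pings,
    )
    conn.commit()
    cursor.close()
    conn.close()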
If your IoT devices will stream HD video, then you're looking at a completely different solution, with a much stronger server capable of handling a lot of streams in parallel, significant bandwidth requirements from your hosting company, and storage space for all the videos. In this case you will store the streams as files (if you need them later on), and you won't need any special database either.
In any case, once you reach millions of users, you'll be able to scale most modern databases and servers, for example using MySQL's replication capability. Take a look at how Wikipedia relies on MySQL: https://www.mysql.com/why-mysql/case-studies/mysql-cs-wikipedia.html
So I wouldn't be worried about the database at this stage, but I would make sure that the design of the system is in accordance with the type of data and the rate at which it is streamed.
Hope this gives you a pointer.

Sending data from website to BigQuery using Pub/Sub and Cloud Functions

Here's what I'm trying to accomplish
A visitor lands on my website
Javascript collects some information and sends a hit
The hit is processed and inserted into BigQuery
And here's how I have planned to solve it
The hit is sent to Cloud Functions HTTP trigger (using Ajax)
Cloud Functions sends a message to Pub/Sub
Pub/Sub sends data to another Cloud Function using a Pub/Sub trigger
The second Cloud Function processes the hit into a BigQuery row and inserts it into BigQuery
Is there a simpler way to solve this?
Some other details to take into account
There are around 1 million hits a day
We don't want to use Cloud Dataflow because it inflates the costs.
We probably can't skip Pub/Sub, because some hits are sent when a person is leaving the site and the request might not have enough time to process everything.
You can use BigQuery streaming inserts; this is less expensive and you avoid hitting the load jobs quota of 1,000 per table per day.
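For example, the second Cloud Function from your plan could do the streaming insert roughly like this (Python runtime assumed; the table id and hit fields are placeholders):

    import base64
    import json

    from google.cloud import bigquery  # pip install google-cloud-bigquery

    bq = bigquery.Client()
    TABLE_ID = "my-project.analytics.hits"  # placeholder table

    def process_hit(event, context):
        """Background Cloud Function triggered by the Pub/Sub topic.

        Decodes the hit published by the HTTP function and streams it into
        BigQuery via the streaming-insert API.
        """
        hit = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

        row = {
            "page": hit.get("page"),
            "user_agent": hit.get("user_agent"),
            "timestamp": hit.get("timestamp"),
        }

        errors = bq.insert_rows_json(TABLE_ID, [row])
        if errors:
            # Raising makes the function fail so Pub/Sub can retry the message.
            raise RuntimeError(f"BigQuery insert errors: {errors}")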
Another option, if you don't mind the data taking a long time to load, is to store all the info in a Cloud Storage bucket and then load the data with a transfer or load job. You can schedule it so that the data is uploaded daily. This solution is focused on a batch environment in which you store all the info on one side and then transfer it to the final destination. If you only want streaming, the solution you mentioned is fine.
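The batch variant could then look roughly like this (the bucket, path, and table are placeholders), loading a day's worth of newline-delimited JSON files from the bucket in one job:

    from google.cloud import bigquery

    bq = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    # Placeholder bucket/prefix: one day's worth of accumulated hit files.
    load_job = bq.load_table_from_uri(
        "gs://my-hits-bucket/2024-01-01/*.json",
        "my-project.analytics.hits",
        job_config=job_config,
    )
    load_job.result()  # wait for the load job to finish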
It’s up to you to choose the option that best fits your specific usage.

Combining messages from multiple devices

Is there a recommended way in the Azure ecosystem to join the JSON messages sent by two or more separate devices at approximately the same time, in order to run them through, for example, an Azure ML web service?
The goal of this would be running a real time analysis with data coming from multiple devices.
Thank you
Edit:
Perhaps I should have phrased my question better, but I am currently using Azure Stream Analytics to capture the data sent from a device to Azure ML, which works fine (following learn.microsoft.com/en-us/azure/iot-hub/…). Now I want to do the same thing, but with multiple devices that each send part of the information that Azure ML needs.
I think what you are looking for is Azure Stream Analytics which allows you to work on windows of time.
This article shows how to integrate ASA with Machine Learning.
And you can easily set the input of an ASA job to an IoT Hub.
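This is not ASA syntax, but as a plain-Python illustration of the windowed-join idea (the fields, timestamps, and the two-second tolerance are made up): two messages count as one observation when their timestamps are close enough, and only the merged record would be sent to the Azure ML web service.

    from datetime import datetime, timedelta

    TOLERANCE = timedelta(seconds=2)  # assumption for "approximately the same time"

    # Hypothetical messages, already parsed from JSON, one list per device.
    device_a = [{"ts": datetime(2024, 1, 1, 12, 0, 0), "temp": 21.5}]
    device_b = [{"ts": datetime(2024, 1, 1, 12, 0, 1), "humidity": 48.0}]

    def pair_messages(a_msgs, b_msgs, tolerance=TOLERANCE):
        """Yield merged records for message pairs whose timestamps are close enough."""
        for a in a_msgs:
            for b in b_msgs:
                if abs(a["ts"] - b["ts"]) <= tolerance:
                    # The merged record is what would be scored by Azure ML.
                    yield {**a, **b, "ts": max(a["ts"], b["ts"])}

    for record in pair_messages(device_a, device_b):
        print(record)

In ASA itself this corresponds to a JOIN between the two device streams with a DATEDIFF condition that bounds the time window.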

Is there a way to leverage Hadoop tools to manage parallel REST API calls to external sources?

I am writing software that creates a large graph database. The software needs to access dozens of different REST APIs with millions of total requests. The data will then be processed by a Hadoop cluster. Each of these APIs has rate limits that vary by requests per second, per window, per day, and per user (typically via OAuth).
Does anyone have any suggestions on how I might use either a Map function or another Hadoop-ecosystem tool to manage these queries? The goal would be to leverage the parallel processing in Hadoop.
Because of the varied rate limits, it often makes sense to switch to a different API query while waiting for the first limit to reset. An example would be one API call that creates nodes in the graph and another that enriches the data for that node. I could have the system go out and enrich the data for the new nodes while waiting for the first API limit to reset.
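To make the idea concrete, here is a small sketch outside of Hadoop (the API names, limits, and work queues are made up): each API gets its own token bucket, and a worker simply picks whichever API currently has budget instead of blocking on the one that is throttled.

    import time

    class TokenBucket:
        """Minimal token bucket: `rate` requests are refilled every `per` seconds."""

        def __init__(self, rate: int, per: float):
            self.rate, self.per = rate, per
            self.tokens = rate
            self.last_refill = time.monotonic()

        def try_acquire(self) -> bool:
            now = time.monotonic()
            if now - self.last_refill >= self.per:
                self.tokens = self.rate
                self.last_refill = now
            if self.tokens > 0:
                self.tokens -= 1
                return True
            return False

    # Hypothetical APIs with different limits: one creates graph nodes,
    # the other enriches existing nodes.
    buckets = {
        "create_nodes": TokenBucket(rate=5, per=1.0),     # 5 requests/second
        "enrich_nodes": TokenBucket(rate=100, per=60.0),  # 100 requests/minute
    }
    work_queues = {
        "create_nodes": ["node-1", "node-2"],
        "enrich_nodes": ["node-0"],
    }

    def run_one_pass():
        """Do whatever work is currently allowed; skip APIs that are throttled."""
        for api, queue in work_queues.items():
            while queue and buckets[api].try_acquire():
                item = queue.pop(0)
                print(f"calling {api} for {item}")  # placeholder for the real REST call

    while any(work_queues.values()):
        run_one_pass()
        time.sleep(0.1)  # avoid a busy loop while every bucket is empty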
I have tried using SQS queuing on EC2 to manage the various API limits and states (creating a queue for each API call), but have found it to be ridiculously slow.
Any ideas?
It looks like the best option for my scenario will be using Storm, or specifically the Trident abstraction. It gives me the greatest flexibility for both workload management and process management.