I have an IoT Hub which receives messages (Avro format) from connected devices. I want to configure alerts (under the IoT Hub Monitoring section) based on specific values in the message, but it seems alerting has no provision for configuring rules based on the data being sent by a device.
Any pointers on this? Is this possible, or is there an alternate option?
Thanks,
Bhupal
You can use an Azure Stream Analytics job to do that. It would read the messages sent in Avro format and then act on them based on a rule. Please refer to this doc on how to use SQL Azure as your reference data in a rule engine:
http://learniotwithzain.com/2019/08/alert-engine-using-azure-stream-analytics-and-sql-azure-as-reference-data/
A few more links to help you with that:
Rules engine for Stream Analytics on Azure
https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-threshold-based-rules
An alternative option would be to use Azure Functions, but that would require you to build all the underlying plumbing yourself, which you get for free with Azure Stream Analytics.
An example with Azure Functions: here the message is intercepted and then forwarded to a different event hub (the hub names and connection settings are placeholders):
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

[FunctionName("IotDeviceAnalytics")]
public static async Task Run(
    [IoTHubTrigger("iothub-eventhubname", Connection = "IotHubConnectionString",
        ConsumerGroup = "consumergroup")] EventData[] events,
    [EventHub("output-eventhub-name", Connection = "EventHubConnectionString")]
        IAsyncCollector<string> outputEvents,
    ILogger log)
{
    foreach (EventData eventData in events)
    {
        // eventData holds the device message; decode the body
        var body = Encoding.UTF8.GetString(
            eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);

        // Apply your rule here, then forward matching messages
        await outputEvents.AddAsync(body);
    }
}
But as with all components of Azure, please do check the cost and size limitations. Using SQL Azure as reference data for a rule engine has a limit on the size of the rules that can be saved as reference data.
How do I handle and return a human readable error in a Java Azure Function app?
All examples of this that a Google search turns up are just simple instructions on how to do a try-catch, which is not my question.
More specifically, how do we design the return status code and the response body of the request in a way that provides the most flexibility across a wide array of situations?
We are not integrating Spring Boot in this case and do not have access to anything Spring. Given that my API generally returns an object that we will call Pojo1, what is the best way to return an informative message on error?
NOTE: Of course, I do know there are situations where you want security through obscurity, in which case I would probably choose to log errors to App Insights. This is not my question though.
Well, you can set custom headers when returning the response. This can be done using a setHeader function.
You can also use Azure Service Bus or Event Grid, which can carry specific messages regarding the errors.
Also, you can use Azure Monitor, which collects all the errors and notifies you when anything happens.
Refer to this article by Eugen Paraschiv for an in-depth explanation of how to use setHeader.
Refer to this documentation on Azure Service Bus and this documentation on Event Grid.
Refer to this documentation on Azure Monitor logs.
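To make the response design concrete, here is a minimal sketch using the plain azure-functions-java-library programming model (no Spring). Pojo1, ErrorBody, and the function name are hypothetical stand-ins for your own types and logic:

import java.util.Optional;
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class Function {

    // Hypothetical success type the API normally returns
    public static class Pojo1 {
        public String value = "ok";
    }

    // Hypothetical error wrapper returned instead of Pojo1 on failure
    public static class ErrorBody {
        public String code;
        public String message;
        public ErrorBody(String code, String message) {
            this.code = code;
            this.message = message;
        }
    }

    @FunctionName("GetPojo1")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.FUNCTION)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        try {
            Pojo1 result = new Pojo1(); // real business logic would go here
            return request.createResponseBuilder(HttpStatus.OK)
                    .header("Content-Type", "application/json")
                    .body(result)
                    .build();
        } catch (IllegalArgumentException e) {
            // Caller's fault: 4xx status plus a human-readable body
            return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
                    .header("Content-Type", "application/json")
                    .body(new ErrorBody("BAD_REQUEST", e.getMessage()))
                    .build();
        } catch (Exception e) {
            // Our fault: log the details, return a generic 500
            context.getLogger().severe("Unexpected failure: " + e);
            return request.createResponseBuilder(HttpStatus.INTERNAL_SERVER_ERROR)
                    .header("Content-Type", "application/json")
                    .body(new ErrorBody("INTERNAL", "An unexpected error occurred"))
                    .build();
        }
    }
}

Returning the same ErrorBody shape from every catch block keeps the error contract consistent across endpoints, whatever status code is chosen.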
We are developing an SAP Fiori app to be used on the Launchpad and also as an offline-enabled hybrid app, using the SAP SDK and its Kapsel plug-ins. One issue we are facing at the moment is the OData message handling.
On the Gateway, we are using the Message Manager to add additional information to the response
" ABAP snippet, random Gateway entity method
[...]
DATA(lo_message_container) = me->mo_context->get_message_container( ).
lo_message_container->add_message(
iv_msg_type = /iwbep/cl_cos_logger=>warning
iv_msg_number = '123'
iv_msg_id = 'ZFOO'
).
" optional, only used for 'true' errors
RAISE EXCEPTION TYPE /iwbep/cx_mgw_busi_exception
EXPORTING
message_container = lo_message_container.
In the Fiori app, we can access this data directly from the message manager. The data can be applied to a MessageView control.
// Fiori part (Desktop, online)
var aMessageData = sap.ui.getCore().getMessageManager().getMessageModel().getData();
However, our offline app always has an empty message model. Even after a sync or flush that triggers message-generating methods in the backend, the message model stays empty.
The only way to get any messages at all is to raise a /iwbep/cx_mgw_busi_exception and pass the message container. The messages can then be found, in an unparsed state, in the /ErrorArchive entity and read for further use.
// Hybrid App part, offline, after sync and flush
this.getModel().read("/ErrorArchive", { success: .... })
This approach limits us to negative, "exception-worthy" messages only. We also have to code some parts of our app twice (desktop vs. offline app).
So: is there a "proper" way to access those messages after an offline sync and flush?
For analyzing the issue, you might use the tool ILOData as seen in this blog:
Step by Step with the SAP Cloud Platform SDK for Android — Part 6c — Using ILOData
Note, ILOData is part of the Kapsel SDK, so while the blog above was part of a series on the SAP Cloud Platform SDK for Android, it also applies to Kapsel apps.
ILOData is a command line based tool that lets you execute OData requests and queries against an offline store.
It functions as an offline OData client, without the need for an application.
Therefore, it’s a good tool to use to test data from the backend system, as well as verify app behavior.
If a client has a problem with some entries on their device, the offline store from the device can be retrieved using the sendStore method and then ILOData can be used to query the database.
This blog about Kapsel Offline OData plugin might also be helpful.
I'm facing some issues with filtering on the message body in Azure IoT Hub. Is this still not supported? The tests pass, but when I try real messages from the device, everything hits the fallback route and not the intended one.
In other words:
// this works when adding a property to the message in the device code
temperature > 30
// this does not work when the message contains a JSON object without using any properties
$body.temperature > 30
Do we still need to use the message properties?
This feature (such as filtering on $body) requires setting the following message system properties:
// e.g. on the device, when building the telemetry message
var message = new Message(Encoding.UTF8.GetBytes("{\"temperature\": 35}"));
message.ContentType = "application/json";
message.ContentEncoding = "utf-8";
See more details here.
Is there any other way to insert data into BigQuery via the API, apart from streaming data, i.e. Table.insertAll?
InsertAllResponse response = bigquery.insertAll(
    InsertAllRequest.newBuilder(tableId)
        .addRow("rowId", rowContent)
        .build());
As you can see in the docs, you also have two other possibilities:
Loading from Google Cloud Storage, Bigtable, Datastore
Just run a job.insert call on the jobs resource and set the field configuration.load.sourceUri in the job metadata.
In the Python Client, this is done in the method LoadTableFromStorageJob.
You can therefore just upload your files to GCS, for instance, and then make an API call to load the files into BigQuery.
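Since the snippet above uses the Java client, here is a hedged sketch of the same load-from-GCS job with google-cloud-bigquery; the dataset, table, and bucket URI are placeholders:

import com.google.cloud.bigquery.*;

public class LoadFromGcs {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder names
        String sourceUri = "gs://my-bucket/data.json";          // placeholder URI

        // The client sets configuration.load.sourceUri on the job for you
        LoadJobConfiguration config =
                LoadJobConfiguration.newBuilder(tableId, sourceUri)
                        .setFormatOptions(FormatOptions.json())
                        .build();

        Job job = bigquery.create(JobInfo.of(config));
        job = job.waitFor(); // block until the load job finishes

        if (job.getStatus().getError() != null) {
            throw new RuntimeException(job.getStatus().getError().toString());
        }
    }
}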
Media Upload
This is also a load job, but this time the HTTP request also carries the bytes of a file on your machine. So you can pretty much send any file that you have on disk with this request (as long as the format is accepted by BQ).
In the Python client, this is done with the method Table.upload_from_file.
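A rough Java equivalent of the media-upload path, assuming the same client as above (table name and file path are placeholders):

import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.file.Files;
import java.nio.file.Paths;
import com.google.cloud.bigquery.*;

public class LoadFromLocalFile {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        TableId tableId = TableId.of("my_dataset", "my_table"); // placeholder names
        WriteChannelConfiguration config =
                WriteChannelConfiguration.newBuilder(tableId)
                        .setFormatOptions(FormatOptions.csv())
                        .build();

        // The HTTP request streams the file bytes alongside the job metadata
        TableDataWriteChannel writer = bigquery.writer(config);
        try (OutputStream stream = Channels.newOutputStream(writer)) {
            Files.copy(Paths.get("data.csv"), stream); // placeholder local file
        }

        Job job = writer.getJob().waitFor(); // block until the load finishes
        if (job.getStatus().getError() != null) {
            throw new RuntimeException(job.getStatus().getError().toString());
        }
    }
}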
We need to send large messages on Service Bus topics; the current size is around 10 MB. Our initial take is to save a temporary file in Blob Storage and then send a message with a reference to the blob. The file is compressed to save upload time. It works fine.
Today I read this article: http://geekswithblogs.net/asmith/archive/2012/04/10/149275.aspx
The suggestion there is to split the message in smaller chunks and on the receiving side aggregate them again.
I can admit that it is a "cleaner approach", avoiding the round trip to blob storage. On the other hand, I prefer to keep things simple; the splitting mechanism introduces extra complexity. I mean, there must have been a reason why they didn't include that in Service Bus from the beginning...
Has anyone tried the splitting approach in real life situation?
Are there better patterns?
I wrote that blog article a while ago; the intention was to implement the splitter and aggregator patterns using the Service Bus. I found this question by chance when searching for a better alternative.
I agree that the simplest approach may be to use Blob storage to store the message body, and send a reference to that in the message. This is the scenario we are considering for a customer project right now.
I remember a couple of years ago, there was some sample code published that would abstract Service Bus and Storage Queues from the client application, and handle the use of Blob storage for large message bodies when required. (I think it was the CAT team at Microsoft, but I'm not sure).
I can't find the sample with a quick Google search, but as it's probably a couple of years old, it will be out of date, since the Service Bus client library has improved a lot since then.
I have used the splitting of messages when the message size was too large, but as this was for batched telemetry data there was no need to aggregate the messages, and I could just process a number of smaller batches on the receiving end instead of one large message.
Another disadvantage of the splitter-aggregator approach is that it requires sessions, and therefore a session enabled Queue or Subscription. This means that all messages will require sessions, even smaller ones, and also the Session Id cannot be used for another purpose in the implementation.
If I were you I would not trust the code on the blog post, it was written a long time ago, and I have learned a lot since then :-).
The Blob Storage approach is probably the way to go.
Regards,
Alan
In case someone stumbles into the same scenario, the Claim Check pattern will help.
Details:
Implement the Claim Check pattern
Use ServiceBus.AttachmentPlugin (assuming you use C#; optionally, you can create your own)
Use external storage, e.g. an Azure Storage account (optionally, you can use other storage)
C# Code Snippet
using ServiceBus.AttachmentPlugin;
...
// Getting connection information
var serviceBusConnectionString = Environment.GetEnvironmentVariable("SERVICE_BUS_CONNECTION_STRING");
var queueName = Environment.GetEnvironmentVariable("QUEUE_NAME");
var storageConnectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
// Creating config for sending message
var config = new AzureStorageAttachmentConfiguration(storageConnectionString);
// Creating and registering the sender using Service Bus Connection String and Queue Name
var sender = new MessageSender(serviceBusConnectionString, queueName);
sender.RegisterAzureStorageAttachmentPlugin(config);
// Create payload
var payload = new { data = "random data string for testing" };
var serialized = JsonConvert.SerializeObject(payload);
var payloadAsBytes = Encoding.UTF8.GetBytes(serialized);
var message = new Message(payloadAsBytes);
// Send the message
await sender.SendAsync(message);
References:
https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check
https://learn.microsoft.com/en-us/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/
https://www.enterpriseintegrationpatterns.com/patterns/messaging/StoreInLibrary.html
https://github.com/SeanFeldman/ServiceBus.AttachmentPlugin
https://github.com/mspnp/cloud-design-patterns/tree/master/claim-check/code-samples/sample-3