We are in Australia East.
We have an event hub with events coming through from an application. On the morning of Friday 19th March I created a Stream Analytics job to try to read one of the event streams. This worked successfully against the Event Hub and returned results in the "Input preview" window during setup. This seems to match the timings on the message below (we are about 12 hours ahead of UTC).
However, by Friday afternoon it started failing with one of the error messages "InternalServerError" or "No such host is known". I was working through the drop-down boxes available when creating a new input after selecting "Select Event Hub from your subscriptions", so I know I haven't got anything wrong in the setup.
When trying to submit a support request, we get this slightly cryptic message:
The link doesn't work; it claims Stream Analytics is not supported in Resource Health, even though it is. Does this mean "It's down, sorry, we are working on it" (as in actually working on it), or is it a canned response that we should escalate?
Or is anyone else having trouble creating Stream Analytics jobs, and are we suffering an outage? The Azure Status monitor shows everything in good health.
It looks like this was a permission error. I needed to go into the IAM settings of the Event Hub and grant the Stream Analytics job Reader permissions. For some reason this wasn't added automatically when setting up the Stream Analytics job, as I thought it would be.
Once the permission was set, the job started successfully.
We've recently started adding programmed radio to our existing SMAPI implementation. I've followed the Sonos Developer documentation and (eventually) got it working as expected. I'm just seeking some clarification around the 'auto updating' based on the 'queueVersion' value.
Our schedules feeding the programmed radio can change from time to time. These changes should be reflected on the Sonos players as soon as possible. From what I understand, this should be possible by modifying the queueVersion property in GET /context, GET /itemWindow and GET /version.
Looking at the GET /version documentation I see that Players "[...] are responsible for periodically polling this [QueueVersion] value to detect changes in the cloud queue track list, [...]".
I've monitored our API logs for about 15 minutes, during which I would expect at least one GET /version request, but none showed up. The only calls I'm seeing are POST /timePlayed.
Can anyone (from the Sonos team perhaps?) clarify what this interval is set to, or how it can be controlled?
Given that you aren't seeing GET /version requests, there may be an error in your configuration.
The player sends a GET /version request every 5 minutes when paused and every 10 minutes when playing. This is by design and not controlled by any setting. However, players fetch new tracks as needed using GET /itemWindow, and since your GET /itemWindow response must include a version, the player doesn't send a separate GET /version request in that case. After the player gets a new item window, it resets the polling interval to another 10 minutes.
See the Play audio (cloud queue) page for details.
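As a rough sketch of what that implies for the backend (the C# type and property names below are illustrative only, not taken from the Sonos SDK): return the current queue version from GET /version and from every GET /itemWindow response, and bump it whenever a schedule changes; the player then picks up the change either on its next poll or on its next item-window fetch.

```csharp
using System.Collections.Generic;

// Illustrative response shapes for a cloud queue backend; names are assumptions.
public class VersionResponse
{
    // Bump this whenever the programmed-radio schedule changes so the player's
    // periodic GET /version poll detects that its track list is stale.
    public string QueueVersion { get; set; }
}

public class ItemWindowResponse
{
    // The item window carries the same version, which is why the player skips a
    // separate GET /version call and resets its polling timer after fetching one.
    public string QueueVersion { get; set; }
    public List<QueueItem> Items { get; set; }
}

public class QueueItem
{
    public string Id { get; set; }
    public string MediaUrl { get; set; }
}
```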
While developing an Azure Function App with an Event Hub trigger locally, something weird drew my attention. When I started debugging, my consumer function app would occasionally be triggered automatically with a previous message from the event hub, even though I hadn't fired my event hub publisher at the time! It felt like some event messages were stored in a cache somewhere, in a place I can't find, and were triggering my function app in the background again and again...
The app settings for my function use UseDevelopmentStorage=true and aren't tied to any of my storage accounts. In addition, the scenario above didn't happen every time, but it made me concerned because I had no idea why the same message was being triggered multiple times outside my control. Once a message has been published and consumed by the function app, it should disappear from the event hub message queue, right?
Can anyone please let me know where I can check where my messages are stored, both when running locally and when published in the Azure portal? Thank you very much!
Can anyone please let me know where I can check where my messages are stored, both when running locally and when published in the Azure portal?
Firstly, I'm afraid Azure Functions won't save your messages into a cache. Based on the official document:
When all function execution completes (with or without errors),
checkpoints are added to the associated storage account. When
check-pointing succeeds, all 1,000 messages are never retrieved again.
The above describes the Event Hub checkpoint mechanism. Besides, you could refer to this blog. AzureWebJobsStorage is set to UseDevelopmentStorage=true when you debug the function locally, so I suggest checking the data in the local storage emulator. When you run it in Azure, check the associated storage account instead.
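To make the checkpoint behaviour concrete, here is a minimal sketch of an Event Hub triggered function (the hub name and connection setting name are placeholders, not taken from your project). The runtime writes its checkpoints to the storage account configured in AzureWebJobsStorage, which is the local storage emulator when UseDevelopmentStorage=true; if checkpointing fails or the host restarts before a checkpoint is written, previously consumed events can be delivered again, which matches the behaviour you describe.

```csharp
using System.Text;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ConsumerFunction
{
    // "my-hub" and "EventHubConnection" are placeholder names for this sketch.
    [FunctionName("ConsumerFunction")]
    public static void Run(
        [EventHubTrigger("my-hub", Connection = "EventHubConnection")] EventData[] events,
        ILogger log)
    {
        foreach (var eventData in events)
        {
            // Process each event; after the batch completes, the runtime checkpoints
            // to the storage account configured in AzureWebJobsStorage.
            log.LogInformation("Event Hub message: {body}",
                Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count));
        }
    }
}
```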
Here are some similar issues for your reference:
1. https://github.com/Azure/azure-functions-host/issues/2796
2. https://github.com/Azure/Azure-Functions/issues/589
3. https://github.com/Azure/azure-event-hubs-dotnet/issues/358
Of course, you could also open a support request to get more help.
I have a free tier subscription to Azure IoT Hub with only two edge devices connected to it, one of them mostly off. Yesterday, it looks like my hub recorded a slew of messages: within 45 minutes (5 to 5:45 pm PST), 25K messages were recorded by the hub. A few related issues:
1. I'm not sure what these messages were. I'll add message storage for the future, but I'm wondering if there's a way to debug this.
2. Ever since then, I haven't been able to use the IoT hub. I get a "message count exceeded" error. That made sense until around 5 pm PST today (same day UTC), but I'm not sure why it is still blocking me after that.
3. I tried to change my F1 hub to the Basic tier, but that wasn't allowed because I am apparently "not allowed to downgrade".
Any help with any of these?
1. I'm not sure what these messages were. I'll add message storage for the future, but I'm wondering if there's a way to debug this.
IoT Hub operations monitoring enables you to monitor the status of operations on your IoT hub in real time. You can use it to monitor device identity operations, device telemetry, cloud-to-device messages, connections, file uploads and message routing.
2. Ever since then, I haven't been able to use the IoT hub. I get a "message count exceeded" error. That made sense until around 5 pm PST today (same day UTC), but I'm not sure why it is still blocking me after that.
IoT Hub Free edition enables you to transmit up to a total of 8,000 messages per day, and register up to 500 device identities. The device identity limit is only present for the Free Edition.
3. I tried to change my F1 hub to the Basic tier, but that wasn't allowed because I am apparently "not allowed to downgrade".
You cannot switch from Free to one of the paid editions. The free edition is meant to test out proof-of-concept solutions only.
Confirming the earlier answer, the only solution is to delete the old hub and create a new free one, which is simple enough.
I still haven't figured out what those specific error messages were, but I do notice that when there are errors such as CA certificate auth failures, lots of messages get sent up. I'm still working with MSFT support on the CA certificate signing issues, but this one is a side effect.
For future reference, look at your hub's metrics, and note that (i) the quota gets reset at midnight UTC, but (ii) the violations do not.
I followed an IoT Hub tutorial and got it working. I then created a Stream Analytics job and used the hub above as an input (the test connection succeeds).
However I do not see any inputs being received. When running a sample test I get the following error:
Description: Error code: ServiceUnavailable. Error message: Unable to connect to input source at the moment. Please check if the input source is available and if it has not hit connection limits.
I can see telemetry messages being received in the IoT Hub. Any help would be appreciated.
Is the Stream Analytics job running?
I had a similar problem where I wasn't getting any events from Stream Analytics, and it turned out I had forgotten to turn the job on.
Click on the Stream Analytics job > Overview > Start.
I had the same problem (using Event Hubs in my case). The root cause was that I had too many queries within my job running against the same input. I solved it by splitting my input into several inputs across multiple consumer groups.
From the documentation (emphasis added):
Each Stream Analytics IoT Hub input should be configured to have its own consumer group. When a job contains a self-join or multiple inputs, some input may be read by more than one reader downstream, which impacts the number of readers in a single consumer group. To avoid exceeding IoT Hub limit of 5 readers per consumer group per partition, it is a best practice to designate a consumer group for each Stream Analytics job.
I have exactly the same problem, though my modules on my Raspberry Pi are running without failure.
SA says: "Make sure the input has recently received data and the correct format of those events has been selected."
I need to display an error message when the Windows Mobile PDA is in flight mode. The user needs to pull and push data from a SQL Server database; however, when the device is in flight mode this is not possible and a message needs to be displayed. Currently the message that is displayed is:
A request to send information to the computer using IIS has failed. For more results please see HRESULT.
I am programming using VB.Net and I am fairly new to it. I have searched the Internet for the past week and come across information suggesting that I use TAPI; however, I do not know what to import, or where the "flight mode detection" code would go in my application.
Use the State and Notification Broker. Use the SystemState class to read and/or monitor the value of something like SystemProperty.PhoneRadioOff. Here's an example of using the SNB (in C#, but simple enough to convert).
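A rough sketch of that approach (assuming your project references Microsoft.WindowsMobile.Status; adapt the message and the wiring to your application, and convert to VB.Net as needed):

```csharp
using Microsoft.WindowsMobile.Status;

public class FlightModeWatcher
{
    private readonly SystemState radioState;

    public FlightModeWatcher()
    {
        // One-off check: is the phone radio currently off (flight mode)?
        if (SystemState.PhoneRadioOff)
        {
            ShowFlightModeMessage();
        }

        // Monitor the property so we are notified whenever it changes.
        radioState = new SystemState(SystemProperty.PhoneRadioOff);
        radioState.Changed += OnRadioStateChanged;
    }

    private void OnRadioStateChanged(object sender, ChangeEventArgs e)
    {
        // e.NewValue is the new value of PhoneRadioOff (a boxed bool).
        if ((bool)e.NewValue)
        {
            ShowFlightModeMessage();
        }
    }

    private static void ShowFlightModeMessage()
    {
        System.Windows.Forms.MessageBox.Show(
            "The device is in flight mode. Please disable flight mode before syncing data.");
    }
}
```

Check PhoneRadioOff before attempting the SQL Server sync, and show the message (or disable the sync button) instead of letting the IIS request fail.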