I have a question based on the architecture in the following URL:
https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-how-it-works.html
I am able to see the tensors in S3, and I can see that a new processing job has been started for each rule I created. My question is whether the output of these rule reports is also pushed automatically to CloudWatch. I ask because there is an arrow from the Debugger processing container to CloudWatch in the architecture diagram linked above.
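For context, here is a minimal boto3 sketch of how the per-rule evaluation jobs and their statuses can be listed from the training job itself (the training job name is a placeholder, not my actual job):

```python
import boto3

sm = boto3.client("sagemaker")

# Placeholder name -- substitute the training job the Debugger rules are attached to.
resp = sm.describe_training_job(TrainingJobName="my-training-job")

# Each Debugger rule shows up here with the ARN of the processing job
# that evaluates it and its current evaluation status.
for rule in resp.get("DebugRuleEvaluationStatuses", []):
    print(rule["RuleConfigurationName"],
          rule["RuleEvaluationStatus"],
          rule.get("RuleEvaluationJobArn"))
```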
We are in Australia East.
We have an Event Hub with events coming through from an application. On the morning of Friday 19th March I created a Stream Analytics job to try to read one of the event streams. This worked successfully against the Event Hub and returned results in the "Input preview" window during setup. This seems to match the timings in the message below (we are about 12 hours ahead of UTC).
However, by Friday afternoon it started failing with one of the error messages "InternalServerError" or "No such host is known". I was only working through the drop-down boxes available when creating a new input after selecting "Select Event Hub from your subscriptions", so I know I haven't got anything wrong in the setup.
When trying to submit a support request, we get this slightly cryptic message:
The link doesn't work: it claims Stream Analytics is not supported in Resource Health, even though it is. Does this mean "Sorry, it's down, we are working on it" (as in actually working on it), or is it a canned response that we should escalate?
Or is anyone else having trouble creating Stream Analytics Jobs and we are suffering an outage? The Azure Status monitor shows they are in good health.
It looks like this was a permission error. I needed to go into Access control (IAM) on the Event Hub and give the Stream Analytics job Reader permissions. For some reason this wasn't added automatically when setting up the Stream Analytics job, as I thought it would be.
Once the permission was set, the job started successfully.
I've been reading a lot about error handling for AWS Lambdas, and nothing covers the topic of a running Lambda container just crashing.
Is this a possibility? It seems like one. I'm building an event-driven system using Lambdas triggered by file uploads to S3, and I'm uncertain whether I should bother building in logic to pick up processing if a Lambda has died.
e.g. A file object is created on S3 -> S3 notifies Lambda of the event -> the Lambda instance happens to crash before it can start processing -> the event is now gone forever* (an assumption; I'm unsure whether that's true, but I can't find anything that says the contrary).
I'm debating building in logic to reconcile what is on S3 against what was processed each day, so I can detect the (albeit rare) scenario where a Lambda died (died and couldn't even write a failure to a DLQ) and we need to process those files. Is this worth it? Would S3 somehow know that the Lambda died and put the event on a DLQ of its own?
According to https://docs.aws.amazon.com/fr_fr/lambda/latest/dg/with-s3.html, S3 invokes Lambda asynchronously.
Next, from https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html, asynchronous Lambda invocations are retried twice, without any durable queuing that you control.
I guess if more retries are needed, it's better to set up SNS/SQS queuing in between.
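In that spirit, here is a minimal boto3 sketch of capping the built-in async retries and routing events that still fail to an SQS queue via an on-failure destination; the function name and queue ARN are placeholders, not values from the question:

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_event_invoke_config(
    FunctionName="process-s3-upload",    # placeholder function name
    MaximumRetryAttempts=2,              # Lambda's maximum for async retries
    MaximumEventAgeInSeconds=3600,       # discard events older than an hour
    DestinationConfig={
        # Events that still fail after the retries are sent here instead of
        # being lost, so they can be inspected or replayed later.
        "OnFailure": {
            "Destination": "arn:aws:sqs:ap-southeast-2:123456789012:upload-failures"
        }
    },
)
```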
While developing an Event Hub-triggered Azure Function App locally, something weird caught my attention. When I started debugging, my consumer function app would occasionally be triggered automatically with a previous message from the Event Hub, even though I hadn't fired my Event Hub publisher at that time! It felt like some event messages were stored in a cache somewhere, with no idea where, and were trying to trigger my function app in the background again and again...
My app settings for the function use UseDevelopmentStorage=true and are not tied to any of my storage accounts. In addition, the scenario above did not happen every time, but it concerned me because I had no idea why the same message was being triggered multiple times outside my control. Once a message has been published and consumed by the function app, it should disappear from the Event Hub, right?
Can anyone please let me know where I can check where my messages are stored, both locally and when published in the Azure portal? Thank you very much!
Can anyone please let me know where I can check where my messages are stored, both locally and when published in the Azure portal?
Firstly, I'm afraid Azure Functions does not save your messages into a cache. From the official documentation:
When all function execution completes (with or without errors),
checkpoints are added to the associated storage account. When
check-pointing succeeds, all 1,000 messages are never retrieved again.
The above describes the Event Hub checkpointing mechanism. Besides that, you could refer to this blog. AzureWebJobsStorage is set to UseDevelopmentStorage=true when you debug the function locally, so I suggest checking the data in the local (emulator) storage account. When you run it in the portal, the associated storage account is used for checkpoints.
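If you want to look at those checkpoints while debugging locally, here is a hedged sketch that lists the blobs in the local emulator. It assumes Azurite / the Storage Emulator is running on the default ports and that the Event Hubs extension writes its checkpoints to a container named azure-webjobs-eventhub (the container name may differ between extension versions):

```python
from azure.storage.blob import BlobServiceClient

# Well-known development-storage connection string used by Azurite /
# the Storage Emulator (this is what UseDevelopmentStorage=true expands to).
DEV_CONN_STR = (
    "DefaultEndpointsProtocol=http;"
    "AccountName=devstoreaccount1;"
    "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
    "BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"
)

service = BlobServiceClient.from_connection_string(DEV_CONN_STR)
container = service.get_container_client("azure-webjobs-eventhub")  # assumed container name

# Each blob is a partition checkpoint/ownership record; if checkpointing
# lags behind, old events can be re-delivered on the next debug session.
for blob in container.list_blobs():
    print(blob.name)
```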
Here are some similar issues for your reference:
1. https://github.com/Azure/azure-functions-host/issues/2796
2. https://github.com/Azure/Azure-Functions/issues/589
3. https://github.com/Azure/azure-event-hubs-dotnet/issues/358
Of course, you could also open a support case to get more help.
I followed an IoT Hub tutorial and got it working. I then created a Stream Analytics job and used that IoT Hub as an input (the test connection succeeds).
However, I do not see any input being received. When running a sample test I get the following error:
Description:
Error code: ServiceUnavailable
Error message: Unable to connect to input source at the moment. Please check if the input source is available and if it has not hit connection limits.
I can see telemetry messages being received in the IoT Hub. Any help would be appreciated.
Is the Stream Analytics job running?
I had a similar problem where I wasn't getting any events from Stream Analytics, and it turned out I had forgotten to turn the job on.
Click on the Stream Analytics job > Overview > Start.
I had the same problem (using Event Hubs in my case). The root cause was that I had too many queries within my job running against the same input. I solved it by splitting my input into several inputs across multiple consumer groups.
From the documentation (emphasis added):
Each Stream Analytics IoT Hub input should be configured to have its own consumer group. When a job contains a self-join or multiple inputs, some input may be read by more than one reader downstream, which impacts the number of readers in a single consumer group. To avoid exceeding IoT Hub limit of 5 readers per consumer group per partition, it is a best practice to designate a consumer group for each Stream Analytics job.
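For reference, a hedged sketch of creating one consumer group per Stream Analytics input using the azure-mgmt-eventhub management SDK; the subscription ID, resource names, and group names are placeholders, and the exact operation signature depends on the SDK version you have installed:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One dedicated consumer group per Stream Analytics input/job keeps each
# group under the readers-per-partition limit mentioned above.
for name in ("asa-input-1", "asa-input-2"):
    client.consumer_groups.create_or_update(
        resource_group_name="my-rg",
        namespace_name="my-eh-namespace",
        event_hub_name="my-hub",
        consumer_group_name=name,
        parameters={},   # default consumer-group settings
    )
```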
I have exactly the same problem, even though the modules on my Raspberry Pi are running without failure.
Stream Analytics says: "Make sure the input has recently received data and the correct format of those events has been selected."
I have a scenario where many clients upload to S3.
What is the best approach for knowing that there is a new file?
Is it realistic/a good idea for me to poll the bucket every few seconds?
UPDATE:
Since November 2014, S3 supports the following event notifications:
s3:ObjectCreated:Put – An object was created by an HTTP PUT operation.
s3:ObjectCreated:Post – An object was created by an HTTP POST operation.
s3:ObjectCreated:Copy – An object was created by an S3 copy operation.
s3:ObjectCreated:CompleteMultipartUpload – An object was created by the completion of an S3 multipart upload.
s3:ObjectCreated:* – An object was created by one of the event types listed above or by a similar object creation event added in the future.
s3:ReducedRedundancyLostObject – An S3 object stored with Reduced Redundancy has been lost.
These notifications can be issued to Amazon SNS, SQS or Lambda. Check out the blog post that's linked in Alan's answer for more information on these new notifications.
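For example, wiring the ObjectCreated events above to an SQS queue takes a single API call. A hedged boto3 sketch (the bucket name and queue ARN are placeholders, and the queue's access policy must separately allow S3 to send messages):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-upload-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:new-object-events",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
        # "TopicConfigurations" and "LambdaFunctionConfigurations" follow the
        # same shape for SNS topics and Lambda functions.
    },
)
```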
Original Answer:
Although Amazon S3 has a bucket notification system in place, it does not support notifications for anything but the s3:ReducedRedundancyLostObject event (see the GET Bucket notification section in their API).
Currently the only way to check for new objects is to poll the bucket at a preset time interval or build your own notification logic in the upload clients (possibly based on Amazon SNS).
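For completeness, a minimal sketch of that polling fallback (the bucket name and interval are placeholders; a real implementation should paginate, since each call returns at most 1,000 keys):

```python
import time
import boto3

s3 = boto3.client("s3")
seen = set()

while True:
    resp = s3.list_objects_v2(Bucket="my-upload-bucket")
    for obj in resp.get("Contents", []):
        if obj["Key"] not in seen:
            seen.add(obj["Key"])
            print("new object:", obj["Key"])
    time.sleep(30)  # each iteration costs one LIST request
```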
Push notifications are now built into S3:
http://aws.amazon.com/blogs/aws/s3-event-notification/
You can send notifications to SQS or SNS when an object is created via PUT or POST or a multi-part upload is finished.
Your best option nowadays is the AWS Lambda service. You can write a Lambda function in Node.js (JavaScript), Java, or Python (more options will probably be added over time).
The Lambda service lets you write functions that respond to events from S3, such as a file upload. It is cost-effective, scalable, and easy to use.
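A minimal sketch of such a function in Python; the actual processing step is a placeholder:

```python
import urllib.parse

def lambda_handler(event, context):
    # S3 invokes the function with a batch of one or more records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (e.g. spaces become "+").
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object uploaded: s3://{bucket}/{key}")
        # ... process the file here ...
```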
You can implement a pub-sub mechanism relatively simply by using SNS, SQS, and AWS Lambda; please see the steps below. Whenever a new file is added to the bucket, a notification is raised and acted upon automatically.
Step 1
Simply configure the S3 bucket's event notification to notify an SNS topic. You can do this from the S3 console (Properties tab).
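A hedged boto3 sketch of this step (bucket name and topic ARN are placeholders; the SNS topic's access policy must also allow S3 to publish to it):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-upload-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:new-upload-topic",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```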
Step 2
Create an SQS queue and subscribe it to the topic, so that whenever an object is uploaded to the S3 bucket a message is added to the queue.
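A hedged sketch for this step (the topic ARN and queue name are placeholders; the queue's access policy must additionally allow the topic to send messages, which is omitted here):

```python
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:123456789012:new-upload-topic"  # placeholder

# Create the queue and look up its ARN.
queue_url = sqs.create_queue(QueueName="new-upload-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue to the topic. Raw message delivery makes the queue
# receive the S3 event JSON directly instead of an SNS envelope.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={"RawMessageDelivery": "true"},
)
```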
Step 3
Create an AWS Lambda function to read messages from the SQS queue. AWS Lambda supports SQS as an event source, so whenever a message appears in the queue, Lambda is triggered and reads the message. Once a message is processed successfully, it is automatically deleted from the queue. Messages that Lambda cannot process (erroneous messages) are not deleted, so they pile up in the queue; to prevent this, it is a good idea to use a Dead Letter Queue (DLQ).
In your Lambda function, add your logic to handle what should happen when users upload files to the bucket.
Note: DLQ is nothing more than a normal queue.
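A hedged sketch of such a handler, assuming the Step 2 subscription uses raw message delivery so that each SQS record body is the S3 event JSON itself; the file-handling logic is a placeholder:

```python
import json
import urllib.parse

def lambda_handler(event, context):
    for record in event["Records"]:              # the SQS batch
        s3_event = json.loads(record["body"])
        # S3 also sends a test event without "Records" when the
        # notification is first configured, hence the .get().
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
            print(f"Processing s3://{bucket}/{key}")
            # ... your file-handling logic here ...
    # Raising an exception leaves the failed messages on the queue; after
    # the queue's maxReceiveCount is exceeded they move to the DLQ defined
    # in its redrive policy.
```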
Step 4
Debugging and analyzing the process
Make use of Amazon CloudWatch to log details. Each Lambda function writes to its own log group. This is a good place to check if something went wrong.
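A small hedged sketch of pulling recent errors out of a function's log group; the log group name below just follows Lambda's /aws/lambda/<function-name> convention and is a placeholder:

```python
import boto3

logs = boto3.client("logs")

resp = logs.filter_log_events(
    logGroupName="/aws/lambda/new-upload-processor",  # placeholder function name
    filterPattern="ERROR",   # only return log lines containing "ERROR"
    limit=50,
)

for event in resp["events"]:
    print(event["timestamp"], event["message"].rstrip())
```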