How to automate CloudWatch Dashboards? - amazon-cloudwatch

I have several dashboards in CloudWatch that represent a view of my infrastructure: the number of instances from an autoscaling group that are currently running, the CPU/disk usage per instance, etc. However, when I update an autoscaling group, I have to manually update the dashboards (autoscaling group ID) to include its EC2 instances in the display. I'm looking for some kind of metric/dimension that can filter autoscaling groups by tag. Is this possible? If yes, how? If not, how else can I approach it?
Thanks.

You can build a Lambda function to do this job.
Configure the Lambda function to trigger every few minutes, check the autoscaling group for added or removed instances, and update the CloudWatch dashboard accordingly:
1. Create a Lambda function that lists the instances present in the ASG and updates the CloudWatch dashboard based on those instances (see the sketch below).
2. Create a CloudWatch Events rule to trigger the Lambda function every 5-10 minutes.
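A minimal sketch of step 1 with boto3, assuming hypothetical names ("my-asg", "my-infra-dashboard") and a single CPU widget; adjust the region, metrics, and layout to your setup:

import json
import boto3

ASG_NAME = "my-asg"                     # hypothetical ASG name
DASHBOARD_NAME = "my-infra-dashboard"   # hypothetical dashboard name

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # Look up the instances currently registered in the autoscaling group.
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"]
    instance_ids = [i["InstanceId"] for g in groups for i in g["Instances"]]

    # One CPUUtilization metric line per instance; rewrite the whole dashboard.
    metrics = [["AWS/EC2", "CPUUtilization", "InstanceId", iid]
               for iid in instance_ids]
    body = {
        "widgets": [{
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": f"CPU - {ASG_NAME}",
                "metrics": metrics,
                "region": "us-east-1",  # assumption: set your region
                "period": 300,
            },
        }]
    }
    cloudwatch.put_dashboard(DashboardName=DASHBOARD_NAME,
                             DashboardBody=json.dumps(body))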

Related

Create alarm template with CloudWatch

Still discovering CloudWatch alarms. I have a metric M1, which has a dimension "Namespace".
I have, let's say, 10 different namespaces.
I want to create a separate alarm per namespace, but the only way I have found is to create 10 different alarms, one for each namespace.
Is it possible to create an alarm template with CloudWatch? Ideally I would create one alarm grouped by namespace, meaning that if the metric for 3 different namespaces goes above the configured threshold, I get 3 alerts instead of just 1.
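As far as I know there is no alarm template as such, but the per-namespace alarms can at least be scripted rather than created by hand. A minimal sketch with boto3, where "MyApp", "M1", the dimension values, and the threshold are all hypothetical placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Values of the "Namespace" dimension; one alarm is created for each.
dimension_values = ["ns-1", "ns-2", "ns-3"]  # ... up to 10

for value in dimension_values:
    cloudwatch.put_metric_alarm(
        AlarmName=f"M1-high-{value}",
        Namespace="MyApp",            # assumption: your metric namespace
        MetricName="M1",
        Dimensions=[{"Name": "Namespace", "Value": value}],
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=100,                # assumption: your configured threshold
        ComparisonOperator="GreaterThanThreshold",
    )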

Get a notification if an ECS service launches a new task when autoscaling is triggered

We have used ECS for our production setups. As per my understanding of ECS, while creating a cluster of type EC2, we specify the number of instances to be launched. When we create a service, and if autoscaling is enabled we specify the minimum and the maximum number of tasks that can be created.
While creating these tasks, if there is no space left on the existing instances, ECS launches a new instance to place these tasks.
I would like to know if we can trigger a notification whenever a new EC2 instance gets added to the ECS cluster when autoscaling is triggered.
If yes, please help me with links or steps for the same.
Thanks.
Should be doable, see here: https://docs.aws.amazon.com/autoscaling/ec2/userguide/ASGettingNotifications.html
There are simple ways to test it, e.g. by manually increasing the capacity, and there are other notification types you can subscribe to as well.
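The setup from that guide can also be scripted. A minimal boto3 sketch, assuming a hypothetical ASG name and an SNS topic you have already created and subscribed to (e.g. by email):

import boto3

autoscaling = boto3.client("autoscaling")

# Send launch/terminate notifications from the cluster's ASG to SNS.
autoscaling.put_notification_configuration(
    AutoScalingGroupName="ecs-cluster-asg",  # hypothetical ASG name
    TopicARN="arn:aws:sns:us-east-1:123456789012:ecs-scaling-events",
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)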

Notification Service on AWS S3 bucket (prefix) size

I have a specific use-case where a huge amount of data is continuously streamed into an AWS S3 bucket.
We want a notification service for the S3 bucket on a specific folder: if the folder reaches a specific size (for example 100 TB), a cleaning service should be triggered (via SNS, AWS Lambda).
I have checked the AWS documentation and did not find any direct support from AWS for this:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
We are planning to have a script that runs periodically, checks the size of the S3 objects, and kicks off an AWS Lambda.
Is there a more elegant way to handle a case like this? Any suggestion or opinion is really appreciated.
Attach an S3 trigger event to a Lambda function, which will get triggered whenever a file is added to the S3 bucket.
Then, in the Lambda function, check the file size. This eliminates the need to run a script periodically to check the size.
Below is a sample Serverless Framework configuration for adding an S3 trigger to a Lambda function.
s3_trigger:
  handler: lambda/lambda.s3handler
  timeout: 900
  events:
    - s3:
        bucket: ${self:custom.sagemakerBucket}
        event: s3:ObjectCreated:*
        existing: true
        rules:
          - prefix: csv/
          - suffix: .csv
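For completeness, a minimal sketch of what the s3handler referenced above might look like; the running-total/threshold logic is left as a hypothetical placeholder:

# lambda/lambda.py
def s3handler(event, context):
    # S3 event records carry the key and size of each newly created object.
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"]["size"]  # bytes
        print(f"received {key} ({size} bytes)")
        # hypothetical: add `size` to a running total and trigger the
        # cleaning service (SNS, another Lambda) once it crosses 100 TB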
There is no direct method for obtaining the size of a folder in Amazon S3 (because folders do not actually exist).
Here are a few ideas...
Periodic Lambda function to calculate total
Create an Amazon CloudWatch Event to trigger an AWS Lambda function at specific intervals. The Lambda function would list all objects with the given Prefix (effectively a folder) and total the sizes. If it exceeds 100TB, the Lambda function could trigger the cleaning process.
However, if there are thousands of files in that folder, this would be somewhat slow. Each API call can only retrieve 1000 objects, so it might take many calls to compute the total, and this work would repeat at every checking interval.
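A minimal sketch of this periodic function with boto3, using hypothetical bucket and prefix names; the paginator handles the 1000-objects-per-call limit mentioned above:

import boto3

s3 = boto3.client("s3")

BUCKET = "my-streaming-bucket"   # hypothetical bucket
PREFIX = "csv/"                  # the "folder" is really just a key prefix
LIMIT_BYTES = 100 * 1024 ** 4    # 100 TB

def handler(event, context):
    total = 0
    # Each ListObjectsV2 page returns at most 1000 objects.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        total += sum(obj["Size"] for obj in page.get("Contents", []))
    if total > LIMIT_BYTES:
        # hypothetical: notify the cleaning service via SNS
        boto3.client("sns").publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:cleaning",
            Message=f"Prefix {PREFIX} exceeded 100 TB ({total} bytes)",
        )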
Keep a running total
Configure Amazon S3 Events to trigger an AWS Lambda function whenever a new object is created with that Prefix. The Lambda function can increment a running total in a database. If the total exceeds 100TB, the Lambda function could trigger the cleaning process.
Which database to use? Amazon DynamoDB would be the quickest and it supports an 'increment' function, but you could be sneaky and just use AWS Systems Manager Parameter Store. This might cause a problem if new objects are created quickly because there's no locking. So, if files are coming in every few seconds or faster, definitely use DynamoDB.
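A sketch of the DynamoDB variant, assuming a hypothetical table PrefixSizes with partition key "prefix"; UpdateItem with an ADD expression provides the atomic increment mentioned above:

import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # S3 event records include the size of each newly created object.
    added = sum(r["s3"]["object"]["size"] for r in event["Records"])
    resp = dynamodb.update_item(
        TableName="PrefixSizes",                   # hypothetical table
        Key={"prefix": {"S": "csv/"}},
        UpdateExpression="ADD totalBytes :delta",  # atomic increment
        ExpressionAttributeValues={":delta": {"N": str(added)}},
        ReturnValues="UPDATED_NEW",
    )
    total = int(resp["Attributes"]["totalBytes"]["N"])
    if total > 100 * 1024 ** 4:
        pass  # trigger the cleaning process (SNS, Step Functions, etc.)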
Slow motion
You did not indicate how often this 100TB limit is likely to be triggered. If it only happens after a few days, you could use Amazon S3 Inventory, which provides a daily CSV containing a listing of objects in the bucket. This solution, of course, would not be applicable if the 100TB limit is hit in less than a day.

Deleting events published by AWS S3 buckets which are still in queue to be processed by lambda

My architecture is:
1. Drop multiple files into an AWS S3 bucket.
2. Lambda picks up the files one by one and starts processing them.
The problem is:
I am not able to stop the Lambda from processing the files partway through. Even if I stop the Lambda instance and restart it, it picks up from where it left off.
Is there a way to achieve this?
You have no control over the events pushed by S3. You'll be better off if you just cancel the Lambda subscription if you want to stop it for good, but I am afraid that already emitted events will be processed as long as your Lambda is active.
What exactly are you trying to achieve?
If you want to limit the number of files your Lambda function can process at once, you can just set the concurrent-execution limit on your function to 1, so it won't auto-scale based on demand.
Simply go to the function's Concurrency settings, set the reserved concurrency to 1, and save it.
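The same setting can also be applied programmatically; a one-call sketch with boto3, using a hypothetical function name:

import boto3

# Equivalent to setting reserved concurrency to 1 in the console.
boto3.client("lambda").put_function_concurrency(
    FunctionName="my-s3-processor",  # hypothetical function name
    ReservedConcurrentExecutions=1,
)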
Detach the Lambda's S3 trigger and then re-add it.
That way only new events will be picked up, not the old ones.

Automating scale-up of Streaming Units - Stream Analytics job

We would like to automate scaling up the Streaming Units for a certain Stream Analytics job when 'SU utilization' is high. Is it possible to achieve this using PowerShell? Thanks.
Firstly, as Pete M said, you could call the REST API to create or update a transformation within a job.
Besides, the Azure Stream Analytics cmdlet New-AzureRmStreamAnalyticsTransformation can be used to update a transformation within a job.
It depends on what you mean by "automate". You can update a transformation via the API from a scheduled job, including the streaming unit allocation. I'm not sure if you can do this via the PS object model, but you can always make a REST call:
https://learn.microsoft.com/en-us/rest/api/streamanalytics/stream-analytics-transformation
If you mean you want to use powershell to create and configure a job to automatically scale on its own, unfortunately today that isn't possible regardless of how you create the job. ASA doesn't support elastic scaling. You have to do it "manually", either by hand or some manner of scheduled webjob or similar.
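For reference, that REST call is easy to make from any scheduled job, not only PowerShell. A minimal Python sketch, assuming placeholder IDs, an already-acquired Azure AD bearer token, the default transformation name "Transformation", and an api-version that may need adjusting for your subscription:

import requests

SUBSCRIPTION = "<subscription-id>"     # placeholder
RESOURCE_GROUP = "<resource-group>"    # placeholder
JOB = "<job-name>"                     # placeholder
TOKEN = "<bearer-token>"               # placeholder: Azure AD access token

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.StreamAnalytics/streamingjobs/{JOB}"
    "/transformations/Transformation"
    "?api-version=2020-03-01"          # assumption: check the current version
)

# PATCH updates the transformation in place; streamingUnits is the SU count.
resp = requests.patch(
    url,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"properties": {"streamingUnits": 12}},
)
resp.raise_for_status()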
It is three years later now, but I think you can use App Insights to automatically create an alert rule based on percent utilization. Is it an absolute MUST that you use PowerShell? If so, there is an Azure Automation script on GitHub:
https://github.com/Azure/azure-stream-analytics/blob/master/Autoscale/StepScaleUp.ps1