AWS S3: monitor folder-level metrics

I am trying to monitor S3 folder-level metrics to get a comparison between two folders:
s3://logs-bucket/error/2019/01/
s3://logs-bucket/info/2019/01/
I spent an hour playing around with CloudWatch metrics but still have no idea how to do it. I am not trying to do anything fancy, just to graph NumberOfObjects and BucketSizeBytes for the two folders. Are sub-level metrics a paid feature?

CloudWatch provides only bucket-level metrics by default, but you can define additional metrics via filters (S3 bucket -> Management -> Metrics -> Filters). Define a new metric filter for each prefix (/error/2019/01/, /info/2019/01/). Then you can use FilterId as a dimension in the CloudWatch S3 query.
Doc: https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudwatch-monitoring.html
Update: @Tartaglia is right, filters are only for request metrics, so you can't get NumberOfObjects and BucketSizeBytes with the FilterId dimension. That means you can't use the default CloudWatch functionality for this monitoring. You can script it yourself and push the required values to CloudWatch as custom metrics, as in the sketch below.
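A minimal sketch of that approach with boto3, assuming permissions to list the bucket and to publish metrics (the bucket name, prefixes, and custom namespace are illustrative):

import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

BUCKET = "logs-bucket"                      # illustrative bucket name
PREFIXES = ["error/2019/01/", "info/2019/01/"]
NAMESPACE = "Custom/S3FolderMetrics"        # illustrative custom namespace

for prefix in PREFIXES:
    object_count = 0
    total_bytes = 0
    # Paginate through every object under the prefix and aggregate count/size.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            object_count += 1
            total_bytes += obj["Size"]

    # Publish both values as custom metrics, keyed by a Prefix dimension,
    # so they can be graphed and compared per folder in CloudWatch.
    cloudwatch.put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[
            {
                "MetricName": "NumberOfObjects",
                "Dimensions": [{"Name": "Prefix", "Value": prefix}],
                "Value": object_count,
                "Unit": "Count",
            },
            {
                "MetricName": "BucketSizeBytes",
                "Dimensions": [{"Name": "Prefix", "Value": prefix}],
                "Value": total_bytes,
                "Unit": "Bytes",
            },
        ],
    )

Run it on a schedule (cron or a Lambda on a timer) so the two prefixes show up as separate time series you can graph side by side.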

Related

How to create Multi-metric Alarm?

There are multiple instances with servicename-N running, and I am able to push a custom metric to Namespace-X using the Python SDK. However, when I create an alarm it only lets me select one metric from one of the instances. I have tried a Metric Math based alarm, but the email notification doesn't include detailed information (serviceName, instanceID).
Any pointers to achieve this (e.g. have metadata like serviceName in the notification email)?
[scenario image]
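For context, publishing such a metric per instance looks roughly like the sketch below (the namespace is taken from the question; the metric name, dimension values, and value are illustrative):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Each instance would publish its own data point with its own dimension values.
cloudwatch.put_metric_data(
    Namespace="Namespace-X",
    MetricData=[
        {
            "MetricName": "QueueDepth",   # illustrative metric name
            "Dimensions": [
                {"Name": "serviceName", "Value": "servicename-1"},
                {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},
            ],
            "Value": 42.0,
            "Unit": "Count",
        }
    ],
)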
You can achieve this by creating a Metric Insights alarm (https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-cloudwatch-metrics-insights-alarms/).
Amazon CloudWatch Metrics Insights alarms enables customers to alarm on entire fleets of dynamically changing resources with a single alarm using standard SQL queries...With Metric Insights alarms you can set alarms using Metric Insight queries that monitor multiple resources without having to worry if the resources are short lived or not. For example, you can set a single alarm that alerts when any of your EC2 instances reaches a high threshold for CPU utilization and the alarm will evaluate new instances that are launched afterwards.
Alternatively, you can create an alarm for each individual time series and then create a composite alarm that groups the individual alarms. You can then put your action on the composite alarm. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
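A rough sketch of such a Metrics Insights alarm created with boto3, assuming the illustrative namespace and dimensions from the sketch above (the query, threshold, and SNS topic are placeholders):

import boto3

cloudwatch = boto3.client("cloudwatch")

# One alarm whose query scans every serviceName/InstanceId combination in the
# namespace, including instances that start publishing after the alarm exists.
cloudwatch.put_metric_alarm(
    AlarmName="any-service-queue-depth-high",   # placeholder name
    EvaluationPeriods=3,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    Metrics=[
        {
            "Id": "q1",
            "Expression": (
                'SELECT MAX(QueueDepth) '
                'FROM SCHEMA("Namespace-X", serviceName, InstanceId)'
            ),
            "Period": 60,
            "ReturnData": True,
        }
    ],
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts"],  # placeholder topic
)

Note that the alarm query must return a single time series (here the fleet-wide MAX), so the notification still won't name the breaching service; the composite-alarm approach above is the way to get per-service detail into the email.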

How to filter Dynatrace metrics with request attributes using environment REST API

Using the Dynatrace SaaS GUI, in the Multidimensional analysis menu I am able to split and filter metrics with request attributes, but I can't find any documentation on the syntax for doing the same with the environment v2 API (/metrics/query).
Thanks!
The documentation of the metric query API describes the metricSelector parameter, which is used to select the metric to query and to perform operations on it, e.g. splitting by dimension or filtering based on some values.
You can develop and test the metric selector in the UI: via the menu item "Metrics", search for the metric, then build your metric query in the Data Explorer via "split by" and "filter"; the "Code" tab will then show the corresponding metricSelector that you can also use in the query API.
E.g. a possible metric query looks as follows:
sample.metric:filter(and(in("dt.entity.process_group_instance",entitySelector("type(process_group_instance),tag(~"Prod~")")))):splitBy("dt.entity.process_group_instance",rx_pid):avg:auto:sort(value(avg,descending)):limit(10)
The documentation for the metric selector explains details and contains many more examples.
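A minimal sketch of calling the query API with such a selector from Python, assuming the requests library, an API token with the metrics.read scope, and an illustrative environment URL (the selector is a simplified variant of the example above):

import requests

ENV_URL = "https://abc12345.live.dynatrace.com"   # illustrative environment URL
API_TOKEN = "dt0c01.XXXX"                          # illustrative token with metrics.read scope

metric_selector = (
    'sample.metric'
    ':filter(and(in("dt.entity.process_group_instance",'
    'entitySelector("type(process_group_instance),tag(~"Prod~")"))))'
    ':splitBy("dt.entity.process_group_instance")'
    ':avg:auto:sort(value(avg,descending)):limit(10)'
)

response = requests.get(
    f"{ENV_URL}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    params={
        "metricSelector": metric_selector,
        "resolution": "1h",     # one data point per hour
        "from": "now-24h",      # relative timeframe, last 24 hours
    },
)
response.raise_for_status()
for result in response.json()["result"]:
    print(result["metricId"], result["data"])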

Google compute engine - balancing based on custom metric

I'm trying to have an instance group on Compute Engine autoscaled according to my custom metric.
The google monitoring/api/v3 sample source on GitHub creates a custom metric and I can see it in Stackdriver. Great!
But when I want to use that metric in my instance group, the group is not autoscaled; only one instance is present.
Am I setting up the autoscaler correctly?
gcloud compute instance-groups managed set-autoscaling $MY_GROUP \
--max-num-replicas 2 --min-num-replicas 1 \
--custom-metric-utilization metric=custom.googleapis.com/custom_measurement,utilization-target=3,utilization-target-type=GAUGE \
--zone us-central1-f
Note: in custom_metric.py I have set INSTANCE_ID to the ID of my first VM instance and run custom_metric.py multiple times to simulate some data, because my test VM instance does nothing real.
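For context, the v3 sample essentially writes data points like the following sketch (shown here with the current Python client for the Monitoring v3 API rather than the original custom_metric.py code; the project ID and instance ID are illustrative):

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project-id"                 # illustrative project

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/custom_measurement"
series.resource.type = "gce_instance"
series.resource.labels["instance_id"] = "1234567890123456789"   # illustrative instance ID
series.resource.labels["zone"] = "us-central1-f"
series.resource.labels["project_id"] = "my-project-id"

# One data point, timestamped now, matching the utilization target of 3 above.
now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now - int(now)) * 1e9)}}
)
point = monitoring_v3.Point({"interval": interval, "value": {"double_value": 3.0}})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])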
The V3 API currently won't work with the autoscaler; metrics written through it only show up in Stackdriver.
Note: Autoscaler does not yet support Google Cloud Monitoring V3.
If you want to scale on custom metrics, use the V2 API. It is deprecated (and won't show up in Stackdriver), but it is still the one and only way to make the autoscaler work on your own metrics. This is the answer I got today on a support ticket.

Loading Google Analytics 2 Years historical data via API using CC Rest connector

I am wondering if there is a way to pull two years of unsampled Google Analytics historical data via the API using the CC REST connector component. Unfortunately the GA account is standard, not premium, so I cannot get around the 500K limit.
It would be great if the GoodData developer team could share an ETL graph file that solves this request. It is a common use case for clients.
Thanks,
Andy
I have discovered the following solution.
Run the ga_00_master graph, which will run the ga_01_sub graph multiple times. For each day you want data for, it sends a request to Google Analytics and gives you a file with that day's data.
There are a few things to do:
fill in GA_CONNECTION in the sub graph,
link the parameter file ga_params.prm,
and set the parameters:
for master graph:
GA_MIN_DATE = "yyyy-MM-dd"
for sub graph:
PROFILE_NUMBER

AWS monthly calculator query: is data out counted twice for cloudfront and s3?

I am trying to use CloudFront to stream videos and to compute a rough monthly cost.
I had a question regarding the AWS monthly calculator: http://calculator.s3.amazonaws.com/calc5.html
When I calculate the cost of 500 GB/day under the "Amazon CloudFront" tab, do I need to include that amount again as "Data Transfer Out" under the "Amazon S3" tab? If I do, it roughly doubles my monthly cost, from ~$1,700 to ~$3,700.
Thanks.
Amazon CloudFront charges are based on actual usage of the service in three areas: Data Transfer, HTTP/HTTPS Requests, and Invalidation Requests.
For more details please visit: http://aws.amazon.com/cloudfront/faqs/#What_is_the_price_of_Amazon_CloudFront
[Disclosure : Bucket Explorer]
The S3 costs remain the same; CloudFront costs are additional. CloudFront does cache content, so depending on the frequency and location of the requests you might not be paying S3 for each request.
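A back-of-the-envelope sketch of why entering the same 500 GB/day under both tabs roughly doubles the estimate (the per-GB rates are placeholders, not current AWS pricing):

# Placeholder rates, NOT current AWS pricing; they only illustrate the double count.
CLOUDFRONT_RATE_PER_GB = 0.085
S3_DATA_OUT_RATE_PER_GB = 0.09

gb_per_month = 500 * 30                  # ~15,000 GB served per month

cloudfront_only = gb_per_month * CLOUDFRONT_RATE_PER_GB
double_counted = cloudfront_only + gb_per_month * S3_DATA_OUT_RATE_PER_GB

print(f"CloudFront tab only:     ~${cloudfront_only:,.0f}/month")
print(f"Entered under both tabs: ~${double_counted:,.0f}/month (double counted)")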
This probably belongs in ServerFault