MediaTailor metrics in CloudWatch

I'm looking at MediaTailor metrics in CloudWatch and found that the "Avail" group includes: duration, observedDuration, filledDuration, observedFilledDuration, fillRate, and observedFillRate.
For duration, for example, the documentation says that duration is a "planned" value and observedDuration is an "observed" value, but it is not clear to me what that means. My guess is that "planned" is based on the ad marker in the manifest and "observed" comes from the ad insertion step itself (is that correct?), and that the "observed" values are more accurate.
Either way, I would expect the "planned" and "observed" values to be similar, but usually that is not the case. Here are a couple of examples of the values.
The Filled values are similar, but duration and fillRate are really different, so I don't understand which ones I should use.

My guess is that "planned" is based on the ad marker in the manifest and "observed" comes from the ad insertion step itself (is that correct?), and that the "observed" values are more accurate.
Yes, the observed values are what MediaTailor takes action on based on the VAST response from the Ad Decision Server (ADS). Planned is the value received from the Origin via SCTE-35 messaging.
Per the following documentation: https://docs.aws.amazon.com/mediatailor/latest/ug/monitoring-cloudwatch-metrics.html
Duration - The planned total number of milliseconds of ad avails within the CloudWatch time period. The planned total is based on the ad avail durations in the origin manifest.
ObservedDuration - The observed total number of milliseconds of ad avails that occurred within the CloudWatch time period. Avail.ObservedDuration is emitted at the end of the ad avail, and is based on the duration of the segments reported in the manifest during the ad avail.
To continue your example regarding the Duration metric: let's say that for a set period of time the origin server sends a manifest with a single ad break of 90 seconds. MediaTailor will make a request to the ADS asking for 90 seconds of content. The ADS returns a VAST response that includes a 45-second ad and a 30-second ad, for a total of 75 seconds. MediaTailor will report to CloudWatch that, for the set period of time, the Duration of all avails planned by the origin server was 90 seconds, but the ObservedDuration provided by the ADS was 75 seconds.
The documentation goes into further detail regarding each metric and even provides some examples.
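If it helps to see the gap directly, here is a minimal sketch that pulls both metrics side by side with boto3. The configuration name "my-config" and the ConfigurationName dimension are assumptions to adapt to your own setup.

    # Compare planned vs. observed avail duration for one MediaTailor
    # configuration over the last hour.
    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=1)

    for metric in ("Avail.Duration", "Avail.ObservedDuration"):
        resp = cloudwatch.get_metric_statistics(
            Namespace="AWS/MediaTailor",
            MetricName=metric,
            Dimensions=[{"Name": "ConfigurationName", "Value": "my-config"}],
            StartTime=start,
            EndTime=end,
            Period=300,          # 5-minute buckets
            Statistics=["Sum"],  # total milliseconds per bucket
        )
        for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
            print(metric, point["Timestamp"], point["Sum"])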

Related

After importing a metric into Victoria Metrics, the metric is repeated for 5 minutes. What controls this behavior?

I am writing some software that will be pushing data to Victoria Metrics, as below:
curl -d 'foo{bar="baz"} 30' -X POST 'http://[Victoria]/insert/0/prometheus/api/v1/import/prometheus'
I noticed that if I push a single metric like this, it shows up not as a single data point but rather repeats as if it were being scraped every 15 seconds, either until I push a new value for that metric or until 5 minutes pass.
What setting/mechanism is causing this 5-minute repeat period?
Pushing data with a timestamp does not change this; the metric still gets repeated for 5 minutes after that time, or until a new value arrives.
I don't necessarily need to alter this behavior, just trying to understand why it's happening.
How do you query the database?
I guess this behaviour is due to the range query concept and ephemeral datapoints; check this out:
https://docs.victoriametrics.com/keyConcepts.html#range-query
The interval between datapoints depends on the step parameter, which is 5 minutes when omitted.
If you want to receive only the real datapoints, use the export functions.
https://docs.victoriametrics.com/#how-to-export-time-series
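For example, a rough sketch that pulls only the stored raw samples via the export endpoint. The /select/0/... prefix is an assumption matching the cluster-style /insert/0/... path in your push command (on single-node VictoriaMetrics the path is just /api/v1/export), and the host name is a placeholder.

    import requests

    resp = requests.get(
        "http://victoria/select/0/prometheus/api/v1/export",
        params={"match[]": 'foo{bar="baz"}'},
    )
    # The endpoint streams one JSON line per series, whose parallel
    # "values" and "timestamps" arrays contain only the real datapoints.
    for line in resp.text.splitlines():
        print(line)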
VictoriaMetrics has ephemeral datapoints, which fill gaps using the closest raw sample to the left of the requested timestamp.
So if you make the instant request:
curl "http://<victoria-metrics-addr>/api/v1/query?query=foo_bar&time=2022-05-10T10:03:00.000Z"
The time range in which VictoriaMetrics will try to locate a missing data sample is 5m by default and can be overridden via the step parameter.
step - optional max lookback window for searching for raw samples when executing the query. If step is skipped, then it is set to 5m (5 minutes) by default.
GET | POST /api/v1/query?query=...&time=...&step=...
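As an illustration, the same instant query with an explicit short step, so VictoriaMetrics only looks back 15 seconds for the nearest raw sample instead of the default 5 minutes (the address and metric name are placeholders):

    import requests

    resp = requests.get(
        "http://victoria-metrics:8428/api/v1/query",
        params={
            "query": "foo_bar",
            "time": "2022-05-10T10:03:00.000Z",
            "step": "15s",  # max lookback window for raw samples
        },
    )
    print(resp.json())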
You can read more in the key concepts section of the documentation, which also covers range queries and other TSDB concepts.

How do I find all AWS metrics using high resolution?

I ran into this error in AWS CloudWatch
which does not make sense, as I thought we had zero high-resolution metrics (high-resolution data is only retained for 3 hours). We typically just do 1-minute interval reporting. How do I find all metrics with high resolution? That way I am hoping I can edit them to not be high resolution.
I searched the documentation a ton and looked into the Micrometer code, which seems to default to highResolution = false and a step of 2 minutes (we are using Micrometer). I am trying to figure out next steps for working out why AWS thinks this data is high-resolution data.
I was also thinking "OK, perhaps it would roll up to 1-minute data and then 5-minute data", so in my query I tried 1 minute and 5 minutes, but I still get the error about only 3 hours of data.
The error is thrown because you're using the query syntax (SELECT ...), and that only supports the latest 3 hours of data. The feature is called Metrics Insights; you can see the limits here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-metrics-insights-limits.html
The error is not related to high-resolution metrics. Even if they were high resolution, when you set the period to 5 minutes you would only retrieve datapoints aggregated to 5-minute granularity.
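As a sketch of the workaround, the same data can be fetched through GetMetricData with a plain MetricStat query instead of a SELECT expression, which removes the 3-hour restriction. The namespace and metric below are illustrative.

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)

    resp = cloudwatch.get_metric_data(
        MetricDataQueries=[{
            "Id": "cpu",
            # A MetricStat query instead of an Expression such as
            # 'SELECT AVG(CPUUtilization) FROM "AWS/EC2"', which CloudWatch
            # only evaluates over the most recent 3 hours.
            "MetricStat": {
                "Metric": {"Namespace": "AWS/EC2", "MetricName": "CPUUtilization"},
                "Period": 300,  # 5-minute aggregation
                "Stat": "Average",
            },
        }],
        StartTime=end - timedelta(days=2),  # well beyond 3 hours
        EndTime=end,
    )
    print(resp["MetricDataResults"][0]["Timestamps"][:5])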

Bitcoin Exchange API - more frequent high/low

Is there any way to get a high/low value more frequently than every 24 hours from, say, the Bitstamp API ticker?
This link only tells you how to get the value for every 24 hours
https://www.bitstamp.net/api/
(this also seems to be a problem with every other exchange I've tried)
The 24-hour figures are computed over a rolling window (time_now - 24 hours), so the ticker should give you updated values every second, or maybe every minute, depending on the configuration of the API.
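One workaround sketch: poll the ticker's last price yourself and track the high/low over whatever window you want. The v2 ticker URL below is Bitstamp's public endpoint as I know it, so adjust if it has changed.

    import time
    import requests

    URL = "https://www.bitstamp.net/api/v2/ticker/btcusd/"
    window_high, window_low = float("-inf"), float("inf")

    for _ in range(60):  # e.g. a 1-hour window at one poll per minute
        last = float(requests.get(URL).json()["last"])
        window_high = max(window_high, last)
        window_low = min(window_low, last)
        print(f"last={last} high={window_high} low={window_low}")
        time.sleep(60)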

How to reduce time allotted for a batch of HITs?

Today I created a small batch of 20 categorization HITs named "Grammatical or Ungrammatical" using the web UI. Can you tell me the easiest way to manage this batch so that I can reduce its time allotted from 1 hour to 15 minutes and also remove the Masters qualification? This is a very simple task that's set to auto-approve within 1 hour, and I am fine with that. I just need to make it more lucrative for people to attempt at the penny rate.
You need to register a new HITType with the relevant properties (reduced time and no Masters qualification) and then perform a ChangeHITTypeOfHIT operation on all of the HITs in the batch.
API documentation here: http://docs.aws.amazon.com/AWSMechTurk/latest/AWSMturkAPI/ApiReference_ChangeHITTypeOfHITOperation.html
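In the current MTurk API the same operation is exposed as UpdateHITTypeOfHIT. A rough boto3 sketch, where the HIT IDs, title, description, and reward are placeholders for your batch:

    import boto3

    mturk = boto3.client("mturk")

    # 1. Register a new HITType: 15-minute duration, 1-hour auto-approval,
    #    and no Masters qualification requirement.
    new_type = mturk.create_hit_type(
        Title="Grammatical or Ungrammatical",
        Description="Categorize the sentence as grammatical or ungrammatical.",
        Reward="0.01",
        AssignmentDurationInSeconds=15 * 60,
        AutoApprovalDelayInSeconds=60 * 60,
        QualificationRequirements=[],  # omit the Masters requirement
    )["HITTypeId"]

    # 2. Move each HIT in the batch onto the new HITType.
    hit_ids = ["HIT_ID_1", "HIT_ID_2"]  # placeholders for the 20 HITs
    for hit_id in hit_ids:
        mturk.update_hit_type_of_hit(HITId=hit_id, HITTypeId=new_type)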

GAE Java Channel API

At http://code.google.com/intl/es-ES/appengine/docs/quotas.html#Channel you can read that, with billing enabled, the maximum channel creation rate is 60 creations/minute. Does that mean we can create only 86,400 channels/day? That's a very low rate, isn't it? And if I have estimated that I could have peaks of, for example, 4,000 creations/minute, what can I do? 60 creations/minute is very few if the channels are one-to-one. Is this correct?
My interpretation of that section is that you will NOT be able to create 4,000 channels per minute. Here is how I would think about it: over ANY 1-minute period, no more than 60 channels can be created. For example, you can create 60 channels at time T; then, for the next 60 seconds, you won't be able to create any. Or you can create 30 at time T and then create one channel every 2 seconds.
I believe another way to think about this is in terms of the token bucket algorithm.
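For illustration, a minimal token-bucket sketch that paces creation at the documented limit, allowing a burst of up to 60 and then a steady 1 creation/second; create_channel is a placeholder for the actual Channel API call.

    import time

    class TokenBucket:
        def __init__(self, capacity=60, refill_per_sec=1.0):
            self.capacity = capacity
            self.tokens = float(capacity)
            self.refill_per_sec = refill_per_sec
            self.last = time.monotonic()

        def acquire(self):
            """Block until one token is available, then consume it."""
            while True:
                now = time.monotonic()
                elapsed = now - self.last
                self.tokens = min(self.capacity,
                                  self.tokens + elapsed * self.refill_per_sec)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                time.sleep((1 - self.tokens) / self.refill_per_sec)

    bucket = TokenBucket()
    for client_id in range(100):
        bucket.acquire()
        # create_channel(str(client_id))  # placeholder for the real call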
Anyway, I believe you can fill out this form to request a higher limit. There is a link to that form from the docs that you linked to in your question.