Suppose you have logs with some transaction ID and timestamp
12:00: transactionID1 handled by funcX
12:01: transactionID2 handled by funcX
12:03: transactionID2 handled by funcY
12:04: transactionID1 handled by funcY
I want to get the time between the two log entries for the same transaction and aggregate (e.g. sum, avg) those time differences.
For example, for transactionID1 the time diff would be (12:04 - 12:00) 4 min and for transactionID2 it would be (12:03 - 12:01) 2 min. Then I'd like to take the average of all these time differences, so (4+2)/2 or 3 min.
Is there a way to do that?
This doesn't seem possible with CloudWatch alone. I don't know where your logs come from, e.g. EC2 or a Lambda function, but what you could do is use the AWS SDK to publish custom metrics.
Approach 1
If the logs are written by the same process, you can keep a map of transactionID to startTime in memory and, when the second log line arrives, publish a custom metric with transactionID as a dimension and the elapsed time since startTime as the value. If the logs come from different processes, e.g. separate Lambda function invocations, you can store the startTime in DynamoDB instead. A rough sketch of the in-memory variant follows below.
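An untested sketch of that idea in Python with Boto3 (the namespace, metric name and dimension name here are assumptions, not an existing convention):

import time
import boto3

cloudwatch = boto3.client("cloudwatch")
start_times = {}  # transactionID -> startTime, kept in memory by the handling process

def on_log_event(transaction_id):
    if transaction_id not in start_times:
        # first log line for this transaction (e.g. handled by funcX)
        start_times[transaction_id] = time.time()
        return
    # second log line (e.g. handled by funcY): publish the elapsed time as a custom metric
    duration = time.time() - start_times.pop(transaction_id)
    cloudwatch.put_metric_data(
        Namespace="Custom/Transactions",          # assumed namespace
        MetricData=[{
            "MetricName": "TransactionDuration",  # assumed metric name
            "Dimensions": [{"Name": "TransactionId", "Value": transaction_id}],
            "Unit": "Seconds",
            "Value": duration,
        }],
    )

If you also publish the same value without the TransactionId dimension, the Average statistic of that metric gives you the average duration across all transactions.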
Approach 2
If the transactions are independent, you could also create a custom metric per transaction and use the CloudWatch metric math function DIFF_TIME, which creates a calculated metric whose values are the time differences (in seconds) between consecutive data points of each transaction.
With the AVG metric math function it should then be possible to calculate the average duration.
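As a hedged sketch of how such a calculated metric could be queried with Boto3, assuming one data point is published to a per-transaction metric at each of the two log lines (the namespace and metric name are assumptions):

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_data(
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    MetricDataQueries=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "Custom/Transactions",   # assumed namespace
                    "MetricName": "TransactionEvent",     # assumed metric, one data point per log line
                    "Dimensions": [{"Name": "TransactionId", "Value": "transactionID1"}],
                },
                "Period": 60,
                "Stat": "SampleCount",
            },
            "ReturnData": False,
        },
        # DIFF_TIME returns the difference in seconds between the timestamps of
        # consecutive data points, i.e. the time between the two log events.
        {"Id": "duration", "Expression": "DIFF_TIME(m1)", "Label": "TransactionDuration"},
    ],
)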
Personally, I have used the first approach to calculate a duration across Lambda functions and other services.
I am trying to optimize my Apache Beam pipeline on Google Cloud Platform Dataflow.
Background information: I am trying to read streaming data from Pub/Sub messages and aggregate it over 3 time windows: 1 min, 5 min and 60 min. The aggregations consist of summing, averaging, finding the maximum or minimum, etc. For example, for all data collected from 12:00 to 12:01, I want to aggregate it and write the output into BigTable's 1-min column family. And for all data collected from 12:00 to 12:05, I want to similarly aggregate it and write the output into BigTable's 5-min column family. The same goes for 60 min.
The current approach I took is to have 3 separate Dataflow jobs (i.e. 3 separate Beam pipelines), each with a different window duration (1 min, 5 min and 60 min). See https://beam.apache.org/releases/javadoc/2.0.0/org/apache/beam/sdk/transforms/windowing/Window.html. The outputs of all 3 Dataflow jobs are written to the same BigTable, but to different column families. Other than that, the functions and aggregations applied to the data are the same across the 3 jobs.
However, this seems very computationally and cost inefficient, as the 3 jobs are essentially doing the same work, the only differences being the window duration and the output column family.
Some challenges and limitations we faced were that, from the Apache Beam documentation, it seems we are unable to create multiple windows of different durations in a single Dataflow job. Also, when we write the final data into BigTable, we have to define the table, column family, column and row key, and unfortunately the column family is a fixed property (i.e. it cannot be redefined or changed based on the window duration).
Hence, I am wondering if there is a way to use only 1 Dataflow job (i.e. 1 Apache Beam pipeline) that fulfils the objective of this project, which is to aggregate data over different window durations and write the results to different column families of the same BigTable?
I was considering a split-stream approach: first window by 1 min, then split into 3 streams (one writing to BigTable at the 1-min interval, another doing the 5-min aggregation, and another the 60-min aggregation). However, the problem is that we are working with streaming data and not batch data. Roughly what I had in mind is sketched below.
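An untested sketch with the Beam Python SDK, where pipeline_options, parse_message and write_row are placeholders for our existing setup, parsing and BigTable-write logic:

import apache_beam as beam
from apache_beam.transforms.window import FixedWindows

WINDOWS = [(1, "cf_1min"), (5, "cf_5min"), (60, "cf_60min")]  # minutes -> column family

with beam.Pipeline(options=pipeline_options) as p:
    events = (
        p
        | "Read" >> beam.io.ReadFromPubSub(subscription="projects/.../subscriptions/...")
        | "Parse" >> beam.Map(parse_message)  # -> (key, value) pairs
    )

    # Branch the same PCollection once per window duration instead of running 3 jobs.
    for minutes, column_family in WINDOWS:
        (
            events
            | f"Window{minutes}m" >> beam.WindowInto(FixedWindows(minutes * 60))
            | f"Sum{minutes}m" >> beam.CombinePerKey(sum)  # or mean / max / min
            | f"Write{minutes}m" >> beam.Map(write_row, column_family=column_family)
        )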
Thank you
I'm new to PromQL and I am using it to create a Grafana dashboard to visualize various API metrics like throughput, latency, etc.
For measuring latency I came across these queries being used together. Can someone explain how they work?
histogram_quantile(0.99, sum(irate(http_request_duration_seconds_bucket{path="<API Endpoint>"}[2m])*30) by (path,le))
histogram_quantile(0.95, sum(irate(http_request_duration_seconds_bucket{path="<API Endpoint>"}[2m])*30) by (path,le))
Also, I want to write a query which will show me the number of API calls with latency greater than 4 sec. Can someone please help me with that as well?
The provided queries are designed to return 99th and 95th percentiles for the http_request_duration_seconds{path="..."} metric of histogram type over requests received during the last 2 minutes (see 2m in square brackets).
Unfortunately the provided queries have some issues:
They use the irate() function for calculating the per-second increase rate of every bucket defined in the http_request_duration_seconds histogram. This function isn't recommended in the general case, because it tends to return jumpy results on repeated queries. So it is better to use rate() or increase() instead when calculating histogram_quantile().
They multiply the calculated irate() by 30. This has no effect on the query results, since histogram_quantile() normalizes the provided per-bucket values.
So it is recommended to use the following query instead:
histogram_quantile(0.99,
  sum(
    increase(http_request_duration_seconds_bucket{path="..."}[2m])
  ) by (le)
)
This query works in the following way:
Prometheus selects all the time series matching the http_request_duration_seconds_bucket{path="..."} time series selector on the selected time range on the graph. These time series represent histogram buckets for the http_request_duration_seconds histogram. Each such bucket contains a counter, which counts the number of requests with duration not exceeding the value specified in the le label.
Prometheus calculates the increase over the last 2 minutes for each selected time series, i.e. how many requests hit each bucket during the last 2 minutes.
Prometheus calculates per-le sums over bucket values calculated at step 2 - see sum() function docs for details.
Prometheus calculates the estimated 99th percentile from the per-le sums returned at step 3 by executing the histogram_quantile() function. The error of the estimation depends on the number of buckets and on the le values: more buckets with a better le distribution usually give a lower error for the estimated percentile.
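To illustrate step 4, here is a simplified Python sketch of the interpolation histogram_quantile() performs (the actual Prometheus implementation handles more edge cases):

def histogram_quantile(q, buckets):
    # buckets: list of (le upper bound, cumulative count), sorted by bound,
    # ending with float("inf") for the +Inf bucket.
    total = buckets[-1][1]
    rank = q * total
    lower, prev_count = 0.0, 0.0
    for upper, count in buckets:
        if count >= rank:
            if upper == float("inf"):
                return lower  # cannot interpolate inside the +Inf bucket
            # assume requests are evenly spread inside the bucket and interpolate
            return lower + (upper - lower) * (rank - prev_count) / (count - prev_count)
        lower, prev_count = upper, count

# Example: out of 100 requests, 95 took <= 0.5s and 99 took <= 4s.
# The 99th percentile is estimated as 4.0s; any value in (0.5s, 4s] would be
# consistent with the data, which is why the bucket boundaries drive the error.
print(histogram_quantile(0.99, [(0.1, 80), (0.5, 95), (4.0, 99), (float("inf"), 100)]))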
I've been tasked with monitoring a data integration task, and I'm trying to figure out the best way to do this using cloudwatch metrics.
The data integration task populates records in 3 database tables. What I'd like to do is publish custom metrics each day, with the number of rows that have been inserted for each table. If the row count for one or more tables is 0, then it means something has gone wrong with the integration scripts, so we need to send alerts.
My question is, how to most logically structure the calls to put-metric-data.
I'm thinking of the data being structured something like this...
Namespace: Integrations/IntegrationProject1
Metric Name: RowCount
Metric Dimensions: "Table1", "Table2", "Table3"
Metric Values: 10, 100, 50
Does this make sense, or should it logically be structured in some other way? There is no inherent relationship between the tables, other than that they're all associated with a particular project. What I mean is, I don't want to be inferring some kind of meaningful progression from 10 -> 100 -> 50.
Is this something that can be done with a single call to CloudWatch put-metric-data, or would it need to be 3 separate calls?
Separate calls, I think, would look something like this...
aws cloudwatch put-metric-data --metric-name RowCount --namespace "Integrations/IntegrationProject1" --unit Count --value 10 --dimensions Table=Table1
aws cloudwatch put-metric-data --metric-name RowCount --namespace "Integrations/IntegrationProject1" --unit Count --value 100 --dimensions Table=Table2
aws cloudwatch put-metric-data --metric-name RowCount --namespace "Integrations/IntegrationProject1" --unit Count --value 50 --dimensions Table=Table3
This seems like it should work, but is there some more efficient way I can do this, and combine it into a single call?
Also is there a way I can qualify that the data has a resolution of only 24 hours?
Your structure looks fine to me. Consider having a dimension for your stage: beta|gamma|prod.
This seems like it should work, but is there some more efficient way I can do this, and combine it into a single call?
Not with the AWS CLI, but if you use an SDK, e.g. Python Boto3, you can publish up to 20 metrics in a single PutMetricData call.
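For example, a minimal Boto3 sketch (untested) that publishes all three row counts from your question in one PutMetricData call:

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="Integrations/IntegrationProject1",
    MetricData=[
        {
            "MetricName": "RowCount",
            "Dimensions": [{"Name": "Table", "Value": table}],
            "Unit": "Count",
            "Value": float(value),
        }
        for table, value in [("Table1", 10), ("Table2", 100), ("Table3", 50)]
    ],
)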
Also is there a way I can qualify that the data has a resolution of only 24 hours?
No. CloudWatch will aggregate the data it receives on your behalf. If you want to see a daily datapoint, you can change the period to 1 day when graphing the metric on the CloudWatch Console.
Is there a way to check how many slots were used by a query over the period of its execution in BigQuery? I checked the execution plan, but I could only see the Slot Time in ms and could not find any parameter or graph showing the number of slots used over the period of execution. I even tried looking at Stackdriver Monitoring, but I could not find anything like this. Please let me know if it can be calculated in some way, or if it is shown somewhere I might have missed.
A BigQuery job will report the total number of slot-milliseconds from the extended query stats in the job metadata, which is analogous to computational cost. Each stage of the query plan also indicates input stats for the stage, which can be used to indicate the number of units of work each stage dispatched.
More details about the representation can be found in the REST reference for jobs. See statistics.query.totalSlotMs and statistics.query.queryPlan[].parallelInputs for more information.
BigQuery now provides a key in the Jobs API JSON called "timeline". This structure provides "statistics.query.timeline[].completedUnits", which you can obtain either during job execution or after it completes. If you pull this information after a job has executed, "completedUnits" will be the cumulative sum of all the units of work (slots) utilised during the query execution.
The question might have two parts though: (1) the total number of slots utilised (units of work completed), or (2) the maximum number of units used in parallel at any point in time during the query.
For (1), the answer is as above, given by "completedUnits".
For (2), you might need to consider the maximum value of queryPlan.parallelInputs across all query stages, which indicates the maximum "number of parallelizable units of work for the stage" (https://cloud.google.com/bigquery/query-plan-explanation). A rough sketch of both is below.
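A rough sketch with the google-cloud-bigquery Python client (attribute availability varies by client version; the job ID is a placeholder):

from google.cloud import bigquery

client = bigquery.Client()
job = client.get_job("my_job_id")  # placeholder job ID; pass location=... if needed

# (1) total slot-milliseconds and cumulative units of work completed
print("total slot-ms:", job.slot_millis)
timeline = list(job.timeline)
if timeline:
    print("completed units:", timeline[-1].completed_units)

# (2) peak number of parallelizable units of work across all stages
# (parallelInputs in the REST representation of each query plan stage)
max_parallel = max(
    (int(stage._properties.get("parallelInputs", 0)) for stage in job.query_plan),
    default=0,
)
print("max parallel inputs:", max_parallel)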
If, after this, you additionally want to know whether the 2000 parallel slots you are allocated across your entire on-demand query project are sufficient, you'd need to find the point in time, across all queries running in your project, where slot utilisation is at a maximum. This is not a trivial task, but Stackdriver Monitoring provides the clearest view of this.
For the Conversion Stats attribution values in the Ads API (i.e. post_click_1d, post_click_7d, post_click_28d, post_imp_1d, post_imp_7d, post_imp_28d), are the values cumulative, or do they capture specific conversions relative to the time interval?
If cumulative, do they reset per day?
Is there any difference between account, campaign and ad-level stats calls?
It depends on the value you pass to aggregate_days. By default, it's 0, which gives the total number of conversions. If you set it to 1, the values are aggregated daily; likewise, 7 gives weekly aggregation.
Refer to the Aggregation Window part of the documentation.
https://developers.facebook.com/docs/reference/ads-api/conversionstatistics/
There's definitely a difference between account-, campaign- and ad-level stats. It depends on which level of stats you need: whether you want all the ad groups within an account or a campaign, or just a few particular ad groups.