Converting AWS CloudWatch Metrics Insight query to CDK Metric

I am modifying the sample at https://github.com/cdk-patterns/serverless/tree/main/the-eventbridge-etl/typescript as I want to add a dashboard widget to my CloudFormation Stack that shows the Fargate vCPU usage. I have been able to upgrade the app to use CDK v2, and deployment/functionality has been confirmed. However, I cannot get the vCPU widget in the dashboard to show any data.
If I configure the widget manually, from within AWS CloudWatch's Source field, the query looks as follows:
{
    "metrics": [
        [ { "expression": "SELECT COUNT(ResourceCount) FROM SCHEMA(\"AWS/Usage\", Class,Resource,Service,Type) WHERE Service = 'Fargate' AND Resource = 'vCPU'", "label": "Query1", "id": "q1" } ],
        [ "AWS/Usage", "ResourceCount", "Service", "Fargate", "Type", "Resource", { "id": "m1" } ]
    ],
    "view": "timeSeries",
    "title": "ExtractECSJob",
    "region": "us-west-2",
    "timezone": "Local",
    "stat": "Sum",
    "period": 300
}
However, when I attempt to use CDK, with the following TypeScript code:
const extractECSWidget = new GraphWidget({
  title: "ExtractECSJob",
  left: [
    new Metric({
      namespace: "AWS/Usage",
      metricName: "ResourceCount",
      statistic: "Sum",
      period: Duration.seconds(300),
      dimensionsMap: {
        "Service": "Fargate",
        "Type": "Resource",
        "Resource": "vCPU"
      }
    })
  ]
});
This does not translate to the above, and no information is shown in this widget. The new source looks as follows:
{
    "view": "timeSeries",
    "title": "ExtractECSJob",
    "region": "us-west-2",
    "metrics": [
        [ "AWS/Usage", "ResourceCount", "Resource", "vCPU", "Service", "Fargate", "Type", "Resource", { "stat": "Sum" } ]
    ],
    "period": 300
}
How do I map the above metrics source definition to the CDK source construct?
I tried using MathExpression with the following:
let metrics = new MathExpression({
  expression: "SELECT COUNT('metricName') FROM SCHEMA('\"AWS/Usage\"', 'Class','Resource','Service','Type') WHERE Service = 'Fargate' AND Resource = 'vCPU'",
  usingMetrics: {}
})
const extractECSWidget = new GraphWidget({
  title: "ExtractECSJob",
  left: [
    metrics
  ]
});
I get the warning during cdk diff:
[Warning at /EventbridgeEtlStack/EventBridgeETLDashboard] Math expression 'SELECT COUNT(metricName) FROM SCHEMA($namespace, Class,Resource,Service,Type) WHERE Service = 'Fargate' AND Resource = 'vCPU'' references unknown identifiers: metricName, namespace, lass, esource, ervice, ype, ervice, argate, esource, vCPU. Please add them to the 'usingMetrics' map.
What should I put in the usingMetrics map? Any help is appreciated.

Thanks to AWS Support, I was able to get this fixed. The updated code looks like the following:
let metrics = new MathExpression({
  expression: "SELECT COUNT(ResourceCount) FROM SCHEMA(\"AWS/Usage\", Class,Resource,Service,Type) WHERE Service = 'Fargate' AND Resource = 'vCPU'",
  usingMetrics: {},
  label: "Query1"
})
let metric2 = new Metric({
  namespace: "AWS/Usage",
  metricName: "ResourceCount",
  period: cdk.Duration.seconds(300),
  dimensionsMap: {
    "Service": "Fargate",
    "Type": "Resource",
  }
})
const extractECSWidget = new GraphWidget({
  title: "ExtractECSJobTest",
  left: [metrics, metric2],
  region: "us-west-2",
  statistic: "Sum",
  width: 24
});
dashboardStack.addWidgets(
  extractECSWidget
);
When running cdk deploy, I still get the same warning (about unknown identifiers being referenced) but the widget is functioning as expected.

This feature is not supported by the CDK yet; I've opened the issue https://github.com/aws/aws-cdk/issues/22844
I faced the same issue while creating an alarm based on a metric defined by a query. I found a workaround using the level 1 construct CfnAlarm; see the sketch below.
Maybe the same kind of workaround exists for widgets.
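A minimal sketch of that workaround, assuming an alarm built on the same Metrics Insights query as the question. The construct ID, threshold, and comparison operator are illustrative assumptions; the metrics property maps directly to the AWS::CloudWatch::Alarm CloudFormation resource, which accepts a query as an expression:

import { aws_cloudwatch as cloudwatch } from 'aws-cdk-lib';

// Hypothetical alarm on the Metrics Insights query; the threshold and
// comparison are placeholders, not values from the original post.
new cloudwatch.CfnAlarm(this, 'FargateVCpuAlarm', {
  comparisonOperator: 'GreaterThanOrEqualToThreshold',
  evaluationPeriods: 1,
  threshold: 100,
  metrics: [
    {
      id: 'q1',
      expression: "SELECT COUNT(ResourceCount) FROM SCHEMA(\"AWS/Usage\", Class,Resource,Service,Type) WHERE Service = 'Fargate' AND Resource = 'vCPU'",
      period: 300,
      returnData: true,
    },
  ],
});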

Related

Deploying AWS CloudWatch dashboards with the CDK: how do I hide metrics

I have a custom metric that I push updates to in my code. In the CDK, I have created a derived metric from this custom metric. I would like the derived metric to show up in the dashboard but the original metric to be hidden. How can I achieve this?
Here is my cut-down (TypeScript) CDK code which deploys successfully:
const createDashboard = (scope: cdk.Construct, namespace: string, statistic = Statistic.AVERAGE) => {
  const customDynamoLatencyMetric: IMetric = new Metric({
    period: Duration.minutes(1),
    metricName: 'MY_DYNAMO_LATENCY_METRIC',
    namespace,
    statistic,
  });
  const derivedAverageDynamoLatencyMetric = new MathExpression({
    expression: 'm1/1000',
    label: 'To Dynamo Latency',
    usingMetrics: { m1: customDynamoLatencyMetric },
    period: Duration.minutes(1),
  });
  const dashboard = new Dashboard(scope, 'myDashboard', {
    dashboardName: 'myDashboard',
  });
  const widget = new GraphWidget({
    title: 'Average Latency',
    left: [customDynamoLatencyMetric, derivedAverageDynamoLatencyMetric],
    view: GraphWidgetView.TIME_SERIES,
    region: AWS_DEFAULT_REGION,
    width: 12,
  });
  dashboard.addWidgets(widget);
};
If I manually mark this metric as invisible in the AWS CloudWatch Dashboard console, then when I view/edit the source in the CloudWatch console I see the following:
"metrics": [
[ "stephenburns-gcs-pipeline", "DYNAMO_LATENCY", { "id": "m1", "visible": false } ],
[ { "label": "To Dynamo Latency", "expression": "m1/1000", "period": 60, "id": "e1", "region": "ap-southeast-2" } ]
]
My question is how do I get that "visible": false property via the CDK?
I tried using the Metric's dimensions property e.g.
dimensions: { visible: false }
but it fails at deployment time with the error: "Invalid metric field type, only "String" type is allowed"
Does anyone know how to mark a metric as initially invisible?
If you only add the original Metric to the usingMetrics property of the MathExpression and don't add it directly to the GraphWidget, the CDK appears to automatically set visible to false. The CDK documentation does not currently (as of version 1.123.0) indicate a way to set the visibility of a Metric directly.
In the code example you provided, this would simply require changing the line:
left: [customDynamoLatencyMetric, derivedAverageDynamoLatencyMetric],
to:
left: [derivedAverageDynamoLatencyMetric],
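Putting it together, a minimal sketch of the resulting widget, reusing the question's names:

// Only the MathExpression goes on the graph. Because the raw metric is still
// referenced through usingMetrics, the CDK appears to emit it in the
// dashboard source with "visible": false.
const widget = new GraphWidget({
  title: 'Average Latency',
  left: [derivedAverageDynamoLatencyMetric],
  view: GraphWidgetView.TIME_SERIES,
  region: AWS_DEFAULT_REGION,
  width: 12,
});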

Generate "Instances" definition programmatically to create EMR cluster in StepFunctions

I have a case where I want to dynamically create an EMR cluster based on a user-defined configuration and execute a sequence of steps on it using AWS Step Functions.
For this, I am planning to provide the instance configuration as an input to the step functions workflow.
Based on the StepFunctions-EMR Integration Documentation, the definition is the same as that of the RunJobFlow API.
However, when I try to generate the definition by serializing an instance of JobFlowInstancesConfig to JSON and pass it to the StateMachine as an input, it throws an error saying:
The field 'Instances.KeepJobFlowAliveWhenNoSteps' is required but was missing
Here is the JSON generated after serialization:
{
    "instanceFleets": [
        {
            "instanceFleetType": "MAIN",
            "targetOnDemandCapacity": 1,
            "instanceTypeConfigs": [
                {
                    "instanceType": "m5.xlarge"
                }
            ]
        },
        {
            "instanceFleetType": "CORE",
            "targetOnDemandCapacity": 1,
            "instanceTypeConfigs": [
                {
                    "instanceType": "c5.2xlarge"
                }
            ]
        }
    ],
    "keepJobFlowAliveWhenNoSteps": true
}
I pass this in the input and reference it in my Step Functions definition in the Task below (where I expect the above definition to replace $.jobFlowInstancesConfig):
...
"GetCluster": {
    "Type": "Task",
    "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
    "Parameters": {
        "Name.$": "$.clusterName",
        "VisibleToAllUsers": true,
        "ReleaseLabel": "emr-5.30.0",
        "Applications": [
            {
                "Name": "Spark"
            }
        ],
        "ServiceRole": "EMR_DefaultRole",
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "LogUri": "s3://my-aws-logs/elasticmapreduce/",
        "Instances.$": "$.jobFlowInstancesConfig"
    }
}
...
My suspicion is that this is failing because Step Functions expects the field names to start with an upper-case letter.
Question: How do I programmatically generate the appropriate definition without having to play around with Strings for generating the JSON? Is there a straightforward way to serialize the above definition to one that will work with StepFunctions?
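If the upper-case suspicion is right, one way to avoid hand-building the JSON string is to serialize normally and then recursively upper-case the first letter of every key. Below is a minimal sketch in TypeScript (the question itself doesn't fix a language, and serializedInstancesConfig is a hypothetical variable holding the JSON shown above):

// Recursively upper-case the first letter of every key, so a camelCase SDK
// serialization matches the PascalCase field names Step Functions expects.
function toPascalCaseKeys(value: unknown): unknown {
  if (Array.isArray(value)) {
    return value.map(toPascalCaseKeys);
  }
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .map(([k, v]) => [k.charAt(0).toUpperCase() + k.slice(1), toPascalCaseKeys(v)]),
    );
  }
  return value; // primitives pass through unchanged
}

// Usage: transform the serialized JobFlowInstancesConfig before passing it
// as the state machine input.
const instances = toPascalCaseKeys(JSON.parse(serializedInstancesConfig));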

How to use input transformer for ECS Fargate launch type with Terraform CloudWatch event trigger

I'm using Terraform to create a CloudWatch Event rule and target with an ECS Fargate launch type, where the event source is S3. When I use the input_transformer field to pass the bucket and key into the ECS task, my event rule results in a failed invocation.
This is the aws_cloudwatch_event_rule:
resource "aws_cloudwatch_event_rule" "event_rule" {
name = "dev-gnss-source-put-rule-tf"
description = "Capture S3 events on uploads bucket"
event_pattern = <<PATTERN
{
"source": [
"aws.s3"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"s3.amazonaws.com"
],
"eventName": [
"PutObject"
],
"requestParameters": {
"bucketName": [
"example-bucket-name"
]
}
}
}
PATTERN
}
This is the aws_cloudwatch_event_target:
resource "aws_cloudwatch_event_target" "event_target" {
target_id = "dev-gnss-upload-event-target-tf"
arn = "example-cluster-arn"
rule = aws_cloudwatch_event_rule.event_rule.name
role_arn = aws_iam_role.uploads_events.arn
ecs_target {
launch_type = "FARGATE"
task_count = 1 # Launch one container / event
task_definition_arn = "example-task-definition-arn"
network_configuration {
subnets = ["example-subnet"]
security_groups = []
}
}
input_transformer {
input_paths = {
s3_bucket = "$.detail.requestParameters.bucketName"
s3_key = "$.detail.requestParameters.key"
}
input_template = <<TEMPLATE
{
"containerOverrides": [
{
"name": "myproject-task",
"environment": [
{ "name": "S3_BUCKET", "value": <s3_bucket> },
{ "name": "S3_KEY", "value": <s3_key> }
]
}
]
}
TEMPLATE
}
}
If I remove the input_transformer section, it works fine, but I need to pass in the S3 bucket and key to process the particular file.
My rationale for doing this is to remove the need for an intermediary Lambda; I was guided by this Medium post: https://medium.com/@bowbaq/trigger-an-ecs-job-when-an-s3-upload-completes-3559c44c37d1
Any advice is appreciated.
After hours of going in circles, I found an answer!
So the first step is to find the cause of the failed invocation. You can do this by checking the CloudTrail logs: navigate to CloudTrail > Event history and type RunTask in the "Search by Event name" box. You should see a series of events from the event source ecs.amazonaws.com. Find one that relates to the failed invocation you experienced.
When you click into the event, you can see an errorMessage under the Event record section. In my case, it was the following:
"errorCode": "InvalidParameterException",
"errorMessage": "Override for container named myproject-task is not a container in the TaskDefinition.",
This may be different for you. For me, it was because my containerOverride name was incorrect. This field refers to "The name of the container that receives the override. This parameter is required if any override is specified." (ref: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html)
Correcting this field fixed my issue.
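If you would rather script the CloudTrail search described above than click through the console, here is a minimal sketch using the AWS SDK for JavaScript v3 (my own addition; the original answer used the console). It assumes the @aws-sdk/client-cloudtrail package and default credentials:

import { CloudTrailClient, LookupEventsCommand } from '@aws-sdk/client-cloudtrail';

// Look up recent RunTask calls (the same events the console search surfaces).
// Each event's CloudTrailEvent payload is a JSON string that carries any
// errorCode/errorMessage recorded for the call.
const client = new CloudTrailClient({ region: 'us-east-1' }); // assumed region
const { Events } = await client.send(new LookupEventsCommand({
  LookupAttributes: [{ AttributeKey: 'EventName', AttributeValue: 'RunTask' }],
  MaxResults: 20,
}));
for (const event of Events ?? []) {
  const record = JSON.parse(event.CloudTrailEvent ?? '{}');
  if (record.errorCode) {
    console.log(record.errorCode, record.errorMessage);
  }
}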

Dropbox API V2 list_file_members/batch empty results

I'm currently trying to work with the Dropbox list_file_members API endpoint, as it appears to be the only place to find out who owns a file (see the following example result, taken from the documentation page):
{
    "users": [
        {
            "access_type": {
                ".tag": "owner"
            },
            "user": {
                "account_id": "dbid:AAH4f99T0taONIb-OurWxbNQ6ywGRopQngc",
                "same_team": true,
                "team_member_id": "dbmid:abcd1234"
            },
            "permissions": [],
            "is_inherited": false
        }
    ],
    "groups": [...]
    ...
}
However, when I call the API on a single file I get the following:
{
    "users": [],
    "groups": [
        {
            "access_type": {
                ".tag": "editor"
            },
            "permissions": [],
            "is_inherited": true,
            "group": {
                "group_name": "Everyone at TEAM_NAME_HERE",
                "group_id": "g:GROUP_ID_HERE",
                "member_count": 6,
                "group_management_type": {
                    ".tag": "company_managed"
                },
                "group_type": {
                    ".tag": "team"
                },
                "is_owner": false,
                "same_team": true
            }
        }
    ],
    "invitees": []
}
This result contains no owner information; I'm assuming this is because everyone has the same access level?
The problem worsens when I query files in batches using the sharing/list_file_members/batch endpoint; I get the following result:
[
    {
        "file": "id:THIS_IS_MY_FILE_ID",
        "result": {
            ".tag": "result",
            "members": {
                "users": [],
                "groups": [],
                "invitees": []
            },
            "member_count": 0
        }
    }
]
Obviously this is even less helpful. The result is the same whether I access the API via my own PHP code or via the API Explorer. Could anyone tell me where I'm going wrong, and why I'm getting no results for users, and even groups, when querying in batches?
The /2/sharing/list_file_members endpoint is documented as:
Use to obtain the members who have been invited to a file, both inherited and uninherited members.
The /2/sharing/list_file_members/batch endpoint is documented as:
Get members of multiple files at once. The arguments to this route are more limited, and the limit on query result size per file is more strict. To customize the results more, use the individual file endpoint.
Inherited users are not included in the result, and permissions are not returned for this endpoint.
It sounds like the file for your example is in a team folder, and so the group listed for your non-batch example is the team group, i.e., an inherited group. The documentation indicates that this group isn't expected when using the batch endpoint.
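Since the batch endpoint omits inherited members, one option is to fall back to the individual endpoint per file. A minimal sketch calling the HTTPS endpoint directly from TypeScript (my own illustration, assuming a Node 18+ runtime with global fetch and an access token in DROPBOX_TOKEN; the question used PHP):

// Query /2/sharing/list_file_members for a single file. include_inherited
// defaults to true; it is stated explicitly here because the inherited team
// group is exactly what the batch endpoint leaves out.
async function listFileMembers(fileId: string): Promise<unknown> {
  const response = await fetch('https://api.dropboxapi.com/2/sharing/list_file_members', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.DROPBOX_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ file: fileId, include_inherited: true }),
  });
  return response.json();
}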

ARM - How can I get the access key from a storage account to use in AppSettings later in the template?

I'm creating an Azure Resource Manager template that instantiates multiple resources, including an Azure storage account and an Azure App Service with a Web App.
I'd like to be able to capture the primary access key (or the full connection string, either way is fine) from the newly-created storage account, and use that as a value for one of the AppSettings for the Web App.
Is that possible?
Use the listKeys helper function.
"appSettings": [
{
"name": "STORAGE_KEY",
"value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value]"
}
]
This quickstart does something similar:
https://azure.microsoft.com/en-us/documentation/articles/cache-web-app-arm-with-redis-cache-provision/
The syntax has changed since the other answer was accepted. The error you will now hit is "Template language expression property 'key1' doesn't exist, available properties are 'keys'".
Keys are now represented as an array of keys, and the syntax is now:
"StorageAccount": "[Concat('DefaultEndpointsProtocol=https;AccountName=',variables('StorageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('StorageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]",
See: http://samcogan.com/retrieve-azure-storage-key-in-arm-script/
I have faced this issue twice: first in 2015, and again in May 2017.
I needed to add connection strings to the Web App automatically, from resources generated during deployment of the ARM template, so that I would not have to add these values manually later.
The first time, I used the old version of the listKeys function (it looks like the old version returns the result not as an object but as a value):
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2015-05-01-preview').key1)]"
},
Today, the latest working version of the template is:
"resources": [
{
"apiVersion": "2015-08-01",
"type": "config",
"name": "connectionstrings",
"dependsOn": [
"[resourceId('Microsoft.Web/Sites/', parameters('webSiteName'))]"
],
"properties": {
"DefaultConnection": {
"value": "[concat('Data Source=tcp:', reference(resourceId('Microsoft.Sql/servers/', parameters('sqlserverName'))).fullyQualifiedDomainName, ',1433;Initial Catalog=', parameters('databaseName'), ';User Id=', parameters('administratorLogin'), '#', parameters('sqlserverName'), ';Password=', parameters('administratorLoginPassword'), ';')]",
"type": "SQLServer"
},
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
},
"AzureWebJobsDashboard": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
}
}
},
Thanks.
Below is an example of adding a storage account to ADLA (Azure Data Lake Analytics):
"storageAccounts": [
{
"name": "[parameters('DataLakeAnalyticsStorageAccountname')]",
"properties": {
"accessKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]"
}
}
],
In the variables section you can keep:
"variables": {
"apiVersion": "[providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]]",
"storageAccountid": "[concat(resourceGroup().id,'/providers/','Microsoft.Storage/storageAccounts/', parameters('DataLakeAnalyticsStorageAccountname'))]"
},