Can step functions wait on a static website? - amazon-s3

If I deploy a static website with S3 and API Gateway, is there any way for a Step Function to wait for some activity, then redirect the user on that static website to another one?

WeCanBeFriends,
This is possible using the Job Status Poller pattern, but tweaked slightly. If the "Job" is to deploy the website, then the condition to "Complete Job" is to see some activity come in (ideally through CloudWatch metrics).
Once you see enough metrics to be comfortable with your deployment, you can either send a push notification to the web app telling it to redirect (using a Lambda function that calls SNS, as in the Wait Timer sample) or have the web app poll the execution status until it's complete.
Below I've posted a very simple variation of the Job Status Poller to illustrate my example:
{
"Comment": "A state machine that publishes to SNS after a deployment completes.",
"StartAt": "StartDeployment",
"States": {
"StartDeployment": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:012345678912:function:KickOffDeployment",
"ResultPath": "$.guid",
"Next": "CheckIfDeploymentComplete"
},
"CheckIfDeploymentComplete": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:012345678912:function:CheckIfDeploymentComplete",
"Next": "TriggerWebAppRefresh",
"InputPath": "$.guid",
"ResultPath": "$.status",
"Retry": [ {
"ErrorEquals": [ "INPROGRESS" ],
"IntervalSeconds": 5,
"MaxAttempts": 240,
"BackoffRate": 1.0
} ],
"Catch": [ {
"ErrorEquals": ["FAILED"],
"Next": "DeploymentFailed"
}]
},
"DeploymentFailed": {
"Type": "Fail",
"Cause": "Deployment failed",
"Error": "Deployment FAILED"
},
"TriggerWebAppRefresh": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:012345678912:function:SendSNSToWebapp",
"InputPath": "$.guid",
"End": true
}
}
}
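For illustration, here is a minimal sketch of what the CheckIfDeploymentComplete Lambda could look like, assuming the activity signal is a custom CloudWatch metric; the namespace, metric name and threshold below are placeholders, not part of the original example. Raising an exception class named INPROGRESS is what lets the Retry block above keep polling.

# Minimal sketch of the CheckIfDeploymentComplete Lambda (Python).
# Assumptions: activity shows up as a custom CloudWatch metric "Requests" in a
# hypothetical "StaticSite" namespace, and 10 requests count as "enough".
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

class INPROGRESS(Exception):
    """Exception name matches the ErrorEquals entry in the Retry block."""

def lambda_handler(event, context):
    # 'event' is the deployment guid passed in via InputPath ("$.guid").
    now = datetime.datetime.utcnow()
    stats = cloudwatch.get_metric_statistics(
        Namespace="StaticSite",            # placeholder namespace
        MetricName="Requests",             # placeholder metric
        StartTime=now - datetime.timedelta(minutes=15),
        EndTime=now,
        Period=300,
        Statistics=["Sum"],
    )
    total = sum(point["Sum"] for point in stats["Datapoints"])
    if total < 10:                         # placeholder threshold
        raise INPROGRESS(f"Only {total} requests observed so far")
    return "SUCCEEDED"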

Related

How do I use ContentMD5 Properly in Step Function Definition when using PutObject on an ObjectLock Enabled S3 Bucket

Currently, I have written a Step Function definition that uses the PutObject SDK integration against an S3 bucket that has Object Lock enabled. Because Object Lock is enabled on my bucket, I need to pass ContentMD5 from the definition. This is the definition I am currently using:
{
"Comment": "PutObject to S3",
"StartAt": "PutObject",
"States": {
"PutObject": {
"Type": "Task",
"End": true,
"Parameters": {
"Body": "test data",
"Bucket": "worm-bucket-test",
"Key": "logs.txt",
"ContentMD5": "States.Base64Encode(States.Hash('test data', 'MD5'))" },
"Resource": "arn:aws:states:::aws-sdk:s3:putObject",
"Resource": "arn:aws:states:::aws-sdk:s3:putObject",
"Catch": [ {
"ErrorEquals": [ "States.TaskFailed" ],
"Next": "Wait1Sec"
} ]
},
"Wait1Sec": {
"Type": "Wait",
"Seconds": 1,
"Next": "PutObject"
}
}
}
Unfortunately, I continue to receive the following error:
{
"Error": "S3.S3Exception",
"Cause": "The Content-MD5 you specified was invalid. (Service: S3, Status Code: 400, Request ID: xxx, Extended Request ID: xxx)"
}
I am able to create a Lambda and write code that handles the Content-MD5 upload to S3, but my goal is to have this same functionality going from Step Functions to S3 directly, without having to use a Lambda function. Any help will be much appreciated.
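Two things seem worth checking here, although I haven't verified the full fix. First, because the parameter key is "ContentMD5" rather than "ContentMD5.$", the intrinsic-function string is passed to S3 literally instead of being evaluated. Second, Content-MD5 must be the base64 encoding of the raw 16-byte MD5 digest of the body, whereas States.Hash returns a hex string, so base64-encoding its output would not produce the value S3 expects. For comparison, this is how the header value is normally computed (Python, just to show the expected format):

# What S3 expects in Content-MD5: base64 of the raw MD5 digest of the body,
# not base64 of a hex string.
import base64
import hashlib

body = b"test data"
content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
print(content_md5)  # the value S3 would accept for this exact body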

BigQuery: Routine deployment failing with error "Unknown option: description"

We use Terraform to deploy BigQuery objects (datasets, tables, routines, etc.) to region europe-west2 in GCP. We do this many times a day, and all of a sudden, at "2021-08-18T21:15:44.033910202Z", our deployments started failing when attempting to deploy BigQuery routines. They are all failing with errors of the form:
status: {
code: 3
message: "Unknown option: description"
}
Here is the first log message I can find pertaining to this error (I have redacted project names):
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 3,
"message": "Unknown option: description"
},
"authenticationInfo": {
"principalEmail": "deployer-dev#myadminproject.iam.gserviceaccount.com",
"serviceAccountDelegationInfo": [
{
"firstPartyPrincipal": {
"principalEmail": "deployer-dev#myadminproject.iam.gserviceaccount.com"
}
}
]
},
"requestMetadata": {
"callerIp": "10.51.0.116",
"callerSuppliedUserAgent": "Terraform/0.14.7 (+https://www.terraform.io) Terraform-Plugin-SDK/2.5.0 terraform-provider-google/3.69.0,gzip(gfe)",
"callerNetwork": "//compute.googleapis.com/projects/myadminproject/global/networks/__unknown__",
"requestAttributes": {},
"destinationAttributes": {}
},
"serviceName": "bigquery.googleapis.com",
"methodName": "google.cloud.bigquery.v2.RoutineService.InsertRoutine",
"authorizationInfo": [
{
"resource": "projects/myproject/datasets/p00003818_dp_model",
"permission": "bigquery.routines.create",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "projects/myproject/datasets/p00003818_dp_model/routines/UserProfile_Events_AllCarData_Deployment",
"metadata": {
"routineCreation": {
"routine": {
"routineName": "projects/myproject/datasets/p00003818_dp_model/routines/UserProfile_Events_AllCarData_Deployment"
},
"reason": "ROUTINE_INSERT_REQUEST"
},
"#type": "type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"
}
},
"insertId": "ak27xdbke",
"resource": {
"type": "bigquery_dataset",
"labels": {
"dataset_id": "p00003818_dp_model",
"project_id": "myproject"
}
},
"timestamp": "2021-08-18T21:15:43.109609Z",
"severity": "ERROR",
"logName": "projects/myproject/logs/cloudaudit.googleapis.com%2Factivity",
"receiveTimestamp": "2021-08-18T21:15:44.033910202Z"
}
The fact that this occurred without any changes on our part indicates that this is a problem at the Google end. I also observe that, whilst we witnessed this in a few projects, it occurred first in one project and then a few minutes later in another - that may or may not be helpful information.
Posting here in case anyone else hits this problem and also hoping it might catch the attention of a googler.
UPDATE! I have reproduced the problem using the REST API https://cloud.google.com/bigquery/docs/reference/rest/v2/routines/insert
If I enter a payload that does not include a description, the routine is created successfully. However, if I include a description, which the API reference lists as a valid parameter, the request fails.
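For reference, a rough sketch of driving the same routines.insert repro from Python rather than the API explorer; the project, dataset and routine names below are placeholders, and it assumes application-default credentials are available:

# Repro sketch: insert a routine with and without "description" via the REST API.
# Project/dataset/routine names are placeholders; uses application-default credentials.
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/bigquery"])
session = AuthorizedSession(credentials)

url = ("https://bigquery.googleapis.com/bigquery/v2/"
       "projects/myproject/datasets/p00003818_dp_model/routines")

routine = {
    "routineReference": {
        "projectId": "myproject",
        "datasetId": "p00003818_dp_model",
        "routineId": "repro_routine",
    },
    "routineType": "SCALAR_FUNCTION",
    "language": "SQL",
    "definitionBody": "1",
    # The field that triggers "Unknown option: description" in the failing case:
    "description": "repro description",
}

response = session.post(url, json=routine)
print(response.status_code, response.text)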

Chaostoolkit istio extension hangs when playing experiment

I'm trying to use the Chaos Toolkit Istio extension, and my problem is as follows:
I have an experiment.json file which contains a single probe to retrieve a virtual service. The file looks similar to the following:
{
"version": "1.0.0",
"title": "test",
"description": "N/A",
"tags": []
"secrets": {
"istio": {
"KUBERNETES_CONTEXT": {
"type": "env",
"key": "KUBERNETES_CONTEXT"
}
}
},
"method": [
{
"type": "probe",
"name": get_virtual_service:,
"provider": {
"type": "python",
"module": "chaosistio.fault.probes",
"func": "get_virtual_service",
"arguments": {
"virtual_service_name": "test"
"ns": "test-ns"
}
}
}
]
}
I have set KUBERNETES_CONTEXT and http/https proxy as env vars. My authorisation is using $HOME/.kube/config.
When running the experiment, it validates the file fine and tries to run the probe, but it becomes stuck and just hangs until it times out.
The error I see in the logs is a HTTPSConnectionPool error (failed to establish a new connection, operation timed out).
Am I missing any settings? All help appreciated.
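Not a definitive answer, but since the failure is an HTTPSConnectionPool timeout rather than an authentication error, it may be worth confirming, outside of Chaos Toolkit, that the same environment (including the http/https proxy variables) can reach the Kubernetes API server at all; a proxy intercepting cluster traffic, for example because the API server host is missing from NO_PROXY, would produce exactly this kind of hang. A quick sanity check, assuming the kubernetes Python client is installed:

# Sanity check: can this environment reach the Kubernetes API with the same
# kubeconfig/context and proxy settings that the experiment uses?
import os
from kubernetes import client, config

config.load_kube_config(context=os.environ.get("KUBERNETES_CONTEXT"))
v1 = client.CoreV1Api()
# A timeout here (rather than a 401/403) points at network or proxy settings.
print([ns.metadata.name for ns in v1.list_namespace().items])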

How to pass AWS Lambda error in AWS SNS notification through AWS Step Functions?

I have created an AWS Step Function which triggers a Python Lambda function, terminates without error if the Lambda succeeds, and otherwise publishes to an SNS topic to notify the subscribed users that the Lambda failed. It runs, but the message is a fixed string. The Step Function JSON is as follows:
{
"StartAt": "Lambda Trigger",
"States": {
"Lambda Trigger": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-2:xxxxxxxxxxxx:function:helloworldTest",
"End": true,
"Catch": [
{
"ErrorEquals": [
"States.ALL"
],
"ResultPath": "$.error",
"Next": "Notify Failure"
}
]
},
"Notify Failure": {
"Type": "Task",
"Resource": "arn:aws:states:::sns:publish",
"Parameters": {
"Message": "Batch job submitted through Step Functions failed with the following error, $.error",
"TopicArn": "arn:aws:sns:us-east-2:xxxxxxxxxxxx:lambda-execution-failure"
},
"End": true
}
}
}
The only thing is, I want to append the failure error message to my message string. I tried this as shown above, but it is not working as expected: the mail I receive contains the message text verbatim, including the literal "$.error", rather than the actual error details. How do I go about it?
I could solve the problem using "Error.$": "$.Cause".
The following is a working example of the failure portion of state machine:
"Job Failure": {
"Type": "Task",
"Resource": "arn:aws:states:::sns:publish",
"Parameters": {
"Subject": "Lambda Job Failed",
"Message": {
"Alarm": "Lambda Job Failed",
"Error.$": "$.Cause"
},
"TopicArn": "arn:aws:sns:us-east-2:xxxxxxxxxxxx:Job-Run-Notification"
},
"End": true
}
Hope this helps!
Here is the full version of the code:
{
"Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
"StartAt": "HelloWorld",
"States": {
"HelloWorld": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:StepFunctionTest",
"End": true,
"Catch": [
{
"ErrorEquals": [
"States.ALL"
],
"Next": "NotifyFailure"
}
]
},
"NotifyFailure": {
"Type": "Task",
"Resource": "arn:aws:states:::sns:publish",
"Parameters": {
"Subject": "[ERROR]: Task failed",
"Message": {
"Alarm": "Batch job submitted through Step Functions failed with the following error",
"Error.$": "$.Cause"
},
"TopicArn": "arn:aws:sns:us-east-1:XXXXXXXXXXXXX:Notificaiton"
},
"End": true
}
}
}
This line already appends the exception object to the 'error' path:
"ResultPath": "$.error"
We just need to pass '$' as Message.$ in the SNS task; both the input and the error details will then be sent to SNS.
{
"TopicArn":"${SnsTopic}",
"Message.$":"$"
}
If we don't want the input to the Lambda to be appended to the email, we should either skip ResultPath or have just '$' as ResultPath, so the input object is ignored.
"ResultPath": "$"

Is it reasonable to be concerned an SES object won't be available in S3?

I've set up an SES rule in the following way:
Actions:
1) S3: Saves SES object to an S3 bucket
2) Lambda: Triggers my lambda function for email processing
In my testing, I've always been able to retrieve my SES object from the bucket using the messageID in the very first line of code. I'm then able to parse and read it without issue.
My question is: is it reasonable to be concerned that the SES object may not always be immediately available? I'm considering adding error handling in case the object isn't there - basically to wait half a second and try again until the Lambda times out. But I don't want to complicate the code if this is not a reasonable concern, handled by boto3, etc. Thoughts?
In your case, it is best to use only one S3 action, configured with a notification to an SNS topic, and have your Lambda subscribe to this topic.
Your Lambda will receive an SNS event containing a stringified SES event in the message:
{
"Records": [
{
"EventSource": "aws:sns",
"EventVersion": "1.0",
...
"Sns": {
"Type": "Notification",
"MessageId": "...",
"TopicArn": "...",
"Subject": "Amazon SES Email Receipt Notification",
"Message": "<STRINGIFIED SES EVENT>",
...
}
}
]
}
If you parse the Message, you will get something like this:
{
"notificationType": "Received",
"mail": {
"timestamp": "...",
"source": "...",
"messageId": "...",
"destination": [
...
],
"headersTruncated": false,
"headers": [
...
],
"commonHeaders": {
"returnPath": "...",
"from": [
"..."
],
"date": "...",
"to": [
...
],
"messageId": "...",
"subject": "..."
}
},
"receipt": {
...
"action": {
"type": "S3",
"topicArn": "...",
"bucketName": "<YOUR_BUCKET>",
"objectKey": "<YOUR_OBJECT_KEY>"
}
}
}
where you will find the exact reference to the uploaded object in your bucket (receipt.action.bucketName and receipt.action.objectKey).
With this setup, it is reasonable to consider that the object is available by the time your Lambda is triggered.
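For completeness, a minimal sketch of what the subscribed Lambda's handler could look like under this setup; the parsing step at the end is left as a placeholder:

# Minimal handler sketch for a Lambda subscribed to the SNS topic the S3 action notifies.
# The SES message has already been written to the bucket by the time this runs.
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event["Records"]:
        ses_event = json.loads(record["Sns"]["Message"])
        action = ses_event["receipt"]["action"]
        obj = s3.get_object(Bucket=action["bucketName"], Key=action["objectKey"])
        raw_email = obj["Body"].read()
        # ... parse raw_email here ...
        print(ses_event["mail"]["messageId"], len(raw_email))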