I'm writing a CloudFormation template to create and configure an S3 bucket.
As part of the configuration, I'm adding a Lambda trigger for two events. It's the same Lambda in both cases.
How can I write this? Should I duplicate the section, or can I map the two events to the same behavior?
Here is the code:
MyBucket:
  Condition: CreateNewBucket
  Type: AWS::S3::Bucket
  ## ...Bucket config comes here... ##
  ## The interesting part ##
  NotificationConfiguration:
    LambdaConfigurations:
      - Event: 's3:ObjectCreated:Put'
        Function: My Lambda name
        Filter:
          S3Key:
            Rules:
              - Name: prefix
                Value: 'someValue'
Is there an option to write:
LambdaConfigurations:
  - Events: ['s3:ObjectCreated:Put', 's3:ObjectCreated:Post']
Or maybe
LambdaConfigurations:
  - Event: 's3:ObjectCreated:Put'
  - Event: 's3:ObjectCreated:Post'
...
Or do I need to copy-paste the block twice?
I can't find an example for this behavior.
Sorry if this is a trivial question, I'm new to CloudFormation.
Thanks!
It depends on exactly what you want to trigger the Lambda function. If you want the notification to fire on all object-create events (s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, and s3:ObjectCreated:CompleteMultipartUpload), you can simply use 's3:ObjectCreated:*' as the value for Event.
If you want Put and Post specifically, you'll need to supply two event configurations, one for Put and another for Post. The CloudFormation parser will accept multiple Event elements in a single LambdaConfiguration, but only the last one to appear is applied to the event notification.
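For example, a minimal sketch of the two-configuration approach (the function reference and filter value are placeholders for your own):

NotificationConfiguration:
  LambdaConfigurations:
    - Event: 's3:ObjectCreated:Put'
      Function: !GetAtt MyLambda.Arn   # placeholder logical ID
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: 'someValue'
    - Event: 's3:ObjectCreated:Post'
      Function: !GetAtt MyLambda.Arn
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: 'someValue'

With the wildcard instead, a single configuration with Event: 's3:ObjectCreated:*' covers all four create events.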
This is an interesting divergence between console/API functionality and CloudFormation/CDK functionality. The PutBucketNotificationConfiguration API operation accepts LambdaFunctionConfiguration arguments that support multiple individual events. PutBucketNotificationConfiguration replaces PutBucketNotification, which accepted CloudFunctionConfiguration with its now-deprecated Event element. So I wonder if CloudFormation still refers to the older API operation.
There are several references to $[...] syntax in the serverless-plugin-aws-alerts docs: Serverless Framework: Plugins
I understand ${...} variables from the relevant docs: Serverless Framework Variables
But I can't find anything that describes what is happening in the snippet below (taken from the aws-alerts plugin docs linked above):
nameTemplate: $[functionName]-Duration-IMPORTANT-Alarm # Optionally - naming template for the alarms, overwrites globally defined one
prefixTemplate: $[stackName] # Optionally - override the alarm name prefix, overwrites globally defined one
It's described here, under Custom Naming: https://www.serverless.com/plugins/serverless-plugin-aws-alerts
You can define a custom naming template for the alarms.
The nameTemplate property under alerts configures the naming template for all the alarms, while placing nameTemplate under an alarm definition configures (overwrites) it for that specific alarm only. The naming template provides interpolation capabilities; the supported placeholders are:
$[functionName] - function name (e.g. helloWorld)
$[functionId] - function logical id (e.g. HelloWorldLambdaFunction)
$[metricName] - metric name (e.g. Duration)
$[metricId] - metric id (e.g. BunyanErrorsHelloWorldLambdaFunction for the log based alarms, $[metricName] otherwise)
Note: All the alarm names are prefixed with stack name (e.g. fooservice-dev).
So:

alerts:
  nameTemplate: $[functionName]-$[metricName]-Alarm # configures names for all alarms

alerts:
  alarms:
    definitions:
      customAlarm:
        nameTemplate: $[functionName]-Duration-IMPORTANT-Alarm # configures (overwrites) it for that specific alarm only
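To illustrate with the placeholders documented above: under the global template, the Duration alarm for a helloWorld function would be named helloWorld-Duration-Alarm and, per the note about prefixes, end up as something like fooservice-dev-helloWorld-Duration-Alarm.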
I have a requirement where I need to set the assignees of all the user tasks in a process instance as soon as the instance is created, based on the candidate group set on each user task.
I tried getting the user tasks using this:
Collection<UserTask> userTasks = execution.getBpmnModelInstance().getModelElementsByType(UserTask.class);
which is correct in some way, but I am not able to set the assignees. Also, it looks like this would apply to the process definition itself and not the process instance.
Secondly, I tried getting them from a task query, which gives me only the next task and not all the user tasks inside the process.
Please help!
It does not work that way. A process flow can be simplified to "a token moves through the BPMN diagram"; only the current position of the token is relevant. So naturally, the task list only gives you the current task, not what could happen afterwards, which you cannot know anyway: what if a gateway continues differently based on the task outcome? So stop playing with the BPMN meta model and focus on the runtime.
You have two choices to dynamically assign user tasks:
1.) In the modeler, instead of hard-assigning the task to "a-user", use an expression like ${taskAssignment.assignTask(task)}, where "taskAssignment" is a bean that provides a method returning the user as a String.
2.) Add a taskListener on "create" to the task and set the assignee in the listener (see the sketch after this list).
For option 2, you can use the Camunda Spring Boot events (or the (outdated) camunda-bpm-reactor extension) to register one central component rather than adding a listener to every task.
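A minimal sketch of both options, assuming Spring; the bean name matches the expression above, and the candidate-group lookup and assignee values are placeholders:

import org.camunda.bpm.engine.delegate.DelegateTask;
import org.camunda.bpm.engine.delegate.TaskListener;
import org.springframework.stereotype.Component;

// Option 1: bean referenced from the modeler as ${taskAssignment.assignTask(task)}.
@Component("taskAssignment")
public class TaskAssignment {

    // Returns the user id to assign; the candidate-group lookup is a placeholder.
    public String assignTask(DelegateTask task) {
        return resolveUserForCandidateGroup(task);
    }

    private String resolveUserForCandidateGroup(DelegateTask task) {
        return "demo"; // placeholder for your own resolution logic
    }
}

// Option 2: a task listener registered on the "create" event of the user task.
class AssignOnCreateListener implements TaskListener {
    @Override
    public void notify(DelegateTask delegateTask) {
        delegateTask.setAssignee("demo"); // placeholder assignee
    }
}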
I am unable to get the TaskDefinitionArn into a variable.
I am trying to do the below:
cloudwatchTriggerForLambdaFunction:
  Type: 'AWS::Events::Rule'
  Properties:
    Description: 'Trigger Lambda function according to the specified schedule'
    ScheduleExpression: !Ref CronExpression
    State: ENABLED
    Targets:
      - Arn: !Sub '${LambdaFunction.Arn}'
        Id: cloudwatchTriggerForLambdaFunction
      - Arn: !GetAtt FargateLauncher.Arn
        Id: fargate-launcher
        Input:
          !Sub |
            {
              taskDefinition: "${TaskDefinitionArn}"
            }
but the above throws an error like the one below:
An error occurred (ValidationError) when calling the CreateStack operation: Template error: instance of Fn::Sub references invalid resource attribute TaskDefinitionArn.
I cannot get the value of TaskDefinitionArn from a parameter, as the task definition is created at runtime, so I must get it as shown above. Please suggest a solution. Thanks in advance.
I had to change my approach a bit.
I am now using a direct CloudWatch trigger on the Fargate task instead of running a Lambda function to trigger the Fargate task.
Done that way, this query no longer applies.
If you do try this way, you can construct the ARN manually, like:
arn:aws:ecs:${AWS::Region}:${AWS::AccountId}:
Since your resource's logical name is TaskDefinition, you should reference it by that name (for AWS::ECS::TaskDefinition, Ref returns the task definition ARN):
{
  taskDefinition: "${TaskDefinition}"
}
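So the second target would become something like this sketch (assuming the task definition's logical ID is TaskDefinition; the key is quoted here to keep the Input valid JSON):

- Arn: !GetAtt FargateLauncher.Arn
  Id: fargate-launcher
  Input:
    !Sub |
      {
        "taskDefinition": "${TaskDefinition}"
      }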
But according to this AWS documentation, the EcsParameters of an ECS event-rule target should be defined as below:
{
  "Group" : String,
  "LaunchType" : String,
  "NetworkConfiguration" : NetworkConfiguration,
  "PlatformVersion" : String,
  "TaskCount" : Integer,
  "TaskDefinitionArn" : String
}
Therefore the key should be TaskDefinitionArn, not taskDefinition. Please have a look at the reference.
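For a rule that targets ECS directly (rather than going through a Lambda), a sketch of such a target might look like the following; the cluster, role, and subnet values are placeholders:

Targets:
  - Arn: !GetAtt MyCluster.Arn                  # placeholder ECS cluster
    Id: fargate-task
    RoleArn: !GetAtt EventsInvokeEcsRole.Arn    # placeholder role allowing events.amazonaws.com to run the task
    EcsParameters:
      TaskDefinitionArn: !Ref TaskDefinition    # Ref returns the ARN
      TaskCount: 1
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsVpcConfiguration:
          Subnets:
            - subnet-0123456789abcdef0          # placeholder subnet
          AssignPublicIp: ENABLED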
I agree, but I am using the approach linked below to run a Fargate task with a CloudWatch trigger, where the author is using TaskDefinitionArn as a parameter, which I don't want to do. I want to get the value of the ARN itself while running my task.
Let me know if you did not get my query.
creating a 'Target' for a cloudwatch event rule via cloudformation for a fargate launchtype task
I need to add some custom headers to every boto3 request that is sent out. Is there a way to manage the connection itself to add these headers?
For boto2, connection.AWSAuthConnection has a method build_base_http_request which has been helpful. I've yet to find an analogous function within the boto3 documentation though.
This is pretty dated but we encountered the same issue, so I'm posting our solution.
I wanted to add custom headers to boto3 for specific requests.
I found this: https://github.com/boto/boto3/issues/2251, and used the event system to add the header:
import boto3

def _add_header(request, **kwargs):
    request.headers.add_header('x-trace-id', 'trace-trace')
    print(request.headers)  # for debug

some_client = boto3.client(service_name=SERVICE_NAME)
event_system = some_client.meta.events
# EVENT_NAME is the service id as it appears in the event name, e.g. 's3'
event_system.register_first('before-sign.EVENT_NAME.*', _add_header)
You can try using a wildcard for all requests:
event_system.register_first('before-sign.*.*', _add_header)
SERVICE_NAME: you can find all the available services here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html
For more information about registering a function to a specific event, see: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/events.html
The answer from @May Yaari is pretty awesome. As for the concern raised by @arainchi:
This works, there is no way to pass custom data to event handlers, currently we have to do it in a non-pythonic way using global variables/queues :( I have opened issue ticket with Boto3 developers for this exact case
Actually, we could leverage Python's support for functional programming, returning a function inside a function (a closure), to get around this.
In the case where we want to add a custom value custom_variable to the header, we could do:
def _register_callback(custom_variable):
    # The inner handler closes over custom_variable, so per-registration
    # data reaches the event handler without globals.
    def _add_header(request, **kwargs):
        request.headers.add_header('header_name_you_want', custom_variable)
    return _add_header

some_client = boto3.client(service_name=SERVICE_NAME)
event_system = some_client.meta.events
event_system.register_first('before-sign.EVENT_NAME.*', _register_callback(custom_variable))
Or, more Pythonically, using a lambda:
def _add_header(request, custom_variable):
    request.headers.add_header('header_name_you_want', custom_variable)

some_client = boto3.client(service_name=SERVICE_NAME)
event_system = some_client.meta.events
event_system.register_first('before-sign.EVENT_NAME.*',
                            lambda request, **kwargs: _add_header(request, custom_variable))
I am coding a custom module that is executed inside a pillar (to set a pillar variable), but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
Using environment variables: it doesn't work, as the execution modules do not seem to have access to the shell environment of the salt command.
Using command-line parameters: I don't know if this is even possible, as I couldn't find any documentation.
Using an additional pillar on the command line: it doesn't work, as the execution module runs during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both are empty).
Reading from stdin: does not work from a custom module.
Using a file to read the info: I didn't even try this, because it is not an option for me for security reasons. I don't want the information stored.
Any ideas whether, and how, this is possible?
Thanks a lot!
By:
a custom module that is executed inside a pillar (to set a pillar variable)
do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
      - argumentA
      - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
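For illustration, a minimal sketch of what a matching external pillar module might look like (the module and key names are hypothetical; whatever dictionary it returns is merged into the minion's pillar):

# _pillar/example_a.py, synced to the master
def ext_pillar(minion_id, pillar, *args, **kwargs):
    # For '- example_a: some argument' in the config above, Salt calls this
    # with args == ('some argument',). minion_id is always passed in.
    return {'my_param': args[0] if args else None}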
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox