What is this syntax: $[] - it’s referenced in Serverless Framework docs but not explained in the Variables section - serverless-framework

There are several references using $[...] syntax in the serverless-plugin-aws-alerts docs: Serverless Framework: Plugins
I understand about ${…} variables from the relevant docs: Serverless Framework Variables
But I can’t find anything to describe what is happening in the below code snippet (taken from the aws-alerts plugin docs linked above)
nameTemplate: $[functionName]-Duration-IMPORTANT-Alarm # Optionally - naming template for the alarms, overwrites globally defined one
prefixTemplate: $[stackName] # Optionally - override the alarm name prefix, overwrites globally defined one

It's described here, under Custom Naming: https://www.serverless.com/plugins/serverless-plugin-aws-alerts
You can define a custom naming template for the alarms.
nameTemplate property under alerts configures naming template for all the alarms, while placing nameTemplate under alarm definition configures (overwrites) it for that specific alarm only. Naming template provides interpolation capabilities, where supported placeholders are:
$[functionName] - function name (e.g. helloWorld)
$[functionId] - function logical id (e.g. HelloWorldLambdaFunction)
$[metricName] - metric name (e.g. Duration)
$[metricId] - metric id (e.g. BunyanErrorsHelloWorldLambdaFunction for the log based alarms, $[metricName] otherwise)
Note: All the alarm names are prefixed with stack name (e.g. fooservice-dev).
So
alerts:
  nameTemplate: $[functionName]-$[metricName]-Alarm # configures names for all alarms

alerts:
  alarms:
    definitions:
      customAlarm:
        nameTemplate: $[functionName]-Duration-IMPORTANT-Alarm # configures (overwrites) it for that specific alarm only
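For orientation, a minimal serverless.yml sketch (the function name and handler below are illustrative, not from the plugin docs) showing where these templates sit: the alerts block lives under custom:, so the question's prefixTemplate and nameTemplate both go there.

plugins:
  - serverless-plugin-aws-alerts

custom:
  alerts:
    prefixTemplate: $[stackName]                        # overrides the default stack-name prefix
    nameTemplate: $[functionName]-$[metricName]-Alarm   # applies to every alarm unless a definition overrides it

functions:
  helloWorld:
    handler: handler.hello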

Related

Have the same lambda configurations for multiple events in CloudFormation

I'm writing CloudFormation code to create and configure an S3 bucket.
As part of the configuration, I'm adding a Lambda trigger for two events. It's the same Lambda.
How can I write this? Should I duplicate the section, or can I map the two events to the same behavior?
Here is the code
MyBucket:
  Condition: CreateNewBucket
  Type: AWS::S3::Bucket
  ## ...Bucket Config comes here... ##
  ## The interesting part ##
  NotificationConfiguration:
    LambdaConfigurations:
      - Event: 's3:ObjectCreated:Put'
        Function: My Lambda name
        Filter:
          S3Key:
            Rules:
              - Name: prefix
                Value: 'someValue'
Is there an option to write:
LambdaConfigurations:
  - Events: ['s3:ObjectCreated:Put', 's3:ObjectCreated:Post']
Or maybe
LambdaConfigurations:
  - Event: 's3:ObjectCreated:Put'
  - Event: 's3:ObjectCreated:Post'
  ...
Or do I need to copy-paste the block twice?
I can't find an example for this behavior.
Sorry if this is a trivial question, I'm new to CloudFormation.
Thanks!
It depends on exactly what you want to trigger the lambda function. If you want the event to fire on all object create events (s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, and s3:ObjectCreated:CompleteMultipartUpload), you can simply use 's3:ObjectCreated:*' as the value for Event.
If you want Put and Post specifically, you'll need to supply two event configurations, one for Put and another for Post. The CloudFormation parser will accept multiple Event elements in a LambdaConfiguration but only the last one to appear is applied to the event notification.
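For example, a sketch of the two-configuration approach (MyLambda is a placeholder resource; Function takes the Lambda function's ARN, and the filter values just mirror the question):

NotificationConfiguration:
  LambdaConfigurations:
    - Event: 's3:ObjectCreated:Put'
      Function: !GetAtt MyLambda.Arn
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: 'someValue'
    - Event: 's3:ObjectCreated:Post'
      Function: !GetAtt MyLambda.Arn
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: 'someValue'
    # or, to catch every create event with a single entry:
    # - Event: 's3:ObjectCreated:*'
    #   Function: !GetAtt MyLambda.Arn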
This is an interesting divergence in console/API functionality vs CloudFormation/CDK functionality. The PutBucketNotificationConfiguration API operation accepts LambdaFunctionConfiguration arguments that support multiple individual events. PutBucketNotificationConfiguration replaces PutBucketNotification, which accepted CloudFunctionConfiguration, which has a deprecated Event element. So I wonder if CloudFormation still refers to the older API operation.

Azure App Configuration and IOptions|Snapshot|Monitor pattern

If you use the IOptions pattern, i.e. the typed-settings approach, how can you have a dynamic naming convention for parameters in App Configuration (AC)? Let's say we have three environments (test, stage and prod) and in AC we would like a naming convention for parameters such as:
<environment>:<application name>:<param name>
Is that possible to achieve? When I tested it, there seems to be some "behind the scenes" mapping based on the IOptions entity name and the appsettings.json structure. Can I override this behavior to get a more dynamic parameter naming convention, based on an environment parameter (test|stage|prod), an environment parameter for the service name, and a more generic naming convention in the IOptions entity/appsettings files for all parameters that should be stored centrally/dynamically?
Thanks!
The naming convention you plan to use will work; however, I would recommend naming config keys as below and using labels in App Configuration for environments.
<application name>:<param name>
For example,
Key                  Value   Label
app1:settings:debug  true    dev
app1:settings:debug  true    stage
app2:settings:color  yellow  test
Your application can then load only the configuration that is relevant to it (app1 vs. app2) and to the environment where it runs (dev/stage/test etc.) by using key filters and label filters. You can find more details in https://learn.microsoft.com/en-us/azure/azure-app-configuration/concept-key-value.
Here is a tutorial that shows how to use IOptions with App Configuration in an ASP.NET Core app:
https://learn.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-aspnet-core
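As a rough sketch (not from the linked tutorial; the app1 prefix, the Settings class, the settings section name and the APP_CONFIG_CONNECTION variable are all placeholders), the key filter, label filter and prefix trimming could be wired up like this in Program.cs:

// requires the Microsoft.Extensions.Configuration.AzureAppConfiguration package
var builder = WebApplication.CreateBuilder(args);
var env = builder.Environment.EnvironmentName.ToLowerInvariant(); // e.g. "dev", "stage", "test"

builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION"))
           .Select("app1:*", env)      // key filter + label filter: only app1's keys for this environment
           .TrimKeyPrefix("app1:");    // strip the app prefix so keys line up with the appsettings.json layout
});

// The IOptions binding stays generic; it never needs to know the environment or app prefix.
builder.Services.Configure<Settings>(builder.Configuration.GetSection("settings"));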

What is "hellostepfunc1" in the serverless documenation for setup AWS stepfunctions?

In this documentation from the Serverless website - How to manage your AWS Step Functions with Serverless and GitHub - serverless-step-functions - we can find the word hellostepfunc1: in the serverless.yml file. I don't understand what it is, and I can't find any reference to it, even after the state machine was created in AWS.
If I delete it I get the following error:
Cannot use 'in' operator to search for 'role' in myStateMachine
But if I change its name to someName, for example, I get no error and the state machine works fine.
I assume it is only an identifier, but I'm not sure.
Where can I find a reference to it?
This is specific to the library you are using and how it names the state machine being created, based upon whether or not the name: field is provided under hellostepfunc1:.
Have a look at the test cases here and here to understand better.
In short, a .yaml like
stateMachines:
  hellostepfunc1:
    definition:
      Comment: 'comment 1'
      .....
results in a state machine named hellostepfunc1StepFunctionsStateMachine, as no name was specified.
Whereas for a .yaml like
stateMachines:
  hellostepfunc1:
    name: 'alpha'
    definition:
      Comment: 'comment 1'
      .....
the name of the state machine is alpha, as a name was specified.
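For context, hellostepfunc1 is simply the arbitrary key you pick for a state machine under the plugin's stepFunctions section. A rough serverless.yml sketch (the definition body here is only illustrative):

plugins:
  - serverless-step-functions

stepFunctions:
  stateMachines:
    hellostepfunc1:          # arbitrary identifier for this state machine
      name: 'alpha'          # optional; without it the key above is used to derive the name
      definition:
        Comment: 'comment 1'
        StartAt: HelloWorld
        States:
          HelloWorld:
            Type: Pass
            End: true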

How do I find the version and authority of a Perl 6 module?

In Bar.pm, I declare a class with an authority (author) and a version:
class Bar:auth<Camelia>:ver<4.8.12> {
}
If I use it in a program, how do I see which version of a module I'm using, who wrote it, and how the module loader found it? As always, links to documentation are important.
This question was also asked on perl6-users but died before a satisfactory answer (or links to docs) appeared.
Another wrinkle in this problem is that many people aren't adding that information to their class or module definitions. It shows up in the META.json file but not the code.
(Probably not a satisfying answer, because the facts of the matter are not very satisfying, especially regarding the state of the documentation, but here it goes...)
If the module or class was versioned directly in the source code à la class Bar:auth<Camelia>:ver<4.8.12>, then any code that imports it can introspect it:
use Bar;
say Bar.^ver; # v4.8.12
say Bar.^auth; # Camelia
# ...which is short for:
say Bar.HOW.ver(Bar); # v4.8.12
say Bar.HOW.auth(Bar); # Camelia
The ver and auth methods are provided by:
Metamodel::ClassHOW (although that documentation page doesn't mention them yet)
Metamodel::ModuleHOW (although that documentation page doesn't exist at all yet)
Unfortunately, I don't think the meta-object currently provides a way to get at the source path of the module/class.
By manually going through the steps that use and require take to load compilation units, you can at least get at the prefix path (i.e. which location from $PERL6LIB or use lib or -I etc. it was loaded from):
my $comp-spec = CompUnit::DependencySpecification.new: short-name => 'Bar';
my $comp-unit = $*REPO.resolve: $comp-spec;
my $comp-repo = $comp-unit.repo;
say $comp-repo.path-spec; # file#/home/smls/dev/lib
say $comp-repo.prefix; # "/home/smls/dev/lib".IO
$comp-unit is an object of type CompUnit.
$comp-repo is a CompUnit::Repository::FileSystem.
Neither of those documentation pages exists yet, and $*REPO is only briefly mentioned in the list of dynamic variables.
If the module is part of a properly set-up distribution, you can get at the meta-info defined in its META6.json (as posted by Lloyd Fournier in the mailing list thread you mentioned):
if try $comp-unit.distribution.meta -> %dist-meta {
    say %dist-meta<ver>;
    say %dist-meta<auth>;
    say %dist-meta<license>;
}

Salt: Pass parameters to custom module executed inside a pillar

I am coding a custom module that is executed inside a pillar (to set a pillar variable) but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
Using environment variables: it doesn't work, as it seems the execution modules do not have access to the shell environment of the salt command.
Using command-line parameters: I don't know if it is even possible, as I couldn't find any documentation.
Using an additional pillar on the command line: it doesn't work, as the execution module is executed during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both are empty).
Reading from stdin: does not work from a custom module.
Using a file to read the info: I didn't even try this, because it is not an option for me for security reasons; I don't want the information stored.
Any ideas on whether or how this is possible?
Thanks a lot!
By:
a custom module that is executed inside a pillar (to set a pillar variable)
do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
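On the module side, the arguments configured above arrive as plain positional and keyword arguments. A minimal sketch of an external pillar in Python (the module name and returned keys are illustrative; the file would live under your extension_modules/ext_pillar directory):

def ext_pillar(minion_id, pillar, *args, **kwargs):
    # For "- example_b: [argumentA, argumentB]"  ->  args   == ('argumentA', 'argumentB')
    # For "- example_c: {keyA: valueA, ...}"     ->  kwargs == {'keyA': 'valueA', 'keyB': 'valueB'}
    # Whatever dictionary is returned here is merged into the minion's pillar data.
    return {'example': {'minion': minion_id, 'args': list(args), 'kwargs': kwargs}}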
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox