I recently updated to v1.44.0 with the @serverless/enterprise-plugin and am now unable to deploy. I'm simply trying to create a User Pool, but I keep getting this error:
An error occurred: EnterpriseLogAccessIamRole - Policy statement must contain resources. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: dc158686-378c-4d01-97fb-1414d55a735d)
serverless.yml
tenant: [omitted]
app: [omitted]
service: auth
frameworkVersion: ">=1.44.0"
plugins:
  - '@serverless/enterprise-plugin'
provider:
  name: aws
  runtime: nodejs8.10
  region: us-east-1
custom:
  stage: ${opt:stage, self:provider.stage}
  cognito:
    app:
      userPool: ${self:service}-app-user-pool-${self:custom.stage}
      identityPool: AppIdentityPoolDev
resources:
  Resources:
    AppUserPool:
      Type: AWS::Cognito::UserPool
      Properties:
        UserPoolName: ${self:custom.cognito.app.userPool}
        UsernameAttributes:
          - email
        AutoVerifiedAttributes:
          - email
    MobileAppClient:
      Type: AWS::Cognito::UserPoolClient
      Properties:
        ClientName: ${self:service}-mobile-app-client-${self:custom.stage}
        UserPoolId:
          Ref: AppUserPool
        GenerateSecret: true
  Outputs:
    AppUserPool:
      Value:
        Ref: AppUserPool
    MobileAppClient:
      Value:
        Ref: MobileAppClient
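A plausible cause, given that this serverless.yml defines resources but no functions block: the enterprise plugin generates the EnterpriseLogAccessIamRole policy from the service's function log groups, so with zero functions the policy's Resource list can come out empty, which IAM rejects as MalformedPolicyDocument. A hedged workaround is to give the service at least one function (the handler name below is hypothetical):

functions:
  placeholder:
    handler: handler.hello # hypothetical no-op handler; gives the generated
                           # log-access policy at least one log group to reference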
When I send a GET request to the AWS API Gateway URL "https://blablabla.execute-api.us-east-1.amazonaws.com/dev/crs/blablabla.png" or the custom domain URL "devblablabla.bla.com" via browser or Postman, I receive a 200 response with the "X-Cache: Miss from cloudfront" header.
Do you have any idea how I can rewrite the serverless.yml file to receive a 200 response with an "X-Cache: Hit" header?
This is the configuration that I deploy:
# serverless.yml
service: s3-blablabla-service
provider:
  name: aws
  stage: dev
  region: us-east-1
  environment:
    SERVICE_NAME: ${self:service}
  apiGateway:
    binaryMediaTypes: "*/*"
plugins:
  - serverless-apigateway-service-proxy
  - serverless-domain-manager
  - serverless-finch
custom:
  c3launchBucketName: "blabla-pl-${self:provider.stage}"
  c3scormBucketName: "blabla-crs-${self:provider.stage}"
  domainName: "${self:provider.stage}blablabla.bla.com" # Change this to your domain.
  basePath: "" # This will be prefixed to all routes
  apiGatewayServiceProxies:
    - s3:
        path: /pl/{myKey+} # use path param
        method: get
        action: GetObject
        bucket:
          # ${self:custom.c3launchBucketName}
          Ref: S3Bucket
        key:
          pathParam: myKey
        requestParameters:
          "integration.request.header.cache-control": "'public, max-age=31536000, immutable'"
    - s3:
        path: /crs/{myKey+} # use path param
        method: get
        action: GetObject
        bucket:
          # ${self:custom.c3scormBucketName}
          Ref: S3ScormBucket
        key:
          pathParam: myKey
        requestParameters:
          "integration.request.header.cache-control": "'public, max-age=31536000, immutable'"
  customDomain:
    domainName: ${self:custom.domainName}
    basePath: ${self:custom.basePath}
    stage: ${self:provider.stage}
    createRoute53Record: true
    autoDomain: true
  client:
    bucketName: ${self:custom.c3launchBucketName}
resources:
  Resources:
    S3Bucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.c3launchBucketName}
    S3ScormBucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.c3scormBucketName}
After the deployment I receive this result:
endpoints:
GET - https://blablabla.execute-api.us-east-1.amazonaws.com/dev/pl/{myKey+}
GET - https://blablabla.execute-api.us-east-1.amazonaws.com/dev/crs/{myKey+}
Service deployed to stack s3-blablabla-service-dev
Serverless Domain Manager:
Domain Name: devblablabla.bla.com
Target Domain: abrakadabra.cloudfront.net
Hosted Zone Id: BARBARBAR
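Two things worth noting, both hedged. First, "integration.request.header.cache-control" maps a header onto the request API Gateway sends to S3, not onto the response returned to the client; S3 serves whatever Cache-Control is stored as object metadata. Second, as far as I know, the CloudFront distribution fronting an edge-optimized endpoint or custom domain terminates TLS and routes but does not cache, so an "X-Cache: Hit" generally requires putting your own CloudFront distribution in front. A quick check of what the proxy actually returns, plus setting the metadata at upload time (URL, bucket, and key below are the question's placeholders):

# Dump the response headers for one object
curl -s -D - -o /dev/null "https://devblablabla.bla.com/crs/blablabla.png" | grep -iE 'x-cache|cache-control'

# Store Cache-Control on the object itself so S3 returns it
aws s3 cp blablabla.png "s3://blabla-crs-dev/blablabla.png" \
  --cache-control "public, max-age=31536000, immutable"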
Transform: AWS::Serverless-2016-10-31
Description: >
  patientcheckout
  Sample SAM Template for patientcheckout

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 20
    Runtime: java11
    #Architectures:
    #  - x86_64
    MemorySize: 512

Resources:
  PatientCheckoutBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}
  PatientCheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: patientcheckout
      Handler: com.harsha.aws.lambda.patientcheckout.PatientCheckoutLambda::handler
      Policies:
        - S3ReadPolicy:
            BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}
      Events:
        S3Event:
          Type: S3
          Properties:
            Bucket: !Ref PatientCheckoutBucket
            Events: s3:ObjectCreate:*
The event is not supported for notifications (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument)
I get this error when I run the code with the SAM CLI. sam build succeeds, but the S3 bucket creation fails.
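The error message points at the event name: S3 notification event types are spelled s3:ObjectCreated:* (with a trailing "d"). A likely fix, keeping the rest of the template unchanged:

      Events:
        S3Event:
          Type: S3
          Properties:
            Bucket: !Ref PatientCheckoutBucket
            Events: s3:ObjectCreated:* # "s3:ObjectCreate:*" is not a valid event type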
frameworkVersion: '2'
plugins:
  - serverless-step-functions
  - serverless-python-requirements
  - serverless-parameters
  - serverless-pseudo-parameters
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'}
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
package:
  exclude:
    - node_modules/**
    - venv/**
# Lambda functions
functions:
  generateAlert:
    handler: handler.generateAlert
  generateData:
    handler: handler.generateDataHandler
    timeout: 600
  approveDenied:
    handler: handler.approveDenied
    timeout: 600
stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:provider.stage}":
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: ["ETLStepFunctionLogGroup", Arn]
      name: "claims-etl-and-insight-generation-${self:provider.stage}"
      definition:
        Comment: "${self:provider.stage} ETL Workflow"
        StartAt: RawQualityJob
        States:
          # Raw Data Quality Check Job Start
          RawQualityJob:
            Type: Task
            Resource: arn:aws:states:::glue:startJobRun.sync
            Parameters:
              JobName: "data_quality_v2_${self:provider.stage}"
              Arguments:
                "--workflow-name": "${self:provider.stage}-Workflow"
                "--dataset_id.$": "$.datasetId"
                "--client_id.$": "$.clientId"
            Next: DataQualityChoice
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertDataQuality
          # End Raw Data Quality Check Job
          DataQualityChoice:
            Type: Task
            Resource:
              Fn::GetAtt: [approveDenied, Arn]
            Next: Is Approved ?
          Is Approved ?:
            Type: Choice
            Choices:
              - Variable: "$.quality_status"
                StringEquals: "Denied"
                Next: FailState
            Default: HeaderLineJob
          FailState:
            Type: Fail
            Cause: "Denied status"
          # Header Line Job Start
          HeaderLineJob:
            Type: Parallel
            Branches:
              - StartAt: HeaderLineIngestion
                States:
                  HeaderLineIngestion:
                    Type: Task
                    Resource: arn:aws:states:::glue:startJobRun.sync
                    Parameters:
                      JobName: headers_lines_etl_rs_v2
                      Arguments:
                        "--workflow-name.$": "$.Arguments.--workflow-name"
                        "--dataset_id.$": "$.Arguments.--dataset_id"
                        "--client_id.$": "$.Arguments.--client_id"
                    End: True
                    Retry:
                      - ErrorEquals: [States.ALL]
                        MaxAttempts: 2
                        IntervalSeconds: 10
                        BackoffRate: 5
                    Catch:
                      - ErrorEquals: [States.ALL]
                        Next: GenerateErrorAlertHeaderLine
            End: True
          # Header Line Job End
          GenerateErrorAlertDataQuality:
            Type: Task
            Resource:
              Fn::GetAtt: [generateAlert, Arn]
            End: true
resources:
  Resources:
    # Cloudwatch Log
    ETLStepFunctionLogGroup:
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"
This is what my serverless.yml file looks like.
When I run the command:
sls deploy --stage staging
It shows:
Type Error ----------------------------------------------
TypeError: Cannot read property 'stage' of undefined
at Variables.getValueFromOptions (/snapshot/serverless/lib/classes/Variables.js:648:37)
at Variables.getValueFromSource (/snapshot/serverless/lib/classes/Variables.js:579:17)
at /snapshot/serverless/lib/classes/Variables.js:539:12
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.4.0
Framework Version: 2.30.3 (standalone)
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.4
How can I fix this? I have tried different versions of serverless.
There is an error in the yamlParser file, which is provided by serverless-step-functions.
Above is my serverless config file.
It looks like a $ sign is missing from your provider -> stage?
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'} # $ sign is missing?
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
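A quick way to confirm how the variables resolve without deploying is serverless print, which renders the fully resolved configuration for a given stage:

sls print --stage staging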
I am trying to deploy a sample.war application on an EC2 instance at launch time. That is, when an instance is launched, the application should be deployed on it automatically using cfn-init and Metadata. I added a user with a policy and authentication, with no luck. If I wget the S3 path directly, the file downloads fine. Below is my script. What am I missing, or is there another way to do this?
---
AWSTemplateFormatVersion: 2010-09-09
Description: Test QA Template
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref AMIIdParam
      InstanceType: !Ref InstanceType
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              java-1.8.0-openjdk.x86_64: []
              tomcat: []
              httpd.x86_64: []
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true
          files:
            /usr/share/tomcat/webapps/sample.zip:
              source: https://s3.amazonaws.com/mybucket/sample.zip
              mode: '000500'
              owner: tomcat
              group: tomcat
              authentication: S3AccessCreds
      AWS::CloudFormation::Authentication:
        S3AccessCreds:
          type: 'S3'
          accessKeyId: !Ref HostKeys
          secretKey:
            Fn::GetAtt:
              - HostKeys
              - SecretAccessKey
          buckets: !Ref BucketName
  CfnUser:
    Type: AWS::IAM::User
    Properties:
      Path: '/'
      Policies:
        - PolicyName: 'S3Access'
          PolicyDocument:
            Statement:
              - Effect: 'Allow'
                Action: s3:*
                Resource: '*'
  HostKeys:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref CfnUser
I was unable to reproduce this using the following template:
---
AWSTemplateFormatVersion: 2010-09-09
Description: Test QA Template
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-08589eca6dcc9b39c
      InstanceType: t2.micro
      KeyName: default
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          /opt/aws/bin/cfn-init -s ${AWS::StackId} --resource MyInstance --region ${AWS::Region}
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              java-1.8.0-openjdk.x86_64: []
              tomcat: []
              httpd.x86_64: []
          services:
            sysvinit:
              httpd:
                enabled: true
                ensureRunning: true
          files:
            /usr/share/tomcat/webapps/sample.zip:
              source: https://s3.amazonaws.com/mybucket/sample.zip
              mode: '000500'
              owner: tomcat
              group: tomcat
(In other words, use of the above template allowed me to install a sample.zip file using cfn-init.)
Thus there is something permissions-related in the way you're accessing the S3 bucket.
Suffice to say, it is bad practice to use access keys. Have a look at this document on best practices: assign an IAM Role to the EC2 instance and then add a bucket policy that grants appropriate access to that role (a sketch follows below).
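A minimal sketch of that role-based setup (logical names are illustrative, and the bucket name mirrors the question's placeholder; an inline role policy is shown for brevity, though a bucket policy keyed to the role's ARN works similarly). The instance then references the profile via IamInstanceProfile, and AWS::CloudFormation::Authentication can use type: S3 with roleName instead of access keys:

  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: S3Read
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: s3:GetObject
                Resource: arn:aws:s3:::mybucket/* # placeholder bucket from the question
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref InstanceRole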
I created a CloudFormation template to create a bucket with a notification. The following is the code:
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  CBRS3ToS3IADelay:
    Description: Number of days before an S3 object is transitioned from S3 to S3-IA
    Type: Number
    Default: 365
  CBRS3ToGlacierDelay:
    Description: Number of days before an S3-IA object is transitioned from S3-IA to Glacier.
    Type: Number
    Default: 1460
  CBRBucketName:
    Description: S3 bucket name
    Type: String
    Default: "my-bucket-test0011"
Resources:
  CBRS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Ref: CBRBucketName
      AccessControl: Private
      LifecycleConfiguration:
        Rules:
          - Id: CbrCertReportGlacierArchiveRule
            Status: Enabled
            Transitions:
              - StorageClass: STANDARD_IA
                TransitionInDays: !Ref CBRS3ToS3IADelay
              - StorageClass: GLACIER
                TransitionInDays: !Ref CBRS3ToGlacierDelay
      NotificationConfiguration:
        LambdaConfigurations:
          - Function: "arn:aws:lambda:xxxx:xxxx:function:xxxx"
            Event: "s3:ObjectCreated:Put"
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: ".gz"
      Tags:
        - Key: PRODUCT
          Value: CRAWS
      VersioningConfiguration:
        Status: Enabled
The code works without the notification block, but the template above does not work with the notification. I get the following error:
Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument)
I am able to do this from the console. Can anyone help me fix this issue?
This is late, so I'm mostly answering my own question here (I just managed to fix the same problem): it fails due to a preliminary check that S3 is allowed to invoke the Lambda function. We will need this:
CBRS3BucketCanInvokeFunctionX:
  Type: 'AWS::Lambda::Permission'
  Properties:
    FunctionName: ARN_OF_FUNCTION_X
    Action: 'lambda:InvokeFunction'
    Principal: s3.amazonaws.com
    SourceAccount: !Ref 'AWS::AccountId'
    SourceArn: !Sub 'arn:aws:s3:::${CBRBucketName}'
Your CBRS3Bucket will also need to let the above resource be created first:
CBRS3Bucket:
  Type: AWS::S3::Bucket
  DependsOn: CBRS3BucketCanInvokeFunctionX
Try taking the ".gz" suffix value and putting in just "gz".
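For reference, that suggestion amounts to the following tweak (hedged: S3 suffix matching is a plain string comparison, so both ".gz" and "gz" should match keys ending in .gz; the missing Lambda permission above is the more likely culprit):

            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: "gz"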