I have a resource shared between many Cloud Stacks, and I want Serverless to skip creating the resource if it already exists. I found this YAML configuration for creating a new resource, but I want it to be ignored when the resource already exists. Is there a way to do that?
# you can add CloudFormation resource templates here
resources:
  Resources:
    NewResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: saif-bucket
I found an article about sharing resources between different serverless projects, and it seems that we can just define the resource as S3SharedBucketArtifacts instead of NewResource and that will do the trick.
The code will be:
# you can add CloudFormation resource templates here
resources:
  Resources:
    S3SharedBucketArtifacts:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: saif-bucket
Reference:
How to reuse an AWS S3 bucket for multiple Serverless Framework
I've already created the RDS Proxy and want to attach it with something like this:
provider:
  name: aws
  rds-proxy:
    arn: arn:aws:xxxxxx
Or is there any CloudFormation syntax that I can extend in the Resources section?
In my provider: block I have a VPC declared as vpc:
How can I get the id/ARN/whatever of this VPC in my resources block? I have an AWS CloudFormation resource and I want to pass my VPC's id as the VpcId. Is there some ${self:} thing to get the id?
I tried Ref: vpc but that didn't work.
You won't be able to use Ref, as the resource wasn't created within this stack.
By setting the vpc configuration in either the functions or provider block, you're referencing existing resources.
If the existing resource was created using CloudFormation, you could export it and make it available for import in this stack, but if it was created manually it's not possible.
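If the VPC does live in another CloudFormation stack, a minimal sketch of that export/import pattern (assuming a hypothetical export name shared-vpc-id and a hypothetical security group that needs the VPC id) could look like this:
# In the stack that owns the VPC, assuming its VPC resource is named MyVpc:
Outputs:
  SharedVpcId:
    Value: !Ref MyVpc
    Export:
      Name: shared-vpc-id            # hypothetical export name

# In this service's serverless.yml, pull the exported value in via Fn::ImportValue:
resources:
  Resources:
    AppSecurityGroup:                # hypothetical resource that needs the VPC id
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: Security group in the shared VPC
        VpcId:
          Fn::ImportValue: shared-vpc-id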
I used Serverless Framework to create an S3 bucket and Lambda functions for uploading a file.
In my "Resources" section I have the following to create the S3 bucket:
UploadBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: ${self:custom.s3BucketName}
    AccessControl: Private
    CorsConfiguration:
      CorsRules:
        - AllowedMethods:
            - PUT
            - HEAD
          AllowedOrigins:
            - "*"
          AllowedHeaders:
            - "*"
For some reason, every time I executed the file upload, the PUT request generated a "403 Forbidden" response.
So I started manually editing the CORS configuration XML directly in the S3 console when suddenly and inexplicably it started working. Out of curiosity, I restored the CORS configuration back to the original manually and it still worked. I re-deployed using the sls deploy command and it continues to work.
I'm curious why it is working all of a sudden and, more importantly, what I can do to make sure I don't run into this again.
I've got a Concourse trigger set to detect when a particular file appears in an S3 bucket. Using this resource: https://github.com/concourse/s3-resource. Configuration is like so:
- name: s3-trigger-file
  type: s3
  source:
    bucket: mybucket
    regexp: filename_that_doesnt_change
    access_key_id: {{s3-access-key-id}}
    secret_access_key: {{s3-secret-access-key}}
I use it as a trigger like so:
jobs:
  - name: job-waiting-for-s3-file-to-appear
    public: true
    plan:
      - get: s3-trigger-file
        trigger: true
Seems like an extremely simple configuration. However, when I start the job and put a file in the bucket, I get 'no versions available'.
Any suggestions on how I might proceed in troubleshooting? Thanks ~~
Concourse is not detecting s3-trigger-file. Here are a few potential causes:
- The access-key-id and secret-access-key you are using do not have access to the file.
- The filename in your regexp: is incorrect. Make sure it's an exact match that includes the file extension.
- There is some networking configuration preventing your Concourse from talking to S3. You can make sure this is not the case by fly hijacking into the check container and using the AWS CLI to manually pull the file.
I'm trying to provision my EC2 instances in Elastic Beanstalk with some ssh keys from a private S3 bucket. Here's a snippet of my .ebextensions/.config:
files:
  "/root/.ssh/id_rsa":
    mode: "000400"
    owner: root
    group: root
    source: https://s3-us-west-2.amazonaws.com/<bucket>/<app>_id_rsa
Unfortunately, I'm getting a 403 response from S3. Is there a way to grant access to the EC2 instances using a Security Group? I can't grant each instance access individually as I won't know their IPs before they are scaled. Is there some other way to grant just this Elastic Beanstalk app access? I'm having trouble coming up with a good S3 Bucket Policy...
You can set up an IAM role for S3 access and assign the IAM role to EC2.
IAM Roles for Amazon EC2
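As a rough sketch only (the role, policy, and bucket names below are placeholders, not from the answer above), the role and instance profile could be declared in CloudFormation along these lines:
Resources:
  AppInstanceRole:                       # hypothetical role name
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: ReadPrivateBucket  # hypothetical inline policy granting read access
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: s3:GetObject
                Resource: arn:aws:s3:::<bucket>/*
  AppInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref AppInstanceRole
The instance profile is what actually gets attached to the EC2 instances (for Elastic Beanstalk, via the IamInstanceProfile option).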
According to the Amazon documentation, you need to use a Resources key to add an authentication resource in order to download a private file from an S3 bucket. Here is an example from their website:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        **S3Auth:**
          type: "s3"
          buckets: ["**elasticbeanstalk-us-west-2-123456789012**"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "***aws-elasticbeanstalk-ec2-role***"

files:
  "**/tmp/data.json**":
    mode: "000755"
    owner: root
    group: root
    authentication: "**S3Auth**"
    source: **https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-west-2-123456789012/data.json**
All the text in bold needs to be replaced with content unique to your own environment, except aws-elasticbeanstalk-ec2-role, which is the IAM role created by default for the environment (you can replace it with another IAM role). Once the authentication resource has been defined, you can reuse it on as many files as you need. You can get more information here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-files
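For example, a hypothetical second file entry (not part of the AWS example) can reference the same S3Auth resource:
files:
  "/tmp/other-config.json":   # hypothetical second file
    mode: "000644"
    owner: root
    group: root
    authentication: "S3Auth"  # reuses the authentication resource defined above
    source: https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-west-2-123456789012/other-config.json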
First click on the tab below, then click on the added role, and add the AmazonS3FullAccess policy.
In my case I tried creating a new EC2 role that would include an S3 access policy, but I could not get it working; it seems that by default this role does not get attached to the EC2 instances. I also played around with VPC S3 bucket policies, but that only messed up the bucket and locked me out. The proper solution was to add the S3 access policy to the already existing Elastic Beanstalk role:
aws-elasticbeanstalk-ec2-role
that @chaseadamsio and @tom mentioned, thank you for that.