IdentityPool Creation with CloudFormation - serverless-framework

I'm attempting to follow along with the tutorial at http://serverless-stack.com/chapters/create-a-cognito-identity-pool.html for identity pool creation, and to capture the setup in CloudFormation so that I can easily undo everything when I am done. However, I am having trouble finding any examples that show how to do this effectively using the template syntax. What I currently have is the following:
ScratchUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    UserPoolName: notes-user-pool
ScratchUserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    ClientName: notes-client
    ExplicitAuthFlows: [ADMIN_NO_SRP_AUTH]
    UserPoolId:
      Ref: ScratchUserPool
ScratchIdentityPool:
  Type: AWS::Cognito::IdentityPool
  Properties:
    IdentityPoolName: ScratchIdentityPool
    AllowUnauthenticatedIdentities: false
    CognitoIdentityProviders:
      - ClientId:
          Ref: ScratchUserPoolClient
        ProviderName:
          Ref: ScratchUserPool
The deployment step is failing when it attempts to create the ScratchIdentityPool. I get an error stating that:
An error occurred while provisioning your stack: ScratchIdentityPool -
Invalid Cognito Identity Provider (Service: AmazonCognitoIdentity;
Status Code: 400; Error Code: InvalidParameterException; Request ID:
bc058020-663b-11e7-9f2a-XXXXXXXXXX)
Am I not referencing the Client or Provider name correctly?

Almost immediately after I posted my question, I think I was able to answer it. The problem with my identity pool is that I needed to reference the provider name in the following way:
ScratchIdentityPool:
  Type: AWS::Cognito::IdentityPool
  Properties:
    IdentityPoolName: ScratchIdentityPool
    AllowUnauthenticatedIdentities: false
    CognitoIdentityProviders:
      - ClientId:
          Ref: ScratchUserPoolClient
        ProviderName:
          Fn::GetAtt: [ScratchUserPool, ProviderName]
I needed to use the CloudFormation intrinsic function Fn::GetAtt in order for this to work.
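For what it's worth, plain CloudFormation templates also accept a short-form syntax for the same attribute; some versions of the Serverless Framework's serverless.yml only accept the long Fn::GetAtt form, so treat the short form as optional. A minimal sketch of the two equivalent spellings:

# long form, as used above (safe inside serverless.yml resources)
ProviderName:
  Fn::GetAtt: [ScratchUserPool, ProviderName]

# equivalent short form in a plain CloudFormation template
ProviderName: !GetAtt ScratchUserPool.ProviderName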

Related

How to delete an ElastiCache subnet group with Cloud Custodian?

I want to use Cloud Custodian to clean up some AWS resources (ElastiCache in this case).
However, I get an error when trying to delete ElastiCache subnet groups. According to the Custodian cache.subnet-group documentation, delete is not a valid action for cache.subnet-group.
How do I perform the delete in this case?
My policy file:
policies:
  - name: cleanup-elasticache-subnet-group
    resource: aws.cache-subnet-group
    filters:
      - type: value
        key: "CacheSubnetGroupName"
        op: contains
        value: foo
    actions:
      - type: delete   # !!! delete is not a valid action type !!!
Since Cloud Custodian doesn't natively provide the delete action today, your main options are:
Open a feature request for an aws.cache-subnet-group.delete action.
Use Cloud Custodian to report/notify on subnet groups you need to delete, but handle the actual deletion with external tools.
For what it's worth, you can review open feature requests here. I didn't see any for this action, but it's always good to check!
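If it helps, one way to implement the second option is a report-only policy: keep the filter from the question, drop the unsupported action, and handle the actual deletion externally (for example with the ElastiCache DeleteCacheSubnetGroup API, i.e. aws elasticache delete-cache-subnet-group). A minimal sketch, reusing the policy above:

policies:
  - name: report-elasticache-subnet-group
    resource: aws.cache-subnet-group
    filters:
      - type: value
        key: "CacheSubnetGroupName"
        op: contains
        value: foo
    # no actions block: the run only records the matched subnet groups,
    # which "custodian report" can then export for external cleanup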

Getstream.io throws exception when using the "to" field

I have two flat Feed Groups, main, the primary news feed, and main_topics.
I can make a post to either one successfully.
But when I try to 'cc' the other using the to field, like, to: ["main_topics:donuts"] I get:
code: 17
detail: "You do not have permission to do this, you got this error because there are no policies allowing this request on this application. Please consult the documentation https://getstream.io/docs/"
duration: "0.16ms"
exception: "NotAllowedException"
status_code: 403
Log:
The request didn't have the right permissions or autorization. Please check our docs about how to sign requests.
We're generating user tokens server-side, and the token works to read and write to both groups without to.
// on server
stream_client.user(user.user_id).create({
  name: user.name,
  username: user.username,
});
Post body:
actor: "SU:5f40650ad9b60a00370686d7"
attachments: {images: [], files: []}
foreign_id: "post:1598391531232"
object: "Newsfeed"
text: "Yum #donuts"
time: "2020-08-25T14:38:51.232"
to: ["main_topics:donuts", "main_topics:all"]
verb: "post"
The docs show an example with to: ['team:barcelona', 'match:1'], and say you need to create the feed groups in the panel, but mention nothing about setting up specific permissions to use this feature.
Any idea why this would happen? Note that I'm trying to create the new topics (donuts, all) which don't exist when this post is made. However, the docs don't specify that feeds need to be explicitly created first - maybe that's the missing piece?
If you haven’t already tried creating the feed first, then try that. Besides that, the default permissions restrict a user from posting on another’s feed. I think it’s acceptable to do this if it’s a notification feed but not user or timeline.
You can email Getstream support to change the default permissions, because these are not manageable from the dashboard.
Or you can make this call server side with admin permissions.

YAML code not well formed for S3 Bucket Encryption

Can someone tell me what I am doing wrong here? I am trying to force the S3 bucket to use the KMS key I am referencing, but the YAML format is incorrect. Can you advise? Thanks.
TrailBucket:
  Condition: InternalBucket
  Type: 'AWS::S3::Bucket'
  Properties: {
    BucketEncryption:
      ServerSideEncryptionConfiguration: *********
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: !Ref Key
  }
TrailBucketPolicy:
  Condition: InternalBucket
  Type: 'AWS::S3::BucketPolicy'
  Properties:
    Bucket: !Ref TrailBucket
    PolicyDocument:
If I remove the brackets {}, then I get the following error message in CloudFormation:
The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID:
Try removing the brackets and asterisks
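For reference, a cleaned-up version of that block might look like the sketch below. This assumes Key is an AWS::KMS::Key (or a key ARN/alias parameter) defined elsewhere in the template; in that case SSEAlgorithm must be the literal string aws:kms, and the key is referenced through KMSMasterKeyID rather than through SSEAlgorithm:

TrailBucket:
  Condition: InternalBucket
  Type: 'AWS::S3::Bucket'
  Properties:
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: aws:kms      # must be AES256 or aws:kms
            KMSMasterKeyID: !Ref Key   # assumes Key is a KMS key in this template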

Way to access S3 Bucket's name from cloudformation/boto3?

My team wants to create an S3 bucket in a cloudformation template without assigning it a bucket name (to let cloudformation name it itself).
When putting a file into the S3 bucket from my lambda function, is there a way for me to get the S3 bucket's name without having to manually look at the AWS console and check what name was created?
Using the Serverless Application Model (SAM), you can include environment variables in the function's properties:
AWSTemplateFormatVersion: '2010-09-09'
Description: "Demo"
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  MyLambdaFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Runtime: nodejs10.x
      Handler: index.handler
      CodeUri: ./src
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Action:
                - s3:PutObject
              Effect: 'Allow'
              Resource: !Sub ${MyS3Bucket.Arn}/*
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyS3Bucket
  MyS3Bucket:
    Type: 'AWS::S3::Bucket'
Then your function can access the environment variable via process.env.BUCKET_NAME in Node.js. In Python I think you'd use os.environ['BUCKET_NAME'].
const aws = require('aws-sdk');
const s3 = new aws.S3();

exports.handler = async (event) => {
  const params = {
    Body: 'The Body',
    Bucket: process.env.BUCKET_NAME,
    Key: 'abc123',
  }
  return s3.putObject(params).promise();
}
I would assume this works for any CloudFormation templates which aren't using the SAM transform too.
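As a rough sketch of that, a plain CloudFormation (non-SAM) template would declare AWS::Lambda::Function directly and pass the generated name the same way. The role and code location below are placeholders I'm assuming, not values from the question:

Resources:
  MyS3Bucket:
    Type: 'AWS::S3::Bucket'
  MyLambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      Runtime: nodejs10.x
      Handler: index.handler
      Role: !GetAtt MyLambdaRole.Arn     # hypothetical IAM role defined elsewhere
      Code:
        S3Bucket: my-deployment-bucket   # hypothetical artifact bucket
        S3Key: function.zip
      Environment:
        Variables:
          BUCKET_NAME: !Ref MyS3Bucket   # CloudFormation fills in the generated bucket name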
You can use Fn::GetAtt to get values from your newly created S3 bucket.
You can check it here: S3 Ref & Get Attribute Documentation.
The problem is how to pass the value to the Lambda function.
Here are steps that might work:
Use the Get Attribute function above (or Ref) to get the S3 bucket name that CloudFormation created.
Insert the value into a file; you can use UserData, or Metadata if you are using cloud-init already.
Store the file in an existing S3 bucket (or any other storage that Lambda can access); you can use the CloudFormation template bucket that is always created when you launch a CloudFormation template (usually named cf-template...).
Add code to your Lambda to access S3 and read the file. Now you have the name of the S3 bucket that your CloudFormation stack created and can use it in Lambda.
Hope this helps.
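To make the Ref/GetAtt part concrete, an Outputs section along these lines (the resource name here is just a placeholder) also exposes the generated name and ARN on the stack itself:

Outputs:
  GeneratedBucketName:
    Description: Name CloudFormation generated for the bucket
    Value: !Ref MyS3Bucket          # Ref on an AWS::S3::Bucket returns the bucket name
  GeneratedBucketArn:
    Description: ARN of the generated bucket
    Value: !GetAtt MyS3Bucket.Arn   # GetAtt Arn returns the bucket ARN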

Backand's API with S3: Upload to region other than US Standard

I would like to use AWS S3 to store my app's user's files securely.
I am based in the EU (UK), so my bucket's region is EU (Ireland). Based on the Noterious example in the Backand docs, and the snippet provided by the Backand dashboard, this is my custom File Upload action:
function backandCallback(userInput, dbRow, parameters, userProfile) {
  var data = {
    "key": "<my AWS key ID>",
    "secret": "<my secret key>",
    "filename": parameters.filename,
    "filedata": parameters.filedata,
    "region": "Ireland",
    "bucket": "<my bucket name>"
  };
  var response = $http({method: "PUT", url: CONSTS.apiUrl + "/1/file/s3",
    data: data, headers: {"Authorization": userProfile.token}});
  return response;
}
When testing the action in the Backand dashboard, I get this error: 417 The remote server returned an error: (500) Internal Server Error.: An error occurred, please try again or contact the administrator. Error details: Maximum number of retry attempts reached : 3.
With an American bucket and region: "US Standard", it works without error. So, similarly to this answer, I think this is because the AWS endpoint isn't correctly set up.
I have tried region: "EU", region: "Ireland", region: "eu-west-1" and similar combinations.
So - Is there any way to configure Backand to use AWS endpoints other than US Standard? (I'd have thought that would have been the whole point of setting the region.)
We have checked this issue, and apparently there is a difference in the security method AWS uses between the east coast (N. Virginia) and newer regions like Ireland.
A fix is scheduled for one of the next releases, and I will update here when it is resolved.