What's the difference between the provider level role and iamRoleStatements in a serverless template? - serverless-framework

These are two properties you can set in a template and I am curious how they differ and which one I should use.
The definitions don't make it clear:
role: arn:aws:iam::XXXXXX:role/role # Overwrite the default IAM role which is used for all functions
iamRoleStatements: # IAM role statements so that services can be accessed in the AWS account
Can someone explain how they differ along with use cases for both?
I'm not sure whether I should create a new provider-level role with all the resources the application needs and assign it to the role parameter, or keep the default role Serverless creates and add my own policies to iamRoleStatements.

iamRoleStatements is designed to contain the most common permissions needed by the service. For example, suppose you have an API Gateway and a number of Lambda functions that all use DynamoDB to store transactional data. Almost all of the Lambda functions need permission to query DynamoDB, so iamRoleStatements could be configured like this:
provider:
  name: aws
  ...
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - <DynamoDB table and indices arns>
All the Lambda functions will get the same iamRoleStatements shown above. Now suppose you have a special Lambda function that needs a completely different permission set. You can craft a role in the IAM console and use the role option to override the default role that contains the iamRoleStatements.
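As a rough sketch (the function names and role ARN below are placeholders, not from the original answer), the override can be applied either for the whole service at the provider level or for a single function, while the rest of the functions keep the default role built from iamRoleStatements:
functions:
  ordersHandler:
    handler: handler.orders
    # uses the default role generated from provider.iamRoleStatements

  specialJob:
    handler: handler.special
    # hypothetical role crafted in the IAM console with its own permission set;
    # this function no longer receives the iamRoleStatements permissions
    role: arn:aws:iam::XXXXXX:role/special-job-role
Setting role directly under provider instead would replace the default role for every function in the service.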

Related

Referring to existing Dynamo DB in Lambda functions

I'm trying to read an existing DynamoDB table from a Lambda function, but declaring resources in the YAML creates a new table. If this is possible, could someone please explain how? I also need to use an existing S3 bucket.
If you change your resources frequently, or even occasionally, you should use Parameter Store. This allows your Lambda function to pick up the correct table names at runtime.
Any time you change your table to have a new name, you just update the value in Parameter Store and your Lambda will automatically refer to the new table.
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
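A minimal sketch of how this can be wired up in serverless.yml, assuming a parameter named /myapp/tableName (the parameter name and environment variable are placeholders, not from the answer). The function receives the parameter name and calls SSM GetParameter at runtime, so renaming the table only requires updating the parameter value:
provider:
  name: aws
  environment:
    # hypothetical parameter name; the function reads its current value at runtime
    TABLE_NAME_PARAM: /myapp/tableName
  iamRoleStatements:
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource:
        - arn:aws:ssm:*:*:parameter/myapp/tableName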

How can I give others permission to read my INFORMATION_SCHEMA.SCHEMATA in BigQuery?

What privileges can I grant to let everyone in the world query my information schema? i.e. I want everyone to be able to run:
select * from `projectid`.INFORMATION_SCHEMA.SCHEMATA
Currently I get back:
Access Denied: Table projectid:INFORMATION_SCHEMA.SCHEMATA: User does not have permission to query table projectid:INFORMATION_SCHEMA.SCHEMATA
Usually in BigQuery you set permissions at the dataset level. For example, this query will run for anyone, as the dataset is public for everyone:
SELECT *
FROM `fh-bigquery.flights.INFORMATION_SCHEMA.TABLES`
But you can't do this:
SELECT *
FROM `fh-bigquery.INFORMATION_SCHEMA.SCHEMATA`
This is because you need project-level permissions to see all my datasets, even the ones I haven't made public.
If you really want to share the schemas of all your datasets with the world, then you could create a custom role just for this, with the bigquery.datasets.get permission:
https://console.cloud.google.com/iam-admin/roles
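If you prefer scripting over the console, a hedged sketch of the role definition (the role ID, title, and file name are placeholders) could be created with gcloud iam roles create schemataViewer --project=projectid --file=schemata-viewer.yaml, where schemata-viewer.yaml contains:
# schemata-viewer.yaml -- hypothetical custom role definition
title: Schemata Viewer
description: Allows listing the project's datasets (INFORMATION_SCHEMA.SCHEMATA)
stage: GA
includedPermissions:
  - bigquery.datasets.get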
Then you need to assign this role to all users - but that's not an option.
At the project level, you can assign this role to one of these:
Google Account email: user@gmail.com
Google Group: admins@googlegroups.com
Service account: server@example.gserviceaccount.com
G Suite domain: example.com
One option in this case:
Create a Google Group.
Give this new role to this new Google Group.
Make this Google Group free to join.
Tell people "hey, if you want to see my project SCHEMATA, join this group".
Then all will work.

AWS Elemental MediaConvert - output object to be owned by different account

I have an account 'A' where Elemental MediaConvert service is running. The output file must be placed in the S3 bucket of account 'B'. I am able to set this up by using an IAM role in account 'A' and setting the canned ACL to 'bucket-owner-full-control'. This way, account 'A' is the owner of the object and account 'B' has full control over the object. I am using a Lambda function to pass the IAM role and submit the MediaConvert job. This implementation works perfectly.
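For reference, the part of the job settings that produces this behaviour might look roughly like the following (bucket name and role ARN are placeholders; the actual CreateJob request body is JSON, shown here as YAML for readability):
Role: arn:aws:iam::ACCOUNT_A_ID:role/MediaConvertRole   # role in account 'A' passed by the Lambda
Settings:
  OutputGroups:
    - OutputGroupSettings:
        Type: FILE_GROUP_SETTINGS
        FileGroupSettings:
          Destination: s3://account-b-bucket/outputs/    # bucket owned by account 'B'
          DestinationSettings:
            S3Settings:
              AccessControl:
                CannedAcl: BUCKET_OWNER_FULL_CONTROL     # account 'B' gets full control of the object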
However, I now have a requirement that account 'B' must also be the owner of the object. I could probably have MediaConvert place the outputs in an S3 bucket in account 'A' directly, and then have another Lambda function copy the object over to account 'B', assuming a suitable IAM role from account 'B'. But I want to achieve this using just the MediaConvert service and perhaps a suitable cross-account role from account 'B'. That way, I will have less code to maintain overall.
Is it possible to set up the workflow this way? Any help would be greatly appreciated. Thanks!

Serverless.yml Aurora RDS: Infrastructure as code

I'm looking for a way to declare an Aurora DB with all of its tables via serverless.yml.
I would like to be able to deploy a new Aurora instance with all the tables via serverless deploy.
Thanks,
The easiest way I found is to declare the Aurora DB resource with YAML like this:
Resources:
  RDSCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      MasterUsername: ${self:custom.dbLogin}
      MasterUserPassword: ${self:custom.dbPassword}
      DatabaseName: MagaDB
      Engine: aurora
      EngineMode: serverless
      ScalingConfiguration:
        AutoPause: true
        MaxCapacity: 2
        MinCapacity: 1
        SecondsUntilAutoPause: 300
      EnableHttpEndpoint: true
      StorageEncrypted: true
Then create an init.sql script that instantiates all the tables.
The difference between Aurora and DynamoDB here is that with DynamoDB you declare the tables when deploying, whereas with Aurora you create them yourself after the cluster is deployed.
To do this with the Serverless Framework, you'll need to write a CloudFormation template and include it inside the resources block of your serverless.yml file.
Here are the docs, so you can learn more about including CloudFormation in your serverless.yml file.
Here's a set of examples from AWS that can help, although they're extremely verbose and include lots of extra things you may not need.
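As a minimal sketch (the service name and the source of the credentials are assumptions, not from the answer), the cluster declared above sits under the resources block of serverless.yml, with the login and password supplied through the custom section that the ${self:custom.*} references point to:
# serverless.yml -- hypothetical skeleton
service: aurora-example

custom:
  dbLogin: ${env:DB_LOGIN}         # placeholders; avoid hard-coding credentials in the file
  dbPassword: ${env:DB_PASSWORD}

provider:
  name: aws

resources:
  Resources:
    RDSCluster:
      Type: AWS::RDS::DBCluster
      Properties:
        # same properties as in the snippet above
        MasterUsername: ${self:custom.dbLogin}
        MasterUserPassword: ${self:custom.dbPassword}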

How to manage the role wise functionalities in grails application (custom authorization)

I am developing an application using the Grails framework. I would like to allow Users access to the application based on their Roles and the privileges granted to those Roles. Spring Security provides only high-level security, which is not sufficient for my task; I would like to manage the privileges dynamically.
Please suggest the best approach for doing this.
For example, take a Reprint feature in the application: an Admin should decide who can and cannot access it based on their Roles.
Thanks in Advance
Spring Security provides role-based functionality, which is not dynamic: an admin or superuser can't allow or deny access to a particular thing for a particular user at runtime.
But you can build a custom authorization workflow.
Assuming you have a 'user' table in your database, you can add a column such as 'authorization' that stores string/varchar data.
Build a JSON document like the following (for example):
[ "resource1":{
"canView" : true,
"canEdit" : false,
"canDelete" : false
},
"resource2":{
"canView" : true,
"canEdit" : true,
"canDelete" : true
}
]
Create/build this JSON according to your requirements; this is just an example.
Store it as a string in the database. After fetching it from the backend, convert it from String to JSON, manipulate its values dynamically (this is what the admin changes), convert it back to a string, and update the record. Wherever you need to check access, fetch it, convert it to JSON, and check the relevant value: does the user have access to this resource or not?