How to set up an RDS Proxy with Lambda in the Serverless Framework

I've already created the RDS Proxy and want to attach it with something like this:

provider:
  name: aws
  rds-proxy:
    arn: arn:aws:xxxxxx

Alternatively, any CloudFormation syntax that I can extend in the resources section would work.
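As far as I know there is no rds-proxy key in the provider block. A common pattern is to put the function in the proxy's VPC, grant it rds-db:connect for IAM database authentication, and pass the proxy endpoint as an environment variable. A minimal sketch, assuming the proxy already exists (the endpoint, ARN, and network ids below are all placeholders):

# Hypothetical sketch: attach a function to an existing RDS Proxy.
provider:
  name: aws
  environment:
    DB_PROXY_ENDPOINT: my-proxy.proxy-xxxxxx.us-east-1.rds.amazonaws.com
  iam:
    role:
      statements:
        - Effect: Allow
          Action: rds-db:connect   # needed only for IAM database authentication
          Resource: arn:aws:rds-db:us-east-1:123456789012:dbuser:prx-xxxxxx/*
  vpc:
    securityGroupIds:
      - sg-xxxxxx
    subnetIds:
      - subnet-xxxxxx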

Related

Alluxio on Kubernetes (EKS): does it support an S3 connection without an AWS accessKey and secretKey? Is the S3 connection configurable with a role ARN alone?

We are installing Alluxio on EKS using S3 as the underlying storage layer. The Alluxio cluster is up and running against S3 storage when it is configured as below (using an AWS access key and secret):
ALLUXIO_JAVA_OPTS: |-
  -Dalluxio.master.hostname=alluxio-master-0
  -Dalluxio.master.journal.type=UFS
  -Dalluxio.master.journal.folder=/journal
  -Dalluxio.security.stale.channel.purge.interval=365d
  -Dalluxio.master.mount.table.root.ufs=s3://cubixalluxiodata/
  -Dalluxio.master.mount.table.root.option.aws.accessKeyId=AxxxxxxxxxxxxO
  -Dalluxio.master.mount.table.root.option.aws.secretKey=DxxxxxxxxxxxxD
However, we are looking for an approach that configures S3 storage for Alluxio without an accessKey/secretKey, using role-ARN-based authentication alone. Please advise on whether this approach is possible.
It looks to me like you need to use an AWS credentials profile file (https://docs.alluxio.io/os/user/stable/en/ufs/S3.html#advanced-credentials-setup) to connect to S3. You could set up your AWS credentials profile file and share that file with your running image.
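A minimal sketch of one way to share such a file with the pod, assuming a Kubernetes Secret named aws-credentials and the default SDK lookup path /root/.aws (both are my assumptions, not from the Alluxio docs):

# Hypothetical sketch: keep the credentials profile file in a Secret.
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials
stringData:
  credentials: |
    [default]
    aws_access_key_id = AxxxxxxxxxxxxO
    aws_secret_access_key = DxxxxxxxxxxxxD

Then mount it into the Alluxio containers (placeholder pod-spec fragment):

  volumes:
    - name: aws-credentials
      secret:
        secretName: aws-credentials
  containers:
    - name: alluxio-master
      volumeMounts:
        - name: aws-credentials
          mountPath: /root/.aws
          readOnly: true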

Providing credentials to the AWS CLI in ECS/Fargate

I would like to create an ECS task with Fargate, and have that upload a file to S3 using the AWS CLI (among other things). I know that it's possible to create task roles, which can provide the task with permissions on AWS services/resources. Similarly, in OpsWorks, the AWS SDK is able to query instance metadata to obtain temporary credentials for its instance profile. I also found these docs suggesting that something similar is possible with the AWS CLI on EC2 instances.
Is there an equivalent for Fargate—i.e., can the AWS CLI, running in a Fargate container, query the metadata service for temporary credentials? If not, what's a good way to authenticate so that I can upload a file to S3? Should I just create a user for this task and pass in AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables?
(I know it's possible to have an ECS task backed by EC2, but this task is short-lived and run maybe monthly; it seemed a good fit for Fargate.)
"I know that it's possible to create task roles, which can provide the
task with permissions on AWS services/resources."
"Is there an equivalent for Fargate"
You already know the answer: the ECS task role isn't specific to EC2 deployments; it works with Fargate deployments as well.
You can get the task metadata, including IAM access keys, through the ECS metadata service. But you don't need to worry about that, because the AWS CLI, and any AWS SDK, will automatically pull that information when it is running inside an ECS task.
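For illustration, a minimal CloudFormation sketch of a Fargate task definition with a task role; UploadTaskRole and ExecutionRole are assumed to be defined elsewhere in the stack, and the bucket path is a placeholder:

# Hypothetical sketch: the CLI in this container picks up the task role's
# temporary credentials automatically from the ECS metadata endpoint.
UploadTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: "256"
    Memory: "512"
    TaskRoleArn: !GetAtt UploadTaskRole.Arn      # grants s3:PutObject on the bucket
    ExecutionRoleArn: !GetAtt ExecutionRole.Arn  # pulls the image and writes logs
    ContainerDefinitions:
      - Name: uploader
        Image: amazon/aws-cli
        Command: ["s3", "cp", "/tmp/report.csv", "s3://my-bucket/report.csv"]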

Avoid creating new resource if it exists using serverless

I have a resource shared between many CloudFormation stacks, and I want Serverless to skip creating the resource if it already exists. I found this YAML configuration for creating a new resource, but I want the creation to be skipped when the resource exists. Is there a way to do that?
# you can add CloudFormation resource templates here
resources:
  Resources:
    NewResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: saif-bucket
I found an article about sharing resources between different Serverless projects, and it seems we can just define the resource as S3SharedBucketArtifacts instead of NewResource, and that will do the trick. The code will be:
# you can add CloudFormation resource templates here
resources:
  Resources:
    S3SharedBucketArtifacts:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: saif-bucket
Reference:
How to reuse an AWS S3 bucket for multiple Serverless Framework

Serverless getting VPC id from vpc declaration

In my provider: block I have a VPC declared as vpc:
How can I get the id/ARN/whatever of this VPC in my resources block? I have an AWS CloudFormation resource and I want to pass my VPC's id as the VpcId. Is there some ${self:} thing to get the id?
I tried Ref: vpc but that didn't work.
You won't be able to use Ref, as the resource wasn't created within this stack.
By setting the vpc configuration in either the functions or the provider block, you're referencing existing resources.
If the existing resource was created with CloudFormation, you could export it and make it available for import in this stack; but if it was created manually, that's not possible.
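A minimal sketch of that export/import route, with all names below being placeholders: the VPC's own stack would export the id, e.g.

Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: shared-vpc-id

and this service could then import it in its resources block:

resources:
  Resources:
    AppSecurityGroup:
      Type: AWS::EC2::SecurityGroup
      Properties:
        GroupDescription: Security group in the shared VPC
        VpcId:
          Fn::ImportValue: shared-vpc-id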

Access S3 bucket without running aws configure with kubernetes

I have an S3 bucket with some sql scripts and some backup files using mysqldump.
I also have a .yaml file that deploys a fresh mariadb image.
As I'm not very experienced with Kubernetes yet, if I want to restore one of those backup files into the pod, I need to bash into it, run the AWS CLI, enter my credentials, sync the bucket locally, and run mysql < backup.sql.
This, obviously, destroys the concept of a fully automated deployment.
So, the question is... how can I securely make this pod immediately configured to access S3?
I think you should consider mounting the S3 bucket inside the pod.
This can be achieved with, for example, s3fs-fuse.
There are two nice articles about this, Mounting a S3 bucket inside a Kubernetes pod and Kubernetes shared storage with S3 backend; I recommend reading both to understand how this works.
You basically have to build your own image from a Dockerfile and supply the necessary S3 bucket info and AWS security credentials.
Once you have the storage mounted, you will be able to call scripts from it in the following way:
apiVersion: v1
kind: Pod
metadata:
  name: test-world
spec: # specification of the pod’s contents
  restartPolicy: Never
  containers:
    - name: hello
      image: debian
      command: ["/bin/sh", "-c"]
      args: ["command one; command two && command three"]
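If mounting the bucket is more than you need, a simpler sketch (my suggestion, not from the articles above) is to inject credentials from a Kubernetes Secret, so no interactive aws configure step is required; the Secret name, keys, and bucket below are placeholders:

# Hypothetical sketch: one-shot pod that syncs the bucket using credentials
# injected from a Secret; the synced backup can then be piped into mysql.
apiVersion: v1
kind: Pod
metadata:
  name: s3-restore
spec:
  restartPolicy: Never
  containers:
    - name: restore
      image: amazon/aws-cli
      command: ["/bin/sh", "-c"]
      args: ["aws s3 sync s3://my-backup-bucket /backups"]
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: secret-access-key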