I would like to start using the Serverless Framework to manage Lambda deploys at my company, but we handle PHI, so security is tight. Our compliance director and CTO had concerns about passing our AWS key and secret to another company.
When doing a serverless deploy, do AWS credentials ever actually pass through to Serverless, Inc.?
If not, can someone point me to where in the code I can prove that?
Thanks!
Running serverless deploy isn't just one call; it's many.
AWS example (oversimplification):
Check if the deployment S3 bucket already exists
Create an S3 bucket
Upload packages to the S3 bucket
Call CloudFormation
Check the CloudFormation stack status
Get info about the created resources (e.g. endpoint URLs of created APIs)
And those calls can change depending on what you are doing and what you have done before.
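To make that concrete, here is a rough boto3 sketch of the kinds of calls a deploy ends up making on your behalf (bucket, package, and stack names are made up, and this is not the framework's actual code, which uses the JavaScript aws-sdk):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
cfn = boto3.client("cloudformation")

bucket = "my-service-deployment-bucket"  # illustrative name

try:
    s3.head_bucket(Bucket=bucket)        # does the deployment bucket exist?
except ClientError:
    s3.create_bucket(Bucket=bucket)      # create it if not

s3.upload_file("my-service.zip", bucket, "my-service.zip")  # upload the package

cfn.update_stack(                        # hand the template to CloudFormation
    StackName="my-service-dev",
    TemplateURL=f"https://{bucket}.s3.amazonaws.com/template.json",
    Capabilities=["CAPABILITY_IAM"],
)
cfn.describe_stacks(StackName="my-service-dev")            # poll the stack status
cfn.describe_stack_resources(StackName="my-service-dev")   # get info on created resources

Every one of those calls is signed with your AWS credentials and goes straight to an amazonaws.com endpoint.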
The point I'm trying to make is that these calls, which contain your credentials, are not all located in one place, and if you want to do a full code review of the Serverless Framework and all its dependencies, have fun with that.
But under the hood, we know that it's actually using the JavaScript aws-sdk (go check out the package.json), and we know what endpoints that uses: {service}.{region}.amazonaws.com.
So to prove to your employers that nothing containing your credentials is going anywhere except AWS, you can just run a serverless deploy with Wireshark running (other network packet analyzers are available). That way you can see anything that's not going to amazonaws.com.
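If a full packet capture feels heavy, one quick sanity check (a sketch only, and no substitute for an actual capture, since it only watches DNS lookups rather than the connections themselves) is to log any DNS query made during the deploy that isn't for an amazonaws.com name, for example with scapy:

# Requires scapy and root privileges; run `serverless deploy` in another terminal.
from scapy.all import sniff, DNSQR

def flag_non_aws(pkt):
    if pkt.haslayer(DNSQR):
        name = pkt[DNSQR].qname.decode().rstrip(".")
        if not name.endswith("amazonaws.com"):
            print("non-AWS lookup:", name)

sniff(filter="udp port 53", prn=flag_non_aws, store=False)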
But wait, why are calls being made to serverless.com and serverlessteam.com when I run a deploy?
Well, that's just tracking some stats, and you can see what they track here. But if you are uber paranoid, this can be turned off with serverless slstats --disable.
In Terraform, to read an object from an S3 bucket at the time of deployment, I can use a data source:
data "aws_s3_bucket_object" "example" {}
Is there a similar concept in CDK? I've seen various methods of uploading assets to s3, as well as importing an existing bucket, but not getting an object from the bucket. I need to read a configuration file from the bucket that will affect further deployment.
It's important to remember that CDK itself is not a deployment option. It can deploy, but the code you are writing in a CDK stack is the definition of your resources, not a method of deployment.
So, you can do one of a few things.
Use the SDK for your language to make a call to the S3 bucket and load the data directly. This is perfectly acceptable and a well-understood way to gather information you need before deployment: each time the stack synths (which it does before every cdk deploy), that code will run and pull your data.
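As a sketch of that first option (Python CDK with boto3; the bucket and key names are made up), the config read is plain SDK code that runs before the app synthesizes:

import json

import boto3
import aws_cdk as cdk
from constructs import Construct

class MyStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, *, config: dict, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Use config values here to decide what resources to define,
        # e.g. config["instance_count"].

# This runs on every synth, i.e. before every cdk deploy.
s3 = boto3.client("s3")
raw = s3.get_object(Bucket="my-config-bucket", Key="deploy-config.json")["Body"].read()
config = json.loads(raw)

app = cdk.App()
MyStack(app, "MyStack", config=config)
app.synth()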
Use CodePipeline to set up a proper pipeline, and give it two sources: one being your version control repo and the other your S3 bucket:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-multi-in-out.html
The preferred way: drop the JSON file and use Parameter Store instead. CDK contains modules that will create a token version of the parameter at synth time, and when it deploys it will resolve that token back to the Systems Manager Parameter Store:
https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
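A minimal sketch of that in Python CDK (the parameter name is hypothetical; see the linked guide for the full set of options):

import aws_cdk as cdk
from aws_cdk import aws_ssm as ssm
from constructs import Construct

class ConfigStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Synth produces a token; CloudFormation resolves the real value from
        # Parameter Store at deploy time.
        db_name = ssm.StringParameter.value_for_string_parameter(self, "/myapp/db-name")
        # If the value has to influence which resources get created, use a
        # synth-time context lookup instead:
        # db_name = ssm.StringParameter.value_from_lookup(self, "/myapp/db-name")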
If your parameters change after deployment, you can handle that as part of your CDK stack pretty easily (using CloudFormation outputs). If they change in the middle of a deployment, you really need to be using CodePipeline to manage these steps instead of just CDK.
Because remember: cdk deploy is just a convenience. It executes everything and has no way to pause in the middle and run specific steps (other than very basic resource dependencies).
I would like to create an ECS task with Fargate, and have that upload a file to S3 using the AWS CLI (among other things). I know that it's possible to create task roles, which can provide the task with permissions on AWS services/resources. Similarly, in OpsWorks, the AWS SDK is able to query instance metadata to obtain temporary credentials for its instance profile. I also found these docs suggesting that something similar is possible with the AWS CLI on EC2 instances.
Is there an equivalent for Fargate—i.e., can the AWS CLI, running in a Fargate container, query the metadata service for temporary credentials? If not, what's a good way to authenticate so that I can upload a file to S3? Should I just create a user for this task and pass in AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables?
(I know it's possible to have an ECS task backed by EC2, but this task is short-lived and run maybe monthly; it seemed a good fit for Fargate.)
"I know that it's possible to create task roles, which can provide the
task with permissions on AWS services/resources."
"Is there an equivalent for Fargate"
You already know the answer. The ECS task role isn't specific to EC2 deployments; it works with Fargate deployments as well.
You can get the task metadata, including temporary IAM credentials, through the ECS task metadata endpoint. But you don't need to worry about that, because the AWS CLI, and any AWS SDK, will automatically pull that information when running inside an ECS task.
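As a sketch of what that looks like from inside the Fargate container (bucket and file names are made up), there is nothing credential-related to configure at all; boto3, like the AWS CLI, discovers the task role's temporary credentials via the container credentials endpoint that ECS exposes through the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable:

import boto3

# No access keys in code or in environment variables: the SDK picks up the
# task role's temporary credentials from the ECS container credentials endpoint.
s3 = boto3.client("s3")
s3.upload_file("/tmp/output.csv", "my-example-bucket", "exports/output.csv")

The CLI equivalent inside the same container is simply aws s3 cp /tmp/output.csv s3://my-example-bucket/exports/output.csv, again with no keys configured.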
Terraform 0.9.5.
I am in the process of putting together a group of modules that our infrastructure team and automation team will use to create resources in a standard fashion and in turn create stacks to provision different envs. All working well.
Like all teams using Terraform, shared state becomes a concern. I have configured Terraform to use an S3 backend that is versioned and encrypted, and added a lock via a DynamoDB table. Perfect. All works with local accounts... Okay, the problem...
We have multiple AWS accounts: one for IAM, one for billing, one for production, one for non-production, one for shared services, etc... you get where I am going. My problem is as follows.
I authenticate as a user in our IAM account and assume the required role. This has been working like a dream until I introduced the Terraform backend configuration to use S3 for shared state. It looks like the backend config within Terraform requires default credentials to be set within ~/.aws/credentials. It also looks like these have to belong to a user that is local to the account where the S3 bucket was created.
Is there a way to set up the backend configuration so that it will use the creds and role configured within the provider? Is there a better way to configure shared state and locking? Any suggestions welcome :)
Update: Got this working. I created a new user within the account where the S3 bucket is created, created a policy to allow that new user only s3:DeleteObject, s3:GetObject, s3:PutObject, and s3:ListBucket on the specific S3 bucket and dynamodb:* on the specific DynamoDB table, created a custom credentials file, and added a default profile with the access and secret keys assigned to that new user. Used a backend config similar to:
terraform {
  required_version = ">= 0.9.5"

  backend "s3" {
    bucket                  = "remote_state"
    key                     = "/NAME_OF_STACK/terraform.tfstate"
    region                  = "us-east-1"
    encrypt                 = "true"
    shared_credentials_file = "PATH_TO_CUSTOM_CREDENTIALS_FILE"
    lock_table              = "MY_LOCK_TABLE"
  }
}
It works, but there is an initial configuration that needs to happen within your profile to get it working. If anybody knows of a better setup or can identify problems with my backend config, please let me know.
Terraform expects backend configuration to be static, and does not allow it to include interpolated variables as is possible elsewhere in the config, because the backend must be initialized before any other work can be done.
Due to this, applying the same config multiple times using different AWS accounts can be tricky, but is possible in one of two ways.
The lowest-friction way is to create a single S3 bucket and DynamoDB table dedicated to state storage across all environments, and use S3 permissions and/or IAM policies to impose granular access controls.
Organizations adopting this strategy will sometimes create the S3 bucket in a separate "administrative" AWS account, and then grant restrictive access to the individual state objects in the bucket to the specific roles that will run Terraform in each of the other accounts.
This solution has the advantage that, once it has been set up correctly in S3, Terraform can be used routinely without any unusual workflow: configure the single S3 bucket in the backend, and provide appropriate credentials via environment variables to allow them to vary. Once the backend is initialized, use workspaces (known as "state environments" prior to Terraform 0.10) to create a separate state for each of the target environments of a single configuration.
The disadvantage is the need to manage a more-complicated access configuration around S3, rather than simply relying on coarse access control with whole AWS accounts. It is also more challenging with DynamoDB in the mix, since the access controls on DynamoDB are not as flexible.
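For illustration, the per-environment access grant described above might look roughly like this (a boto3 sketch; the role, bucket, table, and account IDs are all made up):

import json

import boto3

STATE_BUCKET = "example-org-terraform-state"
STATE_KEY = "prod/terraform.tfstate"
LOCK_TABLE_ARN = "arn:aws:dynamodb:us-east-1:111111111111:table/terraform-locks"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        # List the bucket so Terraform can find its state object.
        {"Effect": "Allow", "Action": "s3:ListBucket",
         "Resource": f"arn:aws:s3:::{STATE_BUCKET}"},
        # Read/write only this environment's state object.
        {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"],
         "Resource": f"arn:aws:s3:::{STATE_BUCKET}/{STATE_KEY}"},
        # Acquire and release the state lock.
        {"Effect": "Allow",
         "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
         "Resource": LOCK_TABLE_ARN},
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="terraform-prod",
    PolicyName="terraform-state-access",
    PolicyDocument=json.dumps(policy),
)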
There is a more complete description of this option in the Terraform S3 backend documentation, under Multi-account AWS Architecture.
If a complex S3 configuration is undesirable, the complexity can instead be shifted into the Terraform workflow by using partial configuration. In this mode, only a subset of the backend settings are provided in config and additional settings are provided on the command line when running terraform init.
This allows options to vary between runs, but since it requires extra arguments to be provided most organizations adopting this approach will use a wrapper script to configure Terraform appropriately based on local conventions. This can be just a simple shell script that runs terraform init with suitable arguments.
This then allows you to vary, for example, the custom credentials file by providing it on the command line. In this case, state environments are not used, and instead switching between environments requires re-initializing the working directory against a new backend configuration.
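A minimal sketch of such a wrapper (shown here in Python rather than shell; the bucket names, credentials files, and lock table are made up) might look like:

#!/usr/bin/env python3
"""Usage: ./tf-init.py <env>  -- re-initializes the backend for that environment."""
import subprocess
import sys

BACKENDS = {
    "prod":    {"bucket": "example-prod-tfstate",    "creds": "/secure/aws/prod-credentials"},
    "nonprod": {"bucket": "example-nonprod-tfstate", "creds": "/secure/aws/nonprod-credentials"},
}

cfg = BACKENDS[sys.argv[1]]

subprocess.run(
    [
        "terraform", "init", "-reconfigure",
        f"-backend-config=bucket={cfg['bucket']}",
        f"-backend-config=shared_credentials_file={cfg['creds']}",
        "-backend-config=region=us-east-1",
        "-backend-config=lock_table=example-terraform-locks",
    ],
    check=True,
)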
The advantage of this solution is that it does not impose any particular restrictions on the use of S3 and DynamoDB, as long as the differences can be represented as CLI options.
The disadvantage is the need for unusual workflow or wrapper scripts to configure Terraform.
I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: "If I or a colleague checks out the serverless code from git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?"
And the answer to that is yes! Serverless keeps track of all of that for you within its files. Unless you run serverless remove, no operation will create a new Lambda function or API endpoint.
My team and I are using this method: we commit all code to a git repo, one of us checks it out and deploys a function or the entire thing, and it updates the existing set of functions properly. If you set up an environment file, that's really all you need to worry about, and I recommend leaving it out of git entirely.
For AWS: the Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they're already backed up for you.
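If you want to see (or copy out) what's stored there, here is a quick boto3 sketch; the bucket name is hypothetical, since the auto-generated one will look something like <service>-<stage>-serverlessdeploymentbuck-<random>:

import boto3

DEPLOYMENT_BUCKET = "myservice-dev-serverlessdeploymentbuck-abc123"  # hypothetical

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=DEPLOYMENT_BUCKET):
    for obj in page.get("Contents", []):
        # Compiled CloudFormation templates and packaged code zips per deployment.
        print(obj["Key"])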
So all you really need to have is the original deployment code in a git repo and have access to your keys. Everything else is already backed up for you.
Creating AMIs from EBS-backed instances is exceedingly easy, but doing the same from an instance-store-backed instance seems like it can only be done manually using the CLI.
So far I've been able to bootstrap the creation of an 'instance-store' based server off of an HVM Amazon Linux AMI with Ansible, but I'm getting lost on the steps that follow... I'm trying to follow this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-instance-store-ami.html#amazon_linux_instructions
Apparently I need to store my X.509 cert and key on the instance, but which key is that? Is that...
one I have to generate on the instance with openssl,
one that I generate/convert from AWS,
one I generate with Putty, or
one that already exists in my AWS account?
After that, I can't find any reference to ec2-bundle-vol in Ansible. So I'm left wondering if the only way to do this is with Ansible's command module.
Basically, what I'm hoping to find out is: is there a way to easily create instance-store-based AMIs using Ansible, and if not, can anyone outline the steps necessary to automate this? Thanks!
Generally speaking, the Ansible AWS modules are meant to manage AWS resources by interacting with the AWS HTTP API (i.e. actions you could otherwise do in the AWS Management Console).
They are not intended to run AWS specific system tools on EC2 instances.
ec2-bundle-vol and ec2-upload-bundle must be run on the EC2 instance itself; they are not callable via the HTTP API.
I'm afraid you need to write a custom playbook / role to automate the process.
On the other hand, aws ec2 register-image is an AWS API call and corresponds to the ec2_ami Ansible module.
Unfortunately, this module doesn't seem to support registering an image from an S3 bucket.
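If you do end up scripting that final registration step yourself, the API call behind aws ec2 register-image looks roughly like this (a boto3 sketch; the bucket, manifest path, and AMI name are made up):

import boto3

ec2 = boto3.client("ec2")
ec2.register_image(
    Name="my-instance-store-ami",
    # Path to the manifest that ec2-upload-bundle pushed to S3.
    ImageLocation="my-ami-bucket/bundles/image.manifest.xml",
    Architecture="x86_64",
    VirtualizationType="hvm",
)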