Is it possible to push test events to AWS Lambda with the Serverless Framework?

I'm using the Serverless Framework to push to AWS Lambda. To test my functions, I'm currently adding test events by hand in the Lambda console for each one, which is getting rather tedious, and I would like a way to push them alongside the code with serverless deploy.
I've found this reference on the Serverless method for testing locally, but it doesn't seem to deploy those test events to Lambda.
Ideally, I'd like to be able to do this in Serverless, but if there's a way to do it via the aws-cli, that might be a good option too.

Unfortunately the test events are a feature of the AWS console alone, and are not made available on the AWS API (docs).
As you've noticed, the Serverless Framework includes invocation commands: you've linked to Invoke Local, but Invoke also exists, which invokes your function in the cloud, just like the AWS console does.
As Serverless' Invoke command can take a JSON file as an event, a workaround I might suggest is to create a folder (like tests/payloads) of JSON events as part of your code. That way you can then use serverless invoke -f functionName -p ./tests/payloads/payloadName.json to emulate the experience the AWS Console gives you.
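For example, a hypothetical payload file at ./tests/payloads/new-user.json (the file name, its contents, and the function name below are placeholders) could look like:

{
  "userId": "12345",
  "action": "signup"
}

and be run against the deployed function with:

serverless invoke -f createUser -p ./tests/payloads/new-user.json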

Related

Read bucket object in CDK

In Terraform, to read an object from an S3 bucket at deployment time I can use a data source:
data "aws_s3_bucket_object" "example" { }
Is there a similar concept in CDK? I've seen various methods of uploading assets to S3, as well as importing an existing bucket, but not getting an object from the bucket. I need to read a configuration file from the bucket that will affect further deployment.
It's important to remember that CDK itself is not a deployment tool. It can deploy, but the code you are writing in a CDK stack is the definition of your resources, not a method for deployment.
So, you can do one of a few things.
Use the AWS SDK for your language to make a call to the S3 bucket and load the data directly. This is a perfectly acceptable and well-understood way to gather information you need before deployment: each time the stack synths (which it does before every cdk deploy), that code will run and pull your data.
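A rough sketch of that pattern in TypeScript, assuming the AWS SDK v3; the bucket name, key, and MyStack class are placeholders for your own resources and code:

// Load a config object from S3 before synth, then pass it into the stack as plain data.
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import * as cdk from 'aws-cdk-lib';
import { MyStack } from './my-stack'; // hypothetical stack class that accepts a config prop

async function main() {
  const s3 = new S3Client({});
  const resp = await s3.send(new GetObjectCommand({
    Bucket: 'my-config-bucket',
    Key: 'deploy-config.json',
  }));
  const config = JSON.parse(await resp.Body!.transformToString());

  const app = new cdk.App();
  new MyStack(app, 'MyStack', { config }); // config is ordinary data at synth time
  app.synth();
}

main();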
Use a CodePipeline to set up a proper pipeline, and give it two sources: one your version control repo and the other your S3 bucket:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-multi-in-out.html
The preferred way: drop the JSON file and use Parameter Store instead. CDK contains modules that will create a token version of this parameter at synth time, and when it deploys it will resolve that token against Systems Manager Parameter Store:
https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
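For example, a minimal CDK (TypeScript) sketch, assuming a string parameter named /myapp/deploy-config already exists in Parameter Store:

import * as cdk from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';

class ConfigStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // Returns a CloudFormation token at synth time; resolved from Parameter Store at deploy time.
    const configValue = ssm.StringParameter.valueForStringParameter(this, '/myapp/deploy-config');
    new cdk.CfnOutput(this, 'ConfigValue', { value: configValue });
  }
}

const app = new cdk.App();
new ConfigStack(app, 'ConfigStack');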
If your parameters change after deployment, you can handle that as part of your CDK stack pretty easily (using CloudFormation outputs). If they change in the middle of a deployment, you really need to be using a CodePipeline to manage those steps instead of just CDK.
Because remember: cdk deploy is just a convenience. It will execute everything and has no way to pause in the middle and execute specific steps (other than very basic "this depends on that resource" ordering).

Is there a way to directly trigger an ECS Task (i.e. without Lambda) on object upload to S3 using the Serverless Framework?

Problem: Trigger an ECS Task on object upload to S3. This AWS tutorial makes it seem like it is possible; however, there does not seem to be much information about how to emulate this using the Serverless Framework.
Constraint: As explained in this AWS tutorial, trigger the ECS task without using an intermediary Lambda
Disclaimer: I am very new to serverless (both the technology and this framework) so I may be misunderstanding something fundamentally. Nevertheless, I am super eager to learn and build with these incredible tools and would appreciate any help/guidance!
Your problem statement is contradictory - do you want to trigger an ECS task using Lambda + Serverless, or without using Lambda? Serverless is a framework to build and deploy Lambda functions on AWS, or serverless functions on any number of IaaS providers. If you don't want a Lambda solution, you needn't use Serverless.
Option 1 - Trigger an ECS Task from EventBridge
This is the option detailed in the tutorial you've linked. EventBridge is an AWS service that connects various AWS, third-party or custom events to a number of supported targets.
There already exists an event published to the default event bus on EventBridge when an object is uploaded to S3.
Set up an event rule that matches this event.
Set up a target on this event rule that runs an ECS task whenever the rule matches (so, whenever an object is uploaded to S3).
S3 upload event -> event rule -> target -> run ECS task
You can do this on the AWS console or using the API.
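For the API route, here is a rough sketch using the AWS SDK v3 for JavaScript (TypeScript); every name, ARN, role, and subnet below is a placeholder, and the bucket needs EventBridge notifications enabled for the "Object Created" pattern to match:

import { EventBridgeClient, PutRuleCommand, PutTargetsCommand } from '@aws-sdk/client-eventbridge';

const eb = new EventBridgeClient({});

async function wireS3UploadToEcsTask() {
  // 1. Rule on the default event bus matching S3 "Object Created" events for one bucket.
  await eb.send(new PutRuleCommand({
    Name: 'on-s3-upload',
    EventPattern: JSON.stringify({
      source: ['aws.s3'],
      'detail-type': ['Object Created'],
      detail: { bucket: { name: ['my-upload-bucket'] } },
    }),
  }));

  // 2. Target on that rule: run an ECS (Fargate) task each time the rule matches.
  await eb.send(new PutTargetsCommand({
    Rule: 'on-s3-upload',
    Targets: [{
      Id: 'run-ecs-task',
      Arn: 'arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster',
      RoleArn: 'arn:aws:iam::123456789012:role/events-run-ecs-task',
      EcsParameters: {
        TaskDefinitionArn: 'arn:aws:ecs:us-east-1:123456789012:task-definition/my-task:1',
        TaskCount: 1,
        LaunchType: 'FARGATE',
        NetworkConfiguration: {
          awsvpcConfiguration: { Subnets: ['subnet-0123456789abcdef0'] },
        },
      },
    }],
  }));
}

wireS3UploadToEcsTask();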
Option 2 - Trigger a Lambda function
It looks like you're not interested in this, but I'm listing it here because you mentioned Serverless.
Write a Lambda function that uses the AWS API to run an ECS task whenever it is invoked.
You can develop and deploy this function using Serverless.
You have a couple of options on how to invoke this Lambda function
Create an SNS topic and publish to this topic whenever an object is uploaded to S3. You can set this up on your S3 bucket. Hook your Lambda function to this SNS topic.
Use the aforementioned AWS event and set the Lambda function as your target. A sample using Serverless is here
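Either way, the Lambda function itself just calls RunTask. A minimal sketch of such a handler in TypeScript (AWS SDK v3), where the cluster, task definition, and subnet are placeholders:

import { ECSClient, RunTaskCommand } from '@aws-sdk/client-ecs';

const ecs = new ECSClient({});

// Invoked by S3 / SNS / EventBridge; starts one Fargate task per invocation.
export const handler = async (event: unknown) => {
  console.log('Trigger event:', JSON.stringify(event));
  await ecs.send(new RunTaskCommand({
    cluster: 'my-cluster',
    taskDefinition: 'my-task:1',
    launchType: 'FARGATE',
    count: 1,
    networkConfiguration: {
      awsvpcConfiguration: { subnets: ['subnet-0123456789abcdef0'] },
    },
  }));
};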

Does Serverless, Inc ever see my AWS credentials?

I would like to start using serverless-framework to manage lambda deploys at my company, but we handle PHI so security’s tight. Our compliance director and CTO had concerns about passing our AWS key and secret to another company.
When doing a serverless deploy, do AWS credentials ever actually pass through to Serverless, Inc?
If not, can someone point me to where in the code I can prove that?
Thanks!
Running serverless deploy isn't just one call; it's many.
AWS example (oversimplification):
Check if deployment s3 bucket already exists
Create an S3 bucket
Upload packages to s3 bucket
Call CloudFormation
Check CloudFormation stack status
Get info of created resources (e.g. endpoint URLs of created APIs)
And those calls can change dependent on what you are doing and what you have done before.
The point I'm trying to make is that these calls, which contain your credentials, are not all located in one place, and if you want to do a full code review of the Serverless Framework and all its dependencies, have fun with that.
But under the hood, we know that it's actually using the JavaScript aws-sdk (go check out the package.json), and we know what endpoints that uses: {service}.{region}.amazonaws.com.
So, to prove to your employers that nothing containing your credentials goes anywhere except AWS, you can run a serverless deploy with Wireshark running (other network packet analyzers are available). That way you can spot any traffic that isn't going to amazonaws.com.
But wait, why are calls being made to serverless.com and serverlessteam.com when I run a deploy?
Well, that's just tracking some usage stats, and you can see what they track here. But if you are extra paranoid, this can be turned off with serverless slstats --disable.

Backing up a Serverless Framework deployment

I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: "If I or a colleague checks out the serverless code from git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?"
And the answer to that is yes! Serverless keeps track of all of that for you. Unless you remove the stack (serverless remove), no operation will create a new Lambda function or API endpoint.
My team and I are using this method: we commit all code to a git repo, one of us checks it out and deploys a single function or the entire thing, and it updates the existing set of functions properly. If you set up an environment file, that's really all you need to worry about, and I recommend leaving it outside of git entirely.
For AWS, the Serverless Framework keeps track of your deployment via CloudFormation (CF) stack parameters/identifiers, which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they're already backed up for you.
So all you really need is the original deployment code in a git repo and access to your keys. Everything else is already backed up for you.

Is there an Ansible module for creating 'instance-store' based AMIs?

Creating AMIs from EBS-backed instances is exceedingly easy, but doing the same from an instance-store based instance seems like it can only be done manually using the CLI.
So far I've been able to bootstrap the creation of an 'instance-store' based server off of an HVM Amazon Linux AMI with Ansible, but I'm getting lost on the steps that follow... I'm trying to follow this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-instance-store-ami.html#amazon_linux_instructions
Apparently I need to store my x.509 cert and key on the instance, but which key is that? Is that...
one I have to generate on the instance with openssl,
one that I generate/convert from AWS,
one I generate with Putty, or
one that already exists in my AWS account?
After that, I can't find any reference to ec2-bundle-vol in Ansible. So I'm left wondering if the only way to do this is with Ansible's command module.
Basically what I'm hoping to find out is: is there a way to easily create instance-store based AMIs using Ansible, and if not, can anyone outline the steps necessary to automate this? Thanks!
Generally speaking, Ansible AWS modules are meant to manage AWS resources by interacting with the AWS HTTP API (i.e. actions you could otherwise do in the AWS Management Console).
They are not intended to run AWS specific system tools on EC2 instances.
ec2-bundle-vol and ec2-upload-bundle must be run on the EC2 instance itself. They are not callable via the HTTP API.
I'm afraid you need to write a custom playbook / role to automate the process.
On the other hand, aws ec2 register-image is an AWS API call and corresponds to the ec2_ami Ansible module.
Unfortunately, this module doesn't seem to support registering an image from an S3 bucket.