Just deploy CloudFormation changes with the Serverless Framework for AWS

I am making changes only to custom resources in my serverless.yml with an AWS provider. The Lambda code package is not changing; it's already uploaded to S3 from a previous deploy.
How can I say "use the artifacts already in S3, just upload the changed cloudformation template and update the stack using that"?

Updating only the infrastructure is not currently achievable with the Serverless Framework. You will need to perform a full deployment even when there are no code changes.
However, a regular sls deploy won't do the trick here either: when no code has changed, the framework won't detect infrastructure-only changes and will skip the deployment. If you want to force a redeployment (e.g. you have hooked up a new trigger for your Lambda function in your serverless.yml file), use the --force flag:
sls deploy --force

Why do I get a 404 Not Found error when trying to load my just-deployed Source Bundle in AWS Elastic Beanstalk?

Overview
I have a very simple CodePipeline that deploys new versions of an application to my AWS Elastic Beanstalk environment. If I run the pipeline, it works and the application is deployed without errors. But if I then navigate to the "Application versions" for my application and click the "Source Bundle" link for the just-deployed version at the top of the list, I'm shown a generic AWS 404 Not Found page. If I click ANY of the source bundle links, I see the same error.
What in the world is happening?
Some context
The CodePipeline successfully uploads the build artifacts to the designated artifact bucket. Those artifacts are all still there. The links from the Application Versions page don't seem to be resolving correctly.
I have a lifecycle policy defined for my application versions to limit them to the last 50 versions and to retain the source bundles in S3. The source bundles are in the bucket mentioned in the previous paragraph, designated for the artifacts, but there are no source bundles in the elasticbeanstalk bucket. This has puzzled more than one AWS Support technician already.
Never mind: I was using the new, fancy redesigned beta console. The link in the old console works perfectly.

How to deploy an S3 bucket within a Docker container in CDK?

I am deploying a static site with AWS CDK. This works, but as the site has grown, the deployments have started failing with
No space left on device
I am looking for solutions to this problem. One suggestion I have seen is to deploy within a Docker container.
How can I do this in CDK, and are there any other solutions?
I would advise that you use cdk-pipelines to manage your deployment; that's the best way forward.
But if you have to use a Docker container, then I have done something similar (in Jenkins).
Steps...
Create a Dockerfile in your project; this will be your custom build environment. It should look like this:
# Custom build environment for CDK (the account ID and region are examples)
FROM node:14.17.3-slim
ENV CDK_DEFAULT_ACCOUNT=1234 \
    CDK_DEFAULT_REGION=ap-southeast-2
# TypeScript is needed to compile the CDK app
RUN npm install -g typescript
Make sure your pipeline installs any npm packages you need
'Build' your project: npx cdk synth
'Deploy' your project: npx cdk deploy --require-approval never
Lastly, you'll need a way to authenticate with AWS so that the pipeline, and specifically the Docker container, can 'talk' to CloudFormation; see the sketch below.
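For example, here is a minimal local sketch of those steps, not from the original answer: it assumes the Dockerfile above has been built, that a package-lock.json is committed, and that AWS credentials are exported in the host environment (the image tag cdk-build is hypothetical).
# Build the custom image, then run the CDK commands inside it,
# forwarding AWS credentials from the host environment.
docker build -t cdk-build .
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  -v "$PWD":/app -w /app cdk-build \
  sh -c "npm ci && npx cdk synth && npx cdk deploy --require-approval never"
In a hosted CI runner the same idea applies, except the credentials come from the pipeline's secret variables instead of your shell.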
But like I said, cdk-pipelines is the best solution; here is a good tutorial.

How to set up an alias using the Serverless Framework

I have a project I'm working on with another developer using the Serverless Framework on AWS. We both need to be able to deploy the stack without stepping on each other's changes. I've been looking for an alias feature where I can provide some prefix or similar that makes each deployment unique, but so far I've been unsuccessful. Is there such a feature in Serverless? If not, how do teams deploy multiple versions of the same code without stepping on each other?
You can use Serverless Framework stages. Set your stage to your name, and your teammate can set the stage to theirs.
Production and dev can also be separate stages.
https://serverless-stack.com/chapters/stages-in-serverless-framework.html
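For reference, here is a minimal sketch of how that can look in serverless.yml; the service name is hypothetical, and ${opt:stage, 'dev'} is the framework's variable syntax for reading the stage from the CLI with a default.
# The stage becomes part of the CloudFormation stack name,
# so each developer's deployment is isolated from the others.
service: my-service              # hypothetical service name
provider:
  name: aws
  stage: ${opt:stage, 'dev'}     # read from the --stage CLI option
Each developer then deploys with sls deploy --stage <their-name>, and production gets its own stage, e.g. sls deploy --stage prod.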

AWS CodeBuild Node.js Runtime

Does AWS CodeBuild already have the aws-cli installed? If yes, do I still need to configure a profile, or would a role attached to CodeBuild be sufficient?
Best Regards
For the first question, the answer is 'Yes': the curated images have the aws-cli installed.
For the second question, the service role you provided in the project would be used, but you could still configure a profile if you want to.
Just to make it clearer and more concise: CodeBuild itself can't have the aws-cli installed, but the images it uses to run a build can have it.
Images managed by AWS CodeBuild do have the AWS CLI, and you can verify it by simply adding aws --version to one of your commands (pre_build might be a good place for that).
The same check can be done for custom images, if you're not sure.
You can find more details on which packages are installed in the images on the corresponding GitHub pages; the AWS documentation links to them.
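As a concrete illustration, a minimal buildspec sketch for that check might look like this (the sts call is an extra assumption, useful for confirming which role the build is running under):
# buildspec.yml (sketch)
version: 0.2
phases:
  pre_build:
    commands:
      - aws --version                 # confirms the CLI is in the image
      - aws sts get-caller-identity   # shows the credentials/role in effect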

GitLab CI Pipeline not triggered by push from runner

We use GitLab CE. We have two repos / projects: one stores the source code, and the other builds and stores the package that we'll deploy. We use a runner to push the changes from the former into the latter, which used to trigger the pipeline of the latter repo. Recently, a change was pushed to the latter repo manually, and since then the push from the runner no longer triggers the pipeline in the target repo (manual pushes still trigger the pipeline; also, the push in the runner runs flawlessly, and the commit appears in the target repo). I was not the one who created this setup, so I don't know how to make the push from the runner trigger the pipeline (or, rather, why it doesn't do so automatically).
As far as I understand, a push should trigger the pipeline wherever it comes from. So why doesn't it?
So apparently the issue appeared because the user account that owned the deploy key used in the target repo had been disabled. Creating a new key under an active user account solved the problem; the pipeline is triggered properly now.