Does AWS CodeBuild already have the AWS CLI installed? If yes, do I still need to configure a profile, or would a role attached to CodeBuild be sufficient?
Best Regards
For the first question, the answer is 'Yes': the curated images have the AWS CLI installed.
For the second question, the service role you provided in the project will be used, but you can still configure a profile if you want to.
To make it clearer and more concise: CodeBuild itself can't have the AWS CLI installed, but the images it uses to run a build can.
Images managed by AWS CodeBuild do have the AWS CLI, and you can verify it by simply adding aws --version to one of your commands (pre_build might be a good place for that).
The same check can be done for custom images, if you're not sure.
You can find more details on which packages are installed in the images on the corresponding GitHub pages; the AWS documentation links to them.
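For instance, a minimal buildspec fragment for that check might look like this (the phase layout is the standard buildspec structure; the build-phase command is just a placeholder):

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Confirm the AWS CLI is available in the build image
      - aws --version
  build:
    commands:
      - echo "Build commands go here"
```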
I am deploying a static site with AWS CDK. This works, but as the site has grown, the deployments have started failing with:
No space left on device
I am looking for solutions to this problem. One suggestion I have seen is to deploy within a docker container.
How can I do this in CDK and are there any other solutions?
I would advise that you use cdk-pipelines to manage your deployment; that's the best way forward.
But if you have to use a Docker container, then I have done something similar (in Jenkins).
Steps...
Create a Dockerfile in your project; this will be your custom build environment. It should look like this...
FROM node:14.17.3-slim
ENV CDK_DEFAULT_ACCOUNT=1234 \
CDK_DEFAULT_REGION=ap-southeast-2
RUN npm install -g typescript
Make sure your pipeline installs any npm packages you need
'Build' your project: npx cdk synth
'Deploy' your project: npx cdk deploy --require-approval never
Lastly, you'll need a way to authenticate with AWS so Bitbucket Pipelines, and specifically the Docker container, can 'talk' to CloudFormation.
But like I said, cdk-pipelines is the best solution; here is a good tutorial.
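To tie the Dockerfile above into Bitbucket Pipelines, the config could be sketched roughly as follows. This is an assumption-heavy sketch: the image name my-registry/cdk-build and the branch name are hypothetical, and it assumes AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) are set as repository variables so the CDK CLI can authenticate:

```yaml
# bitbucket-pipelines.yml (sketch; image name and branch are placeholders)
image: my-registry/cdk-build:latest

pipelines:
  branches:
    main:
      - step:
          name: Synth and deploy
          script:
            - npm ci                                   # install project packages
            - npx cdk synth                            # 'Build'
            - npx cdk deploy --require-approval never  # 'Deploy'
```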
I can see some bash commands in the CI/CD section of the tool, but is there any case study or proven case where these tests were integrated with AWS CodeBuild? Also, is this supported by testRigor at the moment? If yes, can you explain how?
Yes, AWS CodeBuild is supported by testRigor out of the box.
The way it works is that you copy the script from the CI/CD section and paste it as a bash script that is triggered in CodeBuild.
I know it is not exactly the same, but here is a video showing how to use it with GitHub Actions, which will hopefully clarify some things: https://youtu.be/f8QFFHBywto
I am making changes just to custom resources in my serverless.yml with an AWS provider. The package from the lambda code is not changing, it's already uploaded to S3 from a previous deploy.
How can I say "use the artifacts already in S3, just upload the changed cloudformation template and update the stack using that"?
Updating only the infrastructure with the Serverless Framework is not achievable right now; you will need to perform a full deployment even if there were no code changes.
However, executing a regular sls deploy won't do the trick if no code has changed, as the framework won't detect infrastructure-only changes. If you want to force a redeployment (e.g. you have hooked up a new trigger for your Lambda function in your serverless.yml file), you must use the --force flag:
sls deploy --force
I've installed Spinnaker on AWS using https://aws.amazon.com/quickstart/architecture/spinnaker/
I've also installed Halyard and updated Spinnaker to 1.5.0
Problem is after I execute
hal config features edit --chaos true
The option for ChaosMonkey doesn't appear in the UI.
I've restarted the service and rebooted the system, and I've also tried to manually change the setting in the settings.js files of Deck, but to no avail.
What am I missing?
There is a guide here for getting Chaos Monkey up and running on GCE with hal. I realize it's not a perfect answer, but maybe it will point out a platform-independent step you might have missed?
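One step that is easy to miss with Halyard: editing a feature flag only changes Halyard's local configuration; nothing reaches the running Spinnaker services until you apply the deployment. Assuming a standard Halyard install, the full sequence would be:

```shell
# Enable the chaos feature flag in Halyard's local config
hal config features edit --chaos true

# Push the updated configuration out to the Spinnaker services;
# without this step the UI will not pick up the change
hal deploy apply
```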
I'm trying to create a Jenkins job to spin up a VM on Amazon EC2 based on an AMI that I currently have saved. I've done my searching and can't find an easy way to do this other than through Amazon's GUI. This isn't very ideal as there are a lot of manual steps involved and it's time-consuming.
If anyone's had any luck doing this or could point me in the right direction that would be great.
Cheers,
Darwin
Unless I'm misunderstanding the question, this should be possible using the CLI. Assuming you can install and configure the CLI on your Jenkins server, you can just run the command as a shell script as part of the build.
Create an instance with CLI.
The command would be something along the lines of:
[path to cli]/aws ec2 run-instances --image-id ami-xyz
If your setup is too complicated for a single CLI command, I would recommend creating a simple CloudFormation template.
If you are unable to install the CLI, you could use any number of SDKs (e.g. Java) to make a simple application you could run with Jenkins.
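A slightly fuller sketch of the CLI approach (every ID and name below is a placeholder) that launches the instance and captures its ID so later build steps can act on it:

```shell
# Placeholders: substitute your own AMI, instance type, key pair,
# security group and subnet values.
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id ami-xyz \
  --instance-type t3.micro \
  --key-name my-key \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --query 'Instances[0].InstanceId' \
  --output text)

# Block until EC2 reports the instance as running
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
echo "Launched $INSTANCE_ID"
```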
There is also the Jenkins EC2 Plugin.
Looking at the documentation, it looks like you may be able to reuse your AMI. If not, you can configure it with an init script:
Next, configure AMIs that you want to launch. For this, you need to find the AMI IDs for the OS of your choice. ElasticFox is a good tool for doing that, but there are a number of other ways to do it. Jenkins can work with any Unix AMIs. If using an Ubuntu EC2 or UEC AMI you need to fill out the rootCommandPrefix and remoteAdmin fields under 'advanced'. Windows is currently unsupported.