CodeBuild set No VPC using aws codebuild cli - aws-codebuild

How can I set No VPC using aws codebuild cli?
I tried using:
aws codebuild update-project \
--name <PROJECT_NAME> \
--vpc-config vpcId='',subnets='',securityGroupIds=''
But I'm getting the error:
Invalid length for parameter vpcConfig.subnets[0], value: 0, valid range: 1-inf
Invalid length for parameter vpcConfig.vpcId, value: 0, valid range: 1-inf
Invalid length for parameter vpcConfig.securityGroupIds[0], value: 0, valid range: 1-inf

Use null instead of the empty string '':
--vpc-config vpcId=null,subnets=null,securityGroupIds=null
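Putting it together, the full command would look like this (same project-name placeholder as above):
aws codebuild update-project \
--name <PROJECT_NAME> \
--vpc-config vpcId=null,subnets=null,securityGroupIds=null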

Related

Gitlab CI/CD error: nested array of strings up to 10 levels deep

I am writing a GitLab CI/CD pipeline to put encryption on the S3 bucket. I am following the official documentation link from AWS, but while running it in the GitLab CI/CD pipeline, I am getting this error in the editor.
This GitLab CI configuration is invalid: jobs:onestage:script config should be a string or a nested array of strings up to 10 levels deep.
The line causing the error is as follows:
aws s3api put-bucket-encryption --bucket bucket-name --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
Thanks @Joachim-isaksson for your help; it indeed helped me to solve this error. Meanwhile, I am putting here the code that I used to solve it:
'aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration "{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}"'
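For context, that quoted command sits inside the job's script list roughly like this (the job name and bucket name are placeholders):
encrypt-bucket:
  script:
    - 'aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration "{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}"'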

(gitlab to aws s3): did not find expected key while parsing a block mapping at line 1 column 1

This GitLab CI configuration is invalid: (): did not find expected key while parsing a block mapping at line 1 column 1.
I have the below gitlab-ci.yml file, which shows an error in the pipeline:
deploy:
image:
stage: deploy
name: banst/awscli
entrypoint: [""]
script:
- aws configure set region us-east-1
- aws s3 sync . s3://$S3_BUCKET/
only:
main
I have already done the following:
registered the GitLab runner, and it is running
added the AWS S3 bucket, AWS access key ID, and secret access key as variables
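For comparison with the gitlab-ci.yml above, a correctly nested version of the deploy job would look roughly like the sketch below; the "line 1 column 1" parser error often comes down to indentation, and this structure is an assumption about the intended layout, not the original file:
deploy:
  stage: deploy
  image:
    name: banst/awscli
    entrypoint: [""]
  script:
    - aws configure set region us-east-1
    - aws s3 sync . s3://$S3_BUCKET/
  only:
    - main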

bypass input values in GitHub Actions workflow to a terraform variables file

As part of provisioning Google Cloud resources with GitHub Actions and Terraform, I need to pass some input values through a Terraform variables file; the issue is that HCL does not support the GitHub Actions expression syntax (${{ ... }}).
I have tried to do the following:
Create a GitHub Actions workflow with:
workflow_dispatch:
  inputs:
    new_planet:
      description: 'Bucket Name'
      required: true
      default: 'some bucket'
At the end of the workflow there is:
- name: terraform plan
  id: plan
  run: |
    terraform plan -var-file=variables.tf
In the variables.tf:
variable "backend_bucket" {
type = string
default = ${{ github.event.inputs.new_planet }}
description = "The backend bucket name"
I would appreciate any ideas on how to pass the input values from the workflow into Terraform.
You can use the -backend-config option on the command line [1]. You would first need to configure the backend (e.g., by creating a backend.tf file) and add this:
terraform {
  backend "s3" {
  }
}
This way, you would be prompted for input every time you run terraform init. However, there is an additional CLI option, -input=false, which prevents Terraform from asking for input. The snippet below moves into the directory where the Terraform code is (depending on the name of the repo, the directory name will be different) and runs terraform init with the -backend-config options as well as -input set to false:
- name: Terraform Init
  id: init
  run: |
    cd terraform-code
    terraform init -backend-config="bucket=${{ secrets.STATE_BUCKET_NAME }}" \
      -backend-config="key=${{ secrets.STATE_KEY }}" \
      -backend-config="region=${{ secrets.AWS_REGION }}" \
      -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY_ID }}" \
      -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
      -input=false -no-color
Since I suppose you don't want the bucket name and other sensitive values hardcoded, I suggest using GitHub Actions secrets [2].
After you set this up, you can run terraform plan without having to specify variables for the backend config. Additionally, you could create a terraform.tfvars file in one of the previous steps so it can be consumed by the plan step. Here is one of my examples:
- name: Terraform Tfvars
  id: tfvars
  run: |
    cd terraform-code
    cat << EOF > terraform.tfvars
    profile = "profilename"
    aws_region = "us-east-1"
    EOF
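If the goal from the question is to get the workflow_dispatch input into Terraform, the same pattern can write it into terraform.tfvars; this is a sketch that assumes the new_planet input and the backend_bucket variable from the question:
- name: Terraform Tfvars
  id: tfvars
  run: |
    cd terraform-code
    cat << EOF > terraform.tfvars
    backend_bucket = "${{ github.event.inputs.new_planet }}"
    EOF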
You would finish off with the following snippet (note the -input=false again):
- name: Terraform Plan
  id: plan
  run: |
    cd terraform-code
    terraform plan -no-color -input=false
  continue-on-error: true
All of the Terraform setup described here is available through the GitHub Action provided by HashiCorp [3].
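For example, a minimal step using that action could look like this (the version pin here is an assumption):
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_version: "1.0.11"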
[1] https://www.terraform.io/docs/language/settings/backends/configuration.html#partial-configuration
[2] https://docs.github.com/en/actions/security-guides/encrypted-secrets
[3] https://github.com/hashicorp/setup-terraform

aws Decrypted Variables Error Message: parameter does not exist: JWT_SECRET

I am new to AWS and I was trying to create a pipeline, but it returns this error once it builds:
[Container] 2020/05/23 04:32:56 Phase context status code: Decrypted Variables Error Message: parameter does not exist: JWT_SECRET
Even though the token was stored by running this command:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString
I tried to fix that by adding this line to the buildspec.yml post-build commands, but it still does not fix the problem:
- kubectl set env deployment/simple-jwt-api JWT_SECRET=$JWT_SECRET
My buildspec.yml contains this added section to pass my JWT secret to the app:
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Check my github repos for more details about the code
Also, once I run kubectl get services simple-jwt-api -o wide from the command line to test the API endpoints, I get this error:
Error from server (NotFound): services "simple-jwt-api" not found
Well, that is obvious, since the pipeline failed to build. How can I fix this, please?
In my case I got this error because I had created my stack in a different region than the cluster, so whenever it searched for the parameter it did not find it. Be careful to point to the same region in every creation action :).
The best solution I found was to add a --region flag when creating the parameter:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString --region <your-cluster-region>
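To double-check that the parameter is actually visible in the region where CodeBuild runs, a quick sanity check (same region placeholder as above) would be:
aws ssm get-parameter --name JWT_SECRET --with-decryption --region <your-cluster-region>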
I also encountered this same issue; changing the kubectl version in the buildspec.yml file worked for me:
- curl -LO https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl
# Download the kubectl checksum file
- curl -LO "https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl.sha256"
Note that <YOUR_KUBERNETES_VERSION> must be the same as what you have on your created cluster dashboard.
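If you also want to verify the download against that checksum file, a line along these lines (taken from the upstream kubectl install instructions) can be added to the same command list:
- echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check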

Filter S3 list-objects results to find a key matching a pattern

I would like to use the AWS CLI to query the contents of a bucket and see if a particular file exists, but the bucket contains thousands of files. How can I filter the results to only show key names that match a pattern? For example:
aws s3api list-objects --bucket myBucketName --query "Contents[?Key==*mySearchPattern*]"
The --query argument uses JMESPath expressions. JMESPath has an internal function contains that allows you to search for a string pattern.
This should give the desired results:
aws s3api list-objects --bucket myBucketName --query "Contents[?contains(Key, `mySearchPattern`)]"
(With Linux I needed to use single quotes ' rather than back ticks ` around mySearchPattern.)
If you want to search for keys starting with certain characters, you can also use the --prefix argument:
aws s3api list-objects --bucket myBucketName --prefix "myPrefixToSearchFor"
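The two approaches can also be combined; for example, something like this (bucket name, prefix, and pattern are placeholders) restricts the listing server-side with --prefix and then filters the returned keys client-side with --query:
aws s3api list-objects --bucket myBucketName --prefix "myPrefixToSearchFor" --query "Contents[?contains(Key, 'mySearchPattern')].Key" --output text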
I tried this on Ubuntu 14 with awscli 1.2:
--query "Contents[?contains(Key,'stati')].Key"
--query "Contents[?contains(Key,\'stati\')].Key"
--query "Contents[?contains(Key,`stati`)].Key"
and got:
Illegal token value '?contains(Key,'stati')].Key'
After upgrading the aws version to 1.16, this worked:
--query "Contents[?contains(Key,'stati')].Key"