GitLab CI/CD error: nested array of strings up to 10 levels deep - amazon-s3

I am writing a GitLab CI/CD pipeline to enable encryption on an S3 bucket, following the official AWS documentation. But when I run it in the GitLab CI/CD pipeline, I get this error in the editor:
This GitLab CI configuration is invalid: jobs:onestage:script config should be a string or a nested array of strings up to 10 levels deep.
The offending line is as follows:
aws s3api put-bucket-encryption --bucket bucket-name --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'

Thanks #Joachim-isaksson for your help; it did indeed help me solve this error. For reference, here is the line I ended up using:
'aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration "{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}"'
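The error comes from YAML parsing, not from the AWS CLI: with the command unquoted, the colon-space and brackets inside the JSON make GitLab's parser read the script entry as a nested mapping instead of a plain string. A minimal sketch of the fixed job (the job name is taken from the error message; it assumes the job's image already has the AWS CLI installed and credentials configured):

onestage:
  script:
    # Wrapping the whole command in single quotes makes YAML treat it as one string;
    # the double quotes around the JSON are escaped so the shell still passes valid JSON to the CLI.
    - 'aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration "{\"Rules\": [{\"ApplyServerSideEncryptionByDefault\": {\"SSEAlgorithm\": \"AES256\"}}]}"'

Alternatively, a YAML block scalar (a list item written as "- |" with the command on the next, indented line) lets you keep the original single-quoted JSON without any extra escaping.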

Related

Persistent Bitbucket pipeline build artifacts greater than 14 days

I have a pipeline which loses build artifacts after 14 days. That is, after 14 days, without S3 or Artifactory integration, the pipeline loses its "Deploy" button functionality - it becomes greyed out because the build artifact has been removed. I understand this is intentional on Bitbucket/Atlassian's part to reduce costs etc. (details in the link below).
Please check the last section, "Artifact downloads and Expiry", of this page: https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
If you need artifact storage for longer than 14 days (or more than 1 GB), we recommend using your own storage solution, like Amazon S3 or a hosted artifact repository like JFrog Artifactory.
Question:
Is anyone able to provide advice or sample code on how to approach Bitbucket Pipelines integration with Artifactory (or S3) in order to retain artifacts? Is the Artifactory generic upload/download pipe approach the only way, or is the quote above hinting at a more native Bitbucket "repository setting" that provides integration with S3 or Artifactory? https://www.jfrog.com/confluence/display/RTF6X/Bitbucket+Pipelines+Artifactory+Pipes
Bitbucket gives an example of linking to an S3 bucket on their site.
https://support.atlassian.com/bitbucket-cloud/docs/publish-and-link-your-build-artifacts/
The key is Step 4 where you link the artefact to the build.
However, the example doesn't actually create an artefact that is linked to S3; rather, it adds a build status with a description that links to the uploaded items in S3. To use these in further steps you would then have to download the artefacts.
This can be done using the AWS CLI and an image that has it installed, for example amazon/aws-sam-cli-build-image-nodejs14.x (SAM was required in my case).
The following is an example that:
Creates an artefact (a txt file) and uploads it to an AWS S3 bucket
Creates a "link" as a build status against the commit that triggered the pipeline, as per Atlassian's suggestion (this is only useful as a reference after the 14 days... meh)
Carries out a "deployment", whereby the artefact is downloaded from AWS S3; in this stage I also set the downloaded S3 artefact as a Bitbucket artefact. I mean, why not... it may expire after 14 days, but if I've just re-deployed then I may want it available for another 14 days.
image: amazon/aws-sam-cli-build-image-nodejs14.x

pipelines:
  branches:
    main:
      - step:
          name: Create artefact
          script:
            - mkdir -p artefacts
            - echo "This is an artefact file..." > artefacts/buildinfo.txt
            - echo "Generating Build Number:\ ${BITBUCKET_BUILD_NUMBER}" >> artefacts/buildinfo.txt
            - echo "Git Commit Hash:\ ${BITBUCKET_COMMIT}" >> artefacts/buildinfo.txt
            - aws s3api put-object --bucket bitbucket-artefact-test --key ${BITBUCKET_BUILD_NUMBER}/buildinfo.txt --body artefacts/buildinfo.txt
      - step:
          name: Link artefact to AWS S3
          script:
            - export S3_URL="https://bitbucket-artefact-test.s3.eu-west-2.amazonaws.com/${BITBUCKET_BUILD_NUMBER}/buildinfo.txt"
            - export BUILD_STATUS="{\"key\":\"doc\", \"state\":\"SUCCESSFUL\", \"name\":\"DeployArtefact\", \"url\":\"${S3_URL}\"}"
            - curl -H "Content-Type:application/json" -X POST --user "${BB_AUTH_STRING}" -d "${BUILD_STATUS}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/commit/${BITBUCKET_COMMIT}/statuses/build"
      - step:
          name: Test - Deployment
          deployment: Test
          script:
            - mkdir artifacts
            - aws s3api get-object --bucket bitbucket-artefact-test --key ${BITBUCKET_BUILD_NUMBER}/buildinfo.txt artifacts/buildinfo.txt
            - cat artifacts/buildinfo.txt
          artifacts:
            - artifacts/**
Note:
I've got the following secrets/variables against the repository:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
BB_AUTH_STRING
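If you want to double-check that the "link" status was actually attached to the commit, a command like the following can be added at the end of the "Link artefact to AWS S3" step's script (a sketch only; it reuses the same BB_AUTH_STRING credentials and Bitbucket's commit statuses listing endpoint, and simply prints the S3 URL if the status exists):

- curl -s --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/commit/${BITBUCKET_COMMIT}/statuses" | grep -o "${S3_URL}"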

aws Decrypted Variables Error Message: parameter does not exist: JWT_SECRET

I am new to AWS and I was trying to create a pipeline, but it throws this error once it builds:
[Container] 2020/05/23 04:32:56 Phase context status code: Decrypted Variables Error Message: parameter does not exist: JWT_SECRET
Even though the parameter was stored by running this command:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString
I tried to fix it by adding this line to the buildspec.yml post-build commands, but it still does not fix the problem:
- kubectl set env deployment/simple-jwt-api JWT_SECRET=$JWT_SECRET
My buildspec.yml contains this section to pass my JWT secret to the app:
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Check my GitHub repo for more details about the code.
Also, when I run kubectl get services simple-jwt-api -o wide from the command line to test the API endpoints, I get this error:
Error from server (NotFound): services "simple-jwt-api" not found
Well, that is to be expected since the pipeline failed to build. How can I fix it?
In my case I got this error because I had created my stack in a different region than the cluster, so whenever it searched for the variable it did not find it. Be careful to point to the same region in every creation action :).
The best solution I found was to add the --region flag when storing the parameter:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString --region <your-cluster-region>
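To confirm the parameter really exists in the region CodeBuild runs in, it can also help to read it back before wiring it into the buildspec (a sanity check only; the region placeholder is the same as above):

# A ParameterNotFound error here means the name or region does not match
# what the buildspec's parameter-store section is looking for.
aws ssm get-parameter --name JWT_SECRET --with-decryption --region <your-cluster-region>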
I also encountered this same issue. Changing the kubectl version in the buildspec.yml file worked for me:
- curl -LO https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl
# Download the kubectl checksum file
- curl -LO "https://dl.k8s.io/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl.sha256"
Note that <YOUR_KUBERNETES_VERSION> must match the Kubernetes version shown on your cluster's dashboard.
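For context, here is a rough sketch of how those download lines could sit in the buildspec's install phase, with the checksum actually verified before installing (the version placeholder is kept from above; the install path is an assumption, not taken from the original pipeline):

phases:
  install:
    commands:
      # Download a kubectl binary matching the cluster's Kubernetes version
      - curl -LO https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl
      - curl -LO https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl.sha256
      # Verify the download against the published checksum, then install it
      - echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
      - chmod +x kubectl && mv kubectl /usr/local/bin/kubectl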

AWS CLI throws error when copying large files

I'm trying to copy objects from one S3 bucket to another using the AWS CLI.
It works fine for small objects, but on buckets with large files, as soon as the copy starts, I get one of the following errors:
copy failed: s3://bucket/file.ogv to s3://bucket-tmp/file.ogv ('Connection aborted.', OSError(0, 'Error'))
or
copy failed: s3://bucket/file.ogv to s3://bucket-tmp/file.ogv An error occurred (NoSuchKey) when calling the UploadPartCopy operation: Unknown
If I include --no-guess-mime-type I get:
fatal error: ('Connection aborted.', OSError(0, 'Error'))
I tried --debug, but I didn't understand much of the debug output; I could see OSError(0, 'Error') again in the log.
Has anyone seen anything like this? In another answer (this one), people suggested another tool, s3cmd, but I couldn't make it work.
I'm trying to access Ceph on a corporate server with path-style URLs and an HTTPS endpoint.
My command:
aws --endpoint-url https://myendpoint.url s3 cp s3://mybucket s3://mybucket-tmp --recursive
Also, when I tried to configure s3cmd, I got an ugly Python traceback with OSError: [Errno 0] Error in the middle.
I discovered that it works if I use the s3api command instead of the s3 command. Format of the working command:
aws --endpoint-url <my-endpoint-url> s3api copy-object --copy-source my-source-bucket/whatever/path/file.txt --key whatever/path/file.txt --bucket my-destination-bucket
It only copies one file at a time. You can grab a list of objects in the bucket using the s3 command ls or the s3api command list-objects.
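Since copy-object moves a single key at a time, a small wrapper can list the source bucket and copy everything in a loop. This is only a sketch: the endpoint and bucket names are the ones from the question, it assumes the endpoint supports ListObjectsV2 (otherwise list-objects works the same way), and keys with special characters may need URL-encoding for --copy-source.

#!/bin/bash
ENDPOINT="https://myendpoint.url"
SRC_BUCKET="mybucket"
DST_BUCKET="mybucket-tmp"

# List every key in the source bucket (the CLI paginates automatically),
# then copy each object server-side with copy-object.
aws --endpoint-url "$ENDPOINT" s3api list-objects-v2 \
    --bucket "$SRC_BUCKET" --query 'Contents[].Key' --output text \
  | tr '\t' '\n' \
  | while read -r key; do
      echo "Copying $key"
      aws --endpoint-url "$ENDPOINT" s3api copy-object \
          --copy-source "$SRC_BUCKET/$key" \
          --bucket "$DST_BUCKET" --key "$key"
    done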

aws-cli fails to work with one particular S3 bucket on one particular machine

I'm trying to remove the existing objects (empty the bucket) and then copy new ones into an AWS S3 bucket:
aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive
aws s3 cp ./ s3://BUCKET_NAME/ --region us-east-2 --recursive
The first command fails with the following error:
An error occurred (InvalidRequest) when calling the ListObjects operation: You are attempting to operate on a bucket in a region that requires Signature Version 4. You can fix this issue by explicitly providing the correct region location using the --region argument, the AWS_DEFAULT_REGION environment variable, or the region variable in the AWS CLI configuration file. You can get the bucket's location by running "aws s3api get-bucket-location --bucket BUCKET".
Completed 1 part(s) with ... file(s) remaining
Well, the error message is self-explanatory, but the problem is that I've already applied the solution (I've added the --region argument) and I'm completely sure that it is the correct region (I got the region the same way the error message suggests).
Now, to make things even more interesting, the error happens in a GitLab CI environment (let's just say some server). But just before this error occurs, the exact same command is executed against other buckets and it works. It's worth mentioning that those other buckets are in different regions.
Now, to top it all off, I can execute the command on my personal computer with the same credentials as on the CI server!!! So to summarize:
server$ aws s3 rm s3://OTHER_BUCKET --region us-west-2 --recursive <== works
server$ aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive <== fails
my_pc$ aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive <== works
Does anyone have any pointers what might the problem be?
For anyone else that might be facing the same problem, make sure your AWS CLI is up to date!!!
server$ aws --version
aws-cli/1.10.52 Python/2.7.14 Linux/4.13.9-coreos botocore/1.4.42
my_pc$ aws --version
aws-cli/1.14.58 Python/3.6.5 Linux/4.13.0-38-generic botocore/1.9.11
Once I updated the server's aws cli tool, everything worked. Now my server is:
server$ aws --version
aws-cli/1.14.49 Python/2.7.14 Linux/4.13.5-coreos-r2 botocore/1.9.2
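If upgrading the CLI isn't an option right away, another thing worth trying (untested on that exact setup, but it targets the same Signature Version 4 requirement the error mentions) is forcing SigV4 for S3 in the CLI configuration:

# Force Signature Version 4 for S3 requests in the default profile
aws configure set default.s3.signature_version s3v4

# Equivalent entry in ~/.aws/config:
# [default]
# s3 =
#     signature_version = s3v4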

AWS EMR --steps

I am running the following .sh script to launch a cluster on AWS using EMR:
aws emr create-cluster --name "Big Matrix Re Run 5" --ami-version 3.1.0 --auto-terminate --log-uri FILE LOCATION --enable-debugging --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=c3.xlarge InstanceGroupType=CORE,InstanceCount=3,InstanceType=c3.xlarge --steps NAME AND LOCATION OF FILE
I've removed the actual file names and locations since those aren't the issue, but I am having a problem with the --steps portion of the script.
How do I specify the steps that I want to run in the cluster? The documentation doesn't give any examples.
Here is the error:
Error parsing parameter '--steps': should be: Key value pairs, where values are separated by commas, and multiple pairs are separated by spaces.
--steps Name=string1,Jar=string1,ActionOnFailure=string1,MainClass=string1,Type=string1,Properties=string1,Args=string1,string2 Name=string1,Jar=string1,ActionOnFailure=string1,MainClass=string1,Type=string1,Properties=string1,Args=string1,string2
Thanks!
The documentation page for the AWS Command-Line Interface create-cluster command shows examples for using the --steps parameter.
Steps can be supplied inline on the command line, or loaded from a JSON file via the file:// prefix; in either case the step arguments can reference scripts stored in HDFS or Amazon S3.
From a local JSON file:
aws emr create-cluster --steps file://./multiplefiles.json --ami-version 3.3.1 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --auto-terminate
Inline, with scripts stored in Amazon S3:
aws emr create-cluster --steps Type=HIVE,Name='Hive program',ActionOnFailure=CONTINUE,Args=[-f,s3://elasticmapreduce/samples/hive-ads/libs/model-build.q,-d,INPUT=s3://elasticmapreduce/samples/hive-ads/tables,-d,OUTPUT=s3://mybucket/hive-ads/output/2014-04-18/11-07-32,-d,LIBS=s3://elasticmapreduce/samples/hive-ads/libs] --applications Name=Hive --ami-version 3.1.0 --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge
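For the file:// form, the JSON file is simply a list of step objects using the same keys as the shorthand syntax. A minimal sketch of such a file (the step names, JAR location, and output paths below are illustrative, not taken from the original cluster):

[
  {
    "Name": "Hive program",
    "Type": "HIVE",
    "ActionOnFailure": "CONTINUE",
    "Args": [
      "-f", "s3://elasticmapreduce/samples/hive-ads/libs/model-build.q",
      "-d", "INPUT=s3://elasticmapreduce/samples/hive-ads/tables",
      "-d", "OUTPUT=s3://mybucket/hive-ads/output/2014-04-18/11-07-32"
    ]
  },
  {
    "Name": "Custom JAR step",
    "Type": "CUSTOM_JAR",
    "Jar": "s3://mybucket/mytest.jar",
    "ActionOnFailure": "CONTINUE",
    "Args": ["arg1", "arg2"]
  }
]

Pass it with --steps file://./multiplefiles.json, as in the first example above.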