When I check the 'Build phase' tab of a CodeBuild project invoked from CodePipeline using CODEBUILD_CLONE_REF, I see the following error:
Internal Service Error: CodeBuild is experiencing issues
AWS support responded that this can be caused by having too many git refs in the repo. At the time of this writing, CodeBuild uses a Go-based library for handling Git, and that library has a known limitation.
AWS support recommended switching to the CODE_ZIP method of passing code between the CodePipeline source stage and the affected CodeBuild stage.
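For reference, this is a property of the pipeline's source action. A rough sketch of the change via the CLI (the pipeline name is a placeholder; the console's edit-action dialog works too):

```
# Export the current pipeline definition
aws codepipeline get-pipeline --name my-pipeline > pipeline.json

# Edit pipeline.json:
#   1. delete the top-level "metadata" block (update-pipeline rejects it)
#   2. in the source action's "configuration", set
#      "OutputArtifactFormat": "CODE_ZIP"  (instead of CODEBUILD_CLONE_REF)

# Apply the change
aws codepipeline update-pipeline --cli-input-json file://pipeline.json
```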
We might also try to reduce the number of refs in the repo.
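If pruning refs instead, something along these lines can be used to count and delete them (the branch and tag names are placeholders):

```
# Count refs on the remote; a very large count can trip the limitation
git ls-remote origin | wc -l

# Delete refs that are no longer needed, e.g. stale branches and tags
git push origin --delete old-feature-branch
git push origin --delete refs/tags/old-tag
```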
Summary:
When attempting to `amplify push` changes to my GraphQL API via the Amplify CLI after aborting a previous `amplify push` (with Ctrl-C), the CLI complains that a deployment is in progress and refuses to deploy.
First error message:
A deployment is in progress.
If the prior rollback was aborted, run:
`amplify push --iterative-rollback` to rollback the prior deployment
`amplify push --force` to re-deploy
Both of those suggested commands result in:
✖ An error occurred when pushing the resources to the cloud
Cannot iteratively rollback as the following step does not contain a previousMetaKey: {"status":"WAITING_FOR_DEPLOYMENT"}
An error occurred during the push operation: Cannot iteratively rollback as the following step does not contain a previousMetaKey: {"status":"WAITING_FOR_DEPLOYMENT"}
All deployments in the Amplify Admin UI show as completed.
I tried `amplify pull`, `amplify env pull`, and `amplify pull --restore` (all of which overwrite your local changes, so heads up). None of them solved my problem; I still could not `amplify push`.
I finally found this idea: https://github.com/aws-amplify/amplify-adminui/issues/172#issuecomment-819784558
Solution
Deleting the deployment-state.json file, as suggested in that reply, allowed me to perform `amplify push` again. If you open the file, you'll see that this is where the CLI must be reading the {"status":"WAITING_FOR_DEPLOYMENT"} from.
Sharing my solution here in case someone else has the same problem!
Open the S3 bucket corresponding to the Amplify app (the name usually begins with "amplify-appname...") and delete the "deployment-state.json" file.
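A minimal CLI sketch of the same steps (the bucket name here is a made-up example; confirm yours first):

```
# Find the Amplify deployment bucket (the name pattern is an assumption)
aws s3 ls | grep -i amplify

# Remove the stuck state file; substitute your actual bucket name
aws s3 rm s3://amplify-appname-env-deployment/deployment-state.json
```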
I've run into an issue using MLflow server. When I first ran the command to start an MLflow server on an EC2 instance, everything worked fine. Now, although logs and artifacts are being stored to Postgres and S3, the UI is not listing the artifacts. Instead, the artifact section of the UI shows:
Loading Artifacts Failed
Unable to list artifacts stored under <s3-location> for the current run. Please contact your tracking server administrator to notify them of this error, which can happen when the tracking server lacks permission to list artifacts under the current run's root artifact directory.
But when I check in S3, I see the artifact in the location that the error shows. What could possibly have started causing this? It used to work not long ago, and nothing was changed on the EC2 instance hosting MLflow.
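For context, the server is started with a command along these lines (the connection string and bucket are placeholders, not my exact values):

```
# MLflow tracking server with a Postgres backend store and an S3 artifact root
mlflow server \
  --backend-store-uri postgresql://mlflow:password@localhost:5432/mlflow \
  --default-artifact-root s3://my-mlflow-bucket/artifacts \
  --host 0.0.0.0 --port 5000
```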
I found the answer. The error was that MLflow could not find boto3, so installing it with conda fixed the problem. The log line for this was buried and hard to find in stdout.
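Roughly, assuming the tracking server runs in a conda environment named `mlflow` (the environment name is a placeholder):

```
# Install boto3 into the environment the tracking server runs in
conda activate mlflow
conda install -c conda-forge boto3

# Verify it is importable from that environment
python -c "import boto3; print(boto3.__version__)"
```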
I am trying to create a deployment pipeline for GitLab CI on a React project. The build works fine, and I use artifacts to store the dist folder produced by my `yarn build` command. That part works as well.
The issue is with my deployment command: `aws s3 sync dist/ 'bucket-name'`.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket; however, I do not know why the deployment job reports an error.
When I run `aws s3 sync dist/ 'bucket-name'` locally, everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean that at least one or more files marked for transfer were skipped during the transfer process; however, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special devices, FIFOs, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
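If that is what's happening, the job can surface it explicitly rather than just dying; a sketch (the bucket name is a placeholder):

```
# Capture the sync's exit status instead of letting the job abort on it
rc=0
aws s3 sync dist/ s3://bucket-name || rc=$?
if [ "$rc" -eq 2 ]; then
  echo "sync completed, but one or more files were skipped (exit code 2)"
fi
```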
There is no built-in `yarn build` command; it simply runs the "build" script from your package.json. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVG filenames. I suspect uploading the dist folder as an artifact (zip) might have changed some of the file names altogether, which confused S3. By removing ® and + from the filenames, the issue was resolved.
I am running into this error during the "DOWNLOAD_SOURCE" phase in CodeBuild:
"invalid pkt-len found"
No other information is provided. I have tried various things to rule out problems.
a) The CodeCommit repo clones successfully and appears to be fully functional.
b) Building from an earlier revision of this CodeCommit repository that had previously built successfully fails with the same error message.
c) Building from a separate CodeCommit repository, with a separate CodeBuild project that has previously built successfully AND has no new commits, fails with the same error.
d) A brand-new CodeBuild project and CodeCommit repo does not fail.
e) Building the same CodeBuild job that fails, but with a zip file (of the same code base) as the source instead of CodeCommit, does not fail.
I was getting the same error in CodeBuild. It turned out I was using the URL of a sub-folder in the repository. Since that was not a proper Git repo URL, it threw an "invalid pkt-len" error. I hope this helps somebody who stumbles onto the same error.
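To illustrate the difference (the region and repo name are made up):

```
# Works: the repository's root clone URL
https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo

# Fails with "invalid pkt-len": a path pointing inside the repository
https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo/some/sub-folder
```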
Got a response from AWS - this was an issue on their end, which they have resolved.
Attempting to deploy via the Serverless Framework on Windows 10 fails:
C:\Users\xxxxxx>sls deploy --verbose
Serverless: Packaging service...
Serverless: Excluding development dependencies...

Error --------------------------------------------------

EPERM: operation not permitted, scandir 'C:\Users\xxxxxx\AppData\Local\ElevatedDiagnostics'

For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.

Your Environment Information -----------------------------
OS: win32
Node Version: 6.11.2
Serverless Version: 1.19.0
Tried again with command prompt under elevated privileges:
EBUSY: resource busy or locked, scandir 'C:\Users\xxxxxx\AppData\Local\Microsoft\InputPersonalization\TextHarvester\WaitList.dat'
I assumed there was a permissions issue at first, so I retried with the command prompt in full admin mode, but just ran into the second error. My research suggested an issue with Windows Search, so I turned it off (along with all background apps). Trying again (and again), I just ran into more similar errors and was unable to deploy anything. Has anyone had similar issues and found a way around them?
I finally worked it out, so in case anyone else encounters this issue, here is a summary. There seem to be two issues:
Don't create functions in your root folder. Create a specific folder for your serverless function, i.e. not in C:\Users\nnnnnn> but within your regular document storage. On Windows 10 it works nicely if you use a OneDrive folder, with the benefit that your function(s) are then also replicated to other dev machines you might use (and are automatically backed up offsite).
More importantly, the Serverless Framework seems to have an issue if you attempt to deploy to a region other than the default region set in your AWS CLI configuration. I have no idea why this should be, since the credentials I use with the AWS CLI are authorized for all regions. I also have no idea why the issue results in serverless attempting to access a whole series of Windows files for which it has no authority, but nevertheless...
In my case, I primarily use region ap-southeast-2. By default, `sls create` generates a serverless.yml that uses a default US region. If this is left as-is, there is a mismatch between the deployment region and your AWS CLI region. Not good. To avoid the minor pain of having to specify a deployment region on every `sls deploy`, just update the deployment region in the serverless.yml file to match the CLI region, as sketched below.
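Concretely (the region value is mine; use whatever your CLI reports):

```
# Check what region the AWS CLI defaults to
aws configure get region

# Either pass the region explicitly on each deploy...
sls deploy --region ap-southeast-2 --verbose

# ...or set it once in serverless.yml so it always matches:
#   provider:
#     region: ap-southeast-2
```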
Now works a treat...