I'm having a problem with AWS CloudFormation…
I guess, as I'm new, I'm missing something…
So I installed the SAM CLI on my Mac, and it generated this .yaml file.
Then I go to CloudFormation and try to upload this file to a stack.
During creation it gives me an error:
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless
Application Specification document. Number of errors found: 1. Resource
with id [HelloWorldFunction] is invalid. 'CodeUri' is not a valid S3 Uri
of the form 's3://bucket/key' with optional versionId query parameter..
Rollback requested by user.
What should I do here?
I'm trying to create a Lambda function triggered by an S3 file upload, and I need a .yaml file for CloudFormation to describe all the services and triggers… I found it extremely difficult to find a template that works…
How should I try to fix this, when even CLI-generated YAML files don't work?
Shouldn't CloudFormation initialize a Lambda function when no such function has been created yet?
Thanks a lot
The templates that AWS SAM uses are more flexible than those that AWS CloudFormation can interpret. The problem you're running into here is that AWS SAM can handle a relative path on your file system as the CodeUri for your Lambda function; CloudFormation, however, expects an S3 URI so that it can retrieve the function code and upload it to the Lambda function.
You should have a look at the sam package command. This command resolves all SAM-specific things (e.g., it uploads the code to S3 and replaces the CodeUri in the template) and creates a "packaged template" file that you can then upload to CloudFormation.
You can also use the sam deploy command, which packages the template and deploys it to CloudFormation itself.
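Roughly, that workflow looks like the following; the template, bucket, and stack names here are just placeholders for whatever your project uses:

    # Upload local code to S3 and write out a template whose CodeUri points at S3
    sam package \
        --template-file template.yaml \
        --s3-bucket my-deployment-bucket \
        --output-template-file packaged.yaml

    # Or package and create/update the CloudFormation stack in one step
    sam deploy \
        --template-file packaged.yaml \
        --stack-name my-sam-app \
        --capabilities CAPABILITY_IAM

The packaged.yaml produced by sam package is the file you would upload to CloudFormation instead of the original template.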
I am trying to download a file (about 2 TB) from S3 to my local server, like so:
aws s3 cp s3://outputs/star_output.tar.gz ./ --profile abcd --endpoint-url=https://abc.edu
It seems the downloading finished with a file like star_output.tar.gz.9AB04cEd but ended up with a failure:
download failed: s3://outputs/star_output.tar.gz to ./ local variable 'current_index' referenced before assignment
And the file star_output.tar.gz.9AB04cEd was also automatically deleted.
I tried a small text file and it downloaded with no issue. Is this related to the size of the file (too big)?
Does anyone know the possible reason?
I am trying to create a deployment pipeline in GitLab CI for a React project. The build works fine, and I use artifacts to store the dist folder produced by my yarn build command; that part works as well.
The issue is with my deployment command: aws s3 sync dist/'bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket, however I do not know why I get an error on the deployment job.
When I run aws s3 sync dist/'bucket-name' locally, everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process; however, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special devices, FIFOs, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
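If skipped files are acceptable in your case, one way to keep the deploy job from failing on that exit code is to handle it explicitly in the job script. A rough sketch, written so it also behaves under a shell that exits on error (the bucket name is a placeholder):

    # exit code 2 from s3 commands can mean some files were skipped but the rest transferred
    status=0
    aws s3 sync dist/ s3://bucket-name || status=$?
    if [ "$status" -eq 2 ]; then
        echo "sync finished, but some files were skipped"
        exit 0
    fi
    exit "$status"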
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer described the problem. The solution was to remove special characters from a couple of SVGs. I suspect that uploading the dist folder as a (zipped) artifact might have changed some of the file names, which confused S3. Removing ® and + from the filenames resolved the issue.
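For anyone hitting the same thing, one quick way to spot such filenames before syncing might be a check like this (the character class is just an example of a "safe" ASCII set; adjust it as needed):

    # list files under dist/ whose names contain characters outside a plain ASCII set
    find dist/ -name '*[!a-zA-Z0-9._-]*'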
We maintain a Debian repository for an app, and all .deb files are stored in an S3 bucket.
We wrote a script to upload the files and update the Packages.gz file. All went fine until one of the developers found deb-s3 and tried using it.
After the first package upload we started getting this error message:
W: Failed to fetch s3://s3.amazonaws.com/myapp/dists/test/main/binary-amd64/Packages Hash Sum mismatch
I've tried restoring an old version of our Packages.gz file with no success. I've searched for this error, and removing /var/lib/apt/lists/ does not work either.
What would deb-s3 do that could break our entire repo?
Looks like deb-s3 creates a Release file under dists/test, and that conflicts with Packages.gz.
Removing the Release file restored our repository back to what it was.
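For reference, the cleanup can be done with the AWS CLI; the bucket and path below are guesses based on the error message above, so adjust them to your repo layout:

    # remove the Release file that deb-s3 added under dists/test
    aws s3 rm s3://myapp/dists/test/Release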
How do I sync files to OneDrive automatically from the terminal/command line, and also get a success or failure message after uploading?