AWS CLI create lambda function cannot see my zip file that is in S3 "No such file or directory" - amazon-s3

My first attempt was using the console, and it worked. I have a new zip file that uploaded successfully to my bucket in S3. I can list the bucket and see both files there, but when I try to use the CLI to create the Lambda function, it returns "Error parsing parameter '--zip-file': Unable to load paramfile ... No such file or directory".
From the documentation, I expect that "fileb://path/to/file.zip" implies the bucket name should be included, but I am unsure whether the region URL is needed. I tried it with and without the region URL, with the same results.
Again, I am able to use these files if I create the Lambda using the console, but not the CLI. What am I missing?
[royce@localhost ~]$ aws s3 ls s3://uploads.lai
2017-08-18 10:27:48 60383836 userpermission-1.zip
2017-08-31 07:43:50 60389082 userpermission-4.zip
2017-08-18 14:15:43 1171 userpermission.db
[royce@localhost ~]$ aws lambda create-function --function-name awstest01 --zip-file "fileb://uploads.lai/userpermission-4.zip" --runtime java8 --role execution-role-arn --handler app.handler
Error parsing parameter '--zip-file': Unable to load paramfile fileb://uploads.lai/userpermission-4.zip: [Errno 2] No such file or directory: 'uploads.lai/userpermission-4.zip'

The --zip-file flag is for uploading your function from a local zip file.
If you are using S3, the CLI command should be something along the lines of aws lambda create-function --code "S3Bucket=string,S3Key=string,S3ObjectVersion=string"
You may check the reference here:
http://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html
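Putting that together with the bucket and key from the question, a corrected invocation might look like the sketch below. The role ARN is a placeholder; substitute your own execution role:

```shell
# Create the function from the zip already sitting in S3, using the
# --code shorthand instead of --zip-file (which only reads local files).
aws lambda create-function \
  --function-name awstest01 \
  --runtime java8 \
  --handler app.handler \
  --role arn:aws:iam::123456789012:role/lambda-execution-role \
  --code S3Bucket=uploads.lai,S3Key=userpermission-4.zip
```

Note that fileb:// is only a local-path prefix for the CLI; it never resolves bucket names.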

Related

AWS CloudFormation package does not upload local Lambda files to S3

I am trying to upload local artifacts that are referenced in a CloudFormation template to an S3 bucket using the aws cloudformation package command, and then deploy the packaged template. When I run the command:
aws cloudformation package --template-file template-file.yaml --s3-bucket my-app-cf-s3-bucket
It creates the packaged YAML file but does not upload anything to my S3 bucket.
Here is my CloudFormation template:
Resources:
  MyUserPoolPreSignupLambda:
    type: AWS::Lambda::Function
    Properties:
      FunctionName: MyUserPoolPreSignup
      Code: lambda-pre-signup.js
      Runtime: nodejs-16
What am I doing wrong here?
Check that you have provided IAM user credentials for the AWS CLI (credentials file / environment variables).
If the template has no change, the CLI won't re-upload files on subsequent calls. To upload forcefully even if there is no change in your template, use --force-upload:
--force-upload (boolean) Indicates whether to override existing files in the S3 bucket. Specify this flag to upload artifacts even if they match existing artifacts in the S3 bucket.
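Using the template and bucket names from the question, the forced re-package would look like this sketch (--output-template-file is optional but keeps the packaged template in a separate file):

```shell
# Re-upload artifacts even if identical copies already exist in the bucket,
# and write the transformed template to packaged-template.yaml.
aws cloudformation package \
  --template-file template-file.yaml \
  --s3-bucket my-app-cf-s3-bucket \
  --output-template-file packaged-template.yaml \
  --force-upload
```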
I had a typo in my CloudFormation template. The Type property must be capitalized:
Type: AWS::Lambda::Function instead of type: AWS::Lambda::Function
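For reference, the corrected resource from the question would look like the sketch below. (A side note, not part of the original answer: the runtime identifier also looks off; AWS names Node.js runtimes like nodejs16.x, not nodejs-16.)

```yaml
Resources:
  MyUserPoolPreSignupLambda:
    Type: AWS::Lambda::Function   # "Type" must be capitalized
    Properties:
      FunctionName: MyUserPoolPreSignup
      Code: lambda-pre-signup.js
      Runtime: nodejs16.x         # "nodejs-16" is not a valid runtime identifier
```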

AWS - SAM cli yaml template does not work with cloudformation stack

I'm having a problem with aws CloudFormation…
I guess as I'm new I'm missing something…
So I installed the SAM CLI on my Mac, and it generated this .yaml file.
Then I go to CloudFormation and try to upload this file to a stack.
During creation it gives me an error:
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless
Application Specification document. Number of errors found: 1. Resource
with id [HelloWorldFunction] is invalid. 'CodeUri' is not a valid S3 Uri
of the form 's3://bucket/key' with optional versionId query parameter..
Rollback requested by user.
What should I do here?
I'm trying to create a Lambda function with a trigger on S3 file upload, and I need a .yaml file for CloudFormation that describes all the services and triggers… I found it extremely difficult to find a template that works…
How should I try to fix this, when even CLI-generated yaml files don't work?
Shouldn't CloudFormation initialize a Lambda function when there is no such function created yet?
Thanks a lot
The templates that AWS SAM uses are more flexible than those that can be interpreted by AWS CloudFormation. The problem you're running into here is that AWS SAM can handle relative paths on your file system as the CodeUri for your Lambda function; CloudFormation, however, expects an S3 URI so it can retrieve the function code and upload it to the Lambda.
You should have a look at the sam package command. This command resolves all SAM-specific things (e.g., it uploads the code to S3 and replaces the CodeUri in the template) and creates a "packaged template" file that you can then upload to CloudFormation.
You can also use the sam deploy command, which packages the template and deploys it to CloudFormation in one step.
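As a sketch of both options (the file names, bucket, and stack name are placeholders):

```shell
# Option 1: resolve local CodeUri paths by uploading them to S3 and
# emitting a packaged template that plain CloudFormation can consume.
sam package \
  --template-file template.yaml \
  --s3-bucket my-sam-artifacts-bucket \
  --output-template-file packaged.yaml

# Option 2: let SAM package and create/update the stack in one step.
sam deploy \
  --template-file template.yaml \
  --s3-bucket my-sam-artifacts-bucket \
  --stack-name my-sam-stack \
  --capabilities CAPABILITY_IAM
```

CAPABILITY_IAM is needed because SAM templates typically create IAM roles for the function.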

Is it possible to trigger lambda by changing the file of local s3 manually in serverless framework?

I used serverless-s3-local to trigger an AWS Lambda locally with the Serverless Framework.
It works when I create or update a file via a function in the local S3 folder, but when I add a file or change the content of a file in the local S3 folder manually, it doesn't trigger the Lambda.
Is there any good way to solve it?
Thanks for using serverless-s3-local. I'm the author of serverless-s3-local.
How did you add the file or change the content of the file? Did you use the AWS command as follows?
$ AWS_ACCESS_KEY_ID=S3RVER AWS_SECRET_ACCESS_KEY=S3RVER aws --endpoint http://localhost:8000 s3 cp ./face.jpg s3://local-bucket/incoming/face.jpg
{
    "ETag": "\"6fa1ab0763e315d8b1a0e82aea14a9d0\""
}
If you don't use the aws command and instead apply these operations to the files directly, the modifications aren't detected by S3rver, which is the local S3 emulator. The resize_image example may be useful for you.

aws s3 cp error "variable 'current_index' referenced before assignment"

I am trying to download a file (about 2 TB) from S3 to my local server, like so:
aws s3 cp s3://outputs/star_output.tar.gz ./ --profile abcd --endpoint-url=https://abc.edu
It seems the download finished with a file like star_output.tar.gz.9AB04cEd, but it ended with a failure:
download failed: s3://outputs/star_output.tar.gz to ./ local variable 'current_index' referenced before assignment
The file star_output.tar.gz.9AB04cEd was also automatically deleted.
I tried a small text file, and it downloaded with no issue. Is this related to the size of the file (too big)?
Does anyone know the possible reason?

How to delete a file with an empty name from S3

Somehow, using the AWS Java API, we managed to upload a file to S3 without a name.
The file is shown if we run s3cmd ls s3://myBucket/MyFolder, but is not shown in the S3 GUI.
Running s3cmd del s3://myBucket/MyFolder/ gives the following error:
ERROR: Parameter problem: Expecting S3 URI with a filename or --recursive: s3://myBucket/MyFolder/
Running the same command without the trailing slash does nothing.
How can the file be deleted?
As far as I know, it can't be done using s3cmd.
It can be done using the aws cli, by running:
aws s3 rm s3://myBucket/MyFolder/
Make sure you don't use the --recursive flag, or it will remove the entire directory.
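If the high-level s3 rm command still doesn't match the object, another option worth trying is the lower-level s3api call, which takes the key verbatim (this assumes the empty-named object's key is literally "MyFolder/", i.e., the folder prefix plus an empty file name):

```shell
# Delete the object whose key ends in a trailing slash (the "empty name").
aws s3api delete-object \
  --bucket myBucket \
  --key "MyFolder/"
```

You can confirm the exact key first with aws s3api list-objects --bucket myBucket --prefix "MyFolder/".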