AWS CloudFormation package does not upload local Lambda files to S3 - amazon-s3

I am trying to upload the local artifacts referenced in my CloudFormation template to an S3 bucket using the aws cloudformation package command, and then deploy the packaged template. However, when I run the command:
aws cloudformation package --template-file template-file.yaml --s3-bucket my-app-cf-s3-bucket
It creates the packaged YAML file but does not upload anything to my S3 bucket.
Here is my CloudFormation template:
Resources:
  MyUserPoolPreSignupLambda:
    type: AWS::Lambda::Function
    Properties:
      FunctionName: MyUserPoolPreSignup
      Code: lambda-pre-signup.js
      Runtime: nodejs-16
What am I doing wrong here?

Check whether you provided IAM user credentials for the AWS CLI (credentials file / environment variables).
If the template has not changed, the CLI won't upload the files again on repeated calls. To force the upload even if there is no change in your template, use --force-upload:
--force-upload (boolean) Indicates whether to override existing files in the S3 bucket. Specify this flag to upload artifacts even if they match existing artifacts in the S3 bucket.
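For example, reusing the template and bucket from the question (the output file name here is just an assumption), a forced re-upload would look something like:
aws cloudformation package --template-file template-file.yaml --s3-bucket my-app-cf-s3-bucket --output-template-file packaged.yaml --force-upload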

I had a typo in my CloudFormation template. The Type property must be capitalized:
Type: AWS::Lambda::Function instead of type: AWS::Lambda::Function

Related

AWS - SAM cli yaml template does not work with cloudformation stack

I'm having a problem with AWS CloudFormation…
I guess, as I'm new, I'm missing something…
So I installed the SAM CLI on my Mac and it generated this .yaml file.
Then I go to CloudFormation and try to upload this file to a stack.
During creation it gives me an error:
Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless
Application Specification document. Number of errors found: 1. Resource
with id [HelloWorldFunction] is invalid. 'CodeUri' is not a valid S3 Uri
of the form 's3://bucket/key' with optional versionId query parameter..
Rollback requested by user.
What should I do here?
I'm trying to create a Lambda function triggered by an S3 file upload, and I need a .yaml file for CloudFormation to describe all the services and triggers… I found it extremely difficult to find a template that works…
How should I try to fix this, when even CLI-generated YAML files don't work?
Shouldn't CloudFormation create a Lambda function when no such function exists yet?
Thanks a lot
The templates that AWS SAM uses are more flexible than those that can be interpreted by AWS CloudFormation. The problem you're running into here is that AWS SAM can handle a relative path on your file system as the CodeUri for your Lambda function; CloudFormation, however, expects an S3 URI so that it can retrieve the function code for the Lambda.
You should have a look at the sam package command. This command resolves all the SAM-specific parts (e.g., it uploads the code to S3 and replaces the CodeUri in the template) and creates a "packaged template" file that you can then upload to CloudFormation.
You can also use the sam deploy command, which packages the template and deploys it to CloudFormation itself.
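A rough sketch of that two-step flow (the bucket, stack name, and file names here are placeholders, not values from the question) might look like:
sam package --template-file template.yaml --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
sam deploy --template-file packaged.yaml --stack-name my-stack --capabilities CAPABILITY_IAM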

Is it possible to trigger lambda by changing the file of local s3 manually in serverless framework?

I used serverless-s3-local to trigger AWS Lambda locally with the Serverless Framework.
It worked when I created or updated a file via a function in the local S3 folder, but when I added a file or changed the content of a file in the local S3 folder manually, it didn't trigger the Lambda.
Is there any good way to solve it?
Thanks for using serverless-s3-local. I'm the author of serverless-s3-local.
How did you add the file or change its content? Did you use the AWS command as follows?
$ AWS_ACCESS_KEY_ID=S3RVER AWS_SECRET_ACCESS_KEY=S3RVER aws --endpoint http://localhost:8000 s3 cp ./face.jpg s3://local-bucket/incoming/face.jpg
{
"ETag": "\"6fa1ab0763e315d8b1a0e82aea14a9d0\""
}
If you don't use the aws command and instead modify the files in the local directory directly, these modifications aren't detected by S3rver, which is the local S3 emulator. The resize_image example may be useful for you.

Flink on EMR cannot access S3 bucket from "flink run" command

I'm prototyping the use of AWS EMR for a Flink-based system that we're planning to deploy. My cluster has the following versions:
Release label: emr-5.10.0
Hadoop distribution: Amazon 2.7.3
Applications: Flink 1.3.2
Both the documentation provided by Amazon (Amazon Flink documentation) and the documentation from Flink (Apache Flink documentation) mention directly using S3 resources as an integrated file system with the s3://<bucket>/<file> pattern. I have verified that all the correct permissions are set and that I can use the AWS CLI to copy S3 resources to the master node with no problem, but attempting to start a Flink job using a JAR from S3 does not work.
I am executing the following step:
JAR location : command-runner.jar
Main class : None
Arguments : flink run -m yarn-cluster -yid application_1513333002475_0001 s3://mybucket/myapp.jar
Action on failure: Continue
The step always fails with
JAR file does not exist: s3://mybucket/myapp.jar
I have spoken to AWS support, and they suggested having a previous step copy the S3 file to the local Master node and then referencing it with a local path. While this would obviously work, I would rather get the native S3 integration working.
I have also tried using the s3a filesystem and get the same result.
You need to download your JAR from S3 so that it is available on the classpath.
aws s3 cp s3://mybucket/myapp.jar myapp.jar
and then run flink run -m yarn-cluster myapp.jar
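Putting the two together as the step (the local path under /home/hadoop is an assumption about where you drop the JAR on the master node), it would look roughly like:
aws s3 cp s3://mybucket/myapp.jar /home/hadoop/myapp.jar
flink run -m yarn-cluster -yid application_1513333002475_0001 /home/hadoop/myapp.jar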

Could we use AWS Glue just copy a file from one S3 folder to another S3 folder?

I need to copy a zipped file from one AWS S3 folder to another and would like to make that a scheduled AWS Glue job. I cannot find an example for such a simple task. Please help if you know the answer. Maybe the answer is AWS Lambda, or another AWS tool.
Thank you very much!
You can do this, and there may be a reason to use AWS Glue: if you have chained Glue jobs and glue_job_#2 is triggered on the successful completion of glue_job_#1.
The simple Python script below copies each file from one S3 folder (source) to another folder (target) using the boto3 library, and optionally deletes the original copy in the source directory.
import boto3

bucketname = "my-unique-bucket-name"
s3 = boto3.resource('s3')
my_bucket = s3.Bucket(bucketname)
source = "path/to/folder1"
target = "path/to/folder2"

for obj in my_bucket.objects.filter(Prefix=source):
    source_filename = (obj.key).split('/')[-1]
    copy_source = {
        'Bucket': bucketname,
        'Key': obj.key
    }
    target_filename = "{}/{}".format(target, source_filename)
    s3.meta.client.copy(copy_source, bucketname, target_filename)
    # Uncomment the line below if you wish to delete the original source file
    # s3.Object(bucketname, obj.key).delete()
Reference: Boto3 Docs on S3 Client Copy
Note: I would use f-strings for generating the target_filename, but f-strings are only supported in >= Python3.6 and I believe the default AWS Glue Python interpreter is still 2.7.
Reference: PEP on f-strings
I think you can do it with Glue, but wouldn't it be easier to use the CLI?
You can do the following:
aws s3 sync s3://bucket_1 s3://bucket_2
You could do this with Glue but it's not the right tool for the job.
Far simpler would be to have a Lambda function triggered by an S3 created-object event. There's even a tutorial on AWS Docs on doing (almost) this exact thing.
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
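As a minimal sketch of that approach (the single-bucket layout and the target prefix below are illustrative assumptions, not details from the question), the handler could look roughly like this:
import urllib.parse
import boto3

s3 = boto3.client('s3')
TARGET_PREFIX = "path/to/folder2/"  # hypothetical destination prefix

def lambda_handler(event, context):
    # Each record in the S3 event describes one newly created object.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        # Server-side copy into the target prefix; no data is downloaded.
        s3.copy_object(
            Bucket=bucket,
            Key=TARGET_PREFIX + key.split('/')[-1],
            CopySource={'Bucket': bucket, 'Key': key},
        )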
We ended up using Databricks to do everything.
Glue is not ready. It returns error messages that make no sense. We created tickets and waited for five days, and still got no reply.
The S3 API lets you issue a COPY command (really a PUT with a header indicating the source URL) to copy objects within or between buckets. It's regularly used to fake rename()s, but you can initiate the call yourself from anything.
There is no need to download any data; within the same S3 region the copy runs at about 6-10 MB/s.
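With the AWS CLI, that underlying call looks roughly like this (bucket names and keys are placeholders):
aws s3api copy-object --copy-source bucket_1/path/to/file.zip --bucket bucket_2 --key path/to/file.zip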
The AWS CLI cp command can do this.
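For example (source and target paths are placeholders):
aws s3 cp s3://bucket_1/path/to/file.zip s3://bucket_2/path/to/file.zip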
You can do that by downloading your zip file from S3 to a tmp/ directory and then re-uploading it to S3.
import boto3
s3 = boto3.resource('s3')
Download the file to the local Spark tmp directory:
s3.Bucket(bucket_name).download_file(DATA_DIR+file,'tmp/'+file)
Upload the file from the local Spark tmp directory:
s3.meta.client.upload_file('tmp/'+file,bucket_name,TARGET_DIR+file)
Now you can write a Python shell job in Glue to do it. Just set the Type to Python Shell in the Glue job creation wizard. You can run a normal Python script in it.
Nothing like that is required; I believe AWS Data Pipeline is the best option. Just use the command-line option. Scheduled runs are also possible. I already tried it, and it worked successfully.

AWS CLI create lambda function cannot see my zip file that is in S3 "No such file or directory"

My first attempt was using the console and it worked. I have a new zip file that I successfully uploaded to my bucket in S3. I can list the bucket and see both files there, but when I try to use the CLI to create the Lambda function it returns "Error parsing parameter '--zip-file': Unable to load paramfile ... No such file or directory ...".
From the documentation, I expected that "fileb://path/to/file.zip" implies the bucket name should be included, but I am unsure whether the region URL is needed. I tried it with and without the region URL with the same results.
Again, I am able to use these files if I create the Lambda using the console, but not via the CLI. What am I missing?
[royce#localhost ~]$ aws s3 ls s3://uploads.lai
2017-08-18 10:27:48 60383836 userpermission-1.zip
2017-08-31 07:43:50 60389082 userpermission-4.zip
2017-08-18 14:15:43 1171 userpermission.db
[royce#localhost ~]$ aws lambda create-function --function-name awstest01 --zip-file "fileb://uploads.lai/userpermission-4.zip" --runtime java8 --role execution-role-arn --handler app.handler
Error parsing parameter '--zip-file': Unable to load paramfile fileb://uploads.lai/userpermission-4.zip: [Errno 2] No such file or directory: 'uploads.lai/userpermission-4.zip'
The --zip-file flag is for uploading your function from a local zip file.
If you are using S3, the CLI command should be something along the lines of aws lambda create-function --code "S3Bucket=string,S3Key=string,S3ObjectVersion=string"
You may check the reference here:
http://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html
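For the bucket and key shown in the question, the call should look roughly like this (the role ARN placeholder needs to be filled in with your actual execution role):
aws lambda create-function --function-name awstest01 --runtime java8 --role <execution-role-arn> --handler app.handler --code S3Bucket=uploads.lai,S3Key=userpermission-4.zip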