I need my Spinnaker pipeline to trigger on changes to an AWS S3 bucket, specifically when a file is added or edited.
I cannot find a built-in mechanism to do that: there is nothing remotely related to S3 buckets in the drop-down list of triggers.
I thought I might be able to use a webhook from an AWS Lambda that subscribes to S3 events on the bucket, and have the Lambda call https://my_spinnnaker.mybiz.com/webhooks/webhook/s3_new. However, it does not seem possible to pass parameters to the hook, e.g. the key of the new S3 object.
Any other ways of doing this?
The S3 object key can be read from the event that triggers the Lambda function: each event record includes the key of the object that was created.
For a pipeline with parameters, the request sent from the Lambda function can contain parameter values in the request body. The format for the payload is given below.
{
  "parameters": {
    "<parameter-key>": "<parameter-value>"
  }
}
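Putting the two together, a minimal sketch of such a Lambda in Python could look like this (the runtime choice and the parameter names bucket and s3ObjectKey are assumptions; they must match whatever parameters your pipeline declares):

import json
import urllib.parse
import urllib.request

# Webhook endpoint from the question; adjust to your Spinnaker host.
SPINNAKER_WEBHOOK_URL = "https://my_spinnnaker.mybiz.com/webhooks/webhook/s3_new"

def handler(event, context):
    # Each record of an S3 event notification carries the bucket name
    # and the (URL-encoded) object key.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Forward the key to Spinnaker as pipeline parameters.
        payload = json.dumps({
            "parameters": {
                "bucket": bucket,       # hypothetical parameter name
                "s3ObjectKey": key,     # hypothetical parameter name
            }
        }).encode("utf-8")

        req = urllib.request.Request(
            SPINNAKER_WEBHOOK_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(f"Spinnaker responded with HTTP {resp.status}")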
I have a Lambda to process the files in a folder of an S3 bucket. I would like to set up an alarm/notification if objects sit in the folder for more than 7 hours without being processed by the Lambda.
You can use S3 object tags: have your Lambda processor set something like a Processed tag to true or false.
Then, in a second scheduled Lambda, check each object's creation time; if it is older than 7 hours and still tagged Processed: false (meaning it was never processed by the Lambda), publish a notification to SNS.
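A minimal sketch of that scheduled checker in Python with boto3 (the bucket name, prefix, topic ARN, and the Processed tag convention are all assumptions to adapt):

import datetime
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

BUCKET = "my-bucket"                                            # hypothetical
PREFIX = "incoming/"                                            # hypothetical
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:stale-objects"  # hypothetical

def handler(event, context):
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=7)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            if obj["LastModified"] > cutoff:
                continue  # object is younger than 7 hours
            tags = s3.get_object_tagging(Bucket=BUCKET, Key=obj["Key"])["TagSet"]
            processed = any(
                t["Key"] == "Processed" and t["Value"] == "true" for t in tags
            )
            if not processed:
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="Unprocessed S3 object",
                    Message=f"{obj['Key']} has sat unprocessed for over 7 hours",
                )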
Set object expiration to 7 hours on the S3 bucket, then have a Lambda triggered by the delete event. That Lambda can notify you and save the file into another bucket, or forward it to your original Lambda; it could even be the same function that should have been triggered when the object was uploaded.
Alternatively, you can add tags to the uploaded files. A tag could be ttl: <date-to-delete>. A CloudWatch scheduled event then runs a Lambda, for instance every hour, which checks every object in the S3 bucket for a ttl tag whose value is older than the current time.
Personally, I would go with the first approach, as it is more event-driven and involves less scheduled processing.
On another note, it's rather strange that the Lambda doesn't get triggered for some S3 objects. I don't know how you deploy your Lambda and configure the S3 trigger, but if you're using Serverless or CDK, I don't see how your Lambda could fail to be triggered for every uploaded file with a configuration similar to the following:
# serverless example
functions:
  users:
    handler: users.handler
    events:
      - s3:
          bucket: photos
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .jpg
In this example, the users Lambda gets triggered for every .jpg file that gets created under uploads/ in the photos bucket.
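For reference, a matching users.handler could be as small as the following sketch (assuming a Python runtime, which the example does not specify):

# users.py
import urllib.parse

def handler(event, context):
    # Log each .jpg object that triggered this invocation.
    for record in event["Records"]:
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object in the photos bucket: {key}")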
I am trying to create an AWS Lambda triggered on file upload to an existing S3 bucket, using the Serverless Framework.
I managed to get the Lambda deployed, but cannot get it triggered by uploads to my existing S3 bucket.
I am well aware of
the existence of an existing parameter within serverless.yml:
functions:
  copyToBufferS3:
    handler: handler.copy_to_buffer_s3
    description: Copies newly uploaded technical logs to a buffer S3 bucket
    events:
      - s3:
          bucket: my.bucket.name
          event: s3:ObjectCreated:*
          rules:
            - suffix: suffix.ext
          existing: true
the fact that this parameter creates another lambda named <service name>-<stage>-custom-resource-existing-s3: I can see it in my console (so the existing parameter and its section actually are taken into account, yay!)
the importance of indentation in serverless.yml: I double-checked that the parameters under the - s3 section are indented by 4 spaces
The problem is: neither of these two lambdas have a trigger set.
How should I edit my serverless.yml (or something else) to actually have my lambda triggered on file upload?
It turned out I just did not have the permission to... display the trigger(s).
I am surprised that, in this case, the AWS Console displays "Triggers (0)" as if the section actually were empty (instead of clearly warning about missing permissions, as it usually does).
I have a Lambda function that gets a notification whenever an S3 object gets added. The function then reads some metadata of the S3 object. Is there any way to include the metadata in the notification itself, rather than having to go and read the head object?
The event notification data structure does not include object metadata or tags, and there is no way to enable their inclusion.
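So the usual pattern is exactly what you describe: read the key from the notification, then make the HEAD request yourself. A minimal sketch with boto3:

import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # The notification only carries bucket and key, so the
        # user-defined metadata has to be fetched separately.
        head = s3.head_object(Bucket=bucket, Key=key)
        print(head["Metadata"])  # x-amz-meta-* values, without the prefix
        print(head["ContentType"])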
I am doing an upload like this:
curl -v -X PUT -T "test.xml" -H "Host: my-bucket-upload.s3-eu-central-1.amazonaws.com" -H "Content-Type: application/xml" https://my-bucket-upload.s3-eu-central-1.amazonaws.com/test.xml
The file gets uploaded and I can see it in my S3 bucket.
The trick is, when I try to create a Lambda function to be triggered on creation, it never gets invoked. If I upload the file using the S3 web interface, it works fine. What am I doing wrong? Is there a clear recipe for how to do it?
Amazon S3 APIs such as PUT, POST, and COPY can create an object. Using these event types, you can enable notification when an object is created using a specific API, or you can use the s3:ObjectCreated:* event type to request notification regardless of the API that was used to create an object.
Check the notification event setup on the bucket:
Go to the bucket in the AWS management console
Click the Properties tab on the bucket
Click Events to check the notification event setup
Case 1:
s3:ObjectCreated:* – Lambda should be invoked regardless of PUT, POST, or COPY.
Other case:
If the event is set up for a specific HTTP method, use that method in your curl command to create the object in the S3 bucket. That way it should trigger the Lambda function.
Check the prefix in the bucket's properties.
If there is a value like foo/, that means only the objects inside the foo folder will trigger the event to Lambda.
Make sure the prefix you're adding contains only the safe special characters mentioned here. As per the AWS documentation, some characters require special handling; please be mindful of that.
Also, I noticed that modifying the trigger on the Lambda page doesn't get applied until you delete the trigger and create a new one (even if it is the same). Learned that the hard way. AWS does behave weirdly sometimes.
Faced similar issues and figured out that the folder names should not have spaces.
I want to create an AWS Lambda event source to catch the action of uploading a file via the aws cli cp command, but it couldn't be triggered when I uploaded a file. Here is what I have done:
I configured the event source as follows:
I have tried all four options of the Object Created event type; it just didn't work.
I use the aws cli as follows:
aws s3 cp sample.html s3://ml.hengwei.me/data/
Is there anything I misconfigured?
You are triggering your Lambda from the wrong event type.
Using the awscli to cp files up into S3 does not cause an s3:ObjectCreated:Copy event (which I believe relates to an S3 copy operation, copying an object from one bucket to another). In your case, the object is being uploaded to S3, and I presume it results in either s3:ObjectCreated:Put or s3:ObjectCreated:CompleteMultipartUpload.
The events include:
s3:ObjectCreated:Put – An object was created by an HTTP PUT operation.
s3:ObjectCreated:Post – An object was created by an HTTP POST operation.
s3:ObjectCreated:Copy – An object was created by an S3 copy operation.
s3:ObjectCreated:CompleteMultipartUpload – An object was created by the completion of an S3 multi-part upload.
s3:ObjectCreated:* – An object was created by one of the event types listed above or by a similar object creation event added in the future.
The full list of events is here. Note that the awscli may or may not use multi-part upload, so you need to handle both situations.
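If you manage the notification configuration in code rather than through the console, a boto3 sketch that covers all creation events (and therefore both cp code paths) might look like this; the function ARN is a placeholder:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="ml.hengwei.me",  # bucket from the question
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                # Placeholder ARN; point this at your own function.
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:on-upload",
                # s3:ObjectCreated:* covers Put, Post, Copy and
                # CompleteMultipartUpload.
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)

Note that S3 also needs permission to invoke the function (set with lambda add-permission), which the console normally configures for you.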