Is it possible to manipulate data before it is returned when using an S3 link? For example, if I hit mybucket.s3.amazonaws.com/myfile, could I have a Lambda function run and manipulate the file before it is returned?
No, this is not possible -- the data will be returned as it actually exists in Amazon S3.
However, you might consider using Lambda@Edge (in preview at the time of writing this answer). This serves content via Amazon CloudFront and allows an AWS Lambda function to be run before the data is returned. (Actually, it can run code both before fetching the data from S3 and after receiving it.)
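As a minimal sketch (not part of the original answer), a Lambda@Edge origin-response handler in Python could look like the following; the bucket name and the upper-casing transform are placeholders, and note that Lambda@Edge limits the size of a generated response body.

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    """Origin-response trigger: fetch the object ourselves, transform it, and replace the body."""
    cf = event['Records'][0]['cf']
    response = cf['response']
    key = cf['request']['uri'].lstrip('/')

    # Fetch the original object and apply whatever manipulation is needed.
    obj = s3.get_object(Bucket='mybucket', Key=key)  # placeholder bucket
    transformed = obj['Body'].read().decode('utf-8').upper()  # placeholder transform

    # Replace the body that CloudFront returns to the viewer.
    response['body'] = transformed
    response['bodyEncoding'] = 'text'
    return response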
I have different data sources and I need to publish them to S3 in real time. I also need to process and validate the data before delivering it to the S3 buckets, so I have to use AWS Lambda for validation. The question is: what is the difference between using AWS Kinesis Data Firehose and using AWS Lambda to store data directly into an S3 bucket? Specifically, what are the advantages of using Kinesis Data Firehose, given that we can use AWS Lambda to put records directly into S3?
We might want to clarify what "near real time" means; for me, it is below 1 second.
Kinesis Data Firehose will, in this case, batch the items before delivering them to S3. This will result in more items per S3 object.
You can configure how often you want the data to be stored. (You can also attach a Lambda function to Firehose, so you can process the data before it is delivered to S3.) Kinesis Firehose will scale automatically.
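For reference, a Firehose transformation Lambda roughly follows the shape below. This is only a sketch, assuming JSON records; the validation rule (requiring an 'id' field) is a placeholder.

import base64
import json

def handler(event, context):
    """Firehose data-transformation Lambda: validate each record, then pass or drop it."""
    output = []
    for record in event['records']:
        payload = json.loads(base64.b64decode(record['data']))

        # Placeholder validation rule: drop records without an 'id' field.
        if 'id' not in payload:
            output.append({'recordId': record['recordId'],
                           'result': 'Dropped',
                           'data': record['data']})
            continue

        # Return the (possibly modified) record, re-encoded as base64.
        output.append({'recordId': record['recordId'],
                       'result': 'Ok',
                       'data': base64.b64encode(json.dumps(payload).encode()).decode()})
    return {'records': output}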
Note that each PUT to S3 has a cost associated with it.
If you connect your data source directly to AWS Lambda, then each event will trigger the Lambda function (unless you have a batching mechanism in place, which you didn't mention), and for each event you will make a PUT request to S3. This will result in a lot of small objects in S3 and therefore a lot of S3 PUT API calls.
Also, depending on the number of items received per second, Lambda might not be able to scale and the associated cost will increase.
I have a specific use-case where a huge amount of data is continuously streamed into an S3 bucket.
We want a notification service on a specific folder in the bucket, so that if the folder reaches a specific size (for example 100 TB) a cleaning service is triggered via SNS / AWS Lambda.
I have checked the AWS documentation and did not find any direct support from AWS for this:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
We are planning to have a script that runs periodically, checks the size of the S3 objects, and kicks off an AWS Lambda function.
Is there a more elegant way to handle a case like this? Any suggestion or opinion is really appreciated.
Attach an S3 trigger event to a Lambda function, which will be triggered whenever any file is added to the S3 bucket.
Then, in the Lambda function, check the file size. This eliminates the need to run a script periodically to check the size.
Below is sample code for adding an S3 trigger to a Lambda function.
s3_trigger:
  handler: lambda/lambda.s3handler
  timeout: 900
  events:
    - s3:
        bucket: ${self:custom.sagemakerBucket}
        event: s3:ObjectCreated:*
        existing: true
        rules:
          - prefix: csv/
          - suffix: .csv
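A possible shape for the lambda/lambda.s3handler referenced above, in Python: the S3 event already carries each new object's size, so no extra API call is needed. The size threshold and the SNS topic ARN are placeholders for whatever triggers your cleaning service.

import boto3

sns = boto3.client('sns')
SIZE_LIMIT_BYTES = 100 * 1024 ** 3  # placeholder threshold

def s3handler(event, context):
    """Check the size of each newly created object straight from the S3 event."""
    for record in event['Records']:
        key = record['s3']['object']['key']
        size = record['s3']['object']['size']
        if size > SIZE_LIMIT_BYTES:
            sns.publish(
                TopicArn='arn:aws:sns:us-east-1:123456789012:cleanup-topic',  # placeholder
                Message=f'{key} is {size} bytes, which exceeds the limit',
            )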
There is no direct method for obtaining the size of a folder in Amazon S3 (because folders do not actually exist).
Here are a few ideas...
Periodic Lambda function to calculate total
Create an Amazon CloudWatch Event to trigger an AWS Lambda function at specific intervals. The Lambda function would list all objects with the given Prefix (effectively a folder) and total the sizes. If it exceeds 100TB, the Lambda function could trigger the cleaning process.
However, if there are thousands of files in that folder, this would be somewhat slow. Each API call can only retrieve 1000 objects, so it might take many calls to compute the total, and this would happen on every checking interval.
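A minimal sketch of that periodic check with boto3, assuming a bucket/prefix pair; the bucket name, prefix, and the cleanup trigger are placeholders.

import boto3

s3 = boto3.client('s3')
LIMIT_BYTES = 100 * 1024 ** 4  # 100 TB

def check_prefix_size(bucket='my-bucket', prefix='myfolder/'):
    """Total the size of all objects under the prefix; each page holds at most 1000 objects."""
    total = 0
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        total += sum(obj['Size'] for obj in page.get('Contents', []))
    if total > LIMIT_BYTES:
        pass  # trigger the cleaning process here (e.g. publish to an SNS topic)
    return total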
Keep a running total
Configure Amazon S3 Events to trigger an AWS Lambda function whenever a new object is created with that Prefix. The Lambda function can then increment a running total in a database. If the total exceeds 100 TB, the Lambda function could trigger the cleaning process.
Which database to use? Amazon DynamoDB would be the quickest and it supports an atomic 'increment' operation, but you could be sneaky and just use AWS Systems Manager Parameter Store. The Parameter Store option might cause a problem if new objects are created quickly, because there is no locking. So, if files are coming in every few seconds or faster, definitely use DynamoDB.
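A sketch of the running-total variant using DynamoDB's atomic ADD, assuming a table named 'folder-sizes' keyed by a string attribute 'prefix'; the table name, key, and threshold are placeholders.

import boto3

dynamodb = boto3.client('dynamodb')
LIMIT_BYTES = 100 * 1024 ** 4  # 100 TB

def handler(event, context):
    """Add each new object's size to a running total stored in DynamoDB."""
    for record in event['Records']:
        size = record['s3']['object']['size']
        result = dynamodb.update_item(
            TableName='folder-sizes',            # placeholder table
            Key={'prefix': {'S': 'myfolder/'}},  # placeholder key
            UpdateExpression='ADD total_bytes :s',
            ExpressionAttributeValues={':s': {'N': str(size)}},
            ReturnValues='UPDATED_NEW',
        )
        total = int(result['Attributes']['total_bytes']['N'])
        if total > LIMIT_BYTES:
            pass  # trigger the cleaning process here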
Slow motion
You did not indicate how often this 100 TB limit is likely to be reached. If it only happens after a few days, you could use Amazon S3 Inventory, which provides a daily CSV file listing the objects in the bucket. This solution, of course, would not be applicable if the 100 TB limit is hit in less than a day.
I'm trying to sync a large number of files from one bucket to another; some of the files are up to 2 GB in size. I used the AWS CLI's s3 sync command like so:
aws s3 sync s3://bucket/folder/folder s3://destination-bucket/folder/folder
After verifying the files that had been transferred, it became clear that the large files had lost the metadata that was present on the original files in the original bucket.
This is a "known" issue with larger files, where S3 switches to multipart upload to handle the transfer.
This multipart handling can be configured via the .aws/config file, which I have done like so:
[default]
s3 =
  multipart_threshold = 4500MB
However, when testing the transfer again, the metadata on the larger files is still not present. It is present on the smaller files, so it's clear that I'm hitting the multipart upload issue.
Given this is an S3-to-S3 transfer, is the local S3 configuration taken into consideration at all?
As an alternative, is there a way to just sync the metadata now that all the files have been transferred?
I have also tried aws s3 cp with no luck either.
You could use Cross/Same-Region Replication to copy the objects to another Amazon S3 bucket.
However, only newly added objects will be copied between the buckets. You can, however, trigger the copy by copying the objects onto themselves. I'd recommend you test this on a separate bucket first, to make sure you don't accidentally lose any of the metadata.
The method suggested seems rather complex: Trigger cross-region replication of pre-existing objects using Amazon S3 inventory, Amazon EMR, and Amazon Athena | AWS Big Data Blog
The final option would be to write your own code to copy the objects, and copy the metadata at the same time.
Or, you could write a script that compares the two buckets to see which objects did not get their correct metadata, and have it update the metadata on just those target objects. This actually involves copying each object onto itself while specifying the metadata. It is probably easier than copying ALL objects yourself, since it only needs to 'fix' the ones that didn't get their metadata.
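A rough sketch of that fix-up script with boto3, assuming the correct metadata can still be read from the source bucket; the bucket names are placeholders, and note that copy_object only handles objects up to 5 GB (fine for the 2 GB files mentioned here).

import boto3

s3 = boto3.client('s3')

def fix_metadata(key, source_bucket='source-bucket', dest_bucket='destination-bucket'):
    """Re-apply the source object's metadata by copying the destination object onto itself."""
    source_meta = s3.head_object(Bucket=source_bucket, Key=key)['Metadata']
    dest_meta = s3.head_object(Bucket=dest_bucket, Key=key)['Metadata']
    if dest_meta == source_meta:
        return  # metadata already correct, nothing to do

    # MetadataDirective='REPLACE' is required when copying an object onto itself;
    # other headers such as ContentType may also need to be re-specified.
    s3.copy_object(
        Bucket=dest_bucket,
        Key=key,
        CopySource={'Bucket': dest_bucket, 'Key': key},
        Metadata=source_meta,
        MetadataDirective='REPLACE',
    )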
I finally managed to implement a solution for this and took the opportunity to play around with the Serverless framework and Step Functions.
The general flow I went with was:
The Step Function is triggered by a CloudWatch Event Rule targeting S3 events of the type 'CompleteMultipartUpload', as the metadata is only ever missing on S3 objects that had to be transferred using a multipart process.
The initial task in the Step Function checks whether all the required metadata is present on the object that raised the event.
If it is present, then the Step Function is finished.
If it is not present, then a second Lambda task is fired which copies all metadata from the source object to the destination object.
This could be achieved without Step Functions, but it was a good, simple exercise to give them a go. The first 'Check Meta' task is actually redundant, as the metadata is never present if a multipart transfer is used; I was originally also triggering off PutObject and CopyObject events, which is why I had the Check Meta task.
I am using an S3 bucket to store my data, and I push data to this bucket every single day. I wonder whether there is a feature that lets me compare the differences in the files in my bucket between two dates. If not, is there a way for me to build one via the AWS CLI or SDK?
The reason I want to check this is that I have an S3 bucket and my clients keep pushing data to it. I want to see how much data they have pushed since the last time I loaded it. Is there a pattern in AWS that supports this query? Or do I have to create rules on the S3 bucket to analyse it?
Listing from Amazon S3
You can activate Amazon S3 Inventory, which can provide a daily file listing the contents of an Amazon S3 bucket. You could then compare differences between two inventory files.
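A rough sketch of diffing two daily inventory listings in Python, assuming they have been downloaded and un-gzipped locally and that the object key is the second column (bucket, key, ...) of the configured inventory format; the file names are placeholders.

import csv

def load_keys(path):
    """Read an S3 Inventory CSV and return the set of object keys it lists."""
    with open(path, newline='') as f:
        return {row[1] for row in csv.reader(f)}

yesterday = load_keys('inventory-day-1.csv')  # placeholder file names
today = load_keys('inventory-day-2.csv')

print('Added:', sorted(today - yesterday))
print('Deleted:', sorted(yesterday - today))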
List it yourself and store it
Alternatively, you could list the contents of the bucket yourself and look for objects dated since the last listing. However, if objects are deleted, you will only know this if you keep a list of the objects that were previously in the bucket. It's probably easier to use S3 Inventory.
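A sketch of listing it yourself with boto3: keep the timestamp of the previous run and pick out objects modified since then. The bucket name and cutoff value are placeholders, and note this will not detect deletions.

import boto3
from datetime import datetime, timezone

s3 = boto3.client('s3')
last_run = datetime(2024, 1, 1, tzinfo=timezone.utc)  # placeholder: load from wherever you store it

paginator = s3.get_paginator('list_objects_v2')
new_objects = [
    obj['Key']
    for page in paginator.paginate(Bucket='my-bucket')  # placeholder bucket
    for obj in page.get('Contents', [])
    if obj['LastModified'] > last_run
]
print(new_objects)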
Process it in real-time
Instead of thinking about files in batches, you could configure Amazon S3 Events to trigger something whenever a new file is uploaded to the Amazon S3 bucket. The event can:
Trigger a notification via Amazon Simple Notification Service (SNS), such as an email
Invoke an AWS Lambda function to run some code you provide. For example, the code could process the file and send it somewhere.
I'm trying to send all of my incoming AWS IoT sensor value messages to the same S3 bucket, but despite turning on versioning in the bucket, the file keeps getting overwritten and shows only the last sensor value rather than all of them. I'm using "Store messages in an Amazon S3 bucket" directly from the AWS IoT console. Is there an easy way to solve this problem?
So, after further research and speaking with Amazon Dev support, it turns out you actually can't append records to the same file in S3 from the IoT console directly. I mentioned this was a feature most IoT developers would want as a default, and they said it would likely be possible soon but there is no way to do it now. Anyway, the simplest workaround I tested is to set up a Kinesis stream with a Firehose delivering to an S3 bucket. This is constrained by an adjustable data size and stream duration, but it works well otherwise. It also allows you to insert a Lambda function for data transformation if needed.