S3 bucket does not append new data objects - amazon-s3

I'm trying to send all my AWS IoT incoming sensor value messages to the same S3 bucket, but despite turning on versioning in my bucket, the file keeps getting overwritten and shows only the last input sensor value rather than all of them. I'm using "Store messages in an Amazon S3 bucket" directly from the AWS IoT console. Is there any easy way to solve this problem?

So after further research and speaking with AWS developer support, you actually can't append records to the same file in S3 from the IoT console directly. I mentioned this was a feature most IoT developers would want as a default, and he said it would likely be possible soon but there is no way to do it now. Anyway, the simplest workaround I tested is to set up a Kinesis stream with a Firehose delivering to an S3 bucket. This is constrained by an adjustable buffer size and interval, but it works well otherwise. It also allows you to insert a Lambda function for data transformation if needed.
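For reference, here is a minimal boto3 sketch of that workaround. It assumes a Firehose delivery stream (sensor-firehose) writing to your bucket already exists; the rule name, topic filter, and role ARN are hypothetical placeholders.

import boto3

iot = boto3.client("iot")

# Hypothetical names/ARNs -- replace with your own resources.
DELIVERY_STREAM = "sensor-firehose"  # existing Firehose delivery stream that writes to S3
FIREHOSE_ROLE_ARN = "arn:aws:iam::123456789012:role/iot-to-firehose"

# IoT topic rule that forwards every message on sensors/+ to Firehose,
# so records get buffered and grouped instead of overwriting one S3 object.
iot.create_topic_rule(
    ruleName="sensors_to_firehose",
    topicRulePayload={
        "sql": "SELECT * FROM 'sensors/+'",
        "actions": [
            {
                "firehose": {
                    "deliveryStreamName": DELIVERY_STREAM,
                    "roleArn": FIREHOSE_ROLE_ARN,
                    "separator": "\n",  # newline-delimit records inside each S3 object
                }
            }
        ],
    },
)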

Related

AWS Kinesis Data Firehose and Lambda

I have different data sources and I need to publish them to S3 in real time. I also need to process and validate the data before delivering it to S3 buckets, so I have to use AWS Lambda for processing and validation. The question is: what is the difference between AWS Kinesis Data Firehose and using AWS Lambda to store data directly in an S3 bucket? More precisely, what are the advantages of using Kinesis Data Firehose, given that we can use AWS Lambda to put records directly into S3?
We might want to clarify what "near real time" means; for me, it is below 1 second.
Kinesis Firehose in this case will batch the items before delivering them into S3. This will result in more items per S3 object.
You can configure how often you want the data to be stored. (You can also connect a Lambda function to Firehose, so you can process the data before delivering it to S3.) Kinesis Firehose will scale automatically.
Note that each PUT to S3 has a cost associated with it.
If you connect your data source to AWS Lambda, then each event will trigger the Lambda (unless you have a batching mechanism in place, which you didn't mention), and for each event you will make a PUT request to S3. This will result in a lot of small objects in S3 and therefore a lot of S3 PUT API calls.
Also, depending on the number of items received per second, Lambda might not be able to scale and the associated cost will increase.
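To illustrate the difference, here is a rough Python sketch of the Firehose side: records are sent in batches and Firehose buffers them into a handful of S3 objects, rather than one PUT per event. The stream name is hypothetical and the retry handling is deliberately simplistic.

import json
import boto3

firehose = boto3.client("firehose")

def send_batch(events, stream_name="my-delivery-stream"):  # hypothetical stream name
    # Up to 500 records / 4 MB per call; Firehose buffers them and writes
    # many records per S3 object, so S3 sees far fewer PUT requests.
    records = [{"Data": (json.dumps(e) + "\n").encode("utf-8")} for e in events]
    response = firehose.put_record_batch(DeliveryStreamName=stream_name, Records=records)
    # Records can fail individually; resend just the failed ones.
    if response["FailedPutCount"]:
        failed = [rec for rec, res in zip(records, response["RequestResponses"])
                  if "ErrorCode" in res]
        firehose.put_record_batch(DeliveryStreamName=stream_name, Records=failed)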

How to store streaming data from Amazon Kinesis Data Firehose in an S3 bucket

I want to improve my current application. I am using Redis via ElastiCache in AWS in order to store some user data from my website.
This solution is not scalable and I want to scale it using Amazon Kinesis Data Firehose for the auto-scaling streaming output, AWS Lambda to modify my input data, store it in an S3 bucket, and access it using AWS Athena.
I have been googling for several days but I really don't know how Amazon Kinesis Data Firehose stores the data in S3.
Is Firehose going to store the data as a single file for each record that it processes, or is there a way to append this data to the same CSV or group the data into different CSVs?
Amazon Kinesis Data Firehose will group data into a file based on:
Size of data (e.g. 5 MB)
Duration (e.g. every 5 minutes)
Whichever one hits the limit first will trigger the data storage in Amazon S3.
Therefore, if you need near-realtime reporting, go for a short duration. Otherwise, go for larger files.
Once a file is written in Amazon S3, it is immutable and Kinesis will not modify its contents. (No appending or modification of objects.)
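The size and duration thresholds are set through the delivery stream's buffering hints. A minimal boto3 sketch, with hypothetical ARNs and names:

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="user-events",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
        "BucketARN": "arn:aws:s3:::my-analytics-bucket",
        "Prefix": "events/",
        "BufferingHints": {
            "SizeInMBs": 5,            # flush once 5 MB is buffered ...
            "IntervalInSeconds": 300,  # ... or every 5 minutes, whichever comes first
        },
    },
)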

aws s3 sync cli ignoring multipart upload config when syncing between buckets

I'm trying to sync a large number of files from one bucket to another; some of the files are up to 2GB in size. After using the aws cli's s3 sync command like so
aws s3 sync s3://bucket/folder/folder s3://destination-bucket/folder/folder
and then verifying the files that had been transferred, it became clear that the large files had lost the metadata that was present on the original files in the original bucket.
This is a "known" issue with larger files where s3 switches to multipart upload to handled the transfer.
This multipart handeling can be configured via the .aws/config file which has been done like so
[default]
s3 =
  multipart_threshold = 4500MB
However, when testing the transfer again, the metadata on the larger files is still not present. It is present on all of the smaller files, so it's clear that I'm hitting the multipart upload issue.
Given this is an s3 to s3 transfer is the local s3 configuration taken into consideration at all?
As an alternative to this is there a way to just sync the metadata now that all the files have been transferred?
I have also tried aws s3 cp, with no luck either.
You could use Cross/Same-Region Replication to copy the objects to another Amazon S3 bucket.
However, only newly added objects will be replicated between the buckets. You can trigger the copy of existing objects by copying them onto themselves. I'd recommend you test this on a separate bucket first, to make sure you don't accidentally lose any of the metadata.
The method suggested seems rather complex: Trigger cross-region replication of pre-existing objects using Amazon S3 inventory, Amazon EMR, and Amazon Athena | AWS Big Data Blog
The final option would be to write your own code to copy the objects, and copy the metadata at the same time.
Or, you could write a script that compares the two buckets to see which objects did not get their correct metadata, and have it just update the metadata on the target object. This actually involves copying the object to itself, while specifying the metadata. This is probably easier than copying ALL objects yourself, since it only needs to 'fix' the ones that didn't get their metadata.
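A minimal boto3 sketch of that self-copy approach, with a hypothetical bucket, key, and metadata set:

import boto3

s3 = boto3.client("s3")

def fix_metadata(bucket, key, metadata):
    # Copy the object onto itself, replacing its metadata. copy_object accepts
    # sources up to 5 GB, which covers the ~2 GB files described here.
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        Metadata=metadata,              # e.g. {"source-system": "ingest-v1"}
        MetadataDirective="REPLACE",    # a self-copy without REPLACE is rejected by S3
    )

fix_metadata("destination-bucket", "folder/folder/big-file.bin", {"source-system": "ingest-v1"})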
I finally managed to implement a solution for this and took the opportunity to play around with the Serverless Framework and Step Functions.
The general flow I went with was:
The Step Function is triggered using a CloudWatch Event Rule targeting S3 events of the type 'CompleteMultipartUpload', as the metadata is only ever missing on S3 objects that had to be transferred using a multipart process.
The initial Task on the Step Function checks if all the required metadata is present on the object that raised the event.
If it is present, then the Step Function is finished.
If it is not present, then the second Lambda task is fired, which copies all metadata from the source object to the destination object.
This could be achieved without Step Functions, however it was a good simple exercise to give them a go. The first 'Check Meta' task is actually redundant, as the metadata is never present if a multipart transfer is used; I was originally also triggering off of PutObject and CopyObject as well, which is why I had the Check Meta task.
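For anyone wiring up something similar, a sketch of that triggering rule might look like the following. It assumes CloudTrail data events are enabled for the destination bucket (object-level S3 calls only reach CloudWatch Events via CloudTrail); the rule and bucket names are hypothetical.

import json
import boto3

events = boto3.client("events")

pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["CompleteMultipartUpload"],
        "requestParameters": {"bucketName": ["destination-bucket"]},
    },
}

events.put_rule(
    Name="multipart-complete-to-stepfn",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)
# events.put_targets(...) would then point this rule at the state machine.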

Push logs in S3 to DynamoDB continuously

We have our application logs pumped to S3 via Kinesis Firehose. We want this data to also flow to DynamoDB so that we can efficiently query the data to be presented in a web UI (an Ember app). The need for this is so that users are able to filter and sort the data and so on; basically, to support querying abilities via the web UI.
I looked into AWS Data Pipeline. This is reliable, but more tuned to one-time or scheduled imports; we want the flow of data from S3 to DynamoDB to be continuous.
What other choices are out there to achieve this? Moving data from S3 to DynamoDB isn't a very unique requirement, so how have you solved this problem?
Is an S3 event triggered Lambda an option? If yes, then how do I make this Lambda fault tolerant?
For Full Text Querying
For better querying, you can design your solution as follows, using AWS Elasticsearch as the destination for rich querying.
Set up a Kinesis Firehose destination to Amazon Elasticsearch Service. This will allow you to do full-text querying from your web UI.
You can choose to either back up failed records only or all records. If you choose all records, Kinesis Firehose backs up all incoming source data to your S3 bucket concurrently with data delivery to Amazon Elasticsearch.
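As a sketch, the backup mode is chosen when the delivery stream is created. The domain, role, and bucket ARNs below are hypothetical placeholders.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="app-logs-to-es",
    DeliveryStreamType="DirectPut",
    ElasticsearchDestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-es",
        "DomainARN": "arn:aws:es:us-east-1:123456789012:domain/app-logs",
        "IndexName": "logs",
        "S3BackupMode": "AllDocuments",  # back up every record to S3, not just failed ones
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-es",
            "BucketARN": "arn:aws:s3:::app-logs-backup",
        },
    },
)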
For Basic Querying
If you plan to use DynamoDB to store the metadata of the logs, it's better to configure an S3 trigger to a Lambda function, which will retrieve the file and write the metadata to DynamoDB.
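A rough sketch of such a Lambda handler, assuming each Firehose-delivered file contains newline-delimited JSON log entries with requestId and timestamp fields (those key names are assumptions), and a hypothetical table name:

import json
import boto3

s3 = boto3.resource("s3")
table = boto3.resource("dynamodb").Table("app-logs")  # hypothetical table name

def handler(event, context):
    # Triggered by s3:ObjectCreated:* -- load each Firehose-delivered log file
    # and write one DynamoDB item per log line.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.Object(bucket, key).get()["Body"].read().decode("utf-8")
        with table.batch_writer() as batch:
            for line in filter(None, body.splitlines()):
                entry = json.loads(line)  # assumes one JSON document per line
                batch.put_item(Item={
                    "requestId": entry["requestId"],
                    "timestamp": entry["timestamp"],
                    "payload": line,
                })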
Is an S3 event triggered Lambda an option?
This is definitely an option. You can create a PutObject event notification on your S3 bucket and have it call your Lambda function; S3 will invoke the function asynchronously.
If yes, then how do I make this Lambda fault tolerant?
By default, asynchronous invocations will retry twice upon failure. To ensure fault-tolerance beyond the two retries, you can use Dead Letter Queues and send the failed events to an SQS queue or SNS topic to be handled at a later time.
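A minimal sketch of attaching such a dead letter queue, with a hypothetical function name and queue ARN:

import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="s3-to-dynamodb-loader",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:failed-log-loads"
    },
)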

Amazon S3, streaming video while still uploading it

I wanted to know if it would be possible to stream video while you are uploading it.
For example, I have a 100MB video uploading to S3 and the first 50MB are uploaded. Can a client start playing the video through CloudFront even though it's not yet fully uploaded?
Or does S3 first wait for the upload to completely finish, then assemble the video file, and then publish it?
Thanks!
S3 provides read-after-write consistency for PUTs of new objects. The data will not be readable until the write is complete.
Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat. The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write.
S3 consistency model