Files disappearing from Amazon S3

Here are links to four files that I uploaded in the last week, but have now disappeared from my bucket on S3:
https://gh-resource.s3.amazonaws.com/ghImage/SWjqzgXy9rGCYvpRF-naypyidaw.jpg
https://gh-resource.s3.amazonaws.com/ghImage/SWjqzgXy9rGCYvpRF-london.jpg
https://gh-resource.s3.amazonaws.com/ghImage/SWjqzgXy9rGCYvpRF-brussels.jpg
https://s3.amazonaws.com/gh-resource/ghImage/SWjqzgXy9rGCYvpRF-ottawa.jpg
I know they uploaded successfully because I saw them on my website multiple times before they disappeared. I just re-uploaded the last file above (ottawa) so that I could inspect its permissions and check for an expiry date or expiry rule. The permissions show that 'everyone' has read/download permission; the expiry date is None and the expiry rule is N/A. This has been happening regularly for the last year or so. What could be causing it?

You should enable logging on your bucket. This will tell you who/what is deleting your files.
See: Logging Amazon S3 API Calls By Using AWS CloudTrail
I found that if you have an expiry policy set up, you'll also see that in the logs. See Lifecycle and Other Bucket Configurations for more info.
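As a rough sketch of both steps with boto3 (the log destination bucket here is a hypothetical name; substitute your own), this turns on server access logging and then checks whether any lifecycle rule could be silently expiring objects:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    SOURCE_BUCKET = "gh-resource"
    LOG_BUCKET = "gh-resource-logs"  # hypothetical; must allow S3 log delivery

    # Record every request against the bucket (including DELETEs)
    # in the log bucket under the given prefix.
    s3.put_bucket_logging(
        Bucket=SOURCE_BUCKET,
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": LOG_BUCKET,
                "TargetPrefix": "access-logs/",
            }
        },
    )

    # Check whether a lifecycle rule exists that could be expiring objects.
    try:
        config = s3.get_bucket_lifecycle_configuration(Bucket=SOURCE_BUCKET)
        for rule in config["Rules"]:
            print(rule)
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
            print("No lifecycle rules on this bucket.")
        else:
            raise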

Related

How to upload a temporary file on S3

Is there a way to upload a file to S3 and have it deleted automatically after, say, 2 hours?
I really don't want to write a cleanup program
AWS has since introduced an option to expire files automatically. You can set a lifecycle policy on the bucket to expire files after a certain period of time, or to transition them to a cheaper storage class. Note, though, that expiration is specified in whole days, so a two-hour window isn't directly expressible; one day is the minimum.
You can find more information in the AWS documentation.
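As a rough illustration with boto3 (the bucket name and the "tmp/" prefix are placeholders), a one-day expiration rule looks like this:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-temporary-uploads",
                    "Filter": {"Prefix": "tmp/"},
                    "Status": "Enabled",
                    # Days is the smallest unit; hours are not supported.
                    "Expiration": {"Days": 1},
                }
            ]
        },
    )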

AWS elasticbeanstalk automating deletion of logs published to S3

I have enabled publishing of logs from AWS Elastic Beanstalk to Amazon S3 by following these instructions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.loggingS3.title.html
This is working fine. My question is how do I automate the deletion of old logs from S3, say over one week old? Ideally I'd like a way to configure this within AWS but I can't find this option. I have considered using logrotate but was wondering if there is a better way. Any help is much appreciated.
I eventually discovered how to do this. You can create an S3 lifecycle rule to delete particular files, or all files in a folder, that are more than N days old. Note: you can also archive instead of deleting, or archive for a while before deleting, among other things; it's a great feature.
Reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectExpiration.html
and http://docs.aws.amazon.com/AmazonS3/latest/dev/manage-lifecycle-using-console.html
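A sketch of such a rule with boto3 (the bucket name and log prefix are assumptions; point the filter at wherever Elastic Beanstalk publishes your logs), archiving logs to Glacier after a week and deleting them after a month:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="elasticbeanstalk-logs-bucket",  # hypothetical name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "rotate-eb-logs",
                    "Filter": {"Prefix": "resources/environments/logs/"},
                    "Status": "Enabled",
                    # Archive after 7 days, delete after 30.
                    "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 30},
                }
            ]
        },
    )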

How do I clean up the log-file clutter in my S3 bucket?

I use S3 and Amazon CloudFront to serve images.
When I go into the Amazon S3 console, it's hard to find the folder where I put my images, because I have to scroll for 10 minutes past all the buckets it creates every 15 minutes to an hour. There are literally thousands.
Is this normal?
Did I misconfigure something in the S3 settings, or in the CloudFront distribution I connected to this S3 folder?
What should I do to delete them? It seems I can only delete them one by one.
See this snapshot: [screenshot of the bucket listing, which continues like this for thousands of files]
Those are not buckets, but are actually log files generated by S3 because you enabled logging for your bucket and configured it to save the logs in the same bucket.
If you want to keep logging enabled but make it easier to work with the logs, just use a prefix in the logging configuration or set up logging to use a different bucket.
If you don't need the logs, just disable logging.
See http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html for more details.
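As a rough sketch of both fixes with boto3 (bucket names and the "log/" prefix are placeholders): stop new logs from landing in the bucket, then bulk-delete the existing log objects instead of clicking one by one:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-images-bucket"  # placeholder

    # Either disable logging entirely by sending an empty status...
    s3.put_bucket_logging(Bucket=BUCKET, BucketLoggingStatus={})
    # ...or keep it but deliver logs elsewhere, e.g.:
    # s3.put_bucket_logging(
    #     Bucket=BUCKET,
    #     BucketLoggingStatus={"LoggingEnabled": {
    #         "TargetBucket": "my-logs-bucket", "TargetPrefix": "s3-access/"}},
    # )

    # Then remove the existing logs in batches (up to 1000 keys per call),
    # assuming they share a common key prefix, here "log/".
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix="log/"):
        contents = page.get("Contents", [])
        if contents:
            s3.delete_objects(
                Bucket=BUCKET,
                Delete={"Objects": [{"Key": o["Key"]} for o in contents]},
            )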

Amazon S3 problems with S3fox

I have created an Amazon S3 account and am trying to upload some files with the S3Fox add-on.
I installed S3Fox and logged in with my access key and secret key credentials.
I then created a bucket by right-clicking, selecting 'create a directory', and choosing the option to put the bucket in Europe. Now when I try to drill down into my folder, I keep getting an error message saying "Error connecting! - Temporary Redirect", and I cannot transfer any files.
But if I create the bucket without selecting the option to put it in Europe, I am able to drill down into the bucket.
I would like my bucket to be in Europe, as I am from the UK. Please suggest what I am missing and how I can resolve this issue.
Thanks
Sreekanth
I have the same problem - it still doesn't work after an hour. To save waiting, I've installed CloudBerry (freeware for Windows), which seems to be a better alternative anyway (it looks more user-friendly and has more options): http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx
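For what it's worth, the redirect itself is documented S3 behavior: a newly created bucket outside the US can answer the global endpoint with "307 Temporary Redirect" until DNS propagates, so a client pointed directly at the bucket's own region avoids it. A minimal sketch with boto3 (the bucket name is a placeholder):

    import boto3

    # Address the bucket through its own region (EU/Ireland here)
    # rather than the global endpoint, which may still be redirecting.
    s3 = boto3.client("s3", region_name="eu-west-1")
    for obj in s3.list_objects_v2(Bucket="my-eu-bucket").get("Contents", []):
        print(obj["Key"])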

Amazon S3 bucket-level stats

I'd like to know if there's a way for me to get bucket-level stats in Amazon S3.
Basically, I want to charge customers for storage and GET requests on my system (which is hosted on S3).
So I created a specific bucket for each client, but I can't seem to get the stats for just a specific bucket.
I see the API lets me
GET Bucket
or
GET Bucket requestPayment
But i just can't find how to get the number of requests issued to said bucket and the total size of the bucket.
Thanks for help !
Regards
I don't think that what you are trying to achieve is possible using the Amazon API. The GET Bucket request does not contain usage statistics (request counts, etc.); the only time-related field it returns is each object's latest modification timestamp (LastModified).
My suggestion would be to enable logging on your buckets and perform the analysis you want from there.
The S3 starting page gives you an overview of it:
Amazon S3 also supports logging of requests made against your Amazon S3 resources. You can configure your Amazon S3 bucket to create access log records for the requests made against it. These server access logs capture all requests made against a bucket or the objects in it and can be used for auditing purposes.
And I am sure there is plenty of documentation on that matter.
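As a rough sketch of the storage half with boto3 (assuming one placeholder bucket per client): total size can be summed from a GET Bucket (ListObjects) listing, while request counts have to come out of the access logs instead:

    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    # Sum object sizes across the whole bucket, page by page.
    total_bytes = 0
    object_count = 0
    for page in paginator.paginate(Bucket="client-acme"):  # placeholder name
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]
            object_count += 1

    print(f"{object_count} objects, {total_bytes} bytes total")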
HTH.