S3 Lifecycle Policies - amazon-s3

I have objects in an S3 bucket under a prefix called foo/. If I move the objects to another prefix called bar/ using the aws cli mv command, and bar/ has a lifecycle policy on it to expire objects older than 60 days, would the objects' 'age' reset to 0 once they land in the bar/ prefix, or would it take into account the time they spent in the foo/ prefix?
I do not think it would restart the object lifetime but I would like to be 100% sure.

When you move a file, S3 copies it to the new location and then deletes the original under the hood.
Therefore, the age will reset to zero when you move the objects from one prefix to another.
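For anyone who wants to double-check this on their own data, here is a minimal boto3 sketch (bucket and key names are made up) that performs the same copy-then-delete that aws s3 mv does and prints LastModified, which is the timestamp the expiration clock counts from:

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"        # placeholder bucket name
src_key = "foo/report.csv"  # placeholder source key
dst_key = "bar/report.csv"  # placeholder destination key

# aws s3 mv is a copy followed by a delete; the copy creates a brand-new object
s3.copy_object(Bucket=bucket, Key=dst_key,
               CopySource={"Bucket": bucket, "Key": src_key})
s3.delete_object(Bucket=bucket, Key=src_key)

# LastModified on the new object is the time of the copy, so the 60-day
# expiration rule on bar/ starts counting from here, not from the original upload
head = s3.head_object(Bucket=bucket, Key=dst_key)
print(head["LastModified"])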

Related

Are s3 prefixes deleted by the lifecycle management?

Are (empty) prefixes also deleted by the s3 lifecycle management?
S3 is a glorious hash table of (key, value) pairs. The presence of '/' in the key gives the illusion of folder structure and the S3 web UI also organizes the keys in a hierarchy. So, if lifecycle management rules end up deleting all the keys with a certain prefix, then it essentially means the prefix is also deleted (basically, there is no key with such a prefix). HTH.
Short answer: yes
More detail: such prefixes are just 0-byte objects. When you use the Amazon S3 console to create a folder, Amazon S3 creates a 0-byte object with a key that's set to the folder name that you provided. For example, if you create a folder named photos in your bucket, the Amazon S3 console creates a 0-byte object with the key photos/. The console creates this object to support the idea of folders (see: How S3 supports the folder idea).
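As a quick way to see this for yourself, here is a hedged boto3 sketch (bucket name is a placeholder) that lists keys under a console-created folder; the placeholder object shows up as the key photos/ with size 0 alongside the real objects:

import boto3

s3 = boto3.client("s3")

# List everything under the console-created "photos/" folder;
# the 0-byte placeholder appears as the key "photos/" itself
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="photos/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])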

AWS S3 object life cycle

I want to delete objects in an S3 bucket in bulk after a certain period is over. For example, all objects starting with 2016-08-01* in their name, or *.xlsx files in a bucket. I can set a lifecycle rule for an individual object, but not with a * wildcard. How do I do it?
According to the S3 API, you can list objects that start with a certain prefix and then do a multi-object delete.
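A rough boto3 sketch of that approach (bucket name and prefix are placeholders); delete_objects accepts up to 1,000 keys per call, which matches the page size of the listing:

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"   # placeholder bucket name
prefix = "2016-08-01"  # delete everything whose key starts with this

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if keys:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})

Note that a suffix pattern like *.xlsx cannot be matched by a prefix, so for that case you would filter the listed keys in your own code before deleting.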

AWS S3 make folder IFA forever

In the AWS S3 web interface we can select a specific folder, then navigate to properties and select IFA... like this
This will start processing all existing data, but if you open the same properties page again it's not selected. If you select again and apply, it will show a processing bar again...
It's ambiguous: does that folder remain IFA once selected and saved? Will future uploads to that folder be stored in IFA storage? If not, how do we do that?
I know that there is a migration rule like "after 30 days move to IFA", but I know upfront that my data is suitable for IFA storage...
"For all selected items" means the current objects, not the folders.
The folders do not exist in S3 in any meaningful sense. Yes, you can "create" a folder, but all that does is create a placeholder for convenience in the console -- there are never any files actually "in" the folder -- so it is impossible to actually set any kind of properties on them. The folders that appear in the console are just a human-friendly representation of a hierarchy, created by splitting the keys on / delimiters.
In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects.
http://docs.aws.amazon.com/AmazonS3/latest/UG/FolderOperations.html
If you want objects stored for their lifetime as either STANDARD_IA or REDUCED_REDUNDANCY then you have to initially upload them that way. If you want them to transition to STANDARD_IA or GLACIER later, then you use lifecycle policies.
Note also that changing storage classes in the console like you are doing incurs the same cost as re-uploading the object, because changing storage classes is accomplished by the console invoking the S3 copy operation -- using the same key for source and destination. It's $0.01 per 1000 objects, so use it wisely on large collections of objects. Objects and their metadata are immutable, so modifying them (including storage class, which isn't technically metadata) requires replacing the object with an identical object.
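To make the two options concrete, here is a hedged boto3 sketch (bucket, key, and rule names are placeholders) showing an upload that goes straight to STANDARD_IA, and a lifecycle rule that transitions a prefix to STANDARD_IA after 30 days:

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder bucket name

# Option 1: upload the object as STANDARD_IA from the start
s3.put_object(Bucket=bucket, Key="reports/2016-08.csv",
              Body=b"...", StorageClass="STANDARD_IA")

# Option 2: transition everything under reports/ to STANDARD_IA after 30 days
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "reports-to-ia",
            "Filter": {"Prefix": "reports/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)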

Folder won't delete on Amazon S3

I'm trying to delete a folder created as a result of a MapReduce job. Other files in the bucket delete just fine, but this folder won't delete. When I try to delete it from the console, the progress bar next to its status just stays at 0. Have made multiple attempts, including with logout/login in between.
I had the same issue and used AWS CLI to fix it:
aws s3 rm s3://<your-bucket>/<your-folder-to-delete>/ --recursive
(this assumes you have run aws configure and aws s3 ls s3://<your-bucket>/ already works)
First and foremost, Amazon S3 doesn't actually have a native concept of folders/directories; rather, it is a flat storage architecture composed of buckets and objects/keys only. The directory-style presentation seen in most tools for S3 (including the AWS Management Console itself) is based solely on convention, i.e. simulating a hierarchy for objects with identical prefixes - see my answer to How to specify an object expiration prefix that doesn't match the directory? for more details on this architecture, including quotes/references from the AWS documentation.
Accordingly, your problem might stem from a tool using a different convention for simulating this hierarchy; see, for example, the following answers in the AWS forums:
Ivan Moiseev's answer to the related question Cannot delete file from bucket, where he suggests using another tool to inspect whether you might have such a problem and remedy it accordingly.
The AWS team response to What are these _$folder$ objects? - This is a convention used by a number of tools including Hadoop to make directories in S3. They're primarily needed to designate empty directories. One might have preferred a more aesthetic scheme, but well that is the way that these tools do it.
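If you suspect those _$folder$ markers are what is confusing the console, a hedged boto3 sketch along these lines (bucket name and prefix are made up) can list and remove them:

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder bucket name
prefix = "output"     # the MapReduce output "folder" that won't delete, without the trailing /

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        # Hadoop-style directory markers end with _$folder$
        if obj["Key"].endswith("_$folder$"):
            print("removing marker", obj["Key"])
            s3.delete_object(Bucket=bucket, Key=obj["Key"])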
Good luck!
I was getting the following error when I tried to delete a folder that held log files from CloudFront.
An unexpected error has occurred. Please try again later.
After I disabled logging in CloudFront, I was able to delete the folder successfully.
My guess is that it was a system folder used by CloudFront that did not allow deletion by the owner.
In your case, you may want to check if MapReduce is holding on to the folder in question.
I was facing the same problem. I tried many login/logout attempts and refreshes, but the problem persisted. I searched Stack Overflow and found suggestions to cut and paste the folder into a different folder and then delete it, but that didn't work either.
Another thing you should look at is versioning, which might affect your bucket; suspending versioning may allow you to delete the folder.
My solution was to delete it with code. I used the boto package in Python for file handling over S3, and the deletion worked when I deleted that folder from my Python code.
import boto
from boto.s3.key import Key

keyId = "your_aws_access_key"
sKeyId = "your_aws_secret_key"
fileKey = "dummy/foldertodelete/"  # key of the folder placeholder to be deleted
bucketName = "mybucket001"         # name of the bucket where the key resides

conn = boto.connect_s3(keyId, sKeyId)  # connect to S3
bucket = conn.get_bucket(bucketName)   # get the bucket object
k = Key(bucket, fileKey)               # build a Key pointing at the placeholder
k.delete()                             # delete it
S3 doesn't keep directories; it just has a flat structure, so everything is managed with keys.
For you it's a folder, but for S3 it's just a key.
If you want to delete a folder named dummy, then the key would be
fileKey = "dummy/"
First, read the contents of the directory with the getBucket method; that gives you an array of all the files. Then delete each file with the deleteObject method.
if (($contents = $this->S3->getBucket(AS_S3_BUCKET, "file_path")) !== false)
{
    // $contents is an array of the objects under the given prefix
    foreach ($contents as $file)
    {
        $result = $this->S3->deleteObject(AS_S3_BUCKET, $file['name']);
    }
}
$this->S3 is the S3 class object, and AS_S3_BUCKET is the bucket name.

How to delete lot of objects named with common prefix from S3 bucket?

I have files in an S3 bucket, and their names have the following format:
username#file_id#...
How do I remove all john#doe#* items without listing them? There are thousands of them, so when a user requests that my app delete all of them, they have to wait.
For anyone who stumbles upon this now, you can create a lifecycle rule to expire (delete) objects with a certain prefix.
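A rough boto3 sketch of such a rule (bucket name is a placeholder); keep in mind that lifecycle expiration runs asynchronously, typically once a day, rather than at the moment the user asks:

import boto3

s3 = boto3.client("s3")

# Expire (delete) every object whose key starts with "john#doe#" one day after creation
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-john-doe",
            "Filter": {"Prefix": "john#doe#"},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }]
    },
)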
There's no way to tell S3 to delete all files that meet specific criteria - you have to delete one key at a time.
Most client libraries offer a way to filter and paginate such that you'd only list the files you need to delete, and you can provide a status update. For example, Boto's bucket listing accepts prefix as one of the parameters.
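As a sketch of that pattern with boto3's resource API (bucket name is a placeholder), filtering by prefix and reporting progress while the keys are deleted:

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-bucket")  # placeholder bucket name

deleted = 0
# filter(Prefix=...) pages through only the keys that start with "john#doe#"
for obj in bucket.objects.filter(Prefix="john#doe#"):
    obj.delete()
    deleted += 1
    if deleted % 100 == 0:
        print(f"deleted {deleted} objects so far")  # status update for the user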
I had mistakenly created logging files in the same bucket, and there were tons of log files in it.
Luckily I came across a Node.js utility, node-s3-utils, and it saved my day!
Example of deleting files with the foo/ prefix that have the .txt extension:
$ s3utils files delete -c ./.s3-credentials.json -p foo/ -r 'foo\/(\w)+\.txt'