How to create a folder on Amazon S3 using Objective-C

I know this is a question that may have been asked before (at least for Python), but I am still struggling to get this right. I compare my local folder structure and contents with what I have stored in my Amazon S3 bucket. The directories that exist locally but not on S3 are to be created in my S3 bucket. It seems that Amazon S3 does not have the concept of a folder; rather, a folder is identified as an empty file of size 0. My question is: how can I easily create a folder in Objective-C by putting an empty file (with a name corresponding to the folder name) on S3 (I use ASIHTTP for my GET and PUT events)? I want to create the directory explicitly, not implicitly by copying a new file to a non-existing folder. I appreciate your help on this.

It seems that Amazon S3 does not have the concept of a folder, but rather a folder is identified as an empty file of size 0
The / character is often used as a delimiter when keys are used as pathnames. To make a folder called bar in the parent folder foo, create a zero-byte object with the key foo/bar/ (note the trailing slash, and no leading slash).
Amazon now has an AWS SDK for Objective-C. The S3PutObjectRequest class has the method -initWithKey:inBucket:.
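The Objective-C call isn't shown here, but the underlying request is the same in any SDK: a zero-byte PUT whose key ends in a slash. A minimal sketch in Python with boto3, with placeholder bucket and folder names:

import boto3

s3 = boto3.client("s3")

# A "folder" is just a zero-byte object whose key ends in "/".
# "my-bucket" and "foo/bar/" are placeholder names.
s3.put_object(Bucket="my-bucket", Key="foo/bar/", Body=b"")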

Related

How is it possible to have folders in object storage?

As per my understanding, object storage has a 'flat' structure so you cannot create folders within buckets. However, in both GCP & AWS, I am able to upload regular folders to the buckets, which also look like regular folders on their web UI console. What is the difference between the folders I am seeing on these buckets, and the folders which are there in a file-storage system (like my personal laptop)?
As far as I know, object storage has a 'flat' structure, so you cannot create folders within buckets, nor can you nest buckets in buckets.
If you need some form of 'folder'-like structure, then using prefixes is the way to go. You'll then end up with this structure: {endpoint}/{bucket-name}/{object-prefix}/{object-name}.
That's what you are seeing, as far as I can tell.
Amazon S3 has a flat structure instead of a hierarchy as you would see in a file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. It does this by using a shared name prefix for objects (that is, objects have names that begin with a common string). Object names are also referred to as key names.
For example, you can create a folder on the console named photos and store an object named myphoto.jpg in it. The object is then stored with the key name photos/myphoto.jpg, where photos/ is the prefix.
Here are two more examples:
If you have three objects in your bucket (logs/date1.txt, logs/date2.txt, and logs/date3.txt), the console will show a folder named logs. If you open the folder in the console, you will see three objects: date1.txt, date2.txt, and date3.txt.
If you have an object named photos/2017/example.jpg, the console will show you a folder named photos containing the folder 2017. The folder 2017 will contain the object example.jpg.
When you create a folder in Amazon S3, S3 creates a 0-byte object with a key that's set to the folder name that you provided. For example, if you create a folder named photos in your bucket, the Amazon S3 console creates a 0-byte object with the key photos/. The console creates this object to support the idea of folders.
You can read more in the Amazon S3 user guide.
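To make the convention concrete, here is a small boto3 sketch (bucket and key names are made up): the console shows folders for any shared prefix, whether or not a zero-byte folder object exists.

import boto3

s3 = boto3.client("s3")

# Uploading with "/" in the key is all it takes; the console will
# render a photos/ folder containing 2017/example.jpg.
s3.put_object(Bucket="my-bucket", Key="photos/2017/example.jpg", Body=b"...")

# Explicitly creating the folder, as the console does, is just a
# zero-byte object whose key is the prefix itself.
s3.put_object(Bucket="my-bucket", Key="photos/", Body=b"")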

Listing S3 folders via Java API, excluding files

I have an AWS S3 bucket with several folders, subfolders, and files. I want to get a list of all subfolders of a folder, excluding files. I think I understand that the S3 key concept makes such distinctions iffy, but the AWS web GUI allows users to create folders without files.
The listObjects() method defined in com.amazonaws.services.s3.AmazonS3 returns an ObjectListing with a list of S3ObjectSummary instances for the actual files. Knowing the delimiter, it would be possible to split the keys into a folder hierarchy and filenames, but this appears complicated and error-prone.
Is there an API to get a list of folders without parsing the key property of S3ObjectSummary?
'Objects' in S3 (i.e. files) do not live in actual folders; the 'keys' for those objects are all at the same level of the hierarchy and contain slashes, which the console then displays as if the objects were in folders. To find only the 'folders', as opposed to the files, you need to work out for each object in the bucket whether there are any other objects whose keys start with the same key followed by a slash (/) and then more characters. So, for example, if you had objects with keys like:
a
b
b/c
b/c/d1
b/c/d2
you would only know that c is a folder because there are other objects 'inside' it (i.e. objects with extra characters after the slash).
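That said, the list API can do most of this work for you: passing a delimiter makes S3 return the distinct prefixes at one level as CommonPrefixes, so no manual key parsing is needed. In the Java SDK this is ListObjectsRequest.withDelimiter("/") plus ObjectListing.getCommonPrefixes(); the sketch below shows the same call in Python with boto3 (bucket and prefix names are placeholders).

import boto3

s3 = boto3.client("s3")

# List only the "subfolders" directly under b/ -- S3 folds everything
# below each slash into CommonPrefixes instead of returning the objects.
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="b/", Delimiter="/")

for cp in resp.get("CommonPrefixes", []):
    print(cp["Prefix"])   # e.g. "b/c/"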

Append file on S3 bucket

I am working on a requirement where I have to constantly append to a file in an S3 bucket. The scenario is similar to a rolling log file. Once the script (or any other method) starts writing data to the file, the file should keep being appended to on S3 until I stop it. I have searched several ways but could not find a solution. Most of the available resources describe how to upload a static file to S3, but not a dynamically growing one.
S3 objects can only be overwritten, not appended to. It's not possible.
Once created, objects are durably stored and immutable. Any "change" to an object requires that the object be replaced.
While it is possible to stream a file into S3, this doesn't accomplish the purpose either, because the object you are creating is not accessible until the upload is finalized.
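A common workaround, sketched below in Python with boto3 (bucket and key names are placeholders), is to read the existing object, append locally, and write the whole object back. For true rolling logs, writing many small objects, or using a streaming service in front of S3, is usually a better fit than rewriting one growing object.

import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "logs/rolling.log"   # placeholder names

# "Append" by replacing the whole object: download, extend, re-upload.
try:
    old = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
except s3.exceptions.NoSuchKey:
    old = b""

s3.put_object(Bucket=bucket, Key=key, Body=old + b"new log line\n")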

S3 — Auto generate folder structure?

I need to store user-uploaded files in Amazon S3. I'm new to S3, but as I gathered from the docs, S3 requires me to specify the file's upload path (key) in the PUT method.
I'm wondering if there is a way to send a file to S3 and simply get back a link for HTTP(S) access. I wish Amazon would handle all the headache related to the file/folder structure itself. For example, I just pipe a file from node.js to S3, and in the callback I get an HTTP link with no expiration date. Amazon itself creates something like /2014/12/01/.../$hash.jpg and just returns me the final link. Such a use case looks quite common.
Is it possible? If not, could you suggest any options to simplify the file storage / filesystem tree structure in S3?
Many thanks.
S3 doesn't actually have folders. In a normal filesystem, 2014/12/01/blah.jpg would mean you've got a 2014 folder with a folder called 12 inside it, and so on, but in S3 the entire string 2014/12/01/blah.jpg is the key, essentially a single long filename. You don't have to create any folders.
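S3 will not invent a path for you, but generating one client-side takes only a few lines. A minimal sketch in Python with boto3 (the bucket name, local filename, and URL format are assumptions; a node.js setup would follow the same pattern):

import boto3, hashlib, datetime

s3 = boto3.client("s3")
bucket = "my-bucket"                      # placeholder bucket name

with open("upload.jpg", "rb") as f:       # placeholder local file
    data = f.read()

# Build a date-based key, e.g. 2014/12/01/<sha1-of-content>.jpg
digest = hashlib.sha1(data).hexdigest()
key = datetime.date.today().strftime("%Y/%m/%d/") + digest + ".jpg"

s3.put_object(Bucket=bucket, Key=key, Body=data)

# For a publicly readable bucket, the object is then reachable at a URL like:
print(f"https://{bucket}.s3.amazonaws.com/{key}")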

Folder won't delete on Amazon S3

I'm trying to delete a folder created as a result of a MapReduce job. Other files in the bucket delete just fine, but this folder won't delete. When I try to delete it from the console, the progress bar next to its status just stays at 0. Have made multiple attempts, including with logout/login in between.
I had the same issue and used AWS CLI to fix it:
aws s3 rm s3://<your-bucket>/<your-folder-to-delete>/ --recursive
(this assumes you have run aws configure and aws s3 ls s3://<your-bucket>/ already works)
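The equivalent in code, if the CLI is not an option, is to list every key under the prefix and delete them all. A hedged boto3 sketch (bucket and prefix names are placeholders), using the resource API's batched delete:

import boto3

s3 = boto3.resource("s3")

# Delete every object under the prefix, including the zero-byte
# "folder" object itself; placeholder names throughout.
bucket = s3.Bucket("my-bucket")
bucket.objects.filter(Prefix="your-folder-to-delete/").delete()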
First and foremost, Amazon S3 doesn't actually have a native concept of folders/directories; it is a flat storage architecture comprised of buckets and objects/keys only. The directory-style presentation seen in most tools for S3 (including the AWS Management Console itself) is based solely on convention, i.e. simulating a hierarchy for objects with identical prefixes. See my answer to How to specify an object expiration prefix that doesn't match the directory? for more details on this architecture, including quotes/references from the AWS documentation.
Accordingly, your problem might stem from a tool using a different convention for simulating this hierarchy; see, for example, the following answers in the AWS forums:
Ivan Moiseev's answer to the related question Cannot delete file from bucket, where he suggests using another tool to inspect whether you might have such a problem and remedy it accordingly.
The AWS team's response to What are these _$folder$ objects?: "This is a convention used by a number of tools, including Hadoop, to make directories in S3. They're primarily needed to designate empty directories. One might have preferred a more aesthetic scheme, but well, that is the way that these tools do it."
Good luck!
I was getting the following error when I tried to delete a folder that held log files from CloudFront.
An unexpected error has occurred. Please try again later.
After I disabled logging in CloudFront, I was able to delete the folder successfully.
My guess is that it was a system folder used by CloudFront that did not allow deletion by the owner.
In your case, you may want to check if MapReduce is holding on to the folder in question.
I was facing the same problem. I tried many login/logout attempts and refreshes, but the problem persisted. I searched Stack Overflow and found suggestions to cut and paste the folder into a different folder and then delete it, but that didn't work either.
Another thing you should look at is versioning, which might affect your bucket; suspending versioning may allow you to delete the folder.
My solution was to delete it with code. I have used the boto package in Python for file handling over S3, and the deletion worked when I tried to delete that folder from my Python code.
import boto
from boto.s3.key import Key

keyId = "your_aws_access_key"
sKeyId = "your_aws_secret_key"

fileKey = "dummy/foldertodelete/"        # key of the zero-byte "folder" object to delete
bucketName = "mybucket001"               # name of the bucket where the key resides

conn = boto.connect_s3(keyId, sKeyId)    # connect to S3
bucket = conn.get_bucket(bucketName)     # get the bucket object
k = Key(bucket, fileKey)                 # get the key of the given object
k.delete()                               # delete it
S3 doesn't keep directories; it just has a flat file structure, so everything is managed with keys.
To you it's a folder, but to S3 it's just a key.
If you want to delete a folder named dummy, then the key would be:
fileKey = "dummy/"   # note: no leading slash, matching the key format above
First, read the contents of the directory with the getBucket method; that gives you an array of all files. Then delete each file with the deleteObject method.
if (($contents = $this->S3->getBucket(AS_S3_BUCKET, "file_path")) !== false)
{
    foreach ($contents as $file)
    {
        // Delete each object listed under the prefix.
        $result = $this->S3->deleteObject(AS_S3_BUCKET, $file['name']);
    }
}
$this->S3 is an S3 class object, and AS_S3_BUCKET is the bucket name.