We're using Amazon S3 for file storage and recently found out that we need to keep some sort of directory structure. Since S3 doesn't support true directories, we know we can encode the structure in the object names. For example...
abc/123/draft.doc
What I want to know is: if I want to provide a public link to this particular file, is there any way that the file can simply be draft.doc instead of abc/123/draft.doc?
I feel stupid. After some more investigation I realized that by creating a GET URL to the resource, I get exactly what I need.
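In case it helps anyone else: one way to get this effect (not necessarily what the poster did) is a pre-signed GET URL whose response-content-disposition override makes the browser save the nested key as plain draft.doc. A minimal sketch with the AWS SDK for JavaScript v2; the bucket name is made up:

```typescript
import { S3 } from 'aws-sdk';

const s3 = new S3();

// Pre-signed GET URL for the nested key. The ResponseContentDisposition
// override tells the browser to save the download as plain "draft.doc".
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',               // hypothetical bucket name
  Key: 'abc/123/draft.doc',
  Expires: 3600,                     // link valid for one hour
  ResponseContentDisposition: 'attachment; filename="draft.doc"',
});

console.log(url);
```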
Related
I need to store user-uploaded files in Amazon S3. I'm new to S3, but as I understand from the docs, S3 requires me to specify the file's full upload path (the object key) in the PUT request.
I'm wondering if there is a way to send a file to S3 and simply get a link for http(s) access. I'd like Amazon to handle all the headaches related to the file/folder structure itself. For example, I just pipe a file from node.js to S3, and in the callback I get an HTTP link with no expiration date. Amazon itself creates something like /2014/12/01/.../$hash.jpg and just returns me the final link. Such a use case looks quite common.
Is it possible? If not, could you suggest any options to simplify the file storage/filesystem tree structure in S3?
Many thanks.
S3 doesn't actually have folders. In a normal filesystem, 2014/12/01/blah.jpg would mean you've got a 2014 folder with a folder called 12 inside it and so on, but in S3 the entire 2014/12/01/blah.jpg is the key - essentially a single long filename. You don't have to create any folders.
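To illustrate, here's a minimal sketch of the node.js flow you describe, using the AWS SDK for JavaScript v2. The bucket name and key scheme are assumptions; you have to generate the date/hash key yourself, since S3 won't invent one for you:

```typescript
import { createHash } from 'crypto';
import { createReadStream } from 'fs';
import { S3 } from 'aws-sdk';

const s3 = new S3();

// Build a date-based key, upload, and return the plain HTTPS link.
// The link has no expiry, provided the object is publicly readable.
async function storeFile(localPath: string): Promise<string> {
  const now = new Date();
  const hash = createHash('md5').update(localPath + now.getTime()).digest('hex');
  const key = `${now.getFullYear()}/${now.getMonth() + 1}/${now.getDate()}/${hash}.jpg`;

  const result = await s3
    .upload({
      Bucket: 'my-bucket',          // hypothetical bucket name
      Key: key,                     // "folders" are just slashes in the key
      Body: createReadStream(localPath),
      ACL: 'public-read',           // needed for a link anyone can open
    })
    .promise();

  return result.Location;           // e.g. https://my-bucket.s3.amazonaws.com/2014/12/1/<hash>.jpg
}
```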
I want to use Orchard 1.7 with media storage on S3 (as I'm deploying to AppHarbor).
So far I'm looking at the S3 Storage provider, but it's a bit out of date.
Has anyone done this? Is there a better way to use S3 with the new media manager?
I've got images uploading to S3, but they don't display when I click the folder.
Here is the Gist of my updated S3Provider.
It's missing methods for create file, rename folder, get file, and get storage path. Any help on how to complete these would be appreciated... However, stepping through the debugger in VS, this doesn't seem to be the root cause of the image display issue above.
Edit
Looks like the file is uploading to S3 but not to the database, due to the GetFile method throwing an error...
Edit 2
Added some code to the GetFile method, and now that works (gist updated); I can upload images. However, the thumbnails are still not working; they just come back as empty tags... I think this is because the media manager is using the Open method - which is supposed to open a file so you can write a stream to it. I don't know how to achieve this with S3... any ideas welcome.
As part of the AWSSDK NuGet package (version 1.5.28.3) you can access an S3FileInfo object. I've used this in my S3 storage file and updated the S3 Storage provider.
This seems to work; I need to do a bit more testing on it.
NOTE: I had to add some code to the GetFile method to ensure the permissions were set correctly; otherwise, updating the thumbnails overwrote the permissions on the file... I'm sure there is a better way to do this.
I uploaded a lot of files (about 5,800) to Amazon S3, which seemed to work perfectly well, but a few of them (about 30) had their filenames converted to lowercase.
The first time, I uploaded with Cyberduck. When I saw this problem, I deleted them all and re-uploaded with Transmit. Same result.
I see absolutely no pattern that would link the files that got their names changed; it seems completely random.
Has anyone had this happen to them?
Any idea what could be going on?
Thank you!
Daniel
First, note that Amazon S3 object URLs are case sensitive, so Draft.doc and draft.doc are two different keys. When you uploaded the files with uppercase letters and accessed them with matching URLs, everything worked. But after the objects were renamed to lowercase, if you are still trying the old URLs, you may get an Access Denied/NoSuchKey error.
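You can verify the case sensitivity directly; a sketch with the AWS SDK for JavaScript v2 (bucket and key names are made up):

```typescript
import { S3 } from 'aws-sdk';

const s3 = new S3();

// "Draft.doc" and "draft.doc" are distinct keys: HEAD on the old casing
// fails once only the lowercase object exists.
async function checkCasing() {
  await s3.headObject({ Bucket: 'my-bucket', Key: 'draft.doc' }).promise(); // succeeds
  try {
    await s3.headObject({ Bucket: 'my-bucket', Key: 'Draft.doc' }).promise();
  } catch (err) {
    console.log((err as any).code); // 'NotFound': the uppercase key no longer exists
  }
}
```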
Can you try Bucket Explorer to generate the URL for the Amazon S3 object and then access the file that way?
Disclosure: I work for Bucket Explorer.
When I upload to Amazon servers, I always use FileZilla and SFTP. I've never had such a problem. I'd guess (and honestly, this is just a guess, since I haven't used Cyberduck or Transmit) that the utilities you're using are doing the filename changing. Try it with FileZilla and see what the result is.
I'd like to list all files from a remote folder (let's say www.mysite.com/folder, and this folder is already configured through .htaccess for directory listing).
After listing, I'll need to copy the remote files to a local folder.
For listing/copying only local files, I was using NSFileManager, but this doesn't work for the remote ones. I've been looking for some reference on it, but couldn't find anything so far...
While NSFileManager can in fact handle URLs, it's not going to download the Apache HTML page with the directory listing and parse it for you... you'll have to do that yourself. This sounds like a strange thing to be doing, however, so you may want to explain the reasoning, and we may be able to suggest better alternatives. WebDAV comes to mind.
UPDATE: Based on your comment, why not put the resources in a .zip (or similar) file and download that? Then it's a single download and you can just extract it locally. Sounds like it would save a lot of headaches and would make it much easier to do things like checksum validations on the download(s).
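To illustrate the zip idea (sketched in TypeScript for brevity rather than the Objective-C of the question; the adm-zip npm package and the URL are assumptions, and Node 18+ is assumed for global fetch):

```typescript
import AdmZip from 'adm-zip';

// Download one archive instead of N separate files, then extract locally.
async function downloadBundle(url: string, destDir: string) {
  const res = await fetch(url);
  const zip = new AdmZip(Buffer.from(await res.arrayBuffer()));
  zip.extractAllTo(destDir, /* overwrite */ true);
}

// Usage: downloadBundle('https://www.mysite.com/folder/bundle.zip', './resources');
```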
Maybe it's not the best way, but instead of getting a directory listing, we're going to keep a list of the files that should be transferred (it could be a .txt or .xml); a sketch of this approach follows below.
For downloading and tracking multiple requests, we're going to use ASINetworkQueues (more details can be found on http://allseeing-i.com/ASIHTTPRequest).
Another good suggestion, given by d11wqt (thank you for your help), is compressing the files and making just one single request.
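For illustration, the manifest approach looks roughly like this as a platform-neutral TypeScript sketch (Node 18+ for global fetch; the manifest file name is an assumption). On iOS, the same flow maps onto ASINetworkQueues, with one queued request per listed file:

```typescript
import { writeFile } from 'fs/promises';
import * as path from 'path';

// Fetch a plain-text manifest (one file name per line), then download
// each listed file into a local folder.
async function syncFromManifest(baseUrl: string, localDir: string) {
  const manifest = await fetch(`${baseUrl}/manifest.txt`);   // hypothetical manifest name
  const names = (await manifest.text()).split('\n').filter(Boolean);

  for (const name of names) {
    const res = await fetch(`${baseUrl}/${name}`);
    const bytes = Buffer.from(await res.arrayBuffer());
    await writeFile(path.join(localDir, name), bytes);
  }
}

// Usage: syncFromManifest('https://www.mysite.com/folder', './downloads');
```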
If someone goes to the URL of my bucket, they are able to see every single file listed.
Although I want the files in my bucket to be publicly viewable, I'd prefer not to have this list view available. Is there a way to prevent "directory listings" like this?
You should remove read access for the "All Users" built-in group from the bucket's ACL. You can do that using a tool like the CloudBerry Explorer freeware.
Make sure you keep read access on the files you want to serve from S3.
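If you'd rather script the change than use a GUI tool, it looks roughly like this with the AWS SDK for JavaScript v2 (bucket and key names are placeholders):

```typescript
import { S3 } from 'aws-sdk';

const s3 = new S3();

async function disableListingButKeepFilesPublic() {
  // Drop the bucket-level grant that lets "All Users" list the bucket...
  await s3.putBucketAcl({ Bucket: 'my-bucket', ACL: 'private' }).promise();

  // ...while keeping (or re-granting) public read on the individual objects.
  await s3.putObjectAcl({ Bucket: 'my-bucket', Key: 'draft.doc', ACL: 'public-read' }).promise();
}
```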
Thanks
Andy