Do I need to create folders in Cloudinary?

I am building a new web application and want to use Cloudinary for users' images. My question is: do I need to create folders in my Cloudinary cloud? The reason I am asking is that if I were using a file system and had 100,000+ images in one folder, it would start killing my app, and I would need to break them into several folders.
Is it the same for Cloudinary?
Thanks,

It depends on your current and future requirements.
In general, I believe folders can help you better organize your resources, especially when there are lots of them.
Note that besides folders, you can also assign tags to your images (e.g., by user) or add a prefix to the images' public IDs (e.g., user1-<image_name>).
You can later use Cloudinary's Admin API to list your resources either by folder/prefix or by tag.
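For illustration, here is a minimal sketch using the Cloudinary Ruby SDK, combining a per-user folder with a tag (the folder name users/42 and the tag user_42 are made-up examples, not required conventions):

require 'cloudinary'

# Upload into a per-user folder and tag the image with the user's ID.
Cloudinary::Uploader.upload('avatar.jpg',
                            folder: 'users/42',
                            tags: ['user_42'])

# List resources by folder/prefix...
by_folder = Cloudinary::Api.resources(type: :upload, prefix: 'users/42/')

# ...or by tag.
by_tag = Cloudinary::Api.resources_by_tag('user_42')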

Related

How can I insert a user with a profile picture using Blazor Server

I want the user to upload his picture when he registers his information.
The thing is, when the user uploads his image, it should automatically create a folder with his ID, like this: wwwroot/images/UserID/fadi.jpg
Basically: you really shouldn't. The wwwroot folder is for static assets used by the application. You're using server-side Blazor, so in theory it might be possible, but that's not what the folder is meant for. An external service like AWS would be preferable, but if you can't do that (whether because of payment requirements or other complications), I would suggest saving the image to your database. One way to do this is to base64-encode the image and store it that way. I'm not going to give a full example of that here; there are plenty available elsewhere. One such example is this.
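That said, the encoding step itself is tiny. Here is a minimal sketch of the base64 round trip (shown in Ruby purely for illustration, since the question is about Blazor/C#; the file name is hypothetical):

require 'base64'

# Read the uploaded image and base64-encode it for storage in a text column.
encoded = Base64.strict_encode64(File.binread('fadi.jpg'))

# Decode it back to raw bytes when serving the image.
bytes = Base64.strict_decode64(encoded)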

Alternative to Dropbox public folder for navigatable links

I am an iOS developer and, until recently, had been using Dropbox's public folder as a makeshift server for testing purposes. This is because public folders had static URLs, and the subfolders and files they contained could be accessed directly through the file tree (ex: https://dl.dropboxusercontent.com/u/xxxxxxx/audioTests/). This was useful when iterating through a variable array of files, as I could just specify the folder URL and append the individual file names (ex: https://dl.dropboxusercontent.com/u/xxxxxxx/audioTests/music.mp3).
As of this month, however, Dropbox has discontinued this feature for free users and is planning to remove it for paid users later this year. Does anyone know of a (preferably free) alternative that offers the same kind of navigable public folder? I need a minimal amount of space, as I will just be using it for testing purposes. All of the other similar services I know of, such as Google Drive and OneDrive, only offer links to individual files and folders.

Storing user documents in S3 - design decisions

I have an Angular web app that will store CVs, cover letters, and user images. I'm trying to come up with a scheme of buckets and folders that makes the most sense for my app.
For avatars, which are unique, I think it would be simple to:
- have a bucket named avatars
- name each avatar with the user-id of the user
For CVs, there are two kinds: one with full contact information, and another one edited to not reveal the contact information. For cover letters, those are also mostly unique. If I store only the most recent version, I could:
- have a bucket named Docs
- create a folder with the user-id of the user
- have items userid/cv-edited.doc, userid/cv-unedited.doc, userid/letter.doc
Is this a reasonable scheme? Are there any pitfalls?
Is there a strong reason to separate the avatars from the user-id folders? It sure seems simpler to keep a single path for all the data... Then you can have permissions driven purely by the path.
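As a sketch of that single-path idea with the aws-sdk-s3 gem (the bucket name, region, and key layout below are assumptions, not a prescription):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')
user_id = '42' # hypothetical

# One prefix per user keeps avatars and documents under a single path,
# so permissions can be scoped to users/<id>/.
%w[avatar.jpg cv-edited.doc cv-unedited.doc letter.doc].each do |name|
  s3.put_object(bucket: 'my-app-docs',
                key: "users/#{user_id}/#{name}",
                body: File.binread(name))
end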

How to hide actual path Carrierwave

I'm building an application (Rails 3.2.8) where users can upload music tracks and associated clips. While clips can be publicly accessible, tracks shouldn't be accessible without purchase. I'm using CarrierWave for uploading both types of files. However, I do not want to expose the actual track path to the users.
What techniques do such services use to protect against hotlinking and/or unauthorized access to the files?
Currently, the CarrierWave path is like:
def store_dir
  "tracks/#{model.user_id}/"
end
However, this is very vulnerable: anyone can easily guess the URL.
For authorised downloading, I can consider:
1. A static download link (valid at all times for that user; however, no guests or other users can use that URL)
2. Temporary links generated for each download
Please enlighten me with the approaches I can consider (I will study them) so that I can secure the files against downloading without purchase.
Seems like you want both, public for the clip and private for the track.
I am trying to implement this as well, with the approach below (not tested):
def fog_public
  # model is the record this uploader is mounted on; job_kind is assumed here.
  model.job_kind == 'public'
end
S3 allows you to store private files; they will be available only for a given period and with an access token.
With CarrierWave, you simply have to set fog_public to false, as described here: https://github.com/jnicklas/carrierwave#using-amazon-s3
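To then hand out time-limited links for purchased tracks, CarrierWave's fog storage can generate authenticated, expiring URLs once fog_public is false; a sketch (the expiration value and the model/uploader names are assumptions):

CarrierWave.configure do |config|
  # Signed URLs for private (fog_public = false) files expire after 10 minutes.
  config.fog_authenticated_url_expiration = 600
end

# In a controller action that has already verified the purchase (hypothetical):
# redirect_to current_user.purchased_track.file.url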

Searching Inside an Amazon S3 Bucket

If I have a bucket with hundreds of thousands of images, is it OK to look up each image I want to display on my site via its ID, or is there a more efficient way (perhaps using multiple folders in a bucket)?
I was also thinking of giving each image a unique hash or something similar in order to stop duplicated names in the bucket. Does that seem like a good idea?
You just link to each image using normal URLs. For public files, the URLs are in the format:
http://mybucket.s3.amazonaws.com/myimage.jpg
For private files, you need to generate a signed URL (which is easy using any of the SDKs) in the format:
http://mybucket.s3.amazonaws.com/myimage.jpg?AWSAccessKeyId=44CF9SAMPLEF252F707&Expires=1177363698&Signature=vjSAMPLENmGa%2ByT272YEAiv4%3D
There's nothing wrong with storing each file with a unique name. If you set the correct headers on the file, any downloads can still use the original name, e.g. Content-Disposition: attachment; filename=myimage.jpg
For listing a bucket's contents, you would use the API's GetBucket command. I find it easier to use the SDKs for any access via the API.
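A short sketch with the aws-sdk-s3 gem covering both the listing and the signed URL (bucket and key names are made up):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')

# List objects under a prefix ('folder'); the SDK wraps the GetBucket/ListObjects call.
resp = s3.list_objects_v2(bucket: 'mybucket', prefix: 'images/')
resp.contents.each { |obj| puts obj.key }

# Generate a time-limited signed URL for a private object.
signer = Aws::S3::Presigner.new(client: s3)
url = signer.presigned_url(:get_object, bucket: 'mybucket',
                           key: 'myimage.jpg', expires_in: 900)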
It can be a pain to search or process bucket objects in parallel, as Amazon lists everything lexicographically (the only ordering currently supported). The problem with purely random IDs is that related objects no longer share a common prefix, so you cannot narrow a listing by prefix or search in parallel to optimize.
Here is an interesting article on performance improvements. I use it in my work and see a significant difference under high load.
http://aws.typepad.com/aws/2012/03/amazon-s3-performance-tips-tricks-seattle-hiring-event.html
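One takeaway from that article is to introduce randomness at the start of the key so that writes and reads spread across partitions; a minimal sketch (the 4-character prefix length is arbitrary):

require 'digest'

# Derive a short hash prefix from the file name so keys spread out
# lexicographically instead of clustering on a sequential prefix.
name = 'myimage.jpg'
key  = "#{Digest::MD5.hexdigest(name)[0, 4]}/#{name}"
# key now looks like '<4 hex chars>/myimage.jpg'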