How to hide the actual path with CarrierWave - ruby-on-rails-3

I'm building an application (Rails 3.2.8) where users can upload music tracks and associated clips. While clips can be publicly accessible, tracks must not be accessible without purchase. I'm using CarrierWave to upload both types of files. However, I do not want to expose the actual track path to users.
What techniques do such services use to protect against hotlinking and/or unauthorized access to the files?
Currently, the CarrierWave path looks like this:
def store_dir
  "tracks/#{model.user_id}/"
end
However, this is very vulnerable: anyone can easily guess the URL.
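For illustration, a path can at best be made unguessable. A sketch, where secure_token is a hypothetical random column (e.g. populated with SecureRandom.hex); this only hides the URL and enforces no purchase check:
def store_dir
  # secure_token is a hypothetical random value stored on the model;
  # it makes the URL unguessable, but anyone who has it can still download
  "tracks/#{model.user_id}/#{model.secure_token}/"
end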
For authorized downloading, I can consider:
1. A static download link (valid indefinitely for that user; however, no guests or other users can use that URL)
2. Temporary links generated for each download
Please point me to approaches I can consider (I will study them) so that I can keep the files from being downloaded without a purchase.

It seems like you want both: public access for the clip and private access for the track.
I am trying to implement this as well, with the approach below (not tested):
def fog_public
  # per-file visibility: job_kind is an attribute on this answer's own model
  model.job_kind == 'public'
end

S3 allows you to store private files; they will then be available only for a given period and with an access token.
With CarrierWave, you simply have to set fog_public to false, as described here: https://github.com/jnicklas/carrierwave#using-amazon-s3
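A minimal sketch of that configuration (the credentials and bucket name are placeholders; fog_authenticated_url_expiration controls how long the signed URLs remain valid):
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'YOUR_KEY',    # placeholder
    :aws_secret_access_key => 'YOUR_SECRET'  # placeholder
  }
  config.fog_directory = 'your-bucket'           # placeholder bucket name
  config.fog_public    = false                   # store objects with a private ACL
  config.fog_authenticated_url_expiration = 600  # signed URLs valid for 10 minutes
end
With fog_public set to false, asking the uploader for a file's url returns a time-limited signed URL rather than the plain S3 path.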

Related

Is it possible to make Sanity assets private?

The Sanity team says: "Asset files are not private, so even images uploaded to a private dataset can be viewed by unauthenticated users."
(https://www.sanity.io/docs/keeping-your-data-safe)
The material on my site is only viewable by those who pay a monthly fee.
The material is mainly PDF files.
Uploading a PDF file with type="file" creates an asset URL, and anyone can view it by typing that path into a web browser.
In the case of Vimeo, videos play only when accessed from a specific domain.
Similarly, is it possible to make Sanity assets available only within a specific domain?
I am in a similar situation and have done some research. It seems like private assets can't exist on Sanity. I believe the main reason is that every asset under 10 MB in size is cached on a CDN, which does not have the authentication logic required to protect the assets.
IMO they should provide a toggle to make certain assets private, with the disclaimer that those assets will not be cached on the CDN.
Read more about the Sanity CDN: https://www.sanity.io/docs/asset-cdn
EDIT:
You can, however, use a custom asset source, but it doesn't really solve the problem: https://www.sanity.io/docs/custom-asset-sources
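If keeping the URL unexposed is acceptable, one workaround (my addition, not from this thread) is to resolve the asset URL server-side and stream the file through your own authenticated endpoint, sketched here in Rails terms; require_paid_subscription and the document lookup are hypothetical:
require 'net/http'

class AssetsController < ApplicationController
  before_action :require_paid_subscription  # hypothetical paywall check

  def show
    # the Sanity asset URL is looked up server-side and never sent to the client
    asset_url = current_document.pdf_url    # hypothetical lookup
    send_data Net::HTTP.get(URI(asset_url)), type: 'application/pdf', disposition: 'inline'
  end
end
The caveat above still applies: the underlying CDN URL remains public to anyone who learns it.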

In ASP.NET Core MVC, how do you make uploaded images accessible to logged-in users of the app but not to the general public?

I have image upload in my system, and I am struggling to understand the logic of serving the images.
If I upload directly to wwwroot, the files will be accessible to everyone, which is not what I want.
I understand I could save the file contents in the database as base64, but these can be big files, and I would rather keep them as files on the server.
I could convert them on the fly when requested: most probably getting the path to the file, loading it into a memory stream, and emitting the base64. But that seems like overkill and not an elegant solution. I use AutoMapper for most data, and I would have to write some crazy custom mappers, which I will if there is no other way.
I could create a virtual path, which from what I understand maps a physical path on the server to a URL, but that doesn't seem any different from option 1.
I imagine there is a way to produce a link/URL that this user (or at least logged-in users) has access to, which can be passed to the app so it can load the image. Is this impossible or unreasonable? Or am I missing something?
What is the correct way of doing this in general?
Also, what is a quick way to do it without spending days on setup?
To protect the specific static files, you can try the solutions explained in this official doc.
Solution A: Store the static files you want to authorize outside of wwwroot, call UseStaticFiles to specify a path (and other StaticFileOptions) after calling UseAuthorization, and then set the fallback authorization policy.
Solution B: Store the static files you want to authorize outside of wwwroot, and serve them via a controller action to which authorization is applied, returning a FileResult object.
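For illustration only, here is the shape of Solution B sketched in Ruby/Rails (the language used elsewhere on this page; an ASP.NET Core action would return a FileResult instead). The paths and the authentication filter are assumptions:
class ImagesController < ApplicationController
  before_action :authenticate_user!  # e.g. Devise; only logged-in users get past this

  def show
    # files live outside the public web root, so they are never served directly;
    # File.basename guards against path traversal via the id parameter
    path = Rails.root.join('storage', 'images', File.basename(params[:id]))
    send_file path, type: 'image/png', disposition: 'inline'
  end
end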

Do I need to create folders in cloudinary?

I am building a new web application and want to use Cloudinary for users' images. My question is: do I need to create folders in my Cloudinary account? I am asking because if I were using a file system and started having 100,000+ images in one folder, it would start killing my app, and I would need to break them up into several folders.
Is it the same for Cloudinary?
Thanks,
It depends on your current and future requirements.
In general, I believe folders can help you better organize your resources, especially when there are lots of them.
Note that besides folders, you can also assign tags to your images (e.g., by user) or add a prefix to the images' public IDs (e.g., user1-<image_name>).
You can later use Cloudinary's Admin API to list your resources either by folder/prefix or by tag.
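A minimal sketch with the cloudinary gem (the folder and tag names here are made up):
require 'cloudinary'

# upload into a per-user folder and tag the image by user
Cloudinary::Uploader.upload('avatar.png',
  :folder => "users/#{user.id}",
  :tags   => ["user_#{user.id}"])

# later, list resources by folder/prefix or by tag via the Admin API
Cloudinary::Api.resources(:type => :upload, :prefix => "users/#{user.id}")
Cloudinary::Api.resources_by_tag("user_#{user.id}")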

How will I display images that are not in the Apache document root?

I am using Zend Framework and am creating an image upload for users. I decided to split the users' images by having a folder per user in a directory such as /var/www/upload/user, and I plan to use PHP's mkdir to create a directory for each user. The Apache public directory, meanwhile, is at /var/www/domain.com/public_html. I am not sure how to proceed: since the images are not under public_html, how will I link to them and display them on the website? What is the solution, or is my setup flawed?
If you store the images outside the public folder, then you probably need to create a controller/action that accepts information identifying the image and then sends it back to the browser, complete with the correct MIME headers, via readfile().
In some circumstances, this makes sense: if these resources need to be restricted based upon some Auth/Acl criteria, for example. If the images uploaded by user X are not public to all users/visitors, but private to user X (or some other set of users/visitors you identify), then this approach applies.
But if these image resources are public to all, then running access to them through the entire MVC dispatch cycle seems like a lot of overhead.
I suggest you use a solution like this:
Structure your application:
application/
data/
library/
public/
In application.ini:
data_uploads = APPLICATION_PATH "/../data/uploads"
In Bootstrap.php:
public function _initDefines()
{
    // expose the upload directory as an app-wide constant
    define('DATA_UPLOADS', $this->getOption('data_uploads'));
}
Then, for each user, create a directory under data/uploads.
Simple and effective! ;)

About signed URLs in Amazon S3

So I have been reading about signed URLs and some of their benefits, especially the part about hotlinking. Our app doesn't allow users to embed media (photo, video, audio) from our site, so signed URLs look like the right direction, mostly to prevent hotlinking.
So now that I know my requirements. I have a few questions.
Does this mean I have to add a policy to my bucket denying read/write access to any of the files or folders in the bucket?
Do I have to create signed URLs for each page visit? Let's say 100 users visit the same page where a song can be played; does this mean I have to create 100 signed URLs?
Is creating S3 signed URLs free?
Touching on point #2: is it normal practice to create several signed URLs? I mean, what happens if 1,000 users end up coming to the same song page?
Your thoughts?
REFERENCE:
For anyone interested in how I was able to generate signed URLs, based on the https://github.com/appoxy/aws gem and the docs at http://rubydoc.info/gems/aws/2.4.5/frames:
# connect using the appoxy aws gem
s3 = Aws::S3.new(APP_CONFIG['amazon_access_key_id'], APP_CONFIG['amazon_secret_access_key'])
bucket_gen = Aws::S3Generator::Bucket.create(s3, 'bucket_name')
# pull the object key out of the full URL, then sign a GET valid for 1 hour
key = URI.unescape(URI.parse(URI.escape('http://bucket_name.s3.amazonaws.com/uploads/foobar.mp3')).path[1..-1])
signed_url = bucket_gen.get(key, 1.hour)
By default, your bucket will be set to private. When you upload files to S3, you can set the ACL (permissions); in your case, you'll want to make sure the files are private.
The simplest solution is to create new signed URLs for each visitor. You could generate new URLs every day, store them somewhere, and reuse them, but that adds complexity for little benefit. The one place you might need this, though, is to enable client-side caching: every time you create a new URL, the browser sees it as a different file and downloads a fresh copy. If that isn't the behaviour you want, you need to generate URLs that expire far in the future and reuse those, but doing so reduces the effectiveness of preventing hotlinking.
Yes, generating URLs is free. They are generated locally, without any request to S3. I suppose there is a time/processing cost, but I have created pages with hundreds of URLs generated on each visit and have not noticed any performance issues.
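For comparison, a minimal sketch of the same thing with the modern aws-sdk-s3 gem (the bucket and key are placeholders; credentials are picked up from the environment):
require 'aws-sdk-s3'

# sign a GET for a private object, valid for one hour; no request is made to S3
signer = Aws::S3::Presigner.new
signed_url = signer.presigned_url(:get_object,
  :bucket     => 'bucket_name',
  :key        => 'uploads/foobar.mp3',
  :expires_in => 3600)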