How do I restrict all access to the :original style files in S3, but keep access to the rest of the styles' folders in the bucket?
I saw implementations that limit all access and then check attributes of a model. I just want to limit access to the :original style.
I did notice this line in Paperclip; I just don't know how to use it (if that's possible).
You can limit access by serving the files through a controller action. This way you can control which files a user can access and which they cannot.
Simply making the S3 bucket private won't help you, as any user with a valid key can access every file in the bucket. If you really have files that need to be protected, you have only a few ways to do it (as far as I can tell):
Restrict access to the bucket and serve the files through a controller action (no real way to work around this; see the sketch after this list)
Rename the specific files so they are not easy to predict (e.g. 32 or more random numbers and letters). This is quite simple to achieve, and you can still serve the files directly from S3
Save the files somewhere else (maybe in another S3 bucket), so nobody can predict their location
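For the first option, here is a minimal sketch of what the controller action could look like. The controller, model, attachment name, and can_view_original? check are hypothetical names for illustration, not from the question:

# Hypothetical sketch: every download of a private original goes through
# the app, which decides who may read it.
class AssetsController < ApplicationController
  def original
    asset = Asset.find(params[:id])

    # Your own authorization rule goes here.
    return head :forbidden unless can_view_original?(asset)

    # Read the private S3 object through Paperclip and stream it,
    # so no S3 URL is ever exposed to the client.
    send_data Paperclip.io_adapters.for(asset.image).read,
              :filename => asset.image_file_name,
              :type => asset.image_content_type,
              :disposition => 'attachment'
  end
end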
For renaming files, you can use this Stack Overflow question: Paperclip renaming files after they're saved
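As a rough sketch of that renaming approach (the model, attachment, and callback names are assumptions, not from the linked question):

# Hypothetical sketch of the second option: store each file under an
# unguessable name (32 random hex characters) so the S3 key cannot be predicted.
require 'securerandom'

class Asset < ActiveRecord::Base
  has_attached_file :image, :storage => :s3

  before_create :randomize_file_name

  private

  # Keep the extension but replace the name with random characters.
  def randomize_file_name
    extension = File.extname(image_file_name).downcase
    image.instance_write(:file_name, "#{SecureRandom.hex(16)}#{extension}")
  end
end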
The answer I am looking for (I think; I haven't tested it yet) can be found here:
http://rdoc.info/github/thoughtbot/paperclip/Paperclip/Storage/S3
s3_permissions: This is a String that should be one of the "canned" access policies that S3 provides (more information can be found here: docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?RESTAccessPolicy.html) The default for Paperclip is :public_read.
You can set permissions on a per-style basis by doing the following:
:s3_permissions => {
  :original => :private
}
Or globally:
:s3_permissions => :private
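Putting it together, a minimal sketch (the model, attachment name, and styles are assumptions for illustration):

class Asset < ActiveRecord::Base
  has_attached_file :image,
    :storage => :s3,
    :styles => { :thumb => '100x100>', :medium => '300x300>' },
    :s3_permissions => {
      :original => :private,    # only signed requests can fetch the original
      :default => :public_read  # thumb and medium stay publicly readable
    }
end

# The private original can still be handed out via a time-limited signed URL:
# asset.image.expiring_url(3600, :original)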
I have image upload in my system. I am struggling to understand the logic of serving images.
If I upload directly to wwwroot, the files will be accessible to everyone, which is not what I want.
I understand I could save the file contents in the database as base64, but these can be big files, and I would like to keep them on the server as files.
I could convert them on the fly when requested: most probably getting the path to the file, loading it into a memory stream, and spitting out the base64. But that seems like overkill and not an elegant solution. I use AutoMapper for most data, and I would have to write some crazy custom mappers, which I will if there is no other way.
I could create a virtual path, which from what I understand maps a physical path on the server to a URL, but that doesn't seem any different from option 1.
I fancy there is a way to spit out a link/URL that this user (or at least logged-in users) has access to, which can be passed to the app so it can load the image. Is this impossible or unreasonable? Or am I missing something?
What is the correct way of doing this in general?
Also, what is a quick way to do it without spending days on setup?
To protect the specific static files, you can try the solutions explained in this official doc.
Solution A: Store static files you want to authorize outside of wwwroot, and call UseStaticFiles to specify a path and other StaticFileOptions after calling UseAuthorization, then set the fallback authorization policy.
Solution B: Store static files you want to authorize outside of wwwroot, and serve them via a controller action method to which authorization is applied, returning a FileResult object.
I am trying to make sure I did not miss anything in the AWS CloudFront documentation or anywhere else ...
I have a (not public) S3 bucket configured as the origin in a CloudFront web distribution (I don't think it matters, but I am using signed URLs).
Let's say I have a file at an S3 path like
/someRandomString/someCustomerName/someProductName/somevideo.mp4
So the URL generated by CloudFront would be something like:
https://my.domain.com/someRandomString/someCustomerName/someProductName/somevideo.mp4?Expires=1512062975&Signature=unqsignature&Key-Pair-Id=keyid
Is there a way to obfuscate the path to the actual file in the generated URL? All three parts before the filename can change, so I prefer not to use "Origin Path" in the Origin Settings to hide the beginning of the path. With that approach, I would have to create a lot of origins mapped to the same bucket but with different paths, and if that's the only way, the limit of 25 origins per distribution would be a problem.
Ideally, I would like to get something like
https://my.domain.com/someRandomObfuscatedPath/somevideo.mp4?Expires=1512062975&Signature=unqsignature&Key-Pair-Id=keyid
Note: I am also using my own domain/CNAME.
Thanks
Cris
One way could be to use a Lambda function that receives the S3 file's path, copies the file into an obfuscated directory (maybe with a simple mapping from the source path to the obfuscated one), and then returns the signed URL of the copied file. This ensures that only the obfuscated path is visible externally.
Of course, this will (potentially) double the data storage, so you need some way to clean up the obfuscated folders. That could be done in a time-based manner: if each signed URL is expected to expire after 24 hours, you could create folders based on the date and delete each obfuscated directory every other day.
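A hedged sketch of that copy-and-sign idea using the aws-sdk-s3 gem; bucket and key names are placeholders, and an S3 pre-signed URL stands in here for the CloudFront signed URL:

require 'aws-sdk-s3'
require 'securerandom'

def obfuscated_signed_url(bucket:, source_key:)
  s3 = Aws::S3::Client.new

  # Copy the object under an unguessable prefix, keeping only the filename.
  obfuscated_key = "#{SecureRandom.hex(16)}/#{File.basename(source_key)}"
  s3.copy_object(
    bucket: bucket,
    copy_source: "#{bucket}/#{source_key}",
    key: obfuscated_key
  )

  # Hand back a time-limited URL for the copy; the real path stays hidden.
  Aws::S3::Object.new(bucket, obfuscated_key, client: s3)
                 .presigned_url(:get, expires_in: 24 * 3600)
end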
Alternatively, you could use a service like tinyurl.com or something similar to create a mapping. It would be much easier, save on storage, etc. The only downside would be that it would not reflect your domain name.
If you have the ability to modify the routing of your domain then this is a non-issue, but I presume that's not an option.
Obfuscation is not a form of security.
If you wish to control which objects users can access, you should use pre-signed URLs or signed cookies. This way, you can grant access to private objects via S3 or CloudFront without worrying about people obtaining access to other objects.
See: Serving Private Content through CloudFront
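For example, with the aws-sdk-cloudfront gem a CloudFront signed URL can be generated like this (the key pair ID and private key path are placeholders):

require 'aws-sdk-cloudfront'

signer = Aws::CloudFront::UrlSigner.new(
  key_pair_id: 'APKAEXAMPLE',                             # your CloudFront key pair ID
  private_key_path: '/path/to/cloudfront_private_key.pem' # the matching private key
)

# The URL works only until `expires`; every other object stays private.
url = signer.signed_url(
  'https://my.domain.com/someRandomString/someCustomerName/someProductName/somevideo.mp4',
  expires: Time.now + 3600
)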
I am using AWS S3 in a Ruby on Rails project to store images for my models. Everything is working fine. I was just wondering if it is okay/normal that if someone right-clicks an image, it shows the following URL:
https://mybucketname.s3.amazonaws.com/uploads/photo/picture/100/batman.jpg
Is this a hacking risk, letting people see your bucket name? I guess I was expecting to see a bunch of randomized letters or something. /Noob
Yes, it's normal.
It's not a security risk unless your bucket permissions allow unauthenticated actions, like uploading and deleting objects by anonymous users (obviously, having the bucket name would be necessary if a malicious user wanted to overwrite your files), or your bucket name itself reveals some kind of information you don't want revealed.
If it makes you feel better, you can always associate a CloudFront distribution with your bucket -- a CloudFront distribution has a default hostname like d1a2b3c4dexample.cloudfront.net, which you can use in your links, or you can associate a vanity hostname with the CloudFront distribution, like assets.example.com, neither of which will reveal the bucket name.
But your bucket name, itself, is not considered sensitive information. It is common practice to use links to objects in buckets, which necessarily include the bucket name.
I'm building an application (Rails 3.2.8) where users can upload music tracks and associated clips. While clips can be publicly accessible, tracks can't be accessed without purchasing. I'm using CarrierWave for uploading both types of files. However, I do not want to expose the actual track path to the users.
What techniques do such services use to protect against hotlinking and/or unauthorized access to the files?
Currently, the CarrierWave path is like:
def store_dir
  "tracks/#{model.user_id}/"
end
However, this is very vulnerable: anyone can easily guess the URL.
For authorized downloading, I can consider:
1. A static download link (valid at all times for that user; however, no guests or other users can use that URL)
2. Temporary links generated for each download
Please enlighten me with the approaches I should consider (I will study them) so that I can secure the files from being downloaded without purchase.
Seems like you want both: public for the clip and private for the track.
I am trying to implement this as well, with the approach below (not tested):
def fog_public
  # `model` is the record this uploader is mounted on; `job_kind` is an app attribute
  model.job_kind == 'public'
end
S3 allows you to store private files; they will then be available only for a given period and with an access token.
With CarrierWave, you simply have to set fog_public to false, as described here: https://github.com/jnicklas/carrierwave#using-amazon-s3
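A minimal sketch of what that could look like (TrackUploader and the 10-minute expiry are assumptions, not from the post):

class TrackUploader < CarrierWave::Uploader::Base
  storage :fog

  # Tracks are private in S3; a separate public uploader can serve the clips.
  def fog_public
    false
  end

  def store_dir
    "tracks/#{model.user_id}/"
  end
end

# In an initializer: authenticated URLs expire after 10 minutes.
CarrierWave.configure do |config|
  config.fog_authenticated_url_expiration = 600
end

# track.file.url now returns a signed, expiring S3 URL instead of a public one.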
I am trying to create an image upload site, where users can upload an image to the site.
What is the easiest possible way to do this without using any plugin? How do you get EXIF/meta information from the image?
Specify your form type to be file:
// In the view: a multipart form with a file input (CakePHP 2 syntax)
echo $this->Form->create('Model', array('type' => 'file'));
echo $this->Form->input('filefield', array('type' => 'file'));
In your beforeSave, inspect the value of filefield; it will contain $_FILES-like information, including the tmp_name, the original filename, and an error code.
You can then call move_uploaded_file() yourself to move the file to a convenient location, and store the filename in your table.
But for the implementation you will have to deal with various things: two files having the same name, file size limits, allowed extensions, permissions, and deleting files when their records get deleted.
So for learning purposes you can try this out, but for production it would be better to stick with a plugin.