Upload a file to S3

I want to upload a file to S3. If I provide the AWS access key and secret key as hidden fields in the form, will there be any security issues? Is it safe to put the keys in the form?

Yes. An attacker can simply click View Source and steal your keys.
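A common alternative, sketched below with Python and boto3 (the bucket name, key, and expiry are illustrative), is to keep the keys on your server and hand the browser a short-lived presigned POST instead:

import boto3

s3 = boto3.client("s3")

# Generated server-side, so the secret key never reaches the browser.
# Bucket name, key, and expiry below are only illustrative.
post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/example.txt",
    ExpiresIn=300,  # the form fields stop working after 5 minutes
)
# post["url"] is the form action; post["fields"] are the hidden inputs
# to embed in the HTML form instead of your AWS keys.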


Google Cloud Storage: Alternative to signed URLs for folders

Our application's data storage is backed by Google Cloud Storage (and S3 and Azure Blob Storage). We need to give access to this storage to arbitrary outside tools (upload from local disk using CLI tools, unload from analytical databases like Redshift, Snowflake and others). The specific use case is that users need to upload multiple big files (think of it much like m3u8 playlists for streaming video: an m3u8 playlist plus thousands of small video files). The tools and users MAY not be affiliated with Google in any way (they may not have a Google account). We also absolutely need the data transfer to go directly to the storage, bypassing our servers.
In S3 we use federation tokens to give access to a part of the S3 bucket.
So the model scenario on AWS S3 is:
1- The customer requests a data upload via our API.
2- We give the customer S3 credentials scoped to s3://customer/project/uploadId, allowing upload of new files.
3- The client uses any tool to upload the data.
4- The client uploads s3://customer/project/uploadId/file.manifest, s3://customer/project/uploadId/file.00001, s3://customer/project/uploadId/file.00002, ...
Other data in the bucket (be it another uploadId or project) stays safe because the credentials are scoped.
In Azure Blob Storage we use a SAS token for the same purpose.
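As a rough sketch of how the scoped credentials in the S3 scenario above can be issued, here is what the STS call looks like with boto3 in Python (the bucket name, prefix, session name, and duration are all illustrative):

import json
import boto3

sts = boto3.client("sts")

# Policy limited to one upload prefix; bucket and prefix are illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject"],
        "Resource": ["arn:aws:s3:::customer/project/uploadId/*"],
    }],
}

resp = sts.get_federation_token(
    Name="upload-session",        # illustrative federated user name
    Policy=json.dumps(policy),
    DurationSeconds=3600,         # credentials expire after one hour
)
creds = resp["Credentials"]       # AccessKeyId, SecretAccessKey, SessionToken
# These three values are what we hand to the client for its upload tool.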
GCS does not seem to have anything similar, except for signed URLs. Signed URLs have a problem, though: each one refers to a single object. That would either require us to know in advance how many files will be uploaded (we don't) or require the client to request a signed URL for each file separately (a strain on our API, and slow).
ACLs seemed like a solution, but they are only tied to Google-related identities, and those can't be created on demand and quickly. Service accounts are also an option, but their creation is slow and they are generally discouraged for this use case, IIUC.
Is there a way to create short-lived credentials that are limited to a subset of a GCS bucket?
The ideal scenario would be that the service account we use in the app could generate a short-lived token that only has access to a subset of the bucket. But nothing like that seems to exist.
Unfortunately, no. For retrieving objects, signed URLs need to be for exact objects. You'd need to generate one per object.
Using the * wildcard will match the subdirectory you are targeting and identify all objects under it. For example, if you are trying to access objects in Folder1 in your bucket, you would use gs://Bucket/Folder1/*. However, the command gsutil signurl -d 120s key.json gs://bucketname/folderName/** will create a signed URL for each of the files under that prefix, not a single URL for the entire folder/subdirectory.
Reason: since subdirectories are just an illusion of folders in a bucket and are actually object names that contain a '/', every file in a subdirectory gets its own signed URL. There is no way to create a single signed URL for a specific subdirectory that makes all of its files temporarily available.
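For reference, a minimal sketch of generating one V4 signed URL per object with the google-cloud-storage Python client (the bucket name, prefix, and key file are illustrative):

from datetime import timedelta
from google.cloud import storage

# Service-account credentials with access to the bucket; the path is illustrative.
client = storage.Client.from_service_account_json("key.json")
bucket = client.bucket("bucketname")

# One signed URL per object -- there is no single URL for a whole "folder".
for blob in bucket.list_blobs(prefix="folderName/"):
    url = blob.generate_signed_url(
        version="v4",
        expiration=timedelta(seconds=120),
        method="GET",
    )
    print(blob.name, url)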
There is an ongoing feature request for this: https://issuetracker.google.com/112042863. Please raise your concern there and watch for further updates.
For now, one way to accomplish this would be to write a small App Engine app that clients download from instead of going directly to GCS. It would check authentication according to whatever mechanism you're using and then, if the check passes, generate a signed URL for that resource and redirect the user.
Reference: https://stackoverflow.com/a/40428142/15803365

URL to S3 file in private bucket

I've uploaded a file using the SDK to my private S3 bucket.
I can access this file through the S3 UI.
However, I cannot access this file through a direct link. It gives me some XML that includes "AccessDenied" as a code and message.
It seems reasonable that, since I'm authenticated in the browser and am clicking a direct link to the file from the same browser, I should be allowed through. At the very least, I should be redirected to a login page.
Does anyone have any experience with this?
So after working on this for a bit, I discovered the best thing is to simply publish the console URL to the file.
https://s3.console.aws.amazon.com/s3/object/{your bucket}/{your file path}?region={the region of your bucket}&tab=overview
Be mindful to specify the correct region. If you're forming this URL programmatically, you can take the region string from Amazon.RegionEndpoint.SystemName (AWS SDK for .NET).
If you're not logged in, it will ask you for your login!
No signed URL is necessary.
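If you build the console URL programmatically outside of .NET, it is just string formatting; a small Python sketch (bucket, key, and region are illustrative):

from urllib.parse import quote

def s3_console_url(bucket, key, region):
    # Mirrors the template above; the viewer is sent to the AWS login
    # page first if they are not already signed in to the console.
    return ("https://s3.console.aws.amazon.com/s3/object/"
            f"{bucket}/{quote(key)}?region={region}&tab=overview")

print(s3_console_url("my-bucket", "reports/2021/summary.pdf", "us-east-1"))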
Thanks to everyone who contributed!
There are two places that you need to make sure are set correctly, depending on whether you want the bucket to have public or private access.
The Properties tab:
Here you can set what you will use the bucket for.
The Permissions tab -> Bucket Policy:
With this, you can then set up access. I was able to generate a policy with this site:
http://awspolicygen.s3.amazonaws.com/policygen.html
EDIT:
Mine is working with the settings I have shown. I recommend asking on the AWS boards to get to the bottom of it. You could also try this:
You can use the direct link if you are inside a VPC. You have to:
1- Create a VPC endpoint for Amazon S3.
2- Add a bucket policy that allows access from the VPC endpoint.
All steps are described in the following link:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-no-authentication/?nc1=h_ls
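For illustration only (the bucket name and VPC endpoint ID are placeholders), such a bucket policy can be applied with boto3 like this:

import json
import boto3

s3 = boto3.client("s3")

# Allow reads only through a specific VPC endpoint; IDs are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetThroughVpcEndpoint",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-private-bucket/*",
        "Condition": {"StringEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}},
    }],
}

s3.put_bucket_policy(Bucket="my-private-bucket", Policy=json.dumps(policy))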

Filepicker and S3 without revealing the S3 secret key to Filepicker

Is it possible to use Filepicker to upload to S3, using a presigned S3 policy, without revealing my S3 secret key to Filepicker?
From their current documentation, found here - https://developers.filepicker.io/page/s3/, we need to provide them with our S3 secret key.
I know S3 supports browser-based uploads using POST, which we can sign using a policy. Is this something Filepicker can leverage?
While it is technically possible, the Filepicker system does not work in this way. We require your S3 keys in order to upload to S3.

Server-generated S3 policy with Fine Uploader

The Amazon S3 integration docs for Fine Uploader instruct users to create an AJAX handler to sign an S3 upload policy generated by the client after performing server-side verification.
In my application, it would make more sense to construct the policy on the server, sign it, and return the entire package to the client to present to S3 for the upload. Is there any way to configure Fine Uploader to pull a server-generated policy instead of asking the server to validate and sign a client-generated one?
To answer your initial question, it is possible to override some elements of the generated policy, but there are some items, such as the key, that you cannot change via the policy document. This is discussed more in GitHub issue #1120.
If you want to override portions of the policy document, you'll have to disable chunking (since policy documents aren't part of chunked uploads, as described in the comments). Your best bet is to simply validate the policy/header strings. It's pretty easy to do this, and which elements you validate depends entirely on your application requirements.
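As a rough sketch of that validation step in Python (the bucket name, key prefix, and the exact checks are illustrative; the final line is a Signature Version 4 signing of the base64-encoded policy):

import base64
import hashlib
import hmac
import json

def _hmac(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def _signing_key(secret_key, date_stamp, region):
    # Standard AWS Signature Version 4 key derivation for the S3 service.
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, "s3")
    return _hmac(k_service, "aws4_request")

def validate_and_sign(b64_policy, secret_key, date_stamp, region):
    # Inspect the client-generated policy, then sign it only if it looks sane.
    policy = json.loads(base64.b64decode(b64_policy))
    for cond in policy.get("conditions", []):
        # Illustrative checks: our bucket only, keys only under uploads/.
        if isinstance(cond, dict) and cond.get("bucket", "my-upload-bucket") != "my-upload-bucket":
            raise ValueError("unexpected bucket")
        if isinstance(cond, list) and cond[:2] == ["starts-with", "$key"] and not cond[2].startswith("uploads/"):
            raise ValueError("key outside allowed prefix")
    return hmac.new(_signing_key(secret_key, date_stamp, region),
                    b64_policy.encode("utf-8"), hashlib.sha256).hexdigest()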

Does Amazon S3 help anything in this case?

I'm thinking about whether to host uploaded media files (video and audio) on S3 instead of locally. I need to check the user's permissions on each download.
So there would be an action like get_file, which first checks the user's permissions, then fetches the file from S3 and sends it to the user using send_file.
def get_file
  if @user.can_download(params[:file_id])
    # first, download the file from S3 and then send it to the user using send_file
  end
end
But in this case the server (unnecessarily) downloads the file from S3 first and then sends it on to the user. I thought the use case for S3 was to bypass the Rails/HTTP server stack and reduce load.
Am I thinking about this wrong?
PS. I'm using CarrierWave for file uploads. Not sure if that's relevant.
Amazon S3 provides something called RESTful authenticated reads, which are basically expiring URLs to otherwise protected content.
CarrierWave provides support for this. Simply set the S3 access policy to authenticated read:
config.s3_access_policy = :authenticated_read
and then model.file.url will automatically generate the RESTful URL.
Typically you'd embed the S3 URL in your page, so that the client's browser fetches the file directly from Amazon. Note however that this exposes the raw unprotected URL. You could name the file with a long hash instead of something predictable, so it's at least not guessable -- but once that URL is exposed, it's essentially open to the Internet. So if you absolutely always need access control on the files, then you'll need to proxy it like you're currently doing. In that case, you may decide it's just better to store the file locally.
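Outside of CarrierWave, the same kind of expiring link can be generated directly; for example, with boto3 in Python (bucket, key, and expiry are illustrative):

import boto3

s3 = boto3.client("s3")

# A time-limited GET URL for an otherwise private object; the link stops
# working after ExpiresIn seconds. Bucket and key are illustrative.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-media-bucket", "Key": "videos/lecture-01.mp4"},
    ExpiresIn=600,
)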