What information do I need to give an external dev team to access and upload files to my Amazon S3 account?

I'm new to Amazon S3, and do not want to give more information than is necessary for the team to whom I'm outsourcing a project. They are building an image hosting site, and would need access to my S3 credentials - what exactly would the devs need to have access to? Just my Access Key ID?
Thanks.

They'll need an Access Key ID & corresponding Secret Access Key.
You can generate a unique one for them to use via the Security Credentials Page in the Account section of the website.
When they're done, you can delete their key and generate a different one for the live site. Just make sure that when they develop the app they put the key information in a configuration file so you can change it when they're done.
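A minimal sketch of that configuration-file approach, assuming Node.js with the AWS SDK for JavaScript v2 (the file name and shape are illustrative, not an AWS convention):

```javascript
// config/aws.json -- swapped out when the dev team's key is deleted:
//   { "accessKeyId": "AKIA...DEV", "secretAccessKey": "..." }
const AWS = require('aws-sdk');
const { accessKeyId, secretAccessKey } = require('./config/aws.json');

// The app builds its S3 client from the config file, so rotating the
// key pair never requires a code change.
const s3 = new AWS.S3({ accessKeyId, secretAccessKey });
```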

Related

Multiple users uploading into the same storage account via desktop app

I would love to hear your ideas.
In this project, multiple users (let's say 1000 users) will upload files into the same storage account (AWS S3, Azure Blob Storage, or DigitalOcean Spaces) using a C# Windows desktop app.
The desktop app authenticates users against a Web API.
Questions
Is it correct that each user will have his/her own bucket?
What is the best way to securely introduce API key and bucket information into the desktop app so that files will be uploaded to the correct bucket and storage account?
Think about the structure of your S3 bucket and how you would later identify each object a user uploaded.
I would create an initial key prefix for each user, under which that user uploads their files, e.g.
username1/object1
username1/object2
username1/objectx
username2/object1
username3/object1
usernamex/objectx
This gives you the possibility, if a user is deleted, to simply delete all objects under that username prefix as well. If you are using a generated key to identify the user, then you can use the keyID instead of the username.
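As a rough sketch of that cleanup, assuming Node.js with the AWS SDK for JavaScript v2 and a hypothetical bucket named app-uploads:

```javascript
// Sketch: delete every object under a user's prefix when the account
// is removed. Bucket name "app-uploads" is a placeholder.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function deleteUserObjects(username) {
  let token;
  do {
    // List a page of objects that share the user's prefix, e.g. "username1/"
    const page = await s3.listObjectsV2({
      Bucket: 'app-uploads',
      Prefix: `${username}/`,
      ContinuationToken: token,
    }).promise();
    if (page.Contents && page.Contents.length > 0) {
      // Batch-delete the page (up to 1000 keys per request)
      await s3.deleteObjects({
        Bucket: 'app-uploads',
        Delete: { Objects: page.Contents.map(o => ({ Key: o.Key })) },
      }).promise();
    }
    token = page.NextContinuationToken;
  } while (token);
}
```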
The most interesting question is how you will secure this, so that no user is able to see objects belonging to others. If you have an underlying API, then it's "easy": give the API access to the S3 bucket and restrict the requests so that only objects whose username or keyID prefix matches the requesting user are listed.
If you are using IAM users (or roles), then you would automatically generate a policy for each base key (username1 or keyID) allowing only the specific actions.
If you set up something like that, be really sure to harden your security, and also enable logging on this bucket so you can verify that user1 can't access objects belonging to user2.
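One way that per-user policy generation could look, sketched with the AWS SDK for JavaScript v2 (the bucket name and policy name are placeholders):

```javascript
// Sketch: attach a per-user inline policy that scopes access to one prefix.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

async function attachPrefixPolicy(iamUserName, prefix) {
  const policy = {
    Version: '2012-10-17',
    Statement: [{
      // Object access only under the user's own prefix
      Effect: 'Allow',
      Action: ['s3:PutObject', 's3:GetObject', 's3:DeleteObject'],
      Resource: `arn:aws:s3:::app-uploads/${prefix}/*`,
    }, {
      // Listing restricted to the same prefix
      Effect: 'Allow',
      Action: 's3:ListBucket',
      Resource: 'arn:aws:s3:::app-uploads',
      Condition: { StringLike: { 's3:prefix': `${prefix}/*` } },
    }],
  };
  await iam.putUserPolicy({
    UserName: iamUserName,
    PolicyName: `s3-prefix-${prefix}`,
    PolicyDocument: JSON.stringify(policy),
  }).promise();
}
```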

Setting different S3 read permissions based on uploader

I'm trying to arrive at a situation where
one class of users can upload files that are subsequently not publicly available
another class of users can upload files that are publicly available.
I think I need to use two IAM users:
the first, which has putObject permissions only and whose secret key I bake into the JavaScript (I use the AWS SDK putObject here, with the first secret key baked in)
the other, whose secret key I keep on the server, which provides signatures for uploading to signed-in users of the right category (I ended up using a POST command with multipart form-data for this, as I could not understand how to do it with the SDK other than baking in the second secret key, which would be bad as files could then be both uploaded and downloaded)
But I'm struggling to set up bucket permissions that support some files being publicly available while others are not at all.
Is there a way, or do I need to use separate buckets?
Update
Based on the first comment, I tried adding "acl": "public-read" to my policy and POST form data fields. The signatures are matching correctly, but I am now getting a forbidden response from AWS, which I don't get when this field is absent (but then the uploads are not publicly visible).
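For comparison, a sketch of a POST policy that permits the public-read ACL, assuming a hypothetical bucket name; the acl value has to appear identically in both the signed policy conditions and the form fields, and the signing credentials typically also need s3:PutObjectAcl in addition to s3:PutObject, which is a common cause of a 403 on an otherwise valid signature:

```javascript
// Sketch: browser-based POST upload policy allowing "public-read".
// Bucket "public-uploads" and the key prefix are placeholders.
const policy = {
  expiration: new Date(Date.now() + 5 * 60 * 1000).toISOString(),
  conditions: [
    { bucket: 'public-uploads' },
    ['starts-with', '$key', 'uploads/'],
    { acl: 'public-read' },              // must match the "acl" form field exactly
    ['content-length-range', 0, 10485760],
  ],
};
// Base64-encode, then sign with the server-held secret key.
const policyBase64 = Buffer.from(JSON.stringify(policy)).toString('base64');
```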

How do I access Google Drive Application Data from a remote server?

For my application I want the user to be able to store files on Google Drive and for my service to have access to these same files created with the application.
I created a Client ID for web application and was able to upload/list/download files from JavaScript (client side) with drive.appfolder scope. This is good, this is half of what I want to do.
Now I want to access the same files from Node.js (server side). I am lost as to how to do this. Do I create a new Client ID for the server? (If so, how will the user authenticate?) Do I pass the AuthToken my user got client-side and try to use that on the server? I don't think this will work, as the AuthToken is time-sensitive (and probably not intended to be used from multiple IPs).
Any direction or example server-side code will be helpful. Again, all I want is to access these same files the user created with my application, not any other files in the user's Google Drive.
CLARIFICATION: I think my question boils down to: "Is it possible to access the same Application Data on Google Drive both client-side and server-side?"
Do I create a new Client ID for the server?
Up to you. You don't need to, but you can. See below.
if so, how will the user authenticate?
Up to you. OAuth is about authorisation, not authentication.
Just in case you meant authorisation, the user authorises the Project, which may contain multiple client IDs.
Do I pass the AuthToken my user got client-side and try to use that on the server?
You can do, but not a good idea for the reason you state. The preferred approach is to have a separate server Client ID, and use that to request offline access, which returns (eventually) a Refresh Token, which you store in your server. You then use that Refresh Token to request Access Tokens whenever you need them.
AuthToken is ... (and probably not intended to be used from multiple IPs).
It is not bound to a specific IP address.
"Is it possible to access the same Application Data on Google Drive both client-side and server-side?"
Yes
Most of what you need is at https://developers.google.com/accounts/docs/OAuth2WebServer
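A sketch of the server side with a separate Client ID and a stored Refresh Token, using the googleapis Node.js library (the credential storage and environment variable names are assumptions):

```javascript
// Sketch: server-side Drive access via a refresh token obtained once
// through an offline-access consent flow and stored server-side.
const { google } = require('googleapis');

function driveClientFor(user) {
  const oauth2Client = new google.auth.OAuth2(
    process.env.SERVER_CLIENT_ID,
    process.env.SERVER_CLIENT_SECRET,
    'https://example.com/oauth2callback'
  );
  // The library exchanges the stored refresh token for access tokens
  // whenever they are needed.
  oauth2Client.setCredentials({ refresh_token: user.refreshToken });
  return google.drive({ version: 'v3', auth: oauth2Client });
}

// List files in the application data folder (the appfolder scope's space).
async function listAppDataFiles(user) {
  const drive = driveClientFor(user);
  const res = await drive.files.list({ spaces: 'appDataFolder' });
  return res.data.files;
}
```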

Best Practices in Protecting Amazon S3 Files?

For example, I have a website with User A and B.
Both of them can login to my website using my own login system.
How do I make certain files from S3 accessible only to User A once he logs in to my website?
Note: I saw "Permission" in the AWS Management Console with an "Authenticated Users" option, but it seems that it's meant for other S3 users only. Is it something I can use to achieve my goal?
You need to use Amazon IAM: you can define which part of any S3 bucket A can see, and likewise for B, so that neither has access to do 'anything'. In general you should never use the account ID and secret for anything; always give an IAM user just what's needed to run your stuff. The admin user likely does not need EC2, SQS, SimpleDB, etc.
Federated access is great for allowing arbitrary users to sign in to your website and be granted access for only, say, 12 hours. They get temporary AWS credentials for that access that work only on the section of S3 you let them look at.
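A sketch of minting such temporary credentials with STS federation, assuming Node.js with the AWS SDK for JavaScript v2 and a hypothetical bucket layout keyed by username:

```javascript
// Sketch: 12-hour temporary credentials scoped to one user's prefix.
const AWS = require('aws-sdk');
const sts = new AWS.STS();

async function credentialsForUser(username) {
  const res = await sts.getFederationToken({
    Name: username,                 // appears in CloudTrail logs
    DurationSeconds: 12 * 60 * 60,  // 12 hours
    Policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Action: ['s3:GetObject'],
        Resource: `arn:aws:s3:::app-uploads/${username}/*`,
      }],
    }),
  }).promise();
  // AccessKeyId, SecretAccessKey, SessionToken for the website session
  return res.Credentials;
}
```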

Are authenticated URLs at S3 secure?

I have some files stored at Amazon, all in private mode. Since I need to provide users a way to download these files, each time a user needs to download a file I just create an authenticated URL according to Authenticating REST Requests, and the user can download the file within a window of 5 minutes.
BUT once the URL is generated I can see my Amazon key in it. Is this something I should worry about? (I know you also need the secret key to access any object.) But is this still secure?
The key is fine to publicly distribute, the secret is not.
So the answer is yes!
Edit: the access key, along with the secret, is used to generate the signature. You need both to generate valid (signed) requests to Amazon. The secret, however, remains private.
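For reference, a sketch of generating such a time-limited URL with the AWS SDK for JavaScript v2 (bucket and key are placeholders):

```javascript
// Sketch: 5-minute pre-signed GET URL for a private object.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const url = s3.getSignedUrl('getObject', {
  Bucket: 'private-files',
  Key: 'reports/invoice-42.pdf',
  Expires: 300, // seconds; after this the signature no longer validates
});
// The URL embeds the access key ID and the signature, never the secret key.
console.log(url);
```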