I want the ability to upload and delete blobs in Azure Storage, but to deny updating (modifying) existing blobs. I see only three types of access - Private, Blob, and Container - but none of them allows uploading while denying updates.
Is it possible?
In Azure Storage this can be done by creating access policies on containers and blobs. Check out these MSDN articles:
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-2/
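The linked articles cover the .NET approach; as a rough illustration, here is a minimal sketch using the Python SDK (azure-storage-blob v12) that issues a SAS limited to Create and Delete, without Write or Read. The account and container names are placeholders, and whether the Create permission alone fully prevents overwriting an existing blob is worth verifying for your scenario.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

# Issue a container SAS that can create and delete blobs, but not modify or read them.
# "myaccount", "mycontainer" and the key are placeholders.
sas_token = generate_container_sas(
    account_name="myaccount",
    container_name="mycontainer",
    account_key="<storage-account-key>",
    permission=ContainerSasPermissions(create=True, delete=True),  # no write, read or list
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
# Hand sas_token to the client; requests outside create/delete should be rejected with 403.
```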
I would love to hear your ideas on the following.
In this project, multiple users (say 1,000) will upload files into the same storage account (AWS S3, Azure Blob Storage, or DigitalOcean Spaces) using a C# Windows desktop app.
The desktop app handles user authentication through a Web API.
Questions
Is it correct that each user will have his/her own bucket?
What is the best way to securely provide the API key and bucket information to the desktop app so that files are uploaded to the correct bucket and storage account?
Think about the structure of your S3 bucket and how you will later identify each object a user uploaded.
I would create an initial key (prefix) for each user, under which that user uploads their files, e.g.:
username1/object1
username1/object2
username1/objectx
username2/object1
username3/object1
usernamex/objectx
This gives you the possibility, if a user is deleted, to simply delete all objects under that username as well. If you are using a generated key to identify the user, you can use the keyID instead of the username.
The most interesting question is how you will secure this so that no user is able to see another user's objects. If you have an underlying API, it's "easy": give the API access to the S3 bucket and restrict the requests so that only the objects whose username or keyID matches are listed.
If you are using IAM users (or roles), you can automatically generate a policy for each base key (username1 or keyID) allowing only the specific actions, as in the sketch below.
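A minimal sketch of what such a per-user policy could look like (the bucket name, prefix layout, and action list are placeholders, not a definitive implementation):

```python
import json

def user_policy(bucket: str, username: str) -> str:
    """Build an IAM policy document limiting a user to their own prefix (illustrative only)."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {   # read/write/delete only below <username>/...
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{username}/*",
            },
            {   # allow listing, but only for keys starting with the user's prefix
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{username}/*"]}},
            },
        ],
    })

# For example, attach it to an IAM user with boto3:
# boto3.client("iam").put_user_policy(UserName="username1",
#     PolicyName="s3-own-prefix", PolicyDocument=user_policy("my-bucket", "username1"))
```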
If you set up something like this, be really sure to harden your security, and also enable logging on the bucket so you can verify that user1 can't access objects from user2.
I'm updating accountB's S3 bucket from accountA's CodeBuild project.
The problem is that access is denied to all the objects uploaded by accountA's CodeBuild.
My purpose is to use this S3 bucket for static hosting.
I set up all the requirements for static hosting, and it works fine when I upload a simple index.html manually.
But the individual objects from accountA's CodeBuild project show the error attached below.
e.g. index.html properties & permissions
I checked the "Disable artifact encryption" option in the artifact settings of the CodeBuild project,
and also set the following in the override params:
encryptionDisabled: true
This CodeBuild project works fine when I save the output to an S3 bucket in the same account
(the S3 static hosting site in accountA works well),
but I'm getting the access issue with accountB's S3.
Before trying to touch the KMS policy, I want to know whether I missed some configuration in CodeBuild.
Please advise me on what I have to do or what I missed...
Thanks.
(+)
I just found a similar question and answer with help from petrch (thanks!) and am trying to apply it:
CodeBuild upload build artifact to S3 with ACL
Upload the objects with the bucket-owner-full-control canned ACL; otherwise the objects will still be "owned" by the source account.
See:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
It says:
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.
When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource. This is shown in the following sample bucket ACL (the default object ACL has the same structure)
So the object gets the default ACL granting the source account full control. It's not very obvious, but you can provide an ACL during the PutObject action from the source account, so it can still be just one call.
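As a rough sketch (assuming you do the upload yourself from the build, e.g. in a script, rather than through CodeBuild's artifacts section; the bucket name is a placeholder), passing the canned ACL in the same PutObject call with boto3 could look like this:

```python
import boto3

s3 = boto3.client("s3")  # runs with accountA's (source account) credentials

with open("index.html", "rb") as body:
    s3.put_object(
        Bucket="accountb-static-site",        # placeholder: the destination bucket in accountB
        Key="index.html",
        Body=body,
        ContentType="text/html",
        ACL="bucket-owner-full-control",      # lets the bucket owner (accountB) control the object
    )
```

The AWS CLI equivalent in a buildspec would be something like `aws s3 cp index.html s3://accountb-static-site/ --acl bucket-owner-full-control`.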
I blocked public access to my S3 bucket, but I want to allow the BigQuery connection using the BigQuery Transfer Service.
I saw the option to restrict access by IP address, but since Google doesn't expose a fixed public IP address, I don't know how to solve this issue.
Any experience?
Thanks
You don't need to deny access based on IP address. It's much easier to create an IAM user, attach the S3ReadOnly policy, generate access keys, and let BigQuery use those keys to get the data from S3. Restricting access via IAM is better than the IP address approach.
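A minimal sketch of that setup with boto3 (the user name is a placeholder, and you could also scope a custom policy down to just the one bucket instead of using the managed read-only policy):

```python
import boto3

iam = boto3.client("iam")

# Create a dedicated user for the BigQuery Data Transfer Service (name is a placeholder).
iam.create_user(UserName="bigquery-transfer")

# Attach the AWS-managed S3 read-only policy.
iam.attach_user_policy(
    UserName="bigquery-transfer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Generate the access key pair to enter in the BigQuery transfer configuration.
keys = iam.create_access_key(UserName="bigquery-transfer")["AccessKey"]
print(keys["AccessKeyId"], keys["SecretAccessKey"])
```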
You can use a bucket policy for providing access. Refer to this article - https://medium.com/@Keithweaver_/only-allowing-access-to-your-s3-bucket-via-your-website-5ca5c8546152
Hope it helps.
Is it possible to set this permission through the Cloud Console UI for Cloud Storage? Or is it only settable through the API (for example, following the guidance in this post)?
In the documentation for Google's cloud storage, one of the defined permission scopes is "domain". This allows you to specify that the read or write permission is granted to any authenticated user that is part of your Google Apps domain.
When accessing a storage container UI in the cloud console, you can set user or group permissions, but entering a naked domain with either "User" or "Group" selected results in an "Invalid Value" message when the changes are saved.
This setting is now exposed via the Cloud Console UI. You should notice 3 sections in the dropdown: user, group, and domain.
The setting is also available via the API and via the command-line utility, gsutil. To grant read access to the domain my-domain.org from gsutil, you'd do something like this:
gsutil acl ch -g my-domain.org:R gs://bucket
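Via the API, a rough equivalent using the Python client library (google-cloud-storage) might look like the sketch below; the bucket and domain names are the same placeholders as in the gsutil example.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("bucket")       # same bucket as in the gsutil example

acl = bucket.acl
acl.reload()                               # load the current ACL entries
acl.domain("my-domain.org").grant_read()   # add READ for all users in the domain
acl.save()                                 # persist the updated bucket ACL
```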
For example, I have a website with users A and B.
Both of them can log in to my website using my own login system.
How do I make certain files from S3 accessible only to user A once he logs in to my website?
Note: I saw "Permission" in the AWS Management Console with an "Authenticated Users" option, but it seems to be meant for other AWS S3 users only. Is it something I can use to achieve my goal?
You need to use Amazon IAM: you can define which part of any S3 bucket A can see, and likewise for B, and neither will have access to do 'anything' else. In general you should never use the account's root ID and secret for anything; always create an IAM user that has just what's needed to run your stuff. The admin user likely does not need EC2, SQS, SimpleDB, etc.
Federated access is great for allowing arbitrary users to sign in to your website and be granted access for only, say, 12 hours. They get temporary AWS credentials for that access that work only on the section of S3 you let them look at; see the sketch below.
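As a hedged sketch of that flow (the bucket name and per-user prefix are assumptions; your web backend would call this after its own login check and hand the temporary credentials to the client):

```python
import json
import boto3

sts = boto3.client("sts")

# Temporary, scoped-down credentials for "User A" after they log in to your site.
resp = sts.get_federation_token(
    Name="userA",                                   # your site's user id (placeholder)
    DurationSeconds=12 * 3600,                      # e.g. 12 hours, as mentioned above
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-bucket/userA/*",   # placeholder per-user prefix
        }],
    }),
)

creds = resp["Credentials"]   # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```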