I want to list the access that has been granted on storage via ACLs.
Is there an API solution for this?
I want to list all entities (AD group, service principal, etc.), like the one marked in green, that have access to the storage via ACLs.
The idea is to create an audit platform that can list all access granted via ACLs.
I tried the Path API, as suggested in one of the comments, but "x-ms-acl" was missing from the response (refer to the screenshot).
After changing "blob" to "dfs" in the endpoint, it worked.
The API you would want to use is Path - Get Properties with the action query parameter set to getAccessControl. This should return the ACL in the x-ms-acl response header.
You will need to use DFS endpoint (instead of blob endpoint).
If you are using Azure.Storage.Files.DataLake (.Net SDK for Azure DataLake), the method you would want to use is DataLakeDirectoryClient.GetAccessControlAsync.
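For example, here is a minimal sketch with Azure.Storage.Files.DataLake; the account, filesystem, and directory names are placeholders, and note the DFS endpoint rather than the blob endpoint:

using System;
using Azure.Storage;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

var serviceClient = new DataLakeServiceClient(
    new Uri("https://myaccount.dfs.core.windows.net"),
    new StorageSharedKeyCredential("myaccount", "<account-key>"));

var directoryClient = serviceClient
    .GetFileSystemClient("my-filesystem")
    .GetDirectoryClient("my-directory");

// Calls Path - Get Properties with action=getAccessControl under the hood.
PathAccessControl acl = await directoryClient.GetAccessControlAsync();

// Each entry identifies the entity (user/group/service principal object ID) and its permissions.
foreach (PathAccessControlItem entry in acl.AccessControlList)
{
    Console.WriteLine($"{entry.AccessControlType} {entry.EntityId} {entry.Permissions}");
}

An audit platform would walk the filesystem and collect these entries per path.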
I'd like to create a role that can access only the read-only endpoint.
Constructing the resource ARN as described here allows access to both the read and write endpoints.
I tried setting the resource ID of the READER instance in the ARN in these ways:
arn:aws:neptune-db:region:account-id:reader-instance-resource-id/*
arn:aws:neptune-db:region:account-id:cluster-resource-id/reader-instance-resource-id
arn:aws:neptune-db:region:account-id:cluster-resource-id/reader-instance-resource-id/*
But none of these work. Is there a way to give a role read-only access?
The roles and policies that Amazon Neptune currently supports are listed here. Currently, the NeptuneReadOnlyAccess managed policy applies only to the control plane. It allows you to read but not alter configurations. That policy does not apply to the data plane (running queries).
It is possible that a future Amazon Neptune update will add additional access control policies.
For now, you will need to manage access to instances and endpoints as part of your application architecture.
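One possible pattern, as a sketch only (this is a configuration convention, not something IAM enforces): give read-only components just the cluster's reader endpoint, and keep the writer endpoint with trusted services. The endpoint name below is a placeholder and Gremlin.Net is assumed as the client library.

using Gremlin.Net.Driver;

// Hypothetical reader endpoint; only read-only components receive this value in their config.
var readerEndpoint = "my-cluster.cluster-ro-xxxxxxxxxxxx.us-east-1.neptune.amazonaws.com";

using var readClient = new GremlinClient(new GremlinServer(readerEndpoint, 8182, enableSsl: true));

// Read replicas behind the reader endpoint do not accept writes, but this separation
// is enforced by your deployment/configuration, not by an IAM policy.
var result = await readClient.SubmitAsync<dynamic>("g.V().limit(1).valueMap()");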
I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://.s3.amazonaws.com/?AWSAccessKeyID=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload them again.
I'm using the Ruby SDK (but willing to try something else if it works) and couldn't write anything that does this.
I was able to initialize the S3 object with my credentials (access_key and secret), which grants me access to my bucket, but how do I pass the "source-side" access_key_id, signature, and expires parameters?
To make the problem a bit simpler: I can't even do a GET request to the object using the presigned parameters (not with plain HTTP; I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using parameters that were already given to me (I obviously don't have the secret_key of my data provider).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting, the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and get the source object. Permissions can be granted across accounts, though in many cases that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
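If your own credentials do have read access to the source bucket, that COPY operation is a single server-side call. A minimal sketch with the AWS SDK for .NET (bucket and key names are placeholders):

using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client(); // credentials/region come from the environment

// Server-side copy: the object data never leaves S3.
await s3.CopyObjectAsync(new CopyObjectRequest
{
    SourceBucket = "source-bucket",
    SourceKey = "source-key",
    DestinationBucket = "my-bucket",
    DestinationKey = "copied-key"
});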
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
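Roughly, the request ends up looking like the sketch below. This assumes the presigned PUT URL was generated with those two headers included in the signature; the header values sent must match what was signed, and the names here are placeholders.

using System;
using System.Net.Http;

var presignedPutUrl = "https://target-bucket.s3.amazonaws.com/target-key?<signed-query-string>";

using var http = new HttpClient();
using var request = new HttpRequestMessage(HttpMethod.Put, presignedPutUrl);

// Tell S3 to copy from an existing object instead of uploading a request body.
request.Headers.Add("x-amz-copy-source", "/sourceBucket/sourceObject");
request.Headers.Add("x-amz-metadata-directive", "REPLACE");

var response = await http.SendAsync(request);
response.EnsureSuccessStatusCode();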
I've been creating an extension for VSTS, and so far I have stored some data in documents in collections (https://learn.microsoft.com/en-us/vsts/extend/develop/data-storage).
The problem I have now is that I need to GET these documents somehow from an external application. I have looked into https://github.com/Microsoft/vsts-auth-samples/tree/master/ClientLibraryConsoleAppSample to get the authorization done, but then I am unable to get the documents. If I try to access them through the REST API, I have issues authorizing myself without a personal access token (the application is supposed to work for every user, and I cannot get and use every user's personal access token; this is not feasible for 350+ people), and I am also unable to get the REST API working. The documentation on all of this is severely lacking.
Anyone able to help?
The documentation is lacking because the data storage is isolated to the extension, and there is no easy way to access the data from outside of the extension. If you need external access, you also need to store your data externally: Azure storage, or a TFVC/Git repo under the VSTS account.
As for per-user storage access, that's also isolated and would indeed require either an account owner token or a user-specific OAuth or PAT token.
I have found the solution. The documentation states that there are two ways of working with the documents/collections: the REST API and their VSS wrappers. The URL required to get a document in a certain collection is as follows (omit /{documentName} to list all documents in the collection):
https://{account}.extmgmt.visualstudio.com/_apis/ExtensionManagement/InstalledExtensions/{publisherName}/{extensionName}/Data/Scopes/Default/Current/Collections/{collectionName}/Documents/{documentName}.
Using this in a browser works just fine. All that needs to be done in order to use this from an external application is authorization.
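For instance, a minimal sketch with HttpClient, assuming you have a personal access token (or another credential the service accepts); the account, publisher, extension, and collection names are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var pat = Environment.GetEnvironmentVariable("VSTS_PAT"); // placeholder for however you store the token

var url = "https://myaccount.extmgmt.visualstudio.com/_apis/ExtensionManagement" +
          "/InstalledExtensions/mypublisher/myextension" +
          "/Data/Scopes/Default/Current/Collections/MyCollection/Documents";

using var http = new HttpClient();
// A PAT is sent via basic auth with an empty user name.
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

var json = await http.GetStringAsync(url);
Console.WriteLine(json);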
If you use SDK methods from the docs, like VSS.getService(VSS.ServiceIds.ExtensionData), you can inspect the request (easiest in the browser dev tools).
It looks like:
https://extmgmt.dev.azure.com/{organization}/_apis/ExtensionManagement/InstalledExtensions/{publisher id}/{extension id}/Data/Scopes/Default/Current/Collections/{collections (by default 'MyCollection')}/Documents
I am sending blobs to an Azure Storage account. I have one customer with 3 IoT clients, each of which writes to its own container.
I use a shared access policy to create a SAS URI for each container.
I am not using an expiry date when creating the shared access policy. The generated SAS URI is copied to a config file, and each of the clients uses it to write blobs to the storage.
This works fine. On the client I create the container using
CloudBlobContainer _container = new CloudBlobContainer(new Uri("https://myhubstorage.blob.core.windows.net/containername?sv=2015-04-05&sr=c&si=containername&sig=xxxxx"));
The token above is retrieved from a config file
To send blobs I use
var newBlob = _container.GetBlockBlobReference(filePath);
Now this works, but I'm not sure if this is the best approach. The reason is that I do not have an expiry on the shared access policy used to create the container SAS token. I don't want to distribute a new SAS token for the container each time it expires (I would have to update the config file).
Also, I do not want the clients to have access to the storage account key.
If a client is compromised, I can revoke the shared access policy so the other clients will not be affected.
But is this the best approach regarding security? Input would be appreciated.
Using a shared access policy is the suggested approach; however, note that you can set up to 5 stored access policies on a container (that doesn't look like a problem for you, since there are only 3 IoT clients).
You might also want to refer to the best practices for using SAS for a full list.
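As a sketch of that pattern with the classic SDK you are already using, run from a trusted backend rather than the clients (names and the expiry window here are placeholders):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

var account = CloudStorageAccount.Parse("<storage-connection-string>");
var container = account.CreateCloudBlobClient().GetContainerReference("containername");

// Define (or rotate) a stored access policy on the container.
var permissions = await container.GetPermissionsAsync();
permissions.SharedAccessPolicies["containername"] = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Create,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMonths(12)
};
await container.SetPermissionsAsync(permissions);

// The SAS only references the policy by name (si=...), so revoking or changing the policy
// invalidates tokens already handed out, without touching the account key.
string sasToken = container.GetSharedAccessSignature(new SharedAccessBlobPolicy(), "containername");
string sasUri = container.Uri + sasToken;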
Is there a way to create a different identity (access key / secret key) to access Amazon S3 buckets via the REST API, where I can restrict access (read-only, for example)?
The recommended way is to use IAM to create a new user, then apply a policy to that user.
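As a rough sketch of that with the AWS SDK for .NET (user, policy, and bucket names are placeholders; this assumes admin credentials in the environment):

using Amazon.IdentityManagement;
using Amazon.IdentityManagement.Model;

var iam = new AmazonIdentityManagementServiceClient();

await iam.CreateUserAsync(new CreateUserRequest { UserName = "s3-readonly-user" });

// Inline policy allowing read-only access to a single bucket.
await iam.PutUserPolicyAsync(new PutUserPolicyRequest
{
    UserName = "s3-readonly-user",
    PolicyName = "s3-read-only",
    PolicyDocument = @"{
      ""Version"": ""2012-10-17"",
      ""Statement"": [{
        ""Effect"": ""Allow"",
        ""Action"": [""s3:GetObject"", ""s3:ListBucket""],
        ""Resource"": [""arn:aws:s3:::my-bucket"", ""arn:aws:s3:::my-bucket/*""]
      }]
    }"
});

// The returned access key / secret key are the restricted credentials you hand out.
var key = await iam.CreateAccessKeyAsync(new CreateAccessKeyRequest { UserName = "s3-readonly-user" });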
Yes, you can. The S3 API documentation describes the Authentication and Access Control services available to you. You can set up a bucket so that another Amazon S3 account can read but not modify items in the bucket.
Check out the details at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingAuthAccess.html (follow the link to "Using Query String Authentication"). This is a sub-document of the one Greg posted, and it describes how to generate access URLs on the fly.
This uses a hashed form of the private key and allows expiration, so you can give brief access to files in a bucket without allowing unfettered access to the rest of the S3 store.
Constructing the REST URL is quite difficult; it took me about 3 hours of coding to get it right, but this is a very powerful access technique.
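These days the SDKs do the URL and signature construction for you. A minimal sketch with the AWS SDK for .NET (bucket and key are placeholders):

using System;
using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();

// Query-string-authenticated (presigned) GET URL that stops working after 15 minutes.
string url = s3.GetPreSignedURL(new GetPreSignedUrlRequest
{
    BucketName = "my-bucket",
    Key = "path/to/file.txt",
    Verb = HttpVerb.GET,
    Expires = DateTime.UtcNow.AddMinutes(15)
});

Console.WriteLine(url);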