How to limit access to the read-only endpoint in Amazon Neptune?

I'd like to create a role that can access only the read-only endpoint.
Constructing the resource ARN as described here will allow access to both the read and write endpoints.
I tried setting the resource ID of the READER instance in the ARN in these ways:
arn:aws:neptune-db:region:account-id:reader-instance-resource-id/*
arn:aws:neptune-db:region:account-id:cluster-resource-id/reader-instance-resource-id
arn:aws:neptune-db:region:account-id:cluster-resource-id/reader-instance-resource-id/*
But none of these work. Is there a way to give a role read-only access?

The roles and policies that Amazon Neptune supports are listed here. Currently, the NeptuneReadOnlyAccess managed policy applies only to the control plane: it allows you to read, but not alter, configurations. It does not apply to the data plane (running queries).
A future Amazon Neptune update may add additional access-control policies.
For now, you will need to manage access to instances and endpoints as part of your application architecture.
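For context, here is a minimal sketch (using boto3, with hypothetical region, account, cluster, and role identifiers) of the data-plane policy format the question refers to. It illustrates the limitation: the resource in the ARN is the cluster resource ID, so the grant covers queries through both the reader and the writer endpoint.

```python
import json
import boto3  # assumes AWS credentials are configured

# Hypothetical identifiers, for illustration only.
CLUSTER_RESOURCE_ID = "cluster-ABCD1234"
ROLE_NAME = "neptune-query-role"

# Data-plane policy per the Neptune docs: the resource is the *cluster*
# resource ID, so this allows queries through the reader AND the writer
# endpoint -- there is no finer-grained, per-endpoint ARN to use instead.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "neptune-db:*",
        "Resource": f"arn:aws:neptune-db:us-east-1:123456789012:{CLUSTER_RESOURCE_ID}/*",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="neptune-data-plane-access",
    PolicyDocument=json.dumps(policy),
)
```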

Related

List ACLs for storage

I want to list the access that has been granted on storage via ACLs.
Is there an API solution for this?
I want to list all entities (AD groups, service principals, etc.) that have access to storage via ACLs.
The idea is to create an audit platform that can list all access granted via ACLs.
I tried the Path API, as suggested in one of the comments, but "x-ms-acl" was missing from the response.
After changing "blob" to "dfs" in the endpoint, it worked.
The API you would want to use is Path - Get Properties with the action query parameter set to getAccessControl. This should return the ACL in the x-ms-acl response header.
You will need to use DFS endpoint (instead of blob endpoint).
If you are using Azure.Storage.Files.DataLake (.Net SDK for Azure DataLake), the method you would want to use is DataLakeDirectoryClient.GetAccessControlAsync.
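If you are working in Python instead, a rough equivalent is a sketch like the following (assuming the azure-storage-file-datalake and azure-identity packages; the account, filesystem, and directory names are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Note the DFS endpoint, not the blob endpoint.
service = DataLakeServiceClient(
    account_url="https://myaccount.dfs.core.windows.net",  # hypothetical account
    credential=DefaultAzureCredential(),
)

directory = (
    service.get_file_system_client("myfilesystem")  # hypothetical filesystem
    .get_directory_client("mydir")                  # hypothetical directory
)

# Wraps Path - Get Properties with action=getAccessControl; the ACL string
# is surfaced from the x-ms-acl response header.
access = directory.get_access_control()
print(access["acl"])  # e.g. "user::rwx,group::r-x,other::---"
print(access["owner"], access["group"])
```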

WSO2: Control several APIs with the same endpoint with XACML policies

I have followed the tutorial for enforcing policies on API calls
http://wso2.com/library/tutorials/2016/02/tutorial-how-to-enable-role-based-access-control-for-wso2-api-manager-using-xacml/
It wasn't easy but I got something up and running. I can change access to different endpoints of an API depending on the user's role.
I have a question. Here's a fictional setup to complete the tutorial:
API EduCollege, with endpoints /student/info and /staff/info (tutorial)
API Prison, with endpoints /prisoner/info and /staff/info (note that it's the same endpoint)
I write a policy EDUCollegePolicy that enables only those with role college_admin to access /staff/info (tutorial).
But there seems to be no way to restrict these college admins from accessing staff info of the prison!
The field resource only contains info about the endpoint.
Is there any way, using this setup, to limit by API?
Or does it maybe require a different JAR add-in that would send a resource value of API/version/endpoint instead of just /endpoint?
Oh, by the way: I couldn't set policies using the endpoints exactly as given in the tutorial. The resource doesn't seem to be /staff/info; I got it to work with the regexp .*staff.*info.*, which isn't nice. I wonder what the actual resource value sent from the JAR to the PDP is; I couldn't find it in any logs, including the IDS logs (the IDS acts as the PDP).
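For what it's worth, a tiny Python sketch of the situation described above. The API-qualified resource format is hypothetical, purely to illustrate what a policy could match on if the JAR sent API context along with the endpoint:

```python
import re

# Today the PDP apparently receives only the endpoint path, so
# EduCollege's and Prison's /staff/info are indistinguishable.
pdp_resource = "/staff/info"

# If the resource instead carried API context (hypothetical
# API/version/endpoint format), a policy could match per API:
college_only = re.compile(r"^/?EduCollege/.*staff.*info.*")
print(bool(college_only.match("EduCollege/v1/staff/info")))  # True
print(bool(college_only.match("Prison/v1/staff/info")))      # False
```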

AWS IAM Api for Policy Summary and Access Advisor info

I am trying to fetch the list of services that a role is allowed to use. I see the AWS console has the Access Advisor information, which fits my needs, but I see no API support. Does anyone know of a policy summary call (or something similar) that can provide that information without having to compute it manually on the client side?
You are correct. There is no API call that provides information similar to the Access Advisor.
The closest option is to fetch the IAM policy, but you would then need to interpret the policy into something human-intelligible.
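As a rough sketch of that closest option: with boto3 you can pull a role's attached policy documents and naively extract which services their actions touch (the role name is hypothetical; the interpretation step is the hard part and this ignores Deny, NotAction, conditions, and inline policies):

```python
import boto3

iam = boto3.client("iam")
ROLE_NAME = "my-role"  # hypothetical role

services = set()
for attached in iam.list_attached_role_policies(RoleName=ROLE_NAME)["AttachedPolicies"]:
    policy = iam.get_policy(PolicyArn=attached["PolicyArn"])["Policy"]
    doc = iam.get_policy_version(
        PolicyArn=attached["PolicyArn"],
        VersionId=policy["DefaultVersionId"],
    )["PolicyVersion"]["Document"]

    statements = doc["Statement"]
    if isinstance(statements, dict):  # a lone statement may not be a list
        statements = [statements]
    for stmt in statements:
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            # "s3:GetObject" -> "s3"; inline role policies would need
            # list_role_policies / get_role_policy as well.
            services.add(action.split(":")[0])

print(sorted(services))
```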

Service account -- limiting access to only big query

Is there a way to create a service account in the context of Google's cloud services that can only access BigQuery and not any other service (GCE, App Engine, etc.)? Or is it necessary to create a new "project" and put the account in that project?
There are two ways to scope access:
ACLs and group membership allow control over what the service account has access to.
OAuth credentials can be scoped to individual services/APIs.
Either option could work for you, depending on what your ultimate goal is.
How to use ACLs to limit access to only BigQuery
A service account is an identity, just like an email address is an identity.
Identity access is controlled through ACLs, either on the project or on the individual datasets you want to manage. BigQuery's access control is described here: https://cloud.google.com/bigquery/access-control. Other services and APIs offer their own ACL controls. Together, these options give you fine-grained control over access.
For example, if you put the service account in the project owners ACL, then that service account will have access to everything a project owner would have: BigQuery, Google Storage, etc.
Alternatively, if you put that service account only on a single BigQuery Dataset, then it would only have access to that dataset. (If you also want that service account to be able to run BigQuery jobs, then it would need to be a member of some project since jobs run in the context of a project. If you have a requirement that the project you run BigQuery jobs in cannot be the same project that you store Google Storage data in, then you will need multiple projects.)
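A sketch of that dataset-level grant, using the google-cloud-bigquery Python client (the project, dataset, and service-account names are hypothetical):

```python
from google.cloud import bigquery
from google.cloud.bigquery import AccessEntry

client = bigquery.Client(project="my-project")        # hypothetical project
dataset = client.get_dataset("my-project.analytics")  # hypothetical dataset

# Grant the service account READER on this one dataset only; it gets no
# access to other datasets, Google Storage, or anything else in the project.
entries = list(dataset.access_entries)
entries.append(
    AccessEntry(
        role="READER",
        entity_type="userByEmail",
        entity_id="reporting@my-project.iam.gserviceaccount.com",  # hypothetical SA
    )
)
dataset.access_entries = entries
dataset = client.update_dataset(dataset, ["access_entries"])
```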
How to use OAuth Scopes to limit access to only BigQuery
When you create the OAuth credentials for your service account, you can specify the scopes that the credentials are valid for. Each API documents the scopes required to call it. BigQuery's scopes are documented here: https://cloud.google.com/bigquery/authorization.
For example, if you only provide BigQuery scopes, then your code will only be able to make BigQuery api calls. Attempting to call a Google Storage API with credentials bound to BigQuery won't work.
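A sketch of scoping credentials to BigQuery only, assuming the google-auth and google-cloud-bigquery packages (the key file and project name are hypothetical):

```python
from google.oauth2 import service_account
from google.cloud import bigquery

# Credentials restricted to the BigQuery scope; calls to other Google APIs
# made with these credentials will be rejected.
credentials = service_account.Credentials.from_service_account_file(
    "key.json",  # hypothetical service-account key file
    scopes=["https://www.googleapis.com/auth/bigquery"],
)

client = bigquery.Client(project="my-project", credentials=credentials)
for row in client.query("SELECT 1 AS ok").result():
    print(row.ok)
```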

How to restrict Amazon S3 API access?

Is there a way to create a different identity (access key / secret key) to access Amazon S3 buckets via the REST API, where I can restrict access (read-only, for example)?
The recommended way is to use IAM to create a new user, then apply a policy to that user.
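A minimal sketch of that approach with boto3 (the user and bucket names are hypothetical):

```python
import json
import boto3

iam = boto3.client("iam")

# Create a dedicated user with its own access key / secret key...
iam.create_user(UserName="s3-readonly-user")  # hypothetical user
key = iam.create_access_key(UserName="s3-readonly-user")["AccessKey"]

# ...and attach an inline policy allowing read-only access to one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-bucket",    # hypothetical bucket
            "arn:aws:s3:::my-bucket/*",
        ],
    }],
}
iam.put_user_policy(
    UserName="s3-readonly-user",
    PolicyName="s3-read-only",
    PolicyDocument=json.dumps(policy),
)

print(key["AccessKeyId"], key["SecretAccessKey"])
```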
Yes, you can. The S3 API documentation describes the Authentication and Access Control services available to you. You can set up a bucket so that another Amazon S3 account can read but not modify items in the bucket.
Check out the details at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingAuthAccess.html (follow the link to "Using Query String Authentication") - this is a subdocument of the one Greg posted, and it describes how to generate access URLs on the fly.
This uses a hashed form of the private key and allows expiration, so you can give brief access to files in a bucket without allowing unfettered access to the rest of the S3 store.
Constructing the REST URL is quite difficult; it took me about three hours of coding to get it right, but it is a very powerful access technique.
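These days the SDKs implement that query-string signing for you; a sketch with boto3 (the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Generates the signed query-string URL described above: anyone holding the
# URL can GET this one object until the URL expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "report.pdf"},  # hypothetical
    ExpiresIn=3600,  # one hour
)
print(url)
```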