I have AWS OpenSearch configured, and I have a large list of security roles to create, each with the following items:
Cluster Permissions: indices_monitor, cluster_monitor
Index: log-*
Document level security: {have some json}
Field level security: Include
Tenant: global_tenant
Is there any script that I can run from the OpenSearch Dev Tools console, or any other sort of automation?
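One option: the Security plugin behind fine-grained access control exposes a REST API for roles, so each role can be created straight from Dev Tools and therefore scripted. A minimal sketch, assuming fine-grained access control is enabled and the _plugins/_security API path of recent OpenSearch versions; the role name, DLS query, field list, and allowed actions below are illustrative placeholders:
PUT _plugins/_security/api/roles/example_log_reader
{
  "cluster_permissions": ["indices_monitor", "cluster_monitor"],
  "index_permissions": [{
    "index_patterns": ["log-*"],
    "dls": "{ your DLS json here }",
    "fls": ["included_field_1", "included_field_2"],
    "allowed_actions": ["read"]
  }],
  "tenant_permissions": [{
    "tenant_patterns": ["global_tenant"],
    "allowed_actions": ["kibana_all_read"]
  }]
}
For a long list of roles, the same PUT can be driven in a loop with curl against https://<domain-endpoint>/_plugins/_security/api/roles/<role-name>, reading each body from a file.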
My Spring Boot application is going to be deployed on OpenShift, and from my application I need to download files from an AWS S3 bucket on another network.
What is the best way to connect to S3 and get the files? I am trying to use the AmazonS3 client. Do I need to do any configuration at the OpenShift infrastructure level? Is there any other way we can download the files?
This is my suggested method using IAM roles.
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
Scenario: Testing using Task IAM Role credentials
The endpoints container image can also vend credentials from an IAM Role; this allows you to test your application locally using a Task IAM Role.
NOTE: You should not use your production Task IAM Role locally. Instead, create a separate testing role, with equivalent permissions scoped to testing resources. Modifying the trust boundary of a production role will expand its scope.
In order to use a Task IAM Role locally, you must modify its trust policy. First, get the ARN of the IAM user defined by your default AWS CLI Profile (replace default with a different Profile name if needed):
aws --profile default sts get-caller-identity
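The output looks like this (the IDs and user name here are placeholders):
{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/default-user"
}
The Arn value is what goes into the trust policy statement below.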
Then modify your Task IAM Role so that its trust policy includes the following statement. You can find instructions for modifying IAM Roles in the IAM Documentation.
{
  "Effect": "Allow",
  "Principal": {
    "AWS": <ARN of the user found with get-caller-identity>
  },
  "Action": "sts:AssumeRole"
}
To use your Task IAM Role in your docker compose file for local testing, simply change the value of the AWS container credentials relative URI environment variable on your application container:
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/role/"
For example, if your role is named ecs_task_role, then the environment variable should be set to "/role/ecs_task_role". That is all that is required; the ecs-local-endpoints container will now vend credentials obtained from assuming the task role. You can use this to validate that the permissions set on your Task IAM Role are sufficient to run your application.
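Back in the application itself, no extra wiring should be needed for this to work: the AWS SDK's default credentials provider chain already checks the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI endpoint. A minimal sketch with the AWS SDK for Java v1, where the bucket name, key, and region are placeholders:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class S3Downloader {
    public static void main(String[] args) throws Exception {
        // No explicit credentials: the default provider chain resolves them,
        // locally via the ecs-local-endpoints container, and in a real
        // deployment via whatever role/credentials the platform provides.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1") // placeholder region
                .build();

        S3Object object = s3.getObject("my-bucket", "path/to/file.txt"); // placeholders
        try (InputStream in = object.getObjectContent()) {
            Files.copy(in, Paths.get("file.txt"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
The same client code runs unchanged inside a Spring Boot service; only the source of credentials differs between local testing and the real deployment.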
We have Azure Lighthouse configured, and I am trying to extract Azure AKS RBAC permission information for a managing subscription from a managed tenant:
Get-AzRoleAssignment -Scope "/subscriptions/0000000-0000-0000-00000000000000/resourcegroups/testrg/providers/Microsoft.ContainerService/managedClusters/testakscluster"
Can we extract role assignments for a managing tenant's subscription while logged in to a managed tenant's Cloud Shell?
Thanks for your help
When you use the Get-AzRoleAssignment command, it also calls the Azure AD Graph getObjectsByObjectIds API to validate the objects in Azure AD.
To solve the issue, make sure the user account logged in to Cloud Shell has permission to call that API. If your user account type is Member, it has the permission by default, so I suppose your user account is a Guest. If so, there are two ways to fix it:
1. Navigate to Azure Active Directory in the portal -> User settings -> click Manage external collaboration settings -> select the first option (guest users have the same access as members).
2. Navigate to Azure Active Directory in the portal -> Roles and administrators -> search for Directory readers -> click it -> Add assignments -> add your user account to the Directory readers role.
Either of the options above will make the command work.
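If you prefer to script option 2 instead of clicking through the portal, here is a rough sketch using the AzureAD PowerShell module; the UPN is a placeholder, and note that the role only appears in Get-AzureADDirectoryRole once it has been activated in the tenant:
# Assumes Connect-AzureAD has already been run
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq "Directory Readers" }
$user = Get-AzureADUser -ObjectId "guestuser@contoso.com"  # placeholder account
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId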
For anyone coming to this thread after some searching: I had the same issue with this call across multiple versions of the Az.Resources module: 2.5.0, 4.1.0, and 5.6.0. All my rights were set up correctly, both for an SPN and a user; both got the same error.
Changing the call to use the Azure CLI instead just works 😠.
az role assignment list -g [resource group name]
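And if you want the assignments at the exact AKS cluster scope from the question, the same command accepts a scope argument (the subscription ID is a placeholder):
az role assignment list --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/testrg/providers/Microsoft.ContainerService/managedClusters/testakscluster"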
Is there a way to download the list of BigQuery users to audit?
There is no export option in BQ's IAM page for members.
You can use the Cloud Asset Inventory search-all-iam-policies option, which allows you to use a custom query language to search Cloud Identity and Access Management (Cloud IAM) policies within a project, folder, or organization. Before using it, you have to enable the Cloud Asset API for your project and grant the cloudasset.assets.searchAllIamPolicies permission to the user account or service account that is making the request.
The following command will show you the users who are granted BigQuery roles on the specified project:
gcloud beta asset search-all-iam-policies --scope=projects/<PROJECT_ID> --query="policy : bigquery" | egrep "role:|user:"
Moreover, you can save the result of a query to a file by adding > <FILENAME>.txt at the end of the above command. To learn more about this command, please refer to the official documentation.
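For example, a hypothetical run that dumps the filtered members to a file:
gcloud beta asset search-all-iam-policies --scope=projects/<PROJECT_ID> --query="policy : bigquery" | egrep "role:|user:" > bigquery_members.txt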
I hope you find the above pieces of information useful.
I am deploying a project with the Serverless Framework that includes different resources (a Lambda function, a Cognito user pool, a Cognito identity pool, etc.).
For a previous project, we manually created from the console the configuration for a second API Gateway (in addition to the one that we configured with Serverless on the Lambda) to act purely as a proxy for our S3 bucket, so we were able to add and get files from the bucket without using the Lambda.
Now I want to do the exact same thing in this new project, but instead of creating the second API Gateway manually from the console, is there a way to declare this proxy directly in the Serverless configuration?
I searched for different solutions, but I didn't find any guide for this.
What I'm trying to achieve in the configuration is what this Amazon guide explains.
You can use this plugin, which makes it easy to set up API Gateway service proxies (I'm one of the collaborators).
serverless.yml example:
service: s3-proxy

provider:
  name: aws
  runtime: nodejs10.x

plugins:
  - serverless-apigateway-service-proxy

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /s3/{key}
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        key:
          pathParam: key
        cors: true

resources:
  Resources:
    S3Bucket:
      Type: 'AWS::S3::Bucket'
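Once deployed, the proxy can be exercised without touching any Lambda. A hypothetical test call, where the API ID, region, and stage are placeholders for whatever your deploy outputs:
curl -X POST "https://abc123.execute-api.us-east-1.amazonaws.com/dev/s3/hello.txt" -d "hello world"
The {key} path parameter ("hello.txt" here) becomes the object key in the bucket.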
Below is the error that comes up while creating a cluster:
(gcloud.container.clusters.create) ResponseError: code=403, message=Request had insufficient authentication scopes.
Check the IAM roles for the "Compute Engine default service account" and make sure it has enough permissions to run the command [2]; usually it would have the Owner or Editor role. A CLI sketch of both fixes follows the links below.
If you are in the Google Cloud Console, when creating an instance you need to look for the 'Identity and API access' section and select 'Allow full access to all Cloud APIs' [1].
[1] https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances?hl=en_US&_ga=2.168486115.-390700867.1538154355
[2] https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
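As a sketch of both fixes from the CLI (project ID, service account email, zone, and cluster name are placeholders):
# [2] Grant the default service account an Editor role on the project
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:123456789012-compute@developer.gserviceaccount.com" \
    --role="roles/editor"
# [1] Or request full API access scopes at cluster creation time
gcloud container clusters create my-cluster --zone=us-central1-a --scopes=cloud-platform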