How to assign a Vault policy to an LDAP group/user - ldap

I am trying to use Vault in my application. The authentication mechanism I am using is LDAP. I have done the configuration and my users are able to log in to Vault, but they are not able to see any of the secret engines that I created as the root user.
For example, I have enabled a secret engine secrets/kv and created 2 keys inside it. What I want is for my LDAP users to read/write secrets directly from the UI. My policy file looks like this -
path "secret/kv"
{
capabilities = ["read", "update", "list"]
}
path "auth/*"
{
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
And I issued the below command to write the group mapping -
vault write auth/ldap/groups/ldap-group policies=my-policy
Still, the users can't see the kv engine in the UI to read/write secrets.
Let me know if anyone can help me with this.

This policy should solve your issue. You don't need to prefix the path with secret; the policy path should match the engine's mount point, which in your case is kv/.
path "kv/*"
{
capabilities = ["read", "update", "list"]
}
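For completeness, here's a minimal sketch of applying and checking the mapping from the CLI (the policy and group names are the ones from the question; the username is a placeholder):
# write (or update) the policy from the HCL above
vault policy write my-policy my-policy.hcl
# map the policy to the LDAP group
vault write auth/ldap/groups/ldap-group policies=my-policy
# log in as an LDAP user and confirm the token can list the engine
vault login -method=ldap username=alice
vault kv list kv/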

Related

Can I access Google Secrets Manager secrets externally

I am writing an app that I will be hosting in Google Cloud Functions with some config stored in Secrets Manager. I would like to share this information with another node app that is running on my local machine. Is this possible? I have tried using the npm package but I can’t figure out how I can authenticate to get access to the manager.
I am using a service key to access Firestore:
import { initializeApp } from "firebase/app";
import { getFirestore } from "firebase/firestore";

const service_key = {
  apiKey: myKey,
  authDomain: "my-poroject.firebaseapp.com",
  projectId: "my-poroject",
  storageBucket: "my-poroject.appspot.com",
  messagingSenderId: "0123456789",
  appId: "0:00000000:web:00000000000000"
};

const app = initializeApp(service_key);
export const db = getFirestore(app);
This all works perfectly, but I can't see how I would apply the key or 'app' when using Secret Manager:
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');
const client = new SecretManagerServiceClient();
As a public cloud provider, Google makes most of its Cloud services publicly accessible. So YES, you can access the secrets from outside.
However, you must have the required credentials and permissions to access the secrets.
You can use a service account key file, which is itself a secret (and I never recommend that option, but in some cases it's useful), to generate an access token and query Secret Manager safely. The problem is the service account key file: it's a secret used to protect other secrets... The security level depends on your external platform.
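As a hedged sketch of that key-file approach, assuming you have downloaded a key to key.json and granted the service account the Secret Manager Secret Accessor role (the project and secret names are placeholders):
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');

// point the client at the downloaded service account key file
const client = new SecretManagerServiceClient({keyFilename: './key.json'});

async function getSecret() {
  // full resource name: projects/<project-id>/secrets/<name>/versions/latest
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-project/secrets/my-secret/versions/latest',
  });
  return version.payload.data.toString('utf8');
}

getSecret().then(console.log).catch(console.error);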
You can also have a look at Workload Identity Federation, which lets you use an identity you already have and be transparently authenticated on Google Cloud. It's very powerful: you no longer need a secret on your side, and you improve your security posture.

Custom domain for "cognito-idp.us-east-1.amazonaws.com"

I have a Cognito app client configured to use USER_PASSWORD_AUTH flow. By POSTing this request:
{
  "AuthParameters": {
    "USERNAME": "{{Username}}",
    "PASSWORD": "{{Password}}"
  },
  "AuthFlow": "USER_PASSWORD_AUTH",
  "ClientId": "{{AppClientId}}"
}
to "cognito-idp.us-east-1.amazonaws.com", I am able to successfully authenticate and retrieve JWTs.
I would like to CNAME the URL to be something like "auth.mydomain.com", but when I do that, I get a client certificate validation error. Is there any way to associate a valid certificate so I can CNAME the URL successfully?
You can configure a custom domain within your Cognito user pool. That's what we had to do to make this work. Check out this Cognito documentation. It discusses using the hosted UI stuff, but it should also apply to your scenario where you provide the login UI.
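If you go the custom domain route, here's a sketch of the CLI call (the pool ID, domain, and ACM certificate ARN are placeholders; the certificate must live in us-east-1):
aws cognito-idp create-user-pool-domain \
  --domain auth.mydomain.com \
  --user-pool-id us-east-1_XXXXXXXXX \
  --custom-domain-config CertificateArn=arn:aws:acm:us-east-1:111122223333:certificate/example
The call returns a CloudFront distribution domain; you then point your DNS record for auth.mydomain.com at that distribution rather than at the regional Cognito endpoint.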

How to download files from an AWS S3 bucket to a Spring Boot app on OpenShift?

My Spring Boot application is going to be deployed on OpenShift, and from my application I need to download files from an AWS S3 bucket on another network.
What is the best way to connect to S3 and get the files? I am trying to use the AmazonS3 client. Do I need to do any configuration at the OpenShift infra level? Is there any other way to download the files?
This is my suggested method using IAM roles.
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
Scenario: Testing using Task IAM Role credentials
The endpoints container image can also vend credentials from an IAM Role; this allows you to test your application locally using a Task IAM Role.
NOTE: You should not use your production Task IAM Role locally. Instead, create a separate testing role, with equivalent permissions scoped to testing resources. Modifying the trust boundary of a production role will expand its scope.
In order to use a Task IAM Role locally, you must modify its trust policy. First, get the ARN of the IAM user defined by your default AWS CLI Profile (replace default with a different Profile name if needed):
aws --profile default sts get-caller-identity
Then modify your Task IAM Role so that its trust policy includes the following statement. You can find instructions for modifying IAM Roles in the IAM Documentation.
{
  "Effect": "Allow",
  "Principal": {
    "AWS": <ARN of the user found with get-caller-identity>
  },
  "Action": "sts:AssumeRole"
}
To use your Task IAM Role in your docker compose file for local testing, simply change the value of the AWS container credentials relative URI environment variable on your application container:
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/role/"
For example, if your role is named ecs_task_role, then the environment variable should be set to "/role/ecs_task_role". That is all that is required; the ecs-local-endpoints container will now vend credentials obtained from assuming the task role. You can use this to validate that the permissions set on your Task IAM Role are sufficient to run your application.
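Back in the Spring Boot application itself, the download side is simple once credentials are in place, because the AWS SDK's default credential provider chain picks up the container credentials environment variable automatically. A minimal sketch with the v1 SDK (the bucket, key, and region are placeholders):
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public class S3Downloader {
    public static void main(String[] args) throws Exception {
        // credentials come from the default provider chain (env vars,
        // container credentials endpoint, instance profile, ...)
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        // stream the object straight to a local file
        try (S3Object object = s3.getObject("my-bucket", "path/to/file.txt")) {
            Files.copy(object.getObjectContent(),
                    new File("/tmp/file.txt").toPath(),
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }
}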

Using Hashicorp Packer with Vault Secret Engine KV2

I'm trying to use Packer with the Vault secret engine kv2, but so far I'm hitting an auth/permission error. I'm trying to read a secret from Vault, as shown in the examples. In my test.json file I have a variables object, and inside it I have access_key and secret_key keys. Each one of those contains a {{ vault ... }} call:
"variables": {
"access_key": "{{ vault `/secret/data/Foo/test` `access_key`}}",
"secret_key": "{{ vault `/secret/data/Foo/test` `secret_key`}}"
}
In Vault, I created a token (which I use with Packer) that has a policy like this:
path "secret/*" {
capabilities = ["list"]
}
path "secret/data/Foo/test" {
capabilities = ["read"]
}
According to the docs, this should be enough for Packer to be able to read the secret, but when I run Packer I get:
Error initializing core: error interpolating default value for 'access_key':
template: root:1:3: executing "root" at <vault `/secret/data/...>:
error calling vault: Error reading vault secret: Error making API request.
Permission denied.
URL: GET
https://vault.*******.com/v1/secret/data/Foo/test
Code: 403. Errors:
* 1 error occurred:
* permission denied
If I understand correctly, the cause of the problem is that the policy doesn't grant Packer enough permissions to read my secret. Am I right? If yes, how should I modify my policy?
Try something like this for your Packer token policy (don't forget to remake the token with the new policy, you can't update policies on preexisting tokens):
path "secret/*" {
capabilities = ["list"]
}
path "secret/data/Foo/*" {
capabilities = ["read"]
}
I've been learning Vault and have found that whenever I hardcode a policy path to one particular secret, I run into the same error. Hopefully this helps you out. This guide details how to use AppRole authentication with tokens; it may help.
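As a quick sketch of the token dance (the policy name and file are placeholders; Packer's vault template function reads VAULT_ADDR and VAULT_TOKEN from the environment):
# write the updated policy and mint a fresh token with it
vault policy write packer-policy packer-policy.hcl
vault token create -policy=packer-policy
# hand the new token to Packer via the environment
export VAULT_ADDR=https://vault.example.com
export VAULT_TOKEN=s.xxxxxxxx
packer build test.json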

Run BitTorrent Sync in API mode

I want to use the API of BitTorrent Sync. For this I first have to run it in API mode.
I was checking the "Enabling the API" section in the following link:
http://www.bittorrent.com/sync/developers/api
But I am unable to run it.
Can anybody please share some experience with it? I am new to it.
Here is what I execute in the command prompt:
C:\Program Files (x86)\BitTorrent Sync>btsync.exe /config D:\config.api
Any help would be greatly appreciated.
It was my mistake. This is the right way to run it:
BTSync.exe /config D:\config.api
The problem was with the config file. Here is the way it should be:
{
  // path to folder where Sync will store its internal data,
  // folder must exist on disk
  "storage_path" : "C:/Users/Folder1/btsync",

  // run Sync in GUI-less mode
  "use_gui" : false,

  "webui" : {
    // IP address and port to access HTTP API
    "listen" : "0.0.0.0:9090",

    // login and password for HTTP basic authentication
    // authentication is optional, but it's recommended to use some
    // secret values unique for each Sync installation
    "login" : "api",
    "password" : "secret",

    // replace xxx with API key received from BitTorrent
    "api_key" : "xxx"
  }
}
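Once Sync is running with this config, a quick way to check that the API is reachable is a basic-auth request from the command line; the credentials and port are the ones from the config above, and get_folders is one of the documented API methods:
curl -u api:secret "http://localhost:9090/api?method=get_folders"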