I'm trying to use Packer with the Vault KV version 2 secrets engine, but so far I'm hitting an auth/permission error. I'm trying to read a secret from Vault, as shown in the examples. In my test.json file I have a variables object with access_key and secret_key keys, each containing a vault template function call:
"variables": {
"access_key": "{{ vault `/secret/data/Foo/test` `access_key`}}",
"secret_key": "{{ vault `/secret/data/Foo/test` `secret_key`}}"
}
In vault, I created a token (which I use with packer), and the token has a policy such that:
path "secret/*" {
capabilities = ["list"]
}
path "secret/data/Foo/test" {
capabilities = ["read"]
}
According to docs, this should be enough for packer to be able to read the secret, but when I run packer I get
Error initializing core: error interpolating default value for 'access_key':
template: root:1:3: executing "root" at <vault `/secret/data/...>:
error calling vault: Error reading vault secret: Error making API request.
Permission denied.
URL: GET
https://vault.*******.com/v1/secret/data/Foo/test
Code: 403. Errors:
* 1 error occurred:
* permission denied
If I understand correctly, the cause of the problem is the policy not granting enough permissions to packer in order to allow it to read my secret. Am I right? If "yes", how should I modify my policy?
Try something like this for your Packer token policy (don't forget to re-create the token with the new policy; you can't change the policies attached to a preexisting token):
path "secret/*" {
capabilities = ["list"]
}
path "secret/data/Foo/*" {
capabilities = ["read"]
}
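Since this is a KV v2 mount, it may also help to split the paths the way v2 expects: reads go through the data/ prefix, while listing (in the CLI and the UI) goes through the metadata/ prefix. A more explicit sketch of the same policy (assuming the kv2 engine is mounted at secret/):

```hcl
# Reads on a KV v2 mount go through the data/ prefix
path "secret/data/Foo/*" {
  capabilities = ["read"]
}

# Listing (`vault kv list` and the UI) uses the metadata/ prefix
path "secret/metadata/Foo/*" {
  capabilities = ["list"]
}
```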
I've been learning Vault and have found that whenever I hardcode a policy path to a particular secret, I run into the same error. Hopefully this helps you out. This guide details how to use AppRole authentication with tokens; it may help.
Related
I am using Vercel Deployments with a NextJS app. Deployments are automatically run when I push to master, but I don't want to store keys in GitHub. My serverless functions are reliant on my database. When run locally, I can simply use Google's Default Authentication Credentials, but this is not possible when deployed to Vercel. As such, I created a Service Account to enable the server to have access.
How do I load the service account credentials without pushing the key itself to GitHub?
I tried adding the key as described in this issue, but that didn't work.
AFAIK setting an environment variable in Vercel is not helpful because Google environment variables require a path/JSON file (vs. simply text).
Rather than using a path to a JSON file, you can create an object and include environment variables as the values for each object key. For example:
admin.initializeApp({
credential: admin.credential.cert({
client_email: process.env.FIREBASE_CLIENT_EMAIL,
private_key: process.env.FIREBASE_PRIVATE_KEY,
project_id: 'my-project'
}),
databaseURL: 'https://my-project.firebaseio.com'
});
Then, you can add the environment variables inside your project settings in Vercel.
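One gotcha worth hedging against (the env var name here matches the snippet above; adjust it to your own setup): when the private key is pasted into Vercel as a single line, the literal \n sequences need to be converted back into real newlines before the SDK can parse the PEM. A small sketch:

```javascript
// Sketch: env vars often store the PEM key with escaped "\n" sequences.
// Convert them back to real newlines before handing the key to the SDK.
function normalizePrivateKey(raw) {
  // Replace each literal backslash-n pair with an actual newline character.
  return raw.replace(/\\n/g, '\n');
}

// Example: a key as it might be stored in an env var (escaped newlines)
const stored = '-----BEGIN PRIVATE KEY-----\\nMIIE...\\n-----END PRIVATE KEY-----\\n';
console.log(normalizePrivateKey(stored).split('\n').length); // 4
```

You would then pass `normalizePrivateKey(process.env.FIREBASE_PRIVATE_KEY)` as the `private_key` value.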
Adding to @leerob's answer: I found that putting quotes around the FIREBASE_PRIVATE_KEY environment variable in my .env file fixed an error I kept getting about the PEM file when making a request. I didn't need any quotes around the key for calls to the standard firebase library, though.
This was the config I used to access the Google Cloud Storage API from my app:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({
  projectId: process.env.FIREBASE_PROJECT_ID,
  credentials: {
    client_email: process.env.FIREBASE_CLIENT_EMAIL,
    private_key: process.env.FIREBASE_PRIVATE_KEY_WITH_QUOTES
  }
});
I had this problem too, but with google-auth-library. Most of Google's libraries provide a way to add credentials through an options object that you pass when initializing. To get information from Google Sheets or Google Forms, for example, you can do this:
const auth = new GoogleAuth({
credentials:{
client_id: process.env.GOOGLE_CLIENT_ID,
client_email: process.env.GOOGLE_CLIENT_EMAIL,
project_id: process.env.GOOGLE_PROJECT_ID,
private_key: process.env.GOOGLE_PRIVATE_KEY
},
scopes: [
'https://www.googleapis.com/auth/someScopeHere',
'https://www.googleapis.com/auth/someOtherScopeHere'
]
});
You can just copy the info from your credentials.json file to the corresponding environment variables. Just take care: when you're working on localhost you will need to have the private_key in double quotes, but when you put it into Vercel you should not include the quotes.
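To make the copying less error-prone, here is a small sketch that turns the fields of a downloaded credentials.json object into .env-style lines (the env var names are the ones used above; adjust them to whatever your code reads):

```javascript
// Sketch: map service-account JSON fields onto the env vars used above.
function toEnvLines(creds) {
  return [
    `GOOGLE_CLIENT_ID=${creds.client_id}`,
    `GOOGLE_CLIENT_EMAIL=${creds.client_email}`,
    `GOOGLE_PROJECT_ID=${creds.project_id}`,
    // Double-quote the key locally so newlines survive in a .env file;
    // when pasting into Vercel, drop the quotes (see note above).
    `GOOGLE_PRIVATE_KEY="${creds.private_key}"`,
  ];
}

// Example with placeholder values from a credentials.json
const lines = toEnvLines({
  client_id: '123',
  client_email: 'sa@my-project.iam.gserviceaccount.com',
  project_id: 'my-project',
  private_key: '-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n',
});
console.log(lines[0]); // GOOGLE_CLIENT_ID=123
```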
I am trying to use Vault in my application. The authentication mechanism I am using is LDAP. I have done the configuration and my users are able to log in to Vault, but they are not able to see any secret engines that I created as the root user.
For example, I have enabled a secret engine secrets/kv and created 2 keys inside it. What I want is for my LDAP users to read/write secrets directly from the UI. My policy file looks like this -
path "secret/kv"
{
capabilities = ["read", "update", "list"]
}
path "auth/*"
{
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
And I issued the below command to write the data -
vault write auth/ldap/groups/ldap-group policies=my-policy
Still the users can't see the kv engine on the UI to read/write secrets.
Let me know if anyone can help me with this.
This policy should solve your issue. You don't need to prefix the path with secret.
path "kv/*"
{
capabilities = ["read", "update", "list"]
}
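One caveat, as an assumption about your setup: if the kv engine was enabled as KV version 2, the policy paths need the data/ and metadata/ prefixes rather than the bare mount path, for example:

```hcl
# KV v2: secret reads/writes go through the data/ prefix
path "kv/data/*" {
  capabilities = ["read", "update"]
}

# KV v2: listing (including in the UI) uses the metadata/ prefix
path "kv/metadata/*" {
  capabilities = ["list"]
}
```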
Vault version 1.5.2
My end goal is to use Vault in some Terraform code to retrieve temporary credentials. The issue is that Terraform will always generate a new child token, even if the current token is a 5-minute token. This means the current VAULT_TOKEN must be some sort of super root token, because I've tried logging in with the LDAP backend, and no matter which policies or token roles I try, it seems I can't ever generate new tokens.
To replicate what Terraform is doing:
vault login -address vault.example -ca-cert ca.pem -method ldap -path ldap_users user=botman
Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK
Password (will be hidden):
s.<token>
I have all of the permission as defined by the policies and everything seems fine.
Now try to create a child token:
vault token create -address vault.example -ca-cert ca.pem -role superrole
Error creating token: Error making API request.
URL: POST https://vault.example/v1/auth/token/create/superrole
Code: 400. Errors:
* restricted use token cannot generate child tokens
Remove the -role parameter and we get the same error.
I've tried looking everywhere to see what I'm missing, but the only token that can create child tokens seems to be the root token.
I apologize if I missed something very simple.
An example policy that I have attached (I've tried many policies, but this one seems the most extreme):
path "auth/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
Any help is appreciated.
EDIT:
I took the time to set up some AppRoles to test this out. I added the exact same policies as the LDAP backend. With the AppRole I can get a token and then create new tokens from the initial token. I took a look at the LDAP documentation to see if I missed something that says you can't create child tokens from tokens originating from LDAP, and I couldn't find anything: https://www.vaultproject.io/docs/auth/ldap.html
EDIT2:
Pulumi config for the LDAP Auth Backend
return vault.ldap.AuthBackend(
resource_name="vault-ldap-{}".format(ldap.name),
binddn=bind_dn,
bindpass=bind_pass,
certificate=cert,
description=ldap.desc,
discoverdn=False,
groupattr="cn",
groupdn=ldap.groupdn,
groupfilter="(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={{.UserDN}}))",
insecure_tls=False,
path="ldap_{}".format(ldap.name),
starttls=False,
tls_max_version="tls12",
tls_min_version="tls10",
token_explicit_max_ttl=14 * 60 * 60 * 24,
token_max_ttl=7 * 60 * 60 * 24,
token_num_uses=56,
url=url,
userattr="samaccountname",
userdn=ldap.userdn,
opts=opts,
)
I see your LDAP backend is setting token_num_uses. From this documentation and this discussion, having token_num_uses set to something non-zero will prevent the token from creating child tokens.
The reason it is broken for LDAP and working for AppRole is because the LDAP backend is applying the token_num_uses property, whereas I'm guessing the AppRole backend is not.
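If that is the cause, a minimal sketch of the fix (a config fragment based on the Pulumi code above) is to leave token_num_uses at its default of 0, which means unlimited uses:

```python
# Config fragment: same vault.ldap.AuthBackend as above, with
# token_num_uses left at the default of 0 (unlimited), so tokens
# issued by this backend are not "restricted use" tokens.
return vault.ldap.AuthBackend(
    resource_name="vault-ldap-{}".format(ldap.name),
    # ... other arguments unchanged ...
    token_num_uses=0,  # was 56; any non-zero value blocks child-token creation
    opts=opts,
)
```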
I just tried to test wolkenkit’s authentication with the chat template following the wolkenkit docs. User login seems to work, but the user is redirected to Auth0 even when they're already logged in (without the client calling the auth.login method).
Here’s a code snippet from the client:
wolkenkit.connect({
host: 'local.wolkenkit.io',
port: 3000,
authentication: new wolkenkit.authentication.OpenIdConnect({
identityProviderUrl: 'https://<myIdentity>.eu.auth0.com/authorize',
clientId: '<myClientID>',
strictMode: false
})
}).then(chat => {
console.log("chat.auth.isLoggedIn() = " + chat.auth.isLoggedIn());
console.log(chat.auth.getProfile());
if (!chat.auth.isLoggedIn()) {
return chat.auth.login();
}
});
In package.json, the identity provider is configured as follows:
"wolkenkit": {
"environments": {
"default": {
"identityProvider": {
"name": "https://<myIdentity>.eu.auth0.com/",
"certificate": "/server/keys/<myIdentity>.eu.auth0.com"
},...
Browser log after clearing cookies (I censored the provider identity and the object returned by chat.auth.getProfile()):
Navigated to http://local.wolkenkit.io:8080/
index.js:14 chat.auth.isLoggedIn() = false
index.js:15 undefined
Navigated to https://<myIdentity>.eu.auth0.com/login?client=<clientID>...
Navigated to http://local.wolkenkit.io:8080/
index.js:14 chat.auth.isLoggedIn() = true
index.js:15 {iss: "https://<myIdentity>.eu.auth0.com/", sub: "auth0|...", aud: "...", iat: ..., exp: ..., …}
Navigated to https://<myIdentity>.eu.auth0.com/login?client=<clientID>...
Being redirected although you configured authentication typically means that there is an error in the way the authentication is configured.
You might want to check these settings:
The token must be signed using RS256, not HS256 (which, for some accounts, seems to be the default of Auth0). To find out which signature algorithm is being used, get the token from the browser's local storage and paste it into the JWT debugger. Then you can see how the token was signed. If you need to change the signature algorithm, you can find this in the Advanced Settings of your client in Auth0.
Using the very same debugger you can also verify whether the token and the certificate you are using match each other. If they don't, you probably have copied the wrong certificate, or you have configured the path to point to a wrong certificate.
The certificate file must be named certificate.pem. If it has another name, or the path in package.json is incorrect, wolkenkit should not even start the application, but to be sure double-check that the file is named correctly.
In the package.json, besides the path to the certificate, you also have to provide the name of the identity provider you use, in your case this is https://<myIdentity>.eu.auth0.com/. Please note that this must exactly match the iss claim within the token. Often the claim in the token contains a trailing slash, while the value in package.json does not. If they differ, use the token's value in package.json.
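To check the first point without leaving your editor, you can also decode the token's header locally (a Node sketch using only the standard library; in the browser, use atob instead of Buffer):

```javascript
// Sketch: a JWT is three base64url-encoded parts separated by dots;
// the first part is the header, which includes the signing algorithm.
function jwtHeader(token) {
  const headerB64 = token.split('.')[0];
  return JSON.parse(Buffer.from(headerB64, 'base64').toString('utf8'));
}

// Example with a hand-made header (payload/signature omitted):
const header = Buffer.from(JSON.stringify({ alg: 'RS256', typ: 'JWT' })).toString('base64');
console.log(jwtHeader(`${header}.payload.signature`).alg); // RS256
```

If this prints HS256 for your real token, switch the signature algorithm in your Auth0 client's Advanced Settings as described above.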
Once you have made your changes, make sure to empty local storage, and to restart your wolkenkit application using the following command (otherwise your changes won't become effective):
$ wolkenkit restart
Hope this helps :-)
I want to use the API of BitTorrent Sync. For this I first have to run it in API mode.
I was checking the "Enabling the API" section in the following link:
http://www.bittorrent.com/sync/developers/api
But I am unable to run it.
Can anybody please share some experience with it? I am new to it.
Here is what I execute in the command prompt:
C:\Program Files (x86)\BitTorrent Sync>btsync.exe /config D:\config.api
Any help would be greatly appreciated.
It was my mistake. This is the right way to run it:
BTSync.exe /config D:\config.api
The problem was with the config file. Here is the way it should be:
{
// path to folder where Sync will store its internal data,
// folder must exist on disk
"storage_path" : "c://Users/Folder1/btsync",
// run Sync in GUI-less mode
"use_gui" : true,
"webui" : {
// IP address and port to access HTTP API
"listen" : "0.0.0.0:9090",
// login and password for HTTP basic authentication
// authentication is optional, but it's recommended to use some
// secret values unique for each Sync installation
"login" : "api",
"password" : "secret",
// replace xxx with API key received from BitTorrent
"api_key" : "xxx"
}
}