I can use the JWT auth method and obtain a token:
export VAULT_TOKEN=\
$(vault write -field=token auth/jwt/login role=$my_role_name jwt=$CI_JOB_JWT)
I can also source variables with vault kv get after performing the above.
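For example (a sketch — this assumes the same mount and secret path that the CI job below uses):
# assumes a KV engine mounted at $SECRET_MOUNT, holding gitlab-ci/TEST_SEC with a field named "value"
export TESTSECRET=$(vault kv get -field=value "$SECRET_MOUNT/gitlab-ci/TEST_SEC")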
However, I can't use GitLab's built-in method as described here:
https://docs.gitlab.com/ee/ci/secrets/index.html#use-vault-secrets-in-a-ci-job
test:
  stage: validate
  secrets:
    TESTSECRET:
      vault: gitlab-ci/TEST_SEC/value#$SECRET_MOUNT
  script:
    - echo $TESTSECRET
No errors returned, but the secret does not get sourced.
In CI variables I have:
VAULT_SERVER_URL: "http://myvaultserver.myvaultdomain.net:8200"
VAULT_AUTH_ROLE: "my_role_name"
I'm not sure whether gitlab-ci requires any more config to make this built-in method work, as the vault CLI method (within the CI job) works with no issues.
Did you try using the secret mount name directly in the vault keyword instead of reading it from the variable $SECRET_MOUNT?
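For instance (a sketch of the same job — my-kv-mount is a placeholder for your KV engine's actual mount path):
test:
  stage: validate
  secrets:
    TESTSECRET:
      # my-kv-mount is a placeholder; put the literal mount path of your KV engine here
      vault: gitlab-ci/TEST_SEC/value#my-kv-mount
  script:
    - echo $TESTSECRET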
I have a typescript/node-based application where the following line of code is throwing an error:
const res = await s3.getObject(obj).promise();
The error I'm getting in terminal output is:
❌ Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
However, I do actually have a credentials file in my .aws directory with values for aws_access_key_id and aws_secret_access_key. I have also exported the values for these as the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and I have tried this both with and without running export AWS_SDK_LOAD_CONFIG=1, to no avail (same error message). Would anyone be able to suggest possible causes or further troubleshooting steps?
Install dotenv: npm i dotenv
Add a .env file with your AWS_ACCESS_KEY_ID etc. credentials in it.
Then in your index.js or equivalent file add require("dotenv").config();
Then pass the credentials from the environment into the config of your AWS instance:
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  region: "eu-west-2", // example region
  maxRetries: 3,
  httpOptions: { timeout: 30000, connectTimeout: 5000 },
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});
Try not setting AWS_SDK_LOAD_CONFIG to anything (unset it). Unset all other AWS_* variables too. On Mac/Linux, you can run export | grep AWS_ to find others you might have set.
Next, do you have AWS connectivity from the command line? Install the AWS CLI v2 if you don't have it yet, and run aws sts get-caller-identity from a terminal window. Don't bother trying to run node until you get this working. You can also try aws configure list.
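If the CLI is set up correctly, aws sts get-caller-identity prints your identity as JSON, shaped roughly like this (placeholder values):
aws sts get-caller-identity
# {
#     "UserId": "AIDASAMPLEUSERID",
#     "Account": "123456789012",
#     "Arn": "arn:aws:iam::123456789012:user/your-user"
# }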
Read through all the sections of Configuring the AWS CLI, paying particular attention to how to use the credentials and config files at $HOME/.aws/credentials and $HOME/.aws/config. Are you using the default profile or a named profile?
I prefer to use named profiles, but I use more than one, so that may not be needed in your case. I have always found success using the AWS_PROFILE environment variable:
export AWS_PROFILE=your_profile_name # macOS/linux
setx AWS_PROFILE your_profile_name # Windows
$Env:AWS_PROFILE="your_profile_name" # PowerShell
This works for me both with an Okta/gimme-aws-creds scenario and with an Amazon SSO scenario. In the Okta scenario, just the AWS secret keys go into $HOME/.aws/credentials, and further configuration such as the default region or output format goes in $HOME/.aws/config (this separation exists so that tools can completely rewrite the credentials file without touching the config). In the Amazon SSO scenario, all the settings go in the config.
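As a sketch, that split looks like this (my-profile, the key values, and the region are placeholders):
# $HOME/.aws/credentials — secrets only, so tools can rewrite it safely
[my-profile]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey

# $HOME/.aws/config — note the "profile" prefix for named profiles
[profile my-profile]
region = eu-west-2
output = json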
I have 5 profiles for my Spring Boot application
application.yml
application-prod.yml
application-stg.yml
application-dev.yml
application-local.yml
One default config and 4 for different environments.
application.yml looks like this
spring:
  cloud:
    vault:
      enabled: ${VAULT_ENABLED:false}
      host: ${VAULT_HOST}
      port: ${VAULT_PORT}
      authentication: aws_iam
      aws-iam:
        role: ${VAULT_POLICY}
        server-name: ${VAULT_HOST}
      kv:
        backend: kv
        enabled: true
Some of the properties are provided by the host as environment variables.
To support local development I am overriding the authentication in the local profile like this:
spring:
  cloud:
    vault:
      enabled: true
      authentication: token
      token: ${VAULT_TOKEN}
Now the question is: how do I import the config correctly?
If I do spring.config.import: "vault:" in application.yml, it fails when running with the local profile: the ConfigData API tries to resolve the vault properties immediately after the default profile is processed, before the auth info has been loaded. And since the local profile is supposed to use a different auth method, it cannot access Vault and fails.
Another question is how to disable Vault in some cases. I could set spring.cloud.vault.enabled=false, but this again causes a failure, because ConfigData cannot resolve vault:.
Yes, I could use the legacy bootstrap mode, which would work fine for my scenario, but in the longer run that wouldn't be ideal...
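(For reference, my understanding is that the legacy behavior can be turned back on with a single property, or by adding the spring-cloud-starter-bootstrap dependency — treat the exact property as an assumption to verify against your Spring Boot version:)
spring:
  config:
    use-legacy-processing: true  # assumption: restores bootstrap-style processing on Spring Boot 2.4.x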
The only thing that comes to mind is to create an additional profile, e.g. vault, which would be loaded last. By enabling/disabling this profile I could control whether the config from Vault is imported or not...
Any other ideas?
We have the same problem, but we found a workaround: override the default import order of Spring Boot by also importing the profile-specific configuration files explicitly using spring.config.import in application.yml, like this:
spring:
  profiles:
    active: ${STAGE:local}
  config:
    import:
      - optional:classpath:application-${STAGE:local}.yml
      - vault://secret/our-secret
Note that the STAGE environment variable corresponds to the profile used per stage. We made the import of the profile-specific configuration file optional, as we don't have a dedicated file for every stage.
By providing the import for the profile-specific configuration files explicitly before the vault config, we can override the default vault settings before the vault is accessed.
Still, this approach feels a bit awkward, but it's the only way we have found so far to work around the issue, so better solutions would be appreciated.
I want to add the token that I generated with firebase-tools using firebase login:ci to GitLab CI. I went to Settings -> CI/CD -> Variables and added an environment variable with the key FIREBASE_TOKEN.
However I get:
Validation failed:
- Variables value is invalid.
The value I gave is the 25-digit key generated by the Firebase CLI as mentioned above.
What is wrong here, and what must I do?
I found the answer myself. I initially thought GitLab doesn't allow certain characters, such as - or /, in the value of environment variables, so I split the key into 2 variables.
EDIT #1: The real problem was that I had the 'Mask' option turned on and the token didn't satisfy GitLab's masking requirements. I turned it off and was able to store the whole key as a single variable. Voila!
I am trying to figure out how to manage secrets for different environments while creating serverless applications.
My serverless.yml file looks something like this:
provider:
  name: aws
  runtime: nodejs6.10
  stage: ${opt:stage}
  region: us-east-1
  environment:
    NODE_ENV: ${opt:stage}
    SOME_API_KEY: # this is different depending upon dev or prod

functions:
  ....
When deploying I use the following command:
serverless deploy --stage prod
I want the configuration information to be picked up from AWS parameter store as described here:
https://serverless.com/blog/serverless-secrets-api-keys/
However, I do not see a way to provide different keys for the development and prod environments.
Any suggestions?
I put prefixes in my Parameter Store variables.
e.g.
/common/SLACK_WEBHOOK
/development/MYSQL_PASSWORD
/production/MYSQL_PASSWORD
Then, in my serverless.yml, I can do...
...
environment:
  SLACK_WEBHOOK: ${ssm:/common/SLACK_WEBHOOK}
  MYSQL_PASSWORD: ${ssm:/${opt:stage}/MYSQL_PASSWORD}
You should be able to do something similar.
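The parameters themselves can be created with the AWS CLI, for instance (names and values here are illustrative — note the leading slash, which Parameter Store requires for hierarchical names):
aws ssm put-parameter --name "/common/SLACK_WEBHOOK" --value "https://hooks.slack.com/..." --type String
aws ssm put-parameter --name "/development/MYSQL_PASSWORD" --value "dev-password" --type SecureString
aws ssm put-parameter --name "/production/MYSQL_PASSWORD" --value "prod-password" --type SecureString
For SecureString parameters, older Serverless versions needed a ~true suffix (e.g. ${ssm:/production/MYSQL_PASSWORD~true}) to decrypt the value; newer versions decrypt automatically.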
I would like to use a custom module for which I need the "hostname", so that I can initiate an SSH connection from the custom module and run commands. So I pass transport="local" to the Runner object. However, I find no way to obtain the "hostname" information inside the custom module.
I am using Ansible 1.9.2 via the Python API.
A module only has the information available that was explicitly passed to it. What you might be interested in instead is an action plugin, which by (non-existing) definition runs locally on the control machine and has access to more (all?) data.
You can see some action plugin code here: https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/action
PS: Don't you want to upgrade to Ansible 2 before you start writing custom modules/plugins? The API changed completely, and once you upgrade you will have to rewrite your module/plugin.
Okay, silly me. It works exactly the same way in the API too. You can extract the hostname using {{ inventory_hostname }}.
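For example, a minimal task that hands the current host's name to a (hypothetical) custom module so the module can open its own SSH connection:
# my_custom_module is a placeholder for your module's name;
# inventory_hostname resolves to the host Ansible is currently iterating over
- name: run custom module against the current host
  my_custom_module: hostname={{ inventory_hostname }}
  connection: local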