Using Spring Cloud Vault and ConfigData API with multiple profile files - spring-vault

I have 5 profiles for my Spring Boot application:
application.yml
application-prod.yml
application-stg.yml
application-dev.yml
application-local.yml
One default config and 4 for different environments.
application.yml looks like this:
spring:
  cloud:
    vault:
      enabled: ${VAULT_ENABLED:false}
      host: ${VAULT_HOST}
      port: ${VAULT_PORT}
      authentication: aws_iam
      aws-iam:
        role: ${VAULT_POLICY}
        server-name: ${VAULT_HOST}
      kv:
        backend: kv
        enabled: true
Some of the properties are provided by the host in the environment variables.
To support local development, I am overriding authentication in the local profile like this:
spring:
  cloud:
    vault:
      enabled: true
      authentication: token
      token: ${VAULT_TOKEN}
Now the question is: how do I import the config correctly?
If I put spring.config.import: "vault:" in application.yml, it fails when running with the local profile: the ConfigData API tries to resolve the vault: location immediately after the default profile document is processed, before the profile-specific auth overrides are loaded. Since the local profile is supposed to use a different auth method, it cannot access Vault and startup fails.
Another question is how to disable Vault entirely in some cases. I could set spring.cloud.vault.enabled=false, but this again causes a failure, as the ConfigData API then cannot resolve vault:.
Yes, I could use the legacy bootstrap mode, which would work fine for my scenario, but in the longer run that wouldn't be ideal...
The only thing that comes to mind is to create an additional profile, e.g. vault, that would be loaded as the last one. By enabling/disabling this profile I could control whether the config from Vault is imported or not...
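A minimal sketch of that idea (the file name and content are my own illustration, not a tested setup): a profile-specific file that carries only the import, so Vault is contacted only when the vault profile is active.
# application-vault.yml - processed only when the "vault" profile is active
spring:
  config:
    import: "vault:"
Activating it last, e.g. --spring.profiles.active=local,vault, would let all auth overrides load before the import is resolved.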
Any other ideas?

We have the same problem, but we found a workaround: override Spring Boot's default import order by also importing the profile-specific configuration files explicitly, using spring.config.import in application.yml like this:
spring:
  profiles:
    active: ${STAGE:local}
  config:
    import:
      - optional:classpath:application-${STAGE:local}.yml
      - vault://secret/our-secret
Note that the STAGE environment variable corresponds to the profile used per stage. We made the import of the profile-specific configuration file optional, as we don't have a dedicated file for every stage.
By providing the import for the profile-specific configuration files explicitly before the vault config, we can override the default vault settings before the vault is accessed.
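For example (assuming a standard executable jar), starting the prod stage with
STAGE=prod java -jar app.jar
activates the prod profile, imports application-prod.yml explicitly, and only then resolves the vault:// location.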
Still, this approach feels a bit awkward, but it's the only way we've found so far to work around the issue, so better solutions would be appreciated.

Related

getting Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1 despite having credentials in config file

I have a typescript/node-based application where the following line of code is throwing an error:
const res = await s3.getObject(obj).promise();
The error I'm getting in terminal output is:
❌ Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
However, I do actually have a credentials file in my .aws directory with values for aws_access_key_id and aws_secret_access_key. I have also exported the values for these as the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and I have tried this both with and without running export AWS_SDK_LOAD_CONFIG=1, but to no avail (same error message). Would anyone be able to provide any possible causes or suggestions for further troubleshooting?
Install dotenv:
npm i dotenv
Add a .env file with your AWS_ACCESS_KEY_ID etc. credentials in it.
Then in your index.js or equivalent entry file add require("dotenv").config();
Then update the config of your AWS instance (the snippet below assumes an S3 client; the original post omitted the opening line):
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
  region: "eu-west-2",
  maxRetries: 3,
  httpOptions: { timeout: 30000, connectTimeout: 5000 },
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});
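The .env file mentioned above would then hold the keys (placeholder values):
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key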
Try not setting AWS_SDK_LOAD_CONFIG to anything (unset it). Unset all other AWS variables. In Mac/linux, you can do export | grep AWS_ to find others you might have set.
Next, do you have AWS connectivity from the command line? Install the AWS CLI v2 if you don't have it yet, and run aws sts get-caller-identity from a terminal window. Don't bother trying to run node until you get this working. You can also try aws configure list.
Read through all the sections of Configuring the AWS CLI, paying particular attention to how to use the credentials and config files at $HOME/.aws/credentials and $HOME/.aws/config. Are you using the default profile or a named profile?
I prefer to use named profiles, though I use more than one, so that may not be needed for you. I have always found success using the AWS_PROFILE environment variable:
export AWS_PROFILE=your_profile_name # macOS/linux
setx AWS_PROFILE your_profile_name # Windows
$Env:AWS_PROFILE="your_profile_name" # PowerShell
This works for me both with an Okta/gimme-aws-creds scenario, as well as an Amazon SSO scenario. With the Okta scenario, just the AWS secret keys go into $HOME/.aws/credentials, and further configuration such as default region or output format go in $HOME/.aws/config (this separation is so that tools can completely rewrite the credentials file without touching the config). With the Amazon SSO scenario, all the settings go in the config.
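For example, with a named profile, the two files could look like this (profile name, keys, and region are placeholders; note the profile prefix appears in the config file but not in the credentials file):
# ~/.aws/credentials
[your_profile_name]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[profile your_profile_name]
region = us-east-1
output = json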

Spinnaker - SQL backend for front50

I am trying to set up the SQL backend for front50 using the document below:
https://www.spinnaker.io/setup/productionize/persistence/front50-sql/
I have front50-local.yaml for the MySQL config.
But I am not sure how to disable persistent storage in the halyard config. I cannot completely remove the persistentStorage section, and persistentStoreType must be one of a3, azs, gcs, redis, s3, oracle.
There is no option to disable persistent storage here:
persistentStorage:
  persistentStoreType: s3
  azs: {}
  gcs:
    rootFolder: front50
  redis: {}
  s3:
    bucket: spinnaker
    rootFolder: front50
    maxKeys: 1000
  oracle: {}
So within your front50-local.yaml you will want to disable the storage service you previously used, e.g.:
spinnaker:
  gcs:
    enabled: false
  s3:
    enabled: false
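For completeness, the MySQL side of front50-local.yaml per the linked doc looks roughly like this (the JDBC URL, users, and password are placeholders; double-check the exact keys against the doc for your Spinnaker version):
sql:
  enabled: true
  connectionPools:
    default:
      default: true
      jdbcUrl: jdbc:mysql://localhost:3306/front50
      user: front50_service
      password: <front50-service-password>
  migration:
    user: front50_migrate
    jdbcUrl: jdbc:mysql://localhost:3306/front50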
You may need/want to remove the section from your halconfig and run your apply with
hal deploy apply --no-validate
There are a number of users dealing with these same issues and some more help might be found on the Slack: https://join.spinnaker.io/
I've noticed the same issue just recently. Maybe this is because, for example, Kayenta (which is an optional component to enable) still lacks non-object-storage persistence support, or...
I've created a GitHub issue on this here: https://github.com/spinnaker/spinnaker/issues/5447

Spring Cloud Config Basic Security throwing 401 error

I have the following configuration on the server side:
server:
  port: 8888

spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          search-locations: "classpath:/config"
  security:
    user:
      name: test
      password: test
And the following configuration on the client side:
spring:
  cloud:
    config:
      fail-fast: true
      profile: "${spring.profiles.active}"
      uri: "${SPRING_CLOUD_CONFIG_URI:http://localhost:8888/}"
      username: test
      password: test
I can successfully access the properties from a browser using test/test as user/pwd, but when my client tries to fetch them, it fails with a 401 error:
INFO 7620 --- [5cee934b64bfd92] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
WARN 7620 --- [5cee934b64bfd92] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: 401 null
I tried setting the log level for Spring Cloud to DEBUG, but nothing additional got logged, so I have no clue why I'm getting a 401 from the client while I can access the properties successfully via the browser using the same credentials.
I've also tried removing the security from the server and the client, and it worked perfectly, which means the rest of the configuration is fine. But then the question is: what am I overlooking when I apply basic security, and why does it fail with a 401?
Try checking these configurations:
spring.cloud.config.username
spring.cloud.config.password
Both properties should be defined in bootstrap.properties (not application.properties).
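For example, a minimal bootstrap.properties with the values from the question:
spring.cloud.config.uri=${SPRING_CLOUD_CONFIG_URI:http://localhost:8888/}
spring.cloud.config.username=test
spring.cloud.config.password=test
spring.cloud.config.fail-fast=true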
Please check that the way you are specifying the profile name is correct and that it gets resolved properly at runtime. You can implement CommandLineRunner and print the active profiles from the Environment.
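For example, a minimal runner (the class name is arbitrary):
import java.util.Arrays;

import org.springframework.boot.CommandLineRunner;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

@Component
public class ActiveProfilesPrinter implements CommandLineRunner {

    private final Environment environment;

    public ActiveProfilesPrinter(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void run(String... args) {
        // Empty output means the application fell back to the default profile.
        System.out.println("Active profiles: " + Arrays.toString(environment.getActiveProfiles()));
    }
}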
If you specified the spring.profiles.active property as native in pom.xml, you can resolve it in the application properties/yaml file as #spring.profiles.active# (via Maven resource filtering).
If you specified the profile as a VM argument, then it should work with the current implementation.
If you did not specify spring.profiles.active in the pom or as a VM argument, it will resolve to the default profile, not the native profile. The profile in the config client and the config server should be the same.
Yes, we have to use bootstrap properties, because when a Spring Cloud application starts, it creates a bootstrap context. The bootstrap context searches for a bootstrap.properties or bootstrap.yaml file, whereas the application context searches for an application.properties or application.yaml file. The bootstrap context is the parent context of the main application.
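One caveat: on newer Spring Cloud versions (2020.0 and later) the bootstrap context is disabled by default, so bootstrap.properties is only picked up if you add the bootstrap starter (or set spring.cloud.bootstrap.enabled=true), e.g. in Maven:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>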

How to set proxy in keystone js

I am new to KeystoneJS, and I have observed there is not much support available on Google for it.
I am uploading files to S3 on AWS, but I am facing a timeout issue. I figured out that I need to set a proxy for it.
But I don't know how to set a proxy in Keystone; I searched its site but found nothing.
Note: I am using keystone.Storage and keystone-storage-adapter-s3.
The keystone-storage-adapter-s3 package you are using is built on knox, and knox's proxy support was added in pull request 137.
After making sure you have an updated knox, you can add your proxy option to the adapter config:
var storage = new keystone.Storage({
  adapter: require('keystone-storage-adapter-s3'),
  s3: {
    key: 's3-key', // required; defaults to process.env.S3_KEY
    secret: 'secret', // required; defaults to process.env.S3_SECRET
    bucket: 'mybucket', // required; defaults to process.env.S3_BUCKET
    proxy: '<your proxy>',
    // ...
  },
});
More information:
Setting proxy server for connections in Knox
https://www.npmjs.com/package/keystone-storage-adapter-s3
https://github.com/keystonejs/keystone-storage-adapter-s3/blob/master/index.js
Note: I have not tried this myself; I am just providing the information based on a source-code trace. Good luck.

Turn off firewall when developing in early stages with Symfony2?

Following this tutorial, I'm developing a web application using Symfony's authentication/authorization architecture.
After designing the whole structure (routes, pages and security levels), I'm stuck: how can I develop my pages without entering credentials all the time? Is there any way to disable or turn off the entire firewall functionality? Should I use data fixtures?
In your app/config/security.yml file, under the firewalls config option, add or modify the dev entry:
firewalls:
  dev:
    pattern: ^/
    security: false
Note that the security.firewalls.dev configuration is used in every Symfony environment (dev, test, prod)!
In Symfony 4, to achieve disabling firewalls for all routes in just dev environment, you could do something like this:
Setup:
config/packages/security.yaml:
parameters:
  # Adds a fallback SECURITY_DEV_PATTERN if the env var is not set.
  env(SECURITY_DEV_PATTERN): '^/(_(profiler|wdt)|css|images|js)/'

security:
  firewalls:
    dev:
      pattern: '%env(SECURITY_DEV_PATTERN)%'
      security: false
Override per Symfony environment:
create a new file config/packages/dev/parameters.yaml:
parameters:
  env(SECURITY_DEV_PATTERN): '^/'
Now all routes are reachable without the firewall in the Symfony dev environment.
Override using environment variables:
You could also override SECURITY_DEV_PATTERN in the .env file:
SECURITY_DEV_PATTERN=^/
This only works if you don't include the .env in your production environment, or if you specifically override the SECURITY_DEV_PATTERN environment variable there as well.