I have a weird issue with envFrom:
- name: template-api
  envFrom:
    - secretRef:
        name: common-secrets
    - secretRef:
        name: template-api
in common-secrets I have variables like this:
MAILHOST=smtp.gmail.com
MAILPORT=587
And template-api is like:
MAIL_HOST=$MAILHOST
MAIL_PORT=$MAILPORT
It's set up like that because the pods use different variable names for the same information.
But when the container is running, the variables contain the literal string $VAR instead of the variable's value.
Maybe I'm using the wrong solution for this. Has anybody faced the same issue?
Kubernetes won't expand it that way. The values from a secret are injected literally, so a reference like $MAILHOST is only resolved if your own code or script resolves it, e.g. by reading process.env.MAILHOST.
Whatever you have put in the secret gets injected into the container's environment as-is; if a variable with the same name already exists, it gets overwritten.
Kubernetes injects the secret according to the YAML configuration, either into the filesystem or into the environment, and it simply sets the values exactly as they appear in the secret. It does not check whether anything is already set in the environment, and it does not substitute references to other variables.
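One way to make the aliasing work is to let a shell inside the container do the expansion, since the shell does resolve $MAILHOST at runtime. A rough sketch (the image name and start command are placeholders, not taken from the original manifest):

- name: template-api
  image: example/template-api:latest   # placeholder image
  envFrom:
    - secretRef:
        name: common-secrets
  command: ["/bin/sh", "-c"]
  args:
    # /app/start stands in for whatever the image normally runs
    - 'export MAIL_HOST="$MAILHOST" MAIL_PORT="$MAILPORT"; exec /app/start'

Alternatively, keep only common-secrets and do the renaming inside the application itself.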
I'm trying to migrate a legacy app to a new .NET 6 version. The issue I have is that this app uses a 3rd-party library with keys that are looked up in the appsettings.json file.
Something like this (note the dots in the key):
{
  "one.special.key": "one value"
}
The issue I'm facing now is that my new app will be running inside a container, and the keys will be injected using environment variables. I don't think container environments (i.e. Linux) accept environment variable names with dots, only the single/double underscore convention, like one_special_key.
How can I override an appsettings.json key that has dots in it, like some.key.with.dots=hello, instead of the traditional some_key_without_dots=hello?
If I am not wrong, if you set an environment variable like one__special__key, the application will use that value instead of the value from the appsettings.json file.
From the documentation
Using the default configuration, the EnvironmentVariablesConfigurationProvider loads configuration from environment variable key-value pairs after reading appsettings.json, appsettings.{Environment}.json, and user secrets. Therefore, key values read from the environment override values read from appsettings.json, appsettings.{Environment}.json, and user secrets.
Found the answer in this k8s PR: https://github.com/kubernetes/kubernetes/pull/48986. The validation was made more permissive, and it now allows passing environment variable names that contain dots.
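On the Kubernetes side that means the flat, dotted key can be set directly as an environment variable name. A minimal sketch (only the env section of the container spec is shown):

env:
  - name: one.special.key   # dots in the name are accepted since the linked PR
    value: "one value"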
Currently deploying PostgREST in AWS. When I use Fargate and just type the connection string directly into the environment variables, the machine works like a charm.
However, I recently replaced these values with secrets. In the secret I copy-pasted the entire string as the value, and in the environment variable I changed the source from "Value" to "ValueFrom".
So the value now is:
postgres://<myuser>:<mypass>#<amazon-rds-instance>:5432/<db>
When I use this connectionstring directly in the environment variable I can easily connect, so I know the information is correct.
The logs come back with the following error:
{"details":"missing \"=\" after \"{\"postgrest_db_connection\":\"postgres://myuser:mypass#amazon-rds-instance:5432/db\"}\" in connection info string\n","code":"","message":"Database connection error"}
I also checked I have no characters in the string that need to be escaped. What can I be missing here?
So I figured it out. Unfortunately, this line from the documentation was the cause:
It is only supported to inject the full contents of a secret as an environment variable. Specifying a specific JSON key or version is not supported at this time.
This means that whenever you use a secret as the ValueFrom setting of an environment variable (when working with Fargate), the secret's entire value gets injected as-is.
I tested this using a secret for the PostgREST schema variable. I got back the value:
{'PGRST_SCHEMA_URL': 'public'}
Whilst I was expecting it to be just:
public
This is why the configuration went bad as well. Thanks everyone for searching.
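In other words, the workaround that follows from this is to store the raw connection string as the secret's plain value (not as a JSON key/value pair) and reference that secret from the task definition. A sketch of the container definition's secrets entry; the variable name and ARN are illustrative:

"secrets": [
  {
    "name": "PGRST_DB_CONNECTION",
    "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:postgrest-db-connection"
  }
]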
We want to have each of our terraform environments in a separate AWS account in a way that will make it hard for accidental deployments to production to occur. How is this best accomplished?
We are assuming that an account is dedicated to Production, another to PreProduction and potentially other sandbox environments also have unique accounts, perhaps on a per-admin basis. One other assumption is that you have an S3 bucket in each AWS account that is specific to your environment. Also, we expect your AWS account credentials to be managed in ~/.aws/credentials (or with an IAM role perhaps).
Terraform Backend Configuration
There are two kinds of state configuration to deal with. For the primary state we're using the concept of Partial Configuration. We can't pass variables into the backend config through modules or other means because the backend config is read before those are determined.
Terraform Config Setup
This means that we declare the backend with some details missing and then provide them as arguments to terraform init. Once initialized, it stays set up until the .terraform directory is removed.
terraform {
  backend "s3" {
    encrypt = true
    key     = "name/function/terraform.tfstate"
  }
}
Workflow Considerations
We only need to make a change to how we initialize. We use the -backend-config arguments on terraform init. This provides the missing parts of the configuration. I’m providing all of the missing parts through bash aliases in my ~/.bash_profile like this.
alias terrainit='terraform init \
-backend-config "bucket=s3-state-bucket-name" \
-backend-config "dynamodb_table=table-name" \
-backend-config "region=region-name"'
Accidental Misconfiguration Results
If the appropriate required -backend-config arguments are left off, initialization will prompt you for them. If one is provided incorrectly, it will likely cause failure for permissions reasons. Also, the remote state must be configured to match or it will also fail. Multiple mistakes in identifying the appropriate account environment must occur in order to deploy to Production.
Terraform Remote State
The next problem is that the remote states also need to change and can’t be configured through pulling configuration from the backend config; however, the remote states can be set through variables.
Module Setup
To ease switching accounts, we've set up a really simple module which takes in a single variable, aws-account, and returns a bunch of outputs that the remote state can use with appropriate values. We can also include other things that are environment/account specific. The module is a simple main.tf with map variables that are keyed by aws-account and have values specific to that account. Then we have a bunch of outputs that do a simple lookup of the map variable, like this.
variable "aws-region" {
description = "aws region for the environment"
type = "map"
default = {
Production = "us-west-2"
PP = "us-east-2"
}
}
output "aws-region" {
description = “The aws region for the account
value = "${lookup(var.aws-region, var.aws-account, "invalid AWS account specified")}"
}
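As an example of the other environment/account-specific outputs, the S3 state bucket referenced by the remote state below can be handled with a second map/output pair in the same style (the bucket names here are placeholders):

variable "s3-state-bucket-name" {
  description = "S3 state bucket for the environment"
  type        = "map"

  default = {
    Production = "production-state-bucket"    # placeholder
    PP         = "preproduction-state-bucket" # placeholder
  }
}

output "s3-state-bucket-name" {
  description = "The S3 state bucket for the account"
  value       = "${lookup(var.s3-state-bucket-name, var.aws-account, "invalid AWS account specified")}"
}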
Terraform Config Setup
First, we must pass the aws-account to the module. This will probably be near the top of main.tf.
module "environment" {
source = "./aws-account"
aws-account = "${var.aws-account}"
}
Then add a variable declaration to your variables.tf.
variable "aws-account" {
description = "The environment name used to identify appropriate AWS account resources used to configure remote states. Pre-Production should be identified by the string PP. Production should be identified by the string Production. Other values may be added for other accounts later."
}
Now that we have account specific variables output from the module, they can be used in the remote state declarations like this.
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
key = "name/vpc/terraform.tfstate"
region = "${module.environment.aws-region}"
bucket = "${module.environment.s3-state-bucket-name}"
}
}
Workflow Consideration
If the workflow isn't changed at all after setting things up like this, the user will be prompted for the value of the aws-account variable whenever a plan/apply or the like is performed, through a prompt like this. The contents of the prompt are the description of the variable in variables.tf.
$ terraform plan
var.aws-account
The environment name used to identify appropriate AWS account
resources used to configure remote states. Pre-Production should be
identified by the string PP. Production should be identified by the
string Production. Other values may be added for other accounts later.
Enter a value:
You can skip the prompt by providing the variable on the command line like this
terraform plan -var="aws-account=PP"
Accidental Misconfiguration Results
If the aws-account variable isn't specified, it will be requested. If an invalid value is provided that the aws-account module isn't aware of, it will return errors including the string "invalid AWS account specified" several times, because that is the default value of the lookup. If the aws-account is passed correctly but doesn't match up with the values identified in terraform init, it will fail because the AWS credentials being used won't have access to the S3 bucket being identified.
We faced a similar problem and we solved it (partially) by creating pipelines in Jenkins or any other CI tool.
We had 3 different envs (dev, staging and prod). Same code, different tfvars, different AWS accounts.
When Terraform code is merged to master it can be applied to staging, and only when staging is green can production be executed.
Nobody runs Terraform manually in prod; the AWS credentials are stored in the CI tool.
This setup can prevent an accident like the one you described, and it also prevents different users from applying different local code.
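A minimal sketch of that kind of pipeline (stage names, tfvars files, backend config files and the manual production gate are assumptions about a typical setup, not the exact one used):

pipeline {
  agent any
  stages {
    stage('Apply staging') {
      when { branch 'master' }
      steps {
        sh 'terraform init -backend-config=backend-staging.hcl'
        sh 'terraform apply -auto-approve -var-file=staging.tfvars'
      }
    }
    stage('Apply production') {
      when { branch 'master' }
      steps {
        // manual gate: production only runs once staging is green and approved
        input 'Staging is green - apply to production?'
        sh 'terraform init -reconfigure -backend-config=backend-prod.hcl'
        sh 'terraform apply -auto-approve -var-file=prod.tfvars'
      }
    }
  }
}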
I would like to know if there is any way to encrypt the server.ssl.key-store-password value and store it in the application.properties file instead of storing it in plain text.
I couldn't find any documentation on this. Any help is highly appreciated.
Thanks in advance.
Spring allows you to encrypt the properties file, but the key for that encryption needs to be kept somewhere. This answer suggests keeping such values in environment variables and points to a guide on how to encrypt them if you still want to.
You can use "jasypt-spring-boot-starter" for your need. All you to need to do are the following steps.
Download the "jasypt-spring-boot-starter" from maven central repo.
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>x.x.x</version>
</dependency>
In your Spring Boot start class, where the "@SpringBootApplication" annotation is located, just include "@EnableEncryptableProperties". A point to note here is that once you place the encryptable-properties annotation on the main start class, all the property files of your application will be loaded and scanned by the Jasypt module for any property value that is marked with "ENC".
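For reference, the main class then looks roughly like this (the class name is illustrative; the annotation comes from the jasypt-spring-boot package):

import com.ulisesbocchio.jasyptspringboot.annotation.EnableEncryptableProperties;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@EnableEncryptableProperties   // Jasypt will scan property values wrapped in ENC(...)
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}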
In your "application.properties" file there are few more configurations that needed to be added like below (all these are defaults and you can change these according to your requirement):
jasypt.encryptor.password=<Some password for encryption>
jasypt.encryptor.algorithm=PBEWITHHMACSHA256ANDAES_128
jasypt.encryptor.key-obtention-iterations=1000
jasypt.encryptor.pool-size=1
jasypt.encryptor.salt-generator-classname=org.jasypt.salt.RandomSaltGenerator
jasypt.encryptor.iv-generator-classname=org.jasypt.iv.RandomIvGenerator
jasypt.encryptor.string-output-type=base64
Once you are done with the above steps, you can place your encrypted property value inside ENC(). Jasypt will scan for values enclosed in ENC() and will try to decrypt them.
For e.g.
spring.datasource.password=ENC(tHe0atcRsE+uOTxt2GmFYPXNHREch9R/12qD082gw7vv6bby5Rk)
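To produce the encrypted value in the first place, one option is the companion jasypt-maven-plugin (assuming it is added to the build), and the encryptor password can then be supplied at runtime instead of being hard-coded in application.properties. A sketch:

# encrypt a value with the same password/algorithm configured above
mvn jasypt:encrypt-value \
    -Djasypt.encryptor.password="<Some password for encryption>" \
    -Djasypt.plugin.value="my-keystore-password"

# at runtime, keep the encryptor password out of the properties file
export JASYPT_ENCRYPTOR_PASSWORD="<Some password for encryption>"
java -jar app.jar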
This doesn't seem like a situation that is unique to me, but I haven't been able to find an answer anywhere.
I am attempting to build Jmeter scripts that can be executed both in the GUI and command line. The command line is going to need values to pass into the test cases, but the same test cases need to be executed via the GUI as well. I initially had separate scripts for GUI and command line, but it seemed redundant to have the same test cases duplicated with just a couple parameters changed.
For example, the GUI test case has the Web Server name set to:
<!-- ${ENV} set in User Defined Variables -->
<stringProp name="HTTPSampler.domain">${ENV}</stringProp>
The command line test case uses the following for parameters:
<!-- Define via command line w/ -JCMDENV -->
<stringProp name="HTTPSampler.domain">${__P(CMDENV)}</stringProp>
Both work for their intended purpose, but I want to combine the tests so that they are easier to maintain and can be run via either the GUI or the command line.
I got past one hurdle, which was combining the GUI Variables and the command-line Properties, by setting the User Defined Variable ${ENV} as the following:
Name Value
----- --------
ENV ${__P(ENV,dev.address.com)}
I am now able to run the same test case via GUI and command line (defining a new environment with -JENV)
I'm not sure if I'm overthinking this, but I want to be able to use a variable as the property default in order to avoid typos, etc. while handing it off to others. I tried a few variations that didn't seem to work:
Name Value
----- --------
ENV ${__P(ENV,${__V(DEV)})}
DEV dev.address.com
This gave me the following Request:
POST http://DEV/servlet
Instead of:
POST http://dev.address.com/servlet
I also tried using:
${__P(ENV,${DEV})}
${__property(ENV,,${__V(DEV)})}
${__property(ENV,,${DEV})}
I was looking into JMeter nested variables, but that didn't provide any working solutions.
So, to my main question: am I able to use variables as property defaults? If so, how would I achieve that?
I found a way around this. It's not exactly how I wanted it, but it could work for right now.
I really wanted to keep everything in one place where people had to make edits, but I was able to get the User Defined Variables to work by adding the ${__P(ENV,${DEV})} to the HTTP Request Defaults Web Server Name instead of pre-defining it as a variable.
Now there are two Config Elements that potentially need to be edited with GUI execution, but I think it should work out better in the long run.
Yes, it seems the author is right: it looks like a nested variable can't be evaluated in JMeter from within the same variables scope.
I've created a separate "User Defined Variables" set, added "defaultValue" there, and after that this option works:
${__P(myProperty, ${defaultValue})}
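Laid out in the same table form as above (the names are only illustrative, applied here to the question's ENV property), the working setup is two separate User Defined Variables elements:

First "User Defined Variables" element:
Name          Value
-----         --------
defaultValue  dev.address.com

Second "User Defined Variables" element:
Name   Value
-----  --------
ENV    ${__P(ENV,${defaultValue})}

Because the second element is processed after the first, it can reference variables the first one has already defined.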