I have a {{- if .Values.canary.enable }} condition on my second Kubernetes deployment resource. Deploying the chart with canary.enable: false doesn't create deployment-v2 at all (which is what I expected).
However, deploying with Spinnaker does create deployment-v2, although I have set canary.enable: false in the override values.
Why is this happening?
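For reference, the second deployment is gated roughly like the sketch below; only the {{- if }} wrapper comes from the question, the apiVersion and spec contents are assumptions.
{{- if .Values.canary.enable }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-v2
spec:
  # ... the canary (v2) pod template, replicas and selectors ...
{{- end }}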
I have a weird issue with envFrom:
- name: template-api
  envFrom:
    - secretRef:
        name: common-secrets
    - secretRef:
        name: template-api
In common-secrets I have variables like this:
MAILHOST=smtp.gmail.com
MAILPORT=587
And template-api is like:
MAIL_HOST=$MAILHOST
MAIL_PORT=$MAILPORT
It is set up like that because different pods use different variable names for the same information.
But when the container is running, the variables contain the literal string $VAR instead of the variable's value.
Maybe I'm using the wrong approach for this. Has anybody faced the same issue?
Kubernetes won't expand the variables that way. The expansion has to happen in your own code or in a script; inside the code it works, e.g. by reading process.env.MAILHOST at runtime.
Whatever you define in the secret gets injected into the container's environment as-is; if a variable with the same name is already there, it gets overwritten.
Kubernetes injects the secret, based on the YAML configuration, either into the file system or into the environment.
Kubernetes simply injects the values into the pod exactly as they are set in the secret; it does not check for other environment variables or interpolate references to them.
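If the indirection really has to be resolved inside the pod, a minimal sketch (assuming the image contains a shell; the image name and the final command are placeholders) is to expand the variables in the container command at startup:
- name: template-api
  image: registry.example.com/template-api:latest   # placeholder image
  envFrom:
    - secretRef:
        name: common-secrets
  command: ["/bin/sh", "-c"]
  args:
    # expand the shared names into the names this pod expects, then start
    # the real process (the node command is a placeholder)
    - export MAIL_HOST="$MAILHOST" MAIL_PORT="$MAILPORT" && exec node server.js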
Short Version
Please see the very bottom for a short version of the question.
Situation
In a question I asked last week, I struggled to find a solution that makes our ASP.NET error visualization waterproof, since there are some edge cases where the ASP.NET exception handling fails and hence no proper exception visualization can be created:
How to properly set up ASP.NET web.config to show application specific, safe and user friendly asp.net error messages in edge cases
Desired Solution
As an alternative to the way I described there, in my opinion the best way to make the exception visualization reliable would be to use the httpErrors element in system.webServer as a failsafe, so that any error which is not properly handled by ASP.NET leads to a generic error page shown based on the settings of the httpErrors element.
To accomplish this, two things must be possible:
Error pages properly handled by the application must pass through IIS without being replaced with a generic error message.
Errors which could not be processed properly in ASP.NET must be replaced by IIS.
It is my understanding that this is exactly the behaviour of the existingResponse="Auto" setting on the httpErrors element.
The MS documentation states:
Leaves the response untouched only if the SetStatus flag is set.
This is exactly what is necessary: any successful error page creation in the application (through Application_Error or through an explicitly defined error handling page) can set Response.TrySkipIisCustomErrors = true, and IIS would let the error page pass through. However, every other error which was not successfully handled by the application in ASP.NET would not set the flag and would hence get the error page which is specified in the httpErrors element.
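As a hedged illustration of that interplay, a minimal Global.asax sketch could look like this; RenderErrorPage is a hypothetical helper that builds the support-code error page.
// Global.asax.cs - minimal sketch; RenderErrorPage is a hypothetical helper
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    Server.ClearError();

    Response.Clear();
    Response.StatusCode = 500;

    // Signals IIS (with existingResponse="Auto") to leave this response untouched.
    Response.TrySkipIisCustomErrors = true;

    Response.Write(RenderErrorPage(ex)); // hypothetical: renders the page incl. support code
}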
The Problem
Sadly, in MVC5 applications (I don't know whether the same behavior holds in other environments), Response.TrySkipIisCustomErrors (fTrySkipCustomErrors) seems to be set automatically to [true], even if it is not set by the application.
Hence we are at the same place as in my other post: if the error handling of the application blows up, there is no way to show an application-specific error with existingResponse="Auto", since it's not possible to reset the fTrySkipCustomErrors flag.
As an alternative, one can set existingResponse="PassThrough". That's what we do currently, since we want to generate our error pages with a support code and other helpful information about the error to be shown to the user. One could also use existingResponse="Replace", but that is not an option, since it replaces every error page, so we cannot show the user any error-specific information such as the support code mentioned before.
Question in Short
The question is therefore how to make sure that MVC5 (ASP.NET) does not automatically set the fTrySkipCustomErrors flag to [true], since there are situations where no application code is executed and hence Response.TrySkipIisCustomErrors (fTrySkipCustomErrors) cannot be set to false, which renders the existingResponse="Auto" parameter moot.
To reproduce such a situation, where the ASP.NET MVC5 exception handling blows up but the fTrySkipCustomErrors flag is set to true, please request the following page from your MVC5 application:
http[s]://[yourWebsite]/com1
Please note: I'm not interested in disabling the above error. It's just an example. I want the error visualization to be reliable, without having to circumvent every error that can possibly blow up ASP.NET's error handling mechanisms.
I moved from EhCache to Infinispan, and I have a requirement to be able to toggle caches (not only globally but also per cache name).
With EhCache there was an option, setDisable(Boolean), to disable a cache.
I would like to achieve something similar with Infinispan; however, I don't want to change my app code. I mean, I don't want something like
if cache is enabled
...
else
...
I'm looking for something like a no-op cache operation, where calling put(key, object) does nothing at all (not only no storing, but also no serialization, no computation), and the same for the other methods.
Since I'm using the Spring integration, I was thinking about using CompositeCacheManager with a NoOpCacheManager fallback, but the existing Spring integration uses dynamic cache creation, so getCache(String) never returns null (thus the NoOpCacheManager is never used).
Set the property infinispan.embedded.cache.enabled to false
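If a per-cache-name toggle is still needed without touching application code, a hedged sketch (using only Spring's cache abstraction; where the disabled names come from is a placeholder) is a decorating CacheManager that hands out no-op caches for disabled names:
import java.util.Collection;
import java.util.Set;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.support.NoOpCacheManager;

// Sketch: wraps the real (Infinispan-backed) CacheManager and returns a
// no-op cache for names that are currently disabled. The disabled set is
// a placeholder; it could be fed from a config property or an admin endpoint.
public class ToggleableCacheManager implements CacheManager {

    private final CacheManager delegate;          // the Infinispan-backed manager
    private final CacheManager noOp = new NoOpCacheManager();
    private final Set<String> disabledCaches;     // hypothetical toggle source

    public ToggleableCacheManager(CacheManager delegate, Set<String> disabledCaches) {
        this.delegate = delegate;
        this.disabledCaches = disabledCaches;
    }

    @Override
    public Cache getCache(String name) {
        // NoOpCacheManager.getCache never returns null; its caches accept
        // put/evict/clear and always miss on get, with no serialization.
        return disabledCaches.contains(name) ? noOp.getCache(name) : delegate.getCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        return delegate.getCacheNames();
    }
}
With the default cache resolver the caches are looked up per cacheable invocation, so toggling a name in the set should take effect on subsequent calls.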
We want to have each of our terraform environments in a separate AWS account in a way that will make it hard for accidental deployments to production to occur. How is this best accomplished?
We are assuming that an account is dedicated to Production, another to PreProduction and potentially other sandbox environments also have unique accounts, perhaps on a per-admin basis. One other assumption is that you have an S3 bucket in each AWS account that is specific to your environment. Also, we expect your AWS account credentials to be managed in ~/.aws/credentials (or with an IAM role perhaps).
Terraform Backend Configuration
There are two states. For the primary state we’re using the concept of Partial Configuration. We can’t pass variables into the backend config through modules or other means because it is read before those are determined.
Terraform Config Setup
This means that we declare the backend with some details missing and then provide them as arguments to terraform init. Once initialized, it is set up until the .terraform directory is removed.
terraform {
  backend "s3" {
    encrypt = true
    key     = "name/function/terraform.tfstate"
  }
}
Workflow Considerations
We only need to make a change to how we initialize. We use the -backend-config arguments on terraform init. This provides the missing parts of the configuration. I’m providing all of the missing parts through bash aliases in my ~/.bash_profile like this.
alias terrainit='terraform init \
-backend-config "bucket=s3-state-bucket-name" \
-backend-config "dynamodb_table=table-name" \
-backend-config "region=region-name"'
Accidental Misconfiguration Results
If the required -backend-config arguments are left off, initialization will prompt you for them. If one is provided incorrectly, it will likely cause a failure for permissions reasons. Also, the remote state must be configured to match or it will also fail. Multiple mistakes in identifying the appropriate account environment must occur in order to deploy to Production.
Terraform Remote State
The next problem is that the remote states also need to change and can’t be configured through pulling configuration from the backend config; however, the remote states can be set through variables.
Module Setup
To ease switching accounts, we've set up a really simple module which takes in a single variable, aws-account, and returns a bunch of outputs that the remote state can use with appropriate values. We can also include other things that are environment/account specific. The module is a simple main.tf with map variables that have a key of aws-account and a value that is specific to that account. Then we have a bunch of outputs that do a simple lookup of the map variable like this.
variable "aws-region" {
description = "aws region for the environment"
type = "map"
default = {
Production = "us-west-2"
PP = "us-east-2"
}
}
output "aws-region" {
description = “The aws region for the account
value = "${lookup(var.aws-region, var.aws-account, "invalid AWS account specified")}"
}
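The remote state configuration further down also references module.environment.s3-state-bucket-name; a hedged sketch of that companion map and lookup (the bucket names are placeholders) looks just like the region example:
variable "s3-state-bucket-name" {
  description = "S3 state bucket for the environment"
  type        = "map"

  default = {
    Production = "prod-terraform-state"     # placeholder bucket name
    PP         = "preprod-terraform-state"  # placeholder bucket name
  }
}

output "s3-state-bucket-name" {
  description = "The S3 state bucket for the account"
  value       = "${lookup(var.s3-state-bucket-name, var.aws-account, "invalid AWS account specified")}"
}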
Terraform Config Setup
First, we must pass the aws-account to the module. This will probably be near the top of main.tf.
module "environment" {
source = "./aws-account"
aws-account = "${var.aws-account}"
}
Then add a variable declaration to your variables.tf.
variable "aws-account" {
description = "The environment name used to identify appropriate AWS account resources used to configure remote states. Pre-Production should be identified by the string PP. Production should be identified by the string Production. Other values may be added for other accounts later."
}
Now that we have account specific variables output from the module, they can be used in the remote state declarations like this.
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
key = "name/vpc/terraform.tfstate"
region = "${module.environment.aws-region}"
bucket = "${module.environment.s3-state-bucket-name}"
}
}
Workflow Consideration
If the workflow does not change at all after setting things up like this, the user will be prompted for the value of the aws-account variable whenever a plan/apply or the like is performed, through a prompt like the one below. The contents of the prompt are the description of the variable in variables.tf.
$ terraform plan
var.aws-account
The environment name used to identify appropriate AWS account
resources used to configure remote states. Pre-Production should be
identified by the string PP. Production should be identified by the
string Production. Other values may be added for other accounts later.
Enter a value:
You can skip the prompt by providing the variable on the command line like this
terraform plan -var="aws-account=PP"
Accidental Misconfiguration Results
If the aws-account variable isn't specified, it will be requested. If an invalid value is provided that the aws-account module isn't aware of, it will return errors including the string "invalid AWS account specified" several times, because that is the default value of the lookup. If the aws-account is passed correctly but doesn't match up with the values identified in terraform init, it will fail because the AWS credentials being used won't have access to the S3 bucket being identified.
We faced a similar problem and we solved it (partially) by creating pipelines in Jenkins or any other CI tool.
We had 3 different envs (dev, staging and prod). Same code, different tfvars, different AWS accounts.
When Terraform code is merged to master it can be applied to staging, and only when staging is green can production be applied.
Nobody runs Terraform manually in prod; the AWS credentials are stored in the CI tool.
This setup can prevent an accident like the one you described, and it also prevents different users from applying different local code.
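A hedged sketch of what such a pipeline could look like as a declarative Jenkinsfile; the stage layout, file names and the manual gate are assumptions, not the poster's actual setup:
pipeline {
  agent any
  stages {
    stage('Apply staging') {
      steps {
        // staging AWS credentials live in the CI tool, not on anyone's machine
        sh 'terraform init -backend-config=staging.backend.tfvars'
        sh 'terraform apply -var-file=staging.tfvars -auto-approve'
      }
    }
    stage('Apply production') {
      when { branch 'master' }
      steps {
        // manual gate: production only after staging is green
        input message: 'Staging is green - apply to production?'
        sh 'terraform init -reconfigure -backend-config=prod.backend.tfvars'
        sh 'terraform apply -var-file=prod.tfvars -auto-approve'
      }
    }
  }
}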
I need to update JSF 2.0 (PrimeFaces) tooltips dynamically, without a server restart.
Meaning I need to find a way in which the tooltips (currently read from a properties file) of a running application can be changed without requiring a server restart.
We are running WebSphere and deploying a non-exploded EAR (I can probably convince the team to deploy an exploded WAR).
Any ideas or tips, please. Thank you.
The value attribute of the p:toolTip component must be an EL expression or literal text. Usually, one would reference a resource bundle, declared using the var attribute of the f:loadBundle tag, in the EL expression for the tooltip.
The underlying resource bundle declared using the basename attribute could be backed by a property file itself (in which case you need to place the property file in the appropriate directory on the classpath), or it could be a custom ResourceBundle implementation that reads from a properties file (located outside the container), a database, or any other store.
You could therefore change your existing declaration from the one defined as:
<f:loadBundle var="msg" basename="propfile_location" />
to
<f:loadBundle var="msg" basename="fully qualified class name of the ResourceBundle class" />
In simpler words, you will need to roll your own ResourceBundle class(es) to support the various locales. Needless to state, you will need to override the protected ResourceBundle.handleGetObject(java.lang.String) method (the final getObject method delegates to it), since getObject is what the ResourceBundleELResolver implementation invokes when evaluating EL expressions that reference resource bundles.
Additionally, you will need to ensure that your handleGetObject(java.lang.String) implementation always re-fetches and returns the value corresponding to the provided key. Failure to ensure this would mean that the initial value fetched by the resource bundle may be returned on subsequent invocations, especially if you are caching the initial value. You are likely to encounter this behavior even if you deploy an exploded WAR file where you can modify the property file contents without redeploying the application, which is why it is important to use a custom ResourceBundle implementation that does not cache values.
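As a hedged sketch of such a non-caching bundle (the external file path is a placeholder; a real implementation would add locale variants, logging and perhaps a modification-time check instead of re-reading on every call):
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Enumeration;
import java.util.Properties;
import java.util.ResourceBundle;

// Every lookup re-reads an external properties file, so edited tooltip
// texts show up without a restart. The file path is a placeholder.
public class ReloadableTooltipBundle extends ResourceBundle {

    private static final String FILE = "/opt/app/config/tooltips.properties"; // placeholder

    private Properties load() {
        Properties props = new Properties();
        try (InputStreamReader reader =
                 new InputStreamReader(new FileInputStream(FILE), StandardCharsets.UTF_8)) {
            props.load(reader);
        } catch (IOException e) {
            // sketch only: a real implementation should log the failure
        }
        return props;
    }

    @Override
    protected Object handleGetObject(String key) {
        // re-read the file on every call so changes are picked up immediately
        return load().getProperty(key);
    }

    @Override
    public Enumeration<String> getKeys() {
        return Collections.enumeration(load().stringPropertyNames());
    }
}
It would then be referenced via the bundle's fully qualified class name in the basename attribute of f:loadBundle, as described above.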