Auto-reload configuration changes without restarting the pod/container, using a Kubernetes ConfigMap, for an application with a large number of configuration files - asp.net-core

Our team is planning the migration of a legacy enterprise application, developed in ASP.NET Web Forms, to .NET 6, using a containerized approach. For this we will mostly target the Kubernetes container orchestration platform.
The application is highly configurable and can be integrated with related apps to a certain extent. It has a large number of XML-based configuration files (more than 100). The current mode of deployment is IIS (on-premise).
The major technical challenge we are facing is managing our application configuration.
ConfigMap is one of the options available in Kubernetes for configuration management. The ConfigMap API allows generating a ConfigMap from environment variables, a YAML file, an existing configuration file, or a directory. The directory-based approach seems the most suitable. However, considering the maximum size limit of a ConfigMap (about 1 MiB), we may end up creating multiple ConfigMaps.
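For reference, we expect to generate them roughly as follows (the directory and ConfigMap names are only placeholders); each file in a directory becomes one key in the resulting ConfigMap, and splitting by directory keeps each ConfigMap under the size limit:
kubectl create configmap app-config-core --from-file=./config/core/
kubectl create configmap app-config-integrations --from-file=./config/integrations/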
We need to make sure:
The migrated app should be able to use this configuration, but the application image should be kept separate so that the configuration can be injected from outside.
Configuration changes should be reflected in the application without restarting the POD.
Since a ConfigMap is effectively a read-only resource once the container has started, I am currently looking for a mechanism to reload configuration without the need to restart the POD/container.
The initial focus is to achieve this. (The impact of changed configuration on active users who might be relying on application features based on the previous configuration is a different topic altogether.)

You can do it without restarting the POD using a ConfigMap only; however, it still largely depends on your application side.
You can inject your ConfigMap and mount it into the POD; Kubernetes auto-reloads the ConfigMap if it is mounted as a directory. Note that this does not work if you are using subPath.
For more on auto-reloading a ConfigMap in Kubernetes without restarting the POD, see: https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388
YAML example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
data:
  hello: world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: configmaptestapp
          image: <Image>
          volumeMounts:
            - mountPath: /config
              name: data-volume
          ports:
            - containerPort: 80
      volumes:
        - name: data-volume
          configMap:
            name: test-config
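To see the automatic update in action, a rough sketch (assuming the manifest above is saved as test-config.yaml, a name chosen here for illustration; propagation can take up to a minute or so depending on the kubelet sync period):
kubectl apply -f test-config.yaml
kubectl exec deploy/test -- cat /config/hello   # prints: world
kubectl patch configmap test-config --type merge -p '{"data":{"hello":"again"}}'
# wait for the kubelet sync period, then:
kubectl exec deploy/test -- cat /config/hello   # prints: again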
Official documentation: https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically
Mounted ConfigMaps are updated automatically
When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the ConfigMapAndSecretChangeDetectionStrategy field in the KubeletConfiguration struct. A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero correspondingly).
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
Note: A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.
In this case, your application needs to handle the changed content properly on its side, with change detection etc.
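On the ASP.NET Core (.NET 6) side, the built-in configuration system can pick up those file changes. Below is a minimal sketch, assuming the ConfigMap is mounted at /config as in the example above and contains an XML file named app-settings.xml (both the file name and the key are illustrative); it relies on the Microsoft.Extensions.Configuration.Xml package, and because ConfigMap updates arrive as atomic symlink swaps you may also need DOTNET_USE_POLLING_FILE_WATCHER=true in the container for the change to be noticed:
using Microsoft.Extensions.Primitives;

var builder = WebApplication.CreateBuilder(args);

// /config is the ConfigMap mountPath from the Deployment above;
// "app-settings.xml" is an illustrative file name.
builder.Configuration.AddXmlFile("/config/app-settings.xml",
    optional: true, reloadOnChange: true);

var app = builder.Build();

// Fires every time the underlying files change, i.e. after the kubelet
// has projected the updated ConfigMap into the mounted volume.
ChangeToken.OnChange(
    () => app.Configuration.GetReloadToken(),
    () => app.Logger.LogInformation("Configuration reloaded"));

// Reads always see the latest values; injected services can use
// IOptionsMonitor<T> / IOptionsSnapshot<T> for bound options classes.
app.MapGet("/config-check", (IConfiguration cfg) => cfg["someKey"] ?? "not set");

app.Run();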

Related

Deploying spinnaker using halyard on Kubernetes

Trying to deploy spinnaker in Kubernetes using halyard.
All my custom configurations are under
~/.hal/default/service-settings
~/.hal/default/profile
So, running the below command deploys the configuration.
hal deploy apply
This reads my settings under default. Is it possible to have a folder other than default? If so, how can I change the config to use the settings under the new folder as opposed to default?
Also, all the pods are using the test & local profiles while starting.
com.netflix.spinnaker.front50.Main : The following profiles are active: test,local
Is this only for test or local deployment? Is there any production profile for production grade spinnaker?
About the "default", this is called "Deployment" - see this: https://www.spinnaker.io/reference/halyard/#deployments
And, on the profile names, I would not worry too much... You add overrides to the "profiles" directory on these...
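As a sketch of the usual layout (front50 is just one example service; the -local.yml naming is what halyard picks up):
~/.hal/default/profiles/spinnaker-local.yml   # overrides applied to all services
~/.hal/default/profiles/front50-local.yml     # overrides for the front50 service only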

Access S3 bucket without running aws configure with kubernetes

I have an S3 bucket with some sql scripts and some backup files using mysqldump.
I also have a .yaml file that deploys a fresh mariadb image.
As I'm not very experienced with Kubernetes yet, if I want to restore one of those backup files into the pod, I need to bash into it, run the AWS CLI, enter my credentials, then sync the bucket locally and run mysql < backup.sql.
This, obviously, destroys the concept of full automated deployment.
So, the question is... how can I securely make this pod immediately configured to access S3?
I think you should consider mounting the S3 bucket inside the pod.
This can be achieved with, for example, s3fs-fuse.
There are two nice articles, Mounting a S3 bucket inside a Kubernetes pod and Kubernetes shared storage with S3 backend; I recommend reading both to understand how this works.
You basically have to build your own image from a Dockerfile and supply the necessary S3 bucket info and AWS security credentials.
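For illustration, the relevant part of such an image's entrypoint could look roughly like this (the bucket name, mount point and mariadb host are placeholders, and the credentials are assumed to come in as environment variables, e.g. from a Kubernetes Secret):
# write the credentials file s3fs expects
echo "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs
# mount the bucket
s3fs my-backup-bucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs
# restore straight from the mounted bucket
mysql -h mariadb -u root -p"${MYSQL_ROOT_PASSWORD}" < /mnt/s3/backup.sql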
Once you have the storage mounted you will be able to call scripts from it in the following way:
apiVersion: v1
kind: Pod
metadata:
  name: test-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
    - name: hello
      image: debian
      command: ["/bin/sh", "-c"]
      args: ["command one; command two && command three"]

Set mfsymlinks when mounting Azure File volume to ACI instance

Is there a way to specify the mfsymlinks option when mounting an Azure Files share to an ACI container instance?
As shown on learn.microsoft.com, symlinks can be supported in Azure Files when mounted in Linux with this mfsymlinks option, which enables Minshall+French symlinks.
I would like to use an Azure Files share mounted to an Azure Container Instance, but I need to be able to use symlinks in the mounted file system, and I cannot find a way to specify this. Does anyone know of a way to do this?
Unfortunately, as far as I know, when you create the container and mount the Azure File share through the CLI command az container create with parameters such as
--azure-file-volume-account-key
--azure-file-volume-account-name
--azure-file-volume-mount-path
--azure-file-volume-share-name
you cannot set the symlinks option as you want, and there is also no parameter for you to set it.
In addition, if you take a look at the Template for Azure Container Instances, you can find that there is no property for a symlinks setting. In my opinion, this means you cannot set symlinks for an Azure Container Instance as you want. Hope this will help you.
As a workaround that suits my use case, once the file structure, including symlinks, has been created on the container's local FS, I tar up the files onto the Azure Files share:
tar -cpzf /mnt/letsencrypt/etc.tar.gz -C / etc/letsencrypt/
Then when the container runs again, it extracts from the tarball, preserving the symlinks:
tar -xzf /mnt/letsencrypt/etc.tar.gz -C /
I'll leave this open for now to see if ACI comes to support the option natively.
Update from Azure docs (azure-files-volume#mount-options):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl

How to correctly set up RabbitMQ on OpenShift

I have created new app on OpenShift using this image: https://hub.docker.com/r/luiscoms/openshift-rabbitmq/
It runs successfully and I can use it. I have added a persistent volume to it.
However, every time a POD is restarted, I lose all my data. This is because RabbitMQ uses the hostname to create the database directory.
For example:
node           : rabbit@openshift-rabbitmq-11-9b6p7
home dir       : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash    : BsUC9W6z5M26164xPxUTkA==
log            : tty
sasl log       : tty
database dir   : /var/lib/rabbitmq/mnesia/rabbit@openshift-rabbitmq-11-9b6p7
How can I set RabbitMQ to always use the same database dir?
You should be able to set the environment variable RABBITMQ_MNESIA_DIR to override the default configuration. This can be done via the OpenShift console by adding an entry to the environment in the deployment config, or via the oc tool, for example:
oc set env dc/my-rabbit RABBITMQ_MNESIA_DIR=/myDir
You would then need to mount the persistent volume inside the Pod at the required path. Since you have said it is already created, you just need to update it, for example:
oc volume dc/my-rabbit --add --overwrite --name=my-pv-name --mount-path=/myDir
You will need to make sure you have the correct r/w access on the provided mount path.
EDIT: Some additional workarounds based on issues in the comments.
The issues caused by the dynamic hostname could be solved in a number of ways:
1. (Preferred, IMO) Move the deployment to a StatefulSet. A StatefulSet provides stability in the naming, and hence the network identity, of the Pod, which must be fronted by a headless service. This feature is out of beta as of Kubernetes 1.9 and in tech preview in OpenShift since version 3.5. See the sketch after this list.
2. Set the hostname for the Pod if StatefulSets are not an option. This can be done by adding an environment variable, e.g. oc set env dc/example HOSTNAME=example, to make the hostname static, and setting RABBITMQ_NODENAME to do likewise.
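A minimal sketch of option 1 (the names, image tag and storage size are illustrative, and this is not a full production RabbitMQ setup):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  clusterIP: None   # headless service, gives the Pod a stable network identity
  selector:
    app: rabbitmq
  ports:
    - port: 5672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3
          volumeMounts:
            - name: data
              mountPath: /var/lib/rabbitmq/mnesia
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
With this, the Pod is always named rabbitmq-0, so the mnesia directory (rabbit@rabbitmq-0) stays the same across restarts.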
I was able to get it to work by setting the HOSTNAME environment variable. OSE normally sets that value to the pod name, so it changes every time the pod restarts. By setting it explicitly, the pod's hostname doesn't change when the pod restarts.
Combined with a Persistent Volume, the queues, messages, users and (I assume) any other configuration are persisted through pod restarts.
This was done on an OSE 3.2 server. I just added an environment variable to the deployment config. You can do it through the UI or with the OC CLI:
oc set env dc/my-rabbit HOSTNAME=some-static-name
This will probably be an issue if you run multiple pods for the service, but in that case you would need to set up proper RabbitMQ clustering, which is a whole different beast.
The easiest and most production-safe way to run RabbitMQ on K8s, including OpenShift, is the RabbitMQ Cluster Operator.
See this video on how to deploy RabbitMQ on OpenShift.
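As a rough sketch of that route (the install command is the one from the operator's quick-start; the cluster name below is arbitrary):
# install the operator
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
# then declare a cluster; the operator manages a StatefulSet with persistent storage behind it
cat <<EOF | kubectl apply -f -
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: my-rabbit
spec:
  replicas: 1
EOF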

Hiding artifacts, or deploying from something other than artifacts?

Background and problem
I have this open source repository that I have an AppVeyor build configuration for.
This configuration creates an artifact for a website that needs to get published. This is because it only seems to be possible for AppVeyor to do Web Deploy using an artifact, and not a path.
This poses a problem, because my website needs some secret values (like API secrets for a Patreon API) written into a file before being deployed to production with Web Deploy. But if I do this before creating the artifact, the secrets will be part of the artifact as well.
The questions
How can I set specific configuration values that my website application can read, without exposing them to viewers of the build configuration, and still deploy it to production using AppVeyor?
If I could deploy a path instead of an artifact I could mutate the files before deploying, but since an artifact is public to everyone, I don't want to do that. Is this possible?
Alternatively it would be great if I could hide artifacts from others or prevent them from being shown via permissions or something similar, but I haven't found anything that allows me to do that. Is this possible?
What I've tried and more technical details
I have already encrypted the values in my appveyor.yml file:
environment:
  patreon_client_id:
    secure: PLU/ujLWtFY+Tw/UN6vbHoUSgxeykAIa7dJfLeuHyAyLtnhMqJCARZjN7G6zhO3m9yjr2pClq+VRScJEL+4vSTcJSndZWCqBA5YLFhM6xeE=
  patreon_client_secret:
    secure: tHr/9QE88kYtxaqdLM332mB3xD+4QRNg8y06DY5qAWf155NtSqi7G4zNpjeFCiTPa86f0LDdPAAjyrWZsLEXoCKZmA7PDBxU5kcllrub2cE=
  patreon_creators_access_token:
    secure: viBR0QyoO8HxK9X/n93AHhF0SNPs9hG0BEqoQKWV688=
  patreon_creators_refresh_token:
    secure: qJzAlyrpLkpWxEb7zL17uYnC0HLAwU8M3xcxzI7vkGc=
Here's the part where I create my artifact.
- path: build\website\Website.zip
  name: Website
  type: WebDeployPackage
As you can see, a Website artifact is created. I then publish this artifact with Web Deploy:
- provider: WebDeploy
  server: https://shapeshifter.scm.azurewebsites.net:443/msdeploy.axd?site=shapeshifter
  website: shapeshifter
  username: $shapeshifter
  password:
    secure: 5Urzbp6Aj24/wHED9+Q/CtH4EjN7nv9PGdCdBDr5XECq8wnDxQcHK5YoS246hOqcEBNCU2OZ4rq26LVWCRbfbw==
  artifact: Website
  aspnet_core: true
  remove_files: true
  app_offline: true
  aspnet_core_force_restart: true
Please take a look at Web Deploy Parametrization.
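The idea, as a sketch only (the parameter name, the __PATREON_CLIENT_SECRET__ token and the target file are placeholders, and this assumes AppVeyor's documented behaviour of filling Web Deploy parameters from environment variables with matching names): keep a placeholder instead of the secret in the packaged file, declare it in a parameters.xml at the project root, and let the deployment step substitute the real value.
<!-- parameters.xml in the web project root -->
<parameters>
  <parameter name="patreon_client_secret" defaultValue="">
    <!-- replaces the __PATREON_CLIENT_SECRET__ token in the packaged config file -->
    <parameterEntry kind="TextFile"
                    scope="appsettings\.json$"
                    match="__PATREON_CLIENT_SECRET__" />
  </parameter>
</parameters>
This way the artifact never contains the real secret; it only gets injected at deploy time.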