Azure App Configuration and IOptions|IOptionsSnapshot|IOptionsMonitor pattern - asp.net-core

If you use the IOptions pattern, i.e. the typed-settings approach, how can you have a dynamic naming convention for parameters in App Configuration (AC)? Let's say we have three environments (test, stage, and prod) and in AC we would like a naming convention for parameters like:
<environment>:<application name>:<param name>
Is that possible to achieve? When I tested this, there seemed to be some "behind the scenes" mapping based on the IOptions entity name and the appsettings.json structure. Can I override this behavior to achieve a more dynamic parameter naming convention based on environment variables for the environment (test|stage|prod) and the service name, together with a more generic naming convention in the IOptions entity/appsettings files for all parameters that should be stored centrally/dynamically?
Thanks!

The naming convention you plan to use will work; however, I would recommend naming config keys as below and using labels in AC for the environments.
<application name>:<param name>
For example,
Key                 | Value  | Label
app1:settings:debug | true   | dev
app1:settings:debug | true   | stage
app2:settings:color | yellow | test
Your application can then load only the configuration that is relevant to it (app1 vs. app2) and for the environment where it runs (dev/stage/test, etc.) by using key filters and label filters. You can find more details in https://learn.microsoft.com/en-us/azure/azure-app-configuration/concept-key-value.
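As a rough sketch, the filters could be applied like this (assuming the Microsoft.Extensions.Configuration.AzureAppConfiguration provider; the APP_ENV and AC_CONNECTION environment variables and the Settings class are made up for illustration):

// Program.cs (ASP.NET Core generic-host style)
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder
                    .ConfigureAppConfiguration(config =>
                    {
                        // hypothetical variables holding the environment label and the AC connection string
                        string env = System.Environment.GetEnvironmentVariable("APP_ENV"); // e.g. "dev"
                        config.AddAzureAppConfiguration(options =>
                            options.Connect(System.Environment.GetEnvironmentVariable("AC_CONNECTION"))
                                   .Select("app1:*", env)); // key filter + label filter
                    })
                    .UseStartup<Startup>();
            });
}

// In Startup.ConfigureServices, bind the filtered keys to a typed options class:
// services.Configure<Settings>(Configuration.GetSection("app1:settings"));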
Here is a tutorial that shows how to use IOptions with App Configuration in an ASP.NET Core app:
https://learn.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-aspnet-core

What is this syntax: $[] - it’s referenced in Serverless Framework docs but not explained in the Variables section

There are several references using $[...] syntax in the serverless-plugin-aws-alerts docs: Serverless Framework: Plugins
I understand about ${…} variables from the relevant docs: Serverless Framework Variables
But I can’t find anything that describes what is happening in the code snippet below (taken from the aws-alerts plugin docs linked above):
nameTemplate: $[functionName]-Duration-IMPORTANT-Alarm # Optionally - naming template for the alarms, overwrites globally defined one
prefixTemplate: $[stackName] # Optionally - override the alarm name prefix, overwrites globally defined one
It's described at https://www.serverless.com/plugins/serverless-plugin-aws-alerts, under Custom Naming:
You can define a custom naming template for the alarms.
The nameTemplate property under alerts configures the naming template for all alarms, while placing nameTemplate under an alarm definition configures (overwrites) it for that specific alarm only. The naming template provides interpolation capabilities, where the supported placeholders are:
$[functionName] - function name (e.g. helloWorld)
$[functionId] - function logical id (e.g. HelloWorldLambdaFunction)
$[metricName] - metric name (e.g. Duration)
$[metricId] - metric id (e.g. BunyanErrorsHelloWorldLambdaFunction for the log based alarms, $[metricName] otherwise)
Note: All the alarm names are prefixed with stack name (e.g. fooservice-dev).
So:

alerts:
  nameTemplate: $[functionName]-$[metricName]-Alarm # configures names for all alarms

alerts:
  alarms:
    definitions:
      customAlarm:
        nameTemplate: $[functionName]-Duration-IMPORTANT-Alarm # configures (overwrites) it for that specific alarm only

Can I change default production branch and/or integration branch?

to be continuous is a set of advanced, ready-to-use templates for GitLab CI.
By default, every to be continuous template considers master the default production branch, and develop the default integration branch.
Can this default behavior be changed? For instance, can main be used instead of master as the production branch?
Sure you can.
Production and integration branches are configured through variables containing regular expressions:
variables:
  # default production ref name (pattern)
  PROD_REF: '/^master$/'
  # default integration ref name (pattern)
  INTEG_REF: '/^develop$/'
Simply overriding them changes the behavior.
Example in your .gitlab-ci.yml file:
variables:
  # my production branch
  PROD_REF: '/^main$/'
You could even decide that every branch with the format prod-xxx should be considered production.
Using a regex here helps:
variables:
  # my production branch(es)
  PROD_REF: '/^prod-.*$/'
/!\ $PROD_REF and $INTEG_REF are used to implement pattern matching in GitLab CI rules, so beware of this GitLab bug.
If you have a close look at the issue, the conclusion is that only 3 regex patterns work:
pattern1: '/^abcde$/'
pattern5: '/^abcde.*/'
pattern6: '/^abcde/'
So make sure you're using one of those.
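For context, this is roughly how such a variable ends up being used in a rule (a simplified sketch, not the actual template code):

rules:
  # run the job only for branches matching the production pattern
  - if: '$CI_COMMIT_REF_NAME =~ $PROD_REF'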

Spring Cloud Server serving multiple property files for the same application

Let's say I have applicationA, which has 3 property files:
-> applicationA
   - datasource.properties
   - security.properties
   - jms.properties
How do I move all properties to a spring cloud config server and keep them separate?
As of today I have configured the config server so that it only reads ONE property file, as this seems to be the standard way. The file the config server picks up is resolved using spring.application.name. In my case it will only read ONE file with this name:
-> applicationA.properties
How can I add the other files to be resolved by the config server?
This is not possible in the way you requested. Spring Cloud Config Server uses NativeEnvironmentRepository, which is:

Simple implementation of {@link EnvironmentRepository} that uses a SpringApplication and configuration files located through the normal protocols. The resulting Environment is composed of property sources located using the application name as the config file stem (spring.config.name) and the environment name as a Spring profile.
See: https://github.com/spring-cloud/spring-cloud-config/blob/master/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/NativeEnvironmentRepository.java
So basically, every time a client requests properties from the Config Server, it creates a ConfigurableApplicationContext using SpringApplicationBuilder, launched with the following configuration property:
String config = application;
if (!config.startsWith("application")) {
    config = "application," + config;
}
list.add("--spring.config.name=" + config);
So the only possible names for property files are application.properties (or .yml) and the name of the config client application that is requesting configuration, in your case applicationA.properties.
But you can "cheat".
In the config server configuration you can add a property like this:
spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/your-subdirectory'
In this case the Config Server will search for the same property file names, but in several directories, and you can use subdirectories to keep your properties separate.
So with the configuration above you will be able to load configuration from:
applicationA/application.properties
applicationA/your-subdirectory/application.properties
This can be done.
You need to create your own EnvironmentRepository, which loads your property files.
org.springframework.cloud.config.server.support.AbstractScmAccessor#getSearchLocations
searches for the property files to load:
for (String prof : profiles) {
    for (String app : apps) {
        String value = location;
        if (app != null) {
            value = value.replace("{application}", app);
        }
        if (prof != null) {
            value = value.replace("{profile}", prof);
        }
        if (label != null) {
            value = value.replace("{label}", label);
        }
        if (!value.endsWith("/")) {
            value = value + "/";
        }
        output.addAll(matchingDirectories(dir, value));
    }
}
There you could add custom code that reads the required property files.
The code above matches exactly the behaviour described in the Spring docs.
The NativeEnvironmentRepository does NOT access Git/SCM in any way, so you should use JGitEnvironmentRepository as the base for your own implementation.
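For illustration, here is a minimal sketch of such a repository (the class name, the delegate wiring, and the applicationA-datasource naming scheme are assumptions for this example, not spring-cloud-config API):

import org.springframework.cloud.config.environment.Environment;
import org.springframework.cloud.config.environment.PropertySource;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;

public class MultiFileEnvironmentRepository implements EnvironmentRepository {

    private final EnvironmentRepository delegate; // e.g. a JGitEnvironmentRepository
    private final String[] extraNames;            // e.g. "datasource", "security", "jms"

    public MultiFileEnvironmentRepository(EnvironmentRepository delegate, String... extraNames) {
        this.delegate = delegate;
        this.extraNames = extraNames;
    }

    @Override
    public Environment findOne(String application, String profile, String label) {
        // the standard result: application.properties + applicationA.properties
        Environment result = delegate.findOne(application, profile, label);
        // merge in the extra files, e.g. applicationA-datasource.properties
        for (String extra : extraNames) {
            Environment part = delegate.findOne(application + "-" + extra, profile, label);
            for (PropertySource source : part.getPropertySources()) {
                result.add(source);
            }
        }
        return result;
    }
}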
As @nmyk pointed out, NativeEnvironmentRepository boots a mini app in order to collect the properties, by providing it with, so to speak, "hardcoded" {appname}.* and application.* supported property file names. (@Stefan Isele - prefabware.com: JGitEnvironmentRepository ends up using NativeEnvironmentRepository as well, for that matter.)
I have issued a pull request for spring-cloud-config-server 1.4.x that supports defining additional file names through a spring.cloud.config.server.searchNames environment property, in the same sense one can do for a single Spring Boot app, as defined in the "Externalized Configuration: Application Property Files" section of the documentation, using the spring.config.name environment property. I hope they review it soon, since many have asked about this feature on Stack Overflow, and surely many more search for it and read the currently advised solutions.
It is worth mentioning that many people advise "abusing" the profile feature to achieve this, which is a bad practice, in my humble opinion, as I describe in this answer.

How to set Neo4J config keys in gremlin-scala?

When running a Neo4J database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that means all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via a Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this in the TinkerPop documentation on high availability configuration.
In TinkerPop 2.x, the same approach was taken however the key prefix was instead blueprints.neo4j.conf.* as discussed here.
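In gremlin-scala terms, that pass-through looks roughly like this (a minimal sketch; note the caveat about which key names Neo4j actually accepts, discussed in the next answer):

import org.apache.commons.configuration.BaseConfiguration
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph

val conf = new BaseConfiguration()
conf.setProperty("gremlin.neo4j.directory", "/path/to/db")
// everything under gremlin.neo4j.conf.* is handed to Neo4j unchanged
conf.setProperty("gremlin.neo4j.conf.dbms.backup.enabled", "false")
val graph = Neo4jGraph.open(conf)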
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected in org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do, instead, was the following:
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._

val factory = new Neo4jFactoryImpl()
val config = Map(
  "online_backup_enabled" -> "true",
  "online_backup_server"  -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory, config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):

import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;

BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
Neo4jGraph graph = Neo4jGraph.open(conf);

NHibernate HybridSessionBuilder, how to switch hibernate cfg based upon url values

I am using the HybridSessionBuilder supplied by Palermo and his team .. link ..
We have our staging environments set up so that the url will be one of the following based on the environment
web-test.company.com
web-cert.company.com
web.company.com
What we normally do is take a look at the URL, and if it has "-test" we use the test configurations, and so on (connection strings, etc.).
This is the first project that uses NHibernate in this type of environment. What would be a good way to tell the Session Builder to use the correct hibernate cfg? (I will build one for each environment.)
The HybridSessionBuilder lives in an infrastructure layer and is injected into repositories via StructureMap.
Here's how I select a single configuration file using the HybridSessionBuilder:
public Configuration GetConfiguration()
{
    var configuration = new Configuration();
    string cfgFile = Path.GetDirectoryName(Assembly.GetAssembly(this.GetType()).CodeBase) +
                     "\\com.Data.nHibernate.cfg.xml";
    configuration.Configure(cfgFile);
    configuration.AddAssembly("com.Data");
    return configuration;
}
If you want to select configuration files based on the URL, I would identify the call stack that leads to this function and pass in either an enum value or the config file's name directly.
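For example, a sketch of what that could look like (assuming classic ASP.NET with System.Web available; the per-environment cfg file names are hypothetical):

public Configuration GetConfiguration()
{
    // Pick the cfg file from the request host, e.g. web-test.company.com -> test config
    string host = HttpContext.Current.Request.Url.Host;
    string cfgName = host.Contains("-test") ? "com.Data.nHibernate.test.cfg.xml" // hypothetical
                   : host.Contains("-cert") ? "com.Data.nHibernate.cert.cfg.xml" // hypothetical
                   : "com.Data.nHibernate.cfg.xml";

    var configuration = new Configuration();
    string cfgFile = Path.GetDirectoryName(Assembly.GetAssembly(this.GetType()).CodeBase) +
                     "\\" + cfgName;
    configuration.Configure(cfgFile);
    configuration.AddAssembly("com.Data");
    return configuration;
}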