Available placeholders for {name}, {profile} and {label} in Spring Cloud Config Server? - spring-cloud-config

I'd like Spring Cloud Config Server to resolve some placeholders at rendering time based on the {name}, {profile} and {label} values passed in the request.
For example, I'd like the following config.txt plain text file:
name=${name}
profile=${profile}
to be rendered as:
name=my_app_name
profile=my_profile_name
when calling /my_app_name/my_profile_name/config.txt.
Is it possible? If so, what placeholders should I use? And is there anything to configure on the server to enable this feature?

Related

Flowable - concat dynamic variable and a string in http task (url)

I need to set the base URL according to the environment (production versus the others).
I am using a script task before an HTTP task to perform this logic.
baseUrl = http://localhost:8080
baseUrl is the output of the script task. Now I need to use this base URL as a prefix in the HTTP task URL:
Url = ${baseUrl}/application/find (something like this).
I am getting the following error:
Unknown Property used in the expression ${baseUrl}/application/find
Script:
var env = execution.getVariable("env");
if (env == "prod") {
    var baseUrl = "http://localhost:8080";
    execution.setVariable("baseUrl", baseUrl);
}
Please assist.
This typically means that it is unable to find a property used in the expression (as the message says). The only expression you are using is baseUrl, so the issue is around baseUrl. The concatenation as you have done it is correct and does not need to be adapted.
You should check whether the variable really exists. You can do this by introducing a wait state before your HTTP task and checking afterwards whether the variable has been created. Rather than using outputs, you can also use the Java API in your script task to create the variable:
execution.setVariable("baseUrl", "http://localhost:8080");
Assuming you are using Spring Boot, for your specific use case it would also be an option to specify your base URL in application.properties and then refer to it with the following expression:
${environment.getProperty("baseUrl")}/application/find
This will allow you to change the baseUrl independently of your process definition.
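As a minimal sketch of that option (assuming a standard Spring Boot setup; the property name simply has to match whatever you pass to environment.getProperty):

# src/main/resources/application.properties
baseUrl=http://localhost:8080

The HTTP task URL can then stay ${environment.getProperty("baseUrl")}/application/find, and baseUrl can be overridden per environment, for example with a profile-specific file such as application-prod.properties.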

Spring Cloud Server serving multiple property files for the same application

Let's say I have applicationA that has 3 property files:
-> applicationA
   - datasource.properties
   - security.properties
   - jms.properties
How do I move all properties to a spring cloud config server and keep them separate?
As of today I have configured the config server to read only ONE property file, as this seems to be the standard way. The file the config server picks up appears to be resolved using spring.application.name. In my case it will only read ONE file with this name:
-> applicationA.properties
How can I add the other files to be resolved by the config server?
Not possible in the way you requested. Spring Cloud Config Server uses NativeEnvironmentRepository, which is:
Simple implementation of {@link EnvironmentRepository} that uses a SpringApplication and configuration files located through the normal protocols. The resulting Environment is composed of property sources located using the application name as the config file stem (spring.config.name) and the environment name as a Spring profile.
See: https://github.com/spring-cloud/spring-cloud-config/blob/master/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/NativeEnvironmentRepository.java
So basically, every time a client requests properties from the Config Server, it creates a ConfigurableApplicationContext using SpringApplicationBuilder, which is launched with the following configuration property:
String config = application;
if (!config.startsWith("application")) {
    config = "application," + config;
}
list.add("--spring.config.name=" + config);
So the only possible names for property files are application.properties (or .yml) and the name of the config client application that is requesting the configuration; in your case, applicationA.properties.
But you can "cheat".
In the config server configuration you can add a property like this:
spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/your-subdirectory'
In this case the Config Server will search for the same property file names, but in several directories, so you can use subdirectories to keep your properties separate.
With the configuration above you will be able to load configuration from:
applicationA/application.properties
applicationA/your-subdirectory/application.properties
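Extending that idea to the original three-file split (the subdirectory names below are just an illustration, nothing special to the Config Server), you could keep one application.properties per concern:

spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/datasource, {application}/security, {application}/jms'

and lay the repository out as applicationA/datasource/application.properties, applicationA/security/application.properties and applicationA/jms/application.properties; everything found is merged into the Environment served to applicationA.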
This can be done.
You need to create your own EnvironmentRepository, which loads your property files.
org.springframework.cloud.config.server.support.AbstractScmAccessor#getSearchLocations
searches for the property files to load:
for (String prof : profiles) {
    for (String app : apps) {
        String value = location;
        if (app != null) {
            value = value.replace("{application}", app);
        }
        if (prof != null) {
            value = value.replace("{profile}", prof);
        }
        if (label != null) {
            value = value.replace("{label}", label);
        }
        if (!value.endsWith("/")) {
            value = value + "/";
        }
        output.addAll(matchingDirectories(dir, value));
    }
}
There you could add custom code that reads the required property files.
The above code matches exactly the behaviour described in the Spring docs.
The NativeEnvironmentRepository does NOT access Git/SCM in any way, so you should use
JGitEnvironmentRepository as the base for your own implementation.
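To make the shape of such a repository concrete, here is a minimal sketch of a custom EnvironmentRepository that loads several fixed property files per application from the classpath. The class name, file names and classpath layout are assumptions for illustration only; a real implementation following the advice above would extend JGitEnvironmentRepository instead of reading the classpath:

import java.io.InputStream;
import java.util.Properties;

import org.springframework.cloud.config.environment.Environment;
import org.springframework.cloud.config.environment.PropertySource;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;

public class MultiFileEnvironmentRepository implements EnvironmentRepository {

    // hypothetical per-application file names; adjust to your own split
    private static final String[] FILES = { "datasource.properties", "security.properties", "jms.properties" };

    @Override
    public Environment findOne(String application, String profile, String label) {
        Environment environment = new Environment(application, profile);
        for (String file : FILES) {
            String location = application + "/" + file; // e.g. applicationA/datasource.properties
            try (InputStream in = getClass().getClassLoader().getResourceAsStream(location)) {
                if (in == null) {
                    continue; // this application does not provide that file
                }
                Properties properties = new Properties();
                properties.load(in);
                environment.add(new PropertySource(location, properties));
            } catch (Exception e) {
                throw new IllegalStateException("Could not load " + location, e);
            }
        }
        return environment;
    }
}

In most Spring Cloud Config versions, exposing such a class as a bean in the config server application makes the auto-configured repository back off in favour of yours, but check the behaviour of the exact version you are on.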
As @nmyk pointed out, NativeEnvironmentRepository boots a mini app in order to collect the properties, providing it with, so to speak, "hardcoded" {appname}.* and application.* supported property file names. (@Stefan Isele - prefabware.com: JGitEnvironmentRepository ends up using NativeEnvironmentRepository as well, for that matter.)
I have issued a pull request for spring-cloud-config-server 1.4.x that supports defining additional file names through a spring.cloud.config.server.searchNames environment property, in the same sense one can do for a single Spring Boot app, as defined in the "Application Property Files" part of the Externalized Configuration section of the documentation, using the spring.config.name environment property. I hope they review it soon, since it seems many have asked about this feature on Stack Overflow, and surely many more search for it and read the currently advised solutions.
It is worth mentioning that many people advise "abusing" the profile feature to achieve this, which is bad practice in my humble opinion, as I describe in this answer.

Karate JDBC Connection

In a Karate script, is there a way to cache the DB connections? To be more specific, the DB connections go through a Java program, and every time we make DB calls the connection call is also made:
def dbDemo=Java.type('tests.DataBaseAssertions')
The above line of code is used in all the feature files. Is there a way to cache this object at the application level so that all scripts can refer to it?
Sounds like you are looking for the callSingle() syntax, please refer to the docs:
https://github.com/intuit/karate#hooks
var result = karate.callSingle('classpath:jdbc.feature');
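For illustration, a karate-config.js along these lines would run a (hypothetical) jdbc.feature only once for the whole test run and share its result with every feature; the feature and variable names here are assumptions, not from the original question:

function fn() {
  var config = {};
  // callSingle executes the feature exactly once per test run and caches the result
  var result = karate.callSingle('classpath:jdbc.feature', config);
  // jdbc.feature could do e.g.: * def dbDemo = Java.type('tests.DataBaseAssertions')
  config.dbDemo = result.dbDemo;
  return config;
}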

How to configure kafka s3 sink connector for json using its fields AND time based partitioning?

I have a json coming in like this:
{
  "app" : "hw",
  "content" : "hello world",
  "time" : "2018-05-06 12:53:04"
}
I wish to push to S3 in the following file format:
/upper-directory/$jsonfield1/$jsonfield2/$date/$HH
I know I can achieve:
/upper-directory/$date/$HH
with TimeBasedPartitioner and Topic.dir, but how do I put in the 2 json fields as well?
You need to write your own Partitioner to achieve a combination of the TimeBased and Field partitioners.
That means making a new Java project, using the existing partitioner source code as a reference point, building a JAR out of the project, and then copying that JAR into the kafka-connect-storage-common directory on all servers running Kafka Connect, where it is picked up by the S3 connector. After you've copied the JAR, you will need to restart the Connect process.
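As a rough sketch of such a partitioner (not a drop-in implementation: exact base-class generics and config keys vary between kafka-connect-storage-common versions, the package name is made up, and partition.field.name is borrowed from the FieldPartitioner as an assumption), extending TimeBasedPartitioner and prepending a record field to the encoded path could look like this:

package com.example.connect; // hypothetical package

import java.util.Map;

import org.apache.kafka.connect.sink.SinkRecord;

import io.confluent.connect.storage.partitioner.TimeBasedPartitioner;

public class FieldAndTimeBasedPartitioner<T> extends TimeBasedPartitioner<T> {

    private String fieldName;

    @Override
    public void configure(Map<String, Object> config) {
        super.configure(config);
        // which JSON field to partition by, e.g. "app"; for two fields, read and join both
        this.fieldName = (String) config.get("partition.field.name");
    }

    @Override
    @SuppressWarnings("unchecked")
    public String encodePartition(SinkRecord sinkRecord) {
        // for schemaless JSON the value arrives as a Map; prepend the field value
        // to the time-based path, giving e.g. hw/2018-05-06/12 depending on path.format
        Map<String, Object> value = (Map<String, Object>) sinkRecord.value();
        String fieldValue = String.valueOf(value.get(fieldName));
        return fieldValue + "/" + super.encodePartition(sinkRecord);
    }
}

The connector would then point at it with partitioner.class=com.example.connect.FieldAndTimeBasedPartitioner plus the usual TimeBasedPartitioner settings (path.format, partition.duration.ms, timestamp.extractor, and so on).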
Note: there's already a PR that is trying to add this - https://github.com/confluentinc/kafka-connect-storage-common/pull/73/files

How to set Neo4J config keys in gremlin-scala?

When running a Neo4J database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that means all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via a Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this in the TinkerPop documentation on high-availability configuration.
In TinkerPop 2.x the same approach was taken; however, the key prefix was blueprints.neo4j.conf.* instead, as discussed here.
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected in org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do, instead, was the following:
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._

val factory = new Neo4jFactoryImpl()
val config = Map(
  "online_backup_enabled" -> "true",
  "online_backup_server" -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory, config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;

BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
Neo4jGraph graph = Neo4jGraph.open(conf);
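Since the question is about gremlin-scala, the same configuration translated to Scala (a sketch based directly on the Java answer above; the database path is a placeholder) would be:

import org.apache.commons.configuration.BaseConfiguration
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph

val conf = new BaseConfiguration()
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db")
// any key under the gremlin.neo4j.conf prefix is passed through to the embedded Neo4j instance
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false")
val graph = Neo4jGraph.open(conf)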