Spring Profiles in combination with ConfigServer - properties

I have a very basic Spring Boot Config Server (I just added the dependency and annotated the main class with @EnableConfigServer).
In general I would like to support multiple environments with different property sources for each of my applications; here is the example of the Config Server itself:
Profile: default (application.yml on classpath):
Profile: docker (application-docker.yml on classpath):
Profile: default (application.yml in repository of ConfigServer):
So in my case all of the properties from all three screenshots should be active; I'd expect the order/priority to be as follows:
application.yml from classpath
application-ANY_PROFILE.yml from classpath
application.yml from config repo
APP-NAME.yml from config repo (does not exist in this case)
So far this works flawlessly, except that my application-docker.yml on the classpath is being ignored when I start the application with this command (inside the container, of course):
java -jar -Dspring-boot.run.profiles=docker *.jar
as you can see here:
My question is: even when I provide the profile as a command line argument, it's not being picked up.
Why is that?
UPDATE: here are the Dockerfile and entrypoint.sh:

To activate one or more profiles, do one of the following:
Activate using the VM parameter -Dspring.profiles.active=<profiles>
Activate using the program argument --spring.profiles.active=<profiles>
Following your example, the following should work:
java -jar -Dspring.profiles.active=docker *.jar
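If the JAR is started through an entrypoint script inside the container, that script is where the profile has to reach the java command. Below is a minimal sketch of such an entrypoint (not the OP's actual file; the jar path and the fallback value are assumptions), relying on the fact that Spring Boot also picks up the SPRING_PROFILES_ACTIVE environment variable:
#!/bin/sh
# Hypothetical entrypoint.sh: forward the active profile to the JVM.
# SPRING_PROFILES_ACTIVE can be supplied with `docker run -e SPRING_PROFILES_ACTIVE=docker ...`;
# "docker" is used as the fallback here.
exec java -Dspring.profiles.active="${SPRING_PROFILES_ACTIVE:-docker}" -jar /app/app.jar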

Related

Increase memory allocated to application deployed to payara micro

I am running my application from a Payara Micro uber JAR and would like to increase the memory allocated to the application. How can I do this at the point of creating the uber JAR?
There are a couple of ways you can do this. The first way I'll mention is the preferred way:
1. Use asadmin commands
The latest edition of Payara Micro introduces an option called --postbootcommandfile which allows you to run asadmin commands against Payara Micro. Your file should include something like this:
delete-jvm-options -Xmx512m
create-jvm-options -Xmx1g
create-jvm-options -Xms1g
You will need to make sure you delete the existing options before applying new ones.
You can then use the file similar to this:
java -jar payara-micro.jar --postbootcommandfile myCommands.txt --deploy myApp.war --outputuberjar myPayaraMicroApp.jar
Your settings should now persist in the resulting Uber JAR.
2. Supply a custom domain.xml
The alternative would be to supply a domain.xml of your own and override the built-in domain.xml with it.
You can use the --rootdir option to get Payara Micro to output its configuration to a directory so you can make changes there. This process is outlined in this blog:
http://blog.payara.fish/working-with-external-configuration-files-in-payara-micro
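As a rough sketch of that process (the directory name here is just an assumption), running Payara Micro once with --rootdir makes it write its configuration, including domain.xml, into that directory, where you can then edit it:
java -jar payara-micro.jar --rootdir ./payara-config --deploy myApp.war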
If you already have a custom domain.xml to hand, you can use the --domainconfig option to supply it, as follows:
java -jar payara-micro.jar --domainconfig myCustomDomain.xml --deploy myApp.war --outputuberjar myPayaraMicroApp.jar
After following either of these methods, you can simply start the resulting JAR and all the settings and configuration will be applied:
java -jar myPayaraMicroApp.jar
The Payara Micro uber JAR is a plain JAR and doesn't start a new JVM the way Payara Server does. Therefore there's no way to modify JVM memory settings from within the JAR, as the JVM is already started. Although it's possible to add JVM settings to the Payara Micro configuration, they are ignored and not applied; those configuration values are only used by Payara Server.
With Payara Micro uber JAR, you need to specify the JVM options on the command line, like this:
java -Xmx1g -Xms1g -jar myPayaraMicroApp.jar
If you need to specify JVM arguments in the uber JAR, you need to use a solution like capsule.io to wrap the JAR into a launcher JAR that would spawn a separate JVM for Payara Micro and pass the arguments to it.

How to run WildFly with standalone-full.xml from IntelliJ IDEA?

I'm trying to run WildFly 8.0 from IntelliJ IDEA. When starting WildFly from the command line I can use the -c standalone-full.xml parameter to use the standalone-full.xml configuration file. How can I specify this when running WildFly from IntelliJ IDEA?
In my opinion the switch -c standalone-full.xml is not a VM option, so I will post a slightly different solution:
In the Run/Debug configuration for your server, in the Startup/Connection tab, you can set the Startup script. At the end of the line there is a Use default checkbox. Unselect it and append -c standalone-full.xml to the end of the input.
Adding -Djboss.server.default.config=standalone-full.xml to the VM options is the equivalent of running standalone -c standalone-full.xml from a shell.
As Mike Holdsworth said, -Djboss.server.default.config=standalone-full.xml works perfectly.
But it has another advantage over the -c standalone-full.xml method.
When you rename your standalone.xml file to create custom configuration files for multiple environments (like env1.xml, env2.xml, etc.) and use -c env1.xml, IntelliJ will give you the following message:
Error: HTTP management port configuration not found.
So you have to keep a basic standalone.xml, which will be overridden at startup by the one you pass with the -c option.
Using -Djboss.server.default.config=env1.xml prevents this.
Look out for different startup scripts for "Run" and "Debug" in IntelliJ IDEA. If you don't uncheck "Use default" in both of them, you can end up with two different profiles for "Run" and "Debug". It is easy to forget and annoying to figure out.
If you want to run it by default without passing any command line parameters, go to standalone.(bat|sh) and append --server-config=standalone-full.xml to the SERVER_OPTS variable.
That way you'll run it in full mode from anywhere (IDE, service, command line).
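For example, on Linux the appended line could look like this sketch (the exact placement inside the script may differ between WildFly versions):
# in bin/standalone.sh, after SERVER_OPTS has been populated from the command line
SERVER_OPTS="$SERVER_OPTS --server-config=standalone-full.xml"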
I'm on a cross-platform team and we share our run configs. Modifying the startup script could cause problems (other teammates' paths and startup scripts are different), so my solution was:
Made a backup of standalone.xml
Renamed standalone-full.xml to standalone.xml
This doesn't answer the OP's question directly, but may be helpful for folks.
In the Run/Debug configuration for your server you have the ability to set VM options. You can put your switch in there. You may have problems, however, with JBoss identifying the correct path for the file, so you may have to play with that a little before it works for you.
Run -> Edit configurations -> Click '+' in the top left corner -> JBoss Server -> Local
There you can configure your JBoss instance and set VM options and so on.

log4j properties files based on leiningen test metadata?

How can I use different log4j properties files based on leiningen test metadata? I have functions that have debug logging output to a file. Often, there is a lot of data being written to this debug log file, slowing down the function. Normal runs of the application will not have debug file writing, so I want to benchmark the normal running of the function without that file writing. For benchmarking, I am using criterium. Let's assume that the metadata for benchmarking deftest definitions is :benchmark.
The trouble with doing this based on test metadata is that all tests are run in a single JVM instance, and modifying the Log4j configuration on the fly within a JVM is not exactly easy. Instead, I would set up profiles in your project.clj to disable the :benchmark tests by default, and set up a separate profile for running benchmarks. Assuming that you have your :resource-paths set up to include a debug-level log4j.properties file, your benchmark profile could then set up the classpath or system properties as appropriate to use a different file. For example:
(defproject myproject
  ...
  :test-selectors {:default (complement :benchmark)}
  :profiles {:benchmark {:test-selectors {:default :benchmark}
                         :jvm-opts ["-Dlog4j.configuration=log4j-benchmark.properties"]}})
You could then run the benchmarks with:
> lein with-profile +benchmark test
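For completeness, here is a sketch of what log4j-benchmark.properties might contain (this file is hypothetical and not part of the question): it keeps console logging at WARN level and leaves the debug file appender out entirely.
# hypothetical log4j-benchmark.properties: console only, no debug file appender
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n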

Play 2 & Cloudbees: Could not resolve substitution to a value

I am deploying my Play! 2.1 application on Cloudbees.
I have in my application.conf:
# Database configuration
# ~~~~~
db.default.driver=com.mysql.jdbc.Driver
db.default.url=${MYSQL_URL_DB}
db.default.user=${MYSQL_USERNAME_DB}
db.default.password=${MYSQL_PASSWORD_DB}
I defined those values in Cloudbees configuration:
$ bees config:list -a myself/my-app
Application Parameters:
proxyBuffering=false
MYSQL_URL_DB=jdbc:cloudbees://my-app
MYSQL_USERNAME_DB=my-app
MYSQL_PASSWORD_DB=yummy
Runtime Parameters:
java_version=1.7
I publish my app using git (git push cloudbees cloudbees:master), which triggers Jenkins. But when it comes to deploying the application, I get this in the Jenkins logs:
[error] (compile:compile)
com.typesafe.config.ConfigException$UnresolvedSubstitution:
conf/application.conf: 16: Could not resolve substitution to a value:
${MYSQL_PASSWORD_DB}
Is there anything else to do to make Jenkins aware of the configuration? Did I misunderstand something?
Thanks for your help!
Alban
You can add a "?" at the beginning of the variable name, so it will be treated as an optional override.
db.default.url=${?MYSQL_URL_DB}
You can also handle fallback situations with this approach, if you like.
db.default.url=mysql://fallback_url
db.default.url=${?MYSQL_URL_DB}
If MYSQL_URL_DB does not exist, fallback_url will be used.
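Putting it together, a sketch of the relevant part of application.conf could look like this (the fallback values are placeholders, not values from the question):
db.default.driver=com.mysql.jdbc.Driver
# placeholder defaults; each is overridden only when the corresponding variable is defined
db.default.url="jdbc:mysql://localhost/devdb"
db.default.url=${?MYSQL_URL_DB}
db.default.user=dev
db.default.user=${?MYSQL_USERNAME_DB}
db.default.password=dev
db.default.password=${?MYSQL_PASSWORD_DB}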
This configuration is injected at runtime, not at build time. You have to find a way to make the sbt build ignore the unresolved substitution.
It seems a possible workaround is to set MYSQL_URL_DB=foo, etc. as build environment variables so that the check doesn't break; they won't actually be injected into your configuration.
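A sketch of that workaround would be to export dummy values before invoking the build (the values below are placeholders; the real ones from bees config are injected at runtime):
# dummy values so the build-time substitution check passes
export MYSQL_URL_DB=placeholder
export MYSQL_USERNAME_DB=placeholder
export MYSQL_PASSWORD_DB=placeholder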
I use a config like this:
https://github.com/CloudBees-community/play2-clickstart/blob/master/conf/application.conf
and a build command like this:
java -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=384M -jar /opt/sbt/sbt-launch-0.11.3-2.jar -Dsbt.log.noformat=true clean compile test dist
And it does not worry about the missing environment variables.
My guess is that there is a Scala macro or something that triggers the compiler to resolve those variables. Adding them in is fine.
I have amended the clickstart to set default values in case they are needed.

Running ScalaTest in IntelliJ removes everything but idea_rt.jar from classpath

I am trying to run a ScalaTest test case from IntelliJ.
I specify the class path to use in the run/debug configuration dialogue and this appears to work ok.
However, if I try to interrogate this classpath using System.getProperty("java.class.path"), I only get idea_rt.jar.
This wouldn't be a problem except I am trying to do this inside an Integration test:
val execArgs = Array("java",
                     "-classpath",
                     System.getProperty("java.class.path"),
                     mainClass,
                     args)
val process = Runtime.getRuntime.exec(execArgs)
And, of course, my mainClass is not found.
My configuration:
scalatest_2.9.2-1.8.RC1
IntelliJ IDEA 11.1.3
scala-library-2.9.2
scala plugin version 0.5.977
Try editing bin/idea.properties (or Info.plist if you are on a Mac) and set this to true:
#---------------------------------------------------------------------
# Configure if a special launcher should be used when running processes from within IDE.
# Using Launcher enables "soft exit" and "thread dump" features
#---------------------------------------------------------------------
idea.no.launcher=true