I'm new to Micronaut and to server-side programming in general. The Micronaut documentation, unfortunately, does not make a lot of sense to me, as I do not have a Java background. A lot of the terms like "ApplicationContext" make sense in English, but I have no idea how to use them in practice.
I'm trying to start with a very basic app that prints a different configuration ("localhost", "dev", "prod") depending on the environment it is running in.
Here's my controller:
@Controller("/")
class EnvironmentController {

    // this should return "localhost", "DEV", "PROD" depending on the environment
    @Get("/env")
    @Produces(MediaType.TEXT_PLAIN)
    fun env() = "???" // what should I put here?

    // this should return the correct mongodb connection string for the environment
    @Get("/mongo")
    @Produces(MediaType.TEXT_PLAIN)
    fun mongo() = "???" // what should I put here?
}
Here's the application.yml. Ideally I'd have one YAML file per environment:
micronaut:
  application:
    name: myApp
  server:
    port: 8090
environment: localhost
mongodb:
  uri: 'mongodb://localhost:27017'
Application.kt and the rest of the files generated by the mn CLI tool are untouched. How can I set per-environment parameters, or pass the YAML file as a parameter when starting Micronaut?
Are there any conventions around this?
You can specify an environment with the -Dmicronaut.environments system property (for example -Dmicronaut.environments=dev), or by specifying it in the context builder Micronaut.run in your Application class.
https://docs.micronaut.io/latest/guide/index.html#environments
Then a file named application-{environment}.yml (for example application-dev.yml) will be loaded in addition to application.yml.
https://docs.micronaut.io/latest/guide/index.html#propertySource
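To answer the two "what should I put here?" questions in the controller, here is a minimal sketch of one way to do it (shown in Java; it assumes the mongodb.uri property from the question's application.yml): inject the active Environment for /env, and bind mongodb.uri with @Value for /mongo.

import io.micronaut.context.annotation.Value;
import io.micronaut.context.env.Environment;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.Produces;

@Controller("/")
public class EnvironmentController {

    private final Environment environment;
    private final String mongoUri;

    // mongodb.uri is resolved from application.yml, overridden by
    // application-{env}.yml for whichever environment is active
    public EnvironmentController(Environment environment,
                                 @Value("${mongodb.uri}") String mongoUri) {
        this.environment = environment;
        this.mongoUri = mongoUri;
    }

    // returns the names of the active environments, e.g. "dev" or "prod"
    @Get("/env")
    @Produces(MediaType.TEXT_PLAIN)
    public String env() {
        return String.join(",", environment.getActiveNames());
    }

    // returns the MongoDB connection string for the active environment
    @Get("/mongo")
    @Produces(MediaType.TEXT_PLAIN)
    public String mongo() {
        return mongoUri;
    }
}

With an application-dev.yml that overrides mongodb.uri, starting the app with -Dmicronaut.environments=dev should then make /env return "dev" and /mongo return the dev connection string.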
The docs are pretty clear on this
By default Micronaut only looks for application.yml. Then, for the test, dev and prod environments, it loads application.yml and overrides any values there with the ones defined in application-test.yml, application-dev.yml and application-prod.yml, respectively.
If you want to enable any other environment, you need to do it manually
public static void main(String[] args) {
    Micronaut.build(args)
        .mainClass(Application.class)
        .defaultEnvironments("dev")
        .start();
}
https://docs.micronaut.io/latest/guide/index.html#_default_environment
Related
According to the .NET Core documentation, I should be able to set the application name using an environment variable.
Environment variable: ASPNETCORE_APPLICATIONKEY
I am not seeing this to be the case. I added the WebHostDefaults.ApplicationKey setting to Program.cs, but I am still unable to override it with an environment variable.
private static IWebHost BuildWebHost(string[] args)
{
    var config = new ConfigurationBuilder()
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("hosting.json", true)
        .AddEnvironmentVariables("ASPNETCORE_")
        .AddCommandLine(args)
        .Build();

    return WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging((context, builder) => { builder.ClearProviders(); })
        .UseConfiguration(config)
        .PreferHostingUrls(true)
        .UseStartup<Startup>()
        .UseSetting(WebHostDefaults.ApplicationKey, "CustomApplicationName")
        .Build();
}
In Startup.cs I am only seeing "CustomApplicationName" rather than the environment variable value.
public class Startup
{
    public Startup(IConfiguration configuration, IHostingEnvironment hostingEnvironment)
    {
        Configuration = configuration;
        Log.Information($"Startup of application {hostingEnvironment.ApplicationName} in Environment Mode {hostingEnvironment.EnvironmentName}");
    }
}
I have tried using double underscore in the environment variable name as well.
I am running on Mac OS.
As mentioned in other answers, the correct environment variable name is ASPNETCORE_APPLICATIONNAME, and it is documented here. However, it will not work, even as of .NET Core 3.1. There is a GitHub issue that describes the details of this bug, but essentially, the code inside the UseStartup<>() method sets ApplicationName back to its default value, which is the name of the assembly.
Even if you could override it back using the UseSetting() method, I wouldn't do it, based on the warnings in the discussion thread at this related GitHub issue. The safest bet for now seems to use your own separate environment variable.
I suspect this is something that the documentation "invented" and isn't actually implemented.
ASP.NET Core is hosted on GitHub, so I did a search. The only place where ASPNETCORE_APPLICATIONKEY shows up is in the documentation itself. The only issue/PR where it comes up is https://github.com/aspnet/Docs/pull/7493, which is the pull request that added this environment variable to the docs and includes this insightful statement:
Did I just make up ASPNETCORE_APPLICATIONKEY? Is that a thing?
I have a Ratpack app written with the Groovy DSL. (Embedded in Java, so not a script.)
I want to load the server's SSL certificates from a config file supplied in the command line options. (The certs will be directly embedded in the config, or possibly in a PEM file referenced somewhere in the config.)
For example:
java -jar httpd.jar /etc/app/sslConfig.yml
sslConfig.yml:
---
ssl:
  privateKey: file:///etc/app/privateKey.pem
  certChain: file:///etc/app/certChain.pem
I seem to have a chicken-and-egg problem: I want to use serverConfig's facilities for reading the config file in order to configure the SslContext later in the same serverConfig block, but the server config hasn't been created yet at the point where I need to load the SslContext.
To illustrate, the DSL definition I have is something like this:
// SSL Config POJO definition
class SslConfig {
    String privateKey
    String certChain
    SslContext build() { /* ... */ }
}

// ... other declarations here...

Path configPath = Paths.get(args[1]) // get this path from the CLI options

ratpack {
    serverConfig {
        yaml "/defaultConfig.yaml"  // Defaults defined in this resource
        yaml configPath             // The user-supplied config file
        env()
        sysProps('genset-server')
        require("/ssl", SslConfig)  // Map the config to a POJO
        ssl sslConfig               // HOW DO I GET AN INSTANCE OF that SslConfig POJO HERE?
        baseDir BaseDir.find()
    }
    handlers {
        get { // ...
        }
    }
}
Possibly there is a solution to this (loading the SSL context in a later block?)
Or possibly just a better way to go about the whole thing..?
You could create a separate ConfigDataBuilder to load up a config object to deserialize your ssl config.
Alternatively, you can bind directly to server.ssl. All of the ServerConfig properties bind to the server space within the config.
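For the first suggestion, here is a rough sketch of what that could look like (in Java, since the app is embedded in Java; the SslBootstrap class and loadSslContext method names are just for illustration, and it assumes the SslConfig POJO from the question plus Ratpack's ConfigData API):

import java.nio.file.Path;
import java.nio.file.Paths;

import io.netty.handler.ssl.SslContext;
import ratpack.config.ConfigData;

class SslBootstrap {

    // Loads the user-supplied YAML on its own, before and outside of the
    // serverConfig { ... } block, purely to extract the SSL settings.
    static SslContext loadSslContext(String configFile) throws Exception {
        Path configPath = Paths.get(configFile);

        ConfigData sslData = ConfigData.of(c -> c
                .yaml(configPath));

        // Deserialize the "ssl" node into the SslConfig POJO and build the context
        SslConfig sslConfig = sslData.get("/ssl", SslConfig.class);
        return sslConfig.build();
    }
}

The resulting SslContext can then be passed to ssl(...) inside serverConfig, while the same YAML file is still fed to serverConfig for everything else.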
The solution I am currently using is this, with the addition of a builder() method to SslConfig which returns an SslContextBuilder defined using its other fields.
ratpack {
    serverConfig {
        // Defaults defined in this resource
        yaml RatpackEntryPoint.getResource("/defaultConfig.yaml")

        // Optionally load the config path passed via the configFile parameter (if not null)
        switch (configPath) {
            case ~/.*[.]ya?ml/: yaml configPath; break
            case ~/.*[.]json/: json configPath; break
            case ~/.*[.]properties/: props configPath; break
        }

        env()
        sysProps('genset-server')
        require("/ssl", SslConfig) // Map the config to a POJO
        baseDir BaseDir.find()

        // This is the important change.
        // It apparently needs to come last, because it prevents
        // later config directives working without errors
        ssl build().getAsConfigObject('/ssl', SslConfig).object.builder().build()
    }
    handlers {
        get { // ...
        }
    }
}
Essentially this performs an extra build of the ServerConfig in order to redefine the input to the second build, but it works.
I am writing integration tests using Arquillian with embedded GlassFish 3.1.2.2 and TestNG. I want to be able to run those tests in parallel, and for this I need to dynamically configure the GlassFish ports and the database name (we already have this set up, and I want to reuse it for the Arquillian tests). What I am missing is a 'before container start' hook, where I could prepare the database, look up free ports and update my GlassFish configuration (domain.xml, could also be glassfish-resources.xml). Is there a 'clean' solution for this, or was my use case not foreseen by the Arquillian developers?
The hacky way I solved it for now is to override Arquillian's beforeSuite method, but this one gets called twice - at test startup and then in the container (hence my pathetic static flag). Secondly, this solution would not work for JUnit-based tests, as there's no way to intercept Arquillian's before-suite there:
public class FullContainerIT extends Arquillian {

    private static boolean dbInitialized;

    //@RunAsClient <- supported by @Test only
    @Override
    @BeforeSuite(groups = "arquillian", inheritGroups = true)
    public void arquillianBeforeSuite() throws Exception {
        if (dbInitialized == false) {
            initializeDb();
            dbInitialized = true;
        }
        super.arquillianBeforeSuite();
    }
}
Some ideas I had:
+ having @BeforeSuite @RunAsClient seems to be what I need, but @RunAsClient is supported for @Test only;
+ I have seen the org.jboss.arquillian.container.spi.event.container.BeforeStart event in the Arquillian JavaDocs, but I have no clue how to listen to Arquillian events;
+ I have seen there is a possibility to have @Deployment create a ShrinkWrap Descriptor, but these do not support GlassFish resources.
I found a clean solution for my problem on the JBoss forum. You can register a LoadableExtension SPI and modify the Arquillian config (loaded from XML). This is where I can create a database and filter glassfish-resources.xml with the proper values. The setup looks like this:
package com.example.extension;

public class AutoDiscoverInstanceExtension
        implements org.jboss.arquillian.core.spi.LoadableExtension {

    @Override
    public void register(ExtensionBuilder builder) {
        builder.observer(LoadContainerConfiguration.class);
    }
}
package com.example.extension;

public class LoadContainerConfiguration {

    public void registerInstance(@Observes ContainerRegistry registry, ServiceLoader serviceLoader) {
        //Do the necessary setup here
        String filteredFilename = doTheFiltering();

        //Get the container defined in arquillian.xml and modify it
        //"default" is the container's qualifier
        Container definition = registry.getContainer("default");
        definition.getContainerConfiguration()
                .property("resourcesXml", filteredFilename);
    }
}
You also need to configure the SPI Extension by creating a file
META-INF/services/org.jboss.arquillian.core.spi.LoadableExtension
with this content:
com.example.extension.AutoDiscoverInstanceExtension
I have a YAML config for my Symfony2 project using Doctrine2. I don't understand exactly how to adapt the cookbook entry to a YAML setup.
My doctrine mapping is at /path/to/my/bundle/Resources/config/doctrine/IpRange.orm.yml
When running PHPUnit, I get the error:
Doctrine\ORM\Mapping\MappingException: No mapping file found named 'Yitznewton.FreermsBundle.Entity.IpRange.orm.yml' for class 'Yitznewton\FreermsBundle\Entity\IpRange'.
Sounds like I need to configure the test rig to use the symfony file naming conventions, but I don't know how to do that.
Using symfony-standard.git checked out to v2.0.7
Here's my test:
<?php

namespace Yitznewton\FreermsBundle\Tests\Utility;

use Doctrine\Tests\OrmTestCase;
use Doctrine\ORM\Mapping\Driver\DriverChain;
use Doctrine\ORM\Mapping\Driver\YamlDriver;
use Yitznewton\FreermsBundle\Entity\IpRange;
use Yitznewton\FreermsBundle\Entity\IpRangeRepository;

class IpRangeRepositoryTest extends OrmTestCase
{
    private $_em;

    protected function setup()
    {
        // FIXME: make this path relative
        $metadataDriver = new YamlDriver('/var/www/symfony_2/src/Yitznewton/FreermsBundle/Resources/config/doctrine');
        $metadataDriver->setFileExtension('.orm.yml');

        $this->_em = $this->_getTestEntityManager();
        $this->_em->getConfiguration()
            ->setMetadataDriverImpl($metadataDriver);
        $this->_em->getConfiguration()->setEntityNamespaces(array(
            'FreermsBundle' => 'Yitznewton\\FreermsBundle\\Entity'));
    }

    protected function getRepository()
    {
        return $this->_em->getRepository('FreermsBundle:IpRange');
    }

    public function testFindIntersecting_RangeWithin_ReturnsIpRange()
    {
        $ipRange = new IpRange();
        $ipRange->setStartIp('192.150.1.1');
        $ipRange->setEndIp('192.160.1.1');

        $this->assertEquals(1, count($this->getRepository()
            ->findIntersecting($ipRange)),
            'some message');
    }
}
Looking again at the Symfony docs, it seems that integration testing with a live test database is preferred to unit testing for repository classes. That is, there is no support for stubbing EntityManagers.
Looking to make a Client that sends serialized Message objects back to a server via WCF.
To make things easy for the end developers (different departments), it would be best if they didn't need to know how to edit their config file to set up the client endpoint data.
That said, it would also be brilliant if the endpoint wasn't embedded/hard-coded into the Client either.
A mixed scenario would appear to me to be the easiest solution to roll out:
IF (described in config) USE config file ELSE fall back to hard-coded endpoint.
What I've found out is:
new Client(); fails if no config file definition is found.
new Client(binding, endpoint); works
therefore:
Client client;
try
{
    client = new Client();
}
catch
{
    // Guess not defined in config file...
    // fall back to hard-coded solution:
    client = new Client(binding, endpoint);
}
But is there any way to check (other than try/catch) whether the config file has an endpoint declared?
Wouldn't the above also fail if it is defined in the config file but not configured right? It would be good to distinguish between the two conditions.
I would like to propose an improved version of AlexDrenea's solution, one that uses only the dedicated types for the configuration sections.
Configuration configuration = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ServiceModelSectionGroup serviceModelGroup = ServiceModelSectionGroup.GetSectionGroup(configuration);

if (serviceModelGroup != null)
{
    ClientSection clientSection = serviceModelGroup.Client;

    // make all your tests about the correctness of the endpoints here
}
Here is a way to read the configuration file and load the data into an easy-to-manage object:
Configuration c = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
ConfigurationSectionGroup csg = c.GetSectionGroup("system.serviceModel");

if (csg != null)
{
    ConfigurationSection css = csg.Sections["client"];
    if (css != null && css is ClientSection)
    {
        ClientSection cs = (ClientSection)csg.Sections["client"];

        // make all your tests about the correctness of the endpoints here
    }
}
The "cs" object will expose a collection named "endpoints" that allows you to access all the properties that you find in the config file.
Make sure you also treat the "else" branches of the "if"s and treat them as fail cases (configuration is invalid).