Load Elixir configs hierarchically in multiple projects - config

I am writing a small project in Elixir where I will use the built-in configuration capability. Here is how it looks: I have a general project that will call APIs:
api/config/config.exs:
use Mix.Config
config :api, :status, "awesome"
I now have a second project that should utilize these variables
api_consumer/mix.exs
def application do
  [applications: [:logger, :api]]
end
When I run a console in api_consumer accessing the variable yields a nil result.
iex -S mix
iex(1)> Application.get_env(:api, :status)
=> nil
From what I understand (and from what I read here) that should work.
Does anybody know what's going on here?

mix.exs is where an application provides its own default configuration, while config.exs of the top-level project is where you configure applications, including dependencies (a dependency's own config.exs is not loaded, which is why you see nil). In your :api application, you should put the default values in the application/0 function inside mix.exs:
# api/mix.exs
def application do
  [
    applications: [:logger],
    env: [status: "awesome"]
  ]
end
Then, you can override this setting in your :api_consumer application inside the config.exs file:
# api_consumer/config/config.exs
config :api, status: "fantastic"
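With that in place, the consumer can read the value with Application.get_env/2. A minimal sketch (the MyConsumer module name is hypothetical, not from your project):
# api_consumer/lib/my_consumer.ex
defmodule MyConsumer do
  def status do
    # Returns "awesome" from :api's mix.exs defaults,
    # or "fantastic" once api_consumer's config.exs overrides it
    Application.get_env(:api, :status)
  end
end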
More info can be found here.

Related

How to provide an HttpClient to a Ktor server from the outside to facilitate mocking external services?

I am trying to provide an HttpClient from the outside to my Ktor server so that I can mock external services and write tests; however, I get this exception when I run my test:
io.ktor.server.application.DuplicatePluginException: Please make sure that you use unique name for the plugin and don't install it twice. Conflicting application plugin is already installed with the same key as `Compression`
at app//io.ktor.server.application.ApplicationPluginKt.install(ApplicationPlugin.kt:112)
at app//com.example.plugins.HTTPKt.configureHTTP(HTTP.kt:13)
at app//com.example.ApplicationKt.module(Application.kt:14)
at app//com.example.ApplicationTest$expected to work$1$1.invoke(ApplicationTest.kt:39)
at app//com.example.ApplicationTest$expected to work$1$1.invoke(ApplicationTest.kt:38)
and that's a bit unexpected to me because I am not applying the Compression plugin twice as far as I can tell. If I run the server normally and manually call my endpoint with curl then it works as expected. What am I doing wrong?
I added a runnable sample project (based on the official ktor-documentation-sample project) with a failing test.
The problem is that you have an application.conf file, and by default the testApplication function tries to load the modules enumerated there. Since you also load them explicitly in the application {} block, the DuplicatePluginException occurs. To solve the problem, you can explicitly load an empty configuration instead of the default one:
// ...
application {
    module(client)
}
environment {
    config = MapApplicationConfig()
}
// ...
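For completeness, here is a sketch of what the whole test could look like under these assumptions: Ktor 2.x's testApplication, the module(client: HttpClient) entry point from the question, and a MockEngine-backed client standing in for the external service (the class and test names are illustrative, not from the sample project):
import com.example.module // the module(client: HttpClient) function from the question's project
import io.ktor.client.*
import io.ktor.client.engine.mock.*
import io.ktor.client.request.*
import io.ktor.http.*
import io.ktor.server.config.*
import io.ktor.server.testing.*
import kotlin.test.*

class ApplicationTest {
    @Test
    fun rootEndpointWorksWithMockedExternalService() = testApplication {
        // Replace the default application.conf so testApplication does not
        // load (and install plugins for) the modules listed there a second time
        environment {
            config = MapApplicationConfig()
        }
        // External-service client mocked with MockEngine
        val externalClient = HttpClient(MockEngine) {
            engine {
                addHandler { respond("{}", HttpStatusCode.OK) }
            }
        }
        application {
            module(externalClient)
        }
        val response = client.get("/")
        assertEquals(HttpStatusCode.OK, response.status)
    }
}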

JanusGraph Remote Graph NoSuchFieldError: V3_0 error

I am following this example:
https://github.com/JanusGraph/janusgraph/tree/master/janusgraph-examples/example-remotegraph
I would like to debug this project. I configured it (HBase + Solr) and ran the JanusGraph server with the
$JANUSGRAPH_HOME/bin/gremlin-server.sh $JANUSGRAPH_HOME/conf/gremlin-server/gremlin-server.yaml
command.
I passed this argument to IDEA via Run Configuration > Program Arguments
[Project Home]/conf/jgex-remote.properties
My jgex-remote.properties file is:
gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
# cluster file has the remote server configuration
gremlin.remote.driver.clusterFile=[Project Home]/conf/remote-objects.yaml
# source name is the global graph traversal source defined on the server
gremlin.remote.driver.sourceName=g
and my remote-objects.yaml file includes:
hosts: [127.0.0.1]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0,
  config: {
    ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry]
  }
}
It tries to run this command:
cluster = Cluster.open(conf.getString("gremlin.remote.driver.clusterFile"));
And throws this exception:
Exception in thread "main" java.lang.NoSuchFieldError: V3_0 at
org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0.(GryoMessageSerializerV3d0.java:41)
at
org.apache.tinkerpop.gremlin.driver.ser.Serializers.simpleInstance(Serializers.java:77)
at
org.apache.tinkerpop.gremlin.driver.Cluster$Builder.(Cluster.java:472)
at
org.apache.tinkerpop.gremlin.driver.Cluster$Builder.(Cluster.java:469)
at
org.apache.tinkerpop.gremlin.driver.Cluster.getBuilderFromSettings(Cluster.java:167)
at
org.apache.tinkerpop.gremlin.driver.Cluster.build(Cluster.java:159)
at org.apache.tinkerpop.gremlin.driver.Cluster.open(Cluster.java:233)
at
com.ets.dataplatform.init.RemoteGraphApp.openGraph(RemoteGraphApp.java:72)
at com.ets.dataplatform.init.GraphApp.runApp(GraphApp.java:290) at
com.ets.dataplatform.init.RemoteGraphApp.main(RemoteGraphApp.java:195)
This error is not meaningful to me.
Thanks in advance.
I would try to align your versions. I assume that you are using JanusGraph 0.2.0. If you look at the pom.xml for that version you'll see that it is bound to TinkerPop 3.2.6:
https://github.com/JanusGraph/janusgraph/blob/v0.2.0/pom.xml#L68
Change to that version in your application and see if the connection works. Taking that approach should not only fix your problem but also ensure that you don't run into other incompatibilities. That is not to say that you can't configure later versions of TinkerPop to work with 3.2.6, but it requires a bit more configuration and you have to be aware of minor changes that might affect how certain operations behave.
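For example, with a Maven build the client-side TinkerPop dependency can be pinned to the version JanusGraph 0.2.0 is built against. A minimal sketch, assuming gremlin-driver is the artifact your remote app depends on:
<!-- align the driver with the TinkerPop version bundled by JanusGraph 0.2.0 -->
<dependency>
    <groupId>org.apache.tinkerpop</groupId>
    <artifactId>gremlin-driver</artifactId>
    <version>3.2.6</version>
</dependency>
<dependency>
    <groupId>org.janusgraph</groupId>
    <artifactId>janusgraph-core</artifactId>
    <version>0.2.0</version>
</dependency>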

Laravel 5: Why Class 'App\Console\Commands\SSH' not found?

I'm following instructions on
https://laravelcollective.com/docs/5.1/ssh
in order to use SSH to perform SFTP download from a private server.
Here's what I've done so far:
$> composer require laravelcollective/remote
added in config/app.php:
'providers' => [
    ....
    Collective\Remote\RemoteServiceProvider::class,
    ...
],

'aliases' => [
    ....
    'SSH' => Collective\Remote\RemoteFacade::class,
    ...
]
published it:
$> php artisan vendor:publish --provider="Collective\Remote\RemoteServiceProvider"
Then I also ran a composer update.
But still in my console command if I test it like:
$contents = SSH::into('production')->getString('/hi.txt');
dd($contents);
I get the error in my question.
When a service provider is defined like the above, is the class globally accessible? Or do I still need to add a use Path\To\Class directive?
If so, since the alias has a different name from the real class name, how should I specify the use directive?
use Collective\Remote\RemoteServiceProvider
or
use Collective\Remote\RemoteFacade
?
What am I missing?... I've tested other preconfigured services that come with a fresh Laravel 5.2 install (i.e. Redis) and they seem to be found without any additional use directive in the class.....
Thanks
Just use it as the global class \SSH and it will work.
$contents = \SSH::into('production')->getString('/hi.txt');
dd($contents);
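If you prefer an explicit import over the fully qualified \SSH, a use statement for the registered root alias also works. A minimal sketch (the command class name and signature are hypothetical; the "production" connection comes from the published config/remote.php):
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use SSH; // the root alias registered in config/app.php

class FetchRemoteFile extends Command
{
    protected $signature = 'remote:fetch';

    public function handle()
    {
        // Reads /hi.txt over the "production" connection defined in config/remote.php
        $contents = SSH::into('production')->getString('/hi.txt');
        $this->info($contents);
    }
}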

How to invoke a Play application without hitting the URL (HTTP request)?

I'm using a Play application (Play version 2.1.0) with RabbitMQ, and I do not have any view component.
So I would like to invoke this Play application on server startup, without hitting the execution URL (http://localhost:9000/<routing_info>).
I would also like to know whether there is any way in Play 2.1.0 to run application code on server startup, i.e. bootstrapping. Is this option available in Play 2.1.0?
From what I've read, the documentation only mentions this for version 1.2.
Please help!!
Play allows you to define a 'global' object which will be instantiated automatically by Play when the application starts.
In application.conf you should find the following:
# Global object class
# ~~~~~
# Define the Global object class for this application.
# Default to Global in the root package.
application.global=global.Global
On a new Play application, this line is commented out. I've uncommented it and made it point to an object called Global in the global package. You can make it whatever you want.
Your global object should extend GlobalSettings.
In my applications, I use a static initialiser block to run code when that class is loaded:
public class Global extends GlobalSettings
{
    static
    {
        // code placed here runs once, when the Global class is first loaded
        ...
    }
}
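Alternatively, GlobalSettings also exposes an onStart hook that Play calls once the application has started, which is the more common place for bootstrap code. A minimal sketch (the logging line is just a placeholder for your own startup logic):
package global;

import play.Application;
import play.GlobalSettings;
import play.Logger;

public class Global extends GlobalSettings
{
    @Override
    public void onStart(Application app)
    {
        // Runs once on server startup, e.g. to wire up RabbitMQ consumers
        Logger.info("Application started, bootstrapping background work...");
    }
}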

rails 3.0.9 resque-scheduler and delayed job error undefined method enqueue_at

Context: Rails 3.0.9, using resque 1.17.1 and resque-scheduler 2.0.0.d.
Trying to follow the document at https://github.com/bvandenbos/resque-scheduler/tree/v2.0.0.d, I've created a resque_scheduler.rake file:
# Resque tasks
require 'resque/tasks'
require 'resque_scheduler/tasks'

namespace :resque do
  task :setup do
    require 'resque'
    require 'resque_scheduler'
    require 'resque/scheduler'

    # you probably already have this somewhere
    Resque.redis = 'localhost:6379'

    # The schedule doesn't need to be stored in a YAML, it just needs to
    # be a hash. YAML is usually the easiest.
    #Resque.schedule = YAML.load_file('your_resque_schedule.yml')

    # If your schedule already has +queue+ set for each job, you don't
    # need to require your jobs. This can be an advantage since it's
    # less code that resque-scheduler needs to know about. But in a small
    # project, it's usually easier to just include your job classes here.
    # So, something like this:
    #require 'jobs'

    # If you want to be able to dynamically change the schedule,
    # uncomment this line. A dynamic schedule can be updated via the
    # Resque::Scheduler.set_schedule (and remove_schedule) methods.
    # When dynamic is set to true, the scheduler process looks for
    # schedule changes and applies them on the fly.
    # Note: This feature is only available in >=2.0.0.
    Resque::Scheduler.dynamic = true
  end
end
For the time being I'm only interested in delayed job, so I don't have any resque_schedule.yml file.
I've tested my worker class with resque and it is working fine. When I try to add a delay and use enqueue_at in my controller...
def do_delay_job user_id, delay
  Resque.enqueue_at(delay.minutes.from_now, JobDelayer, :user_id => user_id)
  #Resque.enqueue(JobDelayer, user_id) # using basic resque mechanism.
end
...it just fails
undefined method `enqueue_at' for Resque Client connected to redis://127.0.0.1:6379/0:Module
Any clue or hint to figure out this issue will be appreciated.
There were a couple of issues here. The documentation is not always obvious and assumes things you're supposed to already know... I didn't. So after digging all over the place I got resque running nice and smooth ;-)
config/initializers/resque.rb must require resque_scheduler (a sketch of that file follows after these steps):
require 'resque_scheduler'
The resque worker task must be started:
COUNT=5 QUEUE=* rake resque:work
The resque-scheduler task must be started:
rake resque:scheduler
To monitor resque-scheduler, resque-web must be started with the resque config file as a parameter. That file must not reference anything from Rails directly, since resque-web is a Sinatra app and won't be able to load it properly.
resque-web ~/pathToYourApp/config/initializers/resque.rb
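Putting the first step together, here is a minimal sketch of config/initializers/resque.rb under these assumptions (Redis on localhost, and no Rails-specific code so resque-web can load the file on its own):
# config/initializers/resque.rb
require 'resque'
require 'resque_scheduler'

# Same Redis instance as the worker and scheduler rake tasks
Resque.redis = 'localhost:6379'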
Starting both the worker and scheduler processes was indeed necessary.
What I found out in addition was that I needed to call
require 'resque_scheduler'
before calling Resque.enqueue_at(...). This was the very cause of the "undefined method" error in my case.
And resque-web can actually be hooked into your Rails app. Add the following lines to config/routes.rb, reboot the Rails app, and then you can access resque-web at $YOUR_RAILS_ROOT_URL/resque.
require 'resque_scheduler'
mount Resque::Server, :at => "/resque"