I've been playing around with Elixir/Phoenix third-party modules (modules that are used to fetch some data from a third-party service). One of those modules looks like this:
defmodule TwitterService do
  @twitter_url "https://api.twitter.com/1.1"

  def fetch_tweets(user) do
    # The actual code to fetch tweets
    HTTPoison.get(@twitter_url)
    |> process_response
  end

  def process_response({:ok, resp}) do
    {:ok, Poison.decode! resp}
  end

  def process_response(_fail), do: {:ok, []}
end
The actual data doesn't matter in my question. What I'm interested in is how I can dynamically configure the @twitter_url module attribute in tests to make some of the tests fail on purpose. For example:
defmodule TwitterServiceTest do
  test "module returns {:ok, []} when the Twitter API isn't available" do
    # I'd like this to be possible (coming from the world of Rails)
    TwitterService.configure(:twitter_url, "new_value") # This line isn't possible
    # Now TwitterService shouldn't get anything from the URL
    tweets = TwitterService.fetch_tweets("test")
    assert {:ok, []} = tweets
  end
end
How can I achieve this?
Note: I know I can use config files to configure @twitter_url separately in the dev and test environments, but I'd like to be able to test against a real response from the Twitter API too, and that would change the URL for the entire test environment.
One of the solutions that I came up with was
def fetch_tweets(user, opts \\ []) do
  _fetch_tweets(user, opts[:fail_on_test] || false)
end

defp _fetch_tweets(user, true) do
  # Fails
end

defp _fetch_tweets(user, false) do
  # Normal fetching
end
But that just seems hackish and silly, there must be a better solution to this.
As José suggested in Mocks and Explicit Contracts, the best way is probably to use dependency injection:
defmodule TwitterService do
  @twitter_url "https://api.twitter.com/1.1"

  def fetch_tweets(user, service_url \\ @twitter_url) do
    # The actual code to fetch tweets
    service_url
    |> HTTPoison.get()
    |> process_response
  end

  ...
end
Now in tests you just inject another dependency when necessary:
# to test against real service
fetch_tweets(user)
# to test against mocked service
fetch_tweets(user, SOME_MOCK_URL)
This approach will also make it easier to plug in a different service in the future. The processor implementation should not depend on its underlying service, assuming the service follows some contract (in this particular case, responding with JSON when given a URL).
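For the failure case you don't even need a mock server. A minimal test sketch, assuming the fetch_tweets/2 signature above: inject an unreachable URL so HTTPoison returns an error tuple and the fallback clause of process_response/1 kicks in.

defmodule TwitterServiceTest do
  use ExUnit.Case

  test "returns {:ok, []} when the Twitter API isn't available" do
    # Nothing listens on port 1, so HTTPoison.get/1 returns
    # {:error, %HTTPoison.Error{}} and process_response/1 falls through.
    assert {:ok, []} = TwitterService.fetch_tweets("test", "http://localhost:1")
  end
end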
config sounds like a good way to go here. You can modify the value in the config at runtime in your test and then restore it after the test.
First, in your actual code, instead of @twitter_url, use Application.get_env(:my_app, :twitter_url).
Then, in your tests, you can use a wrapper function like this:
def with_twitter_url(new_twitter_url, func) do
  old_twitter_url = Application.get_env(:my_app, :twitter_url)
  Application.put_env(:my_app, :twitter_url, new_twitter_url)

  try do
    func.()
  after
    # Restore the old value even if the test raises.
    Application.put_env(:my_app, :twitter_url, old_twitter_url)
  end
end
Now in your tests, do:
with_twitter_url "<new url>", fn ->
  # All calls to your module here will use the new url.
end
Make sure you're not using async tests for this, as this technique modifies the global environment.
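A variation on the same idea, sketched here with the same (hypothetical) :my_app and :twitter_url keys, is to let ExUnit's on_exit callback do the restoring:

setup do
  old_twitter_url = Application.get_env(:my_app, :twitter_url)
  Application.put_env(:my_app, :twitter_url, "http://localhost:1")
  # on_exit runs after each test, even when the test fails.
  on_exit(fn -> Application.put_env(:my_app, :twitter_url, old_twitter_url) end)
  :ok
end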
Related
I've been trying to figure out a way to test the following piece of code (in Elixir):
defmacro __using__(_) do
  quote do
    # API functions will be used from this client
    import Client.API

    def list_storages do
      case list_buckets do
        {:ok, res} ->
          case res.status_code do
            200 ->
              res.body
              |> Friendly.find("name")
              |> Enum.map(fn bucket -> bucket.text end)
            _ ->
              res |> show_error_message_and_code
          end
        {:error, reason} ->
          parse_http_error reason
      end
    end

    ...
The problem is that the list_buckets function is imported from the Client.API module (which is already tested in another project that I can't change). My idea was to stub/mock/dummy the API functions so that they return just a dummy reply. I've tried using defoverridable to override the list_buckets function, but that doesn't work since the function definition happens in another module.
I've read the following post by José Valim, which helped me test the Client.API module itself, but I can't find a way to apply those concepts to this particular problem.
My only (and admittedly poor) idea so far is to reimplement every function inside the macro in a test file and use a dummy API function defined there, but that feels very wrong and wouldn't help when the non-test code changes.
Basically I want to test if the three possible cases are correct:
Receiving {:ok, res} and code 200 -> outputs the correct data
Receiving {:ok, res} and another code -> outputs the error message and code
Receiving {:error, reason} -> parses the HTTP error and outputs the reason for failure
Can anyone help me with this?
You can still use the principles from that blog post. Instead of importing Client.API, pass it as an argument to list_storages:
def list_storages(api \\ Client.API) do
  case api.list_buckets do
This way you don't need to change anything in your application code, and you can pass a dummy mock when testing the function.
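A minimal sketch of such a dummy, assuming the response shapes from the question (FakeAPI is a made-up name; it only has to honor the list_buckets contract):

defmodule FakeAPI do
  # Returns an error tuple to exercise the {:error, reason} branch.
  def list_buckets, do: {:error, :timeout}
end

# In a test, ModuleUnderTest.list_storages(FakeAPI)
# should now go through parse_http_error/1.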
I would like to create a changeable boundary module. Ideally the result would look like any other module, but the behaviour could be set at compile time or in the configuration files. I think I am looking for something like -define in Erlang.
Say I have a SystemClock module and a DummyClock tuple module. Ideally the Clock module would be one or the other of the two modules above, chosen in the config file.
In config/test.exs
define(Clock, {DummyClock, 12345678})
Later
Clock.now
# => 12345678
In config/prod.exs
define(Clock, SystemClock)
Later
Clock.now
# => 32145687
I think the easiest way to do this is with configs and Application.get_env/2.
in config/test.exs
config :my_application, clock: DummyClock
in config/dev.exs and config/prod.exs
config :my_application, clock: RealClock
in the code that uses the clock
defp clock, do: Application.get_env(:my_application, :clock)

def my_function(arg1, arg2) do
  now = clock.now
  # ...
end
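For completeness, a minimal sketch of the two clock implementations; the names come from the config above, the bodies are assumed, and both satisfy the same now/0 contract:

defmodule RealClock do
  def now, do: System.system_time(:second)
end

defmodule DummyClock do
  # Fixed timestamp, so tests are deterministic.
  def now, do: 12345678
end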
An option that I've used many times is Meck, which you can get/read about here. For your example, we might write something like:
test "something that depends on Clock calls Clock.now" do
:meck.new(Clock, [])
:meck.expect(Clock, :now, fn -> 12345678 end)
# we expect that the module under test is getting it's result from Clock
assert ModuleUnderTest.execute == 12345678
end
Instead of re-compiling your code to get :test behavior, Meck replaces the functionality of your modules on the fly during your tests.
The discussion referenced in other comments here points out some reasons not to use mocking; yes, you have to sacrifice certain things (running tests asynchronously, and José's approval of your testing code), but in exchange you can dramatically simplify testing code in tricky situations.
I'm making a library with a module that, when use'd, injects some functions dependent on the contents of a directory, and I want to test the behaviour with different directories. Currently I get the path to the directory through application config with Application.get_env/3.
If I change the directory with Application.put_env/4, my tests have to run sequentially, as this is effectively a global value, correct?
Can I stub out the call to Application.get_env/3? Or should I be passing in the value another way (such as via the use macro)?
The simplest way is to pass in the value as an argument. Your module could fall back to Application.get_env only when no value is passed in. Something like:
Application.put_env(MyApplication, :some_key, "hello")

defmodule Test do
  def test(string \\ Application.get_env(MyApplication, :some_key)) do
    IO.inspect(string)
  end
end

# Default behaviour
Test.test # => "hello"

# In your tests
Test.test("world") # => "world"
Is it possible to rename an existing Erlang module? I have some code in several Erlang modules that I want to use in an Elixir project (an assertion library).
I don't want to convert all the Erlang modules to Elixir as they are complete, but I want to rename them and possibly add additional functions to them. Both modules are simply collections of functions; they don't implement behaviours or do anything unusual.
I want to be able to take an existing Erlang module:
-module(foo_erl).
-export([some_fun/1]).

some_fun(Arg) ->
    ok.
And write an Elixir module to extend the Erlang one:
defmodule ExFoo do
  somehow_extends_erlang_module :foo_erl

  def another_fun(arg) do
    :ok
  end
end
And then be able to use the functions defined in the Erlang module:
iex(1)> ExFoo.some_fun(:arg) #=> :ok
Is this possible in Elixir? If so, how would I go about doing this?
Here's the first thing that I can suggest. I'm 100% sure it could be solved in a more elegant way; I'll come back to this later, since I'm not an Elixir guru yet.
defmodule Wrapper do
  defmacro __using__(module) do
    exports = module.module_info[:exports] -- [module_info: 0, module_info: 1]

    for {func_name, arity} <- exports do
      args = make_args(arity)

      quote do
        def unquote(func_name)(unquote_splicing(args)) do
          unquote(module).unquote(func_name)(unquote_splicing(args))
        end
      end
    end
  end

  defp make_args(0), do: []

  defp make_args(arity) do
    Enum.map 1..arity, &(Macro.var :"arg#{&1}", __MODULE__)
  end
end
defmodule WrapperTest do
  use ExUnit.Case, async: true
  use Wrapper, :lists

  test "max function works properly" do
    assert (max [1, 2]) == 2
  end
end
In general, we suggest avoiding wrapping Erlang modules. In case you want to wrap many modules in an automatic fashion, velimir's answer is right on the spot.
If you have a one-off case though, where you definitely want to wrap an Erlang module, I would suggest simply using defdelegate:
defmodule MyLists do
  defdelegate [flatten(list), map(fun, list)], to: :lists
end
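Usage then looks like any other Elixir module:

iex(1)> MyLists.flatten([1, [2, [3]]])
[1, 2, 3]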
This is a two part question, but may have the same answer.
Part One:
In our app, one particular controller gets hit a lot, so much so that we'd like its requests to be logged in a file separate from all other requests. Setting FoosController.logger is not what I'm looking for, because the request exercises some lib files and ActiveRecord objects that have their own logger references, and Rails will log some info before handing control to the controller in question.
Part Two:
We have a global before filter, included in our root application_controller.rb, that runs before most actions of most controllers. This before_filter is very wordy in the logs and is a candidate for having all its logging info sent to a separate file. It also calls out to libs and ActiveRecord code with their own references to the logger.
One possible solution is to run the single controller as its own standalone application. I haven't tried it yet because it's pretty tied into the internals of the app. This approach also would not help with the before_filter.
Are there any good solutions for more fine-grained logging in rails apps?
Thanks!
For Part One
I recommend creating your own logger class (possibly inheriting from the Ruby Logger (1)) and having it inspect the request URL, logging to a specific file based on that.
For Part Two
The easiest solution would be to just use a separate logger for these methods. Instead of logger.debug "your message" you just call method_logger.debug "your message".
Creating your own logger is simple; just use the Ruby Logger class (1). The following code creates a logger that logs to a my_method.log file in the log dir of your Rails application.
method_logger = Logger.new(Rails.root.join('log', 'my_method.log'))
You can then write to your log with the following command, which should look familiar, as Rails uses the Ruby Logger as well.
method_logger.debug "your debug message here"
(1) http://www.ruby-doc.org/stdlib-1.9.3/libdoc/logger/rdoc/Logger.html
Here's the code for a custom logger that you can use to solve Part I. I adapted it for my own use from another Stack Overflow question, even including the answerer's inline comments. The main difference is that you can specify a different log path for development and production:
# app/concerns/event_notifications_logger.rb
class EventNotificationsLogger < Rails::Rack::Logger
  def initialize(app, opts = {})
    @default_logger = Rails.logger
    @notifications_logger = Logger.new(notifications_log_path)
    @notifications_logger.formatter = LogFormat.new
    @notifications_logger.level = @default_logger.level

    @app = app
    @opts = opts
  end

  def call(env)
    logger = if env['PATH_INFO'] == '/event_notifications/deliver'
      @notifications_logger
    else
      @default_logger
    end

    # What?! Why are these all separate?
    ActiveRecord::Base.logger = logger
    ActionController::Base.logger = logger
    Rails.logger = logger

    # The Rails::Rack::Logger class is responsible for logging the
    # 'starting GET blah blah' log line. We need to call super here (as opposed
    # to @app.call) to make sure that line gets output. However, the
    # ActiveSupport::LogSubscriber class (which Rails::Rack::Logger inherits
    # from) caches the logger, so we have to override that too
    @logger = logger
    super
  end

  private

  def notifications_log_path
    Rails.env.production? ? '/var/log/event-notifications.log' : Rails.root.join('log/event-notifications.log')
  end
end
Then include it in your application configuration:
require File.expand_path('../boot', __FILE__)
require 'rails/all'
require File.expand_path('../../app/concerns/event_notifications_logger', __FILE__)
module YourApp
class Application < Rails::Application
config.middleware.swap Rails::Rack::Logger, EventNotificationsLogger
# ...
end
end
Rails 3.2 provides filter options for the logs, called "Tagged Logging".
See the announcement or the new Rails 3.2 guide on ActiveSupport.
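For example, with Rails 3.2 you can tag every log line with request metadata from config/application.rb (the tag list here is just an illustration):

# config/application.rb
module YourApp
  class Application < Rails::Application
    # Prefix every log line with the request UUID and the remote IP.
    config.log_tags = [:uuid, :remote_ip]
  end
end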