I'm currently working on my first big Elixir project and want to make proper use of testing this time.
However, if I add my modules to the "normal" supervisor, I cannot start them again with start_supervised!, and all tests fail with: Reason: already started: #PID<0.144.0>
Here is my code:
(application.ex)
defmodule Websocks.Application do
  # See https://hexdocs.pm/elixir/Application.html
  # for more information on OTP Applications
  @moduledoc false

  use Application

  def start(_type, _args) do
    children = [
      {Websocks.PoolSupervisor, []},
      {Websocks.PoolHandler, %{}}
      # {Websocks.Worker, arg}
    ]

    # See https://hexdocs.pm/elixir/Supervisor.html
    # for other strategies and supported options
    opts = [strategy: :one_for_one, name: Websocks.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
Some of my tests:
defmodule PoolHandlerTest do
  use ExUnit.Case, async: true

  alias Websocks.PoolHandler

  doctest PoolHandler

  setup do
    start_supervised!({PoolHandler, %{}})
    %{}
  end

  test "adding two pools and checking if they are there" do
    assert PoolHandler.add(:first) == :ok
    assert PoolHandler.add(:second) == :ok
    assert PoolHandler.get_pools() == {:ok, %{:first => nil, :second => nil}}
  end
end
and the pool handler:
defmodule Websocks.PoolHandler do
  use GenServer

  # Client
  def start_link(default) when is_map(default) do
    GenServer.start_link(__MODULE__, default, name: __MODULE__)
  end

  # Server (callbacks)
  @impl true
  def init(arg) do
    {:ok, arg}
  end
end
(I cut out the stuff I think is not necessary, but the complete code is on GitHub here: github)
Thanks in advance for any help I get!
As @Everett mentioned in the comments, your application will already be started for you when you run mix test, so there is no need to start your GenServers again. It seems like you're interacting with the global instance in your test, so if that's what you want, it should just work.
However, if you'd like to start a separate instance just for your test, you need to start an unnamed one. For example, you could add an optional pid argument to your wrapper functions:
defmodule Websocks.PoolHandler do
  # ...

  def add(server \\ __MODULE__, value) do
    GenServer.call(server, {:add, value})
  end

  # ...
end
Then, instead of using start_supervised! like you do, you can start an unnamed instance in your setup and use it in your tests like so:
setup do
  {:ok, pid} = GenServer.start_link(PoolHandler, %{})
  {:ok, %{handler: pid}}
end

test "adding two pools and checking if they are there", %{handler: handler} do
  PoolHandler.add(handler, :first)
  # ...
end
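If you would rather keep the process under the test supervisor (so it is shut down cleanly between tests), you can also pass start_supervised! a plain child spec that starts an unnamed instance. A minimal sketch, assuming the PoolHandler alias from the test above:

setup do
  # The :id only identifies the child within the test supervisor; the process
  # itself is unnamed, so it cannot clash with the one started by the application.
  pid =
    start_supervised!(%{
      id: :pool_handler_under_test,
      start: {GenServer, :start_link, [PoolHandler, %{}]}
    })

  {:ok, %{handler: pid}}
end

This only works together with the optional server argument shown above, since the tests then address the process by pid rather than by module name.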
I am having an issue with Process.monitor/1. My initial use case was to monitor a Phoenix Channel and do some cleanup after it dies. However, I didn't manage to set it up in Phoenix and decided to test it out with plain GenServers.
So, I have a simple GenServer and I want to track when it dies:
defmodule Temp.Server do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, %{})

  def init(args) do
    Temp.Monitor.monitor(self())
    {:ok, args}
  end
end
And another GenServer that monitors:
defmodule Temp.Monitor do
  use GenServer
  require Logger

  def start_link(_) do
    GenServer.start_link(__MODULE__, [], name: __MODULE__)
  end

  def monitor(pid) do
    Process.monitor(pid)
  end

  def handle_info({:DOWN, ref, :process, _, _}, state) do
    Logger.info("DOWN")
    {:noreply, state}
  end
end
So, if I understand correctly, Process.monitor will start monitoring the Temp.Server process and should invoke the handle_info clause matching :DOWN when the Server process dies. If I try it in iex:
iex> {_, pid} = Temp.Server.start_link([])
{:ok, #PID<0.23068.3>}
iex> Process.exit(pid, :kill)
true
I expect handle_info to be called from the Monitor module and to log "DOWN", but that doesn't happen. What am I doing wrong? I assume it doesn't work because I call monitor from the Server process (Temp.Monitor.monitor(self())), but I just can't figure out how else I should do it.
When you call the Temp.Monitor.monitor/1 function, it still runs in Temp.Server's own process, not Temp.Monitor's. That means Temp.Server ends up monitoring itself, so the :DOWN message is sent to Temp.Server when Temp.Server dies, which is useless because that process is already gone.
What you want to do is pass the pid of your server process to Temp.Monitor and have it call Process.monitor/1 from its own process. That can only happen inside one of the GenServer callbacks.
You can do that by moving your implementation into handle_call/3 or handle_cast/3:
defmodule Temp.Monitor do
  use GenServer
  require Logger

  def start_link(_) do
    GenServer.start_link(__MODULE__, [], name: __MODULE__)
  end

  def monitor(pid) do
    GenServer.cast(__MODULE__, {:monitor, pid})
  end

  def handle_cast({:monitor, pid}, state) do
    Process.monitor(pid)
    {:noreply, state}
  end

  def handle_info({:DOWN, _ref, :process, _, _}, state) do
    Logger.info("DOWN")
    {:noreply, state}
  end
end
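With that change, Temp.Server can keep calling Temp.Monitor.monitor(self()) from its init/1: the cast is delivered to the Monitor process, which then sets up the monitor from its own process. A quick sanity check in iex could look like this (a sketch; GenServer.stop/1 is used instead of Process.exit(pid, :kill) so the linked iex shell isn't taken down along with the server):

iex> Temp.Monitor.start_link([])
iex> {:ok, pid} = Temp.Server.start_link([])
iex> GenServer.stop(pid)
:ok

Temp.Monitor then receives {:DOWN, ref, :process, pid, :normal} and logs "DOWN".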
I've got an integration test that is run through Capybara. It visits a webpage, creates an object, and renders the results. When the object is created, the controller enqueues several jobs that create some related objects.
When I run my integration test, I want to be able to examine the rendered page as if those jobs had finished. The two obvious solutions are:
1) Set the queue adapter to :inline
2) Manually execute/clear the enqueued jobs after creating the object.
For 1), I've attempted to set the queue adapter to :inline in a before(:each), but this does not change the adapter; it continues to use the test adapter (which is set in my test.rb config file):
before(:each) { ActiveJob::Base.queue_adapter = :inline }
after(:each) { ActiveJob::Base.queue_adapter = :test }

it "should work" do
  puts ActiveJob::Base.queue_adapter
end
which outputs: #<ActiveJob::QueueAdapters::TestAdapter:0x007f93a1061ee0 @enqueued_jobs=[], @performed_jobs=[]>
For 2), I'm not sure if this is actually possible. ActiveJob::TestHelpers provides perform_enqueued_jobs, but this method isn't helpful, as it seems to work only for jobs enqueued within the passed-in block.
Assuming you're using RSpec, the easiest way to use perform_enqueued_jobs is with an around block. Combining that with metadata tags, you can do something like:
RSpec.configure do |config|
  config.include(RSpec::ActiveJob)

  # clean out the queue after each spec
  config.after(:each) do
    ActiveJob::Base.queue_adapter.enqueued_jobs = []
    ActiveJob::Base.queue_adapter.performed_jobs = []
  end

  config.around :each, perform_enqueued: true do |example|
    @old_perform_enqueued_jobs = ActiveJob::Base.queue_adapter.perform_enqueued_jobs
    ActiveJob::Base.queue_adapter.perform_enqueued_jobs = true
    example.run
    ActiveJob::Base.queue_adapter.perform_enqueued_jobs = @old_perform_enqueued_jobs
  end

  config.around :each, perform_enqueued_at: true do |example|
    @old_perform_enqueued_at_jobs = ActiveJob::Base.queue_adapter.perform_enqueued_at_jobs
    ActiveJob::Base.queue_adapter.perform_enqueued_at_jobs = true
    example.run
    ActiveJob::Base.queue_adapter.perform_enqueued_at_jobs = @old_perform_enqueued_at_jobs
  end
end
Note: you need to specify the queue_adapter as :test in your config/environments/test.rb if it's not already set.
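For example, assuming an otherwise standard Rails setup:

# config/environments/test.rb
Rails.application.configure do
  config.active_job.queue_adapter = :test
end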
You can then tag a test with the :perform_enqueued metadata and any jobs enqueued in it will be run:
it "should work", :perform_enqueued do
  # Jobs triggered in this test will be run
end
I'm integrating the Bunny gem for RabbitMQ with Rails. Should I start the Bunny thread in an initializer that Rails runs at application start, or in a separate rake task so I can run it in a separate process?
I think if I'm only producing messages then I should do it in a Rails initializer so it can be used all over the app, but if I'm consuming I should do it in a separate rake task. Is this correct?
You are correct: you should not be consuming from the Rails application itself. The Rails application should be a producer, in which case, an initializer is the correct place to start the Bunny instance.
I essentially have this code in my Rails applications which publish messages to RabbitMQ:
# config/initializers/bunny.rb
MESSAGING_SERVICE = MessagingService.new(ENV.fetch("AMQP_URL"))
MESSAGING_SERVICE.start

# app/controllers/application_controller.rb
class ApplicationController
  def messaging_service
    MESSAGING_SERVICE
  end
end

# app/controllers/uploads_controller.rb
class UploadsController < ApplicationController
  def create
    # save the model
    messaging_service.publish_resize_image_request(model.id)
    redirect_to uploads_path
  end
end

# lib/messaging_service.rb
class MessagingService
  def initialize(amqp_url)
    @bunny = Bunny.new(amqp_url)
    at_exit { @bunny.stop }
  end

  attr_reader :bunny

  # open the connection; called from the initializer above
  def start
    @bunny.start
  end

  def publish_resize_image_request(image_id)
    resize_image_exchange.publish(image_id.to_s)
  end

  def resize_image_exchange
    @resize_image_exchange ||=
      channel.exchange("resize-image", passive: true)
  end

  def channel
    @channel ||= bunny.channel
  end
end
For consuming messages, I prefer to start standalone executables without Rake involved; going through Rake adds another process and extra memory overhead.
# bin/image-resizer-worker
require "bunny"

bunny = Bunny.new(ENV.fetch("AMQP_URL"))
bunny.start
at_exit { bunny.stop }

channel = bunny.channel

# Tell RabbitMQ to send this worker at most 2 messages at a time.
# Otherwise, RabbitMQ will send us as many messages as we can absorb,
# which would be 100% of the queue. If we have multiple worker
# instances, we want to load-balance between each of them.
channel.prefetch(2)

exchange = channel.exchange("resize-image", type: :direct, durable: true)
queue = channel.queue("resize-image", durable: true)
queue.bind(exchange)

queue.subscribe(manual_ack: true, block: true) do |delivery_info, properties, payload|
  begin
    upload = Upload.find(Integer(payload))
    # somehow, resize the image and/or post-process the image

    # Tell RabbitMQ we processed the message, in order to not see it again
    channel.acknowledge(delivery_info.delivery_tag, false)
  rescue ActiveRecord::RecordNotFound => _
    STDERR.puts "Model does not exist: #{payload.inspect}"
    # If the model is not in the database, we don't want to see this message again
    channel.acknowledge(delivery_info.delivery_tag, false)
  rescue Errno::ENOSPC => e
    STDERR.puts "Ran out of disk space resizing #{payload.inspect}"
    # Do NOT ack the message, in order to see it again at a later date.
    # This worker, or another one on another host, may have free space to
    # process the image.
  rescue RuntimeError => e
    STDERR.puts "Failed to resize #{payload}: #{e.class} - #{e.message}"
    # The fallback should probably be to ack the message.
    channel.acknowledge(delivery_info.delivery_tag, false)
  end
end
Given all that though, you may be better off with pre-built gems and using Rails' abstraction, ActiveJob.
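For example, roughly the same worker expressed as an ActiveJob job. This is only a sketch: ResizeImageJob is a made-up name, the actual resizing is left out, and it assumes Rails 5+'s ApplicationJob (inherit from ActiveJob::Base on older versions):

# app/jobs/resize_image_job.rb
class ResizeImageJob < ApplicationJob
  queue_as :default

  def perform(upload_id)
    upload = Upload.find(upload_id)
    # resize and/or post-process the image here
  end
end

# in the controller, instead of publishing to RabbitMQ directly:
ResizeImageJob.perform_later(model.id)

You then get queueing, adapters, and test helpers without managing channels, acks, and prefetch yourself.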
I often see code of the form
RSpec.configure do |config|
  config.include ApiHelper

  # ## Mock Framework
  #
  # If you prefer to use mocha, flexmock or RR, uncomment the appropriate line:
  #
  # config.mock_with :mocha
  # config.mock_with :flexmock
  # config.mock_with :rr

  # Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
  config.fixture_path = "#{::Rails.root}/spec/fixtures"

  # If you're not using ActiveRecord, or you'd prefer not to run each of your
  # examples within a transaction, remove the following line or assign false
  # instead of true.
  config.use_transactional_fixtures = true

  # If true, the base class of anonymous controllers will be inferred
  # automatically. This will be the default behavior in future versions of
  # rspec-rails.
  config.infer_base_class_for_anonymous_controllers = false

  config.include FactoryGirl::Syntax::Methods
end
Is there a feature in Rails that lets me do something similar? I want to be able to configure my own library with a similar syntax within config/initializers/my_class.rb
MyClass.configure do |config|
  # allow configuration here
end
Nothing special is needed in Rails - it is simple Ruby code. Here is how it can be done:
class MyClass
  def self.configure(&block)
    a_hash = { :car => "Red" }
    puts "Hello"
    yield a_hash
    puts "There"
  end
end

MyClass.configure do |config|
  puts "In Block"
  puts config[:car]
end
Output:
Hello
In Block
Red
There
I am yielding a hash, but you can yield whatever object you want to.
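For instance, here is a minimal sketch that yields a small configuration object instead of a hash (the Config struct and its attributes are purely illustrative):

class MyClass
  Config = Struct.new(:car, :color) # illustrative attributes

  def self.config
    @config ||= Config.new
  end

  def self.configure
    yield config if block_given?
    config
  end
end

MyClass.configure do |config|
  config.car = "Red"
end

puts MyClass.config.car # => Red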
Rails will load all the Ruby files in the config/initializers directory when starting up the server.
If you want to use the same style for your own custom configurable class, then you just need to implement a configure class method that accepts a block and passes a configuration object to it, e.g.:
class MyClassConfiguration
  # configuration attributes
end

class MyClass
  def self.configure
    yield configuration if block_given?
  end

  def self.configuration
    @config ||= MyClassConfiguration.new
  end
end
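Dropped into an initializer, usage would then look roughly like this (some_attribute stands in for whatever attr_accessor you define on MyClassConfiguration):

# config/initializers/my_class.rb
MyClass.configure do |config|
  config.some_attribute = "value" # assumes attr_accessor :some_attribute on MyClassConfiguration
end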
Using phoet's gem would be even easier.
It's worth taking a look at how RSpec does it if you are curious:
The RSpec.configure method is in https://github.com/rspec/rspec-core/blob/master/lib/rspec/core.rb
The Configuration class is implemented in https://github.com/rspec/rspec-core/blob/master/lib/rspec/core/configuration.rb
I don't know if Rails provides a helper for that, but I wrote my own tiny solution for this problem that I use in several gems: https://github.com/phoet/confiture
It lets you define configurations:
module Your
  class Configuration
    include Confiture::Configuration

    confiture_allowed_keys(:secret, :key)
    confiture_defaults(secret: 'SECRET_STUFF', key: 'EVEN_MOAR_SECRET')
  end
end
and gives you an easy API for doing the configuration:
Your::Configuration.configure do |config|
  config.secret = 'your-secret'
  config.key = 'your-key'
end
Besides this, there are a lot of other configuration tools out there, like configatron or simpleconfig.
A global configuration can be set via a block:
module VkRobioAPI
  module Configuration
    OPTION_NAMES = [
      :client_id,
      :redirect_uri,
      :display,
      :response_type
    ]

    attr_accessor(*OPTION_NAMES)

    def configure
      yield self if block_given?
      self
    end
  end
end

module VkRobioAPI
  extend VkRobioAPI::Configuration

  class << self
    def result
      puts VkRobioAPI.client_id
    end
  end
end
Example:
VkRobioAPI.configure do |config|
config.client_id = '3427211'
config.redirect_uri = 'bEWLUZrNLxff1oQpEa6M'
config.response_type = 'http://localhost:3000/oauth/callback'
config.display = 'token'
end
Result:
VkRobioAPI.result
# 3427211
#=> nil
If you are using Rails, you can just include ActiveSupport::Configurable in your class and off you go. Outside of Rails, you will have to add the activesupport gem to your Gemfile or gemspec and then require 'active_support/configurable'.
Example
class MyClass
  include ActiveSupport::Configurable
end
Usage
With block
MyClass.configure do |config|
  config.key = "something"
  config.key2 = "something else"
end
or inline, without a block
MyClass.config.key = "something"
MyClass.config.key2 = "something else"
or like a hash
MyClass.config[:key] = "something"
MyClass.config[:key2] = "something else"
Note: key can be any name of your choice.
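If you prefer explicit accessors over the open-ended config object, ActiveSupport::Configurable also provides config_accessor. A small sketch:

class MyClass
  include ActiveSupport::Configurable

  # defines MyClass.key / MyClass.key= (plus instance-level readers),
  # all backed by MyClass.config
  config_accessor :key
end

MyClass.key = "something"
MyClass.config.key # => "something"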
Transactional fixtures in RSpec prevent after_commit from being called, but even when I disable them with
RSpec.configure do |config|
  config.use_transactional_fixtures = false
end

the after_commit callback still does not run.
Here is a Rails app with the latest RSpec/Rails where I have reproduced the issue:
git://github.com/sheabarton/after_commit_demo.git
One way around this is to trigger the commit callbacks manually. Example:
describe SomeModel do
  subject { ... }

  context 'after_commit' do
    after { subject.run_callbacks(:commit) }

    it 'does something' do
      subject.should_receive(:some_message)
    end
  end
end
A little late, but hope this helps others.
In my case I resolved this problem with the database_cleaner settings below:
config.use_transactional_fixtures = false

config.before(:suite) do
  DatabaseCleaner.strategy = :deletion
  DatabaseCleaner.clean_with(:truncation)
end

config.before(:each) do
  DatabaseCleaner.start
end

config.after(:each) do
  DatabaseCleaner.clean
end
Thanks to "Testing after_commit/after_transaction with Rspec".
This is similar to @jamesdevar's answer above, but I couldn't add a code block, so I have to make a separate entry.
You don't have to change the strategy for the whole spec suite. You can keep using :transaction globally and just use :deletion or :truncation (they both work) as needed. Just add a flag to the relevant spec.
config.use_transactional_fixtures = false

config.before(:suite) do
  # The :transaction strategy prevents :after_commit hooks from running
  DatabaseCleaner.strategy = :transaction
  DatabaseCleaner.clean_with(:truncation)
end

config.before(:each, :with_after_commit => true) do
  DatabaseCleaner.strategy = :truncation
end
then, in your specs:
describe "some test requiring after_commit hooks", :with_after_commit => true do
If you're using database_cleaner you'll still run into this. I'm using the test_after_commit gem, and that seems to do the trick for me.
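Assuming the gem's standard setup, adding it to the test group of your Gemfile is typically all that's needed:

# Gemfile
group :test do
  gem "test_after_commit"
end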
This Gist helped me.
It monkey-patches ActiveRecord to fire after_commit callbacks even if using transactional fixtures.
module ActiveRecord
  module ConnectionAdapters
    module DatabaseStatements
      #
      # Run the normal transaction method; when it's done, check to see if there
      # is exactly one open transaction. If so, that's the transactional
      # fixtures transaction; from the model's standpoint, the completed
      # transaction is the real deal. Send commit callbacks to models.
      #
      # If the transaction block raises a Rollback, we need to know, so we don't
      # call the commit hooks. Other exceptions don't need to be explicitly
      # accounted for since they will raise uncaught through this method and
      # prevent the code after the hook from running.
      #
      def transaction_with_transactional_fixtures(options = {}, &block)
        rolled_back = false

        transaction_without_transactional_fixtures do
          begin
            yield
          rescue ActiveRecord::Rollback => e
            rolled_back = true
            raise e
          end
        end

        if !rolled_back && open_transactions == 1
          commit_transaction_records(false)
        end
      end
      alias_method_chain :transaction, :transactional_fixtures

      #
      # The @_current_transaction_records is a stack of arrays, each one
      # containing the records associated with the corresponding transaction
      # in the transaction stack. This is used by the
      # `rollback_transaction_records` method (to only send a rollback hook to
      # models attached to the transaction being rolled back) but is usually
      # ignored by the `commit_transaction_records` method. Here we
      # monkey-patch it to temporarily replace the array with only the records
      # for the top-of-stack transaction, so the real
      # `commit_transaction_records` method only sends callbacks to those.
      #
      def commit_transaction_records_with_transactional_fixtures(commit = true)
        unless commit
          real_current_transaction_records = @_current_transaction_records
          @_current_transaction_records = @_current_transaction_records.pop
        end

        begin
          commit_transaction_records_without_transactional_fixtures
        rescue # works better with that :)
        ensure
          unless commit
            @_current_transaction_records = real_current_transaction_records
          end
        end
      end
      alias_method_chain :commit_transaction_records, :transactional_fixtures
    end
  end
end
Put this in a new file in your Rails.root/spec/support directory, e.g. spec/support/after_commit_with_transactional_fixtures.rb.
The standard rspec-rails spec_helper.rb will then load it automatically in the test environment.
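That relies on the usual support-file glob in the generated spec_helper.rb; if yours doesn't have it, something like this should work:

# spec/spec_helper.rb
Dir[Rails.root.join("spec/support/**/*.rb")].each { |f| require f }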