Elixir: how to run multiple processes under the supervision of a single Supervisor process

The program consists of three modules. The Print module receives a number from the keyboard, passes it to another module, receives the response, and displays it on the screen. The Proc1 and Proc2 modules receive a number, perform calculations, and send the result back.
defmodule Launch do
  @moduledoc """
  Documentation for `Launch`.
  """

  def start() do
    children = [
      %{
        id: Print,
        start: {Print, :print, []}
      },
      %{
        id: Proc1,
        start: {Proc1, :proc1, []}
      },
      %{
        id: Proc2,
        start: {Proc2, :proc2, []}
      }
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
defmodule Print do
  def print() do
    num =
      IO.gets("Input number: ")
      |> String.trim()
      |> String.to_integer()

    if num >= 0 do
      send(Proc1, {self(), num})
    else
      send(Proc2, {self(), num})
    end

    receive do
      num -> IO.puts(num)
    after
      500 -> print()
    end

    print()
  end
end
defmodule Proc1 do
  def proc1() do
    receive do
      {pid, num} ->
        send(pid, 100 / num)
        proc1()

      _e ->
        IO.puts("Error")
    end
  end
end
defmodule Proc2 do
  def proc2() do
    receive do
      {pid, num} ->
        send(pid, 1000 / num)
        proc2()

      _e ->
        IO.puts("Error")
    end
  end
end
I am trying to run all the processes under the supervision of a single Supervisor, but there is a problem: only the first child is started, and the other children are not. In the example above, the Print process will start, but Proc1 and Proc2 will not. How do I run all the processes under one Supervisor? Important note: the Print process must get the addresses of the Proc1 and Proc2 processes for communication.

There are many issues with the code you’ve posted.
Registered processes
To be able to use a process name as the Process.dest() in a call to Kernel.send/2, one should start the process and register it under that name.
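For example (a minimal sketch reusing the names from the question), registering the spawned process under the Proc1 name is what makes send(Proc1, ...) work:

pid = spawn(&Proc1.proc1/0)
Process.register(pid, Proc1)

# the atom Proc1 now resolves to the registered pid
send(Proc1, {self(), 42})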
Supervisor.start_link/2
Supervisor.start_link/2 expects a list of child specifications whose start functions return immediately, having started the child process as a side effect. These functions are simply called, and there is no magic involved: if the start function is an infinitely recursive function, execution blocks inside it, waiting for a message in receive/1, and the remaining children are never started.
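As an illustration (a sketch, reusing Proc1 from the question), a start function shaped the way the child spec expects spawns the worker and returns {:ok, pid} right away, leaving the receive loop to the spawned process:

defmodule Proc1 do
  def start_link() do
    # returns immediately; the worker loops in its own process
    {:ok, spawn_link(&loop/0)}
  end

  defp loop() do
    receive do
      {pid, num} -> send(pid, 100 / num)
    end

    loop()
  end
end

# matching child spec:
# %{id: Proc1, start: {Proc1, :start_link, []}}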
Supervisor performs some magic by automatically monitoring and restarting children for you, but it does nothing to spawn the separate processes itself. GenServer encapsulates this functionality and provides a handy way to avoid bothering about spawning processes at all.
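A GenServer-based equivalent of Proc1 could look roughly like this (the {:calc, num} message shape is an assumption, not something from the question):

defmodule Proc1 do
  use GenServer

  # returns {:ok, pid} immediately and registers the process under Proc1
  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:calc, num}, _from, state), do: {:reply, 100 / num, state}
end

# with such modules, children = [Proc1, Proc2] works as-is, and callers
# would use GenServer.call(Proc1, {:calc, num}) instead of raw send/receive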
Solution
What you might do is spawn all three processes, monitor them manually, and react to the {:DOWN, ref, :process, pid, reason} message by respawning the dead process. This is effectively what Supervisor does under the hood for its children.
Launch
defmodule Launch do
  def start() do
    proc1 = spawn(&Proc1.proc1/0)
    proc2 = spawn(&Proc2.proc2/0)
    print = spawn(fn -> Print.print(proc1, proc2) end)

    Process.monitor(proc1)
    Process.monitor(proc2)
    Process.monitor(print)

    receive do
      msg -> IO.inspect(msg)
    end
  end
end
Print
defmodule Print do
  def print(pid1, pid2) do
    num =
      IO.gets("Input number: ")
      |> String.trim()
      |> String.to_integer()

    if num >= 0 do
      send(pid1, {self(), num})
    else
      send(pid2, {self(), num})
    end

    receive do
      num -> IO.puts(num)
    end

    print(pid1, pid2)
  end
end
The other two modules are fine as they are.
Here is how it looks in iex:
iex|1 ▶ c "/tmp/test.ex"
#⇒ [Launch, Print, Proc1, Proc2]
iex|2 ▶ Launch.start
Input number: 10
10.0
Input number: 1000
0.1
Input number: a
#⇒ {:DOWN, #Reference<0.3632020665.3980394506.95298>,
# :process, #PID<0.137.0>,
# {:badarg,
# [
# {:erlang, :binary_to_integer, ["a"], []},
# {Print, :print, 2, [file: '/tmp/test.ex', line: 22]}
# ]}}
Now, instead of printing this out, respawn the failed process, and you will get a bare-bones implementation of supervised intercommunicating processes. For a one_for_all strategy, that could be achieved with:
receive do
  {:DOWN, _, _, _, _} ->
    Process.exit(print, :normal)
    Process.exit(proc1, :normal)
    Process.exit(proc2, :normal)
    start()
end
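Spliced back into Launch.start/0, the whole thing could look roughly like this (a sketch; it also demonitors the survivors before terminating them, and uses :kill because a :normal exit signal is ignored by processes that are not trapping exits):

defmodule Launch do
  def start() do
    proc1 = spawn(&Proc1.proc1/0)
    proc2 = spawn(&Proc2.proc2/0)
    print = spawn(fn -> Print.print(proc1, proc2) end)

    refs = Enum.map([print, proc1, proc2], &Process.monitor/1)

    receive do
      {:DOWN, _ref, :process, _pid, _reason} ->
        # drop the remaining monitors (and any queued :DOWN messages),
        # terminate the survivors, then start the whole trio from scratch
        Enum.each(refs, &Process.demonitor(&1, [:flush]))
        Enum.each([print, proc1, proc2], &Process.exit(&1, :kill))
        start()
    end
  end
end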

Related

How to write a test for Plug error handling

I'm trying to use Plug.Test to test error handling implemented with Plug.ErrorHandler -- with assert conn.status == 406 and the like.
I have a defp handle_errors (containing a single send_resp call) and it does seem to be called; however, my tests still fail with the same exception (as if handle_errors had no effect).
A reference to a sample advanced Plug (not Phoenix) app would also be appreciated.
Try something like this (not tested):
defmodule NotAcceptableError do
  defexception plug_status: 406, message: "not_acceptable"
end

defmodule Router do
  use Plug.Router
  use Plug.ErrorHandler

  plug :match
  plug :dispatch

  get "/hello" do
    raise NotAcceptableError
    send_resp(conn, 200, "world")
  end

  def handle_errors(conn, %{kind: _kind, reason: reason, stack: _stack}) do
    send_resp(conn, conn.status, reason.message)
  end
end
test "error" do
conn = conn(:get, "/hello")
assert_raise Plug.Conn.WrapperError, "** (NotAcceptableError not_acceptable)", fn ->
Router.call(conn, [])
end
assert_received {:plug_conn, :sent}
assert {406, _headers, "not_acceptable"} = sent_resp(conn)
end
Use assert_error_sent/2 to assert that you raised an error and it was wrapped and sent with a particular status. Match against its {status, headers, body} return value to assert the rest of the HTTP response met your expectations.
response = assert_error_sent 404, fn ->
  get(build_conn(), "/users/not-found")
end

assert {404, [_h | _t], "Page not found"} = response

Elixir - Using variables in doctest

In my application there is a GenServer, which can create other processes. All process IDs are saved to a list.
def create_process do
  GenServer.call(__MODULE__, :create_process)
end

def handle_call(:create_process, _from, processes) do
  {:ok, pid} = SomeProcess.start_link([])
  {:reply, {:ok, pid}, [pid | processes]}
end
There is also a function to get the list of PIDs.
def get_processes do
  GenServer.call(__MODULE__, :get_processes)
end

def handle_call(:get_processes, _from, processes) do
  {:reply, processes, processes}
end
I tried to write a doctest for the get_processes function like this:
#doc """
iex> {:ok, pid} = MainProcess.create_process()
iex> MainProcess.get_processes()
[pid]
"""
However, the test runner doesn't seem to see the pid variable, and I get an undefined function pid/0 error.
I know it could simply be solved with a regular test, but I want to know if it is possible to solve it in a doctest.
The problem is the [pid] line in your expected result. The expected result must be a literal value, not a variable; you can't reference a variable from the expected result. You can work around it by doing the comparison on an iex> line and asserting that it returns true:
iex> {:ok, pid} = MainProcess.create_process()
iex> [pid] === MainProcess.get_processes()
true
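If the exact pid does not matter, another option along the same lines (assuming the same MainProcess module) is to assert on the shape of the result instead:

iex> {:ok, _pid} = MainProcess.create_process()
iex> MainProcess.get_processes() |> Enum.all?(&is_pid/1)
true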

How to write an RSpec system test that ActionCable broadcast messages appear in the view

I want to test that messages that are broadcast when some background jobs are completed are actually appearing in the view.
I have unit tests for this which work fine. I would actually like to ensure that the JS gets run so that the view is updated with the correct message.
So far I have not been able to find any way to do this.
Here is the test I have where I would like to add the expectation for the broadcast message:
require 'rails_helper'
require 'sidekiq/testing'

RSpec.describe 'sending a quote request', js: true do
  let(:quote_request_form) { build(:quote_request_form) }

  before do
    create(:job_rate, :proofreading)
    create(:proofreader_with_work_events)
  end

  it 'shows the user their quotation' do
    visit new_quote_request_path
    fill_in("quote_request_form_name", with: quote_request_form.name)
    fill_in("quote_request_form_email", with: quote_request_form.email)
    attach_file('customFile', '/Users/mitchellgould/RailsProjects/ProvenWordNew/spec/test_documents/quote_request_form/1.docx', make_visible: true)
    click_on "Submit"

    Sidekiq::Testing.inline! do
      page.execute_script("$('#invisible-recaptcha-form').submit()")
      expect(current_path).to eq(quote_confirm_path)

      # add expectation here:
      expect(page).to have_content("Calculating Time Required")

      page.execute_script("window.location.pathname = '#{quotation_path(Quotation.first)}'")
      expect(current_path).to eq(quotation_path(Quotation.first))
      expect(page).to have_content("Here is your quotation")
    end
  end
end
Here is my .coffee file:
$(document).on 'turbolinks:load', ->
  if $("meta[name='current_user']").length > 0
    App.notification = App.cable.subscriptions.create "NotificationChannel",
      connected: ->
        # Called when the subscription is ready for use on the server
      disconnected: ->
        # Called when the subscription has been terminated by the server
      received: (data) ->
        $('.background_message').html(data.content)
        if data.head == 302 && data.path
          window.location.pathname = data.path
  else if App.notification
    App.quotation.unsubscribe()
    delete App.notification
Here is one of the background jobs that broadcasts a message when it's done:
class CreateParagraphDetailsJob < ApplicationJob
  queue_as :default

  after_perform :broadcast_message, :calculate_proofreading_job_duration

  def perform(document, proofreading_job_id, current_user_id)
    document.create_paragraph_details
  end

  private

  def calculate_proofreading_job_duration
    CalculateDurationJob.set(wait: 1.seconds).perform_later proofreading_job_id, current_user_id
  end

  def broadcast_message
    ActionCable.server.broadcast "notification_channel_user_#{current_user_id}", content: "Analyzed writing quality of paragraphs"
  end

  def document
    self.arguments.first
  end

  def proofreading_job_id
    self.arguments.second
  end

  def current_user_id
    self.arguments.last
  end
end
Any ideas on how to do this?

Elixir - testing a full script

I'm writing a test to check a function (called automatically by a GenServer when a new file enters a folder) that calls other functions in the same module with pipes in order to read a file, process its content, insert it if needed, and return a list of :error and :ok entries.
The result looks like:
[
  error: "Data not found",
  ok: %MyModule{
    field1: field1data,
    field2: field2data
  },
  ok: %MyModule{
    field1: field1data,
    field2: field2data
  },
  error: "Data not found"
]
The code:
def processFile(file) do
  insertResultsMap =
    File.read!(file)
    |> getLines()
    |> extractMainData()
    |> Enum.map(fn(x) -> insertLines(x) end)
    |> Enum.group_by(fn x -> elem(x, 0) end)

  handleErrors(Map.get(insertResultsMap, :error))
  updateAnotherTableWithLines(Map.get(insertResultsMap, :ok))
end

defp getLines(docContent) do
  String.split(docContent, "\n")
end

defp extractMainData(docLines) do
  Enum.map(docLines, fn(x) -> String.split(x, ",") end)
end

defp insertLines([field1, field2, field3, field4]) do
  attrs = %{
    field1: String.trim(field1),
    field2: String.trim(field2),
    field3: String.trim(field3),
    field4: String.trim(field4)
  }

  mymodule.create_stuff(attrs)
end

defp handleErrors(errors) do
  {:ok, file} = File.open(@errorsFile, [:append])
  saveErrors(file, errors)
  File.close(file)
end

defp saveErrors(_, []), do: :ok

defp saveErrors(file, [{:error, changeset} | rest]) do
  changes = for {key, value} <- changeset.changes do
    "#{key} #{value}"
  end

  errors = for {key, {message, _}} <- changeset.errors do
    "#{key} #{message}"
  end

  errorData = "data: #{Enum.join(changes, ", ")} \nErrors: #{Enum.join(errors, ", ")}\n\n"
  IO.binwrite(file, errorData)
  saveErrors(file, rest)
end

defp updateAnotherTableWithLines(insertedLines) do
  Enum.map(insertedLines, fn {:ok, x} -> updateOtherTable(x) end)
end

defp updateOtherTable(dataForUpdate) do
  "CLOSE" -> otherModule.doStuff(dataForUpdate.field1, dataForUpdate.field2)
end
I have several questions, and some will be pretty basic since I'm still learning:
What do you think of the code? Any advice? (Take into account that I voluntarily obfuscated the names.)
If I want to test this, is it right to test only the processFile function? Or should I make more of them public and test them individually?
When I test the processFile function, I check that I'm receiving a list. Is there any way to make sure this list only has the elements I'm expecting, i.e. error: "String" or ok: %{}?
What do you think of the code? Any advice? (Take into account that I voluntarily obfuscated the names.)
Opinion based.
If I want to test this, is it right to test only the processFile function?
Yes.
Or should I make more of them public and test them individually?
No, these are implementation details and testing them is an anti-pattern.
When I test the processFile function, I check that I'm receiving a list. Is there any way to make sure this list only has the elements I'm expecting, i.e. error: "String" or ok: %{}?
You receive a keyword list. To check for an explicit value, one might use:
foo = processFile(file)
assert not is_nil(foo[:ok])
OTOH, I'd rather return a map from there and pattern match on it:
assert %{ok: _} = processFile(file)
To assert that the result does not have anything save for :oks and :errors, one might use list subtraction:
assert Enum.uniq(Keyword.keys(result)) -- [:ok, :error] == []
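Put together, an ExUnit test along those lines could look roughly like this (MyModule and the fixture path are placeholders, and it assumes processFile/1 returns the keyword list shown in the question):

test "processFile/1 returns only :ok and :error entries" do
  result = MyModule.processFile("test/fixtures/sample_file")

  assert is_list(result)
  assert Enum.uniq(Keyword.keys(result)) -- [:ok, :error] == []
end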

Bunny and RabbitMQ - Adapting the Work Queue tutorial to stop subscribing when the queue is completely worked

I fill up my queue, check that it has the right number of tasks to work, and then have workers in parallel, each set to prefetch(1), to ensure each takes just one task at a time.
I want each worker to work its task, send a manual acknowledgement, and keep working taking from the queue if there is more work.
If there is no more work, i.e. the queue is empty, I want the worker script to finish up and return(0).
So, this is what I have now:
require 'bunny'

connection = Bunny.new("amqp://my_conn")
connection.start
channel = connection.create_channel
queue = channel.queue('my_queue_name')
channel.prefetch(1)

puts ' [*] Waiting for messages.'

begin
  payload = 'init'

  until queue.message_count == 0
    puts "worker working queue length is #{queue.message_count}"
    _delivery_info, _properties, payload = queue.pop

    unless payload.nil?
      puts " [x] Received #{payload}"
      raise "payload invalid" unless payload[/cucumber/]

      begin
        do_stuff(payload)
      rescue => e
        puts "Error running #{payload}: #{e.backtrace.join('\n')}"
        # failing stuff
      end
    end

    puts " [x] Done with #{payload}"
  end

  puts "done with queue"
  connection.close
  exit(0)
ensure
  connection.close
end
I still want to make sure I am done when the queue is empty. This is the example from the RabbitMQ site: https://www.rabbitmq.com/tutorials/tutorial-two-ruby.html. It has a number of things we want for our work queue, most importantly manual acknowledgements, but it does not stop running, and I need that to happen programmatically when the queue is done:
#!/usr/bin/env ruby
require 'bunny'

connection = Bunny.new(automatically_recover: false)
connection.start
channel = connection.create_channel
queue = channel.queue('task_queue', durable: true)
channel.prefetch(1)

puts ' [*] Waiting for messages. To exit press CTRL+C'

begin
  queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
    puts " [x] Received '#{body}'"
    # imitate some work
    sleep body.count('.').to_i
    puts ' [x] Done'
    channel.ack(delivery_info.delivery_tag)
  end
rescue Interrupt => _
  connection.close
end
How can this script be adapted to exit out when the queue has been completely worked (0 total and 0 unacked)?
From what I understand, you want your subscriber to end if there are no pending messages in the RabbitMQ queue.
Given your second script, you could avoid passing block: true, and that will return nothing when there's no more data to process. In that case, you could exit the program.
You can see that in the documentation: http://rubybunny.info/articles/queues.html#blocking_or_nonblocking_behavior
By default it's non-blocking.