Erlang ssl non-blocking parallel acceptors

I have built a very naive parallel ssl acceptor.
-module(multiserver).
-export([start/0, client/1]).
%% This is a dummy SSL Erlang server/client example
start() ->
    spawn_link(fun() -> init([]) end).

init([]) ->
    ssl:start(),
    {ok, ListenSocket} = ssl:listen(9990, [{certfile, "cert.pem"},
                                           {keyfile, "privkey.pem"},
                                           {reuseaddr, true},
                                           {active, true},
                                           binary]),
    Pid = self(),
    spawn_link(fun() -> listener(ListenSocket, Pid, 1) end),
    spawn_link(fun() -> listener(ListenSocket, Pid, 2) end),
    loop().

loop() ->
    receive
        {new, _Pid} ->
            %% Do stuff here
            loop()
    end.

listener(ListenSocket, Pid, Num) ->
    {ok, ClientSocket} = ssl:transport_accept(ListenSocket),
    ok = ssl:ssl_accept(ClientSocket),
    io:format("listener ~p accepted ~n", [Num]),
    ok = ssl:send(ClientSocket, "server"),
    io:format("listener ~p sent~n", [Num]),
    receive
        X -> io:format("listener ~p: ~p ~n", [Num, X])
    after 5000 ->
        io:format("listener ~p timeout ~n", [Num]),
        timeout
    end,
    ssl:close(ClientSocket),
    listener(ListenSocket, Pid, Num).

client(Message) ->
    ssl:start(),
    {ok, Socket} = ssl:connect("localhost", 9990, [binary, {active, true}], infinity),
    receive
        X -> io:format("~p ~n", [X])
    after 2000 ->
        timeout
    end,
    ok = ssl:send(Socket, Message),
    ssl:close(Socket),
    io:format("client closed~n").
The problem I have is that listener 2 never seems to receive any messages. A sample run of the program looks like this:
First I start the server in shell 1.
Shell 1:
1> multiserver:start().
<0.34.0>
Then I call the client/1 three times from a different shell.
Shell 2:
2> multiserver:client("client").
{ssl,{sslsocket,new_ssl,<0.51.0>},<<"server">>}
client closed
ok
3> multiserver:client("client").
{ssl,{sslsocket,new_ssl,<0.54.0>},<<"server">>}
client closed
ok
4> multiserver:client("client").
{ssl,{sslsocket,new_ssl,<0.56.0>},<<"server">>}
client closed
ok
This is the printouts to server shell.
Shell 1:
listener 1 accepted
listener 1 sent
listener 1: {ssl,{sslsocket,new_ssl,<0.51.0>},<<"client">>}
listener 2 accepted
listener 2 sent
listener 1 accepted
listener 1 sent
listener 1: {ssl,{sslsocket,new_ssl,<0.54.0>},<<"client">>}
listener 2 timeout
2>
I have spent some hours on this and I can't understand why it is not possible for listener 2 to receive any data. If I change the code to use gen_tcp it works as expected.
Is there something I am missing?
Is it possible to do this with the current ssl module?

The reason for the timeout is that the socket in the second process uses the socket option {active, false}, i.e. the receive will never get any message.
The Erlang docs for the ssl module state that the socket created by calling transport_accept/1 should inherit the options set for the listen socket. The first process inherits the options when it does transport_accept/1, but for some reason the second process doesn't.
You can inspect the options with
ssl:getopts(ClientSocket,[mode, active])
I have no idea why this happens, but a workaround is to explicitly set the options on the newly accepted socket
ssl:setopts(ClientSocket, [{active,true}, {mode,binary}])
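In context, that workaround goes right after transport_accept/1 in the listener loop from the question (a sketch; the option values simply mirror what the listen socket was opened with):

```erlang
%% Same listener as in the question, with the workaround applied.
listener(ListenSocket, Pid, Num) ->
    {ok, ClientSocket} = ssl:transport_accept(ListenSocket),
    %% Workaround: re-apply the listen-socket options explicitly,
    %% since the accepted socket does not always inherit them here.
    ok = ssl:setopts(ClientSocket, [{active, true}, {mode, binary}]),
    ok = ssl:ssl_accept(ClientSocket),
    io:format("listener ~p accepted~n", [Num]),
    ok = ssl:send(ClientSocket, "server"),
    receive
        X -> io:format("listener ~p: ~p~n", [Num, X])
    after 5000 ->
        io:format("listener ~p timeout~n", [Num])
    end,
    ssl:close(ClientSocket),
    listener(ListenSocket, Pid, Num).
```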

Related

How do RabbitMQ per connection flow-control block client from publishing messages?

I'm new to RabbitMQ and I really want to figure out the detailed implementation of per-connection flow control in RabbitMQ.
I read the source code of the RabbitMQ server but can't find how flow control blocks the rabbitmq-java-client from publishing messages.
In rabbit_reader.erl I found the control_throttle function, which most likely does the flow-control work.
% rabbitmq-server v3.7.x rabbit_reader.erl
control_throttle(State = #v1{connection_state = CS,
                             throttle = #throttle{blocked_by = Reasons} = Throttle}) ->
    Throttle1 = case credit_flow:blocked() of
                    true ->
                        Throttle#throttle{blocked_by = sets:add_element(flow, Reasons)};
                    false ->
                        Throttle#throttle{blocked_by = sets:del_element(flow, Reasons)}
                end,
    State1 = State#v1{throttle = Throttle1},
    case CS of
        running -> maybe_block(State1);
        %% unblock or re-enable blocking
        blocked -> maybe_block(maybe_unblock(State1));
        _       -> State1
    end.
But in the maybe_block function, the server sends a "connection.blocked" message to the client, which in rabbitmq-java-client I found to be the high-water-mark blocking message, not flow-control blocking.
So how does flow control block the client from publishing messages?
Where are the codes for blocking?
Does it block only on server-side or both on server and client side?
Thanks

Wait on the reply of a process Erlang

Is it possible to spawn a process p in a function funct1 of a module module1, send a message to p from a function funct2 of module1, and wait for a reply from p inside funct2, without having to spawn funct2 as its own process (so that the caller is still self())? If so, what is the best way to implement the waiting part? The code below gives an overview of what I am looking for.
Thanks in advance.
-module(module1).
...
funct1(...) ->
    Pid = spawn(module2, function3, [[my_data]]),
    ...

funct2(...) ->
    ...
    Pid ! {self(), {data1, data2}},
    % wait here for the reply from Pid
    % do something here based on the reply.
The Answer
Yes.
The Real Problem
You are conflating three concepts:
Process (who is self() and what is pid())
Function
Module
A process is a living thing. A process has its own memory space. These processes are the things making calls. This is the only identity that really matters. When you think about "who is self() in this case" you are really asking "what is the calling context?" If I spawn two instances of a process, they both might call the same function at some point in their lives -- but the context of those calls are completely different because the processes have their own lives and their own memory spaces. Just because Victor and Victoria are both jumping rope at the same time doesn't make them the same person.
Where people get mixed up about calling context the most is when writing module interface functions. Most modules are, for the sake of simplicity, written in a way that they define just a single process. There is no rule that mandates this, but it is pretty easy to understand what a module does when it is written this way. Interface functions are exported and available for any process to call -- and they are calling in the context of the processes calling them, not in the context of a process spawned to "be an instance of that module" and run the service loop defined therein.
There is nothing trapping a process "within" that module, though. I could write a pair of modules, one that defines the AI of a lion and another that defines the AI of a shark, and have a process essentially switch identities in the middle of its execution -- but this is almost always a really bad idea (because it gets confusing).
Functions are just functions. That's all they are. Modules are composed of functions. There is nothing more to say than this.
How to wait for a message
We wait for messages using the receive construct. It matches on the received message (which will always be an Erlang term) and selects what to do based on the shape and/or content of the message.
Read the following very carefully:
1> Talker =
1> fun T() ->
1> receive
1> {tell, Pid, Message} ->
1> ok = io:format("~p: sending ~p message ~p~n", [self(), Pid, Message]),
1> Pid ! {message, Message, self()},
1> T();
1> {message, Message, From} ->
1> ok = io:format("~p: from ~p received message ~p~n", [self(), From, Message]),
1> T();
1> exit ->
1> exit(normal)
1> end
1> end.
#Fun<erl_eval.44.87737649>
2> {Pid1, Ref1} = spawn_monitor(Talker).
{<0.64.0>,#Ref<0.1042362935.2208301058.9128>}
3> {Pid2, Ref2} = spawn_monitor(Talker).
{<0.69.0>,#Ref<0.1042362935.2208301058.9139>}
4> Pid1 ! {tell, Pid2, "A CAPITALIZED MESSAGE! RAAAR!"}.
<0.64.0>: sending <0.69.0> message "A CAPITALIZED MESSAGE! RAAAR!"
{tell,<0.69.0>,"A CAPITALIZED MESSAGE! RAAAR!"}
<0.69.0>: from <0.64.0> received message "A CAPITALIZED MESSAGE! RAAAR!"
5> Pid2 ! {tell, Pid1, "a lower cased message..."}.
<0.69.0>: sending <0.64.0> message "a lower cased message..."
{tell,<0.64.0>,"a lower cased message..."}
<0.64.0>: from <0.69.0> received message "a lower cased message..."
6> Pid1 ! {tell, Pid1, "Sending myself a message!"}.
<0.64.0>: sending <0.64.0> message "Sending myself a message!"
{tell,<0.64.0>,"Sending myself a message!"}
<0.64.0>: from <0.64.0> received message "Sending myself a message!"
7> Pid1 ! {message, "A direct message from the shell", self()}.
<0.64.0>: from <0.67.0> received message "A direct message from the shell"
{message,"A direct message from the shell",<0.67.0>}
A standalone example
Now consider this escript of a ping-pong service. Notice there is only one kind of talker defined inside, and it knows how to deal with target, ping, pong and exit messages.
#! /usr/bin/env escript
-mode(compile).

main([CountString]) ->
    Count = list_to_integer(CountString),
    ok = io:format("~p: Starting pingpong script. Will iterate ~p times.~n", [self(), Count]),
    P1 = spawn_link(fun talker/0),
    P2 = spawn_link(fun talker/0),
    pingpong(Count, P1, P2).

pingpong(Count, P1, P2) when Count > 0 ->
    P1 ! {target, P2},
    P2 ! {target, P1},
    pingpong(Count - 1, P1, P2);
pingpong(_, P1, P2) ->
    _ = erlang:send_after(1000, P1, {exit, self()}),
    _ = erlang:send_after(1000, P2, {exit, self()}),
    wait_for_exit([P1, P2]).

wait_for_exit([]) ->
    ok = io:format("~p: All done. Returning.~n", [self()]),
    halt(0);
wait_for_exit(Pids) ->
    receive
        {exiting, Pid} ->
            ok = io:format("~p: ~p is done.~n", [self(), Pid]),
            NewPids = lists:delete(Pid, Pids),
            wait_for_exit(NewPids)
    end.

talker() ->
    receive
        {target, Pid} ->
            ok = io:format("~p: Sending ping to ~p~n", [self(), Pid]),
            Pid ! {ping, self()},
            talker();
        {ping, From} ->
            ok = io:format("~p: Received ping from ~p. Replying with pong.~n", [self(), From]),
            From ! pong,
            talker();
        pong ->
            ok = io:format("~p: Received pong.~n", [self()]),
            talker();
        {exit, From} ->
            ok = io:format("~p: Received exit message from ~p. Retiring.~n", [self(), From]),
            From ! {exiting, self()}
    end.
There are some details there, like the use of erlang:send_after/3, which is needed because message sending is so fast that it would beat the calls to io:format/2 that slow down the actual talker processes, resulting in a weird situation where the exit messages (usually) arrive before the pings and pongs between the two talkers.
Here is what happens when it is run:
ceverett#changa:~/Code/erlang$ ./pingpong 2
<0.5.0>: Starting pingpong script. Will iterate 2 times.
<0.61.0>: Sending ping to <0.62.0>
<0.62.0>: Sending ping to <0.61.0>
<0.61.0>: Sending ping to <0.62.0>
<0.62.0>: Sending ping to <0.61.0>
<0.61.0>: Received ping from <0.62.0>. Replying with pong.
<0.62.0>: Received ping from <0.61.0>. Replying with pong.
<0.61.0>: Received ping from <0.62.0>. Replying with pong.
<0.62.0>: Received ping from <0.61.0>. Replying with pong.
<0.61.0>: Received pong.
<0.62.0>: Received pong.
<0.61.0>: Received pong.
<0.62.0>: Received pong.
<0.61.0>: Received exit message from <0.5.0>. Retiring.
<0.62.0>: Received exit message from <0.5.0>. Retiring.
<0.5.0>: <0.61.0> is done.
<0.5.0>: <0.62.0> is done.
<0.5.0>: All done. Returning.
If you run it a few times (or on a busy runtime) there is a chance that some of the output will be in different order. That is just the nature of concurrency.
If you are new to Erlang the above code might take a while to sink in. Play with that pingpong script yourself. Edit it. Make it do new things. Create a triangle of pinging processes. Spawn a random circuit of talkers that do weird things. This will suddenly make sense once you mess around with it.

Erlang: spawn a process to a function from another module

I started this week with this programming language and I'm trying to write a very simple web server for a course at my university. I have a question about spawning a process. I need to spawn N processes, so I created this function inside my module dispatcher.erl:
create_workers(NWorkers, Servers, Manager, Dispatcher_Pid) ->
    case NWorkers of
        0 -> Servers;
        _ when NWorkers > 0 ->
            Process_Pid = spawn(server, start, [Manager]),
            create_workers(NWorkers - 1,
                           lists:append(Servers, [{server, Process_Pid}]),
                           Manager, Dispatcher_Pid)
    end.
The function I'm trying to call is in another module (server.erl), which contains this code:
start(Manager, Dispatcher_Pid) ->
    receive
        {request, Url, Id, WWW_Root} ->
            case file:read_file(WWW_Root ++ Url) of
                {ok, Msg} ->
                    conn_manager:reply(Manager, {ok, binary:bin_to_list(Msg)}, Id);
                {error, _} ->
                    conn_manager:reply(Manager, not_found, Id)
            end
    end,
    Dispatcher_Pid ! {done, self()},
    start(Manager, Dispatcher_Pid).
So, I'm trying to spawn a process from the dispatcher.erl module to a function on server.erl module, but I get this error on every spawn:
=ERROR REPORT==== 24-Mar-2015::01:42:43 ===
Error in process <0.165.0> with exit value: {undef,[{server,start,[<0.159.0>],[]}]}
I don't know what is happening here, because I think I'm calling the spawn function the way the Erlang documentation says. Can I get some help?
Thanks for your time!
Okay, I resolved it myself. When spawning the new process, I was passing fewer arguments than the arity of the function I was trying to spawn: spawn(server, start, [Manager]) tries to call server:start/1, but the function is start/2, hence the undef error.
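A minimal sketch of the fix, assuming the dispatcher's own pid is what server:start/2 expects as its second argument:

```erlang
create_workers(NWorkers, Servers, Manager, Dispatcher_Pid) ->
    case NWorkers of
        0 -> Servers;
        _ when NWorkers > 0 ->
            %% Pass both arguments so the arity matches server:start/2.
            Process_Pid = spawn(server, start, [Manager, Dispatcher_Pid]),
            create_workers(NWorkers - 1,
                           lists:append(Servers, [{server, Process_Pid}]),
                           Manager, Dispatcher_Pid)
    end.
```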

Broadcasting to processes that are on different nodes

I have a function that does the broadcasting:
broadcast(Msg, Reason) ->
    Fun = fun(P) -> P ! {self(), Msg, Reason} end, % line 27
    lists:foreach(Fun, nodes()).
But it's not working,I get this error:
=ERROR REPORT==== 12-Apr-2014::15:42:23 ===
Error in process <0.45.0> on node 'sub#Molly' with exit value: {badarg,[{subscri
ber,'-broadcast/2-fun-0-',3,[{file,"subscriber.erl"},{line,27}]},{lists,foreach,
2,[{file,"lists.erl"},{line,1323}]},{subscriber,loop,0,[{file,"subscriber.erl"},
{line,38}]}]}
Line 38 is the line where I call the function:
broadcast(Reason, Msg)
I can't wrap my head around the error. Why doesn't this work?
! takes the same arguments as erlang:send/2. The documentation specifies that the target can be one of:
a pid
a port
an atom, meaning a process registered on the local node
{RegName, Node}, for a process registered on a remote node
You're sending messages to the elements of the return value of nodes(). These are atoms, but they are node names, not locally registered processes. If the process you want to send the message to is registered as foo on the remote node, write {foo, P} ! {self(), Msg, Reason} instead.
On the other hand, if you have the pids of the processes on the remote node, there is no need to specify the node name, as the pids contain that information. Just send the message to the remote pid as you would for a local pid.
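Putting that together, a corrected broadcast might look like this (a sketch; it assumes the target process is registered as subscriber on every remote node, a name not given in the question):

```erlang
broadcast(Msg, Reason) ->
    %% {RegName, Node} addresses a registered process on a remote node.
    Fun = fun(Node) -> {subscriber, Node} ! {self(), Msg, Reason} end,
    lists:foreach(Fun, nodes()).
```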
Node is just an atom; you can't send a message to it. What you need is a pid on that node. For example, it could be a registered process and the pid could be obtained by calling rpc:call(Node, erlang, whereis, [Name]). Another option could be to use gproc.

eunit: How to test a simple process?

I'm currently writing a test for a module that runs in a simple process started with spawn_link(?MODULE, init, [self()]).
In my eunit tests, I have a setup and teardown function defined and a set of test generators.
all_tests_test_() ->
    {inorder, {
        foreach,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
The setup fun creates the process-under-test:
setup() ->
    {ok, Pid} = protocol:start_link(),
    process_flag(trap_exit, true),
    error_logger:info_msg("[~p] Setting up process ~p~n", [self(), Pid]),
    Pid.
The test looks like this:
my_test(Pid) ->
    [fun() ->
        error_logger:info_msg("[~p] Sending to ~p~n", [self(), Pid]),
        Pid ! something,
        receive
            Msg -> ?assertMatch(expected_result, Msg)
        after
            500 -> ?assert(false)
        end
    end].
Most of my modules are gen_servers, but for this one I figured it would be easier without all the gen_server boilerplate code...
The output from the test looks like this:
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.117.0>] Setting up process <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.124.0>] Sending to <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.122.0>] Sending expected_result to <0.117.0>
protocol_test: my_test...*failed*
in function protocol_test:'-my_test/1-fun-0-'/0 (test/protocol_test.erl, line 37)
**error:{assertion_failed,[{module,protocol_test},
{line,37},
{expression,"false"},
{expected,true},
{value,false}]}
From the Pids you can see that whatever process was running setup (117) was not the same as the one running the test case (124). The process under test, however, is the same (122). This results in a failing test case because the receive never gets the message and runs into the timeout.
Is it expected behaviour that eunit spawns a new process to run the test case?
And generally, is there a better way to test a process or other asynchronous behaviour (like casts)? Or would you suggest always using gen_server to have a synchronous interface?
Thanks!
[EDIT]
To clarify, how protocol knows about the process, this is the start_link/0 fun:
start_link() ->
    Pid = spawn_link(?MODULE, init, [self()]),
    {ok, Pid}.
The protocol is tightly linked to the caller. If either of them crashes I want the other one to die as well. I know I could use gen_server and supervisors, and I actually did that in parts of the application, but for this module I thought it was a bit over the top.
Did you try:
all_tests_test_() ->
    {inorder, {
        foreach,
        local,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
From the doc, it seems to be what you need.
simple solution
Just like in Pascal's answer, adding the local flag to the test descriptor might solve your problem, but it will probably cause you additional problems in the future, especially when you link yourself to the created process.
testing processes
The general practice in Erlang is that while the process abstraction is crucial for writing (designing and thinking about) programs, it is not something that you would expose to the user of your code (even if that user is you). Instead of expecting someone to send you a message with the proper data, you wrap it in a function call:
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            Msg
    after 500 ->
        timeouted
    end.
and then test this function rather than receiving something "by hand".
To distinguish a real result from the timeouted atom returned on timeout, one can use a little pattern matching and let it fail in case of error:
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.

in_my_test(Pid) ->
    {ok, ValueToBeTested} = get_me_some_expected_result(Pid).
In addition, since your process could receive many different messages in the meantime, you can make sure that you receive what you think you receive with a little pattern matching and a local reference:
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
And now the receive will ignore (leave for later) all messages that do not carry the same Ref that you sent to your process.
major concern
One thing that I do not really understand is how the process you are testing knows where to send back the received message. The only logical solution would be getting the pid of its creator during initialization (a call to self/0 inside the protocol:start_link/0 function). But then our new process can communicate only with its creator, which might not be what you expect, and which is not how the tests are run.
So the simplest solution would be sending a "return address" with each call, which again can be done in our wrapping function:
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref, self()},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
Again, anyone who uses this get_me_some_expected_result/1 function will not have to worry about message passing, and testing such functions makes things much easier.
Hope this helps at least a little.
Maybe it's simply because you are using the foreach EUnit fixture in place of the setup one.
There, try the setup fixture: the one that uses {setup, Setup, Cleanup, Tests} instead of {inorder, {foreach, …}}
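For reference, the setup fixture variant might look like this (a sketch reusing the setup/0, teardown/1 and my_test/1 funs from the question):

```erlang
all_tests_test_() ->
    {setup,
     fun setup/0,    %% runs once; its result is passed to teardown/1 and my_test/1
     fun teardown/1,
     fun my_test/1}.
```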