Ring server messages not traversing the ring of processes

I've almost implemented the well-known ring server; however, the messages do not seem to be traversing the ring. Here's my code:
-module(ring).
-import(lists, [duplicate/2, append/1, zip/2, nthtail/2, nth/2, reverse/1]).
-export([start/3, server/0]).

start(M, N, Msg) ->
    start(M, N, [], Msg).

start(0, 0, _, _) ->
    ok;
start(M, 0, PList, Msg) ->
    nth(1, PList) ! nthtail(1, reverse(zip(append(duplicate(M, PList)),
                                           duplicate(M * length(PList), Msg))));
start(M, N, PList, Msg) ->
    start(M, N - 1, [spawn(ring, server, []) | PList], Msg).

server() ->
    receive
        [] ->
            ok;
        [{Pid, Msg} | T] ->
            io:format("~p,~p~n", [self(), Msg]),
            Pid ! T;
        terminate ->
            true
    end.
It returns the following when executed:
2> ring:start(3,3,"hello").
<0.42.0>,"hello"
<0.41.0>,"hello"
[{<0.41.0>,"hello"},
{<0.42.0>,"hello"},
{<0.40.0>,"hello"},
{<0.41.0>,"hello"},
{<0.42.0>,"hello"},
{<0.40.0>,"hello"},
{<0.41.0>,"hello"},
{<0.42.0>,"hello"}]
May I ask where I am going wrong here? I've checked the zipped process list, and it seems as if it should work. Thanks in advance.

There are two mistakes here. The main one is that you do not recursively call server/0 again after printing the message.
The second one is that you should not remove the head of your list when you build the first message; otherwise you will have only M*N - 1 messages passed.
This code works, but it leaves all the processes active; you should enhance it to kill all the processes at the end of the "ring":
-module(ring).
-import(lists, [duplicate/2, append/1, zip/2, nthtail/2, nth/2, reverse/1]).
-export([start/3, server/0]).

start(M, N, Msg) ->
    start(M, N, [], Msg).

start(0, 0, _, _) ->
    ok;
start(M, 0, PList, Msg) ->
    nth(1, PList) ! reverse(zip(append(duplicate(M, PList)),
                                duplicate(M * length(PList), Msg)));
start(M, N, PList, Msg) ->
    start(M, N - 1, [spawn(ring, server, []) | PList], Msg).

server() ->
    receive
        [] ->
            ok;
        [{Pid, Msg} | T] ->
            io:format("In Server ~p,~p~n", [self(), Msg]),
            Pid ! T,
            server();
        terminate ->
            true
    end.
On my side, I don't use -import; I find it more explicit when I read lists:reverse(...). You can use hd/1 to get the first element of a list, and you could use a list comprehension to generate your list of {Pid, Msg} tuples.
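For illustration, a possible way to build that list with a comprehension (build_ring_message/3 is a made-up helper name; the lists calls are written fully qualified here rather than imported):

build_ring_message(M, PList, Msg) ->
    %% Every process appears M times, in ring order, each paired with the message.
    Hops = lists:append(lists:duplicate(M, lists:reverse(PList))),
    [{Pid, Msg} || Pid <- Hops].

This produces the same M*N {Pid, Msg} tuples as the reverse/zip/duplicate expression above.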

Your processes are terminating, not looping (and there is never a condition under which they will receive 'terminate'). To solve the termination issue, have them spawn_link instead of just spawn so that when the first one dies, they all die.
It is good to kick things off by messaging yourself, to make sure you're not stripping the first message entirely.
-module(ring).
-import(lists, [duplicate/2, append/1, zip/2, nthtail/2, nth/2, reverse/1]).
-export([start/3, server/0]).

start(M, N, Msg) ->
    Scroll = start(M, N, [], Msg),
    self() ! Scroll.

start(0, 0, _, _) ->
    ok;
start(M, 0, PList, Msg) ->
    nth(1, PList) ! nthtail(1, reverse(zip(append(duplicate(M, PList)),
                                           duplicate(M * length(PList), Msg))));
start(M, N, PList, Msg) ->
    start(M, N - 1, [spawn_link(ring, server, []) | PList], Msg).

server() ->
    receive
        [] ->
            ok;
        [{Pid, Msg} | T] ->
            io:format("~p,~p~n", [self(), Msg]),
            Pid ! T,
            server()
    end.
This produces:
1> c(ring).
{ok,ring}
2> ring:start(3,3,"raar!").
<0.42.0>,"raar!"
<0.41.0>,"raar!"
[{<0.41.0>,"raar!"},
{<0.42.0>,"raar!"},
{<0.40.0>,"raar!"},
{<0.41.0>,"raar!"},
{<0.42.0>,"raar!"},
{<0.40.0>,"raar!"},
{<0.41.0>,"raar!"},
{<0.42.0>,"raar!"}]
<0.42.0>,"raar!"
<0.40.0>,"raar!"
<0.41.0>,"raar!"
<0.42.0>,"raar!"
<0.40.0>,"raar!"
<0.41.0>,"raar!"
As expected. Note that I'm calling server() again on receipt of a message. When the empty list is received they all die because they are linked. No mess.
Some spacing for readability would have been a little more polite. Erlangers usually aren't so leet that we want to plow through a solid mass of characters, and it actually causes errors with a few operators in some edge cases (like in map syntax). Also, using imported functions is a little less clear. I know it gums up the code with all that namespace business, but it's a lifesaver when you start writing larger programs (there are a lot of list operations in the erlang module, so some of this importing is unnecessary anyway: http://www.erlang.org/doc/man/erlang.html#hd-1).
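As a small illustration of that last point (ring_send/2 is a made-up name, not code from the answers above): hd/1 is auto-imported from the erlang module, and the remaining lists calls read fine when fully qualified, so the -import line can be dropped entirely:

ring_send(PList, Work) ->
    %% hd/1 needs no import; lists:reverse/1 is spelled out so its origin is obvious.
    hd(PList) ! lists:reverse(Work).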

Related

Corrupted MD-SAL queries when trying to read flows on tables: missing flow rules

I am experiencing a glitch with OpenDaylight (using Mininet).
Essentially, I am querying flow rules on specific nodes and on specific tables. The relevant code is the following, and is run by one separate thread per node that I am polling:
public static final InstanceIdentifier<Nodes> NODES_IID = InstanceIdentifier
        .builder(Nodes.class).build();

public static InstanceIdentifier<Table> makeTableIId(NodeId nodeId, Short tableId) {
    return NODES_IID.child(Node.class, new NodeKey(nodeId))
            .augmentation(FlowCapableNode.class)
            .child(Table.class, new TableKey(tableId));
}
and
InstanceIdentifier<Table> tableIId = makeTableIId(nodeId, tableId);
Optional<Table> tableOptional = dataBroker.newReadOnlyTransaction()
        .read(LogicalDatastoreType.OPERATIONAL, tableIId).get();
if (!tableOptional.isPresent()) {
    continue;
}
List<Flow> flows = tableOptional.get().getFlow();
The behavior: tableOptional is present, and getFlow() returns an empty list.
The observation: there ARE flow rules installed on ALL nodes on the tables I am querying, but for some reason, some of these nodes show none of these flows on any of the tables (here, tables 3, 4, 5, and 6).
The weirdness: On one of the problematic nodes, I have four rules, installed on tables 9, 13, 17 and 22 respectively. They timeout simultaneously after 150 seconds. After they disappear, the query suddenly begins to "see" the flows installed on tables 3, 4, 5, and 6, returning these for each table.
Question: How is this even possible?
EDIT: I just realized that the rules whose timeout "suddenly fixes everything" were also rules that generated warnings in ODL's log (OpenFlowPlugin, to be more specific). I did not observe any obvious issue, so I'd sort of brushed it aside.
Here is the code relevant to the error:
https://pastebin.com/yJDZesXU
Here are the errors I get every time I install a rule that walks through these lines:
https://pastebin.com/c9HYLBt6
I must stress that these rules work as intended, and that printing them out reveals no evident formatting issue. Again, they appear fine when dumped.
My hypothesis is that this warning is a symptom of ODL "messing up" when trying to store the rules in MD-SAL, which ends up messing up a lot of rule-reading queries. On uninstallation of the garbage that ensues, rule-reading queries become functional again.
This makes sense to me, but then... I haven't understood how to fix these warnings, or what these warnings were about in the first place.
EDIT 2: By commenting out the lines suspected of causing the warnings in the above pastebin:
//ipv4MatchBuilder.setIpv4SourceAddressNoMask(...);
//ipv4MatchBuilder.setIpv4SourceArbitraryBitmask(...);
The warnings disappear, AND the flows appear correctly on all tables when pinged. This confirms my hypothesis that, somewhere, something goes wrong in the data store.
EDIT 3: I have found that setting any non-trivial arbitrary bitmask makes this error go away. That is, I tried setting an arbitrary bitmask which was neither null nor "255.255.255.255", and the error disappeared. The problem is that I might want a bitmask on the source but an exact match on the destination. Even setting the bitmask to "127.255.255.255" (as I tried) is still unnerving. It really feels to me like this is an OpenFlowPlugin glitch, though.
EDIT 4: Steps for reproducing the bug
Install a rule with an IPv4 arbitrary bitmask match, with the destination IP set and the destination arbitrary bitmask either null or set to 255.255.255.255.
Ipv4MatchArbitraryBitMaskBuilder ipv4MatchBuilder = new Ipv4MatchArbitraryBitMaskBuilder();
ipv4MatchBuilder.setIpv4DestinationAddressNoMask(new Ipv4Address("10.0.0.1"));
ipv4MatchBuilder.setIpv4DestinationArbitraryBitmask(new DottedQuad("255.255.255.255"));
matchBuilder = new MatchBuilder()
        .setEthernetMatch(ethernetMatchBuilder.build())
        .setLayer3Match(ipv4MatchBuilder.build());
... and so on ...
Extra optional steps: Install one such rule for the destination, one such rule for the source, and install equivalent rules where the bitmask is set to something else, like 127.255.255.255.
Make a query to MD-SAL to fetch flow information from the node on which you installed the flow rule.
Now, do "log:display" inside your ODL controller. You should have a warning about a malformed destination address. Additionally, the Table object you queried should contain no flows, so tableObject.getFlow() should return an empty list.

Why are my flows not connecting?

Just starting with noflo, I'm baffled as to why I'm not able to get a simple flow working. I started today, installing noflo and core components following the example pages, and the canonical "Hello World" example
Read(filesystem/ReadFile) OUT -> IN Display(core/Output)
'package.json' -> IN Read
works fine so far. Then I wanted to change it slightly, adding "noflo-rss" to the mix and changing the example to
Read(rss/FetchFeed) OUT -> IN Display(core/Output)
'http://xkcd.com/rss.xml' -> IN Read
Running like this
$ /node_modules/.bin/noflo-nodejs --graph graphs/rss.fbp --batch --register=false --debug
... but no cigar: there is no output at all, it just sits there.
If I stick a console.log into the source code of FetchFeed.coffee
parser.on 'readable', ->
  while item = @read()
    console.log 'ITEM', item   ## Hack the code here
    out.send item
then I do see the output and the content of the RSS feed.
Question: Why does out.send in rss/FetchFeed not feed the data to the core/Output for it to print? What dark magic makes the first example work, but not the second?
When running with --batch, the process will exit when the network has stopped, which is determined by the number of open connections between nodes dropping to zero.
The problem is that rss/FetchFeed does not keep a connection open on its outport, so the connection count drops to zero and the process exits.
One workaround is to run without --batch. Another one I just submitted as a pull request (needs review).

Is it possible to send a message to an unregistered process in Erlang?

I am aware that you can perform simple message passing with the following:
self() ! hello.
and you can see the message by calling:
flush().
I can also create simple processes in functions with something like:
spawn(module, function, args).
However, I am not clear how one can send messages to these processes without registering the Pid.
I have seen examples showing that you can pattern match against this in the shell to get the Pid assigned to a var, so if I create a gen_server such as:
...
start_link() ->
    gen_server:start_link(?MODULE, init, []).

init(Pid) ->
    {ok, Pid}.
...
I can then call it with the following from the shell:
{ok, Pid} = test_sup:start_link().
{ok,<0.143.0>}
> Pid ! test.
test
So my question is: can you send messages to Pids of the form <0.0.0> without registering them to an atom or a variable in the shell? Experimenting and searching has proved fruitless...
If you happen to need to send a message to a Pid based on the textual representation of its Pid, you can do (assuming the string is "<0.42.0>"):
list_to_pid("<0.42.0>") ! Message
This is almost only useful in the shell (where you can see the output of log messages or monitor data from something like Observer); any spawned process should normally be a child of some form of parent process to which it is linked (or monitored).
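For example, a minimal sketch of the monitored variant (my_mod:my_fun/0 is a placeholder for whatever the worker actually runs):

watch_worker() ->
    %% spawn_monitor/3 returns both the pid and a monitor reference, so the
    %% process never needs to be registered or referred to by its textual pid.
    {Pid, MRef} = spawn_monitor(my_mod, my_fun, []),
    Pid ! do_work,
    receive
        {'DOWN', MRef, process, Pid, Reason} ->
            {worker_exited, Reason}
    end.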
As for sending a message to something you just spawned, spawn returns a Pid, so you can assign it directly to a variable (which is not the same as registering it):
Pid = spawn(M, F, A),
Pid ! Message.
If you have a string such as "<0.42.0>" to identify a pid, it is
either because you are working in the shell, using the representation you see, and you forgot to store this pid in a variable. Then simply use pid(X,Y,Z) to get it back (see the shell sketch below);
or because you did something like io_lib:format("~p", [Val]), where Val is the pid or an Erlang term which contains this pid. Then simply assign the pid to a variable (directly or by extracting it from the term). It can be stored in an ETS table or sent to another process without any transformation.
You should avoid using the shell (or string) representation. One reason is that this representation differs when you ask for the pid of the same process from two different nodes.
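For completeness, a quick shell sketch (assuming a process whose textual form is <0.42.0> actually exists on the local node):
1> Pid = pid(0, 42, 0).
<0.42.0>
2> Pid ! hello.
hello
3> list_to_pid("<0.42.0>") ! hello.
hello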

eunit: How to test a simple process?

I'm currently writing a test for a module that runs in a simple process started with spawn_link(?MODULE, init, [self()]).
In my eunit tests, I have a setup and teardown function defined and a set of test generators.
all_tests_test_() ->
    {inorder, {
        foreach,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
The setup fun creates the process-under-test:
setup() ->
    {ok, Pid} = protocol:start_link(),
    process_flag(trap_exit, true),
    error_logger:info_msg("[~p] Setting up process ~p~n", [self(), Pid]),
    Pid.
The test looks like this:
my_test(Pid) ->
    [fun() ->
         error_logger:info_msg("[~p] Sending to ~p~n", [self(), Pid]),
         Pid ! something,
         receive
             Msg -> ?assertMatch(expected_result, Msg)
         after
             500 -> ?assert(false)
         end
     end].
Most of my modules are gen_servers, but for this one I figured it would be easier without all the gen_server boilerplate code...
The output from the test looks like this:
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.117.0>] Setting up process <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.124.0>] Sending to <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.122.0>] Sending expected_result to <0.117.0>
protocol_test: my_test...*failed*
in function protocol_test:'-my_test/1-fun-0-'/0 (test/protocol_test.erl, line 37)
**error:{assertion_failed,[{module,protocol_test},
{line,37},
{expression,"false"},
{expected,true},
{value,false}]}
From the Pids you can see that whatever process was running setup (117) was not the same one that was running the test case (124). The process under test, however, is the same (122). This results in a failing test case because the receive never gets the message and runs into the timeout.
Is it expected behaviour that eunit spawns a new process to run the test case?
And generally, is there a better way to test a process or other asynchronous behaviour (like casts)? Or would you suggest always using gen_server to have a synchronous interface?
Thanks!
[EDIT]
To clarify how protocol knows about the calling process, this is the start_link/0 fun:
start_link() ->
    Pid = spawn_link(?MODULE, init, [self()]),
    {ok, Pid}.
The protocol is tightly linked to the caller. If either of them crashes, I want the other one to die as well. I know I could use gen_server and supervisors, and I actually did that in parts of the application, but for this module I thought it was a bit over the top.
Did you try:
all_tests_test_() ->
    {inorder, {
        foreach,
        local,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
From the doc, it seems to be what you need.
simple solution
Just like in Pascal's answer, adding the local flag to the test description might solve some of your problems, but it will probably cause you additional problems in the future, especially when you link yourself to the created process.
testing processes
General practice in Erlang is that while the process abstraction is crucial for writing (designing and thinking about) programs, it is not something that you would expose to the user of your code (even if that user is you). Instead of expecting someone to send your process a message with the proper data, you wrap it in a function call
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            Msg
    after 500 ->
        timeouted
    end.
and then test this function rather than receiving something "by hand".
To distinguish a real timeout from a received timeouted atom, one can wrap the result and use pattern matching, letting the test fail in case of error:
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.

in_my_test(Pid) ->
    {ok, ValueToBeTested} = get_me_some_expected_result(Pid).
In addition, since your process could receive many different messages in the meantime, you can make sure that you receive what you think you are receiving with a little pattern matching and a local reference:
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
And now the receive will ignore (leave for later) all messages that do not carry the same Ref that you sent to your process.
major concern
One thing that I do not really understand is how the process you are testing knows where to send back the received message. The only logical solution would be getting the pid of its creator during initialization (a call to self/0 inside the protocol:start_link/0 function). But then our new process can communicate only with its creator, which might not be what you expect, and which is not how the tests are run.
So the simplest solution would be sending a "return address" with each call, which again can be done in our wrapping function:
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref, self()},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
Again, anyone who uses this get_me_some_expected_result/1 function will not have to worry about message passing, and testing such functions is much easier.
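A rough sketch of what a test built on that wrapper could look like (it assumes the tested process replies in the {Ref, Msg} shape the wrapper expects, and it reuses protocol:start_link/0 from the question):

-include_lib("eunit/include/eunit.hrl").

wrapped_call_test() ->
    {ok, Pid} = protocol:start_link(),
    %% No raw receive in the test: the wrapper hides the messaging and the timeout.
    ?assertMatch({ok, expected_result}, get_me_some_expected_result(Pid)).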
Hope this helps at least a little.
Maybe it's simply because you are using the foreach EUnit fixture in place of the setup one.
So, try the setup fixture: the one that uses {setup, Setup, Cleanup, Tests} instead of {inorder, {foreach, …}}.
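A minimal sketch of that shape, reusing the funs from the question (my_test/1 then acts as the instantiator and still returns its list of test funs):

all_tests_test_() ->
    {setup,
     fun setup/0,
     fun teardown/1,
     fun my_test/1}.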

How to flush the io buffer in Erlang?

How do you flush the io buffer in Erlang?
For instance:
> io:format("hello"),
> io:format(user, "hello").
This post seems to indicate that there is no clean solution.
Is there a better solution than in that post?
Sadly, other than properly implementing a flush "command" in the io/kernel subsystems, and making sure that the low-level drivers that implement the actual io support such a command, you really have to simply rely on the system quiescing before closing. A failing, I think.
Have a look at io.erl/io_lib.erl in stdlib and file_io_server.erl/prim_file.erl in kernel for the gory details.
As an example, in file_io_server (which effectively takes the request from io/io_lib and routes it to the correct driver), the command types are:
{put_chars,Chars}
{get_until,...}
{get_chars,...}
{get_line,...}
{setopts, ...}
(i.e. no flush)!
As an alternative you could of course always close your output (which would force a flush) after every write. A logging module I have does something like this every time and it doesn't appear to be that slow (it's a gen_server with the logging received via cast messages):
case file:open(LogFile, [append]) of
    {ok, IODevice} ->
        io:fwrite(IODevice, "~n~2..0B ~2..0B ~4..0B, ~2..0B:~2..0B:~2..0B: ~-8s : ~-20s : ~12w : ",
                  [Day, Month, Year, Hour, Minute, Second, Priority, Module, Pid]),
        io:fwrite(IODevice, Msg, Params),
        io:fwrite(IODevice, "~c", [13]),
        file:close(IODevice);
    {error, Reason} ->
        %% handle a failed open however is appropriate for the logger
        {error, Reason}
end
io:put_chars(<<>>) at the end of the script works for me.
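For instance, in an escript that might look like this (just a sketch of where the call goes; whether it is needed depends on how your io is buffered):

main(_Args) ->
    io:format("processing"),   %% output without a trailing newline
    io:put_chars(<<>>).        %% empty write at the end of the script, as suggested above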
You could run
flush().
from the shell, or try:
flush() ->
    receive
        _ -> flush()
    after 0 -> ok
    end.
That works more or less like a C flush.