How to flush the io buffer in Erlang?

How do you flush the io buffer in Erlang?
For instance:
> io:format("hello"),
> io:format(user, "hello").
This post seems to indicate that there is no clean solution.
Is there a better solution than in that post?

Sadly, short of properly implementing a flush "command" in the io/kernel subsystems, and making sure that the low-level drivers that implement the actual io support such a command, you really have to rely on the system quiescing before closing. A failing, I think.
Have a look at io.erl/io_lib.erl in stdlib and file_io_server.erl/prim_file.erl in kernel for the gory details.
As an example, in file_io_server (which effectively takes the request from io/io_lib and routes it to the correct driver), the command types are:
{put_chars,Chars}
{get_until,...}
{get_chars,...}
{get_line,...}
{setopts, ...}
(i.e. no flush)!
As an alternative you could of course always close your output (which would force a flush) after every write. A logging module I have does something like this on every write, and it doesn't appear to be that slow (it's a gen_server that receives the logging requests via cast messages):
case file:open(LogFile, [append]) of
    {ok, IODevice} ->
        io:fwrite(IODevice, "~n~2..0B ~2..0B ~4..0B, ~2..0B:~2..0B:~2..0B: ~-8s : ~-20s : ~12w : ",
                  [Day, Month, Year, Hour, Minute, Second, Priority, Module, Pid]),
        io:fwrite(IODevice, Msg, Params),
        io:fwrite(IODevice, "~c", [13]),
        file:close(IODevice);
    {error, _Reason} ->
        %% error clause added here only to complete the excerpt
        ok
end
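For context, here is a minimal sketch of the kind of gen_server the answer describes, where each log request arrives as a cast and is written with its own open/write/close. The module name, record, and message shape are my assumptions, not the answer's actual code:
-module(cast_logger).   %% hypothetical name
-behaviour(gen_server).
-export([start_link/1, log/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         code_change/3, terminate/2]).

-record(state, {logfile}).

start_link(LogFile) -> gen_server:start_link({local, ?MODULE}, ?MODULE, LogFile, []).
log(Msg, Params)    -> gen_server:cast(?MODULE, {log, Msg, Params}).

init(LogFile) -> {ok, #state{logfile = LogFile}}.

handle_call(_Request, _From, State) -> {reply, ok, State}.

handle_cast({log, Msg, Params}, State = #state{logfile = LogFile}) ->
    %% open, write and close for every message; the close forces the flush
    case file:open(LogFile, [append]) of
        {ok, IODevice} ->
            io:fwrite(IODevice, Msg, Params),
            io:fwrite(IODevice, "~n", []),
            file:close(IODevice);
        {error, _Reason} ->
            ok
    end,
    {noreply, State}.

handle_info(_Info, State) -> {noreply, State}.
code_change(_OldVsn, State, _Extra) -> {ok, State}.
terminate(_Reason, _State) -> ok.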

io:put_chars(<<>>)
at the end of the script works for me.
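As a minimal sketch of what that might look like in an escript (the script contents are illustrative, not from the answer):
#!/usr/bin/env escript
%% print without a trailing newline, then issue one last io request
%% before the script exits, as the answer above suggests
main(_Args) ->
    io:format("hello"),
    io:put_chars(<<>>).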

you could run
flush().
from the shell, or try
%% receive and discard anything in the mailbox, returning as soon as it is empty
flush() ->
    receive
        _ -> flush()
    after 0 -> ok
    end.
That works more or less like a C flush.

Related

Why are my flows not connecting?

Just starting with noflo, I'm baffled as to why I'm not able to get a simple flow working. I started today, installing noflo and the core components following the example pages, and the canonical "Hello World" example
Read(filesystem/ReadFile) OUT -> IN Display(core/Output)
'package.json' -> IN Read
works... so far so good. Then I wanted to change it slightly, adding "noflo-rss" to the mix and changing the example to
Read(rss/FetchFeed) OUT -> IN Display(core/Output)
'http://xkcd.com/rss.xml' -> IN Read
Running like this
$ /node_modules/.bin/noflo-nodejs --graph graphs/rss.fbp --batch --register=false --debug
... but no cigar: there is no output, it just sits there.
If I stick a console.log into the source code of FetchFeed.coffee
parser.on 'readable', ->
  while item = @read()
    console.log 'ITEM', item ## Hack the code here
    out.send item
then I do see the output and the content of the RSS feed.
Question: Why does out.send in rss/FetchFeed not feed the data to the core/Output for it to print? What dark magic makes the first example work, but not the second?
When running with --batch, the process will exit once the network has stopped, which is determined by whether there is still a non-zero number of open connections between nodes.
The problem is that rss/FetchFeed does not open a connection on its outport, so the connection count drops to zero and the process exits.
One workaround is to run without --batch. Another one I just submitted as a pull request (needs review).

How does redis pipelining work in pyredis?

I am trying to understand how pipelining in redis works. According to one blog I read, for this code:
Pipeline pipeline = jedis.pipelined();
long start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
pipeline.set("" + i, "" + i);
}
List<Object> results = pipeline.execute();
Every call to pipeline.set() effectively sends the SET command to Redis (you can easily see this by setting a breakpoint inside the loop and querying Redis with redis-cli). The call to pipeline.execute() is when the reading of all the pending responses happens.
So basically, when we use pipelining and execute any command like set above, the command gets executed on the server, but we don't collect the responses until we call pipeline.execute().
However, according to the documentation of pyredis,
Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request.
I think this implies that, when we use pipelining, all the commands are buffered and sent to the server only when we execute pipe.execute(), so this behaviour is different from the behaviour described above.
Could someone please tell me what the right behaviour is when using pyredis?
This is not just a redis-py thing. In Redis, pipelining always means buffering a set of commands and then sending them to the server all at once. The main point of pipelining is to avoid extraneous network round trips, which are frequently the bottleneck when running commands against Redis. If each command were sent to Redis before the pipeline was run, this would not be the case.
You can test this in practice. Open up python and:
import redis
r = redis.Redis()
p = r.pipeline()
p.set('blah', 'foo') # this buffers the command. it is not yet run.
r.get('blah') # pipeline hasn't been run, so this returns nothing.
p.execute()
r.get('blah') # now that we've run the pipeline, this returns "foo".
I did run the test that you described from the blog, and I could not reproduce the behaviour.
Setting breakpoints in the for loop, and running
redis-cli info | grep keys
does not show the size increasing after every set command.
Speaking of which, the code you pasted seems to be Java using Jedis (which I also used).
In the test I ran, and according to the documentation, there is no execute() method in Jedis, but there are exec() and sync() methods.
I did see the values being set in redis after the sync() command.
Besides, this question seems to be about pyredis, going by the documentation quoted.
Finally, the redis documentation itself focuses on the networking optimization (quoting the example):
This time we are not paying the cost of RTT for every call, but just one time for the three commands.
P.S. Could you get the link to the blog you read?

ring server messages not traversing ring

I've almost implemented the well-known ring server; however, the messages do not seem to be traversing the ring. Here's my code:
-module(ring).
-import(lists,[duplicate/2,append/1,zip/2,nthtail/2,nth/2,reverse/1]).
-export([start/3,server/0]).
start(M,N,Msg)->
    start(M,N,[],Msg).
start(0,0,_,_)->
    ok;
start(M,0,PList,Msg)->
    nth(1,PList)!nthtail(1,reverse(zip(append(duplicate(M,PList)),duplicate(M*length(PList),Msg))));
start(M,N,PList,Msg)->
    start(M,N-1,[spawn(ring, server, [])|PList],Msg).
server()->
    receive
        []->
            ok;
        [{Pid,Msg}|T]->
            io:format("~p,~p~n",[self(),Msg]),
            Pid!T;
        terminate->
            true
    end.
it returns the following when executed:
2> ring:start(3,3,"hello").
<0.42.0>,"hello"
<0.41.0>,"hello"
[{<0.41.0>,"hello"},
{<0.42.0>,"hello"},
{<0.40.0>,"hello"},
{<0.41.0>,"hello"},
{<0.42.0>,"hello"},
{<0.40.0>,"hello"},
{<0.41.0>,"hello"},
{<0.42.0>,"hello"}]
May I ask where I am going wrong here? I've checked the zipped process list, and it seems as if it should work. Thanks in advance.
There are two mistakes here. The main one is that you do not recursively call server/0 again after printing the message.
The second one is that you should not remove the head of your list when you build the first message; otherwise only M*N - 1 messages will be passed.
This code works, but it leaves all the processes active; you should enhance it to kill all the processes at the end of the "ring":
-module(ring).
-import(lists,[duplicate/2,append/1,zip/2,nthtail/2,nth/2,reverse/1]).
-export([start/3,server/0]).
start(M,N,Msg)->
    start(M,N,[],Msg).
start(0,0,_,_)->
    ok;
start(M,0,PList,Msg)->
    nth(1,PList)!reverse(zip(append(duplicate(M,PList)),duplicate(M*length(PList),Msg)));
start(M,N,PList,Msg)->
    start(M,N-1,[spawn(ring, server, [])|PList],Msg).
server()->
    receive
        []->
            ok;
        [{Pid,Msg}|T]->
            io:format("In Server ~p,~p~n",[self(),Msg]),
            Pid!T,
            server();
        terminate->
            true
    end.
On my side, I don't use import; I find it more explicit when I see lists:reverse(...). You can use hd/1 to get the first element of a list, and you could use a list comprehension to generate your list of {Pid,Msg}, as in the sketch below.
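For instance, a sketch of the same send written with hd/1 and a list comprehension instead of the nth/zip/duplicate chain (Scroll here is equivalent to the list built in the code above):
%% equivalent to reverse(zip(append(duplicate(M,PList)), duplicate(M*length(PList),Msg)))
Scroll = [{Pid, Msg} || Pid <- lists:append(lists:duplicate(M, lists:reverse(PList)))],
hd(PList) ! Scroll.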
Your processes are terminating, not looping (and there is never a condition under which they will receive 'terminate'). To solve the termination issue, have them spawn_link instead of just spawn so that when the first one dies, they all die.
It is good to kick things off by messaging yourself, to make sure you're not stripping the first message entirely.
-module(ring).
-import(lists,[duplicate/2,append/1,zip/2,nthtail/2,nth/2,reverse/1]).
-export([start/3,server/0]).
start(M, N, Msg)->
    Scroll = start(M, N, [], Msg),
    self() ! Scroll.
start(0, 0, _, _)->
    ok;
start(M, 0, PList, Msg)->
    nth(1, PList) ! nthtail(1, reverse(zip(append(duplicate(M, PList)),
                                           duplicate(M * length(PList), Msg))));
start(M, N, PList, Msg)->
    start(M, N - 1, [spawn_link(ring, server, []) | PList], Msg).
server()->
    receive
        []->
            ok;
        [{Pid, Msg} | T]->
            io:format("~p,~p~n", [self(), Msg]),
            Pid ! T,
            server()
    end.
This produces:
1> c(ring).
{ok,ring}
2> ring:start(3,3,"raar!").
<0.42.0>,"raar!"
<0.41.0>,"raar!"
[{<0.41.0>,"raar!"},
{<0.42.0>,"raar!"},
{<0.40.0>,"raar!"},
{<0.41.0>,"raar!"},
{<0.42.0>,"raar!"},
{<0.40.0>,"raar!"},
{<0.41.0>,"raar!"},
{<0.42.0>,"raar!"}]
<0.42.0>,"raar!"
<0.40.0>,"raar!"
<0.41.0>,"raar!"
<0.42.0>,"raar!"
<0.40.0>,"raar!"
<0.41.0>,"raar!"
As expected. Note that I'm calling server() again on receipt of a message. When the empty list is received they all die because they are linked. No mess.
Some spacing for readability would have been a little more polite. Erlangers usually aren't so leet that we want to plow through a solid mass of characters, and this can actually cause errors with a few operators in some edge cases (like in map syntax). Also, using imported functions is a little less clear. I know it gums up the code with all that namespace business, but it's a lifesaver when you start writing larger programs (and there are plenty of list operations among the auto-imported BIFs in the erlang module, so some of this importing is unnecessary anyway: http://www.erlang.org/doc/man/erlang.html#hd-1).
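For example, the send from the answer above can be written with no -import at all, using the auto-imported hd/1 BIF and fully qualified lists calls (same behaviour, just explicit about where each function lives):
%% the same send as in the answer above, without -import
hd(PList) ! lists:reverse(lists:zip(lists:append(lists:duplicate(M, PList)),
                                    lists:duplicate(M * length(PList), Msg)))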

eunit: How to test a simple process?

I'm currently writing a test for a module that runs in a simple process started with spawn_link(?MODULE, init, [self()]).
In my eunit tests, I have a setup and teardown function defined and a set of test generators.
all_tests_test_() ->
    {inorder, {
        foreach,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
The setup fun creates the process-under-test:
setup() ->
    {ok, Pid} = protocol:start_link(),
    process_flag(trap_exit,true),
    error_logger:info_msg("[~p] Setting up process ~p~n", [self(), Pid]),
    Pid.
The test looks like this:
my_test(Pid) ->
    [ fun() ->
          error_logger:info_msg("[~p] Sending to ~p~n", [self(), Pid]),
          Pid ! something,
          receive
              Msg -> ?assertMatch(expected_result, Msg)
          after
              500 -> ?assert(false)
          end
      end ].
Most of my modules are gen_servers, but for this one I figured it would be easier without all the gen_server boilerplate code...
The output from the test looks like this:
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.117.0>] Setting up process <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.124.0>] Sending to <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.122.0>] Sending expected_result to <0.117.0>
protocol_test: my_test...*failed*
in function protocol_test:'-my_test/1-fun-0-'/0 (test/protocol_test.erl, line 37)
**error:{assertion_failed,[{module,protocol_test},
{line,37},
{expression,"false"},
{expected,true},
{value,false}]}
From the Pids you can see that whatever process was running setup (117) was not the same one that was running the test case (124). The process under test, however, is the same (122). This results in a failing test case because the receive never gets the message and runs into the timeout.
Is that the expected behaviour that a new process gets spawned by eunit to run the test case?
And generally, is there a better way to test a process or other asynchronous behaviour (like casts)? Or would you suggest always using gen_server to have a synchronous interface?
Thanks!
[EDIT]
To clarify how protocol knows about the calling process, this is the start_link/0 fun:
start_link() ->
    Pid = spawn_link(?MODULE, init, [self()]),
    {ok, Pid}.
The protocol is tightly linked to the caller. If either of them crashes, I want the other one to die as well. I know I could use gen_server and supervisors, and I actually did that in parts of the application, but for this module I thought it was a bit over the top.
did you try:
all_tests_test_() ->
    {inorder, {
        foreach,
        local,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
From the doc, it seems to be what you need.
simple solution
Just like in Pascal's answer, adding the local flag to the test description might solve some of your problems, but it will probably cause you additional problems in the future, especially when you link yourself to the created process.
testing processes
General practice in Erlang is that while the process abstraction is crucial for writing (designing and thinking about) programs, it is not something that you would expose to the user of your code (even if that user is you). Instead of expecting someone to send you a message with the proper data, you wrap it in a function call
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            Msg
    after 500 ->
        timeouted
    end.
and then test this function rather than receiving something "by hand".
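With such a wrapper, the test generator from the question might shrink to something like the following sketch (using the first wrapper above, which returns the message directly, so a timeout shows up as the timeouted atom and fails the assertion):
my_test(Pid) ->
    [ fun() ->
          ?assertEqual(expected_result, get_me_some_expected_result(Pid))
      end ].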
To distinguish a real timeout from a received timeouted atom, one can use some pattern matching and let it fail in case of error
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.

in_my_test(Pid) ->
    {ok, ValueToBeTested} = get_me_some_expected_result(Pid).
In addition, since your process could receive many different messages in the meantime, you can make sure that you receive what you think you receive with a little pattern matching and a local reference
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
And now the receive will ignore (leave for later) all messages that do not carry the same Ref that you sent to your process.
major concern
One thing that I do not really understand is how the process you are testing knows where to send back the received message. The only logical solution would be getting the pid of its creator during initialization (a call to self/0 inside the protocol:start_link/0 function). But then our new process can communicate only with its creator, which might not be what you expect, and which is not how the tests are run.
So the simplest solution would be sending a "return address" with each call, which again can be done in our wrapping function.
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref, self()},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
Again, anyone who uses this get_me_some_expected_result/1 function will not have to worry about message passing, and testing such functions becomes much easier.
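On the other side, the process under test then has to match this message shape and use the supplied return address. A hypothetical sketch of such a loop (the real protocol module may of course look different):
loop(State) ->
    receive
        {something, Ref, From} ->
            %% reply to whoever asked, tagged with their reference
            From ! {Ref, expected_result},
            loop(State)
    end.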
Hope this helps at least a little.
Maybe it's simply because you are using the foreach EUnit fixture in place of the setup one.
There, try the setup fixture: the one that uses {setup, Setup, Cleanup, Tests} instead of {inorder, {foreach, …}}
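A sketch of what that could look like with the existing setup/0, teardown/1 and my_test/1 funs:
all_tests_test_() ->
    {setup,
     fun setup/0,      % runs once before the tests
     fun teardown/1,   % runs once afterwards, with setup/0's result
     fun my_test/1}.   % instantiator: gets the Pid returned by setup/0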

Erlang process event error

I'm basically following the tutorial on this site Learn you some Erlang:Designing a concurrent application and I tried to run the code below with the following commands and got an error on line 48. I did turn off my firewall just in case that was the problem but no luck. I'm on windows xp SP3.
9> c(event).
{ok,event}
10> f().
ok
11> event:start("Event",0).
=ERROR REPORT==== 9-Feb-2013::15:05:07 ===
Error in process <0.61.0> with exit value: {function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
<0.61.0>
12>
-module(event).
-export([start/2, start_link/2, cancel/1]).
-export([init/3, loop/1]).
-record(state, {server,
                name="",
                to_go=0}).

%%% Public interface
start(EventName, DateTime) ->
    spawn(?MODULE, init, [self(), EventName, DateTime]).

start_link(EventName, DateTime) ->
    spawn_link(?MODULE, init, [self(), EventName, DateTime]).

cancel(Pid) ->
    %% Monitor in case the process is already dead
    Ref = erlang:monitor(process, Pid),
    Pid ! {self(), Ref, cancel},
    receive
        {Ref, ok} ->
            erlang:demonitor(Ref, [flush]),
            ok;
        {'DOWN', Ref, process, Pid, _Reason} ->
            ok
    end.

%%% Event's innards
init(Server, EventName, DateTime) ->
    loop(#state{server=Server,
                name=EventName,
                to_go=time_to_go(DateTime)}).

%% Loop uses a list for times in order to go around the ~49 days limit
%% on timeouts.
loop(S = #state{server=Server, to_go=[T|Next]}) ->
    receive
        {Server, Ref, cancel} ->
            Server ! {Ref, ok}
    after T*1000 ->
        if Next =:= [] ->
               Server ! {done, S#state.name};
           Next =/= [] ->
               loop(S#state{to_go=Next})
        end
    end.

%%% private functions
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
    Now = calendar:local_time(),
    ToGo = calendar:datetime_to_gregorian_seconds(TimeOut) -
           calendar:datetime_to_gregorian_seconds(Now),
    Secs = if ToGo > 0 -> ToGo;
              ToGo =< 0 -> 0
           end,
    normalize(Secs).

%% Because Erlang is limited to about 49 days (49*24*60*60*1000) in
%% milliseconds, the following function is used
normalize(N) ->
    Limit = 49*24*60*60,
    [N rem Limit | lists:duplicate(N div Limit, Limit)].
It's running purely locally on your machine so the firewall will not affect it.
The problem is the second argument you gave when you started it: event:start("Event",0).
The error reason:
{function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
says that it is a function_clause error which means that there was no clause in the function definition which matched the arguments. It also tells you that it was the function event:time_to_go/1 on line 48 which failed and that it was called with the argument 0.
If you look at the function time_to_go/1 you will see that it expects its argument to be a tuple of 2 elements where each element is a tuple of 3 elements:
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
The structure of this argument is {{Year,Month,Day},{Hour,Minute,Second}}. If you follow this argument backwards you will see that time_to_go/1 is called from init/3, where the argument to time_to_go/1, DateTime, is the 3rd argument to init/3. Almost there now. Now init/3 is the function run by the process spawned in start/2 (and start_link/2), and the 3rd argument to init/3 is the second argument to start/2.
So when you call event:start("Event",0), it is the 0 here which is passed into the time_to_go/1 call in the new process. And that format is wrong. You should be calling it with something like event:start("Event", {{2013,3,24},{17,53,62}}).
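If you also want the older behaviour from earlier in the tutorial, where a plain number of seconds is accepted, one option (not part of the tutorial code) would be an extra clause that hands an integer straight to normalize/1:
%% hypothetical extra clause; the tutorial's final version only accepts a datetime tuple
time_to_go(Seconds) when is_integer(Seconds), Seconds >= 0 ->
    normalize(Seconds);
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
    Now = calendar:local_time(),
    ToGo = calendar:datetime_to_gregorian_seconds(TimeOut) -
           calendar:datetime_to_gregorian_seconds(Now),
    Secs = if ToGo > 0 -> ToGo;
              ToGo =< 0 -> 0
           end,
    normalize(Secs).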
To add background to rvirding's answer: as far as I know, the example works up until the final code snippet. The normalize function is used first, which deals with the problem. Then, in the paragraph right after the example in the question above, the text says:
And it works! The last thing annoying with the event module is that we
have to input the time left in seconds. It would be much better if we
could use a standard format such as Erlang's datetime ({{Year, Month,
Day}, {Hour, Minute, Second}}). Just add the following function that
will calculate the difference between the current time on your
computer and the delay you inserted:
The next snippet introduces the code bit that takes only a date/time and
changes it to the final time left.
I could not easily link to all transitional versions of the file, which
is why trying the linked file directly with the example doesn't work
super easily in this case. If the code is followed step by step, snippet
by snippet, everything should work fine. Sorry for the confusion.
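For reference, my reading of that transitional version is that, before the datetime handling was introduced, init/3 simply normalized a plain number of seconds, along these lines (a sketch, not the exact tutorial code):
%% intermediate version: Delay is a plain number of seconds
init(Server, EventName, Delay) ->
    loop(#state{server=Server,
                name=EventName,
                to_go=normalize(Delay)}).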