I'm basically following the tutorial Learn You Some Erlang: Designing a Concurrent Application. I tried to run the code below with the following commands and got an error on line 48. I did turn off my firewall just in case that was the problem, but no luck. I'm on Windows XP SP3.
9> c(event).
{ok,event}
10> f().
ok
11> event:start("Event",0).
=ERROR REPORT==== 9-Feb-2013::15:05:07 ===
Error in process <0.61.0> with exit value: {function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
<0.61.0>
12>
-module(event).
-export([start/2, start_link/2, cancel/1]).
-export([init/3, loop/1]).

-record(state, {server,
                name="",
                to_go=0}).

%%% Public interface
start(EventName, DateTime) ->
    spawn(?MODULE, init, [self(), EventName, DateTime]).

start_link(EventName, DateTime) ->
    spawn_link(?MODULE, init, [self(), EventName, DateTime]).

cancel(Pid) ->
    %% Monitor in case the process is already dead
    Ref = erlang:monitor(process, Pid),
    Pid ! {self(), Ref, cancel},
    receive
        {Ref, ok} ->
            erlang:demonitor(Ref, [flush]),
            ok;
        {'DOWN', Ref, process, Pid, _Reason} ->
            ok
    end.

%%% Event's innards
init(Server, EventName, DateTime) ->
    loop(#state{server=Server,
                name=EventName,
                to_go=time_to_go(DateTime)}).

%% Loop uses a list for times in order to go around the ~49 days limit
%% on timeouts.
loop(S = #state{server=Server, to_go=[T|Next]}) ->
    receive
        {Server, Ref, cancel} ->
            Server ! {Ref, ok}
    after T*1000 ->
        if Next =:= [] ->
               Server ! {done, S#state.name};
           Next =/= [] ->
               loop(S#state{to_go=Next})
        end
    end.

%%% private functions
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
    Now = calendar:local_time(),
    ToGo = calendar:datetime_to_gregorian_seconds(TimeOut) -
           calendar:datetime_to_gregorian_seconds(Now),
    Secs = if ToGo > 0  -> ToGo;
              ToGo =< 0 -> 0
           end,
    normalize(Secs).

%% Because Erlang is limited to about 49 days (49*24*60*60*1000) in
%% milliseconds, the following function is used
normalize(N) ->
    Limit = 49*24*60*60,
    [N rem Limit | lists:duplicate(N div Limit, Limit)].
It's running purely locally on your machine, so the firewall will not affect it.
The problem is the second argument you gave when you started it: event:start("Event",0).
The error reason:
{function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
says that it is a function_clause error which means that there was no clause in the function definition which matched the arguments. It also tells you that it was the function event:time_to_go/1 on line 48 which failed and that it was called with the argument 0.
If you look at the function time_to_go/1 you will see that it expects its argument to be a tuple of 2 elements, where each element is a tuple of 3 elements:
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
The structure of this argument is {{Year,Month,Day},{Hour,Minute,Second}}. If you follow this argument backwards you will see that time_to_go/1 is called from init/3, where the argument to time_to_go/1, DateTime, is the 3rd argument to init/3. Almost there now. Now init/3 is the function run by the process spawned in start/2 (and start_link/2), and the 3rd argument to init/3 is the second argument to start/2.
So when you call event:start("Event",0), it is the 0 here which is passed into the time_to_go/1 call in the new process. And the format is wrong. You should be calling it with something like event:start("Event", {{2013,3,24},{17,53,52}}).
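If you just want an event to fire a short while from now, a small helper along these lines can build a datetime in that format from the current local time (a sketch; in_seconds/1 is not part of the tutorial's event module):

in_seconds(N) ->
    %% Take the current local time, add N seconds, and convert back to
    %% a {{Year,Month,Day},{Hour,Minute,Second}} tuple.
    Now = calendar:datetime_to_gregorian_seconds(calendar:local_time()),
    calendar:gregorian_seconds_to_datetime(Now + N).

With that, event:start("Event", in_seconds(60)) sets an event one minute in the future.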
To add background to rvirding's answer: as far as I know, the example works up until the final code snippet. The normalize function is used first, which deals with the problem. Then, in the paragraph right after the example in the question above, the text says:
And it works! The last thing annoying with the event module is that we
have to input the time left in seconds. It would be much better if we
could use a standard format such as Erlang's datetime ({{Year, Month,
Day}, {Hour, Minute, Second}}). Just add the following function that
will calculate the difference between the current time on your
computer and the delay you inserted:
The next snippet introduces the code bit that takes only a date/time and
changes it to the final time left.
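Concretely, before that final snippet the second argument was simply the time left in seconds, so init/3 looked roughly like this (a reconstruction for illustration, not an exact quote of the intermediate snippet):

%% Approximate intermediate version, before time_to_go/1 is introduced;
%% Delay is a plain number of seconds.
init(Server, EventName, Delay) ->
    loop(#state{server=Server,
                name=EventName,
                to_go=normalize(Delay)}).

With that version a call like event:start("Event", 500) is perfectly fine; only the final snippet switches the argument over to a datetime tuple.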
I could not easily link to all transitional versions of the file, which
is why trying the linked file directly with the example doesn't work
super easily in this case. If the code is followed step by step, snippet
by snippet, everything should work fine. Sorry for the confusion.
Consider the following minimal [?] example:
defmodule Foo do
  def bar() do
    n = IO.read(:line) |> String.trim() |> String.to_integer()

    for _ <- 0..n - 1 do
      IO.read(:line) |> IO.write()
    end
  end
end
import ExUnit.CaptureIO
capture_io("2\nabc\ndef", Foo.bar)
I did look into the documentation, and it puts no limitations on ExUnit.CaptureIO usage, but the code above hangs, waiting for the first line of input, as if it hadn't been fed any. Have I missed something?
If it matters, I'm using Elixir 1.7.3.
The second argument to capture_io needs to be a function to run with the capturing enabled. Here, you're passing in the result of running Foo.bar. That hangs forever, as it is expecting input from stdin, which never comes. Long story short, you need to pass it as a function:
capture_io("2\nabc\ndef", &Foo.bar/0)
because Foo.bar is the same as Foo.bar().
I'm writing a web game in Elm with a lot of time-dependent events, and I'm looking for a way to schedule an event at a specific time delay.
In JavaScript I used setTimeout(f, timeout), which obviously worked very well, but - for various reasons - I want to avoid JavaScript code and use Elm alone.
I'm aware that I can subscribe to Tick at a specific interval and receive clock ticks, but this is not what I want: my delays have no reasonable common denominator (for example, two of the delays are 30ms and 500ms), and I want to avoid having to handle a lot of unnecessary ticks.
I also came across Task and Process: it seems that by using them I am somehow able to do what I want with Task.perform failHandler successHandler (Process.sleep Time.second).
This works, but it is not very intuitive: my handlers simply ignore all possible input and send the same message. Moreover, I do not expect the timeout to ever fail, so creating the failure handler feels like feeding the library, which is not what I'd expect from such an elegant language.
Is there something like Task.delayMessage time message which would do exactly what I need to (send me a copy of its message argument after specified time), or do I have to make my own wrapper for it?
An updated and simplified version of #wintvelt's answer for Elm v0.18 is:
delay : Time.Time -> msg -> Cmd msg
delay time msg =
    Process.sleep time
        |> Task.perform (\_ -> msg)
with the same usage
One thing that may not be obvious at first is the fact that subscriptions can change based on the model. They are effectively evaluated after every update. You can use this fact, coupled with some fields in your model, to control what subscriptions are active at any time.
Here is an example that allows for a variable cursor blink interval:
subscriptions : Model -> Sub Msg
subscriptions model =
    if model.showCursor then
        Time.every model.cursorBlinkInterval (always ToggleCursor)
    else
        Sub.none
If I understand your concerns, this should overcome the potential for handling unnecessary ticks. You can have multiple subscriptions of different intervals by using Sub.batch.
If you want something to happen "every x seconds", then a subscription-like solution, as described by #ChadGilbert, is what you need (it is more or less like JavaScript's setInterval()).
If, on the other hand, you want something to happen only "once, after x seconds", then the Process.sleep route is the way to go. This is the equivalent of JavaScript's setTimeout(): after some time has passed, it does something once.
You probably have to make your own wrapper for it. Something like
-- for Elm 0.18
delay : Time -> msg -> Cmd msg
delay time msg =
    Process.sleep time
        |> Task.andThen (always <| Task.succeed msg)
        |> Task.perform identity
To use e.g. like this:
update msg model =
    case msg of
        NewStuff somethingNew ->
            ...

        Defer somethingNew ->
            model
                ! [ delay (Time.second * 5) <| NewStuff somethingNew ]
Elm v0.19
To execute once and delay:
delay : Float -> msg -> Cmd msg
delay time msg =
    -- create a task that sleeps for `time`
    Process.sleep time
        |> -- once the sleep is over, ignore its output (using `always`)
           -- and then we create a new task that simply returns a success, and the msg
           Task.andThen (always <| Task.succeed msg)
        |> -- finally, we ask Elm to perform the Task, which
           -- takes the result of the above task and
           -- returns it to our update function
           Task.perform identity
To execute a repeating task, use Time.every:
every : Float -> (Posix -> msg) -> Sub msg
I'm all new to Erlang, and I got this task:
Write a function "setalarm(T,Message)" that starts two processes at
the same time. After T milliseconds the first process sends a message
to the second process, and that message will be the Message arg.
It's forbidden to use library functions, only the primitives (send, receive, spawn).
As a novice I find it useful to write more code, so I suggest this option:
-module(sotest).
-export([setalarm/2, first/3, second/0]).

setalarm(T, Message) ->
    S = spawn(sotest, second, []),
    spawn(sotest, first, [S, T, Message]).

first(Pid, T, Message) ->
    receive
    after T ->
        Pid ! Message
    end.

second() ->
    receive
        Message ->
            io:format("The message is ~p~n", [Message])
    end.
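A quick check from the shell could look roughly like this (the pid is of course just an example value; the printed line appears after the T milliseconds have passed):

1> c(sotest).
{ok,sotest}
2> sotest:setalarm(2000, wake_up).
<0.64.0>
The message is wake_up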
I'm currently writing a test for a module that runs in a simple process started with spawn_link(?MODULE, init, [self()]).
In my eunit tests, I have a setup and teardown function defined and a set of test generators.
all_tests_test_() ->
    {inorder, {
        foreach,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
The setup fun creates the process-under-test:
setup() ->
    {ok, Pid} = protocol:start_link(),
    process_flag(trap_exit, true),
    error_logger:info_msg("[~p] Setting up process ~p~n", [self(), Pid]),
    Pid.
The test looks like this:
my_test(Pid) ->
    [fun() ->
         error_logger:info_msg("[~p] Sending to ~p~n", [self(), Pid]),
         Pid ! something,
         receive
             Msg -> ?assertMatch(expected_result, Msg)
         after
             500 -> ?assert(false)
         end
     end].
Most of my modules are gen_servers, but for this one I figured it would be easier without all the gen_server boilerplate code...
The output from the test looks like this:
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.117.0>] Setting up process <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.124.0>] Sending to <0.122.0>
=INFO REPORT==== 31-Mar-2014::21:20:12 ===
[<0.122.0>] Sending expected_result to <0.117.0>
protocol_test: my_test...*failed*
in function protocol_test:'-my_test/1-fun-0-'/0 (test/protocol_test.erl, line 37)
**error:{assertion_failed,[{module,protocol_test},
{line,37},
{expression,"false"},
{expected,true},
{value,false}]}
From the Pids you can see that whatever process was running setup (117) was not the same one that was running the test case (124). The process under test, however, is the same (122). This results in a failing test case because the receive never gets the message and runs into the timeout.
Is it expected behaviour that a new process gets spawned by EUnit to run the test case?
And generally, is there a better way to test a process or other asynchronous behaviour (like casts)? Or would you suggest always using gen_server to have a synchronous interface?
Thanks!
[EDIT]
To clarify, how protocol knows about the process, this is the start_link/0 fun:
start_link() ->
    Pid = spawn_link(?MODULE, init, [self()]),
    {ok, Pid}.
The protocol is tightly linked to the caller: if either of them crashes, I want the other one to die as well. I know I could use gen_server and supervisors, and I actually did that in parts of the application, but for this module I thought it was a bit over the top.
Did you try:
all_tests_test_() ->
    {inorder, {
        foreach,
        local,
        fun setup/0,
        fun teardown/1,
        [
            fun my_test/1
        ]}
    }.
From the doc, it seems to be what you need.
simple solution
Just like in Pascal's answer, adding the local flag to the test description might solve some of your problems, but it will probably cause you additional problems in the future, especially when you link yourself to the created process.
testing processes
General practice in Erlang is that while the process abstraction is crucial for writing (designing and thinking about) programs, it is not something that you would expose to the users of your code (even if the user is you). Instead of expecting someone to send your process a message with the proper data, you wrap it in a function call:
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            Msg
    after 500 ->
        timeouted
    end.
and then test this function rather than receiving something "by hand".
To distinguish a real timeout from a received timeouted atom, one can use some pattern matching and let it fail in case of error:
get_me_some_expected_result(Pid) ->
    Pid ! something,
    receive
        Msg ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.

in_my_test(Pid) ->
    {ok, ValueToBeTested} = get_me_some_expected_result(Pid).
In addition, since your process could receive many different messages in the meantime, you can make sure that you receive what you think you are receiving, with a little pattern matching and a local reference:
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
And now the receive will ignore (leave for later) all messages that do not carry the same Ref that you sent to your process.
major concern
One thing that I do not really understand is how the process you are testing knows where to send back the received message. The only logical solution would be getting the pid of its creator during initialization (the call to self/0 inside the protocol:start_link/0 function). But then our new process can communicate only with its creator, which might not be what you expect, and which is not how the tests are run.
So the simplest solution would be sending a "return address" with each call, which again can be done in our wrapping function:
get_me_some_expected_result(Pid) ->
    Ref = make_ref(),
    Pid ! {something, Ref, self()},
    receive
        {Ref, Msg} ->
            {ok, Msg}
    after 500 ->
        timeouted
    end.
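For this to work, the process under test of course has to echo the reference back to that return address. A sketch of what its receive loop might look like (the message shape and names here are assumptions for illustration, not taken from the question's protocol module):

loop(State) ->
    receive
        {something, Ref, From} ->
            %% reply to the caller, tagged with its reference
            From ! {Ref, expected_result},
            loop(State)
    end.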
Again, anyone who uses this get_me_some_expected_result/1 function will not have to worry about the message passing, and testing such functions makes things much easier.
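For example, with the foreach fixture from the question, my_test/1 could then be written in terms of the wrapper (a sketch, assuming the {something, Ref, self()} protocol above and an expected_result reply):

my_test(Pid) ->
    [fun() ->
         ?assertEqual({ok, expected_result},
                      get_me_some_expected_result(Pid))
     end].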
Hope this helps at least a little.
Maybe it's simply because you are using the foreach EUnit fixture in place of the setup one.
So, try the setup fixture: the one that uses {setup, Setup, Cleanup, Tests} instead of {inorder, {foreach, …}}.
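A minimal sketch of that, reusing setup/0, teardown/1 and my_test/1 from the question:

all_tests_test_() ->
    {setup,
     fun setup/0,
     fun teardown/1,
     fun my_test/1}.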
How do you flush the io buffer in Erlang?
For instance:
> io:format("hello"),
> io:format(user, "hello").
This post seems to indicate that there is no clean solution.
Is there a better solution than in that post?
Sadly, other than properly implementing a flush "command" in the io/kernel subsystems, and making sure that the low-level drivers that implement the actual io support such a command, you really have to simply rely on the system quiescing before closing. A failing, I think.
Have a look at io.erl/io_lib.erl in stdlib and file_io_server.erl/prim_file.erl in kernel for the gory details.
As an example, in file_io_server (which effectively takes the request from io/io_lib and routes it to the correct driver), the command types are:
{put_chars,Chars}
{get_until,...}
{get_chars,...}
{get_line,...}
{setopts, ...}
(i.e. no flush)!
As an alternative you could of course always close your output (which would force a flush) after every write. A logging module I have does something like this every time and it doesn't appear to be that slow (it's a gen_server with the logging received via cast messages):
case file:open(LogFile, [append]) of
    {ok, IODevice} ->
        io:fwrite(IODevice, "~n~2..0B ~2..0B ~4..0B, ~2..0B:~2..0B:~2..0B: ~-8s : ~-20s : ~12w : ",
                  [Day, Month, Year, Hour, Minute, Second, Priority, Module, Pid]),
        io:fwrite(IODevice, Msg, Params),
        io:fwrite(IODevice, "~c", [13]),
        file:close(IODevice);
    {error, Reason} ->
        %% could not open the log file; handle/report the error as appropriate
        {error, Reason}
end
io:put_chars(<<>>)
at the end of the script works for me.
You could run
flush().
from the shell, or try
flush() ->
    receive
        _ -> flush()
    after 0 -> ok
    end.
That works more or less like a C flush.