Let's say I have an Elixir function that should do something once every 30 minutes... Or not more often than once every 30 seconds, no matter how often it is called. Is there any good way of testing this, without having the test case take hours?
It is difficult to give an answer without a specific use case; however, one fairly simple option is to make the timeout configurable (perhaps as a function argument) and test with a short value.
e.g.
defmodule MyTest do
  use ExUnit.Case

  test "message sent every 100 milliseconds" do
    pid = self()
    MyModule.report_count_every(100, pid)
    assert_receive {:ok, 1}, 500
    assert_receive {:ok, 2}, 500
  end
end
This assumes that MyModule is a GenServer that maintains a counter and broadcasts it at the interval (in milliseconds) given as the first argument to report_count_every/2.
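For reference, a minimal sketch of what such a MyModule might look like (the module name is from the question, but the state shape and the use of Process.send_after are my assumptions, not code from the original):

defmodule MyModule do
  use GenServer

  # Starts a process that sends {:ok, count} to `pid` every `interval` ms.
  def report_count_every(interval, pid) do
    GenServer.start_link(__MODULE__, {interval, pid})
  end

  def init({interval, pid}) do
    schedule(interval)
    {:ok, %{interval: interval, pid: pid, count: 0}}
  end

  def handle_info(:tick, state) do
    count = state.count + 1
    send(state.pid, {:ok, count})
    schedule(state.interval)
    {:noreply, %{state | count: count}}
  end

  defp schedule(interval), do: Process.send_after(self(), :tick, interval)
end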
I'm using Google Mock as a framework for testing a real-time system. I want to verify that the configuration of the sensors is being applied correctly, so I send a new frequency to a sensor and expect my sensor-data callback to be called x number of times.
To allow the callback to fire a few times, I have to sleep the testing thread for a few seconds.
Because of that, the expected number of calls is not exact: if I expect my callback to be called once per second and I sleep the thread for 5 seconds, it can actually be called between 4 and 6 times; anything stricter would make the test too brittle.
The problem is that I haven't found a way to expect between 4 and 6 calls. I tried the following:
EXPECT_CALL(*handler, Data_Mock(_, _)).Times(::testing::AnyNumber());
EXPECT_CALL(*handler, Data_Mock(_, _)).Times(::testing::AtMost(6));
EXPECT_CALL(*handler, Data_Mock(_, _)).Times(::testing::AtLeast(4));
and
EXPECT_CALL(*handler, Data_Mock(_, _)).Times(::testing::AnyNumber());
EXPECT_CALL(*handler, Data_Mock(_, _)).Times(::testing::AtLeast(4));
EXPECT_CALL(*handler, Data_Mock(_, _)).Times(::testing::AtMost(6));
Try Between from the Google Mock cheat sheet: https://github.com/google/googletest/blob/master/googlemock/docs/cheat_sheet.md#cardinalities-cardinalitylist. It exists exactly for asserting that a given call happens between m and n times.
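For example, reusing the handler mock from the question, a single expectation replaces the stacked ones above:

EXPECT_CALL(*handler, Data_Mock(_, _)).Times(::testing::Between(4, 6));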
I am currently reading Programming Erlang, Second Edition: Writing Software for a Concurrent World by Joe Armstrong, and I have the following assignment:
Write a function start(AnAtom, Fun) to register AnAtom as spawn(Fun). Make sure your program works correctly in the case when two parallel processes simultaneously evaluate start/2. In this case you must guarantee that one succeeds and the other fails.
I understand the first part: I need to register the process spawned from Fun under the name AnAtom. However, what does the second part want me to do?
If two processes call start/2 at the same time, one of them must fail? Why? Given that each AnAtom will be different from any other (which will be checked inside the body of start/2), why would I want to fail one of the processes?
From what I understand so far, we have:
A = spawn(Process1),
B = spawn(Process2),
A ! {self(), register_process},  % which should call start/2
B ! {self(), register_process},  % which should call start/2
What is the problem here? Two processes will evaluate start/2. Why fail one of them? I'm probably missing the logic here, or what I've understood so far is completely wrong. Can anybody explain this in simpler terms so I can get my head around it?
I believe the exercise is asking you to think about what happens when two parallel processes evaluate start/2 using the SAME atom as the first parameter. When start(a, MyFunction) completes, there should be a spawned process (running MyFunction) registered under the name (atom) a. So what happens if
start(cool, MyFun1) and
start(cool, MyFun2)
are both executed simultaneously? How do you guarantee that one succeeds and the other fails? Does this help?
EDIT: I think you are not understanding the register-process part of the assignment. When start(name, MyFun) is done, calling whereis(name) from the shell should return the process identifier of the process that was created.
This is not about sending the process a message to give it a name; it is about registering the process you created under the name passed in as the first parameter to start/2.
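For concreteness, here is one possible shape of start/2 (the return values and the cleanup are my assumptions, not the book's solution). register/2 raises badarg when the name is already taken, which is exactly what guarantees that of two racing callers one succeeds and the other fails:

start(AnAtom, Fun) ->
    Pid = spawn(Fun),
    try register(AnAtom, Pid) of
        true -> {ok, Pid}
    catch
        error:badarg ->
            exit(Pid, kill),              % lost the race: clean up our spawn
            {error, already_registered}
    end.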
An example of this problem: when a user creates or deletes a resource, we perform the operation and also increment (or decrement) a counter cache.
In testing, there is sometimes a race condition where the counter cache has not yet been updated by the goroutine.
EDIT: Sorry about the confusion; to clarify, the counter cache is not in memory, it is actually a field in the database. The race is not on a variable in memory; it is that the goroutine might be slow to write to the database itself!
I currently sleep for one second after the operation to ensure the counter cache has been updated before asserting on it. Is there a way to test the goroutine without this arbitrary one-second sleep waiting for it to finish?
Cheers
In testing, there is sometimes a race condition where the counter cache has not yet been updated by the goroutine. I currently sleep for one second after the operation to ensure the counter cache has been updated before asserting on it.
Yikes, I hate to say it, but you're doing it wrong. Go has first-class features to make concurrency easy! If you use them correctly, you can rule out this kind of race condition entirely.
In fact, there's a tool that will detect races for you: the race detector (go test -race). I'll bet it complains about your program.
One simple solution (sketched in code after this list):
Have the main routine create a goroutine that owns the counter.
That goroutine just does a select, receiving messages to increment/decrement the counter or to read it. (For a read, it is handed a channel on which to send back the number.)
When you create/delete resources, send the appropriate message to the counter goroutine via its channel.
When you want to read the counter, send a read message and then receive from the return channel.
(Another alternative would be to use locks. It would be a tiny bit more performant, but much more cumbersome to write and to get right.)
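A minimal, self-contained sketch of that design (all names here are mine, not from the question):

package main

import "fmt"

type readReq struct {
	reply chan int // the counter goroutine sends the current value here
}

// counter owns the count; all access goes through its channels.
func counter(delta chan int, reads chan readReq) {
	n := 0
	for {
		select {
		case d := <-delta: // increment or decrement
			n += d
		case r := <-reads: // read request
			r.reply <- n
		}
	}
}

func main() {
	delta := make(chan int)
	reads := make(chan readReq)
	go counter(delta, reads)

	delta <- 1  // resource created
	delta <- 1  // resource created
	delta <- -1 // resource deleted

	reply := make(chan int)
	reads <- readReq{reply}
	fmt.Println(<-reply) // prints 1
}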
One solution is to let your counter offer a channel which is updated as soon as the value changes. In Go it is common practice to synchronize by communicating the result. For example, your Counter could look like this:
type Counter struct {
	value       int
	ValueChange chan int
}

func (c *Counter) Change(n int) {
	c.value += n
	c.ValueChange <- c.value
}
Whenever Change is called, the new value is passed through the channel, and whoever is waiting for the value unblocks and continues execution, thereby synchronizing with the counter. With this code you can listen on ValueChange for changes like this:
v := <-c.ValueChange
Concurrent calls to c.Change are no longer a problem.
There is a runnable example on the Go playground.
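Since the playground link is not reproduced here, a self-contained usage sketch along the same lines (with a single writer goroutine; this is my illustration, not the original example):

package main

import "fmt"

type Counter struct {
	value       int
	ValueChange chan int
}

func (c *Counter) Change(n int) {
	c.value += n
	c.ValueChange <- c.value // blocks until a reader receives the new value
}

func main() {
	c := &Counter{ValueChange: make(chan int)}
	go func() {
		c.Change(1)  // create
		c.Change(1)  // create
		c.Change(-1) // delete
	}()
	for i := 0; i < 3; i++ {
		fmt.Println(<-c.ValueChange) // prints 1, 2, 1
	}
}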
My current solution uses the ThreadPool to process transactions. Every couple of minutes I grab 1-200 transactions and queue each one via the QueueUserWorkItem function. Something like this, where "trans" is my collection of transactions:
For Each t As ManagerTransaction In trans
    Threading.ThreadPool.QueueUserWorkItem(AddressOf ProcessManagerTransaction, t)
Next
I want to switch over to the TPL; however, after much research I am still unsure of the best way to go about it. I have the following options, but I haven't been able to find a general consensus on what the best practice is.
1) Threading.Tasks.Parallel.ForEach(trans, AddressOf ProcessManagerTransaction)
2) Task.Factory.StartNew(AddressOf ProcessManagerTransaction, t)
2a) Task.Factory.StartNew(Sub() ProcessManagerTransaction(t))
where "t" is an individual transaction in my "trans" collection. And this combination of the two:
3) Task.Factory.StartNew(Function() Parallel.ForEach(trans, AddressOf ProcessManagerTransaction))
The first option is generally preferable because it does everything you want: parallelization and propagation of errors. Options 2 and 3 require additional means to propagate errors.
Option 2 might come into play if you need actual Task objects so you can compose them with other tasks.
I do not really see a case where I would use option 3.
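To illustrate the error-propagation point for option 1 (a sketch, assuming ProcessManagerTransaction may throw): Parallel.ForEach collects worker exceptions and rethrows them on the calling thread as a single AggregateException.

Try
    Threading.Tasks.Parallel.ForEach(trans, AddressOf ProcessManagerTransaction)
Catch ex As AggregateException
    ' Each worker exception is preserved in InnerExceptions.
    For Each inner As Exception In ex.InnerExceptions
        Console.WriteLine(inner.Message)
    Next
End Try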
I am using valgrind's callgrind to profile a GTK program, and then I use KCachegrind to read the result. I have uploaded a screenshot of KCachegrind here: http://i41.tinypic.com/168spk0.jpg. It says the function gtk_moz_embed_new() cost "15.61%".
But I don't understand how that is possible: gtk_moz_embed_new() is literally one line, and it just calls g_object_new().
GtkWidget *
gtk_moz_embed_new(void)
{
    return GTK_WIDGET(g_object_new(GTK_TYPE_MOZ_EMBED, NULL));
}
Can you please help me understand the result, or how to use KCachegrind?
Thank you.
If I remember correctly, that should mean (more or less) that gtk_moz_embed_new() was executing for 15.61% of the time the app was running.
You see, that function returns a call into other functions that also take time to execute; only when they are all done does gtk_moz_embed_new() actually return a value. It's the very same reason main() takes ~99% of the time to execute: it finishes only after all the code it calls has executed.
Note that the Self value for gtk_moz_embed_new() is 0, which is its "exclusive cost", meaning the function itself did not really take any time to execute (it's really only a return call).
But to be exact:
1.1 What is the difference between 'Incl.' and 'Self'?
These are cost attributes for functions regarding some event type. As functions can call each other, it makes sense to distinguish the cost of the function itself ('Self Cost') and the cost including all called functions ('Inclusive Cost'). 'Self' is sometimes also referred to as 'Exclusive' cost.
So e.g. for main(), you will always have an inclusive cost of almost 100%, whereas the self cost is negligible when the real work is done in another function.
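A toy C example (my illustration, not from the thread) makes the distinction concrete: profiled with valgrind --tool=callgrind, wrapper() below would show a large Inclusive cost but near-zero Self cost, just like gtk_moz_embed_new():

#include <stdio.h>

static long work(void)
{
    long s = 0;
    for (long i = 0; i < 100000000; i++)   /* virtually all the cost is here */
        s += i;
    return s;
}

static long wrapper(void)
{
    return work();   /* one line: high Inclusive cost, ~0 Self cost */
}

int main(void)
{
    printf("%ld\n", wrapper());
    return 0;
}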