Use deferred to make an infinite call loop - twisted

Can we use deferred (http://twistedmatrix.com/documents/current/core/howto/defer.html) to make an infinite call loop in which a function adds itself to a deferred chain? I tried to do this, but it doesn't work:
d = deferred.Deferred()
first = True
def loopPrinting(dump):
    ch = chr(random.randint(97, 122))
    print ch
    global d, first
    d.addCallback(loopPrinting)
    if first:
        d.callback('a')
        first = False
    return d

loopPrinting('a')
reactor.run()

This isn't a good use for Deferreds. Instead, try using reactor.callLater:
import random

from twisted.internet import reactor

def loopPrinting():
    print chr(random.randint(97, 122))
    reactor.callLater(1.0, loopPrinting)

loopPrinting()
reactor.run()
Or twisted.internet.task.LoopingCall:
import random

from twisted.internet import task, reactor

def loopPrinting():
    print chr(random.randint(97, 122))

loop = task.LoopingCall(loopPrinting)
loop.start(1.0)
reactor.run()
Your Deferred-based version has a couple problems. First, it defines a callback on a Deferred that returns the same Deferred. Returning a Deferred (let's call it a) from a callback on another Deferred (let's call it b) does something called "chaining". It makes b pause its callback chain until a has a result. In the case where a and b are actually the same Deferred instance, this makes little or no sense.
Second, when adding a callback to a Deferred that already has a result, the callback will be called immediately. In your case, your callback adds another callback. And that callback adds another callback. So you have an infinite loop all contained inside your d.addCallback(loopPrinting) line. This will prevent the reactor from ever running, breaking any other part of your program.
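For contrast, here is a minimal sketch (not from the question - the names a and b just mirror the explanation above) of what chaining looks like with two distinct Deferreds: returning a from one of b's callbacks pauses b's chain until a gets a result.
from twisted.internet import defer, reactor

a = defer.Deferred()
b = defer.Deferred()

def waitForA(result):
    print "b got:", result
    return a                     # b's chain now pauses until a has a result

def resumed(result):
    print "b resumed with:", result

b.addCallback(waitForA)
b.addCallback(resumed)

b.callback("first value")                        # b fires, runs waitForA, then pauses
reactor.callLater(1.0, a.callback, "a's value")  # a fires a second later; b resumes
reactor.callLater(2.0, reactor.stop)
reactor.run()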


Unable to assert state flow value in view model

The view model is given below
class ClickRowViewModel @Inject constructor(
    private val clickRowRepository: ClickRowRepository
): ViewModel() {

    private val _clickRowsFlow = MutableStateFlow<List<ClickRow>>(mutableListOf())
    val clickRowsFlow = _clickRowsFlow.asStateFlow()

    fun fetchAndInitialiseClickRows() {
        viewModelScope.launch {
            _clickRowsFlow.update {
                clickRowRepository.fetchClickRows()
            }
        }
    }
}
My test is as follows:
I am using InstantTaskExecutorRule as follows
@get:Rule
val instantTaskExecutorRule = InstantTaskExecutorRule()
The actual value never resolves to the expected value: $result seems to have two elements, but actualValue is an empty list. I don't know what I am doing wrong.
Update
I tried to use the first terminal operator as well, but it still returns an empty list.
Update # 2
I tried async but I got the following error
kotlinx.coroutines.test.UncompletedCoroutinesError: After waiting for 60000 ms, the test coroutine is not completing, there were active child jobs: [DeferredCoroutine{Active}#a4a38f0]
at kotlinx.coroutines.test.TestBuildersKt__TestBuildersKt$runTestCoroutine$3$3.invokeSuspend(TestBuilders.kt:342)
Update # 3
This test passes in Android Studio, but fails using CLI
You can't call toList on a SharedFlow like that:
Shared flow never completes. A call to Flow.collect on a shared flow never completes normally, and neither does a coroutine started by the Flow.launchIn function.
So calling toList will hang forever, because the flow never hits an end point where it says "ok that's all the elements", and toList needs to return a final value. Since StateFlow only contains one element at a time anyway, and you're not collecting over a period of time, you probably just want take(1).toList().
Or use first() if you don't want the wrapping list, which it seems you don't - each element in the StateFlow is a List<ClickRow>, which is what clickRowRepository.fetchClickRows() returns too. So expectedValue is a List<ClickRow>, whereas actualValue is a List<List<ClickRow>> - so they wouldn't match anyway!
Edit: your update (using first()) has a couple of issues.
First of all, the clickRowsFlow StateFlow in your ViewModel only updates when you call fetchAndInitialiseClickRows(), because that's what fetches a value and sets it on the StateFlow. You're not calling that in your second example, so it won't update.
Second, that StateFlow is going to go through two state values, right? The first is the initial empty list, the second is the row contents you get back from the repo. So when you access that StateFlow, it either needs to be after the update has happened, or (better) you need to ignore the first state and only return the second one:
val actualValue = clickRowViewModel.clickRowsFlow
    .drop(1) // ignore the initial state
    .first() // then take the first result after that

// start the update -after- setting up the flow collection,
// so there's no race condition to worry about
clickRowsViewModel.fetchAndInitialiseClickRows()
This way, you subscribe to the StateFlow and immediately get (and drop) the initial state. Then when the update happens, it should push another value to the subscriber, which takes that first new value as its final result.
But there's another complication - because fetchAndInitialiseClickRows() kicks off its own coroutine and returns immediately, that means the fetch-and-update task is running asynchronously. You need to give it time to finish, before you start asserting any results from it.
One option is to start the coroutine and then block waiting for the result to show up:
// start the update
clickRowsViewModel.fetchAndInitialiseClickRows()
// run the collection as a blocking operation, which completes when you get
// that second result
val actualValue = clickRowViewModel.clickRowsFlow
    .drop(1)
    .first()
This works so long as fetchAndInitialiseClickRows doesn't complete before the collection starts. The consumer chain above needs to see at least two values while it's subscribed - if it misses the initial state, drop(1) will discard the updated value instead, and first() will hang waiting for another value that never comes. That's a race condition, and even if it's "probably fine in practice" it still makes the test brittle.
Your other option is to subscribe first, using a coroutine so that execution can continue, and then start the update - that way the subscriber can see the initial state, and then the update that arrives later:
// async is like launch, but it returns a `Deferred` that produces a result later
val actualValue = async {
    clickRowViewModel.clickRowsFlow
        .drop(1)
        .first()
}
// now you can start the update
clickRowsViewModel.fetchAndInitialiseClickRows()
// then use `await` to block until the result is available
assertEquals(expected, actualValue.await())
You always need to make sure you handle waiting on your coroutines, otherwise the test could finish early (i.e. you do your asserting before the results are in). Like in your first example, you're launching a coroutine to populate your list, but not ensuring that has time to complete before you check the list's contents.
In that case you'd have to do something like advanceUntilIdle() - have a look at this section on testing coroutines, it shows you some ways to wait on them. This might also work for the one you're launching with fetchAndInitialiseClickRows (since it says it waits for other coroutines on the scheduler, not the same scope) but I'm not really familiar with it, you could look into it if you like!

Assign a variable to the result of a call inside a coroutine

Is there a one-liner for this?
...
var x = ""
coroutine.launch {
    x = StoreHelper.getProductPrice(mdProducts[0].id)
}
return x
You shouldn't rely on the variable x having a value other than "" in that case. Coroutines run asynchronously, so x can be returned with the value "" before StoreHelper.getProductPrice() has even run.
You can rewrite the code to something like the following:
coroutine.launch {
    val x = StoreHelper.getProductPrice(mdProducts[0].id)
    // do something with x
}
but you can't return x from the function in this case.
And as far as I know, there is no one-liner for that.
You might want to use async instead, which returns a Deferred
return coroutine.async { StoreHelper.getProductPrice(mdProducts[0].id) }
and later you can await the result in a coroutine - or if you want to use the experimental getCompleted function and hope it's completed, you can do that. Not the best idea for handling an asynchronous result though, await is what you should really be using.
Bear in mind that this is the whole issue with async code - stuff happens simultaneously, some things take longer than others, so you need a way to handle the results when they're ready. You can't return x until you have it, so you either need to block until the coroutine finishes, or return something like the Deferred (which is like a Java Future) so you have a reference to the running job you can check later. Or don't return a result at all, and just have that job do something when it's finished, like calling a function.
This section on async functions might be helpful to look through, to give you some ideas about what you need and what you can do.

How to use OCMock to verify that an asynchronous method does not get called in Objective C?

I want to verify that a function is not called. The function is executed in an asynchronous block call inside the tested function and therefore OCMReject() does not work.
The way I have tested if async functions are indeed called would be as follows:
id mock = OCMClassMock([SomeClass class]);
OCMExpect([mock methodThatShouoldExecute]);
OCMVerifyAllWithDelay(mock, 1);
How would a test be done to test if a forbidden function is not called?
Something like:
VerifyNotCalled([mock methodThatShouoldExecute]);
OCMVerifyAllWithDelay(mock, 1);
I would recommend using an OCMStrictClassMock instead of the OCMClassMock (which gives you a nice mock). A strict mock will instantly fail your test if any method is called on it that you did not stub or expect, which makes your tests a lot more rigorous.
If that's not an option for you, you can do what you described with:
OCMReject([mock methodThatShouoldExecute]);
See the "Failing fast for regular (nice) mocks" section in the OCMock docs.
Now as for waiting for your code which may call the forbidden method, that's another matter. You can't use OCMVerifyAllWithDelay since that returns immediately as soon as all expectations are met, it doesn't wait around a full second to see if illegal calls will be made to it. One option is to put a 1 second wait before verifying the mock each time. Ideally, you could also wait explicitly on your asynchronous task with an XCTestExpectation. Something like:
XCTestExpectation *asyncTaskCompleted = [self expectationWithDescription:@"asyncTask"];
// Enqueued, in an onCompletion block, or whatever call
// ... [asyncTaskCompleted fulfill]
[self waitForExpectationsWithTimeout:1 handler:nil];

The stack of a lua coroutine is entered implicitly without a call to resume?

I am using lua coroutines (lua 5.1) to create a plugin system for an application. I was hoping to use coroutines so that the plugin could operate as if it were a separate application program which yields once per processing frame. The plugin programs generally follow a formula something like:
function Program(P)
    -- setup --
    NewDrawer(function()
        -- this gets rendered in a window for this plugin program --
        drawstuff(howeveryouwant)
    end)
    -- loop --
    local continue = true
    while continue do
        -- frame by frame stuff excluding rendering (handled by NewDrawer) --
        P = coroutine.yield()
    end
end
Each plugin is resumed in the main loop of the application once per frame. Then when drawing begins each plugin has an individual window it draws in which is when the function passed to NewDrawer is executed.
Something like this:
while MainContinue do
    -- other stuff left out --
    ExecutePluginFrames() -- all plugin coroutines resumed once
    BeginRendering()
    -- other stuff left out --
    RenderPluginWindows() -- functions passed to NewDrawer called.
    EndRendering()
end
However, I found that this suddenly began acting strangely and messing up my otherwise robust error handling whenever an error occurred in the rendering. It took me a little while to wrap my head around what was happening, but it seems that the call to WIN:Draw(), which I expected to be on the main thread's call stack (because it is handled by the main application), was actually causing an implicit jump into the coroutine's call stack.
At first the issue was that the program was closing suddenly with no useful error output. Then, after looking at a stack traceback from the rendering function defined in the plugin program, I saw that everything leading up to the window's Draw from the main thread was missing, and that yield was in the call stack.
It seems that because the window and the drawing function were created inside the coroutine, they are being handled by that coroutine's call stack, which is a problem because it means they are outside of the pcall set up in the main thread.
Is this supposed to happen? Is it the result of a bug/shortcut in the C source? Am I doing something wrong, or at least not doing it correctly? Is there a way to handle this cleanly?
I can't reproduce the effect you are describing. This is the code I'm running:
local drawer = {}

function NewDrawer(func)
    table.insert(drawer, func)
end

function Program(P)
    NewDrawer(function()
        print("inside program", P)
    end)
    -- loop --
    local continue = true
    while continue do
        -- frame by frame stuff excluding rendering (handled by NewDrawer) --
        P = coroutine.yield()
    end
end

local coro = coroutine.create(Program)

local MainContinue = true
while MainContinue do
    -- other stuff left out --
    -- ExecutePluginFrames() -- all plugin coroutines resumed once
    coroutine.resume(coro, math.random(10))
    -- RenderPluginWindows() -- functions passed to NewDrawer called.
    for _, plugin in ipairs(drawer) do
        plugin()
    end
    MainContinue = false
end
When I step through the code and look at the stack, the callback that is set in NewDrawer is called in the "main" thread, as it should be. You can see this yourself by calling coroutine.running(), which returns the current thread, or nil if you are inside the main thread.
I have discovered why this was happening in my case. The render objects which call the function passed to NewDrawer are initialized on creation (by the C code) with a pointer to the lua_State that created them, and that pointer is used both for accessing their associated Lua data and for calling the draw function. I had not seen the connection between lua_State and coroutines. So, as it turns out, functions can end up being called on a coroutine's stack after it has yielded, if C code invokes them through that coroutine's lua_State.
As far as a solution goes I've decided to break the program into two coroutines, one for rendering and one for processing. This fixes the problem by allowing the creating thread of the render objects to also be the calling thread, and keeps the neat advantages of the independence of the rendering loop and processing loop.

Enforcing method order in a Python module [closed]

What is the most Pythonic way to deal with a module in which methods must be called in a certain order?
As an example, I have an XML configuration that must be read before doing anything else because the configuration affects behavior.
parse_config() must be called first, with the configuration file provided. Other supporting methods, like query_data(), won't work until parse_config() has been called.
I first implemented this as a singleton to ensure that a filename for the configuration is passed at the time of initialization, but then I noticed that modules are effectively singletons themselves, so it's no longer a class, just a regular module.
What's the best way to enforce the parse_config being called first in a module?
It is worth noting that the function is actually parse_config(configfile).
If the object isn't valid until parse_config() has been called, then call that method in __init__ (or use a factory function). You don't need any silly singletons, that's for sure.
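To make that concrete, here is a minimal sketch; the Config class, its _parse helper, and the "settings.xml" filename are made up for illustration, with a dict standing in for whatever parse_config(configfile) really does:
class Config(object):
    def __init__(self, configfile):
        # parsing is part of construction, so an instance is always ready to use
        self._data = self._parse(configfile)

    def _parse(self, configfile):
        # stand-in for the real parse_config(configfile) logic
        return {"source": configfile}

    def query_data(self, key):
        return self._data.get(key)

config = Config("settings.xml")   # impossible to forget to parse
print config.query_data("source")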
The model I have been using is that subsequent functions are only available as methods on the return value of previous functions, like this:
class Second(object):
    def two(self):
        print "two"
        return Third()

class Third(object):
    def three(self):
        print "three"

def one():
    print "one"
    return Second()

one().two().three()
Properly designed, this style (which I admit is not terribly Pythonic, yet) makes for fluent libraries to handle complex pipeline operations where later steps in the library require both the results of early calculations and fresh input from the calling function.
An interesting result is error handling. What I've found is that the best way to handle well-understood errors in pipeline steps is an Error class whose methods cover every function in the pipeline (except the initial one), and those methods (except possibly the terminal ones) just return self:
class Error(object):
    def two(self, *args):
        print "two not done because of earlier errors"
        return self
    def three(self, *args):
        print "three not done because of earlier errors"

class Second(object):
    def two(self, arg):
        if arg == 2:
            print "two"
            return Third()
        else:
            print "two cannot be done"
            return Error()

class Third(object):
    def three(self):
        print "three"

def one(arg):
    if arg == 1:
        print "one"
        return Second()
    else:
        print "one cannot be done"
        return Error()

one(1).two(-1).three()
In your example, you'd have the Parser class, which would have almost nothing but a configure function that returned an instance of a ConfiguredParser class, which would do all the things that only a properly configured parser could do. This gives you access to such things as multiple configurations and handling failed attempts at configuration.
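A rough sketch of that split - Parser, ConfiguredParser, and the dict standing in for the parsed XML are illustrative names, not anything from the original module:
class ConfiguredParser(object):
    def __init__(self, settings):
        self._settings = settings

    def query_data(self, key):
        # only reachable once configuration has succeeded
        return self._settings.get(key)

class Parser(object):
    def configure(self, configfile):
        # stand-in for the real XML parsing; a failed attempt could
        # raise, or return an Error-style object as described above
        settings = {"source": configfile}
        return ConfiguredParser(settings)

data = Parser().configure("settings.xml").query_data("source")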
As Cat Plus Plus said in other words, wrap the behaviour/functions up in a class and put all the required setup in the __init__ method.
You might complain that the functions don't seem like they naturally belong together in an object and, hence, this is bad OO design. If that's the case, think of your class/object as a form of name-spacing. It's much cleaner and more flexible than trying to enforce function calling order somehow or using singletons.
The simple requirement that a module needs to be "configured" before it is used is best handled by a class which does the "configuration" in the __init__ method, as in the currently-accepted answer. Other module functions become methods of the class. There is no benefit in trying to make a singleton ... the caller may well want to have two or more differently-configured gadgets operating simultaneously.
Moving on from that to a more complicated requirement, such as a temporal ordering of the methods:
This can be handled in a quite general fashion by maintaining state in attributes of the object, as is usually done in any OOPable language. Each method that has prerequisites must check that those prerequisites are satisfied.
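As a minimal sketch of that state-checking approach (the names are illustrative and the parsing is faked with a dict), each method that needs configuration checks a flag that parse_config() sets:
class NotConfiguredError(Exception):
    "raised when a method is called before parse_config()"

class Settings(object):
    def __init__(self):
        self._parsed = False
        self._data = None

    def parse_config(self, configfile):
        self._data = {"source": configfile}  # stand-in for real parsing
        self._parsed = True

    def query_data(self, key):
        if not self._parsed:
            raise NotConfiguredError("parse_config() must be called first")
        return self._data.get(key)

s = Settings()
s.parse_config("settings.xml")
print s.query_data("source")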
Poking in replacement methods is an obfuscation on a par with the COBOL ALTER verb, and made worse by using decorators -- it just wouldn't/shouldn't get past code review.
It comes down to how friendly you want your error messages to be if a function is called before it is configured.
Least friendly is to do nothing extra, and let the functions fail noisily with AttributeErrors, IndexErrors, etc.
Most friendly would be having stub functions that raise an informative exception, such as a custom ConfigError: configuration not initialized. When parse_config() is called, it can then replace the stub functions with the real functions.
Something like this:
File config.py:
class ConfigError(Exception):
    "configuration errors"

def query_data():
    raise ConfigError("parse_config() has not been called")

def _query_data():
    do_actual_work()

def parse_config(config_file):
    load_file(config_file)
    if failure:
        raise ConfigError("bad file")
    all_objects = globals()
    for name in ('query_data', ):
        working_func = all_objects['_'+name]
        all_objects[name] = working_func
If you have very many functions you can add decorators to keep track of the function names, but that's an answer for a different question. ;)
Okay, I couldn't resist -- here is the decorator version, which makes my solution much easier to actually implement:
class ConfigError(Exception):
    "various configuration errors"

class NeedsConfig(object):
    def __init__(self, module_namespace):
        self._namespace = module_namespace
        self._functions = dict()
    def __call__(self, func):
        self._functions[func.__name__] = func
        return self._stub
    @staticmethod
    def _stub(*args, **kwargs):
        raise ConfigError("parseconfig() needs to be called first")
    def go_live(self):
        for name, func in self._functions.items():
            self._namespace[name] = func
And a sample run:
needs_parseconfig = NeedsConfig(globals())

@needs_parseconfig
def query_data():
    print "got some data!"

@needs_parseconfig
def set_data():
    print "set the data!"

def okay():
    print "Okay!"

def parse_config(somefile):
    needs_parseconfig.go_live()

try:
    query_data()
except ConfigError, e:
    print e

try:
    set_data()
except ConfigError, e:
    print e

try:
    okay()
except:
    print "this shouldn't happen!"
    raise

parse_config('config_file')

query_data()
set_data()
okay()
And the results:
parseconfig() needs to be called first
parseconfig() needs to be called first
Okay!
got some data!
set the data!
Okay!
As you can see, the decorator works by remembering the functions it decorates, and instead of returning a decorated function it returns a simple stub that raises a ConfigError if it is ever called. When the parse_config() routine is called, it needs to call the go_live() method which will then replace all the error raising stubs with the actual remembered functions.
A module doesn't do anything it isn't told to do, so put your function calls at the bottom of the module; that way, when you import it, things get run in the order you specify:
test.py
import testmod
testmod.py
def fun1():
    print('fun1')

def fun2():
    print('fun2')

fun1()
fun2()
When you run test.py, you'll see that fun1 is run before fun2:
python test.py
fun1
fun2