Disable schedule function in VOLTTRON

To execute a function periodically, I use the following call:

self.core.schedule(periodic(t), periodic_function)

I would like to disable this periodic schedule when certain conditions are met. Does anyone know how?

@GYOON
I would do what the code here does: https://github.com/VOLTTRON/volttron/blob/master/services/core/VolttronCentralPlatform/vcplatform/agent.py#L320
Basically, in the agent's onstart/onconfig method, the function to be executed is kicked off in a spawn_later greenlet:
import datetime

from volttron.platform.agent.utils import get_aware_utc_now
from volttron.platform.vip.agent import Agent, Core


class MyAgent(Agent):
    def __init__(self, **kwargs):
        super(MyAgent, self).__init__(**kwargs)
        self._periodic_event = None
        self.enabled = True

    @Core.receiver('onstart')
    def onstart(self, sender, **kwargs):
        self.core.spawn_later(1, self._my_periodic_event)

    def _my_periodic_event(self):
        if self._periodic_event is not None:
            self._periodic_event.cancel()

        # do stuff here within this event loop

        if self.enabled:
            # get_aware_utc_now is an internal volttron function; see the
            # referenced link for the import
            now = get_aware_utc_now()
            next_update_time = now + datetime.timedelta(seconds=20)
            self._periodic_event = self.core.schedule(next_update_time,
                                                      self._my_periodic_event)
The good thing about this approach is that it gives you complete control over the scheduling process: enabling, disabling, when to start, etc. You can also change the number of seconds on the fly through member variables.
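For example, disabling is then just a matter of flipping the flag and cancelling any pending event. The disable/enable methods below are a sketch of mine, not part of the VOLTTRON API:

class MyAgent(Agent):
    # ... continued from above ...

    def disable(self):
        # stop the cycle; _my_periodic_event will no longer reschedule itself
        self.enabled = False
        if self._periodic_event is not None:
            self._periodic_event.cancel()
            self._periodic_event = None

    def enable(self):
        # restart the cycle
        if not self.enabled:
            self.enabled = True
            self.core.spawn_later(1, self._my_periodic_event)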
Again, sorry for the late response on this, but hopefully this helps!

How to close a QInputDialog after a defined amount of time

I'm currently working on an application that runs in the background and sometimes creates an input dialog for the user to answer. If the user doesn't interact, I'd like to close the dialog after 30 seconds. I made a QThread that acts like a timer, and its "finished" signal should close the dialog. Unfortunately, I cannot find a way to close it.
At this point I'm pretty much lost. I'm completely new to QThread and a beginner in PyQt5.
Here is a simplified version of the code (we are inside a class running a UI):
def Myfunction(self, q):
    # q : [q1, q2, q3]
    self.popup = counter_thread()
    self.popup.start()
    self.dial = QInputDialog
    self.popup.finished.connect(self.dial.close)
    text, ok = self.dial.getText(self, 'Time to compute !', '%s %s %s = ?' % (q[0], q[2], q[1]))
    #[...]
I tried ".close()" and others, but I got this error message:

TypeError: close(self): first argument of unbound method must have type 'QWidget'

I did it in a separate function but got the same problem.
You cannot close it because the self.dial you created is just an alias (another reference) to a class, not an instance.
Also, getText() is a static function that internally creates the dialog instance, and you have no access to it.
While it is possible to get that dialog through some tricks (installing an event filter on the QApplication), there's no point in complicating things: instead of using the static function, create a full instance of QInputDialog.
def Myfunction(self, q):
    # q : [q1, q2, q3]
    self.popup = counter_thread()
    self.dial = QInputDialog(self)  # <- this is an instance!
    self.dial.setInputMode(QInputDialog.TextInput)
    self.dial.setWindowTitle('Time to compute !')
    self.dial.setLabelText('%s %s %s = ?' % (q[0], q[2], q[1]))
    self.popup.finished.connect(self.dial.reject)
    self.popup.start()
    if self.dial.exec():
        text = self.dial.textValue()
Note that I start the thread just before showing the dialog, in the rare case that it finishes immediately; for the same reason, the signal is connected before the thread is started.
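As an aside, if the thread exists only to act as a timer, PyQt's QTimer.singleShot can achieve the same result without a separate thread. A sketch, assuming the same 30-second timeout:

from PyQt5.QtCore import QTimer

self.dial = QInputDialog(self)
self.dial.setInputMode(QInputDialog.TextInput)
# automatically reject (close) the dialog after 30 seconds
QTimer.singleShot(30000, self.dial.reject)
if self.dial.exec():
    text = self.dial.textValue()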

Most elegant way to execute CPU-bound operations in asyncio application?

I am trying to develop part of a system that has the following requirements:
- send a health status to a remote server (every X seconds)
- receive requests for executing/canceling CPU-bound job(s) (for example: clone a git repo, compile it using conan, etc.)
I am using socketio.AsyncClient to handle these requirements.
import git
import socketio


class CompileJobHandler(socketio.AsyncClientNamespace):
    def __init__(self, namespace_val):
        super().__init__(namespace_val)
        # some init variables

    async def _clone_git_repo(self, git_repo: str):
        # clone repo and return its instance
        return repo

    async def on_availability_check(self, data):
        # the health status
        await self.emit('availability_check', " all good ")

    async def on_cancel_job(self, data):
        # cancel the current job
        ...

    def _reset_job(self):
        # reset job logic
        ...

    def _reset_to_specific_commit(self, repo: git.Repo, commit_hash: str):
        # reset to specific commit
        ...

    def _compile(self, is_debug):
        # compile logic - might be CPU intensive
        ...

    async def on_execute_job(self, data):
        # request to execute the job (compile in our case)
        job_details = ...  # parsed from data (omitted)
        try:
            repo = await self._clone_git_repo(job_details.git_repo)
            self._reset_to_specific_commit(repo, job_details.commit_hash)
            self._compile(job_details.is_debug)
            await self.emit('execute_job_response',
                            self._prepare_response("SUCCESS", "compile successfully"))
        except Exception as e:
            await self.emit('execute_job_response',
                            self._prepare_response(e.args[0], e.args[1]))
        finally:
            self._reset_job()
The problem with the above code is that when an execute_job message arrives, blocking code runs and stalls the whole asyncio event loop.
To solve this problem, I have used a ProcessPoolExecutor with the asyncio event loop, as shown here: https://stackoverflow.com/questions/49978320/asyncio-run-in-executor-using-processpoolexecutor
After using it, the clone/compile functions are executed in another process, which almost achieves my goals.
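For reference, that pattern looks roughly like this (a sketch with a hypothetical clone_and_compile helper, not my actual code):

import asyncio
from concurrent.futures import ProcessPoolExecutor


def clone_and_compile(git_repo, commit_hash, is_debug):
    # runs in a worker process, so it must be a plain picklable function;
    # clone, reset and compile here, then return a result
    return "SUCCESS"


class CompileJobHandler(socketio.AsyncClientNamespace):
    executor = ProcessPoolExecutor(max_workers=1)

    async def on_execute_job(self, data):
        loop = asyncio.get_running_loop()
        # the CPU-bound work runs in a separate process, so the event
        # loop stays free to answer availability_check messages
        result = await loop.run_in_executor(
            self.executor, clone_and_compile,
            data['git_repo'], data['commit_hash'], data['is_debug'])
        await self.emit('execute_job_response', result)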
The questions I have are:
- How can I design the code of the process more elegantly? (Right now I have some static functions, and I don't like that.) One approach is to keep it as is; another is to pre-initialize an object (let's call it CompileExecuter), create an instance of that type before starting the process, and then let the process use it.
- How can I stop the process in the middle of its execution (if I receive an on_cancel_job request)?
- How can I handle exceptions raised by the process correctly?
Other approaches to handle these requirements are welcome.

How to wrap asyncio with iterator

I have the following simplified code:
async def asynchronous_function(*args, **kwds):
    statement = await prepare(query)
    async with conn.transaction():
        async for record in statement.cursor():
            ??? yield record ???
...

class Foo:
    def __iter__(self):
        records = ??? asynchronous_function ???
        yield from records
...

x = Foo()
for record in x:
    ...
I don't know how to fill in the ??? above. I want to yield the record data, but it's really not obvious how to wrap asyncio code.
While it is true that asyncio is intended to be used across the board, sometimes it is simply impossible to immediately convert a large piece of software (with all its dependencies) to async. Fortunately there are ways to combine legacy synchronous code with newly written asyncio portions. A straightforward way to do so is by running the event loop in a dedicated thread, and using asyncio.run_coroutine_threadsafe to submit tasks to it.
With those low-level tools you can write a generic adapter to turn any asynchronous iterator into a synchronous one. For example:
import asyncio, threading, queue

# create a fresh asyncio loop that runs in a background thread to
# serve our asyncio needs
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

def wrap_async_iter(ait):
    """Wrap an asynchronous iterator into a synchronous one"""
    q = queue.Queue()
    _END = object()

    def yield_queue_items():
        while True:
            next_item = q.get()
            if next_item is _END:
                break
            yield next_item
        # After observing _END we know the aiter_to_queue coroutine has
        # completed. Invoke result() for side effect - if an exception
        # was raised by the async iterator, it will be propagated here.
        async_result.result()

    async def aiter_to_queue():
        try:
            async for item in ait:
                q.put(item)
        finally:
            q.put(_END)

    async_result = asyncio.run_coroutine_threadsafe(aiter_to_queue(), loop)
    return yield_queue_items()
Then your code just needs to call wrap_async_iter to wrap an async iter into a sync one:
async def mock_records():
    for i in range(3):
        yield i
        await asyncio.sleep(1)

for record in wrap_async_iter(mock_records()):
    print(record)
In your case Foo.__iter__ would use yield from wrap_async_iter(asynchronous_function(...)).
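Applied to the skeleton in the question, that would look something like this (a sketch; asynchronous_function is assumed to be the async generator defined above):

class Foo:
    def __iter__(self):
        yield from wrap_async_iter(asynchronous_function())

x = Foo()
for record in x:
    ...  # consume records synchronously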
If you want to receive all records from the async generator, you can use async for or, for brevity, an asynchronous comprehension:
async def asynchronous_function(*args, **kwds):
    # ...
    yield record

async def aget_records():
    records = [
        record
        async for record
        in asynchronous_function()
    ]
    return records
If you want to get the result of an asynchronous function synchronously (i.e. blocking), you can just run it in an asyncio event loop:
def get_records():
    records = asyncio.run(aget_records())
    return records
Note, however, that once you run a coroutine in the event loop this way, you lose the ability to run it concurrently with other coroutines and thus receive the related benefits.
As Vincent already pointed out in the comments, asyncio is not a magic wand that makes code faster; it's an instrument that can sometimes be used to run different I/O tasks concurrently with low overhead.
You may be interested in reading this answer to see the main idea behind asyncio.

What's the equivalent of moment-yielding (from Tornado) in Twisted?

Part of the implementation of inlineCallbacks is this:
if isinstance(result, Deferred):
    # a deferred was yielded, get the result.
    def gotResult(r):
        if waiting[0]:
            waiting[0] = False
            waiting[1] = r
        else:
            _inlineCallbacks(r, g, deferred)

    result.addBoth(gotResult)
    if waiting[0]:
        # Haven't called back yet, set flag so that we get reinvoked
        # and return from the loop
        waiting[0] = False
        return deferred

    result = waiting[1]
    # Reset waiting to initial values for next loop. gotResult uses
    # waiting, but this isn't a problem because gotResult is only
    # executed once, and if it hasn't been executed yet, the return
    # branch above would have been taken.
    waiting[0] = True
    waiting[1] = None
As shown, if in an inlineCallbacks-decorated function I make a call like this:
@inlineCallbacks
def myfunction(a, b):
    c = callsomething(a)
    yield twisted.internet.defer.succeed(None)
    print(callsomething2(b, c))
this yield returns control to the function immediately (that is, the generator is not re-scheduled; it simply continues from the yield). This contrasts with Tornado's tornado.gen.moment (which is nothing more than an already-resolved Future with a result of None), which makes the yielder re-schedule itself regardless of whether the future is already resolved.
How can I get the behavior Tornado provides when yielding a dummy future like moment?
The equivalent might be something like a yielding a Deferred that doesn't fire until "soon". reactor.callLater(0, ...) is generally accepted to create a timed event that doesn't run now but will run pretty soon. You can easily get a Deferred that fires based on this using twisted.internet.task.deferLater(reactor, 0, lambda: None).
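A minimal sketch of that approach, reusing the myfunction example above (callsomething and callsomething2 are placeholders from the question):

from twisted.internet import reactor, task
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def myfunction(a, b):
    c = callsomething(a)
    # deferLater(reactor, 0, ...) fires on a later reactor iteration,
    # so other pending events get a chance to run before we continue
    yield task.deferLater(reactor, 0, lambda: None)
    print(callsomething2(b, c))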
You may want to look at alternate scheduling tools instead, though (in both Twisted and Tornado). This kind of re-scheduling trick generally only works in small, simple applications. Its effectiveness diminishes the more tasks concurrently employ it.
Consider whether something like twisted.internet.task.cooperate might provide a better solution instead.
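For illustration, cooperate runs an iterator of work in reactor-friendly slices, interleaving it with other event sources (a sketch; do_chunk is a hypothetical unit of work):

from twisted.internet import task

def work():
    for item in range(1000):
        do_chunk(item)  # hypothetical unit of work
        yield           # give other tasks a turn between chunks

cooperative = task.cooperate(work())
d = cooperative.whenDone()  # Deferred that fires when the work finishes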

Celery: Task Singleton?

I have a task that I need to run asynchronously from the web page that triggered it. This task runs rather long, and as the web page could be getting a lot of these requests, I'd like celery to only run one instance of this task at a given time.
Is there any way I can do this in Celery natively? I'm tempted to create a database table that holds this state for all the tasks to communicate with, but it feels hacky.
You can probably create a dedicated worker for that task, configured with CELERYD_CONCURRENCY=1; all tasks routed to that worker will then run one at a time.
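That setup might look like this (the queue name singleton and the module paths are just examples):

# celeryconfig.py - route the long-running task to its own queue
CELERY_ROUTES = {'myapp.tasks.long_task': {'queue': 'singleton'}}

# then start a dedicated worker for that queue with a single process:
#   celery worker -A myapp -Q singleton --concurrency=1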
You can use memcache/redis for that.
There is an example on the celery official site - http://docs.celeryproject.org/en/latest/tutorials/task-cookbook.html
And if you prefer redis (this is a Django implementation, but you can easily modify it for your needs):
from celery import Task
from django.core.cache import cache
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

class SingletonTask(Task):
    def __call__(self, *args, **kwargs):
        lock = cache.lock(self.name)
        if not lock.acquire(blocking=False):
            logger.info("{} failed to lock".format(self.name))
            return
        try:
            super(SingletonTask, self).__call__(*args, **kwargs)
        finally:
            lock.release()
And then use it as a base task:
from celery import shared_task

@shared_task(base=SingletonTask)
def test_task():
    from time import sleep
    sleep(10)
This implementation is nonblocking. If you want the next task to wait for the previous one, change blocking=False to blocking=True and add a timeout.
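The blocking variant might then look like this (assuming django-redis, where cache.lock returns a redis-py lock that accepts these arguments):

# expire the lock after 10 minutes in case a worker dies mid-task
lock = cache.lock(self.name, timeout=600)
# wait up to 30 seconds for the previous task to release the lock
if not lock.acquire(blocking=True, blocking_timeout=30):
    logger.info("{} failed to lock".format(self.name))
    return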