Combining trio and flask - python-trio

I'm trying to make an HTTP API that can create and destroy concurrent tasks that open TCP connections to remote servers streaming ~15-second data. I'll have to figure out how to handle the data later. For now I just print it.
In the example below, I can create multiple TCP connections by navigating to http://192.168.1.1:5000/addconnection.
Questions:
1) Is this approach reasonable? I think Flask may be creating a new thread for each /addconnection request. I'm not sure what performance limits I'll hit doing that.
2) Is it possible to keep track of each connection? I'd like to implement /listconnections and /removeconnections.
3) Is there a more Pythonic way to do this? I've read a little about Celery, but I don't understand it very well yet. Perhaps there are other existing tools for handling similar problems.
import trio
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

@app.route("/addconnection")
def addconnection():
    async def receiver(client_stream):
        print("Receiver: started!")
        while True:
            data = await client_stream.receive_some(16800)
            print("Received Data: {}".format(data))

    async def parent():
        async with trio.open_nursery() as nursery:
            client_stream = await trio.open_tcp_stream('192.168.1.1', 1234)
            nursery.start_soon(receiver, client_stream)

    trio.run(parent)

1) You will create a new event loop for each /addconnection request, which will block the Flask runtime. This will likely limit you to a single request per thread.
2) Yes, in the simplest case you can store them in a global set; see connections below.
3) I'm the author of Quart-Trio, which I think is a better way. Quart is the Flask API re-implemented with async/await (which solves most of 1)). Quart-Trio is an extension that uses Trio rather than asyncio for Quart.
Roughly (and I've not tested this) your code becomes,
import trio
from quart_trio import QuartTrio

connections = set()

app = QuartTrio(__name__)

@app.route("/")
async def hello():
    return "Hello World!"

@app.route("/addconnection")
async def addconnection():
    async def receiver(client_stream):
        print("Receiver: started!")
        while True:
            data = await client_stream.receive_some(16800)
            print("Received Data: {}".format(data))

    async def parent():
        async with trio.open_nursery() as nursery:
            client_stream = await trio.open_tcp_stream('192.168.1.1', 1234)
            connections.add(client_stream)
            nursery.start_soon(receiver, client_stream)
        connections.remove(client_stream)

    app.nursery.start_soon(parent)
    return "Connection Created"

if __name__ == "__main__":
    # Allows this to run and serve via python script.py
    # For production use `hypercorn -k trio script:app`
    app.run()
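To go with point 2, here is a rough, untested sketch of what a /listconnections route could look like on top of the same global connections set; the route name and the plain-text response are my own choices, not something from the original answer, and the route would live in the same script as the code above:

@app.route("/listconnections")
async def listconnections():
    # Report the streams currently held in the global set.
    return "\n".join(repr(stream) for stream in connections)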

Where you have async def receiver(client_stream):, I would put an await trio.sleep(0.029) between loop iterations to give the rest of the program a chance to run; you can increase the sleep time according to how busy you want the function to be. If you run that loop as a tight busy loop, your app is likely to freeze. Also, cancel scopes (timeouts) should be used so you are not stuck reading data forever.
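As a concrete, untested illustration of that advice, the receiver could be written with a trio cancel scope and a small sleep; the 30-second idle timeout and the handling of an empty read are my own assumptions, not part of the original code:

async def receiver(client_stream):
    print("Receiver: started!")
    # Assumed idle timeout: give up if 30 seconds pass without the
    # deadline being pushed back by a received chunk.
    with trio.move_on_after(30) as cancel_scope:
        while True:
            data = await client_stream.receive_some(16800)
            if not data:
                break  # the remote side closed the connection
            print("Received Data: {}".format(data))
            cancel_scope.deadline = trio.current_time() + 30  # reset the timeout
            await trio.sleep(0.029)  # yield to the rest of the program
    if cancel_scope.cancelled_caught:
        print("Receiver: timed out waiting for data")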

Related

Running sync code asynchronously in Kotlin

I need to fetch some data using multiple calls. My code is synchronous, and I can call a synchronous getData method (which makes a network request) multiple times to get everything. To improve performance, I want to execute all these calls concurrently using coroutines, but I don't know if this code is really doing what I want it to.
My relevant Kotlin code looks roughly like:
fun getAllData(): List<Data> {
    val coros = listOf(1, 2, 3, 4).map { num ->
        GlobalScope.async(Dispatchers.IO) { getData(num) }
    }
    return runBlocking { coros.awaitAll() }
}
I can edit this code, but I cannot edit the getData method.
Is this code running the getData calls concurrently? I'm suspicious that wrapping this in an async block doesn't solve the issue: presumably somewhere inside getData I would need to suspend and return control back to the async block so the next call can start, which I am not doing since getData is synchronous.
If this isn't running concurrently, is there any way I can make the calls behave asynchronously? Would the overhead of threads typically be worth the time saved in making multiple concurrent network requests?

PyQt5: is QTimer running in a separate thread, and is it blocking?

I have an app which uses a database. I want to set a timer to launch a function which modifies the db periodically, but I want to be sure that it is blocking, so that there are no read/write operations on the db until this function finishes.
My QTimer is in the GUI thread, so as far as I understand, its slot will block the main thread. Am I right?
import sys

from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication

import AppWindow  # generated UI module

class AppLauncher(QtWidgets.QMainWindow, AppWindow.Ui_MainWindow):
    def __init__(self, parent=None):
        super(AppLauncher, self).__init__(parent)
        self.setupUi(self)
        flags = QtCore.Qt.WindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.WindowStaysOnTopHint)
        self.setWindowFlags(flags)
        self.setWindowState(QtCore.Qt.WindowFullScreen)
        self.fooTimer = QTimer(self)
        self.fooTimer.timeout.connect(self.foo)

    def foo(self):
        pass

def main():
    app = QApplication(sys.argv)
    form = AppLauncher()
    form.show()
    app.exec_()

if __name__ == '__main__':
    main()
QTimer always runs in the thread in which it was created and started, but that doesn't matter: it wouldn't change the resulting behavior of the timeout-connected functions even if it were executed in another thread.
What always matters is the thread in which the slot/function lives, and as long as foo is a member of an instance that is in the same thread as any other function you want to "block", it will work as expected, preventing execution of anything else until it returns.
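A minimal, hypothetical illustration of that point (not taken from the question): a slow slot connected to a QTimer living in the GUI thread holds up everything else on that thread until it returns.

import sys
import time

from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication

app = QApplication(sys.argv)

def slow_job():
    # Runs in the GUI thread: no painting, clicks, or other timers are
    # processed until this function returns.
    print("db maintenance started")
    time.sleep(3)  # stand-in for the blocking db work
    print("db maintenance finished")

timer = QTimer()
timer.timeout.connect(slow_job)
timer.start(10000)  # fire every 10 seconds

sys.exit(app.exec_())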

How can I run two threads in python using their own commands in a Telegram bot

I want to run two threads in Python, each started by its own command from a Telegram bot. How do I do that exactly?
I am using the telegram.ext module.
Here is a working example which is mostly self-explanatory. I wrote two command functions; each has a thread function defined inside it. I then created a thread object with the arguments passed in, and finally started the thread.
import time
import threading

from telegram import Update
from telegram.ext import *

BOT_TOKEN = '***INSERT YOUR BOT ID HERE***'

def start(update: Update, context: CallbackContext):
    update.message.reply_text('Start command going to start the thread 1 now')

    def thread1(update: Update, context: CallbackContext):
        while True:
            update.message.reply_text('I am from thread 1. going to sleep now.')
            time.sleep(2)

    t1 = threading.Thread(target=thread1, args=(update, context))
    t1.start()

def run(update: Update, context: CallbackContext):
    update.message.reply_text('run command is going to start the thread 2 now')

    def thread2(update: Update, context: CallbackContext):
        while True:
            update.message.reply_text('I am from thread 2. going to sleep now')
            time.sleep(5)

    t2 = threading.Thread(target=thread2, args=(update, context))
    t2.start()

def main() -> None:
    print('bot started..')
    updater = Updater(BOT_TOKEN)
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler('start', start))
    dispatcher.add_handler(CommandHandler('run', run))
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()
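One caveat worth adding (my note, not part of the original answer): the threads above loop forever and are non-daemon, so they can keep the process alive even after the bot is stopped. Creating them as daemon threads is one way to hedge against that, for example:

t1 = threading.Thread(target=thread1, args=(update, context), daemon=True)
t1.start()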

WCF Async deadlock?

Has anyone run into a situation where a WaitAny call returns a valid handle index, but the Proxy.End call blocks? Or does anyone have recommendations on how best to debug this? I've tried tracing, performance counters (to check the max percentages), and logging everywhere.
The test scenario: two async requests go out (there's a bit more to the full implementation); the first Proxy.End call returns successfully, but the subsequent one blocks. I've checked the WCF trace and don't see anything particularly interesting. Note that the service is querying an endpoint that exists in the same process as well as one on a remote machine (= 2 async requests).
As far as I can see the call goes through on the service implementation side for both queries, but it just blocks on the subsequent End call. It seems to work with just a single call, regardless of whether the request goes to a remote machine or to itself, so it's something to do with the multiple queries or some other factor causing the lockup.
I've tried different "concurrencymode"s and "instancecontextmode"s, but they don't seem to have any bearing on the result.
Here's a cut down version of the internal code for parsing the handle list:
ValidationResults IValidationService.EndValidate()
{
    var results = new ValidationResults();

    if (_asyncResults.RemainingWaitHandles == null)
    {
        results.ReturnCode = AsyncResultEnum.NoMoreRequests;
        return results;
    }

    var waitArray = _asyncResults.RemainingWaitHandles.ToArray();
    if (waitArray.GetLength(0) > 0)
    {
        int handleIndex = WaitHandle.WaitAny(waitArray, _defaultTimeOut);
        if (handleIndex == WaitHandle.WaitTimeout)
        {
            // Timeout on signal for all handles occurred
            // Close proxies and return...
        }

        var asyncResult = _asyncResults.Results[handleIndex];
        results.Results = asyncResult.Proxy.EndServerValidateGroups(asyncResult.AsyncResult);
        asyncResult.Proxy.Close();

        _asyncResults.Results.RemoveAt(handleIndex);
        _asyncResults.RemainingWaitHandles.RemoveAt(handleIndex);

        results.ReturnCode = AsyncResultEnum.Success;
        return results;
    }

    results.ReturnCode = AsyncResultEnum.NoMoreRequests;
    return results;
}
and the code that calls this:
validateResult = validationService.EndValidateSuppression();
while (validateResult.ReturnCode == AsyncResultEnum.Success)
{
    // Update progress step
    //duplexContextChannel.ValidateGroupCallback(progressInfo);
    validateResult = validationService.EndValidateSuppression();
}
I've commented out the callbacks on the initiating node (FYI, it's actually a 3-tier setup, but the problem is isolated to this 2nd tier calling the 3rd tier; the callbacks go from the 2nd tier to the 1st tier and have been removed in this test). Thoughts?
Sticking to the solution I left in my comment: simply avoid chaining a callback to async calls that have different destinations (i.e. proxies).

QtWebKit QApplication call twice

I am calling a scraping class from Flask and the second time I instantiate a new Webkit() class (QApplication), it exits my Flask app.
How can I re-run a Qt GUI app multiple times and have it contained so it does not shut down the "outer" app?
Further clarification: Qt is event driven, and calling QApplication.quit() closes not only the event loop but Python as well. If I don't call quit(), though, execution never continues to the rest of the code.
class Webkit():
    ...
    def __run(self, url, method, dict=None):
        self.qapp = QApplication(sys.argv)  # FAIL here the 2nd time round
        req = QNetworkRequest()
        req.setUrl(QUrl(url))
        self.qweb = QWebView()
        self.qweb.setPage(self.Page())
        self.qweb.loadFinished.connect(self.finished_loading)
        self.qweb.load(req)
        self.qapp.exec_()

    def finished_loading(self):
        self.qapp.quit()
The only (hacky!) solution I've found so far is to add this to the Webkit() class:
if __name__ == '__main__':
    ....
and then parse the result from the Flask app with this:
return os.popen('python webkit.py').read()
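If the subprocess route is acceptable, a slightly more robust variant (a sketch, assuming webkit.py keeps printing its result to stdout exactly as before) is to use subprocess instead of os.popen, so a non-zero exit from the child raises an error instead of being silently swallowed:

import subprocess
import sys

def scrape():
    # Run the Qt-based scraper in a fresh interpreter so QApplication is
    # created and torn down in its own process every time.
    result = subprocess.run(
        [sys.executable, 'webkit.py'],
        capture_output=True, text=True, check=True,
    )
    return result.stdout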