How can I run two threads in Python using their own commands in a Telegram bot

I want to run two threads in Python, each started by its own command, from a Telegram bot. How do I do that exactly?
I am using the telegram.ext module.

Here is a working example that is largely self-explanatory. I wrote two command handlers; each defines a thread function inside it, creates a threading.Thread object with the handler's arguments, and finally starts the thread.
import time
import threading

from telegram import Update
from telegram.ext import Updater, CallbackContext, CommandHandler

BOT_TOKEN = '***INSERT YOUR BOT TOKEN HERE***'


def start(update: Update, context: CallbackContext):
    update.message.reply_text('Start command going to start the thread 1 now')

    def thread1(update: Update, context: CallbackContext):
        while True:
            update.message.reply_text('I am from thread 1. going to sleep now.')
            time.sleep(2)

    t1 = threading.Thread(target=thread1, args=(update, context))
    t1.start()


def run(update: Update, context: CallbackContext):
    update.message.reply_text('run command is going to start the thread 2 now')

    def thread2(update: Update, context: CallbackContext):
        while True:
            update.message.reply_text('I am from thread 2. going to sleep now')
            time.sleep(5)

    t2 = threading.Thread(target=thread2, args=(update, context))
    t2.start()


def main() -> None:
    print('bot started..')
    updater = Updater(BOT_TOKEN)
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler('start', start))
    dispatcher.add_handler(CommandHandler('run', run))
    updater.start_polling()
    updater.idle()


if __name__ == '__main__':
    main()
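One note on the example above: both thread functions loop forever, so the threads keep running for the lifetime of the process. As a minimal sketch, assuming you also want a way to stop them (the /stop command and the stop_event below are my additions, not part of the original example), a threading.Event can be checked in the loop condition:

stop_event = threading.Event()

def stop(update: Update, context: CallbackContext):
    # Signal the worker threads to exit after their current iteration.
    stop_event.set()
    update.message.reply_text('Stop signal sent to the worker threads.')

# Inside thread1/thread2, replace "while True:" with:
#     while not stop_event.is_set():
#         ...

# And register the handler in main() alongside the others:
# dispatcher.add_handler(CommandHandler('stop', stop))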

Related

PyQt5: is QTimer running in a separate thread, and is it blocking?

I have an app which uses a database. I want to set a timer to launch a function that modifies the db periodically, but I want to be sure that it is blocking, so that there are no read/write operations on the db until this function finishes.
My QTimer is in the GUI thread, so as far as I understand, its slot will block the main thread. Am I right?
import sys

from PyQt5 import QtCore, QtWidgets
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication

import AppWindow  # module generated from the Qt Designer .ui file


class AppLauncher(QtWidgets.QMainWindow, AppWindow.Ui_MainWindow):
    def __init__(self, parent=None):
        super(AppLauncher, self).__init__(parent)
        self.setupUi(self)
        flags = QtCore.Qt.WindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.WindowStaysOnTopHint)
        self.setWindowFlags(flags)
        self.setWindowState(QtCore.Qt.WindowFullScreen)
        self.fooTimer = QTimer(self)
        self.fooTimer.timeout.connect(self.foo)

    def foo(self):
        pass


def main():
    app = QApplication(sys.argv)
    form = AppLauncher()
    form.show()
    app.exec_()


if __name__ == '__main__':
    main()
A QTimer always runs in the thread in which it was created and started, but that doesn't matter: it wouldn't change the resulting behaviour of the functions connected to its timeout even if it were executed in another thread.
What always matters is the thread in which the slot/function lives, and as long as foo is a member of an instance that lives in the same thread as any other function you want to "block", it will work as expected, preventing execution of anything else until it returns.
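As a hedged, minimal sketch of that point (the class name and the two-second sleep are illustrative, not taken from the question): while foo() is running in the GUI thread, the event loop and every other slot living in that thread are blocked until it returns.

import sys
import time

from PyQt5 import QtCore, QtWidgets


class BlockingTimerDemo(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.foo)
        self.timer.start(1000)  # fire once per second in the GUI thread

    def foo(self):
        # Stand-in for the periodic db maintenance: while this sleeps,
        # no other slot or GUI event in the main thread can run.
        time.sleep(2)


if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    w = BlockingTimerDemo()
    w.show()
    sys.exit(app.exec_())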

How do you run a shell script and return its output from a Kotlin web service coroutine request?

I created a web service with Spring Boot 2 and Kotlin to access some Unix scripts and other programs on a certain server via a ProcessBuilder. The response messages shall contain the stdout of the shell script I run. But I have problems with coroutines in Kotlin: process.waitFor() is a blocking call. How can I use a dedicated thread (pool) to track the external processes and suspend the coroutines for that time?
In the following code snippet you can see how I get the return code of a shell script and its stdout:
val process = ProcessBuilder(cmd)
    .redirectErrorStream(true)
    .start()
val exitCode = process.waitFor()
return CmdResult(
    exitCode,
    process.inputStream
)
My following attempt failed, because I did not get the stdout information and did not know how to get the result out of the coroutine scope to use it as a return value:
val dispatcher = newFixedThreadPoolContext(4, "myPool")
launch(dispatcher) {
    val process = ProcessBuilder(listOf(cmdScript))
        .redirectErrorStream(true)
        .start()
    val exitCode = process.waitFor()
    CmdResult(
        exitCode,
        process.inputStream
    )
}.join()
You can use the top-level suspending function withContext to get the result back out of the coroutine: it runs the block on the given dispatcher, suspends the caller until the block finishes (including the blocking process.waitFor()), and returns the block's last expression as its value:
val dispatcher = newFixedThreadPoolContext(4, "myPool")
val result = withContext(dispatcher) {
    val process = ProcessBuilder(listOf(cmdScript))
        .redirectErrorStream(true)
        .start()
    val exitCode = process.waitFor()
    CmdResult(
        exitCode,
        process.inputStream
    )
}

Combining trio and flask

I'm trying to make an HTTP API that can create and destroy concurrent tasks that open TCP connections to remote servers streaming ~15-second data. I'll have to figure out how to handle the data later. For now I just print it.
In the example below, I can create multiple TCP connections by navigating to http://192.168.1.1:5000/addconnection.
Questions:
1) Is this approach reasonable? I think Flask may be creating a new thread for each /addconnection request. I'm not sure what performance limits I'll hit doing that.
2) Is it possible to keep track of each connection? I'd like to implement /listconnections and /removeconnections.
3) Is there a more Pythonic way to do this? I've read a little about Celery, but I don't really understand it very well yet. Perhaps there are other already existing tools for handling similar problems.
import trio
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello World!"


@app.route("/addconnection")
def addconnection():
    async def receiver(client_stream):
        print("Receiver: started!")
        while True:
            data = await client_stream.receive_some(16800)
            print("Received Data: {}".format(data))

    async def parent():
        async with trio.open_nursery() as nursery:
            client_stream = await trio.open_tcp_stream('192.168.1.1', 1234)
            nursery.start_soon(receiver, client_stream)

    trio.run(parent)
1) You will create a new event loop for each /addconnection request which will block the Flask runtime. This will likely limit you to a single request per thread.
2) Yes, in the simplest case you can store them in a global set, see connections below.
3) I'm the author of Quart-Trio, which I think is a better way. Quart is the Flask API re-implemented with async/await (which solves most of 1)). Quart-Trio is an extension to use Trio rather than asyncio for Quart.
Roughly (and I've not tested this) your code becomes,
import trio
from quart_trio import QuartTrio

connections = set()

app = QuartTrio(__name__)


@app.route("/")
async def hello():
    return "Hello World!"


@app.route("/addconnection")
async def addconnection():
    async def receiver(client_stream):
        print("Receiver: started!")
        while True:
            data = await client_stream.receive_some(16800)
            print("Received Data: {}".format(data))

    async def parent():
        async with trio.open_nursery() as nursery:
            client_stream = await trio.open_tcp_stream('192.168.1.1', 1234)
            connections.add(client_stream)
            nursery.start_soon(receiver, client_stream)
        connections.remove(client_stream)

    app.nursery.start_soon(parent)
    return "Connection Created"


if __name__ == "__main__":
    # Allows this to run and serve via python script.py
    # For production use `hypercorn -k trio script:app`
    app.run()
Where you have async def receiver(client_stream):, I would put an await trio.sleep(0.029) between loop iterations to give the rest of the program a chance to run. You can increase the sleep time according to how busy you want the function to be, but if you run that loop without yielding your app is likely to freeze. Cancellation blocks should also be used so you are not stuck reading data forever.
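A hedged sketch of that suggestion (the 60-second cancel scope and the empty-data check are additions of mine, not in the original code):

async def receiver(client_stream):
    print("Receiver: started!")
    with trio.move_on_after(60):  # cancellation block: give up after 60 seconds
        while True:
            data = await client_stream.receive_some(16800)
            if not data:  # the peer closed the connection
                break
            print("Received Data: {}".format(data))
            await trio.sleep(0.029)  # give the rest of the program a chance to run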

How to make python apscheduler trigger a function inside a class instance

I have a class which has a BaseScheduler (a BackgroundScheduler) as an attribute; nothing fancy, no frameworks etc.
class MyClass(object):
    def __init__(self, ...):
        ...
        self.Scheduler = BackgroundScheduler()
        ...
Later on I define methods that a) schedule jobs based on a schedule definition passed as kwargs, and b) handle the jobs when they are triggered:
def _schedule_Events(self, *args, **kwargs):
    try:
        Schedule_Def = kwargs.copy()
        Schedule_Def['func'] = self._handle_scheduled_Event
        job = self.Scheduler.add_job(**Schedule_Def)
        self.Scheduled_Events.append(job)
    except Exception as e:
        self.Logger.exception('%s: exception in schedule_Events, details: %s' % (self.Name, e))

def _handle_scheduled_Event(self, Event_Action):
    """ Callback function for scheduled jobs """
    try:
        .... do stuff ....
However, adding jobs with _schedule_Events fails with:
File "/usr/local/lib/python3.4/dist-packages/apscheduler/util.py", line 381, in check_callable_args
', '.join(unsatisfied_args))
ValueError: The following arguments have not been supplied: Event_Action
The reason is apparently that the 'func' argument must be globally callable, i.e. not within a class instance scope. I also don't see how using a 'textual reference', as described in the documentation, would help.
If I replace the 'func' callable with a function defined at module level it works, but I need it to call a method of my instance object. Any ideas how to make this work? A custom trigger? Wrapping the APScheduler inside another class and passing the callback? Other ideas?
Many thanks in advance.
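(Not an answer from the original thread, just a hedged illustration of what the traceback is asking for: check_callable_args verifies that every parameter of func is covered by the args/kwargs given to add_job, so a bound method such as self._handle_scheduled_Event is accepted as long as Event_Action is supplied. The trigger and values below are purely illustrative.)

job = self.Scheduler.add_job(
    func=self._handle_scheduled_Event,
    trigger='interval',
    seconds=30,
    args=['my_event_action'],  # supplies the Event_Action parameter
)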

QtWebKit QApplication call twice

I am calling a scraping class from Flask and the second time I instantiate a new Webkit() class (QApplication), it exits my Flask app.
How can I re-run a Qt GUI app multiple times and have it contained so it does not shut down the "outer" app?
Further clarification: Qt is event driven, and calling QApplication.quit() closes not only the event loop but Python as well. Without calling quit(), though, execution never continues to the rest of the code.
class Webkit():
    ...

    def __run(self, url, method, dict=None):
        self.qapp = QApplication(sys.argv)  # FAIL here the 2nd time round
        req = QNetworkRequest()
        req.setUrl(QUrl(url))
        self.qweb = QWebView()
        self.qweb.setPage(self.Page())
        self.qweb.loadFinished.connect(self.finished_loading)
        self.qweb.load(req)
        self.qapp.exec_()

    def finished_loading(self):
        self.qapp.quit()
The only (hacky!) solution I have found so far is to add this to the Webkit() class:
if __name__ == '__main__':
    ....
and then parse the result from the Flask app with this:
return os.popen('python webkit.py').read()
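(A hedged alternative, not from the original post: Qt allows only one QApplication per process, so another common workaround is to create it once and reuse it on subsequent calls, for example:)

# Reuse an existing QApplication if one has already been created in this process.
self.qapp = QApplication.instance() or QApplication(sys.argv)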