Using asyncio to talk to daemon - telnet

I have a Python module that uses telnetlib to open a connection to a daemon, and can send commands and parse the responses. This is used for testing purposes. However, sometimes the daemon also sends asynchronous messages. My current solution is a "wait_for_message" method that starts a thread to listen on the telnet socket. Meanwhile, in the main thread, I send a command that I know will trigger the daemon to send the particular async message. Then I simply call thread.join() and wait for it to finish.
import telnetlib

class Client():
    def __init__(self, host, port):
        self.connection = telnetlib.Telnet(host, port)

    def get_info(self):
        self.connection.write('getstate\r')
        idx, _, _ = self.connection.expect(['good', 'bad', 'ugly'], timeout=1)
        return idx

    def wait_for_unexpected_message(self):
        idx, _, _ = self.connection.expect(['error'])
Then when I write tests, I can use the module as part of the automation system.
client = Client('192.168.0.45', 6000)

if client.get_info() != 0:
    pass  # do stuff
else:
    pass  # do other stuff
It works really well until I want to handle async messages coming in. I've been reading about Python's new asyncio library, but I haven't quite figured out how to do what I need, or even whether my use case would benefit from asyncio.

So my question: is there a better way to handle this with asyncio? I like using telnetlib because of its expect() features; a plain TCP socket doesn't give me that.
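For reference, the thread-based waiting described in the first paragraph works roughly like this (a sketch; the exact thread wiring is assumed from that description):

import threading

# Sketch only: listen for the async message in the background while the main
# thread sends the command known to trigger it.
client = Client('192.168.0.45', 6000)
listener = threading.Thread(target=client.wait_for_unexpected_message)
listener.start()
client.get_info()   # command that triggers the daemon's async 'error' message
listener.join()     # wait until the listener has seen it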


Connection to database not closing

I'm using "redis-rs" for rust and I tested sending a few thousand requests locally
and it works really well at the few first, except that at some point it stops accepting requests
and starts showing me this error when I send a command to redis:
"Only one usage of each socket address (protocol/network address/port) is normally permitted"
I am opening a client and a connection on every request to the http server that takes them in,
that's probably a bad idea in the first place, but shouldn't the connections stop existing and close after the function that opened it ends?
Is there any better solution, like some kind of global connection?
thanks
Well, if it is an HTTP server, the crate you are using is likely handling requests on multiple threads. It is possible that one thread got caught in the process of closing the connection just as another thread began processing the next request.

Or, in your case, maybe the remote database has not finished closing the previous connection by the time the next one is created. Either way, it's easier to think of it as a race condition between threads.

Since you don't know which thread will request a connection next, it may be better to store the connection as a global resource. On the assumption that taking a mutex lock is cheaper than opening and closing a socket, I used lazy_static to create a single thread-safe connection.
use lazy_static::lazy_static;
use parking_lot::Mutex;
use std::sync::Arc;

lazy_static! {
    pub static ref LOCAL_DB: Arc<Mutex<Connection>> = {
        let connection = Connection::open("local.sqlite").expect("Unable to open local DB");
        connection.execute_batch(CREATE_TABLE).unwrap();
        Arc::new(Mutex::new(connection))
    };
}

// I can then just use it anywhere in functions without any complications.
let lock = LOCAL_DB.lock();
lock.execute_batch("begin").unwrap();
// etc.

Celery task for ML prediction hangs in execution

I'm trying to create a web application that receives an input from POST request and provides some ML predictions based on that input.
Since prediction model is quite heavy, I don't want that user waits for calculation to complete. Instead, I delegated heavy computation to Celery task and user can later inspect the result.
I'm using simple Flask application with Celery, Redis and Flower.
My view.py:
@ns.route('predict/')
class Predict(Resource):
    ...
    def post(self):
        ...
        do_categorize(data)
        return jsonify(success=True)
My tasks.py file looks something like this:
from ai.categorizer import Categorizer

categorizer = Categorizer(
    model_path='category_model.h5',
    tokenizer_path='tokenize.joblib',
    labels_path='labels.joblib'
)

@task()
def do_categorize(data):
    result = categorizer.predict(data)
    print(result)
    # Write result to the DB
    ...
My predict() method inside Categorizer class:
def predict(self, value):
    K.set_session(self.sess)
    with self.sess.as_default():
        with self.graph.as_default():
            prediction = self.model.predict(np.asarray([value], dtype='int64'))
    return prediction
I'm running Celery like this:
celery worker -A app.celery --loglevel=DEBUG
The problem I've been having for the last couple of days is that the categorizer.predict(data) call hangs in the middle of execution.
I tried running categorizer.predict(data) inside the post method and it works. But if I place it inside a Celery task, it stops working. There is no console log; if I try to debug it, it just freezes on .predict().
My questions:
How can I solve this issue?
Is there any memory or CPU limit for the worker?
Are Celery tasks the "right" way to do such heavy computations?
How can I debug this problem? What am I doing wrong?
Is it correct to initialize models at the top of the file?
Thanks to this SO question I found the answer to my problem:
It turns out that Keras works better with a thread pool instead of the default (prefork) process pool.
Luckily for me, the thread pool was reintroduced in Celery 4.4 not long ago.
You can read more in the Celery 4.4 changelog:
Threaded Tasks Pool
We reintroduced a threaded task pool using
concurrent.futures.ThreadPoolExecutor.
The previous threaded task pool was experimental. In addition it was based on the threadpool package which is obsolete.
You can use the new threaded task pool by setting worker_pool to 'threads' or by passing --pool threads to the celery worker command.
Now you can use threads instead of processes for pooling:
celery worker -A your_application --pool threads --loglevel=INFO
If you cannot use the latest Celery version, you also have the option of using the gevent pool:
pip install gevent
celery worker -A your_application --pool gevent --loglevel=INFO
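If you prefer configuration over command-line flags, the same thing can be set in the Celery configuration (a minimal sketch; app is assumed to be your Celery instance):

# Sketch: equivalent of "--pool threads", set on the (assumed) app object.
app.conf.worker_pool = 'threads'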

How to ACK celery tasks with parallel code in reactor?

I have a Celery task that, when called, simply kicks off the execution of some parallel code inside a Twisted reactor. Here's some sample (not runnable) code to illustrate:
def run_task_in_reactor():
    # this takes a while to run
    do_something()
    do_something_more()

@celery.task
def run_task():
    print "Started reactor"
    reactor.callFromThread(run_task_in_reactor)
(For the sake of simplicity, please assume that the reactor is already running when the task is received by the worker; I used the @worker_process_init.connect signal to start my reactor in another thread as soon as the worker comes up.)
When I call run_task.delay(), the task finishes pretty quickly (since it does not wait for run_task_in_reactor() to finish, it only schedules its execution in the reactor). And when run_task_in_reactor() finally runs, do_something() or do_something_more() can throw an exception, which will go unnoticed.
Using pika to consume from my queue, I could ACK inside do_something_more() to make the worker report the correct completion of the task, for instance. However, inside Celery this does not seem to be possible (or at least I don't know how to accomplish the same effect).
Also, I cannot remove the reactor, since it is a requirement of some third-party code I'm using. Other ways to achieve the same result are appreciated as well.
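For reference, starting the reactor from that signal might look roughly like this (a sketch based on the parenthetical above, not code from the question):

from threading import Thread

from celery.signals import worker_process_init
from twisted.internet import reactor

@worker_process_init.connect
def start_reactor(**kwargs):
    # Run the reactor in a background thread; installSignalHandlers=False
    # because it is not running in the main thread.
    Thread(target=reactor.run, args=(False,), daemon=True).start()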
Use blockingCallFromThread (from twisted.internet.threads) instead; it blocks the calling thread until the call has run in the reactor and re-raises any exception it threw.
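A minimal sketch of how that might look in the task from the question (assuming, as above, that the reactor is already running in another thread):

from twisted.internet import reactor
from twisted.internet.threads import blockingCallFromThread

@celery.task
def run_task():
    # Blocks this worker thread until run_task_in_reactor() has finished in
    # the reactor thread; an exception raised there is re-raised here, so
    # Celery records the failure instead of a silent success.
    blockingCallFromThread(reactor, run_task_in_reactor)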

TwistedWeb on multicore/multiprocessor

What techniques are people using to utilize multiple processors/cores when running a TwistedWeb server? Is there a recommended way of doing it?
My twisted.web-based web service is running on Amazon EC2 instances, which often have multiple CPU cores (8, 16), and the type of work the service is doing benefits from extra processing power, so I would very much like to use that.
I understand that it is possible to use haproxy, squid or a web server, configured as a reverse proxy, in front of multiple instances of Twisted. In fact, we are currently using such a setup, with nginx serving as a reverse proxy to several upstream twisted.web services running on the same host, each on a different port.
This works fine, but what I'm really interested in is a solution where there is no "front-facing" server, but all twistd processes somehow bind to the same socket and accept requests. Is such a thing even possible... or am I being crazy? The operating system is Linux (CentOS).
Thanks.
Anton.
There are a number of ways to support multiprocess operation for a Twisted application. One important question to answer at the start, though, is what you expect your concurrency model to be, and how your application deals with shared state.
In a single process Twisted application, concurrency is all cooperative (with help from Twisted's asynchronous I/O APIs) and shared state can be kept anywhere a Python object would go. Your application code runs knowing that, until it gives up control, nothing else will run. Additionally, any part of your application that wants to access some piece of shared state can probably do so quite easily, since that state is probably kept in a boring old Python object that is easy to access.
When you have multiple processes, even if they're all running Twisted-based applications, then you have two forms of concurrency. One is the same as for the previous case - within a particular process, the concurrency is cooperative. However, you have a new kind, where multiple processes are running. Your platform's process scheduler might switch execution between these processes at any time, and you have very little control over this (as well as very little visibility into when it happens). It might even schedule two of your processes to run simultaneously on different cores (this is probably even what you're hoping for). This means that you lose some guarantees about consistency, since one process doesn't know when a second process might come along and try to operate on some shared state. This leads in to the other important area of consideration, how you will actually share state between the processes.
Unlike the single process model, you no longer have any convenient, easily accessed places to store your state where all your code can reach it. If you put it in one process, all the code in that process can access it easily as a normal Python object, but any code running in any of your other processes no longer has easy access to it. You might need to find an RPC system to let your processes communicate with each other. Or, you might architect your process divide so that each process only receives requests which require state stored in that process. An example of this might be a web site with sessions, where all state about a user is stored in their session, and their sessions are identified by cookies. A front-end process could receive web requests, inspect the cookie, look up which back-end process is responsible for that session, and then forward the request on to that back-end process. This scheme means that back-ends typically don't need to communicate (as long as your web application is sufficiently simple - ie, as long as users don't interact with each other, or operate on shared data).
Note that in that example, a pre-forking model is not appropriate. The front-end process must exclusively own the listening port so that it can inspect all incoming requests before they are handled by a back-end process.
Of course, there are many types of application, with many other models for managing state. Selecting the right model for multi-processing requires first understanding what kind of concurrency makes sense for your application, and how you can manage your application's state.
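As a rough illustration of the cookie-based front-end routing described above (not from the original answer; the backend addresses and cookie name here are made up), twisted.web.proxy could be used along these lines:

from twisted.internet import reactor
from twisted.web import proxy, resource, server

# Hypothetical mapping from a session shard (stored in a cookie) to a back-end.
BACKENDS = {b'a': ('127.0.0.1', 8081), b'b': ('127.0.0.1', 8082)}

class SessionRouter(resource.Resource):
    def getChild(self, path, request):
        shard = request.getCookie(b'backend') or b'a'
        host, port = BACKENDS.get(shard, BACKENDS[b'a'])
        # Hand the rest of the URL to a reverse proxy for the chosen back-end.
        return proxy.ReverseProxyResource(host, port, b'/' + path)

reactor.listenTCP(8080, server.Site(SessionRouter()))
reactor.run()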
That being said, with very new versions of Twisted (unreleased as of this point), it's quite easy to share a listening TCP port amongst multiple processes. Here is a code snippet which demonstrates one way you might use some new APIs to accomplish this:
from os import environ
from sys import argv, executable
from socket import AF_INET

from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File

def main(fd=None):
    root = File("/var/www")
    factory = Site(root)

    if fd is None:
        # Create a new listening port and several other processes to help out.
        port = reactor.listenTCP(8080, factory)
        for i in range(3):
            reactor.spawnProcess(
                None, executable, [executable, __file__, str(port.fileno())],
                childFDs={0: 0, 1: 1, 2: 2, port.fileno(): port.fileno()},
                env=environ)
    else:
        # Another process created the port, just start listening on it.
        port = reactor.adoptStreamPort(fd, AF_INET, factory)

    reactor.run()


if __name__ == '__main__':
    if len(argv) == 1:
        main()
    else:
        main(int(argv[1]))
With older versions, you can sometimes get away with using fork to share the port. However, this is rather error prone, fails on some platforms, and isn't a supported way to use Twisted:
from os import fork

from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File

def main():
    root = File("/var/www")
    factory = Site(root)

    # Create a new listening port
    port = reactor.listenTCP(8080, factory)

    # Create a few more processes to also service that port
    for i in range(3):
        if fork() == 0:
            # Proceed immediately onward in the children.
            # The parent will continue the for loop.
            break

    reactor.run()


if __name__ == '__main__':
    main()
This works because of the normal behavior of fork, where the newly created process (the child) inherits all of the memory and file descriptors from the original process (the parent). Since processes are otherwise isolated, the two processes don't interfere with each other, at least as far as the Python code they are executing goes. Since the file descriptors are inherited, either the parent or any of the children can accept connections on the port.
Since forwarding HTTP requests is such an easy task, I doubt you'll notice much of a performance improvement using either of these techniques. The former is a bit nicer than proxying, because it simplifies your deployment and works for non-HTTP applications more easily. The latter is probably more of a liability than it's worth.
The recommended way, IMO, is to use haproxy (or another load balancer) as you already are; the load balancer shouldn't be the bottleneck if it is configured correctly. Besides, you'll want some failover mechanism, which haproxy provides, in case one of your processes goes down.
It isn't possible to bind multiple processes to the same TCP socket, but it is possible with UDP.
If you wish to serve your web content over HTTPS as well, this is what you will need to do on top of @Jean-Paul's snippet:
from twisted.internet.ssl import PrivateCertificate
from twisted.protocols.tls import TLSMemoryBIOFactory
'''
Original snippet goes here
..........
...............
'''
privateCert = PrivateCertificate.loadPEM(open('./server.cer').read() + open('./server.key').read())
tlsFactory = TLSMemoryBIOFactory(privateCert.options(), False, factory)
reactor.adoptStreamPort(fd, AF_INET, tlsFactory)
By using fd, you will serve either HTTP or HTTPS, but not both. If you wish to have both, listenSSL on the parent process and include the SSL fd you get from the SSL port as the second argument when spawning the child process.
The complete snippet is here:
from os import environ
from sys import argv, executable
from socket import AF_INET

from twisted.internet import reactor, ssl
from twisted.internet.ssl import PrivateCertificate
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.web.server import Site
from twisted.web.static import File

def main(fd=None, fd_ssl=None):
    root = File("/var/www")
    factory = Site(root)
    spawned = []

    if fd is None:
        # Create a new listening port and several other processes to help out.
        port = reactor.listenTCP(8080, factory)
        port_ssl = reactor.listenSSL(
            8443, factory,
            ssl.DefaultOpenSSLContextFactory('./server.key', './server.cer'))
        for i in range(3):
            child = reactor.spawnProcess(
                None, executable,
                [executable, __file__, str(port.fileno()), str(port_ssl.fileno())],
                childFDs={0: 0, 1: 1, 2: 2,
                          port.fileno(): port.fileno(),
                          port_ssl.fileno(): port_ssl.fileno()},
                env=environ)
            spawned.append(child)
    else:
        # Another process created the ports, just start listening on them.
        port = reactor.adoptStreamPort(fd, AF_INET, factory)
        cer = open('./server.cer')
        key = open('./server.key')
        pem_data = cer.read() + key.read()
        cer.close()
        key.close()
        privateCert = PrivateCertificate.loadPEM(pem_data)
        tlsFactory = TLSMemoryBIOFactory(privateCert.options(), False, factory)
        reactor.adoptStreamPort(fd_ssl, AF_INET, tlsFactory)

    reactor.run()

    for p in spawned:
        p.signalProcess('INT')


if __name__ == '__main__':
    if len(argv) == 1:
        main()
    else:
        main(int(argv[1]), int(argv[2]))

Finding which task went to which queue

I am using RabbitMQ with Celery, and I have set some custom routing settings for tasks: a specific type of task goes to one queue and all the other tasks go to another queue. Now I want to verify whether it is working or not.
For this, I want to inspect which tasks went to which queue. Unfortunately, I didn't find anything that could help me with this. The celeryev monitor just provides information about which tasks have been received and what their completion status is. rabbitmqctl gives me information about the currently running and waiting tasks only, so I cannot see which queue my intended task went to.
Could anyone help me with this?
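For context, the kind of routing being verified usually looks something like this (a sketch; the task and queue names are placeholders, and app is assumed to be the Celery instance):

# Sketch: route one task type to its own queue, everything else to a default queue.
app.conf.task_routes = {'myapp.tasks.special_task': {'queue': 'special'}}
app.conf.task_default_queue = 'default'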
You normally can't inspect messages on a queue with AMQP (I'm not sure about Celery, though).
If you just need this as a one-off test, the simplest way would probably be to write a quick program in Python that gets all the messages from the queues and prints them out.
Using py-amqplib, this should do it:
from amqplib import client_0_8 as amqp

conn = amqp.Connection(host="localhost:5672", userid="guest", password="guest",
                       virtual_host="/", insist=False)
chan = conn.channel()

queue_name = "the_queue"
print "Draining", queue_name
while True:
    msg = chan.basic_get(queue_name)
    if msg is None:
        break
    print msg.body
print "All done"
If you need more help, a good place to ask is the RabbitMQ Discuss mailing list. The RabbitMQ developers do their best to answer all the questions posted there, and Celery's author also reads it.