How do I enumerate and kill the kept-alive connections when using twisted.web.server?
class Srv(Resource):
    isLeaf = True

    def __init__(self, port):
        self.listener = reactor.listenTCP(port, Site(self))

    def shutdown(self):
        self.listener.stopListening()
        ## HOW TO ENUMERATE AND KILL OPEN CONNECTIONS
Update:
For now, I am keeping transports in a set and calling abortConnection() on them inside a try/except.
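Roughly like this (a minimal sketch; TrackingChannel and open_transports are illustrative names, and the exact hook points may vary between Twisted versions): the Site's protocol is replaced with an HTTPChannel subclass that adds each transport to a set in connectionMade and removes it in connectionLost, and shutdown() aborts whatever is still in the set.

from twisted.internet import reactor
from twisted.web.http import HTTPChannel
from twisted.web.resource import Resource
from twisted.web.server import Site

open_transports = set()

class TrackingChannel(HTTPChannel):
    def connectionMade(self):
        HTTPChannel.connectionMade(self)
        open_transports.add(self.transport)

    def connectionLost(self, reason):
        open_transports.discard(self.transport)
        HTTPChannel.connectionLost(self, reason)

class Srv(Resource):
    isLeaf = True

    def __init__(self, port):
        Resource.__init__(self)
        site = Site(self)
        site.protocol = TrackingChannel  # Site builds one channel per accepted connection
        self.listener = reactor.listenTCP(port, site)

    def shutdown(self):
        d = self.listener.stopListening()
        for transport in list(open_transports):
            try:
                transport.abortConnection()
            except Exception:
                pass
        open_transports.clear()
        return d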
Do you keep your connections open the whole time your webserver is running? For long-running connections, you could try a wrapper such as an object pool. That way you only have to shut down the pool, and the pool takes responsibility for shutting down and cleaning up all of its resources (database connections, in your case).
If you are talking about individual connections: those should have already been disconnected at the end of the request/response call stack.
Related
Currently I am using Aerospike in my application.
I faced lots of timeout issues, as shown below, when I was creating a new Java client for each transaction and not closing it, so the number of connections ramped up dramatically.
Aerospike Error: (9) Client timeout: timeout=1000 iterations=1 failedNodes=0 failedConns=0
To resolve this timeout issue, I didn't make any changes to the client or to the read and write policies; I just created a single client, stored its instance in a variable, and used that same client for all transactions (get and put requests).
Now I want to understand how moving from multiple clients to a single client resolved my timeout issue, and why those connections were not closing automatically.
The AerospikeClient constructor requests peers, partition maps and racks for all nodes in the cluster and initializes connection pools and async eventloops. This is an expensive process that is only meant to be performed once per cluster at application startup. AerospikeClient is thread-safe, so instances can be shared between threads.
If AerospikeClient close() is not called, connections residing in the pools (at least one connection pool per node) will not be closed. There are no finalize() methods in AerospikeClient.
The first transaction(s) usually need to create new connections. This adds to the latency and can cause timeouts.
The client does more than just the application's transactions. It also monitors the cluster for changes so that it can maintain one hop per transaction. Also, I believe when we initialize the client, we create an initial pool of sockets.
It is expected that most apps would only need one global client.
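A minimal sketch of that pattern (the host, namespace, and set names are placeholders): create one AerospikeClient at application startup, share it across all threads, and close it only at shutdown.

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class AerospikeHolder {
    // Created once; AerospikeClient is thread-safe, so all request
    // threads share this single instance and its connection pools.
    private static final AerospikeClient CLIENT =
            new AerospikeClient("127.0.0.1", 3000);

    public static Record readUser(String userId) {
        Key key = new Key("test", "users", userId);
        return CLIENT.get(null, key);                  // default read policy
    }

    public static void writeUser(String userId, String name) {
        Key key = new Key("test", "users", userId);
        CLIENT.put(null, key, new Bin("name", name));  // default write policy
    }

    public static void shutdown() {
        CLIENT.close();  // closes the per-node connection pools
    }
}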
I'm using "redis-rs" for rust and I tested sending a few thousand requests locally
and it works really well at the few first, except that at some point it stops accepting requests
and starts showing me this error when I send a command to redis:
"Only one usage of each socket address (protocol/network address/port) is normally permitted"
I am opening a client and a connection on every request to the HTTP server that handles them.
That's probably a bad idea in the first place, but shouldn't the connections be closed and cease to exist once the function that opened them returns?
Is there a better solution, like some kind of global connection?
thanks
Well, if it is an HTTP server, the crate you are using is likely handling requests on multiple threads. It is possible that one thread got caught in the process of closing the connection just as another thread began processing the next request.
Or in your case, maybe the remote database has not finished closing the previous request by the time the next connection is created. Either way, it's easier to think of it as a race condition between threads.
Since you don't know which thread will request a connection next, it may be better to store the connection as a global resource. Assuming a mutex lock is faster than opening and closing a socket, I used lazy_static to create a single thread-safe connection.
use lazy_static::lazy_static;
use parking_lot::Mutex;
use rusqlite::Connection;
use std::sync::Arc;

lazy_static! {
    pub static ref LOCAL_DB: Arc<Mutex<Connection>> = {
        let connection = Connection::open("local.sqlite").expect("Unable to open local DB");
        // CREATE_TABLE is a SQL string constant defined elsewhere.
        connection.execute_batch(CREATE_TABLE).unwrap();
        Arc::new(Mutex::new(connection))
    };
}

// I can then just use it anywhere in functions without any complications.
let lock = LOCAL_DB.lock();
lock.execute_batch("begin").unwrap();
// etc.
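The same idea applies to redis-rs. Here is a rough sketch (untested, assuming the synchronous redis crate API and a local Redis instance): open one Client at startup, keep a single Connection behind a Mutex, and have every request lock it instead of opening a new socket.

use lazy_static::lazy_static;
use parking_lot::Mutex;
use redis::Commands;

lazy_static! {
    // One process-wide connection; each request takes the lock for the
    // duration of its commands instead of opening a fresh socket.
    pub static ref REDIS_CONN: Mutex<redis::Connection> = {
        let client = redis::Client::open("redis://127.0.0.1/").expect("invalid Redis URL");
        Mutex::new(client.get_connection().expect("unable to connect to Redis"))
    };
}

fn handle_request() -> redis::RedisResult<i64> {
    let mut conn = REDIS_CONN.lock();
    let _: () = conn.set("hits", 1)?;  // discard the "OK" reply
    let hits: i64 = conn.get("hits")?;
    Ok(hits)
}

A connection pool (for example the r2d2 or deadpool integrations for Redis) would avoid funnelling every request through one socket, but even a single shared connection stops the per-request open/close churn that exhausts local ports.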
I am using the ActiveMQ PooledConnectionFactory to create connections. I am creating threads, and each thread has its own connection, session, and producer.
I have two queries:
1. Do I need to close the connection, session, and producer myself in code, or will the PooledConnectionFactory do it once the producer has sent the message successfully?
2. Creating a connection for every thread (eventually for each message) would be a performance bottleneck. Is it possible to have only one connection with many sessions in it, or should there be a one-to-one mapping between session and connection? (I think I read that somewhere on the ActiveMQ website.)
Any help would be appreciated.
You need to use the code just as you would any other JMS Connection, Session, and Producer. There's no magic to detect when your thread is done with them; you need to close them, which returns them to the pool. You can use only one Connection and take many Sessions from it, but you need to close them so that they go back to the pool to be handed out to others on demand.
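A minimal sketch of that usage (the broker URL and queue name are placeholders): borrow the Connection and Session per unit of work and close them in finally blocks so they go back to the pool.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledSender {
    // One pooled factory for the whole application.
    private static final PooledConnectionFactory POOL = new PooledConnectionFactory();
    static {
        POOL.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
    }

    public static void send(String text) throws Exception {
        Connection connection = POOL.createConnection();  // borrowed from the pool
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("my.queue"));
            try {
                producer.send(session.createTextMessage(text));
            } finally {
                producer.close();
                session.close();
            }
        } finally {
            connection.close();  // returns the underlying connection to the pool
        }
    }
}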
Do you know if it's actually possible to disconnect an rtmpconnection, and how?
There is no "disconnect" method in the official docs, nor in rtmpconnection.lzx, so if you know a way to disconnect the RTMP connection, please let me know. Thanks in advance.
The <rtmpconnection> class in OpenLaszlo uses the ActionScript 3 NetConnection class to connect to the server. The NetConnection class has a close() method; here is the documentation for it:
Closes the connection that was opened locally or to the server and
dispatches a netStatus event with a code property of
NetConnection.Connect.Closed.
This method disconnects all NetStream objects running over the
connection. Any queued data that has not been sent is discarded. (To
terminate local or server streams without closing the connection, use
NetStream.close().) If you close the connection and then want to
create a new one, you must create a new NetConnection object and call
the connect() method again.
The close() method also disconnects all remote shared objects running
over this connection. However, you don't need to recreate the shared
object to reconnect. Instead, you can just call SharedObject.connect()
to reestablish the connection to the shared object. Also, any data in
the shared object that was queued when you issued
NetConnection.close() is sent after you reestablish a connection to
the shared object.
With Flash Media Server, the best development practice is to call
close() when the client no longer needs the connection to the server.
Calling close() is the fastest way to clean up unused connections. You
can configure the server to close idle connections automatically as a
back-up measure.
In the LZX source code for the <rtmpconnection> I can see that NetConnection.close() is only called in case of a connection failure:
<!--- Handle connection failure, attempt to reconnect using altsrc
#keywords private -->
<method name="_handleConnectionFailure" args="msg"><![CDATA[
this._nc.close();
if (this.debug) {
if ($debug) Debug.warn("error connecting to", this._connecturl, ":", msg);
}
....
I don't know why there is no close method defined on the <rtmpconnection> class, but you could add that yourself by extending <rtmpconnection> and adding a close method, for example as in the sketch below. Just make sure you handle the state variables correctly.
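Something along these lines could serve as a starting point (untested; _nc is the attribute used in the rtmpconnection.lzx snippet above, and any connection-state attributes of the component would need to be checked against that source and reset here as appropriate):

<class name="closablertmpconnection" extends="rtmpconnection">
    <!--- Close the underlying NetConnection. -->
    <method name="close"><![CDATA[
        if (this._nc != null) {
            this._nc.close();
        }
    ]]></method>
</class>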
As I haven't used Red5 in a long time, I cannot tell you if Red5 automatically closes NetConnections which are idle.
If I call ActiveRecord::Base.establish_connection to switch databases for the duration of a Rails request, how global is the effect of this change?
Does it affect other instances of the Rails app running under Passenger?
Does it affect the next request by the same instance of Rails?
Are there any thread race conditions to worry about?
No(*), Yes, No.
I'm not very familiar with Passenger, but I'm assuming it works like other containers that use a process per Rails instance. In that case, each will have its own connection.
The connection is maintained across requests, so if you switch the connection for an ActiveRecord class, it will be used in the next request.
Finally, database connections are shared across threads. You can verify this with:
Thread.new { puts ActiveRecord::Base.connection.object_id; sleep 30; puts ActiveRecord::Base.connection.object_id }
sleep 10
ActiveRecord::Base.establish_connection
and seeing that the object IDs printed before and after the call to establish_connection are different.
So you may have thread issues if you expect all accesses within a thread to use the same database connection, but it is switched halfway through to a different connection from within another thread.
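One way to limit that global effect (a sketch, assuming an other_db entry in config/database.yml) is to scope establish_connection to an abstract subclass, so that ActiveRecord::Base and everything else keeps the default connection:

class OtherDbRecord < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :other_db  # only this class and its descendants use this connection
end

class LegacyUser < OtherDbRecord
  self.table_name = "users"
end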