Netty Http Client Connection Pool - spring-webflux

I am trying to understand netty http client connection pool.
If it is NIO and asynchronous, then what is the significance of this connection pool?
For example: if service A calls service B and service A has the client connection pool size set to 50, does that mean we can make at most 50 parallel requests?
UPDATE:
// remote server
@GetMapping("echo")
public Mono<String> echo() {
    return Mono.just("echo")
            .delayElement(Duration.ofSeconds(3));
}
// client: 1 connection
HttpClient httpClient = HttpClient.create(ConnectionProvider.newConnection());
WebClient client = WebClient.builder()
        .baseUrl("http://localhost:8080/echo")
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();
// run
var start = System.currentTimeMillis();
Flux.range(1, 50)
        .flatMap(i -> client.get().retrieve().bodyToMono(String.class))
        .collectList()
        .doOnNext(System.out::println)
        .doFinally(s -> System.out.println("Total Time Taken : " + (System.currentTimeMillis() - start) + " ms"))
        .block();
All the calls complete in about 3.5 seconds. With a single connection I would have expected them to take roughly 150 seconds.

A connection pool is a cache of connections maintained so that they can be reused for future requests to the remote service (database, microservice, etc.). Connection pools are used to enhance performance ... see Connection pool. Whether you use one does NOT depend on the transport that you choose: NIO, epoll, kqueue, etc.
When you have a connection pool, DNS resolution, connection establishment, etc. happen just once per connection, and then you reuse that connection for many requests.
then what is the significance of this connection pool?
When you do NOT have a connection pool, DNS resolution, connection establishment, etc. happen every time you make a request to the remote service.
A connection pool contains connections ONLY to a given remote service. So when you have service A and service B, you will have one connection pool for service A and another for service B (if these are different remote addresses; Reactor Netty does NOT provide configuration per URI).
In Reactor Netty you can choose to configure the connection pools to have one and the same configuration OR you can configure every pool with different configurations (depending on your use case).
In Reactor Netty you can configure the maximum number of connections, which means that for a given service you can make at most that many parallel requests (open connections). The other requests are kept in a queue, and once a connection becomes available for reuse, a pending request is executed.
As mentioned above, all available configuration options for the connection pool can be found in the Reference Documentation.
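For illustration, here is a minimal sketch, assuming Spring WebFlux with Reactor Netty, of how a bounded connection pool can be configured and plugged into WebClient (the pool name and all limits below are made-up example values, not recommendations):

import java.time.Duration;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

// Example pool: at most 50 open connections per remote host, with a bounded pending queue.
ConnectionProvider provider = ConnectionProvider.builder("service-b-pool")
        .maxConnections(50)                            // max parallel requests to the remote service
        .pendingAcquireMaxCount(500)                   // requests beyond 50 wait here for a free connection
        .pendingAcquireTimeout(Duration.ofSeconds(10)) // fail a pending request after 10s in the queue
        .maxIdleTime(Duration.ofSeconds(30))           // close connections idle longer than 30s
        .build();

HttpClient httpClient = HttpClient.create(provider);

WebClient client = WebClient.builder()
        .baseUrl("http://localhost:8080/echo")
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();

With this configuration, the 51st concurrent request does not open a new connection; it waits in the pending queue until one of the 50 connections is released.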

Related

Questions about SignalR Connection

All,
I am using SignalR (.NET 6) and have a couple of questions about SignalR connections (specifically SignalR connections that use web sockets):
Q #1)
If the SignalR client crashes, will the SignalR server dispose of the underlying connection automatically for me (and will the OnDisconnectedAsync() event be fired)?
The idea is to dispose of client resources on the server that are tied to each connection (for example, an NHibernate session).
My Tests Indicate (on local machine, both server and client):
I tried to simulate this scenario: I had a running client that I then shut down with Task Manager, and the moment Windows released the process's resources, the SignalR server somehow detected that the connection was lost, released the connection, and OnDisconnectedAsync() was called. I am not sure if my test was sufficient for this use case (a client crash). I am curious how the server knew; was it perhaps because the finalizer for the client connection ran?
Q #2) If the current connection between client and server is broken or interrupted and SignalR needs to reconnect, and it successfully reconnects, does it use the same connection (with the same connection ID/same web socket) or does it attempt to create a new connection (tied to a new web socket)?
https://learn.microsoft.com/en-us/aspnet/core/signalr/configuration?view=aspnetcore-6.0&tabs=dotnet
The server considers the client disconnected if it hasn't received a message (including keep-alive) in this interval. It could take longer than this timeout interval for the client to be marked disconnected due to how this is implemented. The recommended value is double the KeepAliveInterval value.
It assigns a new connection ID. Consider using other data to track which user it is, e.g. by checking in the OnConnectedAsync and OnDisconnectedAsync methods.
https://learn.microsoft.com/en-us/aspnet/core/signalr/groups?view=aspnetcore-6.0

IIS Hosted WCF service does not recycle TCP ports "Insufficient winsock resources"

I have a WCF Service hosted on IIS 7 that runs successfully for a period of time, then fails to communicate with other network locations (I suspect there are no TCP ports available to connect to the outside world).
Background of application:
My system transcodes large media files (which takes time). I have a centrally hosted WCF service located on server A, which will be referred to as the 'Central WCF Service'. I then have many client services which do the actual transcoding of the media files on different servers: B, C, D, E, F and so on, which will be referred to as 'Client Processor Services'. The Central WCF Service manages which Client Processor Service the 'Transcode Jobs' get sent to for processing. Each of these Client Processor Services is a self-hosted WCF service; they basically do the long-running process and get polled by the Central WCF Service checking job progress percentage. The Central WCF Service therefore opens up a lot of connections to these clients to poll them for their job progress; polling occurs roughly once every 2-3 seconds for each of the clients.
The Central WCF Service stores a string list of the addresses of the Client Processor Services. The code used to poll each client is described below (stripped-down version):
public ClientProcessorClient getClientByaddress(string address)
{
    Binding binding = new NetTcpBinding(SecurityMode.None);
    return new ClientProcessorClient(binding, new EndpointAddress(address));
}

public void pollJobs()
{
    foreach (string clientAddress in clients)
    {
        ClientProcessorClient client = getClientByaddress(clientAddress);
        int progress = client.GetProgress();
        client.Close();
        // Do stuff with progress
    }
}
What happens when it breaks:
I can submit many transcode jobs to the Central WCF Service and it submits jobs to the clients successfully, updating progress etc. After around an hour of processing, the server that the Central WCF Service is hosted on stops working properly. The Central WCF Service throws the error "Insufficient winsock resources available to complete socket connection initiation" when trying to contact the Client WCF services. The Client WCF services are all pingable from a WCF Test Client running on my local machine. I have also noticed that when in this state the server cannot view network file resources: I have logged in remotely and tried to open a network-attached storage folder, and it fails to connect. I CAN however make calls TO that server, e.g. I can open a WCF Test Client, connect to the Central WCF Service and call its ping methods. Communications are allowed IN but not OUT of the server.
Few points of interest:
In the faulted state the connections TO the server can be made, but not FROM the server.
Each of my services (Central WCF Service and Client Processor Service) is a singleton instance.
The Central WCF Service is hosted in IIS 7 and application pool Recycling is disabled
Unfortunately named pipes are not an option (the clients and servers are on different machines).
My thoughts/Questions
All signs point towards the server running out of TCP sockets. Am I setting up the WCF ClientProcessorClients properly? Am I disposing of them properly? Do I need to wrap them in a using statement? Does anybody know how I can debug/diagnose where the problem occurs?
Thanks
For good or ill, Microsoft decided to implement the WCF service proxy logic (either ClientBase or directly from ChannelFactory) to allow exceptions to be thrown in the Close() method. I believe all the Dispose() method does is call Close() but I have never tried to look at the source code. If a proxy is in a faulted state, Abort() must be called to release resources (such as TCP sessions).
The implication is that the WCF service proxy does not release resources until a call to either Close() or Abort() completes successfully. Take a look at this blog post for one option for properly closing the proxy instance.

Tomcat - Configuring maxThreads and acceptCount in Http connector

I currently have an application deployed using Tomcat that interacts with a Postgres database via JDBC. The queries are very expensive, so what I'm seeing is a timeout caused by Tomcat or Apache (Apache sits in front of Tomcat in my configuration). I'm trying to limit the connections to the database to 20-30 simultaneous connections, so that the database is not overwhelmed. I've done this using the JDBC connection pool configuration, setting maxActive to 30 and maxIdle to 20. I also bumped up the maxWait.
In this scenario I'm limiting the USE of the database, but I want the connections/requests to be POOLED within Tomcat. Apache can accept 250 simultaneous requests. So I need to ensure Tomcat can also accept this many, but handle them appropriately.
Tomcat has two settings in the HTTP Connector config file:
maxThreads - "Max number of request processing threads to be created by the Http Connector, which therefore determines the max number of simultaneous requests that can be handled."
acceptCount - "The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused."
So I'm guessing that if I set maxThreads to the max number of JDBC connections (30), then I can set acceptCount to 250-30 = 220.
I don't quite understand the difference between a thread that is WAITING for a JDBC connection to open up from the pool, versus a thread that is queued... My thought is that a queued thread consumes fewer cycles, whereas a running thread waiting on the JDBC pool will spend cycles checking the pool for a free connection...?
Note that the HTTP connector is for incoming HTTP requests and is unrelated to JDBC. You probably want to configure the JDBC connection pool separately, e.g. via the connection pool attributes for the JDBC connector:
http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
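If it helps, here is a rough sketch of the equivalent programmatic configuration of the Tomcat JDBC pool described at that link (the URL, driver, and credentials are placeholders; the same attributes can also be set on the pool's Resource definition):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

// Limits mirror the values discussed in the question: 30 active, 20 idle.
PoolProperties props = new PoolProperties();
props.setUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder database URL
props.setDriverClassName("org.postgresql.Driver");
props.setUsername("app");                              // placeholder credentials
props.setPassword("secret");
props.setMaxActive(30);   // at most 30 connections handed out at once
props.setMaxIdle(20);     // keep at most 20 idle connections in the pool
props.setMaxWait(10000);  // a borrower waits up to 10s for a free connection, then fails

DataSource dataSource = new DataSource();
dataSource.setPoolProperties(props);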
Unless your application handles requests in a manner where it directly connects to the database on a per-HTTP-connection basis, you should configure your JDBC connection pool based on what your database software is set to/can handle, and your maxThreads based on what your application/hardware can handle.
Keeping the maxActive value (of the DB connection pool) lower than maxThreads (i.e. the number of concurrent request threads) makes sense in most cases. You can set acceptCount to a higher value depending on what traffic you are expecting on your website and how fast one request can be processed.
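On the connector side, a rough embedded-Tomcat sketch (assuming the embedded Tomcat API; a standalone install would set the same attributes on the Connector element in server.xml) of how the two limits relate might look like this:

import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

Tomcat tomcat = new Tomcat();
tomcat.setPort(8080);

// maxThreads bounds the request-processing threads; acceptCount is the backlog
// of connections that queue up once all of those threads are busy.
Connector connector = tomcat.getConnector();
connector.setProperty("maxThreads", "30");
connector.setProperty("acceptCount", "220");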

Socket connection was aborted - WCF

I have simple client/server apps that use WCF (netTcpBinding). When I launch the server and send messages through the client everything works fine, but when I close the server manually and open it again (without closing the client app at all), the next time the client tries to send a message to the server I get this exception (on the client side):
The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.9843903'.
If I use basicHttpBinding, the problem doesn't occur.
Does anyone know why this problem occurs?
Thanks,
Liran
This is expected behavior. When you close the server, the TCP connection on the server is closed and you can't call it from the client anymore. Starting the server again will not help. You have to catch the exception on the client, Abort() the current proxy, and create and open a new one.
With BasicHttpBinding it works because NetTcpBinding uses a single channel for the whole life of the proxy (the channel is bound to the TCP connection), whereas BasicHttpBinding creates a new one for each call (it reuses an existing HTTP connection or creates a new one if none exists).

wcf slow connection and number of connections in the pool

I have a wcf client.
The client calls a function and then closes.
If I use netstat there is only one connection.
I made an experiment.
In the server function I put Thread.Sleep(10000).
Then again I started the client.
With netstat I found out that there are 5 connections.
Why, when the response is slow, does the client open more than one connection?
Regards
NetTcp connections are pooled; if you keep your process running for a while you will see that these connections are reused, and a new one is created if an existing one is in use and has not yet been returned to the pool. So your usage pattern determines how the pool functions.
http://kennyw.com/work/indigo/173