Spring Reactor WebClient rate limit - spring-webflux

I have a Spring Boot WebFlux application A which needs to call another web service B. Something like this:
// controller
@PostMapping("/request")
public void doRequest(@RequestBody RequestA body) {
    transformToRequestB(body)                      // Flux<RequestB>
        .flatMap(this::sendRequestToB)             // call service B for each request
        .collectList()                             // gather all responses
        .subscribe(results -> saveResultToDb(results));
}
I am using Spring WebClient to send requests to B, but some B requests take a long time to respond and the number of concurrent connections is limited, so I often get the following error:
reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 45000ms
......
Is there any way to limit the request rate to service B to avoid this error? I don't care how long the job takes, but it needs to finish without errors.
I am trying to implement a queue to store the transformed service B requests and delay sending them until the connection pool has a free connection. I can observe the connection-release event, but I don't know how to delay sending the request. Many thanks for any advice.
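A common way to avoid exhausting the connection pool is to cap the number of in-flight requests with flatMap's concurrency argument, so later requests are not even subscribed until a slot frees up. A minimal sketch, assuming an already-configured WebClient; RequestB, ResponseB, and the /b-endpoint URI are placeholders:

import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

// Cap in-flight calls to B: flatMap subscribes to at most maxConcurrency
// inner publishers at a time, so pending requests simply wait instead of
// timing out while trying to acquire a pooled connection.
public Flux<ResponseB> sendAllToB(Flux<RequestB> bRequests, WebClient client) {
    int maxConcurrency = 10; // keep at or below the pool's max connections
    return bRequests.flatMap(req ->
            client.post()
                  .uri("/b-endpoint")            // placeholder URI
                  .bodyValue(req)
                  .retrieve()
                  .bodyToMono(ResponseB.class),
            maxConcurrency);
}

If B enforces a rate limit in requests per second rather than a concurrency limit, delayElements on the request Flux, or a dedicated rate limiter such as Resilience4j's, can be layered on top of this.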

Related

WCF service hosted on Azure App Service never seems to finish threads opened for processing

I have deployed a WCF service to Azure App Service that performs just one task: send a message to a topic. Although the app works fine under normal load, the thread count starts climbing as soon as load on the app increases.
The app instance becomes unhealthy when the thread count limit is reached.
Those threads stay in the waiting state forever. We tried the scale-out option on the thread count metric, but the app just keeps adding more instances, since the earlier instances still have almost all their threads waiting and remain unhealthy forever.
This is performed in the following sequence:
1. Accept a request.
2. Initialize a Service Bus topic client.
3. Send the requested message to the topic.
4. Close the topic client.
While sending a burst of 1000 requests, the app works, but the threads it starts always stay in the waiting state. While these threads are waiting, CPU stays at 0%. The average response time from this service is also under 100 ms.
After sending 1000 requests to this service, I see a similar number of open threads.
What could be the potential root cause of this issue? Is there any issue with my code to send the message to the topic?
public async Task SendAsync(Message message)
{
    try
    {
        await _topicClient.SendAsync(message);
    }
    catch (Exception exc)
    {
        throw new Exception(exc.Message);
    }
    finally
    {
        await _topicClient.CloseAsync();
    }
}
The code sample you provided does not really tell us much. We do not know how SendAsync(Message message) is being invoked. Is the image your queue count, which drops to 0 before accepting more messages? I'm assuming a client calls your WCF app service, which tells it to send the message to Service Bus?
It does sound like you are hitting the 1000-connection maximum. Your _topicClient should be a singleton for your app domain that all clients use. You also should only need one App Service instance if all you're doing is message forwarding; there's no need for scaling unless there's more processing that you haven't alluded to.
Have a look at the Service Bus messaging best practices doc for more suggestions.
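As a rough illustration of the singleton suggestion (this assumes the Microsoft.Azure.ServiceBus client; the connection string and topic path are placeholders):

using System;
using Microsoft.Azure.ServiceBus;

// One TopicClient for the whole app domain, created lazily and reused by
// every request; creating and closing a client per call churns connections.
public static class ServiceBusClients
{
    private static readonly Lazy<TopicClient> LazyClient =
        new Lazy<TopicClient>(() =>
            new TopicClient("Endpoint=sb://...", "my-topic")); // placeholder values

    public static TopicClient Topic
    {
        get { return LazyClient.Value; }
    }
}

With a shared client you would also drop the CloseAsync call from the finally block and close once at shutdown instead.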
Thanks for responding. These are good suggestions and I will look to review my implementation inline with these.
The good news is that I was able to resolve the issue; it wasn't related to the topic client as I thought earlier. It was due to how I was registering dependency injection.
I am implementing a WCF service on .NET Framework 4.8. Initially we did not include a Global.asax but registered DI in the service controller constructor. The implementation worked until we realized (as part of performance testing) that it seemed to add additional threads once we added an ILogger dependency. Those additional threads never cooled down but kept adding up as the service received more requests.
To resolve it, I moved the DI registration into Application_Start in Global.asax.
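For reference, that kind of once-per-app-domain registration looks roughly like this (Unity is shown purely as an example container; ILogger and MyLogger are placeholder types):

using System;
using Unity;
using Unity.Lifetime;

public class Global : System.Web.HttpApplication
{
    public static IUnityContainer Container { get; private set; }

    // Application_Start runs once per app domain, so registrations (and any
    // singletons they create) are not repeated on every service instantiation.
    protected void Application_Start(object sender, EventArgs e)
    {
        Container = new UnityContainer();
        Container.RegisterType<ILogger, MyLogger>(
            new ContainerControlledLifetimeManager()); // one shared instance
    }
}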

WebJob errors since move to Azure SQL v12

Each morning at 4 am a Scheduled Task creates around 500 messages on a message queue. Each message is the id of an email to send. Each message is picked up, a URL is created, and a request is sent via await HttpClient.GetAsync(url). The URL's target then creates and sends the email. This has worked well for months.
I've just upgraded to SQL Azure v12 and all is now not well.
The very first messages to be processed (after 2 minutes of running time) threw a
"System.Threading.Tasks.TaskCanceledException"
I'm also seeing
"System.Data.Entity.Core.EntityException: The underlying provider
failed on Open. ---> System.InvalidOperationException: Timeout
expired. The timeout period elapsed prior to obtaining a connection
from the pool. This may have occurred because all pooled connections
were in use and max pool size was reached."
and a couple of
"The timeout period elapsed prior to completion of the operation or
the server is not responding. This failure occurred while attempting
to connect to the routing destination. The duration spent while
attempting to connect to the original server was - [Pre-Login]
initialization=6; handshake=426; [Login] initialization=0;
authentication=0;"
The WebJob that sends the request to the API is awaiting a response. I'm wondering: because it's async, while awaiting the response the thread is freed to go off and process another queue item, and therefore creates another API request. Essentially this hits the API again and again until there are so many requests being processed by the API that all the threads are in use. Might I be better off NOT making the WebJob async, because then the 'trapped' thread would send a request only after the first request completes? Is that right?
Edit: actually the IIS logs suggest that the API requests are not all happening at once. So my question is: what should I look at next? Are these common SQL v12 errors, or is the recent upgrade a red herring?
Just to clarify, the WebJob that fires in response to the queue message simply does:
using (HttpClient client = new HttpClient())
{
    response = await client.GetAsync(url);
}
and hits the Web API of an Always On, standard-tier Azure website. Database DTU% is about 25% while this happens.
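As an aside, and probably unrelated to the SQL errors: creating a new HttpClient per message can exhaust outbound sockets under bursts, and the usual guidance is to share one instance. A sketch (EmailTrigger and ProcessMessageAsync are made-up names):

using System.Net.Http;
using System.Threading.Tasks;

public class EmailTrigger
{
    // HttpClient is thread-safe for concurrent requests; one shared instance
    // avoids leaving sockets in TIME_WAIT after every message.
    private static readonly HttpClient Client = new HttpClient();

    public async Task ProcessMessageAsync(string url)
    {
        HttpResponseMessage response = await Client.GetAsync(url);
        response.EnsureSuccessStatusCode();
    }
}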
edit:
"Gateway no longer provides retry logic in V12 Before version V12,
Azure SQL Database had a gateway that acted as a proxy to buffer all
interactions between the database and your client program. The gateway
provided automated retry logic for some transient errors.
V12 eliminated the gateway. Now your program must more fully handle
transient errors."
Adding a DbConfiguration class for SqlAzureExecutionStrategy. Will see how it runs tonight.
Adding the EF retry config class fixed the transient errors. The cancelled tasks are a different issue (new question).
// https://msdn.microsoft.com/en-us/data/jj680699
public class SqlAzureConfiguration : DbConfiguration
{
    public SqlAzureConfiguration()
    {
        this.SetExecutionStrategy("System.Data.SqlClient", () => new SqlAzureExecutionStrategy());
    }
}
and in web.config (because I have multiple contexts)
<entityFramework codeConfigurationType="Abc.DataService.SqlAzureConfiguration, Abc.DataService">

Using Grizzly with JMS/ActiveMQ

I'm working on a proof-of-concept project designed to explore the benefits of offloading work from a NIO server to a message queue for backend processing. I'm using Grizzly for the NIO boilerplate, and Spring Integration for the messaging (with JMS/ActiveMQ as the messaging implementation). Basically, what I want to do is this:
Client connection -> Server -> Server creates "work-to-be-done" message -> JMS/ActiveMQ
On the ActiveMQ message queue, a number of "workers" will be actively consuming these messages, processing them, and placing the result on another queue. The server is listening for "response messages" on that queue, and once a message is picked up it will execute the following:
Response queue -> Server serializes the message to something the client can understand -> back to the client
My immediate problem is my lack of understanding of Grizzly, specifically how to decouple the event handling from the messaging. The server has to create the work-to-be-done message in such a way that when the reply message comes back from the worker, the server knows who the client was (i.e., can find the related FilterChainContext in Grizzly) in order to send the TCP message back.
I might be able to use FilterChainContext.getAddress() and place that on the work message, but I'm not sure how to write a method that takes a peer address and a message and somehow sends it (FilterChainContext.write()) when it has no FilterChainContext.
I'm now playing with the idea of keeping a Map around, but I'm apprehensive about this approach because I don't want entries to go stale in the map if something happens to the message during serialization or processing.
Ideas and suggestions are welcome.
-Michael
You could use the TCP adapters/gateways (which have an option to use NIO), together with custom (de)serializers. If you must use Grizzly, you could write a server connection factory implementation. In the case of the outbound adapter (or inbound gateway), the endpoint is registered as a 'TcpListener' (using the connectionId), and the SI message contains the IpHeaders.CONNECTION_ID header, which is used to determine which connection gets the reply. When a connection closes, it is unregistered (removed from the map).
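If you stay with plain Grizzly, the Map idea from the question can be made safe against stale entries by keying on a generated correlation id and evicting when the connection closes. A rough sketch, assuming Grizzly 2.x's Connection API (ReplyCorrelator and its methods are made up; the id would travel on the JMS work message and come back on the reply):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.glassfish.grizzly.CloseListener;
import org.glassfish.grizzly.CloseType;
import org.glassfish.grizzly.Connection;

public class ReplyCorrelator {
    private final Map<String, Connection> pending = new ConcurrentHashMap<>();

    // Called from the Grizzly filter when a client request arrives.
    public String register(final Connection connection) {
        final String correlationId = UUID.randomUUID().toString();
        pending.put(correlationId, connection);
        // Evict on close so entries cannot go stale if the message is lost
        // during serialization or processing.
        connection.addCloseListener(new CloseListener<Connection, CloseType>() {
            @Override
            public void onClosed(Connection closeable, CloseType type) {
                pending.remove(correlationId);
            }
        });
        return correlationId;
    }

    // Called from the response-queue listener thread.
    public void reply(String correlationId, Object message) {
        Connection connection = pending.remove(correlationId);
        if (connection != null && connection.isOpen()) {
            connection.write(message); // goes back through the filter chain
        }
    }
}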

WCF Multiple requests from same client

I am building a WCF service and I need clients to be able to make multiple calls at the same time.
For example, 5 calls to void UploadPhoto(byte[] photo) and 1 call to string GetInfo().
If I understand it correctly, whenever I make a request to the service, I need to get the response to the first one before the second gets processed. Is that correct?
Thanks
You can make multiple concurrent calls if you increase System.Net.ServicePointManager.DefaultConnectionLimit; the default is 2.
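For example, set it once at client startup (10 is an arbitrary value; size it for your workload):

// Raise the client-side cap on concurrent connections per endpoint;
// the default of 2 serializes any further requests to the same host.
System.Net.ServicePointManager.DefaultConnectionLimit = 10;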
You need to configure the WCF service as a per-call service to process concurrent requests.
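That is done with the instancing mode on the service implementation (PhotoService and IPhotoService are placeholder names):

using System.ServiceModel;

// A new service instance is created for every call, so calls from the same
// client no longer queue behind one another on the server side.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PhotoService : IPhotoService
{
    // ... operation implementations ...
}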
That is not quite correct.
If you call a WCF (or other web) service synchronously, then you have to wait for the response before doing anything else.
However, you can call a WCF service asynchronously, in which case you do not have to wait for the result. You create a handler that handles the result when it comes back, but the main program continues.
Have a look at Ladislav's answer to this question: Difference between WCF sync and async call?
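With a task-based proxy (generated by newer svcutil or Add Service Reference with asynchronous operations enabled), the asynchronous call looks roughly like this; PhotoServiceClient and GetInfoAsync are placeholder names for such a generated proxy:

using System.Threading.Tasks;

public static async Task<string> CallServiceAsync()
{
    var client = new PhotoServiceClient();        // generated proxy (placeholder)
    Task<string> infoTask = client.GetInfoAsync(); // returns immediately
    // ... other work, e.g. kick off the UploadPhoto calls ...
    return await infoTask; // resumes here when the response arrives
}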

How to process Queued WCF Web service requests

I have a requirement to queue web service requests, process each request based on priority and request time, and then send the response back.
The approach I'm thinking of is as follows:
1. Create a web service method to submit and enqueue requests.
2. Create two queues (one for high-priority requests, one for lower-priority requests).
3. Create a processing method that handles one request at a time (dequeuing from the high-priority queue first if it has entries), processes it, and then stores the response.
4. Create a dictionary to store the response for each request.
5. Create a web service method to get the response.
I'm thinking of using an in-memory queue, since I expect only a small number of requests to be queued at a time.
The problem I'm having is with step 3: I want the processor method to run continuously as long as there are requests in the queue (a sketch of one approach follows this question).
How can I accomplish step 3 in a WCF web service?
I'm using .NET 4.0 environment.
I really appreciate any ideas or suggestions.
Thanks
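One way to keep step 3 running continuously is a dedicated long-running consumer that drains the high-priority queue first and parks on a semaphore when both queues are empty. A sketch for .NET 4.0 (RequestProcessor, WorkItem, and Process are made-up names); in a WCF host you would start this once, for example from a singleton or Application_Start:

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class WorkItem
{
    public string Id;
}

public class RequestProcessor
{
    private readonly ConcurrentQueue<WorkItem> _high = new ConcurrentQueue<WorkItem>();
    private readonly ConcurrentQueue<WorkItem> _low = new ConcurrentQueue<WorkItem>();
    private readonly SemaphoreSlim _signal = new SemaphoreSlim(0);

    // Responses keyed by request id (step 4); the "get the response"
    // service method (step 5) would read from here.
    public readonly ConcurrentDictionary<string, string> Responses =
        new ConcurrentDictionary<string, string>();

    public void Enqueue(WorkItem item, bool highPriority)
    {
        if (highPriority) _high.Enqueue(item); else _low.Enqueue(item);
        _signal.Release(); // wake the consumer
    }

    public void Start(CancellationToken token)
    {
        Task.Factory.StartNew(() =>
        {
            while (!token.IsCancellationRequested)
            {
                _signal.Wait(token); // block until something is queued
                WorkItem item;
                if (_high.TryDequeue(out item) || _low.TryDequeue(out item))
                {
                    Responses[item.Id] = Process(item);
                }
            }
        }, token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
    }

    private string Process(WorkItem item)
    {
        // ... do the real work here ...
        return "done";
    }
}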
I would create my service contract to make it clear that the operations will be queued. Something like:
[OperationContract]
string EnqueueRequest(int priority, RequestDetails details);

[OperationContract]
bool IsRequestComplete(string requestId);
I would have EnqueueRequest place each request into an MSMQ queue. I'd have a Windows Service processing the requests in the queue. That service would be the only process that has access to the SDLC device.
Have you coded the service in a plain-Jane, meat-and-potatoes way and profiled it to see whether queuing requests is actually necessary? There is overhead involved in queuing. It's a good idea to do some measurement and see if simply servicing requests is adequate.
Another way to approach it would be to use Microsoft Message Queuing (MSMQ). There is even some tight integration between message queues and WCF. The thought is: if you do actually need a queue, why not use something that is already built and tested?