Apache HTTP client: only two parallel connections are possible

I have the code below to invoke a REST API method using the Apache HTTP client. However, only two parallel requests can be sent using this client.
Is there any parameter to set max-connections?
HttpPost post = new HttpPost(resourcePath);
addPayloadJsonString(payload, post); // set a String entity
setAuthHeader(post);                 // set the Authorization: Basic header
try {
    return httpClient.execute(post);
} catch (IOException e) {
    String errorMsg = "Error while executing POST statement";
    log.error(errorMsg, e);
    throw new RestClientException(errorMsg, e);
}
The jars I am using are:
org.apache.httpcomponents.httpclient_4.3.5.jar
org.apache.httpcomponents.httpcore_4.3.2.jar

You can configure the HttpClient with an HttpClientConnectionManager.
Take a look at the pooling connection manager.
PoolingHttpClientConnectionManager maintains a pool of HttpClientConnections and is able to service connection requests from multiple execution threads. Connections are pooled on a per-route basis. A request for a route for which the manager already has a persistent connection available in the pool will be serviced by leasing a connection from the pool rather than creating a brand new connection.
PoolingHttpClientConnectionManager maintains a maximum limit of connections on a per-route basis and in total. By default this implementation will create no more than 2 concurrent connections per given route and no more than 20 connections in total. For many real-world applications these limits may prove too constraining, especially if they use HTTP as a transport protocol for their services.
This example shows how the connection pool parameters can be adjusted:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
// Increase max total connection to 200
cm.setMaxTotal(200);
// Increase default max connection per route to 20
cm.setDefaultMaxPerRoute(20);
// Increase max connections for localhost:80 to 50
HttpHost localhost = new HttpHost("localhost", 80);
cm.setMaxPerRoute(new HttpRoute(localhost), 50);
CloseableHttpClient httpClient = HttpClients.custom()
.setConnectionManager(cm)
.build();
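As a usage note, the resulting client is meant to be shared rather than recreated per request. Below is a minimal sketch (not from the original code; the thread count and URI are placeholders, and httpClient is the instance built above) showing parallel requests against the raised limits:
// Sketch only: one CloseableHttpClient backed by the pooling manager above,
// shared by several threads; each request leases a connection from the pool.
ExecutorService executor = Executors.newFixedThreadPool(10);
for (int i = 0; i < 10; i++) {
    executor.submit(() -> {
        HttpGet get = new HttpGet("http://localhost:80/resource"); // placeholder URI
        try (CloseableHttpResponse response = httpClient.execute(get)) {
            EntityUtils.consume(response.getEntity()); // releases the connection back to the pool
        } catch (IOException e) {
            // handle/log as appropriate
        }
    });
}
executor.shutdown();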

Related

RSocket Client Side Load balancing for multilevel Microservices

In our microservices-based application we have layers of microservices, i.e. a user/REST client calling MS1, MS1 calling MS2, and so on. For simplicity, and to present the actual problem, I will mention only the client, MS1, and MS2 here. We are trying to implement MS-to-MS calls using the RSocket communication protocol (request-response interaction model).
We also need to implement client-side load balancing in RSocket, as we will be running multiple pods (instances) of the MSs in a Kubernetes environment.
We are observing the following problem with client-side load balancing in local unit testing as well (mentioning this to rule out any issue with the deployment/Kubernetes environment, etc.):
1.) Client -> MS1(Instance1) -> MS2(Instance1): the RSocket load balancing code works fine and each request is processed.
2.) Client -> MS1(Instance1, Instance2) -> MS2(Instance1): load balancing works fine.
3.) Client -> MS1(Instance1) -> MS2(Instance1, Instance2): only the 1st request passes, i.e. Client -> MS1(Instance1) -> MS2(Instance1); the 2nd request stops at Client -> MS1(Instance1) and the call to MS2(Instance2) is never initiated.
4.) Client -> MS1(Instance1, Instance2) -> MS2(Instance1, Instance2): only 2 requests get successfully processed, Client -> MS1(Instance1) -> MS2(Instance1) and Client -> MS1(Instance2) -> MS2(Instance2); further RSocket calls do not happen, and as per KeepAliveInterval and KeepAliveMaxLifeTime the client RSocket connection is disposed with the error:
Caused by: ConnectionErrorException (0x101): No keep-alive acks for 30000 ms
at io.rsocket.core.RSocketRequester.lambda$tryTerminateOnKeepAlive$2(RSocketRequester.java:299)
Now let us see how I have implemented the client-side load balancing code.
I am relying on Flux<List<LoadbalanceTarget>> and the 3 important beans are:
private Mono<List<LoadbalanceTarget>> targets()
{
    Mono<List<LoadbalanceTarget>> mono = Mono.fromSupplier(() -> serviceRegistry.getServerInstances()
            .stream()
            .map(server -> LoadbalanceTarget.from(getLoadBalanceTargetKey(server),
                    TcpClientTransport.create(TcpClient
                            .create()
                            .option(ChannelOption.TCP_NODELAY, true)
                            .option(ChannelOption.ALLOW_HALF_CLOSURE, true)
                            .host(server.getHost())
                            .port(server.getPort()))))
            .collect(Collectors.toList()));
    return mono;
}

@Bean
public Flux<List<LoadbalanceTarget>> targetFluxForMathService2()
{
    return Flux.from(targets());
}
Note: for testing I am faking serviceRegistry and returning a list of hard-coded RSocket server instances (host and port).
@Bean
public RSocketRequester rSocketRequester2(Flux<List<LoadbalanceTarget>> targetFluxForMathService2) {
    RSocketRequester rSocketRequester = this.builder.rsocketConnector(configurer ->
            configurer.keepAlive(Duration.ofSeconds(10), Duration.ofSeconds(30))
                    .reconnect(Retry.fixedDelay(3, Duration.ofSeconds(1))
                            .doBeforeRetry(s -> System.out.println("Disconnected, retrying to connect"))))
            .transports(targetFluxForMathService2, new RoundRobinLoadbalanceStrategy());
    return rSocketRequester;
}
private String getLoadBalanceTargetKey(RSocketServerInstance server)
{
    return server.getHost() + server.getPort();
}
Any help will be highly appreciated.

I am trying to create a connection pool with a single connection using PoolingHttpClientConnectionManager and CloseableHttpClient for HTTPS and reuse it

I am trying to create a connection pool where only a single HTTPS connection is created, and when a subsequent request comes the previous connection is reused. Currently, every time I send a request a new connection gets established and the previous connection goes into the TIME_WAIT state.
Below is the code snippet I am using; it works for HTTP connections but not for HTTPS:
SslConfigurator sslConfig = SslConfigurator.newInstance()
        .keyStoreFile(this.connectionInfo.getKeyStorePath())
        .keyStorePassword(connectionInfo.getKeyStorePassword())
        .keyStoreType("JKS")
        .trustStoreFile(this.connectionInfo.getKeyStorePath())
        .trustStorePassword(connectionInfo.getKeyStorePassword())
        .securityProtocol("TLS");
logger.info("SSL CONFIG Accepted");
sslContext = sslConfig.createSSLContext();
SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(sslContext, NoopHostnameVerifier.INSTANCE);
logger.info("SSL CONTEXT CREATED, Building Client");
Registry<ConnectionSocketFactory> socketFactoryRegistry =
        RegistryBuilder.<ConnectionSocketFactory>create().register("https", sslsf).build();
connManager = new PoolingHttpClientConnectionManager(socketFactoryRegistry);
connManager.setMaxTotal(1);
connManager.setDefaultMaxPerRoute(1);
config = RequestConfig.custom().setConnectTimeout(60000).setConnectionRequestTimeout(60000).setSocketTimeout(60000).build();
client = HttpClients.custom().setDefaultRequestConfig(config).setConnectionManager(connManager).build();
connManager.setMaxTotal(1);
I am not sure why you think this has no effect, because it should definitely restrict the total number of connections in the pool to just one at a time.
In your particular case, however, you should be using BasicHttpClientConnectionManager instead of PoolingHttpClientConnectionManager.
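A minimal sketch of that substitution, assuming the same socketFactoryRegistry and config variables from the snippet above:
// BasicHttpClientConnectionManager keeps exactly one connection and reuses it
// for consecutive requests on the same route.
BasicHttpClientConnectionManager connManager = new BasicHttpClientConnectionManager(socketFactoryRegistry);
CloseableHttpClient client = HttpClients.custom()
        .setDefaultRequestConfig(config)
        .setConnectionManager(connManager)
        .build();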

Masstransit RPC (RabbitMq) throughput limit

We are using MassTransit with RabbitMQ for making RPCs from one component of our system to others.
Recently we hit a throughput limit on the client side, measured at about 80 completed responses per second.
While trying to investigate where the problem was, I found that requests were processed quickly by the RPC server, then the responses were put into the callback queue, and the queue was consumed at about 80 messages per second.
This limit exists only on the client side. Starting another process of the same client app on the same machine doubles the request throughput on the server side, but then I see two callback queues, both filled with messages, each being consumed at the same 80 messages per second.
We are using a single instance of IBus:
builder.Register(c =>
{
    var busSettings = c.Resolve<RabbitSettings>();
    var busControl = MassTransitBus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri(busSettings.Host), h =>
        {
            h.Username(busSettings.Username);
            h.Password(busSettings.Password);
        });
        cfg.UseSerilog();
        cfg.Send<IProcessorContext>(x =>
        {
            x.UseCorrelationId(context => context.Scope.CommandContext.CommandId);
        });
    });
    return busControl;
})
.As<IBusControl>()
.As<IBus>()
.SingleInstance();
The send logic looks like this:
var busResponse = await _bus.Request<TRequest, TResult>(
    destinationAddress: _settings.Host.GetServiceUrl<TCommand>(queueType),
    message: commandContext,
    cancellationToken: default(CancellationToken),
    timeout: TimeSpan.FromSeconds(_settings.Timeout),
    callback: p => { p.WithPriority(priority); });
Has anyone faced a problem of this kind?
My guess is that there is some programmatic limit in the response dispatch logic. It might be the max thread pool size, the size of a buffer, or the prefetch count of the response queue.
I tried to play with the .NET thread pool size, but nothing helped.
I'm fairly new to MassTransit and will appreciate any help with my problem.
I hope it can be fixed through configuration.
There are a few things you can try to optimize the performance. I'd also suggest checking out the MassTransit-Benchmark and running it in your environment - this will give you an idea of the possible throughput of your broker. It allows you to adjust settings like prefetch count, concurrency, etc. to see how they affect your results.
Also, I would suggest using one of the request clients to reduce the setup for each request/response. For example, create the request client once, and then use that same client for each request.
var serviceUrl = yourMethodToGetIt<TRequest>(...);
var client = Bus.CreateRequestClient<TRequest>(serviceUrl);
Then, use that IRequestClient<TRequest> instance whenever you need to perform a request.
Response<Value> response = await client.GetResponse<TResponse>(new Request());
Since you are just using RPC, I'd highly recommend setting the receive endpoint queue to non-durable, to avoid writing RPC requests to disk. Also adjust the bus prefetch count to a higher value (at least 2x the maximum number of concurrent requests you may have) to ensure that responses are always delivered directly to your awaiting response consumer (it's an internal detail of how RabbitMQ delivers messages).
var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.PrefetchCount = 1000;
});

User destinations in a multi-server environment? (Spring WebSocket and RabbitMQ)

The documentation for Spring WebSockets states:
4.4.13. User Destinations
An application can send messages targeting a specific user, and Spring’s STOMP support recognizes destinations prefixed with "/user/" for this purpose. For example, a client might subscribe to the destination "/user/queue/position-updates". This destination will be handled by the UserDestinationMessageHandler and transformed into a destination unique to the user session, e.g. "/queue/position-updates-user123". This provides the convenience of subscribing to a generically named destination while at the same time ensuring no collisions with other users subscribing to the same destination so that each user can receive unique stock position updates.
Is this supposed to work in a multi-server environment with RabbitMQ as broker?
As far as I can tell, the queue name for a user is generated by appending the simpSessionId. When using the recommended client library stomp.js this results in the first user getting the queue name "/queue/position-updates-user0", the next gets "/queue/position-updates-user1" and so on.
This in turn means the first users to connect to different servers will subscribe to the same queue ("/queue/position-updates-user0").
The only reference to this I can find in the documentation is this:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
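For illustration, a minimal Java-config sketch of that quoted option (assuming the Spring 5 style WebSocketMessageBrokerConfigurer interface; in Spring 4 you would extend AbstractWebSocketMessageBrokerConfigurer instead, and the broadcast destination name is just a placeholder):
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Placeholder broadcast destination: messages to unresolved user destinations
        // are re-published here so other servers get a chance to resolve them.
        registry.enableStompBrokerRelay("/queue", "/topic")
                .setUserDestinationBroadcast("/topic/unresolved-user-destination");
    }
}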
But this only makes it possible to communicate with a user from a different server than the one where the WebSocket is established.
I feel I'm missing something. Is there any way to configure Spring so that MessagingTemplate.convertAndSendToUser(principal.getName(), destination, payload) can safely be used in a multi-server environment?
If users need to be authenticated (I assume their credentials are stored in a database), you can always use their unique database user id as the thing they subscribe to.
What I do is, when a user logs in, they are automatically subscribed to two topics: an account|system topic for system-wide broadcasts and an account|<userId> topic for user-specific broadcasts.
You could try something like notification|<userId> for each person to subscribe to, then send messages to that topic and they will receive them.
Since user ids are unique to each user, you shouldn't have an issue within a clustered environment as long as each environment hits the same database information.
Here is my send method:
public static boolean send(Object msg, String topic) {
    try {
        String destination = topic;
        String payload = toJson(msg); // jsonify the message
        Message<byte[]> message = MessageBuilder.withPayload(payload.getBytes("UTF-8")).build();
        template.send(destination, message);
        return true;
    } catch (Exception ex) {
        logger.error(CommService.class.getName(), ex);
        return false;
    }
}
My destinations are preformatted, so if I want to send a message to the user with id 1 the destination looks something like /topic/account|1.
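For example (AccountNotification is a hypothetical payload class; the destination string just follows the convention above):
// Hypothetical usage of the send() helper above
AccountNotification payload = new AccountNotification("Position update");
boolean delivered = send(payload, "/topic/account|1"); // user id 1, per the format above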
I've created a ping-pong controller that tests WebSockets for users who connect, to see if their environment allows WebSockets. I don't know if this will help you, but it does work in my clustered environment.
/**
 * Play ping pong between the client and server to see if web sockets work
 * @param input the ping pong input
 * @return the return data to check for connectivity
 * @throws Exception exception
 */
@MessageMapping("/ping")
@SendToUser(value = "/queue/pong", broadcast = false) // send only to the session that sent the request
public PingPong ping(PingPong input) throws Exception {
    int receivedBytes = input.getData().length;
    int pullBytes = input.getPull();
    PingPong response = input;
    if (pullBytes == 0) {
        response.setData(new byte[0]);
    } else if (pullBytes != receivedBytes) {
        // create a random byte array
        byte[] data = randomService.nextBytes(pullBytes);
        response.setData(data);
    }
    return response;
}

WCF client timeout after 400 instance calls when not closing proxy

I am using the following code to investigate what happens when you fail to close the proxy:
class Program
{
    static void Main()
    {
        for (int i = 1; i < 500; i++)
        {
            MakeTheCall(i);
        }
        Console.WriteLine("DONE");
        Console.ReadKey();
    }

    private static void MakeTheCall(int i)
    {
        Console.Write("Call {0} - ", i);
        var proxy = new ServiceReference1.TestServiceClient();
        var result = proxy.LookUpCustomer("123456", new DateTime(1986, 1, 1));
        Console.WriteLine(result.Email);
        //proxy.Close();
    }
}
The service uses the net.tcp binding, is WAS hosted, and all values are defaults.
Running it, I get a timeout when i > 400. Why 400? Is this a setting somewhere? I expected it to be much lower, equal to maxConnections.
By not closing the proxy, you are maintaining a session on the service. The maxConcurrentSessions throttling attribute controls how many sessions the service can accommodate. The default (in .NET 4.0) is 100 * processor count, so I am guessing that you have 4 processors (or cores) = 400 concurrent sessions?
The reason your test code is timing out is probably due to the default WCF service throttling and doesn't have anything to do with not disposing of the proxy object. To conserve client-side resources, you should always properly dispose of the proxy instance.
I believe that a service host will only create up to 16 instances of a service by default, which may be even fewer if the binding is set to use sessions of some sort. You're flooding it with around 400 requests within a few seconds. There is a set of WCF performance counters you can fire up to view the instancing of a WCF service. I knew all that prep for the WCF certification exam would come in really useful sometime :)