I'm implementing a REST client based on the MicroProfile Rest Client spec, deployed on Open Liberty 22.0.0.9.
After a few performance tests with JMeter, it seems that the connection pool for the REST client is limited to 10 connections.
How can I change this? That limit is really not enough for our usage.
It seems that the underlying client implementation is still CXF (org.apache.cxf.microprofile.client.CxfTypeSafeClientBuilder).
On this page https://openliberty.io/docs/latest/reference/jaxrs-dif.html it is stated that "The underlying JAX-RS implementation for Open Liberty also changed from Apache CXF to RESTEasy."
Has the implementation for JAX-RS 2.0 and 2.1 also switched to RESTEasy, or is that true only for restfulWS-3.0?
Anyway, to change the CXF configuration I've tried adding the jvm.options property "-Dhttp.maxConnections=100", but it has no effect.
I've also set a RestClientBuilderListener, but I can't find any working property to set on the RestClientBuilder...
Any idea how I can achieve this?
For mpRestClient-2.0:
If used synchronously, the underlying CXF transport uses the JDK's HttpURLConnection. By default, HTTP Keep-Alive is enabled and used unless the server responds with a "Connection: close" response header. When keep-alive is in effect, the maximum number of cached keep-alive connections per destination host is controlled with -Dhttp.maxConnections=X (default 5). If the server responds with a "Keep-Alive: timeout=X" response header, the KeepAliveCache will purge and close the connection after approximately X seconds of idleness. If the server does not send such a header, the default is 5 seconds and cannot be tuned.
If used asynchronously, then the underlying CXF uses Apache HttpClient, and the HttpClient may be tuned with client.setProperty calls such as the following (a sketch is shown after the list):
org.apache.cxf.transport.http.async.MAX_CONNECTIONS
org.apache.cxf.transport.http.async.MAX_PER_HOST_CONNECTIONS
org.apache.cxf.transport.http.async.CONNECTION_TTL
org.apache.cxf.transport.http.async.CONNECTION_MAX_IDLE
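For example, a minimal sketch of a RestClientBuilderListener that sets these CXF async-transport properties on every builder; the numeric values are illustrative assumptions, not recommendations:

import org.eclipse.microprofile.rest.client.RestClientBuilder;
import org.eclipse.microprofile.rest.client.spi.RestClientBuilderListener;

// Illustrative sketch: tunes the Apache HttpClient that CXF uses for async calls.
// The property keys are the ones listed above; the values are example numbers only.
public class PoolTuningListener implements RestClientBuilderListener {

    @Override
    public void onNewBuilder(RestClientBuilder builder) {
        builder.property("org.apache.cxf.transport.http.async.MAX_CONNECTIONS", 100);
        builder.property("org.apache.cxf.transport.http.async.MAX_PER_HOST_CONNECTIONS", 50);
        builder.property("org.apache.cxf.transport.http.async.CONNECTION_TTL", 60000);
        builder.property("org.apache.cxf.transport.http.async.CONNECTION_MAX_IDLE", 30000);
    }
}

The listener is picked up via the ServiceLoader mechanism, so it also needs to be declared in META-INF/services/org.eclipse.microprofile.rest.client.spi.RestClientBuilderListener. Note that, per the above, these properties only apply to asynchronous usage; synchronous calls still go through HttpURLConnection and -Dhttp.maxConnections.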
Related
I'm using streamed service calls in Lagom. Once I upgraded to 1.4, error messages from the server are no longer propagated to the client over WebSockets. This works in tests using the Lagom TestKit, but not when running a service using 'runAll' from sbt or in a live deployment.
Using 'runAll', all client calls that fail come back with "Peer closed connection with code 1011 'internal error'"
The issue here is fairly easy to diagnose. Lines 66-68 of akka-http 10.0.11 FrameOutHandler create the WebSocket closeFrame, throwing away the passed in exception and returning "internal error", even though they have the exception message.
My problem is that although I can see the error, I can't see any easy way to fix it without patching akka-http. Is this something that should be supported in Lagom? It used to work in 1.3 when we used the netty client.
Are you testing with another Lagom client connecting directly to the port that the service listens to, or using a web browser or some other client connecting through port 9000?
If it's the latter, you might also need to change the service gateway implementation back to Netty as described in the documentation on Default gateway implementation:
The Lagom development environment provides an implementation of a Service Gateway based on Akka HTTP and the (now legacy) implementation based on Netty.
You may opt in to use the old netty implementation.
In the Maven root project pom:
<plugin>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>lagom-maven-plugin</artifactId>
    <version>${lagom.version}</version>
    <configuration>
        <serviceGatewayImpl>netty</serviceGatewayImpl>
    </configuration>
</plugin>
In sbt:
// Implementation of the service gateway: "akka-http" (default) or "netty"
lagomServiceGatewayImpl in ThisBuild := "netty"
In any case, please create an issue on GitHub and we can investigate a solution in the framework.
I have a simple pass-through proxy in WSO2 ESB 5.0.0 that fronts a WSO2 DSS. When I consume the ESB proxy, the live threads and loaded classes keep increasing until the WSO2 ESB breaks down. When the ESB breaks down there are 284 threads and 14k classes loaded. If I consume the DSS directly, the DSS doesn't break down and peaks at 104 threads and 9k classes loaded.
How can I force the ESB to release those resources, or improve how the ESB handles HTTP connections? It looks like zombie connections never release their thread.
Any help narrowing down the problem?
This doesn't look like a problem with class loading or thread count. I just finished testing a freshly installed WSO2 ESB server:
WSO2 ESB version 5.0.0
Java 8
Windows 8
The ESB server also has the DSS feature installed.
The DSS services are called over the HTTP/1.1 protocol.
The DSS service has a long-running query (over 10 s).
Total number of simultaneous requests to the ESB service: over 150.
Total number of loaded classes: over 15,000; total threads running: over 550. Even under this high load there is no issue like the one you mention.
What I actually recommend is to check how you make HTTP requests to the ESB service. It is rather sensitive to headers such as Content-Type and Encoding; it took me quite a long time to find out how to properly call a SOAP service on the ESB using Apache HttpClient (4.5). A sketch of such a call is shown below.
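For illustration only, here is a minimal sketch of such a call with Apache HttpClient 4.5, setting the Content-Type and SOAPAction headers explicitly; the endpoint URL, action, and payload are made-up placeholders, not values from your setup:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class EsbSoapCall {
    public static void main(String[] args) throws Exception {
        // Placeholder proxy endpoint and SOAP 1.1 payload; adjust to your own proxy.
        String url = "http://localhost:8280/services/MyPassThroughProxy";
        String envelope =
                "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
              + "<soapenv:Body><getData/></soapenv:Body></soapenv:Envelope>";

        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost(url);
            // The pass-through transport is sensitive to these headers:
            // SOAP 1.1 expects text/xml plus a SOAPAction header.
            post.setHeader("SOAPAction", "urn:getData");
            post.setEntity(new StringEntity(envelope, ContentType.create("text/xml", "UTF-8")));

            try (CloseableHttpResponse response = client.execute(post)) {
                System.out.println(response.getStatusLine());
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }
}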
Eventually I probably found the problem. The problem is between the DSS and ESB servers. According to the source code, this kind of error happens when the ESB sends a request to the DSS server, the request is read by the DSS server, but the connection is closed before the DSS server writes its response back to the ESB. The ESB server then reports a message about the problem, like the one you mention:
SourceHandler
...
} else if (state == ProtocolState.REQUEST_DONE) {
    isFault = true;
    log.warn("Connection closed by the client after request is read: " + conn);
}
This is easy to reproduce: start the ESB and DSS servers, start sending a lot of requests to the pass-through proxy (which proxies requests to the DSS service) on the ESB, then shut down the DSS server, and you will see a lot of
WARN - SourceHandler Connection closed by the client after request is read: http-incoming-1073 Remote Address
This might be a network issue or a firewall; also note that the WSO2 DSS server has a socket timeout, which is 180 s by default.
I am trying to check whether Apache CXF implements HTTP connection pooling. If it does, how can we configure it? If it doesn't, how can we achieve the same?
This thread points a little in the same direction, but it's not clear whether HTTPConduit has a way to set this or how to configure it properly.
Can anyone guide me on this?
Apache CXF uses HttpURLConnection internally and relies on Java system properties to configure client connection settings.
Two main ones that you can configure are as follows:
http.keepAlive (default: true) -
Indicates if persistent connections should be supported. They improve performance by allowing the underlying socket connection to be reused for multiple http requests. If this is set to true then persistent connections will be requested with HTTP 1.1 servers.
http.maxConnections (default: 5) -
If HTTP keepalive is enabled (see above) this value determines the maximum number of idle connections that will be simultaneously kept alive, per destination.
Here is a list of all the properties that you can set to configure HttpURLConnection.
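As a minimal sketch, these JDK-level properties can be set programmatically as shown below; they are usually passed as -D arguments at JVM startup instead, and the value 20 is just an illustrative assumption:

// Illustrative only: these properties are global to the JVM (they affect every
// HttpURLConnection, not just CXF clients) and should be set before the first request.
public class ClientBootstrap {
    public static void main(String[] args) {
        System.setProperty("http.keepAlive", "true");     // persistent connections (default: true)
        System.setProperty("http.maxConnections", "20");  // idle connections kept per destination (default: 5)
        // ... build and invoke the CXF client as usual ...
    }
}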
Hope it helps.
I have a web service project (CXF integrated with Spring, JAX-WS) that was deployed on WebLogic (12.1.1).
Another project, acting as the client, is deployed on the same application server on another machine.
My problem is that it takes at least 23 seconds to send info from the server to the client (sometimes even longer), but the same request is very fast when it is called from SoapUI.
How can I configure WebLogic to improve this?
I searched a lot and finally found the solution to my problem.
JaxWsProxyFactoryBean is the CXF client object used to communicate with the CXF server.
This object has a property named Bus; if this property is not set when configuring the CXF client, performance won't be good. A sketch is shown below.
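As a rough sketch of that idea, assuming a hypothetical MyService SEI and endpoint URL (neither comes from the original post), the client factory can share one Bus instead of letting each proxy build its own:

import org.apache.cxf.BusFactory;
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

// Sketch only: MyService and the address below are placeholders.
public class MyServiceClientFactory {
    public static MyService createClient() {
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setBus(BusFactory.getThreadDefaultBus()); // reuse one Bus rather than creating a new one per proxy
        factory.setServiceClass(MyService.class);
        factory.setAddress("http://target-host:7001/myapp/services/MyService");
        return (MyService) factory.create();
    }
}

In a Spring configuration the equivalent is to reference a single shared Bus bean from the client definition.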
We have quite a strange situation on our side. Under load, our WL 10.3.2 server fails to respond. We are using RESTEasy with HttpClient version 3.1 to talk to a web service deployed as a WAR.
What we have is a calculation process that runs in 4 containers on 4 physical machines, and each of them sends requests to WL during the calculation.
On each run we see messages from HttpClient like this:
[THREAD1] INFO I/O exception (org.apache.commons.httpclient.NoHttpResponseException) caught when processing request: The server OUR_SERVER_NAME failed to respond
[THREAD1] INFO Retrying request
The HttpClient makes several requests until it gets the necessary data.
I want to understand why WL can refuse connections. I read about the WL thread pool that processes HTTP requests and found out that WL allocates a separate thread to process each web request, and the number of threads is not bounded in the default configuration. Also, our server is configured with Maximum Open Sockets: -1, which means that the number of open sockets is unlimited.
Given this, I'd like to understand where the issue is. Is it on the WL side, or is it a problem in our business logic? Can you help us investigate the situation more deeply?
What more should I check in order to confirm that our WL server is configured to handle as many requests as we need?