WSO2 ESB 5.0.0 threads and classes loaded issue - wso2-esb

I have a simple pass-through proxy in WSO2 ESB 5.0.0 that fronts a WSO2 DSS. When I consume the ESB proxy, the live threads and loaded classes keep increasing until the ESB breaks down; at that point there are 284 threads and about 14k classes loaded. If I consume the DSS directly, it does not break down, and the maximum is 104 threads and about 9k classes loaded.
How can I force the ESB to release those resources, or improve how the ESB handles its HTTP connections? It looks like zombie connections never release their threads.
Any help narrowing down the problem?

It doesn't look like a problem with class loading or thread count. I just finished testing a freshly installed WSO2 ESB server:
WSO2 ESB version 5.0.0
Java 8
Windows 8
The ESB server also has the DSS feature installed.
The DSS service is called over the HTTP/1.1 protocol.
The DSS service has a long-running query (over 10 s).
The total number of simultaneous requests to the ESB service is over 150.
The total number of loaded classes is over 15,000, with over 550 threads running. Even under this high load there is no issue like the one you mention.
What I actually recommend is to check how you make HTTP requests to the ESB service. It is somewhat sensitive to headers like Content-Type and Encoding. It took me quite a long time to figure out how to properly call a SOAP service on the ESB using Apache HttpClient (4.5).
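For illustration, here is a minimal JDK-only sketch of such a call (the proxy URL, namespace, and operation name are made up; the point is the Content-Type and SOAPAction headers, which the ESB is sensitive to):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class EsbSoapCall {
    // Build a SOAP 1.1 envelope around the given body XML.
    static String envelope(String body) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
             + "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soapenv:Body>" + body + "</soapenv:Body></soapenv:Envelope>";
    }

    // The headers the ESB tends to be sensitive to for SOAP 1.1 calls.
    static Map<String, String> headers(String soapAction) {
        Map<String, String> h = new LinkedHashMap<String, String>();
        h.put("Content-Type", "text/xml; charset=UTF-8"); // SOAP 1.1
        h.put("SOAPAction", "\"" + soapAction + "\"");
        return h;
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = envelope("<ns:getData xmlns:ns=\"http://example.org/\"/>")
                .getBytes(StandardCharsets.UTF_8);
        HttpURLConnection c = (HttpURLConnection)
                new URL("http://localhost:8280/services/MyProxy").openConnection();
        c.setRequestMethod("POST");
        c.setDoOutput(true);
        for (Map.Entry<String, String> e : headers("urn:getData").entrySet()) {
            c.setRequestProperty(e.getKey(), e.getValue());
        }
        c.getOutputStream().write(payload);
        System.out.println("HTTP " + c.getResponseCode());
        c.disconnect();
    }
}
```

With Apache HttpClient 4.5 the equivalent is a `StringEntity` with an explicit `ContentType`, plus the `SOAPAction` header set on the `HttpPost`.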
Eventually I probably found the problem. The problem is between the DSS and ESB servers. According to the source code, this kind of error happens when the ESB sends a request to the DSS server and the request is read by the DSS server, but the connection is closed before the DSS server writes its response back to the ESB. The ESB server then reports the problem you mention, in
SourceHandler
...
} else if (state == ProtocolState.REQUEST_DONE) {
isFault = true;
log.warn("Connection closed by the client after request is read: " + conn);
}
It is easy to reproduce: start the ESB and DSS servers, send lots of requests to a pass-through proxy on the ESB (one that proxies to a DSS service), then shut down the DSS server, and you will see a lot of
WARN - SourceHandler Connection closed by the client after request is read: http-incoming-1073 Remote Address
This might be a network issue or a firewall; note also that WSO2 DSS has a socket timeout, which is 180 s by default.
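If the 180 s socket timeout turns out to be the trigger, the matching timeout on the ESB side can be tuned as well. On ESB 5.0.0 the pass-through HTTP transport reads it (as far as I remember; verify the file and default against your installation) from:

```properties
# repository/conf/passthru-http.properties (ESB 5.0.0), value in milliseconds
http.socket.timeout=180000
```

Raising it only papers over the problem if the DSS query genuinely runs longer than the timeout; the real fix is to keep both sides' timeouts consistent.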

Related

Helidon - ForwardingHandler does not complete RequestContext Publisher if the LoadBalancer cuts the started Expect: 100-continue process

We provide Helidon MP REST services behind an Apache httpd load balancer. The following constellation leads to a stuck JerseySupport service executor queue.
A client sends a POST request to our REST service with a JSON payload and an Expect: 100-continue header. The Apache load balancer forwards the request to the backend. The backend accepts the request and starts a JerseySupport runnable that waits for incoming data; the backend then sends a 100 Continue response to the LB to start the stream. If the client request exceeds the load balancer's connection timeout at this point, the load balancer cuts the connection to the calling client with a proxy error, but the backend service is never informed and waits forever.
The problem is that io.helidon.webserver.ForwardingHandler only completes the HTTP content publisher when a LastHttpContent message is sent, and that never happens. If the publisher never completes, the subscriber inside the waiting JerseySupport service instance blocks a server executor thread forever. If this happens several times, the whole REST service is blocked.
I have not found a way to configure a corresponding timeout inside Helidon to interrupt the JerseySupport service, nor a way to get the Apache load balancer to end the connection to the backend appropriately.
Has anyone noticed similar problems or found a workaround, apart from disabling 100-continue streaming?
Helidon Version: 1.4.4
Apache Version: 2.4.41
Thanks in advance
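On the Apache side, one knob that relates to the scenario above is the backend proxy timeout; widening it does not inform the backend when a client is cut, but it does make the window for the stuck 100-continue handshake explicit. A sketch (path, backend URL, and values are examples, not recommendations):

```apache
# httpd.conf / vhost (Apache 2.4): timeout on the proxied backend connection
ProxyTimeout 300
ProxyPass /rest http://backend:8080/rest timeout=300
```

The per-`ProxyPass` `timeout=` parameter overrides the global `ProxyTimeout` for that route only.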

Weblogic 12c HTTP Request call executed several times by thread pools?

Using WebLogic 12c, I have a big problem: when calling a REST service using different clients (a Java client, or the command line with curl), there is no problem.
BUT when the client is in C#, the request is executed several times (once a minute), each time by a different thread of the pool, which produces lots of errors in the log files, because:
The C# client gets its response and closes the connection, but the other 'duplicated' requests (from the thread pool) produce stack traces because the service cannot write a response (there is no client left to receive it).
Some precisions:
- There is no stuck thread in my case.
- Using Tomcat there is no problem.
- The same problem occurs with a freshly installed WebLogic 12c server (so there is no custom configuration).
- The HTTP headers are the same between the C# client and the other clients.
- The same test data is used to reproduce the problem.
Check access.log to see how many requests are actually received by the server; if there are more in the C# case, then the problem is on that side, not the server.
You can also enable HTTP debugging to get more details about the incoming requests.
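As a sketch of that first check, you can count access-log entries for the service path. The log location and the common-log-like format below are assumptions; WebLogic's extended log format is configurable:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

public class AccessLogCount {
    // Count log lines whose request line targets the given path.
    static long countRequests(Path accessLog, String path) throws IOException {
        long n = 0;
        for (String line : Files.readAllLines(accessLog)) {
            if (line.contains("\"POST " + path) || line.contains("\"GET " + path)) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) throws Exception {
        // Sample entries standing in for DOMAIN_HOME/servers/<name>/logs/access.log
        Path log = Files.createTempFile("access", ".log");
        List<String> sample = Arrays.asList(
            "10.0.0.5 - - [01/Jan/2020:10:00:00] \"POST /rest/service HTTP/1.1\" 200 123",
            "10.0.0.5 - - [01/Jan/2020:10:01:00] \"POST /rest/service HTTP/1.1\" 500 0",
            "10.0.0.7 - - [01/Jan/2020:10:02:00] \"GET /other HTTP/1.1\" 200 55");
        Files.write(log, sample);
        // One client call but two entries for the path = duplicated requests.
        System.out.println(countRequests(log, "/rest/service")); // prints 2
    }
}
```

If the count matches what the C# client sent, the duplication happens inside the server; if it is higher, the extra requests arrive over the wire.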

How to Load Balance Mule ESB without using Mule Management Console

I am working with Mule ESB and want to expose it as a service with load balancing, but without using the Mule Management Console (MMC). I also want to avoid putting a single load balancer in front of Mule, because once every request goes through it, the load balancer becomes a single point of failure if it goes down. So I need a use case for exposing Mule as a service with optimized load balancing, without using MMC.
To load-balance incoming HTTP requests over multiple Mule instances, you will need an external load balancer; neither Mule ESB Enterprise Edition nor MMC will help you with that.
You can use a commercial one, such as an F5 BIG-IP, or set up HAProxy. To avoid the load balancer becoming a single point of failure, you can set up a redundant HAProxy pair.
For JMS, make sure to set up an external message broker cluster and connect to it using the normal jms:inbound-endpoint. That way Mule will act as a competing consumer and you will achieve load balancing of messages.
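A minimal sketch of such a competing-consumer flow in Mule 3.x XML (the broker URL, queue, and flow names are invented; deploying the same config to every Mule instance is what gives you the load balancing):

```xml
<!-- Requires the JMS module and an external ActiveMQ broker (example) -->
<jms:activemq-connector name="amq" brokerURL="tcp://broker-host:61616"/>

<flow name="orderConsumer">
    <!-- Every Mule instance consumes from the same queue; the broker
         hands each message to exactly one of them -->
    <jms:inbound-endpoint queue="orders" connector-ref="amq"/>
    <logger level="INFO" message="#[message.payloadAs(java.lang.String)]"/>
</flow>
```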
I would also advise you to have a look at "MuleSoft Blueprint: Load Balancing Mule for Scalability and Availability", which covers this. It is a bit dated, but most of the information in it is still valid.
It's unclear what transport you are using; in any case, you have a limited number of options:
Use the Mule EE clustering feature for the VM transport.
Use a load balancer.
Use a transport that supports competing consumers, such as JMS or AMQP.
Could you provide a more detailed explanation of your deployment so I can give more exact information?

Weblogic server fails to respond under load

We have quite a strange situation on our side. Under load, our WebLogic 10.3.2 server fails to respond. We are using RESTEasy with HttpClient version 3.1 to talk to a web service deployed as a WAR.
What we have is a calculation process that runs in 4 containers on 4 physical machines, each of which sends requests to WebLogic during the calculation.
On each run we see messages from HttpClient like this:
[THREAD1] INFO I/O exception (org.apache.commons.httpclient.NoHttpResponseException) caught when processing request: The server OUR_SERVER_NAME failed to respond
[THREAD1] INFO Retrying request
HttpClient makes several requests until it gets the necessary data.
I want to understand why WebLogic can refuse connections. I read about the WebLogic thread pool that processes HTTP requests and found out that WebLogic allocates a separate thread for each web request, and the number of threads is not bounded in the default configuration. Our server is also configured with Maximum Open Sockets: -1, which means the number of open sockets is unlimited.
Given all this, where is the issue? Is it on the WebLogic side, or is it a problem in our business logic? Can you help us investigate the situation more deeply?
What more should I check to confirm that our WebLogic server is configured to handle as many requests as we throw at it?
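Wherever the fault turns out to be, bounding the client's timeouts and retries makes the failure visible instead of silent. A JDK-only sketch of what HttpClient's retry handler is doing (the URL is a placeholder; with commons-httpclient 3.1 itself the equivalent knobs are `setConnectionTimeout`/`setSoTimeout` on the connection manager params and `DefaultHttpMethodRetryHandler` on the method):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class BoundedRetry {
    // Retry transport-level failures a bounded number of times,
    // roughly what DefaultHttpMethodRetryHandler(3, false) does.
    static int fetchWithRetry(String url, int maxAttempts) throws IOException {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
                c.setConnectTimeout(5000);  // do not queue behind a dead listener
                c.setReadTimeout(30000);    // "failed to respond" surfaces here
                int code = c.getResponseCode();
                c.disconnect();
                return code;
            } catch (IOException e) {
                last = e;                   // log and retry
            }
        }
        throw last;                         // all attempts exhausted
    }

    public static void main(String[] args) throws IOException {
        System.out.println(fetchWithRetry("http://OUR_SERVER_NAME:7001/service", 3));
    }
}
```

If the exceptions persist after all attempts, correlate their timestamps with the WebLogic thread-dump and socket counts on the server.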

WCF Streaming across proxy servers etc

All
Sorry if this is an obvious question, but does WCF streaming work correctly from a client to a web server (using basicHttpBinding) if a proxy server is in the way?
I seem to remember reading that proxy servers can buffer requests until they are complete (which is why a download sometimes doesn't respond for ages and then suddenly completes), and I'm not sure whether this will stop streaming from working correctly.
Thanks
Probably too late for you, but based on my interpretation of the web page below: no, streaming does not work when a proxy server is in the way.
http://msdn.microsoft.com/en-us/library/ms733742.aspx
The decision to use either buffered or streamed transfers is a local decision of the endpoint. For HTTP transports, the transfer mode does not propagate across a connection or to proxy servers and other intermediaries. Setting the transfer mode is not reflected in the description of the service interface. After generating a WCF client to a service, you must edit the configuration file for services intended to be used with streamed transfers to set the mode. For TCP and named pipe transports, the transfer mode is propagated as a policy assertion.
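Concretely, the "edit the configuration file" step from that quote looks roughly like this on the client side (the binding name and size limit are examples):

```xml
<!-- app.config of the generated WCF client -->
<bindings>
  <basicHttpBinding>
    <binding name="streamedHttp"
             transferMode="Streamed"
             maxReceivedMessageSize="67108864" />
  </basicHttpBinding>
</bindings>
```

Because the transfer mode is not part of the service description, the service and every generated client must each opt in to `transferMode="Streamed"` separately.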