Socket exception while Load testing - weblogic

While load testing with JMeter, we receive Non HTTP response code: java.net.SocketException for all requests once peak load is reached.
Here is the server config:
JMeter -> F5 (load balancer) -> 2 legs of Weblogic servers.
What could be the reason for the SocketException?
Any help in this regard is highly appreciated!

This indicates that your server is starting to reject connections, or that timeouts are occurring on the client side.
Either way, it means your server cannot handle the load.
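To see what JMeter is actually reporting: when a server's accept queue is full or nothing is listening, the client's connect fails with java.net.ConnectException, which is a subclass of the java.net.SocketException shown in the JMeter results. A minimal sketch (assuming nothing listens on local port 1):

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RefusedConnectionDemo {
    public static void main(String[] args) {
        // Assumption: no listener on 127.0.0.1:1, so the connect is refused.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("127.0.0.1", 1), 1000);
            System.out.println("connected");
        } catch (ConnectException e) {
            // ConnectException extends java.net.SocketException -- this is
            // the class JMeter surfaces as "Non HTTP response code".
            System.out.println("refused: " + e.getMessage());
        } catch (IOException e) {
            System.out.println("other I/O error: " + e);
        }
    }
}
```

Inspecting the exact exception message (connection refused vs. read timeout) tells you whether the server rejected the connection outright or accepted it and then stalled.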

Related

Sticky sessions with Load Balancer

It would be a great help if you could clarify this, please.
When I use a load balancer and bind a client to a server (via appsession or some other means), if that server goes down, the load balancer redirects the client to another server, and in doing so the whole session is lost. So do I have to write my application in such a way that it stores session data externally, so that it can be shared?
And how useful is a load balancer if a transaction fails halfway because the server becomes unresponsive?
Please let me know, thanks.
There is a difference between the two concepts: session stickiness and session replication.
Session stickiness gives you an assurance that once a request from a client reaches a healthy server, subsequent requests from the same client will be handled by that server. When your server goes down, the stickiness is lost, and new requests go to a different healthy server. Session stickiness is usually offered by the load balancer and your application servers generally do not need to do anything.
Session replication gives you the capability of recovering the session when a server goes down. In the above case, stickiness is lost, but the new server will be able to recover the previous session based on an external session storage, which you will have to implement.
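The external-storage idea can be sketched like this. The store below is an in-memory Map standing in for Redis, a database, or WebLogic's own session replication; the class and method names are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical external session store -- in production this would be
// Redis, a database, or container-provided replication, not a Map.
class SessionStore {
    private final Map<String, Map<String, String>> sessions = new ConcurrentHashMap<>();

    Map<String, String> load(String sessionId) {
        return sessions.computeIfAbsent(sessionId, id -> new ConcurrentHashMap<>());
    }
}

// Each "server" reads and writes session state through the shared store,
// so a failover to another server still sees the same session data.
class AppServer {
    private final SessionStore store;
    AppServer(SessionStore store) { this.store = store; }

    void handlePut(String sessionId, String key, String value) {
        store.load(sessionId).put(key, value);
    }
    String handleGet(String sessionId, String key) {
        return store.load(sessionId).get(key);
    }
}

public class FailoverDemo {
    public static void main(String[] args) {
        SessionStore shared = new SessionStore();
        AppServer server1 = new AppServer(shared);
        AppServer server2 = new AppServer(shared);

        server1.handlePut("sess-42", "cart", "3 items"); // sticky requests hit server1
        // server1 goes down; the load balancer redirects to server2,
        // which recovers the session from the shared store.
        System.out.println(server2.handleGet("sess-42", "cart"));
    }
}
```

The key design point: the servers hold no session state of their own, so stickiness becomes an optimization (fewer store round-trips) rather than a correctness requirement.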

Helidon - ForwardingHandler does not complete RequestContext Publisher if the LoadBalancer cuts the started Expect: 100-continue process

We are providing Helidon MP REST services behind an Apache httpd load balancer. The following constellation causes the JerseySupport service executor queue to get stuck.
A client sends a POST request to our REST service with a JSON payload and an Expect: 100-continue header. The Apache load balancer forwards the request to the backend. The backend accepts the request and starts a JerseySupport runnable which waits for incoming data; the backend then sends a response to the LB to start the stream (response status 100). If the client request exceeds the load balancer's connection timeout at this point, the load balancer cuts the connection to the calling client with a proxy error, but the backend service is never informed and waits forever.
The problem is that io.helidon.webserver.ForwardingHandler only completes the HTTP content publisher if a LastHttpContent message is sent, and this never happens. If the publisher never completes, the subscriber inside the waiting JerseySupport service instance blocks a server executor instance forever. If this happens several times, the whole REST service is blocked.
I could not find a way to configure a corresponding timeout inside Helidon to interrupt the JerseySupport service, nor a way to make the Apache load balancer end the connection to the backend appropriately.
Has anyone noticed similar problems or found a workaround, apart from disabling 100-continue streaming?
Helidon Version: 1.4.4
Apache Version: 2.4.41
Thanks in advance
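For reference, the handshake the question describes can be reproduced with plain JDK sockets. This is a minimal sketch of the Expect: 100-continue exchange, not Helidon code; it shows the point (after the interim 100 response) where the backend blocks waiting for a body that, if the LB has dropped the client, will never arrive:

```java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the Expect: 100-continue handshake over raw sockets.
public class ExpectContinueDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
        Thread backend = new Thread(() -> {
            try (Socket s = server.accept()) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
                OutputStream out = s.getOutputStream();
                String line;
                while ((line = in.readLine()) != null && !line.isEmpty()) { } // headers
                out.write("HTTP/1.1 100 Continue\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                out.flush();
                // The backend now blocks here waiting for the body. If a proxy
                // drops the client at this point, this wait never ends.
                char[] body = new char[4];
                in.read(body, 0, 4);
                out.write("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
                        .getBytes(StandardCharsets.US_ASCII));
                out.flush();
            } catch (IOException ignored) { }
        });
        backend.start();

        try (Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
            OutputStream out = client.getOutputStream();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream(), StandardCharsets.US_ASCII));
            out.write(("POST /svc HTTP/1.1\r\nHost: x\r\nExpect: 100-continue\r\n"
                    + "Content-Length: 4\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();
            System.out.println(in.readLine()); // interim status line
            in.readLine();                     // blank line after interim response
            out.write("{}{}".getBytes(StandardCharsets.US_ASCII)); // send the body
            out.flush();
            System.out.println(in.readLine()); // final status line
        }
        backend.join();
        server.close();
    }
}
```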

Load Balancer with Zookeeper

I'm trying to create a load balancer to sit in front of a ZooKeeper 3.4.6 cluster. With it in place, the cluster works well, but an exception is thrown:
WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
This suggests that ZooKeeper treats the load balancer as a client and tries to establish a connection with it, but the load balancer just pings TCP 2181 and disconnects.
You are trying to use a load balancer between your ZooKeeper cluster and clients?
When you give your clients a ZooKeeper connection string in the form of multiple endpoints like this: "server1,server2,server3...", the clients will pick one of the servers and switch over in case of failure. This way, if all your clients have the same ZooKeeper endpoints string, you will end up with a balanced pool.
If you put a standard load balancer between the clients and the server, it can cause failures like this. A load balancer doesn't play well with the way ZooKeeper expects its clients to behave. A client needs to maintain an open TCP connection to a specific server it has a session on, sending periodic heartbeats.
There are certain limitations to the way ZooKeeper clients load balance themselves (e.g. connections won't rebalance in case of server restarts), but fixing these limitations would require a ZooKeeper protocol aware load balancing logic, probably as part of the client implementation.
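The client-side balancing described above amounts to each client shuffling the shared connect string and walking the list on failure. A sketch of that idea (the real ZooKeeper client library does this internally; the class below is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch of client-side balancing: every client receives the same connect
// string, but each shuffles it independently, spreading sessions across
// the ensemble without any load balancer in between.
public class ConnectStringDemo {
    static List<String> shuffledServers(String connectString, Random rnd) {
        List<String> servers = new ArrayList<>(Arrays.asList(connectString.split(",")));
        Collections.shuffle(servers, rnd); // each client gets its own order
        return servers;                    // try servers[0], fail over to the rest
    }

    public static void main(String[] args) {
        String connect = "server1:2181,server2:2181,server3:2181";
        // Two clients with the same connect string pick servers independently.
        System.out.println(shuffledServers(connect, new Random(1)));
        System.out.println(shuffledServers(connect, new Random(2)));
    }
}
```

On failure the client simply advances to the next entry and re-establishes its session there, which is why an external TCP load balancer adds nothing but breaks the long-lived heartbeat connection.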

jmeter HTTP response code: org.apache.http.conn.HttpHostConnectException,Non HTTP response message: Connection refused Error

I am working with JMeter to test load. I am using an Amazon server. When I test with 400 concurrent users, I get the error message:
HTTP response code: org.apache.http.conn.HttpHostConnectException,Non HTTP response message: Connection refused
Up to 400 threads, requests work fine and we get responses.
We are using XAMPP with the Apache server. Can anyone help me out?
Thanks
Mitesh
Make sure that your Apache is configured to accept as many as 400+ concurrent users. Here are instructions on calculating and setting.
Make sure that your JMeter is configured to produce as many as 400+ concurrent users. Here is the guide on proper JMeter tuning.
Make sure that your Apache and JMeter systems are not overloaded and have enough spare CPU, RAM, Network and Disk IO, disk space, swap, etc. You can use PerfMon JMeter Plugin for monitoring systems health.
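As an illustration of the first point only: for Apache 2.4 with the prefork MPM, raising the worker limits might look like the fragment below. The values are placeholders, not recommendations; size them from available RAM divided by the memory footprint of one Apache process.

```apache
# httpd.conf -- illustrative values only
<IfModule mpm_prefork_module>
    StartServers            10
    MinSpareServers         10
    MaxSpareServers         20
    ServerLimit            450
    MaxRequestWorkers      450   # must cover the 400 JMeter threads plus headroom
    MaxConnectionsPerChild 10000
</IfModule>
```

(On Apache 2.2, the equivalent of MaxRequestWorkers is MaxClients.)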

Weblogic server fails to respond under load

We have a quite strange situation. Under load, our WebLogic 10.3.2 server fails to respond. We are using RestEasy with HttpClient version 3.1 to communicate with a web service deployed as a WAR.
We have a calculation process that runs in 4 containers on 4 physical machines, and each of them sends requests to WebLogic during the calculation.
On each run we see messages from HttpClient like this:
[THREAD1] INFO I/O exception (org.apache.commons.httpclient.NoHttpResponseException) caught when processing request: The server OUR_SERVER_NAME failed to respond
[THREAD1] INFO Retrying request
HttpClient makes several requests until it gets the necessary data.
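The retry behaviour shown in the log can be sketched generically. This is not HttpClient 3.1 code; it is a stand-in illustrating what the default retry handler does when the server fails to respond:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Generic sketch of a retry loop: re-issue the request after an I/O
// failure ("Retrying request" in the HttpClient log), up to a limit.
public class RetryDemo {
    static <T> T withRetries(Callable<T> request, int maxAttempts) throws Exception {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return request.call();
            } catch (IOException e) {
                last = e;
                System.out.println("attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Hypothetical request that fails twice before succeeding,
        // standing in for a call to the WebLogic web service.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new IOException("server failed to respond");
            return "OK";
        }, 5);
        System.out.println(result);
    }
}
```

Note that retries mask the underlying problem: the question below is why the server drops requests in the first place.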
I want to understand why WebLogic can refuse connections. I read about the WebLogic thread pool that processes HTTP requests and found that WebLogic allocates a separate thread to process each web request, and that the number of threads is not bounded in the default configuration. Our server is also configured with Maximum Open Sockets: -1, which means the number of open sockets is unlimited.
Given this, where is the issue? Is it on the WebLogic side, or is it a problem with our business logic? Can you help investigate the situation more deeply?
What else should I check to confirm that our WebLogic server is configured to handle as many requests as we need?