WebLogic Performance Tuning - weblogic

We have a problem with WebLogic 10.3.2. We installed a standard domain with default parameters. The domain contains a single managed server, which runs only one web application.
After installation we faced performance problems: sometimes a user waits 1-2 minutes for an application response (for example, a user clicks a button and it takes 1-2 minutes for the GUI to refresh, even though the task is not complicated).
To overcome these performance problems we defined the following parameters under Configuration -> Server Start -> Arguments:
-Xms4g -Xmx6g -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=500
We also changed the application's data source connection pool parameters on the WebLogic side as follows:
Initial Capacity: 50
Maximum Capacity: 250
Capacity Increment: 10
Statement Cache Type: LRU
Statement Cache Size: 50
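For reference, these console settings correspond to a fragment like the following in the data source module XML under config/jdbc/ (a sketch; the data source name is illustrative):

<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>AppDS</name>
  <jdbc-connection-pool-params>
    <initial-capacity>50</initial-capacity>
    <max-capacity>250</max-capacity>
    <capacity-increment>10</capacity-increment>
    <statement-cache-type>LRU</statement-cache-type>
    <statement-cache-size>50</statement-cache-size>
  </jdbc-connection-pool-params>
</jdbc-data-source>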
We run WebLogic on a server with 32 GB RAM and 16 CPUs, and 25% of the machine's resources are dedicated to WebLogic. But we still have performance problems.
Our target is to serve 300-400 concurrent users without the 1-2 minute wait on each application request.
Could defining a work manager solve the performance issue (see the sketch below for what I mean)?
Is my data source or managed bean definition incorrect?
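For reference, my understanding is that an application-scoped work manager with a max-threads constraint is declared in WEB-INF/weblogic.xml roughly like this (a sketch only; the work manager name and thread count are illustrative):

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <work-manager>
    <name>AppWorkManager</name>
    <max-threads-constraint>
      <name>AppMaxThreads</name>
      <count>100</count>
    </max-threads-constraint>
  </work-manager>
  <!-- route this web app's requests through the work manager above -->
  <wl-dispatch-policy>AppWorkManager</wl-dispatch-policy>
</weblogic-web-app>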
Can anyone help me?
Thanks for your replies.

Related

How do I configure Embedded Infinispan to handle K8s rolling updates?

I have a simple project that allows you to add keys to a distributed cache in an application that is running Infinispan version 13 in embedded mode. It is all published here.
I have a Kubernetes setup that can run in minikube. I observe that when I run my example with six pods and perform a rolling update, Infinispan performance degrades from the start of the rollout until four minutes after the last pod has restarted and created its cache. After this time the cluster operates normally again. By degrading I mean that getting the count of items in the cache takes 2-3 seconds to execute, compared to under 0.5 seconds in normal operation. With my setup this happens consistently, and consistently recovers after four minutes.
When running the project on my local machine without a Kubernetes environment, I have not experienced the same kind of delays.
I have tried using TRACE logs, but I can see no event of significance that happens after these four minutes.
Is there something obvious that I'm missing in my configuration of Infinispan (which you can see in my referenced project), or some additional operation that needs to be performed? (Currently I start the cache on startup and stop it on shutdown.)
A colleague found the following log entry when running Infinispan in non-embedded mode:
2022-01-09 14:56:45,378 DEBUG (jgroups-230,infinispan-server-2) [org.jgroups.protocols.UNICAST3] infinispan-server-2: removing expired connection for infinispan-server-0 (240058 ms old) from recv_table
After this log entry the service performance returned to normal. This led us to suspect that JGroups was somehow trying to use stale connections to pods that had been removed. By changing the conn_close_timeout setting on UNICAST3 in JGroups to 10 seconds instead of the default 4 minutes, we confirmed that the degradation now cleared after 10 seconds instead of 4 minutes.
Additionally, this fix only seems to work when the service runs as a StatefulSet, not as a Deployment. I don't have an explanation for exactly why that is, but in conclusion: making the service a StatefulSet and changing conn_close_timeout on UNICAST3 in the JGroups configuration fixed our problem.
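For reference, the change amounts to this fragment of the JGroups stack XML (a sketch; the surrounding protocols are omitted, and 10000 ms corresponds to the 10 seconds mentioned above):

<!-- JGroups protocol stack fragment; other protocols omitted -->
<UNICAST3 conn_close_timeout="10000"/>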

Tomcat API: how to improve performance when clients close the connection after each request?

I have a simple Tomcat API, and my goal is to handle the highest possible number of requests per second.
My problem is the following:
Scenario 1: When the client uses persistent connections, I manage to reach around 20,000 req/sec using a single instance of the API. The server is loaded and its CPU is almost fully used.
Scenario 2: When the client closes the connection after each request, the API only manages 600 req/sec and the server resources are barely used. So I guess there is a bottleneck either on the global number of connections or on the number of connections the server can handle per second.
What I want to know is whether there is a configuration (on Tomcat or on the server) that I can change to improve performance in scenario 2.
If not, which resource is the limiting factor? Could I address the problem by deploying many 1-CPU servers?
What I have looked at so far:
The number of threads and connections in the Tomcat config:
I adjusted these from the defaults to 200 threads and 2000 connections; I don't see any effect in scenario 2.
Ulimit is set to unlimited
The JVM is configured as follows: JAVA_OPTS: -Xmx8g
It would be better if you provided more information about your deployment, but generally there is some work that can help you achieve better performance.
First of all, you should measure the cost of each request and optimize it as much as you can. For example, if your API executes a query on the local database with each request and this query consumes a lot of CPU, you should optimize the query. By doing this, your server can tolerate more requests before its CPU reaches 100%.
Note that tools like JProbe can help you optimize your API.
Secondly, monitor your resources during the test and find which of them becomes fully used. Check network connections, disk, memory, and CPU load during the test, and identify the weak points. Track thread blocks and deadlocks, as they are important to performance.
Based on this information you can scale up your server's resources, or decide to implement a distributed architecture, add a load balancer, or add a caching strategy to your project.
In your Tomcat configuration there are some settings which can improve performance (see the sketch after this list), such as:
Configuring connectors
set maxThreads to a high enough value
set acceptCount to a high enough value
Configuring cache
set cacheMaxSize attribute to the appropriate value.
Configuring content compression
turning content compression on and using GZIP compression
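A sketch of what those settings can look like in conf/server.xml (the values are illustrative starting points, not recommendations; the compressibleMimeType attribute name is for Tomcat 8.5+, older versions spell it compressableMimeType):

<!-- NIO connector; values are illustrative -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="400"
           acceptCount="1000"
           maxConnections="10000"
           connectionTimeout="20000"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/css,application/json"/>
<!-- static-resource cache (Tomcat 8+), placed inside the <Context> element -->
<Resources cacheMaxSize="102400"/>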

Allow 1000+ concurrent users on IIS 8 for ASP.NET MVC web application

We have built an ASP.NET MVC4 application and deployed it on IIS 8.5.
We updated the QueueLength setting to 5000 in the application pool, and also updated the same value in the framework's aspnet.config file (C:\Windows\Microsoft.NET\Framework64\v4.0.30319). Screenshots of the application pool setting and the aspnet.config change are attached.
Still, a maximum of 100 users are served at a time and the rest are queued.
The server configuration is a 4-core processor, 8 GB RAM, and a 64-bit OS.
I need help fixing this problem; many thanks in advance. Images of all the configuration are attached.
I suggest you run the command below to modify the appConcurrentRequestLimit setting in the serverRuntime section of the applicationHost.config file:
c:\windows\system32\inetsrv\appcmd.exe set config /section:serverRuntime /appConcurrentRequestLimit:100000
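Since you also changed the framework's aspnet.config, for completeness this is roughly what the related ASP.NET concurrency settings look like there (a sketch; the values are illustrative and apply to the integrated pipeline on .NET 4.x):

<!-- C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet.config -->
<configuration>
  <system.web>
    <applicationPool maxConcurrentRequestsPerCPU="5000"
                     maxConcurrentThreadsPerCPU="0"
                     requestQueueLimit="5000"/>
  </system.web>
</configuration>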
However, I would like a recommendation from you regarding two options we have: (1) upgrade the existing server from 4 cores / 8 GB RAM to 6 cores / 16 GB RAM, or (2) keep separate servers, one for IIS and one for SQL Server, with the same configuration for both (4 cores / 8 GB RAM). Which option would be preferable?
In my opinion, this depends on your application's performance. I think you should analyze the SQL performance first: if you find that handling the SQL queries takes a lot of the server's resources, or something else causes IIS to take a long time to respond, I suggest keeping separate servers.
If you find that SQL Server takes only a little of the server's resources, I suggest using just one server.

How can I increase the amount of time apache waits before timing out an HTTP request?

Occasionally, when a user tries to connect to the myPHP web interface on one of our web servers, the request times out before they are prompted to log in.
Is the timeout configured on the server side or in the user's web browser?
Can you tell me how to increase the amount of time it waits before timing out when this happens?
Also, what logs can I look at to see why their request takes so long from time to time?
This happens on all browsers. They are connecting to myPHP in a LAMP configuration on CentOS 5.6.
Normally when you hit a limit on execution time with LAMP, it's actually PHP's own execution timeout that needs to be adjusted, since both Apache's default and the browsers' defaults are much higher.
Edit: There are a couple more settings of interest to avoid certain other problems re: memory use and parsing time, they can be found at this link.
Typically speaking, if PHP is timing out on the defaults, you have larger problems than the timeout itself (problems connecting to the server itself, poor coding with long loops).
Joachim is right concerning the PHP timeouts, though: you'll need to edit php.ini to increase PHP's own timeout before troubleshooting anything else on the server. However, I would suggest trying to find out why people are hitting the timeout in the first place.
max_execution_time = 30
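For reference, a sketch of the php.ini directives involved (values are illustrative; max_execution_time defaults to 30 seconds, and the other two are the memory-use and input-parsing settings alluded to above):

; php.ini (values are illustrative)
max_execution_time = 120   ; max seconds a script may run (default 30)
max_input_time = 120       ; max seconds spent parsing request input
memory_limit = 256M        ; per-script memory ceiling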

Help analyzing a GlassFish server hang problem

We are running a GlassFish server with around 20 JAX-WS Metro web services. The server is a Core2Duo with 8 GB RAM. We use a single HTTP listener for all the web services. Development mode is set to true, the request thread count is 2, and the acceptor count is 1.
The minimum and maximum heap sizes are both 1 GB, and PermGen is set to 512 MB.
The services access an Oracle database via a Hibernate layer, and there are many inter-service calls between the services.
The front end is ASP.NET. Our problem is that when 4-5 users access the application simultaneously for some time (about an hour), the GlassFish server hangs with the CPU going to 100%, while memory utilization stays around 10-11%.
We have not been able to find any pointers on how to debug this problem. On some occasions the log file shows java.lang.OutOfMemoryError: PermGen space, but not every time; on many occasions the log file shows no error at all when the server hangs. Also, the GlassFish server does not start if we try to increase the PermGen space. We need some direction on how to diagnose this problem and move towards a solution.
The GlassFish version we are using is v2.1.
We have the following observations:
1. Adding more HTTP listeners (one listener per 4-5 services) does prolong the time to failure, but not by much.
2. When calling some of the heavy services (one operation at a time) with SoapUI, we also get the hang when running many threads simultaneously (e.g. 8-10 threads).
3. We have observed that, when called with SoapUI, a service operation that does not call any other services rarely hangs, while a service that calls other services hangs much more frequently.
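Since the question asks for diagnostic direction: a common first step for a 100%-CPU hang is to capture several thread dumps and compare them (a sketch, assuming the JDK command-line tools are available on the server; <pid> stands for the GlassFish JVM's process id):

# find the GlassFish JVM's process id
jps -l
# take a few thread dumps ~10 seconds apart while the CPU is pegged,
# then look for threads stuck in the same stack across dumps
jstack <pid> > dump1.txt
# alternatively, SIGQUIT writes the dump into GlassFish's server.log
kill -3 <pid>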