I have created a custom work manager targeted to a WebLogic cluster consisting of 4 managed servers. Assume the work manager is configured with a max-threads-constraint of 50 threads.
Does that mean that each of the 4 managed servers runs at most 50 threads for requests dispatched to this work manager, or that all the managed servers together are limited to 50 threads?
Thanks,
Hadi
It means that each of the 4 managed servers runs at most 50 threads for this work manager. Work manager constraints are enforced per server instance, so the cluster as a whole can execute up to 4 × 50 = 200 such threads.
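To make the per-server behaviour concrete, here is a minimal sketch of how application code typically submits work to a named work manager through the CommonJ API. The JNDI name wm/MyWorkManager and the resource reference are assumptions for illustration, not taken from your configuration; whichever managed server runs this code applies its own local 50-thread limit.

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

public class DispatchExample {
    public void dispatch() throws Exception {
        // Hypothetical JNDI name; the work manager would be exposed as a
        // resource-ref (wm/MyWorkManager) in the deployment descriptors.
        WorkManager wm = (WorkManager) new InitialContext()
                .lookup("java:comp/env/wm/MyWorkManager");

        // schedule() hands the Work to the local server's work manager, where
        // the max-threads-constraint of 50 is enforced per managed server --
        // hence the 4-node cluster can run up to 200 such threads in total.
        wm.schedule(new Work() {
            public void run() { /* handle the request */ }
            public void release() { /* called if the server asks the work to stop */ }
            public boolean isDaemon() { return false; }
        });
    }
}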
We have built an ASP.NET MVC4 application and deployed it on IIS 8.5.
We updated the queue length setting to 5000 in the application pool, and made the same change in the aspnet.config file of the framework (C:\Windows\Microsoft.NET\Framework64\v4.0.30319).
Still, at most 100 users are served concurrently and the rest are queued.
The server has a 4-core processor, 8 GB RAM, and a 64-bit OS.
I need help fixing this problem; many thanks in advance. Images of all the configuration settings are attached.
The goal is to allow the maximum possible number of users to log in and be served concurrently.
I suggest you run the command below to raise the appConcurrentRequestLimit attribute of the serverRuntime section in the applicationHost.config file:
c:\windows\system32\inetsrv\appcmd.exe set config /section:serverRuntime /appConcurrentRequestLimit:100000
However, I would like your recommendation on two options we have: (1) upgrade the existing server from 4 cores / 8 GB RAM to 6 cores / 16 GB RAM, or (2) keep separate servers, one for IIS and one for SQL Server, each with the same configuration (4 cores / 8 GB RAM). Which option would be preferable?
In my opinion, this depends on your application's performance profile. I think you should analyse SQL Server's performance first: if the SQL queries consume a large share of the server's resources and that is what makes IIS slow to respond, I suggest keeping separate servers.
If SQL Server only uses a small amount of resources, a single server should be enough.
My application runs on 10 servers and I use Infinispan to manage the cache on those 10 servers. Currently Infinispan is configured on all 10 of them. I wish to restrict the Infinispan instances to just 4 servers instead of the current 10. The number of servers is not changing and remains fixed at 10.
I also wish to use JGroups, which is part of the Infinispan package, to replicate the cache data across the 4 Infinispan instances.
Can someone help me understand how this can be done?
You have to set up a multicast address in your JGroups XML configuration file (mcast_addr and mcast_port). Make sure the 4 servers that should form the cache cluster share the same multicast address and port, and give the other 6 a different address.
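As a rough illustration of the Infinispan side (not part of the original answer), the 4 cache-enabled servers could each start a clustered cache manager pointed at that JGroups file, while the other 6 simply never start one. The cluster name app-cache, the file name jgroups-udp-cache.xml, and the cache name appCache are made-up example values.

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class CacheNode {
    public static void main(String[] args) {
        // Point the transport at a JGroups stack whose mcast_addr/mcast_port
        // are shared only by the 4 servers that should hold the cache.
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport()
              .clusterName("app-cache")                                    // assumed cluster name
              .addProperty("configurationFile", "jgroups-udp-cache.xml");  // assumed file name

        // Replicate every entry synchronously to all members of the cluster.
        ConfigurationBuilder replicated = new ConfigurationBuilder();
        replicated.clustering().cacheMode(CacheMode.REPL_SYNC);

        DefaultCacheManager manager = new DefaultCacheManager(global.build());
        manager.defineConfiguration("appCache", replicated.build());       // assumed cache name
        manager.getCache("appCache").put("greeting", "hello");             // visible on all 4 nodes
    }
}

With Infinispan's bundled UDP stack, the multicast address and port can usually also be overridden at startup via the jgroups.udp.mcast_addr and jgroups.udp.mcast_port system properties instead of editing the XML.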
I want information on the WSO2 ESB clustering system requirements for production deployment on Linux.
I went through the following link: ESB clustering.
I understand that more than one copy of the WSO2 ESB would be extracted and set up on a single server for the worker nodes, and similarly on the other server for the manager (DepSync and admin) and worker nodes.
Can someone suggest what the system requirements of each server would be in this case?
The system prerequisites link suggests:
Memory - 2 GB, with 1 GB heap size
Disk - 1 GB
presumably for a single ESB instance (worker or manager node).
Thanks in advance,
Sai.
As a minimum, the requirement would be 2 GB for the ESB worker JVM, plus appropriate memory for the OS (assume 2 GB for Linux in this case), which comes to 4 GB minimum per server. Of course, based on the type of work done and the load, this requirement may increase.
The worker/manager separation is a separation of concerns. Hence, in a typical production deployment you might have a single manager node (same specs) and 2 worker nodes, where only the worker nodes handle traffic.
Foreword: I'm using Java 6u45, WebLogic 10.3.6, and Ubuntu Desktop 14.04 64-bit.
I just started as a student assistant at one of my state's IT offices. On my first day I was tasked with testing WebLogic on Ubuntu (Windows isn't case sensitive, which caused issues later because WebLogic is...). I started messing around with clustering, and now my setup is as follows:
1 Ubuntu machine
1 domain
6 servers: Admin server, wls1-4, and wlsmaster (wlsmaster was supposed to be what wls1 and wls2 reported to within the cluster because I set the cluster to be unicast, but that's a secondary question for now).
2 clusters: cluster1 and cluster2. wls1, wls2, and wlsmaster are on cluster1. wls3 and wls4 are on cluster2.
Given my setup, do I even need to use the node manager, since I'm only using one physical machine? Secondary question: if I want to use unicast, how do I set the master? $state uses unicast for what few WebLogic servers we have, so I was told to check that out.
A few things:
No, you don't necessarily have to use a node manager, but it will make your life easier. When you log into the WebLogic admin console and attempt to start one of your servers, e.g. wls1-4, the Admin Server talks to the node manager to start them. Without a node manager you will have to start each server individually using the startManagedWebLogic.sh script, which gets very annoying if you need to bring servers up and down often.
With regard to unicast, it is pretty easy to set up (we just leave all the default values alone). Here is the pertinent info from the Oracle docs:
"Each of the Managed Servers in a WebLogic Server cluster has a name. For unicast clusters, WebLogic Server reads these Managed Server names and then sorts them into an ordered list by alphanumeric name. The first 10 Managed Servers in the list (up to 10 Managed Servers) become the first unicast clustering group. The second set of 10 Managed Servers (if applicable) becomes the second group, and so on until all Managed Servers in the cluster are organized into groups of 10 Managed Servers or less. The first Managed Server for each group becomes the group leader for the other (up to) nine Managed Servers in the group."
So you will want to name your master servers in such a way that they sort first alphanumerically in the cluster. That said, for your use case I doubt you need those master servers at all. Just have 2 clusters, one with wls1-2 and one with wls3-4.
The Rails application I'm currently working on is hosted on Amazon EC2 servers. It uses Resque for running background jobs, and there are 2 such instances (a would-be production and a stage). I've also mounted the Resque monitoring web app at the /resque route (on stage only).
Here is my question:
Why are there workers from multiple hosts registered in my stage system, and how can I avoid this?
Some additional details:
I see workers from what appear to be 3 different machines, but I only managed to identify 2 of them: the stage (obviously) and the production. The third has a different address format (it starts with domU) and I have no clue what it could be.
It looks like you're sharing a single Redis server across multiple Resque server environments.
The best way to do this safely is to use separate Redis servers, or separate Redis databases or namespaces. The redis-namespace gem can be used with Resque to isolate each environment's Resque queues and worker data.
I can't really help you with what the unknown one is, but I had something similar happen when moving hosts and having DNS names change. The only way I found to clear out the old ones was to stop all workers on the machine, fire up IRB, require 'resque', and look at Resque.workers. This lists all the workers Resque knows about, which in your case will include about 20 bogus ones. You can then do:
Resque.workers.each { |worker| worker.unregister_worker }
This should prune all the not-really-there workers and get you back to a proper display of the real workers.