WebLogic connection pool - WebLogic 9.x

Whenever I change connection pool parameters (URL, user ID, password, etc.), I try to deploy the updated connection pool in WebLogic using the Control tab. I do a shutdown and start, which IMO should redeploy the new connection pool.
But the changes take effect only when I restart the server. What else needs to be configured to avoid restarting the server?
Thank you!

You need to untarget and then retarget that specific connection pool to the server or cluster; that should be enough.
You don't have to restart the whole WebLogic server.
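If you want to script the untarget/retarget, a minimal WLST sketch along these lines should work (myDS and myServer are placeholder names; adjust for your domain):
retarget.py:
connect('username','password','t3://adminIP:port')
edit()
startEdit()
cd('/JDBCSystemResources/myDS')
cmo.removeTarget(getMBean('/Servers/myServer'))   # untarget the data source
save()
activate()
startEdit()
cmo.addTarget(getMBean('/Servers/myServer'))      # retarget it so the new settings are picked up
save()
activate()
disconnect()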

Some settings require a restart (or redeploy) and some don't.
To find out which options require a restart or redeploy, click [More info ...] next to the option description (or go to
JDBC Data Source -> Configuration -> Connection Pool).
For example, you will find that Maximum Capacity does not require a restart.
But for the Pinned-To-Thread option it says:
Changes take effect after you redeploy the module or restart the server.
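For instance, Maximum Capacity can be changed on the fly with WLST; a minimal sketch (myDS is a placeholder data source name):
maxcap.py:
connect('username','password','t3://adminIP:port')
edit()
startEdit()
cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCConnectionPoolParams/myDS')
cmo.setMaxCapacity(50)   # dynamic attribute: takes effect without a restart
save()
activate()
disconnect()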

Related

WebLogic server goes down when any database glitch occurs

Dear all,
I am facing a problem with my WebLogic 12c server. I have 3 managed nodes and one admin server, and I have configured an Oracle data source. Whenever a network glitch or database downtime occurs, the WebLogic server also goes down completely. Is there any configuration I can make so the server holds on until everything comes back?
The servers try to restart automatically, but they stay in the STARTING state, so I have to start all the servers again, and it takes time. Your suggestions will help me a lot. Thanks in advance.
If you don't want your servers to restart automatically, the server failure action can probably be tweaked:
{managed_server} -> Configuration -> Overload -> Failure Action
However, you should probably be more interested in checking whether you are missing connection timeout configuration for your data source.
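For example, a connection creation retry interval lets the pool keep retrying instead of failing outright while the database is down; a minimal WLST sketch (myDS is a placeholder data source name):
retry.py:
connect('username','password','t3://adminIP:port')
edit()
startEdit()
cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCConnectionPoolParams/myDS')
cmo.setConnectionCreationRetryFrequencySeconds(30)   # retry failed connection creation every 30s
save()
activate()
disconnect()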

WebLogic datasource test connection on startup

We have multiple data sources in our WebLogic (10.3.5) installation connecting to different DB servers (we have quite a few DB servers).
If any one of the DB servers is down, or a DB password has changed/expired, the whole managed server goes into ADMIN state.
I think this is because WebLogic tries to test the data sources while it is coming up, and since it is unable to initialize a data source, the server does not start and goes into ADMIN mode.
Is there a way we can disable this feature? Our application has logic to check whether a data source is active (test connection) before the user starts using it.
I am aware of the WebLogic JMX MBeans that can be used to disable/suspend data sources, but to use them we would need to write a startup class. Not sure if this works, but if there's a configuration setting instead, we would prefer that.
On the Connection Pool tab for the data source, set the Initial Capacity to 0. This stops the initial connection test and the server should start properly.
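If you would rather script the change than use the console, a minimal WLST sketch (myDS is a placeholder data source name):
initialcapacity.py:
connect('username','password','t3://adminIP:port')
edit()
startEdit()
cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCConnectionPoolParams/myDS')
cmo.setInitialCapacity(0)   # no connections created at startup, so a down DB won't block the boot
save()
activate()
disconnect()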

WLST data source testPool - how to set a timeout

How do I set a timeout when testing a data source using WLST?
When I run testPool on a data source, it waits indefinitely, and this also hangs the managed server.
Is there a way to set a timeout?
Set the options below on your WebLogic connection pools, according to your requirements, in the WebLogic console:
•Test Frequency (TestFrequencySeconds in the JDBCConnectionPoolParamsBean): use this attribute to specify the number of seconds between tests of unused connections. WebLogic Server tests unused connections, and closes and replaces any faulty ones. You must also set the Test Table Name.
•Test Reserved Connections (TestConnectionsOnReserve in the JDBCConnectionPoolParamsBean): enable this option to test each connection before giving it to a client. This may add a slight delay to the request, but it guarantees that the connection is healthy. You must also set the Test Table Name.
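If you prefer to script these options, a minimal WLST sketch (myDS is a placeholder; the test SQL assumes an Oracle database):
testsettings.py:
connect('username','password','t3://adminIP:port')
edit()
startEdit()
cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCConnectionPoolParams/myDS')
cmo.setTestFrequencySeconds(120)                  # test unused connections every 120 seconds
cmo.setTestConnectionsOnReserve(true)             # test each connection before handing it to a client
cmo.setTestTableName('SQL SELECT 1 FROM DUAL')    # required by both test options
save()
activate()
disconnect()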

How to swap a server in and out of a cluster during runtime

I am implementing session replication in my application. This is an old application.
I made all the changes and now need to test the server switch and confirm that the objects in the session are properly carried over to the other server in the server list.
I have 1 admin server and 2 managed servers, so the cluster is made up of the 2 managed servers.
While testing I always have to bounce a server and then test the flow of my application. This process is very time consuming, so I am looking for another way to swap a server in and out of the cluster
during runtime. I asked on the Oracle support website, but they said the only way is to bounce the server.
How can I write a script for this?
Is there a parameter in WebLogic or the wlproxy plugin config file that helps with this switch?
Your help is appreciated.
Using the WebLogic Scripting Tool (WLST) in script mode, you can write a script to automate the shutdown/startup of the managed server that you would like to remove temporarily from the cluster.
You create a file with a .py extension containing the WebLogic commands you would like to run.
shutdown.py:
connect('username','password','t3://adminIP:port')
shutdown('servername')
disconnect()
startup.py:
connect('username','password','t3://adminIP:port')
start('servername')
disconnect()
To run the script from the command line:
java weblogic.WLST c:\myscripts\shutdown.py
You can put this line in a shell/batch script.
Another way is to write a Java program or an Ant script that invokes the commands using the weblogic.jar file that comes with WebLogic.
Alternatively, if you change the state of a WebLogic managed server from RUNNING to ADMIN mode, you can also test session replication.
You can do this from the admin console by selecting the managed server, going to the Control tab, and changing the state of the server to ADMIN. You can change it back to RUNNING from the same place.
Using WLST, you can use the suspend and resume commands:
http://docs.oracle.com/cd/E11035_01/wls100/server_start/server_life.html
http://docs.oracle.com/cd/E14571_01/web.1111/e13813/quick_ref.htm
Suspending and resuming a managed server is quicker than shutting it down and restarting it.
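For example, a minimal sketch (the server name and connection details are placeholders):
suspendresume.py:
connect('username','password','t3://adminIP:port')
suspend('MS1')    # moves MS1 to the ADMIN state; requests fail over to the other cluster member
# ... run your session replication test here ...
resume('MS1')     # returns MS1 to RUNNING
disconnect()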
I have tested this at my end and it works fine, i.e., when I change the state to ADMIN, my request goes to the other managed server and the session is also replicated.
I used the sample WLS cluster replication example available in the WLS installation.

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. Messages sent to either server all flow as expected; I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below in weblogic-application.xml:
<start-mdbs-with-application>false</start-mdbs-with-application>
Setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot process.
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
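As a sketch, the MBean attribute can also be set with WLST (myStartupClass is a placeholder for a startup class already registered in the domain):
startupclass.py:
connect('username','password','t3://adminIP:port')
edit()
startEdit()
cd('/StartupClasses/myStartupClass')
cmo.setLoadBeforeAppActivation(true)   # corresponds to Run Before Application Deployments
save()
activate()
disconnect()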
Refer: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This also applies to later versions, including 12c.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in warning state. Once the JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.