How to replicate WebLogic admin console parameters to multiple sites - automation

I have WebLogic 12c clusters set up in 10 different destinations. They all have the same setup, and they act as failovers in some cases.
I would like to make a tuning config change in the admin console, such as changing the datasource connection pool size or the thread pool size, for performance.
Is there a way I can automate one config change and apply it to the admin consoles of all the destination clusters, rather than manually accessing each admin console to make the change?
Much appreciated
Thanks

Related

GridGain console load balancing

I have a three-node GridGain cluster, and I am also running the GridGain web console agent and the web console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my web console. The problem is I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a cluster of web consoles for high availability.
Can someone please guide me on how I can form a cluster for the web console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at the documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that serves as the agent's connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list in the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
Hope this helps.

Verify load balancing on Azure Container Service

I am using the Azure Container Service with the Kubernetes orchestrator and have an app deployed on a cluster with 3 nodes. It has 5 replicas. How can I verify load balancing in action, e.g. I want to be able to see that every time I hit the external IP I am routed to a (possibly) different node. Thanks.
The simplest solution is to connect (over SSH, for example) to the 3 nodes and run WinDump there. If everything is working properly, you will be able to see what happens on every node.
Also here is Microsoft documentation for testing a load balancer:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-load-balancer#test-load-balancer
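If you would rather verify it from the client side, a rough sketch like the one below can also work. It assumes the app's response contains something that identifies the replica serving it (for example, the pod hostname), and 52.0.0.1 is a placeholder for your service's external IP:
lb_check.py:
import urllib.request
from collections import Counter

EXTERNAL_IP = "http://52.0.0.1/"   # placeholder - replace with your service's external IP
hits = Counter()

for _ in range(50):
    with urllib.request.urlopen(EXTERNAL_IP) as resp:
        # tally by (truncated) response body; adjust if the identifier is in a header instead
        hits[resp.read().decode().strip()[:60]] += 1

for body, count in hits.most_common():
    print("%3d  %s" % (count, body))
If all 5 replicas receive traffic, you should see several distinct entries in the tally.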
The default load balancer available to your Windows Azure Web and Worker roles is a software load balancer and not very configurable; however, it does work in a round-robin fashion. If you want to test this behavior, this is what you need to do:
1. Create two (or more) instances of your service with RDP access enabled so you can RDP to both instances.
2. RDP into both instances and run NETMON or any other network monitoring solution on them.
3. Now access your Windows Azure web application from your desktop. You need to understand that when a network connection is made from your desktop, the connection stays alive based on network settings (60 seconds by default), so you need to wait until the default timeout has passed before accessing your Windows Azure web application again.
4. When you access your Windows Azure web application again, you can verify that the second time the request went to the next instance. Be sure to wait past the connection timeout, otherwise your request will keep being handled by the same instance.
Note: If you don't want to use RDP, you can also create a test ASP.NET page that writes out something specific to the instance serving the request, which will show you that the page was served by a particular instance. The best way to do this is to read the instance ID as below:
string instanceId = RoleEnvironment.CurrentRoleInstance.Id; // the ID uniquely identifies the current role instance
If you want more control over Windows Azure load balancing, I would suggest using Windows Azure Traffic Manager, which will help you route traffic to your site via a round-robin, performance, or failover-based scenario. More info on using Traffic Manager is in this article.

WebLogic datasource test connection on startup

We have multiple data sources in our WebLogic (10.3.5) instance connecting to different DB servers (we have quite a few DB servers).
If any one of the DB servers is down or a DB password has changed/expired, the whole managed server goes into admin state.
I think this is because WebLogic tries to test the datasources while it is coming up, and since it is unable to initialize a datasource, the server does not start and goes into admin mode.
Is there a way we can disable this feature? Our application has logic to check whether a datasource is active (test connection) before the user starts using that datasource.
I am aware of the WebLogic JMX MBeans that can be used to disable/suspend a datasource, but to do that we would need to write a startup class. Not sure if this works, but if there is a configuration setting we can use instead, we would prefer that.
On the Connection Pool tab for the datasource, set the Initial Capacity to 0. This will stop the initial check and the server should start properly.
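If you prefer to script the change instead of using the console, a minimal WLST sketch along these lines should work (the datasource name MyDataSource, the credentials, and the admin URL are placeholders):
set_initial_capacity.py:
connect('username','password','t3://adminIP:port')
edit()
startEdit()
# navigate to the connection pool parameters of the datasource (names are placeholders)
cd('/JDBCSystemResources/MyDataSource/JDBCResource/MyDataSource/JDBCConnectionPoolParams/MyDataSource')
cmo.setInitialCapacity(0)
save()
activate()
disconnect()
Run it the same way as any other WLST script: java weblogic.WLST set_initial_capacity.py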

How to swap a server in and out of a cluster at runtime

I am implementing session replication in my application. This is an old application.
I made all the changes and now need to test the server switch and confirm that the objects in the session are properly carried over to another server in the server list.
I have 1 admin server and 2 managed servers, so the cluster is made up of the 2 managed servers.
While testing, I always have to bounce a server and then test the flow of my application. This process is very time consuming, so I am looking for another way to swap a server in and out of the cluster at runtime. I asked on the Oracle support website, but they said the only way is to bounce the server.
How can I write a script for this?
Is there a parameter in WebLogic or in the wlproxy plugin config file that helps with this switch?
Your help is appreciated.
Using the WebLogic Scripting Tool (WLST) in script mode, you can write a script to automate the shutdown/startup of the managed server that you would like to temporarily remove from the cluster.
You create a file with a .py extension which contains the WLST commands that you would like to run.
shutdown.py:
connect('username','password','t3://adminIP:port')
shutdown('servername')
disconnect()
startup.py:
connect('username','password','t3://adminIP:port')
start('servername')
disconnect()
To run the script from the command line:
java weblogic.WLST c:\myscripts\shutdown.py
You can put this line in a shell/batch script.
Another way is to write a Java program or an ANT script to invoke the commands using the weblogic.jar file that comes with WebLogic.
Alternatively, if you change the state of a WebLogic managed server from Running to Admin mode, you can also test session replication.
You can do this from the admin console by selecting the managed server, going to the Control tab, and changing the state of the server to Admin. You can change it back to Running from the same place.
Using WLST, you can use the suspend and resume commands:
http://docs.oracle.com/cd/E11035_01/wls100/server_start/server_life.html
http://docs.oracle.com/cd/E14571_01/web.1111/e13813/quick_ref.htm
Suspending and resuming managed servers is quicker than shutting them down and restarting them again.
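For example, a small WLST sketch along the same lines as the shutdown/startup scripts above (the server name, credentials, and admin URL are placeholders):
suspend_resume.py:
connect('username','password','t3://adminIP:port')
suspend('servername')
# run your session replication test against the remaining server here
resume('servername')
disconnect()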
I have tested this at my end and it works fine, i.e. when I change the state to Admin, my request goes to the other managed server and the session is also replicated.
I used the sample WLS cluster replication example available in the WLS installation.

Database datasource binding for multiple apps and instances

We use a single MySQL database for 2 applications. One of these applications has 2 instances.
So we would like to know how CloudBees manages the connection pool across all these apps. I saw in other threads that the default MySQL configuration on CloudBees accepts 20 connections. For the moment, we use an "old" Hibernate configuration with explicit c3p0, but we are concerned that the apps could try to open too many connections to the DB.
If we change the configuration to use the JNDI CloudBees datasource (as described here: https://developer.cloudbees.com/bin/view/RUN/DatabaseGuide), would our apps share the same connection pool? Or at least all instances of each app?
Hope it's understandable. Let me know if not.
Thanks for your help,
Each instance of an app has its own connection pool, so you need to take that into account when sizing the pools. For example, with a 20-connection limit on the database and three running instances in total (two for the first app plus one for the second), each pool should be capped at roughly 6 connections.