I have configured a WebLogic cluster that consists of two servers configured as migratable targets. This way I can use WLST to migrate the services that run on one of the servers to the other with the command `migrate('serverX', 'serverY')`.
But before running the migrate command, I'd like to check whether each migratable target is running on its preferred server, so that I run migrate only when needed.
Does anyone know how to check it?
Regards
You can definitely do this with WLST; here are some steps:
connect('weblogic','weblogic','http://myserver:7701')
cd('MigratableTargets')
ls() #this will list out all migratable objects
cd('<migratable name>')
ls('UserPreferredServer')
ls('HostingServer')
That will list your preferred server and the server that is currently hosting the target. You can use the current management object, cmo, to check whether they differ:
cd('<migratable name>')
if cmo.getUserPreferredServer() != cmo.getHostingServer():
    ...
    migrate('serverX', 'serverY')
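Putting it together, here is a minimal sketch; the target and server names are placeholders, and it assumes the attributes above return server MBeans (so comparing names is the safest test):
connect('weblogic', 'weblogic', 'http://myserver:7701')
# 'MyMigratableTarget' is a placeholder -- use a name that ls() printed
cd('MigratableTargets/MyMigratableTarget')
preferred = cmo.getUserPreferredServer()
hosting = cmo.getHostingServer()
# Migrate only if the target is NOT running on its preferred server
if hosting is None or preferred.getName() != hosting.getName():
    migrate('MyMigratableTarget', preferred.getName())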
You can see some of the calls that are available in the Oracle WebLogic API docs.
I have a three-node GridGain cluster, and I am also running the GridGain Web Console agent and the Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is that I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a Web Console cluster for high availability.
Can someone please guide me on how I can form a cluster for the Web Console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at the documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that serves as the agent's connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list in the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
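For reference, the agent's default.properties would then look something like this (the token and addresses below are placeholders); giving each agent a copy of the same file is also how the tokens value is shared between agents:
# placeholders -- substitute your own token, console address, and node addresses
tokens=1a2b3c4d-your-token-here
server-uri=http://your-console-host:3000
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080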
Hope this helps.
What is the best way to monitor Mule ESB instances? Is there a way I can get alerted when a Mule instance goes down for some reason? I have 4 instances of Mule running; how will I come to know if one of them goes down?
Thanks!
I assume you are running Community Edition? (Enterprise Edition provides a Management Console which allows you to define alerts.) If you are using CE, you can enable JMX monitoring on the instances and then use one of many ways to verify, based on the JMX info, whether your server is running. One way is to write your own application that retrieves JMX data programmatically and acts accordingly.
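As a rough sketch of that approach, the following Jython script (Jython can call the JDK's javax.management API directly) connects to a JMX agent and reads an attribute; the host, port, and alerting action are assumptions, and it presumes remote JMX is enabled without authentication, as in the wrapper.conf answer below:
from javax.management import ObjectName
from javax.management.remote import JMXConnectorFactory, JMXServiceURL

# Placeholder host/port -- must match the JMX settings of the Mule instance
url = JMXServiceURL('service:jmx:rmi:///jndi/rmi://127.0.0.1:10055/jmxrmi')
try:
    connector = JMXConnectorFactory.connect(url)
    connection = connector.getMBeanServerConnection()
    # If we can read an attribute, the JVM (and thus the Mule instance) is up
    uptime = connection.getAttribute(ObjectName('java.lang:type=Runtime'), 'Uptime')
    print 'Instance is up, uptime (ms):', uptime
    connector.close()
except:
    print 'Instance appears to be down -- trigger your alert here'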
HTH
If you are using Mule EE, you can use MMC to monitor all your instances, as Gabriel has already suggested. My suggestion would be to install MMC inside Tomcat on a separate server. This is to ensure that even if your Mule server crashes or goes down, your MMC is still running and can send you alerts about your Mule server's downtime. You can refer to the link below for details on how to set up server down and up alerts.
https://developer.mulesoft.com/docs/display/current/Working+With+Alerts
Additionally, I would recommend using MMC with database persistence to ensure you have the ability to recover the MMC workspace even if your MMC server crashes. You can read about setting up MMC with DB persistence at the link below.
https://developer.mulesoft.com/docs/display/current/Configuring+MMC+for+External+Databases+-+Quick+Reference
If you don't have Mule EE, you may want to explore other tools or custom alerting applications, as suggested by Gabriel.
HTH
You can set up a JMX agent by adding the following lines to your conf/wrapper.conf file:
wrapper.java.additional.19=-Dcom.sun.management.jmxremote
wrapper.java.additional.20=-Dcom.sun.management.jmxremote.port=10055
wrapper.java.additional.21=-Dcom.sun.management.jmxremote.authenticate=false
wrapper.java.additional.22=-Dcom.sun.management.jmxremote.ssl=false
wrapper.java.additional.23=-Djava.rmi.server.hostname=127.0.0.1
Don't forget to change the values accordingly. You can also enable SSL and authentication with a few extra lines.
Once your monitoring platform is set up, you can activate its Java pollers and start the server.
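To quickly verify that the agent is reachable before wiring it into a monitoring platform, you can connect with jconsole, which ships with the JDK (the host and port must match the wrapper.conf values above):
jconsole 127.0.0.1:10055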
I need to customize the Perforce server to achieve the following requirements:
I need a local replica server which gets synced with the main server in a different geographical location. I can have the same time zone settings for the local and main servers.
The client should be able to commit to the replica server.
The replica server will have build capability, as well as a test framework that is run whenever a build is successful.
Once the build and tests are successful, the code should get committed to the main server.
I know that the replica server provided by Perforce is used as a read-only server which can't write to the main server, and that the forwarding replica just forwards commands to the main server.
I can't use a proxy server, as the local server should keep working even when the main server is offline.
Is it possible to do this? Can anyone point me to some articles that would help me set up such a server?
I had asked the same question in the Perforce forum, but the question is still under verification by moderators.
An edge/commit setup may meet your requirements, as an Edge Server handles some local operations associated with workspaces and work in progress.
As well as read-only commands, the following operations can be performed on an Edge Server:
syncing, checking out, merging, resolving, and reverting files
More information about the edge/commit architecture is available here:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.distributed.html
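For orientation, the setup described in that chapter revolves around a handful of commands; the server IDs and addresses below are placeholders, and the manual's remaining steps (service users, seeding the edge from a checkpoint, and so on) still apply:
# On the commit server: create the server specs (each opens a spec form;
# set Services: commit-server and edge-server respectively).
# 'commit-main', 'edge-site1', and 'commit-host:1666' are placeholders.
p4 server commit-main
p4 server edge-site1

# Point the edge at the commit server and have it replicate continuously
p4 configure set edge-site1#P4TARGET=commit-host:1666
p4 configure set edge-site1#startup.1="pull -i 1"

# On the edge machine: stamp the local p4d with its server ID
p4 serverid edge-site1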
You may also want to look at BuildFarm servers:
http://www.perforce.com/perforce/doc.current/manuals/p4dist/chapter.replication.html#DB5-72814
Hope this helps,
Jen!
A build server doesn't allow build workspaces to submit files. If submitting files is required as part of the build process, consider the use of an edge server to support your automated build processes.
With the implementation of edge servers in 2013.2, we now recommend that you use an edge server instead of a build farm server.
Edge servers offer all the functionality of build farm servers and yet offload more work from the main server and improve performance, with the additional flexibility of being able to run write commands as part of the build process.
I have an SSIS data package with a sequence container (and a nested sequence container) that works fine when I set the transaction option to Supported. However, when I set it to Required, it fails. I suspect it's because my data source is on another server. Is the Required transaction option not a possibility when doing a cross-server data flow?
SSIS is compatible with transactions across different data sources; however, as I understand it, they require the use of the MSDTC service. If your data source is not compatible with this, the transaction will fail. If your data source is compatible, i.e. another Windows machine with SQL Server, then check that the service is switched on and configured correctly.
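As a quick check that the service is actually running, you can query it from a command prompt on both machines:
sc query msdtc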
You could also set the TransactionOption of the specific parts of the sequence container to NotSupported to get around it, although I don't know whether that will work for a source.
I've had this in the past. Ensure you have TCP port 135 (RPC) and the program MsDtsSrvr.exe allowed through the Windows Firewall on the server. You can test by temporarily disabling the Windows Firewall on the server and running your SSIS package. If it runs, enable the firewall again and add the rules above.
Hope this helps
I am implementing session replication in my application. This is an old application.
I have made all the changes and now need to test the server switch and confirm that the objects in the session are properly carried over to the other server in the server list.
I have 1 Admin Server and 2 managed servers, so the cluster is made up of the 2 managed servers.
While testing I always have to bounce a server and then test the flow of my application. This process is very time consuming, so I am looking for another way to sway a server in and out of the cluster during runtime. I asked on the Oracle support website, but they said the only way is to bounce the server.
How can I write a script for this?
Is there a parameter in WebLogic or the wlproxy plugin config file that helps with this switch?
Your help is appreciated.
Using the WebLogic Scripting Tool (WLST) in script mode, you can write a script to automate the shutdown/startup of the managed server that you would like to remove temporarily from the cluster.
You create a file with a .py extension which contains the WebLogic commands that you would like to run.
shutdown.py:
connect('username','password','t3://adminIP:port')
shutdown('servername')
disconnect()
startup.py:
connect('username','password','t3://adminIP:port')
start('servername')
disconnect()
To run the script from the command line:
java weblogic.WLST c:\myscripts\shutdown.py
You can put this line in a shell/batch script.
Another way is to write a Java program or an Ant script to invoke the commands using the weblogic.jar file that comes with WebLogic.
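For the Ant route, a minimal sketch would just wrap the same weblogic.WLST invocation; the weblogic.home property and the script path are placeholders:
<target name="shutdown-managed-server">
  <!-- ${weblogic.home} is a placeholder; point it at your WebLogic install -->
  <java classname="weblogic.WLST" fork="true" failonerror="true">
    <classpath path="${weblogic.home}/server/lib/weblogic.jar"/>
    <arg value="c:/myscripts/shutdown.py"/>
  </java>
</target>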
You can also test session replication by changing the state of a WebLogic managed server from RUNNING to ADMIN mode.
You can do this from the admin console by selecting the managed server, going to the Control tab, and changing the state of the server to Admin. You can change it back to Running from the same place.
Using WLST, you can use the suspend and resume commands:
http://docs.oracle.com/cd/E11035_01/wls100/server_start/server_life.html
http://docs.oracle.com/cd/E14571_01/web.1111/e13813/quick_ref.htm
Suspending and resuming managed servers is quicker than shutting them down and restarting them again.
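For example, a minimal WLST session along those lines (the credentials, URL, and server name are placeholders):
connect('username', 'password', 't3://adminIP:port')
suspend('MS1')   # 'MS1' is a placeholder; requests should now fail over to the other managed server
# ... exercise the application and verify the session state survived ...
resume('MS1')
disconnect()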
I have tested this at my end and it works fine, i.e. when I change the state to Admin, my requests go to the other managed server and the session is also replicated.
I used the sample WLS cluster replication example available in the WLS installation.