I'm trying to configure a cluster of ActiveMQ (v5.12.1) instances in master-slave mode using a shared file system.
Those instances run on different servers with this configuration:
<persistenceAdapter>
  <kahaDB directory="/sharedDir/data/activemq/kahadb"/>
</persistenceAdapter>
The problem is that both instances start as master, because the lock file mechanism doesn't seem to be working.
I've made a test with 2 instances on the same server, and with this configuration it works as expected (the 2nd instance can't acquire the file lock and waits as slave).
We are using NFSv4, and the mount options are as follows:
x.x.x.52:/mnt/data on /mnt/nfs type nfs4 (rw,relatime,vers=4.0,rsize=8192,wsize=8192,namlen=255,hard,proto=tcp,port=0,timeo=14,retrans=2,sec=sys,clientaddr=x.x.x.167,local_lock=none,addr=x.x.x.52)
bindfs on /sharedDir type fuse.bindfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other)
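For reference, this is what an explicit kahaDB locker configuration would look like, as far as I understand it (a sketch; the interval values are illustrative):

<persistenceAdapter>
  <kahaDB directory="/sharedDir/data/activemq/kahadb" lockKeepAlivePeriod="5000">
    <locker>
      <!-- shared-file-locker is the default kahaDB locker, made explicit here;
           lockAcquireSleepInterval is how long a slave sleeps between lock attempts -->
      <shared-file-locker lockAcquireSleepInterval="10000"/>
    </locker>
  </kahaDB>
</persistenceAdapter>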
Thank you
I am setting up a RabbitMQ single-node container built from a Docker image. The image is configured to persist to an NFS-mounted disk.
I ran into an issue when the image is restarted: every time the node restarts it gets a unique name, and the restarted node then searches for the old nodes it reads from the cluster_nodes.config file.
Error dump shows:
Error during startup: {error,
{failed_to_cluster_with,
[rabbit@9c3bfb851ba3],
"Mnesia could not connect to any nodes."}}
How can I configure my image to use the same name each time it is restarted, instead of the arbitrary node name given by the Kubernetes cluster?
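One approach, sketched below, is to pin both the pod hostname and the RabbitMQ node name; the names (rabbitmq-node, etc.) are illustrative, and a StatefulSet, which gives each pod a stable network identity, would be the more idiomatic fix:

apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq
spec:
  hostname: rabbitmq-node            # stable hostname instead of the generated one
  containers:
  - name: rabbitmq
    image: rabbitmq:3
    env:
    - name: RABBITMQ_NODENAME        # fixed Erlang node name, so Mnesia sees the same node after a restart
      value: "rabbit@rabbitmq-node"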
The "Create Cluster Configuration" button is not working from the web console https://console.gridgain.com/configuration/overview.
Moreover, when I open console.gridgain.com in my browser, I am getting the error below:
Failed to load clusters: Cannot start/stop cache within lock or transaction [cacheNames=ClusterCache, operation=dynamicStartCache]
I think this means you have tried to use getOrCreateCache from within an Apache Ignite transaction.
I recommend getting all of your caches before you start a transaction, as in the sketch below. Maybe there's something else, but you would need to share more details.
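A minimal sketch of that ordering, reusing the cache name from your error message (everything else is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class CacheBeforeTx {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Create/get the cache BEFORE any transaction is started.
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("ClusterCache");
            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // required for transactional ops
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);

            // Only then open the transaction and work with the already-created cache.
            try (Transaction tx = ignite.transactions().txStart()) {
                cache.put(1, "value");
                tx.commit();
            }
        }
    }
}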
It seems the GridGain Ignite team has made a fix, and it is now resolved.
In the Knox config file in Ambari we have defined:
<url>http://{{namenode_host}}:{{namenode_http_port}}/webhdfs</url>
The problem is we have 2 namenodes, one active and one passive for high availability. Our active namenode01 failed, so namenode02 became active.
This caused problems for a lot of scripts, as they were hardcoded to point to namenode01. So we used a command in a terminal (not Ambari) to fail namenode02 back over to namenode01.
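The failover was done with something like this (the service IDs nn1/nn2 are illustrative; the real ones come from dfs.ha.namenodes.* in hdfs-site.xml):

hdfs haadmin -getServiceState nn1   # confirm which NameNode is active
hdfs haadmin -getServiceState nn2
hdfs haadmin -failover nn2 nn1      # fail over from nn2 (active) back to nn1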
Now the macro {{namenode_host}} is defined as namenode02, not namenode01.
So, where is {{namenode_host}} defined?
Or do we need to fail over from namenode01 to namenode02, then fail over again to namenode01 using Ambari, so that the macro gets updated?
If we need to fail over the namenode using Ambari, I'm assuming we need to select the "Restart" option? There isn't a direct failover command.
See issue here:
https://issues.apache.org/jira/browse/AMBARI-12763
This was committed to Ambari to support HA mode for Knox. However, if you're still looking for the location, take a look at the file that's edited in that patch: it is where the macros are set. You'll have to find it on your local machine, though.
It should be something like params_linux.py.
I am currently working with replicated ZooKeeper servers on the same machine; for instance, I am now running 3 servers. Now I want to add 2 more servers. How do I add them dynamically?
This is my zoo.cfg file:
tickTime=2000
dataDir=F:\zookeeper
clientPort=2182
initLimit=5
syncLimit=2
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
Now I want to add two more servers dynamically.
Since version 3.5.0, ZooKeeper has had a Dynamic Reconfiguration feature, which allows servers to be added or removed with the entire process orchestrated via ZooKeeper itself; see https://zookeeper.apache.org/doc/r3.5.3-beta/zookeeperReconfig.html and the sketch below. From my experience, reconfiguring an ensemble on version 3.4.6 can get very strange.
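A sketch of what that looks like from zkCli.sh, reusing the quorum/election ports from the zoo.cfg above (the client ports after the ';' are illustrative, and reconfig must be enabled on the ensemble):

reconfig -add server.4=localhost:2891:3891;2183
reconfig -add server.5=localhost:2892:3892;2184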
server.4=localhost:2891:3891
server.5=localhost:2892:3892
Add the lines above to your cfg file. Note that each new server also needs its own dataDir with a matching myid file, and on pre-3.5 versions the ensemble needs a rolling restart to pick up the change.
For details check: what is zookeeper port and its usage?
I have configured a cluster with two instances on GlassFish 3.1.1 and iPlanet Web Server as a load balancer (on the same machine). For the test application provided with GlassFish everything works OK (and this application has session replication enabled).
But when I try to make my own application work, the following happens: it responds when I send requests to the ports of the particular instances (that is, 28080 and 28081), but when I send a request through the load balancer (port 81) I get error 404. My application does not have session replication enabled yet, but it can at least accept a connection and create two separate sessions, one per instance. I would like to get a similar effect through the load balancer.
So I would like to determine:
Is session replication strictly required for the load balancer to work correctly?
Does anyone know any other possible causes of this error?
Message from iPlanet log:
[23/Aug/2012:05:44:16] failure ( 4120) myHost: for host 127.0.0.1 trying to GET /myApp/login.jsp, service-j2ee reports: PWC6117: File "c:/webserver7/https-myHost/docs/myApp/login.jsp" not found
Additional observations (81 is the http-listener port on iPlanet):
When I send GET http://localhost:81/testApp, the load balancer passes it to GlassFish and the correct site is returned. But when I try the same with my own application, GET http://localhost:81/myApp, iPlanet looks for the site in its own resources (the docs directory, as in the log above).
A fragment of myHost-obj.conf:
<Object name="default">
AuthTrans fn="match-browser" browser="*MSIE*" ssl-unclean-shutdown="true"
NameTrans fn="name-trans-passthrough" name="lbplugin" config-file="C:/WebServer7/https-myHost/config/loadbalancer.xml"
NameTrans fn="assign-name" name="perf" from="/.perf"
NameTrans fn="ntrans-j2ee" name="j2ee"
NameTrans fn="pfx2dir" from="/mc-icons" dir="C:/WebServer7/lib/icons" name="es-internal"
PathCheck fn="uri-clean"
PathCheck fn="check-acl" acl="default"
PathCheck fn="find-pathinfo"
PathCheck fn="find-index-j2ee"
PathCheck fn="find-index" index-names="index.html,home.html,index.jsp"
ObjectType fn="type-j2ee"
ObjectType fn="type-by-extension"
ObjectType fn="force-type" type="text/plain"
Service method="(GET|HEAD)" type="magnus-internal/directory" fn="index-common"
Service method="(GET|HEAD|POST)" type="*~magnus-internal/*" fn="send-file"
Service method="TRACE" fn="service-trace"
Error fn="error-j2ee"
AddLog fn="flex-log"
</Object>
First, if you are running the Load Balancer plugin, then you may have a support contract (a GlassFish license is required before you put the plugin into production). If so, calling support is a good option.
To answer your first question, session replication is not required for the Load Balancer to work.
As a shameless plug, I have a 5-part YouTube series on setting this up. You can skip the videos on downloading and installing and go straight to setup/configuration/testing. Based on what you describe, I suspect the issue isn't the plugin itself but the loadbalancer.xml configuration. Look at loadbalancer.xml and see whether myApp is configured; a sketch of what that looks like is below.
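For example, a rough sketch of the relevant part of loadbalancer.xml (names, ports, and timeouts are illustrative; the key point is a web-module entry whose context-root matches your application):

<cluster name="cluster1" policy="round-robin">
  <instance name="instance1" enabled="true" disable-timeout-in-minutes="60" listeners="http://localhost:28080"/>
  <instance name="instance2" enabled="true" disable-timeout-in-minutes="60" listeners="http://localhost:28081"/>
  <web-module context-root="myApp" enabled="true" disable-timeout-in-minutes="30" error-url=""/>
  <health-checker url="/" interval-in-seconds="10" timeout-in-seconds="30"/>
</cluster>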
Hope this helps.