Can one appendonly.aof file be used by 2 different Redis instances located on different servers?

I have 2 different servers, each running a Redis instance on the same port, and both instances use the same appendonly.aof file, which is stored on a path shared by both servers.
If I add some keys via one machine, they are not reflected on the other machine; if I restart the other server, all the changes are reflected.
Is there any way, or any configuration change, so that whenever changes are made on one machine they are reflected on the other immediately?
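In short: a running Redis instance only reads appendonly.aof at startup and never re-reads it afterwards, which is why the second instance sees the changes only after a restart. The supported way to keep a second instance in sync is native Redis replication rather than a shared AOF file. A minimal sketch, assuming a placeholder hostname server1 and the default port:

# in the second instance's redis.conf
# (replicaof is the Redis 5+ directive; older releases call it slaveof)
replicaof server1 6379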

Related

How to configure Redis clients when connecting to master-replica setup?

I have a Redis setup with 1 master and 2 replicas, so 3 nodes in total.
Given that writes can happen only via master node while reads can happen via all 3 nodes, how do I configure the clients?
Can I pass all the nodes' IPs in the IP list and have the client take care of connecting to the right node by itself?
Or do the clients need to be configured separately for reads and writes?
It depends on the specific client you are using: some clients automatically split read-only and write commands based on the client-connection configuration, while others let you specify the preferred replication target at the command or invocation level.
For example, ioredis handles that automatically through the scaleReads configuration option, while StackExchange.Redis lets you handle it through the CommandFlags enum at the command invocation level: you should really check the documentation of the specific Redis client you want to use.
Finally, redis-cli does not split read-only and write commands; it can connect (via the -c option) to a Redis Cluster and follow slot redirections automatically, but the connection will always be made against a master node, with no read/write splitting.
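To illustrate that last point, the only cluster-awareness redis-cli offers is redirection-following (the address below is a placeholder for any cluster node):

# -c makes redis-cli follow MOVED/ASK redirections between masters;
# it does not route reads to replicas
redis-cli -c -h 10.0.0.1 -p 7000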

How can I configure Apache Zookeeper with redundancy on only two physical frames?

I would like to have a high-availability/redundant installation of Zookeeper running in my production environment. The problem is that I only have 2 physical frames available, so that rules out configuring a Zookeeper cluster/ensemble since I'd only have redundancy if the frame with the minority of servers goes down. What is the best practice in this situation? Is it possible to have a separate standalone install running on each frame connected to the same set of SOLR nodes or to use one server as primary and one as backup?
ZooKeeper requires at least 3 nodes for a fault-tolerant ensemble. In your scenario, if you cannot get another machine, you can set up multiple ZooKeeper nodes on the same machine, in different directories and using different ports.
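A minimal sketch of such a setup, with illustrative paths and ports; each node gets its own dataDir, myid and clientPort, while the server list (with distinct peer and election ports, since the host is shared) is common to all three configs:

# zoo1.cfg (zoo2.cfg and zoo3.cfg differ only in dataDir and clientPort)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/zookeeper/node1
clientPort=2181
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

# each data directory also needs a myid file matching its server.N entry
echo 1 > /var/zookeeper/node1/myid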

Unison sync across more than 2 computers

I am currently using Unison across 2 computers (a server and a laptop). I need to create another connection so that I can periodically back up my data from the server.
laptop <--> server -> backup
Here the connection from the server to the backup can be unidirectional. Is there any way to accomplish this?
This is a very common thing to do. When setting up Unison across multiple machines, you should prefer a star topology. So, if you can, run another instance of Unison, along with any backup-related scripts, on the machine where you are storing your backups. It should look about the same as your setup on your laptop (depending on where your backups are stored).
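A minimal sketch of that backup leg, assuming a profile on the backup machine with placeholder paths and hostname; the force preference makes the sync effectively one-directional, with the server's replica always winning:

# ~/.unison/backup.prf on the backup machine
root = /backup/data
root = ssh://server//srv/data
force = ssh://server//srv/data
batch = true

# run it periodically, e.g. from cron:
unison backup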

init.d values for separate redis-server instances

I need to clear up a concept. I have two Redis servers running on a single VM. Server#1 accepts connections via TCP; server#2 via a UNIX socket. I'm on the cusp of converting the TCP server to UNIX as well.
An excerpt from the init.d script for server#1 is:
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis.conf
NAME=redis-server
DESC=redis-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis-server.pid
The comparable excerpt from the init.d script for server#2 (which has its own config file) is:
DAEMON=/usr/bin/redis-server
DAEMON_ARGS=/etc/redis/redis-2.conf
NAME=redis2-server
DESC=redis2-server
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/redis2-server.pid
Both servers are currently up and running. My question is: how come DAEMON is kept the same for both servers? Why wasn't a separate executable needed?
I configured the two servers using config from various internet forums. While it works, I've failed to understand the significance of the DAEMON value, given that it remains the same for both server instances. Is it because the executable is fed different config files, and thus the same DAEMON is able to handle multiple server instances? Being a beginner, I'd really like some expert opinion on this. Thanks in advance.
Open terminal (or cmd). Now open it again. You have two copies open, but they are both using the same executable.
You're doing the same with Redis: DAEMON just says where to find the program, and since you're happy to use the same version of Redis for both, you can use the same path for both DAEMON values. Each running instance has its own process ID stored in its PIDFILE, which is why the PIDFILE paths need to differ, or the instances will interfere with each other.
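You can see the same idea outside init.d. Using the paths from your question, the one binary is simply started twice with different config files, and each process records its own PID in the pidfile its config names:

/usr/bin/redis-server /etc/redis/redis.conf     # instance 1 (TCP)
/usr/bin/redis-server /etc/redis/redis-2.conf   # instance 2 (UNIX socket)

# two different PIDs, one per running instance
cat /var/run/redis/redis-server.pid /var/run/redis/redis2-server.pid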

Sharing files across weblogic clusters

I have a WebLogic 12 cluster. Files get pushed to it both through HTTP forms and through scp to a single machine in the cluster. However, I need the files on all the nodes of the cluster. I can run scp myself and copy to all parts of the cluster, but I was hoping that WebLogic supported this functionality in some manner. I don't have a disk shared between the machines that would make this easier, nor can I create a shared disk.
Does anybody know?
No. There is no way for WLS to ensure that a file copied to one WLS instance is copied to the others, especially when you are copying it over manually using scp.
Use a shared storage mount if at all possible, so that all managed servers can refer to the same location without the need for scp.
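If a shared mount truly is not available, the usual stopgap is a manual fan-out after each upload; a minimal sketch, with placeholder hostnames and paths:

# push the uploaded file from the receiving node to every other node
for host in node2 node3; do
  scp /u01/uploads/report.pdf "$host:/u01/uploads/"
done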