How can I specify the network interface for Redis replication?

I'm using Redis 3.2.0 and have enabled replication, but I get the following result for "info replication":
master_link_status:down
Redis log shows:
Connecting to MASTER master_host:6379
MASTER <-> SLAVE sync started
...
Timeout connecting to the MASTER
Connecting to MASTER master_host:6379
...
Ping and telnet to port 6379 of the master host from the slave host both succeed.
So I suspect the Redis process on the slave host is trying to connect to the master via the wrong network interface (the slave host has multiple network interfaces).
Can I specify which network interface Redis uses for replication?

When Redis connects to the master host, the client socket is bound to the address specified by the first argument of the "bind" parameter.
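For example, a minimal sketch of the slave's redis.conf, assuming 10.0.1.5 is the address of the interface that can reach the master (both addresses here are hypothetical):
# The first "bind" address is also used as the source address for the
# outgoing replication connection to the master
bind 10.0.1.5 127.0.0.1
slaveof 10.0.0.1 6379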

Related

Unable to access Redis (cluster mode enabled) Cluster's Endpoints

I have one VPC containing one EC2 instance (Amazon AMI) and one Redis (cluster mode enabled) cluster with Auth (password) and a security group open to all IPs and ports (only for testing's sake), so it is a very simple setup.
telnet works on port 6379 from my EC2 instance to both:
- the Configuration Endpoint
- each node's Endpoint within each shard
But I am not able to connect to the Redis server using redis-cli (v5.0.4), regardless of which endpoint I use, Config or Node.
Please note: an AWS ElastiCache Redis cluster (cluster mode disabled) or a single-node server provides a Primary Endpoint, which works fine. The problem only occurs when cluster mode is enabled and I get Config/Node endpoints.
Config EndPoint:
[root@ip-xx-xx-xx-xx src]# ./redis-cli -h clustercfg.xxxx.xxxxx.use1.cache.amazonaws.com -p 6379
Node EndPoint:
[root@ip-xx-xx-xx-xx src]# ./redis-cli -h xxxx-0001-0-01.xxxx.xxxxx.use1.cache.amazonaws.com -p 6379
Any help is appreciated!
thanks
After spending a few days on this issue, I found the solution: we need stunnel (or an equivalent tool that creates an SSL tunnel), because redis-cli doesn't support SSL or TLS.
To access data from ElastiCache for Redis nodes enabled with in-transit encryption, you use clients that work with Secure Socket Layer (SSL). However, redis-cli doesn't support SSL or Transport Layer Security (TLS).
To work around this, you can use the stunnel command to create an SSL tunnel to the redis nodes. You then use redis-cli to connect to the tunnel to access data from encrypted Redis nodes.
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/in-transit-encryption.html
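As a rough sketch of that workaround (the stunnel paths and the auth password below are placeholders; the endpoint is the one from the question):
# /etc/stunnel/redis-cli.conf
fips = no
setuid = root
setgid = root
pid = /var/run/stunnel.pid
[redis-cli]
    client = yes
    accept = 127.0.0.1:6379
    connect = clustercfg.xxxx.xxxxx.use1.cache.amazonaws.com:6379
After starting stunnel, point redis-cli at the local end of the tunnel:
./redis-cli -h 127.0.0.1 -p 6379 -a your_auth_password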

activemq master not giving up on network failure

I have an activemq installation with master / slave failover.
Master and Slave are synced using the lease-database-locker
Master and Slave run on 2 different machines and the database is located on a third machine.
Failover and client reconnection work properly on a forced shutdown of the master broker. The slave takes over properly and the clients reconnect thanks to their failover setting.
The problems start if I simulate a network outage on the master broker only. This is done by using an iptables DROP rule for packets going to the database on the master.
The master now realizes that it cannot connect to the database any longer. The slave starts up, since its network connection is still alive.
It seems from the logs that the clients still try to reconnect to the non-responding master.
To my understanding, the master should inform the clients that there is no connection anymore. The clients should fail over and reconnect to the slave.
But this is not happening.
The clients do reconnect to the slave if I re-establish the db connection by re-enabling the network connection to the db for the master. The master then gives up being the master.
I have set a queryTimeout on the lease-database-locker.
I have set updateClusterClients=true for the transport connector.
I have set a validationQueryTimeout of 10s on the db connection.
I have set testOnBorrow for the db connection (a rough sketch of these settings is below).
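For reference, a sketch of how the transport connector and data source settings look in my configuration (the data source bean class and values here are illustrative, not my exact setup):
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true"/>
<bean id="mysql-ds-db01-st" class="org.apache.commons.dbcp.BasicDataSource">
    <!-- validation settings mentioned above -->
    <property name="validationQuery" value="SELECT 1"/>
    <property name="validationQueryTimeout" value="10"/>
    <property name="testOnBorrow" value="true"/>
</bean>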
Is there a way to force the master to inform the clients to failover in this particular case ?
After some digging I found the trick.
The broker was not informing the clients due to a missing ioExceptionHandler configuration.
The documentation can be found here
http://activemq.apache.org/configurable-ioexception-handling.html
I needed to specify
<bean id="ioExceptionHandler" class="org.apache.activemq.util.LeaseLockerIOExceptionHandler">
<property name="stopStartConnectors"><value>true</value></property>
<property name="resumeCheckSleepPeriod"><value>5000</value></property>
</bean>
and tell the broker to use the Handler
<broker xmlns="http://activemq.apache.org/schema/core" ....
ioExceptionHandler="#ioExceptionHandler" >
In order to produce an error on network outages I also had to set a queryTimeout on the lease query:
<jdbcPersistenceAdapter dataDirectory="${activemq.base}/data" dataSource="#mysql-ds-db01-st" lockKeepAlivePeriod="3000">
    <locker>
        <lease-database-locker lockAcquireSleepInterval="10000" queryTimeout="8" />
    </locker>
</jdbcPersistenceAdapter>
This will produce an SQL exception if the query takes too long due to a network outage.
I did test the network by dropping packages to the database using an iptables rule:
/sbin/iptables -A OUTPUT -p tcp --destination-port 13306 -j DROP
Sounds like your client doesn't have the address of the slave in its URI, so it doesn't know where to reconnect to. The master broker doesn't inform the client where the slave is, as it doesn't know that there are slaves or where they might be on the network, and even if it did, that would be unreliable depending on the conditions that caused the master broker to drop in the first place.
You need to provide the client with the connection information for both the master and the slave in the failover URI.
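For example, a client URI listing both brokers might look like this (hostnames and ports are placeholders):
failover:(tcp://master-host:61616,tcp://slave-host:61616)?randomize=false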

How to connect to Redis Cluster with ServiceStack client (without Sentinel)

I set up a Redis Cluster (ver 3.2.0), not Sentinel, with 4 masters (each with a slave) and a virtual IP randomly pointing to one of the 4 master servers' IPs.
VIP: 10.0.0.10:6379, connecting to M1, M2, M3, M4:
M1: 10.0.0.1:6379 - S1: 10.0.0.5:6378
M2: 10.0.0.2:6379 - S2: 10.0.0.6:6378
M3: 10.0.0.3:6379 - S3: 10.0.0.7:6378
M4: 10.0.0.4:6379 - S4: 10.0.0.8:6378
My client uses ServiceStack to connect to my cluster via VIP: 10.0.0.10:6379, but I get the error:
An exception of type 'ServiceStack.Redis.RedisResponseException' occurred in ServiceStack.Redis.dll but was not handled in user code
Additional information: MOVED 2872 10.0.0.3:6379
My current connection string:
<add key="REDIS_MANAGER" value="redsAuthEnt#10.0.0.10:6379?connectTimeout=10000" />
I think this happens because my ServiceStack connection string connects to a standalone Redis server rather than a Redis Cluster.
It's the same as when we have to use -c with the redis-cli command line.
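For example, with the command line (using the VIP from above) that would be:
./redis-cli -c -h 10.0.0.10 -p 6379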
Please help me craft a connection string for my Redis Cluster using the ServiceStack client, or suggest any other solution for using a Redis Cluster.
ServiceStack.Redis does not support Redis Cluster; you can vote for this feature request on UserVoice.

Configure HAProxy for Redis with different auth keys

I have a Redis cluster of three instances, and the cluster is powered by Redis Sentinel; they are running as [master, slave, slave].
An HAProxy instance is also running to route traffic to the master node; the two slaves are read-only and are used by other applications.
It was very easy to configure HAProxy to select the master node when the same auth key was used for all instances, but now every instance has an auth key different from the others.
listen redis-16
    bind ip_address:6379 name redis
    mode tcp
    default_backend bk_redis_16

backend bk_redis_16
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ auth_key\r\n
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server R1 ip_address:6379 check inter 1s
    server R2 ip_address:6380 check inter 1s
    server R3 ip_address:6381 check inter 1s
So the above configuration works only when we have one password across {R1,R2,R3}. How do I configure HAProxy for different passwords?
I mean, how do I make HAProxy use each server's own auth key, like the following:
R1 : abc
R2 : klm
R3 : xyz
You have two primary options:
Set up an HA Proxy config for each set of servers which have different passwords.
Set up HA Proxy to not use auth but rather pass all connections through transparently.
You have other problems with the setup you list. Your read-only slaves will not have a role of "master". Thus even if you could assign each a different password, your check would refuse the connection. Also, in the case of a partition your check will allow split-brain conditions.
When using HA Proxy in front of a Sentinel-managed Redis pod[1], if you want HA Proxy to figure out where to route connections, you must have HA Proxy check all Sentinels to ensure that the instance it routes to is the one the majority of Sentinels have decided is the master. Otherwise you can suffer from split-brain, where two or more instances report themselves as the master. There is actually a moment after a failover when you can see this happen.
If your master goes down and a slave is promoted, when the master comes back up it will report itself as master until Sentinel detects the master and reconfigures it to be a slave. During this time your HA Proxy check will send writes to the original master. These writes will be lost when Sentinel reconfigures it to be a slave.
For the case of option 1:
You can either run a separately configured instance of HA Proxy or set up front ends and multiple back ends (paired up). Personally I'd go with multiple instances of HA Proxy, as it allows you to manage them without them interfering with each other.
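For example, a minimal sketch of the paired front-end/back-end approach, with one listen section per Redis server and each health check authenticating with that server's own key (the bind addresses and ports are placeholders; the keys are the example ones from the question):

listen redis-r1
    bind haproxy_ip:16379
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ abc\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    server R1 ip_address:6379 check inter 1s

listen redis-r2
    bind haproxy_ip:16380
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ klm\r\n
    tcp-check expect string +OK
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    server R2 ip_address:6380 check inter 1s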
For the case of option 2:
You'll need to glue Sentinel's notification mechanism to HA Proxy being reconfigured. This can easily be done using a script triggered on Sentinel to reach out and reconfigure HA Proxy on the switch-master event. The details on doing this are at http://redis.io/topics/sentinel and more directly at the bottom of the example file found at http://download.redis.io/redis-stable/sentinel.conf
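For example, a sketch of wiring that notification up in sentinel.conf (the master name, address, and script path are placeholders):
sentinel monitor mymaster 10.0.0.1 6379 2
# Run on failover (+switch-master); the script would rewrite and reload the HA Proxy config
sentinel client-reconfig-script mymaster /etc/redis/reconfigure-haproxy.sh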
In a Redis Pod + Sentinel setup with direct connectivity the clients are able to gather the information needed to determine where to connect to. When you place a non-transparent proxy in between them your proxy needs to be able to make those decisions - or have them made for it when topology changes occur - on behalf of the client.
Note: what you describe is not a Redis cluster, it is a replication setup. A Redis cluster is entirely different. I use the term "pod" to apply to a replication based setup.

Can someone explain exactly how BookSleeve and Redis work together and their application in a SignalR app?

We are implementing scale-out for our SignalR app and trying to avoid a single point of failure in our cluster. Thus, more than one Redis message bus server is required.
The problem with implementing Redis Sentinel is that upon fail-over, the client needs to connect to a new endpoint (address), which would require the SignalR application to be restarted (the Redis endpoint is defined in Application_Start()).
Not an option.
I'm trying to understand if/how BookSleeve will help, and would like someone to explain this.
The issue is that we can only have one single endpoint defined for message bus. A hardware solution is not currently an option.
Would the SignalR application connect to a Booksleeve wrapper, which maintains the list of master/slaves?
Another option is using Azure Service Bus. However, the instructions for Wiring Up the Windows Azure Service Bus Provider indicate there are still problems with this:
Note, this web site is an ASP.NET site that runs in an Azure web role.
As of 1.0alpha2 there are some bugs in AzureWebSites due to which ServiceBus scale-out scenarios do not work well. We are working on resolving this for the future.
I don't know the specifics of how SignalR does the connect, but: BookSleeve already offers some concessions towards failover nodes. In particular, the ConnectionUtils.Connect method takes a string (ideal for web.config configuration values etc.), which can include multiple redis nodes, and BookSleeve will then try to locate the most appropriate node to connect to. If the nodes mentioned in the string are regular redis nodes, it will attempt to connect to a master, otherwise falling back to a slave (optionally promoting the slave in the process). If the nodes mentioned are sentinel nodes, it will ask sentinel to nominate a server to connect to.
What BookSleeve doesn't offer at the moment is a redundant connection wrapper that will automatically reconnect. That is on the road-map, but isn't difficult to do in the calling code. I plan to add more support for this at the same time as implementing redis-cluster support.
But: all that is from a BookSleeve perspective - I can't comment on SignalR specifically.
BookSleeve 1.3.41.0 supports Redis Sentinel. The deployment configuration we use: 1 master Redis, 1 slave Redis. Each box runs a Sentinel (one for the master, one for the slave). Clients connect to Sentinel first; Sentinel then redirects them to the active master.
This is how it is implemented in client code:
public class OwinStartup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new WebClientRedisScaleoutConfiguration();
        GlobalHost.DependencyResolver.UseRedis(config);
        app.MapSignalR();
    }
}

public class WebClientRedisScaleoutConfiguration : RedisScaleoutConfiguration
{
    public WebClientRedisScaleoutConfiguration()
        : base(() => getRedisConnection(), WebUIConfiguration.Default.Redis.EventKey)
    { }

    private static BookSleeve.RedisConnection _recentConnection;

    private static BookSleeve.RedisConnection getRedisConnection()
    {
        var log = new TraceTextWriter();
        var connection = BookSleeve.ConnectionUtils.Connect("sentinel1:26379,sentinel2:26379,serviceName=WebClient", log);
        if (connection != null)
        {
            _recentConnection = connection;
            return connection;
        }
        if (_recentConnection != null)
        {
            return _recentConnection;
        }
        // Cannot return null nor throw an exception -- that would break the reconnection cycle.
        return new BookSleeve.RedisConnection(string.Empty);
    }
}
How to configure Redis.
Common steps
Download Redis for windows http://redis.io/download
Unzip to c:\redis
Master (only the very first Redis box; only one such config)
Create the Redis service: execute this command within the redis directory: redis-server --service-install redis.conf --service-name redis
Start the Redis service
Ensure Redis is listening on port 6379
Slave (other boxes)
Update redis.conf: add the line slaveof masterAddr 6379, where masterAddr is the address where Redis in master mode is running and 6379 is the default Redis port.
Create the Redis service: execute this command within the redis directory: redis-server --service-install redis.conf --service-name redis
Start the Redis service
Ensure Redis is listening on port 6379
Sentinel (common for master and slave)
Create file redis-sentinel.conf with content:
port 26379
logfile "redis-sentinel1.log"
sentinel monitor WebClient masterAddr 6379 1
where masterAddr is the address where Redis in master mode is running, 6379 is the default Redis port, and 1 is the quorum (the number of Sentinels that must agree the master is down). WebClient is the group name; you specify it in client code as ConnectionUtils.Connect("...,serviceName=WebClient...").
Create the redis-sentinel service: execute this command within the redis directory: redis-server --service-install redis-sentinel.conf --service-name redis-sentinel --sentinel
Start the redis-sentinel service