I've created a Redis cluster with 3 masters, each with 2 replicas. Which Java client libraries can connect to the cluster (i.e., the set of master nodes), detect a master failover, and then communicate with the newly promoted master?
I used the Jedis client library to connect to the Redis cluster:
private static JedisCluster jedisCluster;

static {
    try {
        Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
        jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7000));
        jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7001));
        jedisClusterNodes.add(new HostAndPort("127.0.0.1", 7002));
        jedisCluster = new JedisCluster(jedisClusterNodes);
    } catch (Throwable t) {
        t.printStackTrace();
    }
}
All clients executing jedisCluster.incr(KEY); through this jedisCluster instance start failing when a master goes down. At the cluster level, one of that master's replicas is promoted to be the new master.
So, where is the auto detection of the new master(s) by the client?
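For reference, Lettuce is one Java client that documents a topology-refresh mechanism meant to follow a promoted replica automatically. A minimal sketch, not taken from my setup (the endpoint and refresh interval are assumptions):

import java.time.Duration;

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class LettuceClusterFailoverSketch {
    public static void main(String[] args) {
        RedisClusterClient clusterClient = RedisClusterClient.create("redis://127.0.0.1:7000");

        // Re-read the cluster topology periodically and whenever the driver
        // sees MOVED/ASK redirects or connection failures, so a promoted
        // replica is picked up without restarting the client.
        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .enableAllAdaptiveRefreshTriggers()
                .build();
        clusterClient.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(refreshOptions)
                .build());

        StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
        connection.sync().incr("KEY"); // routed to whichever node currently owns the slot

        connection.close();
        clusterClient.shutdown();
    }
}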
I have a Redis-consuming application that has previously received keyspace events from a Redis (AWS ElastiCache) cluster with cluster mode turned "OFF". When a value is stored in the Redis key/value store, the application receives notification of the event, fetches the value, and continues execution.
However, this does not happen when the Redis cluster has cluster mode turned "ON". I understand from Redis documentation:
Every node of a Redis cluster generates events about its own subset of the keyspace as described above. However, unlike regular Pub/Sub communication in a cluster, events' notifications are not broadcasted to all nodes. Put differently, keyspace events are node-specific. This means that to receive all keyspace events of a cluster, clients need to subscribe to each of the nodes.
So, I updated the config to include all of the nodes in the cluster:
spring:
redis:
cluster:
nodes:
- my-encrypted-cluster-0001-001.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0001-002.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0001-003.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0002-001.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0002-002.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0002-003.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0003-001.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0003-002.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
- my-encrypted-cluster-0003-003.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
I add them to the JedisConnectionFactory like so:
@Bean
public JedisConnectionFactory jedisConnectionFactory(RedisProperties redisProperties) {
    // Add all 'node' endpoints to the config.
    List<String> nodes = redisProperties.getCluster().getNodes();
    RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration(nodes);
    redisClusterConfiguration.setPassword(redisProperties.getPassword());

    JedisClientConfiguration jedisClientConfig = JedisClientConfiguration.builder()
            .clientName("Encrypted_Jedis_Client")
            .useSsl()
            .and()
            .build();

    JedisConnectionFactory jcf = new JedisConnectionFactory(redisClusterConfiguration, jedisClientConfig);

    Optional.ofNullable(jcf.getPoolConfig()).ifPresent(config -> {
        log.info("Setting max/min idle properties on Jedis pool config.");
        jcf.getPoolConfig().setMaxIdle(30);
        jcf.getPoolConfig().setMinIdle(10);
    });

    return jcf;
}
Then I add the JedisConnectionFactory to the RedisMessageListenerContainer:
@Bean
public RedisMessageListenerContainer container(JedisConnectionFactory connectionFactory) {
    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    return container;
}
My listener class registers itself with the RedisMessageListenerContainer:
@Component
public class MyMessageListener extends KeyspaceEventMessageListener implements MessageListener,
        SubscriptionListener {

    @Autowired
    public MyMessageListener(final RedisMessageListenerContainer listenerContainer) {
        super(listenerContainer);
    }

    @Override
    protected void doRegister(RedisMessageListenerContainer container) {
        container.addMessageListener(this, new ChannelTopic("__keyspace@0__:my.key"));
    }

    @Override
    protected void doHandleMessage(Message message) {
        // Handle the message ...
    }
}
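One thing worth checking when extending KeyspaceEventMessageListener: by default it issues CONFIG SET notify-keyspace-events on startup, and ElastiCache restricts the CONFIG command. A hedged sketch of turning that off, registering the listener as an explicit bean instead of a @Component (notifications must then be enabled through the ElastiCache parameter group):

@Bean
public MyMessageListener myMessageListener(RedisMessageListenerContainer container) {
    MyMessageListener listener = new MyMessageListener(container);
    // An empty config parameter stops the base class from issuing
    // CONFIG SET, which ElastiCache does not permit.
    listener.setKeyspaceNotificationsConfigParameter("");
    return listener;
}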
With this configuration, the consumer application receives keyspace notifications, but not reliably. If the application starts up but does not receive keyspace notifications, it has to be restarted, again and again, until notifications are received - obviously not ideal.
The producer application is able to reliably publish the value. It has a similar configuration, but does not include a listener. I know the value is published because it is visible in the cache when I use RedisInsight to view the key.
So, where does the "clients need to subscribe to each of the nodes" part happen, and how can I prove it is happening?
Why are keyspace notifications received intermittently? Is my consuming application not subscribing to all of the nodes it is given, or is something else going on?
Does Spring Data Redis support listening for keyspace events from a Redis cluster in clustered mode, or do I need to handle this differently?
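One way I can think of to prove the "clients need to subscribe to each of the nodes" part is to bypass Spring and open a PSUBSCRIBE against every node with plain Jedis; a minimal sketch (this glosses over the TLS and auth my cluster needs, and the channel pattern is a placeholder):

// Keyspace events are node-local in cluster mode, so each node gets its
// own blocking PSUBSCRIBE on a dedicated connection and thread.
for (String node : redisProperties.getCluster().getNodes()) {
    String[] hostPort = node.split(":");
    new Thread(() -> {
        try (Jedis jedis = new Jedis(hostPort[0], Integer.parseInt(hostPort[1]), true)) {
            jedis.psubscribe(new JedisPubSub() {
                @Override
                public void onPMessage(String pattern, String channel, String message) {
                    System.out.println(channel + " -> " + message);
                }
            }, "__keyspace@0__:*");
        }
    }).start();
}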
Thanks for your help!
I have started an Ignite server, and my app as a client node, using the following configuration:
public IgniteConfigurer config() {
    return cfg -> {
        // The node will be started as a client node.
        cfg.setClientMode(true);
        // Peer class loading is disabled; custom Java logic will not be transferred over the wire.
        cfg.setPeerClassLoadingEnabled(false);
        // Setting up an IP Finder to ensure the client can locate the servers.
        final TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setAddresses(Arrays.asList(ip));
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
        // Cache metrics log frequency; if 0, metrics logging is disabled.
        cfg.setMetricsLogFrequency(Integer.parseInt(cacheMetricsLogFrequency));
        // Setting up the storage configuration.
        final DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        storageCfg.setStoragePath(cacheStorage);
        // Setting up the data region for storage.
        final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
        defaultRegion.setName(cacheDefaultRegionName);
        // Sets the initial memory region size. When the used memory exceeds this value, new chunks of memory are allocated.
        defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
        storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
        cfg.setDataStorageConfiguration(storageCfg);
        cfg.setWorkDirectory(cacheStorage);
        final TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
        // Sets the message queue limit for incoming and outgoing messages.
        communicationSpi.setMessageQueueLimit(Integer.parseInt(cacheTcpCommunicationSpiMessageQueueLimit));
        cfg.setCommunicationSpi(communicationSpi);
        final CacheCheckpointSpi cpSpi = new CacheCheckpointSpi();
        cfg.setCheckpointSpi(cpSpi);
        final FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
        // Execute all jobs sequentially by setting the parallel job number to 1.
        colSpi.setParallelJobsNumber(Integer.parseInt(cacheParallelJobs));
        cfg.setCollisionSpi(colSpi);
        // Set a failure handler so the node stops cleanly if the Ignite server stops/starts.
        cfg.setFailureHandler(new StopNodeFailureHandler());
    };
}
Everything was working fine. Then I stopped the Ignite server and restarted it. After the restart, when I perform any cache operation I get an error like:
Caused by: class org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to perform cache operation (cache is stopped): mycache1
... 63 more
But when I look at the Ignite server logs, they show that the client is connected. See the logs below:
[17:25:41] ^-- Baseline [id=0, size=1, online=1, offline=0]
[17:25:42] Topology snapshot [ver=2, locNode=ea964803, servers=1, clients=1, state=ACTIVE, CPUs=8, offheap=6.3GB, heap=4.5GB]
[17:25:42] ^-- Baseline [id=0, size=1, online=1, offline=0]
So why is the application, which is running as a client node, not allowed to perform any cache operation?
It looks like you are creating your "mycache1" inside the default data region, which is not configured to be persistent. That is, you first mark the default region as persistent:
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
storageCfg.setStoragePath(cacheStorage);
But further down you re-create the default region configuration without calling setPersistenceEnabled:
final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
defaultRegion.setName(cacheDefaultRegionName);
// Sets initial memory region size. When the used memory size exceeds this value, new chunks of memory will be allocated
defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
So you need to enable persistence on the defaultRegion you actually pass to storageCfg.setDefaultDataRegionConfiguration(defaultRegion), and then I think you won't see the CacheStoppedException anymore.
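In other words, set the flag on the region you actually install (a corrected fragment of the code above):

final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
defaultRegion.setName(cacheDefaultRegionName);
defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
// Persistence must be enabled on the region that replaces the default one.
defaultRegion.setPersistenceEnabled(true);
storageCfg.setDefaultDataRegionConfiguration(defaultRegion);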
As for a purely in-memory configuration (which I think is what effectively got applied here) and dynamically created caches, this is expected behavior: the restarted server knows nothing about the previously created caches, and you need to recreate them explicitly, doing something like:
try {
    ...
} catch (Exception exception) {
    if (exception instanceof IgniteException) {
        final Throwable rootCause = getRootCause(exception);
        if (rootCause instanceof CacheStoppedException) {
            ignite.cache("mycache1");
            mylogger.info("Connection re-established with the cache.");
        }
    }
}
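Note that ignite.cache("mycache1") returns null when the restarted server no longer knows the cache, so getOrCreateCache is the safer call in a recovery path (the cache types here are assumptions):

// getOrCreateCache re-creates the cache if the restarted server lost it,
// whereas ignite.cache(...) would just return null in that case.
IgniteCache<String, String> cache = ignite.getOrCreateCache("mycache1");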
What I'm using:
- spring-data-redis 1.7.0.RELEASE
- Lettuce 3.5.0.Final
I configured Spring beans related to Redis as follows:
@Bean
public LettucePool lettucePool() {
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxIdle(10);
    poolConfig.setMinIdle(8);
    ... // setting others..
    return new DefaultLettucePool(host, port, poolConfig);
}
@Bean
public RedisConnectionFactory redisConnectionFactory() {
    return new LettuceConnectionFactory(lettucePool());
}
@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> redisTemplate = new RedisTemplate<String, Object>();
    redisTemplate.setConnectionFactory(redisConnectionFactory());
    redisTemplate.setEnableTransactionSupport(true);
    ... // setting serializers..
    return redisTemplate;
}
And the redisTemplate bean is autowired and used for Redis operations.
The connections look correctly established when I check with the 'info' command via redis-cli: the client count is exactly the pool size set on the lettucePool bean + 1 (redis-cli itself is also a client).
However, my application's log says it always sends operation requests through the same single port. So I checked the client status with the 'client list' command, and it shows the pooled number of clients, yet only one port is actually sending requests.
What am I missing?
This is caused by a Lettuce-specific feature: sharing the native connection.
LettuceConnectionFactory in spring-data-redis has a setter named setShareNativeConnection(boolean), which is true by default. This means that no matter how many connections are created and pooled, only one native connection is used as long as only non-blocking, non-transactional operations are called.
As you can see, I didn't set the value manually, so it stayed at the default 'true', and I had no blocking or transactional operations.
Additionally, the reason the default is true is that Redis itself is single-threaded: even when clients send many operations simultaneously, Redis executes them one by one, so setting this value to 'false' does not increase Redis' throughput.
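If you do need every operation to use its own pooled connection (e.g. lots of blocking or transactional calls), the flag can be flipped on the factory; a minimal sketch against the beans above:

@Bean
public RedisConnectionFactory redisConnectionFactory() {
    LettuceConnectionFactory factory = new LettuceConnectionFactory(lettucePool());
    // Stop sharing one native connection; each operation now checks a
    // connection out of the pool. Note this does not increase the
    // single-threaded Redis server's throughput.
    factory.setShareNativeConnection(false);
    return factory;
}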
I am using Spring 2.1.1 and Redis 4.0.1. I have configured two nodes: one at IP 192.168.20.40 with the master configuration and the other at IP 192.168.20.55 with the slave configuration. I am running a Spring Boot application using Jedis (not spring-jedis) on two systems, and different situations occur:
@Bean
public JedisSentinelPool jedisSentinelPool() {
    Set<String> sentinels = new HashSet<>();
    sentinels.add("192.168.20.40:26379");
    sentinels.add("192.168.20.55:26379");
    JedisSentinelPool jedisSentinelPool = new JedisSentinelPool("mymaster", sentinels);
    return jedisSentinelPool;
}
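For diagnosis, JedisSentinelPool exposes the master the sentinels currently report, which is where getResource() connections point; a small check (the printout is illustrative):

// The pool only hands out connections to the sentinel-elected master,
// so writes through it should never hit a read-only slave.
HostAndPort currentMaster = jedisSentinelPool.getCurrentHostMaster();
System.out.println("Sentinel-reported master: " + currentMaster);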
When running the application on the master node (Redis configured as master), data gets entered into the cache.
When running the application on the slave node (Redis configured as slave), an exception occurs:
(i.) I am able to get the Jedis object from the sentinel pool, but I am unable to store data into Redis; the exception is "redis.clients.jedis.exceptions.JedisDataException: READONLY You can't write against a read only slave."
When running the application on another server (192.168.20.33), with the Redis servers hosted on 192.168.20.40 and 192.168.20.55, my application is unable to get the Jedis object from the sentinel pool:
public String addToCache(@PathVariable("cacheName") String cacheName, HttpEntity<String> httpEntity, @PathVariable("key") String key) {
    try (Jedis jedis = jedisPool.getResource()) {
        long dataToEnter = jedis.hset(cacheName.getBytes(), key.getBytes(), httpEntity.getBody().getBytes());
        if (dataToEnter == 0)
            log.info("data existed in cache {} and got updated", cacheName);
        else
            log.info("new data inserted in cache {}", cacheName);
    } catch (Exception e) {
        System.out.println(e);
    }
    return httpEntity.getBody();
}
Any input would be appreciated.
Can you please check your Redis configuration file (redis.conf)? A slave has read-only mode enabled by default; you need to change the read-only mode to no if you want to write to that node directly.
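For Redis 4.x the relevant directive is slave-read-only (renamed replica-read-only in Redis 5); in the slave's redis.conf that would be:

# Allow writes directly against this slave (default is yes, i.e. read-only).
slave-read-only no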
I'm running a Java/Tomcat project in Elastic Beanstalk. I've set up an ElastiCache group in the same VPC. Currently, I'm just testing with a single EC2 instance. The app is Spring Boot with spring-boot-starter-redis. On startup it tries to ping Redis with template.getConnectionFactory().getConnection().ping(); and throws an exception whose root cause is java.net.ConnectException: Connection refused. If I telnet to the server and port, it works. I installed redis-cli on the same instance and was able to connect and ping the group and each node. The code also works fine on my local machine with a local Redis. Is Jedis connecting to anything other than the visible ElastiCache nodes?
@Autowired
private RedisConnectionFactory connectionFactory;

/**
 * Configure connection factory as per redis server.
 */
@PostConstruct
public void configureConnectionManager() {
    if (cachingEnabled && connectionFactory instanceof JedisConnectionFactory) {
        LOGGER.info("Connecting to Redis cache.");
        JedisConnectionFactory jedisConnectionFactory =
                (JedisConnectionFactory) connectionFactory;
        if (port > 0) {
            jedisConnectionFactory.setPort(port);
        }
        if (StringUtils.isNotBlank(hostname)) {
            jedisConnectionFactory.setHostName(hostname);
        }
        jedisConnectionFactory.setUsePool(true);

        RedisTemplate<Object, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory);
        template.afterPropertiesSet();

        LOGGER.info("Testing connection to Redis server on "
                + jedisConnectionFactory.getHostName()
                + ":" + jedisConnectionFactory.getPort());
        // This will test the connection and throw a runtime exception
        // if the server can't be reached.
        template.getConnectionFactory().getConnection().ping();

        final RedisCacheManager redisCacheManager =
                new RedisCacheManager(template);
        redisCacheManager.setDefaultExpiration(ttl);
        this.cm = redisCacheManager;
    } else {
        // Default implementation in case cache is turned off or an exception occurred.
        LOGGER.info("Caching disabled for this session.");
        this.cm = new NoOpCacheManager();
    }
}
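One cause worth ruling out: the autowired factory has usually been initialized by Spring before this @PostConstruct method mutates its host and port, so the underlying pool may still point at the defaults (localhost:6379), which would be refused on an EC2 instance with no local Redis. A hedged sketch of re-initializing after the mutation:

jedisConnectionFactory.setHostName(hostname);
jedisConnectionFactory.setPort(port);
jedisConnectionFactory.setUsePool(true);
// JedisConnectionFactory implements InitializingBean; re-running its
// initialization rebuilds the connection pool against the ElastiCache
// endpoint instead of the settings it was originally created with.
jedisConnectionFactory.afterPropertiesSet();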