I have a Redis-consuming application that has previously received keyspace events from a Redis (AWS ElastiCache) cluster with cluster mode turned "OFF". When a value is stored in the Redis key/value store, the application receives notification of the event, fetches the value, and continues execution.
However, this does not happen when the Redis cluster has cluster mode turned "ON". I understand from Redis documentation:
Every node of a Redis cluster generates events about its own subset of the keyspace as described above. However, unlike regular Pub/Sub communication in a cluster, events' notifications are not broadcasted to all nodes. Put differently, keyspace events are node-specific. This means that to receive all keyspace events of a cluster, clients need to subscribe to each of the nodes.
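For illustration, "subscribe to each of the nodes" at the raw client level might look something like this sketch with bare Jedis (the host list, channel name, authentication, and threading are simplified assumptions, not how Spring Data Redis works internally):

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// Hypothetical sketch: one blocking subscriber thread per cluster node.
List<String> nodes = List.of(
        "my-encrypted-cluster-0001-001.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379"
        // ... one entry per node, as in the config below ...
);
for (String hostPort : nodes) {
    String[] parts = hostPort.split(":");
    new Thread(() -> {
        try (Jedis jedis = new Jedis(parts[0], Integer.parseInt(parts[1]), true)) { // true = SSL
            jedis.auth("password"); // assumption: the cluster requires AUTH
            // subscribe() blocks, so each node gets its own thread.
            jedis.subscribe(new JedisPubSub() {
                @Override
                public void onMessage(String channel, String message) {
                    // keyspace event from this node only
                }
            }, "__keyspace@0__:my.key");
        }
    }).start();
}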
So, I updated the config to include all of the nodes in the cluster:
spring:
  redis:
    cluster:
      nodes:
        - my-encrypted-cluster-0001-001.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0001-002.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0001-003.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0002-001.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0002-002.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0002-003.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0003-001.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0003-002.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
        - my-encrypted-cluster-0003-003.my-encrypted-cluster.ymp5y1.use1.cache.amazonaws.com:6379
I add them to the JedisConnectionFactory like so:
@Bean
public JedisConnectionFactory jedisConnectionFactory(RedisProperties redisProperties) {
    // Add all 'node' endpoints to the config.
    List<String> nodes = redisProperties.getCluster().getNodes();
    RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration(nodes);
    redisClusterConfiguration.setPassword(redisProperties.getPassword());

    JedisClientConfiguration jedisClientConfig = JedisClientConfiguration.builder()
            .clientName("Encrypted_Jedis_Client")
            .useSsl()
            .and()
            .build();

    JedisConnectionFactory jcf = new JedisConnectionFactory(redisClusterConfiguration, jedisClientConfig);
    Optional.ofNullable(jcf.getPoolConfig()).ifPresent(config -> {
        log.info("Setting max/min idle properties on Jedis pool config.");
        config.setMaxIdle(30);
        config.setMinIdle(10);
    });
    return jcf;
}
Then I add the JedisConnectionFactory to the RedisMessageListenerContainer:
@Bean
public RedisMessageListenerContainer container(JedisConnectionFactory connectionFactory) {
    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    return container;
}
My listener class registers itself with the RedisMessageListenerContainer:
@Component
public class MyMessageListener extends KeyspaceEventMessageListener implements MessageListener,
        SubscriptionListener {

    @Autowired
    public MyMessageListener(final RedisMessageListenerContainer listenerContainer) {
        super(listenerContainer);
    }

    @Override
    protected void doRegister(RedisMessageListenerContainer container) {
        container.addMessageListener(this, new PatternTopic("__keyspace@0__:my.key"));
    }

    @Override
    protected void doHandleMessage(Message message) {
        // Handle the message ...
    }
}
With this configuration, the consumer application receives keyspace notifications, but not reliably. If the application starts up but does not receive keyspace notifications, it has to be restarted, again and again, until notifications are received - obviously not ideal.
The producer application is able to reliably publish the value. It has a similar configuration, but does not include a listener. I know the value is published because it is visible in the cache when I use RedisInsight to view the key.
So, where does the "clients need to subscribe to each of the nodes" part happen, and how can I prove it is happening?
Why are keyspace notifications received intermittently? Is my consuming application not subscribing to all of the nodes it is given, or is something else going on?
Does Spring Data Redis support listening for keyspace events from a Redis cluster in clustered mode, or do I need to handle this differently?
Thanks for your help!
Related
I have a WebSocket implementation that uses Redis messaging operations on WebFlux: it listens to a topic and returns the values via a WebSocket endpoint.
The problem I have is that each time a user sends a message to the endpoint via the WebSocket, a brand-new Redis subscription seems to be made, resulting in an accumulation of subscribers on the Redis message topic, and the WebSocket responses multiply with the number of Redis topic subscriptions (for example, if a user sends 3 messages, the Redis topic subscriptions increase to three, and the WebSocket connection responds three times).
I would like to know if there is a way to reuse the same subscription to the messaging topic, so that multiple Redis topic subscriptions are prevented.
The code I use is as follows:
Websocket Handler
public class SendingMessageHandler implements WebSocketHandler {

    private final Gson gson = new Gson();
    private final MessagingService messagingService;

    public SendingMessageHandler(MessagingService messagingService) {
        this.messagingService = messagingService;
    }

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        Flux<WebSocketMessage> stringFlux = session.receive()
                .map(WebSocketMessage::getPayloadAsText)
                .flatMap(inputData ->
                        messagingService.playGame(inputData)
                                .map(data -> session.textMessage(gson.toJson(data)))
                );
        return session.send(stringFlux);
    }
}
Message Handling service
public class MessagingService {

    private final ReactiveRedisOperations<String, GamePubSub> reactiveRedisOperations;

    public MessagingService(ReactiveRedisOperations<String, GamePubSub> reactiveRedisOperations) {
        this.reactiveRedisOperations = reactiveRedisOperations;
    }

    public Flux<GamePubSub> playGame(String inputData) {
        return reactiveRedisOperations
                .listenTo(ChannelTopic.of("TOPIC_NAME"))
                .map(ReactiveSubscription.Message::getMessage);
    }
}
Thank you in advance.
Instead of using ReactiveRedisOperations, MessageListener is the way to go here. You can register a listener once, and use the following as the listener.
data -> session.textMessage(gson.toJson(data))
The registration should happen only once, at the beginning of the connection. You can override void afterConnectionEstablished(WebSocketSession session) of SendingMessageHandler to accomplish this. That way, a new subscription is created per WebSocket connection rather than per message.
Also, don't forget to override afterConnectionClosed, and within it unsubscribe from the Redis topic and clean up the listener.
Instructions on how to use MessageListener.
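If you would rather stay on the reactive stack instead of switching to a MessageListener, one hedged alternative is to create the listenTo Flux once and multicast it with share(), so every WebSocket session reuses the same underlying Redis subscription. A minimal sketch, assuming the GamePubSub type from the question:

import org.springframework.data.redis.connection.ReactiveSubscription;
import org.springframework.data.redis.core.ReactiveRedisOperations;
import org.springframework.data.redis.listener.ChannelTopic;
import reactor.core.publisher.Flux;

public class MessagingService {

    private final Flux<GamePubSub> sharedTopic;

    public MessagingService(ReactiveRedisOperations<String, GamePubSub> reactiveRedisOperations) {
        // Subscribe once; share() multicasts the single upstream Redis
        // subscription to every downstream WebSocket session.
        this.sharedTopic = reactiveRedisOperations
                .listenTo(ChannelTopic.of("TOPIC_NAME"))
                .map(ReactiveSubscription.Message::getMessage)
                .share();
    }

    public Flux<GamePubSub> playGame(String inputData) {
        return sharedTopic;
    }
}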
What I'm using:
spring-data-redis 1.7.0.RELEASE
Lettuce 3.5.0.Final
I configured Spring beans related to Redis as follows:
@Bean
public LettucePool lettucePool() {
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxIdle(10);
    poolConfig.setMinIdle(8);
    ... // setting others..
    return new DefaultLettucePool(host, port, poolConfig);
}

@Bean
public RedisConnectionFactory redisConnectionFactory() {
    return new LettuceConnectionFactory(lettucePool());
}
@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> redisTemplate = new RedisTemplate<String, Object>();
    redisTemplate.setConnectionFactory(redisConnectionFactory());
    redisTemplate.setEnableTransactionSupport(true);
    ... // setting serializers..
    return redisTemplate;
}
The redisTemplate bean is autowired and used for Redis operations.
The connections look correctly established when I check with the 'info' command via redis-cli. The client count exactly matches the value set on the lettucePool bean, + 1 (redis-cli is itself a client).
However, my application's log shows that requests are always sent through the same single port. So I checked the client status using the 'client list' command: it shows the pooled number of clients, but only one port is sending requests.
What am I missing?
This is caused by Lettuce's specific feature, 'sharing native connection'.
LettuceConnectionFactory in spring-data-redis has a setter method named setShareNativeConnection(boolean), set to true by default. This means that no matter how many connections are created and pooled, only one native connection is used as long as only non-blocking and non-transactional operations are called.
As you can see, I didn't set the value manually, so it remained at the default, 'true', and I had no blocking or transactional operations.
Additionally, the reason the default is true is that Redis itself is single-threaded: even if clients send many operations simultaneously, Redis must execute them one by one, so setting this value to 'false' does not increase Redis' throughput.
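If you actually want the pooled connections to be used (for example, to compare behavior), a minimal sketch of the same factory bean with sharing disabled, using the setter mentioned above:

@Bean
public RedisConnectionFactory redisConnectionFactory() {
    LettuceConnectionFactory factory = new LettuceConnectionFactory(lettucePool());
    // Opt out of the shared native connection; operations are then sent
    // over connections checked out of the pool.
    factory.setShareNativeConnection(false);
    return factory;
}

Keep in mind this usually does not improve throughput, for the single-threaded reason described above.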
I'm working with 2 .NET Core console applications in a producer/consumer scenario with MassTransit/RabbitMQ. I need to ensure that even if NO consumers are up-and-running, the messages from the producer are still queued up successfully. That didn't seem to work with Publish() - the messages just disappeared, so I'm using Send() instead. The messages at least get queued up, but without any consumers running the messages all end up in the "_skipped" queue.
So that's my first question: is this the right approach based on the requirement (even if NO consumers are up-and-running, the messages from the producer are still queued up successfully)?
With Send(), my consumer does indeed work, but many messages are still falling through the cracks and getting dumped into the "_skipped" queue. The consumer's logic is minimal (just logging the message at the moment), so it's not a long-running process.
So that's my second question: why are so many messages still getting dumped into the "_skipped" queue?
And that leads into my third question: does this mean my consumer needs to listen to the "_skipped" queue as well?
I am unsure what code you need to see for this question, but here's a screenshot from the RabbitMQ management UI:
Producer configuration:
static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder()
        .ConfigureServices((hostContext, services) =>
        {
            services.Configure<ApplicationConfiguration>(hostContext.Configuration.GetSection(nameof(ApplicationConfiguration)));
            services.AddMassTransit(cfg =>
            {
                cfg.AddBus(ConfigureBus);
            });
            services.AddHostedService<CardMessageProducer>();
        })
        .UseConsoleLifetime()
        .UseSerilog();
}

static IBusControl ConfigureBus(IServiceProvider provider)
{
    var options = provider.GetRequiredService<IOptions<ApplicationConfiguration>>().Value;

    return Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri(options.RabbitMQ_ConnectionString), h =>
        {
            h.Username(options.RabbitMQ_Username);
            h.Password(options.RabbitMQ_Password);
        });

        cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName, e =>
        {
            EndpointConvention.Map<CardMessage>(e.InputAddress);
        });
    });
}
Producer code:
Bus.Send(message);
Consumer configuration:
static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder()
        .ConfigureServices((hostContext, services) =>
        {
            services.AddSingleton<CardMessageConsumer>();
            services.Configure<ApplicationConfiguration>(hostContext.Configuration.GetSection(nameof(ApplicationConfiguration)));
            services.AddMassTransit(cfg =>
            {
                cfg.AddBus(ConfigureBus);
            });
            services.AddHostedService<MassTransitHostedService>();
        })
        .UseConsoleLifetime()
        .UseSerilog();
}

static IBusControl ConfigureBus(IServiceProvider provider)
{
    var options = provider.GetRequiredService<IOptions<ApplicationConfiguration>>().Value;

    return Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri(options.RabbitMQ_ConnectionString), h =>
        {
            h.Username(options.RabbitMQ_Username);
            h.Password(options.RabbitMQ_Password);
        });

        cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName, e =>
        {
            e.Consumer<CardMessageConsumer>(provider);
        });

        //cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName + "_skipped", e =>
        //{
        //    e.Consumer<CardMessageConsumer>(provider);
        //});
    });
}
Consumer code:
class CardMessageConsumer : IConsumer<CardMessage>
{
    private readonly ILogger<CardMessageConsumer> logger;
    private readonly ApplicationConfiguration configuration;
    private long counter;

    public CardMessageConsumer(ILogger<CardMessageConsumer> logger, IOptions<ApplicationConfiguration> options)
    {
        this.logger = logger;
        this.configuration = options.Value;
    }

    public async Task Consume(ConsumeContext<CardMessage> context)
    {
        this.counter++;
        this.logger.LogTrace($"Message #{this.counter} consumed: {context.Message}");
    }
}
In MassTransit, the _skipped queue is the implementation of the dead letter queue concept. Messages get there because they don't get consumed.
MassTransit with RMQ always delivers a message to an exchange, not to a queue. By default, each MassTransit endpoint creates (if there is no existing queue) a queue with the endpoint name, an exchange with the same name, and binds them together. When the application has a configured consumer (or handler), an exchange for that message type (using the message type as the exchange name) also gets created, and the endpoint exchange gets bound to the message type exchange.
So, when you use Publish, the message is published to the message type exchange and gets delivered accordingly, using the endpoint binding (or multiple bindings). When you use Send, the message type exchange is not used, so the message goes directly to the destination exchange.
And, as @maldworth correctly stated, every MassTransit endpoint only expects to get messages that it can consume. If it doesn't know how to consume a message, the message is moved to the dead letter queue. This, as well as the poison message queue, is a fundamental pattern of messaging.
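To make the routing difference concrete, a hedged sketch (the endpoint address format varies by MassTransit version, and the CardMessage constructor is illustrative):

// Publish: goes to the CardMessage message-type exchange and fans out
// to every endpoint exchange bound to it (nowhere, if no binding exists yet).
await bus.Publish(new CardMessage());

// Send: bypasses the message-type exchange and targets one endpoint directly.
var endpoint = await bus.GetSendEndpoint(new Uri("rabbitmq://localhost/CardMessage")); // illustrative address
await endpoint.Send(new CardMessage());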
If you need messages to queue up to be consumed later, the best way is to have the wiring set up, but the endpoint itself (I mean the application) should not be running. As soon as the application starts, it will consume all queued messages.
When the consumer starts the bus (bus.Start()), one of the things it does is create all exchanges and queues for the transport. If you have a requirement that publish/send happen before the consumer is running, your only option is to run DeployTopologyOnly. Unfortunately, this feature is not documented in the official docs, but the unit tests are here: https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.RabbitMqTransport.Tests/BuildTopology_Specs.cs
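A hedged sketch of what that could look like, assuming the configurator exposes DeployTopologyOnly as in the linked tests:

var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(options.RabbitMQ_ConnectionString), h =>
    {
        h.Username(options.RabbitMQ_Username);
        h.Password(options.RabbitMQ_Password);
    });

    // Create the exchanges, queues, and bindings without starting any consumers.
    cfg.DeployTopologyOnly = true;

    cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName, e =>
    {
        e.Consumer<CardMessageConsumer>(provider);
    });
});

busControl.Start(); // builds the topology, consumes nothing
busControl.Stop();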
The skipped queue happens when messages are sent to a consumer that doesn't know how to process them.
For example, suppose you have a consumer that can process IConsumer<MyMessageA>, on a receive endpoint named "my-queue-a", but your message producer does Send<MyMessageB>(Uri("my-queue-a")...). This is a problem: the consumer only understands A and doesn't know how to process B, so it just moves the message to the skipped queue and continues on.
In my case, the same queue was being listened to by multiple consumers at the same time.
I am using Spring 2.1.1 and Redis 4.0.1. I have configured two nodes: one at IP 192.168.20.40 with the master configuration, and the other at IP 192.168.20.55 with the slave configuration. I am running the Spring Boot application, using Jedis (not spring-jedis), on two systems, and different behaviors occur:
@Bean
public JedisSentinelPool jedisSentinelPool() {
    Set<String> sentinels = new HashSet<>();
    sentinels.add("192.168.20.40:26379");
    sentinels.add("192.168.20.55:26379");
    JedisSentinelPool jedisSentinelPool = new JedisSentinelPool("mymaster", sentinels);
    return jedisSentinelPool;
}
When running the application on the master node (Redis configured as master), data gets entered into the cache.
When running the application on the slave node (Redis configured as slave), an exception occurs:
(i.) I am able to get the Jedis object from the sentinel pool, but I am unable to store data into Redis; the exception is "redis.clients.jedis.exceptions.JedisDataException: READONLY You can't write against a read only slave."
When running the application on another server (192.168.20.33), with the Redis servers hosted on 192.168.20.40 and 192.168.20.55, my application is unable to get the Jedis object from the sentinel pool:
public String addToCache(@PathVariable("cacheName") String cacheName, HttpEntity<String> httpEntity, @PathVariable("key") String key) {
    try (Jedis jedis = jedisPool.getResource()) {
        long dataToEnter = jedis.hset(cacheName.getBytes(), key.getBytes(), httpEntity.getBody().getBytes());
        if (dataToEnter == 0)
            log.info("data existed in cache {} and got updated", cacheName);
        else
            log.info("new data inserted in cache {}", cacheName);
    } catch (Exception e) {
        System.out.println(e);
    }
    return httpEntity.getBody();
}
Any input would be appreciated.
Can you please check your Redis configuration file (redis.conf)? It has read-only mode enabled by default on the slave. You need to change the read-only mode to false.
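For reference, on Redis 4.x the directive on the slave would look something like this (a sketch; the directive was renamed replica-read-only in later versions):

# redis.conf on the slave: default is 'yes' (read-only)
slave-read-only no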
I have a durable consumer to a remote JMS queue in an embedded Camel routing setup. Is it possible to have this kind of routing with a master-slave configuration? Right now it seems that the Camel routes are started and activated as soon as the slave ActiveMQ instance starts, not when the actual failover happens.
As a result, the slave instance receives the same messages that are also sent to the master, which causes duplicate messages to arrive in the queue on failover.
I'm using ActiveMQ 5.3 along with Apache Camel 2.1.
Unfortunately, when the slave broker starts, so does the CamelContext along with the routes. However, you can accomplish this by doing the following:
On the camelContext deployed with slave broker add the following autoStartup attribute to prevent the routes from starting:
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring" autoStartup="false">
...
</camelContext>
Next you need to create a class that implements the ActiveMQ Service Interface. A sample of this would be as follows:
package com.fusesource.example;
import org.apache.activemq.Service;
import org.apache.camel.spring.SpringCamelContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Example used to start and stop the camel context using the ActiveMQ Service interface
*
*/
public class CamelContextService implements Service
{
private final Logger LOG = LoggerFactory.getLogger(CamelContextService.class);
SpringCamelContext camel;
#Override
public void start() throws Exception {
try {
camel.start();
} catch (Exception e) {
LOG.error("Unable to start camel context: " + camel);
e.printStackTrace();
}
}
#Override
public void stop() throws Exception {
try {
camel.stop();
} catch (Exception e) {
LOG.error("Unable to stop camel context: " + camel);
e.printStackTrace();
}
}
public SpringCamelContext getCamel() {
return camel;
}
public void setCamel(SpringCamelContext camel) {
this.camel = camel;
}
}
Then in broker's configuration file, activemq.xml, add the following to register the service:
<services>
    <bean xmlns="http://www.springframework.org/schema/beans" class="com.fusesource.example.CamelContextService">
        <property name="camel" ref="camel"/>
    </bean>
</services>
Now, once the slave broker takes over as the master, the start method will be invoked on the service class and the routes will be started.
I have also posted a blog about this here: http://jason-sherman.blogspot.com/2012/04/activemq-how-to-startstop-camel-routes.html
This shouldn't be an issue, because the Camel context/routes on the slave will not start until it becomes the master (when the message store file lock is released by the master).
With Camel route policies you can decide to suspend/resume certain routes based on your own conditions.
http://camel.apache.org/routepolicy.html
There is an existing ZooKeeperRoutePolicy that can be used to do leader election.
http://camel.apache.org/zookeeper.html (see bottom of the page)
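For example, a hedged sketch of leader election with the ZooKeeper route policy (the ZooKeeper URI, queue, and endpoint names are illustrative; see the linked pages for the real details):

// Inside a RouteBuilder.configure(): only the elected leader
// (1 winner) actually starts consuming from the queue.
ZooKeeperRoutePolicy policy =
        new ZooKeeperRoutePolicy("zookeeper:localhost:2181/camel/routepolicy", 1);

from("activemq:queue:MY.QUEUE")
        .routePolicy(policy)
        .to("log:consumed");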