The SessionExpiredEvent seems to be lost when we use a Redis cluster.
It works perfectly if we use a stand-alone Redis server.
My redis.conf:
notify-keyspace-events "Egx"
My project dependencies:
implementation 'org.springframework.session:spring-session-data-redis:2.7.0'
implementation 'org.springframework.data:spring-data-redis:2.7.3'
implementation 'io.lettuce:lettuce-core:6.1.8.RELEASE'
@Configuration
@EnableRedisHttpSession(maxInactiveIntervalInSeconds = 60 * 1)
@EnableRedisRepositories
public class RedisConfig {

    @Bean
    public LettuceConnectionFactory lettuceConnectionFactory() {
        RedisClusterConfiguration redisClusterConfiguration =
                new RedisClusterConfiguration(List.of("ip:port", "ip:port", "ip:port"));
        redisClusterConfiguration.setPassword("password");
        return new LettuceConnectionFactory(redisClusterConfiguration);
    }
}
@Component
public class RedisSessionListener {

    @EventListener
    public void onCreate(org.springframework.session.events.SessionCreatedEvent event) {
        System.out.println("create!!");
        System.out.println(event);
    }

    @EventListener
    public void onDestroy(org.springframework.session.events.SessionDestroyedEvent event) {
        System.out.println("destroy!!");
        System.out.println(event);
    }

    @EventListener
    public void sessionExpired(SessionExpiredEvent event) {
        System.out.println("expired!!");
        System.out.println(event);
    }
}
I repeatedly logged in and out.
Test result: the expired event appears to be lost.
Is there a solution?
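For context on why this tends to happen: in Redis Cluster, keyspace notifications are published only by the node that owns the key, and notify-keyspace-events must be set on every node, so a client subscribed to a single node misses expirations that happen on the others. Below is a minimal sketch, independent of Spring Session, for observing expired events cluster-wide with plain Lettuce 6.x; "ip", 6379 and "password" are placeholders mirroring the configuration above.

RedisClusterClient client = RedisClusterClient.create(
        RedisURI.builder().withHost("ip").withPort(6379).withPassword("password".toCharArray()).build());

StatefulRedisClusterPubSubConnection<String, String> pubSub = client.connectPubSub();
// Receive notifications from all nodes, not only the one this connection points at.
pubSub.setNodeMessagePropagation(true);
pubSub.addListener(new RedisClusterPubSubAdapter<String, String>() {
    @Override
    public void message(RedisClusterNode node, String pattern, String channel, String message) {
        System.out.println("expired on " + node.getNodeId() + ": " + message);
    }
});
// Expired events are emitted only by the node that owns the key, so subscribe on all masters.
pubSub.sync().upstream().commands().psubscribe("__keyevent@*__:expired");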
I'm new to jOOQ... The following code seems to work in WildFly 22, but I'm not sure it is the best way to do things. What is the preferred way to inject a WildFly DataSource into jOOQ DAOs (my extended ones)? Is there a way to avoid the ".get()" in the service below and leave @Resource(...) and other connection-related concerns for MyCompanyDAO to handle internally?
In other words: companyDAO.get().fetchOneById(id) vs. companyDAO.fetchOneById(id)
@Stateless
public class CompanyService extends DefaultCompanyService {

    @Inject
    private MyCompanyDAO companyDAO;

    public Company find(Integer id) {
        return companyDAO.get().fetchOneById(id);
    }
}

@Stateless
public class MyCompanyDAO extends CompanyDao {

    @Inject
    private MyConnectionProvider cp;

    public CompanyDAO get() { // since @Resource cannot be used in the DAO constructor
        this.configuration().set(cp).set(SQLDialect.POSTGRES);
        return this;
    }

    // custom code here
}

public class CompanyDao extends DAOImpl<CompanyRecord, tables.pojos.Company, Integer> {
    // jOOQ generated code here
}

@Stateless
@LocalBean
public class MyConnectionProvider implements ConnectionProvider {

    @Resource(lookup = "java:/MyDS")
    private DataSource dataSource;

    @Override
    public Connection acquire() throws DataAccessException {
        try {
            return dataSource.getConnection();
        } catch (SQLException e) {
            throw new DataAccessException("Could not acquire connection.", e);
        }
    }

    @Override
    public void release(Connection connection) throws DataAccessException {
        try {
            connection.close();
        } catch (SQLException e) {
            throw new DataAccessException("Could not release connection.", e);
        }
    }
}
Put the initialization logic of MyCompanyDAO inside a @PostConstruct method.
@PostConstruct
public void init() {
    this.configuration().set(cp).set(SQLDialect.POSTGRES);
}
This way, you don't need to call get:
@Inject
private MyCompanyDAO companyDAO;

public Company find(Integer id) {
    return companyDAO.fetchOneById(id);
}
How about using constructor injection instead? The generated DAO classes offer a constructor that accepts a Configuration precisely for that:
@Stateless
public class MyCompanyDAO extends CompanyDao {

    @Inject
    public MyCompanyDAO(Configuration configuration) {
        super(configuration);
    }
}
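For this to work, some bean must produce a Configuration for CDI to inject. A minimal producer sketch, assuming jOOQ's DefaultConfiguration and the MyConnectionProvider from the question (the class and method names here are illustrative):

@ApplicationScoped
public class JooqConfigurationProducer {

    @Inject
    private MyConnectionProvider cp;

    // Exposes a jOOQ Configuration that the DAO constructor above can receive.
    @Produces
    @ApplicationScoped
    public Configuration jooqConfiguration() {
        return new DefaultConfiguration()
                .set(cp)
                .set(SQLDialect.POSTGRES);
    }
}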
If for some reason you cannot inject the entire configuration (which I'd recommend), you could still inject the ConnectionProvider:
@Stateless
public class MyCompanyDAO extends CompanyDao {

    @Inject
    public MyCompanyDAO(MyConnectionProvider cp) {
        super(DSL.using(cp, SQLDialect.POSTGRES));
    }
}
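Either way, constructor injection makes the dependency explicit and leaves the DAO fully initialized as soon as it is constructed, whereas the get() and @PostConstruct variants mutate the configuration after the fact.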
I'm interested in using publisher confirms in some producers that we have in a project using Spring Cloud Stream. I tried a small PoC, but it is not working. As far as I can see in the documentation, this is possible for asynchronous publisher confirms, and it should be as easy as making the following changes:
Add the confirmAckChannel in the application.yml and enable the errorChannelEnabled property.
spring.cloud.stream:
  binders:
    rabbitDefault:
      defaultCandidate: false
      type: rabbit
      environment.spring.rabbitmq.host: ${spring.rabbitmq.addresses}
      ....
  bindings:
    testOutput:
      destination: test
      binder: rabbitDefault
      content-type: application/json
  rabbit.bindings:
    testOutput.producer:
      confirmAckChannel: "testAck"
      errorChannelEnabled: true
Then a simple service, triggered by an endpoint, where I add the errorChannel header to the event.
@Service
@RequiredArgsConstructor
public class TestService {

    private final TestPublisher testPublisher;

    public void sendMessage() {
        testPublisher.send(addHeaders());
    }

    // withPayload is MessageBuilder.withPayload, statically imported
    private Message<Event<TestEvent>> addHeaders() {
        return withPayload(new Event<>(TestEvent.builder().build()))
                .setHeader(MessageHeaders.ERROR_CHANNEL, "errorChannelTest")
                .build();
    }
}
And then the RabbitMQ publisher:
@Component
@RequiredArgsConstructor
public class TestPublisher {

    private final MessagingChannels messagingChannels;

    public boolean send(Message<Event<TestEvent>> message) {
        return messagingChannels.test().send(message);
    }
}
Where MessagingChannels is implemented as
public interface MessagingChannels {

    @Input("testAck")
    MessageChannel testAck();

    @Input("errorChannelTest")
    MessageChannel testError();

    @Output("testOutput")
    MessageChannel test();
}
After that, I implemented two listeners, one for the errorChannelTest input and the other for testAck.
@Slf4j
@Component
@RequiredArgsConstructor
class TestErrorListener {

    @StreamListener("errorChannelTest")
    void onCommandReceived(Event<Message> message) {
        log.info("Message error received: " + message);
    }
}

@Slf4j
@Component
@RequiredArgsConstructor
class TestAckListener {

    @StreamListener("testAck")
    void onCommandReceived(Event<Message> message) {
        log.info("Message ACK received: " + message);
    }
}
However, I don't receive any ack or nack in these two listeners. The event is sent properly to RabbitMQ and handled by the exchange, but I never get any response back from RabbitMQ.
Am I missing something? I have also checked with these two properties, but it doesn't work either:
spring:
  rabbitmq:
    publisher-confirm-type: CORRELATED
    publisher-returns: true
I'm using Spring-Cloud-Stream 3.0.1.RELEASE and spring-cloud-starter-stream-rabbit 3.0.1.RELEASE.
----EDITED------
This is the working sample, updated with the recommendations of Gary Russell.
Application.yml
spring.cloud.stream:
  binders:
    rabbitDefault:
      defaultCandidate: false
      type: rabbit
      environment.spring.rabbitmq.host: ${spring.rabbitmq.addresses}
  bindings:
    testOutput:
      destination: exchange.output.test
      binder: rabbitDefault
      content-type: application/json
    testOutput.producer:
      errorChannelEnabled: true
  rabbit.bindings:
    testOutput.producer:
      confirmAckChannel: "testAck"

spring:
  rabbitmq:
    publisher-confirm-type: correlated
    publisher-returns: true
TestService
@Service
@RequiredArgsConstructor
public class TestService {

    private final TestPublisher testPublisher;

    public void sendMessage(Test test) {
        testPublisher.send(addHeaders(test));
    }

    private Message<Event<TestEvent>> addHeaders(Test test) {
        return withPayload(new Event<>(TestEvent.builder().test(test).build()))
                .build();
    }
}
TestService is triggered by an endpoint in the following simple controller to check this PoC.
@RestController
@RequiredArgsConstructor
public class TestController {

    private final TestService testService;

    @PostMapping("/services/v1/test")
    public ResponseEntity<Object> test(@RequestBody Test test) {
        testService.sendMessage(test);
        return ResponseEntity.ok().build();
    }
}
And then the RabbitMQ publisher, with both service activators:
@Slf4j
@Component
@RequiredArgsConstructor
public class TestPublisher {

    private final MessagingChannels messagingChannels;

    public boolean send(Message<Event<TestEvent>> message) {
        log.info("Message for Testing Publisher confirms sent: " + message);
        return messagingChannels.test().send(message);
    }

    @ServiceActivator(inputChannel = TEST_ACK)
    public void acks(Message<?> ack) {
        log.info("Message ACK received for Test: " + ack);
    }

    @ServiceActivator(inputChannel = TEST_ERROR)
    public void errors(Message<?> error) {
        log.info("Message error for Test received: " + error);
    }
}
Where MessagingChannels is implemented as
public interface MessagingChannels {

    @Input("testAck")
    MessageChannel testAck();

    @Input("testOutput.errors")
    MessageChannel testError();

    @Output("testOutput")
    MessageChannel test();
}
This is the main class of the application (I have checked with @EnableIntegration too).
@EnableBinding(MessagingChannels.class)
@SpringBootApplication
@EnableScheduling
public class Main {

    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
    }
}
testAck should not be a binding; it should be a @ServiceActivator instead.
.setHeader(MessageHeaders.ERROR_CHANNEL, "errorChannelTest")
That won't work in this context; errors are sent to a channel named testOutput.errors. Again, this needs a @ServiceActivator, not a binding.
You have errorChannelEnabled in the wrong place; it's a common producer property, not a rabbit-specific one.
@SpringBootApplication
@EnableBinding(Source.class)
public class So62219823Application {

    public static void main(String[] args) {
        SpringApplication.run(So62219823Application.class, args);
    }

    @InboundChannelAdapter(channel = "output")
    public String source() {
        return "foo";
    }

    @ServiceActivator(inputChannel = "acks")
    public void acks(Message<?> ack) {
        System.out.println("Ack: " + ack);
    }

    @ServiceActivator(inputChannel = "output.errors")
    public void errors(Message<?> error) {
        System.out.println("Error: " + error);
    }
}
spring:
  cloud:
    stream:
      bindings:
        output:
          producer:
            error-channel-enabled: true
      rabbit:
        bindings:
          output:
            producer:
              confirm-ack-channel: acks
  rabbitmq:
    publisher-confirm-type: correlated
    publisher-returns: true
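With this arrangement, positive confirms from the broker should arrive at the acks service activator, while returned (unroutable) messages and negative acks go to the output.errors channel.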
My code looks like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(500);
DataStream<String> stream = env.addSource(getConsumer(TOPIC_1));

Jedis jedis = new Jedis("master1");
stream.map(new RichMapFunction<String, String>() {
    @Override
    public String map(String value) throws Exception {
        String result = jedis.hget("rtc", value);
        return result;
    }
});
I want to get some data from Redis in map(), but this cannot run because Jedis is not serializable.
How can I use a non-serializable class such as ZkClient or Jedis in map()?
All rich functions, such as RichMapFunction, have open(Configuration) and close() methods which you can override. These lifecycle methods are called once the function has been deployed to the TaskManager where it is executed.
class MyMapFunction extends RichMapFunction<String, String> {

    // transient, so the field is not part of the serialized function
    private transient Jedis jedis;

    @Override
    public void open(Configuration parameters) {
        // open the connection to Redis on the TaskManager, for example
        jedis = new Jedis("master1");
    }

    @Override
    public String map(String value) throws Exception {
        return jedis.hget("rtc", value);
    }

    @Override
    public void close() {
        // close the connection to Redis
        jedis.close();
    }
}
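The pipeline from the question can then use this function in place of the inline anonymous class:

DataStream<String> result = stream.map(new MyMapFunction());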
I've been working on implementing a pub/sub service using Spring Data Redis.
I have been researching on the web and got something working.
My problem is that I need absolute reliability when a message is not processed (either an exception is thrown or a logic error occurs).
In that case I need the message to return to the topic for a retry (by another subscriber, or even the same one).
I have looked at several questions, particularly the following:
Redis Pub/Sub with Reliability
and
How to implement Redis Multi-Exec by using Spring-data-Redis
I have understood that I should use MULTI/EXEC to manage a transaction, but I couldn't get it to work.
Here is a simplified version of my code:
@Configuration
@PropertySource(value = { "classpath:application.properties" })
public class RedisConfig {

    @Autowired
    Environment env;

    @Bean
    public MessageListenerAdapter messageListener() {
        MyMessageListenerAdapter messageListenerAdapter = new MyMessageListenerAdapter(new RedisMessageSubscriber());
        messageListenerAdapter.afterPropertiesSet();
        return messageListenerAdapter;
    }

    @Bean(name = "RedisMessagePublisherBean")
    public RedisMessagePublisher messagePublisher() {
        return new RedisMessagePublisher();
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setValueSerializer(new GenericToStringSerializer<Object>(Object.class));
        template.setEnableTransactionSupport(true);
        template.setConnectionFactory(lettuceConnectionFactory());
        return template;
    }

    @Bean
    public RedisMessageListenerContainer redisContainer() {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(lettuceConnectionFactory());
        container.addMessageListener(messageListener(), topic());
        return container;
    }

    @Bean
    public LettuceConnectionFactory lettuceConnectionFactory() {
        LettuceConnectionFactory factory = new LettuceConnectionFactory();
        factory.setValidateConnection(true);
        factory.setDatabase(1);
        factory.afterPropertiesSet();
        return factory;
    }

    @Bean
    public ChannelTopic topic() {
        return new ChannelTopic("MQ_TOPIC");
    }

    public class MyMessageListenerAdapter extends MessageListenerAdapter {

        public MyMessageListenerAdapter(RedisMessageSubscriber redisMessageSubscriber) {
            super(redisMessageSubscriber);
        }

        @Override
        public void onMessage(Message message, byte[] pattern) {
            RedisTemplate<?, ?> template = redisTemplate();
            template.execute(new SessionCallback<String>() {
                @Override
                public <K, V> String execute(RedisOperations<K, V> operations) throws DataAccessException {
                    operations.multi();
                    System.out.println("got message");
                    String result = doSomeLogic(message);
                    if (result == null)
                        operations.discard();
                    else
                        operations.exec();
                    return null;
                }
            });
        }
    }
}
My requirement is that if a message fails to process (I can live without runtime exceptions etc.; strictly logical errors would suffice for now), it returns to the topic.
Any help is appreciated, Thanks!
I use the following:
public interface IRepository<T>
{
    void Add(T entity);
}

public class Repository<T> : IRepository<T>
{
    private readonly ISession session;

    public Repository(ISession session)
    {
        this.session = session;
    }

    public void Add(T entity)
    {
        session.Save(entity);
    }
}
public class SomeHandler : IHandleMessages<SomeMessage>
{
    private readonly IRepository<EntityA> aRepository;
    private readonly IRepository<EntityB> bRepository;

    public SomeHandler(IRepository<EntityA> aRepository, IRepository<EntityB> bRepository)
    {
        this.aRepository = aRepository;
        this.bRepository = bRepository;
    }

    public void Handle(SomeMessage message)
    {
        aRepository.Add(new EntityA(message.Property));
        bRepository.Add(new EntityB(message.Property));
    }
}
public class MessageEndPoint : IConfigureThisEndpoint, AsA_Server, IWantCustomInitialization
{
    public void Init()
    {
        ObjectFactory.Configure(config =>
        {
            config.For<ISession>()
                .CacheBy(InstanceScope.ThreadLocal)
                .TheDefault.Is.ConstructedBy(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
            config.ForRequestedType(typeof(IRepository<>))
                .TheDefaultIsConcreteType(typeof(Repository<>));
        });
    }
}
My problem with the thread-local storage is that the same session is used for the whole application thread. I discovered this when I saw that the first-level cache wasn't cleared. What I want is a new session instance before each call to IHandleMessages<>.Handle.
How can I do this with StructureMap? Do I have to create a message module?
You're right that the same session is used for all requests on the same thread; this is because NSB doesn't create a new thread for each request. The workaround is to add a custom cache mode and have it cleared when message handling is complete.
1. Extend the thread-local storage lifecycle and hook it up as a message module:
public class NServiceBusThreadLocalStorageLifestyle : ThreadLocalStorageLifecycle, IMessageModule
{
    public void HandleBeginMessage() { }

    public void HandleEndMessage()
    {
        EjectAll();
    }

    public void HandleError() { }
}
2. Configure your StructureMap as follows:
For<ISession>()
    .LifecycleIs(new NServiceBusThreadLocalStorageLifestyle())
    ...
Hope this helps!