Use Spring Cloud Spring Service Connector with RabbitMQ and start publisher confirm function - spring-cloud-config

1. I connect to RabbitMQ with the Spring Cloud Spring Service Connector:
@Bean
public ConnectionFactory rabbitConnectionFactory() {
    Map<String, Object> properties = new HashMap<String, Object>();
    properties.put("publisherConfirms", true);
    RabbitConnectionFactoryConfig rabbitConfig = new RabbitConnectionFactoryConfig(properties);
    return connectionFactory().rabbitConnectionFactory(rabbitConfig);
}
2. Set rabbitTemplate.setMandatory(true) and setConfirmCallback():
@Bean
public RabbitTemplate rabbitTemplate() {
    RabbitTemplate template = new RabbitTemplate(connectionFactory);
    template.setMandatory(true);
    template.setMessageConverter(new Jackson2JsonMessageConverter());
    template.setConfirmCallback((correlationData, ack, cause) -> {
        if (!ack) {
            System.out.println("send message failed: " + cause + correlationData.toString());
        } else {
            System.out.println("Publisher Confirm" + correlationData.toString());
        }
    });
    return template;
}
3. Send a message to the queue to trigger the publisher confirm and print the log.
@Component
public class TestSender {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @Scheduled(cron = "0/5 * * * * ? ")
    public void send() {
        this.rabbitTemplate.convertAndSend(EXCHANGE, "routingkey", "hello world",
                (Message m) -> {
                    m.getMessageProperties().setHeader("tenant", "aaaaa");
                    return m;
                }, new CorrelationData(UUID.randomUUID().toString()));
        Date date = new Date();
        System.out.println("Sender Msg Successfully - " + date);
    }
}
But the publisher confirm has not worked: the callback log is never printed. Whether the ack is true or false, the log should not be absent.

Mandatory is not needed for confirms, only returns.
Some things to try:
Turn on DEBUG logging to see if it helps; there are some logs generated regarding confirms.
Add some code:
template.execute(channel -> {
    System.out.println(channel.getClass());
    return null;
});
If you don't see PublisherCallbackChannelImpl then it means the configuration didn't work for some reason. Again DEBUG logging should help with the configuration debugging.
If you still can't figure it out, strip your application to the bare minimum that exhibits the behavior and post the complete application.
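As a sanity check, here is a minimal sketch (my own, not from the question) that enables confirms directly on a CachingConnectionFactory, bypassing the Service Connector; if the confirm callback fires with this factory but not with the connector-built one, the connector configuration is the problem. The bean name plainConnectionFactory is made up for illustration.
// Sketch only; assumes a local broker and that bypassing the connector is acceptable for a test.
@Bean
public ConnectionFactory plainConnectionFactory() {
    CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
    cf.setPublisherConfirms(true); // required for the confirm callback to be invoked
    cf.setPublisherReturns(true);  // only needed, together with setMandatory(true), for returns
    return cf;
}
Setting logging.level.org.springframework.amqp=DEBUG in application.properties also makes the confirm handling visible in the logs.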

Related

RabbitTemplate's setChannelTransacted flag causes message being not delivered to queue

Given I have an application with an AMQP anonymous queue and a fanout exchange:
@Bean
public Queue cacheUpdateAnonymousQueue() {
    return new AnonymousQueue();
}

public static final String CACHE_UPDATE_FANOUT_EXCHANGE = "cache.update.fanout";

@Bean
FanoutExchange cacheUpdateExchange() {
    return new FanoutExchange(CACHE_UPDATE_FANOUT_EXCHANGE);
}

@Bean
Binding cacheUpdateQueueToCacheUpdateExchange() {
    return bind(cacheUpdateAnonymousQueue())
            .to(cacheUpdateExchange());
}
and a Spring Integration flow:
@Bean
public IntegrationFlow cacheOutputFlow() {
    return from(channelConfig.cacheUpdateOutputChannel())
            .transform(objectToJsonTransformer())
            .handle(outboundAdapter())
            .get();
}
And I use an outbound adapter:
public MessageHandler outboundAdapter() {
    rabbitTemplate.setChannelTransacted(true);
    return outboundAdapter(rabbitTemplate)
            .exchangeName(CACHE_UPDATE_FANOUT_EXCHANGE)
            .get();
}
I can see in the logs:
o.s.amqp.rabbit.core.RabbitTemplate: Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,4), conn: Proxy@40976c4b Shared Rabbit Connection: SimpleConnection@1cfaa28d [delegate=amqp://guest@127.0.0.1:5672/, localPort= 56042]
o.s.amqp.rabbit.core.RabbitTemplate: Publishing message on exchange [cache.update.fanout], routingKey = []
but the message is not delivered to the queue bound to the cache.update.fanout exchange.
When I set rabbitTemplate.setChannelTransacted(false) in the outbound adapter, I can see in the logs:
o.s.amqp.rabbit.core.RabbitTemplate : Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,1), conn: Proxy@11a1389d Shared Rabbit Connection: SimpleConnection@444c6abf [delegate=amqp://guest@127.0.0.1:5672/, localPort= 56552]
o.s.amqp.rabbit.core.RabbitTemplate : Publishing message on exchange [cache.update.fanout], routingKey = []
and the message is delivered to the queue.
Why is the message not delivered in the first case?
Why doesn't RabbitTemplate indicate something?
Your logs have different exchange names; I just tested it like this...
@SpringBootApplication
public class So60993877Application {

    public static void main(String[] args) {
        SpringApplication.run(So60993877Application.class, args);
    }

    @Bean
    public Queue cacheUpdateAnonymousQueue() {
        return new AnonymousQueue();
    }

    public static final String CACHE_UPDATE_FANOUT_EXCHANGE = "cache.update.fanout";

    @Bean
    FanoutExchange cacheUpdateExchange() {
        return new FanoutExchange(CACHE_UPDATE_FANOUT_EXCHANGE);
    }

    @Bean
    Binding cacheUpdateQueueToCacheUpdateExchange() {
        return BindingBuilder.bind(cacheUpdateAnonymousQueue())
                .to(cacheUpdateExchange());
    }

    @RabbitListener(queues = "#{cacheUpdateAnonymousQueue.name}")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            template.convertAndSend(CACHE_UPDATE_FANOUT_EXCHANGE,
                    cacheUpdateAnonymousQueue().getName(), "foo");
            template.setChannelTransacted(true);
            template.convertAndSend(CACHE_UPDATE_FANOUT_EXCHANGE,
                    cacheUpdateAnonymousQueue().getName(), "bar");
        };
    }

}
With no problems.
foo
bar
With confirms and returns enabled:
@Bean
public ApplicationRunner runner(RabbitTemplate template) {
    template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) ->
            LOG.info("Return: " + message));
    template.setConfirmCallback((correlationData, ack, cause) ->
            LOG.info("Confirm: " + correlationData + ": " + ack));
    return args -> {
        template.convertAndSend(CACHE_UPDATE_FANOUT_EXCHANGE, cacheUpdateAnonymousQueue().getName(),
                "foo", new CorrelationData("foo"));
        // template.setChannelTransacted(true);
        template.convertAndSend(CACHE_UPDATE_FANOUT_EXCHANGE, cacheUpdateAnonymousQueue().getName(),
                "bar", new CorrelationData("bar"));
        template.convertAndSend("missingExchange", cacheUpdateAnonymousQueue().getName(), "baz",
                new CorrelationData("baz"));
        Thread.sleep(5000);
    };
}
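To make the transacted behaviour concrete, here is a minimal sketch (my own, not from the answer above) of what channelTransacted=true amounts to at the channel level: the publish only reaches the broker when the channel transaction is committed, so a publish inside an outer transaction that rolls back, or that never commits, is silently discarded.
// Sketch only; "cache.update.fanout" is reused from the question, the rest is illustrative.
template.execute(channel -> {
    channel.txSelect();                                   // start a channel-level transaction
    channel.basicPublish("cache.update.fanout", "", null, "baz".getBytes());
    channel.txRollback();                                 // rolled back: the broker never routes the message
    // channel.txCommit() here is what actually hands the message to the broker
    return null;
});
Also note that publisher confirms and channel transactions are mutually exclusive on the same channel, so pick one mechanism or the other.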

Google Cloud Memory Store (Redis), can't connect to redis when instance is just started

I have a problem connecting to Redis when my instance has just started.
I use:
runtime: java
env: flex
runtime_config:
  jdk: openjdk8
I get the following exception:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: connect timed out
RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
java.net.SocketTimeoutException: connect timed out
After 2-3 minutes, it works smoothly.
Do I need to add some check in my code, or how should I fix it properly?
P.S.
I also use Spring Boot, with the following configuration:
#Value("${spring.redis.host}")
private String redisHost;
#Bean
JedisConnectionFactory jedisConnectionFactory() {
// https://cloud.google.com/memorystore/docs/redis/quotas
RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
return new JedisConnectionFactory(config);
}
#Bean
public RedisTemplate<String, Object> redisTemplate(
#Autowired JedisConnectionFactory jedisConnectionFactory
) {
RedisTemplate<String, Object> template = new RedisTemplate<>();
template.setConnectionFactory(jedisConnectionFactory);
template.setKeySerializer(new StringRedisSerializer());
template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
return template;
}
in pom.xml
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-redis</artifactId>
<version>2.1.2.RELEASE</version>
I solved this problem as follows: in short, I added a "ping" method that tries to set and get a value from Redis; if that succeeds, the application is ready.
Implementation:
First, you need to update app.yaml and add the following:
readiness_check:
  path: "/readiness_check"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 300
Second, in your REST controller:
@GetMapping("/readiness_check")
public ResponseEntity<?> readiness_check() {
    if (!cacheConfig.ping()) {
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok().build();
}
Third, class CacheConfig:
public boolean ping() {
    long prefix = System.currentTimeMillis();
    try {
        redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
        Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
        return Boolean.TRUE.equals(val);
    } catch (Exception e) {
        LOGGER.info("ping failed for " + System.currentTimeMillis());
        return false;
    }
}
P.S.
Also if somebody needs the full implementation of CacheConfig:
@Configuration
public class CacheConfig {

    private static final Logger LOGGER = Logger.getLogger(CacheConfig.class.getName());

    @Value("${spring.redis.host}")
    private String redisHost;

    private final RedisTemplate<String, Object> redisTemplate;

    @Autowired
    public CacheConfig(@Lazy RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory(
            @Autowired JedisPoolConfig poolConfig
    ) {
        // https://cloud.google.com/memorystore/docs/redis/quotas
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(redisHost, 6379);
        JedisClientConfiguration clientConfig = JedisClientConfiguration
                .builder()
                .usePooling()
                .poolConfig(poolConfig)
                .build();
        return new JedisConnectionFactory(config, clientConfig);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(
            @Autowired JedisConnectionFactory jedisConnectionFactory
    ) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer(newObjectMapper()));
        return template;
    }

    /**
     * Example: https://github.com/PengliuIBM/pws_demo/blob/1becdca1bc19320c2742504baa1cada3260f8d93/redisData/src/main/java/com/pivotal/wangyu/study/springdataredis/config/RedisConfig.java
     */
    @Bean
    redis.clients.jedis.JedisPoolConfig jedisPoolConfig() {
        final redis.clients.jedis.JedisPoolConfig poolConfig = new redis.clients.jedis.JedisPoolConfig();
        // Maximum active connections to the Redis instance
        poolConfig.setMaxTotal(16);
        // Number of connections to Redis that just sit there and do nothing
        poolConfig.setMaxIdle(16);
        // Minimum number of idle connections to Redis - these can be seen as always open and ready to serve
        poolConfig.setMinIdle(8);
        // Tests whether a connection is dead when the connection retrieval method is called
        poolConfig.setTestOnBorrow(true);
        // Tests whether a connection is dead when returning a connection to the pool
        poolConfig.setTestOnReturn(true);
        // Tests whether connections are dead during idle periods
        poolConfig.setTestWhileIdle(true);
        return poolConfig;
    }

    public boolean ping() {
        long prefix = System.currentTimeMillis();
        try {
            redisTemplate.opsForValue().set("readiness_check_" + prefix, Boolean.TRUE, 100, TimeUnit.SECONDS);
            Boolean val = (Boolean) redisTemplate.opsForValue().get("readiness_check_" + prefix);
            return Boolean.TRUE.equals(val);
        } catch (Exception e) {
            LOGGER.info("ping failed for " + System.currentTimeMillis());
            return false;
        }
    }

}
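As a side note, a sketch of an alternative (my assumption, not part of the original setup): if spring-boot-starter-actuator is on the classpath, the same ping could back a HealthIndicator instead of a hand-rolled controller, so the readiness path can point at the actuator health endpoint. The class name RedisReadinessIndicator is made up for illustration.
// Sketch only; assumes Spring Boot Actuator is available.
@Component
public class RedisReadinessIndicator implements HealthIndicator {

    private final CacheConfig cacheConfig;

    public RedisReadinessIndicator(CacheConfig cacheConfig) {
        this.cacheConfig = cacheConfig;
    }

    @Override
    public Health health() {
        // reuse the same set/get ping as the /readiness_check endpoint
        return cacheConfig.ping() ? Health.up().build() : Health.down().build();
    }
}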

Do we have to pass header values from WebClient in Zipkin

I am using Spring Boot and the following libraries in the client and the server:
dependencyManagement {
    imports {
        mavenBom "org.springframework.cloud:spring-cloud-dependencies:Finchley.SR2"
    }
}
// Spring Cloud Sleuth
compile group: 'org.springframework.cloud', name: 'spring-cloud-starter-sleuth', version: '2.0.1.RELEASE'
compile group: 'org.springframework.cloud', name: 'spring-cloud-starter-zipkin', version: '2.0.1.RELEASE'
Based upon the Spring documentation, https://cloud.spring.io/spring-cloud-sleuth/:
Run this app and then hit the home page. You will see traceId and spanId populated in the logs. If this app calls out to another one (e.g. with RestTemplate) it will send the trace data in headers and if the receiver is another Sleuth app you will see the trace continue there.
How will this work with the Spring 5 WebClient?
It will work in the same way. It's enough to inject a bean of WebClient or WebClientBuilder type. Check out this sample https://github.com/spring-cloud-samples/sleuth-documentation-apps/blob/master/service1/src/main/java/io/spring/cloud/sleuth/docs/service1/Service2Client.java
/**
 * @author Marcin Grzejszczak
 */
@Component
class Service2Client {

    private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

    private final WebClient webClient;
    private final String serviceAddress;
    private final Tracer tracer;

    Service2Client(WebClient webClient,
            @Value("${service2.address:localhost:8082}") String serviceAddress,
            Tracer tracer) {
        this.webClient = webClient;
        this.serviceAddress = serviceAddress;
        this.tracer = tracer;
    }

    public String start() throws InterruptedException {
        log.info("Hello from service1. Setting baggage foo=>bar");
        Span span = tracer.currentSpan();
        String secretBaggage = ExtraFieldPropagation.get("baggage");
        log.info("Super secret baggage item for key [baggage] is [{}]", secretBaggage);
        if (StringUtils.hasText(secretBaggage)) {
            span.annotate("secret_baggage_received");
            span.tag("baggage", secretBaggage);
        }
        String baggageKey = "key";
        String baggageValue = "foo";
        ExtraFieldPropagation.set(baggageKey, baggageValue);
        span.annotate("baggage_set");
        span.tag(baggageKey, baggageValue);
        log.info("Hello from service1. Calling service2");
        String response = webClient.get()
                .uri("http://" + serviceAddress + "/foo")
                .exchange()
                .block()
                .bodyToMono(String.class).block();
        Thread.sleep(100);
        log.info("Got response from service2 [{}]", response);
        log.info("Service1: Baggage for [key] is [" + ExtraFieldPropagation.get("key") + "]");
        return response;
    }

    @NewSpan("first_span")
    String timeout(@SpanTag("someTag") String tag) {
        try {
            Thread.sleep(300);
            log.info("Hello from service1. Calling service2 - should end up with read timeout");
            String response = webClient.get()
                    .uri("http://" + serviceAddress + "/readtimeout")
                    .retrieve()
                    .onStatus(httpStatus -> httpStatus.isError(), clientResponse -> {
                        throw new IllegalStateException("Exception!");
                    })
                    .bodyToMono(String.class)
                    .block();
            log.info("Got response from service2 [{}]", response);
            return response;
        } catch (Exception e) {
            log.error("Exception occurred while trying to send a request to service 2", e);
            throw new RuntimeException(e);
        }
    }
}
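For reference, here is a minimal sketch (my assumption of a typical setup, not taken from the sample) of how the WebClient bean used above is usually defined. Sleuth instruments the auto-configured WebClient.Builder, so the client must be built from the injected builder (or be a WebClient bean itself) rather than created with WebClient.create(), otherwise the trace headers are not added to outgoing requests.
// Sketch only; assumes spring-cloud-starter-sleuth is on the classpath.
@Configuration
class WebClientConfig {

    @Bean
    WebClient webClient(WebClient.Builder builder) {
        // building from the instrumented builder is what propagates traceId/spanId headers
        return builder.build();
    }
}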

RabbitMQ delayed message plugin - How to show delayed message in admin UI?

We use the RabbitMQ delayed message plugin (rabbitmq_delayed_message_exchange) to delay messages. Is it possible, for debugging and monitoring purposes, to show held / delayed messages in the RabbitMQ admin web interface?
Link: https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/
Bye,
Ben
No; delayed messages are not visible in the admin UI.
As an alternative, you can route the messages to a real queue with a TTL defined, as well as dead lettering, which will cause expired messages to be routed to the final queue.
You can set a fixed TTL on the temporary queue or use the expiration property on individual messages.
EDIT
@SpringBootApplication
public class So50760600Application {

    public static void main(String[] args) {
        SpringApplication.run(So50760600Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> template.convertAndSend("", "temp", "foo", m -> {
            m.getMessageProperties().setExpiration("5000");
            return m;
        });
    }

    @RabbitListener(queues = "final")
    public void in(String in, @Header("x-death") List<?> death) {
        System.out.println(in + ", x-death:" + death);
    }

    @Bean
    public Queue temp() {
        Map<String, Object> args = new HashMap<>();
        args.put("x-message-ttl", 10000); // default (max)
        args.put("x-dead-letter-exchange", "");
        args.put("x-dead-letter-routing-key", "final");
        return new Queue("temp", true, false, false, args);
    }

    @Bean
    public Queue finalQ() {
        return new Queue("final");
    }

}
and
foo:[{reason=expired, original-expiration=5000, count=1, exchange=, time=Fri Jun 08 10:43:42 EDT 2018, routing-keys=[temp], queue=temp}]

Spring data redis - listen to expiration event

I would like to listen to expiration events with KeyExpirationEventMessageListener but I can't find an example.
Does someone know how to do it using Spring Boot 1.4.3 & Spring Data Redis?
I am currently doing this:
JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost");
this.jedis = pool.getResource();
this.jedis.psubscribe(new JedisPubSub() {
    @Override
    public void onPMessage(String pattern, String channel, String message) {
        System.out.println("onPMessage pattern " + pattern + " " + channel + " " + message);
        List<Object> txResults = redisTemplate.execute(new SessionCallback<List<Object>>() {
            public List<Object> execute(RedisOperations operations) throws DataAccessException {
                operations.multi();
                operations.opsForValue().get("val:" + message);
                operations.delete("val:" + message);
                return operations.exec();
            }
        });
        System.out.println(txResults.get(0));
    }
}, "__keyevent@0__:expired");
And I would like to use Spring instead of Jedis directly.
Regards
Don't use KeyExpirationEventMessageListener as it triggers RedisKeyExpiredEvent which then leads to a failure in RedisKeyValueAdapter.onApplicationEvent.
Rather use RedisMessageListenerContainer:
@Bean
RedisMessageListenerContainer keyExpirationListenerContainer(RedisConnectionFactory connectionFactory) {
    RedisMessageListenerContainer listenerContainer = new RedisMessageListenerContainer();
    listenerContainer.setConnectionFactory(connectionFactory);
    listenerContainer.addMessageListener((message, pattern) -> {
        // event handling comes here
    }, new PatternTopic("__keyevent@*__:expired"));
    return listenerContainer;
}
RedisMessageListenerContainer runs all notifications on its own thread.
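One more note (an assumption about the environment, not part of the answer above): expired-key events are only published if keyspace notifications are enabled on the Redis server, e.g. notify-keyspace-events Ex. A minimal sketch of enabling this from the application, assuming the server allows CONFIG SET:
// Sketch only; many managed Redis services disallow CONFIG SET,
// in which case notify-keyspace-events must be set in the server configuration instead.
@Bean
ApplicationRunner enableKeyspaceNotifications(RedisConnectionFactory connectionFactory) {
    return args -> {
        RedisConnection connection = connectionFactory.getConnection();
        try {
            // "Ex" = publish keyevent notifications for expired keys
            connection.setConfig("notify-keyspace-events", "Ex");
        } finally {
            connection.close();
        }
    };
}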