Enabling SSL on Kafka

I'm trying to connect to a Kafka cluster where SSL is required on the brokers for clients to connect. Most clients can communicate with the brokers over SSL, so I know the brokers are set up correctly. We intend to use 2-way SSL authentication and followed these instructions: https://docs.confluent.io/current/tutorials/security_tutorial.html#security-tutorial.
However, I have a Java application that I'd like to connect to the brokers. I think the SSL handshake is not completing, and as a result the request to the broker times out. The same Java application can connect to non-SSL-enabled Kafka brokers without an issue.
Update:
I ran into this when I tried to enable SSL. While debugging, the authentication exception turned out to be null. I can also see that my truststore and keystore are loaded correctly. So how do I troubleshoot this metadata update request timeout further?
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The exception is thrown from:
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long maxWaitMs) throws InterruptedException {
When I run the Kafka console producer from the Bitnami Docker image, with the same trustStore/keyStore mounted into the container, it works fine.
This works:
docker run -it -v /Users/kafka/kafka_2.11-1.0.0/bin/kafka.client.keystore.jks:/tmp/keystore.jks -v /Users/kafka/kafka_2.11-1.0.0/bin/kafka.client.truststore.jks:/tmp/truststore.jks -v /Users/kafka/kafka_2.11-1.0.0/bin/client_ssl.properties:/tmp/client.properties bitnami/kafka:1.0.0-r3 kafka-console-producer.sh --broker-list some-elb.elb.us-west-2.amazonaws.com:9094 --topic test --producer.config /tmp/client.properties
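For reference, the client_ssl.properties used by the console producer above would contain the standard Kafka client SSL settings; the actual file isn't shown in the question, so this is only a minimal sketch (the paths follow the container mounts above, and the passwords are placeholders):
security.protocol=SSL
ssl.truststore.location=/tmp/truststore.jks
ssl.truststore.password=<truststore-password>
ssl.keystore.location=/tmp/keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>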
Here are the debug logs from my Java client application. I'd appreciate any insight on how to troubleshoot this.
2018-03-13 20:13:38.661 INFO 20653 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2018-03-13 20:13:38.669 INFO 20653 --- [ main] c.i.aggregate.precompute.Application : Started Application in 14.066 seconds (JVM running for 15.12)
2018-03-13 20:13:42.225 INFO 20653 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = all
batch.size = 16384
bootstrap.servers = [some-elb.elb.us-west-2.amazonaws.com:9094]
buffer.memory = 33554432
client.id =
compression.type = lz4
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 2000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /Users/kafka/Cluster-Certs/kafka.client.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /Users/kafka/Cluster-Certs/kafka.client.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = <some class>
2018-03-13 20:13:42.287 TRACE 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Starting the Kafka producer
2018-03-13 20:13:42.841 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bufferpool-wait-time
2018-03-13 20:13:43.062 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name buffer-exhausted-records
2018-03-13 20:13:43.217 DEBUG 20653 --- [ main] org.apache.kafka.clients.Metadata : Updated cluster metadata version 1 to Cluster(id = null, nodes = [some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)], partitions = [])
2018-03-13 20:13:45.670 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name produce-throttle-time
2018-03-13 20:13:45.909 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-closed:
2018-03-13 20:13:45.923 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-created:
2018-03-13 20:13:45.935 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication:
2018-03-13 20:13:45.946 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-authentication:
2018-03-13 20:13:45.958 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent-received:
2018-03-13 20:13:45.968 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent:
2018-03-13 20:13:45.990 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-received:
2018-03-13 20:13:46.005 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name select-time:
2018-03-13 20:13:46.025 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name io-time:
2018-03-13 20:13:46.130 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name batch-size
2018-03-13 20:13:46.139 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name compression-rate
2018-03-13 20:13:46.147 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name queue-time
2018-03-13 20:13:46.156 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name request-time
2018-03-13 20:13:46.165 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name records-per-request
2018-03-13 20:13:46.179 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name record-retries
2018-03-13 20:13:46.189 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name errors
2018-03-13 20:13:46.199 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name record-size
2018-03-13 20:13:46.250 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name batch-split-rate
2018-03-13 20:13:46.275 DEBUG 20653 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-1] Starting Kafka producer I/O thread.
2018-03-13 20:13:46.329 INFO 20653 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0
2018-03-13 20:13:46.333 INFO 20653 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d
2018-03-13 20:13:46.369 DEBUG 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Kafka producer started
2018-03-13 20:13:52.982 TRACE 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Requesting metadata update for topic ssl-txn.
2018-03-13 20:13:52.987 TRACE 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Found least loaded node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:52.987 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Initialize connection to node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null) for sending metadata request
2018-03-13 20:13:52.987 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Initiating connection to node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:53.217 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-sent
2018-03-13 20:13:53.219 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-received
2018-03-13 20:13:53.219 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.latency
2018-03-13 20:13:53.222 DEBUG 20653 --- [ad | producer-1] o.apache.kafka.common.network.Selector : [Producer clientId=producer-1] Created socket with SO_RCVBUF = 33488, SO_SNDBUF = 131376, SO_TIMEOUT = 0 to node -1
2018-03-13 20:13:53.224 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_WRAP channelId -1, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 0
2018-03-13 20:13:53.224 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeWrap -1
2018-03-13 20:13:53.225 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_WRAP channelId -1, handshakeResult Status = OK HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 326, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 0
2018-03-13 20:13:53.226 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_UNWRAP channelId -1, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 326
2018-03-13 20:13:53.226 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeUnwrap -1
2018-03-13 20:13:53.227 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeUnwrap: handshakeStatus NEED_UNWRAP status BUFFER_UNDERFLOW
2018-03-13 20:13:53.227 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_UNWRAP channelId -1, handshakeResult Status = BUFFER_UNDERFLOW HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 0, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 326
2018-03-13 20:13:53.485 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Completed connection to node -1. Fetching API versions.
2018-03-13 20:13:53.485 TRACE 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Found least loaded node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:54.992 DEBUG 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
2018-03-13 20:13:54.992 INFO 20653 --- [ main] c.i.aggregate.precompute.kafka.Producer : sent message in callback
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1124)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:823)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
at com.intuit.aggregate.precompute.kafka.Producer.send(Producer.java:76)
at com.intuit.aggregate.precompute.Application.main(Application.java:58)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
Disconnected from the target VM, address: '127.0.0.1:53161', transport: 'socket'

This issue was due to an incorrect certificate on the brokers. Java has different cipher-suite defaults than the Scala/Python clients, which is why clients in other languages still worked. The Go client ran into a similar issue; enabling SSL debug logging on the brokers is what finally caught the problem.
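Since the root cause came down to the broker certificate and Java's cipher defaults, one way to see what the JVM running the producer will actually offer during the handshake is a small standalone check like the sketch below (a generic JVM check, independent of the Kafka client), optionally combined with re-running the client under -Djavax.net.debug=ssl,handshake to see exactly where the handshake stops:
import javax.net.ssl.SSLSocketFactory;

public class CipherSuiteCheck {
    public static void main(String[] args) {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        // Cipher suites this JVM enables by default
        for (String suite : factory.getDefaultCipherSuites()) {
            System.out.println("default:   " + suite);
        }
        // All cipher suites this JVM could support if explicitly enabled
        for (String suite : factory.getSupportedCipherSuites()) {
            System.out.println("supported: " + suite);
        }
    }
}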

Related

No broker/node available in test with Kafka in TestContainers

I am trying to create a bare-bones skeleton integration test for Kafka with Testcontainers: just publish a message to a topic and check that it arrives (entire setup below).
SkeletonTests.kt
@Testcontainers
class SkeletonTests {
    @Container
    private val kafka = KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))

    @Test
    fun `do nothing special`() {
        // Arrange
        val producer = KafkaProducer(
            mapOf(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to kafka.bootstrapServers),
            StringSerializer(),
            StringSerializer()
        )
        val consumer = KafkaConsumer(
            mapOf(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to kafka.bootstrapServers,
                ConsumerConfig.MAX_POLL_RECORDS_CONFIG to 1,
                ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to "earliest",
                ConsumerConfig.GROUP_ID_CONFIG to "test-group-id"
            ),
            StringDeserializer(),
            StringDeserializer()
        ).apply { subscribe(listOf("topic")) }
        // Act
        producer.send(ProducerRecord("topic", "Hello there!"))
        producer.flush()
        // Assert
        assertEquals(consumer.poll(Duration.ofSeconds(3)).first().value(), "Hello there!")
    }
}
build.gradle.kts
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    kotlin("jvm") version "1.5.31"
}

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.apache.kafka:kafka-clients:3.1.0")
    implementation("ch.qos.logback:logback-core:1.2.11")
    implementation("ch.qos.logback:logback-classic:1.2.11")
    implementation("org.slf4j:slf4j-api:1.7.36")
    testImplementation("org.junit.jupiter:junit-jupiter:5.8.2")
    testImplementation("org.testcontainers:kafka:1.17.1")
    testImplementation("org.testcontainers:junit-jupiter:1.17.1")
}

tasks.test {
    useJUnitPlatform()
}

tasks.withType<KotlinCompile>() {
    kotlinOptions.jvmTarget = "11"
}
logback.xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="debug">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
The test passes (ProducerTests > do nothing special() PASSED), however the log is flooded with producer and consumer warnings. Is this expected? Am I missing some configuration for the broker/leader to make these errors go away?
Producer:
02:12:11.204 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Give up sending metadata request since no node is available
02:12:11.255 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Initialize connection to node localhost:61785 (id: 1 rack: null) for sending metadata request
02:12:11.255 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Initiating connection to node localhost:61785 (id: 1 rack: null) using address localhost/127.0.0.1
02:12:11.255 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=producer-1] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
at java.base/java.lang.Thread.run(Thread.java:829)
02:12:11.256 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Node 1 disconnected.
02:12:11.256 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node 1 (localhost/127.0.0.1:61785) could not be established. Broker may not be available.
02:14:39.670 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Error while fetching metadata with correlation id 3 : {topic=LEADER_NOT_AVAILABLE}
02:14:39.670 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Requesting metadata update for topic topic due to error LEADER_NOT_AVAILABLE
Consumer:
02:21:11.351 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Initiating connection to node localhost:61785 (id: 1 rack: null) using address localhost/127.0.0.1
02:21:11.352 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:306)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1374)
02:21:11.352 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Node 1 disconnected.
02:21:11.352 [kafka-coordinator-heartbeat-thread | test-group-id] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Connection to node 1 (localhost/127.0.0.1:61785) could not be established. Broker may not be available.
02:21:11.353 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] No broker available to send FindCoordinator request
02:21:11.543 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] No broker available to send FindCoordinator request
02:21:11.543 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Give up sending metadata request since no node is available
02:21:11.544 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Sending FindCoordinator request to broker localhost:61785 (id: 1 rack: null)
02:21:11.544 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Initiating connection to node localhost:61785 (id: 1 rack: null) using address localhost/127.0.0.1
02:21:11.545 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:306)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1374)
02:21:11.545 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Node 1 disconnected.
02:21:11.545 [kafka-coordinator-heartbeat-thread | test-group-id] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Connection to node 1 (localhost/127.0.0.1:61785) could not be established. Broker may not be available.
02:21:11.545 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] Cancelled request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=consumer-test-group-id-1, correlationId=14) due to node 1 being disconnected
02:21:11.545 [kafka-coordinator-heartbeat-thread | test-group-id] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-test-group-id-1, groupId=test-group-id] FindCoordinator request failed due to org.apache.kafka.common.errors.DisconnectException
Update: I removed the Spring dependencies completely, however the problem persists, which suggests I am misconfiguring Testcontainers.

Why the connection operation of Lettuce takes more time than Jedis?

Connecting to a local Redis, Lettuce takes nearly 5000 ms, but Jedis only takes about 30 ms.
I referred to this example: ConnectToRedis
I use the default spring-boot-starter with the Lombok dependency.
My Code:
@Component
@Slf4j
class LettuceRunner implements CommandLineRunner {
    @Override
    public void run(String... args) throws Exception {
        StopWatch watch = new StopWatch();
        RedisClient redisClient = RedisClient.create("redis://localhost:6379");
        watch.start();
        StatefulRedisConnection<String, String> connection = redisClient.connect();
        watch.stop();
        log.info("lettuce : {} ms", watch.getLastTaskTimeMillis());
        connection.close();
        redisClient.shutdown();
    }
}

@Component
@Slf4j
class JedisRunner implements CommandLineRunner {
    @Override
    public void run(String... args) throws Exception {
        StopWatch watch = new StopWatch();
        watch.start();
        Jedis jedis = new Jedis("localhost");
        jedis.get("redis_key");
        watch.stop();
        log.info("jedis : {} ms", watch.getLastTaskInfo().getTimeMillis());
    }
}
and the result is:
2020-08-14 17:02:28.236 INFO 21760 --- [ main] com.example.demo.JedisRunner : jedis : 27 ms
2020-08-14 17:02:33.318 INFO 21760 --- [ main] com.example.demo.LettuceRunner : lettuce : 4815 ms
Because Lettuce uses Netty, and it spends the bulk of the time initializing Netty.
Check the logs; as you can see, most of the time is spent inside the io.netty package:
2020-08-15 00:54:06.030 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Creating executor io.netty.util.concurrent.DefaultEventExecutorGroup
2020-08-15 00:54:06.031 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Trying to get a Redis connection for: RedisURI [host='localhost', port=6379]
2020-08-15 00:54:06.120 DEBUG 728 --- [ main] io.lettuce.core.EpollProvider : Starting without optional epoll library
2020-08-15 00:54:06.122 DEBUG 728 --- [ main] io.lettuce.core.KqueueProvider : Starting without optional kqueue library
2020-08-15 00:54:06.123 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Allocating executor io.netty.channel.nio.NioEventLoopGroup
2020-08-15 00:54:06.123 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Creating executor io.netty.channel.nio.NioEventLoopGroup
2020-08-15 00:54:06.124 DEBUG 728 --- [ main] i.n.channel.MultithreadEventLoopGroup : -Dio.netty.eventLoopThreads: 12
2020-08-15 00:54:06.129 DEBUG 728 --- [ main] io.netty.channel.nio.NioEventLoop : -Dio.netty.noKeySetOptimization: false
2020-08-15 00:54:06.129 DEBUG 728 --- [ main] io.netty.channel.nio.NioEventLoop : -Dio.netty.selectorAutoRebuildThreshold: 512
2020-08-15 00:54:06.421 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Adding reference to io.netty.channel.nio.NioEventLoopGroup#7c59cf66, existing ref count 0
2020-08-15 00:54:06.431 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Resolved SocketAddress localhost:6379 using RedisURI [host='localhost', port=6379]
2020-08-15 00:54:06.432 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Connecting to Redis at localhost:6379
2020-08-15 00:54:06.435 DEBUG 728 --- [ main] io.netty.channel.DefaultChannelId : -Dio.netty.processId: 728 (auto-detected)
2020-08-15 00:54:06.437 DEBUG 728 --- [ main] io.netty.util.NetUtil : -Djava.net.preferIPv4Stack: false
2020-08-15 00:54:06.437 DEBUG 728 --- [ main] io.netty.util.NetUtil : -Djava.net.preferIPv6Addresses: false
2020-08-15 00:54:06.659 DEBUG 728 --- [ main] io.netty.util.NetUtil : Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
2020-08-15 00:54:06.660 DEBUG 728 --- [ main] io.netty.util.NetUtil : Failed to get SOMAXCONN from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
2020-08-15 00:54:06.898 DEBUG 728 --- [ main] io.netty.channel.DefaultChannelId : -Dio.netty.machineId: 00:50:56:ff:fe:c0:00:08 (auto-detected)
2020-08-15 00:54:06.911 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.allocator.type: pooled
2020-08-15 00:54:06.912 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.threadLocalDirectBufferSize: 0
2020-08-15 00:54:06.912 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.maxThreadLocalCharBufferSize: 16384
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.maxCapacityPerThread: 4096
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.maxSharedCapacityFactor: 2
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.linkCapacity: 16
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.ratio: 8
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.delayedQueue.ratio: 8
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.AbstractByteBuf : -Dio.netty.buffer.checkAccessible: true
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.AbstractByteBuf : -Dio.netty.buffer.checkBounds: true
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] i.n.util.ResourceLeakDetectorFactory : Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#20e9fc6c
2020-08-15 00:54:06.950 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, [id: 0x7bd077d9] (inactive), chid=0x1] channelRegistered()
2020-08-15 00:54:06.953 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelActive()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] activateEndpointAndExecuteBufferedCommands 0 command(s) buffered
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] activating endpoint
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] flushCommands()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] flushCommands() Flushing 0 commands
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] channelActive()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelActive() done
2020-08-15 00:54:06.955 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.RedisClient : Connecting to Redis at localhost:6379: Success
2020-08-15 00:54:06.956 INFO 728 --- [ main] c.h.s.c.c.CacheStudyApplicationTests : lettuce : 925 ms
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] io.lettuce.core.RedisChannelHandler : close()
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] io.lettuce.core.RedisChannelHandler : closeAsync()
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] closeAsync()
2020-08-15 00:54:06.957 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] userEventTriggered(ctx, io.lettuce.core.ConnectionEvents$Activated#1cda757f)
2020-08-15 00:54:06.958 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.959 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelInactive()
2020-08-15 00:54:06.959 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] deactivating endpoint handler
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelInactive() done
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] channelInactive()
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] Reconnect scheduling disabled
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelUnregistered()
2020-08-15 00:54:06.961 DEBUG 728 --- [ main] i.l.c.resource.DefaultClientResources : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.963 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.963 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Release executor io.netty.channel.nio.NioEventLoopGroup#7c59cf66
2020-08-15 00:54:06.965 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.PoolThreadCache : Freed 1 thread-local buffer(s) from thread: lettuce-nioEventLoop-8-1
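To confirm that this is mostly one-time Netty/client-resource bootstrapping rather than per-connection cost, a quick check (a sketch, assuming the same Lettuce and Spring versions as in the question) is to connect twice with the same RedisClient and time each connect; the second connect typically takes only a few milliseconds because the event loops and Netty classes are already initialized:
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import org.springframework.util.StopWatch;

public class LettuceWarmupCheck {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StopWatch watch = new StopWatch();

        watch.start("first connect");   // pays the Netty/EventLoopGroup initialization cost
        StatefulRedisConnection<String, String> first = client.connect();
        watch.stop();

        watch.start("second connect");  // reuses the already-initialized resources
        StatefulRedisConnection<String, String> second = client.connect();
        watch.stop();

        System.out.println(watch.prettyPrint());

        first.close();
        second.close();
        client.shutdown();
    }
}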

Spring Cloud Config Client: Fetching config from wrong server

When I run my Spring Cloud Config Client project config-client, I see these errors:
2018-02-09 10:31:09.885 INFO 13933 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at: http://localhost:8888
2018-02-09 10:31:10.022 WARN 13933 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/config-client/dev/master": Connection refused; nested exception is java.net.ConnectException: Connection refused
2018-02-09 10:31:10.026 INFO 13933 --- [ main] c.y.c.ConfigClientApplication : No active profile set, falling back to default profiles: default
2018-02-09 10:31:10.040 INFO 13933 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#33b1c5c5: startup date [Fri Feb 09 10:31:10 CST 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext#1ffe63b9
2018-02-09 10:31:10.419 INFO 13933 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=65226c2b-524f-3b14-8e17-9fdbc9f72d85
2018-02-09 10:31:10.471 INFO 13933 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$25380e89] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-02-09 10:31:10.688 INFO 13933 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 10001 (http)
2018-02-09 10:31:10.697 INFO 13933 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-02-09 10:31:10.698 INFO 13933 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.27
2018-02-09 10:31:10.767 INFO 13933 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2018-02-09 10:31:10.768 INFO 13933 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 727 ms
2018-02-09 10:31:10.861 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
2018-02-09 10:31:10.864 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2018-02-09 10:31:10.864 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2018-02-09 10:31:10.864 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2018-02-09 10:31:10.865 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2018-02-09 10:31:10.895 WARN 13933 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'configClientApplication': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'content' in value "${content}"
2018-02-09 10:31:10.896 INFO 13933 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2018-02-09 10:31:10.914 INFO 13933 --- [ main] utoConfigurationReportLoggingInitializer :
Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
2018-02-09 10:31:10.923 ERROR 13933 --- [ main] o.s.boot.SpringApplication : Application startup failed
Apparently, it is fetching from the wrong config server. The Spring Cloud Config Server is actually running at localhost:10000, and the application.yml of the project (config-client) is shown below. Why doesn't spring.cloud.config.uri take effect?
application.yml [config-client]
server:
  port: 10001
spring:
  application:
    name: config-client
  cloud:
    config:
      label: master
      profile: dev
      uri: http://localhost:10000
For future readers, as answered here, when using Spring Cloud Config Server we should specify basic bootstrap settings such as spring.application.name and spring.cloud.config.uri inside bootstrap.yml (or bootstrap.properties).
Upon startup, Spring Cloud makes an HTTP call to the config server with the name of the application and retrieves that application's configuration.
That said, since we're externalizing our settings via Spring Cloud Config Server, any default configuration defined in application.yml (or application.properties) will be overridden during the bootstrap process on startup.
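Applied to the question's setup, a minimal bootstrap.yml for config-client would look something like this (a sketch; it assumes the config server really is listening on localhost:10000 as stated above):
spring:
  application:
    name: config-client
  cloud:
    config:
      uri: http://localhost:10000
      label: master
      profile: dev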
IntelliJ users: add the following override parameter in the Run/Debug Configuration:
Name: spring.cloud.config.uri
Value: http://your-server-here/config-server
You can load the configuration server settings before the application starts by using bootstrap.yml;
just add the config server URI and the application name:
spring:
  application:
    name: clientTest
  cloud:
    config:
      uri: http://localhost:8889
      enabled: true
      fail-fast: true
If we are using bootstrap.properties, we have to include this dependency in the POM for Spring 2.4.0+
(added to avoid an error when using Spring versions greater than 2.4.0):
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
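Alternatively, on Spring Boot 2.4+ the bootstrap file can be skipped entirely and the config server imported straight from application.yml via spring.config.import; a minimal sketch, reusing the URI from the example above:
spring:
  application:
    name: clientTest
  config:
    import: "optional:configserver:http://localhost:8889"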
In my case I was testing Spring Cloud Consul, which usually runs on port 8500, but I saw a different port in the log. I found that the different port was due to the following Spring Cloud dependency, so I just had to remove it:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>

Failed to wait for initial partition map exchange

After upgrading Apache Ignite from 2.0 to 2.1, I get the warning below.
2017-08-17 10:44:21.699 WARN 10884 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
I use a third-party persistent cache store.
When I remove the cacheStore configuration, I don't get the warning and everything works fine.
With the cacheStore in place but downgraded from 2.1 back to 2.0, I also don't get the warning and everything works fine.
Is there a significant change in 2.1?
Here is my full framework stack:
- spring boot 1.5.6
- spring data jpa
- apache ignite 2.1.0
Here is my full configuration in Java code (I use embedded Ignite in Spring).
I use a partitioned cache with write-behind to RDBMS storage via Spring Data JPA.
IgniteConfiguration igniteConfig = new IgniteConfiguration();
CacheConfiguration<Long, Object> cacheConfig = new CacheConfiguration<>();
cacheConfig.setCopyOnRead(false); // for better performance
cacheConfig
        .setWriteThrough(true)
        .setWriteBehindEnabled(true)
        .setWriteBehindBatchSize(1024)
        .setWriteBehindFlushFrequency(10000)
        .setWriteBehindCoalescing(true)
        .setCacheStoreFactory(new CacheStoreImpl()); // CacheStoreImpl uses Spring Data JPA internally
cacheConfig.setName("myService");
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setBackups(2);
cacheConfig.setWriteSynchronizationMode(FULL_ASYNC);
cacheConfig.setNearConfiguration(new NearCacheConfiguration<>()); // use default configuration
igniteConfig.setCacheConfiguration(cacheConfig);
igniteConfig.setMemoryConfiguration(new MemoryConfiguration()
        .setPageSize(8 * 1024)
        .setMemoryPolicies(new MemoryPolicyConfiguration()
                .setInitialSize(256L * 1024L * 1024L)
                .setMaxSize(1024L * 1024L * 1024L)));
Ignite ignite = IgniteSpring.start(igniteConfig, springApplicationCtx);
ignite.active(true);
Here is my full log using -DIGNITE_QUIET=false:
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Config URL: n/a
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Daemon mode: off
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS: Windows 10 10.0 amd64
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS user: user
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : PID: 684
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Language runtime: Java Platform API Specification ver. 1.8
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM information: Java(TM) SE Runtime Environment 1.8.0_131-b11 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.131-b11
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM total memory: 1.9GB
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Remote Management [restart: off, REST: on, JMX (remote: on, port: 58771, auth: off, ssl: off)]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : IGNITE_HOME=null
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM arguments: [-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=58771, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Djava.rmi.server.hostname=localhost, -Dspring.liveBeansView.mbeanDomain, -Dspring.application.admin.enabled=true, -Dspring.profiles.active=rdbms,multicastIp, -Dapi.port=10010, -Xmx2g, -Xms2g, -DIGNITE_QUIET=false, -Dfile.encoding=UTF-8, -Xbootclasspath:C:\Program Files\Java\jre1.8.0_131\lib\resources.jar;C:\Program Files\Java\jre1.8.0_131\lib\rt.jar;C:\Program Files\Java\jre1.8.0_131\lib\jsse.jar;C:\Program Files\Java\jre1.8.0_131\lib\jce.jar;C:\Program Files\Java\jre1.8.0_131\lib\charsets.jar;C:\Program Files\Java\jre1.8.0_131\lib\jfr.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\cldrdata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\dnsns.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jaccess.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jfxrt.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\localedata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\nashorn.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunec.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunmscapi.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\zipfs.jar]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : System cache's MemoryPolicy size is configured to 40 MB. Use MemoryConfiguration.systemCacheMemorySize property to change the setting.
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Configured caches [in 'sysMemPlc' memoryPolicy: ['ignite-sys-cache'], in 'default' memoryPolicy: ['myCache']]
2017-08-18 11:54:52.592 WARN 684 --- [ pub-#11%null%] o.apache.ignite.internal.GridDiagnostic : This operating system has been tested less rigorously: Windows 10 10.0 amd64. Our team will appreciate the feedback if you experience any problems running ignite in this environment.
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : Configured plugins:
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : ^-- None
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor :
2017-08-18 11:54:52.724 INFO 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
2017-08-18 11:54:52.772 WARN 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
2017-08-18 11:54:52.787 WARN 684 --- [ main] o.a.i.s.c.noop.NoopCheckpointSpi : Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
2017-08-18 11:54:52.811 WARN 684 --- [ main] o.a.i.i.m.c.GridCollisionManager : Collision resolution is disabled (all jobs will be activated upon arrival).
2017-08-18 11:54:52.812 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Security status [authentication=off, tls/ssl=off]
2017-08-18 11:54:53.087 INFO 684 --- [ main] o.a.i.i.p.odbc.SqlListenerProcessor : SQL connector processor has started on TCP port 10800
2017-08-18 11:54:53.157 INFO 684 --- [ main] o.a.i.i.p.r.p.tcp.GridTcpRestProtocol : Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Non-loopback local IPs: 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831, fe80:0:0:0:159d:5c82:b4ca:7630%eth2, fe80:0:0:0:30a3:1c57:3f57:4831%net0, fe80:0:0:0:3857:b492:48ad:1dc%eth4
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Enabled local MACs: 00000000000000E0, 0A0027000004, BCEE7B8B7C00
2017-08-18 11:54:53.404 INFO 684 --- [ main] o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=7d90a0ac-b620-436f-b31c-b538a04b0919]
2017-08-18 11:54:53.409 WARN 684 --- [ main] .s.d.t.i.m.TcpDiscoveryMulticastIpFinder : TcpDiscoveryMulticastIpFinder has no pre-configured addresses (it is recommended in production to specify at least one address in TcpDiscoveryMulticastIpFinder.getAddresses() configuration property)
2017-08-18 11:54:55.068 INFO 684 --- [orker-#34%null%] o.apache.ignite.internal.exchange.time : Started exchange init [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], crd=true, evt=10, node=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], customEvt=null]
2017-08-18 11:54:55.302 INFO 684 --- [orker-#34%null%] o.a.i.i.p.cache.GridCacheProcessor : Started cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL]
2017-08-18 11:55:15.066 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
2017-08-18 11:55:35.070 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Still waiting for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false, reassign=false, discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], topVer=1, nodeId8=7d90a0ac, msg=null, type=NODE_JOINED, tstamp=1503024895045], crd=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], nodeId=7d90a0ac, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=1821989981], init=false, lastVer=null, partReleaseFut=null, exchActions=null, affChangeMsg=null, skipPreload=false, clientOnlyExchange=false, initTs=1503024895057, centralizedAff=false, changeGlobalStateE=null, forcedRebFut=null, done=false, evtLatch=0, remaining=[], super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=733156437]]]
I debugged my code, and I suspect IgniteSpring cannot inject the @SpringResource fields:
@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private RdbmsCachePersistenceRepository repository;

@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private CacheObjectFactory cacheObjectFactory;
repository and cacheObjectFactory are the same instance, as in the code below:
public interface RdbmsCachePersistenceRepository extends
        JpaRepository<RdbmsCachePersistence, Long>,
        CachePersistenceRepository<RdbmsCachePersistence>,
        CacheObjectFactory {

    @Override
    default CachePersistence createCacheObject(long key, Object value, int partition) {
        return new RdbmsCachePersistence(key, value, partition);
    }
}
And RdbmsCachePersistenceRepository is implemented by Spring Data JPA.
When I debug the code line by line, the Ignite context cannot obtain the RdbmsCachePersistenceRepository bean.
I don't know why that is.
I resolved this problem, but I don't know why the fix works.
I added this dummy line before IgniteSpring.start:
springApplicationCtx.getBean(RdbmsCachePersistenceRepository.class);
I think the Spring resource bean was not yet initialized when the Ignite context tried to look it up.
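In context, the workaround amounts to touching the repository bean just before starting Ignite, so it is fully initialized by the time Ignite resolves the @SpringResource injections; a sketch of the assumed ordering, based on the code above:
// Force eager initialization of the Spring Data JPA repository bean
// before Ignite starts and tries to inject it via @SpringResource.
springApplicationCtx.getBean(RdbmsCachePersistenceRepository.class);

// Then start the embedded Ignite node with the same Spring context.
Ignite ignite = IgniteSpring.start(igniteConfig, springApplicationCtx);
ignite.active(true);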

Turbine AMQP does not receive Hystrix stream

I had a Turbine and Hystrix setup working, but decided to change it over to Turbine AMQP so I could aggregate multiple services into one stream/dashboard.
I have set up a Turbine AMQP server running on localhost:8989, but it doesn't appear to be getting Hystrix data from the client service. When I hit the Turbine server's address in my browser, I see data: {"type":"Ping"} repeatedly, even while I am polling the Hystrix stream URL. If I attempt to show the Turbine AMQP stream in the Hystrix Dashboard, I get: Unable to connect to Command Metric Stream.
I have a default install of RabbitMQ running on port 5672.
My client service, which uses Hystrix AMQP, has an application.yml file that looks like this:
spring:
  application:
    name: policy-service
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
spring:
  rabbitmq:
    addresses: ${vcap.services.${PREFIX:}rabbitmq.credentials.uri:amqp://${RABBITMQ_HOST:localhost}:${RABBITMQ_PORT:5672}}
The tail end of the startup log looks like this:
2015-09-14 16:31:13.030 INFO 52844 --- [ main] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30
2015-09-14 16:31:13.047 INFO 52844 --- [ main] c.n.e.EurekaDiscoveryClientConfiguration : Registering application policy-service with eureka with status UP
2015-09-14 16:31:13.194 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'policy-service:8088.errorChannel' has 1 subscriber(s).
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {filter} as a subscriber to the 'cloudBusOutboundFlow.channel#0' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusOutboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {filter} as a subscriber to the 'cloudBusInboundChannel' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusInboundChannel' has 1 subscriber(s).
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#1
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {message-handler} as a subscriber to the 'cloudBusInboundFlow.channel#0' channel
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#2
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter} as a subscriber to the 'cloudBusWiretapChannel' channel
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusWiretapChannel' has 1 subscriber(s).
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#3
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {amqp:outbound-channel-adapter} as a subscriber to the 'cloudBusOutboundChannel' channel
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusOutboundChannel' has 1 subscriber(s).
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#4
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge} as a subscriber to the 'cloudBusAmqpInboundFlow.channel#0' channel
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusAmqpInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#5
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {amqp:outbound-channel-adapter} as a subscriber to the 'hystrixStream' channel
2015-09-14 16:31:13.199 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.hystrixStream' has 1 subscriber(s).
2015-09-14 16:31:13.199 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#6
2015-09-14 16:31:13.219 INFO 52844 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 1073741823
2015-09-14 16:31:13.219 INFO 52844 --- [ main] ApplicationEventListeningMessageProducer : started org.springframework.integration.event.inbound.ApplicationEventListeningMessageProducer#0
2015-09-14 16:31:13.555 INFO 52844 --- [cTaskExecutor-1] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (4640c1c8-ff8f-45d7-8426-19d1b7a4cdb0) durable:false, auto-delete:true, exclusive:true. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
2015-09-14 16:31:13.572 INFO 52844 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#0
2015-09-14 16:31:13.573 INFO 52844 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2015-09-14 16:31:13.576 INFO 52844 --- [ main] c.n.h.c.m.e.HystrixMetricsPoller : Starting HystrixMetricsPoller
2015-09-14 16:31:13.609 INFO 52844 --- [ main] ration$HystrixMetricsPollerConfiguration : Starting poller
2015-09-14 16:31:13.803 INFO 52844 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8088 (http)
2015-09-14 16:31:13.805 INFO 52844 --- [ main] com.ml.springboot.PolicyService : Started PolicyService in 22.544 seconds (JVM running for 23.564)
So it looks like PolicyService successfully connects to the message broker.
Here is the Turbine AMQP server's end of the log:
2015-09-14 16:58:05.887 INFO 51944 --- [ main] i.reactivex.netty.server.AbstractServer : Rx server started at port: 8989
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'bootstrap:-1.errorChannel' has 1 subscriber(s).
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge} as a subscriber to the 'hystrixStreamAggregatorInboundFlow.channel#0' channel
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.integration.channel.DirectChannel : Channel 'bootstrap:-1.hystrixStreamAggregatorInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 1073741823
2015-09-14 16:58:06.238 INFO 51944 --- [cTaskExecutor-1] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (spring.cloud.hystrix.stream) durable:false, auto-delete:false, exclusive:false. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
2015-09-14 16:58:06.289 INFO 51944 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#0
2015-09-14 16:58:06.290 INFO 51944 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2015-09-14 16:58:06.434 INFO 51944 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): -1 (http)
Any ideas why the Turbine AMQP server is not receiving communication from the Hystrix AMQP client?
EDIT: The Turbine-AMQP main class looks like this:
package com.turbine.amqp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.turbine.amqp.EnableTurbineAmqp;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@EnableTurbineAmqp
@EnableDiscoveryClient
public class TurbineAmqpApplication {

    public static void main(String[] args) {
        SpringApplication.run(TurbineAmqpApplication.class, args);
    }

}
Here's its application.yml:
server:
  port: 8989
spring:
  rabbitmq:
    addresses: ${vcap.services.${PREFIX:}rabbitmq.credentials.uri:amqp://${RABBITMQ_HOST:localhost}:${RABBITMQ_PORT:5672}}
Hitting http://localhost:8989/turbine.stream produces a repeating stream of data: {"type":"Ping"}
and shows this in the console:
2015-09-15 08:54:37.960 INFO 83480 --- [o-eventloop-3-1] o.s.c.n.t.amqp.TurbineAmqpConfiguration : SSE Request Received
2015-09-15 08:54:38.025 INFO 83480 --- [o-eventloop-3-1] o.s.c.n.t.amqp.TurbineAmqpConfiguration : Starting aggregation
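(For reference, the stream above was observed simply by keeping an SSE-capable HTTP client attached, e.g. curl with buffering disabled:)
curl -N http://localhost:8989/turbine.stream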
EDIT: The exception below is thrown when I stop listening to the turbine stream, not when I try to listen with the dashboard.
2015-09-15 08:56:47.934 INFO 83480 --- [o-eventloop-3-3] o.s.c.n.t.amqp.TurbineAmqpConfiguration : SSE Request Received
2015-09-15 08:56:47.946 WARN 83480 --- [o-eventloop-3-3] io.netty.channel.DefaultChannelPipeline : An exception was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.lang.NoSuchMethodError: rx.Observable.collect(Lrx/functions/Func0;Lrx/functions/Action2;)Lrx/Observable;
at com.netflix.turbine.aggregator.StreamAggregator.lambda$null$36(StreamAggregator.java:89)
at rx.internal.operators.OnSubscribeMulticastSelector.call(OnSubscribeMulticastSelector.java:60)
at rx.internal.operators.OnSubscribeMulticastSelector.call(OnSubscribeMulticastSelector.java:40)
at rx.Observable.unsafeSubscribe(Observable.java:8591)
at rx.internal.operators.OperatorMerge$MergeSubscriber.handleNewSource(OperatorMerge.java:190)
at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:160)
at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:96)
at rx.internal.operators.OperatorMap$1.onNext(OperatorMap.java:54)
at rx.internal.operators.OperatorGroupBy$GroupBySubscriber.onNext(OperatorGroupBy.java:173)
at rx.subjects.SubjectSubscriptionManager$SubjectObserver.onNext(SubjectSubscriptionManager.java:224)
at rx.subjects.PublishSubject.onNext(PublishSubject.java:101)
at org.springframework.cloud.netflix.turbine.amqp.Aggregator.handle(Aggregator.java:53)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:112)
at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:102)
at org.springframework.expression.spel.ast.MethodReference.access$000(MethodReference.java:49)
at org.springframework.expression.spel.ast.MethodReference$MethodValueRef.getValue(MethodReference.java:342)
at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:88)
at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:131)
at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:330)
at org.springframework.integration.util.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:164)
at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:276)
at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:142)
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:75)
at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:71)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:99)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:248)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:171)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:119)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:105)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:101)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter.access$400(AmqpInboundChannelAdapter.java:45)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$1.onMessage(AmqpInboundChannelAdapter.java:93)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:756)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:679)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$001(SimpleMessageListenerContainer.java:82)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$1.invokeListener(SimpleMessageListenerContainer.java:167)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.invokeListener(SimpleMessageListenerContainer.java:1241)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:660)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1005)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:989)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:82)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1103)
at java.lang.Thread.run(Thread.java:745)
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: GroupedObservable.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(OnErrorThrowable.java:98)
at rx.internal.operators.OperatorMap$1.onNext(OperatorMap.java:56)
... 58 common frames omitted
My dependencies for turbine-amqp are as follows:
dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-turbine-amqp:1.0.3.RELEASE')
    compile 'org.springframework.boot:spring-boot-starter-web:1.2.5.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-actuator:1.2.5.RELEASE'
    testCompile("org.springframework.boot:spring-boot-starter-test")
}

dependencyManagement {
    imports {
        mavenBom 'org.springframework.cloud:spring-cloud-starter-parent:1.0.2.RELEASE'
    }
}
It has been very hard to find a solution.
Using Spring Cloud 2.1.4.RELEASE, I ran into a similar problem.
The root cause is a mismatch between the RabbitMQ exchange names used by spring-cloud-netflix-hystrix-stream and spring-cloud-starter-netflix-turbine-stream.
To fix it:
Check the name of the exchange that gets created when you start the service component (the one that declares hystrix-stream).
Then, on the component that declares turbine-stream, update the property
turbine.stream.destination=
In my case:
turbine.stream.destination=hystrixStreamOutput
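To see which exchange the hystrix-stream client actually declares, you can list the exchanges on the broker (assuming rabbitmqctl is available; the RabbitMQ management UI works just as well):
rabbitmqctl list_exchanges
In application.yml form, the turbine-stream side then becomes:
turbine:
  stream:
    destination: hystrixStreamOutput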
I ran into a similar problem and found a solution. My Spring Cloud version is 2.1.0.RELEASE.
The solution:
Add the properties
spring.cloud.stream.bindings.turbineStreamInput.destination: hystrixStreamOutput
turbine.stream.enabled: false
and add the following auto-configuration:
// NOTE: imports assume the standard spring-cloud-stream and
// spring-cloud-netflix-turbine-stream packages.
import java.util.Map;

import javax.annotation.PostConstruct;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.netflix.turbine.stream.HystrixStreamAggregator;
import org.springframework.cloud.netflix.turbine.stream.TurbineStreamClient;
import org.springframework.cloud.netflix.turbine.stream.TurbineStreamProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.annotation.Bean;
import rx.subjects.PublishSubject;

@EnableBinding(TurbineStreamClient.class)
public class TurbineStreamAutoConfiguration {

    @Autowired
    private BindingServiceProperties bindings;

    @Autowired
    private TurbineStreamProperties properties;

    @PostConstruct
    public void init() {
        // Ensure the turbine input binding exists and points at the
        // destination/content type configured in TurbineStreamProperties.
        BindingProperties inputBinding = this.bindings.getBindings()
                .get(TurbineStreamClient.INPUT);
        if (inputBinding == null) {
            this.bindings.getBindings().put(TurbineStreamClient.INPUT,
                    new BindingProperties());
        }
        BindingProperties input = this.bindings.getBindings()
                .get(TurbineStreamClient.INPUT);
        if (input.getDestination() == null) {
            input.setDestination(this.properties.getDestination());
        }
        if (input.getContentType() == null) {
            input.setContentType(this.properties.getContentType());
        }
    }

    @Bean
    public HystrixStreamAggregator hystrixStreamAggregator(ObjectMapper mapper,
            PublishSubject<Map<String, Object>> publisher) {
        return new HystrixStreamAggregator(mapper, publisher);
    }

}
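For completeness, here are the two properties from this answer in application.yml form (a sketch; the destination must match the exchange your hystrix-stream client actually publishes to):
spring:
  cloud:
    stream:
      bindings:
        turbineStreamInput:
          destination: hystrixStreamOutput
turbine:
  stream:
    enabled: false
Setting turbine.stream.enabled to false appears to disable the stock turbine-stream auto-configuration so that the custom TurbineStreamAutoConfiguration above, with its explicit binding destination, takes effect.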