Why does the connection operation of Lettuce take more time than Jedis? - redis

Connecting to a local Redis, Lettuce takes nearly 5000 ms, but Jedis only takes about 30 ms.
I referred to this example: ConnectToRedis.
I use the default spring-boot-starter with the Lombok dependency.
My code:
@Component
@Slf4j
class LettuceRunner implements CommandLineRunner {
    @Override
    public void run(String... args) throws Exception {
        StopWatch watch = new StopWatch();
        RedisClient redisClient = RedisClient.create("redis://localhost:6379");
        watch.start();
        StatefulRedisConnection<String, String> connection = redisClient.connect();
        watch.stop();
        log.info("lettuce : {} ms", watch.getLastTaskTimeMillis());
        connection.close();
        redisClient.shutdown();
    }
}
@Component
@Slf4j
class JedisRunner implements CommandLineRunner {
    @Override
    public void run(String... args) throws Exception {
        StopWatch watch = new StopWatch();
        watch.start();
        Jedis jedis = new Jedis("localhost");
        jedis.get("redis_key");
        watch.stop();
        log.info("jedis : {} ms", watch.getLastTaskInfo().getTimeMillis());
    }
}
and the result is:
2020-08-14 17:02:28.236 INFO 21760 --- [ main] com.example.demo.JedisRunner : jedis : 27 ms
2020-08-14 17:02:33.318 INFO 21760 --- [ main] com.example.demo.LettuceRunner : lettuce : 4815 ms

Because Lettuce is built on Netty, and the bulk of the connection time is spent bootstrapping Netty itself (event loop groups, channel and network utilities) on first use.
Check the debug logs: as you can see, most of the time is spent inside the io.netty packages:
2020-08-15 00:54:06.030 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Creating executor io.netty.util.concurrent.DefaultEventExecutorGroup
2020-08-15 00:54:06.031 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Trying to get a Redis connection for: RedisURI [host='localhost', port=6379]
2020-08-15 00:54:06.120 DEBUG 728 --- [ main] io.lettuce.core.EpollProvider : Starting without optional epoll library
2020-08-15 00:54:06.122 DEBUG 728 --- [ main] io.lettuce.core.KqueueProvider : Starting without optional kqueue library
2020-08-15 00:54:06.123 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Allocating executor io.netty.channel.nio.NioEventLoopGroup
2020-08-15 00:54:06.123 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Creating executor io.netty.channel.nio.NioEventLoopGroup
2020-08-15 00:54:06.124 DEBUG 728 --- [ main] i.n.channel.MultithreadEventLoopGroup : -Dio.netty.eventLoopThreads: 12
2020-08-15 00:54:06.129 DEBUG 728 --- [ main] io.netty.channel.nio.NioEventLoop : -Dio.netty.noKeySetOptimization: false
2020-08-15 00:54:06.129 DEBUG 728 --- [ main] io.netty.channel.nio.NioEventLoop : -Dio.netty.selectorAutoRebuildThreshold: 512
2020-08-15 00:54:06.421 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Adding reference to io.netty.channel.nio.NioEventLoopGroup#7c59cf66, existing ref count 0
2020-08-15 00:54:06.431 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Resolved SocketAddress localhost:6379 using RedisURI [host='localhost', port=6379]
2020-08-15 00:54:06.432 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Connecting to Redis at localhost:6379
2020-08-15 00:54:06.435 DEBUG 728 --- [ main] io.netty.channel.DefaultChannelId : -Dio.netty.processId: 728 (auto-detected)
2020-08-15 00:54:06.437 DEBUG 728 --- [ main] io.netty.util.NetUtil : -Djava.net.preferIPv4Stack: false
2020-08-15 00:54:06.437 DEBUG 728 --- [ main] io.netty.util.NetUtil : -Djava.net.preferIPv6Addresses: false
2020-08-15 00:54:06.659 DEBUG 728 --- [ main] io.netty.util.NetUtil : Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
2020-08-15 00:54:06.660 DEBUG 728 --- [ main] io.netty.util.NetUtil : Failed to get SOMAXCONN from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
2020-08-15 00:54:06.898 DEBUG 728 --- [ main] io.netty.channel.DefaultChannelId : -Dio.netty.machineId: 00:50:56:ff:fe:c0:00:08 (auto-detected)
2020-08-15 00:54:06.911 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.allocator.type: pooled
2020-08-15 00:54:06.912 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.threadLocalDirectBufferSize: 0
2020-08-15 00:54:06.912 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.maxThreadLocalCharBufferSize: 16384
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.maxCapacityPerThread: 4096
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.maxSharedCapacityFactor: 2
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.linkCapacity: 16
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.ratio: 8
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.delayedQueue.ratio: 8
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.AbstractByteBuf : -Dio.netty.buffer.checkAccessible: true
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.AbstractByteBuf : -Dio.netty.buffer.checkBounds: true
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] i.n.util.ResourceLeakDetectorFactory : Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector#20e9fc6c
2020-08-15 00:54:06.950 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, [id: 0x7bd077d9] (inactive), chid=0x1] channelRegistered()
2020-08-15 00:54:06.953 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelActive()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] activateEndpointAndExecuteBufferedCommands 0 command(s) buffered
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] activating endpoint
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] flushCommands()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] flushCommands() Flushing 0 commands
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] channelActive()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelActive() done
2020-08-15 00:54:06.955 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.RedisClient : Connecting to Redis at localhost:6379: Success
2020-08-15 00:54:06.956 INFO 728 --- [ main] c.h.s.c.c.CacheStudyApplicationTests : lettuce : 925 ms
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] io.lettuce.core.RedisChannelHandler : close()
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] io.lettuce.core.RedisChannelHandler : closeAsync()
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] closeAsync()
2020-08-15 00:54:06.957 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] userEventTriggered(ctx, io.lettuce.core.ConnectionEvents$Activated#1cda757f)
2020-08-15 00:54:06.958 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.959 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelInactive()
2020-08-15 00:54:06.959 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] deactivating endpoint handler
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelInactive() done
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] channelInactive()
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] Reconnect scheduling disabled
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelUnregistered()
2020-08-15 00:54:06.961 DEBUG 728 --- [ main] i.l.c.resource.DefaultClientResources : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.963 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.963 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Release executor io.netty.channel.nio.NioEventLoopGroup#7c59cf66
2020-08-15 00:54:06.965 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.PoolThreadCache : Freed 1 thread-local buffer(s) from thread: lettuce-nioEventLoop-8-1
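Since that Netty bootstrap is a one-time cost per ClientResources instance, a common mitigation is to create the client resources once at application startup and reuse them, so only the first connect pays the initialization price. A minimal sketch (class and variable names are illustrative, not from the original post):
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class SharedResourcesDemo {
    public static void main(String[] args) {
        // Share one ClientResources instance so Netty's event loops and helpers
        // are created once; every RedisClient built from it reuses them.
        ClientResources shared = DefaultClientResources.create();

        RedisClient first = RedisClient.create(shared, "redis://localhost:6379");
        StatefulRedisConnection<String, String> warmUp = first.connect(); // pays the Netty bootstrap cost
        warmUp.close();

        RedisClient second = RedisClient.create(shared, "redis://localhost:6379");
        StatefulRedisConnection<String, String> conn = second.connect(); // much cheaper: Netty is already up
        conn.close();

        first.shutdown();
        second.shutdown();
        shared.shutdown(); // release the shared resources when the application stops
    }
}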

Related

Sometimes, embedded-redis (Lettuce) tries to reconnect constantly in tests

I have written Redis test cases with embedded Redis, and they work fine without any issues locally. But when I moved to a CI/CD pipeline with Jenkins or GitLab, I sometimes get connection-refused errors.
environment:
spring-data-redis:2.2.8
lettuce:5.5.2
eu.monniot.redis:embedded-redis:1.2.2
log:
org.springframework.test.context.event.EventPublishingTestExecutionListener]
2020-08-24 14:22:02.281 INFO 4273 --- [ Test worker] t.a.d.r.DataRedisTestContextBootstrapper : Using TestExecutionListeners: [org.springframework.test.context.web.ServletTestExecutionListener#5c88a8a1, org.springframework.test.context.support.DirtiesContextBeforeModesTestExecutionListener#365ab4bb, org.springframework.boot.test.mock.mockito.MockitoTestExecutionListener#65d657e9, org.springframework.boot.test.autoconfigure.SpringBootDependencyInjectionTestExecutionListener#34ad82d5, org.springframework.test.context.support.DirtiesContextTestExecutionListener#2d79e941, org.springframework.test.context.transaction.TransactionalTestExecutionListener#4ee57f39, org.springframework.test.context.jdbc.SqlScriptsTestExecutionListener#19db0013, org.springframework.test.context.event.EventPublishingTestExecutionListener#413892d7, org.springframework.boot.test.autoconfigure.restdocs.RestDocsTestExecutionListener#23e94850, org.springframework.boot.test.autoconfigure.web.client.MockRestServiceServerResetTestExecutionListener#4690f7a3, org.springframework.boot.test.autoconfigure.web.servlet.MockMvcPrintOnlyOnFailureTestExecutionListener#78aacfa, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverTestExecutionListener#2a128f6, org.springframework.boot.test.mock.mockito.ResetMocksTestExecutionListener#6be35b3a]
2020-08-24 14:22:02.405 INFO 4273 --- [llEventLoop-4-6] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:16379
2020-08-24 14:22:02.405 INFO 4273 --- [llEventLoop-4-5] i.l.core.protocol.ReconnectionHandler : Reconnected to localhost:16379
2020-08-24 14:22:02.413 INFO 4273 --- [llEventLoop-7-2] i.l.core.protocol.ReconnectionHandler : Reconnected to localhost:16379
2020-08-24 14:22:02.414 INFO 4273 --- [llEventLoop-7-8] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:16379
2020-08-24 14:22:02.504 INFO 4273 --- [llEventLoop-4-8] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:16379
2020-08-24 14:22:02.613 INFO 4273 --- [llEventLoop-7-6] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:17379
2020-08-24 14:22:02.613 INFO 4273 --- [llEventLoop-7-5] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:16379
2020-08-24 14:22:02.704 INFO 4273 --- [llEventLoop-4-2] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:17379
2020-08-24 14:22:05.313 INFO 4273 --- [llEventLoop-7-1] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:19379
2020-08-24 14:22:05.505 INFO 4273 --- [llEventLoop-4-1] i.l.core.protocol.ReconnectionHandler : Reconnected to 127.0.0.1:18369
2020-08-24 14:22:07.012 INFO 4273 --- [xecutorLoop-5-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 127.0.0.1:16379
2020-08-24 14:22:07.012 INFO 4273 --- [xecutorLoop-5-8] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was localhost:16379
2020-08-24 14:22:07.013 WARN 4273 --- [llEventLoop-7-4] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [127.0.0.1:16379]: finishConnect(..) failed: Connection refused: /127.0.0.1:16379
2020-08-24 14:22:07.013 WARN 4273 --- [llEventLoop-7-3] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [localhost:16379]: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:16379
2020-08-24 14:22:07.104 INFO 4273 --- [xecutorLoop-1-1] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was localhost:16379
2020-08-24 14:22:07.104 INFO 4273 --- [xecutorLoop-1-2] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 127.0.0.1:17379
2020-08-24 14:22:07.104 INFO 4273 --- [xecutorLoop-1-8] i.l.core.protocol.ConnectionWatchdog : Reconnecting, last destination was 127.0.0.1:16379
2020-08-24 14:22:07.105 WARN 4273 --- [llEventLoop-4-2] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [127.0.0.1:16379]: finishConnect(..) failed: Connection refused: /127.0.0.1:16379
2020-08-24 14:22:07.105 WARN 4273 --- [llEventLoop-4-1] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [127.0.0.1:17379]: finishConnect(..) failed: Connection refused: /127.0.0.1:17379
2020-08-24 14:22:07.105 WARN 4273 --- [llEventLoop-4-8] i.l.core.protocol.ConnectionWatchdog : Cannot reconnect to [localhost:16379]: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:16379
code:
companion object {
    private lateinit var cluster: Redis

    @JvmStatic
    @BeforeAll
    @Timeout(120)
    fun beforeAll() {
        cluster = RedisCluster.Builder()
            .serverPorts(arrayListOf(16379, 17379, 18369, 19379))
            .numOfReplicates(1)
            .numOfRetries(12)
            .build()
        cluster.start()
    }

    @JvmStatic
    @AfterAll
    fun afterAll() {
        cluster.stop()
    }
}
I can't fix it. Can anybody help me?
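Not a confirmed fix, but one client-side knob related to the ConnectionWatchdog messages above is Lettuce's auto-reconnect flag. A minimal sketch of turning it off for tests, assuming spring-data-redis' LettuceConnectionFactory is in use (shown for a standalone connection; the same ClientOptions can be supplied to a cluster configuration, and the class name is illustrative):
import io.lettuce.core.ClientOptions;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public class TestRedisConnectionFactory {

    // Connection factory whose Lettuce client will not keep trying to reconnect
    // after the embedded Redis has been stopped between tests.
    public static LettuceConnectionFactory create(String host, int port) {
        ClientOptions options = ClientOptions.builder()
                .autoReconnect(false) // the ConnectionWatchdog stops rescheduling reconnects
                .build();

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .clientOptions(options)
                .build();

        return new LettuceConnectionFactory(new RedisStandaloneConfiguration(host, port), clientConfig);
    }
}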

Klov installation failed

Currently, I am using the Klov tool to generate reports.
First I installed MongoDB 3.2, then I started the MongoDB server.
Then I tried to run Klov 0.1.0 from the command prompt
using the command: java -jar klov-0.1.0.jar
When the jar was executed, it failed with the exception "APPLICATION FAILED TO START":
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.5.10.RELEASE)
2018-05-03 11:37:02.009 INFO 9308 --- [ main] com.aventstack.klov.Application : Starting Application v0.1.0 on CS69-PC with PID 9308 (E:\klov-0.1.0\klov-0.1.0.jar started by ADMIN in E:\klov-0.1.0)
2018-05-03 11:37:02.017 INFO 9308 --- [ main] com.aventstack.klov.Application : No active profile set, falling back to default profiles: default
2018-05-03 11:37:02.233 INFO 9308 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#1698c449: startup date [Thu May 03 11:37:02 IST 2018]; root of context hierarchy2018-05-03 11:37:13.732 INFO 9308 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/auth/signout],methods=[GET]}" onto public java.lang.String com.aventstack.klov.controllers.UserController.signout(javax.servlet.http.HttpSession)
2018-05-03 11:37:13.736 INFO 9308 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2018-05-03 11:37:13.740 INFO 9308 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2018-05-03 11:37:13.808 INFO 9308 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-05-03 11:37:13.812 INFO 9308 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-05-03 11:37:13.844 INFO 9308 --- [ main] .m.m.a.ExceptionHandlerExceptionResolver : Detected #ExceptionHandler methods in repositoryRestExceptionHandler
2018-05-03 11:37:13.950 INFO 9308 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-05-03 11:37:14.440 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerAdapter : Looking for #ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#1698c449: startup date [Thu May 03 11:37:02 IST 2018]; root of context hierarchy
2018-05-03 11:37:14.469 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}/{propertyId}],methods=[DELETE],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.deletePropertyReferenceId(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,java.lang.String) throws java.lang.Exception
2018-05-03 11:37:14.474 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[GET],produces=[application/x-spring-data-compact+json || text/uri-list]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.followPropertyReferenceCompact(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws java.lang.Exception
2018-05-03 11:37:14.476 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[DELETE],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<? extends org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.deletePropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String) throws java.lang.Exception
2018-05-03 11:37:14.478 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[PATCH || PUT || POST],consumes=[application/json || application/x-spring-data-compact+json || text/uri-list],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<? extends org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.createPropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.http.HttpMethod,org.springframework.hateoas.Resources<java.lang.Object>,java.io.Serializable,java.lang.String) throws java.lang.Exception
2018-05-03 11:37:14.483 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}/{propertyId}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.followPropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,java.lang.String,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws java.lang.Exception
2018-05-03 11:37:14.486 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.followPropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws java.lang.Exception
2018-05-03 11:37:14.495 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.data.rest.webmvc.RepositorySearchesResource org.springframework.data.rest.webmvc.RepositorySearchController.listSearches(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.497 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositorySearchController.executeSearch(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.util.MultiValueMap<java.lang.String, java.lang.Object>,java.lang.String,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.http.HttpHeaders)
2018-05-03 11:37:14.499 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<java.lang.Object> org.springframework.data.rest.webmvc.RepositorySearchController.optionsForSearch(org.springframework.data.rest.webmvc.RootResourceInformation,java.lang.String)
2018-05-03 11:37:14.501 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<java.lang.Object> org.springframework.data.rest.webmvc.RepositorySearchController.headForSearch(org.springframework.data.rest.webmvc.RootResourceInformation,java.lang.String)
2018-05-03 11:37:14.515 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.RepositorySearchController.headForSearches(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.517 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.RepositorySearchController.optionsForSearches(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.519 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[GET],produces=[application/x-spring-data-compact+json]}" onto public org.springframework.hateoas.ResourceSupport org.springframework.data.rest.webmvc.RepositorySearchController.executeSearchCompact(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.http.HttpHeaders,org.springframework.util.MultiValueMap<java.lang.String, java.lang.Object>,java.lang.String,java.lang.String,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler)
2018-05-03 11:37:14.526 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/ || /rest],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.RepositoryLinksResource> org.springframework.data.rest.webmvc.RepositoryController.listRepositories()
2018-05-03 11:37:14.529 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/ || /rest],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.RepositoryController.optionsForRepositories()
2018-05-03 11:37:14.530 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/ || /rest],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryController.headForRepositories()
2018-05-03 11:37:14.537 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.Resource<?>> org.springframework.data.rest.webmvc.RepositoryEntityController.getItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.http.HttpHeaders) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.541 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[PUT],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<? extends org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryEntityController.putItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.PersistentEntityResource,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.data.rest.webmvc.support.ETag,java.lang.String) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.547 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.headForItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.549 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[PATCH],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryEntityController.patchItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.PersistentEntityResource,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.data.rest.webmvc.support.ETag,java.lang.String) throws org.springframework.web.HttpRequestMethodNotSupportedException,org.springframework.data.rest.webmvc.ResourceNotFoundException
2018-05-03 11:37:14.551 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.optionsForCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.553 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.headCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.support.DefaultedPageable) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.561 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.optionsForItemResource(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.564 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[POST],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryEntityController.postCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.PersistentEntityResource,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,java.lang.String) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.568 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[GET],produces=[application/x-spring-data-compact+json || text/uri-list]}" onto public org.springframework.hateoas.Resources<?> org.springframework.data.rest.webmvc.RepositoryEntityController.getCollectionResourceCompact(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws org.springframework.data.rest.webmvc.ResourceNotFoundException,org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.573 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.hateoas.Resources<?> org.springframework.data.rest.webmvc.RepositoryEntityController.getCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws org.springframework.data.rest.webmvc.ResourceNotFoundException,org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.575 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[DELETE],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.deleteItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,org.springframework.data.rest.webmvc.support.ETag) throws org.springframework.data.rest.webmvc.ResourceNotFoundException,org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.585 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile/{repository}],methods=[GET],produces=[application/alps+json || */*]}" onto org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.RootResourceInformation> org.springframework.data.rest.webmvc.alps.AlpsController.descriptor(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.589 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile/{repository}],methods=[OPTIONS],produces=[application/alps+json]}" onto org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.alps.AlpsController.alpsOptions()
2018-05-03 11:37:14.593 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile/{repository}],methods=[GET],produces=[application/schema+json]}" onto public org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.json.JsonSchema> org.springframework.data.rest.webmvc.RepositorySchemaController.schema(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.596 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile],methods=[OPTIONS]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.ProfileController.profileOptions()
2018-05-03 11:37:14.598 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile],methods=[GET]}" onto org.springframework.http.HttpEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.ProfileController.listAllFormsOfMetadata()
2018-05-03 11:37:14.895 INFO 9308 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-05-03 11:37:15.025 ERROR 9308 --- [ main] o.apache.catalina.core.StandardService : Failed to start connector [Connector[HTTP/1.1-80]]
org.apache.catalina.LifecycleException: Failed to start component [Connector[HTTP/1.1-80]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.catalina.core.StandardService.addConnector(StandardService.java:225) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.addPreviouslyRemovedConnectors(TomcatEmbeddedServletContainer.java:250) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.start(TomcatEmbeddedServletContainer.java:193) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.startEmbeddedServletContainer(EmbeddedWebApplicationContext.java:297) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:145) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:546) [spring-context-4.3.14.RELEASE.jar!/:4.3.14.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:693) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:360) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:303) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1118) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1107) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at com.aventstack.klov.Application.main(Application.java:45) [classes!/:0.1.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_121]
at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_121]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [klov-0.1.0.jar:0.1.0]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [klov-0.1.0.jar:0.1.0]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [klov-0.1.0.jar:0.1.0]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [klov-0.1.0.jar:0.1.0]
Caused by: org.apache.catalina.LifecycleException: Protocol handler start failed
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1021) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
... 21 common frames omitted
Caused by: java.net.BindException: Address already in use: bind
at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_121]
at sun.nio.ch.Net.bind(Unknown Source) ~[na:1.8.0_121]
at sun.nio.ch.Net.bind(Unknown Source) ~[na:1.8.0_121]
at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source) ~[na:1.8.0_121]
at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source) ~[na:1.8.0_121]
at org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:210) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.tomcat.util.net.AbstractEndpoint.start(AbstractEndpoint.java:1150) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.coyote.AbstractProtocol.start(AbstractProtocol.java:591) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1018) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
... 22 common frames omitted
2018-05-03 11:37:15.051 INFO 9308 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2018-05-03 11:37:15.088 INFO 9308 --- [ main] utoConfigurationReportLoggingInitializer :
Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
2018-05-03 11:37:15.099 ERROR 9308 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
The Tomcat connector configured to listen on port 80 failed to start. The port may already be in use or the connector may be misconfigured.
Action:
Verify the connector's configuration, identify and stop any process that's listening on port 80, or configure this application to listen on another port.
2018-05-03 11:37:15.105 INFO 9308 --- [ main] ationConfigEmbeddedWebApplicationContext : Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#1698c449: startup date [Thu May 03 11:37:02 IST 2018]; root of context hierarchy
2018-05-03 11:37:15.114 INFO 9308 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-03 11:37:15.129 INFO 9308 --- [ main] org.mongodb.driver.connection : Closed connection [connectionId{localValue:2, serverValue:5}] to localhost:27017 because the pool has been closed.
Please give me any suggestions or a solution.
Well, the error states it plainly:
"The Tomcat connector configured to listen on port 80 failed to start. The port may already be in use or the connector may be misconfigured."
In the Klov folder, locate and open application.properties:
# klov
application.name=LibraBank
application.display.proLabels=false
server.port=80
Change server.port=80 to any other free port, e.g. server.port=2571. Since Klov is a Spring Boot application, you can also override the port at launch, e.g. java -jar klov-0.1.0.jar --server.port=2571.

Enabling SSL on Kafka

I'm trying to connect to a Kafka cluster whose brokers require SSL for client connections. Most clients can communicate with the brokers over SSL, so I know the brokers are set up correctly. We intend to use 2-way SSL authentication and followed these instructions: https://docs.confluent.io/current/tutorials/security_tutorial.html#security-tutorial.
However, I have a Java application that I'd like to connect to the brokers. I think the SSL handshake does not complete, and as a result the request to the broker times out. The same Java application can connect to non-SSL-enabled Kafka brokers without an issue.
Update:
I ran into this when I tried to enable SSL. While debugging, the authentication exception turned out to be null. I can also see that my truststore and keystore are loaded correctly. So how do I troubleshoot this metadata update request timeout further?
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
From
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long maxWaitMs) throws InterruptedException {
When I run the Kafka console producer using the Bitnami Docker image with the same trustStore/keyStore passed as environment variables, it works fine.
This works:
docker run -it -v /Users/kafka/kafka_2.11-1.0.0/bin/kafka.client.keystore.jks:/tmp/keystore.jks -v /Users/kafka/kafka_2.11-1.0.0/bin/kafka.client.truststore.jks:/tmp/truststore.jks -v /Users/kafka/kafka_2.11-1.0.0/bin/client_ssl.properties:/tmp/client.properties bitnami/kafka:1.0.0-r3 kafka-console-producer.sh --broker-list some-elb.elb.us-west-2.amazonaws.com:9094 --topic test --producer.config /tmp/client.properties
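For comparison, a minimal sketch of the SSL-related producer settings on the Java side (the bootstrap address and keystore paths mirror the ones in the post; the passwords and serializers are placeholders, not taken from the original code):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslProducerSketch {
    public static Producer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "some-elb.elb.us-west-2.amazonaws.com:9094");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Same truststore/keystore that the console producer was given.
        props.put("security.protocol", "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/Users/kafka/Cluster-Certs/kafka.client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "<truststore-password>");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/Users/kafka/Cluster-Certs/kafka.client.keystore.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "<keystore-password>");
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "<key-password>");

        return new KafkaProducer<>(props);
    }
}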
Here are the debug logs from my Java client application. I'd appreciate any insight on how to troubleshoot this.
2018-03-13 20:13:38.661 INFO 20653 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2018-03-13 20:13:38.669 INFO 20653 --- [ main] c.i.aggregate.precompute.Application : Started Application in 14.066 seconds (JVM running for 15.12)
2018-03-13 20:13:42.225 INFO 20653 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = all
batch.size = 16384
bootstrap.servers = [some-elb.elb.us-west-2.amazonaws.com:9094]
buffer.memory = 33554432
client.id =
compression.type = lz4
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 2000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /Users/kafka/Cluster-Certs/kafka.client.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /Users/kafka/Cluster-Certs/kafka.client.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = <some class>
2018-03-13 20:13:42.287 TRACE 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Starting the Kafka producer
2018-03-13 20:13:42.841 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bufferpool-wait-time
2018-03-13 20:13:43.062 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name buffer-exhausted-records
2018-03-13 20:13:43.217 DEBUG 20653 --- [ main] org.apache.kafka.clients.Metadata : Updated cluster metadata version 1 to Cluster(id = null, nodes = [some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)], partitions = [])
2018-03-13 20:13:45.670 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name produce-throttle-time
2018-03-13 20:13:45.909 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-closed:
2018-03-13 20:13:45.923 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-created:
2018-03-13 20:13:45.935 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication:
2018-03-13 20:13:45.946 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-authentication:
2018-03-13 20:13:45.958 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent-received:
2018-03-13 20:13:45.968 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent:
2018-03-13 20:13:45.990 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-received:
2018-03-13 20:13:46.005 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name select-time:
2018-03-13 20:13:46.025 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name io-time:
2018-03-13 20:13:46.130 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name batch-size
2018-03-13 20:13:46.139 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name compression-rate
2018-03-13 20:13:46.147 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name queue-time
2018-03-13 20:13:46.156 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name request-time
2018-03-13 20:13:46.165 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name records-per-request
2018-03-13 20:13:46.179 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name record-retries
2018-03-13 20:13:46.189 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name errors
2018-03-13 20:13:46.199 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name record-size
2018-03-13 20:13:46.250 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name batch-split-rate
2018-03-13 20:13:46.275 DEBUG 20653 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-1] Starting Kafka producer I/O thread.
2018-03-13 20:13:46.329 INFO 20653 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0
2018-03-13 20:13:46.333 INFO 20653 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d
2018-03-13 20:13:46.369 DEBUG 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Kafka producer started
2018-03-13 20:13:52.982 TRACE 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Requesting metadata update for topic ssl-txn.
2018-03-13 20:13:52.987 TRACE 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Found least loaded node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:52.987 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Initialize connection to node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null) for sending metadata request
2018-03-13 20:13:52.987 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Initiating connection to node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:53.217 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-sent
2018-03-13 20:13:53.219 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-received
2018-03-13 20:13:53.219 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.latency
2018-03-13 20:13:53.222 DEBUG 20653 --- [ad | producer-1] o.apache.kafka.common.network.Selector : [Producer clientId=producer-1] Created socket with SO_RCVBUF = 33488, SO_SNDBUF = 131376, SO_TIMEOUT = 0 to node -1
2018-03-13 20:13:53.224 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_WRAP channelId -1, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 0
2018-03-13 20:13:53.224 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeWrap -1
2018-03-13 20:13:53.225 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_WRAP channelId -1, handshakeResult Status = OK HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 326, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 0
2018-03-13 20:13:53.226 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_UNWRAP channelId -1, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 326
2018-03-13 20:13:53.226 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeUnwrap -1
2018-03-13 20:13:53.227 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeUnwrap: handshakeStatus NEED_UNWRAP status BUFFER_UNDERFLOW
2018-03-13 20:13:53.227 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_UNWRAP channelId -1, handshakeResult Status = BUFFER_UNDERFLOW HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 0, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 326
2018-03-13 20:13:53.485 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Completed connection to node -1. Fetching API versions.
2018-03-13 20:13:53.485 TRACE 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Found least loaded node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:54.992 DEBUG 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
2018-03-13 20:13:54.992 INFO 20653 --- [ main] c.i.aggregate.precompute.kafka.Producer : sent message in callback
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1124)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:823)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
at com.intuit.aggregate.precompute.kafka.Producer.send(Producer.java:76)
at com.intuit.aggregate.precompute.Application.main(Application.java:58)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
Disconnected from the target VM, address: '127.0.0.1:53161', transport: 'socket'
This issue was due to an incorrect certificate on the brokers. Java has different defaults than Scala/Python for the cipher suites, which is why clients in other languages worked. A Go client had a similar issue, and enabling SSL logging on the brokers exposed the problem.
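On the client side, the JSSE handshake debug output usually points at this kind of certificate or cipher mismatch quickly. A minimal sketch of switching it on programmatically before the producer is created (equivalent to starting the JVM with -Djavax.net.debug=ssl,handshake; the surrounding class is illustrative):
public class SslDebugSketch {
    public static void main(String[] args) {
        // Must be set before the first SSL connection is attempted; the JVM then
        // prints certificate chains, negotiated cipher suites and handshake
        // failures to standard output.
        System.setProperty("javax.net.debug", "ssl,handshake");

        // ... build the KafkaProducer and send a record as usual ...
    }
}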

Failed to wait for initial partition map exchange

After upgrading Apache Ignite from 2.0 to 2.1, I get the warning below.
2017-08-17 10:44:21.699 WARN 10884 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
I use a third-party persistence cache store.
When I remove the cache store configuration, I don't get the warning and everything works fine.
Using the cache store but downgrading from 2.1 to 2.0, I also don't get the warning and everything works fine.
Is there a significant change in 2.1?
Here is my full framework stack:
- spring boot 1.5.6
- spring data jpa
- apache ignite 2.1.0
Here is my full configuration in Java code (I use embedded Ignite in Spring).
I use a partitioned cache with write-behind caching to RDBMS storage via Spring Data JPA.
IgniteConfiguration igniteConfig = new IgniteConfiguration();

CacheConfiguration<Long, Object> cacheConfig = new CacheConfiguration<>();
cacheConfig.setCopyOnRead(false); // for better performance
cacheConfig
        .setWriteThrough(true)
        .setWriteBehindEnabled(true)
        .setWriteBehindBatchSize(1024)
        .setWriteBehindFlushFrequency(10000)
        .setWriteBehindCoalescing(true)
        .setCacheStoreFactory(new CacheStoreImpl()); // CacheStoreImpl uses Spring Data JPA internally
cacheConfig.setName("myService");
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setBackups(2);
cacheConfig.setWriteSynchronizationMode(FULL_ASYNC);
cacheConfig.setNearConfiguration(new NearCacheConfiguration<>()); // use the default configuration
igniteConfig.setCacheConfiguration(cacheConfig);

igniteConfig.setMemoryConfiguration(new MemoryConfiguration()
        .setPageSize(8 * 1024)
        .setMemoryPolicies(new MemoryPolicyConfiguration()
                .setInitialSize(256L * 1024L * 1024L)
                .setMaxSize(1024L * 1024L * 1024L)));

Ignite ignite = IgniteSpring.start(igniteConfig, springApplicationCtx);
ignite.active(true);
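CacheStoreImpl itself is not shown in the post. For readers unfamiliar with write-behind stores, a hypothetical skeleton of such a store (acting as its own factory, with the Spring Data JPA calls only indicated by comments) might look like:
import javax.cache.Cache;
import javax.cache.configuration.Factory;
import org.apache.ignite.cache.store.CacheStore;
import org.apache.ignite.cache.store.CacheStoreAdapter;

// Hypothetical skeleton: implements Factory so an instance can be passed to
// CacheConfiguration.setCacheStoreFactory(...) as in the code above.
public class CacheStoreImpl extends CacheStoreAdapter<Long, Object>
        implements Factory<CacheStore<Long, Object>> {

    @Override
    public CacheStore<Long, Object> create() {
        return this;
    }

    @Override
    public Object load(Long key) {
        // Look the entity up through the Spring Data JPA repository here.
        return null;
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends Object> entry) {
        // Save entry.getKey() / entry.getValue() through the JPA repository here.
    }

    @Override
    public void delete(Object key) {
        // Remove the row for the given key through the JPA repository here.
    }
}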
Here is my full log, using -DIGNITE_QUIET=false:
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Config URL: n/a
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Daemon mode: off
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS: Windows 10 10.0 amd64
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS user: user
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : PID: 684
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Language runtime: Java Platform API Specification ver. 1.8
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM information: Java(TM) SE Runtime Environment 1.8.0_131-b11 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.131-b11
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM total memory: 1.9GB
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Remote Management [restart: off, REST: on, JMX (remote: on, port: 58771, auth: off, ssl: off)]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : IGNITE_HOME=null
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM arguments: [-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=58771, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Djava.rmi.server.hostname=localhost, -Dspring.liveBeansView.mbeanDomain, -Dspring.application.admin.enabled=true, -Dspring.profiles.active=rdbms,multicastIp, -Dapi.port=10010, -Xmx2g, -Xms2g, -DIGNITE_QUIET=false, -Dfile.encoding=UTF-8, -Xbootclasspath:C:\Program Files\Java\jre1.8.0_131\lib\resources.jar;C:\Program Files\Java\jre1.8.0_131\lib\rt.jar;C:\Program Files\Java\jre1.8.0_131\lib\jsse.jar;C:\Program Files\Java\jre1.8.0_131\lib\jce.jar;C:\Program Files\Java\jre1.8.0_131\lib\charsets.jar;C:\Program Files\Java\jre1.8.0_131\lib\jfr.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\cldrdata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\dnsns.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jaccess.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jfxrt.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\localedata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\nashorn.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunec.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunmscapi.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\zipfs.jar]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : System cache's MemoryPolicy size is configured to 40 MB. Use MemoryConfiguration.systemCacheMemorySize property to change the setting.
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Configured caches [in 'sysMemPlc' memoryPolicy: ['ignite-sys-cache'], in 'default' memoryPolicy: ['myCache']]
2017-08-18 11:54:52.592 WARN 684 --- [ pub-#11%null%] o.apache.ignite.internal.GridDiagnostic : This operating system has been tested less rigorously: Windows 10 10.0 amd64. Our team will appreciate the feedback if you experience any problems running ignite in this environment.
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : Configured plugins:
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : ^-- None
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor :
2017-08-18 11:54:52.724 INFO 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
2017-08-18 11:54:52.772 WARN 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
2017-08-18 11:54:52.787 WARN 684 --- [ main] o.a.i.s.c.noop.NoopCheckpointSpi : Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
2017-08-18 11:54:52.811 WARN 684 --- [ main] o.a.i.i.m.c.GridCollisionManager : Collision resolution is disabled (all jobs will be activated upon arrival).
2017-08-18 11:54:52.812 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Security status [authentication=off, tls/ssl=off]
2017-08-18 11:54:53.087 INFO 684 --- [ main] o.a.i.i.p.odbc.SqlListenerProcessor : SQL connector processor has started on TCP port 10800
2017-08-18 11:54:53.157 INFO 684 --- [ main] o.a.i.i.p.r.p.tcp.GridTcpRestProtocol : Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Non-loopback local IPs: 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831, fe80:0:0:0:159d:5c82:b4ca:7630%eth2, fe80:0:0:0:30a3:1c57:3f57:4831%net0, fe80:0:0:0:3857:b492:48ad:1dc%eth4
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Enabled local MACs: 00000000000000E0, 0A0027000004, BCEE7B8B7C00
2017-08-18 11:54:53.404 INFO 684 --- [ main] o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=7d90a0ac-b620-436f-b31c-b538a04b0919]
2017-08-18 11:54:53.409 WARN 684 --- [ main] .s.d.t.i.m.TcpDiscoveryMulticastIpFinder : TcpDiscoveryMulticastIpFinder has no pre-configured addresses (it is recommended in production to specify at least one address in TcpDiscoveryMulticastIpFinder.getAddresses() configuration property)
2017-08-18 11:54:55.068 INFO 684 --- [orker-#34%null%] o.apache.ignite.internal.exchange.time : Started exchange init [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], crd=true, evt=10, node=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], customEvt=null]
2017-08-18 11:54:55.302 INFO 684 --- [orker-#34%null%] o.a.i.i.p.cache.GridCacheProcessor : Started cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL]
2017-08-18 11:55:15.066 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
2017-08-18 11:55:35.070 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Still waiting for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false, reassign=false, discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], topVer=1, nodeId8=7d90a0ac, msg=null, type=NODE_JOINED, tstamp=1503024895045], crd=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], nodeId=7d90a0ac, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=1821989981], init=false, lastVer=null, partReleaseFut=null, exchActions=null, affChangeMsg=null, skipPreload=false, clientOnlyExchange=false, initTs=1503024895057, centralizedAff=false, changeGlobalStateE=null, forcedRebFut=null, done=false, evtLatch=0, remaining=[], super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=733156437]]]
I debugged my code, and my guess is that IgniteSpring cannot inject the @SpringResource fields:
@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private RdbmsCachePersistenceRepository repository;
@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private CacheObjectFactory cacheObjectFactory;
repository and cacheObjectFactory are the same instance, because of the code below:
public interface RdbmsCachePersistenceRepository extends
        JpaRepository<RdbmsCachePersistence, Long>,
        CachePersistenceRepository<RdbmsCachePersistence>,
        CacheObjectFactory {

    @Override
    default CachePersistence createCacheObject(long key, Object value, int partition) {
        return new RdbmsCachePersistence(key, value, partition);
    }
}
RdbmsCachePersistenceRepository itself is implemented by Spring Data JPA.
When I stepped through the code line by line, the Ignite context could not obtain RdbmsCachePersistenceRepository, and I don't know why.
I resolved the problem, but I don't understand why the workaround helps.
I added this dummy line before IgniteSpring.start:
springApplicationCtx.getBean(RdbmsCachePersistenceRepository.class);
My theory is that the Spring resource bean is not yet initialized at the moment the Ignite context looks it up.
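For what it's worth, here is a minimal sketch of that workaround in context (the class and variable names are the ones from my snippets above; the getBean call is the only addition):
// Touch the Spring Data JPA proxy so it is fully initialized before
// Ignite starts and tries to resolve the @SpringResource fields.
springApplicationCtx.getBean(RdbmsCachePersistenceRepository.class);

Ignite ignite = IgniteSpring.start(igniteConfig, springApplicationCtx);
ignite.active(true);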

Turbine AMQP does not receive Hystrix stream

I had a Turbine and Hystrix setup working, but decided to change it over to Turbine AMQP so I could aggregate multiple services into one stream/dashboard.
I have set up a Turbine AMQP server running on localhost:8989, but it doesn't appear to be receiving Hystrix data from the client service. When I hit the Turbine server's URL in my browser, I see data: {"type":"Ping"} repeatedly, even while I am polling the client's Hystrix stream. If I attempt to show the Turbine AMQP stream in the Hystrix Dashboard, I get: Unable to connect to Command Metric Stream.
I have a default install of RabbitMQ running on port 5672.
My client service using Hystrix-AMQP has an application.yml file that looks like this:
spring:
  application:
    name: policy-service
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
spring:
  rabbitmq:
    addresses: ${vcap.services.${PREFIX:}rabbitmq.credentials.uri:amqp://${RABBITMQ_HOST:localhost}:${RABBITMQ_PORT:5672}}
The tail end of the startup log looks like this:
2015-09-14 16:31:13.030 INFO 52844 --- [ main] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30
2015-09-14 16:31:13.047 INFO 52844 --- [ main] c.n.e.EurekaDiscoveryClientConfiguration : Registering application policy-service with eureka with status UP
2015-09-14 16:31:13.194 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'policy-service:8088.errorChannel' has 1 subscriber(s).
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {filter} as a subscriber to the 'cloudBusOutboundFlow.channel#0' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusOutboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {filter} as a subscriber to the 'cloudBusInboundChannel' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusInboundChannel' has 1 subscriber(s).
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#1
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {message-handler} as a subscriber to the 'cloudBusInboundFlow.channel#0' channel
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#2
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter} as a subscriber to the 'cloudBusWiretapChannel' channel
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusWiretapChannel' has 1 subscriber(s).
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#3
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {amqp:outbound-channel-adapter} as a subscriber to the 'cloudBusOutboundChannel' channel
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusOutboundChannel' has 1 subscriber(s).
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#4
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge} as a subscriber to the 'cloudBusAmqpInboundFlow.channel#0' channel
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusAmqpInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#5
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {amqp:outbound-channel-adapter} as a subscriber to the 'hystrixStream' channel
2015-09-14 16:31:13.199 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.hystrixStream' has 1 subscriber(s).
2015-09-14 16:31:13.199 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#6
2015-09-14 16:31:13.219 INFO 52844 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 1073741823
2015-09-14 16:31:13.219 INFO 52844 --- [ main] ApplicationEventListeningMessageProducer : started org.springframework.integration.event.inbound.ApplicationEventListeningMessageProducer#0
2015-09-14 16:31:13.555 INFO 52844 --- [cTaskExecutor-1] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (4640c1c8-ff8f-45d7-8426-19d1b7a4cdb0) durable:false, auto-delete:true, exclusive:true. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
2015-09-14 16:31:13.572 INFO 52844 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#0
2015-09-14 16:31:13.573 INFO 52844 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2015-09-14 16:31:13.576 INFO 52844 --- [ main] c.n.h.c.m.e.HystrixMetricsPoller : Starting HystrixMetricsPoller
2015-09-14 16:31:13.609 INFO 52844 --- [ main] ration$HystrixMetricsPollerConfiguration : Starting poller
2015-09-14 16:31:13.803 INFO 52844 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8088 (http)
2015-09-14 16:31:13.805 INFO 52844 --- [ main] com.ml.springboot.PolicyService : Started PolicyService in 22.544 seconds (JVM running for 23.564)
So it looks like PolicyService successfully connects to the message broker.
The tail end of the Turbine AMQP server's log:
2015-09-14 16:58:05.887 INFO 51944 --- [ main] i.reactivex.netty.server.AbstractServer : Rx server started at port: 8989
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'bootstrap:-1.errorChannel' has 1 subscriber(s).
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge} as a subscriber to the 'hystrixStreamAggregatorInboundFlow.channel#0' channel
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.integration.channel.DirectChannel : Channel 'bootstrap:-1.hystrixStreamAggregatorInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 1073741823
2015-09-14 16:58:06.238 INFO 51944 --- [cTaskExecutor-1] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (spring.cloud.hystrix.stream) durable:false, auto-delete:false, exclusive:false. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
2015-09-14 16:58:06.289 INFO 51944 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#0
2015-09-14 16:58:06.290 INFO 51944 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2015-09-14 16:58:06.434 INFO 51944 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): -1 (http)
Any ideas why the Turbine AMQP server is not receiving communication from the Hystrix AMQP client?
EDIT: The Turbine AMQP application's main class looks like this:
package com.turbine.amqp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.turbine.amqp.EnableTurbineAmqp;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@EnableTurbineAmqp
@EnableDiscoveryClient
public class TurbineAmqpApplication {

    public static void main(String[] args) {
        SpringApplication.run(TurbineAmqpApplication.class, args);
    }
}
Here's its application.yml:
server:
  port: 8989
spring:
  rabbitmq:
    addresses: ${vcap.services.${PREFIX:}rabbitmq.credentials.uri:amqp://${RABBITMQ_HOST:localhost}:${RABBITMQ_PORT:5672}}
Hitting http://localhost:8989/turbine.stream produces a repeating stream of data: {"type":"Ping"} and shows this in the console:
2015-09-15 08:54:37.960 INFO 83480 --- [o-eventloop-3-1] o.s.c.n.t.amqp.TurbineAmqpConfiguration : SSE Request Received
2015-09-15 08:54:38.025 INFO 83480 --- [o-eventloop-3-1] o.s.c.n.t.amqp.TurbineAmqpConfiguration : Starting aggregation
EDIT: The exception below is thrown when I stop listening to the Turbine stream, not when I try to listen with the dashboard.
2015-09-15 08:56:47.934 INFO 83480 --- [o-eventloop-3-3] o.s.c.n.t.amqp.TurbineAmqpConfiguration : SSE Request Received
2015-09-15 08:56:47.946 WARN 83480 --- [o-eventloop-3-3] io.netty.channel.DefaultChannelPipeline : An exception was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.lang.NoSuchMethodError: rx.Observable.collect(Lrx/functions/Func0;Lrx/functions/Action2;)Lrx/Observable;
at com.netflix.turbine.aggregator.StreamAggregator.lambda$null$36(StreamAggregator.java:89)
at rx.internal.operators.OnSubscribeMulticastSelector.call(OnSubscribeMulticastSelector.java:60)
at rx.internal.operators.OnSubscribeMulticastSelector.call(OnSubscribeMulticastSelector.java:40)
at rx.Observable.unsafeSubscribe(Observable.java:8591)
at rx.internal.operators.OperatorMerge$MergeSubscriber.handleNewSource(OperatorMerge.java:190)
at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:160)
at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:96)
at rx.internal.operators.OperatorMap$1.onNext(OperatorMap.java:54)
at rx.internal.operators.OperatorGroupBy$GroupBySubscriber.onNext(OperatorGroupBy.java:173)
at rx.subjects.SubjectSubscriptionManager$SubjectObserver.onNext(SubjectSubscriptionManager.java:224)
at rx.subjects.PublishSubject.onNext(PublishSubject.java:101)
at org.springframework.cloud.netflix.turbine.amqp.Aggregator.handle(Aggregator.java:53)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:112)
at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:102)
at org.springframework.expression.spel.ast.MethodReference.access$000(MethodReference.java:49)
at org.springframework.expression.spel.ast.MethodReference$MethodValueRef.getValue(MethodReference.java:342)
at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:88)
at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:131)
at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:330)
at org.springframework.integration.util.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:164)
at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:276)
at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:142)
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:75)
at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:71)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:99)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:248)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:171)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:119)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:105)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:101)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter.access$400(AmqpInboundChannelAdapter.java:45)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$1.onMessage(AmqpInboundChannelAdapter.java:93)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:756)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:679)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$001(SimpleMessageListenerContainer.java:82)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$1.invokeListener(SimpleMessageListenerContainer.java:167)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.invokeListener(SimpleMessageListenerContainer.java:1241)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:660)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1005)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:989)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:82)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1103)
at java.lang.Thread.run(Thread.java:745)
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: GroupedObservable.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(OnErrorThrowable.java:98)
at rx.internal.operators.OperatorMap$1.onNext(OperatorMap.java:56)
... 58 common frames omitted
My dependencies for turbine-amqp are as follows:
dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-turbine-amqp:1.0.3.RELEASE')
    compile 'org.springframework.boot:spring-boot-starter-web:1.2.5.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-actuator:1.2.5.RELEASE'
    testCompile("org.springframework.boot:spring-boot-starter-test")
}

dependencyManagement {
    imports {
        mavenBom 'org.springframework.cloud:spring-cloud-starter-parent:1.0.2.RELEASE'
    }
}
It has been very hard to find a solution.
Using Spring Cloud 2.1.4.RELEASE I ran into a similar problem.
The root cause is a mismatch between the RabbitMQ exchange names used by spring-cloud-netflix-hystrix-stream and spring-cloud-starter-netflix-turbine-stream.
To solve it:
Check the name of the exchange that gets created in RabbitMQ when you start the service component (the one that declares hystrix-stream).
Then, on the component that declares turbine-stream, update the property
turbine.stream.destination=
In my case:
turbine.stream.destination=hystrixStreamOutput
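To make that concrete, here is a minimal sketch of the Turbine side's application.yml (hystrixStreamOutput is the exchange name I observed; replace it with whatever rabbitmqctl list_exchanges shows for your Hystrix clients):

turbine:
  stream:
    destination: hystrixStreamOutput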
I faced a similar problem and found a solution.
My Spring Cloud version is 2.1.0.RELEASE.
The solution:
Add the properties:
spring.cloud.stream.bindings.turbineStreamInput.destination: hystrixStreamOutput
turbine.stream.enabled: false
Then add this configuration class:
@EnableBinding(TurbineStreamClient.class)
public class TurbineStreamAutoConfiguration {

    @Autowired
    private BindingServiceProperties bindings;

    @Autowired
    private TurbineStreamProperties properties;

    @PostConstruct
    public void init() {
        BindingProperties inputBinding = this.bindings.getBindings()
                .get(TurbineStreamClient.INPUT);
        if (inputBinding == null) {
            this.bindings.getBindings().put(TurbineStreamClient.INPUT,
                    new BindingProperties());
        }
        BindingProperties input = this.bindings.getBindings()
                .get(TurbineStreamClient.INPUT);
        if (input.getDestination() == null) {
            input.setDestination(this.properties.getDestination());
        }
        if (input.getContentType() == null) {
            input.setContentType(this.properties.getContentType());
        }
    }

    @Bean
    public HystrixStreamAggregator hystrixStreamAggregator(ObjectMapper mapper,
            PublishSubject<Map<String, Object>> publisher) {
        return new HystrixStreamAggregator(mapper, publisher);
    }
}
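For completeness, the two properties from this answer could look like this in the Turbine application's application.yml (a sketch only; hystrixStreamOutput must match the destination your Hystrix clients actually publish to):

spring:
  cloud:
    stream:
      bindings:
        turbineStreamInput:
          destination: hystrixStreamOutput
turbine:
  stream:
    enabled: false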