Backpressure on WebClient - applying backpressure to inter-service async web requests - spring-webflux

The description below summarizes my (novice) attempt at applying backpressure with WebClient.
Scenario 1: Applying limitRate() to the reactive MongoDB response. Service #1 runs on port 8080 on my machine.
Code:
@GetMapping(path = "/getBooks/{name}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Book> getBooks(@PathVariable("name") final String name) {
    // SERVICE #1
    return bookRepo.findByBookName(name)
            .log()
            .limitRate(3);
}
Logs:
2022-10-04 13:52:28.919 INFO 38716 --- [ restartedMain] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8080
2022-10-04 13:52:28.932 INFO 38716 --- [ restartedMain] c.example.demo.ReactiveDemoApplication : Started ReactiveDemoApplication in 3.538 seconds (JVM running for 4.198)
2022-10-04 13:58:20.363 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onSubscribe(FluxUsingWhen.UsingWhenSubscriber)
2022-10-04 13:58:20.366 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : request(3)
2022-10-04 13:58:20.424 INFO 38716 --- [ntLoopGroup-3-3] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3, serverValue:276}] to localhost:27017
2022-10-04 13:58:20.474 INFO 38716 --- [ntLoopGroup-3-3] reactor.Flux.UsingWhen.1 : onNext(Book(id=1, bookName=bn2, authorName=an))
2022-10-04 13:58:20.491 INFO 38716 --- [ntLoopGroup-3-3] reactor.Flux.UsingWhen.1 : onNext(Book(id=2, bookName=bn2, authorName=an))
2022-10-04 13:58:20.492 INFO 38716 --- [ntLoopGroup-3-3] reactor.Flux.UsingWhen.1 : onNext(Book(id=3, bookName=bn2, authorName=an))
2022-10-04 13:58:20.554 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : request(3)
2022-10-04 13:58:20.555 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=4, bookName=bn2, authorName=an))
2022-10-04 13:58:20.555 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=5, bookName=bn2, authorName=an))
2022-10-04 13:58:20.555 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=6, bookName=bn2, authorName=an))
and so on .. until :
2022-10-04 13:58:20.561 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=14, bookName=bn2, authorName=an))
2022-10-04 13:58:20.561 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=15, bookName=bn2, authorName=an))
2022-10-04 13:58:20.562 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=16, bookName=bn2, authorName=an))
2022-10-04 13:58:20.566 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onComplete()
2022-10-04 13:58:20.568 INFO 38716 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : request(3)
Scenario 2: Applying limitRate() to Service #2's WebClient response. Service #2 calls Service #1, which runs on port 8080 on my machine.
Code:
Service #1 endpoint:
@GetMapping(path = "/getBooks/{name}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Book> getBooks(@PathVariable("name") final String name) {
    // SERVICE #1
    return bookRepo.findByBookName(name)
            .log();
}
Service #2:
@GetMapping("/getBooks")
public Flux<Book> getBooks() {
    // SERVICE #2
    return webClient.get()
            .uri("http://localhost:8080/api/v1/books/getBooks/bn2")
            .retrieve()
            .bodyToFlux(Book.class)
            .log()
            .limitRate(3);
}
Logs: SERVICE #1
2022-10-04 16:36:45.386 INFO 17520 --- [ restartedMain] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8080
2022-10-04 16:36:45.397 INFO 17520 --- [ restartedMain] c.example.demo.ReactiveDemoApplication : Started ReactiveDemoApplication in 4.894 seconds (JVM running for 6.311)
2022-10-04 16:42:21.718 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onSubscribe(FluxUsingWhen.UsingWhenSubscriber)
2022-10-04 16:42:21.720 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : request(1)
2022-10-04 16:42:21.782 INFO 17520 --- [ntLoopGroup-3-3] org.mongodb.driver.connection : Opened connection [connectionId{localValue:3, serverValue:7}] to localhost:27017
2022-10-04 16:42:21.840 INFO 17520 --- [ntLoopGroup-3-3] reactor.Flux.UsingWhen.1 : onNext(Book(id=1, bookName=bn2, authorName=an))
2022-10-04 16:42:21.913 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : request(31)
2022-10-04 16:42:21.914 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=2, bookName=bn2, authorName=an))
2022-10-04 16:42:21.915 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=3, bookName=bn2, authorName=an))
2022-10-04 16:42:21.916 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=4, bookName=bn2, authorName=an))
2022-10-04 16:42:21.917 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=5, bookName=bn2, authorName=an))
2022-10-04 16:42:21.918 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=6, bookName=bn2, authorName=an))
2022-10-04 16:42:21.919 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=7, bookName=bn2, authorName=an))
2022-10-04 16:42:21.920 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=8, bookName=bn2, authorName=an))
2022-10-04 16:42:21.921 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=10, bookName=bn2, authorName=an))
2022-10-04 16:42:21.921 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=11, bookName=bn2, authorName=an))
2022-10-04 16:42:21.922 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=12, bookName=bn2, authorName=an))
2022-10-04 16:42:21.923 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=13, bookName=bn2, authorName=an))
2022-10-04 16:42:21.923 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=14, bookName=bn2, authorName=an))
2022-10-04 16:42:21.924 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=15, bookName=bn2, authorName=an))
2022-10-04 16:42:21.925 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onNext(Book(id=16, bookName=bn2, authorName=an))
2022-10-04 16:42:21.931 INFO 17520 --- [ctor-http-nio-2] reactor.Flux.UsingWhen.1 : onComplete()
Logs: SERVICE #2
2022-10-04 16:45:26.291 INFO 27496 --- [ restartedMain] c.example.demo.ReactiveDemo3Application : Started ReactiveDemo3Application in 3.507 seconds (JVM running for 4.1)
2022-10-04 16:45:43.159 INFO 27496 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.1 : onSubscribe(MonoFlatMapMany.FlatMapManyMain)
2022-10-04 16:45:43.163 INFO 27496 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.1 : request(3)
2022-10-04 16:45:43.845 INFO 27496 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.1 : onNext(Book(id=1, bookName=bn2, authorName=an))
2022-10-04 16:45:43.875 INFO 27496 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.1 : onNext(Book(id=2, bookName=bn2, authorName=an))
2022-10-04 16:45:43.876 INFO 27496 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.1 : onNext(Book(id=3, bookName=bn2, authorName=an))
2022-10-04 16:45:45.924 INFO 27496 --- [ parallel-2] reactor.Flux.MonoFlatMapMany.1 : request(3)
and so on..
I was expecting request(3) as in Scenario 1, but this time I see request(1) and request(31).
Even after adding a delay in SERVICE #2, the logs remain the same:
return webClient.get()
        .uri("http://localhost:8080/api/v1/books/getBooks/bn2")
        .retrieve()
        .bodyToFlux(Book.class)
        .log()
        .limitRate(3)
        .delayElements(Duration.ofSeconds(1));
What am I missing?
EDIT 1: I am getting similar results with subscriber-controlled backpressure.
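For reference, "subscriber-controlled backpressure" can be expressed with the JDK's own java.util.concurrent.Flow API, independent of Reactor and WebClient. The sketch below (BatchingSubscriber is an ad-hoc name, not from the question's code) requests items in batches of 3, which is the same demand pattern limitRate(3) presents to its upstream:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;

// A subscriber that requests items in batches of 3 -- the demand
// pattern limitRate(3) asks its upstream for.
class BatchingSubscriber implements Flow.Subscriber<Integer> {
    final List<Integer> received = new ArrayList<>();
    final CountDownLatch done = new CountDownLatch(1);
    private Flow.Subscription subscription;
    private int deliveredInBatch = 0;

    @Override public void onSubscribe(Flow.Subscription s) {
        subscription = s;
        s.request(3);                      // initial demand: request(3)
    }

    @Override public void onNext(Integer item) {
        received.add(item);
        if (++deliveredInBatch == 3) {     // batch consumed: request(3) again
            deliveredInBatch = 0;
            subscription.request(3);
        }
    }

    @Override public void onError(Throwable t) { done.countDown(); }
    @Override public void onComplete()          { done.countDown(); }
}
```

In-process (e.g. with a SubmissionPublisher) this produces exactly the request(3), request(3), ... pattern from Scenario 1; the question is what happens to that demand once an HTTP hop sits between subscriber and publisher.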

It can be hard to control backpressure using limitRate and delayElements. If your goal is more granular control over the number of requests, I would suggest looking at the resilience4j RateLimiter. It is fully reactive and can be integrated into the flow using RateLimiterOperator:
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.ratelimiter.RateLimiterRegistry;
import io.github.resilience4j.reactor.ratelimiter.operator.RateLimiterOperator;

RateLimiterConfig rateLimiterConfig = RateLimiterConfig.custom()
        .limitRefreshPeriod(Duration.ofSeconds(1))
        .limitForPeriod(2) // 2 requests per second
        .timeoutDuration(Duration.ofMinutes(30))
        .build();

RateLimiterRegistry registry = RateLimiterRegistry.of(rateLimiterConfig);
RateLimiter rateLimiter = registry.rateLimiter("WebClient");

webClient.get()
        .uri("http://localhost:8080/api/v1/books/getBooks/bn2")
        .retrieve()
        .bodyToFlux(Book.class)
        .transform(RateLimiterOperator.of(rateLimiter));
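Conceptually, the RateLimiterConfig above describes a refill-per-period quota. A minimal plain-Java sketch of that idea is below; this is an illustration only, not resilience4j's implementation (which, among other things, parks callers for up to timeoutDuration instead of failing fast), and SimpleRateLimiter is a made-up name:

```java
import java.time.Duration;

// Token-bucket-style sketch: limitForPeriod permits become available
// at the start of every limitRefreshPeriod.
class SimpleRateLimiter {
    private final int limitForPeriod;
    private final long refreshPeriodNanos;
    private long periodStart = System.nanoTime();
    private int permitsLeft;

    SimpleRateLimiter(int limitForPeriod, Duration limitRefreshPeriod) {
        this.limitForPeriod = limitForPeriod;
        this.refreshPeriodNanos = limitRefreshPeriod.toNanos();
        this.permitsLeft = limitForPeriod;
    }

    // Grab one permit if the current period's quota is not yet spent.
    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        if (now - periodStart >= refreshPeriodNanos) {
            periodStart = now;             // a new period has begun: refill
            permitsLeft = limitForPeriod;
        }
        if (permitsLeft == 0) {
            return false;                  // quota spent until the next refresh
        }
        permitsLeft--;
        return true;
    }
}
```

With limitForPeriod=2, the first two acquisitions in a period succeed and the third is rejected until the period refreshes, which is the pacing the WebClient flow gets from the operator.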

Related

Klov installation failed

Currently, I am using the Klov tool to generate reports.
First I installed MongoDB 3.2, then I started the MongoDB server.
Then I tried to run klov 0.1.0 from the command prompt using:
java -jar klov-0.1.0.jar
When the jar executed, it failed with the exception "APPLICATION FAILED TO START":
:: Spring Boot :: (v1.5.10.RELEASE)
2018-05-03 11:37:02.009 INFO 9308 --- [ main] com.aventstack.klov.Application : Starting Application v0.1.0 on CS69-PC with PID 9308 (E:\klov-0.1.0\klov-0.1.0.jar started by ADMIN in E:\klov-0.1.0)
2018-05-03 11:37:02.017 INFO 9308 --- [ main] com.aventstack.klov.Application : No active profile set, falling back to default profiles: default
2018-05-03 11:37:02.233 INFO 9308 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#1698c449: startup date [Thu May 03 11:37:02 IST 2018]; root of context hierarchy
2018-05-03 11:37:13.732 INFO 9308 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/auth/signout],methods=[GET]}" onto public java.lang.String com.aventstack.klov.controllers.UserController.signout(javax.servlet.http.HttpSession)
2018-05-03 11:37:13.736 INFO 9308 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2018-05-03 11:37:13.740 INFO 9308 --- [ main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)
2018-05-03 11:37:13.808 INFO 9308 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-05-03 11:37:13.812 INFO 9308 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-05-03 11:37:13.844 INFO 9308 --- [ main] .m.m.a.ExceptionHandlerExceptionResolver : Detected #ExceptionHandler methods in repositoryRestExceptionHandler
2018-05-03 11:37:13.950 INFO 9308 --- [ main] o.s.w.s.handler.SimpleUrlHandlerMapping : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2018-05-03 11:37:14.440 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerAdapter : Looking for #ControllerAdvice: org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#1698c449: startup date [Thu May 03 11:37:02 IST 2018]; root of context hierarchy
2018-05-03 11:37:14.469 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}/{propertyId}],methods=[DELETE],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.deletePropertyReferenceId(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,java.lang.String) throws java.lang.Exception
2018-05-03 11:37:14.474 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[GET],produces=[application/x-spring-data-compact+json || text/uri-list]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.followPropertyReferenceCompact(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws java.lang.Exception
2018-05-03 11:37:14.476 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[DELETE],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<? extends org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.deletePropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String) throws java.lang.Exception
2018-05-03 11:37:14.478 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[PATCH || PUT || POST],consumes=[application/json || application/x-spring-data-compact+json || text/uri-list],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<? extends org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.createPropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.http.HttpMethod,org.springframework.hateoas.Resources<java.lang.Object>,java.io.Serializable,java.lang.String) throws java.lang.Exception
2018-05-03 11:37:14.483 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}/{propertyId}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.followPropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,java.lang.String,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws java.lang.Exception
2018-05-03 11:37:14.486 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}/{property}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryPropertyReferenceController.followPropertyReference(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,java.lang.String,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws java.lang.Exception
2018-05-03 11:37:14.495 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.data.rest.webmvc.RepositorySearchesResource org.springframework.data.rest.webmvc.RepositorySearchController.listSearches(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.497 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositorySearchController.executeSearch(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.util.MultiValueMap<java.lang.String, java.lang.Object>,java.lang.String,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.http.HttpHeaders)
2018-05-03 11:37:14.499 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<java.lang.Object> org.springframework.data.rest.webmvc.RepositorySearchController.optionsForSearch(org.springframework.data.rest.webmvc.RootResourceInformation,java.lang.String)
2018-05-03 11:37:14.501 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<java.lang.Object> org.springframework.data.rest.webmvc.RepositorySearchController.headForSearch(org.springframework.data.rest.webmvc.RootResourceInformation,java.lang.String)
2018-05-03 11:37:14.515 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.RepositorySearchController.headForSearches(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.517 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.RepositorySearchController.optionsForSearches(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.519 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/search/{search}],methods=[GET],produces=[application/x-spring-data-compact+json]}" onto public org.springframework.hateoas.ResourceSupport org.springframework.data.rest.webmvc.RepositorySearchController.executeSearchCompact(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.http.HttpHeaders,org.springframework.util.MultiValueMap<java.lang.String, java.lang.Object>,java.lang.String,java.lang.String,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler)
2018-05-03 11:37:14.526 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/ || /rest],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.RepositoryLinksResource> org.springframework.data.rest.webmvc.RepositoryController.listRepositories()
2018-05-03 11:37:14.529 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/ || /rest],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.RepositoryController.optionsForRepositories()
2018-05-03 11:37:14.530 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/ || /rest],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryController.headForRepositories()
2018-05-03 11:37:14.537 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.Resource<?>> org.springframework.data.rest.webmvc.RepositoryEntityController.getItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.http.HttpHeaders) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.541 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[PUT],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<? extends org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryEntityController.putItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.PersistentEntityResource,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.data.rest.webmvc.support.ETag,java.lang.String) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.547 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.headForItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.549 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[PATCH],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryEntityController.patchItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.PersistentEntityResource,java.io.Serializable,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,org.springframework.data.rest.webmvc.support.ETag,java.lang.String) throws org.springframework.web.HttpRequestMethodNotSupportedException,org.springframework.data.rest.webmvc.ResourceNotFoundException
2018-05-03 11:37:14.551 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.optionsForCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.553 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[HEAD],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.headCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.support.DefaultedPageable) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.561 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[OPTIONS],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.optionsForItemResource(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.564 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[POST],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.RepositoryEntityController.postCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.PersistentEntityResource,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler,java.lang.String) throws org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.568 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[GET],produces=[application/x-spring-data-compact+json || text/uri-list]}" onto public org.springframework.hateoas.Resources<?> org.springframework.data.rest.webmvc.RepositoryEntityController.getCollectionResourceCompact(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws org.springframework.data.rest.webmvc.ResourceNotFoundException,org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.573 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}],methods=[GET],produces=[application/hal+json || application/json]}" onto public org.springframework.hateoas.Resources<?> org.springframework.data.rest.webmvc.RepositoryEntityController.getCollectionResource(org.springframework.data.rest.webmvc.RootResourceInformation,org.springframework.data.rest.webmvc.support.DefaultedPageable,org.springframework.data.domain.Sort,org.springframework.data.rest.webmvc.PersistentEntityResourceAssembler) throws org.springframework.data.rest.webmvc.ResourceNotFoundException,org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.575 INFO 9308 --- [ main] o.s.d.r.w.RepositoryRestHandlerMapping : Mapped "{[/rest/{repository}/{id}],methods=[DELETE],produces=[application/hal+json || application/json]}" onto public org.springframework.http.ResponseEntity<?> org.springframework.data.rest.webmvc.RepositoryEntityController.deleteItemResource(org.springframework.data.rest.webmvc.RootResourceInformation,java.io.Serializable,org.springframework.data.rest.webmvc.support.ETag) throws org.springframework.data.rest.webmvc.ResourceNotFoundException,org.springframework.web.HttpRequestMethodNotSupportedException
2018-05-03 11:37:14.585 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile/{repository}],methods=[GET],produces=[application/alps+json || */*]}" onto org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.RootResourceInformation> org.springframework.data.rest.webmvc.alps.AlpsController.descriptor(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.589 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile/{repository}],methods=[OPTIONS],produces=[application/alps+json]}" onto org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.alps.AlpsController.alpsOptions()
2018-05-03 11:37:14.593 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile/{repository}],methods=[GET],produces=[application/schema+json]}" onto public org.springframework.http.HttpEntity<org.springframework.data.rest.webmvc.json.JsonSchema> org.springframework.data.rest.webmvc.RepositorySchemaController.schema(org.springframework.data.rest.webmvc.RootResourceInformation)
2018-05-03 11:37:14.596 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile],methods=[OPTIONS]}" onto public org.springframework.http.HttpEntity<?> org.springframework.data.rest.webmvc.ProfileController.profileOptions()
2018-05-03 11:37:14.598 INFO 9308 --- [ main] o.s.d.r.w.BasePathAwareHandlerMapping : Mapped "{[/rest/profile],methods=[GET]}" onto org.springframework.http.HttpEntity<org.springframework.hateoas.ResourceSupport> org.springframework.data.rest.webmvc.ProfileController.listAllFormsOfMetadata()
2018-05-03 11:37:14.895 INFO 9308 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-05-03 11:37:15.025 ERROR 9308 --- [ main] o.apache.catalina.core.StandardService : Failed to start connector [Connector[HTTP/1.1-80]]
org.apache.catalina.LifecycleException: Failed to start component [Connector[HTTP/1.1-80]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.catalina.core.StandardService.addConnector(StandardService.java:225) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.addPreviouslyRemovedConnectors(TomcatEmbeddedServletContainer.java:250) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.start(TomcatEmbeddedServletContainer.java:193) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.startEmbeddedServletContainer(EmbeddedWebApplicationContext.java:297) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:145) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:546) [spring-context-4.3.14.RELEASE.jar!/:4.3.14.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:693) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:360) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:303) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1118) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1107) [spring-boot-1.5.10.RELEASE.jar!/:1.5.10.RELEASE]
at com.aventstack.klov.Application.main(Application.java:45) [classes!/:0.1.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_121]
at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_121]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [klov-0.1.0.jar:0.1.0]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [klov-0.1.0.jar:0.1.0]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [klov-0.1.0.jar:0.1.0]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [klov-0.1.0.jar:0.1.0]
Caused by: org.apache.catalina.LifecycleException: Protocol handler start failed
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1021) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
... 21 common frames omitted
Caused by: java.net.BindException: Address already in use: bind
at sun.nio.ch.Net.bind0(Native Method) ~[na:1.8.0_121]
at sun.nio.ch.Net.bind(Unknown Source) ~[na:1.8.0_121]
at sun.nio.ch.Net.bind(Unknown Source) ~[na:1.8.0_121]
at sun.nio.ch.ServerSocketChannelImpl.bind(Unknown Source) ~[na:1.8.0_121]
at sun.nio.ch.ServerSocketAdaptor.bind(Unknown Source) ~[na:1.8.0_121]
at org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:210) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.tomcat.util.net.AbstractEndpoint.start(AbstractEndpoint.java:1150) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.coyote.AbstractProtocol.start(AbstractProtocol.java:591) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1018) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
... 22 common frames omitted
2018-05-03 11:37:15.051 INFO 9308 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2018-05-03 11:37:15.088 INFO 9308 --- [ main] utoConfigurationReportLoggingInitializer :
Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
2018-05-03 11:37:15.099 ERROR 9308 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
The Tomcat connector configured to listen on port 80 failed to start. The port may already be in use or the connector may be misconfigured.
Action:
Verify the connector's configuration, identify and stop any process that's listening on port 80, or configure this application to listen on another port.
2018-05-03 11:37:15.105 INFO 9308 --- [ main] ationConfigEmbeddedWebApplicationContext : Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#1698c449: startup date [Thu May 03 11:37:02 IST 2018]; root of context hierarchy
2018-05-03 11:37:15.114 INFO 9308 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-03 11:37:15.129 INFO 9308 --- [ main] org.mongodb.driver.connection : Closed connection [connectionId{localValue:2, serverValue:5}] to localhost:27017 because the pool has been closed.
Please give me any suggestions or a solution.
Well, the error is:
"The Tomcat connector configured to listen on port 80 failed to start. The port may already be in use or the connector may be misconfigured."
In the Klov folder, locate and open application.properties:
# klov
application.name=LibraBank
application.display.proLabels=false
server.port=80
Change server.port=80 to any other free port, e.g. server.port=2571.
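The underlying java.net.BindException can be reproduced in isolation. This hypothetical snippet (plain JDK, no Spring or Tomcat) binds an OS-assigned free port and then tries to bind it a second time, which fails the same way Tomcat's connector does when server.port is already taken:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortInUseDemo {
    public static void main(String[] args) throws IOException {
        // Bind an OS-assigned free port first (port 0 = pick any free port).
        try (ServerSocket first = new ServerSocket(0)) {
            int port = first.getLocalPort();
            // A second bind to the same port fails, just like Tomcat's
            // connector when the configured server.port is already taken.
            try (ServerSocket second = new ServerSocket(port)) {
                System.out.println("unexpected: second bind succeeded");
            } catch (IOException e) {
                // Prints e.g. "BindException: Address already in use"
                System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
            }
        }
    }
}
```

Changing server.port simply moves the application to a port no other process holds, which is why the fix works.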

Enabling SSL on Kafka

I'm trying to connect to a Kafka cluster where the brokers require SSL from clients. Most clients can communicate with the brokers over SSL, so I know the brokers are set up correctly. We intend to use two-way SSL authentication and followed these instructions: https://docs.confluent.io/current/tutorials/security_tutorial.html#security-tutorial.
However, I have a Java application that I'd like to connect to the brokers. I think the SSL handshake is not completing, and as a result the request to the broker times out. The same Java application can connect to non-SSL-enabled Kafka brokers without an issue.
Update:
I ran into this when I tried to enable SSL. While debugging, the authentication exception turned out to be null. I can also see that my truststore and keystore are loaded correctly. So how do I troubleshoot this metadata update request timeout further?
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
thrown from:
private ClusterAndWaitTime waitOnMetadata(String topic, Integer partition, long maxWaitMs) throws InterruptedException {
When I run the Kafka console producer using the Bitnami Docker image with the same truststore/keystore passed in, it works fine.
This works:
docker run -it -v /Users/kafka/kafka_2.11-1.0.0/bin/kafka.client.keystore.jks:/tmp/keystore.jks -v /Users/kafka/kafka_2.11-1.0.0/bin/kafka.client.truststore.jks:/tmp/truststore.jks -v /Users/kafka/kafka_2.11-1.0.0/bin/client_ssl.properties:/tmp/client.properties bitnami/kafka:1.0.0-r3 kafka-console-producer.sh --broker-list some-elb.elb.us-west-2.amazonaws.com:9094 --topic test --producer.config /tmp/client.properties
Here are the debug logs from my Java client application. I'd appreciate any insight on how to troubleshoot this.
2018-03-13 20:13:38.661 INFO 20653 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2018-03-13 20:13:38.669 INFO 20653 --- [ main] c.i.aggregate.precompute.Application : Started Application in 14.066 seconds (JVM running for 15.12)
2018-03-13 20:13:42.225 INFO 20653 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = all
batch.size = 16384
bootstrap.servers = [some-elb.elb.us-west-2.amazonaws.com:9094]
buffer.memory = 33554432
client.id =
compression.type = lz4
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 2000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /Users/kafka/Cluster-Certs/kafka.client.keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /Users/kafka/Cluster-Certs/kafka.client.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = <some class>
2018-03-13 20:13:42.287 TRACE 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Starting the Kafka producer
2018-03-13 20:13:42.841 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bufferpool-wait-time
2018-03-13 20:13:43.062 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name buffer-exhausted-records
2018-03-13 20:13:43.217 DEBUG 20653 --- [ main] org.apache.kafka.clients.Metadata : Updated cluster metadata version 1 to Cluster(id = null, nodes = [some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)], partitions = [])
2018-03-13 20:13:45.670 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name produce-throttle-time
2018-03-13 20:13:45.909 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-closed:
2018-03-13 20:13:45.923 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name connections-created:
2018-03-13 20:13:45.935 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name successful-authentication:
2018-03-13 20:13:45.946 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name failed-authentication:
2018-03-13 20:13:45.958 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent-received:
2018-03-13 20:13:45.968 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-sent:
2018-03-13 20:13:45.990 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name bytes-received:
2018-03-13 20:13:46.005 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name select-time:
2018-03-13 20:13:46.025 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name io-time:
2018-03-13 20:13:46.130 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name batch-size
2018-03-13 20:13:46.139 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name compression-rate
2018-03-13 20:13:46.147 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name queue-time
2018-03-13 20:13:46.156 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name request-time
2018-03-13 20:13:46.165 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name records-per-request
2018-03-13 20:13:46.179 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name record-retries
2018-03-13 20:13:46.189 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name errors
2018-03-13 20:13:46.199 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name record-size
2018-03-13 20:13:46.250 DEBUG 20653 --- [ main] org.apache.kafka.common.metrics.Metrics : Added sensor with name batch-split-rate
2018-03-13 20:13:46.275 DEBUG 20653 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-1] Starting Kafka producer I/O thread.
2018-03-13 20:13:46.329 INFO 20653 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0
2018-03-13 20:13:46.333 INFO 20653 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d
2018-03-13 20:13:46.369 DEBUG 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Kafka producer started
2018-03-13 20:13:52.982 TRACE 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Requesting metadata update for topic ssl-txn.
2018-03-13 20:13:52.987 TRACE 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Found least loaded node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:52.987 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Initialize connection to node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null) for sending metadata request
2018-03-13 20:13:52.987 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Initiating connection to node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:53.217 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-sent
2018-03-13 20:13:53.219 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.bytes-received
2018-03-13 20:13:53.219 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--1.latency
2018-03-13 20:13:53.222 DEBUG 20653 --- [ad | producer-1] o.apache.kafka.common.network.Selector : [Producer clientId=producer-1] Created socket with SO_RCVBUF = 33488, SO_SNDBUF = 131376, SO_TIMEOUT = 0 to node -1
2018-03-13 20:13:53.224 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_WRAP channelId -1, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 0
2018-03-13 20:13:53.224 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeWrap -1
2018-03-13 20:13:53.225 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_WRAP channelId -1, handshakeResult Status = OK HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 326, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 0
2018-03-13 20:13:53.226 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_UNWRAP channelId -1, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 326
2018-03-13 20:13:53.226 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeUnwrap -1
2018-03-13 20:13:53.227 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake handshakeUnwrap: handshakeStatus NEED_UNWRAP status BUFFER_UNDERFLOW
2018-03-13 20:13:53.227 TRACE 20653 --- [ad | producer-1] o.a.k.common.network.SslTransportLayer : SSLHandshake NEED_UNWRAP channelId -1, handshakeResult Status = BUFFER_UNDERFLOW HandshakeStatus = NEED_UNWRAP
bytesConsumed = 0 bytesProduced = 0, appReadBuffer pos 0, netReadBuffer pos 0, netWriteBuffer pos 326
2018-03-13 20:13:53.485 DEBUG 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Completed connection to node -1. Fetching API versions.
2018-03-13 20:13:53.485 TRACE 20653 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Found least loaded node some-elb.elb.us-west-2.amazonaws.com:9094 (id: -1 rack: null)
2018-03-13 20:13:54.992 DEBUG 20653 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
2018-03-13 20:13:54.992 INFO 20653 --- [ main] c.i.aggregate.precompute.kafka.Producer : sent message in callback
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1124)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:823)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
at com.intuit.aggregate.precompute.kafka.Producer.send(Producer.java:76)
at com.intuit.aggregate.precompute.Application.main(Application.java:58)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 2000 ms.
Disconnected from the target VM, address: '127.0.0.1:53161', transport: 'socket'
This issue was due to an incorrect cert on the brokers. Java has different defaults than Scala/Python for the cipher suites, which is why the clients in other languages worked. Go had a similar issue; enabling SSL logging on the brokers caught the problem.
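Since the answer above attributes the failure to Java's cipher defaults differing from other clients, a quick way to see what a stock JVM would offer is to print its default SSL parameters. This is a sketch using only the JDK's SSLContext API (no Kafka classes); comparing its output against the broker cert's supported suites can reveal a mismatch:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class DefaultTlsParams {
    public static void main(String[] args) throws Exception {
        // The JVM-wide default SSL parameters, which a plain Java client
        // falls back to when no explicit cipher list is configured.
        SSLParameters params = SSLContext.getDefault().getDefaultSSLParameters();
        System.out.println("Protocols: " + String.join(", ", params.getProtocols()));
        System.out.println("Cipher suites:");
        for (String suite : params.getCipherSuites()) {
            System.out.println("  " + suite);
        }
    }
}
```

Running the client with -Djavax.net.debug=ssl,handshake would similarly show the full handshake negotiation on the client side.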

Failed to wait for initial partition map exchange

After upgrading Apache Ignite from 2.0 to 2.1, I got the warning below.
2017-08-17 10:44:21.699 WARN 10884 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
I use a third-party persistent cache store.
When I remove the cacheStore configuration, I don't get the warning and everything works fine.
Keeping the cacheStore but downgrading from 2.1 to 2.0 also works fine, with no warning.
Is there a significant change in 2.1?
Here is my full framework stack:
- spring boot 1.5.6
- spring data jpa
- apache ignite 2.1.0
Here is my full configuration in Java code (I use embedded Ignite in Spring).
I use a partitioned cache, with write-behind to RDBMS storage via Spring Data JPA.
IgniteConfiguration igniteConfig = new IgniteConfiguration();

CacheConfiguration<Long, Object> cacheConfig = new CacheConfiguration<>();
cacheConfig.setCopyOnRead(false); // for better performance
cacheConfig
    .setWriteThrough(true)
    .setWriteBehindEnabled(true)
    .setWriteBehindBatchSize(1024)
    .setWriteBehindFlushFrequency(10000)
    .setWriteBehindCoalescing(true)
    .setCacheStoreFactory(new CacheStoreImpl()); // CacheStoreImpl uses Spring Data JPA internally
cacheConfig.setName("myService");
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setBackups(2);
cacheConfig.setWriteSynchronizationMode(FULL_ASYNC);
cacheConfig.setNearConfiguration(new NearCacheConfiguration<>()); // use default configuration
igniteConfig.setCacheConfiguration(cacheConfig);

igniteConfig.setMemoryConfiguration(new MemoryConfiguration()
    .setPageSize(8 * 1024)
    .setMemoryPolicies(new MemoryPolicyConfiguration()
        .setInitialSize(256L * 1024L * 1024L)
        .setMaxSize(1024L * 1024L * 1024L)));

Ignite ignite = IgniteSpring.start(igniteConfig, springApplicationCtx);
ignite.active(true);
Here is my full log using -DIGNITE_QUIET=false:
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Config URL: n/a
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Daemon mode: off
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS: Windows 10 10.0 amd64
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS user: user
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : PID: 684
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Language runtime: Java Platform API Specification ver. 1.8
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM information: Java(TM) SE Runtime Environment 1.8.0_131-b11 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.131-b11
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM total memory: 1.9GB
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Remote Management [restart: off, REST: on, JMX (remote: on, port: 58771, auth: off, ssl: off)]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : IGNITE_HOME=null
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM arguments: [-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=58771, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Djava.rmi.server.hostname=localhost, -Dspring.liveBeansView.mbeanDomain, -Dspring.application.admin.enabled=true, -Dspring.profiles.active=rdbms,multicastIp, -Dapi.port=10010, -Xmx2g, -Xms2g, -DIGNITE_QUIET=false, -Dfile.encoding=UTF-8, -Xbootclasspath:C:\Program Files\Java\jre1.8.0_131\lib\resources.jar;C:\Program Files\Java\jre1.8.0_131\lib\rt.jar;C:\Program Files\Java\jre1.8.0_131\lib\jsse.jar;C:\Program Files\Java\jre1.8.0_131\lib\jce.jar;C:\Program Files\Java\jre1.8.0_131\lib\charsets.jar;C:\Program Files\Java\jre1.8.0_131\lib\jfr.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\cldrdata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\dnsns.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jaccess.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jfxrt.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\localedata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\nashorn.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunec.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunmscapi.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\zipfs.jar]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : System cache's MemoryPolicy size is configured to 40 MB. Use MemoryConfiguration.systemCacheMemorySize property to change the setting.
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Configured caches [in 'sysMemPlc' memoryPolicy: ['ignite-sys-cache'], in 'default' memoryPolicy: ['myCache']]
2017-08-18 11:54:52.592 WARN 684 --- [ pub-#11%null%] o.apache.ignite.internal.GridDiagnostic : This operating system has been tested less rigorously: Windows 10 10.0 amd64. Our team will appreciate the feedback if you experience any problems running ignite in this environment.
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : Configured plugins:
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : ^-- None
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor :
2017-08-18 11:54:52.724 INFO 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
2017-08-18 11:54:52.772 WARN 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
2017-08-18 11:54:52.787 WARN 684 --- [ main] o.a.i.s.c.noop.NoopCheckpointSpi : Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
2017-08-18 11:54:52.811 WARN 684 --- [ main] o.a.i.i.m.c.GridCollisionManager : Collision resolution is disabled (all jobs will be activated upon arrival).
2017-08-18 11:54:52.812 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Security status [authentication=off, tls/ssl=off]
2017-08-18 11:54:53.087 INFO 684 --- [ main] o.a.i.i.p.odbc.SqlListenerProcessor : SQL connector processor has started on TCP port 10800
2017-08-18 11:54:53.157 INFO 684 --- [ main] o.a.i.i.p.r.p.tcp.GridTcpRestProtocol : Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Non-loopback local IPs: 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831, fe80:0:0:0:159d:5c82:b4ca:7630%eth2, fe80:0:0:0:30a3:1c57:3f57:4831%net0, fe80:0:0:0:3857:b492:48ad:1dc%eth4
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Enabled local MACs: 00000000000000E0, 0A0027000004, BCEE7B8B7C00
2017-08-18 11:54:53.404 INFO 684 --- [ main] o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=7d90a0ac-b620-436f-b31c-b538a04b0919]
2017-08-18 11:54:53.409 WARN 684 --- [ main] .s.d.t.i.m.TcpDiscoveryMulticastIpFinder : TcpDiscoveryMulticastIpFinder has no pre-configured addresses (it is recommended in production to specify at least one address in TcpDiscoveryMulticastIpFinder.getAddresses() configuration property)
2017-08-18 11:54:55.068 INFO 684 --- [orker-#34%null%] o.apache.ignite.internal.exchange.time : Started exchange init [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], crd=true, evt=10, node=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], customEvt=null]
2017-08-18 11:54:55.302 INFO 684 --- [orker-#34%null%] o.a.i.i.p.cache.GridCacheProcessor : Started cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL]
2017-08-18 11:55:15.066 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
2017-08-18 11:55:35.070 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Still waiting for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false, reassign=false, discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], topVer=1, nodeId8=7d90a0ac, msg=null, type=NODE_JOINED, tstamp=1503024895045], crd=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], nodeId=7d90a0ac, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=1821989981], init=false, lastVer=null, partReleaseFut=null, exchActions=null, affChangeMsg=null, skipPreload=false, clientOnlyExchange=false, initTs=1503024895057, centralizedAff=false, changeGlobalStateE=null, forcedRebFut=null, done=false, evtLatch=0, remaining=[], super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=733156437]]]
Debugging my code, I suspect IgniteSpring cannot inject the @SpringResource beans:
@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private RdbmsCachePersistenceRepository repository;

@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private CacheObjectFactory cacheObjectFactory;
repository and cacheObjectFactory are the same instance, as in the code below:
public interface RdbmsCachePersistenceRepository extends
        JpaRepository<RdbmsCachePersistence, Long>,
        CachePersistenceRepository<RdbmsCachePersistence>,
        CacheObjectFactory {

    @Override
    default CachePersistence createCacheObject(long key, Object value, int partition) {
        return new RdbmsCachePersistence(key, value, partition);
    }
}
RdbmsCachePersistenceRepository is implemented by Spring Data JPA.
When I step through the code line by line, the Ignite context cannot obtain RdbmsCachePersistenceRepository, and I don't know why.
I resolved this problem, but I don't know why the following fixes it.
I added this dummy line before IgniteSpring.start:
springApplicationCtx.getBean(RdbmsCachePersistenceRepository.class);
I think the Spring resource bean was not yet initialized when the Ignite context looked it up.
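As a toy analogy for why the eager getBean() call helps (this is not Ignite's actual internals): in a lazily-initialized registry, a component that snapshots the registry before anything has requested the bean sees nothing, while an eager lookup beforehand makes the bean visible. All names below are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Lazy "bean registry": a bean only exists after the first getBean() call.
public class LazyRegistryDemo {
    static final Map<String, Object> beans = new HashMap<>();

    static Object getBean(String name, Supplier<Object> factory) {
        return beans.computeIfAbsent(name, k -> factory.get());
    }

    public static void main(String[] args) {
        Supplier<Object> repoFactory = () -> "repository instance";

        // A snapshot taken before any lookup misses the bean entirely:
        System.out.println("before eager getBean: " + beans.containsKey("repo"));

        // The dummy getBean() call forces initialization...
        getBean("repo", repoFactory);

        // ...so a later snapshot (the framework starting up) finds it:
        System.out.println("after eager getBean: " + beans.containsKey("repo"));
    }
}
```

This matches the observed behavior: calling springApplicationCtx.getBean(...) before IgniteSpring.start forces the lazy repository bean into existence before Ignite resolves its @SpringResource injections.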

Double connections for one consumer

I am a new user of RabbitMQ and I really enjoy it, but I have an issue (it doesn't throw any error and doesn't affect anything except my mind...).
Each time I run a consumer, it creates two connections. I can't figure out why, so I am asking for your help.
I am using Spring Boot and Spring AMQP (maybe it is because of Spring...).
Here is the code:
receiver-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/rabbit http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd">

    <rabbit:connection-factory id="connectionFactory" host="localhost" username="admin" password="admin" />

    <bean id="receiver" class="com.test.Receiver" />

    <bean id="messageListener" class="org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter">
        <constructor-arg name="delegate" ref="receiver" />
        <constructor-arg name="defaultListenerMethod" value="receiveMessage" />
    </bean>

    <bean id="container" class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
        <property name="connectionFactory" ref="connectionFactory" />
        <property name="queueNames" value="AMQP-PoC" />
        <property name="messageListener" ref="messageListener" />
        <property name="defaultRequeueRejected" value="false" />
    </bean>

</beans>
AMQPPoCReceiverApplication.java
@SpringBootApplication
@ImportResource("classpath:com.test/rabbit-receiver-context.xml")
public class AMQPPoCReceiverApplication implements CommandLineRunner {

    private AnnotationConfigApplicationContext context;

    @Override
    public void run(String... strings) throws Exception {
        context = new AnnotationConfigApplicationContext(AMQPPoCReceiverApplication.class);
        System.out.println("Waiting for message");
    }

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        this.context.close();
    }

    public static void main(String[] args) {
        SpringApplication.run(AMQPPoCReceiverApplication.class, args);
    }
}
Receiver.java
public class Receiver {

    public void receiveMessage(String message) {
        System.out.println("Message received : " + message);
    }
}
Here are the logs at startup (notice the lines wrapped in '**'):
2016-02-18 11:32:51.956 INFO 10196 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2016-02-18 11:32:51.966 INFO 10196 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase -2147482648
2016-02-18 11:32:51.967 INFO 10196 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
**2016-02-18 11:32:52.062 INFO 10196 --- [cTaskExecutor-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: SimpleConnection#2069bb0a [delegate=amqp://admin#127.0.0.1:5672/]**
2016-02-18 11:32:52.148 INFO 10196 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 52752 (http)
2016-02-18 11:32:52.153 INFO 10196 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#57bf85b2: startup date [Thu Feb 18 11:32:52 GMT+01:00 2016]; root of context hierarchy
**2016-02-18 11:32:52.320 INFO 10196 --- [ main] o.s.b.f.xml.XmlBeanDefinitionReader : Loading XML bean definitions from class path resource [com.test/receiver-context.xml]**
2016-02-18 11:32:52.357 INFO 10196 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2016-02-18 11:32:52.362 INFO 10196 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.amqp.rabbit.annotation.RabbitBootstrapConfiguration' of type [class org.springframework.amqp.rabbit.annotation.RabbitBootstrapConfiguration$$EnhancerBySpringCGLIB$$eccd4a65] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2016-02-18 11:32:52.487 INFO 10196 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2016-02-18 11:32:52.489 INFO 10196 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase -2147482648
2016-02-18 11:32:52.489 INFO 10196 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
**2016-02-18 11:32:52.498 INFO 10196 --- [cTaskExecutor-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: SimpleConnection#768748cf [delegate=amqp://admin#127.0.0.1:5672/]**
Waiting for message
2016-02-18 11:32:52.505 INFO 10196 --- [ main] com.test.AMQPPoCReceiverApplication : Started AMQPPoCReceiverApplication in 3.509 seconds (JVM running for 6.961)
And here are the double connections:
If I stop the client, it closes both (that is why I am sure it's two connections for the same consumer).
If you need more information, ask here and I will reply as soon as possible.
Thank you all for any kind of help.
The answer is simple: I created two contexts in the same application.
new AnnotationConfigApplicationContext(AMQPPoCReceiverApplication.class);
and
SpringApplication.run(AMQPPoCReceiverApplication.class, args);
Only create one and it's done!
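A minimal sketch of the fix, using the class name from the question (the `@SpringBootApplication` annotation is an assumption; the question only shows the bootstrap calls). Keeping a single bootstrap call means only one application context, and therefore one AMQP connection per consumer, is created:

```java
@SpringBootApplication
public class AMQPPoCReceiverApplication {

    public static void main(String[] args) {
        // Wrong: building a context manually AND calling SpringApplication.run
        // creates two contexts, each opening its own AMQP connection:
        // new AnnotationConfigApplicationContext(AMQPPoCReceiverApplication.class);
        // SpringApplication.run(AMQPPoCReceiverApplication.class, args);

        // Right: let Spring Boot create the one and only context.
        SpringApplication.run(AMQPPoCReceiverApplication.class, args);
    }
}
```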

Turbine AMQP does not receive Hystrix stream

I had a Turbine and Hystrix setup working, but decided to change it over to Turbine AMQP so I could aggregate multiple services into one stream/dashboard.
I have set up a Turbine AMQP server running on localhost:8989, but it doesn't appear to be getting Hystrix data from the client service. When I hit the Turbine server's IP in my browser, I see data: {"type":"Ping"} repeatedly, even while I am polling the client's Hystrix stream URL. If I attempt to show the Turbine AMQP stream in the Hystrix Dashboard, I get: Unable to connect to Command Metric Stream.
I have a default install of RabbitMQ running on port 5672.
My client service using Hystrix-AMQP has an application.yml file that looks like this:
spring:
  application:
    name: policy-service
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
spring:
  rabbitmq:
    addresses: ${vcap.services.${PREFIX:}rabbitmq.credentials.uri:amqp://${RABBITMQ_HOST:localhost}:${RABBITMQ_PORT:5672}}
The tail end of the startup log looks like this:
2015-09-14 16:31:13.030 INFO 52844 --- [ main] com.netflix.discovery.DiscoveryClient : Starting heartbeat executor: renew interval is: 30
2015-09-14 16:31:13.047 INFO 52844 --- [ main] c.n.e.EurekaDiscoveryClientConfiguration : Registering application policy-service with eureka with status UP
2015-09-14 16:31:13.194 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'policy-service:8088.errorChannel' has 1 subscriber(s).
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {filter} as a subscriber to the 'cloudBusOutboundFlow.channel#0' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusOutboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {filter} as a subscriber to the 'cloudBusInboundChannel' channel
2015-09-14 16:31:13.195 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusInboundChannel' has 1 subscriber(s).
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#1
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {message-handler} as a subscriber to the 'cloudBusInboundFlow.channel#0' channel
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#2
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter} as a subscriber to the 'cloudBusWiretapChannel' channel
2015-09-14 16:31:13.196 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusWiretapChannel' has 1 subscriber(s).
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#3
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {amqp:outbound-channel-adapter} as a subscriber to the 'cloudBusOutboundChannel' channel
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusOutboundChannel' has 1 subscriber(s).
2015-09-14 16:31:13.197 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#4
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge} as a subscriber to the 'cloudBusAmqpInboundFlow.channel#0' channel
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.cloudBusAmqpInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#5
2015-09-14 16:31:13.198 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {amqp:outbound-channel-adapter} as a subscriber to the 'hystrixStream' channel
2015-09-14 16:31:13.199 INFO 52844 --- [ main] o.s.integration.channel.DirectChannel : Channel 'policy-service:8088.hystrixStream' has 1 subscriber(s).
2015-09-14 16:31:13.199 INFO 52844 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#6
2015-09-14 16:31:13.219 INFO 52844 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 1073741823
2015-09-14 16:31:13.219 INFO 52844 --- [ main] ApplicationEventListeningMessageProducer : started org.springframework.integration.event.inbound.ApplicationEventListeningMessageProducer#0
2015-09-14 16:31:13.555 INFO 52844 --- [cTaskExecutor-1] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (4640c1c8-ff8f-45d7-8426-19d1b7a4cdb0) durable:false, auto-delete:true, exclusive:true. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
2015-09-14 16:31:13.572 INFO 52844 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#0
2015-09-14 16:31:13.573 INFO 52844 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2015-09-14 16:31:13.576 INFO 52844 --- [ main] c.n.h.c.m.e.HystrixMetricsPoller : Starting HystrixMetricsPoller
2015-09-14 16:31:13.609 INFO 52844 --- [ main] ration$HystrixMetricsPollerConfiguration : Starting poller
2015-09-14 16:31:13.803 INFO 52844 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8088 (http)
2015-09-14 16:31:13.805 INFO 52844 --- [ main] com.ml.springboot.PolicyService : Started PolicyService in 22.544 seconds (JVM running for 23.564)
So it looks like PolicyService successfully connects to the message broker.
The Turbine AMQP server's end of log:
2015-09-14 16:58:05.887 INFO 51944 --- [ main] i.reactivex.netty.server.AbstractServer : Rx server started at port: 8989
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'bootstrap:-1.errorChannel' has 1 subscriber(s).
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge} as a subscriber to the 'hystrixStreamAggregatorInboundFlow.channel#0' channel
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.integration.channel.DirectChannel : Channel 'bootstrap:-1.hystrixStreamAggregatorInboundFlow.channel#0' has 1 subscriber(s).
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
2015-09-14 16:58:05.991 INFO 51944 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 1073741823
2015-09-14 16:58:06.238 INFO 51944 --- [cTaskExecutor-1] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (spring.cloud.hystrix.stream) durable:false, auto-delete:false, exclusive:false. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
2015-09-14 16:58:06.289 INFO 51944 --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter : started org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter#0
2015-09-14 16:58:06.290 INFO 51944 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 2147483647
2015-09-14 16:58:06.434 INFO 51944 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): -1 (http)
Any ideas why the Turbine AMQP server is not receiving communication from the Hystrix AMQP client?
EDIT: Turbine-AMQP main looks like:
package com.turbine.amqp;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.turbine.amqp.EnableTurbineAmqp;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableAutoConfiguration
@EnableTurbineAmqp
@EnableDiscoveryClient
public class TurbineAmqpApplication {
    public static void main(String[] args) {
        SpringApplication.run(TurbineAmqpApplication.class, args);
    }
}
Here's its application.yml:
server:
  port: 8989
spring:
  rabbitmq:
    addresses: ${vcap.services.${PREFIX:}rabbitmq.credentials.uri:amqp://${RABBITMQ_HOST:localhost}:${RABBITMQ_PORT:5672}}
Hitting http://localhost:8989/turbine.stream produces a repeating stream of data: {"type":"Ping"}
and shows this in console:
2015-09-15 08:54:37.960 INFO 83480 --- [o-eventloop-3-1] o.s.c.n.t.amqp.TurbineAmqpConfiguration : SSE Request Received
2015-09-15 08:54:38.025 INFO 83480 --- [o-eventloop-3-1] o.s.c.n.t.amqp.TurbineAmqpConfiguration : Starting aggregation
EDIT: The below exception is thrown when I stop listening to the turbine stream, not when I try to listen with the dashboard.
2015-09-15 08:56:47.934 INFO 83480 --- [o-eventloop-3-3] o.s.c.n.t.amqp.TurbineAmqpConfiguration : SSE Request Received
2015-09-15 08:56:47.946 WARN 83480 --- [o-eventloop-3-3] io.netty.channel.DefaultChannelPipeline : An exception was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.lang.NoSuchMethodError: rx.Observable.collect(Lrx/functions/Func0;Lrx/functions/Action2;)Lrx/Observable;
at com.netflix.turbine.aggregator.StreamAggregator.lambda$null$36(StreamAggregator.java:89)
at rx.internal.operators.OnSubscribeMulticastSelector.call(OnSubscribeMulticastSelector.java:60)
at rx.internal.operators.OnSubscribeMulticastSelector.call(OnSubscribeMulticastSelector.java:40)
at rx.Observable.unsafeSubscribe(Observable.java:8591)
at rx.internal.operators.OperatorMerge$MergeSubscriber.handleNewSource(OperatorMerge.java:190)
at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:160)
at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:96)
at rx.internal.operators.OperatorMap$1.onNext(OperatorMap.java:54)
at rx.internal.operators.OperatorGroupBy$GroupBySubscriber.onNext(OperatorGroupBy.java:173)
at rx.subjects.SubjectSubscriptionManager$SubjectObserver.onNext(SubjectSubscriptionManager.java:224)
at rx.subjects.PublishSubject.onNext(PublishSubject.java:101)
at org.springframework.cloud.netflix.turbine.amqp.Aggregator.handle(Aggregator.java:53)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:112)
at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:102)
at org.springframework.expression.spel.ast.MethodReference.access$000(MethodReference.java:49)
at org.springframework.expression.spel.ast.MethodReference$MethodValueRef.getValue(MethodReference.java:342)
at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:88)
at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:131)
at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:330)
at org.springframework.integration.util.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:164)
at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:276)
at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:142)
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:75)
at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:71)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:99)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:248)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:171)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:119)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:105)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:277)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:239)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:101)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter.access$400(AmqpInboundChannelAdapter.java:45)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$1.onMessage(AmqpInboundChannelAdapter.java:93)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:756)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:679)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$001(SimpleMessageListenerContainer.java:82)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$1.invokeListener(SimpleMessageListenerContainer.java:167)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.invokeListener(SimpleMessageListenerContainer.java:1241)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:660)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1005)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:989)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$700(SimpleMessageListenerContainer.java:82)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1103)
at java.lang.Thread.run(Thread.java:745)
Caused by: rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext value: GroupedObservable.class
at rx.exceptions.OnErrorThrowable.addValueAsLastCause(OnErrorThrowable.java:98)
at rx.internal.operators.OperatorMap$1.onNext(OperatorMap.java:56)
... 58 common frames omitted
My dependencies for turbine-amqp are as follows:
dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-turbine-amqp:1.0.3.RELEASE')
    compile 'org.springframework.boot:spring-boot-starter-web:1.2.5.RELEASE'
    compile 'org.springframework.boot:spring-boot-starter-actuator:1.2.5.RELEASE'
    testCompile("org.springframework.boot:spring-boot-starter-test")
}
dependencyManagement {
    imports {
        mavenBom 'org.springframework.cloud:spring-cloud-starter-parent:1.0.2.RELEASE'
    }
}
It has been very hard to find a solution.
Using Spring Cloud 2.1.4.RELEASE I ran into a similar problem.
The root cause is a mismatch between the RabbitMQ exchange names used by spring-cloud-netflix-hystrix-stream and spring-cloud-starter-netflix-turbine-stream.
To solve it:
Look at the name of the exchange that is created when you start the service component (the one that declares hystrix-stream).
Then, on the component that declares turbine-stream, update the property
turbine.stream.destination=
In my case:
turbine.stream.destination=hystrixStreamOutput
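For reference, the same setting expressed in the Turbine server's application.yml would look like this (the destination value here is the exchange name from my setup; adjust it to whatever exchange your hystrix-stream client actually declares in RabbitMQ):

```yaml
turbine:
  stream:
    # Must match the exchange name declared by the hystrix-stream client
    destination: hystrixStreamOutput
```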
I faced a similar problem and found a solution.
My Spring Cloud version is 2.1.0.RELEASE.
The solution:
Add the properties
spring.cloud.stream.bindings.turbineStreamInput.destination: hystrixStreamOutput
turbine.stream.enabled: false
and add this auto-configuration:
@EnableBinding(TurbineStreamClient.class)
public class TurbineStreamAutoConfiguration {

    @Autowired
    private BindingServiceProperties bindings;

    @Autowired
    private TurbineStreamProperties properties;

    @PostConstruct
    public void init() {
        BindingProperties inputBinding = this.bindings.getBindings()
                .get(TurbineStreamClient.INPUT);
        if (inputBinding == null) {
            this.bindings.getBindings().put(TurbineStreamClient.INPUT,
                    new BindingProperties());
        }
        BindingProperties input = this.bindings.getBindings()
                .get(TurbineStreamClient.INPUT);
        if (input.getDestination() == null) {
            input.setDestination(this.properties.getDestination());
        }
        if (input.getContentType() == null) {
            input.setContentType(this.properties.getContentType());
        }
    }

    @Bean
    public HystrixStreamAggregator hystrixStreamAggregator(ObjectMapper mapper,
            PublishSubject<Map<String, Object>> publisher) {
        return new HystrixStreamAggregator(mapper, publisher);
    }
}