Application runtime exceptions are not being sent to errorChannel, or ServiceActivator not able to listen on errorChannel - error-handling

After listening on a Kafka topic using @StreamListener, upon a RuntimeException neither the global errorChannel nor the topic-specific error channel (topic.group.errors) receives any error message. The @ServiceActivator is not receiving anything.
POM dependencies (Greenwich.RELEASE):
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-schema</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
application.properties
spring.cloud.stream.bindings.input.destination=input
spring.cloud.stream.bindings.input.group=myGroup
spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true
spring.cloud.stream.kafka.streams.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName=input_deadletter
spring.cloud.stream.kafka.streams.bindings.input.consumer.autoCommitOnError=true
spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.bindings.output.destination=output
spring.cloud.stream.bindings.output.contentType=application/*+avro
spring.cloud.stream.bindings.output.producer.useNativeEncoding=true
spring.cloud.stream.bindings.output.producer.errorChannelEnabled=true
spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
spring.cloud.stream.schemaRegistryClient.endpoint.schema.avro.schema-locations=classpath:avro/*.avsc
spring.cloud.stream.kafka.streams.binder.brokers=localhost
spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000
spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url=http://localhost:8082
spring.cloud.stream.kafka.streams.binder.application-id=myGroup
spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq
I can see in the logs that the service activators are registered and subscribed to the error channels.
All the streams are stopped and go into shutdown mode once the runtime exception occurs.
Registering beans for JMX exposure on startup
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageChannel input.myGroup.errors
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageChannel,name="input-myGroup.errors"': registering with JMX server as MBean [org.springframework.integration:type=MessageChannel,name="input.myGroup.errors"]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageChannel errorChannel
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageChannel,name=errorChannel': registering with JMX server as MBean [org.springframework.integration:type=MessageChannel,name=errorChannel]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageChannel nullChannel
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageChannel,name=nullChannel': registering with JMX server as MBean [org.springframework.integration:type=MessageChannel,name=nullChannel]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageHandler errorLogger
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageHandler,name=errorLogger,bean=internal': registering with JMX server as MBean [org.springframework.integration:type=MessageHandler,name=errorLogger,bean=internal]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageHandler myTopicListener.error.serviceActivator
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageHandler,name=myTopicListener.error.serviceActivator,bean=endpoint': registering with JMX server as MBean [org.springframework.integration:type=MessageHandler,name=myTopicListener.error.serviceActivator,bean=endpoint]
org.springframework.integration.monitor.IntegrationMBeanExporter - Registering MessageHandler myTopicListener.errorGlobal.serviceActivator
org.springframework.integration.monitor.IntegrationMBeanExporter - Located managed bean 'org.springframework.integration:type=MessageHandler,name=myTopicListener.errorGlobal.serviceActivator,bean=endpoint': registering with JMX server as MBean [org.springframework.integration:type=MessageHandler,name=myTopicListener.errorGlobal.serviceActivator,bean=endpoint]
org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor - No @KafkaListener annotations found on bean type: class org.springf
@SendTo(MyStreams.OUTPUT)
public KStream<Key, MyEntity> process(KStream<Key, Envelope> myStreamObject) {
return myStreamObject.mapValues(this::transform);
}
#ServiceActivator(inputChannel = "input.myGroup.errors") //channel name 'input.myGroup.errors'
public void error(Message<?> message) {
System.out.println("Handling ERROR: " + message);
}
#ServiceActivator(inputChannel = "errorChannel")
public void errorGlobal(Message<?> message) {
System.out.println("Handling ERROR: GLOBAL " + message);
}

The Kafka Streams binder is not based on MessageChannels, so there is no Message<?> to send to the error channel.
The standard Kafka binder is a MessageChannel-based binder and does support the error channel.
With Kafka Streams you have to implement your own error handling.
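For example, a minimal sketch of handling the exception inside the topology itself (the @StreamListener binding constant and the choice to simply drop failed records are assumptions for illustration, not something stated in the question):
@StreamListener(MyStreams.INPUT) // hypothetical input binding on the same bindings interface
@SendTo(MyStreams.OUTPUT)
public KStream<Key, MyEntity> process(KStream<Key, Envelope> myStreamObject) {
    return myStreamObject
            .mapValues(envelope -> {
                try {
                    return transform(envelope);
                } catch (RuntimeException e) {
                    // your own error handling: log, send the raw record to a
                    // dead-letter topic via a KafkaTemplate, increment a metric, ...
                    return null;
                }
            })
            // drop the failed records so the rest of the stream keeps running
            .filter((key, value) -> value != null);
}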

Related

How to setup jms in Red Hat middleware to RabbitMQ

I run Red Hat middleware with CodeReady Studio 12.16.0.GA in a standalone Spring Boot environment as a local Camel context. I have a local RabbitMQ running in Docker.
I have failed to set up any in/out JMS scenario with Camel using the tutorials on the web.
All the tutorials use pure Java Spring configuration rather than a camel-context.xml.
Please help me configure camel-context.xml and all the resources to use RabbitMQ, or just any JMS.
Thanks in advance.
Here is a simple camel-context.xml:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans https://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring https://camel.apache.org/schema/spring/camel-spring.xsd">
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
<route id="simple-route">
<from id="_to1" uri="jms:myQeue?connectionFactory=#myConnectionFactory&amp;jmsMessageType=Text"/>
<log id="route-log" message=">>> ${body}"/>
</route>
</camelContext>
</beans>
and a simple Spring application to run it:
package org.mycompany;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;
@SpringBootApplication
@ImportResource({"classpath:spring/camel-context.xml"})
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
But it failed with an exception:
Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: jms://myQeue?connectionFactory=%23myConnectionFactory&jmsMessageType=Text due to: No bean could be found in the registry for: myConnectionFactory of type: javax.jms.ConnectionFactory
I have added registration of a ConnectionFactory:
ConnectionFactory myCF = new ConnectionFactory();
myCF.setUsername("guest");
myCF.setPassword("guest");
myCF.setVirtualHost("/");
myCF.setHost("localhost");
myCF.setPort(5672);
SimpleRegistry reg = new SimpleRegistry();
reg.put("myConnectionFactory", myCF);
CamelContext camContext = new DefaultCamelContext(reg);
but a new exception arose, I think because of using com.rabbitmq.client.ConnectionFactory:
Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route simple-route: Route(simple-route)[[From[jms:queue:myQeue?connectionFactory... because of connectionFactory must be specified
How do I register a javax.jms.ConnectionFactory in the registry?
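One hedged option, assuming the RabbitMQ JMS client (the com.rabbitmq.jms:rabbitmq-jms artifact) is added to the project: its RMQConnectionFactory implements javax.jms.ConnectionFactory, and since Camel on Spring uses the application context as its registry, declaring it as a bean named myConnectionFactory lets the jms: endpoint resolve #myConnectionFactory. The configuration class name below is just for illustration. A sketch:
import javax.jms.ConnectionFactory;
import com.rabbitmq.jms.admin.RMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JmsConfig {

    // The bean name must match the #myConnectionFactory reference in the endpoint URI.
    @Bean(name = "myConnectionFactory")
    public ConnectionFactory myConnectionFactory() {
        RMQConnectionFactory cf = new RMQConnectionFactory(); // JMS wrapper around the AMQP broker
        cf.setUsername("guest");
        cf.setPassword("guest");
        cf.setVirtualHost("/");
        cf.setHost("localhost");
        cf.setPort(5672);
        return cf;
    }
}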

Weblogic Jaxws deployment - class does not support JDK1.5

WebLogic Server Version: 10.3.6.0
Spring version: 3.2.1.RELEASE
Java JDK 1.6
I am trying to deploy a Spring application as a WAR that uses JAX-WS to a WebLogic server.
The application works well with Jetty. However, when starting the deployed app on WebLogic, the following exception occurs:
Caused By: java.lang.UnsupportedOperationException: This class does not support JDK1.5
at weblogic.xml.jaxp.RegistryTransformerFactory.setFeature(RegistryTransformerFactory.java:317)
at com.sun.xml.ws.util.xml.XmlUtil.newTransformerFactory(XmlUtil.java:392)
at com.sun.xml.ws.util.xml.XmlUtil.newTransformerFactory(XmlUtil.java:400)
at com.sun.xml.ws.util.xml.XmlUtil.<clinit>(XmlUtil.java:233)
at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:36)
...
Maven pom.xml:
<dependency>
<groupId>com.sun.xml.ws</groupId>
<artifactId>jaxws-rt</artifactId>
<version>2.2.10</version>
</dependency>
<dependency>
<groupId>org.jvnet.jax-ws-commons.spring</groupId>
<artifactId>jaxws-spring</artifactId>
<version>1.9</version>
</dependency>
weblogic.xml:
<weblogic-web-app>
<context-root>/MyApp</context-root>
<container-descriptor>
<prefer-web-inf-classes>true</prefer-web-inf-classes>
<show-archived-real-path-enabled>true</show-archived-real-path-enabled>
</container-descriptor>
</weblogic-web-app>
It is fixed by changing weblogic.xml to:
<container-descriptor>
<prefer-web-inf-classes>false</prefer-web-inf-classes>
<show-archived-real-path-enabled>true</show-archived-real-path-enabled>
<prefer-application-packages>
<package-name>com.sun.xml.ws.server.*</package-name>
</prefer-application-packages>
</container-descriptor>
And in the init servlet (if you use the old style) you should change the way you acquire the context:
private static WebApplicationContext context;
@Override
public void contextInitialized(ServletContextEvent sce) {
ServletContext sc = sce.getServletContext();
this.context = WebApplicationContextUtils.getWebApplicationContext(sc);
...
}
public static WebApplicationContext getApplicationContext(){
return context;
}
That fixes it.

Jax-WS Axis2 Proxy over SSL error using ProxySelector

In my project I have the following structure: a module that produces a WAR file and can be deployed inside a Tomcat application server. This module has dependencies on the Axis2 libraries:
<dependency>
<groupId>org.apache.axis2</groupId>
<artifactId>axis2</artifactId>
</dependency>
<dependency>
<groupId>org.apache.axis2</groupId>
<artifactId>axis2-transport-http</artifactId>
</dependency>
<dependency>
<groupId>org.apache.axis2</groupId>
<artifactId>axis2-webapp</artifactId>
<type>war</type>
</dependency>
This module contains an axis2.xml file in the conf folder under WEB-INF.
This module also has a dependency on another module that is packaged as a JAR.
Now in my web module, in the code for my stub, I have the following code:
GazelleObjectValidator.getInstance().validateObject();
The XcpdValidationService is a class in the JAR module (the dependency), and this method calls an external web service over SSL, using a proxy.
This web service client is generated by the JAX-WS RI.
But this class doesn't use the axis2.xml configuration from the parent module; it uses its own Axis configuration, the default one, where my proxy is not configured...
@WebEndpoint(name = "GazelleObjectValidatorPort")
public GazelleObjectValidator getGazelleObjectValidatorPort() {
return super.getPort(new QName("http://ws.validator.sch.gazelle.ihe.net/", "GazelleObjectValidatorPort"), GazelleObjectValidator.class);
}
The method itself looks like this:
@WebMethod
@WebResult(name = "validationResult", targetNamespace = "")
@RequestWrapper(localName = "validateObject", targetNamespace = "http://ws.validator.sch.gazelle.ihe.net/", className = "net.ihe.gazelle.schematron.ValidateObject")
@ResponseWrapper(localName = "validateObjectResponse", targetNamespace = "http://ws.validator.sch.gazelle.ihe.net/", className = "net.ihe.gazelle.schematron.ValidateObjectResponse")
public String validateObject(
@WebParam(name = "base64ObjectToValidate", targetNamespace = "")
String base64ObjectToValidate,
@WebParam(name = "xmlReferencedStandard", targetNamespace = "")
String xmlReferencedStandard,
@WebParam(name = "xmlMetadata", targetNamespace = "")
String xmlMetadata)
throws SOAPException_Exception;
My GazelleObjectValidatorService is generated by the following plugin:
<plugin>
<groupId>org.apache.axis2</groupId>
<artifactId>axis2-aar-maven-plugin</artifactId>
<version>${axis2.version}</version>
<extensions>true</extensions>
<executions>
<execution>
<id>package-aar</id>
<phase>prepare-package</phase>
<goals>
<goal>aar</goal>
</goals>
</execution>
</executions>
<configuration>
<fileSets>
<fileSet>
<directory>${project.basedir}/src/main/resources/wsdl</directory>
<outputDirectory>META-INF</outputDirectory>
<includes>
<include>**/*.xsd</include>
</includes>
</fileSet>
</fileSets>
<servicesXmlFile>${project.build.outputDirectory}/axis2/services.xml</servicesXmlFile>
<wsdlFile>${project.build.outputDirectory}/wsdl/ClientConnectorService.wsdl</wsdlFile>
</configuration>
</plugin>
I tried to override the transportSender in my axis2.xml configuration with my own MyCommonsHTTPTransportSender:
<transportSender name="http"
class="eu.epsos.pt.cc.MyCommonsHTTPTransportSender">
<parameter name="PROTOCOL">HTTP/1.1</parameter>
<parameter name="Transfer-Encoding">chunked</parameter>
and
<transportSender name="https"
class="eu.epsos.pt.cc.MyCommonsHTTPTransportSender">
<parameter name="PROTOCOL">HTTP/1.1</parameter>
<parameter name="Transfer-Encoding">chunked</parameter>
</transportSender>
that knows about the proxy.
But unfortunately, since the web service client is inside the JAR that is a dependency of the WAR, it doesn't seem to use my axis2.xml configuration; it uses its own Axis configuration, which doesn't know about the proxy.
This causes the following error, where you can see clearly that it uses the default CommonsHTTPTransportSender and therefore fails:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.commons.httpclient.protocol.ReflectionSocketFactory.createSocket(ReflectionSocketFactory.java:140)
at org.apache.commons.httpclient.protocol.SSLProtocolSocketFactory.createSocket(SSLProtocolSocketFactory.java:130)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1361)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:621)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:193)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:404)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:231)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:443)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:406)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:229)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:165)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.execute(AxisInvocationController.java:578)
at org.apache.axis2.jaxws.core.controller.impl.AxisInvocationController.doInvoke(AxisInvocationController.java:127)
at org.apache.axis2.jaxws.core.controller.impl.InvocationControllerImpl.invoke(InvocationControllerImpl.java:93)
at org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler.invokeSEIMethod(JAXWSProxyHandler.java:373)
... 40 common frames omitted
Is there a way to let the WS client in the child JAR use the same Axis2 configuration as the parent module (which is the deployable WAR and has the Axis2 dependencies)?
UPDATE:
My WAR file has an Axis2 configuration. From the source code of this WAR, a service generated with wsimport is called; it lives in a JAR that is a dependency of the parent WAR. This service calls an external web service, and this happens over Axis (although it doesn't use the axis2.xml configuration file, since that file is in the WEB-INF folder of the WAR).
Wouldn't there be a possibility to make the external web service call in the JAR without Axis and use just JAX-WS? That would solve my problems...
Axis2 provides a convenient way to configure the HTTP transport. So, following from your sample code:
HttpTransportProperties.ProxyProperties proxyProperties = new HttpTransportProperties.ProxyProperties();
proxyProperties.setProxyHostName("hostName");
proxyProperties.setProxyPort(8080);
proxyProperties.setUsername("User");
proxyProperties.setPassword("pw");
//set the properties
objectValidatorService.getServiceClient().getOptions().setProperty(HttpConstants.PROXY, proxyProperties);
The above wouldn't work for you, because you're using the stock JAX-WS implementation, not the Axis2-specific client.
Based on your stack trace, it appears you're connecting to a TLS-secured endpoint. There's a solution for that.
I've done a lot of research, and there is no access to the underlying HttpURLConnection using stock JAX-WS. What we do have is a way to register a custom socket factory. So we start by creating a custom factory that connects to the proxy first:
public class CustomSocketFactory extends SSLProtocolSocketFactory {
private static final CustomSocketFactory factory = new CustomSocketFactory();
static CustomSocketFactory getSocketFactory(){
return factory;
}
public CustomSocketFactory() {
super();
}
@Override
public Socket createSocket(String host, int port, InetAddress clientHost, int clientPort) {
Socket socket = null;
try {
int proxyPort = 1000;
InetSocketAddress proxyAddr = new InetSocketAddress("proxyAddr", proxyPort);
Socket proxyConn = new Socket(new Proxy(Proxy.Type.SOCKS, proxyAddr));
proxyConn.connect(new InetSocketAddress("endHost", 443));
socket = (SSLSocket) super.createSocket(proxyConn, "proxyEndpoint", proxyPort, true);
} catch (IOException ex) {
Logger.getLogger(CustomSocketFactory.class.getName()).log(Level.SEVERE, null, ex);
}
return socket;
}
}
We'll now register this custom socket factory with the Apache HttpClient runtime (Axis does not use the stock Java HttpURLConnection, as is evidenced by your stack trace):
Protocol.registerProtocol("https",new Protocol("https", new CustomSocketFactory(), 443));
This works only for TLS connections (although a custom socket factory is applicable to non-HTTPS endpoints as well). You also need to set the timeout to 0 to guarantee that your overridden createSocket gets invoked.

Apache Kafka consumer client connecting to Apache Zookeeper: EndOfStreamException

I get an error when trying to consume messages from Kafka (2.9.2-0.8.1) with a standalone ZooKeeper (3.4.5). You can see the source code below, as well as the error message and the log file from ZooKeeper.
I'm not sure whether the Java libraries are incompatible: I added the kafka_2.9.2 (0.8.1) dependency via Maven, which automatically resolved the zkclient (0.3) and zookeeper (3.3.4) dependencies.
The consumer source code:
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;
public class ConsumerTest {
public static void main(String[] args)
{
try
{
Properties props = new Properties();
props.put("zookeeper.connect", "192.168.0.1:2181/kafka");
props.put("group.id", "my-consumer");
props.put("zookeeper.session.timeout.ms", "400");
props.put("zookeeper.sync.time.ms", "200");
props.put("auto.commit.interval.ms", "1000");
ConsumerConfig config = new ConsumerConfig(props);
@SuppressWarnings("unused")
ConsumerConnector consumer = Consumer.createJavaConsumerConnector(config);
}
catch(Exception e)
{
System.out.println(e.getMessage());
e.printStackTrace();
}
}
}
The pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>test.my</groupId>
<artifactId>kafka-consumer</artifactId>
<version>0.0.1-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.9.2</artifactId>
<exclusions>
<exclusion>
<artifactId>jms</artifactId>
<groupId>javax.jms</groupId>
</exclusion>
<exclusion>
<artifactId>jmxtools</artifactId>
<groupId>com.sun.jdmk</groupId>
</exclusion>
<exclusion>
<artifactId>jmxri</artifactId>
<groupId>com.sun.jmx</groupId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.9.2</artifactId>
<version>0.8.1</version>
</dependency>
</dependencies>
</dependencyManagement>
</project>
The exception message and stack trace:
Unable to connect to zookeeper server within timeout: 400
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 400
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
at kafka.consumer.ZookeeperConsumerConnector.connectZk(ZookeeperConsumerConnector.scala:156)
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:114)
at kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:65)
at kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:67)
at kafka.consumer.Consumer$.createJavaConsumerConnector(ConsumerConnector.scala:100)
at kafka.consumer.Consumer.createJavaConsumerConnector(ConsumerConnector.scala)
at ConsumerTest.main(ConsumerTest.java:23)
The zookeeper log:
2014-05-06 11:48:11,907 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /192.168.0.4:52568
2014-05-06 11:48:11,909 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#349] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:701)
2014-05-06 11:48:11,909 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1001] - Closed socket connection for client /192.168.0.4:52568 (no session established for client)
Note that I can successfully produce and consume messages on the Kafka nodes with the command-line tools:
$ sudo -u kafka bin/kafka-console-producer.sh --broker-list 192.168.0.2:9092,192.168.0.3:9092 --topic my-topic
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
This is a first message.
This is a second message.
$ sudo -u kafka bin/kafka-console-consumer.sh --zookeeper 192.168.0.1:2181/kafka --topic my-topic --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
This is a first message.
This is a second message.
I can even successfully produce messages from a Java client producer.
I had the same problem and I have solved it. The ZooKeeper timeout is too small.
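A minimal sketch of the fix in the consumer properties from the question (the exact values are illustrative; the point is to give the session and connection timeouts a few seconds instead of 400 ms):
Properties props = new Properties();
props.put("zookeeper.connect", "192.168.0.1:2181/kafka");
props.put("group.id", "my-consumer");
// 400 ms is far too aggressive; give ZooKeeper a few seconds to establish the session
props.put("zookeeper.session.timeout.ms", "6000");
props.put("zookeeper.connection.timeout.ms", "6000");
props.put("zookeeper.sync.time.ms", "200");
props.put("auto.commit.interval.ms", "1000");
ConsumerConfig config = new ConsumerConfig(props);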

ProducerTemplate and Direct:start in camel

My Camel route is:
from("direct:start")
.to("http://myhost/mypath");
I used:
ProducerTemplate template;
template.sendBody("direct:start", "This is a test message");
to send the exchange. I am getting the following exception:
No consumers available on endpoint: Endpoint[direct://start].
How can I receive the same exchange on the direct:start endpoint?
The reason you get this error is that you have not configured a route that starts from direct:start.
If you have configured the route but did not mention it in your original question, then the next step is to start the CamelContext before calling the sendBody method:
camelContext.start();
template.sendBody("direct:start", "This is a test message");
Hope this resolves your issue.
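For anyone still missing the route itself, here is a minimal plain-Camel sketch (assuming camel-core and the camel-http component are on the classpath; no Spring involved) that registers the route from the question before sending:
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class DirectStartExample {
    public static void main(String[] args) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();

        // Register a consumer for direct:start, otherwise sendBody fails with
        // "No consumers available on endpoint: Endpoint[direct://start]".
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:start").to("http://myhost/mypath");
            }
        });
        camelContext.start();

        ProducerTemplate template = camelContext.createProducerTemplate();
        template.sendBody("direct:start", "This is a test message");

        camelContext.stop();
    }
}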
I know this is a very old question, but I'm writing this for anyone who is still getting this kind of issue.
Scenario: during the processing of an HTTP GET call, I fetch some data from the DB in the middle of the process and put the data as a message onto an Artemis producer.
Firstly, if you're using Camel with Spring, you don't need to create any CamelContext at all, because Spring is smart enough to create the Camel context for you with the dependencies below.
A few necessary dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-artemis</artifactId>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-spring-boot-starter</artifactId>
<version>2.24.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jaxb-starter</artifactId>
<version>2.24.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jms</artifactId>
<version>2.24.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jackson-starter</artifactId>
<version>2.24.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>2.24.2</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-amqp</artifactId>
<version>2.24.2</version>
</dependency>
So to fix it, I created a class that extends the RouteBuilder class from the Camel library. In this builder, I created a dummy consumer and used it to send the message to an actual producer. My destination is an Artemis producer endpoint.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jackson.JacksonDataFormat;
import org.apache.camel.spi.DataFormat;
import org.springframework.stereotype.Component;
@Component
public class MyRouteBuilder extends RouteBuilder {
private DataFormat marshalDataFormat;
public MyRouteBuilder(ObjectMapper objectMapper) {
objectMapper.registerModule(new JavaTimeModule());
objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
marshalDataFormat = new JacksonDataFormat(objectMapper, MyClass.class);
}
@Override
public void configure() throws Exception {
from("direct:imaginary-consumer")
.marshal(marshalDataFormat)
.log(LoggingLevel.INFO, "Message ready to send is ${body}")
.to("producer:message-data")
.log(LoggingLevel.INFO, "Message has been sent successfully to topic.");
}
}
The snippet below lives in whatever implementation class carries the message body. This method takes the message data and sends it to the imaginary/dummy consumer we created in the MyRouteBuilder class. The route gets invoked and sends the message to the destination (the producer here). It could just as well be an HTTP endpoint.
@Autowired
private ProducerTemplate producerTemplate;
public void sendMessage(Map<String, MyClass> messageBody) {
producerTemplate.sendBody("direct:imaginary-consumer", messageBody);
}
This is also posted on the Apache Camel mailing list, where it is actively being discussed:
http://camel.465427.n5.nabble.com/ProducerTemplate-and-direct-start-in-camel-tp5730558.html