I want to log the payloads (centralized logging) being sent out from my HTTP endpoints (requester), for which I figured it is best to use the EndpointMessageNotificationListener.
The server starts up correctly, but my notification logger does not get called. I expect the onNotification method to be invoked each time a request is sent out from the HTTP Requester.
I have configured the Spring bean:
<spring:bean name="endpointNotificationLogger" class="EndpointAuditor" />
My Java class is as below
public class EndpointAuditor implements EndpointMessageNotificationListener<EndpointMessageNotification> {

    public EndpointAuditor() {
        System.out.println("The EndpointAuditor has been instantiated");
    }

    @Override
    public void onNotification(EndpointMessageNotification notification) {
        try {
            System.out.println("This comes from the endpoint " + notification.getSource().getPayloadAsString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
and my notification configuration as below:
<notifications>
    <notification event="ENDPOINT-MESSAGE" />
    <notification-listener ref="endpointNotificationLogger" />
</notifications>
I have picked up all this from here.
Any ideas why Mule is not happy?
This one turned out to be tricky. I was using Mule 3.6.1 EE with the new HTTP Connector, and this feature is not implemented for the HTTP Connector in that particular Mule version.
So basically I was doing all the right things and Mule just didn't have the feature.
They have, however, implemented it in Mule 3.6.3 EE and Mule 3.7.x. This commit shows the code that was written for it.
Related
I'm trying to add custom headers to my message so that whenever an exception occurs and the message ends up in the dead-letter queue, I can see what the exception was. However, all my attempts at this have failed:
using .setHeader()
setting a header on the outMessage
setting a property on the exchange
Setting the exception as a property in the payload is not allowed.
@Component
public class ProcessRoute extends RouteBuilder {
    ...
    @Override
    public void configure() throws Exception {
        onException(Exception.class)
            .log("Error for ${body}! Requeue")
            .redeliveryDelay(2000)
            .maximumRedeliveries(3)
            .handled(true)
            .setHeader("TEST", constant("TEST"))
            .process(e -> {
                e.getOut().setHeader("TEST", "TEST");
                e.setProperty("TEST", "TEST");
            });

        from(SOME_ROUTE)
            .doSomeStuff()
            .to(RABBITMQ);
    }
    ...
}
RABBITMQ-string:
rabbitmq://foo
?exchangeType=topic
&addresses=localhost:1234
&routingKey=#
&autoDelete=false
&queue=bar
&autoAck=false
&deadLetterExchange=DLX
&deadLetterQueue=bar.dlq
&deadLetterExchangeType=direct
&deadLetterRoutingKey=#
&username=foo
&password=bar
Resulting message on the dead-letter-queue:
If you use a header key following the pattern that the Camel RabbitMQ component has established, then your custom header will get picked up when the message is published to RabbitMQ.
Taking from your code above, instead of:
.setHeader("TEST", constant("TEST"))
Do this:
.setHeader("rabbitmq.TEST", constant("TEST"))
The Camel RabbitMQ component seems to ignore all other non-"rabbitmq.*" headers that might be on the Camel exchange, and probably for good reason: there could be quite a few, and most of them wouldn't make sense in the context of a message published to RabbitMQ.
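Applied to the route from the question, a minimal sketch might look like the following (the class name and endpoint URIs are placeholders, and TEST is just an illustrative header):
import org.apache.camel.builder.RouteBuilder;

public class DlqHeaderRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        onException(Exception.class)
            .log("Error for ${body}! Requeue")
            .redeliveryDelay(2000)
            .maximumRedeliveries(3)
            .handled(true)
            // the "rabbitmq." prefix is what makes the Camel RabbitMQ component
            // copy the header onto the message that lands in the dead-letter queue
            .setHeader("rabbitmq.TEST", constant("TEST"));

        // placeholder endpoints standing in for SOME_ROUTE and RABBITMQ above
        from("direct:someRoute")
            .to("rabbitmq://foo?exchangeType=topic&queue=bar&deadLetterExchange=DLX&deadLetterQueue=bar.dlq");
    }
}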
I am sending a message from my standalone application, which uses an EJB MDB to communicate with my other application server running on JBoss. My application server is connected to an MSSQL server. In certain scenarios the connection to the database is lost on the application server side and we get the following error:
Connection is reset.
Later, when I try to send a message, I don't get any error in my standalone EJB MDB logs and the process just stops executing. I get an error in the application server logs, but the same error does not get propagated to my EJB MDB logs.
As per my understanding, when the DB connection is lost, all the EJB beans present in the JBoss container get nullified too (I could be wrong here, I am new to EJB).
I tried implementing the code below in the part that sends the message:
QueueConnection qcon = null;

@PostConstruct
public void initialize() {
    System.out.println("In PostConstruct");
    try {
        qcon = qconFactory.createQueueConnection();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

@PreDestroy
public void releaseResources() {
    System.out.println("In PreDestroy");
    try {
        if (qcon != null) {
            qcon.close();
        }
        if (qcon == null) {
            throw new Exception("new exception occured.");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I was under the impression that the QueueConnection object would be nullified when the DB connection is lost (as we are creating the bean and making the connection for the message), but that doesn't seem to be the case.
I did find a way to call back my application after sending the message: I used a separate temporary queue and the setJMSReplyTo method to set the reply destination. More info can be found at this link. Hope this helps others.
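For illustration, a rough sketch of that approach with a temporary queue (the class name, queue names, and timeout are assumptions, not my exact code):
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;

public class ReplyToSender {

    public void sendWithReplyTo(QueueConnection qcon, Queue requestQueue, String body) throws Exception {
        QueueSession session = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

        // temporary queue that only lives as long as this connection
        Queue replyQueue = session.createTemporaryQueue();

        TextMessage message = session.createTextMessage(body);
        // the consumer on the application server is expected to reply to this destination
        message.setJMSReplyTo(replyQueue);

        QueueSender sender = session.createSender(requestQueue);
        sender.send(message);

        // wait a bounded amount of time for the callback
        Message reply = session.createConsumer(replyQueue).receive(10000);
        if (reply == null) {
            // no answer arrived, e.g. the application server lost its DB connection
            System.out.println("No reply received within the timeout");
        }
    }
}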
I have an issue getting messages into a spring-cloud-stream Spring Boot app.
I am using RabbitMQ as the message engine.
The message producer is a non-Spring-Boot app, which sends a message using Spring RestTemplate.
Queue name: "audit.logging.rest"
The consumer application is set up to listen on that queue. It is a Spring Boot app (spring-cloud-stream).
Below is the consumer code.
application.yml
cloud:
  stream:
    bindings:
      restChannel:
        binder: rabbit
        destination: audit.logging
        group: rest
AuditServiceApplication.java
@SpringBootApplication
public class AuditServiceApplication {

    @Bean
    public ByteArrayMessageConverter byteArrayMessageConverter() {
        return new ByteArrayMessageConverter();
    }

    @Input
    @StreamListener(AuditChannelProperties.REST_CHANNEL)
    public void receive(AuditTestLogger logger) {
        ...
    }
}
AuditTestLogger.java
public class AuditTestLogger {

    private String applicationName;

    public String getApplicationName() {
        return applicationName;
    }

    public void setApplicationName(String applicationName) {
        this.applicationName = applicationName;
    }
}
Below is the request being sent from the producer App in JSON format.
{"applicationName" : "AppOne" }
I found a couple of issues:
Issue 1:
What I noticed is that the method below only gets triggered when the method parameter is declared as Object, because spring-cloud-stream is not able to parse the message into the Java POJO.
@Input
@StreamListener(AuditChannelProperties.REST_CHANNEL)
public void receive(AuditTestLogger logger) {
Issue 2:
When I changed the method to receive an Object, I see that the object is of type RMQTextMessage, which cannot be parsed. However, I can see the actual posted message inside it, under the text property.
I wrote a ByteArrayMessageConverter, which didn't help either.
Is there any way to tell Spring Cloud Stream to extract the message from the RMQTextMessage using a MessageConverter and get the actual message out of it?
Thanks in advance.
RMQTextMessage? That looks like it is part of rabbitmq-jms-client.
With the RabbitMQ binder you should rely only on Spring AMQP.
Now let's figure out what your producer application is doing.
Since you get an RMQTextMessage as the value in the @StreamListener method, that tells me the sender really does use rabbitmq-jms-client for producing, and therefore the real AMQP message in the queue has that RMQTextMessage as a wrapper around the real payload.
Why not use Spring AMQP there as well?
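For example, publishing the JSON with plain Spring AMQP on the producer side could look roughly like this (the exchange name and routing key are assumptions based on the destination in your config):
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class AuditProducer {

    public static void main(String[] args) {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitTemplate template = new RabbitTemplate(connectionFactory);

        String json = "{\"applicationName\" : \"AppOne\" }";

        MessageProperties props = new MessageProperties();
        props.setContentType(MessageProperties.CONTENT_TYPE_JSON);

        // "audit.logging" is assumed to be the exchange the binder creates for the
        // destination; the routing key is arbitrary here because the group binding uses "#"
        template.send("audit.logging", "audit.logging.rest", new Message(json.getBytes(), props));

        connectionFactory.destroy();
    }
}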
It's a late reply, but I had the exact same problem and solved it by sending and receiving the messages in application/json format. Use this in the Spring Cloud Stream config:
content-type: application/json
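In the application.yml from the question that would roughly translate to the following (exactly where the property sits is an assumption based on the binding shown above):
cloud:
  stream:
    bindings:
      restChannel:
        binder: rabbit
        destination: audit.logging
        group: rest
        content-type: application/json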
I have a simple route that listens to a Redis channel. For some reason it's not working.
Here is my route. I verified that data is being published to the Redis channel and I can read it back using a normal Jedis subscriber. I'm running Camel inside Jetty and it is deployed as a WAR.
public class RedisSubscriberRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("spring-redis://localhost:6379?command=SUBSCRIBE&channels=mychannel")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    String res = exchange.getIn().getBody().toString();
                    System.out.println("************ " + res);
                    exchange.getOut().setBody(res);
                }
            })
            .to("log:foo");
    }
}
UPDATE (10-May-2013 9:56 AM EST): Adding version information
<properties>
    <spring.version>3.2.2.RELEASE</spring.version>
    <camel.version>2.11.0</camel.version>
    <jetty.version>7.6.8.v20121106</jetty.version>
</properties>
Redis server version is 2.6.11
The sample git project is here.
https://github.com/soumyasd/camelredisdemo
UPDATE 10-May-2013 (10:18 PM EST):
As suggested in the comments below, I changed the spring-data-redis version to 1.0.0.RELEASE. It looks like the message is getting to the subscriber, but I'm still getting an exception.
java.lang.RuntimeException: org.springframework.data.redis.serializer.SerializationException: Cannot deserialize; nested exception is org.springframework.core.serializer.support.SerializationFailedException: Failed to deserialize payload. Is the byte array a result of corresponding serialization for DefaultDeserializer?; nested exception is java.io.StreamCorruptedException: invalid stream header: 77686174
at org.apache.camel.component.redis.RedisConsumer.onMessage(RedisConsumer.java:73)[camel-spring-redis-2.11.0.jar:2.11.0]
at org.springframework.data.redis.listener.RedisMessageListenerContainer.executeListener(RedisMessageListenerContainer.java:242)[spring-data-redis-1.0.0.RELEASE.jar:]
at org.springframework.data.redis.listener.RedisMessageListenerContainer.processMessage(RedisMessageListenerContainer.java:231)[spring-data-redis-1.0.0.RELEASE.jar:]
at org.springframework.data.redis.listener.RedisMessageListenerContainer$DispatchMessageListener$1.run(RedisMessageListenerContainer.java:726)[spring-data-redis-1.0.0.RELEASE.jar:]
at java.lang.Thread.run(Thread.java:680)[:1.6.0_45]
There is something broken in the consumer with v1.0.3.RELEASE; use 1.0.0.RELEASE instead.
The exception you are getting is something different: the Camel producer uses Spring's RedisTemplate, which in turn uses JdkSerializationRedisSerializer. To keep it symmetric, the consumer by default also uses JdkSerializationRedisSerializer to deserialize data. So if you are using the Camel producer to publish data, it should work fine without hassle. But if you are publishing data to Redis using other Redis clients (or, as in your case, other libraries), you have to use a different serializer for the consumer. Long explanation, but making it work is actually two lines:
from("spring-redis://localhost:6379?command=SUBSCRIBE&channels=mychannel&serializer=#serializer")
Here is a summary of what I had to change to make this work.
As pointed out by @Bilgin Ibryam, you have to use version 1.0.0.RELEASE of spring-data-redis (as of 11-May-2013):
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-redis</artifactId>
    <!-- IMPORTANT - as of 10-May-2013 the Redis Camel
         component only works with version 1.0.0.RELEASE -->
    <version>1.0.0.RELEASE</version>
</dependency>
Other versions that I used in my pom.xml are Spring 3.2.2.RELEASE, Camel 2.11.0 and Jetty 7.6.8.v20121106.
If you are publishing and consuming using the Camel Redis component, you don't have to declare a different serializer. In my case I was publishing from Python as well as plain old Java using Jedis, so I had to change my route to include the serializer and define the serializer in my Spring/Camel config.
@Override
public void configure() throws Exception {
    from("spring-redis://localhost:6379?command=SUBSCRIBE&channels=mychannel&serializer=#redisserializer")
        .process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                String res = exchange.getIn().getBody().toString();
                System.out.println("************ " + res);
                exchange.getOut().setBody(res);
            }
        })
        .to("log:foo");
}
I have a durable consumer to a remote JMS queue in embedded Camel routing. Is it possible to have this kind of routing with a master-slave configuration? At the moment it seems that the Camel routes are started and activated as soon as the slave ActiveMQ is started, and not when the actual failover happens.
This causes the slave instance to receive the same messages that are also sent to the master, which results in duplicate messages arriving on the queue on failover.
I'm using ActiveMQ 5.3 along with Apache Camel 2.1.
Unfortunately, when the slave broker starts, so does the CamelContext along with the routes. However, you can accomplish this by doing the following:
On the camelContext deployed with the slave broker, add the autoStartup attribute to prevent the routes from starting:
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring" autoStartup="false">
...
</camelContext>
Next you need to create a class that implements the ActiveMQ Service Interface. A sample of this would be as follows:
package com.fusesource.example;
import org.apache.activemq.Service;
import org.apache.camel.spring.SpringCamelContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * Example used to start and stop the Camel context using the ActiveMQ Service interface.
 */
public class CamelContextService implements Service {

    private final Logger LOG = LoggerFactory.getLogger(CamelContextService.class);

    SpringCamelContext camel;

    @Override
    public void start() throws Exception {
        try {
            camel.start();
        } catch (Exception e) {
            LOG.error("Unable to start camel context: " + camel);
            e.printStackTrace();
        }
    }

    @Override
    public void stop() throws Exception {
        try {
            camel.stop();
        } catch (Exception e) {
            LOG.error("Unable to stop camel context: " + camel);
            e.printStackTrace();
        }
    }

    public SpringCamelContext getCamel() {
        return camel;
    }

    public void setCamel(SpringCamelContext camel) {
        this.camel = camel;
    }
}
Then in the broker's configuration file, activemq.xml, add the following to register the service:
<services>
    <bean xmlns="http://www.springframework.org/schema/beans" class="com.fusesource.example.CamelContextService">
        <property name="camel" ref="camel"/>
    </bean>
</services>
Now, once the slave broker takes over as the master, the start method will be invoked on the service class and the routes will be started.
I have also posted a blog about this here: http://jason-sherman.blogspot.com/2012/04/activemq-how-to-startstop-camel-routes.html
This shouldn't be an issue, because the Camel context/routes on the slave will not start until it becomes the master (when the message store file lock is released by the master).
With Camel route policies you can decide to suspend/resume certain routes based on your own conditions.
http://camel.apache.org/routepolicy.html
There is an existing ZookeeperRoutePolicy that can be used to do leader election.
http://camel.apache.org/zookeeper.html (see bottom of the page)
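For illustration, attaching such a policy to a route could look roughly like this (the ZooKeeper address, znode path, and endpoint URIs are placeholders):
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.zookeeper.policy.ZooKeeperRoutePolicy;

public class MasterOnlyRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // only the instance that wins the election under the given znode
        // (at most 1 enabled instance) will actually run this route
        ZooKeeperRoutePolicy policy =
                new ZooKeeperRoutePolicy("zookeeper:localhost:2181/camel/routepolicy", 1);

        from("activemq:queue:myDurableQueue")
            .routePolicy(policy)
            .to("log:masterOnly");
    }
}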