I have a stream defined as `stream create --name foo --definition "samplesource | sampleprocessor | samplesink" --deploy`, and I'm using Redis as the MessageBus.
If sampleprocessor or samplesink fails, XD pushes the failed messages to Redis into the ERRORS:foo.n queue. I'm writing code to bring the messages from the errors queue back to foo.n.
The challenge is that I don't want to hardcode the stream name in my code, since this piece is shared across all my XD modules.
Can I get the channel name on the fly?
Thanks In Advance
I'm writing code...
Where is that code going to run?
If it's within a module in the stream, module metadata is available via the application context environment properties:
${xd.group.name}
${xd.module.label}
${xd.module.type}
${xd.module.sequence}
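For example, a minimal sketch of a bean inside the module that rebuilds the error-queue name from the question without hardcoding the stream (assuming xd.module.sequence corresponds to the pipe index n in ERRORS:foo.n):

```java
import org.springframework.beans.factory.annotation.Value;

public class ErrorQueueNames {

    // resolved per deployed module from the XD application context environment
    @Value("${xd.group.name}")
    private String streamName;   // the stream name, e.g. "foo"

    @Value("${xd.module.sequence}")
    private String sequence;     // assumption: matches the pipe index "n"

    // the Redis keys from the question, built without hardcoding the stream
    public String errorQueue() { return "ERRORS:" + streamName + "." + sequence; }

    public String targetQueue() { return streamName + "." + sequence; }
}
```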
A Mule (Anypoint Studio) application is consuming messages published by an API application through Kafka, but the Mule Kafka consumer is not able to receive them. Every time, I have to stop and start the application to consume messages, and even when we do that we receive the old messages along with the new one.
We are using Mule 3.9 and kafka-client 0.10.0.0.
I tried adding some properties to the consumer.properties file, such as poll settings.
The consumer.properties file contains:
group.id=user
auto.offset.reset=earliest
enable.auto.commit=false
In the Anypoint Studio flow, the Kafka Connector [Consumer] is given customer_data as the topic name and 1 as the partitions. I haven't provided any offset.
I expected the consumer to read messages without restarting the application, and old messages shouldn't be received again.
The problem is that Mule only manages the offset if enable.auto.commit is true.
Keeping it true is a bad idea, since it would commit the offset even if your Mule flow fails, so you did the right thing by disabling it.
But here comes the problem: with enable.auto.commit=false, you are basically supposed to manage the offset yourself. I am having a similar problem and am considering a custom class that explicitly calls commitSync() after the Mule flow has executed successfully.
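For reference, here is what commit-after-success looks like with the plain kafka-clients API (a sketch only; the broker address and processing logic are placeholders, and in Mule you would need the equivalent hook after the flow succeeds):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "user");
        props.put("auto.offset.reset", "earliest");
        props.put("enable.auto.commit", "false"); // we commit explicitly below
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("customer_data"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(500);
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // throw on failure so the commit below is skipped
                }
                consumer.commitSync(); // commit only after successful processing
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.offset() + ": " + record.value());
    }
}
```

Because auto.offset.reset=earliest only applies when the group has no committed offset, committing after each successful batch is also what stops old messages from being replayed on restart.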
I have a stream defined in Spring XD that looks something like this, with RabbitMQ as the transport:
source | transformer1 | transformer2 | transformer3 | sink
I have custom transformers which I have deployed. I want to write every exception/error that happens in my transformers/custom modules to an error queue, and then pull the messages from that error queue into a mongo sink.
I can achieve this by creating a tap from the rabbit error queue to mongo:
`tap --> rabbit_ERROR_QUEUE --> mongoSink`
Is there any way I can configure my Spring XD custom modules' XML to write all exceptions and errors to an error queue by default?
If you set autoBindDLQ to true (in servers.yml for all streams, or in the deployment properties at the stream level), XD will create a dead letter queue for you.
You also need to configure retry.
By default, the bus will try to deliver the message 3 times then reject it and the broker will forward it to the dead letter queue.
Another bus/deployment property, republishToDLQ, provides a mechanism for the bus to republish the message to the DLQ (instead of rejecting it). This will include additional information (the exception message, stack trace, etc.) as headers on the error message.
See the application configuration section and the deployment section of the reference manual for complete information about these properties.
However, you would not consume from the DLQ using a tap; instead:
stream create errorStream --definition "rabbit ..."
i.e. use the rabbit source to pull from the DLQ.
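For example (a hedged sketch: the deployment-property syntax and the xdbus.<stream>.<index>.dlq queue name reflect the rabbit bus defaults as I understand them; verify the actual DLQ name in your broker's admin UI):

```
stream deploy mystream --properties "module.*.consumer.autoBindDLQ=true"
stream create errorStream --definition "rabbit --queues=xdbus.mystream.0.dlq | mongodb" --deploy
```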
I wanted to know if RabbitMQ has any built-in capability to call an external exe once its message queue gets populated. I understand that we can implement task queues/worker queues in RabbitMQ, but that has to be done by writing an external application (say in Java, like the one in the tutorial at http://www.rabbitmq.com/tutorials/tutorial-two-java.html). Please help me out with this.
Adding to my previous question:
I have decided to write an application that will run an exe, but I don't want the application I write to poll my queue. Instead, I want RabbitMQ to trigger my application whenever there is a new message, by sending it a job to process. Can I do this? How can I add jobs to the queues?
You are probably going to have to write your own consumer. The questions are what is sending the messages in the first place, what the format of the message is, and whether you need that data.
Python is probably the best choice for this task.
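That said, since the linked tutorial uses the Java client, here is a minimal Java sketch of such a consumer (queue name, host, and the convention that the message body is the command line are all assumptions). Note that it does not poll: the client library pushes each delivery to the callback, which is exactly the "trigger" behavior asked about.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;

public class ExeLauncher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        Connection connection = factory.newConnection();
        final Channel channel = connection.createChannel();
        channel.queueDeclare("task_queue", true, false, false, null);

        channel.basicConsume("task_queue", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body)
                    throws IOException {
                String command = new String(body, "UTF-8"); // assumed message format
                try {
                    // the broker "triggers" us via this callback; launch the exe
                    Process process = new ProcessBuilder(command).start();
                    process.waitFor();
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    // requeue so another worker can pick the job up
                    channel.basicNack(envelope.getDeliveryTag(), false, true);
                }
            }
        });
    }
}
```

Jobs are added to the queue by any producer doing a basicPublish to it, as in the linked tutorial.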
I'm working on proof-of-concept project designed to explore the benefits of offloading work from a NIO server to a message queue for backend processing. I'm using Grizzly for the NIO boilerplate stuff, and Spring Integration for the messaging (with JMS/ActiveMQ as the messaging implementation). Basically, what I want to do is this:
Client connection -> Server -> Server creates "work-to-be-done" message -> JMS/ActiveMQ
On the ActiveMQ message queue, a number of "workers" will be actively consuming these messages, processing them, and placing the result on another queue. The server is listening for "response messages" on that queue, and once a message is picked up it will execute the following:
Response queue -> Server serializes the message to something the client can understand -> back to the client
My immediate problem is my lack of understanding of Grizzly, specifically how to decouple the event handling from the messaging. The server has to create the work-to-be-done message in such a way that when the reply message comes back from the worker, the server knows who the client was (i.e., can find the related FilterChainContext in Grizzly) in order to send the TCP message.
I might be able to use FilterChainContext.getAddress() and place that on the work message, but I'm not sure how to code a method which takes a peer address and a message and somehow sends that (FilterChainContext.write()) when it has no FilterChainContext.
I'm now playing with the idea of keeping a Map around, but I'm apprehensive about this approach because I don't want entries to go stale in the map if something happens to the message during serialization or processing.
Ideas and suggestions are welcome.
-Michael
You could use the TCP adapters/gateways (which have an option to use NIO), together with custom (de)serializers. If you must use Grizzly, you could write a server connection factory implementation. In the case of the outbound adapter (or inbound gateway), the endpoint is registered as a 'TcpListener' (using the connectionId) and the SI message contains the IpHeaders.CONNECTION_ID header used to determine which connection gets the reply. When a connection closes, it is unregistered (removed from the map).
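A rough sketch of the correlation part of that approach (channel names and the custom header are assumptions; the JMS work/reply legs are omitted):

```java
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.ip.IpHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class ConnectionCorrelation {

    // inbound: stash the framework's connection id in a plain header so it
    // survives the trip through the JMS work and response queues
    @ServiceActivator(inputChannel = "fromTcp", outputChannel = "toJmsWork")
    public Message<byte[]> stash(Message<byte[]> request) {
        return MessageBuilder.fromMessage(request)
                .setHeader("clientConnectionId",
                        request.getHeaders().get(IpHeaders.CONNECTION_ID))
                .build();
    }

    // outbound: restore ip_connectionId so the TCP outbound adapter routes
    // the reply to the TcpListener registered for that client connection
    @ServiceActivator(inputChannel = "fromJmsReply", outputChannel = "toTcp")
    public Message<byte[]> restore(Message<byte[]> reply) {
        return MessageBuilder.fromMessage(reply)
                .setHeader(IpHeaders.CONNECTION_ID,
                        reply.getHeaders().get("clientConnectionId"))
                .build();
    }
}
```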
I'm using the web console against my AMQ 5.2 instance successfully, except that I cannot see the content of all of my messages.
If I send a test message using the web console, I can see the sample text content, but I believe the vendor app I am working with has binary or byte array message content.
Is there something I need to do to be able to view this raw data?
Thanks,
To my knowledge, it is not possible to inspect messages in the Admin Console. You can get some statistics (like how many messages have been sent etc.).
ActiveMQ does not unmarshal messages when receiving them (for performance reasons, unmarshalling is rather expensive).
Thus, if you want some way to inspect messages for their content, you can basically do one of two things:
1. Write a consumer which registers for all topics/queues, through which you can see the messages' content. Drawback: if you're using queue-based interaction, your "real" consumers will not get all messages.
2. Write an ActiveMQ plugin which looks at the messages. Have a look at ActiveMQ's Logging plugin, then write your own (you'll need the sources to compile it) and load it with ActiveMQ (see the documentation on how to configure ActiveMQ to load plugins). You want to override the send() method, which is called whenever someone sends a message to the broker. There you get a reference to the message and can access its content; a sketch follows below.
Neither of the two approaches provides a convenient viewing mechanism, though. You'll have to resort to standard out, or write your own web-based access.
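For option 2, a minimal sketch modeled on the Logging plugin (the class name and writing to standard out are my own choices; you would register it under <plugins> in activemq.xml):

```java
import org.apache.activemq.broker.BrokerPluginSupport;
import org.apache.activemq.broker.ProducerBrokerExchange;
import org.apache.activemq.command.ActiveMQTextMessage;
import org.apache.activemq.command.Message;

public class ContentDumpPlugin extends BrokerPluginSupport {

    // called whenever a producer sends a message to the broker
    @Override
    public void send(ProducerBrokerExchange producerExchange, Message messageSend)
            throws Exception {
        if (messageSend instanceof ActiveMQTextMessage) {
            System.out.println("text: " + ((ActiveMQTextMessage) messageSend).getText());
        } else {
            // byte/binary payloads: dump the raw content (may not be readable)
            System.out.println("raw: " + messageSend.getContent());
        }
        super.send(producerExchange, messageSend); // always forward to the broker
    }
}
```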
hawtio now shows the first 256 chars of messages; don't know if that is enough for you. Use the browse() method.