With Apache Camel I want to send messages to a RabbitMQ exchange with different routing keys for load balancing (right now I have an exchange with 4 routing keys, more in the future). Is there an easy way to set a different routing key header, i.e. .setHeader("rabbitmq.ROUTING_KEY", envelope.getRoutingKey());, per message?
UPDATED:
I solved the problem with a processor and ${id}:
.setHeader("id", simple("${id}"))
.process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        String id = exchange.getIn().getHeader("id").toString();
        String newRoutingKey = ROUTING_KEY_PREFIX +
                (Integer.valueOf(id.split(":")[MESSAGE_NUMBER_IND]) % ROUTING_KEYS_NUMBER);
        exchange.getIn().removeHeader("id");
        exchange.getIn().setHeader("rabbitmq.ROUTING_KEY", newRoutingKey);
    }
})
.to(rmqQueue)
Are there any hidden problems with this?
You can use toD to set the routing key dynamically on a RabbitMQ endpoint.
XML Syntax:
<toD uri="rabbitmq://hostname[:port]/exchangeName?routingKey=${header.routekey}"/>
where header.routekey is the dynamic key you intend to use and it is set in the header.
In the Java DSL, the syntax might look like:
.toD("rabbitmq://hostname[:port]/exchangeName?routingKey=${header.routekey}");
Currently I am able to see the streaming values exposed by the code below, but only one HTTP client receives the continuous stream of values; the others do not.
The code, a modified version of the Quarkus quickstart for Kafka reactive streaming, is:
@Path("/migrations")
public class StreamingResource {

    private volatile Map<String, String> counterBySystemDate = new ConcurrentHashMap<>();

    @Inject
    @Channel("migrations")
    Flowable<String> counters;

    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS) // denotes that server-sent events (SSE) will be produced
    @SseElementType("text/plain") // denotes that the contained data, within this SSE, is just regular text/plain data
    public Publisher<String> stream() {
        Flowable<String> mainStream = counters.doOnNext(dateSystemToCount -> {
            String key = dateSystemToCount.substring(0, dateSystemToCount.lastIndexOf("_"));
            counterBySystemDate.put(key, dateSystemToCount);
        });
        return fromIterable(counterBySystemDate.values().stream().sorted().collect(Collectors.toList()))
                .concatWith(mainStream)
                .onBackpressureLatest();
    }
}
Is it possible to make any modification that would allow multiple clients to consume the same data, in a broadcast fashion?
I guess this implies letting go of backpressure, because that would imply a state per consumer?
I saw that Observable is not accepted as a return type in resteasy-rxjava2 for the Server-Sent Events media type.
Please let me know any ideas.
Thank you.
Please find the full code in Why in multiple connections to PricesResource Publisher, only one gets the stream?
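For reference, the usual RxJava 2 way to let several subscribers consume one upstream is to multicast it, e.g. with share(); below is a sketch (whether this interacts well with the SmallRye channel internals is an assumption to verify):
// Sketch: multicast the injected channel so several SSE clients share a
// single upstream subscription. share() == publish().refCount(); late
// subscribers only see items emitted after they join.
private Flowable<String> shared;

@PostConstruct
void init() {
    shared = counters.share();
}

@GET
@Path("/stream")
@Produces(MediaType.SERVER_SENT_EVENTS)
@SseElementType("text/plain")
public Publisher<String> stream() {
    return shared;
}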
The documentation for Spring WebSockets states:
4.4.13. User Destinations
An application can send messages targeting a specific user, and Spring’s STOMP support recognizes destinations prefixed with "/user/" for this purpose. For example, a client might subscribe to the destination "/user/queue/position-updates". This destination will be handled by the UserDestinationMessageHandler and transformed into a destination unique to the user session, e.g. "/queue/position-updates-user123". This provides the convenience of subscribing to a generically named destination while at the same time ensuring no collisions with other users subscribing to the same destination so that each user can receive unique stock position updates.
Is this supposed to work in a multi-server environment with RabbitMQ as broker?
As far as I can tell, the queue name for a user is generated by appending the simpSessionId. When using the recommended client library stomp.js this results in the first user getting the queue name "/queue/position-updates-user0", the next gets "/queue/position-updates-user1" and so on.
This in turn means the first users to connect to different servers will subscribe to the same queue ("/queue/position-updates-user0").
The only reference to this I can find in the documentation is this:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
But this only makes it possible to communicate with a user from a different server than the one where the web socket is established.
I feel I'm missing something. Is there any way to configure Spring so that MessagingTemplate.convertAndSendToUser(principal.getName(), destination, payload) can safely be used in a multi-server environment?
If the users need to be authenticated (I assume their credentials are stored in a database), you can always use their unique database user id in the subscription destination.
What I do is, when a user logs in, they are automatically subscribed to two topics: an account|system topic for system-wide broadcasts and an account|<userId> topic for user-specific broadcasts.
You could try something like notification|<userId> for each person to subscribe to, then send messages to that topic and they will receive them.
Since user ids are unique to each user, you shouldn't have an issue in a clustered environment as long as each server is hitting the same database information.
Here is my send method:
public static boolean send(Object msg, String topic) {
    try {
        String destination = topic;
        String payload = toJson(msg); // jsonify the message
        Message<byte[]> message = MessageBuilder.withPayload(payload.getBytes("UTF-8")).build();
        template.send(destination, message);
        return true;
    } catch (Exception ex) {
        logger.error(CommService.class.getName(), ex);
        return false;
    }
}
My destinations are preformatted, so if I want to send a message to the user with id 1, the destination looks something like /topic/account|1.
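For example, a call using the helper above might look like this (the payload type here is made up):
// Illustrative usage of send(); Notification is a hypothetical payload class.
boolean ok = send(new Notification("order shipped"), "/topic/account|1");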
I've created a ping-pong controller that tests websockets for users who connect, to see if their environment allows websockets. I don't know if this will help you, but it does work in my clustered environment.
/**
 * Play ping pong between the client and server to see if web sockets work
 * @param input the ping pong input
 * @return the return data to check for connectivity
 * @throws Exception exception
 */
@MessageMapping("/ping")
@SendToUser(value = "/queue/pong", broadcast = false) // send only to the session that sent the request
public PingPong ping(PingPong input) throws Exception {
    int receivedBytes = input.getData().length;
    int pullBytes = input.getPull();
    PingPong response = input;
    if (pullBytes == 0) {
        response.setData(new byte[0]);
    } else if (pullBytes != receivedBytes) {
        // create random byte array
        byte[] data = randomService.nextBytes(pullBytes);
        response.setData(data);
    }
    return response;
}
Trying a very simple Camel route:
from("aws-s3://javatutorial1232boomiau?amazonS3Client=#s3client&deleteAfterRead=true&fileName=My2.jsp").process(Empty2).log(LoggingLevel.INFO, "Replay Message Sent to file:s3out ${in.header.CamelAwsS3Key}")
.to("stream:out");
I am using version 2.20.2 (the latest as of today). The file is not getting deleted from the bucket. I have done some research and, by the looks of it, the exchange passed into the processCommit method lacks any headers. The headers it is looking for are the bucket name and key:
String bucketName = exchange.getIn().getHeader(S3Constants.BUCKET_NAME, String.class);
String key = exchange.getIn().getHeader(S3Constants.KEY, String.class);
I've also tried to("file://Users/user/out.txt"); the file is also not getting deleted, and the headers appear to be those of the file component.
EDIT:
I noticed that if I remove the .process(Empty2), the file is deleted from the bucket. The processor does not do any work:
@Override
public void process(Exchange exchange) throws Exception {
    Object body = exchange.getIn().getBody();
    System.out.println("1: " + body);
    Object body2 = exchange.getOut().getBody();
    System.out.println("2: " + body2);
}
So why would it work without the processor but not with it? How should I process a message if a processor cannot be used?
As Claus pointed out, calling exchange.getOut() creates a new outgoing message with an empty body on the exchange. None of the headers are copied over, so they are all lost. By the time processCommit runs, the bucket name and key headers are gone.
So either do not access getOut() or copy all headers from In to Out.
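A sketch of the second option (Camel 2.x Processor API; illustrative, not the asker's exact code):
// If you do write to getOut(), first copy body and headers from the In
// message so S3Constants.BUCKET_NAME and S3Constants.KEY survive until
// processCommit runs.
public void process(Exchange exchange) throws Exception {
    Message in = exchange.getIn();
    Message out = exchange.getOut();
    out.setBody(in.getBody());
    out.getHeaders().putAll(in.getHeaders());
}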
I have a Camel route which processes a message from a process queue and sends it to an upload queue.
from("activemq:queue:process" ).routeId("activemq_processqueue")
.process(exchange -> {
SomeImpl impl = new SomeImpl();
impl.process(exchange);
})
.to(ExchangePattern.InOnly, "activemq:queue:upload");
In impl.process I am populating an id and a destination server path. Now I need to define a new route which consumes messages from the upload queue, copies a local folder (based on the id generated in the previous route) and uploads it to the destination folder on an FTP server (also populated in the previous route).
So how do I design a new route where both the from and to endpoints are dynamic, which would look something like below?
from("activemq:queue:upload" )
.from("file:basePath/"+{idFromExchangeObject})
.to("ftp:"+{serverIpFromExchangeObject}+"/"+{pathFromExchangeObject});
I think there is a better alternative for your case, taking for granted that you are using a Camel version newer than 2.16 (alternatives for earlier versions exist, but they are more complicated and less elegant, e.g. consumerTemplate & recipientList).
You can replace the first "dynamic from" with pollEnrich, which enriches the message using a polling consumer and a simple expression to build the dynamic file endpoint. For the second part, as already mentioned, a dynamic uri with .toD will do the job. So your route would look like this:
from("activemq:queue:upload" )
.pollEnrich().simple("file:basePath/${header.idFromExchangeObject})
.aggregationStrategy(new ExampleAggregationStrategy()) // * see explanation
.timeout(2000) // the timeout is optional but recommended
.toD("ftp:${header.serverIpFromExchangeObject}/${header.pathFromExchangeObject}")
See the content enricher section "Using dynamic uris": http://camel.apache.org/content-enricher.html
You will need an aggregation strategy to combine the original exchange with the resource exchange, in order to make sure that the headers serverIpFromExchangeObject and pathFromExchangeObject are included in the aggregated exchange after the enrichment. If you don't include a custom strategy, then Camel will by default use the body obtained from the resource. Have a look at the ExampleAggregationStrategy example in content-enricher.html to see how this works.
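A minimal sketch of such a strategy (modeled on the docs example; the null check covers the timeout case):
// Keep the original exchange (and thus its headers) and take only the
// body polled from the file endpoint.
public class ExampleAggregationStrategy implements AggregationStrategy {
    public Exchange aggregate(Exchange original, Exchange resource) {
        if (resource == null) {
            return original; // pollEnrich timed out; keep the original as-is
        }
        original.getIn().setBody(resource.getIn().getBody());
        return original; // headers like serverIpFromExchangeObject survive
    }
}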
For the .toD() have a look at http://camel.apache.org/how-to-use-a-dynamic-uri-in-to.html
Adding a dynamic to endpoint in Camel (as noted in the comment) can be done with .toD(), which is described on this page on the Camel site.
I don't know of any fromD() equivalent. However, you could add a dynamic route by calling the addRoutes method on the CamelContext. This is described on this page on the Camel site.
Expanding slightly on the example from the Camel site, here is something that should get you heading in the right direction.
public void process(Exchange exchange) throws Exception {
    String idFromExchangeObject = ...
    String serverIpFromExchangeObject = ...
    String pathFromExchangeObject = ...
    exchange.getContext().addRoutes(new RouteBuilder() {
        public void configure() {
            from("file:basePath/" + idFromExchangeObject)
                .to("ftp:" + serverIpFromExchangeObject + "/" + pathFromExchangeObject);
        }
    });
}
There may be other options in Camel as well, since this framework has an amazing number of EIPs and capabilities.
I am currently using NMS to develop an application based on ActiveMQ (5.6).
We have several consumers (exes) trying to receive messages from the same queue (not topic). But all the messages go to just one consumer, even though I have made that consumer sleep for seconds after receiving a message. By the way, we don't want consumers receiving messages that other consumers have already received.
The official website mentions that we should set the prefetch limit to decide how many messages can be streamed to a consumer at any point in time, and that it can be set both in configuration and in code.
One way I tried is to set it in code, using the PrefetchPolicy class bound to the ConnectionFactory, like below.
PrefetchPolicy poli = new PrefetchPolicy();
poli.QueuePrefetch = 0;
ConnectionFactory fac = new ConnectionFactory("activemq:tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");
fac.PrefetchPolicy = poli;
using (IConnection con = fac.CreateConnection())
{
    using (ISession se = con.CreateSession())
    {
        IDestination destination = SessionUtil.GetDestination(se, queue, DestinationType.Queue);
        using (IMessageConsumer consumer = se.CreateConsumer(destination))
        {
            con.Start();
            while (true)
            {
                ITextMessage message = consumer.Receive() as ITextMessage;
                Thread.Sleep(2000);
                if (message != null)
                {
                    Task.Factory.StartNew(() => extractAndSend(message.Text)); // do something
                }
                else
                {
                    Console.WriteLine("No message received~");
                }
            }
        }
    }
}
But no matter what prefetch value I set, the behavior of the consumers stays the same as before.
I've also tried the second way, namely configuring the server's conf file. I changed the server's activemq.xml like below.
" producerFlowControl="true" memoryLimit="5mb" />
" producerFlowControl="true" memoryLimit="5mb">
But even though I've set the dispatch policy, the messages still go to one consumer.
I want to know:
Can this behavior be achieved by just configuring the server XML file, so that all the consumers receive messages from one queue? If so, how do I configure this and what is wrong with my configuration? If not, how can I achieve the goal in code?
Thanks.
Take a look at the "Message Groups" feature.
I had the same problem: only one consumer processed all messages. I found that in my code I used a group header during send:
request.Properties["NMSXGroupID"] = "cheese";
According to official docs:
Standard JMS header JMSXGroupID is used to define which message group the message belongs to. The Message Group feature then ensures that all messages for the same message group will be sent to the same JMS consumer - while that consumer stays alive. As soon as the consumer dies another will be chosen.
See full details at http://activemq.apache.org/message-groups.html
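So the fix is to stop pinning every message to one group: drop the NMSXGroupID property for plain competing-consumer load balancing, or vary the group id so different groups spread across consumers. In plain JMS terms (a sketch; NMS mirrors these property names, and orderId is illustrative):
// A constant group id serializes all messages onto one consumer:
// msg.setStringProperty("JMSXGroupID", "cheese");
// Either omit the property entirely, or vary it per logical group:
TextMessage msg = session.createTextMessage(payload);
msg.setStringProperty("JMSXGroupID", "order-" + orderId);
producer.send(msg);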