I have a CMS producer that sends messages in a while loop. This is extremely fast and unnecessary; I would like to restrict it to about one message per second.
BytesMessage *message = session->createBytesMessage();
message->setStringProperty("M_P_C_N","someMsg");
message->setStringProperty("M_P_T_N","someTopic");
message->writeBytes(data);
producer->send(message);
I was wondering whether CMS has a function or some other way to set the sending frequency?
There is no such facility in ActiveMQ-CPP. Controlling the producer send rate is something your application needs to handle; the C++ client is only responsible for sending messages, while you are responsible for the logic around what is sent and when.
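For example, here is a minimal sketch of that application-side throttling using the Java JMS/ActiveMQ client (CMS mirrors the JMS API, so the same idea applies in C++ with a sleep between send() calls); the broker URL, destination name, and message properties are placeholders taken from the question:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ThrottledProducer {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and destination.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createTopic("someTopic"));

        byte[] data = "payload".getBytes();
        while (true) {
            BytesMessage message = session.createBytesMessage();
            message.setStringProperty("M_P_C_N", "someMsg");
            message.setStringProperty("M_P_T_N", "someTopic");
            message.writeBytes(data);
            producer.send(message);

            // Application-side throttling: roughly one message per second.
            Thread.sleep(1000);
        }
    }
}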
I have implemented the example from the RabbitMQ website:
RabbitMQ Example
I have expanded it into an application with a button that sends a message.
I then started two consumers on two different computers.
When I send messages, the first message goes to computer1, the second to computer2, the third to computer1 again, and so on.
Why is this, and how can I change the behavior to send each message to each consumer?
Why is this
As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design; it makes it easy to scale up the number of consumers for a given queue.
how can I change the behavior to send each message to each consumer?
To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue.
The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key.
If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys.
Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
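For example, a minimal sketch with the RabbitMQ Java client, assuming a fanout exchange named "messages.fanout" (all names and hosts here are placeholders): each consumer declares its own exclusive, server-named queue and binds it to the exchange, so every consumer receives a copy of every message.

import com.rabbitmq.client.*;

public class FanoutDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host

        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.exchangeDeclare("messages.fanout", BuiltinExchangeType.FANOUT);

        // Consumer side (run on each computer): an exclusive, server-named queue per consumer.
        String queueName = channel.queueDeclare().getQueue();
        channel.queueBind(queueName, "messages.fanout", "");
        channel.basicConsume(queueName, true,
                (consumerTag, delivery) -> System.out.println("Got: " + new String(delivery.getBody())),
                consumerTag -> { });

        // Producer side: publish to the exchange; a fanout exchange ignores the routing key.
        channel.basicPublish("messages.fanout", "", null, "hello".getBytes());
    }
}

Producer and consumer are shown in one process only for brevity; in practice each consumer runs the declare-and-bind step on its own connection.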
You can't; this is controlled by the server. Check the "Round-robin dispatching" section.
The server decides whose turn it is. I'm not sure whether there is a set of algorithms you can pick from, but in the end the server controls this (I think round-robin is the default),
unless you want to use routing keys and exchanges.
I would see this more as a design question. Ideally, producers create the exchanges and consumers create the queues; each consumer can create its own queue and hook it up to an exchange. This makes sure every consumer gets its messages on its own private queue.
What you're doing is essentially the 'work queues' model, which is used to distribute tasks among worker nodes. Since each task needs to be performed only once, each message is sent to only one node. If you want to send a message to all the nodes, you need a different model called 'pub-sub', where each message is broadcast to all subscribers. The following link is a simple pub-sub tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
I have clients that use an API. The API sends messages to RabbitMQ, and RabbitMQ passes them to workers.
I need to reply to a client if something went wrong: the message wasn't routed to the intended queue, or wasn't picked up for processing in time (full confirmation).
A task that starts only after 5-10 seconds makes no sense.
Accordingly, I have to use the mandatory and immediate flags.
I can't increase the number of workers, and I can't run workers on other servers. That is a requirement.
As far as I can tell, the immediate flag has not been supported since RabbitMQ 3.0.
The RabbitMQ developers suggest using TTL=0 on a queue instead, but then I will not be able to check the status of a message.
Is there any way to change this behavior? Please share your experience of how you have solved problems like this.
Thank you.
I'm not sure, but after reading your original question in Russian, it may be that using both publisher and consumer confirms is what you want. See the last three paragraphs in this answer.
Since you want to get the result for a published message back from your worker, it looks like the RPC pattern is what you want. See the RabbitMQ RPC tutorial; pick the programming-language section you are most comfortable with, the overall concept is the same. You may also find Direct reply-to useful.
It's not the same as the immediate flag's functionality, but if all your publishers operate with the immediate scenario, it may be that the AMQP protocol is not the best choice for this kind of task. Immediate means "deliver this message right now or burn in hell", and you may be in a situation where you publish more than you can process. In such cases RPC plus a response timeout may be a good choice on the application side (e.g. a socket timeout). That doesn't work well for non-idempotent RPC calls while a message is still being processed, though, so you may want to use a per-queue or per-message TTL (or set a queue length limit). If a message is dead-lettered, you can pick it up from the dead-letter queue (in case you need it for some reason).
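A minimal sketch of the RPC-with-timeout idea using the RabbitMQ Java client and Direct reply-to; the queue name "task_queue" and the 5-second deadline are placeholder assumptions, and a worker is expected to publish its reply to the replyTo address:

import com.rabbitmq.client.*;
import java.util.concurrent.*;

public class RpcWithTimeout {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder

        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        BlockingQueue<String> response = new ArrayBlockingQueue<>(1);

        // Direct reply-to: consume from the pseudo-queue in auto-ack mode
        // on the same channel that publishes the request.
        channel.basicConsume("amq.rabbitmq.reply-to", true,
                (consumerTag, delivery) -> response.offer(new String(delivery.getBody())),
                consumerTag -> { });

        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .replyTo("amq.rabbitmq.reply-to")
                .build();
        channel.basicPublish("", "task_queue", props, "do work".getBytes());

        // Application-side deadline: report failure if no worker answered in time.
        String result = response.poll(5, TimeUnit.SECONDS);
        System.out.println(result == null
                ? "No response in time - tell the client it failed"
                : "Worker replied: " + result);

        conn.close();
    }
}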
TL;DR
As to "something" can go wrong, it can go so on different levels which we for simplicity define as:
before RabbitMQ, like sending application failure and network problems;
inside RabbitMQ, say, missed destination queue, message timeout, queue length limit, some hard and unexpected internal error;
after RabbitMQ, in most cases - messages processing application error or some third-party services like data persistence or caching layer outage.
Some errors like network outage or hardware error are a bit epic and are not a subject of this q/a.
The typical scenario for guaranteed message delivery is to use publisher confirms or transactions (which are slower). Once you receive a confirm, it means RabbitMQ got your message and, if the message had a route, placed it in a queue. If it did not, the message is dropped, or, if the mandatory flag is set, returned to you via the basic.return method.
For consumers it is similar: after basic.consume/basic.get, once the client acks a message it is considered received and is removed from the queue.
So when you use confirms on both ends you are protected from message loss (setting aside the possibility of a bug in RabbitMQ itself).
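A minimal publisher-side sketch with the RabbitMQ Java client, assuming a queue named "task_queue" (a placeholder): confirms tell you the broker accepted the message, and the mandatory flag plus a return listener tell you when it could not be routed to any queue.

import com.rabbitmq.client.*;

public class ConfirmingPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder

        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.confirmSelect(); // enable publisher confirms on this channel

        // basic.return is delivered here when a mandatory message cannot be routed to any queue.
        channel.addReturnListener((replyCode, replyText, exchange, routingKey, properties, body) ->
                System.err.println("Unroutable message returned: " + replyText));

        boolean mandatory = true;
        channel.basicPublish("", "task_queue", mandatory, null, "payload".getBytes());

        // Block until the broker confirms the publish (throws on nack or timeout).
        channel.waitForConfirmsOrDie(5_000);

        conn.close();
    }
}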
Bogdan, thank you for your reply.
It seems I did not express my thoughts clearly enough.
The scheme might look like this. Each component of the system must do its own job :)
The idea is to make every component simpler.
How a task is performed:
Clients send requests to the HTTP API and must get a response like this:
Positive: the message has been put in the queue.
Negative: a response with an error and the reason.
When I was talking about confirmation, I meant that I must know that a message was delivered (if there are no free workers, RabbitMQ may drop the message) and the client must be notified.
If a sent message could not be delivered to the intended queue, the client must be notified.
How a message is handled:
Messages are sent off for processing.
The processing status is written into HeartBeat.
Status:
Clients get the status from HeartBeat themselves and then decide what to do.
I'm not sure that RPC would be useful for us: RPC means that clients must wait for a response from the server, and tasks may run for a long time. It adds an extra coupling between clients and servers and extra logic on the client side.
A limited queue size may not be useful either.
There can be a situation where the queue size is greater than the number of workers (a problem in the configuration or the defined settings).
In that case the 5-10 second idea doesn't make sense.
TTL isn't useful because of this:
Setting the TTL to 0 causes messages to be expired upon reaching a queue unless they can be delivered to a consumer immediately. Thus this provides an alternative to basic.publish's immediate flag, which the RabbitMQ server does not support. Unlike that flag, no basic.returns are issued, and if a dead letter exchange is set then messages will be dead-lettered.
Direct reply-to:
The RPC server will then see a reply-to property with a generated name. It should publish to the default exchange ("") with the routing key set to this value (i.e. just as if it were sending to a reply queue as usual). The message will then be sent straight to the client consumer.
Then I will not be able to route messages.
So, I'm sorry if I'm fumbling with the terminology; I'm new to AMQP and RabbitMQ.
I'm working with RabbitMQ and I want to perform a calculation on the server side each time an exchange receives a message.
I have a queue for ratings, and when too many bad reviews are received (let's say more than ten), a consumer should be notified.
What options are there for server-side logic?
I've been reading about Spring RabbitMQ, but I'm not sure.
There isn't really a "server side" with a message-based system; rather, the RabbitMQ service sits somewhere and relays messages to and from any number of producers and consumers. Depending on the hardware you have available, and the amount of processing being performed, these could all be on the same server, or you could have resources dedicated to each task.
Calculations based on the content of messages are the job of consumers, which can be written in any language you feel comfortable with, as long as you use a serialization of the message that all of them understand (e.g. JSON, XML). For a simple counter, you may not need much framework to extract the data you need.
Any number of Queues can receive copies of messages from the same Exchange, so you can either pick up all messages from the exchange and count only the bad reviews, or you can put the rating into the "routing key" and use a "topic exchange" to pre-filter them.
After that, you could use a simple memory store like Redis to store a counter, and when it reaches the limit, either act on it within that consumer, or publish a message to a new exchange for processing by a different consumer.
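As an illustration, a hypothetical consumer using the RabbitMQ Java client and the Jedis Redis client: the exchange name, routing keys, counter key, and threshold are all placeholder assumptions, with the rating carried in the routing key so a topic binding pre-filters the bad reviews.

import com.rabbitmq.client.*;
import redis.clients.jedis.Jedis;

public class BadReviewCounter {
    private static final int THRESHOLD = 10; // "more than ten" bad reviews

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder hosts throughout
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        Jedis redis = new Jedis("localhost");

        // Topic exchange with the rating in the routing key, e.g. "review.1" ... "review.5".
        channel.exchangeDeclare("reviews", BuiltinExchangeType.TOPIC);
        String queue = channel.queueDeclare().getQueue();
        // Bind only the bad ratings so this counter never sees the good ones.
        channel.queueBind(queue, "reviews", "review.1");
        channel.queueBind(queue, "reviews", "review.2");

        channel.basicConsume(queue, true, (consumerTag, delivery) -> {
            long badReviews = redis.incr("bad_review_count");
            if (badReviews > THRESHOLD) {
                // Act here, or notify a different consumer bound to "alert.#".
                channel.basicPublish("reviews", "alert.bad-reviews", null,
                        "too many bad reviews".getBytes());
            }
        }, consumerTag -> { });
    }
}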
I am developing an app and I am using ActiveMQ. Is there any way to have one producer always send messages to one broker, while on the other side there are 3 consumers? Each consumer listens to the broker and can take any message from the queue. Is this possible?
I am using ActiveMQ for writing my app's logs to a database. As you know, writing logs to a database is a time-consuming process, which is why the consumer is much slower than the producer. For example, I send 100,000 messages (huge objects). The producer finishes sending them in 20 minutes, but by the time it has finished, the consumer has processed only 4,000 messages.
Yes, what you are describing is possible. In fact, you can have any number of consumers listening on a single queue. The messages are dispatched in a round-robin fashion between consumers.
What you should be aware of is that ActiveMQ performs much better sending small messages than large ones. If you need to send very large payloads (e.g. 100mb), you are far better off saving the message to a location that is accessible by both the producer and consumers (e.g. a network file system), and sending the location of the message instead. The consumer can then use that to read the message manually. This way you get a relatively small amount of traffic through the message broker.
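A minimal sketch with the ActiveMQ Java (JMS) client showing several consumers competing on one queue; the broker URL and queue name are placeholders, and the consumers are shown in one process only for brevity.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CompetingConsumers {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder
        Connection connection = factory.createConnection();
        connection.start();

        // Three consumers share one queue; the broker dispatches each message to exactly one of them.
        for (int i = 1; i <= 3; i++) {
            final int id = i;
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("LOG.QUEUE"));
            consumer.setMessageListener(message -> System.out.println("Consumer " + id + " got a log message"));
        }

        // Producer side: send to the same queue.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("LOG.QUEUE"));
        for (int i = 0; i < 10; i++) {
            producer.send(session.createTextMessage("log entry " + i));
        }
    }
}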
Let's say I have a ClientRequestMessage message that contains a request for a specific Client. A web application will generate these requests and they need to be sent to the correct Client for handling. I can think of a few options for this.
I could have a single queue that all messages go to and specific client handlers check a property (like ClientId) to decide whether they care about it. This feels wrong on many levels to me though.
I could publish a message to all of the clients and they could decide whether or not they care about it during handling. This seems like too much traffic and wastes each client's time handling messages they shouldn't care about in the first place though.
I could have client-specific queues that these messages get routed to. This one feels the best to me, but I am unsure of how to do it. I'd like to keep it simple and avoid client-specific message types, but I am not sure how to tell NServiceBus "for client A send it to client A's queue, and for client B send it to client B's queue".
So my question is, what is the best (most efficient? easiest to manage?) way to set this up? I am pretty sure I need to use the distributor, but not positive so thought I would ask.
BONUS QUESTION:
Let's say each client has multiple handlers. How can I make sure only one of them handles a given message? Would I need a distributor per client?
If what you really want is a solution that lets you keep a single message type, put a filter on the message based on ClientId, and route the message to a client only when it relates to them, then I would use PServiceBus (pservicebus.codeplex.com). It makes it easy to specify a set of subscriptions for each of your clients, where their messages are all filtered by ClientId into a specific queue or whatever transport you have available. The example below shows filtering a ChatTopic by the UserName property: the subscriber only receives the message at the specified transport when the published message's UserName property is not "TJ". You can also use more complex filters, such as GreaterThan("MyComplexProperty.Blah.ID", 5).
Subscriber.New("MyUserName").Durable(false)
    .SubscribeTo(Topic.Select<ChatTopic>().NotEqual("UserName", "TJ"))
    .AddTransport("Tcp",
        Transport.New<TcpTransport>(
            transport => {
                transport.Format = TransportFormat.Json;
                transport.IPAddress = "127.0.0.1";
                transport.Port = port;
            }), "ChatTopic")
    .Save();
You can tell NSB where to put messages by using the MessageEndpointMappings configuration section. You can map a specific message type or a whole assembly to a queue. If you don't want to create specific message types and map them, then I would recommend the publish approach. The overhead of removing a message from the queue is pretty minimal.
If your "client" has many instances of NSB to pick up messages then you will need to use a Distributor. Check out the distributed Pub/Sub documentation.