How can a task wait on multiple vxWorks queues?

We have a vxWorks design which requires one task to process both high and low priority messages sent over two message queues.
The messages for a given priority have to be processed in FIFO order.
For example, process all the high priority messages in the order they were received, then process the low priority messages. If there is no high priority message, then process the low priority message immediately.
Is there a way to do this?

If you use named pipes (pipeDevCreate(), write(), read()) instead of message queues, you can use select() to block until there are messages in either pipe.
Whenever select() triggers, you process all messages in the high priority pipe. Then you process a single message from the low priority pipe. Then call select again (loop).
Example Code snippets:
// Initialization: create the high- and low-priority named pipes
pipeDrv();                                   // initialize the pipe driver
int fdHi = pipeDevCreate("/pipe/high", numMsgs, msgSize);
int fdLo = pipeDevCreate("/pipe/low", numMsgs, msgSize);
...
// Message-sending task: add messages to a pipe
write(fdHi, buf, sizeof(buf));
...
// Message-processing task: select() loop
fd_set rdFdSet;
while (1)
{
    FD_ZERO(&rdFdSet);
    FD_SET(fdHi, &rdFdSet);
    FD_SET(fdLo, &rdFdSet);
    if (select(FD_SETSIZE, &rdFdSet, NULL, NULL, NULL) != ERROR)
    {
        if (FD_ISSET(fdHi, &rdFdSet))
        {
            // drain all high-priority messages
            // (assumes fdHi was made non-blocking, e.g. with ioctl(fdHi, FIONBIO, ...),
            // so read() returns <= 0 once the pipe is empty)
            while (read(fdHi, buf, size) > 0)
            {
                // process a high-priority message
            }
        }
        if (FD_ISSET(fdLo, &rdFdSet))
        {
            // process a single low-priority message
            if (read(fdLo, buf, size) > 0)
            {
                // process the low-priority message
            }
        }
    }
}

In vxWorks you can't wait directly on multiple queues. You can, however, use the OS events (from eventLib) to achieve this result.
Here is a simple code snippet:
MSG_Q_ID lowQ, hiQ;

void Init() {
    // Task initialization code. This should be called from the task that will
    // be receiving the messages.
    ...
    hiQ  = msgQCreate(...);
    lowQ = msgQCreate(...);
    msgQEvStart(hiQ,  VXEV01, EVENTS_OPTIONS_NONE); // event 1 sent when hiQ receives a message
    msgQEvStart(lowQ, VXEV02, EVENTS_OPTIONS_NONE); // event 2 sent when lowQ receives a message
    ...
}

void RxMessages() {
    ...
    UINT32 ev; // events received
    // Block until we receive event 1 or 2
    eventReceive(VXEV01 | VXEV02, EVENTS_WAIT_ANY, WAIT_FOREVER, &ev);
    if (ev & VXEV01) {
        msgQReceive(hiQ, ...);
    }
    if (ev & VXEV02) {
        msgQReceive(lowQ, ...);
    }
}
Note that you need to modify that code to make sure you drain all your queues, in case more than one message was received before the event was consumed.
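For instance, the receive path could drain each queue with a NO_WAIT receive until msgQReceive() reports the queue is empty. This is only a sketch: MAX_MSG_LEN is a placeholder for your maximum message size and error handling is omitted.
char buf[MAX_MSG_LEN];
// Drain the high-priority queue first, then the low-priority queue.
// msgQReceive() with NO_WAIT returns ERROR once the queue is empty.
while (msgQReceive(hiQ, buf, sizeof(buf), NO_WAIT) != ERROR) {
    // process a high-priority message
}
while (msgQReceive(lowQ, buf, sizeof(buf), NO_WAIT) != ERROR) {
    // process a low-priority message
}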
The same mechanism can also be applied to binary semaphores using the semEvStart() function.
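For example, a binary semaphore given from an ISR could be tied to a third event bit alongside the two queues. This is only a sketch, assuming your eventLib version provides the VXEV03 and EVENTS_OPTIONS_NONE constants shown:
SEM_ID isrSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
semEvStart(isrSem, VXEV03, EVENTS_OPTIONS_NONE); // event 3 sent when the semaphore is given
...
eventReceive(VXEV01 | VXEV02 | VXEV03, EVENTS_WAIT_ANY, WAIT_FOREVER, &ev);
if (ev & VXEV03) {
    semTake(isrSem, NO_WAIT); // take the semaphore that triggered the event
    // handle the condition signalled by the ISR
}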

Related

mbed: Triggering an interrupt on reception of a UDP broadcast message

I'm trying to trigger an interrupt function each time I receive a broadcast message on a given port of an STM32 board (Nucleo f429zi). The communication protocol I use is UDP and the mbed library is UDPSocket which inherits from Socket.
Does anyone have an idea how to achieve it?
Edit:
Thanks to PeterJ's comment I found an interesting (but deprecated) member function of the class Socket which is called attach(). This method registers a callback on state change of the socket (recv/send/accept).
Since I have an incoming broadcast on the socket, there is no state change in my case (only receiving data, never sending). Is there a way I could use this attach() method to detect every message received?
// Open Ethernet connection
EthernetInterface eth;
eth.connect();
// Create a UDP socket for listening to the broadcast
UDPSocket broadcastSocket;
broadcastSocket.open(&eth);
broadcastSocket.bind(BROADCAST_PORT);
// Function to call when a broadcast message is received
broadcastSocket.attach(&onUDPSocketEvent);

void onUDPSocketEvent(){
    printf("UDP event detected\n");
}
attach has been replaced by sigio, but I don't think it's going to do what you want. A nice way would be to spin up a new thread, and use this thread to handle the socket.
void onUDPSocketEvent(void* buffer, size_t size) {
    printf("UDP event detected\n");
}

void udp_main() {
    // Open Ethernet connection
    EthernetInterface eth;
    eth.connect();

    // Create a UDP socket for listening to the broadcast
    UDPSocket broadcastSocket;
    broadcastSocket.open(&eth);
    broadcastSocket.bind(BROADCAST_PORT);

    void* recvBuffer = malloc(1024);

    while (1) {
        // this blocks until the next packet comes in
        nsapi_size_or_error_t size = broadcastSocket.recvfrom(NULL, recvBuffer, 1024);
        if (size < 0) {
            printf("recvfrom failed with error code %d\n", size);
            continue; // don't hand a negative size to the handler
        }
        onUDPSocketEvent(recvBuffer, size);
    }
}

int main() {
    Thread t; // can pass in the stack size here if you run out of memory
    t.start(&udp_main);
    while (1) {
        wait(osWaitForever);
    }
}
(note that the callback function does not run in an ISR - so not in an interrupt context - but I assume you don't actually want that).
Edit: I have created mbed-udp-ping-pong which shows how to listen for UDP messages on a separate thread.

RabbitMQ - Non Blocking Consumer with Manual Acknowledgement

I'm just starting to learn RabbitMQ so forgive me if my question is very basic.
My problem is actually the same with the one posted here:
RabbitMQ - Does one consumer block the other consumers of the same queue?
However, upon investigation, I found out that manual acknowledgement prevents other consumers from getting a message from the queue - a blocking state. I would like to know how I can prevent this. Below is my code snippet.
...
var message = receiver.ReadMessage();
Console.WriteLine("Received: {0}", message);
// simulate processing
System.Threading.Thread.Sleep(8000);
receiver.Acknowledge();
public string ReadMessage()
{
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = (BasicDeliverEventArgs)Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}

public void Acknowledge()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
I modified how I get messages from the queue, and the blocking issue seems to be fixed. Below is my code.
public string ReadOneAtTime()
{
    Consumer = new QueueingBasicConsumer(Model);
    var result = Model.BasicGet(QueueName, false);
    if (result == null) return null;
    DeliveryTag = result.DeliveryTag;
    return Encoding.ASCII.GetString(result.Body);
}

public void Reject()
{
    Model.BasicReject(DeliveryTag, true);
}

public void Acknowledge()
{
    Model.BasicAck(DeliveryTag, false);
}
Going back to my original question, I added the QoS setting and noticed that other consumers can now get messages. However, some messages are left unacknowledged and my program seems to hang. The code changes are below:
public string ReadMessage()
{
    Model.BasicQos(0, 1, false); // control prefetch
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}

public void AckConsume()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
In Program.cs
private static void Consume(Receiver receiver)
{
    int counter = 0;
    while (true)
    {
        var message = receiver.ReadMessage();
        if (message == null)
        {
            Console.WriteLine("NO message received.");
            break;
        }
        else
        {
            counter++;
            Console.WriteLine("Received: {0}", message);
            receiver.AckConsume();
        }
    }
    Console.WriteLine("Total message received {0}", counter);
}
I appreciate any comments and suggestions. Thanks!
Well, RabbitMQ provides an infrastructure in which one consumer cannot lock or block another consumer working with the same queue.
The behaviour you ran into can be the result of a couple of issues:
Because you are not using auto-ack mode on the channel, one consumer can have taken a message without having sent the acknowledgement (basic ack) yet, meaning that processing is still in progress. There is a chance the consumer will fail to process that message, so it is kept in the RabbitMQ queue to prevent message loss (the total number of messages shown in the management console will not change). During this period (from the moment the message reaches the client code until the explicit acknowledgement is sent), the message is marked as being used by that specific client and is not available to other consumers. However, this does not prevent other consumers from taking other messages from the queue, if there are more messages to take.
IMPORTANT: to prevent message loss with manual acknowledgement, make sure to close the channel or send a nack if processing fails; otherwise your application could take a message from the queue, fail to process it, have it removed from the queue anyway, and lose the message.
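As a sketch of that pattern, using the Receiver wrapper from the question (the Process() call is a hypothetical placeholder for your own processing code, and Reject() is assumed to requeue via BasicReject with requeue = true, as in your second snippet):
var message = receiver.ReadMessage();
try
{
    Process(message);        // hypothetical processing step
    receiver.Acknowledge();  // BasicAck: the message is now removed from the queue
}
catch (Exception)
{
    receiver.Reject();       // BasicReject with requeue = true: the message goes back to the queue
}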
Another reason why other consumers can't work with the same queue is QoS, a channel parameter declaring how many messages should be pushed to the client's cache to improve dequeue latency (working with a local cache). Your code example lacks this part, so I am just guessing: in a case like this the QoS (prefetch) can be so big that all the messages on the server are marked as belonging to one client, and no other client can take any of them, exactly like with the manual ack I've already described.
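For instance, a modest per-consumer prefetch can be set on the channel before BasicConsume; the value 10 below is arbitrary and only for illustration:
// prefetchSize = 0 (no byte limit), prefetchCount = 10, global = false (applies per consumer)
Model.BasicQos(0, 10, false);
Model.BasicConsume(QueueName, false, Consumer);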
Hope this helps.

Custom component to read messages from queue

I'm using Mule 3.3.1
I am trying to write a component that reads all the available messages from the queue; I intend to poll it using a Quartz scheduler.
Here is my code.
@Override
public Object onCall(MuleEventContext muleEventContext) throws Exception {
    MuleMessage[] messages = null;
    MuleMessage result = muleEventContext.getMessage();
    do {
        if (result == null) {
            break;
        }
        if (result instanceof MuleMessageCollection) {
            MuleMessageCollection resultsCollection = (MuleMessageCollection) result;
            System.out.println("Number of messages: " + resultsCollection.size());
            messages = resultsCollection.getMessagesAsArray();
        } else {
            messages = new MuleMessage[1];
            messages[0] = result;
        }
        result = muleEventContext.getMessage();
    } while (result != null);
    return messages;
}
Unfortunately, it loops indefinitely on the first message. Thoughts?
The onCall() method provided in the post loops forever because muleEventContext.getMessage() always returns the same MuleMessage, so the loop condition never fails.
The MuleEventContext object is not an iterator or stream whose pointer advances to the next element after the current one has been read.
To read all the available messages from the queue, you can use a JMS inbound endpoint that polls the queue and reads the messages. But remember that each message on the queue will arrive as its own iteration (one message) from your JMS inbound.
If you want to gather all the queued messages as a collection of objects and then proceed, that is a different story, and it cannot be done with your component code.
If you intend to gather all your messages as a collection and then start processing, try something like Mule's collection-aggregator on your inbound.
Hope this helps.

RabbitMQ - retrieve multiple messages using a single synchronous call

Is there a way to receive multiple messages using a single synchronous call?
When I know that there are N messages (N could be a small value, less than 10) in the queue, I should be able to do something like channel.basic_get(String queue, boolean autoAck, int numberofMsg). I don't want to make multiple requests to the server.
RabbitMQ's basic.get doesn't support multiple messages, unfortunately, as seen in the docs. The preferred method to retrieve multiple messages is to use basic.consume, which pushes the messages to the client and avoids multiple round trips. Acks are asynchronous, so your client won't be waiting for the server to respond. basic.consume also has the benefit of allowing RabbitMQ to redeliver the message if the client disconnects, something that basic.get cannot do. This can be turned off as well by setting no-ack to true.
Setting the basic.qos prefetch-count will set the number of messages to push to the client at any time. If there isn't a message waiting on the client side (which would return immediately), client libraries tend to block, with an optional timeout.
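A rough sketch of that combination with the Java client is below; the queue name and prefetch value are placeholders, and channel is assumed to be an open com.rabbitmq.client.Channel:
channel.basicQos(10); // let the broker push at most 10 unacknowledged messages at a time
channel.basicConsume("myQueue", false, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // process body ...
        getChannel().basicAck(envelope.getDeliveryTag(), false);
    }
});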
You can use a QueueingConsumer implementation of the Consumer interface, which allows you to retrieve several messages in a single request.
QueueingConsumer queueingConsumer = new QueueingConsumer(channel);
channel.basicConsume(plugin.getQueueName(), false, queueingConsumer);
for (int i = 0; i < 10; i++) {
    QueueingConsumer.Delivery delivery = queueingConsumer.nextDelivery(100); // read timeout in ms
    if (delivery == null) {
        break;
    }
}
Not an elegant solution, and it does not avoid making multiple calls, but you can use the MessageCount method. For example:
bool noAck = false;
var messageCount = channel.MessageCount("hello");
BasicGetResult result = null;
if (messageCount == 0)
{
    // No messages available
}
else
{
    while (messageCount > 0)
    {
        result = channel.BasicGet("hello", noAck);
        var message = Encoding.UTF8.GetString(result.Body);
        // process message .....
        messageCount = channel.MessageCount("hello");
    }
}
First, declare an instance of QueueingBasicConsumer(), which wraps the model.
From the model, execute model.BasicConsume(QueueName, false, consumer).
Then implement a loop that iterates over the messages from the queue and processes them.
The consumer.Queue.Dequeue() call waits for a message to be received from the queue.
Then convert the byte array to a string and display it.
Model.BasicAck() releases the message from the queue so that the next message can be received.
The server side can then keep waiting for the next message to come through:
public string GetMessagesByQueue(string QueueName)
{
    var consumer = new QueueingBasicConsumer(_model);
    _model.BasicConsume(QueueName, false, consumer);
    string message = string.Empty;
    while (Enabled)
    {
        // Get next message
        var deliveryArgs = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        // Serialize message
        message = Encoding.Default.GetString(deliveryArgs.Body);
        _model.BasicAck(deliveryArgs.DeliveryTag, false);
    }
    return message;
}

RTOS - pending on different data in a queue

I'm programming a board from TI, and I'd like to somehow be able to have two different ISR's post to a task's message queue. That part works fine. However, on the receiving end, is there any intelligent way for the task to pend on its queue and perform a different operation on the data based on which ISR posted?
Basically, I have an LCD update task that displays information from my motors. However, if I have a motor sensor ISR and a button press ISR that send different information to be updated, can this be done on one queue?
Sure. When each ISR sends a message to the queue, put something in the message that identifies the ISR that sent it. Then, when the receiver reads the queue, it can decide which action to take based on the identifier.
ISR1() {
    char msg[4];
    msg[0] = '1';                  // identify the sending ISR
    get_3_ISR1_data_bytes(msg+1);  // get the data
    q_send(msg);
}

ISR2() {
    char msg[4];
    msg[0] = '2';                  // identify the sending ISR
    get_3_ISR2_data_bytes(msg+1);  // get the data
    q_send(msg);
}

handler() {
    char msg[4];
    q_rcv(msg);
    switch (msg[0]) {
    case '1':
        // Do ISR1 stuff
        break;
    case '2':
        // Do ISR2 stuff
        break;
    default:
        // Something unpleasant has happened
        break;
    }
}
If an entire char is too expensive, you can set just one bit (to 0 or 1) to identify the ISR.
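For example (a sketch only; it assumes the top bit of the first data byte is otherwise unused and keeps the same q_send()/q_rcv() pseudo-API as above):
#define SRC_ISR2 0x80              /* top bit of byte 0 set => message came from ISR2 */

/* sender side, in ISR2 */
msg[0] |= SRC_ISR2;
q_send(msg);

/* receiver side */
if (msg[0] & SRC_ISR2) {
    /* handle ISR2 data */
} else {
    /* handle ISR1 data */
}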