Calling `recvfrom` from multiple tasks on the same server socket - VxWorks

I am developing a system in which I have to communicate with 18 different subsystems.
All 18 subsystems are UDP clients, and I have created a UDP server.
I'm using recvfrom to receive data from these 18 subsystems.
char buf[1000];
int buf_len = 1000;
int sockfd;
struct sockaddr_in client_addr;
int sock_addr_size = sizeof(struct sockaddr_in);
int bytes_read;

//Code to create socket
//Code to configure socket
//Code to bind socket

FOREVER
{
    bytes_read = recvfrom(sockfd, (void *)buf, buf_len, 0,
                          (struct sockaddr *)&client_addr, &sock_addr_size);
    //Spawn a new task to process the data
}
I have three options to process the received data:

1. Process the data immediately after it is received. This approach is not feasible, as it would increase the latency of message processing and the system would lose its deterministic hard real-time capabilities.
2. Spawn a new task after receiving new data. This new task would process the incoming data and forward the result to the appropriate consumer task.
3. Create multiple tasks, each running recvfrom on the same socket. Each task would process the data immediately after receiving it and forward the result to the appropriate consumer task.
I am more inclined towards option 3 (sketched below). I wish to know whether VxWorks allows recvfrom to be called from multiple distinct tasks on the same server socket, or whether this will cause complications.
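For illustration, here is a minimal sketch of option 3. It assumes a kernel-mode build, where all tasks share one global file-descriptor table; the port number, task priority, stack size and worker count are hypothetical. Each worker blocks in recvfrom, and each datagram is delivered to exactly one of the blocked workers.

#include <vxWorks.h>
#include <taskLib.h>
#include <sockLib.h>
#include <inetLib.h>
#include <netinet/in.h>
#include <string.h>

#define SERVER_PORT 5000   /* hypothetical */
#define NUM_WORKERS 3      /* hypothetical */

static int sockfd;

/* Worker: blocks in recvfrom(); each datagram is handed to
 * exactly one of the workers blocked on the socket. */
static int udpWorker(void)
{
    char buf[1000];
    struct sockaddr_in clientAddr;
    int addrLen;
    int bytesRead;

    FOREVER
    {
        addrLen = sizeof(clientAddr);
        bytesRead = recvfrom(sockfd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&clientAddr, &addrLen);
        if (bytesRead > 0)
        {
            /* process the datagram, then forward the result
             * to the appropriate consumer task */
        }
    }
}

STATUS udpServerStart(void)
{
    struct sockaddr_in serverAddr;
    int i;

    sockfd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sockfd == ERROR)
        return ERROR;

    memset(&serverAddr, 0, sizeof(serverAddr));
    serverAddr.sin_family      = AF_INET;
    serverAddr.sin_port        = htons(SERVER_PORT);
    serverAddr.sin_addr.s_addr = INADDR_ANY;

    if (bind(sockfd, (struct sockaddr *)&serverAddr, sizeof(serverAddr)) == ERROR)
        return ERROR;

    /* Spawn the pool of identical receiver tasks. */
    for (i = 0; i < NUM_WORKERS; i++)
        taskSpawn("tUdpWorker", 100, 0, 8192, (FUNCPTR)udpWorker,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    return OK;
}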

Related

How to receive from two UDP clients concurrently using a multi-threaded UDP server in Python?

I am trying to implement a concurrent (multi-threaded) UDP server which can accept new clients and communicate with existing clients simultaneously. I have a main Python program (the server) which creates a thread for every newly accepted client and then loops back to wait for requests from new clients. Each created thread handles its client's uploads, downloads and other tasks. The problem arises when two socket.recvfrom(1024) calls run simultaneously: one in the main thread (where new clients are accepted) and one in a previously created thread where a function tries to receive data from the corresponding client. Because both recvfrom calls run concurrently on the same socket, a UDP packet sent by a client can end up in an unexpected place.
In my case, when an already-connected client sends data intended for its handler thread, that data is sometimes received instead by the main thread, which is supposed to receive only new-client requests. The client then gets stuck.
My code for the main program (main thread) and the function (which runs as a separate thread) is given below:
import socket
import threading

UDP_IP = "0.0.0.0"  # example bind address
PORT = 5005         # example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_IP, PORT))

def ClientHandler(addr):
    while True:
        menu = "Select an option from below:\n 1. Upload\n 2. Download"
        sock.sendto(menu.encode(), addr)
        data, addr = sock.recvfrom(1024)  # receiver 2, running in a parallel thread
        if data.decode() == "Upload" or data.decode() == "upload":
            ReceiveFile()  # receive a file from the client
        elif data.decode() == "Download" or data.decode() == "download":
            SendFile()  # send a file to the client

while True:
    print("\nListening...\n")
    data, addr = sock.recvfrom(1024)  # receiver 1, running in the main thread
    if data.decode() == "Hello":  # using the "Hello" keyword to detect a new client
        print("Connected to a New Client " + str(addr))
        t1 = threading.Thread(target=ClientHandler, args=(addr,))
        t1.start()

How to keep a Redis connection open when reading from a reactive API

I am continuously listening on Redis streams using the Spring reactive API (with the Lettuce driver) over a standalone connection. It seems like the reactor's event loop opens a new connection every time it reads messages instead of keeping the connection open. I see a lot of TIME_WAIT ports on my machine when I run my program. Is this normal? Is there a way to tell Lettuce to re-use the connection instead of reconnecting every time?
This is my code:
StreamReceiver<String, MapRecord<String, String, String>> receiver = StreamReceiver.create(factory);
return receiver
        .receive(Consumer.from(keyCacheStreamsConfig.getConsumerGroup(), keyCacheStreamsConfig.getConsumer()),
                StreamOffset.create(keyCacheStreamsConfig.getStreamName(), ReadOffset.lastConsumed()))//
        // flatMap reads 256 messages by default and processes them on the given scheduler
        .flatMap(record -> Mono.fromCallable(() -> consumer.consume(record)).subscribeOn(Schedulers.boundedElastic()))//
        .doOnError(t -> {
            log.error("Error processing.", t);
            streamConnections.get(nodeName).setDirty(true);
        })//
        .onErrorContinue((err, elem) -> log.error("Error processing message. Continue listening."))//
        .subscribe();
It looks like the spring-data-redis library re-uses the connection only if the poll timeout in the stream receiver options is set to 0 and the options are passed as the second argument in StreamReceiver.create(factory, options). I figured this out by looking into spring-data-redis' source code.

Is there a way to control the number of bytes read in Reactor Netty's TcpClient?

I am using TcpClient to connect to a simple TCP echo server. Messages consist of the message size in 4 bytes followed by the message itself. For instance, to send the message "hello", the server will expect "0005hello", and respond with "0005hello".
When testing under load (approximately 300+ concurrent users), adjacent requests sometimes result in responses "piling up", e.g. sending "0004good" followed by "0003day" might result in the client receiving "0004good0003" followed by "day".
In a conventional, non-WebFlux-based TCP client, one would normally read the first 4 bytes from the socket into a buffer, determine the length of the message N, then read the following N bytes from the socket into a buffer, before returning the response. Is it possible to achieve such fine-grained control, perhaps by using TcpClient's underlying Channel?
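For reference, here is a minimal sketch of that conventional blocking approach in plain C (the readFull/readFrame helpers and the error handling are illustrative only, assuming the 4-byte ASCII length prefix described above):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly n bytes, looping over short reads. */
static int readFull(int fd, char *buf, size_t n)
{
    size_t got = 0;
    while (got < n)
    {
        ssize_t r = recv(fd, buf + got, n - got, 0);
        if (r <= 0)
            return -1;  /* error or peer closed the connection */
        got += (size_t)r;
    }
    return 0;
}

/* Read one "NNNNpayload" frame; returns the payload length or -1. */
static int readFrame(int fd, char *payload, size_t max)
{
    char lenField[5] = {0};
    long msgLen;

    if (readFull(fd, lenField, 4) != 0)   /* first 4 bytes: the length */
        return -1;
    msgLen = strtol(lenField, NULL, 10);  /* e.g. "0005" -> 5 */
    if (msgLen < 0 || (size_t)msgLen > max)
        return -1;
    if (readFull(fd, payload, (size_t)msgLen) != 0)  /* then N bytes */
        return -1;
    return (int)msgLen;
}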
I have also considered the approach of accumulating responses in some data structure (Queue, StringBuffer, etc.) and having a daemon parse the result, but this has not had the desired performance in practice.
I solved this by adding a handler of type LengthFieldBasedFrameDecoder to the Connection:
TcpClient.create()
        .host(ADDRESS)
        .port(PORT)
        .doOnConnected((connection) -> {
            connection.addHandler("parseLengthFromFirstFourBytes", new LengthFieldBasedFrameDecoder(9999, 0, 4) {
                @Override
                protected long getUnadjustedFrameLength(ByteBuf buf, int offset, int length, ByteOrder order) {
                    ByteBuf lengthBuffer = buf.copy(0, 4);
                    byte[] messageLengthBytes = new byte[4];
                    lengthBuffer.readBytes(messageLengthBytes);
                    String messageLengthString = new String(messageLengthBytes);
                    return Long.parseLong(messageLengthString);
                }
            });
        })
        .connect()
        .subscribe();
This solves the issue with the caveat that responses still "pile up" (as described in the question) when the application is subjected to sufficient load.

Query for Number of Messages in Mule ESB VM Queue

In a Mule flow, I would like to add an Exception Handler that forwards messages to a "retry queue" when there is an exception. However, I don't want this retry logic to run automatically. Instead, I'd rather receive a notification so I can review the errors and then decide whether to retry all messages in the queue or not.
I don't want to receive a notification for every exception. I'd rather have a scheduled job that runs every 15 minutes and checks to see if there are messages in this retry queue and then only send the notification if there are.
Is there any way to determine how many messages are currently in a persistent VM queue?
Assuming you use the default VM queue persistence mechanism and that the VM connector is named vmConnector, you can do this:
final String queueName = "retryQueue";
int messageCount = 0;

final VMConnector vmConnector = (VMConnector) muleContext.getRegistry()
        .lookupConnector("vmConnector");

for (final Serializable key : vmConnector.getQueueProfile().getObjectStore().allKeys())
{
    final QueueKey queueKey = (QueueKey) key;
    if (queueName.equals(queueKey.queueName))
    {
        messageCount++;
    }
}

System.out.printf("Queue %s has %d pending messages%n", queueName, messageCount);

Why am I losing data when using a vxWorks pipe?

I am using pipes to transfer information between two vxWorks tasks.
Here is a code sample:
int fd;

void Init()
{
    fd = open("/pipe/mydev", O_RDWR, 0777);
    ...
}

void taskRx()
{
    ...
    len = read(fd, rxbuf, MAX_RX_LEN);
    ...
}

void taskTx()
{
    ...
    len = write(fd, txbuf, txLen);
    ...
}
If we send a message that is longer than MAX_RX_LEN (i.e. txLen > MAX_RX_LEN), we do a second read to get the remainder of the message.
What we noticed is that the 2nd read didn't receive any data!
Why is that?
VxWorks' pipe mechanism is not stream-based (unlike Unix named pipes).
It is a layer on top of the VxWorks message queue facility. As such, it has the same limitations as a message queue: each read from the pipe consumes an entire message. If your receive buffer does not have enough space to store the received data, the overflow is simply discarded.
When receiving from a message queue or a pipe, always make sure the buffer is at least as large as the maximum size of a queue element.
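To make that concrete, here is a minimal sketch. The pipe name matches the question; the message count and MAX_MSG_LEN are hypothetical values, and the buffer passed to read is deliberately sized to the pipe's maximum message length:

#include <vxWorks.h>
#include <pipeDrv.h>
#include <ioLib.h>
#include <fcntl.h>

#define MAX_MSG_LEN 256  /* hypothetical: the pipe's maximum message size */

STATUS pipeReadExample(void)
{
    char rxbuf[MAX_MSG_LEN];
    int fd;
    int len;

    /* Create the pipe with a known maximum message size
     * (assumes the pipeDrv driver has been installed at boot). */
    if (pipeDevCreate("/pipe/mydev", 10, MAX_MSG_LEN) == ERROR)
        return ERROR;

    fd = open("/pipe/mydev", O_RDWR, 0777);
    if (fd == ERROR)
        return ERROR;

    /* Each read() returns one whole message; because rxbuf is as large
     * as the biggest possible message, nothing is silently discarded. */
    len = read(fd, rxbuf, sizeof(rxbuf));

    close(fd);
    return (len >= 0) ? OK : ERROR;
}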