Writing a C++ client-server program using Winsock2 - UDP

I am having some trouble with a UDP-based connection.
In my program, I restrict the time allocated for data transfer between the transmitter and receiver (both of them are sending/receiving in a loop).
When the time is up, the transmitter sends a final message; if the receiver gets and reads it, the receiver knows not to wait for any more packets and the program continues.
The problem is that because the connection is UDP, this final message might never reach the receiver, and then the receiver will keep waiting for messages that no one is sending.
So what is the correct way to finish such a connection?
Thanks!
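A common safeguard here is a receive timeout on the socket, so the receiver cannot block forever if the final datagram is lost. A minimal sketch using Winsock's SO_RCVTIMEO, assuming a bound UDP socket s; the function name and the 2-second value are illustrative:

#include <winsock2.h>

// Minimal sketch: bound the receive so a lost "done" datagram cannot
// hang the receiver forever. Assumes a bound Winsock UDP socket s.
int receive_loop(SOCKET s)
{
    DWORD timeout_ms = 2000;  // give up after 2 s of silence (illustrative)
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
               (const char *)&timeout_ms, sizeof(timeout_ms));

    char buf[1500];
    for (;;) {
        int n = recv(s, buf, sizeof(buf), 0);
        if (n == SOCKET_ERROR) {
            if (WSAGetLastError() == WSAETIMEDOUT)
                break;        // transmitter stopped, or the "done" message was lost
            return -1;        // some other socket error
        }
        // ... process the n received bytes; also break here if this
        // datagram is the explicit end-of-transfer message ...
    }
    return 0;
}

Pairing the timeout with the explicit end-of-transfer message keeps the normal case fast, while the timeout covers the case where that message is lost.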

Related

Does the operation of the CAN peripheral in STM32 wait for the execution of the ISR routine code?

I'm developing a stack layer on an STM32L433 microcontroller that uses the CAN protocol; a fundamental part of the stack is the authentication of the devices.
During authentication it can occur that two (or more) devices start to send a CAN message (an authentication message) with the same identifier and different payloads (true random values). In this case every device should be able to detect whether this message was sent first by another device.
I have studied this case and three situations can occur:
1) The devices start to send the message at the same time; in this case only one device is able to send the message, because all the other devices detect an error and abort the transmission.
2) Only one device is able to send the message and occupy the bus before the other devices load the transmission MAILBOX of the CAN peripheral, or before the CAN peripheral of the other devices sets the pending message to the SCHEDULED state. In this case, the devices that were not able to send the message will receive the reception interrupt; within the reception ISR routine I am able to abort the transmission.
3) Only one device is able to send the message and occupy the bus while the CAN peripherals of all the other devices already have a message in the SCHEDULED state and are waiting for the bus to become idle. In this case the devices that were not able to send the message will also receive the reception interrupt. Here too I thought to abort the transmission within the reception ISR routine (as in situation 2), but I am not sure this is guaranteed for all messages: if the CAN peripheral moves the pending message to the TRANSMIT state before the code inside the ISR has executed, the abort operation will have no effect.
My question (related to situation 3) is: is a message in the transmission MAILBOX in the SCHEDULED state only moved to the TRANSMIT state after the code in the receiving ISR routine has executed, or is this not guaranteed?
To answer your third case first: no, it is not guaranteed that your message is not already on the bus while you are receiving. Interrupts have some latency too, and within that time the mailbox may go ahead with the transmission.
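For illustration, the abort-from-the-receive-ISR idea from situations 2) and 3) might look like the following with the STM32 HAL bxCAN driver; a minimal sketch, assuming g_txMailbox holds the mailbox value returned by HAL_CAN_AddTxMessage():

#include "stm32l4xx_hal.h"

extern uint32_t g_txMailbox;   // mailbox value returned by HAL_CAN_AddTxMessage()

void HAL_CAN_RxFifo0MsgPendingCallback(CAN_HandleTypeDef *hcan)
{
    CAN_RxHeaderTypeDef header;
    uint8_t data[8];

    if (HAL_CAN_GetRxMessage(hcan, CAN_RX_FIFO0, &header, data) != HAL_OK)
        return;

    // Another node already won arbitration with this identifier: try to
    // abort our own pending copy. This only works while the mailbox is
    // still SCHEDULED; if the peripheral has already started transmitting
    // (the race described above), the abort has no effect.
    HAL_CAN_AbortTxRequest(hcan, g_txMailbox);
}

This narrows the race window but does not remove it, which is exactly the latency issue described above.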
Your "authentication" also sounds a bit troublesome, since nobody from outside could also actually decide which ECU was actually the one that won the arbitration and actually sent that specific message.
We have ECUs in vehicles which decide at runtime, according to certain methods, where they are mounted by pin and some CAN reception, but only in listen mode. TX is actually disabled in the stack. After that, detection has completed, we switch configurations and restart the communications stack and further initialize the software going up.
But these "setups" are usually defined beforehand, e.g. due to master/slave (vehicle/private bus communication), or maybe some connector pins connected to GND / OPEN / UBAT, or maybe some bus message which tells on which bus it is on.
That seems to be more reliable than your method.

What happens to client message if Server does not exist in UDP Socket programming?

I ran client.java; when I filled in the form and pressed the send button, the program jammed and I could not do anything.
Is there any explanation for this?
TLDR; the User Datagram Protocol (UDP) is "fire-and-forget".
Unreliable – When a UDP message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission, or timeout.
So if a UDP message is sent and nobody listens then the packet is just dropped. (UDP packets can also be silently dropped due to other network issues/congestion.)
While there could be a prior error, such as resolving the IP for the server (e.g. an invalid hostname) or attempting to use an invalid IP, once the UDP packet is out the door, it's out the door and is considered "successfully sent".
Now, if a program is waiting on a response that never comes (i.e. the server is down or the packet was otherwise lost), then that could be... problematic.
That is, this code which requires a UDP response message to continue would "hang":
sendUDPToServerThatNeverResponds();
// There is no guarantee the server will get the UDP message,
// much less that it will send a reply or the reply will get back
// to the client..
waitForUDPReplyFromServerThatWillNeverCome();
Since UDP has no reliability guarantee or retry mechanism, this must be handled in code. For example, in the above maybe the code would wait for 1 second and retry sending a packet, and after 5 seconds of no responses it would report an error to the client.
reply = NULL;
for (i = 0; i < 5 && !reply; i++) {
    sendUDPToServerThatMayOrMayNotRespond();  // (re)send the request
    reply = waitForUDPReplyForOneSecond();
}
if (reply)
    doSomethingAwesome();
else
    showErrorToUser();
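For instance, the waitForUDPReplyForOneSecond() step above could be built on select(); a minimal sketch for a POSIX-style UDP socket (WinSock is nearly identical), where sock is assumed to be already created and connected:

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

// Returns bytes received, 0 on a one-second timeout, -1 on error.
ssize_t waitForUDPReplyForOneSecond(int sock, char *buf, size_t len)
{
    fd_set readfds;
    struct timeval tv = { 1, 0 };   // one second

    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    int ready = select(sock + 1, &readfds, NULL, NULL, &tv);
    if (ready < 0)
        return -1;                  // select() itself failed
    if (ready == 0)
        return 0;                   // timed out: no reply this round
    return recv(sock, buf, len, 0);
}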
Of course, "just using TCP" can often make these sorts of tasks simpler, due to the stream and reliability characteristics that the Transmission Control Protocol (TCP) provides. For example, the pseudo-code above is not very robust, as the client must also be prepared to handle latent/slow UDP packet arrival from previous requests.
(Also, given the current "screenshot", the code might be as flawed as while(true) {} - make sure to provide an SSCCE and relevant code with questions.)

Suspend operation of lwIP Raw API

I am working on a project using a Zynq (Picozed devboard). The application is run bare-metal, uses lwIP TCP in RAW mode and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250/500 ms, which makes accurate measurement pretty hard. Whenever I'm not calling the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). It seems this is called from some ISR related to error handling, but I cannot really find the exact location.
My question is, how do I suspend the network functionality so I don't need to call tcp_tmr() periodically? I tried closing the connection and disabling the interface (netif_set_down()) and disabling the timer interrupt, but it still seems to have no effect on my problem.
I don't know anything about that devboard or the microcontroller on it, but you should have an ethernetif.c (lwIP port) file which contains the processing of an Ethernet receive interrupt or similar. This should be calling the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it will just stop the higher-level processing of the packet. If you are only measuring execution time for debugging, you could try disabling the Ethernet receive interrupt and not calling tcp_tmr() until you have processed the packets.
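On a bare-metal Zynq that would mean masking the GEM receive interrupt at the GIC around the timed region; a minimal sketch, assuming the Xilinx standalone BSP names (the gic instance, XPAR_XEMACPS_0_INTR, and process_batch() are placeholders for your project's own):

#include <stdio.h>
#include "xscugic.h"
#include "xparameters.h"
#include "xtime_l.h"

extern XScuGic gic;                 // assumed: GIC instance set up elsewhere
extern void process_batch(void);    // placeholder for the code being measured

void process_batch_timed(void)
{
    XTime start, stop;

    // Quiesce the network while timing: with the receive interrupt masked,
    // netif->input stops being called and no new pbufs are allocated.
    XScuGic_Disable(&gic, XPAR_XEMACPS_0_INTR);

    XTime_GetTime(&start);
    process_batch();
    XTime_GetTime(&stop);

    XScuGic_Enable(&gic, XPAR_XEMACPS_0_INTR);

    // COUNTS_PER_SECOND is provided by xtime_l.h on Zynq.
    printf("processing took %llu us\r\n",
           (unsigned long long)(((stop - start) * 1000000ULL) / COUNTS_PER_SECOND));
}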

UDP transmit performance

I have an application that transmits some data in a loop.
The underlying protocol is UDP on WinSock. If I don't add a sleep(1 ms) after each transmit operation, most of the data is not sent (or Wireshark cannot capture it). Have you experienced such behaviour, where UDP does not handle repetitive sending in a loop?
Regards
Tugrul
First, you should check the return value when you send data, to see whether the data was successfully sent or not.
Second, this can happen when the internal UDP buffer cannot accommodate more data because previous data has not yet been transmitted. The simplest solution is to check, each time before sending, whether your UDP socket is writable or not. You can do this by calling "select" or "poll" on that UDP socket.
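A minimal sketch of that check with WinSock's select(); the helper name is illustrative, and s is assumed to be an already-created UDP socket:

#include <winsock2.h>

// Wait until the socket can accept more data, then send and report
// the result; SOCKET_ERROR means the caller should back off and retry.
int send_when_writable(SOCKET s, const char *buf, int len,
                       const struct sockaddr *to, int tolen)
{
    fd_set writefds;
    FD_ZERO(&writefds);
    FD_SET(s, &writefds);

    // The first argument is ignored by WinSock; a NULL timeout blocks
    // until the socket becomes writable.
    if (select(0, NULL, &writefds, NULL, NULL) == SOCKET_ERROR)
        return SOCKET_ERROR;

    return sendto(s, buf, len, 0, to, tolen);   // check this return value!
}

Checking the sendto() return value (the first point above) then tells you whether the stack actually accepted each datagram.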

Preferred method of notifying upper layers about a received message

I'm writing a RS485 driver for an embedded C project.
The driver is listening for incoming messages and should notify the upper layer application when a complete message is received and ready to be read.
What is the preferred way to do this?
By using interrupts? Trigger a SW interrupt and read the message from within the ISR?
Let the application poll the driver periodically?
I generally do as little work as possible in the ISR to secure the received data or clean up the transmitted data. This will usually mean reading data out of the hardware buffers and into a circular buffer.
On receive, for a multi-threaded os, a receive interrupt empties the hardware, clears the interrupt and signals a thread to service the received data.
For a polling environment, a receive interrupt empties the hardware, clears the interrupt, and sets a flag to notify the polling loop that it has something to process.
Since interrupts can occur at any time, the data structures shared between the ISR and the polling loop or processing thread must be protected by a mutual-exclusion mechanism.
Often this will mean disabling interrupts briefly while you adjust a pointer or count.
If the received data is packetized, you can hunt for packet boundaries in the ISR and notify the handler only when a full packet has arrived.
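Putting those pieces together, a minimal sketch of the circular-buffer-plus-flag pattern; the UART_DR register, the FRAME_END delimiter, and the critical-section helpers are hypothetical placeholders for the real platform:

#include <stdint.h>
#include <stdbool.h>

#define RX_BUF_SIZE 256u                 // power of two so indices wrap cheaply
#define FRAME_END   0x7Eu                // hypothetical end-of-packet delimiter

extern volatile uint8_t UART_DR;         // hypothetical UART data register
extern void disable_interrupts(void);    // hypothetical critical-section helpers
extern void enable_interrupts(void);

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint16_t rx_head;        // advanced by the ISR only
static uint16_t          rx_tail;        // advanced by the polling loop only
static volatile bool     rx_packet_ready;

// Receive ISR: do as little as possible -- move the byte out of the
// hardware into the circular buffer and flag a complete packet.
void rs485_rx_isr(void)
{
    uint8_t byte = UART_DR;              // reading also clears the interrupt here

    rx_buf[rx_head % RX_BUF_SIZE] = byte;
    rx_head++;

    if (byte == FRAME_END)
        rx_packet_ready = true;          // notify the polling loop
}

// Called periodically by the application; out must hold at least
// RX_BUF_SIZE bytes. Returns true when a full packet was copied out.
bool rs485_poll(uint8_t *out, uint16_t *out_len)
{
    if (!rx_packet_ready)
        return false;

    disable_interrupts();                // briefly protect the shared indices
    uint16_t n = 0;
    while (rx_tail != rx_head) {
        out[n++] = rx_buf[rx_tail % RX_BUF_SIZE];
        rx_tail++;
    }
    rx_packet_ready = false;
    enable_interrupts();

    *out_len = n;
    return true;
}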