Preferred method of notifying upper layers about a received message - embedded

I'm writing an RS485 driver for an embedded C project.
The driver is listening for incoming messages and should notify the upper layer application when a complete message is received and ready to be read.
What is the preferred way to do this?
By using interrupts? Trigger a SW interrupt and read the message from within the ISR?
Let the application poll the driver periodically?

I generally do as little work as possible in the ISR to secure the received data or clean up the transmitted data. This will usually mean reading data out of the hardware buffers and into a circular buffer.
On receive, for a multi-threaded OS, a receive interrupt empties the hardware, clears the interrupt and signals a thread to service the received data.
For a polling environment, a receive interrupt empties the hardware, clears the interrupt, and sets a flag to notify the polling loop that it has something to process.
Since interrupts can occur at any time, the data structures shared between the ISR and the polling loop or processing thread must be protected by a mutual exclusion mechanism.
Often this will mean disabling interrupts briefly while you adjust a pointer or count.
If the received data is packetized, you can hunt for packet boundaries in the ISR and notify the handler only when a full packet has arrived.
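As a rough illustration of the polling flavour, a minimal sketch in C might look like this (UART_DATA, FRAME_END, disable_interrupts(), enable_interrupts() and process_message() are placeholder names, not a specific vendor API):

#include <stdint.h>

#define RX_BUF_SIZE 128u

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head;               /* written only by the ISR */
static uint8_t          rx_tail;               /* written only by the polling loop */
static volatile uint8_t msg_ready;             /* set by the ISR when a frame boundary is seen */

void uart_rx_isr(void)                         /* keep this short: empty the hardware and signal */
{
    uint8_t byte = UART_DATA;                  /* reading the data register clears the interrupt (assumed) */
    rx_buf[rx_head] = byte;
    rx_head = (uint8_t)((rx_head + 1u) % RX_BUF_SIZE);
    if (byte == FRAME_END)                     /* hunt for packet boundaries, if the framing allows it */
        msg_ready = 1u;
}

void poll_uart(void)                           /* called from the main loop, at task level */
{
    if (msg_ready)
    {
        disable_interrupts();                  /* brief critical section around the shared indices */
        msg_ready = 0u;
        uint8_t end = rx_head;
        enable_interrupts();
        process_message(rx_buf, rx_tail, end); /* hand the complete message to the upper layer */
        rx_tail = end;
    }
}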


Does the operation of the CAN peripheral in STM32 wait for the execution of the ISR routine code?

I'm developing a stack layer on an STM32L433 microcontroller that uses the CAN protocol; a fundamental part of the stack is the authentication of the devices.
During authentication it can happen that two (or more) devices start to send a CAN message (an authentication message) with the same identifier but different payloads (true random values). In this case every device should be able to detect whether this message was sent first by another device.
I have studied this case and three situations can occur:
1) The devices start to send the message at the same time; in this case only one device is able to send the message, because all other devices detect an error and abort the transmission.
2) Only one device is able to send the message and occupy the bus before all other devices load the transmission mailbox of the CAN peripheral, or before the CAN peripheral of the other devices puts the message to be sent into the SCHEDULED state.
In this case, the devices that have not been able to send the message will receive the reception interrupt; within the reception ISR I am able to abort the transmission.
3) Only one device is able to send the message and occupy the bus, while the CAN peripherals of all other devices have a message in the SCHEDULED state and are waiting for the bus to become idle.
In this case the devices that have not been able to send the message will also receive the reception interrupt. Here too I thought of aborting the transmission within the reception ISR (as in situation 2), but I'm not sure this is guaranteed for every message, because if the CAN peripheral moves the pending message into the TRANSMIT state before the code inside the ISR is executed, the abort operation will have no effect.
My question (related to situation 3) is: is the message that is in the transmission mailbox in the SCHEDULED state only put into the TRANSMIT state after the code in the reception ISR has executed, or is this not guaranteed?
To answer your third case first: no, it is not guaranteed that your message is not already on the bus while you are receiving, because interrupts have some latency too, and within that time the mailbox may go ahead with the transmission.
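As an illustration of that race, a sketch of the abort attempt inside the reception callback could look like this with the ST HAL CAN driver (hcan1 and tx_mailbox_in_use are assumed names; tx_mailbox_in_use would be the mailbox value previously returned by HAL_CAN_AddTxMessage). Even so, the abort may simply arrive too late:

#include "stm32l4xx_hal.h"                  /* assumed HAL setup */

extern CAN_HandleTypeDef hcan1;             /* assumed CAN handle */
extern uint32_t tx_mailbox_in_use;          /* mailbox returned by HAL_CAN_AddTxMessage */

void HAL_CAN_RxFifo0MsgPendingCallback(CAN_HandleTypeDef *hcan)
{
    CAN_RxHeaderTypeDef header;
    uint8_t data[8];

    HAL_CAN_GetRxMessage(hcan, CAN_RX_FIFO0, &header, data);

    /* If our own authentication frame is still pending, try to withdraw it. */
    if (HAL_CAN_IsTxMessagePending(hcan, tx_mailbox_in_use))
    {
        HAL_CAN_AbortTxRequest(hcan, tx_mailbox_in_use);
        /* By this point the mailbox may already have entered transmission,
           in which case the abort has no effect. */
    }
}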
Your "authentication" also sounds a bit troublesome, since nobody from outside could also actually decide which ECU was actually the one that won the arbitration and actually sent that specific message.
We have ECUs in vehicles which decide at runtime, according to certain methods, where they are mounted by pin and some CAN reception, but only in listen mode. TX is actually disabled in the stack. After that, detection has completed, we switch configurations and restart the communications stack and further initialize the software going up.
But these "setups" are usually defined beforehand, e.g. due to master/slave (vehicle/private bus communication), or maybe some connector pins connected to GND / OPEN / UBAT, or maybe some bus message which tells on which bus it is on.
That seems to be more reliable than your method.

Suspend operation of lwIP Raw API

I am working on a project using a Zynq (Picozed devboard). The application is run bare-metal, uses lwIP TCP in RAW mode and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250/500 ms, which makes accurate measurement pretty hard. Whenever I'm not calling the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). It seems this is called from some ISR related to error handling, but I cannot really find the exact location.
My question is, how do I suspend the network functionality so I don't need to call tcp_tmr() periodically? I tried closing the connection and disabling the interface (netif_set_down()) and disabling the timer interrupt, but it still seems to have no effect on my problem.
I don't know anything about that devboard or the microcontroller on it, but you should have an ethernetif.c (lwIP port) file which contains the processing of an Ethernet receive interrupt or similar. This should be calling the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it will just stop the higher-level processing of the packet. If you are only measuring the execution time for debugging, you could try disabling the Ethernet receive interrupt and not calling tcp_tmr() until you have processed the packets.
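If that approach fits, a sketch of the measurement could look like the following (eth_irq_disable()/eth_irq_enable(), read_timer_ticks(), process_batch() and report_duration() are placeholders, not actual Xilinx or lwIP calls; on the Zynq you would typically mask the EMAC interrupt in the GIC and read a hardware timer for the timestamps):

#include <stdint.h>

void process_and_time(void)
{
    eth_irq_disable();                 /* no new frames are queued while we measure */

    uint64_t t0 = read_timer_ticks();
    process_batch();                   /* the processing step whose time is measured */
    uint64_t t1 = read_timer_ticks();

    report_duration(t1 - t0);

    eth_irq_enable();                  /* resume reception, then resume the tcp_tmr() calls */
}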

Writing a c++ Client-Server program using winsock2

I am having some trouble with a UDP-based connection.
In my program, I restrict the time allocated for data transfer between the transmitter and the receiver (both of them send/receive in a loop).
When the time is up, the transmitter sends a message which, if the receiver gets and reads it, tells the receiver not to wait for any more packets, so the program continues.
The problem I thought of is that, because the connection is UDP, the message might not reach the receiver, and then the client will keep waiting for messages that no one is sending.
So what is the correct way to finish such a connection?
Thanks!

Understanding the concept of running a program in an interrupt handler

Early Cisco routers running the IOS operating system enhanced their packet-processing speed by doing packet switching within the interrupt handler instead of in a "regular" operating system process. Doing packet processing in the interrupt handler ensured that context switching within the operating system did not affect the packet processing. As I understand it, an interrupt handler is a piece of software in the operating system meant for handling interrupts. How should I understand the concept of packet switching being done within the interrupt handler?
Use of interrupts is preferred when an event requires some immediate attention by the operating system, or by a program which installed an interrupt service routine. This is as opposed to polling, where software checks periodically whether a condition exists which indicates that the event has occurred.
Interrupt service routines aren't commonly meant to do a lot of work themselves. They are rather written to reach their end as quickly as possible, so that normal execution can resume; "normal execution" meaning the location and state at which the previous processing was interrupted. The reason is that it must be avoided that the same interrupt occurs again while its handler is still executing, since it could be ignored, lead to incorrect results, or, even worse, to software failure (crashes). So what an interrupt service routine usually does is read any data associated with the event and store it in a queue, signal that the queue has changed, set things up so that another interrupt may occur, and then return by restoring the pre-interrupt context. The queued data associated with that interrupt can then be processed asynchronously, without risking that interrupts pile up.
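A minimal sketch of that split between the interrupt service routine and the deferred processing might look like this (event_t, device_read_event(), signal_work(), wait_for_work() and handle_event() are illustrative names, not taken from a particular OS):

#define QUEUE_SIZE 32u                         /* power of two keeps the wrap cheap */

static volatile event_t  queue[QUEUE_SIZE];
static volatile unsigned q_head, q_tail;

void device_isr(void)                          /* runs at interrupt level, kept short */
{
    queue[q_head % QUEUE_SIZE] = device_read_event();   /* also clears the interrupt source (assumed) */
    q_head++;
    signal_work();                             /* wake the service thread or set a flag */
}

void service_task(void)                        /* runs at normal (process) level */
{
    for (;;)
    {
        wait_for_work();                       /* block or poll until the ISR signals */
        while (q_tail != q_head)
        {
            handle_event(queue[q_tail % QUEUE_SIZE]);    /* the actual, slower processing */
            q_tail++;
        }
    }
}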
The following is the procedure for executing interrupt-level switching:
Look up the memory structure to determine the next-hop address and outgoing interface.
Do an Open Systems Interconnection (OSI) Layer 2 rewrite, also called MAC rewrite, which means changing the encapsulation of the packet to comply with the outgoing interface.
Put the packet into the tx ring or output queue of the outgoing interface.
Update the appropriate memory structures (reset timers in caches, update counters, and so forth).
The interrupt which is raised when a packet is received from the network interface is called the "RX interrupt". This interrupt is dismissed only when all the above steps are executed. If any of the first three steps above cannot be performed, the packet is sent to the next switching layer. If the next switching layer is process switching, the packet is put into the input queue of the incoming interface for process switching and the interrupt is dismissed. Since interrupts cannot be interrupted by interrupts of the same level and all interfaces raise interrupts of the same level, no other packet can be handled until the current RX interrupt is dismissed.
Different interrupt switching paths can be organized in a hierarchy, from the one providing the fastest lookup to the one providing the slowest lookup. The last resort used for handling packets is always process switching. Not all interfaces and packet types are supported in every interrupt switching path. Generally, only those that require examination and changes limited to the packet header can be interrupt-switched. If the packet payload needs to be examined before forwarding, interrupt switching is not possible. More specific constraints may exist for some interrupt switching paths. Also, if the Layer 2 connection over the outgoing interface must be reliable (that is, it includes support for retransmission), the packet cannot be handled at interrupt level.
The following are examples of packets that cannot be interrupt-switched:
Traffic directed to the router (routing protocol traffic, Simple Network Management Protocol (SNMP), Telnet, Trivial File Transfer Protocol (TFTP), ping, and so on). Management traffic can be sourced by and directed to the router; it is handled by specific task-related processes.
OSI Layer 2 connection-oriented encapsulations (for example, X.25). Some tasks are too complex to be coded in the interrupt-switching path because there are too many instructions to run, or timers and windows are required. Some examples are features such as encryption, Local Area Transport (LAT) translation, and Data-Link Switching Plus (DLSW+).
More here: http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-121-mainline/12809-tuning.html

8051 UART, Receiving bytes serially

I have to send a file byte-by-byte to a serially connected AT89S52 from a computer (VB.NET).
Every sent byte triggers some job in the microcontroller, which takes some time.
Here is the relevant part of my C code for receiving bytes:
SCON = 0x50;
TMOD = 0x20;        // timer 1, mode 2, 8-bit reload
TH1 = 0xFD;         // reload value for 9600 baud
TR1 = 1;
TI = 1;

again:
    while (RI != 0)
    {
        P1 = SBUF;  // show data on led's
        RI = 0;
        receivedBytes++;
    }
    if (key1 == 0)
    {
        goto exitreceive;   // break receiving
    }
    show_lcd_received_bytes(receivedBytes);
    // here is one more loop
    // with different duration for every byte
    goto again;
And here is VB.NET code for sending bytes:
For a As Integer = 1 To 10
    For t As Integer = 0 To 255
        SerialPort1.Write(Chr(t))
    Next t
Next a
The problem is that the microcontroller has some job to do after every received byte, and the VB.NET side doesn't know about that and sends the bytes too fast, so the microcontroller only gets through a fraction of all the bytes (about 10%).
I can put a "Sleep(20)" in the VB loop and then things work, but that wastes a lot of time, because every byte needs a different amount of time to process, and the communication would be unacceptably slow.
Now, my question is whether the 8051 can expose some busy status on the UART which VB can read before sending, to decide whether or not to send the next byte.
Or how else should such communication be set up?
I also tried receiving bytes with a serial interrupt on the microcontroller side, with the same results.
The hardware is surely OK because I can send data to the computer correctly (as expected).
Your problem is architectural. Don't try to do processing on the received data in the interrupt that handles byte Rx. Have your byte Rx interrupt only copy the received byte to a separate Rx data buffer, and have a background task that does the actual processing of the incoming data without blocking the Rx interrupt handler. If you can't keep up due to an overall throughput issue, then RTS/CTS flow control is the appropriate mechanism. For example, when your Rx buffer gets 90% full, deassert the flow control signal to pause the transmit side.
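A rough sketch of that structure on the 8051 side might look like the following (Keil C51 style; the FLOW_N pin, buffer size and function names are assumptions, and the pin would still need a suitable line driver):

#include <reg52.h>                      /* Keil C51 SFR definitions (assumed toolchain) */

#define RX_SIZE 64                      /* must be a power of two */
sbit FLOW_N = P3^3;                     /* assumed port pin used as an RTS-style flow-control output */

static volatile unsigned char rx_buf[RX_SIZE];
static volatile unsigned char rx_head, rx_tail;   /* free-running 8-bit counters */

void serial_isr(void) interrupt 4       /* UART interrupt: only copy the byte, nothing else */
{
    if (RI)
    {
        rx_buf[rx_head & (RX_SIZE - 1)] = SBUF;
        rx_head++;
        RI = 0;
        if ((unsigned char)(rx_head - rx_tail) > (RX_SIZE * 9) / 10)
            FLOW_N = 1;                 /* buffer about 90% full: ask the PC to pause */
    }
    if (TI)
        TI = 0;                         /* transmit side not shown */
}

void service_rx(void)                   /* background loop: do the slow per-byte work here */
{
    while (rx_tail != rx_head)
    {
        process_byte(rx_buf[rx_tail & (RX_SIZE - 1)]);  /* placeholder for the per-byte job */
        rx_tail++;
    }
    FLOW_N = 0;                         /* buffer drained: allow sending again */
}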
As @TJD mentions, hardware flow control can be used to stop the PC from sending characters while the microcontroller is processing received bytes. In the past I have implemented hardware flow control by using an available port line as an output. The output needs to be connected to a TTL-to-RS-232 driver (if you are currently using RS-232 you may have an extra driver available). If you are using a USB virtual serial port or RS-422/485 you will need to implement software flow control. Typically a Ctrl-S is sent to tell the PC to stop sending and a Ctrl-Q to continue. To take full advantage of flow control you will most likely also need to implement a fully interrupt-driven FIFO to receive/send characters.
If you would like additional information concerning hardware flow control, check out http://electronics.stackexchange.com.
Blast from the past - I remember using breakout boxes and serial line tracers to debug this kind of stuff.
With serial communication, if you have all the pins/wires utilized, then there is flow control via the RTS (Request To Send) and DTR (Data Terminal Ready) lines, which are used to signal when it is OK to send more data. Do you have control over those in the device you are coding in C? In VB.NET, there are events used to receive these signals, or they can be queried using properties on the SerialPort object.
A lot of these answers are suggesting hardware flow control, but you also have the option of enhancing your transmission to be more robust by using software flow control. Currently, your communication is strong, but if you start running a higher baud rate or a longer distance or even just have a noisy connection, characters could be received that are incorrect, or characters could be dropped.
You could add a simple two-byte ACK sequence upon completion of whatever action is set to happen. It could look something like this:
Host sends command byte: <0x00>
Device echoes command byte: <0x00>
Device executes whatever action is needed
Device sends an ACK/NAK byte (based on the result).
This would allow you to see on the host side if communication is breaking down. The echoed character may not match what was sent, which would alert you to an issue. Additionally, if a character is not received by the host within some timeout, the host can try retransmitting. Finally, the ACK/NAK gives you the option of returning a status, but most importantly it lets the host know that you've completed the operation and that it can send another command.
This can be extended to include a checksum to give the device a way to verify that the command received was valid (A simple logical inverse sent alongside the command byte would be sufficient).
The advantage to this solution is that it does not require extra lines or UART support on either end for hardware flow control.
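For illustration only, the device-side half of such a command/echo/ACK exchange might look roughly like this in C (uart_getc()/uart_putc(), execute_command() and the ACK/NAK values are assumptions, not something defined above):

#define ACK 0x06                             /* assumed acknowledge value      */
#define NAK 0x15                             /* assumed negative-acknowledge   */

void command_loop(void)
{
    for (;;)
    {
        unsigned char cmd = uart_getc();     /* 1. host sends a command byte        */
        uart_putc(cmd);                      /* 2. device echoes it back            */

        if (execute_command(cmd))            /* 3. perform the (possibly slow) work */
            uart_putc(ACK);                  /* 4. report success ...               */
        else
            uart_putc(NAK);                  /*    ... or failure                   */
    }
}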