I have a device which is streaming via a USB-to-serial adapter to ttyUSB0 on my Pi 4. I use a baud rate of 345600, but it looks like I lose data in between. Does anybody know something about the UART receive buffer? Is there a way to detect a buffer overflow? Or can I set a bigger UART buffer?
Thanks!
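For what it's worth, on Linux you can often read the kernel's own overrun counters for a serial port through the TIOCGICOUNT ioctl: overrun counts UART receiver overruns, buffer_overrun counts overflows of the driver's buffer. A minimal sketch (note: not every usb-serial driver fills these counters in, so treat a permanent zero with suspicion):

#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    struct serial_icounter_struct ic;

    if (fd < 0 || ioctl(fd, TIOCGICOUNT, &ic) < 0) {
        perror("TIOCGICOUNT");      /* the driver may not implement this ioctl */
        return 1;
    }
    /* overrun = hardware receiver overruns, buffer_overrun = driver buffer overflows */
    printf("overrun=%d buffer_overrun=%d\n", ic.overrun, ic.buffer_overrun);
    return 0;
}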
I am using the MC3635 evaluation board (EV3635B). The SPI connection works and I can read X, Y, Z movement via SPI. But I cannot get the interrupt pin to work as an output. I want the accelerometer's interrupt pin to go HIGH when there is a tap or movement, and I want to clear it LOW via SPI once I have read the change, because I am trying to wake my microcontroller on this movement. I am using a PIC12LF1822, but that is not essential information. Could you help me configure the registers of the accelerometer?
Best regards.
I am implementing a protocol decoder which receives bytes through the UART of a microcontroller. The ISR takes bytes from the UART peripheral and puts them in a ring buffer. The main loop reads from the ring buffer and runs a state machine to decode the stream.
The UART internally has a 32-byte receive FIFO and provides interrupts when this FIFO is quarter-full, half-full, three-quarters full and completely full. How should I determine which of these interrupts should trigger my ISR? What is the trade-off involved?
Note - the protocol uses fixed-length 32-byte packets, sent every 10 ms.
This depends on a lot of things, most of all the maximum baud rate supported and how much time your application needs for executing other tasks.
Traditional ring buffers work on a byte-per-byte interrupt basis. It is of course always nice to reduce the number of interrupts, but it probably doesn't matter much which fill level you let trigger the ISR.
It is much more important to implement a double-buffer scheme. You should of course not run the decoding state machine straight out of the single ring buffer the ISR is still writing to; that will turn into a race-condition nightmare.
Your main program should take the semaphore / disable the UART interrupt, then grab the whole buffer, then re-enable the interrupt. Ideally the buffer "copy" is done by swapping a pointer rather than doing a hard copy. The code doing this needs to be benchmarked to complete in less than 10/baudrate seconds, where 10 is the number of bits per frame: 1 start, 8 data, 1 stop, assuming the UART is 8-N-1.
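A minimal sketch of that pointer-swap double buffer (the UART_* hooks are hypothetical stand-ins for whatever your MCU provides):

#include <stdint.h>

/* hypothetical hardware hooks */
extern uint8_t UART_READ_BYTE(void);
extern void UART_IRQ_DISABLE(void);
extern void UART_IRQ_ENABLE(void);

#define BUF_SIZE 64

static uint8_t buf_a[BUF_SIZE], buf_b[BUF_SIZE];
static uint8_t * volatile isr_buf = buf_a;   /* the buffer the ISR writes into */
static volatile uint16_t isr_count = 0;

void uart_rx_isr(void)
{
    if (isr_count < BUF_SIZE)
        isr_buf[isr_count++] = UART_READ_BYTE();
}

/* Main-loop side: swap buffers inside a short critical section, then
 * decode the filled buffer at leisure. This is the code to benchmark. */
uint16_t grab_rx(uint8_t **out)
{
    uint16_t n;
    UART_IRQ_DISABLE();
    *out = (uint8_t *)isr_buf;
    n = isr_count;
    isr_buf = (isr_buf == buf_a) ? buf_b : buf_a;
    isr_count = 0;
    UART_IRQ_ENABLE();
    return n;
}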
If available, use DMA over software ring buffers.
Given a packet-based protocol and a UART that interrupts only when several bytes have been received, consider what should happen if the final byte of a packet arrives but isn't enough to fill the FIFO past the threshold and trigger an interrupt. Is your application simply not going to see that packet until some subsequent packet finally fills the FIFO? What if the other end is waiting for a response and never sends another packet? Or is your application supposed to poll the UART for lingering bytes left in the FIFO? Using an interrupt and polling for received bytes at the same time seems overly complicated.
With the packet-based protocols I have implemented, the UART driver does not rely on the UART FIFO and configures the UART to interrupt when a single byte is available. This way the driver gets notified for every byte and there is no chance for the final byte of a packet to be left lingering in the UART's FIFO.
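A rough sketch of that per-byte scheme (UART_READ_BYTE is a hypothetical register read): the ISR does nothing but move the byte into a ring buffer, and the main loop drains it.

#include <stdint.h>

extern uint8_t UART_READ_BYTE(void);   /* hypothetical hardware hook */

#define RB_SIZE 128                    /* power of two so the index mask works */

static volatile uint8_t rb[RB_SIZE];
static volatile uint16_t rb_head, rb_tail;

void uart_rx_isr(void)                 /* fires once per received byte */
{
    uint8_t b = UART_READ_BYTE();
    uint16_t next = (rb_head + 1) & (RB_SIZE - 1);
    if (next != rb_tail) {             /* drop the byte if the buffer is full */
        rb[rb_head] = b;
        rb_head = next;
    }
}

int rb_pop(uint8_t *out)               /* main-loop side: returns 1 if a byte was read */
{
    if (rb_tail == rb_head)
        return 0;
    *out = rb[rb_tail];
    rb_tail = (rb_tail + 1) & (RB_SIZE - 1);
    return 1;
}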
The UART's FIFO can be convenient for streaming protocols (such as audio or video data). While the driver is receiving a stream, there is always incoming data to keep filling the FIFO, so the driver can rely on the FIFO to buffer some of it, process multiple bytes per interrupt, and reduce the interrupt rate.
You might be tempted to use the UART FIFO since your packets are a fixed length. But consider how the driver would recover if a single byte were dropped due to noise or whatever: every packet boundary after it would be shifted. I think it's still best not to rely on the FIFO for packet-based protocols, regardless of whether the packets are fixed length.
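One common recovery approach, sketched below under stated assumptions (millis and handle_packet are hypothetical), is to reset packet assembly whenever the line goes quiet mid-packet; a dropped byte then costs one packet instead of desynchronizing everything after it:

#include <stdint.h>

extern uint32_t millis(void);                /* hypothetical millisecond tick */
extern void handle_packet(const uint8_t *p); /* hypothetical decoder entry point */

#define PKT_LEN 32
#define GAP_MS  5    /* packets arrive every 10 ms, so a 5 ms gap means "new packet" */

static uint8_t pkt[PKT_LEN];
static uint8_t idx;
static uint32_t last_rx_ms;

void on_rx_byte(uint8_t b)                   /* fed from the ring buffer in the main loop */
{
    uint32_t now = millis();
    if (idx != 0 && (now - last_rx_ms) > GAP_MS)
        idx = 0;                             /* mid-packet silence: discard the partial packet */
    last_rx_ms = now;

    pkt[idx++] = b;
    if (idx == PKT_LEN) {
        handle_packet(pkt);
        idx = 0;
    }
}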
I'm currently working on an application where an MCU receives data from a hardware chip as an asynchronous serial bit stream at 2 Mbps. The data has no encoding and no protocol aside from a start sequence, after which it is raw binary data.
The current recovery approach uses the SPI module in 3-pin mode to oversample the stream 4x at 8 MHz, allowing the asynchronous data to be reconstructed. While seemingly effective so far against a simulated testbench, this method is rather complicated: the SPI peripheral runs in slave mode, so an internal clock has to be routed to SPI CLK in order for DMA to capture the stream while the processor executes other tasks.
Would it be possible to use any other peripheral efficiently for this task besides SPI? Faking a communication protocol to recover a serial bit stream seems roundabout, but I am not sure how to use UART or I2C without doing the same, and those might not even be usable since the protocol bits are not present in the stream. I also want to avoid an ADC in the interest of power, and since the data is already digital it seems unnecessary anyway.
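For reference, the software half of the 4x-oversampling recovery can stay fairly small. A sketch, with assumptions spelled out: the capture buffer packs line samples MSB-first within each SPI byte, there are exactly 4 samples per data bit, payload bits are MSB-first, and 0xA5 stands in for the real start sequence:

#include <stdint.h>
#include <stddef.h>

#define OVERSAMPLE 4        /* 8 MHz capture rate / 2 Mbps data rate */
#define SYNC 0xA5u          /* hypothetical start-sequence byte */

static int sample_at(const uint8_t *cap, size_t i)
{
    return (cap[i >> 3] >> (7 - (i & 7))) & 1;   /* SPI shifts MSB first */
}

/* Data bit n, given the sample index where the sync pattern begins:
 * sample the middle of the 4-sample bit window. */
static int bit_at(const uint8_t *cap, size_t start, size_t n)
{
    return sample_at(cap, start + n * OVERSAMPLE + OVERSAMPLE / 2);
}

/* Slide over all sample phases until the sync byte decodes correctly,
 * then unpack the payload bits that follow. Returns bytes written, or -1. */
int recover(const uint8_t *cap, size_t nsamples, uint8_t *out, size_t maxout)
{
    for (size_t start = 0; start + 8 * OVERSAMPLE < nsamples; start++) {
        uint8_t b = 0;
        for (size_t n = 0; n < 8; n++)
            b = (uint8_t)((b << 1) | bit_at(cap, start, n));
        if (b != SYNC)
            continue;

        size_t nbits = (nsamples - start) / OVERSAMPLE - 8;
        size_t nbytes = (nbits / 8 < maxout) ? nbits / 8 : maxout;
        for (size_t i = 0; i < nbytes; i++) {
            uint8_t v = 0;
            for (size_t n = 0; n < 8; n++)
                v = (uint8_t)((v << 1) | bit_at(cap, start, 8 + i * 8 + n));
            out[i] = v;
        }
        return (int)nbytes;
    }
    return -1;    /* start sequence not found */
}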
I have been working on a PIC18F45K20 running at 16 MHz, using it as an SPI slave. I find that no matter the SPI clock rate (SCK) from the master, I always have to add a significant delay (~64 us) between SPI bytes to avoid SPI collisions or receive overflows. Without the delay, even at very slow SPI clock rates, only about 95% of the SPI packets get through without a collision or overflow.
Online posts lead me to think that this may be a "feature" of this and other PIC18 processors.
Have others observed this same slave “feature”?
If this is a “feature”, is it found in all PIC18 processors?
I tested the PIC18 without an interrupt with the following:
if (SSPSTATbits.BF)        /* buffer-full flag: a byte has been received */
{
    DataIn = SSPBUF;       /* read the received byte (clears BF) */
    SSPBUF = DataOut;      /* preload the reply for the next transfer */
}
I also tested using an interrupt and saw the same problem.
It makes me wonder whether the module is really detecting the SPI clock properly.
If you have an oscilloscope, check that chip select is not being released before the PIC clocks out the last SPI data byte. You need to wait on the SPI busy bit before releasing the chip select line.
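On the master side that might look like the sketch below (PIC-style MSSP register names; the chip-select macros are hypothetical and must be mapped to your board's CS pin):

#include <xc.h>
#include <stdint.h>

#define CS_LOW()   (LATAbits.LATA5 = 0)   /* hypothetical CS pin */
#define CS_HIGH()  (LATAbits.LATA5 = 1)

static uint8_t spi_transfer(uint8_t out)
{
    SSPBUF = out;                 /* start the transfer */
    while (!SSPSTATbits.BF)       /* wait until the byte has fully shifted out/in */
        ;
    return SSPBUF;                /* reading clears BF */
}

void send_packet(const uint8_t *p, uint8_t len)
{
    CS_LOW();
    for (uint8_t i = 0; i < len; i++)
        spi_transfer(p[i]);
    CS_HIGH();                    /* release CS only after the last byte completed */
}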
As far as I know, the PIC18 is an 8-bit microcontroller (even though you can easily find that its integer type maps to 16 bits), and its SPI module works on 8-bit data. If your master sends the microcontroller more than 8 bits at a time, such as a 16-bit unit, the SPI module overflows and can no longer respond to the master clock. So in slave mode, make sure the data from the master is framed as 8-bit units. If the PIC18 were the SPI master, however, then even if its slave sends 16 bits, the PIC18 holds the clock after the first 8 bits and waits until its buffer has been read and emptied before clocking the next 8.
I've also come across this issue, and it seems that what one should take into account is that the supported SPI clock rate simply tells you how fast the MCU can receive one byte into SSPBUF.
Reading this byte from SSPBUF and storing it in a buffer requires some work, like incrementing a pointer, which takes time. This is what reduces the actual SPI bandwidth for multi-byte transfers.
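To put rough numbers on that: a PIC18 at 16 MHz executes Fosc/4 = 4 million instruction cycles per second, so with SCK at 1 MHz a new byte lands in SSPBUF every 8 us, which is only 32 instruction cycles. The receive path has to fit in that budget, roughly like this sketch (XC8-style ISR; rx_buf and rx_idx are illustrative globals):

#include <xc.h>
#include <stdint.h>

static volatile uint8_t rx_buf[64];
static volatile uint8_t rx_idx;

void __interrupt() isr(void)
{
    if (PIR1bits.SSPIF) {
        PIR1bits.SSPIF = 0;
        if (rx_idx < sizeof rx_buf)
            rx_buf[rx_idx++] = SSPBUF;   /* one read, one store, one increment */
        SSPBUF = 0x00;                   /* preload a dummy reply for the next byte */
    }
}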
I am currently working on custom hardware which has the SH_MOBILE architecture. The hardware comes with a USB peripheral controller and a DMAC with two channels.
I am using the R8A66597 UDC driver, which is available in the mainline kernel. I have added DMA-related functions to the peripheral controller driver. Currently I have DMA working in the TX path, but in the RX path DMA is not used and the driver falls back to PIO. This is because the buffer address (buf in struct usb_request) is not 8-byte aligned.
I would like to know how to ensure that these data transfer buffers are DMA-capable.
Thanks in advance,
Srinidhi KV
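Not an authoritative answer, but a sketch of the usual pattern: have the gadget function driver allocate req->buf with kmalloc rather than pointing it at a stack frame or into the middle of a struct. kmalloc memory is aligned to at least ARCH_KMALLOC_MINALIGN (cacheline-sized on most platforms), which would satisfy an 8-byte DMA requirement:

#include <linux/slab.h>
#include <linux/usb/gadget.h>

/* Illustrative helper: give a usb_request a DMA-friendly data buffer. */
static int give_req_dma_buf(struct usb_request *req, size_t len)
{
    req->buf = kmalloc(len, GFP_KERNEL);   /* aligned to ARCH_KMALLOC_MINALIGN */
    if (!req->buf)
        return -ENOMEM;
    req->length = len;
    return 0;
}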