Multi-byte data transmission at once in I2C and SPI - embedded

I would like to ask: can we send multiple bytes (10-20 bytes or more) in one data transmission? By one data transmission I mean within a single "start" communication bit:
("start bit" + "multi-byte data" + "stop")
Or do we have to stop the transmission after every single byte and start a new transmission to send the rest of the data?
Thank you.

Yes, this is not only possible but essential in most use cases of I2C.
In SPI there are no start/stop conditions; instead, a sequence of bytes is framed by the chip-select signal. It is normal to send many bytes per frame.
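As a concrete illustration, here is a minimal sketch of a single I2C write transaction carrying an arbitrary number of payload bytes between one START and one STOP. The i2c_start/i2c_write_byte/i2c_stop calls are hypothetical HAL primitives, not from any particular vendor library:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    extern bool i2c_start(void);            /* hypothetical: issue START condition */
    extern bool i2c_write_byte(uint8_t b);  /* hypothetical: returns true on ACK   */
    extern void i2c_stop(void);             /* hypothetical: issue STOP condition  */

    bool i2c_write_buffer(uint8_t addr7, const uint8_t *data, size_t len)
    {
        if (!i2c_start())
            return false;
        /* address byte: 7-bit address plus R/W bit (0 = write) */
        if (!i2c_write_byte((uint8_t)(addr7 << 1))) {
            i2c_stop();
            return false;
        }
        /* all payload bytes go out inside the same START/STOP frame */
        for (size_t i = 0; i < len; i++) {
            if (!i2c_write_byte(data[i])) {  /* NAK: slave aborted the transfer */
                i2c_stop();
                return false;
            }
        }
        i2c_stop();
        return true;
    }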

How does a DMA controller handle input devices like, say, a serial port?

So, what I have learned so far is that the CPU programs the source address, destination address, word count, and direction into the DMA controller whenever it needs to transfer data from, say, a hard drive. In that example the hard drive is just a dumb device, so it makes sense: the hard drive can never initiate a data transfer.
But what if we have connected a serial port, where at certain instances we are going to get 8 bits of data? I know the DMA controller is used for large memory transfers, but say I want to use DMA for these 8 bits. The device driver on the CPU cannot tell when the data is coming, nor how much data is coming, because the serial port may send 8 bits, 16 bits, or no data at all. So in this case, who fills in the DMA controller's count and memory addresses, since the device driver has no idea when the data is going to come in?
Using DMA for serial input is complicated when the incoming data is not a continuous stream or fixed-length packets. The exact details will depend on the specific UART and DMA controller, but generally, each character that arrives will be copied to the next location in the provided DMA buffer, and the DMA controller will generate an interrupt when the buffer is half-filled and again when it is completely filled.
A single-byte DMA buffer serves little purpose over using the UART's data-available interrupt, and will simply delay byte processing by one character period.
If your DMA buffer were two characters long, you'd then get an interrupt for every character (one for the half transfer, and one for the full transfer), which solves the problem of partially filled buffers not being serviced, but does not reduce the interrupt overhead at all so offers little advantage over direct UART interrupt handling. If your UART includes a FIFO buffer, that would be a better method of dealing with asynchronous serial input when only a small amount of buffering is required.
When a larger DMA buffer is used the interrupt rate is reduced, but when a buffer is incomplete you will not get an interrupt, and the data may wait indefinitely. One solution to that problem is to implement a timeout mechanism whereby if the DMA interrupt does not arrive within a time period determined by the baud rate and buffer length, then the timeout handler retrieves all data currently buffered. Such a mechanism requires care to avoid race conditions between the timeout and the DMA interrupt, and to ensure that data arriving while the timeout is being processed is not lost, or that data retrieved by the timeout is not repeated when the DMA interrupt eventually arrives.
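A minimal sketch of the drain logic such a timeout scheme needs, assuming a hypothetical circular-mode DMA API; dma_remaining() stands in for whatever "remaining transfer count" register your particular controller exposes:

    #include <stdint.h>

    #define RX_BUF_LEN 64
    static volatile uint8_t  rx_buf[RX_BUF_LEN];  /* circular DMA target buffer */
    static volatile uint16_t rd_idx;              /* consumer (software) index  */

    extern uint16_t dma_remaining(void);  /* hypothetical: slots DMA has not yet filled */
    extern void process_byte(uint8_t b);

    /* Pull everything the DMA has written since the last call. Called from
       the half/full-transfer ISRs *and* from the idle-timeout handler, so a
       short frame does not sit in the buffer indefinitely. Callers must be
       interlocked (e.g. equal interrupt priority) to avoid the race
       conditions mentioned above. */
    static void drain_rx(void)
    {
        uint16_t wr_idx = RX_BUF_LEN - dma_remaining();  /* DMA write position */
        while (rd_idx != wr_idx) {
            process_byte(rx_buf[rd_idx]);
            rd_idx = (rd_idx + 1u) % RX_BUF_LEN;
        }
    }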

Missing line in "Follow UDP Stream" in Wireshark

I am streaming raw UDP packets (RF data) from GNU Radio to Octave (or any other program). The data consists of 390625 four-byte floats per second, which is 1562500 bytes per second. When GNU Radio streams UDP, there is no header or sequence number in the UDP data; it's just raw floats. Because this is localhost to localhost, I am able to use a very large MTU.
Attached is a screenshot of Wireshark after right-clicking and doing "Follow UDP Stream". There is a "blank" part of the hex dump at 0x6F38C8. I don't understand what this means. (I know that UDP does not provide reliable delivery and packets can be dropped or arrive out of order at any moment.) Any help would be great!
The blank part is simply used as a separator to differentiate between two UDP packets, solely for your convenience.
If you track down that exact data in the normal Wireshark window, you'll notice that the data before the blank part belongs to one UDP packet and the data after it belongs to the subsequent UDP packet.
The hex numbers on the left specify the offset of the first byte of the line from the beginning of the stream in that direction (i.e. in the half-blank line, the byte 0x08 has 0x6F38C8 bytes sent before it in that direction). Since the half-blank line has just 8 bytes, the offset of the next line is 0x6F38C8 + 0x8 = 0x6F38D0. This is another indicator that the blank part isn't a filler for data that couldn't be represented for some obscure reason.
Note that it would be impossible for Wireshark to know whether data was lost and represent it to you in any manner, since UDP is unreliable (that is, unless the underlying protocol maintains its own counters and Wireshark parses them).

Is there a hardware buffer in the SPI module?

I am using a SAM4E-EK board, whose processor is a SAM4E. The board is equipped with an ADS7843 touch controller, connected to the processor through an SPI channel.
The SPI chapter in the SAM4E datasheet says:
While the data in the Shift Register is shifted on the MOSI line, the MISO line is sampled and shifted in the Shift Register. Receiving data cannot occur without transmitting data.
But in an example for the ADS7843 from ASF, it just sends data (8 bits) three times first, and then it receives data (8 bits) three times! I have tested it, and it works fine.
So I think there is a hardware FIFO buffer in the SPI receiver, but I cannot find any related information in the datasheet or on the internet.
Am I right? Or is there some other mechanism that makes the example run correctly?
The SAM4E manual says that the SPI Receive Data Register is 2 bytes in size.
The ADS7843 manual says:
One complete conversion can be accomplished with three serial communications, for a total of 24 clock cycles on the DCLK input.
Figure 5 shows that byte 0 is a request to the ADS7843 and bytes 1-2 are the response.
You should send 1 command byte and 2 dummy bytes to provide SPI clocking while the ADS7843 is responding (the manual says "Receiving data cannot occur without transmitting data").
And when you read the 3 bytes back, you get the 2 bytes of the answer, stored in the SPI receive register.
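Putting that together, here is a sketch of the exchange, assuming a hypothetical full-duplex helper spi_transfer_byte() that writes one byte and returns the byte clocked in at the same time. The bit positions follow the usual ADS7843 result alignment; check them against Figure 5 and your own configuration:

    #include <stdint.h>

    extern uint8_t spi_transfer_byte(uint8_t out);  /* hypothetical HAL call */

    /* Read one 12-bit ADS7843 conversion: one command byte, then two dummy
       writes whose only purpose is to generate clocks for the response
       (receiving cannot occur without transmitting). */
    static uint16_t ads7843_read(uint8_t command)
    {
        uint8_t hi, lo;

        (void)spi_transfer_byte(command); /* byte 0: request, reply is junk */
        hi = spi_transfer_byte(0x00);     /* byte 1: first half of answer   */
        lo = spi_transfer_byte(0x00);     /* byte 2: second half of answer  */

        /* 12 data bits arrive MSB first, left-aligned across the two bytes */
        return (uint16_t)(((hi << 8) | lo) >> 3) & 0x0FFF;
    }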

8051 UART, Receiving bytes serially

I have to send a file byte by byte to a serially connected AT89S52 from a computer (VB.NET).
Every byte sent triggers some job in the microcontroller, which takes some time.
Here is the relevant part of my C code for receiving bytes:
    SCON = 0x50;
    TMOD = 0x20;            // timer 1, mode 2, 8-bit reload
    TH1 = 0xFD;             // reload value for 9600 baud
    TR1 = 1;
    TI = 1;

again:
    while (RI != 0)
    {
        P1 = SBUF;          // show data on LEDs
        RI = 0;
        receivedBytes++;
    }
    if (key1 == 0)
    {
        goto exitreceive;   // break receiving
    }
    show_lcd_received_bytes(receivedBytes);
    // here is one more loop
    // with different duration for every byte
    goto again;
And here is the VB.NET code for sending bytes:
For a As Integer = 1 To 10
    For t As Integer = 0 To 255
        SerialPort1.Write(Chr(t))
    Next t
Next a
The problem is that the microcontroller has some job to do after every received byte, and VB.NET doesn't know about that and sends bytes too fast, so the microcontroller processes only a fraction of all the bytes (about 10%).
I can put Sleep(20) in the VB loop and then things work, but that wastes a lot of time: every byte needs a different amount of time to process, and the communication would be unacceptably slow.
Now, my question is whether the 8051 can present some busy status on the UART which VB can read before sending, to decide whether or not to send the next byte.
Or how else should I set up the communication described above?
I have also tried receiving bytes with the serial interrupt on the microcontroller side, with the same results.
The hardware is surely OK, because I can send data to the computer without problems (as expected).
Your problem is architectural. Don't try to process the received data in the interrupt that handles byte Rx. Have your byte-Rx interrupt only copy the received byte to a separate Rx data buffer, and have a background task do the actual processing of the incoming data without blocking the Rx interrupt handler. If you can't keep up due to an overall throughput issue, then RTS/CTS flow control is the appropriate mechanism: for example, when your Rx buffer gets 90% full, deassert the flow-control signal to pause the transmit side.
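A minimal sketch of that split on the 8051, in Keil C51 syntax; the buffer size and names are illustrative, not taken from your project:

    #include <reg52.h>

    #define RX_BUF_LEN 32
    static volatile unsigned char rx_buf[RX_BUF_LEN];
    static volatile unsigned char rx_head, rx_tail;

    void uart_isr(void) interrupt 4       /* 8051 serial interrupt vector */
    {
        if (RI) {
            RI = 0;
            rx_buf[rx_head] = SBUF;               /* just store the byte   */
            rx_head = (rx_head + 1) % RX_BUF_LEN; /* no processing in here */
        }
    }

    void main(void)
    {
        /* ... UART/timer setup as in the question ... */
        ES = 1;                           /* enable serial interrupt */
        EA = 1;                           /* global interrupt enable */
        while (1) {
            if (rx_tail != rx_head) {     /* a byte is waiting in the buffer */
                unsigned char b = rx_buf[rx_tail];
                rx_tail = (rx_tail + 1) % RX_BUF_LEN;
                /* do the slow per-byte job here; Rx keeps running meanwhile */
            }
        }
    }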
As @TJD mentions, hardware flow control can be used to stop the PC from sending characters while the microcontroller is processing received bytes. In the past I have implemented hardware flow control by using an available port line as an output. The output needs to be connected to a TTL-to-RS-232 driver (if you are currently using RS-232 you may have an extra driver available). If you are using a USB virtual serial port or RS-422/485, you will need to implement software flow control instead. Typically a Ctrl-S (XOFF) is sent to tell the PC to stop sending and a Ctrl-Q (XON) to continue. To take full advantage of flow control you will most likely also need a fully interrupt-driven FIFO to receive/send characters.
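For the software (XON/XOFF) variant, the receiver side can be as simple as this sketch; buffer_level_percent() and uart_send() are hypothetical helpers, and the thresholds are arbitrary:

    #define XON  0x11   /* Ctrl-Q: resume */
    #define XOFF 0x13   /* Ctrl-S: pause  */

    extern unsigned char buffer_level_percent(void);  /* hypothetical */
    extern void uart_send(unsigned char b);           /* hypothetical */

    static unsigned char paused = 0;

    void flow_control_poll(void)
    {
        if (!paused && buffer_level_percent() > 90) {
            uart_send(XOFF);   /* ask the PC to stop sending */
            paused = 1;
        } else if (paused && buffer_level_percent() < 50) {
            uart_send(XON);    /* buffer has drained, resume */
            paused = 0;
        }
    }

On the VB.NET side, setting SerialPort1.Handshake = Handshake.XOnXOff makes the SerialPort honor these characters automatically.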
If you would like additional information concerning hardware flow control, check out http://electronics.stackexchange.com.
Blast from the past, I remember using breakout boxes and serial line tracers to debug this kind of stuff.
With serial communication, if you have all the pins/wires utilized, then there is flow control via the RTS (Request To Send) and DTR (Data Terminal Ready) lines, which are used to signal when it is OK to send more data. Do you have control over those in the device you are coding via C? In VB.NET there are events used to receive these signals, or they can be queried using properties on the SerialPort object.
A lot of these answers suggest hardware flow control, but you also have the option of making your transmission more robust by using software flow control. Currently your connection may work well, but if you start running at a higher baud rate, or over a longer distance, or simply have a noisy connection, characters could be received incorrectly or dropped.
You could add a simple two-byte ACK sequence upon completion of whatever action is set to happen. It could look something like this:
Host sends command byte: <0x00>
Device echoes command byte: <0x00>
Device executes whatever action is needed
Device sends an ACK/NAK byte (based on the result)
This would allow you to see on the host side if communication is breaking down. The echoed character may not match what was sent, which would alert you to an issue. Additionally, if a character is not received by the host within some timeout, the host can retry transmitting. Finally, the ACK/NAK gives you the option of returning a status, but most importantly it lets the host know that you've completed the operation and that it can send another command.
This can be extended to include a checksum to give the device a way to verify that the command received was valid (A simple logical inverse sent alongside the command byte would be sufficient).
The advantage to this solution is that it does not require extra lines or UART support on either end for hardware flow control.
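A sketch of the device side of such an exchange; uart_get()/uart_put() are hypothetical blocking helpers, and the ACK/NAK values are the traditional ASCII codes:

    #define ACK 0x06
    #define NAK 0x15

    extern unsigned char uart_get(void);                 /* hypothetical blocking read  */
    extern void uart_put(unsigned char b);               /* hypothetical blocking write */
    extern unsigned char do_command(unsigned char cmd);  /* returns 1 on success        */

    void handle_one_command(void)
    {
        unsigned char cmd = uart_get();   /* command byte                    */
        unsigned char chk = uart_get();   /* logical inverse, as a checksum  */

        uart_put(cmd);                    /* echo so the host can verify     */
        if ((unsigned char)~cmd != chk) {
            uart_put(NAK);                /* corrupted: host should retransmit */
            return;
        }
        /* run the slow action, then tell the host it may send the next one */
        uart_put(do_command(cmd) ? ACK : NAK);
    }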

USB HID protocol question

I'm implementing USB on a PIC 18F2550 using a generic HID interface. I've set up the HID profile configuration to have a single 64-byte message for both inputs and outputs.
Now it's basically working. The device registers OK with Windows. I can find it in my program on the PC and can send and receive data. The problem is this, though: messages from the PC to the PIC are truncated to the size of the EP0 endpoint buffer.
Before I go debugging too much further I want to try to clarify my understanding of the USB protocols here and check I got it right.
Assume that the EP0 input buffer is 8 bytes. It's my understanding that the PC end will send a control packet of 8 bytes, which contains the length in bytes of the data to follow, and then it will send a sequence of 8-byte data packets, each of which the PIC end has to acknowledge.
It's my understanding that the PC end knows how big each packet may be by looking in the maximum packet size field in the device descriptor and will divide up the message accordingly into multiple data packets.
Before I go looking for more hours at the code, can anyone confirm that this is basically correct? That if the EP0 buffer size is 8 bytes then the PC should know this because of the configuration field I mentioned above and send multiple data packets?
If I make my receive buffer on the PIC 64 bytes then I get 64 bytes of the message which is sufficient for my needs, but I don't like not understanding why it doesn't work with small buffers, and one day I'll probably need them anyway.
Any advice or information would be welcome.
There is something called an Endpoint Descriptor which, among other things, defines the wMaxPacketSize, which is what the host controller interface drivers use to subdivide a large USB transfer into smaller packets.
This is entirely different from the EP0 buffer size, which is, however, always required to be at least as large as the wMaxPacketSize. My guess (try posting your usb_config.h and usb_descriptors.c if you use the Microchip USB stack) is that you're trying to use an 8-byte EP0 buffer with a 64-byte wMaxPacketSize, which is truncating the transfer.
Also, be aware that in USB 1.1 Low Speed, the wMaxPacketSize cannot exceed 8, and in USB 1.1 Full Speed it cannot exceed 64.
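For reference, these appear to be the endpoint descriptors in question, from usb_descriptors.c; in Microchip's stack the DESC_CONFIG_WORD(...) entry is the wMaxPacketSize field: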
    /* Endpoint Descriptor */
    0x07,                       /* sizeof(USB_EP_DSC) */
    USB_DESCRIPTOR_ENDPOINT,    // Endpoint Descriptor
    HID_EP | _EP_IN,            // EndpointAddress
    _INTERRUPT,                 // Attributes
    DESC_CONFIG_WORD(9),        // size
    0x01,                       // Interval

    /* Endpoint Descriptor */
    0x07,                       /* sizeof(USB_EP_DSC) */
    USB_DESCRIPTOR_ENDPOINT,    // Endpoint Descriptor
    HID_EP | _EP_OUT,           // EndpointAddress
    _INTERRUPT,                 // Attributes
    DESC_CONFIG_WORD(9),        // size
    0x01                        // Interval