Where "physically" in an Ethernet frame is the checksum located? - frame

Any time I see an illustrative image of an Ethernet frame it shows the checksum at the very end of the frame. This makes sense and I understand why it would be there. However, when I look at a packet in Wireshark the CRC appears to be before the payload data in the frame. I assume that Wireshark is showing me the raw data that is on the wire. When transmitting the frame over Ethernet is the CRC somehow moved up in the stream of bits, or is Wireshark just not showing me the exact placement of bits? Or am I just not understanding this correctly?

The FCS, or Frame Check Sequence, for Ethernet is "physically"/"electrically"/"optically" transmitted after the payload data in the frame. The FCS field is a 32-bit cyclic redundancy check (CRC) computed over the destination MAC address, source MAC address, length/type, data and pad fields. That is why the FCS comes after the data fields 'physically'. Since most layer-two devices want latency to be as low as possible, ordering the fields this way makes it easy for the receiving device to copy the entire frame into a buffer and then perform the CRC check before passing the frame on. No re-ordering or parsing of the frame is required to perform the CRC check.
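To make that concrete, here is a minimal software sketch of the 32-bit CRC used for the FCS (the standard reflected CRC-32 with polynomial 0xEDB88320, initial value and final XOR of 0xFFFFFFFF); a real NIC computes this in hardware as the bits stream through, and the function name here is illustrative only.

#include <stdint.h>
#include <stddef.h>

/* CRC-32 as used for the Ethernet FCS: computed over destination MAC,
 * source MAC, length/type, data and pad, then appended after the data. */
uint32_t ether_crc32(const uint8_t *frame, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= frame[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;
}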
Most Ethernet frame PDU diagrams are showing it to you correctly if the FCS appears after the payload/data. Wireshark is simply displaying it in a different place so that a user can evaluate all of the layer-2 details before analyzing the layer-3 data. That makes it easy to spot a low-level problem: if a CRC doesn't match, you analyze the layer-2 information before proceeding.

Related

Arducam Errors: Bogus Huffman Table

I am working with an Arducam OV2640 to capture images and transmit from one microcontroller to another. I am getting inconsistent images. Occasionally they turn out ok but a large portion of the time they have a 'bogus Huffman Table'.
I am familiar with Huffman tables, which has me guessing that I am losing some bytes in transmission, either from the camera to the microcontroller or over the wireless link between the microcontrollers I am using.
The only thing that has me confused is that I tested several thousand packets over the communication link between the two micros and I have a BER of zero (16-byte packets with a 16-bit CRC, with packet rejection and re-transmission if errors occur).
The image is also fine when I transmit it from the camera micro to my computer through UART.
Is the camera occasionally having issues? I have seen it mentioned as a problem but have no idea how this might be resolved.

What happens at the receive part when we write with SPI?

When the SPI master writes to the slave, something is shifted into the receive buffer, right?
If yes, then is it normal for the "RXDATAAVAILABLE" flag to be set? It seems like nonsense: we send data, and when the data has been sent, we get notified that data has been received.
If all of my statements are correct, then how do we know which of the data in the RXFIFO is the correct data?
Suppose we send a two-byte frame. The first byte is the address and the second one is a dummy, in order to read the value at that address (of the slave). Then suppose we have a two-level Rx FIFO. Instead of just the value read from the slave, that FIFO holds two bytes: the first is who knows what, and the second is the value read from the slave.
So the question is: how do we manage to receive only what is necessary, without getting junk data during the write part of the frame?
SPI works like a simple 8-bit shift register. You shift bits out on MOSI on each clock edge and at the same time shift new data in from MISO. Thus you send and receive at the same time. Hence the names MOSI = Master Out Slave In, and MISO = Master In Slave Out.
SPI peripherals on microcontrollers are more intricate than that, though, and have separate data registers that are distinct from the actual hardware shift register, so that we can write data without worrying about the pending transmission. Some may even have multiple data buffers. But at the fundamental level, SPI always works with 8 bits.
When the microcontroller acting as SPI master writes something, there are usually two flags: one that says the data buffer is available again, and another that tells you the transmission is done.
When you are done sending, you are also done receiving. You'll get some sort of flag set. This is assuming that all devices are implementing SPI as intended, which is often not the case.
Note that some devices implement a system where you first send x bytes of data, and after that receive x bytes of data. This seems to be the scenario you describe. Sending and receiving is not done at the same time for that device, but instead in sequence. Meaning that during the first transmission, you'll clock in garbage, and then in order to receive data, you must clock out garbage. This is no fault of SPI, but how the manufacturer of the specific device has specified things.
Note that SPI is very poorly standardized and therefore all manner of weird crap exists on the market. The manner of sending/receiving data may vary, the clock polarity may vary, and the clock edge on which the device samples the data may vary. Some devices might need delays between data bytes. Some devices might need some obscure handling of the Slave Select pin in order to work. It's all one big mess, and the lack of international standardization is to blame.
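As a concrete illustration of the write-then-read pattern described above, here is a minimal sketch in C. It assumes a hypothetical spi_transfer_byte() HAL call that clocks one byte out and returns the byte clocked in during the same eight clocks, and a device whose read command is simply the register address; both are assumptions, not any particular vendor's API.

#include <stdint.h>

/* Hypothetical HAL call: shifts 'tx' out on MOSI and returns the byte that
 * was shifted in on MISO during the same 8 clock cycles. */
extern uint8_t spi_transfer_byte(uint8_t tx);

/* Read one register from a typical "send address, then clock a dummy byte"
 * SPI slave: the byte received while the address goes out is garbage and is
 * discarded; the byte received during the dummy transfer is the reply. */
uint8_t spi_read_register(uint8_t reg_addr)
{
    (void)spi_transfer_byte(reg_addr);   /* ignore the meaningless first byte */
    return spi_transfer_byte(0x00u);     /* the dummy byte clocks the reply in */
}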
An SPI master engine's received data available flag will be set as a simple result of the occurrence of a word's worth of clock cycles generated by the master itself. It tells you nothing about the operation or even existence of a peripheral on the bus.
When this flag is set, it is entirely up to you and your software to know if the contents of the received data register will have meaning or not.
If you have properly selected and interacted with an existent, operational peripheral in a read or transfer operation where it is documented to give a result, they will have meaning.
If you have performed a purely write operation to a peripheral for which no reply data is documented at the word position in question, it will be meaningless, effectively no different than reading some random legal memory location. Note that in most cases, a write operation is simply a transfer where the received data is to be ignored - at implementation level there is usually no other difference.
If you have failed to address any existent peripheral, the result will similarly be meaningless.
As with any other memory or register read operation, it is up to you to know whether the contents of the register in a particular situation are meaningful or not.
Since you know that the first byte contains "who knows what" while the second has meaning, write your software to ignore the first and use the second.
(As an aside, many, though by no means all, SPI peripherals are documented to shift out whatever constitutes their primary status register during the address phase, since that makes for a quick way to poll it)
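To tie both answers together, here is a minimal polled sketch in C using hypothetical flag and data-register accessors (the names spi_tx_buffer_empty, spi_rx_data_available, spi_write_dr and spi_read_dr are placeholders, not a real vendor API): the receive flag is set for every byte clocked, and it is the software that decides which received byte actually has meaning.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical status/data accessors for a generic SPI master peripheral. */
extern bool    spi_tx_buffer_empty(void);
extern bool    spi_rx_data_available(void);   /* the "RXDATAAVAILABLE" flag */
extern void    spi_write_dr(uint8_t b);
extern uint8_t spi_read_dr(void);

/* Clock out two bytes (address + dummy) and keep only the second received
 * byte; the RX flag is set for both bytes, but only the second has meaning. */
uint8_t spi_read_reg_polled(uint8_t addr)
{
    const uint8_t tx[2] = { addr, 0x00u };
    uint8_t rx = 0;

    for (int i = 0; i < 2; i++) {
        while (!spi_tx_buffer_empty()) { }    /* wait for room in the TX buffer */
        spi_write_dr(tx[i]);
        while (!spi_rx_data_available()) { }  /* set after 8 clocks, every time */
        rx = spi_read_dr();                   /* i == 0: junk, overwritten below */
    }
    return rx;                                /* the byte clocked in with the dummy */
}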

USB performance issues

I am trying to transfer data over USB. These are packets of at most 64 bytes which get sent at a frequency of 4 kHz. This gives a bit rate of about 2 Mbit/s.
The software task that picks up this data runs at 2.5 kHz.
Ideally we never want packets to arrive at a rate slower than 2.5 kHz (so 2 kHz isn't good enough).
Is anyone aware of any generic limits on what USB can achieve?
We are running on a main board which has a 1.33 GHz processor running QNX and a daughter board which is a TWR K60F120M tower system running MQX.
Apart from the details of the system, is USB meant to be used for this kind of data transfer, i.e., high frequency and small packet sizes?
Many Thanks for your help
MG
USB, even at its slowest spec (1.1), can transfer data at up to 12 Mbit/s, provided you use the proper transfer mode. Full-speed USB processes 1000 "frames" per second. The frames contain control and data information, and various portions of each frame are used for various purposes, so the total information content is "multiplexed" amongst these competing requirements.
Low-speed devices will use just a few bytes in a frame to send or receive their data. Examples are modems, mice, keyboards, etc. The so-called full-speed devices (in USB 1.1) can achieve up to 12 Mbit/s by using isochronous mode transfers, meaning they get carved out a nice big chunk of each frame, and can send that much data (a fixed size) each time a frame comes along. This is the mode audio devices use to stream relatively data-intensive music to USB speakers, for example.
If you are able to do a little bit of local buffering, you may be able to use isochronous mode to get your 64-byte packets sent at the 1 kHz frame rate, but with 2 or 3 periods' worth (at 2.5 kHz) of data in each USB frame transfer. You'd want to reserve 64 x 3 = 192 bytes of data (plus maybe a few extra bytes for control info, such as how many chunks are present: 2 or 3?). Then, as the USB frames come by, you'd put your 2 or 3 chunks of data onto the wire, and the receiving end would get that data, albeit in a more bursty way than smoothly at a precise 2.5 kHz rate. However, this way of transferring the data would more than keep up, even with USB 1.1, and still use only a fraction of the total available USB bandwidth.
The problem, as I see it, is whether your system design can tolerate a data delivery rate that is "bursty"... in other words, instead of getting 64 bytes at a rate of 2.5 kHz, you'll be getting (on average) 160 bytes at a 1 kHz rate. In practice you'll get something like 3 chunks in one frame, then 2 in the next, then 3, then 2, and so on, averaging 2.5 chunks per 1 ms frame.
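A rough sketch of the buffering idea, in C: a 2.5 kHz producer pushes 64-byte chunks into a small ring, and a 1 kHz USB frame handler drains whatever has accumulated (2 or 3 chunks) into one isochronous transfer. All names, the one-byte chunk-count header and the buffer sizes are illustrative assumptions, not the QNX or MQX API.

#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE  64
#define RING_CHUNKS 8          /* a few frames' worth of headroom (power of two) */

static uint8_t  ring[RING_CHUNKS][CHUNK_SIZE];
static volatile unsigned head, tail;   /* single producer, single consumer */

/* Called at 2.5 kHz with each new 64-byte packet. */
void producer_2500hz_tick(const uint8_t *pkt)
{
    memcpy(ring[head % RING_CHUNKS], pkt, CHUNK_SIZE);
    head++;
}

/* Called once per 1 ms USB frame; fills 'iso_buf' (at least 1 + 3*64 bytes)
 * and returns the number of bytes to put into the isochronous transfer. */
unsigned usb_frame_1khz_tick(uint8_t *iso_buf)
{
    unsigned n = 0;
    while (tail != head && n < 3) {            /* at most 3 chunks per frame */
        memcpy(&iso_buf[1 + n * CHUNK_SIZE], ring[tail % RING_CHUNKS], CHUNK_SIZE);
        tail++;
        n++;
    }
    iso_buf[0] = (uint8_t)n;                   /* "how many chunks are present?" */
    return 1 + n * CHUNK_SIZE;
}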
So I think that, with USB, this will be the best you can do: a somewhat bursty delivery of either 2 or 3 of your device's data packets per 1 ms USB frame.
I am not an expert in USB, but I have done some work with it, including debugging a device-to-host tunneling protocol which used USB "interrupts", so I have seen this kind of implementation on other systems, to solve the problem of matching the USB frame rate to the device's data rate.

Coping with data loss for RTSP via UDP

I am receiving RTP packets via UDP (video data).
The RTP packets hold H264 that I need to decode. Unfortunately, most of them hold fragmented data, and since some RTP sequence numbers are missing, I cannot reconstruct the H264 properly.
Any idea how to reduce the data loss so that I can decode at least a couple of frames?
There is not much one can say: lost data is lost, as the name suggests. You can't get it back. In almost any case you can still feed the remaining NALs into the decoder and render the video. You will see artifacts introduced by the missing NALs, but that's life.
Lost data is lost.
In order to reduce data loss you will need to change your transmission protocol. Interleaved RTP over RTSP could be a good choice that is based on a similar technology stack.
Changing to TCP will obviously only help if you have enough bandwidth to transmit the video.
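If you stay on UDP, the practical recipe from the answer above can be sketched as follows (C, assuming FU-A fragmentation per RFC 6184 and a placeholder decoder_feed_nal(); the reassembly buffer handling is elided): on a sequence-number gap, throw away the half-built fragmented NAL and keep feeding every NAL you do complete into the decoder.

#include <stdint.h>
#include <stdbool.h>

/* Placeholder for your decoder's input call. */
extern void decoder_feed_nal(const uint8_t *nal, unsigned len);

static uint16_t expected_seq;
static bool     have_seq;
static bool     in_fragment;      /* currently reassembling an FU-A NAL */

void on_rtp_packet(uint16_t seq, const uint8_t *payload, unsigned len)
{
    if (len < 2)
        return;
    if (have_seq && seq != expected_seq)
        in_fragment = false;      /* gap: drop the partially reassembled NAL */
    have_seq = true;
    expected_seq = (uint16_t)(seq + 1);

    uint8_t nal_type = payload[0] & 0x1F;
    if (nal_type == 28) {                       /* FU-A fragment */
        bool start = payload[1] & 0x80;
        bool end   = payload[1] & 0x40;
        if (start) in_fragment = true;
        if (!in_fragment) return;               /* rest of a NAL we already lost */
        /* append payload+2 .. payload+len to the reassembly buffer here */
        if (end) {
            in_fragment = false;
            /* decoder_feed_nal(reassembly_buffer, reassembled_len); */
        }
    } else {
        decoder_feed_nal(payload, len);         /* single-NAL packet: pass through */
    }
}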
If you have control over the H264 encoder, enable its error resilience tools (http://www.slideshare.net/coldfire7/error-resiliency-and-concealment-in-h264-presentation), which make your video more robust against transmission errors, so that your RTP over UDP becomes 'more resistant' to packet losses.

How does PPP or Ethernet recover from errors?

Looking at the data-link level standards, such as PPP general frame format or Ethernet, it's not clear what happens if the checksum is invalid. How does the protocol know where the next frame begins?
Does it just scan for the next occurrence of "flag" (in the case of PPP)? If so, what happens if the packet payload just so happens to contain "flag" itself? My point is that, whether packet-framing or "length" fields are used, it's not clear how to recover from invalid packets where the "length" field could be corrupt or the "framing" bytes could just so happen to be part of the packet payload.
UPDATE: I found what I was looking for (which isn't strictly what I asked about) by looking up "GFP CRC-based framing". According to Communication Networks:
The GFP receiver synchronizes to the GFP frame boundary through a three-state process. The receiver is initially in the hunt state where it examines four bytes at a time to see if the CRC computed over the first two bytes equals the contents of the next two bytes. If no match is found the GFP moves forward by one byte as GFP assumes octet synchronous transmission given by the physical layer. When the receiver finds a match it moves to the pre-sync state. While in this intermediate state the receiver uses the tentative PLI (payload length indicator) field to determine the location of the next frame boundary. If a target number N of successful frame detection has been achieved, then the receiver moves into the sync state. The sync state is the normal state where the receiver examines each PLI, validates it using cHEC (core header error checking), extracts the payload, and proceeds to the next frame.
In short, each packet begins with "length" and "CRC(length)". There is no need to escape any characters and the packet length is known ahead of time.
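A sketch of the hunt state in C, assuming the cHEC is a CRC-16 with generator polynomial 0x1021 over the two PLI bytes (the core-header scrambling that GFP applies on the wire is omitted here, so treat this as an illustration of the search logic only):

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-16, polynomial x^16 + x^12 + x^5 + 1 (0x1021), zero init. */
static uint16_t crc16(const uint8_t *p, size_t n)
{
    uint16_t crc = 0;
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000u) ? (uint16_t)((crc << 1) ^ 0x1021u)
                                  : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Hunt state: slide a 4-byte window one byte at a time until the CRC of the
 * first two bytes (the PLI) matches the next two bytes (the cHEC). Returns
 * the offset of the candidate frame header, or -1 if none was found; the
 * pre-sync state would then confirm it over the next N frames. */
long gfp_hunt(const uint8_t *stream, size_t len)
{
    for (size_t off = 0; off + 4 <= len; off++) {
        uint16_t chec = ((uint16_t)stream[off + 2] << 8) | stream[off + 3];
        if (crc16(&stream[off], 2) == chec)
            return (long)off;
    }
    return -1;
}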
There seem to be two major approaches to packet framing:
encoding schemes (bit/byte stuffing, Manchester encoding, 4b5b, 8b10b, etc)
unmodified data + checksum (GFP)
The former is safer, the latter is more efficient. Both are prone to errors if the payload just happens to contain what looks like a valid packet, or if line corruption causes the following bytes to contain the "start of frame" byte sequence, but that sounds highly improbable. It's difficult to find hard numbers for GFP's robustness, but a lot of modern protocols seem to use it, so one can assume that they know what they're doing.
Both PPP and Ethernet have mechanisms for framing - that is, for breaking a stream of bits up into frames, in such a way that if a receiver loses track of what's what, it can pick up at the start of the next frame. These sit right at the bottom of the protocol stack; all the other details of the protocol are built on the idea of frames. In particular, the preamble, LCP, and FCS are at a higher level, and are not used to control framing.
PPP, over serial links like dialup, is framed using HDLC-like framing. A byte value of 0x7e, called the flag sequence, indicates the start of the frame. The frame continues until the next flag byte. Any occurrence of the flag byte in the content of the frame is escaped. Escaping is done by writing 0x7d, known as the control escape byte, followed by the byte to be escaped xor'd with 0x20. The flag sequence is thus escaped to 0x7d 0x5e; the control escape itself also has to be escaped, to 0x7d 0x5d. Other values can also be escaped if their presence would upset the modem. As a result, if a receiver loses synchronisation, it can just read and discard bytes until it sees a 0x7e, at which point it knows it's at the start of a frame again. The contents of the frame are then structured, containing some odd little fields that aren't really important, but are retained from an earlier IBM protocol, along with the PPP packet (called a protocol data unit, PDU), and also the frame check sequence (FCS).
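A minimal sketch of that escaping rule in C (it ignores the negotiated async-control-character map and the FCS, and the function name is illustrative):

#include <stdint.h>
#include <stddef.h>

#define PPP_FLAG 0x7E   /* frame delimiter */
#define PPP_ESC  0x7D   /* control escape  */

/* Byte-stuff a payload for HDLC-like framing: 0x7e -> 0x7d 0x5e and
 * 0x7d -> 0x7d 0x5d (escaped byte XORed with 0x20), with a flag byte at
 * each end. 'out' must hold at least 2 * len + 2 bytes; returns the number
 * of bytes written. */
size_t ppp_stuff(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t o = 0;
    out[o++] = PPP_FLAG;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == PPP_FLAG || in[i] == PPP_ESC) {
            out[o++] = PPP_ESC;
            out[o++] = in[i] ^ 0x20;
        } else {
            out[o++] = in[i];
        }
    }
    out[o++] = PPP_FLAG;
    return o;
}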
Ethernet uses a logically similar approach, of having symbols which are recognisable as frame start and end markers rather than data, but rather than having reserved bytes plus an escape mechanism, it uses a coding scheme which is able to express special control symbols that are distinct from data bytes - a bit like using punctuation to break up a sequence of letters. The details of the system used vary with the speed.
Standard (10 Mb/s) ethernet is encoded using a thing called Manchester encoding, in which each bit to be transmitted is represented as two successive levels on the line, in such a way that there is always a transition between levels in every bit, which helps the receiver to stay synchronised. Frame boundaries are indicated by violating the encoding rule, leading to there being a bit with no transition (I read this in a book years ago, but can't find a citation online - I might be wrong about this). In effect, this system expands the binary code to three symbols - 0, 1, and violation.
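For illustration, a Manchester encoder for one byte might look like the sketch below (the 1 -> low-to-high, 0 -> high-to-low polarity follows the usual description of 10 Mb/s ethernet, but take it as an assumption); the point is that every data bit produces a mid-bit transition, so a deliberate violation with no transition is unmistakable.

#include <stdint.h>

/* Manchester-encode one byte, MSB first: each data bit becomes two half-bit
 * levels. Here a 1 is sent as low-then-high ("01") and a 0 as high-then-low
 * ("10"), so every bit has a transition in the middle. Returns 16 half-bit
 * levels packed into a 16-bit word, most significant half-bit first. */
uint16_t manchester_encode(uint8_t byte)
{
    uint16_t out = 0;
    for (int bit = 7; bit >= 0; bit--) {
        out <<= 2;
        out |= ((byte >> bit) & 1) ? 0x1 : 0x2;   /* 1 -> "01", 0 -> "10" */
    }
    return out;
}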
Fast (100 Mb/s) ethernet uses a different coding scheme, based on a 4b/5b code, where groups of four data bits (nybbles) are represented as groups of five bits on the wire, and transmitted directly, without the Manchester scheme. The expansion to five bits lets the sixteen necessary patterns be chosen to fulfil the requirement for frequent level transitions, again to help the receiver stay synchronised. However, there's still room to choose some extra symbols, which can be transmitted but don't correspond to data values, in effect expanding the set of nybbles to twenty-four symbols - the nybbles 0 to F, and symbols called Q, I, J, K, T, R, S and H. Ethernet uses a JK pair to mark frame starts, and TR to mark frame ends.
Gigabit ethernet is similar to fast ethernet, but with a different coding scheme - the optical fibre versions use an 8b/10b code instead of the 4b/5b code, and the twisted-pair version uses some very complex quinary code arrangement which I don't really understand. Both approaches yield the same result, which is the ability to transmit either data bytes or one of a small set of additional special symbols, and those special symbols are used for framing.
On top of this basic framing structure, there is then a fixed preamble, followed by a frame delimiter, and some control fields of varying pointlessness (hello, LLC/SNAP!). Validity of these fields can be used to validate the frame, but they can't be used to define frames on their own.
You're pretty close to the correct answer already. Basically if it starts with a preamble and ends in something that matches as a checksum, it's a frame and passed up to higher layers.
PPP and ethernet both look for the next frame start signal. In the case of Ethernet, it's the preamble, a 64-bit sequence of alternating bits ending with the start frame delimiter. If an ethernet decoder sees that, it simply assumes what follows is a frame. By capturing the bits and then checking whether the checksum matches, it decides if it has a valid frame.
As for the payload containing the FLAG, in PPP it is escaped with additional bytes to prevent such misinterpretation.
As far as I know, PPP only supports error detection, and does not support any form of error correction or recovery.
Backed up by Cisco here: http://www.cisco.com/en/US/docs/internetworking/technology/handbook/PPP.html
This Wikipedia PPP line activation section describes the basics of RFC 1661.
A Frame Check sequence is used to detect transmission errors in a frame (described in the earlier Encapsulation section).
The diagram from RFC 1661 on this Wikipedia page describes how the Network protocol phase can restart with Link Establishment on an error.
Also, some notes from the Cisco page referred to by Suvesh.
PPP Link-Control Protocol
The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection. LCP goes through four distinct phases.
First, link establishment and configuration negotiation occur. Before any network layer datagrams (for example, IP) can be exchanged, LCP first must open the connection and negotiate configuration parameters. This phase is complete when a configuration-acknowledgment frame has been both sent and received.
This is followed by link quality determination. LCP allows an optional link quality determination phase following the link-establishment and configuration-negotiation phase. In this phase, the link is tested to determine whether the link quality is sufficient to bring up network layer protocols. This phase is optional. LCP can delay transmission of network layer protocol information until this phase is complete.
At this point, network layer protocol configuration negotiation occurs. After LCP has finished the link quality determination phase, network layer protocols can be configured separately by the appropriate NCP and can be brought up and taken down at any time. If LCP closes the link, it informs the network layer protocols so that they can take appropriate action.
Finally, link termination occurs. LCP can terminate the link at any time. This usually is done at the request of a user but can happen because of a physical event, such as the loss of carrier or the expiration of an idle-period timer.
Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link. Link-termination frames are used to terminate a link, and link-maintenance frames are used to manage and debug a link.
These frames are used to accomplish the work of each of the LCP phases.