UART programming in C on Linux platform and parity bit - uart

I am quite new to UART programming and trying to understand the concept of the parity bit, which is still not totally clear to me.
From what I understand so far:
Let's say I have 8 bits to transmit from UART deviceA to UART deviceB. Each time I want to send a byte to deviceB, a start bit is sent, then the 8 data bits, then the parity bit and then the stop bit. OK, this is clear. Now, when deviceA is set to work with odd parity, the parity bit is set to 0 if the number of 1s in the byte is already odd, and it's the opposite if deviceA is set to even parity. OK, I understand that too.
Now, when deviceB receives the frame, it checks, for the byte sent, that the parity bit is coherent with the number of 1s in the byte, and raises a parity error if not. But deviceB also has a parity mode.
So my question is:
Should deviceA and deviceB be set to the same parity mode (even or odd) for this check to work as expected, or am I wrong?
Thanks for any help clarifying this point.

You have understood the concept of parity quite clearly. It is a way to detect errors that can occur during the transmission of a bit sequence.
Yes: both devices must be configured for the same parity (even or odd), so that the receiver checks each frame against the same rule the sender used to generate the parity bit. If the two machines use different parity settings, the receiver will flag perfectly good frames as parity errors.
The table below shows how the parity bit changes between odd and even parity for the same data byte:

Data byte    Ones in data    Parity bit (even)    Parity bit (odd)
0110 1100    4               0                    1
0110 1101    5               1                    0
1111 1111    8               0                    1

Hope this helps :)
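For reference, here is a minimal sketch of how a parity bit can be computed in software (plain C; compute_parity is a hypothetical helper, not tied to any UART driver, since the hardware normally does this for you):

#include <stdint.h>

/* Returns the parity bit to send along with 'data'.
 * odd != 0 -> odd parity: total number of 1s (data bits + parity bit) is odd.
 * odd == 0 -> even parity: total number of 1s is even. */
static uint8_t compute_parity(uint8_t data, int odd)
{
    uint8_t ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (data >> i) & 1u;

    uint8_t even_bit = ones & 1u;            /* 1 when the data already has an odd count */
    return odd ? (uint8_t)(even_bit ^ 1u) : even_bit;
}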


How does a UART handle a spike or noise-induced state change?

In UART, the idle state of the line is high; the receiver starts the reception process when it sees a start bit, i.e. a state change from high to low.
My question is: in a worst-case scenario, a spike, noise or other disturbance may pull the line from its idle high state to low, which is enough for the UART to treat it as a start bit, even though it should not start processing the following bits. I know the UART handles these scenarios internally and avoids processing such glitches.
Can anyone explain how a UART behaves in this case?
If the signal drop is short enough, the UART might not detect it as a start bit. For example, a UART might oversample the line level at 16x the bit rate and when it detects a falling edge in the RX line will then look for a certain number of 0 samples in the next 16 to detect the start bit. If it doesn't see enough 0 samples, it won't consider it a start bit (and might set some condition bit that indicates a noisy/dirty line).
But if the UART sees the line drop for long enough to consider it a start bit, then the UART will start processing as if a character is being received. If the signal is "junk" this might result in:
a spurious character,
a framing error,
a parity error (if the UART is configured to check parity),
a 'break' signal (if the line stays in the low state long enough)
The datasheet for the UART device should give details on how bits are sampled & detected.
The UART receiver samples the RX line 16 times per bit (on most microcontrollers) before deciding the value of each bit. For example, if the baud rate is 9600, each bit time is about 104 µs, so to correctly detect the value of each bit (i.e. whether it is high or low) the receiver samples the line every 104 µs / 16 ≈ 6.5 µs. A majority vote of these samples is then used to decide the value of the bit. The start bit tells the receiver that data bits are about to follow, and it must be low. By employing such a sampling scheme, the receiver largely suppresses the effect of short noise pulses.
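As a rough illustration (not tied to any particular UART silicon), a software receiver using the same 16x oversampling idea might validate a start bit like this; sample_rx() and wait_one_sample() are hypothetical helpers:

#include <stdbool.h>
#include <stdint.h>

extern int  sample_rx(void);        /* hypothetical: returns the current RX level, 0 or 1 */
extern void wait_one_sample(void);  /* hypothetical: delays one bit time / 16 */

/* Called after a falling edge on RX. Returns true only if the low level
 * persists for the majority of the 16 samples in the first bit time, so a
 * short spike fails the test and is ignored instead of starting a frame. */
static bool start_bit_is_valid(void)
{
    uint8_t low_samples = 0;
    for (uint8_t i = 0; i < 16; i++) {
        if (sample_rx() == 0)
            low_samples++;
        wait_one_sample();
    }
    return low_samples > 8;         /* majority vote */
}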
UART is an old, increasingly obsolete 1970s technology, so the error handling is poor. On the most fundamental level, the UART will trigger any read by a falling edge, which is the start bit.
So if there is noise which gets interpreted as the falling edge of the start bit, the UART will then sample 8 data bits, with sample times according to its pre-configured baudrate (usually it samples 16 times faster than the set baudrate). It will sample regardless of any edges present. After that, it will clock in a potential parity bit, and then 1 or 2 stop bits.
At the point where the stop bit is read, the UART checks to ensure that the parity bit (if present) is correct, if not, you will get parity errors. And then it checks the stop bit, which must be high, otherwise you get a framing error.
These kinds of error checks are roughly fifty/fifty: a single parity bit catches any odd number of flipped bits but misses any even number, so double-bit errors or worse cause lots of problems. Parity in particular is a poor method of detecting errors, and professionals largely stopped using it around 30 years ago. Overall, the error-detection capability of the hardware is very poor, by design.
This is why all UART-based protocols need checksums, sync bytes and other such overhead in order to work. UART is very vulnerable to EMI, to the point where you cannot reliably run UART buses off a circuit board without using differential signalling (as in RS-422).

Issues using PIC18 as an SPI slave

I have been working with a PIC18F45K20 running at 16 MHz and using it as an SPI slave. I find that no matter what SPI clock rate (SCK) the master uses, I always have to add a significant delay (~64 µs) between SPI bytes to avoid SPI collisions or receive overflows. Without the delay, even at very slow SPI clock rates, only about 95% of the SPI packets get through without a collision or overflow.
Online posts lead me to think that this may be a "feature" of this and other PIC18 processors.
Have others observed this same slave “feature”?
If this is a “feature”, is it found in all PIC18 processors?
I tested the PIC18 without an interrupt, polling as follows:
if (SSPSTATbits.BF)          /* BF set: a byte has been received */
{
    DataIn = SSPBUF;         /* read the received byte (clears BF) */
    SSPBUF = DataOut;        /* preload the reply for the next transfer */
}
I also tested using an interrupt and saw the same problem.
It makes me wonder whether the module is really detecting the SPI clock properly.
If you have an oscilloscope, check that the chip select is not being released before the PIC has clocked out the last SPI data byte. You need to wait on the SPI busy bit before releasing the chip select line.
The PIC18 is an 8-bit microcontroller (even though its int type is 16 bits wide), and its SPI module works on 8-bit data. If the master clocks in more than 8 bits back to back, for example 16 bits, before the slave has read SSPBUF, a receive overflow occurs in the SPI module and it can no longer respond correctly to the master's clock. So in slave mode, make sure the data from the master arrives in 8-bit units that the slave has time to read out. If the PIC18 is the SPI master the situation is easier: even if its slave has 16 bits to send, the PIC18 controls the clock, so it can stop after the first 8 bits and wait until its buffer has been read and emptied before clocking the next 8 bits.
I've also come across this issue, and it seems that what one should take into account is that the supported SPI clock rate only tells you how fast the MCU can shift one byte into SSPBUF.
Reading that byte from SSPBUF and storing it in a buffer requires some work, like incrementing a pointer, which takes time. This is what reduces the usable SPI bandwidth for multi-byte transfers.
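A minimal sketch of the interrupt-driven variant, assuming XC8-style syntax and the MSSP interrupt flag in PIR1 (register names as on many PIC18 parts); the buffer handling is only illustrative:

#include <xc.h>
#include <stdint.h>

#define RX_BUF_SIZE 32                     /* illustrative size */
volatile uint8_t rx_buf[RX_BUF_SIZE];
volatile uint8_t rx_head = 0;
volatile uint8_t tx_next = 0x00;           /* byte preloaded for the next transfer */

/* MSSP interrupt: read SSPBUF as soon as a byte arrives so the buffer is
 * free again before the master clocks in the next byte. */
void __interrupt() isr(void)
{
    if (PIR1bits.SSPIF)
    {
        uint8_t in = SSPBUF;               /* read received byte (clears BF) */
        SSPBUF = tx_next;                  /* preload reply for the next transfer */
        rx_buf[rx_head] = in;
        rx_head = (uint8_t)((rx_head + 1u) % RX_BUF_SIZE);
        PIR1bits.SSPIF = 0;                /* clear the interrupt flag */
    }
}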

Plot a graph of Time vs RSSI for a 433 MHz RF ASK Receiver

Hi, I'm using the following RF module
http://www.apogeekits.com/rf_receiver_module_rx433.htm
on an embedded board with the PIC16F628A. Sadly, I realized that the signal strength is only available in analog form, and I couldn't see how to get an RSSI reading off the pin because, well, my PIC is digital, DUH!
My basic idea was
To get the RSSI value from my Receiver
Send it to the PIC
Link the PIC to a PC via RS232
Plot a graph of time vs RSSI of the receiver (so I can make out how close my TX is to my RX)
I thought it was bloody brilliant at first, but I've hit a dead end here. Any ideas on getting the RSSI data to my PC from this receiver would be nice.
Thanks in Advance
You can get a PIC that has an integrated ADC for sampling the analog signal. Or, you can use an external ADC chip to do the conversion. You would connect that to your PIC using SPI or I2C.
The simplest thing to do is obviously to use a more appropriate microcontroller - one with an ADC! There are many (most), including PICs (though that wouldn't be my first choice).
Attaching an external SPI or I2C ADC might be a bit tedious: since your part has no SPI or I2C peripheral, you would have to bit-bang it. If you go that route, use an SPI part - it's simpler. Your sample rate will suffer and may end up being a bit jittery if you are not careful.
Another solution is to use a voltage-controlled PWM, then use the timer input capture to time the pulse width. That will give you good regularity and potentially good resolution. You can get a chip (example) to do that, or roll your own. That last option requires a triangle-wave input as well as the measured (control) voltage, but one is available on the same site...
In a similar vein, you could use a low-frequency VCO (example) and use its output to clock one of the timers, then use a second timer to periodically sample and reset the first. The count will relate to the voltage, though not necessarily linearly; linearisation could be done either on the PIC or at the receiving PC - I'd go for the latter, since your micro will be slow at arithmetic (performance-wise), even integer arithmetic, especially if it involves division.
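As a minimal sketch of the "PIC with an on-chip ADC" route: read the RSSI voltage and stream it to the PC as one ASCII line per sample, ready for plotting against time. adc_read() and uart_puts() are hypothetical helpers, and the 10-bit resolution and AN0 channel are assumptions:

#include <stdint.h>
#include <stdio.h>

extern uint16_t adc_read(uint8_t channel);   /* hypothetical: returns a 10-bit result */
extern void     uart_puts(const char *s);    /* hypothetical: blocking UART transmit */

/* Sample the RSSI pin and send one line per reading, e.g. "RSSI=512\r\n",
 * so the PC end can timestamp each line and plot time vs RSSI. */
void rssi_sample(void)
{
    char line[16];
    uint16_t raw = adc_read(0);              /* RSSI assumed wired to AN0 */
    sprintf(line, "RSSI=%u\r\n", (unsigned)raw);
    uart_puts(line);
}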

Simple robust error correction for transmission of ascii over serial (RS485)

I have a very low speed data connection over serial (RS485):
9600 baud
actual data transmission rate is about 25% of that.
The serial line runs through an area of extremely high EMR. Peak fluctuations can reach 3000 kV.
I am not in the position (yet) to force a change in the physical medium, but could easily offer to put in a simple robust forward error correction scheme. The scheme needs to be easy to implement on a PIC18 series micro.
Ideas?
This site claims to implement Reed-Solomon on the PIC18. I've never used it myself, but perhaps it could be a helpful reference?
Search for the CRC algorithm used in the MODBUS protocol (MODBUS RTU uses a CRC-16; the MODBUS ASCII framing uses a simpler LRC checksum).
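For reference, a minimal sketch of the widely published MODBUS RTU CRC-16 (initial value 0xFFFF, reflected polynomial 0xA001); this is generic code, not taken from any of the sites mentioned:

#include <stdint.h>
#include <stddef.h>

/* CRC-16 as used by MODBUS RTU: init 0xFFFF, reflected polynomial 0xA001. */
uint16_t crc16_modbus(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (uint8_t bit = 0; bit < 8; bit++) {
            if (crc & 1u)
                crc = (crc >> 1) ^ 0xA001;
            else
                crc >>= 1;
        }
    }
    return crc;
}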
I develop with PIC18 devices and currently use the MCC18 and PICC18 compilers. I noticed a few weeks ago that the peripheral headers for PICC18 incorrectly map the Busy2USART() macro to the TRMT bit instead of the TRMT2 bit. This caused me major headaches for a short time before I discovered the problem. For example, a simple transmission:
putc2USART(*p_value++);     /* send first byte */
while (Busy2USART());       /* spin until the transmit shift register is empty */
putc2USART(*p_value);       /* send next byte */
Because the Busy2USART() macro was mapped to the wrong bit, I was never actually waiting for bytes to leave the shift register. Before I found the inaccurate header file, the only way I could successfully transmit a byte over RS-485 was to wait 1 ms between bytes. My baud rate was 91912 and the delays between bytes killed my throughput.
I also suggest implementing some means of collision detection along with checksums. Checksums are cheap, even on a PIC18. If you are able to listen to your own transmissions, do so; it will let you detect collisions that can result from duplicate addresses on the same loop or from incorrect timing.

Protocols used to talk between an embedded CPU and a PC

I am building a small device with its own CPU (AVR Mega8) that is supposed to connect to a PC. Assuming that the physical connection and passing of bytes has been accomplished, what would be the best protocol to use on top of those bytes? The computer needs to be able to set certain voltages on the device, and read back certain other voltages.
At the moment, I am thinking of a completely host-driven synchronous protocol: the computer sends a request, the embedded CPU answers. Any other ideas?
Modbus might be what you are looking for. It was designed for exactly the type of problem you have. There is lots of code and tooling out there, and adherence to a standard could mean easy reuse later. It also supports human-readable ASCII, so it is still easy to understand and test.
See FreeModBus for windows and embedded source.
There's a lot to be said for client-server architecture and synchronous protocols. Simplicity and robustness, to start. If speed isn't an issue, you might consider a compact, human-readable protocol to help with debugging. I'm thinking along the lines of modem AT commands: a "wakeup" sequence followed by a set/get command, followed by a terminator.
Host --> [V02?] // Request voltage #2
AVR --> [V02=2.34] // Reply with voltage #2
Host --> [V06=3.12] // Set voltage #6
AVR --> [V06=3.15] // Reply with voltage #6
Each side might time out if it doesn't see the closing bracket, and they'd re-synchronize on the next open bracket, which cannot appear within the message itself.
Depending on speed and reliability requirements, you might encode the commands into one or two bytes and add a checksum.
It's always a good idea to reply with the actual voltage, rather than simply echoing the command, as it saves a subsequent read operation.
Also helpful to define error messages, in case you need to debug.
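As a rough sketch of the receive side of such a bracket-framed protocol (handle_command() is a placeholder, not part of the original suggestion):

#include <stdbool.h>
#include <stddef.h>

#define CMD_MAX 32

extern void handle_command(const char *cmd, size_t len);  /* placeholder */

/* Feed each received byte to this function. It discards everything until an
 * opening '[' and hands a complete "V02=2.34"-style command to
 * handle_command() when the closing ']' arrives. A stray '[' simply restarts
 * the frame, which gives the re-synchronization described above. */
void rx_byte(char c)
{
    static char   buf[CMD_MAX];
    static size_t len = 0;
    static bool   in_frame = false;

    if (c == '[') {                 /* start (or restart) of a frame */
        in_frame = true;
        len = 0;
    } else if (c == ']') {          /* end of frame: process the command */
        if (in_frame)
            handle_command(buf, len);
        in_frame = false;
    } else if (in_frame && len < CMD_MAX) {
        buf[len++] = c;
    }
}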
My vote is for the human readable.
But if you go binary, try to put a header byte at the beginning to mark the beginning of a packet. I've always had bad luck with serial protocols getting out of sync. The header byte allows the embedded system to re-sync with the PC. Also, add a checksum at the end.
I've done stuff like this with a simple binary format
#include <stdint.h>

typedef int16_t  int16;     /* fixed-width types via <stdint.h> */
typedef uint16_t uint16;

struct PacketHdr
{
    char syncByte1;
    char syncByte2;
    char packetType;
    char bytesToFollow;     /* -or- totalPacketSize */
};

struct VoltageSet
{
    struct PacketHdr hdr;
    int16  channelId;
    int16  voltageLevel;
    uint16 crc;
};

#define N 8                 /* number of channels is fixed; 8 is only an example */

struct VoltageResponse
{
    struct PacketHdr hdr;
    int16  data[N];         /* one entry per channel */
    uint16 crc;
};
The sync bytes are less critical in a synchronous protocol than in an asynchronous one, but they still help, especially when the embedded system is first powering up, and you don't know if the first byte it gets is the middle of a message or not.
The type should be an enum that tells how to interpret the packet. The size could be inferred from the type, but if you send it explicitly, then the receiver can handle unknown types without choking. You can use 'total packet size' or 'bytes to follow'; the latter can make the receiver code a little cleaner.
The CRC at the end adds more assurance that you have valid data. Sometimes I've seen the CRC in the header, which makes declaring structures easier, but putting it at the end lets you avoid an extra pass over the data when sending the message.
The sender and receiver should both have timeouts that start after the first byte of a packet is received, in case a byte is dropped. The PC side also needs a timeout to handle the case where the embedded system is not connected and there is no response at all.
If you are sure that both platforms use IEEE-754 floats (PCs do) and have the same endianness, then you can use floats as the data type. Otherwise it's safer to use integers, either raw A/D counts or a preset scale (e.g. 1 bit = 0.001 V gives a ±32.767 V range for an int16).
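For instance, a worked example of the "preset scale" idea: converting a raw A/D reading to the 1 mV-per-count protocol units using only integer arithmetic (the 10-bit converter and 5 V reference here are assumptions, not from the original answer):

#include <stdint.h>

/* Convert a raw 10-bit A/D reading (0..1023, 5.00 V reference assumed) to
 * millivolts, i.e. to the protocol's 1 bit = 0.001 V scale, without floats. */
static int16_t adc_to_millivolts(uint16_t raw)
{
    return (int16_t)(((uint32_t)raw * 5000UL) / 1023UL);
}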
Adam Liss makes a lot of great points. Simplicity and robustness should be the focus. Human readable ASCII transfers help a LOT while debugging. Great suggestions.
They may be overkill for your needs, but HDLC and/or PPP add in the concept of a data link layer, and all the benefits (and costs) that come with a data link layer. Link management, framing, checksums, sequence numbers, re-transmissions, etc... all help ensure robust communications, but add complexity, processing and code size, and may not be necessary for your particular application.
A USB connection would answer all your requirements. It could be a very simple USB device with only a control pipe for sending requests to your device, or you can add an interrupt pipe that allows you to notify the host about changes in your device.
There are a number of simple USB controllers that can be used, for example from Cypress or Microchip.
The protocol on top of the transfer really depends on your requirements. From your description it seems that a simple synchronous protocol is definitely enough. What makes you hesitate and look for another approach? Share your doubts and we will try to help :).
If I wasn't expecting to need to do efficient binary transfers, I'd go for the terminal-style interface already suggested.
If I do want a binary packet format, I tend to use something loosely based on the PPP byte-async HDLC framing, which is extremely simple and easy to send and receive; basically:
Packets start and end with 0x7e
You escape a char by prefixing it with 0x7d and toggling bit 5 (i.e. xor with 0x20)
So 0x7e becomes 0x7d 0x5e
and 0x7d becomes 0x7d 0x5d
Every time you see a 0x7e, if you have any data stored, you can process it.
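A minimal sketch of the sender side of that framing (send_byte() is a placeholder for whatever UART transmit routine you use):

#include <stdint.h>
#include <stddef.h>

#define FLAG   0x7E   /* frame delimiter */
#define ESCAPE 0x7D   /* escape character */

extern void send_byte(uint8_t b);   /* placeholder UART transmit routine */

/* Send one framed packet: flag, byte-stuffed payload, flag.
 * 0x7E -> 0x7D 0x5E and 0x7D -> 0x7D 0x5D, as described above. */
void send_frame(const uint8_t *payload, size_t len)
{
    send_byte(FLAG);
    for (size_t i = 0; i < len; i++) {
        uint8_t b = payload[i];
        if (b == FLAG || b == ESCAPE) {
            send_byte(ESCAPE);
            send_byte(b ^ 0x20u);    /* toggle bit 5 */
        } else {
            send_byte(b);
        }
    }
    send_byte(FLAG);
}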
I usually do host-driven synchronous stuff unless I have a very good reason to do otherwise. It's a technique which extends from simple point-to-point RS-232 to multidrop RS-422/485 without hassle - often a bonus.
As you may have already determined from all the responses that don't point you to a specific protocol, a roll-your-own approach may well be your best choice.
So, this got me thinking, and here are a few of my thoughts --
Given that this chip has 6 ADC channels, that you are most likely using RS-232 serial comms (a guess from your question), and of course that code space is limited, defining a simple command structure will help, as Adam points out. You may wish to keep input processing on the chip to a minimum, so binary sounds attractive, but the trade-off is ease of development AND servicing (you may have to troubleshoot a dead input 6 months from now) -- HyperTerminal is a powerful debug tool -- so that got me thinking about how to implement a simple command structure with good reliability.
A few general considerations --
Keep commands the same size -- it makes decoding easier.
Framing the commands, plus an optional checksum, as Adam points out, can easily be wrapped around your commands. (With small commands, a simple XOR/ADD checksum is quick and painless.)
I would recommend a start-up announcement to the host with the firmware version at reset -- e.g., "HELLO; Firmware Version 1.00z" -- which tells the host that the target has just started and what is running.
If you are primarily monitoring, you may wish to consider a "free run" mode where the target simply cycles through the analog and digital readings -- of course, this doesn't have to be continuous; it can be spaced at 1, 5 or 10 seconds, or run only on command. Your micro is always listening, so sending an updated value is an independent task.
Terminating each output line with a CR (or other character) makes synchronization at the host straightforward.
For example, your micro could simply output the strings:
V0=3.20
V1=3.21
V2= ...
D1=0
D2=1
D3=...
and then start over --
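A minimal sketch of one pass of that free-run output (adc_read_millivolts(), read_digital() and uart_puts() are hypothetical helpers, and the 1 mV scaling is only an assumption for the example):

#include <stdint.h>
#include <stdio.h>

extern uint16_t adc_read_millivolts(uint8_t channel);  /* hypothetical */
extern uint8_t  read_digital(uint8_t channel);         /* hypothetical */
extern void     uart_puts(const char *s);              /* hypothetical */

/* One pass of the "free run" mode: report every analog and digital channel,
 * one CR-terminated line each, then the caller starts over. */
void report_all(void)
{
    char line[24];
    for (uint8_t ch = 0; ch < 6; ch++) {               /* 6 ADC channels */
        uint16_t mv = adc_read_millivolts(ch);
        sprintf(line, "V%u=%u.%02u\r", (unsigned)ch,
                (unsigned)(mv / 1000), (unsigned)((mv % 1000) / 10));
        uart_puts(line);
    }
    for (uint8_t ch = 1; ch <= 3; ch++) {              /* digital inputs */
        sprintf(line, "D%u=%u\r", (unsigned)ch, (unsigned)read_digital(ch));
        uart_puts(line);
    }
}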
Also, commands could be really simple --
? - Read all values -- there aren't that many of them, so just get them all.
X=12.34 - To set a value: the first byte is the port, then the voltage. I would recommend keeping the "=" and the "." as framing to help ensure a valid packet if you forgo the checksum.
Another possibility: if your outputs are within a set range, you could prescale them. For example, if the output doesn't have to be exact, you could send something like
5=0
6=9
2=5
which would set port 5 off, port 6 to full on, and port 2 to half value. With this approach, ASCII and binary data are just about on the same footing in terms of computing/decoding resources at the micro. For more precision, make the output 2 bytes, e.g., 2=54. Or add a cross-reference table, and the values don't even have to be linear: the data byte is simply an index into a look-up table...
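For instance, a tiny sketch of that look-up-table idea, with made-up values:

#include <stdint.h>

/* Map a single received digit 0..9 to a PWM duty value (0..255).
 * The values are arbitrary; the point is that the mapping need not be linear. */
static const uint8_t duty_table[10] = {
    0, 10, 25, 45, 70, 100, 135, 175, 215, 255
};

static uint8_t digit_to_duty(uint8_t digit)
{
    return (digit < 10u) ? duty_table[digit] : 0u;
}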
As I like to say: simple is usually better, unless it's not.
Hope this helps a bit.
Had another thought while re-reading: adding a "*" command could request the data wrapped in HTML tags, and then your host app could simply redirect the output from your micro to a browser and voila, browser-ready --
:)