How to set bits on the TI TM4C123G launchpad - embedded

I have a question about how bits are set (or cleared) in the TI Launchpad registers. It seems sometimes they are bitwise OR'd and other times they are set by a plain assignment statement. For example, there is the clock-gating register in which bit 5 must be set in order to be able to use GPIO Port F:
#define SYSCTL_RCGC2_R (*((volatile unsigned long *)0x400FE108))
SYSCTL_RCGC2_R = 0x00000020; //What are the values of all the bits now?
Also, I've seen bits set by bitwise or:
SYSCTL_RCGC2_R |= 0x00000020;

SYSCTL_RCGC2_R = 0x00000020 ;
Sets all bits regardless of their current state. In this case all but b5 are zeroed.
SYSCTL_RCGC2_R |= 0x00000020 ;
Sets only b5, leaving all other bits unchanged. The |= assignment is equivalent to:
SYSCTL_RCGC2_R = SYSCTL_RCGC2_R | 0x00000020 ;
i.e. whatever SYSCTL_RCGC2_R contains is OR'ed with 0x00000020. So b5 must become 1 while all other bits remain unchanged because x OR 0 = x while x OR 1 = 1.
Similarly you can clear an individual bit by AND'ing an inverted bit-mask thus:
SYSCTL_RCGC2_R &= ~0x00000020 ;
because ~ inverts the bits of the mask (giving 0xFFFFFFDF), and x AND 0 = 0 while x AND 1 = x.
Note that none of this is specific to the TI Launchpad or GPIO registers; it is fundamental C behavior that applies on any platform to any integer data object.
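To put all three idioms in one place, here is a minimal sketch using the register macro from the question (the XOR toggle is included for completeness):
#define SYSCTL_RCGC2_R (*((volatile unsigned long *)0x400FE108))
#define GPIOF_CLK_BIT 0x00000020UL /* bit 5: GPIO Port F clock gate */

void rcgc2_examples(void)
{
    SYSCTL_RCGC2_R |= GPIOF_CLK_BIT;  /* set bit 5, leave the rest alone   */
    SYSCTL_RCGC2_R &= ~GPIOF_CLK_BIT; /* clear bit 5, leave the rest alone */
    SYSCTL_RCGC2_R ^= GPIOF_CLK_BIT;  /* toggle bit 5 (XOR)                */
}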

This is basic C operator behavior, nothing special about the TI Launchpad. Plain assignment sets or clears every bit of the register. The bitwise OR operator sets the bits specified but doesn't clear any bits that were already set. Use a bitwise OR when you want to set a portion of the register without changing the rest. (A bitwise AND can likewise be used to clear a portion without changing the rest.)

Related

Can't get my DAC (PT8211) to work correctly using a PIC32MX µC and SPI

I'm just trying to learn to use an external ADC and DAC (PT8211) with my PIC32MX534F06H.
So far, my code just samples a signal with the ADC every time a timer interrupt is triggered, then sends the same signal out to the DAC.
The interrupt and ADC parts work fine and have been tested independently, but the voltages that my DAC outputs don't make much sense to me: they stay at 2.5 V (it's powered at 0-5 V).
I've tried to feed the DAC various values ranging from 0 to 65534 (it's a 16-bit DAC, so I guess that should be the expected input range, right?); the voltage stays at 2.5 V.
I've tried changing the SPI configuration, using different SPIs (3 and 4) and DACs (I have one soldered to my PCB on SPI3, and one on a breadboard wired to SPI4, in case the one soldered on my board was defective).
I made sure that the chip-select line works as expected.
I couldn't look at the data and clock being transmitted since I don't have a scope yet.
I'm a bit out of ideas now.
Chip selection and SPI configuration settings
signed short adc_value;
signed short DAC_output_value;
int Empty_SPI3_buffer;
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000 // SPI on, 16-bit master,CKE=1,CKP=0
#define SPI4_BAUD 100 // clock divider
DAC output function
//output to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
Chip_Select_DAC_Clr();
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bit word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer
Chip_Select_DAC_Set();
INTEnableInterrupts();
}
ISR sampling the data, triggered by Timer 1. This works fine.
ADC_Input() stores the sample in the global variable adc_value (12 bits, signed)
//ISR to sample data
void __ISR( _TIMER_1_VECTOR, IPL7SRS) Test_data_sampling_in( void)
{
IFS0bits.T1IF = 0;
ADC_Input();
//rescale the signed 12 bit audio values to unsigned 16 bits wide values
DAC_output_value = adc_value + 2048; // first unsign the signed 12-bit values (range 0 - 4095, center 2048)
DAC_output_value = DAC_output_value *16; // the scale between 12 and 16 bits is actually 16=65536/4096
DAC_Output(DAC_output_value);
}
main function with SPI, IO, Timer configuration
void main() {
SPI4CON = SPI4_CONF;
SPI4BRG = SPI4_BAUD;
TRISE = 0b00100000;
TRISD = 0b000000110100;
TRISG = 0b0010000000;
LATD = 0x0;
SYSTEMConfigPerformance(80000000L);
INTCONSET = _INTCON_MVEC_MASK; /* Set the interrupt controller for multi-vector mode */
//
T1CONbits.TON = 0; /* turn off Timer 1 */
T1CONbits.TCKPS = 0b11; /* pre-scale = 1:256 (timer clock = PBCLK, 80MHz (?)) */
PR1 = 1816; /* T1 period ~ ? */
TMR1 = 0; /* clear Timer 1 counter */
//
IPC1bits.T1IP = 7; /* Set Timer 1 interrupt priority to 7 */
IFS0bits.T1IF = 0; /* Reset the Timer 1 interrupt flag */
IEC0bits.T1IE = 1; /* Enable interrupts from Timer 1 */
T1CONbits.TON = 1; /* Enable Timer 1 peripheral */
INTEnableInterrupts();
while (1){
}
}
I would expect the voltage at the output of my DAC to mimic what I put at the input of my ADC; instead, the DAC output is always constant, no matter what I feed the ADC.
What am I missing?
Also, when turning the SPIs on, should I still manually manage the I/O configuration of the SDI/SDO/SCK pins using TRIS, or is that automatically taken care of?
First of all, I agree that the documentation I first found for the PT8211 is rather poor. I found extended documentation here. Your DAC (PT8211) is actually an I2S device, not SPI. WS is not chip select; it is word select (left/right channel). In I2S, setting WS to 0 normally means the left channel; however, in the extended datasheet I found, WS 0 is actually the right channel (go figure).
The PIC you've chosen doesn't seem to have any I2S hardware, so you might have to bit bash it. There is a lot of info on I2S though; see the I2S bus specification.
There are some slight differences between SPI and I2S. Notice that the first bit after WS transitions from high to low is the LSB of the right channel, and when WS transitions from low to high, it is not the LSB of the left channel. Note that the output should be between 0.4 V and 2.4 V (I2S standard), not between 0 and 5 V. (The max is 2.5 V, which is what you've been seeing.)
[I2S timing diagram]
Basically, I'd try the proper protocol first, with a bit-bashing algorithm continuously flip-flopping between the left and right channels.
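If it helps, here's a rough sketch of what that bit-bashed output could look like. The pin macros and the WS polarity are assumptions (check the PT8211 datasheet and your wiring before trusting them); it clocks each 16-bit word out MSB first, latching data on the rising clock edge:
#include <xc.h> /* PIC32 SFR definitions */

/* Hypothetical pin mapping -- adjust the LATx bits to your own wiring. */
#define WS_HIGH()   (LATDSET = 1u << 1)
#define WS_LOW()    (LATDCLR = 1u << 1)
#define BCK_HIGH()  (LATDSET = 1u << 2)
#define BCK_LOW()   (LATDCLR = 1u << 2)
#define SD_WRITE(b) ((b) ? (LATDSET = 1u << 3) : (LATDCLR = 1u << 3))

/* Clock out one stereo frame, flip-flopping WS between the two channels. */
void i2s_send_frame(unsigned short left, unsigned short right)
{
    int ch, bit;
    for (ch = 0; ch < 2; ch++) {
        unsigned short word = ch ? left : right;
        if (ch) WS_HIGH(); else WS_LOW(); /* word select = channel */
        for (bit = 15; bit >= 0; bit--) {
            BCK_LOW();
            SD_WRITE((word >> bit) & 1); /* MSB first */
            BCK_HIGH(); /* DAC samples SD on the rising edge */
        }
    }
}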
First of all, thanks a lot for your comment. It helps a lot to know that I'm not looking at an SPI transmission, and that explains why it's not working.
A few reflections about it:
I googled bit bashing (banging?) and it seems to be CPU intensive, which I would definitely try to avoid.
I have seen a (successful) project (in MikroC) where someone transmits data from that exact same PIC to the same DAC, using SPI, with apparently no problems whatsoever. So I guess it SHOULD work, somehow?
Maybe he's transforming the data so that it works? I'm not sure what happens with the F15 bit toggle; I was thinking it was done to manage the LSB shift problem. Here is the piece of (working) MikroC code that I'm talking about:
valueDAC = valueDAC + 32768;
valueDAC.F15 = ~valueDAC.F15;
Chip_Select_DAC = 0;
SPI3_Write(valueDAC);
Chip_Select_DAC = 1;
From my understanding, the two biggest differences between SPI and I2S are that SPI sends "bursts" of data whereas I2S sends data continuously. Another difference is that the data sent just after the word-select change is the LSB of the previous word.
So I was thinking: my SPI is triggered by a timer, which always fires at the same rate, so even if the data is not sent continuously, it will just make the sound wave a bit more 'aliased'; and if it's triggered regularly enough (say at 44 kHz), it should not be SO different from sending I2S data at the same frequency, right?
If that is so, and I understand correctly, the "only" problem left is to manage the LSB-next-word-MSB placement problem; but I thought that the LSB is virtually negligible over 16-bit values, so if I could just bit-shift my value to the right and then fix the LSB to 0 or 1, the error would be small and the format would be right.
Does it sound like I have a valid 'McGyver-I2S-from-my-SPI', or am I forgetting something important?
I have tried to implement it, so far without success, but I need to check my SPI configuration since I'm not sure it's configured correctly.
Here is the code so far:
SPI config
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000
#define SPI4_BAUD 20
DAC output function
//output audio to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
valueDAC = valueDAC >> 1; // shift the MSB of valueDAC one bit to the right (because the MSB of what is transmitted will be seen by the DAC as the LSB of the previous value, after a word-select change)
//Left channel
Chip_Select_DAC_Set(); // Select left channel
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bit word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer (don't know why we need to do this here, but we do)
//SPI3_Write(valueDAC); MikroC option
// Right channel
Chip_Select_DAC_Clr();
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bit word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF;
INTEnableInterrupts();
}
The data I send here is signed, 16-bit range; I think you said that's all right with this DAC, right?
Or maybe I could use framed SPI? The clock seems to be continuous in this mode, but I would still have the LSB/MSB shifting problem to solve.
I'm a bit lost here, so any help would be cool.

How to change only part of a register to a number (Examples are doing it wrong?)

I want to write, for example, the number 32 to bits 16-24 of a register. This register is 100 bits long, and the rest (or some) of the register contains "reserved bits" that shouldn't be written to (according to the datasheet), or let's say it contains other values I don't want to change (previous settings).
If it were only a few bits, I could set each one of them with R &= ~(1 << x) or R |= 1 << x for each bit. But for a whole number it'd be a huge pain to turn 32 into binary and do it bit by bit. I see some of the examples basically do something like R = 0x20 << 16, but I'm confused: wouldn't that ruin every other bit and set the reserved bits to 0, messing with the MCU operation?
I want to write, for example, the number 32 to bits 16-24 of a register. This register is 100 bits long, and the rest (or some) of the register contains "reserved bits" that shouldn't be written to (according to the datasheet), or let's say it contains other values I don't want to change (previous settings).
You want to perform a read-modify-write. In this case, you are interested in setting bits 16-24 to a specific value. Assuming those bits are currently zero, you can do that like this:
my_register |= (32 << 16);
This is a Bitwise-OR operation and that is important to note because it keeps whatever the value of the bits were.
Assuming those values are non-zero, you will want to clear those bits first, then write the new value. You can do that like this:
my_register &= ~(0xFF << 16); // Clear bits 16-23
my_register |= (0x20 << 16); // Set bits 16-23 to 32
The above uses Bitwise AND, Bitwise OR, and Bitwise inversion. Again, these operations maintain the values of other bits.
I see some of the examples basically do something like R = 0x20 << 16, but I'm confused: wouldn't that ruin every other bit and set the reserved bits to 0, messing with the MCU operation?
That's not necessarily true. Those bits are likely write protected, or the default value for those bits might be 0 so writing 0 to them has no effect. It just depends on the MCU itself.
Here is a function for understanding the principle:
unsigned SetSomeBits(unsigned Var, unsigned StartBitNumber, unsigned NumberOfBits, unsigned Value2Set)
{
unsigned Mask = (1<<NumberOfBits)-1; //With NumberOfBits=3 Mask becomes 0b000111
Mask <<= StartBitNumber;
//Mask contains now 0 at do-not-touch bit positions
//Mask contains now 1 at to-be-changed bit positions
Var &= ~Mask; //Zero out the to-be-changed bits
return Var | (Value2Set<<StartBitNumber); //Set the requested bits
}
...and here as a macro:
#define SET_SOME_BITS(Var, StartBitNumber, NumberOfBits, Value2Set) (((Var) & ~(((1<<(NumberOfBits))-1)<<(StartBitNumber))) | ((Value2Set)<<(StartBitNumber)))
Both versions fail if Value2Set doesn't fit into NumberOfBits.
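For example, writing 32 into the 8-bit field starting at bit 16 (the starting value is made up):
unsigned reg = 0xA5A5A5A5u; // previous register contents
reg = SetSomeBits(reg, 16, 8, 32); // bits 16-23 become 0x20
// reg is now 0xA520A5A5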

Measuring Program Execution Time with Cycle Counters

I'm confused by this particular line:
result = (double) hi * (1 << 30) * 4 + lo;
of the following code:
void access_counter(unsigned *hi, unsigned *lo)
// Set *hi and *lo to the high and low order bits of the cycle
// counter.
{
asm("rdtscp; movl %%edx,%0; movl %%eax,%1" // Read cycle counter
: "=r" (*hi), "=r" (*lo) // and move results to
: /* No input */ // the two outputs
: "%edx", "%eax");
}
double get_counter()
// Return the number of cycles since the last call to start_counter.
{
unsigned ncyc_hi, ncyc_lo;
unsigned hi, lo, borrow;
double result;
/* Get cycle counter */
access_counter(&ncyc_hi, &ncyc_lo);
lo = ncyc_lo - cyc_lo;
borrow = lo > ncyc_lo;
hi = ncyc_hi - cyc_hi - borrow;
result = (double) hi * (1 << 30) * 4 + lo;
if (result < 0) {
fprintf(stderr, "Error: counter returns neg value: %.0f\n", result);
}
return result;
}
The thing I cannot understand is why hi is being multiplied by 2^30 and then by 4, and then lo added to it. Could someone please explain what is happening in this line of code? I do know what hi and lo contain.
The short answer:
That line turns a 64-bit integer that is stored as two 32-bit values into a floating-point number.
Why doesn't the code just use a 64-bit integer? Well, gcc has supported 64-bit numbers for a long time, but presumably this code predates that. In that case, the only way to support numbers that big is to put them into a floating-point number.
The long answer:
First, you need to understand how rdtscp works. When this assembler instruction is invoked, it does two things:
1) Sets ecx to the IA32_TSC_AUX MSR. In my experience, this generally just means ecx gets set to zero.
2) Sets edx:eax to the current value of the processor's time-stamp counter. This means that the lower 32 bits of the counter go into eax, and the upper 32 bits go into edx.
With that in mind, let's look at the code. When called from get_counter, access_counter is going to put edx in 'ncyc_hi' and eax in 'ncyc_lo.' Then get_counter is going to do:
lo = ncyc_lo - cyc_lo;
borrow = lo > ncyc_lo;
hi = ncyc_hi - cyc_hi - borrow;
What does this do?
Since the time is stored in two different 32-bit numbers, if we want to find out how much time has elapsed, we need to do a bit of work to find the difference between the old time and the new. The low words are subtracted with unsigned arithmetic, which wraps around modulo 2^32; when it wraps, the result ends up larger than ncyc_lo, which is exactly what the borrow test detects so that one can be taken off the high word. When it is done, the result is stored (again, using two 32-bit numbers) in hi / lo.
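Here is a tiny self-contained illustration of that borrow test (the counter values are made up):
#include <stdio.h>

int main(void)
{
    unsigned ncyc_lo = 0x00000002u; /* new low word */
    unsigned cyc_lo  = 0xFFFFFFFFu; /* old low word */
    unsigned lo = ncyc_lo - cyc_lo; /* wraps to 0x00000003 */
    unsigned borrow = lo > ncyc_lo; /* 1: subtract one from the high word */
    printf("lo=0x%08X borrow=%u\n", lo, borrow);
    return 0;
}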
Which finally brings us to your question.
result = (double) hi * (1 << 30) * 4 + lo;
If we could use 64-bit integers, converting two 32-bit values to a single 64-bit value would look like this:
unsigned long long result = hi; // put hi into the 64-bit number
result <<= 32; // shift those 32 bits to the upper half of the number
result |= lo; // OR in the lower 32 bits
If you aren't used to bit shifting, maybe looking at it like this will help. If lo = 1 and hi = 2, then expressed as hex numbers:
result = hi;   // 0x0000000000000002
result <<= 32; // 0x0000000200000000
result |= lo;  // 0x0000000200000001
But if we assume the compiler doesn't support 64bit integers, that won't work. While floating point numbers can hold values that big, they don't support shifting. So we need to figure out a way to shift 'hi' left by 32bits, without using left shift.
Ok then: shifting left by 1 is really the same as multiplying by 2. Shifting left by 2 is the same as multiplying by 4, and so on. Shifting left by 32 is the same as multiplying by 4,294,967,296.
By an amazing coincidence, 4,294,967,296 == (1 << 30) * 4.
So why write it in that complicated fashion? Well, 4,294,967,296 is a pretty big number; in fact, it's too big to fit in a 32-bit integer. Which means that if we put it in our source code, a compiler that doesn't support 64-bit integers may have trouble figuring out how to process it. Written like this, the compiler can generate whatever floating-point instructions it might need to work on that really big number.
Why the current code is wrong:
It looks like variations of this code have been wandering around the internet for a long time. Originally (I assume) access_counter was written using rdtsc instead of rdtscp. I'm not going to try to describe the difference between the two (google them), other than to point out that rdtsc does not set ecx, while rdtscp does. Whoever changed rdtsc to rdtscp apparently didn't know that, and failed to adjust the inline assembler to reflect it. While your code might work fine despite this, it might do something weird instead. To fix it, you could do:
asm("rdtscp; movl %%edx,%0; movl %%eax,%1" // Read cycle counter
: "=r" (*hi), "=r" (*lo) // and move results to
: /* No input */ // the two outputs
: "%edx", "%eax", "%ecx");
While this will work, it isn't optimal. Registers are a valuable and scarce resource on i386. This tiny fragment uses 5 of them. With a slight modification:
asm("rdtscp" // Read cycle counter
: "=d" (*hi), "=a" (*lo)
: /* No input */
: "%ecx");
Now we have 2 fewer assembly statements, and we only use 3 registers.
But even that isn't the best we can do. In the (presumably long) time since this code was written, gcc has added both support for 64-bit integers and a builtin to read the TSC, so you don't need to use asm at all:
unsigned int a;
unsigned long long result;
result = __builtin_ia32_rdtscp(&a);
'a' is the (useless?) value that was being returned in ecx. The function call requires it, but we can just ignore the returned value.
So, instead of doing something like this (which I assume your existing code does):
unsigned cyc_hi, cyc_lo;
access_counter(&cyc_hi, &cyc_lo);
// do something
double elapsed_time = get_counter(); // Find the difference between cyc_hi, cyc_lo and the current time
We can do:
unsigned int a;
unsigned long long before, after;
before = __builtin_ia32_rdtscp(&a);
// do something
after = __builtin_ia32_rdtscp(&a);
unsigned long long elapsed_time = after - before;
This is shorter, doesn't use hard-to-understand assembler, is easier to read and maintain, and produces the best possible code.
But it does require a relatively recent version of gcc.

Difference between bit and sbit?

What is the difference between the bit and sbit keywords in Keil C51 for the 8051 Microcontroller?
When should sbit be used and when bit?
Some examples would be very helpful.
This should help you:
BIT
C51 provides you with a bit data type which may be used for variable
declarations, argument lists, and function return values. A bit
variable is declared just as other C data types are declared. For
example:
static bit done_flag = 0; /* bit variable */
bit testfunc ( /* bit function */
bit flag1, /* bit arguments */
bit flag2)
{
.
.
.
return (0); /* bit return value */
}
All bit variables are stored in a bit segment located in the internal
memory area of the 8051. Because this area is only 16 bytes long, a
maximum of 128 bit variables may be declared within any one scope.
Memory types may be included in the declaration of a bit variable.
However, because bit variables are stored in the internal data area of the 8051, only the data and idata memory types may be included in the declaration. Any other memory types are invalid.
The following restrictions apply to bit variables and bit
declarations:
Functions which use disabled interrupts (#pragma disable) and functions that are declared using an explicit register bank (using n)
cannot return a bit value. The C51 compiler generates an error message
for functions of this type that attempt to return a bit type.
A bit cannot be declared as a pointer. For example:
bit *ptr
An array of type bit is invalid. For example:
bit ware [5]
SBIT
With typical 8051 applications, it is often necessary to access
individual bits within an SFR. The C51 compiler makes this possible
with the sbit data type. The sbit data type allows you to access
bit-addressable SFRs. For example:
sbit EA = 0xAF;
This declaration defines EA to be the SFR bit at address 0xAF. On the
8051, this is the enable all bit in the interrupt enable register.
NOTE:
Not all SFRs are bit-addressable. Only those SFRs whose address is evenly divisible by 8 are bit-addressable; these SFRs' lower nibble will be either 0 or 8. For example, SFRs at 0xA8 and 0xD0 are bit-addressable, whereas SFRs at 0xC7 and 0xEB are not. SFR bit addresses are easy to calculate: add the bit position to the SFR byte address to get the SFR bit address. So, to access bit 6 in the SFR at 0xC8, the SFR bit address would be 0xCE (0xC8 + 6).
Any symbolic name can be used in an sbit declaration. The expression
to the right of the equal sign (=) specifies an absolute bit address
for the symbolic name. There are three variants for specifying the
address.
Variant 1:
sfr_name ^ int_constant
This variant uses a previously-declared sfr (sfr_name) as the base
address for the sbit. The address of the existing SFR must be evenly
divisible by 8. The expression following the caret symbol (^)
specifies the position of the bit to access with this declaration. The
bit position must be a number in the range 0 to 7. For example:
sfr PSW = 0xD0;
sfr IE = 0xA8;
sbit OV = PSW ^ 2;
sbit CY = PSW ^ 7;
sbit EA = IE ^ 7;
Variant 2:
int_constant ^ int_constant
This variant uses an integer constant as the base address for the
sbit. The base address value must be evenly divisible by 8. The
expression following the caret symbol (^) specifies the position of
the bit to access with this declaration. The bit position must be a
number in the range 0 to 7. For example:
sbit OV = 0xD0 ^ 2;
sbit CY = 0xD0 ^ 7;
sbit EA = 0xA8 ^ 7;
Variant 3:
int_constant
This variant uses an absolute bit address for the sbit. For example:
sbit OV = 0xD2;
sbit CY = 0xD7;
sbit EA = 0xAF;
NOTES:
Special function bits represent an independent declaration class that may not be interchanged with other bit declarations or bit fields.
The sbit data type declaration may also be used to access individual bits of variables declared with the bdata memory type specifier.
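For example, a short sketch of that bdata/sbit combination in Keil C51 syntax (the names are illustrative):
unsigned char bdata status; /* byte placed in bit-addressable RAM */
sbit status_ready = status ^ 0; /* alias for bit 0 of status */
sbit status_error = status ^ 7; /* alias for bit 7 of status */

void clear_error_when_ready(void)
{
    if (status_ready)
        status_error = 0; /* single-bit access, no masking needed */
}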
Source:
BIT and SBIT
Check this forum:
The main difference between bit and sbit is that you can declare an sbit variable in a unit in such a way that it points to a specific bit in an SFR register. In the main program you then need to specify which register this sbit points to.
dim Abit as sbit sfr external ' Abit is precisely defined in some external file, for example
in the main program unit
...
implements
....
end.
The mikroBasic PRO for PIC compiler provides a bit data type that may be used for variable declarations. It cannot be used for argument lists or function return values; there are no pointers to bit variables, and an array of type bit is not valid.
dim bf as bit ' bit variable
sbit is not a new variable and does not take extra memory space; it is an alias for a bit that already exists. bit, on the other hand, declares a new variable, which takes additional space in memory.
Also check the references (added by nos in the comments):
Bit
SBit
sbit is a special declaration in 8051 compilers used for accessing individual bits of SFRs or of variables declared with bdata, while bit is used to define a single-bit variable.

How I can fix this code to allow my AVR to talk over serial port?

I've been pulling my hair out lately trying to get an ATmega162 on my STK200 to talk to my computer over RS232. I checked and made sure that the STK200 contains a MAX202CPE chip.
I've configured the chip to use its internal 8MHz clock and divided it by 8.
I've tried to copy the code out of the data sheet (and made changes where the compiler complained), but to no avail.
My code is below, could someone please help me fix the problems that I'm having?
I've confirmed that my serial port works on other devices and is not faulty.
Thanks!
#include <avr/io.h>
#include <avr/iom162.h>
#define BAUDRATE 4800
void USART_Init(unsigned int baud)
{
UBRR0H = (unsigned char)(baud >> 8);
UBRR0L = (unsigned char)baud;
UCSR0B = (1 << RXEN0) | (1 << TXEN0);
UCSR0C = (1 << URSEL0) | (1 << USBS0) | (3 << UCSZ00);
}
void USART_Transmit(unsigned char data)
{
while(!(UCSR0A & (1 << UDRE0)));
UDR0 = data;
}
unsigned char USART_Receive()
{
while(!(UCSR0A & (1 << RXC0)));
return UDR0;
}
int main()
{
USART_Init(BAUDRATE);
unsigned char data;
// all are 1, all as output
DDRB = 0xFF;
while(1)
{
data = USART_Receive();
PORTB = data;
USART_Transmit(data);
}
}
I have commented on Greg's answer, but would like to add one more thing. For this sort of problem the gold standard method of debugging it is to first understand asynchronous serial communications, then to get an oscilloscope and see what's happening on the line. If characters are being exchanged and it's just a baudrate problem this will be particularly helpful as you can calculate the baudrate you are seeing and then adjust the divisor accordingly.
Here is a super quick primer, no doubt you can find something much more comprehensive on Wikipedia or elsewhere.
Let's assume 8 bits, no parity, 1 stop bit (the most common setup). Then if the character being transmitted is say 0x3f (= ascii '?'), then the line looks like this;
...--+   +---+---+---+---+---+---+       +---+--...
     | S | 1   1   1   1   1   1 | 0   0 | E
     +---+                       +---+---+
The high (1) level is +5V at the chip and -12V after conversion to RS232 levels.
The low (0) level is 0V at the chip and +12V after conversion to RS232 levels.
S is the start bit.
Then we have 8 data bits, least significant first, so here 00111111 = 0x3f = '?'.
E is the stop (e for end) bit.
Time is advancing from left to right, just like an oscilloscope display. If the baudrate is 4800, then each bit spans 1/4800 seconds ≈ 0.21 milliseconds.
The receiver works by sampling the line and looking for a falling edge (a quiescent line is simply logical '1' all the time). The receiver knows the baudrate, and the number of start bits (1), so it measures one half bit time from the falling edge to find the middle of the start bit, then samples the line 8 bit times in succession after that to collect the data bits. The receiver then waits one more bit time (until half way through the stop bit) and starts looking for another start bit (i.e. falling edge). Meanwhile the character read is made available to the rest of the system. The transmitter guarantees that the next falling edge won't begin until the stop bit is complete. The transmitter can be programmed to always wait longer (with additional stop bits) but that is a legacy issue, extra stop bits were only required with very slow hardware and/or software setups.
I don't have reference material handy, but the baud rate register UBRR usually contains a divisor value, rather than the desired baud rate itself. A quick google search indicates that the correct divisor value for 4800 baud may be 239. So try:
divisor = 239;
UBRR0H = (unsigned char)(divisor >> 8);
UBRR0L = (unsigned char)divisor;
If this doesn't work, check with the reference docs for your particular chip for the correct divisor calculation formula.
For debugging UART communication, there are two useful things to do:
1) Do a loop-back at the connector and make sure you can read back what you write. If you send a character and get it back exactly, you know that the hardware is wired correctly, and that at least the basic set of UART register configuration is correct.
2) Repeatedly send the character 0x55 ("U") - the binary bit pattern 01010101 will allow you to quickly see the bit width on the oscilloscope, which will let you verify that the speed setting is correct.
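Using the question's own routines, the 0x55 test could look something like this (a sketch; use whatever divisor matches your clock, e.g. 12 for 4800 baud at 1 MHz):
int main()
{
    USART_Init(12); /* divisor, not the baud rate itself */
    while (1)
        USART_Transmit(0x55); /* 01010101: bit width is easy to measure */
}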
After reading the data sheet a little more thoroughly, I realized I was setting the baud rate incorrectly. The ATmega162 data sheet has a chart of clock frequencies plotted against baud rates and the corresponding error.
For a 4800 baud rate and a 1 MHz clock frequency, the error was 0.2%, which was acceptable for me. The trick was passing 12 to the USART_Init() function, instead of 4800.
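A common way to avoid this mistake is to let the preprocessor derive the divisor from the clock; here is a sketch using the standard asynchronous-mode formula from the AVR datasheet:
#include <avr/io.h>

#define F_CPU 1000000UL /* 8 MHz internal RC divided by 8 */
#define BAUDRATE 4800UL

/* UBRR = F_CPU / (16 * baud) - 1  ->  1000000 / (16 * 4800) - 1 = 12 */
#define UBRR_VALUE ((F_CPU / (16UL * BAUDRATE)) - 1UL)

void USART_Init_from_clock(void)
{
    UBRR0H = (unsigned char)(UBRR_VALUE >> 8);
    UBRR0L = (unsigned char)UBRR_VALUE;
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);
}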
Hope this helps someone else out!