Can't get my DAC (PT8211) to work correctly using a PIC32MX MCU and SPI

I'm just trying to learn to use an external ADC and DAC (PT8211) with my PIC32MX534F06H.
So far, my code just samples a signal with the ADC every time a timer interrupt is triggered, then sends the same signal out to the DAC.
The interrupt and ADC parts work fine and have been tested independently, but the voltages my DAC outputs don't make much sense to me: they stay at 2.5 V (it's powered from 0 to 5 V).
I've tried feeding the DAC various values ranging from 0 to 65535 (it's a 16-bit DAC, so I guess that should be the expected input range, right?); the voltage stays at 2.5 V.
I've tried changing the SPI configuration and using different SPIs (3 and 4) and DACs (one soldered to my PCB on SPI3, and one on a breadboard wired to SPI4 in case the soldered one was defective).
I made sure that the chip select line works as expected.
I couldn't look at the transmitted data and clock since I don't have a scope yet.
I'm a bit out of ideas now.
Chip selection and SPI configuration settings
signed short adc_value;
signed short DAC_output_value;
int Empty_SPI3_buffer;
#define Chip_Select_DAC_Set() {LATDSET = _LATD_LATD0_MASK;} // drive the chip-select line (RD0) high
#define Chip_Select_DAC_Clr() {LATDCLR = _LATD_LATD0_MASK;} // drive the chip-select line (RD0) low
#define SPI4_CONF 0b1000010100100000 // SPI on, 16-bit master,CKE=1,CKP=0
#define SPI4_BAUD 100 // clock divider
DAC output function
//output to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
Chip_Select_DAC_Clr();
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bit word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer
Chip_Select_DAC_Set();
INTEnableInterrupts();
}
ISR sampling the data, triggered by Timer1. This works fine.
ADC_Input() stores the sample in the global variable adc_value (12 bits, signed)
//ISR to sample data
void __ISR( _TIMER_1_VECTOR, IPL7SRS) Test_data_sampling_in( void)
{
IFS0bits.T1IF = 0;
ADC_Input();
//rescale the signed 12 bit audio values to unsigned 16 bits wide values
DAC_output_value = adc_value + 2048; //first make the signed 12-bit values unsigned (0 - 4095, center 2048)
DAC_output_value = DAC_output_value *16; // the scale between 12 and 16 bits is actually 16=65536/4096
DAC_Output(DAC_output_value);
}
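As a quick sanity check of the scaling (assuming adc_value really spans -2048 to 2047): -2048 maps to (-2048 + 2048) * 16 = 0, 0 maps to 32768 (mid-scale), and 2047 maps to 65520, so the result covers nearly the full unsigned 16-bit range.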
main function with SPI, IO, Timer configuration
void main() {
SPI4CON = SPI4_CONF;
SPI4BRG = SPI4_BAUD;
TRISE = 0b00100000;
TRISD = 0b000000110100;
TRISG = 0b0010000000;
LATD = 0x0;
SYSTEMConfigPerformance(80000000L); /* configure wait states, cache, etc. for an 80 MHz system clock */
INTCONSET = _INTCON_MVEC_MASK; /* Set the interrupt controller for multi-vector mode */
//
T1CONbits.TON = 0; /* turn off Timer 1 */
T1CONbits.TCKPS = 0b11; /* pre-scale = 1:256 (0b11; 1:1 would be 0b00), T1 clock = PBCLK = 80MHz (?) */
PR1 = 1816; /* T1 period ~ ? */
TMR1 = 0; /* clear Timer 1 counter */
//
IPC1bits.T1IP = 7; /* Set Timer 1 interrupt priority to 7 */
IFS0bits.T1IF = 0; /* Reset the Timer 1 interrupt flag */
IEC0bits.T1IE = 1; /* Enable interrupts from Timer 1 */
T1CONbits.TON = 1; /* Enable Timer 1 peripheral */
INTEnableInterrupts();
while (1){
}
}
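For reference, the sample-rate arithmetic (assuming Timer 1 is clocked from an 80 MHz PBCLK): the interrupt fires at PBCLK / prescale / (PR1 + 1). With PR1 = 1816 that is 80 MHz / 1817, about 44.03 kHz, at a 1:1 prescale, but only about 172 Hz at the 1:256 prescale that TCKPS = 0b11 actually selects.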
I would expect the voltage at the output of my DAC to mimic what I put into my ADC; instead, the DAC output is always constant, no matter what I input to the ADC.
What am I missing?
Also, when turning the SPIs on, should I still manually manage the I/O configuration of the SDI/SDO/SCK pins using TRIS, or is it automatically taken care of?

First of all, I agree that the documentation I first found for the PT8211 is rather poor; I found extended documentation here. Your DAC (PT8211) is actually an I2S device, not SPI. WS is not chip select, it is word select (left/right channel). In I2S, WS = 0 normally means the left channel; however, in the extended datasheet I found, WS = 0 is actually the right channel (go figure).
The PIC you've chosen doesn't seem to have any I2S hardware, so you might have to bit-bang it. There is a lot of info on I2S though; see the I2S bus specification.
There are some slight differences between SPI and I2S. Notice that the first bit clocked after a WS transition still belongs to the previous word: it is that word's LSB, and the MSB of the new word only starts on the following clock. Note that the output should be between 0.4 V and 2.4 V (I2S standard), not between 0 and 5 V. (The maximum is 2.5 V, which is what you've been seeing.)
[I2S bus timing diagram]
Basically, I'd try the proper protocol first, with a bit-banging routine that continuously flip-flops between the left and right channels; a sketch of the idea follows.
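To make that concrete, here is a minimal, untested sketch of such a routine. The WS_HIGH/WS_LOW, BCK_HIGH/BCK_LOW and DOUT_WRITE macros are placeholders for whatever LAT bits you wire up, and the WS polarity and latching edge should be verified against the PT8211 timing diagram:
// Untested bit-bang sketch; WS_*, BCK_* and DOUT_WRITE() are hypothetical
// macros for the port pins you choose.
static void send_word(unsigned short w)
{
    int i;
    for (i = 15; i >= 0; i--) {    // MSB first
        BCK_LOW();
        DOUT_WRITE((w >> i) & 1);  // change data while BCK is low
        BCK_HIGH();                // assume the DAC latches DIN on the rising edge
    }
}

void frame_out(unsigned short left, unsigned short right)
{
    WS_HIGH();   // left channel (the extended datasheet says WS = 0 is right)
    send_word(left);
    WS_LOW();    // right channel
    send_word(right);
}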

First of all, thanks a lot for your answer. It helps a lot to know that I'm not looking at an SPI transmission, and that explains why it's not working.
A few reflections on it:
I googled bit bashing (banging?) and it seems to be CPU intensive, which I would definitely try to avoid.
I have seen a (successful) project (in MikroC) where someone transmits data from this exact same PIC to the same DAC using SPI, with apparently no problems whatsoever. So I guess it SHOULD work, somehow?
Maybe he's transforming the data so that it works? I'm not sure what happens with the F15 bit toggle; I was thinking it was done to manage the LSB shift problem. Here is the piece of (working) MikroC code I'm talking about:
valueDAC = valueDAC + 32768; // the offset wraps around, flipping bit 15 (the sign bit)
valueDAC.F15 = ~valueDAC.F15; // then bit 15 is toggled back
Chip_Select_DAC = 0;
SPI3_Write(valueDAC);
Chip_Select_DAC = 1;
From my understanding, the two biggest differences between SPI and I2S are that SPI sends "bursts" of data where I2S sends data continuously, and that the bit sent right after the word-select change is the LSB of the previous word.
So I was thinking: my SPI is triggered by a timer, which is always the same, so even if the data is not sent continuously, that would just make the sound wave a bit more 'aliased'; if it's triggered regularly enough (say at 44 kHz), it should not be SO different from sending I2S data at the same frequency, right?
If that is so, and I understand correctly, the 'only' problem left is to manage the LSB-belongs-to-the-next-word alignment; but the LSB is virtually negligible in a 16-bit value, so if I could just bit-shift my value to the right and then fix the LSB to 0 or 1, the error would be small and the format would be right.
Does it sound like I have a valid 'MacGyver-I2S-from-my-SPI', or am I forgetting something important?
I have tried to implement it, so far without success, but I need to check my SPI configuration since I'm not sure it's configured correctly.
Here is the code so far
SPI config
#define Chip_Select_DAC_Set() {LATDSET = _LATD_LATD0_MASK;} // WS high (left channel)
#define Chip_Select_DAC_Clr() {LATDCLR = _LATD_LATD0_MASK;} // WS low (right channel)
#define SPI4_CONF 0b1000010100100000
#define SPI4_BAUD 20
DAC output function
//output audio to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
valueDAC = valueDAC >> 1; // shift valueDAC one bit right (because the first bit transmitted will be seen by the DAC as the LSB of the previous word, after a word-select change)
//Left channel
Chip_Select_DAC_Set(); // select left channel (WS high)
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bit word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer (this clears SPIRBF)
//SPI3_Write(valueDAC); MikroC option
// Right channel
Chip_Select_DAC_Clr(); // select right channel (WS low)
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bit word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF;
INTEnableInterrupts();
}
The data I send here is signed, 16-bit range; I think you said that's all right with this DAC, right?
Or maybe I could use framed SPI? The clock seems to be continuous in that mode, but I would still have the LSB/MSB shifting problem to solve.
I'm a bit lost here, so any help would be cool.
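For the framed-SPI idea, here is the configuration I have in mind, an untested sketch based on the FRMEN/FRMSYNC/FRMPOL bits described for SPIxCON (note that the frame sync pulse is one clock wide by default, which is not quite the same as a WS line that toggles every word; FRMSYPW/FRMCNT control the pulse shape):
SPI4CONbits.ON = 0;       // configure with the module off
SPI4CONbits.MSTEN = 1;    // master mode
SPI4CONbits.MODE16 = 1;   // 16-bit words
SPI4CONbits.FRMEN = 1;    // framed mode: the SS4 pin carries the frame sync pulse
SPI4CONbits.FRMSYNC = 0;  // frame sync generated by the master
SPI4CONbits.FRMPOL = 0;   // frame pulse active low (to be checked against WS polarity)
SPI4BRG = 20;
SPI4CONbits.ON = 1;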

Related

Can't get the analogue watchdog to trigger an interrupt on the DFSDM peripheral of an STM32L475

I have an AMC1306 current-shunt modulator feeding 1-bit PDM data at 10 MHz into an STM32L475. Filter0 takes the bit stream from Channel0 and applies a sinc3 filter with Fosr=125 and Iosr=4. This provides 24-bit data at 20 kHz and is working fine. The DMA transfers the data into a 1-word circular buffer in main memory to keep the data fresh.
I want to be able to call an interrupt function if the 24-bit value leaves a certain window. This would happen in an over-voltage situation and needs to disengage the MOSFET driver. It would seem this functionality is offered by the analogue watchdog within the peripheral.
I am using STM32CubeIDE and the graphical interface within the IDE to configure the peripherals. Filter0 global interrupts are enabled. I have added this code:
/* USER CODE BEGIN 2 */
HAL_DFSDM_FilterRegularStart_DMA(&hdfsdm1_filter0, Vbus_DMA, 1);
// Set up the watchdog
DFSDM_Filter_AwdParamTypeDef awdParamFilter0;
awdParamFilter0.DataSource = DFSDM_FILTER_AWD_FILTER_DATA;
awdParamFilter0.Channel = DFSDM_CHANNEL_0;
awdParamFilter0.HighBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.HighThreshold = 250;
awdParamFilter0.LowBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.LowThreshold = -250;
HAL_DFSDM_FilterAwdStart_IT(&hdfsdm1_filter0, &awdParamFilter0);
/* USER CODE END 2 */
I have also used the HAL callback function
/* USER CODE BEGIN 4 */
void HAL_DFSDM_FilterAwdCallback(DFSDM_Filter_HandleTypeDef *hdfsdm_filter, uint32_t Channel, uint32_t Threshold)
{
HAL_GPIO_WritePin(GPIOA, LED_Pin, GPIO_PIN_SET);
}
/* USER CODE END 4 */
But the callback function never runs! I have experimented with the thresholds (I even made them zero).
In the debugger I can see that AWDIE = 0x1 (so the AWD interrupt is enabled) and AWDF = 0x1 (so the threshold has been crossed and the peripheral should be requesting an interrupt...). The code doesn't even trigger a breakpoint in the stm32l4xx_it.c Filter0 interrupt handler, so it would seem no DFSDM1_FLT0 interrupts are happening.
I'd be enormously appreciative of any help, any example code, any resources to read. Thanks in advance.
I know the DMA conversion-complete callbacks work.
I have played around with various thresholds and note that AWDF does get set when the threshold is crossed.
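For reference, here is the NVIC-side enable I'd expect the generated code to contain for this filter (assuming DFSDM1_FLT0_IRQn is the right IRQ name on the L475). If this is missing, AWDIE and AWDF can both be set while the handler never runs, which would match what I'm seeing:
HAL_NVIC_SetPriority(DFSDM1_FLT0_IRQn, 0, 0);
HAL_NVIC_EnableIRQ(DFSDM1_FLT0_IRQn);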

Built-in LED won't turn on, STM32F303RE Nucleo board

I am trying to turn on the LED on the Nucleo board (LD2 in the schematic) using only registers, in STM32CubeIDE.
The user manual states the following addresses for the clock, mode and data registers:
Led pin: PA5
Address of the Clock control register: RCC_AHBENR
[base address] + [offset] ===> [Result]
0x4002 1000 + 0x14 ===> 0x40021014
Address of the GPIOA mode register
0x4800 0000 + 0x00 ===> 0x48000000
Address of the GPIOA output data register
0x4800 0000 + 0x14 ===> 0x48000014
I am using the following code to set/clear the registers in the board:
#include <stdint.h>
int main(void)
{
uint32_t *pClkCtrlReg = (uint32_t*)0x40021014;
uint32_t *pPortAModeReg = (uint32_t*)0x48000000;
uint32_t *pPortAOutReg = (uint32_t*)0x48000014;
//1. enable the clock for GPIOA peripheral in the AHBENR
*pClkCtrlReg |= 0x00020000;
//2. configure the mode of the IO pin as output
//a. clear the 24th and 25th bit positions
*pPortAModeReg &= 0xFCFFFFFF;
//b set 24th bit position as 1
*pPortAModeReg |= 0x01000000;
//3. SET 12th bit of the output data register to make I/O pin-12 as HIGH
*pPortAOutReg |= 0x20;
while(1);
}
Using the register viewer in the IDE, I can see that PA5 is set as an output, but physically my LED is not turning on.
I do not know what I am doing wrong. I suspect that the pin PA5 is wrong, but I tried PA12 too and it does not work. Can someone please help me out here?
I ran through your code with the reference manual in hand (RM0316, the STM32F303 reference manual).
You activate the clock to GPIO port A correctly (also, the GPIOA registers would read all 0x00 if it hadn't been activated).
Then you set GPIO mode as, quoting you:
//2. configure the mode of the IO pin as output
//a. clear the 24th and 25th bit positions
*pPortAModeReg &= 0xFCFFFFFF;
//b set 24th bit position as 1
*pPortAModeReg |= 0x01000000;
You work with bits 24 and 25, which are the MODER12[1:0] field of GPIOA_MODER.
So you set the mode for pin A12, not A5. For GPIOA Pin 5 you need to manipulate bits 10 and 11.
//clear pin5 bits
*pPortAModeReg &= ~(0x03 << 10); //take 0b11, shift it to position 10, flip all bits, AND with original state of register
And follow it by setting those bits to "General purpose output mode" (01), which you currently also do on the wrong bits:
//b set 10th bit position as 1
*pPortAModeReg |= (0x01 << 10);
I checked all the registers GPIO has; there shouldn't be anything else you need to set. If things still don't work, please post the contents of all the GPIOA registers.
EDIT: also, try using the bit set/reset register. Note that it is write-only, so no "|=" for it, only "=". Writing 0 to a bit does nothing, so you can write just (0x01 << 5) straight to the GPIOA BSRR register.
GPIOA->BSRR = (1U<<5); //or however you want to address that register
A few comments on your code:
Get in the habit of always using the volatile keyword when accessing peripheral registers. Otherwise the compiler may end up optimizing out your accesses. I usually make a typedef for something like a reg32_t type in order not to forget.
As already pointed out, use code of the form (1 << b) to make it obvious which bit(s) you are referencing. Your compiler will optimize this, so the end result in the compiled code will be the same. You could even consider making this into a couple of macros or inline functions.
When you make several changes to a peripheral register, it's a good idea to copy it; make the changes to the copy; then store the copy back to the register. This way, you don't temporarily put the register in an unwanted state.
Here's your code with these changes:
#include <stdint.h>
typedef volatile uint32_t reg32_t;
int main(void)
{
reg32_t *pClkCtrlReg = (reg32_t*)0x40021014;
reg32_t *pPortAModeReg = (reg32_t*)0x48000000;
reg32_t *pPortABsrReg = (reg32_t*)0x48000018;
//1. enable the clock for GPIOA peripheral in the AHBENR
*pClkCtrlReg |= (1<<17);
//2. configure the mode of the IO pin as output
uint32_t modereg = *pPortAModeReg & ~(3<<(2*5));
modereg |= (1<<(2*5));
*pPortAModeReg = modereg;
//3. SET 5th bit of the bit set/reset register to make I/O pin 5 HIGH
*pPortABsrReg = (1<<5);
// If you want to turn the LED back off at a later time, do this:
// *pPortABsrReg = (1<<(5+16));
for (;;) ;
}
Good luck with your blinky!
//3. SET 12th bit of the output data register to make I/O pin-12 as HIGH
*pPortAOutReg |= 0x20;
For pin 12, the value should be *pPortAOutReg |= 0x1000;
Or you could write *pPortAOutReg |= (1 << 12); that way you don't need to work out the exact mask value.
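Following the macro suggestion in the other answer, here is a small sketch of what such helpers might look like (the names are illustrative, not from any library):
#define BIT(n)             (1UL << (n))
#define REG_SET(reg, bits) ((reg) |= (bits))
#define REG_CLR(reg, bits) ((reg) &= ~(bits))
// e.g. drive pin 12 high through the output data register:
REG_SET(*pPortAOutReg, BIT(12));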

Unexpected behaviour of an external interrupt function, dsPIC33F

The project I'm desperately trying to finish is rather simple in terms of microcontroller programming. The dsPIC33FJ128MC802 I use basically has to do 3 things:
receive data via UART and convert it into PWM signals for servomotors
regularly wake up its ADC to check a battery level
change its baudrate working values when fed an external interrupt.
It's the last point that causes issues. In my circuit I have a switch: one position corresponds to one baudrate value, the other to a second value. I didn't find any documentation on how to trigger an interrupt on any voltage level change, so I use rising-edge and falling-edge triggering in combination with checking the current state of the pin I chose for my interrupt.
Furthermore, I have two other interrupt functions in my code: one for UART reception, and a timer interrupt to wake up the ADC periodically. Interrupt priorities are the following: UART -> 1, Timer -> 2, External Interrupt -> 6 (any number above 2, really).
Here is my interrupt code:
void __attribute__((interrupt, auto_psv)) _INT0Interrupt( void )
{
IEC0bits.INT0IE = 0; // disable INT0 interrupt
if(IFS0bits.INT0IF){
if (PORTBbits.RB7 == 1){ //if pin at high logic level
INTCON2bits.INT0EP = 1; //falling edge trigger
LATAbits.LATA0 = 1;
U1BRG = 23;
}
else{ //if pin at low logic level
INTCON2bits.INT0EP = 0; //rising edge trigger
LATAbits.LATA0 = 0;
U1BRG = 1;
}
}
IFS0bits.INT0IF = 0; //clear INT0 flag
IEC0bits.INT0IE = 1; // enable INT0 interrupt
}
Now the weird behaviour: when the pin is pulled low, the baudrate is set to the right value and the UART communication works perfectly. When it is pulled high, the previous communication no longer works, proof that the baudrate has changed, but communicating at the new baudrate doesn't work either. The LED status change works fine in both cases.
It should be noted that all the different aspects of this project have been tested multiple times; each section works well, and only adding this external interrupt made the whole thing crash. The microcontroller works fine, my baudrate values are good, my circuit has been tested and has no issues... I just think I don't know how to properly use an external interrupt.

How does one get high-speed UART data on a low-speed MSP430

My project has an MSP430 connected via UART to a Bluegiga Bluetooth module. The MCU must be able to receive variable-length messages from the BG module. In the current architecture, each received byte generates a UART interrupt to allow message processing, and power constraints impose a limit on the clock speed of the MSP430. This makes it difficult for the MSP430 to keep up with any UART speed faster than 9600 bps. The result is a slow communication interface; speeding up the data rate results in overrun errors, lost bytes, and broken communication.
Any ideas on how communication speed can be increased without sacrificing data integrity in this situation?
I was able to accomplish a 12x speed improvement by employing two of the three available DMA channels on the MSP430 to populate a ring buffer that the CPU can then process. It was a bit tricky because MSP430 DMA interrupts are only generated when the size register reaches zero, so I couldn't just populate a ring buffer directly: the message size is variable.
Using one DMA channel as a single-byte buffer that is triggered on every byte received by the UART, and having it in turn trigger a second DMA channel that populates the ring buffer, does the trick.
Below is example C code that illustrates the method. Note that it incorporates references from the MSP430 libraries.
#include "dma.h"
#define BLUETOOTH_RXQUEUE_SIZE <size_of_ring_buffer>
static int headIndex = 0;
static int tailIndex = 0;
static char uartRecvByte;
static char bluetoothRXQueue[BLUETOOTH_RXQUEUE_SIZE];
/*!********************************************************************************
* \brief Initialize DMA hardware
*
* \param none
*
* \return none
*
******************************************************************************/
void init(void)
{
// This is the primary buffer.
// It will be triggered by UART Rx and generate an interrupt.
// Its purpose is to service every byte received by the UART while
// also waking up the CPU to let it know something happened.
// It uses a single address of RAM to store the data, so it needs to be
// serviced before the next byte is received from the UART.
// This was done so that any byte received triggers processing of the data.
DMA_initParam dmaSettings;
dmaSettings.channelSelect = DMA_CHANNEL_2;
dmaSettings.transferModeSelect = DMA_TRANSFER_REPEATED_SINGLE;
dmaSettings.transferSize = 1;
dmaSettings.transferUnitSelect = DMA_SIZE_SRCBYTE_DSTBYTE;
dmaSettings.triggerSourceSelect = DMA_TRIGGERSOURCE_20; // USCA1RXIFG, or any UART receive trigger
dmaSettings.triggerTypeSelect = DMA_TRIGGER_RISINGEDGE;
DMA_init(&dmaSettings);
DMA_setSrcAddress(DMA_CHANNEL_2, (UINT32)&UCA1RXBUF, DMA_DIRECTION_UNCHANGED);
DMA_setDstAddress(DMA_CHANNEL_2, (UINT32)&uartRecvByte, DMA_DIRECTION_UNCHANGED);
// This is the secondary buffer.
// It will be triggered when DMA_CHANNEL_2 copies a byte and will store bytes into a ring buffer.
// Its purpose is to pull data from DMA_CHANNEL_2 as quickly as possible
// and add it to the ring buffer.
dmaSettings.channelSelect = DMA_CHANNEL_0;
dmaSettings.transferModeSelect = DMA_TRANSFER_REPEATED_SINGLE;
dmaSettings.transferSize = BLUETOOTH_RXQUEUE_SIZE;
dmaSettings.transferUnitSelect = DMA_SIZE_SRCBYTE_DSTBYTE;
dmaSettings.triggerSourceSelect = DMA_TRIGGERSOURCE_30; // DMA2IFG
dmaSettings.triggerTypeSelect = DMA_TRIGGER_RISINGEDGE;
DMA_init(&dmaSettings);
DMA_setSrcAddress(DMA_CHANNEL_0, (UINT32)&uartRecvByte, DMA_DIRECTION_UNCHANGED);
DMA_setDstAddress(DMA_CHANNEL_0, (UINT32)&bluetoothRXQueue, DMA_DIRECTION_INCREMENT);
DMA_enableTransfers(DMA_CHANNEL_2);
DMA_enableTransfers(DMA_CHANNEL_0);
DMA_enableInterrupt(DMA_CHANNEL_2);
}
/*!********************************************************************************
* \brief DMA Interrupt for receipt of data from the Bluegiga module
*
* \param none
*
* \return none
*
* \par Further Detail
* \n Dependencies: N/A
* \n Processing: Clear the interrupt and update the circular buffer head
* \n Error Handling: N/A
* \n Tests: N/A
* \n Special Considerations: N/A
*
******************************************************************************/
void DMA_Interrupt(void)
{
DMA_clearInterrupt(DMA_CHANNEL_2);
headIndex = BLUETOOTH_RXQUEUE_SIZE - DMA_getTransferSize(DMA_CHANNEL_0);
if (headIndex == tailIndex)
{
// This indicates ring buffer overflow.
}
else
{
// Perform processing on the current state of the ring buffer here.
// If only partial data has been received, just leave. Either set a flag
// or generate an event to process the message outside of the interrupt.
// Once the message is processed, move the tailIndex.
}
}
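For completeness, here is a sketch of how the consumer side might drain the ring buffer outside the interrupt (my own illustration, not from the MSP430 libraries; note headIndex should be declared volatile since the ISR writes it):
// Returns -1 when the ring buffer is empty, otherwise the next byte.
static int bluetooth_read_byte(void)
{
    if (tailIndex == headIndex)
        return -1;                 // nothing to read yet
    unsigned char c = (unsigned char)bluetoothRXQueue[tailIndex];
    tailIndex = (tailIndex + 1) % BLUETOOTH_RXQUEUE_SIZE;
    return c;
}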

How can I fix this code to allow my AVR to talk over a serial port?

I've been pulling my hair out lately trying to get an ATmega162 on my STK200 to talk to my computer over RS232. I checked and made sure that the STK200 contains a MAX202CPE chip.
I've configured the chip to use its internal 8MHz clock and divided it by 8.
I've tried to copy the code out of the data sheet (and made changes where the compiler complained), but to no avail.
My code is below, could someone please help me fix the problems that I'm having?
I've confirmed that my serial port works on other devices and is not faulty.
Thanks!
#include <avr/io.h>
#include <avr/iom162.h>
#define BAUDRATE 4800
void USART_Init(unsigned int baud)
{
UBRR0H = (unsigned char)(baud >> 8);
UBRR0L = (unsigned char)baud;
UCSR0B = (1 << RXEN0) | (1 << TXEN0);
UCSR0C = (1 << URSEL0) | (1 << USBS0) | (3 << UCSZ00);
}
void USART_Transmit(unsigned char data)
{
while(!(UCSR0A & (1 << UDRE0)));
UDR0 = data;
}
unsigned char USART_Receive()
{
while(!(UCSR0A & (1 << RXC0)));
return UDR0;
}
int main()
{
USART_Init(BAUDRATE);
unsigned char data;
// all are 1, all as output
DDRB = 0xFF;
while(1)
{
data = USART_Receive();
PORTB = data;
USART_Transmit(data);
}
}
I have commented on Greg's answer, but would like to add one more thing. For this sort of problem, the gold-standard debugging method is to first understand asynchronous serial communications, then get an oscilloscope and see what's happening on the line. If characters are being exchanged and it's just a baudrate problem, this is particularly helpful: you can calculate the baudrate you are actually seeing and then adjust the divisor accordingly.
Here is a super quick primer, no doubt you can find something much more comprehensive on Wikipedia or elsewhere.
Let's assume 8 bits, no parity, 1 stop bit (the most common setup). Then if the character being transmitted is, say, 0x3f (= ASCII '?'), the line looks like this:
...--+   +------------------------+       +--...
     | S | 1   1   1   1   1   1  | 0   0 | E
     +---+                        +-------+
The high (1) level is +5V at the chip and -12V after conversion to RS232 levels.
The low (0) level is 0V at the chip and +12V after conversion to RS232 levels.
S is the start bit.
Then we have 8 data bits, least significant first, so here 00111111 = 0x3f = '?'.
E is the stop (e for end) bit.
Time is advancing from left to right, just like an oscilloscope display. If the baudrate is 4800, then each bit spans (1/4800) seconds = 0.21 milliseconds (approx).
The receiver works by sampling the line and looking for a falling edge (a quiescent line is simply logical '1' all the time). The receiver knows the baudrate, and the number of start bits (1), so it measures one half bit time from the falling edge to find the middle of the start bit, then samples the line 8 bit times in succession after that to collect the data bits. The receiver then waits one more bit time (until half way through the stop bit) and starts looking for another start bit (i.e. falling edge). Meanwhile the character read is made available to the rest of the system. The transmitter guarantees that the next falling edge won't begin until the stop bit is complete. The transmitter can be programmed to always wait longer (with additional stop bits) but that is a legacy issue, extra stop bits were only required with very slow hardware and/or software setups.
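To make the receiver's job concrete, here is a pseudocode-style sketch of that sampling scheme in C (read_pin() and delay_us() are hypothetical helpers; 8N1 framing at 4800 baud assumed):
#define BIT_US (1000000UL / 4800UL)   // one bit time, roughly 208 microseconds

unsigned char soft_uart_receive(void)
{
    unsigned char data = 0;
    int i;
    while (read_pin())                // wait for the start bit's falling edge
        ;
    delay_us(BIT_US + BIT_US / 2);    // skip the start bit, land mid data bit 0
    for (i = 0; i < 8; i++) {         // least significant bit first
        if (read_pin())
            data |= (unsigned char)(1 << i);
        delay_us(BIT_US);
    }
    // the line should now be high (stop bit); the next falling edge is a new start bit
    return data;
}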
I don't have reference material handy, but the baud rate register UBRR usually contains a divisor value, rather than the desired baud rate itself. A quick google search indicates that the correct divisor value for 4800 baud may be 239. So try:
divisor = 239;
UBRR0H = (unsigned char)(divisor >> 8);
UBRR0L = (unsigned char)divisor;
If this doesn't work, check with the reference docs for your particular chip for the correct divisor calculation formula.
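For what it's worth, the usual normal-mode (U2X = 0) formula in the AVR datasheets is UBRR = F_CPU / (16 * baud) - 1, which can be baked into a macro (a sketch; with a 1 MHz effective clock and 4800 baud it works out to 12):
#define F_CPU      1000000UL  /* 8 MHz internal RC divided by 8 */
#define BAUD       4800UL
#define UBRR_VALUE ((F_CPU / (16UL * BAUD)) - 1UL)  /* = 12 here */

USART_Init(UBRR_VALUE);  /* instead of USART_Init(4800) */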
For debugging UART communication, there are two useful things to do:
1) Do a loop-back at the connector and make sure you can read back what you write. If you send a character and get it back exactly, you know that the hardware is wired correctly, and that at least the basic set of UART register configuration is correct.
2) Repeatedly send the character 0x55 ("U") - the binary bit pattern 01010101 will allow you to quickly see the bit width on the oscilloscope, which will let you verify that the speed setting is correct.
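A throwaway test loop for the second tip can be as simple as:
for (;;)
    USART_Transmit(0x55);  /* 'U' = 01010101: evenly spaced edges on the scope */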
After reading the data sheet a little more thoroughly, I found that I was setting the baudrate incorrectly. The ATmega162 data sheet has a chart of clock frequencies plotted against baud rates, with the corresponding error.
For a 4800 baud rate and a 1 MHz clock frequency, the error was 0.2%, which was acceptable for me. The trick was passing 12 to the USART_Init() function, instead of 4800.
Hope this helps someone else out!