How does one get high-speed UART data on a low-speed MSP430?

My project has an MSP430 connected via UART to a Bluegiga Bluetooth module. The MCU must be able to receive variable-length messages from the BG module. In the current architecture, each received byte generates a UART interrupt to allow message processing, and power constraints impose a limit on the clock speed of the MSP430. This makes it difficult for the MSP430 to keep up with any UART speed faster than 9600 bps. The result is a slow communication interface. Speeding up the data rate results in overrun errors, lost bytes, and broken communication.
Any ideas on how the communication speed can be increased without sacrificing data integrity in this situation?

I was able to accomplish a 12x speed improvement by employing two of the three available DMA channels on the MSP430 to populate a ring buffer that the CPU can then process. It was a bit tricky because MSP430 DMA interrupts are only generated when the transfer size register reaches zero, so I couldn't populate the ring buffer with a single DMA channel: the message size is variable, so that channel would never interrupt at a message boundary.
The trick is to use one DMA channel as a single-byte buffer, triggered on every byte received by the UART, which in turn triggers a second DMA channel that populates the ring buffer.
Below is example C code that illustrates the method. Note that it uses calls from the MSP430 driver library.
#include "dma.h"
#define BLUETOOTH_RXQUEUE_SIZE <size_of_ring_buffer>
static int headIndex = 0;
static int tailIndex = 0;
static char uartRecvByte;
static char bluetoothRXQueue[BLUETOOTH_RXQUEUE_SIZE];
/*!********************************************************************************
* \brief Initialize DMA hardware
*
* \param none
*
* \return none
*
******************************************************************************/
void init(void)
{
    // This is the primary buffer.
    // It will be triggered by UART Rx and generate an interrupt.
    // Its purpose is to service every byte received by the UART while
    // also waking up the CPU to let it know something happened.
    // It uses a single address of RAM to store the data, so it needs to be
    // serviced before the next byte is received from the UART.
    // This was done so that any byte received triggers processing of the data.
    DMA_initParam dmaSettings;
    dmaSettings.channelSelect = DMA_CHANNEL_2;
    dmaSettings.transferModeSelect = DMA_TRANSFER_REPEATED_SINGLE;
    dmaSettings.transferSize = 1;
    dmaSettings.transferUnitSelect = DMA_SIZE_SRCBYTE_DSTBYTE;
    dmaSettings.triggerSourceSelect = DMA_TRIGGERSOURCE_20; // USCA1RXIFG, or any UART receive trigger
    dmaSettings.triggerTypeSelect = DMA_TRIGGER_RISINGEDGE;
    DMA_init(&dmaSettings);
    DMA_setSrcAddress(DMA_CHANNEL_2, (UINT32)&UCA1RXBUF, DMA_DIRECTION_UNCHANGED);
    DMA_setDstAddress(DMA_CHANNEL_2, (UINT32)&uartRecvByte, DMA_DIRECTION_UNCHANGED);

    // This is the secondary buffer.
    // It will be triggered when DMA_CHANNEL_2 copies a byte and will store bytes into a ring buffer.
    // Its purpose is to pull data from DMA_CHANNEL_2 as quickly as possible
    // and add it to the ring buffer.
    dmaSettings.channelSelect = DMA_CHANNEL_0;
    dmaSettings.transferModeSelect = DMA_TRANSFER_REPEATED_SINGLE;
    dmaSettings.transferSize = BLUETOOTH_RXQUEUE_SIZE;
    dmaSettings.transferUnitSelect = DMA_SIZE_SRCBYTE_DSTBYTE;
    dmaSettings.triggerSourceSelect = DMA_TRIGGERSOURCE_30; // DMA2IFG
    dmaSettings.triggerTypeSelect = DMA_TRIGGER_RISINGEDGE;
    DMA_init(&dmaSettings);
    DMA_setSrcAddress(DMA_CHANNEL_0, (UINT32)&uartRecvByte, DMA_DIRECTION_UNCHANGED);
    DMA_setDstAddress(DMA_CHANNEL_0, (UINT32)&bluetoothRXQueue, DMA_DIRECTION_INCREMENT);

    DMA_enableTransfers(DMA_CHANNEL_2);
    DMA_enableTransfers(DMA_CHANNEL_0);
    DMA_enableInterrupt(DMA_CHANNEL_2);
}
/*!********************************************************************************
* \brief DMA Interrupt for receipt of data from the Bluegiga module
*
* \param none
*
* \return none
*
* \par Further Detail
* \n Dependencies: N/A
* \n Processing: Clear the interrupt and update the circular buffer head
* \n Error Handling: N/A
* \n Tests: N/A
* \n Special Considerations: N/A
*
******************************************************************************/
void DMA_Interrupt(void)
{
    DMA_clearInterrupt(DMA_CHANNEL_2);
    headIndex = BLUETOOTH_RXQUEUE_SIZE - DMA_getTransferSize(DMA_CHANNEL_0);
    if (headIndex == tailIndex)
    {
        // This indicates ring buffer overflow.
    }
    else
    {
        // Perform processing on the current state of the ring buffer here.
        // If only partial data has been received, just leave. Either set a flag
        // or generate an event to process the message outside of the interrupt.
        // Once the message is processed, move the tailIndex.
    }
}
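For illustration, here is a minimal sketch (not part of the original answer) of how the main loop might drain the ring buffer between interrupts, using the headIndex/tailIndex bookkeeping above; processMessageByte() is a hypothetical placeholder for the application's message parser.
void processBluetoothRXQueue(void)
{
    // Consume everything between the tail and the head captured in the DMA ISR.
    while (tailIndex != headIndex)
    {
        char c = bluetoothRXQueue[tailIndex];
        tailIndex = (tailIndex + 1) % BLUETOOTH_RXQUEUE_SIZE;
        processMessageByte(c);      // hypothetical application-level parser
    }
}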

Related

Can't get the analogue watchdog to trigger an interrupt on the DFSDM peripheral of an STM32L475

I have an AMC1306 current-shunt modulator feeding 1-bit PDM data at 10 MHz into an STM32L475. Filter0 takes the bit stream from Channel0 and applies a sinc3 filter with Fosr = 125 and Iosr = 4. This provides 24-bit data at 20 kHz and is working fine. The DMA transfers the data into a 1-word circular buffer in main memory to keep the data fresh.
I want to be able to call an interrupt function if the 24-bit value leaves a certain window. This would happen in an over-voltage situation and needs to disengage the MOSFET driver. This functionality seems to be offered by the analogue watchdog within the peripheral.
I am using STM32CubeIDE and the graphical interface within the IDE to configure the peripherals. Filter0 global interrupts are enabled. I have added this code:
/* USER CODE BEGIN 2 */
HAL_DFSDM_FilterRegularStart_DMA(&hdfsdm1_filter0, Vbus_DMA, 1);
// Set up the watchdog
DFSDM_Filter_AwdParamTypeDef awdParamFilter0;
awdParamFilter0.DataSource = DFSDM_FILTER_AWD_FILTER_DATA;
awdParamFilter0.Channel = DFSDM_CHANNEL_0;
awdParamFilter0.HighBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.HighThreshold = 250;
awdParamFilter0.LowBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.LowThreshold = -250;
HAL_DFSDM_FilterAwdStart_IT(&hdfsdm1_filter0, &awdParamFilter0);
/* USER CODE END 2 */
I have also used the HAL callback function
/* USER CODE BEGIN 4 */
void HAL_DFSDM_FilterAwdCallback(DFSDM_Filter_HandleTypeDef *hdfsdm_filter, uint32_t Channel, uint32_t Threshold)
{
HAL_GPIO_WritePin(GPIOA, LED_Pin, GPIO_PIN_SET);
}
/* USER CODE END 4 */
But the callback function never runs! I have experimented with the thresholds (I even made them zero).
In the debugger I can see that AWDIE = 0x1 (so the AWD interrupt is enabled) and AWDF = 0x1 (so the threshold has been crossed and the peripheral should be requesting an interrupt). The code doesn't even hit a breakpoint in the stm32l4xx_it.c filter0 interrupt handler, so it would seem no DFSDM1_FLT0 interrupts are happening at all.
I'd be enormously appreciative of any help, example code, or resources to read. Thanks in advance.
Two further notes: I know the DMA conversion-complete callbacks work, and I have played around with various thresholds and confirmed that AWDF gets set when the threshold is crossed.

Can't get my DAC (PT8211) to work correctly using a PIC32MX MCU and SPI

I'm just trying to learn to use an external ADC and DAC (PT8211) with my PIC32MX534F06H.
So far, my code just samples a signal with my ADC every time a timer interrupt is triggered, then sends the same signal out to the DAC.
The interrupt and ADC parts work fine and have been tested independently, but the voltage that my DAC outputs doesn't make much sense to me and stays at 2.5 V (it's powered at 0-5 V).
I've tried feeding the DAC various values ranging from 0 to 65534 (it's a 16-bit DAC, so I guess that should be the expected input range, right?); the voltage stays at 2.5 V.
I've tried changing the SPI configuration and using different SPI modules (3 and 4) and DACs (I have one soldered to my PCB on SPI3, and one on a breadboard connected to SPI4, in case the one soldered to my board was defective).
I made sure that the chip selection line works as expected.
I couldn't look at the data and clock lines being transmitted since I don't have a scope yet.
I'm a bit out of ideas now.
Chip selection and SPI configuration settings
signed short adc_value;
signed short DAC_output_value;
int Empty_SPI3_buffer;
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000 // SPI on, 16-bit master,CKE=1,CKP=0
#define SPI4_BAUD 100 // clock divider
DAC output function
//output to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
Chip_Select_DAC_Clr();
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write byte to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer
Chip_Select_DAC_Set();
INTEnableInterrupts();
}
ISR sampling the data, triggered by Timer 1. This works fine.
ADC_Input() stores the data in the global variable adc_value (12 bits, signed).
//ISR to sample data
void __ISR( _TIMER_1_VECTOR, IPL7SRS) Test_data_sampling_in( void)
{
IFS0bits.T1IF = 0;
ADC_Input();
//rescale the signed 12 bit audio values to unsigned 16 bits wide values
DAC_output_value = adc_value + 2048; //first unsign the signed 12 bit values (between 0 - 4096, center 2048)
DAC_output_value = DAC_output_value *16; // the scale between 12 and 16 bits is actually 16=65536/4096
DAC_Output(DAC_output_value);
}
main function with SPI, IO, Timer configuration
void main() {
SPI4CON = SPI4_CONF;
SPI4BRG = SPI4_BAUD;
TRISE = 0b00100000;
TRISD = 0b000000110100;
TRISG = 0b0010000000;
LATD = 0x0;
SYSTEMConfigPerformance(80000000L); //
INTCONSET = _INTCON_MVEC_MASK; /* Set the interrupt controller for multi-vector mode */
//
T1CONbits.TON = 0; /* turn off Timer 1 */
T1CONbits.TCKPS = 0b11; /* pre-scale = 1:1 (T1CLKIN = 80MHz (?) ) */
PR1 = 1816; /* T1 period ~ ? */
TMR1 = 0; /* clear Timer 1 counter */
//
IPC1bits.T1IP = 7; /* Set Timer 1 interrupt priority to 7 */
IFS0bits.T1IF = 0; /* Reset the Timer 1 interrupt flag */
IEC0bits.T1IE = 1; /* Enable interrupts from Timer 1 */
T1CONbits.TON = 1; /* Enable Timer 1 peripheral */
INTEnableInterrupts();
while (1){
}
}
I would expect the voltage at the output of my DAC to mimic what I put at the input of my ADC; instead, the DAC output is always constant, no matter what I feed into the ADC.
What am I missing?
Also, when turning the SPI modules on, should I still manually manage the I/O configuration of the SDI/SDO/SCK pins using TRIS, or is that automatically taken care of?
First of all, I agree that the documentation I first found for the PT8211 is rather poor. I found extended documentation here. Your DAC (PT8211) is actually an I2S device, not SPI. WS is not chip select; it is word select (left/right channel). In I2S, setting WS to 0 normally means the left channel; however, the extended datasheet I found says that WS = 0 is actually the right channel (go figure).
The PIC you've chosen doesn't seem to have any I2S hardware, so you might have to bit-bang it. There is a lot of info on I2S though; see the I2S bus specification.
There are some slight differences between SPI and I2S. Notice that the bit clocked right after WS transitions from high to low is still the LSB of the right channel (the previous word), and likewise the bit after WS transitions from low to high is still the LSB of the left channel, not the first bit of the new word. Note also that the output should be between 0.4 V and 2.4 V (I2S standard), not between 0 and 5 V (max is 2.5 V, which is what you've been seeing).
(I2S bus timing diagram)
Basically, I'd try the proper protocol first, with a bit-banging routine that continuously flip-flops between the left and right channels.
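To make the bit-banging suggestion concrete, here is a very rough sketch of one 16-bit stereo frame (my own illustration, not from the PT8211 datasheet). The WS_LOW/WS_HIGH, BCK_LOW/BCK_HIGH and DATA_OUT() pin macros are hypothetical and would need to be mapped to real LATxSET/LATxCLR bits, and the sketch deliberately ignores the one-bit delay after the WS transition described above, so a strict I2S receiver would need that adjusted.
// Very rough bit-banging sketch of one 16-bit stereo frame.
// WS_LOW/WS_HIGH, BCK_LOW/BCK_HIGH and DATA_OUT(b) are hypothetical pin macros;
// map them to the appropriate LATxSET/LATxCLR bits for your board.
// Note: this does NOT reproduce the one-bit delay after the WS transition
// that true I2S requires; it only shows the continuous left/right flip-flopping.
static void shift_word_msb_first(unsigned short word)
{
    int i;
    for (i = 15; i >= 0; i--)
    {
        BCK_LOW();                  // change the data bit while the clock is low
        DATA_OUT((word >> i) & 1);
        BCK_HIGH();                 // receiver samples on the rising edge
    }
}

void send_stereo_frame(unsigned short left, unsigned short right)
{
    WS_LOW();                       // word select low: one channel
    shift_word_msb_first(left);
    WS_HIGH();                      // word select high: the other channel
    shift_word_msb_first(right);
}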
First of all, thanks a lot for your comment. It helps a lot to know that I'm not looking at an SPI transmission, and that explains why it's not working.
A few thoughts about it:
I googled bit bashing (banging?) and it seems to be CPU intensive, which I would definitely try to avoid.
I have seen a (successful) project (in MikroC) where someone transmits data from that exact same PIC to the same DAC using SPI, with apparently no problems whatsoever. So I guess it SHOULD work, somehow?
Maybe he's transforming the data so that it works? I'm not sure what happens with the F15 bit toggle; I was thinking it was done to manage the LSB shift problem. Here is the piece of (working) MikroC code that I'm talking about:
valueDAC = valueDAC + 32768;
valueDAC.F15 =~ valueDAC.F15;
Chip_Select_DAC = 0;
SPI3_Write(valueDAC);
Chip_Select_DAC = 1;
From my understanding, the two biggest differences between SPI and I2S are that SPI sends "bursts" of data whereas I2S sends data continuously, and that the data sent right after the word-select change is the LSB of the previous word.
So I was thinking: my SPI is triggered by a timer, which is always the same, so even if the data is not sent continuously it will just make the sound wave a bit more 'aliased', and if it's triggered regularly enough (say at 44 kHz) it should not be SO different from sending I2S data at the same frequency, right?
If that is so, and I understand correctly, the "only" problem left is to manage the LSB/next-word-MSB placement problem, but I thought that the LSB is virtually negligible in 16-bit values, so if I could just bit-shift my value to the right and then fix the LSB to 0 or 1, the error would be small and the format would be right.
Does it sound like I have a valid 'MacGyver-I2S-from-my-SPI' approach, or am I forgetting something important?
I have tried to implement it, so far without success, but I need to check my SPI configuration since I'm not sure it's configured correctly.
Here is the code so far
SPI config
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000
#define SPI4_BAUD 20
DAC output function
//output audio to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
valueDAC = valueDAC >> 1; // put the MSB of valueDAC 1 bit to the right (because the MSB of what is transmitted will be seen by the DAC as the LSB of the last value, after a word select change)
//Left channel
Chip_Select_DAC_Set(); // Select left channel
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer (don't know why we need to do this here, but we do)
//SPI3_Write(valueDAC); MikroC option
// Right channel
Chip_Select_DAC_Clr();
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF;
INTEnableInterrupts();
}
The data I send here is signed, 16-bit range; I think you said that's all right with this DAC, right?
Or maybe I could use framed SPI? The clock seems to be continuous in this mode, but I would still have the LSB/MSB shifting problem to solve.
I'm a bit lost here, so any help would be cool.

STM32 SPI dropping data while using interrupt

I'm trying to send a variable-size array of bytes over SPI using interrupts. The system is composed of two Nucleo STM32L432 boards. The sender board works fine, but I'm having issues with the receiver board. Specifically, I noticed that very often some bytes are dropped. Beyond the default initialization provided by CubeMX, I also have the following settings in my init function:
// Trigger RXNE when the FIFO is 1/4 full
LL_SPI_SetRxFIFOThreshold(sw.spi_sw2pc,LL_SPI_RX_FIFO_TH_QUARTER);
// Enable RXNE interrupt
LL_SPI_EnableIT_RXNE(sw.spi_sw2pc);
// Enable SPI
if((SPI3->CR1 & SPI_CR1_SPE) != SPI_CR1_SPE)
{
// If disabled, I enable it
SET_BIT(sw.spi_sw2pc->CR1, SPI_CR1_SPE);
}
The SPI is set to work at 10 Mbit/s. Can it be that the communication speed is too fast?
Following are the IRQ handler and the callback.
IRQ handler
void SPI3_IRQHandler(void)
{
/* USER CODE BEGIN SPI3_IRQn 0 */
/* Check RXNE flag value in ISR register */
if(LL_SPI_IsActiveFlag_RXNE(SPI3))
{
/* Call function Slave Reception Callback */
SW_rx_callback();
}
/* USER CODE END SPI3_IRQn 0 */
/* USER CODE BEGIN SPI3_IRQn 1 */
/* USER CODE END SPI3_IRQn 1 */
}
Callback
void SW_rx_callback(void)
{
// RXNE flag is cleared by reading data in DR register
while(LL_SPI_IsActiveFlag_RXNE(SPI3))
recv_buffer[recv_buffer_index++] = LL_SPI_ReceiveData8(SPI3);
if(LL_SPI_GetRxFIFOLevel(SPI3) == LL_SPI_RX_FIFO_EMPTY)
{
// If there are no more data
new_data_arrived = true;
memset(recv_buffer,'\0',recv_buffer_index);
recv_buffer_index = 0;
}
}
Thank you in advance for your help.
SPI at 10 Mbit/s means you will get 1.25 million interrupts per second (in the case of 8-bit transfers), and that is a lot to handle with interrupts, especially in combination with the HAL.
The STM32L4xx is quite fast (80 MHz), but in this case that means every interrupt call can't take longer than 64 cycles. Entering the interrupt takes 12 cycles and exiting it takes 10 cycles (in the ideal case, with no wait states on the bus), so if your interrupt code takes 42 or more cycles you can be sure you will miss some bytes.
Here are my suggestions:
First, try enabling some compiler optimizations to speed up the code.
Change the interrupt routine and remove everything unnecessary from the interrupt handler (use a software FIFO and process the received data in the main loop, as sketched after this list).
But the best solution in your case may be to use a DMA transfer.
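Here is a minimal sketch of that software-FIFO idea, assuming the same LL driver calls used in the question; the buffer size and names are illustrative, and it assumes the main loop drains the FIFO fast enough that it never wraps onto unread data.
#define RX_FIFO_SIZE 256

static volatile uint8_t  rx_fifo[RX_FIFO_SIZE];
static volatile uint16_t rx_head = 0;   // written only by the ISR
static volatile uint16_t rx_tail = 0;   // advanced only by the main loop

void SPI3_IRQHandler(void)
{
    // Keep the handler as short as possible: just drain the hardware FIFO.
    while (LL_SPI_IsActiveFlag_RXNE(SPI3))
    {
        rx_fifo[rx_head] = LL_SPI_ReceiveData8(SPI3);
        rx_head = (uint16_t)((rx_head + 1) % RX_FIFO_SIZE);
    }
}

// Called from the main loop: process whatever has accumulated so far.
void process_received_bytes(void)
{
    while (rx_tail != rx_head)
    {
        uint8_t b = rx_fifo[rx_tail];
        rx_tail = (uint16_t)((rx_tail + 1) % RX_FIFO_SIZE);
        // handle_byte(b);            // application-specific processing
        (void)b;
    }
}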

Every SPI send results in receiving a 0 on MSP430

I run this simplified program for SPI communication on the TI MSP430FR5969 on its corresponding LaunchPad, the MSP-EXP430FR5969, and set breakpoints just before TX and just after RX in CCS (Code Composer Studio). The breakpoints are labelled with comments.
My LaunchPad is not connected to anything. (Once I figure this out I intend to connect it to some other device for real communication.)
I do not expect to receive any data because the LaunchPad is not connected to anything. But I receive exactly one zero for every send. The breakpoints are hit in alternating order, starting with the first TX breakpoint.
Why am I receiving data? Is it because I need to enable pull-up resistors on some of the pins? I believe the LaunchPad itself uses the USCI "A" module(s), so the "B" module that I am using should have nothing connected to it.
#include <msp430.h>
int main(void) {
WDTCTL = WDTPW | WDTHOLD;
P1SEL0 &= ~BIT3; // UCB0STE
P1SEL0 &= ~BIT6; // UCB0SIMO
P1SEL0 &= ~BIT7; // UCB0SOMI
P2SEL0 &= ~BIT2; // UCB0CLK
P1SEL1 |= BIT3; // UCB0STE
P1SEL1 |= BIT6; // UCB0SIMO
P1SEL1 |= BIT7; // UCB0SOMI
P2SEL1 |= BIT2; // UCB0CLK
PM5CTL0 &= ~LOCKLPM5;
CSCTL0_H = CSKEY_H;
CSCTL1 &= ~DCORSEL;
CSCTL1 = (CSCTL1 & ~0x000e) | DCOFSEL_0; // 1 MHz
CSCTL3 |= DIVA__1 | DIVS__1 | DIVM__1; // clock dividers = 1
CSCTL0_H = 0;
UCB0CTLW0 |= UCSWRST;
UCB0CTLW0 |= UCCKPH;
UCB0CTLW0 |= UCCKPL;
UCB0CTLW0 |= UCMSB;
UCB0CTLW0 |= UCMST;
UCB0CTLW0 |= UCMODE_2;
UCB0CTLW0 |= UCSYNC;
UCB0CTLW0 |= UCSSEL__SMCLK;
UCB0CTLW0 |= UCSTEM;
// UCB0STATW |= UCLISTEN; // OK: if enabled, I receive what I send
UCB0CTLW0 &= ~UCSWRST;
UCB0IE |= UCRXIE;
_enable_interrupts();
_delay_cycles(100000);
int send = 0;
while (1) {
while (!(UCB0IFG & UCTXIFG));
UCB0TXBUF = send; // BREAKPOINT 1
send = (send + 1) % 100;
_delay_cycles(100000);
}
return 0;
}
#pragma vector = USCI_B0_VECTOR
__interrupt void isr_usci_b0 (void) {
static volatile int received = 0;
switch (__even_in_range(UCB0IV, USCI_SPI_UCTXIFG)) {
case USCI_NONE:
break;
case USCI_SPI_UCRXIFG:
received = UCB0RXBUF;
UCB0IFG &= ~UCRXIFG; // BREAKPOINT 2
_no_operation();
break;
case USCI_SPI_UCTXIFG:
break;
}
}
The SPI peripheral does two things if MISO and MOSI are enabled (CLK enabled as well, of course). Assuming Master mode operation, it clocks out data from the TX shift register on the MOSI line and simultaneously clocks in data to the RX shift register from the MISO line.
In your circuit, the MISO input is floating since you have not enabled either an internal pull-up or pull-down resistor. Thus, observing 0x00 is not out of the ordinary. If you had enabled the pull-up resistor, you would have seen 0xFF in the receive buffer.
Another rule of thumb:
If you are using the peripheral functions, then also configure the GPIO pins of the MSP430 as output/input (i.e. MOSI and CLK as outputs, MISO as an input, for SPI master mode).
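As a sketch of that rule of thumb plus the resistor point above (my own illustration, using the register names from the question's code; check the port schematics in the device datasheet for whether the internal resistor is effective with the module function selected):
P1DIR |= BIT6;    // UCB0SIMO -> output
P2DIR |= BIT2;    // UCB0CLK  -> output
P1DIR &= ~BIT7;   // UCB0SOMI -> input
P1REN |= BIT7;    // enable the internal resistor on SOMI
P1OUT |= BIT7;    // select pull-up: an unconnected SOMI then reads back 0xFF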
Answer to the questions in the comments:
The MSP430 is configured in the listed code to be the SPI master. I see little point in using a dedicated RX interrupt service routine, unless you want the controller to do something else in the time between shifting data from the TX buffer to the shift register and shifting data from the RX shift register to the RX buffer, i.e. one "byte" transfer period. You could just as well have polled for the RX interrupt flag as you did for the TX interrupt flag. But you must wait for the RX interrupt.
Excerpt from the user guide:
The eUSCI initiates data transfer when data is moved to the transmit data buffer UCxTXBUF. The UCxTXBUF data is moved to the transmit (TX) shift register when the TX shift register is empty, initiating data transfer on UCxSIMO starting with either the MSB or LSB, depending on the UCMSB setting. Data on UCxSOMI is shifted into the receive shift register on the opposite clock edge. When the character is received, the receive data is moved from the receive (RX) shift register to the received data buffer UCxRXBUF and the receive interrupt flag UCRXIFG is set, indicating the RX/TX operation is complete.
A set transmit interrupt flag, UCTXIFG, indicates that data has moved from UCxTXBUF to the TX shift register and UCxTXBUF is ready for new data. It does not indicate RX/TX completion.
To receive data into the eUSCI in master mode, data must be written to UCxTXBUF, because receive and transmit operations operate concurrently.
The client will not send data by itself to the MSP430. The client device may need some time to execute the command the master just sent. Typically an "erase flash" command for SPI Flash chips.
In this case the master, i.e. MSP430, must poll the client device to see if it has data to send/completed the command. This is done typically either by polling a status register of the client device (or by using a dedicated IRQ interrupt). i.e. the client signals "completion of command"/"availability of data" via the status byte (or IRQ interrupt). On this event, the master could read out data from the client.
At first glance it may seem rather counter-intuitive that data (dummy bytes) needs to be written in order to read data - perhaps this is your source of confusion as well :)
Perhaps reading about an SPI client may help. For example this SPI memory.
The SPI peripheral transmits a bit and receives a bit for every clock cycle. Instead of wondering how some unconnected device has sent a byte, think that your SPI peripheral has clocked in a receive byte even though nothing is connected. The byte you receive is 0 because the MISO line happens to be low while nothing is connected.
The SPI peripheral does not know the meaning of the data and does not know how many bytes must be transmitted and received for any particular command. It's up to your application to know when to transmit and receive dummy bytes. For example, if the slave responds to a command in the next byte then your application has to transmit two bytes (the command byte followed by a dummy byte) and at the same time receive two bytes (a dummy byte, followed by the response). Some slaves may send a generic status byte instead of a dummy byte as the first byte of all responses. It's up to your application to use or ignore the status byte.
The MSP430's SPI documentation is not going to tell you when you need to send/receive dummy bytes. You'll have to read the SPI documentation for the slave device for that information. Each slave may have different requirements. Some slaves my receive a command byte and send a reply. Other slaves may receive a command and address byte before sending a reply. Some slaves may reply with multiple bytes. You'll have to program your application to transmit/receive the appropriate number of bytes.
There are no stop bits. The master is both transmitting and receiving with every clock. If you want to stop receiving then stop transmitting. If you want to continue receiving then transmit dummy bytes.
Yes, you should use the RX interrupt. The RX interrupt indicates that it is safe for your application to read the received byte from the RX register. The SPI peripheral is shifting receive bits into a shift register with each clock. But after a byte has been received the SPI peripheral still has to copy the contents of the shift register to the RX register and then set the RX interrupt. You shouldn't assume that the received byte can be read from the RX register until the RX interrupt is indicated.
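To tie this together, here is a minimal polled full-duplex exchange (my own sketch, using the eUSCI_B0 register names from the question, and assuming the RX interrupt is not also being used to read RXBUF); the 0x9F command byte and the single-byte reply are purely illustrative and depend entirely on the slave device.
unsigned char spi_transfer(unsigned char tx)
{
    while (!(UCB0IFG & UCTXIFG));   // TX buffer ready for new data
    UCB0TXBUF = tx;                 // writing TXBUF starts the 8-clock transfer
    while (!(UCB0IFG & UCRXIFG));   // wait until the received byte is available
    return UCB0RXBUF;               // reading RXBUF clears UCRXIFG
}

// Example: send a command byte, then clock the reply in with a dummy byte.
unsigned char read_one_byte_response(void)
{
    (void)spi_transfer(0x9F);       // command byte (slave-specific, illustrative)
    return spi_transfer(0x00);      // dummy byte shifts the reply in
}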
The behavior you describe is to be expected. With SPI, it is movement on the clock line that indicates the presence of data. The input line can be idle, and data will be received because, in order to send a byte, the clock must be toggled back and forth to latch the transmitted data, but at same time, data is latched into the receive buffer.
An SPI bus is intended to be a closed pathway. The TX line from your processor goes out and is daisy-chained to one or more slave devices and then looped back to your RX pin. Every transition on the clock line drives a data bit. This means that for every transition your hardware will shift one bit into your receive buffer. It's up to your code to know how many of those bits to discard before you start reading real data.
You are reading 0's because nothing is driving the RX pin. When you're connected to a real device, the first several bytes you send will also likely produce 0x00 on your RX pin. Usually you'll have to send some sort of command byte to the slave device, which will then start sending real data. The bytes received during that command should be discarded, because the slave will not have started driving its output pin until the command byte (word, string, whatever) is complete.

Implementing SPI slave ISR on PIC32?

I have two PIC32MX microcontrollers that are connected over a 1.53 MHz SPI bus with chip select. I am having trouble getting my slave-side interrupt service routine to transmit data correctly. As a test case, I'm having the master send out two bytes (0x01, 0x00) every 10 ms. The slave is supposed to receive the 0x01 command ID and respond with 0x02 when the master sends the second byte (the dummy 0x00).
Ideally each transfer should look like this.
Master Slave
0x01 0x00
0x00 0x02
I'm really not sure where to start with the slave interrupt, though. I'm using a FIFO buffer called airsysTx to hold data that needs to be shifted out the next time the master makes a request. The slave receives the 0x01 from the master just fine and writes 0x02 to the FIFO buffer when it does. I'm not sure how to code the interrupt so that it is sure to transmit correctly. The code I have below is a good start, but it's wrong. Suggestions?
/*******************************************************************************
* Interrupt service routine for SPI3 interrupts from Air MCU.
* The user's code at this vector should perform any application specific
* operations and MUST clear the SPI3 interrupt flags before exiting.
******************************************************************************/
void __ISR(_SPI_3_VECTOR, ipl7) _SPI3Interrupt()
{
BYTE MasterCMD;
SET_D1();//Set debug LED
// RX INTERRUPT
if(IFS0bits.SPI3RXIF) // receive data available in SPI3BUF Rx buffer
{
MasterCMD = SPI3BUF;
if(AirCMD == 0x01)
{
airsysTxFlush();
airsysTxWrite(0x02);
}
}
//Transmit data if needed.
if(SPI3STATbits.SPITBE)
{
if(!airsysTxIsEmpty())
{
SPI3BUF = airsysTxRead();
}
else
{
//Else write 0 to the tx buffer to clear the spi shift reg
SPI3BUF = 0x00;
}
}
IFS0bits.SPI3RXIF = 0;
IFS0bits.SPI3TXIF = 0;
IFS0bits.SPI3EIF = 0;
SPI3STATbits.SPIROV = 0;// clear the Overflow
CLEAR_D1();//CLEAR Debug LED
} // end ISR
What this code is actually transmitting is something like this:
Master Slave
0x01 0x02
0x00 0x01
Generally you can't write a slave SPI driver that interacts in the way you describe, because as a slave you can't control the timing precisely. What generates your ISR: reception of the first byte from the master, or assertion of chip select?
As the slave, you need to have set up the data bytes you want to transmit before the master starts the transaction. You usually don't have time to react to the first byte. There are a couple of ways to do this:
1) You could use a protocol where the master does a 1- or 2-byte write-only transaction that tells the slave what it wants to read. The master then waits a few milliseconds to allow the slave to prepare the response, and finally does a read-only transaction to get the slave's response.
2) If using DMA or a FIFO, the slave preloads the first padding byte(s) into the FIFO before the master starts the transaction. Then, when you get the ISR, you put the remaining response data into the FIFO (without a flush). You need enough pad bytes to accommodate the slave's ISR latency in forming the response. For example, you may define your protocol so that the master knows the first N bytes of the response are pad bytes followed by the response data. The padding requirement depends on your master clock speed and the slave's CPU speed/interrupt latency. A rough sketch of this option is shown below.
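The sketch below reuses the airsysTx* FIFO helpers assumed in the question and a single pad byte of latency (my own illustration, not the answer's code); it only works if the ISR can queue the response before the master clocks out the second byte.
// Call this after each completed transaction so the pad byte is already
// loaded before the master starts the next one.
void slave_prepare_for_next_transaction(void)
{
    airsysTxFlush();
    airsysTxWrite(0x00);            // pad byte, shifted out while the ISR runs
}

void __ISR(_SPI_3_VECTOR, ipl7) _SPI3Interrupt(void)
{
    if (IFS0bits.SPI3RXIF)          // a byte arrived from the master
    {
        BYTE cmd = SPI3BUF;
        if (cmd == 0x01)
            airsysTxWrite(0x02);    // queue the response behind the pad byte
        IFS0bits.SPI3RXIF = 0;
    }
    if (SPI3STATbits.SPITBE)        // keep the transmit buffer topped up
        SPI3BUF = airsysTxIsEmpty() ? 0x00 : airsysTxRead();
    IFS0bits.SPI3TXIF = 0;
    SPI3STATbits.SPIROV = 0;        // clear any overflow, as in the original ISR
}
With this scheme the master sees the pad 0x00 while it sends 0x01, and 0x02 while it sends the dummy 0x00, matching the desired exchange as long as the ISR latency is shorter than the gap between the two bytes.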