I'm trying to send a variable-size array of bytes over SPI using interrupts. The system is composed of two Nucleo STM32L432 boards. The sender board works fine, but I'm having an issue with the receiver board: I noticed that very often some bytes are dropped. Beyond the default initialization provided by CubeMX, I also have the following settings in my init function:
// Trigger RXNE when the FIFO is 1/4 full
LL_SPI_SetRxFIFOThreshold(sw.spi_sw2pc,LL_SPI_RX_FIFO_TH_QUARTER);
// Enable RXNE interrupt
LL_SPI_EnableIT_RXNE(sw.spi_sw2pc);
// Enable SPI
if((SPI3->CR1 & SPI_CR1_SPE) != SPI_CR1_SPE)
{
// If disabled, I enable it
SET_BIT(sw.spi_sw2pc->CR1, SPI_CR1_SPE);
}
The SPI is set to work at 10 Mbit/s. Can it be that the communication speed is too fast?
Following are the IRQ handler and the callback.
IRQ handler
void SPI3_IRQHandler(void)
{
/* USER CODE BEGIN SPI3_IRQn 0 */
/* Check RXNE flag value in ISR register */
if(LL_SPI_IsActiveFlag_RXNE(SPI3))
{
/* Call function Slave Reception Callback */
SW_rx_callback();
}
/* USER CODE END SPI3_IRQn 0 */
/* USER CODE BEGIN SPI3_IRQn 1 */
/* USER CODE END SPI3_IRQn 1 */
}
Callback
void SW_rx_callback(void)
{
// RXNE flag is cleared by reading data in DR register
while(LL_SPI_IsActiveFlag_RXNE(SPI3))
recv_buffer[recv_buffer_index++] = LL_SPI_ReceiveData8(SPI3);
if(LL_SPI_GetRxFIFOLevel(SPI3) == LL_SPI_RX_FIFO_EMPTY)
{
// If there are no more data
new_data_arrived = true;
memset(recv_buffer,'\0',recv_buffer_index);
recv_buffer_index = 0;
}
}
Thank you in advance for your help.
SPI at 10 Mbit/s means 1.25 million interrupts per second (in the case of 8-bit transfers), and that is a lot to process in interrupt context, especially in combination with the HAL.
The STM32L4xx is quite fast (80 MHz), but at that rate each interrupt cannot take longer than 64 cycles. Interrupt entry takes 12 cycles and exit takes 10 cycles (in the ideal case, with no wait states on the bus), so if the body of your handler takes more than about 42 cycles you can be sure you will miss some bytes.
Here are my suggestions:
First, try enabling compiler optimizations to speed up the code.
Change the interrupt routine and remove everything unnecessary from the handler: have it only push received bytes into a software FIFO and process the data in the main loop (see the sketch after this list).
But the best solution in your case is probably a DMA transfer.
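To illustrate the software-FIFO suggestion, here is a minimal sketch (not the asker's code; the buffer size, the names, and the assumption that SPI3 is the receiving instance are mine). The ISR only drains the hardware FIFO into a ring buffer, and all processing happens in the main loop:
#include <stdbool.h>
#include <stdint.h>
#include "stm32l4xx_ll_spi.h"
#define RX_RING_SIZE 256u                          // power of two for cheap wrapping
static volatile uint8_t  rx_ring[RX_RING_SIZE];
static volatile uint16_t rx_head = 0;              // written only by the ISR
static volatile uint16_t rx_tail = 0;              // written only by the main loop
void SPI3_IRQHandler(void)
{
    // Keep the handler as short as possible: just move bytes into the ring buffer.
    while (LL_SPI_IsActiveFlag_RXNE(SPI3))
    {
        rx_ring[rx_head] = LL_SPI_ReceiveData8(SPI3);
        rx_head = (rx_head + 1u) & (RX_RING_SIZE - 1u);
    }
}
// Called from the main loop; returns true with one byte if data is pending.
bool rx_ring_pop(uint8_t *byte)
{
    if (rx_tail == rx_head)
        return false;
    *byte = rx_ring[rx_tail];
    rx_tail = (rx_tail + 1u) & (RX_RING_SIZE - 1u);
    return true;
}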
I have an AMC1306 current shunt modulator feeding 1-bit PDM data at 10 MHz into a STM32L475. Filter0 takes the bit stream from Channel0 and applies a sinc3 filter with Fosr=125 and Iosr=4. This provides 24-bit data at 20 kHz and is working fine. The DMA transfers the data into a 1-word circular buffer in main memory to maintain fresh data.
I want to be able to call an interrupt function if the 24-bit value leaves a certain window. This would be caused in an over-voltage situation and needs to disengage the MOSFET driver. It would seem this functionality is offered by the analogue watchdog within the peripheral.
I am using STM32CubeIDE and the graphical interface within the IDE to configure the peripherals. Filter0 global interrupts are enabled. I have added this code:
/* USER CODE BEGIN 2 */
HAL_DFSDM_FilterRegularStart_DMA(&hdfsdm1_filter0, Vbus_DMA, 1);
// Set up the watchdog
DFSDM_Filter_AwdParamTypeDef awdParamFilter0;
awdParamFilter0.DataSource = DFSDM_FILTER_AWD_FILTER_DATA;
awdParamFilter0.Channel = DFSDM_CHANNEL_0;
awdParamFilter0.HighBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.HighThreshold = 250;
awdParamFilter0.LowBreakSignal = DFSDM_NO_BREAK_SIGNAL;
awdParamFilter0.LowThreshold = -250;
HAL_DFSDM_FilterAwdStart_IT(&hdfsdm1_filter0, &awdParamFilter0);
/* USER CODE END 2 */
I have also used the HAL callback function
/* USER CODE BEGIN 4 */
void HAL_DFSDM_FilterAwdCallback(DFSDM_Filter_HandleTypeDef *hdfsdm_filter, uint32_t Channel, uint32_t Threshold)
{
HAL_GPIO_WritePin(GPIOA, LED_Pin, GPIO_PIN_SET);
}
/* USER CODE END 4 */
But the callback function never runs! I have experimented with the thresholds (I even made them zero).
In the debugger I can see that AWDIE = 0x1 (so the AWD interrupt is enabled) and AWDF = 0x1 (so the threshold has been crossed and the peripheral should be requesting an interrupt...). The code doesn't even trigger a breakpoint in the Filter0 interrupt handler in stm32l4xx_it.c, so it would seem no DFSDM1_FLT0 interrupts are happening.
I'd be enormously appreciative of any help, any example code, any resources to read. Thanks in advance.
I know the DMA conversion-complete callbacks work.
I have played around with various thresholds and note that AWDF gets set when the threshold is crossed.
I am in the process of writing software for an STM32F4. The STM32 needs to pull in a string via a UART. This string is variable in length and comes in from a sensor every second. The string is stored in a fixed buffer, so the buffer content changes continuously.
The incoming string looks like this: "A12941;P2507;T2150;C21;E0;"
The settings of the UART:
Baud Rate: 19200
Word length: 8 bits
Parity: None
Stop bits: 1
Oversampling: 16 samples
Global interrupt: Enabled
No DMA settings
Part of the used code in the main.c function:
uint8_t UART3_rxBuffer[25];
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    HAL_UART_Receive_IT(&huart3, UART3_rxBuffer, 25); // restart interrupt reception mode
}

int main(void)
{
    HAL_UART_Receive_IT(&huart3, UART3_rxBuffer, 25);

    while (1)
    {
    }
}
Part of the code in stm32f4xx_it.c
void USART3_IRQHandler(void)
{
/* USER CODE BEGIN USART3_IRQn 0 */
/* USER CODE END USART3_IRQn 0 */
HAL_UART_IRQHandler(&huart3);
/* USER CODE BEGIN USART3_IRQn 1 */
/* USER CODE END USART3_IRQn 1 */
}
It does work to fill the buffer with the variable-length strings in this way, but because the buffer is constantly being replenished, it is difficult to extract the beginning and the end of a string. For example, the buffer might look like this:
[0]'E' [1]'0' [2]'\n' [3]'A' [4]'1' [5]'2' [6]'9' [7]'4' [8]'1' [9]';' [10]'P' etc....
But I'd like to have a buffer that starts on 'A'.
My question is, how can I process incoming strings on the uart correctly so that I only have the string "A12941;P2507;T2150;C21;E0;"?
Thanks in advance!!
I can see three possibilities:
Do all of your processing in the interrupt. When you get to the end of a variable-length message then do everything that you need to do with the information and then change the location variable to restart filling the buffer from the start.
Use (at least) two buffers in parallel. When you detect the end of the variable-length message in interrupt context then start filling a different buffer from position zero and signal to main context that previous buffer is ready for processing.
Use two buffers in series. Let the interrupt fill a ring buffer in a circular way that takes no notice of when a message ends. In main context scan from the end of the previous message to see if you have a whole message yet. If you do, then copy it out into another buffer in a way that makes it start at the start of the buffer. Record where it finished in the ring-buffer for next time, and then do your processing on the linear buffer.
Option 1 is only suitable if you can do all of your processing in less than the time it takes the transmitter to send the next byte or two. The other two options use a bit more memory and are a bit more complicated to implement. Option 3 could be implemented with circular-mode DMA as long as you poll for new messages frequently enough, which avoids the need for interrupts. Option 2 allows you to queue up multiple messages if your main context might not poll frequently enough; a sketch of option 2 follows below.
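Here is a minimal sketch of option 2 (two buffers in parallel), assuming the HAL, byte-by-byte reception into rx_byte, and '\n' as the end-of-message marker; the names (MSG_BUF_SIZE, msg_buf, ready_buf, ...) are illustrative and not from the question:
#include <stdint.h>
#include "stm32f4xx_hal.h"
#define MSG_BUF_SIZE 64
static uint8_t           msg_buf[2][MSG_BUF_SIZE];
static volatile uint8_t  active_buf = 0;           // buffer currently being filled
static volatile uint16_t write_pos  = 0;
static volatile int8_t   ready_buf  = -1;          // index of a completed message, -1 = none
static uint8_t           rx_byte;
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART3)
    {
        if (write_pos < MSG_BUF_SIZE - 1)
            msg_buf[active_buf][write_pos++] = rx_byte;
        if (rx_byte == '\n')                       // end of a variable-length message
        {
            msg_buf[active_buf][write_pos] = '\0';
            ready_buf   = (int8_t)active_buf;      // hand the full buffer to the main loop
            active_buf ^= 1u;                      // start filling the other buffer
            write_pos   = 0;
        }
        HAL_UART_Receive_IT(huart, &rx_byte, 1);   // re-arm one-byte reception
    }
}
Arm the first reception with HAL_UART_Receive_IT(&huart3, &rx_byte, 1) before the while loop, and in the main loop check ready_buf: if it is non-negative, parse msg_buf[ready_buf] and set ready_buf back to -1.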
I would like to share some sample code related to your issue. However, it is not exactly what you are looking for; you can edit this snippet as you wish. If I am not wrong, you can also adapt it to option 3.
uint8_t  rData;                                    // last received byte
uint8_t  rxBuffer[64];                             // size it for the longest expected frame
uint16_t pos = 0;

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART2) {
        HAL_UART_Receive_IT(&huart2, &rData, 1);   // re-arm one-byte reception
        rxBuffer[pos++] = rData;
        if (rData == '\n') {
            pos = 0;
        }
    }
}
Before starting, in the main function and before the while loop, you should enable reception of one byte in interrupt mode using "HAL_UART_Receive_IT(&huart2,&rData,1);". If your incoming data has a delimiter like '\n', you can save a whole frame even though each frame may have a different length.
If you want the data frame to start with a specific character, you can hold off saving data until you receive that character: change '\n' to your start character, and only begin storing the following bytes into the buffer once it has been seen, as in the sketch below.
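For example, a variation of the callback above that only captures frames starting at 'A' and ending at '\n' might look like this (a sketch with assumed helper variables in_frame and frame_ready, untested):
static uint8_t in_frame = 0;                       // 1 while a frame is being captured
static volatile uint8_t frame_ready = 0;           // set when a complete frame is in rxBuffer
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART2) {
        if (rData == 'A') {                        // start character: begin a new frame
            in_frame = 1;
            pos = 0;
        }
        if (in_frame && pos < sizeof(rxBuffer) - 1)
            rxBuffer[pos++] = rData;
        if (in_frame && rData == '\n') {           // end of frame: hand it to the main loop
            rxBuffer[pos] = '\0';
            frame_ready = 1;
            in_frame = 0;
        }
        HAL_UART_Receive_IT(&huart2, &rData, 1);   // re-arm one-byte reception
    }
}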
How could I create an interrupt for a blue pill from scratch?
I do not want to use any sort of special library. Also, I use the Keil IDE; by "building from scratch" I mean not using any extra library, rather than assembling the project without the help of an IDE.
I tried to find resources, but no success. Could anybody help me and at least provide some information/bibliography for me? I would be grateful.
Moreover, by "special library" I mean any library other than the stm32f1xx.h header. I would like to fire an interrupt when one of the pins' input value toggles. On AVR MCUs this was very simple, since only a few register values had to be changed. Unfortunately, I don't know how interrupts work on an ARM MCU, nor which values should be written to which registers.
Also, a better understanding of the ARM MCU's interrupt mechanism would make me more prepared for tackling debouncing issues.
I am not going to take you entirely literally when you mandate "no libraries", because no one who wants to get work done and knows what they are doing on Cortex-M would do that - and I will assume at least that you will use the CMSIS - a common API provided for all ARM Cortex-M devices, and which makes your code more, not less portable.
All the CMSIS code is provided as source, rather than static library, so there is nothing hidden and if you chose not to use it, you can see how it works and replicate that functionality (needlessly) if you wish.
In the CMSIS default implementations are provided as "weak-links" that can be overridden by user code simply by defining a function of the pre-defined name to override the default. The default implementation is generally an infinite loop - so that unhandled interrupts are "trapped" so you can intervene with your debugger or wait for a watchdog reset for example.
The Cortex-M core interrupt handlers and exception handlers have common names across all Cortex-M parts:
Reset_Handler
NMI_Handler
HardFault_Handler
MemManage_Handler
BusFault_Handler
UsageFault_Handler
SVC_Handler
DebugMon_Handler
PendSV_Handler
SysTick_Handler
Peripheral interrupt handlers have names defined by the vendor, but the naming convention is <interrupt_source>_IRQHandler. For example on STM32F1xx EXTI0_IRQHandler is the shared external interrupt assigned to bit zero of GPIO ports.
To implement a CMSIS interrupt handler, all you need to do is:
Implement the interrupt handler function using the CMSIS handler function name
Enable the interrupt in the NVIC (interrupt controller).
There are other things you might do, such as setting the interrupt priority grouping (the split between preempt priorities and subpriorities), but let's keep it simple for the time being.
Because it is common to all Cortex-M parts, and because it is useful in almost any non-trivial application, an illustration using the SysTick interrupt is a good starting point.
#include "stm32f1xx.h"
volatile uint32_t msTicks = 0 ;
void SysTick_Handler(void)
{
msTicks++ ;
}
int main (void)
{
if( SysTick_Config( SystemCoreClock / 1000 ) != 0 ) // 1ms tick
{
// Error Handling
}
...
}
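As a quick usage illustration (my addition, not part of the original example), the msTicks counter can drive a simple millisecond busy-wait delay:
void delay_ms(uint32_t ms)
{
    uint32_t start = msTicks;
    while ((msTicks - start) < ms)
    {
        // busy-wait; unsigned arithmetic handles the counter wrapping around
    }
}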
SysTick_Config() is another CMSIS function. In core_cm3.h it looks like this:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
if ((ticks - 1UL) > SysTick_LOAD_RELOAD_Msk)
{
return (1UL); /* Reload value impossible */
}
SysTick->LOAD = (uint32_t)(ticks - 1UL); /* set reload register */
NVIC_SetPriority (SysTick_IRQn, (1UL << __NVIC_PRIO_BITS) - 1UL); /* set Priority for Systick Interrupt */
SysTick->VAL = 0UL; /* Load the SysTick Counter Value */
SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
SysTick_CTRL_TICKINT_Msk |
SysTick_CTRL_ENABLE_Msk; /* Enable SysTick IRQ and SysTick Timer */
return (0UL); /* Function successful */
}
So let's say you have an external interrupt source on the falling edge of GPIOA pin 0; then you would use the STM32 EXTI0 interrupt. The minimal handler would look like:
void EXTI0_IRQHandler(void)
{
EXTI->PR = (1<<0); // clear pending interrupt (write 1 to clear; do not use |= here, it would clear other pending flags too)
// Handle interrupt...
}
Setting up the EXTI requires enabling the GPIO and the EXTI itself as well as the NVIC:
RCC->APB2ENR |= RCC_APB2ENR_IOPAEN ; // enable clock for GPIOA
RCC->APB2ENR |= RCC_APB2ENR_AFIOEN ; // enable clock for Alternate Function
AFIO->EXTICR[0] &= ~AFIO_EXTICR1_EXTI0 ; // route EXTI0 to port A (the PA0 field value is 0b0000)
EXTI->IMR = EXTI_IMR_MR0 ; // unmask interrupt
EXTI->EMR = EXTI_EMR_MR0 ; // unmask event
EXTI->FTSR = EXTI_FTSR_TR0 ; // set falling edge
NVIC_EnableIRQ(EXTI0_IRQn) ; // enable the EXTI0 interrupt in the NVIC (CMSIS call)
The peripheral registers and structures are defined in stm32f10x.h, and the "weak" default peripheral handlers to be overridden are in the startup file for your specific part (startup_stm32f10x_md.s for the medium-density STM32F103C8 on the Blue Pill). Any handlers you override must match these symbol names exactly.
All the peripheral interrupt sources and how to configure them are described in the ST reference manual RM0008.
All the Cortex-M core specific stuff - SysTick, NVIC, exception handlers etc. - is provided by ARM at https://developer.arm.com/ip-products/processors/cortex-m/cortex-m3
CMSIS for CM3 is documented at https://developer.arm.com/documentation/dui0552/a/
My project has an MSP430 connected via UART to a Bluegiga Bluetooth module. The MCU must be able to receive variable-length messages from the BG module. In the current architecture, each received byte generates a UART interrupt to allow message processing, and power constraints impose a limit on the clock speed of the MSP430. This makes it difficult for the MSP430 to keep up with any UART speed faster than 9600 bps. The result is a slow communication interface; speeding up the data rate results in overrun errors, lost bytes, and broken communication.
Any ideas on how communication speed can be increased without sacrificing data integrity in this situation?
I was able to accomplish a 12x speed improvement by employing 2 of the 3 available DMA channels on the MSP430 to populate a ring buffer that could then be processed by the CPU. It was a bit tricky because the MSP430 DMA interrupts are only generated when the size register reaches zero, so I couldn't just populate a ring buffer directly because the message size is variable.
Using one DMA channel as a single-byte buffer that is triggered on every byte received by the UART, and having that in turn trigger a second DMA channel that populates the ring buffer, does the trick.
Below is example C code that illustrates the method. Note that it incorporates references from the MSP430 libraries.
#include "dma.h"
#define BLUETOOTH_RXQUEUE_SIZE <size_of_ring_buffer>
static int headIndex = 0;
static int tailIndex = 0;
static char uartRecvByte;
static char bluetoothRXQueue[BLUETOOTH_RXQUEUE_SIZE];
/*!********************************************************************************
* \brief Initialize DMA hardware
*
* \param none
*
* \return none
*
******************************************************************************/
void init(void)
{
// This is the primary buffer.
// It will be triggered by UART Rx and generate an interrupt.
// Its purpose is to service every byte received by the UART while
// also waking up the CPU to let it know something happened.
// It uses a single address of RAM to store the data, so it needs to be
// serviced before the next byte is received from the UART.
// This was done so that any byte received triggers processing of the data.
DMA_initParam dmaSettings;
dmaSettings.channelSelect = DMA_CHANNEL_2;
dmaSettings.transferModeSelect = DMA_TRANSFER_REPEATED_SINGLE;
dmaSettings.transferSize = 1;
dmaSettings.transferUnitSelect = DMA_SIZE_SRCBYTE_DSTBYTE;
dmaSettings.triggerSourceSelect = DMA_TRIGGERSOURCE_20; // USCA1RXIFG, or any UART receive trigger
dmaSettings.triggerTypeSelect = DMA_TRIGGER_RISINGEDGE;
DMA_init(&dmaSettings);
DMA_setSrcAddress(DMA_CHANNEL_2, (UINT32)&UCA1RXBUF, DMA_DIRECTION_UNCHANGED);
DMA_setDstAddress(DMA_CHANNEL_2, (UINT32)&uartRecvByte, DMA_DIRECTION_UNCHANGED);
// This is the secondary buffer.
// It will be triggered when DMA_CHANNEL_2 copies a byte and will store bytes into a ring buffer.
// Its purpose is to pull data from DMA_CHANNEL_2 as quickly as possible
// and add it to the ring buffer.
dmaSettings.channelSelect = DMA_CHANNEL_0;
dmaSettings.transferModeSelect = DMA_TRANSFER_REPEATED_SINGLE;
dmaSettings.transferSize = BLUETOOTH_RXQUEUE_SIZE;
dmaSettings.transferUnitSelect = DMA_SIZE_SRCBYTE_DSTBYTE;
dmaSettings.triggerSourceSelect = DMA_TRIGGERSOURCE_30; // DMA2IFG
dmaSettings.triggerTypeSelect = DMA_TRIGGER_RISINGEDGE;
DMA_init(&dmaSettings);
DMA_setSrcAddress(DMA_CHANNEL_0, (UINT32)&uartRecvByte, DMA_DIRECTION_UNCHANGED);
DMA_setDstAddress(DMA_CHANNEL_0, (UINT32)&bluetoothRXQueue, DMA_DIRECTION_INCREMENT);
DMA_enableTransfers(DMA_CHANNEL_2);
DMA_enableTransfers(DMA_CHANNEL_0);
DMA_enableInterrupt(DMA_CHANNEL_2);
}
/*!********************************************************************************
* \brief DMA Interrupt for receipt of data from the Bluegiga module
*
* \param none
*
* \return none
*
* \par Further Detail
* \n Dependencies: N/A
* \n Processing: Clear the interrupt and update the circular buffer head
* \n Error Handling: N/A
* \n Tests: N/A
* \n Special Considerations: N/A
*
******************************************************************************/
void DMA_Interrupt(void)
{
DMA_clearInterrupt(DMA_CHANNEL_2);
headIndex = BLUETOOTH_RXQUEUE_SIZE - DMA_getTransferSize(DMA_CHANNEL_0);
if (headIndex == tailIndex)
{
// This indicates ring buffer overflow.
}
else
{
// Perform processing on the current state of the ring buffer here.
// If only partial data has been received, just leave. Either set a flag
// or generate an event to process the message outside of the interrupt.
// Once the message is processed, move the tailIndex.
}
}
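For completeness, the main-loop consumer hinted at in the comments could look something like this (an illustrative sketch; the actual parsing depends on the Bluegiga message format):
// Drain the ring buffer from tailIndex up to headIndex, one byte at a time.
void process_bluetooth_rx(void)
{
    while (tailIndex != headIndex)
    {
        char byte = bluetoothRXQueue[tailIndex];
        tailIndex = (tailIndex + 1) % BLUETOOTH_RXQUEUE_SIZE;
        (void)byte; // feed 'byte' into the message parser here; when a complete
                    // message has been handled, tailIndex marks where to resume
    }
}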
I'm just trying to learn to use an external ADC and DAC (PT8211) with my PIC32MX534f06h.
So far, my code just samples a signal with the ADC every time a timer interrupt is triggered, then sends the same signal out to the DAC.
The interrupt and ADC parts work fine and have been tested independently, but the voltages my DAC outputs don't make much sense to me and stay at 2.5 V (it is powered from 0 to 5 V).
I've tried to feed the DAC various values ranging from 0 to 65534 (it is a 16-bit DAC, so I guess that should be the expected input range, right?); the voltage stays at 2.5 V.
I've tried changing the SPI configuration, using different SPIs (3 and 4) and different DACs (I have one soldered to my PCB on SPI3, and one on a breadboard connected to SPI4, in case the one soldered on my board was defective).
I made sure that the chip selection line works as expected.
I couldn't check the data and clock signals being transmitted since I don't have a scope yet.
I'm a bit out of ideas now.
Chip selection and SPI configuration settings
signed short adc_value;
signed short DAC_output_value;
int Empty_SPI3_buffer;
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000 // SPI on, 16-bit master,CKE=1,CKP=0
#define SPI4_BAUD 100 // clock divider
DAC output function
//output to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
Chip_Select_DAC_Clr();
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write byte to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer
Chip_Select_DAC_Set();
INTEnableInterrupts();
}
ISR sampling the data, triggered by Timer1. This works fine.
ADC_Input() stores the data in the global variable adc_value (12 bits, signed).
//ISR to sample data
void __ISR( _TIMER_1_VECTOR, IPL7SRS) Test_data_sampling_in( void)
{
IFS0bits.T1IF = 0;
ADC_Input();
//rescale the signed 12 bit audio values to unsigned 16 bits wide values
DAC_output_value = adc_value + 2048; //first unsign the signed 12 bit values (between 0 - 4096, center 2048)
DAC_output_value = DAC_output_value *16; // the scale between 12 and 16 bits is actually 16=65536/4096
DAC_Output(DAC_output_value);
}
main function with SPI, IO, Timer configuration
void main() {
SPI4CON = SPI4_CONF;
SPI4BRG = SPI4_BAUD;
TRISE = 0b00100000;
TRISD = 0b000000110100;
TRISG = 0b0010000000;
LATD = 0x0;
SYSTEMConfigPerformance(80000000L); //
INTCONSET = _INTCON_MVEC_MASK; /* Set the interrupt controller for multi-vector mode */
//
T1CONbits.TON = 0; /* turn off Timer 1 */
T1CONbits.TCKPS = 0b11; /* pre-scale = 1:1 (T1CLKIN = 80MHz (?) ) */
PR1 = 1816; /* T1 period ~ ? */
TMR1 = 0; /* clear Timer 1 counter */
//
IPC1bits.T1IP = 7; /* Set Timer 1 interrupt priority to 7 */
IFS0bits.T1IF = 0; /* Reset the Timer 1 interrupt flag */
IEC0bits.T1IE = 1; /* Enable interrupts from Timer 1 */
T1CONbits.TON = 1; /* Enable Timer 1 peripheral */
INTEnableInterrupts();
while (1){
}
}
I would expect the voltage at the output of my DAC to mimic the one I put at the input of my ADC; instead, the DAC output is always constant, no matter what I feed into the ADC.
What am I missing?
Also, when turning the SPIs on, should I still manually manage the I/O configuration of the SDI/SDO/SCK pins using TRIS, or is it taken care of automatically?
First of all, I agree that the documentation I first found for the PT8211 is rather poor. I found extended documentation here. Your DAC (PT8211) is actually an I2S device, not SPI. WS is not chip select; it is word select (left/right channel). In I2S, setting WS to 0 normally means the left channel; however, the extended datasheet I found says that WS = 0 is actually the right channel (go figure).
The PIC you've chosen doesn't seem to have any I2S hardware, so you might have to bit-bang it. There is a lot of info on I2S though; see the I2S bus specification.
There are some slight differences between SPI and I2S. Notice that the first bit clocked after WS transitions from high to low is the LSB of the right channel, and the bit clocked after WS transitions from low to high is the LSB of the left channel. Note that the output should be between 0.4 V and 2.4 V (I2S standard), not between 0 and 5 V. (The maximum is 2.5 V, which is what you've been seeing.)
(I2S bus timing diagram)
Basically, I'd try the proper protocol first, with a bit-banging routine that continuously alternates between the left and right channels; a rough sketch follows.
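Something along these lines, perhaps (untested; the pin macros BCK_HIGH/BCK_LOW, WS_HIGH/WS_LOW and DIN_WRITE are placeholders for whatever GPIOs the DAC is wired to, and the one-bit MSB delay of true I2S is glossed over):
#include <stdint.h>
// Clock out one 16-bit word MSB first: DIN changes while BCK is low,
// and the DAC samples DIN on the rising BCK edge.
static void i2s_send_word(uint16_t sample)
{
    for (int bit = 15; bit >= 0; bit--)
    {
        BCK_LOW();
        DIN_WRITE((sample >> bit) & 1u);
        BCK_HIGH();
    }
}
// One stereo frame; per the extended datasheet, WS = 0 selects the right channel.
static void i2s_send_frame(uint16_t left, uint16_t right)
{
    WS_LOW();
    i2s_send_word(right);
    WS_HIGH();
    i2s_send_word(left);
}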
First of all, thanks a lot for your comment. It helps a lot to know that I'm not looking at an SPI transmission, and that explains why it's not working.
A few reflections about it:
I googled bit bashing (banging?) and it seems to be CPU intensive, which I would definitely try to avoid.
I have seen a (successful) project (in MikroC) where someone transmits data from that exact same PIC, to the same DAC, using SPI, with apparently no problems whatsoever. So I guess it SHOULD work, somehow?
Maybe he's transforming the data so that it works? I'm not sure what happens with the F15 bit toggle; I was thinking it was done to manage the LSB shift problem. Here is the piece of (working) MikroC code I'm talking about:
valueDAC = valueDAC + 32768;
valueDAC.F15 =~ valueDAC.F15;
Chip_Select_DAC = 0;
SPI3_Write(valueDAC);
Chip_Select_DAC = 1;
From my understanding, the two biggest differences between SPI and I2S are that SPI sends "bursts" of data whereas I2S sends data continuously, and that the bit sent right after the word-select change is the LSB of the previous word.
So I was thinking: my SPI is triggered by a timer, which is always the same, so even if the data is not sent continuously, it will just make the sound wave a bit more "aliased", and if it's triggered regularly enough (say at 44 kHz), it should not be SO different from sending I2S data at the same frequency, right?
If that is so, and I understand correctly, the "only" problem left is to manage the LSB-next-word-MSB placement problem, but I thought that the LSB is virtually negligible over 16-bit values, so if I could just bit-shift my value to the right and then fix the LSB value to 0 or 1, the error would be small, and the format would be right.
Does it sound like I have a valid "MacGyver-I2S-from-my-SPI" approach, or am I forgetting something important?
I have tried to implement it, so far without success, but I need to check my SPI configuration since I'm not sure that it's configured correctly.
Here is the code so far
SPI config
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000
#define SPI4_BAUD 20
DAC output function
//output audio to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
valueDAC = valueDAC >> 1; // put the MSB of ValueDAC 1 bit to the right (becase the MSB of what is transmitted will be seen by the DAC as the LSB of the last value, after a word select change)
//Left channel
Chip_Select_DAC_Set(); // Select left channel
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer (don't know why we need to do this here, but we do)
//SPI3_Write(valueDAC); MikroC option
// Right channel
Chip_Select_DAC_Clr();
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF;
INTEnableInterrupts();
}
The data I send here is signed, 16-bit range; I think you said that's all right with this DAC, right?
Or maybe I could use framed SPI? The clock seems to be continuous in this mode, but I would still have the LSB/MSB shifting problem to solve.
I'm a bit lost here, so any help would be cool.