My setup is:
LPC1225FBD48/321
External crystal: 16 MHz
PLL: MSEL=6, PSEL=2
UART0: CLKDIV=250 DL=1 DIVADDVAL=1 MULVAL=4
The PLL gives mainclk = 96 MHz; the PCLK for the UART is 96 MHz / 250 = 384 kHz; bit rate: 384 kHz / (16 × 1 × 1.25) = 19200.
And it works, but only when the LPC transmits. When the LPC receives a character, it reports two characters received and sometimes a framing error. There is a similar problem at other bit rates. At lower rates, like 2400, the LPC reports a single character received and sometimes a framing error, but the received character is not the one I sent. It looks as if Tx and Rx are using different clocks.
The UART works well when using bootloader with Flash Magic.
Has anyone encountered such a problem?
I finally found a solution. After setting CLKDIV = 50 and DL = 5, the UART became fully functional. While the documentation did not prohibit the previous set of parameters and both sets should give the same result, only the latter allows two-way communication.
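For reference, the working setup boils down to something like this (a sketch with CMSIS-style names; the exact register names, e.g. the UART clock divider in SYSCON, may differ in your headers):
LPC_SYSCON->UART0CLKDIV = 50;   /* UART PCLK = 96 MHz / 50 = 1.92 MHz */
LPC_UART0->LCR = 0x83;          /* 8N1, DLAB = 1 to access the divisor latches */
LPC_UART0->DLM = 0;
LPC_UART0->DLL = 5;             /* 1.92 MHz / (16 * 5 * (1 + 1/4)) = 19200 */
LPC_UART0->FDR = (4 << 4) | 1;  /* MULVAL = 4, DIVADDVAL = 1 */
LPC_UART0->LCR = 0x03;          /* DLAB = 0 again */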
I'm just trying to learn to use an external ADC and DAC (PT8211) with my PIC32MX534F06H.
So far, my code just samples a signal with the ADC every time a timer interrupt is triggered, then sends the same sample out to the DAC.
The interrupt and ADC parts work fine and have been tested independently, but the voltages that my DAC outputs don't make much sense to me and stay at 2.5 V (it's powered from 0 to 5 V).
I've tried feeding the DAC various values ranging from 0 to 65534 (it's a 16-bit DAC, so I guess that should be the expected input range, right?); the voltage stays at 2.5 V.
I've tried changing the SPI configuration and using different SPIs (3 and 4) and DACs (I have one soldered to my PCB on SPI3, and one on a breadboard wired to SPI4, in case the one soldered to my board was defective).
I made sure that the chip-select line works as expected.
I couldn't look at the data and clock lines being transmitted since I don't have a scope yet.
I'm a bit out of ideas now.
Chip selection and SPI configuration settings
signed short adc_value;
signed short DAC_output_value;
int Empty_SPI3_buffer;
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000 // SPI on, 16-bit master,CKE=1,CKP=0
#define SPI4_BAUD 100 // clock divider
DAC output function
//output to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
Chip_Select_DAC_Clr();
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write byte to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer
Chip_Select_DAC_Set();
INTEnableInterrupts();
}
ISR sampling the data, triggered by Timer1. This works fine.
ADC_Input() stores the sample in the global variable adc_value (12 bits, signed).
//ISR to sample data
void __ISR( _TIMER_1_VECTOR, IPL7SRS) Test_data_sampling_in( void)
{
IFS0bits.T1IF = 0;
ADC_Input();
//rescale the signed 12-bit audio values to unsigned 16-bit values
DAC_output_value = adc_value + 2048; //first offset the signed 12-bit values into 0..4095 (centre 2048)
DAC_output_value = DAC_output_value *16; //the scale factor from 12 to 16 bits is 16 = 65536/4096
DAC_Output(DAC_output_value);
}
main function with SPI, IO, Timer configuration
void main() {
SPI4CON = SPI4_CONF;
SPI4BRG = SPI4_BAUD;
TRISE = 0b00100000;
TRISD = 0b000000110100;
TRISG = 0b0010000000;
LATD = 0x0;
SYSTEMConfigPerformance(80000000L); /* configure wait states, cache and PBDIV for 80 MHz */
INTCONSET = _INTCON_MVEC_MASK; /* Set the interrupt controller for multi-vector mode */
//
T1CONbits.TON = 0; /* turn off Timer 1 */
T1CONbits.TCKPS = 0b11; /* TCKPS = 0b11 selects a 1:256 prescale on Timer 1 */
PR1 = 1816; /* T1 period = (PR1 + 1) prescaled ticks */
TMR1 = 0; /* clear Timer 1 counter */
//
IPC1bits.T1IP = 7; /* Set Timer 1 interrupt priority to 7 */
IFS0bits.T1IF = 0; /* Reset the Timer 1 interrupt flag */
IEC0bits.T1IE = 1; /* Enable interrupts from Timer 1 */
T1CONbits.TON = 1; /* Enable Timer 1 peripheral */
INTEnableInterrupts();
while (1){
}
}
I would expect the voltage at the output of my DAC to mimic the one I put at the input of my ADC; instead, the DAC output is always constant, no matter what I feed into the ADC.
What am I missing?
Also, when turning the SPIs on, should I still manually manage the I/O configuration of the SDI/SDO/SCK pins using TRIS, or is that taken care of automatically?
First of all, I agree that the documentation I first found for the PT8211 is rather poor. I found extended documentation here. Your DAC (PT8211) is actually an I2S device, not SPI. WS is not chip select; it is word select (left/right channel). In I2S, setting WS to 0 normally means the left channel; however, it looks like in the extended datasheet I found that WS = 0 is actually the right channel (go figure).
The PIC you've chosen doesn't seem to have any I2S hardware, so you might have to bit-bang it. There is a lot of info on I2S, though; see the I2S bus specification.
There are some slight differences between SPI and I2S. Notice that the first bit, when WS transitions from high to low, is the LSB of the right channel; and when WS transitions from low to high, it is not the LSB of the left channel. Note that the output should be between 0.4 V and 2.4 V (I2S standard), not between 0 V and 5 V (the maximum is 2.5 V, which is what you've been seeing).
(I2S bus timing diagram)
Basically, I'd first try the proper protocol with a bit-banging routine, continuously flip-flopping between the left and right channels.
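For illustration, a minimal bit-banging sketch might look like this (BCK_LOW/BCK_HIGH, WS_LOW/WS_HIGH and DIN_WRITE are hypothetical GPIO macros, e.g. built on LATxSET/LATxCLR; check the PT8211 datasheet for which BCK edge latches data and which WS level selects which channel):
//send one 16-bit word per channel, MSB first (bit-banged, sketch only)
void pt8211_write(unsigned short left, unsigned short right)
{
    int i;
    WS_HIGH();                           // one channel
    for (i = 15; i >= 0; i--) {
        BCK_LOW();
        DIN_WRITE((left >> i) & 1);
        BCK_HIGH();                      // data assumed latched on the rising BCK edge
    }
    WS_LOW();                            // the other channel
    for (i = 15; i >= 0; i--) {
        BCK_LOW();
        DIN_WRITE((right >> i) & 1);
        BCK_HIGH();
    }
}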
First of all, thanks a lot for your comment. It helps a lot to know that I'm not looking at an SPI transmission, which explains why it's not working.
A few reflections on it:
I googled bit bashing (banging?) and it seems to be CPU intensive, which I would definitely like to avoid.
I have seen a (successful) project (in MikroC) where someone transmits data from that exact same PIC to the same DAC, using SPI, with apparently no problems whatsoever. So I guess it SHOULD work, somehow?
Maybe he's transforming the data so that it works? I'm not sure what happens with the F15 bit toggle; I was thinking it was done to manage the LSB shift problem. Here is the piece of (working) MikroC code that I'm talking about:
valueDAC = valueDAC + 32768;
valueDAC.F15 =~ valueDAC.F15;
Chip_Select_DAC = 0;
SPI3_Write(valueDAC);
Chip_Select_DAC = 1;
From my understanding, the biggest difference between SPI and I2S is that SPI sends "bursts" of data whereas I2S sends data continuously. Another difference is that the bit sent right after the word-select change is treated as the LSB of the last word.
So I was thinking: my SPI is triggered by a timer, which is always the same, so even if the data is not sent continuously, it will just make the sound wave a bit more 'aliased', and if it's triggered regularly enough (say at 44 kHz), it should not be SO different from sending I2S data at the same rate, right?
If that is so, and I understand correctly, the "only" problem left is to manage the LSB-next-word-MSB placement problem, but I figured the LSB is virtually negligible over 16-bit values, so if I just bit-shift my value to the right and then fix the LSB to 0 or 1, the error would be small and the format would be right.
Does it sound like I have a valid 'MacGyver-I2S-from-my-SPI' approach, or am I forgetting something important?
I have tried to implement it, so far without success, but I need to check my SPI configuration since I'm not sure it's set up correctly.
Here is the code so far
SPI config
#define Chip_Select_DAC_Set() {LATDSET=_LATE_LATE0_MASK;}
#define Chip_Select_DAC_Clr() {LATDCLR=_LATE_LATE0_MASK;}
#define SPI4_CONF 0b1000010100100000
#define SPI4_BAUD 20
DAC output function
//output audio to external DAC
void DAC_Output(signed int valueDAC) {
INTDisableInterrupts();
valueDAC = valueDAC >> 1; // shift the value one bit to the right (because the MSB of what is transmitted will be seen by the DAC as the LSB of the previous value, after a word-select change)
//Left channel
Chip_Select_DAC_Set(); // Select left channel
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF; // read RX buffer (don't know why we need to do this here, but we do)
//SPI3_Write(valueDAC); MikroC option
// Right channel
Chip_Select_DAC_Clr();
SPI4BUF=valueDAC;
while(!SPI4STATbits.SPITBE); // wait for TX buffer to empty
SPI4BUF=valueDAC; // write 16-bits word to TX buffer
while(!SPI4STATbits.SPIRBF); // wait for RX buffer to fill
Empty_SPI3_buffer=SPI4BUF;
INTEnableInterrupts();
}
The data I send here is signed, 16-bit range; I think you said that's all right with this DAC, right?
Or maybe I could use framed SPI? The clock seems to be continuous in this mode, but I would still have the LSB/MSB shifting problem to solve.
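For reference, I guess turning on framed mode would look roughly like this (an untested sketch based on the SPIxCON framed-mode bits; I'd still have to verify the details in the reference manual):
SPI4CONbits.ON = 0;        // disable before reconfiguring
SPI4CONbits.MSTEN = 1;     // master mode
SPI4CONbits.MODE16 = 1;    // 16-bit words
SPI4CONbits.CKE = 1;
SPI4CONbits.FRMEN = 1;     // framed mode: SS4 becomes a frame sync pulse
SPI4CONbits.FRMSYNC = 0;   // frame sync generated by the master
SPI4CONbits.FRMPOL = 0;    // frame pulse active low
SPI4BRG = 20;
SPI4CONbits.ON = 1;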
I'm a bit lost here, so any help would be cool
I'm trying to demodulate a GFSK signal coming from an nRF24L01+ transceiver chip (hooked up to my Arduino). I've followed this guide so far:
https://www.bitcraze.io/2015/06/sniffing-crazyflies-radio-with-hackrf-blue/#comment-38046
...and managed to manually demodulate a packet (the address and the message I sent, 'martijn', are clearly recoverable):
https://drive.google.com/open?id=0B9CJ42CGPiF2TWoyelRmWldZcU0
However, now I want to receive packets and decode them as they come in. Someone already made a decoder for this job, but somehow it fails to find my nRF24 packets:
https://wiki.bitcraze.io/misc:hacks:hackrf
My Arduino code for sending the packets is as follows:
#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>
#include <RF24_config.h>
RF24 radio(9,10);
const uint64_t pipe = 0xe7e7e7e7e7;
char package[] = "martijn";
void setup() {
Serial.begin(9600);
radio.begin();
radio.setDataRate(RF24_1MBPS);
radio.setChannel(95);
radio.openWritingPipe(pipe);
radio.enableDynamicPayloads();
radio.setAutoAck(true);
radio.powerUp();
}
void loop() {
radio.write(&package, strlen(package));
delay(1);
}
Basically I just want to use GNU Radio Companion to obtain the nRF24 packets and send their binary data into a file. I'm fine with writing my own decoder. However, I have no clue how to get this binary data from the incoming signal.
(The comments at the bitcraze site are also mine)
I'd be very happy if someone could help me (or even point me in the right direction). Thanks in advance!
After the Quadrature Demod you have to use a clock recovery block. The M&M Clock Recovery block in GNU Radio should do the job. This block will dramatically improve the decoding performance.
However, you have to take care with some of the parameters this block requires. The most important is 'omega', which roughly speaking corresponds to the number of samples per symbol. For example, if your GFSK baud rate is 9600 and your incoming sample rate from the hardware is 96000, each symbol corresponds to 10 samples. (In your case, with the nRF24L01+ at 1 Mbps, a 2 Msps capture would give omega = 2.) Omega can be any float number. Note, however, that clock recovery does not work well for large omega values, so try to keep omega at 8.0 or below. To do that, either adjust the hardware sampling rate appropriately or do some resampling.
After the clock recovery, just use a 'Binary Slicer' block. This will convert the floats into 0/1 bits. Using a 'Pack K Bits' block you can then convert the bit stream into a byte stream, which can easily be saved to a file with a 'File Sink'.
Here is a good step-by-step tutorial for an FSK receiver. GFSK only adds a Gaussian filter, so the procedure is much the same for both.
I would like to send a string of characters from one processor (master) to another (slave) and then read a string back from the slave.
Currently I'm mixing the Arduino and the LPC1788, using the LPC as master and the Arduino as slave.
The LPC sends the string correctly and it is received by the Arduino in its ISR. In the loop function I check whether all of the characters have been received and then try to send a string back. On the LPC side the ISR is not working for some reason. I have set SR as
SR = (1<<TNF) | (1<<RNE);
So I have put a delay after sending the string from the LPC and then initiate a read from the Arduino.
What I see on the logic analyzer when sending the string is:
but reading the string back from the Arduino looks odd (the string should be "Pong\n"; it is not always 'P' that I receive first... it varies).
I guess the majority of the problem is in synchronizing the sending and the reading of the SPI buffer. How do I achieve that without a functional ISR on the LPC?
The SPI specification states that the CS (SSEL) line should be active during a frame and become inactive in between. NXP interpreted this as a word being one frame. This means that the CS as generated by the SSP block (the same goes for the legacy SPI) is only active during one transaction of up to 16 bits.
Note also that there is always a gap in between the words/frames being sent. So even when you fill the FIFO or use DMA you will see 16 clock pulses, a short delay and then 16 more pulses.
When using a GPIO pin as SSEL, please note that you have to wait until the peripheral is idle before asserting or de-asserting it.
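For example, a polled transfer with a GPIO chip select could look roughly like this (a sketch; CS_PIN is a hypothetical pin on port 0, and in the SSP status register bit 1 is TNF, bit 2 is RNE and bit 4 is BSY):
void ssp0_transfer(const unsigned char *tx, unsigned char *rx, int len)
{
    int i;
    LPC_GPIO0->CLR = (1u << CS_PIN);          // assert CS (GPIO, hypothetical pin)
    for (i = 0; i < len; i++) {
        while (!(LPC_SSP0->SR & (1u << 1)));  // wait for TNF (TX FIFO not full)
        LPC_SSP0->DR = tx[i];
        while (!(LPC_SSP0->SR & (1u << 2)));  // wait for RNE (RX FIFO not empty)
        rx[i] = (unsigned char)LPC_SSP0->DR;
    }
    while (LPC_SSP0->SR & (1u << 4));         // wait until the SSP is no longer BSY
    LPC_GPIO0->SET = (1u << CS_PIN);          // de-assert CS only when idle
}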
The Problem: I send one value into a UART and nulls emerge on the other UART.
--- Details ---
These are both PIC processors (PIC24 and PIC32)
They are both hard wired onto a printed circuit board.
They are communicating, each via one of the UART modules which reside within them.
They are (ostensibly; according to docs) both configured for 115200 bps, 8-N-1
No handshaking, no CTS enabled, no RTS enabled; I'm just putting bytes on the wire and out they go.
(These are short little 4-byte commands and responses, which fit pretty neatly.)
The PIC32 is going 80 MHz.
The PIC24 has F[cy] = 14745600
i.e., it is going 14.7456 MHz
The PIC24 sends four bytes (a specific command sequence)
When I set a breakpoint at the interrupt service routine for the UART, the PIC32 shows nulls; then I see repeated hits on the (PIC32 code) breakpoint after the first four, and I continue to see nulls (which makes sense, since the PIC24 is not sending anything).
i.e., the UART appears to be repeatedly generating interrupts when there is no reason to.
I did not write the code on the PIC32 side, and I am learning daily how it works.
Then I let the code just run, and I inevitably wind up on a line that says
52570 1D01_335C 9D01_335C _general_execption_handler sdbbp 0x0
When I get there,
The cause register holds 0080181C
The EPC register holds 9D00F228
The SP register holds 9F8FFFA0
This happened like clockwork, so I got suspicious of the __ISR that would not stop. MPLAB showed me this...
432:
433: //*********************************************************//
434: void __ISR(_UART1_VECTOR, ipl5) IntUart1Handler(void) //MCU communication port
435: {
9D00F204 415DE800 rdpgpr sp,sp
9D00F208 401A7000 mfc0 k0,EPC
9D00F20C 401B6000 mfc0 k1,Status
9D00F210 27BDFF88 addiu sp,sp,-120
9D00F214 AFBA0074 sw k0,116(sp)
9D00F218 AFBB0070 sw k1,112(sp)
9D00F21C 7C1B7844 ins k1,zero,1,15
9D00F220 377B1400 ori k1,k1,0x1400
9D00F224 409B6000 mtc0 k1,Status
9D00F228 AFBF0064 sw ra,100(sp) ;<<<-------EPC register always points here
9D00F22C AFBE0060 sw s8,96(sp)
9D00F230 AFB9005C sw t9,92(sp)
9D00F234 AFB80058 sw t8,88(sp)
9D00F238 AFAF0054 sw t7,84(sp)
9D00F23C AFAE0050 sw t6,80(sp)
9D00F240 AFAD004C sw t5,76(sp)
9D00F244 AFAC0048 sw t4,72(sp)
9D00F248 AFAB0044 sw t3,68(sp)
9D00F24C AFAA0040 sw t2,64(sp)
9D00F250 AFA9003C sw t1,60(sp)
9D00F254 AFA80038 sw t0,56(sp)
9D00F258 AFA70034 sw a3,52(sp)
9D00F25C AFA60030 sw a2,48(sp)
9D00F260 AFA5002C sw a1,44(sp)
9D00F264 AFA40028 sw a0,40(sp)
9D00F268 AFA30024 sw v1,36(sp)
9D00F26C AFA20020 sw v0,32(sp)
9D00F270 AFA1001C sw at,28(sp)
9D00F274 00001012 mflo v0
9D00F278 AFA2006C sw v0,108(sp)
9D00F27C 00001810 mfhi v1
9D00F280 AFA30068 sw v1,104(sp)
9D00F284 03A0F021 addu s8,sp,zero
I look a little more closely at the numbers, and I see that at that time, if we add 100 (0x64) to FFA0 (the bottom 16 bits of the SP) we get 0x10004, which I am guessing is 4 too much.
The PIC manual DS61143E, page 50, says that that nomenclature means SW (Store Word): Mem[Rs + offset] = Rt, and other experts have told me that the cause register is telling me the EXCCODE bits are 7, which is the code for a bus exception on a load or store.
Or, and I'm totally guessing here (I would love to get some experts' knowledge on this), something is not clearing something and I'm encountering infinite recursion in an interrupt handler.
All of this is starting to make sense.
THE QUESTION
Could someone please suggest the most common reasons for an interrupt like this to be repeatedly hitting me?
Does anyone see any relationship between the bogus nulls coming from the UART and this endlessly generated interrupt? Am I even on the right track?
In your answer, please tell me how to acknowledge the interrupt from the UART. I know how I do that on the PIC24 (I wrote that code entirely, in ASM) but I don't know how this is done in C on the PIC32. Assembly will be fine; I'll inline it. I'm working with code I didn't write here, and I thank you for your answers.
What is the most common reason that the UART (#1, in this case) would be repeatedly generating interrupts?
The most common reason an interrupt subroutine is called over and over is that the interrupt request is never acknowledged in the routine.
Are you sure you clear the corresponding IRQ bit?
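For example, a typical PIC32 UART1 receive ISR skeleton looks something like this (a sketch only; whether U1RXIF lives in IFS0 and the exact vector/priority names depend on the specific part, so check the device header):
void __ISR(_UART1_VECTOR, ipl5) IntUart1Handler(void)
{
    unsigned char c;
    while (U1STAbits.URXDA) {   // drain every byte waiting in the RX FIFO
        c = U1RXREG;
        (void)c;                // handle the received byte here
    }
    if (U1STAbits.OERR)         // a pending overrun error blocks further reception
        U1STAbits.OERR = 0;
    IFS0bits.U1RXIF = 0;        // acknowledge the request, or the ISR re-enters forever
}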
To ease UART debugging you should first connect the UART to a PC and make sure your target can communicate both ways with the PC. With two targets at the same time, you can't determine which one the problem is on without inspecting the signals with a scope.
On many devices an interrupt must be explicitly cleared to prevent the ISR from simply re-entering when complete.
In most cases a UART will have status bits that indicate the source of the interrupt, knowing that might tell you something, but not telling us makes it difficult to help you. You can inspect the UART registers directly in the debugger, however in some devices the act of reading a bit may in fact clear a bit, and that is true in the debugger too, so be aware of that possibility (check the data sheet/user manual).
Some UARTs require their transmitter to be explicitly switched off to stop transmitting nulls, while others are triggered automatically when data is placed in the Tx register and stop after the necessary number of bits are shifted out. Again, check the data sheet/manual for the part. If the PIC32 code is known to be working, then since this possible error would be with the PIC24 code, it seems to fit. You can check this simply by using an oscilloscope on the Tx line from the PIC24: if it is transmitting, you will see at least start/stop bit transitions (framing). If there is nothing, then the problem is probably at the PIC32 end.
While you have the scope out, you can check that the bit timing is correct and that you are actually transmitting at 115200. It is easy to get the clocking wrong, and that should be your first check. If the baud rate is incorrect, the PIC32 will likely generate framing error interrupts, which if not handled may persist indefinitely.
Another possibility is that after transmission the PIC24 leaves the line in the "break" state, and that the PIC32 UART is generating "line-break" interrupts. That is why it is important to look at the UART status registers to determine the interrupt cause.
As you can see, there are many possibilities; I think I have covered the most likely ones, but more methodical debugging effort and information gathering on your part is required. I hope I have guided you in this too.
There were three root causes in place...
The interrupt priority level was set at value 6 in the initialization code for UART1
The first line of the interrupt service routine was coded with an interrupt priority level of 5
The first three bytes of UART data were disappearing from the data stream (this is still unsolved)
Here's the not-so-obvious way they were causing the problem
First three bytes never appeared
Fourth byte did appear
Interrupt hit (as level 6) and invoked __ISR routine
__ISR was acting as ipl5 agent
First instruction executed (possibly more, I couldn't debug that accurately)
As soon as the first instruction finished, the "higher" priority 6 interrupt immediately kicked in
This resulted in the same interrupt again
The process repeated itself infinitely.
In short order, a stack overflow resulted.
The Fix
Make sure these two lines of code agree with each other...
The IPL line in the init code, wrong way then the right way
//IPC6bits.U1IP=6; //// Wrong !!! Uart 1 IPL should not be 6 !!!
IPC6bits.U1IP=5; //// Uart 1 IPL = 5 Correct way; matches __ISR
Interrupt Service Routine
void __ISR(_UART1_VECTOR, ipl5) IntUart1Handler(void) //// Operating as IPL 5
:
:
:
:
Poor design decision. If both are on the same board, SPI would have been more practical and a lot faster.
I've been pulling my hair out lately trying to get an ATmega162 on my STK200 to talk to my computer over RS232. I checked and made sure that the STK200 contains a MAX202CPE chip.
I've configured the chip to use its internal 8 MHz clock divided by 8.
I've tried to copy the code out of the data sheet (and made changes where the compiler complained), but to no avail.
My code is below, could someone please help me fix the problems that I'm having?
I've confirmed that my serial port works on other devices and is not faulty.
Thanks!
#include <avr/io.h>
#include <avr/iom162.h>
#define BAUDRATE 4800
void USART_Init(unsigned int baud)
{
UBRR0H = (unsigned char)(baud >> 8);
UBRR0L = (unsigned char)baud;
UCSR0B = (1 << RXEN0) | (1 << TXEN0);
UCSR0C = (1 << URSEL0) | (1 << USBS0) | (3 << UCSZ00);
}
void USART_Transmit(unsigned char data)
{
while(!(UCSR0A & (1 << UDRE0)));
UDR0 = data;
}
unsigned char USART_Receive()
{
while(!(UCSR0A & (1 << RXC0)));
return UDR0;
}
int main()
{
USART_Init(BAUDRATE);
unsigned char data;
// all are 1, all as output
DDRB = 0xFF;
while(1)
{
data = USART_Receive();
PORTB = data;
USART_Transmit(data);
}
}
I have commented on Greg's answer, but would like to add one more thing. For this sort of problem the gold-standard method of debugging is to first understand asynchronous serial communications, then get an oscilloscope and see what's happening on the line. If characters are being exchanged and it's just a baud rate problem, this is particularly helpful, as you can calculate the baud rate you are seeing and then adjust the divisor accordingly.
Here is a super quick primer, no doubt you can find something much more comprehensive on Wikipedia or elsewhere.
Let's assume 8 bits, no parity, 1 stop bit (the most common setup). Then if the character being transmitted is, say, 0x3F (= ASCII '?'), the line looks like this:
...--+   +---+---+---+---+---+---+       +---+--...
     | S | 1   1   1   1   1   1 | 0   0 | E
     +---+                       +---+---+
The high (1) level is +5V at the chip and -12V after conversion to RS232 levels.
The low (0) level is 0V at the chip and +12V after conversion to RS232 levels.
S is the start bit.
Then we have 8 data bits, least significant first, so here 00111111 = 0x3f = '?'.
E is the stop (e for end) bit.
Time advances from left to right, just like an oscilloscope display. If the baud rate is 4800, each bit spans 1/4800 seconds ≈ 0.21 milliseconds.
The receiver works by sampling the line and looking for a falling edge (a quiescent line is simply logical '1' all the time). The receiver knows the baudrate, and the number of start bits (1), so it measures one half bit time from the falling edge to find the middle of the start bit, then samples the line 8 bit times in succession after that to collect the data bits. The receiver then waits one more bit time (until half way through the stop bit) and starts looking for another start bit (i.e. falling edge). Meanwhile the character read is made available to the rest of the system. The transmitter guarantees that the next falling edge won't begin until the stop bit is complete. The transmitter can be programmed to always wait longer (with additional stop bits) but that is a legacy issue, extra stop bits were only required with very slow hardware and/or software setups.
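To make the algorithm concrete, here is a rough software-receive sketch (line_level() and wait_bit_time() are hypothetical helpers that return the RX pin state and delay for a fraction of one bit period):
unsigned char soft_uart_receive(void)
{
    unsigned char data = 0;
    int i;
    while (line_level() != 0) ;   /* wait for the falling edge of the start bit */
    wait_bit_time(0.5);           /* move to the middle of the start bit */
    for (i = 0; i < 8; i++) {
        wait_bit_time(1.0);       /* advance one full bit time */
        data |= (unsigned char)(line_level() << i);  /* LSB arrives first */
    }
    wait_bit_time(1.0);           /* land half way into the stop bit */
    return data;
}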
I don't have reference material handy, but the baud rate register UBRR usually contains a divisor value, rather than the desired baud rate itself. A quick google search indicates that the correct divisor value for 4800 baud may be 239. So try:
divisor = 239;
UBRR0H = (unsigned char)(divisor >> 8);
UBRR0L = (unsigned char)divisor;
If this doesn't work, check with the reference docs for your particular chip for the correct divisor calculation formula.
For debugging UART communication, there are two useful things to do:
1) Do a loop-back at the connector and make sure you can read back what you write. If you send a character and get it back exactly, you know that the hardware is wired correctly and that at least the basic UART register configuration is correct.
2) Repeatedly send the character 0x55 ("U"); the binary bit pattern 01010101 will let you quickly see the bit width on the oscilloscope, which lets you verify that the speed setting is correct.
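Using the USART_Transmit() from the question, that test is just:
while (1)
{
    USART_Transmit(0x55);   /* 'U' = 01010101: alternating bits on the line */
}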
After reading the data sheet a little more thoroughly, I found that I was setting the baud rate incorrectly. The ATmega162 data sheet has a table of clock frequencies against baud rates, with the corresponding error.
For a 4800 baud rate and a 1 MHz clock frequency, the error was 0.2%, which was acceptable for me. The trick was passing 12 to the USART_Init() function, instead of 4800.
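For reference, the value follows from the data sheet formula for asynchronous normal mode, UBRR = Fosc / (16 × baud) − 1, so the magic number doesn't have to be hard-coded:
#define F_CPU     1000000UL                        /* 8 MHz internal RC divided by 8 */
#define BAUD      4800UL
#define UBRR_VAL  ((F_CPU / (16UL * BAUD)) - 1UL)  /* = 12 for 1 MHz and 4800 baud */

USART_Init(UBRR_VAL);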
Hope this helps someone else out!