On a Vybrid processor ADC, the ADC_HS COCO (conversion complete) bit does not go high for the first 10 milliseconds; after 10 ms the ADC is reinitialized and then works fine. The ADC read is called every millisecond. ADC configuration: 12-bit, ADIV/8, no low power, 16-sample averaging, normal speed, software trigger.
Related
Why is the analog read rate seemingly slow (46 ksamples/s) when it should be fast (250 ksamples/s) for my Adafruit Trinket M0? See this simple Arduino code for details; why is PointCount only 46?
//TrinketReadRateTest
//27Nov2022
//Running on Adafruit Trinket M0, SAMD21
//Measures read times of analog reads on Trinket M0
//nothing at all connected to the Trinket
//according to the settings in this wiring.c file lines 160-173, samples per second should be = 250,000:
//C:\Users\<MyUserName>\AppData\Local\Arduino15\packages\adafruit\hardware\samd\1.7.11\cores\arduino\wiring.c
//in this loop, every PointCount is 2 samples, so in 2 millisecs, number of PointCounts should be:
//(.002 secs)(250000 samples/sec)(PointCounts/ 2 samples) = 250
//however, this routine gives a value of 46 WHY?
//if line 170 prescaler is set to DIV16 instead of DIV32, PointCounts gets to 66 (accuracy ???) so this wiring.c is being loaded
#define INPUT1 A3 //ATSAMD21G PA04
#define INPUT2 A4 //ATSAMD21G PA05
unsigned int Input1[1000];
unsigned int Input2[1000];
unsigned int PointCount = 0;
void setup() {
  pinMode(INPUT1, INPUT);
  pinMode(INPUT2, INPUT);
}

void loop() {
  PointCount = 0;
  unsigned long StartTime = micros();
  do {
    Input1[PointCount] = analogRead(INPUT1);
    Input2[PointCount] = analogRead(INPUT2);
    PointCount++;
  } while (micros() - StartTime < 2000); //read 2 millisecs of data points as fast as they come
  Serial.begin(9600); //keep serial off during data reads to avoid the question...
  delay(1000);
  Serial.println(PointCount);
  Serial.end();
  delay(1000);
}
I tried reading analog samples as fast as they would come. I expected to receive samples at a rate of 250000 per second. What actually resulted was a rate of 46000 samples per second.
Added 28Nov: the wiring.c file is not easy to find. If you want it:
download the tar.bz2 file:
https://adafruit.github.io/arduino-board-index/boards/adafruit-samd-1.7.11.tar.bz2
extract the archive using 7-Zip or a similar tool
go to cores\arduino\wiring.c
Here are the relevant lines of wiring.c:
//set to 1/(1/(48000000/32) * 6) = 250000 SPS
while(GCLK->STATUS.reg & GCLK_STATUS_SYNCBUSY);
GCLK->CLKCTRL.reg = GCLK_CLKCTRL_ID( GCM_ADC ) | // Generic Clock ADC
GCLK_CLKCTRL_GEN_GCLK0 | // Generic Clock Generator 0 is source
GCLK_CLKCTRL_CLKEN ;
while( ADC->STATUS.bit.SYNCBUSY == 1 ); // Wait for synchronization of registers between the clock domains
ADC->CTRLB.reg = ADC_CTRLB_PRESCALER_DIV32 | // Divide Clock by 32.
ADC_CTRLB_RESSEL_10BIT; // 10 bits resolution as default
ADC->SAMPCTRL.reg = 5; // Sampling Time Length
Adding this additional question 8Dec2022:
wiring_analog.c (in the same folder as wiring.c) implements the analog routines. Line 369 of wiring_analog.c says the same thing the SAMD21 data sheet says: "The first conversion after the reference is changed must not be used."
In lines 371-394, the analogRead routine for the SAMD21 always performs two conversions; the first one accounts for the statement above. But why do two reads for every analogRead? The analog reference is not changed with every read; it is set before any reads happen. So why not do just one throwaway conversion right after the reference is set? That way, only one conversion would be needed per analogRead.
I moved the first conversion routine to the very end of analogReference. It speeds things up to PointCount = 79. Is this a problem? It does not seem to reduce accuracy.
Your second question is easier to answer than your first. The reason there are two ADC reads in the Arduino code is that there is a bug in the ADC hardware on the SAMD21. In the past, Arduino provided a calibration method that let you correct for this instead of adding the second read and throwing away the first, garbage result. This was problematic for a number of reasons, and eventually the library was modified. There's an old Hackaday article that provides a little more detail.
As for the ADC reads being slow, the limitation you're running into is a limitation of the SAMD library for Arduino. For reference, I am using the SAMD21 datasheet and the code from Arduino SAMD on GitHub. To start with, the clock speed should be 48 MHz. Using the DIV32 prescaler, the ADC clock frequency is 1.5 MHz. Each ADC conversion in the SAMD21 library takes 63 clock cycles, which leaves you with ~23.8 kHz. 23.8 kHz × 2 ms ≈ 47.6 conversions. Add on top of that the overhead caused by switching between the two input pins (I don't know the exact characterization, but likely 1-2 clock cycles) and you end up closer to 46 conversions in 2 ms.
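If it helps to see the arithmetic in one place, here it is as a small stand-alone C program (the 48 MHz clock, DIV32 prescaler and 63-cycles-per-conversion figures come from the discussion above, not from a measurement):

#include <stdio.h>

int main(void)
{
    const double gclk_hz         = 48e6;  /* GCLK0 feeding the ADC         */
    const double prescaler       = 32.0;  /* ADC_CTRLB_PRESCALER_DIV32     */
    const double cycles_per_conv = 63.0;  /* per the Arduino SAMD core     */

    double adc_clk_hz = gclk_hz / prescaler;            /* 1.5 MHz         */
    double conv_per_s = adc_clk_hz / cycles_per_conv;   /* ~23.8 k/s       */

    printf("ADC clock:           %.0f Hz\n", adc_clk_hz);
    printf("Conversions per sec: %.1f\n", conv_per_s);
    printf("Conversions in 2 ms: %.1f\n", conv_per_s * 0.002);
    return 0;
}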
63 clock pulses per conversion is comically high. Typically, the first read is closer to 20 clock pulses and subsequent ones take about 13.5. There is another post on the Electrical Engineering Stack Exchange where someone tackles this and links to their own library for improving the conversion speed.
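Purely as an illustration of the kind of change such a library makes, here is a sketch that reuses the register names from the wiring.c excerpt above; the DIV16, 12-bit and sampling-length values are assumptions chosen to show the idea, not a tested recommendation:

// Illustrative only: trade accuracy for speed by reconfiguring what wiring.c set up.
static void adcSpeedUp(void) {
  while (ADC->STATUS.bit.SYNCBUSY);              // wait for register synchronization

  ADC->CTRLB.reg = ADC_CTRLB_PRESCALER_DIV16 |   // 48 MHz / 16 = 3 MHz ADC clock
                   ADC_CTRLB_RESSEL_12BIT;       // 12-bit results

  while (ADC->STATUS.bit.SYNCBUSY);
  ADC->SAMPCTRL.reg = 1;                         // shorter sampling time than the default 5
}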
Reading the following chapter:
https://users.ece.utexas.edu/~valvano/Volume1/E-Book/C10_FiniteStateMachines.htm
At the beginning, just above Figure 10.1, the author claims that:
Because the reference clock is stable, the feedback loop in the PLL will drive the output to a stable 400 MHz frequency.
Question: how does a 16 MHz clock drive a 400 MHz PLL? (I checked the Wikipedia article on PLLs but didn't understand much.)
A bit of background: I don't know much about electronics, and apparently this book doesn't really require students to understand such questions (it focuses on writing C programs for an eval board). I'm just curious.
In simple words: A PLL works by "comparing" the reference frequency with its own frequency. If its frequency is too low, it raises it a bit, and if it's too high, it lowers it a bit. This is what the feedback loop does. (Actually, the phase is used for comparison. That's why it's called a "phase lock[ed] loop".)
So your question boils down to: How can a frequency of 400 MHz be compared with a frequency of 16 MHz?
Well, as such, it cannot. For the comparison in the "Phase/Freq Detector" both frequencies need to be nearly the same. "Nearly" because while not being locked, the VCO's frequency might be "off track".
The solution is to divide the 400 MHz down to 16 MHz, i.e. by a factor of 25. This is what the "/m" block on the linked page does.
As for the "programming" aspect of your question: you set up the divisor by choosing the right XTAL value from the table.
After division, the detector receives two frequencies in the same range.
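As a quick check of the divider value implied above (just the arithmetic spelled out, nothing beyond what the answer already says):

#include <stdio.h>

int main(void)
{
    const unsigned long f_ref = 16000000UL;    /* reference from the 16 MHz crystal */
    const unsigned long f_out = 400000000UL;   /* desired VCO output                */

    unsigned long m = f_out / f_ref;           /* the "/m" feedback divider         */
    printf("m = %lu\n", m);                    /* 25                                */
    printf("m * f_ref = %lu Hz\n", m * f_ref); /* 400000000 Hz, so the detector     */
                                               /* compares 16 MHz against 16 MHz    */
    return 0;
}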
I have the following problem on Windows 7, using VB.NET with .NET Framework 4.0.
I have to send a buffer of bytes via a serial port. The PC acts as the master, and a device connected as a slave receives the buffer. Each byte must be spaced from the next by a certain amount of time expressed in microseconds.
This is my code snippet
Dim t1 As New Stopwatch
Dim WatchTotal As New Stopwatch

WatchTotal.Reset()
WatchTotal.Start()
t1.Stop()

For i As Integer = 0 To _buffer.Length - 1
    SerialPort1.Write(_buffer, i, 1)
    t1.Reset()
    t1.Start()
    While ((1000000000 * t1.ElapsedTicks / Stopwatch.Frequency) < 50000) ' wait 50us
    End While
    t1.Stop()
Next

WatchTotal.Stop()
Debug.Print(WatchTotal.ElapsedMilliseconds)
Everything loops inside a thread.
Everything works correctly, but on a Windows 7 machine SerialPort.Write of a single byte takes 1 ms, so sending 1024 bytes takes 1024 ms.
This is confirmed by printing the elapsed time:
Debug.Print(WatchTotal.ElapsedMilliseconds)
The problem seems to be in the SerialPort.Write method.
On a Windows 10 machine the same write takes less than 1 ms.
The problem is more noticeable when we have to send many buffers of bytes; in this case we send 16 buffers of 1027 bytes each. On Windows 7 it takes just under 20 seconds; on Windows 10 it takes half that or less (sending one buffer of 1027 bytes takes approximately 120-150 ms, and all 16 buffers take less than 5 seconds).
Does anyone have any idea what this might depend on?
Thanks
EDIT 22/05/2020
If I remove the debug printing and the little delay that paces the communication, I always get about 1027 ms to send 1027 bytes, so I think the problem lies only in the SerialPort.Write method and not in the timing or the Stopwatch object. This happens on a Windows 7 machine. The same executable on a Windows 10 machine runs as fast as expected.
For i As Integer = 0 To _buffer.Length - 1
    SerialPort1.Write(_buffer, i, 1)
Next
One thing: your wait code seems cumbersome, try this.
'Stopwatch ticks are not necessarily 100 ns each, so derive the tick count from the timer frequency
Dim wait As Long = Stopwatch.Frequency \ 20000L ' 50us = 1/20000 of a second
While t1.ElapsedTicks < wait
End While
Busy loops are problematic and vary from machine to machine. As I recall, serial port handling was not very good on Windows 7, but I could be mistaken.
It is hard to believe that the receiver is that time sensitive.
If the Win7 workstation doesn't have or isn't using a high resolution timer, then that could account for the described difference.
From the Remarks section of the Stopwatch class documentation:
The Stopwatch measures elapsed time by counting timer ticks in the underlying timer mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Stopwatch class uses that counter to measure elapsed time. Otherwise, the Stopwatch class uses the system timer to measure elapsed time. Use the Frequency and IsHighResolution fields to determine the precision and resolution of the Stopwatch timing implementation.
Check the IsHighResolution field to determine if this is what is occurring.
For one MCU I have written some assembly routines performing RX and TX of a proprietary protocol (UART-based) in a bit-bang fashion. How can I test them?
TX might be tested by sending data, and at the same time, with the help of a logic analyzer, checking that all the sampled timings are correct (manually or with some scripts).
RX on the other hand is more difficult. On one hand I can check if I'm receiving what someone else is sending, but on the other hand how do I know that the RX sampling is happening correctly (timing-wise)?
For example, my RX routine may return the correct data by sampling at the edge of the "bit window" instead of the middle.
I thought about toggling a "debug pin" to indicate when the sampling is actually happening, but this introduces delays in the sampling procedure, hence I wouldn't be testing my original routine.
Some things worth clarifying after reading comments:
I know that a hardware UART would be better (it depends, though), but I can't use one; this is not a matter of "have you tried this...?";
I know how to do the bit banging (I have already written the assembly routines);
I can't connect TX to RX because I'm only using 1 wire (the communication is half-duplex);
I'm asking how to test the RX sampling timings, not how to implement UART.
I thought about toggling a "debug pin" to indicate when the sampling is actually happening, but this introduces delays in the sampling procedure, hence I wouldn't be testing my original routine.
Test with the instrumentation code, and then leave the instrumentation - or near-equivalent code that doesn't actually twiddle hardware - in place.
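For example, here is a rough C sketch of what "leaving the instrumentation in place" could look like. Every helper in it (rx_pin_read(), debug_pin_toggle(), debug_pin_dummy(), wait_bit_time(), wait_half_bit_time()) is a placeholder for your MCU-specific, cycle-counted code, since the real routines are in assembly; the point is only that the disabled marker costs the same number of cycles as the enabled one:

#include <stdint.h>

/* Placeholders for the MCU-specific pieces (the real routines are in assembly): */
extern uint8_t rx_pin_read(void);       /* returns the RX line level               */
extern void    debug_pin_toggle(void);  /* real debug-pin write                    */
extern void    debug_pin_dummy(void);   /* same cycle cost, no visible pin change  */
extern void    wait_bit_time(void);     /* cycle-accurate one-bit delay            */
extern void    wait_half_bit_time(void);

/* Keep the marker's cost in the timing budget even when it is disabled, so the
   instrumented build and the release build sample at the same instants. */
#ifdef RX_TIMING_DEBUG
#define SAMPLE_MARK()  debug_pin_toggle()
#else
#define SAMPLE_MARK()  debug_pin_dummy()
#endif

static uint8_t uart_rx_byte(void)
{
    uint8_t value = 0;

    while (rx_pin_read())            /* line idles high; wait for the start bit */
        ;
    wait_bit_time();                 /* 1.5 bit times lands in the middle       */
    wait_half_bit_time();            /* of data bit 0                           */

    for (uint8_t bit = 0; bit < 8; bit++) {
        SAMPLE_MARK();               /* the instant the logic analyzer should see */
        value >>= 1;
        if (rx_pin_read())
            value |= 0x80u;
        wait_bit_time();             /* one bit time to the next sample point     */
    }
    return value;
}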
You'll need something to send data to the MCU, perhaps a second MCU. I've worked on similar code for both the 6502 and the Z80 for old 8-bit Atari peripherals. These are half-duplex protocols, so whenever the device is idle, it polls for a start bit. After detecting a start bit, it delays 1.5 bit times, then receives 8 bits, with 1 bit time between bits. Both the receive and transmit routines are coded to exact cycle counts for timing. These were old devices, and even the fastest bit rate was relatively slow at 19 microseconds per bit, roughly 52,600 baud.
The question has been updated. If the input and output instructions take the exact same time to run (cycle count), you could modify the receive code to transmit data to verify the bit time, and confirm exactly how fast the processor is running.
For the timing regarding sensing the start bit and doing a 1.5 bit time wait, you'd have to calculate the minimum and maximum number of cycles to sense the start bit. The maximum cycle count would be an input instruction that just misses the trailing edge of the start bit, the test instruction, and the loop back to the input, followed by another test and then a fall through the loop to continue the receive. The minimum cycle count would be an input that just barely catches the leading edge of the start bit, does a test, then falls through the loop. Then the remainder of the receive code needs to sample as close as possible to the middle of the data bit periods.
Here is an example of code for a 4 MHz Z80 that receives data at 19 microseconds (76 cycles) per data bit. The comments include the cycle count for each instruction. The ideal wait time from start bit to first data bit is 114 cycles. The min/max cycle time for the start-bit loop is 20/50 cycles. An additional delay plus the input of the first data bit adds 79 cycles, so the min/max time from sensing the start bit to reading the first data bit is 99/129 cycles, within the allowed bounds of 76/152 cycles. The remaining data bits are read at exactly 76 cycles per bit.
LD E,0 ;SET UP
; ; START BIT TO DATA BIT=114
NRXF0: LD A,(FBS) ;(13) WAIT FOR START BIT
AND FBSRXD ;(7)
JP NZ,NRXF0 ;(10)
; ; NOTE: 20 MIN, 50 MAX, 35 AVG
EX (SP),HL ;(19) DELAY
EX (SP),HL ;(19)
LD A,(HL) ;(7)
NRXF1: LD A,(HL) ;(7)
LD A,(HL) ;(7)
LD D,8 ;(7) 8 BITS PER BYTE
; ; 76 CYCLES PER DATA BIT
NRXF2: LD A,(FBS) ;(13) GET DATA BIT
AND FBSRXD ;(7)
ADD A,0FFH ;(7)
RR C ;(8)
PUSH BC ;(11) DELAY
POP BC ;(10)
NOP ;(4)
DEC D ;(4) LP TIL BYTE DONE
JR NZ,NRXF2 ;(12/7)
RET NZ ;(5) DELAY
NRXF4: LD A,(FBS) ;(13) WAIT FOR NEXT START BIT
AND FBSRXD ;(7)
JP NZ,NRXF4 ;(10)
; ; START BIT TO DATA BIT=114
LD (HL),C ;(7) STORE BYTE
LD A,C ;(4) DO CKSUM
ADD A,E ;(4)
ADC A,0 ;(7)
LD E,A ;(4)
INC HL ;(6) ADV ADR
DJNZ NRXF1 ;(13/8) LP IF MORE BYTES
Every so often I start a bare-metal microcontroller project and end up implementing system time measurement with some arbitrary timer unit.
I have been working with ARM Cortex-M devices for a (albeit short) while now and have typically used the SysTick ("System Tick") interrupt to create a 1 ms resolution timer. I recently stumbled over a post that suggested chaining two Programmable Interrupt Timers (on a Kinetis KL25Z device) to create an interrupt-less 32-bit millisecond timer, at the cost of two PIT channels that may come in handy later on.
So I was wondering whether there are (sort of) canonical ways to determine the system time on a microcontroller - preferably for Kinetis KL2xZ devices, as I currently work with these, but not necessarily so.
The canonical method, as you put it, is exactly what you have done - using SysTick. That is the single timer device defined by the Cortex-M architecture; any other timer hardware is external to the core and vendor-specific.
Some parts (the STM32F2, for example) include 32-bit timer/counter hardware, so you would not need to chain two.
The best approach is to abstract timer services by defining a generic timer API that you implement for every part you need, so that the application layer is identical across parts. For example, in this case you might simply implement the standard library clock() function and define CLOCKS_PER_SEC.
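For instance, a minimal sketch of that approach built on SysTick - assuming a CMSIS environment where the device header (MKL25Z4.h here, since the question mentions the KL25Z) provides SysTick_Config() and SystemCoreClock, and assuming CLOCKS_PER_SEC is configured as 1000:

#include <time.h>
#include <stdint.h>
#include "MKL25Z4.h"   /* device header: provides SysTick_Config() and SystemCoreClock */

static volatile uint32_t systick_ms = 0 ;

void SysTick_Handler( void )
{
    systick_ms++ ;                              /* one increment per millisecond   */
}

void timebase_init( void )
{
    SysTick_Config( SystemCoreClock / 1000u ) ; /* SysTick interrupt every 1 ms    */
}

clock_t clock( void )
{
    return (clock_t)systick_ms ;                /* valid if CLOCKS_PER_SEC == 1000 */
}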
If you are using two free-running cascaded timers, you must ensure high/low word consistency when combining the two counter values:
#include <time.h>
#include <stdint.h>

clock_t clock( void )
{
    uint16_t hi_word = 0 ;
    uint16_t lo_word = 0 ;

    do
    {
        hi_word = readTimerH() ;
        lo_word = readTimerL() ;
    } while( hi_word != readTimerH() ) ;

    return (clock_t)((uint32_t)hi_word << 16 | lo_word) ;
}
I just looked into the KL25 Sub-Family Reference Manual,
Chapter 34, Real Time Clock (RTC), section 34.3.2 "Time counter" (numbering may differ between document versions).
I found that there are two registers for the time counter in the RTC:
a 32-bit seconds counter
a 16-bit prescaler register that increments once every 32.768 kHz clock cycle
The reference manual says:
Always write to the prescaler register before writing to the seconds register,
because the seconds register increments on the falling edge of bit 14 of the prescaler
register.
This means that to calculate the system time, you read rtc_sec_counter and add the lower 14 bits of prescalar_reg.
You can even create macros that give you the system time in µs and ms from the combination of rtc_sec_counter and prescalar_reg (or in plain seconds, obviously, from rtc_sec_counter alone).
Since the 16-bit prescaler register is clocked at 32.768 kHz, we can create macros to get the time in µs and ms:
#define PRESCALAR_TICK 32768UL
#define KHZ 1000UL
#define MHZ 1000000UL

/// First extract the 14-bit value of prescalar_reg, then scale it to microseconds.
/// The multiplication by MHZ needs a 64-bit intermediate (via <stdint.h>) to avoid overflow.
#define GET_SYS_US ((uint32_t)(((uint64_t)(prescalar_reg & 0x3FFF) * MHZ) / PRESCALAR_TICK))
#define GET_SYS_MS (GET_SYS_US / KHZ)
If you need the time in microseconds or milliseconds as a single 32-bit value that includes the seconds counter, use the macros below (note that the microsecond version wraps after about 71 minutes, the millisecond version after about 49 days):
#define GET_SYS_US_32bit (((uint32_t)rtc_sec_counter * MHZ) + GET_SYS_US)
#define GET_SYS_MS_32bit (((uint32_t)rtc_sec_counter * KHZ) + GET_SYS_MS)
But to use this information you must, obviously, initialise the RTC of your micro first.
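If it helps, here is a sketch of reading the two registers as a consistent pair, using the register names from the Kinetis device header (RTC->TSR for the seconds counter, RTC->TPR for the prescaler) and mirroring the read-twice pattern from the clock() example earlier in the thread:

#include <stdint.h>
#include "MKL25Z4.h"   /* Kinetis KL25 device header: provides the RTC register struct */

/* Read seconds and prescaler as one consistent pair: if the seconds register
   changed between the two reads, the prescaler wrapped, so read again. */
static void rtc_read( uint32_t *seconds, uint16_t *prescaler )
{
    uint32_t sec ;
    uint16_t pre ;

    do
    {
        sec = RTC->TSR ;
        pre = (uint16_t)RTC->TPR ;
    } while( sec != RTC->TSR ) ;

    *seconds   = sec ;
    *prescaler = pre ;
}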