What exactly is meant in FreeRTOS's configCPU_CLOCK_HZ description? - documentation

The configCPU_CLOCK_HZ option explanation starts with this:
Enter the frequency in Hz at which the internal clock that driver the peripheral used to generate the tick interrupt will be executing.
Although I do more or less understand what it means, I need a somewhat finer explanation of what exactly is said there. Removing the obvious "the peripheral used to generate the tick interrupt" from the middle, I get "Enter the frequency in Hz at which the internal clock that driver will be executing", and this phrase looks a bit uncoordinated to me. What did the author want to say with this? Some "that" driver, as opposed to, say, "this" one? Which "that"? The context doesn't imply any "that" here.

I think 'driver' should be 'drives' in that explanation.
configCPU_CLOCK_HZ is the frequency of the platform dependent timer that generates the tick interrupt. It is used by some ports to program the timer so it generates the correct FreeRTOS tick rate (see configTICK_RATE_HZ).
Example: configCPU_CLOCK_HZ is 1000000 (1 MHz) and configTICK_RATE_HZ is 100, then you configure the timer to generate an interrupt every 1000000/100 = 10000 timer counts. That interrupt is your FreeRTOS system tick.
Take a look at an ARM Cortex-M port for one of the most common examples of this; those ports use the Cortex-M SysTick timer.
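For a concrete picture, here is a minimal sketch in C of roughly what a Cortex-M port's vPortSetupTimerInterrupt() does with these two constants. The CMSIS SysTick names are assumed to come from your device header, and real port code differs in detail:

#include "FreeRTOSConfig.h"  /* provides configCPU_CLOCK_HZ and configTICK_RATE_HZ      */
/* The device/CMSIS header that defines SysTick and the *_Msk constants is assumed too. */

void vPortSetupTimerInterrupt( void )
{
    /* 1000000 / 100 = 10000 timer counts per FreeRTOS tick; the -1 is because
     * SysTick counts from LOAD down to zero inclusive. */
    SysTick->LOAD = ( configCPU_CLOCK_HZ / configTICK_RATE_HZ ) - 1UL;
    SysTick->VAL  = 0UL;
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |  /* clock SysTick from the CPU clock */
                    SysTick_CTRL_TICKINT_Msk   |  /* interrupt on wrap = the tick ISR */
                    SysTick_CTRL_ENABLE_Msk;      /* start counting                   */
}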

Related

TinyAVR 0-Series: Can I use pin-change sensing without entering interrupt handler?

I am evaluating the ATtiny806 running at 20MHz to build a cycle-accurate Intel 4004 microprocessor emulator. (I know it will be a bit too slow, but AVRs have a huge community.)
I need to synchronize to the external, two-phase non-overlapping clocks. These are not fast clocks (the original 4004 ran at 750 kHz),
but if I spin-wait for every clock edge, I risk wasting most of my time budget.
The TinyAVR 0-series has a very nice pin-change interrupt facility that can be configured to trigger only on rising edges.
But, an interrupt routine round-trip is 8 cycles (3 in, 5 out).
My question is:
Can I leverage the pin-change sensing mechanism while never visiting an ISR?
(Other processor families let you poll for interruptible conditions without enabling interrupts from that peripheral). Can polling be done with a tight skip-on-bit/jump-back loop, followed by a set-bit instruction?
Straightforward way
You can always just poll on the level of the GPIO pin using the single cycle skip if bit set/clear instruction on the appropriate PORT register and bit.
But as you mention, polling does burn cycles so I'm not sure exactly what you want here - either a poll (that burns cycles but has low latency) or an interrupt (that has higher latency but allows processing to continue until the condition is true).
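As a sketch of the level-poll approach with avr-gcc and the tinyAVR 0-series device headers (the choice of PA2 as the clock input is an assumption, not something from the question):

#include <avr/io.h>

/* Spin until the clock input reads high.  VPORTA.IN sits in I/O space, so
 * avr-gcc can turn this test into a single-cycle SBIS. */
static inline void wait_for_pin_high(void)
{
    while ( !(VPORTA.IN & PIN2_bm) )
    {
        ;   /* busy-wait on the raw pin level */
    }
}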
Note that if things get really tight and you are looking for, say, power savings by sleeping between clock signal transitions, then you can do tricks like having an ISR that never returns (saving the RETI cycles), but that requires some careful coding, probably with something like a state machine.
INTFLAG way
Alternately, if you want to use the internal pin state-machine logic and you can live without interrupts, then you can use the PORT's INTFLAGS register to check for the pin change configured in the ISC bits of the PINnCTRL register. As long as global interrupts are not enabled in SREG, you can spin-poll on the appropriate INTFLAGS bit to check/wait for the desired condition, and then write a 1 to that bit to clear the flag.
Note that if you want to make this fast, you will probably want to map the appropriate PORT to a VPORT, since the VPORT registers are in I/O memory. This lets you use SBIS to test the INTFLAGS bit in a single cycle and SBI to clear it in a single cycle (these instructions only work on I/O memory, and the normal PORT registers are not in I/O memory).
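A rough sketch of that flow, again assuming the input is on PA2 (pin and port are assumptions):

#include <avr/io.h>

/* Arm rising-edge sensing on PA2, then spin on the latched flag rather than
 * the raw level, with global interrupts left disabled so no ISR ever runs. */
static inline void wait_for_rising_edge(void)
{
    PORTA.PIN2CTRL  = PORT_ISC_RISING_gc;   /* sense rising edges only           */
    VPORTA.INTFLAGS = PIN2_bm;              /* clear any stale flag              */

    while ( !(VPORTA.INTFLAGS & PIN2_bm) )
    {
        ;                                   /* SBIS-style spin on the edge flag  */
    }
    VPORTA.INTFLAGS = PIN2_bm;              /* write 1 to clear it for next time */
}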
Finally, one more complication: if you need to leave interrupts enabled while doing this, it is probably possible by hacking the interrupt priority registers. You'd set the pin change to priority level 0, make sure the interrupts you care about are level 1 or higher, and then trick the interrupt controller into thinking a level-0 interrupt is already being serviced so the pin-change interrupt never actually fires. There are other restrictions to this strategy as well, so avoid it if at all possible.
Programmable logic way
If you want to get really esoteric, you could likely route the input value of a pin to a Configurable Custom Logic (CCL) LUT in the chip and then route the output of that module to a bit that you can test with a 1-cycle bit test (maybe an unused I/O pin). To do this, you'd feed the output of the LUT back into one of its inputs and use the LUT to create a strobe on the edge you are looking for. This is very complex, and since the strobe is never acknowledged, an edge that arrives while you are not looking (between spin checks) will be lost and you will have to wait for the next one (probably fatal in your application).

How to deep sleep an Attiny until an analog value of a photoresistor changes?

For a battery-powered project I would like to put an ATtiny85 into deep sleep mode immediately after program start and have it wake up only when a sensor value (in this case a photoresistor) changes. Unfortunately I could only find examples on the internet of interrupts triggered by a button, not by a photoresistor. Does anyone have an idea how I could implement this, or whether it is impossible?
It turns out that this is probably a software question.
Probably the lowest-power and simplest way to implement this (a rough code sketch follows the steps below) would be to...
Connect the analog sensor value to any one of the analog input pins on the ATTINY.
Make sure you disable the digital buffer on that pin.
Set up the ADC to point to that pin and set other relevant values like the prescaler.
Set up a watchdog timer to fire a periodic interrupt.
Go into deep sleep and wait for the watchdog timer to fire.
Each time the watchdog fires...
Enable the ADC.
Take a sample.
Jump to main code if the value has changed more than your threshold.
Disable ADC.
Go back to deep sleep.
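Putting those steps together, a minimal avr-gcc sketch for an ATtiny85 might look like the following. The sensor pin (ADC1/PB2), the roughly 1 s watchdog period, and the change threshold are assumptions, not values from the question:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

/* Wake-up only: re-arm the watchdog interrupt so the next timeout wakes us
 * again instead of resetting the chip. */
ISR(WDT_vect)
{
    WDTCR |= _BV(WDIE);
}

/* Power up the ADC, take one reading of ADC1 (PB2), power it down again. */
static uint16_t read_light(void)
{
    PRR    &= ~_BV(PRADC);          /* un-gate the ADC clock   */
    ADCSRA |= _BV(ADEN);            /* enable the ADC          */
    ADCSRA |= _BV(ADSC);            /* start a conversion      */
    while (ADCSRA & _BV(ADSC))
        ;                           /* wait until it completes */
    uint16_t value = ADC;
    ADCSRA &= ~_BV(ADEN);           /* disable the ADC again   */
    PRR    |= _BV(PRADC);           /* gate its clock off      */
    return value;
}

int main(void)
{
    ADMUX  = _BV(MUX0);                     /* ADC1 on PB2, Vcc as reference            */
    DIDR0 |= _BV(ADC1D);                    /* digital input buffer off on that pin     */
    ADCSRA = _BV(ADPS1) | _BV(ADPS0);       /* ADC clock /8 (1 MHz core clock assumed)  */

    /* Watchdog in interrupt-only mode, roughly 1 s period. */
    cli();
    MCUSR  &= ~_BV(WDRF);
    WDTCR  |= _BV(WDCE) | _BV(WDE);
    WDTCR   = _BV(WDIE) | _BV(WDP2) | _BV(WDP1);
    sei();

    set_sleep_mode(SLEEP_MODE_PWR_DOWN);

    uint16_t last = read_light();
    const uint16_t threshold = 20;          /* hypothetical "changed enough" margin */

    for (;;)
    {
        sleep_mode();                       /* deep sleep until the watchdog fires  */
        uint16_t now  = read_light();
        uint16_t diff = (now > last) ? (now - last) : (last - now);
        if (diff > threshold)
        {
            last = now;
            /* the light level changed: run the main application code here */
        }
    }
}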
How power efficient this will be really depends on how often the timer interrupt fires - the less often the better. If your application can live with only checking the sensor, say, once per second then I bet power usage will be single digits of microamps or less.
If you really need very low latency when the sensor value changes, then you could instead use the built-in analog comparator...
...to generate an interrupt when the input voltage goes above or below a threshold value, but this will likely use much more power, since the analog comparator itself draws about 30 µA while on, and you will also need to generate the voltage you are comparing against, either with the internal 1.1 V reference or an external resistor divider or buffer capacitor.
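For completeness, a rough sketch of the comparator route on the ATtiny85, assuming the photoresistor divider drives AIN1 (PB1) and the internal 1.1 V bandgap is used on the positive input (the pin choice is an assumption):

#include <avr/io.h>
#include <avr/interrupt.h>

/* Fires when the divider voltage crosses the 1.1 V bandgap reference. */
ISR(ANA_COMP_vect)
{
    /* react to the threshold crossing here */
}

void comparator_init(void)
{
    DIDR0 |= _BV(AIN1D);                /* digital buffer off on PB1 (AIN1)         */
    ACSR   = _BV(ACBG) | _BV(ACIS1);    /* +input = internal 1.1 V bandgap,
                                           interrupt on falling comparator output   */
    ACSR  |= _BV(ACI);                  /* clear any stale flag (write 1 to clear)  */
    ACSR  |= _BV(ACIE);                 /* now enable the comparator interrupt      */
    sei();
}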

ATtiny85: how to respond simultaneously to pin and timer interrupts

I've recently been playing around with the ATtiny85 as a means of prototyping some simple electronics in a very small package. I'm having a spot of trouble with this since the language used for many of its functions is very different (and a lot less intuitive!) than that found in a standard Arduino sketch. I'm having some difficulty finding a decent reference for the hardware-specific functions too.
Primarily, what I'd like to do is listen for both a pin change and a timer at the same time. A change of state in the pin will reset the timer, but at the same time the code needs to respond to the timer itself if it ends before the pin's state changes.
Now, from the tutorials I've managed to find it seems that both pin change and timer interrupts are funnelled through the same function - ISR(). What I'd like to know is:
Is it possible to have both a pin and a timer interrupt going at the same time?
Assuming they both call the same function, how do you tell them apart?
ISR() is not a function, it's a construct (macro) that is used to generate the stub for an ISR as well as inject the ISR into the vector table. The vector name passed to the macro determines which interrupt it services.
#include <avr/interrupt.h>

ISR(INT0_vect)
{
    // Handle external interrupt 0 (PB2)
    ...
}

ISR(TIM0_OVF_vect)
{
    // Handle timer 0 overflow
    ...
}
According to the datasheet, the ATtiny85 doesn't share an interrupt vector between PCINT0 and TIMER1 COMPA/OVF/COMPB, so you can define a different ISR handler for each one of them.
If you're using the same handler for multiple interrupts, it might be impossible to differentiate between them, as interrupt flags are usually cleared by hardware when the ISR vector executes.
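To tie this back to the original question (a pin change restarts the timer, and the timer fires if no change arrives in time), a minimal ATtiny85 sketch with separate vectors might look like the following. The pin (PB3), the Timer1 prescaler, and the use of the overflow interrupt are assumptions for illustration:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdbool.h>

static volatile bool timed_out = false;

ISR(PCINT0_vect)
{
    TCNT1 = 0;                      /* pin changed: restart the timeout timer      */
}

ISR(TIMER1_OVF_vect)
{
    timed_out = true;               /* timer expired before another pin change     */
}

int main(void)
{
    PCMSK |= _BV(PCINT3);           /* watch PB3 for pin changes                   */
    GIMSK |= _BV(PCIE);             /* enable the pin-change interrupt             */

    TCNT1  = 0;
    TCCR1  = _BV(CS13) | _BV(CS12); /* Timer1 in normal mode, clock/2048 prescaler */
    TIMSK |= _BV(TOIE1);            /* enable the Timer1 overflow interrupt        */
    sei();

    for (;;)
    {
        if (timed_out)
        {
            timed_out = false;
            /* handle the timeout here */
        }
    }
}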
As an addition to the accepted answer:
Is it possible to have both a pin and a timer interrupt going at the same time?
The interrupts can occur at exactly the same time at the hardware level, and the corresponding interrupt flags will be set accordingly. The flags indicate that the ISR for the respective interrupt should be executed. But the actual ISRs are (more or less obviously) not executed at the same time / in parallel. Which ISR is executed first (in case multiple interrupts are pending) depends on the interrupt priority, which is specified in the interrupt vector table in the data sheet.

What is a "clock source"?

I just encountered the phrase "8254 clock source" in The Well Grounded Java Developer.
Unfortunately, I don't know the definition of 'clock source'.
Is it similar to the Clock signal or is it some kind of source code that makes a timer?
The quote from the book is:
The 8254 is a programmable timer chip that’s been kicking around since
the dawn of time. The clock source for this is a 119.318 kHz crystal,
which is one-third of the NTSC color subcarrier frequency, for reasons
that go back to the CGA graphics system. This is what once was used
for feeding regular ticks (for timeslicing) to OS schedulers, but
that’s done from elsewhere (or isn’t required) now.
Yes, the "clock source" is the thing that is providing the "clock signal", which in turn determines the rate at which the 8254 timer counts.
Note that while TWGJD says
"The clock source for this is a 119.318 kHz crystal"
this doesn't mean that the timer is necessarily receiving a clock signal of this frequency: any given design might have dividers between the crystal and the timer chip.

Drive high frequency output (32 kHz) on 20 MHz Renesas microcontroller IO pin

I need to drive a 32 kHz square wave on pin 19 of a Renesas R8C/36C microcontroller. The pin is non-negotiable (the circuit design is already complete).
The software design uses a 250 µs interrupt for simulating multi-tasking, but that's only good for a 2 kHz full-wave output.
Do I need to create another, higher-priority interrupt for driving the 32 kHz, or is there some other trick that I'm not aware of?
R8C/36C Hardware Manual
R8C/36C Software Manual
I am not familiar with the R8C, and Renesas don't say much on the subject of performance, but it is a CISC processor with typically 4 cycles per instruction, so let's estimate about 4 MIPS. Some instructions are much longer, with division up to 30 cycles.
So if you create a 64 kHz timer and flip the output on each interrupt, you have about 63 instructions between interrupts, and from that budget you lose the interrupt latency plus the code to flip the bit. If it works at all, it is likely to constitute a significant CPU load and may affect the timeliness of other operations.
Be realistic: without a redesign, the project may not be viable. You are already stressing it with the 4 kHz OS tick in my opinion - the software overhead at that rate is likely to be a significant chunk of your CPU load.
[ADDED]
I previously suggested 6 instructions between interrupts - finger trouble in the calculator. I have changed that estimate to 63 and moderated my conclusion to "barely feasible".
However, I looked again at the data sheet: interrupt latency is variable because instruction execution time is variable, and the current instruction must complete before the interrupt is serviced. The worst case is when a DIVX instruction is executing, which takes up to 51 cycles before the first instruction of the interrupt routine runs. That's 2.55 µs when you need the interrupt to trigger every 15.625 µs; the variable latency will impose significant jitter and constitutes 6 to 16% of your total CPU time, without even considering the time used by the ISR itself. Plus, if the interrupt itself is pre-empted, or a higher-priority interrupt is running when this one becomes due, further jitter will be imposed.
Whether it works will depend on the accuracy and jitter constraints of the 32 kHz output, and whatever else your code needs to get done.
As many people have pointed out, this design doesn't seem to be very good from a hardware standpoint if the 32 kHz clock is meant to be generated with a GPIO.
However, I don't know how desperate your situation is, nor do I know the volumes involved. But if it is a prototype or a very short production run, and pin 20 is free, you can short pins 19 and 20 together, set up pin 19 as an input and pin 20 as an output. Since pin 20 can be used as an output from timer RD, you could set up that timer to output the 32 kHz without using any interrupts.
I am not a Renesas micro expert; I'm going from what I've seen in the data sheet you attached and previous experience with other MCUs.
I hope this helps.
Looking at the datasheet for that chip, the only usable output mode for pin 19 seems to be the generic output port, so your only real option is to drive it from software.
Could you strap pin 19 to another pin that has the hardware to generate 32 kHz and just make pin 19 an input? Not a proud moment, but it was easy on a DIL package.
Could you take an interrupt every 15.6 µs, toggle pin 19, and then on every sixteenth interrupt do the multi-tasking work? That is likely to be wasteful, though. Alternatively, with an interrupt rate of 32 kHz you could set pin 19 on entry, spend one interrupt in eight making the multi-tasking decisions, and in the other seven wait until the point where you can reset pin 19 and then run some background code, for less than half the CPU time.
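As a rough illustration of the toggle-in-a-fast-interrupt idea (the first variant above), a sketch in C might look like the following. toggle_pin19() and run_scheduler_tick() are hypothetical placeholders for the R8C port write and the existing 250 µs multi-tasking code; the timer setup itself is not shown:

#include <stdint.h>

/* Hypothetical hooks: invert the pin 19 output latch, and the existing
 * 250 us multi-tasking routine. */
void toggle_pin19(void);
void run_scheduler_tick(void);

static volatile uint8_t tick_divider = 0;

/* 64 kHz timer ISR: every call toggles pin 19 (giving a 32 kHz square wave),
 * and every 16th call (every 250 us) runs the original scheduler tick. */
void timer_isr_64khz(void)
{
    toggle_pin19();

    if (++tick_divider >= 16u)
    {
        tick_divider = 0;
        run_scheduler_tick();
    }
}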