I am working on a weird problem: as part of my project, I migrated firmware from CooCox to TrueStudio. Both CooCox and TrueStudio automatically create some standard files when creating a project for a specific microcontroller. The microcontroller used here is the STM32F407VGT6. I am using ms and s delays which are derived from the µs delay function shown below.
Edit 2: I should mention that the original project is a pure C project. I am trying to make the project a C/C++ project in TrueStudio. What I will try now is to migrate the firmware into a pure C TrueStudio project and see if the problem still exists. I will report the results.
Results: the problem is actually gone in the pure C project, but I would really like to implement classes etc. using C++. Any ideas how to solve this?
The SysTick initialization code is shown below (HCLK frequency = 168 MHz).
Edit 1: the HCLK frequency equals the SYSCLK.
void systick_init(void){
    RCC_ClocksTypeDef RCC_Clocks;

    Systick_Delay = 0;
    RCC_GetClocksFreq(&RCC_Clocks);
    SysTick_Config((RCC_Clocks.HCLK_Frequency / 1000000) - 1);
}
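For reference, with HCLK = 168 MHz this passes 168000000 / 1000000 - 1 = 167 to SysTick_Config(). Since the CMSIS SysTick_Config() subtracts one more when it loads the reload register, the interrupt actually fires every 167 HCLK cycles (about 0.994 µs) rather than every 168; that explains the minor accuracy error mentioned below, but not a factor-of-four one.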
The function for the 1µs Delay looks like this:
void delay_us(volatile uint32_t delay)
{
    Systick_Delay = delay;
    while (Systick_Delay != 0);
}
The SysTick handler contains the following code:
void SysTick_Handler(void)
{
    // Tick for the delay counter
    if (Systick_Delay != 0x00)
    {
        Systick_Delay--;
    }
}
When I create a .hex file with CooCox and flash the µC, the timing functions work (with some minor accuracy errors that don't bother me). When I create the .hex file with TrueStudio, the delays are massively inaccurate: for example, a delay of 500 ms becomes roughly 2 s.
Since the code depends on the actual HCLK_Frequency, I can't see the mistake; in my understanding, even if the HCLK differed, the 1 µs delay should still take about 1 µs.
My next step will be to compare the automatically generated system files, but maybe someone has a different approach or another idea?
Edit 3: I normally include my systick header wrapped in extern "C", so my systick source file is a .c file. When I rename the file to systick.cpp and include the header without extern "C", the delay function does not work at all. Maybe that helps with the solution?
You are either running off a different clock or have different PLL settings. Looks like your clock speed is 1/4 of what you had before.
The basic startup code provided does not always set the maximum speed for the board. Have a look at some of the examples in the stm32cube.zip code. You will find some system clock configuration code for your board which selects the correct clock and PLL settings. (This will be in your code somewhere as well.)
Look in main.c under stm32cubef4/projects/STM32F4-Discovery/Demonstrations/src.
You will find the following code which sets up the clock:
/**
  * @brief  System Clock Configuration
  *         The system Clock is configured as follows:
  *            System Clock source            = PLL (HSE)
  *            SYSCLK(Hz)                     = 168000000
  *            HCLK(Hz)                       = 168000000
  *            AHB Prescaler                  = 1
  *            APB1 Prescaler                 = 4
  *            APB2 Prescaler                 = 2
  *            HSE Frequency(Hz)              = 8000000
  *            PLL_M                          = 8
  *            PLL_N                          = 336
  *            PLL_P                          = 2
  *            PLL_Q                          = 7
  *            VDD(V)                         = 3.3
  *            Main regulator output voltage  = Scale1 mode
  *            Flash Latency(WS)              = 5
  * @param  None
  * @retval None
  */
static void SystemClock_Config(void)
{
  RCC_ClkInitTypeDef RCC_ClkInitStruct;
  RCC_OscInitTypeDef RCC_OscInitStruct;

  /* Enable Power Control clock */
  __HAL_RCC_PWR_CLK_ENABLE();

  /* The voltage scaling allows optimizing the power consumption when the device is
     clocked below the maximum system frequency, to update the voltage scaling value
     regarding system frequency refer to product datasheet. */
  __HAL_PWR_VOLTAGESCALING_CONFIG(PWR_REGULATOR_VOLTAGE_SCALE1);

  /* Enable HSE Oscillator and activate PLL with HSE as source */
  RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSE;
  RCC_OscInitStruct.HSEState = RCC_HSE_ON;
  RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON;
  RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSE;
  RCC_OscInitStruct.PLL.PLLM = 8;
  RCC_OscInitStruct.PLL.PLLN = 336;
  RCC_OscInitStruct.PLL.PLLP = RCC_PLLP_DIV2;
  RCC_OscInitStruct.PLL.PLLQ = 7;
  HAL_RCC_OscConfig(&RCC_OscInitStruct);

  /* Select PLL as system clock source and configure the HCLK, PCLK1 and PCLK2
     clocks dividers */
  RCC_ClkInitStruct.ClockType = (RCC_CLOCKTYPE_SYSCLK | RCC_CLOCKTYPE_HCLK | RCC_CLOCKTYPE_PCLK1 | RCC_CLOCKTYPE_PCLK2);
  RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK;
  RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1;
  RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV4;
  RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV2;
  HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_5);

  /* STM32F405x/407x/415x/417x Revision Z devices: prefetch is supported */
  if (HAL_GetREVID() == 0x1001)
  {
    /* Enable the Flash prefetch */
    __HAL_FLASH_PREFETCH_BUFFER_ENABLE();
  }
}
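For context, in the Cube examples this function is called once near the top of main(), before any peripheral initialization. A minimal sketch of that call order (only HAL_Init() and SystemClock_Config() come from the example; the rest is assumed application code):
#include "stm32f4xx_hal.h"      /* HAL header, assumed present */

int main(void)
{
    HAL_Init();                 /* reset of all peripherals, init of Flash interface and SysTick */
    SystemClock_Config();       /* bring SYSCLK/HCLK up to 168 MHz as shown above */

    /* ... peripheral initialization and application code (assumed) ... */
    while (1)
    {
    }
}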
I found the solution now!
For anyone having similar problems:
You need to declare the SysTick_Handler function as
extern "C" void SysTick_Handler(void)
{
    // Tick for the delay counter
    if (Systick_Delay != 0x00)
    {
        Systick_Delay--;
    }
}
Now it is working as it is supposed to.
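The likely underlying reason is C++ name mangling: the startup file declares SysTick_Handler as a weak C symbol aliased to a default handler, so a handler compiled as C++ gets a mangled name, never replaces the weak default, and Systick_Delay is never decremented (which matches edit 3). A common way to keep the whole module usable from both .c and .cpp files is to guard the header, roughly like this sketch (function names are taken from the question; the guard itself is the standard idiom):
/* systick.h -- sketch of a header usable from both C and C++ */
#include <stdint.h>

#ifdef __cplusplus
extern "C" {
#endif

void systick_init(void);
void delay_us(volatile uint32_t delay);
void SysTick_Handler(void);   /* must keep C linkage to override the weak default handler */

#ifdef __cplusplus
}
#endif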
Related
I want to configure the PLL in an STM32F429 to its maximum frequency (180 MHz) without using STM32Cube-generated configuration. I am using my own register definitions like this
#define RCC_CFGR (*((volatile uint32 *)0x40023808))
and my own SET_BIT()/CLEAR_BIT() macros.
My questions are:
1. Is this procedure correct?
2. How can I check whether it is working?
3. Could the MCU be unable to handle this speed (reading/writing registers)?
#define PLL_M 16
#define PLL_N 360
#define PLL_P 2
#define PLL_Q 7

void PLL_init(void)
{
    /* System init */
    /* HSI on */
    SET_BIT(RCC_CR, HSION);

    /* Reset CFGR register */
    RCC_CFGR = 0x00000000;
    /* Reset PLLCFGR register */
    RCC_PLLCFGR = 0x24003010;
    /* Reset HSEON, CSSON and PLLON bits */
    RCC_CR &= (uint32_t)0xFEF6FFFF;

    /************* SetSysClock ************/
    RCC_PLLCFGR = PLL_M | (PLL_N << 6) | (((PLL_P >> 1) - 1) << 16) | (PLL_Q << 24);
    /* PLL clock source HSI or HSE */
    CLEAR_BIT(RCC_PLLCFGR, PLLSRC);

    /* APB1 PWR enable */
    SET_BIT(RCC_APB1ENR, PWREN);
    /* Select regulator voltage output Scale 1 mode, system frequency up to 180 MHz */
    SET_BIT(PWR_CR, VOS0);
    SET_BIT(PWR_CR, VOS1);

    /* AHB div 1 */
    CLEAR_BIT(RCC_CFGR, HPRE3);
    CLEAR_BIT(RCC_CFGR, HPRE0);
    /* APB2 div 2 */
    CLEAR_BIT(RCC_CFGR, PPRE20);
    CLEAR_BIT(RCC_CFGR, PPRE21);
    SET_BIT(RCC_CFGR, PPRE23);
    /* APB1 div 8 */
    CLEAR_BIT(RCC_CFGR, PPRE10);
    SET_BIT(RCC_CFGR, PPRE11);
    SET_BIT(RCC_CFGR, PPRE12);

    /* Set PLL on */
    SET_BIT(RCC_CR, PLLON);
    /* Check PLL is ready */
    while (BIT_IS_CLEAR(RCC_CR, PLLRDY));

    /* Enable the over-drive to extend the clock frequency to 180 MHz */
    SET_BIT(PWR_CR, ODEN);
    while (BIT_IS_CLEAR(PWR_CSR, ODRDY));
    SET_BIT(PWR_CR, ODSWEN);
    while (BIT_IS_CLEAR(PWR_CSR, ODSWRDY));

    SET_BIT(FLASH_ACR, PRFTEN);
    SET_BIT(FLASH_ACR, ICEN);
    SET_BIT(FLASH_ACR, DCEN);
    FLASH_ACR |= FLASH_ACR_LATENCY_5WS;

    /* Select the main PLL as system clock source */
    CLEAR_BIT(RCC_CFGR, SW0);
    SET_BIT(RCC_CFGR, SW1);
    /* Wait till the main PLL is used as system clock source */
    while (!BIT_IS_CLEAR(RCC_CFGR, SWS0) && !BIT_IS_SET(RCC_CFGR, SWS1)); /* Loop till it is set */
}
I did exactly this on my STM32F746: setting 216 MHz with registers. Your code looks very similar to mine and I am not seeing anything obviously wrong. You wait for all the ready flags, overdrive and overdrive switching are there, and the flash wait states are there.
How to check whether it's working: a timer is an easy approach. Use a timer that is clocked from the system clock through some prescaler and outputs PWM to a pin; since you set the timer configuration yourself, you know what output frequency to expect. You can also choose the parameters so that a timer interrupt toggles an LED every 1 s, as sketched below.
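A minimal sketch of the LED idea using SysTick instead of a general-purpose timer (CMSIS only; toggle_led() and the GPIO setup are assumed to exist elsewhere, and the hard-coded 180 MHz is deliberately the expected frequency, so a misconfigured clock shows up as a wrong blink rate):
#include <stdint.h>
#include "stm32f4xx.h"          /* CMSIS device header, assumed present */

extern void toggle_led(void);   /* assumed user-provided LED toggle */

static volatile uint32_t ms_ticks;

void SysTick_Handler(void)
{
    ms_ticks++;                 /* one tick per intended millisecond */
}

int main(void)
{
    /* 1 ms tick *if* SYSCLK really is 180 MHz; deliberately not using
       SystemCoreClock here, otherwise the test would self-correct */
    SysTick_Config(180000000U / 1000U);

    while (1)
    {
        if (ms_ticks >= 1000U)  /* should correspond to one wall-clock second */
        {
            ms_ticks = 0;
            toggle_led();
        }
    }
}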
The MCU should handle it fine as long as the AHB/APB prescalers are within limits and the flash latency is set correctly, which seems to be the case.
If you want to compare the logic with what I do with STM32F746 (which looks almost identical in terms of how you do it), you can check it in this rcc.c file of mine. It definitely works, I tested it.
I am trying to get the STM32 EEPROM emulator working. I have followed the example given for an STM32L47x board, however I am still running into issues. When I call EE_Init I end up with a stack overflow. I am not too familiar with this emulator and am using the default configuration from the example.
This is how I am initializing everything.
EE_Status ee_status = EE_OK;

/* Enable and set FLASH interrupt priority */
/* The FLASH interrupt is used for the purpose of page clean-up under interrupt */
HAL_NVIC_SetPriority(FLASH_IRQn, 0, 0);
HAL_NVIC_EnableIRQ(FLASH_IRQn);

HAL_FLASH_Unlock();

if (__HAL_PWR_GET_FLAG(PWR_FLAG_SB) == RESET)
{
    /* Blink LED_OK (Green) twice at startup */
    LEDInterface_toggleColor(GREEN);
    HAL_Delay(100);
    LEDInterface_toggleColor(NONE);
    HAL_Delay(100);
    LEDInterface_toggleColor(GREEN);
    HAL_Delay(100);
    LEDInterface_toggleColor(NONE);

    ee_status = EE_Init(EE_FORCED_ERASE);
    if (ee_status != EE_OK)
    {
        while (1);
    }
}
These are the eeprom_emul_conf.h settings, which I also have not changed:
/* Configuration of eeprom emulation in flash, can be custom */
#if defined (STM32L4R5xx) || defined (STM32L4R7xx) || defined (STM32L4R9xx) || defined (STM32L4S5xx) || defined (STM32L4S7xx) || defined (STM32L4S9xx)
#define START_PAGE_ADDRESS 0x08100000U /*!< Start address of the 1st page in flash, for EEPROM emulation */
#else
#define START_PAGE_ADDRESS 0x08080000U /*!< Start address of the 1st page in flash, for EEPROM emulation */
#endif
#define CYCLES_NUMBER 1U /*!< Number of 10Kcycles requested, minimum 1 for 10Kcycles (default),
for instance 10 to reach 100Kcycles. This factor will increase
pages number */
#define GUARD_PAGES_NUMBER 2U /*!< Number of guard pages avoiding frequent transfers (must be multiple of 2): 0,2,4.. */
/* Configuration of CRC calculation for eeprom emulation in flash */
#define CRC_POLYNOMIAL_LENGTH   LL_CRC_POLYLENGTH_16B /* CRC polynomial length 16 bits */
#define CRC_POLYNOMIAL_VALUE    0x8005U               /* Polynomial to use for CRC calculation */
I end up in the osal_hooks.c file, where I get stuck in this while loop:
#if defined(DOXYGEN)
void vApplicationStackOverflowHook( TaskHandle_t xTask, char *pcTaskName )
#else
OSAL_WEAK_FN(void, vApplicationStackOverflowHook)( TaskHandle_t xTask, char *pcTaskName )
#endif
{
    volatile char * name = pcTaskName;
    (void)name;

    while (1)
    {
        ;
    }
}
I'm sure I need to change how much memory I allocate, but what is the best way to go about this? Thank you.
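Since vApplicationStackOverflowHook() is the FreeRTOS stack-overflow hook, one likely direction is raising the stack depth of whichever task calls EE_Init(). A sketch, assuming tasks are created directly with the FreeRTOS API (task name, function and stack size below are placeholders, not from the project):
#include "FreeRTOS.h"
#include "task.h"

/* Placeholder task that wraps the EEPROM init shown above */
static void eepromTask(void *pvParameters)
{
    (void)pvParameters;
    /* ... EE_Init(EE_FORCED_ERASE) and the rest of the init code would go here ... */
    for (;;) { }
}

void createEepromTask(void)
{
    /* Stack depth is in words: 1024 words = 4 KB on a 32-bit MCU, raised from a
       typical default of 128-256 words. Tune it to what EE_Init() actually needs. */
    xTaskCreate(eepromTask, "eeprom", 1024, NULL, tskIDLE_PRIORITY + 1, NULL);
}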
I get OVERRUN errors on the UART peripheral because I keep receiving UART data while my code is stalled executing a write operation on flash.
I'm using interrupts for the UART, and as explained in application note AN3969:
EEPROM emulation firmware runs from the internal Flash, thus access to
the Flash will be stalled during operations requiring Flash erase or
programming (EEPROM initialization, variable update or page erase). As
a consequence, the application code is not executed and the interrupt
can not be served.
This behavior may be acceptable for many applications, however for
applications with realtime constraints, you need to run the critical
processes from the internal RAM.
In this case:
Relocate the vector table in the internal RAM.
Execute all critical processes and interrupt service routines from the internal RAM. The compiler provides a keyword to declare functions
as a RAM function; the function is copied from the Flash to the RAM at
system startup just like any initialized variable. It is important to
note that for a RAM function, all used variable(s) and called
function(s) should be within the RAM.
So I searched on the internet and found AN4808, which provides examples of how to keep interrupts running during flash operations.
I went ahead and modified my code:
Linker script: added the vector table to SRAM and defined a .ramfunc section
/* stm32f417.dld */
ENTRY(Reset_Handler)

MEMORY
{
    ccmram(xrw)     : ORIGIN = 0x10000000, LENGTH = 64k
    sram            : ORIGIN = 0x20000000, LENGTH = 112k
    eeprom_default  : ORIGIN = 0x08004008, LENGTH = 16376
    eeprom_s1       : ORIGIN = 0x08008000, LENGTH = 16k
    eeprom_s2       : ORIGIN = 0x0800C000, LENGTH = 16k
    flash_unused    : ORIGIN = 0x08010000, LENGTH = 64k
    flash           : ORIGIN = 0x08020000, LENGTH = 896k
}

_end_stack = 0x2001BFF0;

SECTIONS
{
    . = ORIGIN(eeprom_default);
    .eeprom_data :
    {
        *(.eeprom_data)
    } >eeprom_default

    . = ORIGIN(flash);
    .vectors :
    {
        _load_vector = LOADADDR(.vectors);
        _start_vector = .;
        *(.vectors)
        _end_vector = .;
    } >sram AT >flash

    .text :
    {
        *(.text)
        *(.rodata)
        *(.rodata*)
        _end_text = .;
    } >flash

    .data :
    {
        _load_data = LOADADDR(.data);
        . = ALIGN(4);
        _start_data = .;
        *(.data)
    } >sram AT >flash

    .ramfunc :
    {
        . = ALIGN(4);
        *(.ramfunc)
        *(.ramfunc.*)
        . = ALIGN(4);
        _end_data = .;
    } >sram AT >flash

    .ccmram :
    {
        _load_ccmram = LOADADDR(.ccmram);
        . = ALIGN(4);
        _start_ccmram = .;
        *(.ccmram)
        *(.ccmram*)
        . = ALIGN(4);
        _end_ccmram = .;
    } >ccmram AT >flash

    .bss :
    {
        _start_bss = .;
        *(.bss)
        _end_bss = .;
    } >sram

    . = ALIGN(4);
    _start_stack = .;
}

_end = .;
PROVIDE(end = .);
Reset handler: added the copy of the vector table to SRAM
/* Linker script symbols (likely declared in a header in the original project) */
extern unsigned int _load_vector, _start_vector, _end_vector;
extern unsigned int _load_data, _start_data, _end_data;
extern unsigned int _load_ccmram, _start_ccmram, _end_ccmram;
extern unsigned int _start_bss, _end_bss;

void Reset_Handler(void)
{
    unsigned int *src, *dst;

    /* Copy vector table from flash to RAM */
    src = &_load_vector;
    dst = &_start_vector;
    while (dst < &_end_vector)
        *dst++ = *src++;

    /* Copy data section from flash to RAM */
    src = &_load_data;
    dst = &_start_data;
    while (dst < &_end_data)
        *dst++ = *src++;

    /* Copy data section from flash to CCRAM */
    src = &_load_ccmram;
    dst = &_start_ccmram;
    while (dst < &_end_ccmram)
        *dst++ = *src++;

    /* Clear the bss section */
    dst = &_start_bss;
    while (dst < &_end_bss)
        *dst++ = 0;

    SystemInit();
    SystemCoreClockUpdate();

    RCC->AHB1ENR = 0xFFFFFFFF;
    RCC->AHB2ENR = 0xFFFFFFFF;
    RCC->AHB3ENR = 0xFFFFFFFF;
    RCC->APB1ENR = 0xFFFFFFFF;
    RCC->APB2ENR = 0xFFFFFFFF;

    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOBEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOCEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIODEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOEEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOFEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOGEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOHEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOIEN;
    RCC->AHB1ENR |= RCC_AHB1ENR_CCMDATARAMEN;

    main();

    while (1);
}
system_stm32f4xx.c: uncommented the VECT_TAB_SRAM define
/*!< Uncomment the following line if you need to relocate your vector Table in
Internal SRAM. */
#define VECT_TAB_SRAM
#define VECT_TAB_OFFSET 0x00 /*!< Vector Table base offset field.
This value must be a multiple of 0x200. */
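For reference, this define is consumed at the end of SystemInit(), which in the stock ST system file sets the vector table base roughly like this (paraphrased excerpt, not your exact file):
/* Inside SystemInit(), paraphrased from the stock system_stm32f4xx.c */
#ifdef VECT_TAB_SRAM
    SCB->VTOR = SRAM_BASE | VECT_TAB_OFFSET;  /* vector table relocated to internal SRAM */
#else
    SCB->VTOR = FLASH_BASE | VECT_TAB_OFFSET; /* vector table kept in internal flash */
#endif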
Added a definition of RAMFUNC to set the section attribute:
#define RAMFUNC __attribute__ ((section (".ramfunc")))
Added RAMFUNC before the UART-related functions and prototypes so they run from RAM.
RAMFUNC void USART1_IRQHandler(void)
{
    uint32_t sr = USART1->SR;

    USART1->SR & USART_SR_ORE ? GPIO_SET(LED_ERROR_PORT, LED_ERROR_PIN_bp) : GPIO_CLR(LED_ERROR_PORT, LED_ERROR_PIN_bp);

    if (sr & USART_SR_TXE)
    {
        if (uart_1_send_write_pos != uart_1_send_read_pos)
        {
            USART1->DR = uart_1_send_buffer[uart_1_send_read_pos];
            uart_1_send_read_pos = (uart_1_send_read_pos + 1) % USART_1_SEND_BUF_SIZE;
        }
        else
        {
            USART1->CR1 &= ~USART_CR1_TXEIE;
        }
    }

    if (sr & (USART_SR_RXNE | USART_SR_ORE))
    {
        USART1->SR &= ~(USART_SR_RXNE | USART_SR_ORE);
        uint8_t byte = USART1->DR;
        uart_1_recv_buffer[uart_1_recv_write_pos] = byte;
        uart_1_recv_write_pos = (uart_1_recv_write_pos + 1) % USART_1_RECV_BUF_SIZE;
    }
}
My target runs properly with the vector table and UART functions in RAM, but I still get an overrun on the USART. I'm not disabling interrupts when performing the flash write operation.
I also tried to run the code from CCM RAM instead of SRAM, but as I saw in this post, code can't be executed from CCM RAM on the STM32F4xx...
Any ideas? Thanks.
Any attempt to read from flash while a write operation is ongoing causes the bus to stall.
In order not to be blocked by flash writes, I think not only the interrupt code but also the interrupted function has to run from RAM, otherwise the core cannot get to a state where interrupts are possible.
Try relocating the flash handling code to RAM.
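For example, the routine that actually programs the flash (and its busy-wait) could be tagged with the RAMFUNC macro from the question; a rough sketch for the F4 register interface, assuming the flash is already unlocked and PSIZE is configured elsewhere (eeprom_WriteWord is an illustrative name, not a real API):
#include "stm32f4xx.h"   /* CMSIS device header, assumed present */

#define RAMFUNC __attribute__ ((section (".ramfunc")))

/* Illustrative only: the whole write path must live in RAM and must not call
   anything that still resides in flash while the BSY flag is set. */
RAMFUNC void eeprom_WriteWord(volatile uint32_t *addr, uint32_t data)
{
    while (FLASH->SR & FLASH_SR_BSY) { }   /* wait for any previous operation */

    FLASH->CR |= FLASH_CR_PG;              /* enable programming mode */
    *addr = data;                          /* this is where flash reads start to stall */
    while (FLASH->SR & FLASH_SR_BSY) { }   /* the core keeps fetching this loop from RAM */
    FLASH->CR &= ~FLASH_CR_PG;
}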
If it's possible, I'd advise switching to an MCU with two independent banks of flash memory, like the pin- and software-compatible 427/429/437/439 series. You can dedicate one bank to program code and the other to EEPROM-like data storage, then writing the second bank won't disturb code running from the first bank.
As suggested, it might be necessary to execute code from RAM; or, rather, make sure that no flash read operations are performed while the write is in progress.
To test, you might want to compile the entire executable for RAM, rather than flash (i.e., place everything into RAM and not use the flash at all).
You could then use gdb to load the binary and start execution... test your uart and make sure it is working as expected. At least this way you can be sure the flash is unused.
Some micros have READ WHILE WRITE sections that do not have a problem performing multiple operations simultaneously.
I'm trying to program an LPC824 microcontroller board (https://www.switch-science.com/catalog/2265/) with LPCOpen.
I'm using it with an LPCLink 2 debugger board.
My goal is to get some data from the pressure sensor with the ADC.
My code stops with a HardFault when executing NVIC_EnableIRQ (the NVIC_EnableIRQ(ADC_SEQA_IRQn) call below).
If I don't use the NVIC interrupt controller, my code works and I can read values from the sensor with the ADC.
What am I doing wrong?
Here is my adc.c code:
#include "board.h"
static volatile int ticks;
static bool sequenceComplete = false;
static bool thresholdCrossed = false;
#define TICKRATE_HZ (100) /* 100 ticks per second */
#define BOARD_ADC_CH 2
/**
 * @brief  Handle interrupt from ADC sequencer A
 * @return Nothing
 */
void ADC_SEQA_IRQHandler(void) {
    uint32_t pending;

    /* Get pending interrupts */
    pending = Chip_ADC_GetFlags(LPC_ADC);

    /* Sequence A completion interrupt */
    if (pending & ADC_FLAGS_SEQA_INT_MASK) {
        sequenceComplete = true;
    }

    /* Threshold crossing interrupt on ADC input channel */
    if (pending & ADC_FLAGS_THCMP_MASK(BOARD_ADC_CH)) {
        thresholdCrossed = true;
    }

    /* Clear any pending interrupts */
    Chip_ADC_ClearFlags(LPC_ADC, pending);
}
/**
 * @brief  Handle interrupt from SysTick timer
 * @return Nothing
 */
void SysTick_Handler(void) {
    static uint32_t count;

    /* Every 1/2 second */
    if (count++ == TICKRATE_HZ / 2) {
        count = 0;
        Chip_ADC_StartSequencer(LPC_ADC, ADC_SEQA_IDX);
    }
}
/**
 * @brief  main routine for ADC example
 * @return Function should not exit
 */
int main(void) {
    uint32_t rawSample;
    int j;

    SystemCoreClockUpdate();
    Board_Init();

    /* Setup ADC for 12-bit mode and normal power */
    Chip_ADC_Init(LPC_ADC, 0);
    Chip_ADC_Init(LPC_ADC, ADC_CR_MODE10BIT);

    /* Need to do a calibration after initialization and trim */
    Chip_ADC_StartCalibration(LPC_ADC);
    while (!(Chip_ADC_IsCalibrationDone(LPC_ADC))) {
    }

    /* Setup for maximum ADC clock rate using synchronous clocking */
    Chip_ADC_SetClockRate(LPC_ADC, ADC_MAX_SAMPLE_RATE);

    Chip_ADC_SetupSequencer(LPC_ADC, ADC_SEQA_IDX,
                            (ADC_SEQ_CTRL_CHANSEL(BOARD_ADC_CH) | ADC_SEQ_CTRL_MODE_EOS));

    Chip_Clock_EnablePeriphClock(SYSCTL_CLOCK_SWM);
    Chip_SWM_EnableFixedPin(SWM_FIXED_ADC2);
    Chip_Clock_DisablePeriphClock(SYSCTL_CLOCK_SWM);

    /* Setup threshold 0 low and high values to about 25% and 75% of max */
    Chip_ADC_SetThrLowValue(LPC_ADC, 0, ((1 * 0xFFF) / 4));
    Chip_ADC_SetThrHighValue(LPC_ADC, 0, ((3 * 0xFFF) / 4));

    Chip_ADC_ClearFlags(LPC_ADC, Chip_ADC_GetFlags(LPC_ADC));

    Chip_ADC_EnableInt(LPC_ADC,
                       (ADC_INTEN_SEQA_ENABLE | ADC_INTEN_OVRRUN_ENABLE));
    Chip_ADC_SelectTH0Channels(LPC_ADC, ADC_THRSEL_CHAN_SEL_THR1(BOARD_ADC_CH));
    Chip_ADC_SetThresholdInt(LPC_ADC, BOARD_ADC_CH, ADC_INTEN_THCMP_CROSSING);

    /* Enable ADC NVIC interrupt */
    NVIC_EnableIRQ(ADC_SEQA_IRQn);

    Chip_ADC_EnableSequencer(LPC_ADC, ADC_SEQA_IDX);

    SysTick_Config(SystemCoreClock / TICKRATE_HZ);

    /* Endless loop */
    while (1) {
        /* Sleep until something happens */
        __WFI();

        if (thresholdCrossed) {
            thresholdCrossed = false;
            printf("********ADC threshold event********\r\n");
        }

        /* Is a conversion sequence complete? */
        if (sequenceComplete) {
            sequenceComplete = false;

            /* Get raw sample data for channels 0-11 */
            for (j = 0; j < 12; j++) {
                rawSample = Chip_ADC_GetDataReg(LPC_ADC, j);

                /* Show some ADC data */
                if (rawSample & (ADC_DR_OVERRUN | ADC_SEQ_GDAT_DATAVALID)) {
                    printf("Chan: %d Val: %d\r\n", j, ADC_DR_RESULT(rawSample));
                    printf("Threshold range: 0x%x ", ADC_DR_THCMPRANGE(rawSample));
                    printf("Threshold cross: 0x%x\r\n", ADC_DR_THCMPCROSS(rawSample));
                    printf("Overrun: %s ", (rawSample & ADC_DR_OVERRUN) ? "true" : "false");
                    printf("Data Valid: %s\r\n\r\n",
                           (rawSample & ADC_SEQ_GDAT_DATAVALID) ? "true" : "false");
                }
            }
        }
    }
}
A hard fault usually means that you are trying to execute code outside allowed addresses. If you have enabled an interrupt but not registered it in the vector table, the MCU will jump to whatever address happens to be written there, after which the program crashes.
How to fix that depends on the toolchain. Assuming LPCXpresso, you have several options for setting up libraries (I don't know about LPCOpen specifically), so where to find the vector table differs from case to case. However, this works quite similarly on most MCUs, ARM or not. Somewhere in a "CRT start-up" file you should have something along the lines of this:
void (* const g_pfnVectors[])(void) = ...
This is an array of function pointers which becomes the vector table, allocated in memory at address 0 on Cortex-M. You have to place your function at the relevant interrupt vector. For example, it may say something like:
PIN_INT0_IRQHandler, // PIO INT0
If that's the interrupt you should implement, then you replace that line:
#include "my_irq_stuff.h"
...
void (* const g_pfnVectors[])(void) =
...
my_INT0, // PIO INT0
Assuming my_irq_stuff.h contains the function prototype my_INT0 for the interrupt service routine. The actual routine should be implemented in the corresponding .c file.
I need to write a program to implement a real-time clock for the ARM architecture (example: LPC213x). It should display hours, minutes and seconds.
I have no idea about ARM, so I am having trouble getting started. My code below is not working.
// ...
int main (void) {
    int hour = 0;
    int min = 0;
    int sec;

    init_serial();                 /* Init UART */
    Initialize();
    CCR = 0x11;
    PCONP = 0x1815BE;
    ILR = 0x1;                     /* Clearing interrupt */

    //printf("\nTime is %02d:%02x:%02d", hour, min, sec);

    while (1) {                    /* Loop forever */
    }
}
void Initialize()
{
    VPBDIV = 0x0;
    //CCR = 0x2;
    //ILR = 0x3;
    HOUR = 0x0;
    SEC = 0x0;
    MIN = 0x0;
    ILR = 0x03;
    CCR = (1 << 4) | (1 << 0);
    VICVectAddr13 = (unsigned)read_rtc;
    VICVectCntl13 |= 0x20 | VIC_RTC;
    VICIntEnable |= (1 << VIC_RTC);
}
/* Interrupt service routine */
__irq void read_rtc()
{
    int hour = 0;
    int min = 0;
    int sec;

    ILR = 0x1;                     /* Clearing interrupt */
    hour = (CTIME0 & MASKHR) >> 16;
    min  = (CTIME0 & MASKMIN) >> 8;
    sec  = CTIME0 & MASKSEC;
    printf("\nTime is %02d:%02x:%02d", hour, min, sec);
    //VICVectAddr = 0xff;
    VICVectAddr = 0;
}
According to this board description for the LPC213x, it is delivered with an example program called "Real-Time Clock - Demonstrates how the real-time clock can be used". This also implies that the board features real-time clock hardware, which is going to make it a lot easier.
I suggest you read up on that program, to figure out how to talk to the RTC hardware. The next step would be to solve the display requirements. The two obvious choices are either 7-segment LED displays, or an LCD.
Both are well-known technologies about which loads have been written; follow the Wikipedia links to find out more.
This is all for the LPC2468. We have a setTime function too, but I don't want to do ALL the work for you. ;) We have custom register files for ease of access, but if you look at the LPC manual, it's obvious where they correlate. You just have to shift values into the right place, and do bitwise operations. For example:
#define RTC_HOUR (*(volatile RTC_HOUR_t *)(RTC_BASE_ADDR + (uint32_t)0x28))
Time structure:
typedef struct {
    uint8_t  seconds;   /* Second value - [0,59] */
    uint8_t  minutes;   /* Minute value - [0,59] */
    uint8_t  hour;      /* Hour value - [0,23] */
    uint8_t  mDay;      /* Day of the month value - [1,31] */
    uint8_t  month;     /* Month value - [1,12] */
    uint16_t year;      /* Year value - [0,4095] */
    uint8_t  wDay;      /* Day of week value - [0,6] */
    uint16_t yDay;      /* Day of year value - [1,365] */
} rtcTime_t;
RTC functions:
void rtc_ClockStart(void) {
    /* Enable CLOCK into RTC */
    scb_ClockStart(M_RTC);
    RTC_CCR.B.CLKSRC = 1;
    RTC_CCR.B.CLKEN = 1;
    return;
}

void rtc_ClockStop(void) {
    RTC_CCR.B.CLKEN = 0;
    /* Disable CLOCK into RTC */
    scb_ClockStop(M_RTC);
    return;
}

void rtc_GetTime(rtcTime_t *p_localTime) {
    /* Read the RTC timer value */
    p_localTime->seconds = RTC_SEC.R;
    p_localTime->minutes = RTC_MIN.R;
    p_localTime->hour    = RTC_HOUR.R;
    p_localTime->mDay    = RTC_DOM.R;
    p_localTime->wDay    = RTC_DOW.R;
    p_localTime->yDay    = RTC_DOY.R;
    p_localTime->month   = RTC_MONTH.R;
    p_localTime->year    = RTC_YEAR.R;
}
System control block functions:
void scb_ClockStart(module_t module) {
    PCONP.R |= (uint32_t)1 << module;
}

void scb_ClockStop(module_t module) {
    PCONP.R &= ~((uint32_t)1 << module);
}
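A minimal usage sketch tying these together to print HH:MM:SS, assuming the register definitions and helper functions above and a printf retargeted to a UART (the crude polling loop is only for illustration):
#include <stdio.h>

int main(void)
{
    rtcTime_t now;

    rtc_ClockStart();                     /* enable and start the RTC */

    while (1)
    {
        rtc_GetTime(&now);                /* read the current time registers */
        printf("Time is %02u:%02u:%02u\r\n",
               (unsigned)now.hour, (unsigned)now.minutes, (unsigned)now.seconds);
        /* ... wait roughly a second here (timer, RTC increment interrupt, ...) ... */
    }
}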
If you need information about ARM, the book ARM System Developer's Guide: Designing and Optimizing System Software may help you.
We used to do something like this on ARM:
#include "LPC21xx.h"

void rtc()
{
    *IODIR1 = 0x00FF0000;          // Set LED ports to output
    *IOSET1 = 0x00020000;

    *PREINT  = 0x000001C8;         // Set RTC prescaler for 12.000 MHz Xtal
    *PREFRAC = 0x000061C0;

    *CCR  = 0x01;
    *SEC  = 0;
    *MIN  = 0;
    *HOUR = 0;
}
A real-time clock (RTC) is a computer clock (most often in the form of an integrated circuit) that keeps track of the current time. Although the term often refers to the devices in personal computers, servers and embedded systems, RTCs are present in almost any electronic device which needs to keep accurate time.
You may refer to these two links; I am sure they will give you further understanding:
1. ARM Cortex programming using CMSIS: http://www.firmcodes.com/cmsis/
2. RTC programming with the ARM7: http://www.firmcodes.com/microcontrollers/arm/real-time-clock-of-arm7-lpc2148/