I need to write a program that implements a real-time clock on an ARM part, for example the LPC213x. It should display hours, minutes and seconds. I have no experience with ARM, so I'm having trouble getting started.
My code below is not working:
// ...
int main (void) {
    int hour = 0;
    int min  = 0;
    int sec;

    init_serial();                 /* Init UART */
    Initialize();

    CCR   = 0x11;
    PCONP = 0x1815BE;
    ILR   = 0x1;                   // Clearing Interrupt
    //printf("\nTime is %02d:%02x:%02d",hour,min,sec);

    while (1) {                    /* Loop forever */
    }
}

void Initialize()
{
    VPBDIV = 0x0;
    //CCR=0x2;
    //ILR=0x3;
    HOUR = 0x0;
    SEC  = 0x0;
    MIN  = 0x0;
    ILR  = 0x03;
    CCR  = (1<<4) | (1<<0);

    VICVectAddr13  = (unsigned)read_rtc;
    VICVectCntl13 |= 0x20 | VIC_RTC;
    VICIntEnable  |= (1 << VIC_RTC);
}

/* Interrupt Service Routine */
__irq void read_rtc()
{
    int hour = 0;
    int min  = 0;
    int sec;

    ILR  = 0x1;                    // Clearing Interrupt
    hour = (CTIME0 & MASKHR) >> 16;
    min  = (CTIME0 & MASKMIN) >> 8;
    sec  = CTIME0 & MASKSEC;
    printf("\nTime is %02d:%02x:%02d",hour,min,sec);
    //VICVectAddr=0xff;
    VICVectAddr = 0;
}
According to this board description for the LPC213x, it is delivered with an example program called "Real-Time Clock - Demonstrates how the real-time clock can be used". This also implies that the board features real-time clock hardware, which is going to make it a lot easier.
I suggest you read up on that program, to figure out how to talk to the RTC hardware. The next step would be to solve the display requirements. The two obvious choices are either 7-segment LED displays, or an LCD.
Both are well-known technologies about which loads have been written; follow the Wikipedia links to find out more.
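For a first pass you don't even need the RTC interrupt: once the clock is enabled you can simply poll the consolidated time register and print the time over the UART. Below is a minimal polling sketch, not a tested implementation; it assumes a Keil-style LPC21xx header (as used further down in this thread) and the init_serial() routine from the question, so double-check the register names, the CTIME0 field layout and the 32 kHz clock-source bit (CCR bit 4) against the LPC213x user manual.
/* Minimal RTC polling sketch for an LPC213x -- assumptions noted above. */
#include <LPC21xx.h>
#include <stdio.h>

extern void init_serial(void);        /* UART init from the question */

int main(void)
{
    unsigned int t, last = 0xFFFFFFFF;

    init_serial();

    CCR  = 0x00;                      /* stop the clock while setting it    */
    SEC  = 0;
    MIN  = 0;
    HOUR = 0;
    CCR  = (1 << 4) | (1 << 0);       /* CLKSRC = 32.768 kHz osc, CLKEN = 1 */

    while (1) {
        t = CTIME0;                   /* consolidated time register         */
        if (t != last) {              /* changes once per second            */
            last = t;
            printf("\rTime is %02u:%02u:%02u",
                   (t >> 16) & 0x1Fu,   /* hours   */
                   (t >> 8)  & 0x3Fu,   /* minutes */
                    t        & 0x3Fu);  /* seconds */
        }
    }
}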
This is all for the LPC2468. We have a setTime function too, but I don't want to do ALL the work for you. ;) We have custom register files for ease of access, but if you look at the LPC manual, it's obvious where they correlate. You just have to shift values into the right place, and do bitwise operations. For example:
#define RTC_HOUR (*(volatile RTC_HOUR_t *)(RTC_BASE_ADDR + (uint32_t)0x28))
Time structure:
typedef struct {
    uint8_t  seconds;   /* Second value - [0,59] */
    uint8_t  minutes;   /* Minute value - [0,59] */
    uint8_t  hour;      /* Hour value - [0,23] */
    uint8_t  mDay;      /* Day of the month value - [1,31] */
    uint8_t  month;     /* Month value - [1,12] */
    uint16_t year;      /* Year value - [0,4095] */
    uint8_t  wDay;      /* Day of week value - [0,6] */
    uint16_t yDay;      /* Day of year value - [1,365] */
} rtcTime_t;
RTC functions:
void rtc_ClockStart(void) {
    /* Enable CLOCK into RTC */
    scb_ClockStart(M_RTC);
    RTC_CCR.B.CLKSRC = 1;
    RTC_CCR.B.CLKEN  = 1;
    return;
}

void rtc_ClockStop(void) {
    RTC_CCR.B.CLKEN = 0;
    /* Disable CLOCK into RTC */
    scb_ClockStop(M_RTC);
    return;
}

void rtc_GetTime(rtcTime_t *p_localTime) {
    /* Read the RTC timer value */
    p_localTime->seconds = RTC_SEC.R;
    p_localTime->minutes = RTC_MIN.R;
    p_localTime->hour    = RTC_HOUR.R;
    p_localTime->mDay    = RTC_DOM.R;
    p_localTime->wDay    = RTC_DOW.R;
    p_localTime->yDay    = RTC_DOY.R;
    p_localTime->month   = RTC_MONTH.R;
    p_localTime->year    = RTC_YEAR.R;
}
System control block functions:
void scb_ClockStart(module_t module) {
    PCONP.R |= (uint32_t)1 << module;
}

void scb_ClockStop(module_t module) {
    PCONP.R &= ~((uint32_t)1 << module);
}
If you need more general information about ARM, the book ARM System Developer's Guide: Designing and Optimizing System Software may help you.
We used to do something like this on ARM:
#include "LPC21xx.h"
void rtc()
{
*IODIR1 = 0x00FF0000;
// Set LED ports to output
*IOSET1 = 0x00020000;
*PREINT = 0x000001C8;
// Set RTC prescaler for 12.000Mhz Xtal
*PREFRAC = 0x000061C0;
*CCR = 0x01;
*SEC = 0;
*MIN = 0;
*HOUR= 0;
}
A real-time clock (RTC) is a computer clock (most often in the form of an integrated circuit) that keeps track of the current time. Although the term often refers to the devices in personal computers, servers and embedded systems, RTCs are present in almost any electronic device which needs to keep accurate time.
You may refer to these two links; I am sure they will give you further understanding:
1) ARM Cortex Programming using CMSIS: http://www.firmcodes.com/cmsis/
2) RTC Programming with ARM7: http://www.firmcodes.com/microcontrollers/arm/real-time-clock-of-arm7-lpc2148/
Related
I've been trying to write some code for the STM32F411RE using IAR Embedded Workbench in order to learn more about Cortex-M. I tried to implement TIMER3 in PWM mode (center-aligned), with TIMER2 firing roughly every half second (the exact period doesn't matter much; it just blinks an LED) and the ADC performing continuous regular conversion on one channel. I've tried to implement it all using interrupts. The TIMER3 interrupt is intended to be generated on overflow and underflow, and within the ISR I would change the PWM width using the value from the ADC (varied with a potentiometer).
The problem I faced while building the project is that, when TIMER3 is activated, the program never hits a breakpoint in (does not enter) the ADC ISR, nor in any line of the while(1) loop. When I comment out TIMER3, the program goes through the ADC ISR normally.
#include "stdio.h"
void Uart6Configuration(void);
void send_data (uint8_t c);
void init_PWM(void);
void init_ADC(void) ;
void init_Interupts(void);
unsigned long vrednost_ADC=0;
float temp=0;
unsigned long counter=0;
int main()
{
RCC->APB1ENR|=(1<<0); //TIMER 2
RCC->AHB1ENR|=(1<<0); //GPIOA
RCC->AHB1ENR|=(1<<2); //GPIOC
GPIOA->MODER|=(1<<10);
RCC->APB2ENR|=(1<<5); // USART6[PC6,PC7]
/* Define TIMER-a 3 */
RCC->APB1ENR|=(1<<1); //TIMER 3
GPIOB->MODER|=(1<<9);
GPIOB->AFR[0]|=(1<<17);
TIM2->PSC=89;
TIM2->ARR=0xFFFF;
TIM2->DIER|= (1<<0);
TIM2->EGR|= (1<<0);
Uart6Configuration();
init_PWM();
init_ADC();
init_Interupts();
TIM2->CR1|=(1<<0);
TIM3->CR1|=(1<<0);
while(!(TIM2->SR & (1<<0)));
ADC1->CR2|=(1<<30); // START ADC
/* MAIN PROGRAM LOOP */
while(1)
{
counter++;
if(counter>100000)
{
printf("AD konverzija=%f \n\r",temp); //Terminal I/O
counter=0;
}
}
/* ************************/
return 0;
}
void TIM2_IRQHandler(void )
{
if(TIM2->SR & TIM_SR_UIF)
{
TIM2->SR &= ~TIM_SR_UIF;
GPIOA->ODR^=(1<<5);
}
TIM2->SR =0;
}
void Uart6Configuration (void)
{
GPIOC->MODER |= (2<<12); // --> Alternate Function for pin PC6
GPIOC->MODER |= (2<<14); // --> Alternate Function for pin PC7
GPIOC->OSPEEDR|=(3<<12)|(3<<14);
GPIOC->AFR[0] |= (8<<24); // AF8 (USART6) in AFRL bits [27:24] -> PC6
GPIOC->AFR[0] |= (8<<28); // AF8 (USART6) in AFRL bits [31:28] -> PC7
USART6->CR1=0;
USART6->CR1|=(1<<13);
USART6->CR1 &= ~(1<<12);
USART6->BRR=(3<<0)|(104<<4);
USART6->CR1|=(1<<2);
USART6->CR1|=(1<<3);
}
void send_data (uint8_t c)
{
while(!(USART6->SR & (1<<6)));
USART6->DR=c;
}
uint8_t UART6_GetChar (void)
{
/*********** STEPS FOLLOWED *************
1. Wait for the RXNE bit to set. It indicates that the data has been received and can be read.
2. Read the data from USART_DR Register. This also clears the RXNE bit
****************************************/
uint8_t temp;
while (!(USART2->SR & (1<<5))); // wait for RXNE bit to set
temp = USART2->DR; // Read the data. This clears the RXNE also
return temp;
}
void init_PWM(void)
{
/*PB_4*/
TIM3->PSC=15;
TIM3->ARR=750;
TIM3->CR1|= (1<<5)|(1<<6) | (1<<2); // PWM centre-aligned mode
TIM3->CCER|=(1<<0); //Capture/Compare 1 output enable.
TIM3->CCR1=500; //DUTY CYCLE
TIM3->CCMR1|=(1<<5)|(1<<6); // PWM mode, bits 5 and 6
TIM3->DIER|=(1<<0);
}
void init_ADC(void)
{
RCC->APB2ENR|=(1<<8); // Clock for the ADC
GPIOA->MODER|=(1<<2)|(1<<3); // Analog mode PA.1
ADC1->SQR3|=(1<<0); // Choose channel ADC1/1
ADC1->CR1|=(1<<5); // EOCIE: interrupt generated when the ADC finishes a conversion
ADC1->CR2|=(1<<1)|(1<<0); // Continuous mode, ADC ON
}
void ADC_IRQHandler(void)
{
vrednost_ADC=ADC1->DR;
temp=(float)((vrednost_ADC/4095.0)*3.3) ;
}
void TIM3_IRQHandler(void )
{
if((TIM3->CNT & 10)<=0) // detect underflow
{
TIM3->CCR1=(vrednost_ADC/4095)*1000;
TIM3->EGR|=(1<<0);
}
}
void init_Interupts(void)
{
NVIC_SetPriority (ADC_IRQn, (13));
NVIC_SetPriority (TIM2_IRQn, 14);
NVIC_SetPriority (TIM3_IRQn, 15);
NVIC_EnableIRQ(TIM2_IRQn);
NVIC_EnableIRQ(TIM3_IRQn);
NVIC_EnableIRQ(ADC_IRQn );
}```
I'm working in Keil and using an LM3S316 microcontroller. We usually address registers in microcontrollers in the form:
#define GPIO_PORTC_DATA_R (*((volatile uint32_t *)0x400063FC))
My question is: how can I access a single pin of a port? For example, if I have this function:
char process_key(int a)
{
    PC_0 = a;
}
How can I get PC_0, and how do I define it?
Thank you
Given say:
#define PIN0 (1u<<0)
#define PIN1 (1u<<1)
#define PIN2 (1u<<2)
// etc...
Then:
char process_key(int a)
{
    if( a != 0 )
    {
        // Set bit
        GPIO_PORTC_DATA_R |= PIN0;
    }
    else
    {
        // Clear bit
        GPIO_PORTC_DATA_R &= ~PIN0;
    }
}
A generalisation of this idiomatic technique is presented at How do you set, clear, and toggle a single bit?
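Toggling or reading a pin follows the same pattern; here is a small sketch reusing GPIO_PORTC_DATA_R and the PINx masks above (uint32_t assumes <stdint.h> is included):
/* Helpers in the same idiom -- purely illustrative. */
static inline void portc_toggle(uint32_t pin_mask)
{
    GPIO_PORTC_DATA_R ^= pin_mask;              /* flip the selected pin(s)         */
}

static inline int portc_read(uint32_t pin_mask)
{
    return (GPIO_PORTC_DATA_R & pin_mask) != 0; /* 1 if any selected pin reads high */
}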
However the read-modify-write implied by |= / &= can be problematic if the register might be accessed in different thread/interrupt contexts, as well as adding a possibly undesirable overhead. Cortex-M3/4 parts have a feature known as bit-banding that allows individual bits to be addressed directly and atomically. Given:
volatile uint32_t* getBitBandAddress( volatile const void* address, int bit )
{
    __IO uint32_t* bit_address = 0;
    uint32_t addr = reinterpret_cast<uint32_t>(address);

    // This bit manipulation makes the function valid for the RAM
    // and Peripheral bit-band regions
    uint32_t word_band_base = addr & 0xf0000000u;
    uint32_t bit_band_base  = word_band_base | 0x02000000u;
    uint32_t offset         = addr - word_band_base;

    // Calculate bit-band address
    bit_address = reinterpret_cast<__IO uint32_t*>( bit_band_base + (offset * 32u) + (static_cast<uint32_t>(bit) * 4u) );

    return bit_address;
}
Then you can have:
char process_key(int a)
{
    static volatile uint32_t* PC0_BB_ADDR = getBitBandAddress( &GPIO_PORTC_DATA_R, 0 );

    *PC0_BB_ADDR = a;
}
You could of course determine and hard-code the bit-band address; for example:
#define PC0 (*((volatile uint32_t *)0x420C7F80u))
Then:
char process_key(int a)
{
    PC0 = a;
}
Details of the bit-band address calculation can be found in the ARM Cortex-M Technical Reference Manual, and there is an on-line calculator here.
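If you'd rather let the compiler do that arithmetic, the Cortex-M peripheral bit-band formula (alias = 0x42000000 + byte_offset * 32 + bit * 4, for peripherals in 0x40000000-0x400FFFFF) can be wrapped in a macro. A sketch, assuming <stdint.h> and the GPIO_PORTC_DATA_R address from the question:
/* Peripheral bit-band alias: each bit of a word in 0x40000000-0x400FFFFF maps
   to its own word at 0x42000000 + (byte_offset * 32) + (bit * 4). */
#define BITBAND_PERIPH(addr, bit) \
    (*(volatile uint32_t *)(0x42000000u + \
        (((uint32_t)(addr) - 0x40000000u) * 32u) + ((uint32_t)(bit) * 4u)))

/* Bit 0 of the LM3S316 port C data register (0x400063FC) -- expands to the
   same 0x420C7F80 alias used above. */
#define PC0 BITBAND_PERIPH(0x400063FCu, 0)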
I'm using a Nucleo L496ZG, an X-NUCLEO-IKS01A2 and the Quectel BG96 module to send sensor data (temperature, humidity etc.) to Azure IoT Central over HTTP.
I've been using the example implementation provided by Avnet here, which works fine, but it's not power optimized, and with a 6700 mAh battery pack it only lasts around 30 hours sending telemetry every ~10 seconds. The goal is for it to last around a week. I'm open to increasing the time between messages, but I also want to save power between sends.
I've gone over the Quectel BG96 manuals and I've tried two things:
1) Powering off the device by driving the PWRKEY and turning it back on when I need to send a message.
I've gotten this to work, kind of… until I get a HardFault exception, which happens seemingly at random anywhere from ~5 minutes of running to 2 hours (messages send successfully prior to the exception). The output of the crash log parser is the same every time:
Crash location = strncmp [0x08038DF8] (based on PC value)
Caller location = _findenv_r [0x0804119D] (based on LR value)
Stack Pointer at the time of crash = [20008128]
Target and Fault Info:
Processor Arch: ARM-V7M or above
Processor Variant: C24
Forced exception, a fault with configurable priority has been escalated to HardFault
A precise data access error has occurred. Faulting address: 03060B30
The caller location traces back to my .map file and I don't know what to make of it.
My code:
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
//#define USE_MQTT
#include <stdlib.h>
#include "mbed.h"
#include "iothubtransporthttp.h"
#include "iothub_client_core_common.h"
#include "iothub_client_ll.h"
#include "azure_c_shared_utility/platform.h"
#include "azure_c_shared_utility/agenttime.h"
#include "jsondecoder.h"
#include "bg96gps.hpp"
#include "azure_message_helper.h"
#define IOT_AGENT_OK CODEFIRST_OK
#include "azure_certs.h"
/* initialize the expansion board && sensors */
#include "XNucleoIKS01A2.h"
static HTS221Sensor *hum_temp;
static LSM6DSLSensor *acc_gyro;
static LPS22HBSensor *pressure;
static const char* connectionString = "xxx";
// to report F uncomment this #define CTOF(x) (((double)(x)*9/5)+32)
#define CTOF(x) (x)
Thread azure_client_thread(osPriorityNormal, 10*1024, NULL, "azure_client_thread");
static void azure_task(void);
EventFlags deleteOK;
size_t g_message_count_send_confirmations;
/* create the GPS elements for example program */
BG96Interface* bg96Interface;
//static int tilt_event;
// void mems_int1(void)
// {
// tilt_event++;
// }
void mems_init(void)
{
//acc_gyro->attach_int1_irq(&mems_int1); // Attach callback to LSM6DSL INT1
hum_temp->enable(); // Enable HTS221 environmental sensor
pressure->enable(); // Enable barometric pressure sensor
acc_gyro->enable_x(); // Enable LSM6DSL accelerometer
//acc_gyro->enable_tilt_detection(); // Enable Tilt Detection
}
void powerUp(void) {
if (platform_init() != 0) {
printf("Error initializing the platform\r\n");
return;
}
bg96Interface = (BG96Interface*) easy_get_netif(true);
}
void BG96_Modem_PowerOFF(void)
{
DigitalOut BG96_RESET(D7);
DigitalOut BG96_PWRKEY(D10);
DigitalOut BG97_WAKE(D11);
BG96_RESET = 0;
BG96_PWRKEY = 0;
BG97_WAKE = 0;
wait_ms(300);
}
void powerDown(){
platform_deinit();
BG96_Modem_PowerOFF();
}
//
// The main routine simply prints a banner, initializes the system
// starts the worker threads and waits for a termination (join)
int main(void)
{
//printStartMessage();
XNucleoIKS01A2 *mems_expansion_board = XNucleoIKS01A2::instance(I2C_SDA, I2C_SCL, D4, D5);
hum_temp = mems_expansion_board->ht_sensor;
acc_gyro = mems_expansion_board->acc_gyro;
pressure = mems_expansion_board->pt_sensor;
azure_client_thread.start(azure_task);
azure_client_thread.join();
platform_deinit();
printf(" - - - - - - - ALL DONE - - - - - - - \n");
return 0;
}
static void send_confirm_callback(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void* userContextCallback)
{
//userContextCallback;
// When a message is sent this callback will get invoked
g_message_count_send_confirmations++;
deleteOK.set(0x1);
}
void sendMessage(IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle, char* buffer, size_t size)
{
IOTHUB_MESSAGE_HANDLE messageHandle = IoTHubMessage_CreateFromByteArray((const unsigned char*)buffer, size);
if (messageHandle == NULL) {
printf("unable to create a new IoTHubMessage\r\n");
return;
}
if (IoTHubClient_LL_SendEventAsync(iotHubClientHandle, messageHandle, send_confirm_callback, NULL) != IOTHUB_CLIENT_OK)
printf("FAILED to send! [RSSI=%d]\n", platform_RSSI());
else
printf("OK. [RSSI=%d]\n",platform_RSSI());
IoTHubMessage_Destroy(messageHandle);
}
void azure_task(void)
{
//bool tilt_detection_enabled=true;
float gtemp, ghumid, gpress;
int k;
int msg_sent=1;
while (true) {
powerUp();
mems_init();
/* Setup IoTHub client configuration */
IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle = IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
if (iotHubClientHandle == NULL) {
printf("Failed on IoTHubClient_Create\r\n");
return;
}
// add the certificate information
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
printf("failure to set option \"TrustedCerts\"\r\n");
#if MBED_CONF_APP_TELUSKIT == 1
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "product_info", "TELUSIOTKIT") != IOTHUB_CLIENT_OK)
printf("failure to set option \"product_info\"\r\n");
#endif
// polls will happen effectively at ~10 seconds. The default value of minimumPollingTime is 25 minutes.
// For more information, see:
// https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
unsigned int minimumPollingTime = 9;
if (IoTHubClient_LL_SetOption(iotHubClientHandle, "MinimumPollingTime", &minimumPollingTime) != IOTHUB_CLIENT_OK)
printf("failure to set option \"MinimumPollingTime\"\r\n");
IoTDevice* iotDev = (IoTDevice*)malloc(sizeof(IoTDevice));
if (iotDev == NULL) {
return;
}
setUpIotStruct(iotDev);
char* msg;
size_t msgSize;
hum_temp->get_temperature(&gtemp); // get Temp
hum_temp->get_humidity(&ghumid); // get Humidity
pressure->get_pressure(&gpress); // get pressure
iotDev->Temperature = CTOF(gtemp);
iotDev->Humidity = (int)ghumid;
iotDev->Pressure = (int)gpress;
printf("(%04d)",msg_sent++);
msg = makeMessage(iotDev);
msgSize = strlen(msg);
sendMessage(iotHubClientHandle, msg, msgSize);
free(msg);
iotDev->Tilt &= 0x2;
/* schedule IoTHubClient to send events/receive commands */
IOTHUB_CLIENT_STATUS status;
while ((IoTHubClient_LL_GetSendStatus(iotHubClientHandle, &status) == IOTHUB_CLIENT_OK) && (status == IOTHUB_CLIENT_SEND_STATUS_BUSY))
{
IoTHubClient_LL_DoWork(iotHubClientHandle);
ThisThread::sleep_for(100);
}
deleteOK.wait_all(0x1);
free(iotDev);
IoTHubClient_LL_Destroy(iotHubClientHandle);
powerDown();
ThisThread::sleep_for(300000);
}
return;
}
I know PSM is probably the way to go, since powering the device on and off draws a lot of power, but it would be useful if someone had an idea of what is happening here.
2) Putting the device into PSM between sending messages.
The BG96 library in the example code I'm using doesn't have a method to turn on PSM, so I tried to implement my own. When I try to run it, it basically hits an exception right away, so I know it's wrong (I'm very new to embedded development and have no prior experience with AT commands).
/** ----------------------------------------------------------
 * This is a method provided by the current library.
 * @brief  Tx a string to the BG96 and wait for an OK response
 * @param  none
 * @retval true if OK received, false otherwise
 */
bool BG96::tx2bg96(char* cmd) {
bool ok=false;
_bg96_mutex.lock();
ok=_parser.send(cmd) && _parser.recv("OK");
_bg96_mutex.unlock();
return ok;
}
/**
* method I created in an attempt to use PSM
*/
bool BG96::psm(void) {
return tx2bg96((char*)"AT+CPSMS=1,,,”00000100”,”00000001”");
}
Can someone tell me what I'm doing wrong and provide any guidance on how I can achieve my goal of having my device run on battery for longer?
Thank you!!
I got Power Saving Mode working by using Mbed's ATCmdParser and the AT+QPSMS command, as per Quectel's docs. Note that the modem doesn't always go into power saving mode right away. I also found that I have to restart the modem afterwards, or else I get weird behaviour. My code looks something like this:
bool BG96::psm(char* T3412, char* T3324) {
    _bg96_mutex.lock();
    if (_parser.send("AT+QPSMS=1,,,\"%s\",\"%s\"", T3412, T3324) && _parser.recv("OK")) {
        _bg96_mutex.unlock();
    } else {
        _bg96_mutex.unlock();
        return false;
    }
    return BG96Ready();   // restarts the modem
}
To send a message to Azure, the modem needs to be woken up manually by driving the PWRKEY to start bi-directional communication, and a new client handle needs to be created and torn down every time, since the Azure connection uses keep-alive and the modem is unreachable while it's in PSM.
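Put differently, each reporting cycle ends up looking roughly like the sketch below. It reuses the helpers already present in the question (powerUp(), powerDown(), the IoTHubClient_LL_* calls); wakeModem() is only a placeholder for whatever PWRKEY toggling you use, not a library function.
// Rough per-message cycle -- a sketch, not a drop-in implementation.
void send_one_report(void)
{
    wakeModem();                        // placeholder: drive PWRKEY to leave PSM
    powerUp();                          // re-init the platform / network interface

    IOTHUB_CLIENT_LL_HANDLE h =
        IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
    if (h != NULL) {
        // ... set options, build the telemetry message, send it,
        //     pump IoTHubClient_LL_DoWork() until the send is confirmed ...
        IoTHubClient_LL_Destroy(h);     // tear the handle down every cycle
    }

    powerDown();                        // let the modem drop back into PSM
    ThisThread::sleep_for(300000);      // sleep until the next report
}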
I'm trying to program an LPC824 microcontroller board (https://www.switch-science.com/catalog/2265/) with LPCOpen.
I'm using it with an LPCLink 2 debugger board.
My goal is to get some information from the pressure sensor through the ADC.
My code stops with a HardFault when executing the NVIC_EnableIRQ call (line 92 of my source).
If I don't use the NVIC interrupt controller, my code works and I can get values from the sensor with the ADC.
What am I doing wrong?
Here is my adc.c code:
#include "board.h"
static volatile int ticks;
static bool sequenceComplete = false;
static bool thresholdCrossed = false;
#define TICKRATE_HZ (100) /* 100 ticks per second */
#define BOARD_ADC_CH 2
/**
 * @brief Handle interrupt from ADC sequencer A
 * @return Nothing
 */
void ADC_SEQA_IRQHandler(void) {
uint32_t pending;
/* Get pending interrupts */
pending = Chip_ADC_GetFlags(LPC_ADC);
/* Sequence A completion interrupt */
if (pending & ADC_FLAGS_SEQA_INT_MASK) {
sequenceComplete = true;
}
/* Threshold crossing interrupt on ADC input channel */
if (pending & ADC_FLAGS_THCMP_MASK(BOARD_ADC_CH)) {
thresholdCrossed = true;
}
/* Clear any pending interrupts */
Chip_ADC_ClearFlags(LPC_ADC, pending);
}
/**
 * @brief Handle interrupt from SysTick timer
 * @return Nothing
 */
void SysTick_Handler(void) {
static uint32_t count;
/* Every 1/2 second */
if (count++ == TICKRATE_HZ / 2) {
count = 0;
Chip_ADC_StartSequencer(LPC_ADC, ADC_SEQA_IDX);
}
}
/**
 * @brief main routine for ADC example
 * @return Function should not exit
 */
int main(void) {
uint32_t rawSample;
int j;
SystemCoreClockUpdate();
Board_Init();
/* Setup ADC for 12-bit mode and normal power */
Chip_ADC_Init(LPC_ADC, 0);
Chip_ADC_Init(LPC_ADC, ADC_CR_MODE10BIT);
/* Need to do a calibration after initialization and trim */
Chip_ADC_StartCalibration(LPC_ADC);
while (!(Chip_ADC_IsCalibrationDone(LPC_ADC))) {
}
/* Setup for maximum ADC clock rate using synchronous clocking */
Chip_ADC_SetClockRate(LPC_ADC, ADC_MAX_SAMPLE_RATE);
Chip_ADC_SetupSequencer(LPC_ADC, ADC_SEQA_IDX,
(ADC_SEQ_CTRL_CHANSEL(BOARD_ADC_CH) | ADC_SEQ_CTRL_MODE_EOS));
Chip_Clock_EnablePeriphClock(SYSCTL_CLOCK_SWM);
Chip_SWM_EnableFixedPin(SWM_FIXED_ADC2);
Chip_Clock_DisablePeriphClock(SYSCTL_CLOCK_SWM);
/* Setup threshold 0 low and high values to about 25% and 75% of max */
Chip_ADC_SetThrLowValue(LPC_ADC, 0, ((1 * 0xFFF) / 4));
Chip_ADC_SetThrHighValue(LPC_ADC, 0, ((3 * 0xFFF) / 4));
Chip_ADC_ClearFlags(LPC_ADC, Chip_ADC_GetFlags(LPC_ADC));
Chip_ADC_EnableInt(LPC_ADC,
(ADC_INTEN_SEQA_ENABLE | ADC_INTEN_OVRRUN_ENABLE));
Chip_ADC_SelectTH0Channels(LPC_ADC, ADC_THRSEL_CHAN_SEL_THR1(BOARD_ADC_CH));
Chip_ADC_SetThresholdInt(LPC_ADC, BOARD_ADC_CH, ADC_INTEN_THCMP_CROSSING);
/* Enable ADC NVIC interrupt */
NVIC_EnableIRQ(ADC_SEQA_IRQn);
Chip_ADC_EnableSequencer(LPC_ADC, ADC_SEQA_IDX);
SysTick_Config(SystemCoreClock / TICKRATE_HZ);
/* Endless loop */
while (1) {
/* Sleep until something happens */
__WFI();
if (thresholdCrossed) {
thresholdCrossed = false;
printf("********ADC threshold event********\r\n");
}
/* Is a conversion sequence complete? */
if (sequenceComplete) {
sequenceComplete = false;
/* Get raw sample data for channels 0-11 */
for (j = 0; j < 12; j++) {
rawSample = Chip_ADC_GetDataReg(LPC_ADC, j);
/* Show some ADC data */
if (rawSample & (ADC_DR_OVERRUN | ADC_SEQ_GDAT_DATAVALID)) {
printf("Chan: %d Val: %d\r\n", j, ADC_DR_RESULT(rawSample));
printf("Threshold range: 0x%x ",
ADC_DR_THCMPRANGE(rawSample));
printf("Threshold cross: 0x%x\r\n",
ADC_DR_THCMPCROSS(rawSample));
printf("Overrun: %s ",
(rawSample & ADC_DR_OVERRUN) ? "true" : "false");
printf("Data Valid: %s\r\n\r\n",
(rawSample & ADC_SEQ_GDAT_DATAVALID) ?
"true" : "false");
}
}
}
}
}
A hard fault usually means that you tried to execute code outside allowed addresses. If you enable an interrupt but have not registered its handler in the vector table, the MCU will jump to whatever address happens to be written there instead, after which the program crashes.
How to fix that depends on the tool chain. Assuming LPCXpresso, you have several options for setting up libraries (I don't know about LPCOpen specifically), so where to find the vector table differs from case to case. However, this works quite similarly on most MCUs, ARM or not. Somewhere in a "crt start-up" file you should have something along the lines of this:
void (* const g_pfnVectors[])(void) = ...
This is an array of function pointers which becomes the vector table, allocated in memory at address 0 on Cortex-M. You have to place your function at the relevant interrupt vector. For example, it may say something like:
PIN_INT0_IRQHandler, // PIO INT0
If that's the interrupt you should implement, then you replace that line:
#include "my_irq_stuff.h"
...
void (* const g_pfnVectors[])(void) =
...
my_INT0, // PIO INT0
Assuming my_irq_stuff.h contains the function prototype my_INT0 for the interrupt service routine. The actual routine should be implemented in the corresponding .c file.
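With an LPCOpen/LPCXpresso project (as in the question above) you usually don't edit the table by hand: the generated start-up file pre-fills every vector with a weak default handler, and defining a function with the exact expected name, e.g. ADC_SEQA_IRQHandler, overrides that weak alias at link time. The pattern typically looks something like the sketch below; the macro and handler names are from memory, so verify them against your own cr_startup_lpc82x.c.
/* Illustrative LPCXpresso-style start-up fragment -- check your own start-up file. */
#define ALIAS(f) __attribute__((weak, alias(#f)))

void IntDefaultHandler(void)            /* catch-all for unhandled interrupts */
{
    while (1) { }
}

/* Weak alias: replaced automatically if the application defines
   ADC_SEQA_IRQHandler() with this exact spelling, as adc.c above does. */
void ADC_SEQA_IRQHandler(void) ALIAS(IntDefaultHandler);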
I'm using an STM32F4 board with the CMSIS library and I want to set up interrupt-driven SPI, meaning an interrupt is triggered each time a byte is sent by the SPI peripheral. The initialisation function is below:
void init_SPI1(void)
{
NVIC_InitTypeDef NVIC_InitStructure;
GPIO_InitTypeDef GPIO_InitStruct;
SPI_InitTypeDef SPI_InitStruct;
RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOA, ENABLE);
GPIO_InitStruct.GPIO_Pin = GPIO_Pin_7 | GPIO_Pin_6 | GPIO_Pin_5|GPIO_Pin_4;
GPIO_InitStruct.GPIO_Mode = GPIO_Mode_AF;
GPIO_InitStruct.GPIO_OType = GPIO_OType_PP;
GPIO_InitStruct.GPIO_Speed = GPIO_Speed_50MHz;
GPIO_InitStruct.GPIO_PuPd = GPIO_PuPd_NOPULL;
GPIO_Init(GPIOA, &GPIO_InitStruct);
// connect SPI1 pins to SPI alternate function
//GPIO_PinAFConfig(GPIOA, GPIO_PinSource4, GPIO_AF_SPI1);
GPIO_PinAFConfig(GPIOA, GPIO_PinSource5, GPIO_AF_SPI1);
GPIO_PinAFConfig(GPIOA, GPIO_PinSource6, GPIO_AF_SPI1);
GPIO_PinAFConfig(GPIOA, GPIO_PinSource7, GPIO_AF_SPI1);
//Set chip select high
GPIOA->BSRRL |= GPIO_Pin_4; // set PE4 high
// enable peripheral clock
RCC_APB2PeriphClockCmd(RCC_APB2Periph_SPI1, ENABLE);
/* configure SPI1 in Mode 0
* CPOL = 0 --> clock is low when idle
* CPHA = 0 --> data is sampled at the first edge
*/
SPI_StructInit(&SPI_InitStruct); // set default settings
SPI_InitStruct.SPI_Direction = SPI_Direction_2Lines_FullDuplex; // set to full duplex mode, seperate MOSI and MISO lines
SPI_InitStruct.SPI_Mode = SPI_Mode_Master; // transmit in master mode, NSS pin has to be always high
SPI_InitStruct.SPI_DataSize = SPI_DataSize_8b; // one packet of data is 8 bits wide
SPI_InitStruct.SPI_CPOL = SPI_CPOL_Low; // clock is low when idle
SPI_InitStruct.SPI_CPHA = SPI_CPHA_1Edge; // data sampled at first edge
SPI_InitStruct.SPI_NSS = SPI_NSS_Soft ; // set the NSS management to internal and pull internal NSS high
SPI_InitStruct.SPI_BaudRatePrescaler = SPI_BaudRatePrescaler_4; // SPI frequency is APB2 frequency / 4
SPI_InitStruct.SPI_FirstBit = SPI_FirstBit_MSB;// data is transmitted MSB first
SPI_Init(SPI1, &SPI_InitStruct);
NVIC_PriorityGroupConfig(NVIC_PriorityGroup_2);
NVIC_InitStructure.NVIC_IRQChannel = SPI1_IRQn;
NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 0;
NVIC_InitStructure.NVIC_IRQChannelSubPriority = 1;
NVIC_InitStructure.NVIC_IRQChannelCmd = ENABLE;
NVIC_Init(&NVIC_InitStructure);
/* Enable SPI1*/
SPI_Cmd(SPI1, ENABLE);
return;
}
Then I just loop back SPI_MOSI to SPI_MISO and use a function that transmits the data (a very basic function that takes data from a buffer and then uses CMSIS functions for the transmission). The problem is that when the SPI interrupt is triggered, the program never gets out of the handler. The handler function looks like this:
void SPI1_IRQHandler()
{
int a;
a++;
SPI_I2S_ClearITPendingBit(SPI1,SPI_I2S_IT_TXE);
return;
}
Is it a problem in the CMSIS library, or am I not configuring the SPI interrupt in the right way? Please guide me to the right point.
EDIT
This is the function I use for data transmission:
void write_SPI1()
{
int i;
for (i=0;i<SPI_TX_MAX; i++)
{
SPI_I2S_SendData(SPI1,spiTxBuff[i]);
SPI_I2S_ITConfig(SPI1,SPI_I2S_IT_RXNE,ENABLE);
}
}
and the interrupt deals with the data reception; it just fills spiRxBuff when new data is received:
void SPI1_IRQHandler()
{
while (SPI_I2S_GetFlagStatus(SPI1,SPI_I2S_FLAG_RXNE)== RESET);
spiRxBuff[spiRxCount]= SPI_I2S_ReceiveData(SPI1);
spiRxCount++;
}
The variables used for reception/transmission are declared as below:
uint8_t spiTxBuff[SPI_TX_MAX] = {0x01,0x02,0x03,0x04,0x05,0x06};
uint8_t spiRxBuff[SPI_RX_MAX];
static volatile int spiRxCount= 0; // used in SPI1_IRQHandler
What is strange now is that I'm getting {0x01,0x02,0x03,0x05,0x06} in spiRxBuff instead of {0x01,0x02,0x03,0x04,0x05,0x06}, yet when stepping through in debug mode the data in spiRxBuff is correct. What is going wrong, in your opinion?
You did not show the function doing the transmit, so I don't know exactly what you are trying to accomplish.
Transmitting in a loop
If you are transmitting from a function (in a loop), then you don't need interrupts at all; just make sure that the TXE flag is set before you transmit. Note that you have to interleave sending and receiving somehow.
void SPI1_Transmit(uint8_t *send, uint8_t *receive, int count) {
while(count-- > 0) {
while(SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE)!=SET) {
if(SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_RXNE)==SET)
*receive++ = SPI_I2S_ReceiveData(SPI1);
}
SPI_I2S_SendData(SPI1, *send++);
}
while(SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_RXNE)!=SET) {
/* wait for the last incoming byte */
}
*receive++ = SPI_I2S_ReceiveData(SPI1);
}
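Called from application code it might look like this; the chip select on PA4 and the split BSRRL/BSRRH registers are taken from the init_SPI1() in the question (BSRRL sets a pin, BSRRH resets it), so adjust to your wiring.
/* Usage sketch for SPI1_Transmit() -- PA4 as chip select is an assumption. */
void demo_transfer(void)
{
    uint8_t tx[6] = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06 };
    uint8_t rx[6];

    GPIOA->BSRRH = GPIO_Pin_4;          /* pull CS (PA4) low            */
    SPI1_Transmit(tx, rx, sizeof tx);   /* full-duplex polled transfer  */
    GPIOA->BSRRL = GPIO_Pin_4;          /* release CS                   */
}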
Transmitting from interrupt
The TXE interrupt flag is set whenever the transmit buffer is empty. If you don't do something about it in the interrupt handler, it will immediately trigger another interrupt, again and again. You can't clear it manually; it is only cleared by transmitting another byte, so reset the transmit interrupt enable flag before sending the last byte.
volatile int spi1_tx_count, spi1_rx_count;
uint8_t *spi1_tx_ptr;
volatile uint8_t *spi1_rx_ptr;
/* set these global variables before enabling interrupts */
void SPI1_IRQHandler() {
if (SPI_I2S_GetITStatus(SPI1, SPI_I2S_IT_TXE) == SET) {
if(--spi1_tx_count < 1)
SPI_I2S_ITConfig(SPI1, SPI_I2S_IT_TXE, DISABLE);
SPI_I2S_SendData(SPI1, *spi1_tx_ptr++);
}
if(SPI_I2S_GetITStatus(SPI1, SPI_I2S_IT_RXNE) == SET) {
*spi1_rx_ptr++ = SPI_I2S_ReceiveData(SPI1);
spi1_rx_count++;
}
}
Using DMA
The above examples are using processor power and cycles for a task that can be handled by the DMA conroller alone. A lot of (if not all) processor cycles, if you are talking to a peripheral at 2 MBit/s.
See Project/STM32F4xx_StdPeriph_Examples/SPI/SPI_TwoBoards in the library for an example.
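If you want to experiment with DMA before working through that example, a minimal transmit-only sketch with the Standard Peripheral Library might look like the following. It assumes SPI1 is already initialised as in the question and that SPI1_TX is served by DMA2 Stream 3 / Channel 3; check the DMA request mapping table in the reference manual for your exact device.
/* TX-only SPI1 DMA sketch -- blocking, illustrative only. */
#include "stm32f4xx.h"

void spi1_dma_send(const uint8_t *buf, uint16_t len)
{
    DMA_InitTypeDef dma;

    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_DMA2, ENABLE);

    DMA_DeInit(DMA2_Stream3);
    DMA_StructInit(&dma);                               /* sensible defaults           */
    dma.DMA_Channel            = DMA_Channel_3;         /* SPI1_TX request             */
    dma.DMA_PeripheralBaseAddr = (uint32_t)&SPI1->DR;
    dma.DMA_Memory0BaseAddr    = (uint32_t)buf;
    dma.DMA_DIR                = DMA_DIR_MemoryToPeripheral;
    dma.DMA_BufferSize         = len;
    dma.DMA_MemoryInc          = DMA_MemoryInc_Enable;
    dma.DMA_PeripheralDataSize = DMA_PeripheralDataSize_Byte;
    dma.DMA_MemoryDataSize     = DMA_MemoryDataSize_Byte;
    DMA_Init(DMA2_Stream3, &dma);

    SPI_I2S_DMACmd(SPI1, SPI_I2S_DMAReq_Tx, ENABLE);    /* SPI raises the DMA requests */
    DMA_Cmd(DMA2_Stream3, ENABLE);

    while (DMA_GetFlagStatus(DMA2_Stream3, DMA_FLAG_TCIF3) == RESET) {
        /* wait for the transfer-complete flag */
    }
    DMA_ClearFlag(DMA2_Stream3, DMA_FLAG_TCIF3);
    DMA_Cmd(DMA2_Stream3, DISABLE);
}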
Sorry, I hadn't noticed at all that you'd amended the question. It looks like notifications are sent for new comments or answers, but not for edits.
There are multiple problems with your code. In write_SPI1(), I'd enable the RX interrupt only once, before the loop; there is no need to do it again and again. Also, you should definitely check whether the TX register is available before sending.
void write_SPI1() {
int i;
SPI_I2S_ITConfig(SPI1,SPI_I2S_IT_RXNE,ENABLE);
for (i=0;i<SPI_TX_MAX; i++) {
while(SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE)!=SET)
;
SPI_I2S_SendData(SPI1,spiTxBuff[i]);
}
}
It is, however, a bad idea to wait on a flag in an interrupt handler. If RXNE is the only possible interrupt source, then you can proceed straight to receiving, as sketched below.
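Something along these lines should do; it reuses the spiRxBuff, spiRxCount and SPI_RX_MAX names from the question, and the flag check can be dropped entirely if RXNE really is the only enabled source.
/* Reworked receive handler -- no busy-waiting inside the ISR. */
void SPI1_IRQHandler(void)
{
    if (SPI_I2S_GetITStatus(SPI1, SPI_I2S_IT_RXNE) == SET) {
        uint8_t data = SPI_I2S_ReceiveData(SPI1);   /* reading DR clears RXNE */
        if (spiRxCount < SPI_RX_MAX) {
            spiRxBuff[spiRxCount++] = data;
        }
    }
}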