Can someone help me understand the variable names in this Arduino library, i.e. what tempTC, tempCJC, faultOpen, faultShortGND and faultShortVCC are?
I am so confused. Please help! Thanks.
/*
read_MAX31855.ino
TODO:
Clean up code and comment!!
Also make use of all library functions and make more robust.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
http://creativecommons.org/licenses/by-sa/3.0/
*/
#include <MAX31855.h>
// Arduino 1.0 pre-defines these pin variables
#if ARDUINO < 100
int SCK = 13;
int MISO = 12;
int SS = 10;
#endif
int LED = 9;
// Set up the variables we are going to use.
double tempTC, tempCJC;
bool faultOpen, faultShortGND, faultShortVCC, x;
bool temp_unit = 1; // 0 = Celsius, 1 = Fahrenheit
// Init the MAX31855 library for the chip.
MAX31855 temp(SCK, SS, MISO);
void setup() {
  Serial.begin(9600);
  pinMode(LED, OUTPUT);
}

void loop() {
  x = temp.readMAX31855(&tempTC, &tempCJC, &faultOpen, &faultShortGND, &faultShortVCC, temp_unit);

  Serial.print(tempTC);
  Serial.print("\t");
  Serial.print(tempCJC);
  Serial.print("\t");
  Serial.print(faultOpen);
  Serial.print(faultShortGND);
  Serial.println(faultShortVCC);

  digitalWrite(LED, HIGH);
  delay(500);
  digitalWrite(LED, LOW);
  delay(500);
}
tempTC: the thermocouple temperature read from the IC.
tempCJC: the cold-junction (reference) temperature measured inside the chip and used for cold-junction compensation.
faultOpen, faultShortGND, faultShortVCC: flags indicating whether one of these faults (open thermocouple, thermocouple shorted to GND, thermocouple shorted to VCC) occurred while reading the temperature.
All of this is done directly inside the MAX31855 IC; the library you posted just reads the results out over SPI.
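For reference, here is a minimal sketch (plain C, not the library's actual source) of how those five values are typically unpacked from the single 32-bit frame the MAX31855 shifts out over SPI, following the bit layout in its datasheet:

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch only: unpack one raw 32-bit MAX31855 frame into the same five
   outputs the library fills in. Bit positions are from the datasheet. */
void decode_max31855(uint32_t raw,
                     double *tempTC, double *tempCJC,
                     bool *faultOpen, bool *faultShortGND, bool *faultShortVCC)
{
    /* D0..D2: individual fault flags */
    *faultOpen     = (raw >> 0) & 0x1;   /* thermocouple not connected   */
    *faultShortGND = (raw >> 1) & 0x1;   /* thermocouple shorted to GND  */
    *faultShortVCC = (raw >> 2) & 0x1;   /* thermocouple shorted to VCC  */

    /* D31..D18: 14-bit signed thermocouple reading, 0.25 degC per LSB */
    int16_t tc = (int16_t)(raw >> 16) >> 2;    /* arithmetic shift keeps the sign */
    *tempTC = tc * 0.25;

    /* D15..D4: 12-bit signed cold-junction (internal) reading, 0.0625 degC per LSB */
    int16_t cj = (int16_t)(raw & 0xFFFF) >> 4; /* arithmetic shift keeps the sign */
    *tempCJC = cj * 0.0625;
}
```

The library's Celsius/Fahrenheit argument simply converts these results afterwards.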
I am using a NUCLEO-L476RG development board and learning to write GPIO drivers for the STM32 family.
I am implementing simple logic in which I need to turn on an LED when a push button is pressed, and I have run into a strange issue:
Edit 1: The breadboard LED turns ON when the line temp = 10 is commented out; it doesn't turn ON when that line is present. It seems that if I add any line of code to that while loop, the LED does not turn ON.
The breadboard LED turns ON when the delay() call is commented out; it doesn't turn ON when delay() is called.
What could be the issue?
I have powered the board through its mini-USB connector, and the clock is configured as MSI at 4 MHz.
#define delay() for(uint32_t i=0; i<=50000; i++);

int main(void)
{
    GPIO_Handle_t NucleoUserLED, NucleoUserPB, BreadBoardLED, BreadBoardPB;
    uint8_t inputVal, BBinpVal;
    uint32_t temp;

    //User green led in the nucleo board connected to PA5
    NucleoUserLED.pGPIO = GPIOA;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_5;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User blue button in the nucleo connected to PC13
    NucleoUserPB.pGPIO = GPIOC;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_13;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;

    //User led in the bread board connected to PC8
    BreadBoardLED.pGPIO = GPIOC;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_8;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User DPDT connected in the breadboard connected to PC6
    BreadBoardPB.pGPIO = GPIOC;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_6;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_PU;

    GPIO_PeriClkCtrl(GPIOA, ENABLE);
    GPIO_PeriClkCtrl(GPIOC, ENABLE);

    GPIO_Init(&NucleoUserLED);
    GPIO_Init(&NucleoUserPB);
    GPIO_Init(&BreadBoardLED);
    GPIO_Init(&BreadBoardPB);

    while(1)
    {
        /*****************************************************************
         * Controlling the IO present in the nucleo board                *
         *****************************************************************/
        inputVal = GPIO_ReadInputPin(NucleoUserPB.pGPIO, NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber);
        BBinpVal = GPIO_ReadInputPin(BreadBoardPB.pGPIO, BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber);

        if(inputVal == 0)
        {
            GPIO_ToggleOutputPin(NucleoUserLED.pGPIO, NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber);
        }

        /*****************************************************************
         * Controlling the IO present in the bread board                 *
         *****************************************************************/
        if (BBinpVal == 0)
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 1);
        }
        else
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 0);
        }

        delay();
    }

    return 0;
}
It is not a function, only a macro definition.
Your loop is likely to be optimized out.
Define it instead as:
void inline __attribute__((always_inline)) delay(uint32_t delay)
{
    while (delay--) __asm("");
}
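It would then be called with an explicit iteration count, for example:

```c
/* Roughly 50000 iterations of the busy-wait above; the real-world duration
   depends on the CPU clock and on the code the compiler generates. */
delay(50000);
```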
Bear in mind that 50000 can be quite long if you run on low clock settings.
Not sure what the issue is because "it is not working" is not very specific.
However there are "quality" issues:
That is an inappropriate use of a macro - there is no benefit over using a function. The function call overhead argument does not hold - it is a delay, it is supposed to take time!
The empty-loop counter is not declared volatile - the compiler, at any optimisation level other than the minimum, is likely to remove the loop altogether (see the sketch after the SysTick example below).
A for-loop for a delay is a crude and generally non-deterministic solution, with a period that will change between compilers, with different compiler options, and on different targets or clock speeds. STM32 is a Cortex-M device, and given that, you should use the SysTick counter for this. For example, as a minimum, something like:
volatile uint32_t tick = 0;

void SysTick_Handler(void)
{
    tick++;
}

void delayms(uint32_t millisec)
{
    static bool init = false;
    if (!init)
    {
        SysTick_Config(SystemCoreClock / 1000);
        init = true;
    }

    uint32_t start = tick;
    while (tick - start < millisec);
}
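On the volatile point (item 2 above), the smallest change that keeps the original busy-wait working is simply to qualify the counter, for example:

```c
/* Sketch of the volatile fix mentioned above: qualifying the counter stops
   the optimizer from deleting the empty loop, although the delay length
   still depends on the clock speed and the compiler. */
void delay(void)
{
    for (volatile uint32_t i = 0; i <= 50000U; i++)
    {
        /* busy-wait */
    }
}
```

Making the counter a global, as the poster later did, often has a similar effect in practice because the final store to the global is observable, but volatile (or the SysTick approach above) expresses the intent directly and is the reliable fix.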
The issue was solved by declaring the iterator as a global variable. Now the LED turns on when the push button is pressed.
Previous implementation
#define delay() for(uint32_t i=0; i<=50000; i++);
Working implementation
uint32_t temp;
void delay(void)
{
    for (temp = 0; temp <= 50000; temp++)
    {
        ;
    }
}
Can anyone tell me how declaring the variable as a global solves the issue?
Find the working implementation below:
#include <stdint.h>
#include "stm32l476xx.h"
#include "stm32l476xx_gpoi_driver.h"
#if !defined(__SOFT_FP__) && defined(__ARM_FP)
#warning "FPU is not initialized, but the project is compiling for an FPU. Please initialize the FPU before use."
#endif
uint32_t temp;
void delay(void)
{
    for (temp = 0; temp <= 50000; temp++)
    {
        ;
    }
}

int main(void)
{
    GPIO_Handle_t NucleoUserLED, NucleoUserPB, BreadBoardLED, BreadBoardPB;
    volatile uint8_t inputVal, BBinpVal;

    //User green led in the nucleo board connected to PA5
    NucleoUserLED.pGPIO = GPIOA;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_5;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User blue button in the nucleo connected to PC13
    NucleoUserPB.pGPIO = GPIOC;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_13;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;

    //User led in the bread board connected to PC8
    BreadBoardLED.pGPIO = GPIOC;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_8;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User DPDT connected in the breadboard connected to PC6
    BreadBoardPB.pGPIO = GPIOC;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_6;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_PU;

    GPIO_PeriClkCtrl(GPIOA, ENABLE);
    GPIO_PeriClkCtrl(GPIOC, ENABLE);

    GPIO_Init(&NucleoUserLED);
    GPIO_Init(&NucleoUserPB);
    GPIO_Init(&BreadBoardLED);
    GPIO_Init(&BreadBoardPB);

    while(1)
    {
        /*****************************************************************
         * Controlling the IO present in the nucleo board                *
         *****************************************************************/
        inputVal = GPIO_ReadInputPin(NucleoUserPB.pGPIO, NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber);
        BBinpVal = GPIO_ReadInputPin(BreadBoardPB.pGPIO, BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber);

        if(inputVal == 0)
        {
            GPIO_ToggleOutputPin(NucleoUserLED.pGPIO, NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber);
        }

        /*****************************************************************
         * Controlling the IO present in the bread board                 *
         *****************************************************************/
        temp = 10;
        if (BBinpVal == 0)
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 1);
        }
        else
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 0);
        }

        delay();
    }

    return 0;
}
The issue is solved. There was a bug in the driver layer I had written.
Whenever a GPIO pin is configured as an input, the output-related registers for that pin should be left at their reset values, or the driver should simply not apply the output-related configuration to it; a sketch of that fix is below.
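A sketch of what that looks like (hypothetical, since the real GPIO_Init is part of the poster's own driver; the field and macro names follow the ones used in this post):

```c
/* Hypothetical sketch of the fix described above, not the poster's actual
   driver: only touch output-related configuration when the pin is an output. */
void GPIO_Init_sketch(GPIO_Handle_t *h)
{
    /* ...pin mode and pull-up/pull-down configuration common to every pin... */

    if (h->GPIO_Pin_Cfg.GPIO_PinMode == GPIO_MODE_OP)
    {
        /* configure output type (push-pull / open-drain), speed, etc. */
    }
    else
    {
        /* input pin: leave the output-related registers at their reset
           values instead of writing whatever happens to be in the handle */
    }
}
```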
I'm trying to connect to an STM8 over UART. The STM8 seems to transmit data OK, but what it receives is mostly junk, and it often seems to receive 2 bytes at once. Here's the code:
#include "../stm8.h"
//
// Setup the system clock to run at 16MHz using the internal oscillator.
//
void InitialiseSystemClock()
{
CLK_ICKR = 0; // Reset the Internal Clock Register.
CLK_ICKR |= CLK_ICKR_HSIEN ; // Enable the HSI.
CLK_ECKR = 0; // Disable the external clock.
while ((CLK_ICKR & CLK_ICKR_HSIRDY) == 0); // Wait for the HSI to be ready for use.
CLK_CKDIVR = 0; // Ensure the clocks are running at full speed.
CLK_PCKENR1 = 0xff; // Enable all peripheral clocks.
CLK_PCKENR2 = 0xff; // Ditto.
CLK_CCOR = 0; // Turn off CCO.
CLK_HSITRIMR = 0; // Turn off any HSI trimming.
CLK_SWIMCCR = 0; // Set SWIM to run at clock / 2.
CLK_SWR = 0xe1; // Use HSI as the clock source.
CLK_SWCR = 0; // Reset the clock switch control register.
CLK_SWCR |= CLK_SWCR_SWEN; // Enable switching.
while ((CLK_SWCR & CLK_SWCR_SWBSY) != 0); // Pause while the clock switch is busy.
}
//
// Setup the UART to run at 115200 baud, no parity, one stop bit, 8 data bits.
//
// Important: This relies upon the system clock being set to run at 16 MHz.
//
void init_uart()
{
//
// Clear the Idle Line Detected bit in the status register by a read
// to the UART1_SR register followed by a Read to the UART1_DR register.
//
//unsigned char tmp = UART1_SR;
//tmp = UART1_DR;
//UART1_SR = 0xC0; // mcarter set to default value
//
// Reset the UART registers to the reset values.
//
UART1_CR1 = 0;
UART1_CR2 = 0;
UART1_CR4 = 0;
UART1_CR3 = 0;
UART1_CR5 = 0;
UART1_GTR = 0;
UART1_PSCR = 0;
//
// Now setup the port to 115200,n,8,1.
//
// clear certain bits
UART1_CR1 &= ~UART1_CR1_M ; // 8 Data bits.
UART1_CR1 &= ~UART1_CR1_PCEN; // Disable parity
// stop bits
UART1_CR3 &= 0b11001111; // unmask the stop bit to default (1 stop bit)
//UART1_CR3 |= 0b00100000; // two stop bits
//UART1_CR3 |= 0b00110000; // 1.5 stop bits
//UART1_CR3 &= ~UART1_CR3_STOP; // 1 stop bit.
#if 1 //115200 baud
//UART1_BRR2 = 0x0a; // given in original example
UART1_BRR2 = 0x0b; // Set the baud rate registers to 115200 baud
UART1_BRR1 = 0x08; // based upon a 16 MHz system clock.
#else // 9600 baud, but seems to be worse than 115200
UART1_BRR2 = 0x03;
UART1_BRR1 = 0x69;
#endif
//
// Disable the transmitter and receiver.
//
//UART1_CR2_TEN = 0; // Disable transmit.
//UART1_CR2_REN = 0; // Disable receive.
//
// Set the clock polarity, clock phase and last bit clock pulse.
//
UART1_CR3 |= UART1_CR3_CPOL;
UART1_CR3 |= UART1_CR3_CPHA;
//UART1_CR3 |= UART1_CR3_LBCL; // this seems to cause problems
UART1_CR2 |= UART1_CR2_TEN; // enable transmit
UART1_CR2 |= UART1_CR2_REN; // enable receive
UART1_CR3 |= UART1_CR3_CLKEN; // enable the UART clock output
}
char uart_getc()
{
while((UART1_SR & UART1_SR_RXNE)==0); // Block until char rec'd
//char c = UART1_DR;
//return c;
return UART1_DR;
}
void uart_putc(char c)
{
while((UART1_SR & UART1_SR_TXE)==0); // Wait until the transmit data register is empty
UART1_DR = c; // transmit char
}
void UARTPrintf(char *message)
{
char *ch = message;
while (*ch)
uart_putc(*ch++);
}
void main()
{
disable_interrupts();
InitialiseSystemClock();
init_uart();
enable_interrupts();
UARTPrintf("Uart example: you type, I echo\n\r");
while (1)
{
//continue;
char c = uart_getc();
uart_putc(c);
//UARTPrintf("Hello from my microcontroller....\n\r");
//for (long counter = 0; counter < 2500000; counter++);
}
}
Relevant declaration headers are:
#define UART1_SR *(uchar*)(0x5230)
#define UART1_DR *(uchar*)(0x5231)
#define UART1_BRR1 *(uchar*)(0x5232)
#define UART1_BRR2 *(uchar*)(0x5233)
#define UART1_CR1 *(uchar*)(0x5234)
#define UART1_CR2 *(uchar*)(0x5235)
#define UART1_CR3 *(uchar*)(0x5236)
#define UART1_CR4 *(uchar*)(0x5237)
#define UART1_CR5 *(uchar*)(0x5238)
#define UART1_GTR *(uchar*)(0x5239)
#define UART1_PSCR *(uchar*)(0x523A)
#define UART1_CR1_M (1<<4)
#define UART1_CR1_PCEN (1<<2)
#define UART1_CR2_TEN (1<<3)
#define UART1_CR2_REN (1<<2)
#define UART1_CR3_STOP 4
#define UART1_CR3_CPOL (1<<2)
#define UART1_CR3_CPHA (1<<1)
#define UART1_CR3_LBCL (1<<0)
#define UART1_CR3_CLKEN (1<<3)
#define UART1_SR_TXE (1<<7)
#define UART1_SR_TC (1<<6)
#define UART1_SR_RXNE (1<<5)
I'm not really sure about stop bits, and all that. It's just "regular" serial communication.
I found that if I uncommented the line
//UART1_CR3 |= UART1_CR3_LBCL; // this seems to cause problems
then the STM8 prints out a continuous stream of junk. But with it commented out, the MCU seems to correctly detect that there has been a transmission, though there doesn't seem to be any pattern to what it actually receives.
Hmm. The offending line seems to be
UART1_CR3 |= UART1_CR3_CLKEN;
Its purpose seems to be to "enable the SCLK pin". I don't really understand what's going on here, but according to a pinout diagram, one of the functions of pin PD4 is UART1_CK. So you can attach a UART clock to the STM8 and this bit enables it?? And that causes problems if no clock is attached. It doesn't make that much sense, really; I didn't know UARTs could have external clocks.
Anyway, commenting out the line seems to have fixed things.
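For what it's worth, on the STM8 the CPOL/CPHA/LBCL/CLKEN bits only matter in the UART's synchronous mode, where the part drives a clock out on UART1_CK (PD4); for ordinary asynchronous serial they can all stay at zero. A minimal async-only init might look like this (just a sketch, reusing the register macros from the post; 115200-8-N-1 from the 16 MHz HSI):

```c
/* Sketch: asynchronous-only UART1 init, leaving the synchronous-mode bits
   (CPOL/CPHA/LBCL/CLKEN) at their reset value of 0. */
void init_uart_async(void)
{
    UART1_CR1 = 0;               /* 8 data bits, no parity                  */
    UART1_CR2 = 0;
    UART1_CR3 = 0;               /* 1 stop bit, no clock output             */

    UART1_BRR2 = 0x0b;           /* 115200 baud with a 16 MHz system clock, */
    UART1_BRR1 = 0x08;           /* same divider values as above            */

    UART1_CR2 |= UART1_CR2_TEN;  /* enable transmitter                      */
    UART1_CR2 |= UART1_CR2_REN;  /* enable receiver                         */
}
```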
I am making an HID for a data acquisition system. There are a lot of sensors that store test data; when I need it, I connect to them via USB and collect it. The USB host sends 3 bytes and the USB device, if the bytes are correct, sends back its stored data. Sounds simple.
Previously this was implemented on a PC, but now I am trying to implement it on an STM32F769 Discovery board and am having some serious problems.
I am using ARM Keil 5.27, with code generated by STM32CubeMX 5.3.0. For now I am just trying to make a plain, simple program, to be integrated later with the full touchscreen interface. I tried this code in main:
if (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
    while (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
    {
        Transmission_function();
    }
And the function itself:
#define DLE 0x10
#define STX 0x2

uint8_t tx_buf[] = {DLE, STX, 120}, RX_FLAG;
uint32_t size_tx = sizeof(tx_buf);

void Transmission_function(void)
{
    if (Appli_state == APPLICATION_READY)
    {
        i = 0;
        USBH_CDC_Transmit(&hUsbHostHS, tx_buf, size_tx);
        HAL_Delay(50);
        RX_FLAG = 0;
    }
}
It should send the message after I press the blue button on the Discovery board, but all I get is a Hard Fault. While debugging, I tried to check manually after which action I get this error, and it happened in this function in stm32f7xx_ll_usb.c:
HAL_StatusTypeDef USB_WritePacket(USB_OTG_GlobalTypeDef *USBx, uint8_t *src,
                                  uint8_t ch_ep_num, uint16_t len, uint8_t dma)
{
    uint32_t USBx_BASE = (uint32_t)USBx;
    uint32_t *pSrc = (uint32_t *)src;
    uint32_t count32b, i;

    if (dma == 0U)
    {
        count32b = ((uint32_t)len + 3U) / 4U;
        for (i = 0U; i < count32b; i++)
        {
            USBx_DFIFO((uint32_t)ch_ep_num) = *((__packed uint32_t *)pSrc);
            pSrc++;
        }
    }

    return HAL_OK;
}
But scrolling back in the disassembly, I noticed that just before the Hard Fault the program was in this function inside stm32f7xx_hal_hcd.c, in the GRXSTS_PKTSTS_IN case:
static void HCD_RXQLVL_IRQHandler(HCD_HandleTypeDef *hhcd)
{
    USB_OTG_GlobalTypeDef *USBx = hhcd->Instance;
    uint32_t USBx_BASE = (uint32_t)USBx;
    uint32_t pktsts;
    uint32_t pktcnt;
    uint32_t temp;
    uint32_t tmpreg;
    uint32_t ch_num;

    temp = hhcd->Instance->GRXSTSP;
    ch_num = temp & USB_OTG_GRXSTSP_EPNUM;
    pktsts = (temp & USB_OTG_GRXSTSP_PKTSTS) >> 17;
    pktcnt = (temp & USB_OTG_GRXSTSP_BCNT) >> 4;

    switch (pktsts)
    {
        case GRXSTS_PKTSTS_IN:
            /* Read the data into the host buffer. */
            if ((pktcnt > 0U) && (hhcd->hc[ch_num].xfer_buff != (void *)0))
            {
                (void)USB_ReadPacket(hhcd->Instance, hhcd->hc[ch_num].xfer_buff, (uint16_t)pktcnt);

                /* manage multiple Xfer */
                hhcd->hc[ch_num].xfer_buff += pktcnt;
                hhcd->hc[ch_num].xfer_count += pktcnt;

                if ((USBx_HC(ch_num)->HCTSIZ & USB_OTG_HCTSIZ_PKTCNT) > 0U)
                {
                    /* re-activate the channel when more packets are expected */
                    tmpreg = USBx_HC(ch_num)->HCCHAR;
                    tmpreg &= ~USB_OTG_HCCHAR_CHDIS;
                    tmpreg |= USB_OTG_HCCHAR_CHENA;
                    USBx_HC(ch_num)->HCCHAR = tmpreg;
                    hhcd->hc[ch_num].toggle_in ^= 1U;
                }
            }
            break;

        case GRXSTS_PKTSTS_DATA_TOGGLE_ERR:
            break;

        case GRXSTS_PKTSTS_IN_XFER_COMP:
        case GRXSTS_PKTSTS_CH_HALTED:
        default:
            break;
    }
}
The last few lines of the disassembly show this:
0x080018B4 E8BD81F0 POP {r4-r8,pc}
0x080018B8 0000 DCW 0x0000
0x080018BA 1FF8 DCW 0x1FF8
Why does it fail? How could I fix it? I do not have much experience with the USB protocol.
I will post my workaround, though I am not sure why it worked. The solution was to use the EXTI0 interrupt instead of just detecting whether PA0 is high, which is what I had done here:
if (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
    while (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
        Transmission_function();
I changed it to this:
void EXTI0_IRQHandler(void)
{
    /* USER CODE BEGIN EXTI0_IRQn 0 */
    if (Appli_state == APPLICATION_READY)
    {
        USBH_CDC_Transmit(&hUsbHostHS, Buffer, 3);
    }
    /* USER CODE END EXTI0_IRQn 0 */
    HAL_GPIO_EXTI_IRQHandler(GPIO_PIN_0);
    /* USER CODE BEGIN EXTI0_IRQn 1 */

    /* USER CODE END EXTI0_IRQn 1 */
}
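As a side note, a common alternative (only a sketch, assuming the CubeMX-generated main loop keeps calling MX_USB_HOST_Process() as usual) is to keep the interrupt handler minimal and defer the transmission to the main loop:

```c
/* Sketch: record the button press in the ISR, transmit from the main loop. */
volatile uint8_t buttonPressed = 0;

void EXTI0_IRQHandler(void)
{
    buttonPressed = 1;                     /* just record the event */
    HAL_GPIO_EXTI_IRQHandler(GPIO_PIN_0);
}

/* ...and in the main while(1) loop, alongside MX_USB_HOST_Process(): */
if (buttonPressed && (Appli_state == APPLICATION_READY))
{
    buttonPressed = 0;
    USBH_CDC_Transmit(&hUsbHostHS, Buffer, 3);  /* same 3-byte request as above */
}
```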
I was looking into the SPI driver in U-Boot; here is a small snippet from
omap_spi.c:
void spi_init(void)
{
    gpMCSPIRegs = (MCSPI_REGS *)MCSPI_SPI1_IO_BASE;
    unsigned long u, n;

    /* initialize the multipad and interface clock */
    spi_init_spi1();

    /* soft reset */
    CSP_BITFINS(gpMCSPIRegs->SYSCONFIG, SPI_SYSCONFIG_SOFTRESET, 1);
    for (n = 0; n < 100; n++) {
        u = CSP_BITFEXT(gpMCSPIRegs->SYSSTATUS,
                        SPI_SYSSTATUS_RESETDONE);
        if (u)
            break;
    }

    ...more code
}
Here is the relevant macro in omap_spi.h:
#define CSP_BITFINS(var, bit, val) \
(CSP_BITFCLR(var, bit)); (var |= CSP_BITFVAL(bit, val))
My confusion is that when they do the soft reset, they call this CSP_BITFINS macro, and inside it all they do is manipulate bits and fill structures. Where do they actually access the hardware registers they are configuring?
If you look further, you'll find that the pointer they are using, gpMCSPIRegs, is volatile and points at the memory-mapped hardware registers. The bits they are setting/clearing are in the hardware registers.
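To make that concrete, here is a simplified, self-contained illustration (not the actual U-Boot code; the struct layout and base address below are placeholders, so check your SoC's reference manual) of why writing through such a pointer configures the hardware:

```c
#include <stdint.h>

/* The struct overlays the peripheral's address space, so assigning to a
   volatile field is a bus write to the hardware register itself. */
typedef struct {
    volatile uint32_t SYSCONFIG;   /* illustrative: configuration register  */
    volatile uint32_t SYSSTATUS;   /* illustrative: reset-done status       */
} mcspi_regs_sketch_t;

#define MCSPI1_BASE_SKETCH 0x48098000UL   /* placeholder base address */

static mcspi_regs_sketch_t *const regs =
        (mcspi_regs_sketch_t *)MCSPI1_BASE_SKETCH;

static void soft_reset_sketch(void)
{
    regs->SYSCONFIG |= (1u << 1);        /* read-modify-write of the real register */
    while ((regs->SYSSTATUS & 1u) == 0)  /* poll the reset-done bit in hardware    */
        ;
}
```

CSP_BITFINS expands to exactly this kind of read-modify-write on the register named by its first argument.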
I need to write a program implementing a real-time clock on an ARM device, for example the LPC213x.
It should display hours, minutes and seconds. I have no idea about ARM, so I am having trouble getting started.
My code below is not working:
// ...
int main(void) {
    int hour = 0;
    int min = 0;
    int sec;

    init_serial();       /* Init UART */
    Initialize();
    CCR = 0x11;
    PCONP = 0x1815BE;
    ILR = 0x1;           // Clearing Interrupt
    //printf("\nTime is %02d:%02x:%02d",hour,min,sec);

    while (1) {          /* Loop forever */
    }
}

void Initialize()
{
    VPBDIV = 0x0;
    //CCR=0x2;
    //ILR=0x3;
    HOUR = 0x0;
    SEC = 0x0;
    MIN = 0x0;
    ILR = 0x03;
    CCR = (1 << 4) | (1 << 0);

    VICVectAddr13 = (unsigned)read_rtc;
    VICVectCntl13 |= 0x20 | VIC_RTC;
    VICIntEnable |= (1 << VIC_RTC);
}

/* Interrupt Service Routine */
__irq void read_rtc()
{
    int hour = 0;
    int min = 0;
    int sec;

    ILR = 0x1;           // Clearing Interrupt
    hour = (CTIME0 & MASKHR) >> 16;
    min  = (CTIME0 & MASKMIN) >> 8;
    sec  = CTIME0 & MASKSEC;
    printf("\nTime is %02d:%02x:%02d", hour, min, sec);
    //VICVectAddr=0xff;
    VICVectAddr = 0;
}
According to this board description for the LPC213x, it is delivered with an example program called "Real-Time Clock - Demonstrates how the real-time clock can be used". This also implies that the board features real-time clock hardware, which is going to make it a lot easier.
I suggest you read up on that program, to figure out how to talk to the RTC hardware. The next step would be to solve the display requirements. The two obvious choices are either 7-segment LED displays, or an LCD.
Both are well-known technologies about which loads have been written; follow the Wikipedia links to find out more.
This is all for the LPC2468. We have a setTime function too, but I don't want to do ALL the work for you. ;) We have custom register files for ease of access, but if you look at the LPC manual, it's obvious where they correlate. You just have to shift values into the right place, and do bitwise operations. For example:
#define RTC_HOUR (*(volatile RTC_HOUR_t *)(RTC_BASE_ADDR + (uint32_t)0x28))
Time structure:
typedef struct {
    uint8_t  seconds;  /* Second value - [0,59] */
    uint8_t  minutes;  /* Minute value - [0,59] */
    uint8_t  hour;     /* Hour value - [0,23] */
    uint8_t  mDay;     /* Day of the month value - [1,31] */
    uint8_t  month;    /* Month value - [1,12] */
    uint16_t year;     /* Year value - [0,4095] */
    uint8_t  wDay;     /* Day of week value - [0,6] */
    uint16_t yDay;     /* Day of year value - [1,365] */
} rtcTime_t;
RTC functions:
void rtc_ClockStart(void) {
    /* Enable CLOCK into RTC */
    scb_ClockStart(M_RTC);
    RTC_CCR.B.CLKSRC = 1;
    RTC_CCR.B.CLKEN = 1;
    return;
}

void rtc_ClockStop(void) {
    RTC_CCR.B.CLKEN = 0;
    /* Disable CLOCK into RTC */
    scb_ClockStop(M_RTC);
    return;
}

void rtc_GetTime(rtcTime_t *p_localTime) {
    /* Read the current RTC counter values */
    p_localTime->seconds = RTC_SEC.R;
    p_localTime->minutes = RTC_MIN.R;
    p_localTime->hour    = RTC_HOUR.R;
    p_localTime->mDay    = RTC_DOM.R;
    p_localTime->wDay    = RTC_DOW.R;
    p_localTime->yDay    = RTC_DOY.R;
    p_localTime->month   = RTC_MONTH.R;
    p_localTime->year    = RTC_YEAR.R;
}
System control block functions:
void scb_ClockStart(module_t module) {
    PCONP.R |= (uint32_t)1 << module;
}

void scb_ClockStop(module_t module) {
    PCONP.R &= ~((uint32_t)1 << module);
}
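To get the hours, minutes and seconds the question asks for, a usage sketch on top of the functions above (assuming printf is retargeted to a serial port on your board) could be:

```c
#include <stdio.h>

/* Usage sketch built on the answer's API: start the RTC once, then read and
   print the current time. */
void print_time_once(void)
{
    rtcTime_t now;

    rtc_ClockStart();     /* enable and start the RTC (call once at startup) */
    rtc_GetTime(&now);    /* snapshot the counter registers                  */

    printf("Time is %02u:%02u:%02u\r\n",
           (unsigned)now.hour, (unsigned)now.minutes, (unsigned)now.seconds);
}
```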
If you need information about ARM, then the book ARM System Developer's Guide: Designing and Optimizing System Software may help you.
We used to do something like this for ARM:
#include "LPC21xx.h"
void rtc()
{
*IODIR1 = 0x00FF0000;
// Set LED ports to output
*IOSET1 = 0x00020000;
*PREINT = 0x000001C8;
// Set RTC prescaler for 12.000Mhz Xtal
*PREFRAC = 0x000061C0;
*CCR = 0x01;
*SEC = 0;
*MIN = 0;
*HOUR= 0;
}
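To then display hours, minutes and seconds, as the question asks, one possible follow-up (a sketch, assuming the same LPC21xx register macros and that printf is retargeted to a serial port) is to poll the counter registers:

```c
/* Sketch only: poll the RTC counter registers set up in rtc() above and
   print them continuously. */
void show_time_forever(void)
{
    while (1)
    {
        printf("\rTime is %02lu:%02lu:%02lu",
               (unsigned long)*HOUR,
               (unsigned long)*MIN,
               (unsigned long)*SEC);
    }
}
```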
A real-time clock (RTC) is a computer clock (most often in the form of an integrated circuit) that keeps track of the current time. Although the term often refers to the devices in personal computers, servers and embedded systems, RTCs are present in almost any electronic device which needs to keep accurate time.
You may refer to these two links; I am sure they will give you further understanding:
1) ARM Cortex Programming using CMSIS: http://www.firmcodes.com/cmsis/
2) RTC Programming with ARM7: http://www.firmcodes.com/microcontrollers/arm/real-time-clock-of-arm7-lpc2148/