Why is my interrupt not connected? qemu_irq_is_connected() returns false - interrupt

I'm trying to make interrupts work for a device in QEMU. The machine name is ab21q, a modified version of the arm64 virt machine, and the device name is ab21q_axpu.
Here is some relevant code; I used pl011.c as a reference. (I switched temporarily back to qemu-5.1.0 for this test.)
==== hw/arm/ab21q.c
static void machab21q_init(MachineState *machine)
{
    .... skip ....
    create_ab21q_axpu_device(vms, sysmem); // ab21q-axpu test
    ....
}

static void create_ab21q_axpu_device(const Ab21qMachineState *vms, MemoryRegion *mem)
{
    char *nodename;
    hwaddr base = vms->memmap[AB21Q_AXPU].base;
    hwaddr size = vms->memmap[AB21Q_AXPU].size;
    int irq = vms->irqmap[AB21Q_AXPU];
    const char compat[] = "ab21q-axpu";

    DeviceState *dev = qdev_new(TYPE_AB21Q_AXPU);
    SysBusDevice *s = SYS_BUS_DEVICE(dev);

    //sysbus_create_simple("ab21q-axpu", base, qdev_get_gpio_in(vms->gic, irq));
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
    memory_region_add_subregion(mem, base,
                                sysbus_mmio_get_region(s, 0));
    sysbus_connect_irq(s, 0, qdev_get_gpio_in(vms->gic, irq));

    nodename = g_strdup_printf("/ab21q_axpu#%" PRIx64, base);
    qemu_fdt_add_subnode(vms->fdt, nodename);
    qemu_fdt_setprop(vms->fdt, nodename, "compatible", compat, sizeof(compat));
    qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg", 2, base, 2, size);
    qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
                           GIC_FDT_IRQ_TYPE_SPI, irq,
                           GIC_FDT_IRQ_FLAGS_LEVEL_HI);
    qemu_fdt_setprop_cell(vms->fdt, nodename, "interrupt-parent", vms->gic_phandle);
    g_free(nodename);
}
==== hw/misc/ab21q_axpu.c
static void ab21q_axpu_init(Object *obj)
{
    Ab21qAxpuState *s = AB21Q_AXPU(obj);
    SysBusDevice *sbd = SYS_BUS_DEVICE(obj);

    memory_region_init_io(&s->iomem, OBJECT(s), &ab21q_axpu_ops, s,
                          TYPE_AB21Q_AXPU, 0x200000*64);
    sysbus_init_mmio(sbd, &s->iomem);
    sysbus_init_irq(sbd, &s->irq);
    s->init = 0;
    s->int_flag = 0;
    s->status = 0;
    s->id = CHIP_ID;
}

static void ab21q_axpu_realize(DeviceState *d, Error **errp)
{
    Ab21qAxpuState *s = AB21Q_AXPU(d);

    if (qemu_irq_is_connected(s->irq)) {
        printf("axpu irq connected!\n");
    } else {
        printf("axpu irq not connected!\n");
    }
}
static void ab21q_axpu_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    dc->realize = ab21q_axpu_realize;
}

static void ab21q_axpu_set_irq(Ab21qAxpuState *s, int irq)
{
    s->status = irq;
    qemu_set_irq(s->irq, 1);
}

static void ab21q_axpu_write(void *opaque, hwaddr offset, uint64_t value,
                             unsigned size)
{
    Ab21qAxpuState *s = (Ab21qAxpuState *)opaque;
    .... skip ...
    switch (offset) {
    case TRIGGER_RUN:
        ....
        if (((uint64_t *)(s->ioctl_arg + *host_virt_offset_p))[0] == 0x1604) {
            s->int_flag = 1;
            ab21q_axpu_set_irq(s, INT_AXPU_RUN_FINISHED);
        }
The machine and the device work (the device is actually a shared library that QEMU links to), except that the interrupt doesn't arrive even though QEMU does call set_irq. I checked with qemu_irq_is_connected in the realize function: the pl011 prints "pl011 irq connected!", but for my device I see "axpu irq not connected!". So it's not a device-driver problem but a problem in the QEMU model itself.
Could anyone find what is missing in the above code? Should I add something to the ACPI tables (in ab21q-build-acpi.c)?
I tried adding these lines in hw/arm/ab21q-build-acpi.c, in the function build_dsdt:
    acpi_dsdt_add_axpu(scope, &memmap[AB21Q_AXPU],
                       (irqmap[AB21Q_AXPU] + ARM_SPI_BASE));
The acpi_dsdt_add_axpu function is:
static void acpi_dsdt_add_axpu(Aml *scope, const MemMapEntry *uart_memmap,
                               uint32_t irq)
{
    Aml *dev = aml_device("AXPU");
    aml_append(dev, aml_name_decl("_HID", aml_string("AXPU0011")));
    aml_append(dev, aml_name_decl("_UID", aml_int(0)));

    Aml *crs = aml_resource_template();
    aml_append(crs, aml_memory32_fixed(uart_memmap->base,
                                       uart_memmap->size, AML_READ_WRITE));
    aml_append(crs,
               aml_interrupt(AML_CONSUMER, AML_LEVEL, AML_ACTIVE_HIGH,
                             AML_EXCLUSIVE, &irq, 1));
    aml_append(dev, aml_name_decl("_CRS", crs));
    aml_append(scope, dev);
}
Inside the virtual machine (Ubuntu 20.04) I ran acpidump and converted the output to .dsl files. The dsdt.dsl contains this entry:
Device (AXPU)
{
    Name (_HID, "AXPU0011")  // _HID: Hardware ID
    Name (_UID, Zero)        // _UID: Unique ID
    Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
    {
        Memory32Fixed (ReadWrite,
            0x09100000,      // Address Base
            0x00080000,      // Address Length
            )
        Interrupt (ResourceConsumer, Level, ActiveHigh, Exclusive, ,, )
        {
            0x000000D0,
        }
    })
}
I'm not sure what I should fix in the ACPI table.
Any comment or advice will be deeply appreciated.
Thank you!
Chan Kim

So I added some prints. This is the result.
create_uart called!
pl011_init called!
pl011_realize called!
pl011 irq not connected!
now calling sysbus_connect_irq for pl011..
now passed sysbus_connect_irq for pl011..
pl011 irq connected!
create_ab21q_axpu_device called!
ab21q_axpu_init called!
ab21q_axpu_realize called!
axpu irq not connected!
now calling sysbus_connect_irq for ab21q_axpu..
now passed sysbus_connect_irq for ab21q_axpu..
ab21q_axpu irq connected!
So, irq-connection-wise, pl011 and ab21q_axpu behave the same!
For pl011, the irq was not connected before the sysbus_connect_irq either;
in the code above I had used qemu_irq_is_connected(s->irq) by mistake and it returned true (a false positive). I fixed it to qemu_irq_is_connected(s->irq[0]), because the pl011 has six irq outputs, and then it returned false before sysbus_connect_irq. After sysbus_connect_irq, both the ab21q_axpu and pl011 irqs were shown to be connected.
And of course I used qemu_irq_is_connected(AB21Q_AXPU(dev)->irq) and
qemu_irq_is_connected(PL011(dev)->irq[0]) to check the irq connection inside the xxx_create functions.
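In other words, the connection check is only meaningful after sysbus_connect_irq has run. A minimal sketch of the ordering in the board code (using the names from the snippets above):

    DeviceState *dev = qdev_new(TYPE_AB21Q_AXPU);
    SysBusDevice *s = SYS_BUS_DEVICE(dev);

    sysbus_realize_and_unref(s, &error_fatal);                  /* realize runs here, irq still unconnected */
    sysbus_connect_irq(s, 0, qdev_get_gpio_in(vms->gic, irq));  /* irq wired up here */

    /* only from this point on does the check succeed */
    if (qemu_irq_is_connected(AB21Q_AXPU(dev)->irq)) {
        printf("axpu irq connected!\n");
    }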
ADD: I later found that the request_irq function returned -EINVAL, so the interrupt was not registered correctly in the driver.
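For reference, a minimal sketch of what the guest-driver side of that registration typically looks like (hypothetical names; axpu_probe and axpu_isr are not from the original code). An -EINVAL from request_irq usually means the IRQ number itself is invalid, e.g. not resolved from the device tree or ACPI:

    /* hypothetical probe function of the guest platform driver */
    static int axpu_probe(struct platform_device *pdev)
    {
        int irq, ret;

        irq = platform_get_irq(pdev, 0);   /* resolved from DT/ACPI, never hard-coded */
        if (irq < 0)
            return irq;

        ret = request_irq(irq, axpu_isr, 0, "axpu", pdev);
        if (ret)
            return ret;                    /* -EINVAL: bad irq number or NULL handler */

        return 0;
    }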

Related

Delay implementation in STM32 using for loop

I am using a NUCLEO-L476RG development board and I am learning to write GPIO drivers for the STM32 family.
I am implementing simple logic in which I need to turn on an LED when a push button is pressed.
I have a strange issue:
Edit 1: The breadboard LED turns ON when the line temp=10 is commented out; it doesn't turn ON when the delay is called. It seems that if I add any line of code to that while loop, the LED does not turn ON.
The breadboard LED turns ON when the delay() call is commented out; it doesn't turn ON when the delay is called.
What could be the issue?
I have powered the board using the mini USB connector on the board, and the clock is configured as MSI at 4 MHz.
#define delay() for(uint32_t i=0; i<=50000; i++);

int main(void)
{
    GPIO_Handle_t NucleoUserLED, NucleoUserPB, BreadBoardLED, BreadBoardPB;
    uint8_t inputVal, BBinpVal;
    uint32_t temp;

    //User green led in the nucleo board connected to PA5
    NucleoUserLED.pGPIO = GPIOA;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_5;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User blue button in the nucleo connected to PC13
    NucleoUserPB.pGPIO = GPIOC;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_13;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;

    //User led in the bread board connected to PC8
    BreadBoardLED.pGPIO = GPIOC;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_8;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User DPDT connected in the breadboard connected to PC6
    BreadBoardPB.pGPIO = GPIOC;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_6;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_PU;

    GPIO_PeriClkCtrl(GPIOA, ENABLE);
    GPIO_PeriClkCtrl(GPIOC, ENABLE);

    GPIO_Init(&NucleoUserLED);
    GPIO_Init(&NucleoUserPB);
    GPIO_Init(&BreadBoardLED);
    GPIO_Init(&BreadBoardPB);

    while(1)
    {
        /*****************************************************************
         * Controlling the IO present in the nucleo board                *
         *****************************************************************/
        inputVal = GPIO_ReadInputPin(NucleoUserPB.pGPIO, NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber);
        BBinpVal = GPIO_ReadInputPin(BreadBoardPB.pGPIO, BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber);

        if(inputVal == 0)
        {
            GPIO_ToggleOutputPin(NucleoUserLED.pGPIO, NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber);
        }

        /*****************************************************************
         * Controlling the IO present in the bread board                 *
         *****************************************************************/
        if (BBinpVal == 0)
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 1);
        }
        else
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 0);
        }

        delay();
    }

    return 0;
}
It is not a function, only a macro definition.
Your loop is likely to be optimized out, so define it instead as:
void inline __attribute__((always_inline)) delay(uint32_t delay)
{
    while(delay--) __asm("");
}
Bear in mind that 50000 can be quite long if you run on low clock settings.
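As a rough sanity check (assuming ~4 cycles per iteration, which is compiler-dependent): 50000 iterations × 4 cycles ≈ 200000 cycles, which at the 4 MHz MSI clock from the question is about 50 ms per call.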
Not sure what the issue is, because "it is not working" is not very specific.
However, there are "quality" issues:
That is an inappropriate use of a macro: there is no benefit over using a function. The function-call overhead argument does not hold; it is a delay, it is supposed to take time!
The empty-loop counter is not declared volatile, so at any optimisation level other than the minimum the compiler is likely to remove the loop altogether.
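For example, simply declaring the counter volatile is enough to keep the loop (a minimal sketch; the delay length is still compiler- and clock-dependent):

    void delay(void)
    {
        /* volatile forces a real memory access each iteration,
           so the optimizer cannot remove the empty loop */
        for (volatile uint32_t i = 0; i <= 50000; i++)
        {
            ;
        }
    }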
A for-loop delay is a crude and generally non-deterministic solution, with a period that will change between compilers, with different compiler options, and on different targets or clock speeds. The STM32 is a Cortex-M device, so you should use the SYSTICK counter for this. For example, as a minimum, something like:
volatile uint32_t tick = 0;

void SysTick_Handler(void)
{
    tick++;
}

void delayms(uint32_t millisec)
{
    static bool init = false;
    if (!init)
    {
        SysTick_Config(SystemCoreClock / 1000);
        init = true;
    }

    uint32_t start = tick;
    while (tick - start < millisec);
}
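Usage in the polling loop is then simply:

    delayms(50);   /* blocks for ~50 ms regardless of optimisation level */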
The issue was solved by declaring the iterator as a global variable. Now the LED turns on when the push button is pressed.
Previous implementation:
#define delay() for(uint32_t i=0; i<=50000; i++);
Working implementation:
uint32_t temp;
void delay(void)
{
    for (temp = 0; temp <= 50000; temp++)
    {
        ;
    }
}
Can anyone tell me how declaring the variable as global solves the issue?
Find the working implementation below
#include <stdint.h>
#include "stm32l476xx.h"
#include "stm32l476xx_gpoi_driver.h"

#if !defined(__SOFT_FP__) && defined(__ARM_FP)
#warning "FPU is not initialized, but the project is compiling for an FPU. Please initialize the FPU before use."
#endif

uint32_t temp;

void delay(void)
{
    for (temp = 0; temp <= 50000; temp++)
    {
        ;
    }
}

int main(void)
{
    GPIO_Handle_t NucleoUserLED, NucleoUserPB, BreadBoardLED, BreadBoardPB;
    volatile uint8_t inputVal, BBinpVal;

    //User green led in the nucleo board connected to PA5
    NucleoUserLED.pGPIO = GPIOA;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_5;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User blue button in the nucleo connected to PC13
    NucleoUserPB.pGPIO = GPIOC;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_13;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;

    //User led in the bread board connected to PC8
    BreadBoardLED.pGPIO = GPIOC;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_8;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_OP;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_NO_PUPD;
    BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinOpType = GPIO_OP_TYPE_PP;

    //User DPDT connected in the breadboard connected to PC6
    BreadBoardPB.pGPIO = GPIOC;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber = GPIO_PIN_6;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinMode = GPIO_MODE_IP;
    BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinPuPdControl = GPIO_IP_PU;

    GPIO_PeriClkCtrl(GPIOA, ENABLE);
    GPIO_PeriClkCtrl(GPIOC, ENABLE);

    GPIO_Init(&NucleoUserLED);
    GPIO_Init(&NucleoUserPB);
    GPIO_Init(&BreadBoardLED);
    GPIO_Init(&BreadBoardPB);

    while(1)
    {
        /*****************************************************************
         * Controlling the IO present in the nucleo board                *
         *****************************************************************/
        inputVal = GPIO_ReadInputPin(NucleoUserPB.pGPIO, NucleoUserPB.GPIO_Pin_Cfg.GPIO_PinNumber);
        BBinpVal = GPIO_ReadInputPin(BreadBoardPB.pGPIO, BreadBoardPB.GPIO_Pin_Cfg.GPIO_PinNumber);

        if(inputVal == 0)
        {
            GPIO_ToggleOutputPin(NucleoUserLED.pGPIO, NucleoUserLED.GPIO_Pin_Cfg.GPIO_PinNumber);
        }

        /*****************************************************************
         * Controlling the IO present in the bread board                 *
         *****************************************************************/
        temp = 10;
        if (BBinpVal == 0)
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 1);
        }
        else
        {
            GPIO_WriteOutputPin(BreadBoardLED.pGPIO, BreadBoardLED.GPIO_Pin_Cfg.GPIO_PinNumber, 0);
        }

        delay();
    }

    return 0;
}
The issue is solved. There was a bug in the driver layer I had written:
whenever a GPIO pin is configured as an input, the registers related to output for that pin should be set to their reset values, or the driver should simply not touch the output-related registers for an input pin.
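A sketch of the idea (a hypothetical fragment of the GPIO_Init driver code, not the poster's actual fix; register names as on the STM32 GPIO peripheral, pin being GPIO_PinNumber):

    if (handle->GPIO_Pin_Cfg.GPIO_PinMode == GPIO_MODE_IP)
    {
        /* input mode: clear the two mode bits for this pin... */
        handle->pGPIO->MODER &= ~(0x3u << (2u * pin));
        /* ...and do NOT write OTYPER/OSPEEDR for an input pin,
           leaving them at their reset values */
    }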

STM32F769NI USB CDC host problem sending simple data to the device

I am making a HID for a data acquisition system. There are a lot of sensors that store test data; when I need it, I connect to them via USB and fetch it. The USB host sends 3 bytes and the USB device, if the bytes are correct, sends back its stored data. Sounds simple.
Previously it was implemented on a PC, but now I am trying to implement it on an STM32F769 Discovery and have some serious problems.
I am using ARM Keil 5.27, with code generated by STM32CubeMX 5.3.0. I tried to make a plain simple program first, to be integrated later with the full touchscreen interface. I implemented this code in main:
if (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
    while (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
    {
        Transmission_function();
    }
And the function itself:
#define DLE 0x10
#define STX 0x2

uint8_t tx_buf[] = {DLE, STX, 120}, RX_FLAG;
uint32_t size_tx = sizeof(tx_buf);

void Transmission_function(void)
{
    if (Appli_state == APPLICATION_READY)
    {
        i = 0;
        USBH_CDC_Transmit(&hUsbHostHS, tx_buf, size_tx);
        HAL_Delay(50);
        RX_FLAG = 0;
    }
}
It should send the message after I press the blue button on the Discovery board. All I get is a hard fault. While debugging, I checked manually after which action I get the error; it was in this function in stm32f7xx_ll_usb.c:
HAL_StatusTypeDef USB_WritePacket(USB_OTG_GlobalTypeDef *USBx, uint8_t *src,
                                  uint8_t ch_ep_num, uint16_t len, uint8_t dma)
{
    uint32_t USBx_BASE = (uint32_t)USBx;
    uint32_t *pSrc = (uint32_t *)src;
    uint32_t count32b, i;

    if (dma == 0U)
    {
        count32b = ((uint32_t)len + 3U) / 4U;
        for (i = 0U; i < count32b; i++)
        {
            USBx_DFIFO((uint32_t)ch_ep_num) = *((__packed uint32_t *)pSrc);
            pSrc++;
        }
    }
    return HAL_OK;
}
But scrolling back in the disassembly I noticed that just before the hard fault the program was in this function inside stm32f7xx_hal_hcd.c, in case GRXSTS_PKTSTS_IN:
static void HCD_RXQLVL_IRQHandler(HCD_HandleTypeDef *hhcd)
{
    USB_OTG_GlobalTypeDef *USBx = hhcd->Instance;
    uint32_t USBx_BASE = (uint32_t)USBx;
    uint32_t pktsts;
    uint32_t pktcnt;
    uint32_t temp;
    uint32_t tmpreg;
    uint32_t ch_num;

    temp = hhcd->Instance->GRXSTSP;
    ch_num = temp & USB_OTG_GRXSTSP_EPNUM;
    pktsts = (temp & USB_OTG_GRXSTSP_PKTSTS) >> 17;
    pktcnt = (temp & USB_OTG_GRXSTSP_BCNT) >> 4;

    switch (pktsts)
    {
    case GRXSTS_PKTSTS_IN:
        /* Read the data into the host buffer. */
        if ((pktcnt > 0U) && (hhcd->hc[ch_num].xfer_buff != (void *)0))
        {
            (void)USB_ReadPacket(hhcd->Instance, hhcd->hc[ch_num].xfer_buff, (uint16_t)pktcnt);

            /* manage multiple Xfer */
            hhcd->hc[ch_num].xfer_buff += pktcnt;
            hhcd->hc[ch_num].xfer_count += pktcnt;

            if ((USBx_HC(ch_num)->HCTSIZ & USB_OTG_HCTSIZ_PKTCNT) > 0U)
            {
                /* re-activate the channel when more packets are expected */
                tmpreg = USBx_HC(ch_num)->HCCHAR;
                tmpreg &= ~USB_OTG_HCCHAR_CHDIS;
                tmpreg |= USB_OTG_HCCHAR_CHENA;
                USBx_HC(ch_num)->HCCHAR = tmpreg;
                hhcd->hc[ch_num].toggle_in ^= 1U;
            }
        }
        break;

    case GRXSTS_PKTSTS_DATA_TOGGLE_ERR:
        break;

    case GRXSTS_PKTSTS_IN_XFER_COMP:
    case GRXSTS_PKTSTS_CH_HALTED:
    default:
        break;
    }
}
The last few lines of the disassembly show this:
0x080018B4 E8BD81F0 POP {r4-r8,pc}
0x080018B8 0000 DCW 0x0000
0x080018BA 1FF8 DCW 0x1FF8
Why does it fail? How can I fix it? I do not have much experience with the USB protocol.
I will post my workaround, though I am not sure why it works. The solution was to use the EXTI0 interrupt instead of just detecting whether PA0 is high, as I had done here:

    if (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
        while (HAL_GPIO_ReadPin(BUTTON_GPIO_Port, BUTTON_Pin))
            Transmission_function();
I changed it to this:
void EXTI0_IRQHandler(void)
{
    /* USER CODE BEGIN EXTI0_IRQn 0 */
    if (Appli_state == APPLICATION_READY) {
        USBH_CDC_Transmit(&hUsbHostHS, Buffer, 3);
    }
    /* USER CODE END EXTI0_IRQn 0 */
    HAL_GPIO_EXTI_IRQHandler(GPIO_PIN_0);
    /* USER CODE BEGIN EXTI0_IRQn 1 */
    /* USER CODE END EXTI0_IRQn 1 */
}

How to put BG96 on power save mode between sending messages to Azure IoT Hub over HTTP

I'm using a Nucleo L496ZG, an X-NUCLEO-IKS01A2 and the Quectel BG96 module to send sensor data (temperature, humidity, etc.) to Azure IoT Central over HTTP.
I've been using the example implementation provided by Avnet here, which works fine, but it's not power-optimized: with a 6700 mAh battery pack it only lasts around 30 hours sending telemetry every ~10 seconds. The goal is for it to last around a week. I'm open to increasing the time between messages, but I also want to save power between sends.
I've gone over the Quectel BG96 manuals and tried two things:
1) Powering off the device by driving the PWRKEY and turning it back on when I need to send a message.
I've gotten this to work, kind of... until I get a hard fault exception, which happens seemingly at random, anywhere from within ~5 minutes of running to 2 hours (messages sending successfully prior to the exception). The output of the crash log parser is the same every time:
Crash location = strncmp [0x08038DF8] (based on PC value)
Caller location = _findenv_r [0x0804119D] (based on LR value)
Stack Pointer at the time of crash = [20008128]
Target and Fault Info:
Processor Arch: ARM-V7M or above
Processor Variant: C24
Forced exception, a fault with configurable priority has been escalated to HardFault
A precise data access error has occurred. Faulting address: 03060B30
The caller location traces back to my .map file, and I don't know what to make of it.
My code:
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.

//#define USE_MQTT

#include <stdlib.h>
#include "mbed.h"
#include "iothubtransporthttp.h"
#include "iothub_client_core_common.h"
#include "iothub_client_ll.h"
#include "azure_c_shared_utility/platform.h"
#include "azure_c_shared_utility/agenttime.h"
#include "jsondecoder.h"
#include "bg96gps.hpp"
#include "azure_message_helper.h"

#define IOT_AGENT_OK CODEFIRST_OK

#include "azure_certs.h"

/* initialize the expansion board && sensors */
#include "XNucleoIKS01A2.h"
static HTS221Sensor *hum_temp;
static LSM6DSLSensor *acc_gyro;
static LPS22HBSensor *pressure;

static const char* connectionString = "xxx";

// to report F uncomment this #define CTOF(x) (((double)(x)*9/5)+32)
#define CTOF(x) (x)

Thread azure_client_thread(osPriorityNormal, 10*1024, NULL, "azure_client_thread");
static void azure_task(void);
EventFlags deleteOK;
size_t g_message_count_send_confirmations;

/* create the GPS elements for example program */
BG96Interface* bg96Interface;

//static int tilt_event;
// void mems_int1(void)
// {
//     tilt_event++;
// }

void mems_init(void)
{
    //acc_gyro->attach_int1_irq(&mems_int1); // Attach callback to LSM6DSL INT1
    hum_temp->enable();     // Enable HTS221 environmental sensor
    pressure->enable();     // Enable barometric pressure sensor
    acc_gyro->enable_x();   // Enable LSM6DSL accelerometer
    //acc_gyro->enable_tilt_detection(); // Enable Tilt Detection
}

void powerUp(void) {
    if (platform_init() != 0) {
        printf("Error initializing the platform\r\n");
        return;
    }
    bg96Interface = (BG96Interface*) easy_get_netif(true);
}

void BG96_Modem_PowerOFF(void)
{
    DigitalOut BG96_RESET(D7);
    DigitalOut BG96_PWRKEY(D10);
    DigitalOut BG97_WAKE(D11);

    BG96_RESET = 0;
    BG96_PWRKEY = 0;
    BG97_WAKE = 0;
    wait_ms(300);
}

void powerDown(){
    platform_deinit();
    BG96_Modem_PowerOFF();
}
//
// The main routine simply prints a banner, initializes the system,
// starts the worker threads and waits for a termination (join)
//
int main(void)
{
    //printStartMessage();

    XNucleoIKS01A2 *mems_expansion_board = XNucleoIKS01A2::instance(I2C_SDA, I2C_SCL, D4, D5);
    hum_temp = mems_expansion_board->ht_sensor;
    acc_gyro = mems_expansion_board->acc_gyro;
    pressure = mems_expansion_board->pt_sensor;

    azure_client_thread.start(azure_task);
    azure_client_thread.join();
    platform_deinit();
    printf(" - - - - - - - ALL DONE - - - - - - - \n");
    return 0;
}
static void send_confirm_callback(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void* userContextCallback)
{
    //userContextCallback;
    // When a message is sent this callback will get invoked
    g_message_count_send_confirmations++;
    deleteOK.set(0x1);
}

void sendMessage(IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle, char* buffer, size_t size)
{
    IOTHUB_MESSAGE_HANDLE messageHandle = IoTHubMessage_CreateFromByteArray((const unsigned char*)buffer, size);
    if (messageHandle == NULL) {
        printf("unable to create a new IoTHubMessage\r\n");
        return;
    }

    if (IoTHubClient_LL_SendEventAsync(iotHubClientHandle, messageHandle, send_confirm_callback, NULL) != IOTHUB_CLIENT_OK)
        printf("FAILED to send! [RSSI=%d]\n", platform_RSSI());
    else
        printf("OK. [RSSI=%d]\n", platform_RSSI());

    IoTHubMessage_Destroy(messageHandle);
}
void azure_task(void)
{
    //bool tilt_detection_enabled=true;
    float gtemp, ghumid, gpress;
    int k;
    int msg_sent = 1;

    while (true) {
        powerUp();
        mems_init();

        /* Setup IoTHub client configuration */
        IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle = IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
        if (iotHubClientHandle == NULL) {
            printf("Failed on IoTHubClient_Create\r\n");
            return;
        }

        // add the certificate information
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"TrustedCerts\"\r\n");

#if MBED_CONF_APP_TELUSKIT == 1
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "product_info", "TELUSIOTKIT") != IOTHUB_CLIENT_OK)
            printf("failure to set option \"product_info\"\r\n");
#endif

        // polls will happen effectively at ~10 seconds. The default value of minimumPollingTime is 25 minutes.
        // For more information, see:
        // https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
        unsigned int minimumPollingTime = 9;
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "MinimumPollingTime", &minimumPollingTime) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"MinimumPollingTime\"\r\n");

        IoTDevice* iotDev = (IoTDevice*)malloc(sizeof(IoTDevice));
        if (iotDev == NULL) {
            return;
        }
        setUpIotStruct(iotDev);

        char* msg;
        size_t msgSize;

        hum_temp->get_temperature(&gtemp); // get Temp
        hum_temp->get_humidity(&ghumid);   // get Humidity
        pressure->get_pressure(&gpress);   // get pressure

        iotDev->Temperature = CTOF(gtemp);
        iotDev->Humidity = (int)ghumid;
        iotDev->Pressure = (int)gpress;

        printf("(%04d)", msg_sent++);
        msg = makeMessage(iotDev);
        msgSize = strlen(msg);
        sendMessage(iotHubClientHandle, msg, msgSize);
        free(msg);

        iotDev->Tilt &= 0x2;

        /* schedule IoTHubClient to send events/receive commands */
        IOTHUB_CLIENT_STATUS status;
        while ((IoTHubClient_LL_GetSendStatus(iotHubClientHandle, &status) == IOTHUB_CLIENT_OK) && (status == IOTHUB_CLIENT_SEND_STATUS_BUSY))
        {
            IoTHubClient_LL_DoWork(iotHubClientHandle);
            ThisThread::sleep_for(100);
        }

        deleteOK.wait_all(0x1);
        free(iotDev);
        IoTHubClient_LL_Destroy(iotHubClientHandle);
        powerDown();
        ThisThread::sleep_for(300000);
    }
    return;
}
I know PSM is probably the way to go, since powering the device on and off draws a lot of power, but it would be useful if someone had an idea of what is happening here.
2) Putting the device into PSM between sending messages.
The BG96 library in the example code I'm using doesn't have a method to turn on PSM, so I tried to implement my own. When I run it, it hits an exception right away, so I know it's wrong (I'm very new to embedded development and have no prior experience with AT commands).
/** ----------------------------------------------------------
* this is a method provided by the current library
* @brief Tx a string to the BG96 and wait for an OK response
* @param none
* @retval true if OK received, false otherwise
*/
bool BG96::tx2bg96(char* cmd) {
    bool ok = false;
    _bg96_mutex.lock();
    ok = _parser.send(cmd) && _parser.recv("OK");
    _bg96_mutex.unlock();
    return ok;
}

/**
* method I created in an attempt to use PSM
*/
bool BG96::psm(void) {
    return tx2bg96((char*)"AT+CPSMS=1,,,”00000100”,”00000001”");
}
Can someone tell me what I'm doing wrong and provide any guidance on how I can achieve my goal of having my device run on battery for longer?
Thank you!!
I got Power Saving Mode working by using Mbed's ATCmdParser and the AT+QPSMS command as per Quectel's docs. Note that the modem doesn't always go into power saving mode right away. I also found that I have to restart the modem afterwards or else I get weird behaviour. My code looks something like this:
bool BG96::psm(char* T3412, char* T3324) {
    _bg96_mutex.lock();
    if (_parser.send("AT+QPSMS=1,,,\"%s\",\"%s\"", T3412, T3324) && _parser.recv("OK")) {
        _bg96_mutex.unlock();
    } else {
        _bg96_mutex.unlock();
        return false;
    }
    return BG96Ready(); // restarts the modem
}
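For example, calling psm("00000100", "00000001") requests the same timer values as in the original attempt. Per the 3GPP GPRS timer encoding, the top three bits of each 8-bit string select a unit and the low five bits a multiplier, so the exact durations depend on those unit bits; the encoding tables are in the Quectel docs referenced above.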
To send a message to Azure, the modem then needs to be woken up manually by driving the PWRKEY to restart bidirectional communication, and a new client handle needs to be created and torn down each time, since the Azure connection uses keep-alives and the modem is unreachable while it is in PSM.

No Data Receiving form UART with Interrupt, STM32F4, HAL drivers

When I send a request across the UART port from the PC (serial monitor) to the STM32F4 Discovery board, the data is not received. The board should normally answer with the same request it received (UART mirroring). I use an interrupt (without DMA) to send or receive a message. In the interrupt service routine an interrupt flag is set; this flag is read in the main loop. I am not using a callback function.
Without the interrupt service (with only HAL_UART_Transmit(...) and HAL_UART_Receive(...) in the main loop) everything works fine. But I want communication driven by interrupts.
In run mode I have enabled breakpoints in the ISR and the two if statements.
I wanted to know whether there is an issue with the ISR routine, but the ISR works as it should: on a request from the PC, the ISR and the receive if statement are reached.
The problem is that the receive register stays empty, and as long as it remains empty the controller will not send the request message back.
What is wrong? Where is the problem, and can you please help me? Is the configuration of the UART port right?
Thanks for your help & support!
volatile enum RxStates {
    UART_RX, DUMMY_RX, WAIT_RX
} stateRx;

volatile enum TxStates {
    UART_TX_READY, DUMMY_TX, WAIT_TX
} stateTx;

static UART_HandleTypeDef s_UARTHandle;
GPIO_InitTypeDef GPIOC2_InitStruct; //GPIO
uint8_t empfang[8];                 //buffer

void UART_ini(void)
{
    __HAL_RCC_GPIOA_CLK_ENABLE();
    __USART2_CLK_ENABLE();

    //PIN TX
    GPIOC2_InitStruct.Pin = GPIO_PIN_2;
    GPIOC2_InitStruct.Mode = GPIO_MODE_AF_PP;
    GPIOC2_InitStruct.Alternate = GPIO_AF7_USART2;
    GPIOC2_InitStruct.Speed = GPIO_SPEED_FAST;
    GPIOC2_InitStruct.Pull = GPIO_PULLUP;
    HAL_GPIO_Init(GPIOA, &GPIOC2_InitStruct);

    //PIN RX
    GPIOC2_InitStruct.Pin = GPIO_PIN_3;
    GPIOC2_InitStruct.Mode = GPIO_MODE_AF_OD;
    HAL_GPIO_Init(GPIOA, &GPIOC2_InitStruct);

    //USART2
    s_UARTHandle.Instance = USART2;
    s_UARTHandle.Init.BaudRate = 9600;
    s_UARTHandle.Init.WordLength = UART_WORDLENGTH_8B;
    s_UARTHandle.Init.StopBits = UART_STOPBITS_1;
    s_UARTHandle.Init.Parity = UART_PARITY_NONE;
    s_UARTHandle.Init.HwFlowCtl = UART_HWCONTROL_NONE;
    s_UARTHandle.Init.Mode = UART_MODE_TX_RX;
    HAL_UART_Init(&s_UARTHandle);

    __HAL_UART_ENABLE_IT(&s_UARTHandle, UART_IT_RXNE | UART_IT_TC);
    HAL_NVIC_SetPriority(USART2_IRQn, 15, 15);
    HAL_NVIC_EnableIRQ(USART2_IRQn); // Enable Interrupt
}//UART
int main(int argc, char* argv[])
{
    //initialization of the interrupt flags
    stateRx = WAIT_RX;
    stateTx = WAIT_TX;

    UART_ini(); //initialization UART

    while (1)
    {
        //receive interrupt flag
        if (stateRx == UART_RX)
        {
            HAL_GPIO_WritePin(GPIOD, GPIO_PIN_13, GPIO_PIN_SET);   //set LED
            HAL_UART_Receive(&s_UARTHandle, empfang, 2, 10000000); //receive message
            stateRx = WAIT_RX;                                     //RESET flag
        }

        //transmit interrupt flag
        if (stateTx == UART_TX_READY)
        {
            HAL_GPIO_WritePin(GPIOD, GPIO_PIN_13, GPIO_PIN_SET);   //set LED
            HAL_UART_Transmit(&s_UARTHandle, empfang, 2, 10000);   //send message
            stateTx = WAIT_TX;                                     //RESET flag
        }

        //RESET LED
        if (stateTx != UART_TX_READY && stateRx != UART_RX)
        {
            HAL_GPIO_WritePin(GPIOD, GPIO_PIN_13, RESET);          //RESET LED
        }
    }//while
}//main
void USART2_IRQHandler()
{
    if (__HAL_UART_GET_FLAG(&s_UARTHandle, UART_FLAG_RXNE) == SET)
    {
        __HAL_UART_CLEAR_FLAG(&s_UARTHandle, UART_FLAG_RXNE); //clear ISR flag
        stateRx = UART_RX;                                    //set RX flag
    }

    if (__HAL_UART_GET_FLAG(&s_UARTHandle, UART_FLAG_TC) == SET)
    {
        __HAL_UART_CLEAR_FLAG(&s_UARTHandle, UART_FLAG_TC);   //clear ISR flag
        stateTx = UART_TX_READY;                              //set TX flag
    }
}//ISR
//EOF
Look into the HAL_UART_Receive function: it waits for the UART_FLAG_RXNE flag, but you clear this flag in USART2_IRQHandler.
Usually, when you use the RX interrupt, you have to save the received data into a user buffer inside the ISR and then parse it from the main loop.
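A minimal sketch of that pattern (rxBuf and rxLen are illustrative names; on the STM32F4, reading DR returns the received byte and clears RXNE in one step):

    volatile uint8_t rxBuf[8];
    volatile uint32_t rxLen = 0;

    void USART2_IRQHandler(void)
    {
        if (__HAL_UART_GET_FLAG(&s_UARTHandle, UART_FLAG_RXNE) == SET)
        {
            uint8_t byte = (uint8_t)(USART2->DR & 0xFF); /* reading DR clears RXNE */
            if (rxLen < sizeof(rxBuf))
            {
                rxBuf[rxLen++] = byte;
            }
            stateRx = UART_RX; /* tell the main loop data is available */
        }
    }

The main loop then parses rxBuf instead of calling HAL_UART_Receive again.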

STM32F4: SD-Card using FatFs and USB fails

(also asked on SE: Electrical Engineering)
In my application I've set up an STM32F4, an SD card and USB-CDC (all with CubeMX).
Using a PC, I send commands to the STM32, which then does things on the SD card.
The commands are handled using a "communicationBuffer" (implemented by me) which waits for commands over USB, UART, ... and sets a flag when a \n character is received. The main loop polls this flag and, if it is set, a parser handles the command. So far, so good.
When I send commands via UART, it works fine: I can get a list of the files on the SD card or perform other access via FatFs without a problem.
The problem occurs when I receive a command via USB-CDC. The parser works as expected, but FatFs reports FR_NO_FILESYSTEM (13) in f_opendir.
Other FatFs calls also fail with this error code.
After one failed USB command, commands via UART fail as well. It seems as if the USB somehow crashes the initialized SD-card driver.
Any idea how I can resolve this behaviour? Or a starting point for debugging?
My USB implementation:
I'm using CubeMX, and therefore the prescribed way to initialize the USB-CDC interface:
main() calls MX_USB_DEVICE_Init(void).
In usbd_conf.c I've got:
void HAL_PCD_MspInit(PCD_HandleTypeDef* pcdHandle)
{
    GPIO_InitTypeDef GPIO_InitStruct;

    if (pcdHandle->Instance == USB_OTG_FS)
    {
        /* USER CODE BEGIN USB_OTG_FS_MspInit 0 */
        /* USER CODE END USB_OTG_FS_MspInit 0 */

        /**USB_OTG_FS GPIO Configuration
        PA11 ------> USB_OTG_FS_DM
        PA12 ------> USB_OTG_FS_DP
        */
        GPIO_InitStruct.Pin = OTG_FS_DM_Pin|OTG_FS_DP_Pin;
        GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
        GPIO_InitStruct.Pull = GPIO_NOPULL;
        GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
        GPIO_InitStruct.Alternate = GPIO_AF10_OTG_FS;
        HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);

        /* Peripheral clock enable */
        __HAL_RCC_USB_OTG_FS_CLK_ENABLE();

        /* Peripheral interrupt init */
        HAL_NVIC_SetPriority(OTG_FS_IRQn, 7, 1);
        HAL_NVIC_EnableIRQ(OTG_FS_IRQn);

        /* USER CODE BEGIN USB_OTG_FS_MspInit 1 */
        /* USER CODE END USB_OTG_FS_MspInit 1 */
    }
}
and the receive process is implemented in usbd_cdc_if.c as follows:
static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 6 */
    mRootObject->mUsbBuffer->fillBuffer(Buf, *Len);
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);
    return (USBD_OK);
    /* USER CODE END 6 */
}
fillBuffer is implemented as follows (I use the same implementation for UART and USB transfer, with separate instances for the respective interfaces; mBuf is an instance variable of type std::vector<char>):
void commBuf::fillBuffer(uint8_t *buf, size_t len)
{
    // Check if last fill has timed out
    if (SystemTime::getMS() - lastActionTime > timeout) {
        mBuf.clear();
    }
    lastActionTime = SystemTime::getMS();

    // Fill new content
    mBuf.insert(mBuf.end(), buf, buf + len);

    uint32_t done = 0;
    while (!done) {
        for (auto i = mBuf.end() - len, ee = mBuf.end(); i != ee; ++i) {
            if (*i == '\n') {
                newCommand = true;
                myCommand = std::string((char*) &mBuf[0], i - mBuf.begin() + 1);
                mBuf.erase(mBuf.begin(), mBuf.begin() + (i - mBuf.begin() + 1));
                break;
            }
        }
        done = 1;
    }
}
I resolved the problem:
In usbd_cdc_if.c the #define APP_RX_DATA_SIZE was set to 4 (for some unknown reason). As this is lower than the packet size, incoming packets larger than 4 bytes were overwriting my memory.
It so happened that the adjacent region of memory was the FATFS* FatFs[] pointer list to the initialized FATFS filesystem structs.
So the address of that struct was overwritten whenever a command of 5 or more bytes arrived.
Phew, that was a tough one.
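For reference, a full-speed USB bulk endpoint can deliver up to 64 bytes per packet, so the CDC receive buffer must be at least that large. In the usual CubeMX template this looks like the following (the exact value is a project choice; anything >= 64 works):

    /* usbd_cdc_if.c: must hold at least one full-speed packet */
    #define APP_RX_DATA_SIZE  64
    #define APP_TX_DATA_SIZE  64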