STM32L496 - FLASH Page Erase Has No Effect

I have a project originally built for the STM32L432 that I am porting to the STM32L496. However, the HAL_FLASHEx_Erase function that ST provides as part of the HAL works on one device but not the other. The FLASH arrays of these two micros are organized virtually identically (same 72-bit wide data reads/writes); the only difference is that the L496 has a second bank of FLASH. I have seen some users run into issues with that, but I am NOT attempting to use Bank 2 or come anywhere close to it; my last address is at 0x801FFFF.
I can manually erase the FLASH using STM32CubeProgrammer, which fills it with 0xFFs. That satisfies the requirement that FLASH be "erased" before writing, and I can then write one block of data. But I cannot modify it again from code: once I have written that first time, I cannot clear the block of data I just wrote (verified using the Memory window view in IAR).
Again, the exact same piece of code works for one L-series part but not the other. Anyone have any ideas?
HAL_StatusTypeDef HAL_FLASHEx_Erase(FLASH_EraseInitTypeDef *pEraseInit, uint32_t *PageError)
{
  HAL_StatusTypeDef status = HAL_ERROR;
  uint32_t page_index = 0;

  /* Process Locked */
  __HAL_LOCK(&pFlash);

  /* Check the parameters */
  assert_param(IS_FLASH_TYPEERASE(pEraseInit->TypeErase));

  /* Wait for last operation to be completed */
  status = FLASH_WaitForLastOperation((uint32_t)FLASH_TIMEOUT_VALUE);

  if (status == HAL_OK)
  {
    pFlash.ErrorCode = HAL_FLASH_ERROR_NONE;

    if (pEraseInit->TypeErase == FLASH_TYPEERASE_MASSERASE)
    {
      /* Mass erase to be done */
      FLASH_MassErase(pEraseInit->Banks);

      /* Wait for last operation to be completed */
      status = FLASH_WaitForLastOperation((uint32_t)FLASH_TIMEOUT_VALUE);

#if defined(STM32L471xx) || defined(STM32L475xx) || defined(STM32L476xx) || defined(STM32L485xx) || defined(STM32L486xx)
      /* If the erase operation is completed, disable the MER1 and MER2 Bits */
      CLEAR_BIT(FLASH->CR, (FLASH_CR_MER1 | FLASH_CR_MER2));
#else
      /* If the erase operation is completed, disable the MER1 Bit */
      CLEAR_BIT(FLASH->CR, (FLASH_CR_MER1));
#endif
    }
    else
    {
      /* Initialization of PageError variable */
      *PageError = 0xFFFFFFFF;

      for (page_index = pEraseInit->Page; page_index < (pEraseInit->Page + pEraseInit->NbPages); page_index++)
      {
        FLASH_PageErase(page_index, pEraseInit->Banks);

        /* Wait for last operation to be completed */
        status = FLASH_WaitForLastOperation((uint32_t)FLASH_TIMEOUT_VALUE);

        /* If the erase operation is completed, disable the PER Bit */
        CLEAR_BIT(FLASH->CR, (FLASH_CR_PER | FLASH_CR_PNB));

        if (status != HAL_OK)
        {
          /* In case of error, stop erase procedure and return the faulty address */
          *PageError = page_index;
          break;
        }
      }
    }

    /* Flush the caches to be sure of the data consistency */
    FLASH_FlushCaches();
  }

  /* Process Unlocked */
  __HAL_UNLOCK(&pFlash);

  return status;
}
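For context, a minimal calling sequence for a page erase on an L4-series part looks something like the sketch below (a sketch only, not the poster's code; the page/bank values are illustrative). One thing worth checking when an erase "does nothing": as the source above shows, HAL_FLASHEx_Erase calls FLASH_WaitForLastOperation first and skips the erase entirely if it fails, and stale error flags left in FLASH->SR from a previous operation make that call return an error.

#include "stm32l4xx_hal.h"

/* Hypothetical helper: erase a single page on Bank 1. The page number
 * and bank are illustrative; they must match your part's flash layout. */
HAL_StatusTypeDef erase_one_page(uint32_t page)
{
    FLASH_EraseInitTypeDef erase = {0};
    uint32_t page_error = 0;
    HAL_StatusTypeDef status;

    HAL_FLASH_Unlock();

    /* Clear stale error flags first: if a previous operation left
     * PROGERR/WRPERR etc. set in FLASH->SR, the next erase aborts. */
    __HAL_FLASH_CLEAR_FLAG(FLASH_FLAG_ALL_ERRORS);

    erase.TypeErase = FLASH_TYPEERASE_PAGES;
    erase.Banks     = FLASH_BANK_1;   /* on dual-bank parts this must be set */
    erase.Page      = page;
    erase.NbPages   = 1;

    status = HAL_FLASHEx_Erase(&erase, &page_error);

    HAL_FLASH_Lock();
    return status;
}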

Toggling LED through button (ESP32 FreeRTOS) + binary semaphore

I had already done several projects using simple FreeRTOS ideas: LED, button, implementing semaphores, queues, or some interrupt. But I can't get this simple code to run, though.
#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/semphr.h"
#include "driver/gpio.h"

#define BLINK_GPIO 21 //2
#define BUTTON_GPIO 0

void task_blink(void *pvParameters);
void task_botao(void *pvParameters);
//void wd_off_task(void *pvParameters);

SemaphoreHandle_t sem_sinc;

void app_main(void)
{
    gpio_pad_select_gpio(BLINK_GPIO);                  // Configure the pin as IO
    gpio_set_direction(BLINK_GPIO, GPIO_MODE_OUTPUT);  // Configure the IO as an output
    gpio_pad_select_gpio(BUTTON_GPIO);                 // Configure the pin as IO
    gpio_set_direction(BUTTON_GPIO, GPIO_MODE_INPUT);  // Configure the IO as an input
    vSemaphoreCreateBinary(sem_sinc);                  // Create the semaphore
    xSemaphoreTake(sem_sinc, 0);                       // Make sure it starts at 0
    xTaskCreate(task_blink, "Task Blink", 1024, NULL, 2, NULL);
    printf("Task Blink created!!!\r\n");
    xTaskCreate(task_botao, "Task Botao", 1024, NULL, 2, NULL);
    printf("Task Botao created!!!\r\n");
    //xTaskCreate(wd_off_task, "Task desliga WD", 1024, NULL, 1, NULL);
}

void task_botao(void *pvParameters)
{
    while (1)
    {
        if (gpio_get_level(BUTTON_GPIO) == 0)
        {
            while (gpio_get_level(BUTTON_GPIO) == 0) {}
            printf("Button pressed!!!\r\n");
            xSemaphoreGive(sem_sinc);
            vTaskDelay(1);
        }
    }
}

void task_blink(void *pvParameters)
{
    while (1)
    {
        if (xSemaphoreTake(sem_sinc, portMAX_DELAY) == pdTRUE)
        {
            printf("Blink LED!!!\r\n");
            if ((gpio_get_level(BUTTON_GPIO) == 0))
                gpio_set_level(BLINK_GPIO, 1);
            else
                gpio_set_level(BLINK_GPIO, 0);
        }
    }
}
The issue:
The code builds fine, and so does flashing it to the ESP. As I press the button, the expected messages appear in the terminal. The only problem is that I can't toggle the LED's level: all I get is the LED turning on and then off again quickly (every time the semaphore synchronizes the two tasks).
I suspect it's some kind of configuration issue related to this GPIO. (Although I'm using the reset pin to read the button, I still don't think that's the matter, because the pin is properly configured in the lines above.)
Your switch polling needs to detect transitions, but avoid erroneously detecting switch bounce as a valid transition. For example:
#define BUTTON_DN   0
#define BUTTON_UP   1
#define POLL_DELAY  50
void task_botao(void *pvParameters)
{
    int button_state = gpio_get_level( BUTTON_GPIO ) ;

    for(;;)
    {
        int input_state = gpio_get_level( BUTTON_GPIO ) ;

        // If button pressed (was up, now down)...
        if( input_state == BUTTON_DN &&
            button_state != BUTTON_DN )
        {
            button_state = BUTTON_DN ;

            // Signal button press event.
            xSemaphoreGive( sem_sinc ) ;
        }
        // otherwise if button released (was down, now up)...
        else if( input_state == BUTTON_UP &&
                 button_state != BUTTON_UP )
        {
            button_state = BUTTON_UP ;
        }

        // Delay to yield processor and
        // avoid switch bounce on transitions
        vTaskDelay( POLL_DELAY );
    }
}
The blinking task need not read the button input at all; not only is it unnecessary, it is also bad design:
void task_blink(void *pvParameters)
{
    int led_state = 0 ;
    gpio_set_level( BLINK_GPIO, led_state ) ;

    for(;;)
    {
        if( xSemaphoreTake( sem_sinc, portMAX_DELAY ) == pdTRUE )
        {
            led_state = !led_state ;
            gpio_set_level( BLINK_GPIO, led_state ) ;
        }
    }
}
There are some things to consider. Your thinking is logical, but there are some issues.
A button is a mechanical device, and while you press it, the signal is not the straightforward 0-instead-of-1 you might expect. If you have an oscilloscope, I recommend checking the voltage level on the GPIO input, or google "button bounce" and "floating pins". Those two concepts should be clear; the processor is very literal in interpreting the values.
Example: https://hackaday.com/wp-content/uploads/2015/11/debounce_bouncing.png
Your tasks are also constantly polling the button status, at the cost of processor time. For small projects that is not an issue, but it becomes one as they grow.
What you want instead is to set up an interrupt on the button pin: the moment the level changes, it fires your code. Then you don't have to double-check the GPIO status in two tasks, with the chance of missing the state in the second one (because of delays). It's important to realize you are currently checking the same level twice.
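A minimal sketch of that interrupt-driven approach, assuming ESP-IDF's GPIO driver (button_isr and button_intr_init are illustrative names; a real design should still debounce, for example by time-stamping events in the ISR):

#include "freertos/FreeRTOS.h"
#include "freertos/semphr.h"
#include "driver/gpio.h"
#include "esp_attr.h"

#define BUTTON_GPIO 0

static SemaphoreHandle_t sem_sinc;

// ISR: fires on the falling edge (button press). Keep it short; just
// hand the event to the task via the semaphore.
static void IRAM_ATTR button_isr(void *arg)
{
    BaseType_t woken = pdFALSE;
    xSemaphoreGiveFromISR(sem_sinc, &woken);
    portYIELD_FROM_ISR(woken);
}

void button_intr_init(void)
{
    gpio_config_t io = {
        .pin_bit_mask = 1ULL << BUTTON_GPIO,
        .mode         = GPIO_MODE_INPUT,
        .pull_up_en   = GPIO_PULLUP_ENABLE,   // avoid a floating input
        .intr_type    = GPIO_INTR_NEGEDGE,    // high -> low on press
    };
    gpio_config(&io);

    sem_sinc = xSemaphoreCreateBinary();      // created here for self-containment
    gpio_install_isr_service(0);              // default ISR service
    gpio_isr_handler_add(BUTTON_GPIO, button_isr, NULL);
}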
Not a problem now, but maybe later: the stack sizes of the tasks are rather small. Make it a habit to check whether they are sufficient by querying the remaining free stack; vague problems arise when they are not.
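FreeRTOS has a direct API for that check, uxTaskGetStackHighWaterMark(); a small sketch (note that vanilla FreeRTOS reports the value in words, while ESP-IDF reports bytes):

#include <stdio.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

/* Requires INCLUDE_uxTaskGetStackHighWaterMark to be enabled (it is by
   default in ESP-IDF). NULL means "the calling task". */
void report_stack_headroom(const char *tag)
{
    UBaseType_t headroom = uxTaskGetStackHighWaterMark(NULL);
    printf("%s: min free stack so far: %u\n", tag, (unsigned)headroom);
}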

How to put BG96 on power save mode between sending messages to Azure IoT Hub over HTTP

I'm using a Nucleo L496ZG, X-NUCLEO-IKS01A2 and the Quectel BG96 module to send sensor data (temperature, humidity etc..) to Azure IoT Central over HTTP.
I've been using the example implementation provided by Avnet here, which works fine, but it's not power-optimized: with a 6700mAh battery pack it only lasts around 30 hours sending telemetry every ~10 seconds. The goal is for it to last around a week. I'm open to increasing the time between messages, but I also want to save power between sends.
I've gone over the Quectel BG96 manuals and I've tried two things:
1) Powering off the device by driving the PWRKEY, and turning it back on when I need to send a message.
I've gotten this to work, kinda… until I get a HardFault exception, which happens seemingly randomly anywhere from within ~5 minutes of running to 2 hours (messages send successfully prior to the exception). The output of the crash log parser is the same every time:
Crash location = strncmp [0x08038DF8] (based on PC value)
Caller location = _findenv_r [0x0804119D] (based on LR value)
Stack Pointer at the time of crash = [20008128]
Target and Fault Info:
Processor Arch: ARM-V7M or above
Processor Variant: C24
Forced exception, a fault with configurable priority has been escalated to HardFault
A precise data access error has occurred. Faulting address: 03060B30
The caller location traces back to my .map file and I don't know what to make of it.
My code:
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.

//#define USE_MQTT

#include <stdlib.h>
#include "mbed.h"
#include "iothubtransporthttp.h"
#include "iothub_client_core_common.h"
#include "iothub_client_ll.h"
#include "azure_c_shared_utility/platform.h"
#include "azure_c_shared_utility/agenttime.h"
#include "jsondecoder.h"
#include "bg96gps.hpp"
#include "azure_message_helper.h"

#define IOT_AGENT_OK CODEFIRST_OK

#include "azure_certs.h"

/* initialize the expansion board && sensors */
#include "XNucleoIKS01A2.h"
static HTS221Sensor  *hum_temp;
static LSM6DSLSensor *acc_gyro;
static LPS22HBSensor *pressure;

static const char* connectionString = "xxx";

// to report F uncomment this #define CTOF(x) (((double)(x)*9/5)+32)
#define CTOF(x) (x)

Thread azure_client_thread(osPriorityNormal, 10*1024, NULL, "azure_client_thread");
static void azure_task(void);

EventFlags deleteOK;
size_t g_message_count_send_confirmations;

/* create the GPS elements for example program */
BG96Interface* bg96Interface;

//static int tilt_event;
// void mems_int1(void)
// {
//     tilt_event++;
// }

void mems_init(void)
{
    //acc_gyro->attach_int1_irq(&mems_int1);  // Attach callback to LSM6DSL INT1
    hum_temp->enable();                       // Enable HTS221 environmental sensor
    pressure->enable();                       // Enable barometric pressure sensor
    acc_gyro->enable_x();                     // Enable LSM6DSL accelerometer
    //acc_gyro->enable_tilt_detection();      // Enable Tilt Detection
}

void powerUp(void)
{
    if (platform_init() != 0) {
        printf("Error initializing the platform\r\n");
        return;
    }
    bg96Interface = (BG96Interface*) easy_get_netif(true);
}

void BG96_Modem_PowerOFF(void)
{
    DigitalOut BG96_RESET(D7);
    DigitalOut BG96_PWRKEY(D10);
    DigitalOut BG97_WAKE(D11);
    BG96_RESET  = 0;
    BG96_PWRKEY = 0;
    BG97_WAKE   = 0;
    wait_ms(300);
}

void powerDown()
{
    platform_deinit();
    BG96_Modem_PowerOFF();
}

//
// The main routine simply prints a banner, initializes the system,
// starts the worker threads and waits for a termination (join)
int main(void)
{
    //printStartMessage();
    XNucleoIKS01A2 *mems_expansion_board = XNucleoIKS01A2::instance(I2C_SDA, I2C_SCL, D4, D5);
    hum_temp = mems_expansion_board->ht_sensor;
    acc_gyro = mems_expansion_board->acc_gyro;
    pressure = mems_expansion_board->pt_sensor;

    azure_client_thread.start(azure_task);
    azure_client_thread.join();
    platform_deinit();
    printf(" - - - - - - - ALL DONE - - - - - - - \n");
    return 0;
}

static void send_confirm_callback(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void* userContextCallback)
{
    //userContextCallback;
    // When a message is sent this callback will get invoked
    g_message_count_send_confirmations++;
    deleteOK.set(0x1);
}

void sendMessage(IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle, char* buffer, size_t size)
{
    IOTHUB_MESSAGE_HANDLE messageHandle = IoTHubMessage_CreateFromByteArray((const unsigned char*)buffer, size);
    if (messageHandle == NULL) {
        printf("unable to create a new IoTHubMessage\r\n");
        return;
    }
    if (IoTHubClient_LL_SendEventAsync(iotHubClientHandle, messageHandle, send_confirm_callback, NULL) != IOTHUB_CLIENT_OK)
        printf("FAILED to send! [RSSI=%d]\n", platform_RSSI());
    else
        printf("OK. [RSSI=%d]\n", platform_RSSI());
    IoTHubMessage_Destroy(messageHandle);
}

void azure_task(void)
{
    //bool tilt_detection_enabled=true;
    float gtemp, ghumid, gpress;
    int k;
    int msg_sent = 1;

    while (true) {
        powerUp();
        mems_init();

        /* Setup IoTHub client configuration */
        IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle = IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
        if (iotHubClientHandle == NULL) {
            printf("Failed on IoTHubClient_Create\r\n");
            return;
        }

        // add the certificate information
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"TrustedCerts\"\r\n");

#if MBED_CONF_APP_TELUSKIT == 1
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "product_info", "TELUSIOTKIT") != IOTHUB_CLIENT_OK)
            printf("failure to set option \"product_info\"\r\n");
#endif

        // polls will happen effectively at ~10 seconds. The default value of minimumPollingTime is 25 minutes.
        // For more information, see:
        // https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
        unsigned int minimumPollingTime = 9;
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "MinimumPollingTime", &minimumPollingTime) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"MinimumPollingTime\"\r\n");

        IoTDevice* iotDev = (IoTDevice*)malloc(sizeof(IoTDevice));
        if (iotDev == NULL) {
            return;
        }
        setUpIotStruct(iotDev);

        char* msg;
        size_t msgSize;

        hum_temp->get_temperature(&gtemp);   // get temperature
        hum_temp->get_humidity(&ghumid);     // get humidity
        pressure->get_pressure(&gpress);     // get pressure

        iotDev->Temperature = CTOF(gtemp);
        iotDev->Humidity    = (int)ghumid;
        iotDev->Pressure    = (int)gpress;

        printf("(%04d)", msg_sent++);
        msg = makeMessage(iotDev);
        msgSize = strlen(msg);
        sendMessage(iotHubClientHandle, msg, msgSize);
        free(msg);

        iotDev->Tilt &= 0x2;

        /* schedule IoTHubClient to send events/receive commands */
        IOTHUB_CLIENT_STATUS status;
        while ((IoTHubClient_LL_GetSendStatus(iotHubClientHandle, &status) == IOTHUB_CLIENT_OK) && (status == IOTHUB_CLIENT_SEND_STATUS_BUSY))
        {
            IoTHubClient_LL_DoWork(iotHubClientHandle);
            ThisThread::sleep_for(100);
        }

        deleteOK.wait_all(0x1);
        free(iotDev);
        IoTHubClient_LL_Destroy(iotHubClientHandle);
        powerDown();
        ThisThread::sleep_for(300000);
    }
    return;
}
I know PSM is probably the way to go, since powering the device on and off draws a lot of power, but it would be useful if someone had an idea of what is happening here.
2) Putting the device into PSM between sending messages.
The BG96 library in the example code I'm using doesn't have a method to turn on PSM, so I tried to implement my own. When I run it, it hits an exception right away, so I know it's wrong (I'm very new to embedded development and have no prior experience with AT commands).
/** ----------------------------------------------------------
 * this is a method provided by current library
 * @brief  Tx a string to the BG96 and wait for an OK response
 * @param  none
 * @retval true if OK received, false otherwise
 */
bool BG96::tx2bg96(char* cmd) {
    bool ok = false;
    _bg96_mutex.lock();
    ok = _parser.send(cmd) && _parser.recv("OK");
    _bg96_mutex.unlock();
    return ok;
}

/**
 * method I created in an attempt to use PSM
 */
bool BG96::psm(void) {
    return tx2bg96((char*)"AT+CPSMS=1,,,”00000100”,”00000001”");
}
Can someone tell me what I'm doing wrong and provide any guidance on how I can achieve my goal of having my device run on battery for longer?
Thank you!!
I got Power Saving Mode working by using Mbed's ATCmdParser and the AT+QPSMS command as per Quectel's docs. Note that the modem doesn't always go into power saving mode right away. I also found that I have to restart the modem afterwards or else I get weird behaviour. My code looks something like this:
bool BG96::psm(char* T3412, char* T3324) {
    _bg96_mutex.lock();
    if (_parser.send("AT+QPSMS=1,,,\"%s\",\"%s\"", T3412, T3324) && _parser.recv("OK")) {
        _bg96_mutex.unlock();
    } else {
        _bg96_mutex.unlock();
        return false;
    }
    return BG96Ready();   // restarts the modem
}
To send a message to Azure, the modem needs to be manually woken by driving the PWRKEY to start bi-directional communication, and a new client handle has to be created and torn down every time, since the Azure connection uses keepAlive and the modem is unreachable while in PSM.
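If it helps to decode those magic strings: they follow the GPRS timer encoding of 3GPP TS 24.008, where the top three bits select a unit and the low five bits a multiplier (so, if I read the tables right, "00000100" as T3412 requests 4 × 10 minutes = 40 minutes, and "00000001" as T3324 requests 1 × 2 seconds; double-check against the Quectel/3GPP tables). A hypothetical helper to build such strings:

#include <stdio.h>

/* Hypothetical helper: build the 8-character binary timer string used by
 * AT+CPSMS / AT+QPSMS. Per 3GPP TS 24.008, bits 8-6 select the unit and
 * bits 5-1 the multiplier. */
static void gprs_timer_bits(unsigned unit, unsigned value, char out[9])
{
    unsigned byte = ((unit & 0x7u) << 5) | (value & 0x1Fu);
    for (int i = 0; i < 8; ++i) {
        out[i] = (byte & (0x80u >> i)) ? '1' : '0';
    }
    out[8] = '\0';
}

int main(void)
{
    char t3412[9];
    gprs_timer_bits(0, 4, t3412);   /* unit 0b000, value 4 -> "00000100" */
    printf("%s\n", t3412);
    return 0;
}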

LPC824 microcontroller ADC demo HardFault problem

I'm trying to program an LPC824 microcontroller board (https://www.switch-science.com/catalog/2265/) with LPCOpen.
I'm using it with LPCLink 2 debugger board.
My goal is to get some information from the "pressure sensor" with an ADC.
My code stops with a HardFault when executing the NVIC_EnableIRQ function (line 92 of my source).
If I don't use the NVIC interrupt controller, the code works and I can read values from the sensor with the ADC.
What am I doing wrong?
Here is my adc.c code:
#include "board.h"
static volatile int ticks;
static bool sequenceComplete = false;
static bool thresholdCrossed = false;
#define TICKRATE_HZ (100) /* 100 ticks per second */
#define BOARD_ADC_CH 2
/**
* #brief Handle interrupt from ADC sequencer A
* #return Nothing
*/
void ADC_SEQA_IRQHandler(void) {
uint32_t pending;
/* Get pending interrupts */
pending = Chip_ADC_GetFlags(LPC_ADC);
/* Sequence A completion interrupt */
if (pending & ADC_FLAGS_SEQA_INT_MASK) {
sequenceComplete = true;
}
/* Threshold crossing interrupt on ADC input channel */
if (pending & ADC_FLAGS_THCMP_MASK(BOARD_ADC_CH)) {
thresholdCrossed = true;
}
/* Clear any pending interrupts */
Chip_ADC_ClearFlags(LPC_ADC, pending);
}
/**
* #brief Handle interrupt from SysTick timer
* #return Nothing
*/
void SysTick_Handler(void) {
static uint32_t count;
/* Every 1/2 second */
if (count++ == TICKRATE_HZ / 2) {
count = 0;
Chip_ADC_StartSequencer(LPC_ADC, ADC_SEQA_IDX);
}
}
/**
* #brief main routine for ADC example
* #return Function should not exit
*/
int main(void) {
uint32_t rawSample;
int j;
SystemCoreClockUpdate();
Board_Init();
/* Setup ADC for 12-bit mode and normal power */
Chip_ADC_Init(LPC_ADC, 0);
Chip_ADC_Init(LPC_ADC, ADC_CR_MODE10BIT);
/* Need to do a calibration after initialization and trim */
Chip_ADC_StartCalibration(LPC_ADC);
while (!(Chip_ADC_IsCalibrationDone(LPC_ADC))) {
}
/* Setup for maximum ADC clock rate using sycnchronous clocking */
Chip_ADC_SetClockRate(LPC_ADC, ADC_MAX_SAMPLE_RATE);
Chip_ADC_SetupSequencer(LPC_ADC, ADC_SEQA_IDX,
(ADC_SEQ_CTRL_CHANSEL(BOARD_ADC_CH) | ADC_SEQ_CTRL_MODE_EOS));
Chip_Clock_EnablePeriphClock(SYSCTL_CLOCK_SWM);
Chip_SWM_EnableFixedPin(SWM_FIXED_ADC2);
Chip_Clock_DisablePeriphClock(SYSCTL_CLOCK_SWM);
/* Setup threshold 0 low and high values to about 25% and 75% of max */
Chip_ADC_SetThrLowValue(LPC_ADC, 0, ((1 * 0xFFF) / 4));
Chip_ADC_SetThrHighValue(LPC_ADC, 0, ((3 * 0xFFF) / 4));
Chip_ADC_ClearFlags(LPC_ADC, Chip_ADC_GetFlags(LPC_ADC));
Chip_ADC_EnableInt(LPC_ADC,
(ADC_INTEN_SEQA_ENABLE | ADC_INTEN_OVRRUN_ENABLE));
Chip_ADC_SelectTH0Channels(LPC_ADC, ADC_THRSEL_CHAN_SEL_THR1(BOARD_ADC_CH));
Chip_ADC_SetThresholdInt(LPC_ADC, BOARD_ADC_CH, ADC_INTEN_THCMP_CROSSING);
/* Enable ADC NVIC interrupt */
NVIC_EnableIRQ(ADC_SEQA_IRQn);
Chip_ADC_EnableSequencer(LPC_ADC, ADC_SEQA_IDX);
SysTick_Config(SystemCoreClock / TICKRATE_HZ);
/* Endless loop */
while (1) {
/* Sleep until something happens */
__WFI();
if (thresholdCrossed) {
thresholdCrossed = false;
printf("********ADC threshold event********\r\n");
}
/* Is a conversion sequence complete? */
if (sequenceComplete) {
sequenceComplete = false;
/* Get raw sample data for channels 0-11 */
for (j = 0; j < 12; j++) {
rawSample = Chip_ADC_GetDataReg(LPC_ADC, j);
/* Show some ADC data */
if (rawSample & (ADC_DR_OVERRUN | ADC_SEQ_GDAT_DATAVALID)) {
printf("Chan: %d Val: %d\r\n", j, ADC_DR_RESULT(rawSample));
printf("Threshold range: 0x%x ",
ADC_DR_THCMPRANGE(rawSample));
printf("Threshold cross: 0x%x\r\n",
ADC_DR_THCMPCROSS(rawSample));
printf("Overrun: %s ",
(rawSample & ADC_DR_OVERRUN) ? "true" : "false");
printf("Data Valid: %s\r\n\r\n",
(rawSample & ADC_SEQ_GDAT_DATAVALID) ?
"true" : "false");
}
}
}
}
}
A hard fault usually means that you tried to execute code outside allowed addresses. If you have enabled an interrupt but not registered its handler in the vector table, the MCU will jump to whatever address happens to be written there, after which the program crashes.
How to fix that depends on the tool chain. Assuming LPCXpresso, you have several options for setting up libraries (I don't know about LPCOpen specifically), so where to find the vector table differs from case to case. However, this works quite similarly on most MCUs, ARM or not. Somewhere in a "crt start-up" file you should have something along the lines of this:
void (* const g_pfnVectors[])(void) = ...
This is an array of function pointers which becomes the vector table, placed in memory at address 0 on Cortex-M. You have to put your handler at the relevant interrupt vector. For example, it may say something like
PIN_INT0_IRQHandler, // PIO INT0
If that's the interrupt you should implement, then you replace that line:
#include "my_irq_stuff.h"
...
void (* const g_pfnVectors[])(void) =
...
my_INT0, // PIO INT0
Assuming my_irq_stuff.h contains the function prototype of my_INT0, your interrupt service routine; the routine itself should be implemented in the corresponding .c file.
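To make the mechanism concrete, here is a condensed sketch of the weak-alias pattern that LPCXpresso-style start-up files commonly use (names are illustrative; check your own start-up file). Every handler defaults to a catch-all, and defining a function with the matching name anywhere in the project overrides the alias, so the vector entry then points at your code:

/* Catch-all for unhandled interrupts. */
void IntDefaultHandler(void) { while (1) { } }

/* Weak alias: replaced automatically if the project defines its own
   ADC_SEQA_IRQHandler with external linkage. */
void ADC_SEQA_IRQHandler(void) __attribute__((weak, alias("IntDefaultHandler")));

__attribute__((section(".isr_vector")))
void (* const g_pfnVectors[])(void) = {
    /* ... initial stack pointer, core exception vectors ... */
    ADC_SEQA_IRQHandler,   /* ADC sequence A completion */
    /* ... remaining peripheral vectors ... */
};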

STM32F4: SD-Card using FatFs and USB fails

(also asked on SE: Electrical Engineering)
In my application, I've set up a STM32F4, SD-Card and USB-CDC (all with CubeMX).
Using a PC, I send commands to the STM32, which then does things on the SD-Card.
The commands are handled using a "communicationBuffer" (implemented by me), which waits for commands over USB, UART, ... and sets a flag when a \n character is received. The main loop polls this flag and, if it is set, a parser handles the command. So far, so good.
When I send commands via UART, it works fine, and I can get a list of the files on the SD-Card or perform other access via FatFs without a problem.
The problem occurs when I receive a command via USB-CDC. The parser works as expected, but FatFs claims FR_NO_FILESYSTEM (13) in f_opendir.
Other FatFs calls fail with the same error code.
After one failed USB command, commands via UART fail as well. It seems as if the USB somehow crashes the initialized SD-card driver.
Any idea how I can resolve this behaviour? Or a starting point for debugging?
My USB-Implementation:
I'm using CubeMX, and therefore use the prescribed way to initialize the USB-CDC interface:
main() calls MX_USB_DEVICE_Init(void).
In usbd_conf.c I've got:
void HAL_PCD_MspInit(PCD_HandleTypeDef* pcdHandle)
{
    GPIO_InitTypeDef GPIO_InitStruct;
    if (pcdHandle->Instance == USB_OTG_FS)
    {
        /* USER CODE BEGIN USB_OTG_FS_MspInit 0 */
        /* USER CODE END USB_OTG_FS_MspInit 0 */

        /**USB_OTG_FS GPIO Configuration
        PA11 ------> USB_OTG_FS_DM
        PA12 ------> USB_OTG_FS_DP
        */
        GPIO_InitStruct.Pin = OTG_FS_DM_Pin | OTG_FS_DP_Pin;
        GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
        GPIO_InitStruct.Pull = GPIO_NOPULL;
        GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
        GPIO_InitStruct.Alternate = GPIO_AF10_OTG_FS;
        HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);

        /* Peripheral clock enable */
        __HAL_RCC_USB_OTG_FS_CLK_ENABLE();

        /* Peripheral interrupt init */
        HAL_NVIC_SetPriority(OTG_FS_IRQn, 7, 1);
        HAL_NVIC_EnableIRQ(OTG_FS_IRQn);

        /* USER CODE BEGIN USB_OTG_FS_MspInit 1 */
        /* USER CODE END USB_OTG_FS_MspInit 1 */
    }
}
and the receive-process is implemented in usbd_cdc_if.c as follows:
static int8_t CDC_Receive_FS (uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 6 */
    mRootObject->mUsbBuffer->fillBuffer(Buf, *Len);
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);
    return (USBD_OK);
    /* USER CODE END 6 */
}
fillBuffer is implemented as follows (I use the same implementation for UART and USB transfer, with separate instances for the respective interfaces; mBuf is an instance variable of type std::vector<char>):
void commBuf::fillBuffer(uint8_t *buf, size_t len)
{
    // Check if last fill has timed out
    if (SystemTime::getMS() - lastActionTime > timeout) {
        mBuf.clear();
    }
    lastActionTime = SystemTime::getMS();

    // Fill new content
    mBuf.insert(mBuf.end(), buf, buf + len);

    uint32_t done = 0;
    while (!done) {
        for (auto i = mBuf.end() - len, ee = mBuf.end(); i != ee; ++i) {
            if (*i == '\n') {
                newCommand = true;
                myCommand = std::string((char*) &mBuf[0], i - mBuf.begin() + 1);
                mBuf.erase(mBuf.begin(), mBuf.begin() + (i - mBuf.begin() + 1));
                break;
            }
        }
        done = 1;
    }
}
I resolved the problem: in usbd_cdc_if.c, the #define APP_RX_DATA_SIZE was set to 4 (for some unknown reason). As this is smaller than the USB packet size, incoming packets larger than 4 bytes were overwriting my memory.
It so happened that the adjacent portion of memory was the FATFS* FatFs[] pointer list referencing the initialized FATFS filesystem structs.
So the address of that struct was overwritten whenever a command of 5 or more bytes arrived.
Phew, that was a tough one.
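For reference, the relevant definitions in usbd_cdc_if.c normally look something like the sketch below (the sizes shown are typical CubeMX defaults, not mandated values; the hard constraint is that the RX buffer handed to the CDC class must hold at least one 64-byte full-speed packet):

#include <stdint.h>

/* usbd_cdc_if.c (sketch): this buffer is handed to USBD_CDC_SetRxBuffer(),
   so it must be at least one full USB packet (64 bytes for full speed),
   or incoming data overruns it, as happened here. */
#define APP_RX_DATA_SIZE  2048
#define APP_TX_DATA_SIZE  2048

uint8_t UserRxBufferFS[APP_RX_DATA_SIZE];
uint8_t UserTxBufferFS[APP_TX_DATA_SIZE];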

CPU Time Used by Process

I've managed to implement the code in this listing to get a list of all running processes and their IDs. What I need now is to extract how much CPU time each process uses.
I've tried referring to the keys in the code, but when I try to print 'Ticks of CPU Time' I get a zero value for every process. And even if I did get a value, I'm not sure whether 'Ticks of CPU Time' is exactly what I'm looking for.
struct vmspace *p_vmspace; /* Address space. */
struct sigacts *p_sigacts; /* Signal actions, state (PROC ONLY). */
int p_flag; /* P_* flags. */
char p_stat; /* S* process status. */
pid_t p_pid; /* Process identifier. */
pid_t p_oppid; /* Save parent pid during ptrace. XXX */
int p_dupfd; /* Sideways return value from fdopen. XXX */
/* Mach related */
caddr_t user_stack; /* where user stack was allocated */
void *exit_thread; /* XXX Which thread is exiting? */
int p_debugger; /* allow to debug */
boolean_t sigwait; /* indication to suspend */
/* scheduling */
u_int p_estcpu; /* Time averaged value of p_cpticks. */
int p_cpticks; /* Ticks of cpu time. */
fixpt_t p_pctcpu; /* %cpu for this process during p_swtime */
void *p_wchan; /* Sleep address. */
char *p_wmesg; /* Reason for sleep. */
u_int p_swtime; /* Time swapped in or out. */
u_int p_slptime; /* Time since last blocked. */
struct itimerval p_realtimer; /* Alarm timer. */
struct timeval p_rtime; /* Real time. */
u_quad_t p_uticks; /* Statclock hits in user mode. */
u_quad_t p_sticks; /* Statclock hits in system mode. */
u_quad_t p_iticks; /* Statclock hits processing intr. */
int p_traceflag; /* Kernel trace points. */
struct vnode *p_tracep; /* Trace to vnode. */
int p_siglist; /* DEPRECATED */
struct vnode *p_textvp; /* Vnode of executable. */
int p_holdcnt; /* If non-zero, don't swap. */
sigset_t p_sigmask; /* DEPRECATED. */
sigset_t p_sigignore; /* Signals being ignored. */
sigset_t p_sigcatch; /* Signals being caught by user. */
u_char p_priority; /* Process priority. */
u_char p_usrpri; /* User-priority based on p_cpu and p_nice. */
char p_nice; /* Process "nice" value. */
char p_comm[MAXCOMLEN+1];
struct pgrp *p_pgrp; /* Pointer to process group. */
struct user *p_addr; /* Kernel virtual addr of u-area (PROC ONLY). */
u_short p_xstat; /* Exit status for wait; also stop signal. */
u_short p_acflag; /* Accounting flags. */
struct rusage *p_ru; /* Exit information. XXX */
In fact, I've also tried printing 'Time averaged value of p_cpticks' and a few others, and never got interesting values. Here is my code, which prints the retrieved information (I got it from cocoabuilder.com):
- (NSDictionary *) getProcessList {
    NSMutableDictionary *ProcList = [[NSMutableDictionary alloc] init];
    kinfo_proc *mylist;
    size_t mycount = 0;
    mylist = (kinfo_proc *)malloc(sizeof(kinfo_proc));
    GetBSDProcessList(&mylist, &mycount);
    printf("There are %d processes.\n", (int)mycount);
    NSLog(@" = = = = = = = = = = = = = = =");
    int k;
    for (k = 0; k < mycount; k++) {
        kinfo_proc *proc = NULL;
        proc = &mylist[k];
        // NSString *processName = [NSString stringWithFormat: @"%s", proc->kp_proc.p_comm];
        // [ProcList setObject: processName forKey: processName];
        // [ProcList setObject: proc->kp_proc.p_pid forKey: processName];
        // printf("ID: %d - NAME: %s\n", proc->kp_proc.p_pid, proc->kp_proc.p_comm);
        printf("ID: %d - NAME: %s CPU TIME: %d \n", proc->kp_proc.p_pid, proc->kp_proc.p_comm, proc->kp_proc.p_pid);
        // Right click on p_comm and select 'jump to definition' to find other values.
    }
    free(mylist);
    return [ProcList autorelease];
}
Thanks!
EDIT: I've just offered a bounty on this question. What I'm looking for specifically is the amount of time each process spends on the CPU.
If, in addition to this, you can get the %CPU being used by a process, that would be fantastic.
The code should be efficient, since it will be called every second on all running processes. Objective-C preferable.
Thanks again!
EDIT 2
Also, any comments as to why people are ignoring this question would be helpful :)
Have a look at the Darwin source for libtop.c, particularly the libtop_pinfo_update_cpu_usage() function. Note that:
You'll need a basic understanding of Mach programming fundamentals to make sense of this code, as it uses task ports, etc.
If you want to simply use libtop, you'll have to download the source and compile it yourself.
Your process will need privileges to get at the task ports for other processes.
If all this sounds rather daunting, well… there is a way that uses less esoteric APIs: just spawn a top process and parse its standard output. A quick glance at the top(1) man page turned up this little gem:
$ top -s 1 -l 3600 -stats pid,cpu,time
That is, sample once per second for 3600 seconds (one hour), and output to stdout in log form only the statistics for pid, cpu usage, and time.
Spawning and managing the child top process and then parsing its output are all straightforward Unix programming exercises.
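As a rough illustration of that approach, here is a minimal C sketch using popen() (the exact column layout of top's output can vary between macOS versions, so treat the parsing as illustrative):

#include <stdio.h>

int main(void)
{
    /* Two samples: the first has no inter-sample delta to compute
       %CPU from, so only the second is meaningful. */
    FILE *fp = popen("top -s 1 -l 2 -stats pid,cpu,time", "r");
    if (fp == NULL) {
        perror("popen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, fp) != NULL) {
        int pid;
        double cpu;
        char cputime[32];
        /* Per-process lines look like: "<pid> <%cpu> <mm:ss.cc>";
           header lines fail the scan and are skipped. */
        if (sscanf(line, "%d %lf %31s", &pid, &cpu, cputime) == 3) {
            printf("pid=%d cpu=%.1f%% time=%s\n", pid, cpu, cputime);
        }
    }
    pclose(fp);
    return 0;
}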
Have you taken a look at struct rusage? You listed it, commented as "Exit information", but I know it contains the resources actually used by a process. Take a look at this page. I remember using getrusage() to calculate the exact amount of CPU time consumed by my scientific calculations in the current process, so you just have to work out how to query that struct for each process in your list, I guess.
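For completeness, a minimal getrusage() example. The catch, which is why it only partly answers the question: it reports CPU time for the calling process (RUSAGE_SELF) or its waited-for children (RUSAGE_CHILDREN), not for arbitrary PIDs.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;
    /* RUSAGE_SELF: the calling process; RUSAGE_CHILDREN: waited-for children. */
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        printf("user CPU:   %ld.%06lds\n",
               (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        printf("system CPU: %ld.%06lds\n",
               (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    }
    return 0;
}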