Why can't I integrate the CubeMX USB MSC code into the default end-node I-CUBE-LRWAN project?

I want to add USB MSC (Mass Storage Class, i.e. a USB storage drive) functionality to my Murata B-L072Z-LRWAN1 board. For this I have used the most recent I-CUBE-LRWAN end-node project and generated USB MSC code. I have done this in the past for an older version of I-CUBE-LRWAN (a 2018 release) and got it working. However, if I do it now I get one of two behaviours:
Flash the board, connect it and then reset the board: nothing happens. No flashing lights, no debug serial output, no USB and no LoRaWAN.
Flash the board, connect it and then start a debugger session, letting it run freely without any breakpoints: full operation. The lights work, debug serial works, USB presents itself and Windows says it needs to format the drive. (Which is correct, as this most barebones version does not have any storage interfacing added.)
I can't explain this. Why does the code work when the debugger is attached but lock up completely when it is not? As for the changes between the older and newer versions of I-CUBE-LRWAN: they have moved from SysTick-based to RTC-based timing. I can't figure out how that is related to the debugger, though.
Removing the USB device cable does not make the code run.
When I comment out the call to MX_USB_DEVICE_Init, Windows sees an unidentifiable USB device but none of the code works (e.g. no debug UART output). When I put MX_USB_DEVICE_Init back in, nothing happens at all, not even a USB connect.
I'm using Keil uVision as my IDE, with compiler version "default compiler version 6".
To replicate this you need a B-L072Z-LRWAN1 (modified to enable the USB pins) or that Murata module with a USB port. The full minimal, reproducible example is to take the end-node project from I-CUBE-LRWAN, generate the USB MSC code in STM32CubeMX (the target MCU is STM32L072CZTx), add all the USB MSC files to the end-node project, and make the following additions to the project:
Add the USB_IRQHandler to stm32l0xx_it.c:
extern PCD_HandleTypeDef hpcd_USB_FS;

/**
  * @brief This function handles USB event interrupt / USB wake-up interrupt through EXTI line 18.
  */
void USB_IRQHandler(void)
{
  HAL_PCD_IRQHandler(&hpcd_USB_FS);
}
Append the following clock configuration to SystemClock_Config:
RCC_PeriphCLKInitTypeDef PeriphClkInit = {0};

while (!LL_RCC_HSI48_IsReady());

/* USB clock initialization */
PeriphClkInit.PeriphClockSelection |= RCC_PERIPHCLK_USB;
PeriphClkInit.UsbClockSelection = RCC_USBCLKSOURCE_HSI48;
if (HAL_RCCEx_PeriphCLKConfig(&PeriphClkInit) != HAL_OK)
{
  Error_Handler();
}
And add #include "usb_device.h" and a call to MX_USB_DEVICE_Init(); in main.c.
For comparison: when I flash my old (2018-based) code to the board, USB does work together with everything else (LEDs, LoRaWAN, debug UART). Running the USB MSC code on its own works. Running the LoRaWAN code on its own works. The problem only manifests when the two are merged.

The problem is that printf is being called, because of the following definitions in usbd_conf.h:
/** @defgroup USBD_CONF_Exported_Defines USBD_CONF_Exported_Defines
  * @brief Defines for configuration of the Usb device.
  * @{
  */
/*---------- -----------*/
#define USBD_MAX_NUM_INTERFACES 1
/*---------- -----------*/
#define USBD_MAX_NUM_CONFIGURATION 1
/*---------- -----------*/
#define USBD_MAX_STR_DESC_SIZ 512
/*---------- -----------*/
#define USBD_SUPPORT_USER_STRING 0
/*---------- -----------*/
#define USBD_DEBUG_LEVEL 3
/*---------- -----------*/
#define USBD_SELF_POWERED 1
/*---------- -----------*/
#define MSC_MEDIA_PACKET 512
/****************************************/
/* #define for FS and HS identification */
#define DEVICE_FS 0
/**
  * @}
  */

/** @defgroup USBD_CONF_Exported_Macros USBD_CONF_Exported_Macros
  * @brief Aliases.
  * @{
  */
/* Memory management macros */
/** Alias for memory allocation. */
#define USBD_malloc (uint32_t *)USBD_static_malloc
/** Alias for memory release. */
#define USBD_free USBD_static_free
/** Alias for memory set. */
#define USBD_memset /* Not used */
/** Alias for memory copy. */
#define USBD_memcpy /* Not used */
/** Alias for delay. */
#define USBD_Delay HAL_Delay
/* DEBUG macros */
#if (USBD_DEBUG_LEVEL > 0)
#define USBD_UsrLog(...)   printf(__VA_ARGS__);\
                           printf("\n");
#else
#define USBD_UsrLog(...)
#endif

#if (USBD_DEBUG_LEVEL > 1)
#define USBD_ErrLog(...)   printf("ERROR: ");\
                           printf(__VA_ARGS__);\
                           printf("\n");
#else
#define USBD_ErrLog(...)
#endif

#if (USBD_DEBUG_LEVEL > 2)
#define USBD_DbgLog(...)   printf("DEBUG : ");\
                           printf(__VA_ARGS__);\
                           printf("\n");
#else
#define USBD_DbgLog(...)
#endif
So the solution for this version of I-CUBE-LRWAN is to set USBD_DEBUG_LEVEL to 0. Another option is to change printf to APP_PRINTF inside these macros.
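For reference, the quick fix is this one-line change in usbd_conf.h, which makes the UsrLog/ErrLog/DbgLog macros above expand to nothing:

/* usbd_conf.h: silence the USB middleware logging so the MSC stack never calls printf */
#define USBD_DEBUG_LEVEL 0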
Another problem is that the sleep handling in the sequencer interferes with USB. Setting the define LOW_POWER_DISABLE to 1 in sys_conf.h disables Stop mode entirely.
If finer control is needed during USB operations, then calling this line with the corresponding disable/enable arguments at the right moments will make it work:
UTIL_LPM_SetStopMode((1 << CFG_LPM_APPLI_Id), UTIL_LPM_DISABLE);
(Low-power handling lives in sys_app.c:96.)
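A minimal sketch of that finer-grained approach, assuming the UTIL_LPM names from the I-CUBE-LRWAN utilities; usb_start/usb_stop are illustrative wrappers, not functions from the project:

#include "stm32_lpm.h"   /* UTIL_LPM_SetStopMode */
#include "usb_device.h"  /* MX_USB_DEVICE_Init */

void usb_start(void)
{
  /* USB FS has no functional clock in Stop mode, so veto Stop first */
  UTIL_LPM_SetStopMode((1 << CFG_LPM_APPLI_Id), UTIL_LPM_DISABLE);
  MX_USB_DEVICE_Init();
}

void usb_stop(void)
{
  /* ... USB teardown here ... */
  /* USB idle again: allow the sequencer to enter Stop mode */
  UTIL_LPM_SetStopMode((1 << CFG_LPM_APPLI_Id), UTIL_LPM_ENABLE);
}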

Related

PIC16F877 display the result of ADC on LEDs with C language using MPLAB

I am using a PIC16F877 and my goal is to select CHANNEL 4 and display the analogue input AN4 value on the PORTD LEDs. The expected value is about 1V. I wrote the code below, but no matter how I run it there is no reaction on the GPIO monitor.
By the way, should I call ReadADC1() inside the while(1){} loop? I tried that, but it didn't help. Thanks.
#include <xc.h>
#define LEDs PORTD
#include "prologue.c"

unsigned char ReadADC1(void) {
    ADCON0 |= 0b00000010;
    while ( (ADCON0 & 0b00000010) );
    return ADRESH;
}

main()
{
    // declare variables if any required
    TRISA = 0B00100000;
    ANSEL = 0B00010000;
    ADCON0 = 0b11010001;
    ADCON1 = 0b10000000;
    LEDs = ReadADC1();
    //*** your code for initialisation if required
    //*** end of your initialisation
    //*** your code for the superloop
    while (1) {
    }
    //*** end of the superloop
}
There's no reaction on the GPIO pin monitor. I restarted the IDE many times.
By default all ports are configured as inputs. If you want to use a port as an output you have to change the configuration:
TRISD = 0x00;
Another issue: there is no ANSEL register in this controller; you have to do the selection (digital or analog input) with the ADCON1 register.
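Putting both fixes together, a minimal corrected sketch (my reading, not the answerer's code): the ADCON bit values are reconstructed from the PIC16F877 datasheet layout and left justification is chosen so ADRESH holds the top 8 bits, so verify them against your device.

#include <xc.h>
#define LEDs PORTD

unsigned char ReadADC1(void) {
    ADCON0 |= 0b00000010;          // set GO/DONE to start a conversion
    while (ADCON0 & 0b00000010);   // wait until the hardware clears GO/DONE
    return ADRESH;                 // left-justified: upper 8 bits of the result
}

void main(void)
{
    TRISA  = 0b00100000;   // RA5 as input (AN4 pin)
    TRISD  = 0x00;         // PORTD as output so the LEDs can be driven
    ADCON0 = 0b11100001;   // ADCS=11 (RC clock), CHS=100 selects AN4, ADON=1
    ADCON1 = 0b00000000;   // ADFM=0 (left justified), PCFG=0000 (AN4 analog)
    while (1) {
        // (a short acquisition delay before each start would be more correct)
        LEDs = ReadADC1(); // continuously show the latest conversion
    }
}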

UDR register always reads 0xFF

I have an ATTiny that is supposed to receive commands over UART. I have a simple display of eight LEDs that should show the contents of the most recent byte received. I am using an interrupt to read data as it is received. No matter what data I send UDR always reads 0xFF in the interrupt. I know the interrupt is being triggered since the display changes from 0x00 to 0xFF, but it never displays the value I sent over the serial bus.
This is how I enable UART.
UBRRH = UBRRH_VALUE;
UBRRL = UBRRL_VALUE;
#if USE_2X
UCSRA |= (1U << U2X);
#else
UCSRA &= ~(1U << U2X);
#endif
// Enable receiver and interrupt
UCSRB = (1U << RXEN) | (1U << RXCIE);
// No parity, 8 Data Bits, 1 Stop Bit
UCSRC = (1U << UCSZ1) | (1U << UCSZ0);
This is the code in the interrupt. I have tested display() and it functions correctly on its own thus implying message is always 0xFF.
ISR(USART_RXC_vect) {
uint8_t message = UDR;
display(message);
}
I am confident that my computer is sending the correct information, but I have only tested it with a pseudo-terminal to print out the sent bytes. I intend to snoop the hardware connection with an oscilloscope, but I don't believe that is the issue. Is there something that is causing UDR to always read as 0xFF?
Edit:
I have snooped the connection with an oscilloscope and have verified that the computer is sending the correct data at the correct rate. However, the ATTiny is not operating at the correct baud rate. At 2400 baud, pulses should be about 400 microseconds long; the microcontroller, however, is producing pulses over 3 milliseconds long. This explains why it always reads 0xFF: the computer has sent nearly the entire byte by the time the controller thinks it is receiving the start bit, and when the controller tries to read the remaining data the line is undriven, so it reads all ones. I still don't know why this happens, as I believe I am setting the baud rate correctly on the controller.
Edit:
The issue has been resolved. By default the clock prescaler is set to 8, so the device was only operating at 1 MHz, not 8 MHz. Setting the clock prescaler to 1 solved the problem.
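For reference, setting the prescaler back to 1 at runtime is a timed sequence. On classic AVRs it goes through CLKPR, as in this minimal sketch (some newer ATtiny parts use CCP/CLKPSR instead, as in the answer's example program below):

#include <avr/io.h>

// Drop the default /8 clock prescaler to /1 (classic AVR CLKPR interface).
// The new value must be written within 4 cycles of setting CLKPCE.
static void clock_prescaler_to_1(void)
{
    CLKPR = (1 << CLKPCE);  // unlock the prescaler for changes
    CLKPR = 0x00;           // prescale by 1
}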
There can be several problems with UART communication. First check some things:
Is the controller configured with the right clock (internal/external)?
Is F_CPU defined for <util/setbaud.h>?
Is BAUD defined for <util/setbaud.h>?
Are you using a controller like the ATmega16 that has special register access?
If you are using an external clock (that should not be divided), is CKDIV8 disabled in the FUSES, or in special registers on some controllers?
Are the baud rate, parity bit and stop bit set up correctly on both transmitter and receiver?
Debug:
If you are using a PC for communication, create a loopback at the UART adapter and check with a terminal (TeraTerm, PuTTY, ...) whether the messages you send are received correctly.
You can also enable the TX side of the controller and check whether a loopback works on your uC.
If possible, write the received data to some LEDs to check whether any data is received at all.
Is GND connected between receiver and transmitter?
Are the voltage levels of transmitter and receiver the same?
Do transmitter and receiver each have their own supply? (Then do not connect VCC!)
Check that the clock on the controller is correct (e.g. toggle an LED with the _delay_ms() function every second).
Example Program
#define F_CPU 12000000UL
#define BAUD 9600UL

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/setbaud.h>

ISR(USART_RXC_vect)
{
    volatile unsigned char message = UDR;
    // If it is possible try to write the received data to
    // LEDs (if there are some at your board)
    display(message);
}

int main()
{
    // To allow changes to the clock prescaler it is necessary to set the
    // CCP register (datasheet page 23)!
    CCP = 0xD8;
    // RESET the clock prescaler from /8 to /1 !!!!
    // Otherwise it is necessary to divide F_CPU by the clock prescaler
    CLKPSR = 0x00;

    UBRRH = UBRRH_VALUE;
    UBRRL = UBRRL_VALUE;
#if USE_2X
    UCSRA |= (1 << U2X);
#else
    UCSRA &= ~(1 << U2X);
#endif
    // Enable receiver and interrupt
    UCSRB = (1U << RXEN) | (1U << RXCIE);
    // No parity, 8 data bits, 1 stop bit
    // Not necessary! Most ATmega controllers
    // have 8-bit mode initialized at startup
    //UCSRC = (1U << UCSZ1) | (1U << UCSZ0);
    // If you are using an ATmega8/16 it is necessary to do some
    // special things to write to the UBRRH and UCSRC registers!
    // See the ATmega16 datasheet at page 162
    // Do not forget to enable interrupts globally!
    sei();
    while (1);
}
Please explain what the display() function is doing...

Erasing a flash on TIVA TM4C123 Microcontroller

I have been trying to understand the following code, which writes to microcontroller flash. The microcontroller is a TIVA ARM Cortex-M4. I have read the Internal Memory chapter (Chapter 8) of the Tiva™ TM4C123GH6PM Microcontroller Data Sheet. At a high level I understand the Flash Memory Address (FMA), Flash Memory Data (FMD), Flash Memory Control (FMC) and Boot Configuration (BOOTCFG) registers.
Below are definitions for some of the variables used in the function.
#define FLASH_FMA_R (*((volatile uint32_t *)0x400FD000))
#define FLASH_FMA_OFFSET_MAX 0x0003FFFF // Address Offset max
#define FLASH_FMD_R (*((volatile uint32_t *)0x400FD004))
#define FLASH_FMC_R (*((volatile uint32_t *)0x400FD008))
#define FLASH_FMC_WRKEY 0xA4420000 // FLASH write key (KEY bit of FLASH_BOOTCFG_R set)
#define FLASH_FMC_WRKEY2 0x71D50000 // FLASH write key (KEY bit of FLASH_BOOTCFG_R cleared)
#define FLASH_FMC_MERASE 0x00000004 // Mass Erase Flash Memory
#define FLASH_FMC_ERASE 0x00000002 // Erase a Page of Flash Memory
#define FLASH_FMC_WRITE 0x00000001 // Write a Word into Flash Memory
#define FLASH_FMC2_R (*((volatile uint32_t *)0x400FD020))
#define FLASH_FMC2_WRBUF 0x00000001 // Buffered Flash Memory Write
#define FLASH_FWBN_R (*((volatile uint32_t *)0x400FD100))
#define FLASH_BOOTCFG_R (*((volatile uint32_t *)0x400FE1D0))
#define FLASH_BOOTCFG_KEY 0x00000010 // KEY Select
This function is used to erase a section of the flash; it is called for addresses from a start address to an end address. I have not fully comprehended how this code works.
//------------Flash_Erase------------
// Erase 1 KB block of flash.
// Input: addr 1-KB aligned flash memory address to erase
// Output: 'NOERROR' if successful, 'ERROR' if fail (defined in FlashProgram.h)
// Note: disables interrupts while erasing
int Flash_Erase(uint32_t addr){
  uint32_t flashkey;
  if(EraseAddrValid(addr)){
    DisableInterrupts();  // may be optional step
    // wait for hardware idle
    while(FLASH_FMC_R & (FLASH_FMC_WRITE|FLASH_FMC_ERASE|FLASH_FMC_MERASE)){
      // to do later: return ERROR if this takes too long
      // remember to re-enable interrupts
    };
    FLASH_FMA_R = addr;
    if(FLASH_BOOTCFG_R & FLASH_BOOTCFG_KEY){ // by default, the key is 0xA442
      flashkey = FLASH_FMC_WRKEY;
    } else{                                  // otherwise, the key is 0x71D5
      flashkey = FLASH_FMC_WRKEY2;
    }
    FLASH_FMC_R = (flashkey|FLASH_FMC_ERASE); // start erasing 1 KB block
    while(FLASH_FMC_R & FLASH_FMC_ERASE){
      // to do later: return ERROR if this takes too long
      // remember to re-enable interrupts
    };                                        // wait for completion (~3 to 4 usec)
    EnableInterrupts();
    return NOERROR;
  }
  return ERROR;
}
Questions: How does the function exit out of the two while loops? How are variables FLASH_FMC_WRITE, FLASH_FMC_ERASE, and FLASH_FMC_MERASE changed? Can '0' be written as part of the erase process?
FLASH_FMC_WRITE, FLASH_FMC_ERASE and FLASH_FMC_MERASE are individual bits in the FLASH_FMC_R register value (a bitfield). Look in the part's reference manual (or datasheet) at the description of the FLASH_FMC_R register and you will find the description of these bits and more.
The while loops repeatedly read the FLASH_FMC_R register value and exit once the specified bits read as zero. Software sets these bits (together with the write key) to start an operation, and the flash memory controller clears them again when the operation completes (read the reference manual).
Erasing flash means setting all bits to 1 (all bytes to 0xFF). Writing flash means setting selected bits to 0. You cannot change a bit from 0 to 1 with a write; you need an erase to do that. This is just the way flash works.
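To illustrate the write half of that answer, here is a hedged companion sketch of a single-word write using the same register map and key-selection logic as Flash_Erase above; it is not from the question's source, and address validation and timeout handling are omitted.

// Program one 32-bit word (the target word must already be erased)
int Flash_Write(uint32_t addr, uint32_t data){
  uint32_t flashkey;
  // wait for hardware idle, as in Flash_Erase
  while(FLASH_FMC_R & (FLASH_FMC_WRITE|FLASH_FMC_ERASE|FLASH_FMC_MERASE)){};
  FLASH_FMA_R = addr;                       // target address
  FLASH_FMD_R = data;                       // word to program
  if(FLASH_BOOTCFG_R & FLASH_BOOTCFG_KEY){
    flashkey = FLASH_FMC_WRKEY;             // key 0xA442
  } else{
    flashkey = FLASH_FMC_WRKEY2;            // key 0x71D5
  }
  FLASH_FMC_R = (flashkey|FLASH_FMC_WRITE); // key + WRITE starts the operation
  while(FLASH_FMC_R & FLASH_FMC_WRITE){};   // hardware clears WRITE when done
  return NOERROR;
}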

Objective C - How to check if Executable can be launched (eg. terminal)

I am currently building an executable-handling application in Objective-C, and I would like a simple way to determine whether an executable file can be launched (without launching it), or whether it is just a loadable one.
Thanks.
Once you've taken care of permission bits and whether the file is a Mach-O, there are three things you need to consider:
File type
CPU compatibility
Fat binaries
File type
Whether your Mach-O is an executable, dylib, kext, etc., can be determined from a field in its header.
From <mach-o/loader.h>:
struct mach_header {
    uint32_t magic;
    cpu_type_t cputype;
    cpu_subtype_t cpusubtype;
    uint32_t filetype; // <---
    uint32_t ncmds;
    uint32_t sizeofcmds;
    uint32_t flags;
};
Also from <mach-o/loader.h> you get all possible values for that field:
#define MH_OBJECT 0x1 /* relocatable object file */
#define MH_EXECUTE 0x2 /* demand paged executable file */
#define MH_FVMLIB 0x3 /* fixed VM shared library file */
#define MH_CORE 0x4 /* core file */
#define MH_PRELOAD 0x5 /* preloaded executable file */
#define MH_DYLIB 0x6 /* dynamically bound shared library */
#define MH_DYLINKER 0x7 /* dynamic link editor */
#define MH_BUNDLE 0x8 /* dynamically bound bundle file */
#define MH_DYLIB_STUB 0x9 /* shared library stub for static linking only, no section contents */
#define MH_DSYM 0xa /* companion file with only debug sections */
#define MH_KEXT_BUNDLE 0xb /* x86_64 kexts */
CPU compatibility
Just because it says "executable" doesn't mean it can be launched, though. If you take an iOS app and try to execute it on your iMac, you'll get a "Bad CPU type in executable" error message.
The different CPU types are defined in <mach/machine.h>, but the only way of comparing against the current CPU type is via defines:
#include <mach/machine.h>
#include <stdbool.h>

bool is_cpu_compatible(cpu_type_t cputype)
{
    return
#ifdef __i386__
    cputype == CPU_TYPE_X86
#endif
#ifdef __x86_64__
    cputype == CPU_TYPE_X86 || cputype == CPU_TYPE_X86_64
#endif
#ifdef __arm__
    cputype == CPU_TYPE_ARM
#endif
#if defined(__arm64__)
    cputype == CPU_TYPE_ARM || cputype == CPU_TYPE_ARM64
#endif
    ;
}
(This will only work if your application has 64-bit slices, so that it always runs as 64-bit when it can. If you want to be able to run as a 32-bit binary and detect whether a 64-bit binary could be run, you'd have to use sysctl on "hw.cpu64bit_capable" together with the defines, but then it gets even uglier.)
Fat binaries
Lastly, your binaries could be enclosed in fat headers. If so, you'll simply need to iterate over all slices, find the one corresponding to your current architecture, and check the two conditions above for that.
Implementation
There is no Objective-C API for this that I know of, so you'll have to fall back to C.
Given a pointer to the file's contents and the is_cpu_compatible function from above, you could do it like this:
#include <stdbool.h>
#include <stddef.h>
#include <libkern/OSByteOrder.h>
#include <mach-o/fat.h>
#include <mach-o/loader.h>

bool macho_is_executable(char *file)
{
    struct fat_header *fat = (struct fat_header*)file;
    // Fat file (fat headers are always stored big endian, hence the swaps)
    if(fat->magic == FAT_CIGAM) // byte-swapped magic
    {
        struct fat_arch *arch = (struct fat_arch*)(fat + 1);
        uint32_t narch = OSSwapBigToHostInt32(fat->nfat_arch);
        for(size_t i = 0; i < narch; ++i, ++arch)
        {
            if(is_cpu_compatible((cpu_type_t)OSSwapBigToHostInt32(arch->cputype)))
            {
                return macho_is_executable(&file[OSSwapBigToHostInt32(arch->offset)]);
            }
        }
        // File has no slice for this architecture
        return false;
    }
    // Thin file
    struct mach_header *hdr32 = (struct mach_header*)file;
    struct mach_header_64 *hdr64 = (struct mach_header_64*)file;
    if(hdr32->magic == MH_MAGIC) // host-endian magic
    {
        return hdr32->filetype == MH_EXECUTE && is_cpu_compatible(hdr32->cputype);
    }
    else if(hdr64->magic == MH_MAGIC_64)
    {
        return hdr64->filetype == MH_EXECUTE && is_cpu_compatible(hdr64->cputype);
    }
    // Not a Mach-O
    return false;
}
Note that these are still rather basic checks, which will e.g. not detect corrupt Mach-Os, and which could easily be fooled by malicious files. If you wanted that, you would have to either emulate an operating system and launch the binary within it, or get into the research field of theoretical IT and revolutionize the mathematics of provability.
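As a usage sketch (my addition, not part of the answer above): map the file into memory and hand it to macho_is_executable. path_is_executable and its minimal error handling are illustrative.

#include <fcntl.h>
#include <mach-o/loader.h>
#include <stdbool.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

bool path_is_executable(const char *path)
{
    int fd = open(path, O_RDONLY);
    if(fd < 0) return false;
    struct stat st;
    if(fstat(fd, &st) != 0 || st.st_size < (off_t)sizeof(struct mach_header))
    {
        close(fd);
        return false;
    }
    // Map the whole file read-only and run the header checks on it
    char *file = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if(file == MAP_FAILED) return false;
    bool result = macho_is_executable(file);
    munmap(file, (size_t)st.st_size);
    return result;
}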
My understanding is that you want to distinguish a Mach-O standalone executable from a Mach-O dylib. A standalone executable will use either:
the LC_MAIN load command to denote the entry point, supported since Mac OS X 10.7, or
the LC_UNIXTHREAD load command, the older non-dyld approach to the same thing (still supported).
A dylib will have neither of these load commands, so if you detect one of them it means it's a runnable standalone executable; see the sketch below. That of course does not imply the executable is valid and that the kernel won't kill it for other reasons.
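A hedged sketch of that check for a thin 64-bit Mach-O (the 32-bit and fat cases would follow the same pattern; macho_has_entry_point is an illustrative name):

#include <mach-o/loader.h>
#include <stdbool.h>

// Walk the load commands and look for LC_MAIN or LC_UNIXTHREAD
bool macho_has_entry_point(const struct mach_header_64 *hdr)
{
    const struct load_command *lc = (const struct load_command *)(hdr + 1);
    for(uint32_t i = 0; i < hdr->ncmds; ++i)
    {
        if(lc->cmd == LC_MAIN || lc->cmd == LC_UNIXTHREAD)
        {
            return true; // standalone executable entry point found
        }
        lc = (const struct load_command *)((const char *)lc + lc->cmdsize);
    }
    return false; // no entry point: likely a dylib or bundle
}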
If you want to inspect some test files to verify this, I recommend a free tool called MachOView.

stm32L476RG - how to execute the bootloader from firmware

I am working on a NUCLEO-L476RG board, trying to start the bootloader from my firmware code, but it's not working for me. Here is the code that I am trying to execute:
#include "stm32l4xx.h"
#include "stm32l4xx_nucleo.h"
#include "core_cm4.h"
#include "stm32l4xx_hal_uart.h"
GPIO_InitTypeDef GPIO_InitStructure;
UART_HandleTypeDef UartHandle;
UART_InitTypeDef UART_InitStructre;
void BootLoaderInit(uint32_t BootLoaderStatus){
void (*SysMemBootJump)(void) = (void (*)(void)) (*((uint32_t *) 0x1FFF0004));
if(BootLoaderStatus == 1) {
HAL_DeInit(); // shut down running tasks
// Reset the SysTick Timer
SysTick->CTRL = 0;
SysTick->LOAD = 0;
SysTick->VAL =0;
__set_PRIMASK(1); // Disable interrupts
__set_MSP((uint32_t*) 0x20001000);
SysMemBootJump();
}
}
int main(void)
{
HAL_Init();
__GPIOC_CLK_ENABLE();
GPIO_InitStructure.Pin = GPIO_PIN_13;
GPIO_InitStructure.Mode = GPIO_MODE_INPUT;
GPIO_InitStructure.Pull = GPIO_PULLUP;
GPIO_InitStructure.Speed = GPIO_SPEED_FAST;
HAL_GPIO_Init(GPIOC, &GPIO_InitStructure);
while (1) {
if (HAL_GPIO_ReadPin(GPIOC, GPIO_PIN_13)) {
BootLoaderInit(1);
}
}
return 0;
}
What I hope to get after executing the firmware is to be able to connect to the board over UART and send commands / get responses from the bootloader. The commands I am trying to use come from here: USART protocol used in the STM32 bootloader.
I don't see any response from the board after connecting over UART.
Here are some ideas taken from the answers to this question.
HAL_RCC_DeInit();
This is apparently needed to put the clocks back into their post-reset state, which is what the bootloader expects.
__HAL_REMAPMEMORY_SYSTEMFLASH();
This maps the system bootloader to address 0x00000000.
__ASM volatile ("movs r3, #0\nldr r3, [r3, #0]\nMSR msp, r3\n" : : : "r3", "sp");
This sets the stack pointer from the bootloader ROM. Where does your 0x20001000 come from? If it's an arbitrary value, the stack can clobber the bootloader's variables.
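Putting those pieces together, a hedged sketch of the whole sequence might look like this. It assumes 0x1FFF0000 as the L4 system-memory base (verify against the reference manual for your exact part) and the remap macro spelling of the stm32l4xx HAL; note also the caveat about __set_MSP raised in a later answer.

void jump_to_bootloader(void)
{
    HAL_RCC_DeInit();      // clocks back to their reset state
    HAL_DeInit();
    SysTick->CTRL = 0;     // stop and clear SysTick
    SysTick->LOAD = 0;
    SysTick->VAL  = 0;
    __disable_irq();

    __HAL_SYSCFG_REMAPMEMORY_SYSTEMFLASH();  // system flash at 0x00000000

    // Stack pointer and entry point come from the bootloader's own vector table
    void (*SysMemBootJump)(void) = (void (*)(void)) (*((uint32_t *) 0x1FFF0004));
    __set_MSP(*(uint32_t *) 0x1FFF0000);
    SysMemBootJump();
}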
Then there is this alternate solution:
When I want to jump to the bootloader, I write a byte into one of the backup registers and then issue a soft reset. Then, when the processor restarts, at the very beginning of the program, it reads this register.
Note that you need the LSI or LSE clock for accessing the backup registers.
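A minimal sketch of that pattern, assuming the L4 HAL names; BOOT_MAGIC, the choice of BKP0R and jump_to_bootloader() are illustrative, not from the answer:

#define BOOT_MAGIC 0xB00710ADu          // arbitrary marker value

void request_bootloader(void)
{
    __HAL_RCC_PWR_CLK_ENABLE();         // power interface clock
    HAL_PWR_EnableBkUpAccess();         // unlock the backup domain
    WRITE_REG(RTC->BKP0R, BOOT_MAGIC);  // survives a soft reset
    NVIC_SystemReset();
}

// At the very top of main(), before any other initialization:
//   if (READ_REG(RTC->BKP0R) == BOOT_MAGIC) {
//       WRITE_REG(RTC->BKP0R, 0);      // clear the flag
//       jump_to_bootloader();          // e.g. the sketch shown earlier
//   }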
Try to avoid using __set_MSP(), as the current implementation of this function does NOT allow you to change the MSP if it is also the stack pointer you are currently using (and it most likely is). The reason is that this function marks "sp" as a clobbered register, so it will be saved before and restored afterwards.
See here: STM32L073RZ (rev Z) IAP jump to bootloader (system memory)
Find your bootloader start address in the reference manual, then use the following code. Make sure you have cleared and disabled the interrupts before doing so.
/* Jump to a different address (here: the bootloader) */
typedef void (*pFunction)(void);
pFunction Jump_To_Application;
uint32_t JumpAddress;

JumpAddress = *(__IO uint32_t*) (BootloaderAddress + 4);  /* entry point */
Jump_To_Application = (pFunction) JumpAddress;
/* Initialize the stack pointer from the same vector table */
__set_MSP(*(__IO uint32_t*) BootloaderAddress);
Jump_To_Application();
Please have a look at the official STM32 application note as well.