STM32 flashing disabled after flashing code without R/W protection - embedded

I have some experience using the StdPeriph libraries to program STM32 devices, but now I have tried the STM32Cube HAL with the STM32CubeMX code generator. I generated a project with these options:
Middleware: FreeRTOS and FatFS via SDIO
Compiler is GCC
stm32f103ret6 MCU
I imported the generated code into an Eclipse environment, made a binary, and flashed it with "st-flash write ..." as usual. My test program successfully wrote "Hello" to USART1 in a loop - no problem so far. But then, when I tried to flash another binary, it failed with "unknown chip id". If I manually connect NRST to GND, st-flash gives:
...Flash: 0 bytes (0 KiB) in pages of 2048 bytes
Full output:
2015-06-14T16:07:29 INFO src/stlink-common.c: Loading device parameters....
2015-06-14T16:07:29 INFO src/stlink-common.c: Device connected is: F1 High-density device, id 0x10036414
2015-06-14T16:07:29 INFO src/stlink-common.c: SRAM size: 0x10000 bytes (64 KiB), Flash: 0 bytes (0 KiB) in pages of 2048 bytes
I tried the ST-LINK Utility on Windows, but it cannot connect to this MCU to change the option bytes (connecting to other STM32 devices works fine).
I also tried flashing through USART1, but that failed too.
The source code I flashed, of course, does not enable any read or write protection. I tried two other MCUs, and the error was reproduced on them as well.
How can I unbrick my MCUs and flash anything again?

I found the root cause!
This is the HAL initialization function generated by STM32CubeMX:
void HAL_MspInit(void)
{
  /* USER CODE BEGIN MspInit 0 */

  /* USER CODE END MspInit 0 */

  __HAL_RCC_AFIO_CLK_ENABLE();

  HAL_NVIC_SetPriorityGrouping(NVIC_PRIORITYGROUP_4);

  /* System interrupt init*/
  /* SysTick_IRQn interrupt configuration */
  HAL_NVIC_SetPriority(SysTick_IRQn, 0, 0);

  /**DISABLE: JTAG-DP Disabled and SW-DP Disabled
  */
  __HAL_AFIO_REMAP_SWJ_DISABLE();

  /* USER CODE BEGIN MspInit 1 */

  /* USER CODE END MspInit 1 */
}
I didn't notice these simple lines!
/**DISABLE: JTAG-DP Disabled and SW-DP Disabled
*/
__HAL_AFIO_REMAP_SWJ_DISABLE();
This macro completely disables both SWD and JTAG programming; look at stm32f1xx_hal_gpio_ex.h:
#define __HAL_AFIO_REMAP_SWJ_DISABLE() MODIFY_REG(AFIO->MAPR, AFIO_MAPR_SWJ_CFG, AFIO_MAPR_SWJ_CFG_DISABLE)
I didn't find any checkbox in CubeMX to enable/disable SWD/JTAG, so this seems to be the code generator's default behavior! Pay attention to this point when using STM32CubeMX!
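If you have already generated a project like this, one workaround (a sketch, not the only option) is to re-enable the debug interface inside the USER CODE block right after the generated macro, so the fix survives regeneration. Both macros below are defined in stm32f1xx_hal_gpio_ex.h:

void HAL_MspInit(void)
{
  /* ... generated code as above ... */
  __HAL_AFIO_REMAP_SWJ_DISABLE();    /* generated by STM32CubeMX */

  /* USER CODE BEGIN MspInit 1 */
  /* Re-enable SW-DP so the chip stays flashable; use
     __HAL_AFIO_REMAP_SWJ_ENABLE() instead if you also want full JTAG. */
  __HAL_AFIO_REMAP_SWJ_NOJTAG();
  /* USER CODE END MspInit 1 */
}

Note that this only prevents the problem in firmware you flash from now on; it does not by itself recover a chip that is already running the SWJ-disabling code.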

If you set the pin assignments for the JTAG/SWD pins correctly (e.g. SYS_JTDI, SYS_JTDO-TRACESWO, etc.) on the pinout tab of STM32CubeMX, the generated code will not disable JTAG/SWD.

It's (BURIED) under Pinout | SYS | Debug in STM32CubeMX; set it to Serial Wire (or whichever debug interface you use).

Related

STM32F103 SPI different pins does not work

I am currently working on a project with LoRaWAN technology using an STM32F103C8T6 microcontroller. For LoRa I am using SPI in Full-Duplex Master mode (SPI1 specifically), and in CubeIDE, when you activate SPI1, pins PA5, PA6 and PA7 are activated automatically (ver1):
However, the PCB has already been designed and printed, and those pins are unfortunately occupied, because the original plan was to use the alternative SPI1 pins (PB3, PB4, PB5) (ver2):
When I use ver1, all is good: LoRa connects to the server and sends data without a problem. However, when I use ver2, it does not work at all. I debugged to find where the problem is and found out that the SPI read fails (when the LoRa version register is read, it returns 0), so an ASSERT fires and the code gets stuck in an infinite loop. I could not find any reference to a difference between these SPI pins on the internet.
Can anyone explain the difference between these pins? And is it possible to use ver2? Thanks in advance.
P.S. I am using the HAL library + the LMIC library (for LoRa), and the SPI configuration is the same for both ver1 and ver2. Here is the configuration code, if needed:
void MX_SPI1_Init(void)
{
  hspi1.Instance = SPI1;
  hspi1.Init.Mode = SPI_MODE_MASTER;
  hspi1.Init.Direction = SPI_DIRECTION_2LINES;
  hspi1.Init.DataSize = SPI_DATASIZE_8BIT;
  hspi1.Init.CLKPolarity = SPI_POLARITY_LOW;
  hspi1.Init.CLKPhase = SPI_PHASE_1EDGE;
  hspi1.Init.NSS = SPI_NSS_SOFT;
  hspi1.Init.BaudRatePrescaler = SPI_BAUDRATEPRESCALER_64;
  hspi1.Init.FirstBit = SPI_FIRSTBIT_MSB;
  hspi1.Init.TIMode = SPI_TIMODE_DISABLE;
  hspi1.Init.CRCCalculation = SPI_CRCCALCULATION_DISABLE;
  hspi1.Init.CRCPolynomial = 10;
  if (HAL_SPI_Init(&hspi1) != HAL_OK)
  {
    Error_Handler();
  }
}
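For comparison, the part that actually differs between ver1 and ver2 is the generated pin and remap setup in HAL_SPI_MspInit rather than MX_SPI1_Init. A rough sketch of what the ver2 (PB3/PB4/PB5) variant looks like on an F103, assuming the usual CubeMX-generated structure (illustrative only, not the poster's exact code):

void HAL_SPI_MspInit(SPI_HandleTypeDef *hspi)
{
  GPIO_InitTypeDef GPIO_InitStruct = {0};

  if (hspi->Instance == SPI1)
  {
    __HAL_RCC_SPI1_CLK_ENABLE();
    __HAL_RCC_GPIOB_CLK_ENABLE();
    __HAL_RCC_AFIO_CLK_ENABLE();

    /* PB3 = SCK, PB5 = MOSI: alternate-function push-pull */
    GPIO_InitStruct.Pin = GPIO_PIN_3 | GPIO_PIN_5;
    GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
    GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_HIGH;
    HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);

    /* PB4 = MISO: input */
    GPIO_InitStruct.Pin = GPIO_PIN_4;
    GPIO_InitStruct.Mode = GPIO_MODE_INPUT;
    GPIO_InitStruct.Pull = GPIO_NOPULL;
    HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);

    /* PB3/PB4 are JTAG pins (JTDO/NJTRST) after reset, so the SWJ
       configuration must release them before routing SPI1 there. */
    __HAL_AFIO_REMAP_SWJ_NOJTAG();
    __HAL_AFIO_REMAP_SPI1_ENABLE();
  }
}

The remap and SWJ calls are the easy bits to miss when moving off the default pins; they do not, however, explain the failure here, which turned out to be the I2C1-SMBA/PB5 errata mentioned below.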
P.P.S.: I also asked this question on Electronics Stack Exchange, but there was no answer there, so I decided to share it here too.
After lots of tries, I found out that remapped SPI1 does not work together with I2C1, because the I2C1 SMBA pin overlaps with the SPI1 MOSI pin (PB5), even if you are not using SMBA. You can read about it in the STM32F103x8 errata, section 2.8.7.
So I guess I will use I2C2 to avoid the collision. The only change I need to make on the PCB is rerouting the I2C1 pins to I2C2 (2 pins), which is far better than rerouting the SPI1 pins (3 pins) plus the other elements occupying the ver1 pins (also 3).

flash write efm32zg fails with while (DMA->CHENS & DMA_CHENS_CH0ENS)

I am attempting to create a boot loader which allows me to update a processor's software remotely.
I am using keil uvision compiler (V5.20.0.0).
Flash.c, startup_efm32zg.s, startup_efm32zg.c and em_dma.c configured to execute from RAM (code, Zero init data, other data) via their options/properties tabs.
Stack size configured at 0x0000 0800 via the startup_efm32zg.s Configuration Wizard tab.
I am using Silicon Labs' flash.c and flash.h, with RAMFUNC removed since it is redundant with the Keil configuration above.
I modified the flash.c code slightly so it stays in the FLASH_write function (supposedly in RAM) until the DMA is done doing its thing.
I moved the
while (DMA->CHENS & DMA_CHENS_CH0ENS);
line down to the end of the function and added a little wrapper around it like this:
/* Activate channel 0 */
DMA->CHENS = DMA_CHENS_CH0ENS;

if (DMA->CHENS & DMA_CHENS_CH0ENS)
{
  /* Start the transfer */
  MSC->WRITECMD = MSC_WRITECMD_WRITETRIG;

  /* Wait until transfer is done */
  while (DMA->CHENS & DMA_CHENS_CH0ENS)
  {
    /* do nothing here */
  }
}
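For reference, the same wait with a crude timeout would look like this (the spin budget is an arbitrary illustration, not Silicon Labs library code; it just avoids hanging forever if the channel never clears):

/* Activate channel 0, but give up after a bounded number of polls. */
uint32_t budget = 1000000UL;

DMA->CHENS = DMA_CHENS_CH0ENS;
if (DMA->CHENS & DMA_CHENS_CH0ENS)
{
  /* Start the transfer */
  MSC->WRITECMD = MSC_WRITECMD_WRITETRIG;

  /* Wait until the channel auto-disables or the budget runs out */
  while ((DMA->CHENS & DMA_CHENS_CH0ENS) && (--budget != 0))
  {
    /* do nothing here */
  }
}

if (budget == 0)
{
  /* transfer did not complete: report an error instead of hanging */
}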
FLASH_init() is called as part of the initial setup prior to entering my infinite loop.
When called upon to update the flash.....
(1): I disable interrupts.
(2): I call FLASH_erasePage starting at 0x0000 2400. This works.
(3): I call FLASH_write.
FLASH_write(&startAddress, (uint32_t *)flashBuffer, (BLOCK_SIZE/4));
Where:
startAddress = 0x00002400,
flashBuffer = a buffer of type uint8_t flashBuffer[256],
BLOCK_SIZE is defined as 256 (#define BLOCK_SIZE 256).
It gets stuck here in the function:
while (DMA->CHENS & DMA_CHENS_CH0ENS)
Eventually the debugger execution stops and the Call Stack clears to be left with 0x00000000 and ALL of memory is displayed as 0xAA.
I have set aside 9K of flash for the bootloader. After a build I am told:
Program size: Code=7524 RO-data=304 RW-data=664 ZI-data=3432
Target Memory Options for Target1:
IROM1: Start[0x0] Size[0x2400]
IRAM1: Start[0x20000000] Size:[0x1000]
So .... what on earth is going on? Any help?
One of my other concerns is that it is supposed to be executing from RAM. When I look in the Call Stack at the Location/Value for FLASH_write, after having stepped into the FLASH_write function, I see 0x000008A4. This is flash!(?)
I've tried the whole RAM_FUNC thing too, with the same results.

STM32CubeMX USB CDC VCP?

I've found a large number of examples, but nothing on how to do it "properly" from STM32CubeMX.
How do I create skeleton code from STM32CubeMX for USB CDC virtual COM port communications (if possible STM32F4 Discovery)?
An STM32CubeMX project for the Discovery F4 with CDC as a USB device should work out of the box. Assuming you use an up-to-date STM32CubeMX and library:
Start STM32CubeMX
Select the board Discovery F4
Enable the USB_OTG_FS peripheral in Device Only mode (leave the other options unchecked)
Enable the USB_Device middleware: Communication Device Class (CDC)
In the clock tab, check that the clock source is HSE. It should give 168 MHz HCLK and 48 MHz on the 48 MHz (USB) clock. Check there is no red anywhere.
Save the project
Generate the code (I used the SW4STM32 toolchain)
Build (you may need to switch to the internal CDT builder instead of GNU make).
Now add some code to send data over the COM port and voilà, it should work.
Actually, the tricky part is not to make any CDC access until the host USB connects (no CDC setup has happened yet).
Here is how I did it for a quick transmit test:
In file usbd_cdc_if.c
uint8_t CDC_Transmit_FS(uint8_t* Buf, uint16_t Len)
{
  uint8_t result = USBD_OK;
  /* USER CODE BEGIN 7 */
  if (hUsbDevice_0 == NULL)
    return -1;

  USBD_CDC_SetTxBuffer(hUsbDevice_0, Buf, Len);
  result = USBD_CDC_TransmitPacket(hUsbDevice_0);
  /* USER CODE END 7 */
  return result;
}

static int8_t CDC_DeInit_FS(void)
{
  /* USER CODE BEGIN 4 */
  hUsbDevice_0 = NULL;
  return (USBD_OK);
  /* USER CODE END 4 */
}
In file main.c
/* USER CODE BEGIN Includes */
#include "usbd_cdc_if.h"
/* USER CODE END Includes */
....
/* USER CODE BEGIN WHILE */
while (1)
{
  /* USER CODE END WHILE */

  /* USER CODE BEGIN 3 */
  uint8_t HiMsg[] = "hello\r\n";
  CDC_Transmit_FS(HiMsg, strlen(HiMsg));
  HAL_Delay(200);
}
As soon as you plug in the micro USB (CN5), CDC data will start to show on the host terminal.
That works. I can see "hello" on the terminal (you may need to install a driver, http://www.st.com/web/en/catalog/tools/PF257938).
For reception, it first needs to be armed, i.e. started by an initial call to USBD_CDC_ReceivePacket() in a suitable place; CDC_Init_FS is a good candidate.
Then you can handle data as it arrives in CDC_Receive_FS and re-arm reception from there.
That works for me.
static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
  /* USER CODE BEGIN 6 */
  USBD_CDC_ReceivePacket(hUsbDevice_0);
  return (USBD_OK);
  /* USER CODE END 6 */
}

static int8_t CDC_Init_FS(void)
{
  hUsbDevice_0 = &hUsbDeviceFS;
  /* USER CODE BEGIN 3 */
  /* Set Application Buffers */
  USBD_CDC_SetTxBuffer(hUsbDevice_0, UserTxBufferFS, 0);
  USBD_CDC_SetRxBuffer(hUsbDevice_0, UserRxBufferFS);
  USBD_CDC_ReceivePacket(hUsbDevice_0);
  return (USBD_OK);
  /* USER CODE END 3 */
}
There are a number of STM32F4 Discovery boards supported by the STM32Cube software, and you haven’t said which you’re using, but I’ve had exactly the same issue with the Discovery board with the F401VCT MCU.
After installing the STM virtual COM port driver, Windows Device Manager showed a STMicroelectronics virtual COM port, but with a yellow warning mark. The COM port was not accessible with a terminal application (PuTTY).
I eventually found that there is a problem with the source code output from the STM32Cube program, but there is a simple fix:
Open a new STM32Cube project, enable USB_OTG_FS as Device Only, and select CDC Virtual Port COM from the MiddleWares USB_Device drop-down.
Generate the source code with no other changes needed to any USB settings.
In file usbd_cdc_if.c, change #define USB_HS_MAX_PACKET_SIZE from 512 to 256.
In file usbd_cdc.c, change the #define CDC_DATA_HS_MAX_PACKET_SIZE from 512 to 256.
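In other words, the edit amounts to changing these two values (in newer library versions the same defines live in usbd_def.h and usbd_cdc.h, as noted further down):

/* usbd_cdc_if.c (usbd_def.h in newer libraries) */
#define USB_HS_MAX_PACKET_SIZE        256  /* was 512 */

/* usbd_cdc.c (usbd_cdc.h in newer libraries) */
#define CDC_DATA_HS_MAX_PACKET_SIZE   256  /* was 512 */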
After doing this, the yellow warning disappeared from Device Manager, and I could receive data at the CDC_Receive_FS function (in usbd_cdc_if.c file) when using PuTTY. Be aware that these definitions return to their incorrect values each time the STM32Cube generates code, and I haven’t found a way around this yet.
I hope this helps.
iChal's fix worked to remove the yellow warning mark.
I would like to mention that USB_HS_MAX_PACKET_SIZE is now in usbd_def.h and CDC_DATA_HS_MAX_PACKET_SIZE is in usbd_cdc.h
I am using STM32CubeMX v4.11.0, STM32Cube v1.0, and the STM32F401C-DISCO.
On further work, I now only have to set the heap size to a larger value.
I am setting it to 0x600 as I also have FreeRTOS enabled. I am using IAR EWARM, so the change is made in linker script stm32f401xc_flash.icf.

atmel sensor using printf

I have an Atmel UC3-L0 board and a compass sensor. I installed Atmel Studio and downloaded some demo code onto the board, but I have no idea where the printf calls in the demo code will output their data. What should I do to see the data?
The printf function outputs to stdout.
Usually, on a "naked" processor with no operating system, you need to define how a character is sent to or received from a physical interface (usually a USART, console port, USB port, 4-bit LCD interface, etc.). So typically you may want to use the USART port of your processor board to connect to a PC running HyperTerminal, PuTTY or similar over a serial cable.
In essence you will need to:
create FILE streams using the fdev_setup_stream() macro, and
provide pointers to get() and put() functions that tell printf() how exactly to read from and write to that stream (e.g. read/write a USART, an LCD display, etc.).
Depending on your hardware, you may already have libraries that contain such functions (plus the correct port initialisation functions), e.g. uart.c/.h, lcd.c/.h, etc.
In the documentation of stdio.h (e.g. here) look for the following:
printf(), fdev_setup_stream()
If you have downloaded Atmel Studio you may look into the stdiodemo.c code for further insight.
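As a rough sketch of that stdiodemo-style retargeting (purely illustrative: the register names below are for an AVR8-style USART such as the ATmega328's, not the UC3; on the UC3/ASF side the next answer's stdio_serial_init() does the equivalent job):

#include <stdio.h>
#include <avr/io.h>

/* Send one character out of the USART; printf() will call this for us. */
static int uart_putchar(char c, FILE *stream)
{
    loop_until_bit_is_set(UCSR0A, UDRE0);   /* wait until the data register is free */
    UDR0 = c;                               /* transmit the character */
    return 0;
}

/* A write-only stdio stream backed by uart_putchar(). */
static FILE uart_stdout = FDEV_SETUP_STREAM(uart_putchar, NULL, _FDEV_SETUP_WRITE);

int main(void)
{
    UBRR0 = 103;                 /* 9600 baud at 16 MHz, as an example */
    UCSR0B = (1 << TXEN0);       /* enable the transmitter */

    stdout = &uart_stdout;       /* from now on printf() goes to the USART */
    printf("Hello from printf\r\n");

    for (;;) { }
}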
In order to use printf in Atmel Studio you should check the following things:
Add and Apply the Standard serial I/O module from Project->ASF Wizard.
Also add the USART module from the ASF Wizard.
Include the following code snippet before the main function.
static struct usart_module usart_instance;

static void configure_console(void)
{
  struct usart_config usart_conf;

  usart_get_config_defaults(&usart_conf);
  usart_conf.mux_setting = EDBG_CDC_SERCOM_MUX_SETTING;
  usart_conf.pinmux_pad0 = EDBG_CDC_SERCOM_PINMUX_PAD0;
  usart_conf.pinmux_pad1 = EDBG_CDC_SERCOM_PINMUX_PAD1;
  usart_conf.pinmux_pad2 = EDBG_CDC_SERCOM_PINMUX_PAD2;
  usart_conf.pinmux_pad3 = EDBG_CDC_SERCOM_PINMUX_PAD3;
  usart_conf.baudrate = 115200;

  stdio_serial_init(&usart_instance, EDBG_CDC_MODULE, &usart_conf);
  usart_enable(&usart_instance);
}
Make sure you call configure_console() after system_init() in the main function.
Now go to Tools -> Extension Manager and add the Terminal Window extension.
Build and run your program, then open the terminal window from View -> Terminal Window. Select the COM port your device is connected to, set the baud rate to 115200, and hit Connect in the terminal window.
You should see the printf output now. (Floats don't get printed in Atmel Studio.)
I was recently puzzling over this myself. I had installed Atmel Studio 7.0 and was using the SAMD21 dev board with an example project in which a call to printf was made.
In the sample code I saw that there was a configuration section:
/*!
 * \brief Initialize USART to communicate with on board EDBG - SERCOM
 * with the following settings.
 * - 8-bit asynchronous USART
 * - No parity
 * - One stop bit
 * - 115200 baud
 */
static void configure_usart(void)
{
  struct usart_config config_usart;

  // Get the default USART configuration
  usart_get_config_defaults(&config_usart);

  // Configure the baudrate
  config_usart.baudrate = 115200;

  // Configure the pin multiplexing for USART
  config_usart.mux_setting = EDBG_CDC_SERCOM_MUX_SETTING;
  config_usart.pinmux_pad0 = EDBG_CDC_SERCOM_PINMUX_PAD0;
  config_usart.pinmux_pad1 = EDBG_CDC_SERCOM_PINMUX_PAD1;
  config_usart.pinmux_pad2 = EDBG_CDC_SERCOM_PINMUX_PAD2;
  config_usart.pinmux_pad3 = EDBG_CDC_SERCOM_PINMUX_PAD3;

  // Route the printf output to the USART
  stdio_serial_init(&usart_instance, EDBG_CDC_MODULE, &config_usart);

  // Enable the USART
  usart_enable(&usart_instance);
}
In Windows Device Manager I saw an "Atmel Corp. EDBG USB Port (COM3)" listed under "Ports". However, one of the "Properties" of this port was listed as 9600 bits per second. I changed this from 9600 to 115200 to be consistent with the config section above.
Finally, I ran PuTTY.exe and set the Connection --> Serial settings to COM3 and 115200 baud. Then I went to Session, clicked the Serial connection type, and clicked the Open button. And, BAM, there's my printf output via PuTTY.

Using Port DD as GPIO on MCF5282

I’ve got an MCF5282 on which I’m trying to use PDD4 as a GPIO. In my setup code, I’ve got:
MCF5282_GPIO_DDRDD = 0x10; /* cs on dd4. */
MCF5282_GPIO_PORTDD = 0x10; /* active-low. */
And in my main loop, I’ve got:
MCF5282_GPIO_PORTDD = (mainloop_cnt & 0x10);
This should give me a nice square wave on the oscilloscope, but the port doesn’t seem to be doing what I tell it. Am I missing some setup steps? I can’t find anything in the 5282 manual about a “Port DD pin-assignment register” to repurpose the pin from its “primary” role as DDATA.
Edit 2011-03-01: We never figured this out, we just used a different pin for GPIO.
You probably need to clear PSTEN in the Chip Configuration Register (CCR) to disable DDATA; see page 27-4 of the MCF5282 and MCF5216 ColdFire Microcontroller User’s Manual.
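A minimal sketch of that, following the same header naming convention as the code in the question (the CCR macro names are assumptions; check your mcf5282.h for the exact spelling):

/* Clear PSTEN in the Chip Configuration Register so the PST/DDATA pins are
   released for GPIO, then set up PDD4 as before (macro names assumed). */
MCF5282_CCM_CCR &= ~MCF5282_CCM_CCR_PSTEN;   /* disable the DDATA/PST function */

MCF5282_GPIO_DDRDD  = 0x10;                  /* PDD4 as output */
MCF5282_GPIO_PORTDD = 0x10;                  /* drive high (active-low CS idle) */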