I'm working with the qemu riscv32 emulator. I have managed to boot a simple hello-world image I got from GitHub, but I haven't managed to boot my own image. I suspect this is because I built my image without a linker script, so it is being loaded at the wrong address. I'm trying to understand how the qemu boot sequence works in order to fix this.
This is the linker script I'm using:
OUTPUT_ARCH( "riscv" )
OUTPUT_FORMAT("elf32-littleriscv")
ENTRY( _start )
SECTIONS
{
/* text: test code section */
. = 0x20400000;
.text : { *(.text) }
/* gnu_build_id: readonly build identifier */
.gnu_build_id : { *(.note.gnu.build-id) }
/* rodata: readonly data segment */
.rodata : { *(.rodata) }
/* data: Initialized data segment */
. = 0x80000000;
.data : { *(.data) }
.sdata : { *(.sdata) }
.debug : { *(.debug) }
. += 0x1000;
stack_top = .;
/* End of uninitialized data segment */
_end = .;
}
And this is the qemu command I'm executing:
qemu-system-riscv32 -nographic -machine sifive_e -bios none -kernel hello
# with -s -S when debugging
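To see which addresses the ELF itself asks for, the binutils tools can help (a sketch, assuming a riscv32-unknown-elf- toolchain prefix):
riscv32-unknown-elf-readelf -h hello    # "Entry point address" shows where execution is meant to start
riscv32-unknown-elf-objdump -h hello    # section headers show the VMA/LMA chosen by the linker script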
The source code is not very relevant; it is just a small assembly file that writes "hello".
My main question is:
How can I know at which address qemu expects to find the image?
Other questions I would like to answer:
With gdb, I have noticed that qemu starts executing at address 0x1004 (before I do anything). I was expecting it to be 0x0. Why is this?
I have read that qemu can use U-Boot. Does it use it, or any other bootloader, by default?
If so, is there any way to load an image at address 0x0 without any sort of bootloader intervening? (I ask this for debugging purposes, because the first time you try a new arch. you possibly want to keep everything as simple as possible.)
Does the -kernel option just load the provided image, or does it do something more (like loading a Linux kernel and executing the provided image on top of it)?
I'm using the sifive_e machine, so I went to the SiFive E series datasheet (like this one) to check the memory map and find the starting address. The addresses listed there are very different from those specified in the linker script above. It seems I'm looking at the wrong place; where can I find the SiFive E boot address?
EDIT
With regards to the last question about the memory map, I found the answer. It is explained here (5.16) and here (chapter 6)
I'm trying to create trace log messages for this Idd Sample Driver. I am following this document.
I add WPP_INIT_TRACING(pDriverObject, pRegistryPath) to the DriverEntry, and WPP_CLEANUP(pDriverObject) to the EvtCleanupCallback.
_Use_decl_annotations_
void DriverContextCleanup(WDFOBJECT DriverObject)
{
UNREFERENCED_PARAMETER(DriverObject);
DoTraceMessage(MYDRIVER_ALL_INFO, "Tracing Fini Success");
WPP_CLEANUP(WdfDriverWdmGetDriverObject(DriverObject));
}
_Use_decl_annotations_
extern "C" NTSTATUS DriverEntry(
PDRIVER_OBJECT pDriverObject,
PUNICODE_STRING pRegistryPath
)
{
WDF_DRIVER_CONFIG Config;
NTSTATUS Status;
WDF_OBJECT_ATTRIBUTES Attributes;
WDF_OBJECT_ATTRIBUTES_INIT(&Attributes);
Attributes.EvtCleanupCallback = DriverContextCleanup;
WDF_DRIVER_CONFIG_INIT(&Config,
IddSampleDeviceAdd
);
WPP_INIT_TRACING(pDriverObject, pRegistryPath);
DoTraceMessage(MYDRIVER_ALL_INFO, "Tracing Init . . .");
Status = WdfDriverCreate(pDriverObject, pRegistryPath, &Attributes, &Config, WDF_NO_HANDLE);
if (!NT_SUCCESS(Status))
{
DoTraceMessage(MYDRIVER_ALL_INFO, "Tracing Init Failed");
WPP_CLEANUP(pDriverObject);
return Status;
}
DoTraceMessage(MYDRIVER_ALL_INFO, "Tracing Init Success");
return Status;
}
I add some DoTraceMessage() calls with a flag of MYDRIVER_ALL_INFO to the DriverEntry and DeviceEntry.
NTSTATUS IddSampleDeviceD0Entry(WDFDEVICE Device, WDF_POWER_DEVICE_STATE PreviousState)
{
UNREFERENCED_PARAMETER(PreviousState);
// This function is called by WDF to start the device in the fully-on power state.
DoTraceMessage(MYDRIVER_ALL_INFO, "Tracing Device Entry");
auto* pContext = WdfObjectGet_IndirectDeviceContextWrapper(Device);
pContext->pContext->InitAdapter();
return STATUS_SUCCESS;
}
I make sure WPP Tracing is set to YES in the properties of the project.
The project builds; I go into TraceView, open the IddSampleDriver.PDB file, set the level to verbose, and check all of the flags. I verified that this PDB has the trace information it needs, since if I open the IddSampleApp.PDB file instead, it fails.
I install the driver after enabling TestSigning, installing with pnputil -a ./x64/Debug/IddSampleDriver/IddSampleDriver.inf, and run the sample app; the driver spins up 3 virtual monitors in Display Settings. I then exit the app and the monitors disappear. Everything seems to be functional. The problem is that there are no traces in TraceView.
I have tried using tracelog, following this. Still nothing.
I have tried using logman, following this. Still nothing.
I am at my wits' end. I spent all of last week on this, trying every possible avenue to get my trace messages to appear.
I followed every one of these instructions with no success. Either I somehow messed up every single one of them, or I am missing something else that I need to do in order to view these traces.
Additional Info:
Trace.h was left untouched
Targeting x64, Debug. Running on build machine. Win10.
CTL file I used:
b254994f-46e6-4718-80a0-0a3aa50d6ce4 MyDriver1TraceGuid
Basic process I used (tracelog as example):
tracepdb -f .\x64\Debug\IddSampleDriver.pdb
tracelog -start TestTraceIDD -guid .\guid.ctl -f testTrace.etl -flag 0xff
pnputil -a .\x64\Debug\IddSampleDriver\IddSampleDriver.inf #install driver
.\x64\Debug\IddSampleApp.exe #create software device and attach driver to it
<exit app>
tracelog -stop TestTraceIDD
tracefmt.exe .\testTrace.etl -p . -o test.out
pnputil -d oem20.inf -f #uninstall driver
Solved my problem. I wasn't actually installing my new driver: the driver was still installed from the first time I installed it, so Windows was always using that one instead of my new build with WPP enabled. I was installing and uninstalling the driver with pnputil.
I was doing pnputil -d oem20.inf -f, for example, to uninstall the driver. This is BAD. I have learned now that force-deleting a driver package does nothing useful here. The reason I was force-deleting was that it wouldn't delete while a device still used it, even though I had exited the sample app.
So what you have to do in order to properly delete the driver is enumerate the devices with pnputil, remove the ones that use your driver, then delete the driver. This allows a proper fresh driver installation.
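For example, something like this (a sketch: the /enum-devices and /remove-device verbs assume a recent Windows 10 pnputil, and <device-instance-id> is a placeholder):
pnputil /enum-devices                          # find the instance IDs of the devices using your driver
pnputil /remove-device "<device-instance-id>"  # remove each such device first
pnputil /delete-driver oem20.inf               # now the package deletes cleanly, no -f needed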
I have a VxWorks Image Project (without a file system) on an MPC5200B, using the DIAB toolchain.
I need to dynamically load a module from flash.
I allocated memory on my stack: char myTemporaryModuleData[MAX_MODULE_SIZE]
and filled it with the module data from flash
(I checked that the binary data is intact).
Then I create a memory device with memDevCreate("/mem/mem01", myTemporaryModuleData, moduleReadLength)
and open the pseudo-stream: int fdModuleData = open("/mem/mem01", O_RDONLY, 777);
When I run int mId = loadModule(fdModuleData, LOAD_ALL_SYMBOLS);
I did not see anything in the console after running loadModule(), but mId = 0, which indicates failure :(.
getErrno() returned 0x3D0004 (S_objLib_OBJ_TIMEOUT)
NOTE: it didn't take long at all to fail => timeout?
I tried replacing the module with a simple void foo() { printf(...); } module, but it still fails with the same issue.
I also tried loading an .out instead of a .o.
Unfortunately, none of this got me anywhere.
How can I know what caused it to fail? (a log, last error, anything I should check?)
FOUND IT.
Apparently, it was a mistake in the data read from the flash.
What I can contribute is that loadModule() from a memDrv device is possible and working.
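For anyone else trying this, a minimal sketch of the working sequence with error checks added (assumes the memDrv component is included in the VIP; the buffer is filled from flash beforehand, as in the question):
#include <vxWorks.h>
#include <ioLib.h>
#include <memDrv.h>
#include <loadLib.h>
#include <errnoLib.h>
#include <stdio.h>

STATUS loadModuleFromBuffer(char *buf, int len)
{
    int fd;
    MODULE_ID mId;

    /* expose the RAM buffer holding the module as a pseudo-file */
    if (memDevCreate("/mem/mem01", buf, len) == ERROR)
        return ERROR;

    fd = open("/mem/mem01", O_RDONLY, 0);
    if (fd == ERROR)
        return ERROR;

    mId = loadModule(fd, LOAD_ALL_SYMBOLS);
    if (mId == NULL)
        printf("loadModule failed, errno = %#x\n", errnoGet());

    close(fd);
    return (mId == NULL) ? ERROR : OK;
}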
I'm trying to connect to a 2 GB class 6 SD card with an STM32F091CCTx MCU via SPI, using the FatFs library ver. R0.13a. I'm able to mount the drive and open the file with the f_mount and f_open functions. But when it comes to reading from the file, it just freezes somewhere in f_read. Also, when I try to change the position of the file pointer with f_lseek, it freezes again. f_lseek works only when I write it as f_lseek(&MyFile, 0).
This part of my code is as below:
if(FATFS_LinkDriver(&SD_Driver, SDPath) == 0)
{
f_mount(&SDFatFs, (TCHAR const*)SDPath, 1);
f_open(&MyFile, "SAMPLE1.WAV", FA_READ);
f_lseek(&MyFile, 200);
f_read(&MyFile, rtext, 1000, (UINT*)&bytesread);
}
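For what it's worth, checking every FRESULT at least shows which call fails; the same sequence with checks (a sketch, standard FatFs R0.13 API):
FRESULT res;
UINT bytesread;  /* f_read wants a UINT*; the (UINT*)&bytesread cast above may be hiding a size mismatch */

res = f_mount(&SDFatFs, (TCHAR const*)SDPath, 1);
if (res != FR_OK) { /* mount failed: diskio/SPI init or card problem */ }

res = f_open(&MyFile, "SAMPLE1.WAV", FA_READ);
if (res != FR_OK) { /* file not found, or the volume never mounted */ }

res = f_lseek(&MyFile, 200);
if (res != FR_OK) { /* seek failed */ }

res = f_read(&MyFile, rtext, 1000, &bytesread);
if (res != FR_OK) { /* disk_read() returned an error at the SPI layer */ }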
You have probably run out of heap and gone into the HardFault exception.
You can increase the heap size from CubeMX -> Project Settings, or directly in the *_startup.s file.
PS: Print something in the HardFault_Handler and Error_Handler functions to see when something goes wrong.
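For example, a minimal trap in the fault handler makes the failure point visible under the debugger (a sketch; __BKPT is the CMSIS intrinsic, available once the device header is included):
void HardFault_Handler(void)
{
    __BKPT(0);      /* halt here when a debugger is attached */
    while (1) { }   /* otherwise spin so the fault cannot go unnoticed */
}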
I am attempting to create a boot loader which allows me to update a processor's software remotely.
I am using the Keil uVision compiler (V5.20.0.0).
Flash.c, startup_efm32zg.s, startup_efm32zg.c and em_dma.c configured to execute from RAM (code, Zero init data, other data) via their options/properties tabs.
Stack size configured at 0x0000 0800 via the startup_efm32zg.s Configuration Wizard tab.
Using Silicon Labs flash.c and flash.h, removed RAMFUNC as this is redundant to Keil configuration, above.
I modified the flash.c code slightly so it stays in the FLASH_write function (supposedly in RAM) until the DMA is done doing its thing.
I moved the
while (DMA->CHENS & DMA_CHENS_CH0ENS);
line down to the end of the function and added a little wrapper around it like this:
/* Activate channel 0 */
DMA->CHENS = DMA_CHENS_CH0ENS;
if (DMA->CHENS & DMA_CHENS_CH0ENS)
{
/* Start the transfer */
MSC->WRITECMD = MSC_WRITECMD_WRITETRIG;
/* Wait until transfer is done */
while (DMA->CHENS & DMA_CHENS_CH0ENS)
{
//do nothing here
}
}
FLASH_init() is called as part of the initial setup prior to entering my infinite loop.
When called upon to update the flash.....
(1): I disable interrupts.
(2): I call FLASH_erasePage starting at 0x0000 2400. This works.
(3): I call FLASH_write.
FLASH_write(&startAddress, (uint32_t *)flashBuffer, (BLOCK_SIZE/4));
Where:
startAddress = 0x00002400,
flashBuffer = a buffer of type uint8_t flashBuffer[256],
#define BLOCK_SIZE 256.
It gets stuck here in the function:
while (DMA->CHENS & DMA_CHENS_CH0ENS)
Eventually the debugger execution stops and the Call Stack clears to be left with 0x00000000 and ALL of memory is displayed as 0xAA.
I have set aside 9K of flash for the bootloader. After a build I am told:
Program size: Code=7524 RO-data=304 RW-data=664 ZI-data=3432
Target Memory Options for Target1:
IROM1: Start[0x0] Size[0x2400]
IRAM1: Start[0x20000000] Size[0x1000]
So .... what on earth is going on? Any help?
One of my other concerns is that it is supposed to be executing from RAM. When I look in the Call Stack at the Location/Value for FLASH_write, after having stepped into the FLASH_write function, I see 0x000008A4. This is flash!(?)
I've tried the whole RAM_FUNC thing too, with the same results.
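One quick sanity check is to compare the function's address against the RAM range at run time; a sketch using the IRAM1 settings above (start 0x20000000, size 0x1000):
uint32_t addr = (uint32_t)(void *)&FLASH_write;
if (addr >= 0x20000000UL && addr < 0x20001000UL)
{
    /* FLASH_write was really placed in RAM */
}
else
{
    /* still linked into flash - the RAM placement did not take effect */
}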
I have an Atmel UC3-L0 and a compass sensor. I have installed Atmel Studio and downloaded some demo code onto the board, but I have no idea where the printf function in the demo code will output its data. What should I do to get the data?
The printf function outputs to stdout.
Usually on a "naked" processor with no operating system you need to define how a character is sent or received over a physical interface (usually a USART, console port, USB port, 4-port LCD interface, etc.). So typically you may want to use the USART port of your processor board to connect to a PC running HyperTerm, PuTTY or similar, using a serial cable.
In essence you will need to:
create FILE streams using the fdev_setup_stream() macro, and
provide pointers to functions get() and put() that tell the printf() function how exactly to read and write from/to that stream (e.g. read/write to a USART, an LCD display, etc.).
You may have libraries - depending on your hardware - that already contain such functions (plus the correct port initialisation functions), e.g. uart.c/.h, lcd.c/.h, etc.
In the documentation of stdio.h (e.g. here) look for the following:
printf(), fdev_setup_stream()
If you have downloaded Atmel Studio you may look into the stdiodemo.c code for further insight.
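A minimal sketch of that stream setup in avr-libc style (uart_putchar is a stub standing in for your own USART output routine):
#include <stdio.h>

static int uart_putchar(char c, FILE *stream)
{
    (void)stream;
    /* TODO: wait for the USART data register to be empty, then write c */
    return 0;
}

static FILE uart_out = FDEV_SETUP_STREAM(uart_putchar, NULL, _FDEV_SETUP_WRITE);

int main(void)
{
    /* ... clock and USART initialisation ... */
    stdout = &uart_out;   /* printf() now goes through uart_putchar() */
    printf("hello\n");
    for (;;) { }
}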
In order to use printf in ATMEL studio you should check the following things:
Add and Apply the Standard serial I/O module from Project->ASF Wizard.
Also add the USART module from the ASF Wizard.
Include the following code snippet before the main function.
static struct usart_module usart_instance;
static void configure_console(void)
{
struct usart_config usart_conf;
usart_get_config_defaults(&usart_conf);
usart_conf.mux_setting = EDBG_CDC_SERCOM_MUX_SETTING;
usart_conf.pinmux_pad0 = EDBG_CDC_SERCOM_PINMUX_PAD0;
usart_conf.pinmux_pad1 = EDBG_CDC_SERCOM_PINMUX_PAD1;
usart_conf.pinmux_pad2 = EDBG_CDC_SERCOM_PINMUX_PAD2;
usart_conf.pinmux_pad3 = EDBG_CDC_SERCOM_PINMUX_PAD3;
usart_conf.baudrate = 115200;
stdio_serial_init(&usart_instance, EDBG_CDC_MODULE, &usart_conf);
usart_enable(&usart_instance);
}
Make sure you call configure_console() after system_init() in your main function (see the sketch after this list).
Now go to Tools -> Extension Manager and add the Terminal Window extension.
Build and run your program and open the terminal window from View -> Terminal Window. Select the COM port your device is connected to, set the baud to 115200, and hit Connect on the terminal window.
You should see the printf statements now. (Floats don't get printed in Atmel Studio.)
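A minimal main() wiring those steps together might look like this (a sketch; system_init() and the EDBG_CDC_* constants come from ASF and the board definition):
#include <asf.h>   /* ASF umbrella header pulls in system_init(), USART and stdio_serial */

int main(void)
{
    system_init();        /* ASF board and clock init */
    configure_console();  /* the snippet above: route stdio to the EDBG virtual COM port */
    printf("printf is alive\r\n");
    while (1) { }
}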
I was recently puzzling over this myself. I had installed Atmel Studio 7.0 and was using the SAMD21 dev board with an example project in which a call to printf was made.
In the sample code I saw that there was a configuration section:
/*!
* \brief Initialize USART to communicate with on board EDBG - SERCOM
* with the following settings.
* - 8-bit asynchronous USART
* - No parity
* - One stop bit
* - 115200 baud
*/
static void configure_usart(void)
{
struct usart_config config_usart;
// Get the default USART configuration
usart_get_config_defaults(&config_usart);
// Configure the baudrate
config_usart.baudrate = 115200;
// Configure the pin multiplexing for USART
config_usart.mux_setting = EDBG_CDC_SERCOM_MUX_SETTING;
config_usart.pinmux_pad0 = EDBG_CDC_SERCOM_PINMUX_PAD0;
config_usart.pinmux_pad1 = EDBG_CDC_SERCOM_PINMUX_PAD1;
config_usart.pinmux_pad2 = EDBG_CDC_SERCOM_PINMUX_PAD2;
config_usart.pinmux_pad3 = EDBG_CDC_SERCOM_PINMUX_PAD3;
// route the printf output to the USART
stdio_serial_init(&usart_instance, EDBG_CDC_MODULE, &config_usart);
// enable USART
usart_enable(&usart_instance);
}
In Windows Device Manager I saw that there was an "Atmel Corp. EDBG USB Port (COM3)" listed under "Ports". However, one of the "Properties" of this port was listed as 9600 bits per second. I changed this from 9600 to 115200 to be consistent with the config section above.
Finally, I ran PuTTY.exe, set the Connection -> Serial settings to COM3 and 115200 baud, went to Session, selected the Serial connection type, and clicked the Open button. And, BAM, there's my printf output via PuTTY.