Why does static random access memory (SRAM) not require a memory controller? - embedded

I have been studying bootloaders, and most sources explain that on most chips there is ROM code that tells the chip where to go after it powers up, and that this ROM code then loads a small chunk of code into SRAM.
My question is: DRAM requires a controller to run, but why doesn't SRAM? Who controls the SRAM, or how is it controlled?
Also, what happens once the system is done with the SRAM and things are running from DRAM?
I do not know yet whether this makes sense, but it would be best if you could answer from the perspective of U-Boot and Linux.

Both need controllers; DRAM, however, needs to be refreshed periodically to keep its state (stored in capacitors), unlike SRAM, which stores its state in latch circuits.
That means that if you want to keep the content of the memory after a reset (from Linux or U-Boot for example), you must have configured the DRAM controller to "auto refresh" the memory during the reset step. There is no such need with SRAM.
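As a concrete illustration of that "auto refresh during reset" step, here is a minimal sketch in C. The register name, bit value, and the idea of a single mode register are invented for illustration (real controllers such as Atmel's SDRAMC have their own documented layouts), and the hardware register is simulated as a plain variable so the sketch is runnable; on real hardware it would be a volatile MMIO access.

```c
#include <stdint.h>

/* Hypothetical DRAM controller register, simulated as a plain variable.
 * On real hardware this would be a volatile pointer to a vendor-specific
 * MMIO address. */
static uint32_t SDRAMC_LPR;             /* low-power / refresh mode register */
#define SDRAMC_LPR_SELF_REFRESH 0x1u    /* assumed "self-refresh" mode value */

/* Before asserting a reset that must preserve DRAM contents, put the
 * SDRAM into self-refresh so the memory keeps refreshing itself while
 * the rest of the system (and the controller's normal arbitration)
 * goes through reset. SRAM needs no equivalent step. */
static void dram_enter_self_refresh(void)
{
    SDRAMC_LPR = SDRAMC_LPR_SELF_REFRESH;
    /* A real driver would also poll a status bit to confirm the mode
     * change took effect before triggering the reset. */
}
```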

Generally when you are referring to SRAM from bootloader perspective, it is internal RAM which is accessible by the controller. This RAM is accessed by the controller using an AHB/AXI bus (for ARM based devices). There might be a memory bridge which converts the signals from AHB/AXI bus to memory bus. So speaking from a software point of view, it is transparent, no specific software configuration is required to access this RAM.

... then ROM code load a small chunk of code into the SRAM.
That is a common procedure with some SoCs, but it's not required. There are alternate boot schemes.
Etrax SoCs that used CRIS processors (which are now out of production) required the DRAM parameters to be stored in nonvolatile memory (NVM). The embedded ROM boot code accessed this NVM, and initialized the DRAM controller. The ROM boot code was thus capable of directly booting a Linux kernel.
Some ARM SoCs have a Boot Memory Selector (BMS) pin (e.g. Atmel AT91SAM9xxx and Microchip SAMA5Dx) that can disable the internal ROM code, and has the processor execute code after a reset from an external NVM (e.g. NOR flash) which has execute-in-place (XIP) capability. Such a boot scheme could be customized to initialize the external DRAM, and then load U-Boot or even a Linux kernel.
My question is: DRAM requires a controller to run, but why doesn't SRAM?
Who controls the SRAM, or how is it controlled?
DRAM requires a controller because this type of memory technology requires periodic refreshing. The DRAM controller needs to be programmatically initialized before the DRAM can be accessed. One of the functions of the boot code that is loaded into SRAM is to perform this initialization of the DRAM controller.
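The shape of that SRAM-resident boot stage can be sketched in C: initialize the DRAM controller, then copy the next stage into DRAM and jump to it. All register names, addresses, and timing values below are invented for illustration (every SoC documents its own sequence), and the registers are simulated as plain variables so the sketch is runnable on a host.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical DRAM controller registers, simulated as variables here;
 * real boot code would write volatile MMIO registers per the vendor's
 * documented initialisation sequence. */
static uint32_t DDRC_TIMING;   /* simulated timing register  */
static uint32_t DDRC_CTRL;     /* simulated control register */
#define DDRC_CTRL_ENABLE 0x1u

static void dram_controller_init(void)
{
    DDRC_TIMING = 0x00440302u;     /* illustrative CAS/RAS timing values */
    DDRC_CTRL   = DDRC_CTRL_ENABLE;
}

/* Once DRAM is usable, copy the next stage (e.g. U-Boot proper) from
 * SRAM or flash into DRAM; the final step on real hardware would be a
 * jump to the copied image. */
static void load_next_stage(void *dram_base, const void *image, size_t len)
{
    dram_controller_init();
    memcpy(dram_base, image, len);
    /* real code would then do: ((void (*)(void))dram_base)(); */
}
```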
Interfacing SRAM is, by comparison, far more straightforward. Normally there is no "SRAM controller": the control logic needed to interface SRAM typically does not reach the level of complexity that would require a "controller". For instance, I've used an SBC that had its Z80 microprocessor directly connected to the SRAM (HM6264) and EPROM (MBM2764) memory ICs, plus some logic for address decoding.
The "SRAM controller" found on a modern SoC is primarily a buffered interface for external SRAM with the internal system bus. The internal SRAM of the SoC does not require any software initialization, and would be accessible immediately after a reset.
Also, what happens once the system is done with the SRAM and things are running from DRAM?
Typically the internal SRAM is left unused when it is not included as part of the memory that the Linux kernel manages. I don't know if that is due to any technical reasons such as virtual memory or caching issues, or oversight, or desire for the simplicity of homogeneous memory.
For some SoCs the amount of internal SRAM is so small (e.g. 8 KB in Atmel AT91SAM926x) that the effort to utilize it in the kernel could be deemed to have a poor cost-to-benefit trade-off.
See this discussion regarding a kernel patch for SRAM on an Atmel/Microchip SAMA5D3x.
A device driver could still utilize the internal SRAM as its private memory region for high-speed buffers. For instance there was a kernel patch to use the SRAM to hold Ethernet transmit packets to avoid transmit underrun errors.

Related

Cortex-M3 External RAM Region

I'm currently researching topics such as RAM/ROM/Stack/Heap and data segments etc.
I was looking at the ARM Cortex-M3 memory map and saw the region labeled "External RAM".
According to the data sheet of a random Cortex-M3 STM32 MCU, the external RAM region is mapped from 0x60000000 to 0x9FFFFFFF, so it is quite large!
I couldn't find a definitive answer about how this region is actually used.
I imagine you would have an external SRAM and you would choose between two options.
(1) Read via the SPI interface into a local buffer (stack), then load that local buffer into the external RAM region. This option seems to have a lot of negative consequences, such as hogging the CPU and temporarily growing the stack if the requested data is very large.
(2) Use DMA to transfer from the SPI interface into the external RAM region.
Now I can't understand why you would map the data to this specific address range. What are the advantages? Why don't you just place the data directly in that huge memory region?
Now I'm asking this question because I have a slight feeling I have completely missed the point of what the External RAM region really is.
-Edit-
In the data sheet of the STM32 device linked above, the memory region "External RAM" is marked as reserved. My conclusion is that the memory regions listed by ARM show the full potential of a 32-bit MCU. That I stated the external RAM region "is quite large!" does not necessarily mean that this is the "real" size of that region, if it is even used; it depends on what the vendor can physically achieve within the MCU hardware, and I imagine they would limit hardware capabilities to be competitive on price, power consumption, etc.
I imagine you would have an external SRAM and you would choose between two options.
(1) Read via the SPI interface into a local buffer (stack), then load that local buffer into the external RAM region. This option seems to have a lot of negative consequences, such as hogging the CPU and temporarily growing the stack if the requested data is very large.
(2) Use DMA to transfer from the SPI interface into the external RAM region.
(2) Utilize a DMA and transfer from the SPI interface into the external ram region.
None of the above. External memory on a plain SPI bus is not memory mapped: if you have an SPI memory, it is not mapped to that region, it is simply an SPI device, and the "address" is simply an offset from the start of the memory device itself. MCUs with a Quad-SPI or Octo-SPI controller, however, can memory-map such devices. QSPI RAM is not that common and is relatively expensive; QSPI is more commonly used for flash memory.
The external memory region can be used by STM32 parts with an FSMC (Flexible
Static Memory Controller) or an FMC (Flexible Memory Controller), or and mentions a QPSI interface. The latter FMC SDRAM, and is generally available on the higher end parts. Apart from the QSPI and NAND flash, these interfaces require using the GPIO EMIF (external memory interface) alternate function to create an address and data bus. So it generally requires parts with high pin count to accommodate. The EMIF can be configured for 8, 16 or 32bit data bus for reduced pin count (and slower access).
Now I can't understand why you would map the data to this specific address range. What are the advantages? Why don't you just place the data directly in that huge memory region?
Since it was precipitated by your earlier misconception this question is perhaps redundant, but memory that exists in the memory map can be used to store data accessed as regular variables rather than transferring to and from internal buffers, and it can be used as an execution region - code can be loaded to and executed directly from such memory.
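As a sketch of the "regular variables" point: once external RAM is in the memory map, the linker can simply place ordinary objects there and the compiler emits normal loads and stores. The section name `.extram` is an assumption for illustration (a real project's linker script would map such a section to the FMC address range, e.g. starting at 0x60000000); on a host this just becomes another data section, so the sketch still runs.

```c
#include <stdint.h>

/* An object placed in external RAM by the linker. The ".extram"
 * section name is illustrative; the matching linker-script entry is
 * what would actually locate it in the 0x60000000 region on an STM32. */
__attribute__((section(".extram")))
static uint8_t frame_buffer[320 * 240 * 2];  /* e.g. a QVGA frame buffer */

static void clear_frame_buffer(void)
{
    /* Plain array accesses: no explicit transfers, no driver calls -
     * the bus hardware handles the external accesses transparently. */
    for (uint32_t i = 0; i < sizeof frame_buffer; i++)
        frame_buffer[i] = 0;
}
```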
Now I'm asking this question because I have a slight feeling I have completely missed the point of what the External RAM region really is.
Self awareness is a skill. That is known as conscious incompetence and is a motivator for learning.
In the data sheet of the STM32 device linked above, the memory region "External RAM" is marked as reserved. My conclusion is that the memory regions listed by ARM show the full potential of a 32-bit MCU. That I stated the external RAM region "is quite large!" does not necessarily mean that this is the "real" size of that region, if it is even used; it depends on what the vendor can physically achieve within the MCU hardware, and I imagine they would limit hardware capabilities to be competitive on price, power consumption, etc.
No, it is largely about the number of pins available for an address bus (except for QSPI). The external memory is a matter for the board design - it is not something the MCU vendor decides must be present. The constraint is a maximum, not a required amount of physical memory. The STM32 FMC supports a range of external memory types and sizes; you can have up to 512 Mb of SDRAM, for example. The space available for static memories (NOR/PSRAM/SRAM) is significantly larger than the typical size of such memories.

Most common firmware update protocol

I am supposed to pick (and maybe implement) the firmware update protocol/software/procedure for an embedded device without USB and with limited program memory. The device will work autonomously most of the time, but once in a while a technician will come and update the firmware.
What would be the most common choice for the update protocol if I wanted to use RS232 or CAN?
The requirements for the update are: complete after interrupted update (boot loader will be needed, I assume), small memory footprint, merge user settings with the newly introduced user data fields (in EEPROM), backup the previous version of the firmware with the possibility to roll the update back, safely update the boot loader itself.
It would be nice if the implementation of the boot loader and update client software existed already too (at least for Windows).
And just out of curiosity - are there any good alternatives to DFU for devices with USB?
Thanks in advance
I am not sure about "most common"; I am not sure anyone could answer that authoritatively or whether the answer is even useful. However, I can tell you that I have implemented XMODEM-CRC/XMODEM-1K on a number of devices (ARM7, ARM Cortex-M, PIC24, TI C55xx for example) in less than 4 Kbytes. The bootloader sends an XMODEM start request on each port that is to support update; then, for each port, if a response is received within a short timeout (a few tens of milliseconds), the transfer continues. If no response is received, the application is started normally.
complete after interrupted update (boot loader will be needed, I assume)
The approach I have taken is to not program the start address to flash immediately on receipt, but to copy it aside and program it last. The bootloader checks the start address on start-up, and if it is 0xFFFFFFFF (i.e. not programmed) the transfer did not complete, so the bootloader restarts, continuously polling for an XMODEM start.
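The completeness check above relies on the fact that erased NOR flash reads back as all ones. A minimal sketch of the boot-time test (the erased-word value is standard for NOR flash; the function name is mine):

```c
#include <stdbool.h>
#include <stdint.h>

/* Erased flash reads back 0xFF bytes, so an unprogrammed start-address
 * word is 0xFFFFFFFF. Because the start address is written *last*, any
 * other value proves the whole transfer completed before power was lost. */
#define FLASH_ERASED_WORD 0xFFFFFFFFu

static bool app_image_complete(uint32_t start_address_word)
{
    return start_address_word != FLASH_ERASED_WORD;
}
```

If the check fails, the bootloader stays in its XMODEM-polling loop instead of jumping to a half-written application.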
merge user settings with the newly introduced user data fields (in EEPROM),
In my case I have used Intel HEX files, but EEPROM memory is not commonly memory mapped. You could solve that by using a proprietary data format or set the address of the HEX data to an area that is invalid on the processor which the bootloader code will recognise as data to be sent to the EEPROM instead.
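The "invalid address window" convention can be sketched as a simple classifier that the bootloader runs on each HEX record's address. The window base and size here are assumptions purely for illustration; the only requirement is that the range is not decoded by the processor, so it can never collide with real flash.

```c
#include <stdint.h>

/* Addresses in this (hypothetical) window are not valid on the
 * processor, so the bootloader treats records placed there by the
 * build system as EEPROM data rather than flash data. */
#define EEPROM_WINDOW_BASE 0xF0000000u
#define EEPROM_WINDOW_SIZE 0x10000u

typedef enum { DEST_FLASH, DEST_EEPROM } record_dest_t;

static record_dest_t classify_record(uint32_t hex_address)
{
    if (hex_address >= EEPROM_WINDOW_BASE &&
        hex_address <  EEPROM_WINDOW_BASE + EEPROM_WINDOW_SIZE)
        return DEST_EEPROM;
    return DEST_FLASH;
}
```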
backup the previous version of the firmware with the possibility to roll the update back,
That is a function of the bootloader implementation rather than the protocol. It of course requires that you have space to store two copies of the application. The unused copy could possibly be zipped, but incorporating decompression in the bootloader will increase its size. A perhaps simpler and least costly approach would be to have the bootloader support output of the current application image via XMODEM allowing the back-up to be stored on the host. However by doing that you are potentially enabling a third party to access your code.
safely update the boot loader itself.
Again, that is a function of your bootloader rather than the protocol. If the code runs from RAM (i.e. the bootloader is copied from ROM to RAM and executed), then it is straightforward. In this case it is safest, if possible, to load the entire bootloader data into RAM before programming flash memory, in order to minimise the time the target has no bootloader, and so that successful programming does not rely on the host connection being maintained throughout.
If however the bootloader runs from flash, replacing it from the bootloader itself is not possible. Instead you might load an application that the bootloader runs and which replaces the bootloader before loading (or reloading) the final application.
It would be nice if the implementation of the boot loader and update client software existed already too (at least for Windows).
Any terminal emulator software such as TeraTerm, Hyperterminal, PuTTY etc. will support XMODEM transfer. Implementing your own custom XMODEM sender is relatively straightforward with XMODEM source code widely available.
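The only non-trivial piece in a custom XMODEM sender is the block CRC. XMODEM-CRC/XMODEM-1K use the well-known 16-bit CRC with polynomial 0x1021, initial value 0, and no bit reflection; a straightforward bit-by-bit implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* CRC-16/XMODEM: polynomial 0x1021, init 0x0000, MSB-first, no final
 * XOR. The sender appends this over each 128- or 1024-byte data block. */
static uint16_t xmodem_crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;          /* fold byte into top */
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

The standard check value is 0x31C3 for the nine ASCII bytes "123456789", which is a convenient self-test when porting the routine.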
And just out of curiosity - are there any good alternatives to DFU for devices with USB?
What I have done is implement a CDC/ACM device-class USB stack in the bootloader so that it appears to the host as a serial port, and then use the same XMODEM code as before to do the data transfer. This increases the size of the bootloader; in my case to about 12 Kbytes. It was implemented using a stack and CDC/ACM (virtual COM port) app note provided by the chip vendor. Strictly speaking, for this you will need a USB vendor ID (VID) registered to your company; you should not use just any old ID.

Bootloader Working

I am working on the U-Boot bootloader. I have some basic questions about the functionality of a bootloader and the application it is going to handle:
Q1: As per my knowledge, a bootloader is used to download the application into memory. On the internet I also found that the bootloader copies the application to RAM and the application then runs from RAM. I am confused about the working of the bootloader: when the application is provided to the bootloader through serial or TFTP, what happens next - does the bootloader copy it to RAM first, or does it write directly to flash?
Q2: Why is there a need for the bootloader to copy the application to RAM and then run it from RAM? What difficulties will we face if our application runs from flash?
Q3: What is the meaning of the statement "My application is running from RAM/FLASH"? Does it mean that our application's .text (or .code) segment is in RAM/FLASH? And are we not concerned about the .bss section because it is designed to be in RAM?
Thanks
Phogat
When any hardware system is designed, the designer must consider where the executable code will be located. The answer depends on the microcontroller, the included memory types, and the system requirements. So the answer varies from system to system. Some systems execute code located in RAM. Other systems execute code located in flash. You didn't tell us enough about your system to know what it is designed to do.
A system might be designed to execute code from RAM because RAM access times are faster than flash so code can execute faster. A system might be designed to execute code from flash because flash is plentiful and RAM may not be. A system might be designed to execute code from flash so that it boots more quickly. These are just some examples and there are other considerations as well.
RAM is volatile so it does not retain code through a power cycle. If the system executes code located in RAM then a bootloader is required to obtain and write the code to RAM at powerup. Flash is non-volatile so execution can start right away at powerup and a bootloader is not necessary (but may still be useful).
Regarding Q3, the answer is yes. If the system is running from RAM then the .text will be located in RAM (but not until after the bootloader has copied it to there). If the system is running from flash then the .text section will be located in flash. The .bss section is variables and will be in RAM regardless of where the .text section is.
Yes, in general a bootloader boots the system, but it might also provide a mechanism for interrupting the default boot path and allow alternate firmware to be downloaded and run instead, as well as other features (like flashing).
Traditional ROM had a traditional RAM-like interface: address, data, chip select, read/write, etc. You can still buy ROM that way, but it is cheaper from a pin real-estate perspective to use something SPI- or I2C-based, which is slower - not desirable to run from, but tolerable to read once and then run from RAM.
Newer flash technologies can have problems with read-disturb: if your code is in a tight loop reading the same instructions, or for any other reason the flash is being read too fast, the charge can drop such that a read returns the wrong data, potentially causing the program to change course or crash.
Also, your PC and other Linux platforms are used to copying the kernel from NV storage (hard disk) to RAM and then running from there, so copying from flash to RAM and running from RAM has a comfort level, and is often faster than running from flash. So there are many potential reasons not to use flash; but depending on the system it may be possible to run from flash just fine (on some systems the flash in question is not directly accessible and not executable - of course SOME ROM in such a system needed to be executable/bootable).
It simplifies the coding challenges if you program the flash with something that is already in RAM. You can create and debug, one time, the code that reads from RAM and writes to flash, and reads from flash and writes to RAM. Done. Now you can work on separate code that moves data from serial to RAM, or from RAM to serial. Done. Then work on code that does the same over Ethernet or USB or whatever. Done. You don't have to invent a protocol or solve the timing problem.
Flash writing is very slow, and even XMODEM at a moderate speed can be way too fast, so you have to buffer the data in RAM anyway; you might as well make the tasks completely separate. Instead of an XMODEM (or any other serial-based) flash loader with a big RAM-based FIFO, just move the data to RAM, then separately go from RAM to flash. The same goes for other interfaces. It is technically possible to buffer the data and give the illusion of going from the download interface straight to flash, and depending on the protocol it is technically possible to hold off the sender so that as little as one flash page of RAM is needed before programming flash.
With the older parallel flashes you could do something pretty cool which I don't think most people figured out. When you stop writing to a flash page for some known period of time, the flash automatically starts to program that page, and you have to wait 10 ms or so before it is done. What folks assumed was that you had to program sequential addresses and get the new data for the next address within that period, which would demand high serial port speeds, etc.; the reality is that you can pound the same address over and over again with the same data and the flash won't start to program the page, so the download interface can be infinitely slow. Serial flashes work differently and either don't need tricks or have different tricks.
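The "RAM first, flash second" split described above can be sketched in a few lines: the download path only fills a RAM buffer, and a separate routine walks that buffer one flash page at a time. `PAGE_SIZE` and `flash_program_page()` are illustrative stand-ins (here the "flash" is simulated with an array so the sketch runs on a host; a real implementation would issue the part's slow program operation and poll for completion).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 256u            /* illustrative flash page size */

static uint8_t flash_sim[4096];   /* simulated flash for this sketch */

/* Stand-in for the real, slow hardware page-program operation. */
static void flash_program_page(uint32_t page_addr, const uint8_t *src)
{
    memcpy(&flash_sim[page_addr], src, PAGE_SIZE);
}

/* Completely separate from the download code: walk the RAM image one
 * page at a time. len is assumed to be a multiple of PAGE_SIZE. */
static void program_image_from_ram(const uint8_t *ram, size_t len)
{
    for (size_t off = 0; off < len; off += PAGE_SIZE)
        flash_program_page((uint32_t)off, ram + off);
}
```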
RAM/FLASH is not an industry term. It likely means that .text is in ROM (flash) and .data and .bss are in RAM. A copy of the initial state of .data will probably be in flash as well, and is copied to RAM before main() is called; likewise .bss is zeroed before main() is called. Look at crt0.S for most platforms in the GNU sources (glibc, or is it gcc, I don't know) to get the gist of how the bootstrap works in a generic fashion.
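The gist of that crt0 bootstrap can be sketched in C. In a real bare-metal build the three regions would be linker-provided symbols (names like `_sdata`/`_edata`/`_sbss` vary between toolchains); here they are simulated with arrays so the sketch runs on a host.

```c
#include <stdint.h>
#include <string.h>

/* Simulated memory regions. On real hardware these would be
 * linker-script symbols marking the .data load image in flash and the
 * .data/.bss run regions in RAM. */
static const uint8_t flash_data_load[8] = {1, 2, 3, 4, 5, 6, 7, 8};
static uint8_t ram_data[8];                     /* .data run region   */
static uint8_t ram_bss[8]  = {0xAA, 0xAA, 0xAA, 0xAA,
                              0xAA, 0xAA, 0xAA, 0xAA};  /* pretend-garbage */

/* What crt0 does before calling main(): copy the initial values of
 * .data from flash to RAM, then zero .bss. */
static void crt0_init_memory(void)
{
    memcpy(ram_data, flash_data_load, sizeof ram_data);
    memset(ram_bss, 0, sizeof ram_bss);
}
```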
A bootloader is not required to run Linux or other operating systems - you don't NEED U-Boot, but it is quite useful. Linux is pretty easy: you copy the kernel and root file system, set some registers or some tags in memory or both, then branch to the entry point in the kernel, and Linux takes over from there. Because Linux is so complicated, it is desirable to have a complicated bootloader that can capitalize on high-speed interfaces like Ethernet (rather than being limited to serial or slower).
I would add something regarding your question Q2.
Q2: Why is there a need for the bootloader to copy the application to RAM and then run it from RAM? What difficulties will we face if our application runs from flash?
It is not only about having SPI or similar serial external code memory (which is not that common anyway).
Even external ROM/FLASH/EPROM connected to the usual high-speed parallel bus will prevent a system from running at maximum clock (with zero wait states), even on relatively slow MCUs, due to the external memory access time. You would need a 10 ns flash access time for a 100 MHz clock, which is not easy to get (if economically possible at all). And you would agree that 100 MHz is not such a brain-spinning speed any more :-)
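That access-time arithmetic is worth making explicit: with clock period 1/f and flash access time t_acc, the flash needs ceil(t_acc / t_clk) clock cycles per access, i.e. ceil(t_acc / t_clk) - 1 wait states. A small helper (my own formulation of the calculation above):

```c
#include <stdint.h>

/* Wait states needed for a flash with access time t_access_ns at a
 * CPU clock of clk_hz: cycles = ceil(t_access * f), wait states are
 * whatever exceeds the single cycle a zero-wait-state access takes. */
static uint32_t flash_wait_states(uint32_t clk_hz, uint32_t t_access_ns)
{
    uint64_t cycles = ((uint64_t)t_access_ns * clk_hz + 999999999u)
                      / 1000000000u;            /* round up to whole cycles */
    return (cycles > 0) ? (uint32_t)(cycles - 1) : 0;
}
```

So a 10 ns flash at 100 MHz needs zero wait states, while a more realistic 50 ns part at the same clock needs four - which is exactly why running from RAM, caching, or wide prefetching becomes attractive.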
That is why many MCU/CPU architectures do tricks such as reading multiple instructions at once, having an internal cache, or whatever else is needed to compensate for slow external code memory. Mostly, only older 8-bit architectures can execute code directly from the flash memory ('in place') without such measures.
Even if your only code memory is the internal flash, something needs to be done to speed it up. Take a look, for example, at this article:
http://www.iqmagazineonline.com/magazine/pdf/v_3_2_pdf/pg14-15-18-19-9Q6Phillips-Z.pdf
It describes how the ARM7 incorporated something called the MAM (Memory Accelerator Module). It is a good read, and you will find there some measures to speed up code memory access for the specific ARM7 architecture (the same goes for most others):
Limit maximum clock frequency (from 80 MHz to about 20 MHz for the example in the article)
Insert wait-cycles during flash accesses
Use an instruction cache
Copy the program code from flash to RAM
Obviously, if the instruction cache was not an option (too small, or the clock too high) you are really left only with execution from the RAM, after relocating the code there at the start up.
There is also an option to run only a specific section of the code from RAM, which can be specified to the linker. For DSP (Digital Signal Processing) systems, there was really no option to run from EPROM/FLASH even in the old days, with clocks of only a few tens of MHz, let alone now.
Another issue is debugging: the options for debugging code placed in ROM, or even flash, are very limited (on most systems you have to move a section of the code to RAM to be able to set a breakpoint).
Regarding Q2, one of the difficulties you may face executing from Flash is another code update. If you are executing from the same block of Flash you are trying to update, the system will crash. This depends on your system architecture (how your application and bootloader are organized in Flash) but may be particularly hard to avoid if you are trying to update the bootloader itself.
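A common guard against that hazard is to check, before any erase or write, that the target range does not overlap the region the current code executes from. A minimal sketch (the half-open-interval overlap test is standard; the idea of applying it here is mine):

```c
#include <stdbool.h>
#include <stdint.h>

/* True if [a_start, a_start+a_len) and [b_start, b_start+b_len)
 * share at least one byte. An updater would refuse (or defer to a
 * RAM-resident helper) any write whose target overlaps the flash
 * region currently being executed. */
static bool ranges_overlap(uint32_t a_start, uint32_t a_len,
                           uint32_t b_start, uint32_t b_len)
{
    return a_start < b_start + b_len && b_start < a_start + a_len;
}
```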

Writing to flash is I/O mapped while reading from flash is memory mapped... what could be the reasons?

I am using the Broadcom CFE (Common Firmware Environment) bootloader; the SoC is from Broadcom, and it uses a serial NOR flash (N25Q032) as the bootstrap device. To read from the flash it uses a memory-map technique, while to write it uses the SPI interface (I/O mapped).
The reasons behind such a design seem to me to be:
I/O-mapped reading/writing is a blocking call, so reading is implemented as memory mapped to keep the CPU free.
There is no implementation to detect that the mapped area in RAM has been modified and to update the flash accordingly (i.e. memory-mapped writes are not implemented), and it is comparatively easy to use the SPI interface (I/O mapped) to write the flash.
Please explain what could be the reason behind such a design.
By definition the serial memory cannot itself be memory mapped. In this case I imagine (not having looked at the datasheet) that the NOR flash memory controller reads data into a dedicated memory mapped page in the micro-controller, so it is reading blocks serially into random-access memory.
When writing, the data is already in random-access memory, the NOR flash memory controller serializes that directly to the memory device. It would make little sense to copy it from one memory mapped area to another just to then serialize it.

cortex a9 boot and memory

I am a newbie starting out in microcontroller programming. The chip of interest here is the Cortex-A9. From my reading, at reset or power-up there has to be code at 0x00000000. My questions, though they may sound trivial, will help me put some concepts in perspective.
Does the memory address 0x00000000 reside in ROM?
What happens right after the code is read from that address?
Should there be some sort of boot-loader present & if so at what address should this be in & Should it also be residing in ROM?
Finally, at what point does the kernel kick in & where does the kernel code reside?
ARM sells cores not chips, what resides at that address depends on the chip vendor that has bought the ARM core and put it in their chip. Implementations vary from vendor to vendor, chip to chip.
Traditionally an ARM will boot from address zero, more correctly the reset exception vector is at address zero. Unlike other processor families, the traditional ARM model is NOT a list of addresses for exception entry points but instead the ARM EXECUTES the instruction at that address, which means you need to use either a relative branch or a load pc instruction. The newer cortex-m series, which are thumb/thumb2 only (they cannot execute ARM (32 bit) instructions) uses the traditional (non-ARM) like list of addresses, also the zero address is not an exception vector, it is the address to load in the stack pointer, then the second entry is reset and so on. Also the cortex-m exception list is different, that family has like 128 individual interrupts, where the traditional ARM has two, fast and normal. There is a recent cortex-m based question or perhaps phrased as thumb2 question for running linux on a thumb2 ARM. I think the cortex-m implementations are all microcontroller class chips and only have on chip memory in the tens of kbytes, basically these dont fall into the category you are asking about. And you are asking about cortex-a9 anyway.
A number of cores, or maybe all of them, have a boot option where the boot address can be 0x00000000 or an alternate address like 0xFFFF0000. Using the latter would be very confusing for ARM users, but it provides the ability, for example, to have a ROM at one address and a RAM at another, allowing you to boot from ROM on power-up and then switch the exception table into RAM for run-time operation. You probably have a chip with a core that can do this, but it is up to the chip vendor whether to use these edge-of-the-core features or to hardwire them to some setting and not provide you that flexibility.
You need to look at the datasheet/docs for the chip in question. Find out the name of the ARM core - you mentioned the Cortex-A9. Ideally you want to know the revision as well (an r0p0 kind of thing), then go to ARM's website and find the TRM (Technical Reference Manual) for that core. You will also want a copy of the ARM ARM, the ARM Architectural Reference Manual. The (traditional) ARM exception vectors are described in the ARM ARM, along with quite a ton more info. You also need the chip vendor's documentation; look into their boot scheme. Some will point address zero to the boot PROM on power-up; then the bootloader needs to do something - flip a bit in a register - and the memory controller switches address 0 to RAM. Some might have address 0 always configured as RAM and some other address always configured as ROM, let's say 0x80000000 for example, and the chip will copy some items from ROM to RAM for you before boot, or the chip may simply have the power-up setting for the reset vector be a branch to ROM, leaving it up to the bootloader to patch up the vector table. As many different schemes as you can think of, it is likely someone has tried them, so you have to study the chip vendor's documentation or sample code to understand. Basically, the answer to your ROM question is: it depends, and you have to check with the chip vendor.
The ARM TRM for the core should describe the strap options on the core, if any (like being able to boot from an alternate address); whether those strap options are actually connected is up to the vendor. The ARM ARM is not really going to get into that the way the TRM does. A vendor worth buying from, though, will have documentation and/or code of their own that shows what their ROM-based boot strategy is.
For a system destined to be a linux system you are going to have a bootloader, some non-linux code (very much like the bios on your desktop/laptop) that brings up the system and eventually launches linux. Linux is going to need a fair amount of memory (relative to microcontroller and other well known ARM implementations), that ram may end up being sram or dram and the bootloader may have to initialize the memory interface before it can start linux up. There are popular bootloaders like redboot and uboot. both are significant overkill, but provide features for developers and users like being able to re-flash linux, etc.
ARM Linux has ATAGs (ARM TAGs). You can use both the traditional Linux command line to pass boot information (like what address to find the root file system at) and ATAGs. ATAGs are structures in memory whose address is passed in a register (r2) when you branch from the bootloader to Linux. The general concept is: the chip powers up and boots from ROM or RAM; the boot code prepares the RAM so it is ready to use; Linux might want/need to be copied from ROM to RAM, and the root file system, if separate, might be copied somewhere else in RAM. ATAGs are prepared to tell Linux where to decompress itself if need be, as well as where to find things like the command line and the root file system; some registers are prepared as parameters passed to Linux, and lastly the bootloader branches to the entry point of the Linux kernel.
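The ATAG layout is simple enough to sketch: each tag is a header (size in words, tag ID) followed by tag-specific words, and the list ends with ATAG_NONE. The tag IDs below are the real ATAG constants from the ARM Linux boot protocol; the memory base and size are illustrative.

```c
#include <stdint.h>

/* Real ATAG identifiers from the ARM Linux boot protocol. */
#define ATAG_CORE 0x54410001u
#define ATAG_MEM  0x54410002u
#define ATAG_NONE 0x00000000u

static uint32_t atag_area[32];   /* stands in for a fixed RAM address */

/* Build a minimal ATAG list: empty CORE tag, one MEM tag describing
 * the DRAM, then the NONE terminator. The bootloader would place the
 * address of this list in r2 before branching to the kernel. */
static uint32_t *setup_atags(void)
{
    uint32_t *p = atag_area;
    p[0] = 2; p[1] = ATAG_CORE;      /* header-only CORE tag (2 words) */
    p += 2;
    p[0] = 4; p[1] = ATAG_MEM;       /* MEM tag: header + size + start */
    p[2] = 0x04000000u;              /* 64 MB of RAM (illustrative)    */
    p[3] = 0x80000000u;              /* DRAM base (illustrative)       */
    p += 4;
    p[0] = 0; p[1] = ATAG_NONE;      /* list terminator                */
    return atag_area;
}
```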
You have to have boot code available at the address where the hardware starts executing.
This is usually accomplished by having the hardware map some sort of flash or boot ROM to the boot address and start running from there.
Note that in microcontrollers the code that starts running at boot has a pretty tough life - no hardware is initialized yet, and by no hardware I mean that even the DDR controllers that control access to RAM are not working yet... so your code needs to run without RAM.
After the initial boot code sets up enough of the hardware (e.g. sets up the RAM chips, TLBs, MACs, etc.), the bootloader runs.
In some systems, the initial boot code is just the first part of the boot loader. In some systems, a dedicated boot code sets things up and then reads the boot loader from flash and runs it.
The job of the boot loader is to bring the image of the kernel/OS into RAM, usually from flash or network (but can also be shared memory with another board, PCI buses and the like although that is more rare). Once the boot loader has the image of the kernel/OS binary in RAM it might optionally uncompress it, and hand over control (call) the start address of the kernel/OS image.
Sometimes the kernel/OS image is actually a small decompressor and a blob of compressed kernel.
At any rate, the end result is that the kernel/OS is available in RAM, and the boot loader, optionally through the piggyback decompressor, has passed control to it.
Then the kernel/OS starts running and the OS is up.