Mapping a VxWorks image onto RAM (BSP) - vxworks

Looking at a BSP package supplied with VxWorks shows the following memory mapping for the image: (from Pentium4)
Parameter RAM_HIGH_ADRS {
    NAME        Bootrom Copy region
    DEFAULT     (INCLUDE_BOOT_APP)::(0x00008000) \
                0x00108000
}

Parameter RAM_LOW_ADRS {
    NAME        Runtime kernel load address
    DEFAULT     (INCLUDE_BOOT_RAM_IMAGE)::(0x00508000) \
                (INCLUDE_BOOT_APP)::(0x00108000) \
                0x00308000
}
But this one looks strange to me: how can RAM_LOW_ADRS be greater than RAM_HIGH_ADRS?
From what I understand, the boot loader is supposed to be loaded at RAM_HIGH_ADRS and the VxWorks image at RAM_LOW_ADRS, so the boot loader should be located above the image.
Any ideas?

RAM High/Low are somewhat of a misnomer as you have discovered.
It really should be called RAM_VXWORKS_ADDR and RAM_BOOT_ADDR (or some such).
A lot of those names are historical in nature.
In 99% of cases, RAM_HIGH > RAM_LOW. But, depending on architecture, BSP and target, there might be an inversion.
In the end, it's just an address to load software. As long as there is no conflict or overlap, it's ok.
The vxWorks heap has nothing to do with RAM_LOW_ADRS/RAM_HIGH_ADRS per se.
The vxWorks heap (in a simplified view) runs from above the vxWorks image to the address returned by sysMemTop() - which is defined by the BSP and might run to the top of physical RAM (or not).
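As a minimal sketch, this is how a typical BSP defines those routines in its sysLib.c; the LOCAL_MEM_* and USER_RESERVED_MEM macros come from the BSP's config.h and vary per board, so treat this as an illustration rather than your BSP's exact code:

char * sysPhysMemTop (void)
    {
    /* top of physical RAM as configured for this board */
    return (char *) (LOCAL_MEM_LOCAL_ADRS + LOCAL_MEM_SIZE);
    }

char * sysMemTop (void)
    {
    static char * memTop = NULL;

    if (memTop == NULL)
        memTop = sysPhysMemTop () - USER_RESERVED_MEM;  /* keep reserved RAM out of the heap */

    return memTop;
    }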
In the normal situation, with vxWorks loading below the bootrom load address, the bootrom simply gets overwritten. That's not the case in your BSP, so you do lose some RAM space since the bootrom is not "reclaimed".

Related

Can you reuse an IOSurface that has been purged?

TL;DR: Is an IOSurfaceRef a valid surface to write to after it has been purged and its state changed to kIOSurfacePurgeableEmpty?
I'm trying to get a better understanding of what it means for an IOSurface to be purged. The only documentation I have come across is in IOSurfaceRef.h and the only sample code I've come across is in WebKit.
I'm using the command line tool memory_pressure to simulate a critical memory pressure environment for 10 seconds like so:
> memory_pressure -S -s 10 -l critical
I've written a very simple application that allocates 100 IOSurfaces with identical properties. When I use Instruments to measure the memory allocations, I see VM: IOSurface at roughly 6GB, which is about 64MB for each surface (4096 x 4096 x 4 bytes).
I then change the purgeable state of each IOSurface to kIOSurfacePurgeableVolatile and run the memory_pressure simulation.
Instruments still reports that I have 6GB of surfaces allocated. However, if I check the purgeable state of each surface, they are marked as kIOSurfacePurgeableEmpty.
So it looks like they were successfully purged, but the memory is still allocated to my application. Why is that and what condition are these surfaces in?
The header file states that I should assume they have "undefined content" in them. Fair enough.
But is the actual IOSurfaceRef or IOSurface * object still valid? I can successfully query all of its properties and I can successfully lock it for reading and writing.
Am I allowed to just reuse that object even though its contents were purged or do I have to discard that instance and create an entirely new IOSurface?
(macOS 10.14)
Yes, it's still usable. It's just that the pixel data has been lost.
Basically, when the system is under memory pressure, it would normally page data out to disk. Marking a purgeable object volatile allows the system to simply discard that data, instead. The app has indicated that while it's nice-to-have, it's not has-to-have, and can be recreated if necessary.
When it wants to work with the IOSurface again, the app should mark the object nonvolatile and check the old state. If it was empty, then the app should recreate the data.
Instruments reports that your app still has 6GB allocated because 6GB of its address space is still reserved for the IOSurfaces. But allocated does not necessarily mean backed by physical RAM or a swap file; it's just bookkeeping until the memory is actually used. Your app's resident set size (RSS) should shrink.
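A minimal sketch of that reuse pattern, assuming the plain C IOSurface API on macOS (the redraw step is just a placeholder):

#include <IOSurface/IOSurfaceRef.h>

void reuseSurface(IOSurfaceRef surface)
{
    uint32_t oldState = 0;

    /* The IOSurfaceRef itself remains valid; only the pixel data may be gone. */
    IOSurfaceSetPurgeable(surface, kIOSurfacePurgeableNonVolatile, &oldState);

    if (oldState == kIOSurfacePurgeableEmpty) {
        /* Contents were discarded under memory pressure: recreate them. */
        IOSurfaceLock(surface, 0, NULL);
        /* ... redraw or re-upload the pixel data here ... */
        IOSurfaceUnlock(surface, 0, NULL);
    }
}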

Is a software image loaded into non-volatile RAM when using tftpboot from U-boot?

I have a Xilinx development board connected to a RHEL workstation.
I have U-boot loaded over JTAG and connect to it with minicom.
Then I tftpboot the helloworld standalone application.
Where do these images go?
I understand I am specifying a loadaddr, but I don't fully understand the meaning.
When I run the standalone application, I get various outputs on the serial console.
I had it working correctly the first time, but then started trying different things when building.
It almost feels like I am clobbering memory, but I assumed anything tftp'd would be lost after a power cycle.
The issue still occurs through a power cycle, though.
Where do these images go?
The U-Boot command syntax is:
tftpboot [loadAddress] [[hostIPaddr:]bootfilename]
You can explicitly specify the memory destination address as the loadAddress parameter.
When the loadAddress parameter is omitted from the command, then the memory destination address defaults to the value of the environment variable loadaddr.
Note that several other U-Boot commands also use this loadaddr variable, such as "bootp", "rarpboot", "loadb" and "diskboot".
I understand I am specifying a loadaddr, but I don't fully understand the meaning.
When I run the standalone application, I get various outputs on the serial console.
The loadAddress is simply the start address in memory to which the transferred file will be written.
For a standalone application, this loadAddress should match the CONFIG_STANDALONE_LOAD_ADDR that was used to link this program.
Likewise the "go" command to execute this standalone application program should use the same CONFIG_STANDALONE_LOAD_ADDR.
For example, assume the physical memory of your board starts at 0x20000000.
To allow the program to use the maximum amount of available memory, the program is configured to start at:
#define CONFIG_STANDALONE_LOAD_ADDR 0x20000000
For convenient loading, define the environment variable (at the U-Boot prompt):
setenv loadaddr 0x20000000
Assuming that the serverip variable is defined with the IP address of the TFTP server, then the U-Boot command
tftpboot hello_world.bin
should retrieve that file from the server, and store it at 0x20000000.
Use
go 20000000
to execute the program.
I assumed after a power cycle anything tftp'd would be lost.
It should.
But what might persist in "volatile" memory after a power cycle is unpredictable. Nor can you be assured of a default value such as all zeros or all ones. The contents of dynamic RAM should always be presumed to be unknown unless you know it has been initialized and has been written.
Is a software image loaded into non-volatile RAM when using tftpboot from U-boot?
Only if your board has main memory that is non-volatile (e.g. ferrite core or battery-backed SRAM, which are not likely).
You can use the "md" (memory display) command to inspect RAM.
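For example (the addresses here are only illustrative), md.b 20000000 40 dumps 0x40 bytes starting at 0x20000000: right after a tftpboot you should see your image there, whereas after a cold power cycle the same range will typically contain garbage.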

VxWorks: Access Main memory Region

I am migrating code from Linux to VxWorks. The code requires opening physical/main memory and then mapping the physical memory to virtual memory using mmap.
In Linux, main memory is accessed by
fd = open("/dev/mem", O_RDONLY);
Can you please let me know how this can be accomplished in VxWorks?
Thanks in advance
It depends on which programming environment your migrated code will be running.
For kernel mode it is much easier: you can generally access anywhere in system memory in read-only mode, as long as the memory region is mapped in the page table. No special API is needed in your code to access the memory.
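For example, a minimal kernel-mode sketch (the base address here is purely illustrative and must correspond to a region that is actually mapped on your target):

#include <vxWorks.h>

#define PHYS_REGION_BASE  0x10000000    /* illustrative address only */

UINT32 readPhysWord (UINT32 offset)
    {
    /* plain pointer access; no mmap() equivalent is needed in kernel mode */
    volatile UINT32 * p = (volatile UINT32 *) (PHYS_REGION_BASE + offset);

    return *p;
    }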
For user mode (aka a Real-Time Process, only available starting from VxWorks 6.0), things are a little more complicated. You need to write a pair of code blocks, one operating in kernel mode and the other in user mode. Please refer to the comment block in the VxWorks source code for an example: vxworks-6.9/target/usr/src/os/mm/devMemLib.c (taking VxWorks 6.9 as an example).

u-boot : Relocation

This one is a basic question related to u-boot.
Why does the u-boot code relocate itself?
Ok, it makes sense if u-boot is executing from NOR flash or boot ROM space, but if it already runs from SDRAM, why does it have to relocate itself once again?
This question comes up frequently. Good answers sometimes too.
I agree it is handy to load the build to SDRAM during development. That works for me, I do it all the time. I have some special boot code in flash which does not enable MMU/cache. For my u-boot builds I switch CONFIG_SYS_TEXT_BASE between flash and ram builds. I run my development builds that way routinely.
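As an illustration (the board header and the addresses are hypothetical), the flash/RAM switch is just a different link address in the board configuration:

/* include/configs/myboard.h -- hypothetical board header */
#ifdef CONFIG_RAM_BUILD                       /* hypothetical development-build switch */
#define CONFIG_SYS_TEXT_BASE   0x20100000     /* link to load and run from SDRAM */
#else
#define CONFIG_SYS_TEXT_BASE   0x00000000     /* link to run from NOR flash */
#endif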
As a practical matter, handling re-initialization of the MMU/cache would be nontrivial, and U-Boot benefits, IMO, from the simplicity that comes from leaving such things out.
The tech lead at Denx has expressed his opinion. IIRC his other posts are more strongly worded than that one. I get the impression that he does not like to repeat himself.
Update: why relocate? Memory access is faster from RAM than from ROM; this matters particularly if the target has no instruction cache. Executing from RAM allows the flash to be reprogrammed; it also (more minor) allows software breakpoints with "trap" instructions; and it is closer to the target's normal mode of operation, so if, for example, burst reads from RAM are flaky, the failure will be seen at early boot.
U-boot has to reserve 3 regions in memory that store: 1) u-boot itself, 2) the uImage (compressed kernel), and 3) the uncompressed kernel. These 3 regions must be placed carefully to prevent conflicts.
However, the previous-stage boot loader (BL2 or BL1) that brings u-boot into DRAM doesn't know u-boot's plan for these 3 regions, so it can only load u-boot at a lower address in DRAM and jump to it. Then, after u-boot performs some basic initialization and detects that the current PC is not at the planned location, it calls its relocation function, which moves u-boot to the planned location and jumps to it.
The code in NOR flash must first initialize the SDRAM and then copy itself from NOR flash into SDRAM; once running from SDRAM, the MMU can be enabled and virtual address mapping started.

What does it mean to attach ROMFS in RAM?

I'm building a kernel for an ARM platform running uClinux 2.4, and under "General Setup" in the Linux configuration there is an option called "m68knommu-style attached romfs in RAM support". My ARM assembly skills are somewhat limited, but as far as I can tell, enabling this option causes the ROMFS to be copied to the end of the kernel's .bss section.
What is the purpose of this?
As you rightly indicate, this option causes the romfs attached to the kernel image to be relocated to the end of the .bss section. This allows the system to start from the romfs as its root filesystem.
The above isn't exactly correct; I believe I actually developed the change, if not I definitely used it. As noted, this feature offers support for a romfs filesystem concatenated to the kernel image -- both of which are placed in RAM. Then this option ensures the romfs filesystem will automatically have its size evaluated and be moved to a reserved area of RAM (as well as the appropriate pointers passed for mounting via the MTD RAM driver).
Without this change it is still possible to run the romfs out of RAM; you merely need to have your bootloader place it at a predetermined location and pass the appropriate kernel options. The big feature this change added was the ability to have a single, unified kernel+filesystem image, the way the Coldfire builds did.
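For illustration (the file names depend on your build system), the unified image is produced by simply concatenating the two pieces, e.g.
cat linux.bin romfs.img > image.bin
and at boot the kernel finds the attached romfs right after its own image and moves it past .bss as described above.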
Note that it only works if you have the appropriate changes in your head-platform.S, as I recall -- I think it may only be in place on the NetSilicon NS7520.