Is it possible to implement DISKCOPY to copy block by block from an eMMC volume to a USB volume?

I am using an STM32 with FATFS, SDMMC, and eMMC, and have created a FATFS volume on the eMMC.
I have also created a FATFS volume on the USBH (host mode). This also works fine.
The eMMC FATFS works fine. I then need to copy all files from the eMMC to the USB drive, but copying file by file from the eMMC via FATFS takes too long.
I think it would be faster to blindly copy memory block by block (512 bytes) from the eMMC to the USBH, so I implemented a routine to do so. The problem is that the copy fails after a few hundred blocks have been copied; the failure seems to be due to the USBH not responding.
My question is:
1- "Is is possible to copy block by block raw data from eMMC to USBH like I try to do?"
2- have anyone successfully doing so?

Yes, it is perfectly normal to blindly copy all the blocks of one storage device to another and to expect it to work.
The only catch is that the devices have to have the same block size, or else you have to at least pretend they do (e.g. treat each 4 kB physical block as eight 512-byte blocks). This is because many filesystem drivers always assume the block size is 512 bytes.
One other problem I have encountered in doing this is that devices can overheat (but this isn't a software problem).
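For reference, a raw copy over the FatFs diskio layer could look something like the sketch below. It assumes the eMMC is physical drive 0 and the USB MSC device is physical drive 1 (check your own diskio/ffconf mapping), that both use 512-byte sectors, that sector counts fit in a DWORD, and that the USB drive is at least as large as the eMMC. It is a sketch, not a drop-in implementation.

#include "ff.h"
#include "diskio.h"

#define EMMC_DRV  0U    /* assumed physical drive number of the eMMC    */
#define USB_DRV   1U    /* assumed physical drive number of the USB MSC */
#define CHUNK     64U   /* sectors per transfer; bigger chunks reduce
                           per-transfer USB overhead (32 KB buffer)     */

static BYTE buf[CHUNK * 512];

DRESULT raw_copy(void)
{
    DWORD total = 0, sect = 0;
    DRESULT res;

    if (disk_initialize(EMMC_DRV) & STA_NOINIT) return RES_NOTRDY;
    if (disk_initialize(USB_DRV)  & STA_NOINIT) return RES_NOTRDY;

    /* number of 512-byte sectors on the source device */
    if (disk_ioctl(EMMC_DRV, GET_SECTOR_COUNT, &total) != RES_OK)
        return RES_ERROR;

    while (sect < total) {
        UINT n = (total - sect > CHUNK) ? CHUNK : (UINT)(total - sect);

        res = disk_read(EMMC_DRV, buf, sect, n);
        if (res != RES_OK) return res;

        /* retry a few times: USB mass-storage devices sometimes stall
           briefly under sustained bulk writes */
        for (int tries = 0; ; tries++) {
            res = disk_write(USB_DRV, buf, sect, n);
            if (res == RES_OK) break;
            if (tries == 3) return res;
        }
        sect += n;
    }
    return disk_ioctl(USB_DRV, CTRL_SYNC, NULL);
}

The retry loop is only a guess at working around the USBH dropping out; if the host stack or the VBUS supply is the real culprit, a raw copy will hit the same wall as the file-by-file copy.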

Related

How does USB FAT32 copy files?

I was copying files between two flash drives, both Mosdart 32GB drives, when it occurred to me to ask how file transfer with USB drives works. Just about any time I copy large directories full of files between USB drives, I notice a distinctive wave pattern in the copy-speed graph.
This led me to believe that either USB or FAT32 has a buffer which fills up and must be emptied. I suspect that the data transfer speed rises after the buffer is emptied, then drops off when it fills up, and the cycle repeats.
Is this a design of USB or of FAT32? Is this actually a buffer causing this?

iPhone Memory Stick Windows Formatting (populating!) Q

SUMMARY: Cannot copy more than 32GB of files to a 128GB memory stick formatted under FAT32 or exFAT despite the fact that I can format the stick and ChkDsk is showing the correct results after formatting (and also when less than 32GB of files are on the stick). I cannot use NTFS because this stick is designed to transfer files to an iPhone and the app will not handle NTFS. See below for details.
DETAILS:
I have a 128GB memory stick which is designed to quickly transfer files between a computer and an iPhone. One end is a USB and the other plugs into the iPhone's lightning port. This particular type is extremely common and looks like a "T" when you unfold it (Amazon link: https://www.amazon.com/gp/product/B07SB12JHG ).
While this stick is not especially fast when I copy Windows data to it, the transfer rate to my iPhone is much better than the wireless alternatives.
Normally I'd format a large memory stick or USB drive in NTFS, but the app used to transfer files to my iPhone ("CooDisk") will only handle exFAT and FAT32. I've tried both. For exFAT formatting, I've tried both Windows 7 and 10, and for FAT32 I used a free product from RidgeCrop consulting (I can give you the link if you want).
As with all USB storage devices, my stick is formatted as a single active partition.
I do not have a problem formatting. After formatting, ChkDsk seems happy with both FAT32 and exFAT. The CooDisk app works fine with either. After formatting, all the space is ostensibly available for files.
My problem arises when populating the stick with files.
Whenever I get beyond 32GB in total space, I have various problems. Either the copy will fail, or ChkDsk will fail. (After running ChkDsk in 'fix' mode, every file created beyond the 32GB limit will be clobbered.) Interestingly, when I use the DOS copy command with "/v" (verify) it will flag an error for files beyond the 32GB limit, although DOS XCopy with "/v" keeps on going. GUI methods also die at 32GB.
Out of sheer desperation, I wrote a script that uses GNU's cp for Windows. Now I can copy more than 32GB of files and ChkDsk flags no errors. However files beyond the 32GB limit end up being filled with binary zeros despite the fact that they appear as they should in a directory or Windows file explorer listing. (Weird, isn't it?)
I have also tried various allocation unit sizes from 4K all the way up to 64K and attempted this with three different Windows OSs (XP, Win7, and Win10).
Let me emphasize: there is no problem with the first 32GB of files copied to the stick regardless of: whether I use exFAT or FAT32; my method of copying; and my choice of AU size.
Finally, there is nothing in these directories that would bother a FAT32 or exFAT system: (a) file and directory names are short (well under 100 characters); (b) directory nesting is minimal (no more than 5 levels); (c) files are small (nothing close to a GB); and (d) directories have relatively few files (nowhere close to 200, for those of you who recall the old FAT limit of 512 files per directory :)
The only platform I haven't yet tried is using an aging MacBook that someone gave to me. I'm not terribly good with Macs, but I would rather not be dependent on it (it's 13 years old, although MacBooks are built like tanks).
Also, is it possible that FAT32 and exFAT don't allow more than 32GB on an active partition? (I can find no such limitation documented anywhere; in fact, in my experience USB storage devices are always bootable, as was the original version of my stick.)
Any ideas??

What is the 'erased' value of a memory location on an SDHC card?

I am writing to a micro SD card (SDHC) for an embedded application. The application needs to be able to write very quickly to the card in real time.
I have seen that erasing the memory blocks beforehand makes the write a lot faster. Unfortunately I am struggling to get the erase command (and ACMD23) to work as the driver provided for the development board I am using is not complete.
Is there any way to erase the card by maybe writing an 'erased' value to the memory blocks beforehand? For example, if after erasing a block it becomes 0x12345678 can I just write this value instead to make it erased in order to get around using the erase command? Or is there some other way that the card marks a block as erased?
Thanks
I have tried writing 0xffffffff as the erased value but it has not helped.
I think you're misunderstanding how flash memory works.
Flash memory has blocks which are way bigger than what typical filesystems expect. Additionally, they have a limited number of erase cycles. Hence, the flash controller provides an abstraction that maps virtual sectors to physical blocks.
A sector that is "erased" is not actively erased at all. It's just unmapped, and an empty block (if available) is mapped in its place. In the background, the flash controller shuffles sectors around and erases physical blocks as they become wholly unused.
As you can see, the quality of the flash controller matters here. It's not even a driver issue, typically: the driver just sends the commands; the flash controller executes them. If you need better performance, get a better SD card.
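As a purely conceptual sketch of the mapping described above (this is not real card firmware; the names and sizes are invented for illustration):

#include <stdint.h>
#include <string.h>

#define N_SECTORS  1024          /* virtual 512-byte sectors             */
#define UNMAPPED   0xFFFFu

static uint16_t map[N_SECTORS]; /* virtual sector -> physical block      */

void ftl_format(void)
{
    /* a freshly "erased" card: no virtual sector is mapped to anything */
    memset(map, 0xFF, sizeof map);
}

void ftl_trim(uint32_t sector)
{
    /* "erasing" a sector just drops the mapping; the physical block is
       reclaimed and actually erased later, in the background */
    map[sector] = UNMAPPED;
}

void ftl_write(uint32_t sector, uint16_t fresh_block)
{
    /* a write is redirected to a pre-erased block handed out by the
       controller's garbage collector; nothing is rewritten in place */
    map[sector] = fresh_block;
}

The point is that there is no magic "erased" data value you can write from the host side; only the card's own erase/trim handling changes the mapping state.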

Why does U-Boot (DENX) stay in a boot loop and give "Program Check Exception"?

I have a MPC5200 v2.2, Core v1.4 on a phyCORE-MPC5200-tiny Board. DRAM 64 MB, FLASH 16 MB. RTOS VxWorks 6.9.
I have problems booting the embedded system: it stays in a boot loop when U-Boot (DENX) tries to load the image, saying "Program Check Exception".
For debugging during development I use a TFTP server to load the vxWorks binary directly into RAM (U-Boot command: 'tftpboot 0x100000 vxWorks.bin'). In this case everything works fine.
For release, the pure *.bin VxWorks file (8.07 MB, 8,462,808 bytes) gets compressed and packed into a U-Boot compatible image file (with bootloader-specific header information) with a resulting size of 5.25 MB (5,509,763 bytes). The image file is put onto flash, from where it is uncompressed and loaded into RAM (U-Boot command: 'bootm 0xff800000'). Then the above-mentioned exception is thrown, resulting in a reboot loop (see screenshot below).
I've already found that if the prepared image is smaller than 5 MB, U-Boot loads it without errors. Maybe the uncompressed file size (about 8 MB) could also be a problem?
Do you have any idea how this problem can be solved?
U-Boot (since 2011.06) provides the environment variable "bootm_mapsize" to change the memory space reserved for booting the kernel image.
However, your U-Boot seems really old and may not contain this.
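If your U-Boot were new enough, the change would just be an environment setting, for example (0x1000000, i.e. 16 MB, is an assumed value picked to fit the ~8 MB uncompressed image):

setenv bootm_mapsize 0x1000000
saveenv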
In your U-Boot, I understand this value is set in the board header under "include/configs/" as:
#define CONFIG_SYS_BOOTMAPSZ (8 << 20) /* Initial Memory map for Linux */
You can change this value and re-compile U-Boot to get past the problem.
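For example, raising the map to 16 MB should leave room for the roughly 8 MB uncompressed VxWorks image (the exact value is an assumption; pick one that fits within your 64 MB of DRAM):

#define CONFIG_SYS_BOOTMAPSZ (16 << 20) /* Initial Memory map for Linux */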
I hope this helps.

Is a software image loaded into non-volatile RAM when using tftpboot from U-boot?

I have a Xilinx development board connected to a RHEL workstation.
I have U-boot loaded over JTAG and connect to it with minicom.
Then I tftpboot the helloworld standalone application.
Where do these images go?
I understand I am specifying a loadaddr, but I don't fully understand the meaning.
When I run the standalone application, I get various outputs on the serial console.
I had it working correctly the first time, but then started trying different things when building.
It almost feels like I am clobbering memory, but I assumed that after a power cycle anything tftp'd would be lost.
The issue still occurs through a power cycle, though.
Where do these images go?
The U-Boot command syntax is:
tftpboot [loadAddress] [[hostIPaddr:]bootfilename]
You can explicitly specify the memory destination address as the loadAddress parameter.
When the loadAddress parameter is omitted from the command, then the memory destination address defaults to the value of the environment variable loadaddr.
Note that several other U-Boot commands also use this loadaddr variable, such as "bootp", "rarpboot", "loadb" and "diskboot".
I understand I am specifying a loadaddr, but I don't fully understand the meaning.
When I run the standalone application, I get various outputs on the serial console.
The loadAddress is simply the start address in memory to which the transferred file will be written.
For a standalone application, this loadAddress should match the CONFIG_STANDALONE_LOAD_ADDR that was used to link this program.
Likewise the "go" command to execute this standalone application program should use the same CONFIG_STANDALONE_LOAD_ADDR.
For example, assume the physical memory of your board starts at 0x20000000.
To allow the program to use the maximum amount of available memory, the program is configured to start at:
#define CONFIG_STANDALONE_LOAD_ADDR 0x20000000
For convenient loading, define the environment variable (at the U-Boot prompt):
setenv loadaddr 0x20000000
Assuming that the serverip variable is defined with the IP address of the TFTP server, then the U-Boot command
tftpboot hello_world.bin
should retrieve that file from the server, and store it at 0x20000000.
Use
go 20000000
to execute the program.
I assumed after a power cycle anything tftp'd would be lost.
It should.
But what might persist in "volatile" memory after a power cycle is unpredictable. Nor can you be assured of a default value such as all zeros or all ones. The contents of dynamic RAM should always be presumed to be unknown unless you know it has been initialized and has been written.
Is a software image loaded into non-volatile RAM when using tftpboot from U-boot?
Only if your board has main memory that is non-volatile (e.g. ferrite core or battery-backed SRAM, which are not likely).
You can use the "md" (memory display) command to inspect RAM.
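For example, to dump the first 0x40 long words at the 0x20000000 load address from the example above:

md 20000000 40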