Embedded app and wearing out flash disks

I have an embedded app that needs to do a lot of writing to a flash disk (or other). We cannot use a hard disk due to the environment. This is an industrial system subject to vibration and explosive fuel vapour.
The trouble is, flash has an endurance of around 100000 erase/write cycles per block. That's ample for your digital camera, but it wears out after a year in our scenario.
Any alternatives that people have found work for them?
I was thinking of using FRAM, but it's been tried here before and it's slow and small.

As Nils says, commercial CompactFlash cards and drive replacements (NAND-based) have wear levelling.
If you are using cheap onboard (NOR) flash you might have to do this yourself.
The best approach is some sort of ring buffer where you only append data, wrapping around to overwrite the oldest data once the device is full. Remember that flash can only erase a whole block (page) at a time, but can then program individual bytes into the erased space within that page.
Also, can you buffer a page in RAM and write it out in one go, or do you need individual bytes committed at all times?
Most application notes for embedded processors include examples of this.
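For illustration, here is a minimal sketch in C of such an append-only ring log over raw NOR flash; flash_erase_sector and flash_program are hypothetical stand-ins for whatever erase/program primitives your part's driver actually provides:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical low-level hooks; substitute your part's driver API. */
    extern void flash_erase_sector(uint32_t addr);
    extern void flash_program(uint32_t addr, const void *data, size_t len);

    #define SECTOR_SIZE 4096u
    #define NUM_SECTORS 256u    /* spread wear over the whole region */
    #define FLASH_BASE  0u

    /* Next free byte; at boot, recover it by scanning for the first
       still-erased (0xFF) cell. */
    static uint32_t write_off;

    /* Append one record (assumes 0 < len <= the whole region). */
    void log_append(const void *rec, size_t len)
    {
        if (len == 0)
            return;

        /* Wrap to the start when the record would run off the end,
           overwriting the oldest data, ring-buffer style. */
        if (write_off + len > SECTOR_SIZE * NUM_SECTORS)
            write_off = 0;

        /* Erase only the sectors this record newly enters; the sector
           the write pointer is already inside was erased earlier. */
        for (uint32_t s = write_off / SECTOR_SIZE;
             s <= (write_off + len - 1) / SECTOR_SIZE; s++) {
            if (s * SECTOR_SIZE >= write_off)
                flash_erase_sector(FLASH_BASE + s * SECTOR_SIZE);
        }

        flash_program(FLASH_BASE + write_off, rec, len);
        write_off += len;
    }

Because every byte of the device is written once per pass before anything is erased again, each sector sees the same number of erase cycles, multiplying the raw 100000-cycle endurance by the ratio of device size to the amount of data you log per pass.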

You really need to provide much more information:
how much capacity do you need?
what costs are acceptable?
what physical form factor do you need?
what lifetime do you want?
If your storage needs aren't particularly huge and you can deal with the cost, there are battery-backed SRAM parts (up to at least 2 megabytes per part) that are as fast as RAM (that's what they are) and have no limit on the number of writes. But they cost a lot more than flash.
You could also get a drive with a SATA interface that's populated with DRAM.

This post refers to using embedded Linux; not sure if that's what you want.
I have a not too different system, but for medical use. We use NOR flash for all parts that have a low update frequency and NAND flash for the rest. I would recommend using UBI/UBIFS as the top layer on the MTD device. UBI/UBIFS takes care of all the underlying problems for you, provided you design your system with a lot more physical flash than you need. For example: if you need 100MB, design your HW with 1GB of flash. Then the data can be shuffled around by UBI without any interaction from the layers above.
UBIFS documentation
UBI documentation
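For illustration, once the MTD partition has been attached as UBI device 0 (e.g. with ubiattach) and a volume has been created, mounting it from C is a single mount(2) call. A minimal sketch; the volume name "data" and the mount point are made up for the example:

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* UBIFS sits on UBI, which does the wear levelling and
           bad-block handling on the raw MTD device; the application
           just sees an ordinary filesystem. */
        if (mount("ubi0:data", "/mnt/flash", "ubifs", 0, NULL) != 0) {
            perror("mount ubifs");
            return 1;
        }
        return 0;
    }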

As Michael Burr pointed out, we need more info. (Please answer his questions.)
I have an additional question: What kind of interface is this? PATA? SATA? USB?
As others have pointed out, any decent Flash Drive will provide some kind of wear leveling. Look for this in the datasheet for the device. Many vendors will boast about their wear-leveling technique.
You mention 100000 cycles. This seems pretty low to me. Most "industrial grade" flash drives can do a lot more than that (millions). Make sure you aren't using a bargain-basement device. A good flash drive will usually include an equation or calculator tool you can use to figure out the expected lifespan of the device.
(I can say from personal experience that some brands of flash drives hold up a lot better than others, particularly the "industrial" ones. Our drives go through some pretty brutal usage scenarios.)
The other thing that can help a lot is capacity. The higher the capacity of the flash drive, the more room the wear-levelling algorithm has to work with, which means a longer lifespan.
The other thing you can look at is software techniques to minimize the wear on the flash components. Do you have a pagefile/swapfile? Maybe you don't need it. If you are creating/deleting lots of temporary files, move them to a RAM disk. Remember, it is erase/reprogram cycles that usually wear out a flash cell, so reducing those operations will usually help.

Use SD cards that have a built-in wear leveling controller. That way the write cycles get distributed over all the flash blocks and you get a very long life out of your flash.

I was thinking of using FRAM but it's been done before here and it's slow and small.
Compare with nvSRAM; that may provide the performance you need.

I have used a CompactFlash card in an embedded system with great success. It has an onboard controller that does all the thinking for you. Not all CompactFlash controllers are equal, so get one of recent design that was intended to be used as a hard drive replacement; those have better wear-levelling algorithms.

is it recommended to use SPI flash to run code instead of internal flash due to memory limitation of internal flash?

We are using an LPC546xx-family microcontroller in our project; currently, at the initial stage, we are finalizing the software and hardware requirements. The basic firmware (which contains the RTOS, a third-party stack, libraries, etc.) is currently 480 KB. Once the full application is developed, the size will exceed the internal flash size (512 KB), and in addition we need storage that can hold a firmware update image separately.
So we plan to use SPI flash (S25LP064A-JBLE, http://www.issi.com/WW/pdf/IS25LP032-064-128.pdf, serial flash memory) of 4 MB/8 MB to boot and run the firmware.
Is it recommended to run code from SPI flash? How can I map external flash memory directly into the CPU's memory space? Can anyone give an example that contains this memory mapping (linker script etc.) or a demo application in which the LPC546xx uses SPI flash?
Generally speaking it's not recommended; put differently: the closer to the CPU, the better. However, both the S25LP064A and the LPC546xx support XIP (execute in place), so it is viable.
This is not a trivial issue, as many aspects are affected; i.e. the issue is best avoided and should really have been ironed out in the planning stage. Embedded systems are more about compromising than anything else, and making the right/better choices takes skill and experience.
Same question with replies on the NXP forum: link
512K of NVRAM is huge. There is almost certainly room for optimisation even if third-party libraries are used.
On a related note, this discussion concerning XIP should give valuable insight: link.
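As a purely illustrative sketch of the XIP route: with GCC you can push cold code into the external flash with a section attribute. The section name below is made up, and your linker script must place an output section of that name in the address window the SPIFI controller maps:

    /* ".spifi_text" is an assumed section name; the linker script must
       locate it in the SPIFI memory region so the code executes in
       place (XIP) from the external flash. */
    __attribute__((section(".spifi_text")))
    void show_help_screens(void)   /* hypothetical cold-path function */
    {
        /* Rarely-called code: slower fetches from QSPI, but it costs
           no internal flash and no RAM. */
    }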
I would strongly encourage the use of a file system if you aren't using one already; external storage is much better suited to that. The further the storage is from the computational unit, the more relevant this becomes. That's not XIP, and the penalty is copy-to-RAM whichever way you do it, i.e. performance will be slower. But in my experience the need for speed is often not thoroughly considered, and at least partially greatly overestimated.
Regarding your mention of an RTOS and FW upgrades:
Unless it's a poor RTOS, there's file-system awareness built in. Especially for FW upgrading (note: you'll need room for three images, factory reset included), unless it's already supported by the SoC vendor by some other means (OTA), a file system will make life much easier and less risky. If there's no FS awareness, it can be added.
FW upgrade requires a lot of extra storage; more if the scheme is simpler. Simpler is, however, also safer, which matters hugely for FW upgrades in particular. In the simplest case (a flat binary image), you'll need at least twice the amount of memory you're already consuming.
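To make the flat-image arithmetic concrete, here is one hypothetical way to lay out a slot header for an A/B scheme; nothing about it is specific to the LPC546xx:

    #include <stdint.h>

    /* One of these sits at the start of each image slot.  At boot, the
       loader runs the newest slot whose CRC over "size" payload bytes
       verifies, and falls back to the other slot (or the factory
       image) otherwise. */
    struct image_header {
        uint32_t magic;    /* fixed marker identifying a valid header */
        uint32_t version;  /* monotonically increasing build number   */
        uint32_t size;     /* payload length in bytes                 */
        uint32_t crc32;    /* CRC-32 of the payload                   */
    };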
All-in-all: I think the direction you're going is viable and depending on the actual situation perhaps your only choice.

Embedded System and Serial Flash wear issues

I am using a serial NOR flash (SPI-based) for my embedded application, and I also have to implement a file system over it. That makes my NOR flash prone to frequent erase and write cycles, which is where a wear-levelling algorithm comes into the picture. I want to ask a few questions about this:
First, is it possible to implement a wear-levelling algorithm for NOR flash? If yes, why do I mostly find solutions for NAND flash and not NOR flash?
Second, are low-cost serial SPI-based NAND flashes available? If yes, kindly share a part number.
Third, how difficult is it to implement our own wear-levelling algorithm?
Fourth, I have also read/heard that industrial-grade NOR flashes have higher erase/write endurance (in the millions!). Is this understanding correct? If yes, kindly let me know the details of such an SPI NOR flash; it might let me avoid implementing a wear-levelling algorithm entirely, or at least give me a little room and ease in certain areas when implementing my own.
The constraint on all of these points is cost; I want a low-cost solution to these issues.
Thanks in Advance
Regards
Aditya Mittal
(mittal.aditya12#gmail.com)
Implementing a wear-levelling algorithm is not trivial, but not impossible either:
Your wear-levelling driver needs to know when disk blocks are no longer used by the filing system (this is known as TRIM support on modern SSDs). In practice, this means you need to modify your block driver API and filing systems above it, or have the wear-levelling driver aware of the filing system's free-space map. This second option is easy for FAT, but probably patented.
You need to reserve at least one erase unit plus a few allocation units to allow erase-unit recycling. Reserving more blocks will increase performance.
You'll want a background thread to perform asynchronous erase-unit recycling.
You'll need to test, test and test again. When I last built one of these, we built a simulation of the flash, ran the real filing system on top of it, and tortured the system for weeks.
There are lots and lots of patents covering aspects of wear-levelling. By the same token, there are at least two wear-levelling layers in the Linux kernel.
Given all of this, licensing a third-party library is probably cost-effective.
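To give a flavour of the core mechanism, here is a toy sketch of the allocation side of dynamic wear levelling: keep an erase counter per erase unit and always hand out the least-worn free unit. A real driver must persist the counters (typically in each unit's header) so they survive power loss; this sketch keeps them in RAM only:

    #include <stdint.h>

    #define NUM_ERASE_UNITS 1024u

    static uint32_t erase_count[NUM_ERASE_UNITS]; /* toy: RAM only */
    static uint8_t  in_use[NUM_ERASE_UNITS];

    /* Return the free erase unit with the lowest erase count, steering
       writes toward the least-worn parts of the device; -1 if full. */
    int alloc_erase_unit(void)
    {
        int best = -1;
        for (unsigned i = 0; i < NUM_ERASE_UNITS; i++) {
            if (!in_use[i] &&
                (best < 0 || erase_count[i] < erase_count[best]))
                best = (int)i;
        }
        if (best >= 0) {
            in_use[best] = 1;
            erase_count[best]++;  /* unit is erased before reuse */
        }
        return best;
    }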
Atmel/Adesto etc. make those little serial flash chips by the billion. They also have loads of online docs. I suspect that the serial flash beetles don't implement wear-levelling because of cost - the devices they are typically used in are very cheap and tend to have a limited lifetime anyway. Bulk, 4-line NAND flash that is expected to see heavier and lengthier use (e.g. SD cards) has complex (relatively speaking) built-in controllers that can implement wear-levelling transparently.
I no longer use one-pin interface serial flash, partly due to the wear issue. An SD-card is cheap enough for me to use and, even if one does break, an on-site technician, (or even the customer), can easily swap it out.
Implementing a wear-levelling algorithm is too expensive for me to bother with, both in terms of development time (especially testing, if the device has to support a file system that must not corrupt on power failure, etc.) and CPU/RAM.
If your product is so cost-sensitive that you have to use serial NOR flash, I suggest that you ignore the issue.

retrieve video RAM usage on iPhone

I have seen this article about retrieving the memory usage of an iPhone app:
programmatically-retrieve-memory-usage-on-iphone. It's great!
In my project I want to retrieve the amount of free VRAM, because my app loads many textures, and I must preload these into video RAM for fast rendering.
But in vm_statistics I don't see these properties: vm_statistics MAN page.
Thanks a lot for your help.
As you've seen so far, getting hard numbers for GL texture memory usage is quite difficult. It's complicated further by the fact that CoreAnimation will also use GL Texture memory without "consulting" you, including from processes other than yours.
Practically speaking, I suggest that you use the VM Tracker instrument in Instruments to watch changes in the VM pages your process maps under the IOKit tag. It's a bit crude, but it's the best approach I've found. In my experience, this process is largely guess and check.
You asked specifically for a way to determine the amount of free VRAM, but even if you could get that info, it's not really likely to be helpful. Even if your app is totally OpenGL and uses no UIViews or CoreAnimation layers, other processes (most importantly those more privileged than yours) can consume that memory at any time, either explicitly or implicitly through CoreAnimation. It's also probably safe to assume that if your app prevents those more-privileged apps from getting the texture memory they need, your process will be killed.
Put differently, even if you could ascertain the instantaneous state of the GL texture memory, you probably couldn't count on being the only consumer of that resource, so it's pretty useless.
At the end of the day, you should spend your effort designing your app to be a good citizen in terms of GL memory and manage (read: minimize) your own consumption of texture memory. iOS devices are not old-school game consoles -- you are not the only thing running -- so you need to be mindful and tolerant of that fact, lest your app be one of those where everyone has to reboot their phone every few minutes in order to use it.

Accessing files which are currently being written

If a file is being written to, and I try to access it at the same time (say it is a log file being written to every 10 milliseconds, and I'm trying to read it), will I damage or disturb the writing process?
Specifically, I'm asking about video files: if I start a recording process (using Windows Media Encoder), I would like to monitor the file at the same time to see whether it is blank (black pixels everywhere) or real content is being recorded.
Sorry if my question is a newbie one, but I really really need to be sure about that.
Thanks in advance.
In general you can certainly read files as they are being written, without corrupting their content. However:
It is possible to face an issue if your recording medium cannot deal with the combined data rate of both reading and writing. This can be a problem especially with slow-ish USB flash drives.
It is possible to face an issue on hard drives too, if the combination of reading and writing exceeds the rate of random seeks that the hard drive can handle. This can happen more easily on older drives (e.g. IDE) when dealing with HD video.
The end result is that if you have a real-time writer process, such as a TV recorder, it may be forced to drop some of the data - in the case of video a few frames.
Modern systems have quite fast disk subsystems, reasonably good I/O schedulers and large enough RAM capacities to allow for extensive data caching, which makes it quite unlikely that a single writer/reader combination would saturate the disk subsystem, unless you are doing something unusual like recording several video streams at once.
Keep in mind however, that:
The disk subsystem can also be saturated by unrelated processes reading/writing other files from the same drive.
If you are encoding video, you might also lose frames if something draws enough CPU resources that the encoding process is no longer able to keep up with the real-time requirements. Depending on the video file, test-playing it might be just enough to do that - at least HD reproduction can be quite demanding. So, watch your CPU load and experiment before relying on it to record your favourite show :-)
EDIT:
If you are among the lucky ones that have SSD drives, seeks and data rate should normally be a non-issue. That leaves the CPU - you'd be surprised how easy it is to push it to the limit.
Above all, you should experiment to find out the limits of your system for each particular application. That way you won't have any nasty surprises...
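Since Windows Media Encoder was mentioned: on Windows, the practical point is to open the file with write sharing, otherwise the open fails while the encoder still holds the file. A minimal C sketch (the path is just an example):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* FILE_SHARE_WRITE is the key: it lets us read the file even
           though the encoder still has it open for writing. */
        HANDLE h = CreateFileA("C:\\capture\\recording.wmv",
                               GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open failed: %lu\n", GetLastError());
            return 1;
        }

        char  buf[65536];
        DWORD got = 0;
        /* Reading past the current end of file just returns 0 bytes;
           wait and retry to follow the file as it grows. */
        if (ReadFile(h, buf, sizeof buf, &got, NULL))
            printf("read %lu bytes\n", (unsigned long)got);

        CloseHandle(h);
        return 0;
    }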

Determining failing sectors on portable flash memory

I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc).
I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts).
Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs like slower throughput speeds that might indicate a flash drive is going to fail much sooner than normal?
Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks?
In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life extension techniques for flash memory?
Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com. This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself.
If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :)
Thanks!
It would be interesting to see whether badblocks can help in this case.
AFAIK, wear levelling happens at the firmware level. The host does not know about a bad block until the firmware detects one.
And there is no known way to find these bad sectors beforehand. BTW, I guess it is not bad sectors but bad blocks: once a sector is bad, the whole block is marked as bad...
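Within those limits, about the best a program can do is a full-surface read pass that watches for the symptoms the controller leaks: hard read errors and latency spikes (which can hint at retries or remapping going on underneath). A rough POSIX C sketch; the device path and the 100 ms threshold are arbitrary choices for the example:

    #include <stdio.h>
    #include <time.h>

    #define BLOCK_SIZE (64 * 1024)

    /* Wall-clock milliseconds, so time spent blocked in I/O counts. */
    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/dev/sdb"; /* example */
        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); return 1; }

        static char buf[BLOCK_SIZE];
        unsigned long block = 0;
        for (;;) {
            double t0 = now_ms();
            size_t n  = fread(buf, 1, sizeof buf, f);
            double dt = now_ms() - t0;

            if (n == 0) {                 /* EOF or hard read error */
                if (ferror(f))
                    printf("block %lu: READ ERROR\n", block);
                break;
            }
            if (dt > 100.0)               /* arbitrary threshold */
                printf("block %lu: slow read (%.0f ms)\n", block, dt);
            block++;
        }
        fclose(f);
        return 0;
    }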