I am wondering:
How large is GLib? Can it be used directly on an embedded system? Is it usually too large for embedded systems?
Is there an embedded-system version of GLib?
Thanks
"Embedded" barely means anything. The systems range from tiny (few registers of volatile memory, read-only code memory) to nearly as big as computers have ever been on this planet.
Glib isn't huge as desktop libraries go, but it's not a tiny microcontroller library either. There's obviously a vague line somewhere in that range below which it just won't fit, but without knowing the system there's no way to tell.
Based on a comment it seems your environment ranks "ginormous" in the embedded scale, so glib will probably fit if the system underneath makes porting it worthwhile. If you have a GPOS (like Linux or BSD), there might already be a port.
If you can fit GLibC then you can fit GLib.
GLib can be used for embedded systems and is quite small. However, newer versions of GLib require a lot more libraries to build. If what you need is a subset like async queues, threads, queues, lists, etc., you might consider getting the code and building GLib 1.2.x instead of the latest. I have been considering building a subset of GLib for embedded work that removes some of the newer features.
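If it helps to make that subset concrete, below is a minimal sketch using GLib's queue and list containers. It assumes a GLib 2.x installation found via pkg-config; the program itself is just an illustration.

/* Sketch of the GLib "subset" features mentioned above (queues and lists).
 * Build with something like:
 *   gcc demo.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <stdio.h>

int main(void)
{
    /* GQueue: a double-ended queue of pointers */
    GQueue *q = g_queue_new();
    g_queue_push_tail(q, "first");
    g_queue_push_tail(q, "second");
    printf("popped: %s\n", (char *)g_queue_pop_head(q));
    g_queue_free(q);

    /* GList: a doubly linked list */
    GList *l = NULL;
    l = g_list_append(l, "a");
    l = g_list_append(l, "b");
    for (GList *it = l; it != NULL; it = it->next)
        printf("item: %s\n", (char *)it->data);
    g_list_free(l);

    return 0;
}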
How much storage space do you have on your embedded device? There is no problem compiling GLib for an embedded device, given that you have enough storage on your device. Since we don't know anything about your device, it's kind of hard to say, though.
Related
We used the LPC546xx family microcontroller in our project; currently, at the initial stage, we are finalizing the software and hardware requirements. The basic firmware (which contains the RTOS, a 3rd-party stack, libraries, etc.) is currently 480 KB. Once the full application is developed, the size will exceed the internal flash (512 KB), and on top of that we need storage that can hold a firmware-update image separately.
So we planned to use a 4 MB/8 MB SPI flash (S25LP064A-JBLE, http://www.issi.com/WW/pdf/IS25LP032-064-128.pdf, serial flash memory) to boot and run the firmware.
Is it recommended to run code from SPI flash? How can I map external flash memory directly into the CPU's memory space? Can anyone give an example that contains this memory mapping (linker script, etc.) or a demo application in which the LPC546xx uses SPI flash?
Generally speaking it's not recommended, or put differently: the closer to the CPU, the better. Both the S25LP064A and the LPC546xx support XIP (execute in place), however, so it is viable.
This is not a trivial issue, as many aspects come into play. In other words, the issue is best avoided and should really have been ironed out in the planning stage. Embedded systems are more about compromising than anything, and making the right/better choices takes skill and experience.
Same question with replies on the NXP forum: link
512 KB of NVRAM is huge. There is almost certainly room for optimisation, even if 3rd-party libraries are used.
On a related note this discussion concerning XIP should give valuable insight: link.
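To give a feel for what the memory mapping asked about above involves, here is a hedged sketch, not a drop-in solution: the region name, section name, addresses and lengths are placeholders (use the SPIFI memory-mapped base from the LPC546xx user manual and your toolchain's startup conventions), and the boot code in internal flash must initialise the SPIFI controller before anything placed in the external region is called.

/* Hedged sketch. The linker script (GNU ld) gains a memory region at the
 * address where the LPC546xx SPIFI controller memory-maps the external
 * flash (the values below are placeholders, check the user manual):
 *
 *   MEMORY
 *   {
 *     FLASH    (rx) : ORIGIN = 0x00000000, LENGTH = 512K
 *     SPIFLASH (rx) : ORIGIN = 0x10000000, LENGTH = 8M
 *     RAM     (rwx) : ORIGIN = 0x20000000, LENGTH = 160K
 *   }
 *   SECTIONS
 *   {
 *     .ext_text : { *(.ext_text*) } > SPIFLASH
 *   }
 *
 * Selected code can then be placed in that region from C and executed
 * in place, provided the boot code has set up the SPIFI controller first.
 */
__attribute__((section(".ext_text")))
void report_statistics(void)
{
    /* runs XIP from external SPI flash; keep interrupt handlers and any
       flash-programming code in internal flash instead */
}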
I would strongly encourage use of a file system if not done already, for which external storage is much better suited; the further from the computational unit, the more this applies. That's not XIP, though, and the penalty is copy-to-RAM either way you do it, i.e. performance will be slower. But in my experience, the need for speed has often not been thoroughly considered and is at least partially greatly overestimated.
Regarding your mentioning of RTOS and FW-upgrade:
Unless it's a poor RTOS, there's file-system awareness built in. Especially for FW upgrading (note: you'll need room for 3 images, factory reset included), a file system will make life much easier and less risky, unless upgrading is already supported by the SoC vendor by some other means (OTA). If there's no FS awareness, it can be added.
FW upgrade requires a lot of extra storage; more if done simply. Simpler is, however, also safer, which matters hugely for FW upgrades in particular. In the simplest case (a flat binary image), you'll need at least twice the amount of memory you're already consuming.
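Rough numbers for the case above: at ~512 KB per image, three slots (running image, upgrade candidate, factory fallback) plus some room for a file system come to roughly 1.5-2 MB, so a 4 MB or 8 MB SPI flash leaves comfortable headroom.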
All-in-all: I think the direction you're going is viable and depending on the actual situation perhaps your only choice.
We're currently creating a device for a customer that will get a block of data (like, say, 5-10KB) from a PC application. This is a bit simplified, so assume that the data must be passed and uncompressed a lot, not just once a year. The communication channel is really, really slow, so we'd like to compress the data beforehand, pass to the device and let it uncompress the data to its internal flash. The device itself, however, runs on a micro controller that is not really fast and does not have a lot of memory. It has enough flash memory to store the result, and can uncompress the data block as it is received, but it may not have enough RAM to store the entire compressed or uncompressed (or even both!) data blocks. And of course, it doesn't have an operating system or other luxury.
This means we need a sufficiently fast uncompression algorithm that does not use a lot of memory. The compression can be slow and ugly, since we're doing it on the PC side. C or .NET code preferred though for compression, to make things easier. The uncompression code should be in C, since it's unlikely that someone has an ASM optimized version for our controller.
We found LZO, which would be almost perfect for us, but it has a so-called "free" license (GPL) by default, which makes it totally unusable for our customer. The author says that commercial licenses are available on request, but unfortunately he's currently unreachable (for non-technical reasons, as the news on his site says).
I found a few other libraries, including puff.c from zlib, and we're still investigating, but I thought I'd ask for your experience:
Which compression algorithm and/or library do you recommend for embedded purposes, given that the decompression device has really limited resources and source code and a commercial license are required?
You might want to check out one of these which are not GPL and are fairly compact implementations:
fastlz - MIT license, fairly simple code
lzjb - Sun CDDL, used in ZFS for compression, simple and very short
liblzf - BSD-style license, small, fast
lzfx - BSD-style, based on liblzf, small, fast
Those algorithms are all from the Lempel–Ziv (LZ77) family of dictionary coders (they all have LZ in common):
https://en.wikipedia.org/wiki/LZ77_and_LZ78
I have used LZSS. I used code from Haruhiko Okumura as a base. It uses the last portion of the uncompressed data (2 KB) as the dictionary. The code can be modified to not require a temporary ring buffer if you have no memory to spare. The licensing is not clear from his site, but some versions were released with a "Use, distribute, and modify this program freely" line included, and the code is used by commercial vendors.
Here is an implementation based on the same code that forms part of the Allegro game library. Allegro licensing is giftware or zlib.
Another option could be the lzfx lib, which implements LZF. I have not used it yet, but it seems nice. It also uses previous results, so it has low memory requirements, and it is released under a BSD license.
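To show how little code an LZSS-style decoder needs, here is a minimal sketch along the lines described above: the already-decoded output serves as the dictionary, so no separate ring buffer is required. The token format (a flag byte followed by eight items, each either a literal or a 2-byte offset/length pair) is an assumption for illustration, not Okumura's or lzfx's exact format.

#include <stddef.h>
#include <stdint.h>

/* Minimal LZSS-style decoder sketch. Flag bit 1 = literal byte,
 * flag bit 0 = back-reference encoded as a little-endian 16-bit word:
 * low 11 bits = distance-1 (2 KB window), high 5 bits = length-3.
 * The output buffer itself is the dictionary, so no ring buffer. */
size_t lzss_decode(const uint8_t *src, size_t src_len,
                   uint8_t *dst, size_t dst_cap)
{
    size_t si = 0, di = 0;

    while (si < src_len) {
        uint8_t flags = src[si++];
        for (int bit = 0; bit < 8 && si < src_len; bit++) {
            if (flags & (1u << bit)) {                 /* literal */
                if (di >= dst_cap) return di;
                dst[di++] = src[si++];
            } else {                                   /* back-reference */
                if (si + 2 > src_len) return di;
                uint16_t w = (uint16_t)(src[si] | (src[si + 1] << 8));
                si += 2;
                size_t dist = (size_t)(w & 0x07FF) + 1;    /* 1..2048 */
                size_t len  = (size_t)(w >> 11) + 3;       /* 3..34   */
                if (dist > di) return di;              /* corrupt input */
                while (len-- && di < dst_cap) {
                    dst[di] = dst[di - dist];
                    di++;
                }
            }
        }
    }
    return di;   /* number of bytes produced */
}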
One alternative could be the LZ77 coder/decoder in the Basic Compression Library.
Since it uses the unpacked data history for its dictionary, it uses no extra RAM except for the compressed and uncompressed data buffers. It should be ideal for your use case (zlib license, portable C). The entire decoder is just 70 lines of code (including comments), and really fast.
EDIT: Yet another alternative is the liblzg library, which is a refined version of the aforementioned LZ77 coder/decoder. It compresses better, is generally faster, and requires no memory for decompression. It is very, very free (zlib license).
I would recommend ZLIB.
From the wiki:
The library provides facilities for control of processor and memory use.
There are also facilities for conserving memory. These are probably only useful in restricted memory environments such as some embedded systems.
zlib is also used in many embedded devices because the code is portable, liberally licensed and has a relatively small memory footprint.
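As a sketch of the memory-conscious usage the excerpt refers to: if the PC side compresses with a reduced window (deflateInit2() with a small windowBits value), the device can stream-decompress into flash with small fixed buffers. The link_read() and flash_write() helpers below are hypothetical placeholders for your communication and flash drivers; total inflate memory is roughly 7 KB of state plus the window.

#include <string.h>
#include <zlib.h>

/* Hypothetical helpers: pull compressed bytes off the slow link and
 * program decompressed bytes into internal flash. */
extern int  link_read(unsigned char *buf, int cap);   /* returns bytes read, 0 at end */
extern void flash_write(const unsigned char *buf, unsigned len);

int decompress_to_flash(void)
{
    static unsigned char in[256], out[256];   /* small fixed buffers */
    z_stream s;
    memset(&s, 0, sizeof s);                  /* use zlib's default allocator */

    /* windowBits must match (or exceed) what the compressor used,
     * e.g. deflateInit2(..., windowBits = 9, ...) on the PC side. */
    if (inflateInit2(&s, 9) != Z_OK)
        return -1;

    int ret = Z_OK;
    do {
        s.avail_in = (uInt)link_read(in, sizeof in);
        if (s.avail_in == 0)
            break;                            /* ran out of input early */
        s.next_in = in;
        do {
            s.next_out  = out;
            s.avail_out = sizeof out;
            ret = inflate(&s, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END) {
                inflateEnd(&s);
                return -1;
            }
            flash_write(out, sizeof out - s.avail_out);
        } while (s.avail_out == 0);
    } while (ret != Z_STREAM_END);

    inflateEnd(&s);
    return (ret == Z_STREAM_END) ? 0 : -1;
}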
A lot depends on the nature of the data. If it is simple enough, you may not need anything very fancy. For example, if the downloaded data were a simple image (say, something like a line graph), a simple run-length encoding could cut the data down by a factor of ten, and you would need trivial amounts of code and RAM to decode it.
Of course if the data is more complex, then this won't be of much use. But I'd start by exploring the data being sent and see if there are specific aspects which would allow you to compress it more effectively than using a general purpose algorithm.
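If the data turns out to be that regular, the decoder really is trivial; a sketch, assuming a plain (count, value) byte-pair encoding chosen purely for illustration:

#include <stddef.h>
#include <stdint.h>

/* Decode a run-length encoding of (count, value) byte pairs.
 * The pair format is an assumed example, not a standard. */
size_t rle_decode(const uint8_t *src, size_t src_len,
                  uint8_t *dst, size_t dst_cap)
{
    size_t di = 0;
    for (size_t si = 0; si + 1 < src_len; si += 2) {
        uint8_t count = src[si];
        uint8_t value = src[si + 1];
        while (count-- && di < dst_cap)
            dst[di++] = value;
    }
    return di;   /* bytes written */
}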
You might want to check out Jørgen Ibsen's aPLib - a couple of excerpts from the product page:
The compression ratios achieved by aPLib combined with the speed and tiny footprint of the depackers (as low as 169 bytes!) makes it the ideal choice for many products.
aPLib is free to use even for commercial use, please check the included license for details.
The compression library is closed-source (yes, I know this could be a problem), but has precompiled libraries for a variety of compilers and operating systems, including both 32- and 64-bit editions. There's C and x86 assembly source code for the decompressor.
EDIT:
Jørgen also has a free (zlib license) BriefLZ library you could check out if not having compressor source is a big problem.
I've seen people use 7zip on an embedded system with memory in the tens of megabytes.
There is a custom version of zlib for microcontrollers based on ARM Cortex-M (M0, M0+, M1, M3, M4):
https://github.com/kuym/Zlib-ARM-Cortex-M
I'm using the ARM7TDMI-S (NXP processor) and I need a file system capable of reading/writing to an SD card. There are so many available, what have people used and been happy with? One that requires the least amount of setup is best - so the less I have to do to get it started (i.e. write device drivers to NXP's hardware) the better.
I am currently using CMX's RTOS as the OS for this project.
I suggest that you use either EFSL or Chan's FAT File System Module (FatFs). I have used both on MMC/SD cards without problems. The choice between them may come down to the license terms and pre-existing ports to your target. Martin Thomas's ARM Projects site has examples for both libraries.
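For a feel of what the application side looks like with Chan's FatFs (EFSL is similar in spirit), here is a sketch. It assumes the low-level diskio layer (disk_initialize/disk_read/disk_write) has already been wired to your SD/MMC driver, and it uses the current FatFs API; older releases spell f_mount() slightly differently.

#include <stdio.h>
#include "ff.h"     /* Chan's FatFs */

/* Write one reading to a file on the SD card. */
int write_reading(int value)
{
    FATFS fs;
    FIL   f;
    UINT  written;
    char  line[32];

    if (f_mount(&fs, "", 1) != FR_OK)     /* mount the default drive now */
        return -1;

    if (f_open(&f, "reading.txt", FA_WRITE | FA_CREATE_ALWAYS) != FR_OK)
        return -1;

    int n = snprintf(line, sizeof line, "%d\r\n", value);
    f_write(&f, line, (UINT)n, &written);
    f_close(&f);

    return (written == (UINT)n) ? 0 : -1;
}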
FAT is popular precisely because it's so simple. The main problems with FAT are performance (because of its simplicity, it's not very fast) and its limited size (2GB for FAT16, though 2TB for FAT32)
ASIDE: Yes, this can be considered a subjective question, but I hope to draw conclusions from the statistics of the responses.
There is a broad spectrum of computing devices. They range in physical size, computational power and electrical power. I would like to know what embedded developers think the determining factor(s) are that make a system "embedded." I have my own determination that I will withhold for a week so as to not influence the responses.
I would say "embedded" is any device on which the end user doesn't normally install custom software of their choice. So PCs, laptops and smartphones are out, while XM radios, robot controllers, alarm clocks, pacemakers, hearing aids, the doohickey in your engine that regulates fuel injection etc. are in.
You might just start with wikipedia for a definition
http://en.wikipedia.org/wiki/Embedded_system
"An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. "
Coming up with a concrete set of rules for what an embedded system is is to a large degree pointless. It's a term that means different things to different people, maybe even different things to the same people at different times.
There are some things that are pretty much never considered an embedded system, for example a Windows desktop machine. However, there are companies that put their software on a Windows box - even a bog-standard PC (maybe a laptop) - set things up so their application loads automatically and hides the desktop. They sell that as a single-purpose machine that many people would call an embedded system (but many people wouldn't). Microsoft even sells a set of tools called Embedded Windows that helps enable these kinds of applications, though it's targeted more at OEMs who will customize the system at least somewhat instead of just putting it on a standard PC. Embedded Windows is used for things like ATMs and many other devices. I think that most people would consider an ATM an embedded system.
But go into a 7-11 with an ATM that has a keyboard (I honestly don't know what the keyboard is for), press the right shift key 5 times and you'll get a nice Windows "StickyKeys" messagebox (I wonder if there's an exploit there - I sure hope not). So there's a Windows system there, just hidden and with some functionality removed - maybe not as much as the manufacturer would like. If you could convince it to open up notepad.exe somehow does the ATM suddenly stop being an embedded system?
Many, many people consider something like the iPhone or the iTouch an embedded system, but they have nearly as much functionality as a desktop system in many ways.
I think most people's definition of an embedded system might be similar to Justice Potter Stewart's definition of hard-core pornography:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it...
I consider an embedded system one where the software is rarely developed directly on the target system. This definition includes sophisticated embedded systems like the iPhone, and excludes primitive desktop systems like the Commodore 64. Not having the development tools on the target means you have to add 'reprogram device' to the edit-compile-run cycle. Debugging is also made more complicated. This encompasses most of the embedded "feel."
Software implemented in a device not intended as a general purpose computing device is an "embedded system".
Typically the system is intended for a single purpose, and the software is static.
Often the system interacts with non-human environmental inputs (sensors) and mechanical actuators, or communication with other non-human systems.
That's off the top of my head. Other views can be read at this embedded.com article
Main factors:
Installed in a fixed place somewhere (you can't carry the device itself around, only the thing it's built into)
They run a long time (often years) with little maintenance
They don't get patched often
They are small, use little power
Small or no display
+1 for a great question.
Like many things there is a spectrum.
At the "totally embedded" end you have devices designed for a single purpose. Alarm clocks, radios, cameras. You can't load new software and make it do something else. THere is no support for changing the hardware,
At the "totally non-embedded" end you have your classic PCs where everything, both HW and SW, can be replaced.
There's still a lot in between those extremes. Laptops and netbooks, for example, have minimally expandable HW, typically only memory and hard disk can be upgraded. But, the SW can be whatever you want.
My education was as a computer engineer, so my definition of embedded is hardware oriented. I draw the line at the MMU (memory management unit). If a chip has an MMU, it usually has off-chip RAM and runs an OS. If a chip does NOT have an MMU, it usually has on-board RAM and runs an RTOS, microkernel or custom executive.
This means I usually dismiss anything running linux, which is shortsighted. I admit my answer is biased towards where I tend to work: microcontroller firmware. So I am glad I asked this question and got a full spectrum of responses.
Quoting a paragraph I've written before:
An embedded system for our purposes is a computer system that has a specific and deterministic functionality \cite{LamieReal}. Typically, processors for embedded systems contain elements such as onboard RAM, special-purpose processing elements such as a digital signal processor, and analog-to-digital and digital-to-analog converters. Since the processors have more flexibility than a straightforward CPU, a common term is microcontroller.
As a kind of opposite to this question: "Is low-level embedded systems programming hard for software developers" I would like to ask for advice on moving from the low level embedded systems to programming for more advanced systems with OS, especially embedded Linux.
I have mostly worked with small microcontroller hardware and software, but now doing software only. My education also consists of hardware and embedded things mainly. I haven't had many programming courses and don't know much about software design or OO coding.
Now I have a big project in my hands that is going to be done in embedded Linux. I have major problems with designing things and keeping things manageable, because I haven't really needed to do that before. Also, making use of multitasking and blocking calls instead of running "parallel" tasks from the main function is like another world.
What kind of experiences do you have on moving from low-level programming to bigger systems with OS (Linux)? What was hard and how did you solve it? What kind of mindset is needed?
Would it be worthwhile to learn C++ from zero or continue using plain C?
The main problem with using the Linux kernel to replace microcontroller systems is driving the devices you are interfacing with. For this you may have to write drivers. I would say stick with C as the language, because you are going to want to keep the userspace as clean as possible. Look into the uClibc library for a leaner C standard library.
http://www.uclibc.org/
You may also find busybox useful. This provides many userspace utilities as a single binary.
http://www.busybox.net/
Then it is simply a matter of booting from some storage to a live system and running some controlling logic through init that interfaces with your hardware. If need be you can access the live system and run the busybox utilities. Really, the only difference is that the userspace is much leaner than in a normal distribution and you will be working 'closer' to the kernel in terms of objectives.
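As a small illustration of that controlling logic, the userspace side is often just POSIX file I/O on a device node exported by your driver; /dev/mysensor below is a hypothetical node created by a driver you would write.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* open the (hypothetical) device node created by your driver */
    int fd = open("/dev/mysensor", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/mysensor");
        return 1;
    }

    for (;;) {
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf - 1);   /* blocks until data arrives */
        if (n <= 0)
            break;
        buf[n] = '\0';
        printf("sensor: %s\n", buf);                 /* act on the reading */
    }

    close(fd);
    return 0;
}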
Also look into real-time Linux, if you need some formal promise of task completion:
http://www.realtimelinuxfoundation.org/
I suspect the hardest bit will be booting/persistent storage and interfacing with your hardware if it is exotic. If you are unfamiliar with Linux booting, then
http://www.cromwell-intl.com/unix/linux-boot.html
might help.
In short, if you have not developed at a deep level for Linux, built your own distro, or gained kernel experience, then you might find the programming hard going.
http://www.linuxdevices.com/ might also help.
Good Luck
In order to work with Unix/Linux you should get into the Unix philosophy: http://www.faqs.org/docs/artu/ch01s06.html
I consider the whole book a quite interesting read: http://www.faqs.org/docs/artu/index.html
Here you can find a free Linux distro for embedded targets plus bootloader to get you started: http://www.denx.de/wiki/DULG/WebHome
I was in a very similar predicament not too long ago. I bought and read Embedded Linux Primer and it was a very helpful way to make the mental-transition to a high level OS (from a microcontroller perspective).
If you have the "time to 'take your time'," you could obviously make the transition. But if you need to get up to speed quickly, you may want to strongly consider getting a technical mentor to help guide you.
You also may find it useful to work your way into Linux by starting out with uClinux. It's basically Linux on a microcontroller. You could get a feel for the kernel without the virtual memory aspect of it as a transition. See if uClinux supports a microcontroller that you are already familiar with and see how the kernel interacts with that architecture.
I agree that the Embedded Linux Primer book is great for getting your brain wrapped around embedded Linux. You're better off sticking with C for now. C++ can wait, and it's more useful for applications, not driver code.
When you're comfortable with how uClinux operates, then you could start out with a normal Linux kernel on a microprocessor architecture such as ARM that has an MMU and virtual memory.
Just my two cents!