Embedded platform development in (!C) - embedded

I'm curious to see how popular the alternatives to C are in the embedded developer world, e.g. Ada...
I've only ever used C (with a little bit of assembler), but then my targets have very limited resources. Is there a move elsewhere in this space to something else? What is winning the war in set-top boxes?
If !C, what was the underlying reason?
Compiler support for target
Trace / static analysis tools
other...
Thanks.

Forth is quite popular for embedded development.
Also, while Smalltalk is probably not popular in the embedded community, embedded development is definitely popular in the Smalltalk community.

When you say "embedded development", keep in mind that you have to consider the scale of the project.
When programming something on the scale of a microcontroller or the firmware for an ASIC, you tend to see C and assembly dominate the scene. Embedded developers tend to "specialize" in these languages since compilers for them are available for nearly every embedded target platform. If your project migrates from, say, a chip with a PowerPC core to a chip with an ARM core, you can be fairly confident that your C code will not be overly difficult to port over.

Some chips do have compilers available for other languages, but typically they do not match the C compiler in terms of efficiency of the resulting binary. Since embedded systems are often low on resources, system designers want to make their code as efficient as possible (also one reason why you see a lot of assembly language code).

I have seen development tools available for languages such as C++, Pascal, Basic, and others, but they are typically niche tools that are not mature enough to match the efficiency of the available C compilers. Debugging tools for these languages also tend to be harder to find than what is available for C/assembly.
You also mentioned set-top boxes. Embedded systems on this scale can pack the equivalent power of a desktop computer from 7-8 years ago. Their available RAM, storage space, and processing power allow them to run full-featured operating systems and interpreters for higher-level languages. On these more powerful systems you will still see C and assembly language being used (for driver code, if nothing else), but other languages (such as Java, Lua, Tcl, Ruby, etc.) are becoming more and more common.

Using interpreted languages makes porting code from one platform to another even easier, as long as the platform has sufficient resources to handle the overhead of the language interpreter. Any low-level code that interfaces directly with hardware (drivers) will still typically use assembly or C, since high-level languages don't always have the capability to do this sort of thing. Anything running as an application on top of the embedded operating system can usually be developed and tested inside an emulator or virtual machine, so you will see a lot of code being developed in whatever language the developer happens to be comfortable with.
TL;DR version: C is popular because it is a versatile language that nearly all developers are familiar with. Assembly is popular because it allows for low-level hardware access in ways that would otherwise be difficult or impossible. Interpreted/scripted languages such as Java are becoming more popular, but the resource requirements of their interpreters may be too much for some embedded systems to handle. The quality and variety of development/debugging tools available for C and assembly also make those options attractive.
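To make the "low-level hardware access" point concrete, here is a minimal C sketch of the memory-mapped register access that driver code typically does; the addresses and bit layout are hypothetical stand-ins for what a real part's datasheet would specify:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART registers; real addresses and
       bit positions come from the target chip's datasheet. */
    #define UART_BASE   0x40001000u
    #define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x00))
    #define UART_DATA   (*(volatile uint32_t *)(UART_BASE + 0x04))
    #define TX_READY    (1u << 0)

    static void uart_putc(char c)
    {
        while (!(UART_STATUS & TX_READY))
            ;                      /* spin until the transmitter is free */
        UART_DATA = (uint32_t)c;   /* volatile write goes straight to hardware */
    }

The volatile qualifier is what keeps the compiler from optimizing away or reordering these accesses, which is exactly the kind of control most interpreted languages don't give you.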

Perhaps not quite the large step from C you're looking for, but C++ is also reasonably popular for embedded projects.

I haven't used it myself, but Bascom is quite popular for AVR microcontrollers. It is a Basic IDE that lets you interact with the peripherals very easily. I've met hardware people that successfully use it.

Yes. Java is becoming more popular - many processors have added instructions that pertain primarily to Java and similar languages (.NET). Also, uClinux runs on microcontrollers, so you can use practically any language for some of the larger micros.
Basic is still common, as is assembly.
You'll see Ada in certain gov't projects.
And some engineers are even putting Lua and other interpreters on their micros so their customers can extend the functionality.
But C is still dominant.
-Adam

In the early '90s I did a lot of embedded development on the 8051 using Intel PLM51 and the DCX51 operating system.
PLM is a very simple language, but very powerful.
We now use C.

If you work in the smartcard space, you get to use Java Card. Yep, Java, on an 8-bit micro. It's kinda fun, actually. I get to develop in Eclipse, test ( & debug!) on the PC simulator, and can be confident that it'll run the same on the card. It's just such a pity Java is a terrible language for embedded apps :)

I've used EC++ (Embedded C++) quite extensively.
Also, PICBasic has been popular with the PIC'ers for eons now.

I have used Ada in embedded projects for military avionics because of customer requirements. There are lots of Ada tools for embedded development, but most of them are very expensive. Personally, I would just use C.

There is a Pascal compiler for the 8051.

JAL

There is a group of folks working to make Lua a viable option for embedded work. They are targeting primarily 32-bit ARMs with 256K FLASH and 64K RAM or better, and seem happy with their work so far.
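As a sketch of what that looks like from the C side, embedding a Lua interpreter and exposing one function to user scripts takes only a few calls to the standard Lua C API; set_led and the commented-out board_set_led hook are invented for the example:

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* Hypothetical hardware hook exposed to customer scripts. */
    static int l_set_led(lua_State *L)
    {
        int on = lua_toboolean(L, 1);
        /* board_set_led(on);  -- assumed board-support function */
        (void)on;
        return 0;  /* number of results returned to Lua */
    }

    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        lua_register(L, "set_led", l_set_led);
        /* A customer's extension script can now drive the hardware: */
        luaL_dostring(L, "set_led(true)");
        lua_close(L);
        return 0;
    }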
They are partly inspired by the classic BASIC-Stamp, a BASIC interpreter running in a moderately powerful PIC with the program itself stored in a serial EEPROM device.
At work, I am still maintaining a customer's embedded system that is written in a compiled flavor of BASIC running on a Zilog Z180 CPU. 1980's technology all around, with most of the system still built out of 24-pin DIP packages in sockets. The compiler runs under CP/M-80 running in a Z80 simulator, that itself runs in the MS-DOS simulator built into Windows. Aside from the sheer amazement that anything productive can be done this way (and that you can still buy 27C256 UV erasable EPROMs, and that my nearly 20 year old Data I/O PROM programmer still works), I really wish the customer could afford to move to a new hardware design so the system could be rewritten in a maintainable language.

Depends on the microcontroller: many of them have C, but the compilers are horrible. Assembler is usually easy and the best performing, most efficient, etc. Ones like the MSP, AVR, and ARM are good targets for C compilers, and for those I would and do use C (depending on the problem).
I would stick to C or assembler; you are wasting memory, performance, and resources using anything else.

Pascal and Modula-2 work fine too. Essentially they are pretty much equivalent to C, except for the inability to do alloca (though some have that as an extension).
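For readers who haven't met it: alloca carves the allocation out of the current stack frame, so it is freed implicitly on return. A minimal sketch (alloca is a non-standard extension; on glibc it lives in <alloca.h>):

    #include <alloca.h>
    #include <stdio.h>
    #include <string.h>

    void greet(const char *name)
    {
        /* Stack allocation: released automatically when greet() returns. */
        size_t len = strlen(name) + sizeof("Hello, !");
        char *buf = (char *)alloca(len);
        snprintf(buf, len, "Hello, %s!", name);
        puts(buf);
    }   /* buf's storage vanishes here; no free() needed */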
But the core problem will be the same with any !C compiler: which do you prefer, the better compiler/toolchain or the language of preference?
Although I like the Wirthian languages most, I simply use C, and live with the consequences, simply because the toolchain is better.
There have been examples in the past (Pascals, or even tightly compiled Basics), but C is mostly the norm. I never understood why.

I worked on a device which ran some incredibly old version of Python (1.4 or something). There was no way to debug it (other than printing debug messages), so when your code hit an exception everything would just stop and you scratched your head for an hour. Whenever you made a change and upgraded the code it was running, it took about 10 minutes to interpret and compile it.
Needless to say we scrapped that and replaced the microcontroller with one that ran C.

See this related question:
What languages are used for real-time systems programming?
In response to your "why" question, from the standpoint of government/military acquisition, there is a perception that Java (language, platform, etc...) is the lingua franca these days and that economies of scale in the language will reduce acquisition and maintenance cost. There's also a hope that one can efficiently train a competent Java programmer to be a reasonable RT/embedded programmer in Java faster than if they are required to learn a new language. This rationale is suspect, in my opinion, but it does answer the "why" question.

If you include the iPhone as an embedded platform, then Objective-C.

Considering how many times I've had a Java out-of-memory exception on my phone (most of the time when I do anything remotely interesting), I'd run away from Java like a bat out of a hot place.
I've heard that Erlang was designed for use in cell phones. I think Lisp is a good architecture for remote device support, if the device can handle the run-time.

A lot of home-brew users and small companies needing a cheap solution have found that the Tiny Tiger and BASIC Stamp (using BASIC) meet their needs.

Related

Why do almost all OO languages compile to bytecode?

Of the object-oriented languages I know, pretty much all but C++ and Objective-C compile to bytecode running on some sort of virtual machine. Why have so many different languages settled on compiling to bytecode, as opposed to machine code? Is it possible in principle to have a high-level memory-managed OOP language that compiles to machine code?
Edit: I'm aware that multiplatform support is often advanced as an advantage of this approach. However, it's quite possible to compile natively for multiple platforms without writing a new compiler per platform. One can, for example, emit C code and then compile that with GCC.
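For instance, a method call in a hypothetical OO source language could be lowered to plain C along these lines (all names invented for illustration):

    #include <stdio.h>

    /* A vtable-based lowering of `obj.speak()` from a hypothetical OO source. */
    typedef struct Animal Animal;
    typedef struct { void (*speak)(Animal *); } AnimalVTable;
    struct Animal { const AnimalVTable *vt; };

    static void dog_speak(Animal *self) { (void)self; puts("woof"); }
    static const AnimalVTable dog_vt = { dog_speak };

    int main(void)
    {
        Animal d = { &dog_vt };
        d.vt->speak(&d);   /* the virtual call: one indirection, native code */
        return 0;
    }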
There's no inherent reason; in fact, this is a kind of coincidence. OOP is now the leading concept in "big" programming, and so are virtual machines.
Also note that there are two distinct parts to a traditional virtual machine, the garbage collector and the bytecode interpreter/JIT compiler, and these parts can exist separately. For example, the Common Lisp implementation SBCL compiles programs to native code, but at runtime heavily uses garbage collection.
This is done to give a VM or JIT compiler the chance to compile the code on demand, optimally for the architecture on which the code is executed. Also, it allows cross-platform bytecode to be created once and then executed on multiple hardware architectures. This allows hardware-specific optimizations to be placed into the compiled code.
Since bytecode is not limited to a microarchitecture, it can be smaller than machine code. Complex instructions can be represented, versus the much more primitive instructions available in modern-day CPUs, since the constraints on the design of CPU instructions are very different from the constraints on designing a bytecode architecture.
Then there's the issue of security. The bytecode can be verified and analyzed prior to execution (i.e., no buffer overflows, variables of a certain type being accessed as something they are not), etc...
Java uses bytecode because two of its initial design goals were portability and compactness. Those both came from the initial vision of a language for embedded devices, where fragments of code could be downloaded on the fly.
Python, Ruby, Smalltalk, JavaScript, awk and so on use bytecode because writing a native compiler is a lot of work, but a textual interpreter is too slow; bytecode hits a sweet spot of being fairly easy to write, but also satisfactorily quick to run.
I have no idea why the Microsoft languages use bytecode, since for them neither portability nor compactness is a big deal. A lot of the thinking behind the CLR came out of computer scientists in Cambridge, so I imagine considerations like ease of program analysis and verification were involved.
Note that as well as C++ and Objective C, Eiffel, Ada 9X, Vala and Go are OO languages (of varying vintage) that are compiled straight to native code.
All in all, I'd say that OO and bytecode do not go hand in hand. Rather, we have a coincidental convergence of several streams of development: the traditional bytecoded interpreters of scripting languages like Python and Ruby, the mad Gosling masterplan of Java, and whatever Microsoft's motives are.
The biggest reason why most interpreted languages (not specifically OO languages) are compiled to bytecode is for performance. The most expensive part of interpreting code is transforming text source to an intermediate representation. For instance, to perform something like:
foo + bar;
The interpreter would have to scan 10 characters, transform them into 4 tokens, build an AST for the operation, and resolve three symbols (+ is a symbol, which depends on the types of foo and bar), all before it can perform any action that actually depends on the run-time state of the program. None of this can change from run to run, so many languages try to store some form of intermediate representation.
Bytecode, rather than a stored AST, has a few advantages. For one, bytecodes are easy to serialize, so the IR can be written to disk and reused at the next invocation, further reducing interpretation time. Another is that bytecode often takes up less actual RAM. Significantly, bytecode representations are often easy to just-in-time compile, because they are often structurally similar to typical machine code.
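For illustration, here is a toy sketch in C of the dispatch loop such a stored representation enables; the opcodes are invented, not any real VM's format:

    #include <stdio.h>

    /* Invented opcodes for a toy stack machine. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        int stack[64], sp = 0;
        for (int pc = 0; ; ) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++]; break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* "foo + bar" with foo=2, bar=3: already scanned, parsed, resolved. */
        const int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(code);
        return 0;
    }

All the expensive front-end work happens once, at compile time; each run only pays for the loop above.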
As another data point, the D programming language is GC'ed, OO, and a lot higher level than C++ while still being compiled to native code.
Bytecode is a significantly more flexible medium than machine code. First, it provides the basis for platform portability without the need for a compiler or shipping source code, so a developer can distribute a single version of the application without needing to give up the source, require complex developer tools, or anticipate potential target platforms. While the latter is not always practical, it does happen, especially with developer libraries: say I distribute a library that I've only tested on Windows, but someone else uses it on Linux or Android. It happens quite frequently, actually, and most of the time it works as expected.
Bytecode is also generally faster than a plain interpreter, because it's closer to machine instructions and therefore faster to translate to machine instructions. Not all OO languages are compiled: Ruby, Python, and even JavaScript are interpreted, so they aren't compiled to anything, and the Ruby interpreter has to take a very flexible language and turn it into instructions; that flexibility comes at a price paid at runtime: parse text, generate an AST, translate the AST to machine code, etc. Bytecode also makes it easy to do optimizations like JIT, where bytecode is translated to machine code directly, and it even opens the possibility of creating optimizations for specific hardware.
Finally, just because one language compiles to bytecode doesn't preclude other languages from taking advantage of that bytecode. Any optimization applied to that bytecode now benefits any other language that knows how to translate itself to that bytecode. That makes the bytecode a very important layer of reusability for other languages.
OO and bytecode compilation go back to the 70s with Smalltalk, and I'm sure someone will say LISP as early as the 50s/60s. But it really wasn't until the 90s that it started to be used in production systems on a large scale.
Native compilation sounds like the optimal path, which is probably why our industry spent 20 years or more thinking it was THE ANSWER to all our problems, but over the last 15 years we've seen bytecode compilation take the stage, and it's been a significant advantage over what we did before. Looking back, we realize how much time we wasted natively compiling everything, mostly by hand.
I agree with Chubbard's answer, and I'd add that in OO languages type information can be very important for enabling optimizations by virtual machines or last-level compilers.
It is easier to develop an interpreter than a compiler.
Effort in development of...:
interpreter < bytecode-interpreter < bytecode-jit-compiler < compiler-to-platform-independent-language < compiler-to-multiple-machine-dependent-assembler.
It is a general trend to stop development at JIT compilers, for the sake of platform independence. Only the languages preferred with respect to performance and to research in theoretical computer science are, and will be, developed in ALL possible directions, including new bytecode interpreters, even where good and advanced compilers to platform-independent languages and to different machine-dependent assemblers already exist.
The research in OOP languages is pretty, let's say, dull compared to functional languages, because really new language and compiler technologies are more easily expressed with mathematical category theory and mathematical descriptions of Turing-complete type systems. In other words: functional research is nearly functional in itself, while imperative languages are nearly just assembler frontends with some syntactic sugar. OOP languages tend to be imperative languages, because functional languages already have closures and lambdas. There are other ways to implement Java-like "interfaces" in functional languages, and there is just no need for additional object-oriented features.
In Haskell, for instance, adding OOP-style programming would probably be more than a few steps back in technology; there would be no point in using it. (And that is not only IMHO... have you ever heard of GADTs or multi-parameter type classes?) There might even be better ways to dynamically create objects with interfaces for communicating with OOP languages than changing the language itself. But there are other functional languages, too, that explicitly combine functional and OOP aspects. There is just more science around mainly functional languages than around non-functional OO languages.
OO languages cannot easily be compiled to other OO languages if they are in some way more "advanced". Usually they have features like stack protection, advanced debugging abilities, abstract and inspectable multithreading, dynamic object loading from files or from the internet... Many of these features are not, or not easily, realisable with C or C++ as a compiler backend. The functional language LISP (which is 50 years old!) was AFAIK the first with a garbage collector. As a compiler backend, LISP used a hacked version of the language C, because plain C did not allow some of those things that assembler did allow, e.g. proper tail calls or tables-next-to-code. C-- allows that.
Another aspect: imperative languages are intended to run on a specific architecture, i.e. C and C++ programs run only on those architectures they are programmed for. Java is more extreme: it runs only on a single architecture, a virtual one, which itself runs on others.
Functional languages are usually pretty architecture-independent by design: LISP was developed to be so immensely architecture-unspecific that it could be compiled to genetic code in some distant future. Yes, like programs running in living biological cells.
With LLVM bytecode, functional languages will most likely be compiled to bytecode in the future, too. Most imperative languages will most likely still have the same problems they have now, inherited from not abstracting far enough. Well, I'm not that sure about Clang and D, but those two are not "the most" anyway.

What are alternatives to the Java VM?

As Oracle sues Google over the Dalvik VM, it becomes clear that you cannot implement a Java VM without a license from Oracle (EDIT: Matthew Flaschen points out that the claims of Oracle may not be valid. Anyway, we currently have a situation where Oracle threatens VM implementations.). That may become the death of open-source implementations of Java (like Apache Harmony).
I don't want to discuss the impact or the legitimacy of this lawsuit, but as a Java programmer I want to take a deeper look into the alternatives, to be prepared for every case. As I see the creation of a compiler as a minor problem, my main interest is alternative VM implementations that serve a similar purpose as the JVM.
The VM I'm looking for, should meet some conditions:
free of patent-issues
an Open-Source-implementation exists
potential for optimizations/good performance
platform independent (the VM can be ported to different platforms without bigger hurdles)
Please add some recommendations for me.
LLVM is a really good optimizing, low-level virtual machine. It can support languages like C and C++, and does not have built-in support for high-level features like garbage collection.
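To give a flavor of how low-level it is, here is a minimal sketch using LLVM's C API to build an add function and print its IR; this assumes the llvm-c development headers are installed, and details may need tweaking for your LLVM version:

    /* build (roughly): cc demo.c $(llvm-config --cflags --ldflags --libs core) */
    #include <llvm-c/Core.h>

    int main(void)
    {
        LLVMModuleRef mod = LLVMModuleCreateWithName("demo");
        LLVMTypeRef i32 = LLVMInt32Type();
        LLVMTypeRef params[] = { i32, i32 };
        LLVMValueRef fn = LLVMAddFunction(mod, "add",
                                          LLVMFunctionType(i32, params, 2, 0));

        LLVMBuilderRef b = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
        LLVMValueRef sum = LLVMBuildAdd(b, LLVMGetParam(fn, 0),
                                        LLVMGetParam(fn, 1), "sum");
        LLVMBuildRet(b, sum);

        LLVMDumpModule(mod);   /* prints the textual IR */
        LLVMDisposeBuilder(b);
        LLVMDisposeModule(mod);
        return 0;
    }

Note there is no object model and no collector in sight: anything like that has to be built on top, which is exactly why it sidesteps the JVM comparison.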
VMKit is an implementation of the Java and CLI virtual machines on top of LLVM. Since it uses Java bytecode, this probably wouldn't help with the patent issues.
HLVM is another interesting high level virtual machine built on top of LLVM. It is probably different enough to avoid most well known patents, but it is mainly targeted at numerical computing and functional programming.
On the dynamically typed side, there is Parrot.
I am actually working on a compiler and VM for a language of my own design, but don't count on it ever being finished. ;-)
Keep in mind that any large piece of software will infringe on numerous patents, the important thing is how well known they are (and how much the patents' owners actively seek out infringers). Of course, the whole patent system is absurd, and we would be much better off getting rid of it.
I don't think there is any significant piece of software that is free from patent issues.
If you are an independent developer or working for a smaller company you probably won't get hit directly by the problems though. It's unlikely that big companies holding patents will go after lots of small claims - it's an expensive process and causes a lot of resentment. SCO tried something like that and it didn't work out too well for them.
I would concentrate on finding the best tool for the job without worrying too much about the patent issues, otherwise you will never get anything done.
GraalVM is a research project developed by Oracle Labs and already in production at Twitter. I can't believe no one has mentioned it yet. GraalVM is a very promising extension of the Java virtual machine that supports more languages and execution modes, running applications like JavaScript, Python, Ruby, R, JVM-based languages, and LLVM-based languages such as C and C++. The GraalVM project includes a new high-performance Java compiler, itself called Graal, which can be used in a just-in-time configuration on the HotSpot VM, or in an ahead-of-time configuration on the SubstrateVM. The main goal of the project is to improve the performance of the Java virtual machine's base language to match the performance of native languages. Let's sum up the novel features this project offers and briefly explain, according to the docs, why you should adopt it.
Polyglot: All languages (even LLVM-based ones) share the same VM and its capabilities. Zero-overhead interoperability between programming languages allows you to write polyglot applications and select the best language for your task.
Native: Native images compiled with GraalVM ahead-of-time improve the startup time and reduce the memory footprint of JVM-based applications.
Embeddable: GraalVM can be embedded in both managed and native applications. There are existing integrations into OpenJDK, Node.js, Oracle Database, and MySQL. GraalVM removes the isolation between programming languages and enables interoperability in a shared runtime. It can run either standalone or in the context of OpenJDK, Node.js, Oracle Database, or MySQL.
Performance: Graal benchmark reports show great performance improvements in almost all of its implementations, thanks to the way GraalVM performs object allocations.
If you're not convinced by now that it's a good choice and a really awesome project, you can watch this talk by Christian Thalinger on why Graal is a good fit for Twitter.

How hard is it for a software developer to learn how to program a microcontroller?

I'm a software developer. I've been programming in high level languages for a few years.
I would like to know how to take my first step into programming hardware. Not something crazy complicated, but maybe some ordinary CE device? Assume I don't need to put the PCB together with various components, but just need to program the tiny CPU.
How low-level do I have to go? ASM? C? Manipulating registers? Or are the dev kits quite high-level now? Is Java even in the picture? OO coding in hardware, is that even a dream or a reality? I need a reality check.
I also tend to learn better with books or sites that are written in a tutorial format. Something that guides the way for me from something simple to something more complex. Any recommendations? Maybe something that will introduce me to the popular hardware (microprocessor/micro-controller) available today?
Much appreciated, thank you everyone.
The actual programming isn't a big deal. The frustrating, annoying part is getting your development environment setup and getting the tools working. Once you've done that, you're half done.
I'd suggest buying a development kit ('dev kit') that has USB built in and works with your chosen OS. Get that working, and you're halfway done.
If you're missing the knowledge, it's also important to know the basics of how a processor works. You'll be programming at a much lower level than any other programming, so the fundamentals are a bit more important.
If you know C, then it's only a matter of learning the toolchain steps to download the code.
An easy place to start (cheap hardware/software): http://www.arduino.cc/en/Guide/HomePage
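For a sense of scale, the canonical first program on an AVR-class part is a few lines of C. A sketch, assuming an ATmega328P as found on many Arduino boards; PB5 drives the on-board LED there, but check your own board's schematic:

    /* build: avr-gcc -mmcu=atmega328p -DF_CPU=16000000UL -Os blink.c -o blink.elf */
    #include <avr/io.h>
    #include <util/delay.h>

    int main(void)
    {
        DDRB |= _BV(DDB5);           /* configure pin PB5 as an output */
        for (;;) {
            PORTB ^= _BV(PORTB5);    /* toggle the LED */
            _delay_ms(500);          /* busy-wait half a second */
        }
    }

Notice there is no OS and no main-returns: you own the whole CPU, which is most of the mental shift coming from application programming.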
I have been coding in C both as a hobby and professionally for about 16 years now, but always for userland code (i.e., programs, not kernel or drivers). Most of my jobs involved high-level languages (I have done a lot of Perl and Ruby programming, with the occasional Java, Python and shell scripting in between). I did develop a lot for MS-DOS (which was probably as close to bare-metal programming as you would get on an x86 machine), but my last job involved 5 years of Perl and Ruby on Rails web development.
That being said, I am now a senior engineer for embedded Linux development, developing drivers (including an emulator for a legacy simple microprocessor inside a kernel module) for uClinux on the Blackfin platform. There are times when my inexperience with hardware related issues (i.e., floating signal levels due to lack of a pull-up/pull-down on a pin) did get in the way, but it has been mostly a highly enjoyable and thrilling experience. As stated by others, understanding your tools is essential -- for uClinux, that meant the GNU Toolchain, which fortunately I was already familiar with due to my background on FOSS technologies.
The Blackfin is hardly an entry-level microprocessor (in particular, it does not have an MMU, which has some relevant effects on Linux development), but as already stated, you can buy a Beagleboard for around US$200 with all required accessories and start messing around with it in just a few days. If you want something simpler, there are many Arduino options out there, though if you have some real development experience under your belt I believe you will find their development environment a little limiting (I know I did).
After you get comfortable with your tools you might want to spend some money on an in-circuit emulator (or ICE). These are usually highly platform specific (both in terms of target architecture and development environment), but are highly recommended for anything beyond the usual blink-LEDs-after-button-press examples I am sure you will quickly outgrow.
In a few months you will find yourself building custom images for hackable customer devices using Buildroot and having a lot of fun. All I can say is, go for it; it's highly addictive and not particularly expensive to do nowadays.
Also something to look into is Microsoft Robotics Studio. It supports quite a lot of hardware boards (including CE), and with it, it is fairly easy to get a small robot up and running. And what cooler project is there to learn embedded programming?
It all integrates nicely with Visual Studio (Express), and their devkit also comes with a free Express edition.
Get a beagleboard. Cheap, lots of users (community support will be key), many OS options. http://beagleboard.org/
Well, if you want to know what you're doing, you need to understand the assembly language of the processor and the processor's architecture.
You will need to learn C to be competent in microcontrollers. There is no way around that.
There are some VM-level languages on embedded systems. I see the Java out-of-memory exception from time to time on my cell phone (which also helps give me a strong opinion on VM-level embedded languages).
The ARM has some support for hardware-level Java bytecodes.
Your best bet is to pick up something like the PIC or the Atmel chips and begin hacking with them.
If you want to do it with your existing hardware, get a hypervisor for your PC and begin writing a basic kernel.

Use of general-purpose scripting languages

There are many scripting language communities claiming that their language can be used for everything, but in fact nearly everybody uses it for one specific thing, e.g. web development. If I take a look at Ruby, for example, they tell you it's general-purpose, but actually everybody is using it with Rails for web development only.
Can you list some uses of popular general-purpose scripting languages for the local PC (except embedding)? Are there any?
Is the fast development usually worth having to bring the whole interpreter along with your program? And in most cases there would be some language-dependent performance and stability problems too.
best regards,
lamas
I tend to use Python for most things that aren't compute-bound, i.e. that aren't restricted by how many computations you do per second. Some of the things I've used Python for are:
General scripts to manipulate images etc. with the Python Imaging Library.
GUI frontends for command line applications using the pexpect module.
Mathematical modeling of microbial systems.
Bioinformatics.
Some web programming.
etc...
When the program/algorithm is compute-bound, I use C together with Python and ctypes. Does this fit your definition of general-purpose? It's certainly useful for a wide variety of applications, but not suitable if the program needs to crunch numbers fast.
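The C half of that split can be a single exported function; a sketch (the file and library names are just examples), loaded from Python via ctypes.CDLL:

    /* fast.c -- build with: gcc -shared -fPIC -O2 fast.c -o libfast.so
       From Python: ctypes.CDLL("./libfast.so").dot(...)               */
    #include <stddef.h>

    double dot(const double *a, const double *b, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i] * b[i];   /* the compute-bound inner loop lives in C */
        return s;
    }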
Stability: Python 2.5/2.6 is rock solid. I've never had a crash that wasn't caused by my own stupidity.
Fast development: It's definitely worth it for me. For the most part, in the field where I work, programmer time is orders of magnitude more valuable than processor time. I'm quite happy to let a program run for hours if I can write it in a few days instead of a few weeks.
I often use PHP for things that I used to use bat files for. Much easier to write. Ironically, the deployment scripts to create installable materials for my web apps from the subversion sources are written in PHP.
Python is popular in the gaming community. EVE Online is written in Python.
claiming that they can be used for everything but I often can't find any examples for that
You are basing your question on an incorrect assumption. Although, as pointed out, a Turing-complete language will be able to compute whatever you require, languages are "viewed" by most as the sum of their most useful features and productive semantics.
The reality is:
Most scripting languages can do the same things, or support the most common things via libraries.
Some languages make a subset of operations more convenient; take Perl and regular expressions as an example.
CPU time is cheap, as is RAM. Simple to understand code is the priority for most people.
The rise of the scripting languages is natural. Trying to assert that any one language, approach or level of execution is good for a whole range of situations is usually fruitless.
What do you want?
What is the best language for that?
Is it fast enough or small enough? Usually the answer is yes.
Imagine trying to use Python where you should be using Erlang, or C instead of Lisp, because you thought all languages are equal. They aren't, even though you can achieve the same things in a problem domain in most languages/platforms, with varying levels of ballache dependent on the task.
I often use Ruby for what other people would create bash/sh files for. I find Ruby syntax intuitive for batch tasks, along with a lot of other sorts of tasks (it's my go-to language).
Perl is extremely popular for general scripting on Unixes; for instance, there are package managers, websites, and maintenance scripts written in Perl.
Python is extremely popular for both web and application use.
VBA is popular for being abused to write programs inside of Access, and was also once commonly used in ASP for websites (right?).
Nobody mentioned AppleScript!
Hahah, no, seriously: Perl runs everywhere, is installed by default on (almost) any Unix-family OS (and is easy to get on Windows), and is extremely useful for gluing things together. And if you browse a bit on CPAN you'll see that it's extremely general-purpose. "Swiss army chainsaw" was intended as a slur, but I think of it fondly. Performance is good too, though it hardly ever actually matters. Larry Wall's goal was to "make easy things easy and hard things possible".
OK OK, so I'm a fanboy still, sigh.

Getting Embedded with D (the programming language)

I like a lot of what I've read about D.
Unified documentation. (That would make my job a lot easier.)
Testing capability built in to the language.
Debug code support in the language.
Forward declarations. (I always thought it was stupid to declare the same function twice.)
Built-in features to replace the preprocessor.
Modules.
Typedef used for proper type checking instead of aliasing.
Nested functions. (Cough PASCAL Cough)
In and out parameters. (How obvious is that!)
Supports low-level programming - embedded systems, oh yeah!
However:
Can D support an embedded system that's not going to be running an OS?
Does the outright declaration that it doesn't support 16-bit processors preclude it entirely from embedded applications running on such machines? Sometimes you don't need a hammer to solve your problem.
Garbage collection is great on Windows or Linux, but unfortunately embedded applications sometimes must do explicit memory management.
Array bounds checking: you love it, you hate it. Great for design assurance, but not always permissible for performance reasons.
What are the implications on an embedded system, not running an OS, for multithreading support? We have a customer that doesn't even like interrupts, much less an OS/multithreading.
Is there a D-Lite for embedded systems?
So basically, is D suitable for embedded systems with only a few megabytes (sometimes less than a megabyte), not running an OS, where max memory usage must be known at compile time (per requirements), and possibly on something smaller than a 32-bit processor?
I'm very interested in some of the features, but I get the impression it's aimed at desktop application developers.
What is it specifically that makes it unsuitable for a 16-bit implementation? (Assuming the 16-bit architecture could address sufficient amounts of memory to hold the runtime, either in flash memory or RAM.) 32-bit values could still be calculated, albeit more slowly than 16-bit ones and requiring more operations, using library code.
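Such library code is mundane; for illustration, a sketch in C of a 32-bit add built from 16-bit halves, the kind of helper a compiler runtime supplies on a 16-bit target:

    #include <stdint.h>

    /* A 32-bit value as two 16-bit halves, as a 16-bit target might hold it. */
    typedef struct { uint16_t lo, hi; } u32emu;

    static u32emu add32(u32emu a, u32emu b)
    {
        u32emu r;
        r.lo = (uint16_t)(a.lo + b.lo);
        /* a carry came out of the low half if the 16-bit sum wrapped around */
        uint16_t carry = (r.lo < a.lo) ? 1 : 0;
        r.hi = (uint16_t)(a.hi + b.hi + carry);
        return r;
    }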
I have to say that the short answer to this question is "No".
If your machines are 16-bit, you'll have big problems fitting D into them - it is explicitly not designed for that.
D is not a light language in itself; it generates a lot of runtime type info that normally is linked into your app and that is also needed for typesafe variadics (and thus the standard formatting features, be it Tango or Phobos). This means that even the smallest applications are surprisingly large in size, which may disqualify D from systems with low RAM. Also, D with the runtime as a shared lib (which could alleviate some of these issues) has been little tested.
All current D libraries require a C standard library below them, and thus typically also an OS, so even that works against using D. However, experimental kernels in D do exist, so it is not impossible per se. There just wouldn't be any libraries for it, as of today.
I would personally like to see you succeed, but doubt that it will be easy work.
First and foremost, read larsivi's answer. He's worked on the D runtime and knows what he's talking about.
I just wanted to add: some of what you asked about is already possible. It won't get you all the way, and a miss is as good as a mile here, but still, FYI:
Garbage collection is great on Windows or Linux, but unfortunately embedded apps sometimes must do explicit memory management.
You can turn garbage collection off. The various experimental D OSes out there do it. See the std.gc module, in particular std.gc.disable. Note also that you do not need to allocate memory with new: you can use malloc and free. Even arrays can be allocated this way; you just need to attach a D array around the allocated memory using a slice.
Array bounds checking: you love it, you hate it. Great for design assurance, but not always permissible for performance reasons.
The specification for arrays specifically requires that compilers allow bounds checking to be turned off (see the "Implementation Note"). gdc provides -fno-bounds-check, and in dmd, using -release should disable it.
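In C terms, the check being disabled amounts to roughly the following; d_array_oob is a hypothetical stand-in for whatever error hook the runtime actually uses:

    #include <stddef.h>

    extern void d_array_oob(void);   /* hypothetical runtime error hook */

    /* What a bounds-checked indexing of arr[i] compiles down to, conceptually. */
    static int checked_index(const int *arr, size_t len, size_t i)
    {
        if (i >= len)
            d_array_oob();   /* with -release / -fno-bounds-check this vanishes */
        return arr[i];
    }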
What are the implications on an embedded system, not running an OS, for multithreading support? We have a customer that doesn't even like interrupts, much less an OS/multithreading.
This one I'm less clear on, but given that most C runtimes allow turning off multithreading, it seems likely one could get the D runtime to disable it as well. Whether that's easy or possible right now, though, I can't tell you.
The answers to this question are outdated:
Can D support an embedded system that's not going to be running an OS?
D can be cross-compiled for ARM Linux and for ARM Cortex-M. Some projects aim at creating libraries for Cortex-M architectures, like MiniLibD for the STM32, or this project, which uses a generic library for the STM32. (You could implement your own minimalistic OS in D on ARM Cortex-M.)
Does the outright declaration that it doesn't support 16-bit processors preclude it entirely from embedded applications running on such machines? Sometimes you don't need a hammer to solve your problem.
No, see the answer above... (But I would not expect "smaller" architectures than Cortex-M to be supported in the near future.)
Garbage collection is great on Windows or Linux, but unfortunately embedded applications sometimes must do explicit memory management.
You can write garbage-collection-free code. (The D Foundation seems to aim at a "GC-free compliant" standard library Phobos, but that is work in progress.)
Array bounds checking: you love it, you hate it. Great for design assurance, but not always permissible for performance reasons.
(As you said, this depends on your "personal taste" and design decisions. But I would assume an acceptable performance overhead for bounds checking, given the background of the D compiler developers and D's design aims.)
What are the implications on an embedded system, not running an OS, for multithreading support? We have a customer that doesn't even like interrupts, much less an OS/multithreading.
(What is the question? One could implement multithreading using D's language capabilities, e.g. as explained in this question. BTW: if you want to use interrupts, consider this "hello world" project for a Cortex-M3.)
Is there a D-Lite for embedded systems?
The SafeD subset of D targets the embedded domain.