Reusable knowledge going from Embedded to Desktop

I'm thinking about switching my path "slightly" by going into desktop development (VC++, MFC, C#, etc.) after about 8 years in embedded telecom systems development (C, make, Symbian, a hundred different compilers, etc.).
My concern, however, is that my experience with embedded systems may not give me much value when going into desktop development. For example, the domain-specific problems and environments I've worked with for so long may give me little to negotiate salary with, since they bear little worth on the desktop.
I think this place might be good for input on this.
So, the Q:
If you disregard the obvious generic experience on programming language level, give an example of something you have learned working with embedded systems that you could reuse when working in a desktop environment.
PS:
I should note that I'm no beginner in the desktop area - for many years now, all my hobby projects have been focused on desktop development.

Embedded engineers in general tend to be more disciplined when it comes to validating operations and dealing with finite resources.
This can also translate into coming up with an exception handling strategy earlier on.
The quintessential example is checking the return value of malloc. I have seen very little desktop software check it consistently, but it's commonplace in embedded environments.
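A minimal C sketch of that habit (the copy_buffer helper is invented for the example):

#include <stdlib.h>
#include <string.h>

/* copy_buffer: duplicate len bytes, refusing to continue if the
   allocation fails instead of silently dereferencing NULL. */
char *copy_buffer(const char *src, size_t len)
{
    char *dst = malloc(len);
    if (dst == NULL)
        return NULL;   /* the caller must have a strategy for this path */
    memcpy(dst, src, len);
    return dst;
}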

The discipline of keeping a clean, well-organized source tree is the key skill that translates well to the "desktop experience". I've noticed that the embedded projects I've written and picked up are often WAY cleaner than their desktop counterparts.

Many desktop-only developers could benefit from the experience of making a program fit in 128K of FLASH and 32K of SRAM, not to mention communicating meaningfully with a user through only an LED or two and a couple of buttons. Making that a requirement might reduce some of the endemic code bloat in the applications industry. :-)
Even if you don't switch tracks to straight application development, the embedded experience translates well to driver development, as well as to low-level utilities and long-running services. All of these are also domains where the disciplines that are nearly second nature to a successful embedded developer remain valuable.

I was a desktop developer for almost 5 years before switching to an embedded environment.
I find working in an embedded environment more challenging, as we have to deal with memory limitations, slow CPU speed, cross-compilation issues, etc.
Having learned a lot of patience, discipline, and low-level intricacies, I expect desktop development to be a walk in the park.

State machines and event-driven programming on embedded systems are not that different from event-driven programming on the desktop. The depth of experience you have with these coding techniques on embedded systems, especially telecoms embedded systems, should make you a great desktop programmer.
Similarly, your experience with communications protocols should transfer nicely to the desktop. Most desktop applications have some involvement with the network.
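As a rough sketch of how similar the technique looks in both worlds, here is a minimal table-driven state machine in C; the states, events and handlers are invented for the example, and the same dispatch shape works whether the events come from a protocol stack or from a GUI message loop:

#include <stdio.h>

typedef enum { STATE_IDLE, STATE_CONNECTED, STATE_COUNT } state_t;
typedef enum { EV_CONNECT, EV_DISCONNECT, EV_DATA, EV_COUNT } event_t;

typedef state_t (*handler_t)(void);

static state_t on_connect(void)     { puts("link up");   return STATE_CONNECTED; }
static state_t on_disconnect(void)  { puts("link down"); return STATE_IDLE; }
static state_t on_data(void)        { puts("payload");   return STATE_CONNECTED; }
static state_t stay_idle(void)      { return STATE_IDLE; }
static state_t stay_connected(void) { return STATE_CONNECTED; }

/* One row per state, one column per event. */
static handler_t table[STATE_COUNT][EV_COUNT] = {
    [STATE_IDLE]      = { on_connect,     stay_idle,     stay_idle },
    [STATE_CONNECTED] = { stay_connected, on_disconnect, on_data   },
};

int main(void)
{
    state_t s = STATE_IDLE;
    event_t script[] = { EV_CONNECT, EV_DATA, EV_DISCONNECT };
    for (size_t i = 0; i < sizeof script / sizeof script[0]; i++)
        s = table[s][script[i]]();   /* dispatch each incoming event */
    return 0;
}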


Where is Smalltalk-80 best used?

I want to know which application/programming domains are most suitable for Smalltalk. Could anyone please provide some useful links that could answer my query?
Through googling I learned that some companies use it for:
logistics and foreign trade application
desktop, server and script development
data processing and logistics, scripts and presentations
but I can't find documents/research papers that can tell me which programming domain Smalltalk-80 (or Smalltalk) is best suited for.
Some of the programming domains are:
- Artificial intelligence reasoning
- General purpose applications
- Financial time series analysis
- Natural language processing
- Relational database querying
- Application scripting
- Internet
- Symbolic mathematics
- Numerical mathematics
- Statistical applications
- Text processing
- Matrix algorithms
I hope you guys can help me. I am doing this for my case study. Thanks in advance.
It's a general purpose programming language. To paraphrase Kent Pitman on the question of what Common Lisp is useful for:
...Please don't assume [Smalltalk] is only useful for Animation and Graphics, AI, Bioinformatics, B2B and E-Commerce, Data Mining, EDA/Semiconductor applications, Expert Systems, Finance, Intelligent Agents, Knowledge Management, Mechanical CAD, Modeling and Simulation, Natural Language, Optimization, Research, Risk Analysis, Scheduling, Telecom, and Web Authoring just because these are the only things they happened to list.
It's particularly suited for applications that cannot have downtime - it's quite normal to patch a running server in deep ways (say, by changing the shape of your classes) without taking the server down - and for systems that are very complex or have rapidly changing requirements.
Smalltalk has seen quite substantial growth recently in web-based applications, thanks to innovations and fresh approaches in the Aida/Web, Iliad and Seaside Smalltalk web frameworks.
In general, Smalltalk is used for very complex information systems; let me mention just two:
Finance: Kapital, a risk management system at JP Morgan
Manufacturing: ControlWorks, for chip manufacturing at AMD
My goal has been to do a brain dump into software, and I have found Smalltalk to be very well suited for that. Smalltalk makes it easy to put my ideas down in code, and it provides feedback to my thinking. The ability to debug infinitely deep at any point in the execution just enhances my understanding of the problem to be solved. Then it allows me to carry out my solution most naturally.
Aik-Siong Koh
I'm afraid you will get as many answers as there are users of Smalltalk. For some it's a "way of life"; for others it's a learning process, and in the end they end up "stranded" at the granddaddy of the OO languages. Some use their Smalltalk as a kind of shell for "IT problems".
For me the answer is application development. Now this is definitely a wide field. As you figured out, it is used quite a lot in software for economic/business applications, and that is where I'm using it: I've decided to use it for my web development projects, which are related to "business".
The domains you named are all suitable for Smalltalk. Smalltalk shows its strengths in development for systems that are engineering-time limited, instead of hardware-limited.
The Seaside web framework allows us to create complex web applications in a fraction of the time needed in other technologies. The Gemstone object-oriented database allows us to nearly ignore persistence issues.
Smalltalk is generally a very expressive, readable, and understandable language. Whenever a large codebase is to be maintained or code needs to be understandable to non-professionals, Smalltalk shines.
»Smalltalk is a vision of the computer as a medium of self-expression. … A humanistic vision of the computer as something everyone could use and benefit from. If you are going to have a medium for self-expression, programmability is key, because unless you can actually make the system behave as you want, you are a slave to what's on the machine. So it's really vital, and so language comes to the fore, because it's through language that you express yourself to the machine.« – Eliot Miranda
You can check this link: http://www.clubsmalltalk.org/web/index.php?option=com_content&view=article&id=183&Itemid=117 - it is a compilation of uses of Smalltalk in Latin America.
Perhaps another way of answering the question would be by stating what it might not be suitable for. One domain would be where you have "real" real-time constraints, i.e. where you would need to keep the garbage collector from kicking in. If I recall correctly, IBM's (OTI) embedded Smalltalk had a mechanism for turning off the GC, but IBM dropped that a while ago. The other domain I have not seen much of is cell phone apps. As far as I know, none of the viable Smalltalks can run on Android, but that may change. One hears of folks in Squeak/Pharo working on that. I would love to see ST running well on Android; I think the Android tablet market will be a hot one.
I should conclude by saying that in all the years I have been coding in ST (i.e., since '94), I have seen Smalltalk used in just about everything else.
I can't find documents/research papers that can tell me which programming domain Smalltalk-80 (or Smalltalk) is best suited for.
This is because Smalltalk is not a domain-specific language, but a general purpose language.
Things it has been used for in the past:
- as the operating system language for personal computers
- writing rich multimedia and near real-time applications, such as sound synthesisers
- very large corporate and government data processing systems, such as the UK's Home Office Large Major Enquiry System, or many of JPMorgan Chase's financial trading systems
- web applications, such as DabbleDB
- creating complicated development tools, such as IBM's VisualAge IDE
- experimenting and prototyping applications in early-stage development
Generally speaking, Smalltalk shines where the systems are complex and both development speed and maintainability are key factors.
I use Smalltalk to create applications to control, manage and distribute multi-platform JavaScript webapps.

How hard is it for a software developer to learn how to program a microcontroller?

I'm a software developer. I've been programming in high level languages for a few years.
I would like to know how to take my first step into programming hardware. Not something crazy complicated, but maybe some ordinary CE device? Assuming I don't need to put the PCB together with various components, but just have to program the tiny CPU?
How low-level do I have to go? ASM? C? Manipulating registers? Or are the dev kits quite high-level now? Is Java even in the picture? OO coding on hardware - is that a dream or a reality? I need a reality check.
I also tend to learn better with books or sites that are written in a tutorial format. Something that guides the way for me from something simple to something more complex. Any recommendations? Maybe something that will introduce me to the popular hardware (microprocessor/micro-controller) available today?
Much appreciated, thank you everyone.
The actual programming isn't a big deal. The frustrating, annoying part is getting your development environment set up and getting the tools working. Once you've done that, you're half done.
I'd suggest buying a development kit ('dev kit') that has USB built in and works with your chosen OS. Get that working, and you're halfway done.
If you're missing that knowledge, it's also important to learn the basics of how a processor works. You'll be programming at a much lower level than in most other programming, so the fundamentals are a bit more important.
If you know C, then it's only a matter of learning the toolchain steps to download the code.
An easy place to start (cheap hardware and software): http://www.arduino.cc/en/Guide/HomePage
I have been coding in C both as a hobby and professionally for about 16 years now, but always for userland code (i.e., programs, not kernel or drivers). Most of my jobs involved high level languages (I have done a lot of Perl and Ruby programming, with the occasional Java, Python and shell scripting in between). I did develop a lot for MS-DOS (which was probably as close to bare-metal programming as you would get on a x86 machine), but my last job involved 5 years of Perl and Ruby on Rails web development.
That being said, I am now a senior engineer for embedded Linux development, developing drivers (including an emulator for a legacy simple microprocessor inside a kernel module) for uClinux on the Blackfin platform. There are times when my inexperience with hardware related issues (i.e., floating signal levels due to lack of a pull-up/pull-down on a pin) did get in the way, but it has been mostly a highly enjoyable and thrilling experience. As stated by others, understanding your tools is essential -- for uClinux, that meant the GNU Toolchain, which fortunately I was already familiar with due to my background on FOSS technologies.
The Blackfin is hardly an entry-level microprocessor (in particular, it does not have a MMU, which has some relevant effects on Linux development), but as already stated, you can buy a Beagleboard for around US$200 with all required accessories and start messing around with it in just a few days. If you want something simpler, there are many Arduino options out there, though if you have some real development experience under your belt I believe you will find their development environment a little limiting (I know I did).
After you get comfortable with your tools you might want to spend some money on an in-circuit emulator (or ICE). These are usually highly platform specific (both in terms of target architecture and development environment), but are highly recommended for anything beyond the usual blink-LEDs-after-button-press examples I am sure you will quickly outgrow.
In a few months you will find yourself building custom images for hackable customer devices using Buildroot and having a lot of fun. All I can say is, go for it; it's highly addictive and not particularly expensive to do nowadays.
Also worth looking into is Microsoft Robotics Studio. It supports quite a lot of hardware boards (including CE), and with it, it is fairly easy to get a small robot up and running. And what cooler project is there for learning embedded programming?
It all integrates nicely in Visual Studio (express) and their devkit also comes with a free express edition.
Get a beagleboard. Cheap, lots of users (community support will be key), many OS options. http://beagleboard.org/
Well, if you want to know what you're doing, you need to understand the assembly language of the processor and the processor's architecture.
You will need to learn C to be competent in microcontrollers. There is no way around that.
There are some VM-level languages on embedded systems. I see the Java out-of-memory exception from time to time on my cell phone (which also helps give me a strong opinion on VM-level embedded languages).
The ARM has some support for hardware-level Java bytecodes.
Your best bet is to pick up something like the PIC or the Atmel chips and begin hacking with them.
If you want to do it with your existing hardware, get a hypervisor for your PC and begin writing a basic kernel.
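To give a feel for the level involved when starting on a small PIC/Atmel part as suggested above, here is roughly what a first register-level program looks like with avr-gcc; the pin choice and clock value are arbitrary assumptions, so check your board's schematic and datasheet:

#define F_CPU 8000000UL          /* assumed 8 MHz clock for the delay macros */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= _BV(DDB0);           /* configure pin PB0 as an output */
    for (;;) {
        PORTB ^= _BV(PB0);       /* toggle the LED on PB0 */
        _delay_ms(500);
    }
}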

How important is platform independence?

A lot of software frameworks, languages and platforms claim platform independence and boast about it as a selling feature. However, I have failed to understand how this could be such an important feature. For example, Java is said to be platform independent - but why should I care when I know that my webapp is going to run on only one platform? Is the overhead of making an application platform independent really worthwhile?
For webapps it mostly isn't an issue, as they are by definition almost "platform independent". I mean, users of the application mostly aren't tied to any particular platform.
For desktop apps it is a question of your potential client base. If you think you will benefit from targeting multiple platforms, then it's worth making your application platform independent; otherwise, better to stay away from it :)
If you know your app is going to run on only one platform you shouldn't care - you should evaluate the framework using the same criteria as every other framework on your target platform.
This of course depends on the application in question. If you know that the application is going to run on only one platform, then there's obviously no reason to require it to be platform independent. On the other hand, if you are building an application that is supposed to be usable for, say, next 15 years, how can you know that the platform you choose will even exist then? It's hard to predict the future, and therefore making your app platform independent gives you one headache less.
Platform independence doesn't necessarily imply overhead. Rather, it implies good programming practices; if you make your app orthogonal to the platform, then changing the platform is a breeze.
Sometimes it's impossible to avoid platform-dependent function calls, for example because you have to communicate directly with some hardware device at a low level. Even then it's possible to make the app "almost platform independent". Instead of scattering the platform-dependent things everywhere, wrap them all strictly into one class/package/whatever. Then you need to change just that one unit in order to port your app to another platform.
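A sketch of that isolation in C, with invented names (serial_port_open is a placeholder interface; a real version would keep the handle or descriptor in a struct rather than discard it):

/* serial_port.h - the only thing the rest of the app ever sees. */
int serial_port_open(const char *name);   /* returns 0 on success, -1 on failure */

/* serial_port.c - the single unit that knows about the platform;
   porting the app means rewriting only this file. */
#ifdef _WIN32
#include <windows.h>
int serial_port_open(const char *name)
{
    HANDLE h = CreateFileA(name, GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    return (h == INVALID_HANDLE_VALUE) ? -1 : 0;
}
#else
#include <fcntl.h>
int serial_port_open(const char *name)
{
    return (open(name, O_RDWR | O_NOCTTY) < 0) ? -1 : 0;
}
#endif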
We develop a Java B2B application that is Unix-only, but works on all Unix flavors (where Java is available).
The advantage of having a multi-platform application is that our customers sometimes have expertise in Linux, sometimes in Solaris, sometimes in FreeBSD, ...
This way we can adapt to the customer and not force them to use one specific platform.
For example, Java is said to be platform independent - but why should I care when I know that my webapp is going to run on only one platform?
The fact that it's not advantageous to you doesn't mean that it's of no benefit. I'm sure many Java developers enjoy the fact that they don't have to recompile their application for each platform (hence it's a selling point). A web app that relies exclusively on ActiveX for certain components will face more roadblocks if, in the future, other platforms become of interest.
Is the overhead of making an application platform independent really worthwhile?
Depends on what you mean by overhead. If it's a good framework, there might be minimal overhead. Of course if other platforms are of no interest to you, then yes, it's an overhead. However, the fact is that unlike a decade or so ago, more platforms are starting to matter these days (at least for web and desktop applications). So, the overhead could be worth it in the long run.
If you're developing only server-side code, you probably don't need to take care of it at the moment. However, you might be extremely happy down the road to find that you can run your application seamlessly on another OS if the need arises (for instance, if asked for by a client, or if you have specific performance/functionality needs).
For a client-side application, platform-independence means a lot less work to be able to ship for Mac and Linux, and yes, that might be worth it.
You almost answer your own question. Platform independence is only important if you want your application to work on multiple platforms. If you don't, then that's one less thing to worry about.
Take OpenOffice or Firefox for example. You can use those on every major platform. That's important to them because they want everyone to be able to use them and have the same experience no matter what their OS is.
If your project is smaller and doesn't really need to be on every platform, then don't worry about it. It's really a judgment call for each program you develop.
but why should I care when I know that my webapp is going to run on only one platform?
You shouldn't. If you know that you are going to run on only one platform, platform independence is not very relevant to you.
But you are not the whole population of potential users. Other people will want to target PCs on multiple platforms.
It's like having a version in Chinese. If you're going to sell only in English-speaking countries, it's irrelevant. If you're trying to sell in China, it might help.
Theoretically, platform independence helps you avoid the so-called "vendor lock" while at the same time giving you a broader reach and potentially more customers.
In practical terms, you should evaluate your target audience and do a sound business calculation on whether the profit potential of being able to deliver to multiple platforms outweighs the cost of adopting a platform-independent framework. After all, the framework might claim to work the same on all platforms, but you will have to verify that claim. Not to mention that no framework solves all the problems of delivering an application, like deployment, configuration, centralized management, updating/upgrading and so on.
Of course, if your product is server-based and the end user is going to consume it through an HTTP agent, you don't have to worry about it - for the most part, and as long as you stay in the [relatively] safe realm of HTML, JavaScript and Flash.
Platform independence is a desirable feature for software vendors because they invest a large amount of money developing a modern, sophisticated application so they don't want to artificially cut out any market segment. They want to sell their baby to as many organizations as possible.
Software vendors try to convince IT departments that platform independence is a good thing because it avoids vendor lock-in. I'm sure that is important in theory; however, in practice most IT departments self-impose vendor lock-in with their attitudes, usually concerning a particular technology vendor of high prominence.
"Platform independence" can mean different things to different people. For example, is "Windows XP" a different platform than "XP 64", or Vista, or Windows 7? It depends upon whether you write application software or drivers, and on what pre-installed libraries and services you depend on.
In the most general sense, no application can be truly platform-independent - you won't expect to run a web application on the embedded Linux in your toaster, or on a 16-MB Windows 3.11 machine.
But software frameworks that have platform independence as an architectural target are generally better prepared when your platform changes, and in any long-lived project, it will change, if only because hardware will be replaced every 3-5 years, and new hardware often comes with new OS versions.
You always pay for Flexibility.
Always.
Deciding if the cost is worth it (the payoffs can be very high) is entirely dependent on the needs of the individual/company at hand, but there is always a cost. Many of these are implicitly assumed, for example:
Most people code to a file-system-agnostic[1] API rather than one assuming a particular implementation, and this choice is correct so often that it is a reasonable default in the absence of any particular requirement in that area.
Nonetheless it is worth revisiting your core assumptions every so often, simply to know what they are.
[1] at least to the level of saying it's a tree with path separators '/' as opposed to talking ext3, NTFS, ReiserFS, etc...
For a web application that only you are going to use, the only point of being platform-independent is that it makes it easier on you if you change servers down the line.
Of course, languages like Java are used for a lot more than web applications - people write standalone(-ish) desktop programs in them as well, and for those it's a lot more useful to be platform independent. Sun can do the work of making sure Java runs on a whole bunch of different computers, and every Java application developer shares the benefits of that work for free, basically. It's especially beneficial to developers of mobile phone applications (not the iPhone or Android, but good old basic cell phones): writing different code for every different phone out there would be a nightmare. The fact that many phones include a JRE to run applications makes the developers' jobs easier.
One field where cross-platform support is an issue even for desktop applications is software for the scientific community. In my experience, the desktops in academia are much more heterogeneous than the ones you see at home, in offices, etc.
Platform independence is not much of an issue when you target a certain platform, but it is when you actually write the application. There are libraries and frameworks out there which solve almost any problem you might encounter - only you can't use them unless they have been written for your target platform.
Which is why it is usually a good thing for a library or framework to be as platform independent as possible, because every developer on the planet is a possible client. In the next step, it makes it simpler for application developers to write code which runs on any platform. In recent years we have seen the user numbers of Mac and Linux grow steadily. So if you can sell to them for little additional cost, why not?

Are embedded developers more conservative than their desktop brethren?

I've been in the embedded space for a while now, and most programmers I talk to seem to be doing things pretty much the same way it was done 15 or more years ago: Waterfall(ish) development, command-line tools, and a small group uses lint.
Contrast this with the server/desktop environment, where there seems to be lots of activity related to all sorts of facets of programming:
XP, Scrum, Iterative, Lean/Agile
Continuous Integration
Automated Builds
Automated Unit Testing Frameworks
Refactoring tool support
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?
I do realize that this is a generalization, and some embedded projects do use Scrum, Agile, CI, Automated Builds (in fact I worked at a company that had that in place since the 80s). But my impression is that it is a very small percentage.
We are all used to the fact that our desktop PC crashes once in a while (or at least an application on the desktop suddenly disappears). It's no big deal. The next patch will fix it.
In the embedded space, you are building something which can't be patched. Lives can depend on your device (in a car, an elevator or a medical system). Most devices are installed and then must run unattended for years. So embedded people tend to be very conservative. TCP/IP is often "too modern". They stick to their trusty serial bus with a communication "stack" that is roughly 50 lines of assembler code.
What's worse, you simply don't have the abundance of space on the device, which means you can't use one of the latest programming languages that make TDD and automated builds a breeze.
Next, a lot of embedded development environments are proprietary. If your supplier doesn't support it, you won't get it. Linux has started to weaken this in the past years but a whole lot of devices are not powerful enough to run Linux, yet. And even if they were, the CPU power would be used for something else instead of running a fancy OS which comes with source.
So yes, there are powerful forces working in the background to keep the embedded space where it is.
Are embedded developers more conservative than their desktop brethrens?
Yes, because they are more concerned with the consequences of making errors. It’s a big deal to patch an embedded device. Not so much for a desktop app.
Waterfallish development is necessary in the embedded world because you are generally building hardware at the same time as the software. You need to know as soon as possible how much memory, how much processor speed, how big a flash, and what special hardware (if any) is necessary, etc. The hardware design can't be completed until you know these answers. Once you decide, that is pretty much it; the lead time for redoing a board is far too long. If you mess up, then the software is going to have to work around any shortcomings. Not usually an ideal situation.
As for the tools, that is largely based on what the supplier provides and any biases of the developers. On some projects I have used XP Embedded and got pretty much everything that the desktop developer gets.
XP, Scrum, Iterative, Lean/Agile:
Since most of the design is done up front (by necessity), and you usually don’t have working hardware when it is time to code, the quick turn-around processes don’t really provide much benefit.
Continuous Integration/Automated Builds
Nice to have, but not really necessary. What…it takes about 15 seconds to open the IDE and press the compile button.
Automated Unit Testing
No reason why this shouldn't be done, but only part of the code can truly be automatically tested because the other part is either hardware dependent or has some other dependencies like timing. So you can't really tell if the code is working by the automated tests.
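One common way to enlarge the automatically testable fraction is to keep the pure logic free of register access and pass hardware readings in as plain values, so that part can run on the host; the names, the conversion formula and the read_adc_channel call below are all invented for the sketch:

/* Pure logic: no hardware access, so it can run in a host-side test. */
int temperature_is_critical(int raw_adc, int threshold_celsius)
{
    int celsius = raw_adc / 16 - 50;      /* made-up linear conversion */
    return celsius > threshold_celsius;
}

#ifdef TARGET_BUILD
/* Thin target-only wrapper: the part the automated tests don't cover. */
int read_adc_channel(int channel);        /* hypothetical BSP call */
int check_temperature(void)
{
    return temperature_is_critical(read_adc_channel(3), 85);
}
#else
/* Host-side test, runnable with plain assert() or any unit test framework. */
#include <assert.h>
int main(void)
{
    assert(!temperature_is_critical(0, 85));      /* -50 C is not critical */
    assert(temperature_is_critical(4095, 85));    /* ~205 C is critical    */
    return 0;
}
#endif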
Refactoring Tool Support
The embedded processor vendors' product is the processor. They provide the IDE support in order to encourage you to purchase their processor. They couldn't possibly afford to pay for a Visual Studio-sized development team to add all the bells and whistles to an IDE which isn't even their product.
These are some reasons I can think of:
Embedded teams are usually smaller than desktop/web teams. The code base is smaller.
System testing is much more important than unit testing. The software needs to be tested together with the hardware. Automated testing is not always possible and can only be applied to a small fraction of the code base.
Embedded engineers have a different skill set than software engineers. They interact with hardware, and know how to use an oscilloscope and a logic analyzer. Usually, the difficult part of their job is to find a glitch in the hardware. They do not have the time to adopt modern software methodologies.
Embedded programmers are mostly electrical engineers, not computer scientists or software engineers.
They excel in their field of expertise. They bring a slower more methodical approach than most computer programmers. When it comes to programming firmware, electrical engineers know just enough to be dangerous.
Here are some of the things I've noticed electrical engineers doing in C:
All code in ONE single file
Math like variable names: x, y, z
No or missing indentation
No standard comment headers
No comments at all
Too many comments
In their defense, EEs didn't train to be computer programmers; it's not their job. I think software is the hardest part of creating embedded devices. Designing PCBs and choosing components requires skill, but it pales in comparison to the complexity of 10,000 lines of code.
Embedded programmers also have to deal with IDEs that look and behave like the IDEs of the '90s.
MPLAB
AVR Studio
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
It's partly a matter of scale. Software is NOT the product; the product is the product. However, there are thousands of different types of microcontrollers and microprocessors out there, and the most popular thousand have 3-4 different compilers that aren't completely compatible.
So a given tool is only going to be used by a few hundred or thousand engineers.
In windows development, however, there are millions of programmers of many levels - the tools produce software directly which is the product, and so it's going to get more eyeballs, and more money.
Each new product that an engineer puts out might have a different processor.
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Embedded programmers are generally software or firmware engineers, as opposed to programmers. Engineering implies a certain amount of design, design analysis, and design proof prior to implementation - in other words a ton of work is done before the first line of code is written, and the documentation, ideally, is specific enough that implementation is merely turning pseudocode like documentation into compilable code.
New tools and concepts are needed in the design phase, not the implementation phase. An IDE with intellisense may be nice, but by the time the code is being written it's useless cruft - they already know what they need.
CAD - computer aided design - tools are being developed for firmware engineers that are used in the design phase to develop models and simulations that are directly turned into code. Matlab and simulink are good examples of this. The system as a whole is designed.
In fact, one might wonder why software developers are still writing code while the engineers are making data/program flow charts and state machine diagrams. Why is UML uptake so slow in the application world? It sounds like application developers can use some of the tools in common use among embedded systems engineers...
Is it that management in the typical embedded industry behind the curve compared to IT focused fields?
Actually, it's likely the reverse. When a project starts the engineers have to pick the processor.
The processor manufacturers get less money on older chips, so they pitch the latest and greatest, and they are generally cheaper overall than the chips used in the previous design (either by die shrinks, more integration, etc).
So the design is actually using the latest and greatest chips.
The downside is that the compiler and tools are often immature. They can only build so much on the older tools, and since the target moves with each new processor, they can't focus on a lot of the nice features application programmers might like. Especially since many of those features won't be useful to an embedded engineer.
There are many other factors, some of which are enumerated by other answers, but it's really a different field even though they both involve programming.
-Adam
I would also add a couple of points here:
In general, embedded projects tend to be smaller than desktop projects. This decreases the need for very elaborate software processes.
Requirements for embedded projects are often precise and better defined, so Scrum and agile are not as crucial.
Finally, embedded projects are generally a mix of software and hardware. The software being only a part of the project, embedded developers invest less time in software processes.
I agree with much that's been written here:
Old tools without the bells and whistles: far fewer refactorings are available (if any at all) due to C/C++'s preprocessor directives, and it is time-consuming to choose a unit test framework versus simply using JUnit.
It's true that waterfall feels more efficient. If I'm going to open the hood and get into a hard-to-access place, I'll want to do as much as I can while I'm there, rather than exiting and closing the hood after each task just to open it again. The idea that creating the most important features first allows you the option of shipping when promised instead of going late can also be hard to grasp when you believe nothing is optional, which might be true. IME, though, when the deadline looms something always becomes unnecessary.
Less visibility into the system makes it riskier to revisit existing code to refactor or change functionality. There are often timing issues, which automated tests running on the host using stubs and mocks won't catch. It can be hard for someone who's been bitten by these issues to take a different perspective.
I'll add one more: the language of agile/scrum is in workstation programmers' terms. To an embedded developer who knows just enough C to get the job done, what is a class, object, or method? When the "user" is typically regarded as a physical person clicking and typing, and the product has no human user interface, it's easy to dismiss the idea as Not Applicable. This may change with James Grenning's forthcoming book about TDD in C. I've been reading the beta ebook and it's quite good.
I would say it's more lack of good toolsets. It's really frustrating when you want to use C++ for its compile-time features not present in C (templates, namespaces, object-orientedness, etc) rather than its run-time features (exceptions, virtual functions) -- but the device manufacturers & 3rd parties just give you a C compiler, not C++. This probably results more from market size (hundreds of millions of PCs running Windows, with hundreds of thousands or even millions of developers -- vs. hundreds of thousands of Chip X, with hundreds or low thousands of developers) than from device capability.
edit: w/r/t robustness: there are different markets out there. The car/elevator/aeronautics/medical device market is going to have to be rigorous about getting rid of bugs. Other markets (toys, MP3 players, & other consumer electronics) can be less rigorous, especially if it's possible to upgrade code in the field. ("Oops! We're sorry we deleted your music library! We just fixed that bug, you can grab the latest release at our website at your convenience!")
I'd say different sorts of problem environments.
The biggest problem with the waterfall methodology is that requirements change. In every environment I've been in, there has been at least the likelihood of a requirements change, which means that the successful methodologies are those that keep flexibility as long as possible. Even if the customer has signed off in blood, and stands to forfeit his left hand if he suggests a change, there are changes coming in the future.
In embedded programming, it is possible to nail the requirements down up front. They come from the behavior of the system as a whole, and engineers are good at nailing down system requirements. Nobody's going to come in halfway through and say that the user now wants the pacemaker to deliver syncopated impulses while the recipient is dancing.
Once the requirements are frozen beyond thawing, which never happens in software designed for human use, waterfall is a very efficient methodology. The team proceeds from well-specified requirements to overall design, then detailed design, then coding, verifying all the way that the stages are done correctly. Then it's time to debug the code (since it's never perfect when written), and final tests to make sure the code meets the requirements.
I would also posit that some fields are inherently conservative. The transportation industry for example, where trains and planes may have life spans of 30 years or so. Customers tend to require tried and true practices, probably derived from IEEE. Waterfall is what customers know, waterfall is what customers demand.

Embedded platform development in (!C)

I'm curious to see how popular the alternatives to C are in the embedded developer world, e.g. Ada...
I've only ever used C (with a little bit of assembler), but then my targets have very limited resources. Is there a move elsewhere in this space to something else? What is winning the war in set-top boxes?
If !C what was the underlying reason?
Compiler support for target
Trace \ static analysis tools
other...
Thanks.
Forth is quite popular for embedded development.
Also, while Smalltalk is probably not popular in the embedded community, embedded development is definitely popular in the Smalltalk community.
When you say "embedded development", keep in mind that you have to consider the scale of the project.
When programming something on the scale of a microcontroller or the firmware for an ASIC, you tend to see C and assembly dominate the scene. Embedded developers tend to "specialize" in these languages since compilers for them are available for nearly every embedded target platform. If your project migrates from, say, a chip with a PowerPC core to a chip with an ARM core, you can be fairly confident that your C code will not be overly difficult to port over. Some chips do have compilers available for other languages, but typically they do not match the C compiler in terms of efficiency of the resulting binary. Since embedded systems are often low on resources, system designers want to make their code as efficient as possible (also one reason why you see a lot of assembly language code). I have seen development tools available for languages such as C++, Pascal, Basic, and others, but they are typically niche tools that are not mature enough to match the efficiency of the available C compilers. Debugging tools for these languages also tend to be harder to find than what is available for C/assembly.
You also mentioned set-top boxes. Embedded systems on this scale can pack the equivalent power of a desktop computer from 7-8 years ago. Their available RAM, storage space, and processing power allow them to run full-featured operating systems and interpreters for higher-level languages. On these more powerful systems you will still see C and assembly language being used (for driver code, if nothing else), but other languages (such as Java, Lua, Tcl, Ruby, etc) are becoming more and more common. Using interpreted languages makes porting code from one platform to another even easier, as long as the platform has sufficient resources to handle the overhead of the language interpreter. Any low-level code that interfaces directly with hardware (drivers) will still typically use assembly or C, since high-level languages don't always have the capability to do this sort of thing. Anything running as an application on top of the embedded operating system can usually be developed and tested inside an emulator or virtual machine, and so you will see a lot of code being developed in whatever language the developer happens to be comfortable with.
TLDR version: C is popular because it is a versatile language that nearly all developers are familiar with. Assembly is popular because it allows for low-level hardware access in ways that would otherwise be difficult or impossible. Interpreted/scripted languages such as Java are becoming more popular, but the resource requirements of the interpreters for these languages may be too much for some embedded systems to handle. The quality and variety of development/debugging tools available for C and assembly also make these options attractive.
Perhaps not quite the large step from C you're looking for, but C++ is also reasonably popular for embedded projects.
I haven't used it myself, but Bascom is quite popular for AVR microcontrollers. It is a BASIC IDE that lets you interact with the peripherals very easily. I've met hardware people that successfully use it.
Yes. Java is becoming more popular - many processors have added instructions that pertain primarily to Java and similar languages (.NET). Also, uClinux runs on microcontrollers, so you can use practically any language on some of the larger micros.
Basic is still common, as is assembly.
You'll see Ada in certain gov't projects.
And some engineers are even putting Lua and other interpreters on their micros so their customers can extend the functionality.
But C is still dominant.
-Adam
In the early '90s I did a lot of embedded development on the 8051 using Intel PLM51 and the DCX51 operating system.
PLM is a very simple language - but very powerful.
We now use C.
If you work in the smartcard space, you get to use Java Card. Yep, Java, on an 8-bit micro. It's kinda fun, actually. I get to develop in Eclipse, test ( & debug!) on the PC simulator, and can be confident that it'll run the same on the card. It's just such a pity Java is a terrible language for embedded apps :)
I've used EC++ (Embedded C++) quite extensively.
Also, PICBasic has been popular with the PIC'ers for eons now.
I have used Ada in embedded projects for military avionics because of customer requirements. There are lots of Ada tools for embedded development, but most of them are very expensive. Personally I would just use C.
There is a Pascal compiler for 8051
JAL
There is a group of folks working to make Lua a viable option for embedded work. They are targeting primarily 32-bit ARMs with 256K FLASH and 64K RAM or better, and seem happy with their work so far.
They are partly inspired by the classic BASIC-Stamp, a BASIC interpreter running in a moderately powerful PIC with the program itself stored in a serial EEPROM device.
At work, I am still maintaining a customer's embedded system that is written in a compiled flavor of BASIC running on a Zilog Z180 CPU. 1980s technology all around, with most of the system still built out of 24-pin DIP packages in sockets. The compiler runs under CP/M-80 running in a Z80 simulator, which itself runs in the MS-DOS simulator built into Windows. Aside from the sheer amazement that anything productive can be done this way (and that you can still buy 27C256 UV-erasable EPROMs, and that my nearly 20-year-old Data I/O PROM programmer still works), I really wish the customer could afford to move to a new hardware design so the system could be rewritten in a maintainable language.
It depends on the microcontroller. Many of them have C compilers, but the compilers are horrible; assembler is usually easy and the best performing, most efficient, etc. Ones like the MSP, AVR, and ARM have good C compilers, and for those I would and do use C (depending on the problem).
I would stick to C or assembler; you are wasting memory, performance, and resources using anything else.
Pascal and Modula-2 work fine too. Essentially they are pretty much equivalent to C, except for the inability to do alloca (though some have that as an extension).
But the core problem will be the problem with any !C compiler: which do you prefer, a better compiler/toolchain or the language of your preference?
Although I like the Wirthian languages most, I simply use C and live with the consequences, simply because the toolchain is better.
There have been examples in the past (Pascals, or even tightly compiled Basics), but C is mostly the norm. I never understood why.
I worked on a device which ran some incredibly old version of python (1.4 or something). There was no way to debug it (other than printing debug messages) so when your code hit an exception everything would just stop and you scratched your head for an hour. Whenever you made a change and upgraded the code it was running, it took about 10 minutes to interpret and compile it.
Needless to say we scrapped that and replaced the microcontroller with one that ran C.
See this related question:
What languages are used for real-time systems programming.
In response to your "why" question, from the standpoint of government/military acquisition, there is a perception that Java (language, platform, etc...) is the lingua franca these days and that economies of scale in the language will reduce acquisition and maintenance cost. There's also a hope that one can efficiently train a competent Java programmer to be a reasonable RT/embedded programmer in Java faster than if they are required to learn a new language. This rationale is suspect, in my opinion, but it does answer the "why" question.
If you include the iPhone as an embedded platform, then Objective-C.
Considering how many times I've had a Java out-of-memory exception on my phone (most of the time I do anything remotely interesting), I'd run away from Java like a bat out of a hot place.
I've heard that Erlang was designed for use in cell phones. I think Lisp is a good architecture for remote device support - if the device can handle the runtime.
A lot of home-brew users and small companies needing a cheap solution have found that the Tiny Tiger and BASIC Stamp (using BASIC) meet their needs.