What are some options for future-proofing your application? - future-proof

I am looking at minimizing the future impact on a yet-to-be-written application. I am trying to avoid any 3rd-party products, and even to avoid operating-system-specific calls. Can anybody suggest other ways of future-proofing the application? The idea would be not having to rewrite major portions in 10 or 20 years, and that only maintenance (bug fixes) would ever need to be done.

If you want your program to keep running (on modern OSes) for that kind of time period, you'll probably end up having to write it in nothing but pure ANSI C (or C++). Anything else is likely to need some kind of tweaking over the years - and nobody really knows what'll happen over the next 10-20 years.
That said, here are a few tips to minimize these kinds of problems:
Avoid weird dependencies. If you're going to depend on some library, make sure it is very well-established (and thus likely to survive at least 5 of those 10-20 years), or at least open-source so you can fork it yourself if need be.
Avoid OS-specific calls. This will be a balancing act with the first point - you can use a wrapper library like Boost or Qt or GLib or what-have-you - but that would increase the chance of compatibility issues on that front. (A minimal hand-rolled wrapper is sketched after this list.)
Document everything. Fact is, no matter how hard you try, this program will need compatibility fixes and bug fixes, and probably feature additions too. So make life easier on that poor maintenance programmer who comes along 15 years later. :)
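To make the second tip concrete, one common approach is to confine every OS-specific call to a single wrapper module, so a future port touches only that one file. A minimal sketch in C - the file and function names are mine, not something prescribed by the answer:

    /* portability.h - the rest of the program includes only this */
    #ifndef PORTABILITY_H
    #define PORTABILITY_H
    void port_sleep_ms(unsigned int ms); /* sleep without naming the OS */
    #endif

    /* portability.c - the only file that mentions an operating system */
    #include "portability.h"
    #ifdef _WIN32
    #include <windows.h>
    void port_sleep_ms(unsigned int ms) { Sleep(ms); }
    #else
    #include <unistd.h>
    void port_sleep_ms(unsigned int ms) { usleep(ms * 1000); }
    #endif

If a new platform appears in 15 years, only portability.c needs a new branch; the rest of the code base keeps compiling as plain ANSI-style C.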

Dig out some 10- and 20-year-old programs that still run on today's machines and see why they still run and why they are of value. I see a number of computation-intensive, console-based apps still in occasional use, mostly written in C and FORTRAN, called by other apps. If your app has a lot of GUI, are you sure it will still have any value in a few decades' time? Consider separating the user interface from the core functionality, in such a way that the UI can be replaced in the future as UI paradigms change and evolve (a sketch of that split follows below). If you write your system in a very modular fashion, modules that still provide value can be kept while those that are clearly obsolete can be replaced.
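One way to read "separate the UI from the core": keep the computation behind plain functions with no I/O, and make the UI a thin, disposable caller. A hedged C sketch with invented names - the point is the file boundary, not the formula:

    /* core.h - pure computation: no UI, no OS calls */
    double monthly_payment(double principal, double annual_rate, int months);

    /* core.c */
    #include <math.h>
    #include "core.h"
    double monthly_payment(double principal, double annual_rate, int months)
    {
        double r = annual_rate / 12.0;  /* monthly rate; assumes r > 0 */
        return principal * r / (1.0 - pow(1.0 + r, -months));
    }

    /* cli_main.c - one of possibly many replaceable front ends */
    #include <stdio.h>
    #include <stdlib.h>
    #include "core.h"
    int main(int argc, char **argv)
    {
        if (argc != 4) {
            fprintf(stderr, "usage: pay principal annual_rate months\n");
            return 1;
        }
        printf("%.2f\n", monthly_payment(atof(argv[1]), atof(argv[2]), atoi(argv[3])));
        return 0;
    }

Replacing the console front end with a GUI (or a web service) later means writing a new main, not touching core.c.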

Write with 64-bit in mind. We're discovering now that many of our third-party dependencies don't support 64-bit, and so we have issues. One cheap habit that helps is shown below.
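A low-cost habit along these lines, assuming C99 is available: use the fixed-width types from <stdint.h> rather than assuming long is 32 bits, and size_t for object sizes, so the same source stays correct on 32- and 64-bit targets. A small sketch:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t file_offset = 5000000000LL; /* > 4 GiB: would overflow a 32-bit long */
        size_t  n = sizeof file_offset;     /* object sizes are size_t, not int */
        printf("offset %" PRId64 " (%zu bytes)\n", file_offset, n);
        return 0;
    }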

The best way to build an application that is still functioning with little to no maintenance in 10 years is to look at the systems that were built 10 years ago and are still running.
From my experience, most of the systems that did not need major upgrades managed that by running on the same or similar hardware they were deployed on 10 years ago, and by keeping the same interface.
The maintainers chose to trade away the performance improvements of Moore's law and usability improvements in favor of little to no maintenance over the years.

Automated testing can also help.
Not that your application will be "proofed", but you'll have some idea of what has to be fixed when things change.

A lot of the applications I develop run on a cycle (for example, yearly). The most important thing I do to ensure they continue to work is to avoid hard-coding dates or date ranges. For example (T-SQL-flavored; the original examples mixed dialects, so take the exact functions with a grain of salt):

    YEAR(GETDATE())  -- the current year, computed rather than hard-coded

    -- the current year as a half-open range over a datetime column
    DateSubmitted >= DATEADD(year, DATEDIFF(year, 0, GETDATE()), 0)
    AND DateSubmitted < DATEADD(year, DATEDIFF(year, 0, GETDATE()) + 1, 0)

Of course, these are just examples.

What is MAGIC programming language? Which other language is closest in syntax? [closed]

I have recently heard about the Magic programming language from several sources and don't recall ever hearing about it before. It was mentioned that it is a programming language from Israel.
I did some googling and couldn't find much information about it. I couldn't find any code examples, and Wikipedia didn't have any information on it either.
I think this is the site for it: http://www.magicsoftware.com/en/products/?catID=70 - though I am not sure, as it mentions uniPaaS instead of Magic. However, other material on the site indicates that this is the new name for it.
I was interested in learning more about it from its practitioners, rather than from the company. I saw several claims on the internet that it provided really fast application development, similar to claims made by RoR proponents when it came out.
How does it compare to VB?
Is it still a better RAD tool than current .NET or MVC frameworks like Django, RoR, etc.?
How hard is it to learn?
If you can post some sample code it would be most helpful as well.
Could this site be it? Though it links back to the page above.
You're right, my friend: Magic is the original name of the "programming language"; nowadays it is called uniPaaS (Uni Platform as a Service). I use it to develop some business applications. It may be the fastest way to create applications (data manipulation); you can create apps in just a few days. But, like everything in life, it has its drawbacks:
it's very weird, and that makes it difficult to learn;
you do not have full control of what's happening in the background;
and you have to pay a lot for licensing (servers, clients, etc.).
If you are interested in learning this, you can download a "free" version of the software that only works with SQLite databases, called uniPaaS Jet.
The Magic language is called uniPaaS today; it used to be Magic, then eDeveloper, and now uniPaaS, as PachinSV mentioned before.
uniPaaS is an application platform enabling enterprises, independent software vendors (ISVs) and system integrators (SIs) to more successfully build and deploy business applications.
You can download the free version of uniPaaS Jet here: http://web.magicsoftware.com/unipaas-jet-download,
try it yourself and see how easy it is to use.
The Magic technology you describe is a Magic Software Enterprises tool (uniPaaS); you can find more information at:
official website: www.magicsoftware.com/en/products/?catID=70&pageID=55
uniPaaS Jet developer group on facebook: https://www.facebook.com/groups/unipaasJet/
Magic developer zone: devnet.magicsoftware.com/en/unipaas
Let me know if you find the information helpful
Bob
As PachinSV explained, there is a RAD tool once called Magic, then eDeveloper, now uniPaaS. This RAD is dedicated to database applications. Programming in it does not look like anything else I know: you mostly don't write code as with usual languages, and it is nearly impossible to explain just with words. The applications are interpreted, not compiled.
As PachinSV said, when developing you must follow uniPaaS's way of doing things. This is probably why so many people never manage to use Magic properly: if you already thought like Magic before learning about it, you will adapt to it easily; but if you have a long and successful history with other database development tools, the Magic paradigm may never become natural to you. The learning curve is quite steep: you must learn a lot of things before being able to write even a little application.
Previous versions stored the "code" inside a database table. The latest version, uniPaaS, stores the code in XML files. I could send you an example if PachinSV does not answer you before. But the files are pretty big: the smallest XML file I have in a test app is 4000 bytes, and any application is made of at least 11 files; an empty application is 7600 bytes. You must also understand that developers never use those files directly (they are undocumented, AFAIK); they are only the storage format used internally by uniPaaS. The only way to use them is to set them up as a uniPaaS application.
I'm still an active Magic developer... This is the old name, and it's a completely different paradigm, as some of you mentioned. I've been developing with it from Magic version 8.x through eDeveloper 9.x and 10.x, since renamed uniPaaS.
The newer version is much easier to use, and it is still very RAD in the sense that there is little or no code you write... a lot of the common programming tasks like IO, SQL commands, etc. are handled by the tool and are transparent (so even less code to write, since we use them in almost all types of applications)... It's mostly an enterprise tool... you wouldn't use it for small applications...
You can download the free version to learn the paradigm... but the enterprise licenses are expensive.. you need both the development tool and the runtime license if you want to deploy... so it can be costly for small scale projects...
I enjoy it personally, especially when you have to do a quick proof of concept, a quick data migration, or porting onto any DB platform, or when bridging an existing system through the wide range of gateways they provide with the licensed version... It is up to date with the commonly used web technology out there... like SOAP and RIA...
It's more popular in Europe... The HQ in the States is in Irvine... We used to have 2 branches in Canada, but they closed down in 2001... Visit the Magic User Group on Yahoo... It's a very active forum with lots of cool people who will help you out in your quest...
http://tech.groups.yahoo.com/group/magicu-l/
I programmed with Magic for 6 years and found it to be an amazingly fast tool, easy to understand if you are a competent database programmer, because all operations are really about data manipulation. It is certainly a niche area to develop in, and because of this, jobs are few and far between. As it is interpreted, there are really no bugs to make. It will work with many databases/connections simultaneously, but there is a big memory and processing hit.
Drawbacks :
Little control over communications between machines and devices
No mobile API as yet
Niche area so few skilled practitioners or companies willing to invest.
Good Points:
You can say you are a Magician; you can impress people with uber-fast app development (really).
It is easy to understand if you don't have a PhD in Maths.
Zero programming "bugs" can creep in. What you do is what you get.
I developed in the original Magic PC referred to by several of the folks above.
It is exactly this: FAST, FAST, but expensive and rigid in what it will allow you to do. It works on a tic-tac-toe-like matrix. Dropping commands into the various sections determines when they are run. The middle column is run indefinitely until you break the cycle; it is like a Do Until loop. If you have to do an item once, you put it into this infinite loop and end it after one cycle.
The first column's procedures are run first, ONCE, before the infinite middle column is run. The third column of commands is run after the infinite cycle, once. It is very efficient and logical once you get over the idea of an infinite loop. (A rough sketch of the model follows.)
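For readers who have never seen Magic, the three-column model described above corresponds roughly to the following C-shaped analogy. This is my own illustration, not Magic syntax:

    #include <stdio.h>
    #include <stdbool.h>

    static void prefix(void)        { puts("prefix: runs once"); }
    static void main_body(void)     { puts("main body: one cycle"); }
    static void suffix(void)        { puts("suffix: runs once"); }
    /* break after one pass - the "run an item once" trick described above */
    static bool end_condition(void) { return true; }

    int main(void)
    {
        prefix();                 /* first column: once, before the cycle   */
        for (;;) {                /* middle column: the "infinite" cycle... */
            main_body();
            if (end_condition())  /* ...until you explicitly break it       */
                break;
        }
        suffix();                 /* third column: once, after the cycle    */
        return 0;
    }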
Types can be specified, with an associated program to present the type. Then, everywhere the type is used, all the settings automatically kick in. I especially like that one can write the program and, 5 months later, change the name of a variable and have it carried throughout the program. In fact, the program does not use your name for anything: the internal name of any and all variables is hidden from the end user, so of course it is not a problem to change a name. It takes a minute to write an input program for any table. It takes a minute to write an export/import program for all the data files in the database.
Attaching to a particular type of database, like Btrieve or SQL, is independent of the program itself.
I stopped using the language because they demand more for the runtime engine than I could charge for the programs I wished to run with it. Bill Gates went in the opposite direction. VB is superior in control and in being able to drop 10 DataGridViews onto the same screen, but development is 10 times slower.
Its niche, then, is proof of concept for a program in a big company, or conversion/importing/exporting for a development company. It is good for $25k programs that are database-heavy and not going mobile.
uniPaaS, Magic PC
I did some Magic work around 1993. It was a DOS based 4GL that came from Israel. Haven't seen it since.
How does it compare to VB?
It doesn't.
Is it still a better RAD tool than current .net or mvc frameworks like django, ror ...etc?
If you mean "is it more Rapid", then yes, otherwise no.
How hard is it to learn?
About as hard as learning MS Access.
Coincidentally, if you want to get an idea of what it is and how it works, I've found that comparing it to MS Access is handy. It works in much the same way from a user's or developer's perspective. Obviously what happens in the background is vastly different, but if you've ever developed a form in design view in Access, Magic will seem very familiar.
Google tells me there's also MAGIC/L. All I could find about it was this blurb:
"A procedural language written in Forth. Originally ran on Z80s under CP/M and later available for IBM PCs and Sun 3s."
The only Magic programming language that I know about is one used by a company called Meditech. It's a proprietary language derived from MUMPS.
The language is truly miserable - here's a sample.

Code readability and UI design balance

I have worked a lot on improving how my code runs (and its "beauty"), but when is it time to stop fixing and start working on the UI?
Microsoft (in my opinion) seems to go with nice code, while Apple goes with a nice UI (although Apple's developer examples do have very nice code).
I'm bad at balancing; when is it time to work on one or the other?
Make it work, then make it elegant, then make it fast.
If the user doesn't like the program, the quality of the code is irrelevant. Once the user likes the program, then code quality becomes more important. And by "like" I mean a good user experience, where the program doesn't crash, fulfills the need the user has, and adheres to the Principle of Least Surprise.
Note I say "more important" because code is not something the user sees or is interested in. Code quality and "beauty" is important to developers because it's what they see of the program and what they hand over to other developers.
I remember reading something some time ago comparing (in general terms) software developed for the Windows platform and software developed for OS X. In general terms, it said that Windows programs tended to be developed by developers who didn't spend much time on the UI or thinking about the user experience: their concentration was on getting every piece of functionality they could think of into the program, without much thought about whether it made sense. Mac OS X software, on the other hand, tended to be developed by people concentrating on the user experience first and on solving the user's problems. So it didn't necessarily have as much functionality, but what it did have was directly associated with what the user needed, and was easy to use.
So when is it time to stop and think about the UI? The fact that you have asked the question makes me think it's time to stop right now. If anything, I'd suggest that even before writing any code you should have drawn out a basic UI and worked out what sort of user experience your program is going to provide. If you cannot work that out, you don't want to waste time writing code, because you will never use it.
That's not to say I think code "beauty" is irrelevant. I spend time on making sure my code is well written, easy to follow, and looks "good". But that's after I've figured out the UI, and because I've had a lot of experience with cleaning up other people's awful code :-)
This is an incredibly broad question, and the answer is entirely a function of what you're building, why, and for whom.
Some random bits of conventional wisdom that I agree with:
If you're building broadly-applicable consumer software (e.g. for Mac or iOS), then the UI is a critical component of the application. Spend lots of time on it. :) Work with designers. If your software looks like crap, no one will want to use it. (Assuming nobody's going to make them: see next point.)
If you're building internal or enterprise tools, UI polish is probably less important.
Even if you're a one-man (or woman) operation, think of engineering as distinct from "product management". Product management is the thought process that determines what the software should do, and how it should work (and look). Engineering creates the reality, but has different tradeoffs. These are sometimes in conflict, which is hard to handle if you're one person, but try to wear different hats.
In all cases, your code should work, for some reasonable definition of work.
If it's ugly because it's hacked together, you're likely to run into accuracy/correctness problems sooner than if you construct it methodically. It will also be harder to maintain. The tradeoff here is almost never worth it. As you gain experience, you'll more naturally write clean code from the start, even if it's "scratch" code. You'll save time in the long run this way, and not be faced with later wholesale attempts to improve its beauty.
If you're on a team, or working with others, clean code is even more important. Find out if there are any "local" coding conventions, and work closely with colleagues.
As far as improving "how code runs", if you mean performance optimization, don't do any of that until you're sure you need to. Write the simpler code first, even if it's slower. It's likely that it won't be slow enough to matter.
Special case: if you're writing a game, long-term maintainability is less important, because they tend to be "throwaway" at some level. YMMV.

How do you handle technology updates in long running projects?

Let's assume you're in the middle of a long-running project (long-running = several years) and, as expected, several new releases will appear along the way. There might be a new .NET Framework with brand-new features (e.g. Linq, Entity Framework, WPF, WF...), a new Visual Studio or v.next of your favorite control library, a new mock framework, and a lot more.
What are your guidelines for handling these technology updates? Do you adopt them instantly or do you ignore them until the end of the project? Do you have different guidelines for different things (Tools, Frameworks, supporting stuff)?
In my experience, these decisions are always made on a case-by-case basis. Several factors are considered, including:
How mature is the new technology? Does the organization like to be at the forefront working with bleeding edge new technologies, or does it prefer to work with proven tools and methodologies?
What skill sets do your people have? Are they consistent with use of the new technology, or is more training needed? Will improved productivity outweigh the time it takes to come up to speed?
What investment do you have in the existing technology? What is the cost of moving to the new technology? How much rework and rewriting of code is involved?
What is the requirement? Is it supported by the existing technology, or are new tools needed to fulfill the requirement?
What are the performance expectations? Does the new technology provide a performance improvement that cannot be met with the old technology?
What about the technological culture? Is the organization vendor specific (e.g. a Microsoft shop)? Can open-source code be used?
What is the scope of the project? Is it a large project that would benefit from supporting technologies like frameworks and tools, or is it a small project that would be unduly weighed down and complicated by these things?
How is the new technology supported? Does the vendor have good documentation? Is there someone you can talk to if you have problems? Or are you an organization that has people that know how to solve problems without a support contract?
Is the technology comfortable to work with? Does it seem to make sense? Is it clean and elegant? Do other people seem to like it? Are other people having problems with it?
Is the technology the latest flavor of the week? Has it proven itself in the battlefield to produce tangible results, or is it just a religion?
How much time do you have to learn the new technology and iron out the kinks? Do the benefits outweigh the costs?
As a very brief example, I chose Linq to SQL for my most recent project because the project was complex enough to warrant an ORM, L2S performs well and is lightweight, we are a Microsoft shop, and it is my sense that Entity Framework is not quite ready for prime time (even though Microsoft says it will be the go-to framework for the future).
Stick with what you've started with.
A large and long running project often comes with a huge and highly complex code-base. Any change or upgrade to a new version of a library can add bugs in very subtle and unexpected ways.
Also: for large projects, the tools and libraries used should have been tested and evaluated in the design phase. Unless you find a show-stopper or a security issue, it's best not to upgrade.
Always remember: Don't change horses in the middle of a stream. :-)
I would say different factors pitch in, like:
Say a piece of software is nearing its end of life. Last April, for example, Microsoft retired mainstream support for SQL Server 2000; if your product uses it, then it's wiser to go for the next version of SQL Server in your next release.
Another factor that comes into play is how much value the new features in the latest release would bring to your product. It may well be that the new release of the .NET Framework has nothing that adds value to your product, and then there is no strong case to upgrade.
Budget is also an important factor. You may need to upgrade licenses in order to step up to the next release, unless you are already part of something like Software Assurance.
Training for the team is also a factor. If the latest release is going to add to your product, then you will have to train your team as well.
Well, there could be other telling factors too. These were the ones off the top of my head. I hope it helps.
cheers
If you're talking about a framework-specific example, the biggest piece of advice I'll give you is to keep the system and your application separate. This is why I love patterns such as Model-View-Controller: they keep your code modular and mean you can upgrade sections without breaking the app as a whole. (A sketch of one way to draw that boundary follows.)
On a more practical level, if your framework has a Git or SVN repository, check out the usual 'system' directory from the repo; then you can call 'svn update' occasionally to keep up with the latest and greatest builds.
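In the same spirit, at the code level: route framework or library calls through a thin adapter of your own, so that a version upgrade touches one translation unit instead of the whole application. A hedged C sketch; the third-party call is hypothetical and commented out so the example stands alone:

    /* applog.h - the application logs through this, never through the library */
    void app_log(const char *msg);

    /* applog.c - the only file that names the third-party library */
    #include <stdio.h>
    #include "applog.h"
    /* #include <thirdparty/log.h>   hypothetical library header */

    void app_log(const char *msg)
    {
        /* today: thirdparty_log(msg); if an upgrade renames or
           re-signatures that call, only this one line changes */
        fprintf(stderr, "log: %s\n", msg); /* stand-in so the sketch compiles */
    }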
I would suggest that the project not last that long. Develop the application in smaller pieces, with iterations every couple of months. That way, as new technology comes out, you can make the necessary changes and implement updates as you go, rather than having to decide to redevelop the whole application. As you say, trying to develop the whole application as things change just doesn't work.
As another poster said, it's certainly a case-by-case basis thing. What you can upgrade and when is determined mostly by how hard or easy it is to test the new version of the system. Having a comprehensive automated test suite for your application helps a lot with this.
Generally, I try to update to the latest stable release of libraries and so on as often as possible, because that makes maintenance easier. If you don't update, you may find yourself patching or working around bugs in the version of the library you are using. If you update less frequently, each update will be more work because you have more changes to deal with, and it's been longer since you last touched the system, and thus you remember less about it.

Are embedded developers more conservative than their desktop brethren? [closed]

I've been in the embedded space for a while now, and it seems that most programmers I talk to are doing things pretty much the same way they were done 15 years or more ago: waterfall(ish) development and command-line tools, and a small group uses lint.
Contrast this with the server/desktop environment, where there seems to be lots of activity related to all sorts of facets of programming:
XP, Scrum, Iterative, Lean/Agile
Continuous Integration
Automated Builds
Automated Unit Testing Frameworks
Refactoring tool support
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?
I do realize that this is a generalization, and some embedded projects do use Scrum, Agile, CI, Automated Builds (in fact I worked at a company that had that in place since the 80s). But my impression is that it is a very small percentage.
We are all used to the fact that our desktop PC crashes once in a while (or at least an application on the desktop suddenly disappears). It's no big deal. The next patch will fix it.
In the embedded space, you are building something which can't be patched. Lives can depend on your device (in a car, an elevator or a medical system). Most devices are installed and then must run unattended for years. So embedded people tend to be very conservative. TCP/IP is often "too modern". They stick to their trusty serial bus with a communication "stack" that is roughly 50 lines of assembler code.
What's worse, you simply don't have an abundance of space on the device, which means you can't use one of the latest programming languages that make TDD and automated builds a breeze.
Next, a lot of embedded development environments are proprietary. If your supplier doesn't support something, you won't get it. Linux has started to weaken this in recent years, but a whole lot of devices are not powerful enough to run Linux yet. And even if they were, the CPU power would be used for something else instead of running a fancy OS that comes with source.
So yes, there are powerful forces working in the background to keep the embedded space where it is.
Are embedded developers more conservative than their desktop brethren?
Yes, because they are more concerned with the consequences of making errors. It’s a big deal to patch an embedded device. Not so much for a desktop app.
Waterfall-ish development is necessary in the embedded world because you are generally building hardware at the same time as the software. You need to know as soon as possible how much memory and processor speed you need, how big a flash, what (if any) special hardware is necessary, etc. The hardware design can't be completed until you know these answers. Once you decide, that is pretty much it: the lead time for redoing a board is far too long. If you mess up, the software is going to have to work around any shortcomings. Not usually an ideal situation.
As for the tools, that is largely based on what the supplier provides and any biases of the developers. On some projects I have used XP Embedded and got pretty much everything that the desktop developer gets.
XP, Scrum, Iterative, Lean/Agile:
Since most of the design is done up front (by necessity), and you usually don’t have working hardware when it is time to code, the quick turn-around processes don’t really provide much benefit.
Continuous Integration/Automated Builds
Nice to have, but not really necessary. What…it takes about 15 seconds to open the IDE and press the compile button.
Automated Unit Testing
No reason why this shouldn't be done, but only part of the code can truly be automatically tested, because the other part is either hardware-dependent or has some other dependencies, like timing. So you can't really tell if the code is working from the automated tests alone. (A sketch of stubbing out the hardware layer follows.)
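One common way to get at least the hardware-independent logic under automated test on the host is to inject the hardware access as a function pointer and substitute a stub. A minimal C sketch; the names are invented for illustration:

    #include <assert.h>

    /* logic under test: pure, no hardware dependency */
    typedef int (*read_adc_fn)(void);

    int overheated(read_adc_fn read_adc, int limit)
    {
        return read_adc() > limit;
    }

    /* host-side stub standing in for the real ADC driver */
    static int fake_adc(void) { return 512; }

    int main(void)
    {
        assert( overheated(fake_adc, 500)); /* 512 > 500  -> over temperature */
        assert(!overheated(fake_adc, 600)); /* 512 <= 600 -> within limits    */
        return 0;
    }

Timing- and hardware-dependent paths still need testing on the target, as the answer says; this only covers the portable slice.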
Refactoring Tool Support
For the vendors of embedded processors, the product is the processor. They provide the IDE support in order to encourage you to purchase their processor. They couldn't possibly afford to pay for a Visual Studio-sized development team in order to add all the bells and whistles to the IDE, which isn't even their product.
These are some reasons I can think of:
Embedded teams are usually smaller than desktop/web teams, and the code base is smaller.
System testing is much more important than unit testing. The software needs to be tested together with the hardware, so automated testing can only be applied to a small fraction of the code base.
Embedded engineers have a different skill set than software engineers. They interact with hardware and know how to use an oscilloscope and a logic analyzer. Usually, the difficult part of their job is to find a glitch in the hardware. They do not have the time to adopt modern software methodologies.
Embedded programmers are mostly electrical engineers, not computer scientists or software engineers.
They excel in their field of expertise. They bring a slower, more methodical approach than most computer programmers. When it comes to programming firmware, electrical engineers know just enough to be dangerous.
Here are some of the things I've noticed electrical engineers doing in C:
All code in ONE single file
Math-like variable names: x, y, z
Missing or inconsistent indentation
No standard comment headers
No comments at all
Too many comments
In their defense, EEs didn't train to be computer programmers; it's not their job. I think software is the hardest part of creating embedded devices. Designing PCBs and choosing components requires skill, but it pales in comparison to the complexity of 10,000 lines of code.
Embedded programmers also have to deal with IDEs that look and behave like the IDEs of the '90s:
MPLAB
AVR Studio
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
It's partly a matter of scale. Software is NOT the product; the product is the product. However, there are thousands of different types of microcontrollers and microprocessors out there, and the most popular thousand have 3-4 different compilers that aren't completely compatible.
So a given tool is only going to be used by a few hundred or thousand engineers.
In Windows development, however, there are millions of programmers of many levels. The tools directly produce the software, which is the product, so the tooling gets more eyeballs and more money.
Each new product that an engineer puts out might have a different processor.
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Embedded programmers are generally software or firmware engineers, as opposed to programmers. Engineering implies a certain amount of design, design analysis, and design proof prior to implementation - in other words a ton of work is done before the first line of code is written, and the documentation, ideally, is specific enough that implementation is merely turning pseudocode like documentation into compilable code.
New tools and concepts are needed in the design phase, not the implementation phase. An IDE with intellisense may be nice, but by the time the code is being written it's useless cruft - they already know what they need.
CAD - computer-aided design - tools are being developed for firmware engineers, used in the design phase to develop models and simulations that are directly turned into code. MATLAB and Simulink are good examples of this. The system as a whole is designed.
In fact, one might wonder why software developers are still writing code while the engineers are making data/program flow charts and state machine diagrams. Why is UML uptake so slow in the application world? It sounds like application developers can use some of the tools in common use among embedded systems engineers...
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?
Actually, it's likely the reverse. When a project starts the engineers have to pick the processor.
The processor manufacturers get less money on older chips, so they pitch the latest and greatest, and they are generally cheaper overall than the chips used in the previous design (either by die shrinks, more integration, etc).
So the design is actually using the latest and greatest chips.
The downside is that the compiler and tools are often immature. They can only build so much on the older tools, and since the target moves with each new processor, they can't focus on a lot of the nice features application programmers might like. Especially since many of those features won't be useful to an embedded engineer.
There are many other factors, some of which are enumerated by other answers, but it's really a different field even though they both involve programming.
-Adam
I would also add a couple of points here:
In general, embedded projects tend to be smaller than desktop projects. This decreases the need for very elaborate software processes.
Requirements for embedded projects are often precise and better defined, so Scrum and agile are not as crucial.
Finally, embedded projects are generally a mix of software and hardware. With the software being only a part of the project, embedded developers invest less time in software processes.
I agree with much that's been written here:
Old tools without the bells and whistles: far fewer refactorings are available, due to C/C++'s preprocessor directives, if any at all; and choosing a unit test framework is time-consuming, versus simply using JUnit on the desktop.
It's true that waterfall feels more efficient. If I'm going to open the hood and get into a hard-to-access place, I'll want to do as much as I can while I'm there, rather than exiting and closing the hood after each task just to open it again. The idea that creating the most important features first allows you the option of shipping when promised instead of going late can also be hard to grasp when you believe nothing is optional, which might be true. IME, though, when the deadline looms something always becomes unnecessary.
Less visibility into the system makes it riskier to revisit existing code to refactor or change functionality. There are often timing issues, which automated tests running on the host using stubs and mocks won't catch. It can be hard for someone who's been bitten by these issues to take a different perspective.
I'll add one more: the language of agile/scrum is couched in workstation programmers' terms. To an embedded developer who knows just enough C to get the job done, what is a class, an object, or a method? When the "user" is typically regarded as a physical person clicking and typing, and the product has no human user interface, it's easy to dismiss the idea as Not Applicable. This may change with James Grenning's forthcoming book about TDD in C. I've been reading the beta ebook, and it's quite good.
I would say it's more lack of good toolsets. It's really frustrating when you want to use C++ for its compile-time features not present in C (templates, namespaces, object-orientedness, etc) rather than its run-time features (exceptions, virtual functions) -- but the device manufacturers & 3rd parties just give you a C compiler, not C++. This probably results more from market size (hundreds of millions of PCs running Windows, with hundreds of thousands or even millions of developers -- vs. hundreds of thousands of Chip X, with hundreds or low thousands of developers) than from device capability.
edit: w/r/t robustness: there are different markets out there. The car/elevator/aeronautics/medical device market is going to have to be rigorous about getting rid of bugs. Other markets (toys, MP3 players, & other consumer electronics) can be less rigorous, especially if it's possible to upgrade code in the field. ("Oops! We're sorry we deleted your music library! We just fixed that bug, you can grab the latest release at our website at your convenience!")
I'd say different sorts of problem environments.
The biggest problem with the waterfall methodology is that requirements change. In every environment I've been in, there has been at least the likelihood of a requirements change, which means that the successful methodologies are those that keep flexibility as long as possible. Even if the customer has signed off in blood, and stands to forfeit his left hand if he suggests a change, there are changes coming in the future.
In embedded programming, it is possible to nail the requirements down up front. They come from the behavior of the system as a whole, and engineers are good at nailing down system requirements. Nobody's going to come in halfway through and say that the user now wants the pacemaker to deliver syncopated impulses while the recipient is dancing.
Once the requirements are frozen beyond thawing, which never happens in software designed for human use, waterfall is a very efficient methodology. The team proceeds from well-specified requirements to overall design, then detailed design, then coding, verifying all the way that the stages are done correctly. Then it's time to debug the code (since it's never perfect when written), and final tests to make sure the code meets the requirements.
I would also posit that some fields are inherently conservative. The transportation industry for example, where trains and planes may have life spans of 30 years or so. Customers tend to require tried and true practices, probably derived from IEEE. Waterfall is what customers know, waterfall is what customers demand.

How to avoid short-lifespan enterprise applications?

A while ago, another question referred to the (possibly urban-legend) statistic that
... the average lifespan of software is about 3 years
At the time I came up with the following reasons (and I'm sure there are more, possibly better, ones):
A new major system (ERP, CRM, etc.) is implemented and it has an "integrated" module to replace the old app.
Same, but no integrated app - but the existing app is not adaptable (the people left, technology has changed, current IT policies have changed, users don't like the existing app.)
The company you acquired the basic app from (to customize it for your needs) has disappeared.
Or you don't get along well with them any more.
The technology for the existing app is "obsolete" (according to the framework vendor/Microsoft/consultant/industry expert/new IT manager who has management's ear.)
"We're phasing out (Windows 95/Windows 98/Windows 2000/Windows XP/NT) and we need matching technology in our apps".
"We've learned a lot from (App Version n) and we'll do a lot better the second/third/fourth/n+1th time."
Job justification for developers/IT manager/Division VP/consulting company.
The users hate it.
We've merged/acquired a competitor/been acquired by a competitor and theirs is better.
Some of these are unavoidable (e.g. your company gets bought), but overall this is surely something that needs to be avoided. Does your organization intentionally fight this syndrome? What effective strategies would you recommend?
That's why an application needs to be easy to extend, and you should be able to easily add in all the buzzwords.
If you have a solid code base, most of the buzzwords are related to the UI (Vista controls, Ajax, .NET, ASP.NET 3.5)...
You could be running COBOL in the back-end (I wouldn't).
A new major system is implemented - There's nothing you can do.
Current IT policies have changed - the app should be adaptable.
Users don't like/hate the existing app - why? Cosmetic changes in the UI can fix this most of the time.
The company you acquired the basic app from, to customize it for your needs has disappeared. - I wouldn't do that, I'd prefer to write it myself.
The technology for the existing app is "obsolete" (according to the framework vendor/Microsoft/consultant/industry expert/new IT manager who has management's ear.) - same as the above, if the back-end is solid, you should follow these in the front-end.
"We're phasing out (Windows 95/Windows 98/Windows 2000/Windows XP/NT) and we need matching technology in our apps". - a simple compatibility test and minor UI elements solve this.
I'll also say that this is different when you compare in-house to commercial apps. If you're doing an in-house app, change guarantees your job (if you know what you're doing). If you're doing a commercial app, change is an opportunity to make more money: new features will get you upgrades from existing clients, and new clients who are looking for the buzzwords; these buzzwords can become your advantage when compared to a competitor.
The average lifetime of software I write at the moment is probably a few days. (I write a lot of scripts, so I might be an aberration. ;-) But the core system I work with is probably 15 to 20 years old now. The underlying OS is about 30 years old. There is nothing inherently wrong with either old or young software. In fact, software ages best when it's possible to adapt it to new uses.
Having layers of abstraction between functional parts makes it easier to replace functionality in a system. For instance, we've gone through several different tape libraries on our system, and now we are considering going to disk archives in the future. Since the "archive" portion of our system sits behind an abstraction layer, we can replace it fairly easily without replacing the rest of the system. (One possible shape for such a layer is sketched below.)
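One possible shape for such an abstraction layer in C: a struct of function pointers that the rest of the system programs against, with one implementation per backend. The names are my invention, not the poster's actual system:

    #include <stddef.h>
    #include <stdio.h>

    /* the only interface the rest of the system ever sees */
    struct archive_ops {
        int (*store)(const char *name, const void *buf, size_t len);
    };

    /* disk backend; a tape backend would be another struct like this */
    static int disk_store(const char *name, const void *buf, size_t len)
    {
        (void)buf; /* a real backend would write the data somewhere */
        printf("disk: storing %zu bytes as %s\n", len, name);
        return 0;
    }
    static const struct archive_ops disk_archive = { disk_store };

    /* callers depend only on archive_ops, never on a concrete backend */
    static int backup(const struct archive_ops *ar, const char *name,
                      const void *buf, size_t len)
    {
        return ar->store(name, buf, len);
    }

    int main(void)
    {
        char data[] = "payload";
        return backup(&disk_archive, "job42", data, sizeof data);
    }

Swapping tape for disk then means providing another struct archive_ops, not rewriting the callers.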
When possible, it's also best to use standard parts. That way, if you run into some limitation, it's likely others will have the same problems and more likely someone will come up with a fix.
Continuous improvement - add useful features at regular intervals
No show-stopping bugs in new versions - testing, testing, testing...
Be nice to your clients and treat them with respect (most users really don't want to change their ERPs every three years, so if you have good relations with them, they'll be on your side)
Stay current with new technologies and integrate them in your application when needed
When gathering requirements, if someone says "Situation X will always be the case, no exceptions", make it configurable. It will always change, no exceptions. (A minimal illustration follows.)
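A minimal illustration of that advice in C: read the "always" value from the environment with a sane default, so changing it later is an operations task rather than a code change. The setting name is made up:

    #include <stdio.h>
    #include <stdlib.h>

    /* "invoices are always in EUR" - until, one day, they aren't */
    static const char *invoice_currency(void)
    {
        const char *cur = getenv("INVOICE_CURRENCY"); /* hypothetical setting */
        return cur ? cur : "EUR";  /* yesterday's "always" is today's default */
    }

    int main(void)
    {
        printf("billing in %s\n", invoice_currency());
        return 0;
    }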
Most companies don't make it to 5 years. Their software implementations wouldn't be expected to last any longer.