What guidelines should be followed for policies and procedures?

Our company, being rather small, doesn't have much in the way of policies and procedures for good development. When I first started, I developed some, but we now need to update them in a more formal manner. Since I don't have much experience writing policies (I've written plenty of instruction sets, so I'm less worried about the procedures, though still thoughtful about them), I want to ask the community for tips and advice on writing good policies and procedures for software development. Thanks.

Some of these policies depend on your programming language, but:
Write naming rules to keep your team's code uniform (see the sketch below)
Write general coding rules, like the KISS principle
For customer and bug management, take a look at ITIL
For deadline and task management, I recommend the Scrum method
Write a chapter about design patterns (Head First Design Patterns is a good reference)
If you want more formal procedures, have a look at ISO 90003
Other books, like Code Complete, can help you too
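To make the naming-rules point concrete, here is a minimal sketch (C#, with invented names) of the kind of conventions such a chapter might standardize:

    // Hypothetical illustration of team naming rules; all names are invented.
    public interface IInvoiceRepository            // interfaces: "I" prefix, PascalCase
    {
        Invoice FindById(int invoiceId);           // methods and parameters spelled out, no cryptic abbreviations
    }

    public class Invoice
    {
        public int Id { get; set; }                // public properties: PascalCase
    }

    public class InvoiceRepository : IInvoiceRepository
    {
        private const int MaxRetries = 3;          // constants: PascalCase
        private readonly string _connectionString; // private fields: underscore + camelCase

        public InvoiceRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public Invoice FindById(int invoiceId)
        {
            // real data access elided; the point here is only the naming
            return new Invoice { Id = invoiceId };
        }
    }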

My humble opinion... don't overdo it :) A small team needs a shared understanding and vision of how things work. While policies serve communications in large companies, they tend to devolve into power struggles with smaller groups. I found traditions grown out of experience more useful than procedures stated ahead of time.
That said, software dev establishments need some systems for source control, issue tracking, and a place for dropping documentation. Clarity around how decisions are made is immensely useful, as is an understanding of what a project looks like.
Avoid, like the plague, fights over whether there should be a space character before the semicolon, and make sure to leave enough room for fun...


SQL Data Mining Concepts VS. Programming Concepts

I am a beginner but know the basics and am peering into more advanced data mining and stored procedure routines. I have learned small concepts that mimic C# Design Patterns such as looping structures but have not seen much (on the web) about SQL Design Patterns.
I ask because I ran across this book http://www.amazon.com/SQL-Design-Patterns-Programming-Focus/dp/0977671542 but have also always been told that you will find better information on sites like Stack than in a book.
I've been told that for programming professionals Design Patterns are a must. Is this also the case for SQL programmers?
*(Wasn't sure if this belonged on Meta or not. It's not a question about the site but is a general discussion question)
Design patterns for SQL are very useful as well. What good is gathering data if you do not store it properly and cannot retrieve it to make useful decisions?
I have found Joe Celko's books very useful on database design patterns. http://www.amazon.com/Joe-Celko/e/B000ARBFVQ
Of course you should read books. In the first place, they tend to be written by people recognized for their expertise (not just anyone who happens to log into the site), so the information is more likely to be accurate from the start. Then they have editors who help improve the presentation of the information. Finally, a good book can discuss a topic in more depth than an SO post, so you get information at a deeper level than just the code to fix your current problem. This means your understanding will grow, you will know why you would do X rather than Y, and you will be able to extend that understanding to new problems.
DBAs who don't read books probably don't understand the internals of how the database works very well and are likely to be less effective than those who do read in depth about their profession.
A book I recommend is:
http://www.amazon.com/SQL-Antipatterns-Programming-Pragmatic-Programmers/dp/1934356557/ref=pd_sim_b_1
Also, books on performance tuning are critical reading if you want to use good patterns in your SQL. A huge number of performance problems are caused by badly designed SQL; you should know what works well and what doesn't (the sketch below shows one classic example). Those books tend to be database-backend specific, so look for ones relating to the type of databases you support.
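A sketch of that classic antipattern, assuming SQL Server through ADO.NET and an invented Products table: row-by-row updates from application code versus one set-based statement.

    using System.Data.SqlClient;

    class PriceUpdater
    {
        // Antipattern: one UPDATE per row means N round-trips and N statement
        // executions ("row by agonizing row"). Assumes an already-open connection.
        static void UpdateRowByRow(SqlConnection conn, int[] productIds)
        {
            foreach (int id in productIds)
            {
                using (var cmd = new SqlCommand(
                    "UPDATE Products SET Price = Price * 1.1 WHERE ProductId = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    cmd.ExecuteNonQuery();
                }
            }
        }

        // Set-based alternative: one statement, one round-trip; the optimizer can
        // satisfy it with a single scan or seek instead of N separate ones.
        static void UpdateSetBased(SqlConnection conn)
        {
            using (var cmd = new SqlCommand(
                "UPDATE Products SET Price = Price * 1.1 WHERE Discontinued = 0", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }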

ORM and NH for business people

I have to prepare a case to convince managers to promote development using an ORM. I don't want to go into technical details in this case, the benefits have to be visible to business people.
I'm not quite happy with the arguments I've written down so far. Are there any points I'm forgetting, both PRO and CONTRA?
The case I'm going to make will be in two points:
Convince managers to use an ORM
Convince managers to use NHibernate
The PROs for ORM:
Solves the impedance mismatch between a rich ecosystem of connected objects with behaviour and tabular lists of scalar values
Higher productivity => reduced time writing tedious data access code lets you focus more on solving 'real' business issues
Higher maintainability => reduced number of LOC == system is easier to understand (hmm... maybe...)
Almost no performance hit when used right
The CONTRAs for ORM:
O/R mapping tools do not perform well with bulk processing of data. Stored procedures may have better performance, but are not portable
Heavy reliance on ORM software has been pointed to as a major factor in producing poorly designed databases
The PROs for NH:
Very mature product
Supports a lot of DBs => developers don't have to learn a new SQL dialect on every other project
High mind-share amongst .net community leaders
Many examples, articles, blog posts
It's open source
The CONTRAs for NH:
Not suited at all for batch processing
No code generation or code designer => some people think those make developers more productive
Bad reputation due to lazy coding (== abuse of lazy loading)
It's not from Microsoft
It's open source => some companies just don't like that
Business people typically think in terms of cost and deliverables. Beyond that, most don't grasp or care about the technical reasons.
You already mentioned the higher-productivity aspect... try phrasing that as spending more time on business rules and less time on repetitive CRUD code.
I'd add, and highlight this:
Lower development friction makes meeting deadlines easier
Reduces cost of development in time spent working with the database
Reduces cost of maintenance
NHib is flexible; can handle object mapping automatically, and allow specific queries/stored procedures when needed
Also check out Fluent NHibernate and Fluent Migrator.
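To give a feel for what the automatic mapping side looks like, here is a minimal Fluent NHibernate class map; the Customer entity and its table are invented for illustration:

    using FluentNHibernate.Mapping;

    public class Customer
    {
        public virtual int Id { get; set; }       // virtual members let NHibernate proxy the class
        public virtual string Name { get; set; }
    }

    public class CustomerMap : ClassMap<Customer>
    {
        public CustomerMap()
        {
            Table("Customers");
            Id(x => x.Id).GeneratedBy.Identity();
            Map(x => x.Name).Not.Nullable();
        }
    }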
I'd actually like to rebut some of the negative points.
O/R mapping tools do not perform well with bulk processing of data. Stored procedures may have better performance, but are not portable
NH has many optimizations for bulk data processing (batching and caching are the first that come to mind). And you can always add stored procedures for some cases, it's not an "all-or-nothing" proposal.
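As a rough sketch of the batching pattern, assuming adonet.batch_size is set to a matching value in the configuration: flushing and clearing the session periodically keeps the first-level cache from growing without bound.

    using NHibernate;

    static class BatchSaver
    {
        // Assumes adonet.batch_size = 50 in the NHibernate configuration, so the
        // queued inserts go to the database in batches of matching size.
        public static void SaveMany(ISessionFactory factory, object[] entities)
        {
            using (ISession session = factory.OpenSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                for (int i = 0; i < entities.Length; i++)
                {
                    session.Save(entities[i]);
                    if ((i + 1) % 50 == 0)
                    {
                        session.Flush();   // send the current batch of inserts
                        session.Clear();   // evict tracked objects so memory stays bounded
                    }
                }
                tx.Commit();               // final flush happens on commit
            }
        }
    }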
Heavy reliance on ORM software has been pointed to as a major factor in producing poorly designed databases
I've actually seen the opposite: supposedly optimized DB-first designs that fall apart when the real use cases are implemented. In any case, it's a developer failure; you can do poorly with or without an ORM.
Not suited at all for batch processing
Absolutely not true. NH has many features specifically designed for batch processing, like Stateless Sessions. Of course it's hard to beat the performance of an SP running in the DB server, but apart from that, it'll usually do just as well as ad hoc ADO.NET code.
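A minimal sketch of the stateless-session API: no first-level cache, no dirty tracking, no lazy loading, so memory use stays flat however many rows you push through.

    using NHibernate;

    static class BulkInserter
    {
        public static void InsertAll(ISessionFactory factory, object[] entities)
        {
            using (IStatelessSession session = factory.OpenStatelessSession())
            using (ITransaction tx = session.BeginTransaction())
            {
                foreach (object entity in entities)
                    session.Insert(entity);   // executes the INSERT immediately, nothing is tracked
                tx.Commit();
            }
        }
    }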
No code generation or code designer
False. There are several products that provide that. Check http://nhforge.org/wikis/general/commercial-product-ecosystem.aspx
Bad reputation due to lazy coding
If you do SELECT * FROM TABLE it will perform badly too. I fail to see how NH is to blame.
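The abuse in question is usually the N+1 select problem, and it is fixed by stating the fetch plan up front. A hedged sketch (Order/Lines are invented names; HQL shown):

    using System.Collections.Generic;
    using NHibernate;

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual IList<OrderLine> Lines { get; set; }   // lazy-loaded by default
    }

    public class OrderLine
    {
        public virtual int Id { get; set; }
    }

    static class FetchingExamples
    {
        // Abuse: one query for the orders, then one extra SELECT per order as
        // soon as each lazy Lines collection is touched (N+1 selects).
        public static void NPlusOne(ISession session)
        {
            IList<Order> orders = session.CreateQuery("from Order").List<Order>();
            foreach (Order order in orders)
            {
                int lineCount = order.Lines.Count;   // triggers a SELECT per order
            }
        }

        // Fix: declare the access pattern up front and everything arrives in one
        // joined query. The ORM was never the problem; the query shape was.
        public static void JoinFetch(ISession session)
        {
            IList<Order> orders = session
                .CreateQuery("from Order o join fetch o.Lines")
                .List<Order>();
        }
    }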
It's not from Microsoft
Neither are Oracle, the iPod, or BMWs, yet people use them.
It's open source
There is commercial support available. And, unlike what happens with MS products, the support is provided by people who know the internals and can fix them in a few hours instead of a few years.
When your manager decides which technology you have to use, your manager should be able to understand the technical reasons. Otherwise, he should trust the developers (they have the knowledge to make the decision) to decide. I can think of more useful things for a manager to do than making technical decisions he does not understand well enough.
When your boss is like Dilbert's pointy-haired boss, I would go for the time- and money-saving argument (for every technical decision the manager wants to be involved in).
Take a look at this list of features of a real-world ORM. For any non-trivial project you will eventually end up using 80%+ of those features. So, either you can build those features yourself or you can use NHibernate. If you choose to build it yourself, be prepared to invest about $7.5M and 140 person years.
Or you can just use NHibernate, save the money and effort, and benefit from the huge knowledge base available on the net, books, community, etc, and even use (if necessary) one of the high-quality commercial support providers available.
You need to express it in terms of time and money: impact on development and maintenance time, and impact on user time. ORM use is all too often the cause of a system strangling itself with poor performance (by which point it is too far along the design path to change), so you need to explain to the managers how you intend to avoid that. Frankly, dev time and maintenance time are peanuts compared to the user time wasted by a bad ORM implementation. Yes, it can be done right, but it usually isn't, often because the devs who use an ORM don't understand databases and don't want to. An ORM in the hands of a database expert is a good thing; an ORM in the hands of an application programmer who can't write basic SQL and doesn't even understand joins is a disaster waiting to happen.
No code generation or code designer
As Diego pointed out, there are third party tools. However, I would like to stress that NOT having to use a designer is a strength of NHibernate. Designers typically don't scale in the following ways:
What good is a design surface when an application has many, many entities? A visual designer just slows you down with noise at that point.
Designers typically have issues with merging. They work great when only one developer needs to edit the model at a time. The more devs you have on a project, the more potential trouble there is for merging the designer files.

As a programmer how much are you expected to know outside of programming? [closed]

I'm wondering what you do as a programmer that's not programming but is necessary for your task (e.g. local setup, server setup, deployment, etc.). I'm curious to know how many non-programming tasks people are performing.
For example, when on web development projects I often:
Install servers
Manage user rights/access to servers
Perform backups
Configure IIS/Apache
Setup FTP sites
On non-web projects I often:
Write build scripts
Setup source code management tools/procedures
Probably more stuff I'm not thinking of
Some tasks are more related to programming than others (such as writing build scripts) but others fall outside of my area of expertise (domain setup comes to mind). Just interested to know how many people perform tasks in their jobs that are not programming related.
The sad reality is that non-technical people look at technical people and expect them to know everything that is technology related, not understanding that there are specializations within technology which we might know nothing about.
I often think it is very much like a doctor that specializes in a particular discipline. All doctors have a baseline of knowledge in the medical field, but will not know the specifics of other specializations (a cardiologist will not know as much about anesthesiology and vice versa).
So while I think it is unreasonable for people to expect technologists to know everything, I do think that it is reasonable for them to expect that we know something when it comes to technology.
I think a more important facet of this question is how much one is expected to know about the specific domain where they apply their skills (finance, manufacturing, etc, etc). I think that is incredibly important, as having that domain knowledge makes them much more valuable as a programmer, as they can understand the problems on a deep level, and as a result, provide more comprehensive solutions for them.
Expected? Almost nothing, but everyone's always really happy when you know more.
The more you know outside the narrow confines of programming, the more valuable you are to your employer.
Things that have come up for me:
requirements gathering
writing use cases
evaluating test plans
negotiating with vendors
tax law
revenue recognition rules
ideas about how users behave
basic economic theory
usability guidelines
differences in consumer behavior in different countries
system administration (being a full on sysadmin)
database configuration, optimization, setup (basically being a DBA)
monitoring systems
networking principles and techniques (you'd be amazed how handy a packet trace can be when debugging something...)
being able to evaluate a business plan written by someone else
image manipulation
how to defuse a situation and avoid arguments
how to corner someone and make them commit to something when they don't want to
how to choose battles
I think the non-programming skill I use the most in my programming job is writing. It's really crucial to be able to explain ideas, designs, algorithms, and so on, and you can never count on being around to do it in person (or having the time). I spend a good amount of time at work writing up design documents and other documentation so other engineers can get their heads around my code and algorithms. So I'm really thankful that I had good writing classes in school and can put a sentence together. :-)
Probably depends on the size of the company you work for. As someone who has worked mainly at small to medium sized businesses, I've also been responsible for:
database creation, management, and tuning
supporting the internal applications I launch
managing website certificates
setting up external hosting
and I'm sure there's more as well
Well, since a programmer's primary tool is his computer, I think it's fair to assume some expertise with it. Most of those sorts of things you've described are difficult for someone unfamiliar with computers, but pretty easy (even with little prior experience) for someone who understands the domain and knows how to find and read documentation.
In a big, well-organized business or project, I'd expect someone who was more specifically familiar with those sort of administrative things to take care of them. However, if there's not enough of them to warrant a full-time job, then I don't think it's unreasonable to have anyone competent work on it; and programmers are probably at the head of the queue in that regard.
I find the vast majority of "bugs" discovered by users are configuration problems with the systems on which the application is installed. Having developers that understand the common machine and network setup errors is very desirable.
For example, if an application sends email as part of its operation, it's useful to have developers knowledgeable in DNS and SMTP configuration.
Of course, it depends on the size of your business; large organisations can probably shield developers from this by using other specialists.
I've realized I'm never really hired for the stated job, but as a problem solver. Whether I figure out what's going on and fix it through code, software, or something on the network, that seems to be the main perception of what clients want.
This will vary greatly depending on where you are. I've worked with people who know none of this stuff, and people who are experts.
Knowing this will help you greatly. In general it's always better to understand the environment your code is running in. Not understanding the context leaves you somewhat helpless.
Additionally there are often bugs that are not code related but configuration related, for example a page not showing up because of the apache configuration. You're very handicapped in debugging if you don't understand the environment.
People around a workplace often expect a programmer to be their IT help-desk guy... it happens to me around here. Argh.
Where I work, all developers are expected to be able to use Subversion and have to be able to setup and configure Apache and Tomcat on their PC.
The biggest challenge is not the technical issues associated with getting the environment up and running but the domain knowledge required to effectively develop software in a small shop. For me, I work on a lot of different projects from a variety of sources in a mostly isolated development environment. This means that I need to come up to speed on the domain of the project pretty quickly in order to be effective in developing a solution. In the past I've worked on print accounting solutions, active directory management, research survey databases, and currently a quasi-CRM solution for a charitable organization. I wish I only had to know the nuts and bolts of setting up my development and build environment.
It often depends on the size of the company. In a little company, you have to know how to do everything, including systems admin, and network admin, even if your job is focused on programming.
In a big company, you get to see a little slice of the universe, and they often don't like you peeking outside of your box. Not only do you not need to learn everything, they're often unhappy with you if you try.
However, the more you understand about the machines, how they work, and how they function in an operational environment, the easier it is to diagnose problems and write better software. The more you understand about the domain you're writing applications for, the better you are able to differentiate between the users' needs and their desires.
One of the coolest things about being a software developer is that you have a lifelong excuse for sticking your nose into both the technologies and the various business domains. If you've moved around a few different industries, you tend to become loaded down with all sorts of interesting tidbits. There is always more to learn ...
Paul.
It's good to expose yourself to other technologies, but I really think it's a bad idea for you to not fully disclose the fact that you aren't experts in those areas (esp. domain setup). I've worked with people who thought they could do it all but ended up doing those tasks so poorly that with all the time (and money) they've spent trying to get it right, a consultant would have been paid for several times over.
I've worked at a company where I was responsible for everything "related to a computer" including the domain, PCs, database, custom software, builds, MS Office, PowerPoint, Quickbooks...; a mid-size company where it was development and builds; and a large company where I focus solely on the .Net code for my project (someone else handles the database and another handles reporting).
The mid-size company has been the best experience so far (pretty new at the large company) where I was given enough responsibility to feel useful and had easy access to everyone else to ask questions about those other tasks.
You are not alone out there. The position I signed up for was "ASP.NET Web Developer"... However, my job consists of:
Windows Server Administration
Limited Linux Administration (running top to monitor CPU utilization and changing Apache configs)
LDAP Administration / Tuning
MS SQL Server 2005 Administration / Tuning
Database Development
Crystal Reports Development
Perl Scripts
C# Win32 Development
C# / ASP.NET Web Development
Managing User Access Rights for Windows Servers
Limited Network Troubleshooting
Being in a company that is constantly striving for supreme "Operation Effectiveness", my task list only grows by the day. I did not make up that list either; I have touched or supported every item mentioned above in the past 3 years I have worked at this company.
That being said, in a good development shop, you should have one specific task. As the saying goes, Jack of all trades ... master of none.
This depends greatly on what you're programming. If you're doing low level device drivers, it's vital that you understand the underlying hardware. If you're doing a standalone Java app, the better you understand the JVM and libraries you're using, the better - but it isn't strictly necessary to know a lot.
In general, the more you understand about your system environment, the better. How much your peers and management expect you to know depends on them.
Ignorance will, eventually, be punished. If not by your peers and management, the world will do it. Check any week's headlines or the RISKS digest for examples where ignorance of the system environment caused software failure.
[rant mode on]
Ha, the curse of Excel and Word.
Outside work - particularly friends and family but sometimes when consulting or delivering software too, any and all non-technical people expect you to understand these. There's that internal groan when someone asks you across to have a look at a small problem they're having with some facet of Office. And because it's a client and you want to appear helpful you agree.
There's just this blanket expectation that because you're a developer you have an innate knowledge of configuring spreadsheets, fixing Word templates, and any and all other office techie tasks, and furthermore you can cast your eye over some badly configured Office mess and instantly diagnose what the problem is.
I can only just about manage to put together a spreadsheet to schedule my recurring invoices and set up a Word template to write them. I regularly tell people that too - but no-one ever listens.
It depends a lot on the type of software you're currently developing
For example, when I was working on software for a local government, I had to learn things like
What are the rules for registering animals (pets). What are the types of registrations, what discounts apply, what are penalties for not registering on time
How are council rates calculated. How are rates raised yearly (actually, the algorithm for raising yearly rates and its implementation was the most complex task I've met so far).
How are building permits issued. What types of inspections can be performed. Who is involved in the process of issuing a building permit (owner, builder, architect, officers etc.)
How often are water meters read. How are water meters assigned to properties, how many dials are on a water meter, how to detach a water meter from one property and to attach to a different one
What are different pension types. What are different discounts that are granted depending on a pension type.
What are different types of receipts. What different types of terminal printers (those that are used to print small receipts) exist and how to print to them.
What are properties, strata children, what are rules for dividing properties into 'parcels' ...
Well, that's just part of the non-programming stuff I learned during my 2 years on the project. The most unfortunate thing is that, now that I've moved to a different company, there is very little chance I will ever use any of this knowledge.
My job title is "Senior Software Engineer". In point of fact, for most of the past several years, I did fairly little software development, but did do a lot of:
Systems & web administration
Static web page development with HTML (I don't consider that programming, although I have done PHP, CGI, and JavaScript).
As others have said, help desk sorts of stuff, although not as much as in the past.
As a "task leader", I'm expected to have some people/management skills, although that usually devolves to writing monthly reports. I also get sucked into CMMi stuff from time to time, which in an ideal world might be somewhat relevant, but is usually just record keeping so the employer can bid on new contracts which require it.
Working in science lab, there's a need to know some of the science, especially if you want/need to work on the code doing the scientific calculations.
Working in a (U.S.) government facility, there's lots of paperwork and a need to know lots of government regulation (e.g. Freedom of Information Act)
Fortunately, I've recently made an internal transfer where I'm doing more development work and less of this other stuff!
Personally, I find that knowing more is always good; it paves the way to the next level. The hardest problems in life are at the integration points. Literally. People focus a lot on specializing, but don't forget that you need people who can straddle both realms.

Should you standardize on a scripting language in a dev group?

At work we write a small to moderate amount of scripts to aid us in normal development. We have some people that are more comfortable in python, some in perl, some in php, etc...
Sometimes I think it is best to let people work in a language they are most comfortable with. This can mean that sometimes people can do a better job on a script (as they know more tricks in one language). It can also lead to less development time per script.
Other times I think that we would benefit the most from standardizing so that there might be more shared libraries and so that we don't get into the situation of "I can't work on that script because I don't know python".
Do you think that we should standardize or let people choose for each script they write?
I would be inclined to let people choose, and hire people who are comfortable at learning new things. Gaining basic fluency with almost any language should be easy enough for a good developer. And for small scripts where the original author isn't far away, it's even easier.
The second part of the above is the hard part, of course. But you'll end up with a more flexible, more knowledgeable team.
I would advocate standardizing on a couple languages. "Thou shalt use either Python, or Perl, or Ruby. Not Rexx, nor PHP, nor NewBatch, nor aught other, for thy brethren ought to be able to read thine writing without undue despair or cutting of their skins".
My opinion is that it is just like any other development environment. Do you standardize on one development language? Why? The same should be true for your scripting environment. Not only do I lean toward standardizing on a specific language I think you should standardize all the same things that you are hopefully doing in your programming environment (naming conventions, coding style, etc). Of course there are counter arguments and there will occasionally be exceptions, but hopefully they would be few and for valid reasons.
Like everything there are upsides and downsides.
As a developer I dislike being limited by "official company standards." More often than not these rules tend to constrain and stifle.
I think what is most important is to always have at least two people on staff who can both read and write each scripting language, so one person can go on vacation or be sick without holding up the works.
For major components of your system it is probably better to standardize on a single language - as much as it pains me to say so.
I think the right place to ask this question is with your own team.
Your team should form a consensus as to whether or not they want a common platform or whether they want the freedom to choose the right tool for the job. There is no single best answer to this question any more than there is a single best answer for "what's the best scripting language".
Certainly there are advantages to each approach. If every team member is free to choose their own language, they might be more productive and your business will reap the benefit. On the other hand, if someone writes a critical tool in a language only she knows and then something happens to her (illness, new job, etc) you can be stuck with a single point of failure that no one knows how to fix.
So long as the languages in question are used for scripting, I agree with other commenters that it should be left to the devteam, and different languages should be tolerated in most cases.
If the language is used for your main codebase, you had better standardize on one.
If some developers complain about a script not written in their language, encourage them to learn it, particularly if it's one of the widespread ones like Ruby, Perl, et al.
No.
Your dev environment is hopefully a living entity that changes and is cultivated. It should be able to accommodate new abilities (scripting languages), and older corners that haven't been visited for years will someday be obviated by tools, or revisited from time to time and re-evaluated. Hopefully the dev scripts are a minor part of development and automation. Chances are the team will drift towards a set of standards (ant, python, etc).
Per project, anything checked in should probably follow a rule of minimal complexity.
Your team will be gone some day, and someone else will have to come in and maintain this. Please don't make them learn 5 languages, or they are going to look for your names in the source code and hunt you down.
When we've picked up hack projects like that, we've pretty much considered the creators complete fools and made fun of them. They couldn't bother themselves to learn one scripting language???
That said, if you don't check it in--who cares?

Formal Methods and Enterprises [closed]

So...
I teach formal methods in software engineering. I also teach "agile methodologies". Most people seem to think this is contradictory. I think it makes a lot of sense... I also work for a company, where we need to actually get things done :) While I can apply my earned skill points on "specification" on a day-to-day basis, my colleagues typically flee from the word "formal".
I used to think that this was due to the intrinsic way we learn how to program: we are usually driven to find a working solution, not to understand the problem. Then I thought it was because most people in the formal community are not engineers, but mathematicians or computer scientists. Nowadays, I wonder if it is just because the formal-methods community hides behind some kind of "obfuscation" law, using all the available UNICODE symbols, actively developing rude, unaesthetic tools, and laughing in the face of standards.
Yes, I've been moving from a "blame them" to a "blame us" perspective ;-)
So, my question is: do you use any kind of formal methods in your company? Have you introduced them, or were they pre-requisites? What techniques do you use to clear the fog of mathematics from people's fears and incite them to use formal methods? What do you think current tools are lacking for a more general usage?
The key to getting people to buy into any methods or methodologies is to show them how it solves problems they are having. If they can see it will make their lives better you have a much improved chance of getting them to adopt the techniques.
And if you can't show them that, perhaps you wanted to adopt the methods based on philosophy rather than practicality. Unless the others share your philosophy then you're not going to get anywhere. And perhaps you shouldn't.
Over the decades there have been a great many methodologies. Newer ones always address the shortcomings of the old ones, yet projects still get in trouble and fail. Why? Because the rock stars that come up with new methodologies are rock stars, and have made a new methodology precisely because they understand the underlying issues and how to apply them. Those who come after tend to blindly follow the recipe, and it doesn't work so well.
So I think the best thing is to teach about the underlying problems and then show how various methods attempt to deal with those problems. The differences between companies, projects, and teams are so great that no one methodology can be applied successfully to all combinations. Learning to choose an appropriate tool and apply it well is crucial.
Thank you for all the contributions. They are very insightful. Allow me to flame a bit (don't take it personally, though :-)
Most people seem to think that formal methods are just about program verification. Or critical systems. This may be true if we pursue the ultimate cliche: proving we are doing the program right (vs. validation, which asks, as a contributor said, whether we are doing the right program).
But consider model finding/checking tools, such as Alloy. Learning to use a tool like this takes a negligible amount of time for anyone used to UML and OO. Still, it can give you immediate insight into your model. It usually takes no more than 10 minutes to find a counter-example over a small enough subset of the model one is trying to use (and that includes describing the model in Alloy in the first place).
Take requirements engineering as an example. One usually draws a lot of UML. Few people use OCL, though, and many business rules are informally annotated in natural language. Why? Time constraints?
Now consider the fact that the majority just use their gut feeling to convince themselves that a model is satisfiable. Again, why? I could take the same amount of time (probably even less, since I don't need to care about drawing aesthetics) to write that model in Alloy and simply check it for satisfiability. And what kind of mathematics do I need to know? "Predicates"? A fancy name for IFs and booleans ;-) Quantifiers? Fancy names for ForEachs()...
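To show how small the gap is, here is the same idea in plain executable form. This is not Alloy, just a hypothetical C# model check (invented names) where the quantifiers are LINQ calls:

    using System.Collections.Generic;
    using System.Linq;

    public class Employee
    {
        public string Name { get; set; }
        public Employee Manager { get; set; }   // null for the CEO
    }

    public static class ModelChecks
    {
        // "Exactly one CEO, and nobody manages themselves": two quantified
        // predicates, checked mechanically over any concrete model instance.
        public static bool Satisfies(IEnumerable<Employee> staff)
        {
            List<Employee> employees = staff.ToList();
            bool exactlyOneCeo = employees.Count(e => e.Manager == null) == 1;
            bool noSelfManagement = employees.All(e => !ReferenceEquals(e.Manager, e));
            return exactlyOneCeo && noSelfManagement;
        }
    }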
What about big information systems? They don't need to be critical... Just try to analyze in your head a conceptual (not implementation!) diagram with over 600 classes. I see many people banging their heads against the wall with easy-to-make modelling mistakes because they missed some constraint, or because the model allows stupid things to happen.
The fact is, one does not need to use formal approaches from head to tail. Granted, I could prove a whole application in Coq, and certify that it is 100% compliant with some specification. This may be the Computer Scientist/Mathematician approach.
Still, with a GTD philosophy, why can't I delegate some tasks to the computer and let it help improve my development? Is it really a matter of "time", or plain, simple lack of technical ability and will to learn/innovate?
Working with line of business IT development in an enterprise means having to transfer knowledge about the business from actual business people into the heads of developers. While I myself find abstract maths to be one of the greatest pastimes there is, it's a terrible communications tool. And communications is what it's all about. While I might conceivably have some success convincing IT people to embrace more abstract notations, I basically have no chance with the business people.
While there are some areas where I can see a role for formal methods in an enterprise (math- and logic-heavy specialist software, significant need for provable properties as in safety critical software) they provide little help with getting correct requirements on e.g. how to fulfil a customer order by issuing one or more supply orders to a set of possible external or internal providers.
I think the jury is still out on model based approaches and domain specific languages. I think they will succeed or fail depending on whether they provide quicker feedback from IT to the wishes and needs of the business side, and whether they presume business people will have to do any significant studying.
Technology is easy. Communication is hard. Formal methods may help us do things right, but those I've seen do nothing to help us do the right things. (Yes, these are cliches, but that's because they're inescapably and painfully true.)
I'm taking a course on 'Specification and Verification'. As part of the course structure, we are doing the following:
1. Learning tools like PVS (Prototype Verification System, http://pvs.csl.sri.com/) and SMV (Symbolic Model Verifier, http://www.cs.cmu.edu/~modelcheck/smv.html)
2. Apart from that, we dissect accidents that happened because of software failures, e.g. the failure of Ariane V
I feel formal methods are most applicable in scenarios where the cost of failure exceeds the cost of design, which makes them apt for software used in critical systems. I gather they are used in avionics, chip design, etc., and the automobile industry is also drafting them into practice.
I have tried to get people to embrace formal specification methods a few times (Z and Alloy) and have had the same experience you have: most people, while feeling that they serve a useful purpose, are very uncomfortable using them for actual work.
Funny enough, the same people are more than happy to produce utterly useless UML diagrams in ginormous quantities.
I think there are two main reasons for this:
a.) Many developers are uncomfortable with the level of abstraction required by a formal approach. The fact that most entry-level mathematics education is all calculus and little discrete mathematics might have something to do with this.
b.) Formal methods require a very bottom-up design approach: you design your core model from the ground up, make it airtight, and then connect it to the actual user requirements by providing an interface on top of it. Since we tend to have requirements drive development efforts, a top-down approach feels more natural, although it often leads to inconsistent models. It's like retrofitting a basement underneath your house after it has already been built.
Formal methods make no sense in systems where the cost of failure is low.
In a production web application, you've got multiple front-end boxes, multiple back-end boxes, multiple database boxes - if a program on any one of them fails, it's a non-event. Hardware is so cheap that you can build these systems for far less than the cost of formally specifying all your software.