I'm a non-technical founder (well, non-software; my background is hardware) who has hired a pretty good developer, who has quite capably built a site with a Rails backend and an HTML/CSS frontend. Our next step is to develop a Yodlee integration, and we both want to know how long this will take. He has an estimate which I think is reasonable, but I would like feedback from the community without biasing the responses.
Also, if anybody has done an implementation before, I would really appreciate your perspective and help!
I have implemented a complex Yodlee integration for an LA-based start-up over the last two years. They built a social game and money-management platform on top of it. The short answer is that it's tough and dirty work.
The technical aspect of getting your application to communicate with the Yodlee API is not at all the hard part (it's pretty much a standard web service). The following are some aspects highlighting the difficulty:
The most difficult part is dealing with the unknowns and the variability in the client data.
There is effectively no documentation for the API
There are several ways to do each operation, and they will return different data
I've been designing and building systems for 15 years and have gotten pretty good at estimating projects. We were way off with Yodlee; in fact, we are still dealing with issues.
In order to understand why it's so tough, you really need to understand what Yodlee is: it is an aggregator of 10,000 different systems. Some of those systems are big, professional ones like Bank of America or Chase, but they are often small little banks (Bob's Bank in Omaha).
When Yodlee communicates with the big companies (they are called content services), there is almost always an API that actually returns good data. But with the little ones, they are doing screen scraping. You can imagine that breaks all the time. They have an entire team in India which is focused on just that.
The other issue is about modeling the data: each of the content services has modeled its data differently at the source (different names, different elements, different relationships, ...), but Yodlee must combine all 10,000 models into one view. What this leaves you with is a very bloated model, where you can never know or count on getting a certain data element.
To give you an idea: there are extra fields on a credit account (APR, credit amount, last payment, ...) beyond the standard base-class fields (balance, ...). While it sounds great to have this data, in practice the number of content services that provide these extra data elements is so low that you can't really depend on them. I'd say the fidelity of those data elements is very low. All you can really count on are the base elements: account name, type, and balance, plus transaction date, description, and type.
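To make that concrete, here's a minimal sketch of the defensive mapping this forces on you. It's in C# with invented type and field names (the real Yodlee model is far larger, and your stack is Rails, but the pattern is the same): treat the base fields as required and everything else as strictly optional.

```csharp
using System.Collections.Generic;

// Hypothetical account shape: base fields required, extras nullable.
public class AccountSummary
{
    // Base elements you can reasonably count on.
    public string Name { get; set; }
    public string Type { get; set; }
    public decimal Balance { get; set; }

    // "Extra" elements: provided by so few content services that they
    // must be optional everywhere in your code.
    public decimal? Apr { get; set; }
    public decimal? CreditAmount { get; set; }
}

public static class AccountMapper
{
    // rawFields stands in for whatever key/value data the aggregator returned.
    public static AccountSummary Map(IDictionary<string, string> rawFields)
    {
        var account = new AccountSummary
        {
            // A missing base field is an error worth failing loudly on.
            Name = rawFields["accountName"],
            Type = rawFields["accountType"],
            Balance = decimal.Parse(rawFields["balance"]),
        };

        // Extras are best effort: parse when present, otherwise stay null.
        if (rawFields.TryGetValue("apr", out var rawApr) &&
            decimal.TryParse(rawApr, out var apr))
        {
            account.Apr = apr;
        }
        if (rawFields.TryGetValue("creditAmount", out var rawCredit) &&
            decimal.TryParse(rawCredit, out var credit))
        {
            account.CreditAmount = credit;
        }
        return account;
    }
}
```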
Speaking of transactions: their transaction-categorization system is not that good. They have clearly taken a breadth-first approach to it, rather than focusing on accuracy. We built an entire system for transaction categorization which is far more effective.
A couple of other things: The DAG test-account system is useless; it does not behave the way real accounts do. You will be far better off opening 5-10 accounts at different content services and giving your developers the usernames/passwords for testing. The MFA (multi-factor authentication) system for account security has been an endless headache. This isn't Yodlee's fault; it's the nature of the game. The banks keep doing more and more crazy things that add security layers, and Yodlee has the MFA system in place to compensate for this. At any given time about 20% of our accounts are in error for some reason. We have built an entire component just to manage this.
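For a sense of what that component amounts to, here's a hedged sketch (all names hypothetical, the status set simplified): track each account's last refresh status and keep a standing queue of the ones needing attention.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical statuses; the real set of aggregator error codes is larger.
public enum RefreshStatus { Ok, MfaRequired, CredentialsInvalid, SiteBroken }

public class AccountHealthMonitor
{
    private readonly Dictionary<long, RefreshStatus> _statuses =
        new Dictionary<long, RefreshStatus>();

    public void Record(long accountId, RefreshStatus status)
    {
        _statuses[accountId] = status;
    }

    // With ~20% of accounts in some error state at any time, this is a
    // standing work queue (re-prompt for MFA, retry, notify the user),
    // not an occasional cleanup task.
    public IEnumerable<long> AccountsNeedingAttention()
    {
        return _statuses.Where(kv => kv.Value != RefreshStatus.Ok)
                        .Select(kv => kv.Key);
    }
}
```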
So what does this all mean? Double your estimate and get ready to get dirty. I don't want to put Yodlee down at all (except for the lack of documentation); they really are solving a hard problem, and there really aren't any better options.
I run the team responsible for sales and support of the Yodlee APIs so the response may be a little biased.
I have seen clients get up and running in anywhere from 10 days to three or even six months. The time to implement depends on the number of fields in the data model you are using and how you are going to use or manipulate the data before presenting it to your users.
While the most prevalent data fields, such as account balance or transaction amount, will always be available, Craig is right: as you get into the broader data model, you will have to code for exceptions when the data is not there. Yodlee does provide documentation on how often each field will be available to help with this process. But if you are only going to use basic account and transaction information, you will not have to worry about these complexities, and that will speed implementation.
How you use the data once you receive it from Yodlee will also play a big part in the time it takes to get integrated. If you are deriving additional data from the transaction descriptions or are doing something with categorization then there is more complexity and it will require more time. If you are using many of the fields as-is, then this will be easier.
The other item that Craig mentioned is the extra security questions (multi-factor authentication). While that section of the API does add some work, we have added documentation around it to make integration easier. Also, for any development issues that come up, we give clients access to a developer forum that is monitored by our Technical Consulting team.
Intro:
I'm working for a contractor company. We build software for different corporate clients, each with their own rules, software standards, etc.
Problem:
The result is that we are using several bug-tracking systems. The ticket flow is relatively large, and the SLAs are sometimes deadly. The main problem is that we keep track of these tickets in our own BT (currently Mantis), but we also communicate with clients in their BTs. As it is, too many channels of communication are creating too much information noise.
Solution, progress:
The current solution is an employee who is responsible for synchronizing the streams, keeping track of the SLAs, and many other things. It consumes quite a large part of his time (about 70%) that could be spent on something more valuable. The other problem is that he is not fast enough, and sometimes the sync is not really in sync: some comments are left on only one system, and some are lost completely. (And don't get me started on holidays or sick days; that's where the fun begins.)
Question:
How can we automate this process (aggregating tickets, watching SLAs, notifying the right people, etc.), either partially or entirely?
Thank you for your answers.
You need something like Zapier. It can map different applications and synchronize data between them. It works simply:
You create a zap (for example, between Redmine and Teamwork).
You configure the mapping (how items/attributes in Redmine map to items/attributes in Teamwork).
You generate access tokens in both systems and add them to the zap.
Zapier then synchronizes Redmine and Teamwork on a regular schedule.
But Mantis is not yet supported by Zapier. If all or most of your clients' BTs are in Zapier's app list, you could move your own BT to another platform, or request Mantis support from Zapier.
The other way is to develop your own synchronization service that connects to each client's BT (authenticating with a login/password/token, just as an employee would) and downloads updates into your own BT. This is the hard way, and it requires continuous development to keep up with the current versions of the clients' BTs. A rough sketch of that approach is below.
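Here's a minimal sketch of what such a service could look like, in C#. All of the types here are hypothetical scaffolding; in practice each connector would wrap a real tracker API (on the Mantis side, for instance, its MantisConnect SOAP API).

```csharp
using System;
using System.Collections.Generic;

// One update pulled from a client's tracker. Hypothetical shape.
public class TicketUpdate
{
    public string ExternalId { get; set; }
    public string Comment { get; set; }
    public DateTime When { get; set; }
}

// One connector per client BT, hiding its specific API and auth scheme
// behind a common interface.
public interface ITrackerConnector
{
    string ClientName { get; }
    IEnumerable<TicketUpdate> FetchUpdatesSince(DateTime since);
}

public class SyncService
{
    private readonly IEnumerable<ITrackerConnector> _clients;
    private readonly Action<string, TicketUpdate> _writeToOwnTracker;
    private DateTime _lastRun = DateTime.UtcNow;

    public SyncService(IEnumerable<ITrackerConnector> clients,
                       Action<string, TicketUpdate> writeToOwnTracker)
    {
        _clients = clients;
        _writeToOwnTracker = writeToOwnTracker;
    }

    // Run on a schedule (e.g., every few minutes) so the systems never
    // drift as far apart as a human synchronizer allows.
    public void RunOnce()
    {
        var since = _lastRun;
        _lastRun = DateTime.UtcNow;
        foreach (var client in _clients)
            foreach (var update in client.FetchUpdatesSince(since))
                _writeToOwnTracker(client.ClientName, update);
    }
}
```

The hard, ongoing cost lives inside each connector: every client BT has its own API, auth scheme, and upgrade cadence, which is exactly the continuous-development burden mentioned above.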
You can have a look at Slack: https://slack.com/
It's a great tool for group conversations:
"Talk, share, and make decisions in open channels across your team, in private groups for sensitive matters, or use direct messages one-to-one."
It has a lot of integration tools, and you can use Zapier (https://zapier.com/) with it to program triggers.
With different channels, you can notify the right people, individually or all together, in group conversations :)
The obvious answer is to create integrations between all of the various BTs. Without knowing what those are, it's hard to say whether that's entirely possible. Most modern BTs have an API and support integrations. Some, especially more desktop-based ones, don't; for those you would probably have to monitor the database directly.
Zapier, as someone already suggested, is a great tool for creating integrations and may already have some of the ones you need available. I love Slack and it has an API, but messages are basically just text, and unless you want to do some kind of delimiting when you post messages to its API, it probably isn't going to work.
I'm not sure what your budget is, but it will cost resources to create the integrations. Instead, I'd suggest that you hire someone to simply manage these: someone whose sole responsibility is to cross-populate the internal and external bug-tracking systems and track the progress in each. All you really need is someone with good attention to detail; they don't have to be a developer. This should be more cost-effective than using developer resources on it.
The other alternative is simply to stop. If your requirements dictate that you use your clients' bug-tracking software for the projects you do for them, just use their software and stop duplicating the effort. If you need some kind of central repository for managing work, maybe keep a simple table or spreadsheet with the client, the project, the issue number, the status, and, if possible, a link to the issue in the client's BT. I understand the need and desire for centralizing this, but if it's stifling productivity, then the opportunity costs are too high, IMO.
If you do create an integration tool for this, you will indeed have a very viable product. This is actually a pretty common problem.
I work in a big organization, spread around the world, about 17,000 people. We have to ask for money wayyyyyy in advance for projects, before even really knowing what they are.
So we're redesigning an old intranet. We're spending a chunk of the money we have on user research and a UX design phase, and more on the 1.0, piloting it, dealing with content migration, and all that fun stuff.
Any ideas (insane ballpark here) about how much $ to ask for the next phase (rolling it out, more testing, actually replacing the legacy intranet)? I know details are sparse but there are some silly budget deadlines that require me to pull a figure out of thin air before I even really know what our end users want. If I don't come up with a number, we get nothing and have to hunt around in various budgets to get cash.
What would you throw out for low / medium / high numbers here? Low being open source, out-of-the-box functionality, no insane customization, no more UX research beyond what we're already doing, and off we go. Medium being a nicer step up. High being paying someone like Igloo $400k to use their out-of-the-box platform, and then a bunch of custom plugin development and design work on top of that.
Right now I'm thinking 250k, 500k, 750k. Am I in the right ballpark, or am I over in some other arena that's not even in the same league and hasn't seen a baseball game in years?
As you've all but acknowledged, it's an impossible question to answer. You are constrained by an illogical system which seeks to reduce risk by increasing control in the mistaken belief that accounting for all spend in advance will increase return on investment. Your challenge is to learn how to play that system to your advantage, secure your funding and achieve the freedom to deliver your project with minimal interference.
Support
Fortunately, you are the best-placed person to do this. You have the best understanding of your organisational context and if you look at broadly similar (global, cross-functional) internal projects, you can get a good idea of how these projects are regarded and how they secure and maintain support and funding. Talk to colleagues and learn what you can. Ultimately you aren't looking for a right answer - you need to know what constitutes an acceptable answer and how to proceed when your guess proves to be "wrong".
Start looking externally to learn what you can about projects in similar organisations. Listening to others' experiences will give you a better understanding of potential risks, so dig deeper than the run-of-the-mill success stories. You might consider attending conferences, intranet-focused meetup groups and participating in online communities.
Vision
What is important is that you have a vision for your intranet project. As you go on, this vision may change. You will hear lots of assumptions about what people expect to be on the intranet. It's great to read that you are conducting UX research, as it's vital to understand what people actually need and use.
Needs
A guest presenter at a recent intranet event I ran showed us an intranet she'd built in-house over a five year period which was designed solely to help employees do their jobs effectively.
There were no generic calendar tools, no pages of content that no one ever looks at and no fashionable social features. Instead there was a lean, focused business tool, written in PHP, on which she'd spent GBP20,000 each year.
Equivalent value? Probably a month of Sharepoint consultancy.
It takes careful leadership and communication, but once you stop trying to deliver mediocre generic solutions based on what everyone thinks "should" be on an intranet, you can ditch out-of-the-box tools and start focusing on your users' real needs.
We are going to develop a medium-large customized web app for a private company using MVC 4 and EF 5.
Early analysis and previous experience in that business field show that it will have more than 150 tables/entities, and because our customer isn't a software engineer, we know that our data model will change many times over the course of the project.
Now, my question is: which approach is better for us, given the following?
1) It requires less effort to update the data model and data store in the face of the many changes that will occur.
2) It takes less time to create the data store and helps the project progress faster.
Note: This app will work with big sets of data (10k to 100k objects for some entities). However, it will have few (and sometimes maybe no) concurrent requests and online users.
Thanks in advance
I think the statement
"because our customer isn't a software engineer and we know that our data model will change many times"
is a reality, but it is not the whole reason a data model changes so many times!
BTW, some suggestions that may be helpful:
Consider business analysis a key point, and do continuous business analysis:
Investing more in analysis will lower the cost of development and change management.
Have a specific, talented analyst positioned as a representative in the customer's office.
Have a specific customer representative who is accountable to the analysis and development teams.
Document the customer's business processes and talk about them with the customer. Document the results of those negotiations.
Design and change management
Never design the database alone. Brainstorm with team members, invite the customer representative, invite the business analyst, have a DBA on the design team, and have a change-manager role.
Technology
Keep an eye on technology trends (maybe a document-based database would be helpful).
Have a flexible framework and architecture that serve the production of the business software, and not the reverse!
Changes will come ... be ready
Changes will come; all of the above efforts are aimed at making the solution cost less when changes happen. They will not prevent the changes.
You need an acceptable mechanism for billing those changes to the customer.
ITIL's service-level management (SLAs and OLAs) can serve as a guideline.
So, to answer the questions:
1) All of the above will help with this. Do continuous, interactive analysis first, then design and develop in iterations, using a template approach. Changes will come, and they will cost.
2) It depends on the technology and frameworks; there will be tools (I don't know EF). Just don't insist on a specific platform or library.
Hope this helps.
I can recommend using the Code First approach, because your classes will stay clear, and in your business logic you mostly work with classes anyway.
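To make that concrete, here is a minimal Code First sketch (the entities are invented for illustration). The classes are the single source of truth for the model, and with EF 5's Code First Migrations (Enable-Migrations, Add-Migration, Update-Database in the Package Manager Console) each model change becomes a scripted, repeatable schema update, which speaks directly to point 1 of the question.

```csharp
using System.Collections.Generic;
using System.Data.Entity; // Entity Framework 5

// Plain classes define the model; the database schema follows from them.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public virtual Customer Customer { get; set; }
}

public class AppContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}
```

When the customer inevitably changes the model, you add or rename a property, run Add-Migration, review the generated change script, and apply it with Update-Database, rather than hand-editing 150+ tables.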
Our company has recently decided that a good section of our IT department is actually doing product development and not internal IT development and now has created a new department.
What are the types of changes that developers should be looking to make during this type of transition?
Is there really any difference between internal development and product development?
I don't know how quickly the differences will assert themselves in an existing group which is transitioning from one role to the other, but having worked as both an internal developer and a product developer, two huge differences leap to mind: requirements and testing.
As a developer of internal tools, I was pretty much given free rein regarding interface, organization, and even scope. Specs came in the form of an email saying "can you write something that does X?" Similarly, testing was almost non-existent. After whatever testing I was able to do myself, the tools would be deployed directly to their target audience, and bug reports, when they came at all, came directly from those end users, again usually via email or even hall-tackle.
Now that I'm doing product development, the difference is dramatic. Specs are 30-100 page Word documents and we have a dedicated testing department which makes darn sure that what we produce matches those specs. I'm much better supported by the project managers and have clear channels for any feedback I have on requirements or design. It could be argued that product development offers less freedom to the individual developer, but in exchange for being part of a (hopefully) better-organized and better-supported team.
[I work with Jeff.]
As others have mentioned, a key difference stems from the nature of the user:
When developing internal applications, you're typically dealing with users from a single department or group who use a limited number of apps, and they're mostly a captive userbase.
When developing external products, you're dealing with users who see the whole enchilada, and they're paying customers who can take their business elsewhere.
That difference matters because a coherent user experience across multiple applications becomes much more important for paying customers who see the entire set of apps.
In the IT case, neither the business nor the users themselves are typically concerned about the fact that their app doesn't look and work like some other app that some other department uses. And if for whatever reason they did care, their ability to do anything about it is more limited. While it's possible to make a business case for consistency across apps (branding to internal employees, usability for employees who use multiple apps, reduced development costs resulting from the reuse of common libraries, patterns, services, etc.), this kind of concern typically takes a backseat to the development of new business functionality.
In the product case, the users do care about coherence across the entire set of apps, and they can do something about it if the experience is weak.
So one major difference is the relative importance of a coherent, quality user experience. But that goal itself has significant organizational ramifications that we're already starting to see.
We'll see an increased emphasis on "horizontal" activities that seek to establish standards and improve communications across teams, since such activities directly support the goal of producing coherent products. Cross-cutting teams (like user experience) will become more influential than they formerly were, we may see new cross-cutting teams (e.g. teams that look at architecture across many systems rather than just a single system), we'll probably see more presentations about what different apps do, more cross-training, etc.
App development teams will have less autonomy than they formerly had in charting their own course. The user experience team (working with end users, business stakeholders and engineering teams) will specify standards around visual design, interaction design, etc. and the app teams will be expected to adopt those. We'll see test practices become more standardized. Builds and deployments across apps will be more uniform and coordinated. Monitoring, alerting, response time SLAs and other operational metrics will be more uniform across apps. The app teams won't be able to define these for themselves anymore (though they can certainly contribute to the larger discussion).
Management will increasingly allocate resources in a way that seeks to optimize globally across the entire product instead of optimizing locally for individual apps. So where in the past the composition of individual app teams has been relatively fixed (developers, SQA, etc.), going forward we should expect to see more fluidity.
A huge difference is in your customer. IT developers have the rest of the company (and sometimes partner/subsidiary companies) as their primary customers. Product-development developers have paying customers (i.e., the people who buy the product that is the company's reason for existing) as their primary customer.
Yes, very much so. There's a huge difference between performing an activity that drives revenue and invoices and one that's perceived as overhead.
My experience is that people working on the product-development side of the house have bigger budgets, better training, better travel, and more skilled employees. Having worked in a product-development company, it always felt like the lower-skilled employees were thrown over to the IT department.
In-house development is very process-oriented. They just want to deliver xyz functionality and have it work, based on a company-wide strategy. It's not the iterative make-the-product-better cycle you get when your primary product is your code. As a result, in-house stuff is often just 'good enough', whereas software companies tend to keep improving their product long after it 'just works'.
Note that while in-house dev teams can get away with being 'good enough', software development companies can only get away with it for a while before they lose out to the ones who strive to improve.
I think moving between the two environments could be a shock to the system in either direction. That's not to say they can't both be run in the same way, just that in my experience they probably won't be. For example, UI is usually given a lower priority for in-house software, as the customers are generally being paid to use it rather than paying to use it.
I have spent equal amounts of time in both types of organizations. The particular organization where software development was part of an IT department treated it as a cost center, whereas the one where software development was part of a product treated it as a profit center.
The two are very different. The skill level of developers in my case was vastly different: those working on a public product were better skilled overall and cared more about the quality of their work. As a developer of a real public product, you are actually making the company money.
In my internal software job, I typically had a set of known, fixed requirements mostly worked out. I designed a solution, but was always given a deadline that was unreasonable if quality was a concern (including code quality), rushed the coding, and delivered the result. Any bugs that slipped past the short QA process typically got fixed only if a formal request was made.
Product development, in my experience, is almost the reverse. Not all the requirements are fixed (only what I was working on for a given week was fixed), design is usually dictated by whoever has worked on the product the longest, I get to decide how long something takes (but have to really explain and justify the time), and the coding is usually not rushed. Experimenting with ideas is generally more difficult in product development, because a product meant for public use should stick to tried and tested approaches.
Therefore, I would say that if creativity is really important to you, product development may not be for you: a particular idea that you personally have is unlikely to ever make it into the product unless you can make a business case for it and somehow make it more important than what the business has already been planning.
Choosing a particular library is also more difficult depending on how the software is deployed. For example software used by the government typically has to pass Common Criteria certification, which can eliminate certain library choices.
How can our team gather requirements from our "Product Owner" in as low-friction yet usable a way as possible?
Now here are the guidelines: no posts saying it can't be done, or that the business needs to decide that it cares about quality, yada yada. The product I work on belongs to a small group that has been successful for years. I just want to help them step it up a notch.
Basically, I'm on a 6- or 7-person team with one Product Owner. She does a great job but is juggling a few different roles (as I believe is common on extremely small teams). Usually requirements are given at sporadic times (email convos, face-to-face discussions, meetings, etc.). They are never entered into a system, and sometimes this results in features missing a release, or the release getting pushed back because everyone forgot about a necessary feature.
If you're in a similar situation but you found a way to overcome this, I'd love to hear it. I'm happy to write code to help ease this situation but it can't be a web site that the Product Owner has to go to in order to get anything done. She is extremely busy and we need some way of working together as a team in order to gather these requirements.
I'm currently thinking of something like this: Developers and team members gather requirements discussed in face to face meetings and write some quick notes on the features discussed on a wiki page. Product owner is notified whenever these pages are updated and it then becomes her responsibility to ensure accuracy.
Pros: We'll have some record of the features. Cons: The developers are taking responsibility for something that they ordinarily wouldn't. I'm okay with that here. I think in this situation it's teamwork.
Of course once we do this, then we're going to see that the product owner probably doesn't have enough time to ensure feature accuracy. Ultimately she is overburdened and I think this will help showcase that fact, but I just need to be able to draw attention to that first.
So any suggestions?
P.S. Her time is extremely limited, so it is considered unreasonable to expect her to type in the requirements after a discussion. She only has time to discuss them once and move on.
Although the concept of a "product owner" is a little ambiguous to me, I think I am working in very similar circumstances: the customer is extremely busy and is always the bottleneck in developing requirements.
On the surface, what we try to do in this situation is quite obvious and seemingly simple: we try to make sure that the customer is involved in "read-only / talk-only" mode. No writing. Minimum reading. Mostly talking.
The devil, of course, is in details. So, here are some specifics about our process (in no particular order):
We often start from recording problem statements, which are the ultimate sources of requirements. In fact, sometimes a problem statement is all that we record initially, just to make sure it does not get lost.
NB: It is important to distinguish problem statements from requirements. Although a problem statement sometimes clearly implies some requirement, in general a single problem statement may yield a whole bunch of requirements (each having its own severity and priority); moreover, sometimes a given requirement may define a solution (usually just a partial one) to multiple problems.
One of the main reasons for recording problem statements (and this is very relevant to your question!) is that semantically they are somewhat "closer to the customer's skin" and more stable than the requirements derived from them. I believe those problem statements make it much easier and quicker to put the customer into the proper context whenever he has time to provide feedback to the development team.
We do record all the requirements (and back-track them to problem statements), regardless of when we are going to implement them. Priorities govern the order in which requirements get implemented. Of course, they also govern the order in which the customer reviews unfinished requirements.
NB: A single fat document containing all requirements is an absolute no-no! All the requirements are placed in "problem tracking database", along with bug reports. (A bug is just a special case of a problem in our book.)
We always try to do our best to minimize the number of iterations necessary to "finalize" each requirement (or a group of related requirements). Ideally, a customer should have to review a requirement only once.
Whenever the first review turns out to be insufficient (happens all the time), and the requirement in question is complex enough to require a lot of text and/or illustrations, we make sure that the customer does not have to re-read everything from scratch. All the important changes/additions/deletions since the previously reviewed version are highlighted.
While a problem or requirement remains in an unfinished state, all the open issues (mostly questions to customer) are embedded into the document and highlighted. As a result, whenever the customer has time to review requirements, he does not have to call a meeting and solicit questions from the team; instead the customer can open any unfinished document first, see what exactly is expected from him, and then decide what's the best way and time (for him) to address any of the open issues. Sometimes the customer chooses to write a email or add a comment directly to the problem document.
We try our best to establish and maintain official domain vocabulary (even if it gets scattered across the documentation). Most importantly, we practically force the customer to stick to that vocabulary.
NB: This is one of the most difficult parts of the process, and customer tries to "rebel" from time to time. However, at the end of the day everybody agrees that it is the only way to make precious meetings with the customer as efficient as possible. If you ever attended one-hour meetings where 30 minutes were being spent just to get everybody on the same page (again), I'm sure you would appreciate having a vocabulary.
NB: Whenever possible, any changes in the official vocabulary get reflected in the very next release of the software.
Sometimes, a given problem can be solved in multiple ways, and the right choice is not obvious without consulting with the customer. It means that there will be a "menu of requirements" for the customer to pick from. We document such "menus", not just the finally chosen requirement.
This may seem controversial and look like unnecessary overhead. However, this approach saves a lot of time whenever the customer (usually a few weeks or months down the road) suddenly jumps in with a question like "why the heck did we do it this way and not that way?" Also, it is not such a big deal to hide "rejected branches" using proper organization/formatting of the requirements documentation. Boring but doable. :-)
NB: When preparing "menus of requirements", it is very important not to overdo them. Too many choices or too many choice nesting levels - and the next review may require much more customer's time than really necessary. Needless to say that the time spent on elaborated branches may be totally wasted. Yes, it is difficult to find some balance here (it greatly depends on the always-in-a-hurry customer's ability to think two or more steps ahead and do it quickly). But, what can I say? If you really want to do your job well, I am sure that after some time you will find the right balance. :-)
Our customer is a very "visual" guy. Therefore, whenever we discuss any significant user interface elements, screen mockups (or even lightweight prototypes) often are extremely helpful. Real time savers sometimes!
NB: We do screen mockups exclusively for the customer, only in order to facilitate discussions. They may be used by developers too, but in no way do they substitute user interface specifications! More often than not, there are some very important UI details that get specified in writing (now - primarily for developers).
We are lucky enough to have a customer with a very technical background. So we do not hesitate to use UML diagrams as discussion aid. All kinds of UML diagrams - as long as they help customer to get into proper context quicker and stay there.
I am talking about requirements-level UML diagrams, of course. Not about implementation-level ones. I believe that even not very technical people can start digging requirements-level UML diagrams sooner or later; you just have to be patient and know what to put on a diagram.
Obviously, the cost of such process greatly depends on analytical and writing skills of the team, and of course on the tools that you have at your disposal. And I must admit that in our case this process appears to be quite expensive and slow. But, taking into account the very low rate of bugs and low rate of "vapor-features"... I think, in the long run, we get very good payback.
FWIW: According to Joel's nice classification of software products, this project is an "internal" one. So we can afford to be as agile as our customer can handle. :-)
"Developers and team members gather requirements discussed in face to face meetings and write some quick notes"
Start with that. If you aren't taking notes, just make one small change. Take Notes. Later, you might post them to a wiki or create a feature backlog or start using Scrum or bugzilla or something.
First, however, make small changes. "Write stuff down" sounds like something you're not doing, so just do that and see what improves and what you can do next. Be Agile. Work incrementally.
You might want to be careful of the HiPPO in the room. The Highest Paid Person's Opinion is not always a good one. We've tended to focus more on providing great tools and support for developers. These things, done right, take some of the hassle out of development, so that it becomes faster and more fun. Developers are then more flexible in terms of their workload, and more amenable to late-breaking changes.
One-Click testing and deployment are a couple of good ones to start with; make sure every developer can run up their own software stack in a few seconds and try out ideas directly. Developers are then able to make revisions quickly or run down side paths they find interesting, and these paths are often the most successful. And by successful I mean measured success based on real metrics gathered right in the system and made readily available to all concerned. The owner is then able to set the metrics, which they probably care about, rather than the requirements, which they either don't care about or have no experience in defining.
Of course it depends on the owner and your particular situation, but we've found that metrics are easier to discuss than requirements, and that developers are pretty good at interpreting them too. A typical problem might be that customers seem to spend a long time filling their shopping carts but don't go on to checkout.
1) A marketing requirement might be to make the checkout button bigger and redder. 2) The CEO's requirement might be to take the customer straight to checkout, as the CEO only ever buys one item at a time anyway. 3) The UI designer's requirement might be to place a second checkout button at the top of the cart as well as the existing one at the bottom. 4) The developer's requirement might be some Web 2.0 AJAX widget that follows the mouse pointer around the screen. Who's right?
Who cares... the customer probably saw the ridiculous cost of delivery and ran away. But redefine the problem as a metric instead of a requirement, and suddenly the developer becomes interested. The developer doesn't have to do 10 rounds with the CMO on what shade of red the button should be. He can play with his Web 2.0 thing all week, and then rush off the other three solutions on Monday morning. Each one gets deployed live for 48 hours, and the cart-to-checkout rate gets measured and reported instantly. None of it makes any difference, but the developer got to do their job, and the business shifts its focus onto the crappy products they sell and the price they gouge on delivery.
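To make the "metric, not requirement" idea concrete, here's a toy sketch (every name invented) of bucketing users across the four proposed variants and reporting the one number everybody agreed to watch:

```csharp
using System;
using System.Collections.Generic;

// Toy experiment harness: assign each user a checkout variant, count cart
// views and checkouts per variant, and report the cart-to-checkout rate.
public class CheckoutExperiment
{
    private static readonly string[] Variants =
    {
        "bigger-red-button",      // marketing's idea
        "straight-to-checkout",   // the CEO's idea
        "two-buttons",            // the UI designer's idea
        "ajax-widget"             // the developer's idea
    };

    private readonly Dictionary<string, int> _cartViews = new Dictionary<string, int>();
    private readonly Dictionary<string, int> _checkouts = new Dictionary<string, int>();

    // Stable assignment: the same user always sees the same variant.
    public string AssignVariant(int userId)
    {
        return Variants[Math.Abs(userId % Variants.Length)];
    }

    public void RecordCartView(string variant)
    {
        _cartViews.TryGetValue(variant, out var n);
        _cartViews[variant] = n + 1;
    }

    public void RecordCheckout(string variant)
    {
        _checkouts.TryGetValue(variant, out var n);
        _checkouts[variant] = n + 1;
    }

    // The single metric the business watches for each 48-hour run.
    public double CartToCheckoutRate(string variant)
    {
        _cartViews.TryGetValue(variant, out var views);
        if (views == 0) return 0.0;
        _checkouts.TryGetValue(variant, out var buys);
        return (double)buys / views;
    }
}
```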
Well, OK, so the example is contrived. There's a lot of work in there to make sure that the project is small, the team is experienced, hot deployment is simple, instant rollback is provided, and everyone's on board. What we wanted to get to is a state where the developer's full potential is not wasted, which is why they're involved not just from the start but also in the success. We once started with an issue like "the number of clicks during registration is too high", ran it through a design committee, and found that the number of clicks actually went up in the design specification. That was our experience, anyway. But leave the developer some freedom to just reduce the number of clicks, and you might actually end up with a patented solution, as we did. Not that the developer cares about patents, but it had merit - and no clicks!