All,
I am a developer but would like to know more about testing processes and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test-Driven Development and the exploratory testing approach to software projects.
Now it's easier for me to find test cases for the code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test.
Say, for example, we have a basic user registration form like those we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing the input fields on the form? What would be your strategy? How would you discover test cases? I believe this kind of testing benefits from an exploratory testing approach, though I may be wrong here.
I would appreciate your views on this.
Thanks,
Byte
Bugs! One of my favorite starting places on a project for adding new test cases is to take a look at the bug tracking system. The existing bugs are test cases in their own right, but they also can steer you towards new test cases. If a particular module is buggy, it can lead you to develop more test cases in that area. If a particular developer seems to introduce a certain class of bugs, it can guide testing of future projects by that developer.
Another useful consideration is to look more at testing techniques, than test cases. In your example of a registration form, how would you attack it from a business requirements perspective? Security? Concurrency? Valid/invalid input?
Testing Computer Software is a good book on how to do all kinds of different types of testing; black box, white box, test case design, planning, managing a testing project, and probably a lot more I missed.
For the example you give, I would do something like this:
For each field, I would think about the possible values you can enter, both valid and invalid. I would look for boundary cases; if a field is numeric, what happens if I enter a value one less than the lower bound? What happens if I enter the lower bound as a value? Etc.
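For instance, here is a minimal boundary-value sketch in Python (pytest style). The validate_age function and its 18..120 range are made-up placeholders for one numeric field on the form, not anything from a real implementation:

```python
# A minimal boundary-value sketch. validate_age and its 18..120 range are
# hypothetical stand-ins for one numeric field on the registration form.
def validate_age(value):
    """Accept integers in the inclusive range 18..120."""
    return isinstance(value, int) and 18 <= value <= 120

def test_age_boundaries():
    # One below the lower bound, the bound itself, one above.
    assert not validate_age(17)
    assert validate_age(18)
    assert validate_age(19)
    # The same pattern around the upper bound.
    assert validate_age(119)
    assert validate_age(120)
    assert not validate_age(121)
    # Clearly invalid values and wrong types.
    assert not validate_age(-1)
    assert not validate_age("18")
```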
I would then use a tool like Microsoft's Pairwise Independent Combinatorial Testing (PICT) Tool to generate as few test scenarios as I could across the cases for all input fields.
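I won't reproduce PICT's model format here, but the idea it implements can be sketched in a few lines of Python: cover every pair of parameter values at least once, using far fewer rows than the full Cartesian product. The field names and values below are invented for the registration-form example:

```python
# Rough greedy pairwise generator, only to illustrate the idea behind tools
# like PICT. The parameters and their values are invented for the example form.
from itertools import combinations, product

params = {
    "username": ["empty", "valid", "too_long"],
    "email":    ["empty", "valid", "malformed"],
    "country":  ["US", "DE", "JP"],
}
names = list(params)

# Every (parameter, value) pair combination that must be exercised at least once.
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(params[a], params[b]):
        uncovered.add(((a, va), (b, vb)))

rows = []
while uncovered:
    # Greedily pick the full assignment that covers the most remaining pairs.
    best_row, best_covered = None, set()
    for combo in product(*(params[n] for n in names)):
        row = dict(zip(names, combo))
        covered = {pair for pair in uncovered
                   if all(row[name] == value for name, value in pair)}
        if len(covered) > len(best_covered):
            best_row, best_covered = row, covered
    rows.append(best_row)
    uncovered -= best_covered

total = 1
for values in params.values():
    total *= len(values)
print(f"{len(rows)} pairwise rows instead of {total} exhaustive combinations")
for row in rows:
    print(row)
```

For three fields with three values each, this typically prints nine or ten rows instead of 27, which is the saving a pairwise tool gives you.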
I would also write an automated test to pound away on the form using random input, capture the results and see if the responses made sense (virtual monkeys at a keyboard).
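As a sketch of that idea, assuming a hypothetical submit_registration entry point (stubbed out below so the snippet runs on its own), the only check the monkey makes is that the form never blows up and always returns a recognised outcome:

```python
# A crude "monkey at the keyboard" sketch: random input against the form handler.
import random
import string
from types import SimpleNamespace

def submit_registration(payload):
    """Stub standing in for the real form handler; replace with the real call."""
    ok = payload.get("username", "").strip() != ""
    return SimpleNamespace(status="accepted" if ok else "rejected")

def random_text(max_len=50):
    alphabet = string.printable  # letters, digits, punctuation, whitespace
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

def test_form_survives_random_input():
    for _ in range(1000):
        payload = {"username": random_text(), "email": random_text(), "age": random_text(8)}
        try:
            result = submit_registration(payload)
        except Exception as exc:
            raise AssertionError(f"form blew up on {payload!r}") from exc
        # Whatever happens, the answer should be a recognised outcome.
        assert result.status in ("accepted", "rejected"), (payload, result.status)
```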
Ask questions. Keep a list of question words and force yourself to come up with questions about the product or a feature. Lists like this can help you get out of the proverbial box or rut. Don't spend too much time on a question word if nothing comes to you.
Who
Whose
What
Where
When
Why
How
How much
Then, when you answer them, ask "else" questions. This forces you to distrust, for a moment at least, your initial conclusions.
Who else
Whose else
etc..
Then, ask the "not" questions--negate or refute your assumptions, and challenge them.
Who not (e.g., Who might not need access to this secure feature, and why?)
What not (what data will the user not care about? What will the user not put in this text box? Are you sure?)
etc...
Other modifiers to the questions could be:
W else
W not
W risks
W different
Combine two question words, e.g., Who and when.
In the case of the form, I'd look at what I can enter into it and test various boundary conditions there, e.g. what happens if no username is supplied? I'm reminded that there are a few different forms of testing:
Black box testing - This is where you test without looking inside what is being tested. The challenge here is that not being able to see inside makes it harder to decide which tests are useful and how many different tests are worthwhile. This is, of course, what much default testing looks like.
White box testing - This is where you can look at the code and use metrics like code coverage to ensure that you are covering a percentage of the code base. This is generally better, since you know more about what is being done.
There are also performance tests, as opposed to logic tests, that are worth noting somewhere, e.g. how fast does the form validate me, rather than just does the form do this.
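To make that distinction concrete, here is a small hedged sketch; validate_form is a toy stand-in and the half-second budget is an arbitrary illustrative number, not a real requirement:

```python
# A logic test checks WHAT the validator answers; a performance test checks
# HOW FAST it answers. validate_form and the timing budget are made up.
import time

def validate_form(data):
    """Toy validator standing in for the real one."""
    return bool(data.get("username")) and "@" in data.get("email", "")

def test_logic_rejects_missing_username():
    assert not validate_form({"username": "", "email": "a@b.example"})

def test_validation_is_fast_enough():
    start = time.perf_counter()
    for _ in range(1000):
        validate_form({"username": "alice", "email": "alice@example.com"})
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5  # arbitrary budget: 1,000 validations in half a second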
Identify your assumptions from different perspectives:
How can users possibly misunderstand this?
Why do I think it acts or should act this way?
What biases might I have about how this software should work?
How do I know the requirements/design/implementation is what's needed?
What other perspectives (users, administrators, managers, developers, legal) might exist on priority, importance, goals, etc, of this software?
Is the right software being built?
Do I really know what a valid name/phone number/ID number/address/etc looks like?
What am I missing?
How might I be mistaken about (insert noun here)?
Also, use any of the mnemonics and testing lists noted here:
http://www.qualityperspectives.ca/resources_mnemonics.html
Discussing test ideas with others. When you explain your ideas to someone else, you tend to see ways to refine or expand on them.
Group brainstorming sessions. (or informally in pairs when necessary)
see these brainstorming techniques
Make data tables with major features listed across the top and side, and consider possible interactions between each pair. Doing this in three dimensions can get unwieldy.
Keep test catalogs with common questions and problem types for different kinds of tasks, such as integer validation, workflow steps, etc. (a small catalog sketch follows after this list).
Make use of Exploratory Testing Dynamics and Satisfice Heuristic Test Strategy Model by James Bach. Both offer general ways to start thinking more broadly or differently about the product, which can help you switch between boxes and heuristics in testing.
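As an illustration of such a catalog, here is a sketch using pytest parametrization; parse_quantity and the expected results are hypothetical. The point is that the list of tricky integer inputs is written once and reapplied to any integer field:

```python
# A tiny reusable catalog of integer-validation cases. parse_quantity is a
# made-up stand-in that accepts only non-negative whole numbers.
import pytest

INTEGER_CATALOG = [
    ("0", True), ("1", True), ("-1", False),   # sign handling
    ("007", True),                             # leading zeros
    ("", False), ("  ", False),                # empty / whitespace-only
    ("1.5", False), ("1e3", False),            # non-integers
    ("abc", False), ("1abc", False),           # junk
    (str(2**31), True),                        # large values
]

def parse_quantity(text):
    """Stand-in for the real field parser."""
    return text.strip().isdigit()

@pytest.mark.parametrize("raw,expected_valid", INTEGER_CATALOG)
def test_quantity_field_against_catalog(raw, expected_valid):
    assert parse_quantity(raw) == expected_valid
```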
So if a user story is something nebulous like:
As a sales rep, I would like to capture the contact information so that I can follow up later on.
I'm not even sure if that's a valid user story but I'm sure it's close enough.
Then there are details/tasks for implementing that user story.
And I'm sure "The sales rep should be able to tab from one textbox to another." is one of the requirements. How do we capture/track this? Is this part of the user story or is it something that's to be considered separately?
A user story captures the essence of a feature, not the details; a story is a support for the discussion.
So, to answer your question, details are transmitted orally during a discussion, because face-to-face discussion is the most effective communication medium. If you feel the need, details can be captured as notes on the back of the card (if you are using cards) or in a "notes" field if you are using an electronic tool. Actually, I usually use a "how to demo" field too, to capture a high-level description of how the story will be demonstrated at the sprint demo, and very brief "notes" for any other info, clarifications, references to other sources of info, etc. (credits to Henrik Kniberg's famous index card generator). I find this very handy, especially when using executable specifications.
PS: your story is perfectly valid, and it's good practice to include the benefits in your template ("As a role, I want action so that benefits").
User stories should be short statements in 1 to 3 sentences.
http://en.wikipedia.org/wiki/User_story
I want to be able to tab from one textbox to another is another user story.
You can track these things in a tool like www.rallydev.com, or just any type of task tracking tool (SharePoint, Excel even ... etc.).
Next thing you do is prioritize.
Just taking a rough stab...
As a sales rep,
I want all data entry and navigation to be accomplished using the keyboard
so that I don't have to take my hands off the keyboard
(and so that we comply with accessibility guidelines).
Or
As a business,
We want all our products to be fully usable using only keyboard input
So that we can sell to customers who require accessible software.
The first part belongs to a "business requirements" document (usually written by a business analyst). The first generations of this document are quite high level, but the final versions (several iterations later) are pretty detailed.
http://www.tdan.com/view-articles/6089
The second part (about tabbing) is part of another document - "UX spec" (shows all screens and describes user interaction). This one is usually written by a different person/team (Product or UX team).
http://uxdesign.com/ux-defined-2
http://www.uxmatters.com/mt/archives/2007/05/sharing-ownership-of-ux.php
Yes, that is a problem we also have a lot. On the one hand, user stories need to be concise; on the other hand, all the nitty-gritty details must be put somewhere.
We use XPlanner, and we solve this by putting the short description into the text body of the user story. Then we use XPlanner's "notes" feature (arbitrary text or files that can be attached to a user story) for the details.
That way we can add as much information as necessary to a user story, without cluttering up the user story text itself. You can also refer to external documentation, if you don't want to have everything in XPlanner.
This approach works quite well for us.
I agree with others that this is a viable story, but the (derived) requirements may be better captured elsewhere.
Software developers and business types are familiar with different terminology; something that is simple for one to understand (data structures) may mean nothing to the other. The user story is a tool, a means by which the business user can convey a message as a starting point, which is then expanded on (with tests, details, etc.).
Oral communication can be effective, but its effectiveness depends on the receiver's ability to hear and comprehend the meaning of the message. This is where oral communication can fail. Different types of communication offer more or less formal forms of exchange. Vocal communication is an informal form of communication, which risks the message being misheard, misinterpreted, and misunderstood. It is just like the game played as children, where one child whispers a message to another child, who tells another, until all have heard it. When the last child tells the message to the group, it has usually been misinterpreted, then misinterpreted again, causing a degraded message.
When doing hallway usability tests do most of you make your apps fully or near fully functional? Or do you just make sure the links or flow chain correctly? Or do you just draw on paper and go with that?
I would like to test early on a prototype and am trying to find a good balance, but at the same time I am worried that some non-functional parts might not give representative results.
Thanks.
Usability tests, hallway or otherwise, only need the functionality that you need to test. In most usability tests, you should go in with specific design questions to answer and develop your prototype to the point where it can answer those questions. For example, if you need to test if users understand your indication of the sort order for a table, all you need is a paper picture of the table showing the sort indication (with the table contents blurred) and ask them how the table is sorted. If you need to test the IA, all you need is a bunch of web pages, empty except for a title, that are linked through the navigation menus.
You only need the pages relevant for the tasks you give your users. If you’re just testing the IA, then you only need the pages on the normative path. If you are also testing error recovery, then you need the pages off the normative path along with the full navigation controls. If you are also testing error detection, then you need content on the pages as well.
You can also simulate functionality when that’s easier to do. For example, in testing if users can figure out how to get a desired sort order, when the user clicks on a non-functioning control for sorting the table, you can say, “Okay, doing that will get you this,” and you take the mouse and select a bookmark that shows the table in the new sort order.
In hallway testing, if users breach the fidelity envelope, you can simply say, “I haven’t made that part yet. Let’s go back to A, and continue from there.” Of course, you should note that the user made a wrong turn in the task you intended for them. I haven’t had any problems with users complaining about non-functional features when I tell them up front it’s an incomplete prototype and we’re only testing the UI for features x, y, and z at the moment.
For low fidelity prototypes, I often call them “mockups” or “drawings” to users rather than “prototypes” to indicate the low functionality. You can put obvious placeholders in for missing content (e.g., “Blah, blah, blah…”, “TODO: Picture of product about here.”). If a user comments on something outside the fidelity envelope (e.g., “This symbol should be red to stand out more”), simply note it, and say that topic is under development (e.g., “Thanks. We haven’t started work on the colors yet. We’re just trying to figure out how to organize the site right now.”).
Usability testing with limited-fidelity prototypes is really necessary for iterative design to be feasible for most projects. Otherwise, you waste too much work developing things that have to be redone.
A couple things to remember:
Test early and often.
The goal of usability testing is to find problems with the UI, not to QA your code.
Therefore, if users can see the parts of your UI you are interested in testing and interact with them in a realistic way (e.g., click on buttons and links), you should be able to collect useful data. If some links are dead ends, that's okay, as long as there's some way for users to recover and continue on. Basically, with prototypes, the "correct" path should work, but it's okay if incorrect paths don't (as long as there's a reasonably quick way to get back on the correct path). Even static storyboards (non-functioning drawings of a UI) can provide you with some information if you ask the right questions, e.g., "What would you do on this screen if you wanted to view your shopping cart?"
I would suggest a couple rounds of usability testing. First on paper, perhaps later on screen, generally throughout the application lifecycle (take an Agile approach to it).
There is a good argument to be made for paper prototypes. When users see a screen, even one with limited functionality, they may be hesitant to suggest changes since it looks "done."
Make no mistake, it's not trivial to get it all down on paper, but that's where I would start. Probably start with just a section or two of the application. And make sure somebody with good people skills and/or explaining skills is there to walk the user through it. Have a second person on-hand to take notes. Try to ask open-ended questions, etc.
For a hallway test, I would test with NONE of the functionality implemented.
Test against designs done on a whiteboard or on paper. You'll be surprised at how much you find out in these minimal mockups. And they are very inexpensive to make!
Functional prototypes are for later. If you give your usability subject a functional interface, they are much less likely to question whether you've implemented the right set of features in the first place.
I would make the UI functional so that the user can really play with it; that will be much better than a static image. People can tell you whether they feel comfortable with the UI.
I would make sure everything in the UI works, or at least takes you to a clear, unambiguous message pointing out that the feature isn't implemented yet.
Showing prototypes to clients with a disclaimer up front about how feature X doesn't work yet will usually be ignored. They'll try out the prototype, click on feature X and indignantly reply, "Feature X doesn't work! This really needs to work in the final version! Why doesn't it work?" The client is confused and unhappy about the product, and it's frustrating for you because it overshadows the positive feedback. Besides, you told them it didn't work; why can't they use their imagination to envision how it would work in the final version?
Make it work, be it with a rough version, dummy data, or even a simple message saying "would show results sorted alphabetically now".
Are use cases just multiple user stories?
What are the benefits of using user stories over use cases, and vice versa? When should you use one over the other?
Do all agile methodologies use user stories?
Actually, the original use cases (see Jacobson's OOSE) were pretty lightweight, much as user stories are now. Over time, they evolved until a common format for "use cases" now is a complicated document with inputs, outputs, inheritance, uses relationships, pseudocode, etc. Programmers, in general, try to convert everything into programming.
In any case, the attempt to define what distinguishes a "use case" from a "user story" from a "scenario" is pretty futile, as it's hard to find two authorities who agree.
Personally, I find the pattern "[Actor] [verbs] [noun] to get [business value]" helpful. If it gets over about a paragraph of text, it may be too big.
When it comes down to it "agile" is just a label, and people disagree over exactly what it means. Similarly people call very different things "use cases."
In my experience the primary difference between the two is that a user story is focused on the user, and is usually shorter and less formal - ideally, it should easily fit on a postcard. It probably doesn't give details of error handling etc.
Use cases can be much more formal (although some people write them informally too) - they focus on every interaction with the system, and may well go into more detail about several different systems/actors/etc within the same use case.
That's just my experience though - chances are everyone will have used these tools in different ways. I wouldn't get too hung up about labels - just use what works for your project.
Use cases aren't compilations of user stories.
User stories are generally much simpler than use cases. I think use cases try to cover absolutely everything to do with the behaviour of some aspect of the system. That is, all behaviours, all error paths and all exception handling.
The recommended template for a user story is:
As a (role) I want (something) so that (benefit)
(Thanks Mike Cohn for providing this simple template)
Descriptions of behaviour expressed like this are more agile.
This sort of template lets you describe behaviour using different levels of detail. For example:
for those stories being implemented in a much later sprint, you can describe behaviour in a high level sort of way, e.g. as an ops team member I want to monitor the system remotely so that I can determine system health while on the road.
for those stories being implemented in the next sprint, you can describe behaviour in a slightly more detailed way, e.g. as an ops team member I want to have a dedicated ops-only login so that I can check system health.
for those stories being implemented in the current sprint, you can describe behaviour in a highly detailed way, e.g. as an ops team member I want to have a web interface so that I can check current status of the ingest ftp server.
IMHO Use cases are much more carved in stone! And hence can be a problem to update after the initial version.
HTH
cheers,
Rob
In one word, no.
Use Cases are typically detailed specifications laying out how some particular piece of functionality is going to work, or how a specific user is going to utilize the system. It typically is in the voice of a specific user (or Actor) and is fairly self-contained.
A user story on the other hand is "an invitation for discussion". It is typically one or two sentences. Here is one good resource for that. And Mike Cohn's User Stories Applied is well worth it.
The typical syntax is "As a <user> I need <functionality> to achieve <business value>", or "To achieve <business value> as a <user> I need <functionality>" which drives home the value of the story.
User stories are not meant to be stand-alone, but meant to invite discussion of the story between the developer and the customer (or customer proxy).
You can think of a use case as a user story, but not the other way around. A use case will cover multiple "endings" to the story, as well as special requirements (e.g. form fields must be entered in format xyz, and error message 123 must be shown if the user enters a field in the wrong format). Also, a use case can include additional references to external documents, such as security guidelines.
User stories are a tool used in Agile development to make sure you create the product your users really need.
They describe WHY you should build this or that feature, rather than HOW or WHAT exactly to build.
From my personal experience, it's a great way to balance the client's and developer's vision to create a better product.
Unlike a user story, a use case focuses on WHO uses your product. Here is the difference.
I'd say there is no other such instrument for an Agile developer as User Stories. If you want to learn how to write them successfully, check out this post.
How can our team gather requirements from our "Product Owner" in as low friction yet useable of a way as possible?
Now here are the guidelines: no posts saying it can't be done or that the business needs to decide it cares about quality, yada yada. I work on a product in a small group that has been successful for years. I just want to help them step it up a notch.
Basically, I'm on a 6 or 7 person team with one Product Owner. She does a great job but is juggling a few different roles (as I believe is common on extremely small teams). Usually requirements are given at sporadic times (email convos, face to face discussions, meetings, etc). They are never entered into a system and sometimes this results in features missing a release or the release getting pushed back since everyone forgot about the necessary feature.
If you're in a similar situation but you found a way to overcome this, I'd love to hear it. I'm happy to write code to help ease this situation but it can't be a web site that the Product Owner has to go to in order to get anything done. She is extremely busy and we need some way of working together as a team in order to gather these requirements.
I'm currently thinking of something like this: Developers and team members gather requirements discussed in face to face meetings and write some quick notes on the features discussed on a wiki page. Product owner is notified whenever these pages are updated and it then becomes her responsibility to ensure accuracy.
Pros: We'll have some record of the features. Cons: The developers are taking responsibility for something that they ordinarily wouldn't. I'm okay with that here. I think in this situation it's teamwork.
Of course once we do this, then we're going to see that the product owner probably doesn't have enough time to ensure feature accuracy. Ultimately she is overburdened and I think this will help showcase that fact, but I just need to be able to draw attention to that first.
So any suggestions?
P.S. her time is extremely limited so it is considered unreasonable to expect her to need to type in the requirements after discussion. She only has time to discuss them once and move on.
Although the concept of "product owner" is a little ambiguous to me, I think I am working in very similar circumstances: the customer is extremely busy and is always a bottleneck in developing requirements.
On the surface, what we try to do in this situation is quite obvious and seemingly simple: we try to make sure that the customer is involved in "read-only / talk-only" mode. No writing. Minimum reading. Mostly talking.
The devil, of course, is in details. So, here are some specifics about our process (in no particular order):
We often start from recording problem statements, which are the ultimate sources of requirements. In fact, sometimes a problem statement is all that we record initially, just to make sure it does not get lost.
NB: It is important to distinguish problem statements from requirements. Although a problem statement sometimes clearly implies some requirement, in general a single problem statement may yield a whole bunch of requirements (each having its own severity and priority); moreover, sometimes a given requirement may define a solution (usually just a partial one) to multiple problems.
One of the main reasons of recording problem statements (and this is very relevant to your question!) is that semantically they are somewhat "closer to the customer's skin" and more stable than requirements derived from them. I believe those problem statements make it much easier and quicker to put the customer into proper context whenever he has time to provide feedback to the development team.
We do record all the requirements (and back-track them to problem statements), regardless of when we are going to implement them. Priorities govern the order in which requirements get implemented. Of course, they also govern the order in which the customer reviews unfinished requirements.
NB: A single fat document containing all requirements is an absolute no-no! All the requirements are placed in "problem tracking database", along with bug reports. (A bug is just a special case of a problem in our book.)
We always try to do our best to minimize the number of iterations necessary to "finalize" each requirement (or a group of related requirements). Ideally, a customer should have to review a requirement only once.
Whenever the first review turns out to be insufficient (happens all the time), and the requirement in question is complex enough to require a lot of text and/or illustrations, we make sure that the customer does not have to re-read everything from scratch. All the important changes/additions/deletions since the previously reviewed version are highlighted.
While a problem or requirement remains in an unfinished state, all the open issues (mostly questions to the customer) are embedded in the document and highlighted. As a result, whenever the customer has time to review requirements, he does not have to call a meeting and solicit questions from the team; instead the customer can open any unfinished document, see what exactly is expected from him, and then decide the best way and time (for him) to address any of the open issues. Sometimes the customer chooses to write an email or add a comment directly to the problem document.
We try our best to establish and maintain official domain vocabulary (even if it gets scattered across the documentation). Most importantly, we practically force the customer to stick to that vocabulary.
NB: This is one of the most difficult parts of the process, and the customer tries to "rebel" from time to time. However, at the end of the day everybody agrees that it is the only way to make precious meetings with the customer as efficient as possible. If you have ever attended one-hour meetings where 30 minutes were spent just getting everybody on the same page (again), I'm sure you would appreciate having a vocabulary.
NB: Whenever possible, any changes in the official vocabulary get reflected in the very next release of the software.
Sometimes, a given problem can be solved in multiple ways, and the right choice is not obvious without consulting with the customer. It means that there will be a "menu of requirements" for the customer to pick from. We document such "menus", not just the finally chosen requirement.
This may seem controversial and look like an unnecessary overhead. However, this approach saves a lot of time whenever the customer (usually few weeks or months down the road) suddenly jumps in with a question like "why the heck did we do it this way and not that way?" Also, it is not such a big deal to hide "rejected branches" using proper organization/formatting of requirements documentation. Boring but doable. :-)
NB: When preparing "menus of requirements", it is very important not to overdo them. Too many choices or too many levels of choice nesting, and the next review may require much more of the customer's time than really necessary. Needless to say, the time spent on elaborating branches may be totally wasted. Yes, it is difficult to find the balance here (it greatly depends on the always-in-a-hurry customer's ability to think two or more steps ahead and do it quickly). But, what can I say? If you really want to do your job well, I am sure that after some time you will find the right balance. :-)
Our customer is a very "visual" guy. Therefore, whenever we discuss any significant user interface elements, screen mockups (or even lightweight prototypes) often are extremely helpful. Real time savers sometimes!
NB: We do screen mockups exclusively for the customer, only in order to facilitate discussions. They may be used by developers too, but in no way do they substitute user interface specifications! More often than not, there are some very important UI details that get specified in writing (now - primarily for developers).
We are lucky enough to have a customer with a very technical background. So we do not hesitate to use UML diagrams as discussion aid. All kinds of UML diagrams - as long as they help customer to get into proper context quicker and stay there.
I am talking about requirements-level UML diagrams, of course. Not about implementation-level ones. I believe that even not very technical people can start digging requirements-level UML diagrams sooner or later; you just have to be patient and know what to put on a diagram.
Obviously, the cost of such process greatly depends on analytical and writing skills of the team, and of course on the tools that you have at your disposal. And I must admit that in our case this process appears to be quite expensive and slow. But, taking into account the very low rate of bugs and low rate of "vapor-features"... I think, in the long run, we get very good payback.
FWIW: According to Joel's nice classification of software products, this project is an "internal" one. So we can afford to be as agile as our customer can handle. :-)
"Developers and team members gather requirements discussed in face to face meetings and write some quick notes"
Start with that. If you aren't taking notes, just make one small change. Take Notes. Later, you might post them to a wiki or create a feature backlog or start using Scrum or bugzilla or something.
First, however, make small changes. Write stuff down sounds like something you're not doing, so just do that and see what improves and what you can do next. Be Agile. Work Incrementally.
You might want to be careful of the HiPPO in the room. The Highest Paid Person's Opinion is not always a good one. We've tended to focus more on providing great tools and support for developers. These things, done right, take some of the hassle out of development, so that it becomes faster and more fun. Developers are then more flexible in terms of their workload, and more amenable to late-breaking changes.
One-Click testing and deployment are a couple of good ones to start with; make sure every developer can run up their own software stack in a few seconds and try out ideas directly. Developers are then able to make revisions quickly or run down side paths they find interesting, and these paths are often the most successful. And by successful I mean measured success based on real metrics gathered right in the system and made readily available to all concerned. The owner is then able to set the metrics, which they probably care about, rather than the requirements, which they either don't care about or have no experience in defining.
Of course it depends on the owner and your particular situation, but we've found that metrics are easier to discuss than requirements, and that developers are pretty good at interpreting them too. A typical problem might be that customers seem to spend a long time filling their shopping carts but don't go on to checkout.
1) A marketing requirement might be to make the checkout button bigger and redder. 2) The CEO's requirement might be to take the customer straight to checkout, as the CEO only ever buys one item at a time anyway. 3) The UI designer's requirement might be to place a second checkout button at the top of the cart as well as the existing one at the bottom. 4) The developer's requirement might be some Web 2.0 AJAX widget that follows the mouse pointer around the screen. Who's right?
Who cares... the customer probably saw the ridiculous cost of delivery and ran away. But redefine the problem as a metric, instead of a requirement, and suddenly the developer becomes interested. The developer doesn't have to do 10 rounds with the CMO on what shade of red the button should be. He can play with his Web 2.0 thing all week, and then rush off the other three solutions on Monday morning. Each one gets deployed live for 48 hours and the cart-to-checkout rate gets measured and reported instantly. None of it makes any difference, but the developer got to do their job and the business shifts its focus onto the crappy products they sell and the price they gouge on delivery.
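If it helps to see what "redefine the problem as a metric" can look like in practice, here is a rough sketch of computing a cart-to-checkout rate from an event log; the event names and log format are invented for illustration, not taken from any real system:

```python
# Rough sketch of the metric discussed above: the share of sessions that add
# something to the cart and then reach checkout. Event names are made up.
from collections import defaultdict

events = [  # (session_id, event_name) pulled from the real system in practice
    ("s1", "add_to_cart"), ("s1", "checkout"),
    ("s2", "add_to_cart"),
    ("s3", "add_to_cart"), ("s3", "checkout"),
]

sessions = defaultdict(set)
for session_id, event in events:
    sessions[session_id].add(event)

carted = [s for s, evts in sessions.items() if "add_to_cart" in evts]
converted = [s for s in carted if "checkout" in sessions[s]]
rate = len(converted) / len(carted) if carted else 0.0
print(f"cart-to-checkout rate: {rate:.0%}")  # 67% for this toy log
```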
Well, ok, so the example is contrived. There's a lot of work in there to make sure that the project is small, the team is experienced, hot deployment is simple, instant rollback is provided, and that everyone's on board. What we wanted to get to is a state where the developer's full potential is not wasted, which is why they're involved not just from the start, but also in the success. Start out with an issue like "the number of clicks during registration is too high", run it through a design committee, and you may find (as we did) that the number of clicks actually goes up in the design specification. That was our experience, anyway. But leave the developer some freedom to just reduce the number of clicks and you might actually end up with a patented solution, as we did. Not that the developer cares about patents, but it had merit - and no clicks!