User stories vs use cases - requirements

Are use cases just multiple user stories?
What are the benefits of using user stories over use cases, and vice versa? When should you use one over the other?
Do all agile methodologies use user stories?

Actually, the original use cases (see Jacobson's OOSE) were pretty lightweight, much as user stories are now. Over time, they evolved until a common format for "use cases" now is a complicated document with inputs, outputs, inheritance, uses relationships, pseudocode, etc. Programmers, in general, try to convert everything into programming.
In any case, the attempt to define what distinguishes a "use case" from a "user story" from a "scenario" is pretty futile, as it's hard to find two authorities who agree.
Personally, I find the pattern "[Actor] [verbs] [noun] to get [business value]" helpful. If it gets over about a paragraph of text, it may be too big.

When it comes down to it "agile" is just a label, and people disagree over exactly what it means. Similarly people call very different things "use cases."
In my experience the primary difference between the two is that a user story is focused on the user, and is usually shorter and less formal - ideally, it should easily fit on a postcard. It probably doesn't give details of error handling etc.
Use cases can be much more formal (although some people write them informally too) - they focus on every interaction with the system, and may well go into more detail about several different systems/actors/etc within the same use case.
That's just my experience though - chances are everyone will have used these tools in different ways. I wouldn't get too hung up about labels - just use what works for your project.

Use cases aren't compilations of user stories.
User stories are generally much simpler than use cases. I think use cases try to cover absolutely everything to do with the behaviour of some aspect of the system. That is, all behaviours, all error paths and all exception handling.
The recommended template for a user story is:
As a (role) I want (something) so that (benefit)
(Thanks Mike Cohn for providing this simple template)
Descriptions of behaviour expressed like this are more agile.
This sort of template lets you describe behaviour using different levels of detail. For example:
For those stories being implemented in a much later sprint, you can describe behaviour in a high-level sort of way, e.g. as an ops team member I want to monitor the system remotely so that I can determine system health while on the road.
For those stories being implemented in the next sprint, you can describe behaviour in a slightly more detailed way, e.g. as an ops team member I want to have a dedicated ops-only login so that I can check system health.
For those stories being implemented in the current sprint, you can describe behaviour in a highly detailed way, e.g. as an ops team member I want to have a web interface so that I can check the current status of the ingest ftp server.
IMHO use cases are much more carved in stone, and hence can be a problem to update after the initial version.

In one word, no.
Use Cases are typically detailed specifications laying out how some particular piece of functionality is going to work, or how a specific user is going to utilize the system. It typically is in the voice of a specific user (or Actor) and is fairly self-contained.
A user story on the other hand is "an invitation for discussion". It is typically one or two sentences. Here is one good resource for that. And Mike Cohn's User Stories Applied is well worth it.
The typical syntax is "As a <user> I need <functionality> to achieve <business value>", or "To achieve <business value> as a <user> I need <functionality>" which drives home the value of the story.
User stories are not meant to be stand-alone, but meant to invite discussion of the story between the developer and the customer (or customer proxy).

You can think of a use case as a user story, but not the other way around. A use case will cover multiple "endings" to the story and special requirements (e.g. form fields must be entered in format xyz, and error message 123 is shown if the user enters a field in the wrong format). Also, a use case can include references to external documents, such as security guidelines.
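For illustration, here is a rough sketch of how such a use case might be outlined; every detail below is invented for the example:

    Use case: Register a new account
    Primary actor: Visitor
    Main success scenario:
      1. Visitor enters name, email and password.
      2. System validates all fields.
      3. System creates the account and shows a confirmation.
    Extensions (the other "endings"):
      2a. Email is in the wrong format:
          System shows error message 123 and returns to step 1.
      2b. Email is already registered:
          System offers to reset the password instead.
    Special requirements:
      Password rules follow the corporate security guidelines document.

A user story, by contrast, would typically carry only the one-sentence need and leave all of these branches to conversation.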

User stories are a tool used in Agile development to make sure you create the product your users really need.
A user story describes WHY you should build a feature, rather than HOW to build it or WHAT exactly it is.
From my personal experience, it's a great way to balance the client's and the developer's visions to create a better product.
Unlike a user story, a use case focuses on WHO uses your product. That is the difference.
I'd say there is no instrument quite like user stories for an Agile developer. If you want to learn how to write them successfully, check out this post.


Features and Use Case Diagrams Vs Requirements and Use Cases

According to "Head First Object-Oriented Analysis and Design", Complex projects involves first finding a feature list -> drawing use case diagrams -> breaking into smaller modules before implementing object oriented design (requirements gathering -> use cases -> OO -> design patterns etc.)
I want to understand: at what project size should feature lists and use case diagrams be produced before gathering requirements and writing use cases?
I am particularly interested in how this knowledge can be applied to my real-world problems.
For example, I am working on a UI that sends instrument commands to a server and displays the response from the server. I know from customer feedback that the UI should have the following things:
It should let the user select an instrument from an available list, send any custom command, and display the result
It should let the user select an instrument and a command from available lists and display the result (creating commands using drag and drop from the given lists)
It should have the capability to create macros
Is this UI project small enough to skip the steps of gathering features and drawing use case diagrams? Do we go straight to categorizing these asks as requirements and start gathering requirements and writing use cases?
How would one go about breaking down a project of this nature to derive appropriate class diagrams?
I have tried treating the above-mentioned asks as features and then creating requirements, mainly around the different states the UI application could be in during its life cycle, but I am still unsure and unable to apply the teachings of the books to this project.
I haven't read the book, so I'm not sure what the author(s) of the book really wanted to emphasize here. But I assume that you misinterpreted it.
Without knowing the requirements there is no feature list. If you don't know what is needed then you can't say anything about the system's capabilities.
Gathering requirements is an iterative process. First you gather the high-level requirements in order to start building a mental model of the system. This can help you start thinking about the supported features. Sharing your mental model and the system's exposed feature set with the stakeholder then initiates the next iteration.
Here you can start talking about actors, user journeys, use cases, etc. These mainly focus on the happy paths. As you go through more and more iterations, you will reach a point where you can start talking about edge and corner cases: What suboptimal cases can we foresee? What can we do (prevention, detection, mitigation)? How does it affect the system/actors/journeys?...
The better you understand the needs and circumstances, the better the design and implementation of the system could be.
UPDATE #1
Will we always have high-level and low-level requirements (edge cases and detailed use cases), i.e. will we first need to make use case diagrams and then write individual detailed use cases?
There are a lot of factors which can influence this. Just to name a few:
Is it a system, submodule, or component design?
Is it a greenfield or a brownfield project?
Is the stakeholder experienced enough to know which information matters and which doesn't from the IT project perspective?
Does the architect / system designer have previous experience with the same domain?
Do wireframes or mockups exist at project kick-off?
Should the project satisfy special security, legal or governmental regulations?
etc...
In short: yes, there can be circumstances where you don't need several iterations, but in my experience that's quite rare.

Requirement, design, code derivation

I am following verification and validation threads and I think an example might be helpful. I am not an experienced developer so I would like to know whether this would be correct:
User requirement: I want to be able to save my friend's name, address and phone number to the system.
Software Requirement specification: User wants to be able to enter and save a name, address, phone number.
Technical analysis: web UI for data entry. Data will be saved into an SQL DB.
Detailed design: UI elements: 3 fields of a string type, 1 button, object XYZ, dbConnection....
Code: (actual code of UI, db scripts)
Is it like that? Could anyone correct or add what I am missing here?
As for verification, each phase can be verified against the requirements (traceability). As for validation, the functional code should work as expected (save the three attributes).
While this is somewhat theoretically true (I have to say this), it is completely wrong in all practical, real-world scenarios.
Capture user needs and WHY the user wants to do a certain thing. This allows you to build just the software the user wants, and to eliminate the waste that comes from made-up requirements, technical requirements, nice-to-haves, etc.
So instead of,
I want to be able to save my friend's name, address and phone number to the system...
I'd rather have the following, which emphasizes WHY - the real need of the user:
I want to send a greeting card to my friend on his birthday.
Now I know I just need his name and address. Since this is for the future, I also want to store this information. So what I write next is a set of acceptance criteria to meet the above customer need. If I can capture these as a set of executable specifications, even better, as those are verifiable programmatically.
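For instance, one of those acceptance criteria could be captured as an executable specification with an ordinary test framework. A minimal sketch in Python; the GreetingCardBook class and its methods are invented for the example:

    import unittest

    # Toy implementation, invented so the specification below is runnable.
    class GreetingCardBook:
        def __init__(self):
            self._friends = {}

        def add_friend(self, name, address):
            self._friends[name] = address

        def address_of(self, name):
            return self._friends[name]

    # Executable specification for "send a greeting card to my friend
    # on his birthday": the address must still be there next year.
    class GreetingCardSpec(unittest.TestCase):
        def test_friend_address_is_stored_for_future_birthdays(self):
            book = GreetingCardBook()
            book.add_friend("Alice", "12 Elm Street")
            self.assertEqual(book.address_of("Alice"), "12 Elm Street")

    if __name__ == "__main__":
        unittest.main()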
Ignore everything else. Traceability is unnecessary overhead; we only need it if we are building software based on fabricated requirements.
Read up on the following:
Agile Manifesto
ATDD and BDD
Impact Mapping
I've never seen a good way to trace code to requirements outside a single sprint/time box. Also, you're missing testers from your list! Unless your testers are also your business analysts (in my experience, professional testers find a lot of the requirements inconsistencies - aka bugs).
I think the best approach is to have everyone as involved as possible, so you can cross-reference each person's expectations often. If everyone works together, you don't need to implement a cargo-cult process where batches of information are transferred downstream in one direction.
The simplest tool for traceability is your VCS: have each commit message include the ID of the user story/use case that the commit relates to.
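For example (the story ID US-142 is invented; the commit and log commands are standard git):

    # The commit message carries the story ID:
    git commit -m "US-142: add phone number field to the contact form"

    # Later, list every commit that relates to story US-142:
    git log --grep="US-142"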

What are various methods for discovering test cases

I am a developer but would like to know more about the testing process and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test Driven Development and the exploratory testing approach to software projects.
It's now easier for me to find test cases for code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test.
Say, for example, we have a basic user registration form like those seen on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing the input fields on the form? What would be your strategy? How would you discover test cases? I believe this kind of testing benefits from an exploratory testing approach; I may be wrong here, though.
I would appreciate your views on this.
Bugs! One of my favorite starting places on a project for adding new test cases is to take a look at the bug tracking system. The existing bugs are test cases in their own right, but they also can steer you towards new test cases. If a particular module is buggy, it can lead you to develop more test cases in that area. If a particular developer seems to introduce a certain class of bugs, it can guide testing of future projects by that developer.
Another useful consideration is to look more at testing techniques, than test cases. In your example of a registration form, how would you attack it from a business requirements perspective? Security? Concurrency? Valid/invalid input?
Testing Computer Software is a good book on how to do all kinds of different types of testing: black box, white box, test case design, planning, managing a testing project, and probably a lot more I missed.
For the example you give, I would do something like this:
For each field, I would think about the possible values you can enter, both valid and invalid. I would look for boundary cases; if a field is numeric, what happens if I enter a value one less than the lower bound? What happens if I enter the lower bound as a value? Etc.
I would then use a tool like Microsoft's Pairwise Independent Combinatorial Testing (PICT) Tool to generate as few test scenarios as I could across the cases for all input fields.
I would also write an automated test to pound away on the form using random input, capture the results and see if the responses made sense (virtual monkeys at a keyboard).
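A small sketch of both the boundary-case idea and the random "monkey" idea in Python; the validate_age function is an invented stand-in for whatever field validation the form really has:

    import random
    import string

    # Invented stand-in validator: accepts integer ages from 18 to 120.
    def validate_age(text):
        try:
            age = int(text)
        except ValueError:
            return False
        return 18 <= age <= 120

    # Boundary cases: one below each bound, the bound itself, one above.
    for value, expected in [("17", False), ("18", True), ("120", True),
                            ("121", False)]:
        assert validate_age(value) == expected, value

    # Virtual monkey: random input must never crash the validator.
    random.seed(0)
    for _ in range(1000):
        junk = "".join(random.choice(string.printable) for _ in range(20))
        validate_age(junk)  # we only care that this never raises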
Ask questions. Keep a list of question words and force yourself to come up with questions about the product or a feature. Lists like this can help you get out of the proverbial box or rut. Don't spend too much time on a question word if nothing comes to you.
Who
Whose
What
Where
When
Why
How
How much
Then, when you answer them, ask "else" questions. This forces you to distrust, for a moment at least, your initial conclusions.
Who else
Whose else
etc..
Then, ask the "not" questions--negate or refute your assumptions, and challenge them.
Who not (e.g., Who might not need access to this secure feature, and why?)
What not (what data will the user not care about? What will the user not put in this text box? Are you sure?)
etc...
Other modifiers to the questions could be:
W else
W not
W risks
W different
Combine two question words, e.g., Who and When.
In the case of the form, I'd look at what I can enter into it and test various boundary conditions there, e.g. what happens if no username is supplied? I'm reminded that there are a few different forms of testing:
Black box testing - This is where you test without looking inside what is being tested. The challenge here is that not being able to see inside can limit which tests are useful and how many different tests are worthwhile. This is, of course, what some default testing can look like.
White box testing - This is where you can look at the code and use metrics like code coverage to ensure that you are covering a percentage of the code base. This is generally better, as in this case you know more about what is being done.
There are also performance tests, as opposed to logic tests, that are worth noting, e.g. how fast the form validates input rather than just whether it validates correctly.
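As a concrete illustration of the white-box side, Python's coverage.py tool can report what fraction of the code base a test suite actually exercises:

    # Run the test suite under coverage measurement:
    coverage run -m pytest

    # Report per-module coverage, including the lines never executed:
    coverage report -m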
Identify your assumptions from different perspectives:
How can users possibly misunderstand this?
Why do I think it acts or should act this way?
What biases might I have about how this software should work?
How do I know the requirements/design/implementation is what's needed?
What other perspectives (users, administrators, managers, developers, legal) might exist on priority, importance, goals, etc, of this software?
Is the right software being built?
Do I really know what a valid name/phone number/ID number/address/etc looks like?
What am I missing?
How might I be mistaken about (insert noun here)?
Also, use any of the mnemonics and testing lists noted here:
http://www.qualityperspectives.ca/resources_mnemonics.html
Discussing test ideas with others. When you explain your ideas to someone else, you tend to see ways to refine or expand on them.
Group brainstorming sessions. (or informally in pairs when necessary)
see these brainstorming techniques
Make data tables with major features listed across the top and side, and consider possible interactions between each pair (a sketch follows this list). Doing this in three dimensions can get unwieldy.
Keep test catalogs with common questions and problem types for different kinds of tasks, such as integer validation and workflow steps.
Make use of Exploratory Testing Dynamics and the Satisfice Heuristic Test Strategy Model by James Bach. Both offer general ways to start thinking more broadly or differently about the product, which can help you switch between different boxes and heuristics in testing.
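Here is a minimal sketch of the data-table idea in Python; the feature names are invented. Each pair of features is one cell of the interaction table, and each cell is a prompt for a test idea:

    from itertools import combinations

    # Invented feature list for the example.
    features = ["search", "sorting", "pagination", "export", "permissions"]

    # Every unordered pair is one cell of the two-dimensional table.
    for a, b in combinations(features, 2):
        print(f"What happens when {a} and {b} interact?")

Changing the 2 to a 3 enumerates three-way interactions and shows why three dimensions get unwieldy: the number of combinations grows much faster.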

What should a developer know about interface design, usability and user psychology to create great software? [duplicate]

What should a developer know about user interface design, usability and less technical aspects of human computer interaction?
What knowledge of usage scenarios, user behavior patterns and the psychology of user to computer interaction should we embrace to design effective software that helps users solve their problems in a natural and uncomplicated way without building barriers and creating obstacles?
There is much more to designing software than building the architecture, implementing the requirements and creating a nice-looking interface. A beautiful interface may not necessarily be useful and effective; conversely, an ugly software utility can become a favorite tool for many users. What basic knowledge, at a minimum, should a decent developer or designer have to smooth the user experience?
Please focus on one issue per answer, describe a problem, bring examples, how the user experience is impaired and what are the ways to address the situation.
I will start:
PROBLEM: Interfaces with lots of controls and options immediately on one screen can be overwhelming to users. They will have to waste time looking through all of them trying to locate the one option they need. They'll also get distracted in the process, see one more feature, go there to learn about it and maybe read help to see if it can solve their problems, then another one and so on until they are completely lost.
EXAMPLE: As a good example I will cite Microsoft Word (as well as other Office applications) in its pre-2007 versions. The sheer number of menus and options has always scared me. I managed to remember where the options I needed most often were, but that's it. For everything else, I tend to google to learn where a particular feature is located in the forest of options.
SOLUTION: Hide all extra options behind a few menus and submenus, logically structured so that the user can locate them through a process of logical thinking. The 2007 redesign has obviously taken this problem into account by grouping the options into tabs. I found many new options I needed without googling, just by thinking about where an option could belong and looking there. Not that it has always worked, but the improvement can be felt.
Now, what are your ideas?
Useful and effective interfaces are beautiful. Look at them as a UI designer, not as an art major. :-)
Simplicity; as few choices as can accomplish the needs.
Convention; follow patterns the users are already familiar with.
Observation; watch the users, and smooth the places they have problems.
Gentleness; write human-readable errors. Don't upset the users.
Consistency; do things the same way everywhere in the application. Have one person write all of your text, or write a standard that text must meet.
Learn to listen.
Users will tell you what they want, but not in the words that you're used to. Socialize, sit down, take your time and listen. Watch them work, ask questions. Bring up some ideas ("How would you like...?") and listen to the replies. Don't assume that something would be better for them; ask them. Don't force them down a certain path because it's simpler to code.
Interfaces with lots of controls and options immediately on one screen can be overwhelming to users.
GMail has this slogan "Search, don't sort". The same principle can be applied to user interfaces. As you mentioned, users are already doing this themselves by googling for features.
Now the next step is to build support for feature search right into the application. Hit a keyboard shortcut, type a few keywords, and click on the feature you want to use. The IDE Insight feature in the upcoming RAD Studio 2010 does exactly that.
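A toy sketch of such a feature search in Python; the feature names and the keyword-matching rule are invented for the example:

    # Map feature names to descriptions (invented examples).
    FEATURES = {
        "insert table": "Insert a table at the cursor",
        "word count": "Count the words in the document",
        "track changes": "Toggle revision tracking",
    }

    # Return the features whose names contain every typed keyword.
    def search_features(query):
        words = query.lower().split()
        return [name for name in FEATURES if all(w in name for w in words)]

    print(search_features("table"))  # ['insert table']
    print(search_features("track"))  # ['track changes']

A real implementation would match against descriptions and synonyms too, but the principle is the same: let users find features the way they already find everything else.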
Problem: user interfaces often don't have a 1-to-1 correspondence to the domain model:
There are communication problems, because programmers talk about the hidden domain model while users talk about the GUI.
There are maintenance problems, because users are constrained by the task-based user interface. They regularly need to ask for "a new screen to do this" even if the domain model may already support it.
Solution: the naked objects architectural design pattern. To take this to the extreme you might even generate the GUI automatically from the domain model.
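A minimal sketch of that generation idea in Python, assuming the domain model is a dataclass; everything here is invented for the example:

    from dataclasses import dataclass, fields

    @dataclass
    class Customer:  # the domain model
        name: str
        email: str
        credit_limit: int

    # Derive a crude form description directly from the domain model,
    # so the GUI stays in 1-to-1 correspondence with it.
    def form_fields(model):
        return [(f.name, f.type) for f in fields(model)]

    print(form_fields(Customer))
    # [('name', <class 'str'>), ('email', <class 'str'>),
    #  ('credit_limit', <class 'int'>)]

When the model gains a field, the generated form gains one too, and users and programmers end up talking about the same objects.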
I know the question is a bit old, but I'm surprised to see that no one has mentioned Joel Spolsky's excellent article: User Interface Design for Programmers. It's definitely something every developer should read. There are no especially brilliant or original ideas in it; it's mostly common sense, but it did open my eyes to some not-so-obvious points...
I suggest reading "The Design of Everyday Things" by Donald Norman.
I used to think aesthetics were useless until I tried to sell my house. Sturdy foundation, 3 bedrooms, 2 baths, 2-car garage, fenced yard, blah, blah, blah - until I got rid of the stink from my 3 dogs, nobody would touch it.
The more visually pleasing the app/site is, the better the chance it will get used. Then a user will give it a try and determine whether it does anything they want. Finally, how usable is it? This is the point at which you will probably get more feedback.
Just like the house: get rid of the clutter, clean everything, start with a general color palette and let the users add the crazy colors if they want them.
If you really want your eyes opened, take a course in Human Factors Engineering.
I have worked at a pharmaceutical company for the past two years and I think that the design of the interface is nearly as important as the functionality. Watching users struggle with old complicated legacy code is the primary reason for re-designing it. Functionality is seldom the primary reason for redeveloping code or replacing it.
Usability studies
Watching people use your code
Extreme programming (delivering preview code intermittently throughout the design process)
are all essential to delivering code that not only meets the users' needs but makes them happy and productive. At the end of the day, programs will only be used if they make their users happy and productive.

Hallway usability testing: How much of the UI do you actually make functional?

When doing hallway usability tests, do most of you make your apps fully or near-fully functional? Or do you just make sure the links and flow chain correctly? Or do you just draw on paper and go with that?
I would like to test a prototype early on and am trying to find a good balance, but at the same time I'm worried that some non-functional parts might not give representative results.
Usability tests, hallway or otherwise, only need the functionality that you need to test. In most usability tests, you should go in with specific design questions to answer and develop your prototype to the point where it can answer those questions. For example, if you need to test if users understand your indication of the sort order for a table, all you need is a paper picture of the table showing the sort indication (with the table contents blurred) and ask them how the table is sorted. If you need to test the IA, all you need is a bunch of web pages, empty except for a title, that are linked through the navigation menus.
You only need the pages relevant for the tasks you give your users. If you’re just testing the IA, then you only need the pages on the normative path. If you are also testing error recovery, then you need the pages off the normative path along with the full navigation controls. If you are also testing error detection, then you need content on the pages as well.
You can also simulate functionality when that’s easier to do. For example, in testing if users can figure out how to get a desired sort order, when the user clicks on a non-functioning control for sorting the table, you can say, “Okay, doing that will get you this,” and you take the mouse and select a bookmark that shows the table in the new sort order.
In hallway testing, if users breach the fidelity envelope, you can simply say, “I haven’t made that part yet. Let’s go back to A, and continue from there.” Of course, you should note that the user made a wrong turn in the task you intended for them. I haven’t had any problems with users complaining about non-functional features when I tell them up front it’s an incomplete prototype and we’re only testing the UI for features x, y, and z at the moment.
For low fidelity prototypes, I often call them “mockups” or “drawings” to users rather than “prototypes” to indicate the low functionality. You can put obvious placeholders in for missing content (e.g., “Blah, blah, blah…”, “TODO: Picture of product about here.”). If a user comments on something outside the fidelity envelope (e.g., “This symbol should be red to stand out more”), simply note it, and say that topic is under development (e.g., “Thanks. We haven’t started work on the colors yet. We’re just trying to figure out how to organize the site right now.”).
Usability testing with limited-fidelity prototypes is really necessary for iterative design to be feasible for most projects. Otherwise, you waste too much work developing things that have to be redone.
A couple things to remember:
Test early and often.
The goal of usability testing is to find problems with the UI, not to QA your code.
Therefore, if users can see the parts of your UI you are interested in testing and interact with them in a realistic way (e.g., click on buttons and links), you should be able to collect useful data. If some links are dead ends, that's okay, as long as there's some way for users to recover and continue on. Basically, with prototypes, the "correct" path should work, but it's okay if incorrect paths don't (as long as there's a reasonably quick way to get back on the correct path). Even static storyboards (non-functioning drawings of a UI) can provide you with some information if you ask the right questions, e.g., "What would you do on this screen if you wanted to view your shopping cart?"
I would suggest a couple rounds of usability testing. First on paper, perhaps later on screen, generally throughout the application lifecycle (take an Agile approach to it).
There is a good argument to be made for paper prototypes. When users see a screen, even limited functionality, they may be hesitant to suggest changes since it looks "done."
Make no mistake, it's not trivial to get it all down on paper, but that's where I would start. Probably start with just a section or two of the application. And make sure somebody with good people skills and/or explaining skills is there to walk the user through it. Have a second person on-hand to take notes. Try to ask open-ended questions, etc.
For a hallway test, I would test with NONE of the functionality implemented.
Test against designs done on a whiteboard or on paper. You'll be surprised at how much you find out in these minimal mockups. And they are very inexpensive to make!
Functional prototypes are for later. If you give your usability subject a functional interface, they are much less likely to question whether you've implemented the right set of features in the first place.
I would make the UI functional so that the user can really play with it; it will be much better than a static image. People can tell you whether they feel comfortable with the UI.
I would make sure everything in the UI works, or at least takes you to a clear, unambiguous message pointing out that the feature isn't implemented yet.
Showing prototypes to clients with a disclaimer up front about how feature X doesn't work yet will usually be ignored. They'll try out the prototype, click on feature X and indignantly reply, "Feature X doesn't work! This really needs to work in the final version! Why doesn't it work?" The client is confused and unhappy about the product, and it's frustrating for you because it overshadows the positive feedback. Besides, you told them it didn't work; why can't they use their imagination to envision how it would work in the final version?
Make it work, be it with a rough version, dummy data, or even a simple message saying "would show results sorted alphabetically now".
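A tiny sketch of such a rough version in Python; the handler name and data are invented for the example:

    # Stubbed prototype handler: the click does something visible
    # instead of dead-ending, even though the real logic comes later.
    def on_sort_clicked(items):
        print("Would show results sorted alphabetically now (prototype stub)")
        return sorted(items)

    print(on_sort_clicked(["pear", "apple", "mango"]))
    # Would show results sorted alphabetically now (prototype stub)
    # ['apple', 'mango', 'pear']

Even a stub this crude keeps the client inside the flow of the prototype instead of stopping them at a dead feature.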