Intro:
I work for a contractor company. We develop software for different corporate clients, each with their own rules, software standards, etc.
Problem:
The result is that we are using several bug-tracking systems. The ticket flow is relatively large and the SLAs are sometimes deadly. The main problem is that we keep track of these tickets in our own bug tracker (currently Mantis), but we also communicate with clients in theirs. As it is, too many channels of communication create too much information noise.
Solution, progress:
The current solution is an employee responsible for synchronizing the streams, keeping track of the SLAs, and many other things. It consumes quite a large part of his time (roughly 70%) that could be spent on something more valuable. The other problem is that he is not fast enough, and sometimes the sync is not really in sync. Some comments end up on only one system; some are lost completely. (And don't get me started on holidays or sick days; that's where the fun begins.)
Question:
How can we automate this process, partially or entirely: aggregating tasks, watching SLAs, notifying the right people, etc.?
Thank you for your answers.
You need something like Zapier. It can map different applications and synchronize data between them. It works simply:
You create a zap (for example, between Redmine and Teamwork).
You configure the mapping (how items/attributes in Redmine map to items/attributes in Teamwork).
You generate access tokens in both systems and add them to the zap.
Zapier then synchronizes Redmine and Teamwork on a regular schedule.
But Mantis is not yet supported by Zapier. If all or most of your clients' bug trackers are on Zapier's app list, you could move your own tracker to another platform or request Mantis support from Zapier.
Another way is to develop your own synchronization service that connects to each client's bug tracker (with a login/password or token, as an ordinary user would) and downloads updates into your own tracker. This is the hard way, and it requires continuous development to keep up with the current versions of the clients' trackers.
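To give a sense of what that involves, here is a minimal one-way sketch in Python. It assumes a MantisBT 2.x REST API on your side; the client endpoint, field names, and the fetch_client_updates helper are purely hypothetical, and a real service would also need to de-duplicate issues, sync comments in both directions, and handle each client's quirks:

    import requests

    MANTIS_URL = "https://bt.example.com/api/rest"   # placeholder MantisBT 2.x REST base path
    MANTIS_TOKEN = "your-mantis-api-token"           # created under My Account > API Tokens

    def fetch_client_updates(since_iso):
        """Hypothetical helper: pull issues changed since `since_iso` from
        one client's tracker. Each client tracker needs its own variant."""
        resp = requests.get(
            "https://client-bt.example.com/api/issues",       # assumed client API
            params={"updated_after": since_iso},
            headers={"Authorization": "Bearer client-token"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["issues"]

    def mirror_to_mantis(issue):
        """Create a matching issue in your own Mantis instance."""
        payload = {
            "summary": issue["title"],
            "description": issue["body"],
            "project": {"name": "ClientX"},   # map client project -> Mantis project
            "category": {"name": "General"},
        }
        resp = requests.post(f"{MANTIS_URL}/issues", json=payload,
                             headers={"Authorization": MANTIS_TOKEN}, timeout=30)
        resp.raise_for_status()

    for issue in fetch_client_updates("2015-01-01T00:00:00Z"):
        mirror_to_mantis(issue)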
You can have a look at Slack: https://slack.com/
It's a great tool for group conversations:
Talk, share, and make decisions in open channels across your team, in
private groups for sensitive matters, or use direct messages
one-to-one.
It has a lot of integration tools, and you can use Zapier (https://zapier.com/) with it to program triggers.
With different channels you can notify the right people individually or all together in a group conversation :)
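For the notification part specifically, a Slack incoming webhook is enough to get started. A minimal sketch (the webhook URL, ticket ID, and message format are placeholders):

    import requests

    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

    def notify_sla_breach(ticket_id, hours_left):
        """Post an SLA warning into the channel behind the webhook."""
        text = f"Ticket {ticket_id} breaches its SLA in {hours_left}h - please respond."
        resp = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
        resp.raise_for_status()

    notify_sla_breach("0001234", 2)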
The obvious answer is to create integrations between all of the various BTs. Without knowing what those are, it's hard to say whether that's entirely possible. Most modern BTs have an API and support integrations. Some, especially the more desktop-based ones, don't. For those you probably have to monitor the database directly.
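As a rough illustration of the database-monitoring fallback, here is a sketch that simply polls for new rows. The issues table and its columns are invented for the example (every tracker has its own schema), and sqlite3 stands in for whatever database the tracker actually uses:

    import sqlite3
    import time

    def poll_new_issues(db_path, last_seen_id):
        """Return rows added since the last poll (assumed schema)."""
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(
                "SELECT id, summary, status FROM issues WHERE id > ? ORDER BY id",
                (last_seen_id,),
            ).fetchall()
        finally:
            conn.close()

    last_id = 0
    while True:
        for issue_id, summary, status in poll_new_issues("tracker.db", last_id):
            print(f"new issue {issue_id}: {summary} [{status}]")
            last_id = issue_id
        time.sleep(60)  # poll once a minute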
Zapier, as someone already suggested, is a great tool for creating integrations and may already have some of the ones you need available. I love Slack and it has an API, but messages are basically just text, and unless you want to do some kind of delimiting when you post messages to its API, it probably isn't going to work.
I'm not sure what your budget is, but it will cost resources to create the integrations. I'd suggest that you hire someone to simply manage these: someone whose sole responsibility is to cross-populate the internal and external bug-tracking systems and track the progress in each. All you really need is someone with good attention to detail for this; they don't have to be a developer. This should be more cost-effective than using developer resources on it.
The other alternative is simply to stop. If your requirements dictate that you use your clients' bug tracking software for projects you do for them, just use their software and stop duplicating the effort. If you need some kind of central repository or something for managing work maybe just a simple table somewhere or spreadsheet with the client, the project, the issue number, the status and if possible a link to the issue in the client's BT. I understand the need and desire for centralizing this, but if it's stifling productivity, then the opportunity costs are too high IMO.
If you create an integration tool for this, you will indeed have a very viable product. This is actually a pretty common problem.
I'm looking to track how many people on the web have made my software program file available for download without my permission.
I've thought of searching for my product name and file size to catch possible thieves.
Do you think a web search API is the best way forward?
EDIT: I plan to use the detection data for survey purposes.
There are web analytics companies which could probably help you more than a roll-your-own solution. Consider the companies that big music and film vendors use, or check with the Business Software Alliance.
Ultimately, you are chasing your tail if you are looking to thwart piracy with the results of this kind of activity. However, if you are steadfast that you must try to understand what is going on, you need professional (web analytic) help. There are so many variations out there that you need someone experienced in tracking this kind of information, since you could easily get a false sense of security, or an inflated sense of activity.
No. This ultimately doesn't help you. What are you going to do? Send them emails telling them to buy your software? Sounds like a spam filter will get it.
I would suggest investing in prevention rather than detection. A registration or activation process is pretty popular and reasonably successful; if you have an amazing app it won't stop the really determined people from cracking it, but it will make it much more difficult.
I have a web site I am building for a client. I now have a tester on the project with me.
I feel testers are needed. REALLY! I cannot test my own code. I also appreciate the value of a new set of eyes. But what deserves reporting?
It is easy to say everything should be reported, but I don't have someone between me and the tester to filter out the unimportant requests. The tester does not know the system or the target user well. She is assigning tasks to me directly, not through the project manager. I think this will change soon, but until it does, what do you recommend? There seems to be a belief that our users have NEVER used the internet before at all, and that they are as dumb as rocks.
The problem I am having is that EVERYTHING the tester suggests is being accepted automatically and assigned to me.
I have many cases that make me drop my jaw and say "Really? Are you serious? This deserves to be an issue?"
Ex: Need to add text at top of page that says "* = Required" for required fields.
Have you ever felt this way? How did you deal with it?
For now, I am just doing as I am told, but I am making it clear I do not agree.
It sounds to me like your tester is doing the right thing. You can't assume any level of user expertise when testing an application. If a user can break something, they will.
You and your tester need to work out a severity scale. The outliers (those that anybody with Internet experience could probably work around/would never hit) would be considered low priority and sit on the back burner until you knock out the high priority items.
...nevertheless, those outliers should still be logged, because they can definitely come back to bite you in the ass in the end.
You need to add priorities to your issues. This will allow you to do the important issues first and the cosmetic issues last. Here are example priorities from Jira:
Priority 1 - a reproducible crash; issue blocking any further testing or development of a specific feature; loss of user's persistent data; huge memory leak
Priority 2 - a major issue that must be fixed before the product is released; prevents users from using a feature; negatively affects partner; significant memory leak in frequently used functionality
Priority 3 - a minor issue that should be fixed before a product is released; does not prevent users from using a product; highly visible usability issue; small memory leak in rarely used functionality
Priority 4 - a purely cosmetic issue; doesn't affect functionality
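Once that field exists, triage is just a sort. A toy sketch with made-up issues to show the idea:

    # Work the queue highest priority (lowest number) first.
    issues = [
        {"id": 101, "summary": "Crash on save", "priority": 1},
        {"id": 102, "summary": "Button off-center", "priority": 4},
        {"id": 103, "summary": "Feature X unusable", "priority": 2},
    ]

    for issue in sorted(issues, key=lambda i: i["priority"]):
        print(f"P{issue['priority']}: #{issue['id']} {issue['summary']}")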
Actually it sounds like your tester is doing the right thing (and the text for "* = required" is a very good idea).
In addition to the suggestions about prioritizing reports, I would suggest that you categorize the reports as to whether they refer to user experience or functionality.
You and the tester will never exactly agree on what "needs" to be reported. Just set the priority on issues correctly, and get on with fixing the high-priority stuff first.
One thing you absolutely do not want to do is to discourage the tester from filing bugs. That'll come back to bite you when something ships totally broken, and they say "I thought that was just how it worked".
Do make sure that you're communicating the development schedule and status properly, so they don't waste time testing features that aren't sufficiently complete.
I would report to the client what each change will cost in terms of time and money. Things that are legitimate bugs you'll probably need to fix on your own time (unless your contract says otherwise). Things that are design / subjective issues you should be able to assign a cost to. Let the client know what it is going to cost them and they can decide if they want to proceed or not.
Hopefully you've got some sort of a project specification that the client has signed off on so that you know when the project is complete and what sort of things are not included in the project scope. If not, you might have a bit of a fight on your hands. For changes that you think are outside of the project scope, you might need to compromise - maybe bill them at a cheaper rate or split the cost with them. If you're in that situation it's a good learning experience to get everything documented in the project specification so that there is no question about what falls outside of the project scope. I've been there - one experience like this is enough to teach you to put more work into your specifications.
Report everything and triage. After a bit of time, she'll start to understand what gets past triage and what doesn't. Humans can learn; teach.
How can our team gather requirements from our "Product Owner" in as low friction yet useable of a way as possible?
Now here are the guidelines: no answers saying it can't be done, or that the business needs to decide it cares about quality, yada yada. The team I work on is a small group that has been successful for years. I just want to help them step it up a notch.
Basically, I'm on a 6 or 7 person team with one Product Owner. She does a great job but is juggling a few different roles (as I believe is common on extremely small teams). Usually requirements are given at sporadic times (email convos, face to face discussions, meetings, etc). They are never entered into a system and sometimes this results in features missing a release or the release getting pushed back since everyone forgot about the necessary feature.
If you're in a similar situation but you found a way to overcome this, I'd love to hear it. I'm happy to write code to help ease this situation but it can't be a web site that the Product Owner has to go to in order to get anything done. She is extremely busy and we need some way of working together as a team in order to gather these requirements.
I'm currently thinking of something like this: Developers and team members gather requirements discussed in face to face meetings and write some quick notes on the features discussed on a wiki page. Product owner is notified whenever these pages are updated and it then becomes her responsibility to ensure accuracy.
Pros: We'll have some record of the features. Cons: The developers are taking responsibility for something that they ordinarily wouldn't. I'm okay with that here. I think in this situation it's teamwork.
Of course once we do this, then we're going to see that the product owner probably doesn't have enough time to ensure feature accuracy. Ultimately she is overburdened and I think this will help showcase that fact, but I just need to be able to draw attention to that first.
So any suggestions?
P.S. her time is extremely limited so it is considered unreasonable to expect her to need to type in the requirements after discussion. She only has time to discuss them once and move on.
Although the concept of "product owner" is a little ambiguous to me, I think I am working in very similar circumstances: the customer is extremely busy and is always the bottleneck in developing requirements.
On the surface, what we try to do in this situation is quite obvious and seemingly simple: we try to make sure that the customer is involved in "read-only / talk-only" mode. No writing. Minimum reading. Mostly talking.
The devil, of course, is in details. So, here are some specifics about our process (in no particular order):
We often start from recording problem statements, which are the ultimate sources of requirements. In fact, sometimes a problem statement is all that we record initially, just to make sure it does not get lost.
NB: It is important to distinguish problem statements from requirements. Although a problem statement sometimes clearly implies some requirement, in general a single problem statement may yield a whole bunch of requirements (each having its own severity and priority); moreover, sometimes a given requirement may define a solution (usually just a partial one) to multiple problems.
One of the main reasons for recording problem statements (and this is very relevant to your question!) is that semantically they are somewhat "closer to the customer's skin" and more stable than the requirements derived from them. I believe those problem statements make it much easier and quicker to put the customer into the proper context whenever he has time to provide feedback to the development team.
We do record all the requirements (and back-track them to problem statements), regardless of when we are going to implement them. Priorities govern the order in which requirements get implemented. Of course, they also govern the order in which the customer reviews unfinished requirements.
NB: A single fat document containing all requirements is an absolute no-no! All the requirements are placed in a "problem tracking database", along with bug reports. (A bug is just a special case of a problem in our book.)
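As a sketch of what such a database can look like, here is a bare-bones schema with requirements back-tracked to the problem statements that motivated them. The tables and columns are an illustrative assumption, and sqlite3 is used only to keep the example self-contained:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE problems (
        id INTEGER PRIMARY KEY,
        statement TEXT NOT NULL                      -- the customer's problem, verbatim
    );
    CREATE TABLE requirements (
        id INTEGER PRIMARY KEY,
        problem_id INTEGER REFERENCES problems(id),  -- the back-track link
        text TEXT NOT NULL,
        priority INTEGER
    );
    """)

    conn.execute("INSERT INTO problems (id, statement) "
                 "VALUES (1, 'Monthly report takes two days to assemble')")
    conn.execute("INSERT INTO requirements (problem_id, text, priority) "
                 "VALUES (1, 'One-click report export', 2)")

    # List every requirement next to the problem it solves.
    for statement, text, priority in conn.execute(
        "SELECT p.statement, r.text, r.priority "
        "FROM requirements r JOIN problems p ON p.id = r.problem_id "
        "ORDER BY r.priority"
    ):
        print(f"P{priority}: {text}  <-  {statement}")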
We always try to do our best to minimize the number of iterations necessary to "finalize" each requirement (or a group of related requirements). Ideally, a customer should have to review a requirement only once.
Whenever the first review turns out to be insufficient (happens all the time), and the requirement in question is complex enough to require a lot of text and/or illustrations, we make sure that the customer does not have to re-read everything from scratch. All the important changes/additions/deletions since the previously reviewed version are highlighted.
While a problem or requirement remains in an unfinished state, all the open issues (mostly questions for the customer) are embedded into the document and highlighted. As a result, whenever the customer has time to review requirements, he does not have to call a meeting and solicit questions from the team; instead the customer can open any unfinished document, see what exactly is expected from him, and then decide the best way and time (for him) to address any of the open issues. Sometimes the customer chooses to write an email or add a comment directly to the problem document.
We try our best to establish and maintain an official domain vocabulary (even if it gets scattered across the documentation). Most importantly, we practically force the customer to stick to that vocabulary.
NB: This is one of the most difficult parts of the process, and the customer tries to "rebel" from time to time. However, at the end of the day everybody agrees that it is the only way to make precious meetings with the customer as efficient as possible. If you have ever attended one-hour meetings where 30 minutes were spent just getting everybody on the same page (again), I'm sure you would appreciate having a vocabulary.
NB: Whenever possible, any changes in the official vocabulary get reflected in the very next release of the software.
Sometimes, a given problem can be solved in multiple ways, and the right choice is not obvious without consulting with the customer. It means that there will be a "menu of requirements" for the customer to pick from. We document such "menus", not just the finally chosen requirement.
This may seem controversial and look like an unnecessary overhead. However, this approach saves a lot of time whenever the customer (usually few weeks or months down the road) suddenly jumps in with a question like "why the heck did we do it this way and not that way?" Also, it is not such a big deal to hide "rejected branches" using proper organization/formatting of requirements documentation. Boring but doable. :-)
NB: When preparing "menus of requirements", it is very important not to overdo them. Too many choices, or too many levels of choice nesting, and the next review may require much more of the customer's time than really necessary. Needless to say, the time spent on elaborating rejected branches may be totally wasted. Yes, it is difficult to find the balance here (it greatly depends on the always-in-a-hurry customer's ability to think two or more steps ahead and do it quickly). But, what can I say? If you really want to do your job well, I am sure that after some time you will find the right balance. :-)
Our customer is a very "visual" guy. Therefore, whenever we discuss any significant user interface elements, screen mockups (or even lightweight prototypes) often are extremely helpful. Real time savers sometimes!
NB: We do screen mockups exclusively for the customer, only in order to facilitate discussions. They may be used by developers too, but in no way do they substitute user interface specifications! More often than not, there are some very important UI details that get specified in writing (now - primarily for developers).
We are lucky enough to have a customer with a very technical background, so we do not hesitate to use UML diagrams as a discussion aid. All kinds of UML diagrams, as long as they help the customer get into the proper context quicker and stay there.
I am talking about requirements-level UML diagrams, of course. Not about implementation-level ones. I believe that even not very technical people can start digging requirements-level UML diagrams sooner or later; you just have to be patient and know what to put on a diagram.
Obviously, the cost of such a process depends greatly on the analytical and writing skills of the team, and of course on the tools you have at your disposal. And I must admit that in our case this process turns out to be quite expensive and slow. But, taking into account the very low rate of bugs and the low rate of "vapor-features"... I think, in the long run, we get a very good payback.
FWIW: According to Joel's nice classification of software products, this project is an "internal" one. So we can afford to be as agile as our customer can handle. :-)
"Developers and team members gather requirements discussed in face to face meetings and write some quick notes"
Start with that. If you aren't taking notes, just make one small change. Take Notes. Later, you might post them to a wiki or create a feature backlog or start using Scrum or bugzilla or something.
First, however, make small changes. "Write stuff down" sounds like something you're not doing, so just do that and see what improves and what you can do next. Be Agile. Work Incrementally.
You might want to be careful of the HiPPO in the room. The Highest Paid Person's Opinion is not always a good one. We've tended to focus more on providing great tools and support for developers. These things, done right, take some of the hassle out of development, so that it becomes faster and more fun. Developers are then more flexible in terms of their workload, and more amenable to late-breaking changes.
One-Click testing and deployment are a couple of good ones to start with; make sure every developer can run up their own software stack in a few seconds and try out ideas directly. Developers are then able to make revisions quickly or run down side paths they find interesting, and these paths are often the most successful. And by successful I mean measured success based on real metrics gathered right in the system and made readily available to all concerned. The owner is then able to set the metrics, which they probably care about, rather than the requirements, which they either don't care about or have no experience in defining.
Of course it depends on the owner and your particular situation, but we've found that metrics are easier to discuss than requirements, and that developers are pretty good at interpreting them too. A typical problem might be that customers seem to spend a long time filling their shopping carts but don't go on to checkout.
1) A marketing requirement might be to make the checkout button bigger and redder.
2) The CEO's requirement might be to take the customer straight to checkout, as the CEO only ever buys one item at a time anyway.
3) The UI designer's requirement might be to place a second checkout button at the top of the cart as well as the existing one at the bottom.
4) The developer's requirement might be some Web 2.0 AJAX widget that follows the mouse pointer around the screen.
Who's right?
Who cares... the customer probably saw the ridiculous cost of delivery and ran away. But redefine the problem as a metric instead of a requirement, and suddenly the developer becomes interested. The developer doesn't have to do 10 rounds with the CMO on what shade of red the button should be. He can play with his Web 2.0 thing all week, and then rush off the other three solutions on Monday morning. Each one gets deployed live for 48 hours, and the cart-to-checkout rate gets measured and reported instantly. None of it makes any difference, but the developer got to do their job, and the business shifts its focus onto the crappy products they sell and the price they gouge on delivery.
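To show how little machinery the metric itself needs, here is a toy sketch; the event log format and event names are assumptions:

    from collections import Counter

    # A stand-in for events gathered right in the system.
    events = [
        {"session": "s1", "event": "cart_filled"},
        {"session": "s1", "event": "checkout_completed"},
        {"session": "s2", "event": "cart_filled"},
    ]

    counts = Counter(e["event"] for e in events)
    rate = counts["checkout_completed"] / counts["cart_filled"]
    print(f"cart-to-checkout rate: {rate:.0%}")  # 50%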
Well, OK, so the example is contrived. There's a lot of work in there to make sure that the project is small, the team is experienced, hot deployment is simple, instant rollback is provided, and everyone's on board. What we wanted to get to is a state where the developer's full potential is not wasted, so that's why they're involved not just from the start, but also in the success. We once started with an issue like "the number of clicks during registration is too high", ran it through a design committee, and found that the number of clicks actually went up in the design specification. That was our experience, anyway. But leave the developer some freedom to just reduce the number of clicks, and you might actually end up with a patented solution, as we did. Not that the developer cares about patents, but it had merit - and no clicks!
We're neck deep in a project right now, schedules are tight (but reasonable). Our general strategy is to get a strong beta done, release it for testing, and get feedback from our testers.
Quite frequently, we're being hit by small things that spiral into long, time-costing discussions. They all boil down to one thing: While we know what features we need, we are having trouble with the little details, things like 'where should this message go' and 'do they need this feedback immediately, or will it break their flow, so we should hold off'?
These are all things that our testers SHOULD catch, but
a) Each 'low priority' bug like this drains time from critical issues
b) We want to have as strong a product as possible
and
c) Even the best testing group will miss things from time to time.
We use our product, and we know how our users use the old version...but we're all at a loss as to how to think like a user when we try to use the new version (which has significant graphical as well as underlying changes).
edit - a bit more background:
We're writing a web app used by a widely-distributed base of users. Our app is a big part of their jobs, but not the biggest (and, of course, we only matter to them when it doesn't work). Getting actual users in to use our product is difficult, as we're geographically distant from the nearest location that serves as an end user (We're in Ohio, and I think the nearest location we serve is 3+ hours away).
The closest we can get is our Customer Service team (who have been a big help, really), but they don't really think like the users either. They also serve as our testers (it really motivates them to find bugs when they know that any they DON'T find may mean a big upswing in the number of calls). We've had three (of about 12 total) customer service reps back here most of the week doing some preliminary testing... they've gotten involved in the discussions as well.
Watching someone using the app is a huge benefit to me. Possibly someone who is not entirely familiar with it.
Seeing how they try to navigate, how they try to enter information or size windows. Things we take for granted after creating/running the app hour after hour, day after day.
Users will always try and do things you never expected and watching them in action might bring to light how you can change something that might have seemed minor, but really makes a big impact on them.
Read Don't Make Me Think by Steve Krug.
Speaking generally, you can't. There's not any way you can turn off the "programmer" part of your brain and think like a user.
And you're right about (c), testing groups don't necessarily catch all the bugs. But the best thing you can do is get a testing group comprised of real, honest-to-goodness end users, and value their feedback. Draw further conclusions from their general comments.
If you want to know how your users will see your system, the closest you can get is usability testing with real users. Everything else is just heuristics and experience, and is also subject to error. There's no such thing as a bug-free product, but you should be able to get a "strong" product with usability testing.
Buy a cheap, easy-to-use video camera and record your testers using the app. Even better, get some people unfamiliar with the app to use it and video them. It's relatively cheap, and you'd be surprised what it will highlight.
I like the policy of "eating your own dog food" (http://en.wikipedia.org/wiki/Eat_one's_own_dog_food). It brings you one step closer, because you become a user, although you still might not think like one.
Try to use your app when you are in a hurry (e.g., someone is waiting for you at dinner).
You will see all these little things, because you have to wait, you have to move from the keyboard back to the mouse, etc.
And also, make your wife use it. Or your mother.
Another useful test : help someone to use it, by phone. If he can't find the button with your directions, that's probably a bug.
The important thing is to get enough information that you yourself can become a "user". Once you do that you can answer most questions yourself.
The way I always do this is to go talk with them about what they need to do, what they typically do, and how they use their current tools to do it. Then (very important) sit with them while they do it. Make sure you get on with them well enough that you can come back to them with questions about how they handle edge cases you think of later (often the answer will be the appalling "we go around the system manually for that").
I will almost always notice something they are doing that is a royal PITA that they didn't bring up, because they are used to having to do it that way and don't know any better. I will always notice that their 90% typical workflow isn't the easiest workflow the tools provide.
You can't really rely on plain old-fashioned requirements gathering by itself, because that is asking them to think like a developer. They generally don't know what is possible to do with your software, what is easy, and what is hard. Also they typically have no clue on GUI design principles. If you ask them for design input they will just tell you to put any new control on their favorite page, until the thing looks like a 747 control panel.
The problem is often that even the users don't know what they want until they are actually working with the software. Sometimes, a small oversight can be a big usability problem, sometimes a well thought out function that was requested by many users sees only little use.
My suggestions to decrease the risk of not implementing the right usability features:
Take a look at users actually doing their day to day work. Even if they use another software or no software at all. You will be able to determine the artifacts they often need to get their job done. You will see what data they frequently need. Concentrate on the artifacts, data and workflows most used. They should be the most usable. Exotic workflows may be a bit more time consuming for the users than often used workflows.
Use working prototypes of the GUI to let users work through a realistic workflow. Watch them and note what hinders them and what works well. Adjust your prototypes accordingly.
If an issue arises in an often-used part of your software, it is time to discuss it now and in details. If the issue concerns a seldom used part, make it a low priority issue and discuss it if you have the time. If issues or suggestions are low priority, they should stay low priority. If you can't determine if solution A or solution B is the best, don't run in circles with the same arguments over and over. Just implement one of the solutions and see if the beta testers like it. The worst thing you could do is waste time over tiny issues, while big issues need to be fixed.
A software will never be perfect, because the viewpoints of users differ. Some users will think that a minor problem breaks the whole application. Others will live with even severe usability issues. People tend to lend their ear to those who argue the loudest. Get to know your users to separate the "loud" issues from the important ones. It takes experience to do this, and sometimes you will make wrong decisions, but there is no perfect way, only one of steady improvement.
If you can, set aside a certain amount of usability development resources for the rollout phase of your software. Usability issues will arise when people start working with it in a real production environment. Sometimes it is not important to present the perfect software, but to solve issues quickly as they arise.
The flippant (yet somewhat accurate) answer to how to think like a user is put a knitting needle in your ear and push really hard.
The longer response is that we as programmers are not normal and I mean that in a good way. I scratch my head at the number of people who still run executables they receive from strangers in emails and then wonder how their computer got infected.
Any group of people will in time develop their own jargon, conventions, practices and expectations. As a programmer you will expect different things from an operating system than Joe User will. This is natural, to be expected yet hard to work around.
It's also why BAs (business analysts) exist. They typically come from a business or testing background and don't think like programmers. They are your link to the users.
Really though, you should be talking to your users. There's no point debating what users do. Just drag a few in and see what they do.
A usability test group will help: tests not focused on discovering bugs, but on the learning curve of the new design, run by a group of users, not programmers.
I treat all users like malicious idiots.
Malicious because I assume all users are going to try and break my code, do stuff that is not allowed, avoid typing in valid data, and will do anything in their power to make my life hell.
Idiots because, again, I can't assume they will understand simple stuff like phone number formats; they will run away screaming if presented with too many choices, and will not make any leap of faith on complicated instructions. The goal is to hold their hand the entire way.
At the same time, it's important to make sure the user doesn't realize you think they're an idiot.
To think like a user, be one. But are these actually bugs that your testers are reporting? Or are they "enhancement requests"? If the software behaves as designed per requirements and they just don't like the way it operates, that's not a bug. That's a failure of requirements and design. Make it work, make it rock solid, make it easy to change and you'll be able to make it what your users want.
I see some good suggestions here, especially observing people trying to use your app. One thing I would suggest is to look at the order in which things are presented to the user on paper forms (if they do data entry from these) and make the data entry page mimic that order as closely as possible. So many data entry errors (and so much lost data entry speed) come from users having to jump around on the page and losing their place. I did some work for a political campaign this year, and in every case entering data was made much more difficult because the computer screen did things in a different order than the paper inputs. This is particularly important if the form is one that can't be changed to match the computer screen (like a voter registration form; a campaign has to use what the state provides).
Also, be consistent from screen to screen if possible. If it is first name, last name on one form, making it last name, first name on the next will confuse people and guarantee data entry errors.
If you are truly interested in understanding users though I strongly suggest taking a course in Human factors engineering. It is an enlightening experience.
The 'right' way to do this is to prototype (or mock up) your new interface features, and watch your users try to use them. Nothing is as enlightening as seeing a real user try to use a new feature.
Unfortunately, given most projects time and resources, this is not possible. If that is the position you are in I would recommend you discuss in the team who has the best grasp of usability, and then make them responsible for usability decisions - but that person will need to regularly consult real users to make sure his/her ideas are consistent with what the users want.
I'd suggest doing some form of usability testing; I've participated in such in the past, and found them quite useful.
If you were writing a ticketing system, for example, bring up tasks, and ask questions like "how would you update this ticket" or "what do you expect to happen if this button is clicked".
You don't necessarily need a full application, either; in some places screenshots can be used.
You could take the TDD/BDD approach and get the users involved before beta, having them work with you on refining requirements as you write your unit tests. We're beginning to incorporate some of those trends into our current project, and we're seeing fewer bugs in the areas where we have involved the users earlier.
There is no "think like a user" technique; get your hands on someone who knows nothing of the project and throw what you have done at them.
It's the only way to see how the look + feel + functionality present themselves to the end user.
Once you've shocked that person who knew nothing of the product, listen to all of their idiotic (or so you think) complaints, fix them, and sort out every silly cosmetic thing they point out (either by fixing the UI or by improving whatever documentation you had)...
And after you have satisfied the first person you chose to look at your app with zero knowledge of the subject, pick another... and another... until they stop being shocked when they see it and no longer get stuck in "OK... what does this do?" moments.
You (as a member of the project, be it the project manager, developer, etc) will never think like a user is my answer to that question.
Old saying: You can make something "fool proof" but you can't make it "Damn-fool proof".
Additionally: When you make something "idiot proof" the world invents a better idiot.
Other than that, I agree with what everyone else said.
Ask someone with absolutely no knowledge, insight or programming experience to use the program and try to figure out every function of the program.
People who would NEVER use such a program are most likely to find bugs.
See it as a new Safari user (or FF) who tries to put the URL inside the search field...
As a programmer, you assume no one would be that stupid (or, well... unknowing), but people actually find themselves in these situations sometimes. As programmers, we miss these things.