On our Scrum board, tasks start in 'To Do', move to 'In Progress', and when a task is finished it goes to 'To Verify' before ending up in 'Done'. The 'To Verify' column means the task is done and someone else can have a look at it, test it, and comment on it.
This has proven helpful for catching errors, producing better code, etc.
To people who have a similar practice: after the developer has addressed the comments/errors, do you verify it again, or do you assume the issues have been addressed and move the task to 'Done'?
I hope this is clear, and would like to hear your thoughts.
This question is not specific to Scrum; I've seen this problem outside agile processes too.
The answer turns out to be: it depends on the issues raised in verification. If only minor issues are raised, and the responsible developer is senior enough, then trust him to fix things without a second check. But if the person doing the verification considers the items too complex, or the Scrum Master lacks the confidence that the developer will get it right the second time around, then move the post-it back to 'In Progress'.
A good example of the kind of error you don't bother checking is a simple typo. A good example of something you would check again is an error in a boundary condition, when there are many interdependent boundary conditions.
In my experience, fixing a bug has a 50-75% chance of introducing new bugs, especially if the code is not covered by test cases. I would certainly verify it again.
Never assume the issue has been addressed until it has been verified independently (i.e. not by the person who fixed it).
We have no 'To Verify' column. A task is in progress until it has been implemented and tested. An untested task cannot be done, and why should someone else test it, report back to the programmer, and then have the programmer fix it? That only adds latency to the workflow. The programmer should test his own code, write unit tests for it where possible, integrate it into the app, and test it there as part of the natural workflow. That way he finds his own bugs and can fix them immediately. When he sets the task to Done, he's convinced not only that the task is fully implemented, but also that it is bug free.
Okay, we all know that means little. Sometimes bugs are found much later, but those are the less obvious bugs, and usually fixing them will be a task of its own.
In the projects I have been in (both agile and non-agile), bug fixes were always verified by someone else. Quite often new bugs are introduced, so a bit of exploring around the fix is needed. I have even seen debug code forgotten in the build - everything works fine, but extra files appear from nowhere.
It's also possible the developer did not find all paths to the bug, or that the bug report was so unclear that the developer made the wrong fix - e.g. if something was misunderstood and correct functionality was reported as a bug.
To ensure things stay done when they are done, tests for the fix should also be added to your automated tests - otherwise some embarrassing corner-case bug will re-appear months later.
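For instance, a fix for such a corner-case bug could be pinned down with a regression test along these lines - a minimal JUnit sketch, where DiscountCalculator and its bulk-discount rule are hypothetical stand-ins rather than anything from the question:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountCalculatorRegressionTest {

    // Regression test for a (hypothetical) boundary-condition bug: orders of
    // exactly 100 items used to fall through to the "no discount" branch.
    // Keeping this test in the automated suite stops the corner case from
    // silently reappearing months later.
    @Test
    public void orderOfExactlyOneHundredItemsGetsBulkDiscount() {
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(0.10, calculator.discountFor(100), 0.0001);
    }
}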
Interview question. Please help: what does the tester need to do?
The tester's responsibility at that stage ends when the details of the bug are logged in whatever bug-tracking tool is being used by the team.
From that point onwards, until the bug is fixed, it is the development team owner's responsibility.
Normally, the dev team owner, test team leads, etc. triage the bugs at regular intervals to decide on their priority.
Based on the priorities decided, it's up to the dev team lead to ensure the bug gets fixed.
The tester is really out of the loop until the bug is fixed and a new build containing that fix is supplied to the tester for testing.
Whenever you file a bug you can set its priority. If you feel it is important, set the priority to high and explain why.
Now, if the development team does not want to fix it, that means they think its chances of occurring on the customer side are low. So you will have to explain why the customer could run into it as well.
If it is a blocking issue for testing, say so and do nothing further until it is resolved.
This depends on how you define important. One very common strategy is to find out the most-used customer scenarios and whether this bug affects them. As a tester one should think about the whole product experience, and if this bug interrupts the most basic system flows then it's your duty as a tester to make sure it is fixed.
Your first step is to be as objective as possible and collect the data: user statistics, scenarios, how often the bug could appear, percentages, etc. This step is very important for presenting your case.
Then, using this information, try to reason with the dev about the importance of this bug.
If that still isn't working, then you have to go another notch up and talk to your manager about it.
If he thinks your case is valid and takes it up with the dev/dev team, great.
Otherwise, come back to step 1. Repeat steps 1-4, and if in the meantime you can find other affected teams or other issues the bug can cause, that is an additional benefit.
This answer is one of several you could give in an interview, and you may have to vary it depending on the interaction and other constraints. But this direction gives you a good start.
If the developer refuses to fix it for no good reason, escalate to the manager and raise the severity.
If it was a management decision to allow the bug in that release but fix it later, then create a ticket (production ticket or feature ticket) and track it. Mark it as deferred for now.
We had a project handed over from the onshore team to our team (offshore) not long ago. However, we had difficulties with the hand-over process.
We couldn't think of any questions to ask during their design walk-through, because we were overwhelmed by the sheer amount of information. We wanted to ask, but we didn't know what to ask. Since they got no questions from us, management thinks the hand-over was completed successfully.
We had tried to go through all the documentation on our company wiki before attending the hand-over presentation, but there are too many documents and we don't even know where to start.
I wonder: are there any rules or best practices we can follow to ensure a successful project hand-over, either from us or to us?
Thanks.
In terms of reading the documentation, personally I'd go for this order:
Get a short overview of the basic function of the application - what is it meant to achieve. The business case is probably the best document which will already exist.
Then the functional specification. At this point you're not trying to understand any sort of how or technology, just what the app is meant to do. If it's massive, ask them what the key business processes are and focus on those.
Then the high level technical overview. This should include an architecture diagram, required platforms, versions, config and so on. List any questions you have.
Then skim any other useful-looking technical documents - certainly a FAQ if there is one; test scripts can be good too, as they outline detailed "how to" scenarios. Maybe it's just me, but I find reading technical documents before I've seen the system a waste - it's too academic, and they're normally shockingly written. It's certainly an area where I'd limit the time I spent if I didn't feel I was getting a reasonable return for it.
If there are several of you, arrange structured reviews between you and discuss the documents you've read, making sure you've got what you need out of them. If the system is big, then each take an area and present it to the others - give yourselves a reason to learn as much as possible; knowing you're going to be quizzed is a good motivator. Make a list of questions wherever you don't understand something. Having structured reviews between you will focus your minds and make it more of an interactive task, rather than just trawling through page after page of tedious documents.
Once you get face to face with them:
Start with a full system demo. Ask questions as they come up, and don't let them fob you off with unclear answers - if they can't answer something, write it down and task them with getting the answer.
Now get the code checked out and running on your machines. Do this on at least two machines - one they lead, one you lead. Document the whole process - this is the most important step. If you can't get the code running you're screwed.
Go through the build process. Ensure that you can build the app (including any automated build and unit tests they may have). Note that all unit tests should pass - if they don't or if they say "oh, that one always fails" then they need to fix that before final acceptance.
Go through the install process. Do this at least twice, once they lead, once you lead. Make sure that it's documented.
Now come up with a set of common business functions carried out with the application. Use this to walk the code with them. The code base will be too big to cover the whole thing but make sure you cover a representative sample.
If there is a database or an API do a similar exercise. Come up with some standard data you might need to extract or some basic tasks you might need to carry out using the API and spend some time working through these with them.
Ask them if there's anything they think you should know.
Make sure that any questions you've written down anywhere else are answered.
You may consider it worth going through the bug list (open and closed) - start with the high-priority ones and talk through anything that looks particularly worrying. Even if they've fixed it, it may point at a bit of code which is troublesome.
And finally if the opportunity exists - if there are any outstanding bugs or changes, see if you can pair program a couple.
Do not finally accept the app unless you are 100% sure you can:
Get the code to compile
Get the code to build (including the database)
Get the application installed
Do not accept handover is complete until they have:
Documented anything you picked up on that wasn't covered to your satisfaction
Answered ALL of your questions - a question they won't answer after being asked repeatedly screams of something they're hiding
And grab their e-mail addresses and phone numbers. Even if it's only informal they'll probably be willing to help out if the shit really hits the fan...
Good luck.
My basic process for receiving a handover would be:
Get a general overview of the app, document it
Get a list of all future work that the client expects
... all known issues
... any implementation specifics
As much up-to-date documentation as they have
If possible, have them write some tests for critical components of the system (or at least get them thoroughly documented)
If there is too much documentation (which is possible), just confirm that it is all up to date, and make sure you find out from them where to start if that is not clear.
Ask as many questions as possible - anything that comes to mind - because you may not have the chance again.
Most handovers, perhaps all of them, will cause a lot of information to be lost. The only effective way to perform a handover that I have seen is to do it gradually. One way to do it is to allow a few key people from Phase One to stay on the project well into Phase Two.
The extreme solution is to get rid of all handovers, and start using an Agile mindset.
As a start, define the exit criteria for the handover. These should be discussed, negotiated and agreed on by both parties, and make sure higher management knows about them. Then write up a checklist of everything needed to achieve the exit criteria and chase it.
Check out "Software Requirements" and Software Requirement Patterns for ideas on questions to ask when gathering information about a project. I think that just as they would work for new development, they would also help you to come to terms with an existing project.
Code can be perfect, and also perfectly useless at the same time. Getting requirements right is as important as making sure that requirements are implemented correctly.
How do you verify that users' requirements are addressed in the code you're working on?
You show it to the users as early and as often as possible.
Chances are that what they've asked for isn't actually what they want - and the best way of discovering that is to show them what you've got, even before it's finished.
EDIT: And yes, this is also an approach to answering questions on StackOverflow :)
You write tests that assert that the behavior the user requires exists. And, as was mentioned in another answer, you get feedback from the users early and often.
Even if you talk with the user and get everything right, the user might have gotten it wrong. They won't know until they use the software that they didn't want what they asked for. The surest way is to do some sort of prototype that allows the user to "try it out" before you write the code. You could try something like paper prototyping.
If possible, get your users to write your acceptance tests. This will help them think through what it means for the application to work correctly. Break the development down into small increments that build on each other. Expose these to the customer early (and often), getting them to use it, as others have said, but also have them run their acceptance tests. These should also be developed in tandem with the code under test. Passing the test won't mean that you have completely fulfilled the requirements (the tests themselves may be lacking), but it will give you and the customer some confidence that you are on the right track.
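A minimal sketch of how such a customer-worded acceptance test might read, assuming JUnit and a hypothetical OrderService - the names and the billing rule are illustrative, not taken from the question:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PlaceOrderAcceptanceTest {

    // Acceptance test phrased in the customer's own terms:
    // "a customer who orders items priced $1, $2 and $3 is billed $6 in total".
    // OrderService is a hypothetical placeholder for the application under test.
    @Test
    public void customerIsBilledTheSumOfTheOrderedItems() {
        OrderService orders = new OrderService();
        long orderId = orders.placeOrder("customer-42", 1.00, 2.00, 3.00);
        assertEquals(6.00, orders.totalBilledFor(orderId), 0.0001);
    }
}

Because the test is worded in the customer's language, the customer can review it - or help write it - and confirm it matches what they actually meant.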
This is just one example of where heavy customer interaction pays off when developing code. The way to get the most assurance that you are developing the right code is having the customer participating in the development effort.
How do you verify that users' requirements are addressed in the code you're working on?
For a question put in this form the answer is "You can't".
The best way is to work with users from the very first days, show them prototypes and incorporate their feedback continuously.
Even so, at the end of the road, there will likely be nothing resembling what was originally discussed and agreed on.
Ask them what they want you to build before you build it.
Write that down and show them the list of requirements you have written down.
Get them to sign off on the functional design.
Build a mock up and confirm that it does what they want it to.
Show them the features as it is being implemented to confirm that they are correct.
Show them the application when it's finished and allow them to go through acceptance testing.
They still won't be happy, but you will have done everything you can.
Any features that are not in the document they signed off on can be considered change requests, for which you can charge them extra. Get them to sign off on everything you show them, to limit your liability.
By using a development method that frequently checks the alignment between the implementation and the requirements.
For me, the best way is to involve an "expert customer" who validates and tests the implementation iteratively, as often as possible...
If you don't, you risk ending up with, as you said, a very beautiful piece of software that is perfectly useless...
You can try personas: a cohort of example users who use the system.
Quantify their needs and wants, and make up scenarios of what is important to them and what they need to get done with the software.
Most importantly, make sure that the users' (the personas') goals are met.
Here's a post I wrote that explains it in more detail.
You write unit tests that expect an answer that supports the requirements. If the requirement is to sum a set of numbers, you write
// JUnit test - assumes a hypothetical Invoice class under test
@Test
public void testSumInvoice()
{
    // create an invoice of 3 lines of $1, $2, $3 respectively
    Invoice myInvoice = new Invoice().addLine(1).addLine(2).addLine(3);
    assertEquals(6, myInvoice.getSum());
}
If the unit test fails, either your code is wrong or it was possibly changed due to some other requirement. Now you know there is a conflict between the two cases that needs to be resolved. It could be as simple as updating the test code, or as complex as going back to the customer with a newly discovered edge case that isn't covered by the requirements.
The beauty of writing unit tests is that it forces you to understand what the program should do; if you have trouble writing the unit test, you should revisit your requirements.
I don't really agree that code can be perfect... but that's outside the real question. You need to find out from the users, before any design or coding is done, what they want - ask them 'what does success look like', 'what do you expect when the system is complete', 'how do you expect to use it'... and videotape the response, mind-map it, or wireframe it, and then review it with them to ensure you captured the most important aspects. You can then use those items to verify the iterative deliveries... expect the users to change their minds/needs over time, especially once they have 'it in their hand' (IKIWISI - I Know It When I See It)... and record any change requests in the same fashion.
AlbertoPL is right: "Most of the time even the users don't know what they want!"
And if they know, they have a solution in mind and specify aspects of that solution instead of just telling the problem.
And if they tell you a problem, they may have other problems without being aware that these are related by having a common cause or a common solution.
Thus, before you implement mockups and prototypes, go and watch the use of what the customer already has or what the staff is still doing by hand.
At various times in my career I have encouraged staff I worked with and/or managed to track defects in artifacts of the development process other than source code (i.e. requirements, tests, design). Each time the request has been met with astonishment, confusion and resistance. It seems so obvious to me that I'm always a little shocked when people resist the idea.
What we get from this exercise is a picture of where bugs are created and where they are found (in what part of the process). If we are building bad requirements then we'll know it and can work to improve them.
Is anyone else collecting information on defects not in source code?
Yes, track them all.
Documentation, design docs, requirements, etc.
I am also as astonished as you when I hear "arguments" against it.
At the very least, the tracking system should be able to identify where the defect was found and in what part of the process it was injected.
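As a rough sketch of the minimum fields such a tracker would need - the names below are illustrative, not taken from any particular tool:

// Illustrative only: a defect record that captures both where the defect was
// injected and where it was found, so defects can be trended by process phase.
enum Phase { REQUIREMENTS, DESIGN, CODING, TESTING, DOCUMENTATION, PRODUCTION }

class DefectRecord {
    String id;
    String summary;
    Phase injectedIn;        // e.g. REQUIREMENTS
    Phase foundIn;           // e.g. TESTING
    String affectedArtifact; // spec section, design doc, source file, test case...
}

Comparing injectedIn and foundIn across many records is what gives you the picture of which parts of the process create the most defects and how long they survive before being caught.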
Absolutely. Just look at Ubuntu Bug #1.
Yes, definitely. The artifacts surrounding your code--models, specs, doco, requirements info, use cases, etc--can all contain errors that affect the code itself.
Normally bug tracking systems have an assumption that they're a list of things that are to be fixed or implemented. Tracking bugs in requirements or other documentation (e.g. task lists) doesn't seem like it's the same thing. It's more a matter of keeping records so you can trend problems and evaluate if you're making fewer of them.
I'm tracking them, but outside of our bug tracking system.
Well duh... anything you can improve, do what you can to improve!
Treating it all as bug tracking makes sense - opinion will vary, as you note - but using one tracking system would give a coherent big picture of it all, let tasks be assigned, etc. Maybe a demo, a slide show, or something aimed at using these systems in ways beyond the original source-code tracking would help - pictures convince more than words.
I've normally tracked the source of all defects. They may get fixed in the code, but they aren't necessarily caused there.
Wrong requirement, wrongly interpreted requirement, bad design, developer brainf*rt, bad documentation, wrong test, missing test, outdated test, code that doesn't do what the developer thinks it does, tool/compiler error (very rare, in my view), build system problem...
To me, they're all "the system doesn't do what the customer wants it to do", and all indicate something must be changed in order to make it do what the customer wants it to do. Arguing about whether it's a defect or a feature, or a source code bug or some other issue, distracts from addressing the issues, in my view.
One biggie that no one seems to have mentioned is to start a database of bad smells and traps for use when performing peer reviews.
This is an invaluable resource for the peers actually performing the review.
It definitely pays off in the long term. This should also be a live document, database, etc. that is added to as:
bugs are fixed
as peers perform reviews, and
as new blood arrives to join the team(s) bringing with them new knowledge and experience.
HTH.
cheers,
Rob
Absolutely. If your process is far enough along to trace a defect back to its origin, great. It helps customers and designers qualify the constraints in which they operate.
Customer: develop a robot to cut grass, where all blades of grass are to be cut to a precise, uniform length.
Designer: we will use left-handed kindergarten scissors mounted perpendicular to the ground to ensure crisp/precise cuts.
QA: cuts are precise.
Customer: why does it take the robot 6 days to cut the grass? We need it in 30 mins or less!
Clearly, tracking the source of the performance defect can help in shaping conversations and improving the process going forward.
We track bugs in software, errors in documents, errors in drawings, and requests for new features all using the same tracking tool.
Recently we added a couple of web service machines, and they couldn't successfully send email. The exceptions were being swallowed and logged, but we (IS) didn't notice for about a month.
Needless to say, many purchase orders, and retractions of purchase orders, were never sent out over the past month.
While this isn't any one person's fault really, is there any GOOD way to break this to someone non-technical who is higher up in the company than you?
Thanks in advance for any advice, I'm freaking out just a bit. :)
Edit: Reading this over, I'm more asking for tips on how to break the news. I understand there isn't a GOOD way, just maybe successful tips that have worked for you in the past.
Resolution in case anybody was wondering...
The new web service machines' IPs weren't added to our mail server's list of trusted IPs. :)
Put emphasis on the fact that the problem was discovered and fixed swiftly by your team. Have detailed metrics on the number of failures, which customers were affected, etc. ready, in-hand. Have a contingency plan ready to describe that will prevent similar issues from happening in the future. Engender a sense of camaraderie with the higher-up because you are all on the same team and it's a team problem. If you convey a sense of urgency and give the impression that you appreciate the impact to the bottom line as much as they do, they will respond much better.
Lowly techs often make the mistake of going to upper management with their tail tucked between their legs, like a child who shamefully shows his parents the lamp he broke and waits for a spanking. You are an adult and a professional - leap into action and coordinate the right people to be in place to make the right decisions to fix it. In a case like this, that inevitably means bringing in upper management, but do so with an intention of solution seeking, not fear.
You bring shame to your department. You know what you must do.
http://en.wikipedia.org/wiki/Seppuku
Gee, bad news for ya - but it is someone's fault.
The folks who built the server and installed the apps and signed off on putting them into production use without testing them. :-)
Pretty much the only way to break this to the management is to acknowledge the MAJOR FUBAR and show them the plan for making sure this kind of situation doesn't happen again.
Good luck. :-)
Raise the issue as soon as possible.
Come with a clear plan/lists of steps of how to mitigate the problem:
how to fix the issue, so further processing works fine
is it possible to determine which transactions are affected
what is necessary to ensure this does not happen again - automated tests for deployment (a sketch follows below), a preproduction stage for new servers, anything else?
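As a rough illustration of the "automated tests for deployment" idea - a minimal sketch in which the relay host, the port, and the smoke test itself are assumptions for the example, not details from the question:

import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical post-deployment smoke test: run it on every newly provisioned
// machine to confirm the SMTP relay is reachable before the box goes live.
// Host and port are placeholders for your environment.
public class OutboundMailSmokeTest {
    public static void main(String[] args) throws Exception {
        String smtpHost = "mail.internal.example.com";
        int smtpPort = 25;
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(smtpHost, smtpPort), 5000);
            System.out.println("SMTP relay reachable from this machine");
        }
        // Note: reachability alone would not have caught the trusted-IP issue
        // described above; a fuller check would send a real test message and
        // verify that the relay accepted it.
    }
}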
Be proactive in resolving the situation. As long as it's not a direct fault of yours, you might even benefit from the whole snafu.
Being honest and direct is the best, rather than trying to cover up certain aspects of what happened.
Don't blame anyone, simply accept that a problem happened, propose a solution, and execute on that solution. Communicate this plan to your superiors and be clear about why you are taking the steps you are taking to solve the problem.
The time to find responsible parties and assign blame comes later; solving the issues having to do with collecting money from customers comes first.
Once the immediate problem is solved, then find a way to ensure that whatever caused this problem cannot happen again. Have a plan.
Point out
What happened
Why it happened
What you think the fallout was (ie, missed purchase order retractions)
What you've already done to fix it
What you still need to do (if more fixing is needed)
What management needs to do, say, spend (if needed)
What can be done to prevent similar incidents in the future
Be proactive about reporting it and spin the negative into a positive ("we've learned the following valuable lessons").
Avoid pointing the finger wherever possible unless asked, and try to spin that in a positive light too. Techs make mistakes; they are human after all. If they can learn from the mistakes made they're probably worth keeping around.
Whatever you do, make sure you have agreed on it beforehand with your immediate superior, at least. Even if you are the IS director.
Lie or cover it up :-) - if you can shift the blame onto a new intern I'll award you 10 kittens!