Is it a bug? Unintended behavior from agreed specifications [closed] - testing

When the application operates without fault according to the specification and the agreed behavior, but the outcome turns out to be undesired, what should this be called? Is it a bug?
How can it be a bug when the software performs as the specification and the agreed behavior dictate?

I do consider this a bug. In my experience, 90% of all bugs* are about not understanding what is wanted: i.e., design bugs.
In this case the design itself was buggy, leading to a bad specification. The fault here lies not with the person who implemented the program but with the person/team who designed the system in the first place.
In a Scrum setting, I usually treat fixing design bugs as completely new stories, since it really results in a new specification (and, along with it, new tests and a new definition of done). I don't consider it a "bugfix" task.
*Note: in most modern languages. In C, 90% of all bugs are about bad memory management.

Why does it matter what we call it? It looks to me like there is a deeper problem where you are: instead of realizing there is work to do and doing it, the ground is being prepared for blame-laying. This serves no purpose, and everyone wastes their time.
I understand the frustration on both sides, but right now what you have to do is a) identify what has to be changed for the software to work as expected (and therefore bring users the benefit they hoped for), and b) do it. Call it a bug or call it a new story - or call it a butterfly if it helps - but move forward.

I would have thought that the result was part of the agreed specification. Which would mean that the program is not behaving as specified. Either that, or the expectation of the result is flawed and the program is showing the correct result.

Yes, it's a defect.
But it's a defect in the specification.
This in turn could be a defect in the requirements gathering.
It could be the analyst misunderstanding the user's request, or the user requesting the wrong thing, or, usually, a mixture of both. (But I always blame the analyst - on the same basis that you would still blame a doctor for a misdiagnosis caused by poor communication with an inarticulate patient.)
In theory this is the same as any other defect: report it, fix the spec, recode and retest.
In practice, however, these can be a real pain, as they cross organisational, and often company, boundaries. If you are a subcontracting software company and you produced the software according to the specs, then contractually this is not a defect in your software; you should get new, better specs and be paid extra to code to the new specs.

People from a TDD background say that "a bug is a test that was not written". With that in mind, this unexpected behaviour is a test that was not written, because of a specification that was not included.
The agile movement is about delivering good products. That's why what matters is not fixing a set of requirements to implement according to a contract, but correcting and improving the team's direction at each step (iteration, sprint, etc.). So with that in mind, if you face unexpected behaviour, simply establish what the expected behaviour is, write new stories/test cases that you can verify against, and include them in the next sprints.
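For example (a sketch of my own with invented names, not from the question's project): once the team establishes what the expected behaviour is, that agreement becomes the test that was "not written":

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical example: the "missing test" that captures the newly agreed behaviour.
// DiscountCalculator and its rule are invented for illustration.
public class DiscountRegressionTest {

    // Inlined, made-up class under test so the example is self-contained.
    static class DiscountCalculator {
        double discountFor(double orderTotal) {
            // Newly agreed rule: no discount below the $10 threshold.
            return orderTotal < 10.0 ? 0.0 : orderTotal * 0.05;
        }
    }

    @Test
    public void ordersUnderTenDollarsGetNoDiscount() {
        DiscountCalculator calc = new DiscountCalculator();
        assertEquals(0.0, calc.discountFor(9.99), 0.001);
    }
}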
Change is everything.

This is a faulty specification. If a programmer makes mistakes implementing the spec, these are bugs. If the spec is faulty, but implemented correctly, this is a feature ;-).


How do you verify that users' requirements are addressed in the code you're working on? [closed]

Code can be perfect, and also perfectly useless at the same time. Getting requirements right is as important as making sure that requirements are implemented correctly.
How do you verify that users' requirements are addressed in the code you're working on?
You show it to the users as early and as often as possible.
Chances are that what they've asked for isn't actually what they want - and the best way of discovering that is to show them what you've got, even before it's finished.
EDIT: And yes, this is also an approach to answering questions on StackOverflow :)
You write tests that assert that the behavior the user requires exists. And, as was mentioned in another answer, you get feedback from the users early and often.
Even if you talk with the user and get everything right, the user might have gotten it wrong. They won't know until they use the software that they didn't want what they asked for. The surest way is to build some sort of prototype that allows the user to "try it out" before you write the code. You could try something like paper prototyping.
If possible, get your users to write your acceptance tests. This will help them think through what it means for the application to work correctly. Break the development down into small increments that build on each other. Expose these to the customer early (and often), getting them to use it, as others have said, but also have them run their acceptance tests. These should also be developed in tandem with the code under test. Passing the test won't mean that you have completely fulfilled the requirements (the tests themselves may be lacking), but it will give you and the customer some confidence that you are on the right track.
This is just one example of where heavy customer interaction pays off when developing code. The way to get the most assurance that you are developing the right code is having the customer participating in the development effort.
How do you verify that users' requirements are addressed in the code you're working on?
For a question put in this form the answer is "You can't".
The best way is to work with users from the very first days, show them prototypes and incorporate their feedback continuously.
Even so, at the end of the road, there will likely be nothing resembling what was originally discussed and agreed on.
Ask them what they want you to build before you build it.
Write that down and show them the list of requirements you have written down.
Get them to sign off on the functional design.
Build a mock up and confirm that it does what they want it to.
Show them the features as it is being implemented to confirm that they are correct.
Show them the application when it's finished and allow them to go through acceptance testing.
They still won't be happy, but you will have done everything you can.
Any features that are not in the document they signed off on can be considered change requests, for which you can charge them extra. Get them to sign off on everything you show them, to limit your liability.
By using a development method that frequently checks the alignment between the implementation and the requirements.
For me, the best way is to involve an "expert customer" who validates and tests the implementation iteratively, as often as possible.
If you don't, you risk ending up with, as you said, very beautiful software that is perfectly useless.
You can try personas: a cohort of example users who use the system.
Quantify their needs and wants, and make up scenarios of what is important to them and what they need to get done with the software.
Most importantly, make sure that the users' (the personas') goals are met.
Here's a post I wrote that explains it in more detail.
You write unit tests that expect an answer that supports the requirements. If the requirement is to sum a set of numbers, you write:
void testSumInvoice()
{
    // create an invoice of 3 lines of $1, $2, $3 respectively
    Invoice myInvoice = new Invoice().addLine(1).addLine(2).addLine(3);
    assertEquals(6, myInvoice.getSum());
}
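For the test above to compile, an Invoice class along these lines would be needed (a minimal sketch of my own; the answer doesn't show it):

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the Invoice class the test assumes.
class Invoice {
    private final List<Integer> lines = new ArrayList<>();

    // Returns this so calls can be chained: new Invoice().addLine(1).addLine(2)
    Invoice addLine(int amount) {
        lines.add(amount);
        return this;
    }

    int getSum() {
        int sum = 0;
        for (int line : lines) {
            sum += line;
        }
        return sum;
    }
}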
If the unit test fails, either your code is wrong or it was possibly changed due to some other requirement. Now you know that there is a conflict between the two cases that needs to be resolved. It could be as simple as updating the test code or as complex as going back to the customer with a newly discovered edge case that isn't covered by the requirements.
The beauty of writing unit tests is it forces you to understand what the program should do such that if you have trouble writing the unit test, you should revisit your requirements.
I don't really agree that code can be perfect... but that's outside of the real question. You need to find out from the users, prior to any design or coding, what they want - ask them 'what does success look like', 'what do you expect when the system is complete', 'how do you expect to use it' - and videotape the response, mindmap it, or wireframe it, and then review it with them to ensure you captured the most important aspects. You can then use those items to verify the iterative deliveries. Expect the users to change their minds/needs over time, and once they have 'it in their hand' (IKIWISI - I Know It When I See It), record any change requests in the same fashion.
AlbertoPL is right: "Most of the time even the users don't know what they want!"
And if they know, they have a solution in mind and specify aspects of that solution instead of just telling the problem.
And if they tell you a problem, they may have other problems without being aware that these are related by having a common cause or a common solution.
Thus, before you implement mockups and prototypes, go and watch the use of what the customer already has or what the staff is still doing by hand.

Branching hell, where is the risk vs productivity tipping point? [closed]

My company is floating the idea of extending our version numbers another notch (e.g. from major.minor.servicepack to major.minor.servicepack.customerfix) to allow for customer specific fixes.
This strikes me as a bad idea on the surface as my experience is the more branching a product does (and I believe the customer fixes are branches of the code base) the more overhead, the more dilution of effort and ultimately the less productive the development group becomes.
I've seen a lot of risk vs productivity discussions but just saying "I think this is a bad idea" isn't quite sufficient. What literature is there about the real costs of becoming too risk averse and adopting a heavy, customer specific, source code branching, development model?
A little clarification. I expect this model would mean the customer has control over what bug fixes go into their own private branch. I think they would rarely upgrade to the general trunk (it may not even exist in this model). I mean why would you if you could control your own private reality bubble?
Can't help with literature, but customer-specific branching is a bad idea. Been there, done that. Debugging the stuff was pure hell, because of course you had to have all those customer-specific versions available to reproduce the error... some time later, the company had to do a complete rewrite of the application because the code base had become utterly unmaintainable. (Moving the customer-specific parts into configuration files so every customer was on the same code line.)
Don't go there.
I agree that the overhead to handle customer fixes is generally high, but I wouldn't say don't do it.
I would say charge the customer an arm and a leg (and then some) if they want that much attention. Otherwise don't do customer branches.
You describe the changes that go into the customer branch as "fixes". Because they are fixes, I am assuming that they will also be made in the trunk and are really just advanced deliveries of future bug fixes. If this is the case, why not just create a new "servicepack" (from question: major.minor.servicepack) and give that version to the customer.
For example, you release version 1.2.3.
Customer #1 needs a fix, create version 1.2.4 and give it to Customer #1.
Customer #2 needs a fix, create version 1.2.5, give it to Customer #2 and advertise that they also get an interim fix "for free".
In my travels I haven't personally seen any definitive literature on most of the good practices, although I suspect that there is a lot of stuff out there.
Version numbers provide a really simple mechanism to tie specific versions in the wild back to specific sets of code changes. Technically, it doesn't matter how many levels are in the version number, so long as the developers are diligent in ensuring that for every "unique" version released, there is a "unique" version number.
Logic dictates that to limit support costs (which are huge, often worse than development ones), a reasonable organization would prefer to have the least number of "unique" versions running around in the field. One would be awesome; however, there are usually quite a few in the real world. It's a cost vs. convenience issue.
Usually, the first number indicates that this series of releases is not backward compatible. The next number says that it mostly is, but a few things have changed and the last number says some stuff was fixed, but the documents all hold true. Used that way, you don't need a fourth number, even if you've done some specific fixes at the request of a subset of your customers. The choice to become more client-driven shouldn't have any effect on your numbering scheme (and thus it's a bad idea).
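To make the mechanics concrete, here is a sketch of my own (not from the answer) of how a three-part major.minor.servicepack number might be compared - note there is deliberately no fourth, customer-specific field:

// Illustrative sketch only; the parsing rules are assumptions, not a real library API.
class Version implements Comparable<Version> {
    final int major, minor, servicePack;

    Version(String text) {
        String[] parts = text.split("\\.");
        this.major = Integer.parseInt(parts[0]);
        this.minor = Integer.parseInt(parts[1]);
        this.servicePack = Integer.parseInt(parts[2]);
    }

    @Override
    public int compareTo(Version other) {
        if (major != other.major) return Integer.compare(major, other.major);
        if (minor != other.minor) return Integer.compare(minor, other.minor);
        return Integer.compare(servicePack, other.servicePack);
    }
}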
Branching based on customer requests is absolute madness. One main trunk is essential, so each time you branch it creates massive technical debt. Branch enough, and you can't afford the interest anymore.
Not sure about the literature but... if there is even a chance that you are supporting customer specific fixes it seems sensible to at least have a branching and versioning strategy in place. Although I would be hoping for the strategy never to be used.
I guess the danger is you end up with a culture where customer specific fixes become acceptable and the norm, rather than addressing the true issue that resulted in the need for the fix.
I guess the real cost will largely depend on whether it's just an interim bug fix to keep a customer happy prior to the next release, or whether it's more of a one-off customisation. If it is just the former, and the quantity isn't too high, I wouldn't be too worried. However, if it's customisations, I would be scared witless.
If you can find a way to compile your one product and turn each client's features on/off in their "configuration" of a central build, that might be something worth figuring out.
Something like this might best be done through a profile/config/role based setup.
You may have to secure one set of client's customizations from another, or maybe they can all benefit from it. That part is up to you.
This way you can build custom views, custom code, custom roles, whatever. But they're all part of one project.
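A minimal sketch of that idea (the class, the properties file format and the property names are my assumptions, not the answer's):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical per-client feature toggles: one codebase, one small
// properties file per customer switching features on or off.
class ClientFeatures {
    private final Properties flags = new Properties();

    ClientFeatures(String clientConfigPath) throws IOException {
        try (FileInputStream in = new FileInputStream(clientConfigPath)) {
            flags.load(in);
        }
    }

    boolean isEnabled(String feature) {
        return Boolean.parseBoolean(flags.getProperty(feature, "false"));
    }
}

// Usage (hypothetical): every customer runs the same build, only the config differs.
//   ClientFeatures features = new ClientFeatures("config/acme.properties");
//   if (features.isEnabled("custom.invoice.layout")) { ... }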
Do not maintain multiple codebases of the same product at any cost. I did it once, and in the worst case a one-hour change took at least an hour for each system. It's suicide.
Do share what you end up doing!
In my experience, the tipping point is reached when it becomes difficult to explain how bugfixes should be propagated through the branches.
Branching hell is an issue because people lose track of what is in which branch. If propagation rules are too complex, people start making mistakes while propagating changes between branches, and that's how you create branching hell.
If the "Cisco" branch raised a defect and we fix it, should we propagate the fix to the current release of the "IBM" branch, or only to the next release of the "IBM" branch? What if IBM raised the same defect? What if IBM doesn't even use the feature that contains the defect? What if IBM later raises the same defect as high priority? With multiple customer branches propagation rules are never simple, so they pretty much guarantee branching hell.

Should the human factor be taken into account when deciding on what process to use? [closed]

When you are deciding on what methodology or process to use for your project, should you take into account the human factors? If there is any resistance to things, do you go with the flow or force people to change?
For example, say you want to push for pair programming but the team members resist working in that mode (or show dislike for it). What would you do? Make them get used to it, try to convince them to do it, or go with the flow and let them do what they like?
The human factor is the most important one.
If you consider nothing else, consider the culture and proclivities of the group.
People who want a process to fail will succeed. It's far easier to alter process than to alter people.
First try to reason, you may be wrong:
If you have resistance to certain things, you can usually give your points for why you think it's good, and hear their points for why they think it is bad, and come to some common grounds.
You should never force people into doing something against their will; instead, try to convince them based on your logical reasoning. Many times you will see reasons from them that change your point of view.
If your developer is too afraid to voice their opinion, then you should make them feel comfortable with giving their opinion. If they are still reluctant, then you should consider new developers.
Foot in the door principle:
If you want to try some new concept that neither you nor they have experience in, say pair programming, then you can ask them to try it for 1-2 weeks and then you can sit together again after this trial period and assess the effectiveness. I think most people will find it perfectly reasonable to try something new if they have no experience in it, if it is for the purpose of finding out the method's effectiveness, and if it is only for a trial period.
If after this trial period, the thing you were testing was successful, then your developer will be more open to the idea.
Don't change them, find someone who fits:
If you are 100% for some way of doing things, and your developer is 100% against it, and he won't try it and has no logical reason why, instead of trying to change him you're better off finding a developer that will fit into your way of doing things.
If they are 100% against what you want to change, you have to make a decision. Is the developer themselves more important to you, or is the process that you want to change more important.
If you force someone into something they don't want to do, they will find a way to make your method fail.
Yes. Your development process needs to be humane. That said, there are better and worse development practices and you should strive to use the better practices. The best methodologies understand both human strengths and weaknesses and have practices that promote the former and compensate for the latter.
For example, most agile processes put a high value on trusting developers to do the right thing -- to work hard and value quality. They allow developers to have significant input into the process and into the product. This takes advantage of the human quality of rising to expectations. On the other hand, humans have trouble managing too much complexity at one time, so agile practices insist on breaking things down into manageable chunks.
On the other hand, we know that people don't like to do things that don't directly add value to their work. Agile practices, recognizing the value of things like unit testing, nevertheless insist on it and require the developer to conform despite the initial reluctance. Using TDD compensates for this somewhat by giving real value to developing tests - you write them first and let them guide the design. It's a bit of a carrot-and-stick approach to get developers over the initial reluctance, to the point where they can experience the value of the method and buy into it on their own.
Adapting the Process
The key to developing a good process with your people lies in adapting the process to the amount of ceremony that you need or want. We use the RUP where I work and one of the central goals of the RUP is to tailor the amount of ceremony in your process to fit your project and the personnel.
For instance, small projects require far less ceremony and tool support. As well, people new to a process need time to adapt. It's best not to flood them with information and let them adapt at their own pace.
Show Me the Money!
To get people to buy into a new process, let them make a mistake (or present an example from the past) and then show them how the process could have helped prevent it. Try to draw a direct line showing how the process will help them improve the way they work.
For instance: if people are resistant to automating builds and running tests automatically, then the next time they release a fix that breaks a piece of code that was already working, use that opportunity to illustrate that an automated test would have caught the error before it got released, saving everyone time and money.
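As a concrete (and entirely invented) illustration of the kind of automated check that would have caught such a regression:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Made-up regression test: run automatically on every build, it would have
// flagged the broken "already working" behaviour before release.
public class ShippingCostRegressionTest {

    // Inlined, hypothetical class under test.
    static class ShippingCalculator {
        double costFor(double orderTotal) {
            // The previously working rule: orders of $50 or more ship free.
            return orderTotal >= 50.0 ? 0.0 : 4.95;
        }
    }

    @Test
    public void ordersOfFiftyDollarsOrMoreShipFree() {
        assertEquals(0.0, new ShippingCalculator().costFor(50.0), 0.001);
    }
}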
Automation
To ensure people can adapt to a process, remove as much human intervention as you can. Automate builds, tests, and reporting as much as possible, using information that is automatically captured.
How this helps support the process is by removing the "nag" factor. Many people resist a new process because they figure it means more work for them or extra work that produces little result in the end. By automating existing tasks and gathering data from them, you get a lot of benefit without increasing any individual developer's workload.
A classic example is continuous integration. Continuous Integration tools like CruiseControl, TeamCity or Hudson can work with version control repositories to extract latest versions of source code, build that code, execute and archive test results and package stuff for deployment. This requires no extra effort on the part of the developer but you get a lot of extra "process" in return. You now know how good your source code is, you can distribute it easily and you can catch bugs earlier.

Scrum, but with no testing or documentation [closed]

What do you do when you join a team that says they use Scrum, but only use it as a time-management tool and not the whole process?
How can I reinstate testing and documentation?
I was thinking to start off with adding user stories specifically for testing and documenting.
Perhaps someone else has more experience with this than I do, as I am sure it's not that uncommon.
The key to Scrum is that a task be identifiable as "done" before it can be classed as done. How does your company assess whether something is done without reviewing documentation and tests?
Perhaps they have an unusual, but valid, way of doing it. Or perhaps they have missed the point of "done" tasks. I'd suggest you start by asking them how they measure "done" and whether it could be improved. Then suggest documentation and testing as the way of improving the process.
Note that neither testing nor documentation are in fact part of Scrum. Scrum is a pure project management approach - the required engineering practices, like the ones you mention, are supposed to "emerge" during the project. And most specifically, they are supposed to be identified during the heartbeat retrospectives that you do at the end of every sprint. Are you doing those? Can you bring up your concerns there - and are they actually the biggest concerns the team has?
Is the issue that they don't have any documentation and tests, or that they aren't implementing the entire Scrum methodology? Those are 2 very different problems in my mind.
I would much prefer an organization that has taken the time and effort to find and fit a development process that matches their development style as opposed to mandating down from on high the one true process. So I would not be concerned at all if they were using a process that they called Scrum but that didn't meet all the "official" guidelines. Try to determine why the process is the way it is. Chances are that if they have taken the time to tailor it, the team will be receptive to your ideas, especially if you have taken the time to determine why things are the way they are. If you simply approach it as "this isn't Scrum and so isn't right", you will probably not make much headway, but by being pragmatic about the benefits you can likely make some substantial improvements.
Alternatively, if they aren't doing testing and don't have any documentation, I would consider that a fairly bad sign. And by documentation I am taking the minimalist view here - a list of features, bug tracking, etc. I would be very concerned by the absence of these items, less concerned by the absence of items higher up the abstraction list. In the absence of support from management, I would suggest you lead by example. Take it on yourself to set up a simple bug tracking system (there are several - in a pinch, simple text lists in a central location work as well). Don't declare your features complete until someone else has tested them. This can be as simple as walking over to another developer and asking them to try the feature in front of you. If someone claims a feature is complete, take a few minutes to familiarize yourself with it. If you find a bug, politely mention it to the responsible developer. Slowly build an environment where the team can see the benefits of running tests and tracking features and bugs.
Most teams operate in this manner simply because of a mistaken belief that they don't have time to "do it right", or that they will get to it later. Often this will occur when a simple proof-of-concept done by a developer or two as a side-project turns into a full-on development effort. By showing that it can actually save time and effort, and reducing the initial costs to the rest of the team, you will often find that it becomes ingrained as part of the process without ever actually being officially endorsed or accepted.
If you have management support it will make it much easier, but always be careful to make sure that the team is receptive to the changes. This may mean it takes longer than you want, but so be it, without the team's support any mandated process will fail at the first sign of pressure, which is when you need the process the most.
*Disclaimer - On my last project I spearheaded the movement to tailor the SCRUM process to fit our environment. The "official" process was simply untenable for our client, but it was still an invaluable guide in tailoring our process.
"adding user stories specifically for testing and documenting"
While meta-user stories might make sense in some circles, it rarely works out well. Software folks rarely cope well with meta-user stories; they either don't get the idea that they can change their own processes by writing a story, or - more typically - they engineer the meta-user story to death.
When you're interviewing users, it feels like they're making the user story up. Certainly, you're making it up as you listen to them and try to capture it.
When an IT organization tries to make up its own user stories about how IT should work, the process falls apart. Until the organization has done the thing (testing, for example) a bunch of times manually, they're not really qualified to write user stories. Then, after they've done it, they don't need software development processes, they'll just automate the important bits a little at a time.
I think change has to come from a less formal direction. Actually balking at calling something "done" that hasn't been tested is a good starting point.
IT doesn't do things unless forced. So, meet the users and find out why they're not requiring testing. Coach them to require testing. Tell them the consequences and the words to use.
A lot can go wrong in an organization to lead to poor processes. It's important to know what's wrong, and create a demand for change. The best possible thing is to have your boss complaining that you're not fixing it, rather than you suggesting that perhaps it would be good to fix it.
[It doesn't feel right when your boss demands you fix the process, but it's about the only way change will happen.]

Should we be tracking defects in things other than code? [closed]

At various times in my career I have encouraged staff I worked with and/or managed to track defects in artifacts of the development process other than source code (i.e. requirements, tests, design). Each time the request has been met with astonishment, confusion and resistance. It seems so obvious to me that I'm always a little shocked when people resist the idea.
What we get from this exercise is a picture of where bugs are created and where they are found (in what part of the process). If we are building bad requirements, then we'll know it and can work to improve them.
Is anyone else collecting information on defects not in source code?
Yes, track them all.
Documentation, design docs, requirements, etc.
I am also as astonished as you when I hear "arguments" against it.
At the very least, the tracking system should be able to identify where the defect was found and in what part of the process it was injected.
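A sketch of what such a record might look like (the field and phase names are my guesses at what the answer intends, not an existing tool's schema):

// Hypothetical defect record capturing both where a defect was injected
// and where it was found, so process-stage gaps become visible.
class DefectRecord {
    enum Phase { REQUIREMENTS, DESIGN, CODE, TEST, DOCUMENTATION, PRODUCTION }

    final String id;
    final Phase injectedIn;  // the part of the process that created it
    final Phase foundIn;     // the part of the process that caught it

    DefectRecord(String id, Phase injectedIn, Phase foundIn) {
        this.id = id;
        this.injectedIn = injectedIn;
        this.foundIn = foundIn;
    }
}

// Usage (hypothetical):
//   new DefectRecord("D-123", DefectRecord.Phase.REQUIREMENTS, DefectRecord.Phase.TEST);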
Absolutely. Just look at Ubuntu Bug #1.
Yes, definitely. The artifacts surrounding your code--models, specs, doco, requirements info, use cases, etc--can all contain errors that affect the code itself.
Normally bug tracking systems have an assumption that they're a list of things that are to be fixed or implemented. Tracking bugs in requirements or other documentation (e.g. task lists) doesn't seem like it's the same thing. It's more a matter of keeping records so you can trend problems and evaluate if you're making fewer of them.
I'm tracking them, but outside of our bug tracking system.
Well, duh... anything you can improve, do what you can to improve!
Treating it all as bug tracking makes sense - opinions will vary, as you note - but using one tracking system would give a coherent big picture of it all, let tasks be assigned, etc. Maybe a demo, a slide show, or something aimed at using these systems in ways beyond the original source code tracking - pictures convince more than words.
I've normally tracked the source of all defects. They may get fixed in the code, but they aren't necessarily caused there.
Wrong requirement, wrongly interpreted requirement, bad design, developer brainf*rt, bad documentation, wrong test, missing test, outdated test, code that doesn't do what the developer thinks it does, tool/compiler error (very rare, in my view), build system problem...
To me, they're all "the system doesn't do what the customer wants it to do", and all indicate that something must be changed in order to make it do what the customer wants. Arguing whether it's a defect or a feature, or a source code bug or some other issue, distracts from addressing the issues, to me.
One biggie that no one seems to have mentioned is to start a database of bad smells and traps for use when performing peer reviews.
This is an invaluable resource for the peers actually performing the review.
It definitely pays off in the long term. This should also be a live document, database, etc. that is added to as:
bugs are fixed
as peers perform reviews, and
as new blood arrives to join the team(s) bringing with them new knowledge and experience.
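One possible shape for an entry in that live database (my own sketch; the answer doesn't prescribe a structure):

// Hypothetical "bad smell" catalogue entry for use during peer reviews.
class SmellEntry {
    final String name;          // e.g. "swallowed exception"
    final String whatToLookFor; // guidance for the reviewer
    final String origin;        // the bug fix, review, or new hire that contributed it

    SmellEntry(String name, String whatToLookFor, String origin) {
        this.name = name;
        this.whatToLookFor = whatToLookFor;
        this.origin = origin;
    }
}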
HTH.
cheers,
Rob
Absolutely. If your process is far enough along to trace a defect back to its origin, great. It helps customers and designers qualify the constraints in which they operate.
Customer: develop a robot to cut grass, where all blades of grass are to be cut to a precise uniform length.
Designer: we will use left-handed kindergarten scissors mounted perpendicular to the ground to ensure crisp/precise cuts.
QA: cuts are precise.
Customer: why does it take the robot 6 days to cut the grass? We need it in 30 mins or less!
Clearly, tracking the source of the performance defect can help in molding conversations and improving the process going forward.
We track bugs in software, errors in documents, errors in drawings, and requests for new features, all using the same tracking tool.