How do I encourage code sharing and limit the bug tracking overhead while maintaining flexibility in my releases? [closed]

How do you track changes and testing effort for bugs that impact multiple artifacts released separately?
Code sharing is good because it reduces the total number of paths through the code, which means more impact for fewer changes and fewer bugs (or more bugs addressed with fewer changes). For example, we may build a search tool and an indexer that use the same file-handling package or model package.
We need to be able to ensure that changes get tested in all the right components and track which changes were included with which released tools. We also don't want to be forced to release the change in all applications at the same time.
Goal: one bug that can be tested, scheduled, and tracked independently against each released application, with automated systems that understand the architecture guiding us to make the right choices.
Bug Split Release Scenario:
We may release a patch of the search tool that contains a performance fix in a util library. The fix is critical for the search tool but less visible in the indexer, so there it can wait until the next maintenance release. We want the one bug to be scheduled, tracked, and released with the search patch and deferred until the indexer's next maintenance release.
So, when I create a bug in our tracking system (JIRA), I want it to magically become multiple objects:
a primary issue describing the problem and tracking the development work
a set of tasks that let me track testing effort and how this issue has been released for each application it impacts.
How can we make the user experience of code sharing low effort to encourage more of it without becoming blind to what changes impacted which releases or forcing people to enter many duplicate bugs?
I'm sure that large-scale projects from Eclipse to Linux distros have faced this kind of problem, and I wonder how they have solved it (I'm going to poke around on them next).
Do any of you have experience with this kind of situation and how have you approached it?

In Jira you can enable sub-tasks, so you could assign sub-tasks to the main issue. You can also enable time tracking on issues so you know how much time each task is taking and what the difference between estimated and actual is.
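If you want less manual clicking, that structure can be scripted against Jira's REST issue endpoint. Here is a minimal Python sketch - the base URL, credentials, project key, issue-type names, and application list are all assumptions for illustration, not something Jira gives you out of the box:

```python
import requests

# Assumed Jira instance and credentials - adjust for your setup.
JIRA_ISSUE_URL = "https://jira.example.com/rest/api/2/issue"
AUTH = ("bot-user", "api-token")

def create_issue(fields):
    """POST one issue (or sub-task) and return its key."""
    resp = requests.post(JIRA_ISSUE_URL, json={"fields": fields}, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]

# Primary issue: tracks the shared development work once.
bug_key = create_issue({
    "project": {"key": "UTIL"},
    "summary": "Performance regression in shared file-handling package",
    "issuetype": {"name": "Bug"},
})

# One sub-task per consuming application, so testing and release
# can be scheduled independently for each.
for app in ("search-tool", "indexer"):
    create_issue({
        "project": {"key": "UTIL"},
        "parent": {"key": bug_key},
        "summary": f"Verify and release the fix in {app}",
        "issuetype": {"name": "Sub-task"},
    })
```

Each sub-task can then carry its own fix version and QA status, which is what lets a single bug ship with the search patch now and with the indexer's maintenance release later.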
You can also enable versioning so you have a road map of what is being done in the next release, with a change log. The problem with the road map is that it only covers one project, so you can't have a road map that spans all of your projects.
Finally, you can create your own custom workflows to do almost anything you want to do. I've never tried this because we'd have to learn a new language to do it and the reason we got Jira was to decrease development overhead, not increase it by having to customise our bug tracker - but it is possible.

For Jira, make use of the Affects Version/s and Fix Version/s fields (plus you can add multiple custom fields, like "verified by QA in versions").
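For example, a rough sketch of stamping those fields programmatically over the REST API (the issue key, version names, and credentials are made up):

```python
import requests

# Assumed issue key, version names, and credentials.
resp = requests.put(
    "https://jira.example.com/rest/api/2/issue/UTIL-42",
    json={"fields": {
        "versions": [{"name": "1.2.0"}],      # Affects Version/s
        "fixVersions": [{"name": "1.2.1"}],   # Fix Version/s
    }},
    auth=("bot-user", "api-token"),
)
resp.raise_for_status()
```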

Related

Software crashes in production environment, no access to debugger. What to do in short-term and long-term? [closed]

This is an interview question:
Software crashes in the production environment, with no access to a debugger. What steps would you take to solve the problem in the short term? The long term? What would you do to prevent it from happening? What tools would you use?
My ideas:
Short term:
Check the program's log files and any logs generated by the OS, which may contain signals about the crash.
Narrow down the file where the program crashes by adding some print statements.
Add try-catch blocks in the likely locations.
Find the reason.
Long-term:
Review the whole program design and the algorithm/data structure usage, to make sure they are used correctly and suitably.
Test it with the different cases that have caused crashes to find the root causes.
Tools: GDB, the Valgrind family, gprof.
Any better ideas or solutions?
Short Term
1. The absolute first thing to do is work out what was done to trigger the problem and try to reproduce it. If you can do that, you can now track it down in a debuggable environment.
2. If it is not reproducible, you need to look through all the information you collected in step one (which will include any logging) and see if you can see a possible problem.
3. If the problem has not been found, you will need to add logging, and lots of it. This is where a "DEBUG" logging setting comes in handy. It will probably slow down the system, and may even mask the problem (which tells you something about the nature of the problem).
4. With the new logging information you can go back to step one. Repeat this until the problem is solved!
In the long term, the most obvious thing to do is make sure you have sufficient logging in place, even if it has to be turned on and off, to catch problems. As well as this, you need to try to beef up the testing effort.
When you have tracked down a problem, it is worth noting the type of problem (race condition, scalability, database access, etc.). This gives you an area to apply more automated and manual tests.
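To make point 3 concrete, here is a minimal Python sketch of a "DEBUG" switch driven by an environment variable - the variable name and log format are just illustrative:

```python
import logging
import os

# Log level comes from the environment, so verbose output can be
# switched on in production without a redeploy.
logging.basicConfig(
    level=os.environ.get("LOG_LEVEL", "INFO").upper(),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("myapp")

log.debug("cache holds %d entries", 1024)  # emitted only when LOG_LEVEL=DEBUG
log.info("request served")
```

Flipping LOG_LEVEL=DEBUG on the production host then buys you the extra detail without a rebuild.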
You have some good initial ideas, here are my comments:
Add logging to your code - you will get very little information from the operating system about your code.
If exceptions can be thrown by methods that you call, you should catch them. Don't let them bubble up to the end user!
Run Valgrind now, not later.
Setup a test environment that simulates your production environment. Start simple, and increase the complexity until you are able to reproduce your issue. You do have a test environment, right?
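As a small illustration of the second point - don't let exceptions bubble up to the user - a Python process can install a last-resort hook that logs the full traceback instead (the logger name and setup here are illustrative):

```python
import logging
import sys

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("crash-handler")

def log_uncaught(exc_type, exc_value, exc_tb):
    """Log any exception that would otherwise reach the end user."""
    log.critical("uncaught exception", exc_info=(exc_type, exc_value, exc_tb))

sys.excepthook = log_uncaught
```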
The very first thing you should do is determine the severity of the problem. This will help to devise your short-term strategy. You will need to have some brief discussions with the major stakeholders in the software (such as the client), or have a project manager do this and report back to you.
In the heat of the moment, this is often the bit overlooked, and rushing a short-term fix almost always means wasting a lot of time not really understanding what needs to be done.
After this, your actual strategy, both long term and short term, is rather dependent on the technology you are using and how it is deployed.
Short term
It is absolutely vital to grab some preliminary information about the crash before attempting to resolve the problem: grab log files, take screenshots, note down system info like memory/CPU usage, and archive any temporary data that might be useful.
The short-term action should be to get the system up-and-running again, quickly. Some common approaches to short-term solutions:
Try turning it off and on again... Seriously, 90% of the time this will get production running again in the short term, at least until the bug manifests itself again.
Revert to a previous production release, preferably the latest version that was known to work fairly reliably.
Run a second instance on another machine and fail over if the problem occurs again. This has the added bonus that logs and system state are preserved after the last crash occurred.
Long term
In the long term, you will want to properly analyse the information you gathered at the time of failure. Where possible, try to reproduce the problem as closely as you can. Revert your code to the version that was deployed (you do use version control tools, right?), and check high-level factors as well as low-level configuration ones. For example, who was using the system when it crashed? Can they show you what they did?
Debugging and logging may be useful at this stage, and all the usual developer tools such as functional tests and memory profiling tools. A crash could come from a number of sources, from memory protection faults to an unexpected state of a resource. You should compile a list of candidate problems, and cross them off as you gain confidence that they aren't the cause of the crash.
Apart from logging, you can enable the creation of .mdmp files (Windows) or core dumps (Linux) and then examine them later. One downside of this approach is that core dumps can be pretty big. Both .mdmp files and core dumps contain the context of the application when the crash occurred.
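On Linux the core-dump size limit is often 0 by default; here is a minimal sketch of raising it from inside a Python process (assuming a Unix host):

```python
import resource

# Raise the soft core-dump limit to whatever the hard limit allows,
# so a crash leaves a core file behind for post-mortem analysis.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

# After the next crash, examine the dump with e.g.: gdb ./myprog core
```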

What do you do when a library you use is no longer maintained? [closed]

One thing I've always found frustrating is when a library I use is no longer maintained. Even looking at update history and community beforehand, I've run into the situation where I check back later to find that the version I'm using is the last version.
Generally this goes unnoticed until a few months have passed, or some bug/limitation has been found. I run into this fairly often when coding in Python, because my desire to upgrade to a new version of the interpreter can easily introduce problems in libraries that worked fine before. My question is: what is the best response to this situation?
Do you become the maintainer of the old library? Even if you're only fixing the bugs you care about, this is still a lot of work. Especially if the library is large, complex, and has less-than-well-documented code (the case more often than not).
Do you switch to a different library (if there is one)? This is also a significant undertaking, with the potential to introduce new bugs, especially if the only alternatives approach the problem from a different angle. This can be true even if you had the foresight to write an abstraction layer for the old library's functionality.
Do you roll your own? It probably ends up as less code than the old library, since you only write the parts you care about. It's therefore easier to maintain in the future. But now you've wasted days/weeks/months to produce something that is probably less functional, and is guaranteed to introduce tons of new bugs.
I realize the answer depends on the specific case: the size of the library, whether source is available, how maintainable it is, how much of it your code uses, how deeply your code relies on it, etc. I'm looking for answers across a range of cases. What are your experiences with this problem?
Well, you've found one argument to lessen the number of external dependencies...
I've come across this in several Java projects I've audited; it seems people have a tendency to drop in a Jar found somewhere on the Web for the tiniest amount of reuse possible from it. The result is a mess of dependencies that ends up undermining the code base. I prefer to use external components sparingly.
It's probably most useful to ask what you can do before. Make a point of evaluating the future lifetime of an external component before you start using it. Do some research on how large its developer community and its user community are. Also, prefer to use a component that has one or two "lesser" alternatives which you could also use.
If there's something you're tempted to use, but it has only one or two people working on it and isn't used much beyond their own project, then you should probably roll your own - or join forces with the maintainers of the component.
I think the real answer is in how you select third-party libraries to include in your code.
If you happen to like constantly upgrading your code to the latest version of the language, then by default you can only use libraries that have active communities behind them.
In fact, I would go as far as saying that the only time you want to use a third-party open source library is when the community behind it is large (say at least 40+ users) and it has undergone a few releases.
For a commercial library the same thing applies: how long is the company going to be around, and how many other clients use it?
If you can't find a library in this position, then ensure that you abstract the third-party library out of your code so that replacement isn't hard in the future.
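A minimal sketch of that abstraction layer in Python - all class and method names are invented for illustration:

```python
from abc import ABC, abstractmethod

class SearchBackend(ABC):
    """The only search surface the rest of the codebase touches."""

    @abstractmethod
    def query(self, text: str) -> list[str]:
        ...

class OldLibBackend(SearchBackend):
    """Adapter around the now-unmaintained library; its import and
    quirks stay hidden inside this one class."""

    def query(self, text: str) -> list[str]:
        raise NotImplementedError("delegates to the abandoned library")

class InMemoryBackend(SearchBackend):
    """A replacement backend: swapping libraries means writing one
    new adapter, not touching every caller."""

    def __init__(self, docs: list[str]):
        self.docs = docs

    def query(self, text: str) -> list[str]:
        return [d for d in self.docs if text in d]
```

Callers depend only on SearchBackend, so the day the old library dies, the blast radius is one file.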
When the Java EE framework my employer chose went belly up, we went out and found a newer, better one. Fortunately Spring was available.
We prefer to roll our own for that very reason. We end up with full control over it, full knowledge of how it works, and we can change it any way we want. When the blame game is played, our ass is on the line, so we prefer to reduce the risk and do it ourselves.
We had a situation once where we did use an external library, and it got rewritten and repurposed by the author and no longer did what we expected. We rolled over that, wrote our own version, and continued safely.
The bottom line is safety, and minimization of risk.
If the source is available, the licence is open and the library does the job really well, you have the option to fork the library. By doing this, you can also add new features to it. If the library has lots of things to fix and the code is a mess, it is better to find something else to work with.

Scrum, but with no testing or documentation [closed]

What do you do when you join a team that says they use Scrum, but only use it as a time-management tool and not the whole process?
How can I reinstate back testing and documentation?
I was thinking of starting off by adding user stories specifically for testing and documenting.
Perhaps someone else has more experience with this than I do, as I am sure it's not that uncommon.
The key to Scrum is that a task be identifiable as "done" before it can be classed as done. How does your company assess whether something is done without reviewing documentation and tests?
Perhaps they have an unusual, but valid, way of doing it. Or perhaps they have missed the point of "done" tasks. I'd suggest you start by asking them how they measure "done" and whether it could be improved. Then suggest documentation and testing as the way of improving the process.
Note that neither testing nor documentation are in fact part of Scrum. Scrum is a pure project management approach - the required engineering practices, like the ones you mention, are supposed to "emerge" during the project. And most specifically, they are supposed to be identified during the heartbeat retrospectives that you do at the end of every sprint. Are you doing those? Can you bring up your concerns there - and are they actually the biggest concerns the team has?
Is the issue that they don't have any documentation and tests, or that they aren't implementing the entire Scrum methodology? Those are two very different problems in my mind.
I would much prefer an organization that has taken the time and effort to find and fit a development process that matches their development style as opposed to mandating down from on high the one true process. So I would not be concerned at all if they were using a process that they called Scrum but that didn't meet all the "official" guidelines. Try to determine why the process is the way it is. Chances are that if they have taken the time to tailor it, the team will be receptive to your ideas, especially if you have taken the time to determine why things are the way they are. If you simply approach it as "this isn't Scrum and so isn't right", you will probably not make much headway, but by being pragmatic about the benefits you can likely make some substantial improvements.
Alternatively, if they aren't doing testing and don't have any documentation I would consider that a fairly bad sign. And by documentation I am taking the minimalist view here - a list of features, bug tracking, etc. - I would be very concerned by the absence of these items, less concerned by the absence of items higher up the abstraction list. In the absence of support from management, I would suggest you lead by example. Take it on yourself to set up a simple bug tracking system (there are several - in a pinch, simple text lists in a central location work as well). Don't declare your features complete until someone else has tested them. This can be as simple as walking over to another developer and asking them to try it in front of you. If someone claims a feature is complete, take a few minutes to familiarize yourself with it. If you find a bug, politely mention it to the responsible developer. Slowly build an environment where the team can see the benefits of running tests and tracking features and bugs.
Most teams operate in this manner simply because of a mistaken belief that they don't have time to "do it right", or that they will get to it later. Often this will occur when a simple proof-of-concept done by a developer or two as a side-project turns into a full-on development effort. By showing that it can actually save time and effort, and reducing the initial costs to the rest of the team, you will often find that it becomes ingrained as part of the process without ever actually being officially endorsed or accepted.
If you have management support it will make it much easier, but always be careful to make sure that the team is receptive to the changes. This may mean it takes longer than you want, but so be it, without the team's support any mandated process will fail at the first sign of pressure, which is when you need the process the most.
*Disclaimer - On my last project I spearheaded the movement to tailor the SCRUM process to fit our environment. The "official" process was simply untenable for our client, but it was still an invaluable guide in tailoring our process.
"adding user stories specifically for testing and documenting"
While meta-user stories might make sense in some circles, it rarely works out well. Software folks rarely cope well with meta-user stories: they either don't get the idea that they can change their own processes by writing a story, or -- more typically -- they engineer the meta-user story to death.
When you're interviewing users, it feels like they're making the user story up. Certainly, you're making it up as you listen to them and try to capture it.
When an IT organization tries to make up its own user stories about how IT should work, the process falls apart. Until the organization has done the thing (testing, for example) a bunch of times manually, they're not really qualified to write user stories. Then, after they've done it, they don't need software development processes, they'll just automate the important bits a little at a time.
I think change has to come from a less formal direction. Actually balking at calling something "done" that hasn't been tested is a good starting point.
IT doesn't do things unless forced. So, meet the users and find out why they're not requiring testing. Coach them to require testing. Tell them the consequences and the words to use.
A lot can go wrong in an organization to lead to poor processes. It's important to know what's wrong, and create a demand for change. The best possible thing is to have your boss complaining that you're not fixing it, rather than you suggesting that perhaps it would be good to fix it.
[It doesn't feel right when your boss demands you fix the process, but it's about the only way change will happen.]

Should we be tracking defects in things other than code? [closed]

At various times in my career I have encouraged staff I worked with and/or managed to track defects in artifacts of the development process other than source code (i.e. requirements, tests, design). Each time the request has been met with astonishment, confusion and resistance. It seems so obvious to me that I'm always a little shocked when people resist the idea.
What we get from this exercise is a picture of where bugs are created and where they are found (in what part of the process). If we are building bad requirements, then we'll know it and can work to improve them.
Is anyone else collecting information on defects not in source code?
Yes, track them all.
Documentation, design docs, requirements, etc.
I am also as astonished as you when I hear "arguments" against it.
At the very least, the tracking system should be able to identify where the defect was found and in what part of the process it was injected.
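As a sketch of what you can then do with that data - assuming your tracker exports a CSV with "injected_in" and "found_in" columns (both names invented here):

```python
import csv
from collections import Counter

injected, found = Counter(), Counter()
with open("defects.csv", newline="") as f:
    for row in csv.DictReader(f):
        injected[row["injected_in"]] += 1  # e.g. requirements, design, code
        found[row["found_in"]] += 1        # e.g. review, QA, production

# Show where defects are created versus where they are caught.
for phase, n in injected.most_common():
    print(f"{phase:14} injected={n:4} found={found[phase]:4}")
```

That per-phase picture is exactly what tells you whether, say, bad requirements are your real bottleneck.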
Absolutely. Just look at Ubuntu Bug #1.
Yes, definitely. The artifacts surrounding your code--models, specs, doco, requirements info, use cases, etc--can all contain errors that affect the code itself.
Normally bug tracking systems have an assumption that they're a list of things that are to be fixed or implemented. Tracking bugs in requirements or other documentation (e.g. task lists) doesn't seem like it's the same thing. It's more a matter of keeping records so you can trend problems and evaluate if you're making fewer of them.
I'm tracking them, but outside of our bug tracking system.
Well, duh... anything you can improve, do what you can to improve!
Treating it all as bug tracking makes sense - opinion will vary, as you note - but using one tracking system would give a coherent big picture of it all, let tasks be assigned, etc. Maybe a demo, a slide show or something aimed at using these systems in ways beyond the original source code tracking - pictures convince more than words.
I've normally tracked the source of all defects. They may get fixed in the code, but they aren't necessarily caused by it.
Wrong requirement, wrongly interpreted requirement, bad design, developer brainf*rt, bad documentation, wrong test, missing test, outdated test, code that doesn't do what the developer thinks it does, tool/compiler error (very rare, in my view), build system problem....
To me, they're all "the system doesn't do what the customer wants it to do", and all indicate something must be changed in order to make it do what the customer wants it to do. Arguing whether it's a defect or a feature, or a source code bug or some other issue, distracts from addressing the issues, to me.
One biggie that no one seems to have mentioned is to start a database of bad smells and traps for use when performing peer reviews.
This is an invaluable resource for the peers actually performing the review.
It definitely pays off in the long term. This should also be a live document, database, etc. that is added to as:
bugs are fixed
as peers perform reviews, and
as new blood arrives to join the team(s) bringing with them new knowledge and experience.
HTH.
cheers,
Rob
Absolutely. If your process is well enough along to trace a defect back to its origin, great. It helps customers and designers qualify the constraints in which they operate.
customer: develop a robot to cut grass where all blades of grass are to be cut to a precise uniform length
designer: we will use left-handed kindergarten scissors mounted perpendicular to the ground to ensure crisp/precise cuts
QA: cuts are precise.
customer: why does it take the robot 6 days to cut the grass? We need it in 30 mins or less!
Clearly, tracking the source of the performance defect can help in molding conversations and improving the process going forward.
We track bugs in software, errors in documents, errors in drawings, and requests for new features, all using the same tracking tool.

Allocating resources for project documentation [closed]

What would you suggest for the following scenario:
A dozen developers need to build and design a complex system. The design needs to be documented for future developers, and the design decisions must be noted. These reports need to be produced about every two months. My question is how this project should be documented.
I see two possibilities. Each developer writes about the things they helped design and integrate, and then one person combines these documents. The final document will probably be incoherent or redundant in places, since the person tasked with assembling everything won't have much time to adjust every part.
Assume that the documentation parts from each developer arrive just a few days before the deadline. A collaborative system (e.g. a wiki) wouldn't work properly, since there wouldn't be anything to read until a few days before the deadline.
Or should a few people (2-3) be tasked with writing the documentation while the rest of the team works on actually developing the system? The developers would need a way to transfer their design choices and conclusions to the technical writers. How could this be done efficiently?
We approach this from 2 sides, using a RUP style approach. In the first case, you'll have a domain expert who is responsible for roughing out the design of what you're going to deliver - with developers chipping in as necessary. In the second case, we use a technical author - they document the application, so they should have a good idea of how it hangs together, and you involve them right through the design and development process. In this case, they can help to polish the design, and to make sure that it matches what they thought was being developed.
We use Confluence (Atlassian's wiki-like thing) and document all kinds of different "things". The developers do it continuously, and we push each other for docs - we let peer pressure decide what is necessary. Whenever someone new comes along, he/she is tasked with reading through everything and finding out what is still correct. The incorrect stuff is either deleted or updated as a consequence of this. We're happy when we can delete stuff ;)
The nice thing about this process is that the relevant stuff stays and the irrelevant stuff is deleted. We always "got away" from the more formalized demands by claiming that we could always construct the word documents they wanted if "they" needed them. "They" never needed them.
I think alternative 2 is the less agile one, because it adds a new stage to the project (although it may run in parallel with testing).
If you are in an agile model, then just add documentation (following a guideline) as a story.
If you are in a staged approach, then I would nevertheless ask developers to work on documentation, following some guidelines, and review that documentation alongside the design and the code. Eventually, you may have a technical writer review everything for proper English, but that would be a kind of "release" activity.
I think you can use Sandcastle to document your project.
Check out Sandcastle from Microsoft.
It's not complete documentation, but making sure that interfaces etc. are commented using Doxygen-style comments means writing code and documenting it are closer together.
That way, developers should document what they do. I still think a review by the architect(s) is needed to ensure consistent quality, but ensuring people document what they do is the best way to ensure they follow the architecture.
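For illustration, the same idea in Python: the documentation lives in a docstring right next to the interface it describes, where doc generators can pick it up (the function itself is hypothetical):

```python
def index_document(path: str, *, follow_links: bool = False) -> int:
    """Index one document and return the number of terms stored.

    Args:
        path: Filesystem path of the document to index.
        follow_links: Whether symlinked documents are followed.

    Raises:
        FileNotFoundError: If ``path`` does not exist.
    """
    ...
```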