What is your or your company's programming process? [closed] - process

I'm looking for process suggestions, and I've seen a few around the site. What I'd love to hear is what you specifically use at your company, or just you and your hobby projects. Any links to other websites talking about these topics are certainly welcome!
Some questions to base an answer off of:
How do users report bugs/feature requests to you? What software do you use to keep track of them?
How do bugs/feature requests get turned into "work"? Do you plan the work? Do you have a schedule?
Do you have specs and follow them? How detailed are they?
Do you have a technical lead? What is their role? Do they do any programming themselves, or just architecture/mentoring?
Do you unit test? How has it helped you? What would you say your coverage is?
Do you code review? When working on a tight deadline, does code readability suffer? Do you plan to go back later and clean it up?
Do you document? How much commenting do you or your company feel comfortable with? (Description of class, each method and inside methods? Or just tricky parts of the code?)
What does your SCM flow look like? Do you use feature branches, tags? What does your "trunk" or "master" look like? Is it where new development happens, or the most stable part of your code base?

For my (small) company:
We design the UI first. This is absolutely critical for our designs, as a complex UI will almost immediately alienate potential buyers. We prototype our designs on paper, then as we decide on specifics for the design, prepare the View and any appropriate Controller code for continuous interactive prototyping of our designs.
As we move towards an acceptable UI, we then write a paper spec for the workflow logic of the application. Paper is cheap, and churning through designs guarantees that you've at least spent a small amount of time thinking about the implementation rather than coding blind.
Our specs are kept in revision control along with our source. If we decide on a change, or want to experiment, we branch the code, and IMMEDIATELY update the spec to detail what we're trying to accomplish with this particular branch. Unit tests for branches are not required; however, they are required for anything we want to incorporate back into trunk. We've found this encourages experiments.
Specs are not holy, nor are they owned by any particular individual. By committing the spec to the democratic environment of source control, we encourage constant experimentation and revision - as long as it is documented so we aren't saying "WTF?" later.
On a recent iPhone game (not yet published), we ended up with almost 500 branches, which later translated into nearly 20 different features, a huge number of concept simplifications ("Tap to Cancel" on the progress bar instead of a separate button), a number of rejected ideas, and 3 new projects. The great thing is each and every idea was documented, so it was easy to visualize how the idea could change the product.
After each major build (anything in trunk gets updated, with unit tests passing), we try to have at least 2 people test out the project. Mostly, we try to find people who have little knowledge of computers, as we've found it's far too easy to design complexity rather than simplicity.
We use Doxygen to generate our documentation. We don't really have auto-generation incorporated into our build process yet, but we are working on it.
We do not code review. If the unit test works, and the source doesn't cause problems, it's probably ok - but this is because we are able to rely on the quality of our programmers. This probably would not work in all environments.
Unit testing has been a godsend for our programming practices. Since no new code can be passed into trunk without appropriate unit tests, we have fairly good coverage with our trunk, and moderate coverage in our branches. However, it is no substitute for user testing - only a tool to aid in getting to that point.
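To illustrate the kind of gate we mean, here's a minimal sketch (Python's unittest for illustration; the ScoreKeeper class is hypothetical) of the sort of test that has to accompany anything headed for trunk:

    import unittest

    class ScoreKeeper:
        """Hypothetical game class: tracks a score that can never go negative."""
        def __init__(self):
            self.score = 0

        def add(self, points):
            self.score = max(0, self.score + points)

    class ScoreKeeperTest(unittest.TestCase):
        def test_add_accumulates(self):
            keeper = ScoreKeeper()
            keeper.add(10)
            keeper.add(5)
            self.assertEqual(keeper.score, 15)

        def test_score_never_negative(self):
            keeper = ScoreKeeper()
            keeper.add(-100)
            self.assertEqual(keeper.score, 0)

    if __name__ == "__main__":
        unittest.main()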
For bug tracking, we use Bugzilla. We don't like it, but it works for now. We will probably soon either roll our own solution or migrate to FogBugz. Our goal is not to release software until we reach zero known bugs. Because of this stance, updates to our existing code packages are usually fairly minimal.
So, basically, our flow usually looks something like this:
Paper UI Spec + Planning » Mental Testing » Step 1
View Code + Unit Tests » User Testing » Step 1 or 2
Paper Controller & Model Spec + Planning » Mental Testing » Step 2 or 3
Model & Controller Code + Unit Tests » User Testing » Step 3 or 4
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Rejection
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Acceptance » Unit Tests » Trunk » Step 2 or 4
Known Bugs » Bug Tracker » Bug Repair » Step 2 or 4
Finished Product » Bug Reports » Step 2 or 4
Our process is not perfect by any means, but a perfect process would also imply perfect humans and technology - and THAT's not going to happen anytime soon. The amount of paper we go through in planning is staggering - maybe it's time for us to get a contract with Dunder Mifflin?

I am not sure why this question was downvoted. I think it's a great question. It's one thing to do a Google search and read some random websites, which a lot of the time are trying to sell you something rather than be objective. It's another thing to ask the SO crowd, who are developers/IT managers, to share their experiences and what works or doesn't work for their teams.
Now that this point is out of the way: I am sure a lot of developers will point you towards "Agile" and/or Scrum. Keep in mind that these terms are often used very loosely, especially Agile. I am probably going to sound very controversial by saying this, which is not my intention, but these methodologies are over-hyped - especially Scrum, which is more of a product marketed by Scrum consultants than a "real" methodology. Having said that, at the end of the day, you've got to use what works best for you and your team; if it's Agile/Scrum/XP or whatever, go for it. At the same time you need to be flexible about it; don't become religious about any methodology, tool, or technology. If something is not working for you, or you can get more efficient by changing something, go for it.
To be more specific regarding your questions. Here's the basic summary of techniques that have been working for me (a lot of these are common sense):
Organize all the documents and emails pertaining to a specific project, and make them accessible to others through a central location (I use MS OneNote 2007 and love it for all my documentation, progress, features, bug tracking, etc.)
All meetings (which you should try to minimize) should be followed by action items, with each item assigned to a specific person. Any verbal agreement should be put into a written document. All documents are added to the project site/repository (MS OneNote in my case).
Before starting any new development, have a written document of what the system will be capable of doing (and what it won't do). Commit to it, but be flexible to business needs. How detailed should the document be? Detailed enough so that everyone understands what the final system will be capable of.
Schedules are good, but be realistic and honest with yourself and the business users. The basic guideline that I use: release quality, usable software that lacks some features rather than buggy software with all the features.
Have open lines of communication within your dev team and between your developers and business groups, but at the end of the day one person (or a few key people) should be responsible for making key decisions.
Unit test where it makes sense. But DO NOT become obsessive about it. 100% code coverage != no bugs, nor does it mean the software works correctly according to the specs (a sketch at the end of this answer shows why).
Do have code standards and code reviews. Commit to the standards, but if they don't work for some situations, allow for flexibility.
Comment your code, especially the hard-to-read/understand parts, but don't make it into a novel.
Go back and clean up your code if you're already working on that class/method - implementing a new feature, working on a bug fix, etc. But don't refactor just for the sake of refactoring, unless you have nothing else to do and you're bored.
And the last and more important item:
Do not become religious about any specific methodology or technology. Borrow the best aspects from each, and find the balance that works for you and your team.
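As promised above, here's a small illustration of the coverage point (a hedged sketch; the function is hypothetical, Python for illustration). Every line below is executed by the test, giving 100% line coverage, yet a bug survives:

    def days_in_month(month):
        # Buggy: ignores leap years entirely.
        if month == 2:
            return 28
        return 30 if month in (4, 6, 9, 11) else 31

    def test_days_in_month():
        # Executes every line -> 100% line coverage...
        assert days_in_month(1) == 31
        assert days_in_month(2) == 28   # ...but February 2024 had 29 days,
        assert days_in_month(4) == 30   # and the leap-year bug goes unnoticed.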

We use Trac as our bug/feature request tracking system
Trac tickets are reviewed, turned into workable units and then assigned to a milestone
The Trac tickets are our specs, containing mostly very sparse information which has to be talked over during the milestone
No, but our development team consists of only two members
Yes, we test, and yes, TDD has helped us very much. Coverage is at about 70 percent (Cobertura)
No, we refactor when appropriate (during code changes)
We document only public methods and classes; our maximum line count is 40, so methods are usually small enough to be self-describing (if there is such a thing ;-)
svn with trunk, rc and stable branches
trunk - Development of new features, bugfixing of older features
rc - For in house testing, bugfixes are merged down from trunk
stable - only bugfixing merged down from trunk or rc

To give a better answer, my company's policy is to use XP as much as possible and to follow the principles and practices as outlined in the Agile manifesto.
http://agilemanifesto.org/
http://www.extremeprogramming.org/
So this includes things like story cards, test-driven development, pair programming, automated testing, continuous integration, one-click installs and so on. We are not big on documentation, but we realize that we need to produce just enough documentation in order to create working software.
In a nut shell:
create just enough user stories to start development (user stories here are meant to be the beginning of the conversation with the business, not completed specs or fully fleshed-out use cases, but short bits of business value that can be implemented in less than 1 iteration)
iteratively implement story cards based on what the business prioritizes as the most important
get feedback from the business on what was just implemented (e.g., good, bad, almost, etc)
repeat until business decides that the software is good enough

Starting Testing department

I am joining a company; they don't have any formal testing setup. They expect me to start a testing department. I have a good understanding of manual and automated testing, but I'm not sure how to start or which tools to use for document sharing and bug tracking.
Please provide whatever guidance and info you can.
Thanks
This is a very broad question and almost impossible to answer without significantly more knowledge of your company's products, quality goals and existing tooling... But I've got some Opinions :tm: that might help, starting with some philosophy (sorry).
What You're For
The function of a testing department isn't to test; the goal is to help the company be confident in its delivery of products. Your customers want to know that your software is accurate and stable. Your Operations team wants to avoid Production going down. Your Developers want to feel confident that their changes work and don't have any negative side effects.
I personally feel that the best way for a testing team to provide that confidence is not by writing tests; it's by editing them. The testing team provides the tooling, guidelines and expertise to help the rest of the Engineering departments make testing an integral part of the process.
It's like cooking. You don't make a well seasoned meal by chopping and sautéing and stirring and then giving it to a head chef to taste. You taste continually while you go because you're the one who knows what the food should be like. The head chef trains you and provides feedback on the final dish so that you learn how to season correctly.
Choosing Tools
Irrelevant. Mostly.
Your tools need to give you what you're after and then get out of your way. At the moment, the company barely knows what it's after, so you could even use a Google Doc to track defects.
You don't want to get in anyone's way to begin with, or they'll start to resent you. Your team needs to provide value and start to earn the social capital to change the Engineering processes to help deliver your goals.
So, use whatever document sharing tools are already in use, whether that's a wiki, Google, Dropbox, etc. If you're choosing a new one because there's no collaboration, I'm partial to Notion.
If your team already has a collaborative build tool (e.g. Jenkins, Travis), it's probably best to stick with that, adding in testing steps. Again, the less friction you introduce, the better your initial outcomes.
I wouldn't bother building and maintaining a test grid; Instead, lean on a vendor like Sauce Labs for infrastructure and expertise. That way you've got easy parallelisation, wide platform coverage, test asset collection, insights, as well as their experience in supporting Testing teams. Disclaimer: I'm the Manager of Developer Relations at Sauce Labs, so I'm probably biased ;)
As for testing tools; If you want your engineering teams to collaborate on test production, you need to stick with an ecosystem they can use. This likely means whatever they're already using.
How To Start Testing
Selecting What To Test
Your organisation wants testing so badly they're hiring you. That implies there's a traumatic event that they want to avoid happening again. So, start there. Find out what it is, and create a test for it.
If Black Friday overwhelmed their site, do Load testing. If their build is always breaking, concentrate on unit testing. If functionality doesn't work in Prod, add an integration test.
Test Coverage
There's a trap for new players, and you're likely to hear this from your devs:
We're so far behind on test coverage we'll never catch up
That is absolutely true.... if you never start! Add the tests that prevent the trauma that brought you on board and you're already adding value; you'll catch that problem next time.
Another trap is setting test coverage goals. Test coverage is a great way to monitor your process but a terrible way to improve it. Force your teams to increase test coverage (or not let it slip) and they'll start to resent the process... And write crap tests just to boost the percentage.
Instead, use coverage for feedback. If coverage goes down during a commit, discuss why and talk about how to improve it. If it drops way down you might want to do something, but a little dip while you're getting started is A-OK.
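As a sketch of what "coverage as feedback" might look like as a non-blocking CI step (Python; the thresholds and the previous-build percentage are assumptions):

    # Hypothetical CI step: compare coverage against the previous build
    # and leave feedback instead of failing the pipeline.
    def coverage_feedback(previous_pct, current_pct, hard_floor=30.0):
        delta = current_pct - previous_pct
        if current_pct < hard_floor:
            return "FAIL: coverage %.1f%% is below the floor" % current_pct
        if delta < 0:
            return "NOTE: coverage dropped %.1f points - worth discussing in review" % -delta
        return "OK: coverage %.1f%% (%+.1f)" % (current_pct, delta)

    print(coverage_feedback(62.0, 59.5))  # NOTE: coverage dropped 2.5 points ...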
Assuming you've covered the trauma that got you hired, increasing test coverage is best done on an as-worked basis. If a developer is writing new code, it gets tests. If a developer is modifying old code, it gets tests to (at least) prove that the modifications work, and ideally to prove that they don't break the old functionality either.
You may come across old code that literally can't be tested. That's a good time to refactor that code. If people are scared of refactoring because it might break, point out that that's exactly what tests are for. Try to pull out to a level where you can test: if you can't test a unit, test the class; if you can't test the class, test the package. Then go back in and start reworking. You have to do it someday.
Oh, no, we'll be replacing the Fizzwangle with a new Buzzshooper implementation soon; There's no need to take the risk of refactoring for testability.
This is a lie. Even if they mean it truthfully, it's a lie. Buzzshooper isn't coming any time soon. Refactor that shit.
Tests Are Code, Code Is Tests
Your tests need to be treated like high quality code. Use all the abstractions you use when writing code, like inheritance, polymorphism, modularisation, composability.
Look at techniques like the Page Object Model for front end testing. Your test code should restrict implementation detail knowledge (eg, element locators) to the least number of places, so that changes are easy to implement.
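A rough sketch of the Page Object Model using Selenium's Python bindings (the URL and element IDs here are invented) - the test itself knows nothing about locators:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """Page object: the ONLY place that knows the login page's locators."""
        URL = "https://example.test/login"  # hypothetical URL

        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get(self.URL)
            return self

        def log_in(self, username, password):
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()

    # The test reads as intent; if a locator changes, only LoginPage changes.
    driver = webdriver.Chrome()
    LoginPage(driver).open().log_in("test-user", "s3cret")
    driver.quit()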
Oh, and also, your Code is Code. Learn about, then help your teams write, code for testability and tests for code-ability. Structure your tests and app so you can test in parallel, reliably, as fast as possible:
Give HTML elements unique, simple IDs
Write tests that test a single thing
Bypass complicated test setup by doing things like pre-populating databases
Log in once, then use session management to avoid doing it again
Use data generators to create unique test data (including logins); a small sketch follows this list
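As promised in the last item, a data generator can be tiny (a Python sketch; the qa- naming scheme is arbitrary):

    import uuid

    def unique_login():
        # Unique per call, so parallel tests never collide on usernames.
        token = uuid.uuid4().hex[:8]
        return ("qa-%s" % token, "qa-%s@example.test" % token)

    username, email = unique_login()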
Other Resources
Check out past conference talks like SauceCon Online.
Testing Talks Online has some great discussions and is the closest thing I've found to a real-life meetup during Covid.
There's also a lot of great content over at Ministry of Testing.

Planning a requirements gathering session using Agile [closed]

We are planning on introducing Agile into our development process (a shift from the waterfall we've been using so far). We are leaning towards a hybrid model in which the requirements gathering session is comprised of a business analyst, subject matter experts, a technical person and a user interface person. The plan is to create user stories that the development team can use in their agile process with 1-month sprints.
Has anyone had experience with a hybrid model? How has it worked for you so far?
The plan is to create user stories that the development team can use in their agile process with 1 month sprints.
Some remarks:
1-month Sprints are IMHO too long, especially for an adoption; I prefer to use 2 to 3 week Sprints. During an adoption, shorter feedback loops give you the opportunity to inspect and adapt more frequently, and since you are experimenting, this is in general appreciated.
I don't really understand what is so hybrid in your requirements gathering session, as long as the goal is not to create the "final" list of fine-grained Product Backlog Items in one shot (a backlog typically has a pyramidal structure with fine-grained items at the top - for the upcoming iterations - and coarse-grained items at the bottom). Having story-writing workshops ahead of each iteration is a common practice.
PS: While I respect Péter's opinion, I have a slightly different one. I consider Scrum (we're talking about Scrum, right?) a minimal and finely balanced framework and recommend sticking as close as possible to doing Scrum by the book. Sure, the goal is not to "be Scrum" but to deliver working product increments. But unless you have someone experienced with Scrum on the team, you (as an organization) are not really qualified¹ to alter the framework (or to understand the impacts) and might not get all the benefits. Scrum is flexible; there aren't two similar Scrum implementations. But dropping a part of the framework is not the same as being flexible.
¹ I often introduce the Shu Ha Ri progression model (which roughly means learn - detach - transcend) for agile adoption. From the C2 wiki:
As the beginner starts to learn, Shu gives them structure. It forces them to adhere to the basic principles (...). Since the beginner knows very little, they can only progress by slavishly adhering to these principles (...).
As the beginner gains experience, they naturally will wonder why?, how?, is there something better? Ha... the separation (much softer word than break) is the experimentation done around the principles... first straying only a little and then more and more as these ideas are tried against the reality of the world.
As the experiments of the Ha stage continue, bit by bit, the successes are incorporated into daily practice... we look for opportunities and use the patterns we have learned and tried out that closely fit those opportunities. This Ha/Ri stage is what makes an art the 'property' of the practitioner rather than the teacher or the community. Eventually, you are able to function freely and wisely.
I'm certainly not saying that one must stay at the Shu phase (the goal is beyond the first level), what I'm saying is that learning new ways of working takes time, don't ignore practice. As Ron Jeffries once said "They're called practices for a reason... You have to have done them. Practice makes perfect."
Update: (answering a comment)
One of the decisions we would like to take is the role of each person in the 'Product Owner' team.
Just to be clear: there should be only ONE Product Owner. He can of course work with a team but, still, there should be a single authoritative voice for the team. If I rephrase, there is no Product Owner Team.
For ex: What would the role of a technical person be?
Well, for me the technical person has no role to play in this team (unless he is there to train or support people in writing stories, but the ScrumMaster should typically do that). Writing stories means capturing the essence of business-oriented features; there is no real need for a technical point of view at this stage. Technical complexity (or even feasibility) will be included later, in the estimation.
It seems to me that the end result of the requirements phase would be user stories that the developers will use in the iterations. Will the technical person be estimating the tasks? Traditionally, we've had the programmers estimate their own tasks
People doing the work should estimate the work (you can't expect a team to commit to something if someone else estimates the work for them). In other words, the team should estimate stories. On top of that, experience shows that 1. collective estimation works better than individual estimation and 2. we are better at relative estimation. So my recommendation would be to estimate the size and complexity of stories relatively, using story points/t-shirt sizes/unit-less points, and to do collective estimation during planning poker sessions. This has worked very well everywhere I've used it.
One of my colleagues (I work for a company which consults in agile working) has written several blogs about this separation between the requirements gathering and the development process. He describes how this can work very well in practice.
So far I have had experience with hybrid models only :-) None of the agile projects I have worked on so far implemented any Agile methodology strictly by the book. You needn't either.
The point is, any methodology is just a starting point / a collection of ideas you can use to work out your own process, tailor made to the specific project, team and circumstances.
Start with a process which looks good to you, then see how it works in practice. Keep regular retrospectives at the end of each iteration to assess how things are going, what worked in the last iteration and what didn't, and how could you improve things further. Then implement the most important ideas in the next iteration. In other words, develop the development process itself in an agile way :-)
Update: anecdotes about the requirement process
As I write this, I realize you may not get much useful info out of it... but at least it shows you that projects and processes vary a lot.
In one project, we had a fairly strict Scrum process, with a product backlog, although we didn't have a real customer: the product was new, and the prospective users didn't yet know it existed. Also, it was a fairly specific and standardized domain where our company had a lot of experience. At the time I was part of the team (this was before the first release) we didn't really have much formal requirements gathering, because many of the key requirements were imposed on us by a standard. On top of that, we had some ideas of our own about how to make the product stand out from the crowd.
In another project, we loosely had a Scrum process, but our sponsors and users did not really know about it, so we were struggling quite a bit. The "requirements gathering" was rather informal in that the product was huge and different people / subteams were assigned to different areas, working fairly independently of each other. Each subteam had their own contact(s) to discuss the requirements with, and the contacts were geographically separated - we rarely saw any of them face to face, so most of the communication happened via email, using lengthy Word docs. To top it off, we had a team of domain experts who were often in wild disagreement with the users regarding the concrete requirements, yet were not very communicative. So the requirements process often consisted of reading lengthy documents containing obscure mathematical stuff, then other lengthy documents containing GUI requirements, then trying to figure out how to bring the two together... then discussing the requirements with the domain expert, who briefly announced that it was a piece of sh*t, while we tried to tease some more useful and concrete improvement ideas out of him... then rewriting the requirements doc according to our latest understanding and the expert's comments, and sending it back to our contact person... then repeating from square one.
In our current project, we again have many users scattered around a large part of the globe. However, at least our IT management is more knowledgeable about SW development and agile processes. We work on a large legacy system, which was in pretty bad shape a couple of years ago - so maintenance and stabilization are a large part of our day-to-day work, and new requirements take less than half of our time on average. When we have one, though, we usually hold preliminary estimation meetings where we try to come up with a crude estimate of how many person-days the project is going to take. Then later our business analyst works out more and more details with the stakeholders, and our team works on filling out the technical details.
It seems to me that if you label the business analyst, subject matter experts, technical person and user interface person as "the product owner team", you really haven't deviated from "pure" agile.
That said, "pure" agile is somewhat of a misnomer because most agile advocates will tell you that the #1 or #2 selling point is its ability to adapt to the business processes and corporate culture of your existing organization.
The critical success factor might be having that product owner team, and all stakeholders really, invest in participating in some of your dev team's agile processes (showing up for demos, being accessible for questions during the sprint, etc).
Edit:
This quote from Wikipedia documents the very simple role of the Product Owner:
The Product Owner represents the voice of the customer. He/she ensures that the Scrum Team works with the “right things” from a business perspective. The Product Owner writes customer-centric items (typically user stories), prioritizes them and then places them in the product backlog.
Scrum isn't meant to enforce processes on how the Product Owner gets their job done. It's only the interface between the Product Owner and the Team (sprint planning and sprint review) that Scrum tries to outline.
Could we call this "building the backlog", as that is really what this is, to my mind? The idea is to get those top-priority pieces and then work from there. I have seen a few different Agile processes and some worked better than others, but the key is how good the buy-in is from those involved in the process.
I'd also agree that 1 month is too long for a sprint. 2-week sprints seem about right to my mind, though I have seen slightly longer and shorter sprints that also work. Another question is how big the team and the projects are, as stuff that may take years may not be easily done. I say this as someone who survived a project that lasted over a year and, many sprints and demos later, finally finished successfully.
I'd likely consider the technical person to be the one who has to keep an eye on the big picture and understand what is reasonable to do and what is unreasonable - e.g., having the system read my mind to know what I want done before I wake up in the morning, without my writing out anything other than simply thinking it, would be unreasonable. Don't forget that the stories will develop into more cards, as the stories are just a high-level view of the end result, which usually doesn't cover how easy it is, how much time it will take, and a few other aspects.
For the sprints themselves, developers should estimate how long it takes to do various tasks. Determining the priority of stories, though, isn't part of what the developers do. The requirements gathering session could also be seen as building a project charter, so that there is a timeframe for the project as a whole, objectives, and other high-level details that should be stated at the beginning.

Misusing the term "Code Freeze" [closed]

I'm just curious if the community considers it acceptable to use the term "Code Freeze" for situations where we stop development except for testing and fixing bugs.
Development Situation
We're just finishing up our third and final sprint, which will be followed by a "code freeze" and 2 weeks of Q/A testing. It is a big release, and some components' development has spanned all 3 sprints. Historically, even though we call it a "Code Freeze", we still commit code to fix bugs.
Problem
Every release I try to correct my manager and co-workers: we should be calling it a "Feature Freeze", because it's pretty obvious that we're going to find bugs and commit code to fix them as soon as we start heavy testing. But they persist in calling it a "Code Freeze". Sometimes we even have known bugs and still declare a "Code Freeze".
The Wikipedia definition seems to agree with me here
Analysis
I suspect that calling these situations a "Code Freeze" is some sort of willful doublethink to provide false confidence to stakeholders. Or we are pretending to be in a "Code Freeze" situation because, according to Scrum, after every sprint we should have a shippable piece of software, and the expectation is that we are following Scrum. So we must call it what Scrum expects instead of what it really is.
Conclusion
Am I over-analyzing this? I just find it unhealthy to ignore the realities of a situation; we should either give up calling it something it's not, or fix the root problem. Has anybody else had similar experiences with code freezes?
Am I over-analyzing this?
Yes.
Well, probably. Realistically, you should be thinking twice before making any code changes after the freeze. Bugs should have to pass some severity test, more so if the fix requires potentially dangerous changes to the codebase or invalidates the testing that's been done. If you're not doing that, then yeah, you're just deluding yourselves.
But if you're not gonna fix any bugs, then freezing the code is kinda pointless: just build and ship it.
Ultimately, what matters is that you all understand what's meant by the label, not the label itself. One big happy Humpty-Dumpty...
We use the term "Feature Complete". All the features are coded and functional, but we're heading into a test pass to confirm that there are no bugs. If there are bugs, we will find them, fix them, and retest. After we're satisfied with the result, we're "Code Complete".
I think, actually, that they are more correct in their interpretation. A feature freeze, to me, would be a halt to introducing new features, but features currently under development could continue to completion, or you could schedule some refactoring work to remove technical debt without generating new features. A code freeze brings a halt to all new development, including refactoring - the only new code allowed is that which fixes bugs found during QA. The latter seems to be what your team is doing.
Some people who get into adaptive and agile engineering methodologies like Scrum may not realise what they have gotten themselves into.
The reason for agile engineering is to release to your customers whatever is usable now and gradually build up its usability and features.
If your project is projected to complete in 18 months, but you could have something increasingly usable every 2 months - why not release features every two months rather than wait for the grand holy day 18 months away, since either way the project would still last 18 months?
Your customers' requirements might change, so giving your customers the opportunity to change their minds frequently, before it's too late, results in exhilarated customers.
Someone might release an open-source version of one of your modules 10 months from now, and then you don't have to do much else but integrate that module.
Therefore scrummers, or at least scrum masters and/or project managers/architects, are required by the dynamics of scrum to modularise the project... and modularising is not good enough; you must granularise it.
You have to granularise your modules to the right size and provide a contract-interface specification for each, so that changes within a module are managed within that module. If a module, by itself or due to the dependence of other modules, is unable to satisfy a contract-interface, you have to code-freeze it so that you can broadcast contract-interface version 1, letting other teams continue, albeit with fewer than expected features in the next general product release.
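To illustrate what a contract-interface can look like in code (a sketch in Python; the module and method names are invented), the point being that other teams depend on the frozen interface, not the implementation behind it:

    from abc import ABC, abstractmethod

    class ScoreServiceV1(ABC):
        """Frozen contract for the scoring module, version 1.
        Other teams depend on this interface, not on the implementation."""

        @abstractmethod
        def record(self, player_id: str, points: int) -> None: ...

        @abstractmethod
        def leaderboard(self, top_n: int) -> list: ...

    # The scoring team can keep reworking internals toward v2,
    # as long as the v1 contract keeps being satisfied.
    class InMemoryScoreService(ScoreServiceV1):
        def __init__(self):
            self._scores = {}

        def record(self, player_id, points):
            self._scores[player_id] = self._scores.get(player_id, 0) + points

        def leaderboard(self, top_n):
            ranked = sorted(self._scores.items(), key=lambda kv: -kv[1])
            return ranked[:top_n]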
A code freeze is a code freeze.
If your code freezes are experiencing frequent thawing delays, your scrum master and product architect are not communicating or not doing their jobs properly. Perhaps there's no point in trying to impress management, or acquiesce to them, with claims of using some industry fad called agile programming. Or management needs to hire an architect and scrum master who are able to design and granularise the project within the skills of the team, as well as the expectations of the customers and the technological constraints of the project.
I think there are management elements, and their scrum masters, who do not realise how crucial a good architect is even in a scrum environment, and who refuse to hire one. A good architect who is able to listen and work with the team is invaluable to the scrumming process, because he/she has to constantly adapt the architecture to changing granularities and expectations.
I also think there are management elements, and their scrum masters, at the other end of the programming universe who, due to bad experiences with longer development cycles like waterfall, think that scrum is meant to produce a product within a month, and that meticulous investigation into cross-module effects is therefore not really necessary. They sit down, wet their fingers in the air, and come up with a great sprint.
If your team is experiencing frequent thawing of code freezes, you might need to code-freeze your whole project, rethink your strategy, and see whether the cause is a refusal to define module contracts that fit the granularity of your modules. Are you defining module contracts at all, so that the features of a stuck module can be pared down to enable other teams or modules to continue?
Do you have a UML strategy that aids in discovering the projected features of a product release, lets you see the effects of a stranded module, and then shows which module needs focus to reach a desired release level? Are you attending scrums and sprints with no UML picture of how advanced or delayed you are, so that you are just bumping along happily or otherwise blindly? Or does your scrum master say to a room of yeas and nays, "hmm... that module seems important" - without actually having a clear picture of which modules are the most strandable in relation to a product release?
A product-release code freeze is achieved by progressive freezing of modules. As soon as a module is completed, a product test is done to ensure that the module satisfies its contract, and that module is code-frozen as, say, version 2.1. Even though work progresses on that module toward 2.2, the project as a whole should depend not on 2.2 but on 2.1. The strategy is to minimise the number of modules whose contracts need to be thawed when a product release is tested, and in case the product release has to scale down its features. If progressive modular freezing does not help your development team... either the product is so complex that your management is under-estimating the number of iterations needed to achieve a proper release, or the modular architecture and strategy needs serious rethinking.
I have worked on a project (waterfall) in which we had feature freeze AND code freeze.
A feature freeze means the beginning of a bugfix period. A new branch was also created for the new version so that we could implement features; i.e., this is the point when the company starts to work on the next version. No new features are implemented on the release branch; only bugs are fixed.
A code freeze comes when QA thinks the product is in a releasable condition (i.e., they do not know of any severe bugs). The code freeze is announced before a final test cycle (remember, a test cycle might take a week). If the test succeeds, this becomes the released product. If it fails, the new bugs are fixed. These checkins are supervised by architects and managers, and the risk of every line is practically documented. Then the test cycle is started again.
Summary: After feature freeze you can only check in bugfixes. After code freeze you can only check in in exceptional cases.
Yeah, it's overthought.
Yeah, it's a misnomer.
If the code isn't broken/messy you wouldn't touch it, and if it is, then you will fix it - exactly the same situation as when you're not in a code freeze. Yes, it's really a "requirement freeze" or "integration break", which are anti-patterns. It is a point at which to stop including new features in the next release, which is valuable on the sales/marketing/customer-support side of things. But they should probably call it "prerelease".
What ought to happen is that there are always a few releasable versions of the system in version control, and the company picks one to ship.
The Lean name for "code freeze" is "waste."
In your comment, you mentioned the word 'sprint'. That tells me you may be using the Scrum (or another Agile) methodology. In Scrum you hardly 'freeze' anything :) Flexibility, risk identification and mitigation, and above all (in engineering terms) continuous integration matter a lot in Scrum.
Given this, the team should be cross-functional and the code should be continuously integrated. As a result, you may not have things like a 'code freeze'. You just have a releasable product at the end of the sprint. It should have been tested continuously, and you should already have received the bug reports and fixed them.
Well, this is theory. However, good scrum teams aren't too far from theory, as scrum is mainly about principles. There aren't too many rules.
I personally wouldn't split hairs over the terminology so much as the intention behind the term. Most certainly, the term is used to identify a stage in the SDLC in your organization. Strictly speaking, Scrum doesn't have a bug-fix phase. If you're dedicating one or more sprints to fixing bugs, then this term can mean "no feature backlog items will be included in the sprint, only bug fixes". This can easily be handled at the sprint planning (and pre-planning) meeting(s), and the team doesn't even have to worry about the terminology. Even better, this terminology/intention doesn't even have to go beyond the Product Owner.
While "Code Freeze" may have a clouded meaning and is, as has been mentioned, more aptly a "Feature Freeze" when considering individual projects/releases it DOES have a place in a larger, integrated deployment where another entity is responsible for packaging and/or deploying multiple software releases from various teams. "Code Freeze" gives them time to make sure the environments are lined up and all packages accounted for. "Code Freeze" also means that nothing but "show stopping" changes are getting in. Everything else would be handled in the next maintenance release.
In a perfect world, scripted testing would have completed before this point, and there would be time allowed for deployment of any last fixes and a retest. I have yet to see this happen at any "globo-corp". The (business) testers test up until and even after deployment, and the "Code Freeze" becomes a signal to them to step up their efforts and log everything that they've been sitting on. In some cases, it's a signal for them to START testing.
Really, "Code Freeze" is just business speak for "Here there be Tygers". ;-)
When we code freeze, the repo is locked; hopefully all the bugs you intended to fix are fixed, and the testers do a whole 'nother round of testing before branching and building to production. If there are any outstanding bugs scheduled for this iteration, the leads will be breathing down your neck until they are closed out or deemed noncritical and pushed back an iteration. So, yes, it's really frozen.

Where to find good test case templates/examples? [closed]

I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.
At the moment, after a feature freeze, testers "click through the application" before deployment, but there is no formal specification of what needs to be tested.
First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):
user registration form
country dropdown (are countries fetched from the server correctly?)
password validation (are all password rules observed, is user notified if password is too weak?)
thank-you-for-registration
...and so on. This could also serve as something the client can sign off as part of the requirements before the programmers start coding. After the feature list is complete, I'm thinking about making this list the first column in a spreadsheet which also says when the feature was last tested, whether it worked, and if it didn't work, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list with information on what doesn't work and when it broke.
Secondly, I'm thinking of test cases for testers, with detailed steps like:
Load user registration form.
(Feature 1.1) Check country dropdown menu.
Is country dropdown populated with countries?
Are names of countries localized?
Is the sort order correct for each language?
(Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
Press "OK".
(Feature 2) Check thank-you note.
Is the text localized to every supported language?
This would give testers specific cases and a checklist of what to pay attention to, with pointers to the features in the first document. This would also give me something to start automating the testing process with (currently we don't have much test automation apart from unit tests).
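For instance, the password rules from step 6 could be automated with a parametrized test (a pytest sketch; validate_password is a hypothetical stand-in for the real validation routine):

    import pytest

    def validate_password(password):
        # Stand-in for the real rule set: length, digit, and symbol required.
        return (len(password) >= 8
                and any(c.isdigit() for c in password)
                and any(not c.isalnum() for c in password))

    @pytest.mark.parametrize("password,expected", [
        ("a", False),
        ("bob", False),
        ("password", False),
        ("password123", False),
        ("password123#", True),   # only the last password should be accepted
    ])
    def test_password_rules(password, expected):
        assert validate_password(password) == expected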
I'm looking for some examples of how others have done this, without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm looking for a simple way to make the client agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features are working, and to report this to the programmers.
This is mostly internal testing material, which should be a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features and customer tickets in other ways (JIRA); this would basically be testing documentation. This is the lifecycle I had in mind:
PM makes a list of features. Customer signs it. (Document 1 is created.)
Test cases are created. (Document 2.)
Programmers implement features.
Testers test features according to test cases. (And report bugs through Document 1.)
Programmers fix bugs.
GOTO 4 until all bugs are fixed.
End of internal testing; product is shown to customer.
Does anyone have pointers to where some sample documents with test cases can be found? Also, all tips regarding the process I outlined above are welcome. :)
I've developed two documents I use.
One is for your more 'standard' websites (e.g. a business web presence):
http://pm4web.blogspot.com/2008/07/quality-test-plan.html
The other one I use for web-based applications:
http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html
Hope that helps.
First, I think combining the requirements document with the test case document makes the most sense, since much of the information is the same for both, and having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints of them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763 - if you add steps to test, the expected results of the test, and edge/bound cases, you should have a pretty solid requirements spec and testing spec in one.
For the steps, don't forget to include an evaluation step, where you, the testers, developers, etc. evaluate the testing results and update the requirements/test doc for the next round (you will often run into things that you could not have thought of and should add to the spec... both from a requirements perspective and a testing one).
I also highly recommend using mindmapping/work-breakdown-structure to ensure you have all of the requirements properly captured.
David Peterson's Concordion website has a very good page on techniques for writing good specifications (as well as a framework for executing said specifications). His advice is simple and concise.
As well, you may want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!
You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.
meade, above, has recommended combining the spec with the tests. This is known as Test-Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and it cuts down the amount of work.
You also need to think about unit tests and automation. This is a big time-saver and quality booster. The GUI-level tests may be difficult to automate, but you should make the GUI layer as thin as possible and have automated tests for the functions underneath. This is a huge time-saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
If your application is big enough to need it, then specify modules using TDD, and turn those module tests into automated tests.
An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.
Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also make more generalizations. Since the same sorts of bugs tend to crop up over and over again this will give you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
Make use of GUI and web automation. Selenium, for example. A lot can be automated, much more than you think. Your user registration scenario, for example, is easily automated. Even if tests must be checked by a human - for example, cross-browser testing to make sure things look right - the test can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard-to-automate bugs and pass that on to QA, rather than taking on the time-consuming, and often flawed, task of writing down instructions. Save the tests as part of the project. Give them good descriptions of each test's intent. Link them to a ticket. Should the GUI change so that a test doesn't work any more - and it will happen - you can rewrite the test to cover its intention.
I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI or HTML or formatting) from functionality (what it does) and automate testing of the functionality. Have functions which generate the country list, and test those thoroughly. Then a function which uses that to generate HTML or AJAX or whatever, and you only have to test that it looks about right, because the function doing the actual work is well tested. User login. Password checks. Emails. These can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
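For example, here's a hedged sketch (Python; get_countries is a hypothetical stand-in for however the real app produces the list) of testing the country-list logic directly, with no browser involved:

    def get_countries(locale="en"):
        # Hypothetical stand-in: the real app might read from a database
        # or a gettext catalog, keyed by locale.
        data = {
            "en": ["Austria", "Brazil", "Canada"],
            "de": ["Brasilien", "Kanada", "Österreich"],
        }
        return data[locale]

    def test_countries_localized_and_sorted():
        for locale in ("en", "de"):
            countries = get_countries(locale)
            assert countries, "country list must not be empty"
            # Naive codepoint sort is good enough for this sketch; real
            # locale-aware collation would use locale.strxfrm as the key.
            assert countries == sorted(countries), "must be sorted per locale"

    if __name__ == "__main__":
        test_countries_localized_and_sorted()
        print("country list checks passed")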

Engineer accountability and code review processes [closed]

In your “enterprise” work environment, how are engineers held accountable for performing code inspections and unit testing? What processes do you follow (formal methodology or custom process) to ensure the quality of your software? Do you or have you tried implementing a developer "signoff" sheet for deliverables?
Thanks in advance!
Update: I forgot to mention we are using Code Collaborator to perform our inspections. The problem is getting people to "get it" and buy into doing them outside of a core group of people. As stalbot pointed out below, it is a cultural change, but the question becomes: how do you change your culture to promote quality initiatives such as reviews/unit tests?
• Our company uses peer code reviews. We conduct them as over-the-shoulder reviews and invite the team’s tester to participate in the meeting to gain a better understanding of the changes. We use source control software that requires a code-review sign-off at check-in - nothing big, just another developer's name confirming they reviewed the code.
• There are definite benefits to code review as several studies have been able to demonstrate. For our company, it was evident that code quality increased as the number of support calls decreased and the number of reported bugs decreased as well. NOTE: Some of the benefits here were the result of implementing Scrum and abandoning Waterfall. More on Scrum below.
• The benefits of code review can be a more stable product, more maintainable code as it applies to structure and coding standards, and it allows developers to focus more on new features rather than “fire-fighting” bugs, and other production issues. There really aren’t any drawbacks if code reviews are conducted “right”. More on the “right way” below.
• Some of the hurdles to overcome while implementing code reviews are the idea that “big brother” is watching me and the idea that not having perfect code means torture and pain. Getting developers to trust each other is difficult sometimes, especially when it involves “pecking order” or the “holier than thou” attitudes and putting your hard work under a microscope. Trust is the key to resolving these issues. A developer must trust that they will not be punished by peers or management for mistakes in code. It happens to everyone. Make a note of the issue, get it resolved and move on.
Scrum
One of the benefits of using the Scrum methodology is that a development cycle (“sprint”) is short. The time-frame of the sprint is determined by what works best for your organization and will need some trial and error, but really shouldn't be longer than four-week iterations. The benefit is that it requires the developers to communicate daily and to raise problems early on in the project. This was initially adopted by our development department and has spread to all areas of our company, as the benefits of scrum are far-reaching. For more information, see: http://en.wikipedia.org/wiki/SCRUM or http://www.scrumalliance.org/ . As the development iterations are smaller, the code review process reviews smaller pieces of code, making the review more likely to find problems than hours or days of formal reviews.
“Right Way”
Code Reviews done the “right way” is completely subjective. However, I personally believe that they should be informal, over-the-shoulder reviews. All of the participants in a review should avoid personally attacking the person being reviewed with statements such as “why did you do it that way?” or “what were you thinking?” etc. These types of comments diminish the trust between peers, leading to animosity, hours of arguing over the best/right way to code a solution. Keep in mind that developers do not think or code exactly the same, and there are many solutions to a problem.
Just a little clarification on over-the-shoulder reviews: these can be conducted via remote desktop sharing (pick a flavor here) or in person. However, they shouldn't be limited to developers only. We typically invite our entire scrum team, which consists of two developers per team, a tester, a documentation person, and the product owner. All non-developers are there to gain a better understanding of the changes or new functionality. They are free to ask questions or provide input, but not to make coding decisions or comments. This has been effective, as certain questions will be asked that may change the direction of the project because the initial requirements missed a scenario - but that is what agile is all about: change.
Suggestion
I would highly recommend researching scrum and code reviews, before mandating them. Create the basic rules for each and implement them as part of your culture to achieve a better quality product. It must become part of your culture so that it is part of a natural process and integrated at all levels, as it is a paradigm shift from poor quality, missed deadlines and frustration to better quality products, less frustration, and more on-time deliverables.
If you want to ensure that every changelist gets reviewed, before checkin, then you could have your source control tool reject unreviewed checkins. For example, a trigger could reject checkins without "CodeReview: " in the checkin comment. Although people could still lie, they could also be held accountable.
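For illustration, such a trigger could look like this as an SVN pre-commit hook (a Python sketch; svnlook and the hook's two arguments are standard SVN, the rest is assumption):

    #!/usr/bin/env python
    # pre-commit hook: reject any checkin whose log message lacks "CodeReview:".
    # SVN invokes hooks/pre-commit with: REPOS-PATH TXN-NAME
    import subprocess
    import sys

    def main():
        repos, txn = sys.argv[1], sys.argv[2]
        log = subprocess.check_output(
            ["svnlook", "log", "-t", txn, repos]).decode("utf-8")
        if "CodeReview:" not in log:
            sys.stderr.write(
                'Commit rejected: add "CodeReview: <reviewer>" to the log message.\n')
            return 1  # nonzero exit aborts the commit; stderr is shown to the user
        return 0

    if __name__ == "__main__":
        sys.exit(main())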
If you want to ensure that every changelist gets reviewed, after checkin, then you could see if Code Collaborator will play nicely with your source control system and automatically make a review task after each checkin (push or pull; whatever works). After that, use whatever "polite annoyance" features Code Collaborator has, to make sure reviews actually get done.
If you want people to review only some checkins, not all checkins, then good luck with that.
We have a pretty cool setup. Coders are expected to test their code before check-ins to ensure that it doesn't break the build, and to write tests where they make sense, but high coverage isn't required.
Complex methods are expected to be commented.
At the end of phases code is reviewed by the whole team.
Pair programming. Work items have a required 'collaborator' field: the person you paired with for the work
We lean heavily on ITIL concepts. While we don't need the full scale ITSM that ITIL provides, we have implemented processes that draw from some of the best practices in ITIL, specifically in the areas of Change Management and Release Management.
Code reviews are part of our RM strategy. As a change or new piece of code makes its way through our RM process, a lot of eyes look at it. Ultimately the Release Manager makes the call on approval or rework, and all of this is documented (we use TFS and SharePoint). Formal code reviews are held by the Release Manager and the technical team he selects. The primary developer for a release candidate is held accountable for adherence to standards, functionality, and a verification of a completed test plan. If the quality standards aren't met, the deliverable is rejected and the project schedule is updated to reflect the rework.
Yes, this is all very heavy. I work in government and we have complex laws to follow, specifically in the areas of taxes, ADA compliance, and so on.
We use three basic rules
1) The developer is responsible for fixing bugs in code when unit tests don't exist. In cases where there is a test, the person breaking the test is responsible for fixing it.
2) Code reviews. There are some code review smells that are a good warning sign, over defensiveness and blame redirection being the two most common.
3) NO EMAILING CODE, JARs or config files. Everything is in the scm.
To create the culture, first try to define your standards and values, and most of all, make them known.
Then hire people who believe in them or who are able to adapt to them. Don't hire someone who has no connection at all with your company values.
Make sure that those who respect these values and show improvement are "rewarded" and "properly" recognized and seen as models. Don't forget that for many people it's not all about the money.
Don't hesitate to take appropriate measures against those who do not fulfill their responsibilities, but make sure they know what those responsibilities are. And hold them accountable for their deeds.
Allow people time to get used to any new responsibility.
Making a change in culture is a big deal. Still, there are some ways to go about it:
Create awareness about code review and the importance of a code review tool. This can be done through training sessions.
Motivate people: give some reward for code reviews.
Change the process: make sure that code reviews happen, and happen properly. This can be done with a checklist and by making reviews part of the release process.
Do not try to change everything at once. Introduce changes slowly. Create a small group to observe and discuss each change to the code review process.
Provide solutions instead of creating problems. The process should not be overhead; it should come naturally. Provide solutions to people's problems with the process.