What is the "dogfood" build? - naming-conventions

What does "dogfood build" mean? I know about alpha and beta versions, but I have never heard of a dogfood build. Who (or which dogs) is this build for?
For example here: https://github.com/JakeWharton/u2020/blob/master/build.gradle#L29
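(For context: at that link the project defines an extra, internal-only build type. A minimal sketch of the same idea in Gradle's Kotlin DSL might look like the following; this assumes the Android Gradle plugin is applied, and every name here is illustrative rather than copied from u2020.)

    // build.gradle.kts - hypothetical "dogfood" build type for internal use
    android {
        buildTypes {
            create("dogfood") {
                applicationIdSuffix = ".dogfood"  // installs alongside the production app
                versionNameSuffix = "-dogfood"    // so staff can tell which build they run
                isDebuggable = true               // extra diagnostics for internal users
            }
        }
    }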

Dogfooding is when a team working on a project brings that project home to use in their own everyday life, as a means of LIVE QA TESTING.
The developers get to see how the project runs from a QA point of view in the setting they are comfortable in.
This method of testing helps to catch many problems that otherwise may have taken much longer to find using standard quality assurance practices.
Some links with more info:
wikipedia: Eating your own dog food
The Google Test and Development Environment - Pt. 2: Dogfooding and Office Software
Geosoft: How dogfooding has helped us improve quality assurance

"Dogfood build" designates builds produced for internal use within the company, typically for finding bugs in new features that would be to embarrassing to show to an external customer.
The name comes from "Eating your own dog food" colloquialism, which refers to companies using their own products for its internal operations. The term is believed to have originated with Microsoft in the 1980s.

Related

Development/QA/Production Environment

I am the QA Test Lead for a large enterprise software company, with a team of over 30 developers and a small team of QA testers. We currently use SVN for all our code and schema check-ins, which are then built out each night after hours.
My dilemma is this: all of development's code is promoted from their machines to a single branch in the central repository on a daily basis. This branch is our production code for our next software release. Each day when code is checked in, the stable branch is destabilized by that new piece of code until QA can get to testing it. It can sometimes take weeks for QA to get to a specific piece of code to test. The worst part of all of this is that we identify, months ahead of time, which code is going to go into the standard release and which code will be bumped to the next branch, which has us coding all the way up until almost the actual release date.
I'm really starting to see the effects of this process (put in place by my predecessors), and I'm trying to come up with a way, one that won't piss off development, whereby they can promote code to a QA environment without holding up another developer's piece of code. A lot of our code has shared libraries, and as I mentioned before, it can sometimes take QA a while to get to a piece of code to test. I don't want to hold up development in a certain area while that piece of code is waiting to be tested.
My question now is: what is the best methodology to adopt here? Is there software out there that can help with this? All I really want to do is ensure QA has enough time to test a release, with no new code going in until it's tested. I don't want to end up on the street looking for a new job because "QA is doing a crappy job", according to a lot of people in the organization.
Any suggestions are greatly appreciated and will help with our testing and product.
It's a broad question which takes a broad answer, and I'm not sure I know all it takes (I've been working as a dev lead and architect, not as a test manager). I see several problems in the process you describe, each requiring a solution:
Test team working on intermediate versions
This should be handled by working with the dev guys on splitting their work into meaningful iterations (called sprints in agile methodology) and delivering a working version every few weeks. Moreover, it should be established that features are implemented by priority. This has the benefit of keeping the "test gap" fixed: you always test the latest version, which is only a few weeks old, and the devs understand that any problem you find there is more important than new features for the next version.
Test team working on non-stable versions
There is absolutely no reason why the test team should invest time in versions which are "dead on arrival". Continuous integration is a methodology by which "breaking the code" is found as soon as possible. It requires some investment in products like Hudson, or a home-grown solution, to make sure build failures are noticed as they occur, and some "smoke testing" applied to each build.
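(To make "smoke testing" concrete, here is a minimal sketch of the kind of check a CI server such as Hudson could run after every build. It's written in Kotlin with JUnit 4; the health-check URL is made up for illustration.)

    import org.junit.Assert.assertTrue
    import org.junit.Test
    import java.net.HttpURLConnection
    import java.net.URL

    class SmokeTest {
        // Assumes the freshly built application has already been deployed locally;
        // the endpoint below is a hypothetical example.
        @Test
        fun buildIsNotDeadOnArrival() {
            val conn = URL("http://localhost:8080/health").openConnection() as HttpURLConnection
            assertTrue("Build failed its basic health check", conn.responseCode == 200)
        }
    }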
Your test cycle is long
Invest in automated testing. This is not to say your testers need to learn to program; rather, you should invest in recruiting or growing people with the knowledge of, and passion for, writing stable automated tests.
You choose "coding all the way up until almost the actual release date"
That's right; it's a choice made by you and your management, favoring more features over stability and quality. It's a fine choice in some companies that need to get to market ASAP or keep a key customer satisfied, but it's a poor long-term investment. Once you convince your management that it is a choice, you can stop making it when it's not really needed.
Again, it's my two cents.
You need a continuous integration server that can automate the build, testing, and deployment. I would look at a combination of Hudson, JUnit (with DBUnit), Selenium, and code-quality tools like Sonar.
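(As a sketch of what the Selenium piece might look like: the snippet below, in Kotlin with Selenium WebDriver, loads a page and checks a landmark. The URL and expected title are hypothetical, and it assumes the selenium-java dependency plus a local Firefox install.)

    import org.openqa.selenium.firefox.FirefoxDriver

    fun main() {
        val driver = FirefoxDriver()
        try {
            // Drive the app the way a tester would: load the page, check a landmark.
            driver.get("http://localhost:8080/login")
            check("Login" in driver.title) { "Unexpected page title: ${driver.title}" }
        } finally {
            driver.quit()  // always release the browser, even on failure
        }
    }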
To ensure that the code QA is testing stays fixed and is not constantly changing, you should make use of tags. A tag is like a branch except that, by convention, its contents are immutable: once a set of files has been checked in and tagged, you do not commit on top of them. This way QA has a stable version of the code to work with.
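(For illustration: in Subversion a tag is created with a cheap, server-side copy. The sketch below shells out to the svn command-line client from Kotlin; the repository URL and tag name are made up.)

    fun main() {
        // 'svn copy' between two repository URLs creates the tag entirely server-side.
        // By convention (often enforced by a pre-commit hook), nobody commits to tags/,
        // which is what keeps the tagged code stable for QA.
        ProcessBuilder(
            "svn", "copy",
            "https://svn.example.com/repo/trunk",
            "https://svn.example.com/repo/tags/qa-cycle-1",
            "-m", "Tag trunk for the QA test cycle"
        ).inheritIO().start().waitFor()
    }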
Using SVN without branching seems like a wasted resource. They should set up a stable branch and a test branch (i.e. the daily build). When code is tested in the daily build, it can then be promoted to the development release branch.
As Albert mentioned, depending on what your code is, you might also look into automated tests for some of the shared libraries (which, depending on where you are in development, really shouldn't be changing all that much; otherwise your dev team is doing a crappy job of organization, IMHO).
You might also talk with your dev team leads (or whoever manages them) and discuss where they see QA fitting in and what QA can do to help them best. Ask: does your dev team have a set cut-off time before releases? Do you test every single line of code? Are there places where you might be spending too much detailed time testing? It shouldn't all fall on QA; QA and dev need to work together to get the product out.

Has anyone come across the term 'friendly user testing' in software dev?

I'm reading a document, and the term appears in the context "...after several weeks of friendly user testing...".
Friendly User Testing usually refers to 'user testing' performed by people outside the development & QA team, but known to the organisation that's conducting the testing.
We've seen it in two cases:
Predominantly, organisations using their own staff (who should be friendly :-) to eat their own dog food (e.g. the CEO, head of sales, the apprentice that joined last Monday... but usually not folk from inside the development team)
And increasingly, organisations recruiting 'paid-for' or 'interested' folk to act as the test bunnies (in the nicest sense of the phrase) for soon-to-be-launched products (e.g. O2's October 2011 VoIP trial with 1000 invited users)
It's 'friendly' because if the users find crashes, bugs, or issues, they are expecting it. And in our case, we're also probably monitoring their usage (with their permission) to fill out a 'diary', whilst performing automated field tests to drive tasks from friendly users who might not be as dedicated as we'd like (and as they originally intended)...
"Friendly user testing" was used a lot on a recent conference call I was on, dealing with a project where we had started doing internal UAT. The context it was used in seems to indicate to me that the "friendly" part of "friendly user testing" is to differentiate between "our" testers, e.g. he's a "friendly" (war), and "their" testers, e.g. Charlie's in the rice patties.
Friendly user testing allows minimal client impact, because "our guys" supposedly know what they're doing, and what they should and shouldn't do, in an environment that contains or touches client production data and into which a new piece of software at the end of its development cycle is being introduced. (The existing data and software system, plus the addition of a new software system.)
Edit:
I should add that when I say "our" testers I mean those not in the original QA testing phase, but testers that will be using the system in production later... hence "internal" UAT.
A friendly user test (FUT) is a user test of software carried out by people outside the development team, but known to the organization that performs the test.
In a FUT, a person agrees to test, and report the bugs in, an application in a deployed but not yet finalized version, before it is made generally available. The voluntary user of the new service agrees to collect information valuable to the developers. The inconveniences encountered are transformed into active participation in resolving the faults found.
Perhaps the documentation is trying to differentiate testing that tries to break an application from testing that is much friendlier, i.e. testing that is more superficial.
Most developers when testing their own code don't initially test to break the application, they tend to follow the same patterns of use each time - patterns which they will have cleared bugs from. Perhaps this is friendly testing? This is often why demos fail (curse of the demo) - as users new to an application follow fresh patterns which haven't been cleared.
Otherwise it could be a typo, and it should read user-friendly testing: testing that is designed to be low-impact on users?
Developers typically have their favorite users; users who ask good questions, suggest good features, and are good communicators.
Those users typically get software that's less than perfect before everyone else, so developers can get friendly feedback before releasing it to the ravenous hordes. Being told gently "I don't like the way your foobar works, I'd rather it do something different" is much better than hearing "Those jerks and their idiot foobar, I can't imagine why they wouldn't do it differently."
These friendly users are perhaps also not trying to break the software -- they might be willing to try new features, but they probably won't try to write exploits should they find bugs.

What is your or your company's programming process? [closed]

I'm looking for process suggestions, and I've seen a few around the site. What I'd love to hear is what you specifically use at your company, or just for you and your hobby projects. Any links to other websites talking about these topics are certainly welcome!
Some questions to base an answer off of:
How do users report bugs/feature requests to you? What software do you use to keep track of them?
How do bugs/feature requests get turned into "work"? Do you plan the work? Do you have a schedule?
Do you have specs and follow them? How detailed are they?
Do you have a technical lead? What is their role? Do they do any programming themselves, or just architecture/mentoring?
Do you unit test? How has it helped you? What would you say your coverage is?
Do you code review? When working on a tight deadline, does code readability suffer? Do you plan to go back later and clean it up?
Do you document? How much commenting do you or your company feel comfortable with? (Description of class, each method and inside methods? Or just tricky parts of the code?)
What does your SCM flow look like? Do you use feature branches, tags? What does your "trunk" or "master" look like? Is it where new development happens, or the most stable part of your code base?
For my (small) company:
We design the UI first. This is absolutely critical for our designs, as a complex UI will almost immediately alienate potential buyers. We prototype our designs on paper, then as we decide on specifics for the design, prepare the View and any appropriate Controller code for continuous interactive prototyping of our designs.
As we move towards an acceptable UI, we then write a paper spec for the workflow logic of the application. Paper is cheap, and churning through designs guarantees that you've at least spent a small amount of time thinking about the implementation rather than coding blind.
Our specs are kept in revision control along with our source. If we decide on a change, or want to experiment, we branch the code, and IMMEDIATELY update the spec to detail what we're trying to accomplish with this particular branch. Unit tests for branches are not required; however, they are required for anything we want to incorporate back into trunk. We've found this encourages experiments.
Specs are not holy, nor are they owned by any particular individual. By committing the spec to the democratic environment of source control, we encourage constant experimentation and revision - as long as it is documented so we aren't saying "WTF?" later.
On a recent iPhone game (not yet published), we ended up with almost 500 branches, which later translated into nearly 20 different features, a huge number of concept simplifications ("Tap to Cancel" on the progress bar instead of a separate button), a number of rejected ideas, and 3 new projects. The great thing is each and every idea was documented, so it was easy to visualize how the idea could change the product.
After each major build (anything in trunk gets updated, with unit tests passing), we try to have at least 2 people test out the project. Mostly, we try to find people who have little knowledge of computers, as we've found it's far too easy to design complexity rather than simplicity.
We use Doxygen to generate our documentation. We don't have auto-generation incorporated into our build process yet, but we are working on it.
We do not code review. If the unit test works, and the source doesn't cause problems, it's probably ok - but this is because we are able to rely on the quality of our programmers. This probably would not work in all environments.
Unit testing has been a god-send for our programming practices. Since any new code can not be passed into trunk without appropriate unit tests, we have fairly good coverage with our trunk, and moderate coverage in our branches. However, it is no substitute for user testing - only a tool to aid in getting to that point.
For bug tracking, we use Bugzilla. We don't like it, but it works for now. We will probably soon either roll our own solution or migrate to FogBugz. Our goal is not to release software until we reach a zero-known-bugs status. Because of this stance, our updates to our existing code packages are usually fairly minimal.
So, basically, our flow usually looks something like this:
Paper UI Spec + Planning » Mental Testing » Step 1
View Code + Unit Tests » User Testing » Step 1 or 2
Paper Controller & Model Spec + Planning » Mental Testing » Step 2 or 3
Model & Controller Code + Unit Tests » User Testing » Step 3 or 4
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Rejection
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Acceptance » Unit Tests » Trunk » Step 2 or 4
Known Bugs » Bug Tracker » Bug Repair » Step 2 or 4
Finished Product » Bug Reports » Step 2 or 4
Our process is not perfect by any means, but a perfect process would also imply perfect humans and technology - and THAT's not going to happen anytime soon. The amount of paper we go through in planning is staggering - maybe it's time for us to get a contract with Dunder Mifflin?
I am not sure why this question was downvoted. I think it's a great question. It's one thing to do a Google search and read some random websites, which much of the time are trying to sell you something rather than be objective. It's another thing to ask the SO crowd, who are developers and IT managers, to share their experiences and what works or doesn't work for their teams.
Now that this point is out of the way: I am sure a lot of developers will point you towards "Agile" and/or Scrum. Keep in mind that these terms are often used very loosely, especially Agile. I am probably going to sound very controversial by saying this, which is not my intention, but these methodologies are over-hyped, especially Scrum, which is more of a product marketed by Scrum consultants than a "real" methodology. Having said that, at the end of the day you've got to use what works best for you and your team; if that's Agile/Scrum/XP or whatever, go for it. At the same time, you need to be flexible about it; don't become religious about any methodology, tool, or technology. If something is not working for you, or you can get more efficient by changing something, go for it.
To be more specific regarding your questions, here's a basic summary of techniques that have been working for me (a lot of these are common sense):
Organize all the documents and emails pertaining to a specific project, and make them accessible to others through a central location (I use MS OneNote 2007 and love it for all my documentation, progress, features, bug tracking, etc.)
All meetings (which you should try to minimize) should be followed by action items where each item is assigned to a specific person. Any verbal agreement should be put into a written document. All documents added to the project site/repository. (MS OneNote in my case)
Before starting any new development, have a written document of what the system will be capable of doing (and what it won't do). Commit to it, but be flexible to business needs. How detailed should the document be? Detailed enough that everyone understands what the final system will be capable of.
Schedules are good, but be realistic and honest with yourself and the business users. The basic guideline that I use: release quality, usable software that lacks some features rather than buggy software with all the features.
Have open lines of communication within your dev team and between your developers and business groups, but at the end of the day one person (or a few key people) should be responsible for making key decisions.
Unit test where it makes sense, but DO NOT become obsessive about it. 100% code coverage != no bugs and software that works correctly according to the specs.
Do have code standards and code reviews. Commit to the standards, but if they don't work for some situations, allow for flexibility.
Comment your code, especially the hard-to-read/understand parts, but don't turn it into a novel.
Go back and clean up your code if you're already working on that class/method (implementing a new feature, working on a bug fix, etc.). But don't refactor just for the sake of refactoring, unless you have nothing else to do and you're bored.
And the last and most important item:
Do not become religious about any specific methodology or technology. Borrow the best aspects from each, and find the balance that works for you and your team.
We use Trac as our bug/feature-request tracking system
Trac tickets are reviewed, turned into workable units, and then assigned to a milestone
The Trac tickets are our specs, containing mostly very sparse information which has to be talked over during the milestone
No, but our development team consists of only two members
Yes, we test, and yes, TDD has helped us very much. Coverage is at about 70 percent (Cobertura)
No, we refactor when appropriate (during code changes)
We document only public methods and classes; our maximum line count is 40, so methods are usually small enough to be self-describing (if there is such a thing ;-)
svn with trunk, rc and stable branches
trunk - development of new features, bugfixing of older features
rc - for in-house testing; bugfixes are merged down from trunk
stable - only bugfixes, merged down from trunk or rc (see the merge sketch below)
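(A sketch of what "merged down" can look like in practice: the hypothetical snippet below cherry-picks a single bugfix revision from trunk into an rc working copy; the revision number and repository URL are made up.)

    fun main() {
        // Run inside a working copy of the rc branch; -c cherry-picks one revision.
        ProcessBuilder(
            "svn", "merge", "-c", "1234",
            "https://svn.example.com/repo/trunk",
            "."
        ).inheritIO().start().waitFor()
    }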
To give a better answer, my company's policy is to use XP as much as possible and to follow the principles and practices as outlined in the Agile manifesto.
http://agilemanifesto.org/
http://www.extremeprogramming.org/
So this includes things like story cards, test-driven development, pair programming, automated testing, continuous integration, one-click installs and so on. We are not big on documentation, but we realize that we need to produce just enough documentation in order to create working software.
In a nutshell:
create just enough user stories to start development (user stories here are meant to be the beginning of the conversation with the business, not completed specs or fully fleshed-out use cases, but short bits of business value that can be implemented in less than one iteration)
iteratively implement story cards based on what the business prioritizes as the most important
get feedback from the business on what was just implemented (e.g., good, bad, almost, etc)
repeat until business decides that the software is good enough

What is "Enterprise ready"? Can we test for it?

There are a couple of questions on Stack Overflow asking whether a given technology (Ruby / Drupal) is 'enterprise ready'.
I would like to ask how is 'enterprise ready' defined.
Has anyone created their own checklist?
Does anyone have a benchmark that they test against?
"Enterprise Ready" for the most part means can we run it reliably and effectively within a large organisation.
There are several factors involved:
Is it reliable?
Can our current staff support it, or do we need specialists?
Can it fit in with our established security model?
Can deployments be done with our automated tools?
How easy is it to administer? Can the business users do it or do we need a specialist?
If it uses a database, is it our standard DB, or do we need to train up more specialists?
Depending on how important the system is to the business the following question might also apply:
Can it be made highly available?
Can it be load balanced?
Is it secure enough?
Open-source projects often do not pay enough attention to the difficulties of deploying and running software within a large organisation. For example, most open-source projects default to MySQL as the database, which is a good and sensible choice for most small projects; however, if your enterprise has an Oracle site license and a team of highly skilled Oracle DBAs in place, the MySQL option looks distinctly unattractive.
To be short:
"Enterprise ready" means: If it crashes, the enterprises using it will possibly sue you.
Most of the time the "test", if it may really be called such, is that some enterprise (i.e. a large business) has deployed a successful and stable product using it. So it's more like saying it has proven its worth on the battlefield. In other words, the framework either has or has not been used successfully in the real world; you can't just follow some checklist and load tests and declare it enterprise ready.
Like Robert Gould says in his answer, it's "Enterprise-ready" when it's been proven by some other huge project. I'd put it this way: if somebody out there has made millions of dollars with it and gotten written up by venture capitalist magazines as the year's (some year, not necessarily this one) hottest new thing, then it's Enterprise-ready. :)
Another way to look at the question is that a tech is Enterprise-ready when a non-tech boss or business owner won't worry about whether or not they've chosen a good platform to run their business on. In this sense Enterprise-ready is a measure of brand recognition rather than technological maturity.
Having built a couple of "Enterprise" applications...
Enterprise, outside of development, means that if it breaks, someone can fix it. I've worked with employers/contractors that stick with quite possibly the worst managed-hosting providers, data vendors, and such, because those vendors will fix problems when they crop up (even if they crop up a lot) and there is someone to call when things break.
So to restate it another way: Enterprise software is Enterprisey because it has support options available. A simple example: jQuery isn't enterprisey while ExtJS is, because ExtJS has a corporate support structure behind it. (Yes, I know comparing these two frameworks is like comparing a toolset to a factory-manufactured home kit.)
As my day job is all about enterprise architecture, I believe that the word "enterprise" nowadays isn't about size or scale but refers more to how a software product is sold.
For example, Ruby on Rails isn't enterprise because there is no vendor that will come into your shop and do PowerPoint presentations repeatedly for the developer community. Ruby on Rails doesn't have a sales executive that takes me out to the golf course or my favorite restaurant for lunch. Ruby on Rails also isn't deeply covered by industry analyst firms such as Gartner.
Ruby on Rails will never be considered "enterprise" until these things occur...
In my experience, the "enterprise ready" label is an indicator of managers' fear of adopting an open-source technology, possibly balanced with a desire not to remain a follower in that technology.
This may be objectively argued with considerations such as support from a third-party company or integration with existing development tools.
I suppose an application could be considered "enterprise ready" when it is stable enough that a large company would use it. It would also imply some level of support for when it does inevitably break.
Whether or not something is "enterprise ready" is entirely subjective, undefined, and rather buzzword-y. Basically, you can't have a test_isEnterpriseReady() - just make your application as reliable and efficient as it can be.

Visual Studio Team System switching opinions

Assume your .NET-based development team is already using the following set of tools in its processes:
Subversion / TortoiseSVN / VisualSVN (source control)
NUnit (unit testing)
An open source Wiki
A proprietary bug-tracking system that is paid for
You are happy with Subversion and NUnit, but dislike the Wiki and bug-tracking system. You also would like to add some lightweight project-management software (like Fogbugz/Trac) - it does not have to be free, but obviously cheaper is better.
Can you make a compelling argument for adopting VSTS, either to add missing features and replace disliked software or to handle everything (including the source control)? Is the integration of all these features greater than the sum of the parts, or would it simply be better to acquire and replace the parts that you either do not like or do not have?
I remember looking into VSTS a few years ago and thought it was terribly expensive and not really better than many of the free options, but I assume Microsoft has continued to work on it?
VSTS is great, if you do everything in it. Unfortunately the price has not gotten better over the years. :( The CALs are still ludicrously expensive. The only improvement is that if a person uses only the work-item system, and works only with his/her own work items (no peeking at other people's work items!), then there is no need for a CAL. This makes it a bit easier to use as an external bug-report system. Still, it leaves a lot to be desired in this area.
There is one way to alleviate the cost - become a Microsoft Certified Partner. If you are a simple partner, you get 5 VS/TFS licenses for free; if you are a Gold Certified Partner, you get 25 (if memory serves). That should be enough for most companies. But getting the Gold status might be tricky, depending on what you do.
If you only dislike those two parts, then perhaps it's better to find replacements for just those instead of for everything? There are many wiki systems out there; some should be to your liking. The same goes for bug tracking, too.
We are extremely happy with not only the tools, but the integration that Team Foundation Server, and the various Team Editions have given us. We previously used Borland's StarTeam for source control and issue tracking with a 3rd party wiki, the name of which escapes me at the moment.
It came time for us to extend our licensing and support agreement with Borland, only to learn that adding users to our license and upgrading the product would cost us as much as (a little more than, actually) biting the bullet and making the switch. One thing to consider is that you would normally pay for the development tools to begin with, so the cost was partially absorbed by our existing budget.
We also did not feel the need for getting Team Suite for every person. You might want to consider it for the developers, but other disciplines don't really have a benefit in using all of the tools in most companies.
We were able to get the appropriate team editions for twelve people, enough CALs for 50 users (for Team Explorer, Teamprise, Team Project Portals, Team Web Access), Teamprise for the five Mac Users that we have, and the Team Foundation Server software itself for under six figures. Considering that includes the developer tools that we normally would be buying, it was a good deal.
The upfront cost on new licensing also covered two years, so we could split the budget between the 2008 and 2009 fiscal years. The very important thing is to make sure not to let the licenses lapse, as the renewals on licenses cost a fraction of the initial cost and also include version upgrades.
As to the features, we are in the process of rolling them out. About half of our department has completed training, and I have already started migrating projects over. The development team absolutely loves the features and the tight integration with their workflow. Version control is a snap, and work items (and their related reporting artifacts) are extensible to the nth degree. The fact that TFS relies heavily on bringing sanity to workflow management helps tie all of the processes together to a level that you just cannot get with multiple vendors.
My absolute favorite thing, though, is the extensibility model. Using the Team Foundation Server API, you can easily write check-in policies, write tools to interface with the system, develop plug-ins, and more. We are already seeing gains in productivity and the quality of our products through a minimal implementation.
Still on the horizon, though, is integrating Team Build. I have yet to set up a build project, but it seems to be seamless and painless. Time will tell... :-)
Edit - I forgot to mention that our migration to TFS includes licensing for the Test Load Agent. The load testing functionality within Team Test is one of, if not the absolute best that I have seen.
Where I'm at, we've settled on the following:
SVN for source control
Redmine for bug-tracking and wiki
NUnit for unit testing
CruiseControl.NET for our build server
Redmine is an open source Ruby on Rails application that supports multiple projects much better than Trac and seems to be much easier to administer. It's definitely worth checking out.
VSTS seems to be way too much money compared to other products. As an additional benefit, with open-source solutions you also get the source, which allows you to modify things to fit your needs if the capability isn't there yet.
I'd stick with SVN and use Trac or Bugzilla or FogBugz. You could also do a trial of Team Server. In my opinion it is not worth the money. MS had their chance with version control and screwed it up a long time ago. Too late to the party, if you ask me, and frankly I am not impressed with how they try to control your entire development experience in the IDE with "integration" to the source control. I prefer the Perforce/SVN plus separate defect-tracking approach.
With all that said, you probably can't go wrong with any of the following:
Bugzilla or Trac or FogBugz AND SVN
MS team thingamabob