Related
If you have nothing, you cannot write a test because there is nothing to test. This seems pretty obvious to me but never seems to be addressed by proponents of TDD.
In order to write a test, you have to first decide what the method or function looks like that you're going to test. You have to know what parameters to pass to it and what you expect to get back. That is what comes first, not the test.
Tests can never come first. The thing that comes first is the design which specifies what classes and methods are going to exist.
It's true that in order to write a test, the test writer must form some conception of how the test code can interact with the System Under Test. In that sense, conceptual design 'comes first'.
Test-driven development (TDD), however, is valuable because it's not (only) a quality assurance methodology. It's first and foremost a fast feedback loop.
While you may have an initial design in mind, once you start to write a test, you may discover that this design doesn't work (or is awkward to use). This often happens, and should cause you to immediately adjust course.
The red-green-refactor cycle suggests a model for thinking about TDD. Each such cycle may last only a minute or two.
Thus, you may start with an initial design in mind, but then adjust it (or completely rethink it) every other minute.
never seems to be addressed by proponents of TDD
I disagree. Plenty of introductions to TDD discuss this. Two good books that discuss this (and much more) are Kent Beck's Test-Driven Development by Example and Nat Pryce and Steve Freeman's Growing Object-Oriented Software, Guided by Tests.
It's the other way round.
If you write a test that calls a function which does not exist, your test suite fails and you get an error forcing you to define that function, just like writing any other test forces you to write the implementation.
Your tests don't need to run to be good tests. But this kind of test is not meant to stay in your test suite. They are sometimes referred to as "staircase tests": you need to write them to get going but they are only instrumental.
What happens generally is that as soon as this test passes, you make it fail by being more specific. Technically, the test you end up with is the same one you would have written after the fact, and it didn't take more time to write. But during this process you were able to run the test suite one or more times, so you spent less time in an invalid state, so to speak.
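As a minimal sketch of that progression (my own illustration in Python with pytest; the add function and test names are invented, not from this thread):

# Step 1: the "staircase" test calls a function that doesn't exist yet.
# Running the suite fails with a NameError, forcing you to define add().
def test_add_exists():
    assert add(2, 3) is not None

# Step 2: as soon as that passes trivially, make it fail by being more specific.
def test_add_returns_sum():
    assert add(2, 3) == 5

# The implementation the tests forced into existence:
def add(a, b):
    return a + b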
I would like to add that there is nothing untrue in your question, but your conclusion doesn't follow from the premise: it is true that what comes first is the specification, but there is nothing inconsistent with formalising this specification in a test before the code is written. The spec, and the tests, force you to write the code. TDD is an incremental way of formalising the spec that ensures the spec always comes first.
To write a test, you have to first decide what the method or function looks like that you're going to test. You have to know what parameters to pass to it and what you expect to get back. THAT is what comes first, NOT the test. Tests can NEVER come first. The thing that comes first is the design which specifies what classes and methods are going to exist.
Not quite right (not entirely wrong either - It's Complicated[tm])
If you look at the first example in Test Driven Development by Example, you'll see that Beck doesn't begin with classes and methods. He doesn't even begin with a test.
The very first thing that he creates is a "to-do" list, where each of the entries in the to-do list is a representation of a behavior (my terminology, not his). So we see things like
$5 + 10 CHF = $10 if rate is 2:1
These days, you'd be more likely to see this idea expressed as a Hoare triple (Given/When/Then, Arrange/Act/Assert, etc.). But what we have here is a reminder to the programmer that we want an automated check that measures the result of adding two different currencies together, and confirms that the result matches some specification.
In his exercise, his to-do list includes a "simpler" test, which is the one he attempts first:
$5 * 2 = $10
That same to-do list also includes some other concerns he has about the design, NOT expressed in test form. Also, the list grows as he works through the problem.
In this sense, the test absolutely comes first. We write the test in a language to be consumed by humans. Translating the test into a language understood by the machine comes later.
In the second step, where we describe the test to the machine, things get messier. It is absolutely the case that, as we are designing the test, we are also designing the communication protocol that allows the test to measure what the production code does. So there's a certain amount of communication design that is happening in parallel with the "test" design.
But even here, the test is not specifying all of the classes that are going to exist, it's only specifying what it needs to perform its measurement. We describe a facade, but we aren't specifying what lies beyond that facade.
It can happen, as we design more of the system, that the facade we specify is used only by tests, as a way of communicating with a different underlying design of production code.
(Note: I say classes here for consistency with the question and with early literature, taken primarily from examples in Smalltalk or Java. Feel free to substitute "functions" for "classes" if that makes you more comfortable.)
Now, the most common case is that the facade is the production code; we don't typically add elements to the design until we have a non-speculative motivation for them.
"Unit testing" puts some strain on these ideas - how can you possibly write a unit test without first designing your unit boundaries?
The real answer is an unfortunate one -- Kent Beck didn't write unit tests. He wrote "programmer tests" (a term that got retconned in later) and called them unit tests.
Using the testing language of the 1990s (which is when all this mess started), a more appropriate term is probably "composite tests".
You've also got "the London School", that was trying to figure out how to TDD a particular design style; writing a test for that style requires a more complicated testing facade "up front" (roles and interfaces and stable substitute implementations and so on).
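To make that concrete, here is a minimal London-School-style sketch (Python with unittest.mock; AuctionSniper and the listener role are hypothetical names of my own, not from the literature above). The collaborator's role is pinned down up front with a substitute, and the test verifies the interaction rather than the resulting state:

from unittest import mock

class AuctionSniper:
    # Just enough implementation to satisfy the test.
    def __init__(self, listener):
        self.listener = listener
    def auction_closed(self):
        self.listener.sniper_lost()

def test_reports_loss_when_auction_closes():
    listener = mock.Mock()                        # stands in for the listener role
    AuctionSniper(listener).auction_closed()
    listener.sniper_lost.assert_called_once()     # verify the interaction, not state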
It can also be worth keeping in mind the setting.
(Disclaimer: this isn't something I witnessed first hand - think "based on a true story" rather than "facts")
TDD (and its parent idea "test first" programming in XP) are pushing back against "up front design" of the sort where you decide what the class hierarchy and relationships should be, and document them, before you actually sit down to write the code.
The core argument being that the design process needs shorter feedback loops; that we don't get deeply committed to a particular design until we've acquired a lot of evidence that it is going to work out OK.
All that said, yes it is absolutely the case that TDD, as a technique, works so much better in the hands of someone who is already good at software design. See Michael Feathers on the Look, Ma, no hands! era.
There is no magic.
Let's say that you have to test new functionality that returns the sum of two numbers, and you have to automate that test BDD-style.
Of the two approaches below, which would be the better one for automation (and why)?
1) Using fixed input and expecting the same output. Ex: Input -> 3,5 Output -> 8
OR
2) Using two random numbers on every run and validating the result against the conventional sum.
The first.
BDD isn't really about testing; it's about using examples to illustrate desired behaviour. The examples we use are "exemplars"; specifically picked for that illustration.
In your case, the sum is a pretty trivial problem. When we're dealing with more complex business behaviour though, we'll ask, "Can you give me an example?" The conversation that follows is the most important part of BDD. From that, we get realistic examples of the kind of inputs we'll be handling, and not just the expected output but also the value of that output, and who it's valuable to.
Once we automate the scenarios, they provide tests as a nice by-product, but that's not all they do. They're also living documentation. Business people can read them to see what a system does, and team members can use them to get a feel for the capabilities already in place.
That's a lot harder if the scenarios are generic ("a random number" and "another random number" and "a result") rather than specific ("2" and "3" and "5").
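A quick sketch of the difference (my own illustration, in plain Python rather than a BDD tool like Cucumber, but the point is the same):

def add(a, b):
    return a + b

def test_adding_two_small_numbers():
    # Given the numbers 2 and 3 (specific, readable exemplars)
    # When they are summed
    # Then the result is 5 - a reader can verify this at a glance
    assert add(2, 3) == 5

import random

def test_adding_random_numbers():
    # The randomized version must restate the sum to check itself,
    # so it documents nothing beyond "add behaves like +".
    a, b = random.randint(0, 100), random.randint(0, 100)
    assert add(a, b) == a + b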
I like Lunivore's answer, it's interesting. Expect this thread to be banned from SO in 15 mins! :)
You haven't explained the business value in what you're doing. Trivially, using a random input is more complex than a fixed input, so more things could go wrong, making it worse. But maybe you WANT to test random inputs? Maybe there is some kind of value, but you haven't given enough information?
I'd say that you're putting the cart before the horse, and looking at it from a junior TDD programmer's perspective, not a senior BDD developer's perspective. The "business value" is something that is often above someone's particular paygrade. But that's where BDD starts from; it's not where it ends up.
I'd ask you, "What is the perceived value in being able to sum these numbers? Are you a crazy person? Why are you talking to a consultant about this?"
I am writing a user manual and I have gotten into a discussion with a colleague. He says I cannot use the word "you" anywhere in the manual. Now I remember something about this at school, but that was not for writing procedures. Also, doing some googling, I observed that most tutorials were using it a lot. I would prefer using it, but only if this is considered good practice. What do you think?
The alternatives that I know of are:
'You' (second person singular) - "You should put the plate on the table."
Imperative - "Put the plate on the table."
'We' (first person plural) - "We should put the plate on the table."
'The user' (third person singular) - "The user should put the plate on the table."
Passive - "The plate should be put on the table."
My own preferences are:
I prefer the imperative as the default mode, because it's the briefest (least verbiage).
I avoid the passive, and the first person plural.
I use the second person pronoun ("you") or a third person noun (e.g. "your system administrator") when I want an explicit subject instead of the imperative.
Some people believe that manuals should be written as if they were scientific papers. Others believe that technical accuracy and readability are more important. I'm of the latter persuasion - use "you" if it fits with your overall style, but be consistent in your usage - I find documents that switch between "you" and "we" irritating (and it's a sin I've been guilty of myself).
Which is easier to understand?
Click the button. You will see a dialog box where you can type your name.
or
The action of clicking the button will cause the appearance of a dialog box allowing the possibility for the user to enter his or her name.
The first is much easier to grasp. (Using "you" can sometimes be sloppy, but that tends to be in cases where it's used as a substitute for "one", or "some people", or "people in general". It's fine to use it where you are actually referring to the person reading the text.)
If you want, you can avoid the you-style by writing in the passive/imperative style. You can also try the 'we' approach, but that might sound a bit childish. You're doing nothing wrong with using you though.
To avoid writing in the you-style, use the passive/imperative style. The we-approach might also work, though it might sound a bit childish. There is nothing wrong with using you though.
We can avoid writing in the you-style by employing the passive/imperative style. Or we could use the we-approach, though we might sound a bit childish. One could try the one approach, but risk sounding too stiff-upper-lip and alienating the reader. We don't mind using you once in a while, though.
I myself prefer the second version. A series of commands is easier to follow than a story in the you-form.
You should be writing explanations in the third person.
The Java streams model is a classic Decorator pattern example.
You should write instructions in the second person, but even then, it's still not a good idea to refer to the reader as "you".
Create a constructor that can initialize lists based on a given list of lists.
Now, how did you feel after I issued 2 commands to you, my reader?
Technical writing enforces the rule of using passive text only, which means avoiding "you" is a good idea to stay on the safe side. That's based on how I do it personally.
I would do what Google, Microsoft, Yahoo, etc do. Here's a random Help page from Google:
http://mail.google.com/support/bin/answer.py?hl=en&answer=8494
shows that "you" is being used. You can check how Microsoft writes their User Manual too.
As a side note, I wouldn't use "I" or "we".
I think if you are providing imperatives, such as "Open the door", or otherwise directly addressing the reader, then you should use "you" instead of making yourself more difficult to comprehend by talking about some abstract user.
Even in scientific papers, some of the most formal writing I can think of, it is debatable whether or not I, we and other first person language is permissible. As much as high school grammar teachers might like you to think otherwise, there is no universally appropriate scheme.
I would say just be careful. It could come across as too casual. If the intended audience is business-y, I would avoid it. However, if it's a home user scenario or the marketing is casual (think Southwest Airlines), I'd say go with it.
Just don't overuse it, or it becomes taxing on the reader.
Sample of how it's intended to be used?
It all depends on the tone and style of your writing. Formal approaches discourage the use of "you". Personally, I like to use a style that is concise, to the point and relatively informal. I have no problem with the "you" word when used sparingly.
Avoid overuse, as in:
When you want to start the application you have to enter your password and then you have to select the function you want to use.
From the Handbook of Technical Writing. 8th Edition (p. 262):
You can make sentences shorter by leaving out some articles (a, an, the), some pronouns (you, this, these), and some verbs, but such sentences may result in telegraphic style and be harder to understand.
So, I'd say it's OK to use you, but like Gilbert Le Blanc said in his comment, it's often better to write 'then click the button' instead of 'then you click the button'.
Impersonal form should be preferred. The use of 'you' would be too clear, and most of your clients will believe you are not professional. A clear manual will also reduce the need for post-sale customer support, and cause losses to the company.
I was talking with some of the mentors in a local robotics competition for 7th and 8th grade kids. The robot was using PBASIC and the Parallax BASIC Stamp. One of the major issues was that this was a short-term project that required building the robot, teaching the kids to program in PBASIC, and having them program the robot, all in only 2 hours or so a week over a couple of months. PBASIC is kinda nice in that it has built-in features to do everything, but information overload is possible due to this.
My thought is that simplicity is key.
When you have kids struggling to grasp:
if X>10 then <DOSOMETHING>
There is not much point in throwing "proper" object-oriented programming at them.
What are the essentials needed to foster an interest in programming?
Edit:
I like the notion of an interpreted language on the PC as a learning tool. Since the target platforms are more than likely somewhat resource-constrained, I would like to target languages that are appropriate for embedded work. (Python and even Lua require more resources than the target is likely to have, and I actually kinda like Lua.) I suppose that is one of the few virtues BASIC has: it has run on systems with less than 4K for over 30 years. C may not be a bad option if there are some "friendly" tools available, such as Ch.
The most important thing is not having a lot of boilerplate to make the simplest program run.
If you start of with a bunch of
import Supercalifragilistic from <expialidocious>
public void inherited security model=<apartment>
public : main .....
And if you tell them "not to worry, you aren't supposed to understand that" - you are going to put off both the brightest and the dumbest.
The nice thing about python is that printing "hello world" is print "hello world"
Fun, quick results. Capture the attention span of the kid.
An interactive shell, like most scripting languages offer (a command line), that lets the student just type 1- or 2-liners is a big deal.
python:
>>> 1+1
2
Boom, instant feedback, kid thinks "the computer is talking back". Kids love that. Remember Eliza, anyone?
If they get bogged down in installing an IDE, creating a project, bleh bleh bleh, sometimes the tangents will take you away from the main topic.
BASIC is good too.
Look for some things online like "SIMPLE" : http://www.simplecodeworks.com/website.html
A team of researchers, beginning at Rice, then spreading out to Brown, Chicago, Northeastern, Northwestern, and Utah, have been studying this question for about 15 years. I can't summarize all their discoveries here, but here are some of their most important findings:
Irregular syntax can be a barrier to entry.
The language should be divided into concentric subsets, and you should choose a subset appropriate to the student's level of knowledge. For example, their smallest subset is called the "Beginning Student" language.
The compiler's error messages should be matched to the students' level of knowledge. If you are using subsets, different subsets might give different messages for the same error.
Beginners find it difficult to learn the phase distinction: separate phases for type checking and run time, with different kinds of errors. For this reason, beginners do better with a language where types are checked at run time, i.e., a dynamically typed language.
Beginners find it difficult to reason about mutable variables and mutable objects. If you teach pure functional programming, by contrast, you can leverage students' experience with high-school and middle-school algebra (see the small sketch at the end of this answer).
Beginning students are more engaged by an interactive programming environment than by the old edit-compile-link-go model.
Beginning students are engaged by splash and by interactivity. It's good if your language's standard library provides built-in support for creating and displaying images. It's better if those images are supported within the interactive programming environment, instead of requiring a separate player or viewer. And it's even better if your standard library can support moving images, or some other kind of animation.
Interestingly, they have got very good results with just 2D images. Even though we are all surrounded by examples of 3D computer graphics, students seem to get very engaged working with just two-dimensional images.
These results have been obtained primarily with college students, and they have been replicated at over 20 universities. However, the research team has also done some work with high-school and middle-school students. The first papers on that work are just coming out, so I'm less aware of the new findings and am not able to summarize them.
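As a tiny sketch of the algebra connection mentioned in the list above (my own example, not the researchers'): a pure function behaves exactly like its high-school definition, so students can reason about it by substitution.

# f(x) = 3x + 1, written as a pure function:
def f(x):
    return 3 * x + 1

assert f(2) == 7       # evaluates just like on paper
assert f(2) == f(2)    # no hidden state: same input, same answer, every time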
When you have kids struggling to grasp:
if X>10 then <DOSOMETHING>
Maybe it's a sign they shouldn't be doing programming?
What are the essentials needed to foster an interest in programming?
To see success with little or no effort. To create something running in a matter of minutes. A lot of programming languages can offer that, including the scary C++.
In order to avoid complication with #includes, multiple source files, modularization and compilation, why not have a look elsewhere? Try to write some Excel macros or use any other software with some basic built-in scripting language to automate certain tasks?
Another idea could be to play with web pages. It is not exactly programming, but at least easy to achieve something and show to others with pride.
This has been said on SO before, but... try Scratch. It's an incredible learning tool for kids. It teaches the basics of programming concepts in a hands-on and language-independent way. After a bit of playing around with it they can get down to learning a specific language's implementation of the concepts they already understand.
The common theme in languages that are easy for beginners - especially children to pick up is that there's very little barrier to entry, and immediate feedback. If "hello world" doesn't look a lot like print "Hello, world!", it's going to be harder for people to pick up. The following features along those lines come to mind:
Interpreted, or incrementally JIT compiled (which looks like an interpreter to the user)
No boilerplate
No attempt to enforce a specific programming style (e.g. Java requiring that everything be in a class definition, or Haskell enforcing purely functional design)
Dynamic typing
Implicit coercion (maybe)
A REPL
Breaking the problem (read: the program) down into a small set of sections (modules) that do one thing and do it very well.
You have to get them to stop thinking like a user and start thinking like a programmer. They need to take it one step at a time. Ask them what they have to think about in order to figure out the problem themselves, and then write those down as steps. If you can, break each step down even further in the same manner. When done, you will have the program in English, making it simpler to program for real.
I did this with a friend who just could not get it, and now he can. He used to look at something I did and be bewildered, and I would point out that he has done more complex stuff than this.
One of the more persistent arguments I have had with other programmers is whether or not one's first language should require explicit type declarations. Many are of the opinion that learning a language which requires you to explicitly declare type information will teach you to program typefully. Conversely, it can be said that dynamic languages might present a less demanding learning curve. It goes either way, I suppose.
My advice: start with a simple model of how a computer works. I am partial to stack machines as good tools for teaching computation.
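For instance, a toy stack machine fits in a dozen lines of Python (a sketch of my own, not a specific teaching tool); every step of the computation is visible on the stack:

def run(program):
    stack = []
    for op in program:
        if op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:                      # anything else is a literal to push
            stack.append(op)
    return stack.pop()

print(run([3, 4, "add", 2, "mul"]))   # (3 + 4) * 2 -> 14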
Remember that beginners are learning two disciplines at the same time: how computers work and the abstract logic involved (the basics of Computer Science), plus how to write programs that match their intended logic (learning a specific language's syntax and idioms). You have to address both concerns in an interwoven fashion in order for the students to quickly become effective. This is also the reason experienced programmers can often pick up new languages quickly.
It's worth noting Python grew out of a project for a language named ABC, which was targeted at beginners. For example, the colon at the end of an if line isn't strictly necessary for parsing, but was found to improve readability:
if some_condition:
    do_this()
I got 3 words: Karel the Robot.
It's a really, really simple 'language' designed to teach people the basics of programming.
Look for it on the web. You can look at this, though I never tried it:
http://karel.sourceforge.net/
While this isn't related to programming a robot, I think web programming is a great place to start with kids that age. It's how I started at that exact age. It easily translates to something kids understand if they use the web at all. Start with HTML, throw in Javascript, and soon they want to be doing features requiring server-side scripting or some sort, and things progress from there.
With the kind of kids who are already interested in robotics, though, I'd actually go for a different language like the ones already described. If you want to work in a field like robotics, you don't need to be convinced to try something hard. You need to be challenged.
A few years ago I saw a presentation at Ignite! Seattle from one of the people working on the project now known as Kodu, which was envisioned as a programming language for children. He spent time talking about which common language features could simply be thrown out in a programming environment meant to teach fundamentals.
A lot of cherished imperative constructs, like C-style for loops, were simply left out in favor of a simple object-messaging approach. Object-oriented programming isn't hard to understand when you think about "objects" and "messages"; the hard part is when you deal with things that programmers, but not children, care about, like inheritance and contracts and sweeping abstractions. I've got this thing (a noun), now act on it (a verb), in this way (an adverb like "quickly"), when it (sees/bumps into) something (with some attribute) (that's your if). Events are really conditions, and have all of the power of conditions, but it's up to the runtime to identify when those events happen.
This kind of event and messaging driven approach probably translates even better to robots than procedural programming would, anyway, so it might be a good way to look at the problem. Try not to think about what you'd "need" to know to program in C or Pascal or something; think about what you'd want to be able to make something do.
So I'm working on this class that's supposed to request help documentation from a vendor through a web service. I try to name it DocumentRetriever, VendorDocRequester, DocGetter, but they just don't sound right. I ended up browsing through dictionary.com for half an hour trying to come up with an adequate word.
Starting programming with bad names is like having a very bad hair day in the morning: the rest of the day goes downhill from there. Feel me?
What you are doing now is fine, and I highly recommend you stick with your current syntax, being:
context + verb + how
I use this method to name functions/methods, SQL stored procs, etc. By keeping to this syntax, you keep your Intellisense/code panes much neater. So you want EmployeeGetByID(), EmployeeAdd(), EmployeeDeleteByID(). When you use a more grammatically correct syntax such as GetEmployee() or AddEmployee(), you'll see that this gets really messy if you have multiple Gets in the same class, as unrelated things will be grouped together.
I liken this to naming files with dates: you want to say 2009-01-07.log, not 1-7-2009.log, because after you have a bunch of them, the order becomes totally useless.
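A quick sketch of why that ordering matters (Python, my own example): ISO-style names sort chronologically as plain strings, while month-first names do not.

files = ["2009-01-07.log", "2008-12-31.log", "2009-02-01.log"]
print(sorted(files))   # chronological: 2008-12-31, 2009-01-07, 2009-02-01

messy = ["1-7-2009.log", "12-31-2008.log", "2-1-2009.log"]
print(sorted(messy))   # lexicographic: 1-7-2009, 12-31-2008, 2-1-2009 - not chronological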
One lesson I have learned, is that if you can't find a name for a class, there is almost always something wrong with that class:
you don't need it
it does too much
A good naming convention should minimize the number of possible names you can use for any given variable, class, method, or function. If there is only one possible name, you'll never have trouble remembering it.
For functions and for singleton classes, I scrutinize the function to see if its basic purpose is to transform one kind of thing into another kind of thing. I'm using that term very loosely, but you'll discover that a HUGE number of the functions you write essentially take something in one form and produce something in another form.
In your case it sounds like your class transforms a Url into a Document. It's a little bit weird to think of it that way, but perfectly correct, and when you start looking for this pattern, you'll see it everywhere.
When I find this pattern, I always name the function xFromy.
Since your function transforms a Url into a Document, I would name it
DocumentFromUrl
This pattern is remarkably common. For example:
atoi -> IntFromString
GetWindowWidth -> WidthInPixelsFromHwnd // or DxFromWnd if you like Hungarian
CreateProcess -> ProcessFromCommandLine
You could also use UrlToDocument if you're more comfortable with that order. Whether you say xFromy or yTox is probably a matter of taste, but I prefer the From order because that way the beginning of the function name already tells you what type it returns.
Pick one convention and stick to it. If you are careful to use the same names as your class names in your xFromy functions, it'll be a lot easier to remember what names you used. Of course, this pattern doesn't work for everything, but it does work where you're writing code that can be thought of as "functional."
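Here's what the pattern might look like in Python (a sketch; document_from_url and the plain-text "document" it returns are stand-ins of my own):

from urllib.request import urlopen

def document_from_url(url):
    # The name reads output-first: you know what comes back before what goes in.
    with urlopen(url) as response:
        return response.read().decode("utf-8")

def int_from_string(s):
    # atoi, renamed following the same convention.
    return int(s)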
Sometimes there isn't a good name for a class or method, it happens to us all. Often times, however, the inability to come up with a name may be a hint to something wrong with your design. Does your method have too many responsibilities? Does your class encapsulate a coherent idea?
Thread 1:
function programming_job(){
    while (i make classes){
        Give each class a name quickly; always fairly long and descriptive.
        Implement and test each class to see what they really are.
        while (not satisfied){
            Re-visit each class and make small adjustments
        }
    }
}
Thread 2:
while(true){
    if (any code smells bad){
        rework, rename until at least somewhat better
    }
}
There's no Thread.sleep(...) anywhere here.
I spend a lot of time as well worrying about the names of anything that can be given a name when I am programming. I'd say it pays off very well, though. Sometimes when I am stuck, I leave it for a while, and during a coffee break I ask around a bit to see if someone has a good suggestion.
For your class I'd suggest VendorHelpDocRequester.
The book Code Complete by Steve McConnell has a nice chapter on naming variables/classes/functions/...
I think this is a side effect.
It's not the actual naming that's hard. What's hard is that the process of naming makes you face the horrible fact that you have no idea what the hell you're doing.
I actually just heard this quote yesterday, through the Signal vs. Noise blog at 37Signals, and I certainly agree with it:
"There are only two hard things in Computer Science: cache invalidation and naming things."
— Phil Karlton
It's good that it's difficult. It's forcing you to think about the problem, and what the class is actually supposed to do. Good names can help lead to good design.
Agreed. I like to keep my type names and variables as descriptive as possible without being too horrendously long, but sometimes there's just a certain concept that you can't find a good word for.
In that case, it always helps me to ask a coworker for input - even if they don't ultimately help, it usually helps me to at least explain it out loud and get my wheels turning.
I was just writing on naming conventions last month: http://caseysoftware.com/blog/useful-naming-conventions
The gist of it:
verbAdjectiveNounStructure - with Structure and Adjective as optional parts
For verbs, I stick to action verbs: save, delete, notify, update, or generate. Once in a while, I use "process" but only to specifically refer to queues or work backlogs.
For nouns, I use the class or object being interacted with. In web2project, this is often Tasks or Projects. If it's Javascript interacting with the page, it might be body or table. The point is that the code clearly describes the object it's interacting with.
The structure is optional because it's unique to the situation. A listing screen might request a List or an Array. One of the core functions used in the Project List for web2project is simply getProjectList. It doesn't modify the underlying data, just the representation of the data.
The adjectives are something else entirely. They are used as modifiers to the noun. Something as simple as getOpenProjects might be easily implemented with a getProjects and a switch parameter, but this tends to generate methods which require quite a bit of understanding of the underlying data and/or structure of the object... not necessarily something you want to encourage. By having more explicit and specific functions, you can completely wrap and hide the implementation from the code using it. Isn't that one of the points of OO?
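A small sketch of that trade-off (Python; the project data and function names are illustrative only, not web2project's real API):

def get_project_list():
    # Stand-in data source for the sketch.
    return [{"name": "CRM rollout", "status": "open"},
            {"name": "Site redesign", "status": "closed"}]

def get_open_projects():
    # Explicit and specific: callers need no knowledge of the status field.
    return [p for p in get_project_list() if p["status"] == "open"]

def get_projects(status):
    # Switch-parameter style: every caller must know the magic values.
    return [p for p in get_project_list() if p["status"] == status]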
More so than just naming a class, creating an appropriate package structure can be a difficult but rewarding challenge. You need to consider separating the concerns of your modules and how they relate to the vision of the application.
Consider the layout of your app now:
App
    VendorDocRequester (read from web service and provide data)
    VendorDocViewer (use requester to provide vendor docs)
I would venture to guess that there's a lot going on inside a few classes. If you were to refactor this into a more MVC-ified approach, and allow small classes to handle individual duties, you might end up with something like:
App
    VendorDocs
        Model
            Document (plain object that holds data)
            WebServiceConsumer (deal with nitty gritty in web service)
        Controller
            DatabaseAdapter (handle persistence using ORM or other method)
            WebServiceAdapter (utilize Consumer to grab a Document and stick it in database)
        View
            HelpViewer (use DBAdapter to spit out the documentation)
Then your class names rely on the namespace to provide full context. The classes themselves can be inherently related to application without needing to explicitly say so. Class names are simpler and easier to define as a result!
One other very important suggestion: please do yourself a favor and pick up a copy of Head First Design Patterns. It's a fantastic, easy-reading book that will help you organize your application and write better code. Appreciating design patterns will help you understand that many of the problems you encounter have already been solved, and you'll be able to incorporate the solutions into your code.
Leo Brodie, in his book "Thinking Forth", wrote that the most difficult task for a programmer was naming things well, and he stated that the most important programming tool is a thesaurus.
Try using the thesaurus at http://thesaurus.reference.com/.
Beyond that, don't use Hungarian Notation EVER, avoid abbreviations, and be consistent.
Best wishes.
In short:
I agree that good names are important, but I don't think you have to find them before implementing at all costs.
Of course it's better to have a good name right from the start. But if you can't come up with one in 2 minutes, renaming later will cost less time and is the right choice from a productivity point of view.
Long:
Generally, it's often not worth thinking too long about a name before implementing. If you implement your class, naming it "Foo" or "Dsnfdkgx", then while implementing it you'll see what you should have named it.
Especially with Java+Eclipse, renaming things is no pain at all, as it carefully handles all references in all classes, warns you of name collisions, etc. And as long as the class is not yet in the version control repository, I don't think there's anything wrong with renaming it 5 times.
Basically, it's a question of how you think about refactoring. Personally, I like it, though it annoys my teammates sometimes, as they believe in never touching a running system. And of everything you can refactor, changing names is one of the most harmless things you can do.
Why not HelpDocumentServiceClient (kind of a mouthful), or HelpDocumentClient... It doesn't matter that it's a vendor; the point is that it's a client to a web service that deals with help documents.
And yes naming is hard.
There is only one sensible name for that class:
HelpRequest
Don't let the implementation details distract you from the meaning.
Invest in a good refactoring tool!
I stick to basics: VerbNoun(arguments). Examples: GetDoc(docID).
There's no need to get fancy. It will be easy to understand a year from now, whether it's you or someone else.
For me, I don't care how long a method or class name is, as long as it's descriptive and in the correct library. Long gone are the days when you had to remember where each part of the API resides.
Intellisense exists for all major languages. Therefore, when using a 3rd-party API, I like to use its Intellisense for the documentation as opposed to using the 'actual' documentation.
With that in mind I am fine to create a method name such as
StevesPostOnMethodNamesBeingLongOrShort
Long - but so what? Who doesn't use 24-inch screens these days!
I have to agree that naming is an art. It gets a little easier if your class follows a certain "design pattern" (factory, etc.).
This is one of the reasons to have a coding standard. Having a standard tends to assist in coming up with names when required. It helps free up your mind for other, more interesting things! (-:
I'd recommend reading the relevant chapter of Steve McConnell's Code Complete (Amazon link) which goes into several rules to assist readability and even maintainability.
HTH
cheers,
Rob
Nope, debugging is the most difficult thing for me! :-)
DocumentFetcher? It's hard to say without context.
It can help to act like a mathematician and borrow/invent a lexicon for your domain as you go: settle on short plain words that suggest the concept without spelling it out every time. Too often I see long latinate phrases that get turned into acronyms, making you need a dictionary for the acronyms anyway.
The language you use to describe the problem is the language you should use for the variables, methods, objects, classes, etc. Loosely, nouns match objects and verbs match methods. If you're missing words to describe the problem, you're also missing a full understanding (specification) of the problem.
If it's just choosing between a set of names, then it should be driven by the conventions you are using to build the system. If you've come to a new spot, uncovered by previous conventions, then it's always worth spending some effort on trying to extend them (properly, consistently) to cover this new case.
If in doubt, sleep on it and pick the first, most obvious name the next morning. :-)
If you wake up one day and realize you were wrong, then change it right away.
Paul.
BTW: Document.fetch() is pretty obvious.
I find I have the most trouble in local variables. For example, I want to create an object of type DocGetter. So I know it's a DocGetter. Why do I need to give it another name? I usually end up giving it a name like dg (for DocGetter) or temp or something equally nondescriptive.
Don't forget that design patterns (not just the GoF ones) are a good way of providing a common vocabulary, and their names should be used whenever one fits the situation. That will even help newcomers who are familiar with the nomenclature to quickly understand the architecture. Is this class you're working on supposed to act like a Proxy, or even a Façade?
Shouldn't the vendor documentation be the object? I mean, that one is tangible, and not just as some anthropomorphization of a part of your program. So, you might have a VendorDocumentation class with a constructor that fetches the information. I think that if a class name contains a verb, often something has gone wrong.
I definitely feel you. And I feel your pain. Every name I think of just seems rubbish to me. It all seems so generic and I want to eventually learn how to inject a bit of flair and creativity into my names, making them really reflect what they describe.
One suggestion I have is to consult a Thesaurus. Word has a good one, as does Mac OS X. That can really help me get my head out of the clouds and gives me a good starting place as well as some inspiration.
If the name would explain itself to a lay programmer then there's probably no need to change it.