I would like to know the fastest/best way to learn the business logic in a new project.
Most projects have been running for years, some of them are poorly documented, but you still need to know how to work with them. What is the best way to do this? (Use Case Diagram / support from colleagues / code analysis etc.)
The problem with verbose logging is that you may be overloaded with details that do not help, and even if you have the right details, you may misunderstand the big picture.
Moreover, if projects run for years, and are poorly documented, chances are the little available documentation is already obsolete. And chances are, the team did not invest heavily in logging either.
Reverse engineering the code is another approach, but where do you start if there are millions of lines of code in the legacy system? Some things can easily be read in the code, but much of the more complex, emergent behavior comes from the interactions between many classes, and that kind of knowledge is the most difficult to extract.
So here is the way to go:
1. Talk to colleagues. The best way to move knowledge from one brain to another is direct conversation. It works much better than any formal diagram or documentation. Unfortunately, this is not always possible (e.g. the team has left).
2. If 1 is not possible, understand the business user's point of view. Maybe there is a user manual? Maybe some colleagues in user support can help? If none of those are available, the ultimate approach is to spend some time shadowing a user for a day. You will not understand how the system works, but at least you'll get a quick introduction to what the system is supposed to do, what matters to the users, and maybe some business rules.
3. Check for automated test cases. Such test cases are a hidden and up-to-date documentation resource (see the sketch after this list).
4. Check for non-automated test cases, in particular user acceptance tests and integration tests. If these are not automated, chances are they are already obsolete. But it's better than nothing.
5. Reverse engineer the code. Identify the main classes and how they interact. And yes, some simplified class diagrams will help you understand how the classes are related (no need to document properties and methods: these can be found in the code). And some sequence diagrams will help you get a picture of the more complex interactions.
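As an example of point 3, here is a minimal sketch (in C#, with entirely hypothetical names and a made-up shipping rule) of how an automated test doubles as documentation of a business rule; unlike a wiki page, it fails the moment the rule changes:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Order { public decimal Total { get; set; } }

public static class ShippingCalculator
{
    // Hypothetical business rule: orders over 100 ship free.
    public static decimal CalculateFee(Order order)
        => order.Total > 100m ? 0m : 5.95m;
}

[TestClass]
public class ShippingRuleTests
{
    [TestMethod]
    public void OrdersOver100ShipFree()
    {
        var order = new Order { Total = 120m };
        Assert.AreEqual(0m, ShippingCalculator.CalculateFee(order));
    }
}

A reader who stumbles on this test learns the rule even if no prose documentation exists.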
Run the server, log verbosely everywhere, follow the code flow, and dig in.
Over the last few days I have been reading about component-based entity systems.
I think I understand how they work, but when I try to implement one, I run into a lot of problems.
Here is a diagram of my main problems:
http://s7.directupload.net/file/d/3327/pzy7kanj_jpg.htm
I also wonder: can I add the same component twice to an entity, or should I disallow this?
If I disallow it, how can I add multiple sprites to an entity?
And if I make a CollisionComponent or a PhysicsComponent, the PhysicsComponent and the MoveComponent must communicate, right?
I hope someone can help me.
Here is another diagram I made of my ideas:
http://s1.directupload.net/images/130725/5p83qysu.jpg
OK, there seem to be a lot of related questions between this post and the diagrams you linked to. The answer to these questions is 'it depends,' and I am guessing you would like some guidance on the subject of component-based entity systems rather than one-line answers to all of your questions. Rather than try to regurgitate the discussion from similar queries, I will post some links here that should get you thinking more about these issues.
First there is the ES wiki. This wiki is still in its infancy, but it holds some great discussion on entity systems.
Next, there is some really productive discussion over at the gamedev Stack Exchange (maybe this question should be moved there as well). First take a look at this question; it's a good starting point because it explains why there were no clear trends in ES design when that answer was written, and from what I've seen that still holds true today. Then there is an absolutely amazing answer by Byte56 that explains the basics of the system-component approach extremely well, and it is well worth reading even if you think you've already got a handle on it. Next there is a helpful series of questions asked by Yannbane that address some of the finer points in more detail. Here are all three questions. Also, there are some questions on how to integrate input and game states.
Finally, you should check out some open source entity systems that have had a lot of attention, such as Artemis, written in Java. (There seems to be a server issue right now for that link, but this article is a good intro, and there are C# and C++ repositories available, so you can dig into some code.) There is also Ash, written in AS3.
Now I feel I must answer one of your questions directly:
And if I make a CollisionComponent or a PhysicsComponent, the PhysicsComponent and the MoveComponent must communicate, right?
From the diagrams you linked to, you seem to be using the systems-based approach (similar to Artemis). From my reading, I believe this is one area the community has generally agreed on: in the systems approach, the idea is to centralize the component logic so that (aside from a few simple methods, as mentioned in the answer to Yannbane's second question) the component is little more than a data container. So if your components are stored in entities, then your systems are required to sift through all of your entities. A system that requires more than one component should ask each entity for all the components it needs to do its work; if it doesn't find all of those components, the system should move on. So based on what you have decided so far, the answer is no, components should not be communicating with each other.
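To make that concrete, here is a minimal sketch (the names are my own, not taken from your diagrams) of a system that sifts through entities, asks each one for the components it needs, and moves on when one is missing:

using System;
using System.Collections.Generic;

// Components are plain data containers, as described above.
public class PositionComponent { public float X, Y; }
public class MoveComponent { public float VelocityX, VelocityY; }

// A minimal entity: just a bag of components keyed by type.
public class Entity
{
    private readonly Dictionary<Type, object> components = new Dictionary<Type, object>();

    public void AddComponent<T>(T component) => components[typeof(T)] = component;

    public T GetComponent<T>() where T : class
        => components.TryGetValue(typeof(T), out var c) ? (T)c : null;
}

// A system acts only on entities that carry every component it needs.
public class MovementSystem
{
    public void Update(IEnumerable<Entity> entities, float deltaTime)
    {
        foreach (var entity in entities)
        {
            var position = entity.GetComponent<PositionComponent>();
            var move = entity.GetComponent<MoveComponent>();

            // Missing a required component? Skip this entity.
            if (position == null || move == null)
                continue;

            position.X += move.VelocityX * deltaTime;
            position.Y += move.VelocityY * deltaTime;
        }
    }
}

Note that the components never talk to each other; the system is the only place where PositionComponent and MoveComponent meet.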
Anyway, good luck on this.
We have been developing code using loose coupling and dependency injection.
A lot of "service" style classes have a constructor and one method that implements an interface. Each individual class is very easy to understand in isolation.
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
It's not easy to jump to collaborators using Eclipse because you have to go via the interfaces. If the interface is Runnable, that is no help in finding which class is actually plugged in. Really it's necessary to go back to the DI container definition and try to figure things out from there.
Here's a line of code from a dependency-injected service class:
// myExpiryCutoffDateService was injected,
Date cutoff = myExpiryCutoffDateService.get();
Coupling here is as loose as can be. The expiry date could be implemented in literally any manner.
Here's what it might look like in a more coupled application.
ExpiryDateService service = new ExpiryDateService();
Date cutoff = service.getCutoffDate(databaseConnection, paymentInstrument);
From the tightly coupled version, I can infer that the cutoff date is somehow determined from the payment instrument using a database connection.
I'm finding code of the first style harder to understand than code of the second style.
You might argue that when reading this class, I don't need to know how the cutoff date is figured out. That's true, but if I'm narrowing in on a bug or working out where an enhancement needs to slot in, that is useful information to know.
Is anyone else experiencing this problem? What solutions have you found? Is this just something to adjust to? Are there any tools that allow visualisation of the way classes are wired together? Should I make the classes bigger or more coupled?
(I have deliberately left this question container-agnostic, as I'm interested in answers for any container.)
While I don't know how to answer this question in a single paragraph, I attempted to answer it in a blog post instead: http://blog.ploeh.dk/2012/02/02/LooseCouplingAndTheBigPicture.aspx
To summarize, I find that the most important points are:
Understanding a loosely coupled code base requires a different mindset. While it's harder to 'jump to collaborators' it should also be more or less irrelevant.
Loose coupling is all about understanding a part without understanding the whole. You should rarely need to understand it all at the same time.
When zeroing in on a bug, you should rely on stack traces rather than the static structure of the code in order to learn about collaborators.
It's the responsibility of the developers writing the code to make sure that it's easy to understand - it's not the responsibility of the developer reading the code.
Some tools are aware of DI frameworks and know how to resolve dependencies, allowing you to navigate your code in a natural way. But when that isn't available, you just have to use whatever features your IDE provides as best you can.
I use Visual Studio and a custom-made framework, so the problem you describe is my life. In Visual Studio, SHIFT+F12 is my friend. It shows all references to the symbol under the cursor. After a while you get used to the necessarily non-linear navigation through your code, and it becomes second-nature to think in terms of "which class implements this interface" and "where is the injection/configuration site so I can see which class is being used to satisfy this interface dependency".
There are also extensions available for VS which provide UI enhancements to help with this, such as Productivity Power Tools. For instance, you can hover over an interface, an info box will pop up, and you can click "Implemented By" to see all the classes in your solution implementing that interface. You can double-click to jump to the definition of any of those classes. (I still usually just use SHIFT+F12 anyway.)
I just had an internal discussion about this, and ended up writing this piece, which I think is too good not to share. I'm copying it here (almost) unedited, but even though it's part of a bigger internal discussion, I think most of it can stand alone.
The discussion is about introduction of a custom interface called IPurchaseReceiptService, and whether or not it should be replaced with use of IObserver<T>.
Well, I can't say that I have strong data points about any of this - it's just some theories that I'm pursuing... However, my theory about cognitive overhead at the moment goes something like this: consider your special IPurchaseReceiptService:
public interface IPurchaseReceiptService
{
    void SendReceipt(string transactionId, string userGuid);
}
If we keep it as the Header Interface it currently is, it only has that single SendReceipt method. That's cool.
What's not so cool is that you had to come up with a name for the interface, and another name for the method. There's a bit of overlap between the two: the word Receipt appears twice. IME, sometimes that overlap can be even more pronounced.
Furthermore, the name of the interface is IPurchaseReceiptService, which isn't particularly helpful either. The Service suffix is essentially the new Manager, and is, IMO, a design smell.
Additionally, not only did you have to name the interface and the method, but you also have to name the variable when you use it:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IPurchaseReceiptService purchaseReceiptService,
    EvoCipher cipher
)
At this point, you've essentially said the same thing thrice. This is, according to my theory, cognitive overhead, and a smell that the design could and should be simpler.
Now, contrast this to use of a well-known interface like IObserver<T>:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IObserver<TransactionInfo> purchaseReceiptService,
    EvoCipher cipher
)
This enables you to get rid of the bureaucracy and reduce the design to the heart of the matter. You still have intention-revealing naming - you only shift the design from a Type Name Role Hint to an Argument Name Role Hint.
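To make the shift concrete, here is a minimal sketch of what an implementation might look like; the shape of TransactionInfo and the class name are my assumptions, not part of the original discussion:

using System;

public class TransactionInfo
{
    public string TransactionId { get; set; }
    public string UserGuid { get; set; }
}

public class PurchaseReceiptSender : IObserver<TransactionInfo>
{
    public void OnNext(TransactionInfo value)
    {
        // Send the receipt here; this replaces the old SendReceipt method.
    }

    public void OnError(Exception error) { }

    public void OnCompleted() { }
}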
When it comes to the discussion about 'disconnectedness', I'm under no illusion that use of IObserver<T> will magically make this problem go away, but I have another theory about this.
My theory is that the reason many programmers find programming to interfaces so difficult is exactly because they are used to Visual Studio's Go to definition feature (incidentally, this is yet another example of how tooling rots the mind). These programmers are perpetually in a state of mind where they need to know what's 'on the other side of an interface'. Why is this? Could it be because the abstraction is poor?
This ties back to the RAP, because if you confirm programmers' belief that there's a single, particular implementation behind every interface, it's no wonder they think that interfaces are only in the way.
However, if you apply the RAP, I hope that slowly, programmers will learn that behind a particular interface, there may be any implementation of that interface, and their client code must be able to handle any implementation of that interface without changing the correctness of the system. If this theory holds, we've just introduced the Liskov Substitution Principle into a code base without scaring anyone with high-brow concepts they don't understand :)
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
This is not accurate. For each class, you know exactly what kind of objects it depends on to provide its functionality at runtime.
You know them because you know what objects are expected to be injected.
What you don't know is the actual concrete class that will be injected at runtime - the class that implements the interface or base class your class depends on.
So if you want to see which class is actually injected, you just have to look at the configuration for that class to see the concrete classes being injected.
You could also use facilities provided by your IDE.
Since you refer to Eclipse: Spring has a plugin for it, which also has a visual tab displaying the beans you configure. Did you check that? Isn't that what you are looking for?
Also check out the same discussion in the Spring Forum.
UPDATE:
Reading your question again, I don't think that this is a real question.
I mean this in the following manner.
Like all things, loose coupling is not a panacea; it has its own disadvantages.
Most people tend to focus on the benefits, but like any solution it has drawbacks as well.
What you do in your question is describe one of its main disadvantages: it is indeed not easy to see the big picture, since everything is configurable and pluggable.
There are other drawbacks one could complain about as well, e.g. that it is slower than a tightly coupled application, and that would also be true.
In any case, to reiterate: what you describe in your question is not a problem you have stumbled upon that has a standard solution (or any solution, for that matter).
It is one of the drawbacks of loose coupling, and you have to decide whether this cost is higher than what you actually gain from it, as in any design trade-off.
It is like asking:
Hey, I am using this pattern named Singleton. It works great, but I can't create new objects! How can I get around this problem, guys?
Well, you can't; but if you need to, perhaps Singleton is not for you...
One thing that helped me is placing multiple closely related classes in the same file. I know this goes against the general advice (of having one class per file), and I generally agree with that advice, but in my application architecture it works very well. Below I will try to explain in which cases it does.
The architecture of my business layer is designed around the concept of business commands. Command classes (simple DTOs with only data and no behavior) are defined, and for each command there is a 'command handler' that contains the business logic to execute that command. Each command handler implements the generic ICommandHandler<TCommand> interface, where TCommand is the actual business command.
Consumers take a dependency on ICommandHandler<TCommand>, create new command instances, and use the injected handler to execute those commands. It looks like this:
public class Consumer
{
    private ICommandHandler<CustomerMovedCommand> handler;

    public Consumer(ICommandHandler<CustomerMovedCommand> h)
    {
        this.handler = h;
    }

    public void MoveCustomer(int customerId, Address address)
    {
        var command = new CustomerMovedCommand();
        command.CustomerId = customerId;
        command.NewAddress = address;
        this.handler.Handle(command);
    }
}
Now consumers only depend on a specific ICommandHandler<TCommand> and have no notion of the actual implementation (as it should be). However, although the Consumer should know nothing about the implementation, during development I (as a developer) am very much interested in the actual business logic that is executed, simply because development is done in vertical slices, meaning that I'm often working on both the UI and the business logic of a single feature. This means I'm often switching between business logic and UI logic.
So what I did was put the command (in this example the CustomerMovedCommand) and the implementation of ICommandHandler<CustomerMovedCommand> in the same file, with the command first. Because the command itself is concrete (since it's a DTO, there is no reason to abstract it), jumping to the class is easy (F12 in Visual Studio). By placing the handler next to the command, jumping to the command also means jumping to the business logic.
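Here is a minimal sketch of that single-file layout (the handler body and the Address type are placeholders; the interface is repeated only to keep the sketch self-contained):

public class Address { /* street, city, etc. */ }

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// The command first...
public class CustomerMovedCommand
{
    public int CustomerId { get; set; }
    public Address NewAddress { get; set; }
}

// ...and the handler with the business logic directly below it.
public class CustomerMovedCommandHandler : ICommandHandler<CustomerMovedCommand>
{
    public void Handle(CustomerMovedCommand command)
    {
        // The business logic for moving a customer lives here.
    }
}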
Of course this only works when it is okay for the command and handler to be living in the same assembly. When your commands need to be deployed separately (for instance when reusing them in a client/server scenario), this will not work.
Of course this is just 45% of my business layer. Another big piece, however (say another 45%), is the queries, which are designed similarly, using a query class and a query handler. These two classes are also placed in the same file, which again allows me to navigate quickly to the business logic.
Because the commands and queries are about 90% of my business layer, I can in most cases move very quickly from presentation layer to business layer and even navigate easily within the business layer.
I must say these are the only two cases in which I place multiple classes in the same file, but doing so makes navigation a lot easier.
If you want to learn more about how I designed this, I've written two articles about this:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
In my opinion, loosely coupled code can help you a great deal, but I agree with you about its readability.
The real problem is that method names should also convey valuable information.
That is the Intention-Revealing Interface principle, as stated by Domain-Driven Design ( http://domaindrivendesign.org/node/113 ).
You could rename the get method:
// intention revealing name
Date cutoff = myExpiryCutoffDateService.calculateFromPayment();
I suggest you read thoroughly about DDD principles; your code could become much more readable and thus more manageable.
I have found The Brain to be useful in development as a node-mapping tool. If you write some scripts to parse your source into XML that The Brain accepts, you can browse your system easily.
The secret sauce is to put GUIDs in your code comments on each element you want to track; the nodes in The Brain can then be clicked to take you to that GUID in your IDE.
Depending on how many developers are working on the project and whether you want to reuse parts of it in other projects, loose coupling can help you a lot. If your team is big and the project spans several years, loose coupling helps, as work can be assigned to different groups of developers more easily. I use Spring/Java with lots of DI, and Eclipse offers some graphs to display dependencies. Using F3 to open the class under the cursor helps a lot. As stated in previous posts, knowing the shortcuts for your tool will help you.
One other thing to consider is creating custom classes or wrappers as they are more easily tracked than common classes that you already have (like Date).
If you use several modules or layers in an application, it can be a challenge to understand exactly what the project flow is, so you might need to create or use a custom tool to see how everything relates to everything else. I created such a tool for myself, and it helped me understand the project structure more easily.
Documentation!
Yes, you have named the major drawback of loosely coupled code. And even if, as you have probably already realized, it pays off in the end, it's true that it will always take longer to find "where" to make your modifications, and you might have to open a few files before finding "the right spot"...
But that's where something really important comes in: documentation. It's weird that no answer has explicitly mentioned it; it's a MAJOR requirement in any big development effort.
API Documentation
An API doc with a good search feature, in which each file and (almost) each method has a clear description.
"Big picture" documentation
I think it's good to have a wiki that explains the big picture. Bob has made a proxy system? How does it work? Does it handle authentication? What kind of component will use it? Not a whole tutorial, just a place where you can read for 5 minutes and figure out what components are involved and how they are linked together.
I do agree with all the points of Mark Seemann's answer, but when you get into a project for the first time(s), even if you understand the principles behind decoupling well, you'll either need a lot of guessing or some sort of help to figure out where to implement the specific feature you want to develop.
... Again: an API doc and a little developer wiki.
I am astounded that nobody has written about the testability (in terms of unit testing, of course) of loosely coupled code and the non-testability (in the same terms) of a tightly coupled design! It is a no-brainer which design you should choose. Today, with all the mock and coverage frameworks available, it is obvious - well, at least to me.
Unless you don't unit test your code, or you think you do but in fact you don't...
Testing in isolation can barely be achieved with tight coupling.
You think you have to navigate through all the dependencies from your IDE? Forget about it! It is the same situation as with compilation and runtime. Hardly any bug can be found during compilation; you cannot be sure something works unless you test it, which means executing it. Want to know what is behind the interface? Put a breakpoint and run the goddamn application.
Amen.
...updated after the comment...
Not sure whether it will serve you, but in Eclipse there is something called the hierarchy view. It shows you all the implementations of an interface within your project (not sure about the whole workspace). You can just navigate to the interface and press F4; it will then show you all the concrete and abstract classes implementing the interface.
I asked this question about the Microsoft .NET libraries and the complexity of their source code. From what I'm reading, writing general-purpose libraries and writing applications can be two different things. When writing libraries, you have to think about the client, who could literally be everyone (supposing I release the library for use by the general public).
What kind of practices or theories or techniques are useful when learning to write libraries? Where do you learn to write code like the one in the .NET library? This looks like a "black art" which I don't know too much about.
That's a pretty subjective question, but here's an objective answer: the Framework Design Guidelines book (be sure to get the 2nd edition) is a very good book about how to write effective class libraries. The content is very good, and the often-dissenting annotations are thought-provoking. Every shop should have a copy of this book available.
You definitely need to watch Josh Bloch's presentation How to Design a Good API & Why it Matters (1h 9m long). He is a Java guru, but library design and object orientation are universal.
One piece of advice often ignored by library authors is to internalize costs. If something is hard to do, the library should do it. Too often I've seen the authors of a library push something hard onto the consumers of the API rather than solving it themselves. Instead, look for the hardest things and make sure the library does them or at least makes them very easy.
I will be paraphrasing from Effective C++ by Scott Meyers, which I have found to be the best advice I got:
Adhere to the principle of least astonishment: strive to provide classes whose operators and functions have a natural syntax and an intuitive semantics. Preserve consistency with the behavior of the built-in types: when in doubt, do as the ints do.
Recognize that anything somebody can do, they will do. They'll throw exceptions, they'll assign objects to themselves, they'll use objects before giving them values, they'll give objects values and never use them, they'll give them huge values, they'll give them tiny values, they'll give them null values. In general, if it will compile, somebody will do it. As a result, make your classes easy to use correctly and hard to use incorrectly. Accept that clients will make mistakes, and design your classes so you can prevent, detect, or correct such errors.
Strive for portable code. It's not much harder to write portable programs than to write unportable ones, and only rarely will the difference in performance be significant enough to justify unportable constructs.
Even programs designed for custom hardware often end up being ported, because stock hardware generally achieves an equivalent level of performance within a few years. Writing portable code allows you to switch platforms easily, to enlarge your client base, and to brag about supporting open systems. It also makes it easier to recover if you bet wrong in the operating system sweepstakes.
Design your code so that when changes are necessary, the impact is localized. Encapsulate as much as you can; make implementation details private.
Edit: I just noticed I very nearly duplicated what cherouvim had posted; sorry about that! But turns out we're linking to different speeches by Bloch, even if the subject is exactly the same. (cherouvim linked to a December 2005 talk, I to January 2007 one.) Well, I'll leave this answer here — you're probably best off by watching both and seeing how his message and way of presenting it has evolved :)
FWIW, I'd like to point to this Google Tech Talk by Joshua Bloch, who is a greatly respected guy in the Java world, and someone who has given speeches and written extensively on API design. (Oh, and designed some exceptionally good general purpose libraries, like the Java Collections Framework!)
Joshua Bloch, Google Tech Talks, January 24, 2007:
"How To Design A Good API and Why it
Matters" (the video is about 1 hour long)
You can also read many of the same ideas in his article Bumper-Sticker API Design (but I still recommend watching the presentation!)
(Seeing you come from the .NET side, I hope you don't let his Java background get in the way too much :-) This really is not Java-specific for the most part.)
Edit: Here's another 1½ minute bit of wisdom by Josh Bloch on why writing libraries is hard, and why it's still worth putting effort in it (economies of scale) — in a response to a question wondering, basically, "how hard can it be". (Part of a presentation about the Google Collections library, which is also totally worth watching, but more Java-centric.)
Krzysztof Cwalina's blog is a good starting place. His book, Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, is probably the definitive work for .NET library design best practices.
http://blogs.msdn.com/kcwalina/
The number one rule is to treat API design just like UI design: gather information about how your users really use your UI/API, what they find helpful and what gets in their way. Use that information to improve the design. Start with users who can put up with API churn and gradually stabilize the API as it matures.
I wrote a few notes about what I've learned about API design here: http://www.natpryce.com/articles/000732.html
I'd start looking more into design patterns. You're probably not going to find much use for some of them, but as you get deeper into your library design, the patterns will become more applicable. I'd also pick up a copy of NDepend - a great code-measuring utility which may help you decouple things better. You can use the .NET libraries as examples, but personally I don't find them to be great design examples, mostly due to their complexity. Also, start looking at some open source projects to see how they're layered and structured.
A couple of separate points:
The .NET Framework isn't a class library. It's a Framework. It's a set of types meant to not only provide functionality, but to be extended by your own code. For instance, it does provide you with the Stream abstract class, and with concrete implementations like the NetworkStream class, but it also provides you the WebRequest class and the means to extend it, so that WebRequest.Create("myschema://host/more") can produce an instance of your own class deriving from WebRequest, which can have its own GetResponse method returning its own class derived from WebResponse, such that calling GetResponseStream will return your own class derived from Stream!
And your callers will not need to know this is going on behind the scenes!
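Here is a minimal sketch of that extension point. The "myschema" prefix and the My* names are hypothetical, but WebRequest.RegisterPrefix and IWebRequestCreate are the real .NET hooks being described:

using System;
using System.IO;
using System.Net;
using System.Text;

public class MySchemaResponse : WebResponse
{
    // Callers get a plain Stream back, never knowing its origin.
    public override Stream GetResponseStream()
        => new MemoryStream(Encoding.UTF8.GetBytes("hello from myschema"));
}

public class MySchemaRequest : WebRequest
{
    private readonly Uri uri;
    public MySchemaRequest(Uri uri) { this.uri = uri; }

    public override Uri RequestUri => this.uri;

    public override WebResponse GetResponse() => new MySchemaResponse();
}

public class MySchemaCreator : IWebRequestCreate
{
    public WebRequest Create(Uri uri) => new MySchemaRequest(uri);
}

// Registered once at startup:
// WebRequest.RegisterPrefix("myschema", new MySchemaCreator());
// WebRequest.Create("myschema://host/more") now returns a MySchemaRequest.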
A separate point is that for most developers, creating a reusable library is not, and should not be, the goal. The goal should be to write the code necessary to meet requirements. In the process, reusable code may be found. In that case, it should be refactored out into a separate library, where it can be reused in the future.
I go further than that (when permitted). I will usually wait until I find two pieces of code that actually do the same thing, or which overlap. Presumably both pieces of code have passed all their unit tests. I will then factor out the common code into a separate class library and run all the unit tests again. Assuming that they still pass, I've begun the creation of some reusable code that works (since the unit tests still pass).
This is in contrast to a lesson I learned in school, when the result of an entire project was a beautiful reusable library - with no code to reuse it.
(Of course, I'm sure it would have worked if any code had used it...)
I am converting an open source Java library to C#, which has a number of methods and classes tagged as deprecated. This project is an opportunity to start with a clean slate, so I plan to remove them entirely. However, being new to working on larger projects, I am nervous that the situation will arise again. Since much of agile development revolves around making something work now and refactoring later if needed, it seems like deprecation of APIs must be a common problem. Are there preventative measures I can take to avoid/minimize API deprecation, even if I am not entirely sure of the future direction of a project?
I'm not sure there is much you can do. Requirements change, and if you absolutely have to make sure that clients of the API are not broken by newer API versions, you'll have to rely on simply deprecating code until you think that no one is using the deprecated code.
Placing [Obsolete] attributes on code causes the compiler to create warnings if there are any references to the obsolete methods. This way clients of the API, if they are diligent about fixing their compiler warnings, can gradually move to the new methods without having everything break with the new version.
It's useful to use the ObsoleteAttribute overload that takes a string:
[Obsolete("Foo is deprecated. Use Bar instead for munging widgets.")]
<frivolous>
Perhaps you could create a TimeBombAttribute:
[TimeBomb(new DateTime(2010,1,1), "Foo will blow up! Better use Bar, or else.")]
In your code, reflect for methods with the timebomb attribute and throw KaboomException if they are called after the specified date. That'll make sure that after 1st January 2010 no-one is using the obsolete methods, and you can clean up your API nicely. :)
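In the same frivolous spirit, here is a minimal sketch of what that could look like. (One wrinkle: attribute arguments must be compile-time constants in C#, so this version takes the date as a string rather than a DateTime.)

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class TimeBombAttribute : Attribute
{
    public DateTime Expiry { get; }
    public string Message { get; }

    public TimeBombAttribute(string expiryIsoDate, string message)
    {
        Expiry = DateTime.Parse(expiryIsoDate);
        Message = message;
    }
}

public class KaboomException : Exception
{
    public KaboomException(string message) : base(message) { }
}

public static class TimeBomb
{
    // Call this at the top of every [TimeBomb]-decorated method.
    public static void Check(MethodBase method)
    {
        var bomb = method.GetCustomAttribute<TimeBombAttribute>();
        if (bomb != null && DateTime.Now > bomb.Expiry)
            throw new KaboomException(bomb.Message);
    }
}

// Usage inside a doomed method:
// [TimeBomb("2010-01-01", "Foo will blow up! Better use Bar, or else.")]
// public void Foo() { TimeBomb.Check(MethodBase.GetCurrentMethod()); }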
</frivolous>
As Matt says, the Obsolete attribute is your friend... but whenever you apply it, provide details of how to change calling code. That way you've got a lot better chance of people actually changing. You might also want to consider specifying which version you anticipate removing the method in (probably the next major release).
Of course, you should be diligent in making sure you don't call the obsolete code - particularly in sample code.
Since much of agile development revolves around making something work now and refactoring later if needed
That's not agile. It's cowboy coding disguised under the label of agile.
The ideal is that whatever you complete is complete, according to whatever Definition of Done you have. Usually the DoD states something along the lines of "feature implemented, tested, and related code refactored". Of course, if you are working on a throwaway prototype, you can have a more relaxed DoD.
API modifications are a difficult beast. If they are only project-internal APIs you are modifying, the best way to go is to refactor early. If you need to change the internal API, just go ahead and change all API clients at the same time. This way the refactoring debt does not grow very large and you don't have to use deprecation.
For published APIs you probably have some source and binary compatibility guarantees you have to maintain, at least until the next major release or so. Marking the old APIs deprecated works while maintaining compatibility. As with internal APIs, you should fix your internal code as soon as possible to not use the deprecated APIs.
Matt's answer is solid advice. I just wanted to mention that initially you probably want to use something along the lines of:
[Obsolete("Please use ... instead ", false)]
Once you have the code ported, change the false to true and the compiler will then treat all the calls to the method as an error.
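For illustration (the method names are hypothetical), the two stages look like this:

using System;

public class WidgetService
{
    // Stage 1, while callers migrate: compiler warning only.
    [Obsolete("OldMunge is deprecated; use Munge instead.", false)]
    public void OldMunge() { Munge(); }

    public void Munge() { }
}

// Stage 2, once the port is complete, flip the flag on OldMunge:
// [Obsolete("OldMunge is deprecated; use Munge instead.", true)]
// Any remaining call to OldMunge now fails to compile.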
Watch Josh Bloch's "How to Design a Good API and Why It Matters"
Most important w/r/t deprecation is knowing that "when in doubt, leave it out." Watch the video for clarification, but it has to do with having to support what you provide forever. If you are realistically expecting that API to be reused, you're effectively setting your decisions in stone.
I think API design is a much trickier thing to do in an Agile fashion because you're expecting it to be reused probably in many different ways. You have to worry about breaking others that are dependent on you, and so while it can be done, it's tough to have the right design emerge without getting a quick turnaround from other teams. Of course deprecation is going to help here, but I think YAGNI is a lot better design heuristic when it comes to APIs.
I think deprecation of code is an inevitable byproduct of Agile processes like continuous refactoring and incremental development. So if you end up with deprecated code as you work on your project, that's not necessarily a bad thing--just a fact of life. Of course, you will probably find that, rather than deprecating code, you end up keeping a lot of code but refactoring it into different methods, classes, and so on.
So, bottom line: I wouldn't worry about deprecating code during Agile development. If it served its purpose for a while, you're doing the right thing.
The rule of thumb for API design is to focus on what it does, rather than how it does it. Once you know the end goal, figure out the absolute minimum input you need and use that. Avoid passing your own objects as parameters, pass only data.
Separate configuration from execution. For example, maybe you have an image encoder/decoder.
Instead of making a call like:
Encoder.Encode(bytes, width, height, compression_type, compression_ratio, palette /* etc., etc. */);
Make it
Encoder.setCompressionType(compression_type);
Encoder.setCompressionRatio(compression_ratio);
// etc., etc.
Encoder.Encode(bytes, width, height);
That way adding or removing settings is much less likely to break existing implementations.
For deprecation, there are basically three types of APIs: internal, external, and public.
Internal is when it's only your team working on the code. Deprecating these APIs isn't a big deal. Your team is the only one using them, so they aren't around long, there's pressure to change them, people aren't afraid to change them, and people know how to change them.
External is when it's the same code base, but different teams are using it. This might be some common libraries in a large company, or a popular open source library. The point is, people can choose the version of the code they compile with. The ease of deprecating an API depends on the size of the organization and how well they communicate. IMO, it's the deprecator's job to update old code, rather than mark it deprecated and let warnings fly throughout the code base. Why the deprecator instead of the deprecatee? Because the deprecator is in the know; they know what changed and why.
Those two cases are pretty easy. So long as there is backwards compatibility, you can generally do whatever you'd like, update the clients yourself, or convince the maintainers to do it.
Then there are public APIs. These are basically external APIs that the clients don't have much control over, such as a web API. These are incredibly hard to update or deprecate. Most clients won't notice it's broken, won't have someone to fix it, won't get notifications that it's changing, and will only fix it once it's broken (after they've yelled at you for breaking it, of course).
I've had to do the above a few times, and it is such a chore. I think the best you can do is purposefully break it early, wait a bit, and then restore it. You send out the usual warnings and deprecations first, of course, but - trust me - nothing will happen until something breaks.
An idea I've yet to try is to let people register simple apps that run small tests. When you want to do an API update, you run the external tests and contact the affected people.
Another approach that has become popular is to have clients depend on (web) services. There are constructs out there that allow you to version your services and allow clients to perform lookups. This adds a lot more moving parts and complexity into the equation, but can be helpful if you are turning over a lot of versions and have to support multiple versions in production.
This article does a good job of explaining the problem and an approach.