Help with design problem (extending a generic interface) - oop

I am part of a student project and we are to develop a product for a company using Java EE. As "lead architect" in the project I am responsible for composing a good design which should be flexible enough for further extensions.
Background info: We are to develop a website with a drag and drop GUI with possibilities to connect data sources with data manipulations to perform on that specific data. The GUI should be generic and possible to integrate with upcoming products. This means that we cannot code to an implementation in the presentation layer. Instead we will use an interface to define what kinds of data manipulations are possible for all kinds of products. However, each product might also sport product-specific data manipulations (thus extending the interface with more methods).
The problem I have with the scenario above is that I don't see how we could pass these "product-specific data manipulations" on to the GUI and say that, in addition to the generic interface, we also possess these data manipulation actions...
Now I had a discussion with some of the more experienced programmers from the company and they told me that there is a common solution to this problem - more specifically known as the "Observer pattern". They drew something like [1] on the whiteboard and explained that it would be possible to "register" with a third party (getApplicationContext) that in turn could convey our product-specific interface. This is a common way to get rid of those nasty circular dependencies, they explained.
I have now had a look at the Observer pattern and how it works, and I still don't really get how I am supposed to solve the design problem. Could someone possibly try to explain how it would turn out in my specific scenario? I have no real problem understanding how it works with "subjects" and "observers".
Here is a UML diagram of the design where we are using a reference to the specific product. This is what is undesirable and something we would like to get around.
(maybe I got this all wrong...)
I am sorry but I can't change the picture to the correct one as I am a new user... Here is a link to an updated UML diagram:

It seems that what you are looking for is the Model-View-Controller design pattern. The Observer pattern is just a part of this design pattern. There is a short description of how to do this with Java Servlets and JavaServer Pages from Java EE in the Wikipedia article.
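For the concrete question of exposing product-specific data manipulations without the GUI referencing a concrete product, here is a minimal, hypothetical Java sketch of the "register with a third party" idea the company programmers described. Every name in it (DataManipulation, ManipulationRegistry, ProductSpecificManipulation) is invented for illustration, not taken from any existing API:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Generic contract the GUI codes against; every product implements this.
interface DataManipulation {
    String getName();
    void apply(Object data);
}

// The "third party" (e.g. obtained from an application context) that products
// register with. The GUI only ever sees this registry and the generic interface.
class ManipulationRegistry {
    private final List<DataManipulation> manipulations = new ArrayList<>();

    void register(DataManipulation manipulation) {
        manipulations.add(manipulation);
    }

    List<DataManipulation> available() {
        return Collections.unmodifiableList(manipulations);
    }
}

// A product-specific manipulation is just one more implementation that the
// product registers at start-up; no circular dependency on the GUI is needed.
class ProductSpecificManipulation implements DataManipulation {
    @Override public String getName() { return "Product-specific action"; }
    @Override public void apply(Object data) { /* product-specific logic goes here */ }
}

The GUI would then iterate over registry.available() to build its palette of drag-and-drop actions, so adding a product only means registering more implementations.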

Related

Method JavaFX TreeItem getRoot() is not visible. What is the OOP/MVC reason it is not?

I needed to get the root item of a TreeView. The obvious way to get it is to use getRoot() on the TreeView, which I use.
I like to experiment, and was wondering if I could get the same root by climbing up the tree from a leaf item (a TreeItem), calling getParent() recursively until the result is null, along the lines of the sketch below.
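A minimal sketch of that climb; the helper name findRoot is mine, not JavaFX API:

import javafx.scene.control.TreeItem;

final class TreeItems {

    // Recursively follow getParent() until it returns null; that item is the root.
    static <T> TreeItem<T> findRoot(TreeItem<T> item) {
        TreeItem<T> parent = item.getParent();
        return parent == null ? item : findRoot(parent);
    }
}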
That works as well, and in my custom TreeItem I added a public method getRoot() to play around with it. In doing so I found out that this method already exists in the parent TreeItem, but is not exposed.
My question: why would it not be exposed? Is it bad practice regarding OOP / MVC architecture?
The reason for the design is summed up by kleopatra's comment:
Why would it not be exposed I would pose it the other way round: why should it? It's convenience api at best, easy to implement by clients, not really needed - adding such to a framework/toolkit tends to exploding api/implementation to maintain.
JavaFX is filled with decisions like this on purpose. A lot of the reasoning is based on experience (good and bad) from AWT/Swing. Just some examples:
For specifying execution on the UI thread, there is a runLater API, but no invokeAndWait API like Swing, even though it would be easy for the framework to provide such an API and it has been requested.
Providing an invokeAndWait API means that naive (and experienced :-) developers could use it incorrectly to accidentally deadlock threads.
Lots of classes are final and not extensible.
Sometimes developers want to extend classes, but can't because they are final. This means that they can't override a lot of the built-in, tested functionality of the framework and accidentally break it that way. Instead they can usually use aggregation over inheritance to do what they need (see the sketch after these examples). The framework forces them to do so in order to protect itself and them.
Color objects are immutable.
Immutable objects in general make stuff easier to maintain.
Native look and feels aren't part of the framework.
You can still create them if you want, and there are 3rd party libraries that do that, but it doesn't need to be in the core framework.
The application programming interface is single-threaded, not multi-threaded.
Because the developers of the framework realized that multi-threaded UI frameworks are a failed dream.
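As a small, hedged illustration of the aggregation-over-inheritance point above, here is a Java sketch; FrameworkLabel is a hypothetical stand-in for any final framework class, not a real JavaFX type:

// Hypothetical stand-in for a final framework class that cannot be subclassed.
final class FrameworkLabel {
    private String text = "";
    void setText(String text) { this.text = text; }
    String getText() { return text; }
}

// Aggregation instead of inheritance: wrap the final class and add behaviour
// around it without touching (or risking breaking) the framework's internals.
class ShoutingLabel {
    private final FrameworkLabel delegate = new FrameworkLabel();

    void setText(String text) {
        delegate.setText(text.toUpperCase()); // the added behaviour
    }

    String getText() {
        return delegate.getText();
    }
}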
The philosophy was to code to make the 80% use case easier and the 20% use case (usually) possible, using additional user or 3rd party code, while making it difficult for the user code to accidentally (or intentionally) break the framework. You just stumbled upon one instance of an application of this philosophy.
There are a whole host of catch-phrases that you could use to describe the reason for this design approach. None of them are OOP or MVC specific. The underlying principles have been around far longer than software engineering, they are just approaches towards work and engineering in general. Here are some links if interested:
You ain't gonna need it (YAGNI)
Minimum viable product (MVP)
Worse-is-better
Muntzing
Feature creep prevention
Keep it simple, stupid (KISS)
Occam's razor

How to understand the big picture in a loosely coupled application?

We have been developing code using loose coupling and dependency injection.
A lot of "service" style classes have a constructor and one method that implements an interface. Each individual class is very easy to understand in isolation.
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
It's not easy to jump to collaborators using Eclipse because you have to go via the interfaces. If the interface is Runnable, that is no help in finding which class is actually plugged in. Really it's necessary to go back to the DI container definition and try to figure things out from there.
Here's a line of code from a dependency injected service class:-
// myExpiryCutoffDateService was injected,
Date cutoff = myExpiryCutoffDateService.get();
Coupling here is as loose as can be. The expiry date could be implemented in literally any manner.
Here's what it might look like in a more coupled application.
ExpiryDateService expiryDateService = new ExpiryDateService();
Date cutoff = expiryDateService.getCutoffDate( databaseConnection, paymentInstrument );
From the tightly coupled version, I can infer that the cutoff date is somehow determined from the payment instrument using a database connection.
I'm finding code of the first style harder to understand than code of the second style.
You might argue that when reading this class, I don't need to know how the cutoff date is figured out. That's true, but if I'm narrowing in on a bug or working out where an enhancement needs to slot in, that is useful information to know.
Is anyone else experiencing this problem? What solutions have you? Is this just something to adjust to? Are there any tools to allow visualisation of the way classes are wired together? Should I make the classes bigger or more coupled?
(Have deliberately left this question container-agnostic as I'm interested in answers for any).
While I don't know how to answer this question in a single paragraph, I attempted to answer it in a blog post instead: http://blog.ploeh.dk/2012/02/02/LooseCouplingAndTheBigPicture.aspx
To summarize, I find that the most important points are:
Understanding a loosely coupled code base requires a different mindset. While it's harder to 'jump to collaborators' it should also be more or less irrelevant.
Loose coupling is all about understanding a part without understanding the whole. You should rarely need to understand it all at the same time.
When zeroing in on a bug, you should rely on stack traces rather than the static structure of the code in order to learn about collaborators.
It's the responsibility of the developers writing the code to make sure that it's easy to understand - it's not the responsibility of the developer reading the code.
Some tools are aware of DI frameworks and know how to resolve dependencies, allowing you to navigate your code in a natural way. But when that isn't available, you just have to use whatever features your IDE provides as best you can.
I use Visual Studio and a custom-made framework, so the problem you describe is my life. In Visual Studio, SHIFT+F12 is my friend. It shows all references to the symbol under the cursor. After a while you get used to the necessarily non-linear navigation through your code, and it becomes second-nature to think in terms of "which class implements this interface" and "where is the injection/configuration site so I can see which class is being used to satisfy this interface dependency".
There are also extensions available for VS which provide UI enhancements to help with this, such as Productivity Power Tools. For instance, you can hover over an interface, an info box will pop up, and you can click "Implemented By" to see all the classes in your solution implementing that interface. You can double-click to jump to the definition of any of those classes. (I still usually just use SHIFT+F12 anyway).
I just had an internal discussion about this, and ended up writing this piece, which I think is too good not to share. I'm copying it here (almost) unedited, but even though it's part of a bigger internal discussion, I think most of it can stand alone.
The discussion is about introduction of a custom interface called IPurchaseReceiptService, and whether or not it should be replaced with use of IObserver<T>.
Well, I can't say that I have strong data points about any of this - it's just some theories that I'm pursuing... However, my theory about cognitive overhead at the moment goes something like this: consider your special IPurchaseReceiptService:
public interface IPurchaseReceiptService
{
    void SendReceipt(string transactionId, string userGuid);
}
If we keep it as the Header Interface it currently is, it only has that single SendReceipt method. That's cool.
What's not so cool is that you had to come up with a name for the interface, and another name for the method. There's a bit of overlap between the two: the word Receipt appears twice. IME, sometimes that overlap can be even more pronounced.
Furthermore, the name of the interface is IPurchaseReceiptService, which isn't particularly helpful either. The Service suffix is essentially the new Manager, and is, IMO, a design smell.
Additionally, not only did you have to name the interface and the method, but you also have to name the variable when you use it:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IPurchaseReceiptService purchaseReceiptService,
    EvoCipher cipher
)
At this point, you've essentially said the same thing thrice. This is, according to my theory, cognitive overhead, and a smell that the design could and should be simpler.
Now, contrast this to use of a well-known interface like IObserver<T>:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IObserver<TransactionInfo> purchaseReceiptService,
    EvoCipher cipher
)
This enables you to get rid of the bureaucracy and reduce the design to the heart of the matter. You still have intention-revealing naming - you only shift the design from a Type Name Role Hint to an Argument Name Role Hint.
When it comes to the discussion about 'disconnectedness', I'm under no illusion that use of IObserver<T> will magically make this problem go away, but I have another theory about this.
My theory is that the reason many programmers find programming to interfaces so difficult is exactly because they are used to Visual Studio's Go to definition feature (incidentally, this is yet another example of how tooling rots the mind). These programmers are perpetually in a state of mind where they need to know what's 'on the other side of an interface'. Why is this? Could it be because the abstraction is poor?
This ties back to the RAP, because if you confirm programmers' belief that there's a single, particular implementation behind every interface, it's no wonder they think that interfaces are only in the way.
However, if you apply the RAP, I hope that slowly, programmers will learn that behind a particular interface, there may be any implementation of that interface, and their client code must be able to handle any implementation of that interface without changing the correctness of the system. If this theory holds, we've just introduced the Liskov Substitution Principle into a code base without scaring anyone with high-brow concepts they don't understand :)
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
This is not accurate. For each class, you know exactly what kinds of objects the class depends on in order to provide its functionality at runtime.
You know them because you know which objects are expected to be injected.
What you don't know is the actual concrete class that will be injected at runtime which will implement the interface or base class that you know your class(es) depend on.
So if you want to see which concrete class is actually injected, you just have to look at the configuration for that class.
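For example, with Spring's Java-based configuration the binding is visible in one place; the type names below are hypothetical, chosen to match the expiry-date example from the question:

import java.util.Date;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical interface and implementation matching the question's example.
interface ExpiryCutoffDateService {
    Date get();
}

class PaymentBasedExpiryCutoffDateService implements ExpiryCutoffDateService {
    @Override
    public Date get() {
        return new Date(); // placeholder; real logic would use payment data
    }
}

@Configuration
class ExpiryConfiguration {

    // The one spot that reveals which concrete class satisfies the interface.
    @Bean
    ExpiryCutoffDateService expiryCutoffDateService() {
        return new PaymentBasedExpiryCutoffDateService();
    }
}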
You could also use facilities provided by your IDE.
Since you refer to Eclipse: Spring has a plugin for it, which also has a visual tab displaying the beans you configure. Did you check that? Isn't that what you are looking for?
Also check out the same discussion on the Spring Forum.
UPDATE:
Reading your question again, I don't think that this is a real question.
I mean this in the following manner.
Like all things, loose coupling is not a panacea; it has its own disadvantages.
Most people tend to focus on the benefits, but like any solution it has its drawbacks.
What you do in your question is describe one of its main disadvantages: it is indeed not easy to see the big picture, since everything is configurable and pluggable.
There are other drawbacks one could complain about as well, e.g. that it is slower than a tightly coupled application, and that would also be true.
In any case, to reiterate: what you describe in your question is not a problem you have stumbled upon that has a standard solution (or any solution, for that matter).
It is one of the drawbacks of loose coupling and you have to decide if this cost is higher than what you actually gain by it, like in any design-decision trade off.
It is like asking:
Hey, I am using this pattern named Singleton. It works great but I can't create new objects! How can I get around this problem guys????
Well you can't; but if you need to, perhaps singleton is not for you....
One thing that helped me is placing multiple closely related classes in the same file. I know this goes against the general advice (of having 1 class per file) and I generally agree with this, but in my application architecture it works very well. Below I will try to explain in which case this is.
The architecture of my business layer is designed around the concept of business commands. Command classes (simple DTO with only data and no behavior) are defined and for each command there is a 'command handler' that contains the business logic to execute this command. Each command handler implements the generic ICommandHandler<TCommand> interface, where TCommand is the actual business command.
Consumers take a dependency on the ICommandHandler<TCommand> and create new command instances and use the injected handler to execute those commands. This looks like this:
public class Consumer
{
    private ICommandHandler<CustomerMovedCommand> handler;

    public Consumer(ICommandHandler<CustomerMovedCommand> h)
    {
        this.handler = h;
    }

    public void MoveCustomer(int customerId, Address address)
    {
        var command = new CustomerMovedCommand();
        command.CustomerId = customerId;
        command.NewAddress = address;
        this.handler.Handle(command);
    }
}
Now consumers only depend on a specific ICommandHandler<TCommand> and have no notion of the actual implementation (as it should be). However, although the Consumer should know nothing about the implementation, during development I (as a developer) am very much interested in the actual business logic that is executed, simply because development is done in vertical slices; meaning that I'm often working on both the UI and business logic of a simple feature. This means I'm often switching between business logic and UI logic.
So what I did was put the command (in this example the CustomerMovedCommand and the implementation of ICommandHandler<CustomerMovedCommand>) in the same file, with the command first. Because the command itself is concrete (since it's a DTO there is no reason to abstract it), jumping to the class is easy (F12 in Visual Studio). By placing the handler next to the command, jumping to the command also means jumping to the business logic.
Of course this only works when it is okay for the command and handler to be living in the same assembly. When your commands need to be deployed separately (for instance when reusing them in a client/server scenario), this will not work.
Of course this is just 45% of my business layer. Another big piece, however (say 45%), is the queries, and they are designed similarly, using a query class and a query handler. These two classes are also placed in the same file, which -again- allows me to navigate quickly to the business logic.
Because the commands and queries are about 90% of my business layer, I can in most cases move very quickly from presentation layer to business layer and even navigate easily within the business layer.
I must say these are the only two cases where I place multiple classes in the same file, but it makes navigation a lot easier.
If you want to learn more about how I designed this, I've written two articles about this:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
In my opinion, loosely coupled code can help you much but I agree with you about the readability of it.
The real problem is that the names of methods should also convey valuable information.
That is the Intention-Revealing Interface principle as stated by
Domain Driven Design ( http://domaindrivendesign.org/node/113 ).
You could rename the get method:
// intention revealing name
Date cutoff = myExpiryCutoffDateService.calculateFromPayment();
I suggest you read up thoroughly on DDD principles; your code could turn out much more readable and thus more manageable.
I have found The Brain to be useful in development as a node mapping tool. If you write some scripts to parse your source into XML The Brain accepts, you could browse your system easily.
The secret sauce is to put guids in your code comments on each element you want to track, then the nodes in The Brain can be clicked to take you to that guid in your IDE.
Depending on how many developers are working on your projects and whether you want to reuse some parts in different projects, loose coupling can help you a lot. If your team is big and the project will span several years, having loose coupling helps, as work can be assigned to different groups of developers more easily. I use Spring/Java with lots of DI, and Eclipse offers some graphs to display dependencies. Using F3 to open the class under the cursor helps a lot. As stated in previous posts, knowing the shortcuts for your tool will help you.
One other thing to consider is creating custom classes or wrappers as they are more easily tracked than common classes that you already have (like Date).
If you use several modules or layers of application, it can be a challenge to understand exactly what the project flow is, so you might need to create/use some custom tool to see how everything is related. I have created such a tool for myself, and it helped me to understand the project structure more easily.
Documentation!
Yes, you named the major drawback of loosely coupled code. And even if you have probably already realized that in the end it will pay off, it's true that it will always take longer to find "where" to make your modifications, and you might have to open a few files before finding "the right spot"...
But that's where something really important comes in: documentation. It's weird that no answer explicitly mentioned it; it's a MAJOR requirement in any big development effort.
API Documentation
An API doc with a good search feature, where each file and (almost) every method has a clear description.
"Big picture" documentation
I think it's good to have a wiki that explains the big picture. Bob has made a proxy system? How does it work? Does it handle authentication? What kinds of components will use it? Not a whole tutorial, just a place where you can read for 5 minutes and figure out what components are involved and how they are linked together.
I do agree with all the points of Mark Seemann's answer, but when you get into a project for the first time(s), even if you understand the principles behind decoupling well, you'll either need a lot of guessing or some sort of help to figure out where to implement a specific feature you want to develop.
... Again: API docs and a little developer wiki.
I am astounded that nobody has written about the testability (in terms of unit testing, of course) of loosely coupled code and the non-testability (in the same terms) of a tightly coupled design! It is a no-brainer which design you should choose. Today, with all the mocking and coverage frameworks, it is obvious - well, at least for me.
Unless you don't unit test your code, or you think you do but in fact you don't...
Testing in isolation can barely be achieved with tight coupling.
You think you have to navigate through all the dependencies from your IDE? Forget about it! It is the same situation as with compilation and runtime. Hardly any bug can be found during compilation; you cannot be sure something works unless you test it, which means executing it. Want to know what is behind the interface? Put a breakpoint and run the goddamn application.
Amen.
...updated after the comment...
Not sure if it is going to serve you, but in Eclipse there is something called the Type Hierarchy view. It shows you all the implementations of an interface within your project (not sure about the whole workspace). You can just navigate to the interface and press F4; it will then show you all the concrete and abstract classes implementing the interface.

How to Design a generic business entity and still be OO?

I am working on a packaged product that is supposed to cater to multiple clients with varying requirements (to a certain degree) and as such should be built in a manner flexible enough to be customizable by each specific client. The kind of customization we are talking about here is that different clients may have differing attributes for some of the key business objects. Also, they could have differing business logic tied in with their additional attributes as well.
As a very simplistic example: Consider "Automobile" to be a business entity in the system which has 4 key attributes, i.e. VehicleNumber, YearOfManufacture, Price and Colour.
It is possible that one of the clients using the system adds 2 more attributes to Automobile, namely ChassisNumber and EngineCapacity. This client needs some business logic associated with these fields to validate that the same ChassisNumber doesn't already exist in the system when a new Automobile gets added.
Another client just needs one additional attribute called SaleDate. SaleDate has its own business logic check which validates that the vehicle doesn't exist in some police records as a stolen vehicle when the sale date is entered.
Most of my experience has been in making enterprise apps for a single client, and I am really struggling to see how I could handle a business entity whose attributes are dynamic, and which also has the capacity for dynamic business logic, in an object-oriented paradigm.
Key Issues
Are there any general OO principles/patterns that would help me in tackling this kind of design?
I am sure people who have worked on generic / packaged products would have faced similar scenarios in most of them. Any advice / pointers / general guidance is also appreciated.
My technology is .NET 3.5/ C# and the project has a layered architecture with a business layer that consists of business entities that encompass their business logic
This is one of our biggest challenges, as we have multiple clients that all use the same code base, but have widely varying needs. Let me share our evolution story with you:
Our company started out with a single client, and as we began to get other clients, you'd start seeing things like this in the code:
if(clientName == "ABC") {
    // do it the way ABC client likes
} else {
    // do it the way most clients like.
}
Eventually we got wise to the fact that this makes really ugly and unmanageable code. If another client wanted theirs to behave like ABC's in one place and CBA's in another place, we were stuck. So instead, we turned to a .properties file with a bunch of configuration points.
if((bool)configProps.get("LastNameFirst")) {
    // output the last name first
} else {
    // output the first name first
}
This was an improvement, but still very clunky. "Magic strings" abounded. There was no real organization or documentation around the various properties. Many of the properties depended on other properties and wouldn't do anything (or would even break something!) if not used in the right combinations. Much (possibly even most) of our time in some iterations was spent fixing bugs that arose because we had "fixed" something for one client that broke another client's configuration. When we got a new client, we would just start with the properties file of another client that had the configuration "most like" the one this client wanted, and then try to tweak things until they looked right.
We tried using various techniques to get these configuration points to be less clunky, but only made moderate progress:
if(userDisplayConfigBean.showLastNameFirst()) {
    // output the last name first
} else {
    // output the first name first
}
There were a few projects to get these configurations under control. One involved writing an XML-based view engine so that we could better customize the displays for each client.
<client name="ABC">
    <field name="last_name" />
    <field name="first_name" />
</client>
Another project involved writing a configuration management system to consolidate our configuration code, enforce that each configuration point was well documented, allow super users to change the configuration values at run-time, and allow the code to validate each change to avoid getting an invalid combination of configuration values.
These various changes definitely made life a lot easier with each new client, but most of them failed to address the root of our problems. The change that really benefited us most was when we stopped looking at our product as a series of fixes to make something work for one more client, and we started looking at our product as a "product." When a client asked for a new feature, we started to carefully consider questions like:
How many other clients would be able to use this feature, either now or in the future?
Can it be implemented in a way that doesn't make our code less manageable?
Could we implement a different feature than the one they are asking for, which would still meet their needs while being more suited to reuse by other clients?
When implementing a feature, we would take the long view. Rather than creating a new database field that would only be used by one client, we might create a whole new table which could allow any client to define any number of custom fields. It would take more work up-front, but we could allow each client to customize their own product with a great degree of flexibility, without requiring a programmer to change any code.
That said, sometimes there are certain customizations that you can't really accomplish without investing an enormous effort in complex Rules engines and so forth. When you just need to make it work one way for one client and another way for another client, I've found that your best bet is to program to interfaces and leverage dependency injection. If you follow "SOLID" principles to make sure your code is written modularly with good "separation of concerns," etc., it isn't nearly as painful to change the implementation of a particular part of your code for a particular client:
public class FirstLastNameGenerator : INameDisplayGenerator
{
    IPersonRepository _personRepository;

    public FirstLastNameGenerator(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    public string GenerateDisplayNameForPerson(int personId)
    {
        Person person = _personRepository.GetById(personId);
        return person.FirstName + " " + person.LastName;
    }
}

public class AbcModule : NinjectModule
{
    public override void Load()
    {
        Rebind<INameDisplayGenerator>().To<FirstLastNameGenerator>();
    }
}
This approach is enhanced by the other techniques I mentioned earlier. For example, I didn't write an AbcNameGenerator because maybe other clients will want similar behavior in their programs. But using this approach you can fairly easily define modules that override default settings for specific clients, in a way that is very flexible and extensible.
Because systems like this are inherently fragile, it is also important to focus heavily on automated testing: Unit tests for individual classes, integration tests to make sure (for example) that your injection bindings are all working correctly, and system tests to make sure everything works together without regressing.
PS: I use "we" throughout this story, even though I wasn't actually working at the company for much of its history.
PPS: Pardon the mixture of C# and Java.
That's a Dynamic Object Model or Adaptive Object Model you're building. And of course, when customers start adding behaviour and data, they are programming, so you need to have version control, tests, release, namespace/context and rights management for that.
A way of approaching this is to use a meta-layer, or reflection, or both. In addition you will need to provide a customisation application which will allow modification, by the users, of your business logic layer. Such a meta-layer does not really fit in your layered architecture - it is more like a layer orthogonal to your existing architecture, though the running application will probably need to refer to it, at least on initialisation. This type of facility is probably one of the fastest ways of screwing up the production application known to man, so you must:
Ensure that the access to this editor is limited to people with a high level of rights on the system (eg administrator).
Provide a sandbox area for the customer modifications to be tested before any changes they are testing are put on the production system.
An "OOPS" facility whereby they can revert their production system to either your provided initial default, or to the last revision before the change.
Your meta-layer must be very tightly specified so that the range of activities is closely defined - George Orwell's "What is not specifically allowed, is forbidden."
Your meta-layer will have objects in it such as Business Object, Method, Property and events such as Add Business Object, Call Method etc.
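A very rough, hypothetical Java sketch of the smallest core of such a meta-layer (all names are invented for illustration, not a definitive design):

import java.util.LinkedHashMap;
import java.util.Map;

// A property descriptor: the meta-level description of one attribute.
class PropertyDescriptor {
    final String name;
    final Class<?> type;
    PropertyDescriptor(String name, Class<?> type) { this.name = name; this.type = type; }
}

// The meta-level "Business Object" definition: a named set of allowed properties.
class BusinessObjectType {
    final String name;
    final Map<String, PropertyDescriptor> properties = new LinkedHashMap<>();
    BusinessObjectType(String name) { this.name = name; }
    void addProperty(PropertyDescriptor property) { properties.put(property.name, property); }
}

// A runtime instance described by such a type. Setting an undeclared property
// fails, in the spirit of "what is not specifically allowed, is forbidden".
class BusinessObject {
    final BusinessObjectType type;
    private final Map<String, Object> values = new LinkedHashMap<>();
    BusinessObject(BusinessObjectType type) { this.type = type; }

    void set(String property, Object value) {
        if (!type.properties.containsKey(property)) {
            throw new IllegalArgumentException("Unknown property: " + property);
        }
        values.put(property, value);
    }

    Object get(String property) { return values.get(property); }
}

Events such as "Add Business Object" or "Call Method" would then be hooks raised by this layer, which is where the rights management and sandboxing mentioned above would attach.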
There is a wealth of information about meta-programming available on the web, but I would start with Pattern Languages of Program Design Vol 2 or any of the WWW resources related to, or emanating from Kent or Coplien.
We develop an SDK that does something like this. We chose COM for our core because we were far more comfortable with it than with low-level .NET, but no doubt you could do it all natively in .NET.
The basic architecture is something like this: Types are described in a COM type library. All types derive from a root type called Object. A COM DLL implements this root Object type and provides generic access to derived types' properties via IDispatch. This DLL is wrapped in a .NET PIA assembly because we anticipate that most developers will prefer to work in .NET. The Object type has a factory method to create objects of any type in the model.
Our product is at version 1 and we haven't implemented methods yet - in this version business logic must be coded into the client application. But our general vision is that methods will be written by the developer in his language of choice, compiled to .NET assemblies or COM DLLs (and maybe Java too) and exposed via IDispatch. Then the same IDispatch implementation in our root Object type can call them.
If you anticipate that most of the custom business logic will be validation (such as checking for duplicate chassis numbers) then you could implement some general events on your root Object type (assuming you did it something like the way we do.) Our Object type fires an event whenever a property is updated, and I suppose this could be augmented by a validation method that gets called automatically if one is defined.
It takes a lot of work to create a generic system like this, but the payoff is that application development on top of the SDK is very quick.
You say that your customers should be able to add custom properties and implement business logic themselves "without programming". If your system also implements data storage based on the types (ours does) then the customer could add properties without programming, by editing the model (we provide a GUI model editor.) You could even provide a generic user application that dynamically presents the appropriate data-entry controls depending on the types, so your customers could capture custom data without additional programming. (We provide a generic client application but it's more a developer tool than a viable end-user application.) I don't see how you could allow your customers to implement custom logic without programming... unless you want to provide some kind of drag-n-drop GUI workflow builder... surely a huge task.
We don't envisage business users doing any of this stuff. In our development model all customisation is done by a developer, but not necessarily an expensive one - part of our vision is to allow less experienced developers produce robust business applications.
Design a core model that acts as its own independent project
Here's a list of some possible basic requirements...
The core design would contain:
classes that work in (and can possibly be extended by) all of the subprojects.
more complex tools like database interactions (unless those are project specific)
a general configuration structure that should be considered standard across all projects
Then, all of the subsequent projects that are customized per client are considered extensions of this core project.
What you're describing is the basic purpose of any Framework. Namely, create a core set of functionality that can be set apart from the whole so you don't have to duplicate that development effort in every project you create. Ie, drop in a framework and half your work is done already.
You might say, "what about the SCM (Software Configuration Management)?"
How do you track revision history of all of the subprojects without including the core into the subproject repository?
Fortunately, this is an old problem. Many software projects, especially those in the linux/open source world, make extensive use of external libraries and plugins.
In fact, git has a command that's specifically used to import one project repository into another as a sub-repository (preserving all of the sub-repository's revision history, etc). You can't modify the contents of the sub-repository from the parent project because the parent won't track its history at all.
The command I'm talking about is called 'git submodule'.
You may ask, "what if I develop a really cool feature in one client's project that I'd like to use in all of my client's projects?".
Just add that feature to the core and run a 'git submodule sync' on all the other projects. The way git submodule works is, it points to a specific commit within the sub-repository's history tree. So, when that tree is changed upstream, you need to pull those changes back downstream to the projects where they're used.
The structure to implement such a thing would work like this. Let's say that your software is written specifically to manage a car dealership (inventory, sales, employees, customers, orders, etc...). You create a core module that covers all of these features because they are expected to be used in the software for all of your clients.
But you have recently gained a new client who wants to be more tech savvy by adding online sales to their dealership. Of course, their website is designed by a separate team of web developers/designers and a webmaster, but they want a web API (i.e., a service layer) to tap into the current infrastructure for their website.
What you'd do is create a project for the client, we'll call it WebDealersRUs and link the core submodule into the repository.
The hidden benefit of this is that once you start to look at a codebase as pluggable parts, you can start to design them from the start as modular pieces that are capable of being dropped into a project with very little effort.
Consider the example above. Let's say that your client base is starting to see the merits of adding a web front-end to increase sales. Just pull the web API out of WebDealersRUs into its own repository and link it back in as a submodule. Then propagate it to all of your clients that want it.
What you get is a major payoff with minimal effort.
Of course there will always be parts of every project that are client specific (branding, etc.). That's why every client should have a separate repository containing their unique version of the software. But that doesn't mean that you can't pull parts out and generalize them to be reused in subsequent projects.
While I approach this issue from the macro level, it can be applied to smaller/more specific parts of the codebase. The key here is that code you wish to re-use needs to be genericized.
OOP comes into play here as follows: where the functionality is implemented in the core but extended in the client's code, you'll use a base class and inherit from it; where the functionality is expected to return a similar type of result but the implementations of that functionality may be wildly different across classes (i.e., there's no direct inheritance hierarchy), it's best to use an interface to enforce that relationship.
I know your question is general, not tied to a technology, but since you mention you actually work with .NET, I suggest you look at a new and very important technology piece that is part of .NET 4: the 'dynamic' type.
There is also a good article on CodeProject here: DynamicObjects – Duck-Typing in .NET.
It's probably worth looking at because, if I had to implement the dynamic system you describe, I would certainly try to implement my entities based on the DynamicObject class and add custom properties and methods using the TryGetxxx methods. It also depends on whether you are focused on compile time or runtime. Here is an interesting link on SO: Dynamically adding members to a dynamic object on this subject.
Two approaches come to mind:
1) If different clients fall into the same domain (such as Manufacturing/Finance), then it's better to design the objects in such a way that a BaseObject holds the attributes which are very common, while the ones that vary between clients are stored as key-value pairs (see the sketch after this list). On top of that, try to implement a rules engine like IBM ILOG (http://www-01.ibm.com/software/integration/business-rule-management/rulesnet-family/about/).
2) Predictive Model Markup Language (http://en.wikipedia.org/wiki/PMML)
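For point 1, here is a minimal Java sketch using the Automobile example from the question (the question's stack is .NET/C#, but the idea translates directly; all member names are illustrative):

import java.util.HashMap;
import java.util.Map;

// Common attributes shared by every client are ordinary fields...
class Automobile {
    private final String vehicleNumber;
    private final int yearOfManufacture;

    // ...while client-specific attributes (ChassisNumber, SaleDate, ...) live
    // in a key-value map and are validated by a pluggable rules engine.
    private final Map<String, Object> customAttributes = new HashMap<>();

    Automobile(String vehicleNumber, int yearOfManufacture) {
        this.vehicleNumber = vehicleNumber;
        this.yearOfManufacture = yearOfManufacture;
    }

    void setCustomAttribute(String name, Object value) {
        customAttributes.put(name, value);
    }

    Object getCustomAttribute(String name) {
        return customAttributes.get(name);
    }

    String getVehicleNumber() { return vehicleNumber; }
    int getYearOfManufacture() { return yearOfManufacture; }
}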

How to design a business logic layer

To be perfectly clear, I do not expect a solution to this problem. A big part of figuring this out is obviously solving the problem. However, I don't have a lot of experience with well architected n-tier applications and I don't want to end up with an unruly BLL.
At the moment of writing this, our business logic is largely an intermingled ball of twine. An intergalactic mess of dependencies with the same identical business logic being replicated more than once. My focus right now is to pull the business logic out of the thing we refer to as a data access layer, so that I can define well-known events that can be subscribed to. I think I want to support an event-driven/reactive programming model.
My hope is that there are certain attainable goals that tell me how to design this collection of classes in a manner well suited for business logic. If there are things that differentiate a good BLL from a bad BLL, I'd like to hear more about them.
As a seasoned programmer but fairly modest architect I ask my fellow community members for advice.
Edit 1:
So the validation logic goes into the business objects, but that means that the business objects need to communicate validation errors/logic back to the GUI. That gets me thinking of implementing business operations as objects in their own right, to provide a lot more metadata about the necessities of an operation. I'm not a big fan of code cloning.
Kind of a broad question. Separate your DB from your business logic (horrible term) with ORM tech (NHibernate perhaps?). That lets you stay in OO land mostly (obviously) and you can mostly ignore the DB side of things from an architectural point of view.
Moving on, I find Domain Driven Design (DDD) to be the most successful method for breaking a complex system into manageable chunks, and although it gets no respect I genuinely find UML - especially action and class diagrams - to be critically useful in understanding and communicating system design.
General advice: Interface everything, build your unit tests from the start, and learn to recognise and separate the reusable service components that can exist as subsystems. FWIW if there's a bunch of you working on this, I'd also agree on and aggressively use StyleCop from the get-go :)
I have found some of the practices of Domain Driven Design to be excellent when it comes to splitting up complex business logic into more manageable/testable chunks.
Have a look through the sample code from the following link:
http://dddpds.codeplex.com/
DDD focuses on your domain layer, or BLL if you like. I hope it helps.
We're just talking about this from an architecture standpoint, and what remains as the gist of it is "abstraction, abstraction, abstraction".
You could use EBC to design top-down and pass the interface definitions to the programmer teams. Using a methodology like this (or any other visualisation technique) to visualize the dependencies prevents you from duplicating business logic anywhere in your project.
Hmm, I can tell you the technique we used for a rather large database-centered application. We had one class which managed the data layer, as you suggested, which had the suffix DL. We had a program which automatically generated this source file (which was quite convenient), though it also meant that if we wanted to extend functionality, you needed to derive the class, since upon regeneration of the source you'd overwrite it.
We had another file ending with OBJ which simply defined the actual database row handled by the data layer.
And last but not least, with a well-formed base class there was a file ending in BS (standing for business logic) as the only file not generated automatically, defining event methods such as "New" and "Save" such that by calling the base, the default action was done. Therefore, any deviation from the norm could be handled in this file (including complete rewrites of default functionality if necessary).
You should create a single group of such files for each table and its children (or grandchildren) tables which derive from that master table. You'll also need a factory which contains the full names of all objects so that any object can be created via reflection. So to patch the program, you'd merely have to derive from the base functionality and update a line in the database so that the factory creates that object rather than the default.
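A minimal sketch of that reflection-based factory (the class name is hypothetical); the fully qualified name it receives would come from the database row mentioned above:

// Creates any business object from its fully qualified class name, so a derived
// class can replace the default simply by changing the stored name.
class BusinessObjectFactory {

    Object create(String fullyQualifiedClassName) {
        try {
            return Class.forName(fullyQualifiedClassName)
                        .getDeclaredConstructor()
                        .newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot create " + fullyQualifiedClassName, e);
        }
    }
}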
Hope that helps, though I'll leave this a community wiki response so perhaps you can get some more feedback on this suggestion.
Have a look at this thread. It may give you some thoughts.
How should my business logic interact with my data layer?
This guide from Microsoft could also be helpful.
Regarding "Edit 1" - I've encountered exactly that problem many times. I agree with you completely: there are multiple places where the same validation must occur.
The way I've resolved it in the past is to encapsulate the validation rules somehow. Metadata/XML, separate objects, whatever. Just make sure it's something that can be requested from the business objects, taken somewhere else and executed there. That way, you're writing the validation code once, and it can be executed by your business objects or UI objects, or possibly even by third-party consumers of your code.
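As a small, hypothetical sketch of the "separate objects" option (written in Java here, with invented names): each rule is an object the business object can hand out, so the same rule can run in the business layer, the UI, or a third-party consumer.

import java.util.ArrayList;
import java.util.List;

// A rule that can be requested from a business object and executed anywhere.
interface ValidationRule<T> {
    List<String> validate(T candidate);
}

class Customer {
    private final String lastName;
    Customer(String lastName) { this.lastName = lastName; }
    String getLastName() { return lastName; }
}

class RequiredLastNameRule implements ValidationRule<Customer> {
    @Override
    public List<String> validate(Customer candidate) {
        List<String> errors = new ArrayList<>();
        if (candidate.getLastName() == null || candidate.getLastName().trim().isEmpty()) {
            errors.add("Last name is a required field.");
        }
        return errors;
    }
}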
There is one caveat: some validation rules are easy to encapsulate/transport; "last name is a required field" for example. However, some of your validation rules may be too complex and involve far too many objects to be easily encapsulated or described in metadata: "user can include that coupon only if they aren't an employee, and the order is placed on labor day weekend, and they have between 2 and 5 items of this particular type in their cart, unless they also have these other items in their cart, but only if the color is one of our 'premiere sale' colors, except blah blah blah...." - you know how business 'logic' is! ;)
In those cases, I usually just accept the fact that there will be some additional validation done only at the business layer, and ensure there's a way for those errors to be propagated back to the UI layer when they occur (you're going to need that communication channel anyway, to report back persistence-layer errors).

Pattern for client-side update in SOA

I want to develop a data-driven WPF application which uses WCF to connect to the server side, which itself uses NHibernate to persist data. For example, there is a domain object called "Customer" and there is also a flattened (with AutoMapper) "CustomerDTO" which is returned by a WCF operation called "GetCustomer(int customerId)".
I don't know where I should do data validation and how I should handle client-side updating, so that one could modify single or multiple properties on the client by editing a form and finally clicking "save"...
Could you please provide me with some common patterns for such a situation, or any best-practice examples which target real LOB applications (n-tiered pattern, multiple layers, etc.)?
Sounds like a good fit for Self-Tracking Entities. Be aware that STEs are hot off the press and still have a bit more maturing to do. The approach used with them should answer your common-patterns question.