There is plenty of documentation out there that talks about design patterns (e.g. Visitor), SOLID (Single Responsibility etc), KISS (Keep it Simple, Stupid), tiered design etc.
One thing I don't fully understand is how to decide when a new project/DLL is required when extending an application. Are there any criteria for this?
For example, System.Windows.Forms (http://msdn.microsoft.com/en-us/library/system.windows.forms.containercontrol.aspx) is part of System.Windows.Forms.dll yet it derives from System.MarshalByRefObject, which is part of mscorlib.dll.
You're mixing up assemblies (DLLs) and namespaces.
Assemblies are the binary files which contain the implementations of classes, etc.
Namespaces are just a way to organize classes, enums, etc. into logical groups, to keep every class from being accessible at every level and to prevent naming conflicts (e.g. System.Windows.Forms.Timer vs. System.Threading.Timer).
System.Windows.Forms is a namespace, not a type, so it doesn't derive from anything - and System doesn't live solely in mscorlib.dll. Anyone can put anything in the System namespace; even you could do it. System.Windows.Forms is just a sub-namespace of System.
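To make the Timer conflict concrete, here's a minimal sketch (the class name TimerDemo is invented for illustration); alias directives pick out which Timer you mean:

using FormsTimer = System.Windows.Forms.Timer;
using ThreadingTimer = System.Threading.Timer;

class TimerDemo
{
    // Same simple name "Timer", two different types in two different assemblies.
    FormsTimer _uiTimer = new FormsTimer();  // defined in System.Windows.Forms.dll
    ThreadingTimer _workerTimer;             // defined in mscorlib.dll
}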
There are several reasons for breaking a subset of code out into a separate assembly. A big one is reusability. If you have some common controls or utilities, you can maintain them in their own DLL and use them across projects without copying and pasting code.
Don't confuse tiers with layers. Layering your code is almost always a must. Splitting your code out into separate physical tiers, however, is something that you usually don't want to do until you actually need to (following the KISS principle).
If you layer your code properly, then when the time comes that you need to break it out into separate tiers, doing so should be a very painless process. If, however, you never layered your code properly you'll find that splitting out the tiers will be very difficult.
As a simple example, let's say you create a login form, and you put all the logic to gather the system information, access the database, validate the user credentials, and build the permissions directly into the WinForm class. The code I just described has only 1 layer and only 1 tier. If you then found yourself needing to create a web-based login page using ASP.NET, you would find it very difficult to reuse the existing code. With the web-based login, you'd want, at the very least, to separate the UI logic from the business/data access logic, but because it's all directly in the WinForm class, it's all unusable without refactoring the code.
Now, let's say, instead of putting all that code in the form, you took the time to layer it properly. Let's say you broke out all of the code that accessed the database and put it into data access classes. Then you took all of the business logic code and put it into business classes. At that point, the actual code in the WinForm class should be limited to nothing but UI-related logic, such as handling control events and setting labels. In this second example, you still have only 1 tier, but you have three distinct and independent layers (viz. UI, Business, and Data Access).
If you had already layered your code like that, then when the time came that you needed to reuse it in the web-based project, you could easily move the business and data access layers into a class library (dll) and then reuse them in the ASP.NET project for the server-side tier.
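As a rough sketch of that separation (all class and control names here are invented for illustration, not taken from a real project):

using System;
using System.Windows.Forms;

// Data access layer: the only code that talks to the database.
public class UserRepository
{
    public User GetUser(string userName)
    {
        // ... run the query and map the result to a User ...
        return null;
    }
}

public class User
{
    public bool PasswordMatches(string password)
    {
        // ... hash the supplied password and compare ...
        return false;
    }
}

// Business layer: credential validation, permission building.
public class LoginService
{
    private readonly UserRepository _repository;

    public LoginService(UserRepository repository)
    {
        _repository = repository;
    }

    public bool ValidateCredentials(string userName, string password)
    {
        User user = _repository.GetUser(userName);
        return user != null && user.PasswordMatches(password);
    }
}

// UI layer: the form does nothing but handle control events.
public partial class LoginForm : Form
{
    private readonly LoginService _loginService =
        new LoginService(new UserRepository());

    // userNameTextBox, passwordTextBox and errorLabel would come
    // from the designer file in a real project.
    private void loginButton_Click(object sender, EventArgs e)
    {
        if (!_loginService.ValidateCredentials(userNameTextBox.Text, passwordTextBox.Text))
            errorLabel.Text = "Invalid user name or password.";
    }
}

Note that UserRepository and LoginService never touch System.Windows.Forms, which is exactly what lets them move into a class library later.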
Breaking your code into separate class libraries is typically only necessary in two situations:
You need to reuse the code in multiple projects
You need to divide your project into multiple tiers
Even if you put all your code in a single project, as long as it is well-layered it will be very easy to split the project up into multiple class libraries when such a situation arises. So the big design issue is not how many DLLs you have; the big design issue is how many layers you have. Once the code is layered, it will be easy to move it around between different projects as necessary.
In practical terms, even when you don't need to reuse code between projects or support n-tiers, you may still legitimately choose to divide your layers into separate class libraries. It may make sense to do so purely for organizational purposes, or for consistency. For instance, if another developer comes behind you and sees classes in a class library called "MyCompany.Feature.Business", they can safely assume that those classes are all part of the business logic layer. In that way, breaking your code up into separate class libraries can be self-documenting.
There are other reasons, too, for putting code in DLLs. For instance, it makes it easy to support plug-in architectures, or to update one part of the application at a time.
I usually split projects into layers, i.e. presentation layer, business logic layer and data logic layer. Sometimes I will separate the layers using namespaces, and sometimes I will have three separate DLLs (using tiers).
I see developers splitting tiers into multiple DLLs. For example, I once saw a business logic layer with over one hundred different project files, and hence over one hundred different DLLs. The MSDN documentation also shows that the .NET Framework itself contains multiple DLLs, e.g. mscorlib and so on.
I believe the reasoning behind having separate DLLs is that it minimizes the memory footprint and allows multiple developers to work on different projects, e.g. one team could work on one project and another team on another.
I work in a two-developer team. What criteria do developers use when deciding to split code into separate DLLs?
What is the reasoning for separating layers into multiple DLLs?
There are various reasons to do this.
It adds isolation, which lets the compiler help you keep concerns separate. Without explicitly adding a reference, you can't use internal types in the other DLLs "by accident", so the compiler helps you keep your code cleaner (see the sketch after this list).
If you don't use an assembly at runtime, it won't be loaded, which can keep the memory footprint smaller. (If all assemblies are used, however, this won't help.)
It provides a logical separation within your APIs and projects, which can help with the organization and maintainability of your code. Note, however, that too many projects is just as bad as (or sometimes worse than) too few, since every extra project adds complexity that may not pay for itself.
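To illustrate the first point about internal types, here's a contrived sketch assuming two invented projects, Core and App:

// In Core.dll: this type is visible only inside the Core assembly.
internal class ConnectionPool
{
    internal string AcquireConnectionString()
    {
        return "...";
    }
}

// In App.dll, which references Core.dll:
class Program
{
    static void Main()
    {
        // The line below would not compile - error CS0122:
        // 'ConnectionPool' is inaccessible due to its protection level.
        // var pool = new ConnectionPool();
    }
}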
Separating code into more than one assembly is done for many reasons, some more technical than others. Assemblies can be used for logical grouping of code much like namespaces and, in fact, one common pattern is to separate large namespaces (concerns) into separate assemblies for that namespace. But that reason is most definitely not the best reason to use more than one assembly.
Code reuse is probably the number one factor for placing code into different assemblies. For example, you may have a console application where all of the code is compiled into the one executable file. Later on, you decide to create a web app front-end for the same application. Instead of copying the core code from your console app to your web app, you would likely refactor the solution into three projects: a class library for the core code (the main implementation), a console app (which already exists) and a web app. The console app and web app projects/assemblies will reference the class library project/assembly, and the main code is reused across both implementations. This is an oversimplification, mind you.
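In code, that refactoring might end up looking like this (the names MyApp.Core and ReportGenerator are invented for the example):

// MyApp.Core.dll (class library): the shared implementation.
namespace MyApp.Core
{
    public class ReportGenerator
    {
        public string BuildReport(string customerId)
        {
            // ... the logic that used to live inside the console app ...
            return "Report for " + customerId;
        }
    }
}

// ConsoleApp.exe: now just a thin front-end over the class library.
namespace MyApp.ConsoleFrontEnd
{
    class Program
    {
        static void Main(string[] args)
        {
            var generator = new MyApp.Core.ReportGenerator();
            System.Console.WriteLine(generator.BuildReport(args[0]));
        }
    }
}

// The web app would reference MyApp.Core.dll and call
// ReportGenerator from its page code-behind in the same way.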
Another reason to separate code into multiple assemblies is to separate concerns while managing dependencies. In this case, you may have code that requires references to web-oriented dependencies (other assemblies) that you don't want referenced by your core application assemblies. Breaking the app up into additional assemblies/projects lets you reuse your core assemblies without taking on unnecessary dependencies where they aren't needed.
Another reason is to facilitate concurrent development by a large team, where sub-teams can each work on a different assembly, reducing the number of "collisions" between developers working on different concerns of the application.
I am working on a packaged product that is supposed to cater to multiple clients with varying requirements (to a certain degree), and as such it should be built to be flexible enough for each specific client to customize. The kind of customization we are talking about here is that different clients may have differing attributes on some of the key business objects, and they could have differing business logic tied in with their additional attributes as well.
As a very simplistic example: consider "Automobile" to be a business entity in the system, with 4 key attributes: VehicleNumber, YearOfManufacture, Price and Colour.
It is possible that one of the clients using the system adds 2 more attributes to Automobile, namely ChassisNumber and EngineCapacity. This client needs business logic associated with these fields to validate that the same ChassisNumber doesn't already exist in the system when a new Automobile is added.
Another client just needs one additional attribute called SaleDate. SaleDate has its own business logic check, which validates that the vehicle doesn't appear in police records as a stolen vehicle when the sale date is entered.
Most of my experience has been in building enterprise apps for a single client, and I am really struggling to see how, in an object-oriented paradigm, I could handle a business entity whose attributes are dynamic and which has the capacity for dynamic business logic as well.
Key Issues
Are there any general OO principles/patterns that would help me in tackling this kind of design?
I am sure people who have worked on generic / packaged products would have faced similar scenarios in most of them. Any advice / pointers / general guidance is also appreciated.
My technology is .NET 3.5/C#, and the project has a layered architecture with a business layer consisting of business entities that encompass their own business logic.
This is one of our biggest challenges, as we have multiple clients that all use the same code base, but have widely varying needs. Let me share our evolution story with you:
Our company started out with a single client, and as we began to get other clients, you'd start seeing things like this in the code:
if (clientName == "ABC") {
    // do it the way ABC client likes
} else {
    // do it the way most clients like.
}
Eventually we got wise to the fact that this makes really ugly and unmanageable code. If another client wanted theirs to behave like ABC's in one place and CBA's in another place, we were stuck. So instead, we turned to a .properties file with a bunch of configuration points.
if ((bool)configProps.get("LastNameFirst")) {
    // output the last name first
} else {
    // output the first name first
}
This was an improvement, but still very clunky. "Magic strings" abounded. There was no real organization or documentation around the various properties. Many of the properties depended on other properties and wouldn't do anything (or would even break something!) if not used in the right combinations. Much (possibly even most) of our time in some iterations was spent fixing bugs that arose because we had "fixed" something for one client that broke another client's configuration. When we got a new client, we would just start with the properties file of another client that had the configuration "most like" the one this client wanted, and then try to tweak things until they looked right.
We tried using various techniques to get these configuration points to be less clunky, but only made moderate progress:
if (userDisplayConfigBean.showLastNameFirst()) {
    // output the last name first
} else {
    // output the first name first
}
There were a few projects to get these configurations under control. One involved writing an XML-based view engine so that we could better customize the displays for each client.
<client name="ABC">
    <field name="last_name" />
    <field name="first_name" />
</client>
Another project involved writing a configuration management system to consolidate our configuration code, enforce that each configuration point was well documented, allow super users to change the configuration values at run-time, and allow the code to validate each change to avoid getting an invalid combination of configuration values.
These various changes definitely made life a lot easier with each new client, but most of them failed to address the root of our problems. The change that really benefited us most was when we stopped looking at our product as a series of fixes to make something work for one more client, and we started looking at our product as a "product." When a client asked for a new feature, we started to carefully consider questions like:
How many other clients would be able to use this feature, either now or in the future?
Can it be implemented in a way that doesn't make our code less manageable?
Could we implement a different feature than the one they are asking for, which would still meet their needs while being better suited to reuse by other clients?
When implementing a feature, we would take the long view. Rather than creating a new database field that would only be used by one client, we might create a whole new table which could allow any client to define any number of custom fields. It would take more work up-front, but we could allow each client to customize their own product with a great degree of flexibility, without requiring a programmer to change any code.
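For instance, the "custom fields" idea might be modeled along these lines (a sketch only; the class names are invented, and a real implementation needs typing, validation and indexing decisions):

// One row per custom field a client has defined for an entity type.
public class CustomFieldDefinition
{
    public int Id;
    public string ClientName;   // which client defined the field
    public string EntityType;   // e.g. "Order" or "Customer"
    public string FieldName;    // e.g. "CouponCode"
    public string DataType;     // stored as a tag, e.g. "string" or "date"
}

// One row per value, linking an entity instance to a field definition.
public class CustomFieldValue
{
    public int CustomFieldDefinitionId;
    public int EntityId;        // the Order/Customer this value belongs to
    public string Value;        // persisted as text, converted on read
}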
That said, sometimes there are certain customizations that you can't really accomplish without investing an enormous effort in complex Rules engines and so forth. When you just need to make it work one way for one client and another way for another client, I've found that your best bet is to program to interfaces and leverage dependency injection. If you follow "SOLID" principles to make sure your code is written modularly with good "separation of concerns," etc., it isn't nearly as painful to change the implementation of a particular part of your code for a particular client:
public class FirstLastNameGenerator : INameDisplayGenerator
{
    private readonly IPersonRepository _personRepository;

    public FirstLastNameGenerator(IPersonRepository personRepository)
    {
        _personRepository = personRepository;
    }

    public string GenerateDisplayNameForPerson(int personId)
    {
        Person person = _personRepository.GetById(personId);
        return person.FirstName + " " + person.LastName;
    }
}
public class AbcModule : NinjectModule
{
    public override void Load()
    {
        Rebind<INameDisplayGenerator>().To<FirstLastNameGenerator>();
    }
}
This approach is enhanced by the other techniques I mentioned earlier. For example, I didn't write an AbcNameGenerator because maybe other clients will want similar behavior in their programs. But using this approach you can fairly easily define modules that override default settings for specific clients, in a way that is very flexible and extensible.
Because systems like this are inherently fragile, it is also important to focus heavily on automated testing: Unit tests for individual classes, integration tests to make sure (for example) that your injection bindings are all working correctly, and system tests to make sure everything works together without regressing.
PS: I use "we" throughout this story, even though I wasn't actually working at the company for much of its history.
PPS: Pardon the mixture of C# and Java.
That's a Dynamic Object Model or Adaptive Object Model you're building. And of course, when customers start adding behaviour and data, they are programming, so you need to have version control, tests, release, namespace/context and rights management for that.
A way of approaching this is to use a meta-layer, or reflection, or both. In addition you will need to provide a customisation application which will allow modification, by the users, of your business logic layer. Such a meta-layer does not really fit in your layered architecture - it is more like a layer orthogonal to your existing architecture, though the running application will probably need to refer to it, at least on initialisation. This type of facility is probably one of the fastest ways of screwing up a production application known to man, so you must:
Ensure that access to this editor is limited to people with a high level of rights on the system (e.g. administrator).
Provide a sandbox area where customer modifications can be tested before any changes are put on the production system.
Provide an "OOPS" facility whereby they can revert their production system either to your provided initial default, or to the last revision before the change.
Your meta-layer must be very tightly specified so that the range of activities is closely defined - George Orwell's "What is not specifically allowed, is forbidden."
Your meta-layer will have objects in it such as Business Object, Method, Property and events such as Add Business Object, Call Method etc.
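A bare-bones version of such a meta-layer might look like the sketch below (all names invented; a real one needs the rights management and sandboxing described above):

using System;
using System.Collections.Generic;

public class PropertyDefinition
{
    public string Name;
    public Type PropertyType;
}

// Describes a business object type at runtime instead of compile time.
public class BusinessObjectType
{
    public string Name;
    public readonly List<PropertyDefinition> Properties = new List<PropertyDefinition>();

    // The meta-layer's "Add Business Object" operation.
    public BusinessObject CreateInstance()
    {
        return new BusinessObject(this);
    }
}

public class BusinessObject
{
    public readonly BusinessObjectType Type;
    private readonly Dictionary<string, object> _values = new Dictionary<string, object>();

    public BusinessObject(BusinessObjectType type)
    {
        Type = type;
    }

    // Raised on every property set, so customer-defined validation
    // ("Call Method" in the meta-model) can hook in without recompiling.
    public event Action<string, object> PropertySet;

    public void SetProperty(string name, object value)
    {
        _values[name] = value;
        if (PropertySet != null) PropertySet(name, value);
    }
}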
There is a wealth of information about meta-programming available on the web, but I would start with Pattern Languages of Program Design Vol 2 or any of the WWW resources related to, or emanating from Kent or Coplien.
We develop an SDK that does something like this. We chose COM for our core because we were far more comfortable with it than with low-level .NET, but no doubt you could do it all natively in .NET.
The basic architecture is something like this: Types are described in a COM type library. All types derive from a root type called Object. A COM DLL implements this root Object type and provides generic access to derived types' properties via IDispatch. This DLL is wrapped in a .NET PIA assembly because we anticipate that most developers will prefer to work in .NET. The Object type has a factory method to create objects of any type in the model.
Our product is at version 1 and we haven't implemented methods yet - in this version business logic must be coded into the client application. But our general vision is that methods will be written by the developer in his language of choice, compiled to .NET assemblies or COM DLLs (and maybe Java too) and exposed via IDispatch. Then the same IDispatch implementation in our root Object type can call them.
If you anticipate that most of the custom business logic will be validation (such as checking for duplicate chassis numbers) then you could implement some general events on your root Object type (assuming you did it something like the way we do.) Our Object type fires an event whenever a property is updated, and I suppose this could be augmented by a validation method that gets called automatically if one is defined.
It takes a lot of work to create a generic system like this, but the payoff is that application development on top of the SDK is very quick.
You say that your customers should be able to add custom properties and implement business logic themselves "without programming". If your system also implements data storage based on the types (ours does) then the customer could add properties without programming, by editing the model (we provide a GUI model editor.) You could even provide a generic user application that dynamically presents the appropriate data-entry controls depending on the types, so your customers could capture custom data without additional programming. (We provide a generic client application but it's more a developer tool than a viable end-user application.) I don't see how you could allow your customers to implement custom logic without programming... unless you want to provide some kind of drag-n-drop GUI workflow builder... surely a huge task.
We don't envisage business users doing any of this stuff. In our development model all customisation is done by a developer, but not necessarily an expensive one - part of our vision is to allow less experienced developers to produce robust business applications.
Design a core model that acts as its own independent project
Here's a list of some possible basic requirements...
The core design would contain:
classes that work in (and can possibly be extended by) all of the subprojects.
more complex tools like database interactions (unless those are project specific)
a general configuration structure that should be considered standard across all projects
Then, all of the subsequent projects that are customized per client are considered extensions of this core project.
What you're describing is the basic purpose of any Framework. Namely, create a core set of functionality that can be set apart from the whole so you don't have to duplicate that development effort in every project you create. I.e., drop in a framework and half your work is already done.
You might say, "what about the SCM (Software Configuration Management)?"
How do you track revision history of all of the subprojects without including the core into the subproject repository?
Fortunately, this is an old problem. Many software projects, especially those in the Linux/open-source world, make extensive use of external libraries and plugins.
In fact, git has a command specifically for importing one project repository into another as a sub-repository, preserving all of the sub-repository's revision history. The outer project doesn't track the sub-repository's history itself; it only records which commit of the sub-repository it points to, so you don't modify the sub-repository's contents from within the outer project.
The command I'm talking about is called 'git submodule'.
You may ask, "what if I develop a really cool feature in one client's project that I'd like to use in all of my clients' projects?".
Just add that feature to the core and update the submodule in all the other projects (e.g. with 'git submodule update --remote', which pulls the submodule forward to the new upstream commit). The way git submodule works is that it points to a specific commit within the sub-repository's history tree. So, when that tree is changed upstream, you need to pull those changes back downstream to the projects where they're used.
The structure to implement such a thing would work like this. Let's say that your software is written specifically to manage a car dealership (inventory, sales, employees, customers, orders, etc.). You create a core module that covers all of these features, because they are expected to be used in the software for all of your clients.
But you have recently gained a new client who wants to be more tech-savvy by adding online sales to their dealership. Of course, their website is designed by a separate team of web developers/designers and a webmaster, but they want a web API (i.e., a service layer) to tap into the current infrastructure for their website.
What you'd do is create a project for the client, we'll call it WebDealersRUs and link the core submodule into the repository.
The hidden benefit of this is that once you start to look at a codebase as pluggable parts, you can start to design those parts from the outset as modular pieces capable of being dropped into a project with very little effort.
Consider the example above. Let's say that your client base is starting to see the merits of adding a web front-end to increase sales. Just pull the web API out of WebDealersRUs into its own repository and link it back in as a submodule. Then propagate it to all of your clients that want it.
What you get is a major payoff with minimal effort.
Of course there will always be parts of every project that are client-specific (branding, etc.). That's why every client should have a separate repository containing their unique version of the software. But that doesn't mean that you can't pull parts out and generalize them to be reused in subsequent projects.
While I approach this issue from the macro level, it can be applied to smaller/more specific parts of the codebase. The key here is code that you wish to re-use needs to be genericized.
OOP comes into play here as follows: where the functionality is implemented in the core but extended in a client's code, you'll use a base class and inherit from it; where the functionality is expected to return a similar kind of result but the implementations of that functionality may be wildly different across classes (i.e., there's no direct inheritance hierarchy), it's best to use an interface to enforce that relationship.
I know your question is general, not tied to a technology, but since you mention you actually work with .NET, I suggest you look at a new and very important technology piece that is part of .NET 4: the 'dynamic' type.
There is also a good article on CodeProject here: DynamicObjects – Duck-Typing in .NET.
It's probably worth looking at because, if I had to implement the dynamic system you describe, I would certainly try to implement my entities based on the DynamicObject class and add custom properties and methods using the TryGetxxx methods. It also depends whether you are focused on compile time or runtime. Here is an interesting SO link on this subject: Dynamically adding members to a dynamic object.
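To give a flavour of the approach, here's a minimal sketch of an entity built on DynamicObject (the class name and usage are invented; per-client validation would hook into TrySetMember):

using System.Collections.Generic;
using System.Dynamic;

public class DynamicEntity : DynamicObject
{
    private readonly Dictionary<string, object> _properties =
        new Dictionary<string, object>();

    // Called for any property read the compiler can't resolve statically.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _properties.TryGetValue(binder.Name, out result);
    }

    // Called for any property write; client-specific validation
    // could be run here before the value is stored.
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _properties[binder.Name] = value;
        return true;
    }
}

// Usage:
// dynamic car = new DynamicEntity();
// car.ChassisNumber = "CH-1234";  // no compile-time member required
// object n = car.ChassisNumber;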
I feel there are two approaches:
1) If different clients fall into the same domain (such as Manufacturing/Finance), then it's better to design objects such that a BaseObject holds the attributes that are very common, while the ones that vary between clients are kept as key-value pairs (sketched below). On top of that, try to use a rules engine like IBM ILog (http://www-01.ibm.com/software/integration/business-rule-management/rulesnet-family/about/).
2) Predictive Model Markup Language(http://en.wikipedia.org/wiki/PMML)
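A minimal sketch of the BaseObject idea from approach 1 (member names invented; a rules engine would be invoked from SetAttribute):

using System.Collections.Generic;

public abstract class BaseObject
{
    // Attributes common to every client are real properties.
    public int Id { get; set; }
    public string Name { get; set; }

    // Client-specific attributes live in a key-value bag.
    private readonly Dictionary<string, object> _extendedAttributes =
        new Dictionary<string, object>();

    public object GetAttribute(string key)
    {
        object value;
        return _extendedAttributes.TryGetValue(key, out value) ? value : null;
    }

    public void SetAttribute(string key, object value)
    {
        // A rules engine could validate the client-specific
        // value here before it is accepted.
        _extendedAttributes[key] = value;
    }
}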
To be perfectly clear, I do not expect a solution to this problem. A big part of figuring this out is obviously solving the problem. However, I don't have a lot of experience with well architected n-tier applications and I don't want to end up with an unruly BLL.
At the moment of writing this, our business logic is largely an intermingled ball of twine - an intergalactic mess of dependencies, with the same business logic replicated more than once. My focus right now is to pull the business logic out of the thing we refer to as a data access layer, so that I can define well-known events that can be subscribed to. I think I want to support an event-driven/reactive programming model.
My hope is that there are certain attainable goals that tell me how to design this collection of classes in a manner well suited for business logic. If there are things that differentiate a good BLL from a bad BLL, I'd like to hear more about them.
As a seasoned programmer but fairly modest architect I ask my fellow community members for advice.
Edit 1:
So the validation logic goes into the business objects, but that means the business objects need to communicate validation errors/logic back to the GUI. That gets me thinking of implementing business operations as objects in their own right, to provide a lot more metadata about the requirements of an operation. I'm not a big fan of code cloning.
Kind of a broad question. Separate your DB from your business logic (horrible term) with ORM tech (NHibernate, perhaps?). That lets you stay in OO land mostly (obviously), and you can mostly ignore the DB side of things from an architectural point of view.
Moving on, I find Domain-Driven Design (DDD) to be the most successful method for breaking a complex system into manageable chunks, and although it gets no respect I genuinely find UML - especially activity and class diagrams - to be critically useful in understanding and communicating system design.
General advice: Interface everything, build your unit tests from the start, and learn to recognise and separate the reusable service components that can exist as subsystems. FWIW if there's a bunch of you working on this I'd also agree on and aggressively use stylecop from the get go :)
I have found some of the practices of Domain-Driven Design to be excellent when it comes to splitting up complex business logic into more manageable/testable chunks.
Have a look through the sample code from the following link:
http://dddpds.codeplex.com/
DDD focuses on your Domain layer or BLL if you like, I hope it helps.
We're just talking about this from an architecture standpoint, and the gist of it remains "abstraction, abstraction, abstraction".
You could use EBC (Event-Based Components) to design top-down and pass the interface definitions to the programmer teams. Using a methodology like this (or any other visualisation technique), visualizing the dependencies keeps you from duplicating business logic anywhere in your project.
Hmm, I can tell you the technique we used for a rather large database-centered application. We had one class which managed the data layer, as you suggested, with the suffix DL. We had a program which automatically generated this source file (which was quite convenient), though it also meant that if we wanted to extend functionality, you needed to derive from the class, since regenerating the source would overwrite any edits.
We had another file ending in OBJ which simply defined the actual database row handled by the data layer.
And last but not least, with a well-formed base class, there was a file ending in BS (standing for business logic) as the only file not generated automatically. It defined event methods such as "New" and "Save", so that calling the base performed the default action; any deviation from the norm could be handled in this file (including complete rewrites of the default functionality if necessary).
You would create a single group of such files for each master table and its child (or grandchild) tables. You'll also need a factory which contains the full names of all objects so that any object can be created via reflection. So, to patch the program, you'd merely have to derive from the base functionality and update a line in the database so that the factory creates that object rather than the default.
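In outline, that split and the reflection-based factory might look something like this (all names invented for the sketch):

// CustomerDL.cs - regenerated by the tool; never edited by hand.
public class CustomerDL
{
    public virtual CustomerOBJ New()
    {
        return new CustomerOBJ();
    }

    public virtual void Save(CustomerOBJ row)
    {
        // ... generated persistence code ...
    }
}

// CustomerOBJ.cs - generated; one field per database column.
public class CustomerOBJ
{
    public int CustomerId;
    public string Name;
}

// CustomerBS.cs - hand-written; overrides only where behavior deviates.
public class CustomerBS : CustomerDL
{
    public override void Save(CustomerOBJ row)
    {
        // custom validation goes here, then the default action
        base.Save(row);
    }
}

// The factory looks up the concrete type name (stored in the database),
// so a patch is just a new derived class plus one updated row.
public static class ObjectFactory
{
    public static object Create(string typeName)
    {
        // typeName should be an assembly-qualified type name.
        return System.Activator.CreateInstance(System.Type.GetType(typeName));
    }
}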
Hope that helps, though I'll leave this a community wiki response so perhaps you can get some more feedback on this suggestion.
Have a look in this thread. May give you some thoughts.
How should my business logic interact with my data layer?
This guide from Microsoft could also be helpful.
Regarding "Edit 1" - I've encountered exactly that problem many times. I agree with you completely: there are multiple places where the same validation must occur.
The way I've resolved it in the past is to encapsulate the validation rules somehow. Metadata/XML, separate objects, whatever. Just make sure it's something that can be requested from the business objects, taken somewhere else and executed there. That way, you're writing the validation code once, and it can be executed by your business objects or UI objects, or possibly even by third-party consumers of your code.
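One possible shape for that encapsulation, as separate rule objects (a sketch; the names are invented):

using System;
using System.Collections.Generic;

// A rule is just a message plus a predicate, so it can be handed to
// the business layer, the UI layer, or a third-party consumer and
// executed wherever it is needed.
public class ValidationRule<T>
{
    public string ErrorMessage;
    public Func<T, bool> IsSatisfiedBy;
}

public class Person
{
    public string LastName;
}

public static class PersonRules
{
    public static IEnumerable<ValidationRule<Person>> All()
    {
        yield return new ValidationRule<Person>
        {
            ErrorMessage = "Last name is a required field.",
            IsSatisfiedBy = p => !string.IsNullOrEmpty(p.LastName)
        };
    }
}

// Any layer can then do:
// foreach (var rule in PersonRules.All())
//     if (!rule.IsSatisfiedBy(person)) errors.Add(rule.ErrorMessage);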
There is one caveat: some validation rules are easy to encapsulate/transport; "last name is a required field" for example. However, some of your validation rules may be too complex and involve far too many objects to be easily encapsulated or described in metadata: "user can include that coupon only if they aren't an employee, and the order is placed on labor day weekend, and they have between 2 and 5 items of this particular type in their cart, unless they also have these other items in their cart, but only if the color is one of our 'premiere sale' colors, except blah blah blah...." - you know how business 'logic' is! ;)
In those cases, I usually just accept the fact that there will be some additional validation done only at the business layer, and ensure there's a way for those errors to be propagated back to the UI layer when they occur (you're going to need that communication channel anyway, to report back persistence-layer errors).
At which point do you decide that some of your subroutines and common code should be placed in a class library or DLL? In one of my applications, I would like to share some of my common code between different projects (as we all know, it's a programming sin to duplicate code).
The vast majority of my code is all within a single project. I also have one small utility that's partitioned from the main executable that runs with elevated permissions for a sole purpose. The two items have, at most, three subroutines in common. Should these common subroutines be placed and called from a class library? How do you decide when to do this? When you have at least one shared subroutine? Twenty-plus lines of code?
I don't believe that this should be language specific or framework dependent, but if so, I'm using the .NET framework.
There are more ways to share code between applications than with a DLL. From the sound of it, you're not talking about a lot of code, so you probably don't need to worry about it too much.
In general, I use the following rule of thumb:
For trivial code duplication (a couple simple 1-2 line functions, that are easy to understand and debug) I'll just copy and paste the code.
For more complicated functions (a small library of stand-alone helper functions, contained in a file or two, which require a modest level of maintenance and debugging) I'll simply include the file in both projects (either by linking, or defining a subrepository, or something like that).
For more extensive code sharing (a group of interrelated classes, or a database communication layer, which is useful for multiple projects) I'll refactor them out into a standalone library, and package and distribute them using whatever's appropriate for whatever I'm programming in.
Because the complexity of managing your code increases by an order of magnitude for each step (when you're packaging DLLs for multiple projects you now need to think about versioning issues) you only want to move to the next level when you need to. It doesn't sound like you're feeling the pain of handling your common code yet, and if that's the case there's no real need.
If code is shared between multiple applications, then it has to reside in a DLL or class library.
For a larger application you might also want to break different subsystems of the application into separate libraries. That way each project can focus on one particular task. This simplifies the structure of your application and makes it easier to find any one piece of code. For example, you might have a GUI application with different DLLs (.NET projects) for:
Working with a specific Network protocol
Accessing common code, for example utility classes
Access to legacy code (via PInvoke)
etc...
We are migrating our applications to VB.Net 2008 from Classic VB, and I need to create a base namespace and business layer. My method of approach is going to be to visit our top BA and identify the common areas of our (Fixed Income) company, and try to form a decent inheritance model with as much of the code in generics as possible.
What's everyone's experience of doing this? And as a second part of the question, we are looking at incorporating Web Focus into the OLAP side; how would this affect the design of the corporate namespace and its derivatives?
I think the best way to begin to create a corporate .NET framework is to begin by harvesting existing code out of current corporate projects. Building a framework from scratch by talking to a BA without writing code for a specific, concrete project might lead you to over design the framework in some areas and totally miss some necessary features in others (as well, it might place artificial constraints on your framework clients for no good reason).
See Fowler's entry on Harvested Framework and this blog post for a more complete explanation.
I'm not familiar with Web Focus but I'm guessing it would affect it in some way, however, if you go with a Harvested Framework, your usage of it in the first few applications you build will shape how you use Web Focus within the framework.
Jereme has it right on the framework. I'll briefly mention something obvious about namespaces.
Always remember what a namespace is for - it's to provide a "space" in which names will live. In particular, it's meant to provide a space small enough that the people creating names within that space will be less likely to produce duplicate or confusing names.
This can only work if the namespaces are organized along patterns of organization, or of domain knowledge. A simple example often used is a pattern of Company.BusinessUnit.Application. The theory is that within the set of developers working on a given application, there is less chance for name duplication. This will not be true for a large application, where you would want to break it down further based on layer or area. Similarly, if the business unit is too large, you'll want to break that down.
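With invented names, the pattern looks something like this:

// Company.BusinessUnit.Application, subdivided by layer or area
// once the application grows large enough to need it.
namespace Contoso.FixedIncome.TradeCapture.Business
{
    public class TradeValidator { }
}

namespace Contoso.FixedIncome.TradeCapture.DataAccess
{
    public class TradeRepository { }
}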
But in all cases, you're really trying to partition sets of brains, as it's the brains that create the names.
If your application is under VB6 (not VB3), then I strongly recommend that you do the redesign to a class hierarchy in VB6 first. The reason for this is that in any conversion you try to preserve the behavior of the old application, and it stretches out the project time to do that and a redesign at the same time.
By making the design changes in the application's original language first, you are assured that any bugs that result are due to the design, not the conversion.
I have done three major conversions of our software in the past 20 years: DOS to VB3, VB3 to an object-oriented design in VB6, and VB6 to VB.NET.
Finally, it is straightforward to make a design in VB6 that ports over to VB.NET readily. The trick is to hide the specific VB6 APIs and constructs behind an interface (graphics, printing, etc.).
When you do the conversion, I recommend working from the top down. Change over your forms to .NET first, calling the VB6 COM DLLs, then convert each layer over until you reach the bottom DLLs.
Again, if you try to change the design AND convert to another language for any complex application you will double the conversion time.