I know that with Castle Windsor you can register aspects (when using method interception in Windsor as AOP) in code instead of applying attributes to classes. Is the same possible in PostSharp? It's a preference thing, but I prefer to have aspects matched to interfaces/objects in one place, as opposed to attributes scattered all over.
Update:
I'm curious whether I can assign aspects to interfaces/objects similar to this:
container.Register(
    Component
        .For<IService>()
        .ImplementedBy<Service>()
        .Interceptors(InterceptorReference.ForType<LoggingAspect>()).Anywhere
);
If you could do this, you would have the option of not having to place attributes on assemblies/classes/methods to apply aspects. I could then have one code file/class that records which aspects are applied to which classes/methods/etc.
Yes. You can either use multicasting (http://www.sharpcrafters.com/blog/post/Day-2-Applying-Aspects-with-Multicasting-Part-1.aspx , http://www.sharpcrafters.com/blog/post/Day-3-Applying-Aspects-with-Multicasting-Part-2.aspx) or you can use aspect providers (http://www.sharpcrafters.com/blog/post/PostSharp-Principals-Day-12-e28093-Aspect-Providers-e28093-Part-1.aspx , http://www.sharpcrafters.com/blog/post/PostSharp-Principals-Day-13-e28093-Aspect-Providers-e28093-Part-2.aspx).
Example:
using System;
using PostSharp.Aspects;
using PostSharp.Extensibility;

[assembly: PostSharpInterfaceTest.MyAspect(AttributeTargetTypes = "PostSharpInterfaceTest.Interface1", AttributeInheritance = MulticastInheritance.Multicast)]

namespace PostSharpInterfaceTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Example e = new Example();
            Example2 e2 = new Example2();
            e.DoSomething();
            e2.DoSomething();
            Console.ReadKey();
        }
    }

    class Example : Interface1
    {
        public void DoSomething()
        {
            Console.WriteLine("Doing something");
        }
    }

    class Example2 : Interface1
    {
        public void DoSomething()
        {
            Console.WriteLine("Doing something else");
        }
    }

    interface Interface1
    {
        void DoSomething();
    }

    [Serializable]
    class MyAspect : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            Console.WriteLine("Entered " + args.Method.Name);
        }
    }
}
If you have complex requirements for determining which types get which aspects, I recommend creating an aspect provider instead.
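For illustration, here is a minimal sketch of what such a provider might look like, reusing MyAspect and Interface1 from the example above. The registration logic is my own assumption; see the linked Aspect Providers articles for the full treatment:

using System;
using System.Collections.Generic;
using System.Reflection;
using PostSharp.Aspects;

[assembly: PostSharpInterfaceTest.AspectRegistration]

namespace PostSharpInterfaceTest
{
    // Assembly-level aspect that hands PostSharp a list of aspect instances
    // at build time, keeping all aspect-to-type mapping in one place.
    [Serializable]
    public sealed class AspectRegistrationAttribute : AssemblyLevelAspect, IAspectProvider
    {
        public IEnumerable<AspectInstance> ProvideAspects(object targetElement)
        {
            var assembly = (Assembly)targetElement;
            foreach (var type in assembly.GetTypes())
            {
                // Apply MyAspect to every concrete type implementing Interface1.
                if (typeof(Interface1).IsAssignableFrom(type) && !type.IsInterface)
                    yield return new AspectInstance(type, new MyAspect());
            }
        }
    }
}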
Have a look at LOOM.NET; there you have a post-compiler and a runtime weaver. With the latter you are able to achieve exactly what you want.
It should be possible to use the PostSharp XML configuration. The XML configuration is the unification of the Plug-in and Project models in the project loader.
Description of .psproj could be found at http://www.sharpcrafters.com/blog/post/Configuring-PostSharp-Diagnostics-Toolkits.aspx.
Note that I've only seen examples of how the PostSharp Toolkits use this XML configuration,
but it should work the same way for custom aspects.
Warning: I've noticed that installing a PostSharp Toolkit from NuGet overwrites the existing psproj file, so do not forget to back it up.
Related
I have an ASP.NET Core application. The application has a few helper classes that do some work. Each class has methods with different signatures. I see a lot of .NET Core examples online that create an interface for each class and then register the types with the DI framework. For example:
public interface IStorage
{
    Task Download(string file);
}

public class Storage : IStorage
{
    public Task Download(string file)
    {
    }
}

public interface IOcr
{
    Task Process();
}

public class Ocr : IOcr
{
    public Task Process()
    {
    }
}
Basically, for each interface there is only one class. Then I register these types with DI as:
services.AddScoped<IStorage, Storage>();
services.AddScoped<IOcr, Ocr>();
But I can register a type without having an interface, so the interfaces here look redundant, e.g.:
services.AddScoped<Storage>();
services.AddScoped<Ocr>();
So do I really need interfaces?
No, you don't need interfaces for dependency injection. But dependency injection is much more useful with them!
As you noticed, you can register concrete types with the service collection and ASP.NET Core will inject them into your classes without problems. The benefit you get by injecting them over simply creating instances with new Storage() is service lifetime management (transient vs. scoped vs. singleton).
That's useful, but only part of the power of using DI. As @DavidG pointed out, the big reason interfaces are so often paired with DI is testing. Making your consumer classes depend on interfaces (abstractions) instead of other concrete classes makes them much easier to test.
For example, you could create a MockStorage that implements IStorage for use during testing, and your consumer class shouldn't be able to tell the difference. Or, you can use a mocking framework to easily create a mocked IStorage on the fly. Doing the same thing with concrete classes is much harder. Interfaces make it easy to replace implementations without changing the abstraction.
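As a minimal sketch of the second approach (the DocumentProcessor consumer and file name are made up for illustration), a test with Moq might look like this:

using System.Threading.Tasks;
using Moq;

// Hypothetical consumer that depends on the abstraction, not on Storage.
public class DocumentProcessor
{
    private readonly IStorage _storage;
    public DocumentProcessor(IStorage storage) { _storage = storage; }
    public Task Process(string file) { return _storage.Download(file); }
}

public class DocumentProcessorTests
{
    // Runs inside your test framework of choice.
    public async Task Process_DownloadsTheFile()
    {
        // No real storage involved; the mock stands in for IStorage.
        var storage = new Mock<IStorage>();
        storage.Setup(s => s.Download("report.pdf")).Returns(Task.CompletedTask);

        var processor = new DocumentProcessor(storage.Object);
        await processor.Process("report.pdf");

        storage.Verify(s => s.Download("report.pdf"), Times.Once);
    }
}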
Does it work? Yes. Should you do it? No.
Dependency Injection is a tool for the principle of Dependency Inversion: https://en.wikipedia.org/wiki/Dependency_inversion_principle
Or, as it's described in SOLID,
one should "depend upon abstractions, [not] concretions".
You can just inject concrete classes all over the place and it will work. But it's not what DI was designed to achieve.
No, we don't need interfaces. In addition to injecting classes or interfaces you can also inject delegates. It's comparable to injecting an interface with one method.
Example:
public delegate int DoMathFunction(int value1, int value2);

public class DependsOnMathFunction
{
    private readonly DoMathFunction _doMath;

    public DependsOnMathFunction(DoMathFunction doMath)
    {
        _doMath = doMath;
    }

    public int DoSomethingWithNumbers(int number1, int number2)
    {
        return _doMath(number1, number2);
    }
}
You could do it without declaring a delegate, just injecting a Func<Something, Whatever>, and that will also work. I'd lean toward the delegate because it makes the DI registration unambiguous: you might have two delegates with the same signature that serve unrelated purposes.
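A minimal sketch of the registration with Microsoft.Extensions.DependencyInjection (the addition logic is just a placeholder implementation):

// Bind the delegate type to a concrete implementation in one place.
services.AddSingleton<DoMathFunction>((value1, value2) => value1 + value2);

// A bare Func<int, int, int> would also resolve, but a second Func with
// the same signature for an unrelated purpose would clash; the named
// delegate type keeps the two registrations distinct.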
One benefit to this is that it steers the code toward interface segregation. Someone might be tempted to add a method to an interface (and its implementation) because it's already getting injected somewhere so it's convenient.
That means
The interface and implementation gain responsibility they possibly shouldn't have just because it's convenient for someone in the moment.
The class that depends on the interface can also grow in its responsibility but it's harder to identify because the number of its dependencies hasn't grown.
Other classes end up depending on the bloated, less-segregated interface.
I've seen cases where a single dependency eventually grows into what should really be two or three entirely separate classes, all because it was convenient to add to an existing interface and class instead of injecting something new. That in turn helped some classes on their way to becoming 2,500 lines long.
You can't prevent someone doing what they shouldn't. You can't stop someone from just making a class depend on 10 different delegates. But it can set a pattern that guides future growth in the right direction and provides some resistance to growing interfaces and classes out of control.
(This doesn't mean don't use interfaces. It means that you have options.)
I won't try to cover what others have already mentioned; using interfaces with DI will often be the best option. But it's worth mentioning that object inheritance may at times provide another useful option. For example:
public class Storage
{
    public virtual Task Download(string file)
    {
    }
}

public class DiskStorage : Storage
{
    public override Task Download(string file)
    {
    }
}
and registering it like so:
services.AddScoped<Storage, DiskStorage>();
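With that registration, resolving the base class yields the derived implementation. A quick sketch, assuming serviceProvider was built from the same service collection:

// Consumers ask for Storage; the container supplies DiskStorage.
var storage = serviceProvider.GetRequiredService<Storage>();
// storage is a DiskStorage instance here.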
Without Interface
public class Benefits
{
    public void BenefitForTeacher() { }
    public void BenefitForStudent() { }
}

public class Teacher : Benefits
{
    private readonly Benefits BT;
    public Teacher(Benefits _BT)
    { BT = _BT; }

    public void TeacherBenefit()
    {
        base.BenefitForTeacher();
        base.BenefitForStudent();
    }
}

public class Student : Benefits
{
    private readonly Benefits BS;
    public Student(Benefits _BS)
    { BS = _BS; }

    public void StudentBenefit()
    {
        base.BenefitForTeacher();
        base.BenefitForStudent();
    }
}
Here you can see that benefits for teachers are accessible in the Student class and benefits for students are accessible in the Teacher class, which is wrong.
Let's see how we can resolve this problem using interfaces.
With Interface
public interface IBenefitForTeacher
{
    void BenefitForTeacher();
}

public interface IBenefitForStudent
{
    void BenefitForStudent();
}

public class Benefits : IBenefitForTeacher, IBenefitForStudent
{
    public Benefits() { }

    public void BenefitForTeacher() { }
    public void BenefitForStudent() { }
}

public class Teacher : IBenefitForTeacher
{
    private readonly IBenefitForTeacher BT;
    public Teacher(IBenefitForTeacher _BT)
    { BT = _BT; }

    public void BenefitForTeacher()
    {
        BT.BenefitForTeacher();
    }
}

public class Student : IBenefitForStudent
{
    private readonly IBenefitForStudent BS;
    public Student(IBenefitForStudent _BS)
    { BS = _BS; }

    public void BenefitForStudent()
    {
        BS.BenefitForStudent();
    }
}
Here you can see there is no way to call teacher benefits from the Student class or student benefits from the Teacher class.
So the interface is used here as an abstraction layer.
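Tying this back to the original question, the registrations might look like this (a sketch; note the same Benefits class can back both interfaces):

// Each consumer sees only the slice of Benefits its interface exposes.
services.AddScoped<IBenefitForTeacher, Benefits>();
services.AddScoped<IBenefitForStudent, Benefits>();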
I am writing a new app and I have chosen to use Java for flexibility. It is a GUI app so I will use JavaFX. This is my first time using Java but I have experience with C#.
I am getting familiar with JavaFX Properties, they look like a great way of bi-directional binding between front-end and back-end.
My code uses classes from an open-source API, and I would like to convert the members of these classes to JavaFX Properties (String => StringProperty, etc). I believe this would be transparent to any objects that refer to these members.
Is it ok to do this?
Is it the suggested way of dealing with existing classes?
What do I do about enum types? E.g. an enum member has its value changed; how should I connect the enum member to the front-end?
Thank you :)
In general, as long as you don't change the public API of the class - in other words you don't remove any public methods, modify their parameter types or return types, or change their functionality - you should not break any code that uses them.
So, e.g. a change from
public class Foo {
    private String bar ;

    public String getBar() {
        return bar ;
    }

    public void setBar(String bar) {
        this.bar = bar ;
    }
}
to
public class Foo {
    private final StringProperty bar = new SimpleStringProperty();

    public StringProperty barProperty() {
        return bar ;
    }

    public String getBar() {
        return barProperty().get();
    }

    public void setBar(String bar) {
        barProperty().set(bar);
    }
}
should not break any clients of the class Foo. The only possible problem is that classes that have subclassed Foo and overridden getBar() and/or setBar(...) might get unexpected behavior if their superclass is replaced with the new implementation (specifically, if getBar() and setBar(...) are not final, you have no way to enforce that getBar()==barProperty().get(), which is desirable).
For enums (and other objects) you can use an ObjectProperty<>:
Given
public enum Option { FIRST_CHOICE, SECOND_CHOICE, THIRD_CHOICE }
Then you can do
public class Foo {
    private final ObjectProperty<Option> option = new SimpleObjectProperty<>();

    public ObjectProperty<Option> optionProperty() {
        return option ;
    }

    public Option getOption() {
        return optionProperty().get();
    }

    public void setOption(Option choice) {
        optionProperty().set(choice);
    }
}
One caveat to all this is that you do introduce a dependency on the JavaFX API that wasn't previously present in these classes. JavaFX ships with the Oracle JDK, but it is not a full part of the JSE (e.g. it is not included in OpenJDK by default, and not included in some other JSE implementations). So in practice, you're highly unlikely to be able to persuade the developers of the open source library to accept your changes to the classes in the library. Since it's open source, you can of course maintain your own fork of the library with JavaFX properties, but then it will get tricky if you want to incorporate new versions of that library (you will need to merge two different sets of changes, essentially).
Another option is to use bound properties in the classes, and wrap them using a Java Bean Property Adapter. This is described in this question.
I am using the new Test Doubles in EF6 as outlined here from MSDN. VS2013 with Moq & NUnit.
All was good until I had to do something like this:
var myFoo = context.Foos.Find(id);
and then:
myFoo.Name = "Bar";
and then :
context.Entry(myFoo).Property("Name").IsModified = true;
At this point I get an error:
Additional information: Member 'IsModified' cannot be called for property 'Name' because the entity of type 'Foo' does not exist in the context. To add an entity to the context call the Add or Attach method of DbSet.
Although, when I examine the 'Foos' in the context with an Add Watch, I can see all the items I Add'ed before running the test. So they are there.
I have created the FakeDbSet (or TestDbSet) from the article. I am initializing each FakeDbSet in the FakeContext's constructor, like this:
Foos = new FakeDbSet<Foo>();
My question is, is it possible to work with the FakeDbSet and the FakeContext in the test doubles scenario in such a way as to have access to DbEntityEntry and DbPropertyEntry from the test double? Thanks!
I can see all items I Add'ed before running the test. So they are there.
Effectively, you've only added items to an ObservableCollection. The context.Entry method reaches much deeper than that. It requires a change tracker to be actively involved in adding, modifying and removing entities. If you want to mock this change tracker, the ObjectStateManager (ignoring the fact that it's not designed to be mocked at all), good luck: it's got over 4000 lines of code.
Frankly, I don't understand all these blogs and articles about mocking EF. The numerous differences between LINQ to Objects and LINQ to Entities alone should be enough to discourage it. These mock contexts and DbSets build an entirely new universe that's a source of bugs in itself. I've decided to do integration tests only, when and wherever EF is involved in my code. A working end-to-end test gives me a solid feeling that things are OK. A unit test (faking EF) doesn't. (Others do, don't get me wrong.)
But let's assume you'd still like to venture into mocking DbContext.Entry<T>. Too bad, impossible.
The method is not virtual.
It returns a DbEntityEntry<T>, a class with an internal constructor that is a wrapper around an InternalEntityEntry, which is an internal class. And, by the way, DbEntityEntry doesn't implement an interface.
So, to answer your question
is it possible to (...) have access to DbEntityEntry and DBPropertyEntry from the test double?
No, EF's mocking hooks are only very superficial, you'll never even come close to how EF really works.
Just abstract it. If you are working against an interface, when creating your own doubles, put the "modified" stuff in a separate method. My interface and implementation (generated by EF, but I altered the template) look like this:
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Manual changes to this file may cause unexpected behavior in your application.
//     Manual changes to this file will be overwritten if the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace Model
{
    using System;
    using System.Data.Entity;
    using System.Data.Entity.Infrastructure;

    public interface IOmt
    {
        DbSet<DatabaseOmtObjectWhatever> DatabaseOmtObjectWhatever { get; set; }
        int SaveChanges();
        void SetModified(object entity);
        void SetAdded(object entity);
    }

    public partial class Omt : DbContext, IOmt
    {
        public Omt()
            : base("name=Omt")
        {
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            throw new UnintentionalCodeFirstException();
        }

        public virtual DbSet<DatabaseOmtObjectWhatever> DatabaseOmtObjectWhatever { get; set; }

        public void SetModified(object entity)
        {
            Entry(entity).State = EntityState.Modified;
        }

        public void SetAdded(object entity)
        {
            Entry(entity).State = EntityState.Added;
        }
    }
}
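With that abstraction in place, a test double never has to touch DbEntityEntry at all. A minimal sketch (assuming the FakeDbSet<T> from the MSDN article, which derives from DbSet<T>) might be:

// Records state changes instead of delegating to a real change tracker.
class FakeOmt : IOmt
{
    public readonly List<object> Modified = new List<object>();
    public readonly List<object> Added = new List<object>();

    public FakeOmt()
    {
        DatabaseOmtObjectWhatever = new FakeDbSet<DatabaseOmtObjectWhatever>();
    }

    public DbSet<DatabaseOmtObjectWhatever> DatabaseOmtObjectWhatever { get; set; }

    public int SaveChanges() { return 0; }

    public void SetModified(object entity) { Modified.Add(entity); }
    public void SetAdded(object entity) { Added.Add(entity); }
}

The production code calls SetModified instead of Entry(...).Property(...).IsModified, and the test simply asserts on the Modified list.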
On my service layer I have injected a unit of work and 2 repositories in the constructor. The unit of work and the repositories each have an instance of a DbContext, which I want to share between them. How can I do that with Ninject? Which scope should be considered?
I am not in a web application so I can't use InRequestScope.
I am trying to do something similar... I am using DI; however, I need my UoW to be created and disposed like this:
using (IUnitOfWork uow = new UnitOfWorkFactory().Create())
{
    _testARepository.Insert(a);
    _testBRepository.Insert(b);
    uow.SaveChanges();
}
EDIT: I just want to be sure I understand… after looking at https://github.com/ninject/ninject.extensions.namedscope/wiki/InNamedScope I thought about my current console application architecture, which actually uses Ninject.
Let's say:
Class A is a service layer class.
Class B is a unit of work which takes an interface (IContextFactory) as a parameter.
Class C is a repository which takes an interface (IContextFactory) as a parameter.
The idea here is to be able to do context operations on 2 or more repositories and use the unit of work to apply the changes.
Class D is a context factory (Entity Framework) which provides an instance (kept in a container) of the context, which is shared between Class B and C (.. and would be for other repositories as well).
The context factory keeps the instance in its container, but I don't want to reuse this instance all the time, since the context needs to be disposed at the end of the service operation.. is that the main purpose of InNamedScope, actually?
The solution would be something like this, but I am not at all sure I am doing it right; the service instances are going to be transient, which means they are actually never disposed?:
Bind<IScsContextFactory>()
    .To<ScsContextFactory>()
    .InNamedScope("ServiceScope")
    .WithConstructorArgument(
        "connectionString",
        ConfigurationUtility.GetConnectionString());

Bind<IUnitOfWork>().To<ScsUnitOfWork>();
Bind<IAccountRepository>().To<AccountRepository>();
Bind<IBlockedIpRepository>().To<BlockedIpRepository>();

Bind<IAccountService>().To<AccountService>().DefinesNamedScope("ServiceScope");
Bind<IBlockedIpService>().To<BlockedIpService>().DefinesNamedScope("ServiceScope");
UPDATE: This approach works against NuGet current, but relies on an anomaly in the InCallScope implementation which has been fixed in the current unstable NuGet packages. I'll be tweaking this answer in a few days to reflect the best approach after some mulling over. NB the high-level way of structuring stuff will stay pretty much identical; just the exact details of the Bind<DbContext>() scoping will change. (Hint: CreateNamedScope in unstable would work, or one could set up the command handler as DefinesNamedScope. The reason I don't just do that is that I want to have something that composes/plays well with InRequestScope.)
I highly recommend reading the Ninject.Extensions.NamedScope integration tests (seriously, find them and read and re-read them)
The DbContext is a Unit Of Work so no further wrapping is necessary.
As you want to be able to have multiple 'requests' in flight and want to have a single Unit of Work shared between them, you need to:
Bind<DbContext>()
    .ToMethod( ctx =>
        new DbContext(
            nameOrConnectionString: ConfigurationUtility.GetConnectionString() ))
    .InCallScope();
The InCallScope() means that:
for a given object graph composed for a single kernel.Get() call (hence In Call Scope), everyone that requires a DbContext will get the same instance.
the IDisposable.Dispose() will be called when a Kernel.Release() happens for the root object (or a Kernel.Components.Get<ICache>().Clear() happens for the root if it is not .InCallScope())
There should be no reason to use InNamedScope() and DefinesNamedScope(); You don't have long-lived objects you're trying to exclude from the default pooling / parenting / grouping.
If you do the above, you should be able to:
var command = kernel.Get<ICommand>();
try {
    command.Execute();
} finally {
    kernel.Components.Get<ICache>().Clear( command ); // Dispose of DbContext happens here
}
The Command implementation looks like:
class Command : ICommand {
    readonly IAccountRepository _ar;
    readonly IBlockedIpRepository _br;
    readonly DbContext _ctx;

    public Command(IAccountRepository ar, IBlockedIpRepository br, DbContext ctx){
        _ar = ar;
        _br = br;
        _ctx = ctx;
    }

    void ICommand.Execute(){
        _ar.Insert(a);
        _br.Insert(b);
        _ctx.SaveChanges();
    }
}
Note that in general, I avoid having an implicit Unit of Work in this way, and instead surface its creation and disposal. This makes a Command look like this:
class Command : ICommand {
    readonly IAccountService _as;
    readonly IBlockedIpService _bs;
    readonly Func<DbContext> _createContext;

    public Command(IAccountService @as, IBlockedIpService bs, Func<DbContext> createContext){
        _as = @as;
        _bs = bs;
        _createContext = createContext;
    }

    void ICommand.Execute(){
        using(var ctx = _createContext()) {
            _as.InsertA(ctx);
            _bs.InsertB(ctx);
            ctx.SaveChanges();
        }
    }
}
This involves no usage of .InCallScope() on the Bind<DbContext>() (but does require the presence of Ninject.Extensions.Factory's FactoryModule to synthesize the Func<DbContext> from a straightforward Bind<DbContext>()).
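A minimal sketch of that binding (assuming the Ninject.Extensions.Factory package is installed, which wires up Func support when its module is loaded):

// No scoping extension needed; each Func<DbContext> invocation
// resolves a fresh DbContext, which the using block then disposes.
kernel.Bind<DbContext>()
      .ToMethod( ctx =>
          new DbContext(
              nameOrConnectionString: ConfigurationUtility.GetConnectionString() ));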
As discussed in the other answer, InCallScope is not a good approach to solving this problem.
For now I'm dumping some code that works against the latest NuGet unstable / Include Prerelease / Install-Package -Pre editions of Ninject.Web.Common without a clear explanation. I have started to write a walkthrough of this technique in the Ninject.Extensions.NamedScope wiki's CreateNamedScope/GetScope article.
Possibly some bits will become Pull Request(s) at some stage too (hat tip to @Remo Gloor, who supplied me the outline code). The associated tests and learning tests are in this gist for now, pending packaging in a proper released format TBD.
The exec summary is: you load the Module below into your kernel, use .InRequestScope() on everything you want created/disposed per handler invocation, and then feed requests through via IHandlerComposer.ComposeCallDispose.
If you use the following Module:
public class Module : NinjectModule
{
    public override void Load()
    {
        Bind<IHandlerComposer>().To<NinjectRequestScopedHandlerComposer>();

        // Wire it up so InRequestScope will work for Handler scopes
        Bind<INinjectRequestHandlerScopeFactory>().To<NinjectRequestHandlerScopeFactory>();
        NinjectRequestHandlerScopeFactory.NinjectHttpApplicationPlugin.RegisterIn( Kernel );
    }
}
Which wires in a Factory[1] and NinjectHttpApplicationPlugin that exposes:
public interface INinjectRequestHandlerScopeFactory
{
    NamedScope CreateRequestHandlerScope();
}
Then you can use this Composer to Run a Request InRequestScope():
public interface IHandlerComposer
{
    void ComposeCallDispose( Type type, Action<object> callback );
}
Implemented as:
class NinjectRequestScopedHandlerComposer : IHandlerComposer
{
    readonly INinjectRequestHandlerScopeFactory _requestHandlerScopeFactory;

    public NinjectRequestScopedHandlerComposer( INinjectRequestHandlerScopeFactory requestHandlerScopeFactory )
    {
        _requestHandlerScopeFactory = requestHandlerScopeFactory;
    }

    void IHandlerComposer.ComposeCallDispose( Type handlerType, Action<object> callback )
    {
        using ( var resolutionRoot = _requestHandlerScopeFactory.CreateRequestHandlerScope() )
            foreach ( object handler in resolutionRoot.GetAll( handlerType ) )
                callback( handler );
    }
}
The Ninject Infrastructure stuff:
class NinjectRequestHandlerScopeFactory : INinjectRequestHandlerScopeFactory
{
    internal const string ScopeName = "Handler";

    readonly IKernel _kernel;

    public NinjectRequestHandlerScopeFactory( IKernel kernel )
    {
        _kernel = kernel;
    }

    NamedScope INinjectRequestHandlerScopeFactory.CreateRequestHandlerScope()
    {
        return _kernel.CreateNamedScope( ScopeName );
    }

    /// <summary>
    /// When plugged in as a Ninject Kernel Component via <c>RegisterIn(IKernel)</c>, makes the Named Scope generated during IHandlerFactory.RunAndDispose available for use via the Ninject.Web.Common's <c>.InRequestScope()</c> Binding extension.
    /// </summary>
    public class NinjectHttpApplicationPlugin : NinjectComponent, INinjectHttpApplicationPlugin
    {
        readonly IKernel kernel;

        public static void RegisterIn( IKernel kernel )
        {
            kernel.Components.Add<INinjectHttpApplicationPlugin, NinjectHttpApplicationPlugin>();
        }

        public NinjectHttpApplicationPlugin( IKernel kernel )
        {
            this.kernel = kernel;
        }

        object INinjectHttpApplicationPlugin.GetRequestScope( IContext context )
        {
            // TODO PR for TryGetScope
            try
            {
                return NamedScopeExtensionMethods.GetScope( context, ScopeName );
            }
            catch ( UnknownScopeException )
            {
                return null;
            }
        }

        void INinjectHttpApplicationPlugin.Start()
        {
        }

        void INinjectHttpApplicationPlugin.Stop()
        {
        }
    }
}
I am writing a number of small, simple applications which share a common structure and need to do some of the same things in the same ways (e.g. logging, database connection setup, environment setup), and I'm looking for advice on structuring the reusable components. The code is written in a strongly and statically typed language (e.g. Java or C#; I've had to solve this problem in both). At the moment I've got this:
abstract class EmptyApp //this is the reusable bit
{
    //various useful fields: loggers, db connections

    abstract function body()

    function run()
    {
        //do setup
        this.body()
        //do cleanup
    }
}
class theApp extends EmptyApp //this is a given app
{
    function body()
    {
        //do stuff using some fields from EmptyApp
    }

    function main()
    {
        theApp app = new theApp()
        app.run()
    }
}
Is there a better way? Perhaps as follows? I'm having trouble weighing the trade-offs...
abstract class EmptyApp
{
    //various fields
}

class ReusableBits
{
    static function doSetup(EmptyApp theApp)
    static function doCleanup(EmptyApp theApp)
}

class theApp extends EmptyApp
{
    function main()
    {
        ReusableBits.doSetup(this);
        //do stuff using some fields from EmptyApp
        ReusableBits.doCleanup(this);
    }
}
One obvious tradeoff is that with option 2, the 'framework' can't wrap the app in a try-catch block...
I've always favored re-use through composition (your second option) rather than inheritance (your first option).
Inheritance should only be used when there is a relationship between the classes rather than for code reuse.
So for your example I would have multiple ReusableBits classes, each doing one thing, which each application can make use of as and when required.
This allows each application to reuse the parts of your framework that are relevant to it without being forced to take everything, allowing the individual applications more freedom. Reuse through inheritance can become very restrictive if, in the future, you have applications that don't exactly fit into the structure you have in mind today.
You will also find unit testing and test driven development much easier if you break your framework up into separate utilities.
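As a rough sketch of the composition approach (the class names are illustrative, and C# is used here even though the question is language-neutral):

using System;

// Each reusable concern stands alone and can be tested in isolation.
class Logging
{
    public void Setup() { /* configure loggers */ }
}

class DatabaseConnection : IDisposable
{
    public void Open() { /* connect */ }
    public void Dispose() { /* disconnect */ }
}

class TheApp
{
    static void Main()
    {
        var logging = new Logging();
        logging.Setup();

        // Only the concerns this app needs are composed in.
        using (var db = new DatabaseConnection())
        {
            db.Open();
            // app-specific work here
        }
    }
}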
Why not make the framework call into your customisable code? Your client creates some object and injects it into the framework. The framework initialises, calls setup() etc., and then calls your client's code. Upon completion (or even after a thrown exception), the framework then calls cleanup() and exits.
So your client would simply implement an interface such as (in Java)
public interface ClientCode {
    void runClientStuff(); // for the sake of argument
}
and the framework code is configured with an implementation of this, and calls runClientStuff() whenever required.
So you don't derive from the application framework, but simply provide a class conforming to a particular contract. You can configure the application setup at runtime (e.g. which class the client will provide to the app) since you're not deriving from the app, and so your dependency isn't static.
The above interface can be extended to have multiple methods, and the application can call the required methods at different stages in the lifecycle (e.g. to provide client-specific setup/cleanup) but that's an example of feature creep :-)
Remember, inheritance is only a good choice if all the objects that inherit reuse the code due to their similarities, or if you want callers to be able to interact with them in the same fashion.
If what I just mentioned applies to you, then in my experience it's always better to have the common logic in your base/abstract class.
This is how I would rewrite your sample app in C#:
abstract class BaseClass
{
    string field1 = "Hello World";
    string field2 = "Goodbye World";

    public void Start()
    {
        Console.WriteLine("Starting.");
        Setup();
        CustomWork();
        Cleanup();
    }

    public virtual void Setup()
    { Console.WriteLine("Doing Base Setup."); }

    public virtual void Cleanup()
    { Console.WriteLine("Doing Base Cleanup."); }

    public abstract void CustomWork();
}

class MyClass : BaseClass
{
    public override void CustomWork()
    { Console.WriteLine("Doing Custom work."); }

    public override void Cleanup()
    {
        Console.WriteLine("Doing Custom Cleanup");
        //You can skip the next line if you want to replace the
        //cleanup code rather than extending it
        base.Cleanup();
    }
}
void Main()
{
    MyClass worker = new MyClass();
    worker.Start();
}