Loose coupling related to composition - OOP

After searching different forums, I learned about tight coupling (when a group of classes are highly dependent on one another).
Example 1
class CustomerRepository
{
private readonly Database database;
public CustomerRepository(Database database)
{
this.database = database;
}
public void Add(string CustomerName)
{
database.AddRow("Customer", CustomerName);
}
}
class Database
{
public void AddRow(string Table, string Value)
{
}
}
The CustomerRepository class above is dependent on the Database class, so they are tightly coupled. I think this class is also an example of composition. Then I searched for loose coupling and changed the class above so that the tight-coupling dependency is removed.
Example 2
class CustomerRepository
{
private readonly IDatabase database;
public CustomerRepository(IDatabase database)
{
this.database = database;
}
public void Add(string CustomerName)
{
database.AddRow("Customer", CustomerName);
}
}
interface IDatabase
{
void AddRow(string Table, string Value);
}
class Database : IDatabase
{
public void AddRow(string Table, string Value)
{
}
}
I have read that composition supports loose coupling. Now my questions are: first, how is Example 1 tightly coupled when it is based on composition? Second, what is the relation between loose coupling and composition?
Any help will be greatly appreciated.

What you have there is not really tight coupling. Tight coupling would be this:
class CustomerRepository {
private readonly Database database;
public CustomerRepository() {
this.database = new Database();
}
}
The class has a hardcoded dependency on a specific Database class which cannot be substituted. That's really tight coupling.
The composition example you're showing is already loosely coupled, since it's entirely possible to substitute the dependency being injected into the constructor by any other Database inheriting class.
Your second example is even more loosely coupled, since it uses an interface instead of a concrete class; but that's something of a minor detail.

@deceze's answer explains most of your questions. I am just adding my 2 cents to it.
Both examples are loosely coupled but to different degrees.
Example 1: you are allowed to inject an object of a concrete type via the constructor.
Example 2: you are allowed to inject an object of an abstract type via the constructor.
What makes example 2 more loosely coupled is the Dependency Inversion Principle. Its main idea is that one should “depend upon abstractions, not upon concretions.”
The second example depends on an interface and not on a concrete class like the first one. Now comes the confusion: why is an interface special and not a class, when both of them do the same thing?
Let's assume that tomorrow you want to delete the Database class and replace it with a new class FlatFile. You would need to change the CustomerRepository class in the first example, but not in the second. In the second example, only the person who creates the instance of CustomerRepository has to worry about replacing the Database class with the FlatFile class, as shown in the sketch below. This is the meaning of loose coupling: changing the Database class should not force you to change the CustomerRepository class.
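For illustration, here is a minimal sketch of that swap (the FlatFile class and its file-based storage are hypothetical); CustomerRepository from Example 2 is untouched, and only the code that composes the objects changes.
class FlatFile : IDatabase
{
    // Hypothetical flat-file implementation; CustomerRepository only sees IDatabase,
    // so it is unaffected by this change.
    public void AddRow(string Table, string Value)
    {
        System.IO.File.AppendAllText(Table + ".txt", Value + System.Environment.NewLine);
    }
}
class Program
{
    static void Main()
    {
        // Only the composition code changes:
        var repository = new CustomerRepository(new FlatFile());
        repository.Add("Alice");
    }
}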
To answer your last question
what is the relation between loose coupling and composition?
There is no direct relation; you can still use composition and mess up the coupling between classes by not following the Dependency Inversion Principle. So the right question you should ask is:
How to make a tightly coupled code to loosely coupled?
Follow Dependency inversion principle.

Related

Do we need interfaces for dependency injection?

I have an ASP.NET Core application. The application has a few helper classes that do some work. Each class has a method with a different signature. I see a lot of .NET Core examples online that create an interface for each class and then register the types with the DI framework. For example
public interface IStorage
{
Task Download(string file);
}
public class Storage : IStorage
{
public Task Download(string file)
{
}
}
public interface IOcr
{
Task Process();
}
public class Ocr:IOcr
{
public Task Process()
{
}
}
Basically, for each interface there is only one class. Then I register these types with DI as
services.AddScoped<IStorage, Storage>();
services.AddScoped<IOcr,Ocr>();
But I can register a type without having an interface, so the interfaces here look redundant, e.g.
services.AddScoped<Storage>();
services.AddScoped<Ocr>();
So do I really need interfaces?
No, you don't need interfaces for dependency injection. But dependency injection is much more useful with them!
As you noticed, you can register concrete types with the service collection and ASP.NET Core will inject them into your classes without problems. The benefit you get by injecting them over simply creating instances with new Storage() is service lifetime management (transient vs. scoped vs. singleton).
That's useful, but only part of the power of using DI. As @DavidG pointed out, the big reason why interfaces are so often paired with DI is testing. Making your consumer classes depend on interfaces (abstractions) instead of other concrete classes makes them much easier to test.
For example, you could create a MockStorage that implements IStorage for use during testing, and your consumer class shouldn't be able to tell the difference. Or, you can use a mocking framework to easily create a mocked IStorage on the fly. Doing the same thing with concrete classes is much harder. Interfaces make it easy to replace implementations without changing the abstraction.
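As a minimal sketch of that idea (MockStorage is named above; the DocumentProcessor consumer is made up for illustration):
using System.Collections.Generic;
using System.Threading.Tasks;

public class MockStorage : IStorage
{
    public List<string> DownloadedFiles { get; } = new List<string>();

    public Task Download(string file)
    {
        // Record the call instead of hitting real storage.
        DownloadedFiles.Add(file);
        return Task.CompletedTask;
    }
}

// A consumer that depends only on the abstraction can be tested without real storage.
public class DocumentProcessor
{
    private readonly IStorage storage;
    public DocumentProcessor(IStorage storage) => this.storage = storage;
    public Task Process(string file) => storage.Download(file);
}
In a test you would construct new DocumentProcessor(new MockStorage()) and assert on DownloadedFiles afterwards.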
Does it work? Yes. Should you do it? No.
Dependency Injection is a tool for the principle of Dependency Inversion: https://en.wikipedia.org/wiki/Dependency_inversion_principle
Or as it's described in SOLID
one should “depend upon abstractions, [not] concretions."
You can just inject concrete classes all over the place and it will work. But it's not what DI was designed to achieve.
No, we don't need interfaces. In addition to injecting classes or interfaces you can also inject delegates. It's comparable to injecting an interface with one method.
Example:
public delegate int DoMathFunction(int value1, int value2);
public class DependsOnMathFunction
{
private readonly DoMathFunction _doMath;
public DependsOnMathFunction(DoMathFunction doMath)
{
_doMath = doMath;
}
public int DoSomethingWithNumbers(int number1, int number2)
{
return _doMath(number1, number2);
}
}
You could do it without declaring a delegate, just injecting a Func<Something, Whatever> and that will also work. I'd lean toward the delegate because it's easier to set up DI. You might have two delegates with the same signature that serve unrelated purposes.
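A minimal registration sketch, assuming services is the usual IServiceCollection from an ASP.NET Core application (the add lambda is purely illustrative):
// The delegate instance is registered like any other service.
DoMathFunction add = (value1, value2) => value1 + value2;
services.AddSingleton(add);

// Consumers such as DependsOnMathFunction then receive it via constructor injection.
services.AddTransient<DependsOnMathFunction>();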
One benefit to this is that it steers the code toward interface segregation. Someone might be tempted to add a method to an interface (and its implementation) because it's already getting injected somewhere so it's convenient.
That means
The interface and implementation gain responsibility they possibly shouldn't have just because it's convenient for someone in the moment.
The class that depends on the interface can also grow in its responsibility but it's harder to identify because the number of its dependencies hasn't grown.
Other classes end up depending on the bloated, less-segregated interface.
I've seen cases where a single dependency eventually grows into what should really be two or three entirely separate classes, all because it was convenient to add to an existing interface and class instead of injecting something new. That in turn helped some classes on their way to becoming 2,500 lines long.
You can't prevent someone from doing what they shouldn't. You can't stop someone from making a class depend on 10 different delegates. But it can set a pattern that guides future growth in the right direction and provides some resistance to growing interfaces and classes out of control.
(This doesn't mean don't use interfaces. It means that you have options.)
I won't try to cover what others have already mentioned; using interfaces with DI will often be the best option. But it's worth mentioning that using object inheritance may at times provide another useful option. So for example:
public class Storage
{
public virtual Task Download(string file)
{
}
}
public class DiskStorage: Storage
{
public override Task Download(string file)
{
}
}
and registering it like so:
services.AddScoped<Storage, DiskStorage>();
Without Interface
public class Benefits
{
public void BenefitForTeacher() { }
public void BenefitForStudent() { }
}
public class Teacher : Benefits
{
private readonly Benefits BT;
public Teacher(Benefits _BT)
{ BT = _BT; }
public void TeacherBenefit()
{
base.BenefitForTeacher();
base.BenefitForStudent();
}
}
public class Student : Benefits
{
private readonly Benefits BS;
public Student(Benefits _BS)
{ BS = _BS; }
public void StudentBenefit()
{
base.BenefitForTeacher();
base.BenefitForStudent();
}
}
Here you can see that the benefits for teachers are accessible in the Student class and the benefits for students are accessible in the Teacher class, which is wrong.
Let's see how we can resolve this problem using interfaces.
With Interface
public interface IBenefitForTeacher
{
void BenefitForTeacher();
}
public interface IBenefitForStudent
{
void BenefitForStudent();
}
public class Benefits : IBenefitForTeacher, IBenefitForStudent
{
public Benefits() { }
public void BenefitForTeacher() { }
public void BenefitForStudent() { }
}
public class Teacher : IBenefitForTeacher
{
private readonly IBenefitForTeacher BT;
public Teacher(IBenefitForTeacher _BT)
{ BT = _BT; }
public void BenefitForTeacher()
{
BT.BenefitForTeacher();
}
}
public class Student : IBenefitForStudent
{
private readonly IBenefitForStudent BS;
public Student(IBenefitForStudent _BS)
{ BS = _BS; }
public void BenefitForStudent()
{
BS.BenefitForStudent();
}
}
Here you can see there is no way to call teacher benefits in the Student class or student benefits in the Teacher class.
So the interface is used here as an abstraction layer.
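Tying this back to the original question, a registration sketch for these types (assuming the same ASP.NET Core container as above) might look like this:
// Each consumer only sees the slice of Benefits it actually needs.
services.AddScoped<IBenefitForTeacher, Benefits>();
services.AddScoped<IBenefitForStudent, Benefits>();
services.AddScoped<Teacher>();
services.AddScoped<Student>();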

How to deal with hard to express requirements for dependencies?

When doing IoC, I (think that I) understand its use for getting the desired application level functionality by composing the right parts, and the benefits for testability. But at the microlevel, I don't quite understand how to make sure that an object gets dependencies injected that it can actually work with. My example for this is a BackupMaker for a database.
To make a backup, the database needs to be exported in a specific format, compressed using a specific compression algorithm, and then packed together with some metadata to form the final binary. Doing all of these tasks seems to be far from a single responsibility, so I ended up with two collaborators: a DatabaseExporter and a Compressor.
The BackupMaker doesn't really care how the database is exported (e.g. using IPC to a utility that comes with the database software, or by doing the right API calls) but it does care a lot about the result, i.e. it needs to be a this-kind-of-database backup in the first place, in the transportable (version agnostic) format, either of which I don't really know how to wrap in a contract. Neither does it care if the compressor does the compression in memory or on disk, but it has to be BZip2.
If I give the BackupMaker the wrong kinds of exporter or compressor, it will still produce a result, but it will be corrupt - it'll look like a backup, but it won't have the format that it should have. It feels like no other part of the system can be trusted to give it those collaborators, because the BackupMaker won't be able to guarantee to do the right thing itself; its job (from my perspective) is to produce a valid backup and it won't if the circumstances ain't right, and worse, it won't know about it. At the same time, even when writing this, it seems to me that I'm saying something stupid now, because the whole point of single responsibilities is that every piece should do its job and not worry about the jobs of others. If it were that simple though, there would be no need for contracts - J.B. Rainsberger just taught me there is. (FYI, I sent him this question directly, but I haven't got a reply yet and more opinions on the matter would be great.)
Intuitively, my favorite option would be to make it impossible to combine classes/objects in an invalid way, but I don't see how to do that. Should I write horrendously specific interface names, like IDatabaseExportInSuchAndSuchFormatProducer and ICompressorUsingAlgorithmXAndParametersY and assume that no classes implement these if they don't behave as such, and then call it a day since nothing can be done about outright lying code? Should I go as far as the mundane task of dissecting the binary format of my database's exports and compression algorithms to have contract tests to verify not only syntax but behavior as well, and then be sure (but how?) to use only tested classes? Or can I somehow redistribute the responsibilities to make this issue go away? Should there be another class whose responsibility it is to compose the right lower level elements? Or am I even decomposing too much?
Rewording
I notice that much attention is given to this very particular example. My question is more general than that, however. Therefore, for the final day of the bounty, I will try to summarize it as follows.
When using dependency injection, by definition, an object depends on other objects for what it needs. In many book examples, the way to indicate compatibility - the capability to provide that need - is by using the type system (e.g. implementing an interface). Beyond that, and especially in dynamic languages, contract tests are used. The compiler (if present) checks the syntax, and the contract tests (that the programmer needs to remember about) verify the semantics. So far, so good. However, sometimes the semantics are still too simple to ensure that some class/object is usable as a dependency to another, or too complicated to be described properly in a contract.
In my example, my class with a dependency on a database exporter considers anything that implements IDatabaseExportInSuchAndSuchFormatProducer and returns bytes as valid (since I don't know how to verify the format). Is very specific naming and such a very rough contract the way to go or can I do better than that? Should I turn the contract test into an integration test? Perhaps (integration) test the composition of all three? I'm not really trying to be generic but am trying to keep responsibilities separate and maintain testability.
What you have discovered in your question is that you have 2 classes that have an implicit dependency on one another. So, the most practical solution is to make the dependency explicit.
There are a number of ways you could do this.
Option 1
The simplest option is to make one service depend on the other, and make the dependent service explicit in its abstraction.
Pros
Few types to implement and maintain.
The compression service could be skipped for a particular implementation just by leaving it out of the constructor.
The DI container is in charge of lifetime management.
Cons
May force an unnatural dependency into a type where it is not really needed.
public class MySqlExporter : IExporter
{
private readonly IBZip2Compressor compressor;
public MySqlExporter(IBZip2Compressor compressor)
{
this.compressor = compressor;
}
public void Export(byte[] data)
{
byte[] compressedData = this.compressor.Compress(data);
// Export implementation
}
}
Option 2
Since you want to make an extensible design that doesn't directly depend on a specific compression algorithm or database, you can use an Aggregate Service (which implements the Facade Pattern) to abstract the more specific configuration away from your BackupMaker.
As pointed out in the article, you have an implicit domain concept (coordination of dependencies) that needs to be realized as an explicit service, IBackupCoordinator.
Pros
The DI container is in charge of lifetime management.
Leaving compression out of a particular implementation is as easy as passing the data through the method.
Explicitly implements a domain concept that you are missing, namely coordination of dependencies.
Cons
Many types to build and maintain.
BackupManager must have 3 dependencies instead of 2 registered with the DI container.
Generic Interfaces
public interface IBackupCoordinator
{
void Export(byte[] data);
byte[] Compress(byte[] data);
}
public interface IBackupMaker
{
void Backup();
}
public interface IDatabaseExporter
{
void Export(byte[] data);
}
public interface ICompressor
{
byte[] Compress(byte[] data);
}
Specialized Interfaces
Now, to make sure the pieces only plug together one way, you need to make interfaces that are specific to the algorithm and database used. You can use interface inheritance to achieve this (as shown) or you can just hide the interface differences behind the facade (IBackupCoordinator).
public interface IBZip2Compressor : ICompressor
{}
public interface IGZipCompressor : ICompressor
{}
public interface IMySqlDatabaseExporter : IDatabaseExporter
{}
public interface ISqlServerDatabaseExporter : IDatabaseExporter
{}
Coordinator Implementation
The coordinators are what do the job for you. The subtle difference between implementations is that the interface dependencies are explicitly called out so you cannot inject the wrong type with your DI configuration.
public class BZip2ToMySqlBackupCoordinator : IBackupCoordinator
{
private readonly IMySqlDatabaseExporter exporter;
private readonly IBZip2Compressor compressor;
public BZip2ToMySqlBackupCoordinator(
IMySqlDatabaseExporter exporter,
IBZip2Compressor compressor)
{
this.exporter = exporter;
this.compressor = compressor;
}
public void Export(byte[] data)
{
this.exporter.Export(data);
}
public byte[] Compress(byte[] data)
{
return this.compressor.Compress(data);
}
}
public class GZipToSqlServerBackupCoordinator : IBackupCoordinator
{
private readonly ISqlServerDatabaseExporter exporter;
private readonly IGZipCompressor compressor;
public GZipToSqlServerBackupCoordinator(
ISqlServerDatabaseExporter exporter,
IGZipCompressor compressor)
{
this.exporter = exporter;
this.compressor = compressor;
}
public void Export(byte[] data)
{
this.exporter.Export(data);
}
public byte[] Compress(byte[] data)
{
return this.compressor.Compress(data);
}
}
BackupMaker Implementation
The BackupMaker can now be generic as it accepts any type of IBackupCoordinator to do the heavy lifting.
public class BackupMaker : IBackupMaker
{
private readonly IBackupCoordinator backupCoordinator;
public BackupMaker(IBackupCoordinator backupCoordinator)
{
this.backupCoordinator = backupCoordinator;
}
public void Backup()
{
// Get the data from somewhere
byte[] data = new byte[0];
// Compress the data
byte[] compressedData = this.backupCoordinator.Compress(data);
// Backup the data
this.backupCoordinator.Export(compressedData);
}
}
Note that even if your services are used in other places than BackupMaker, this neatly wraps them into one package that can be passed to other services. You don't necessarily need to use both operations just because you inject the IBackupCoordinator service. The only place where you might run into trouble is if using named instances in the DI configuration across different services.
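For completeness, a DI registration sketch for Option 2 (assuming a Microsoft.Extensions.DependencyInjection-style container, and concrete BZip2Compressor and MySqlDatabaseExporter classes that implement the specialized interfaces):
// The specialized interfaces ensure only matching pieces can be wired together.
services.AddScoped<IBZip2Compressor, BZip2Compressor>();
services.AddScoped<IMySqlDatabaseExporter, MySqlDatabaseExporter>();
services.AddScoped<IBackupCoordinator, BZip2ToMySqlBackupCoordinator>();
services.AddScoped<IBackupMaker, BackupMaker>();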
Option 3
Much like Option 2, you could use a specialized form of Abstract Factory to coordinate the relationship between concrete IDatabaseExporter and IBackupMaker, which will fill the role of the dependency coordinator.
Pros
Few types to maintain.
Only 1 dependency to register in the DI container, making it simpler to deal with.
Moves lifetime management into the BackupMaker service, which makes it impossible to misconfigure DI in a way that will cause a memory leak.
Explicitly implements a domain concept that you are missing, namely coordination of dependencies.
Cons
Leaving compression out of a particular implementation requires you implement the Null object pattern.
The DI container is not in charge of lifetime management and each dependency instance is per request, which may not be ideal.
If your services have many dependencies, it may become unwieldy to inject them through the constructor of the CoordinationFactory implementations.
Interfaces
I am showing the factory implementation with a Release method for each type. This is to follow the Register, Resolve, and Release pattern which makes it effective for cleaning up dependencies. This becomes especially important if 3rd parties could implement the ICompressor or IDatabaseExporter types because it is unknown what kinds of dependencies they may have to clean up.
Do note however, that the use of the Release methods is totally optional with this pattern and excluding them will simplify the design quite a bit.
public interface IBackupCoordinationFactory
{
ICompressor CreateCompressor();
void ReleaseCompressor(ICompressor compressor);
IDatabaseExporter CreateDatabaseExporter();
void ReleaseDatabaseExporter(IDatabaseExporter databaseExporter);
}
public interface IBackupMaker
{
void Backup();
}
public interface IDatabaseExporter
{
void Export(byte[] data);
}
public interface ICompressor
{
byte[] Compress(byte[] data);
}
BackupCoordinationFactory Implementation
public class BZip2ToMySqlBackupCoordinationFactory : IBackupCoordinationFactory
{
public ICompressor CreateCompressor()
{
return new BZip2Compressor();
}
public void ReleaseCompressor(ICompressor compressor)
{
IDisposable disposable = compressor as IDisposable;
if (disposable != null)
{
disposable.Dispose();
}
}
public IDatabaseExporter CreateDatabaseExporter()
{
return new MySqlDatabaseExporter();
}
public void ReleaseDatabaseExporter(IDatabaseExporter databaseExporter)
{
IDisposable disposable = databaseExporter as IDisposable;
if (disposable != null)
{
disposable.Dispose();
}
}
}
public class GZipToSqlServerBackupCoordinationFactory : IBackupCoordinationFactory
{
public ICompressor CreateCompressor()
{
return new GZipCompressor();
}
public void ReleaseCompressor(ICompressor compressor)
{
IDisposable disposable = compressor as IDisposable;
if (disposable != null)
{
disposable.Dispose();
}
}
public IDatabaseExporter CreateDatabaseExporter()
{
return new SqlServerDatabaseExporter();
}
public void ReleaseDatabaseExporter(IDatabaseExporter databaseExporter)
{
IDisposable disposable = databaseExporter as IDisposable;
if (disposable != null)
{
disposable.Dispose();
}
}
}
BackupMaker Implementation
public class BackupMaker : IBackupMaker
{
private readonly IBackupCoordinationFactory backupCoordinationFactory;
public BackupMaker(IBackupCoordinationFactory backupCoordinationFactory)
{
this.backupCoordinationFactory = backupCoordinationFactory;
}
public void Backup()
{
// Get the data from somewhere
byte[] data = new byte[0];
// Compress the data
byte[] compressedData;
ICompressor compressor = this.backupCoordinationFactory.CreateCompressor();
try
{
compressedData = compressor.Compress(data);
}
finally
{
this.backupCoordinationFactory.ReleaseCompressor(compressor);
}
// Backup the data
IDatabaseExporter exporter = this.backupCoordinationFactory.CreateDatabaseExporter();
try
{
exporter.Export(compressedData);
}
finally
{
this.backupCoordinationFactory.ReleaseDatabaseExporter(exporter);
}
}
}
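A matching registration sketch for Option 3 (same container assumptions as above):
// Only the factory and the BackupMaker are registered;
// the factory creates and releases its own compressor and exporter.
services.AddScoped<IBackupCoordinationFactory, BZip2ToMySqlBackupCoordinationFactory>();
services.AddScoped<IBackupMaker, BackupMaker>();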
Option 4
Create a guard clause in your BackupMaker class to prevent non-matching types from being allowed, and throw an exception in the case they are not matched.
In C#, you can do this with attributes (which apply custom metadata to the class). Support for this option may or may not exist in other platforms.
Pros
Seamless - no extra types to configure in DI.
The logic for comparing whether types match could be expanded to include multiple attributes per type, if needed. So a single compressor could be used for multiple databases, for example.
100% of invalid DI configurations will cause an error (although you may wish to make the exception specify how to make the DI configuration work).
Cons
Leaving compression out of a particular backup configuration requires you implement the Null object pattern.
The business logic for comparing types is implemented in a static extension method, which makes it testable but impossible to swap with another implementation.
If the design is refactored so that ICompressor or IDatabaseExporter are not dependencies of the same service, this will no longer work.
Custom Attribute
In .NET, an attribute can be used to attach metadata to a type. We make a custom DatabaseTypeAttribute that we can compare the database type name with two different types to ensure they are compatible.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class DatabaseTypeAttribute : Attribute
{
public DatabaseTypeAttribute(string databaseType)
{
this.DatabaseType = databaseType;
}
public string DatabaseType { get; set; }
}
Concrete ICompressor and IDatabaseExporter Implementations
[DatabaseType("MySql")]
public class MySqlDatabaseExporter : IDatabaseExporter
{
public void Export(byte[] data)
{
// implementation
}
}
[DatabaseType("SqlServer")]
public class SqlServerDatabaseExporter : IDatabaseExporter
{
public void Export(byte[] data)
{
// implementation
}
}
[DatabaseType("MySql")]
public class BZip2Compressor : ICompressor
{
public byte[] Compress(byte[] data)
{
// implementation
}
}
[DatabaseType("SqlServer")]
public class GZipCompressor : ICompressor
{
public byte[] Compress(byte[] data)
{
// implementation
}
}
Extension Method
We roll the comparison logic into an extension method so every implementation of IBackupMaker automatically includes it.
public static class BackupMakerExtensions
{
public static bool DatabaseTypeAttributesMatch(
this IBackupMaker backupMaker,
Type compressorType,
Type databaseExporterType)
{
// Use .NET Reflection to get the metadata
DatabaseTypeAttribute compressorAttribute = (DatabaseTypeAttribute)compressorType
.GetCustomAttributes(attributeType: typeof(DatabaseTypeAttribute), inherit: true)
.SingleOrDefault();
DatabaseTypeAttribute databaseExporterAttribute = (DatabaseTypeAttribute)databaseExporterType
.GetCustomAttributes(attributeType: typeof(DatabaseTypeAttribute), inherit: true)
.SingleOrDefault();
// Types with no attribute are considered invalid even if they implement
// the corresponding interface
if (compressorAttribute == null) return false;
if (databaseExporterAttribute == null) return false;
return compressorAttribute.DatabaseType.Equals(databaseExporterAttribute.DatabaseType);
}
}
BackupMaker Implementation
A guard clause ensures that 2 classes with non-matching metadata are rejected before the type instance is created.
public class BackupMaker : IBackupMaker
{
private readonly ICompressor compressor;
private readonly IDatabaseExporter databaseExporter;
public BackupMaker(ICompressor compressor, IDatabaseExporter databaseExporter)
{
// Guard to prevent against nulls
if (compressor == null)
throw new ArgumentNullException("compressor");
if (databaseExporter == null)
throw new ArgumentNullException("databaseExporter");
// Guard to prevent against non-matching attributes
if (!this.DatabaseTypeAttributesMatch(compressor.GetType(), databaseExporter.GetType()))
{
throw new ArgumentException(compressor.GetType().FullName +
" cannot be used in conjunction with " +
databaseExporter.GetType().FullName);
}
this.compressor = compressor;
this.databaseExporter = databaseExporter;
}
public void Backup()
{
// Get the data from somewhere
byte[] data = new byte[0];
// Compress the data
byte[] compressedData = this.compressor.Compress(data);
// Backup the data
this.databaseExporter.Export(compressedData);
}
}
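To see the guard in action, a mismatched composition fails fast (a sketch using the attributed classes above):
// Throws ArgumentException at construction time, because the DatabaseType
// attributes ("MySql" vs. "SqlServer") do not match.
var backupMaker = new BackupMaker(new BZip2Compressor(), new SqlServerDatabaseExporter());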
If you decide on one of these options, I would appreciate if you left a comment as to which one you go with. I have a similar situation in one of my projects, and I am leaning toward Option 2.
Response to your Update
Is very specific naming and such a very rough contract the way to go or can I do better than that? Should I turn the contract test into an integration test? Perhaps (integration) test the composition of all three? I'm not really trying to be generic but am trying to keep responsibilities separate and maintain testability.
Creating an integration test is a good idea, but only if you are certain that you are testing the production DI configuration. Although it also makes sense to test it all as a unit to verify it works, it doesn't do you much good for this use case if the code that ships is configured differently than the test.
Should you be specific? I believe I have already given you a choice in that matter. If you go with the guard clause, you don't have to be specific at all. If you go with one of the other options, you have a good compromise between specific and generic.
I know you stated that you are not intentionally trying to be generic, and it is good to draw the line somewhere to ensure a solution is not over-engineered. On the other hand, if the solution has to be redesigned because an interface was not generic enough that is not a good thing either. Extensibility is always a requirement whether it is specified up front or not because you never really know how business requirements will change in the future. So, having a generic BackupMaker is definitely the best way to go. The other classes can be more specific - you just need one seam to swap implementations if future requirements change.
My first suggestion would be to think critically about whether you need to be that generic: you have a concrete problem to solve, you want to back up a very specific database into a specific format. Is there any benefit in solving the problem for arbitrary databases and arbitrary formats? What you surely get from a generic solution is boilerplate code and increased complexity (people understand concrete problems, not generic ones).
If this applies to you, then my suggestion would be to not let your DatabaseExporter accept interfaces, but instead only concrete implementations. There are enough modern tools out there that will also allow you to mock concrete classes, so testability is not an argument for using interfaces here either.
On the other hand, if you do have to back up several databases with different strategies, then I would probably introduce something like a
class BackupPlan {
public DatabaseExporter exporter() {/**...*/}
public Compressor compressor() {/** ... */}
}
Your BackupMaker will then be passed one BackupPlan, specifying which database is to be backed up with which compression algorithm.
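A minimal C# sketch of how BackupMaker might consume such a plan (assuming a C# version of BackupPlan with Exporter and Compressor properties and the byte[]-based signatures used earlier in the thread):
public class BackupMaker
{
    private readonly BackupPlan plan;

    public BackupMaker(BackupPlan plan)
    {
        this.plan = plan;
    }

    public void MakeBackup(byte[] data)
    {
        // The plan fixes which exporter is paired with which compressor,
        // so BackupMaker cannot be handed a mismatched pair.
        byte[] compressed = plan.Compressor.Compress(data);
        plan.Exporter.Export(compressed);
    }
}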
Your question is emphasizing the fact that object composition is very important and that the entity that is responsible for such composition (wiring) has a big responsibility.
Since you already have a generic BackupMaker, I would suggest that you keep it this way, and push the big responsibility of making sure that the right composition of objects (to solve the specific problem) is done in the composition root.
Readers of your application source code (you and your team members) would have a single place (the composition root) to understand how you compose your objects to solve your specific problem using the generic classes (e.g. BackupMaker).
Put another way, the composition root is where you decide on the specifics. It's where you use the generic to create the specific.
To reply on the comment:
which should know what about those dependencies?
The composition root needs to know about everything (all the dependencies) since it is creating all the objects in the application and wiring them together. The composition root knows what each piece of the puzzle does and it connects them together to create a meaningful application.
For the BackupMaker, it should only care about just enough to be able to do its single responsibility. In your example, its single (simple) responsibility (as it seems to me) is to orchestrate the consumption of other objects to create a backup.
As long as you are using DI, a class will never be sure that its collaborator will behave correctly, only the composition root will. Consider this simple and extreme example of an IDatabaseExporter implementation (assume that the developer actually gave this class this name, and that he intentionally implemented it this way):
public class StupidDisastrousDatabaseExporter : IDatabaseExporter
{
public ExportedData Export()
{
DoSomethingStupidThatWillDeleteSystemDataAndMakeTheEnterpriseBroke();
...
}
private void DoSomethingStupidThatWillDeleteSystemDataAndMakeTheEnterpriseBroke()
{
//do it
...
}
}
Now, the BackupMaker will never know that it is consuming a stupid and disastrous database exporter, only the composition root does. We can never blame the programmer that wrote the BackupMaker class for this disastrous mistake (or the programmer who designed the IDatabaseExporter contract). But the programmer(s) that are composing the application in the composition root are blamed if they inject a StupidDisastrousDatabaseExporter instance into the constructor of BackupMaker.
Of course, no one should have written the StupidDisastrousDatabaseExporter class in the first place, but I gave you an extreme example to show you that a contract (interface) can never (and should never) guarantee every aspect about its implementors. It should just say enough.
Is there a way to express IDatabaseExporter in such a way that guarantees that implementors of such interface will not make stupid or disastrous actions? No.
Please note that while the BackupMaker is dealing with contracts (no 100% guarantees), the composition root is actually dealing with concrete implementation classes. This gives it the great power (and thus the great responsibility) to guarantee the composition of the correct object graph.
how do I make sure that I'm composing in a sensible way?
You should create automated end-to-end tests for the object graph created by the composition root. Here you are making sure that the composition root has done its big responsibility of composing the objects in a correct way. Here you can test the exact details that you wanted (like that the backup result was in some exact format/details).
Take a look at this article for an approach to automated testing using the Composition Root.
I believe this may be a problem that occurs when focusing too much on object models, at the exclusion of function compositions. Consider the first step in a naive function decomposition (function as in f : a -> b):
exporter: data -> (format, memory), or exception
compressor: memory -> memory, or exception
writer: memory -> side-effect, or exception
backup-maker: (data, exporter, compressor, writer) -> backup-result
So backup-maker, the last function, can be parameterized with those three functions, assuming I've considered your use case correctly, and provided the three parameters have the same input and output types, e.g. format and memory, regardless of their implementation.
Now, "the guts", or a possible decomposition (read right to left) of backup-maker, with all functions bound, taking data as the argument, and using the composition operator ".":
backup-maker: intermediate-computation . writer . intermediate-computation . compressor . intermediate-computation . exporter
I especially want to note that this model of architecture can be expressed later as either object interfaces or as first-class functions, e.g. C++ std::function.
Edit: It can also be refined to terms of generics, where memory is a generic type argument, to provide type safety where wanted. E.g.
backup-maker<type M>: (data, exporter<M>, compressor<M>, writer<M>) -> ..
More information about the technique and benefits of Function Decomposition can be found here:
http://jfeltz.com/posts/2015-08-30-cost-decreasing-software-architecture.html
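A rough C# rendering of that decomposition, using delegates as the first-class functions (the types and names are purely illustrative):
using System;

public static class FunctionalBackup
{
    // backup-maker: (data, exporter, compressor, writer) -> backup-result
    public static void MakeBackup(
        byte[] data,
        Func<byte[], byte[]> exporter,    // data -> exported memory
        Func<byte[], byte[]> compressor,  // memory -> compressed memory
        Action<byte[]> writer)            // memory -> side effect
    {
        writer(compressor(exporter(data)));
    }
}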
Your requirements seem contradictory:
You want to be specific (allowing only a subset (or only one ?) of combinations)
But you also want to be generic by using interfaces, DI, etc.
My advice is to keep things simple (in your case that means don't try to be generic) until your code evolves.
Only when your code evolves should you refactor in a more generic way. The code below shows a compromise between generic and specific:
public interface ICompressor {
public byte[] compress(byte[] source); //Note: the return type and type argument may not be relevant, just for demonstration purposes
}
public interface IExporter {
public File export(String connectionString); //Note: the return type and type argument may not be relevant, just for demonstration purposes
}
public final class Bzip2 implements ICompressor {
@Override
public final byte[] compress(byte[] source) {
//TODO
}
}
public final class MySQL implements IExporter {
@Override
public final File export(String connnectionString) {
//TODO
}
}
public abstract class ABackupStrategy {
private final ICompressor compressor;
private final IExporter exporter;
public ABackupStrategy(final ICompressor compressor, final IExporter exporter) {
this.compressor = compressor;
this.exporter = exporter;
}
public final void makeBackup() {
//TODO: compose with exporter and compressor to make your backup
}
}
public final class MyFirstBackupStrategy extends ABackupStrategy {
public MyFirstBackupStrategy(final Bzip2 compressor, final MySQL exporter) {
super(compressor, exporter);
}
}
With ICompressor and IExporter, you can easily add other compression algorithms and other databases from which to export.
With ABackupStrategy, you can easily define a new allowed combination of concrete compressor/exporter by inheriting from it.
Drawback: I had to make ABackupStrategy abstract without declaring any abstract method, which is in contradiction with OOP principles.

OO Design Principle - Open Closed Principle

I have a LayoutManager class, and this class is designed for setting a datagrid layout.
Code:
class LayoutManager
{
private object _target;
public LayoutManager(object aDataGrid)
{
_target = aDataGrid;
}
public void SaveLayout(string strProfileID)
{
}
public void LoadLayout(string strProfileID)
{
}
//in future I might add below function
public void ResetLayout()//OtherFunction0
{
}
public void OtherFunction1()
{
}
public void OtherFunction2()
{
}
}
According to the OCP, "a class should be open for extension, but closed for modification". If I add a new function to the LayoutManager class, does this action violate the OCP? If yes, what is the proper way to design the class?
I don't think that adding methods to a class in general violates the OCP principle, as this in fact extends the class's behaviour. The problem is if you change existing behaviours. So if the code in your added methods might change the behaviour of the existing methods (because it changes the object's state), that would be a violation.
The correct way to follow the SOLID principles is to make an interface, ILayoutManager, with the methods you want and with documented behaviours. The class LayoutManager would implement this interface.
Other new methods might be added in a new interface, say ILayoutFoo, or added to the existing interface, as long as they won't break the contract of the documented behaviour in the existing methods.
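A minimal sketch of that interface, based on the methods already present in the question:
public interface ILayoutManager
{
    void SaveLayout(string strProfileID);
    void LoadLayout(string strProfileID);
}

public class LayoutManager : ILayoutManager
{
    private readonly object _target;

    public LayoutManager(object aDataGrid)
    {
        _target = aDataGrid;
    }

    public void SaveLayout(string strProfileID) { /* ... */ }
    public void LoadLayout(string strProfileID) { /* ... */ }
}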
It's not possible to directly answer this without some concrete code.
Generally speaking though, the upshot of the OCP is that when classes derive from your base class and then override methods, the internal invariants shouldn't break because that's modification. There shouldn't be any way for the derived class to change those parts of the class' behaviour. The derived classes can change behaviour or add new functionality by using the parts exposed by the base class.
Whenever we speak about Open-Closed Principle, one important issue comes into play, which is called Strategic Closure.
It should be clear that no significant program can be 100% closed. In general, no matter how “closed” a module is, there will always be some kind of change against which it is not closed. Since closure cannot be complete, it must be strategic. That is, the designer must choose the kinds of changes against which to close his design. This takes a certain amount of prescience derived from experience. The experienced designer knows the users and the industry well enough to judge the probability of different kinds of changes. He then makes sure that the open-closed principle is invoked for the most probable changes.
For example, in the famous Shape class sample, you only guarantee that your program (on the Client and Shape side) is closed for modification with respect to adding new shapes.
public class Shape {
public void draw() {
}
}
public class Circle extends Shape {
@Override
public void draw() {
// implementation special to Circle
}
}
public class Client {
...
public void drawMyShape(Shape shape) {
shape.draw();
}
...
}
According to this strategy, when you are designing your program you should decide which sections you want to be closed to changes. Therefore, in your example, if when designing your program you decided that your entity (in this case the LayoutManager class) should be closed for modification and open to extension with regard to adding new functionality, then adding a new function violates the Open-Closed Principle, because it changes the implementation on the client side and in the LayoutManager class. The solution can be to use abstraction, as mentioned in the previous answers.

Multiple Inheritance: What's a good example?

I'm trying to find a good example for the use of multiple inheritance that cannot be done with normal interfaces.
I think it's pretty hard to find such an example which cannot be modeled in another way.
Edit: I mean, can someone give me a good real-world example of when you NEED multiple inheritance to implement that example as cleanly as possible? It should not make use of multiple interfaces, just the way you can inherit multiple classes in C++.
The following is a classic:
class Animal {
public:
virtual void eat();
};
class Mammal : public Animal {
public:
virtual void breathe();
};
class WingedAnimal : public Animal {
public:
virtual void flap();
};
// A bat is a winged mammal
class Bat : public Mammal, public WingedAnimal {
};
Source: wiki.
One example where multiple class inheritance makes sense is the Observer pattern. This pattern describes two actors, the observer and the observable, and the former wants to be notified when the latter changes its object state.
A simplified version for notifying clients can look like this in C#:
public abstract class Observable
{
private readonly List<IObserver> _observers = new List<IObserver>();
// Objects that want to be notified when something changes in
// the observable can call this method
public void Subscribe(IObserver observer)
{
_observers.Add(observer);
}
// Subclasses can call this method when something changes
// to notify all observers
protected void Notify()
{
foreach (var observer in _observers)
observer.Notify();
}
}
This is basically the core logic you need to notify all the registered observers. You could make any class observable by deriving from this class, but as C# only supports single class inheritance, you then cannot derive from any other class. Something like this wouldn't work:
public class ImportantBaseClass { /* Members */ }
public class MyObservableSubclass : ImportantBaseClass, Observable { /* Members */ }
In these cases you often have to replicate the code that makes subclasses observable in all of them, basically violating the Don't Repeat Yourself and the Single Point of Truth principles (if you did MVVM in C#, think about it: how often did you implement the INotifyPropertyChanged interface?). A solution with multiple class inheritance would be much cleaner in my opinion. In C++, the above example would compile just fine.
Uncle Bob wrote an interesting article about this, that is where I got the example from. But this problem often applies to all interfaces that are *able (e.g. comparable, equatable, enumerable, etc.): a multiple class inheritance version is often cleaner in these cases, as stated by Bertrand Meyer in his book "Object-Oriented Software Construction".

Bridge Pattern - Composition or Aggregation?

I'm reading some books about design patterns, and while some describe the relation between the abstraction and the implementation as a composition, others describe it as an aggregation. Now I wonder: is this dependent on the implementation? On the language? Or the context?
The terms "composition" and "aggregation" mean more or less the same thing and may be used interchangeably. Aggregation may be used more frequently when describing container classes such as lists, dynamic arrays, maps, and queues where the elements are all of the same type; however, both terms may be found to describe classes defined in terms of other classes, regardless of whether those types are homogenous (all of the same type) or heterogenous (objects of different types).
To make this clearer:
class Car {
// ...
private:
Engine engine;
Hood hood;
};
// The car is *composed* of an engine and a hood. Hence, composition. You are
// also bringing together (i.e. *aggregating*) an engine and hood into a car.
The relationship between abstraction and implementation typically implies inheritance, rather than composition/aggregation; typically the abstraction is an interface or virtual base class, and the implementation is a fully concrete class that implements the given interface. But, to make things confusing, composition/aggregation can be a part of the interface (because, for example, you may need to set/get the objects that are used as building blocks), and they are also an approach to implementation (because you might use delegation to provide the definition for methods in your implementation).
To make this clearer:
interface Car {
public Engine getEngine();
public Hood getHood();
public void drive();
}
// In the above, the fact that a car has these building blocks
// is a part of its interface (the abstraction).
class HondaCivic2010 implements Car {
public void drive(){ getEngine().drive(); }
// ...
}
// In the above, composition/delegation is an implementation
// strategy for providing the drive functionality.
Since you have tagged your question "bridge", I should point out that the definition of the bridge pattern is a pattern where you use composition rather than inheritance to allow for variation at multiple different levels. An example that I learned at college... using inheritance you might have something like:
class GoodCharacter;
class BadCharacter;
class Mage;
class Rogue;
class GoodMage : public GoodCharacter, Mage;
class BadMage : public BadCharacter, Mage;
class GoodRogue : public GoodCharacter, Rogue;
class BadRogue : public BadCharacter, Rogue;
As you can see, this kind of thing goes pretty crazy, and you get a ridiculous number of classes. The same thing, with the bridge pattern, would look like:
class Personality;
class GoodPersonality : public Personality;
class BadPersonality : public Personality;
class CharacterClass;
class Mage : public CharacterClass;
class Rogue : public CharacterClass;
class Character {
public:
// ...
private:
CharacterClass character_class;
Personality personality;
};
// A character has both a character class and a personality.
// This is a perfect example of the bridge pattern, and we've
// reduced MxN classes into a mere M+N classes, and we've
// arguably made the system even more flexible than before.
The Bridge pattern must use delegation (aggregation/composition, not inheritance). From the Gang of Four book:
Use the Bridge pattern when
* you want to avoid a permanent binding between an abstraction and its implementation. This might be the case, for example, when the implementation must be selected or switched at run-time.
* both the abstractions and their implementations should be extensible by subclassing. In this case, the Bridge pattern lets you combine the different abstractions and implementations and extend them independently.
* changes in the implementation of an abstraction should have no impact on clients; that is, their code should not have to be recompiled.
* (C++) you want to hide the implementation of an abstraction completely from clients. In C++ the representation of a class is visible in the class interface.
* you have a proliferation of classes as shown earlier in the first Motivation diagram. Such a class hierarchy indicates the need for splitting an object into two parts. Rumbaugh uses the term "nested generalizations" [RBP+91] to refer to such class hierarchies.
* you want to share an implementation among multiple objects (perhaps using reference counting), and this fact should be hidden from the client. A simple example is Coplien's String class [Cop92], in which multiple objects can share the same string representation (StringRep).
The standard UML of the Bridge pattern clears up the confusion around this. Below is a brief example to illustrate it.
Apologies for the lengthy code; the best way is to copy it into Visual Studio to understand it easily.
Read through the explanation written at the end of the code.
interface ISpeak
{
void Speak();
}
class DogSpeak : ISpeak
{
public void Speak()
{
Console.WriteLine("Dog Barks");
}
}
class CatSpeak : ISpeak
{
public void Speak()
{
Console.WriteLine("Cat Meows");
}
}
abstract class AnimalBridge
{
protected ISpeak Speech;
protected AnimalBridge(ISpeak speech)
{
this.Speech = speech;
}
public abstract void Speak();
}
class Dog : AnimalBridge
{
public Dog(ISpeak dogSpeak)
: base(dogSpeak)
{
}
public override void Speak()
{
Speech.Speak();
}
}
class Cat : AnimalBridge
{
public Cat(ISpeak catSpeak)
: base(catSpeak)
{
}
public override void Speak()
{
Speech.Speak();
}
}
-- ISpeak is the abstraction that both DogSpeak and CatSpeak have to implement.
-- The Dog and Cat classes are decoupled from the concrete speech implementations by introducing a bridge, AnimalBridge, which is composed of ISpeak.
-- Dog and Cat extend AnimalBridge and thus only depend on the ISpeak abstraction, not on DogSpeak or CatSpeak.
Hope this clarifies it.
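As a quick usage sketch of the classes above (inside the Main method of a console app):
class Program
{
    static void Main()
    {
        AnimalBridge dog = new Dog(new DogSpeak());
        AnimalBridge cat = new Cat(new CatSpeak());

        dog.Speak(); // prints "Dog Barks"
        cat.Speak(); // prints "Cat Meows"
    }
}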