IoC, Class Factory, Open/Closed - vb.net

I have a question about IoC, factories, and the Open/Closed Principle.
Consider, if you will, the following factory:
Public Function PODocument(_type As String) As IPODocument
    Dim d As New PODocument
    If _type = "service" Then
        d.header = New ServicePOHeader()
        d.details = New ServicePOLineItems()
    ElseIf _type = "merchandise" Then
        d.header = New MerchandisePOHeader()
        d.details = New MerchandisePOLineItems()
    End If
    Return d
End Function
This is working for me and I can nicely have a webpage show information about heterogeneous collections.
My challenge is that today someone told me sometimes a certain customer will order a service and merchandise together. Come on, who among us could have seen that coming?
So I wrote a new set of providers to handle the added complexity, changed the factory to include a case for the new type, and I'm back off and running.
However, I have violated the open/closed principle by changing the factory, which was released to production.
Is there a way to set this up so that I am not constantly changing the factory?
Thanks in advance.

Yes. The simplest approach for your case would be to define a factory class for each _type and name them ServiceFactory, MerchandiseFactory, etc., or decorate them with an attribute such as <PODocumentType("Service")>.
Then just find all the factories (for example, using reflection), put them in a Dictionary(Of String, IPODocumentFactory), and select the correct one based on the key.
In a more complicated case, the IPODocumentFactory interface may include a CanCreate() method in addition to Create(). Then you can select a factory from the list based on its own assessment of the current situation.
Note that the discovery and list resolution support is often provided out of the box by DI frameworks such as Unity or Autofac.
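The keyed-factory idea above can be sketched quickly (in Python for brevity; all class and function names here are illustrative, not from the question). The point is that adding a "combined" order type means adding one new registered factory class, while the dispatch code itself never changes:

```python
class PODocument:
    def __init__(self, header, details):
        self.header = header
        self.details = details

# Registry mapping each _type key to its factory instance.
FACTORIES = {}

def po_factory(type_key):
    """Class decorator that registers a factory under its type key."""
    def register(cls):
        FACTORIES[type_key] = cls()
        return cls
    return register

@po_factory("service")
class ServiceFactory:
    def create(self):
        return PODocument("ServicePOHeader", "ServicePOLineItems")

@po_factory("merchandise")
class MerchandiseFactory:
    def create(self):
        return PODocument("MerchandisePOHeader", "MerchandisePOLineItems")

def create_document(type_key):
    # Dictionary dispatch replaces the if/else chain; an unknown key
    # fails loudly with a KeyError instead of silently returning an
    # unconfigured document.
    return FACTORIES[type_key].create()
```

A new type is supported by writing one new decorated class; `create_document` stays closed for modification.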

Related

OOP - Best way to populate Object

I try to follow OOP and S.O.L.I.D for my code and came across a topic where I'm not sure what's the best way to align it with OOP.
When I have an Object with different properties, what's the ideal way to populate those properties?
Should the Object if possible populate them?
Should separate code assign the values to the properties?
I have the following example (Python), which I created based on what I think is the best approach.
The Object has methods that populate its properties, so if you instantiate the Object, in this case an Account with an ID, it should automatically populate the properties, since they can be retrieved with the ID via the DB and API.
class Account:
    def __init__(self, account_id):
        self.id = account_id
        self.name = None
        self.page = Page(self.id)

    def populate(self):
        self.get_name()

    def get_name(self):
        ''' Code to retrieve Account name from DB with account_id '''
        self.name = <NAME>

class Page:
    def __init__(self, account_id):
        self.id = 0
        self.name = None
        self.user_count = 0
        self.populate()

    def populate(self):
        self.get_id()
        self.get_name()
        self.get_user_count()

    def get_id(self):
        ''' Code to retrieve Page Id from DB with account_id '''
        self.id = <Page_ID>

    def get_name(self):
        ''' Code to retrieve Page name from DB with account_id '''
        self.name = <NAME>

    def get_user_count(self):
        ''' Code to retrieve user_count from API with page_id '''
        self.user_count = <user_count>
So instead of doing something like:
account = Account()
account.id = 1
account.name = function.to.get.account.name()
account.page = Page()
account.page.id = 1
I will have it managed by the class itself.
The problem I could see with this is the dependency on the DB and API classes/methods which go against S.O.L.I.D (D - Dependency Inversion Principle) as I understand.
For me it makes sense to have the class handle this. Today I have some similar code that describes the Object and then populates its properties in a separate Setup class, but I feel this is unnecessary.
The articles I found for OOP/S.O.L.I.D did not cover this and I didn't really find any explanation on how to handle this with OOP.
Thanks for your help!
Michael
Your design seems fairly OK. Try having a look at the Builder pattern and the Composite pattern. As for your question on the violation of the D principle:
The problem I could see with this is the dependency on the DB and API classes/methods which go against S.O.L.I.D (D - Dependency Inversion Principle) as I understand.
The dependencies are already abstracted into separate components, I would guess. It is the responsibility of each component to give you the data you need per its contract; your implementation should be independent of what that component does internally. A DAO layer for accessing the DB and data APIs can help you achieve this.
The dependency your system then has is on the DAO component, which can be injected in different ways. There are frameworks available to help you here, or you can control the creation of objects and the injections yourself with some combination of the factory, builder, and singleton patterns. If it is a single DB instance or log file, you may also consider using a singleton to get the component.
Hope it helps!
In looking at SOLID and OOP, the key is to analyze what it is you're doing and separate out the various elements to make sure they do not end up mixed together. If they do, the code tends to become brittle and difficult to change.
Looking at the Account class several questions need to be answered when considering the SOLID principles. The first is what is the responsibility of the Account class? The Single Responsibility Principle would say there should be one and only one reason for it to change. So does the Account class exist to retrieve the relevant information from the DB or does it exist for some other purpose? If for some other purpose then it has multiple responsibilities and the code should be refactored so that the class has a single responsibility.
With OOP, typically when retrieving data from a DB, a repository class is used to retrieve information for an object. So in this case the code used to retrieve information from the DB would be extracted out from the Account class and put in a repository class: AccountRepository.
Dependency Inversion means that the Account class should not depend explicitly on the AccountRepository class; instead, the Account class should depend on an abstraction. The abstraction would be an interface or an abstract class with no implementation. In this case we could use an interface called IAccountRepository. Now the Account class could use either an IoC container or a factory class to get the repository instance while only having knowledge of IAccountRepository.
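The repository abstraction described above can be sketched in Python (keeping with the question's language; the in-memory repository and its names are illustrative stand-ins, not the questioner's real DB code). The Account class depends only on the abstract repository, so a DB-backed or in-memory implementation can be swapped in without touching Account:

```python
from abc import ABC, abstractmethod

class AccountRepository(ABC):
    """The abstraction Account depends on (the IAccountRepository role)."""
    @abstractmethod
    def get_name(self, account_id):
        ...

class InMemoryAccountRepository(AccountRepository):
    """Illustrative stand-in for a real DB-backed repository."""
    def __init__(self, rows):
        self._rows = rows

    def get_name(self, account_id):
        return self._rows[account_id]

class Account:
    def __init__(self, account_id, repository: AccountRepository):
        # Account never knows which concrete repository it received.
        self.id = account_id
        self.name = repository.get_name(account_id)
```

In a test you inject the in-memory repository; in production you would inject the DB-backed one, and Account itself has a single responsibility.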

VB.NET EF - Creating a generic function that inspects an entity & permits entity save? Code First

I know, this sounds strange... But this is what I'm trying to do, how far along I am, and why I'm even doing it in the first place:
- Pass a generic list of objects (entities, really) to a class.
- That class (which can be pre-configured for this particular entity class) does just one thing: it adds new entities to the backend database via the DbContext.
- The same class can be used for any entity described in the context class, but each instance of the class is for just one entity class.
- The class is configured, as an instance, with the name of the entity class; the context class is also available for preparing the batch class.
I want to write an article on my blog showing the performance of dynamically adjusting the batch size when working with EF persistence, and how to continuously search for the optimum batch size.
When I say "DBContext" class, I mean a class similar to this:
Public Class CarContext
    Inherits DbContext

    Public Sub New()
        MyBase.New("name=vASASysContext")
    End Sub

    Public Property Cars As DbSet(Of Car)

    Protected Overrides Sub OnModelCreating(modelBuilder As DbModelBuilder)
        modelBuilder.Configurations.Add(Of Car)(New CarConfiguration())
    End Sub
End Class
That may sound confusing. Here is a use-case (sorta):
Need: I need to add a bunch of entities to a database. Let's call them cars, for lack of an easier example. Each entity is an instantiation of a Car class, which is configured via Code First EF6 to be manipulated like any other class that is well defined in the DbContext. Just like real-world classes, not all attributes are mapped to a database column.
High Level:
-I throw all the entities into a generic list, the kind of list supported by our 'batch class'.
-I create an instance of our batch class, and configure it to know that it is going to be dealing with car entities. Perhaps I even pass the context class that has the line:
-Once the batch class is ready (it may only need the DbContext instance and the name of the entity), it is given the list of entities via a function something like:
Public Function AddToDatabase(TheList As List(Of T)) As Double
The idea is pretty simple. My batch class is set to add these cars to the database, and will add the cars in the list it was given. The same rules apply to adding the entities in batch as they do when adding via DBContext normally.
All I want to happen is that the batch class itself does not need to be customized for each entity type it would deal with. Configuration is fine, but not customization.
OK... But WHY?
Adding entities via a list is easy. The secret sauce is the batch class itself. I've already written all the logic that determines the rate at which the entities are added (in something akin to "entities per second"). It also keeps track of the best performance, and varies the batch size occasionally to verify.
The reason the function above (called AddToDatabase() for illustrative purposes) returns a double is that the double represents the amount of entities that were added per second.
Ultimately, the batch class returns to the calling method the number of entities to put in the next batch to maintain peak performance.
So my question for you all is how I can determine the entity fields, and use that to save the entities in a given list.
Can it be as simple as copying the entities? Do I even need to use reflection at all, and have the class use 'GetType' to figure out the entity class in the list (cars)?
How would you go about this?
Thank you very much in advance for reading this far, and for your thoughtful response.
[Don't read further unless you are into this kind of thing!]
The performance of a database operation isn't linear, and is dependent on several factors (memory, CPU load, DB connectivity, etc.), and the DB is not always on the same machine as the application. It may even involve web services.
Your first instinct is to say that more entities in a single batch is better, but that is probably not true in most cases. As you add entities to a batch, at first you see an increase in performance (entities/second). But as the batch size grows, performance may reach a maximum and then start to decrease (for many reasons, not excluding environmental ones such as memory). For non-memory issues, batch performance may simply level off, and we haven't even discussed the impact of the batch on the system itself.
So in the case of a leveling off, I don't want my batch size any larger than it needs to be to be in the neighborhood of peak performance. Also, with smaller batch sizes, the class is able to evaluate the system's performance more frequently.
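The probing strategy described here (measure throughput per batch, grow while it improves, shrink past the peak) can be sketched language-agnostically; this is a rough Python illustration of the idea only, with made-up function names, not the questioner's actual EF batch class:

```python
import time

def measure_rate(insert_batch, entities):
    """Run one batch insert and return the observed entities/second."""
    start = time.perf_counter()
    insert_batch(entities)
    elapsed = time.perf_counter() - start
    return len(entities) / elapsed if elapsed > 0 else float("inf")

def next_batch_size(current, rate, best_rate, step=50):
    # Grow the batch while throughput is still at or above the best seen;
    # shrink once it falls off the peak, never dropping below one step.
    if rate >= best_rate:
        return current + step
    return max(step, current - step)
```

The real class would call `measure_rate` with the EF insert as `insert_batch`, feed the result into `next_batch_size`, and return the suggested size to the caller, exactly the feedback loop the question outlines.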
Being new to Code First and EF6, I can see that there must be some way to use reflection to determine how to take the given list of entities, break them apart into the entity attributes, and persist them via the EF itself.
So far, I do it this way because I need to manually configure each parameter in the INSERT INTO...
For Each d In TheList
    s = "INSERT INTO BDTest (StartAddress, NumAddresses, LastAddress, Duration) " &
        "VALUES (@StartAddress, @NumAddresses, @LastAddress, @Duration)"
    Using cmd As SqlCommand = New SqlCommand(s, conn)
        conn.Open()
        cmd.Parameters.Add("@StartAddress", Data.SqlDbType.NVarChar).Value = d.StartIP
        cmd.Parameters.Add("@NumAddresses", Data.SqlDbType.Int).Value = d.NumAddies
        cmd.Parameters.Add("@LastAddress", Data.SqlDbType.NVarChar).Value = d.LastAddie
        singleRate = CDbl(Me.TicksPerSecond / swSingle.Elapsed.Ticks)
        cmd.Parameters.Add("@Duration", Data.SqlDbType.Int).Value = singleRate
        cmd.ExecuteNonQuery()
        conn.Close()
    End Using
Next
I need to steer away in this test code from using SQL, and closer toward EF6...
What are your thoughts?
TIA!
So, there are two issues I see that you should tackle. First is your question about creating a "generic" method to add a list of entities to the database.
Public Function AddToDatabase(TheList As List(Of T)) As Double
If you are using POCO entities, then I would suggest you create an abstract base class or interface for all entity classes to inherit from/implement. I'll go with IEntity.
Public Interface IEntity
End Interface
So your Car class would be:
Public Class Car
    Implements IEntity
    ' all your class properties here
End Class
That would handle the generic issue.
The second issue is one of batch inserts. A possible implementation of your method follows. It inserts in batches of 100; modify the parameter inputs as needed. Also, replace MyDbContext with the actual type of your DbContext.
Public Function AddToDatabase(ByVal entities As List(Of IEntity)) As Double
    Using scope As New TransactionScope
        Dim context As MyDbContext = Nothing
        Try
            context = New MyDbContext()
            context.Configuration.AutoDetectChangesEnabled = False
            Dim count = 0
            For Each entityToInsert In entities
                count += 1
                context = AddToContext(context, entityToInsert, count, 100, True)
            Next
            context.SaveChanges()
        Finally
            If context IsNot Nothing Then
                context.Dispose()
            End If
        End Try
        scope.Complete()
    End Using
    ' TODO: return the measured insert rate (entities/second) here
End Function

Private Function AddToContext(ByVal context As MyDbContext, ByVal entity As IEntity, ByVal count As Integer, ByVal commitCount As Integer, ByVal recreateContext As Boolean) As MyDbContext
    context.Set(entity.GetType()).Add(entity)
    If count Mod commitCount = 0 Then
        context.SaveChanges()
        If recreateContext Then
            context.Dispose()
            context = New MyDbContext()
            context.Configuration.AutoDetectChangesEnabled = False
        End If
    End If
    Return context
End Function
Also, my apologies if this is not 100% perfect, as I mentally converted it from C# to VB.NET while typing. It should be very close.

Dependency injection of local variable returned from method

I'm (somewhat) new to DI and am trying to understand how/why it's used in the codebase I am maintaining. I have found a series of classes that map data from stored procedure calls to domain objects. For example:
Public Sub New(domainFactory As IDomainFactory)
    _domainFactory = domainFactory
End Sub

Protected Overrides Function MapFromRow(row As DataRow) As ISomeDomainObject
    Dim domainObject = _domainFactory.CreateSomeDomainObject()
    ' Populate the object
    domainObject.ID = CType(row("id"), Integer)
    domainObject.StartDate = CType(row("date"), Date)
    domainObject.Active = CType(row("enabled"), Boolean)
    Return domainObject
End Function
The IDomainFactory is injected with Spring.NET. Its implementation simply has a series of methods that return new instances of the various domain objects, e.g.:
Public Function CreateSomeDomainObject() As ISomeDomainObject
    Return New SomeDomainObject()
End Function
All of the above code strikes me as worse than useless. It's hard to follow and does nothing of value. Additionally, as far as I can tell it's a misuse of DI as it's not really intended for local variables. Furthermore, we don't need more than one implementation of the domain objects, we don't do any unit testing and if we did we still wouldn't mock the domain objects. As you can see from the above code, any changes to the domain object would be the result of changes to the SP, which would mean the MapFromRow method would have to be edited anyway.
That said, should I rip this nonsense out or is it doing something amazingly awesome and I'm missing it?
The idea behind dependency injection is to make a class (or another piece of code) independent of a specific implementation of an interface. The class outlined above does not know which class implements IDomainFactory. A concrete implementation is injected through its constructor and later used by the method MapFromRow. The domain factory returns an implementation of ISomeDomainObject, which is also unknown.
This allows you to supply another implementation of these interfaces without having to change the class shown above. This is very practical for unit tests. Let's assume you have an interface IDatabase that defines a method GetCustomerByID. It is difficult to test code that uses this method, since it depends on specific customer records in the database. Now you can inject a dummy database class that returns customers generated in code, without accessing a physical database. Such a dummy class is often called a mock object.
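As a minimal illustration of that mock idea (sketched in Python; `FakeDatabase`, `customer_greeting`, and the method names are invented for this example, not from the codebase in question): the code under test depends only on the database's interface, so the test can inject a fake returning canned customers.

```python
class FakeDatabase:
    """Mock standing in for the IDatabase role: canned customers, no real DB."""
    def __init__(self, customers):
        self._customers = customers

    def get_customer_by_id(self, customer_id):
        return self._customers[customer_id]

def customer_greeting(db, customer_id):
    # Production code would receive the real database implementation here;
    # any object exposing the same method satisfies the dependency.
    return "Hello, " + db.get_customer_by_id(customer_id)
```

The test asserts on `customer_greeting` deterministically, with no physical database involved, which is exactly the benefit the answer describes.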
See Dependency injection on Wikipedia.

Return object as parameter for nHibernate entity

I'm writing a solution where the data entities are passed to clients using datasets through a WCF service, with nHibernate as ORM.
My predecessor wrote some translator classes that convert datasets to entities and vice versa. In most cases he declared the return object as a parameter of the method.
For example:
Public Shared Function CreateEntity(ByVal ds As DataSetObject, ByVal entity As EntityObject) As EntityObject
    Dim row As DataSetObject.EntityObjectRow = ds.EntityObject(0)
    entity.Id = row.Id
    ' Etc.
    Return entity
End Function
While I want to do it like:
Public Shared Function CreateEntity(ByVal ds As DataSetObject) As EntityObject
    Dim row As DataSetObject.EntityObjectRow = ds.EntityObject(0)
    Dim entity As New EntityObject
    entity.Id = row.Id
    ' Etc.
    Return entity
End Function
He's not with the company anymore, so I can't ask him why he did it this way, hence my question here. Is there some performance gain, or some traceability benefit with NHibernate, in using the first implementation rather than the latter?
When at the university I was always told not to pass the return object to the method, unless there was a very specific reason for it.
Please advise. :)
From the information you gave, there is no problem with creating the returned object inside the method rather than receiving it from outside.
The only reason I can see for the first form is that maybe he passed the entity in with its ID already set and didn't change it inside, because the dataset might not know the entity's ID. But I guess that's not the case, so in my opinion, go ahead and do it the way you propose.

God object - decrease coupling to a 'master' object

I have an object called Parameters that gets tossed from method to method down and up the call tree, across package boundaries. It has about fifty state variables. Each method might use one or two variables to control its output.
I think this is a bad idea, because I can't easily see what a method needs in order to function, or what might happen with a certain combination of parameters for module Y, which is totally unrelated to my current module.
What are some good techniques for decreasing coupling to this god object, or ideally eliminating it?
public void ExporterExcelParFonds(ParametresExecution parametres)
{
    ApplicationExcel appExcel = null;
    LogTool.Instance.ExceptionSoulevee = false;

    bool inclureReferences = parametres.inclureReferences;
    bool inclureBornes = parametres.inclureBornes;
    DateTime dateDebut = parametres.date;
    DateTime dateFin = parametres.dateFin;

    try
    {
        LogTool.Instance.AfficherMessage(Variables.msg_GenerationRapportPortefeuilleReference);
        bool fichiersPreparesAvecSucces = PreparerFichiers(parametres, Sections.exportExcelParFonds);
        if (!fichiersPreparesAvecSucces)
        {
            parametres.afficherRapportApresGeneration = false;
            LogTool.Instance.ExceptionSoulevee = true;
        }
        else
        {
The caller would do:
PortefeuillesReference pr = new PortefeuillesReference();
pr.ExporterExcelParFonds(parametres);
First, at the risk of stating the obvious: pass the parameters which are used by the methods, rather than the god object.
This, however, might lead to some methods needing huge amounts of parameters because they call other methods, which call other methods in turn, etcetera. That was probably the inspiration for putting everything in a god object. I'll give a simplified example of such a method with too many parameters; you'll have to imagine that "too many" == 3 here :-)
public void PrintFilteredReport(
    Data data, FilterCriteria criteria, ReportFormat format)
{
    var filteredData = Filter(data, criteria);
    PrintReport(filteredData, format);
}
So the question is, how can we reduce the amount of parameters without resorting to a god object? The answer is to get rid of procedural programming and make good use of object oriented design. Objects can use each other without needing to know the parameters that were used to initialize their collaborators:
// dataFilter service object only needs to know the criteria
var dataFilter = new DataFilter(criteria);
// report printer service object only needs to know the format
var reportPrinter = new ReportPrinter(format);
// filteredReportPrinter service object is initialized with a
// dataFilter and a reportPrinter service, but it doesn't need
// to know which parameters those are using to do their job
var filteredReportPrinter = new FilteredReportPrinter(dataFilter, reportPrinter);
Now the FilteredReportPrinter.Print method can be implemented with only one parameter:
public void Print(Data data)
{
    var filteredData = this.dataFilter.Filter(data);
    this.reportPrinter.Print(filteredData);
}
Incidentally, this sort of separation of concerns and dependency injection is good for more than just eliminating parameters. If you access collaborator objects through interfaces, then that makes your class
very flexible: you can set up FilteredReportPrinter with any filter/printer implementation you can imagine
very testable: you can pass in mock collaborators with canned responses and verify that they were used correctly in a unit test
If all your methods are using the same Parameters class, then maybe it should be a member variable of a class containing the relevant methods. You can then pass Parameters into the constructor of that class, assign it to a member variable, and all your methods can use it without having to receive it as a parameter.
A good way to start refactoring this god class is by splitting it up into smaller pieces. Find groups of properties that are related and break them out into their own class.
You can then revisit the methods that depend on Parameters and see if you can replace it with one of the smaller classes you created.
Hard to give a good solution without some code samples and real world situations.
It sounds like you are not applying object-oriented (OO) principles in your design. Since you mention the word "object" I presume you are working within some sort of OO paradigm. I recommend you convert your "call tree" into objects that model the problem you are solving. A "god object" is definitely something to avoid. I think you may be missing something fundamental... If you post some code examples I may be able to answer in more detail.
Query each client for their required parameters and inject them?
Example: each "object" that requires "parameters" is a "Client". Each "Client" exposes an interface through which a "Configuration Agent" queries the Client for its required parameters. The Configuration Agent then "injects" the parameters (and only those required by a Client).
For the parameters that dictate behavior, one can instantiate an object that exhibits the configured behavior. Client classes then simply use the instantiated object; neither the client nor the service has to know what the value of the parameter is. For instance, for a parameter that tells where to read data from, have FlatFileReader, XMLFileReader, and DatabaseReader all inherit the same base class (or implement the same interface). Instantiate one of them based on the value of the parameter; then clients of the reader class just ask the instantiated reader object for data without knowing whether the data comes from a file or from the DB.
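That polymorphic-reader suggestion can be sketched briefly (Python for illustration; the class names `FlatFileReader` and `InMemoryReader` and the `make_reader` helper are invented for this sketch). One place maps the configuration value to a concrete reader, and clients only ever call `read()`:

```python
from abc import ABC, abstractmethod

class Reader(ABC):
    """Common interface: clients depend on this, not on the data source."""
    @abstractmethod
    def read(self):
        ...

class FlatFileReader(Reader):
    def __init__(self, path):
        self.path = path

    def read(self):
        with open(self.path) as f:
            return f.read()

class InMemoryReader(Reader):
    """Stand-in for, e.g., a DatabaseReader in this sketch."""
    def __init__(self, data):
        self.data = data

    def read(self):
        return self.data

def make_reader(source, value):
    # The behavior-dictating parameter is consumed exactly once, here;
    # everything downstream just holds a Reader.
    return {"file": FlatFileReader, "memory": InMemoryReader}[source](value)
```

After construction, the parameter's value never needs to be tramped through the call tree, which is the decoupling the answer is after.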
To start you can break your big ParametresExecution class into several classes, one per package, which only hold the parameters for the package.
Another direction could be to pass the ParametresExecution object at construction time. You won't have to pass it around at every function call.
(I am assuming this is within a Java or .NET environment.) Convert the class into a singleton. Add a static method called getInstance() or something similar to call to get the name-value bundle (and stop "tramping" it around; see Ch. 10 of the book Code Complete).
Now the hard part. Presumably, this is within a web app or some other non batch/single-thread environment. So, to get access to the right instance when the object is not really a true singleton, you have to hide selection logic inside of the static accessor.
In java, you can set up a "thread local" reference, and initialize it when each request or sub-task starts. Then, code the accessor in terms of that thread-local. I don't know if something analogous exists in .NET, but you can always fake it with a Dictionary (Hash, Map) which uses the current thread instance as the key.
It's a start... (there's always decomposition of the blob itself, but I built a framework that has a very similar semi-global value-store within it)
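The thread-local accessor trick described above translates directly to Python's threading.local (a minimal sketch with illustrative names; real code would initialize the bundle at the start of each request):

```python
import threading

# Each thread sees its own independent attribute set on this object.
_context = threading.local()

def set_parameters(params):
    """Called once when a request or sub-task starts."""
    _context.params = params

def get_parameters():
    # Static-style accessor: no tramping the bundle through call chains,
    # yet each thread still gets only its own parameters.
    return _context.params
```

The selection logic is hidden inside the accessor, just as the answer suggests, so callers anywhere in the call tree can reach their thread's parameters without the god object appearing in every signature.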