Closed 9 years ago.
People say C++ multiple inheritance is evil, so Java "fixed" the problem with interfaces.
But Scala introduced traits, which are... interfaces with partial implementations? Doesn't this bring multiple inheritance back?
Does this mean the Scala designers think multiple inheritance is good? Or do traits have some critical differences I haven't noticed?
The worst part of multiple inheritance is diamond inheritance, where a subclass has two or more paths to the same parent somewhere up the chain. This creates ambiguity if implementations differ along the two paths (i.e. are overridden from the original implementation). In C++ the solution is particularly ugly: you embed both incompatible parent classes and have to specify when you call which implementation you want. This is confusing, creates extra work at every call site (or, more likely, forces you to override explicitly and state the one you want; this manual work is tedious and introduces a chance for error), and can lead to objects being larger than they ought to be.
Scala solves some but not all of the problems by limiting multiple inheritance to traits. Because traits have no constructors, the final class can linearize the inheritance tree, which is to say that even though two parents on a path back to a common super-parent nominally are both parents, one is the "correct" one, namely the one listed last. This scheme would leave broken half-initialized classes around if you could have (completely generic) constructors, but as it is, you don't have to embed the class twice, and at the use site you can ignore how much inheritance, if any, has happened. It does not, however, make it that much easier to reason about what will happen when you layer many traits on top of each other, and if you inherit from both B and C, you can't choose to take some of B's implementations and some of C's.
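To make the linearization concrete, here is a minimal Scala sketch (the trait and class names are invented for illustration) showing how a diamond over a common parent is resolved without any manual disambiguation:

```scala
trait Animal {
  def name: String = "animal"
}

trait Furry extends Animal {
  override def name: String = "furry " + super.name
}

trait FourLegged extends Animal {
  override def name: String = "four-legged " + super.name
}

// Linearization of Cat: Cat -> FourLegged -> Furry -> Animal.
// The trait mixed in last wins first, and each super call follows the
// linearized order, so both overrides run exactly once and there is
// no ambiguity for the programmer to resolve by hand.
class Cat extends Furry with FourLegged

object LinearizationDemo extends App {
  println((new Cat).name) // prints "four-legged furry animal"
}
```

Because the order of super calls is fixed by the order in which the traits are mixed in, there is exactly one answer to "which implementation runs", which is the point of the linearization scheme.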
So it's better in that it addresses some of the most serious criticisms of the C++ model. Whether it is better enough is a matter of taste; plenty of people even like the taste of C++'s multiple inheritance well enough to use it.
Closed 10 years ago.
I am working on a game/simulation application that uses multiple mathematical solvers. There is an already existing adapter class for each of them. These adapter classes provide all rendering and functional information for the application.
Broadly speaking, we keep an adapter object to represent an instance and call methods to achieve:
Generate rendering data.
Modify object state. There are just too many functions to do it.
Read data model information for various purposes.
Now the problem is that these classes keep growing over a period of time and carry too much information and responsibility.
My question is: how can I redesign/restructure these classes so they make better sense? Is there a design pattern I should be looking at?
Edit: As requested, here is the broad list of things any adapter class will be doing.
Sync with the current data stored in the mathematical solver.
Sync with the data model of our application, for things like undo/redo.
Modify object state: change the shape. This is the most important responsibility and has various functions, more than 50, to achieve it. All are self-contained single service calls with parameters. I am trying to introduce interfaces and a factory here, but the function signatures are not compatible.
Get data model information from the mathematical solver, like getChildren etc.
Change visibility and other graphic properties.
The principle to use would be Information Expert from GRASP:
[…] Using the principle of Information Expert, a general approach to assigning responsibilities is to look at a given responsibility, determine the information needed to fulfill it, and then determine where that information is stored. Information Expert will lead to placing the responsibility on the class with the most information required to fulfill it. […]
Though never explicitly mentioned, applying this principle will likely lead you to using the patterns given by Martin Fowler in the chapter Moving Features Between Objects in his Refactoring book:
[…] Often classes become bloated with too many responsibilities. In this case I use Extract Class to separate some of these responsibilities. If a class becomes too irresponsible, I use Inline Class to merge it into another class. If another class is being used, it often is helpful to hide this fact with Hide Delegate. Sometimes hiding the delegate class results in constantly changing the owner’s interface, in which case you need to use Remove Middle Man […]
In general, break down your classes so that each only has one reason to change, per the Single Responsibility Principle. Leave your "adapters" in place as a Facade over the classes you'll extract; it will make the refactor smoother.
Since you describe a list of responsibilities that are common to all your adapters, you probably have a lot of code that is almost the same between the adapters. As you go through this exercise, try extracting the same responsibility from several adapters, and watch for ways to eliminate duplication.
It would be tempting to start with extracting a class for "Modify Object state". Since you have more than 50 (!) functions fulfilling that responsibility, you should probably break that down into several classes, if you can. As this is likely the biggest cause of bloat in your adapter class, just doing it may solve the problem, though it will be important to break it down or you'll just move the God class, instead of simplifying it.
However, this will be a lot of work and it will likely be complex enough that you won't easily see opportunities for reuse of the extracted classes between adapters. On the other hand, extracting small responsibilities won't get you a lot of benefit. I would pick something in the middle to start.
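As a rough sketch of that first extraction (all names here are hypothetical, and the language is Scala purely to keep the examples in this document consistent), the adapter can stay in place as a facade while the real logic moves into the smaller classes:

```scala
// All names here are hypothetical placeholders for the real solver API.
class SolverHandle

// Extracted class: owns only the "modify object state" responsibility.
class ShapeModifier(solver: SolverHandle) {
  def scale(factor: Double): Unit = { /* call into the solver */ }
  def translate(dx: Double, dy: Double): Unit = { /* call into the solver */ }
}

// Extracted class: owns only the rendering-data responsibility.
class RenderDataProvider(solver: SolverHandle) {
  def renderData(): Seq[Double] = Seq.empty // placeholder
}

// The old adapter survives as a thin facade, so existing callers keep
// working while the responsibilities migrate into the smaller classes.
class SolverAdapter(solver: SolverHandle) {
  private val shapes = new ShapeModifier(solver)
  private val render = new RenderDataProvider(solver)

  def scale(factor: Double): Unit = shapes.scale(factor)
  def translate(dx: Double, dy: Double): Unit = shapes.translate(dx, dy)
  def renderData(): Seq[Double] = render.renderData()
}
```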
Closed 10 years ago.
I've been trying to get into OOP lately, and I'm having trouble with SOLID principles and design patterns. I see why people use them, and I really want to use them too, but I can't wrap my head around developing my classes to the specifications. I would really appreciate anything that would help my understanding of them.
I've taken a class in college that spent two weeks on design patterns, and I read the Gang of Four book to no avail. Understanding what each pattern was for and how to use it to fit my problems was very hard for me, a developer who didn't have much experience in OO programming.
The book that really made it click for me was Head First Design Patterns. It starts by showing a problem, the different approaches the developers considered, and then how they ended up using a design pattern to fix it. It uses very simple language and keeps the book very engaging.
Design patterns end up being a way to describe a solution, but you don't have to adapt your classes to the solution. Think of them more as a guide that suggests good solutions to a wide array of problems.
Let's talk about SOLID:
Single responsibility. A class should have only one responsibility. That means that, for example, a Person class should only worry about the domain problem of the person itself, and not, for example, its persistence in the database. For that, you may want to use a PersonDAO instead. A class should keep its responsibilities as small as it can. If a class is using too many external dependencies (that is, other classes), that's a symptom that the class has too many responsibilities. This problem often comes up when developers try to model the real world using objects and take it too far. Loosely coupled applications are often not very easy to navigate and do not exactly model how the real world works.
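A minimal sketch of that split, with hypothetical names (Scala is used here and for the examples below just to keep them consistent):

```scala
// Person knows only about the domain; persistence is someone else's job.
case class Person(name: String, email: String)

// Hypothetical DAO: the storage details are its single reason to change.
class PersonDao(connectionString: String) {
  def save(person: Person): Unit = { /* write to the database */ }
  def findByName(name: String): Option[Person] = None // placeholder
}
```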
Open Closed. Classes should be extendable, but not modifiable: open for extension, closed for modification. That means that adding a new field to a class is fine, but changing existing things is not; other components of the program may already depend on them.
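A small sketch of the idea, using a Shape hierarchy invented for illustration: new behaviour arrives as new classes, and the existing code stays untouched.

```scala
// Open for extension: a new kind of shape is a new class.
trait Shape {
  def area: Double
}

class Circle(radius: Double) extends Shape {
  def area: Double = math.Pi * radius * radius
}

class Square(side: Double) extends Shape {
  def area: Double = side * side
}

// Closed for modification: this code never changes when a new Shape appears.
object Areas {
  def totalArea(shapes: Seq[Shape]): Double = shapes.map(_.area).sum
}
```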
Liskov substitution. A class that expects an object of type Animal should work whether it is passed a Dog subclass or a Cat subclass. That means that Animal should NOT have a method called bark, for example, since subclasses like Cat won't be able to bark. Classes that use the Animal class also shouldn't depend on methods that belong to the Dog class. Don't do things like "if this animal is a dog, then (cast animal to Dog) bark; if the animal is a cat, then (cast animal to Cat) meow".
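A sketch of that Animal example (method names made up for illustration): behaviour every subtype can honour lives on Animal, Dog-only behaviour stays on Dog, and callers never need to check types or cast.

```scala
trait Animal {
  def makeSound(): String // every Animal can honour this contract
}

class Dog extends Animal {
  def makeSound(): String = "woof"
  def bark(): String = "woof" // Dog-specific behaviour stays on Dog
}

class Cat extends Animal {
  def makeSound(): String = "meow"
}

object Sounds {
  // Works for any Animal; no "is it a Dog?" checks, no casts.
  def announce(animal: Animal): String = animal.makeSound()
}
```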
Interface segregation principle. Keep your interfaces as small as you can. A teacher who is also a student should implement both the IStudent and ITeacher interfaces, instead of one big interface called IStudentAndTeacher.
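A sketch of the same idea in Scala (traits instead of I-prefixed interfaces; the names are illustrative):

```scala
trait Student {
  def enroll(courseId: String): Unit
}

trait Teacher {
  def teach(courseId: String): Unit
}

// Mixes in both small interfaces rather than one bloated one.
class TeachingAssistant extends Student with Teacher {
  def enroll(courseId: String): Unit = println(s"enrolling in $courseId")
  def teach(courseId: String): Unit  = println(s"teaching $courseId")
}
```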
Dependency inversion principle. Objects should not instantiate their dependencies; the dependencies should be passed to them. For example, a Car that has an Engine object inside should not do engine = new DieselEngine(); rather, the engine should be passed in via the constructor. This way the Car class will not be coupled to the DieselEngine class.
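A sketch of that Car/Engine example with constructor injection (the println bodies are placeholders):

```scala
trait Engine {
  def start(): Unit
}

class DieselEngine extends Engine {
  def start(): Unit = println("diesel engine started")
}

// Car depends on the Engine abstraction and receives it from outside,
// so swapping in a different Engine requires no change to Car.
class Car(engine: Engine) {
  def drive(): Unit = engine.start()
}

object Garage extends App {
  new Car(new DieselEngine).drive()
}
```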
Closed 10 years ago.
I've looked around and haven't found anything specific to my question but that's partially because I'm unsure how to phrase it.
I manage around 20 different C# .NET applications that all do relatively similar things.
I am working on consolidating the common code into data, business logic, and presentation layers.
My question is related to the Business Logic layer.
I gather that a business/domain object is one that holds state and sometimes may perform related actions (if you take that approach).
But what would you call an object that is only working through a routine?
For example:
In the presentation layer a button event is fired.
The presentation layer points to this class and calls the "RunJob()" method.
RunJob() does all the work it needs to do and then finishes. For example, it may read a table and output it into a CSV (a lot of these apps are data pushers). It may or may not use internal fields/properties. These properties may be used to display data in the interface or to create output.
Is there a name for this or is it just a bad pattern/bad OO in practice? I don't think this qualifies as a business object or helper. I've seen some other topics that hint it might be a "Service" object.
Thanks!
Call it WorkerThread for now and see Uncle Bob's article on naming: http://www.objectmentor.com/resources/articles/Naming.pdf. Then change the name to something reasonable.
Your class is not necessarily a bad class. Most entities usually do not have much behaviour; on the other hand, some helper classes do not have much state.
The names of your objects depend on specifically what work they do. TableImporter and CsvExporter are good names for the tasks you described. The methods should also be appropriately named. It may be the case that you want to abstract an interface Runner and have a generic RunJob method to decouple your presentation and model layers, but it could be clearer and more decoupled if you use a controller instead.
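A rough sketch of that Runner idea (in Scala rather than C#, with invented names), so the presentation layer depends only on the abstraction:

```scala
// The presentation layer sees only this trait, never the concrete workers.
trait Runner {
  def runJob(): Unit
}

// Name follows the suggestion above; the body is a placeholder.
class CsvExporter(tableName: String, outputPath: String) extends Runner {
  def runJob(): Unit = {
    // read the table and write it out as a CSV file
    println(s"exporting $tableName to $outputPath")
  }
}

object PresentationLayer {
  // A button handler only ever receives a Runner to execute.
  def onButtonClick(job: Runner): Unit = job.runJob()
}
```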
Closed 10 years ago.
Programming languages like C# or Java feature static methods, despite being heavily object oriented.
I'm aware that there are many cases where static methods are used for performance or convenience reasons, but I can't stop wondering whether there are actual coding problems that could not be solved without the use of static methods.
I think that some of the common cases that would be named here could just be "normal" methods instead of being static, like:
main: The purpose of the main method is to create the very first running thread of the program and start it. So this might as well just be an object derived from a Thread class.
Loggers: Logger implementations often use static methods. I don't see the point in that, as I might want to exchange one logger for another with an identical interface.
Math: Math functions really seem to be a perfect candidate for static methods at first sight, but there might be cases where you want to exchange your math library transparently for another one (e.g. if you need more performance from the sin() function, you might want to use an implementation with a faster, less precise algorithm when precision is not critical for your application).
Singletons: These are considered bad practice by many. If only one instance is necessary, you might think about actually creating only one instance.
So, what might be cases where static methods are really absolutely needed?
IMO, static methods are needed when defining factories that create objects of different subtypes of a given type, where the choice of subtype depends on the inputs to the static factory method and is hidden from the client.
Your Logger example actually falls under this category: the actual logger is decided based on the package/class it is needed for (of course, the other factory methods on Logger take other parameters to decide on the appropriate Logger instance to return).
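A rough sketch of such a factory (in Scala a companion object stands in for the static methods; the Logger subtypes and the forName rule are invented for illustration):

```scala
sealed trait Logger {
  def log(message: String): Unit
}

class ConsoleLogger extends Logger {
  def log(message: String): Unit = println(message)
}

class FileLogger(path: String) extends Logger {
  def log(message: String): Unit = { /* append `message` to the file at `path` */ }
}

// The companion object plays the role of the static factory: callers ask
// for a Logger by name and never learn which concrete subtype they got.
object Logger {
  def forName(name: String): Logger =
    if (name.startsWith("file:")) new FileLogger(name.drop("file:".length))
    else new ConsoleLogger
}
```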
Closed 10 years ago.
What is your perspective on downcasting? Is it ALWAYS wrong, or are there cases where it is acceptable, or even preferable or desired?
Is there some good measure/guideline we can give that tells us when downcasting is "evil", and when it's "ok"/"good"?
(I know a similar question exists, but that question spins out from a concrete case. I'd like to have it answered from a general design perspective.)
No, it's definitely not always wrong.
For example, suppose in C# you have an event handler that is given a sender parameter, representing the originator of the event. Now you might hook up that event handler to several buttons, but you know they're always buttons. It's reasonable to cast sender to Button within that code.
That's just one example - there are plenty of others. Sometimes it's just a way around a slightly awkward API; other times it comes out of not being able to express the type cleanly within the normal type system. For example, you might have a Dictionary<Type, object> appropriately encapsulated, with generic methods to add and retrieve values, where the value of an entry is of the type of the key. A cast is entirely natural here: you can see that it will always work, and it gives more type safety to the rest of the system.
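A small sketch of that kind of encapsulated cast, written in Scala with a Class-keyed map (the registry name and API are invented): put pairs each value with its own class, so the single downcast inside get is known to be safe.

```scala
import scala.collection.mutable

class TypedRegistry {
  private val entries = mutable.Map.empty[Class[_], Any]

  // The signature guarantees the stored value matches its key's type.
  def put[A](key: Class[A], value: A): Unit = entries(key) = value

  // The downcast lives here and nowhere else, and put makes it safe.
  def get[A](key: Class[A]): Option[A] =
    entries.get(key).map(_.asInstanceOf[A])
}

object TypedRegistryDemo extends App {
  val registry = new TypedRegistry
  registry.put(classOf[String], "hello")
  println(registry.get(classOf[String])) // Some(hello)
}
```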
It's never an ideal solution and should be avoided wherever possible - unless the alternative would be worse. Sometimes, it cannot be avoided, e.g. pre-Generics Java's Standard API library had lots of classes (most prominently the collections) that required downcasting to be useful. And sometimes, changing the design to avoid the downcast would complicate it significantly, so that the downcast is the better solution.
An example of "legal" downcasting is Java before 5.0, where you had to downcast container elements to their concrete type when accessing them. It was unavoidable in that context. This also shows the other side of the question, though: if you need to downcast a lot in a given situation, it starts to be evil, so it is better to find another solution without downcasting. That is what led to the introduction of generics in Java 5.
John Vlissides analyzes this issue (aka "Type Laundering") a lot in his excellent book Pattern Hatching (practically a sequel to Design Patterns).