I know very well the traditional arguments for why interface inheritance is preferred to multiple inheritance; there has already been a post about it here:
Should C# have multiple inheritance?
But according to Stroustrup, the real reason Microsoft and Sun decided to get rid of multiple inheritance is that they had a vested interest in doing so: instead of putting features into the languages, they put them into frameworks, so that people become tied to their platform instead of having the same capability at the language-standard level.
What do you think?
Why do Sun and Microsoft consider developers too immature to make the choice themselves?
Above is my explicit interpretation of what he said. Of course, he said it in a more politically correct way :)
Excerpt from "A Conversation with Bjarne Stroustrup"
http://www.artima.com/intv/modern.html
People quite correctly say that you don't need multiple inheritance, because anything you can do with multiple inheritance you can also do with single inheritance. You just use the delegation trick I mentioned. Furthermore, you don't need any inheritance at all, because anything you do with single inheritance you can also do without inheritance by forwarding through a class. Actually, you don't need any classes either, because you can do it all with pointers and data structures. But why would you want to do that? When is it convenient to use the language facilities? When would you prefer a workaround? I've seen cases where multiple inheritance is useful, and I've even seen cases where quite complicated multiple inheritance is useful. Generally, I prefer to use the facilities offered by the language to doing workarounds.
From "Interview of Bjarne Stroustrup by "Developpeur Reference""
http://www2.research.att.com/~bs/nantes-interview-english.html
You can always re-write an example using multiple inheritance into one that uses single inheritance only (by using forwarding functions). However, the result is often an example that is longer, reflects the design less directly, and is harder to maintain. Note that you can also rewrite every example using single inheritance to an example using no inheritance using the same technique and with the same negative impact on code clarity. A language that does not support multiple inheritance is simply less expressive than one that supports multiple inheritance and thereby forces the programmer to occasionally complicate code.
...
People talk a lot about frameworks, but history is littered with frameworks that didn't live up to their expectations. I have seen successful frameworks, but they were generally limited in scope. I'm skeptical of "universal" frameworks, and even more so when such frameworks are products of a platform vendor competing with similar frameworks from other vendors. As a user, I prefer to maintain my independence from vendors as far as possible.
I'd like to see libraries providing cleaner and more general access to frameworks - as opposed to languages intimately tied to a single framework.
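To make the "forwarding functions" point concrete, here is a minimal sketch (in Scala, which I'll use for all code examples in this post; the Report/Persistable names are invented for illustration):

```scala
// Two independent capabilities.
trait Persistable { def save(): Unit = println("saved") }
trait Printable   { def print(): Unit = println("printed") }

// With multiple (trait) inheritance: compose the capabilities directly.
class Report extends Persistable with Printable

// The single-inheritance workaround: hold the parts as fields and
// hand-write a forwarding function for every inherited operation.
class ForwardingReport(
    persister: Persistable = new Persistable {},
    printer: Printable = new Printable {}) {
  def save(): Unit  = persister.save()   // forwarding boilerplate
  def print(): Unit = printer.print()    // forwarding boilerplate
}
```

Both classes expose the same save()/print() surface, but every operation added to a part needs another hand-written forwarder - exactly the "longer... harder to maintain" cost described in the quote above.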
My own thought:
People do follow fashion, and IT is no exception. Nobody dares to question the fundamentals until some gurus themselves have an interest in doing so.
For example, in the case of Java, nobody dared to question EJB until Rod Johnson came along with another framework, which he said was inspired by .NET's pragmatism.
And now .NET is itself becoming more and more framework-like with EF.
I asked this question yesterday, and the user #dfeuer advised me that, as a beginner, I should not define my own classes. His comment:
Haskell beginners shouldn't define their own classes at all. Learn to define functions, and types, and instances. These are the vast majority of actual Haskell code. As you do this, you'll get a good feel for what makes some classes really useful and others less so. You'll learn what makes some classes easy to use and others full of booby traps. Then when you find a good reason to actually define your own class, you'll go through a slew of bad class designs before you get good enough at it that only most of your attempts go badly. Designing good classes is really hard and rarely necessary.
I am curious, why is defining my own classes usually (for a beginner) a bad idea? What are these "booby traps" and why is it so hard to design good classes?
I thought classes are used to define interfaces to data, as I do in OOP. When I write Java code, I try to write as much code as possible against abstract classes and especially interfaces, so that when I need to change the data, most of my code remains unchanged and my methods stay highly reusable. Another comment under that question, by #Carl, suggests that this is not how classes should be used:
Why did you create that class? It feels very weird to me - very much like something that someone used to OOP would do, rather than someone used to Haskell. It has too many parameters, they're connected in what feels like a very ad-hoc manner...
My fear is that without this OOP use of classes, any change in the data would break a huge part of the code. Is this fear unfounded? And if it is founded, why should I not use classes to define interfaces to data?
To be fair, I am a self-taught Java programmer and I have not read other people's code, so maybe I am doing Java wrong as well. I only read some books on how the language works and then built an application. I developed it for a year or so, and my whole style is a consequence of that experience alone. My style seems to work well for my needs, though, so I assumed that this is how Java programming/OOP is indeed done.
I'm a relatively new (and amateur) Haskell enthusiast.
I'd say: just stop thinking you can reuse OOP knowledge, patterns, and other things in Haskell. Even the terminology is not "reusable". Classes are not the same thing in OOP languages as in Haskell (they are called typeclasses in Haskell, in fact).
This is an answer to a question of mine. It starts more or less like this:
It's true that typeclasses can express what interfaces do in OO languages, but this doesn't always make sense.
i.e., it states the inherent difference between two only apparently similar concepts in Haskell vs OOP languages.
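To see the difference concretely, here is a minimal sketch (in Scala, used for all code in this post; Haskell's typeclasses behave analogously, and the Describe name is invented): an OO interface must be implemented where the type is defined, while a typeclass instance lives outside the type and can be supplied later, even for types you don't own.

```scala
object TypeclassSketch {
  // OO-style interface: the type opts in at definition time.
  trait Describable { def describe: String }
  case class User(name: String) extends Describable {
    def describe: String = s"user $name"
  }

  // Typeclass-style: the behaviour lives outside the type and can be
  // added retroactively, even to a built-in type such as Int.
  trait Describe[A] { def describe(a: A): String }
  implicit val describeInt: Describe[Int] = new Describe[Int] {
    def describe(n: Int): String = s"the number $n"
  }

  def show[A](a: A)(implicit d: Describe[A]): String = d.describe(a)

  // show(42) == "the number 42"; show(User("bo")) will not compile
  // until some module, anywhere, provides a Describe[User] instance.
}
```

In Haskell the same idea would be `class Describe a where describe :: a -> String` with an `instance Describe Int`; this retroactive, compiler-checked extension is one of the places where the analogy with Java interfaces breaks down.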
Another interesting link is on Design Patterns in Haskell. It is very high-level, and I still don't quite understand how some of the tools can be used in Haskell as alternatives to specific OOP patterns. (Probably the fact that first-class functions remove the need for the strategy pattern is the only one that is totally clear to me at the moment.) However, I think it is a good read and, most of all, it should convince you that learning and coding in Haskell comes with a huge mental shift, and that it is best approached by starting from zero. If you refuse that, you're not going to learn Haskell.
I'm not saying that you shouldn't use your brain to notice similarities between OOP languages and Haskell. You should just assume that even trying to build on those observations will handicap your learning process.
As regards Haskell specifically, sitting down and studying LYAH as if you were at school (with a laptop to try out the examples) is a good way to learn the basics very well. It is an easy-ish book to read, and it guides you by the hand.
For what it's worth, I think that Structure and Interpretation of Computer Programs is a good book to accompany learning a functional language, as it gives you a practical background for the shift of philosophy I mentioned earlier. You must do the exercises; doing them will force you towards that mental shift.
A final suggestion, which I would not take up before studying LYAH thoroughly, is to complete The Monad Challenges. But I have to say that LYAH already does a good job of teaching you what the Challenges ask you to think about; I found myself thinking "I already know this" and "why is the challenge going about it in such a roundabout way?".
Context
This essay goes into detail describing "objects" and "abstract data types" (ADT) (and here is an older explanation by the same author)
Here is an excerpt:
Despite 25 years of research, there is still widespread confusion about the two forms of data abstraction, abstract data types and objects. This essay attempts to explain the differences and also why the differences matter.
The typical response is a variant of “objects are a kind of abstract data type”. This response is consistent with most programming language textbooks. [... But] the textbooks are wrong! Objects and abstract data types are not the same thing, and neither one is a variation of the other. They are fundamentally different and in many ways complementary, in that the strengths of one are the weaknesses of the other. The issues are obscured by the fact that most modern programming languages support both objects and abstract data types, often blending them together into one syntactic form. But syntactic blending does not erase fundamental semantic differences which affect flexibility, extensibility, safety and performance of programs. Therefore, to use modern programming languages effectively, one should understand the fundamental difference between objects and abstract data types.
Question
Is there a concise explanation using modern, non-academic language examples? (If not, it would be great if someone provided one here or I might write my own answer when I have the time)
Of particular interest are the definitions of, and distinctions between, objects and ADTs, and the practical implications when writing code (or designing a language).
Caveat
I strongly recommend looking at the linked essay before commenting or answering.
Here is an example of the type of insight I am looking for, also excerpted from the essay:
Abstract data types define operations that collect together the behaviors for a given action. Objects organize the matrix the other way, collecting together all the actions associated with a given representation. It is easier to add new operations in an ADT, and new representations using objects. [...] Object-oriented programs can use inheritance to add new operations.
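A minimal sketch of that matrix (in Scala, as with the other code on this page; the Shape names are invented):

```scala
// ADT style: the set of representations is closed, so a new operation
// is one new function, but a new representation touches every operation.
sealed trait Shape
case class Circle(r: Double)          extends Shape
case class Rect(w: Double, h: Double) extends Shape

object ShapeOps {
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }
}

// Object style: the set of operations is closed, so a new representation
// is one new class, but a new operation touches every class.
trait OoShape { def area: Double }
class OoCircle(r: Double)          extends OoShape { def area: Double = math.Pi * r * r }
class OoRect(w: Double, h: Double) extends OoShape { def area: Double = w * h }
```

Adding a `perimeter` operation is one new function in the ADT version but a change to every class in the object version; adding a `Triangle` is the reverse.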
Note that at least as far as the essay is concerned, as of Jan 3, 2014, Wikipedia is wrong (or at least incomplete) and so are most textbooks. The essay was written by a computer science professor after noticing the lack of understanding of these concepts, even among his academic peers.
What is the difference between a software development pattern and a methodology such as Agile or DSDM?
How is OO classed as both a methodology and a paradigm? How can OO be applied to a methodology such as Agile if it is itself a methodology?
What's the difference between a paradigm, a methodology, and a development pattern?
Thanks for any replies.
"When I use a word," Humpty Dumpty
said, in a rather scornful tone, "it
means just what I choose it to mean -
neither more nor less." "The question
is," said Alice, "whether you can make
words mean so many different things."
"The question is," said Humpty Dumpty,
"which is to be master - that's all."
Through the Looking Glass.
Well, not my answer, Lewis Carroll's.
Looking at only one of the questions you asked: "...how is OO classed as a methodology and a paradigm?"
That, at least, has a fairly simple answer:
Object Oriented Design is an analysis methodology.
Object Oriented Programming is an implementation paradigm.
OOD involves analyzing a problem in terms of objects and their interactions. OOP involves implementing a solution as a set of interacting objects.
"Agile" (I hate that name -- though I'll admit "eXtreme Programming" is worse) is really about project management. Just for example, you can apply Pair Programming about equally to something like assembly language or C as to a language that explicitly supports object oriented programming (though being a relatively new idea, it's probably used most often in conjunction with relatively new languages).
Edit: How I'd separate "methodology" from "paradigm" is fairly simple (at least in theory).
Paradigm is really just a fancy word for "example". If I'm following that example to a meaningful degree, the source code (for example) to the program should contain direct, (fairly) clearly defined results from having followed that example. Just for the obvious one, a class publicly derived from another would be a pretty obvious indication of OOP.
A methodology, by contrast, doesn't necessarily show a direct, definable result in the source code. Just for example, there's unlikely to be much in the source code to indicate whether it was developed using an "Agile" methodology. I might be able to take a guess if (for example) all the source code files contained comments indicating two authors, but (at best) it would be a rather indirect indication of one specific piece of the methodology.
I said in theory, because things can get a bit "fuzzy" at times. If I try hard enough, I can probably write pretty close to pure procedural code, even in a language like Smalltalk that favors objects almost exclusively. Likewise, if I try hard enough I can write OO code in something like C that doesn't really support it. In a case like this, the indications of following the paradigm will usually be harder to find or define than in a more straightforward case.
Methodology is about people. Paradigm is about software.
A paradigm is a way of thinking about a problem - so objects, a relational database, and the lambda calculus are all models for getting a problem into your head.
A methodology is a way of actually building something based on the paradigm.
If you like, the paradigm is the architect: what are we building? Should it be a suspension bridge or an arch? The methodology is the engineering: how many cables, how thick, which subcontractors.
I am convinced that functional programming is an excellent choice when it comes to applications that require a lot of computation (data mining, AI, NLP, etc.).
Is functional programming being used in any well-known enterprise applications or open-source projects? How did they incorporate business logic into the functional design?
Please disregard the fact that there are very few people using functional programming and that it's kind of tough.
Thanks
Functional programming languages like Clojure and Scala are good for pretty much anything. As for Haskell, an experienced Haskell programmer would probably be able to substitute Haskell for any language on any problem, efficient or not. I don't know if there is a functional programming language that could be considered /best/ out of all languages for this specific problem, but rest assured it will work, and very well at that.
Also, Clojure and Scala are implemented on the JVM. So technically they /are/ on an enterprise platform.
What are business rules if not functions? The application of rules can be expressed as applying a function to a set of data. It can also be combined with polymorphism, e.g. through generic functions (multiple dispatch can be handy, too) and inheritance.
Code is data, data is code, and both should be like water.
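As a rough sketch of that idea (in Scala, as elsewhere on this page; the Order rules are invented): each business rule is an ordinary function over the data, and applying a rule set is just composition.

```scala
object BusinessRules {
  case class Order(total: BigDecimal, customerAge: Int)

  // A business rule is a plain function: pass the order through,
  // or reject it with a reason.
  type Rule = Order => Either[String, Order]

  val minimumTotal: Rule =
    o => if (o.total >= 10) Right(o) else Left("total too small")
  val adultsOnly: Rule =
    o => if (o.customerAge >= 18) Right(o) else Left("customer too young")

  // A rule set is just the composition of its rules.
  def validate(rules: List[Rule])(o: Order): Either[String, Order] =
    rules.foldLeft(Right(o): Either[String, Order])((acc, rule) => acc.flatMap(rule))

  // validate(List(minimumTotal, adultsOnly))(Order(BigDecimal(25), 30))
  //   == Right(Order(25, 30))
}
```

Adding a rule is adding a function to the list; no class hierarchy needs to change.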
From what I've seen, Scala looks like it handles normal Java just fine. Hence, anything that Java can handle for business, Scala could too.
On the .NET side, F# is another great example of a functional language that works fine for "business" applications. To put it simply, F# can do everything C# can do, and more, easier.
But for both of these languages, the "programming in the large" side tends to borrow from OOP. Not that there's anything wrong with mixing things, but perhaps that's not what you asked. If you want to stick to a more functional approach and, say, not use objects, you could run into a bit more hassle because the tooling support won't be at the same level. With languages that integrate easily with .NET/Java, that's not as big an issue.
As far as "is it wise?": That depends on the project, company, and other environmental factors. It seems that a common "enterprise pattern" is that code has to be extremely dumbed down so that anyone can work on it. In that case, you might get people involved who'd think that using a lambda makes it too difficult for others to understand.
But is it wise to use functional programming for a typical enterprise application where there are a lot of business rules but not much in terms of computation?
Business rules are just computation and you can often express them more succinctly and clearly using functional programming.
A growing number of enterprise apps are written in functional languages. Citrix XenDesktop and XenServer are built upon a tool stack written primarily in OCaml. The MyLife people search engine is written in OCaml. We are a small company, but all of our LOB software (e.g. credit-card transactions, accounts, web analytics) is written in F#. Microsoft's ads on Bing use F# code. Perhaps the most obvious example is anyone using recent versions of C# and .NET, because they are almost certainly using functional concepts (e.g. delegates).
If you mean more exotic functional languages such as Clojure, Scala and Haskell then I believe some people are using them but I do not have any details myself.
More than a year ago I delved a bit into Haskell and also tried a few things that I would regard as typical business problems (to put it bluntly: given a number of values, what is the correct response?). Hence I would say that, yes, you should be able to model a number of business problems with functional programming.
Personally, I couldn't find in Haskell the same obviousness with which I can push an OO + functional approach in C#, but this could well be because I haven't done much with Haskell and a lot more with C#.
Then there is the question of how to communicate with a customer. My experience is that many of them think in strictly chronological terms, which kind of favours imperative programming. Even when going into models of state changes etc. you can lose the odd customer. Thinking along function compositions and monads that may represent the chronological operations of the business could probably be beyond many, many customers.
Either way, you can find my business-y example here.
I assume that when you talk about a lot of business rules you are thinking about application development, in the sense that you want to model a real-world workflow. Unlike vanilla programming, application development involves higher levels of responsibility (particularly for requirement capturing and testing). If so, I strongly suggest you see whether you could apply domain-driven development. A natural choice for domain-driven development is an object-oriented approach. This, and the fact that a lot of programmers are decent at object-oriented programming, is one reason for its popularity in application development. However, this does not mean that real-world, big-scale projects are always written this way (read http://www.paulgraham.com/avg.html).
You might want to check out the iTasks system, which is a library for the functional language Clean designed exactly to express workflow and business processes.
An answer to a Stack Overflow question stated that a particular framework violated a plain and simple OOP rule: Single Responsibility Principle (SRP).
Is the Single Responsibility Principle really a rule of OOP?
My understanding of the definition of Object Oriented Programming is "a paradigm where objects and their behaviour are used to create software". This includes the following techniques: Encapsulation, Polymorphism & Inheritance.
Now don't get me wrong - I believe SRP to be the key to most good OO designs, but I feel there are cases where this principle can and should be broken (just like database normalization rules). I aggressively push the benefits of SRP, and the great majority of my code follows this principle.
But, is it a rule, and thus implies that it shouldn't be broken?
Very few rules, if any, in software development are without exception. Some people think there is no place for goto, but they're wrong.
As far as OOP goes, there isn't a single definition of object-orientedness so depending on who you ask you'll get a different set of hard and soft principles, patterns, and practices.
The classic idea of OOP is that messages are sent to otherwise opaque objects and the objects interpret the message with knowledge of their own innards and then perform a function of some sort.
SRP is a software engineering principle that can apply to the role of a class, or a function, or a module. It contributes to the cohesion of something so that it is well put together, without unrelated bits hanging off it or multiple roles that intertwine and complicate things.
Even with just one responsibility, that can still range from a single function to a group of loosely related functions that are part of a common theme. As long as you're avoiding jury-rigging an element to take on the responsibility of something it wasn't primarily designed for, or doing some other ad-hoc thing that dilutes the simplicity of an object, then violate whatever principle you want.
But I find that it's easier to get SRP correct than to do something more elaborate that is just as robust.
None of these rules are laws. They are more guidelines and best practices. There are times when it doesn't make sense to follow "the rules" and you need to do what is best for your situation.
Don't be afraid to do what you think is right. You might actually come up with newer and better rules.
To quote Captain Barbossa:
"..And secondly, you must be a pirate for the pirate's code to apply and you're not.
And thirdly, the code is more what you'd call "guidelines" than actual rules...."
To quote Jack Sparrow & Gibbs.
"I thought you were supposed to keep to the code."
Mr. Gibbs: "We figured they were more actual guidelines. "
So clearly Pirates understand this pretty well.
The "rules" could be understood via the patterns movement as "Forces"
So there is a force trying to make the class have a single responsibility. (cohesion)
But there is also a force trying to keep the coupling to other classes down.
As with all design (not just code), the answer is that it depends.
Ahh, I guess this pertains to an answer I gave. :)
As with most rules and laws, there are underlying motives by which these rules are relevant -- if the underlying motive is not present or applicable to your case, then you are free to bend/break the rules according to your own needs.
That being said, SRP is not a rule of OOP per se, but it is considered a best practice for creating OOP applications that are both easily extensible and unit-testable.
Both are characteristics that I consider as of utmost importance in enterprise application development, where maintenance of existing applications occupies more time than new development does.
As many of the other posters have said, all rules are made to be broken.
That being said, I do think that SRP is one of the more important rules for writing good code. It's not specific to Object Oriented programming, but the "encapsulation" part of OOP is very hard to do right if the class does not have a single responsibility.
After all, how do you correctly and simply encapsulate a class with multiple responsibilities? Usually the answer is multiple interfaces, and in many languages that can help quite a bit, but it's still confusing to the users of your class that it may behave in completely different ways in different situations.
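As a small sketch of that multiple-interfaces idea (in Scala, like the other examples here; the Report names are invented): one class may carry both responsibilities, but each client is handed only the narrow interface it needs.

```scala
// One narrow trait per responsibility.
trait ReportReader { def read(id: Int): String }
trait ReportWriter { def write(id: Int, body: String): Unit }

// A single class may implement both...
class ReportStore extends ReportReader with ReportWriter {
  private val data = scala.collection.mutable.Map.empty[Int, String]
  def read(id: Int): String = data.getOrElse(id, "")
  def write(id: Int, body: String): Unit = data(id) = body
}

object Renderer {
  // ...but a client that only renders reports never sees write().
  def render(reader: ReportReader, id: Int): String =
    "<p>" + reader.read(id) + "</p>"
}
```

The narrow interfaces keep each client's view single-purpose even when the implementation behind them is not.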
SRP is just another expression of ISP :-)
And the "P" means "principle", not "rule" :D