I'm trying to understand the difference between the imperative and declarative paradigms, because I have to classify Visual Basic .NET into the different paradigms. Beyond object-oriented, I guess it's also imperative or declarative. If someone could help me by explaining how to work that out, I would appreciate it.
Imperative code is procedural: do this, then this, then that, then the other, in that order. It's very precise and specific in what you want. Most languages that we use for building end-user applications are imperative languages, including VB.Net. Language structures that indicate an imperative language include if blocks, loops, and variable assignments.
Declarative code just describes the result you want the system to provide, but can leave some actual implementation details up to the system. The canonical example of a declarative language is SQL (though it has some imperative features as well). You could also consider document markup languages to be declarative. Language structures that indicate a declarative language include set-based operations and markup templates.
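To make the contrast concrete, here is a minimal sketch (in Scala rather than VB.Net, with made-up data) that computes the same result both ways: once by spelling out every step, once by describing the result you want:

```scala
object DeclarativeVsImperative {
  // Hypothetical data: order amounts in dollars.
  val orders = List(120.0, 35.5, 560.0, 80.0)

  def main(args: Array[String]): Unit = {
    // Imperative: spell out each step and mutate state along the way.
    var total = 0.0
    for (amount <- orders) {
      if (amount > 100.0) total += amount
    }

    // Declarative: describe the result (the sum of the large orders)
    // and leave the iteration details to the library.
    val declarativeTotal = orders.filter(_ > 100.0).sum

    println(total)            // 680.0
    println(declarativeTotal) // 680.0
  }
}
```

The declarative version says nothing about loop order or intermediate variables; the library decides how to produce the sum.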
Here's the trick: while VB.Net would traditionally be considered imperative, as of the introduction of LINQ back in 2008, VB.Net also has significant declarative features that a smart programmer will take advantage of. These features allow you to write VB.Net code that looks a lot like SQL.
I classify C#/VB as multi-paradigm. They are imperative (If, For, While), declarative (LINQ), object-oriented, and functional (lambdas). I think that in today's landscape there are no more pure languages; they all have bits of several paradigms.
"The idea of a multiparadigm language is to provide a framework in which programmers can work in a variety of styles, freely intermixing constructs from different paradigms" http://en.wikipedia.org/wiki/Timothy_Budd
VB.NET never required LINQ to be considered declarative. In my understanding, declarative means that a programming language can "speak English", i.e. business logic is written exactly as the requirements say. The actual implementation may vary. This is called domain-driven design (DDD) in some schools of thought.
For that matter, any object-oriented language can be seen as declarative. That does not take away its imperative features - those are used to make it as declarative as you want. And this is the power behind properly implemented OO concepts, with a concrete task in mind.
Can any language be used to program in any paradigm? For example, C doesn't have classes, but it is still possible to program in an OOP style. There are some languages (such as assembly) that I can't see using OOP in.
Yes, simply due to the fact that you can implement an interpreter for your $favorite $paradigm in the host language.
Practically, though, this is not feasible, efficient, or right.
C++ is ultimately assembly, you just have a compiler to write the assembly for you from a nicer description. So sure you can do OOP in assembly, just as you can do OOP in C; it's just that a lot of the OO concepts end up being implemented with convention and programmer discipline rather than being forced by the structure of the language, with the result that huge classes of bugs become possible that your language tools probably won't be very good at helping you find.
Similar arguments follow for most paradigm/language mismatches. Lots of object-oriented programs have been written in C this way, so it can even be a somewhat practical thing to do, not just an academic matter.
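As a rough illustration of "OO by convention", here is a hedged sketch (in Scala, with invented names, standing in for what you would do with C structs and function pointers): the "object" is just a record holding its data and its function values, and dispatch is done by hand rather than by the language.

```scala
object OoByConvention {
  // A hand-rolled "object": data plus explicitly stored function values,
  // much like a C struct carrying function pointers. Dispatch is manual.
  final case class Shape(data: Double,
                         area: Double => Double,
                         describe: Double => String)

  def makeCircle(radius: Double): Shape =
    Shape(radius, r => math.Pi * r * r, r => s"circle of radius $r")

  def makeSquare(side: Double): Shape =
    Shape(side, s => s * s, s => s"square of side $s")

  def main(args: Array[String]): Unit = {
    val shapes = List(makeCircle(1.0), makeSquare(2.0))
    // "Method calls" are just applying the stored functions to the stored
    // data; only convention keeps the right data with the right functions.
    shapes.foreach(s => println(s"${s.describe(s.data)} has area ${s.area(s.data)}"))
  }
}
```

Nothing in the language stops a caller from pairing the wrong data with the wrong functions; that is exactly the class of bug mentioned above.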
It can be a little harder when what you want is to remove restrictions rather than add them.
In purity-enforced languages such as Haskell and Mercury you can't suddenly break out object-oriented style packets-of-encapsulated-mutable-state in the middle of arbitrary pure code (at least not without using "all bets are off" features like unsafePerformIO in Haskell or promise_pure in Mercury to lie to the compiler, at which point your program may well completely fail to work unless you can wrap a pure interface around the regions in which you do this). However you can write whole programs in procedural or object-oriented style in these languages, by never leaving the mechanism they use to do IO.
Likewise, if you consider the use of duck typing in dynamic languages to be a paradigm, it's pretty painful to get something similar in languages with static typing, but you can always find a way to represent your dynamic types as data. But you again find yourself doing things with convention and reimplementation that you would get for free if you were really using a duck typing language.
I'm pretty sure it would be hard to find a language (usable for writing general purpose programs) that can't be adapted to write code in any paradigm you like. The adaptation may not produce very efficient code (sometimes it can though; adapting C or assembly to any paradigm can usually be made pretty much as efficient as if you had a language tuned for that paradigm), and it will almost certainly be horribly inefficient in terms of programmer time.
No, not all languages can be used to program in any paradigm. However, the more popular ones - Python, C++, etc. - all allow you to choose how you want to program. Even PHP is adding OO support.
I know very well the traditional arguments about why interface inheritance is preferred to multiple inheritance; there has already been a post here:
Should C# have multiple inheritance?
But according to Stroustrup, the real reason why Microsoft and Sun decided to get rid of multiple inheritance is that they have a vested interest in doing so: instead of putting features in the languages, they put them in frameworks, so that people become tied to their platform instead of having the same capability at the language-standard level.
What do you think?
Why do Sun and Microsoft consider developers too immature to just make the choice themselves?
The above is my explicit interpretation of what he said. Of course, he said it in a more politically correct way :)
Excerpt from "A Conversation with Bjarne Stroustrup"
http://www.artima.com/intv/modern.html
People quite correctly say that you don't need multiple inheritance, because anything you can do with multiple inheritance you can also do with single inheritance. You just use the delegation trick I mentioned. Furthermore, you don't need any inheritance at all, because anything you do with single inheritance you can also do without inheritance by forwarding through a class. Actually, you don't need any classes either, because you can do it all with pointers and data structures. But why would you want to do that? When is it convenient to use the language facilities? When would you prefer a workaround? I've seen cases where multiple inheritance is useful, and I've even seen cases where quite complicated multiple inheritance is useful. Generally, I prefer to use the facilities offered by the language to doing workarounds.
From "Interview of Bjarne Stroustrup by "Developpeur Reference""
http://www2.research.att.com/~bs/nantes-interview-english.html
You can always re-write an example using multiple inheritance into one that uses single inheritance only (by using forwarding functions). However, the result is often an example that is longer, reflects the design less directly, and is harder to maintain. Note that you can also rewrite every example using single inheritance to an example using no inheritance using the same technique and with the same negative impact on code clarity. A language that does not support multiple inheritance is simply less expressive than one that supports multiple inheritance and thereby forces the programmer to occasionally complicate code.
...
People talk a lot about frameworks, but history is littered with frameworks that didn't live up to their expectations. I have seen successful frameworks, but they were generally limited in scope. I'm skeptical of "universal" frameworks, and even more so when such frameworks are products of a platform vendor competing with similar frameworks from other vendors. As a user, I prefer to maintain my independence from vendors as far as possible.
I'd like to see libraries providing cleaner and more general access to frameworks - as opposed to languages intimately tied to a single framework.
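For readers unfamiliar with the "delegation trick" / forwarding functions mentioned in the quotes, a minimal sketch (in Scala, with invented class names) of replacing one of two would-be base classes with a forwarded member looks roughly like this:

```scala
object ForwardingInsteadOfMI {
  class Logger {
    def log(msg: String): Unit = println(s"[log] $msg")
  }

  class Repository {
    def save(record: String): Unit = println(s"saved: $record")
  }

  // With multiple inheritance we could derive from both Logger and Repository.
  // Without it, we inherit from one and forward calls to a member of the other.
  class Service extends Repository {
    private val logger = new Logger
    def log(msg: String): Unit = logger.log(msg) // forwarding function

    def register(record: String): Unit = {
      log(s"registering $record")
      save(record)
    }
  }

  def main(args: Array[String]): Unit =
    new Service().register("order-42")
}
```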
My own thought:
People do follow fashion, and IT is no exception. Nobody dares to question the fundamentals until some gurus themselves have an interest in doing so.
For example, in the case of Java, nobody dared to question EJB until Rod Johnson came along with another framework, which he said was inspired by .NET pragmatism.
And now .NET itself is becoming more and more framework-ish with EF.
What is the difference between a software development pattern and a methodology such as Agile, DSDM, etc.? How is OO classed as both a methodology and a paradigm?
How can OO be applied to a methodology such as Agile if it is itself a methodology?
What's the difference between a paradigm, a methodology, and a development pattern?
Thanks for any replies.
"When I use a word," Humpty Dumpty
said, in a rather scornful tone, "it
means just what I choose it to mean -
neither more nor less." "The question
is," said Alice, "whether you can make
words mean so many different things."
"The question is," said Humpty Dumpty,
"which is to be master - that's all."
Through the Looking Glass.
Well, not my answer, Lewis Carroll's.
Looking at only one of the questions you asked: "...how is OO classed as a methodology and a paradigm?"
That, at least, has a fairly simple answer:
Object Oriented Design is an analysis methodology.
Object Oriented Programming is an implementation paradigm.
OOD involves analyzing a problem in terms of objects and their interactions. OOP involves implementing a solution as a set of interacting objects.
"Agile" (I hate that name -- though I'll admit "eXtreme Programming" is worse) is really about project management. Just for example, you can apply Pair Programming about equally to something like assembly language or C as to a language that explicitly supports object oriented programming (though being a relatively new idea, it's probably used most often in conjunction with relatively new languages).
Edit: How I'd separate "methodology" from "paradigm" is fairly simple (at least in theory).
Paradigm is really just a fancy word for "example". If I'm following that example to a meaningful degree, the source code (for example) to the program should contain direct, (fairly) clearly defined results from having followed that example. Just for the obvious one, a class publicly derived from another would be a pretty obvious indication of OOP.
A methodology, by contrast, doesn't necessarily show a direct, definable result in the source code. Just for example, there's unlikely to be much in the source code to indicate whether it was developed using "Agile" methodology. I might be able to take a guess if (for example) all the source code files contained comments indicating two authors, but (at best) it would be a rather indirect indication of one specific piece of the methodology.
I said in theory, because things can get a bit "fuzzy" at times. If I try hard enough, I can probably write pretty close to pure procedural code, even in a language like Smalltalk that favors objects almost exclusively. Likewise, if I try hard enough I can write OO code in something like C that doesn't really support it. In a case like this, the indications of following the paradigm will usually be harder to find or define than in a more straightforward case.
Methodology is about people. Paradigm is about software.
A paradigm is a way of thinking about a problem - so objects, a relational database, and the lambda calculus are all models for getting a problem into your head.
A methodology is a way of actually building something based on the paradigm.
If you like, the paradigm is the architect: what are we building? Should it be a suspension bridge or an arch? The methodology is the engineering: how many cables, how thick, which subcontractors.
I've been mainly exposed to OO programming so far and am looking forward to learning a functional language. My questions are:
When do you choose functional programming over object-oriented?
What are the typical problem definitions where functional programming is a better choice?
When do you choose functional programming over object oriented?
When you anticipate a different kind of software evolution:
Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.
When evolution goes the wrong way, you have problems:
Adding a new operation to an object-oriented program may require editing many class definitions to add a new method.
Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.
This problem has been well known for many years; in 1998, Phil Wadler dubbed it the "expression problem". Although some researchers think that the expression problem can be addressed with such language features as mixins, a widely accepted solution has yet to hit the mainstream.
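A small, hypothetical Scala sketch of the two directions of evolution (all names invented): the object-oriented encoding makes adding a new kind of thing cheap, while the functional encoding makes adding a new operation cheap.

```scala
object ExpressionProblemSketch {
  // OO encoding: operations live on the things. Adding a new thing is one
  // new class; adding a new operation means editing every existing class.
  trait Shape { def area: Double }
  class Circle(val r: Double) extends Shape { def area: Double = math.Pi * r * r }
  class Square(val s: Double) extends Shape { def area: Double = s * s }

  // Functional encoding: the set of things is closed. Adding a new operation
  // is one new function; adding a new thing means editing every function.
  sealed trait Fig
  case class Disc(r: Double) extends Fig
  case class Box(s: Double)  extends Fig

  def area(f: Fig): Double = f match {
    case Disc(r) => math.Pi * r * r
    case Box(s)  => s * s
  }

  // A later-added operation: no existing definition has to change.
  def perimeter(f: Fig): Double = f match {
    case Disc(r) => 2 * math.Pi * r
    case Box(s)  => 4 * s
  }

  def main(args: Array[String]): Unit = {
    println(new Circle(1.0).area)
    println(s"${area(Box(2.0))} / ${perimeter(Box(2.0))}")
  }
}
```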
What are the typical problem definitions where functional programming is a better choice?
Functional languages excel at manipulating symbolic data in tree form. A favorite example is compilers, where the source and intermediate languages seldom change (mostly the same things), but compiler writers are always adding new translations and code improvements or optimizations (new operations on things). Compilation and translation more generally are "killer apps" for functional languages.
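As a sketch of why tree-shaped symbolic data suits this so well (a made-up miniature intermediate language, not taken from any real compiler): each new "code improvement" is just one more function that pattern-matches on the tree, and the existing passes are left alone.

```scala
object ConstantFolding {
  // A tiny, invented intermediate language.
  sealed trait Ir
  case class Const(n: Int)     extends Ir
  case class Var(name: String) extends Ir
  case class Add(a: Ir, b: Ir) extends Ir
  case class Mul(a: Ir, b: Ir) extends Ir

  // One "code improvement" pass: fold constant subtrees and drop identities.
  // The next pass is just another function like this one.
  def simplify(e: Ir): Ir = e match {
    case Add(a, b) =>
      (simplify(a), simplify(b)) match {
        case (Const(x), Const(y)) => Const(x + y)
        case (Const(0), y)        => y
        case (x, Const(0))        => x
        case (x, y)               => Add(x, y)
      }
    case Mul(a, b) =>
      (simplify(a), simplify(b)) match {
        case (Const(x), Const(y)) => Const(x * y)
        case (Const(1), y)        => y
        case (x, Const(1))        => x
        case (x, y)               => Mul(x, y)
      }
    case other => other
  }

  def main(args: Array[String]): Unit =
    // (x * 1) + (2 + 3)  ==>  Add(Var(x), Const(5))
    println(simplify(Add(Mul(Var("x"), Const(1)), Add(Const(2), Const(3)))))
}
```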
You don't necessarily have to choose between the two paradigms. You can write software with an OO architecture using many functional concepts. FP and OOP are orthogonal in nature.
Take, for example, C#. You could say it's mostly OOP, but there are many FP concepts and constructs. If you consider LINQ, the most important constructs that permit LINQ to exist are functional in nature: lambda expressions.
Another example, F#. You could say it's mostly FP, but there are many OOP concepts and constructs available. You can define classes, abstract classes, interfaces, deal with inheritance. You can even use mutability when it makes your code clearer or when it dramatically increases performance.
Many modern languages are multi-paradigm.
Recommended readings
As I'm in the same boat (OOP background, learning FP), I'd suggest some readings I've really appreciated:
Functional Programming for Everyday .NET Development, by Jeremy Miller. A great article (although poorly formatted) showing many techniques and practical, real-world examples of FP on C#.
Real-World Functional Programming, by Tomas Petricek. A great book that deals mainly with FP concepts, trying to explain what they are, when they should be used. There are many examples in both F# and C#. Also, Petricek's blog is a great source of information.
Object Oriented Programming offers:
- Encapsulation, to:
  - control mutation of internal state
  - limit coupling to internal representation
- Subtyping, allowing:
  - substitution of compatible types (polymorphism)
  - a crude means of sharing implementation between classes (implementation inheritance)
Functional Programming, in Haskell or even in Scala, can allow substitution through the more general mechanism of type classes. Mutable internal state is either discouraged or forbidden. Encapsulation of internal representation can also be achieved. See Haskell vs OOP for a good comparison.
Norman's assertion that "Adding a new kind of thing to a functional program may require editing many function definitions to add a new case." depends on how well the functional code has employed type classes. If Pattern Matching on a particular Abstract Data Type is spread throughout a codebase, you will indeed suffer from this problem, but it is perhaps a poor design to start with.
EDITED: Removed the reference to implicit conversions when discussing type classes. In Scala, type classes are encoded with implicit parameters, not conversions, although implicit conversions are another means of achieving substitution of compatible types.
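For completeness, here is a minimal sketch of the Scala encoding referred to above, with an invented Area type class: substitution of compatible types comes from an implicit parameter rather than from a common supertype.

```scala
object TypeClassSketch {
  // The type class: anything with an Area instance can be measured.
  trait Area[A] {
    def area(a: A): Double
  }

  case class Circle(r: Double)
  case class Square(s: Double)

  // Instances are ordinary values supplied through implicit parameters.
  implicit val circleArea: Area[Circle] = (c: Circle) => math.Pi * c.r * c.r
  implicit val squareArea: Area[Square] = (sq: Square) => sq.s * sq.s

  // Works for any A that has an Area instance; no common supertype is needed.
  def describe[A](a: A)(implicit ev: Area[A]): String =
    s"$a has area ${ev.area(a)}"

  def main(args: Array[String]): Unit = {
    println(describe(Circle(1.0)))
    println(describe(Square(2.0)))
  }
}
```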
If you're in a heavily concurrent environment, then pure functional programming is useful. The lack of mutable state makes concurrency almost trivial. See Erlang.
In a multiparadigm language, you may want to model some things functionally if the existence of mutable state is just an implementation detail, and thus FP is a good model for the problem domain. For example, see list comprehensions in Python or std.range in the D programming language. These are inspired by functional programming.
I am convinced that functional programming is an excellent choice when it comes to applications that require a lot of computation (data mining, AI, NLP, etc.).
Is functional programming being used in any well known enterprise applications or open source projects? How did they incorporate business logic into the functional design?
Please disregard the fact that there are very few people using functional programming and that it's kind of tough.
Thanks
Functional programming languages like Clojure and Scala are good for pretty much anything. As for Haskell, an experienced Haskell programmer would probably be able to substitute Haskell for any language for any problem - efficient or not. I don't know if there is a functional programming language that could be considered /best/ out of all languages for this specific problem, but rest assured it will work, and very well at that.
Also, Clojure and Scala are implemented on the JVM. So technically they /are/ on an enterprise platform.
What are business rules if not functions? Application of rules can be expressed as applying a function to a set of data. It can also be combined with polymorphism. e.g. through generic functions (multiple dispatch can be handy, too) and inheritance.
Code is data, data is code, and both should be like water.
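A rough sketch of that idea in Scala (the domain and rule names are made up): each business rule is just a function from the data to a possible rejection, and applying the rules is applying the functions.

```scala
object BusinessRulesAsFunctions {
  // Invented domain data.
  case class Order(customer: String, total: Double, itemCount: Int)

  // A business rule is just a function from an Order to an optional rejection.
  type Rule = Order => Option[String]

  val notEmpty: Rule    = o => if (o.itemCount == 0) Some("order has no items") else None
  val withinLimit: Rule = o => if (o.total > 10000) Some("total exceeds credit limit") else None

  // Applying the rules = applying each function and collecting the failures.
  def validate(order: Order, rules: List[Rule]): List[String] =
    rules.flatMap(rule => rule(order))

  def main(args: Array[String]): Unit = {
    val rules = List(notEmpty, withinLimit)
    println(validate(Order("ACME", 250.0, 3), rules))   // List() - accepted
    println(validate(Order("ACME", 25000.0, 0), rules)) // both rejection reasons
  }
}
```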
From what I've seen, Scala looks like it handles normal Java just fine. Hence, anything that Java can handle for business, Scala could too.
On the .NET side, F# is another great example of a functional language that works fine for "business" applications. To put it simply, F# can do everything C# can do, and more, easier.
But for both of these languages, the "programming in the large" side tends to borrow from OOP. Not that there's anything wrong with mixing things, but perhaps that's not what you asked. If you want to stick to a more functional approach and, say, not use objects, you could run into a bit more hassle because the tooling support won't be on the same level. With languages that easily integrate with .NET/Java, that's not as big an issue.
As far as "is it wise?": That depends on the project, company, and other environmental factors. It seems that a common "enterprise pattern" is that code has to be extremely dumbed down so that anyone can work on it. In that case, you might get people involved who'd think that using a lambda makes it too difficult for others to understand.
But is it wise to use functional programming for a typical enterprise application where there are a lot of business rules but not much in terms of computation?
Business rules are just computation and you can often express them more succinctly and clearly using functional programming.
A growing number of enterprise apps are written in functional languages. Citrix XenDesktop and XenServer are built upon a tool stack written primarily in OCaml. The MyLife people search engine is written in OCaml. We are a small company, but all of our LOB software (e.g. credit-card transactions, accounts, web analytics) is written in F#. Microsoft's ads on Bing use F# code. Perhaps the most obvious example is anyone using recent versions of C# and .NET, because they are almost certainly using functional concepts (e.g. delegates).
If you mean more exotic functional languages such as Clojure, Scala and Haskell then I believe some people are using them but I do not have any details myself.
More than a year ago I delved a bit into Haskell and also tried a few things that I would regard as a typical business problem (To put it bluntly, given a number of values, what is the correct response?). Hence, I would say, yes, you should be able to model a number of business problems with functional programming.
Personally, I couldn't find in Haskell the same obviousness with which I can push an OO + functional approach in C#, but this could well be because I haven't done much with Haskell and a lot more with C#.
Then there is the question of how to communicate with a customer. My experience is that many of them think in strictly chronological terms, which kind of favours imperative programming. Even when going into models of state changes etc., you can lose the odd customer. Thinking in terms of function compositions and monads that may represent the chronological operations of the business could probably be beyond many, many customers.
Either way, you can find my business-y example here.
I assume that when you talk about a lot of business rules you are thinking about application development - application development in the sense that you want to model a real-world workflow. Unlike vanilla programming, application development involves higher levels of responsibility (particularly for requirement capturing and testing). If so, I strongly suggest seeing whether you could apply domain-driven development. A natural choice for domain-driven development is an object-oriented approach. This, and the fact that a lot of programmers are decent at object-oriented programming, is one reason for its popularity in application development. However, this does not mean that real-world, big-scale projects are always written this way (read http://www.paulgraham.com/avg.html).
You might want to check out the iTasks system, which is a library for the functional language Clean and is designed specifically to express workflow and business processes.