"Meta-logic" and "object-logic" (as word) definition in Isabelle - definition

What is the formal and complete definition of the words "meta-logic" and "object-logic" in Isabelle? I see people keep using these terms, but I could not find a definition for them.

You don't find them because, as far as I can tell, they are specific to Isabelle: the terms "object-logic" and "meta-logic" were introduced by Larry Paulson. They are related, though not identical, to the general terms "metalanguage" and "object language" used in disciplines like logic and set theory. Do a search on those and you'll get the standard wiki pages, because they're a standard part of logic.
Here, I'm looking at page 16, section 2.2.3 "Meta and object language", of Logic and Computation: Interactive Proof with Cambridge LCF, by Larry Paulson, published 1987. At that time he was still conforming to the standard terms, but then he switched. I forget where I read it, but somewhere he made the switch to "meta-logic" and "object-logic", to clarify things for his own purposes. The two terms are in his papers and in the Isabelle distribution docs.
Others can give you their expert knowledge, but the meta-logic, specifically, is what you get when you import the theory Pure: in particular, a minimal set of logical connectives, ==>, \<And>, &&&, and ==. Discussion of these is spread throughout the Isabelle distribution documentation.
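As a rough sketch (my own reading, so verify it against the docs): a rule of an object-logic such as HOL is itself stated using the Pure connectives. In ASCII notation, something like:
(* Pure meta-logic: !! is \<And> (meta-universal), ==> is meta-implication *)
!!x. P x ==> Q x
(* the corresponding HOL object-logic formula, using ALL and --> *)
ALL x. P x --> Q x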
I don't know much about intuitionistic logic, other than that it doesn't provide the law of excluded middle, but you will read that these connectives provide a minimal, intuitionistic logic.
Don't thank me. I've just read some things here and there, and listened. Others can provide expert knowledge.

My findings with regard to this question are below.
I found this in the Clemens Ballarin slides, slide 20:
Meta logic: The logic used to formalize another logic.
Example: Mathematics used to formalize derivations in formal logic.
and it is put in parallel with:
Meta language: The language used to talk about another language.
Examples: German in a Spanish class, English in an English class.
Wikipedia has an entry on Metalogic, one section is Metalanguage - Object language:
In metalogic, formal languages are sometimes called object languages. The language used to make statements about an object language is called a metalanguage. This distinction is a key difference between logic and metalogic. While logic deals with proofs in a formal system, expressed in some formal language, metalogic deals with proofs about a formal system which are expressed in a metalanguage about some object language.
And here is slide 21 from Ballarin:

Related

How to implement a getter in Smalltalk Pharo

I have a class Person with setters and getters like this:
Object subclass: #Person
    instanceVariableNames: 'name document'
    classVariableNames: ''
    package: 'foo'
name
    ^ name
name: anObject
    name := anObject
document
    ^ document
document: anObject
    document := anObject
Then I instantiate my class in my TestPerson (note: unless you define a class-side constructor, create the instance with new and use the setters):
setUp
    p1 := Person new.
    p1 name: 'Alice'.
    p1 document: '12345'
So here I don't understand how I can use my getter to check whether the name is really 'Alice'.
For example, in Java it would be like this:
p1.getName().equals("Alice");
In Smalltalk the equivalent is simply: p1 name = 'Alice'. In an SUnit test you would write: self assert: p1 name equals: 'Alice'.
You really need to read a book on Smalltalk; you are missing the basics, and "learning by Stack Overflow" does not seem to be the best way.
There is a MOOC: http://mooc.pharo.org/.
There are a lot of free books here: http://books.pharo.org
Here you have some free general books: http://stephane.ducasse.free.fr/FreeBooks.html
And here you can find more general documentation: http://pharo.org/documentation (you can watch some introductory screencasts there).
I wanted to explain why Esteban's advice is important, and why it is especially good advice in the case of Smalltalk.
Many other languages use very basic programming concepts that every seasoned programmer already knows, but these are drowned in an ocean of special syntax, edge cases, exceptions, and multiple layers of often inconsistent or unrelated, arbitrary language design rules. Therefore, when you learn these languages the challenge is often indeed "how do I do X (a simple concept you already know) in language Y".
This makes it fair to ask "how do I do X in language Y" (as you just did).
It also makes it difficult to use the books or documentation available for that language, because they are either going to try to teach you X all over again (but you already know X; you just want to know how to do it in Y!), or they are going to be a long list of special tips and tricks shining a light on all the special cases and idiosyncrasies of language Y (and they might not actually cover your particular question, or if they do, you won't easily find it in the materials).
Smalltalk is different, because it is built orthogonally on a very small, simple, and consistent design of concepts and syntax.
So, with Smalltalk, you can afford the time to read the book (it is short: the syntax famously fits on a postcard, and the concepts are equally few and simple). The book will cover most if not all of the special cases, because... there mostly aren't any. Your knowledge will then apply universally. It will work horizontally (in all parts of the system) and vertically (at the highest and lowest levels of abstraction in the system).
It is such a liberating feeling to be able to focus on your own problem, knowing that the language supports you and does not get in the way, instead of wasting your mental energy on remembering silly arbitrary things.

Difference between interface and API

Could you please explain the difference between an interface and an API?
I was looking for this information here and using Google, but I only found vendor-specific information from Oracle.
What I'm looking for is the general difference.
Much appreciated!
Update:
Thank you all for the answers. My question was kept deliberately general because I:
1) did not have detailed information about the programming languages used (the question is based on brief information about one vendor's implementation in my project); and
2) wanted to understand the general, high-level difference between the two terms.
Without further context, your question is a bit broad; but let's try, by looking up the definition of API on Wikipedia:
In computer programming, an application programming interface (API) is a set of subroutine definitions, protocols, and tools for building software and applications.
Then: API stands for application programming interface, indicating that, well, an API contains all elements (plural!) required to create an application which wants to interact with the component behind that API.
Whereas an interface, in its stricter sense, typically denotes a single specific entity; for example, the List interface in Java describes an ordered collection (without providing details about a specific implementation).
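To sketch that distinction in code (illustrative only; all the names here are invented):
// A single interface: one specific contract.
trait Logger {
  def log(message: String): Unit
}
// An API: the whole surface a component exposes; typically
// several interfaces, types, and helpers bundled together.
trait LoggingApi {
  def console: Logger
  def file(path: String): Logger
  def setLevel(level: Int): Unit
}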
But of course, there are certain commonalities - both terms are about the description of the "boundary" of a "system". Long story short: there is simply no sharp, precise separation between those two concepts. Thus, when using those words within a group of people, you might want to first "step back" and discuss terminology - to ensure that all people involved have the same understanding of such terms. Or as DDD puts it: you want to create a Ubiquitous Language in which such terms have a clear, well defined meaning.
Finally: it is also worth mentioning that the term interface has different meanings in different programming languages. In Java, an interface is a concept embodied in the language core, whereas in C++ an "interface" would probably be seen as the content of a single header file; leading to subtle but important "flavors" of "interface" in those two languages.
Edit:
Both an interface and an API should avoid exposing (too much of) the internals to the outside world.
Generally speaking, an API outlines a "component" (a complete "application", for example), whereas an interface might outline a "smaller" entity.
For your other refinement, about one company providing the API and another the interface, I can't say anything; as others have explained, the definitions of those terms are really too unclear/fuzzy. We would need to know much more about the application and its requirements to even comment on your statement here.
The question "what is the difference between X and Y" is only meaningful when X and Y have single meanings and strict definitions. But "interface" and "API" do not have either single meanings nor strict definitions, so one cannot tell what is the difference between them.
For the most part, there is no difference, but there exist certain contexts where one would be suitable to use, while the other would be less suitable, or even unsuitable.
So, for example, a class implements an interface, never an API. On the other hand, an entire software system is more likely to be said to expose APIs rather than interfaces, though to say interfaces in this case would not be wrong either.
I wish there was some easy distinction, like "interfaces are small-scale, APIs are large-scale", or "interfaces are more specific, APIs are more nebulous", but there is no such distinction.

How Do I Design Abstract Semantic Graphs?

Can someone direct me to online resources for designing and implementing abstract semantic graphs (ASG)? I want to create an ASG editor for my language. Being able to edit the ASG directly has a number of advantages:
Only identifiers and literals need to be typed in and identifiers are written only once, when they're defined. Everything else is selected via the mouse.
Since the editor knows the language's grammar, there are no more syntax errors. The editor prevents them from being created in the first place.
Since the editor knows the language's semantics, there are no more semantic errors.
There are some secondary advantages:
Since all the reserved words are easily separable, a program can be written in one locale and viewed in another. On-the-fly changes of locale are possible.
All the text literals are easily separable, so changes of locale are easily made, including on-the-fly changes.
I'm not aware of a book on the matter, but you'll find the topic discussed in portions of various books on computer languages. You'll also find discussions of this around various projects which implement what you describe. For instance, you'll find quite a bit of discussion regarding the design of Scratch. Most workflow engines are also based on scripting over semantic graphs.
Allow me to opine... We've had the technology to manipulate languages structurally for basically as long as we've had programming languages. I believe the reason we still use textual languages is a combination of two facts: it is more natural for us as humans, who communicate in natural language, to wield text; and it is sometimes difficult to compose and refactor code when proper structure has to be maintained. If you're not sure what I mean, try building complex expressions in Scratch. Text is easier, and a decent IDE gives virtually as much verification of correct structure.*
*I don't mean to take anything away from Scratch, it's a thing of beauty and is perfect for its intended purpose.
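To make the structural idea concrete, here is a minimal sketch in Scala (node types invented for illustration) of the kind of typed representation an ASG editor can maintain, which is what delivers the "no syntax errors" advantage from the question:
// Expression nodes typed so that only well-formed trees can be built at all.
sealed trait Expr
case class Lit(value: Int) extends Expr
case class Ref(name: String) extends Expr          // an identifier, defined once
case class Add(lhs: Expr, rhs: Expr) extends Expr  // children must themselves be Exprs
// An editor over this representation can only ever construct valid programs:
val program: Expr = Add(Ref("x"), Lit(1)) // "x + 1", with no parser involved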

Is there any Mathematical Model or Theory behind Programming Languages? [closed]

RDBMS are based on Relational Algebra as well as Codd's Model. Do we have something similar to that for Programming languages or OOP?
Do we have [an underlying model] for programming languages?
Heavens, yes. And because there are so many programming languages, there are multiple models to choose from. Most important first:
Church's untyped lambda calculus is a model of computation that is as powerful as a Turing machine (no more and no less). The famous "Church-Turing thesis" is that these two equivalent models represent the most general model of computation that we know how to implement. The lambda calculus is extremely simple; in its entirety the language is
e ::= x | e1 e2 | \x.e
which constitute variables, function applications, and function definitions. The lambda calculus also comes with a fairly large collection of "reduction rules" for simplifying expressions. If you find an expression that can't be reduced, that is called a "normal form" and represents a value.
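To get a concrete feel for how small this is, here is the grammar above as a data type in Scala, with one step of beta reduction (a sketch: the substitution here is naive and not capture-avoiding):
// e ::= x | e1 e2 | \x.e
sealed trait Term
case class Var(name: String) extends Term
case class App(fun: Term, arg: Term) extends Term
case class Lam(param: String, body: Term) extends Term
// Naive substitution t[x := v]; a real implementation must avoid capture.
def subst(t: Term, x: String, v: Term): Term = t match {
  case Var(y)    => if (y == x) v else t
  case App(f, a) => App(subst(f, x, v), subst(a, x, v))
  case Lam(y, b) => if (y == x) t else Lam(y, subst(b, x, v))
}
// One beta-reduction step: (\x.e) v  reduces to  e[x := v].
def step(t: Term): Option[Term] = t match {
  case App(Lam(x, body), arg) => Some(subst(body, x, arg))
  case _                      => None // no redex at the root: a candidate normal form
}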
The lambda calculus is so general that you can take it in several directions.
If you want to use all the available rules, you can write specialized tools like partial evaluators and parts of compilers.
If you avoid reducing any subexpression under a lambda, but otherwise use all the rules available, you wind up with a model of a lazy functional language like Haskell or Clean. In this model, if a reduction can terminate, it is guaranteed to, and it is easy to represent infinite data structures. Very powerful.
If you avoid reducing any subexpression under a lambda, and if you also insist on reducing each argument to a normal form before a function is applied, then you have a model of an eager functional language like F#, Lisp, Objective Caml, Scheme, or Standard ML.
There are also several flavors of typed lambda calculi, of which the most famous are grouped under the name System F, which was discovered independently by Girard (in logic) and by Reynolds (in computer science). System F is an excellent model for languages like CLU, Haskell, and ML, which are polymorphic but have compile-time type checking. Hindley (in logic) and Milner (in computer science) discovered a restricted form of System F (now called the Hindley-Milner type system) which makes it possible to infer System F expressions from some expressions of the untyped lambda calculus. Damas and Milner developed an algorithm to do this inference, which is used in Standard ML and has been generalized in other languages.
Lambda calculus is just pushing symbols around. Dana Scott's pioneering work in denotational semantics showed that expressions in the lambda calculus actually correspond to mathematical functions—and he identified which ones. Scott's work is especially important in making sense of "recursive definitions", which are commonplace in computer science but are nonsensical from a mathematical point of view. Scott and Christopher Strachey showed that a recursive definition is equivalent to the least defined solution to a recursion equation, and furthermore showed how that solution could be constructed. Any language that allows recursion, and especially languages that allow recursion at arbitrary type (like Haskell and Clean) owes something to Scott's model.
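To see Scott's idea in miniature, a recursive definition can be read as a fixed point of a non-recursive function. A sketch in Scala:
// fix(f) behaves as a fixed point of f (the least one, in Scott's sense).
def fix[A, B](f: (A => B) => (A => B)): A => B =
  (a: A) => f(fix(f))(a)
// factorial defined not by recursion on itself, but as the solution
// of its own recursion equation:
val factorial: Int => Int =
  fix[Int, Int](self => n => if (n <= 1) 1 else n * self(n - 1))
println(factorial(5)) // 120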
There is a whole family of models based on abstract machines. Here there is not so much an individual model as a technique. You can define a language by using a state machine and defining transitions on the machine. This definition encompasses everything from Turing machines to Von Neumann machines to term-rewriting systems, but generally the abstract machine is designed to be "as close to the language as possible." The design of such machines, and the business of proving theorems about them, comes under the heading of operational semantics.
What about object-oriented programming?
I'm not as well educated as I should be about abstract models used for OOP. The models I'm most familiar with are very closely connected to implementation strategies. If I wanted to investigate this area further I would start with William Cook's denotational semantics for Smalltalk. (Smalltalk as a language is very simple, almost as simple as the lambda calculus, so it makes a good case study for modeling more complicated object-oriented languages.)
Wei Hu reminds me that Martin Abadi and Luca Cardelli have put together an ambitious body of work on foundational calculi (analogous to the lambda calculus) for object-oriented languages. I don't understand the work well enough to summarize it, but here is a passage from the Prologue of their book, which I feel is worth quoting:
Procedural languages are generally well understood; their constructs are by now standard, and their formal underpinnings are solid. The fundamental features of these languages have been distilled into formalisms that prove useful in identifying and explaining issues of implementation, static analysis, semantics, and verification.
An analogous understanding has not yet emerged for object-oriented languages. There is no widespread agreement on a collection of basic constructs and on their properties... This situation might improve if we had a better understanding of the foundations of object-oriented languages.
... we take objects as primitive and concentrate on the intrinsic rules that objects should obey. We introduce object calculi and develop a theory of objects around them. These object calculi are as simple as function calculi, but represent objects directly.
I hope this quotation gives you an idea of the flavor of the work.
Lisp is based on Lambda Calculus, and is the inspiration for much of what we see in modern languages today.
Von Neumann machines are the foundation of modern computers, which were first programmed in assembler language, then in FORmula TRANslator. Then the formal linguistic theory of context-free grammars was applied; it underlies the syntax of all modern languages.
Computability theory (formal automata) has a hierarchy of machine types that parallels the hierarchy of formal grammars: for example, regular grammar = finite-state machine, context-free grammar = pushdown automaton, context-sensitive grammar = linear bounded automaton, and unrestricted grammar = Turing machine.
There also is information theory, of two types, Shannon and Kolmogorov, that can be applied to computing.
There are lesser-known models of computing, such as recursive-function-theory, register-machines, and Post-machines.
And don't forget predicate-logic in its various forms.
Added: I forgot to mention discrete math - group theory and lattice theory. Lattices in particular are (IMHO) a particularly nifty concept underlying all boolean logic, and some models of computation, such as denotational semantics.
Functional languages like Lisp inherit their basic concepts from Church's "lambda calculus" (Wikipedia article here).
One concept may be Turing Machine.
If you study programming languages (eg: at a University), there is quite a lot of theory, and not a little math involved.
Examples are:
Finite State Machines
Formal Languages (and context-free grammars like BNF used to describe them)
The construction of LR parser tables
The closest analogy I can think of is Gurevich's Evolving Algebras which, nowadays, are better known under the name "Gurevich Abstract State Machines" (GASM).
I had long hoped to see more real applications of the theory after Gurevich joined Microsoft, but it seems that very little has come out. You can check the ASML page on the Microsoft site.
The good point about GASMs is that they closely resemble pseudo-code even though their semantics is formally specified. This means that practitioners can easily grasp them.
After all, I think that part of the success of Relational Algebra is that it is the formal foundation of concepts that can be easily grasped, namely tables, foreign keys, joins, etc.
I think we need something similar for the dynamic components of a software system.
There are many dimensions to your question, scattered across the answers.
First of all, to describe the syntax of a language and specify how a parser would work, we use context-free grammars.
Then you need to assign meanings to the syntax. Formal semantics come in handy; the main players are operational semantics, denotational semantics, and axiomatic semantics.
To rule out bad programs you have the type system.
In the end, all computer programs can reduce to (or compile to, if you will) very simple computation models. Imperative programs are more easily mapped to Turing machines, and functional programs are mapped to lambda calculus.
If you're learning all this stuff by yourself, I highly recommend http://www.uni-koblenz.de/~laemmel/paradigms0910/, because the lectures are videotaped and put online.
The history section of Wikipedia's Object-oriented programming could be enlightening.
Plenty has been mentioned of the application of math to computational theory and semantics. I like the mention of type theory and I'm glad someone mentioned lattice theory. Here are just a few more.
No one has explicitly mentioned category theory, which shows up more in functional languages than elsewhere, for example through the concepts of monads and functors. Then there's model theory and the various incarnations of logic that actually show up in theorem provers or the logic language Prolog. There are also mathematical treatments of the foundations of, and problems in, concurrent languages.
There is no mathematical model for OOP.
Relational algebra is the mathematical model behind SQL. It was created by E. F. Codd. C. J. Date was also a renowned scientist who helped develop this theory. The whole idea is that you can do every operation as a set operation, affecting a lot of values at the same time. This of course means that the database engine has to be told WHAT to get out, and that the database is able to optimize your query.
Both Codd and Date criticized SQL because they were involved in the theory, but they were not involved in the creation of SQL.
See this video: http://player.oreilly.com/videos/9781491908853?toc_id=182164
There is a lot of information from Chris Date. I remember that Date criticized SQL as a terrible language, but I cannot find the paper.
The critique was basically that most languages let you write expressions and assign those expressions to variables, but SQL does not.
Since SQL is a kind of logic language, I guess you could write relational algebra in Prolog. At least you would have a real language. So you could write queries in Prolog. And since in Prolog you have a lot of programs to interpret natural language, you could query your database using natural language.
According to Uncle Bob, databases are not going to be needed when everyone has SSDs, because the architecture of SSDs means that access is almost as fast as RAM. So you can have all your objects in RAM.
https://www.youtube.com/watch?feature=player_detailpage&v=t86v3N4OshQ#t=3287
The only problem with ditching SQL is that you would end up without a query language for the database.
So yes and no, relational algebra was used as inspiration for SQL, but SQL is not really an implementation of relational algebra.
In the case of Lisp, things are different. The main idea was that by implementing the eval function in Lisp you could have the whole language implemented. That's why the first Lisp implementation is only half a page of code.
http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/
To laugh a little bit: https://www.youtube.com/watch?v=hzf3hTUKk8U
The importance of functional programming all comes down to curried functions and lazy calls. And never forget environments and closures. And map-reduce. This all means we will be coding in functional languages in 20 years.
Now back to OOP, there is no formalization of OOP.
Interestingly, the second OO language ever created, Smalltalk, only has objects; it doesn't have primitives or anything like that. And its creator, Alan Kay, explicitly designed blocks to work exactly like Lisp functions.
Some people claim OOP could maybe be formalized using category theory, which is kind of like set theory but with morphisms. A morphism is a structure-preserving map between objects. So in general you could have map( f, collection ) and get back a collection with f applied to all elements.
I'm pretty sure Lisp has that, but Lisp also has functions that return one element of a collection, which destroys the structure; so a morphism is a special kind of function, and because of that you would need to reduce and limit the functions in Lisp so that they are all morphisms.
https://www.youtube.com/watch?feature=player_detailpage&v=o6L6XeNdd_k#t=250
The main problem with this is that functions don't exist independently of objects in OOP, but in category theory they do. They are therefore incompatible. You could develop a new language in which to express category theory.
One experimental formalism that has been used to try to formalize OOP is Z, or more precisely its object-oriented extension, Object-Z. Z itself derives from formal requirements specification.
Another attempt is Luca Cardelli's formalism:
http://lucacardelli.name/Papers/PrimObjImp.pdf
http://lucacardelli.name/Papers/PrimObj1stOrder.A4.pdf
http://lucacardelli.name/Papers/PrimObjSemLICS.A4.pdf
I'm unable to read and understand that notation. It seems like a useless exercise, since as far as I know, no one has ever implemented it the way the lambda calculus was implemented in Lisp.
As far as I know, formal grammars are used for the description of syntax.

Does functional programming replace GoF design patterns?

Since I started learning F# and OCaml last year, I've read a huge number of articles which insist that design patterns (especially in Java) are workarounds for the missing features in imperative languages. One article I found makes a fairly strong claim:
Most people I've met have read the Design Patterns book by the Gang of Four (GoF). Any self respecting programmer will tell you that the book is language agnostic and the patterns apply to software engineering in general, regardless of which language you use. This is a noble claim. Unfortunately it is far removed from the truth.
Functional languages are extremely expressive. In a functional language one does not need design patterns because the language is likely so high level, you end up programming in concepts that eliminate design patterns all together.
The main features of functional programming (FP) include functions as first-class values, currying, immutable values, etc. It doesn't seem obvious to me that OO design patterns are approximating any of those features.
Additionally, in functional languages which support OOP (such as F# and OCaml), it seems obvious to me that programmers using these languages would use the same design patterns found available to every other OOP language. In fact, right now I use F# and OCaml every day, and there are no striking differences between the patterns I use in these languages vs. the patterns I use when I write in Java.
Is there any truth to the claim that functional programming eliminates the need for OOP design patterns? If so, could you post or link to an example of a typical OOP design pattern and its functional equivalent?
The blog post you quoted overstates its claim a bit. FP doesn't eliminate the need for design patterns. The term "design patterns" just isn't widely used to describe the same thing in FP languages. But they exist. Functional languages have plenty of best practice rules of the form "when you encounter problem X, use code that looks like Y", which is basically what a design pattern is.
However, it's correct that most OOP-specific design patterns are pretty much irrelevant in functional languages.
I don't think it should be particularly controversial to say that design patterns in general only exist to patch up shortcomings in the language.
And if another language can solve the same problem trivially, that other language won't have need of a design pattern for it. Users of that language may not even be aware that the problem exists, because, well, it's not a problem in that language.
Here is what the Gang of Four has to say about this issue:
The choice of programming language is important because it influences one's point of view. Our patterns assume Smalltalk/C++-level language features, and that choice determines what can and cannot be implemented easily. If we assumed procedural languages, we might have included design patterns called "Inheritance", "Encapsulation," and "Polymorphism". Similarly, some of our patterns are supported directly by the less common object-oriented languages. CLOS has multi-methods, for example, which lessen the need for a pattern such as Visitor. In fact, there are enough differences between Smalltalk and C++ to mean that some patterns can be expressed more easily in one language than the other. (See Iterator for example.)
(The above is a quote from the Introduction to the Design Patterns book, page 4, paragraph 3)
The main features of functional programming include functions as first-class values, currying, immutable values, etc. It doesn't seem obvious to me that OO design patterns are approximating any of those features.
What is the command pattern, if not an approximation of first-class functions? :)
In an FP language, you'd simply pass a function as the argument to another function.
In an OOP language, you have to wrap up the function in a class, which you can instantiate and then pass that object to the other function. The effect is the same, but in OOP it's called a design pattern, and it takes a whole lot more code.
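For instance, a sketch in Scala (names invented):
// OOP: the Command pattern wraps behaviour in an object...
trait Command { def execute(): Unit }
class Greet(name: String) extends Command {
  def execute(): Unit = println(s"hello, $name")
}
def run(c: Command): Unit = c.execute()
// ...FP: the behaviour is just a first-class function.
def runF(f: () => Unit): Unit = f()
run(new Greet("world"))             // the pattern version
runF(() => println("hello, world")) // the function version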
And what is the abstract factory pattern, if not currying? Pass parameters to a function a bit at a time, to configure what kind of value it spits out when you finally call it.
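Again as a sketch (invented example): the curried function is configured a parameter at a time, much like a configured factory:
// A curried "factory": each application configures it further.
val connect: String => Int => String => String =
  host => port => db => s"jdbc://$host:$port/$db"
val onHost = connect("db.example.com") // partially configured
val url    = onHost(5432)("orders")    // the fully configured value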
So yes, several GoF design patterns are rendered redundant in FP languages, because more powerful and easier to use alternatives exist.
But of course there are still design patterns which are not solved by FP languages. What is the FP equivalent of a singleton? (Disregarding for a moment that singletons are generally a terrible pattern to use.)
And it works both ways too. As I said, FP has its design patterns too; people just don't usually think of them as such.
But you may have run across monads. What are they, if not a design pattern for "dealing with global state"? That's a problem that's so simple in OOP languages that no equivalent design pattern exists there.
We don't need a design pattern for "increment a static variable", or "read from that socket", because it's just what you do.
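To make that concrete, a minimal sketch in Scala of the idea behind the State monad (names invented): the state is threaded through as an ordinary value rather than mutated.
// A "stateful" action is just a pure function S => (A, S).
type Counter[A] = Int => (A, Int)
val increment: Counter[Int] =
  n => (n, n + 1) // returns the old value together with the new state
def andThen[A, B](m: Counter[A])(f: A => Counter[B]): Counter[B] =
  s0 => { val (a, s1) = m(s0); f(a)(s1) }
// "Increment a static variable" twice, with no mutation anywhere:
val twice = andThen(increment)(_ => increment)
println(twice(0)) // (1,2): the second increment saw 1; the final state is 2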
Saying a monad is a design pattern is as absurd as saying the Integers with their usual operations and zero element is a design pattern. No, a monad is a mathematical pattern, not a design pattern.
In (pure) functional languages, side effects and mutable state are impossible, unless you work around it with the monad "design pattern", or any of the other methods for allowing the same thing.
Additionally, in functional languages which support OOP (such as F# and OCaml), it seems obvious to me that programmers using these languages would use the same design patterns found available to every other OOP language. In fact, right now I use F# and OCaml every day, and there are no striking differences between the patterns I use in these languages vs. the patterns I use when I write in Java.
Perhaps because you're still thinking imperatively? A lot of people, after dealing with imperative languages all their lives, have a hard time giving up on that habit when they try a functional language. (I've seen some pretty funny attempts at F#, where literally every function was just a string of 'let' statements, basically as if you'd taken a C program, and replaced all semicolons with 'let'. :))
But another possibility might be that you just haven't realized that you're solving problems trivially which would require design patterns in an OOP language.
When you use currying, or pass a function as an argument to another, stop and think about how you'd do that in an OOP language.
Is there any truth to the claim that functional programming eliminates the need for OOP design patterns?
Yep. :)
When you work in a FP language, you no longer need the OOP-specific design patterns. But you still need some general design patterns, like MVC or other non-OOP specific stuff, and you need a couple of new FP-specific "design patterns" instead. All languages have their shortcomings, and design patterns are usually how we work around them.
Anyway, you may find it interesting to try your hand at "cleaner" FP languages, like ML (my personal favorite, at least for learning purposes), or Haskell, where you don't have the OOP crutch to fall back on when you're faced with something new.
As expected, a few people objected to my definition of design patterns as "patching up shortcomings in a language", so here's my justification:
As already said, most design patterns are specific to one programming paradigm, or sometimes even one specific language. Often, they solve problems that only exist in that paradigm (see monads for FP, or abstract factories for OOP).
Why doesn't the abstract factory pattern exist in FP? Because the problem it tries to solve does not exist there.
So, if a problem exists in OOP languages which does not exist in FP languages, then clearly that is a shortcoming of OOP languages. The problem can be solved, but your language does not solve it for you; it requires a bunch of boilerplate code from you to work around it. Ideally, we'd like our programming language to magically make all problems go away. Any problem that is still there is in principle a shortcoming of the language. ;)
Is there any truth to the claim that functional programming eliminates the need for OOP design patterns?
Functional programming is not the same as object-oriented programming. Object-oriented design patterns don't apply to functional programming. Instead, you have functional programming design patterns.
For functional programming, you won't read the OO design pattern books; you'll read other books on FP design patterns.
language agnostic
Not totally. Only language-agnostic with respect to OO languages. The design patterns don't apply to procedural languages at all. They barely make sense in a relational database design context. They don't apply when designing a spreadsheet.
a typical OOP design pattern and its functional equivalent?
The above shouldn't exist. That's like asking for a piece of procedural code rewritten as OO code. Ummm... If I translate the original Fortran (or C) into Java, I haven't done anything more than translate it. If I totally rewrite it into an OO paradigm, it will no longer look anything like the original Fortran or C -- it will be unrecognizable.
There's no simple mapping from OO design to functional design. They're very different ways of looking at the problem.
Functional programming (like all styles of programming) has design patterns. Relational databases have design patterns, OO has design patterns, and procedural programming has design patterns. Everything has design patterns, even the architecture of buildings.
Design patterns -- as a concept -- are a timeless way of building, irrespective of technology or problem domain. However, specific design patterns apply to specific problem domains and technologies.
Everyone who thinks about what they're doing will uncover design patterns.
Brian's comments on the tight linkage between language and pattern are to the point.
The missing part of this discussion is the concept of idiom. James O. Coplien's book "Advanced C++" was a huge influence here. Long before he discovered Christopher Alexander and the Column Without a Name (and you can't talk sensibly about patterns without reading Alexander either), he talked about the importance of mastering idioms in truly learning a language. He used string copy in C as an example: while (*to++ = *from++); You can see this as a band-aid for a missing language feature (or library feature), but what really matters about it is that it's a larger unit of thought, or of expression, than any of its parts.
That is what patterns, and languages, are trying to do, to allow us to express our intentions more succinctly. The richer the units of thought the more complex the thoughts you can express. Having a rich, shared vocabulary at a range of scales - from system architecture down to bit twiddling - allows us to have more intelligent conversations, and thoughts about what we should be doing.
We can also, as individuals, learn. Which is the entire point of the exercise. We each can understand and use things we would never be able to think of ourselves. Languages, frameworks, libraries, patterns, idioms and so on all have their place in sharing the intellectual wealth.
The GoF book explicitly ties itself to OOP - the title is Design Patterns - Elements of Reusable Object-Oriented Software (emphasis mine).
Design Patterns in Dynamic Programming by Peter Norvig has thoughtful coverage of this general theme, though about 'dynamic' languages instead of 'functional' (there's overlap).
Here's another link, discussing this topic: http://blog.ezyang.com/2010/05/design-patterns-in-haskel/
In his blog post Edward describes all 23 original GoF patterns in terms of Haskell.
When you try to look at this at the level of "design patterns" (in general) and "FP versus OOP", the answers you'll find will be murky at best.
Go a level deeper on both axes, though, and consider specific design patterns and specific language features and things become clearer.
So, for example, some specific patterns, like Visitor, Strategy, Command, and Observer definitely change or disappear when using a language with algebraic data types and pattern matching, closures, first class functions, etc. Some other patterns from the GoF book still 'stick around', though.
In general, I would say that, over time, specific patterns are being eliminated by new (or just rising-in-popularity) language features. This is the natural course of language design; as languages become more high-level, abstractions that could previously only be called out in a book using examples now become applications of a particular language feature or library.
(Aside: here's a recent blog I wrote, which has other links to more discussion on FP and design patterns.)
I would say that when you have a language like Lisp, with its support for macros, you can build your own domain-specific abstractions, abstractions which are often much better than the general idiom solutions.
Norvig's presentation alludes to an analysis they did of all the GoF patterns, and they say that 16 of the 23 patterns had simpler implementations in functional languages, or were simply part of the language. So presumably at least seven of them either were a) equally complicated or b) not present in the language. Unfortunately for us, they are not enumerated!
I think it's clear that most of the "creational" or "structural" patterns in GoF are merely tricks to get the primitive type systems in Java or C++ to do what you want. But the rest are worthy of consideration no matter what language you program in.
One might be Prototype; while it is a fundamental notion of JavaScript, it has to be implemented from scratch in other languages.
One of my favorite patterns is the Null Object pattern: represent the absence of something as an object that does an appropriate kind of nothing. This may be easier to model in a functional language. However, the real achievement is the shift in perspective.
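One functional reading of this, sketched in Scala (names invented): keep the do-nothing object, or shift perspective and model absence explicitly.
// A classic null object: the absent case does "an appropriate nothing".
trait Logger { def log(message: String): Unit }
object NullLogger extends Logger { def log(message: String): Unit = () }
// The shift in perspective: absence as an explicit value instead.
def greet(logger: Option[Logger]): Unit =
  logger.foreach(_.log("hello")) // silently does nothing when absent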
And even the OO design pattern solutions are language specific.
Design patterns are solutions to common problems that your programming language doesn't solve for you. In Java, the Singleton pattern solves the one-of-something (simplified) problem.
In Scala, you have a top-level construct called an object in addition to classes. It is lazily instantiated, and there is only ever one. You don't have to use the Singleton pattern to get a singleton. It's part of the language.
Patterns are ways of solving similar problems that get seen again and again, and then get described and documented. So no, FP is not going to replace patterns; however, FP might create new patterns, and make some current "best practices" patterns "obsolete".
As others have said, there are patterns specific to functional programming. I think the issue of getting rid of design patterns is not so much a matter of switching to functional, but a matter of language features.
Take a look at how Scala does away with the "singleton pattern": you simply declare an object instead of a class.
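For example (a sketch; the name is invented):
// The "singleton" is a language construct, not a pattern:
object Config {
  val appName = "demo"
  def describe: String = s"app=$appName"
}
println(Config.describe) // only one Config ever exists, created lazily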
Another feature, pattern matching, helps avoid the clunkiness of the visitor pattern. See the comparison here:
Scala's Pattern Matching = Visitor Pattern on Steroids
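A sketch of the idea (types invented for illustration):
// What would need a Visitor in classic OOP is a match on a sealed hierarchy:
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}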
And Scala, like F#, is a fusion of OO and functional. I don't know about F#, but it probably has these kinds of features.
Closures are present in functional languages, but they need not be restricted to them. They help with the delegator pattern.
One more observation. This piece of code implements a pattern: it's such a classic and it's so elemental that we don't usually think of it as a "pattern", but it sure is:
for(int i = 0; i < myList.size(); i++) { doWhatever(myList.get(i)); }
Imperative languages like Java and C# have adopted what is essentially a functional construct to deal with this: "foreach".
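In Scala, for instance, the whole loop collapses into a single call (myList and doWhatever being the names from the snippet above):
myList.foreach(doWhatever) // the iteration "pattern" becomes a library method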
The GoF design patterns are coding workaround recipes for OO languages that are descendants of Simula 67, like Java and C++.
Most of the "ills" treated by the design patterns are caused by:
statically typed classes, which specify objects, but are not themselves objects;
restriction to single dispatch (only the leftmost argument is used to select a method, the remaining arguments are considered as static types only: if they have dynamic types, it's up to the method to sort that out with ad-hoc approaches);
distinction between regular function calls and object-oriented function calls, meaning that object-oriented functions cannot be passed as functional arguments where regular functions are expected and vice versa; and
distinction between "base types" and "class types".
There isn't a single one of these design patterns that doesn't disappear in the Common Lisp Object System, even though the solution is structured in essentially the same way as in the corresponding design pattern. (Moreover, that object system predates the GoF book by years, and its precursors by well over a decade. Common Lisp became an ANSI standard the same year that the book was first published.)
As far as functional programming is concerned, whether or not the patterns apply to it depends on whether the given functional programming language has some kind of object system, and whether it is modeled after the object systems which benefit from the patterns. That type of object-orientation does not mix well with functional programming, because mutation of state is front and centre there.
Construction and non-mutating access are compatible with functional programming, and so patterns which have to do with abstracting access or construction could be applicable: patterns like Factory, Facade, Proxy, Decorator, and Visitor.
On the other hand, the behavioral patterns like State and Strategy probably do not directly apply in functional OOP because mutation of state is at their core. This doesn't mean they don't apply; perhaps they somehow apply in combination with whatever tricks are available for simulating mutable state.
I'd like to plug a couple of excellent but somewhat dense papers by Jeremy Gibbons: "Design patterns as higher-order datatype-generic programs" and "The essence of the Iterator pattern" (both available here: http://www.comlab.ox.ac.uk/jeremy.gibbons/publications/).
These both describe how idiomatic functional constructs cover the terrain that is covered by specific design patterns in other (object-oriented) settings.
You can't have this discussion without bringing up type systems.
The main features of functional programming include functions as first-class values, currying, immutable values, etc. It doesn't seem obvious to me that OO design patterns are approximating any of those features.
That's because these features don't address the same issues that OOP does... they are alternatives to imperative programming. The FP answer to OOP lies in the type systems of ML and Haskell... specifically sum types, abstract data types, ML modules, and Haskell typeclasses.
But of course there are still design patterns which are not solved by FP languages. What is the FP equivalent of a singleton? (Disregarding for a moment that singletons are generally a terrible pattern to use)
The first thing typeclasses do is eliminate the need for singletons.
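A sketch of why, using a Scala encoding of a typeclass (names invented): the canonical instance plays the role a Singleton would play in classic OOP, but it is resolved by the compiler rather than fetched from a global.
trait Show[A] { def show(a: A): String }
object Show {
  // the one canonical instance for Int, found via implicit resolution
  implicit val showInt: Show[Int] = (a: Int) => a.toString
}
def display[A](a: A)(implicit ev: Show[A]): String = ev.show(a)
println(display(42)) // "42"; no globally accessed singleton anywhere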
You could go through the list of 23 and eliminate more, but I don't have time to do that right now.
I think only two GoF design patterns were designed to introduce functional programming logic into a natural OO language: Strategy and Command.
Some of the other GoF design patterns can be modified by functional programming to simplify the design and keep the purpose.
Essentially, yes!
That is, whenever a pattern merely circumvents missing features (higher-order functions, stream handling...) that would ultimately facilitate composition.
The need to re-write patterns' implementation again and again can itself be seen as a language smell.
Besides, this page (AreDesignPatternsMissingLanguageFeatures) provides a "pattern/feature" translation table and some nice discussions, if you are willing to dig.
Functional programming does not replace design patterns. Design patterns can not be replaced.
Patterns simply exist; they emerged over time. The GoF book formalized some of them. If new patterns are coming to light as developers use functional programming languages that is exciting stuff, and perhaps there will be books written about them as well.
In the 2013 book titled "Functional Programming Patterns in Scala and Clojure", the author Michael Bevilacqua-Linn does a decent job comparing, and in many cases providing replacements for, the GoF patterns, and also discusses newer functional patterns like tail recursion, memoization, and lazy sequences.
This book is available on Amazon. I found it very informative and encouraging when coming from an OO background of a couple of decades.
OOP and the GoF patterns deal with state. OOP models reality to keep the code base as close as possible to the given requirements of reality. GoF design patterns are patterns that were identified to solve atomic real-world problems. They handle the problem of state in a semantic way.
As no state exists in real functional programming, it does not make sense to apply the GoF patterns. There are no functional design patterns in the same way there are GoF design patterns. Every functional design pattern is artificial in contrast to reality, since functions are constructs of math and not reality.
Functions lack the concept of time: they always return the same value whatever the current time is, unless time is part of the function's parameters, which makes it really hard to process "future requests". Hybrid languages mix these concepts, making them not real functional programming languages.
Functional languages are rising because of only one thing: the current natural restrictions of physics. Today's processors are limited in their speed of processing instructions due to physical laws. You see a stagnation in clock frequency but an expansion in processing cores. That's why parallelism of instructions becomes more and more important to increase the speed of modern applications. As functional programming by definition has no state and therefore no side effects, it is safe to process functions in parallel.
GoF patterns are not obsolete. They are at least necessary to model the real-world requirements. But if you use a functional programming language, you have to transform them into their hybrid equivalents. Finally, you have no chance of writing purely functional programs if you use persistence. For the hybrid elements of your program there remains the necessity of using GoF patterns. For any other element that is purely functional there is no necessity to use GoF patterns, because there is no state.
Just because the GoF patterns are not necessary for real functional programming does not mean that the SOLID principles should not be applied. The SOLID principles are beyond any language paradigm.
As the accepted answer said, OOP and FP each have their own specific patterns.
However, there are some patterns that are so common that I think every programming platform needs them. Here is an (incomplete) list:
Adapter. I can hardly think of a useful programming platform so comprehensive (and self-contained) that it does not need to talk to the world. If it is going to do so, an adapter is definitely needed.
Façade. Any programming platform that can handle big source code should be able to modularise. If you create a module for other parts of the program, you will want to hide the "dirty" parts of the code and give it a nice interface.
Interpreter. In general, any program is just doing two things: parsing input and printing output. Mouse inputs need to be parsed, and window widgets need to be printed out. Therefore, having an embedded interpreter gives the program additional power to customise things.
Also, I noticed that in a typical FP language, Haskell, there is something similar to the GoF patterns, but with different names. In my opinion this suggests they are there because there are common problems to solve in both FP and OOP languages:
Monad transformer and decorator. The former is used to add ability to an existing monad; the latter adds ability to an existing object.
I think that each paradigm serves a different purpose and as such cannot be compared in this way.
I have not heard that the GoF design patterns are applicable to every language. I have heard that they are applicable to all OOP languages. If you use functional programming then the domain of problems that you solve is different from OO languages.
I wouldn't use a functional language to write a user interface; one of the OO languages like C# or Java would make that job easier. But if I were writing in a functional language, I wouldn't consider using OO design patterns.
OOP and FP have different goals. OOP aims to encapsulate the complexities/moving parts of software components and FP aims to minimize the complexity and dependencies of software components.
However these two paradigms are not necessarily 100% contradicting and could be applied together to get the benefit from both worlds.
Even with a language that does not natively support functional programming like C#, you could write functional code if you understand the FP principles. Likewise you could apply OOP principles using F# if you understand OOP principles, patterns, and best practices. You would make the right choice based on the situation and problem that you try to solve, regardless of the programming language you use.
Some patterns are easier to implement in a language supporting FP. For example, Strategy can be implemented nicely using closures. However, depending on context, you may prefer to implement Strategy using a class-based approach; say, where the strategies themselves are quite complicated and/or share structure that you want to model using Template Method.
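For the simple case, a closure-based Strategy might look like this sketch in Scala (names invented):
// Each "strategy" is just a function value closing over its rate:
def discount(rate: Double): Double => Double =
  price => price * (1 - rate)
val seasonal  = discount(0.10)
val clearance = discount(0.50)
println(seasonal(100.0))  // 90.0
println(clearance(100.0)) // 50.0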
In my experience developing in a multi-paradigm language (Ruby), the FP implementation works well in simple cases, but where the context is more complicated the GoF OOP based approach is a better fit.
The FP approach does not replace the OOP approach, it complements it.
It does, in that a high-level functional PL (like OCaml, with classes, modules, etc.) certainly supersedes OOP imperative languages in type versatility and power of expression. The abstractions do not leak, you can express most of your ideas directly in the program. Therefore, yes, it does replace design patterns, most of which are ridiculously simplistic compared to functional patterns anyhow.
In functional programming, design patterns have a different meaning. In fact, most of the OOP design patterns are unnecessary in functional programming because of the higher level of abstraction and the use of HOFs (higher-order functions) as building blocks.
The principle of an HOF is that functions can be passed as arguments to other functions, and functions can be returned as values.
The paramount characteristic of functional programming, IMHO, is that you are programming with nothing but expressions -- expressions within expressions within expressions that all evaluate to the last, final expression that "warms the machine when evaluated".
The paramount characteristic of object-oriented programming, IMHO is that you are programming with objects that have internal state. You cannot have internal state in pure functions -- object-oriented programming languages need statements to make things happen. (There are no statements in functional programming.)
You are comparing apples to oranges. The patterns of object-oriented programming do not apply to functional programming, because functional programming is programming with expressions, and object-oriented programming is programming with internal state.
Brace yourselves.
It will aggravate many to hear me claim to have replaced design patterns and debunked SOLID and DRY. I'm nobody. Nevertheless, I correctly modeled collaborative (manufacturing) architecture and published the rules for building processes online along with the code and science behind it at my website http://www.powersemantics.com/.
My argument is that design patterns attempt to achieve what manufacturing calls "mass customization", a process form in which every step can be reshaped, recomposed and extended. You might think of such processes as uncompiled scripts. I'm not going to repeat my (online) argument here. In short, my mass customization architecture replaces design patterns by achieving that flexibility without any of the messy semantics. I was surprised my model worked so well, but the way programmers write code simply doesn't hold a candle to how manufacturing organizes collaborative work.
Manufacturing = each step interacts with one product
OOP = each step interacts with itself and other modules, passing the product around from point to point like useless office workers
This architecture never needs refactoring. There are also rules concerning centralization and distribution which affect complexity. But to answer your question, functional programming is another set of processing semantics, not an architecture for mass custom processes where 1) the source routing exists as a (script) document which the wielder can rewrite before firing and 2) modules can be easily and dynamically added or removed.
We could say OOP is the "hardcoded process" paradigm and that design patterns are ways to avoid that paradigm. But that's what mass customization is all about. Design patterns embody dynamic processes as messy hardcode. There's just no point. The fact that F# allows passing functions as a parameter means functional and OOP languages alike attempt to accomplish mass customization itself.
How confusing is that to the reader, hardcode which represents script? Not at all if you think your compiler's consumers pay for such features, but to me such features are semantic waste. They are pointless, because the point of mass customization is to make processes themselves dynamic, not just dynamic to the programmer wielding Visual Studio.
Let me give an example of the wrong premise you state.
The Adapter pattern that we have in OOP (for example, as a use-case adapter in Clean Architecture and DDD) can be implemented functionally via a monad variation such as Option.
You are not replacing them but transforming them.