Can any language be used to program in any paradigm? [closed] - oop

Can any language be used to program in any paradigm? For example, C doesn't have classes, but it is still possible to program in an OOP style. There are some languages (such as assembly) that I can't see using OOP in.

Yes, simply due to the fact that you can implement an interpreter for your $favorite $paradigm in the host language.
Practically, though, this is usually neither feasible, efficient, nor advisable.

C++ is ultimately assembly; you just have a compiler to write the assembly for you from a nicer description. So sure, you can do OOP in assembly, just as you can do OOP in C; it's just that a lot of the OO concepts end up being implemented with convention and programmer discipline rather than being enforced by the structure of the language. The result is that huge classes of bugs become possible that your language tools probably won't be very good at helping you find.
Similar arguments follow for most paradigm/language mismatches. Lots of object-oriented programs have been written in C this way, so it can even be a somewhat practical thing to do, not just an academic matter.
It can be a little harder when what you want is to remove restrictions rather than add them.
In purity-enforced languages such as Haskell and Mercury you can't suddenly break out object-oriented style packets-of-encapsulated-mutable-state in the middle of arbitrary pure code (at least not without using "all bets are off" features like unsafePerformIO in Haskell or promise_pure in Mercury to lie to the compiler, at which point your program may well completely fail to work unless you can wrap a pure interface around the regions in which you do this). However you can write whole programs in procedural or object-oriented style in these languages, by never leaving the mechanism they use to do IO.
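To make the "stay inside the IO mechanism" point concrete, here is a minimal Haskell sketch (the Counter type and its field names are made up for illustration) of an object-like packet of encapsulated mutable state that only exists inside IO:

    import Data.IORef

    -- A hypothetical "counter object": mutable state hidden behind a record
    -- of operations, usable only inside IO.
    data Counter = Counter
      { increment :: IO ()
      , current   :: IO Int
      }

    newCounter :: IO Counter
    newCounter = do
      ref <- newIORef 0
      pure Counter
        { increment = modifyIORef' ref (+ 1)
        , current   = readIORef ref
        }

    main :: IO ()
    main = do
      c <- newCounter
      increment c
      increment c
      print =<< current c   -- prints 2

Nothing here escapes IO, which is the sense in which a whole program can be written in this style without lying to the compiler.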
Likewise, if you consider the use of duck typing in dynamic languages to be a paradigm, it's pretty painful to get something similar in languages with static typing, but you can always find a way to represent your dynamic types as data. You again find yourself doing things with convention and reimplementation that you would get for free if you were really using a duck-typing language.
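As a rough sketch of what "represent your dynamic types as data" can look like (the Dyn type and the quack function below are invented purely for illustration), in Haskell you might write:

    -- A made-up Dynamic-ish type: the checks a dynamic language performs at
    -- call sites are re-implemented by hand as pattern matches.
    data Dyn = DInt Int | DStr String

    quack :: Dyn -> String
    quack (DInt n) = "int quacks: " ++ show n
    quack (DStr s) = "string quacks: " ++ s

    main :: IO ()
    main = mapM_ (putStrLn . quack) [DInt 3, DStr "hello"]

Every new "dynamic" type means adding a constructor and extending the matches by hand, which is exactly the convention-and-reimplementation cost described above.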
I'm pretty sure it would be hard to find a language (usable for writing general purpose programs) that can't be adapted to write code in any paradigm you like. The adaptation may not produce very efficient code (sometimes it can though; adapting C or assembly to any paradigm can usually be made pretty much as efficient as if you had a language tuned for that paradigm), and it will almost certainly be horribly inefficient in terms of programmer time.

No, not all languages can be used to program in any paradigm. However, the more popular ones - Python, C++, etc. - all allow you to choose how you want to program. Even PHP is adding OO support.

Related

How to use type classes in Haskell and how they differ from Java interfaces [closed]

I asked this question yesterday and the user #dfeuer advised me that, as a beginner, I should not define my own classes. His comment:
Haskell beginners shouldn't define their own classes at all. Learn to define functions, and types, and instances. These are the vast majority of actual Haskell code. As you do this, you'll get a good feel for what makes some classes really useful and others less so. You'll learn what makes some classes easy to use and others full of booby traps. Then when you find a good reason to actually define your own class, you'll go through a slew of bad class designs before you get good enough at it that only most of your attempts go badly. Designing good classes is really hard and rarely necessary.
I am curious, why is defining my own classes usually (for a beginner) a bad idea? What are these "booby traps" and why is it so hard to design good classes?
I thought classes are used to define interfaces to data, as I do in OOP. When I write Java code, I try to write as much code as possible with abstract classes and especially interfaces, so that when I need to change the data, most of my code remains unchanged and my methods are highly reusable. Another comment under that question, by #Carl, suggests that this is not how classes should be used:
Why did you create that class? It feels very weird to me - very much like something that someone used to OOP would do, rather than someone used to Haskell. It has too many parameters, they're connected in what feels like a very ad-hoc manner...
My fear is that without this OOP use of classes, any change in the data would break a huge part of the code. Is this fear unfounded? And if it is well-founded, why should I not use classes to define an interface to the data?
To be fair, I am a self-taught Java programmer and I have not read other people's code, so maybe I am doing Java wrong as well. I only read some books on how the language works and then built an application. I developed it for a year or so, and my whole style is a consequence of that experience alone. My style seems to work well for my needs, though, and so I assumed that this is how Java programming/OOP is indeed done.
I'm a relatively new (and amateur) Haskell enthusiast.
I'd say: just stop thinking you can reuse OOP knowledge, patterns, and other things in Haskell. Even terminology is not "reusable". Classes are not the same thing in OOP languages vs Haskell (well, they are called typeclasses in Haskell, actually).
This is an answer to a question of mine. It starts more or less like this:
It's true that typeclasses can express what interfaces do in OO languages, but this doesn't always make sense.
i.e. stating the inherent difference between two similar (only apparently similar!) concepts in Haskell vs OOP languages.
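For readers who haven't seen one, here is a tiny sketch of what is being compared (Describable, User, and the field names are invented for this example): a type class can superficially play the role of a Java interface, but for a single concrete type a plain function already does the same job with less machinery, which is part of why beginners are steered towards functions, types, and instances first.

    -- A made-up type class that superficially resembles a Java interface.
    class Describable a where
      describe :: a -> String

    newtype User = User { userName :: String }

    instance Describable User where
      describe u = "user " ++ userName u

    -- For one concrete type, a plain function does the same job
    -- without introducing any class at all.
    describeUser :: User -> String
    describeUser u = "user " ++ userName u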
Another interesting link is on Design Patterns in Haskell. It is very high level, and I still don't quite understand how some tools can be used in Haskell as an alternative to a specific OOP pattern. (Probably the fact that first-class functions remove the need for the strategy pattern is the only thing that is totally clear to me at the moment; see the sketch below.) However, I think it is a good read and, most of all, it should convince you that learning and coding in Haskell comes with a huge mental shift, and it is best approached by starting from zero. If you refuse that, you're not going to learn Haskell.
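A minimal sketch of that strategy-pattern point (sortUsing is just sortBy under an invented name, used here only to label the idea): the "strategy" is simply a function passed as an argument, with no interface or class hierarchy required.

    import Data.List (sortBy)
    import Data.Ord (Down (..), comparing)

    -- The "strategy" is an ordinary function argument.
    sortUsing :: (a -> a -> Ordering) -> [a] -> [a]
    sortUsing = sortBy

    main :: IO ()
    main = do
      print (sortUsing (comparing id)   [3, 1, 2 :: Int])  -- ascending: [1,2,3]
      print (sortUsing (comparing Down) [3, 1, 2 :: Int])  -- descending: [3,2,1]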
I'm not saying that you shouldn't use your brain to notice similarities between OOP languages and Haskell. You should just assume that even trying to build on those observations will handicap your learning process.
As regards Haskell specifically, sitting down and studying LYAH as if you were at school (with a laptop to try out the examples) is a good way to learn the basics very well. It is an easy-ish book to read, and it guides you by the hand.
For what it's worth, I think that Structure and Interpretation of Computer Programs is a good book to accompany learning a functional language, as it gives you a practical background to the shift of philosophy I mentioned earlier. You must do the exercises; doing them will force you towards that mental shift.
A final suggestion, which I would not take up before studying LYAH thoroughly, is to complete The Monad Challenges. I have to say, though, that LYAH already does a good job of teaching you what the Challenges ask you to think about; I found myself thinking "I already know this" and "why is the challenge taking such a roundabout route?".

Do naming rules in Kotlin matter? [closed]

An absolute pet hate of mine is naming rules for their own sake, when development environments are so good at letting users know what each item is.
As the title suggests: are there any pitfalls if a developer were to name all types, objects, variables, etc. in snake_case, specifically in Kotlin? (Ignoring the auto-generated names for binding and so on.)
Coding style, such as naming, doesn't matter to the compiler.
But it matters to humans — and as a couple of wise people once said, “programs must be written for people to read, and only incidentally for machines to execute.”  (They were probably exaggerating for effect, but I think there's still a grain of truth there.)
Consistency in naming means that you don't have to stop and think about whether to use underscores or capitals (or spaces or dashes or whatever inside backquotes); it makes classes and methods easier to find in your code as well as in libraries and frameworks; it plays better with Kotlin properties (which look for getXxx/setXxx/isXxx methods in the bytecode); it removes a source of disagreement among developers; it's less likely to cause problems with IDEs and frameworks and source-code tools, which tend to assume you're using standard naming conventions; and it makes the codebase easier for new developers to get up to speed with.
But, more than all those, code which doesn't follow conventions iS_hArDeR-tO_rEaD.  When things that work the same look the same, differences are easier to see.  The less time you spend deciphering names, the more time is left for understanding what the code is doing with them.  It's the same reason why we use consistent indentation and spacing and structure and design patterns.  With fewer surface differences, you can more easily see the underlying structures and patterns in the code, and deviations (and hence bugs) become more obvious.
Coding — by which I include debugging, maintaining, and enhancing as well as writing fresh code — is hard, and we humans are limited, so we should make things as easy for ourselves as possible.  Developing software is a constant battle against complexity; every little simplification helps.  You may think that using snake_case instead of camelCase is insignificant; but the mere fact you're asking about it here shows that it makes a difference!
The answers to this question and this question give many more (and better-argued) reasons why consistency is important.
(As it happens, I've spent many years using languages which prefer snake_case, and also those which prefer camelCase, and I definitely find the latter easier to read in context.  But that's a much less important consideration than consistency.)
Apart from arguing about it with other developers, and calls to library functions looking different from your own code, the language will work perfectly well and not care.

Imperative vs Declarative code [closed]

I'm trying to understand the difference between the imperative and declarative paradigms, because I have to classify Visual Basic .NET into the different paradigms. Beyond object-oriented, I guess it's also imperative or declarative. If someone could help me explain how to tell, I would appreciate it.
Imperative code is procedural: do this, then this, then that, then the other, in that order. It's very precise and specific in what you want. Most languages that we use for building end-user applications are imperative languages, including VB.Net. Language structures that indicate an imperative language include if blocks, loops, and variable assignments.
Declarative code just describes the result you want the system to provide, but can leave some actual implementation details up to the system. The canonical example of a declarative language is SQL (though it has some imperative features as well). You could also consider document markup languages to be declarative. Language structures that indicate a declarative language include set-based operations and markup templates.
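The same distinction can be sketched in any multi-paradigm-ish language; here is a rough illustration (in Haskell, only because it keeps both styles short; the function names are invented): the first version spells out the steps and carries a running total, the second just describes the result.

    -- Step-by-step flavour: walk the list, keep a running total.
    sumOfEvensImperative :: [Int] -> Int
    sumOfEvensImperative = go 0
      where
        go acc []       = acc
        go acc (x : xs)
          | even x      = go (acc + x) xs
          | otherwise   = go acc xs

    -- Declarative flavour: describe the result, not the steps.
    sumOfEvensDeclarative :: [Int] -> Int
    sumOfEvensDeclarative xs = sum [x | x <- xs, even x]

    main :: IO ()
    main = print (sumOfEvensImperative [1 .. 10], sumOfEvensDeclarative [1 .. 10])  -- (30,30)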
Here's the trick: while VB.Net would traditionally be considered imperative, as of the introduction of LINQ back in 2008, VB.Net also has significant declarative features that a smart programmer will take advantage of. These features allow you to write VB.Net code that looks a lot like SQL.
I classify C#/VB as multi-paradigm. They are imperative (IF, FOR, WHILE), declarative (LINQ), object-oriented, and functional (lambdas). I think that in today's landscape there are no more pure languages; they all have bits of several paradigms.
"The idea of a multiparadigm language is to provide a framework in which programmers can work in a variety of styles, freely intermixing constructs from different paradigms" http://en.wikipedia.org/wiki/Timothy_Budd
VB.NET never required LINQ to be considered declarative. In my understanding, declarative means that a programming language can speak English, i.e. business logic is written exactly as requirements say. The actual implementation may vary. This is called domain driven design (DDD) in some schools of thought.
For that matter, any object-oriented language can be seen as declarative. That does not take away its imperative features - those are used to make it as declarative as you want. And this is the power behind properly implemented OO concepts, applied with a concrete task in mind.

What is the difference between a software development pattern, a methodology (Agile, DSDM, etc.), and a paradigm (specifically object-oriented)? [closed]

What is the difference between a software development pattern and a methodology such as Agile, DSDM, etc.?
How is OO classed as both a methodology and a paradigm?
How can OO be applied to a methodology such as Agile if it is itself a methodology?
What's the difference between a paradigm and a methodology or a development pattern?
Thanks for any replies.
"When I use a word," Humpty Dumpty
said, in a rather scornful tone, "it
means just what I choose it to mean -
neither more nor less." "The question
is," said Alice, "whether you can make
words mean so many different things."
"The question is," said Humpty Dumpty,
"which is to be master - that's all."
Through the Looking Glass.
Well, not my answer, Lewis Carroll's.
Looking at only one of the questions you asked: "...how is OO classed as a methodology and a paradigm?"
That, at least, has a fairly simple answer:
Object Oriented Design is an analysis methodology.
Object Oriented Programming is an implementation paradigm.
OOD involves analyzing a problem in terms of objects and their interactions. OOP involves implementing a solution as a set of interacting objects.
"Agile" (I hate that name -- though I'll admit "eXtreme Programming" is worse) is really about project management. Just for example, you can apply Pair Programming about equally to something like assembly language or C as to a language that explicitly supports object oriented programming (though being a relatively new idea, it's probably used most often in conjunction with relatively new languages).
Edit: How I'd separate "methodology" from "paradigm" is fairly simple (at least in theory).
Paradigm is really just a fancy word for "example". If I'm following that example to a meaningful degree, the source code (for example) to the program should contain direct, (fairly) clearly defined results from having followed that example. Just for the obvious one, a class publicly derived from another would be a pretty obvious indication of OOP.
A methodology, by contrast, doesn't necessarily show a direct, definable result in the source code. Just for example, there's unlikely to be much in the source code to indicate whether it was developed using an "Agile" methodology. I might be able to take a guess if (for example) all the source code files contained comments indicating two authors, but (at best) it would be a rather indirect indication of one specific piece of the methodology.
I said in theory, because things can get a bit "fuzzy" at times. If I try hard enough, I can probably write pretty close to pure procedural code, even in a language like Smalltalk that favors objects almost exclusively. Likewise, if I try hard enough I can write OO code in something like C that doesn't really support it. In a case like this, the indications of following the paradigm will usually be harder to find or define than in a more straightforward case.
Methodology is about people. Paradigm is about software.
A paradigm is a way of thinking about a problem - so objects, a relational database, and the lambda calculus are all models for getting a problem into your head.
A methodology is a way of actually building something based on the paradigm.
If you like, the paradigm is the architect: what are we building? Should it be a suspension bridge or an arch? The methodology is the engineering: how many cables, how thick, which subcontractors.

Functional programming vs Object Oriented programming [closed]

I've been mainly exposed to OO programming so far and am looking forward to learning a functional language. My questions are:
When do you choose functional programming over object-oriented?
What are the typical problem definitions where functional programming is a better choice?
When do you choose functional programming over object oriented?
When you anticipate a different kind of software evolution:
Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.
When evolution goes the wrong way, you have problems:
Adding a new operation to an object-oriented program may require editing many class definitions to add a new method.
Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.
This problem has been well known for many years; in 1998, Phil Wadler dubbed it the "expression problem". Although some researchers think that the expression problem can be addressed with such language features as mixins, a widely accepted solution has yet to hit the mainstream.
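A tiny Haskell sketch of both directions (the Shape type and its operations are invented for illustration): adding a new operation is one new function, while adding a new kind of thing forces edits to every existing function.

    -- Two "things" and one operation on them.
    data Shape = Circle Double | Square Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Square s) = s * s

    -- Adding a new operation: one new function, existing code untouched.
    perimeter :: Shape -> Double
    perimeter (Circle r) = 2 * pi * r
    perimeter (Square s) = 4 * s

    -- Adding a new kind of thing (say, a Triangle constructor) is the painful
    -- direction here: every existing function (area, perimeter, ...) would
    -- need a new case.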
What are the typical problem definitions where functional programming is a better choice?
Functional languages excel at manipulating symbolic data in tree form. A favorite example is compilers, where source and intermediate languages change seldom (mostly the same things), but compiler writers are always adding new translations and code improvements or optimizations (new operations on things). Compilation and translation more generally are "killer apps" for functional languages.
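As a hedged sketch of that compiler-shaped use case (the Expr type and the passes below are invented, and far smaller than anything realistic): the tree type stays fixed while new operations over it accumulate.

    -- A miniature expression language: symbolic data in tree form.
    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr

    -- One operation on the tree: evaluation.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- Another operation, added without touching the type or eval: a simple
    -- algebraic-simplification pass that removes "+ 0" and "* 1".
    simplify :: Expr -> Expr
    simplify (Add (Lit 0) e) = simplify e
    simplify (Add e (Lit 0)) = simplify e
    simplify (Mul (Lit 1) e) = simplify e
    simplify (Mul e (Lit 1)) = simplify e
    simplify (Add a b)       = Add (simplify a) (simplify b)
    simplify (Mul a b)       = Mul (simplify a) (simplify b)
    simplify e               = e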
You don't necessarily have to choose between the two paradigms. You can write software with an OO architecture using many functional concepts. FP and OOP are orthogonal in nature.
Take for example C#. You could say it's mostly OOP, but there are many FP concepts and constructs. If you consider LINQ, the most important constructs that permit LINQ to exist are functional in nature: lambda expressions.
Another example, F#. You could say it's mostly FP, but there are many OOP concepts and constructs available. You can define classes, abstract classes, interfaces, deal with inheritance. You can even use mutability when it makes your code clearer or when it dramatically increases performance.
Many modern languages are multi-paradigm.
Recommended readings
As I'm in the same boat (OOP background, learning FP), I'd suggest you some readings I've really appreciated:
Functional Programming for Everyday .NET Development, by Jeremy Miller. A great article (although poorly formatted) showing many techniques and practical, real-world examples of FP on C#.
Real-World Functional Programming, by Tomas Petricek. A great book that deals mainly with FP concepts, trying to explain what they are, when they should be used. There are many examples in both F# and C#. Also, Petricek's blog is a great source of information.
Object Oriented Programming offers:
- Encapsulation, to
  - control mutation of internal state
  - limit coupling to internal representation
- Subtyping, allowing:
  - substitution of compatible types (polymorphism)
  - a crude means of sharing implementation between classes (implementation inheritance)
Functional Programming, in Haskell or even in Scala, can allow substitution through the more general mechanism of type classes. Mutable internal state is either discouraged or forbidden. Encapsulation of internal representation can also be achieved. See Haskell vs OOP for a good comparison.
Norman's assertion that "Adding a new kind of thing to a functional program may require editing many function definitions to add a new case." depends on how well the functional code has employed type classes. If Pattern Matching on a particular Abstract Data Type is spread throughout a codebase, you will indeed suffer from this problem, but it is perhaps a poor design to start with.
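A small sketch of the type-class alternative (Pretty, UserId, and Email are invented names): instead of pattern-matching on one big algebraic data type all over the codebase, each type carries its own instance and generic code programs against the class, so adding a new type means adding one instance rather than editing every match.

    -- A made-up class: per-type behaviour lives next to the type.
    class Pretty a where
      pretty :: a -> String

    newtype UserId = UserId Int
    newtype Email  = Email String

    instance Pretty UserId where
      pretty (UserId n) = "user #" ++ show n

    instance Pretty Email where
      pretty (Email e) = "<" ++ e ++ ">"

    -- Call sites stay generic; they never pattern-match on concrete types.
    report :: Pretty a => [a] -> String
    report = unlines . map pretty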
EDITED Removed reference to implicit conversions when discussing type classes. In Scala, type classes are encoded with implicit parameters, not conversions, although implicit conversions are another means of achieving substitution of compatible types.
If you're in a heavily concurrent environment, then pure functional programming is useful. The lack of mutable state makes concurrency almost trivial. See Erlang.
In a multiparadigm language, you may want to model some things functionally if the existence of mutable state is just an implementation detail, and thus FP is a good model for the problem domain. For example, see list comprehensions in Python or std.range in the D programming language. These are inspired by functional programming.