Do you think a functional language is good for applications that have a lot of business rules but very little computation? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I am convinced that functional programming is an excellent choice when it comes to applications that require a lot of computation (data mining, AI, NLP, etc.).
Is functional programming being used in any well known enterprise applications or open source projects? How did they incorporate business logic into the functional design?
Please disregard the fact that there are very few people using functional programming and that it's kind of tough.
Thanks

Functional programming languages like Clojure and Scala are good for pretty much anything. As for Haskell, an experienced Haskell programmer could probably substitute it for any language on any problem, efficiently or not. I don't know whether any functional programming language could be considered the /best/ of all languages for this specific problem, but rest assured it will work, and work very well at that.
Also, Clojure and Scala are implemented on the JVM. So technically they /are/ on an enterprise platform.

What are business rules if not functions? Application of rules can be expressed as applying a function to a set of data. It can also be combined with polymorphism, e.g. through generic functions (multiple dispatch can be handy, too) and inheritance.
Code is data, data is code, and both should be like water.
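To make that concrete, here is a minimal Python sketch (the rule names and the order shape are invented for illustration): each business rule is a plain function from data to data, and applying a rule set is just folding function application over the data.

```python
from functools import reduce

# Hypothetical business rules: each one is a plain function taking an
# order dict and returning a (possibly updated) order dict.

def apply_bulk_discount(order):
    # 10% off for orders of 100 items or more.
    if order["quantity"] >= 100:
        return {**order, "price": order["price"] * 0.9}
    return order

def apply_loyalty_discount(order):
    # A further 5% off for loyal customers.
    if order["loyal_customer"]:
        return {**order, "price": order["price"] * 0.95}
    return order

def apply_rules(rules, order):
    # Applying a rule set is folding function application over the data.
    return reduce(lambda acc, rule: rule(acc), rules, order)

rules = [apply_bulk_discount, apply_loyalty_discount]
final = apply_rules(rules, {"quantity": 120, "price": 1000.0, "loyal_customer": True})
print(round(final["price"], 2))  # 855.0
```

Adding, removing, or reordering rules is then just editing a list of functions, with no class hierarchy involved.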

From what I've seen, Scala looks like it handles normal Java just fine. Hence, anything that Java can handle for business, Scala could too.
On the .NET side, F# is another great example of a functional language that works fine for "business" applications. To put it simply, F# can do everything C# can do, and more, easier.
But for both of these languages, the "programming in the large" side tends to borrow from OOP. Not that there's anything wrong with mixing things, but perhaps that's not what you asked. If you want to stick to a more functional approach and, say, not use objects, you could run into a bit more hassle because the tooling support won't be on the same level. With languages that easily integrate with .NET/Java, that's not as big an issue.
As far as "is it wise?": That depends on the project, company, and other environmental factors. It seems that a common "enterprise pattern" is that code has to be extremely dumbed down so that anyone can work on it. In that case, you might get people involved who'd think that using a lambda makes it too difficult for others to understand.

But is it wise to use functional programming for a typical enterprise application where there are a lot of business rules but not much in terms of computation?
Business rules are just computation and you can often express them more succinctly and clearly using functional programming.
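As a sketch of that succinctness (a made-up eligibility domain, not from any real system), business rules can even be written as data: a first-match-wins list of (predicate, outcome) pairs.

```python
# Hypothetical eligibility rules expressed as data: (predicate, outcome)
# pairs evaluated first-match-wins. The thresholds are invented.
RULES = [
    (lambda c: c["age"] < 18,            "rejected: underage"),
    (lambda c: c["income"] < 20_000,     "rejected: income too low"),
    (lambda c: c["credit_score"] >= 700, "approved"),
    (lambda c: True,                     "manual review"),  # catch-all
]

def decide(customer):
    # Return the outcome of the first rule whose predicate matches.
    return next(outcome for predicate, outcome in RULES if predicate(customer))

print(decide({"age": 30, "income": 50_000, "credit_score": 720}))  # approved
```

Because the rules are just a list, they can be loaded, inspected, or reordered at run time without touching the evaluation logic.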
A growing number of enterprise apps are written in functional languages. Citrix XenDesktop and XenServer are built upon a tool stack written primarily in OCaml. The MyLife people search engine is written in OCaml. We are a small company but all of our LOB software (e.g. credit-card transactions, accounts, web analytics) is written in F#. Microsoft's ads on Bing use F# code. Perhaps the most obvious example is anyone using recent versions of C# and .NET, because they are almost certainly using functional concepts (e.g. delegates).
If you mean more exotic functional languages such as Clojure, Scala and Haskell then I believe some people are using them but I do not have any details myself.

More than a year ago I delved a bit into Haskell and also tried a few things that I would regard as a typical business problem (To put it bluntly, given a number of values, what is the correct response?). Hence, I would say, yes, you should be able to model a number of business problems with functional programming.
Personally, I couldn't find in Haskell the same obviousness with which I can push an OO-plus-functional approach in C#, but this could well be because I haven't done much with Haskell and a lot more with C#.
Then there is the question of how to communicate with a customer. My experience is that many of them think in strictly chronological terms, which kind of favours imperative programming. Even when going into models of state changes etc. you can lose the odd customer. Thinking in terms of function compositions and monads that may represent the chronological operations of the business could probably be beyond many, many customers.
Either way, you can find my business-y example here.

I assume when you talk about a lot of business rules you are thinking about application development, in the sense that you want to model a real-world workflow. Unlike vanilla programming, application development involves higher levels of responsibility (particularly for requirements capture and testing). If so, I strongly suggest you see whether you could apply domain-driven design. A natural choice for domain-driven design is an object-oriented approach. This, and the fact that a lot of programmers are decent at object-oriented programming, is one reason for its popularity in application development. However, this does not mean that real-world, big-scale projects are always written this way (read http://www.paulgraham.com/avg.html).

You might want to check out the iTasks system which is a library for the functional language Clean and is designed exactly to express workflow and business processes.

Related

How to use type classes in Haskell, and the difference from Java interfaces [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I asked this question yesterday and the user @dfeuer advised me that, as a beginner, I should not define my own classes. His comment:
Haskell beginners shouldn't define their own classes at all. Learn to define functions, and types, and instances. These are the vast majority of actual Haskell code. As you do this, you'll get a good feel for what makes some classes really useful and others less so. You'll learn what makes some classes easy to use and others full of booby traps. Then when you find a good reason to actually define your own class, you'll go through a slew of bad class designs before you get good enough at it that only most of your attempts go badly. Designing good classes is really hard and rarely necessary.
I am curious, why is defining my own classes usually (for a beginner) a bad idea? What are these "booby traps" and why is it so hard to design good classes?
I thought classes are used to define interfaces to data, as I do in OOP. When I write Java code, I try to write as much code as possible against abstract classes and especially interfaces, so that when I need to change the data, most of my code remains unchanged and my methods are highly reusable. Another comment under that question by @Carl suggests that this is not how classes should be used:
Why did you create that class? It feels very weird to me - very much like something that someone used to OOP would do, rather than someone used to Haskell. It has too many parameters, they're connected in what feels like a very ad-hoc manner...
My fear is that without this OOP use of classes, any change in data would break a huge part of the code. Is this fear unfounded? And if it is founded, why should I not use classes to define interfaces to data?
To be fair, I am a self-taught Java programmer and I did not read other people's code, so maybe I am doing Java wrong as well. I only read some books on how the language works and then built an application. I developed it for a year or so, and my whole style is a consequence of that experience alone. My style seems to work well for my needs, though, and thus I assumed it is how Java programming/OOP is indeed done.
I'm a relatively new (and amateur) Haskell enthusiast.
I'd say: just stop thinking you can reuse OOP knowledge, patterns, and other things in Haskell. Even terminology is not "reusable". Classes are not the same thing in OOP languages vs Haskell (well, they are called typeclasses in Haskell, actually).
This is an answer to a question of mine. It starts more or less like this:
It's true that typeclasses can express what interfaces do in OO languages, but this doesn't always make sense.
i.e. stating the inherent difference between two similar (only apparently similar!) concepts in Haskell vs OOP languages.
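One way to feel that difference without writing Haskell (a loose Python analogy, not an exact equivalent): Python's functools.singledispatch gives retroactive, type-directed dispatch on types you don't own, which is closer in spirit to a typeclass instance than to subclassing an OO interface. Real typeclasses are resolved statically and are more powerful; this is only an illustration of the "add behavior to existing types after the fact" idea.

```python
from functools import singledispatch

# Like declaring a "class Pretty a" with a default, then adding instances
# for types we never defined and cannot edit (int, bool, list, ...).

@singledispatch
def pretty(value):
    # Default "instance" for any type.
    return str(value)

@pretty.register(int)
def _(value):
    return f"{value:,}"  # thousands separators

@pretty.register(bool)
def _(value):
    # bool is more specific than int, so it wins dispatch.
    return "yes" if value else "no"

@pretty.register(list)
def _(value):
    # Recursive "instance" for lists of pretty-printable things.
    return "[" + ", ".join(pretty(v) for v in value) + "]"

print(pretty([1000000, True]))  # [1,000,000, yes]
```

The key point is that no existing type had to be modified or wrapped to gain the behavior, which is what an OO interface would require.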
Another interesting link is on Design Patterns in Haskell. It is very high level, and I still don't quite understand how some tools can be used in Haskell as an alternative to a specific OOP pattern. (Probably the fact that first-class functions remove the need for the strategy pattern is the only thing that is totally clear to me at the moment.) However, I think it is a good read and, most of all, it should convince you that learning and coding in Haskell comes with a huge mental shift, and it is best approached by starting from zero. If you refuse that, you're not gonna learn Haskell.
I'm not saying that you shouldn't use your brain to notice similarities between OOP languages and Haskell. You should just assume that even trying to build on those observations will handicap your learning process.
As regards Haskell specifically, sitting down and studying LYAH as you were at school (with a laptop to try out examples) is a good way to learn very well the basics. It is an easy-ish to read book, and guides you by hand.
For what it's worth, I think that Structure and Interpretation of Computer Programs is a good book to accompany learning a functional language, as it gives you a practical background to the shift of philosophy I mentioned earlier. You must do the exercises; doing them will force you towards that mental shift.
A final suggestion, which I would never apply before studying LYAH thoroughly, is to complete The Monad Challenges. But I have to say that LYAH already does a good job of teaching you what the Challenges ask you to think about. I found myself thinking "I already know this" and "why is the challenge going so roundabout?".

Imperative vs Declarative code [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm trying to understand the difference between the imperative and declarative paradigms, because I have to classify Visual Basic .NET into the different paradigms. Beyond object-oriented, I guess it's also imperative or declarative. If someone could help me explain how to tell them apart, I would appreciate it.
Imperative code is procedural: do this, then this, then that, then the other, in that order. It's very precise and specific in what you want. Most languages that we use for building end-user applications are imperative languages, including VB.Net. Language structures that indicate an imperative language include if blocks, loops, and variable assignments.
Declarative code just describes the result you want the system to provide, but can leave some actual implementation details up to the system. The canonical example of a declarative language is SQL (though it has some imperative features as well). You could also consider document markup languages to be declarative. Language structures that indicate a declarative language include set-based operations and markup templates.
Here's the trick: while VB.Net would traditionally be considered imperative, as of the introduction of LINQ back in 2008, VB.Net also has significant declarative features that a smart programmer will take advantage of. These features allow you to write VB.Net code that looks a lot like SQL.
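The same contrast can be sketched outside VB.NET. Here it is in Python, with invented data: an explicit loop (imperative) versus a comprehension that just describes the result (declarative, roughly what LINQ query syntax gives you).

```python
orders = [("alice", 120), ("bob", 80), ("alice", 40), ("carol", 200)]

# Imperative: spell out exactly how to walk the data, step by step,
# mutating an accumulator as you go.
big_spenders = []
for name, amount in orders:
    if amount >= 100:
        big_spenders.append(name)

# Declarative: describe the result you want; the iteration is implicit
# and the runtime is free to decide how to produce it.
big_spenders_decl = [name for name, amount in orders if amount >= 100]

print(big_spenders)       # ['alice', 'carol']
print(big_spenders_decl)  # ['alice', 'carol']
```

Both produce the same list; the difference is whether the code states the steps or the shape of the answer.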
I classify C#/VB as multi-paradigm. They are imperative (IF, FOR, WHILE), declarative (LINQ), object-oriented, and functional (lambdas). I think that in today's landscape there are no more pure languages; they each have bits of several paradigms.
"The idea of a multiparadigm language is to provide a framework in which programmers can work in a variety of styles, freely intermixing constructs from different paradigms" http://en.wikipedia.org/wiki/Timothy_Budd
VB.NET never required LINQ to be considered declarative. In my understanding, declarative means that a programming language can speak English, i.e. business logic is written exactly as requirements say. The actual implementation may vary. This is called domain driven design (DDD) in some schools of thought.
For this matter, any object-oriented language can be seen as declarative. Which does not take away its imperative functions - those are used to make it as declarative as you want it to be. And this is the power behind properly implemented OO concepts, with a concrete task in mind.

OO ABAP: When and Why? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
Months after my company upgraded from 4.6c to ECC 6.0, our team of programmers is still coding in the traditional 4.7c way. I'm eager to try out the new OO approach of ABAP, but much to my dismay most people here only emphasize getting things done in the shortest time frame given.
My question would be:
1) When did people in your organization actually start coding in OO ABAP?
2) Is there any significant reason people would want to code in an OO way? E.g. is CALL METHOD faster than a PERFORM statement?
1) When did people in your organization actually start coding in OO ABAP?
Most developers in my organisation learnt classic ABAP before the introduction of ABAP OO. They are mostly senior developers who refrain from learning proper OOP and OOD principles. They still use mostly procedural ABAP features.
Furthermore, we work in a legacy environment. The basis of our backend was built in the times of 4.6C. It is hard to bring proper OO design into legacy systems.
On the other hand, the procedural features still work. Some features like transactional database updates are mostly used from the procedural part of ABAP. You might know Update Function Modules or Subroutines exclusively for database transactions (those you can call IN UPDATE TASK). They are an integral part of the ABAP basic components. One can't deny that the procedural ABAP part is still needed.
2) Is there any significant reason people would want to code in an OO way? E.g. is CALL METHOD faster than a PERFORM statement?
How did you compare the runtime of CALL METHOD vs. PERFORM? Did you try the program RSHOWTIM, or have you done some performance tests from the ABAP Workbench? A single subroutine call does not differ significantly from a method invocation. However, called in a mass test, method invocations have slightly better performance (in the magnitude of microseconds).
On the whole, I recommend OOD and OOP with the same arguments as the users who posted before. But you have to keep in mind that senior developers familiar with the old ABAP world have to understand OO principles before they start writing ABAP OO.
Otherwise, your organization would not profit from ABAP OO - on the contrary. There are a lot of experienced ABAP developers without OO knowledge who were pushed to write classes. What they do is actually mimic procedural principles with classes (e.g. a class with static methods exclusively, as a substitute for function modules/subroutines).
Best of luck for your organisation for your challenge with ABAP OO! It is not about the language, it is more about getting OO principles into the mind of your staff.
I don't know about ABAP, but I have seen the same happen with VB developers moving to the .Net platform.
Programmers are comfortable in their old way of programming, and the old way still works. The new way of programming takes a lot of investment, not only from the company but also from people who have to move out of their comfort zone into uncertain territory. If your company is unwilling to invest in training and time for research this problem will get bigger because people will have to invest their own time, not everyone is willing to do that.
As Taurean already showed, there are convincing reasons to move to the OO way of doing things. They're mostly not about performance but about better decoupling of components in your system, making it far more maintainable.
But in my experience it's hard to convince people to move out of their comfort zone using reasonable arguments like that. It usually works better to show them the way. Slowly start using OO constructs in your own code and show people how clean it looks. This isn't something you'll achieve in months; it can take years to get people to think and work differently.
A team of experienced procedural developers is unlikely to start developing in an OO style anytime soon, unless a significant (and expensive) effort is made to train and coach them.
There are numerous reasons for that:
It takes about a year of immersion in a real OO environment (Smalltalk, not Java or C++) to get any good at OO development.
They cannot start from scratch, there is a lot of legacy code, and time pressure.
All their legacy code is not OO. It takes a significant effort to restructure.
The legacy code is not well structured and has lots of duplication and no unit tests. Changing it takes too much time, so they don't have time to fix things. (It's amazing what you can deduce from a project without knowing anything about it. :) ).
As a consequence, their new code will most likely be procedural but in classes and methods. They will not be impressed by the advantages of OO.
Some good reasons to switch to ABAP OO are:
ABAP OO is more explicit and simpler to use
ABAP OO has a stricter syntax check which removes a lot of the ambiguity in the ABAP language
Much of the new Netweaver functionality is only available using OO
Add this to the benefits listed by Taurean:
Better data encapsulation
Multiple Instantiation
Better Garbage Collection
Code Reuse through inheritance
Manipulate business objects through standard interfaces
Event Driven programming
Starting to use ABAP OO:
Start by calling some SAP standard OO functionality in your code: use the ALV classes rather than the function modules - the classes provide much more functionality. Try calling some of the standard methods in the CL_ABAP* or CL_GUI_FRONTEND* classes.
Write a report as a Singleton using local classes
Try designing a simple class in SE24 for something that is familiar to you (a file-handler for instance)
Resources:
Design Patterns in Object-Oriented ABAP by Igor Barbaric
Not yet using ABAP Objects? Eight Reasons why Every ABAP developer should give it a second look. by Horst Keller and Gerd Kluger
OO or not OO is not the question!
The question is where to use OO and where NOT to.
All advantages of the OO approach (OOD and OOP) can be fully exploited as long as you are in the customer namespace. However, every access to SAP standard functionality creates huge headaches.
Transactional integrity, object consistency and synchronisation, DB commits, screens (module pools and selection screens), authority checks, batch input: these are just some of the areas that are difficult (or even impossible) to integrate into an OO approach. Integration of SAP standard modules moves this to an even higher level of complexity.
User exits, events:
Most of the data is provided in the interface. Access to customer-specific data or customisation can be placed in objects.
Reports: Most of the data will be read by standard FMs. Specific data processing can be placed in objects and easily reused in other reports. SAP Enjoy controls can be wrapped with an object shell for easy use and reuse. Screens can NOT be placed in objects. :-(((
Core processing: Replacing SAP business object maintenance or SAP processes is not encouraged by SAP. But if this is the case, be patient and ready for a huge effort. Let's look closer: there are a lot of technical challenges - the singleton pattern, optimisation of DB access, locking, synchronisation, etc. Separation of technical and business functionality should be addressed. Objects are not really suitable for mass processing (high DB load), so mass processing should be addressed as well.
Below are some of the advantages of OOP, as you must already know:
Data Encapsulation
Instantiation
Code Reuse
Interfaces
Taking advantage of these, there are many important reasons for using OO ABAP whenever possible. Even if you do not want to use OO programming, using ABAP Objects is still a good idea, since it provides some features that procedural programming does not.
So here's what ABAP Objects offers you over Procedural ABAP:
Better encapsulation
Support for multiple instantiation
Better techniques for reusing code
Better interfaces
An explicit event concept
There are only two purposes for which procedural ABAP is found essential:
Encapsulation of classic screens in function modules.
When you want to make functions available to other systems but cannot make class methods available externally using XI server proxies. In this case, you have to use function modules.
Study them in detail here and you will see that you don't need any significant operational/demonstrative reason to convince yourself to move to OO ABAP, because all these reasons are already very significant.
To put it simply, use it when you have a relatively young team who are eager and ready to learn a new programming paradigm. In a senior-dominated team, adopting OO could be challenging - more so because the maintainability of the program goes down, and the org may need new employees to maintain the OO code.
From a design perspective, there is no question (as a lot of people have also said in this forum) that it's the best approach and has been in use for ages. SAP is way behind in terms of technology: their ECC DB design is still in 2NF; the standard 3NF is what they've called a '3D' database.
Not to deviate too much from the main topic - I believe you now have enough good answers to reach a decision.

First impressions of the Fantom programming language? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Has anyone here given the Fantom programming language a whirl? (pun intended).
My first impression:
I like the ability to have the code run on either the .NET or Java VM.
The syntax is nice and clean and does not try anything fancy.
I have a belief that "the library is the language" and the developers of Fan believe that their USP is their APIs:
But getting a language to run on both Java and .NET is the easy part - in fact there are many solutions to this problem. The hard part is getting portable APIs. Fan provides a set of APIs which abstract away the Java and .NET APIs. We actually consider this one of Fan's primary benefits, because it gives us a chance to develop a suite of system APIs that are elegant and easy to use compared to the Java and .NET counterparts.
Any other thoughts, first impressions, pros and cons?
It looks very inspired by Ruby. It says that it's RESTful, but I don't see how exactly. Compare with Boo, which is more mature yet similar in many ways (its syntax is Python-inspired, though).
The design decisions to keep generics and namespaces very limited are questionable.
I think their explanation sums it up:
"The primary reason we created Fan is to write software that can seamlessly run on both the Java VM and the .NET CLR. The reality is that many software organizations are committed to one or the other of these platforms."
It doesn't look better than all other non-JVM/.NET languages. In the absence of any information about them (their blog is just an error page), I see no reason why they would necessarily get this righter than others. Every language starts out fairly elegant for the set of things it was designed for (though I see some awkwardness in the little Fan code I looked at just now) -- the real question is how well it scales to completely new things, and we simply don't know that yet.
But if your organization has a rule that "everything must run on our VM", then it may be an acceptable compromise for you.
You're giving up an awful lot just for VM independence. For example, yours is the first Fan question here on SO -- a couple orders of magnitude fewer than Lisp.
For what problem is Fan the best solution? Python and Ruby can already run on both VMs (or neither), have big communities and big libraries, and seem to be about the same level of abstraction, but are far more mature.
I had never heard of Fan until a couple of weeks ago. From the web site, it is about one year old, so still pretty young and unproven. There are a couple of interesting points, however. First, the language tackles the problem of concurrency by providing an actor model (similar to Erlang) and by supporting immutable objects. Second, the language follows the example of Scala with type inference. Type inference allows the programmer to omit type declarations and have them computed by the compiler, providing the advantage of shorter and cleaner code as in a dynamically typed language while preserving the efficiency of a statically typed language. And last, it seems like a very fast language, nearly as fast as Java and really close to or beating the second-fastest language on the JVM: Scala. A benchmark showing the performance can be found at http://www.slideshare.net/michael.galpin/performance-comparisons-of-dynamic-languages-on-the-java-virtual-machine?type=powerpoint.
This is very interesting.
Java (or C#) was created in order to eliminate platform dependency by creating a JVM (or CLR) that compiles the code into machine-specific code at run time.
Now there is a language which is virtual-machine independent? Um... what?!
Again, this is a very interesting topic that might be the future :) - moving toward one universal language.
I think it looks like a great language feature-wise, but I'm not sure how useful it is. I don't think it is all that useful to target both .NET and the JVM. Java is already cross-platform, and .NET is too, with Mono. By targeting two VMs, you have to use only the APIs that are available on both. You can't use any of the great native APIs that are available for Java and .NET. I can't imagine that their API is anywhere near as complete as either Java's or .NET's.

OOP vs Functional Programming vs Procedural [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
What are the differences between these programming paradigms, and are they better suited to particular problems or do any use-cases favour one over the others?
Architecture examples appreciated!
All of them are good in their own ways - They're simply different approaches to the same problems.
In a purely procedural style, data tends to be highly decoupled from the functions that operate on it.
In an object oriented style, data tends to carry with it a collection of functions.
In a functional style, data and functions tend toward having more in common with each other (as in Lisp and Scheme) while offering more flexibility in terms of how functions are actually used. Algorithms tend also to be defined in terms of recursion and composition rather than loops and iteration.
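A small Python sketch of the three styles applied to one toy problem (summing the squares of the even numbers) may make the contrast concrete:

```python
from functools import reduce

data = [1, 2, 3, 4, 5, 6]

# Procedural: data and steps are separate; mutate an accumulator.
total = 0
for n in data:
    if n % 2 == 0:
        total += n * n

# Object-oriented: the data carries its operations with it.
class Numbers:
    def __init__(self, items):
        self.items = items

    def sum_even_squares(self):
        return sum(n * n for n in self.items if n % 2 == 0)

# Functional: compose small functions; no mutation, no explicit loop.
total_fn = reduce(lambda acc, n: acc + n * n,
                  filter(lambda n: n % 2 == 0, data), 0)

print(total, Numbers(data).sum_even_squares(), total_fn)  # 56 56 56
```

All three produce the same answer; what differs is where the logic lives and how it is expressed.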
Of course, the language itself only influences which style is preferred. Even in a pure-functional language like Haskell, you can write in a procedural style (though that is highly discouraged), and even in a procedural language like C, you can program in an object-oriented style (such as in the GTK+ and EFL APIs).
To be clear, the "advantage" of each paradigm is simply in the modeling of your algorithms and data structures. If, for example, your algorithm involves lists and trees, a functional algorithm may be the most sensible. Or, if, for example, your data is highly structured, it may make more sense to compose it as objects if that is the native paradigm of your language - or, it could just as easily be written as a functional abstraction of monads, which is the native paradigm of languages like Haskell or ML.
The choice of which you use is simply what makes more sense for your project and the abstractions your language supports.
I think the available libraries, tools, examples, and communities completely trumps the paradigm these days. For example, ML (or whatever) might be the ultimate all-purpose programming language but if you can't get any good libraries for what you are doing you're screwed.
For example, if you're making a video game, there are more good code examples and SDKs in C++, so you're probably better off with that. For a small web application, there are some great Python, PHP, and Ruby frameworks that'll get you off and running very quickly. Java is a great choice for larger projects because of the compile-time checking and enterprise libraries and platforms.
It used to be the case that the standard libraries for different languages were pretty small and easily replicated - C, C++, Assembler, ML, LISP, etc.. came with the basics, but tended to chicken out when it came to standardizing on things like network communications, encryption, graphics, data file formats (including XML), even basic data structures like balanced trees and hashtables were left out!
Modern languages like Python, PHP, Ruby, and Java now come with a far more decent standard library and have many good third party libraries you can easily use, thanks in great part to their adoption of namespaces to keep libraries from colliding with one another, and garbage collection to standardize the memory management schemes of the libraries.
These paradigms don't have to be mutually exclusive. If you look at python, it supports functions and classes, but at the same time, everything is an object, including functions. You can mix and match functional/oop/procedural style all in one piece of code.
What I mean is, in functional languages (at least in Haskell, the only one I studied) there are no statements! Functions are only allowed one expression inside them! But functions are first-class citizens: you can pass them around as parameters, along with a bunch of other abilities. They can do powerful things with few lines of code.
While in a procedural language like C, the only way you can pass functions around is by using function pointers, and that alone doesn't enable many powerful tasks.
In python, a function is a first-class citizen, but it can contain arbitrary number of statements. So you can have a function that contains procedural code, but you can pass it around just like functional languages.
Same goes for OOP. A language like Java doesn't allow you to write procedures/functions outside of a class. The only way to pass a function around is to wrap it in an object that implements that function, and then pass that object around.
In Python, you don't have this restriction.
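For instance (a toy Python sketch), functions can be stored, passed, returned, and built at runtime, much as in functional languages:

```python
def compose(f, g):
    # Returns a brand-new function built from two others.
    return lambda x: f(g(x))

def twice(f):
    # A function that takes a function and returns a function.
    return compose(f, f)

increment = lambda x: x + 1
add_four = twice(twice(increment))  # builds x -> x + 4 at runtime

print(add_four(10))                    # 14
print(list(map(add_four, [0, 1, 2])))  # [4, 5, 6]
```

In C you could imitate this with function pointers, but you could not conveniently build new functions like `add_four` at runtime.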
For GUIs, I'd say that the object-oriented paradigm is very well suited. The window is an object, the textboxes are objects, and the OK button is one too. On the other hand, stuff like string processing can be done with much less overhead, and therefore more straightforwardly, in the simple procedural paradigm.
I don't think it is a question of the language either. You can write functional, procedural, or object-oriented code in almost any popular language, although it might take some additional effort in some.
In order to answer your question, we need two elements:
Understanding of the characteristics of different architecture styles/patterns.
Understanding of the characteristics of different programming paradigms.
A list of software architecture styles/patterns is shown in the software architecture article on Wikipedia, and you can research them easily on the web.
In short and general, Procedural is good for a model that follows a procedure, OOP is good for design, and Functional is good for high level programming.
I think you should try reading the history on each paradigm and see why people create it and you can understand them easily.
After understanding them both, you can link the items of architecture styles/patterns to programming paradigms.
I think that they are often not "versus", but you can combine them. I also think that oftentimes, the words you mention are just buzzwords. There are few people who actually know what "object-oriented" means, even if they are the fiercest evangelists of it.
One of my friends is writing a graphics app using NVIDIA CUDA. The application fits in very nicely with the OOP paradigm and the problem can be decomposed into modules neatly. However, to use CUDA you need to use C, which doesn't support inheritance. Therefore, you need to be clever.
a) You devise a clever system which will emulate inheritance to a certain extent. It can be done!
i) You can use a hook system, which expects every child C of parent P to have a certain override for function F. You can make children register their overrides, which will be stored and called when required.
ii) You can use struct memory alignment feature to cast children into parents.
This can be neat but it's not easy to come up with future-proof, reliable solution. You will spend lots of time designing the system and there is no guarantee that you won't run into problems half-way through the project. Implementing multiple inheritance is even harder, if not almost impossible.
b) You can use a consistent naming policy and a divide-and-conquer approach to create the program. It won't have any inheritance, but because your functions are small, easy to understand, and consistently formatted, you don't need it. The amount of code you need to write goes up, and it's very hard to stay focused and not succumb to easy solutions (hacks). However, this ninja way of coding is the C way of coding: staying in balance between low-level freedom and writing good code. A good way to achieve this is to write prototypes using a functional language. For example, Haskell is extremely good for prototyping algorithms.
I tend towards approach b. I wrote a possible solution using approach a, and I will be honest, it felt very unnatural using that code.