Are KISS and YAGNI at odds with the trends towards increasingly more sophisticated patterns and practices like SOA, DDD, IoC, MVC, POCO, MVVM? [closed] - agile-processes

It seems to me that Agile methodologies encourage us to keep things simple and lean, and not to add complexity and sophistication until it's needed. But the pace and volume of technology change encourages the use of increasingly abstract, complex and sophisticated tools and patterns, with significant learning curves and significant investments of effort, to solve problems that we may not yet have (and may never encounter).

A car has an accelerator, a brake, and a steering wheel that can turn left or right: it's up to the driver to decide which to use when.

I'll keep my answer short and let the experts lay it out better...
I think that KISS applies to everything you listed. You mention increasing abstraction and complexity, which, I think, balance each other.
The systems we are developing today must be complex, because, most of the time, the solution to a complex problem is inherently complex. However, to keep things simple, we use abstraction. Even if our complex system is built with, say, eight layers, we can follow KISS by keeping each layer simple.
For instance, to pick an item or two off your list:
SOA is not complex because we can wrap service calls in a wrapper object. This object handles the connection and makes calls, which are pretty easy to do because they simply pass parameters on.
MVC is not complex because we clearly separate our logic. We have a simple controller for directing requests and setting up data, a simple model to represent our domain, and a simple view that displays whatever data is passed to it.
However, in both of these cases, the pattern as a whole (or the system, if you will) is complex and non-trivial. It is the fact that we consider small, simple parts one at a time, and then fit them together, that lets us maintain our mental model as we work.
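To make the SOA example concrete, here is a minimal sketch of the kind of service wrapper described above; the InventoryServiceClient name, endpoint, and response format are invented for illustration. The caller just passes parameters, and the connection details stay hidden inside one simple layer:

```java
// Hypothetical wrapper around a remote inventory service: callers pass plain
// parameters; connection handling and the wire format are hidden in here.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InventoryServiceClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public InventoryServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    /** Returns the stock level for a SKU, hiding the HTTP call behind a plain method. */
    public int getStockLevel(String sku) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/stock/" + sku))
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return Integer.parseInt(response.body().trim());
    }
}
```

Each layer stays simple on its own; the overall sophistication lives in how the simple parts compose.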

I agree with ChrisW's answer.
The idea is to stick with KISS and YAGNI as much as possible, but when the need arises and you need a sophisticated or complex solution, stand on the shoulders of giants and use proven patterns to guide you. These patterns and practices are meant to simplify your work; if using them is harder than the hack alternative, you should stick with the hack. Just make sure you take maintainability etc. into account.
As an example, when you build the 1st version of a website it may consist of one or two main functionalities and just a few pages. You probably don't need MVC for this (even though it might be nice to start that way).
BUT, after you add a few more features and you have dozens of pages to manage along with how to share functionality between them, it might become apparent that you need MVC to better structure your application.
Similarly, if from the get go you know you will have to deal with something like returning multiple views of a common piece of data, MVC simplifies your problem by laying out a pattern for you to follow.
In summary, YAGNI now, but if you need it later then KISS by using a known pattern / solution.

Sigh.
We must have increasingly sophisticated and abstract components to match the demand for increasingly sophisticated software.
Most of us have limited brain space. We must learn to cope with our limited brains by using more sophisticated abstractions.
The alternative is not using abstractions, limiting ourselves to machine code.
Please read http://www.cs.utexas.edu/~EWD/transcriptions/EWD01xx/EWD117.html
"In spite of all its deficiencies,
mathematical reasoning presents an
outstanding model of how to grasp
extremely complicated structures with
a brain of limited capacity. And it
seems worthwhile to investigate to
what extent these proven methods can
be transplanted to the art of computer
usage. In the design of programming
languages one can let oneself be
guided primarily by considering "what
the machine can do". Considering,
however, that the programming language
is the bridge between the user and the
machine --that it can, in fact, be
regarded as his tool-- it seems just
as important to take into
consideration "what Man can think". It
is in this vein that we shall continue
our investigations."

I'm going to make a subjective answer (so sue me). I think that if you program by acronyms then you are going to run into trouble.
At the end of the day you are trying to make money for a business, or hopefully yourself. As such each decision you make is an engineering decision based on cost, time and benefits. You have to evaluate the use of a technique on the cost of implementation, maintenance etc, and make the best choice.
I think the only fair answer is that the tools and techniques chosen have to match with the desired goal of the engineering.

It's a matter of the right tool for the right job. The problem is when architects and/or developers begin to believe that a particular methodology or technology is a "golden hammer." That is when things become religious, and religion and reason do not play nicely together ;)
Oh and by the way, "agile" does not necessarily mean you don't use some of the acronyms you mention, or some framework that implements them. Those decisions are usually made far in advance of implementing the sorts of things that developers have come to associate with agile, e.g. user stories, sprints, etc.

First off, the list of acronyms doesn't necessarily make sense as a group - there's not really much simpler than POCO, for example...
However, KISS and YAGNI are achieved most effectively, in many circumstances, by using concepts like IoC, MVC, and MVVM - provided you use the patterns correctly.
Patterns aren't complicated, in and of themselves. It may take a bit of learning to understand what the pattern is trying to accomplish, but often, a pattern exists purely to simplify either code, maintenance, or usability - and usually all of the above. This fits in perfectly with keep it simple, for example.

IMHO, you (generally) don't want to start out with a complicated design. Could this be a local method rather than a service? Do I need an IoC container yet? This is particularly relevant when it comes to design patterns.
However, as you test and refactor your code, certain patterns (such as IoC) will help you to achieve goals such as testability and DRY (Don't Repeat Yourself). If you know design patterns well, you can apply them at the appropriate time.
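As a rough sketch of how that plays out (the names below are invented), plain constructor injection keeps a business class testable long before you need an IoC container; the container only earns its keep once there are many such wirings to manage:

```java
// The report generator depends on an abstraction, not a concrete gateway,
// so tests can pass in a fake and production code can pass in the real thing.
interface PaymentGateway {
    boolean charge(String account, long cents);
}

class ReportGenerator {
    private final PaymentGateway gateway;

    ReportGenerator(PaymentGateway gateway) {   // dependency injected via constructor
        this.gateway = gateway;
    }

    String chargeAndReport(String account, long cents) {
        boolean ok = gateway.charge(account, cents);
        return ok ? "charged " + cents : "charge failed";
    }
}

class ReportGeneratorDemo {
    public static void main(String[] args) {
        // A hand-written fake stands in for the real gateway: no container needed yet.
        PaymentGateway alwaysSucceeds = (account, cents) -> true;
        ReportGenerator generator = new ReportGenerator(alwaysSucceeds);
        System.out.println(generator.chargeAndReport("acct-1", 500)); // prints "charged 500"
    }
}
```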

yes
-- this space intentionally left blank --

Related

How to go about designing a module?

How do you usually go about it when you need to design a module? Till now I've taken care of how easy it is to use, how intuitive its API is, extensibility, performance and things like that.
But what seems fairly simple and straight-forward to me might seem over-complicated for other users. Although it doesn't happen that often, it does happen sometimes to all of us (I hope).
Are there any questions that you should ask yourself before designing a class hierarchy/API/whatever before you proceed with coding, other than the issues I already mentioned?
If you believe the question is better suited for a different section on SO please feel free to migrate it, but I'd still like an answer.
Cheers.
Your question is a very good one, and one that has answers, but the answer is so complicated that it basically amounts to experience in programming.
There are general principles for making software, but I think that here, in this short answer, I can give you one concept that you can apply. Software is a representation of a domain (banking software is made to model the financial system; radar software is made to model the ideas and principles of radar detection). Software, therefore, is like a theory: it fits the current knowledge of your domain perfectly, and allows inferences and extensions. If more knowledge becomes available, the theory should be extended, polished or made more general to accommodate the new knowledge, while still remaining valid for the previous knowledge.
Hence, all the concepts about theories apply:
satisfy the requirements imposed by your knowledge in a unified framework that sounds homogeneous and well integrated.
be simple, but look for patterns that you may make more general, and spotlight these patterns for better integration.
don't be too simple. If your software does not fit the requirements, your theory is too limited and must be extended.
allow your software to accommodate new requirements; software is not cast in stone. It mutates and evolves, accommodating the new requirements, or losing functionality that is no longer needed.
So, software should be minimalistic but not too much, beautiful but practical.
When it comes to putting these directions into practice, I suggest you allow time to learn your domain. You can't model something that you don't understand. Learn the basics, start from something simple, then progressively refine it. You will occasionally notice that some things feel like they are in the wrong place. Ask yourself questions such as:
"who is responsible to do this operation?"
"Is this dependency logical and needed for this object to work, or is it just a spurious one due to bad code organization?"
"Is this high-level or low-level functionality?"
"Am I repeating this ?"
"can I change this object/layer/subsystem internally without the code outside knowing ?"
"can I extend this in the future without ruining or invalidating the past ?"
"can I test and probe this functionality easily for correct behavior ?"
"Is it easy and intuitive to understand and use ?"
"can I recombine what I already have easily and without touching to implement new behavior?"
"Is this functionality isolated so that I can show it to the outside world without the bulk of the rest of the code I manipulate ?"
You should also consider the SOLID principles.
And for responsibility assignment, apply the GRASP patterns.

Appreciating the value of good design [closed]

Having recently worked with a bunch of people (from two different companies) right out of school or with 1-2 years of experience, I was initially quite impressed with their knowledge of the various industry buzzwords, design patterns, etc. Furthermore, they each had a good understanding of OO design principles and the use of interfaces.
To cut a long story short…. In just a few days of working with them I found that things were not as they appeared.
Let me define some terms I’ll use here
Knowledge – Something you learn either in school or book or on the Internet etc.
Experience – The amount of time you’ve been doing something
Skill – Only gained through experience. That is acquiring skill (over time) and knowing how to apply the knowledge you have
What I found was that even though they knew this stuff, they really didn’t know how to apply this knowledge. You’d have all these patterns waving in your face but any code they had to write of their own accord had basic flaws with it. They could tell you the virtues of a certain design pattern and could come up with somewhat of an implementation but could not recognize basic flaws in the design.
Of course I had my fair share of the “One who knows not that he knows not –Confucius”.
Each night I’d spend a lot of time re-iterating everything that was said during the day trying to understand who was saying what and why, trying to figure out what I could do by way of examples during training or code review. But frankly I was quite puzzled.
After about 2-3 weeks I started to figure it out.
Anyway, the questions first
1. Have you experienced this sort of thing?
2. How did you (or do you) tackle this?
My conclusion was that either schools are doing a bad job or Google is their friend and they’re getting all this “knowledge” and think they know.
But I feel
In order to be able to recognize and appreciate good design one MUST write code that is well,… not so well designed. Struggle with it and then fix it to know the pain and therefore recognize good and bad design and appreciate it
Practice and Experience – you just can’t beat that. There is so much that experience (and the quality of experience) brings to the table that you just can’t match it with just knowledge or a little bit of experience.
Some other things I experienced:
“Why is this an interface and not a base class” – you’ll get all kinds of answers but none of them is the right reason.
Why this design pattern and not that, or forget design patterns for a minute and just design (they’re utterly lost – that’s when you see their real design coding skills)
Over-engineering – they don't recognize it and can't appreciate that it could become a maintenance nightmare as the system grows. I found this to be a big issue. It's as if everything has the potential to change. A simple process of sending an email has 3 classes in addition to the various classes in the .NET framework you'd use to send an email.
Using all the new features in the framework or language just because (I’ve even seen this in some of Microsoft’s source code for a certain framework for which source code is available)
So 10 years from now, everyone writing code is writing it using all the fancy framework or language features using all the possible design patterns, such that “legacy” code is well written and well designed. Or is it? What do you think?
Does anyone else feel that 10 years from now we'll just be sifting through a different kind of muck? Muck that's scattered across a dozen more code files than it used to be, because now we've got classes and so-called loosely coupled code, but it's just a different kind of mess and in fact harder to clean up?
Interesting deliberations. I have always felt that with time we are over-engineering our systems with all the patterns flying around. An extra layer of abstraction means more room for misunderstanding in the future. My personal approach is to keep things simple and only introduce complexity if it is required. Decouple if decoupling is required. Many of these design requirements flow into systems because we blindly put in the requirements document that the system should be maintainable, reliable and all the other *ables. It's also necessary to understand the degree to which we want these *ables and, more importantly, how they impact our budget and business value in both the shorter and longer term.
One important aspect is always to keep a very tight focus on business requirements, at every stage, both in terms of functionality and budget.
I completely agree that the newer breed of developers appear to be very knowledgeable when it comes to design patterns and the latest buzzwords like Hibernate, JSON, NAnt, Ajax, etc. On the other hand, I have found that even the best among them, those who can be considered star programmers, appear to have limited understanding and knowledge of what is really happening under the hood.
In the past I had several conversations with young guns who viewed Spring as a major innovation, trying to convince them that what this framework provides through reflection is the evolution of things like IDL, type libraries, COM and CORBA.
When it comes to design patterns and the terminology introduced by the Gang of Four, we all know that their proposed architectures had been in use for decades before, and a senior developer was using them almost intuitively without knowing the formal difference between a regular factory and an abstract one. There is no doubt, of course, that the formalization introduced by the DP movement was beneficial for the industry, although the recognition and successful implementation of patterns still (and probably always will) rely on the experience and talent of the developer, since this process cannot become purely mechanical and deterministic.
An additional point I have to make regarding newcomers to the field of software development is their inclination to spread their skill set very horizontally, trying to cover as many technologies as possible, as opposed to concentrating deeply on a specific domain and mastering it.

SOLID vs. YAGNI [closed]

One of the most frequent arguments I hear for not adhering to the SOLID principles in object-oriented design is YAGNI (although the arguer often doesn't call it that):
"It is OK that I put both feature X and feature Y into the same class. It is so simple why bother adding a new class (i.e. complexity)."
"Yes, I can put all my business logic directly into the GUI code it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements my code gets too cluttered I still can refactor for the new requirement. So your 'What if you later need to…' argument doesn't count."
What would be your most convincing arguments against such practice? How can I really show that this is an expensive practice, especially to somebody that doesn't have too much experience in software development.
Design is the management and balance of trade-offs. YAGNI and SOLID aren't conflicting: the former says when to add features, the latter says how, but they both guide the design process. My responses, below, to each of your specific quotes use principles from both YAGNI and SOLID.
It is three times as difficult to build reusable components as single use components. A reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
  — Robert Glass' Rules of Three, Facts and Fallacies of Software Engineering
Refactoring into reusable components has the key element of first finding the same purpose in multiple places, and then moving it. In this context, YAGNI applies by inlining that purpose where needed, without worrying about possible duplication, instead of adding generic or reusable features (classes and functions).
The best way, in the initial design, to show when YAGNI doesn't apply is to identify concrete requirements. In other words, do some refactoring before writing code to show that duplication is not merely possible, but already exists: this justifies the extra effort.
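A contrived sketch of that order of operations (the discount rule and class names are invented): inline the logic at each call site first, and only once the same purpose demonstrably shows up in more than one place, pull it out.

```java
// Before: the same discount rule is inlined in two places, which is fine at first (YAGNI).
class InvoiceService {
    long total(long amountCents, boolean loyalCustomer) {
        return loyalCustomer ? amountCents * 90 / 100 : amountCents; // 10% off, inlined
    }
}

class QuoteService {
    long estimate(long amountCents, boolean loyalCustomer) {
        return loyalCustomer ? amountCents * 90 / 100 : amountCents; // same rule, duplicated
    }
}

// After: the duplication now demonstrably exists, so extracting it is justified.
final class Discounts {
    private Discounts() {}

    static long applyLoyaltyDiscount(long amountCents, boolean loyalCustomer) {
        return loyalCustomer ? amountCents * 90 / 100 : amountCents;
    }
}
```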
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in.
Is it really the only user interface? Is there a background batch mode planned? Will there ever be a web interface?
What is your testing plan, and will you be testing back-end functionality without a GUI? What will make the GUI easy for you to test, since you usually don't want to be testing outside code (such as platform-generic GUI controls) and would rather concentrate on your project?
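One way to keep that testable, sketched here with made-up names: push the rule into a plain class that the GUI merely calls, so a test (or a future batch job or web front end) can exercise the rule without any UI at all.

```java
// Business rule lives in a plain object with no GUI dependency...
class ShippingCalculator {
    /** Free shipping over 50.00, otherwise a flat 4.99 (example rule). */
    long shippingCents(long orderTotalCents) {
        return orderTotalCents >= 5000 ? 0 : 499;
    }
}

// ...so the GUI layer only formats and displays, and a headless test can cover the rule.
class CheckoutScreen {
    private final ShippingCalculator calculator = new ShippingCalculator();

    String shippingLabel(long orderTotalCents) {
        long cents = calculator.shippingCents(orderTotalCents);
        return cents == 0 ? "Free shipping" : String.format("Shipping: %.2f", cents / 100.0);
    }
}
```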
It is OK that I put both feature X and feature Y into the same class. It is so simple; why bother adding a new class (i.e. complexity)?
Can you point out a common mistake that needs to be avoided? Some things are simple enough, such as squaring a number (x * x vs squared(x)) for an overly-simple example, but if you can point out a concrete mistake someone made—especially in your project or by those on your team—you can show how a common class or function will avoid that in the future.
If, in the unlikely case of new requirements, my code gets too cluttered I still can refactor for the new requirement. So your "What if you later need to..." argument doesn't count.
The problem here is the assumption of "unlikely". Do you agree it's unlikely? If so, you're in agreement with this person. If not, your idea of the design doesn't agree with this person's—resolving that discrepancy will solve the problem, or at least show you where to go next. :)
I like to think about YAGNI in terms of "half, not half-assed", to borrow the phrase from 37signals (https://gettingreal.37signals.com/ch05_Half_Not_Half_Assed.php). It's about limiting your scope so you can focus on doing the most important things well. It's not an excuse to get sloppy.
Business logic in the GUI feels half-assed to me. Unless your system is trivial, I'd be surprised if your business logic and GUI haven't already changed independently, several times over. So you should follow the SRP ("S" in SOLID) and refactor - YAGNI doesn't apply, because you already need it.
The argument about YAGNI and unnecessary complexity absolutely applies if you're doing extra work today to accommodate hypothetical future requirements. When those "what if later we need to..." scenarios fail to materialize, you're stuck with higher maintenance costs from the abstractions that now get in the way of the changes you actually have. In this case, we're talking about simplifying the design by limiting scope -- doing half, rather than being half-assed.
It sounds like you're arguing with a brick wall. I'm a big fan of YAGNI, but at the same time, I also expect that my code will always be used in at least two places: the application, and the tests. That's why things like business logic in UI code don't work; you can't test business logic separately from UI code in that circumstance.
However, from the responses you're describing, it sounds like the person is simply uninterested in doing better work. At that point, no principle is going to help them; they only want to do the minimum possible. I'd go so far as to say that it's not YAGNI driving their actions, but rather laziness, and you alone aren't going to beat laziness (almost nothing can, except a threatening manager or the loss of a job).
There is no answer, or rather, there is an answer neither you nor your interlocutor might like: both YAGNI and SOLID can be wrong approaches.
Attempting to go for SOLID with an inexperienced team, or a team with tight delivery objectives pretty much guarantees you will end up with an expensive, over-engineered bunch of code... that will NOT be SOLID, just over-engineered (aka welcome to the real-world).
Attempting to go YAGNI for a long-term project and hoping you can refactor later only works to an extent (aka welcome to the real-world). YAGNI excels at proofs of concept and demonstrators: getting the market/contract and then being able to invest in something more SOLID.
You need both, at different points in time.
The correct application of these principles is often not very obvious and depends very much on experience. Which is hard to obtain if you didn't do it yourself. Every programmer should have had experiences of the consequences of doing it wrong, but of course it always should be "not my" project.
Explain to them what the problem is; if they don't listen and you're not in a position to make them listen, let them make the mistakes. If you're too often the one having to fix the problems, you should polish your resume.
In my experience, it's always a judgment call. Yes, you should not worry about every little detail of your implementation, and sometimes sticking a method into an existing class is an acceptable, though ugly solution.
It's true that you can refactor later. The important point is to actually do the refactoring. So I believe the real problem is not the occasional design compromise, but putting off refactoring once it becomes clear there's a problem. Actually going through with it is the hard part (just like with many things in life... ).
As to your individual points:
It is OK that I put both feature X and feature Y into the same class. It is so simple; why bother adding a new class (i.e. complexity)?
I would point out that having everything in one class is more complex (because the relationship between the methods is more intimate, and harder to understand). Having many small classes is not complex. If you feel the list is getting too long, just organize them into packages, and you'll be fine :-). Personally, I have found that just splitting a class into two or three classes can help a lot with readability, without any further change.
Don't be afraid of small classes, they don't bite ;-).
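To illustrate with invented names: splitting one do-everything class into two small ones doesn't add much complexity; it mostly just gives each piece a name and a single reason to change.

```java
// Before: one class mixed feature X (parsing) and feature Y (persistence).
// class OrderHandler { Order parse(String csvLine) {...}  void save(Order o) {...} }

// After: two small classes, each trivially easy to read and to test on its own.
class OrderParser {
    Order parse(String csvLine) {
        String[] parts = csvLine.split(",");
        return new Order(parts[0], Long.parseLong(parts[1]));
    }
}

class OrderRepository {
    private final java.util.List<Order> store = new java.util.ArrayList<>();

    void save(Order order) {
        store.add(order); // stand-in for a real database call
    }
}

record Order(String id, long amountCents) {}
```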
Yes, I can put all my business logic directly into the GUI code; it is much easier and quicker. This will always be the only GUI and it is highly unlikely that significant new requirements will ever come in.
If someone can say "it is highly unlikely that significant new requirements will ever come in" with a straight face, I believe that person really, really needs a reality check. Be blunt, but gentle...
If, in the unlikely case of new requirements, my code gets too cluttered, I still can refactor for the new requirement. So your "What if you later need to ..." argument doesn't count.
That has some merit, but only if they actually do refactor later. So accept it, and hold them to their promise :-).
SOLID principles allow software to adapt to change - in both requirements and technical changes (new components, etc). Two of your arguments are for unchanging requirements:
"it is highly unlikely that significant new requirements will ever come in."
"If in the unlikely case of new requirements"
Could this really be true?
There is no substitute for experience when it comes to the various expenses of development. For many practitioners I think doing things in the lousy, difficult to maintain way has never resulted in problems for them (hey! job security). Over the long term of a product I think these expenses become clear, but doing something about them ahead of time is someone else's job.
There are some other great answers here.
Understandable, flexible and capable of fixes and improvements are always things that you are going to need. Indeed, YAGNI assumes that you can come back and add new features when they prove necessary with relative ease, because nobody is going to do something crazy like bunging irrelevant functionality in a class (YAGNI in that class!) or pushing business logic to UI logic.
There can be times when what seems crazy now was reasonable in the past - sometimes the boundary lines of UI vs business, or between different sets of responsibilities that should be in different classes, aren't that clear, or even move. There can be times when 3 hours of work is absolutely necessary in 2 hours' time. There are times when people just don't make the right call. For those reasons occasional breaks in this regard will happen, but they are going to get in the way of using the YAGNI principle, not be a cause of it.
Quality unit tests, and I mean unit tests not integration tests, need code that adheres to SOLID. Not necessarily 100%, in fact rarely so, but in your example stuffing two features into one class will make unit testing harder, breaks the single responsibility principle, and makes code maintenance by team newbies much harder (as it is much harder to comprehend).
With the unit tests (assuming good code coverage) you'll be able to refactor feature 1 safe and secure that you won't break feature 2, but without unit tests and with the features in the same class (simply out of laziness, as in your example) refactoring is risky at best, disastrous at worst.
Bottom line: follow the KIS principle (keep it simple), or for the intellectual, the KISS principle (keep it simple, stupid). Take each case on its merits; there's no global answer, but always consider whether other coders will need to read and maintain the code in the future, and the benefit of unit tests in each scenario.
tldr;
SOLID assumes you understand (at least somewhat) the future changes to the code, with respect to SRP. I would say that is being optimistic about the ability to predict.
YAGNI, on the other hand, assumes that most of the time you don't know the future direction of change, which is pessimistic about the ability to predict.
Hence it follows that SOLID/SRP asks you to form classes for the code such that each has a single reason for change, e.g. a small GUI change or a ServiceCall change.
YAGNI says (if you want to force-apply it in this scenario): since you don't know WHAT is going to change, and a GUI change may cause a GUI+ServiceCall change (and similarly a backend change may cause a GUI+ServiceCall change), just put all that code in a single class.
Long answer :
Read the book 'Agile Software Development, Principles, Patterns, and Practices'
Here is a short excerpt from it about SOLID/SRP:
"If,[...]the application is not changing in ways that cause the two responsibilities to change at different times, there is no need to separate them. Indeed, separating them would smell of needless complexity.
There is a corollary here. An axis of change is an axis of change only if the changes occur. It is not wise to apply SRP—or any other principle, for that matter—if there is no symptom."

The limit of OOP Paradigm in really complex system? [closed]

I asked a question previously about Dataset vs Business Objects
.NET Dataset vs Business Object : Why the debate? Why not combine the two?
and I want to generalize the question here: where is the proof that OOP is really suitable for very complex problems? Let's take an MMO game engine as an example. I'm not a specialist at all, but as I read this article, it clearly argues that OOP is far from enough:
http://t-machine.org/index.php/2007/11/11/entity-systems-are-the-future-of-mmog-development-part-2/
It concludes:
Programming well with Entity Systems is very close to programming with a Relational Database. It would not be unreasonable to call ES’s a form of “Relation Oriented Programming”.
So isn't OOP trying to get rid of something that is here to stay?
OOP is non-linear, relational is linear; both are necessary depending on the part of a system, so why try to eliminate the relational just because it isn't "pure" object? Is OOP an end in itself?
My question is not whether OOP is useful. OOP is useful; my question is rather why purists want to do "pure" OOP.
As the author of the linked post, I thought I'd throw in a couple of thoughts.
FYI: I started seriously (i.e. for commercial work) using OOP / ORM / UML in 1997, and it took me about 5 years of day to day usage to get really good at it IMHO. I'd been programming in ASM and non-OOP languages for about 5 years by that point.
The question may be imperfectly phrased, but I think it's a good question to be asking yourself and investigating - once you understand how to phrase it better, you'll have learnt a lot that's useful about how this all hangs together.
"So isn't OOP trying to get rid of something that is here to stay?"
First, read Bjarne's paper here: http://www.stroustrup.com/oopsla.pdf
IMHO, no-one should be taught any OOP without reading that paper (and re-reading after they've "learnt" OOP). So many many people misunderstand what they're dealing with.
IME, many university courses don't teach OOP well; they teach people how to write methods, and classes, and how to use objects. They teach poorly why you would do these things, where the ideas come from, etc. I think much of the mis-usage comes from that: almost a case of the blind leading the blind (they aren't blind in "how" to use OOP, they're just blind in "why" to use OOP).
To quote from the final paragraphs of the paper:
"how you support good programming techniques and good design techniques matters more than labels and buzz words. The fundamental idea is simply to improve design and programming through abstraction. You want to hide details, you want to exploit any commonality in a system, and you want to make this affordable.
I would like to encourage you not to make object-oriented a meaningless term. The notion of ‘‘object-oriented’’ is too frequently debased:
– by equating it with good,
– by equating it with a single language, or
– by accepting everything as object-oriented.
I have argued that there are–and must be–useful techniques beyond object-oriented programming and design. However, to avoid being totally misunderstood, I would like to emphasize that I wouldn’t attempt a serious project using a programming lan-
guage that didn’t at least support the classical notion of object-oriented programming. In addition to facilities that support object-oriented programming, I want –and C++ provides features that go beyond those in their support for direct expression of concepts and relationships."
Now ... I'd ask you ... of all the OOP programmers and OOP projects you've seen, how many of them can honestly claim to have adhered to what Bjarne requests there?
IME, less than the majority.
Bjarne states that:
"The fundamental idea is simply to improve design and programming through abstraction"
...and yet many people invent for themselves a different meaning, something like:
"The fundamental idea is that OOP is good, and everything-not-OOP is inferior"
Programmers who have programmed sequentially with ASM, then later ASMs, then Pascal, then C, then C++, and have been exposed to the chaos that was programming pre-encapsulation, etc., tend to have a better understanding of this stuff. They know why OOP came about and what it was trying to solve.
Funnily enough, OOP was not trying to solve every programming problem. Who'd have thought it, given how it's talked about today?
It was aimed at a small number of problems that were hugely dangerous the bigger your project got, and which it turned out to be somewhere between "good" and "very good" at solving.
But even some of them it isn't any better than merely "good" at solving; there are other paradigms that are better...
All IMHO, of course ;)
Systems of any notable complexity are not linear. Even if you worked really hard to make a system one linear process, you're still relying on things like disks, memory and network connections that can be flaky, so you'll need to work around that.
I don't know that anyone thinks OOP is the final answer. It's just a way of dealing with complexity by trying to keep various problems confined to the smallest possible sphere so the damage they do when they blow up is minimized. My problem with your question is that it assumes perfection is possible. If it were, I could agree OOP isn't necessary. It is for me until someone comes up with a better way for me to minimize the number of mistakes I make.
I just read your article about Entity Systems, which compares ES to OOP, and it is flagrantly wrong about several aspects of OOP. For example, when there are 100 instances of a class, OOP does not mandate that there be 100 copies of the class's methods loaded in memory; only one is necessary. Everything that ES purports to do "better" than OOP because it has "Components" and "Systems", OOP supports as well using interfaces and static classes (and/or singletons).
And OOP fits more naturally with the real world, as any real or imagined problem domain, consisting of multiple physical and/or non-physical items and abstractions and the relationships between them, can be modeled with an appropriately designed hierarchical OOP class structure.
What we try to do is put an OO style on top of a relational system. In C# land this gets us a strongly typed system, so that everything from end to end can be compiled and tested. The database has a hard time being tested, refactored, etc. OOP allows us to organize our application into layers and hierarchies, which relational doesn't allow.
Well you've got a theoretical question.
Firstly, let me agree with you that OOP is not a solve-all solution. It's good for some things and not good for others. But that doesn't mean it doesn't scale up. Some horribly complex and huge systems have been designed using OOP.
I think OOP is so popular because it deserves to be. It solves some problems rather wonderfully, it is easy to think in terms of Objects because we can do that without re-programming ourselves.
So until we can all come up with a better alternative that actually works in practice, I think OOP is a pretty good idea, and so are relational databases.
There is really no limit to what OOP can deal with - just as there is no real limit to what C can deal with, or assembler for that matter. All are Turing-complete, which is all you really need.
OOP simply gives you a higher-level way of breaking down the program, just as C is a higher-level than assembler.
The article about entity systems does not say that OO cannot do this - in fact, it sounds like they are using OOP to implement their Entities, Components, etc. In any complex domain there will be different ways of breaking it down, and using OOP you can break it down to the object/class level at some point. This does not preclude having higher-level conceptual frameworks which are used to design the OOP system.
In most situations the problem isn't the object-oriented approach; the problem is performance and the actual capabilities of the underlying hardware.
The OO paradigm approaches software development by providing us with a metaphor of the real world, where we have concepts that define the commonly accepted and expected properties and behaviour of real objects in the world. It is the way humans model things, and we're able to solve most problems with it.
In theory you can define every aspect of a game, system or whatever using OO. In practice, if you do, your program will simply behave too slowly, so the paradigm gets compromised by optimizations which trade the simplicity of the model for performance.
In that way, relational databases are not object oriented, so we build an object-oriented layer between our code and the database... and by doing so we lose some of the performance of the database and some of its expressiveness, because from the point of view of the OO paradigm a relational database is one full class, a very complex object that provides information.
From my point of view OO is an almost perfect approach in the theoretical sense of the word, as it maps closely to the way we humans think, but it doesn't fit well with the limited resources of computing hardware... so we take shortcuts. In the end, performance is far more important than theoretical organization or clarity, so these shortcuts become standards or usual practices.
That is, we are adapting the theoretical model to our current limitations. In the times of COBOL in the late 70's, object orientation was simply impossible... it would have implied too many aspects and too little performance, so we used a simplified approach; so simplified that you didn't have objects or classes, you had variables... but the concept was, at that time, the same. Groups of variables described related concepts, properties that today would fit into an object. Control sequences based on a variable's value were used to replace class hierarchies, and so on.
I think we've been using OOP for a long time and that we'll continue using it for a long time. As hardware capabilities improve we'll be able to un-simplify the model so that it becomes more adaptable. If I describe (almost) perfectly the concept of a cat (which involves a lot of describing for a lot of the concepts involved), that concept will be reusable everywhere... the problem here is not, as I've said, with the paradigm itself but with our limitations in implementing it.
EDIT: To answer the question about why use pure OO: every "science" wants a complete model to represent things. We have two physical models to describe nature, one at the microscopic level and one for the macroscopic, and we want to have just one, because it simplifies things and provides us with a better way to prove, test and develop things. With OO the same process applies. You can't analytically test and prove a system if the system doesn't follow a precise set of rules. If you are switching between paradigms in a program, then your program cannot be properly analyzed; it has to be dissected into each one, analyzed, and then analyzed again to see that the interactions are correct. It makes it a lot more difficult to understand a system, because in fact you have two or three systems that interact in different ways.
Guys, isn't the question more about ORM than OOP? OOP is a style of programming - the thing that actually gets compared is a Relational Database mapped onto objects.
OOP is actually more than just the ORM! It's also not just inheritance and polymorphism! It's an extremely wide range of design patterns and, above all, it's the way we think about programming itself.
Jorge: it's ok that you've pointed out the optimization part - what you didn't add is that this step should be done last, and in 99% of cases the slow part is not the OOP.
Now plain and simple: the OOP style with all the principles added to it (clean code, use of design patterns, not-too-deep inheritance structures, and let's not forget unit testing!) is a way to make more people understand what you wrote. That in turn is needed for companies to keep their business secure. That's also a recipe for small teams to have a better understanding with the community. It's like a common meta-language on top of the programming language itself.
It's always easier to talk about concepts from a purist's point of view. Once you're faced with a real-life problem, things get trickier and the world is no longer just black and white. Just as the author of the article is very thorough in pointing out that they're not doing OOP, the "OOP purist" tells you that OOP is the only way to go. The truth is somewhere in between.
There is no single answer, as long as you understand the different ways (OOP, entity systems, functional programming and many more) of doing things and can give good reason for why you're choosing one over the other in any given situation you're more likely to succeed.
About Entity Systems: it's an interesting concept, but it brings nothing really new. For example, it states:
OOP style would be for each Component to have zero or more methods, that some external thing has to invoke at some point. ES style is for each Component to have no methods but instead for the continuously running system to run its own internal methods against different Components one at a time.
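To make that quoted contrast concrete, here is a minimal sketch of the ES style it describes (all names invented for illustration): components are plain data with no behaviour, and a system loops over them applying its own logic.

```java
import java.util.ArrayList;
import java.util.List;

// Components are pure data, with no behaviour of their own.
class Position { double x, y; }
class Velocity { double dx, dy; }

class Entity {
    Position position = new Position();
    Velocity velocity = new Velocity();
}

// The "system" owns the behaviour and runs it against every component it cares about.
class MovementSystem {
    void update(List<Entity> entities, double dt) {
        for (Entity e : entities) {
            e.position.x += e.velocity.dx * dt;
            e.position.y += e.velocity.dy * dt;
        }
    }
}

class Demo {
    public static void main(String[] args) {
        List<Entity> world = new ArrayList<>();
        Entity ball = new Entity();
        ball.velocity.dx = 1.0;
        world.add(ball);
        new MovementSystem().update(world, 0.016); // one simulated frame
        System.out.println(ball.position.x);       // prints 0.016
    }
}
```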
But isn't that the same as Martin Fowler's anti-pattern called the "Anemic Domain Model" (which is, in fact, extensively used nowadays)?
So basically ES is an "idea on paper". For people to accept it, it MUST be proven with working code examples. There is not a single word in the article on how to implement this idea in practice. Nothing is said about scalability concerns. Nothing is said about fault tolerance...
As for your actual question, I don't see how the Entity Systems described in the article are similar to relational databases. Relational databases have no such thing as the "aspects" described in the article. In fact, the relational model, based on tabular data structures, is very limited when it comes to working with hierarchical data, for example. More limited than, say, object databases...
Could you clarify what exactly you are trying to compare and prove here? OOP is a programming paradigm, one of the many. It's not perfect. It's not a silver bullet.
What does "Relation Oriented Programming" mean? Data-centric? Well, Microsoft was moving towards more data-centric style of programming until they given up on Linq2Sql and fully focused on their O/RM EntityFramework.
Also relational databases isn't everything. There is many different kinds of database architectures: hierarchical databases, network databases, object databases ect. And those can be even more efficient than relational. Relational are so popular for nearly the same reasons why OOP is so popular: it's simple, very easy to understand and most often efficient enough.
Ironically, when OO programming arrived it made it much easier to build larger systems; this was reflected in the ramp-up of software to market.
Regarding scale and complexity, with good design you can build pretty complex systems.
See Eric Evans' Domain-Driven Design for principles and patterns on handling complexity in OO.
However, not all problem domains are best suited to all languages; if you have the freedom to choose a language, choose one that suits your problem domain, or build a DSL if that's more appropriate.
We are software engineers after all, unless there is someone telling you how to do your job, just use the best tools for the job, or write them :)

Formal Methods and Enterprises [closed]

So...
I teach formal methods in software engineering. I also teach "agile methodologies". Most people seem to think this is contradictory. I think it makes a lot of sense... I also work for a company, where we need to actually get things done :) While I can apply my earned skill points on "specification" on a day-to-day basis, my colleagues typically flee from the word "formal".
I used to think that this was due to the intrinsic way we learn how to program: we are usually driven to find a working solution, not to understand the problem. Then I thought it was due to the fact that most people in the formal community are not engineers, but mathematicians or computer scientists. Nowadays, I wonder if it is just because the formal-methods community hides behind some kind of "obfuscation" law, using all the available Unicode symbols, actively developing rude, unaesthetic tools, and laughing in the face of standards.
Yes, I've been moving from a "blame them" to a "blame us" perspective ;-)
So, my question is: do you use any kind of formal methods in your company? Have you introduced them, or were they pre-requisites? What techniques do you use to clear the fog of mathematics from people's fears and incite them to use formal methods? What do you think current tools are lacking for a more general usage?
The key to getting people to buy into any methods or methodologies is to show them how it solves problems they are having. If they can see it will make their lives better you have a much improved chance of getting them to adopt the techniques.
And if you can't show them that, perhaps you wanted to adopt the methods based on philosophy rather than practicality. Unless the others share your philosophy then you're not going to get anywhere. And perhaps you shouldn't.
Over the decades there have been a great many methodologies. Newer ones always address the shortcomings of the old ones, yet projects still get in trouble and fail. Why? Because the rock stars that come up with new methodologies are rock stars, and have made a new methodology precisely because they understand the underlying issues and how to apply them. Those who come after tend to blindly follow the recipe, and it doesn't work so well.
So I think the best thing is to teach about the underlying problems and then show how various methods attempt to deal with those problems. The differences in companies, projects, and teams is so great that no one methodology can be applied successfully to all combinations. Learning to choose an appropriate tool and apply it well is crucial.
Thank you for all the contributions. They are very insightful. Allow me to flame a bit (don't take it personally, though :-)
Most people seem to think that formal methods are just about program verification. Or critical systems. This may be true if we pursue the ultimate cliche: to prove we are doing the program right (vs. validation, which asks, as a contributor said, whether we are doing the right program).
But consider model finding/checking tools, such as Alloy. Learning to use a tool like this takes a negligible amount of time for anyone used to UML and OO. Still, it can give you immediate insight into your model. It usually takes no more than 10 minutes to find a counter-example over a small enough subset of the model one's trying to use (and that includes describing the model in Alloy in the first place).
Take requirements engineering as an example. One usually draws a lot of UML. Few people use OCL, though, and many business rules are informally annotated in natural language. Why? Time constraints?
Now consider the fact that the majority just use their gut feeling to prove that a model is satisfiable. Again, why? I can take the same amount of time (probably even less, since I don't need to care about drawing aesthetics) to write that model in Alloy and just check for satisfiability. And what kind of mathematics do I need to know? "Predicates"? Fancy name for IFs and booleans ;-) Quantifiers? Fancy names for ForEachs()...
What about big information systems? They don't need to be critical... Just try to analyze in your head a conceptual (not implementation!) diagram with over 600 classes. I see many people banging their heads against the wall over easy-to-make modeling mistakes, because they missed some constraint, or because the model allows stupid things to happen.
The fact is, one does not need to use formal approaches from head to tail. Granted, I could prove a whole application in Coq and certify that it is 100% compliant with some specification. This may be the computer scientist/mathematician approach.
Still, with a GTD philosophy, why can't I delegate some tasks to the computer and allow it to help improve my development? Is it really a matter of "time", or plain, simple lack of technical ability and will to learn/innovate?
Working with line of business IT development in an enterprise means having to transfer knowledge about the business from actual business people into the heads of developers. While I myself find abstract maths to be one of the greatest pastimes there is, it's a terrible communications tool. And communications is what it's all about. While I might conceivably have some success convincing IT people to embrace more abstract notations, I basically have no chance with the business people.
While there are some areas where I can see a role for formal methods in an enterprise (math- and logic-heavy specialist software, significant need for provable properties as in safety critical software) they provide little help with getting correct requirements on e.g. how to fulfil a customer order by issuing one or more supply orders to a set of possible external or internal providers.
I think the jury is still out on model based approaches and domain specific languages. I think they will succeed or fail depending on whether they provide quicker feedback from IT to the wishes and needs of the business side, and whether they presume business people will have to do any significant studying.
Technology is easy. Communication is hard. Formal methods may help us do things right, but those I've seen do nothing to help us do the right things. (Yes, these are cliches, but that's because they're inescapably and painfully true.)
I'm taking a course on 'Specification and Verification'. As part of the course structure we are doing the following:
1. Learning tools like PVS (Prototype Verification System) http://pvs.csl.sri.com/ and SMV (Software Modeling and Verification) http://www.cs.cmu.edu/~modelcheck/smv.html
2. Apart from that, we dissect accidents which happened because of software failures, e.g. the failure of Ariane 5.
I feel formal methods are more applicable in scenarios where the failure cost is greater than the design cost. And it seems apt to use them for software used in critical systems. I guess it is used in avionics, chip design, etc., and the automobile industry is also drafting it into practice.
I have tried to get people to embrace formal specification methods a few times (Z and Alloy) and have had the same experience that you have: most people, while feeling that they serve a useful purpose, are very uncomfortable using them for actual work.
Funny enough, the same people are more than happy to produce utterly useless UML diagrams in ginormous quantities.
I think there are two main reasons for this:
a.) Many developers are uncomfortable with the level of abstraction required by a formal approach. The fact that most entry-level mathematics education is all calculus and not discrete mathematics might have something to do with this.
b.) Formal methods require a very bottom-up design approach where you design your core model from the ground up, make it airtight, and then connect it up to the actual user requirements by providing an interface on top of it. Since we tend to have requirements drive development efforts, a top-down approach feels more natural, although it often leads to inconsistent models. It's like retrofitting a basement underneath your house after it has already been built.
Formal methods make no sense in systems where the cost of failure is low.
In a production web application, you've got multiple front-end boxes, multiple back-end boxes, multiple database boxes - if a program on any one of them fails, it's a non-event. Hardware is so cheap that you can build these systems for far less than the cost of formally specifying all your software.