What are the most practical object-oriented software modeling methods in real-world projects?

I want to develop a big project, but I really don't know the best way to model it. Do I even need to model my project?
What are the most practical OOP software modeling methods in real world projects? What are the best and most useful ones?

Many times it's necessary to capture the complex structure of classes in your OO system, so class diagrams from UML are used for modeling. You may also want to describe the interactions between classes; for that, sequence diagrams are useful. There are other UML diagrams as well, and each has its purpose.
If you are looking for an approach to modeling, try the Unified Process, a development method created by the authors of UML; it uses UML quite heavily and also describes how UML can be used.
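To make that concrete, here is a tiny invented class structure (the Customer/Order names are just examples): a class diagram would draw each class as a box, the Customer-Order association as an edge with multiplicities, and PaidOrder's inheritance as a generalisation arrow, while a sequence diagram would instead show the order of calls between instances.
<?php
// Hypothetical example; all names are made up for illustration.
class Order
{
    public function __construct(private float $total) {}
    public function total(): float { return $this->total; }
}

class PaidOrder extends Order {} // generalisation (inheritance) arrow in UML

class Customer
{
    /** @var Order[] */
    private array $orders = [];

    // Association edge in UML: Customer 1 --- 0..* Order
    public function addOrder(Order $order): void
    {
        $this->orders[] = $order;
    }
}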

Agile methodologies are what is currently recommended. If you add a slice of UML on top, so much the better :-)

Modeling (design) is the most important part of every project.
In fact, as time goes by, we sacrifice performance to gain a higher level of design.
Why is the .NET Framework popular (compared to older tools)? In most cases its libraries are wrappers over traditional Win32 APIs, which wastes some performance; in exchange it provides a better design, which makes it easier to learn and use.
So if your project has a good design, it will be easy to understand, develop, debug, maintain, and extend.
Another example is OOP itself, which has classes, interfaces... and a bunch of constructor/destructor calls. OOP concepts are borrowed from psychology and the way human beings see the world.
Here are two different concepts:
1) Design methodology
2) Project management methodology
There are many of each, and I won't label them good or bad; each fits a scenario.
Regarding design methodology, I prefer DDD (Domain-Driven Design), as it maps the industry's domain terminology and concepts. So if you face a decision about what to do when A->B->C happens, you can simply ask a domain professional, and they will tell you what is done in the real world. DDD is good for industries old enough to have accumulated wisdom. I'm not going to write more about design, since we don't know anything about your project.
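As a minimal sketch of what "mapping the domain terminology" can look like in code (the insurance-flavoured names and the lapse rule here are invented for the example):
<?php
// Hypothetical domain object: the class and method names mirror what a
// domain professional would actually say ("a policy lapses when the
// premium is overdue"), so the rule can be checked with them directly.
final class Policy
{
    private bool $lapsed = false;

    public function __construct(private DateTimeImmutable $premiumDueDate) {}

    public function lapseIfOverdue(DateTimeImmutable $today): void
    {
        // The business rule, stated in the domain's own words.
        if (!$this->lapsed && $today > $this->premiumDueDate) {
            $this->lapsed = true;
        }
    }

    public function isLapsed(): bool { return $this->lapsed; }
}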
Project management methodologies (like agile) are the way you build the building from the map (design). The goal of project management is to use resources (time, money, people...) optimally. This is done through a work breakdown structure and by making work as parallel as possible. The best-known project management methodology is the traditional one, in which we do everything in sequence, as civil engineers do (foundation, structure, walls...). This was good for many centuries, until recent decades (the software industry), since in traditional project management you know where you are, where you want to go, and how to get there. That way you can buy the furniture for a home that is still an empty lot!
The software industry has seen very rapid changes in tools and methods because it was new and no best practices had yet been distilled from thousands of failed projects. Often a project changes mid-course because of changes in development tools and frameworks. Another source of change is the scope of the project (where to go). Software is an intangible product, so you fall into the trap of time estimation easily. For software development, the best practice is iterative methodologies.
Iterative methodologies favor a working but incomplete solution that you make more complete in the next iteration, rather than a non-working, partially complete one. This has a time overhead; in exchange, you are sure the solution works, and if there is a problem, you find it at an early stage. That's why we have nightly builds!

The best is Visual Studio 2010 Ultimate; the others are too cumbersome. Otherwise use lightweight tools like yUML; see http://askuml.com for samples.

Related

What types of architecture or architecture layers are not suitable for automated testing?

I was recently tasked with developing automated build and release pipelines for one of my company's legacy applications. After some investigation, I keep hearing from managers and other devs that certain application layers and architectures don't lend themselves to automation, particularly automated testing. Therefore, it's often suggested I shouldn't bother trying to apply DevOps principles and AT unless I want to re-architect the whole app.
The commonly cited example would be PL/SQL backends or monolithic architectures. I asked why these were not suitable, but never got a really clear answer. Does anyone have any insight on when automated testing should not be used, in favor of dumping the old architecture and starting fresh?
Short answer - ones that suffer from testability issues.
For a more in-depth one, let's first admit that many software systems are untestable, or not immediately testable, so the effort of "trying to apply DevOps principles and AT" is far greater than the ROI. One notorious example is Google's reCAPTCHA, which causes some pain for automation testing folks (like me). The devs are actually right to say that it will be a "re-architect the whole app" journey, as testability is highly related to other key software qualities such as encapsulation, coupling, cohesion, and redundancy.
"The commonly cited example would be PL/SQL backends or monolithic architectures."
Now, that is totally not the case. The first one is more data-centric and requires a deeper understanding, but there are solutions for it as well. As for single-tiered software applications: one can argue that, in contrast to mSOA (microservices), monolithic applications are much easier to debug and test. Since a monolithic app is a single indivisible unit, you can run end-to-end tests much faster and more easily.
Put simply: if your app is highly testable, it is highly usable. If the architecture and design were aligned to very, very specific company needs, then no wonder it is usable only up to a point.
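To illustrate the point above about end-to-end testing a monolith, here is a rough PHPUnit-style sketch; the AppKernel bootstrap and the order/billing services are invented for the example:
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical end-to-end test: because a monolith is a single
// deployable unit, one test can drive the whole flow in-process,
// with no network stubs or service mesh to orchestrate.
final class PlaceOrderEndToEndTest extends TestCase
{
    public function testOrderFlowFromCartToInvoice(): void
    {
        $app = AppKernel::bootForTesting(); // invented bootstrap helper

        $orderId = $app->orderService()->placeOrder(42, 'ABC-1');
        $invoice = $app->billingService()->invoiceFor($orderId);

        $this->assertSame('ABC-1', $invoice->lineItems()[0]->sku());
    }
}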

Why was CakePHP designed to use inheritance over composition even though it's mostly considered a bad design?

CakePHP applications made in our company tend to become unmaintainable as they grow more complex. I figured that one specific reason is inheritance, which makes functions in child classes depend heavily on their parent classes and vice versa (the template method pattern). Why is CakePHP designed this way, and why is it not friendly to Dependency Injection, Strategy, or Factory patterns?
There is no such bad design in the framework as you claim. Sure, there are probably things that could be done better, but I would like to see a more substantial critique, including solid arguments and examples. I assume you're not using the framework as it was intended.
Let me quote the first paragraph from this page.
According to Eric Evans, Domain-driven design (DDD) is not a technology or a methodology. It’s a different way of thinking about how to organize your applications and structure your code. This way of thinking complements very well the popular MVC architecture. The domain model provides a structural view of the system. Most of the time, applications don’t change, what changes is the domain. MVC, however, doesn’t really tell you how your model should be structured. That’s why some frameworks don’t force you to use a specific model structure, instead, they let your model evolve as your knowledge and expertise grows.
You're not showing code (for a reason?) so I guess your problem comes from stuffing everything into the table objects in src/Model/Table/ or doing something similar.
But you're totally free to create a folder structure like
/src/Service
/src/Model/Domain
and then simply instantiate services as you need them in your controller actions. A service could be for example \App\Service\User\Registration and using objects from App\Model\Domain\User.
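A minimal sketch of what that could look like (the register() method and its internals are assumptions for the example, not CakePHP API):
<?php
// src/Service/User/Registration.php (hypothetical service class)
namespace App\Service\User;

use App\Model\Domain\User; // the domain object mentioned above

class Registration
{
    public function register(string $email, string $password): User
    {
        // validation, password hashing, persistence... all of it kept
        // out of the controller and out of the table classes.
        return new User($email);
    }
}

// In a controller action, you simply instantiate and use it:
//   $registration = new \App\Service\User\Registration();
//   $user = $registration->register($email, $password);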
I agree that the framework in fact doesn't provide any recommendation or template structure for how this could look. For exactly this topic there is a discussion going on here. Because of the lack of such a structure, I've started working on a plugin that provides one. The plugin doesn't require, but suggests, the use of DI containers for the people who want them.
Given the whole fancy topic around DI and DDD, so far I would say there is not one way to get things right, but different paths, as long as the code is easy to maintain. And honestly, as long as this goal is achieved, I really don't care what you call it. :) I think many people tend to make this topic too academic instead of simply trying to be practical.
Not everybody even needs that structure. It depends on whether you're building a RAD CRUD application or a more complex app. Not every application needs a DDD approach. There are so many shades of gray when it comes to designing the business layer that no matter how the framework did it, somebody would always complain about it.
I personally have almost never missed a DI container in CakePHP, not even in the biggest project, a hospital management solution with more than ~560 database tables, and it just worked well.
I would suggest you ask a more specific question about how you structured your code, show your structure and code, and ask for advice on how to improve it, instead of blaming the tool you're using without providing context.
Unfortunately, CakePHP v3 cannot compare to Zend 3/Laminas, Symfony, or Laravel. It is 7-8 years behind the other frameworks. If you have been using Cake for years, or it is your first and last framework, it is normal not to realise that. But if you have to use it after Zend 3... Cake seems like a really bad ecosystem.
Bad documentation
Bad ORM
Poor Routing system
Bad Templating engine
Bad idea to mix Data Mapper and Active Record
DIC is totally missing
Components - not good but not terrible
...
And many more things that should not be underestimated, like the lack of GOOD tutorials, plugins/addons/packages.
The above issues make developers follow bad practices that add a lot of technical debt.
If you care only that it works, but not how it works and why it is bad, Cake will fit you OK.
Cake cannot scale as well as Symfony/Laminas if you are doing a big project (yes, AWS/GC can help with scaling a lot of things, but not with scaling source code).
Cake doesn't allow rapid development the way Laravel/Symfony do for a decent-sized project.
I'm wondering who, and WHY, would start a new project today using Cake, as it has zero benefits over the other frameworks.
Probably only devs who have used only Cake for the last decade and do not want to start learning new technologies, or devs who think SOLID is just fancy hype with zero benefits, like design patterns, DRY, and KISS.
The CakePHP framework handles interaction with databases using Active Record, which means there is high coupling between the business layer and the database layer. This has negative effects on unit testing, and because of that the framework is not friendly to Dependency Injection. The same issue affects the Factory pattern: the high coupling mentioned before makes it more difficult to use simulated objects (mocks) in unit testing.
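To make the coupling concrete, here is a hedged sketch of the two styles (the class names are invented, not CakePHP's own):
<?php
// Active Record style: the entity itself knows how to reach the
// database, so a unit test of anything touching save() drags a real
// connection along with it.
class Invoice
{
    public function save(): void
    {
        // talks to the database connection directly
    }
}

// Injected-dependency style: persistence sits behind an interface,
// so a test can pass in a fake instead of a database.
interface InvoiceRepository
{
    public function save(Invoice $invoice): void;
}

class BillingService
{
    public function __construct(private InvoiceRepository $invoices) {}

    public function bill(Invoice $invoice): void
    {
        $this->invoices->save($invoice); // easy to simulate in a unit test
    }
}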
Hope it helps!
Alberto

Can good Object-Oriented Design be formalised as good relational database design has been?

In the database world, we have normalisation. You can start with a design, crank the steps and end up with a normal form of the database. This is done on the basis of the semantics of the data and can be thought of as a series of design refactorings.
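(As a tiny invented example of cranking those steps, sketched as data structures: the repeating group of items is split into its own structure, and since a customer's city depends only on the customer, not on the whole order, it moves into a customers table.)
<?php
// Unnormalised: one row mixes order data, a repeating list of items,
// and customer_city, which depends only on the customer.
$orders = [
    ['order_id' => 1, 'customer' => 'Ann', 'customer_city' => 'Oslo',
     'items' => ['apple', 'pear']],
];

// After normalisation, each fact is stored exactly once:
$customers  = [['name' => 'Ann', 'city' => 'Oslo']];
$orders     = [['order_id' => 1, 'customer' => 'Ann']];
$orderItems = [['order_id' => 1, 'item' => 'apple'],
               ['order_id' => 1, 'item' => 'pear']];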
In object-oriented design, we have the SOLID principles and various other ad hoc guidelines toward good design.
Do you think it is possible to define the equivalent of normal forms for OO, such that a series of refactoring steps could move a procedural piece of code (or poorly factored OO design) into a correct (in some well-defined sense) formulation with the same functionality?
(NB. Happy to make this community wiki)
It is a possibility, but highly unlikely.
Context
First, in the days when the Relational Model came out, people who worked in IT were more educated and they esteemed standards. Computer resources were expensive, and people were always looking for the best way to use those resources. People like Codd and Date were giants in an industry where people were high tech.
Codd did not invent Normalisation; we were normalising our non-relational databases long before Relational came along. Normalisation is a theory and practice, published as the Principle of Full Normalisation. We were normalising our programs too; we considered accidental duplication of a subroutine (method) a serious error. Nowadays it is known as Never Duplicate Anything or Don't Repeat Yourself, but the recent versions do not acknowledge the sound academic theory behind them, and therefore their power goes unrealised.
What Codd did (among many things) was define formal Normal Forms specifically for Relational Databases. And these have progressed and been refined since then. But they have also been hijacked by non-academics for the purpose of selling their gear.
The database modelling that was invented by Codd and Chen, and finished by Brown, had a solid grounding. In the last 25 years it has achieved standardisation and has been further refined and progressed by many others who had solid grounding.
The World Before OO
Let's take the programming world before OO. We had many standards and conventions for modelling our programs, as well as for language- and platform-specific implementation. Your question simply would not apply in those days. The entire industry understood deeply that database design and program design were two different sciences, and used different modelling methodologies for them, plus whatever standards applied. People did not discuss if they implemented standards; they discussed the extent to which they complied with standards. They did not discuss if they modelled their data and programs; they discussed the extent to which they did. That is how we put men on the Moon, notably in 1969.
Dawn of OO
OO came along and presented itself as if no other programming language or design methodology existed before it. Instead of using existing methodologies and extending or changing them, it denied their existence. So, not surprisingly, it has taken 20 years to formulate the new methodologies from scratch and slowly progress them to the point of SOLID and Agile, which is not mature; the reason for your question. It is telling that more than twenty such methodologies have flashed up and died during that time.
Even UML, which could have been an outright winner, applicable to any programming language, suffered the same disease. It tried to be everything to everyone while denying that mature methodologies existed.
Demise of the Industry
With the advent of MS and the attitude of "anyone can do anything" (implication: you do not need formal education or qualifications), that quality and pride of profession has been lost. People now invent things from scratch as if no one on the planet has ever done it before. The IT industry today is very low tech. You know, but most people reading these pages do not, that there is one Relational Modelling methodology, and one Standard. They do not model; they implement. Then re-implement. And re-implement. Re-factoring, as you say.
OO Proponents
The problem was that the people who came up with these OO methods were not giants among professionals; they were simply the most vocal of an un-academic lot. Famous due to publishing books, not due to peer acknowledgement. Unskilled and unaware. They had One Hammer in their toolkit, and every problem looked like a nail. Since they were not formally educated they did not know that actually database design and program design are two different sciences; that database design was quite mature, had strongly established methodologies and standards, and they simply applied their shiny new hammer to every problem, including databases.
Therefore, since they were ignoring both programming methodologies and database methodologies, reinventing the wheel from scratch, those new methodologies have progressed very slowly, and with assistance from a similar crowd, without a sound academic basis.
Programs today have hundreds of methods that are not used. We now have programs to detect that. Whereas with the mature methodologies, we prevent that. Thin client was not a goal to be achieved, we had a science that produced it. We now have programs to detect "dirty" data and to "clean" it. Whereas in the upper end of the database market, we simply do not allow "dirty" data into the database in the first place.
I accept that you see database design as a series of re-factorings; I understand what you mean. To me it is a science (methodology, standards) that eliminates ever having to re-factor. Even the acceptance of re-factoring is a loud signal that the older programming methodologies are unknown, and that the current OO methodologies are immature. The danger, what makes it annoying to work with OO people, is that the methodology itself fosters confidence in the One Hammer mentality, and when the code breaks, they have not one leg to stand on; when the system breaks, the whole system breaks; it is not one small piece that can be repaired or replaced.
Take Scott Ambler and Agile. Ambler spent 20 years publicly and vociferously arguing with the giants of the database industry against Normalisation. Now he has Agile, which, although immature, has promise. But the secret behind it is Normalisation. He has switched tracks. And because of his past wars, he cannot come out and declare that honestly, and give others due credit, so it remains a secret, and you are left to figure out Agile without its fundaments being declared.
Prognosis
That is why I say, given the evidenced small progress in the OO world over the last 20 years; the 20 or so OO methodologies that have failed; the shallowness of the approach; it is highly unlikely that the current OO methodologies will achieve the maturity and acceptance of the (singular) database design methodology. It will take at least another 10 years, more likely 20, and by then OO will have been overtaken by some replacement.
For it to be a possibility two things need to happen:
The OO proponents need formal tertiary education. A good grounding in the science of programming. Sure, anyone can do anything, but to do great things, we need a great grounding. That will lead to the understanding that re-factoring is not necessary, that it can be eliminated by science.
They need to break their denial of other programming methodologies and standards. That will open the door to either building OO on top of that, or taking the fundaments of that and merging it into OO. That will lead to a solid and complete OO methodology.
Real World OO
Obviously I speak from experience. On our large projects we use the mature analysis and design methodologies: one for the database and another for the functions. When we get to the code-cutting stage, we let the OO team use whatever they like, for their objects only, which usually means UML. We have no problems with architecture or structure or performance or bloatware or One Hammer or hundreds of unused objects, because all of that was taken care of outside OO. And later, during UAT, we have no problems finding the source of bugs or making the required changes quickly, because the entire system has a documented structure; the blocks can be changed.
I think this is an interesting question, because it presumes that Codd's Normal Forms are actually the definition of "correct" design. Not trying to start a flame war with that statement, but my point is that the very good reasons many DBs aren't fully normalized (e.g. join performance) lead me to think that the real-world equivalent of normalization in OO space is probably design patterns or (as you said) SOLID. In both cases you're talking about idealized guidelines that have to be applied with a suitably critical eye, rather than slavishly followed as dogma.
Not only do I fully agree with Paul, but I will go a step further.
Models are just that: only models. The normalization models used by relational databases are only one approach to storing and managing data. In fact, note that while RDBMSs are common for data manipulation operations (the standard CRUD), we have now evolved the data warehouse for consolidation, analysis, and reporting. And it most definitely does NOT adhere to the normalization models found in DML land.
Now we also have Google with their BigTable architecture, and Apache with Hadoop. These newer modeling systems reflect a change in the landscape, driven by the idea of the DISTRIBUTED database. Normalization need not apply for this club either.
We can apply a successful model only to the point at which it becomes not-so-successful, or is supplanted by a model which better suits the needs of the designer. Note the many ways we humans have modelled our universe through physics/astronomy, what have you. Modelling attempts to describe a system in discrete terms, but as the system, or the needs of the system, change, so must the model.
OOP is and has been a very, very successful way to model computer applications. However, the needs of the application designer are different from those of the database designer. Most of the time, there is a point at which the designer of an application must consider that his program will be interacted with by humans. Unlike the database designer, whose work will (mostly) be expected to interact with other code, the programmer's job is to take the machine and make it accessible to a much more random human being. This art does not map quite so well to standards like normalization.
All that said, n-tier, MVC, MVVM, and other paradigms DO establish some guidelines. But in the end, the problem space of application design is usually not as easy to fit into such discrete modelling steps as a relational database.
Wow. Apologies for the length. If this is a breach of etiquette here, do let me know...

Significant Challengers to OOP

From what I understand, OOP is the most commonly used paradigm for large-scale projects. I also know that some smaller subsets of big systems use other paradigms (e.g. SQL, which is declarative), and I also realize that at lower levels of computing OOP isn't really feasible. But it seems to me that usually the pieces of higher-level solutions are almost always put together in an OOP fashion.
Are there any scenarios where a truly non-OOP paradigm is actually a better choice for a large-scale solution? Or is that unheard of these days?
I've wondered this ever since I've started studying CS; it's easy to get the feeling that OOP is some nirvana of programming that will never be surpassed.
In my opinion, the reason OOP is used so widely isn't so much that it's the right tool for the job. I think it's more that a solution can be described to the customer in a way that they understand.
A CAR is a VEHICLE that has an ENGINE. That's programming and real world all in one!
It's hard to comprehend anything that can fit the programming and real world quite so elegantly.
Linux is a large-scale project that's very much not OOP. And it wouldn't have a lot to gain from it either.
I think OOP has a good ring to it because it has associated itself with good programming practices like encapsulation, data hiding, code reuse, modularity, etc. But these virtues are by no means unique to OOP.
You might have a look at Erlang, created by Joe Armstrong.
Wikipedia: "Erlang is a general-purpose concurrent programming language and runtime system. The sequential subset of Erlang is a functional language, with strict evaluation, single assignment, and dynamic typing."
Joe Armstrong: "Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."
The promise of OOP was code reuse and easier maintenance. I am not sure it delivered. I see things such as .NET as being much the same as the C libraries we used to get from various vendors. You can call that code reuse if you want. As for maintenance: bad code is bad code, and OOP did not help.
I'm the biggest fan of OOP, and I practice OOP every day.
It's the most natural way to write code, because it resembles real life.
Though I realize that the indirection OOP introduces might cause performance issues.
Of course, that depends on your design, the language, and the platform you chose (systems written in garbage-collected languages such as Java or C# might perform worse than systems written in C++, for example).
I guess in Real-time systems, procedural programming may be more appropriate.
Note that not all projects that claim to be OOP are in fact OOP. Sometimes the majority of the code is procedural, or the data model is anemic, and so on...
Zyx, you wrote, "Most of the systems use relational databases ..."
I'm afraid there's no such thing. The relational model will be 40 years old next year and has still never been implemented. I think you mean, "SQL databases." You should read anything by Fabian Pascal to understand the difference between a relational dbms and an SQL dbms.
" ... the relational model is usually chosen due to its popularity,"
True, it's popular.
" ... availability of tools,"
Alas without the main tool necessary: an implementation of the relational model.
" support,"
Yup, the relational model has fine support, I'm sure, but it's entirely unsupported by a dbms implementation.
" and the fact that the relational model is in fact a mathematical concept,"
Yes, it's a mathematical concept, but, not being implemented, it's largely restricted to the ivory towers. String theory is also a mathematical concept but I wouldn't implement a system with it.
In fact, despite its being a mathematical concept, it is certainly not a science (as in computer science), because it lacks the first requirement of any science: that it is falsifiable. There is no implementation of a relational DBMS against which we can check its claims.
It's pure snake oil.
" ... contrary to OOP."
And contrary to OOP, the relational model has never been implemented.
Buy a book on SQL and get productive.
Leave the relational model to unproductive theorists.
See this and this. Apparently you can use C# with five different programming paradigms, C++ with three, etc.
Software construction is not akin to fundamental physics. Physics strives to describe reality using paradigms which may be challenged by new experimental data and/or theories. Physics is a science which searches for a "truth", in a way that software construction doesn't.
Software construction is a business. You need to be productive, i.e. to achieve some goals for which someone will pay money. Paradigms are used because they are useful to produce software effectively. You don't need everyone to agree. If I do OOP and it's working well for me, I don't care if a "new" paradigm would potentially be 20% more useful to me if I had the time and money to learn it and later rethink the whole software structure I'm working in and redesign it from scratch.
Also, you may be using another paradigm and I'll still be happy, in the same way that I can make money running a Japanese food restaurant and you can make money with a Mexican food restaurant next door. I don't need to discuss with you whether Japanese food is better than Mexican food.
I doubt OOP is going away any time soon, it just fits our problems and mental models far too well.
What we're starting to see though is multi-paradigm approaches, with declarative and functional ideas being incorporated into object oriented designs. Most of the newer JVM languages are a good example of this (JavaFX, Scala, Clojure, etc.) as well as LINQ and F# on the .net platform.
It's important to note that I'm not talking about replacing OO here, but about complementing it.
JavaFX has shown that a declarative
solution goes beyond SQL and XSLT,
and can also be used for binding
properties and events between visual
components in a GUI
For fault tolerant and highly
concurrent systems, functional
programming is a very good fit,
as demonstrated by the Ericsson
AXD301 (programmed using Erlang)
So... as concurrency becomes more important and FP becomes more popular, I imagine that languages not supporting this paradigm will suffer. This includes many that are currently popular such as C++, Java and Ruby, though JavaScript should cope very nicely.
Using OOP makes the code easier to manage (as in modify/update/add new features) and understand. This is especially true with bigger projects. Because modules/objects encapsulate their data and operations on that data it is easier to comprehend the functionality and the big picture.
The benefit of OOP is that it is easier to discuss (with other developers, management, or the customer) a LogManager or OrderManager, each of which encompasses specific functionality, than to describe 'a group of methods that dump the data in a file' and 'the methods that keep track of order details'.
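For example, a minimal LogManager sketch (the details are invented) shows how the name alone conveys what would otherwise be described as 'a group of methods that dump the data in a file':
<?php
// Hypothetical: everything about how log lines reach the file lives
// behind one named concept that anyone in a meeting can refer to.
class LogManager
{
    public function __construct(private string $file) {}

    public function info(string $message): void
    {
        file_put_contents($this->file, date('c') . " INFO $message\n", FILE_APPEND);
    }
}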
So I guess OOP is helpful especially with big projects but there are always new concepts turning up so keep on lookout for new stuff in the future, evaluate and keep what is useful.
People like to think of various things as "objects" and classify them, so no doubt that's why OOP is so popular. However, there are some areas where OOP has not gained much ground. Most of the systems use relational databases rather than object databases. Even though the latter hold some notable records and are better for some types of tasks, the relational model is usually chosen due to its popularity, the availability of tools, support, and the fact that the relational model is in fact a mathematical concept, contrary to OOP.
Another area where I have never seen OOP is the software building process. All the configuration and make scripts are procedural, partially because of the lack of the support for OOP in shell languages, partially because OOP is too complex for such tasks.
Slightly controversial opinion from me but I don't find OOP, at least of a kind that is popularly applied now, to be that helpful in producing the largest scale software in my particular domain (VFX, which is somewhat similar in scene organization and application state as games). I find it very useful on a medium to smaller scale. I have to be a bit careful here since I've invited some mobs in the past, but I should qualify that this is in my narrow experience in my particular type of domain.
The difficulty I've often found is that if you have all these small concrete objects encapsulating data, they now want to all talk to each other. The interactions between them can get extremely complex (and much, much more complex in a real application spanning thousands of objects).
And this is not a dependency graph directly related to coupling so much as an "interaction graph". There could be abstractions to decouple these concrete objects from each other: Foo might not talk to Bar directly, but through IBar or something of this sort. That graph would still connect Foo to Bar since, albeit decoupled, they still talk to each other.
And all this communication between small and medium-sized objects which make up their own little ecosystem, if applied to the entire scale of a large codebase in my domain, can become extremely difficult to maintain. And it becomes so difficult to maintain because it's hard to reason about what happens with all these interactions between objects with respect to things like side effects.
Instead, what I've found useful is to organize the overall codebase into completely independent, hefty subsystems that access a central "database". Each subsystem then inputs and outputs data. Some other subsystems might access the same data, but no system ever talks to another directly, and each individual system no longer attempts to encapsulate state. It doesn't try to become its own ecosystem. It instead reads and writes data in the central database.
Of course, in the implementation of each subsystem, they might use a number of objects to help implement them. And that's where I find OOP very useful: in the implementation of these subsystems. But each of these subsystems constitutes a relatively small to medium-scale project, and it's at that scale that I find OOP very useful.
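A very rough sketch of that shape, with invented names (not code from any real system): each subsystem only reads and writes component data in the central store and never talks to another subsystem.
<?php
// Hypothetical central "database": plain component tables indexed by
// entity id. No object owns this data.
class World
{
    public array $position = []; // entityId => ['x' => float, 'y' => float]
    public array $velocity = []; // entityId => ['x' => float, 'y' => float]
}

// A self-contained subsystem: input central data, output central data.
class PhysicsSystem
{
    public function update(World $world, float $dt): void
    {
        foreach ($world->velocity as $id => $v) {
            if (!isset($world->position[$id])) {
                continue;
            }
            $world->position[$id]['x'] += $v['x'] * $dt;
            $world->position[$id]['y'] += $v['y'] * $dt;
        }
    }
}
// A RenderSystem or AudioSystem would read the same World while
// remaining oblivious to the PhysicsSystem's existence.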
"Assembly-Line Programming" With Minimum Knowledge
This allows each subsystem to just focus on doing its thing with almost no knowledge of what's going on in the outside world. A developer focusing on physics can just sit down with the physics subsystem and know little about how the software works except that there's a central database from which he can retrieve things like motion components (just data) and transform them by applying physics to that data. And that makes his job very simple and makes it so he can do what he does best with the minimum knowledge of how everything else works. Input central data and output central data: that's all each subsystem has to do correctly for everything else to work. It's the closest thing I've found in my field to "assembly line programming" where each developer can do his thing with minimum knowledge about how the overall system works.
Testing is still also quite simple because of the narrow focus of each subsystem. We're no longer mocking concrete objects with dependency injection so much as generating a minimum amount of data relevant to a particular system and testing whether the particular system provides the correct output for a given input. With so few systems to test (just dozens can make up a complex software), it also reduces the number of tests required substantially.
Breaking Encapsulation
The system then turns into a rather flat pipeline transforming central application state through independent subsystems that are practically oblivious to each other's existence. One might sometimes push a central event to the database which another system processes, but that other system is still oblivious to where that event came from. I've found this is the key to tackling complexity, at least in my domain, and it is achieved effectively through an entity-component system (ECS).
Yet it resembles something closer to procedural or functional programming at the broad scale, since to decouple all these subsystems and let them work with minimal knowledge of the outside world, we're breaking encapsulation and avoiding requiring the systems to talk to each other. When you zoom in, you might find your share of objects being used to implement any one of these subsystems, but at the broadest scale the system resembles something other than OOP.
Global Data
I have to admit that I was very hesitant about applying ECS at first to an architectural design in my domain since, first, it hadn't been done before to my knowledge in popular commercial competitors (3DS Max, SoftImage, etc), and second, it looks like a whole bunch of globally-accessible data.
I've found, however, that this is not a big problem. We can still very effectively maintain invariants, perhaps even better than before. The reason is due to the way the ECS organizes everything into systems and components. You can rest assured that an audio system won't try to mutate a motion component, e.g., not even under the hackiest of situations. Even with a poorly-coordinated team, it's very improbable that the ECS will degrade into something where you can no longer reason about which systems access which component, since it's rather obvious on paper and there are virtually no reasons whatsoever for a certain system to access an inappropriate component.
On the contrary, it often removed many of the former temptations for hacky things. With the data wide open, a lot of the hacky things done in our former codebase under loose coordination and crunch time were hasty attempts to x-ray abstractions and access the internals of the ecosystems of objects. The abstractions started to become leaky as a result of people, in a hurry, trying to just get at the data they wanted; they were basically jumping through hoops to access data, which led to interface designs degrading quickly.
There is still something vaguely resembling encapsulation, just due to the way the system is organized, since there's often only one system modifying a particular type of component (two in some exceptional cases). But the systems don't own that data, they don't provide functions to retrieve that data, and they don't talk to each other. They all operate through the central ECS database (which is the only dependency that has to be injected into all these systems).
Flexibility and Extensibility
This is already widely discussed in external resources about entity-component systems, but they are extremely flexible at adapting to radically new design ideas, even concept-breaking ones like a suggestion for a creature which is a mammal, insect, and plant all at once, sprouting leaves under sunlight.
One of the reasons is because there are no central abstractions to break. You introduce some new components if you need more data for this or just create an entity which strings together the components required for a plant, mammal, and insect. The systems designed to process insect, mammal, and plant components then automatically pick it up and you might get the behavior you want without changing anything besides adding a line of code to instantiate an entity with a new combo of components. When you need whole new functionality, you just add a new system or modify an existing one.
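Continuing the invented sketch from earlier, the mammal/insect/plant creature is just a new combination of existing component rows; no new class is required:
<?php
// Hypothetical component tables (invented names), keyed by entity id.
$components = [
    'furriness'      => [], // mammal data
    'exoskeleton'    => [], // insect data
    'photosynthesis' => [], // plant data
];

$creature = 1; // one new entity: a mammal-insect-plant all at once
$components['furriness'][$creature]      = ['thickness' => 0.8];
$components['exoskeleton'][$creature]    = ['hardness' => 0.5];
$components['photosynthesis'][$creature] = ['leafArea' => 2.0];

// The existing mammal, insect, and plant systems each iterate over
// their own component table and pick this entity up unchanged.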
What I haven't found discussed so much elsewhere is how much this eases maintenance even in scenarios when there are no concept-breaking design changes that we failed to anticipate. Even ignoring the flexibility of the ECS, it can really simplify things when your codebase reaches a certain scale.
Turning Objects Into Data
In a previous OOP-heavy codebase, closer to the first interaction graph described above, I saw the difficulty of maintaining the system first-hand: the amount of code required exploded because something like the analogical Car had to be built as a completely separate subtype (class) implementing multiple interfaces. So we had an explosive number of objects in the system: a separate object for point lights and directional lights, a separate object for one fish-eye camera and another, etc. We had thousands of objects implementing a few dozen abstract interfaces in endless combinations.
The ECS equivalent required only hundreds, and we were able to do the exact same things with a small fraction of the code, because the ECS turned the analogical Car entity into something that no longer requires its own class: a simple collection of component data, a generalized instance of just one Entity type.
OOP Alternatives
So there are cases like this where OOP applied in excess at the broadest level of the design can start to really degrade maintainability. At the broadest birds-eye view of your system, it can help to flatten it and not try to model it so "deep" with objects interacting with objects interacting with objects, however abstractly.
Comparing the two systems I worked on in the past and now, the new one has more features but takes hundreds of thousands of LOC. The former required over 20 million LOC. Of course it's not the fairest comparison since the former one had a huge legacy, but if you take a slice of the two systems which are functionally quite equal without the legacy baggage (at least about as close to equal as we might get), the ECS takes a small fraction of the code to do the same thing, and partly because it dramatically reduces the number of classes there are in the system by turning them into collections (entities) of raw data (components) with hefty systems to process them instead of a boatload of small/medium objects.
"Are there any scenarios where a truly non-OOP paradigm is actually a better choice for a large-scale solution? Or is that unheard of these days?"
It's far from unheard of. The system I'm describing above, for example, is widely used in games. It's quite rare in my field (most of the architectures in my field are COM-like with pure interfaces, and that's the type of architecture I worked on in the past), but I've found that peering over at what game developers do when designing an architecture made a world of difference in being able to create something that remains very comprehensible as it grows and grows.
That said, some people consider ECS to be a type of object-oriented programming on its own. If so, it doesn't resemble OOP of a kind most of us would think of, since data (components and entities to compose them) and functionality (systems) are separated. It requires abandoning encapsulation at the broad system level which is often considered one of the most fundamental aspects of OOP.
High-Level Coding
"But it seems to me that usually the pieces of higher level solutions are almost always put together in an OOP fashion."
If you can piece together an application with very high-level code, then it tends to be rather small or medium in scale as far as the code your team has to maintain and can probably be assembled very effectively using OOP.
In my field in VFX, we often have to do things that are relatively low-level like raytracing, image processing, mesh processing, fluid dynamics, etc, and can't just piece these together from third party products since we're actually competing more in terms of what we can do at the low-level (users get more excited about cutting-edge, competitive production rendering improvements than, say, a nicer GUI). So there can be lots and lots of code ranging from very low-level shuffling of bits and bytes to very high-level code that scripters write through embedded scripting languages.
Interweb of Communication
But with any type of application, high-level or low-level or a combo, there comes a point at a large enough scale, revolving around a very complex central application state, where I've found it no longer useful to try to encapsulate everything into objects. Doing so tends to multiply complexity and the difficulty of reasoning about what goes on, due to the multiplied amount of interaction between everything. It is no longer easy to reason about thousands of ecosystems talking to each other unless, at a large enough scale, there is a breaking point where we stop modeling each thing as an encapsulated ecosystem that has to talk to the others. Even if each one is individually simple, everything taken as a whole can start to overwhelm the mind, and we often have to take a whole lot of it in to make changes, add new features, and debug things if the design of an entire large-scale system revolves solely around OOP principles. It can help to break free of encapsulation at some scale, for at least some domains.
At that point it's not necessarily so useful anymore to, say, have a physics system encapsulate its own data (otherwise many things would want to talk to it, to retrieve that data as well as to initialize it with the appropriate inputs). That's where I found this alternative through ECS so helpful: it turns the analogical physics system, and all such hefty systems, into a "central database transformer", or a "central database reader which outputs something new", and these can be oblivious to each other. Each system then starts to resemble a process in a flat pipeline rather than an object forming a node in a very complex graph of communication.

Where can I find UML diagrams (instead of reinventing the wheel)?

I am currently trying to draw a set of UML diagrams to represent products, offers, orders, deliveries and payments. These diagrams have probably been invented by a million developers before me.
Are there any efforts to standardize the modeling of such common things? Or even the modeling of specific domains (for example car-manufacturing).
Do you know if there is some sort of repository containing UML diagrams (class diagrams, sequence diagrams, state diagrams...)?
There is a movement for documenting (as opposed to standardizing) models for certain domains. These are called analysis patterns, a term Martin Fowler came up with; he actually wrote a book called Analysis Patterns. He also has a dedicated section on his website where he presents some of these patterns accompanied by UML diagrams.
Maybe you'll find some inspiration there that will help you in modeling your domain. I've stressed the word inspiration, as I think different businesses have different requirements even though they operate in the same domain, so the solutions you read about may not be appropriate for your problem.
There are many tools out there that do both - but they're generally not free!
Microsoft Visio does both and is extensible. For UML artefacts it comes with auto-generators producing VB/Java template code, but you can modify them to auto-generate any code. Many Visio users have created models to use as templates.
Artisan Enterprize is by far the most powerful UML tool (but it's not cheap).
Some would argue that Rational Rose or RUP is the better tool
But for car manufacturing and other similar real-world modelling, by far the best tool is MathWorks Simulink (and not because it's one of the most expensive). It is by far the best tool because you can animate the model: you can prove the model works before generating the code (in whatever grammar/language/other models you care to push it to)!
You can obtain a student license for around £180, with the 'real thing' pushing £4,000 (for car-related artefacts). The full product with all the trimmings is about £15k. Simulink is also extensible with a C-like language, and there is a .NET add-in and APIs for a plethora of other languages. And, just like Visio, there is a worldwide forum creating saleable, shareware, and freeware real-world model templates. Many automobile manufacturers worldwide are already using Simulink.
I think that MiniQuark's question is really good and will sooner or later be addressed by vendors such as Omondo, Rational (IBM), etc. Users don't just need tools; they need models out of the box, so they can add their business rules inside an existing, well-defined architecture. Why develop a new architecture from scratch if the job has already been done? In Java we use plenty of frameworks, existing methods, etc., so why not go one level higher and reuse architecture? It is impossible today to guess how a project will evolve, and new demands come in every day. We therefore need a stable architecture which has been tested previously and is extensible. I have seen so many projects start with a nice architecture, then realize in the middle of the project that it is not the best, and change their architecture: renaming classes, splitting classes, creating packages, etc. After the first iteration it is already a real mess. Could you imagine what we found after 10 iterations? A total mess!
This mess would have been avoided by using a predefined, previously tested model, because the missing class or package would already have been created, and only a class rename would be needed for architecture purposes. Adding business-rule methods would then end the coding stage before deployment testing.
I think there is some confusion between patterns and the initial question, which is about UML model reusability.
There is today no reusable out-of-the-box model that has been developed. This is really strange, but the job has either never been done or never been shared.
Omondo has tried to launch an initiative without real success. I have heard that they are working on hundreds of out-of-the-box models which will be open source and given to the community for free. I hope this gets done, because it is really important to me and would save me a lot of time at the beginning of a project.