OpenComponents for rendering a whole page - anti-patterns

Since OpenComponents are intended for rendering small reusable components (with a well-defined interface) inside a web page, would using an OpenComponent to render the whole page content, including nested OpenComponents, be a reasonable use case, or would it be considered an anti-pattern?

I have indeed seen OC used for whole pages, and I have done it myself in the past.
One thing to reason about, in my opinion, is where the boundaries lie with regard to modularising different parts as libraries (composition at build time) vs OCs (composition at render time).
In my opinion, even though this is a decision that can easily be reversed during development, it is worth spending some time at the beginning, perhaps with some proofs of concept, to understand what the development experience is like for developing, testing, and deploying.
In my experience, when a single team owns the entire OC content, it's fine to modularise everything inside it as libraries, since that team has enough autonomy. When, instead, multiple parts are owned by different stakeholders, it makes sense to modularise via OCs, because OC makes it easy to explicitly establish contracts between interfaces and gives different owners the autonomy to develop and deploy independently.
In conclusion, I wouldn't say it is an anti-pattern.

Related

Method JavaFx TreeItem getRoot() is not visible. What is the OOP/MVC reason it is not?

I needed to get the root item of a TreeView. The obvious way is to call getRoot() on the TreeView, which is what I use.
I like to experiment, and was wondering whether I could get the same root by climbing up the tree from a leaf item (a TreeItem), calling getParent() recursively until the result is null.
That works as well, and in my custom TreeItem I added a public method getRoot() to play around with it. That is how I found out this method already exists in the parent TreeItem, but is not exposed.
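For illustration, a minimal sketch of that climbing approach; the helper name findRoot is my own, not a JavaFX API:

```java
import javafx.scene.control.TreeItem;

public final class TreeItems {

    // Climb from any item to the root by following getParent() until it returns null.
    public static <T> TreeItem<T> findRoot(TreeItem<T> item) {
        TreeItem<T> parent = item.getParent();
        return parent == null ? item : findRoot(parent);
    }

    public static void main(String[] args) {
        TreeItem<String> root = new TreeItem<>("root");
        TreeItem<String> child = new TreeItem<>("child");
        TreeItem<String> leaf = new TreeItem<>("leaf");
        root.getChildren().add(child);
        child.getChildren().add(leaf);

        // Climbing from the leaf reaches the same instance the TreeView's own getRoot() would return.
        System.out.println(findRoot(leaf) == root);  // true
    }
}
```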
My question: why would it not be exposed? Is it bad practice regarding OOP / MVC architecture?
The reason for the design is summed up by kleopatra's comment:
"Why would it not be exposed?" I would pose it the other way round: why should it be? It's convenience API at best, easy for clients to implement themselves, and not really needed; adding such things to a framework/toolkit tends to explode the API/implementation that has to be maintained.
JavaFX is filled with decisions like this on purpose. A lot of the reasoning is based on experience (good and bad) from AWT/Swing. Just some examples:
For specifying execution on the UI thread, there is a runLater API, but no invokeAndWait API like Swing, even though it would be easy for the framework to provide such an API and it has been requested.
Providing an invokeAndWait API means that naive (and experienced :-) developers could use it incorrectly to accidentally deadlock threads.
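To make the risk concrete, here is a hedged sketch of the kind of blocking helper clients sometimes build themselves on top of Platform.runLater; the name runAndWait is invented, and the isFxApplicationThread guard is exactly what a naive version omits before deadlocking:

```java
import java.util.concurrent.CountDownLatch;
import javafx.application.Platform;

public final class FxBlocking {

    // Runs the action on the JavaFX Application Thread and blocks the caller until it finishes.
    public static void runAndWait(Runnable action) throws InterruptedException {
        if (Platform.isFxApplicationThread()) {
            // Scheduling with runLater and then waiting here would deadlock,
            // because the FX thread would be blocked waiting for itself.
            action.run();
            return;
        }
        CountDownLatch done = new CountDownLatch(1);
        Platform.runLater(() -> {
            try {
                action.run();
            } finally {
                done.countDown();
            }
        });
        done.await(); // the blocking step a careless caller can misuse to deadlock threads
    }
}
```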
Lots of classes are final and not extensible.
Sometimes developers want to extend classes but can't, because they are final. This means they can't override a lot of the built-in, tested functionality of the framework and accidentally break it that way. Instead they can usually use aggregation over inheritance to do what they need; the framework forces them to do so in order to protect itself and them.
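A hedged sketch of the aggregation route, using the final javafx.scene.paint.Color class; NamedColor is an invented wrapper, not part of JavaFX:

```java
import javafx.scene.paint.Color;

// Color is final, so we cannot subclass it to add a display name.
// Instead, aggregate it: the wrapper holds a Color and delegates to it.
public final class NamedColor {
    private final String name;
    private final Color color;

    public NamedColor(String name, Color color) {
        this.name = name;
        this.color = color;
    }

    public String name() { return name; }
    public Color color() { return color; }

    // Delegate only what we need instead of overriding framework behaviour.
    public double brightness() { return color.getBrightness(); }

    public static void main(String[] args) {
        NamedColor warning = new NamedColor("warning", Color.ORANGE);
        System.out.println(warning.name() + " brightness: " + warning.brightness());
    }
}
```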
Color objects are immutable.
Immutable objects in general make stuff easier to maintain.
Native look and feels aren't part of the framework.
You can still create them if you want, and there are 3rd party libraries that do that, but it doesn't need to be in the core framework.
The application programming interface is single threaded not multi-threaded.
Because the developers of the framework realized that multi-threaded UI frameworks are a failed dream.
The philosophy was to code to make the 80% use case easier and the 20% use case (usually) possible, using additional user or 3rd-party code, while making it difficult for user code to accidentally (or intentionally) break the framework. You just stumbled upon one instance of this philosophy in action.
There are a whole host of catch-phrases you could use to describe the reason for this design approach. None of them are OOP or MVC specific. The underlying principles have been around far longer than software engineering; they are just approaches to work and engineering in general. Here are some links if you're interested:
You aren't gonna need it (YAGNI)
Minimum viable product (MVP)
Worse is better
Muntzing
Feature creep prevention
Keep it simple, stupid (KISS)
Occam's razor

What are the disadvantages of the ECS (Entity-Component-System) architectural pattern, compared to OOP (or other paradigms)?

Because of Unity ECS, I've been reading a lot about ECS lately.
There are many obvious advantages to an ECS architecture:
ECS is data-oriented: data tends to be stored linearly, which is the optimal way for a system to access it. In decent ECS implementations, data is stored and processed sequentially, with few or no interruptions for any given system processing its set of components.
ECS is very compartmentalized: it naturally separates data from behavior, enforces 'composition over inheritance' (google it), etc.
ECS is very friendly to parallel processing and multi-threading: because of the way things are structured, many entities and components can avoid conflicts and be processed in parallel with other systems.
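To make those points concrete, here is a hedged, minimal sketch of the idea in plain Java (not Unity's API): components are plain data held in flat arrays, an entity is just an index, and a system is a loop that sweeps the data sequentially.

```java
// Minimal ECS-style sketch: components live in parallel arrays ("structure of arrays"),
// entities are indices, and systems are loops over the component data they care about.
public final class MiniEcs {

    static final int MAX_ENTITIES = 1024;

    // Position component.
    static final float[] posX = new float[MAX_ENTITIES];
    static final float[] posY = new float[MAX_ENTITIES];

    // Velocity component.
    static final float[] velX = new float[MAX_ENTITIES];
    static final float[] velY = new float[MAX_ENTITIES];

    static int entityCount = 0;

    static int createEntity(float x, float y, float vx, float vy) {
        int id = entityCount++;
        posX[id] = x; posY[id] = y;
        velX[id] = vx; velY[id] = vy;
        return id;
    }

    // A "system": pure behaviour that processes the component arrays linearly.
    static void movementSystem(float dt) {
        for (int e = 0; e < entityCount; e++) {
            posX[e] += velX[e] * dt;
            posY[e] += velY[e] * dt;
        }
    }

    public static void main(String[] args) {
        int player = createEntity(0, 0, 1, 2);
        movementSystem(0.016f);
        System.out.printf("player at (%.3f, %.3f)%n", posX[player], posY[player]);
    }
}
```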
However, the disadvantages of ECS (compared to OOP, or entity-component without systems, as has been common in game engines including Unity until recently) are rarely, if ever, talked about. Do they exist? And if they do, what are they?
Here are a few points I gathered from my research (see the sketch after these points):
Systems are very dependent on their ordering. Introducing new systems in between already existing systems can be a challenge.
You also need to plan your data ahead as much as possible, since it will potentially be used by a lot of systems. Changing the contents of a component could break quite a few systems.
Although it's easy to debug the flow of a single system, it's harder to debug individual component changes, and you lose the global view of what happened to an entity across all its components. I'm not sure whether Unity has introduced new debugging features for this.
If you're planning to use ECS in your team, introducing a new paradigm to devs who are not familiar with it could be a challenge. Onboarding time could be longer, with more overhead.
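As a hedged illustration of the ordering point (plain Java, not Unity's API, all names invented): the same two systems produce different results depending on which runs first within a frame, so inserting a new system means knowing exactly where it belongs in the update order.

```java
import java.util.List;

// Why system ordering matters: the collision system reads state the movement system writes,
// so reordering them changes the observable result of a frame.
public final class OrderingDemo {

    static float position;
    static final float VELOCITY = 1f;
    static boolean hitWall;
    static final float WALL = 0.5f;

    static void movementSystem(float dt) {
        position += VELOCITY * dt;
    }

    static void collisionSystem() {
        // Reads the position written by movementSystem.
        hitWall = position >= WALL;
    }

    static void runFrame(List<Runnable> systems) {
        position = 0f;
        hitWall = false;
        systems.forEach(Runnable::run);
    }

    public static void main(String[] args) {
        // Collision after movement: sees this frame's position -> hitWall == true.
        runFrame(List.of(() -> movementSystem(1f), OrderingDemo::collisionSystem));
        System.out.println("movement then collision: " + hitWall);

        // Collision before movement: sees last frame's position -> hitWall == false.
        runFrame(List.of(OrderingDemo::collisionSystem, () -> movementSystem(1f)));
        System.out.println("collision then movement: " + hitWall);
    }
}
```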
Hope this gives you a good starting point.
When it comes to Unity3D, one disadvantage that comes to my mind is that the ECS there is quite restricted to the Unity classes (e.g. MonoBehaviour) and lifecycle. That means the components are not easy to share with other C# code, whereas a well-designed OOP class is reusable on platforms other than Unity.
Another point that comes to mind is that using interfaces with components is sometimes not easy in Unity, because serialization of interfaces is only supported in the newest version. Without serialization, they don't appear inside the inspector.

How to get non-programmers to understand domain model

When working on a complicated project, many people will be involved in the development over a long time span. This raises the problem of how to get everyone involved to understand the domain model.
When a project is first developed following DDD, it is probably well discussed among everyone involved and designed carefully. At this stage it's relatively easy for everyone to understand and agree on the underlying domain model.
However, as the project iterates over a longer period, different groups of people may become involved, and few people still hold the full picture. Even if the code is very well maintained, it's hard for non-programmers, including domain experts, product managers, and testers, to grasp the business rules embedded in the code.
The only way out I can think of is to keep documents/UML/diagrams well maintained for every change, so that they always reflect the underlying model. However, I think this is a huge challenge for any non-trivial project, and it's very hard to decide how much detail needs to be included in the documents.
Is there any best practice I could learn from, so that the domain model is well understood by everyone and can also easily evolve with the product?
Use Behaviour Driven Development (BDD).
BDD is like TDD, but BDD always focuses on testing domain behaviour, and tests are defined using the domain's Ubiquitous Language. All stories/features/scenarios can be written in a structured, human-readable form (like this).
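For illustration (the original linked example isn't reproduced here), a hedged sketch of step definitions for such a scenario using Cucumber-JVM; the scenario text, step names and the tiny Order stand-in are all invented, and running it would need the cucumber-java dependency plus a matching .feature file:

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Glue code for a scenario written in the domain's Ubiquitous Language, e.g.:
//
//   Scenario: A gold customer gets free shipping
//     Given a gold customer with an order over 50 EUR
//     When the order is placed
//     Then shipping is free
public class FreeShippingSteps {

    // Minimal stand-in domain object so the example is self-contained.
    static final class Order {
        final boolean goldCustomer;
        final double totalEur;
        boolean placed;

        Order(boolean goldCustomer, double totalEur) {
            this.goldCustomer = goldCustomer;
            this.totalEur = totalEur;
        }

        void place() { placed = true; }

        boolean isShippingFree() { return placed && goldCustomer && totalEur > 50.0; }
    }

    private Order order;

    @Given("a gold customer with an order over 50 EUR")
    public void aGoldCustomerWithAnOrderOver50Eur() {
        order = new Order(true, 60.0);
    }

    @When("the order is placed")
    public void theOrderIsPlaced() {
        order.place();
    }

    @Then("shipping is free")
    public void shippingIsFree() {
        assertTrue(order.isShippingFree());
    }
}
```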
And because the tests are tightly coupled to your code, they have to stay in sync (assuming that your team is disciplined in keeping tests up-to-date).
In my experience, this offers the most economical solution for exposing up-to-date domain rules in a moderately readable format.
The most important strategy that comes from DDD is bounded contexts. These correspond to the (sub)domains in the business, and ideally they should map one to one. So the first step is to clearly identify the (sub)domains; this is also the hardest part.
Instead of keeping a lot of detailed diagrams that are hard to keep up to date with the project, draw a context map.
After this you have already split the problem into pieces that are more manageable. Looking at a context map, one can understand the big picture in a matter of minutes.
For each (sub)domain you can keep a list of the main business rules. The task management software (e.g. Jira) could be the single source of truth for these.

API library decoupling approaches?

Imagine a set of libraries that represent some APIs. By using an inversion-of-control mechanism, concrete implementations will be injected into a consuming project.
Here is the situation. Some of my API libraries depend on other API libraries for certain functionality, so the API libraries themselves are coupled at some points. This coupling can become an issue later, because changing one API will result in changes to the dependent APIs, and the corresponding implementations will also need to be changed, so in the worst case we end up with quite a number of projects that need to be modified to reflect a change from only one of them.
Now I have in mind two possible solutions for this:
Create a monolith API project that unites the related API libraries.
Further decouple the APIs by making each library provide interfaces for all functionality that depends on another API, so the direct dependency is removed. This might result in similar code in both libraries, but it gives freedom to the implementations chosen via the IoC mechanism and also allows the APIs to evolve independently of each other (when an API is changed, the changes would affect only its implementation libraries, not other APIs or their implementations).
The problem with the second approach is the duplication of code, and the result might be too many API libraries that need to be referenced (for instance, in a .NET application each API will be a separate DLL; in some scenarios, like Silverlight applications, this can be an issue for app size, download time, and overall client performance).
Is there a better solution for this situation? When is it better to merge some API libraries into one bigger library, and when not? I know this is a very general question, but let's ignore due dates, estimations, client requirements and technologies for a moment; I want to be able to determine the right approach based on achieving both maximum scalability and minimum maintenance time. So, what would be a good reason to choose either approach, or another one you might suggest?
Edit:
I feel I should clarify something about the question. I have in mind decoupling APIs from each other, not an API from its implementation. So, for instance, if I have a security API for validating access permissions and a user-accounts API that uses (references) the security API, changing the security API will require changing the user-accounts API as well as the implementations of both. The more APIs that happen to be coupled this way, the more changes will have to be applied. That is what I want to avoid.
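A hedged sketch of the second approach, using the security / user-accounts example (all names are invented): the user-accounts library declares the narrow interface it needs, and an adapter wired up at the composition root (by whatever IoC container you use) bridges it to the security library, so the user-accounts API no longer references the security API directly.

```java
// --- user-accounts library: declares only what it needs, no reference to the security library ---
interface AccessChecker {
    boolean mayManageAccounts(String userId);
}

final class UserAccountService {
    private final AccessChecker access;   // injected via the IoC container

    UserAccountService(AccessChecker access) {
        this.access = access;
    }

    void disableAccount(String callerId, String targetAccountId) {
        if (!access.mayManageAccounts(callerId)) {
            throw new SecurityException("caller may not manage accounts");
        }
        // ... disable the account ...
    }
}

// --- security library: its own API, free to evolve independently ---
final class SecurityApi {
    boolean hasPermission(String userId, String permission) {
        return "admin".equals(userId);    // toy rule for the sketch
    }
}

// --- composition root / IoC wiring: the only place that knows both libraries ---
final class SecurityAccessChecker implements AccessChecker {
    private final SecurityApi security = new SecurityApi();

    public boolean mayManageAccounts(String userId) {
        return security.hasPermission(userId, "accounts.manage");
    }
}

public class Wiring {
    public static void main(String[] args) {
        UserAccountService accounts = new UserAccountService(new SecurityAccessChecker());
        accounts.disableAccount("admin", "acct-42");
        System.out.println("account disabled");
    }
}
```

With this layout, a change to SecurityApi touches only SecurityAccessChecker in the wiring layer; the user-accounts API and its implementation stay untouched.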
The choice is between a few huge libraries and a myriad of small libraries.
If you have a huge library, the code within will tend to be tightly coupled simply because there's no force providing pressure to design the various elements in a loosely coupled way. The risk is that it becomes harder and harder to evolve that library because there are so many interdependencies that must be coordinated. Think about the .NET Base Class Library as an example.
If you have a myriad of small libraries, you might risk dll hell. Yes, we were promised many years ago that this was over, but it's not. Just try to consume a lot of fine-grained open source libraries in your application code base and you'll know what I mean.
Still, the Single Responsibility Principle also applies at the package level, so I'd recommend small, focused libraries instead of huge general-purpose libraries. This also makes it easier to always pick best-of-breed libraries.
Small libraries can always be composed/compiled into larger libraries (in .NET with an Assembly Linker / Merger / Repacker utility), while it's much harder to split a big library.
No matter what you do, the most important thing to keep in mind is backwards compatibility. The fewer breaking changes you introduce, the easier those libraries will be to manage.
I don't see this as a problem, really.
Some libraries will depend on other libraries, and this is fine with me: improving one library improves all its dependents! The "owner" of a library has the responsibility not to break existing code when making a change, but this is normal and can easily be handled if the code is well designed.
If you have changes rippling through all dependent code you should reconsider your design. If your library surfaces a certain API it should isolate its consumers from changes to underlying classes or libraries.
Update 1:
If your application uses Library1 with API1 it should not have to deal with the fact that Library1 uses Lib2, Lib3, .. , LibX.
E.g. The Moq mocking library depends on CastleDynamicProxy. Why should you have to care about that? You get an assembly where DynamicProxy is already merged in and you can just use Moq. You never see, use or have to care about DynamicProxy. So if the DP API changes, that would not affect your tests written using Moq. Moq isolates your code from changes in the API of the underlying DP.
Update 2:
Finding a problem valid for more than one branch causes modifications of all of them
If that is the case you are not building a library but a helper for a very specific problem, and it should NEVER be forced upon other projects. Shared libraries tend to degenerate into a collection of "might be useful somewhere in the distant future". Don't! This will always bite you in the a**! If you have a solution for a problem that occurs in more than one place (like Guard classes): share it. If you believe that you might find a use for some solution to a problem: leave it in the project until you really have that situation, then share it. Never do it "just in case".
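For reference, a hedged sketch of the kind of Guard class worth sharing only once the same checks have shown up in more than one project (the class and method names are my own):

```java
// A tiny precondition helper: the sort of code worth extracting into a shared
// library only after the same checks have appeared in several projects.
public final class Guard {

    private Guard() {}

    public static <T> T notNull(T value, String name) {
        if (value == null) {
            throw new IllegalArgumentException(name + " must not be null");
        }
        return value;
    }

    public static String notBlank(String value, String name) {
        notNull(value, name);
        if (value.trim().isEmpty()) {
            throw new IllegalArgumentException(name + " must not be blank");
        }
        return value;
    }

    // Example call site: Guard.notBlank(customerName, "customerName");
}
```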

How should Web Applications Interfaces be designed?

I am building a web application and have been told that using object-oriented programming paradigms affects the performance of the application.
I am looking for input and recommendations about design choices that come from moving from One Giant Function to an Object-Oriented Programming Interface.
To be more specific: if a web application used OOP and created objects that live for a very short time, would the performance hit of creating objects on the server justify using a more functional (I am thinking static functions here) design?
Wow, big question, but in short (and comment if you want more info): OOP code/practices (ideally well written at that) will give you far more maintainability, testability and joy to code in than One Giant Function coding.
As for the speed argument, that is really only an issue if you are trying to squeeze every last ounce of CPU out of a server that's going to get hammered. In which case you are probably doing something wrong and need to think about better/more servers, or you work for NASA, or are doing it for a dare.
I don't know about performance, but it definitely makes the code easier to maintain.
I am looking for input and recommendations about design choices that come from moving from One Giant Function to an Object-Oriented Programming Interface.
As David suggested, answering the above would require a lot of pages.
Perhaps you should be looking at frameworks. They make some design choices for you.
The most popular way of designing a web app with OOD/OOP is to use the Model-View-Controller pattern. To summarise the 3 main participants:
Model - I'm the stuff in the problem domain that you're manipulating.
View - I'm responsible for drawing and managing what you see in the browser. In web apps, this often means setting up an HTML template and pushing name-value pairs out into it.
Controller - I'm responsible for handling requests that come in from the web and working out what to do with them and arranging with the other objects to get that work done.
Start with the controller...
Views and controllers often come in pairs. The controller accepts the HTTP request, works out what needs doing, and either does it (if the work is trivial) or delegates the work to another object. Typically it finds the view to be used and gives it to the object doing the actual work, to write its output into.
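A hedged, framework-free sketch of that flow in Java (all names invented): the controller accepts a request, asks the model for the data, and hands the result to a view that renders HTML.

```java
import java.util.List;
import java.util.Map;

// Framework-free MVC sketch: the controller handles the request, the model does the work,
// and the view renders the result into HTML.
public class MvcSketch {

    // Model: the problem-domain stuff being manipulated.
    static final class ProductCatalog {
        List<String> productsCheaperThan(double maxPrice) {
            return maxPrice >= 5.0 ? List.of("mug", "t-shirt") : List.of();
        }
    }

    // View: turns name-value pairs into the HTML the browser sees.
    static final class ProductListView {
        String render(Map<String, Object> model) {
            StringBuilder html = new StringBuilder("<ul>");
            for (Object product : (List<?>) model.get("products")) {
                html.append("<li>").append(product).append("</li>");
            }
            return html.append("</ul>").toString();
        }
    }

    // Controller: accepts the request, delegates to the model, picks the view.
    static final class ProductController {
        private final ProductCatalog catalog = new ProductCatalog();
        private final ProductListView view = new ProductListView();

        String handle(Map<String, String> request) {
            double maxPrice = Double.parseDouble(request.getOrDefault("maxPrice", "10"));
            List<String> products = catalog.productsCheaperThan(maxPrice);
            return view.render(Map.of("products", products));
        }
    }

    public static void main(String[] args) {
        // Stand-in for an HTTP GET /products?maxPrice=10
        System.out.println(new ProductController().handle(Map.of("maxPrice", "10")));
    }
}
```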
What I've described here corresponds with what you'd expect to find in something like Ruby on Rails.
Making lots of objects that you use once is certainly a concern, but I wouldn't worry about that aspect of performance up front. Proper virtual machines know how to manage short-lived objects. There are plenty of things you can do to speed up a web app, and I wouldn't start by sacrificing the benefit of clear organization for a speed-up that might not even be the most important optimization...
MVC isn't the only way to go; there are other patterns like Model-View-Presenter, and some really unusual approaches like continuation servers (e.g. Seaside), which reuse the same objects between HTTP requests...
Yes. When taking an OO approach to web app development (using Seaside), I can deliver functionality so much faster that I have sufficient time to think about how to deliver the right amount of performance.