'Uncle Bob' Martin makes a good argument in 'Agile Software Development' pp318-9 that 'Interfaces belong to the client, not to the derivative ... clients tend to be packaged with the interfaces they control'.
In practice, how do people package interfaces with clients in statically typed languages? From the point of view of the individual interface - where a good OO interface has several implementations - Martin's recommendation makes sense. But at the package level, there are typically many clients to each server, and having the server depend on its clients seems to make the dependency arrow point in the wrong direction. Indeed, later in the 'Principles of Package Design' Martin introduces a maxim 'Depend in the direction of stability' which intuitively suggests client -> server.
'Uncle Bob' is correct, but his advice is not always practical. When building a commercial third-party library, it is obviously not practical to collect required interfaces from all future customers. That said, the approach of designing the interfaces to satisfy all known use cases remains valid. Conceptually the interfaces still belong to the clients, not to the library.
When building in-house utilities, package the interfaces directly with the clients just as 'Uncle Bob' suggests.
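In Java terms, packaging the interface with the client might look like the following minimal sketch (package and type names are made up for illustration). Note that the dependency arrow points from the implementation package back to the client package that owns the interface:

// Client package (e.g. com.example.billing): owns the interface it consumes.
public interface PaymentGateway {
    void charge(String accountId, long amountCents);
}

public class InvoiceProcessor {
    private final PaymentGateway gateway;   // the client programs against its own interface

    public InvoiceProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    public void settle(String accountId, long amountCents) {
        gateway.charge(accountId, amountCents);
    }
}

// Implementation package (e.g. com.example.cardprocessor): depends on the client package.
public class CardPaymentGateway implements PaymentGateway {
    @Override
    public void charge(String accountId, long amountCents) {
        // talk to the external card-processing system here
    }
}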
Related
I know that frameworks provide useful interfaces and classes that save a lot of time in the implementation phase, so my question is:
Should the framework interfaces and classes be included in my project's class diagram design or not?
And if they are, does this affect the reusability of the design if I decide to change the framework in the future?
UML diagrams are intended to be read by different interest groups. Business likes to see requirements, use cases and activities. Architects/testers need that as a basis to develop/test the system. And the results produced by the architects (static and behavioral class diagrams) are meant to be read by programmers. Each reader group has a focus on certain parts but will eventually peek more or less into border areas (from their perspective).
So to answer your question: yes, frameworks shall be part of the model. Architects should pay attention to how they cut the system. Frameworks should be designed with a different (broader) scope. So eventually you have frameworks that will be used only partially in a system. Or a system has potential for a framework and will be designed to be easily decoupled. Of course, this is a tricky task, and architects need lots of experience to fulfill all the needs that come from business and, possibly, from other stakeholders.
No, theoretically it shouldn't, but you're also free to do so.
As stated by the authors of UML (Rumbaugh, Jacobson, and Booch) in The Unified Modeling Language Reference Manual, page 25:
Across implementation languages and platforms. The UML is intended to be usable for systems implemented in various implementation languages and platforms, including programming languages, databases, 4GLs, organization documents, firmware, and so on. The front-end work should be identical or similar in all cases, while the back-end work will differ somewhat for each medium.
As I grow in my professional career I consider naming conventions very important. I have noticed that people throw around controller (LibraryController), service (LibraryService), and provider (LibraryProvider) and use them somewhat interchangeably. Is there any specific reasoning to use one vs. the other?
If there are websites with more concrete definitions, that would be great.
In Java Spring, Spring Boot and also .NET you would have:
Repository: persists data to the database and performs SQL queries.
Service: contains most of the business logic.
Controller: defines REST endpoints and contains as little logic as possible.
Conceptually this means that the WHAT (functional) is separated from the HOW (technical) as much as possible. The services try to stay technologically neutral. By contrast a controller only wants to define an external contract for communication. And finally the repository only wants to facilitate the access to the database.
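As a rough illustration of those three layers (a hypothetical Library example using standard Spring annotations, not taken from any particular project), the sketch below shows the controller delegating to the service, which delegates to the repository:

import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController                     // external contract: REST endpoints, as little logic as possible
class LibraryController {
    private final LibraryService service;
    LibraryController(LibraryService service) { this.service = service; }

    @GetMapping("/books/{id}")
    Book findBook(@PathVariable long id) {
        return service.findBook(id);
    }
}

@Service                            // the WHAT: business logic, technologically neutral
class LibraryService {
    private final BookRepository repository;
    LibraryService(BookRepository repository) { this.repository = repository; }

    Book findBook(long id) {
        return repository.findById(id);   // business rules would go around this call
    }
}

@Repository                         // the HOW: persistence and queries
class BookRepository {
    Book findById(long id) {
        // the SQL / JPA query would go here
        return new Book(id, "Unknown title");
    }
}

record Book(long id, String title) {}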
Organizing your code in this way keeps the business logic short, clean and maintainable. Unfortunately it is not always easy to keep the layers separated. For example, it is tempting to pollute or enrich your objects with metadata in the form of decorators/annotations (e.g. a database column name and data type).
Some developers don't see harm in this and get away with it. Others keep their objects strictly separated and define multiple sets of objects.
The objects for the database are often referred to as "entities" or "models".
For a REST controller they are often referred to as DTOs (data transfer objects).
Having multiple objects means that you need Mappers to convert one type of object to another. Some frameworks can do this for you (e.g. MapStruct).
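For example, a strictly separated pair of objects with a hand-written mapper could look like this (hypothetical names; a tool such as MapStruct could generate the mapper instead):

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity                              // the database-shaped object ("entity"/"model")
class BookEntity {
    @Id
    private Long id;

    @Column(name = "book_title")     // persistence metadata stays on the entity only
    private String title;

    Long getId() { return id; }
    String getTitle() { return title; }
}

// The REST-shaped object (DTO): no persistence annotations, no logic.
record BookDto(long id, String title) {}

// A trivial mapper converting one type of object into the other.
class BookMapper {
    static BookDto toDto(BookEntity entity) {
        return new BookDto(entity.getId(), entity.getTitle());
    }
}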
It would be easy to claim that strictness is always a good thing, but it can slow you down. It's okay to strike a balance.
In Node.js, the concepts of controller and service are the same. However, the term Repository isn't used very often. Instead, it is usually called a Provider, or Repositories are simply generalized as a kind of Service.
NestJS has stronger opinions about this (which can be a good thing). The naming conventions of NestJS (a Node.js framework) are strongly influenced by the naming conventions of Angular, which is of course a totally different kind of framework (front-end).
(For completeness: in Angular, a Provider is actually just something that can be injected as a dependency. Most providers are Services, but not necessarily; it could be a Guard or even a Module. A Module would be more like a set of tools/drivers or a connector.
PS: The usage of the term Module is a bit confusing, because there are also "ES6 modules", which are a totally different thing.)
ES6 and more modern versions of JavaScript (including TypeScript) are extremely powerful when it comes to constructing and destructuring objects, which often makes dedicated mappers unnecessary.
Having said that, most Node.js and Angular developers prefer to use TypeScript these days, which has more features than Java or C# when it comes to defining types.
So, all these frameworks are influencing each other. And they pretty much all agree on what a Controller and a Service is. It's mostly the Repository and Provider words that have different meanings. It really is just a matter of conventions. If your framework has a convention, then stick to that. If there isn't one, then pick one yourself.
These terms can be synonymous with each other depending on context, which is why each framework or language creator is free to explicitly declare them as they see fit... think function/method/procedure or process/service, all pretty much the same thing but slight differences in different contexts.
Just going off formal English definitions:
Provider: a person or thing that provides something.
i.e. the provider does a service by controlling some process.
Service: the action of helping or doing work for someone.
i.e. the service is provided by controlling some work process.
Controller: a person or thing that directs or regulates something.
i.e. the controller directs something to provide a service.
These definitions are listed just to explain how a developer looks at common English meanings when defining the terminology of a framework or language; it's not always one-for-one, and the similarity in terminology actually gives the developer a means of naming things that are very similar but still slightly different.
So, for example, let's take AngularJS. Here the developers decided to use the term Controller to imply "HTML Controller", a Service to imply something like a "quasi class" (since they are instantiated with the new keyword), and a Provider, which is really a superset of Service and Factory, which is also similar. You could program any application using any of them and wouldn't lose much; though one might be a little better than another in a certain context, I don't personally believe it's worth the extra confusion... essentially they are all providers. The Angular people could have just defined factory, provider and service as a single term, "provider", and then passed in modifiers for things like "static" and "void" as most languages do, and the exact same functionality could have been provided. That would have been my preference, but I've learned not to fight the conventions and terminology of the framework you're working with, no matter how strongly you disagree.
I was also looking for a more meaningful name than Provider :) and found this post useful.
Old dev here who stumbled on this. In my opinion, and from how I've seen these terms used over the last 20 years, it varies by language, but the Java/C# crowd mostly uses them as follows.
A service handles business logic and deals with domain objects. You find services in controllers and other services.
A repository does NOT handle business logic, but instead acts like a pool of domain objects (with helper methods for finding or persisting them). Services often contain repositories. Repositories often contain a context and are responsible for mapping from infrastructure-shaped data to domain-shaped data if the definitions have drifted apart. Controllers also often contain repositories for CRUD endpoints.
A context handles infrastructure the domain owns. Most often this is a database, but "context" means that anything that touches this data does so through (in) this context. A context returns infrastructure-shaped data. A repository often contains a context. A context directly in a service is sometimes appropriate. A context in a controller is a hard no.
A provider provides access to infrastructure some other app owns. Most often these are REST APIs, but they can also be Kafka streams or RPC classes that read data from or push data to someone else. If the source of truth for some of your domain objects' fields changes, you will probably see a provider next to a context in your repository, and your repository handles insulating the rest of your code from that change. Providers that provide RPC functionality are often found in services. In microservices, gateways, or vertical-slice architecture you sometimes see providers directly in controllers.
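To make those relationships concrete, here is a minimal sketch in Java (all names are hypothetical and not tied to any particular framework; collaborators are newed up directly only for brevity):

// Context: owns access to infrastructure the domain owns (here, a database).
// It returns infrastructure-shaped data.
class OrderDbContext {
    OrderRow findRow(long id) {
        // the actual SQL would run here
        return new OrderRow(id, "NEW");
    }
}

// Provider: access to infrastructure some other app owns (e.g. a REST API).
class PricingProvider {
    long currentPriceCents(long orderId) {
        // the remote pricing API call would go here
        return 4200;
    }
}

// Repository: no business logic; maps infrastructure-shaped data to
// domain-shaped data, insulating the rest of the code from those sources.
class OrderRepository {
    private final OrderDbContext context = new OrderDbContext();
    private final PricingProvider pricing = new PricingProvider();

    Order findById(long id) {
        OrderRow row = context.findRow(id);
        return new Order(row.id(), row.status(), pricing.currentPriceCents(id));
    }
}

// Service: business logic on domain objects; contains repositories.
class OrderService {
    private final OrderRepository orders = new OrderRepository();

    boolean isBillable(long id) {
        Order order = orders.findById(id);
        return !"CANCELLED".equals(order.status()) && order.priceCents() > 0;
    }
}

record OrderRow(long id, String status) {}               // infrastructure-shaped
record Order(long id, String status, long priceCents) {} // domain-shaped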
One old guy’s opinion but I hope it helps.
From the Dependency Injection in .NET book I know that the object graph should be created at the Composition Root of the application which makes a lot of sense to me when you are using an IoC Container.
In all the applications I've seen where an attempt to use DI is being made, there are always two constructors:
the one with the dependencies as parameters, and
the "default" one with no parameters, which in turn calls the other one, "newing" up all the dependencies.
In the aforementioned book, however, this is called the "Bastard Injection anti-pattern" and that is what I used to know as "Poor Man's Injection".
Now considering all this, I would say that "Poor Man's Injection" is just not using an IoC container and instead coding the whole object graph by hand in said Composition Root.
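Roughly, what I mean is the difference between these two (a hypothetical Java rendering, since the book's examples are in C#):

// Bastard Injection: a "default" constructor news up a concrete, volatile dependency.
class NotificationService {
    private final MessageSender sender;

    // the constructor with the dependency as a parameter
    NotificationService(MessageSender sender) {
        this.sender = sender;
    }

    // the "default" constructor: the class itself picks a concrete
    // implementation, hiding a hard dependency
    NotificationService() {
        this(new SmtpMessageSender());
    }

    void send(String message) {
        sender.deliver(message);
    }
}

interface MessageSender { void deliver(String message); }

class SmtpMessageSender implements MessageSender {
    public void deliver(String message) { /* talk to the SMTP server here */ }
}

// Poor Man's DI as I understand it: no container; the whole object graph
// is wired by hand in a single Composition Root.
class CompositionRoot {
    public static void main(String[] args) {
        MessageSender sender = new SmtpMessageSender();   // the only place that "news" up dependencies
        NotificationService service = new NotificationService(sender);
        service.send("hello");
    }
}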
So my questions are:
Am I understanding these concepts correctly or am I completely off track?
If you still need to register all the dependencies in the IoC container vs. coding them by hand in the exact same Composition Root, what's the real benefit of using an IoC container?
If I have misunderstood what "Poor Man's Injection" really is, could someone please clarify it?
When it comes to DI, there's a lot of conflicting use of terminology out there. The term Poor Man's DI is no exception. To some people, it means one thing and to others it means something different.
One of the things I wanted to do with the book was to supply a consistent pattern language for DI. When it came to all of those terms with conflicting use, I had two options: Come up with a completely new term, or pick the most prevalent use (according to my subjective judgment).
In general, I've preferred to re-use existing terminology instead of making up a completely new (and thus alien) pattern language. That means that in certain cases (such as Poor Man's DI), you may have a different notion of what the name is than the definition given in the book. That often happens with patterns books.
At least I find it reassuring that the book seems to have done its job of explaining both Poor Man's DI and Bastard Injection exactly, because the interpretation given in the OP is spot on.
Regarding the real benefit of a DI Container I will refer you to this answer: Arguments against Inversion of Control containers
P.S. 2018-04-13: I'd like to point out that I've years ago come to acknowledge that the term Poor Man's DI does a poor (sic!) job of communicating the essence of the principle, so for years, now, I've instead called it Pure DI.
P.P.S. 2020-07-17: We removed the term Bastard Injection from the second edition. In the second edition we simply use the more general term Control Freak to specify that your code "depend[s] on a Volatile Dependency in any place other than a Composition Root."
Some notes on part 2) of the question.
If you still need to register all the dependencies in the IoC container vs. coding them by hand in the exact same Composition Root, what's the real benefit of using an IoC container?
If you have a tree of dependencies (classes which depend on dependencies which depend on other dependencies, and so on), you can't do all the "news" in a single composition root, because with Bastard Injection the instances are newed up in each class's default constructor, so there are many "composition roots" spread across your code base.
Whether you have a tree of dependencies or not, using an IoC container will spare you some typing. Imagine you have 20 different classes that depend on the same IDependency. If you use a container, you can provide a configuration that tells it which instance to use for IDependency. You do this in a single place, and the container takes care of providing that instance to all dependent classes (see the sketch below).
The container can also control the object lifetime, which offers another advantage.
All of this comes on top of the other obvious advantages provided by DI (testability, maintainability, code decoupling, extensibility...).
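A rough sketch of that "single place" idea, using Spring's Java configuration as a stand-in for whichever container you use (all types hypothetical):

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

interface IDependency { String value(); }

class ConcreteDependency implements IDependency {
    public String value() { return "configured once"; }
}

// Two of the "20 classes" that all need the same IDependency.
class ReportGenerator {
    ReportGenerator(IDependency dependency) { /* ... */ }
}
class AuditLogger {
    AuditLogger(IDependency dependency) { /* ... */ }
}

@Configuration
class CompositionRootConfig {
    // The choice of implementation is made exactly once, here.
    @Bean IDependency dependency() { return new ConcreteDependency(); }

    @Bean ReportGenerator reportGenerator(IDependency d) { return new ReportGenerator(d); }
    @Bean AuditLogger auditLogger(IDependency d) { return new AuditLogger(d); }
}

class Program {
    public static void main(String[] args) {
        var context = new AnnotationConfigApplicationContext(CompositionRootConfig.class);
        ReportGenerator generator = context.getBean(ReportGenerator.class);
        // the container resolved IDependency for every dependent class
    }
}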
We've found, when refactoring legacy applications and decoupling dependencies, that things tend to be easier when done with a two step process. The process includes both "poor man" and formal IoC container systems.
First: set up interfaces and establish "poor man's IoC" to implement them. This decouples dependencies without the added overhead (and learning curve) of a formal IoC setup.
This also reduces interference with the existing legacy code. Nothing like introducing yet another set of issues to debug.
Development team members are never at the same level of expertise or understanding. So it saves a lot of implementation time.
This also allows a footing for test cases.
This also establishes standards for a formal IoC container system later.
This can be implemented in steps over time by many people.
Secondly: Each IoC system has pros & cons.
Now that an application standard is established, an educated decision can be made in choosing an IoC container system. Implementing the IoC system then becomes a task of swapping the "poor man's" code with the new IoC system.
This can be implemented in steps over time and in parallel with the "poor man's" code. It is better to have one person lead this.
I work for a polyglot organisation where we code in multiple different languages and architectural styles.
I have been writing Service-Oriented Applications for around two years now, and have gotten comfortable with the way I do things, and that's the problem.
At the Big SOA level we all agree on how to use SOA principles to connect different pieces of the solution/enterprise.
At the component level we all differ slightly;
Currently I take the "every high-level component as a service" approach to SOA, favouring capability-driven interfaces and software fortresses. Whether the implementation is beans or WCF services, the pattern remains unchanged.
Like so: SOA Design Pattern
Others in my organisation opt for the rich domain model of standard classes underneath a facade.
Architectural styles like SOAP and REST have both been used at this level.
We also differ in the style of method call, command style messages vs more activity descriptive messages.
I have used both and am happy with either. My question is: what other methods are other engineers using to compose their SOA?
I am interested in new ideas, however wacky, to stimulate new ways of thinking around the topic of building an SOA.
I spent a while building up a component-based approach to SOA called SoaKit that might be helpful. See http://bradjcox.blogspot.com for the rationale.
The basic idea is to avoid tools-based approaches (JAX-WS) in favor of a suite of pre-built components (provided by SoaKit) that each handle commonly needed functions and can be snapped together easily to do the whole job. Example components: adding a SAML signed header, decrypting/encrypting message parts, XSLT/XQuery transforms, and so on, with each such component independently configurable.
If an enterprise is a city, a service is a house in that city, and SoaKit components are bricks for building houses. The blog has articles that contrast that with the mud-brick approach commonly used today. The analogy is to evoke the impact Roman brick architecture brought to building construction, seeking to bring the same impact to software.
Hope the notion is helpful. I shelved the idea because the world seemed bent on monolithic, magic push-button approaches (JAX-WS) that are nearly impossible to control or understand. That's been my experience with JAX-WS/Metro and WSO2, at least.
There's been a lot of interest in Service-Oriented Architecture (SOA) at my company recently. Whenever I try to see how we might use it, I always run up against a mental block. Crudely:
Object-orientation says: "keep data and methods that manipulate data (business processes) together";
Service-orientation says: "keep the business process in the service, and pass data to it".
Previous attempts to develop SOA have ended up converting object-oriented code into data structures and separate procedures (services) that manipulate them, which seems like a step backwards.
My question is: what patterns, architectures, strategies etc. allow SOA and OO to work together?
Edit: The answers saying "OO for internals, SOA for system boundaries" are great and useful, but this isn't quite what I was getting at.
Let's say you have an Account object which has a business operation called Merge that combines it with another account. A typical OO approach would look like this:
Account mainAccount = database.loadAccount(mainId);
Account lesserAccount = database.loadAccount(lesserId);
mainAccount.mergeWith(lesserAccount);
mainAccount.save();
lesserAccount.delete();
Whereas the SOA equivalent I've seen looks like this:
Account mainAccount = accountService.loadAccount(mainId);
Account lesserAccount = accountService.loadAccount(lesserId);
accountService.merge(mainAccount, lesserAccount);
// save and delete handled by the service
In the OO case the business logic (and entity awareness thanks to an ActiveRecord pattern) are baked into the Account class. In the SOA case the Account object is really just a structure, since all of the business rules are buried in the service.
Can I have rich, functional classes and reusable services at the same time?
My opinion is that SOA can be useful at a macro level, but each service will probably still be large enough to need several internal components. The internal components may benefit from an OO architecture.
The SOA API should be defined more carefully than the internal APIs, since it is an external API. The data types passed at this level should be as simple as possible, with no internal logic. If there is some logic that belongs with the data type (e.g. validation), there should preferably be one service in charge of running the logic on the data type.
SOA is a good architecture for communicating between systems or applications.
Each application defines a "service" interface which consists of the requests it will handle and the responses expected.
The key points here are well defined services, with a well defined interface. How your services are actually implemented is irrelevant as far as SOA is concerned.
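For instance, the "service" at this level can be nothing more than a well-defined contract: an interface plus simple request/response types (hypothetical names below), with the implementation behind it left entirely open:

// The externally visible contract: which requests are handled and which responses are expected.
interface AccountMergeService {
    MergeResponse merge(MergeRequest request);
}

// Simple boundary types with no internal logic.
record MergeRequest(long mainAccountId, long lesserAccountId) {}
record MergeResponse(boolean succeeded, String message) {}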
So you are free to implement your services using all the latest and greatest OO techniques, or any other methodology that works for you. (I have seen extreme cases where the "service" was implemented by actual humans entering data on a screen -- yet everything was still textbook SOA!)
I really think SOA is only useful for external interfaces (generally speaking, to those outside your company), and even then only in cases where performance doesn't really matter and you don't need ordered delivery of messages.
In light of that, I think they can coexist. Keep your applications working and communicating using the OO philosophy, and only when external interfaces (to third parties) are needed, expose them via SOA (this is not essential, but it is one way).
I really feel SOA is overused, or at least architectures with SOA are proposed too often. I don't really know of any big systems that use SOA internally, and I doubt they could. It seems more like something you might use for mashups or simple weather-forecast-type requests, not something to build serious systems on top of.
I think that this is a misunderstanding of object orientation. Even in Java, the methods are generally not part of the objects but of their class (and even this "membership" is not necessary for object orientation, but that is a different subject). A class is just a description of a type, so this is really a part of the program, not the data.
SOA and OO do not contradict each other. A service can accept data, organize them into objects internally, work on them, and finally give them back in whatever format is desired.
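As a sketch of that, reusing the Account example from the question (types are hypothetical): the service stays a thin boundary that reconstitutes rich domain objects, lets them do the work, and hands the results back:

// Rich domain object: data and the business rules that manipulate it stay together.
class Account {
    private long balanceCents;
    private boolean deleted;

    void mergeWith(Account other) {
        this.balanceCents += other.balanceCents;  // the business rule lives here
        other.deleted = true;
    }
}

// Hypothetical persistence port; could be a repository, DAO, ActiveRecord, etc.
interface AccountStore {
    Account load(long id);
    void save(Account account);
    void delete(Account account);
}

// Service at the system boundary: accepts ids/data, delegates the logic
// to the domain objects, and persists the result.
class AccountService {
    private final AccountStore store;
    AccountService(AccountStore store) { this.store = store; }

    public void merge(long mainId, long lesserId) {
        Account main = store.load(mainId);
        Account lesser = store.load(lesserId);
        main.mergeWith(lesser);                    // OO inside the service
        store.save(main);
        store.delete(lesser);
    }
}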
I've heard James Gosling say that one could implement SOA in COBOL.
If you read Alan Kay's own description of the origins of OOP, it describes a bunch of little computers interacting to perform something useful.
Consider this description:
Your X should be made up of Ys. Each Y should be responsible for a single concept, and should be describable completely in terms of its interface. One Y can ask another Y to do something via an exchange of messages (per their specified interfaces).
In some cases, an X may be implemented by a Z, which it manages according to its interface. No X is allowed direct access to another X's Z.
I think that the following substitutions are possible:
Term     Programming       Architecture
------   ---------------   ------------
X        Program           System
Y        Objects           Services
Z        Data structure    Database
------   ---------------   ------------
result   OOP               SOA
If you think primarily in terms of encapsulation, information hiding, loose coupling, and black-box interfaces, there is quite a bit of similarity. If you get bogged down in polymorphism, inheritance, etc. you're thinking programming / implementation instead of architecture, IMHO.
If you allow your services to remember state, then they can just be considered big objects with a possibly slow invocation time.
If they are not allowed to retain state, then they are just as you've said: operators on data.
It sounds like you may be dividing your system up into too many services. Do you have written, mutually agreed criteria for how to divide?
Adopting SOA does not mean throwing out all your objects; it is about dividing your system into large reusable chunks.