What is the counterpart of a "consumer" called? (OOP)

By "consumer," I mean a class/system that calls or uses another through its interface or API. What's the name for the other class/system, the one that gets called?
In a network context, a consumer is called a "client" and its counterpart is a "server", but I'm looking for a term that doesn't necessarily involve a network.
I thought the right word was "producer", but Martin Fowler calls it a "supplier" in some articles (like this one). Can anyone point to an authoritative source that defines this?

I thought it was "producer" as well, based on my experience with AMQP terminology, where a consumer can be regarded literally as a client endpoint for information.
Exceptions might be made whereby information can be sent back, but in "fan-out" data architectures the data typically only flows one way, and the producer has little to no obligation to ensure the consumer interpreted the data correctly.
The terminology is rarely used in OOP design.

I'd most likely say "provider" as the counterpart pairing for "consumer." You could also have dealer, merchant, seller, vendor, etc., but all of those carry nuances that make them more restrictive and moor them to certain industries or contexts. None is as universally applicable as provider-consumer.

Related

May a Repository call a UseCase in Clean Architecture?

This is a very tricky question, because when we check the rules it's not explicit that a Repository couldn't call a UseCase. However, it doesn't seem logical.
Are there any definitions or good practices here, and why shouldn't a Repository do this?
Thanks!
The short answer is "No" - it shouldn't, regardless of the context (in most of all cases). As to why - the definitions, principles and good practices - it may be helpful to think in terms of clear separation of concerns across your whole Clean Architecture implementation.
Consider this illustration as background for thinking about how one could organize the interactions (and dependencies) between the main parts of a Clean Architecture.
The main principles illustrated are the following:
Throughout its execution, the Use Case has different "data needs" (A and B). It doesn't implement the logic to fulfill them itself (since they require some specific technology). So the Use Case declares these as two Gateway interfaces ("ports"), in this example, and then calls them amid its logic.
Both of these interfaces declare some distinct set of operations that must be provided (implemented) from "outside". The Use Case, in its logic, needs and invokes all of those A and B operations. They are separated into A and B because they are different kinds of responsibilities and might be implemented by different parts of the system (but not necessarily). Let's say the Use Case needs to load persisted domain objects (as part of the A operations), but it also needs to retrieve configuration (as some key-value pairs), which are the B operations. These interfaces are segregated because the two sets of operations serve distinct purposes for the Use Case.
Anyhow, it's important design-wise that they both explicitly "serve" the Use Case's needs - meaning they are not generic entity-centric DAO / Repository interfaces; they ONLY have the operations that the Use Case actually needs and invokes, in exactly the shape and form (parameters, return values) that the Use Case needs them. They are "ports" to be "plugged into", as part of the whole Use Case.
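To make this concrete, here is a minimal sketch of the idea (in Swift; the use case, gateway, and domain object names are all hypothetical, invented purely for illustration):

protocol OrderLoadingGateway {            // the "A" operations: loading persisted domain objects
    func loadOrder(id: String) -> Order?
}

protocol ConfigurationGateway {           // the "B" operations: retrieving configuration
    func value(forKey key: String) -> String?
}

struct Order {                            // simplified domain object
    let id: String
    let total: Double
}

// The Use Case declares both ports and invokes them amid its logic,
// knowing nothing about how they are implemented.
final class ApproveOrderUseCase {
    private let orders: OrderLoadingGateway
    private let config: ConfigurationGateway

    init(orders: OrderLoadingGateway, config: ConfigurationGateway) {
        self.orders = orders
        self.config = config
    }

    func execute(orderId: String) -> Bool {
        guard let order = orders.loadOrder(id: orderId) else { return false }
        let limit = Double(config.value(forKey: "order.limit") ?? "0") ?? 0
        return order.total <= limit
    }
}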
The "outside" providers of these responsibilities are the Adapters (the implementers) of those needs. To fulfill them, they typically use some specific technology or framework - a database, a network call to some server, a message producer, a file operation, Spring's configuration properties, etc.
The Use Case is invoked (called) only by the Drivers side of the architecture (that is, the initiating side). The Use Case itself, in fact, is one of the "initiators" for its further collaborating parts (e.g., the Adapters).
On the other hand, the Use Case is "technically supported" (the declared parts of its needs are "implemented") by the Adapters side of the architecture.
Effectively, there is a clear separation of who calls what - meaning, at runtime the call stack progresses in a clear directional flow of control across this architecture.
The flow of control is always from Drivers towards Adapters (via the Use Case), never the other way around.
These are principles I have learned, researched, implemented, and corrected across my career in different projects. In other words, they've been shaped by the real world in terms of what has been practical and useful - in terms of separation of concerns and a clear division of responsibilities - in my experience. Yours may naturally differ, and there is no universal fit - CA is not a recipe, it is a mindset of software design, implementable in several (better and worse) ways.
Thinking simply, though, I would imagine that in your situation the Repository is your "data storage gateway" implementation of the Use Case's (Data) Gateway. The UC needs that data from "somewhere" - without caring where it comes from or how it is stored. This is very important - the whole core domain, along with the Use Case, needs to be framework- and I/O-agnostic.
Your Repository fulfills that need - it provides persisted domain objects. But the Use Case must not call it directly; instead it declares a Gateway (in Hexagonal, i.e. Ports & Adapters, architecture this is named a Port) with the needed operation(s) that your Repository has to implement. By using some specific (DB / persistence) technology, your Repository fulfills it - it implements one of the Use Case's "ports", as an Adapter.
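Continuing the hypothetical sketch from above, the Repository is then just the Adapter that implements that port with a specific technology:

// Hypothetical adapter: implements the Use Case's port using some
// persistence technology (stubbed here for illustration).
final class SqlOrderRepository: OrderLoadingGateway {
    func loadOrder(id: String) -> Order? {
        // A real implementation would query the database and map the
        // result back into a domain object.
        return Order(id: id, total: 42.0)
    }
}

// Wiring happens outside the core; the Use Case never names the Repository:
// let useCase = ApproveOrderUseCase(orders: SqlOrderRepository(), config: someConfigAdapter)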
With the above being said - on rare occasions, some Gateway implementations may demand exceptions. They might need several back-and-forth interactions, even across your architecture. These are rare and genuinely complex situations - and likely not necessary for a Repository implementation.
But if that really is an unavoidable case - then it's best if the Use Case, when calling the Gateway, provides a callback interface as a parameter of the call. During its processing, the Gateway's implementer can then call back using the operations in that interface - effectively implementing the needed back-and-forth. In most cases, though, this implies excessive logic and complexity at the adapters' level, which should be avoided - and it serves as a strong cue that the current solution should be redesigned.
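If you do end up needing it, a hedged sketch of that callback style (hypothetical names again) could look like this:

// The callback contract the Use Case passes along with the call.
protocol ImportProgressCallback {
    func didImport(count: Int)
    func shouldContinue() -> Bool
}

// The gateway receives the callback as a parameter, so its implementer
// can report back mid-operation without ever calling the Use Case directly.
protocol BulkImportGateway {
    func importAll(reportingTo callback: ImportProgressCallback)
}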

Clean Architecture: UseCase Output Port

I have a question regarding the "Use Case Output Port" in Uncle Bob's Clean Architecture.
In the image, Uncle Bob describes the port as an interface. I am wondering whether it has to be that way, or whether the invoked Use Case Interactor could also return a "simple" value. In either case, the Application and Business Rules Layer would define the interface that the Interface Adapters Layer has to use. So I think that, for simple invocations, just returning a value would not violate the architectural idea.
Is that true?
Additionally, I think this Output Port interface implemented by the presenter should work like the Observer pattern: the presenter simply observes the interactor for relevant "events". In the case of .NET, where events are first-class citizens, I think using one of those amounts to the same idea.
Are these thoughts compatible with the ideas behind Clean Architecture?
Howzit OP. I see your question is still unanswered after all these years and I hope we can reason about this and provide some clarity. I also hope I am understanding your question correctly. So with that in mind, here is how I see the solution:
The short answer is: a use case interactor should be able to return a simple value (by which I assume you mean string, int, bool, etc.) without breaking any architectural rules.
If we go over the onion architecture, which is very similar to the clean architecture, the idea is to encapsulate the core business logic in the center of the architecture: the domain. The corresponding concept in the clean architecture is the entities, with the use cases on top of them. We do this because we want to express our understanding of the business in a consistent way when we write our business rules.
The interface adapters allow us to convert the outside world to our understanding. What we want is a contract in our domain (use cases or entities) that ensures we will get what we need from the outside world, without knowing any implementation details. We also don't care what the outside world calls it; we convert their understanding to ours.
A common way to do this is to define an interface in the domain that establishes a contract saying: we expect to give "x", and you must then tell us what "y" is. The implementation can then sit outside the domain.
Now, to get to the core of your question. Let's assume that the core of our application tracks some complicated process with various stages. During one of these stages, we need to send data to a couple of external parties, and we want to keep a reference of some sort for auditing purposes. In such a case our interface may sit in the domain and state that we send our complicated object to some party and expect a string reference back. We can then use this string reference, fire some domain event, etc. The implementation can sit completely outside of the domain, call external APIs, and do its thing, but our core domain is unaffected. Hence returning a simple value has no impact on the architecture. The reverse of the above scenario may also hold true: we can say that we have a reference id of some sort, and the outside world needs to return us our understanding of some object.
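As a minimal sketch of that scenario (hypothetical names; an illustration, not code from the book):

struct ComplicatedProcess {               // simplified domain object
    let stage: Int
}

// Contract owned by the domain: we give "x" (the complicated object),
// the outside world must tell us "y" (a string reference for auditing).
protocol ExternalPartyGateway {
    func submit(_ process: ComplicatedProcess) -> String
}

final class TrackProcessInteractor {
    private let externalParty: ExternalPartyGateway
    init(externalParty: ExternalPartyGateway) {
        self.externalParty = externalParty
    }

    // Returning a simple value breaks no architectural rule: the shape of
    // the data is still dictated by the domain, not by the outside world.
    func advance(_ process: ComplicatedProcess) -> String {
        return externalParty.submit(process)
    }
}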
For the second part of your question: I would imagine it depends on the use case itself. If you present some idea out there and need to constantly react to it, domain events will get involved and you will have a structure very similar to the observer pattern. .NET encapsulates events very nicely, and this fits very well with clean architecture and domain-driven design.
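A minimal observer-style sketch of the output port (hypothetical names):

// The output port as an interface the presenter implements; the
// interactor pushes "events" outward, observer-style.
protocol TrackProcessOutputPort: AnyObject {
    func didAdvance(toStage stage: Int)
}

final class ProcessPresenter: TrackProcessOutputPort {
    func didAdvance(toStage stage: Int) {
        print("Process is now at stage \(stage)")  // format for the view here
    }
}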
Please let me know if the above makes sense or if I can clarify it in any way.

Using NSStringFromSelector to send method over a network

I'm currently taking a client-client approach to a simulation in Objective-C with two computers (mac1 and mac2).
I have a class Client, and each computer has an instance of Client on it (client1, client2). I expect both clients to stay synchronized: they will be equal apart from their memory locations.
When a user presses a key on mac1, I want both client1 and client2 to receive a given method call from class Client (so that they stay synchronized, i.e. they are the same apart from their memory locations on each mac).
To implement this, my current idea is to make two methods:
- (void)sendSelector:(SEL)selector toClient:(Client *)toClient, ...;
- (void)receiveSelector:(SEL)selector fromClient:(Client *)fromClient, ...;
sendSelector: uses NSStringFromSelector() to transform the selector into an NSString and sends it over the network (let's not worry about sending strings over the network for now).
On the other hand, receiveSelector: uses NSSelectorFromString() to transform the NSString back into a selector.
My first question/issue is: to what extent is this approach "standard" for networking with Objective-C?
My second question:
What about the method's arguments? Is there any way of "packing" a given class instance and sending it over the network? I understand the pointer problem when packing, but every instance in my program has a unique identity, so that should be no problem, since both clients will know how to retrieve the object from its identity.
Thanks for your help
Let me address your second question first:
What about the method's arguments? Is there any way of "packing" a given class instance and sending it over the network?
Many Cocoa classes implement/adopt the NSCoding protocol. This means they support a default implementation for serializing to a byte stream, which you could then send over the network. You would be well advised to use the NSCoding approach unless it's fundamentally not suited to your needs for some reason (i.e., use the highest level of abstraction that gets the job done).
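A minimal sketch of that approach (shown in Swift for brevity; the same Foundation APIs are callable from Objective-C, and the identity field is a hypothetical stand-in for your real Client state):

import Foundation

final class Client: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { true }

    let identity: String
    init(identity: String) { self.identity = identity }

    // Serialize the instance's state into the archive.
    func encode(with coder: NSCoder) {
        coder.encode(identity, forKey: "identity")
    }

    // Rebuild an equal instance from the archive.
    required init?(coder: NSCoder) {
        guard let identity = coder.decodeObject(of: NSString.self, forKey: "identity") as String? else {
            return nil
        }
        self.identity = identity
    }
}

do {
    // Archive to bytes suitable for the wire, then restore on the other side.
    let data = try NSKeyedArchiver.archivedData(withRootObject: Client(identity: "client1"),
                                                requiringSecureCoding: true)
    let restored = try NSKeyedUnarchiver.unarchivedObject(ofClass: Client.self, from: data)
    print(restored?.identity ?? "decode failed")
} catch {
    print("archiving failed: \(error)")
}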
Now for the more philosophical side of your first question; I'll rephrase your question as "is it a good approach to use serialized method invocations as a means of communication between two clients over a network?"
First, you should know that Objective-C has a not-often-used-any-more, but reasonably complete, implementation for handling remote invocations between machines with a high level of abstraction. It was called Distributed Objects. Apple appears to be shoving it under the rug to some degree (with good reason -- keep reading), but I was able to find an old cached copy of the Distributed Objects Programming Topics guide. You may find it informative. AFAIK, all the underpinnings of Distributed Objects still ship in the Objective-C runtime/frameworks, so if you wanted to use it, if only to prototype, you probably could.
I can't speculate as to the exact reasons that you can't seem to find this document on developer.apple.com these days, but I think it's fair to say that, in general, you don't want to be using a remote invocation approach like this in production, or over insecure network channels (for instance: over the Internet.) It's a huge potential attack vector. Just think of it: If I can modify, or spoof, your network messages, I can induce your client application to call arbitrary selectors with arbitrary arguments. It's not hard to see how this could go very wrong.
At a high level, let me recommend coming up with some sort of protocol for your application, with some arbitrary wire format (another person mentioned JSON -- It's got a lot of support these days -- but using NSCoding will probably bootstrap you the quickest), and when your client receives such a message, it should read the message as data and make a decision about what action to take, without actually deriving at runtime what is, in effect, code from the message itself.
From a "getting things done" perspective, I like to share a maxim I learned a while ago: "Make it work; Make it work right; Make it work fast. In that order."
For prototyping, maybe you don't care about security. Maybe when you're just trying to "make it work" you use Distributed Objects, or maybe you roll your own remote invocation protocol, as it appears you've been thinking of doing. Just remember: you really need to "make it work right" before releasing it into the wild, or those decisions you made for prototyping expedience could cost you dearly. The best approach here will be to create a class or group of classes that abstracts away the network protocol and wire format from the rest of your code, so you can swap out networking implementations later without having to touch all your code.
One more suggestion: I read in your initial question a desire to 'keep an object (or perhaps an object graph) in sync across multiple clients.' This is a complex topic, but you may wish to employ a "Command Pattern" (see the Gang of Four book, or any number of other treatments in the wild.) Taking such an approach may also inherently bring structure to your networking protocol. In other words, once you've broken down all your model mutation operations into "commands" maybe your protocol is as simple as serializing those commands using NSCoding and shipping them over the wire to the other client and executing them again there.
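A hedged sketch of that command idea (hypothetical command and field names; Codable/JSON is used here for brevity, but NSCoding would work the same way):

import Foundation

// Each model mutation becomes a small, serializable command that both
// clients know how to execute.
struct MoveObjectCommand: Codable {
    let objectId: String   // the stable identity both clients share
    let x: Double
    let y: Double
}

do {
    // Sender: serialize the command and ship the bytes over the wire.
    let payload = try JSONEncoder().encode(MoveObjectCommand(objectId: "player1", x: 10, y: 20))

    // Receiver: decode the bytes and apply the same mutation locally.
    let command = try JSONDecoder().decode(MoveObjectCommand.self, from: payload)
    print("move \(command.objectId) to (\(command.x), \(command.y))")
    // lookUpObject(command.objectId)?.move(to: command.x, command.y)  // hypothetical model call
} catch {
    print("serialization failed: \(error)")
}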
Hopefully this helps, or at least gives you some starting points and things to consider.
These days it would seem that the most standard way is to package everything up as JSON.

How do you determine how coarse or fine-grained a 'responsibility' should be when using the single responsibility principle?

In the SRP, a 'responsibility' is usually described as 'a reason to change', so that each class (or object?) should have only one reason someone should have to go in there and change it.
But if you take this to an extremely fine grain, you could say that an object adding two numbers together is a responsibility and a possible reason to change. Therefore the object should contain no other logic, because that would produce another reason for change.
I'm curious whether anyone out there has strategies for 'scoping' the single responsibility principle that are a bit more objective?
It comes down to the context of what you are modeling. I've done some extensive writing and presenting on the SOLID principles, and I specifically address your question in my discussions of Single Responsibility.
The following first appeared in the Jan/Feb 2010 issue of Code Magazine, and is available online at "S.O.L.I.D. Software Development, One Step at a Time"
The Single Responsibility Principle says that a class should have one, and only one, reason to change.
This may seem counter-intuitive at first. Wouldn't it be easier to say that a class should only have one reason to exist? Actually, no: one reason to exist could very easily be taken to an extreme that would cause more harm than good. If you take it to that extreme and build classes that have one reason to exist, you may end up with only one method per class. This would cause a large sprawl of classes for even the most simple of processes, causing the system to be difficult to understand and difficult to change.
The reason that a class should have one reason to change, instead of one reason to exist, is the business context in which you are building the system. Even if two concepts are logically different, the business context in which they are needed may necessitate them becoming one and the same. The key point of deciding when a class should change is not based on a purely logical separation of concepts, but rather the business's perception of the concept. When the business perception and context has changed, then you have a reason to change the class. To understand what responsibilities a single class should have, you need to first understand what concept should be encapsulated by that class and where you expect the implementation details of that concept to change.
Consider an engine in a car, for example. Do you care about the inner workings of the engine? Do you care that you have a specific size of piston, camshaft, fuel injector, etc.? Or do you only care that the engine operates as expected when you get in the car? The answer, of course, depends entirely on the context in which you need to use the engine.
If you are a mechanic working in an auto shop, you probably care about the inner workings of the engine. You need to know the specific model, the various part sizes, and other specifications of the engine. If you don't have this information available, you likely cannot service the engine appropriately. However, if you are an average everyday person who only needs transportation from point A to point B, you will likely not need that level of information. The notion of the individual pistons, spark plugs, pulleys, belts, etc., is almost meaningless to you. You only care that the car you are driving has an engine and that it performs correctly.
The engine example drives straight to the heart of the Single Responsibility Principle. The contexts of driving the car vs. servicing the engine provide two different notions of what should and should not be a single concept - a reason for change. In the context of servicing the engine, every individual part needs to be separate. You need to code them as single classes and ensure they are all up to their individual specifications. In the context of driving a car, though, the engine is a single concept that does not need to be broken down any further. You would likely have a single class called Engine, in this case. In either case, the context has determined what the appropriate separation of responsibilities is.
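To put the engine analogy into (hypothetical) code, the two contexts could shape the classes like this:

// Driving context: the engine is one concept; no further breakdown needed.
final class Engine {
    func start() { /* ... */ }
}

// Servicing context: each part is its own concept with its own specifications,
// because each is a separate reason to change.
struct Piston { let boreMillimeters: Double }
struct Camshaft { let liftMillimeters: Double }
struct FuelInjector { let flowRateCCPerMinute: Double }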
I tend to think in terms of "velocity of change" of the business requirements rather than "reason to change".
The question is really how likely things are to change together, not whether they could change at all.
The difference is subtle, but it helps me. Let's consider the example on Wikipedia about the reporting engine:
If the likelihood that the content and the template of the report change at the same time is high, they can be one component, because they are apparently related. (They could also be two.)
But if the likelihood that the content changes without the template is high, then they must be two components, because they are not really related. (It would be dangerous to have just one.)
But I know that's a personal interpretation of the SRP.
Also, a second technique that I like is: "Describe your class in one sentence." It usually helps me identify whether there is a clear responsibility or not.
I don't see performing a task like adding two numbers together as a responsibility. Responsibilities come in different shapes and sizes but they certainly should be seen as something larger than performing a single function.
To understand this better, it is probably helpful to clearly differentiate between what a class is responsible for and what a method does. A method should "do only one thing" (e.g. add two numbers, though for most purposes '+' is already a method that does that), while a class should present a single, clear "responsibility" to its consumers. Its responsibility is at a much higher level than a method's.
A class like Repository has a clear and singular responsibility. It has multiple methods, like Save and Load, but a clear responsibility: providing persistence support for Person entities. A class may also coordinate and/or abstract the responsibilities of dependent classes, again presenting this as a single responsibility to other consuming classes.
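For instance (a hypothetical sketch):

struct Person {
    let id: String
    let name: String
}

// Multiple methods, but one responsibility presented to consumers:
// persistence support for Person entities.
protocol PersonRepository {
    func save(_ person: Person)
    func load(id: String) -> Person?
}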
The bottom line is that if applying SRP leads to single-method classes whose whole purpose seems to be just wrapping the functionality of that method in a class, then SRP is not being applied correctly.
A simple rule of thumb I use is that the level or granularity of responsibility should match the level or granularity of the "entity" in question. Obviously the purpose of a method will always be more precise than that of a class, or service, or component.
A good strategy for evaluating the level of responsibility can be to use an appropriate metaphor. If you can relate what you are doing to something that exists in the real world, it can give you another view of the problem you're trying to solve - including helping you identify appropriate levels of abstraction and responsibility.
@Derick Bailey: nice explanation.
Some additions: it is totally acceptable that the application of SRP is context-dependent.
The question still remains: are there any objective ways to determine whether a given class violates SRP?
Some design contexts are quite obvious (like the car example by Derick), but the context in which a class's behaviour has to be defined often remains fuzzy.
For such cases, it can be helpful to analyse the fuzzy class behaviour by splitting its responsibilities into different classes and then measuring the impact of the new behavioural and structural relations that emerge from the split.
As soon as the split is done, the reasons to keep the split responsibilities separate, or to merge them back into a single responsibility, become obvious at once.
I have applied this approach, and it has led to good results for me.
But my search for 'objective ways of defining a class responsibility' still continues.
I respectfully disagree when Chris Nicola above says that "a class should present a single clear 'responsibility' to its consumers".
I think SRP is about having a good design inside the class, not about the class's customers.
To me it's not very clear what a responsibility is, and the proof is the number of questions this concept raises.
"single reason to change"
or
"if the description contains the word
"and" then it needs to be split"
leads to the question: where is the limit? At the end, any class with 2 public methods has 2 reasons to change, isn't it?
For me, the true SRP leads to the Facade pattern, where you have a class that simply delegades the calls to other classes
For example:
class Modem {
    func send() { /* ... */ }
    func receive() { /* ... */ }
}

Refactors to ==>

class ModemSender {
    func send() { /* ... */ }
}
class ModemReceiver {
    func receive() { /* ... */ }
}

class Modem {
    private let sender = ModemSender()
    private let receiver = ModemReceiver()
    func send() { sender.send() }          // delegates to ModemSender
    func receive() { receiver.receive() }  // delegates to ModemReceiver
}
Opinions are welcome.

responsibility based modeling versus class reasons to change

In this text I read:
Be alert for a component that is just a glorified responsibility. A component is supposed to capture an abstraction that has a purpose in the system. It may happen that what appears at one moment as a meaningful component is really just a single responsibility left on its own. That responsibility could be assigned to a component.
This confuses me. If a class should have only one reason to change, it seems like it should have one responsibility. But now it seems I'm taking this too narrowly. Can someone give an explanation of responsibility and reason to change in the context of responsibility-based modeling? Can a class have more than one responsibility and still have only one reason to change (or the other way around)?
Read about Class-Responsibility-Collaboration modeling (or design)
http://www.agilemodeling.com/artifacts/crcModel.htm
http://alistair.cockburn.us/Using+CRC+cards
http://users.csc.calpoly.edu/~jdalbey/SWE/CaseStudies/ATMSim/CRCmodel.html
http://c2.com/doc/oopsla89/paper.html
A class may have several responsibilities. It always represents a single "thing".
The "one reason to change" rule doesn't apply to responsibilities. Period.
The "one reason to change" rule should be used as follows.
It doesn't mean "1". It means "as few as possible".
It applies to the "interface" or "underlying abstraction" or "concept" of a class. A class should encapsulate few concepts. When the core concept changes, the class changes.
Many simple things are better than a few complex things. It's easier to recombine and modify simple things.
Inside every complex thing are many simple things trying to be free.
It's hard to define "simple", but "one concept" is close. "one thing to change" is also a helpful test for "simplicity".
Finally, "one reason to change" doesn't literally mean "1".
The way I understand it, the warning about "glorifying a responsibility into a component" means that you need to be careful not to translate responsibilities into system components directly.
For example, in an email system, the user may approach the system with the goal of initiating a message to a recipient. It's the system's responsibility to make this possible.
The user may also approach the system to read and reply to an email. It's the system's responsibility to make this possible, too.
But does this mean that there need to be two components "initiate new email" and "reply to email" in the system? No. A general "compose email" component would be able to handle both requirements.
So in this case, the component "compose email" is responsible for the user goals "initiate new mail" and "reply to mail". But it would only need to change if its core concept changes ("how emails are composed").
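A tiny sketch of that single component serving both goals (hypothetical names):

struct Email { let sender: String; let body: String }
struct Draft { var to: String; var body: String }

// One core concept ("how emails are composed") serving two user goals.
final class EmailComposer {
    func composeNew(to recipient: String) -> Draft {
        return Draft(to: recipient, body: "")
    }
    func composeReply(to original: Email) -> Draft {
        return Draft(to: original.sender, body: "> " + original.body)
    }
}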
Look again closely at the following phrase by Cockburn: "A component is supposed to capture an abstraction that has a purpose in the system". A purpose in the system (reason to change) is not the same as the purpose of satisfying a user goal (responsibility).
To make a long story short: as I understand it, a component ideally has one core concept. It may have several responsibilities. But as I see it, one responsibility may not be assigned to more than one component.