Advice on splitting up a process involving multiple actors into Use Cases - requirements

Let's say I am modelling a process that involves a conversation or exchange between two actors. For this example, I'll use something easily understandable:
1. Supplier creates a price list.
2. Buyer chooses some items to buy and sends a purchase order.
3. Supplier receives the purchase order and sends the goods.
4. Supplier sends an invoice.
5. Buyer receives the invoice and makes a payment.
Of course each of those steps could in itself be quite complicated. How would you split this up into use cases in your requirements document?
If this process were treated as a single use case it could fill a book.
Alternatively, making a use case out of each of the above steps would hide some of the essential interaction and flow that should be captured. Would it make sense to have a use case that starts at "Receive a purchase order" and finishes at "Send an invoice", and then another that starts at "Receive an invoice" and ends at "Make a payment"?
Any advice?

The way I usually approach such tasks is by just starting to create UML Use Case and high-level Activity diagrams for the process. Don't bother about specifics, just give it your best shot.
Once you have a draft you will almost immediately see how it could be improved. You can then go on refactoring it: making the use cases smaller, structuring large Activities and so on. Alternatively, you could lump a couple of use cases together if they are too small.
Without knowing the details of your project, I would just go ahead and make each step a separate use case; they all seem to be self-contained and could be described without any cross-references. If, while doing so, you find any dependencies, you can always rethink the approach.
Also consider using 'extend' and 'include' relationships for common elements like logging, security, etc.

Yes, there are many possibilities here. In your example above it could be even more complicated by the Buyer making multiple partial payments to pay the bill.
You probably need to create complete workflow use cases. Splitting each of the above steps into its own use case may not prove useful, as some of the steps will have pre- and post-conditions.
I work on the QuickBooks source code and the number of ways that a transaction can flow through the system is daunting. It is almost impossible for our QA guys to test every combination.

Related

Authentication process by blockchain database

I am new to blockchain technology and I'd like to start my project with product authentication. I am curious whether it would be a good choice to make use of it.
For example, let's say I have some real physical products and I want to check their originality. They have a unique serial number or an attached electronic identifier (RFID, for example).
Take this simple Python blockchain as a reference: https://medium.com/crypto-currently/lets-build-the-tiniest-blockchain-e70965a248b
In the Block class:
class Block:
    def __init__(self, index, timestamp, data, previous_hash):
        self.index = index
        self.timestamp = timestamp
        self.data = data
        self.previous_hash = previous_hash
        self.hash = self.hash_block()
We would create a new block every time a product is scanned (by a phone, for example). What info should be in data then? Product name, serial number, action type?
At the start, right after all the products have been created, each one would be scanned once. So for the first 100 products and their initial scans, there would be 100 blocks in the blockchain.
How can the authentication process work here? Is there a way to scan a product (its ID) and use this blockchain database to make sure it's an original one? Is the technology useful in this situation?
Yes, the use-case is actually a pretty standard one. There are many such examples in the supply-chain management industry regarding such uses. Everledger for instance, verifies diamond pieces and their origin.
How can the authentication process work here? Is there a way to scan a product (its ID) and use this blockchain database to make sure it's an original one? Is the technology useful in this situation?
I think you should refer to it as product (origin) verification. It is quite simple as long as you abstract out the blockchain technology itself. This is what I mean by abstracting out the blockchain technology: think of a blockchain as an immutable ledger (database) into which data can be inserted once but then can never be changed or removed from the middle, and from which you can always read.
Just assume there is a blockchain technology (I'll add the details about the blockchain in the end.) Now, by definition, you can always add data to it, in your case some tracking number/ QR/ ID, etc. When you have to verify the product, you have to make sure that the entry exists in the blockchain for the corresponding product. Simple as that. And yes, this is one of the best known use cases of the blockchain, especially in a shared data ecosystem with multiple systems interacting with the same database.
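To make that concrete, here is a minimal sketch in Python that extends the Block class from the linked article. The shape of data, the serial numbers, and the register/is_registered helpers are my own assumptions, not part of the article: registration appends one block per product scan, and verification is just a lookup over the chain.

```python
import hashlib
import json
import time

class Block:
    """Minimal block holding one product event (the shape of `data` is an assumption)."""
    def __init__(self, index, timestamp, data, previous_hash):
        self.index = index
        self.timestamp = timestamp
        self.data = data  # e.g. {"serial": "SN-001", "action": "registered"}
        self.previous_hash = previous_hash
        self.hash = self.hash_block()

    def hash_block(self):
        payload = "{}{}{}{}".format(
            self.index, self.timestamp,
            json.dumps(self.data, sort_keys=True), self.previous_hash)
        return hashlib.sha256(payload.encode()).hexdigest()

def register(chain, serial):
    """Append a registration block for a product's initial scan."""
    prev = chain[-1]
    chain.append(Block(prev.index + 1, time.time(),
                       {"serial": serial, "action": "registered"}, prev.hash))

def is_registered(chain, serial):
    """Verification: the product passes if some block records its serial."""
    return any(block.data.get("serial") == serial for block in chain)

chain = [Block(0, time.time(), {"action": "genesis"}, "0")]
for serial in ("SN-001", "SN-002"):
    register(chain, serial)

print(is_registered(chain, "SN-001"))  # True: registered above
print(is_registered(chain, "SN-999"))  # False: never registered
```

In a real deployment the chain would of course live on a distributed network rather than in a Python list; the point is only that verification reduces to "does a registration entry exist for this serial".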
The article that you're referring to is a very simple explanation of a blockchain from a programmer's perspective. Block time and block frequency are variables that vary between use cases. I'd suggest you look into already mature blockchain technologies, deploy them, and focus on your use case. You can use Ethereum as a local node and then use web3.py, a very mature Python library, to interact with your blockchain. Or you can simply use a distributed ledger (the Hyperledger projects, for example) or, even simpler (and in my opinion, much better), BigchainDB. With all these technologies, you can store any kind of information you like on the blockchain.

Good practice to fetch detail api data in react-redux app

What's the best practice for fetching detail data in a React app when you are dealing with multiple master-detail views?
For example, if you have:
- a /rest/departments API which returns the list of departments
- a /rest/departments/:departmentId/employees API which returns all employees within a department
To fetch all departments I use:
componentDidMount() {
    this.props.dispatch(fetchDepartments());
}
but then I'll need logic to fetch all employees per department. Would it be a good idea to call the employee action creator for each department in the department reducer logic?
Dispatching employee actions in the render method does not look like a good idea to me.
Surely it is a bad idea to call an employee action creator inside the department reducer, as reducers should be pure functions; you should do it in your fetchDepartments action creator instead.
Anyway, if you need to get all the employees for every department (not just the selected one), it is not ideal to make many API calls: if possible, I would ask the backend developers for an endpoint that returns the array of departments and, for each department, an embedded array of employees (if the numbers aren't too big, of course).
Big old "It depends"
This is something where, in the end, you will need to pick an approach and see how it works out with your specific data and user needs. This somewhat deals with network issues as well, such as latency. In a very well networked environment, such as the top-3 insurance company I was a net admin for, you can achieve super low latency network calls. In such a case, multiple network requests behave significantly differently than they would over a home internet connection. Even then, you have to consider a wide range of possibilities. And you ALWAYS need to consider your end goals.
(Not to get too down in the technical aspects, but latency can fairly accurately be defined as "the time you are waiting for a network request to actually start sending data". A classic example of where this can be important is online first person shooter gaming. You click shoot, and the data is not transmitted as fast as you would like since the network is waiting to send the data, then you die. A classic example where bandwidth is more useful than latency is downloading or uploading large files. If you have to wait a second or two for the actual data to move, but when it moves you can download a GB in seconds, then oh well, I'll take it.)
Currently, I have our website making multiple calls to load dynamic menus and dynamic content. It is very small data. It is done in three separate calls. On the internet. It's "ok", but I would not say that it is "good". Since users are waiting for all of it to even start, I might as well throw it all in a single network call. Also, in case two calls go ok, then the third chokes a bit, the user may start to navigate, then more menus pop in and it is not ideal. This is why regardless, you have to think about your specific needs, and what range of possible use cases may likely apply. (I am currently re-writing the entire site anyways)
As a previous (in my opinion "good") answer stated, it probably makes sense to have the whole data set shot to you in one gulp. It appears to me this is an internal, or at least commercial app, with decent network and much more importantly, no risk of losing customers because your stuff did not load super fast.
That said, if things do not work out well with that, especially if you are talking large data sets, then consider a lazy loading architecture. For example, your user cannot get to an employee until they see the departments. So it may be ok, depending on your network and large data size, to load departments, and then after it returns initiate an asynchronous load of the employee data. The employee data is now being loaded while your user browses the department names.
A huge question you may want to clarify is whether or not any employee list data is rendered WITH the departments. In one of my cases, I have a work order system that I load after login, but lazily, and when it is loaded it throws a badge on the Work Order menu to show how many are outstanding. Since I do not have a lot of orders, it is basically a one second wait. No biggie. It is not like the user has to wait for it to load to begin work. If you wanted a badge per department, then it may get weird. You could, if you load by department, have multiple badges popping in randomly. In this case, it may cause user confusion, and it is probably a good choice to load it in one large chunk. If the user has to wait anyways, it may produce one less call from a user asking "is it ok that it is doing this?". Especially with software for the workplace, it is more acceptable to have to wait for an initial load at the beginning of the work day.
To be clear, with all of these complications to consider, it is extremely important that you develop with as good of software coding practices as you are able. This way, you can code one solution, and if it does not meet your performance or user needs, it is not a nightmare to make a change. In a general case with small data, I would just load it in one big gulp to start, and if there are problems with load times complicate it from there. Complicating code from the beginning for no clearly needed reason is a good way to clutter your code up to the point of making it completely unwieldy to maintain.
On a third note, if you are dealing with enterprise size data sets, that is a whole different thing. Then you have to deal with pagination, and yes it gets a bit more complicated.
I'm not sure what fetchDepartments does exactly but I'd ensure the actual fetch request is executed from a Redux middleware. By doing it from middleware, you can fingerprint / cache / debounce all your requests and make a single one across the app no matter how many components request the thing.
In general, middleware is the best place to handle asynchronous side effects.

Clean Architecture - Robert Martin - Use Case Granularity

I am considering implementing Robert Martin's Clean Architecture in a project and I am trying to find out how to handle non-trivial use cases.
I am finding it difficult to scale the architecture to complex/composed use cases, especially use cases where the actor is the system as opposed to a user, as in system performing some sort of batch processing.
For illustration purposes, let's assume a use case like "System updates all account balances" implemented in pseudocode like
class UpdateAllAccountBalancesInteraction {
    function Execute() {
        Get a list of all accounts
        For each account
            Get a list of all new transactions for account
            For each transaction
                Perform some specific calculation on the transaction
            Update account balance
    }
}
In addition, "Get a list of all accounts", "Get a list of all new transactions for account", "Perform some specific calculation on the transaction", "Update account balance" are all valid use cases of their own and each of them is already implemented in its own interaction class.
A few questions arise:
- Is the use case "System updates all account balances" even a valid use case, or should it be broken down into smaller use cases (although from a business perspective it seems to make sense; it is a legitimate business scenario)?
- Is UpdateAllAccountBalancesInteraction a legitimate interaction?
- Is an interaction allowed to/supposed to orchestrate other interactions?
- Does code that orchestrates other interactions really belong somewhere else?
- Is it OK to have UpdateAllAccountBalancesInteraction as an interaction, but have it call functions shared by the other interactors rather than act as an orchestrator of other interactors?
Clearly, you have a need for high-level interactions that share some (or a lot of) common functionality with lower-level interactions. This is OK.
If the business requires a use case called UpdateAllAccountBalances, then it is a valid use case, and it's good that you're naming it in a way that reflects the business logic.
It's o.k. for one interaction to call other interactions, if this reflects your business logic accurately. Ask yourself the following question: If the requirements for UpdateAccountBalance change, should this also affect UpdateAllAccountBalances in exactly the same way? If the answer is yes, then the best way to achieve this is to have UpdateAllAccountBalances call UpdateAccountBalance, because otherwise, you'll need to make a change in two places in order to keep them consistent. If the answer is no, then you want to decouple the two interactions, and this can be done by having them call shared functions.
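As a sketch of the "one interaction calls the other" option described above (the class names, the execute signatures, and the in-memory repository are all invented for illustration; they are not from Clean Architecture itself):

```python
class InMemoryAccountRepository:
    """Toy repository standing in for the real data gateway (an assumption)."""
    def __init__(self, balances, new_transactions):
        self.balances = balances                  # account_id -> balance
        self.new_transactions = new_transactions  # account_id -> [amounts]

    def all_account_ids(self):
        return list(self.balances)

class UpdateAccountBalanceInteractor:
    """The single-account use case, usable on its own."""
    def __init__(self, repo):
        self.repo = repo

    def execute(self, account_id):
        # Apply each pending transaction to the account balance.
        for amount in self.repo.new_transactions.pop(account_id, []):
            self.repo.balances[account_id] += amount

class UpdateAllAccountBalancesInteractor:
    """The batch use case, orchestrating the single-account interactor.
    A change in the per-account rules then affects the batch automatically."""
    def __init__(self, repo, update_one):
        self.repo = repo
        self.update_one = update_one

    def execute(self):
        for account_id in self.repo.all_account_ids():
            self.update_one.execute(account_id)

repo = InMemoryAccountRepository(
    balances={"acc-1": 100, "acc-2": 50},
    new_transactions={"acc-1": [10, -5], "acc-2": [25]})
UpdateAllAccountBalancesInteractor(repo, UpdateAccountBalanceInteractor(repo)).execute()
print(repo.balances)  # {'acc-1': 105, 'acc-2': 75}
```

The decoupled alternative from the same paragraph would keep the two interactors independent and move the shared transaction-application logic into a function both call.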
My suggestion is to approach the problem differently: represent the problem itself in a domain model rather than using a procedural approach. You're seeing some of the problems with use cases, one of which is that their granularity is generally indeterminate.
In a domain model, the standard way to represent a specific thing (i.e. an "account") is with two objects. One representing the specific account, and an associated object representing those things common to all accounts.
AccountCatalog (1) ---- (*) SpecificAccount
In your example, SpecificAccount would have a service (method) "UpdateBalance". AccountCatalog has a service (method) "UpdateAllBalances", which sends a message UpdateBalance to all SpecificAccounts in its collection.
Now anything can send the UpdateAllBalances message. Another object, human interaction, or another system.
I should note that it can be common for an account to "know" (i.e. maintain) its own balance, rather than being told to update it.
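A rough Python rendition of that object pair (the method and attribute names are my own; the "message" is an ordinary method call here):

```python
class SpecificAccount:
    """One account; it knows (maintains) its own balance."""
    def __init__(self, account_id, balance=0):
        self.account_id = account_id
        self.balance = balance
        self.pending = []  # transactions not yet applied

    def update_balance(self):
        # Apply every pending transaction to the balance.
        while self.pending:
            self.balance += self.pending.pop()

class AccountCatalog:
    """Things common to all accounts; owns the (1) ---- (*) collection."""
    def __init__(self):
        self.accounts = []

    def add(self, account):
        self.accounts.append(account)

    def update_all_balances(self):
        # Send the UpdateBalance message to every SpecificAccount.
        for account in self.accounts:
            account.update_balance()

catalog = AccountCatalog()
account = SpecificAccount("acc-1", balance=100)
account.pending = [10, -5]
catalog.add(account)

# Anything can send this message: another object, a UI, or another system.
catalog.update_all_balances()
print(account.balance)  # 105
```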

Should the rule "one transaction per aggregate" be taken into consideration when modeling the domain?

Taking into consideration the domain events pattern and this post, why do people recommend keeping one aggregate per transaction? There are good cases where one aggregate could change the state of another. Even removing an aggregate (or altering its identity) will alter the state of other aggregates that reference it. Some people say that keeping one transaction per aggregate helps scalability (keeping one aggregate per server). But doesn't this type of thinking break a fundamental characteristic of DDD: being technology agnostic?
So, based on the statements above and on your experience, is it bad to design aggregates and domain events that lead to changes in other aggregates, which will lead to having two or more aggregates per transaction (e.g., when a new order is placed with 100 items, change the customer's state from normal to V.I.P.)?
There are several things at play here and even more trade-offs to be made.
First and foremost, you are right: you should think about the model first. After all, the interplay of language, model and domain is what we're doing this all for: coming up with carefully designed abstractions as a solution to a problem.
The tactical patterns from the DDD book are a means to an end. In that respect we shouldn't overemphasize them, even though they have served us well (and caused major headaches for others). They help us find "units of consistency" in the model, things that change together, a transactional boundary. And therein lies the problem, I'm afraid. When something happens and when the side effects of it happening should be visible are two different things. Yet all too often they are treated as one, and thus cause this uncomfortable feeling, to which we respond by trying to squeeze everything within the boundary, without questioning. Still, we're left with that uncomfortable feeling. There are a lot of things that logically can be treated as a "whole change", whereas physically there are multiple small changes. It takes skill and experience, or even blunt trying, to know when that is the case. Not everything can be solved this way, mind you.
To scale or not to scale, that is often the question. If you don't need to scale, keep things on one box, be content with a certain backup/restore strategy, you can bend the rules and affect multiple aggregates in one go. But you have to be aware you're doing just that and not take it as a given, because inevitably change is going to come and it might mess with this particular way of handling things. So, fair warning. More subtle is the question as to why you're changing multiple aggregates in one go. People often respond to that with the "your aggregate boundaries are wrong" answer. In reality it means you have more domain and model exploration to do, to uncover the true motivation for those synchronous, multi-aggregate changes. Often a UI or service is the one that has this "unreasonable" expectation. But there might be other reasons and all it might take is a different set of abstractions to solve the same problem. This is a pretty essential aspect of DDD.
The example you gave seems like something I could handle as two separate transactions: an order was placed, and as a reaction to that, because the order was placed with 100 items, the customer was made a VIP. As MikeSW hinted at in his answer (I started writing mine after he posted his), the question is when, who, how, and why this customer status change should be observed. Basically it's the "next" behavior that dictates the consistency requirements of the previous behavior(s).
An aggregate groups related business objects, while an aggregate root (AR) is the 'representative' of that aggregate. The AR itself is an entity modeling a (bigger, more complex) domain concept. In DDD a model is always relative to a context (the bounded context, BC), i.e. that model is valid only in that BC.
This allows you to define a model representative of the specific business context, and you don't need to shove everything into one model only. An Order is an AR in one context, while in another it is just an id.
Since an AR pretty much encapsulates all the lower concepts and business rules, it acts as a whole i.e as a transaction/unit of work. A repository always works with AR because 1) a repo always deals with business objects and 2) the AR represents the business object for a given context.
When you have a use case involving two or more ARs, the business workflow and the correct modelling of that use case are paramount. In a lot of cases those ARs can be modified independently (one doesn't care about the other), or an AR changes as a result of another AR's behaviour.
In your example, it's pretty trivial: when the customer places an order for 100 items, a domain event is generated and published. Then you have a handler which checks whether the order complies with the customer promotion rules and, if it does, issues a command which has the result of changing the client's state to VIP.
Domain events are very powerful and allow you to implement transactions in an eventually consistent environment. The old db transaction is an implementation detail, usually used when persisting one AR (remember, an AR is treated as a logical unit, but persisting one may involve multiple tables, hence the db transaction).
Eventual consistency is a 'feature' of domain events which fits a rich domain (and, actually, the real world) naturally. For some cases you might need immediate consistency; however, those are particular cases, and they are related to the UI rather than to how the domain works. Of course, it really varies from one domain to another. In your example, the customer won't mind becoming a VIP 2 seconds or 2 minutes after the order was placed instead of in the same millisecond.
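The event-then-handler flow described above can be sketched like this (the event name, the handler, and the tiny in-process bus are illustrative assumptions; in production the event would typically cross a message broker, making the second transaction eventually consistent):

```python
from dataclasses import dataclass

@dataclass
class OrderPlaced:
    customer_id: str
    item_count: int

class EventBus:
    """Tiny in-process publish/subscribe bus, a stand-in for real infrastructure."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event):
        for handler in self.handlers.get(type(event), []):
            handler(event)

customers = {"c-1": "normal"}  # customer_id -> status (toy store)

def promotion_handler(event):
    # Transaction 2: react to the event, possibly seconds or minutes later.
    if event.item_count >= 100:
        customers[event.customer_id] = "VIP"

bus = EventBus()
bus.subscribe(OrderPlaced, promotion_handler)

# Transaction 1: the Order aggregate is persisted, then the event is published.
bus.publish(OrderPlaced(customer_id="c-1", item_count=100))
print(customers["c-1"])  # VIP
```

Each aggregate is still modified in its own transaction; the coupling between them is reduced to the event.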

OOD: order.fill(warehouse) -or- warehouse.fill(order)

Which form is correct OO design?
"Matter of taste" is a mediocre designer's easy way out.
Any good reads on the subject?
I want conclusive proof one way or the other.
EDIT: I know which answer is correct (wink!). What I really want is to see any arguments in support of the former form (order.fill(warehouse)).
There is no conclusive proof and to a certain extent it is a matter of taste. OO is not science - it is art. It also depends on the domain, overall software structure, etc. and so your small example cannot be extrapolated to any OO problem.
However, here is my take based on your information:
Warehouses store things. They don't fill orders. Orders request things. They don't know which warehouse (or warehouses) the things come from. So a dependency in either direction between the two does not feel right.
In the real world, and in the software, something would mediate between the two. #themel indicated the same in the comment to your question, though I prefer something that sounds less like a design-pattern name. Perhaps something like:
ShippingPlan plan = shippingPlanner.fill(order).from(warehouses).ship();
However, it is a matter of taste :-)
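Translating that one-liner into Python (the class names follow the snippet above, but the sourcing rule is an invented assumption, and `from` is a Python keyword, so the sketch uses `from_`):

```python
class Order:
    def __init__(self, items):
        self.items = items        # item -> quantity requested

class Warehouse:
    def __init__(self, name, stock):
        self.name = name
        self.stock = stock        # item -> quantity on hand

class ShippingPlan:
    def __init__(self, order, warehouse):
        self.order = order
        self.warehouse = warehouse

class ShippingPlanner:
    """Mediator: matches an order against warehouses, so neither
    Order nor Warehouse needs to depend on the other."""
    def fill(self, order):
        self.order = order
        return self

    def from_(self, warehouses):
        self.warehouses = warehouses
        return self

    def ship(self):
        # Naive sourcing rule (an assumption): first warehouse with enough stock.
        for warehouse in self.warehouses:
            if all(warehouse.stock.get(item, 0) >= qty
                   for item, qty in self.order.items.items()):
                return ShippingPlan(self.order, warehouse)
        raise ValueError("no single warehouse can fill this order")

order = Order({"widget": 3})
warehouses = [Warehouse("east", {"widget": 1}), Warehouse("west", {"widget": 10})]
plan = ShippingPlanner().fill(order).from_(warehouses).ship()
print(plan.warehouse.name)  # west
```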
In its simplest form, a warehouse is an inventory storage place.
But it would also be correct to view a warehouse as a facility comprised of storage space, personnel, shipping docks, etc. If you take that view of a warehouse, then it would be appropriate to say that a warehouse (as a facility) can be charged with filling orders, or in expanded form:
a warehouse facility is capable of assembling a shipment according to a given specification (an order)
The above is a justification (if not proof) for the warehouse.fill(order) form. Notice that this form is substantially equivalent to SingleShot's and themel's suggestions. The trick is to consolidate the shippingPlanner (an order fulfillment authority) and a warehouse (an inventory storage space). Simply put, in my example the warehouse is a composition of an order fulfillment authority and an inventory storage space, while in SingleShot's those two are presented separately. It means that if such consolidation is (or becomes) unacceptable (for example, due to the complexity of the parts), then the warehouse can be decomposed into these two subcomponents.
I cannot come up with a justification for assigning the fill operation to an order object.
hello? warehouse? yes, please take this order and fill it. thank you. -- that I can understand.
hey, order! the warehouse is over there. do your thing and go fulfill yourself. -- makes no sense to me.
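Under that consolidated view, a sketch of the warehouse.fill(order) form (the Shipment type and the stock-checking rule are invented for illustration):

```python
class Order:
    def __init__(self, items):
        self.items = items  # item -> quantity requested

class Shipment:
    def __init__(self, items):
        self.items = items  # what was actually assembled

class Warehouse:
    """A facility: storage space plus an order-fulfillment capability."""
    def __init__(self, stock):
        self.stock = stock  # item -> quantity on hand

    def fill(self, order):
        # Assemble a shipment according to the order's specification.
        for item, qty in order.items.items():
            if self.stock.get(item, 0) < qty:
                raise ValueError("insufficient stock of " + item)
        for item, qty in order.items.items():
            self.stock[item] -= qty
        return Shipment(dict(order.items))

warehouse = Warehouse({"widget": 10})
shipment = warehouse.fill(Order({"widget": 3}))
print(shipment.items)    # {'widget': 3}
print(warehouse.stock)   # {'widget': 7}
```

If the fulfillment logic grows too complex, Warehouse.fill is exactly the piece that could be extracted back out into a separate planner, as in SingleShot's answer.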