In my application I have products traveling between stations in a production line. Each time a product passes through a station, a result is recorded: success or failure.
The relationship between products and stations is many to many.
If I were programming in a procedural language I would have the following function:
function get_last_pass_result($station_id, $product_id) {...}
which returns the result of the last time this particular product passed through this particular station.
Now how would you model this logic in OOP terms?
I would definitely have class station, and class product.
But should I do (PHP syntax):
$station->get_last_product_pass_result($product_id)
Or
$product->get_last_pass_on_station_result($station_id)
The situation seems symmetric, and I wonder what considerations exist to decide between the two (or maybe even some third solution?).
I can't provide all the existing information about the domain here, but feel free to include considerations like: if [an assumption about the domain] holds, then [your design solution], where that feels appropriate.
My take is based on DDD principles, so I don't know if this suits your needs, but anyway...
So you have a Station and a Product. I would say that they are both entities that can hold references to each other, but the logic you are talking about spans both entities, and could probably be put in a domain service such as ProductPassingService with an operation like GetLastPassFor(product, station).
This domain service would have the responsibility of using the underlying domain entities Station and Product (and repositories to query them) and executing the logic that belongs to neither Station nor Product. It keeps the entities Station and Product free of too much responsibility.
Also, domain entities should not use repositories (the DDD rule that entities can't access repositories directly), so this logic belongs in a domain service.
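A minimal sketch of that shape (in Python; PassRepository, Pass, and get_last_pass_for are names I made up, and the repository is just an assumed abstraction over your database):

from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional


class PassResult(Enum):
    SUCCESS = "success"
    FAILURE = "failure"


@dataclass(frozen=True)
class Pass:
    station_id: int
    product_id: int
    recorded_at: datetime
    result: PassResult


class PassRepository:
    """Assumed persistence abstraction; a real implementation queries the DB."""

    def find_last(self, station_id: int, product_id: int) -> Optional[Pass]:
        raise NotImplementedError


class ProductPassingService:
    """Domain service holding the logic that belongs to neither entity."""

    def __init__(self, passes: PassRepository) -> None:
        self._passes = passes

    def get_last_pass_for(self, product_id: int, station_id: int) -> Optional[PassResult]:
        last = self._passes.find_last(station_id, product_id)
        return last.result if last is not None else None

The service is then the only place that knows about both entities and the pass history, so neither Station nor Product grows a cross-cutting method.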
It is not completely clear to me whether the Product represents a type of product (e.g. a chair) or an individual instance of a product (e.g. chair-001, chair-002). From your example it seems the latter is the case, so I will assume that; otherwise get_last_pass_result doesn't make much sense.
I believe I would introduce a Path type (without knowing a lot about the domain, though). Now, depending on other use cases, this might be an aggregate root (in DDD lingo) or not.
This means it would be accessible either via a Product instance or directly from the DB/repository/whatever. With a Path instance, I can simply do:
var path = product.GetPath(); // if it is accessible only via product
var path = Path.GetPathForProduct(product); // or pathRepository.GetPathFor(), or ...
var result = path.LastResult;
This approach decouples the factory process from the product itself, and enables some other scenarios (e.g. finding the average duration of a pass).
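For illustration, such a Path type might look like this (sketched in Python; every name and field here is an assumption about your domain):

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional


@dataclass(frozen=True)
class PathEntry:
    station_id: int
    started_at: datetime
    finished_at: datetime
    succeeded: bool


@dataclass
class Path:
    product_id: int
    entries: List[PathEntry] = field(default_factory=list)

    @property
    def last_result(self) -> Optional[bool]:
        # Result of the most recent pass, if any pass has been recorded.
        return self.entries[-1].succeeded if self.entries else None

    def average_duration(self) -> timedelta:
        # One of the 'other scenarios' mentioned above.
        if not self.entries:
            return timedelta(0)
        total = sum((e.finished_at - e.started_at for e in self.entries), timedelta())
        return total / len(self.entries)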
As always - it depends on how you'd use it.
But there is a nice "how it works" sample from the Discovery Channel: an automobile factory. During its journey along the conveyor, an automobile receives more and more additional parts. Each automobile has a kind of job schedule attached - a list of jobs to be done in order to complete the task. While it moves through the line, the people responsible for each job mark it as complete. So when a defect is found, you know its source for sure.
So, going back to a procedural approach. First, it's more natural here to use a structure-plus-procedure approach instead of pure OOP. But it's up to you, of course.
Second, I'd suggest separating the 'product' from a 'production line log' object, which is in a one-to-one relationship with a product but is probably not needed once the product is released. The 'production line log' stores events related to the processing of an object by stations. Moreover, you can use it as a schedule, i.e. include instructions for how a particular product should be processed (as automobiles include or exclude certain features like air conditioning or fog lights). Each 'planned' action is then marked as 'complete' by a worker.
Nowadays this can also be expressed in 'event sourcing' terms: during the movement, product modifications are written into a log, so a product can be reconstructed by replaying the modification events one by one.
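Roughly like this (a Python sketch; all names are made up, so treat it as an illustration of the idea rather than a design):

from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass(frozen=True)
class StationEvent:
    station_id: int
    occurred_at: datetime
    succeeded: bool


class ProductionLineLog:
    """One log per product; the processing history is replayable."""

    def __init__(self, product_id: int) -> None:
        self.product_id = product_id
        self._events: List[StationEvent] = []

    def record(self, event: StationEvent) -> None:
        self._events.append(event)

    def completed_stations(self) -> List[int]:
        # Re-constructing state = folding over the logged events one by one.
        return [e.station_id for e in self._events if e.succeeded]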
I would suggest putting it in the product. My reasoning is that the number of products is large while the stations are fixed, so it is natural to record a specific product's state in that product's object. A station may only need to record some statistics.
The aim is to enable a consistent user experience and have every business process follow an identical high-level structure.
Relevant end-user experience target-state vision (context / requirements):
The user starts in Dynamics 365 from the Company screen and selects the process they want to initiate
A New Case screen opens, along with visible process steps unique to that process; the user inputs generic Case fields, then moves to the next step
The user inputs fields that are specific to that business process, then continues
One or more processing phases conclude the initial user flow (and instead show up in queues of different user groups - or queues that various automation implementations read and execute on)
The aim is that every single process follows this model. Every single process (of which there are hundreds in total) is a Case (the core entity), and the fields which are specific to it are located in a custom entity (which is specific to the business process).
I.e. in the target state there are hundreds of business processes, and each has (with some exceptions):
A custom entity (holding only the fields that are specific to that process - we don't want to pollute the core entities with use-case-specific fields, after all; see the sketch after the next list)
A custom page / default view (a tab within the Case editing/view screen that displays the custom entity's information) - to the user it should simply appear as part of the Case information (they never even know there is a separate entity in play)
A business process flow (BPF) specific to the process
This enables:
Everything is a Case - full visibility, from a single view, into what is happening both for a specific customer and overall (the default Dynamics design approach of having everything fragmented all over the place is just plain idiotic)
Decoupling at several levels - the business process, custom entity, and custom page can safely be developed by stream-aligned teams with minimal overlap with the core shared entities and areas (which are taken care of by a centralized team); automation steps and the exact technology are entirely decoupled from the BPF implementation (interaction via queues / specific subsets of the CRM API)
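To make the core-entity / custom-entity split concrete, here is an abstract sketch (in Python, purely illustrative - this is not Dynamics code, and LoanApplicationExtension is a hypothetical process):

import uuid
from dataclasses import dataclass


@dataclass
class Case:
    """Core shared entity: only the generic fields every process has."""
    case_id: uuid.UUID
    customer_id: uuid.UUID
    process_type: str  # determines which custom entity and BPF apply


@dataclass
class LoanApplicationExtension:
    """Hypothetical per-process custom entity; one such entity per process."""
    case_id: uuid.UUID  # one-to-one link back to the Case
    requested_amount: float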
For this to work (BPF specific):
For "Everything is a Case" to work the BPF needs to be associated to Case entity (second / some processing steps associated to the related custom entity) because that is the standard way
Problem:
There is a maximum of 10 BPFs for a specific entity, which makes no sense. I would understand a restriction on active processes per entity instance, but this applies to the entity overall. And there is no getting around it directly.
So how to mitigate this without breaking the entire approach?
The primary approach seems to be:
Associate the BPF with the custom entity instead (since one exists for each process)
Problem with this:
Now we can no longer open the Case and see the associated process - at least not directly. The same applies to creating a new Case of a specific type (Case type = specific business process = specific BPF + custom entity)
So how to get around this? How can the system be designed so that it maximally utilizes Dynamics core elements and approaches while still enabling "Everything is a Case" - where users have a consistent experience regardless of the type of process, and where we have a unifying technical abstraction for everything (which also enables a very clear API towards external systems and automation technologies)?
I am studying interface design.
Here is what I am curious about.
Some open APIs support two different interfaces to implement toggling - e.g. Instagram's like interface, which is separated into two operations (like, cancel like).
What is the advantage of separating those two? (Separating them into two interfaces makes things more complicated for the end user, in my view.)
I question this since it could be implemented with a single toggle:
i.e. the user sends item_id and user_id; the server checks the database (is this item already liked or not?) and updates accordingly.
Thanks for any answers!
The real benefit of having two interfaces for toggling is that it doesn't require the caller to know the current state of the thing they are attempting to change (i.e. it doesn't require me to first query for the state).
If I am a consumer of an API, typically I will want to perform actions such as liking something. Very rarely can I think of a case where I would want to perform the action "do the opposite of what I did previously" (unless I'm feeling like flip-flopping). If you didn't have two endpoints for like and unlike, then you'd first have to poll the API to get the current status, and then perform the toggle you're talking about if needed.
That approach introduces more logic into your code, requires 1-2 calls to the API, and assumes that the state didn't change between calls; whereas having two endpoints reduces the logic, limits your API calls to one per action, and means you don't have to worry about the state changing unexpectedly.
In the case where you try to like something that the user has already liked, the API would simply return a successful result and not alter the underlying data.
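To make the difference concrete, here is a toy in-memory sketch (Python, not any real API) contrasting the two idempotent endpoints with a single toggle:

from typing import Set, Tuple

likes: Set[Tuple[int, int]] = set()  # (user_id, item_id) pairs


def like(user_id: int, item_id: int) -> None:
    # Idempotent: liking twice leaves the state unchanged.
    likes.add((user_id, item_id))


def unlike(user_id: int, item_id: int) -> None:
    # Idempotent: unliking something never liked is still a success.
    likes.discard((user_id, item_id))


def toggle_like(user_id: int, item_id: int) -> None:
    # Not idempotent: a retried or duplicated call cancels itself out.
    if (user_id, item_id) in likes:
        likes.discard((user_id, item_id))
    else:
        likes.add((user_id, item_id))

Calling like() twice is harmless; calling toggle_like() twice silently undoes the action.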
One reason to prefer an interface where you specify the desired state explicitly is that it will be idempotent. That is, the resulting state is the same even if the request is made multiple times.
This is a pretty contrived example, but if two different people sharing the same account tried to like the same thing within a small enough window, you could end up with it being un-liked instead.
I'm putting something together using a CQRS pattern (no event sourcing, nor DDD, but a clear difference between command and query).
The operation I'm trying to model is a "get-or-create", given a set of parameters. The item being created (or gotten) is effectively a unique communications link ID. Either of two parties can say "get-or-create comms link between me and the other" and a new temporary random ID is returned (which would be valid between them both). They can then send/receive messages using that ID (a PostMessage command or GetRecentMessages query). This temporary ID can be passed around, but can also be centrally invalidated, controlled, etc. Different sessions between the two parties should be recorded separately.
I know that the more typical "insert-then-get-me-the-ID-back" is handled by the command taking a GUID parameter. But this doesn't seem to apply here, because of course the item might already exist.
My options, I believe:
Execute a GetOrCreateCommsLink command followed by a GetActiveCommsLinkId query, i.e. command, then query. Feels wrong because commands are supposedly typically asynchronous (though not in my simple prototype so far) - and is it right to wait for a command and then run a query in my service layer?
Run a GetExistingOrNewActiveCommsLinkId query, which will either return an existing session ID or create and return one. Feels wrong because it's a dirty cheat, both reading and mutating state in a query.
Don't use CQRS for this part of the app
Have each client use their own ID for the session - a NotifyCommsLinkIdentifier command from each side specifies the parameters and their own ID, which is linked internally to the actual ID by the command. Then run a GetUnderlyingCommsLinkId query, given the identifier previously specified, to uncover the ID if needed. Feels wrong too, because inventing this extra concept seems motivated only by the CQRS pattern rather than by any actual domain/business need
I suppose my question in general is how to deal with potential get-then-act or act-then-get scenarios. Should I simply chain them together in my service layer, as per option 1?
Is there a standard approach, or standard approaches, to this?
So you are talking about CQS, not CQRS. Basically, you are trying to find workarounds in order to strictly implement the CQS pattern for something that naturally may not really be an asynchronous command.
My advice is: don't try to apply a pattern because of the pattern, but because it makes sense. Does it make sense in your case? What would be the benefit? Remember that you are not Amazon. Do you really need it?
That said, what I typically do is not the purist way: I allow a command to return a simple ID when it's needed. This makes your architecture a lot simpler, and you still separate commands from queries, which to me is the most important advantage.
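A minimal in-memory sketch of that pragmatic approach (Python; the command name mirrors your GetOrCreateCommsLink, everything else is assumed):

import uuid
from dataclasses import dataclass
from typing import Dict, FrozenSet


@dataclass(frozen=True)
class GetOrCreateCommsLink:
    party_a: str
    party_b: str


class CommsLinkCommandHandler:
    def __init__(self) -> None:
        # Links keyed by an order-independent pair of parties (in-memory here).
        self._links: Dict[FrozenSet[str], str] = {}

    def handle(self, cmd: GetOrCreateCommsLink) -> str:
        # Not purist CQS: the command returns the existing or newly created ID.
        key = frozenset((cmd.party_a, cmd.party_b))
        if key not in self._links:
            self._links[key] = str(uuid.uuid4())
        return self._links[key]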
I'm moving from a pure DDD paradigm to CQRS. My current concern is with Event Sourcing and, more specifically, organizing the Event Store. I've read tons of blog posts but still can't understand some things. So correct me if I'm wrong.
Each event basically consists of (see the sketch after this list):
- Event date/time
- Event type (from which we can also figure out the type of the AggregateRoot)
- AggregateRoot id (a Guid)
- AggregateRoot version (to maintain the order of updates)
- Event data (some serialized class with the data necessary to make the update)
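For concreteness, such a record might look like this (just a sketch; the field names are mine):

import uuid
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class StoredEvent:
    occurred_at: datetime     # event date/time
    event_type: str           # also implies the AggregateRoot type
    aggregate_id: uuid.UUID   # AggregateRoot id
    aggregate_version: int    # maintains the order of updates
    payload: str              # serialized event data (e.g. JSON)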
Now, if my event data consists of simple value types (ints, strings, enums, etc.), then it's easy. But what if I have to pass another AggregateRoot? I can't serialize the whole AR as part of the event data (think of all the data and lazy loading); basically, I only need to store the Id of that AR. But then, when I need to apply that event, I'd have to get that AR from the database first. And it doesn't feel right to do that from my Domain Model (calling repositories and working with AR Ids).
What's the best approach for this?
p.s. For a concrete example, let's assume there's a model which consists of Task and User entities (both ARs). A Task holds a reference to the responsible User, but the responsible User can be changed.
Update: I think I've found the source of my confusion. I believed event sourcing should be used only for building the read model, and in that case passing Ids and raw data is fine. But the same events are used on the aggregates themselves, and this is what I cannot understand.
In DDD an aggregate is a consistency/invariant boundary, so one aggregate may never depend on another to maintain its invariants. When we apply this design restriction, we find very few situations where it is necessary to store a full reference to another aggregate; usually we store its id and (if necessary) its version and a copy of the relevant attributes.
For example, in the usual Order/LineItem and Product problem, we would copy the Product's id and price into the LineItem instead of holding a full reference. This prevents changes to the Product's price from affecting the Order/LineItem aggregate's invariants. If it is necessary to update the LineItem price after the Product's price changes, we need to keep track of the PriceChanged events from the Products used and send a compensating command to the Order/LineItem. Usually this coordination/synchronization is handled by a saga.
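A small sketch of that copying idea (Python; names and types are illustrative):

import uuid
from dataclasses import dataclass
from decimal import Decimal
from typing import List


@dataclass(frozen=True)
class LineItem:
    product_id: uuid.UUID  # reference by id, not by object
    unit_price: Decimal    # copied from the Product at ordering time
    quantity: int


@dataclass
class Order:
    order_id: uuid.UUID
    items: List[LineItem]

    def total(self) -> Decimal:
        # Later Product price changes cannot affect this invariant; a
        # compensating command (e.g. from a saga) must update items explicitly.
        return sum((i.unit_price * i.quantity for i in self.items), Decimal(0))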
In Event Sourcing, the state of the aggregate is defined by events, and nothing more. All the domain model machinery (a la DDD) is there just to decide which domain events should be raised. An event should know nothing about your domain; it should be a simple DTO. In fact, it is perfectly OK to have Event Sourcing without DDD.
As I understand Event Sourcing, it is supposed to help people get rid of relational data models and ORMs like NHibernate or Entity Framework, since each of those is a science of its own. Programmers can then simply focus on the business logic. I have seen relational schemas used for event stores that were simply ID, Version, and Timestamp, plus an NCLOB or NVARCHAR(max) column to store the event payload schema-less.
I'm creating an app in Rails that is essentially a holiday management tool. An employee requests holiday; an email is sent to the manager for approval; the manager approves/rejects, etc.
The app will allow whole or half-day holidays to be taken and I'm wondering about the best way to handle the half-days. I don't want to present the user with a time picker. I would prefer to offer a date-picker and AM/PM checkboxes.
I suppose I'm looking for opinions on whether I should 1) use the chosen date in conjunction with, say, the AM checkbox to create a DateTime entry in the DB, e.g. leave starting on 10 February in the AM = "2011-02-10 00:00", or 2) simply record a Date in the DB with a string reference to AM in a separate field.
I want to output leave in the form of .ics files and a stream so the first option to me makes the most sense but is likely to create a real fudge in the code. Any thoughts or further options appreciated.
Thanks
Robin
Why not create durations (pairs of datetimes) for every holiday rather than just one datetime? That should model the iCal representation better than storing single times as events.
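Something like this, sketched in Python for brevity (the AM/PM cut-off times are assumptions you'd set per your business rules):

from dataclasses import dataclass
from datetime import date, datetime, time


@dataclass(frozen=True)
class Leave:
    starts_at: datetime
    ends_at: datetime


def half_day_leave(day: date, am: bool) -> Leave:
    # Assumed working hours: 09:00-13:00 for AM, 13:00-17:30 for PM.
    if am:
        return Leave(datetime.combine(day, time(9, 0)),
                     datetime.combine(day, time(13, 0)))
    return Leave(datetime.combine(day, time(13, 0)),
                 datetime.combine(day, time(17, 30)))

Those (start, end) pairs then map directly onto DTSTART/DTEND in your .ics output.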
As far as how to handle that at the view level, you're probably going to want to use the Presenter pattern, since you're really manipulating events rather than times.
A presenter is basically a proxy with added business logic that represents a better mapping for how the view interacts with the model.
It's a lightweight layer (presenters are normally just plain Ruby classes rather than AR::Base or other heavyweight Rails models) that wraps your models; presenters are usually instantiated at the controller level and passed to your views in place of the models themselves.
http://blog.jayfields.com/2006/09/rails-model-view-controller-presenter.html
http://blog.jayfields.com/2007/03/rails-presenter-pattern.html
http://www.slideshare.net/adorepump/presenting-presenters-on-rails
Here's what I mean: https://gist.github.com/984025