Is it okay to have an if/else inside a sequence diagram based on an actor's decision? For example, if the actor thinks the price is reasonable then the if branch executes, and if the price is not reasonable then the else branch executes, like this:
Thank you
Yes, but...
The combined fragment alt may have several operands, separated by horizontal lines and each guarded either by an interaction constraint like [Reasonable price] or by [else]. If we go by the book:
An InteractionConstraint is shown in square brackets covering the lifeline where the first event occurrence will occur, positioned above that event (...).
This graphical instruction is important (even if I'm the first to forget about it). Accordingly, [Reasonable price] should cover the lifeline of your actor. In practice, if you don't follow it, your readers will do the mapping implicitly, but the diagram might be more ambiguous.
Why?
It is important to realize that this graphical instruction hides a deeper reality: in your case, if it's the actor who takes the initiative to begin the relevant alt operand and send some message, then the actor should be able to evaluate whether [Reasonable price] applies.
We should keep in mind here that the sequence diagram describes the possible paths of execution ("traces" in UML jargon, since "path of execution" means too many things to too many people) of an interaction:
Interactions are units of behavior of an enclosing Classifier.
The semantics of an Interaction is given as a pair of sets of traces. The two trace sets represent valid traces and invalid traces.
A trace is a sequence of event occurrences (...)
And since the sequence diagram is made of interactions between its lifelines, the guards should be evaluated based on the available knowledge, i.e. the knowledge of the lifeline that starts the operand (and in some cases knowledge of the enclosing classifier).
This is not an explicit UML requirement, but a logical deduction from the principle of encapsulation and the potential visibility constraints. It makes the model more useful and avoids unrealistic expectations. Conversely, what would be the meaning of a guard [beautiful weather] if none of the participating lifelines had a window to look at the sky and see what the weather looks like?
Conclusion
If your actor has a lifeline in the diagram, make sure the guard covers it.
If your actor has no lifeline in the diagram, this guard would still be syntactically correct. But ask yourself which of the participants in the interaction could at the same time know the price and determine what a reasonable price is.
Related
I want to check the definition of «iterative» in expansion regions in activity diagrams. For me personally this was never a question, because I understand it as letting me write a for loop, e.g.:
for (int i = 1; i <= 10; i++) {
    doSomething(); // so it does it 10 times
}
However, while I was presenting my UML diagram to an audience, an engineer team leader (not a UML maven) objected to the term 'iterative', because he understood 'iterative' to mean an 'iterative process' in which each step improves a result. I am also aware of this definition, but I assume the UML definition is not that, but rather means a simple for-loop.
Please confirm that the UML definition of «iterative» is like a simple for-loop, or correct me if it is not.
No, it has a different meaning. UML 2.5 states on p. 480:
The mode of an ExpansionRegion controls how its expansion executions proceed.
If the value is iterative, the expansion executions must occur in an iterative sequence, with one completing before another can begin. The first expansion execution begins immediately when the ExpansionRegion starts executing, with subsequent executions starting when the previous execution is completed. If the input collections are ordered, then the expansion executions are sequenced in the order induced by the input collection. Otherwise, the order of the expansion executions is not defined.
Other values for this keyword are parallel and stream. You can guess that behavior defined in a parallel region may be executed in parallel. stream is a bit more complicated; you can read about it on the same page of the UML spec.
The for-loop itself comes from the input collection you pass to the region. This can be processed in either of the above ways.
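To make the distinction concrete, here is a minimal Java sketch (my own illustration, not from the spec): the same collection processed the way an «iterative» region prescribes - one execution completing before the next begins - versus the way a «parallel» region permits. The processOne method is a hypothetical stand-in for the region's behavior.

    import java.util.List;

    public class ExpansionModes {
        // Hypothetical stand-in for the behavior inside the expansion region.
        static void processOne(String element) {
            System.out.println("processing " + element);
        }

        public static void main(String[] args) {
            List<String> input = List.of("a", "b", "c");

            // «iterative»: one expansion execution completes before the next
            // begins, in the order induced by the (ordered) input collection.
            for (String element : input) {
                processOne(element);
            }

            // «parallel»: executions may run concurrently; no ordering guarantee.
            input.parallelStream().forEach(ExpansionModes::processOne);
        }
    }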
tl;dr
So rather than denoting a for loop, the keyword «iterative» for the region tells you that its behavior may not be handled in parallel.
Ahhh, semantics...
First a disclaimer - I am not a native English speaker. Yet I believe both my level of English and my IT experience are sufficient to answer this question.
Let's have a look at the dictionary definition of iterative first:
iterative adjective
/ˈɪtərətɪv/
/ˈɪtəreɪtɪv/, /ˈɪtərətɪv/
(of a process) that involves repeating a process or set of instructions again and again, *each time applying it to the result of the previous stage*
We used an iterative process of refinement and modification.
an iterative procedure/method/approach
The emphasis is mine.
Of course this is a pure word definition, not in the context of software development.
In real life a process can quite easily be repetitive without really being iterative in itself. Imagine an assembly line in a mass-production factory. At one of the stations a particular screw/set of screws is applied to join two or more elements. On every run, the same type and number of screws is applied to an identical set of elements. There is a virtually endless stream of similar part sets, each consisting of the same types of parts as the previous one and requiring the same kind of connection. From the station's perspective, joining the elements is a repetitive process, but it is not iterative, since each join is applied to a different set of elements - it is never applied to elements already joined.
If you think of code, it's somewhat different, though. When applying a loop, you almost always have some resulting set impacted by it, and one can argue that every loop step changes that resulting set further, meaning the next loop step is applied to the result of the previous step. From this perspective, almost every loop is iterative.
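A minimal Java sketch of that idea (my own example): each step of the loop is applied to the result of the previous step, which matches the dictionary sense of iterative.

    // Each pass works on the result of the previous one: iterative in the
    // dictionary sense, because step n+1 refines the output of step n.
    int result = 0;
    for (int i = 1; i <= 10; i++) {
        result += i; // the running total from the previous step is reused
    }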
On the other hand, you can have a loop like this:

    do {
        Thread.sleep(10); // wait; nothing is changed between iterations
    } while (buffer.isEmpty());
    readBuffer();

You can clearly say it is a loop, yet nothing is being changed: all the code does is wait for the buffer to fill. So it is not iterative.
For UML specifically, though, the precise meaning is given in qwerty_so's answer, so I will not repeat it here.
I have the following case.
A reservation: it can be newly created, it can be confirmed, it can be rejected, and it can be canceled.
There might be different reasons for cancelation: let's say the reservation has expired, or it has not been processed within a certain time limit, or some other reason.
In order for a reservation to be confirmed, multiple sub-transactions have to be performed. This means that there is a flow within the confirmation itself. The solution my team came up with is some sort of work table holding many different statuses, which is fine. I felt the need to uniquely identify the state of a reservation by declaring a field ReservationStatus that captures a certain combination of the statuses already defined in the table. In this case the reservation status would be NEW, CONFIRMED, CANCELED, REJECTED. Each state will represent a certain combination of statuses in the work table.
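To illustrate the idea in code (a hedged sketch: the ReservationStatus values are from the question, but the work-table status strings and the mapping below are invented for illustration): the coarse-grained status is a pure function of the fine-grained work-table statuses, so it can be derived and stored as a single discriminator column.

    // Sketch of the proposed discriminator field.
    enum ReservationStatus { NEW, CONFIRMED, CANCELED, REJECTED }

    // Each coarse status summarizes some combination of the detailed
    // work-table statuses; the mapping below is hypothetical.
    static ReservationStatus deriveStatus(String workTableStatus) {
        switch (workTableStatus) {
            case "EXPIRED":
            case "TIMED_OUT":         return ReservationStatus.CANCELED;
            case "ALL_STEPS_DONE":    return ReservationStatus.CONFIRMED;
            case "VALIDATION_FAILED": return ReservationStatus.REJECTED;
            default:                  return ReservationStatus.NEW;
        }
    }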
My team was convinced that this adds additional complexity. I think it is the opposite: it simplifies the flow. It also provides a natural discriminator and supports polymorphism. We are supposed to use queues and asynchronous processes.
How can I actually justify that we should have such a column? It appears the arguments I already mentioned were not enough, and deep down inside I know I am right :)
Wanted this to be a comment but it came out too long so here it goes.
@AlexandarPetrov I would add the following questions:
Do all the Statuses concretely represent every State a Reservation could have?
Are there clear rules for all status migration paths? E.g. EXPIRED -> CONFIRMED, and so forth.
Do you need to model the state changes? And is it a finite state machine?
I'd personally expose the status field, but only if it is concrete enough by itself to define state. E.g. I've seen cases where there are two layers of statuses - status and sub-status. In a case like that, boundaries are lost, state becomes a complex VO rather than a simple field, and state transition rules can become blurry.
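As a sketch of what "clear rules for all status migration paths" could look like in code (hypothetical names, reusing the ReservationStatus enum sketched in the question; Reservation is assumed to expose getStatus()/setStatus()):

    import java.util.Map;
    import java.util.Set;

    // Hypothetical finite-state-machine rules for the reservation status.
    static final Map<ReservationStatus, Set<ReservationStatus>> ALLOWED = Map.of(
        ReservationStatus.NEW,       Set.of(ReservationStatus.CONFIRMED,
                                            ReservationStatus.CANCELED,
                                            ReservationStatus.REJECTED),
        ReservationStatus.CONFIRMED, Set.of(ReservationStatus.CANCELED),
        ReservationStatus.CANCELED,  Set.of(),
        ReservationStatus.REJECTED,  Set.of()
    );

    static void transition(Reservation r, ReservationStatus target) {
        if (!ALLOWED.get(r.getStatus()).contains(target)) {
            throw new IllegalStateException(r.getStatus() + " -> " + target);
        }
        r.setStatus(target);
    }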
Additionally:
To me it seems like Event Sourcing and CQRS could be a good fit for all those Reservations, especially given the complex flows you mention. Transitions would then be events being applied, and the statuses a simple way to expose state. Tracking status changes separately also becomes needless, as the event stream holds all the historical data.
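A minimal sketch of that idea (hypothetical event names, not a full CQRS implementation): the status is never stored as the source of truth but is rebuilt by replaying the event stream.

    import java.util.List;

    // Hypothetical domain events for the reservation.
    interface ReservationEvent {}
    class ReservationCreated   implements ReservationEvent {}
    class ReservationConfirmed implements ReservationEvent {}
    class ReservationCanceled  implements ReservationEvent {}

    // State is derived by applying events in order; history comes for free.
    static ReservationStatus replay(List<ReservationEvent> stream) {
        ReservationStatus status = ReservationStatus.NEW;
        for (ReservationEvent e : stream) {
            if (e instanceof ReservationConfirmed) status = ReservationStatus.CONFIRMED;
            if (e instanceof ReservationCanceled)  status = ReservationStatus.CANCELED;
        }
        return status;
    }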
Finally:
How can I actually justify that we should have such a column? It appears the arguments I already mentioned were not enough, and deep down inside I know I am right :)
Well, in the end you can always put your foot down and take responsibility. And if it turns out in time to be the wrong decision - bear the responsibility and admit the mistake.
I understand that many design principles conflict with each other in some cases, so we have to weigh them and see which one is more beneficial.
Until now I was aware of the SRP principle and based a lot of my designs solely on it, but sometimes following it felt wrong to me. Now that I have come to know about TDA (Tell, Don't Ask), my feeling has more support :)
SRP: an object should be concerned with its own responsibilities, not anyone else's.
TDA: behavior that depends only on an object's own state should be kept inside the object itself.
Example: I have different shapes like Rectangle, Square, Circle, etc. Now I have to calculate the area.
My design till now: I was following SRP, where I had an AreaCalculatorService class which would ask for the state of a shape and calculate the area. The reasoning behind this design was that a shape should not be worried about area calculation, as it's not the shape's responsibility. But ideally I used to think the area calculation code should reside in each shape, because if a new shape comes up down the line I have to modify the AreaCalculatorService class (which violates the open for extension, closed for modification principle (OECM)). But I always gave preference to SRP. That seems to have been wrong.
Myth broken (at least mine): with TDA, it looks like my feeling was correct: I should not ask for the state of the object, but tell the shape to calculate its area. Though it violates the SRP principle, it supports the OECM principle. As I said, design principles sometimes conflict with each other, but I believe that where behavior depends completely on the object's state, behavior and state should be together.
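A minimal Java sketch of the TDA version (my own illustration of the design described above): each shape carries the behavior that depends only on its own state, so adding a new shape requires no change to any calculator class.

    // Tell, Don't Ask: each shape computes its own area from its own state.
    interface Shape {
        double area();
    }

    class Rectangle implements Shape {
        private final double width, height;
        Rectangle(double width, double height) { this.width = width; this.height = height; }
        public double area() { return width * height; }
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }
    // Adding Triangle later means adding a class, not modifying existing code (OECM).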
Another example: say I have to calculate the salary of all employees of all departments in an organization. Then we should follow SRP, where a SalaryCalculatorService depends on departments and employees. It will ask for the salary of each employee and then sum all the salaries. So here I am asking for the state of employees, but I am still not violating TDA, because calcSalary does not depend on the state of any single employee alone.
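In code, the second case could look like this (a sketch with hypothetical names apart from calcSalary, which you mention): the service tells each employee to report its salary and only adds the results, since the total spans many objects and belongs to no single one of them.

    import java.util.List;

    class Employee {
        private final double salary;
        Employee(double salary) { this.salary = salary; }
        double getSalary() { return salary; } // the only state the service asks for
    }

    class SalaryCalculatorService {
        // The summation depends on *all* employees, so it cannot live inside
        // any single Employee without breaking encapsulation elsewhere.
        double calcSalary(List<Employee> employees) {
            return employees.stream().mapToDouble(Employee::getSalary).sum();
        }
    }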
Let me know whether my interpretation of both these principles is correct: should I follow TDA in the first case but SRP in the second?
I think your TDA understanding is correct.
The problem is with SRP; in my experience it is the most misunderstood of the SOLID principles.
SRP says that one class should have only one reason to change. The "reason to change" part is often confused with "it should have only one responsibility", and hence "it must do only one thing". No, it's not like that.
The "reason to change" depends completely from the context of the application where the class resides. Particularly it depends on the actors that interact with the software and that in the future could be able to ask for changes.
The actors could be: the customer who pays for the software, the normal users and some super-users, the DBAs that manage the database of the application, or the IT department that handles the hardware the application runs on. Once you have enumerated all the actors around your software, then to follow what SRP says you have to write your class in such a way that it has one single responsibility, so that only requests from one actor can require changes to your class.
So, I think you should follow TDA, putting data and the behaviors that use those data inside the same object. In this way you can manage the relations among the objects by telling them what to do instead of asking for data, reducing coupling and achieving better encapsulation.
SRP, as explained above, will guide you in deciding which behaviors to put in one object rather than another.
Taking into consideration the domain events pattern and this post, why do people recommend keeping to a one-aggregate-per-transaction model? There are good cases where one aggregate could change the state of another. Even removing an aggregate (or altering its identity) will alter the state of other aggregates that reference it. Some people say that keeping one aggregate per transaction helps scalability (keeping one aggregate per server). But doesn't this way of thinking break a fundamental characteristic of DDD: being technology agnostic?
So based on the statements above and on your experience, is it bad to design aggregates and domain events that lead to changes in other aggregates, which in turn leads to having two or more aggregates per transaction (e.g. when a new order with 100 items is placed, change the customer's state from normal to V.I.P.)?
There are several things at play here and even more trade-offs to be made.
First and foremost, you are right, you should think about the model first. After all, the interplay of language, model and domain is what we're doing all this for: coming up with carefully designed abstractions as a solution to a problem.
The tactical patterns - from the DDD book - are a means to an end. In that respect we shouldn't overemphasize them, even though they have served us well (and caused major headaches for others). They help us find "units of consistency" in the model, things that change together, a transactional boundary. And therein lies the problem, I'm afraid. When something happens and when the side effects of it happening should be visible are two different things. Yet all too often they are treated as one, and thus cause this uncomfortable feeling, to which we respond by trying to squeeze everything within the boundary, without questioning. Still, we're left with that uncomfortable feeling. There are a lot of things that can logically be treated as a "whole change" even though physically there are multiple small changes. It takes skill and experience, or even plain trial and error, to know when that is the case. Not everything can be solved this way, mind you.
To scale or not to scale, that is often the question. If you don't need to scale, keep things on one box, be content with a certain backup/restore strategy, you can bend the rules and affect multiple aggregates in one go. But you have to be aware you're doing just that and not take it as a given, because inevitably change is going to come and it might mess with this particular way of handling things. So, fair warning. More subtle is the question as to why you're changing multiple aggregates in one go. People often respond to that with the "your aggregate boundaries are wrong" answer. In reality it means you have more domain and model exploration to do, to uncover the true motivation for those synchronous, multi-aggregate changes. Often a UI or service is the one that has this "unreasonable" expectation. But there might be other reasons and all it might take is a different set of abstractions to solve the same problem. This is a pretty essential aspect of DDD.
The example you gave seems like something I could handle as two separate transactions: an order was placed, and as a reaction to that, because the order was placed with 100 items, the customer was made a VIP. As MikeSW hinted at in his answer (I started writing mine after he posted his), the question is when, by whom, how, and why this customer status change should be observed. Basically it's the "next" behavior that dictates the consistency requirements of the previous behavior(s).
An aggregate groups related business objects, while an aggregate root (AR) is the 'representative' of that aggregate. The AR itself is an entity modeling a (bigger, more complex) domain concept. In DDD a model is always relative to a context (the bounded context - BC), i.e. the model is valid only in that BC.
This allows you to define a model representative of the specific business context, and you don't need to shove everything into one model only. An Order is an AR in one context, while in another it is just an id.
Since an AR pretty much encapsulates all the lower-level concepts and business rules, it acts as a whole, i.e. as a transaction/unit of work. A repository always works with ARs, because 1) a repo always deals with business objects and 2) the AR represents the business object for a given context.
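As a tiny illustration (hypothetical names, a sketch rather than a prescribed API): the repository's contract is expressed purely in terms of the aggregate root, never its inner entities.

    // The repository contract only ever speaks in aggregate roots.
    interface OrderRepository {
        Order findById(OrderId id); // Order is the AR (hypothetical type)
        void save(Order order);     // persisting it may touch many tables
    }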
When you have a use case involving two or more ARs, the business workflow and the correct modelling of that use case are paramount. In a lot of cases those ARs can be modified independently (one doesn't care about the other), or an AR changes as a result of another AR's behaviour.
In your example, it's pretty trivial: when the customer places an order for 100 items, a domain event is generated and published. Then you have a handler which checks whether the order complies with the customer promotion rules, and if it does, a command is issued whose effect is to change the client's state to VIP.
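A sketch of that flow in Java (all names hypothetical): the order side publishes an event, and a separate handler reacts in its own transaction by issuing a command against the customer aggregate.

    // Hypothetical domain event raised when the Order aggregate is committed.
    record OrderPlaced(String customerId, int itemCount) {}

    // Hypothetical command handled on the Customer aggregate's side.
    record MakeCustomerVip(String customerId) {}

    interface CommandBus { void send(Object command); } // hypothetical dispatcher

    class OrderPlacedHandler {
        private final CommandBus commandBus;

        OrderPlacedHandler(CommandBus commandBus) { this.commandBus = commandBus; }

        // Runs in its own transaction, after the order has been persisted.
        void handle(OrderPlaced event) {
            if (event.itemCount() >= 100) { // the promotion rule
                commandBus.send(new MakeCustomerVip(event.customerId()));
            }
        }
    }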
Domain events are very powerful and allow you to implement transactions, but in an eventually consistent environment. The old db transaction is an implementation detail and is usually used when persisting one AR (remember, an AR is treated as a logical unit, but persisting one may involve multiple tables, hence the db transaction).
Eventual consistency is a 'feature' of domain events which naturally fits a rich domain (and the real world, actually). For some cases you might need immediate consistency; however, those are particular cases, and they relate to the UI rather than to how the Domain works. Of course, it really varies from one domain to another. In your example, the customer won't mind becoming a VIP 2 seconds or 2 minutes after the order was placed instead of within the same millisecond.
I would like to know whether people use metrics to validate their code/design.
As an example, I think I will use:
number of lines per method (< 20)
number of variables per method (< 7)
number of parameters per method (< 8)
number of methods per class (< 20)
number of fields per class (< 20)
inheritance tree depth (< 6).
Lack of Cohesion in Methods
Most of these metrics are very simple.
What is your policy about this kind of measure? Do you use a tool to check them (e.g. NDepend)?
Imposing numerical limits on those values (as you seem to imply with the numbers) is, in my opinion, not a very good idea. The number of lines in a method can be very large if there is a significant switch statement, and yet the method is still simple and proper. The number of fields in a class can be appropriately very large if the fields are simple. And five levels of inheritance can sometimes be way too many.
I think it is better to analyze the class cohesion (more is better) and coupling (less is better), but even then I am doubtful of the utility of such metrics. Experience is usually a better guide (though that is, admittedly, expensive).
A metric I didn't see in your list is McCabe's Cyclomatic Complexity. It measures the complexity of a given function and correlates with bugginess: high complexity scores for a function indicate 1) it is likely to be buggy and 2) it is likely to be hard to fix properly (e.g. fixes will introduce their own bugs).
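For intuition, a hedged example of my own (not from the question): cyclomatic complexity is roughly the number of independent decision points plus one, so the branchy method below scores much higher than the straight-line one.

    // Cyclomatic complexity 1: no decision points, a single path.
    static int doubleIt(int x) {
        return x * 2;
    }

    // Cyclomatic complexity 4: three decision points (one loop, two ifs) + 1.
    static int classify(int[] values, int threshold) {
        int count = 0;
        for (int v : values) {          // +1
            if (v > threshold) {        // +1
                count++;
            } else if (v < 0) {         // +1
                count--;
            }
        }
        return count;
    }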
Ultimately, metrics are best used at a gross level -- like control charts. You look for points above and below the control limits to identify likely special cases, then you look at the details. For example, a function with a high cyclomatic complexity may cause you to look at it, only to discover that it is appropriate because it is a dispatcher method with a number of cases.
Management by metrics does not work for people or for code; no metric or absolute value will always work. Please don't let a fascination with metrics distract from truly evaluating the quality of the code. Metrics may appear to tell you important things about the code, but the best they can do is hint at areas to investigate.
That is not to say that metrics are not useful. Metrics are most useful when they are changing, to look for areas that may be changing in unexpected ways. For example, if you suddenly go from 3 levels of inheritance to 15, or 4 params per method to 12, dig in and figure out why.
Example: a stored procedure to update a database table may have as many parameters as the table has columns; an object interface to this procedure may have the same, or it may have one if there is an object to represent the data entity. But the constructor for the data entity may have all of those parameters. So what would the metrics tell you here? Not much! And if you have enough situations like this in the code base, the target averages will be blown out of the water.
So don't rely on metrics as absolute indicators of anything; there is no substitute for reading/reviewing the code.
Personally I think it's very difficult to adhere to these types of requirements (i.e. sometimes you just really need a method with more than 20 lines), but in the spirit of your question I'll mention some of the guidelines used in an essay called Object Calisthenics (part of the Thoughtworks Anthology if you're interested).
Levels of indentation per method (<2)
Number of 'dots' per line (<2)
Number of lines per class (<50)
Number of classes per package (<10)
Number of instance variables per class (<3)
He also advocates not using the 'else' keyword nor any getters or setters, but I think that's a bit overboard.
Hard numbers don't work for every solution. Some solutions are more complex than others. I would start with these as your guidelines and see where your project(s) end up.
But regarding these numbers specifically, they seem pretty high. I usually find in my particular coding style that I have:
no more than 3 parameters per method
about 5-10 lines per method
no more than 3 levels of inheritance
That isn't to say I never go over these generalities, but I usually think more about the code when I do because most of the time I can break things down.
As others have said, keeping to a strict standard is going to be tough. I think one of the most valuable uses of these metrics is to watch how they change as the application evolves. This helps to give you an idea how good a job you're doing on getting the necessary refactoring done as functionality is added, and helps prevent making a big mess :)
OO metrics are a bit of a pet project for me (it was the subject of my master's thesis). So yes, I'm using these, and I use a tool of my own.
For years the book "Object Oriented Software Metrics" by Mark Lorenz was the best resource for OO metrics. But recently I have seen more resources.
Unfortunately I have other deadlines so no time to work on the tool. But eventually I will be adding new metrics (and new language constructs).
Update
We are using the tool now to detect possible problems in the source. Several metrics we added (not all pure OO):
use of assert
use of magic constants
use of comments, in relation to the complexity of methods
statement nesting level
class dependency
number of public fields in a class
relative number of overridden methods
use of goto statements
There are still more. We keep the ones that give a good picture of the pain spots in the code, so we get direct feedback when these are corrected.