how to show periodic call of a state machine in UML state diagram? - embedded

I work in the field of embedded software.
In a project, we are using a time-triggered software architecture, so each component is called
periodically (at the component's own tick rate) and has a predetermined time slot in which to do its task.
Now suppose one of these components has a state machine which is active whenever the scheduler calls the component. Because the architecture is time-triggered, some time-based transitions in the state machine must be synchronized with the component's call tick (suppose the component is called every 10 ms by the scheduler and, say, there is a transition from state A to B in the component's state machine that is triggered after 50 ms).
The question is: is it necessary to show (in some way) the call tick of the component in its state machine?
If so, how?

A UML state machine describes the behaviour of the system, not its implementation. Your method of calling each component at some time interval in order to progress/update its state machine is an implementation detail.
The question is: is it necessary to show (in some way) the call tick of the component in its state machine?
So, no, it is not necessary; it would be an implementation constraint, and those should generally be avoided. The same state machine would work just the same (at least for time-triggered events) if you called each component asynchronously as fast as possible.
Your example:
 _____
|  A  |
|_____|
   |
   | after 50 ms
   V
 _____
|  B  |
|_____|
would work just the same (and be more responsive for non-time-triggered events).
The point is that the UML state machine diagram should describe the required behaviour (i.e. the design), not the code or implementation: what it must do, not how it does it. You are not describing how to implement a state machine here.

UML is agnostic on the diagram purpose: you may model the requirements, the high-level design, or the actual implementation. Whatever the purpose here, the key is separation of concerns. Therefore, in general:
If you want to show the timing of component interactions -- including at state level for some components -- you'd better go for a timing diagram. It does not show the general state machine but perfectly documents synchronization in a given scenario.
If you need to show active / on-hold states, you could consider two orthogonal state machines (i.e. one with your state machine, one for the process states, if both are independent)
If you have an important timing constraint, note it as a constraint in the diagram (i.e. { duration < 50 ms } ), rather than artificially defining timing events that are in reality just means to the end.
But if you cannot separate state transitions from timing, and precise timing is part of your design, you may use time events like any other transition events:
after t for a relative time expression, i.e. a duration after having entered a state.
at t for an absolute time expression, for instance a fixed time (20:45) or a point in time (at 50 ms), the time origin presumably being the very start of the state machine.

It sounds like the code servicing the "tick" should be disconnected from the general state machine of the target. Or rather, the state machine operates on a higher abstraction layer. The logic for providing data to the "tick" would be buried in some lower layer state handling code.
Store away the data necessary to service the "tick" somewhere. If there's a lot of work for the MCU to do in relation to the "tick" time, then let some ISR or DMA design grab the latest available data when it's time to act, independent of the state machine. Alternatively, if the MCU isn't busy and can easily finish one lap of its main loop/state machine between ticks, you could even use a polling design.
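For illustration, here is a minimal sketch (in Python for brevity; the 10 ms / 50 ms numbers come from the question's example, everything else is invented) of how a periodically-called component can realize a time-based transition by counting scheduler ticks rather than reading a clock:

```python
# Sketch: a component ticked every 10 ms realizes the "after 50 ms"
# transition from state A to B by counting calls, not by reading a clock.
# All names here are illustrative, not from the original post.

TICK_MS = 10          # scheduler period from the question
A_TO_B_MS = 50        # time-based transition from the example

class Component:
    def __init__(self):
        self.state = "A"
        self.ticks_in_state = 0

    def tick(self):
        """Called by the scheduler every TICK_MS milliseconds."""
        self.ticks_in_state += 1
        if self.state == "A" and self.ticks_in_state * TICK_MS >= A_TO_B_MS:
            self.state = "B"
            self.ticks_in_state = 0

c = Component()
for _ in range(5):    # 5 ticks at 10 ms = 50 ms elapsed
    c.tick()
```

This is exactly the kind of detail the answers above recommend keeping out of the diagram: the diagram says "after 50 ms", and the tick counting is how one particular scheduler happens to realize it.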

Related

what are the relationships among procedural, object oriented and event driven paradigms?

I think the procedural, object oriented and event driven paradigms are the main paradigms in software development. How do I build a relationship among them?
what are the relationships among procedural, object oriented and event driven paradigms?
It is hard to pin down exactly what the relationships among them are.
Procedural and Event Driven describe the general workflow of the application or decision-making logic, whereas Object Oriented describes more the structure of that decision-making logic.
Procedural describes a sequential workflow of logic: in general there are many steps that must be performed in sequence. There may be criteria between steps that depend on the outcome of previous steps, but the sequence of logic is pre-determined and hard-coded into the application.
In procedural programming the State of the system is generally passed directly between the steps, so it does not need to exist in a context outside of the executing logic, and there is less need to formally manage the shape or structure of this State.
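A minimal sketch of that idea, with invented step names: each step hands its result directly to the next, and the order is fixed in the code.

```python
# Sketch: in a procedural workflow the state flows directly from step
# to step as arguments and return values; step names are illustrative.

def read_input():
    return [3, 1, 2]

def validate(data):
    if not data:
        raise ValueError("empty input")
    return data

def process(data):
    return sorted(data)

def report(data):
    return f"processed {len(data)} items: {data}"

# The sequence is pre-determined and hard-coded; each step receives the
# previous step's result, so no shared external state is needed.
result = report(process(validate(read_input())))
```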
Procedural logic complements Functional programming architectures but can be used in many contexts.
Procedural logic suits scenarios where interaction with external systems is instantaneous or not required, or where it is OK for your logic to halt processing until the external system responds.
Procedural logic may raise events for external event-driven logic to respond to; that doesn't make it event driven.
From a testing point of view, properly testing a pure Procedural Programming application requires the whole application to be completed. You could also test each step in the process by directly evaluating the state or result of each step, but in pure PP the state is not maintained in a context that can easily be accessed outside of the logic, so the only way to test is to run each process to completion and review the results.
This means that the external state is generally less of a concern when testing PP logic.
End-to-end testing is greatly simplified because there are fewer permutations of outcomes to consider.
Event Driven describes a workflow where the system raises event messages, or responds to events raised by other systems. The application logic is executed in direct response to these events. In explicit contrast to Procedural Programming, the timing of the events is considered not controllable, and because of this many events may need to be serviced concurrently; in procedural programming, each step must run to completion before the next step in the chain can execute.
This means that in Event Driven logic it is more important to check the state of the system when performing decision logic. As the steps could conceivably be executed in any order, at any time, the state needs to be managed in a context outside of most of the logic, this is where OO concepts can start to become really helpful.
Most Graphical User Interfaces will implement forms of Event Driven programming. Think of a simple button click event, the user controls the timing of the execution, or if the button is clicked at all.
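A toy sketch of that button example (hypothetical names, not any real GUI toolkit): the handler runs only when, and if, the click actually happens.

```python
# Sketch: minimal event-driven button -- the registered handlers run
# whenever the user clicks; the program does not control the timing.
# Names are illustrative only.

class Button:
    def __init__(self):
        self._handlers = []

    def on_click(self, handler):
        self._handlers.append(handler)

    def click(self):              # simulates the user clicking
        for handler in self._handlers:
            handler()

clicks = []
button = Button()
button.on_click(lambda: clicks.append("clicked"))

button.click()                     # the user decides when, or whether,
button.click()                     # this ever happens
```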
From a testing point of view, the current state of the system is important to evaluate or control before testing a process. Depending on the type of events, this can raise complications during testing: you may need to simulate, impersonate or intercept other systems, or events from or to other systems.
Object Oriented Programming describes a style where the state of the system is modelled using classes that describe a set of metadata plus the behaviours and interactions with other objects. We can create instances of a class to create objects. In this way you can think of OO as first defining a series of templates, and then creating objects from those templates.
OO therefore ends up with a lot of additional boilerplate logic, and a lot more effort needs to go into the design of the state and environment before you really get into the behavioural or reactionary logic.
OO pairs really well with Event Driven programming, objects make it easier to represent the environment and nuanced changes to it.
From a testing point of view, OO makes it possible to replicate state or the environment without having access to the real operating environment. Because the logic is defined as a more granular set of behaviours within each object, we can easily test these behaviours in isolation from the rest of the system.
This same boon can become a burden, though: more care needs to be taken to ensure the state is defined accurately enough to obtain meaningful test results. To complete end-to-end testing there can be a lot of moving parts, and because the timing of events is less constrained (if at all) compared to PP, there are more permutations of potential outcomes to define and automate. Therefore, in OOP it becomes more important to test properly at a granular level, verifying discrete logic blocks to gain confidence before testing larger cascading sets of rules.

Commands always handled in one place?

I've read that, in NServiceBus, commands are always handled in a single place. Is this a general CQRS/event sourcing rule of thumb? If yes, what are the advantages? Why is it a bad idea to scale out command handling nodes?
A command represents the intention to change a specific part of the business state. It makes sense to have only one command handler, i.e. one place where that functionality is implemented. Also, inside a command handler we implement a business use case which has its own model and boundaries.
You can scale command handlers by adding more endpoints but it's the same code running in parallel and it's a risky affair, especially in distributed apps. It's easier and cheaper to scale vertically, but I'd say that very few app types need to scale the command side.
The underlying goal of messaging can help us to achieve loose coupling between components by maintaining the autonomy of each component and logical “Service”.
In general by using explicit naming, it should be clear what it is you expect the message handler to do. Combined with the “Single Responsibility Principle” (SRP) we can achieve better decomposition of our systems. For example, “UpdateUser” means nothing, while “UpdateUserPhoneNumber” or “ChangeUserPassword” is more like it :-).
We want to make sure we don’t mix logical (all this belongs in my Finance “Service” for example) and the physical deployable service (process).
There could be many physical processes (windows services, IIS/WEB processes, WCF, Desktop application) that host parts of a logical “Service” or a mix multiple logical “Services”.
Using the Command/Event semantics clarifies what is the intent and logical boundary of a message.
Commands:
– Internal communication between components inside the boundary of a logical “Service” is done using commands.
– Commands (as the name implies) can command another component within the boundary of the logical “Service”.
– They change the state of the processing component.
– They contain the data and context that the component (handler) needs in order to execute its task.
– Commands are “Sent” (using bus.Send() using NServiceBus) to exactly one component (handler) (one to one communication).
Events:
– Events are used for cross logical “Service” communications.
– They notify of things that happened in the past.
– They are light and contain only reference data like Ids and a small amount of context data.
– Events are published (bus.Publish() using NServiceBus)
– There is only one logical publisher and one or more subscribers.
– Events can also be used inside a logical “Service” between internal loosely coupled components.
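The command/event semantics above can be sketched with a toy bus (the API here is invented for illustration and is not the NServiceBus API): a command type has exactly one registered handler, while an event may have any number of subscribers.

```python
# Sketch contrasting the two semantics: a command is sent to exactly one
# handler; an event is published to zero or more subscribers.
# This bus is a made-up illustration, not a real messaging library.

class Bus:
    def __init__(self):
        self._command_handlers = {}   # message type -> single handler
        self._subscribers = {}        # message type -> list of handlers

    def register_handler(self, msg_type, handler):
        if msg_type in self._command_handlers:
            raise RuntimeError(f"{msg_type} already has a handler")
        self._command_handlers[msg_type] = handler

    def subscribe(self, msg_type, handler):
        self._subscribers.setdefault(msg_type, []).append(handler)

    def send(self, msg_type, data):        # one-to-one
        self._command_handlers[msg_type](data)

    def publish(self, msg_type, data):     # one-to-many
        for handler in self._subscribers.get(msg_type, []):
            handler(data)

log = []
bus = Bus()
bus.register_handler("ChangeUserPassword", lambda d: log.append(("handled", d)))
bus.subscribe("UserPasswordChanged", lambda d: log.append(("audit", d)))
bus.subscribe("UserPasswordChanged", lambda d: log.append(("email", d)))

bus.send("ChangeUserPassword", {"user_id": 42})      # exactly one handler
bus.publish("UserPasswordChanged", {"user_id": 42})  # every subscriber
```

Note that registering a second handler for the same command type raises an error, which is the "handled in one place" rule made explicit.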
To summarize:
Use Commands with data inside your logical “Service” boundary to change state.
Use Events with reference data for cross logical “Service” communications.
Follow the Single Responsibility Principle, it will help reduce the size of your units of work.
Reducing coupling is our objective.
Does that make sense?

How to illustrate an interrupt-driven process?

This question is related to diagramming a software process. As an electrical engineer, much of the software I write is for embedded microcontrollers. In school, we learned to illustrate our algorithms using flowcharts. However, nowadays many of my embedded projects are heavily interrupt-driven, where the main process runs some basic algorithm and a variety of interrupt sources provide its stimulus. So, my question is: what are some diagramming techniques I can use to illustrate my process such that future developers can understand what I am doing easily and get involved in development?
Here are some key features that I am looking for:
Shows data structures and how data is passed between processes & interrupts
Shows conditions that cause each interrupt
Shows how data is gathered and passed through a downlink
Shows how command messages are received, parsed, and executed
Ideally is well suited for hierarchical breakdown into smaller processes with greater levels of detail
I've always seen interrupt timing drawn as follows:
Or inline, like so:
But I prefer the former as it gives more room for annotation.
In response to your comment, perhaps a UML state machine diagram (with some adaptation) may be closer suited to your purpose:
There are many interesting approaches you can find to diagram drawing. I will post a few here. You will find a lot of operating-system- and architecture-specific names in them, such as register, event and function names. They are more for illustration. So here we are.
Use UML class diagrams for showing data structures. Use sequence diagrams to show interactions between classes and interrupt service routines (showing function calls only). Use activity diagrams to show how interrupts interact with processes (signals are good for this). An activity diagram can also be used to show the process of receiving data, parsing it, and dispatching it. This can also be represented in a static view by a package diagram where the command handler is in one package and the command parser is in another, connected by a dependency line. Use cases are good for a high level view of functionality.
The main point is that UML supports many different views (static, dynamic, logical, deployment) into your system. Don't try to express everything at once.
The diagram below shows an example of an interrupt to a process.

What are common alternatives to maintaining state in a desktop application other than state machines?

I am working on a desktop application that is a few years old. The application's state (regarding what the user is currently performing (multi-step actions), what computation is being performed, the state of/permissions on the data, background jobs, etc.) is maintained through many different methods (event subscription, member variables in controller classes, dependence on the internal logic/behaviour of other classes, etc...)
So my question is, what common patterns (other than explicit state machines) exist to manage the state of the application that are flexible enough to allow:
state nesting/localization to specific modules (every component's state isn't necessarily needed by every other component. A wizard, for example, would have a private/nested/local state that is exposed to any part of it but not to the entire application)
state easily exposed/shared/reachable (i.e: the selection in some view needs to be reachable/visible to a copy button and the button would also need to be aware of the context (is the user performing a multi-step operation or is some task running in the background so I can only copy and not cut))
It's a GUI application so we can depend on the hierarchical nature of the application when sharing/reaching different states.
State machines are simple enough to be understood by a novice programmer, so it may be easier to find a person capable of helping with development later. There are also a few existing libraries and tools for working with state machines, which may make things easier in other respects. You can also use more than one state machine and let them communicate via some simple pub/sub infrastructure.
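As a sketch of how simple an explicit state machine can be (states and events invented for illustration), a transition table plus a few lines of driver code is often enough:

```python
# Sketch: a table-driven state machine -- simple enough for a novice to
# follow, and extended by adding rows rather than changing code.
# States and events are illustrative only.

TRANSITIONS = {
    ("idle",    "start"):  "running",
    ("running", "pause"):  "paused",
    ("paused",  "start"):  "running",
    ("running", "finish"): "idle",
}

class StateMachine:
    def __init__(self, initial="idle"):
        self.state = initial

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)

sm = StateMachine()
sm.handle("start")
sm.handle("pause")
```

Several such machines could share events over a small pub/sub layer, as the answer suggests.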
A similar approach is Petri nets. They are a bit more complicated, and I have no real implementation experience with them yet, but they allow multiple states to be active at once, which expresses parallel processes. Otherwise they look very similar to a traditional finite state machine.

Event handling in component based game engine design

I imagine this question or variations of it get passed around a lot, so if what I'm saying is a duplicate, and the answers lie elsewhere, please inform me.
I have been researching game engine designs and have come across the component-based entity model. It sounds promising, but I'm still working out its implementation.
I'm considering a system where the engine is arranged of several "subsystems," which manage some aspect, like rendering, sound, health, AI, etc. Each subsystem has a component type associated with it, like a health component for the health subsystem. An "entity," for example an NPC, a door, some visual effect, or the player, is simply composed of one or more components, that when together give the entity its functionality.
I identified four main channels of information passing: a component can broadcast to all components in its current entity, a component can broadcast to its subsystem, a subsystem can broadcast to its components, and a subsystem can broadcast to other subsystems.
For example, if the user wanted to move their character, they would press a key. This key press would be picked up by the input subsystem, which then broadcasts the event; the player subsystem would pick it up and send it to all player components (and thus the entities those components compose), and those player components would tell their own entity's position component to go ahead and move.
All of this for a key press seems a bit long-winded, and I am certainly open to improvements to this architecture. But anyway, my main question still follows.
As for the events themselves, I considered having an event behave as in the visitor pattern. What matters to me is that if an event comes across a component it doesn't support (e.g. a move event has nothing directly to do with AI or health), it ignores the component. If an event doesn't find the component it's going after, it doesn't matter.
The visitor pattern almost works. However, it would require a virtual function for every type of component (i.e. visitHealthComponent, visitPositionComponent, etc.), even when the event has nothing to do with them. I could leave these functions empty (so such components would simply be ignored), but I would have to add another function every time I add a component.
My hopes were that I would be able to add a component without necessarily touching other places, and add an event without messing with other stuff.
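One common way around the per-component visit methods is dispatch keyed on the event's type, so unsupported events simply fall through; a sketch with invented names:

```python
# Sketch: each component registers handlers keyed by event type, and
# events it doesn't know about are silently ignored -- so adding a new
# event or component touches nothing else. All names are illustrative.

class Component:
    handlers = {}                        # event type name -> method name

    def receive(self, event):
        handler = self.handlers.get(type(event).__name__)
        if handler:                      # unsupported events fall through
            getattr(self, handler)(event)

class MoveEvent:
    def __init__(self, dx, dy):
        self.dx, self.dy = dx, dy

class PositionComponent(Component):
    handlers = {"MoveEvent": "on_move"}

    def __init__(self):
        self.x, self.y = 0, 0

    def on_move(self, event):
        self.x += event.dx
        self.y += event.dy

class HealthComponent(Component):
    handlers = {}                        # ignores MoveEvent entirely

pos, health = PositionComponent(), HealthComponent()
for component in (pos, health):
    component.receive(MoveEvent(1, 2))   # only pos reacts
```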
So, my two questions:
Are there any improvements my design could allow, in terms of efficiency, flexibility, etc.?
What would be the optimal way to handle events?
I have been thinking about using entity systems for one of my own projects and have gone through a similar thought process. My initial thought was to use an Observer pattern to deal with events. I, too, originally considered some kind of visitor pattern, but decided against it for the very reasons you bring up.
My thoughts are that the subsystems will provide a subsystem specific publish/subscribe interface, and thus subsystem dependencies will be resolved in a "semi-loosely" coupled fashion. Any subsystem that depends on events from another subsystem will know of the subscriber interface to that subsystem and thus can effectively make use of it.
Unfortunately, how these subscribers get handles to their publishers is still somewhat of an issue in my mind. At this point, I am favoring some kind of dynamic creation where each subsystem is instantiated, and then a second phase is used to resolve the dependencies and put all the subsystems into a "ready state".
Anyway, I am very interested in what worked out for you and any problems you encountered on your project :)
Use an event bus, aka event aggregator. What you want is an event mechanism that requires no coupling between subsystems, and an event bus will do just that.
http://martinfowler.com/eaaDev/EventAggregator.html
http://stackoverflow.com/questions/2343980/event-aggregator-implementation-sample-best-practices
etc
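A minimal sketch of such an event bus (hypothetical names): the publisher and subscriber never reference each other, only the bus.

```python
# Sketch: an event bus decouples subsystems -- the input subsystem
# publishes a key press without knowing who listens, and the player
# subsystem reacts without knowing who published. Names are illustrative.

class EventBus:
    def __init__(self):
        self._subscribers = {}           # event name -> list of callbacks

    def subscribe(self, event_name, callback):
        self._subscribers.setdefault(event_name, []).append(callback)

    def publish(self, event_name, payload=None):
        for callback in self._subscribers.get(event_name, []):
            callback(payload)

moves = []
bus = EventBus()

# Player subsystem: subscribes, never references the input subsystem.
bus.subscribe("key_pressed", lambda key: moves.append(key))

# Input subsystem: publishes, never references the player subsystem.
bus.publish("key_pressed", "W")
```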
This architecture is described here: http://members.cox.net/jplummer/Writings/Thesis_with_Appendix.pdf
There are at least three problems I encountered implementing this in a real project:
Systems aren't notified when something happens; the only way to find out is to ask: is the player dead? Is the wall no longer visible? And so on. To avoid this you can use a simple MVC instead of the observer pattern.
What if your object is a composite (i.e. it consists of other objects)? The system will have to traverse the whole hierarchy, asking about component state.
The main disadvantage is that this architecture mixes everything together. For example, why does the player need to know that you pressed a key?
I think the answer is layered architectures with an abstracted representation...
Excuse my bad English.
I am writing a flexible and scalable Java 3D game engine based on an entity-component system. I have finished some basic parts of it.
First, I want to say something about the ECS architecture: I don't agree that a component should communicate with other components in the same entity. Components should only store data, and systems should process them.
For the event-handling part, I think basic input handling should not be included in an ECS. Instead, I have a system called the Intent System and a component called the Intent Component, which contains many intents. An intent means an entity wants to do something to another entity.
The Intent System processes all the intents. When it processes an intent, it broadcasts the corresponding information to other systems or adds other components to the entity.
I also wrote an interface called Intent Generator. In a local game you can implement a keyboard- or mouse-input generator, and in a multiplayer game you can implement a network intent generator. An AI system can also generate intents.
You may think the Intent System processes too many things in the game, but in fact it offloads much of the processing to other systems. I also wrote a Script System: a specific special entity has a script component that does special things.
Originally, when I developed something, I always wanted to build a grand architecture that includes everything. But in game development that is sometimes very inefficient: different game objects may have completely different functions. ECS is great as a data-oriented programming system, but we cannot put everything into it for a complete game.
By the way, our ECS-based game engine will be open-sourced in the near future, so you can read it then. If you are interested, I also invite you to join us.