What are common alternatives to maintaining state in a desktop application other than state machines?

I am working on a desktop application that is a few years old. The application's state (what the user is currently performing in multi-step actions, what computation is being performed, the state of and permissions on the data, background jobs, etc.) is maintained through many different mechanisms: event subscriptions, member variables in controller classes, dependence on the internal logic/behaviour of other classes, and so on.
So my question is: what common patterns (other than explicit state machines) exist to manage the state of the application that are flexible enough to allow:
state nesting/localization to specific modules (not every component's state is needed by every other component; a wizard, for example, would have a private/nested/local state that is exposed to any part of the wizard but not to the entire application)
state that is easily exposed/shared/reachable (e.g. the selection in some view needs to be reachable/visible to a copy button, and the button also needs to be aware of the context: is the user performing a multi-step operation, or is some task running in the background so that I can only copy and not cut?)
It's a GUI application so we can depend on the hierarchical nature of the application when sharing/reaching different states.

State machines are simple enough to be understood by a novice programmer, so it may be easier to find a person capable of helping with development later. There are also a few existing libraries and tools for working with state machines, which may make things easier in other respects. You can also use more than one state machine and let them communicate via some simple pub/sub infrastructure.
A similar approach is Petri nets. They are a bit more complicated, and I have no real implementation experience with them yet, but they allow multiple states to be active at once to express parallel processes. Otherwise they look very similar to a traditional finite state machine.
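To make the pub/sub idea concrete, here is a minimal C# sketch of two small state machines coordinating through a simple bus. All class, topic and state names are hypothetical illustrations, not a particular library's API.

using System;
using System.Collections.Generic;

// Simple topic-based pub/sub bus: handlers are registered per topic string.
class EventBus
{
    private readonly Dictionary<string, List<Action<object>>> _subscribers =
        new Dictionary<string, List<Action<object>>>();

    public void Subscribe(string topic, Action<object> handler)
    {
        List<Action<object>> handlers;
        if (!_subscribers.TryGetValue(topic, out handlers))
            _subscribers[topic] = handlers = new List<Action<object>>();
        handlers.Add(handler);
    }

    public void Publish(string topic, object payload)
    {
        List<Action<object>> handlers;
        if (_subscribers.TryGetValue(topic, out handlers))
            foreach (Action<object> handler in handlers)
                handler(payload);
    }
}

enum JobState { Idle, Running, Done }
enum UiState { Ready, Busy }

// The background-job machine announces its transitions on the bus.
class BackgroundJobMachine
{
    private readonly EventBus _bus;
    public JobState State { get; private set; }

    public BackgroundJobMachine(EventBus bus) { _bus = bus; State = JobState.Idle; }

    public void Start()
    {
        State = JobState.Running;
        _bus.Publish("job.started", null);
    }
}

// The UI machine reacts to the transitions it cares about, without holding
// a direct reference to the job machine.
class UiMachine
{
    public UiState State { get; private set; }

    public UiMachine(EventBus bus)
    {
        State = UiState.Ready;
        bus.Subscribe("job.started", payload => State = UiState.Busy);
    }
}

Each machine stays small and local to its own module, and the bus is the only shared piece, which matches the nesting and sharing requirements in the question.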

Related

what are the relationships among procedural, object oriented and event driven paradigms?

I think the procedural, object oriented and event driven paradigms are the main paradigms in software development. How do I build a relationship among them?
It is hard to clarify what the relationships among them are.
Procedural and Event Driven describe the general workflow of the application or its decision-making logic, whereas Object Oriented describes more the structure of that decision-making logic.
Procedural describes a sequential workflow of logic: in general there are many steps that must be performed in a sequence, and there may be criteria between steps that depend on the outcome of previous steps, but the sequence of logic is pre-determined and hard-coded into the application.
In procedural programming the state of the system is generally passed directly between the steps, so it does not need to exist in a context outside of the executing logic, and there is less need to formally manage the shape or structure of this state.
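As a minimal illustration of that direct hand-off (all names hypothetical), each step consumes the state produced by the previous one, and the sequence is fixed in code:

using System;

class ProceduralExample
{
    // Hypothetical state object passed from step to step.
    class Order { public decimal Total; }

    static Order ReadOrder() { return new Order { Total = 100m }; }

    // Each step consumes the previous step's output and returns new state.
    static Order ApplyPricing(Order order)
    {
        order.Total *= 0.9m; // e.g. a 10% discount
        return order;
    }

    static void SaveOrder(Order order)
    {
        Console.WriteLine("Saved order total: " + order.Total);
    }

    static void Main()
    {
        // The sequence is pre-determined; state never lives outside this flow.
        Order order = ReadOrder();
        Order priced = ApplyPricing(order);
        SaveOrder(priced);
    }
}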
Procedural logic complements functional programming architectures but can be used in many contexts.
Procedural logic suits scenarios where interaction with external systems is instantaneous or not required, or where it is acceptable for your logic to halt processing until the external system responds.
Procedural logic may raise events for external event driven logic to respond to, but that alone doesn't make it event driven.
From a testing point of view, properly testing a pure procedural application requires the whole application to be completed. You could also test each step in the process by directly evaluating the state or result of each step, but in pure procedural programming the state is not maintained in a context that we can easily access outside of the logic, so the only way to test is to run each process to completion and review the results.
This means that external state is generally less of a concern when testing procedural logic.
End-to-end testing is greatly simplified because there are fewer permutations of outcomes to consider.
Event Driven describes a workflow where the system raises event messages or responds to events raised by other systems. The application logic is executed in direct response to these events. In explicit contrast to procedural programming, the timing of the events is considered not controllable, so many events may need to be serviced concurrently; in procedural programming each step must run to completion before the next step in the chain can execute.
This means that in Event Driven logic it is more important to check the state of the system when performing decision logic. As the steps could conceivably be executed in any order, at any time, the state needs to be managed in a context outside most of the logic; this is where OO concepts start to become really helpful.
Most Graphical User Interfaces implement forms of Event Driven programming. Think of a simple button click event: the user controls the timing of the execution, or whether the button is clicked at all.
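To sketch that in code (a hypothetical Button class rather than a real GUI toolkit), the handler runs whenever the user happens to click, not at a pre-determined point in a sequence:

using System;

// Hypothetical button: it exposes an event, and the user, not the program,
// decides when (or whether) it fires.
class Button
{
    public event EventHandler Click;

    public void SimulateUserClick()
    {
        if (Click != null) Click(this, EventArgs.Empty);
    }
}

class EventDrivenExample
{
    static void Main()
    {
        Button copyButton = new Button();
        // The handler runs in response to the event, not in a fixed sequence.
        copyButton.Click += (sender, e) => Console.WriteLine("Copied selection");
        copyButton.SimulateUserClick();
    }
}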
From a testing point of view, the current state of the system is important to evaluate or control before testing a process. Depending on the type of events, this can raise complications during testing; you may need to simulate, impersonate or intercept other systems, or events from or to other systems.
Object Oriented Programming describes a style where the state of the system is modelled using classes that describe a set of data along with the behaviours and interactions with other objects. We can create instances of a class to create objects. In this way you can think of OO as first defining a series of templates, and then creating objects from those templates.
OO therefore ends up with a lot of additional boilerplate logic, and a lot more effort needs to go into the design of the state and environment before you really get into the behavioural or reactionary logic.
OO pairs really well with Event Driven programming; objects make it easier to represent the environment and nuanced changes to it.
From a testing point of view, OO makes it possible to replicate state or the environment without having access to the real operating environment. Because the logic is defined as a more granular set of behaviours within each object, we can easily test these behaviours in isolation from the rest of the system.
This same boon can become a burden though: more care needs to be taken to ensure the state is defined accurately enough to obtain meaningful test results. To complete end-to-end testing there can be a lot of moving parts, and because the timing of events is less constrained (if at all) compared to procedural programming, there is a greater number of permutations of potential outcomes to define and automate. Therefore in OOP it becomes more important to test properly at a granular level, verifying discrete logic blocks to gain confidence before testing larger cascading sets of rules.

OOP - Can a part of the application you are designing also be an actor?

I am studying Object Oriented Design and am using use cases with actors and scenarios to plan out the application I am trying to build. No specific language yet, just the theory at the moment.
I have come to the point where I have identified and written out the use cases for the users, administrator, owner, etc., and also for the external systems like the feed generator.
But I have come to realise that my application actually consists of multiple smaller apps, like a data-gathering application and an analysis application.
Can/should I use the data-gathering and analysis apps as actors in the overall application too?
I can write specific use cases for them, with scenarios etc.
Typically, no.
An actor is an entity that sits outside of the system and produces some action. It reaches the system boundary, but beyond that boundary all interactions between system components are modelled not as use cases but as, for example, dynamic diagrams or sequence diagrams.
For the record, I think this approach is flawed and doesn't really help you in building applications. I personally prefer thinking about components and their interactions directly, without forcing the architecture to fit a particular modeling scheme.

How to Design a generic business entity and still be OO?

I am working on a packaged product that is supposed to cater to multiple clients with varying requirements (to a certain degree), and as such it should be built to be flexible enough to be customizable by each specific client. The kind of customization we are talking about here is that different clients may have differing attributes on some of the key business objects. They could also have differing business logic tied to their additional attributes.
As a very simplistic example, consider "Automobile" to be a business entity in the system with four key attributes: VehicleNumber, YearOfManufacture, Price and Colour.
It is possible that one of the clients using the system adds two more attributes to Automobile, namely ChassisNumber and EngineCapacity. This client needs business logic associated with these fields to validate that the same ChassisNumber doesn't already exist in the system when a new Automobile is added.
Another client just needs one additional attribute called SaleDate. SaleDate has its own business logic check, which validates that the vehicle doesn't appear in police records as a stolen vehicle when the sale date is entered.
Most of my experience has been in making enterprise apps for a single client, and I am really struggling to see how, in an object-oriented paradigm, I could handle a business entity whose attributes are dynamic and which also has the capacity for dynamic business logic.
Key Issues
Are there any general OO principles/patterns that would help me in tackling this kind of design?
I am sure people who have worked on generic / packaged products would have faced similar scenarios in most of them. Any advice / pointers / general guidance is also appreciated.
My technology is .NET 3.5/C#, and the project has a layered architecture with a business layer that consists of business entities that encapsulate their business logic.
This is one of our biggest challenges, as we have multiple clients that all use the same code base, but have widely varying needs. Let me share our evolution story with you:
Our company started out with a single client, and as we began to get other clients, you'd start seeing things like this in the code:
if(clientName == "ABC") {
// do it the way ABC client likes
} else {
// do it the way most clients like.
}
Eventually we got wise to the fact that this makes for really ugly and unmanageable code. If another client wanted theirs to behave like ABC's in one place and like CBA's in another place, we were stuck. So instead, we turned to a .properties file with a bunch of configuration points.
if((bool)configProps.get("LastNameFirst")) {
// output the last name first
} else {
// output the first name first
}
This was an improvement, but still very clunky. "Magic strings" abounded. There was no real organization or documentation around the various properties. Many of the properties depended on other properties and wouldn't do anything (or would even break something!) if not used in the right combinations. Much (possibly even most) of our time in some iterations was spent fixing bugs that arose because we had "fixed" something for one client that broke another client's configuration. When we got a new client, we would just start with the properties file of another client that had the configuration "most like" the one this client wanted, and then try to tweak things until they looked right.
We tried using various techniques to get these configuration points to be less clunky, but only made moderate progress:
if(userDisplayConfigBean.showLastNameFirst()) {
// output the last name first
} else {
// output the first name first
}
There were a few projects to get these configurations under control. One involved writing an XML-based view engine so that we could better customize the displays for each client.
<client name="ABC">
<field name="last_name" />
<field name="first_name" />
</client>
Another project involved writing a configuration management system to consolidate our configuration code, enforce that each configuration point was well documented, allow super users to change the configuration values at run-time, and allow the code to validate each change to avoid getting an invalid combination of configuration values.
These various changes definitely made life a lot easier with each new client, but most of them failed to address the root of our problems. The change that really benefited us most was when we stopped looking at our product as a series of fixes to make something work for one more client, and we started looking at our product as a "product." When a client asked for a new feature, we started to carefully consider questions like:
How many other clients would be able to use this feature, either now or in the future?
Can it be implemented in a way that doesn't make our code less manageable?
Could we implement a different feature than the one they are asking for, which would still meet their needs while being more suited to reuse by other clients?
When implementing a feature, we would take the long view. Rather than creating a new database field that would only be used by one client, we might create a whole new table which could allow any client to define any number of custom fields. It would take more work up-front, but we could allow each client to customize their own product with a great degree of flexibility, without requiring a programmer to change any code.
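As a rough sketch of that idea (all names hypothetical), the "custom fields" approach might look like this, with per-client attributes stored as data rather than as columns:

using System.Collections.Generic;

// Mirrors a "custom field definitions" table: each client defines fields as data.
class CustomFieldDefinition
{
    public int Id;
    public string ClientName;  // e.g. "ABC"
    public string FieldName;   // e.g. "ChassisNumber"
    public string DataType;    // e.g. "string", "int", "date"
}

// Mirrors a "custom field values" table: one row per entity per custom field.
class CustomFieldValue
{
    public int FieldDefinitionId; // points at a CustomFieldDefinition
    public int EntityId;          // the Automobile (or other entity) it extends
    public string Value;          // stored as text, converted using DataType
}

class Automobile
{
    public int Id;
    public string VehicleNumber;
    public int YearOfManufacture;
    // Fixed attributes above; per-client attributes live in the value rows below.
    public List<CustomFieldValue> CustomValues = new List<CustomFieldValue>();
}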
That said, sometimes there are certain customizations that you can't really accomplish without investing an enormous effort in complex Rules engines and so forth. When you just need to make it work one way for one client and another way for another client, I've found that your best bet is to program to interfaces and leverage dependency injection. If you follow "SOLID" principles to make sure your code is written modularly with good "separation of concerns," etc., it isn't nearly as painful to change the implementation of a particular part of your code for a particular client:
// Default implementation, suitable for most clients.
public class FirstLastNameGenerator : INameDisplayGenerator
{
IPersonRepository _personRepository;
public FirstLastNameGenerator(IPersonRepository personRepository)
{
_personRepository = personRepository;
}
public string GenerateDisplayNameForPerson(int personId)
{
Person person = _personRepository.GetById(personId);
return person.FirstName + " " + person.LastName;
}
}
// Client-specific Ninject module: overrides the default binding for client ABC.
public class AbcModule : NinjectModule
{
public override void Load()
{
Rebind<INameDisplayGenerator>().To<FirstLastNameGenerator>();
}
}
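As a hypothetical usage snippet (StandardKernel and Get<T> are standard Ninject calls; the wiring itself is illustrative), resolving the interface then yields whichever implementation the client's module bound:

// Requires: using Ninject;
var kernel = new StandardKernel(new AbcModule());
var generator = kernel.Get<INameDisplayGenerator>();
string display = generator.GenerateDisplayNameForPerson(42);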
This approach is enhanced by the other techniques I mentioned earlier. For example, I didn't write an AbcNameGenerator because maybe other clients will want similar behavior in their programs. But using this approach you can fairly easily define modules that override default settings for specific clients, in a way that is very flexible and extensible.
Because systems like this are inherently fragile, it is also important to focus heavily on automated testing: Unit tests for individual classes, integration tests to make sure (for example) that your injection bindings are all working correctly, and system tests to make sure everything works together without regressing.
PS: I use "we" throughout this story, even though I wasn't actually working at the company for much of its history.
PPS: Pardon the mixture of C# and Java.
That's a Dynamic Object Model or Adaptive Object Model you're building. And of course, when customers start adding behaviour and data, they are programming, so you need to have version control, tests, releases, namespace/context management and rights management for that.
A way of approaching this is to use a meta-layer, or reflection, or both. In addition you will need to provide a customisation application that allows your users to modify the business logic layer. Such a meta-layer does not really fit in your layered architecture; it is more like a layer orthogonal to your existing architecture, though the running application will probably need to refer to it, at least on initialisation. This type of facility is probably one of the fastest ways of screwing up the production application known to man, so you must:
Ensure that the access to this editor is limited to people with a high level of rights on the system (eg administrator).
Provide a sandbox area for the customer modifications to be tested before any changes they are testing are put on the production system.
An "OOPS" facility whereby they can revert their production system to either your provided initial default, or to the last revision before the change.
Your meta-layer must be very tightly specified so that the range of activities is closely defined - George Orwell's "What is not specifically allowed, is forbidden."
Your meta-layer will have objects in it such as Business Object, Method, Property and events such as Add Business Object, Call Method etc.
There is a wealth of information about meta-programming available on the web, but I would start with Pattern Languages of Program Design Vol 2 or any of the WWW resources related to, or emanating from Kent or Coplien.
We develop an SDK that does something like this. We chose COM for our core because we were far more comfortable with it than with low-level .NET, but no doubt you could do it all natively in .NET.
The basic architecture is something like this: Types are described in a COM type library. All types derive from a root type called Object. A COM DLL implements this root Object type and provides generic access to derived types' properties via IDispatch. This DLL is wrapped in a .NET PIA assembly because we anticipate that most developers will prefer to work in .NET. The Object type has a factory method to create objects of any type in the model.
Our product is at version 1 and we haven't implemented methods yet - in this version business logic must be coded into the client application. But our general vision is that methods will be written by the developer in his language of choice, compiled to .NET assemblies or COM DLLs (and maybe Java too) and exposed via IDispatch. Then the same IDispatch implementation in our root Object type can call them.
If you anticipate that most of the custom business logic will be validation (such as checking for duplicate chassis numbers) then you could implement some general events on your root Object type (assuming you did it something like the way we do.) Our Object type fires an event whenever a property is updated, and I suppose this could be augmented by a validation method that gets called automatically if one is defined.
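A sketch of what such a hook might look like (hypothetical names, not the SDK's actual API): every property update passes through an optional validator and then raises an event.

using System;
using System.Collections.Generic;

// Root type whose property updates raise an event and pass through a validator.
class RootObject
{
    private readonly Dictionary<string, object> _values =
        new Dictionary<string, object>();

    public event Action<string, object> PropertyUpdated;

    // Client-supplied hook: returns an error message, or null if the value is valid.
    public Func<string, object, string> Validator;

    public void SetProperty(string name, object value)
    {
        if (Validator != null)
        {
            string error = Validator(name, value);
            if (error != null)
                throw new InvalidOperationException(error);
        }
        _values[name] = value;
        if (PropertyUpdated != null)
            PropertyUpdated(name, value);
    }
}

A client could then register a duplicate-chassis-number check as the Validator on its Automobile objects without touching the core code.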
It takes a lot of work to create a generic system like this, but the payoff is that application development on top of the SDK is very quick.
You say that your customers should be able to add custom properties and implement business logic themselves "without programming". If your system also implements data storage based on the types (ours does) then the customer could add properties without programming, by editing the model (we provide a GUI model editor.) You could even provide a generic user application that dynamically presents the appropriate data-entry controls depending on the types, so your customers could capture custom data without additional programming. (We provide a generic client application but it's more a developer tool than a viable end-user application.) I don't see how you could allow your customers to implement custom logic without programming... unless you want to provide some kind of drag-n-drop GUI workflow builder... surely a huge task.
We don't envisage business users doing any of this stuff. In our development model all customisation is done by a developer, but not necessarily an expensive one; part of our vision is to allow less experienced developers to produce robust business applications.
Design a core model that acts as its own independent project
Here's a list of some possible basic requirements...
The core design would contain:
classes that work in (and can possibly be extended by) all of the subprojects.
more complex tools like database interactions (unless those are project specific)
a general configuration structure that should be considered standard across all projects
Then, all of the subsequent projects that are customized per client are considered extensions of this core project.
What you're describing is the basic purpose of any framework: namely, create a core set of functionality that can be set apart from the whole so you don't have to duplicate that development effort in every project you create. I.e., drop in a framework and half your work is done already.
You might say, "what about the SCM (Software Configuration Management)?"
How do you track revision history of all of the subprojects without including the core into the subproject repository?
Fortunately, this is an old problem. Many software projects, especially those in the Linux/open-source world, make extensive use of external libraries and plugins.
In fact, git has a command that's specifically used to import one project repository into another as a sub-repository (preserving all of the sub-repository's revision history, etc.). You can't modify the contents of the sub-repository from the parent project, because the parent won't track its history at all.
The command I'm talking about is called 'git submodule'.
You may ask, "what if I develop a really cool feature in one client's project that I'd like to use in all of my clients' projects?".
Just add that feature to the core and update the submodule reference in all the other projects (for example, with 'git submodule update --remote'). The way git submodule works is that it points to a specific commit within the sub-repository's history tree, so when that tree is changed upstream, you need to pull those changes back downstream to the projects where they're used.
The structure to implement such a thing would work like this. Let's say that your software is written specifically to manage a car dealership (inventory, sales, employees, customers, orders, etc.). You create a core module that covers all of these features because they are expected to be used in the software for all of your clients.
But you have recently gained a new client who wants to be more tech-savvy by adding online sales to their dealership. Of course, their website is designed by a separate team of web developers/designers and a webmaster, but they want a web API (i.e., a service layer) to tap into the current infrastructure for their website.
What you'd do is create a project for the client (we'll call it WebDealersRUs) and link the core submodule into the repository.
The hidden benefit of this is that once you start to look at a codebase as pluggable parts, you can start to design those parts from the beginning as modular pieces that can be dropped into a project with very little effort.
Consider the example above. Let's say that your client base is starting to see the merits of adding a web front-end to increase sales. Just pull the web API out of WebDealersRUs into its own repository and link it back in as a submodule. Then propagate it to all of your clients that want it.
What you get is a major payoff with minimal effort.
Of course there will always be parts of every project that are client-specific (branding, etc.). That's why every client should have a separate repository containing their unique version of the software. But that doesn't mean you can't pull parts out and generalize them to be reused in subsequent projects.
While I approach this issue from the macro level, it can be applied to smaller/more specific parts of the codebase. The key here is that code you wish to reuse needs to be genericized.
OOP comes into play here as follows: where functionality is implemented in the core but extended in a client's code, you'll use a base class and inherit from it; where functionality is expected to return a similar type of result but its implementations may be wildly different across classes (i.e., there's no direct inheritance hierarchy), it's best to use an interface to enforce that relationship.
I know your question is general, not tied to a technology, but since you mention you actually work with .NET, I suggest you look at a new and very important piece of technology that is part of .NET 4: the 'dynamic' type.
There is also a good article on CodeProject here: DynamicObjects – Duck-Typing in .NET.
It's probably worth looking at because, if I had to implement the dynamic system you describe, I would certainly try to implement my entities based on the DynamicObject class and add custom properties and methods using the TryGetMember/TrySetMember overrides. It also depends on whether you are focused on compile time or runtime. Here is an interesting link on SO on this subject: Dynamically adding members to a dynamic object.
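As a minimal sketch of that approach (the DynamicObject overrides are the real .NET 4 API; the entity class and usage are hypothetical):

using System.Collections.Generic;
using System.Dynamic;

// A business entity backed by DynamicObject, so clients can attach
// custom properties at runtime without code changes.
class DynamicEntity : DynamicObject
{
    private readonly Dictionary<string, object> _properties =
        new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _properties.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _properties[binder.Name] = value;
        return true;
    }
}

// Usage: dynamic car = new DynamicEntity(); car.ChassisNumber = "CH-123";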
I feel there are two approaches:
1) If different clients fall into the same domain (such as manufacturing or finance), then it's better to design objects in such a way that a BaseObject holds the attributes that are very common, while the others, which can vary between clients, are stored as key-value pairs. On top of that, try to use a rules engine like IBM ILOG (http://www-01.ibm.com/software/integration/business-rule-management/rulesnet-family/about/).
2) Predictive Model Markup Language (http://en.wikipedia.org/wiki/PMML)

Event handling in component based game engine design

I imagine this question or variations of it get passed around a lot, so if what I'm saying is a duplicate, and the answers lie elsewhere, please inform me.
I have been researching game engine designs and have come across the component-based entity model. It sounds promising, but I'm still working out its implementation.
I'm considering a system where the engine is composed of several "subsystems" that each manage some aspect, like rendering, sound, health, AI, etc. Each subsystem has a component type associated with it, like a health component for the health subsystem. An "entity", for example an NPC, a door, some visual effect, or the player, is simply composed of one or more components that together give the entity its functionality.
I identified four main channels of information passing: a component can broadcast to all components in its current entity, a component can broadcast to its subsystem, a subsystem can broadcast to its components, and a subsystem can broadcast to other subsystems.
For example, if the user wanted to move their character, they would press a key. This key press would be picked up by the input subsystem, which then broadcasts the event; it would be picked up by the player subsystem. The player subsystem then sends this event to all player components (and thus the entities those components compose), and those player components would tell their own entity's position component to go ahead and move.
All of this for a key press seems a bit long-winded, and I am certainly open to improvements to this architecture. But anyway, my main question still follows.
As for the events themselves, I considered having an event behave as in the visitor pattern. What matters to me is that if an event comes across a component it doesn't support (a move event has nothing directly to do with AI or health, for instance), it would ignore the component. If an event doesn't find the component it's going after, it doesn't matter.
The visitor pattern almost works. However, it would require that I have virtual functions for every type of component (i.e. visitHealthComponent, visitPositionComponent, etc.) even when an event has nothing to do with them. I could leave these functions empty (so if an event did come across those components, it would be ignored), but I would have to add another function every time I add a component.
My hopes were that I would be able to add a component without necessarily adding stuff to other places, and add an event without messing with other stuff.
So, my two questions:
Are there any improvements my design could allow, in terms of efficiency, flexibility, etc.?
What would be the optimal way to handle events?
I have been thinking about using entity systems for one of my own projects and have gone through a similar thought process. My initial thought was to use an Observer pattern to deal with events; I, too, originally considered some kind of visitor pattern, but decided against it for the very reasons you bring up.
My thoughts are that the subsystems will provide a subsystem specific publish/subscribe interface, and thus subsystem dependencies will be resolved in a "semi-loosely" coupled fashion. Any subsystem that depends on events from another subsystem will know of the subscriber interface to that subsystem and thus can effectively make use of it.
Unfortunately, how these subscribers get handles to their publishers is still somewhat of an issue in my mind. At this point, I am favoring some kind of dynamic creation where each subsystem is instantiated, and then a second phase is used to resolve the dependencies and put all the subsystems into a "ready state".
Anyway, I am very interested in what worked out for you and any problems you encountered on your project :)
Use an event bus, a.k.a. an event aggregator. What you want is an event mechanism that requires no coupling between subsystems, and an event bus does exactly that.
http://martinfowler.com/eaaDev/EventAggregator.html
http://stackoverflow.com/questions/2343980/event-aggregator-implementation-sample-best-practices
etc
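To show the idea, here is a minimal typed event-aggregator sketch in C# (all names hypothetical): subsystems publish and subscribe by event type and never reference each other directly.

using System;
using System.Collections.Generic;

class EventAggregator
{
    private readonly Dictionary<Type, List<Delegate>> _handlers =
        new Dictionary<Type, List<Delegate>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        List<Delegate> list;
        if (!_handlers.TryGetValue(typeof(TEvent), out list))
            _handlers[typeof(TEvent)] = list = new List<Delegate>();
        list.Add(handler);
    }

    public void Publish<TEvent>(TEvent evt)
    {
        List<Delegate> list;
        if (_handlers.TryGetValue(typeof(TEvent), out list))
            foreach (Action<TEvent> handler in list)
                handler(evt);
    }
}

// Example event: the input subsystem publishes this; the player subsystem
// subscribes to it. Neither holds a reference to the other.
class KeyPressedEvent
{
    public char Key;
}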
Regarding the architecture described here: http://members.cox.net/jplummer/Writings/Thesis_with_Appendix.pdf
There are at least three problems I encountered when implementing it in a real project:
Systems aren't notified when something happens; the only way to find out is to ask. Is the player dead? Is the wall no longer visible? And so on. To avoid this you can use a simple MVC approach instead of the observer pattern.
What if your object is a composite (i.e. consists of other objects)? The system will have to traverse the whole hierarchy, asking about component state.
The main disadvantage is that this architecture mixes everything together; for example, why does the player need to know that you pressed a key?
I think the answer is a layered architecture with an abstracted representation...
Excuse my bad English.
I am writing a flexible and scalable Java 3D game engine based on an entity-component system. I have finished some basic parts of it.
First, I want to say something about the ECS architecture: I don't agree that a component should communicate with other components in the same entity. Components should only store data, and systems should process them.
On the event handling part, I think basic input handling should not be included in an ECS. Instead, I have a system called the intent system and a component called the intent component, which contains many intents. An intent means that an entity wants to do something to another entity.
The intent system processes all the intents. When it processes an intent, it broadcasts the corresponding information to other systems or adds other components to the entity.
I also wrote an interface called the intent generator. In a local game you can implement a keyboard or mouse input generator, and in a multiplayer game you can implement a network intent generator. The AI system can also generate intents.
You may think the intent system processes too many things in the game, but in fact it delegates much of the processing to other systems. I also wrote a script system: an entity that needs special behaviour gets a script component that does special things.
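A rough C# rendering of the described pieces (the author's engine is Java, and all of these names are guesses from the description rather than the engine's real API):

using System.Collections.Generic;

// Intents are plain data describing that one entity wants to act on another.
class Intent
{
    public int SourceEntityId;  // the entity that wants to act
    public int TargetEntityId;  // the entity acted upon
    public string Action;       // e.g. "Move", "Attack"
}

// Components only store data; the intent system processes them.
class IntentComponent
{
    public readonly List<Intent> PendingIntents = new List<Intent>();
}

// Generators produce intents: keyboard/mouse input in a local game,
// the network in a multiplayer game, or the AI system.
interface IIntentGenerator
{
    IEnumerable<Intent> GenerateIntents();
}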
Originally, when I developed something, I always wanted to build a great architecture that included everything. But for game development that is sometimes very inefficient, because different game objects may have completely different functions. ECS is great as a data-oriented programming system, but we cannot include everything in it for a complete game.
By the way, our ECS-based game engine will be open source in the near future, and then you can read it. If you are interested in it, I also invite you to join us.

What is a workflow system? [closed]

How can I differentiate a Workflow system from a normal application that automates some work? Are there any specific feature a system must have to be categorized as a workflow system?
Workflow systems manage objects (often logical or actual electronic replacements for documents) that have an associated state. The state of an object in the system is a node in a state machine (or a Petri net).
State transitions move an object from one state to another. The transitions can be triggered by people, automated events, timers, calendars, etc. Usually the transitions represent steps in a process in the real world.
That's pretty abstract, so consider an example: bug tracking software. A bug report probably starts out unvalidated, and as such is in a QA tester's queue. The QA tester will validate the report and make sure the steps are clear, grade the report for severity etc., and assign it to a developer or developer group. It is then in the developer's queue, who will ultimately fix or decide not to fix the bug, which will send it back to QA for validation. If there's a dispute over the bug, it might go into a state in which it bubbles up the management stack.
A trivial implementation of the above is to use an enumeration for the state associated with every object, and to make everybody's inbox a query for objects whose state has a particular enumeration value.
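That trivial implementation might look like the following sketch (hypothetical names): the state is an enum, and an inbox is nothing more than a query over it.

using System;
using System.Collections.Generic;
using System.Linq;

enum BugState { Unvalidated, Assigned, Fixed, Closed }

class BugReport
{
    public int Id;
    public BugState State;
}

class BugTracker
{
    private readonly List<BugReport> _bugs = new List<BugReport>();

    public void Add(BugReport bug) { _bugs.Add(bug); }

    // The QA tester's inbox: every bug currently awaiting validation.
    public IEnumerable<BugReport> QaInbox()
    {
        return _bugs.Where(b => b.State == BugState.Unvalidated);
    }
}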
That's the gist of it, but things can get more complex, such as splitting off new objects, reacting to non-human events such as timing, internal or external (i.e. third-party) services, etc.
A workflow management system pushes users through a process that hops across more than one function within a system and potentially more than one participant in the workflow. The workflow engine knows about the state of the process, and stashes this in its own storage, which may or may not be a part of the main application database.
Workflows can be modelled using state machines, Petri nets or various other constructs, including plain old scripting languages. An independent orchestration system can be used to manage workflows across multiple applications. Many (but not all) support BPEL (Business Process Execution Language) as a standard description language for workflows. If your applications are set up as a service-oriented architecture, the workflow system can control the application functionality to do this. The other feature of a workflow engine is that it should be possible to abort the workflow and undo any state changes while maintaining database consistency.
www.workflowpatterns.com has a collection of documents about workflow systems, along with a library of patterns.
Examples of applications for workflow systems:
Insurance Claims Administration. Essentially the grand-daddy of them all. Typically this process will have rules covering who can authorise how much, who can process claims on different classes of business, what documents need to be present and linked to provide an audit trail, issuing and tracking the issue of payments and authorising loss-adjustment work. A typical claims workflow would track a claim from notification to authorisation of reserves and payments, through to issuing the payment, with side-processes for arranging loss-adjustment work (possibly through third parties), holding payment authority until this is finished and issuing and paying loss-adjusting expenses. In practice, these processes can get quite complicated.
Ordering, quoting and configuration management of a complex product such as a computer system.
One unusual example was a system developed by a pharmaceutical manufacturer that tracked the approval process for a new pharmaceutical product. This also incorporated an expert system that had the regulatory framework coded in it and could pick a shortest path through the regulatory hoops. This dropped their average R&D life-cycle time to market from 9 years to 5.5 years.
I would consider the application a workflow system if the user is guided through the business process without the need to refer to any external documentation of that business process.
Expanding on Barry's bug-tracking example, I would say your bug-tracking application is a workflow application if, for example, there is a button called "Close" that, when pressed, transitions the bug into a closed state, perhaps allows the user to enter a closing comment, records the timestamp and user name, and then sends a notification e-mail.
It is not a workflow system if the user has to pick the new state from a drop-down and then send the notification e-mail himself.
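A sketch of that "Close" transition (hypothetical names): the workflow step bundles the state change, the audit fields and the notification so the user never orchestrates them by hand.

using System;

enum BugState { Open, Closed }

class Bug
{
    public BugState State = BugState.Open;
    public string ClosingComment;
    public string ClosedBy;
    public DateTime? ClosedAt;

    // One workflow transition: state change, audit trail and notification together.
    public void Close(string user, string comment, Action<string> notify)
    {
        State = BugState.Closed;
        ClosingComment = comment;
        ClosedBy = user;
        ClosedAt = DateTime.UtcNow;
        notify("Bug closed by " + user); // e.g. send the notification e-mail
    }
}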
I don't think there is a precise definition. Here are some loose criteria:
it coordinates the work of more than one person (but is not groupware),
in a complex, organizational setting,
usually as part of a managed business process (as in BPM).
What is workflow?
"The automation of a business process, in whole or part, during which documents, information or tasks are passed from one participant* to another for action, according to a set of procedural rules."
*participant = resource (human or machine)
In my opinion, there are two types of workflow in terms of software: static (or built-in) workflow and dynamic (programmable) workflow. Many document management, bug tracking, and even blogging applications have built-in workflow using simple state transitions.
When people say "workflow," I think they generally mean the kind where you can program business logic into some existing software, like automatically calling Sysco if the inventory is short on carrots.
For a real-life example of workflow feature, see Microsoft Dynamics CRM.