Get BPMN process's "Normal" Workflow

Using Camunda's REST API, what would be the best way to get and/or define a workflow's "normal" or "happy path"? I've seen some mention of this in BPMN documentation, but it seems like more of a general 'idea' than something specifically supported by the specification. If necessary, I can also customize my BPMN workflow to accommodate this normal-path notation.
Eventually what I would like is a way to pull a task's process history and display its current path through the workflow along with what the next step in the "normal" case would be. What I'm looking for is just the data (not the diagram) so that I can customize the UI as I see fit.
For now I see I can get the task's history:
https://docs.camunda.org/manual/7.5/reference/rest/history/user-operation-log/
and the workflow diagram (or XML):
https://docs.camunda.org/manual/7.5/reference/rest/process-definition/get-diagram/
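For instance, I imagine something along the lines of the sketch below: pull the process definition's XML (via the XML endpoint rather than the diagram one) and look for a custom "happy path" marker on sequence flows. The happyPath extension property, the engine URL, and the process definition id are all my own placeholders, not anything defined by BPMN or Camunda:

```java
// Rough sketch only. The "happyPath" extension property is something I would
// add to my own BPMN model; it is not part of the BPMN spec or of Camunda.
// The engine URL and process definition id are placeholders.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class HappyPathSketch {

    public static void main(String[] args) throws Exception {
        String engine = "http://localhost:8080/engine-rest";   // placeholder
        String processDefinitionId = "invoice:1:someId";       // placeholder

        // GET /process-definition/{id}/xml returns JSON containing the BPMN XML.
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(engine + "/process-definition/" + processDefinitionId + "/xml"))
                .GET()
                .build();
        String json = http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        String bpmnXml = new ObjectMapper().readTree(json).get("bpmn20Xml").asText();

        // Walk the sequence flows and keep the ones I have marked as the "normal" path.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(bpmnXml.getBytes(StandardCharsets.UTF_8)));
        NodeList flows = doc.getElementsByTagNameNS("*", "sequenceFlow");
        for (int i = 0; i < flows.getLength(); i++) {
            Element flow = (Element) flows.item(i);
            NodeList props = flow.getElementsByTagNameNS("*", "property");
            for (int j = 0; j < props.getLength(); j++) {
                Element prop = (Element) props.item(j);
                if ("happyPath".equals(prop.getAttribute("name"))
                        && "true".equals(prop.getAttribute("value"))) {
                    System.out.println(flow.getAttribute("sourceRef")
                            + " -> " + flow.getAttribute("targetRef"));
                }
            }
        }
    }
}
```

From there, the history entries could be matched against those flows to show where the instance currently is and what the next "normal" step would be.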

Related

Tools available for creating a BPMN file

Does anyone know if there is a free online tool, other than bpmn.io, for creating BPMN files?
I have been using bpmn.io for a while, and it does not allow me to change a task's or event's ID from the GUI. Because of this, I have to do it manually, which is not practical when there is a large number of events/tasks. Can someone tell me if there is a free online alternative to bpmn.io that can change an event's ID, or if there is a way to change the ID in bpmn.io? I did look around for this and couldn't find one.
There is also the offering from Camunda, the Camunda Web Modeler (CaWeMo). I don't think it does what you are asking, though. I didn't think event IDs were part of the BPMN specification, since they are likely more about implementation than modeling, but I've not actually looked into the specification that deeply.
If the tool you are using exports in a format that you find useful, you could update the event nodes as a post-processing step.
You can try using https://kiegroup.github.io/kogito-online/#/editor/bpmn for bpmn authoring.

RESTful for Axon Repositories

PROBLEM: The application uses Axon Framework and org.axonframework.eventsourcing.EventSourcingRepository, and responses need to include _links in HAL format.
RESEARCH: This can be done with Spring HATEOAS, but a lot of it has to be hand-coded in the REST controller. Spring Data REST offers auto-generation of links with a single annotation on a CRUD repository, but the project is not RDBMS/JPA-based, so Spring Data REST is not an option.
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
Gotcha, so you are essentially looking to expose a service's capabilities when it comes to which commands can be handled by a given Command Handling Component, disregarding whether that component is an Aggregate or an External Command Handler.
Note that the interaction between a component which dispatches commands and one which handles them resides within the CommandBus. When an Axon application starts up, it's the CommandBus which receives all the registrations for known command handlers.
That way, the CommandBus provides location transparency for this part of the application. And it's location transparency which provides clear and cleanly segregated components; essentially what will help you take an evolutionary microservices approach (as AxonIQ describes here).
I'd thus question the necessity of sharing all known command handlers of a given service/aggregate through REST.
Regardless, whether it makes sense is always a question of "it depends". I, for one, have created a means to share the known commands a service can handle as a JSON schema, as you can see here in a sample project I helped build between AxonIQ and Pivotal.
So, to come round to your question:
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
No, Axon does not provide something like this out of the box, as it expects you to use the CommandBus for communication. I do know you might need a starting point somewhere, for which REST makes sense, but even then exposing all known commands can be regarded as exposing your internal domain to the outside world. In the majority of scenarios that would be undesirable, but as stated, this highly "depends" on your use case.
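If you do end up needing that REST starting point, the usual route is simply a hand-written Spring controller that dispatches through the CommandGateway and decorates the result with Spring HATEOAS links. Something along these lines; the controller, command, and paths are made-up examples, not anything Axon ships:

```java
// Illustrative sketch only; Axon does not generate this for you.
// AccountController and CreateAccountCommand are made-up names.
import org.axonframework.commandhandling.gateway.CommandGateway;
import org.springframework.hateoas.EntityModel;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.concurrent.CompletableFuture;

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

@RestController
@RequestMapping("/accounts")
public class AccountController {

    private final CommandGateway commandGateway;

    public AccountController(CommandGateway commandGateway) {
        this.commandGateway = commandGateway;
    }

    @PostMapping
    public CompletableFuture<EntityModel<String>> create(@RequestBody CreateAccountCommand command) {
        // The gateway puts the command on the CommandBus; the future completes
        // once the command handling component has dealt with it.
        return commandGateway.<String>send(command)
                .thenApply(aggregateId -> EntityModel.of(aggregateId,
                        linkTo(methodOn(AccountController.class).create(command)).withSelfRel()));
    }

    // Made-up command payload; a real one would typically carry @TargetAggregateIdentifier.
    public record CreateAccountCommand(String accountId, String owner) {}
}
```

There is no Spring Data REST-style auto-generation here; the HATEOAS links still have to be declared by hand, which is exactly the hand-coding mentioned in the question.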

Can I use JDT to generate a control flow graph?

Does JDT provide any APIs to generate a control flow graph?
I used Soot to generate a control flow graph, but can I use JDT to do the same?
JDT certainly provides all the necessary information, but you may have to invest some code of your own to get exactly the data structure you are seeking.
Much depends on the level of detail you are interested in: A call graph between methods? Detailed flow of basic blocks within a method? A combination of both?
If your interest is related in spirit to refactoring, you may get some inspiration from the internal code in JDT/UI that is used for flow analysis on behalf of refactorings. Have a look at the following sections of source code:
data structures below org.eclipse.jdt.ui/core refactoring/org/eclipse/jdt/internal/corext/refactoring/code/flow
usage of the above in classes like
org/eclipse/jdt/internal/corext/refactoring/code/CallInliner.java
org/eclipse/jdt/internal/corext/refactoring/code/ExtractMethodAnalyzer.java
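Those classes are internal API, so they may change between releases. If a coarse, method-level call graph would already be enough for you, you can stay on the public JDT DOM API. A minimal sketch, which ignores bindings and overload resolution (you would want those for anything serious), could look like this:

```java
// Not a ready-made JDT CFG API: a small sketch using the public JDT DOM
// (ASTParser + ASTVisitor) to collect a method-level call graph as a starting
// point. Intra-method basic-block analysis needs more work (or the internal
// flow classes mentioned above).
import org.eclipse.jdt.core.dom.AST;
import org.eclipse.jdt.core.dom.ASTParser;
import org.eclipse.jdt.core.dom.ASTVisitor;
import org.eclipse.jdt.core.dom.CompilationUnit;
import org.eclipse.jdt.core.dom.MethodDeclaration;
import org.eclipse.jdt.core.dom.MethodInvocation;

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CallGraphSketch {

    public static Map<String, List<String>> callGraph(String source) {
        ASTParser parser = ASTParser.newParser(AST.getJLSLatest());
        parser.setKind(ASTParser.K_COMPILATION_UNIT);
        parser.setSource(source.toCharArray());
        CompilationUnit unit = (CompilationUnit) parser.createAST(null);

        Map<String, List<String>> edges = new LinkedHashMap<>();
        unit.accept(new ASTVisitor() {
            private String current;

            @Override
            public boolean visit(MethodDeclaration node) {
                // Start a new edge list for each declared method.
                current = node.getName().getIdentifier();
                edges.putIfAbsent(current, new ArrayList<>());
                return true;
            }

            @Override
            public boolean visit(MethodInvocation node) {
                // Record every call made inside the current method.
                if (current != null) {
                    edges.get(current).add(node.getName().getIdentifier());
                }
                return true;
            }
        });
        return edges;
    }
}
```

For statement-level control flow (branches, loops, basic blocks) you would additionally resolve bindings and build edges per statement, which is essentially what the internal flow package above does on behalf of the refactorings.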

Create an API Blueprint from an entity specification

I'm building an API and I have modeled the entities I need inside it. For example:
User
    Name
    Email
    City
Company
    Name
    Website
I'm using Blueprint to specify the API itself, and I need to create endpoints for CRUD operations for pretty much every entity. The task seems very redundant to me: besides some tuning needed for a few specific entities, most of the basic skeleton looks the same.
I wonder if there is any tool that allows me to write down my entities, their fields, and their types, and generates this basic skeleton.
I was about to start creating one, but then I stopped to look around for an existing one. I haven't found anything yet...
API Blueprint contains a tool to write, use, reuse, compose, and inherit your data structures, and it's MSON.
Basically, it's a way to describe your data structures within an API Blueprint. We also provide an HTML renderer for that, the Attributes Kit. Also try having a look at its Playground.
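For example, the entities from your question could be described roughly like this in MSON (the sample values are just illustrations):

```
# Data Structures

## User (object)
+ name: Jane (string, required)
+ email: jane@example.com (string)
+ city: Lisbon (string)

## Company (object)
+ name: Acme (string, required)
+ website: https://acme.example (string)
```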
You can find a useful tutorial, as well as more information, on the official website.
Hopefully it should be enough to get started.
Cheers,
V.

In the Diode library for scalajs, what is the distinction between an Action, AsyncAction, and PotAction, and which is appropriate for authentication?

In the Scala and Scala.js library Diode, I have used but not entirely understood the PotAction class, and only recently discovered the AsyncAction class, both of which seem to be favored in situations involving, well, asynchronous requests. While I understand that, I don't entirely understand the design decisions and the naming choices, which seem to suggest a narrower use case.
Specifically, both AsyncAction and PotAction require an initialModel and a next, as though both are modeling an asynchronous request for some kind of refreshable, updateable content rather than a command in the sense of CQRS. I have a somewhat-related question open regarding synchronous actions on form inputs by the way.
I have a few specific use cases in mind. I'd like to know a sketch (not asking for implementation, just the concept) of how you use something like PotAction in conjunction with any of:
Username/password authentication in a conventional flow
OpenAuth-style authentication with a third-party involved and a redirect
Token or cookie authentication behind the scenes
Server-side validation of form inputs
Submission of a command for a remote shell
All of these seem to be a bit different in nature to what I've seen using PotAction but I really want to use it because it has already been helpful when I am, say, rendering something based on the current state of the Pot.
Historically speaking, PotAction came first and then at a later time AsyncAction was generalized out of it (to support PotMap and PotVector), which may explain their relationship a bit. Both provide abstraction and state handling for processing async actions that retrieve remote data. So they were created for a very specific (and common) use case.
I wouldn't, however, use them for authentication, as that is typically something you do even before your application is loaded or any data is requested from the server.
Form validation is usually a synchronous thing; you don't do it in the background while the user is doing something else, so again AsyncAction/PotAction are not a very good match nor do they provide much added value.
Finally for the remote command use case PotAction might be a good fit, assuming you want to show the results of the command to the user when they are ready. Perhaps PotStream would be even better, depending on whether the command is producing a steady stream of data or just a single message.
In most cases you should use the various Pot structures for what they were meant for, that is, fetching and updating remote data, and maybe apply some of the ideas or internal models (such as the retry mechanism) to other request types.
All the Pot stuff was separated from Diode core into its own module to emphasize that they are just convenient helpers for working with Diode. Developers should feel free to create their own helpers (and contribute back to Diode!) for new use cases.