Camunda BPMN/CMMN: Access to historical User Tasks and Form Data

Trying to wrap my head around how a BPMN/CMMN model can be used in my application.
There are several CMMN User Tasks with Forms as part of my application's BPMN process.
I use Embedded User Task Forms.
The data submitted through my forms is stored in the task variables and passed to the parent process using an all-to-all variable mapping.
To progress with the process, the user needs to claim the task, fill out the form, and then complete it (via a REST call).
After the User Task with the form is completed, it disappears from the list of available tasks in the /task REST endpoint (as well as in the Admin UI).
But what if I'd like to show users, after they have completed a task, the variables they submitted to it before completion?
First, I thought of using Get Tasks (Historic) (POST).
And that works, in the sense that I can see the metadata about the tasks that users completed before.
But how can I see the variables, and indeed the HTML form, that were used at the point of task completion? That is, the data available via
/task/{id}/variables
/task/{id}/form
before the task is completed?
The response from /history/task contains neither the variables nor the form key.
Trying to access the completed task by its id, like {{camunda}}/task/46386/form or {{camunda}}/task/46386/variables, results in
{
    "type": "RestException",
    "message": "Cannot get form for task 46386"
}
or
{
    "type": "NullValueException",
    "message": "task 46386 doesn't exist: task is null"
}
respectively.
I think that I am missing something basic here.
That is probably a principle of the BPMN engine: when tasks are completed, they are considered gone forever, with no option to access their data later (except for basic audit log details)?
Another side question: do the task access permissions that were set up in Authorizations apply to the results returned by the /history/task endpoint?
Update:
Found a way to access the historical variables: Get Variable Instances. But I have not found a way to get the historical task form keys.
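For example, a sketch of fetching a completed task's variables from history (the base URL and task id below are placeholders, and the taskIdIn filter should be verified against your Camunda version's REST docs):

// Minimal sketch: read a completed task's variables from Camunda history.
// CAMUNDA_URL and the task id are placeholders.
const CAMUNDA_URL = "http://localhost:8080/engine-rest";

async function getHistoricTaskVariables(taskId: string): Promise<void> {
  const res = await fetch(
    `${CAMUNDA_URL}/history/variable-instance?taskIdIn=${taskId}`
  );
  const variables = await res.json();
  // Each entry carries the name, type and value recorded in history.
  for (const v of variables) {
    console.log(`${v.name} (${v.type}) = ${JSON.stringify(v.value)}`);
  }
}

getHistoricTaskVariables("46386");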
Found a similar question.

Related

Auto pause and resume Azure Synapse Analytics database

I want to pause the database on Saturday and Sunday, and resume it automatically on Monday morning, using a script or any other available option. How can I do this?
Thanks!
There is no specific feature for this task, but it is doable using a combination of techniques. To perform the pause/resume functionality, you can use the Azure REST API.
Recurrence Scheduling
I recommend Logic Apps, which has a robust recurrence trigger. You will most likely need to run it daily, but you can specify the hour(s). To continue only on specific days, you'll need some additional processing that parses the day of the week from the run time:
dayOfWeek(convertFromUtc(utcNow(), 'Eastern Standard Time'))
Get Bearer Token
In this example, I'm using a Service Principal to authenticate, and Azure Key Vault to store the relevant secrets:
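A minimal sketch of the token acquisition, assuming the Azure AD client-credentials flow (the tenant id, client id and secret are placeholders that would normally be read from Key Vault):

// Sketch: obtain a Bearer Token for the Azure management API
// using a Service Principal (client-credentials grant).
async function getBearerToken(
  tenantId: string,
  clientId: string,
  clientSecret: string
): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
    resource: "https://management.azure.com/",
  });
  const res = await fetch(
    `https://login.microsoftonline.com/${tenantId}/oauth2/token`,
    { method: "POST", body }
  );
  const json = await res.json();
  return json.access_token; // sent as "Authorization: Bearer <token>" below
}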
Check the Status of the Resource
The next step is to check the status of the Data Warehouse: we only want to attempt to pause it if the status is "Online", not if it is already "Paused". To do this, we'll call the API again, this time passing the Bearer Token we acquired above:
In this example I'm using Variables instead of Key Vault to demonstrate different approaches.
We'll use the StatusCode property of the previous operation to make this determination:
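The same determination can be sketched in code; here the database's properties.status is read from the response body (the resource ids and api-version are placeholders and may differ in your environment):

// Sketch: check whether the Data Warehouse is "Online" or "Paused".
// Every segment of DW_RESOURCE is a placeholder.
const DW_RESOURCE =
  "https://management.azure.com/subscriptions/<subscriptionId>" +
  "/resourceGroups/<resourceGroup>/providers/Microsoft.Sql" +
  "/servers/<server>/databases/<database>";

async function getDwStatus(token: string): Promise<string> {
  const res = await fetch(`${DW_RESOURCE}?api-version=2014-04-01`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const json = await res.json();
  return json.properties.status; // e.g. "Online" or "Paused"
}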
Check if there are any running Jobs
If the Data Warehouse's status is "Online", the next thing to check is whether there are any active processes. We accomplish this by running a query on the Data Warehouse itself:
We'll then capture the results in a variable and use that in another Condition activity:
body('Get_Ops_Count')?['resultsets']['Table1'][0]['OpsCount']
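Outside of Logic Apps, the same check can be run directly against the warehouse; a sketch using the mssql package (sys.dm_pdw_exec_requests is the usual source for active requests on a dedicated SQL pool, but the exact status filter here is an assumption):

import sql from "mssql";

// Sketch: count active operations by querying the Data Warehouse itself.
// The connection string is a placeholder.
async function getOpsCount(connectionString: string): Promise<number> {
  const pool = await sql.connect(connectionString);
  const result = await pool.request().query(
    `SELECT COUNT(*) AS OpsCount
       FROM sys.dm_pdw_exec_requests
      WHERE status IN ('Running', 'Queued', 'Suspended')`
  );
  await pool.close();
  return result.recordset[0].OpsCount; // pause only when this reaches 0
}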
Issue the Pause Command
If there are no active jobs, we are free to finally issue the Pause command. Once again, we'll leverage the REST API using the Bearer Token we acquired previously:
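A sketch of that call, reusing the placeholder resource path from the status check (for restart, the trailing segment becomes resume, as noted below):

// Sketch: issue the Pause command via the Azure REST API.
async function pauseDw(token: string): Promise<void> {
  const res = await fetch(`${DW_RESOURCE}/pause?api-version=2014-04-01`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
  });
  // A 200/202 response indicates the pause request was accepted.
  console.log(`Pause request returned HTTP ${res.status}`);
}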
Restarting the Data Warehouse
The Restart process is very similar, only without the need to check for active processes, so you should be able to extrapolate that from this example. To restart the Data Warehouse, the REST endpoint is "resume" instead of "pause".

Pulling summary report for monitoring using reporting task in NiFi

I am working on a piece of a project where a report needs to be generated with all the flow details (memory used, number of records processed, processes that ran successfully or failed, etc.). Most of the details are present on the Summary tab, but the requirement is to have separate reports.
Can anyone help me with a solution/steps/examples/screenshots/videos?
Thanks much.
Every underlying behavior of the UX/UI that Apache NiFi provides is also accessible through an API (in fact, the UI calls the API to perform each of these tasks). So you can invoke the GET /system-diagnostics API to return that information in JSON form, and then parse this data and present it in whatever form you like.
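As a rough sketch, assuming a NiFi instance at nifi-host:8080 (the field names follow the documented system-diagnostics response shape, but verify them against your NiFi version):

// Sketch: pull system diagnostics from the NiFi REST API and parse it.
async function getSystemDiagnostics(): Promise<void> {
  const res = await fetch("http://nifi-host:8080/nifi-api/system-diagnostics");
  const json = await res.json();
  const snapshot = json.systemDiagnostics.aggregateSnapshot;
  console.log(`Heap used: ${snapshot.usedHeap} of ${snapshot.maxHeap}`);
}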

Capture start of long running POST VB.net MVC4

I have a subroutine in my Controller
<HttpPost>
Sub Index(Id As String, varLotsOfData As String)
    ' Point B.
    ' By the time execution gets here, all the data has been accepted by the server.
End Sub
What I would like to do is capture the Id of the inbound POST and mark, for example, a database record to say "Id xx is receiving data".
The POST receive can take a long time, as there is lots of data.
When execution gets to point B, I can mark the record "All data received".
Where can I place this type of "pre-POST-completed" code?
I should add that we are receiving the POST data from clients we do not control - that is, it is most likely a client's server sending the data, not a web browser client that we have served up from our web server.
UPDATE: This is looking more complex than I had imagined.
I'm thinking that a possible solution would be to inspect the worker processes in IIS programmatically. You can do this via the IIS Manager, for example - How to use IIS Manager to get Worker Processes (w3wp.exe) details information?
From your description, you want to display on the client page that the method is executing (you could also show a loading gif), and then show the user a message once the execution has completed.
The answer is simple: use SignalR.
Here you can find some references:
Getting started with signalR 1.x and Mvc4
Creating your first SignalR hub MVC project
Hope this will help you
If I understand your goal correctly, it sounds like HttpRequest.GetBufferlessInputStream might be worth a look. It allows you to begin acting on incoming post data immediately and in "pieces" rather than waiting until the entire post has been received.
An excerpt from Microsoft's documentation:
...provides an alternative to using the InputStream property, which waits until the whole request has been received. In contrast, the GetBufferlessInputStream method returns the Stream object immediately. You can use the method to begin processing the entity body before the complete contents of the body have been received and asynchronously read the request entity in chunks. This method can be useful if the request is uploading a large file and you want to begin accessing the file contents before the upload is finished.
So you could grab the beginning of the post, and provided your client-facing page sends the ID towards the beginning of its transmission, you may be able to pull that out. Of course, this would be reading raw byte data which would need to be decoded so you could grab the inbound post's ID. There's also a buffered one that will allow the stream to be read in pieces but will also build a complete request object for processing once it has been completely received.
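For illustration only (the question is about ASP.NET, but the streaming idea is easy to show in a runnable Node sketch): read the request body in chunks as it arrives, so an id near the start of the payload can be recorded before the upload finishes. The payload format and the Id field below are hypothetical.

import http from "http";

// Illustrative sketch (Node, not ASP.NET): act on the beginning of a large
// POST before the rest of the body has been received.
http.createServer((req, res) => {
  let head = "";
  let idLogged = false;
  req.on("data", (chunk: Buffer) => {
    if (!idLogged && head.length < 4096) {
      head += chunk.toString("utf8");
      // Hypothetical: the client sends the Id near the start of the payload.
      const match = head.match(/"Id"\s*:\s*"([^"]+)"/);
      if (match) {
        console.log(`Id ${match[1]} is receiving data`);
        idLogged = true;
      }
    }
  });
  req.on("end", () => {
    console.log("All data received");
    res.end("ok");
  });
}).listen(8080);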
Create a custom action filter.
Action filters execute filtering logic either before or after an action method is called. They are custom attributes that provide a declarative means to add pre-action and post-action behavior to a controller's action methods.
Specifically you'll want to look at the
OnActionExecuted – This method is called after a controller action is executed.
Here are a couple of links:
http://www.infragistics.com/community/blogs/dhananjay_kumar/archive/2016/03/04/how-to-create-a-custom-action-filter-in-asp-net-mvc.aspx
http://www.asp.net/mvc/overview/older-versions-1/controllers-and-routing/understanding-action-filters-vb
Here is a lab, but I think it's in C#:
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-custom-action-filters

CQS and CRUD operation

I am working on a high-scalability web site for learning purposes. I decided to use the CQS pattern and some ideas from CQRS. I have separate write and read layers used by command handlers and event handlers, which the system sends to and receives from message buses (two separate message buses).
I have some problems dealing with commands. I read that a command shouldn't return anything. And now the point is: for example, I have a form with which a user can create an event, or change something in his profile (photo or name). After the user clicks save, I want to show him his profile, or add the new event to his wall. But how can I know that his profile has already been updated when the command is only sent to the bus? How do I connect the idea of commands with CRUD operations? Or is this the wrong idea altogether?
Well, first off, the split should not be between commands and events, but rather between domain and read models. You can't really map CQRS commands to CRUD operations as a general rule, although most of the commands in your system will change the state of your repositories. I will give you a general overview of how this works.

Say you want to add a user: you create a command AddUserCommand and assign an id to that message. On the back end, you have a handler for that command, and you're right that the command does not return anything. Once that command is handled, you should publish an event reflecting the change: UserWasAddedEvent. The id of this message will be unique, but it can and should carry the id of the command which you created in the UI.

Your read models should handle the event and update a read model with the command status (waiting, processing, completedOnError, completedSuccessfully) depending on the event you published. On the UI, after you submit the command, you should start querying the read models with the id of the command you created to get the status, and then update your UI accordingly.
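As a minimal sketch of those moving parts (all names here are illustrative, not taken from any particular framework):

// Illustrative sketch of the command/event/read-model flow described above.
type CommandStatus =
  | "waiting"
  | "processing"
  | "completedOnError"
  | "completedSuccessfully";

interface AddUserCommand {
  commandId: string; // generated in the UI, used later to query the status
  name: string;
}

interface UserWasAddedEvent {
  eventId: string; // unique to the event
  commandId: string; // correlates the event back to the originating command
  userId: string;
}

// Read-model projection: the event handler records the command's outcome.
const commandStatusReadModel = new Map<string, CommandStatus>();

function onUserWasAdded(event: UserWasAddedEvent): void {
  commandStatusReadModel.set(event.commandId, "completedSuccessfully");
}

// The UI submits the command (which returns void) and then polls the read
// model by commandId until a terminal status appears.
function getCommandStatus(commandId: string): CommandStatus {
  return commandStatusReadModel.get(commandId) ?? "waiting";
}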
You're right that CQRS handlers return void, but you should bear in mind that typically, in an architecture like this, the back end should return the validation results of the submitted commands - not from the handler itself, but from the infrastructure around your CQRS handlers.
Just update the UI on the assumption that the command succeeds - which most of the time it will.
If validation is required on the user input, you could run validation as the user types or tabs, to increase the likelihood that the command will succeed.

How to update file upload messages using backbone?

I am uploading multiple files using javascript.
After I upload the files, I need to run several processing functions.
Because of the processing time that is required, I need a UI on the front telling the user the estimated time left of the entire process.
Basically I have 3 functions:
/upload - this is an endpoint for uploading the files
/generate/metadata - this is the next endpoint that should be triggered after /upload
/process - this is the last endpoint. Should be triggered after /generate/metadata
This is basically how I expect the screen to look.
Information such as the percentage remaining and the time left should be displayed.
However, I am unsure whether to have the server supply that information or to do a hackish estimate solely in javascript.
I would also need to update the screen like telling the user messages such as
"currently uploading"
if I am at function 1.
"Generating metadata" if I am at function 2.
"Processing ..." if I am at function 3.
Function 2 only occurs after the successful completion of 1.
Function 3 only occurs after the successful completion of 2.
I am already using q.js promises to handle some parts of this, but the code has gotten scarily messy.
I recently came across Backbone, and it allows structured ways to handle single-page app behavior, which is what I wanted.
I have no problems with the server-side returning back json responses for success or failure of the endpoints.
I was wondering what would be a good way to implement this function using Backbone.js
You can use a "progress" file or DB entry which stores the state of the backend process. Have your backend process periodically update this file. For example, write this to the file:
{"status": "Generating metadata", "time": "3 mins left"}
After the user submits the files, have the frontend start pinging a backend progress function using a simple ajax call and setTimeout. The progress function will simply open this file, grab the JSON-formatted status info, and then update the frontend progress bar.
You'll probably want the ajax call to be attached to your model(s). Have your frontend view watch for changes to the status and update accordingly (e.g. a progress bar).
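A sketch of that arrangement, assuming a hypothetical /progress endpoint that serves the JSON status shown above:

import Backbone from "backbone";

// Sketch: a model bound to the hypothetical /progress endpoint; a poll loop
// re-fetches it, and any view can react to status changes.
const ProgressModel = Backbone.Model.extend({ url: "/progress" });

function startPolling(model: Backbone.Model, intervalMs: number): void {
  model.fetch({
    success: () => setTimeout(() => startPolling(model, intervalMs), intervalMs),
  });
}

const progress = new ProgressModel();
progress.on("change:status", () => {
  // e.g. {"status": "Generating metadata", "time": "3 mins left"}
  console.log(`${progress.get("status")} - ${progress.get("time")}`);
});
startPolling(progress, 2000);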
Long Polling request:
Polling request for updating Backbone Models/Views
Basically, when you upload a file you will assign a "FileModel" to each given file. The FileModel will start a long polling request every N seconds, until it gets the status "complete".
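A compact sketch of that per-file variant (the /files urlRoot and the "complete" status value are assumptions based on the description):

import Backbone from "backbone";

// Sketch: one FileModel per uploaded file; poll until its status is "complete".
const FileModel = Backbone.Model.extend({ urlRoot: "/files" });

function pollUntilComplete(file: Backbone.Model, intervalMs: number): void {
  file.fetch({
    success: () => {
      if (file.get("status") !== "complete") {
        setTimeout(() => pollUntilComplete(file, intervalMs), intervalMs);
      }
    },
  });
}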