Application history is not getting generated in formsflow.ai - BPMN

I am using the open-source formsflow.ai version v4.0.2. This is the Repo Link. I was able to get the repo up and running using the Docker deployment, as instructed.
I created a form and a BPMN workflow for a specific use case in formsflow.ai. The BPMN workflow is similar to the examples provided in the formsflow.ai open source. This workflow consists of four statuses, e.g. New, Approved, Rejected, and Completed, as shown below. But when I submit the application from the client, the application history is not created.
I am not sure what went wrong here; can anyone please help me with a solution? I can see that the sample workflows they provided create history, but my custom workflow does not.

You should use the Application Audit Listener, as mentioned in the documentation here.
org.camunda.bpm.extension.hooks.listeners.ApplicationAuditListener
This component can be used on any task or execution event. Once configured, it sends the values of the Camunda process variables "applicationStatus" and "formUrl" to the formsflow.ai system for audit capture.
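For illustration only, attaching the listener to a user task in the BPMN XML might look like the sketch below (the task id and name are hypothetical; camunda:taskListener is the standard Camunda extension element, and an analogous camunda:executionListener works for execution events):

    <bpmn:userTask id="reviewTask" name="Review Application">
      <bpmn:extensionElements>
        <!-- On completion, sends "applicationStatus" and "formUrl" to formsflow.ai -->
        <camunda:taskListener event="complete"
            class="org.camunda.bpm.extension.hooks.listeners.ApplicationAuditListener" />
      </bpmn:extensionElements>
    </bpmn:userTask>

You can set the same thing up in the Camunda Modeler via the Listeners tab, with listener type "Java class" and the class name above.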

How to start a VM instance using Cloud Scheduler

Background and Goal
I have a Debian/Linux VM on GCP which I manually start every morning; after it runs, it shuts itself down using a Linux command. I want to automate the start of the VM using Cloud Scheduler. The question asked in GCP auto shutdown and startup using Google Cloud Schedulers has several answers, and I am interested in pursuing the answer (https://stackoverflow.com/a/65062924/10322004) proposed by @nikelone, because it seems to be simple and it has been endorsed by @Damien and @RayFoss as being easy. I am a neophyte in these matters and could not fully comprehend their replies, so this post was created to elicit clearer answers for a person like me.
What I have tried
I have gone to https://cloud.google.com/compute/docs/reference/rest/v1/instances/start (call this page A), tried the API, and was able to successfully start my already stopped VM when I clicked the Execute button. I presume this means that my entries were fine and can be used in conjunction with appropriate software, like Cloud Scheduler, to perform the start on a predefined schedule. But the problem is that I do not know or understand how to proceed from here. My questions are below.
My Questions
On page A, the last three sections are titled Authorization Scopes, IAM permissions, and Examples, and none of them says anything specific about what the user should do. Is it correct to assume that they have nothing to do with Cloud Scheduler, but relate to other methods of achieving the same goal? If not, what should I be doing to follow the statements in these three sections?
Assuming the answer to question 1 is "yes", meaning I can now start scheduling with Cloud Scheduler, I next looked at the quickstart for Cloud Scheduler at https://cloud.google.com/scheduler/docs/quickstart (call this page B). The list of things to do is quite long: installing the Cloud SDK, running quite a few commands in the console, enabling some features, setting up Pub/Sub, creating a job, running the job, and verifying the results in Pub/Sub. This looks like a daunting set of tasks, and I could not understand why it is necessary to jump through these hoops to use something that was already achieved with just a few keystrokes earlier. So are all these steps necessary? Or is there a way to use Cloud Scheduler directly without going through so many intermediate steps?
Now assume that the answer to question 2 is that I have to perform all the steps stated on page B. If I run into some problem while accomplishing those tasks, my VM may get messed up irretrievably. Is there a way in which the Cloud Platform or its components can be used to reset my VM to its current state as of today, which is working fine? I really do not want to end up with something worse than what I have now.
To answer your questions:
Auth scopes and IAM permissions are required for you to call Compute Engine API methods such as instances.start and instances.stop. You need to set the right scope and the right IAM permission on your job, or else it will fail. They are indeed related to the methods that you're interested in calling, so you must keep them in mind. What you see in the Examples section are ways to call the API from different programming languages, so you don't need to pay attention to them, as you will create the job through the Cloud Console. To further address this part, see the full steps I included below.
The answer that you're trying to follow uses an HTTP target, while the quickstart you've linked uses Pub/Sub; they differ from each other because they have separate use cases. This link shows the proper instructions for creating a scheduler job with an HTTP target. You can create this kind of job straight from the Cloud Console or with a one-liner gcloud command (a scripted equivalent is also sketched after the steps below). If your config is incorrect, the trigger will not execute the endpoint URL and you will see an error that you must fix.
Addressed in answer #2.
Basically, you just need to follow the instructions in the link you've sent. However, I'll post them here as well, along with my explanation:
Go to https://cloud.google.com/scheduler and click Go to Console.
Click Create Job.
Fill in the required fields (those with red asterisks) when creating the Scheduler job.
Select HTTP as target type.
Enter this as your URL (replace the capitalized placeholders):
https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_ZONE/instances/INSTANCE_NAME/start
Choose POST as the HTTP method.
Click Show more and choose "Add OAuth token" as the Auth header.
Enter your service account. This is used to pass an OAuth token when your scheduler job calls the Compute API. Make sure that the service account you use has the "Compute Instance Admin" role, because this role contains the permissions to start/stop your instance. See this instruction on how to grant a role to a service account. If you're not sure which service account to use, feel free to use the Compute Engine default service account.
Add this as the Scope:
https://www.googleapis.com/auth/cloud-platform
The description of this scope:
See, edit, configure, and delete your Google Cloud Platform data.
Repeat the steps for a Stop instance job, changing /start to /stop at the end of the URL.
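
If you'd rather script the job than click through the Console, here is a minimal sketch using the google-cloud-scheduler Python client (pip install google-cloud-scheduler); the project, region, zone, instance, service account, and schedule values are placeholders you must replace:

    from google.cloud import scheduler_v1

    # Placeholders -- replace with your own values.
    PROJECT = "my-project"
    REGION = "us-central1"          # a Cloud Scheduler region
    ZONE = "us-central1-a"          # the zone of your VM
    INSTANCE = "my-vm"
    SERVICE_ACCOUNT = "my-sa@my-project.iam.gserviceaccount.com"

    client = scheduler_v1.CloudSchedulerClient()
    parent = client.common_location_path(PROJECT, REGION)

    job = scheduler_v1.Job(
        name=f"{parent}/jobs/start-{INSTANCE}",
        schedule="0 7 * * *",       # example: every day at 07:00
        time_zone="Etc/UTC",
        http_target=scheduler_v1.HttpTarget(
            uri=(
                f"https://compute.googleapis.com/compute/v1/projects/"
                f"{PROJECT}/zones/{ZONE}/instances/{INSTANCE}/start"
            ),
            http_method=scheduler_v1.HttpMethod.POST,
            # The OAuth token authenticates the call to the Compute API.
            oauth_token=scheduler_v1.OAuthToken(
                service_account_email=SERVICE_ACCOUNT,
                scope="https://www.googleapis.com/auth/cloud-platform",
            ),
        ),
    )

    client.create_job(parent=parent, job=job)
    print("Created job:", job.name)

Running this once creates the job; Cloud Scheduler then fires it on the cron schedule.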

How to work with an offline DropPoint in Flowgear

I have run into an issue with my DropPoint.
Here is the scenario:
-> We are using a Java POST API to insert values into a Sage database (using the Flowgear Sage Evolution node).
-> When we are online and the workflow is called from the API, everything works fine.
-> But when I am offline or without internet access (so my workflow is not called), it gives a workflow-offline error, i.e. "DropPoint '****-***' is offline and is required for this Workflow".
So, is there any way to manage the hits and avoid data loss when we are offline? [I will lose the data that should be inserted into Sage when I am offline and the API is called.]
Please can you guide me on this?
Thanks
Flowgear isn't really intended to handle this. It would be best to cache the content to be sent at the source and keep unsent data until it has been successfully integrated (a sketch of this source-side caching follows the steps below).
That said, here would be the recommended way:
Decide where to store unprocessed data. If it's a small amount of data, you could use the Flowgear Cacher or Statistics, but it's probably best to use a database (e.g. SQL in Azure).
The workflow that is bound to a REST endpoint and called from your app should be modified to ONLY store data in the intermediate store described above (i.e. its role is to queue data).
Create a second workflow that uses a timer or trigger to check for data in the intermediate store and process it.
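
To make the source-side caching concrete, here is a minimal Python sketch (the endpoint URL, record shape, and table layout are assumptions, and the requests library must be installed). Every record is persisted to a local SQLite file before any network call, and a flush pass delivers whatever it can, keeping the rest for the next attempt:

    import json
    import sqlite3

    import requests

    QUEUE_DB = "unsent.db"  # local store for not-yet-delivered records
    ENDPOINT = "https://example.flowgear.net/my-endpoint"  # hypothetical workflow URL

    def init_db():
        con = sqlite3.connect(QUEUE_DB)
        con.execute(
            "CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)"
        )
        con.commit()
        return con

    def enqueue(con, record):
        # Persist first, so nothing is lost if we are offline right now.
        con.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(record),))
        con.commit()

    def flush(con):
        # Deliver every queued record; stop at the first failure and retry later.
        rows = con.execute("SELECT id, payload FROM queue ORDER BY id").fetchall()
        for row_id, payload in rows:
            try:
                resp = requests.post(ENDPOINT, json=json.loads(payload), timeout=10)
                resp.raise_for_status()
            except requests.RequestException:
                break  # still offline (or the DropPoint is down); keep the rest queued
            con.execute("DELETE FROM queue WHERE id = ?", (row_id,))
            con.commit()

    if __name__ == "__main__":
        con = init_db()
        enqueue(con, {"customer": "ACME", "amount": 42})
        flush(con)  # call this periodically, e.g. from a scheduler

Inside Flowgear the same split applies: the endpoint workflow only enqueues into the intermediate store, and the timed second workflow plays the role of flush().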

Is it possible to retrieve the audit-log in github.com via the API?

I found nothing in the API docs; only the enterprise version mentions that you can retrieve the audit logs using the staff tools.
Any ideas? I'd love to periodically check the audit log and send the new entries to our IM channel (ChatOps).
Thanks in advance,
As VonC points out, there is no API (as of October 2017).
Unfortunately the "Export" function in the GitHub audit logs produces JSON or CSV of the audit events but the data is missing the payload with the details.
For example the export would show that an issue_comment.update had been made but the web UI gives a link to the comment itself. The export would show that one user executed org.update_member on another user but the web UI would show what role change was made for that user.
To get the details of each event, at the moment (October 2017), the only way is via the web UI.
Here is a ruby tool which scrapes the web UI, fetching the audit log entries with details.
Update Dec. 2020, 5 years later:
Audit Log Git events and REST API now available
(in limited public beta)
In GitHub Enterprise Cloud, the Audit Log now includes Git events and has a new REST API.
Both are available as a limited public beta.
The new Git events will allow you as an administrator to review activities for users interacting with your Git repositories.
You can view events for git.clone, git.fetch, and git.push.
Additionally, the new REST API provides you with another option to interface with your Audit Log events. During the limited public beta, Git events can only be viewed via the REST API and can be exported.
How can you get access to this limited public beta? To be added to the limited public beta, please contact Sales or Support.
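For a rough idea of what calling the new endpoint looks like from Python with the requests library, here is a minimal sketch; the organization name and token are placeholders, the token needs the read:audit_log scope, and the fields printed at the end are common but not guaranteed for every event type:

    import requests

    ORG = "my-org"                     # placeholder organization
    TOKEN = "<personal access token>"  # needs the read:audit_log scope

    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/audit-log",
        headers={
            "Authorization": f"token {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        # include=git restricts the result to the new Git events
        params={"include": "git", "per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()

    # Each event is a JSON object; @timestamp, actor, and action are typical fields.
    for event in resp.json():
        print(event.get("@timestamp"), event.get("actor"), event.get("action"))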
Feb. 2021, still for GHE (GitHub Enterprise):
GitHub Actions: Workflow run events are now included in the Audit Log
The Audit Log now includes events associated with GitHub Actions workflow runs.
This data provides enterprise customers with a greatly expanded data set for security and compliance audits.
New events will be incorporated into the audit log when:
A workflow run is created, completed, deleted, or re-run
A workflow job is prepared. Importantly, this event includes the list of secrets that were provided to the runner
A self-hosted runner's version is updated
These new events are only available to customers on the Enterprise plan. All events are available in the REST API, and all events except for workflow run created, workflow run completed, and workflow job prepared are available in the UI and exports.
Learn more about Audit Log events
2015:
Not yet possible through the GitHub API.
But at least it is possible to export it (since May 5th, 2015) in either JSON or CSV format.
See "Exporting the audit log".

Asana: Does API provide user activity logs?

Does the Asana API provide some method of getting user activity logs? I am interested in activity logs and login history logs. The logs could look, e.g., like "user created a task xyz" or "user created a project".
I went through the documentation and could not find any such API/REST endpoint. Does Asana keep such logs in its system? If yes, is there a way to get them? If not, is it planned for a future release of the API?
(Asana dev here.)
This isn't something we currently provide. We're working on a system for getting semi-realtime updates to "subscriptions", but we were primarily thinking about subscribing to tasks, projects, workspaces, and so on. Subscribing to a user's activity would be an interesting use case, but one we haven't considered up until now, and one that might be a bit trickier.
Thanks for the feedback!

Test Manager for Trac questions

I'm looking for a way to better manage a list of test cases within Trac. The Test Manager plugin for Trac seems to be the obvious choice.
Anyone have experience, comments and/or concerns with the Test Manager plugin for Trac?
We've also looked at Testuff, which has an awesome "Test Runner" app that integrates with Trac for creating tickets. However, it stores all of the test cases, labs, etc. on its servers. I'd really like to have a single destination for documentation, tickets, and tests (i.e. Trac).
We're using SnagIt for screen capture and annotations today. We're looking at the Problem Steps Recorder in Windows. We'd like to find something that can send captures to Trac, similar to the Testuff-Trac integration.
Any suggestions for an app that can capture video/images with each mouse click and key press logged? BONUS: Attach capture(s) into a new Trac ticket.
For capturing movies and uploading into Trac, try http://www.bbtestassistant.com/
For single screen captures, annotation and uploading into Trac, try http://fat-bug.com
I have also written a plugin for Cropper, http://cropper.codeplex.com/, that will let you upload images, but it does not support annotations beforehand like Fat Bug does. Let me know if you want a copy of it.
We used the Test Manager plugin for Trac in one of my previous projects, and I can say that it worked fine for us. It provides bidirectional traceability between test cases and user stories, supports test plans (which, from my point of view, are execution plans, but combined with a wiki page they become test plans), and, like most Trac plugins, it can be customized to satisfy the needs of your process.