Difference between Documentum workflow and D2 Documentum workflow?

Can someone please suggest the technical differences between normal Documentum workflows and D2-based Documentum workflows, in terms of different object types and so on?
What is the difference between content handling and advanced content handling in D2 Documentum?
Kind regards
Tom

A Documentum workflow is the base for a D2-based Documentum workflow. The key to configuring effective D2 workflows is to create the most basic form of the Documentum workflow. D2 workflows add more configuration capabilities on top of Documentum workflows, such as condition-based initiation, configurable controls for performer selection, and task completion based on configured options.
Functionality of the workflow within D2 depends on the following:
All tasks created in the workflow must be manual.
To add performers to the workflow, the groups must be configured in the Documentum workflow itself. Any configuration on top of this should be done in D2-Config.
Some of the advanced content handling capabilities of D2 are audit, lifecycles and workflows, which can all be modified to suit the requirements of a specific type of content or user; support for operations on different types of documents using plugins (C2, O2, ...); and working with virtual documents.

Simplify .NET Core Web API design complexity

This is a design-related problem.
I have a legacy ASP.NET Web Forms application with 3 kinds of users: Admin, Customer and Provider, which access multiple services such as Product, Account, Sale, Purchase, etc. Until now, all 3 user types have shared the same set of class libraries for services, logic and database access, and there is a single deployment for all 3.
Now we are migrating this to .NET Core Web API + Angular, and I am weighing the options. So far I have figured this is the best fit for our application:
Create a separate Web API for Admin, Customer and Provider. Then any change to Admin does not impact the Customer deployment.
But the problem with this approach is that the class libraries will be duplicated; some common methods will be duplicated.
Is there any alternative or better approach for this?
My answer is too large, so I decided to add another answer.
To migrate your monolithic app to microservices or macroservices, it is better to follow the steps below:
Identify all component groups; that is, decompose your application into several small projects. In your example they would be AdminProject, CustomerProject, and ProviderProject.
Then define endpoints and APIs for all of your data access scenarios. For example, if you need to access or manipulate data located in AdminProject and the request comes from another project, AdminProject would expose an API for that purpose, and from then on every request related to data manipulation in AdminProject should go through these APIs (see the controller sketch after these steps).
In the next step, every project should be deployable independently of the deployment of the other projects.
If your system is not complex, there is no need to migrate your macroservices into microservices, because that will add a lot of complexity to your project.
It is better to use a single datastore; after a while, if there is a need for separation, you just need to separate the data stores.
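As a rough illustration of the second step, here is a minimal sketch of what such a boundary API might look like in ASP.NET Core; the ProductDto, IProductRepository and route names are assumptions, not part of the original design:

```csharp
// Hypothetical sketch (names assumed): AdminProject exposes its data only through
// HTTP endpoints, so CustomerProject and ProviderProject never query its tables directly.
using Microsoft.AspNetCore.Mvc;

namespace AdminProject.Controllers
{
    // Minimal supporting types so the sketch is self-contained.
    public class ProductDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IProductRepository
    {
        ProductDto Find(int id);
    }

    [ApiController]
    [Route("api/[controller]")]
    public class ProductsController : ControllerBase
    {
        private readonly IProductRepository _repository; // registered in AdminProject's DI container

        public ProductsController(IProductRepository repository)
        {
            _repository = repository;
        }

        // GET api/products/42 -- other projects call this endpoint instead of sharing
        // a class library that talks to AdminProject's database.
        [HttpGet("{id}")]
        public ActionResult<ProductDto> GetById(int id)
        {
            var product = _repository.Find(id);
            if (product == null)
            {
                return NotFound();
            }
            return product;
        }
    }
}
```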
The separation of projects can be beneficial because:
updating one project doesn't impact the others
the release cycle can be very short in this approach, which obviously results in faster development and deployment
But if you just separate your projects and they still share a single datastore, this architecture is a macroservice architecture, and the communication between the services should be done through APIs.
For your shared code, you can define a NuGet package that every project adds as a dependency, to prevent repetitive code; a minimal sketch follows below.
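Here is a minimal sketch of what could go into such a package; the Shared.Common name and the Guard helper are just placeholders for whatever common methods you extract:

```csharp
// Hypothetical sketch: a "Shared.Common" class library holding the methods that would
// otherwise be duplicated. It is packed once (dotnet pack) and each API references it, e.g.
//   <PackageReference Include="Shared.Common" Version="1.0.0" />
using System;

namespace Shared.Common
{
    // Example of a common helper that the Admin, Customer and Provider APIs all need.
    public static class Guard
    {
        public static T NotNull<T>(T value, string parameterName) where T : class
        {
            if (value == null)
            {
                throw new ArgumentNullException(parameterName);
            }
            return value;
        }
    }
}
```

Each API then picks up a fix or improvement by bumping the package version instead of copy-pasting the change.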

Reuse microservices across different projects

We developed a monolithic API to be used as a SaaS.
In the company we also develop custom builds for some customers.
Some of our customers are asking for some features that are already implemented in the monolithic application.
We are thinking about splitting our API into microservices, but our major questions are the following :
Can microservices be reused across different projects?
If we do split, do we create a microservice that everybody uses, or do we create an instance per custom build?
E.g.:
project A uses "User" and "Project", so we deploy 2 microservices
project B uses "User", "Project" and "Store", so we deploy 3 microservices
total number of microservices deployed: 5
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
Or do we stick with one instance per microservice that everybody uses, and we specify the project source?
We are using C# with GraphQL.
We also thought about creating a NuGet package for each component, so each package would contain:
its exposed GraphQL queries / mutations
its own database
its own logic
E.g.:
- API A installs the "User" & "Project" packages
- 3 databases are instantiated: "Api.A", "Api.A.User", "Api.A.Project"
- API B installs the "User", "Project" & "Store" packages
- 4 databases are instantiated: "Api.B", "Api.B.User", "Api.B.Project" & "Api.B.Store"
But does it make sense to do that?
In my mind it could be very similar to Hangfire https://www.hangfire.io/
Note that we are currently using AWS Serverless to host our applications.
An important point is that we are a small team of 2-4.
We are very open-minded, so any suggestion is welcome.
Thank you!
First of all, I would like to say that there is no single right way here; I am providing my point of view based on how we have already done things, hoping it will guide you towards the solution best suited to your requirements.
So, to restate your dilemma: you have a base vanilla product, an API SaaS, and there are customized deployments for some customers as well. Because you have to build a custom deployment for each customer, you are noticing a common pattern, in which a lot of the functionality is repeated across the SaaS for each customer.
Now, assuming I have the requirement right, I would say microservices will provide definite benefits in your case in terms of scaling and customer-specific customization, which can be managed by independent teams.
But a lot of this depends on how your business logic is structured and how big and varied your customizations are. Some of the questions that should drive your solution are:
Can you store customer-specific data in a central data store, or does it have to stay at the customer's end? How are your databases going to be structured, and how many of them will there be?
How big are the customizations? Are they cosmetic, or do they change the workflow?
How much cross-communication do you expect across the various services such as User, Store, and Project, and is there any communication across A.User - B.User or A.Project - B.Store, etc.?
Now, moving on to some of the important things you might want to consider after answering the above questions.
Consideration 1. If the data stores are allowed to sit in a single central place, you can go ahead with a single cluster where all your microservices are deployed. But looking at the data provided, I assume you have multiple databases per customer, and I would recommend keeping them separate and not introducing any coupling between them. Thus you may end up with one microservice instance per customer which talks only to that customer's database (more in fig. 1).
Consideration 2. As the norm goes, customization should be separated from the service itself, and every service should take configuration as an input that drives the service's behavior. Again, depending on how big your customization is, there can be a limit to this configuration, and in those cases I would recommend creating a new service with the customizations built in; a configuration-driven sketch follows below.
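To make Consideration 2 concrete, here is a minimal sketch of configuration-driven behavior in ASP.NET Core, assuming a hypothetical CustomerOptions section; the option names are placeholders:

```csharp
// Hypothetical sketch: the same service binary behaves differently per customer
// because it loads a customer-specific configuration section instead of forking the code.
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

public class CustomerOptions
{
    public string CustomerName { get; set; }
    public bool EnableStoreModule { get; set; }   // assumed feature toggle
    public int MaxProjectsPerUser { get; set; }   // assumed workflow limit
}

public class ProjectService
{
    private readonly CustomerOptions _options;

    public ProjectService(IOptions<CustomerOptions> options)
    {
        _options = options.Value;
    }

    public bool CanCreateProject(int currentProjectCount)
    {
        // Behavior is driven by configuration, not by customer-specific code.
        return currentProjectCount < _options.MaxProjectsPerUser;
    }
}

public static class ServiceRegistration
{
    // Called from the host's ConfigureServices; "Customer" is an assumed section name in appsettings.json.
    public static void Register(IServiceCollection services, IConfiguration configuration)
    {
        services.Configure<CustomerOptions>(configuration.GetSection("Customer"));
        services.AddScoped<ProjectService>();
    }
}
```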
Consideration 3. This is a major factor in deciding the number of microservices you may have: the boundary of each service should be defined by the domain, for example a User service, a Store service, and a Project service. These are the vanilla services that interact with each other to produce a valid business scenario, and each customer deployment is just a specialized instance of these services.
OK, now that this is done, let's go over your primary questions.
Can microservices be reused across different projects?
-- Yes, you can, but again it depends on how you have designed the business workflow and the configuration injection.
If we do split, do we create a microservice that everybody uses or do we create an instance per custom build?
-- Yes, an instance per custom build would be the ideal scenario, enabling separation of concerns across projects, as we do not want to mix data boundaries and client-specific sensitive configurations. That said, there might be cases where a single shared microservice is what is demanded, but that should be done with caution.
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom builds?
-- Communication across microservices is an important factor which is more often than not unavoidable. Considering you will require some form of cross-microservice communication, you can look at an enterprise bus or direct API communication, depending on your requirements; either way, it is a well-understood problem in my opinion. A minimal API-based sketch follows below.
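As a rough illustration of the API option, here is a minimal sketch of a typed HTTP client one service could use to call another; the UserDto shape, the route and the base address are assumptions:

```csharp
// Hypothetical sketch: the Project service talks to the User service over HTTP
// instead of reaching into its database. Names and routes are assumptions.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class UserDto
{
    public Guid Id { get; set; }
    public string DisplayName { get; set; }
}

public class UserServiceClient
{
    private readonly HttpClient _httpClient;

    // The base address (e.g. https://users.internal.example) would come from per-build configuration.
    public UserServiceClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<UserDto> GetUserAsync(Guid userId)
    {
        var response = await _httpClient.GetAsync($"api/users/{userId}");
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<UserDto>(json);
    }
}
```

It would be registered with services.AddHttpClient<UserServiceClient>(c => c.BaseAddress = new Uri("...")), so the address stays a configuration concern per custom build.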
Or do we stick with one instance per microservice that everybody uses and we specify the project source?
-- I would not recommend this, as the example stated in your question, of a module handling database injection per project, doesn't sound like a good idea to me. It would lead to a highly coupled system design, and it might also mean that if that one service fails, all your customer sites go down. You surely wouldn't want that!
Now, as the saying goes, a picture is worth a thousand words...

How does API based data integration work?

I am managing a web application which has to be integrated with other systems such as SAP or Oracle ERP. I am pretty familiar with the middleware method of data integration, where I use my stored procedures to read/write data from/to the middleware database and the other system (SAP, Oracle ERP, etc.) uses its own methods or custom applications to read/write its data from/to the middleware DB.
Now I know that companies like SAP have their own APIs for integration, so I want to understand how API-based integrations work. Can you please help?
One of the best resources for SAP integration is the SAP API Business Hub: https://api.sap.com/. You can use it to search for predefined APIs that are available within the SAP system. To use these APIs, you will need to configure and activate them in the SAP system. These predefined solutions are designed to be used for a particular business process. For instance, to send/receive employee data for HR records, or to send/receive purchase orders. SAP aims to provide sufficient APIs that almost any integration needs can be met with their predefined solutions.
Regarding the types of API solutions SAP uses, SAP allows for the creation and consumption of OData, an open protocol for REST-based APIs. This blog series contains a good introduction to how OData is used in SAP: https://blogs.sap.com/2016/02/08/odata-everything-that-you-need-to-know-part-1/. OData uses HTTP requests, so the two systems can interact using the standard CRUD operations (create, read, update, delete); a small consumer sketch follows the list below. Two important transaction codes for working with OData services are:
SEGW (gateway service builder): create OData services
/IWFND/MAINT_SERVICE (activate and maintain services): activate and query the services
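From the consuming web application's side, calling an activated OData service is then just plain HTTP. A minimal C# sketch, where the host, the service name (ZPURCHASEORDER_SRV) and the entity set are placeholders and authentication is omitted:

```csharp
// Hypothetical sketch: reading a purchase order from an activated SAP Gateway OData service.
// The service path and entity set are placeholders; the real ones come from
// /IWFND/MAINT_SERVICE in the SAP system. Authentication (basic auth, OAuth, ...) is omitted.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class SapODataClient
{
    private readonly HttpClient _httpClient;

    public SapODataClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
        _httpClient.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
    }

    // GET .../sap/opu/odata/sap/ZPURCHASEORDER_SRV/PurchaseOrders('4500000001')
    public async Task<string> GetPurchaseOrderAsync(string orderNumber)
    {
        var url = $"/sap/opu/odata/sap/ZPURCHASEORDER_SRV/PurchaseOrders('{orderNumber}')";
        var response = await _httpClient.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // raw OData JSON payload
    }
}
```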
In addition to this, as you mentioned, SAP has its own API technologies. Two key SAP technologies for integration are:
IDoc (Intermediate Documents):
This is a document format that you can use to send data to external systems (outbound) and receive data from external systems (inbound).
You set up partner profiles for the system you are sending data to (t code WE20).
There are predefined IDoc types that define the data contained in the IDoc (this is the 'basic type' and 'message type'). IDoc data is organised into segments and, for a given IDoc type, you can append the segments so that only the specific data you require is sent.
You will need to map the data structure from the outbound and inbound systems using your middleware.
For a detailed guide to IDocs I strongly recommend the ALE (application link enabling) e-book on the SAP Learning Hub if you have access. You can use t code WEDI to browse the relevant ALE t codes.
BAPI (Business Application Programming Interfaces):
These are RFC-enabled function modules: unlike normal function modules, they can be called remotely. Like IDocs, they use RFC (Remote Function Call).
BAPIs can be executed from SE37. You need to set up a test sequence (Test -> Test Sequences) because BAPIs do not commit automatically: give the name of the BAPI, then 'BAPI_TRANSACTION_COMMIT', and execute the sequence to run the BAPI. The sketch after this section shows the equivalent pattern when calling a BAPI from an external system.
Many pre-existing BAPIs are delivered with SAP. You can browse them using t code BAPI (the BAPI Explorer).
Please see this guide for further information on BAPIs and for instructions on creating your own BAPI from scratch: https://www.guru99.com/all-about-bapi.html
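For completeness, here is a minimal sketch of the same commit pattern when a BAPI is called from an external .NET system, assuming the SAP .NET Connector (NCo 3.x) is available; the connection details and the BAPI name are placeholders:

```csharp
// Hypothetical sketch: calling a BAPI from .NET via the SAP .NET Connector (NCo 3.x).
// Connection details and the BAPI name/parameters are placeholders; the important part is
// that BAPIs do not commit automatically, so BAPI_TRANSACTION_COMMIT is invoked afterwards,
// mirroring the SE37 test sequence described above.
using SAP.Middleware.Connector;

public static class BapiCaller
{
    public static void CreateDocument()
    {
        var parameters = new RfcConfigParameters();
        parameters.Add(RfcConfigParameters.Name, "DEV");               // placeholder destination name
        parameters.Add(RfcConfigParameters.AppServerHost, "sap-host"); // placeholder host
        parameters.Add(RfcConfigParameters.SystemNumber, "00");
        parameters.Add(RfcConfigParameters.Client, "100");
        parameters.Add(RfcConfigParameters.User, "RFC_USER");
        parameters.Add(RfcConfigParameters.Password, "********");

        RfcDestination destination = RfcDestinationManager.GetDestination(parameters);

        // Keep the BAPI call and the commit in the same stateful session.
        RfcSessionManager.BeginContext(destination);
        try
        {
            IRfcFunction bapi = destination.Repository.CreateFunction("BAPI_EXAMPLE_CREATE"); // placeholder BAPI
            // bapi.SetValue("...", ...);  // import parameters depend on the business object
            bapi.Invoke(destination);

            IRfcFunction commit = destination.Repository.CreateFunction("BAPI_TRANSACTION_COMMIT");
            commit.Invoke(destination);
        }
        finally
        {
            RfcSessionManager.EndContext(destination);
        }
    }
}
```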

Building a chatbot with Bot Framework and LUIS

I am planning to build a chatbot similar to the one built by the Microsoft team for a super mall in China (https://microsoft.github.io/techcasestudies/bot%20framework/2017/06/21/CaaP-SuperBrandMall.html). I am using Visual Studio 5 for the Bot Framework, and I have the following requirements:
I want to get the details of the different clothes available in the store.
I want to fetch the data from Azure SQL (I already have the data in CSV format).
I want to connect LUIS too.
Integration with Skype.
I have the following queries:
Which type of dialog is used here? Can I do it with only a FormFlow dialog?
Integration of the Bot Framework with Azure SQL (mainly fetching the location of clothes for a particular occasion, gender and brand in the mall).
What modification is required in the Model folder in the Bot Framework project?
Integration of LUIS.
Please help me with this if anybody can suggest or add anything.
Thanks in advance.
Which type of dialog is used here? Can I do it with only FormFlow dialog?
FormFlow is suited to handling and managing a guided conversation based on specified guidelines (i.e., collecting information from the user).
Based on your scenario and requirements, your bot will have more complex logic: it needs to integrate with the LUIS service (recognize the user's intent and then perform different operations based on what the user said), query a database, and so on. I recommend using dialogs to manage the conversation flow, which is more flexible.
Integration of LUIS and Integration of Bot framework with Azure SQL
In your bot application, you can create and use a LuisDialog to integrate with a LUIS.ai application easily, which helps detect what a user wants to do by identifying their intent; you can then get the matching entities from the LuisResult within the intent handler methods of the LuisDialog.
Once you know the user's intent and the matching entities, you can call different methods or child dialogs to run the corresponding business logic or query the database for location, store or product details. A minimal sketch follows below.
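Here is a minimal sketch of such a LuisDialog, using the Bot Builder SDK v3 for C#; the "FindClothes" intent, the "Brand" entity, the connection string and the table schema are assumptions that would map to your own LUIS app and Azure SQL data:

```csharp
// Hypothetical sketch using the Bot Builder SDK v3 for C#. The LUIS app id/key, the
// "FindClothes" intent, the "Brand" entity and the Azure SQL schema are all assumptions.
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("<luis-app-id>", "<luis-subscription-key>")]
[Serializable]
public class ClothesLuisDialog : LuisDialog<object>
{
    [LuisIntent("")]
    [LuisIntent("None")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I didn't understand that.");
        context.Wait(MessageReceived);
    }

    // Fires when LUIS recognizes the (assumed) "FindClothes" intent.
    [LuisIntent("FindClothes")]
    public async Task FindClothes(IDialogContext context, LuisResult result)
    {
        EntityRecommendation brand;
        result.TryFindEntity("Brand", out brand); // entity defined in the LUIS app

        var reply = await QueryStoreLocationAsync(brand?.Entity);
        await context.PostAsync(reply);
        context.Wait(MessageReceived);
    }

    private static async Task<string> QueryStoreLocationAsync(string brand)
    {
        // Placeholder connection string and table; the CSV data would be imported into Azure SQL first.
        using (var connection = new SqlConnection("<azure-sql-connection-string>"))
        using (var command = new SqlCommand(
            "SELECT TOP 1 StoreLocation FROM Clothes WHERE Brand = @brand", connection))
        {
            command.Parameters.AddWithValue("@brand", (object)brand ?? DBNull.Value);
            await connection.OpenAsync();
            var location = await command.ExecuteScalarAsync();
            return location == null
                ? "I couldn't find that brand in the mall."
                : $"You can find it at {location}.";
        }
    }
}
```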
Note:
LUIS Bot Sample
Best practices of building LUIS app

Approach to designing ASP.NET Core 2 applications that share functionality

Currently I've been tasked with creating a bunch of small-to-medium applications, each of them having some common functionality:
Implement a pre-approved Bootstrap-based graphical design. Therefore, they will use the same assets, images, CSS and JavaScript components.
Share the same licensing mechanism. An application service will be built that scans a file or database to get the number of licenses available for each app, thus granting or denying access to users. The only thing that varies is the name of the application instance itself.
Use Azure AD authentication.
Each must use the same authorization mechanism. A claims-based mechanism will be built to retrieve the claims from the database, given a user's AAD account (see the claims sketch after this list).
Each must share the same administration console. This console will be used to populate user information and common catalogs.
A service will be built to show toast notifications within the apps.
An email notification service will be built to send emails to users when triggered by business rules.
And some other, less important features, but these are the core ones.
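For the claims-based authorization mechanism, here is a minimal sketch of how each app could enrich the Azure AD principal with claims loaded from the database, using ASP.NET Core's IClaimsTransformation; IUserClaimStore and SqlUserClaimStore are assumed abstractions over your user tables:

```csharp
// Hypothetical sketch: after Azure AD authentication, enrich the principal with
// application claims loaded from the database. IUserClaimStore is an assumed abstraction.
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;

public interface IUserClaimStore
{
    Task<string[]> GetRolesAsync(string aadObjectId);
}

public class DatabaseClaimsTransformer : IClaimsTransformation
{
    private readonly IUserClaimStore _store;

    public DatabaseClaimsTransformer(IUserClaimStore store)
    {
        _store = store;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        // The Azure AD object identifier claim on the incoming token.
        var objectId = principal
            .FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
        if (objectId == null || principal.Identities.First().HasClaim(c => c.Type == ClaimTypes.Role))
        {
            return principal; // not an AAD principal, or already transformed
        }

        var identity = (ClaimsIdentity)principal.Identity;
        foreach (var role in await _store.GetRolesAsync(objectId))
        {
            identity.AddClaim(new Claim(ClaimTypes.Role, role));
        }
        return principal;
    }
}

// Registered in each application's Startup:
//   services.AddScoped<IUserClaimStore, SqlUserClaimStore>();   // SqlUserClaimStore is assumed
//   services.AddScoped<IClaimsTransformation, DatabaseClaimsTransformer>();
```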
A first, perhaps naïve, approach was to create an ASP.NET Core 2 solution for each application and implement the shared functionality in a sort of Core assembly shared by each app. However, while this could work for points 2 to 5, I'd still be repeating the graphical UI design for each app (basically, copying the wwwroot folder as well as the shared Razor views five times). So a change to a CSS file tomorrow would have to be replicated five times.
Another approach would be to create a single ASP.NET Core 2 solution, implement the shared functionality and the UI, and then use the "areas" feature of ASP.NET Core 2, with each area being a different app. The problem with this approach is shipping the app: if I have to install all five apps on a customer's server, no problem. If I have to install, say, only two apps, then I'd have to ship all five anyway and find a way to disable the other three.
So, I'd like to know whether there is a feature in ASP.NET Core 2 for handling this type of scenario, or alternatively, what industry-standard architectural designs could apply here.
In Windows Presentation Foundation with Unity, I can create a common shell and then load modules into that shell, within the same shell window. So, using configuration files, I can add or remove modules as I see fit. What I'm looking for is something similar in concept: I do not want to create five ASP.NET Core solutions, copy-paste the wwwroot folder and implement the same mechanisms for authorization, notifications, email, etc., but rather find a way to load the core, common features and then load additional features.
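One possible direction, sketched below under the assumption that each feature ships as its own assembly of controllers and views: a single shell application that loads only the modules listed in configuration as MVC application parts, similar in spirit to the WPF/Unity module loading described above. The assembly names and configuration key are placeholders:

```csharp
// Hypothetical sketch: a "shell" web app that loads the feature modules listed in
// configuration as MVC application parts, so shipping two apps instead of five becomes
// a configuration change rather than a code change. Assembly names are assumptions.
using System.Reflection;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        var mvc = services.AddMvc();

        // appsettings.json (assumed shape):
        //   "EnabledModules": [ "Company.Apps.Invoicing", "Company.Apps.Reporting" ]
        var modules = _configuration.GetSection("EnabledModules").Get<string[]>() ?? new string[0];
        foreach (var assemblyName in modules)
        {
            // Each module ships its controllers (and precompiled views) in its own assembly.
            var assembly = Assembly.Load(new AssemblyName(assemblyName));
            mvc.AddApplicationPart(assembly);
        }
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles(); // the shared wwwroot lives in the shell
        app.UseMvcWithDefaultRoute();
    }
}
```

Shared Razor views would need to be precompiled into the module assemblies (or, from ASP.NET Core 2.1 onwards, shared through a Razor Class Library), while the common CSS/JavaScript assets stay in the shell's wwwroot so they are maintained in one place.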
Thanks in advance.