Can MuleSoft API/Anypoint copy data from one database's table to another database's table without any additional step (any custom code)?

Goal:
I have two SQL Server databases (DB-A and DB-B) located on two different servers on the same network.
DB-A has a table T1, and I want to copy data from DB-A's table T1 (source) to DB-B's table T2 (destination). This DB sync should take place any time a record in T1 is added, updated, or deleted.
Please note: all DB-to-DB data sync options are out of consideration; I must use a MuleSoft API for this job.
Background:
I am new to MuleSoft and its products. I am told the MuleSoft platform can help with building and managing APIs.
I explored the web for MuleSoft's offerings; there are many articles (linked below) suggesting that MuleSoft itself can read from one DB table and write to another DB table (using DB connectors, etc.).
Questions:
Is it possible that MuleSoft itself can get this data sync job done without us writing our own MuleSoft API invoker or MuleSoft API consumer (to trigger the MuleSoft API from one end, or to receive data from the MuleSoft API on the other end and write to the DB table)?
What are the key steps to get this data transfer working? Any reference that shows the step-by-step journey to achieve the goal would be a huge help.
Links:
https://help.mulesoft.com/s/question/0D52T00004mXXGDSA4/copy-data-from-one-oracle-table-to-another-oracle-table
https://help.mulesoft.com/s/question/0D52T00004mXStnSAG/select-insert-data-from-one-database-to-another
https://help.mulesoft.com/s/question/0D72T000003rpJJSAY/detail

First let's clarify the terminology, since the question mixes several concepts in a confusing way. MuleSoft is a company that has several products that may apply. A "MuleSoft API" should be understood as an API created by MuleSoft. Since you are clearly talking about APIs created by you or your organization, that would be an incorrect description. What you are really talking about are Mule applications, which are applications that are deployed and executed in a Mule runtime. Mule applications may implement your APIs, or may implement integrations. After all, Mule originally was an ESB product used to integrate other systems, before REST APIs were a thing. You may deploy Mule applications to Anypoint Platform, specifically to the CloudHub component of the platform, or to an on-prem instance of the Mule runtime.
In any case, a Mule application is perfectly capable of implementing APIs, integrations, or both. There is no need for it to implement an API or call another API if that is not what you want. You do need to trigger the flow somehow: by reading directly from the database to find new rows, with a scheduler that executes a query at a given time, with an HTTP request, or even by having an API listening for requests to trigger the flow.
As an example, the application can use the <db:listener> source of the Database connector to start the flow by fetching rows. You need to take care of the watermark column configuration so that only new rows are detected. See the documentation at https://docs.mulesoft.com/db-connector/1.13/database-documentation#listener for details.
Alternatively, you can trigger the flow in another way and just use a select operation.
After that, use DataWeave to transform the records as needed, then use the insert or update operations.
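Not part of the original answer, but to make the watermark idea concrete, here is a rough sketch in plain JDBC of the logic that <db:listener> plus an insert operation performs for you in a Mule flow. All the names (tables T1/T2, the ID, NAME and LAST_MODIFIED columns, the connection URLs) are hypothetical:

import java.sql.*;
import java.time.Instant;

// Illustration only: the polling and watermark bookkeeping that Mule's
// <db:listener> does for you, written as plain JDBC against SQL Server.
// Table, column, and connection names are hypothetical.
public class WatermarkSyncSketch {
    public static void main(String[] args) throws SQLException {
        // In practice the watermark must be persisted between runs.
        Timestamp watermark = Timestamp.from(Instant.EPOCH);

        try (Connection src = DriverManager.getConnection(
                 "jdbc:sqlserver://db-a;databaseName=DBA", "user", "pass");
             Connection dst = DriverManager.getConnection(
                 "jdbc:sqlserver://db-b;databaseName=DBB", "user", "pass")) {

            // 1. Fetch only rows changed since the last watermark.
            PreparedStatement select = src.prepareStatement(
                "SELECT ID, NAME, LAST_MODIFIED FROM T1 " +
                "WHERE LAST_MODIFIED > ? ORDER BY LAST_MODIFIED");
            select.setTimestamp(1, watermark);

            // 2. Upsert each row into the destination table
            //    (the insert/update operations in the Mule flow).
            PreparedStatement upsert = dst.prepareStatement(
                "MERGE T2 AS t USING (VALUES (?, ?)) AS s(ID, NAME) ON t.ID = s.ID " +
                "WHEN MATCHED THEN UPDATE SET t.NAME = s.NAME " +
                "WHEN NOT MATCHED THEN INSERT (ID, NAME) VALUES (s.ID, s.NAME);");

            try (ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    // Any record transformation (DataWeave in Mule) would go here.
                    upsert.setInt(1, rs.getInt("ID"));
                    upsert.setString(2, rs.getString("NAME"));
                    upsert.executeUpdate();
                    watermark = rs.getTimestamp("LAST_MODIFIED"); // advance the watermark
                }
            }
        }
    }
}

One caveat worth knowing: a watermark-based poll (including <db:listener>) detects inserts and updates, but not deletes. To propagate deletes, as your goal requires, you would need something like soft-delete flags, triggers, or change data capture on the source table.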
There are examples in the documentation that can help you get started. If you are not familiar with Mule, you should start by reading the documentation and doing some training until you get the concepts.

Related

How to architect scheduled API to API integration

My organization moves data between systems for customers. These integrations are built in BizTalk and are done by file, sometimes to/from APIs. More and more customers are switching to APIs, so we are facing more and more API-to-API integrations.
I'm mostly a backend developer, but I have been tasked with finding a more generic pattern or system for building these integrations; we are talking about close to a thousand integrations.
But not thousands of different APIs; many customers use the same sorts of systems.
What I want is a solution that:
Fetches data from the source API
Transforms the data to the format for the target API
Sends the data to the target API
Another requirement is that it should be possible to set a schedule when these jobs should run.
This is easily done in BizTalk, but as mentioned there will be thousands of integrations, and if we need to change something in one of the steps it will be a lot of work.
My vision is something that holds interfaces to all the APIs we communicate with and also contains the scheduled jobs we want to run between them, preferably with logging/tracking.
There must be something out there that does this?
Suggestions?
NOTE: No cloud-based solutions since they are not allowed in our organization.
You can easily implement this using the temporal.io open source project. You can code your integrations in a general-purpose programming language. Temporal ensures that the integration runs to completion in the presence of all sorts of intermittent failures. Scheduling is also supported out of the box.
Disclaimer: I'm a founder of the Temporal project.
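As a rough illustration (mine, not from the answer above) of what one integration looks like with Temporal's Java SDK: the fetch/transform/send steps become activities, and the schedule is a cron expression on the workflow options. The task queue name and endpoint URLs are made up:

import io.temporal.activity.ActivityInterface;
import io.temporal.activity.ActivityOptions;
import io.temporal.client.WorkflowClient;
import io.temporal.client.WorkflowOptions;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.workflow.Workflow;
import io.temporal.workflow.WorkflowInterface;
import io.temporal.workflow.WorkflowMethod;
import java.time.Duration;

@ActivityInterface
interface SyncActivities {
    String fetch(String sourceApi);               // read from the source API
    String transform(String payload);             // map to the target format
    void send(String targetApi, String payload);  // write to the target API
}

@WorkflowInterface
interface SyncWorkflow {
    @WorkflowMethod
    void run(String sourceApi, String targetApi);
}

class SyncWorkflowImpl implements SyncWorkflow {
    // Activities are retried automatically on intermittent failures.
    private final SyncActivities acts = Workflow.newActivityStub(
            SyncActivities.class,
            ActivityOptions.newBuilder()
                    .setStartToCloseTimeout(Duration.ofMinutes(5))
                    .build());

    @Override
    public void run(String sourceApi, String targetApi) {
        acts.send(targetApi, acts.transform(acts.fetch(sourceApi)));
    }
}

class Starter {
    public static void main(String[] args) {
        // Connects to a self-hosted Temporal server; a Worker that registers
        // SyncWorkflowImpl and the activity implementations is omitted here.
        WorkflowClient client = WorkflowClient.newInstance(
                WorkflowServiceStubs.newLocalServiceStubs());
        SyncWorkflow wf = client.newWorkflowStub(
                SyncWorkflow.class,
                WorkflowOptions.newBuilder()
                        .setTaskQueue("integrations")   // made-up queue name
                        .setCronSchedule("0 * * * *")   // hourly; set per job
                        .build());
        WorkflowClient.start(wf::run,
                "https://source.example/api", "https://target.example/api");
    }
}

Since the open source Temporal server is self-hosted, this also fits the no-cloud constraint in the question.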

In DataFactory, what is a good strategy to migrate data into Dynamics365 using Dynamics Web API?

I need to migrate data to Dynamics 365 using Data Factory. The Dynamics 365 connector is not enough for me, since one of the requirements is to update only those attributes that have been modified since the last migration, not the whole record. The other requirement is that sometimes we have to 'null' values in the destination.
I believe I can do that by generating a different JSON per record and migrating them using the Web API.
I thought about putting these calls in an Azure Function, but I believe they are not meant to be used like this, even though with the right pricing plan they can run with no time limit.
I think I'm doing it wrong and I can't figure out the right way.
Could you share your experience or point of view?
The correct way to interact with Dynamics 365 from another application is either directly via the Web API or using the C# SDK. In both scenarios, to create or update multiple records, the best way (as far as I know) is the ExecuteMultipleRequest message, which allows you to batch updates, creates, and deletes and execute them in one request.
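To make the Web API side concrete, here is a minimal sketch (mine, not from the answer) of a single partial update using Java's built-in HTTP client. Sending only the changed attributes updates just those, and sending null clears a value in Dynamics, which covers both requirements in the question. The org URL, record id, and token are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DynamicsPatchSketch {
    public static void main(String[] args) throws Exception {
        // Only the attributes that changed go in the body; null clears a value.
        String json = "{ \"name\": \"New Name\", \"telephone1\": null }";

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://yourorg.crm.dynamics.com/api/data/v9.2/" +
                           "accounts(00000000-0000-0000-0000-000000000000)"))
            .header("Authorization", "Bearer <access-token>") // placeholder token
            .header("Content-Type", "application/json")
            .header("If-Match", "*") // update only; fail rather than create
            .method("PATCH", HttpRequest.BodyPublishers.ofString(json))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // expect 204 No Content
    }
}

For batching many of these calls, the Web API's rough counterpart to ExecuteMultipleRequest is the OData $batch endpoint.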

Can HL7 2.x only be used for receiving messages or also to pull data?

I am quite new to the HL7 field and not a developer, so sorry if my question seems too obvious.
We want to develop an app for a hospital which visualises performance and patient-flow data by aggregating data from other hospital applications. Our app will visualise both real-time and historical data. During talks with the head of IT I got confused; he explained that I need to:
Develop an HL7 listener (like Mirth) which can receive messages from other applications that communicate via the HL7 2.x standard, to catch real-time data, and after that arrange to migrate historical data from the other applications via SQL queries. That sounds pretty logical, though I'm not sure he's an expert, since he had no idea what an API was and knew nothing about FHIR.
My questions are:
1 What triggers an application to send an HL7 2.x message to other applications when, for instance, someone changes the status of a patient? Is it programmed to automatically broadcast a message with each change to a record? So, assuming all applications do this as standard, do I just need a listener like Mirth to catch those messages and migrate them into my own database?
2 Can't I use the HL7 2.x standard to pull info out of a database via a query? Meaning, can it be used for two-way communication: I send a query, and the application sends me the data in an HL7 message? Meaning I could also use it to pull historical data from another database?
3 What difference would the use of the FHIR standard make in this situation? I believe it can definitely be used to pull information from another database. But would it in fact make a difference compared with the tactic the tech guy is advising, which is migrating historical data to my own database and then just catching new changes by receiving HL7 2.x messages?
4 Would it be advisable to use a FHIR RESTful API to pull/receive info from applications which still use the HL7 2.x standard? So for both historical data and real-time changes? Would this be a faster way of integrating, or is it better to use the old-fashioned way the tech guy advises?
I am very keen to know more about this, since I want to put together a strategy which is future-proof and won't cost months of integration time every time we move to a new hospital.
Thanks for your help guys!
1 Depends on the application. Most only send data, and it's configurable when and why.
2 No, you use HL7 v2 to pull data out of an application, not a database; that is, if the application supports it. Many (most?) don't. And you can only do what the application allows.
3 FHIR would be a lot easier to use, but it's still settling, and you'll have trouble finding applications that offer a FHIR interface this year. You'll have to talk to potential customers to find out whether it's possible. Btw, FHIR can do what v2 can in this regard: both pull and push.
4 It's always advisable to use FHIR, if you can. Mostly, though, you'll have to use v2, because that's what's on offer.
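To make the listener part concrete, here is a minimal sketch (mine, not from the answer) using the open source HAPI library for Java, doing the kind of parsing Mirth does internally. The ADT^A01 (patient admit) message content is made up:

import ca.uhn.hl7v2.DefaultHapiContext;
import ca.uhn.hl7v2.HapiContext;
import ca.uhn.hl7v2.model.v25.message.ADT_A01;
import ca.uhn.hl7v2.parser.Parser;

public class Hl7ParseSketch {
    public static void main(String[] args) throws Exception {
        // A made-up ADT^A01 admit message, as a listener would receive it.
        String msg = "MSH|^~\\&|HIS|HOSP|MYAPP|HOSP|20240101120000||ADT^A01|MSG00001|P|2.5\r"
                   + "PID|1||12345^^^HOSP^MR||Doe^John||19800101|M\r"
                   + "PV1|1|I|WARD1^101^1\r";

        HapiContext context = new DefaultHapiContext();
        Parser parser = context.getPipeParser();
        ADT_A01 adt = (ADT_A01) parser.parse(msg);

        // Extract the fields you want to persist in your own database.
        String mrn = adt.getPID().getPatientIdentifierList(0).getIDNumber().getValue();
        String surname = adt.getPID().getPatientName(0).getFamilyName().getSurname().getValue();
        System.out.println(mrn + " " + surname); // 12345 Doe
    }
}

The listening end itself (for example HAPI's SimpleServer, or a Mirth channel) receives messages like this over MLLP and hands them to parsing code of this shape.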

How to use Apache Nifi to query a REST API?

For a project I need to develop an ETL (extract, transform, load) process that reads data from a (legacy) tool that exposes its data over a REST API. This data needs to be stored in Amazon S3.
I would really like to try this with Apache NiFi, but I honestly have no clue yet how to connect to the REST API, or where/how to implement some business logic to 'talk the right protocol' with the source system. For example, I would like to keep track of what data has been written so far, so it can resume loading where it left off.
So far I have been reading the NiFi documentation and I'm getting better insight into what the tool provides/entails. However, it's not clear to me how I could implement this task within the NiFi architecture.
Hopefully someone can give me some guidance?
Thanks,
Paul
The InvokeHTTP processor can be used to query a REST API.
Here is a simple flow that
Queries the REST API at https://api.exchangeratesapi.io/latest every 10 minutes
Sets the output-file name (exchangerates_<ID>.json)
Stores the query response in the output file on the local filesystem (under /tmp/data-out)
I exported the flow as a NiFi template and stored it in a gist. The template can be imported into a NiFi instance and run as is.
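If it helps to see the same logic outside NiFi, here is roughly what that three-step flow does, sketched with Java's built-in HTTP client (my own illustration, not part of the template). A timestamp stands in for the flowfile <ID> NiFi would use in the file name, and NiFi's processor scheduling would handle the every-10-minutes part:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExchangeRatesFetchSketch {
    public static void main(String[] args) throws Exception {
        // 1. Query the REST API (InvokeHTTP in the NiFi flow).
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.exchangeratesapi.io/latest")).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 2. Build the output file name; a timestamp stands in for the
        //    flowfile ID that the NiFi flow uses.
        Path out = Path.of("/tmp/data-out",
                "exchangerates_" + System.currentTimeMillis() + ".json");

        // 3. Store the response on the local filesystem
        //    (a file-writing processor such as PutFile in NiFi).
        Files.createDirectories(out.getParent());
        Files.writeString(out, response.body());
    }
}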

Using an ESB system to replicate data among databases

I work in a small supermarket chain (4 stores). Each store has its own local database, which contains information on each product, prices, and the transactions that have occurred at the store. In addition, each store needs to replicate this information back and forth to a central location.
Right now we are using something called SQLRemote, which is a feature of Sybase's SQL Anywhere database. It works, but sometimes fails and is difficult to manage. To its credit, SQLRemote actually wasn't designed for this type of scenario, so it could be said that we are using it incorrectly.
I was thinking that an ESB system such as Mule (or ChainBuilder, which seems easier to set up) might be a good alternative to SQLRemote. I understand that these systems can detect when changes occur in the database (i.e. when records are added, modified or deleted), and can be set up to deliver a message in a transaction.
Would this be a viable solution to my scenario?
Best regards,
Edgard
Yeah, I am sure Mule should be able to do this.
However, I work for a company which provides Fuse ESB, which is built on Apache projects such as Apache ServiceMix, Apache ActiveMQ, Apache Camel and Apache CXF.
We have a user story about a very big retailer in the US which uses Fuse ESB to integrate their stores, warehouses and whatnot:
http://fusesource.com/collateral/17
Fuse ESB
http://fusesource.com/products/enterprise-servicemix/
Yes, Mule can support this scenario, though it might be overkill. There are targeted database replication solutions out there. The advantage of Mule would be its ability to handle failure and other scenarios where you need the workflow to adapt based on what is happening. This allows you to build a very robust solution.
Mule flows could be a very good choice to address this problem. It's a new feature of Mule 3 designed for orchestrating integrations like this.