How to programmatically add a processor to a Draco data flow in the cluster - automation

Is it possible to add a processor to a Draco data flow in the cluster programmatically?
I am planning to automate the data processor flow in a 3-node cluster using NiFi Registry.
Can you please let us know how to handle this automatically? Any suggestions are welcome.
Appreciate your help!!
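Since Draco is built on Apache NiFi, one common option is to drive the NiFi REST API (the same API the UI uses) from a script or CI/CD pipeline. Below is a minimal sketch in Python, assuming an unsecured cluster; the host name, port and process-group id are placeholders:

```python
import requests

# Placeholder values: any node of the (unsecured) cluster and the target process group.
# The root group's id can be fetched from GET /nifi-api/flow/process-groups/root.
NIFI_API = "http://draco-node-1:9090/nifi-api"
PARENT_PG_ID = "<process-group-id>"

payload = {
    "revision": {"version": 0},  # new components always start at revision 0
    "component": {
        "type": "org.apache.nifi.processors.standard.GenerateFlowFile",
        "position": {"x": 100.0, "y": 100.0},
    },
}

# Create the processor; in a cluster the request is replicated to every node.
resp = requests.post(f"{NIFI_API}/process-groups/{PARENT_PG_ID}/processors", json=payload)
resp.raise_for_status()
print("created processor", resp.json()["id"])
```

Flows versioned in NiFi Registry can be deployed and upgraded through the same REST API (the /versions endpoints), which is what CI/CD pipelines typically script.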

Related

Spring Batch API to dynamically create tasks and steps

Our application allows users to dynamically assemble/construct Tasks and Steps (remote calls to various services). Once the user has defined/constructed a Task with various Steps, the system will allow the Task to be run manually or on a schedule.
To implement the above functionality, what would it take to use Spring Batch to programmatically create a given Task and its constituent Steps? I'm assuming that such tasks can be scheduled with the help of Quartz, etc.
I understand that Spring XD and Spring Cloud incorporate similar features -- some pointers to the relevant XD codebase/examples (or any other project's codebase) that would help with my task would be appreciated.
What considerations should I keep in mind? Any gotchas?
We are currently not using any cloud platform, but the next version will be deployed to one.
Thanks very much.

Best practice with Azure Container Instances

I was asked to build a cloud architecture plan in Azure with serverless, yet with the ability to transition to Kubernetes. I was thinking of using Azure Container Instances, since Azure Functions is not really a container deployment model. From my limited understanding, ACI seems to be lacking a lot, e.g. health checks, scaling, auto-healing, etc.
My question: what is the best practice for ACI? Is there a workaround to support these capabilities? Looking at the Microsoft website it looks promising, but it is hard to dig out the exact recommended design.
Combine ACI with the ACI Logic Apps connector, Azure queues and Azure Functions to build robust infrastructure that can elastically scale out containers on demand. With Azure Container Instances, you can run complex tasks that are capable of responding to events.
As for the Azure Container Instance itself, its main benefit is that it is the fastest and simplest option compared with VMs, AKS, Web Apps and so on, but you do not have much control over it. Its main aim is to test whether your image runs as you expect.
Azure Logic Apps or Azure Functions just help you create and delete the Container Instance at the time you want, or fetch its state or some messages from it, and no more. So if you want to use Azure Container Instances together with other services such as Logic Apps, you need to know what each of them can help you with.
If you have other questions about this issue, please let me know and I will try my best to help you.
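As a rough illustration of that create-and-delete pattern (not an official recommendation), here is a Python sketch using the Azure SDK to start a container group on demand and tear it down afterwards; the subscription, resource group and image names are placeholders, and exact model/method names can differ between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

# Placeholder subscription id; authentication via the ambient Azure credential.
client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

group = ContainerGroup(
    location="westeurope",
    os_type="Linux",
    restart_policy="Never",  # run the task once, then stop
    containers=[
        Container(
            name="worker",
            image="myregistry.azurecr.io/worker:latest",  # placeholder image
            resources=ResourceRequirements(
                requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
            ),
        )
    ],
)

# Create the container group when work arrives (e.g. from a queue-triggered Function)...
client.container_groups.begin_create_or_update("my-rg", "on-demand-worker", group).result()
# ...and delete it once the task has finished, so you only pay for the run time.
client.container_groups.begin_delete("my-rg", "on-demand-worker").result()
```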

Mosaic-Decisions: Flow import/export by API

We have 21 Mosaic instances, and it is very difficult to migrate flows across 21 environments. We want to automate this process with a CI/CD pipeline.
How can we import/export a Mosaic flow via an API? If this is available, please mention the steps.
Any advice is greatly appreciated.
Yes, Mosaic Decisions provides flow migration. The following migrations are available in Mosaic Decisions:
Single flow export-import
Bulk flow export-import
Whole Project export-import
As you mentioned triggering it through the terminal, it can be done in 2 steps (a scripted example is sketched after this answer):
Hitting a curl command on the API meant to export the flow(s)
Hitting a curl command on the API meant to import the flow(s)
Please note, you need access to the cluster and the project where the flow(s) are being imported.
In coming versions, Mosaic Decisions will also support export-import through a single click in the UI or a single API call.
Hope this resolves your query.
For API related queries, you can connect with the product support of Mosaic.
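For illustration, the two curl steps could be scripted from a CI/CD pipeline roughly as follows; the host names, endpoint paths and auth header below are placeholders, since the actual URLs come from Mosaic's API documentation / product support:

```python
import requests

MOSAIC_SOURCE = "https://mosaic-dev.example.com"   # placeholder source environment
MOSAIC_TARGET = "https://mosaic-prod.example.com"  # placeholder target environment
HEADERS = {"Authorization": "Bearer <token>"}      # whatever auth scheme your instances use

# Step 1: export the flow from the source environment (placeholder endpoint path).
export = requests.get(f"{MOSAIC_SOURCE}/api/flows/<flow-id>/export", headers=HEADERS)
export.raise_for_status()

# Step 2: import the exported definition into the target environment (placeholder endpoint path).
imported = requests.post(
    f"{MOSAIC_TARGET}/api/flows/import",
    headers=HEADERS,
    files={"file": ("flow.json", export.content)},
)
imported.raise_for_status()
```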

RabbitMQ and NiFi

I am new to NiFi, and advice is welcome.
We get data sent in from external sources in many small records. I am thinking of pulling those records into NiFi via RabbitMQ. I'd like to "spool" or "batch" those records up into larger groupings (perhaps based on some index in the records), and when a group of records reaches a certain size threshold, write it out to S3.
How can I best accomplish this in NiFi? Any other suggestions?
Thanks, Gary
RabbitMQ is based on AMQP. NiFi provides a processor for AMQP called ConsumeAMQP. You will find additional details in the linked documentation, which has notes specific to RabbitMQ. Configure the processor according to the documentation and you are good to go.
For the second part, you need to use the PutS3Object processor, and there you will be able to define the thresholds.
This should be achievable... I don't know that much about RabbitMQ, but assuming it supports a JMS interface, you could probably use NiFi's ConsumeJMS processor, followed by MergeContent to merge until your threshold is reached, and then PutS3Object to write to S3.
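To make the batch-to-a-threshold idea concrete, here is what the same consume-merge-upload logic looks like outside NiFi as a small Python sketch (queue name, bucket and threshold are placeholders; in NiFi the MergeContent and PutS3Object processors handle this for you):

```python
import uuid
import pika    # RabbitMQ client
import boto3   # AWS S3 client

s3 = boto3.client("s3")
BUCKET = "my-record-bucket"        # placeholder bucket name
THRESHOLD_BYTES = 5 * 1024 * 1024  # flush once ~5 MB of records has accumulated

batch = []
batch_size = 0

def flush():
    """Write the accumulated records to S3 as one object and reset the batch."""
    global batch, batch_size
    if not batch:
        return
    body = b"\n".join(batch)
    s3.put_object(Bucket=BUCKET, Key=f"records/batch-{uuid.uuid4()}.ndjson", Body=body)
    batch, batch_size = [], 0

def on_message(channel, method, properties, body):
    global batch_size
    batch.append(body)
    batch_size += len(body)
    if batch_size >= THRESHOLD_BYTES:
        flush()
    # For simplicity each message is acked immediately; a production consumer
    # would ack only after its batch has been flushed to S3.
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq-host"))
channel = connection.channel()
channel.basic_consume(queue="incoming-records", on_message_callback=on_message)
channel.start_consuming()
```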

How to use Apache NiFi to query a REST API?

For a project I need to develop an ETL (extract, transform, load) process that reads data from a (legacy) tool that exposes its data via a REST API. This data needs to be stored in Amazon S3.
I would really like to try this with Apache NiFi, but I honestly have no clue yet how I can connect to the REST API, and where/how I can implement some business logic to 'talk the right protocol' with the source system. For example, I would like to keep track of what data has been written so far, so the process can resume loading where it left off.
So far I have been reading the NiFi documentation and I'm getting better insight into what the tool provides/entails. However, it's not clear to me how I could implement this task within the NiFi architecture.
Hopefully someone can give me some guidance?
Thanks,
Paul
The InvokeHTTP processor can be used to query a REST API.
Here is a simple flow that
Queries the REST API at https://api.exchangeratesapi.io/latest every 10 minutes
Sets the output-file name (exchangerates_<ID>.json)
Stores the query response in the output file on the local filesystem (under /tmp/data-out)
I exported the flow as a NiFi template and stored it in a gist. The template can be imported into a NiFi instance and run as is.
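For comparison, the same fetch-and-store behaviour expressed as a few lines of Python (not a NiFi artifact, just to make what the flow does concrete):

```python
import os
import time
import uuid
import requests

OUT_DIR = "/tmp/data-out"  # same target directory as the PutFile processor in the template
os.makedirs(OUT_DIR, exist_ok=True)

while True:
    # Query the REST API, like the InvokeHTTP processor does.
    resp = requests.get("https://api.exchangeratesapi.io/latest")
    resp.raise_for_status()
    # Mirror the flow's naming scheme: exchangerates_<ID>.json
    with open(f"{OUT_DIR}/exchangerates_{uuid.uuid4()}.json", "w") as f:
        f.write(resp.text)
    time.sleep(600)  # run every 10 minutes, matching the InvokeHTTP schedule
```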