We are using Airflow in a dev environment. How do we migrate changes from one environment to another?

Currently the DAG code is in a GitHub repository and can be migrated to other environments easily. But what is the best way to migrate variables and connections to other environments?
We found that this can be done through the CLI, but that is not working for us. Is there any other way?

You could write a DAG that uses the API to read variables and connections from dev and create or update them in the other environments.
Airflow REST API Variables
Airflow REST API Connections
In order to use the API, you need to activate API authentication.
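For illustration, here is a minimal sketch of that idea as a plain Python script against the stable REST API (Airflow 2.x). The environment URLs, the basic-auth credentials and the use of the requests library are assumptions, and the same logic could be wrapped in a PythonOperator inside a DAG. Note that the API never returns connection passwords, so secrets still have to be transferred separately.

import requests

# Hypothetical endpoints and credentials - replace with your own environments.
DEV = "https://airflow-dev.example.com/api/v1"
TARGET = "https://airflow-prod.example.com/api/v1"
DEV_AUTH = ("dev_user", "dev_password")
TARGET_AUTH = ("prod_user", "prod_password")

def sync_variables():
    # Read all variables from dev (pagination via limit/offset is ignored for brevity).
    variables = requests.get(f"{DEV}/variables", auth=DEV_AUTH).json()["variables"]
    for var in variables:
        payload = {"key": var["key"], "value": var["value"]}
        # Try to update an existing variable; create it if it does not exist yet.
        resp = requests.patch(f"{TARGET}/variables/{var['key']}", json=payload, auth=TARGET_AUTH)
        if resp.status_code == 404:
            requests.post(f"{TARGET}/variables", json=payload, auth=TARGET_AUTH)

def sync_connections():
    # Passwords are not exposed by the API and must be set separately on the target.
    connections = requests.get(f"{DEV}/connections", auth=DEV_AUTH).json()["connections"]
    for conn in connections:
        payload = {k: conn.get(k) for k in
                   ("connection_id", "conn_type", "host", "schema", "login", "port", "extra")}
        resp = requests.patch(f"{TARGET}/connections/{conn['connection_id']}",
                              json=payload, auth=TARGET_AUTH)
        if resp.status_code == 404:
            requests.post(f"{TARGET}/connections", json=payload, auth=TARGET_AUTH)

if __name__ == "__main__":
    sync_variables()
    sync_connections()

Basic auth like the above only works once an auth backend (for example basic_auth) is enabled in the [api] section of airflow.cfg on both instances, which is the activation step mentioned above.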

Related

Is it possible to share a datasource testcontainer between Quarkus apps in dev mode?

For Kafka, Redis and other Testcontainers-backed services there is a quarkus.*.devservices.shared configuration option (e.g. https://quarkus.io/guides/dev-services#quarkus-kafka-client-config-group-kafka-dev-services-build-time-config_quarkus.kafka.devservices.shared), which reuses an already-running testcontainer of that type if one exists.
Is there a way to achieve something similar with datasources/dbs?
Example:
I have two Quarkus apps and I want to share a MySQL database between them in dev mode. Setting up the tables is done with Flyway.

How to build a development and production environment in Apache NiFi

I have two Apache NiFi servers, development and production, hosted on AWS. Currently the migration between development and production is done manually. Is it possible to automate this process and ensure that people do not develop directly in production?
I thought about putting the entire NiFi configuration in GitHub and having it deployed to the production server, but I don't know if that is the right approach.
One option is to use NiFi Registry: store the flows in the registry and share the registry between the development and production environments. You can then promote the latest version of a flow from dev to prod.
As you say, another option is to use Git to share the flow.xml.gz between environments together with a deploy script (a rough sketch of such a script is shown after this answer). The flow.xml.gz stores the data flow configuration/canvas. You can use parameterized flows (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Parameters) to point NiFi at different external dev/prod services (e.g. a dev processor uses a dev database URL, while prod points to the prod database URL).
One more option is to export all or part of the NiFi flow as a template and upload the template to your production NiFi; however, the registry is probably a better way of handling this. More info on templates here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#templates.
I believe the original design plan behind NiFi was not necessarily to have different environments, but rather to allow live changes in production. I guess you would build your initial data flow using some test data in production and then, once it's ready, start the live data flow. But I think it's reasonable to want separate environments.
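To make the Git option concrete, here is a rough sketch of such a deploy script. The repository path, NIFI_HOME location and permission to restart NiFi are all assumptions, and Python is used purely for illustration.

import shutil
import subprocess

# Hypothetical locations - adjust to your installation.
REPO = "/opt/deploy/nifi-flow"   # git clone that tracks flow.xml.gz
NIFI_HOME = "/opt/nifi"          # production NiFi installation

# 1. Pull the latest promoted version of the flow from Git.
subprocess.run(["git", "-C", REPO, "pull"], check=True)

# 2. Back up the current canvas, then install the new flow.xml.gz.
shutil.copy2(f"{NIFI_HOME}/conf/flow.xml.gz", f"{NIFI_HOME}/conf/flow.xml.gz.bak")
shutil.copy2(f"{REPO}/flow.xml.gz", f"{NIFI_HOME}/conf/flow.xml.gz")

# 3. Restart NiFi so it loads the new flow. Environment-specific values (database
#    URLs etc.) should come from Parameters rather than being hard-coded in the flow.
subprocess.run([f"{NIFI_HOME}/bin/nifi.sh", "restart"], check=True)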

How to set up hosting for multiple mock services using Karate DSL that will run long-term

We have many vendors, and not all of them provide sandbox environments for integration testing.
I am looking to mock them and host the mocks myself; we are using Karate extensively as our BDD tool.
How can multiple mock services be hosted in a single project (multiple feature files)?
How can I achieve different hostnames for different mock services?
Can it be used as a regular server running long-term?
Similar question: Using mocks in Karate DSL feature file with standalone run
How can multiple mock services be hosted using single project
Refer to the answer you linked; using Java code is the best way to start multiple mocks.
How can I achieve different hostname for different mock services?
Normally you change your service's config to point to where the mock is running, typically localhost + : + portNumber. Also refer to the docs on using Karate as an HTTP proxy, and search the web for how to modify your /etc/hosts entries if needed.
Can it be used as a regular server running for long?
Keep in mind that Karate is a "mock" :) but if you don't keep adding data to what is in-memory it should be fine. No guarantees though :P

Push code from VSTS repository to on-prem TFS?

This is my first post on here, so forgive me if I've missed an existing answer to this question.
Basically my company conducts off-site development for various clients in government. Internally, we use cloud VSTS, Octopus deploy and Selenium to ensure a continuous delivery pipeline in our internal Azure environments. We are looking to extend this pipeline into the on-prem environments of our clients to cut down on unnecessary deployment overheads. Unfortunately, due to security policies we are unable to use our VSTS/Octopus instances to push code directly into the client environment, so I'm looking for a way to get code from our VSTS environment into an on-prem instance of TFS hosted on their end.
What I'm after, really, is a system whereby the client logs into our VSTS environment, validates the code, then pushes some kind of button which will pull it to their local TFS, where a replica of our automated build and test process will manage the CI pipeline through their environments and into prod.
Is this at all possible? What are my options here?
There is no direct way to migrate source code with history from VSTS to an on-premises TFS. You would need a third-party tool, such as the Commercial Edition of OpsHub (note that it is not free).
It sounds like you need a new feature that is coming to Octopus Deploy; see https://octopus.com/blog/roadmap-2017 --> Octopus Release Promotions.
I quote:
Many customers work in environments where releases must flow between more than one Octopus server - the two most common scenarios being:
Agencies which use one Octopus for dev/test, but then need an Octopus server at each of their customer's sites to do production deployments
I suggest the following, though it involves a small custom script.
Add a build agent to your VSTS that is physically located on the customer's premises. This is easy: just register the agent with the online endpoint.
Create a build definition in VSTS that gets the code from VSTS, but instead of building it, commits it to the local TFS. You will need a small PowerShell script here; you can add it as a custom PowerShell step in the build definition.
The local TFS orchestrates the rest.
Custom code:
Say your agent is on d:/agent.
1. Keep the local TFS workspace mapped to some directory (say c:/tfs).
2. The script copies the new sources from d:/agent/work/ to c:/tfs.
3. It then checks the changes in from c:/tfs to the local TFS.
Note: you will need the /force option (and probably some more) to prevent conflicts.
I believe this is not as ugly as it sounds.
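For illustration, a sketch of that copy-and-check-in step is below. The answer suggests PowerShell; the same idea is shown in Python here, and the paths, the check-in comment and the assumption that c:/tfs is an already-mapped TFVC workspace with tf.exe on the PATH are all placeholders.

import shutil
import subprocess

AGENT_SOURCES = "d:/agent/work"   # where the VSTS agent puts the fetched sources
TFS_WORKSPACE = "c:/tfs"          # local directory mapped to the on-prem TFS

# 1. Copy the sources fetched from VSTS over the local TFS workspace.
#    (dirs_exist_ok requires Python 3.8+)
shutil.copytree(AGENT_SOURCES, TFS_WORKSPACE, dirs_exist_ok=True)

# 2. Pend adds for any files that are new to the workspace (ignore "nothing to add").
subprocess.run(["tf", "add", ".", "/recursive"], cwd=TFS_WORKSPACE, check=False)

# 3. Check the changes in to the on-prem TFS without prompting. As noted above,
#    a /force-style override may also be needed to avoid conflicts.
subprocess.run(["tf", "checkin", "/comment:Sync from VSTS", "/noprompt"],
               cwd=TFS_WORKSPACE, check=True)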

Fake EC2 endpoint for testing

Is there an open source package that implements a "fake Amazon EC2" endpoint out there? Specifically, one that can be used for testing against clients that talk to EC2 (in particular, using boto)?
I know there are several open source cloud solutions out there that implement the EC2 API (e.g., OpenStack, Eucalyptus, CloudStack), but I'm looking for something where I can quickly bring up a fake EC2 server and configure it with canned responses for testing purposes.
You might want to check out moto. It basically mocks boto itself, using HTTPretty to mock the HTTP layer. It's nicely done and seems really useful.
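A minimal sketch of what that looks like, assuming a recent moto and boto3 (the original question used the older boto library, which moto also supported); the decorator name and the standalone-server invocation vary between moto versions, so check the docs for yours.

import os
import boto3
from moto import mock_ec2

# Dummy credentials so the client never reaches out to real AWS.
os.environ.setdefault("AWS_ACCESS_KEY_ID", "testing")
os.environ.setdefault("AWS_SECRET_ACCESS_KEY", "testing")

@mock_ec2
def test_run_instances():
    # Every EC2 call inside this function hits moto's in-memory backend, not AWS.
    client = boto3.client("ec2", region_name="us-east-1")
    client.run_instances(ImageId="ami-12345678", MinCount=1, MaxCount=1)
    assert len(client.describe_instances()["Reservations"]) == 1

# moto also ships a standalone server ("pip install moto[server]", then
# "moto_server -p 5000") that exposes a fake endpoint any EC2 client can be
# pointed at, e.g. boto3.client("ec2", endpoint_url="http://localhost:5000", ...).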
Eucalyptus has run a community cloud for many years, which is freely available at http://www.eucalyptus.com/eucalyptus-cloud/community-cloud. It won't work if you want to mock out different EC2 API responses (and one thing to note is that the Eucalyptus API doesn't follow the EC2 API completely, particularly in how it sets different fields). Mocking out your calls to boto seems like the best bet if you really want to test with real EC2 responses.