Is it possible to share a datasource testcontainer between Quarkus apps in dev mode?

For Kafka, Redis, and other Testcontainers-backed services there is a quarkus.*.devservices.shared configuration option (e.g. https://quarkus.io/guides/dev-services#quarkus-kafka-client-config-group-kafka-dev-services-build-time-config_quarkus.kafka.devservices.shared) which reuses an already running container of that type instead of starting a new one.
Is there a way to achieve something similar with datasources/dbs?
Example:
I have two Quarkus apps and I want to share a MySQL database between them in dev mode. The tables are set up with Flyway.
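There is no shared option for datasource Dev Services that I am aware of, so one hedged workaround is to pin the first app's dev-service container to a fixed port and point the second app at it explicitly: configuring an explicit JDBC URL disables Dev Services for that app. A minimal sketch with illustrative values (verify the property names against your Quarkus version):

# App A: owns the container; pin the port and credentials so other apps can connect.
quarkus.datasource.db-kind=mysql
quarkus.datasource.devservices.port=3306
quarkus.datasource.devservices.db-name=shared
quarkus.datasource.devservices.username=dev
quarkus.datasource.devservices.password=dev

# App B: an explicit JDBC URL means Quarkus starts no dev service for this app.
quarkus.datasource.db-kind=mysql
quarkus.datasource.jdbc.url=jdbc:mysql://localhost:3306/shared
quarkus.datasource.username=dev
quarkus.datasource.password=dev

In this setup only one of the apps should own the Flyway migrations, or the migration sets must be written to coexist in the shared schema.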

Related

How to build a development and production environment in Apache NiFi

I have two Apache NiFi servers, one for development and one for production, hosted on AWS. Currently the migration between development and production is done manually. Is it possible to automate this process and ensure that people do not develop in production?
I thought about putting the entire NiFi configuration in GitHub and having it deploy to the production server, but I don't know if that would be the right approach.
One option is to use NiFi Registry: store the flows in the Registry and share the Registry between the development and production environments. You can then promote the latest version of a flow from dev to prod.
As you say, another option is to use Git to share the flow.xml.gz between environments together with a deploy script; the flow.xml.gz stores the data flow configuration/canvas. You can use parameterized flows (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Parameters) to point NiFi at different external dev/prod services, e.g. a dev database URL in development and the prod database URL in production; see the small example below.
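For instance, a DBCPConnectionPool controller service can reference a parameter in its Database Connection URL property (the parameter name here is hypothetical):

Database Connection URL: #{db.url}

You then create one parameter context per environment, with db.url set to the dev or prod JDBC URL, and the flow itself never changes between environments.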
One more option is to export all or part of the NiFi flow as a template and upload the template to your production NiFi; however, the Registry is probably a better way of handling this. More info on templates here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#templates.
I believe NiFi was not originally designed with separate environments in mind; the idea was to allow live changes in production. I guess you would build your initial data flow against some test data in production and, once it's ready, start the live data flow. But I think it's reasonable to want separate environments.

We are using Airflow in a dev environment. How do we migrate changes from one environment to another?

Currently the DAG code is in a GitHub repository and can easily be migrated to other environments. But what is the best way to migrate variables and connections?
We found that it can be done through the CLI, but that is not working for us. Is there any other way?
You could write a DAG that uses the API to read variables and connections from dev and creates or updates them in the other environments.
Airflow REST API Variables
Airflow REST API Connections
In order to use the API, you need to activate API authentication.
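As a minimal sketch of those two calls (assuming Airflow 2.x with the basic-auth API backend enabled; host names, credentials, and the variable key below are hypothetical), this is the raw request pattern a promotion DAG or script would follow:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CopyAirflowVariable {
    // Hypothetical hosts and credentials; adjust for your environments.
    static final String DEV = "http://airflow-dev:8080";
    static final String PROD = "http://airflow-prod:8080";
    static final String AUTH =
            "Basic " + Base64.getEncoder().encodeToString("admin:admin".getBytes());

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String key = "my_variable"; // the variable to promote

        // Read the variable from dev: GET /api/v1/variables/{key}
        HttpRequest get = HttpRequest.newBuilder(URI.create(DEV + "/api/v1/variables/" + key))
                .header("Authorization", AUTH).GET().build();
        String body = client.send(get, HttpResponse.BodyHandlers.ofString()).body();

        // The response is JSON like {"key": "...", "value": "..."}, which matches the
        // POST schema, so it is forwarded as-is here; a real script would parse it and
        // use PATCH /api/v1/variables/{key} when the variable already exists.
        HttpRequest post = HttpRequest.newBuilder(URI.create(PROD + "/api/v1/variables"))
                .header("Authorization", AUTH)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body)).build();
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}

Connections work the same way via the /api/v1/connections endpoints.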

How can I deploy a data grid application?

I am developing a web application based on Spring. I added Apache Ignite as a Maven dependency.
It is a very simple application with only two REST APIs.
One queries by key and returns an object; the other puts data.
But I have a problem: when I develop additional features, I don't know how to deploy the application.
The application should always be available, but if I deploy to a single node, that node may become unavailable.
Is there a good method for deploying a distributed in-memory application?
In your case you would typically start an Ignite server node embedded in your application. You can then start multiple instances of the application, and as long as the nodes discover each other, they will share the data. For more information about discovery configuration see here: https://apacheignite.readme.io/docs/cluster-config
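A minimal sketch of such an embedded server node with static-IP discovery (the host names and cache name are hypothetical; see the linked page for the full set of discovery options):

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class EmbeddedNode {
    public static void main(String[] args) {
        // List the hosts where application instances run so the nodes can find each other.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("app-host-1:47500..47509",
                                            "app-host-2:47500..47509"));

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

        // Starting Ignite inside the application process gives you an embedded server node.
        Ignite ignite = Ignition.start(cfg);

        // Data put into a cache on one instance is visible from the other instances,
        // so any node can serve both of your REST endpoints.
        IgniteCache<String, Object> cache = ignite.getOrCreateCache("data");
        cache.put("key", "value");
    }
}

With two or more such instances behind a load balancer, one node can be taken down for a redeploy while the others keep serving requests.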

How to handle configuration for acceptance and production environments in GlassFish

I want to create an application that is not aware of the environment it runs in.
The environment-specific configuration I want to leave up to GlassFish.
So, for example, I have a persistence.xml that points to a JTA data source:
<jta-data-source>jdbc/DB_PRODUCTSUPPLIER</jta-data-source>
In GlassFish this data source is configured to point to a connection pool, and the connection pool is configured to connect to a database.
I would like a mechanism that lets me define these resources for a production and an acceptance environment without having to change the JNDI name, because changing it would make my application environment-aware.
Do I need to create two domains for this? Or do I need two completely separate glassfish installations?
One way to do this is to use the clustering features (a GF 2.1 default install is often in developer mode, so you'll have to enable clustering; in GF 3.1 clustering seems to be on by default).
As part of clustering, you can create standalone instances that do not participate in a cluster. Each instance can have its own config. The instances share everything under the Resources section, but each instance can have separate values in its system properties; most importantly, these include separate port numbers.
So in a typical setup your acceptance/beta environment runs on its own instance with different ports (the defaults being 38080, 38181, etc., assuming you're running an HTTP app), in a separate JVM. With GF 2.1 you need to learn how to manage the node agent; with GF 3.1 you won't have to worry about that.
When you deploy an application you must choose the destination, called a Target, so you can have an acceptance/beta version on one instance and a production version on the other instance; a sketch of the matching resource setup follows below.
This is how I run beta deployments with our current GF 2.1 non-clustered setup, and it works pretty well.
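To make the resource side concrete, here is a hedged asadmin sketch (GF 3.1 syntax; the instance names, pool names, and data source properties are hypothetical and depend on your JDBC driver). The same JNDI name is bound on each instance, but to a different pool:

asadmin create-local-instance accept-instance
asadmin create-local-instance prod-instance

asadmin create-jdbc-connection-pool --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype javax.sql.DataSource --property user=app:password=secret:serverName=accept-db:portNumber=3306:databaseName=app accept-pool
asadmin create-jdbc-connection-pool --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype javax.sql.DataSource --property user=app:password=secret:serverName=prod-db:portNumber=3306:databaseName=app prod-pool

asadmin create-jdbc-resource --connectionpoolid accept-pool --target accept-instance jdbc/DB_PRODUCTSUPPLIER
asadmin create-jdbc-resource --connectionpoolid prod-pool --target prod-instance jdbc/DB_PRODUCTSUPPLIER

The application keeps looking up jdbc/DB_PRODUCTSUPPLIER and never learns which environment it runs in.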

What is the correct way to connect to a database from an Eclipse plugin?

I am evaluating the Rich Ajax Platform (RAP) and I need to connect to a DB2 database (and perhaps others).
Having done a fair amount of J2EE work, I usually fetch a DataSource object via JNDI and use that to connect to the database. The actual connection parameters are configured outside of the application and can be adapted for development, test, and production environments.
-- How should I go about this from within a plugin in RAP?
-- What is the best way to handle connections in different environments?
-- I also don't want to include the DB2 JDBC jars in the plugin as they may differ slightly between development and production.
Check out the Eclipse Data Tools Platform: http://www.eclipse.org/datatools
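If you end up managing the connection yourself rather than through the Data Tools Platform, the same externalization idea from J2EE still applies: keep the parameters outside the bundle and load them at runtime. A hypothetical sketch using plain JDBC with a properties file whose location is supplied per environment (the file path, system property, and keys are made up; in RAP/OSGi you could equally use Equinox preferences, and note that the driver jar must be visible to the class loader doing the lookup):

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ExternalDb {
    public static Connection open() throws Exception {
        // The file lives outside the plugin, so dev/test/prod differ only in this file.
        String path = System.getProperty("db.config", "/etc/myapp/db.properties");
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            p.load(in);
        }
        // The DB2 driver jar stays on the runtime classpath instead of inside the bundle,
        // so it can differ between development and production.
        return DriverManager.getConnection(
                p.getProperty("jdbc.url"),      // e.g. jdbc:db2://host:50000/MYDB
                p.getProperty("jdbc.user"),
                p.getProperty("jdbc.password"));
    }
}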