Clone an existing MobileFirst Platform Foundation (MFP) V8 environment - ibm-mobilefirst

How do I create a clone of an MFP V8 environment? The clone will have the same topology, with the foundation server and the Oracle database server on a new host. We have three Oracle databases, for MFP Core, Admin, and App Center. Then we have our main file-based Analytics data store. We also plan to apply the latest iFix on the foundation server and the underlying Liberty server.

High-level steps for moving the runtime would be:
1. Install Oracle on the new host.
2. Move data from the current DB to the new DB (one option is sketched after the note below).
3. Install Liberty on the new host.
4. Install MFP on the new host.
5. Run the Server Configuration Tool or Ant tasks to configure the runtime environment (pointing to the new Liberty and the new Oracle instance).
6. Restart the Liberty server.
Please note the important update in the MFP 8.0 iFix release notes here:
https://mobilefirstplatform.ibmcloud.com/blog/2017/03/09/8-0-ifix-release/
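For the "move data" step, one common approach is Oracle Data Pump. Below is a hedged sketch driving expdp/impdp from Python; the schema names, credentials, connect strings, and the DUMP_DIR directory object are placeholders for illustration, not values taken from the MFP documentation:

```python
# Hedged sketch: export each MFP schema on the old host with Oracle Data
# Pump, then import on the new host. All names here are placeholders.
import subprocess

SCHEMAS = ["MFPDATA", "MFPADMIN", "APPCENTER"]  # assumed schema names

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# On the old host: export each MFP schema to a dump file.
for schema in SCHEMAS:
    run(["expdp", "system/password@olddb",
         "schemas=%s" % schema,
         "directory=DUMP_DIR",
         "dumpfile=%s.dmp" % schema.lower()])

# Copy the .dmp files into the new host's DUMP_DIR, then import:
for schema in SCHEMAS:
    run(["impdp", "system/password@newdb",
         "schemas=%s" % schema,
         "directory=DUMP_DIR",
         "dumpfile=%s.dmp" % schema.lower()])
```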
Regarding analytics data movement, please consider the following points:
- In an Analytics cluster, the individual nodes store data under a folder configured via a JNDI property called analytics/datapath. By default on every node this is the ./analyticsData directory (within the MFP Analytics location).
- If you are moving data from one cluster to a new Analytics cluster, make sure the new cluster has the same number of nodes.
- Copying the analytics data should follow the pattern below (make sure the server nodes are stopped while copying, to avoid any lock files being in place):
  Node-1 of Old Analytics Server ---> Node-1 of New Analytics Server
  Node-2 of Old Analytics Server ---> Node-2 of New Analytics Server
  ... and so on ...
- Make sure the ./analyticsData directory on the new analytics cluster nodes is empty and the nodes are stopped.
- Make sure the analyticsData copied to the new machines keeps the same directory structure as in the previous cluster nodes (by default the analyticsData directory contains a directory called worklight). Keep the structure the same.
- Make sure the analytics JNDI properties are set the same as on the old machines (node IPs can change to match the new machines, that's fine), especially in the analytics server.xml.
- Start the new analytics nodes and verify the data (make sure the date filter on the Analytics console is set correctly to view the data).
Important note: keep a backup of all the node-wise analyticsData somewhere safe. Using tar.gz archives is better for copying the data from node to node (see the sketch below).
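For illustration, a minimal sketch of the per-node pack/unpack step, assuming the default directory layout; the paths are placeholders and the actual transfer of the archive between hosts (scp, etc.) is left out:

```python
# Hedged sketch: on each old node, pack ./analyticsData into a tar.gz; on
# the matching new node, verify the target is empty and unpack.
import os
import tarfile

OLD_DATA = "/opt/MFP/analytics/analyticsData"   # assumed path on an old node
NEW_DATA = "/opt/MFP/analytics/analyticsData"   # assumed path on a new node
ARCHIVE = "/tmp/analyticsData-node1.tar.gz"

def pack(src, archive):
    # arcname keeps the top-level analyticsData directory in the archive,
    # so the original structure (including the worklight subdir) survives.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname="analyticsData")

def unpack(archive, dest_parent):
    dest = os.path.join(dest_parent, "analyticsData")
    # The new node's analyticsData must be empty (and the node stopped).
    if os.path.isdir(dest) and os.listdir(dest):
        raise SystemExit("refusing to unpack: %s is not empty" % dest)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest_parent)
    # Sanity check: the default structure contains a worklight directory.
    assert os.path.isdir(os.path.join(dest, "worklight"))

# On an old node:  pack(OLD_DATA, ARCHIVE)
# On the new node: unpack(ARCHIVE, os.path.dirname(NEW_DATA))
```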

Related

How to manually deploy Mule application package on the on-premises cluster?

I'm looking for advice on how to manually (i.e. without using Runtime Manager, RM) deploy a Mule application package on an on-premises Mule cluster. The official documentation suggests using RM for this purpose, via the GUI, CLI, or API. However, RM is not available in our environment.
I can manually deploy the package on a single node by copying it to the /apps folder. But this way the application is only deployed on that single node, not on the cluster.
I've tried using the AMC agent REST API for this purpose, with the same result - it only deploys on a single node.
So, what's the correct way of manually deploying a Mule application on a Mule server cluster without using Anypoint RM?
We are on Mule 4.4 EE.
Copy the application jar file into the apps directory of every node. Mule clusters do not transfer applications between nodes.
Alternatively you can use the Runtime Manager Agent; however, it also works on a per-node basis, so you need to send the same request to each node to deploy (a sketch follows).
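Since the agent deploys per node, a script can simply repeat the same call against every node. A hedged Python sketch follows; it assumes each node runs the Runtime Manager Agent with its REST transport listening on port 9999, and that a PUT to /mule/applications/<name> with a JSON body pointing at the artifact URL triggers the deployment. The hostnames, port, endpoint, and payload shape are assumptions to verify against your agent version's API documentation:

```python
# Hedged sketch: deploy the same application to every cluster node through
# each node's Runtime Manager Agent REST transport.
import requests

NODES = ["node1.example.com", "node2.example.com"]  # placeholder hostnames
APP_NAME = "my-app"
ARTIFACT_URL = "http://repo.example.com/my-app-1.0.0-mule-application.jar"

for node in NODES:
    # Assumed endpoint and payload; check the agent API docs for your version.
    resp = requests.put(
        "http://%s:9999/mule/applications/%s" % (node, APP_NAME),
        json={"url": ARTIFACT_URL},
        timeout=30,
    )
    resp.raise_for_status()
    print(node, resp.status_code)
```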
Each connector may or may not be cluster aware. Read each connector's documentation to understand how it behaves. In particular, the documentation of the VM connector states:
When running in cluster mode, persistent queues are instead backed by the memory grid. This means that when a Mule flow uses VM Connector to publish content to a queue, Mule runtime engine (Mule) decides whether to process that message in the same origin node or to send it out to the cluster to be picked up and processed by another node.
You can also register the nodes through the AMC agent on the CloudHub control plane, create a server group, and deploy through Runtime Manager on the control plane; it takes care of deploying the same app to all n nodes.

How to build a development and production environment in Apache NiFi

I have two Apache NiFi servers, development and production, hosted on AWS. Currently the migration between development and production is done manually. I would like to know if it is possible to automate this process and to ensure that people do not develop in production.
I thought about uploading the entire NiFi configuration to GitHub and having it deploy the new flow to the production server, but I don't know if that would be the correct approach.
One option is to use NiFi Registry: store the flows in the Registry and share it between the development and production environments. You can then promote the latest version of a flow from dev to prod.
As you say, another option is to use Git to share the flow.xml.gz between environments, with a deploy script; the flow.xml.gz stores the data flow configuration/canvas. You can use parameterized flows (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Parameters) to point NiFi at different external dev/prod services (e.g. a dev NiFi processor uses the dev database URL, while prod points to the prod database URL). A minimal deploy-script sketch follows.
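For illustration, a sketch of such a deploy script for a single-node NiFi with a typical install layout; all paths are placeholders, and the Git checkout/review steps are left out:

```python
# Hedged sketch: promote a Git-tracked flow.xml.gz into the production NiFi
# conf directory, keeping a timestamped backup, then restart NiFi.
import shutil
import subprocess
import time

REPO_FLOW = "/opt/deploy/nifi-flows/flow.xml.gz"  # file tracked in Git
NIFI_CONF = "/opt/nifi/conf/flow.xml.gz"
NIFI_SH = "/opt/nifi/bin/nifi.sh"

# Back up the current production flow before overwriting it.
backup = NIFI_CONF + "." + time.strftime("%Y%m%d%H%M%S")
shutil.copy2(NIFI_CONF, backup)

subprocess.run([NIFI_SH, "stop"], check=True)
shutil.copy2(REPO_FLOW, NIFI_CONF)
subprocess.run([NIFI_SH, "start"], check=True)
print("deployed %s (backup at %s)" % (REPO_FLOW, backup))
```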
One more option is to export all or part of the NiFi flow as a template and upload the template to your production NiFi; however, the Registry is probably a better way of handling this. More info on templates here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#templates.
I believe the original design intent behind NiFi was not necessarily to have different environments, but rather to allow live changes in production. I guess you would build your initial data flow using some test data in production and, once it's ready, start the live data flow. But I think it's reasonable to want separate environments.

SQL Database Blue / Green deployments

I am dealing with the infrastructure for a new project. It is a standard Laravel stack: PHP, a SQL database server, and Nginx. For the PHP + Nginx part we are using a Kubernetes cluster, so scaling and blue/green deployments are taken care of.
When it comes to the database, I am a bit unsure. We don't want to run SQL on Kubernetes, so the current idea is to go with the Google Cloud SQL managed service (are the competitors better for blue/green deployment of SQL?). The question is whether it can sync the data between old and new versions of the database nodes.
Let's say that we have 3 active Pods and at least 2 active database nodes (and a load balancer).
So the standard deployment should look like this:
1. A Pod with the new code is created.
2. A new database node is created with the current data.
3. The new Pod gets new environment variables to connect to the new database.
4. Database migrations are run on the new database node.
5. A health check for the new Pod is run; if it passes, the Pod starts to receive traffic.
6. One of the old Pods is taken offline.
It should keep doing this iteration until all of the Pods and database nodes are replaced.
The question is: can this work with the database? Imagine there is a user on the website who is still using the last OLD database node to write some data; when they are switched to a NEW database node, that data is simply not there until the last database node is upgraded. Can the nodes be synced behind the scenes? Does the Google Cloud SQL managed service provide that?
Or is there a completely different and better solution to this problem?
Thank you!
I'm not 100% sure if this is what you are looking for, but to my understanding Cloud SQL replicas would be a better solution. You can have read replicas [1], which are copies of the master instance, with different options [2]:
A read replica is a copy of the master that reflects changes to the master instance in almost real time. You create a replica to offload read requests or analytics traffic from the master. You can create multiple read replicas for a single master instance.
or a failover replica [3], so that if the master goes down, the data continues to be available:
If an instance configured for high availability experiences an outage or becomes unresponsive, Cloud SQL automatically fails over to the failover replica, and your data continues to be available to clients. This is called a failover.
You can combine those if you need to. A hedged sketch of creating read replicas follows.
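For illustration, a sketch of creating Cloud SQL read replicas with the gcloud CLI (driven from Python here); the project and instance names are placeholders, and gcloud must be installed and authenticated:

```python
# Hedged sketch: create one or more Cloud SQL read replicas of an existing
# primary instance using the gcloud CLI.
import subprocess

PROJECT = "my-project"          # placeholder project ID
MASTER = "laravel-sql-master"   # existing primary instance (placeholder)
REPLICAS = ["laravel-sql-replica-1", "laravel-sql-replica-2"]

for replica in REPLICAS:
    subprocess.run([
        "gcloud", "sql", "instances", "create", replica,
        "--master-instance-name", MASTER,
        "--project", PROJECT,
    ], check=True)
```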

Runtime synchronization failed in MobileFirst 7.1 on Bluemix container with Cloudant NoSQL DB

I followed the tutorial instructions:
Install MobileFirst Platform Server 7.1 on Bluemix (https://mobilefirstplatform.ibmcloud.com/labs/administrators/7.1/bluemix/)
I used Cloudant NoSQL DB as the database.
It worked well for several days.
But after a weekend without use, it stopped working and I got this message on the MobileFirst Operations Console: Runtime synchronization failed.
I tried restarting the container and the application server (Liberty), but I always get the same message.
I have to remove the container and repeat the whole procedure.
This is the third time it has happened.
Try setting the JNDI property ibm.worklight.admin.farm.reinitialize to true in server.xml (see the sketch after the reference below). This will re-initialize the farm entries; in other words, it will clear the stale entries left when the application crashes.
Reference: List of JNDI Properties for MFP Administration
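For illustration, a minimal sketch that adds the entry to a Liberty server.xml programmatically; the server.xml path is a placeholder, while the <jndiEntry> element itself is standard Liberty configuration:

```python
# Hedged sketch: append the reinitialize JNDI entry to a Liberty server.xml.
import xml.etree.ElementTree as ET

SERVER_XML = "/opt/ibm/wlp/usr/servers/worklight/server.xml"  # assumed path

tree = ET.parse(SERVER_XML)
server = tree.getroot()  # the <server> element

# Equivalent to adding:
# <jndiEntry jndiName="ibm.worklight.admin.farm.reinitialize" value="true"/>
entry = ET.SubElement(server, "jndiEntry")
entry.set("jndiName", "ibm.worklight.admin.farm.reinitialize")
entry.set("value", "true")

tree.write(SERVER_XML, encoding="utf-8", xml_declaration=True)
print("added ibm.worklight.admin.farm.reinitialize=true to " + SERVER_XML)
```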
It seems you are using the Cloudant shared plan. Responses from the shared plan are not guaranteed the way they are on the dedicated plan. To account for these vagaries, a fix was released for 7.1 that adds the resiliency needed to handle non-responses from the Cloudant shared plan. Please apply the latest iFix and this should be solved.

IBM Worklight 6.2. Analytics JNDI properties in WAS ND

About Worklight 6.2 Analytics.
https://www-01.ibm.com/support/knowledgecenter/api/content/SSZH4A_6.2.0/com.ibm.worklight.monitor.doc/monitor/t_setting_up_production_cluster.html
There are several JNDI properties to configure, but it is not explained how to configure them in WAS ND, nor in which scope they must be configured (if that makes sense).
For example, the worklight.properties are configured as application properties during the application installation.
How are the analytics JNDI properties configured on WAS?
And also, in which scope should they be configured? This is also puzzling me. For example, the documentation says that properties like "analytics/shards" or "analytics/replicas_per_shard" must be configured on the first node, but to me these look like properties that should be configured at cluster level, not at node level.
Also, a WAS ND topology is completely dynamic and flexible; what happens if I remove that "first" node?
OK, now I understand that when the Worklight Analytics documentation talks about a cluster, it is not talking about a WAS cluster but about an Elasticsearch cluster.
Taking this into account, configuring a cluster for Analytics does not mean installing analytics.war in a WAS cluster; it means that you install the analytics.war file on a number of WAS servers (not WAS clusters, not WAS nodes), and with the Elasticsearch properties you configure the Elasticsearch cluster.
Is this correct?
The specific answer to my question is that the values of the properties are set during the detailed installation of the analytics.war file, as is done with the application project WAR file, worklightadmin.war, or worklightconsole.war (a wsadmin sketch follows).
You only need to set those properties if you are configuring Analytics on more than one server.
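For illustration, a heavily hedged wsadmin (Jython) sketch of overriding such env entries on an installed analytics.war. The application name, WAR path, availability of the MapEnvEntriesForWebMod task, and especially the row layout are assumptions to verify with AdminApp.taskInfo on your WAS version:

```python
# Run inside wsadmin: wsadmin.sh -lang jython -f set_analytics_env.py
# NOTE: a sketch, not a verified procedure. Inspect the exact column layout
# of the task for your WAS release first; the row format used below
# (module, uri, env-entry name, type, description, value) is an assumption.
print(AdminApp.taskInfo('/tmp/analytics.war', 'MapEnvEntriesForWebMod'))

# Override the Elasticsearch-related entries on the deployed application
# (assumed to be installed under the name 'analytics'):
AdminApp.edit('analytics', ['-MapEnvEntriesForWebMod', [
    ['analytics.war', 'analytics.war,WEB-INF/web.xml',
     'analytics/shards', 'java.lang.String', '', '2'],
    ['analytics.war', 'analytics.war,WEB-INF/web.xml',
     'analytics/replicas_per_shard', 'java.lang.String', '', '1'],
]])
AdminConfig.save()
```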