Pivotal GemFire 9.1.1: Replicated Region not created in cluster - gemfire

I have a GemFire cluster with 2 Locators and 2 Servers in two unix machines. I am running a Spring Boot app which joins the GemFire cluster as a peer and tries to create Replicated Regions, loading the Regions using Spring Data GemFire. Once the Spring Boot app terminates, I am not seeing the Region/data in cluster.
Am I missing something here?
The GemFire cluster is not using cache.xml or Spring XML to bootstrap the Regions. My idea is to create Regions through a standalone program and make them available in the cluster. The SDG version is 2.0.7.
gemfire-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:gfe="http://www.springframework.org/schema/gemfire"
xmlns:util="http://www.springframework.org/schema/util"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">
<util:properties id="gemfireProperties">
<prop key="locators">unix_machine1[10334],unix_machine2[10334]</prop>
<prop key="mcast-port">0</prop>
</util:properties>
<bean id="autoSerializer" class="org.apache.geode.pdx.ReflectionBasedAutoSerializer"/>
<gfe:cache properties-ref="gemfireProperties" pdx-serializer-ref="autoSerializer" pdx-read-serialized="true"/>
<gfe:replicated-region id="Test" ignore-if-exists="true"/>
<gfe:replicated-region id="xyz" ignore-if-exists="true"/>
</beans>
Expectation is when the Spring Boot app terminates, Region should be created in the cluster.

The simplest approach here would be to use the Cluster Configuration Service and create the Regions via gfsh. See the following link for more information:
https://docs.spring.io/spring-gemfire/docs/current/reference/html/#bootstrap:cache:advanced
In particular, see the section "Using Cluster-based Configuration".
For more information on cluster configuration, see:
http://gemfire.docs.pivotal.io/97/geode/configuring/cluster_config/gfsh_persist.html
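For example, with the Cluster Configuration Service enabled on the Locators (the default), creating the Regions from gfsh records them in the cluster configuration, so new and restarted members pick them up automatically. A minimal sketch, using the Region names and Locator from the question:

```
gfsh> connect --locator=unix_machine1[10334]
gfsh> create region --name=Test --type=REPLICATE
gfsh> create region --name=xyz --type=REPLICATE
gfsh> list regions
```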
Your client code would then probably be a simple GemFire client connecting to the GemFire cluster.

Your expectations are not correct. This is not a limitation of Spring per se, but rather a side effect of how GemFire works.
If you were to configure a GemFire peer cache instance/member of the cluster using the GemFire API or pure cache.xml, then the cluster would not "remember" the configuration either.
When using the GemFire API, cache.xml or Spring config (either Spring XML or JavaConfig), the configuration is local to the member. Before GemFire's Cluster Configuration Service, administrators would need to distribute the configuration (e.g. cache.xml) across all the peer members that would form a cluster.
Then along came the Cluster Configuration Service, which enabled users to define their configuration using Gfsh. In Spring config, when configuring/bootstrapping a peer cache member of the cluster, you can enable the use of cluster configuration to configure the member, for example:
<gfe:cache use-cluster-configuration="true" ... />
As was pointed out here (bullet 4).
However, using Gfsh to configure each and every GemFire object (Regions, Indexes, DiskStores, etc.) can be quite tedious, especially if you have a lot of Regions. Plus, not all developers want to use a shell tool. Some development teams want to version the config along with the app, which makes good sense.
Given you are using Spring Boot, you should have a look at Spring Boot for Pivotal GemFire, here.
Another way to start your cluster is to configure and bootstrap the members using Spring, rather than Gfsh. I have an example of this here. You can, of course, run the Spring Boot app from the command-line using a Spring Boot FAT JAR.
Of course, some administrators/operators prevent development teams from bootstrapping the GemFire cluster in this manner and instead expect the teams to use the tools (e.g. Gfsh) provided by the database.
If this is your case, then it might be better to develop Spring Boot, GemFire ClientCache applications connecting to a standalone cluster that was started using Gfsh.
You can still do very minimal config, such as:
start locator --name=LocatorOne
start server --name=ServerOne
start server --name=ServerTwo
...
And then let your Spring Boot, client application drive the configuration (i.e. Regions, Indexes, etc) of the cluster using SDG's cluster configuration push feature.
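SDG's cluster configuration push is enabled with the @EnableClusterConfiguration annotation. A minimal sketch follows; the class name, Locator host, and Region are illustrative, and it assumes a cluster started with gfsh whose Locator/Manager is reachable over HTTP, so it requires a live cluster to actually run:

```java
import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableClusterConfiguration;

@SpringBootApplication
@ClientCacheApplication(locators = @ClientCacheApplication.Locator(host = "unix_machine1", port = 10334))
@EnableClusterConfiguration(useHttp = true) // push Region/Index definitions to the servers over the Management REST API
public class ClusterConfigPushApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClusterConfigPushApplication.class, args);
    }

    // Client-side PROXY Region; @EnableClusterConfiguration creates the
    // matching server-side Region in the cluster if it does not already exist.
    @Bean("Test")
    public ClientRegionFactoryBean<String, Object> testRegion(GemFireCache cache) {
        ClientRegionFactoryBean<String, Object> region = new ClientRegionFactoryBean<>();
        region.setCache(cache);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
```

Because the Region definition is pushed through the Cluster Configuration Service, it survives after the client application terminates, which is the behavior the question was expecting.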
There are many different options, so the choice is yours. You need to decide which is best for your needs.
Hope this helps.

Related

Integration tests with Cucumber using embedded GemFire for a Spring Boot application deployed in an Apache Geode client/server topology

I intend to write integration tests with Cucumber for a GemFire cache client application using Spring Boot and deployed in an Apache Geode client/server topology. I referred to the question - How to start Spring Boot app without depending on Pivotal GemFire cache which was answered in 2018 and also referred to the integration test documentation here - Integration Testing with STDG.
The link to an example concrete client/server Integration Test extending STDG's ForkingClientServerIntegrationTestsSupport class appears to be broken.
The purpose of my integration tests would be to:
run an embedded locator and a server during the integration test phase
define the regions for the servers using cluster.xml
create, read, update and delete cache entries and verify the different use cases
Any help regarding the ideal approach to write integration tests (probably using an embedded GemFire locator and server) will be very helpful.
I tried an embedded GemFire CacheServer instance for integration tests using the @CacheServerApplication annotation, but I am not sure how to create ClientCache objects that use the embedded GemFire instance, or whether this is the right way to write the integration tests.
Edit: Also came across this - Is it possible to start a Pivotal GemFire Server, Locator and Client in one JVM? - where it is mentioned: In short, NO, you cannot have a peer Cache instance (with embedded Locator) and a ClientCache instance in the same JVM (or Java application process).
DISCLAIMER: I do not have experience with Apache Cucumber...
However, it is not difficult to spin up multiple GemFire or Geode server-side processes, such as 1 or more Locators and multiple CacheServers, in a single test class. The Locators can be standalone JVM processes or embedded as part of the servers.
In this typical test configuration arrangement the GemFire or Geode server-side processes are forked, yet coordinated, and the test class itself acts as the ClientCache instance.
You can see 1 such test configuration in the SBDG Multi-site Caching sample, here.
The key to this test configuration is the extension of the ForkingClientServerIntegrationTestsSupport class from STDG, as well as the forking of the 2 clusters, specifically in the test class setup method.
The configuration for each cluster is handled by Spring config, and the coordination is all handled using GemFire/Geode properties combined with some Spring Profiles to control which configuration gets applied in each GemFire/Geode JVM process.
Of course, this example and test configuration are quite complex, given that the test also employs GemFire/Geode's WAN capabilities (hence the "multi-site" caching reference), but it serves to demonstrate that Spring and SBDG/SDG/STDG support as complex or as simple a setup as your testing needs require.
You can start any number of GemFire/Geode processes (Locators, CacheServers, etc). And, in nearly all cases, the test class (JVM) itself is the cache client (ClientCache instance).
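As a rough sketch of that pattern, a test class might look like the following; the class, Region, and method names on the test side are hypothetical, while the base class and its startGemFireServer(..) helper come from STDG (as I recall its API), and running it requires the STDG, SDG, and JUnit dependencies on the classpath:

```java
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.tests.integration.ForkingClientServerIntegrationTestsSupport;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = ClientServerIntegrationTests.GemFireClientConfiguration.class)
public class ClientServerIntegrationTests extends ForkingClientServerIntegrationTestsSupport {

    @BeforeClass
    public static void setupGemFireServer() throws Exception {
        // Forks a separate JVM that runs the server configuration class below
        startGemFireServer(GemFireServerConfiguration.class);
    }

    @Test
    public void clientCanPutAndGet() {
        // The test JVM itself acts as the ClientCache;
        // exercise CRUD against the forked server here.
    }

    @ClientCacheApplication
    static class GemFireClientConfiguration { }

    @CacheServerApplication(name = "TestGemFireServer")
    static class GemFireServerConfiguration {

        public static void main(String[] args) {
            AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(GemFireServerConfiguration.class);
            context.registerShutdownHook();
        }
    }
}
```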
Here are a couple more examples from the Spring Data for Apache Geode (SDG) codebase and test suite: here and here.
I am certain I have another test class or example (somewhere) that forked a single Locator, joined 2 CacheServer instances to it, and then proceeded with the test (JVM process) as the ClientCache instance, but I cannot seem to find it at the moment.
In any case, I hope this gives you some ideas.

How to deploy 2 services in an Apache Ignite cluster

I have a spring boot service that configures ignite at startup and executes Ignition.start(). There are 2 more services also on spring boot that need to be placed in one Ignite cluster. How can I do this?
You are running Apache Ignite in embedded mode via the Maven dependency. To share the same Ignite instance across services, you need to create an Ignite cluster in distributed mode and then connect to that cluster from all the services using a thin or thick client, as per your need.
For example, to create an Ignite cluster using Docker, refer to: https://ignite.apache.org/docs/latest/installation/installing-using-docker
There are other options available to create Ignite cluster.
Once the cluster is created, you can use a thick or thin client to connect to it.
Please refer to:
https://www.gridgain.com/docs/latest/getting-started/concepts
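Once the cluster is running, connecting from each Spring Boot service with the Ignite thin client takes only a few lines. A sketch follows; the addresses and cache name are placeholders, and it needs the ignite-core dependency plus a reachable cluster to run:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class IgniteThinClientExample {

    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("ignite-host-1:10800", "ignite-host-2:10800"); // placeholder cluster addresses

        // try-with-resources closes the connection when done
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("sharedCache");
            cache.put(1, "value-1");
            // The same entry is now visible to the other services
            // connected to this cluster.
            System.out.println(cache.get(1));
        }
    }
}
```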

Connecting to pivotal cloud cache from a spring boot gemfire client app on non PCF (VSI) Platform

I have a Pivotal Cloud Cache service with an HTTPS URL; I can connect to the service via gfsh.
I have a Spring Boot app annotated with @ClientCacheApplication which is running on a VSI (a separate VSI server), in a non-PCF/non-cloud environment.
Is there a way to connect to the HTTPS PCC service from the Spring Boot client app?
First, you should be using Spring Boot for Apache Geode [alternatively, VMware Tanzu GemFire] (SBDG); see project page and documentation for more details.
By using SBDG, you eliminate the need to explicitly annotate your Spring Boot (and Apache Geode or GemFire ClientCache) application with SDG's @ClientCacheApplication annotation. See here for more details.
NOTE: If you are unfamiliar with SBDG, then you can follow along in the Getting Started Sample. Keep in mind that SBDG is just an extension of Spring Boot dedicated to Apache Geode (and GemFire).
I also have documentation on connecting your Spring Boot app to a Pivotal Cloud Cache (now known as VMware Tanzu GemFire for VMs) instance.
One particular piece of documentation that is not present in SBDG's docs concerns running your Spring Boot application off-platform (that is, when your Spring Boot app has not been deployed to Pivotal CloudFoundry, or rather, VMware Tanzu Application Service) while "connecting" to the Pivotal Cloud Cache (VMware Tanzu GemFire for VMs) service on-platform (that is, GemFire running in PCF as PCC, or in VMW TAS as VMware Tanzu GemFire for VMs).
To do this, you need to use the new SNI Services Gateway provided by GemFire itself. This interface allows GemFire/Geode clients (whether Spring Boot applications or otherwise) to run off-platform, yet still connect to the GemFire service (PCC or VMW Tanzu GemFire for VMs) on-platform (e.g. PCF or VMW TAS).
This is also required if you are deploying your Spring Boot application in its own foundation on-platform, separately from the services foundation where the GemFire service is running. For example, if you deploy and run your Spring Boot app in the APP_A_FOUNDATION and the GemFire service is running in the SERV_2_FOUNDATION, both on-platform, then you would also need to use the GemFire SNI Services Gateway feature.
This can be configured using Spring Boot easily enough.
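For reference, on the plain GemFire/Geode client API the SNI connection looks something like the following; the Locator and SNI proxy host/port values are placeholders, SNI also requires a TLS-enabled client, and a live on-platform cluster is needed to actually connect:

```java
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.proxy.ProxySocketFactories;

public class SniClientExample {

    public static void main(String[] args) {
        // The Locator host/port is the address advertised inside the platform;
        // the SNI proxy (placeholder values here) forwards the connection
        // from the off-platform client into the platform network.
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("locator-on-platform.example.com", 10334)
            .setPoolSocketFactory(ProxySocketFactories.sni("sni-gateway.example.com", 15443))
            .set("ssl-enabled-components", "all") // SNI requires SSL/TLS
            .create();

        cache.close();
    }
}
```

With Spring Boot, the equivalent pool and SSL settings would be supplied through configuration properties rather than this programmatic bootstrap.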
I have posted an internal query, reaching out to the people who have more information on this subject, and I am currently waiting to hear back from them.
Supposedly (so I was told) there is an acceptance test (SNIAcceptanceTest) that would demonstrate how this feature works and how to use it, but I cannot find any references to it in the Apache Geode codebase (develop branch).
I will get back to you (in the comments below) if I hear back from anyone.

Apache Ignite with Spring framework

Does the Apache Ignite operate on a spring framework basis?
Can I register a Spring controller on the classpath at a remote server node and use it (using a component annotation, like @Controller)?
Apache Ignite is integrated with Spring but isn't based on it.
You can register Spring beans when starting a remote node (using the normal Spring approach) and then use them from, e.g., compute tasks or distributed services.
I'm not sure whether you can register beans remotely at runtime, but I don't see why not.
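For example, a bean (or any object, including one wired by Spring) can be deployed as an Ignite service from the node that starts up and then invoked from other nodes via a proxy. A sketch follows; the service name and interface are hypothetical, and running it requires the ignite-core dependency:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class ServiceDeploymentExample {

    /** A hypothetical service interface callable from any node. */
    public interface EchoService {
        String echo(String input);
    }

    /** Implementation; could be created as a Spring bean before deployment. */
    public static class EchoServiceImpl implements Service, EchoService {
        @Override public void init(ServiceContext ctx) { }
        @Override public void execute(ServiceContext ctx) { }
        @Override public void cancel(ServiceContext ctx) { }
        @Override public String echo(String input) { return "echo: " + input; }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Deploy exactly one instance cluster-wide; other nodes
        // obtain a remote proxy to it by name.
        ignite.services().deployClusterSingleton("echoService", new EchoServiceImpl());

        EchoService proxy = ignite.services().serviceProxy("echoService", EchoService.class, false);
        System.out.println(proxy.echo("hello"));
    }
}
```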

spring cloud bus rabbitmq

We're using spring cloud config server. Spring config clients get updates using spring control bus (RabbitMQ).
Looks like every config client instance creates a queue connected to the 'spring.cloud.bus' exchange.
Any scalability limits on how many app instances can connect to a 'spring.cloud.bus' exchange ?
I suppose RabbitMQ could be scaled to handle this.
Looking for any guidelines on this.
Many thanx,
The Spring Cloud Config Server can have multiple instances since it is stateless. That, coupled with a RabbitMQ cluster, should scale to a very large number of application instances.
A viable solution would be Spring Cloud Config behind a load balancer with a RabbitMQ cluster.
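A minimal client-side setup for that arrangement might look like the following application.properties fragment; the host names and credentials are placeholders (note the refresh endpoint is `busrefresh` in recent Spring Cloud releases, `bus-refresh` in older ones):

```properties
# Config server location (behind the load balancer)
spring.cloud.config.uri=http://config-lb.example.com:8888

# RabbitMQ cluster connection for the Spring Cloud Bus
spring.rabbitmq.host=rabbit-lb.example.com
spring.rabbitmq.port=5672
spring.rabbitmq.username=bus-user
spring.rabbitmq.password=change-me

# Enable refresh events over the bus
spring.cloud.bus.enabled=true
management.endpoints.web.exposure.include=busrefresh
```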