Deploy jars in GemFire cache - gemfire

I am studying GemFire and getting my hands dirty. I came across this concept of deploying jars into GemFire.
My question is: why, and what type of jars, would one deploy into the GemFire cache so that they become native to GemFire?
Can you list some scenarios that would clarify this concept of deploying jars into GemFire?

You can write Functions that are executed within the Geode server process. Functions are a quick way of iterating over the data in Geode in parallel, or of implementing a custom aggregate.
You can also implement CacheLoaders to load data into Geode, and CacheListeners/AsyncEventListeners to write data from Geode into another data source.
Your Functions, Listeners, and Writers can be bundled in a jar and then deployed on the Geode servers, as sketched below.
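As a rough illustration, here is what such a deployable Function might look like (a minimal sketch; the class name, function id, and the counting logic are illustrative, not from the original answer):

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;

// Counts the entries hosted locally on each server the function runs on.
public class CountLocalEntriesFunction implements Function<Object> {

    @Override
    public String getId() {
        return "CountLocalEntries"; // the id clients use to invoke the function
    }

    @Override
    public void execute(FunctionContext<Object> context) {
        RegionFunctionContext regionContext = (RegionFunctionContext) context;
        // For a partitioned region, only look at the data on this member,
        // so the work is spread across the cluster in parallel
        Region<?, ?> localData = PartitionRegionHelper.getLocalDataForContext(regionContext);
        context.getResultSender().lastResult(localData.size());
    }
}
```

Bundled into a jar, a Function like this can be deployed to running servers with gfsh (deploy --jar=/path/to/my-functions.jar) and then invoked from a client, without restarting the cluster.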

Related

Integration tests with Cucumber using embedded GemFire for a Spring Boot application deployed in an Apache Geode client/server topology

I intend to write integration tests with Cucumber for a GemFire cache client application using Spring Boot, deployed in an Apache Geode client/server topology. I referred to the question - How to start Spring Boot app without depending on Pivotal GemFire cache - which was answered in 2018, and also referred to the integration test documentation here - Integration Testing with STDG.
The link to an example concrete client/server integration test extending STDG's ForkingClientServerIntegrationTestsSupport class appears to be broken.
The purpose of my integration tests would be to:
- run an embedded Locator and a server during the integration-test phase
- define the regions for the servers using cluster.xml
- create, read, update, and delete cache entries and verify the different use cases
Any help regarding the ideal approach to writing integration tests (probably using an embedded GemFire Locator and server) would be very helpful.
Tried an embedded GemFire CacheServer instance for integration tests using the @CacheServerApplication annotation, but I am not sure how to create ClientCache objects that use the embedded GemFire, or whether this is the right way to write the integration tests.
Edit: Also came across this - Is it possible to start a Pivotal GemFire Server, Locator and Client in one JVM? - where it is mentioned: In short, NO, you cannot have a peer Cache instance (with embedded Locator) and a ClientCache instance in the same JVM (or Java application process).
DISCLAIMER: I do not have experience with Cucumber...
However, it is not difficult to spin up multiple GemFire or Geode server-side processes, such as one or more Locators and multiple CacheServers, in a single test class. The Locators can run as standalone JVM processes or be embedded as part of the servers.
In this typical test configuration arrangement, the GemFire/Geode server-side processes are forked yet coordinated, and the test class itself acts as the ClientCache instance.
You can see one such test configuration in the SBDG Multi-site Caching sample.
The key to this test configuration is the extension of STDG's ForkingClientServerIntegrationTestsSupport class, as well as the forking of the two clusters in the test class setup method.
The configuration for each cluster is handled by Spring config, and the coordination is handled using GemFire/Geode properties combined with Spring Profiles that control which configuration gets applied for each GemFire/Geode JVM process.
Of course, this example and test configuration are quite complex, given that the test also employs GemFire/Geode's WAN capabilities (hence the "multi-site" caching reference), but they demonstrate that Spring and SBDG/SDG/STDG support as complex or as simple a setup as your testing needs require.
You can start any number of GemFire/Geode processes (Locators, CacheServers, etc.). And, in nearly all cases, the test class (JVM) itself is the cache client (the ClientCache instance).
Here are a couple more examples from the Spring Data for Apache Geode (SDG) codebase and test suite: here and here.
I am certain I have another test class or example (somewhere) that started a single Locator, joined two CacheServer instances to it, and then ran the test (JVM process) as the ClientCache instance, but I cannot seem to find it at the moment.
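In the meantime, a minimal sketch of such a single-cluster test, assuming STDG's ForkingClientServerIntegrationTestsSupport API (the class and configuration names here are illustrative, and the exact method signatures may vary across STDG versions):

```java
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.tests.integration.ForkingClientServerIntegrationTestsSupport;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = ExampleClientServerIntegrationTests.TestClientConfiguration.class)
public class ExampleClientServerIntegrationTests extends ForkingClientServerIntegrationTestsSupport {

    @BeforeClass
    public static void startGeodeServer() throws Exception {
        // Forks a separate JVM running the server configuration below;
        // STDG coordinates the server port with the client-side test
        startGemFireServer(TestServerConfiguration.class);
    }

    @Test
    public void clientCanPutAndGet() {
        // interact with the server through an @Autowired ClientCache / Region here
    }

    // Client: the test JVM itself is the ClientCache instance
    @ClientCacheApplication
    static class TestClientConfiguration { }

    // Server: bootstrapped in the forked JVM
    @CacheServerApplication
    static class TestServerConfiguration {

        public static void main(String[] args) {
            new AnnotationConfigApplicationContext(TestServerConfiguration.class)
                .registerShutdownHook();
        }
    }
}
```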
In any case, I hope this gives you some ideas.

Apache Ignite Application

I am building a Java-based online application using Ignite. Ignite is used as an in-memory store/cache and also for data persistence in the application. My question is: should I deploy Ignite on a separate server, or should it reside on the same server as the application?
You can do it both ways; it's up to you. Ignite can run embedded in the application JVM as a server node, or as a standalone cluster that the application joins as a client.
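To make the two options concrete, here is a minimal sketch of both startup modes using the plain Ignite API (the instance names are arbitrary):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DeploymentModes {

    // Option 1: embedded - the application JVM itself is a server (data) node
    static Ignite startEmbedded() {
        return Ignition.start(new IgniteConfiguration().setIgniteInstanceName("embedded-server"));
    }

    // Option 2: separate cluster - connect as a client node that stores no data
    // and delegates to the remote server nodes
    static Ignite connectAsClient() {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIgniteInstanceName("app-client")
            .setClientMode(true);
        return Ignition.start(cfg);
    }
}
```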

Why Does Ignite Use Spring framework?

I have used the Spring Framework in my apps, and while it is nice conceptually, it is not suitable for real-time apps due to its run-time overhead. For instance, http://apache-ignite-users.70518.x6.nabble.com/Failed-to-map-keys-for-cache-all-partition-nodes-left-the-grid-td23510.html shows an actual run-time Spring stack.
The Spring features that Ignite uses for loading application-defined beans are just many layers wrapped around simple Java reflection. So why does Ignite use Spring instead of plain Java reflection?
To make Ignite more performant, is there a plan for Ignite to switch from the Spring Framework to Java reflection?
Similarly, if Ignite uses Spring Boot to handle port requests, why does it not use a lighter-weight framework such as www.sparkjava.com?
Ignite uses Spring only to convert XML configuration files into configuration beans during startup. This way Ignite provides a convenient, well-known way of configuring itself instead of introducing a custom one. At runtime, after the node is started, Spring is not used for anything.
In the thread you provided, it's actually the other way around - Spring invokes Ignite. Apparently, that's a Spring application with an embedded Ignite node.
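This split is visible in the startup API: Spring is involved only when you hand Ignition an XML file; a purely programmatic configuration bypasses it entirely (the XML path below is illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartupStyles {
    public static void main(String[] args) {
        // Spring is used here once, at startup, to parse the XML file
        // into an IgniteConfiguration bean
        Ignite fromXml = Ignition.start("config/example-ignite.xml");

        // Equivalent programmatic startup - no Spring involved on this path
        Ignite programmatic = Ignition.start(
            new IgniteConfiguration().setIgniteInstanceName("programmatic"));
    }
}
```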

How can I deploy a data grid application?

I am developing a web application based on Spring. I added Apache Ignite as a Maven dependency.
It is a very simple application, which exposes only 2 REST APIs.
One queries by key and returns an object; the other puts data.
But I have a problem: when I develop additional functionality, I don't know how I should deploy this application.
The application should always be available, but if I deploy it to a single node, that node may become unavailable.
Is there a good method for deploying a distributed in-memory application?
In your case you will typically start an Ignite server node embedded in your application. You can then start multiple instances of the application, and as long as the nodes discover each other, they will share the data. For more information about discovery configuration see here: https://apacheignite.readme.io/docs/cluster-config
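As a rough sketch, each application instance might start its embedded node with a static-IP discovery configuration like the following (the host names and port range are placeholders):

```java
import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class EmbeddedIgniteNode {

    public static Ignite start() {
        // Every application instance lists the same addresses, so the embedded
        // nodes find each other, form one cluster, and share the cached data
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
            "app-host-1:47500..47509",   // placeholder hosts
            "app-host-2:47500..47509"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);
        return Ignition.start(cfg);
    }
}
```

With caches configured as replicated, or partitioned with backups, one instance can then be taken down for a redeploy while the remaining nodes keep serving the data.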

Redis datasource for spring cloud dataflow server

As per the documentation, the Spring Cloud Data Flow Server uses an RDBMS for storing stream/task definitions, application registration, and job repositories. Instead of using an RDBMS, is there a way to use Redis for storing this information?
RDBMS is the default repository implementation for the Data Flow server core. You can still override these default repositories (except the task/job execution repositories) with Redis-based implementations from your custom Data Flow server configuration. So while you can have Redis-based repositories for stream/task definitions and application registration, you still need an RDBMS for the task/batch job execution repositories. That's the reason Spring Cloud Data Flow defaults to RDBMS-based repositories for everything.
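The shape of such a customization might look like the sketch below. Note that RedisStreamDefinitionRepository is a hypothetical implementation you would write yourself (not a class shipped with Spring Cloud Data Flow), and the exact repository interfaces and packages differ between Data Flow versions:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.dataflow.server.EnableDataFlowServer;
import org.springframework.cloud.dataflow.server.repository.StreamDefinitionRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@SpringBootApplication
@EnableDataFlowServer
public class CustomDataFlowServer {

    // Overrides the default RDBMS-backed bean with a Redis-backed one.
    // RedisStreamDefinitionRepository is hypothetical - you would supply
    // this implementation yourself.
    @Bean
    public StreamDefinitionRepository streamDefinitionRepository(
            RedisConnectionFactory connectionFactory) {
        return new RedisStreamDefinitionRepository(connectionFactory);
    }

    public static void main(String[] args) {
        SpringApplication.run(CustomDataFlowServer.class, args);
    }
}
```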