Using JavaConfig to create Regions in GemFire

Is it possible to use JavaConfig (i.e. annotations) in Spring instead of XML to create client Regions in Spring Data GemFire?
I also need to plug a CacheLoader and a CacheWriter into the Regions that are created. How is that possible to do?
I want to perform the client Pool configuration as well. How is that possible?

There is a good example of this in the spring.io guides. However, the GemFire APIs are factories, which Spring Data GemFire wraps in Spring FactoryBeans, so I actually find XML more straightforward for configuring the Cache and Regions.

Regarding... "how can I create a client region in a distributed environment?"
In the same way the Spring guides demonstrate Regions defined in a peer cache on a GemFire server, you would do something similar to...
@Bean
public ClientRegionFactoryBean<Long, Customer> clientRegion(ClientCache clientCache) {
    ClientRegionFactoryBean<Long, Customer> clientRegion = new ClientRegionFactoryBean<>();
    clientRegion.setCache(clientCache);
    clientRegion.setName("Customers");
    // Or just PROXY if the client is not required to store data, or perhaps another shortcut type.
    clientRegion.setShortcut(ClientRegionShortcut.CACHING_PROXY);
    ...
    return clientRegion;
}
Disclaimer: I did not test this code snippet, so it may need minor tweaking, along with additional configuration as required by the application.
You will, of course, also define a ClientCache along with a Pool in your Spring config, or use the corresponding XML namespace element abstractions (e.g. <gfe:client-cache> and <gfe:pool>) on the client side.
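For reference, a client-side XML configuration addressing the Pool, CacheLoader, and CacheWriter requirements from the question might look something like this (a sketch only; the bean names, host, and port are placeholders):

<gfe:client-cache pool-name="serverPool"/>

<gfe:pool id="serverPool">
  <gfe:server host="localhost" port="40404"/>
</gfe:pool>

<gfe:client-region id="Customers" shortcut="CACHING_PROXY">
  <gfe:cache-loader ref="customerCacheLoader"/>
  <gfe:cache-writer ref="customerCacheWriter"/>
</gfe:client-region>

<bean id="customerCacheLoader" class="example.CustomerCacheLoader"/>
<bean id="customerCacheWriter" class="example.CustomerCacheWriter"/>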

WebFlux + RSocket + Spring

Can someone tell me or give me a ready-made CRUD example using WebFlux, RSocket, and Spring (or Spring Boot)?
I studied the RSocket and WebFlux documentation and wrote my own simple examples, but I would like to see a real CRUD application using the basic RSocket methods.
I'll be very grateful.
Thanks.
I maintain a Spring/RSocket sample project covering the four basic interaction modes of RSocket.
If you only need request/reply for simple CRUD operations, check the request/response mode and select a transport protocol, TCP or WebSocket.
To implement CRUD operations, just define four different routes for them, much like defining RESTful APIs with URIs. You will have to plan the naming carefully, because RSocket has no HTTP methods to help you differentiate otherwise identical routes.
For example, on the server side, we can declare a @Controller to handle messages like this.
@Controller
class ProfileController {

    @MessageMapping("fetch.profile.{name}")
    public Mono<Profile> fetch(@DestinationVariable String name) {
        // look up and return the profile
    }

    @MessageMapping("create.profile")
    public Mono<Message> create(@Payload CreateProfileRequest request) {
        // create the profile and return a confirmation message
    }

    @MessageMapping("update.profile.{name}")
    public Mono<Message> update(@DestinationVariable String name, @Payload UpdateProfileRequest request) {
        // update the profile and return a confirmation message
    }

    @MessageMapping("delete.profile.{name}")
    public Mono<Message> delete(@DestinationVariable String name) {
        // delete the profile and return a confirmation message
    }
}
On the client side, if it is a Spring Boot application, you can use the RSocketRequester to interact with the server like this.
// fetch a profile by name
requester.route("fetch.profile.hantsy").retrieveMono(Profile.class)
// create a new profile
requester.route("create.profile").data(new CreateProfileRequest(...)).retrieveMono(Message.class)
// update the existing profile
requester.route("update.profile.hantsy").data(new UpdateProfileRequest(...)).retrieveMono(Message.class)
// delete a profile
requester.route("delete.profile.hantsy").retrieveMono(Message.class)
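For completeness, the requester used above can be obtained from the auto-configured builder in a Spring Boot client; here is a minimal sketch (the host and port are placeholders):

@Bean
public RSocketRequester rSocketRequester(RSocketRequester.Builder builder) {
    // Connect to the RSocket server over TCP (host and port are placeholders)
    return builder.tcp("localhost", 7000);
}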
Of course, if you just build a service exposed over the RSocket protocol, the client can also be an rsocket-js project or one built with other languages and frameworks, such as Angular, React, or Android, etc.
Update: I've added a CRUD sample to my RSocket sample codes, and I have published a post on Medium.

GemFire Spring example

The example at https://spring.io/guides/gs/caching-gemfire/ shows that if there is a cache miss, we have to fetch the data from a server and store it in the cache.
Is this an example of GemFire running as a GemFire server, or is it a GemFire client? I thought a client would automatically fetch the data from a server on a cache miss. If that is the case, would there ever be a cache miss for the client?
Regards,
Yash
First, I think you are missing the point of the core Spring Framework's Cache Abstraction. I encourage you to read more about the Cache Abstraction's intended purpose here.
In a nutshell, if one of your application objects makes a call to some "external", "expensive" service to access a resource, then caching may be applicable, especially if the same inputs yield the exact same output every single time.
So, for a moment, let's imagine your application makes a call to the Geocoding API in the Google Maps API to translate addresses and (inversely) latitude/longitude coordinates.
You might have an application Spring @Service component like so...
@Service("AddressService")
class MyApplicationAddressService {

    @Autowired
    private GoogleGeocodingApiDao googleGeocodingApiDao;

    @Cacheable("Address")
    public Address getAddressFor(Point location) {
        return googleGeocodingApiDao.convert(location);
    }
}

@Region("Address")
class Address {

    private Point location;
    private State state;
    private String street;
    private String city;
    private String zipCode;
    ...
}
Clearly, a given latitude/longitude (input) should produce the same Address (result) every time. Also, since making a (network) call to an external API like Google's Geocoding service can be very expensive, both to access the resource and to perform the conversion, this type of service call is a perfect candidate for caching in our application.
Among many other caching providers (e.g. EhCache, Hazelcast, Redis, etc.), you can, of course, use Pivotal GemFire, or the open source alternative, Apache Geode, to back Spring's Cache Abstraction.
In your Pivotal GemFire/Apache Geode setup, you can use either the peer-to-peer (P2P) or the client/server topology; it doesn't really matter, and GemFire/Geode will do the right thing once "called upon".
As the Spring Cache Abstraction documentation states, when you make a call to one of your application component's methods (e.g. getAddressFor(:Point)) that supports caching (with @Cacheable), the interceptor first "consults" the cache before making the actual method call. If a value is present in the cache, it is returned and the "expensive" method (e.g. getAddressFor(:Point)) is not invoked.
However, on a cache miss, Spring proceeds to invoke the method and, upon a successful return from the method invocation, caches the result in the backing cache provider (such as GemFire/Geode), so that the next time the method is invoked with the same input, the cached value is returned.
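To tie this together with the example above, the caching configuration itself might look something like the following minimal sketch, assuming Spring Data GemFire's GemfireCacheManager and an existing GemFire cache bean (exact types vary by SDG version):

@Configuration
@EnableCaching
class CachingConfig {

    // Use GemFire as the backing provider for Spring's Cache Abstraction;
    // @Cacheable("Address") then resolves to the Region named "Address".
    @Bean
    public GemfireCacheManager cacheManager(Cache gemfireCache) {
        GemfireCacheManager cacheManager = new GemfireCacheManager();
        cacheManager.setCache(gemfireCache);
        return cacheManager;
    }
}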
Now, if your application is using the client/server topology, then of course the client cache will forward the request onto the server if...
The corresponding client Region is a PROXY, or...
The corresponding client Region is a CACHING_PROXY and the client's local Region does not contain the requested Point for the Address.
I encourage you to read more about different client Region data management policies here.
To see another working example of Spring's Cache Abstraction backed by Pivotal GemFire in action, have a look at...
caching-example
I used this example in my SpringOne 2015 talk to explain caching with GemFire/Geode as the caching provider. This particular example makes an external request to a REST API to get the "Quote of the Day".
Hope this helps!
Cheers,
John

Fetch all Regions from GemFire with Spring Data GemFire

I am developing a very simple dashboard to clear GemFire Regions for testing purposes. I am mainly doing this to give testers a tool for doing it themselves.
I would like to dynamically fetch the names of the currently available Regions to clear.
I have searched the Spring Data GemFire documentation, but I couldn't find a way to get all Region names.
The best hint I have so far is <gfe:auto-region-lookup/>, but I guess I would still need a cache.xml with all the Region names, and I am also not sure how to dynamically display their names and how to remove all data from those Regions.
Thanks
<gfe:auto-region-lookup> is meant to automatically create beans in the Spring ApplicationContext for all GemFire Regions that have been explicitly created outside the Spring context (i.e. in cache.xml or using GemFire's relatively new cluster-based Configuration Service). However, a developer must use and/or enable those mechanisms to employ the auto-region-lookup functionality.
To get a list of all Region names in the GemFire "cluster", you need something equivalent to Gfsh's 'list regions' command, which employs a Function to gather up all the Regions defined in the GemFire (Cache) cluster.
Note that members can define different Regions; all members participating in the cluster do not necessarily have to define the same Regions. In most cases they do, since that is beneficial for replication and HA purposes, but some members may define local Regions that only that member will use.
To go on and clear the Regions from that list, you would again need to employ a GemFire Function to "clear" the Regions on the other members of the cluster that the inquiring, acting member does not itself define; a rough sketch of such a Function follows.
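This is an untested sketch only; the class name and function ID are placeholders, and note that Region.clear() is not supported on PARTITION Regions:

public class ClearRegionsFunction extends FunctionAdapter {

    @Override
    public void execute(FunctionContext context) {
        Cache cache = CacheFactory.getAnyInstance();

        // Clear every root Region defined on this member
        // (note: clear() is not supported on PARTITION Regions).
        for (Region<?, ?> rootRegion : cache.rootRegions()) {
            rootRegion.clear();
        }

        context.getResultSender().lastResult(Boolean.TRUE);
    }

    @Override
    public String getId() {
        return "clearRegionsFunction";
    }
}

Such a Function would be registered on each server and invoked from the inquiring member via GemFire's FunctionService.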
Of course, this problem is really simple if you only want to clear the Regions defined on the member itself...

@Autowired
private Cache gemfireCache;

...

public void clearRegions() {
    for (Region<?, ?> rootRegion : gemfireCache.rootRegions()) {
        for (Region<?, ?> subRegion : rootRegion.subregions(true)) {
            subRegion.clear();
        }
        rootRegion.clear();
    }
}
See rootRegions() and subregions(recursive:boolean) for more details.
Note, GemFire's Cache interface implements the RegionService interface.
Hope this helps.
Cheers!

Spring Data GemFire inserting fake data in a Dev env

I am developing an app using GemFire, and it would be great to be able to provide some fake data while in the Dev environment.
So, instead of doing it in code as I do today, I was thinking about using Spring's application-context.xml to pre-load some dummy data in the Region I am currently working on. Something close to what DBUnit does, but for the DEV rather than the Test scope.
Later I could just switch environments in Spring, and that data would not be loaded.
Is it possible to add data to a local data grid using Spring Data GemFire?
Thanks!
There is no direct support in Spring Data GemFire to load data into a GemFire cluster. However, there are several options afforded to a SDG/GemFire developer to load data.
The most common approach is to define a GemFire CacheLoader attached to the Region. However, this approach is "lazy" and only loads data from a (potentially external) data source on a cache miss. Of course, you could program the logic in the CacheLoader to "prefetch" a number of entries in a somewhat "predictive" manner based on data access patterns. See GemFire's User Guide for more details.
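For reference, a CacheLoader is just a callback invoked on a cache miss; a minimal sketch might look like this (the DAO and the key/value types are assumptions):

public class CustomerCacheLoader implements CacheLoader<Long, Customer> {

    private final CustomerDao customerDao;  // placeholder data access object

    public CustomerCacheLoader(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }

    @Override
    public Customer load(LoaderHelper<Long, Customer> helper) throws CacheLoaderException {
        // Invoked on a cache miss; the returned value is stored in the Region
        return customerDao.findById(helper.getKey());
    }

    @Override
    public void close() {
    }
}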
Still, we can do better than this since it is more likely that you want to "preload" a particular data set for development purposes.
Another, more effective technique is to use a Spring BeanPostProcessor registered in your Spring ApplicationContext that post-processes your "Region" bean after initialization. For instance...
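...an XML bean definition along these lines (a reconstruction; the target Region bean name and data are purely illustrative):

<bean class="example.RegionPutAllBeanPostProcessor">
  <property name="targetRegionBeanName" value="Example"/>
  <property name="regionData">
    <map>
      <entry key="keyOne" value="valueOne"/>
      <entry key="keyTwo" value="valueTwo"/>
    </map>
  </property>
</bean>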
Where the RegionPutAllBeanPostProcessor is implemented as...
package example;

import java.util.Collections;
import java.util.Map;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;

import com.gemstone.gemfire.cache.Region;

public class RegionPutAllBeanPostProcessor implements BeanPostProcessor {

    private Map regionData;

    private String targetRegionBeanName;

    protected Map getRegionData() {
        return (regionData != null ? regionData : Collections.emptyMap());
    }

    public void setRegionData(final Map regionData) {
        this.regionData = regionData;
    }

    protected String getTargetRegionBeanName() {
        Assert.state(StringUtils.hasText(targetRegionBeanName), "The target Region bean name was not properly specified!");
        return targetRegionBeanName;
    }

    public void setTargetRegionBeanName(final String targetRegionBeanName) {
        Assert.hasText(targetRegionBeanName, "The target Region bean name must be specified!");
        this.targetRegionBeanName = targetRegionBeanName;
    }

    @Override
    public Object postProcessBeforeInitialization(final Object bean, final String beanName) throws BeansException {
        return bean;
    }

    @Override
    @SuppressWarnings("unchecked")
    public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
        if (beanName.equals(getTargetRegionBeanName()) && bean instanceof Region) {
            ((Region) bean).putAll(getRegionData());
        }
        return bean;
    }
}
It is not too difficult to imagine that you could inject a DataSource of some type to pre-populate the Region. The RegionPutAllBeanPostProcessor was designed to accept a specific Region (based on the Region bean's ID) to populate, so you could define multiple instances, each taking a different Region and (perhaps) a different DataSource to populate the Region(s) of choice. This BeanPostProcessor just takes a Map as the data source, but of course it could be any Spring-managed bean.
Finally, it is a simple matter to ensure that this, or multiple instances of the RegionPutAllBeanPostProcessor, is only used in your DEV environment by taking advantage of Spring bean profiles...
<beans>
  ...
  <beans profile="DEV">
    <bean class="example.RegionPutAllBeanPostProcessor">
      ...
    </bean>
    ...
  </beans>
</beans>
Usually, loading pre-defined data sets is very application-specific in terms of the "source" of the pre-defined data. As my example illustrates, the source could be as simple as another Map. However, it could be a JDBC DataSource, or perhaps a Properties file, or, well, anything for that matter. It is usually up to the developer's preference.
Though, one thing that might be useful to add to Spring Data GemFire would be the ability to load data from a GemFire Cache Region snapshot, i.e. data that may have been dumped from a QA or UAT environment, or perhaps even scrubbed from PROD for testing purposes. See GemFire's Snapshot Service for more details.
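For instance, importing a previously exported snapshot into a Region is short with GemFire's snapshot API; a sketch (the Region name and file path are placeholders):

Region<Long, Customer> customers = cache.getRegion("Customers");

// Import a previously exported snapshot file (path is a placeholder)
customers.getSnapshotService().load(new File("/path/to/customers.gfd"), SnapshotFormat.GEMFIRE);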
Also see the JIRA ticket (SGF-408) I just filed to add this support.
Hopefully this gives you enough information and/or ideas to get going. Later, I will add first-class support into SDG's XML namespace for preloading data sets.
Regards,
John

API modularization in Restlet

I have developed a web application based on the Restlet API. As I add more features over time, I sometimes need to reuse a similar group of REST APIs under different endpoints, which provide slightly different execution contexts (like switching between database instances with the same schema). I would like to refactor my code to make the APIs reusable and mount them at different endpoints. My initial thinking was to design an Application for each reusable API and attach them to the router:
router.attach("/context1", APIApplication.class);
router.attach("/foo/context2", APIApplication.class);
The API should be agnostic of the configuration of the REST API. What is the best way to pass context information (for example, the database instance) to the API Application? Is this approach viable and correct? What are the best practices for reusing REST APIs in Restlet? Some code samples would be appreciated to illustrate your answer.
Thanks for your help.
I have seen this basic set-up running using a Component as the top-level object, attaching the sub-applications to the VirtualHost rather than to a router, as per this skeleton sample.
public class Component extends org.restlet.Component
{
    public Component() throws Exception
    {
        super();

        // Client protocols
        getClients().add(Protocol.HTTP);

        // Database connection
        final DataSource dataSource = InitialContext.doLookup("java:ds");
        final Configuration configuration = new Configuration(dataSource);

        final VirtualHost host = getDefaultHost();

        // Portal modules
        host.attach("/path1", new FirstApplication());
        host.attach("/path2", new SecondApplication(configuration));
        host.attach("/path3", new ThirdApplication());
        host.attachDefault(new DefaultApplication(configuration));
    }
}
We used a custom Configuration object, basically a POJO, to pass any common config information where required, and used it to construct the Applications; we used separate 'default' Contexts for each Application.
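A sub-application consuming that Configuration might look roughly like this (a sketch; the class, attribute, and route names are assumptions):

public class SecondApplication extends Application
{
    private final Configuration configuration;

    public SecondApplication(Configuration configuration)
    {
        this.configuration = configuration;
    }

    @Override
    public Restlet createInboundRoot()
    {
        // Store the config in the application context so resources can retrieve it
        getContext().getAttributes().put("configuration", configuration);

        Router router = new Router(getContext());
        router.attach("/items", ItemsResource.class);
        return router;
    }
}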
This was coded originally against Restlet 1.1.x and has been upgraded to 2.1.x via 2.0.x. Although it works and is reasonably neat, there may be an even better way to do it in versions 2.1.x or 2.2.x.