Apache Geode CacheServerException Region not found during key set request

I am new to Geode. I started the default locator and server according to the "Geode in 5 minutes" guide, and then ran the following test with the .NET client:
// 1. Create a cache
CacheFactory cacheFactory = CacheFactory.CreateCacheFactory();
Cache cache = cacheFactory.Create();
// 2. Create default region attributes using region factory
RegionFactory regionFactory =
    cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
// 3. Create a region
IRegion<int, string> region =
    regionFactory.Create<int, string>("exampleRegion");
// 4. Put some entries
region[111] = "Value1";
region[123] = "123";
// 5. Get the entries
string result1 = region[111];
string result2 = region[123];
// 6. Close cache
cache.Close();
// 7. Print result
Console.WriteLine(result1);
Console.WriteLine(result2);
When it comes to step 4, putting entries into the region, it throws the following error:
Apache.Geode.Client.CacheServerException : Region::serverKeys: CacheableString( org.apache.geode.cache.RegionDestroyedException: Server connection from [identity(0.0.0.0(default_GeodeDS:6420:loner):2:GFNative_qxrXVNzVzR6420:default_GeodeDS,connection=1; port=55687]: Region named /exampleRegion was not found during key set request
Both the .Net client and the server are running on the same machine. Why isn't the client finding the server?
Thanks

The error message says that the server couldn't find the region, not that the client couldn't connect to the server: Region named /exampleRegion was not found during key set request. Have you defined exampleRegion on the server side?
If you're using the Cluster Configuration Service, the easiest way to do so is through gfsh, namely the create region command: gfsh> create region --name=exampleRegion --type=REPLICATE.
If you're configuring your members individually using the cache.xml file, the region can be configured as below:
<?xml version="1.0" encoding="UTF-8"?>
<cache
    xmlns="http://geode.apache.org/schema/cache"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd"
    version="1.0">
    <cache-server/>
    <region name="exampleRegion" refid="REPLICATE"/>
</cache>
I'm using REPLICATE for the sake of simplicity, but you should choose the type of region according to your use case.
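If you prefer to configure the server programmatically instead, the same region can be created with Geode's Java API. A minimal sketch, assuming a standalone server member (the member name and port are illustrative, not from the original setup):
import java.io.IOException;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.server.CacheServer;

public class ExampleServer {
    public static void main(String[] args) throws IOException {
        // Start a peer cache for this server member.
        Cache cache = new CacheFactory().set("name", "exampleServer").create();

        // Listen for clients such as the .NET one above.
        CacheServer server = cache.addCacheServer();
        server.setPort(40404); // illustrative port
        server.start();

        // Define the region the client expects to find.
        Region<Integer, String> region = cache
                .<Integer, String>createRegionFactory(RegionShortcut.REPLICATE)
                .create("exampleRegion");
        System.out.println("Created " + region.getFullPath());
    }
}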
Hope this helps.

Related

Spring Cloud Server serving multiple property files for the same application

Let's say I have applicationA that has 3 property files:
-> applicationA
- datasource.properties
- security.properties
- jms.properties
How do I move all properties to a spring cloud config server and keep them separate?
As of today, I have configured the config server to read only ONE property file, as this seems to be the standard way. The file the config server picks up is resolved using spring.application.name. In my case it will only read ONE file with this name:
-> applicationA.properties
How can I add the other files to be resolved by the config server?
This is not possible in the way you requested. Spring Cloud Config Server uses NativeEnvironmentRepository, which is:
Simple implementation of {@link EnvironmentRepository} that uses a SpringApplication and configuration files located through the normal protocols. The resulting Environment is composed of property sources located using the application name as the config file stem (spring.config.name) and the environment name as a Spring profile.
See: https://github.com/spring-cloud/spring-cloud-config/blob/master/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/NativeEnvironmentRepository.java
So basically, every time a client requests properties from the Config Server, it creates a ConfigurableApplicationContext using SpringApplicationBuilder, which is launched with the following configuration property:
String config = application;
if (!config.startsWith("application")) {
    config = "application," + config;
}
list.add("--spring.config.name=" + config);
So the only possible names for property files are application.properties (or .yml) and the name of the config client application that is requesting configuration - in your case, applicationA.properties.
But you can "cheat".
In the config server configuration you can add a property like this:
spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/your-subdirectory'
In this case the Config Server will search for the same property file names, but in several directories, so you can use subdirectories to keep your properties separate.
So with the configuration above you will be able to load configuration from:
applicationA/application.properties
applicationA/your-subdirectory/application.properties
This can be done.
You need to create your own EnvironmentRepository, which loads your property files.
org.springframework.cloud.config.server.support.AbstractScmAccessor#getSearchLocations
searches for the property files to load:
for (String prof : profiles) {
    for (String app : apps) {
        String value = location;
        if (app != null) {
            value = value.replace("{application}", app);
        }
        if (prof != null) {
            value = value.replace("{profile}", prof);
        }
        if (label != null) {
            value = value.replace("{label}", label);
        }
        if (!value.endsWith("/")) {
            value = value + "/";
        }
        output.addAll(matchingDirectories(dir, value));
    }
}
There you could add custom code, that reads the required property files.
The above code exactly matches the behaviour described in the Spring docs.
The NativeEnvironmentRepository does NOT access Git/SCM in any way, so you should use JGitEnvironmentRepository as the base for your own implementation.
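For illustration, a rough sketch of such a custom repository, assuming the spring-cloud-config 1.x EnvironmentRepository signature (the class name and the extra file stems are hypothetical):
import org.springframework.cloud.config.environment.Environment;
import org.springframework.cloud.config.environment.PropertySource;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;

// Hypothetical: wraps an existing repository (e.g. a JGitEnvironmentRepository)
// and merges extra file "stems" into the returned Environment.
public class MultiFileEnvironmentRepository implements EnvironmentRepository {

    private final EnvironmentRepository delegate;

    public MultiFileEnvironmentRepository(EnvironmentRepository delegate) {
        this.delegate = delegate;
    }

    @Override
    public Environment findOne(String application, String profile, String label) {
        Environment env = delegate.findOne(application, profile, label);
        // The extra stems mirror the question's datasource/security/jms files.
        for (String stem : new String[] { "datasource", "security", "jms" }) {
            Environment extra = delegate.findOne(stem, profile, label);
            for (PropertySource source : extra.getPropertySources()) {
                env.add(source);
            }
        }
        return env;
    }
}
How each stem resolves to a file depends on your repository layout, so treat this as a starting point rather than a drop-in implementation.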
As @nmyk pointed out, NativeEnvironmentRepository boots a mini app in order to collect the properties, by providing it with - so to speak - "hardcoded" {appname}.* and application.* supported property file names. (@Stefan Isele - prefabware.com: JGitEnvironmentRepository ends up using NativeEnvironmentRepository as well, for that matter.)
I have issued a pull request for spring-cloud-config-server 1.4.x that supports defining additional file names through a spring.cloud.config.server.searchNames environment property, in the same sense one can do for a single Spring Boot app, as defined in the "Externalized Configuration: Application Property Files" section of the documentation, using the spring.config.name environment property. I hope they review it soon, since many have asked about this feature on Stack Overflow, and surely many more search for it and read the currently advised solutions.
It's worth mentioning that many people advise "abusing" the profile feature to achieve this, which is a bad practice, in my humble opinion, as I describe in this answer.

GemFire getRegion() returns null whereas OQL query gives result

I am using Pivotal GemFire 9.0.0 with 1 Locator and 1 Server. The Server has a Region called "submissions", like below -
<gfe:replicated-region id="submissionsRegion" name="submissions"
statistics="true" template="replicateRegionTemplate">
...
</gfe:replicated-region>
I am getting Region as null when executing the following code -
Region<K, V> region = clientCache.getRegion("submissions");
Surprisingly, the same ClientCache returns all the records when I query using OQL and QueryService as shown below -
String queryString = "SELECT * FROM /submissions";
QueryService queryService = clientCache.getQueryService();
Query query = queryService.newQuery(queryString);
SelectResults results = (SelectResults) query.execute();
I am initializing my ClientCache like this -
ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10479)
        .set("name", "MyClientCache")
        .set("log-level", "error")
        .create();
I am really baffled by this. Any pointer or help would be great.
You need to configure your ClientCache (either through a cache.xml or pure GemFire API) with the regions as well. Using your example:
ClientRegionFactory regionFactory = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY);
Region region = regionFactory.create("submissions");
ClientRegionShortcut.PROXY is used here just for the sake of simplicity; you should use the shortcut that meets your needs.
The OQL works as expected because you are obtaining the QueryService through the ClientCache.getQueryService() method (instead of ClientCache.getLocalQueryService()), so the query is actually executed on the server side.
You can get more information about how to configure the Client/Server topology in
Client/Server Configuration.
Hope this helps.
Cheers.
Yes, you need to "define" the corresponding client-side Region, matching the server-side REPLICATE Region by name (i.e. "submissions"). Actually this is a requirement independent of the server Regions' DataPolicy type (e.g. REPLICATE or PARTITION).
This is necessary since not every client wants to know about, or even needs to have, data/events from every possible server Region. Of course, this is also configurable through subscriptions and "interest registration" (with Client/Server Event Messaging, or alternatively, CQs).
Anyway, you can completely avoid the use of the GemFire API directly or even GemFire's native cache.xml (highly recommend avoiding) by using either SDG's XML namespace...
<gfe:client-cache properties-ref="gemfireProperties" ... />
<gfe:client-region id="submissions" shortcut="PROXY"/>
Or by using Spring JavaConfig with SDG's API...
@Configuration
class GemFireConfiguration {

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("log-level", "config");
        ...
        return gemfireProperties;
    }

    @Bean
    ClientCacheFactoryBean gemfireCache() {
        ClientCacheFactoryBean gemfireCache = new ClientCacheFactoryBean();
        gemfireCache.setClose(true);
        gemfireCache.setProperties(gemfireProperties());
        ...
        return gemfireCache;
    }

    @Bean(name = "submissions")
    ClientRegionFactoryBean submissionsRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean submissions = new ClientRegionFactoryBean();
        submissions.setCache(gemfireCache);
        submissions.setClose(false);
        submissions.setShortcut(ClientRegionShortcut.PROXY);
        ...
        return submissions;
    }

    ...
}
The "submissions" Region can be wrapped with SDG's GemfireTemplate, which will handle getting the "correct" QueryService on your behalf when running queries using the find(..) method.
Of course, you may be interested in making your client "submissions" Region a CACHING_PROXY too. You will then need to register "interest" in the keys or data of interest. CQs are the best way to do this, as they use query criteria to define the data of "interest".
CACHING_PROXY is exactly what it sounds like: data is cached locally in the client based on the interest policies. This also gives you the ability to use the "local" QueryService to query data locally, avoiding the network hop.
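For instance, a rough sketch of registering a CQ from the client, assuming the pool was created with subscription-enabled=true (the CQ name and criteria are illustrative):
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqListener;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.query.QueryService;

QueryService queryService = clientCache.getQueryService();

CqAttributesFactory cqFactory = new CqAttributesFactory();
cqFactory.addCqListener(new CqListener() {
    @Override
    public void onEvent(CqEvent event) {
        // Fired as matching data changes on the server.
        System.out.println(event.getKey() + " -> " + event.getNewValue());
    }
    @Override
    public void onError(CqEvent event) { /* handle query errors */ }
    @Override
    public void close() { }
});

// newCq/executeWithInitialResults throw checked CqException and
// RegionNotFoundException; handle them appropriately in real code.
CqQuery cq = queryService.newCq("submissionEvents",
        "SELECT * FROM /submissions", cqFactory.create());
cq.executeWithInitialResults();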
Anyway, many options here.
Cheers,
John

Configuring and Using Geode Regions and Locks for Atomic Data Structures

I am currently using Spring Boot Starter 1.4.2.RELEASE and Geode Core 1.0.0-incubating via Maven, against a local Docker configuration consisting of a Geode Locator and 2 cache nodes.
I've consulted the documentation here:
http://geode.apache.org/docs/guide/developing/distributed_regions/locking_in_global_regions.html
I have configured a cache.xml file for use with my application like so:
<?xml version="1.0" encoding="UTF-8"?>
<client-cache
    xmlns="http://geode.apache.org/schema/cache"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://geode.apache.org/schema/cache
                        http://geode.apache.org/schema/cache/cache-1.0.xsd"
    version="1.0">
    <pool name="serverPool">
        <locator host="localhost" port="10334"/>
    </pool>
    <region name="testRegion" refid="CACHING_PROXY">
        <region-attributes pool-name="serverPool" scope="global"/>
    </region>
</client-cache>
In my Application.java I have exposed the region as a bean via:
@SpringBootApplication
public class Application {

    @Bean
    ClientCache cache() {
        return new ClientCacheFactory().create();
    }

    @Bean
    Region<String, Integer> testRegion(final ClientCache cache) {
        return cache.<String, Integer>getRegion("testRegion");
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
And in my "service" DistributedCounter.java:
@Service
public class DistributedCounter {

    @Autowired
    private Region<String, Integer> testRegion;

    /**
     * Uses a fine-grained lock on the modifier.
     * @param counterKey {@link String} containing the key whose value should be incremented.
     */
    public void incrementCounter(String counterKey) {
        if (testRegion.getDistributedLock(counterKey).tryLock()) {
            try {
                Integer old = testRegion.get(counterKey);
                if (old == null) {
                    old = 0;
                }
                testRegion.put(counterKey, old + 1);
            } finally {
                testRegion.getDistributedLock(counterKey).unlock();
            }
        }
    }
}
I have used gfsh to configure a region named /testRegion - however there is no option to indicate that its type should be "GLOBAL"; only a variety of other options are available. Ideally this should be a persistent, replicated region, so I used the following command:
create region --name=/testRegion --type=REPLICATE_PERSISTENT
Using the how-to at: http://geode.apache.org/docs/guide/getting_started/15_minute_quickstart_gfsh.html it is easy to see the functionality of persistence and replication on my two node configuration.
However, the locking in DistributedCounter above does not cause any errors - it just does not work when two processes attempt to acquire a lock on the same "key": the second process is not blocked from acquiring the lock. There is an earlier code sample from the GemFire forums that uses the DistributedLockService, which the current documentation warns against using for locking region entries.
Is the use case of fine-grained locking to support a "map" of atomically incrementable longs supported, and if so, how should it be configured?
The Region APIs for DistributedLock and RegionDistributedLock only support Regions with Global scope. These DistributedLocks have locking scope within the name of the DistributedLockService (which is the full path name of the Region) only within the cluster. For example, if the Global Region exists on a Server, then the DistributedLocks for that Region can only be used on that Server or on other Servers within that cluster.
Cache Clients were originally a form of hierarchical caching, which means that one cluster could connect to another cluster as a client. If a client created an actual Global region, then the DistributedLock within the client would only have scope within that client and the cluster it belongs to. DistributedLocks do not propagate in any way to the Servers that such a client is connected to.
The correct approach would be to write Function(s) that utilize the DistributedLock APIs on Global regions that exist on the Server(s). You would deploy those Functions to the Server and then invoke them on the Server(s) from the Client.
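For instance, a sketch of such a server-side Function (the class name and counter key are illustrative; assumes the target Region has scope=GLOBAL):
import java.util.concurrent.locks.Lock;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;

public class IncrementCounterFunction implements Function {

    @Override
    public void execute(FunctionContext context) {
        RegionFunctionContext rfc = (RegionFunctionContext) context;
        @SuppressWarnings("unchecked")
        Region<String, Integer> region = (Region<String, Integer>) rfc.getDataSet();
        String key = (String) rfc.getArguments();

        Lock lock = region.getDistributedLock(key); // requires Global scope
        lock.lock();
        try {
            Integer old = region.get(key);
            region.put(key, old == null ? 1 : old + 1);
        } finally {
            lock.unlock();
        }
        context.getResultSender().lastResult(true);
    }

    @Override
    public String getId() { return "IncrementCounterFunction"; }

    @Override
    public boolean hasResult() { return true; }

    @Override
    public boolean optimizeForWrite() { return true; }

    @Override
    public boolean isHA() { return false; }
}
After deploying the jar to the servers, the client would invoke it with FunctionService.onRegion(region), passing the counter key as the function argument.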
In general, use of Global regions is avoided because every individual put acquires a DistributedLock within the Server's cluster, and this is a very expensive operation.
You could do something similar with a non-Global region by creating a custom DistributedLockService on the Servers and then use Functions to lock/unlock around code that you need to be globally synchronized within that cluster. In this case, the DistributedLock and RegionDistributedLock APIs on Region (for the non-Global region) would be unavailable and all locking would have to be done within a Function on the Server using the DistributedLockService API.
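A sketch of that alternative, as it might appear inside a server-side Function body (the lock service and lock names are illustrative):
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.distributed.DistributedLockService;

// Look up the named lock service, creating it once per server if needed.
DistributedLockService lockService =
        DistributedLockService.getServiceNamed("counterLockService");
if (lockService == null) {
    lockService = DistributedLockService.create("counterLockService",
            CacheFactory.getAnyInstance().getDistributedSystem());
}

// Wait up to 5 seconds for the lock; -1 means the lease never expires.
if (lockService.lock("counter-1", 5000, -1)) {
    try {
        // ... code that must be globally synchronized within the cluster ...
    } finally {
        lockService.unlock("counter-1");
    }
}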
This only works for server-side code (in Functions, for example).
From client code you can implement locking semantics using region.putIfAbsent().
If 2 (or more) clients call this API on the same region and key, only one will successfully put, which is indicated by a return value of null. This client is considered to hold the lock. The other clients will get the object that was put by the winner. This is handy because, if the value you "put" contains a unique identifier of the client, then the losers even know who is holding the lock.
Having a region entry represent a lock has other nice benefits. The lock survives across failures. You can use region expiration to set the maximum lease time for a lock, and, as mentioned previously, it's easy to tell who is holding the lock.
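A minimal sketch of this idiom from the client (the region name and client id are illustrative):
import org.apache.geode.cache.Region;

// Region implements ConcurrentMap, so putIfAbsent is atomic cluster-wide.
Region<String, String> locks = clientCache.getRegion("locks");
String myId = "client-42"; // some unique identifier for this client

String holder = locks.putIfAbsent("counter-1", myId);
if (holder == null) {
    try {
        // We "hold the lock": do the guarded work here.
    } finally {
        locks.remove("counter-1", myId); // release only if we still hold it
    }
} else {
    System.out.println("Lock currently held by: " + holder);
}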
Hope this helps.
It seems that gfsh does not provide an option to set the required scope=GLOBAL.
Maybe you could start a server with the --cache-xml-file option, which would point to a cache.xml file.
The cache.xml file should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://schema.pivotal.io/gemfire/cache"
       xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
       version="8.1" lock-lease="120" lock-timeout="60" search-timeout="300"
       is-server="true" copy-on-read="false">
    <cache-server port="0"/>
    <region name="testRegion">
        <region-attributes data-policy="persistent-replicate" scope="global"/>
    </region>
</cache>
Also, the client configuration does not need to define the scope in region-attributes.

WebSharingAppDemo-CEProviderEndToEnd Queries peerProvider for NeedsScope before any files are batched to the server. This seems out of order?

I'm building an application based on the WebSharingAppDemo-CEProviderEndToEnd. When I deploy the server portion on a server, the code gives the error "The path is not valid. Check the directory for the database." during the call to NeedsScope() in the CeWebSyncService.cs file.
Obviously the server can't access the client's .sdf file, but what is supposed to happen to make this work? The app uses batching to send the data, and the batches have to be marshalled across to the temp directory, but this problem occurs before any files have been batched over. There is nothing for the server to look at to determine whether the peerProvider needs scope. What am I missing?
public bool NeedsScope()
{
    Log("NeedsSchema: {0}", this.peerProvider.Connection.ConnectionString);
    SqlCeSyncScopeProvisioning prov = new SqlCeSyncScopeProvisioning();
    return !prov.ScopeExists(this.peerProvider.ScopeName, (SqlCeConnection)this.peerProvider.Connection);
}
I noticed that the sample was making use of a proxy to speak with the CE file, but a provider (not a proxy) to speak with the SQL server.
I switched it so there is a proxy to reach the SQL server and a provider to access the CE file.
That seems to work for me.
stats = synchronizationHelper.SynchronizeProviders(srcProvider, destinationProxy);
vs.
SyncOperationStatistics stats = syncHelper.SynchronizeProviders(srcProxy, destinationProvider);

NHibernate HybridSessionBuilder, how to switch hibernate cfg based upon url values

I am using the HybridSessionBuilder supplied by Palermo and his team (link).
We have our staging environments set up so that the url will be one of the following based on the environment
web-test.company.com
web-cert.company.com
web.company.com
What we normally do is look at the URL: if it contains "-test" we use the test configuration, and so on (connection strings, etc.).
This is the first project that uses NHibernate in this type of environment. What would be a good way to tell the SessionBuilder to use the correct hibernate cfg (I will build one for each environment)?
The HybridSessionBuilder lives in an infrastructure layer and is injected into repositories via StructureMap.
Here's how I select a single configuration file using the HybridSessionBuilder:
public Configuration GetConfiguration()
{
    var configuration = new Configuration();
    string cfgFile = Path.GetDirectoryName(Assembly.GetAssembly(this.GetType()).CodeBase) +
                     "\\com.Data.nHibernate.cfg.xml";
    configuration.Configure(cfgFile);
    configuration.AddAssembly("com.Data");
    return configuration;
}
If you want to select configuration files based on the URL, I would identify the call stack that leads to this function and pass in either an enum value or the config file's name directly.