GemFire 8.2.0 embedded locator service

I am trying to cluster servers in gemfire using an embedded locator service.
Server 1:
serverCache = new CacheFactory().set("cache-xml-file", "server-cache.xml")
.set("mcast-port", "0")
.set("start-locator", "11001")
.set("locators", "localhost[11001],10.0.0.193[11002]").create();
Server 2:
serverCache = new CacheFactory().set("cache-xml-file", "server-cache.xml")
.set("mcast-port", "0")
.set("start-locator", "11002")
.set("locators", "10.0.0.192[11001],localhost[11002]").create();
But they can't connect.
From server 1:
[warn 2016/02/08 20:37:41.510 UTC tid=0x28] Locator discovery task could not exchange locator information localhost[11001] with ip-10-0-0-193.ec2.internal[11002] after 55 retry attempts. Retrying in 10,000 ms.
From server 2:
[warn 2016/02/08 20:46:27.867 UTC tid=0x28] Locator discovery task could not exchange locator information localhost[11002] with ip-10-0-0-192.ec2.internal[11001] after 102 retry attempts. Retrying in 10,000 ms.
It's close, but I am missing something.
Yes, using .set("bind-address", "10.0.0.193") as suggested in the answer seemed to do the trick. Just to confirm from the logs, did I make a cluster?
Server 1:
[info 2016/02/09 09:39:07.445 UTC tid=0x3c] Membership: Processing addition < ip-10-0-0-192(14522):14968 >
[info 2016/02/09 09:39:07.445 UTC tid=0x3c] Admitting member :14968>. Now there are 2 non-admin member(s).
[info 2016/02/09 09:39:07.460 UTC tid=0x41] Member ip-10-0-0-192(14522):14968 is not equivalent or in the same redundancy zone.
[info 2016/02/09 09:39:12.923 UTC tid=0x28] Locator discovery task exchanged locator information ip-10-0-0-193.ec2.internal[11001] with ip-10-0-0-192.ec2.internal[11001]: {-1=[ip-10-0-0-192.ec2.internal[11001], ip-10-0-0-193.ec2.internal[11001]]}.
[info 2016/02/09 09:39:13.245 UTC tid=0x46] Initializing region _gfe_non_durable_client_with_id_ip-10-0-0-186(3936:loner):49683:5b2966c5_2_queue
[info 2016/02/09 09:39:13.247 UTC tid=0x46] Initialization of region _gfe_non_durable_client_with_id_ip-10-0-0-186(3936:loner):49683:5b2966c5_2_queue completed
[info 2016/02/09 09:39:13.252 UTC tid=0x46] Entry expiry tasks disabled because the queue became primary. Old messageTimeToLive was: 180
[info 2016/02/09 09:39:13.435 UTC tid=0x46] Initializing region _gfe_non_durable_client_with_id_ip-10-0-0-189(4036:loner):51441:762a66c5_2_queue
[info 2016/02/09 09:39:13.437 UTC tid=0x46] Initialization of region _gfe_non_durable_client_with_id_ip-10-0-0-189(4036:loner):51441:762a66c5_2_queue completed
[info 2016/02/09 09:39:13.438 UTC tid=0x46] Entry expiry tasks disabled because the queue became primary. Old messageTimeToLive was: 180
And server 2:
[info 2016/02/09 09:39:07.245 UTC tid=0x1] Attempting to join distributed system whose membership coordinator is ip-10-0-0-193(16745):57474 using membership ID ip-10-0-0-192(14522):14968
[info 2016/02/09 09:39:07.408 UTC tid=0x1] Membership: lead member is now ip-10-0-0-193(16745):57474
[info 2016/02/09 09:39:07.412 UTC tid=0x23] GemFire failure detection is now monitoring ip-10-0-0-193(16745):57474
[info 2016/02/09 09:39:07.413 UTC tid=0x1] Entered into membership with ID ip-10-0-0-192(14522):14968.
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Starting DistributionManager ip-10-0-0-192(14522):14968. (took 272/ ms)
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Initial (membershipManager) view = [ip-10-0-0-193(16745):57474{lead}, ip-10-0-0-192(14522):14968]
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Admitting member :57474>. Now there are 1 non-admin member(s).
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Admitting member :14968>. Now there are 2 non-admin member(s).
[info 2016/02/09 09:39:07.446 UTC :57474 shared unordered uid=1 port=39916> tid=0x28] Member ip-10-0-0-193(16745):57474 is not equivalent or in the same redundancy zone.
Thanks.

Actually, the locator is binding to localhost, so you should set the bind-address for each cache server, e.g. set("bind-address", "10.0.0.192"). Also make sure your locators lists point at these addresses.
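As a sketch of what that would look like for server 1 (the addresses here mirror the question and are assumptions about your network; server 2 would mirror this with 10.0.0.193):

```java
// Sketch only: server 1 on 10.0.0.192, binding explicitly instead of
// defaulting to localhost.
Cache serverCache = new CacheFactory()
        .set("cache-xml-file", "server-cache.xml")
        .set("mcast-port", "0")
        .set("bind-address", "10.0.0.192")   // bind to the real NIC, not localhost
        .set("start-locator", "11001")
        .set("locators", "10.0.0.192[11001],10.0.0.193[11002]")  // no localhost entries
        .create();
```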

Have you tried replacing "localhost" with the actual IP address of the box? In other words, both lists should look like this:
.set("locators", "10.0.0.192[11001],10.0.0.193[11002]")
I believe the locator by default binds to the public IP address of your machine, not localhost (127.0.0.1).

Related

Why can't the security context be found?

There are 6 server nodes and 4 client nodes in the cluster. When the cluster is first started, servers 5 and 6 cannot find the security context of client 4. After the cluster restarts, server 6 cannot find the client 2 security context.
This is the only kind of exception in the log; there are no other exceptions. Why can't the security context be found?
All nodes are restarted sequentially. This problem occurs in the production environment and is not reproduced in the test environment.
2022 Aug 09 20:53:51:378 GMT +08 cep-data-010.ds-cache6 ERROR [sys-stripe-41-#42%cep-data-010.ds-cache6%] - [org.apache.ignite] Failed to obtain a security context.
java.lang.IllegalStateException: Failed to find security context for subject with given ID : be1fded5-1450-4fc6-b16f-1c580899db2f
at org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.withContext(IgniteSecurityProcessor.java:167)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1908)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1530)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:243)
at org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1423)
at org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)
at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:637)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
at java.base/java.lang.Thread.run(Thread.java:834)
2022 Aug 09 20:53:51:383 GMT +08 cep-data-010.ds-cache6 INFO [disco-event-worker-#351%cep-data-010.ds-cache6%] - [org.apache.ignite] Added new node to topology: TcpDiscoveryNode [id=be1fded5-1450-4fc6-b16f-1c580899db2f, consistentId=cep-master-017.ds-realtrail2, addrs=ArrayList [192.168.229.9], sockAddrs=HashSet [cep-master-017/192.168.229.9:0], discPort=0, order=16, intOrder=16, lastExchangeTime=1660049631321, loc=false, ver=2.13.0#20220420-sha1:551f6ece, isClient=true]
2022 Aug 09 20:53:51:389 GMT +08 cep-data-010.ds-cache6 ERROR [sys-stripe-41-#42%cep-data-010.ds-cache6%] - [org.apache.ignite] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Failed to find security context for subject with given ID : be1fded5-1450-4fc6-b16f-1c580899db2f]]

Apache Geode Redis Adapter cannot be made PERSISTENT

I want to create a GeodeRedisServer whose region type is REPLICATE_PERSISTENT. According to the documentation:
gfsh> start server --name=server1 --redis-bind-address=localhost \
--redis-port=11211 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
I use the command, but it is not successful. The error is:
gfsh>start server --name=server2 --server-port=40405 --redis-bind-address=192.168.16.36 --redis-port=6372 --J=-Dgemfireredis.regiontype=REPLICATE_PERSISTENT
Starting a Geode Server in /home/apache-geode-1.10.0/my_geode/server2...
The Cache Server process terminated unexpectedly with exit status 1. Please refer to the log file in /home/apache-geode-1.10.0/my_geode/server2 for full details.
Exception in thread "main" java.lang.NullPointerException
at org.apache.geode.internal.cache.LocalRegion.findDiskStore(LocalRegion.java:7436)
at org.apache.geode.internal.cache.LocalRegion.<init>(LocalRegion.java:595)
at org.apache.geode.internal.cache.LocalRegion.<init>(LocalRegion.java:541)
...
In the log I can see:
[info 2019/12/10 15:55:11.097 CST <main> tid=0x1] _monitoringRegion_192.168.16.36<v1>41001 is done getting image from 192.168.16.36(locator1:62278:locator)<ec><v0>:41000. isDeltaGII is false
[info 2019/12/10 15:55:11.097 CST <main> tid=0x1] Initialization of region _monitoringRegion_192.168.16.36<v1>41001 completed
[info 2019/12/10 15:55:11.379 CST <main> tid=0x1] Initialized cache service org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl
[info 2019/12/10 15:55:11.396 CST <main> tid=0x1] Initialized cache service org.apache.geode.cache.lucene.internal.LuceneServiceImpl
[info 2019/12/10 15:55:11.397 CST <main> tid=0x1] Starting GeodeRedisServer on bind address 192.168.16.36 on port 6379
[error 2019/12/10 15:55:11.402 CST <main> tid=0x1] java.lang.NullPointerException
[info 2019/12/10 15:55:11.405 CST <Distributed system shutdown hook> tid=0x15] VM is exiting - shutting down distributed system
But when I change the region type to REPLICATE, it succeeds.
I also want to make my Redis service highly available. I can create the Redis adapter service now, but once it hangs, it is no longer available. Is there any way to create a Redis adapter cluster? I haven't found a method in the official documentation. I hope someone can help me, thank you very much.
As a side note, the version I tested was 1.9.0.
Unfortunately this appears to be broken and I don't think there is a good workaround. I've opened a ticket for this: https://issues.apache.org/jira/browse/GEODE-7721
I think Jens is right that the REPLICATE_PERSISTENT policy seems to be broken.
You also asked "Is there any way to create a redis adapter cluster?" If you use the REPLICATE or PARTITION_REDUNDANT data policies, I think it should work. If you start additional geode servers they will make redundant copies of your data.
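As a sketch (the server names, bind address, and ports are illustrative assumptions), starting two servers so the adapter's data has redundant copies might look like:

```
gfsh>start server --name=redis1 --redis-bind-address=192.168.16.36 --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT
gfsh>start server --name=redis2 --redis-bind-address=192.168.16.36 --redis-port=6380 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT
```

Note the second server uses a different Redis port, since two servers on one host cannot bind the same port.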
Do be aware that this adapter is still experimental and in development.

YARN reports Flink job as FINISHED and SUCCEEDED when the Flink job fails

I am running a Flink job on YARN. We use "flink run" on the command line to submit our job to YARN. One day we had an exception on the Flink job; as we didn't enable the Flink restart strategy, it simply failed. But eventually we found that the job status was "SUCCEEDED" in the YARN application list, when we expected it to be "FAILED".
Flink CLI log:
06/12/2018 03:13:37 FlatMap (getTagStorageMapper.flatMap)(23/32) switched to CANCELED
06/12/2018 03:13:37 GroupReduce (ResultReducer.reduceGroup)(31/32) switched to CANCELED
06/12/2018 03:13:37 FlatMap (SubClassEDFJoinMapper.flatMap)(29/32) switched to CANCELED
06/12/2018 03:13:37 CHAIN DataSource (SubClassInventory.AvroInputFormat.createInput) -> FlatMap (SubClassInventoryMapper.flatMap)(27/32) switched to CANCELED
06/12/2018 03:13:37 GroupReduce (OutputReducer.reduceGroup)(28/32) switched to CANCELED
06/12/2018 03:13:37 CHAIN DataSource (SubClassInventory.AvroInputFormat.createInput) -> FlatMap (BIMBQMInstrumentMapper.flatMap)(27/32) switched to CANCELED
06/12/2018 03:13:37 GroupReduce (BIMBQMGovCorpReduce.reduceGroup)(30/32) switched to CANCELED
06/12/2018 03:13:37 FlatMap (BIMBQMEVMJoinMapper.flatMap)(32/32) switched to CANCELED
06/12/2018 03:13:37 Job execution switched to status FAILED.
No JobSubmissionResult returned, please make sure you called ExecutionEnvironment.execute()
2018-06-12 03:13:37,625 INFO org.apache.flink.yarn.YarnClusterClient - Sending shutdown request to the Application Master
2018-06-12 03:13:37,625 INFO org.apache.flink.yarn.YarnClusterClient - Start application client.
2018-06-12 03:13:37,630 INFO org.apache.flink.yarn.ApplicationClient - Notification about new leader address akka.tcp://flink#ip-10-97-46-149.tr-fr-nonprod.aws-int.thomsonreuters.com:45663/user/jobmanager with session ID 00000000-0000-0000-0000-000000000000.
2018-06-12 03:13:37,632 INFO org.apache.flink.yarn.ApplicationClient - Sending StopCluster request to JobManager.
2018-06-12 03:13:37,633 INFO org.apache.flink.yarn.ApplicationClient - Received address of new leader akka.tcp://flink#ip-10-97-46-149.tr-fr-nonprod.aws-int.thomsonreuters.com:45663/user/jobmanager with session ID 00000000-0000-0000-0000-000000000000.
2018-06-12 03:13:37,634 INFO org.apache.flink.yarn.ApplicationClient - Disconnect from JobManager null.
2018-06-12 03:13:37,635 INFO org.apache.flink.yarn.ApplicationClient - Trying to register at JobManager akka.tcp://flink#ip-10-97-46-149.tr-fr-nonprod.aws-int.thomsonreuters.com:45663/user/jobmanager.
2018-06-12 03:13:37,688 INFO org.apache.flink.yarn.ApplicationClient - Successfully registered at the ResourceManager using JobManager Actor[akka.tcp://flink#ip-10-97-46-149.tr-fr-nonprod.aws-int.thomsonreuters.com:45663/user/jobmanager#182802345]
2018-06-12 03:13:38,648 INFO org.apache.flink.yarn.ApplicationClient - Sending StopCluster request to JobManager.
2018-06-12 03:13:39,480 INFO org.apache.flink.yarn.YarnClusterClient - Application application_1528772982594_0001 finished with state FINISHED and final state SUCCEEDED at 1528773218662
2018-06-12 03:13:39,480 INFO org.apache.flink.yarn.YarnClusterClient - YARN Client is shutting down
2018-06-12 03:13:39,582 INFO org.apache.flink.yarn.ApplicationClient - Stopped Application client.
2018-06-12 03:13:39,583 INFO org.apache.flink.yarn.ApplicationClient - Disconnect from JobManager Actor[akka.tcp://flink#ip-10-97-46-149.tr-fr-nonprod.aws-int.thomsonreuters.com:45663/user/jobmanager#182802345].
Flink JobManager log:
FlatMap (BIMBQMEVMJoinMapper.flatMap) (32/32) (67a002e07fe799c1624a471340c8cf9d) switched from CANCELING to CANCELED.
Try to restart or fail the job Flink Java Job at Tue Jun 12 03:13:17 UTC 2018 (1086cedb3617feeee8aace29a7fc6bd0) if no longer possible.
Requesting new TaskManager container with 8192 megabytes memory. Pending requests: 1
Job Flink Java Job at Tue Jun 12 03:13:17 UTC 2018 (1086cedb3617feeee8aace29a7fc6bd0) switched from state FAILING to FAILED.
Could not restart the job Flink Java Job at Tue Jun 12 03:13:17 UTC 2018 (1086cedb3617feeee8aace29a7fc6bd0) because the restart strategy prevented it.
Unregistered task manager ip-10-97-44-186/10.97.44.186. Number of registered task managers 31. Number of available slots 31
Stopping JobManager with final application status SUCCEEDED and diagnostics: Flink YARN Client requested shutdown
Shutting down cluster with status SUCCEEDED : Flink YARN Client requested shutdown
Unregistering application from the YARN Resource Manager
Waiting for application to be successfully unregistered.
Can anybody help me understand why YARN says my Flink job "SUCCEEDED"?
The application status reported in YARN does not reflect the status of the executed job but the status of the Flink cluster, since the cluster is what constitutes the YARN application. Thus, the final status of the YARN application only depends on whether the Flink cluster shut down properly or not. Put differently, if a job fails, that does not necessarily mean that the Flink cluster failed; these are two different things.
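One practical consequence (a hedged sketch, not the only approach): since the job and the cluster are reported separately, check the exit status of the attached flink run command itself, which should be non-zero when the job fails, rather than the final YARN application state:

```shell
flink run myJob.jar
if [ $? -ne 0 ]; then
    echo "Flink job failed, even if YARN later reports the application as SUCCEEDED"
fi
```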

GemFire; starting the REST API server issues

I have installed GemFire 9.x on Ubuntu and was able to start a locator and a server, but I was unable to start the REST server with the command start server --name=server1 --start-rest-api=true --http-service-port=8080 --http-service-bind-address=localhost.
In the server logs I see the error message below. Please guide me in the right direction.
Thank you.
Error message:
[info 2017/07/10 13:36:58.131 EDT server1 tid=0x1] geode-web-api war found: /opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war
[info 2017/07/10 13:36:58.144 EDT server1 tid=0x1] Logging initialized #4705ms
[info 2017/07/10 13:36:58.210 EDT server1 tid=0x1] jetty-9.3.6.v20151106
[info 2017/07/10 13:36:58.747 EDT server1 tid=0x1] NO JSP Support for /gemfire-api, did not find org.eclipse.jetty.jsp.JettyJspServlet
[info 2017/07/10 13:36:58.907 EDT server1 tid=0x1] Initializing Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:01.560 EDT server1 tid=0x1] Context refreshed
[info 2017/07/10 13:37:01.574 EDT server1 tid=0x1] Found 1 custom documentation plugin(s)
[info 2017/07/10 13:37:01.580 EDT server1 tid=0x1] Scanning for api listing references
[info 2017/07/10 13:37:01.721 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_1
[info 2017/07/10 13:37:01.765 EDT server1 tid=0x1] Generating unique operation named: readUsingGET_1
[info 2017/07/10 13:37:01.810 EDT server1 tid=0x1] Generating unique operation named: createUsingPOST_1
[info 2017/07/10 13:37:01.818 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_2
[info 2017/07/10 13:37:01.824 EDT server1 tid=0x1] Generating unique operation named: listUsingGET_1
[info 2017/07/10 13:37:01.848 EDT server1 tid=0x1] Generating unique operation named: updateUsingPUT_1
[info 2017/07/10 13:37:01.888 EDT server1 tid=0x1] Started o.e.j.w.WebAppContext#49754e74{/gemfire-api,[file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_gemfire-api/webapp/, jar:file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_gemfire-api/webapp/WEB-INF/lib/springfox-swagger-ui-2.6.0.jar!/META-INF/resources],AVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:02.182 EDT server1 tid=0x1] NO JSP Support for /geode, did not find org.eclipse.jetty.jsp.JettyJspServlet
[info 2017/07/10 13:37:02.229 EDT server1 tid=0x1] Initializing Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:04.502 EDT server1 tid=0x1] Context refreshed
[info 2017/07/10 13:37:04.518 EDT server1 tid=0x1] Found 1 custom documentation plugin(s)
[info 2017/07/10 13:37:04.528 EDT server1 tid=0x1] Scanning for api listing references
[info 2017/07/10 13:37:04.666 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_1
[info 2017/07/10 13:37:04.717 EDT server1 tid=0x1] Generating unique operation named: readUsingGET_1
[info 2017/07/10 13:37:04.767 EDT server1 tid=0x1] Generating unique operation named: createUsingPOST_1
[info 2017/07/10 13:37:04.776 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_2
[info 2017/07/10 13:37:04.783 EDT server1 tid=0x1] Generating unique operation named: listUsingGET_1
[info 2017/07/10 13:37:04.809 EDT server1 tid=0x1] Generating unique operation named: updateUsingPUT_1
[info 2017/07/10 13:37:04.857 EDT server1 tid=0x1] Started o.e.j.w.WebAppContext#353422fd{/geode,[file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_geode/webapp/, jar:file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_geode/webapp/WEB-INF/lib/springfox-swagger-ui-2.6.0.jar!/META-INF/resources],AVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:04.857 EDT server1 tid=0x1] Stopping the HTTP service...
[info 2017/07/10 13:37:04.859 EDT server1 tid=0x1] Stopped ServerConnector#2dbcee03{HTTP/1.1,[http/1.1]}{10.160.3.181:7070}
[info 2017/07/10 13:37:04.859 EDT server1 tid=0x1] Destroying Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:04.871 EDT server1 tid=0x1] Stopped o.e.j.w.WebAppContext#353422fd{/geode,null,UNAVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:04.871 EDT server1 tid=0x1] Destroying Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:04.880 EDT server1 tid=0x1] Stopped o.e.j.w.WebAppContext#49754e74{/gemfire-api,null,UNAVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:04.893 EDT server1 tid=0x1] Cache server connection listener bound to address 0.0.0.0/0.0.0.0:40404 with backlog 1,000.
[info 2017/07/10 13:37:04.902 EDT server1 tid=0x1] ClientHealthMonitorThread maximum allowed time between pings: 60,000
[info 2017/07/10 13:37:04.908 EDT server1 tid=0x1] CacheServer Configuration: port=40404 max-connections=800 max-threads=0 notify-by-subscription=true socket-buffer-size=32768 maximum-time-between-pings=60000 maximum-message-count=230000 message-time-to-live=180 eviction-policy=none capacity=1 overflow directory=. groups=[] loadProbe=ConnectionCountProbe loadPollInterval=5000 tcpNoDelay=true
I don't see an error message in your logs. I think the problem is that you need to bind to the IP address, not localhost; it's to do with NIC settings.
So something like: start server --name=server1 --start-rest-api=true --http-service-port=8080 --http-service-bind-address=191.234.180.99 --server-bind-address=191.234.180.99

GemFire 8.2.0 ClientCacheFactory addPoolLocator, how to set a list of locators on the client

I have this code
ClientCacheFactory clientCacheFactory = new ClientCacheFactory();
clientCacheFactory.set("cache-xml-file", "client-cache.xml");
clientCacheFactory.set("mcast-port", "0");
List<String> locators = Arrays.asList(peersIp.split(","));
for (String locator : locators) {
clientCacheFactory.addPoolLocator(locator, 11001);
}
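A side note on the loop above: it hard-codes port 11001 for every entry. If the configured string uses GemFire's usual "host[port]" form, a small helper (hypothetical, not part of the GemFire API) can carry each locator's own port — a sketch:

```java
// Hypothetical helper for "host[port]" locator entries, e.g. "10.0.0.192[11001]".
// Shown only to avoid hard-coding a single port for every locator.
public class LocatorParser {
    /** Returns {host, port} parsed from a "host[port]" string. */
    public static String[] parse(String locator) {
        int open = locator.indexOf('[');
        int close = locator.indexOf(']');
        if (open <= 0 || close < open) {
            throw new IllegalArgumentException("Expected host[port]: " + locator);
        }
        return new String[] {
            locator.substring(0, open),          // host
            locator.substring(open + 1, close)   // port
        };
    }
}
```

Each parsed pair would then feed clientCacheFactory.addPoolLocator(hp[0], Integer.parseInt(hp[1])).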
I am adding a list of locator IPs to the ClientCacheFactory.
In the client's logs I see:
[info 2016/02/09 13:14:15.440 UTC tid=0x1] Running in local mode since mcast-port was 0 and locators was empty.
[info 2016/02/09 13:14:15.694 UTC tid=0x1] Pool DEFAULT started with multiuser-authentication=false
[info 2016/02/09 13:14:15.725 UTC tid=0x16] Updating membership port. Port changed from 0 to 49,879.
My client-cache.xml:
<!DOCTYPE client-cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 6.6//EN" "http://www.gemstone.com/dtd/cache6_6.dtd">
<client-cache>
<region name="exampleRegion" refid="PROXY"/>
</client-cache>
I want to use the ClientCacheFactory to define a list of locators for the client to connect to.
The addPoolLocator method looks like just what I want, but the logs say otherwise.
The logs are misleading you in this case. It looks like your pool has been created, so you should be good to go. You can verify by connecting gfsh to the locator and issuing a describe region command.
$ cd $GEMFIRE_HOME
$ ./bin/gfsh
gfsh>connect --locator=localhost[10334]
gfsh>describe region --name=/MyRegion
gfsh>list members
gfsh>list clients
Please refer to the gfsh documentation for more commands, or just type help in gfsh.
The logs actually mean you are not connected as a peer to the GemFire distributed system. The locators property refers to the peer-to-peer locators, not the ones within the pool. I can see how this is misleading; I will file an issue against Apache Geode.
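To make the distinction concrete, here is a sketch (addresses illustrative) of the two different settings: the peer-level locators property used by servers, versus the client pool locators set through addPoolLocator:

```java
// Server/peer side: the "locators" property joins the distributed system.
Cache cache = new CacheFactory()
        .set("mcast-port", "0")
        .set("locators", "10.0.0.192[11001]")
        .create();

// Client side: pool locators are set on the ClientCacheFactory instead;
// the "Running in local mode" log line refers only to the peer-level property.
ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("10.0.0.192", 11001)
        .create();
```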