GemFire: issues starting the REST API server

I have installed GemFire 9.x on Ubuntu and was able to start a locator and a server, but I was unable to start the REST server with the command start server --name=server1 --start-rest-api=true --http-service-port=8080 --http-service-bind-address=localhost.
In the server logs I see the error message below. Please point me in the right direction.
Thank you.
Error message:
[info 2017/07/10 13:36:58.131 EDT server1 tid=0x1] geode-web-api war found: /opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war
[info 2017/07/10 13:36:58.144 EDT server1 tid=0x1] Logging initialized @4705ms
[info 2017/07/10 13:36:58.210 EDT server1 tid=0x1] jetty-9.3.6.v20151106
[info 2017/07/10 13:36:58.747 EDT server1 tid=0x1] NO JSP Support for /gemfire-api, did not find org.eclipse.jetty.jsp.JettyJspServlet
[info 2017/07/10 13:36:58.907 EDT server1 tid=0x1] Initializing Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:01.560 EDT server1 tid=0x1] Context refreshed
[info 2017/07/10 13:37:01.574 EDT server1 tid=0x1] Found 1 custom documentation plugin(s)
[info 2017/07/10 13:37:01.580 EDT server1 tid=0x1] Scanning for api listing references
[info 2017/07/10 13:37:01.721 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_1
[info 2017/07/10 13:37:01.765 EDT server1 tid=0x1] Generating unique operation named: readUsingGET_1
[info 2017/07/10 13:37:01.810 EDT server1 tid=0x1] Generating unique operation named: createUsingPOST_1
[info 2017/07/10 13:37:01.818 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_2
[info 2017/07/10 13:37:01.824 EDT server1 tid=0x1] Generating unique operation named: listUsingGET_1
[info 2017/07/10 13:37:01.848 EDT server1 tid=0x1] Generating unique operation named: updateUsingPUT_1
[info 2017/07/10 13:37:01.888 EDT server1 tid=0x1] Started o.e.j.w.WebAppContext@49754e74{/gemfire-api,[file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_gemfire-api/webapp/, jar:file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_gemfire-api/webapp/WEB-INF/lib/springfox-swagger-ui-2.6.0.jar!/META-INF/resources],AVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:02.182 EDT server1 tid=0x1] NO JSP Support for /geode, did not find org.eclipse.jetty.jsp.JettyJspServlet
[info 2017/07/10 13:37:02.229 EDT server1 tid=0x1] Initializing Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:04.502 EDT server1 tid=0x1] Context refreshed
[info 2017/07/10 13:37:04.518 EDT server1 tid=0x1] Found 1 custom documentation plugin(s)
[info 2017/07/10 13:37:04.528 EDT server1 tid=0x1] Scanning for api listing references
[info 2017/07/10 13:37:04.666 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_1
[info 2017/07/10 13:37:04.717 EDT server1 tid=0x1] Generating unique operation named: readUsingGET_1
[info 2017/07/10 13:37:04.767 EDT server1 tid=0x1] Generating unique operation named: createUsingPOST_1
[info 2017/07/10 13:37:04.776 EDT server1 tid=0x1] Generating unique operation named: deleteUsingDELETE_2
[info 2017/07/10 13:37:04.783 EDT server1 tid=0x1] Generating unique operation named: listUsingGET_1
[info 2017/07/10 13:37:04.809 EDT server1 tid=0x1] Generating unique operation named: updateUsingPUT_1
[info 2017/07/10 13:37:04.857 EDT server1 tid=0x1] Started o.e.j.w.WebAppContext@353422fd{/geode,[file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_geode/webapp/, jar:file:///home/telirisuser/mygemfire/server1/GemFire_telirisuser/services/http/10.160.3.181_7070_geode/webapp/WEB-INF/lib/springfox-swagger-ui-2.6.0.jar!/META-INF/resources],AVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:04.857 EDT server1 tid=0x1] Stopping the HTTP service...
[info 2017/07/10 13:37:04.859 EDT server1 tid=0x1] Stopped ServerConnector@2dbcee03{HTTP/1.1,[http/1.1]}{10.160.3.181:7070}
[info 2017/07/10 13:37:04.859 EDT server1 tid=0x1] Destroying Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:04.871 EDT server1 tid=0x1] Stopped o.e.j.w.WebAppContext@353422fd{/geode,null,UNAVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:04.871 EDT server1 tid=0x1] Destroying Spring FrameworkServlet 'geode'
[info 2017/07/10 13:37:04.880 EDT server1 tid=0x1] Stopped o.e.j.w.WebAppContext@49754e74{/gemfire-api,null,UNAVAILABLE}{/opt/pivotal/pivotal-gemfire-9.0.4/tools/Extensions/geode-web-api-9.0.4.war}
[info 2017/07/10 13:37:04.893 EDT server1 tid=0x1] Cache server connection listener bound to address 0.0.0.0/0.0.0.0:40404 with backlog 1,000.
[info 2017/07/10 13:37:04.902 EDT server1 tid=0x1] ClientHealthMonitorThread maximum allowed time between pings: 60,000
[info 2017/07/10 13:37:04.908 EDT server1 tid=0x1] CacheServer Configuration: port=40404 max-connections=800 max-threads=0 notify-by-subscription=true socket-buffer-size=32768 maximum-time-between-pings=60000 maximum-message-count=230000 message-time-to-live=180 eviction-policy=none capacity=1 overflow directory=. groups=[] loadProbe=ConnectionCountProbe loadPollInterval=5000 tcpNoDelay=true

I don't see an error message in your logs. I think the problem is that you need to bind to the machine's IP address rather than localhost; it has to do with the NIC settings. So, something like:
start server --name=server1 --start-rest-api=true --http-service-port=8080 --http-service-bind-address=191.234.180.99 --server-bind-address=191.234.180.99
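As a quick check, here is a hedged sketch: 10.160.3.181 is the address that appears in your own logs, and /gemfire-api/v1 is the developer REST API's base path (adjust both if yours differ).
In gfsh:
start server --name=server1 --start-rest-api=true --http-service-port=8080 --http-service-bind-address=10.160.3.181 --server-bind-address=10.160.3.181
Then, from a shell, verify that the REST endpoint responds (it should return the region list):
curl http://10.160.3.181:8080/gemfire-api/v1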

Related

Why can't the security context be found?

There are 6 server nodes and 4 client nodes in the cluster. When the cluster is first started, servers 5 and 6 cannot find the security context of client 4. After the cluster restarts, server 6 cannot find the security context of client 2.
The log contains only this kind of exception; there are no others. Why can't the security context be found?
All nodes are restarted sequentially. This problem occurs in the production environment and is not reproduced in the test environment.
2022 Aug 09 20:53:51:378 GMT +08 cep-data-010.ds-cache6 ERROR [sys-stripe-41-#42%cep-data-010.ds-cache6%] - [org.apache.ignite] Failed to obtain a security context.
java.lang.IllegalStateException: Failed to find security context for subject with given ID : be1fded5-1450-4fc6-b16f-1c580899db2f
at org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.withContext(IgniteSecurityProcessor.java:167)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1908)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1530)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:243)
at org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1423)
at org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)
at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:637)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
at java.base/java.lang.Thread.run(Thread.java:834)
2022 Aug 09 20:53:51:383 GMT +08 cep-data-010.ds-cache6 INFO [disco-event-worker-#351%cep-data-010.ds-cache6%] - [org.apache.ignite] Added new node to topology: TcpDiscoveryNode [id=be1fded5-1450-4fc6-b16f-1c580899db2f, consistentId=cep-master-017.ds-realtrail2, addrs=ArrayList [192.168.229.9], sockAddrs=HashSet [cep-master-017/192.168.229.9:0], discPort=0, order=16, intOrder=16, lastExchangeTime=1660049631321, loc=false, ver=2.13.0#20220420-sha1:551f6ece, isClient=true]
2022 Aug 09 20:53:51:389 GMT +08 cep-data-010.ds-cache6 ERROR [sys-stripe-41-#42%cep-data-010.ds-cache6%] - [org.apache.ignite] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Failed to find security context for subject with given ID : be1fded5-1450-4fc6-b16f-1c580899db2f]]

Redis won't start: "Scheduled restart job, restart counter is at 5"

My previously working and functional Redis server 5.0.7 installation on Ubuntu 20.04 simply refuses to start, without any changes having been made to the config file. The config is identical to one on a different server running Redis that starts fine, so I think the error message is systemd-related:
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2022-07-30 23:09:49 HKT; 32s ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 8141 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=1/FAILURE)
redis-server.service: Control process exited, code=exited, status=1/FAILURE
redis-server.service: Failed with result 'exit-code'.
Failed to start Advanced key-value store.
redis-server.service: Scheduled restart job, restart counter is at 5.
Stopped Advanced key-value store.
redis-server.service: Start request repeated too quickly.
redis-server.service: Failed with result 'exit-code'.
Failed to start Advanced key-value store.
Any ideas?
Thanks
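Since the systemd status output above only shows the restart loop, not Redis's own error, it may help to surface the underlying failure first. A hedged debugging sketch (unit name and config path taken from the output above):
# Show the unit's recent journal lines, which usually include redis-server's own error
journalctl -u redis-server.service -n 50 --no-pager
# Run redis-server in the foreground as the redis user to see the error directly
sudo -u redis /usr/bin/redis-server /etc/redis/redis.conf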

server.HiveServer2: Error starting priviledge synchonizer

Hive version 3.1.2
Hadoop components (hdfs/yarn/historyjob) with Kerberos authentication.
Hive Kerberos config:
hive.server2.authentication=KERBEROS
hive.server2.authentication.kerberos.principal=hiveserver2/_HOST@BDP.COM
hive.server2.authentication.kerberos.keytab=/etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
hive.metastore.sasl.enabled=true
hive.metastore.kerberos.keytab.file=/etc/kerberos/hadoop/metastore.bdp-05.keytab
hive.metastore.kerberos.principal=metastore/_HOST@BDP.COM
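Before starting the services, it can be worth confirming that the keytabs actually contain the expected principals. A hedged sketch using the keytab paths from the config above (the principal name is the one shown in the startup log below):
# List the principals stored in each keytab
klist -kt /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
klist -kt /etc/kerberos/hadoop/metastore.bdp-05.keytab
# Optionally verify that a ticket can be obtained with the HiveServer2 keytab
kinit -kt /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab hiveserver2/bigdata-server-05@BDP.COM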
First, start the Metastore:
./bin/hive --service metastore > /dev/null &
Nothing abnormal in the log.
Then start HiveServer2:
./bin/hive --service hiveserver2 > /dev/null &
Here are the startup logs:
2020-12-30T11:28:48,746 INFO [main] server.HiveServer2: Starting HiveServer2
2020-12-30T11:28:49,168 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05@BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:49,171 INFO [main] cli.CLIService: SPNego httpUGI not created, spNegoPrincipal: , ketabFile:
2020-12-30T11:28:49,187 INFO [main] SessionState: Hive Session ID = 0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,052 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,066 INFO [main] session.SessionState: Created local directory: /tmp/hive/0754b9bc-f2f9-4d4c-ab95-a7359764bc49
2020-12-30T11:28:50,069 INFO [main] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/0754b9bc-f2f9-4d4c-ab95-a7359764bc49/_tmp_space.db
2020-12-30T11:28:50,600 INFO [main] metastore.HiveMetaStoreClient: Trying to connect to metastore with URI thrift://bigdata-server-05:9083
2020-12-30T11:28:50,605 INFO [main] metastore.HiveMetaStoreClient: HMSC::open(): Could not find delegation token. Creating KERBEROS-based thrift connection.
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Opened a connection to metastore, current connections: 1
2020-12-30T11:28:50,653 INFO [main] metastore.HiveMetaStoreClient: Connected to metastore.
2020-12-30T11:28:50,654 INFO [main] metastore.RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient ugi=hiveserver2/bigdata-server-05@BDP.COM (auth:KERBEROS) retries=1 delay=1 lifetime=0
2020-12-30T11:28:50,781 INFO [main] service.CompositeService: Operation log root directory is created: /tmp/hive/operation_logs
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread pool size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread wait queue size: 100
2020-12-30T11:28:50,783 INFO [main] service.CompositeService: HiveServer2: Background operation thread keepalive time: 10 seconds
2020-12-30T11:28:50,784 INFO [main] service.CompositeService: Connections limit are user: 0 ipaddress: 0 user-ipaddress: 0
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:OperationManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:SessionManager is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:CLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is inited.
2020-12-30T11:28:50,787 INFO [main] service.AbstractService: Service:HiveServer2 is inited.
2020-12-30T11:28:50,835 INFO [pool-7-thread-1] SessionState: Hive Session ID = 693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,838 INFO [main] results.QueryResultsCache: Initializing query results cache at /tmp/hive/_resultscache_
2020-12-30T11:28:50,844 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,844 INFO [main] results.QueryResultsCache: Query results cache: cacheDirectory /tmp/hive/_resultscache_/results-23ae949b-6894-4a17-8141-0eacf5fe5a63, maxCacheSize 2147483648, maxEntrySize 10485760, maxEntryLifetime 3600000
2020-12-30T11:28:50,846 INFO [pool-7-thread-1] session.SessionState: Created local directory: /tmp/hive/693b0399-aabd-42b5-a4b2-a4cebbd325d4
2020-12-30T11:28:50,849 INFO [pool-7-thread-1] session.SessionState: Created HDFS directory: /tmp/hive/hiveserver2/693b0399-aabd-42b5-a4b2-a4cebbd325d4/_tmp_space.db
2020-12-30T11:28:50,861 INFO [main] events.NotificationEventPoll: Initializing lastCheckedEventId to 0
2020-12-30T11:28:50,862 INFO [main] server.HiveServer2: Starting Web UI on port 10002
2020-12-30T11:28:50,885 INFO [pool-7-thread-1] metadata.HiveMaterializedViewsRegistry: Materialized views registry has been initialized
2020-12-30T11:28:50,894 INFO [main] util.log: Logging initialized @4380ms
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:OperationManager is started.
2020-12-30T11:28:51,009 INFO [main] service.AbstractService: Service:SessionManager is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:CLIService is started.
2020-12-30T11:28:51,010 INFO [main] service.AbstractService: Service:ThriftBinaryCLIService is started.
2020-12-30T11:28:51,013 WARN [main] security.HadoopThriftAuthBridge: Client-facing principal not set. Using server-side setting: hiveserver2/_HOST@BDP.COM
2020-12-30T11:28:51,013 INFO [main] security.HadoopThriftAuthBridge: Logging in via CLIENT based principal
2020-12-30T11:28:51,019 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05@BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,019 INFO [main] security.HadoopThriftAuthBridge: Logging in via SERVER based principal
2020-12-30T11:28:51,023 INFO [main] security.UserGroupInformation: Login successful for user hiveserver2/bigdata-server-05@BDP.COM using keytab file /etc/kerberos/hadoop/hiveserver2.bdp-05.keytab
2020-12-30T11:28:51,030 INFO [main] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,033 INFO [main] security.TokenStoreDelegationTokenSecretManager: New master key with key id=0
2020-12-30T11:28:51,034 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-12-30T11:28:51,035 INFO [Thread[Thread-8,5,main]] security.TokenStoreDelegationTokenSecretManager: New master key with key id=1
2020-12-30T11:28:51,040 INFO [main] thrift.ThriftCLIService: Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
2020-12-30T11:28:51,040 INFO [main] service.AbstractService: Service:HiveServer2 is started.
2020-12-30T11:28:51,041 ERROR [main] server.HiveServer2: Error starting priviledge synchonizer:
java.lang.NullPointerException: null
at org.apache.hive.service.server.HiveServer2.startPrivilegeSynchonizer(HiveServer2.java:985) ~[hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:726) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1037) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.access$1600(HiveServer2.java:140) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1305) [hive-service-3.1.2.jar:3.1.2]
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1149) [hive-service-3.1.2.jar:3.1.2]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_271]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_271]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_271]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_271]
at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.3.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.3.jar:?]
2020-12-30T11:28:51,044 INFO [main] server.HiveServer2: Shutting down HiveServer2
In my case, hiveserver2-site.xml was created by Apache Ranger when the ranger-hive-plugin was turned on. Once I disabled the ranger-hive-plugin, hiveserver2-site.xml was edited by Ranger.
Here are the remaining configurations:
<configuration>
<property>
<name>hive.security.authorization.enabled</name>
<value>true</value>
</property>
<property>
<name>hive.security.authorization.manager</name>
<value>org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider</value>
</property>
<property>
<name>hive.security.authenticator.manager</name>
<value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
</property>
<property>
<name>hive.conf.restricted.list</name>
<value>hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager</value>
</property>
</configuration>
Starting HiveServer2 then hits the previous exception.
Removing hiveserver2-site.xml makes it work fine.
I don't know why. Can somebody explain?
Is this still relevant? If yes, check the logs. You should see that HiveServer2 tries to connect to ZooKeeper; if no quorum is configured, it will try to connect to localhost:2181, so either ZooKeeper must be running there or the ZooKeeper quorum servers should be configured.
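For example, a hedged sketch (hive.zookeeper.quorum is the standard Hive property for the ZooKeeper ensemble; the host names below are placeholders for your quorum):
# Point HiveServer2 at the ZooKeeper quorum instead of the localhost:2181 default
./bin/hive --service hiveserver2 --hiveconf hive.zookeeper.quorum=zk1.bdp.com:2181,zk2.bdp.com:2181 > /dev/null &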

Apache Geode Redis Adapter cannot be PERSISTENT

I want to create a GeodeRedisServer whose region type is REPLICATE_PERSISTENT. According to the documentation:
gfsh> start server --name=server1 --redis-bind-address=localhost \
--redis-port=11211 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
I used the command, but it was not successful. The error is:
gfsh>start server --name=server2 --server-port=40405 --redis-bind-address=192.168.16.36 --redis-port=6372 --J=-Dgemfireredis.regiontype=REPLICATE_PERSISTENT
Starting a Geode Server in /home/apache-geode-1.10.0/my_geode/server2...
The Cache Server process terminated unexpectedly with exit status 1. Please refer to the log file in /home/apache-geode-1.10.0/my_geode/server2 for full details.
Exception in thread "main" java.lang.NullPointerException
at org.apache.geode.internal.cache.LocalRegion.findDiskStore(LocalRegion.java:7436)
at org.apache.geode.internal.cache.LocalRegion.<init>(LocalRegion.java:595)
at org.apache.geode.internal.cache.LocalRegion.<init>(LocalRegion.java:541)
...
In the log I can see:
[info 2019/12/10 15:55:11.097 CST <main> tid=0x1] _monitoringRegion_192.168.16.36<v1>41001 is done getting image from 192.168.16.36(locator1:62278:locator)<ec><v0>:41000. isDeltaGII is false
[info 2019/12/10 15:55:11.097 CST <main> tid=0x1] Initialization of region _monitoringRegion_192.168.16.36<v1>41001 completed
[info 2019/12/10 15:55:11.379 CST <main> tid=0x1] Initialized cache service org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl
[info 2019/12/10 15:55:11.396 CST <main> tid=0x1] Initialized cache service org.apache.geode.cache.lucene.internal.LuceneServiceImpl
[info 2019/12/10 15:55:11.397 CST <main> tid=0x1] Starting GeodeRedisServer on bind address 192.168.16.36 on port 6379
[error 2019/12/10 15:55:11.402 CST <main> tid=0x1] java.lang.NullPointerException
[info 2019/12/10 15:55:11.405 CST <Distributed system shutdown hook> tid=0x15] VM is exiting - shutting down distributed system
But when I change the region type to REPLICATE, it succeeds.
I also want to make my Redis service highly available. I can create the Redis adapter service now, but once it hangs, it is no longer available. Is there any way to create a Redis adapter cluster? I haven't found an available method in the official documentation. I hope someone can help me, thank you very much.
As a side note, the version I tested was 1.9.0.
Unfortunately this appears to be broken and I don't think there is a good workaround. I've opened a ticket for this: https://issues.apache.org/jira/browse/GEODE-7721
I think Jens is right that the REPLICATE_PERSISTENT policy seems to be broken.
You also asked "Is there any way to create a redis adapter cluster?" If you use the REPLICATE or PARTITION_REDUNDANT data policies, I think it should work. If you start additional geode servers they will make redundant copies of your data.
Do be aware that this adapter is still experimental and in development.
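If you want to try the redundant route, a hedged sketch along the lines of the commands above (ports are illustrative, the bind address is the one from the question, and a running locator is assumed):
gfsh>start server --name=redis1 --server-port=40405 --redis-bind-address=192.168.16.36 --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT
gfsh>start server --name=redis2 --server-port=40406 --redis-bind-address=192.168.16.36 --redis-port=6380 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT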

GemFire 8.2.0 embedded locator service

I am trying to cluster servers in GemFire using an embedded locator service.
server 1
serverCache = new CacheFactory().set("cache-xml-file", "server-cache.xml")
.set("mcast-port", "0")
.set("start-locator", "11001")
.set("locators", "localhost[11001],10.0.0.193[11002]").create();
server 2
serverCache = new CacheFactory().set("cache-xml-file", "server-cache.xml")
.set("mcast-port", "0")
.set("start-locator", "11002")
.set("locators", "10.0.0.192[11001],localhost[11002]").create();
but they can't connect.
from server 1
[warn 2016/02/08 20:37:41.510 UTC tid=0x28] Locator discovery task could not exchange locator information localhost[11001] with ip-10-0-0-193.ec2.internal[11002] after 55 retry attempts. Retrying in 10,000 ms.
from server 2
[warn 2016/02/08 20:46:27.867 UTC tid=0x28] Locator discovery task could not exchange locator information localhost[11002] with ip-10-0-0-192.ec2.internal[11001] after 102 retry attempts. Retrying in 10,000 ms.
It's close, but I am missing something.
Yes, using the .set("bind-address", "10.0.0.193") answer seemed to do the trick. Just to confirm from the logs, did I make a cluster?
server1
[info 2016/02/09 09:39:07.445 UTC tid=0x3c] Membership: Processing addition < ip-10-0-0-192(14522):14968 >
[info 2016/02/09 09:39:07.445 UTC tid=0x3c] Admitting member <ip-10-0-0-192(14522):14968>. Now there are 2 non-admin member(s).
[info 2016/02/09 09:39:07.460 UTC tid=0x41] Member ip-10-0-0-192(14522):14968 is not equivalent or in the same redundancy zone.
[info 2016/02/09 09:39:12.923 UTC tid=0x28] Locator discovery task exchanged locator information ip-10-0-0-193.ec2.internal[11001] with ip-10-0-0-192.ec2.internal[11001]: {-1=[ip-10-0-0-192.ec2.internal[11001], ip-10-0-0-193.ec2.internal[11001]]}.
[info 2016/02/09 09:39:13.245 UTC tid=0x46] Initializing region _gfe_non_durable_client_with_id_ip-10-0-0-186(3936:loner):49683:5b2966c5_2_queue
[info 2016/02/09 09:39:13.247 UTC tid=0x46] Initialization of region _gfe_non_durable_client_with_id_ip-10-0-0-186(3936:loner):49683:5b2966c5_2_queue completed
[info 2016/02/09 09:39:13.252 UTC tid=0x46] Entry expiry tasks disabled because the queue became primary. Old messageTimeToLive was: 180
[info 2016/02/09 09:39:13.435 UTC tid=0x46] Initializing region _gfe_non_durable_client_with_id_ip-10-0-0-189(4036:loner):51441:762a66c5_2_queue
[info 2016/02/09 09:39:13.437 UTC tid=0x46] Initialization of region _gfe_non_durable_client_with_id_ip-10-0-0-189(4036:loner):51441:762a66c5_2_queue completed
[info 2016/02/09 09:39:13.438 UTC tid=0x46] Entry expiry tasks disabled because the queue became primary. Old messageTimeToLive was: 180
and server 2
[info 2016/02/09 09:39:07.245 UTC tid=0x1] Attempting to join distributed system whose membership coordinator is ip-10-0-0-193(16745):57474 using membership ID ip-10-0-0-192(14522):14968
[info 2016/02/09 09:39:07.408 UTC tid=0x1] Membership: lead member is now ip-10-0-0-193(16745):57474
[info 2016/02/09 09:39:07.412 UTC tid=0x23] GemFire failure detection is now monitoring ip-10-0-0-193(16745):57474
[info 2016/02/09 09:39:07.413 UTC tid=0x1] Entered into membership with ID ip-10-0-0-192(14522):14968.
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Starting DistributionManager ip-10-0-0-192(14522):14968. (took 272 ms)
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Initial (membershipManager) view = [ip-10-0-0-193(16745):57474{lead}, ip-10-0-0-192(14522):14968]
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Admitting member <ip-10-0-0-193(16745):57474>. Now there are 1 non-admin member(s).
[info 2016/02/09 09:39:07.414 UTC tid=0x1] Admitting member <ip-10-0-0-192(14522):14968>. Now there are 2 non-admin member(s).
[info 2016/02/09 09:39:07.446 UTC <ip-10-0-0-193(16745):57474 shared unordered uid=1 port=39916> tid=0x28] Member ip-10-0-0-193(16745):57474 is not equivalent or in the same redundancy zone.
Thanks.
Actually, the locator is binding to localhost, so you should set bind-address for each cache server with set("bind-address", "10.0.0.192"). Also obviously have your locators point at these addresses.
Have you tried replacing "localhost" with the actual IP address of the box? In other words, both lists should look like this:
.set("locators", "10.0.0.192[11001],10.0.0.193[11002]")
I believe the locator by default binds to the public IP address of your machine, not localhost (127.0.0.1).