I am using Pivotal GemFire 9.1.1 through Pivotal Cloud Cache 1.3.1 and ran into the following error while using the SDG @EnableClusterConfiguration annotation:
2018-11-17T16:30:35.279-05:00 [APP/PROC/WEB/0] [OUT] org.springframework.context.ApplicationContextException: Failed to start bean 'gemfireClusterSchemaObjectInitializer'; nested exception is org.apache.geode.cache.client.ServerOperationException: remote server on ac62ca98-0ec5-4a30-606b-1cc9(:8:loner):47710:a6159523:: The function is not registered for function id CreateRegionFunction
2018-11-17T16:30:35.279-05:00 [APP/PROC/WEB/0] [OUT] at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:184)
Eventually, I ran into this post: https://github.com/spring-projects/spring-boot-data-geode/issues/15
Is there any other annotation I can use with Spring Boot 2+ which will help me with GemFire Region creation, dynamically?
Thanks!
Unfortunately, no; there is currently no other way to "dynamically" push cluster/server-side configuration from a Spring GemFire cache client to a cluster of PCC servers running in PCF using SDG/SBDG.
This is now also due to an underlying issue, SBDG Issue #16 - "HTTP client does not authenticate when pushing cluster config from client to server using @EnableClusterConfiguration with PCC 1.5."
For the time being, you must manually create Regions (and Indexes) using the documentation provided by PCC.
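For reference, creating a Region and an Index manually in Gfsh looks roughly like this (the Region name, type, and Index expression are placeholders; check the PCC documentation for the options supported in that environment):

```
gfsh> create region --name=Example --type=PARTITION_REDUNDANT
gfsh> create index --name=ExampleIdx --expression=id --region=/Example
```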
I am sorry for any inconvenience or trouble this has caused you. This will be resolved very soon.
This does work in a local, non-managed context, even when starting your cluster (servers) using Gfsh. It just does not work in PCF using PCC, yet.
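For completeness, the client-side configuration in question looks roughly like this (a sketch only; the class name is illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableClusterConfiguration;

// Sketch of a Spring Boot client that asks SDG to push Region definitions
// to the servers. This works against a local, Gfsh-started cluster, but not
// (yet) against PCC in PCF, for the reasons above.
@SpringBootApplication
@ClientCacheApplication
@EnableClusterConfiguration(useHttp = true)
public class ClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClientApplication.class, args);
    }
}
```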
Regards.
Related
I am using GemFire caching in a peer-to-peer setup. The system has been running fine with GemFire 6 for a number of years. I recently upgraded to GemFire 7 and get this error in the agents and one of the processes:
[main] ERROR [GemfirePeer] Issues while creating gemfire distributed region : com.gemstone.gemfire.IncompatibleSystemException: Rejected new system node because mcast was disabled which does not match the distributed system it is attempting to join. To fix this make sure the "mcast-port" gemfire property is set the same on all members of the same distributed system.
mcast-port=0 is set in the configuration properties of all processes.
Can someone please suggest what the issue could be?
This message means that at least one member you have started has mcast-port set to a non-zero value; it could potentially be a leftover from your 6.x install as well.
I would recommend that you use locators for member discovery.
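For example, a gemfire.properties along these lines on every member (the locator host and port here are placeholders) disables multicast and uses a locator for discovery:

```
mcast-port=0
locators=locator-host[10334]
```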
While trying to configure an async replication on an ArangoDB database (using the document https://docs.arangodb.com/3.3/Manual/Administration/Replication/Asynchronous/Components.html) I got this error:
JavaScript exception in file
'/usr/share/arangodb3/js/client/modules/@arangodb/replication.js' at 209,7:
ArangoError 1470: replication API is not supported on a coordinator
! throw err;
! ^
stacktrace: ArangoError: replication API is not supported on a coordinator
at Object.exports.checkRequestResult
(/usr/share/arangodb3/js/client/modules/@arangodb/arangosh.js:96:21)
at waitForResult
(/usr/share/arangodb3/js/client/modules/@arangodb/replication.js:207:16)
at setup
(/usr/share/arangodb3/js/client/modules/@arangodb/replication.js:310:10)
at Object.setupReplication
(/usr/share/arangodb3/js/client/modules/@arangodb/replication.js:313:51)
at <shell command>:1:34
Any idea what could have caused this? I'm on the latest version, 3.3.3, with a cluster up and running on 3 different machines.
You are accessing a coordinator. In an ArangoDB cluster, the replication API resides only on the DB servers; you will see different behavior if you connect to a DB server instead.
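For example (the host and port are placeholders for your deployment), you can point arangosh at one of the DB servers rather than at a coordinator:

```
arangosh --server.endpoint tcp://<dbserver-host>:<dbserver-port>
```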
I am evaluating DataStax OpsCenter on a virtual machine to start managing/monitoring Cassandra. I am following the online docs to create cluster topology models via OpsCenter LCM, but the error message doesn't provide much information for me to continue. The job status is:
error- MeldError, 400 Client Error: Bad Request for url: http://[ip_address]:8888/api/v1/lcm/internal/nodes/6185c776-9034-45b4-a54f-6eb9511274a2/package_information
Meld failed on name="testnode1" ssh-management-address="[ip_address]" node-id="6185c776-9034-45b4-a54f-6eb9511274a2" node-name="testnode1" job-id="1b792c69-bcca-489f-ad12-a6285ba84d59" stdout=" Meld has started... " stderr=""
My question is: what might be wrong, and is there any hint on how to resolve it?
I am new to the Cassandra and DataStax communities, so please forgive me if I ask any silly questions!
Q: I used to be a Buildbot user, and the DataStax Agent looks like a Buildbot worker. Why don't we need to set up the agent on the remote machine to work with OpsCenter? Is the agent's working directory configured in OpsCenter?
The opscenterd.log: https://pastebin.com/TJsvmr6t
According to the tool-set compatibility matrix at https://docs.datastax.com/en/landing_page/doc/landing_page/compatibility.html#compatibilityDocument__opsc-compatibility, I actually used OpsCenter v5.2 for monitoring and basic DB operations. After some trial and error with the Agent's .yaml and Cassandra 2.2's .conf, the Dashboard works!
Knowledge gained:
OpsCenter 5.2 actually works with Cassandra 2.2, which is not listed in the compatibility table.
For beginners who are not sure where to start: try installing all the components on one machine to get an idea of the minimal viable working setup, and from there configure the actual dev/test/production environment.
I followed the tutorial instructions:
Install MobileFirst Platform Server 7.1 on Bluemix (https://mobilefirstplatform.ibmcloud.com/labs/administrators/7.1/bluemix/)
I used Cloudant NoSQL DB as database.
It works well for several days.
But after a weekend without use, it stopped working and I get this message in the MobileFirst Operations Console: Runtime synchronization failed.
I tried restarting the container and the database application server (Liberty), but I always get the same message.
I have to remove the container and repeat the whole procedure.
This is the third time it has happened.
Try setting the JNDI property ibm.worklight.admin.farm.reinitialize to true in server.xml. This will re-initialize the farm entries; in other words, it will clear stale entries when the application crashes.
Reference: List of JNDI Properties for MFP Administration
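In Liberty's server.xml, that property can be set as a JNDI entry, along these lines:

```xml
<jndiEntry jndiName="ibm.worklight.admin.farm.reinitialize" value="true"/>
```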
It seems like you are using the Cloudant shared plan. Responses on the shared plan are not guaranteed the way they are on the dedicated plan. To account for these vagaries, a fix was released for 7.1 that adds the resiliency needed to handle non-responses from the Cloudant shared plan. Please apply the latest iFix and this should be solved.
Spring documentation says that Spring Session can transparently leverage Redis to back a web application’s HttpSession when using REST endpoints.
Does anyone know if Spring supports GemFire in place of Redis to back a web application's HttpSession?
Ref: http://docs.spring.io/spring-session/docs/current/reference/html5/guides/rest.html
Not yet. ;)
However, I did spend a little time researching the effort involved to implement a GemFire adapter for Spring Session to back (store/replicate) an HttpSession. I still need to dig a little deeper and I will be tracking this effort in JIRA here (SGF-373).
Also know that GemFire already has support for HTTP server session replication using GemFire's HTTP Session Management Module.
Will post back when I have more details.
Will these 3 steps (at a high level) be sufficient to allow Spring Session to write to a GemFire repository instead of Redis?
Step 1: Implement just a Configuration class that provides all the functions of the annotation:
- Allow Spring to load the configuration class
- Register the Spring Session filter in the container
- Establish the repository connection factory
- Configure the repository connection
- Continue to re-use Spring Session's springSessionRepositoryFilter
Step 2: Develop an equivalent GemfireOperationsSessionRepository implementing the SessionRepository interface.
Step 3: SessionMessageListener.java
3.1. Decide on a technique to identify and save delta changes to the Session in the underlying repository.
3.2. See how session-expiration notifications from the underlying repository can be captured to invoke SessionDestroyEvent and clean-up operations.
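The Step 2 contract above can be sketched in isolation. In this sketch a plain ConcurrentHashMap stands in for the GemFire Region, and every class and method name is hypothetical rather than the eventual Spring Session API:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal session type; a real adapter would implement
// Spring Session's own Session/ExpiringSession abstraction instead.
class SimpleSession {
    private final String id = UUID.randomUUID().toString();
    private final Map<String, Object> attributes = new ConcurrentHashMap<>();

    public String getId() { return id; }
    public Object getAttribute(String name) { return attributes.get(name); }
    public void setAttribute(String name, Object value) { attributes.put(name, value); }
}

// Sketch of the Step 2 repository contract. The Map below is only a
// stand-in for a GemFire Region<String, SimpleSession>.
class GemFireSessionRepositorySketch {
    private final Map<String, SimpleSession> sessions = new ConcurrentHashMap<>();

    public SimpleSession createSession() { return new SimpleSession(); }
    public void save(SimpleSession session) { sessions.put(session.getId(), session); }
    public SimpleSession getSession(String id) { return sessions.get(id); }
    public void delete(String id) { sessions.remove(id); }
}
```

A real implementation would delegate these operations to a GemFire Region and, for Step 3, register a listener for expiration events so SessionDestroyEvent and clean-up can be triggered.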