How to put a Java class into the GemFire key-value structure - GemFire

gfsh>put --key=('id':'133abg125') --value=('firstname':'James','lastname':'Gosling') --region=/region --key-class=data.ProfileKey --value-class=data.ProfileDetails
Message : ClassNotFoundException data.ProfileKey
Result : false

You need to ensure that your domain classes are available on all servers as well. I'd suggest starting them with the --classpath <jarfile> option and pointing them at the jar containing the relevant classes.
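For example, a minimal sketch (the server name and jar path are placeholders):
gfsh>start server --name=server1 --classpath=/path/to/domain-classes.jar
Both data.ProfileKey and data.ProfileDetails need to be in that jar (or otherwise on each server's classpath) before the put will succeed.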

Related

Byte Buddy: apply advice to interface methods - specifically Spring Data JPA

I am trying to log all methods that are invoked in my Spring Boot application using a Byte Buddy-based Java agent.
I am able to log all layers except Spring Data JPA repositories, which are actually interfaces. Below is the agent initialization:
new AgentBuilder.Default()
    .type(ElementMatchers.hasSuperType(nameContains("com.soka.tracker.repository").and(ElementMatchers.isInterface())))
    .transform(new AgentBuilder.Transformer.ForAdvice()
        .include(TestAgent.class.getClassLoader())
        .advice(ElementMatchers.any(), "com.testaware.MyAdvice"))
    .installOn(instrumentation);
Any hints or workarounds that I can use to log when my repository methods are invoked? Below is a sample repository in question:
package com.soka.tracker.repository;
.....
@Repository
public interface GeocodeRepository extends JpaRepository<Geocodes, Integer> {
Optional<Geocodes> findByaddress(String currAddress);
}
Modified agent:
new AgentBuilder.Default()
    .ignore(new AgentBuilder.RawMatcher.ForElementMatchers(any(), isBootstrapClassLoader().or(isExtensionClassLoader())))
    .ignore(new AgentBuilder.RawMatcher.ForElementMatchers(nameStartsWith("net.bytebuddy.")
        .and(not(ElementMatchers.nameStartsWith(NamingStrategy.SuffixingRandom.BYTE_BUDDY_RENAME_PACKAGE + ".")))
        .or(nameStartsWith("sun.reflect."))))
    .type(ElementMatchers.nameContains("soka"))
    .transform(new AgentBuilder.Transformer.ForAdvice()
        .include(TestAgent.class.getClassLoader())
        .advice(any(), "com.testaware.MyAdvice"))
    //.with(AgentBuilder.Listener.StreamWriting.toSystemOut())
    .with(AgentBuilder.TypeStrategy.Default.REDEFINE)
    .installOn(instrumentation);
I see my advice around controller and service layers - JPA repository layer is not getting logged.
By default, Byte Buddy ignores synthetic types in its agent. I assume that Spring's repository classes are marked as such and therefore not processed.
You can set a custom ignore matcher by using the AgentBuilder DSL. By default, the following ignore matcher is set to ignore system classes and Byte Buddy's own types:
new RawMatcher.Disjunction(
    new RawMatcher.ForElementMatchers(any(), isBootstrapClassLoader().or(isExtensionClassLoader())),
    new RawMatcher.ForElementMatchers(nameStartsWith("net.bytebuddy.")
        .and(not(ElementMatchers.nameStartsWith(NamingStrategy.SuffixingRandom.BYTE_BUDDY_RENAME_PACKAGE + ".")))
        .or(nameStartsWith("sun.reflect."))
        .<TypeDescription>or(isSynthetic())))
You would probably need to remove the last condition.
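Concretely, that means installing the agent with an explicit ignore matcher, as in the modified agent above; a minimal sketch mirroring the default matcher with the isSynthetic() clause dropped:
new AgentBuilder.Default()
    // replaces the default ignore matcher; identical except for isSynthetic()
    .ignore(new AgentBuilder.RawMatcher.Disjunction(
        new AgentBuilder.RawMatcher.ForElementMatchers(any(), isBootstrapClassLoader().or(isExtensionClassLoader())),
        new AgentBuilder.RawMatcher.ForElementMatchers(nameStartsWith("net.bytebuddy.")
            .and(not(ElementMatchers.nameStartsWith(NamingStrategy.SuffixingRandom.BYTE_BUDDY_RENAME_PACKAGE + ".")))
            .or(nameStartsWith("sun.reflect.")))))
    ...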
For anybody visiting this question: I was able to work around the actual problem by logging the actual queries invoked during execution. Byte Buddy is awesome and very powerful. In my case I simply advise my DB connection pool classes and gather all the required telemetry:
.or(ElementMatchers.nameContains("com.zaxxer.hikari.pool.HikariProxyConnection"))
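Put together, the same agent can simply target the pool class instead of the repository interfaces (a sketch; TestAgent and com.testaware.MyAdvice are the same placeholders as above):
new AgentBuilder.Default()
    // match the Hikari connection proxy rather than the JPA interfaces
    .type(ElementMatchers.nameContains("com.zaxxer.hikari.pool.HikariProxyConnection"))
    .transform(new AgentBuilder.Transformer.ForAdvice()
        .include(TestAgent.class.getClassLoader())
        .advice(ElementMatchers.any(), "com.testaware.MyAdvice"))
    .installOn(instrumentation);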

How to add a ComponentScan.Filter in @EnableEntityDefinedRegions

When I added an includeFilter to @EnableEntityDefinedRegions, it still scanned the whole entity package and created all Region beans. How do I scan only a specific Region class? For example, only the "Address" Region.
package org.test.entity
@Getter
@Setter
@Region("Address")
public class GfAddress implements Serializable
package org.test.entity
@Getter
@Setter
@Region("CreditCard")
public class GfCreditCard implements Serializable
package org.test.package
public interface IAddressRepository extends GemfireRepository<GfAddress, String>
package org.test.package
public interface ICreditCardRepository extends GemfireRepository<GfCreditCard, String>
@Service
@ClientCacheApplication
@EnableGemfireRepositories(basePackageClasses = IAddressRepository.class, includeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, classes = IAddressRepository.class))
@EnableEntityDefinedRegions(basePackageClasses = GfAddress.class, includeFilters = @ComponentScan.Filter(type = FilterType.REGEX, pattern = "GfAddress*"))
public class AddressDataAccess
When I print all the beans that are loaded, I found out that the following beans are created.
Address
CreditCard
IAddressRepository
AddressDataAccess
Version
GemFire : 9.8.6
spring-data-gemfire : 2.1.0
Spring Boot : 2.1.0
Sorry for the delay.
First, have a look at the SDG JIRA ticket I filed, DATAGEODE-352 - "EnableEntityDefinedRegions.includeFilters are inappropriately overridden by framework provided include filters".
In this ticket, I describe a couple of workarounds to this bug (!) in the comments, starting here.
I'd also be careful about your REGEX. I am no Regular Expression expert, but I am certain "GfAddress*" will not properly match the application entity type you are searching for and trying to match even when you pick up the new SDG bits resolving the issue I filed.
I created a similar test, using REGEX, to verify the resolution of the issue, here. This is the REGEX I specified. Using "Programmer*" did not work, as I suspected! That is because, as a regular expression, "Programmer*" only matches "Programme" followed by zero or more 'r' characters, and the Spring RegexPatternTypeFilter matches the pattern against the FQCN, so a pattern like ".*Programmer" would be needed.
Technically, it would be better to be a bit more specific about your type matching and use a "ASSIGNABLE_TYPE" TypeFilter instead, as this test demonstrates.
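For example, a sketch of the configuration above using an ASSIGNABLE_TYPE filter instead (effective once you pick up the SDG bits resolving DATAGEODE-352):
@ClientCacheApplication
@EnableEntityDefinedRegions(basePackageClasses = GfAddress.class,
    includeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, classes = GfAddress.class))
public class AddressDataAccess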
Finally, while SDG 2.1.x is compatible with GemFire 9.8.x, SD[G] Lovelace, or 2.1.x (e.g. 2.1.18.RELEASE), is officially based on, and only "supports", VMware GemFire 9.5.x (currently 9.5.4).
SDG 2.2.x is officially based on, and "supports", VMware GemFire 9.8.x (currently at 9.8.8).
You can review the new SDG Version Compatibility Matrix for more details.
If you have more questions, please follow up here or in DATAGEODE-352.

NoValueFactoryException when using Zeroc Ice - Sliced vs. compact format?

I am trying to use an Ice client in an OSGi context. Running the server and a minimal example client in a non-OSGi environment works fine. With the client in an OSGi environment I get the following exception:
com.zeroc.Ice.NoValueFactoryException
reason = "no value factory found and compact format prevents slicing (the sender should use the sliced format instead)"
type = "::MyModule::Knowledge::CMKnowledge"
However, I am not 100% sure if the OSGi runtime makes a difference here. The Slice file looks like this:
module MyModule{
    module Knowledge{
        class KnowledgePart{
            string value;
        }
        class FMKnowledge extends KnowledgePart{}
        class CMKnowledge extends KnowledgePart{}
        interface IKnowledge{
            void sendKnowledge(KnowledgePart knowledge);
            FMKnowledge getFMKnowledge();
            CMKnowledge getCMKnowledge();
        }
    }
}
What does this exception mean in this context and how can I fix it? I already tried to set ["format:sliced"] instead of the implicitly used compact format.
The error means that the Ice run-time tried to load the MyModule.Knowledge.CMKnowledge class but failed to do so. You must ensure that the class loader used by the application can load the MyModule.Knowledge.CMKnowledge class.
See also https://doc.zeroc.com/ice/3.7/language-mappings/java-mapping/custom-class-loaders
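In an OSGi environment this typically means handing Ice a class loader that can see the generated Slice classes. A minimal sketch, assuming the Ice 3.7 Java mapping described in the linked docs (MyActivator is a hypothetical placeholder for a class in the bundle that contains the generated code):
import com.zeroc.Ice.Communicator;
import com.zeroc.Ice.InitializationData;
import com.zeroc.Ice.Util;

InitializationData initData = new InitializationData();
// a class loader that can load MyModule.Knowledge.CMKnowledge,
// e.g. the class loader of the bundle containing the generated code
initData.classLoader = MyActivator.class.getClassLoader();
Communicator communicator = Util.initialize(args, initData);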

CMS Hippo: reading properties from a .yaml file

I need to read the properties stated in one of my .yaml files (e.g. banner.yaml). These properties should be read in a Java class so that they can be accessed and operated on accordingly.
This is my label.yaml file
/content/documents/administration/labels:
  jcr:primaryType: hippostd:folder
  jcr:mixinTypes: ['mix:referenceable']
  jcr:uuid: 7ec0e757-373b-465a-9886-d072bb813f58
  hippostd:foldertype: [new-resource-bundle, new-untranslated-folder]
  /global:
    jcr:primaryType: hippo:handle
    jcr:mixinTypes: ['hippo:named', 'mix:referenceable']
    jcr:uuid: 31e4796a-4025-48a5-9a6e-c31ba1fb387e
    hippo:name: Global
How should I access the hippo:name property so that it returns "Global" as the value in one of my Java classes?
Any help will be appreciated.
Create a class which extends BaseHstComponent, which allows you to make use of HST content beans.
Create a Session object; for this you need valid credentials for your repository.
Session session = repository.login("admin", "admin".toCharArray());
Now create a javax.jcr.Node object; for this you need the relative path to your node.
In your case it will be /content/documents/administration/labels/global
Node node = session.getRootNode().getNode("content/documents/administration/labels/global");
Now, by using the getProperty method, you can access the property.
node.getProperty("hippo:name");
You can refer to the link https://www.onehippo.org/library/concepts/content-repository/jcr-interface.html
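Putting the steps together, a minimal sketch (repository is the repository reference from step 2; the credentials are placeholders):
import javax.jcr.Node;
import javax.jcr.Session;

Session session = repository.login("admin", "admin".toCharArray());
try {
    // getNode() takes a path relative to the root node, without a leading '/'
    Node node = session.getRootNode().getNode("content/documents/administration/labels/global");
    System.out.println(node.getProperty("hippo:name").getString()); // prints "Global"
} finally {
    session.logout();
}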
You can't read a YAML file from within the application; the YAML file is bootstrapped into the repository. The data you show represents a resource bundle. You can access it programmatically using the utility class ResourceBundleUtils#getBundle.
Or set the resource bundle on a template; then you can use the keys as normal.
I strongly suggest you follow our tutorials before continuing.
more details here:
https://www.onehippo.org/library/concepts/translations/hst-2-dynamic-resource-bundles-support.html
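For completeness, a sketch of the programmatic route via ResourceBundleUtils (assuming the getBundle(basename, locale) overload described in the linked documentation; "administration.labels" is a hypothetical bundle id):
import java.util.Locale;
import java.util.ResourceBundle;
import org.hippoecm.hst.resourcebundle.ResourceBundleUtils;

// the basename must match the resource bundle id configured in the repository
ResourceBundle bundle = ResourceBundleUtils.getBundle("administration.labels", Locale.getDefault());
String value = bundle.getString("someKey");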

How to set Neo4J config keys in gremlin-scala?

When running a Neo4J database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in /etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that would mean that all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this here in the TinkerPop documentation on high availability configuration.
In TinkerPop 2.x the same approach was taken; however, the key prefix was instead blueprints.neo4j.conf.* as discussed here.
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected in org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do instead was the following:
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._
val factory = new Neo4jFactoryImpl()
val config = Map(
  "online_backup_enabled" -> "true",
  "online_backup_server" -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory, config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;

BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
graph = Neo4jGraph.open(conf);