I'm running a Gremlin Server with Bitsy, using the conf and properties files given in the GitHub repo (https://github.com/lambdazen/bitsy/tree/master/src/test/resources/gremlin-server). I have, of course, changed dbPath to an appropriate path.
My client is a Kotlin application using the Gremlin driver. I have no problem executing queries that return string or map values, for example:
val encodedPasswd = getg().V().has("user", "login", login).values<String>("password").next()
However, when attempting to get vertices from queries:
val user = getg().V().has("user", "login", login).next()
I get a deserialization error:
[gremlin-driver-loop-1] WARN org.apache.tinkerpop.gremlin.driver.ser.AbstractGraphSONMessageSerializerV2d0 - Response [PooledUnsafeDirectByteBuf(ridx: 7446, widx: 7446, cap: 7446)] could not be deserialized by org.apache.tinkerpop.gremlin.driver.ser.AbstractGraphSONMessageSerializerV2d0.
org.apache.tinkerpop.shaded.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `org.apache.tinkerpop.gremlin.driver.message.ResponseMessage` (no Creators, like default construct, exist): cannot deserialize from Object value (no delegate- or property-based Creator)
at [Source: (byte[])"{"requestId":"5e78c0ba-64d2-4b31-8b04-262c3fa2b3b8","status":{"message":"Error during serialization: Direct self-reference leading to cycle (through reference chain: com.lambdazen.bitsy.store.VertexBean[\"id\"])","code":599,"attributes":{"#type":"g:Map","#value":["stackTrace","org.apache.tinkerpop.gremlin.driver.ser.SerializationException: org.apache.tinkerpop.shaded.jackson.databind.exc.InvalidDefinitionException: Direct self-reference leading to cycle (through reference chain: com.lambdazen.bi"[truncated 6946 bytes]; line: 1, column: 2]
at org.apache.tinkerpop.shaded.jackson.databind.exc.InvalidDefinitionException.from(InvalidDefinitionException.java:67)
at org.apache.tinkerpop.shaded.jackson.databind.DeserializationContext.reportBadDefinition(DeserializationContext.java:1451)
at org.apache.tinkerpop.shaded.jackson.databind.DeserializationContext.handleMissingInstantiator(DeserializationContext.java:1027)
at org.apache.tinkerpop.shaded.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1297)
at org.apache.tinkerpop.shaded.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:326)
at org.apache.tinkerpop.shaded.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserialize(GraphSONTypeDeserializer.java:212)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserializeTypedFromObject(GraphSONTypeDeserializer.java:86)
at org.apache.tinkerpop.shaded.jackson.databind.deser.BeanDeserializerBase.deserializeWithType(BeanDeserializerBase.java:1178)
at org.apache.tinkerpop.shaded.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:68)
at org.apache.tinkerpop.shaded.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4013)
at org.apache.tinkerpop.shaded.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3091)
at org.apache.tinkerpop.gremlin.driver.ser.AbstractGraphSONMessageSerializerV2d0.deserializeResponse(AbstractGraphSONMessageSerializerV2d0.java:134)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinResponseDecoder.decode(WebSocketGremlinResponseDecoder.java:50)
...
Cluster construction:
var cluster = Cluster.build()
    .serializer(
        GryoMessageSerializerV3d0(
            GryoMapper.build()
                .addRegistry(BitsyIoRegistryV3d0.instance())
        )
    )
    .serializer(
        GraphSONMessageSerializerV3d0(
            GraphSONMapper.build()
                .addRegistry(BitsyIoRegistryV3d0.instance())
                .create()
        )
    )
    .create()
and traversal source construction:
fun getg(): GraphTraversalSource {
    return EmptyGraph.instance().traversal().withRemote(DriverRemoteConnection.using(cluster))
}
Relevant parts of my Gradle build:
dependencies {
    ...
    implementation("org.apache.tinkerpop:gremlin-driver:3.3.4")
    implementation("com.lambdazen.bitsy:bitsy:3.1.0")
    implementation("com.fasterxml.jackson.module:jackson-module-kotlin:2.9.+")
}
Why can't the driver deserialise messages from the server? What do I change to allow it to?
I'm not completely sure, but I'm guessing that Bitsy lacks full support for GraphSON 3.0. If you look at its IoRegistry implementation, it registers no custom serializers for GraphSON (just for Gryo):
https://github.com/lambdazen/bitsy/blob/c0dd4b6c9d6dc9987d0168c91417eb04d80bf712/src/main/java/com/lambdazen/bitsy/BitsyIoRegistryV3d0.java
I assume that's why you get a serialization error. If you switch to using just Gryo (which is probably recommended to some extent anyway, since you are on the JVM with Kotlin), I imagine you won't have this problem.
Note also that your Cluster object definition is not configured the way you expect. By calling serializer() twice (i.e. once for Gryo and once for GraphSON) you actually overwrite the Gryo configuration and simply configure the object to work with GraphSON.
So, I think you would just do this:
var cluster = Cluster.build()
    .serializer(
        GryoMessageSerializerV3d0(
            GryoMapper.build()
                .addRegistry(BitsyIoRegistryV3d0.instance())
        )
    )
    .create()
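For this to work end to end, the server must advertise a matching Gryo serializer with the Bitsy registry as well. The sample configuration linked in the question likely already includes it, but for reference the relevant entry in the Gremlin Server YAML looks roughly like this (surrounding settings omitted):

```yaml
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0,
      config: { ioRegistries: [com.lambdazen.bitsy.BitsyIoRegistryV3d0] } }
```

If client and server do not share a serializer configuration, the driver falls back to errors like the one in the question.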
I'm using Reactive Panache for PostgreSQL. I need to take an application-level lock (Redis), inside which I need to perform certain operations. However, the Panache library throws the following error:
java.lang.IllegalStateException: HR000069: Detected use of the reactive Session from a different Thread than the one which was used to open the reactive Session - this suggests an invalid integration; original thread [222]: 'vert.x-eventloop-thread-3' current Thread [154]: 'vert.x-eventloop-thread-2'
My code looks something like this:
redissonClient.getLock("lock").lock(this.leaseTimeInMillis, TimeUnit.MILLISECONDS, this.lockId)
    .chain(() -> Panache.withTransaction(() -> Uni.createFrom().nullItem())
        .eventually(lock::release)
    );
Solutions such as the ones mentioned in this issue show the correct use with AWS SDK but not when used in conjunction with something like redisson. Does anyone have this working with redisson?
Update:
I tried the following on lock acquire and release:
.runSubscriptionOn(MutinyHelper.executor(Vertx.currentContext()))
This fails with the following error even though I have quarkus-vertx dependency added:
Cannot invoke "io.vertx.core.Context.runOnContext(io.vertx.core.Handler)" because "context" is null
Panache might not be the best choice in this case.
I would try using Hibernate Reactive directly:
@Inject
Mutiny.SessionFactory factory;
...
redissonClient.getLock("lock")
    .lock(this.leaseTimeInMillis, TimeUnit.MILLISECONDS, this.lockId)
    .chain(() -> factory.withTransaction(session -> Uni.createFrom().nullItem())
        .eventually(lock::release));
I create a Kotlin scratch file in IntelliJ IDEA and use my current project's module classpath in order to access all libraries of the project (I'm using Jackson in this example).
In both scenarios I have declared the following class:
class Test(var first: String = "a", var second: String = "b")
Without REPL enabled
val jsonAsString = "{\"first\": \"a\", \"second\":\"b\"}"
println(ObjectMapper().readValue(jsonAsString, Test::class.java).first) // prints out "a"
"a" is printed out as expected
With REPL enabled, ObjectMapper.readValue() throws the following exception:
com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of `Line_2$Test` (no Creators, like default constructor, exist): cannot deserialize from Object value (no delegate- or property-based Creator)
at [Source: (String)"{"first": "a", "second":"b"}"; line: 1, column: 2]
at com.fasterxml.jackson.databind.exc.InvalidDefinitionException.from(InvalidDefinitionException.java:67)
at com.fasterxml.jackson.databind.DeserializationContext.reportBadDefinition(DeserializationContext.java:1904)
at com.fasterxml.jackson.databind.DatabindContext.reportBadDefinition(DatabindContext.java:400)
at com.fasterxml.jackson.databind.DeserializationContext.handleMissingInstantiator(DeserializationContext.java:1349)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1415)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:351)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:184)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4674)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3629)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3597)
I reproduced the bug and created an issue - https://youtrack.jetbrains.com/issue/KTIJ-21598/Scratch-REPL:-%22InvalidDefinitionException:-Cannot-construct-inst. Feel free to follow it.
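I'm not certain of the exact mechanics, but the class name `Line_2$Test` in the stack trace suggests the REPL compiles each scratch line into a wrapper class (`Line_2`) and nests your declaration inside it. If `Test` ends up as a non-static inner class, its constructor implicitly takes the enclosing instance as a parameter, so Jackson no longer finds a no-arg constructor. A plain-reflection sketch of that difference (class names are illustrative, not taken from the scratch file):

```java
import java.lang.reflect.Constructor;

public class CtorDemo {
    static class TopLevelLike { }  // static nested: keeps a true no-arg constructor
    class InnerLike { }            // inner class: constructor takes the enclosing instance

    public static void main(String[] args) {
        // Jackson's BeanDeserializer needs a no-arg constructor (or a @JsonCreator);
        // an implicit enclosing-instance parameter breaks that expectation.
        Constructor<?> topCtor = TopLevelLike.class.getDeclaredConstructors()[0];
        Constructor<?> innerCtor = InnerLike.class.getDeclaredConstructors()[0];
        System.out.println(topCtor.getParameterCount());   // 0 -> deserializable
        System.out.println(innerCtor.getParameterCount()); // 1 -> "no Creators ... exist"
    }
}
```

This would explain why the same class works when compiled normally but fails under the REPL wrapper.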
We have implemented a custom construction heuristic with OptaPlanner 7. We didn't use a simple CustomPhaseCommand; instead, we extend MoveSelectorConfig and override buildBaseMoveSelector to return our own MoveFactory wrapped in a MoveIteratorFactoryToMoveSelectorBridge. We decided to do so because it gives us the following advantages:
global termination config is supported out of the box
type safe configuration from code (no raw Strings)
With OptaPlanner 8, the method buildBaseMoveSelector is gone from the MoveSelectorConfig API, and building a custom config class seems to be prevented by the new implementation of MoveSelectorFactory.
Is it still possible to inject a proper custom construction heuristic into the OptaPlanner 8 configuration, and if yes, how? Or should we be using a CustomPhaseCommand with a custom, self-implemented termination?
EDIT:
For clarity, in OptaPlanner 7 we had the following snippet in our OptaPlanner config (defined in Kotlin code):
ConstructionHeuristicPhaseConfig().apply {
    foragerConfig = ConstructionHeuristicForagerConfig().apply {
        pickEarlyType = FIRST_FEASIBLE_SCORE
    }
    entityPlacerConfig = QueuedEntityPlacerConfig().apply {
        moveSelectorConfigList = listOf(
            CustomMoveSelectorConfig().apply {
                someProperty = 1
                otherProperty = 0
            }
        )
    }
}
CustomMoveSelectorConfig extends MoveSelectorConfig and overrides buildBaseMoveSelector:
class CustomMoveSelectorConfig(
    var someProperty: Int = 0,
    var otherProperty: Int = 0,
) : MoveSelectorConfig<CustomMoveSelectorConfig>() {
    override fun buildBaseMoveSelector(
        configPolicy: HeuristicConfigPolicy?,
        minimumCacheType: SelectionCacheType?,
        randomSelection: Boolean,
    ): MoveSelector {
        return MoveIteratorFactoryToMoveSelectorBridge(
            CustomMoveFactory(someProperty, otherProperty),
            randomSelection
        )
    }
}
To summarize: we really need to plug in our own MoveSelector with the custom factory. I think this is not possible with OptaPlanner 8 at the moment.
Interesting extension.
Motivation for the changes in 8:
buildBaseMoveSelector was not public API (the config package was not in the api package; in 7 we only guaranteed XML backwards compatibility for the config package). Now we also guarantee API backwards compatibility for the config package, thus including programmatic configuration, because we moved all build* methods out of it.
In 8.2 or later we want to internalize the configuration in the SolverFactory, so we can build thousands of Solver instances faster. For example, class loading would then be done only once, at SolverFactory build time, and no longer at every Solver build.
Anyway, let's first see if you can use the default way to override the moves of the CH, by explicitly configuring the queuedEntityPlacer and its MoveIteratorFactory: https://docs.optaplanner.org/latestFinal/optaplanner-docs/html_single/#allocateEntityFromQueueConfiguration
I guess not, because you'd need mimic support... for the selected entity you need to generate n moves, but always the same entity during one placement (hence the need for mimicking).
Clearly, the changes in 8 prevent users from plugging in their own MoveSelectors (which is an internal API, but anyway). We might be able to add an internal API to allow that again.
I'm new to Graphs in general.
I'm attempting to store a TinkerPop graph that I've created dynamically into Gremlin Server, so that I can issue Gremlin queries against it.
Consider the following code:
Graph inMemoryGraph = TinkerGraph.open();
inMemoryGraph.io(IoCore.graphml()).readGraph("test.graphml");
GraphTraversalSource g = inMemoryGraph.traversal();

List<Result> results = client.submit("g.V().valueMap()").all().get();
I need some glue code. The Gremlin query here is issued against the modern graph, which is the default binding for the g variable. I would like to somehow store my inMemoryGraph so that when I run a Gremlin query, it's run against my graph.
All graph configurations in Gremlin Server must occur through its YAML configuration file. Since you say you're connected to the modern graph, I'll assume that you're using the default "modern" configuration file that ships with the standard distribution of Gremlin Server. If that is the case, then you should look at conf/gremlin-server-modern.yaml. You'll notice this:
graphs: {
graph: conf/tinkergraph-empty.properties}
That creates a Graph reference in Gremlin Server called "graph" which you can reference from scripts. Next, note this second configuration:
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: [scripts/generate-modern.groovy]}}}
Specifically, pay attention to scripts/generate-modern.groovy which is a Gremlin Server initialization script. Opening that up you will see this:
// an init script that returns a Map allows explicit setting of global bindings.
def globals = [:]

// Generates the modern graph into an "empty" TinkerGraph via LifeCycleHook.
// Note that the name of the key in the "global" map is unimportant.
globals << [hook: [
    onStartUp: { ctx ->
        ctx.logger.info("Loading 'modern' graph data.")
        org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerFactory.generateModern(graph)
    }
] as LifeCycleHook]

// define the default TraversalSource to bind queries to - this one will be named "g".
globals << [g: graph.traversal()]
The comments should do most of the explaining. The connection here is that you need to inject your graph initialization code into this script and assign your inMemoryGraph.traversal() to g or whatever variable name you wish to use to identify it on the server. All of this is described in the Reference Documentation.
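To make that concrete for your case, the same hook could be adapted roughly as follows to load your GraphML file instead (the data/test.graphml path and log message are placeholders; the file must be readable by the server process):

```groovy
// an init script that returns a Map allows explicit setting of global bindings.
def globals = [:]

// Load a custom GraphML file into the "empty" TinkerGraph via LifeCycleHook.
globals << [hook: [
    onStartUp: { ctx ->
        ctx.logger.info("Loading custom graph data.")
        graph.io(org.apache.tinkerpop.gremlin.structure.io.IoCore.graphml()).readGraph("data/test.graphml")
    }
] as LifeCycleHook]

// bind the TraversalSource for the loaded graph to "g"
globals << [g: graph.traversal()]
```

After a server restart, queries submitted against g would then run over the data loaded from the file.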
There is a way to make this work in a more dynamic fashion, but it involves extending Gremlin Server through its interfaces. You would have to build a custom GraphManager - the interface can be found here. Then you would set the graphManager key in the server configuration file with the fully qualified name of your instance.
I am using Pivotal GemFire 9.0.0 with 1 Locator and 1 Server. The Server has a Region called "submissions", like below -
<gfe:replicated-region id="submissionsRegion" name="submissions"
statistics="true" template="replicateRegionTemplate">
...
</gfe:replicated-region>
I am getting a null Region when executing the following code -
Region<K, V> region = clientCache.getRegion("submissions");
Surprisingly, the same ClientCache returns all the records when I query using OQL and QueryService as shown below -
String queryString = "SELECT * FROM /submissions";
QueryService queryService = clientCache.getQueryService();
Query query = queryService.newQuery(queryString);
SelectResults results = (SelectResults) query.execute();
I am initializing my ClientCache like this -
ClientCache clientCache = new ClientCacheFactory()
    .addPoolLocator("localhost", 10479)
    .set("name", "MyClientCache")
    .set("log-level", "error")
    .create();
I am really baffled by this. Any pointer or help would be great.
You need to configure your ClientCache (either through a cache.xml or pure GemFire API) with the regions as well. Using your example:
ClientRegionFactory regionFactory = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY);
Region region = regionFactory.create("submissions");
The ClientRegionShortcut.PROXY is used here just for the sake of simplicity; you should use the shortcut that best meets your needs.
The OQL query works as expected because you are obtaining the QueryService through the ClientCache.getQueryService() method (instead of ClientCache.getLocalQueryService()), so the query is actually executed on the server side.
You can get more information about how to configure the Client/Server topology in
Client/Server Configuration.
Hope this helps.
Cheers.
Yes, you need to "define" the corresponding client-side Region, matching the server-side REPLICATE Region by name (i.e. "submissions"). Actually, this is a requirement independent of the server Region's DataPolicy type (e.g. REPLICATE or PARTITION).
This is necessary since not every client wants to know about, or even needs to have, data/events from every possible server Region. Of course, this is also configurable through subscription and "Interest Registration" (with Client/Server Event Messaging, or alternatively, CQs).
Anyway, you can completely avoid using the GemFire API directly, or even GemFire's native cache.xml (which I highly recommend avoiding), by using either SDG's XML namespace...
<gfe:client-cache properties-ref="gemfireProperties" ... />
<gfe:client-region id="submissions" shortcut="PROXY"/>
Or by using Spring JavaConfig with SDG's API...
@Configuration
class GemFireConfiguration {

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("log-level", "config");
        ...
        return gemfireProperties;
    }

    @Bean
    ClientCacheFactoryBean gemfireCache() {
        ClientCacheFactoryBean gemfireCache = new ClientCacheFactoryBean();
        gemfireCache.setClose(true);
        gemfireCache.setProperties(gemfireProperties());
        ...
        return gemfireCache;
    }

    @Bean(name = "submissions")
    ClientRegionFactoryBean submissionsRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean submissions = new ClientRegionFactoryBean();
        submissions.setCache(gemfireCache);
        submissions.setClose(false);
        submissions.setShortcut(ClientRegionShortcut.PROXY);
        ...
        return submissions;
    }
    ...
}
The "submissions" Region can be wrapped with SDG's GemfireTemplate, which will handle getting the "correct" QueryService on your behalf when running queries using the find(..) method.
You may also be interested in making your client "submissions" Region a CACHING_PROXY. You will then need to register "interest" in the keys or data of interest; CQs are the best way to do this, since they use query criteria to define the data of interest.
CACHING_PROXY is exactly what it sounds like: it caches data locally in the client based on the interest policies. This also gives you the ability to use the "local" QueryService to query data locally, avoiding the network hop.
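As a rough sketch of that variant in SDG's XML namespace (the regex pattern here is just a blanket example registering interest in all keys; a CQ or narrower key interest would usually be preferable):

```xml
<gfe:client-region id="submissions" shortcut="CACHING_PROXY">
    <gfe:regex-interest pattern=".*" receive-values="true"/>
</gfe:client-region>
```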
Anyway, many options here.
Cheers,
John