I am using JanusGraph 0.5.3 with Cassandra as the backend. With spring-data-gremlin 2.3.0, spring-boot 2.5.0, and tinkerpop 3.4.6, I am trying to connect to JanusGraph. A vertex is created successfully, and using the Gremlin console I can query and view it. But with spring-data-gremlin and TinkerPop, the deserialization step fails. Following other posts and discussions, I tried different serializers, including GraphSONMessageSerializerV3d0. The error message is
at [Source: (byte[])"{"requestId":"2c9fc2cc-de02-4f19-b2d3-64d20c96530a","status":{"message":"","code":200,"attributes":{"#type":"g:Map","#value":["host","/172.19.0.1:35744"]}},"result":{"data":{"#type"
:"g:List","#value":[{"#type":"g:Vertex","#value":{"id":{"#type":"g:Int64","#value":1000},"label":"Product","properties":{"name":[{"#type":"g:VertexProperty","#value":{"id":{"#type":"janusgraph:RelationId
entifier","#value":{"relationId":"215s0t-rs-35x"}},"value":"Product-11","label":"name"}}],"_classname":[{"#type":"g:"[truncated 235 bytes]; line: 1, column: 404] (through reference chain: java.util.Linke
dHashMap["result"]->java.util.LinkedHashMap["data"])
at org.apache.tinkerpop.shaded.jackson.databind.JsonMappingException.from(JsonMappingException.java:271)
at org.apache.tinkerpop.shaded.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:1718)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserialize(GraphSONTypeDeserializer.java:194)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserializeTypedFromAny(GraphSONTypeDeserializer.java:101)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.UntypedObjectDeserializer.deserializeWithType(UntypedObjectDeserializer.java:312)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.MapDeserializer._readAndBindStringKeyMap(MapDeserializer.java:529)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:364)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:29)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserialize(GraphSONTypeDeserializer.java:219)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserializeTypedFromAny(GraphSONTypeDeserializer.java:101)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.UntypedObjectDeserializer.deserializeWithType(UntypedObjectDeserializer.java:312)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.MapDeserializer._readAndBindStringKeyMap(MapDeserializer.java:529)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:364)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:29)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserialize(GraphSONTypeDeserializer.java:212)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserializeTypedFromObject(GraphSONTypeDeserializer.java:86)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.MapDeserializer.deserializeWithType(MapDeserializer.java:400)
at org.apache.tinkerpop.shaded.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:68)
at org.apache.tinkerpop.shaded.jackson.databind.DeserializationContext.readValue(DeserializationContext.java:760)
at org.apache.tinkerpop.shaded.jackson.databind.DeserializationContext.readValue(DeserializationContext.java:747)
at org.apache.tinkerpop.gremlin.structure.io.graphson.AbstractObjectDeserializer.deserialize(AbstractObjectDeserializer.java:48)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserialize(GraphSONTypeDeserializer.java:212)
at org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONTypeDeserializer.deserializeTypedFromAny(GraphSONTypeDeserializer.java:101)
at org.apache.tinkerpop.shaded.jackson.databind.deser.std.StdDeserializer.deserializeWithType(StdDeserializer.java:136)
at org.apache.tinkerpop.shaded.jackson.databind.deser.impl.TypeWrappedDeserializer.deserialize(TypeWrappedDeserializer.java:68)
The JSON it is trying to deserialize is the truncated payload shown in the error message above.
Using the Gremlin console, I serialized the vertex; its JSON is
{
  "id": {
    "#type": "g:Int64",
    "#value": 1000
  },
  "label": "Product",
  "outE": {
    "also_bought": [
      {
        "id": {
          "#type": "janusgraph:RelationIdentifier",
          "#value": {
            "relationId": "215st9-rs-8sut-8hk"
          }
        },
        "inV": {
          "#type": "g:Int64",
          "#value": 11000
        }
      },
      {
        "id": {
          "#type": "janusgraph:RelationIdentifier",
          "#value": {
            "relationId": "215t7h-rs-8sut-99c"
          }
        },
        "inV": {
          "#type": "g:Int64",
          "#value": 12000
        }
      }
    ]
  },
  "properties": {
    "name": [
      {
        "id": {
          "#type": "janusgraph:RelationIdentifier",
          "#value": {
            "relationId": "215s0t-rs-35x"
          }
        },
        "value": "Product-11"
      }
    ],
    "_classname": [
      {
        "id": {
          "#type": "janusgraph:RelationIdentifier",
          "#value": {
            "relationId": "215sf1-rs-3yd"
          }
        },
        "value": "com.divvet.store.productgraph.domain.Product"
      }
    ]
  }
}
This does not match the spring-data-gremlin payload shown in the error above, where the vertex is nested two levels down (result -> data -> g:List -> g:Vertex).
The properties in remote-graph.properties are
gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
gremlin.remote.driver.clusterFile=conf/remote-objects.yaml
gremlin.remote.driver.sourceName=g
and remote-objects.yaml is
hosts: [localhost]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0,
  config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }
}
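For comparison, the same registry can be wired up programmatically with the plain TinkerPop Java driver. This is a minimal sketch, assuming the server settings above; note that the registry accessor is getInstance() in JanusGraph 0.5.x (depending on the JanusGraph version this may instead be instance()):

import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0;
import org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONMapper;
import org.apache.tinkerpop.gremlin.structure.io.graphson.GraphSONVersion;
import org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry;

public class RemoteGraphExample {
    public static void main(String[] args) throws Exception {
        // Register the JanusGraph types (e.g. RelationIdentifier) with the GraphSON 3
        // mapper, mirroring the ioRegistries entry in remote-objects.yaml.
        GraphSONMapper.Builder mapper = GraphSONMapper.build()
                .version(GraphSONVersion.V3_0)
                .addRegistry(JanusGraphIoRegistry.getInstance());

        Cluster cluster = Cluster.build("localhost")
                .port(8182)
                .serializer(new GraphSONMessageSerializerV3d0(mapper))
                .create();

        GraphTraversalSource g = AnonymousTraversalSource.traversal()
                .withRemote(DriverRemoteConnection.using(cluster, "g"));

        // This is the kind of call that previously failed to deserialize; with the
        // registry registered, the janusgraph:RelationIdentifier type is understood.
        System.out.println(g.V().hasLabel("Product").toList());

        cluster.close();
    }
}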
For the serializer, I tried all of the serializers listed here.
Please let me know if you need more details and how to resolve this.
I'm trying to query the TinkerPop server (hosted inside a Docker container) via the CosmosDB client library, which uses Gremlin.Net under the hood. I managed to connect and insert data; here's the intercepted WebSocket request:
!application/vnd.gremlin-v1.0+json{
  "requestId": "b64bd2eb-46c3-4095-9eef-768bca2a14ed",
  "op": "eval",
  "processor": "",
  "args": {
    "gremlin": "g.addV(\"User\").property(\"UserId\",2).property(\"CustomerId\",1)"
  }
}
The response:
{
  "requestId": "b64bd2eb-46c3-4095-9eef-768bca2a14ed",
  "status": {
    "message": "",
    "code": 200,
    "attributes": {
      "host": "/172.19.0.1:38848"
    }
  },
  "result": {
    "data": [
      {
        "id": 0,
        "label": "User",
        "type": "vertex",
        "properties": {}
      }
    ],
    "meta": {}
  }
}
The problem is that I can see those properties when I'm connected via the Gremlin console:
gremlin> g.V().hasLabel("User").has("CustomerId",1).has("UserId",2).limit(1).valueMap()
==>{UserId=[2], CustomerId=[1]}
Also, I'm able to query the TinkerPop server with Gremlin.Net:
!application/vnd.gremlin-v1.0+json{
  "requestId": "de35909f-4bc1-4aae-aa5f-28361b3c0933",
  "op": "eval",
  "processor": "",
  "args": {
    "gremlin": "g.V().hasLabel(\"User\").has(\"CustomerId\",1).has(\"UserId\",2).limit(1)"
  }
}
But it returns a payload with a zero-valued ID and no properties included:
{
  "requestId": "de35909f-4bc1-4aae-aa5f-28361b3c0933",
  "status": {
    "message": "",
    "code": 200,
    "attributes": {
      "host": "/172.19.0.1:38858"
    }
  },
  "result": {
    "data": [
      {
        "id": 0,
        "label": "User",
        "type": "vertex",
        "properties": {}
      }
    ],
    "meta": {}
  }
}
I tried swapping between GraphSON v1, v2, and v3 with no luck. The documentation says that script serializers should include all the properties. Do I have to tweak the config somehow to make this work and return properties?
It seems that as of Gremlin Server 3.4, ReferenceElementStrategy is added by default to traversals, to preserve compatibility between binary and script serializers. In our case we wanted to mimic the behavior of CosmosDB, so to get the desired behavior just remove the strategy from the init script (in our case empty-sample.groovy), changing
globals << [g : graph.traversal().withStrategies(ReferenceElementStrategy.instance())]
to
globals << [g : graph.traversal()]
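Alternatively, if you keep ReferenceElementStrategy in place, you can project the properties explicitly in the traversal instead of relying on the returned vertex. A minimal Gremlin-Java sketch (the traversal mirrors the query above; valueMap(true) also includes id and label):

import java.util.Map;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;

public class UserLookup {
    // With ReferenceElementStrategy active, returned vertices carry only id and
    // label, so ask for the property values explicitly in the traversal itself.
    static Map<Object, Object> findUser(GraphTraversalSource g) {
        return g.V().hasLabel("User")
                .has("CustomerId", 1)
                .has("UserId", 2)
                .limit(1)
                .valueMap(true) // true -> include id and label alongside the properties
                .next();
    }
}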
I've successfully created my knowledge base using the API.
But I forgot to add some alternative questions and metadata for one of the pairs.
I've noticed the PATCH method in the API to update the knowledge base, so updating the KB is supported.
I've created a payload which looked like this:
{
  "add": {},
  "delete": {},
  "update": {
    "qnaList": [
      {
        "id": 1,
        "answer": "Answer",
        "source": "link_to_source",
        "questions": [
          "Question 1?",
          "Question 2?"
        ],
        "metadata": [
          {
            "name": "oldMetadata",
            "value": "oldMetadata"
          },
          {
            "name": "newlyAddedMetaData",
            "value": "newlyAddedMetaData"
          }
        ]
      }
    ]
  }
}
I get back the following response, HTTP 202 Accepted:
{
  "operationState": "NotStarted",
  "createdTimestamp": "2018-05-21T07:46:52Z",
  "lastActionTimestamp": "2018-05-21T07:46:52Z",
  "userId": "user_uuid",
  "operationId": "operation_uuid"
}
So it looks like it worked, but in reality this request has no effect.
When I check the operation details, it returns the following:
{
  "operationState": "Succeeded",
  "createdTimestamp": "2018-05-21T07:46:52Z",
  "lastActionTimestamp": "2018-05-21T07:46:54Z",
  "resourceLocation": "/knowledgebases/kb_uuid",
  "userId": "user_uuid",
  "operationId": "operation_uuid"
}
What am I doing wrong? And how should I update my KB via the API properly?
Please help
I had the same problem. I discovered that it is necessary to include all the fields of the JSON, even the ones that are not used.
In your case you need "name" and "urls" in the "update" section and "delete" in the "update/qnaList/questions" section:
{
  "add": {},
  "delete": {},
  "update": {
    "name": "nameofKbBase", //this
    "qnaList": [
      {
        "id": 2370,
        "answer": "DemoAnswerEdit",
        "source": "CustomSource",
        "questions": {
          "add": [
            "DemoQuestionEdit"
          ],
          "delete": [] //this
        },
        "metadata": {}
      }
    ],
    "urls": [] //this
  }
}
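For reference, here is a minimal sketch of sending that corrected payload with Java 11's HttpClient. The endpoint shape and headers follow the QnA Maker v4.0 REST API; the region, KB id, and subscription key are placeholders you must substitute:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateKb {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint shape for QnA Maker v4.0; adjust region/resource and KB id.
        String kbId = "your-kb-id";
        String endpoint = "https://westus.api.cognitive.microsoft.com/qnamaker/v4.0/knowledgebases/" + kbId;
        String payload = "{ \"add\": {}, \"delete\": {}, \"update\": { \"name\": \"nameofKbBase\", "
                + "\"qnaList\": [ { \"id\": 2370, \"answer\": \"DemoAnswerEdit\", \"source\": \"CustomSource\", "
                + "\"questions\": { \"add\": [\"DemoQuestionEdit\"], \"delete\": [] }, \"metadata\": {} } ], "
                + "\"urls\": [] } }";

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Ocp-Apim-Subscription-Key", "your-subscription-key")
                .header("Content-Type", "application/json")
                // HttpClient has no patch() helper, so use the generic method(...)
                .method("PATCH", HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Expect 202 Accepted with an operationId; poll the operation for completion.
        System.out.println(response.statusCode() + " " + response.body());
    }
}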
Here's the CloudFormation template I wrote to create a simple S3 bucket. How do I specify the name of the bucket? Is this the right way?
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Simple S3 Bucket",
  "Parameters": {
    "OwnerService": {
      "Type": "String",
      "Default": "CloudOps",
      "Description": "Owner or service name. Used to identify the owner of the vpc stack"
    },
    "ProductCode": {
      "Type": "String",
      "Default": "cloudops",
      "Description": "Lowercase version of the product code (i.e. jem). Used for tagging"
    },
    "StackEnvironment": {
      "Type": "String",
      "Default": "stage",
      "Description": "Lowercase version of the environment name (i.e. stage). Used for tagging"
    }
  },
  "Mappings": {
    "RegionMap": {
      "us-east-1": { "ShortRegion": "ue1" },
      "us-west-1": { "ShortRegion": "uw1" },
      "us-west-2": { "ShortRegion": "uw2" },
      "eu-west-1": { "ShortRegion": "ew1" },
      "ap-southeast-1": { "ShortRegion": "as1" },
      "ap-northeast-1": { "ShortRegion": "an1" },
      "ap-northeast-2": { "ShortRegion": "an2" }
    }
  },
  "Resources": {
    "JenkinsBuildBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": {
          "Fn::Join": [
            "-",
            [
              { "Ref": "ProductCode" },
              { "Ref": "StackEnvironment" },
              "deployment",
              { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "ShortRegion" ] }
            ]
          ]
        },
        "AccessControl": "Private"
      },
      "DeletionPolicy": "Delete"
    }
  },
  "Outputs": {
    "DeploymentBucket": {
      "Description": "Bucket Containing Chef files",
      "Value": { "Ref": "JenkinsBuildBucket" }
    }
  }
}
Here's a really simple CloudFormation template that creates an S3 bucket, including defining the bucket name.
AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: sample-bucket-0827-cc
You can also leave the "Properties: BucketName" lines off if you want AWS to name the bucket for you. The name will then look like $StackName-SampleBucket-$uniqueIdentifier.
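For example, a minimal auto-named variant looks like this:

AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket, letting AWS pick the name
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket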
Hope this helps.
Your code has the BucketName already specified:
"BucketName": {
"Fn::Join": [
"-",
[
{
"Ref": "ProductCode"
},
{
"Ref": "StackEnvironment"
},
"deployment",
{
"Fn::FindInMap": [
"RegionMap",
{
"Ref": "AWS::Region"
},
"ShortRegion"
]
}
]
]
},
The BucketName is a string, and since you are using Fn::Join, it will be built from the values you are joining.
"The intrinsic function Fn::Join appends a set of values into a single value, separated by the specified delimiter. If a delimiter is the empty string, the set of values are concatenated with no delimiter."
If you don't change the defaults, your bucket name is:
cloudops-stage-deployment-<ShortRegion>
If you change the default parameters, then both cloudops and stage can be changed; deployment is hard-coded; and the region suffix is pulled from where the stack is running and returned in short format via the mapping (for example, ue1 in us-east-1).
To extend cloudquiz's answer, this is what it would look like in YAML format:
Resources:
  SomeS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Join: ["-", ["yourbucketname", {'Fn::Sub': '${AWS::Region}'}, {'Fn::Sub': '${Stage}'}]]
I'm working with the Sabre REST API. I have an issue with OTA_AirLowFareSearchRQ: I try to limit the number of responses using MaxResponses in the JSON structure, but it seems I'm doing something wrong, because the response gives me 95 itineraries in the cert environment (https://api.cert.sabre.com/).
The JSON request that I use is:
{
  "OTA_AirLowFareSearchRQ": {
    "Target": "Production",
    "PrimaryLangID": "ES",
    "MaxResponses": "15",
    "POS": {
      "Source": [{
        "RequestorID": {
          "Type": "1",
          "ID": "1",
          "CompanyName": {}
        }
      }]
    },
    "OriginDestinationInformation": [{
      "RPH": "1",
      "DepartureDateTime": "2016-04-01T11:00:00",
      "OriginLocation": {
        "LocationCode": "BOG"
      },
      "DestinationLocation": {
        "LocationCode": "CTG"
      },
      "TPA_Extensions": {
        "SegmentType": {
          "Code": "O"
        }
      }
    }],
    "TravelPreferences": {
      "ValidInterlineTicket": true,
      "CabinPref": [{
        "Cabin": "Y",
        "PreferLevel": "Preferred"
      }],
      "TPA_Extensions": {
        "TripType": {
          "Value": "Return"
        },
        "LongConnectTime": {
          "Min": 780,
          "Max": 1200,
          "Enable": true
        },
        "ExcludeCallDirectCarriers": {
          "Enabled": true
        }
      }
    },
    "TravelerInfoSummary": {
      "SeatsRequested": [1],
      "AirTravelerAvail": [{
        "PassengerTypeQuantity": [{
          "Code": "ADT",
          "Quantity": 1
        }]
      }]
    },
    "TPA_Extensions": {
      "IntelliSellTransaction": {
        "RequestType": {
          "Name": "10ITINS"
        }
      }
    }
  }
}
MaxResponses could be something for internal development; it is part of the schema but does not affect the response.
What you can modify is in the IntelliSellTransaction. You used 10ITINS, but the values that will work should be 50ITINS, 100ITINS, and 200ITINS.
EDIT2 (as Panagiotis Kanavos said):
RequestType values depend on the business agreement between your company and Sabre. You can't use 100 or 200 without modifying the agreement.
"TPA_Extensions": {
"IntelliSellTransaction": {
"RequestType": {
"Name": "50ITINS"
}
}
}
EDIT1:
I have searched a bit more and found:
OTA_AirLowFareSearchRQ.TravelPreferences.TPA_Extensions.NumTrips
Required: false
Type: object
Description: This element allows a user to specify the number of itineraries returned.
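A sketch of where that element would sit in the request body; the exact shape of the NumTrips object is an assumption based on the description above, so check the schema for the real attribute names:

"TravelPreferences": {
  "TPA_Extensions": {
    "NumTrips": {
      "Number": 15
    }
  }
}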
I'm using Elasticsearch and need to implement a facet search for hierarchical objects as follows:
category 1 (10)
  subcategory 1 (4)
  subcategory 2 (6)
category 2 (X)
...
So I need to get facets for two related objects. The documentation says it's possible to get this kind of facet for numeric values, but I need it for strings: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-facets-terms-stats-facet.html
Here is another interesting topic; unfortunately it's old: http://elasticsearch-users.115913.n3.nabble.com/Pivot-facets-td2981519.html
Is this possible with Elasticsearch? If so, how can I do it?
The previous solution works really well as long as each document has at most one multi-level tag. With multiple hierarchical tags on a single document, a simple aggregation doesn't work, because the flat structure of the Lucene fields mixes the results of the inner aggregation.
See the example below:
DELETE /test_category

POST /test_category

# Insert a doc with 2 hierarchical tags
POST /test_category/test/1
{
  "categories": [
    {
      "cat_1": "1",
      "cat_2": "1.1"
    },
    {
      "cat_1": "2",
      "cat_2": "2.2"
    }
  ]
}
# Simple two-level aggregation query
GET /test_category/test/_search?search_type=count
{
  "aggs": {
    "main_category": {
      "terms": {
        "field": "categories.cat_1"
      },
      "aggs": {
        "sub_category": {
          "terms": {
            "field": "categories.cat_2"
          }
        }
      }
    }
  }
}
That's the WRONG response I got on ES 1.4, where the fields of the inner aggregation are mixed at the document level:
{
  ...
  "aggregations": {
    "main_category": {
      "buckets": [
        {
          "key": "1",
          "doc_count": 1,
          "sub_category": {
            "buckets": [
              {
                "key": "1.1",
                "doc_count": 1
              },
              {
                "key": "2.2",   <= WRONG
                "doc_count": 1
              }
            ]
          }
        },
        {
          "key": "2",
          "doc_count": 1,
          "sub_category": {
            "buckets": [
              {
                "key": "1.1",   <= WRONG
                "doc_count": 1
              },
              {
                "key": "2.2",
                "doc_count": 1
              }
            ]
          }
        }
      ]
    }
  }
}
A solution is to use nested objects. These are the steps:
1) Define a new type in the schema with nested objects:
POST /test_category/test2/_mapping
{
  "test2": {
    "properties": {
      "categories": {
        "type": "nested",
        "properties": {
          "cat_1": {
            "type": "string"
          },
          "cat_2": {
            "type": "string"
          }
        }
      }
    }
  }
}

# Insert a single document
POST /test_category/test2/1
{"categories":[{"cat_1":"1","cat_2":"1.1"},{"cat_1":"2","cat_2":"2.2"}]}
2) Run a nested aggregation query:
GET /test_category/test2/_search?search_type=count
{
  "aggs": {
    "categories": {
      "nested": {
        "path": "categories"
      },
      "aggs": {
        "main_category": {
          "terms": {
            "field": "categories.cat_1"
          },
          "aggs": {
            "sub_category": {
              "terms": {
                "field": "categories.cat_2"
              }
            }
          }
        }
      }
    }
  }
}
That's the response, now correct, that I got:
{
  ...
  "aggregations": {
    "categories": {
      "doc_count": 2,
      "main_category": {
        "buckets": [
          {
            "key": "1",
            "doc_count": 1,
            "sub_category": {
              "buckets": [
                {
                  "key": "1.1",
                  "doc_count": 1
                }
              ]
            }
          },
          {
            "key": "2",
            "doc_count": 1,
            "sub_category": {
              "buckets": [
                {
                  "key": "2.2",
                  "doc_count": 1
                }
              ]
            }
          }
        ]
      }
    }
  }
}
The same solution can be extended to a hierarchy facet of more than two levels.
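For instance, a three-level variant just adds a cat_3 field to the nested mapping and one more inner terms aggregation, following the same pattern (a sketch):

POST /test_category/test3/_mapping
{
  "test3": {
    "properties": {
      "categories": {
        "type": "nested",
        "properties": {
          "cat_1": { "type": "string" },
          "cat_2": { "type": "string" },
          "cat_3": { "type": "string" }
        }
      }
    }
  }
}

GET /test_category/test3/_search?search_type=count
{
  "aggs": {
    "categories": {
      "nested": { "path": "categories" },
      "aggs": {
        "main_category": {
          "terms": { "field": "categories.cat_1" },
          "aggs": {
            "sub_category": {
              "terms": { "field": "categories.cat_2" },
              "aggs": {
                "sub_sub_category": {
                  "terms": { "field": "categories.cat_3" }
                }
              }
            }
          }
        }
      }
    }
  }
}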
Currently, Elasticsearch does not support hierarchical faceting out of the box. But the upcoming 1.0 release features a new aggregations module that can be used to get these kinds of facets (which are more like pivot facets than hierarchical facets). Version 1.0 is currently in beta; you can download the second beta and test out aggregations yourself. Your example might look like
curl -XPOST 'localhost:9200/_search?pretty' -d '
{
  "aggregations": {
    "main category": {
      "terms": {
        "field": "cat_1",
        "order": {"_term": "asc"}
      },
      "aggregations": {
        "sub category": {
          "terms": {
            "field": "cat_2",
            "order": {"_term": "asc"}
          }
        }
      }
    }
  }
}'
The idea is to have a different field for each level of faceting and to bucket your facets based on the terms of the first level (cat_1). These aggregations then have sub-buckets based on the terms of the second level (cat_2). The result may look like
{
  "aggregations" : {
    "main category" : {
      "buckets" : [ {
        "key" : "category 1",
        "doc_count" : 10,
        "sub category" : {
          "buckets" : [ {
            "key" : "subcategory 1",
            "doc_count" : 4
          }, {
            "key" : "subcategory 2",
            "doc_count" : 6
          } ]
        }
      }, {
        "key" : "category 2",
        "doc_count" : 7,
        "sub category" : {
          "buckets" : [ {
            "key" : "subcategory 1",
            "doc_count" : 3
          }, {
            "key" : "subcategory 2",
            "doc_count" : 4
          } ]
        }
      } ]
    }
  }
}