Serializer for type org.janusgraph.graphdb.relations.RelationIdentifier not found - serialization

The following error shows up in the JanusGraph v0.5.3 server logs while retrieving edges from a Java client:
12277786 [gremlin-server-exec-7] WARN org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor - The result [[e[ofncw-iyo-4avp-374][24576-hasTag->4144]]] in the request 098dc551-6558-497a-a066-b293edd29833 could not be serialized and returned.
org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.io.IOException: Serializer for type org.janusgraph.graphdb.relations.RelationIdentifier not found
at org.apache.tinkerpop.gremlin.driver.ser.binary.ResponseMessageSerializer.writeValue(ResponseMessageSerializer.java:86)
at org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1.serializeResponseAsBinary(GraphBinaryMessageSerializerV1.java:143)
at org.apache.tinkerpop.gremlin.server.op.AbstractOpProcessor.makeFrame(AbstractOpProcessor.java:335)
at org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor.handleIterator(TraversalOpProcessor.java:580)
at org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor.lambda$iterateBytecodeTraversal$4(TraversalOpProcessor.java:411)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Serializer for type org.janusgraph.graphdb.relations.RelationIdentifier not found
at org.apache.tinkerpop.gremlin.structure.io.binary.TypeSerializerRegistry.validateInstance(TypeSerializerRegistry.java:392)
at org.apache.tinkerpop.gremlin.structure.io.binary.TypeSerializerRegistry.getSerializer(TypeSerializerRegistry.java:361)
at org.apache.tinkerpop.gremlin.structure.io.binary.GraphBinaryWriter.write(GraphBinaryWriter.java:90)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.EdgeSerializer.writeValue(EdgeSerializer.java:63)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.EdgeSerializer.writeValue(EdgeSerializer.java:34)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.SimpleTypeSerializer.writeValue(SimpleTypeSerializer.java:91)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.SimpleTypeSerializer.write(SimpleTypeSerializer.java:73)
at org.apache.tinkerpop.gremlin.structure.io.binary.GraphBinaryWriter.write(GraphBinaryWriter.java:112)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.TraverserSerializer.writeValue(TraverserSerializer.java:49)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.TraverserSerializer.writeValue(TraverserSerializer.java:33)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.SimpleTypeSerializer.writeValue(SimpleTypeSerializer.java:91)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.SimpleTypeSerializer.write(SimpleTypeSerializer.java:73)
at org.apache.tinkerpop.gremlin.structure.io.binary.GraphBinaryWriter.write(GraphBinaryWriter.java:112)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.CollectionSerializer.writeValue(CollectionSerializer.java:52)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.ListSerializer.writeValue(ListSerializer.java:44)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.ListSerializer.writeValue(ListSerializer.java:29)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.SimpleTypeSerializer.writeValue(SimpleTypeSerializer.java:91)
at org.apache.tinkerpop.gremlin.structure.io.binary.types.SimpleTypeSerializer.write(SimpleTypeSerializer.java:73)
at org.apache.tinkerpop.gremlin.structure.io.binary.GraphBinaryWriter.write(GraphBinaryWriter.java:112)
at org.apache.tinkerpop.gremlin.driver.ser.binary.ResponseMessageSerializer.writeValue(ResponseMessageSerializer.java:84)
... 10 more
Serializer used in the Java client:
Map<String, Object> config = new HashMap<>();
config.put("ioRegistries", Arrays.asList("org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry"));
MessageSerializer serializer = new GraphBinaryMessageSerializerV1();
serializer.configure(config, null);
Cluster.Builder builder = Cluster.build().port(port)
        .serializer(serializer).addContactPoints(hosts)
        .loadBalancingStrategy(new LoadBalancingStrategy.RoundRobin());
Server-side serializers (gremlin-server.yaml):
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
# Older serialization versions for backwards compatibility:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
Why is GraphBinaryMessageSerializerV1 not able to serialize RelationIdentifier even when its ioRegistries are initialized with org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry, which registers RelationIdentifier here: https://github.com/JanusGraph/janusgraph/blob/v0.5.3/janusgraph-driver/src/main/java/org/janusgraph/graphdb/tinkerpop/JanusGraphIoRegistry.java#L35

I believe that some of the fixes that allow the IoRegistry to hook into the GraphBinary serializer have not yet been released, although I do see the work on the main JanusGraph branch. [1] I was having the same problem you reported, but was able to get things working using the GraphSONMessageSerializerV3d0 serializer.
[1] https://github.com/JanusGraph/janusgraph/commit/1cb4b6e849e3f9c2802722fe7f84c760cd471429
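Once that work ships in a release, the registry should be attachable to GraphBinary directly on the client. A rough sketch of what that wiring might look like (my assumption based on the TinkerPop API; it will not help on 0.5.3, where the registry has no GraphBinary hooks):
// Hypothetical until JanusGraph releases GraphBinary support:
TypeSerializerRegistry registry = TypeSerializerRegistry.build()
        .addRegistry(JanusGraphIoRegistry.getInstance()) // assumes the released registry exposes GraphBinary serializers
        .create();
MessageSerializer serializer = new GraphBinaryMessageSerializerV1(registry);
Until then, GraphSON is the workaround.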
This setup code works for me:
private Cluster cluster;
private GraphTraversalSource gts;
private DriverRemoteConnection drc;

public GraphTraversalSource createConnection() {
    Map<String, Object> config = new HashMap<>();
    config.put("ioRegistries", Arrays.asList("org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry"));
    MessageSerializer serializer = new GraphSONMessageSerializerV3d0();
    serializer.configure(config, null);

    Cluster.Builder builder = Cluster.build();
    builder.addContactPoint("localhost");
    builder.port(8182);
    builder.enableSsl(false);
    builder.serializer(serializer);

    this.cluster = builder.create();
    this.drc = DriverRemoteConnection.using(cluster);
    this.gts = traversal().withRemote(drc);
    return gts;
}
With that configuration I am able to return edges. For example:
public void runTests() {
    GraphTraversalSource g = createConnection();
    List<Edge> edges = g.E().limit(1).toList();
    System.out.println(edges);
    closeConnection();
}
To get this working I disabled the GraphBinary serializer on the Gremlin Server and left only the GraphSON ones in place.
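For reference, the trimmed serializer section of gremlin-server.yaml then looks roughly like this (a sketch; keep whichever GraphSON entries your clients actually use):
serializers:
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}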


Enable diagnostic settings for Storage account using ARM template

A storage account deployed from an ARM template is creating its diagnostic settings as disabled.
How can I enable the diagnostic settings using an ARM template or a PowerShell script?
I want to automate the deployment of diagnostic settings.
Here is a solution using ARM templates in the newer Bicep format. In the example, it configures diagnostic settings for:
StorageAccount
Blob
File
Queue
Table
To keep the template short, it configures only the StorageRead log category on the storage account services.
param name string
param location string = resourceGroup().location
param sku string

@description('Resource ID for the destination log analytics workspace.')
param logAnalyticsWorkspaceId string

resource storageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: name
  location: location
  kind: 'StorageV2'
  sku: {
    name: sku
  }
  properties: {
    allowBlobPublicAccess: false
    allowSharedKeyAccess: true
    minimumTlsVersion: 'TLS1_2'
    accessTier: 'Hot'
    supportsHttpsTrafficOnly: true
    networkAcls: {
      defaultAction: 'Deny'
      bypass: 'AzureServices'
    }
  }
}

resource diagnosticsStorage 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  scope: storageAccount
  name: 'diagnostics00'
  properties: {
    workspaceId: logAnalyticsWorkspaceId
    metrics: [
      {
        category: 'Transaction'
        enabled: true
      }
    ]
  }
}

resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2021-06-01' = {
  parent: storageAccount
  name: 'default'
  properties: {}
}

resource diagnosticsBlob 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  scope: blobService
  name: 'diagnostics00'
  properties: {
    workspaceId: logAnalyticsWorkspaceId
    logs: [
      {
        category: 'StorageRead'
        enabled: true
      }
    ]
  }
}

resource fileService 'Microsoft.Storage/storageAccounts/fileServices@2021-06-01' = {
  parent: storageAccount
  name: 'default'
  properties: {}
}

resource diagnosticsFile 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  scope: fileService
  name: 'diagnostics00'
  properties: {
    workspaceId: logAnalyticsWorkspaceId
    logs: [
      {
        category: 'StorageRead'
        enabled: true
      }
    ]
  }
}

resource queueService 'Microsoft.Storage/storageAccounts/queueServices@2021-06-01' = {
  parent: storageAccount
  name: 'default'
  properties: {}
}

resource diagnosticsQueue 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  scope: queueService
  name: 'diagnostics00'
  properties: {
    workspaceId: logAnalyticsWorkspaceId
    logs: [
      {
        category: 'StorageRead'
        enabled: true
      }
    ]
  }
}

resource tableService 'Microsoft.Storage/storageAccounts/tableServices@2021-06-01' = {
  parent: storageAccount
  name: 'default'
  properties: {}
}

resource diagnosticsTable 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  scope: tableService
  name: 'diagnostics00'
  properties: {
    workspaceId: logAnalyticsWorkspaceId
    logs: [
      {
        category: 'StorageRead'
        enabled: true
      }
    ]
  }
}
Please follow the URL below to enable diagnostic settings for an Azure Storage account using an ARM template:
https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/resource-manager-diagnostic-settings#diagnostic-setting-for-azure-storage
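For the PowerShell route the question also mentions, something along these lines should work with the Az.Monitor module (a sketch; resource group, account name, and categories are placeholders you need to adjust):
# Enable the StorageRead log category on the blob service, sent to Log Analytics.
$storageId = (Get-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'mystorageacct').Id
Set-AzDiagnosticSetting -Name 'diagnostics00' `
    -ResourceId "$storageId/blobServices/default" `
    -WorkspaceId $logAnalyticsWorkspaceId `
    -Enabled $true `
    -Category StorageRead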

How do I annotate an endpoint in NestJS for OpenAPI that takes Multipart Form Data

My NestJS server has an endpoint that accepts files along with additional form data.
For example, I pass a file and a user_id of the file's creator in the form.
NestJS Swagger needs to be told explicitly that the body contains the file and that the endpoint consumes multipart/form-data; this is not documented in the NestJS docs: https://docs.nestjs.com/openapi/types-and-parameters#types-and-parameters.
Luckily, some bug reports led to discussion about how to handle this use case. Looking at these two discussions:
https://github.com/nestjs/swagger/issues/167
https://github.com/nestjs/swagger/issues/417
I was able to put together the following.
I added the annotation using a DTO. The two critical parts are:
In the DTO, add:
@ApiProperty({
  type: 'file',
  properties: {
    file: {
      type: 'string',
      format: 'binary',
    },
  },
})
public readonly file: any;

@IsString()
public readonly user_id: string;
In the controller, add:
@ApiConsumes('multipart/form-data')
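For completeness, a minimal controller putting both parts together might look like this (my sketch, not from the original answer; it assumes the DTO above is called UploadFileDto and uses FileInterceptor from @nestjs/platform-express):
// Illustrative only; the handler and DTO names are assumptions.
@Controller('users')
export class UsersController {
  @Post('files')
  @ApiConsumes('multipart/form-data')
  @ApiBody({ type: UploadFileDto })
  @UseInterceptors(FileInterceptor('file')) // 'file' must match the form field name
  addPrivateFile(
    @UploadedFile() file: Express.Multer.File,
    @Body() body: UploadFileDto,
  ) {
    // file holds the upload; body.user_id carries the extra form field
    return { userId: body.user_id, filename: file.originalname };
  }
}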
This gets me a working endpoint, and this OpenAPI JSON:
{
  "/users/files": {
    "post": {
      "operationId": "UsersController_addPrivateFile",
      "summary": "...",
      "parameters": [],
      "requestBody": {
        "required": true,
        "content": {
          "multipart/form-data": {
            "schema": {
              "$ref": "#/components/schemas/UploadFileDto"
            }
          }
        }
      }
    }
  }
}
...
{
  "UploadFileDto": {
    "type": "object",
    "properties": {
      "file": {
        "type": "file",
        "properties": {
          "file": {
            "type": "string",
            "format": "binary"
          }
        },
        "description": "...",
        "example": "'file': <any-kind-of-binary-file>"
      },
      "user_id": {
        "type": "string",
        "description": "...",
        "example": "cus_IPqRS333voIGbS"
      }
    },
    "required": [
      "file",
      "user_id"
    ]
  }
}
Here is what I find a cleaner approach:
@Injectable()
class FileToBodyInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
    const ctx = context.switchToHttp();
    const req = ctx.getRequest();
    if (req.body && req.file?.fieldname) {
      const { fieldname } = req.file;
      if (!req.body[fieldname]) {
        req.body[fieldname] = req.file;
      }
    }
    return next.handle();
  }
}
const ApiFile = (options?: ApiPropertyOptions): PropertyDecorator => (
  target: Object, propertyKey: string | symbol
) => {
  ApiProperty({
    type: 'file',
    properties: {
      [propertyKey]: {
        type: 'string',
        format: 'binary',
      },
    },
  })(target, propertyKey);
};
class UserImageDTO {
  @ApiFile()
  file: Express.Multer.File; // you can name it something else, like image or photo

  @ApiProperty()
  user_id: string;
}

@Controller('users')
export class UsersController {
  @ApiBody({ type: UserImageDTO })
  // @ApiResponse({ type: ... }) // some DTO to annotate the response
  @Post('files')
  @ApiConsumes('multipart/form-data')
  @UseInterceptors(
    FileInterceptor('file'), // this should match the file property name
    FileToBodyInterceptor,   // this injects the file into the body object
  )
  async addFile(@Body() userImage: UserImageDTO): Promise<void> { // if you return something to the client, put it here
    console.log({ userImage });  // all the fields and the file
    console.log(userImage.file); // the file is here
    // ... your logic
  }
}
FileToBodyInterceptor and ApiFile are general-purpose; I wish they were part of NestJS.
You probably need to install @types/multer to have the Express.Multer.File type.

Received fatal alert: handshake_failure when using Groovy HTTPBuilder for server using TLSv1.2

I have a question regarding the use of HTTPBuilder. While using the snippet below to create issues via the JIRA API, I am receiving the error below. The server where the API is hosted uses TLSv1.2, and that seems to be my problem:
Received fatal alert: handshake_failure
Here are some of my constraints/limitations:
The project runs on Java 7
The version of Grails is 2.4.3
I know that upgrading the version of Java helps. However, I am specifically interested in learning how changing/rewriting the HTTPBuilder code can help me solve this issue.
def http = new HTTPBuilder("https://jira.xxxxxxx.com/rest/api/2/issue")
if (projectKey == 'Type1') {
    http.request(POST, JSON) { req ->
        headers.'Authorization' = 'Basic xxxxxxxxxxxxxxxxxxxxxxxxxx'
        headers.'Content-Type' = 'application/json'
        body = [
            fields: [
                project      : [
                    key: projectKey
                ],
                issuetype    : [
                    name: issueType
                ],
                reporter     : [
                    name: reporter
                ],
                assignee     : [
                    name: assignee
                ],
                customfield_1: [
                    value: 'Some value1'
                ],
                customfield_2: [
                    value: 'Some value2'
                ],
                summary      : summary,
                description  : description,
                labels       : labels
            ]
        ]
        response.success = { resp, json ->
            return json.key
        }
    }
}
Any help will be appreciated!!
Have you tried this? Usage:
TlsHttpBuilder tlsHttpBuilder = new TlsHttpBuilder(['TLSv1', 'TLSv1.1', 'TLSv1.2'])
class TlsHttpBuilder extends HTTPBuilder {
    List sslProtocols

    TlsHttpBuilder(List sslProtocols) {
        super()
        this.sslProtocols = sslProtocols
    }

    protected HttpClient createClient(HttpParams params) {
        def sslContext = SSLContext.getInstance("TLS")
        sslContext.init(null, null, new SecureRandom())
        def sf = new org.apache.http.conn.ssl.SSLSocketFactory(sslContext) {
            protected void prepareSocket(final SSLSocket socket) throws IOException {
                if (sslProtocols) {
                    log.debug("Setting protocols: ${sslProtocols}")
                    socket.setEnabledProtocols(sslProtocols as String[])
                }
            }
        }
        def schemeRegistry = SchemeRegistryFactory.createDefault()
        schemeRegistry.register(new org.apache.http.conn.scheme.Scheme("https", sf, 443))
        new DefaultHttpClient(new PoolingClientConnectionManager(schemeRegistry), params)
    }
}
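To tie this back to the original snippet, the subclass should be a drop-in replacement; a sketch (the URI is set separately because this constructor only takes the protocol list):
// Sketch: swap the plain HTTPBuilder for the TLS-aware subclass.
def http = new TlsHttpBuilder(['TLSv1.2'])
http.setUri('https://jira.xxxxxxx.com/rest/api/2/issue')
http.request(POST, JSON) { req ->
    // ... same headers, body and response handling as before
}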
Reference:
https://gist.github.com/ptaylor/fb5a3abce5c455fbec87f6bfd6386814

GraphQL queries with tables join using Node.js

I am learning GraphQL, so I built a little project. Let's say I have two models, User and Comment.
const Comment = Model.define('Comment', {
  content: {
    type: DataType.TEXT,
    allowNull: false,
    validate: {
      notEmpty: true,
    },
  },
});

const User = Model.define('User', {
  name: {
    type: DataType.STRING,
    allowNull: false,
    validate: {
      notEmpty: true,
    },
  },
  phone: DataType.STRING,
  picture: DataType.STRING,
});
The relation is one-to-many, where a user can have many comments.
I have built the schema like this:
const UserType = new GraphQLObjectType({
  name: 'User',
  fields: () => ({
    id: { type: GraphQLString },
    name: { type: GraphQLString },
    phone: { type: GraphQLString },
    comments: {
      type: new GraphQLList(CommentType),
      resolve: user => user.getComments(),
    },
  }),
});
And the query:
const user = {
type: UserType,
args: {
id: {
type: new GraphQLNonNull(GraphQLString)
}
},
resolve(_, {id}) => User.findById(id)
};
Executing the query for a user and his comments is done with one request, like so:
{
  user(id: "1") {
    comments {
      content
    }
  }
}
As I understand it, the client gets the results with one query, which is the benefit of using GraphQL. But the server will still execute two database queries: one for the user and another one for his comments.
My question is: what are the best practices for building the GraphQL schema and types, combining joins between tables, so that the server can also resolve the query with one database request?
The concept you are referring to is called batching. There are several libraries out there that offer this; for example (a minimal DataLoader sketch follows the list):
Dataloader: a generic utility maintained by Facebook that provides "a consistent API over various backends and reduce requests to those backends via batching and caching"
join-monster: "A GraphQL-to-SQL query execution layer for batch data fetching."
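To make the batching idea concrete, here is a minimal DataLoader sketch for the models above (my illustration, not part of the original answer; it assumes Sequelize's default UserId foreign key on Comment):
const DataLoader = require('dataloader');

// Collects every user id requested in one tick and fetches all of their
// comments with a single SQL query, instead of one query per user.
const commentsByUser = new DataLoader(async userIds => {
  const comments = await Comment.findAll({ where: { UserId: userIds } });
  // DataLoader expects one result per key, in key order.
  return userIds.map(id => comments.filter(c => c.UserId === id));
});

// In UserType, the comments field would then resolve through the loader:
//   resolve: user => commentsByUser.load(user.id)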
To anyone using .NET and the GraphQL for .NET package: I have made an extension method that converts the GraphQL query into Entity Framework Includes.
public static class ResolveFieldContextExtensions
{
    public static string GetIncludeString(this ResolveFieldContext<object> source)
    {
        return string.Join(',', GetIncludePaths(source.FieldAst));
    }

    private static IEnumerable<Field> GetChildren(IHaveSelectionSet root)
    {
        return root.SelectionSet.Selections.Cast<Field>()
            .Where(x => x.SelectionSet.Selections.Any());
    }

    private static IEnumerable<string> GetIncludePaths(IHaveSelectionSet root)
    {
        var q = new Queue<Tuple<string, Field>>();
        foreach (var child in GetChildren(root))
            q.Enqueue(new Tuple<string, Field>(child.Name.ToPascalCase(), child));

        while (q.Any())
        {
            var node = q.Dequeue();
            var children = GetChildren(node.Item2).ToList();
            if (children.Any())
            {
                foreach (var child in children)
                    q.Enqueue(new Tuple<string, Field>(node.Item1 + "." + child.Name.ToPascalCase(), child));
            }
            else
            {
                yield return node.Item1;
            }
        }
    }
}
Let's say we have the following query:
query {
  getHistory {
    id
    product {
      id
      category {
        id
        subCategory {
          id
        }
        subAnything {
          id
        }
      }
    }
  }
}
We can then create a variable in the resolve method of the field:
var include = context.GetIncludeString();
which generates the following string:
"Product.Category.SubCategory,Product.Category.SubAnything"
and pass it to Entity Framework:
public Task<TEntity> Get(TKey id, string include)
{
    // Must be typed as IQueryable<TEntity>, not var: Include returns an
    // IQueryable, which cannot be assigned back to a DbSet-typed variable.
    IQueryable<TEntity> query = Context.Set<TEntity>();
    if (!string.IsNullOrEmpty(include))
    {
        query = include.Split(',', StringSplitOptions.RemoveEmptyEntries)
            .Aggregate(query, (q, p) => q.Include(p));
    }
    return query.SingleOrDefaultAsync(c => c.Id.Equals(id));
}

Why does this Ext.grid.Panel crash?

This crashes:
var grdMakes = Ext.extend(Ext.grid.Panel, {
    constructor: function(paConfig) {
    }
});
This does not:
var grdMakes = Ext.extend(Ext.grid.Panel, {
});
The crash is:
Uncaught TypeError: Cannot read property 'added' of undefined
Why does adding a constructor cause it to crash? I can do it with other objects just fine, like:
var pnlMakesMaint = Ext.extend(Ext.Panel, {
    constructor: function(paConfig) {
    }
});
EDIT
To clarify: I want to be able to instantiate an object with the option to override the defaults.
var g = new grdMakes({}); // defaults used
var g = new grdMakes({renderTo: Ext.getBody()}); // renderTo overridden
This works for everything except Ext.grid.Panel.
Also, I'm using ExtJS 4.
SOLUTION
Turns out extend is deprecated in ExtJS 4, so I used this and it works:
Ext.define('grdMakes', {
    extend: 'Ext.grid.Panel',
    constructor: function(paConfig) {
        paConfig = Ext.apply(paConfig || {}, {
            columns: !paConfig.columns ? [{
                header: 'Makes',
                dataIndex: 'make'
            }, {
                header: 'Description',
                dataIndex: 'description'
            }] : paConfig.columns,
            height: !paConfig.height ? 400 : paConfig.height,
            renderTo: !paConfig.renderTo ? Ext.getBody() : paConfig.renderTo,
            store: !paConfig.store ? stoMakes : paConfig.store,
            title: !paConfig.title ? 'Makes' : paConfig.title,
            width: !paConfig.width ? 600 : paConfig.width
        });
        grdMakes.superclass.constructor.call(this, paConfig);
    }
});
OK, but your code looks like ExtJS 3, because Ext.extend is deprecated in ExtJS 4. Instead of extend you can use define. For reference, you can check the following URL:
http://docs.sencha.com/ext-js/4-0/#/api/Ext-method-extend
AFAIK, this is not the right way to override default options; you need to use Ext.override.
For example:
Ext.override(Ext.grid.Panel, {
    lockable: true
});
This is how you override the default options.