How does Ontotext GraphDB assign colors in Visual Graph?

I have been trying to create some graph visualizations using Ontotext GraphDB. I would like the colors to be consistent between the various visualizations that I make of the same data. I understand that the coloring is based on the type, but it does not seem to be consistent. For example, if I create a visual graph with only nodes of type A, the color assigned to those nodes may be red; but if I create a visual graph with nodes of type A and type B, the nodes of type A are not guaranteed to still be red.
I would like to understand the mechanism by which the visualization system assigns colors based on types.
As a side note, I am also having an issue with larger networks where the graph becomes larger than the window, so that I cannot view all of the nodes at once, even when zoomed all the way out.

Colors are based on the type of the node, and the colors for types are generated fresh each time (we do not persist them).
Unfortunately you cannot specify the Visual Graph node colors in GraphDB Workbench without touching the source code, so you need to clone GraphDB Workbench from GitHub and set the colors for your types in the source code. I will guide you through it; it is very straightforward.
Clone or fork the project from here: https://github.com/Ontotext-AD/graphdb-workbench
(there is a good guide there on how to run your Workbench against a running GraphDB)
Open src/js/angular/graphexplore/controllers/graphs-visualizations.controller.js and find the function $scope.getColor.
You can specify your colors and types there, e.g.:
$scope.getColor = function (type) {
    // Fixed colors for your own types (example IRIs and colors):
    if (type === 'http://myBarType') {
        return "#6495ED";
    }
    if (type === 'http://myFooType') {
        return "#90EE90";
    }
    // Stock behaviour: the first time a type is seen, it is assigned the
    // next palette index. These assignments are not persisted, which is
    // why colors can differ between visualizations.
    if (angular.isUndefined(type2color[type])) {
        type2color[type] = colorIndex;
        colorIndex++;
    }
    // ... rest of the original function (palette lookup and return)
};

Related

Send heavy data through protobuf. Custom field

I'm developing the API for an application using protobuf and gRPC.
I need to send data of arbitrary size. Sometimes it is small, sometimes huge, for example a NumPy array. If the size is small I want to send it through protobuf; if it is huge I want to dump the data into a file and send the file path through protobuf.
To do so I've created the following .proto messages:
// google.protobuf.Any requires this import:
import "google/protobuf/any.proto";

message NumpyTroughProtobuf {
    repeated int32 shape = 1;
    repeated float array = 2;
}

message NumpyTroughfile {
    string filepath = 1;
}

message NumpyTrough {
    google.protobuf.Any data = 1;
}
The logic is simple: if the size is big I put a NumpyTroughfile into data, and if it is small I put in a NumpyTroughProtobuf.
Problem (what I want to avoid):
The mechanism of data transformation is part of my app.
In the current approach I have to check and convert the data before I create the NumpyTrough message. So I have to add logic to my application which takes care of the data check and cast, and I have to do the same for every language I use (for example if I send messages from Python to C++).
What I want to do:
The mechanism of data transformation is part of a customized protobuf.
I want to hide the data transformation: my app should send a plain NumPy array into the NumpyTrough.data field, and all transformation should be hidden.
So the logic of data transformation should be part of a custom protobuf field, not part of my application.
That means I would like to create a custom field type and implement its behavior (marshal/unmarshal) once for each language I use. Then I can send NumPy data directly into this custom field, and the field decides how to proceed: turn the data into a file or send it through protobuf directly, and restore it on the receiver side.
Something like this: https://github.com/gogo/protobuf/blob/master/custom_types.md, but that does not seem to be part of the protobuf ecosystem.
Protobuf only defines the schema.
You can't add logic to a protobuf definition.
Protobuf's Any represents arbitrary binary data, so somewhere you'll need to explain to your users what it represents in order that they can ship data in the correct format to your service.
You get to decide how to distribute the processing of the data:
Either client-side functionality that preprocesses the data and ships the output (as structured data using non-Any types or, if still necessary, as Any).
Or server-side functionality that receives entirely unprocessed client-side data shipped through Any.
Or some combination of the two.
NOTE You may want to consider shipping the data as file references regardless of size, to simplify your implementation. You're correct to bias protobuf toward smaller message sizes but, depending on the file size distribution, does it make sense to complicate your implementation with two paths?
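For the first option (client-side preprocessing), here is a minimal Python sketch of the wrapper the application would need. It assumes the messages above were compiled into a numpy_trough_pb2 module; the SIZE_THRESHOLD constant, the dump path, and the helper names are all hypothetical:

import numpy as np
import numpy_trough_pb2  # generated by protoc from the .proto above

SIZE_THRESHOLD = 1 << 20  # 1 MiB; an arbitrary cutoff

def pack_numpy(arr):
    # Wrap a NumPy array either inline or as a file reference.
    msg = numpy_trough_pb2.NumpyTrough()
    if arr.nbytes <= SIZE_THRESHOLD:
        inline = numpy_trough_pb2.NumpyTroughProtobuf(
            shape=list(arr.shape),
            array=arr.astype(np.float32).ravel().tolist(),
        )
        msg.data.Pack(inline)      # Any.Pack stores the type URL + payload
    else:
        path = "/tmp/payload.npy"  # hypothetical dump location
        np.save(path, arr)
        msg.data.Pack(numpy_trough_pb2.NumpyTroughfile(filepath=path))
    return msg

def unpack_numpy(msg):
    # Restore the NumPy array on the receiver side.
    inline = numpy_trough_pb2.NumpyTroughProtobuf()
    if msg.data.Is(inline.DESCRIPTOR):
        msg.data.Unpack(inline)
        return np.array(inline.array, dtype=np.float32).reshape(tuple(inline.shape))
    ref = numpy_trough_pb2.NumpyTroughfile()
    msg.data.Unpack(ref)
    return np.load(ref.filepath)

Either way the check-and-convert step still exists; this only concentrates it in one helper per language rather than removing it.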

Does the ordering of mesh elements change from run to run for a constrained triangulation under CGAL?

I iterate over finite_vertices, finite_edges and finite_faces after generating a constrained Delaunay triangulation with Lloyd optimization. I am on VS2012 using CGAL 4.12 in release mode. I see that for a given case the finite_vertices list is repeatable (as is the vertex list under finite_faces); however, the ordering of the edges in finite_edges seems to change from run to run.
for (auto eit = cdtp.finite_edges_begin(); eit != cdtp.finite_edges_end(); ++eit)
{
    const auto isConstrainedEdge = cdtp.is_constrained(*eit);
    auto& cFace = *(eit->first);
    auto cwVert = cFace.vertex(cFace.cw(eit->second));
    auto ccwVert = cFace.vertex(cFace.ccw(eit->second));
    // ...
}
I use the above code snippet to extract the vertex list, and the vertices associated with a given edge change from run to run.
Any help resolving this is appreciated, as I am looking for consistent behavior in the code. My triangulation involves many line constraints on a two-dimensional domain.
I was told it is likely deterministic behaviour in practice, but there is no guarantee of order. IIRC the documentation says the traversal order is not guaranteed, so I think it's best to assume the iterators' traversal is not deterministic and could change.
You could use any of the _info extensions to embed information into the face, edge, etc. (a hash perhaps?) which you could then check against to detect a change.
In my use case, I wanted to traverse the mesh in parallel and OpenMP didn't support the iterators. So I hold a vector of the Face_handles in memory, which I can then easily index over. In conjunction with the _info data, you could use this to build a vector of edges, faces, etc. with a guaranteed order using unique information in the ->info() field.
Another _info example.
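A minimal sketch of that vector-of-handles approach, assuming a typical CDT typedef (the type and function names below are illustrative, not from the original code):

#include <vector>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Constrained_Delaunay_triangulation_2<K> CDT;

void process_faces(CDT& cdt)
{
    // Materialize the face handles once so they can be indexed in a
    // fixed order for this container.
    std::vector<CDT::Face_handle> faces;
    faces.reserve(cdt.number_of_faces());
    for (auto fit = cdt.finite_faces_begin(); fit != cdt.finite_faces_end(); ++fit)
        faces.push_back(fit);

    // OpenMP cannot drive the CGAL iterators directly, but it can
    // iterate over an indexed container of handles.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(faces.size()); ++i) {
        CDT::Face_handle f = faces[i];
        // ... per-face work, e.g. reading f->info() ...
    }
}

The vector still reflects one particular traversal; pairing it with unique values stored via the _info extensions is what lets you re-establish a stable order across runs.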

How to show the mesh on terrain or other objects in OGRE

http://i.stack.imgur.com/kcOxx.jpg
Look at the picture: I want to achieve something like this in OGRE, but I have no idea how to do it.
I am trying to make an SLG game with OGRE now, and the first step is to show the mesh.
I am a Chinese student and my English is not good; in my country I can only find a little documentation about OGRE, as the internet is filled with Unity3D. Thanks to everybody who has read my question.
One way
Add one more pass to your object's .material script.
material myMaterial
{
    technique
    {
        pass solidPass
        {
            // sets your object's colour, texture etc.
            // ... leave what you have here
            polygon_mode solid // render the object as a solid
        }
        pass wireframePass
        {
            diffuse 0 0 0 1.0 // the colour of the wireframe (here black; use 1 1 1 1.0 for white)
            polygon_mode wireframe // render the object as a wireframe
        }
    }
}
This of course renders the object twice, but I assume it's just for debugging purposes; the lines are quite slim, and the object overlaps the wireframe in some parts.
The usual way
Add another texture_unit to the object's .material script that contains thin white squares, sized the same as in the UV mapping (which you can export with most modelling software), on a transparent background.
Make sure the .material script has alpha enabled in the pass you created:
scene_blend alpha_blend
scene_blend_op add
This lets you choose what kind of lines you want.
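Putting those two steps together, a minimal sketch of such a pass (the texture name uv_grid.png is hypothetical; supply your own exported grid image):

material myGridMaterial
{
    technique
    {
        pass gridPass
        {
            // blend the grid texture over whatever is beneath it
            scene_blend alpha_blend
            scene_blend_op add
            texture_unit
            {
                // thin white squares on a transparent background,
                // matching the UV layout of the mesh
                texture uv_grid.png
            }
        }
    }
}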
Source: the OGRE Manual, under Material Scripts. It goes much more in depth about the material script itself.

Leaflet: Dynamically count the number of features loaded into a layer

I have an application where markers/features are loaded into layers/layerGroups (what's the right term?) from multiple sources, and they're loaded dynamically (based on some attribute in feature.properties and other conditions). I want to show on the side panel the number of markers currently loaded into the layer on display. Given just the layer's variable/identifier, how can one find the number of markers/features loaded into it?
var layer1= L.layerGroup();
layerControl.addOverlay(layer1, 'Layer 1');
... // loading stuff into this layer from different sources
console.log(layer1.length); // doesn't work, gives "undefined"
console.log(JSON.stringify(layer1)); // doesn't work, "TypeError: cyclic object value"
...so I guess layers can't be treated like JSON objects.
I found a related question, but the answer there only addresses markers loaded from one GeoJSON source and advises a simple counter++ in onEachFeature. I'm working with a lot of layers in my application and would appreciate not having to add a separate counter variable for each and every one; I'd rather just use the layer's variable/identifier to count. If we can add a layer to a map or cluster group so simply, then we ought to be able to count what's in it, right?
The getLayers() function returns an array containing all the layers (features) in your group, and you can then take the length of that array.
layer_variable.getLayers().length;
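A small usage sketch for keeping a side-panel counter current; the element id layer1-count is hypothetical, and L.featureGroup is used instead of L.layerGroup because FeatureGroup fires layeradd/layerremove events:

var layer1 = L.featureGroup();
layerControl.addOverlay(layer1, 'Layer 1');

function updateCount() {
    // getLayers() reflects whatever has been added so far
    document.getElementById('layer1-count').textContent =
        layer1.getLayers().length;
}
layer1.on('layeradd layerremove', updateCount);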

Abaqus - stress-displacement elements are not allowed in a heat transfer analysis

I'm trying to simulate cooling of a cylinder-shaped sample, but when I submit a job I get the error: stress-displacement elements are not allowed in a heat transfer analysis. I defined the part, material (density, specific heat, conductivity), section, section assignments, mesh, instance, a predefined field (temperature) in the initial step, and Step-1 (heat transfer) with an interaction (surface film condition). Where's the problem?
Update:
I solved the problem: I had an incorrect element type. For a heat transfer simulation, set Mesh -> Element Type -> Family -> Heat Transfer. I guess the Convection/Diffusion option in the Hex tab should also be selected.
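For reference, the same fix is visible in the generated input deck: stress-displacement bricks such as C3D8 trigger the error, while heat transfer analyses need the diffusive DC element family. A hedged illustration of the corrected element definition in the .inp file (element and node numbers are made up):

*Element, type=DC3D8
1, 1, 2, 3, 4, 5, 6, 7, 8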