Drawing a data relationship diagram using networkx and matplotlib

I'm trying to draw a data relationship diagram. I've modeled my input data as triples (subject, predicate, object), e.g. (app, 'consumes', entity), (app, 'masters', entity), etc.
Each triple gives an edge and two nodes. I want to color the different sets of nodes in different colors as well.
I'm struggling with setting the color attribute and with saving the graph to a PNG file at a readable size.
Here's the code:
import networkx as nx
import matplotlib.pyplot as plt

G = nx.DiGraph()
# read input data from file and process it into lists of nodes and edges
# ...
# add nodes - set a different color for each set of nodes ??
G.add_nodes_from(list(entities), node_color='yellow')
G.add_nodes_from(list(sornodes))
G.add_nodes_from(list(consumernodes))
# add edges - set a different color for each set of edges (how do I do this?)
G.add_edges_from(masters)
G.add_edges_from(consumers)
G.add_edges_from(ads)
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, node_size=1700)
nx.draw_networkx_edges(G, pos, arrows=True)
nx.draw_networkx_labels(G, pos)
nx.draw_spring(G)
plt.figure(figsize=(7.195, 3.841))
fig1 = plt.gcf()
fig1.savefig('out.png', dpi=1000)
plt.show()
There is no image in the file. plt.show() pops up the graph in a new window, and another, empty window is opened as well. I'm running this from a bash shell; closing both windows terminates the program.
I need to be able to show sets of nodes in different colors.
I need to be able to show sets of edges in different colors.
I want to be able to render the graph as a large image - it does not need to fit within a monitor.
Thoughts, anyone?
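
As an aside on the blank file and the extra empty window: plt.figure(figsize=...) is called after the drawing functions, so it opens a brand-new, empty figure, and plt.gcf() then returns (and savefig saves) that empty figure. A minimal reordering sketch, assuming the same G built above:

import networkx as nx
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(30, 20))   # create the (large) figure first
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, node_size=1700)
nx.draw_networkx_edges(G, pos, arrows=True)
nx.draw_networkx_labels(G, pos)
fig.savefig('out.png', dpi=200)      # save before plt.show()
plt.show()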

draw_networkx_nodes(...) does not seem to like multiple calls with different sets of nodes and a different color for each set; it uses the last color specified for all the nodes in the graph.
The solution is to call draw_networkx_nodes once and pass a list of colors, one per node, with the same length as the number of nodes.
# s, c and a are 3 lists of nodes
nodes = []
nodecolor = []
for x in s:
    nodes.append(x)
    nodecolor.append('r')
for x in c:
    nodes.append(x)
    nodecolor.append('g')
for x in a:
    nodes.append(x)
    nodecolor.append('y')
G.add_nodes_from(nodes)
nx.draw_networkx_nodes(G, pos, node_color=nodecolor, node_size=1700)
I'm sure I could optimize the code for creating the lists.
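
The same single-call trick should also cover the edge colors asked about in the question: draw_networkx_edges accepts an edge_color list parallel to edgelist. A minimal sketch, assuming masters, consumers and ads are the edge lists from the question and pos is the layout computed above:

edges = []
edgecolor = []
for e in masters:
    edges.append(e)
    edgecolor.append('b')
for e in consumers:
    edges.append(e)
    edgecolor.append('g')
for e in ads:
    edges.append(e)
    edgecolor.append('k')
G.add_edges_from(edges)
nx.draw_networkx_edges(G, pos, edgelist=edges, edge_color=edgecolor, arrows=True)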

Related

EMGUCV shape matching for desktop images

In a commercial application (based on opencvsharp) I've seen a feature called "Shape-Matching".
I'd like to know how this works and have been struggling with it for two weeks.
The functionality is to define an area on the screen (e.g. the Chrome logo / Excel logo), generate shapes from this image, and then search the whole screen image (also as shapes ...) for it.
(I know template matching and homography are great for this, but this is meant as a challenge to do via shapes!)
The first try was:
' load image
Dim IMG = New Image(Of Bgr, Byte)("Excel.jpg")
' optional scale
IMG = IMG.Resize(2.0, Inter.Linear)
' optional deblur (multiple methods tested: Gaussian, Median, ...)
' do Canny thresholding (the result is a single-channel edge image)
Dim edges = IMG.Canny(75, 50)
' do thinning
XImgproc.XImgprocInvoke.Thinning(edges, edges, ThinningTypes.GuoHall)
' do contour detection
Dim contours As New VectorOfVectorOfPoint()
Dim h As New Mat()
CvInvoke.FindContours(edges, contours, h, RetrType.List, ChainApproxMethod.ChainApproxSimple)
' optionally run ApproxPolyDP
' iterate all contours of the logo to find it in the whole screen image
Dim distance = CvInvoke.MatchShapes(contours(i), contours2(i2), ContoursMatchType.I2)
This all works fine for easy cases ... but not for logos like these.
The problems
Shape detection in general seems to be optimized for searching for very simple objects like lines and objects with very few shape points (YouTube and Google also only show such simple 'outline, stop sign, coin' examples). But I know it can work (and very fast) ...
The search criterion can consist of more than one shape, and each shape can have more than 100 points. In some tests the detection of the logo was not consistent between two identical logos in the source image, even though the shapes "look" the same (size, rotation, shape); I think the number of points was the problem here ...
The shape match will find each part of the image (like the X or the a) from the CSV/Excel logo somewhere in the whole screen image, but all the shapes need to be close together. It is of course possible that one shape is fully closed and contains another shape, which can be accessed via the result tree - but ...
There is no fixed case/logo - it can be anything on the screen, in any size and form.
If the shapes are found, and close together, e.g. the "X and a" from the Excel logo should remain exactly or nearly in this order - and shape detection should always be scale and rotation invariant ...
Hopefully you have some suggestions for how such shape matching can be accomplished.
Tests
Tried to optimize shape generation via scaling (a factor of 2 gives better results - downscaling is bad ...); in some cases open pixels get closed then ...
Blurring and morphological operations mostly lead to less accurate results in my case.
Histogram equalization cannot be applied to two different images in the same way; avoid this completely!
Tried matching algorithms like the Hough transform, but my results are really bad in all my cases.
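
For reference only (this is not a fix for the robustness problems described above), the same Canny → contours → MatchShapes pipeline in Python/OpenCV looks roughly like this; the screenshot file name is hypothetical and the thinning step is omitted:

import cv2

def contours_of(path):
    # Canny edges, then contour extraction, as in the VB.NET code above
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 75)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return contours

logo_contours = contours_of("Excel.jpg")      # the logo to search for
screen_contours = contours_of("screen.png")   # hypothetical full-screen capture

# matchShapes compares Hu moments, so it is scale- and rotation-invariant;
# a small distance means the two contours have a similar shape.
for i, lc in enumerate(logo_contours):
    distances = [cv2.matchShapes(lc, sc, cv2.CONTOURS_MATCH_I2, 0.0)
                 for sc in screen_contours]
    if distances:
        print(i, min(distances))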

Graph-tool vertex label with add_edges_from

I tried my best to find a solution online, but cannot seem to find one for my graph_tool network. I have a pandas DataFrame df with three columns: id_source, id_target, weight. The id_source and id_target columns contain names in text form, and I want to use them as vertex labels. However, when adding edges from a pandas DataFrame, I must set hashed=True. The code then looks like this:
from graph_tool.all import *
g = Graph()
eweight = g.new_ep('double')
vmap = g.add_edge_list(df.values, eprops=[eweight], hashed=True)
My objective is to draw this small network with vertex labels. I am stuck and can't figure out how to add that vertex property when I do not know the order in which each new node is introduced into the graph by the g.add_edge_list() function.
Should I add the vertices first and then call add_edge_list on the graph? Will the vertex names (or labels) be recognized in the second step?
From the documentation for Graph.add_edge_list:
Add a list of edges to the graph, given by edge_list, which can be an iterator of (source, target) pairs where both source and target are vertex indexes, or a numpy.ndarray of shape (E,2), where E is the number of edges, and each line specifies a (source, target) pair. If the list references vertices which do not exist in the graph, they will be created.
You have passed df.values for edge_list, which has three columns (i.e. a shape of (E, 3)), but two columns are expected (a shape of (E, 2)).
Try this:
g.add_edge_list(df[['id_source', 'id_target']].to_numpy(), eprops=[eweight], hashed=True)
I suspect that eprops should be df['weight'], but I am not sure.
The answer is in the vmap; it has been there all along. When you index the vmap, you get the label names:
vmap[0] is the label of the first vertex recorded,
vmap[1] of the second, and so on.
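
So, to actually draw the labels, the vmap returned by add_edge_list can be passed straight to graph_draw as the vertex_text property. A minimal sketch, assuming the df from the question:

from graph_tool.all import Graph, graph_draw

g = Graph()
eweight = g.new_ep('double')
# hashed=True makes add_edge_list return a vertex property map
# holding the original id_source / id_target names
vmap = g.add_edge_list(df.values, eprops=[eweight], hashed=True)

# use those names as vertex labels when drawing
graph_draw(g, vertex_text=vmap, output='network.pdf')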

How to vertically align dagre parent nodes in cytoscape.js

I have a graph made in cytoscape.js, using the dagre extension, that looks like this:
(image: Graph)
How can I get the parent nodes to line up vertically? Since applying a separate layout to only parent nodes does not work (it applies to all nodes), I am stumped.
Unfortunately these are all poorly maintained visualization algorithms, so they don't have many features.
I suggest you open an issue in the algorithm's repository explaining how it could be improved - in this case, that you would like a nicer-looking layout.
https://github.com/cytoscape/cytoscape.js-dagre
You can also contribute to the dagre project and add this aesthetic criterion to the layout yourself.
Finally, if you want a nicer result you can apply a tweak to the graph after the layout has run: think of an algorithm that lines the parent nodes up vertically and then apply it in code.
For example, to keep nodes close to their parent while keeping a good aspect ratio, you can place the nodes at level n + 1 at the barycenter of their parents at level n.
(Let me know if I have made this clear.)
I saw from the picture that you have groups, and the nodes within a group have different parents, so if you align the nodes with their parents you could end up with:
nodes that overlap
overlapping groups
groups that are too wide
(Let me know if I have made the problem clear.)
As a reminder, here is how to position nodes in cytoscape.js:
cyGraph.startBatch(); // batch the changes and apply them only once at the end
// random layout as an example - use your own positions
cyGraph.nodes().positions((node, i) => {
  return {
    x: Math.random() * cyGraph.width(),
    y: Math.random() * cyGraph.height(),
  };
});
cyGraph.endBatch();

MDAnalysis selects atoms from the PBC box but does not shift the coordinates

MDAnalysis distance selection keywords like 'around' and 'sphzone' select atoms from periodic images (I am using a rectangular box).
universe.select_atoms("name OW and around 4 (resid 20 and name O2)")
However, the coordinates of atoms selected from a neighbouring periodic image still reside on the other side of the box. In other words, I have to manually translate those atoms to ensure that they actually are within the 4 Angstrom distance.
Is there a selection feature that achieves this using the select_atoms function?
If I understand correctly, you would like to get the atoms around a given selection in the periodic image that is closest to that selection.
universe.select_atoms does not modify the coordinates, and I am not aware of a function that gives you exactly what you want. The following function could work for an orthorhombic box like yours:
import numpy

def pack_around(atom_group, center):
    """
    Translate atoms to their periodic image closest to a given point.

    The function assumes that the center is in the main periodic image.
    """
    # Get the box for the current frame
    box = atom_group.universe.dimensions
    # The next steps assume that all the atoms are in the same
    # periodic image, so let's make sure it is the case
    atom_group.pack_into_box()
    # AtomGroup.positions is a property rather than a simple attribute.
    # It does not always propagate changes very well, so let's work with
    # a copy of the coordinates for now.
    positions = atom_group.positions.copy()
    # Identify the *coordinates* to translate.
    sub = positions - center
    culprits = numpy.where(numpy.sqrt(sub ** 2) > box[:3] / 2)
    # Actually translate the coordinates.
    positions[culprits] -= (box[culprits[1]]
                            * numpy.sign(sub[culprits]))
    # Propagate the new coordinates.
    atom_group.positions = positions
Using that function, I got the expected behavior on one of the MDAnalysis test files. You need MDAnalysisTests to be installed to run the following piece of code:
import numpy
import MDAnalysis as mda
from MDAnalysisTests.datafiles import PDB_sub_sol
u = mda.Universe(PDB_sub_sol)
selection = u.select_atoms('around 15 resid 32')
center = u.select_atoms('resid 32').center_of_mass()
# Save the initial file for later comparison
u.atoms.write('original.pdb')
selection.write('selection_original.pdb')
# Translate the coordinates
pack_around(selection, center)
# Save the new coordinates
u.atoms.write('modified.pdb')
selection.write('selection_modified.pdb')
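
As a quick sanity check (a sketch, not part of the original answer), you can verify the packing by computing plain Cartesian distances, i.e. without passing a box, between the reference residue and the packed selection:

from MDAnalysis.lib.distances import distance_array

# No box argument, so no periodic images are considered; before
# pack_around, atoms sitting in a neighbouring image would show up
# here with distances on the order of the box length.
ref = u.select_atoms('resid 32').positions
d = distance_array(ref, selection.positions)
print(d.min(axis=0).max())  # should now be close to the 15 A cutoff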

MeshLab - how to transfer UVs from source .objs onto a Poisson reconstruction model

I've been struggling for some time to find a way in Meshlab to include or transfer UV’s onto a poisson model from source meshes. I will try to explain more of what I’m trying to accomplish below.
My source meshes have uv’s along with texture data. I need to build a fused model and include the texture data. It is for facial expression scan data reconstruction for a production pipeline which ultimately builds a facial rig for animation. Our source scan data includes marker information which we use to register, build a fused scan model which is used to generate a retopologized mesh for blendshapes.
Previously, we were using David3D. http://www.david-3d.com/en/support/downloads
David 3D used poisson surface reconstruction to create a fused model. The fused model it created brought along the uvs and optimized the source textures into 1 uv tile. I'll post a picture of the result below that I'm looking to recreate in MeshLab.
My reason for needing this solution in MeshLab is to build tools to help automate this process. David3D version 5 does not have a development kit to program around.
Is it possible in MeshLab to apply the UVs from the regions of the source meshes onto the Poisson model? Could I use a filter to transfer them? Reproject them?
Or is there another reconstruction method/process within MeshLab that will keep the UVs?
Here is an image of what the resulting UV parameterization looks like from David. The UVs are white on the left half of the image.
(image: David3D UV Layout Result)
Thank you,
Dan
No, in MeshLab there is no direct way to transfer UV mapping between two layers.
This is because UV transfer is not, in the general case, a trivial task. It is not simply a matter of assigning to the new surface the "closest" UV of the original mesh: this would not work across UV discontinuities, which are present in the example you linked. Additionally, the two meshes should be almost coincident, otherwise you would also have problems in defining the "closest" UV.
There are a couple of ways to do it, but they require manual work and a re-sampling of the texture:
create a UV mapping of the re-meshed model using whatever tool you may have, then resample the existing texture on the new parametrization using "transfer: vertex attributes to Texture (1 or 2 meshes)", using texture color as source
load the original mesh, and using the screenshot function, create "virtual" photos of the model (turn off illumination and do NOT use ortho views), adding them as raster layers, until the model surface has been fully covered. Load the new model, that should be in the same space, and texture-map it using the "parametrization + texturing " using those registered images
In MeshLab it is also possible to create a new texture from the original images, if you have a way to import the registered cameras...
TL;DR: UV coords to color channels → Vertex Attribute Transfer → Color channels back to UV coords
I have had very good results kludging it through the color channels, like this (say you are transferring from layer A to layer B):
Make sure A and B are roughly aligned with each other (you can use the ICP filter if needed).
Select layer A, then:
Texture → Convert Per Wedge UV to Per Vertex UV (if you've got wedge coords)
Color Creation → Per Vertex Color Function, and transfer the tex coords to the color channels (assuming UV range 0-1, you'll want to tweak these if your range is larger):
func r = 255.0 * vtu
func g = 255.0 * vtv
func b = 0
Sampling → Vertex Attribute Transfer, and use this to transfer the vertex colors (which now hold texture coordinates) from layer A to layer B.
source mesh = layer A
target mesh = layer B
check Transfer Color
set distance large enough to not miss any spots
Now select layer B, which contains the mapped vertex colors, and do the opposite that you did for A:
Texture → Per Vertex Texture Function
func u = r / 255.0
func v = g / 255.0
Texture → Convert Per Vertex UV to Per Wedge UV
And that's it.
The results aren't going to be perfect, but in practice I often find them sufficient. In particular:
If the texture is not continuously mapped to layer A (e.g. maybe you've got patches of image mapped to certain areas, etc.), it's very possible for the attribute transfer to B (especially when upsampling) to have some vertices be interpolated across patch boundaries, which will probably lead to visual artifacts along patch boundaries.
UV coords may be quantized by conversion to a color channel and back. (You could maybe eliminate this by stretching U out over all three color channels, then transferring U, then repeating for V -- never tried it though.)
That said, there are a lot of cases it works in.
I may or may not add images / video to this post another day.
PS Meshlab is pretty straightforward to build from source; it might be possible to add a UV coordinate option to the Vertex Attribute Transfer filter. But, to make it more useful, you'd want to make sure that you didn't interpolate across boundary edges in the mapped UV projection. Definitely a project I'd like to work on some day... in theory. If that ever happens I'll post a link here.