I need to show things moving between nodes along their connection paths, similar to this project. I haven't been able to find any examples of this in Cytoscape, but I have used Cytoscape in the past and would prefer to keep using it for this as well. I would appreciate recommendations on how to approach this problem.
You've got a few options...
The easiest is the Marquee visual style. It produces a "marching ants" illusion in the direction of directed edges. Simply go to the Styles tab in the Control Panel and select the "Marquee" style. In the EDGE tab, you can choose from 3 different Marquee Line Types. You could imagine mapping these 3 line types to 3 categories (or bins) of traffic density, for example. Or you could use color, thickness and/or transparency in combination with a marquee style to represent traffic density. You can see an example here:
https://youtu.be/MF0zsxEPoPc?t=44
There's also an app for animation, CyAnimator! It takes the approach of interpolating any visual style (including position and existence) between any set of key frames you provide. So, for example, you would have a start and finish frame and then CyAnimator would make a movie file for you:
http://apps.cytoscape.org/apps/cyanimator
And yet another completely different approach: with the scripting capabilities of Cytoscape, you can pretty much do whatever you want. The unit tests for the RCy3 package, for example, end up being an almost psychedelic display of data vis potential (and the unit tests aren't even at full coverage, shame). So you could direct your own animations in real time with a bit of scripting in R or Python (see the rough sketch after the links below). Here's the RCy3 unit test demo and links to the scripting libs:
https://www.youtube.com/watch?v=IXqbdlUnzUE&t=1s (caution: flashing graphics)
https://bioconductor.org/packages/release/bioc/html/RCy3.html
https://py2cytoscape.readthedocs.io/en/latest/
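To give a flavour of the scripting route, here is a rough, untested sketch that animates edge widths from Python by talking to Cytoscape's CyREST interface directly with requests. It assumes a default local Cytoscape session (CyREST on port 1234) with one network loaded; the exact endpoint paths, payload shape and visual-property name are my assumptions and may need adjusting for your Cytoscape version, and the same idea can be expressed through RCy3 or py2cytoscape instead.

# Sketch: pulse edge widths over time to suggest "traffic" flowing along edges.
# Assumes Cytoscape is running locally with CyREST on its default port (1234)
# and at least one network with a view is loaded. Endpoints and visual-property
# names are assumptions; check the CyREST docs for your version.
import time
import requests

BASE = "http://localhost:1234/v1"

# Grab the first available network and its edge SUIDs.
net_suid = requests.get(f"{BASE}/networks").json()[0]
edges = requests.get(f"{BASE}/networks/{net_suid}/edges").json()

for step in range(20):
    width = 2 + 8 * abs((step % 10) - 5) / 5  # simple triangle wave
    payload = [
        {"SUID": e, "view": [{"visualProperty": "EDGE_WIDTH", "value": width}]}
        for e in edges
    ]
    # Set a view bypass for every edge, one "frame" per iteration.
    requests.put(f"{BASE}/networks/{net_suid}/views/first/edges", json=payload)
    time.sleep(0.2)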
I'm using cytoscape.js with meteor.js. My elements, stylesheet and vehicles (shown as red dots) are stored in mongo, and can be updated via an external process or edited on-screen. The graph can be restructured or reshaped on the fly, and the vehicles will discover the new least-cost route to reach their target. Moves are queued with eles.animate(), and routing is handled by eles.floydWarshall().path(). This might be similar to what you had in mind.
How are we supposed to implement the environment's render method in gym, so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, in which circumstances would those videos be black?
To give more context, I was trying to use the gym's wrapper Monitor. This wrapper writes (every once in a while, how often exactly?) to a folder some .json files and an .mp4 file, which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method. In my specific case, I am using a simple custom environment (i.e. a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (because I am also printing it with matplotlib's imshow). So, maybe Monitor doesn't produce those videos from the render method's return value. So, how exactly does Monitor produce those videos?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question depends also on the type of environment, but I would like to have some guidance)
This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}
    ...
You can change the desired framerate, of course; I don't know whether every framerate will work, though.
Then I changed my render method. Depending on the input parameter mode: if it is rgb_array, the method returns a three-dimensional NumPy array, which is just a "numpyfied" PIL.Image() (np.asarray(im), with im being a PIL.Image()).
If mode is human, just display the image or do something else to show your environment the way you like it.
As an example, my code is
def render(self, mode='human', close=False):
    # Render the environment to the screen
    # (assumes `import numpy as np` and `import matplotlib.pyplot as plt` at module level)
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix.
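For a grid world like yours, one way to obtain im (the placeholder above) is to blow the state array up into a PIL image. This is just a hedged sketch: it assumes your state is a small 2D NumPy array with values in [0, 1], and the helper name state_to_image is my own, not part of gym.

# Sketch: turn a small grid-world state into an image suitable for rgb_array mode.
# Assumes `state` is a 2D NumPy array with values in [0, 1]; adapt the scaling
# to your own encoding.
import numpy as np
from PIL import Image

def state_to_image(state, scale=32):
    gray = (state * 255).astype(np.uint8)      # 2D -> grayscale
    rgb = np.stack([gray] * 3, axis=-1)        # grayscale -> RGB
    im = Image.fromarray(rgb, mode="RGB")
    # Nearest-neighbour resampling keeps the cells as crisp squares when enlarged.
    return im.resize((im.width * scale, im.height * scale), resample=Image.NEAREST)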
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so I can't help with those.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link, which helped me with that.
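In case it helps, here is a minimal usage sketch. If I recall correctly, the default schedule records episodes whose index is a perfect cube (0, 1, 8, 27, ...) up to 1000 and then every 1000th episode; video_callable lets you override that. "MyMaze-v0" is just a placeholder for your own registered environment id, and this uses the gym 0.18-era Monitor API.

# Sketch: wrap an environment so every episode is recorded, assuming the env's
# render(mode='rgb_array') returns an RGB array as shown above.
# "MyMaze-v0" is a placeholder id.
import gym
from gym.wrappers import Monitor

env = Monitor(gym.make("MyMaze-v0"),
              directory="./videos",
              force=True,                                # overwrite old recordings
              video_callable=lambda episode_id: True)    # record every episode

obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()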
As a final side note, video saving with a gym Monitor wrapper does not work because of a mis-indentation bug (as of today, 30/12/20, gym version 0.18.0); if you want to fix it, do what this guy did.
(I'm sorry if my English sometimes feels weird; feel free to correct me harshly.)
I am trying to explode/destroy only some part of an object.
The Blender 2.82 manual page
https://docs.blender.org/manual/en/2.82/physics/particles/emitter/emission.html
says: "You may use vertex groups to confine the emission, that is done in the Vertex Groups panel."
So, it must be possible.
As a test, I created a Blender file, attempting to explode/destroy only the left ear of Suzanne using the Explode modifier.
I tried the following:
Added a monkey object ("Suzanne").
Applied "Subdivision Modifier" with "Simple" subdivision algorithm.
Created a vertex group named "VtxGroup_Suzanne__All_vertices_in_left_ear", which contains all vertices in Suzanne's left ear.
Created a particle system.
Enabled Rotation.
In "Density" field under "Vertex Groups", entered "VtxGroup_Suzanne__All_vertices_in_left_ear".
In "Render As" filed under "Render", chose "Object".
Added "Explode" modifier.
This modifier has a "Vertex Group" field, but it seems to make no difference in the result (probably because I do not know how to use it properly?).
At this point, when I play the animation, particles erupt out of Suzanne's left ear, breaking down Suzanne little by little.
However, the destruction is not limited to the left ear. The entire Suzanne starts breaking down.
Some of the pieces are really big or unnaturally long, such as the almost-half of Suzanne's face shown in the screenshot.
Is there any way I can limit the destruction to the left ear only (i.e., to the vertex group "VtxGroup_Suzanne__All_vertices_in_left_ear")?
Also, can I adjust the sizes of the destruction pieces, so that none of them are too big or too long?
I tried a whole bunch of settings, but I could not find the solution. Maybe I am attempting this completely wrong? Is there some way to accomplish this in a completely different way?
This test file is found here (zipped):
Test file for Explode modifier with Vertex Group
Thank you in advance for your help.
Try splitting the object. If you want to animate the object beforehand, construct it out of two separate objects grouped together, and then ungroup them at or before the keyframe where you want the explosion. I hope this helps!
:)
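If it helps, here is a rough bpy sketch of that splitting idea: select the vertices in the ear group, separate them into their own object, and put the particle system and Explode modifier on just that piece. The object and group names are taken from the question; everything else (modifier names, particle count, which selected object is the new one) is an assumption and untested against your file.

# Sketch (Blender 2.82 Python API): separate the ear vertex group into its own
# object so only that piece gets the particle system + Explode modifier.
import bpy

obj = bpy.data.objects["Suzanne"]
group = "VtxGroup_Suzanne__All_vertices_in_left_ear"

# Select only the vertices in the group and split them off into a new object.
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
obj.vertex_groups.active = obj.vertex_groups[group]
bpy.ops.object.vertex_group_select()
bpy.ops.mesh.separate(type='SELECTED')
bpy.ops.object.mode_set(mode='OBJECT')

# The newly separated piece (the order of selected_objects may vary).
ear = bpy.context.selected_objects[-1]

# Particle system + Explode on the ear only. Fragment size is controlled mainly
# by the particle count and the mesh's subdivision: more particles (and/or more
# subdivision on the ear) give smaller, more even pieces.
ear.modifiers.new("EarParticles", type='PARTICLE_SYSTEM')
ear.particle_systems[0].settings.count = 500
explode = ear.modifiers.new("EarExplode", type='EXPLODE')
explode.use_edge_cut = True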
I am trying to understand the .mesh files, usually generated for mesh visualization with Medit.
The documentation is here, but it is in French.
What I understand is that after every line describing an object in the file (vertex, triangle, tetrahedron, etc.) there comes a ref variable; in the example files I have, its values are usually 0, 1, 2 or 3, and I don't understand what their purpose is.
Can somebody please explain this?
You can get a .mesh example here.
Each reference corresponds to a color in Medit. The colors are arbitrary, and can be changed in Medit (using the GUI or changing a configuration file).
The reference values in the .mesh file refer to a color index. Maybe the program uses this to display the vertices, triangles and tetrahedra with certain colors. You can ignore this value for all practical purposes.
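To make the layout concrete, here is a small sketch that reads the Vertices block of an ASCII .mesh file and picks up the trailing ref on each record. The filename is a placeholder, and the details are my reading of the MEDIT layout (a keyword, a count, then one record per line ending in the reference), assuming Dimension 3.

# Sketch: read the Vertices block of an ASCII MEDIT .mesh file.
# Each vertex record is "x y z ref"; the trailing integer is the reference
# (color index) discussed above. "example.mesh" is a placeholder filename.
def read_vertices(path):
    with open(path) as f:
        tokens = f.read().split()
    i = tokens.index("Vertices")
    count = int(tokens[i + 1])
    verts = []
    for k in range(count):
        x, y, z, ref = tokens[i + 2 + 4 * k : i + 2 + 4 * (k + 1)]
        verts.append((float(x), float(y), float(z), int(ref)))
    return verts

for x, y, z, ref in read_vertices("example.mesh")[:5]:
    print(f"vertex at ({x}, {y}, {z}) with ref {ref}")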
Consider the following: I have paragraph data being sent to a view, which needs to be placed over a background image that has fixed elements at the top and the bottom (Fig. 1).
Fig1.
My thought was to split this into 4 labels (Fig. 1, example 2). My question is how I can get the text to flow through labels 1–4, given that labels 1, 2 & 3 are of fixed height. I assumed that label 3 should be populated prior to 4, hence the layout in the attached diagram.
Can someone suggest the best way of doing this with maybe an example?
Thanks
Wish I could help more, but I think I can at least point you in the right direction.
First, your idea seems very possible, but it would involve lots of text-size calculations that would be ugly and might not produce ideal results. The way I see it working is a binary search, testing portions of your string with sizeWithFont: until you get the best guess for what will fit into the label at that size and still look "right". Then you have to actually break up the string and track it in pieces... it just seems wrong.
In iOS 6 (which unfortunately doesn't apply to you right now, but I'll post it as a potential benefit to others), you could probably use one UILabel and an NSAttributedString. There would be a couple of options to go with here (I haven't done it, so I'm not sure which would be the best), but it seems that if you can format the page with HTML, you can initialize the attributed string that way.
From the docs:
You can create an attributed string from HTML data using the initialization methods initWithHTML:documentAttributes: and initWithHTML:baseURL:documentAttributes:. The methods return text attributes defined by the HTML as the attributes of the string. They return document-level attributes defined by the HTML, such as paper and margin sizes, by reference to an NSDictionary object, as described in “RTF Files and Attributed Strings.” The methods translate HTML as well as possible into structures of the Cocoa text system, but the Application Kit does not provide complete, true rendering of arbitrary HTML.
An alternative here would be to just use the available attributes, setting line indents and such according to the image size. I haven't worked with attributed strings at this level, so the best reference would be the developer videos and the programming guide for NSAttributedString. https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/AttributedStrings/AttributedStrings.html#//apple_ref/doc/uid/10000036-BBCCGDBG
For earlier versions of iOS, you'd probably be better off becoming familiar with CoreText. In the end you'll be rewarded with a better-looking result, reusability/flexibility, and the list goes on. For that, I would start with the CoreText programming guide: https://developer.apple.com/library/mac/#documentation/StringsTextFonts/Conceptual/CoreText_Programming/Introduction/Introduction.html
Maybe someone else can provide some sample code, but I think just looking through the docs will give you less of a headache than trying to calculate 4 labels like that.
EDIT:
I changed the link for CoreText
You have to go with CoreText: create your AttributedString and a CTFramesetter with it.
Then you can get a CTFrame for each of your textboxes and draw it in your graphics context.
https://developer.apple.com/library/mac/#documentation/Carbon/Reference/CTFramesetterRef/Reference/reference.html#//apple_ref/doc/uid/TP40005105
You can also use a UIWebView.
I am aware of the ability to use an EdgeShapeTransformer to change the look of edges:
vv.getRenderContext().setEdgeShapeTransformer(new EdgeShape.Line()); // for example
However, I am looking for how to change the way the line looks while dragging from one node to another to create an edge interactively. By default, the 'hovering' edge that is not yet linked to another node is a large curved line. See the example here for what I mean.
CubicCurveEdgeEffects is where it is done. There is an EdgeEffects interface that can be implemented to do other things instead. It is used by the SimpleEdgeSupport class via the EditingGraphMousePlugin.
(Credit to Tom Nelson, offline communication.)