I have a rather large graph of about 50 nodes and 100 edges, represented in the DOT graph description language, which I want to display on a mobile device using React Native. Ideally, I would like to be able to fine-tune the layout algorithm, e.g. the parameters of the Fruchterman–Reingold algorithm for force-directed graph drawing.
So far, I figured out two approaches:
Convert my input data such that it can be digested by react-graph-vis and plot my network on the fly. For adjusting the parameters of the drawing algorithm, things might get tricky (see the sketch below).
Generate an SVG on the server using Graphviz, and on the mobile device just display the SVG so that it can be zoomed and scrolled. For this I found react-native-svg-pan-zoom, whose last commit was two years ago (as of December 2019). This approach has the disadvantage that it does not work offline.
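Regarding the tuning worry in approach 1: react-graph-vis passes its options through to vis-network, whose force-directed solvers expose spring and repulsion parameters roughly comparable to those of Fruchterman–Reingold, though not under that name. A minimal sketch, assuming the DOT input has already been converted to a nodes/edges object; the two nodes are placeholders, and the numbers are vis-network's defaults shown as a starting point for tuning:

import React from 'react';
import Graph from 'react-graph-vis';

// Placeholder data standing in for the converted DOT graph.
const graph = {
  nodes: [{ id: 1, label: 'A' }, { id: 2, label: 'B' }],
  edges: [{ from: 1, to: 2 }],
};

const options = {
  physics: {
    enabled: true,
    solver: 'barnesHut', // alternatives: 'forceAtlas2Based', 'repulsion'
    barnesHut: {
      gravitationalConstant: -2000, // strength of node repulsion
      centralGravity: 0.3,          // pull toward the layout center
      springLength: 95,             // ideal edge length
      springConstant: 0.04,         // edge stiffness
      damping: 0.09,                // velocity damping per iteration
    },
    stabilization: { iterations: 200 }, // settle before first paint
  },
};

const MyNetwork = () => <Graph graph={graph} options={options} />;
export default MyNetwork;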
Which other alternatives do I have? Did I miss some out-of-the-box solution? Any help is appreciated!
I need help designing a game where characters
have universal actions (sit, jump, etc.) that are the same across all characters; roughly 50 animations
unique attack patterns (different attacks); roughly 6 animations per character
item usage attacks (the same across all characters); roughly 4 animations per item, which could scale to 500+
What would be the best way to design this? I use Blender for animations, and I just started a week ago.
I'm thinking of either using one model for everything and limiting its actions, or creating multiple models and importing those separately. Any help is appreciated!
Edit: I'm also considering optimization, since I don't want lag; I'm making an MMO-like game.
There is an initial release (MIT License) of the module GodotAnimationRetargeting that I referenced in the comments. Update: there is a GDScript version now.
Usually in Godot you have an animation player with the animations tied to a given model, which means you would have to add them for all the models. However, this module allows you to apply animations from one animation player to another model. You can also apply them partially (e.g. only the rotation, position, or scaling of bones).
That should help you have a common set of animations applied to different models.
Being a module, it requires compiling Godot with it. See Compiling in the Godot docs.
This is entirely a theoretical question, because I understand the time it would take to do such a thing would be ridiculous.
I've been working with "voxels" a lot lately, and the only way I can display them to a user is to either triangulate the visible surfaces or write a CPU ray tracer, but both come with their own problems.
Simply put: if we dismiss the storage space needed for voxel meshes and targeted a very specific GPU, could someone who wanted to create a graphics API like OpenGL, but with "true" voxel primitives that don't need to be converted, actually build such a thing? Or are GPUs designed specifically for triangles, with no way to introduce a new base primitive?
It's possible, and it has already been done many times:
games like Minecraft, Space Engineers, ...
3D printing tools and slicers
MRI/PET scan tools
Yes, rendering on the GPU is possible with both of the base methods you mention. Games usually use the conversion to boundary-representation 3D geometry. With the rise of shaders, even ray tracers are now possible; here is mine:
simple GLSL voxel ray tracer
It uses the native OpenGL architecture and passes the geometry as a 3D texture. In order to obtain speed you need to add a BVH or a similar spatial subdivision of the geometry...
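For intuition, the core of such a tracer is just a walk through the grid. Below is a CPU-side JavaScript sketch of the classic Amanatides-Woo traversal (my own illustration, not code from the linked tracer); on the GPU the same loop runs per pixel in a fragment shader, sampling the 3D texture instead of an array:

// CPU sketch of Amanatides-Woo voxel grid traversal.
// grid is a size*size*size nested array; nonzero entries are solid.
// origin and dir are 3-element arrays; dir should be normalized.
function traceVoxel(grid, size, origin, dir, maxSteps = 256) {
  const pos = origin.map(Math.floor);           // current voxel coordinates
  const step = dir.map(d => (d >= 0 ? 1 : -1)); // step direction per axis
  // t at which the ray crosses the next voxel boundary on each axis.
  const tMax = dir.map((d, i) => {
    if (d === 0) return Infinity;
    const boundary = d > 0 ? Math.floor(origin[i]) + 1 : Math.floor(origin[i]);
    return (boundary - origin[i]) / d;
  });
  // t needed to cross one whole voxel on each axis.
  const tDelta = dir.map(d => (d === 0 ? Infinity : Math.abs(1 / d)));

  for (let n = 0; n < maxSteps; n++) {
    const [x, y, z] = pos;
    if (x < 0 || y < 0 || z < 0 || x >= size || y >= size || z >= size) {
      return null;                              // ray left the grid
    }
    if (grid[x][y][z] !== 0) return [x, y, z];  // hit a solid voxel
    // Step into the neighboring voxel across the nearest boundary.
    const a = tMax[0] <= tMax[1]
      ? (tMax[0] <= tMax[2] ? 0 : 2)
      : (tMax[1] <= tMax[2] ? 1 : 2);
    pos[a] += step[a];
    tMax[a] += tDelta[a];
  }
  return null;                                  // no hit within maxSteps
}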
However, voxel-based tools have been around for quite some time. For example, many isometric games/engines are voxel-based (each tile is a voxel), like this one:
Improving performance of click detection on a staggered column isometric grid
Also, do you remember UFO? It was playable on a 286, and it was also a "voxel/tile"-based isometric game.
We have to work with very large models and we're hoping to use the first-person camera to walk through them, and eventually do this in VR. The progressive rendering does wonders for improving perceived responsiveness, but it can be disorienting to have so many items around you disappear as you move.
Is there any way to turn progressive rendering off, but only for the objects closest to the camera? Maybe up to a maximum number of objects, or objects within a given radius of the camera. Everything further away can load in later and flicker during motion, but it would be nice to keep the nearby objects rendered, especially structural objects like stairs. I've often walked towards stairs just to have them disappear in front of me, forcing me to fly to a platform with E and Q instead of walking.
So far I've only found a way to toggle progressive rendering for the entire model on or off with viewer.setProgressiveRendering(bool) but I haven't found a way to customize the rendering behavior.
Per our Engineering team's recommendation, can you try setting rendering targets with
viewer.impl.setFPSTargets(1, 5, 15) //min, target, max
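In the meantime, one workaround might be to toggle progressive rendering off only while the camera is near geometry you care about. An untested sketch (the point of interest and radius are placeholders you would tune; the event and navigation calls are standard Viewer APIs):

// Disable progressive rendering while the camera is within NEAR_RADIUS
// of a point of interest (e.g. the bounding-box center of the stairs).
const NEAR_RADIUS = 10; // model units; placeholder value

function watchCameraDistance(viewer, pointOfInterest) {
  viewer.addEventListener(Autodesk.Viewing.CAMERA_CHANGE_EVENT, () => {
    const near = viewer.navigation.getPosition()
      .distanceTo(pointOfInterest) < NEAR_RADIUS;
    viewer.setProgressiveRendering(!near); // off while close, on when far
  });
}

// Usage: watchCameraDistance(viewer, new THREE.Vector3(10, 0, 2));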
In fact, we've been receiving similar reports from other developers requesting the capability to fine-tune rendering of large models, so our Engineering team is considering options to extend the existing functions and even build them into extensions.
I have a Windows application that currently renders graphics largely using MFC, and I'd like to change it to get better use out of the GPU. Most of the graphics are straightforward and could easily be built up into a scene graph, but some could prove very difficult. Specifically, in addition to the normal mesh-type objects, I'm also dealing with point clouds, which are liable to contain billions of Cartesian points stored in a very compact manner and use quite a lot of custom culling techniques to be displayed in real time (Example). What I'm looking for is a mechanism that does the bulk of the scene rendering to a buffer and then gives me access to that buffer, a z-buffer, and the camera parameters, such that I can modify them before putting them out to the display. I'm wondering whether this is possible with Direct3D or OpenGL, or whether to use a higher-level framework like OpenSceneGraph, and what the best starting point would be. Given the software is Windows-based, I'd probably prefer to use Direct3D, as this is likely to lead to the fewest driver issues, which I'm eager to avoid. OpenSceneGraph seems to provide custom culling via octrees, which are close but not identical to what I'm using.
Edit: To clarify a bit more, currently I have the following:
A display list / scene in memory, which will typically contain up to a few million triangles, lines, and pieces of text, which I cull in software and output to a bitmap using low-performing drawing primitives
A point cloud in memory, which may contain billions of points in a highly compressed format (~4.5 bytes per 3D point), which I cull and output to the same bitmap
Cursor information that gets added to the bitmap prior to output
A camera, z-buffer and attribute buffers for navigation and picking purposes
The slow bit is the software culling and low-performing drawing in item 1, which I'd like to replace with GPU rendering of some kind. The solution I envisage is to build a scene for the GPU, render it to a bitmap (with a matching z-buffer) based on my current camera parameters, and then add my point cloud prior to output.
Alternatively, I could move to a scene-based framework that managed the cameras and navigation for me, and provide the points in view as spheres or splats, based on volume and level of detail, during the rendering loop. In this scenario I'd also need to be able to add cursor information to the view.
In either scenario, the hosting application will be MFC C++ on VS2017, which would require too much work to change for the purposes of this exercise.
It's hard to say exactly based on your description of a complex problem.
OSG can probably do what you're looking for.
Depending on your timeframe, I'd consider eschewing both OpenGL (OSG) and DirectX in favor of the newer Vulkan 3D API. It's a successor to both D3D and OGL, and is designed by the GPU manufacturers themselves to provide optimal performance exceeding both of its predecessors.
The OSG project is currently developing a Vulkan scenegraph known as VSG, which already demonstrates superior performance to OSG and will have more generalized culling ability.
I've worked a bunch with point clouds and am pretty experienced with them, but I'm not exactly clear on what you're proposing to do.
If you want to actually have a verbal discussion about the matter, I'm pretty easy to find (my company is AlphaPixel -- AlphaPixel.com) and you could call us. I'm in the European time zone right now; it's not clear from your question where you are, but you sound US-based.
Is there a programmatic way to convert two images into an animation sequence (e.g., an animated GIF) like the following example?
This image sequence, taken from a http://memrise.com course, doesn't seem to have manually edited frames, but seems to have been automatically transformed using some kind of shape-morphing algorithm. Is there a common term used to describe such an animation or algorithm? Is there a feature in ImageMagick or Photoshop/GIMP that generates such animations, given a pair of images?
Ideally the technique could be scriptable so I could create animations for several pairs of start-end images.
Edit: I have just been told about GIMP's tool under Filters->Animation->Blend, which appears to do the same thing as jQuery morph: each frame i is start + (finish - start)/N*i. In other words, each pixel transitions independently from its start value to its finish value, without any shape morphing. The example given above is more complicated, as it modifies the contours of both images to achieve its compelling effect.
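In code, that blend is nothing more than an independent linear interpolation per pixel; a small JavaScript sketch (assuming flat RGBA arrays such as those returned by a canvas getImageData call):

// Cross-dissolve: frame i = start + (finish - start) / N * i, per pixel.
// start/finish are flat RGBA arrays of equal length; returns N+1 frames.
function blendFrames(start, finish, N) {
  const frames = [];
  for (let i = 0; i <= N; i++) {
    const t = i / N;
    const frame = new Uint8ClampedArray(start.length);
    for (let p = 0; p < start.length; p++) {
      frame[p] = start[p] + (finish[p] - start[p]) * t; // no shape awareness
    }
    frames.push(frame);
  }
  return frames;
}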
Other examples:
http://static.memrise.com/uploads/mems/32000121024054535.gif
http://static.memrise.com/uploads/mems/225428000121109232837.gif
I have written a tool that doesn't require setting manual keypoints and is not restricted to a specific domain (like faces). However, the images have to be similar (e.g. two faces or two cars from the same perspective).
https://github.com/kallaballa/Poppy
There is also a web version created with Emscripten.
I generated the above animation using the following command line:
poppy flame.png glyph.png flame.png
Although this is an old question, since ImageMagick is mentioned, for anyone who comes here from Google it may be worth looking at the ImageMagick script called shapemorph.
GIMP can't do that directly, but over the years a series of (now poorly maintained) plug-ins to do that were released by third parties. The keyword to search for is "morph" - you should find a bunch of stand-alone programs that do it as well, ranging from gratis to full-fledged Free Software, such as xmorph.
Given pairs of vector files (.wmf extension), it is possible to use linear interpolation of shape nodes in Visual Basic for Applications to create frames for GIF animations, though this would take a long time to explain. For some examples see
http://www.giless.co.uk/animatorMorphGIFs.htm (it is like a slideshow)
I have made some improvements since then, as well!
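The interpolation itself is not specific to VBA. Here is the same idea as a JavaScript sketch (the point-list shape format and the index-matched vertices are my assumptions):

// Linear interpolation of shape nodes: both shapes need the same number
// of vertices, matched by index. Returns N+1 polygons, each a list of
// [x, y] points, morphing startShape into endShape.
function morphShapes(startShape, endShape, N) {
  const frames = [];
  for (let i = 0; i <= N; i++) {
    const t = i / N;
    frames.push(startShape.map(([x, y], k) => {
      const [ex, ey] = endShape[k];
      return [x + (ex - x) * t, y + (ey - y) * t];
    }));
  }
  return frames;
}

// Usage: morph a triangle over 10 steps, then rasterize each polygon
// into one frame of the GIF.
// const frames = morphShapes([[0,0],[4,0],[0,4]], [[0,0],[4,2],[2,4]], 10);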