Cytoscape.js: How do I prevent rendered subgraphs from zooming out of view?

I am rendering graphs of hundreds of nodes, in a directed manner from left to right.
Upon first render, all nodes are in the upper left, and then animate over two seconds to fill my svg box. (fit: true, animate: true, animationDuration: 2000).
One thing I notice is that while expanding, the graph is temporarily larger than my svg box, until it snaps into place. It also appears that the "snapping into place", while still smoothly animated, goes "faster" than the initial animation, even if I have easing set to linear.
But I also have functionality that allows me to select a node from the graph (via cxtmenu) and re-graph from that node. The rest of the nodes disappear, and then that subgraph animates into place to fill the svg box.
However, the closer to the lower right that node at the root of that subgraph is, the odder the behavior is. If I pick a subgraph close to the lower right, then while it is zooming in, it actually travels further to the lower right at first, even to the point of going entirely off the graph, until it finally zooms (more quickly) back into place in the center of the svg box.
Conceptually, it feels like this has to do with the initial positioning of the subgraph's root node, since it is not at 0,0. It's almost as if the layout algorithm starts first, and then at some point partway through, the fitting/panning algorithm kicks in at a faster rate.
I am having trouble determining how to control/fix this. On an earlier project with dagre and d3 (without cytoscape), I found that similar behavior was caused by two competing animation algorithms: one for the layout and one for the zoom/positioning. When I set them to the same duration, they canceled each other out and the animation became smooth.
I am using cytoscape-elkjs. I did try out cytoscape-dagre, and its `boundingBox` option did seem to help, but the elkjs extension doesn't have that, and I am wondering if there's something I can do at the cytoscape.js level to control this.
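One workaround that has helped in similar setups is to decouple the two animations suspected of competing: run the layout with animation and fitting disabled, so nodes get their final positions immediately, then drive the fit as a single viewport animation with one duration and one easing. This is only a sketch under assumptions: `cy` is an initialized Cytoscape instance, the `elk` layout name comes from the elkjs extension being registered, and the duration/padding values are placeholders.

```javascript
// Sketch: let the layout position nodes instantly (possibly off-screen),
// then animate the viewport fit as the only animation that runs.
function layoutAndFit(cy, eles) {
  const layout = eles.layout({
    name: 'elk',     // assumes the elkjs layout extension is registered
    fit: false,      // don't let the layout drive the viewport
    animate: false   // place nodes immediately instead of tweening them
  });

  // When the layout has assigned final positions, run a single
  // pan/zoom animation to fit the laid-out elements.
  layout.promiseOn('layoutstop').then(() => {
    cy.animate(
      { fit: { eles: eles, padding: 30 } },
      { duration: 2000, easing: 'linear' }
    );
  });

  layout.run();
}
```

With only one animation in flight, the subgraph can no longer "travel" in the direction of its root node before snapping back, because the layout never animates node positions at all.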

Related

Efficient rendering of many Jetbrains Compose elements at absolute coordinates within a graphics layer

I am trying to render a large number of "nodes" in a freeform sandbox area with Jetbrains Compose. Each node has its own X,Y position. The editor is in a graphicsLayer where it can be panned and scaled. Inside this sandbox area, each node is offset by its X,Y values and then rendered. However, the graphicsLayer has its own size, and when translated/panned far enough that it goes off screen, all "nodes" disappear, since Compose thinks that the bounding box of the graphics layer is no longer on screen and thus the layer does not need to render, even though nodes can be at any offset (even negative offsets) within the graphics layer.
I have tried not translating the graphics layer when panning, and instead offsetting each node by position + pan amount, but this causes a large amount of lag when panning with many nodes, since Compose has to recompose every single node every frame to update its position.
Ideally, I would like the best of both worlds: a graphicsLayer that can be zoomed and panned, but one that skips bounds checking, since that check prevents us from panning the screen very far.
Here is a video: https://imgur.com/a/p60OKyc
Note that the cyan box displays the entire inner bounds of the graphics layer. I'd like for nodes to be able to be placed anywhere, even at negative coordinates.

Scaling different faces of a cube acts differently Blender

I just started to work on my first project in Blender and I already have a problem. It seems as if scaling works somewhat weirdly on my cube.
Here's a GIF to show you what I mean:
The bottom face doesn't change the form of the cube, but the side face does.
What should I do or change to make the bottom face act like the side face?
Here is the result I want:
I believe there is an overlaid face on the bottom face of the house. You probably used extrude on this face before. Every time you use extrude, whether on vertices, edges, or faces, a new one is created. If you right-click or cancel the operation, the newly created geometry is hidden because it overlays the original. Just delete the extra face and vertices.

Dynamic/scrollable container size in Cytoscape (+ cy.fit() issues)

So I've been giving Cytoscape a try recently. My project's goal is basically a collaborative graph that people will be able to add/remove nodes to/from, making it grow in the process. The graph will include many compound nodes.
Most of the examples I've seen use a container div that takes 100% of the screen space. This is fine for "controlled" graphs but won't work in my case, because its size is intended to be dynamic.
Here's a JSFiddle using the circle layout within a fixed 3000px/3000px container:
https://jsfiddle.net/Jeto143/zj8ed82a/5/
Is there any way to have the container size be dynamic as opposed to stating it explicitly? Or do I have to compute the new optimal container size each time somehow, and then call cy.resize()?
edit: actually, using 100%/100% with cy.fit() might just work no matter how large the network is going to be, so please ignore this question if this is the case.
Is there a recommended layout for displaying large/unknown amounts of data in a non-hierarchical way that would "smartly" place nodes (including compound ones) in the most efficient way possible, all the while avoiding any overlap? (I guess that's a lot to ask...)
Why doesn't cy.fit() seem to be working in my example? I'm using it both at graph initialization and when CTRL+clicking nodes (to show closed neighborhood), but it doesn't seem to like the 3000x3000px container (seems better with 100%x100%).
edit: also ignore this question if you ignored 1., as again it seems fine with 100%/100%.
Any help/suggestions would be greatly appreciated.
Thank you very much in advance.
TLDR: It's (1).
Cytoscape has a pannable viewport, like a map. You can define the dimensions of the viewport (div) in CSS. What's displayed in the viewport is a function of the positions of the nodes, the zoom level, and the pan level -- just like what is visible in a map's viewport is a function of zoom, pan, and positions of points of interest.
So either you have to
(a) rethink your UI in terms of zoom and pan and use those in-built facilities in Cytoscape, or
(b) disable zoom and pan in Cytoscape (probably stay at (0, 0) at zoom 1) and let the user scroll the page as you add content to the graph and resize its container div to accommodate the new content.
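For option (b), a minimal sketch (the names and padding value are illustrative, and it assumes the graph is laid out starting near the model origin, since at zoom 1 and pan (0, 0) model coordinates map directly onto the container):

```javascript
// Init options that lock the viewport, so scrolling the page is the
// only way to move around (option (b) above). Spread these into your
// cytoscape({ ... }) init object.
const lockedViewportOptions = {
  zoomingEnabled: false,
  userZoomingEnabled: false,
  panningEnabled: false,
  userPanningEnabled: false
};

// After adding content, grow the container div to cover the graph's
// bounding box, then tell Cytoscape that the div's size changed.
function growContainerToContent(cy, padding) {
  const bb = cy.elements().boundingBox(); // model coords == rendered coords at zoom 1
  const el = cy.container();
  el.style.width  = Math.ceil(bb.x2 + padding) + 'px';
  el.style.height = Math.ceil(bb.y2 + padding) + 'px';
  cy.resize(); // recalculates the viewport for the new div size
}
```

You would call `growContainerToContent(cy, 50)` whenever users add nodes, letting the page's own scrollbars take over the job of panning.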

How can I delete sprites without obstructing the physics in SpriteKit?

I want to make a game in which sprites fall from the sky and stack up on the floor; when there are a lot of layers, the camera will move up so you can continue playing. After a while, more and more rows of sprites will become invisible as the camera moves up. I want to delete these unused sprite nodes to keep performance as good as possible. But if I delete the nodes at the bottom of the stack, won't the entire thing collapse? Or should I detect when the bottom row is unused and then turn off physics for the row above it, so it won't fall down and won't affect the rows above it, or something of that nature?
I haven't actually written any code yet; I just want to have a good idea of what I'm doing before I start the wrong way.
Yeah, I totally agree, you would really have to be clever about it. Setting the background image coordinates and looping the background for a "continuous scroll" effect would be step no. 1. Using particle physics or actually rendering nodes would be step no. 2. The tricky part, like you said, would be getting the ones below the scene destroyed, but I think you could set boundaries and add an if statement that destroys the particles below them. The particles fall down and slowly pile up, but as the scene scrolls upwards, each particle is destroyed when its anchor point goes below the x,y boundaries you set, keeping only those still visible in the scene alive. That would be my way of going about it.

Dojo dnd: Avatar positioning

Is it possible to change the positioning of the avatar with dojo toolkit's dnd api? At the moment, when dragging, the avatar of the dragged item appears to the right and below the mouse cursor. I want it to be in the same position as the mouse cursor. I ran some usability tests on my application, and most people seem to attempt to try and drag the avatar into the drop area, as opposed to moving the cursor over the drop area. Any input would be nice. Thanks!
Sorry, not possible for technical reasons.
UPDATE: by popular demand, here are the technical reasons:
When you have a node right under the mouse, the node gets all mouse events.
The mouse events bubble up the parent chain.
Now imagine that you move this node with the mouse — this node would always get all mouse events.
It means that any other node, e.g., a target cannot get mouse events unless it is a parent of the moved node. Typically this is not the case.
But I know that other people can do it! It should be possible! Yes, it is possible … in principle:
Let's register all target nodes.
Let's catch relevant mouse move events directly on the topmost parent (the document).
When we detect a drag operation, let's do the following:
Calculate geometry (bounding boxes) of all targets.
On every mouse move, let's check if the current mouse position overlaps with a target. Bonus points for an "A+" student: detect overlaps with other nodes, e.g., when a target is partially obscured for cosmetic reasons, and handle this situation correctly.
If the current mouse position overlaps with a target, let's initiate "drop is possible" actions, e.g., show some cues so the end user knows that she can drop now.
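The steps above can be sketched in plain JavaScript (names are illustrative; in a real page you would call `findDropTarget` from a `mousemove` handler registered on `document`, and drive the "drop is possible" cues from its result):

```javascript
let cachedBoxes = [];

// At drag start: compute the geometry of all registered targets once
// and cache it, since recomputing on every move would be too expensive.
function onDragStart(targetNodes) {
  cachedBoxes = targetNodes.map(node => ({
    node,
    box: node.getBoundingClientRect()
  }));
}

// On every mouse move: the O(n) linear scan over cached boxes that the
// answer describes. Returns the target under (x, y), or null.
function findDropTarget(x, y) {
  for (const { node, box } of cachedBoxes) {
    if (x >= box.left && x <= box.right && y >= box.top && y <= box.bottom) {
      return node;   // drop is possible here -- show cues for this target
    }
  }
  return null;       // not over any registered target
}
```

This sketch also makes the scaling problem concrete: every extra target adds another box to scan on every single mouse move.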
Why doesn't Dojo do that? For a number of technical reasons (finally we got there!):
A node's geometry calculations are notoriously buggy in most browsers. As soon as tables are involved, or any other non-trivial means of placement, you cannot be 100% sure that the bounding box is correct.
Geometry calculation is an expensive operation, and we have to do it at least once per drag operation for all targets, assuming that no changes are made during the drag (not always the case). A browser may reflow nodes for many reasons ⇒ it can move/resize existing targets, so we have to be vigilant.
Typically the calculated boxes are kept in a list ⇒ checking the list for intersections is O(n) (linear) ⇒ doesn't scale well as the number of targets grows.
All mouse event handlers should be fast, otherwise a browser's mouse event handling facility can be "broken" leading to unpredictable side-effects. See the previous points for reasons why mouse event processing can be slow.
Improving on the linear search is possible, e.g., 2D spatial trees can be used, but it leads to more (much more) JavaScript code ⇒ more stuff to download on the client side ⇒ typically it isn't worth it.
How do I know that? Because Dojo used to have this kind of drag'n'drop in earlier versions, and we got sick and tired of fighting the problems I described above. Any improvement was an uphill battle, which increased the code size. Finally we decided against reinventing and replicating mechanisms already built into a browser. A browser does virtually the same work: it calculates the geometry of nodes, finds the underlying node, and dispatches a mouse move event appropriately.
The current implementation doesn't use mouse move events and does not calculate geometry. Instead it relies on mouse over/out events detected by targets after a drag has started. It works reliably and scales well.
Another wrinkle in this story: Dojo treats targets as containers — a very common use case (shopping carts, rearranging items, editing hierarchies). Linear containers and generic trees are implemented at the moment, custom containers are possible. When dragging and dropping you can see and drop dragged items in a proper position within a target container, e.g., inserting them between existing items. Implementing this feature using geometric calculations and checks would be prohibitively expensive.