Efficient rendering of many Jetbrains Compose elements at absolute coordinates within a graphics layer - kotlin

I am trying to render a large number of "nodes" in a freeform sandbox area with Jetbrains Compose. Each node has its own X,Y position. The editor lives in a graphicsLayer so it can be panned and scaled. Inside this sandbox area, each node is offset by its X,Y values and then rendered. However, the graphicsLayer has its own size, and when it is translated/panned far enough to go off screen, all "nodes" disappear: Compose decides that the layer's bounding box is no longer on screen and skips rendering it, even though nodes can sit at any offset (even negative offsets) within the layer.
I have tried not translating the graphics layer when panning, and instead offsetting each node by its position plus the pan amount, but this causes significant lag when panning with many nodes, since Compose has to recompose every node every frame to update its position.
Ideally, I would like the best of both worlds: a graphicsLayer that can be zoomed and panned, but one that skips the bounds check, since that check is what breaks panning once the layer goes too far off screen.
Here is a video: https://imgur.com/a/p60OKyc
Note that the cyan box displays the entire inner bounds of the graphics layer. I'd like for nodes to be able to be placed anywhere, even at negative coordinates.
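For reference, here is a minimal sketch of the current setup (simplified; Node and NodeContent stand in for my real types, and the gesture handling is reduced to the basics):

```kotlin
import androidx.compose.foundation.gestures.detectTransformGestures
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.offset
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.graphics.graphicsLayer
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.unit.IntOffset
import kotlin.math.roundToInt

data class Node(val id: Int, val position: Offset)

@Composable
fun NodeContent(node: Node, modifier: Modifier) {
    Box(modifier) { /* node UI goes here */ }
}

@Composable
fun Sandbox(nodes: List<Node>) {
    var pan by remember { mutableStateOf(Offset.Zero) }
    var zoom by remember { mutableStateOf(1f) }

    Box(
        Modifier
            .fillMaxSize()
            .pointerInput(Unit) {
                detectTransformGestures { _, panChange, zoomChange, _ ->
                    pan += panChange
                    zoom *= zoomChange
                }
            }
            // Pan/zoom is applied once, here. But when this layer's bounding
            // box is panned fully off screen, every node disappears.
            .graphicsLayer {
                translationX = pan.x
                translationY = pan.y
                scaleX = zoom
                scaleY = zoom
            }
    ) {
        nodes.forEach { node ->
            key(node.id) {
                // Each node only recomposes when its own position changes.
                NodeContent(
                    node,
                    Modifier.offset {
                        IntOffset(node.position.x.roundToInt(), node.position.y.roundToInt())
                    }
                )
            }
        }
    }
}
```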

Related

Can a VkSurfaceKHR represent only a whole window? Or also a portion of a window (ie some rectangular widget)? [duplicate]

We have an application whose window has a horizontal toolbar at the top. The Windows-level handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar, and the surface includes the space "behind" it.
My question is, can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a frame buffer, depth buffer etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport that has, e.g., an origin offset and height compensation. However, to my understanding, the frame buffer still contains information for pixels spanning the full size of the surface (e.g. 800x600 for an 800x600 client-area window), even if I am only rendering to a portion of that window. The frame buffer then gets "mapped", and therefore squished, to the viewport area.
All of this has left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area in the surface, is that not highly inefficient when your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make more sense to section off portions of your application using, e.g., different Windows HWNDs first, and then create a separate surface for each?
How can I avoid rendering to an area bigger than necessary?
The way this gets handled for pretty much every application is that the client area of a window (i.e. the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars, etc.).
It is this client window which should have a Vulkan surface created for it.
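Purely as an illustration of the geometry (the Win type and every call below are stand-ins, not a real Win32 or Vulkan API): on each frame resize you recompute the child client window's bounds, and it is that child's extent the surface and swapchain are sized to.

```kotlin
// Stand-in window type; in a real app these would be HWNDs.
class Win(var x: Int = 0, var y: Int = 0, var width: Int = 0, var height: Int = 0)

// Called whenever the frame window is resized.
fun layoutFrame(frame: Win, toolbar: Win, client: Win) {
    // The toolbar spans the top of the frame's client area.
    toolbar.x = 0; toolbar.y = 0
    toolbar.width = frame.width

    // The child "client" window fills everything below the toolbar.
    client.x = 0
    client.y = toolbar.height
    client.width = frame.width
    client.height = frame.height - toolbar.height
    // The Vulkan surface belongs to `client`, so the swapchain extent is
    // client.width x client.height: no pixels behind the toolbar.
}

fun main() {
    val frame = Win(width = 800, height = 600)
    val toolbar = Win(height = 40)
    val client = Win()
    layoutFrame(frame, toolbar, client)
    println("surface extent: ${client.width} x ${client.height}") // 800 x 560
}
```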

Cytoscapejs: How do I prevent rendered subgraphs from zooming out of view?

I am rendering graphs of hundreds of nodes, in a directed manner from left to right.
Upon first render, all nodes are in the upper left, and then animate over two seconds to fill my svg box. (fit: true, animate: true, animationDuration: 2000).
One thing I notice is that while expanding, the graph is temporarily larger than my svg box, until it snaps into place. It also appears that the "snapping into place", while still smoothly animated, goes "faster" than the initial animation, even if I have easing set to linear.
But I also have functionality that allows me to select a node from the graph (via cxtmenu) and re-graph starting from that node. The rest of the nodes disappear, and then that subgraph animates into place to fill the svg box.
However, the closer to the lower right that node at the root of that subgraph is, the odder the behavior is. If I pick a subgraph close to the lower right, then while it is zooming in, it actually travels further to the lower right at first, even to the point of going entirely off the graph, until it finally zooms (more quickly) back into place in the center of the svg box.
Conceptually, it feels like this has to do with the initial positioning of the subgraph's root node, since it is not at 0,0. It's almost as if the layout algorithm starts first, and then at some point partway through, the fitting/panning algorithm kicks in at a faster rate.
I am having trouble determining how to control/fix this. On an earlier project with dagre and d3 (without cytoscape), I found that similar behavior was because of two competing animation algorithms; one for the layout and one for the zoom/positioning. When I set them to the same duration, they canceled each other out and the animation became smooth.
I am using cytoscape-elkjs. I did try out cytoscape-dagre, and its "boundingbox" option did seem to help, but the elkjs extension doesn't have that, and I am wondering if there's something I can do on the cytoscapejs level to control this.

How can I overlay my UI render target onto the back buffer using DirectX 11?

I have two render targets, the back buffer and a UI render target where all 2d UI will be drawn.
I have used the graphics debugger to confirm that both render targets are being written to with the correct data, but I'm having trouble combining the two right at the end.
Question:
My world objects are drawn directly to the backbuffer so there is no problem displaying these, but how do I now overlay the UI render target OVER the backbuffer?
Desired effect:
(screenshots: the back buffer render target, and the UI render target to overlay on it)
There are several ways to do this. The easiest is to render your UI elements to a texture that has both a RenderTargetView and a ShaderResourceView, then render the whole texture to the back buffer as a single quad in orthographic projection space. This effectively draws a 2D square containing your UI in screen space on the back buffer. It also has the benefit of allowing transparency.
You could also use the OutputMerger stage to blend the UI render target with the back buffer during rendering of the world geometry. You'd need to be careful how you set up your blend operations, as it could result in items being drawn over the UI, or blending inappropriately.
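For the usual "straight alpha over" setup (SrcBlend = D3D11_BLEND_SRC_ALPHA, DestBlend = D3D11_BLEND_INV_SRC_ALPHA), the math the blend unit performs per pixel is easy to sanity-check on the CPU. Here it is sketched out, in Kotlin purely for illustration since the real work happens in fixed-function hardware:

```kotlin
// CPU-side illustration of the fixed-function blend used when compositing
// a UI texel over the back buffer (straight, non-premultiplied alpha):
//   out.rgb = ui.rgb * ui.a + scene.rgb * (1 - ui.a)
data class Rgba(val r: Float, val g: Float, val b: Float, val a: Float)

fun alphaOver(ui: Rgba, scene: Rgba): Rgba {
    val inv = 1f - ui.a
    return Rgba(
        ui.r * ui.a + scene.r * inv,
        ui.g * ui.a + scene.g * inv,
        ui.b * ui.a + scene.b * inv,
        ui.a + scene.a * inv
    )
}

fun main() {
    // A 50%-opaque white UI pixel over a dark scene pixel:
    println(alphaOver(Rgba(1f, 1f, 1f, 0.5f), Rgba(0.1f, 0.1f, 0.1f, 1f)))
    // -> roughly (0.55, 0.55, 0.55, 1.0)
}
```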
If your UI is not transparent, you could do the UI rendering first and mark the area under the UI in the stencil buffer, then do your world rendering while the stencil test is enabled. This would cause the GPU to ignore any pixels underneath the UI, and not send them to the pixel shader.
The above could also be modified to write the minimum depth value to the pixels within the UI render target, ensuring all geometry underneath it would fail the depth test. This modification would free up the stencil buffer for mirrors/shadows/etc.
The above all work for flat UIs drawn over the existing 3D world. To actually draw more complex UIs that appear to be a part of the world, you'll need to actually render the elements to 3D objects in the world space, or do complex projection operations to make it seem like they are.

Where does the coordinate system for windows forms stop and start?

I am using VB.NET to write a game that runs in a Windows form and uses collision detection. In order to achieve this, I have to understand the positioning system. I know that Windows Forms coordinates start at the top-left, and don't include the bottom or right edges. But at what numbers do the coordinates start and stop? (What I mean is: what is the coordinate of the top-left corner, and what is the coordinate of the last pixel before the bottom-right corner?)
The coordinate system depends on whether you're talking about client coordinates or screen coordinates. This is a basic Windows UI manager thing, and the WinForms wrappers follow the same pattern.
When you're dealing with client coordinates, the origin (top-left) point has coordinates (0, 0). Always. The extent is defined by the width and height of your form, accessible via Me.ClientSize.Width and Me.ClientSize.Height, respectively. The client rectangle is, therefore:
{ (0, 0) × (ClientSize.Width, ClientSize.Height) }, also retrievable using the ClientRectangle property.
The unique thing about the client area is that it excludes the non-client areas of the form—the borders, the title bars, and other system-dependent properties.
(Image taken for illustrative purposes from Jose Menendez Póo's article on creating an Aero ToolStrip)
You don't have to worry about calculating these sizes (and you shouldn't, either, since they're subject to change). You just work in client coordinates, and the framework will take care of the rest. You use client coordinates when positioning child objects (such as controls) on their parent form, and you can even resize the form by specifying a client size. Its actual size will be calculated automatically, taking into account the non-client area.
It is quite rare that you will ever have to deal in screen coordinates. You only need those if you want to move a form (window) around on the screen (which should also be rare, because you have no idea what size screen the user has nor should you try to control where she places her windows). In screen coordinates, the top-left corner of the primary monitor has coordinates (0, 0). The rest of the coordinate system is based on the virtual screen, which takes into account multiple-monitor configurations.
A form's Location and Size properties give you values in screen coordinates. Should you need to map (convert) between client and screen coordinates, there are PointToClient and PointToScreen methods. Pass these a location defined either in terms of screen or client coordinates, respectively, and they will convert it to the other coordinate system.
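The conversion itself is just an offset by the client area's top-left corner expressed in screen coordinates. A generic sketch of the idea (in Kotlin for illustration, not the actual WinForms implementation):

```kotlin
// The client area's origin in screen coordinates is the only thing needed
// to map between the two coordinate systems.
data class Point(val x: Int, val y: Int)

fun pointToScreen(clientOrigin: Point, p: Point) =
    Point(clientOrigin.x + p.x, clientOrigin.y + p.y)

fun pointToClient(clientOrigin: Point, p: Point) =
    Point(p.x - clientOrigin.x, p.y - clientOrigin.y)

fun main() {
    val origin = Point(200, 150)                    // client top-left on screen
    println(pointToScreen(origin, Point(10, 10)))   // Point(x=210, y=160)
    println(pointToClient(origin, Point(210, 160))) // Point(x=10, y=10)
}
```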
The only other complication to note is that Windows uses endpoint-exclusive rectangles. The WinForms wrapper retains that convention in its Rectangle structure. You hardly ever have to worry about this, since this is really a very natural system once you understand it. Plus, all of the pieces and parts of the WinForms framework use the convention, so if you're just passing around points and sizes and rectangles, you aren't likely to run into trouble. But it is something to be aware of. Think of it this way: your client area has the rectangle { (0, 0) × (ClientSize.Width, ClientSize.Height) }, as we saw earlier. If you were to fill in this rectangle with a solid color, the fill would extend from point (0, 0) to point (ClientSize.Width - 1, ClientSize.Height - 1).
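Here is the endpoint-exclusive convention sketched in generic code (Kotlin rather than VB.NET, purely for illustration):

```kotlin
// An endpoint-exclusive rectangle: left/top are inside, right/bottom are
// one past the last pixel, so width and height need no +1 correction.
data class RectEx(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    val width get() = right - left
    val height get() = bottom - top
    fun contains(x: Int, y: Int) = x in left until right && y in top until bottom
}

fun main() {
    val client = RectEx(0, 0, 800, 600)  // an 800x600 client area
    println(client.contains(0, 0))       // true: top-left is included
    println(client.contains(799, 599))   // true: the last drawable pixel
    println(client.contains(800, 600))   // false: exclusive endpoint
}
```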
If you stay within your form, you can work with its Width and Height, along with Left and Top. The origin is at (Left = 0, Top = 0), and the area ends at the bottom-right with the coordinate values of Width and Height.
A Windows Forms application specifies the position of a window on the screen in screen coordinates. For screen coordinates, the origin is the upper-left corner of the screen. The full position of a window is often described by a Rectangle structure containing the screen coordinates of two points that define the upper-left and lower-right corners of the window. (MSDN)
So the upper-left corner is (0, 0) and the lower-right corner is (Form1.Width, Form1.Height).

How to render a 2d side-scroller game

I do not really understand how I'm supposed to render a side-scroller. How do I know what to render when my character moves? What kind of positioning should I use for the characters?
I hope my question is clear
The easiest way I've found to do it is to have characterX and characterY variables (integer or float, whatever you want), plus a cameraX and cameraY variable. Every object in the scene is drawn at (theObjectX - cameraX, theObjectY - cameraY).
CameraX/CameraY are tweened by a midpoint-style formula, Cx = (Cx*99 + Px)/100, so they eventually reach playerX/playerY.
By doing this, every object moves in the stage's space, and is transformed only on render [saving you from headaches]
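Sketched out (all names made up for the example, and the renderer replaced with a placeholder):

```kotlin
// Entities keep stage-space coordinates; the camera offset is applied only
// at draw time. drawSprite() stands in for whatever your renderer provides.
class Entity(var x: Float, var y: Float, val sprite: String)

class Camera(var x: Float = 0f, var y: Float = 0f) {
    // Tween toward the player each frame: Cx = (Cx*99 + Px) / 100
    fun follow(px: Float, py: Float) {
        x = (x * 99f + px) / 100f
        y = (y * 99f + py) / 100f
    }
}

fun drawSprite(sprite: String, screenX: Float, screenY: Float) {
    println("draw $sprite at ($screenX, $screenY)") // placeholder renderer
}

fun render(entities: List<Entity>, cam: Camera) {
    for (e in entities) drawSprite(e.sprite, e.x - cam.x, e.y - cam.y)
}
```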
Use a matrix to define a camera reference frame.
Use space partitioning to split up your level into screens/windows.
Think of your player sprite as any other entity, like enemies and interactive objects.
Now what you want is the abstraction of a camera. You can define a camera as a 3x3 matrix with this layout:
[rotX_X, rotY_X, 0]
[rotX_Y, rotY_Y, 0]
[transX, transY, 1]
The 2x2 sub-matrix in the top-left corner is a rotation matrix. transX and transY define the translation part, i.e. the origin. You also get scaling for free: simply scale the rotation part with a scalar, and you have yourself a zoom.
For this to work properly with rotation, your sprites need to be polygons/primitives, say triangles or quads; you can't just apply the matrix to the positions of the sprites when drawing. If you don't need rotation, just transforming the center point will work fine.
If you want the camera to follow the player, use the player's position as the camera origin. That is the translation vector [transX, transY].
So how do you apply the matrix to entity positions and model vertices? You do a vector-matrix multiplication:
v' = vM^-1, where v' is the new vector, v is the old vector, and M^-1 is the matrix inverse. A camera needs to be an inverse transform because it defines a local coordinate system. An analogy could be: if you are in front of me and I turn left in my reference frame, I am turning to your right. This applies to all affine and linear transformations, like scaling, rotation and translation.
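Here is that inverse camera transform sketched in code (Kotlin for illustration; names are mine). Rather than inverting a general 3x3 matrix, it uses the closed form for a rotate/scale/translate camera: undo the translation, then apply the inverse rotation and the zoom.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Camera with origin (transX, transY), rotation angle (radians) and zoom.
// Applying the camera to a world point means applying the INVERSE of the
// camera's own transform.
class Camera2D(var transX: Float, var transY: Float,
               var angle: Float, var zoom: Float) {
    fun worldToView(wx: Float, wy: Float): Pair<Float, Float> {
        val dx = wx - transX              // undo the translation
        val dy = wy - transY
        val c = cos(-angle)               // inverse rotation = rotate by -angle
        val s = sin(-angle)
        val vx = (dx * c - dy * s) * zoom // scaled rotation part = zoom
        val vy = (dx * s + dy * c) * zoom
        return vx to vy
    }
}

fun main() {
    val cam = Camera2D(transX = 100f, transY = 50f, angle = 0f, zoom = 2f)
    // A point 10 units right of the camera origin, viewed at 2x zoom:
    println(cam.worldToView(110f, 50f)) // -> (20.0, 0.0)
}
```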
Split up your level into sub-parts so you can cull objects and scenery that do not need to be rendered. Your viewport has a certain size/resolution, so only render scenery and entities which intersect it. Instead of checking each and every entity against the viewport bounds, assign each entity to a certain sub-screen, and test the bounds of each sub-screen against the viewport and camera bounds (see the sketch after this list). If you divide your levels into parts which are the same size as your viewport, then the maximum number of screens visible at any particular time is:
2 if your camera only scrolls left and right.
4 if your camera scrolls left, right, up and down.
4 if your camera scrolls in any direction, and additionally can be rotated.
A screen-change is an event you can use to activate entities belonging to that screen. That could be enemies, background animations, doors or whatever you like.
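For the axis-aligned case (no rotation), the visible-screen test is only a few lines. A sketch, with illustrative names:

```kotlin
import kotlin.math.ceil
import kotlin.math.floor

// Axis-aligned viewport over a grid of fixed-size screens: only the screens
// the viewport overlaps get their entities drawn/activated.
data class Viewport(val x: Float, val y: Float, val w: Float, val h: Float)

fun visibleScreens(vp: Viewport, screenW: Float, screenH: Float): List<Pair<Int, Int>> {
    val firstCol = floor(vp.x / screenW).toInt()
    val lastCol = ceil((vp.x + vp.w) / screenW).toInt() - 1
    val firstRow = floor(vp.y / screenH).toInt()
    val lastRow = ceil((vp.y + vp.h) / screenH).toInt() - 1
    val screens = mutableListOf<Pair<Int, Int>>()
    for (row in firstRow..lastRow)
        for (col in firstCol..lastCol)
            screens += col to row
    return screens
}

fun main() {
    // A viewport the same size as one screen, straddling a grid boundary:
    // at most 4 screens are visible, matching the counts above.
    println(visibleScreens(Viewport(300f, 200f, 640f, 480f), 640f, 480f))
    // -> [(0, 0), (1, 0), (0, 1), (1, 1)]
}
```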
If this is your first foray into writing a side-scroller, I'd suggest using an already existing game engine (like Construct or Gamemaker or XNA, whatever fits your experience level) so you don't have to worry about what order to render things in and how to make it all work. Mess with that a bit, probably exploring a few of them, to get a feel for how they do things, then venture out on your own once you've gotten used to it.
Not that there's anything wrong with baptism by fire, but it can get pretty overwhelming, in my opinion.