Is InkCanvas incompatible with Projection transforms? - xaml

I am currently developing a UWP application where the user freely draws shapes using the InkCanvas element. At some point I need to flip the whole canvas 180 degrees, as if showing the back side of a card.
To my surprise, applying a PlaneProjection with a positive RotationY to the InkCanvas makes the ink the user just drew disappear, and all further input is disabled. I can understand disabling input while the control is undergoing some weird perspective transform. However, I was surprised to find that it also stops rendering the strokes that already exist.
As soon as the PlaneProjection's RotationY property returns to zero, the existing strokes come back and input starts working again.
Is this a known issue? Do I need to convert the strokes in the InkCanvas to a fixed UIElement before applying a projection effect on the drawn shape?

Related

Efficient rendering of many Jetbrains Compose elements at absolute coordinates within a graphics layer

I am trying to render a large number of "nodes" in a freeform sandbox area with Jetbrains Compose. Each node has its own X,Y position. The editor lives in a graphicsLayer that can be panned and scaled. Inside this sandbox area, each node is offset by its X,Y values and then rendered. However, the graphicsLayer has its own size, and when it is translated/panned far enough that it goes off screen, all the "nodes" disappear: Compose decides that the bounding box of the graphics layer is no longer on screen and skips rendering the layer, even though nodes can sit at any offset (even negative offsets) within it.
I have tried not translating the graphics layer when panning, and instead offsetting each node by its position + the pan amount, but this causes a large amount of lag with many nodes, since Compose has to recompose every single node every frame to update its position.
Ideally, I would like the best of both worlds: a graphicsLayer that can be zoomed and panned, but one that does not cull itself based on its bounds, since that bounds check is what breaks panning beyond a certain point.
Here is a video: https://imgur.com/a/p60OKyc
Note that the cyan box displays the entire inner bounds of the graphics layer. I'd like for nodes to be able to be placed anywhere, even at negative coordinates.

Can a VkSurfaceKHR represent only a whole window? Or also a portion of a window (ie some rectangular widget)? [duplicate]

We have an application with a window that has a horizontal toolbar at the top. The Windows window handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar and the surface includes the space "behind" it.
My question is, can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a frame buffer, depth buffer etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport that has, e.g., an origin offset and a reduced height. However, to my understanding the frame buffer still contains pixel data for the full size of the surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window; the frame buffer then gets "mapped", and therefore squished, into the viewport area.
All of this has left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area in the surface, is that not highly inefficient when your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make more sense to section off portions of your application using e.g. different Windows HWNDs first, and then create a separate surface for each?
How can I avoid rendering to an area bigger than necessary?
The way this gets handled for pretty much every application is that the client area of a window (ie: the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars/etc).
It is this client window which should have a Vulkan surface created for it.
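As a rough illustration, here is a minimal Win32/C sketch of that arrangement. The window class name, the helper function, and the already-created VkInstance are assumptions for the sake of the example, not code from the question:

#include <windows.h>
#define VK_USE_PLATFORM_WIN32_KHR
#include <vulkan/vulkan.h>

/* Create a child window covering only the drawable client area below the
   toolbar, then create the Vulkan surface for that child window.
   "VulkanClientClass" is a hypothetical window class registered elsewhere;
   hInstance, hFrame and toolbarHeight are assumed to exist already. */
VkSurfaceKHR create_client_surface(VkInstance instance, HINSTANCE hInstance,
                                   HWND hFrame, int toolbarHeight)
{
    RECT rc;
    GetClientRect(hFrame, &rc);

    HWND hClient = CreateWindowExW(
        0, L"VulkanClientClass", NULL,
        WS_CHILD | WS_VISIBLE,
        0, toolbarHeight,                          /* positioned below the toolbar */
        rc.right, rc.bottom - toolbarHeight,       /* remaining client area */
        hFrame, NULL, hInstance, NULL);

    VkWin32SurfaceCreateInfoKHR info = {0};
    info.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
    info.hinstance = hInstance;
    info.hwnd = hClient;                           /* the child window, not the frame */

    VkSurfaceKHR surface = VK_NULL_HANDLE;
    if (vkCreateWin32SurfaceKHR(instance, &info, NULL, &surface) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    return surface;
}

On the frame window's WM_SIZE you would move/resize the child to the new client rectangle (minus the toolbar) and recreate the swapchain, so the surface, framebuffers and depth buffer never cover the toolbar area.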

libgdx tiledmap flicker with Nearest filtering

I am having strange artifacts on a tiledmap while scrolling with the camera clamped to the player (a Box2D body).
Before getting this issue I used the linear filter for the tiledmap, which prevents those strange artifacts from happening but results in texture bleeding (I loaded the tiledmap straight from a .tmx file without padding the tiles).
Now I am using the Nearest filter instead, which gets rid of the bleeding, but when scrolling the map (by walking the character with the camera clamped to him) it seems like a lot of pixels are flickering around. The flickering can get better or worse depending on the camera's zoom value.
However, when I use the "OrthoCamController" class from the libgdx utilities, which scrolls the map by panning with the mouse/finger, I don't get these artifacts at all.
I assume that the flickering might be caused by bad camera-position values derived from the Box2D body's position.
One more thing I should add here: the game instance runs in 1280*720 display mode while my game cam renders only 800*480. When I change the game cam's render resolution to 1280*720 I don't get those artifacts, but then the tiles are way too tiny.
Has anyone experienced this issue, or does anyone know how to fix it? :)
I had a similar problem with this, and found it was due to the camera position having tiny fractional (sub-pixel) values.
I think what may be happening is some sort of rounding with certain tile columns/rows in the tilemap renderer.
I fixed this by rounding to a set accuracy, like so:
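// Snap the camera's x position to multiples of 1/scalePosition so tile edges align with whole pixels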
camera.position.x = Math.round(player.entity.getX() * scalePosition) / scalePosition;
Experiment with various values, but I got it working by using the tile size as the scalePosition value.
About tilesets, I posted a solution here: Getting gaps between tiled textures with libgdx
I've been using that method with Tiled itself. You will have to adjust "margin" and "spacing" when importing tilesets in Tiled to get the effect working.
It's 100% working for me :)

OpenGL ES blend func so color always shows against background

I am using OpenGL ES 1.1 to draw lines in my iPad app. I want to make sure that the drawn lines are always visible on the screen regardless of the background colors, and without allowing the user to choose a color. Is there a blend function that will create this effect, so that the color of the drawn line changes based on the colors already drawn beneath it and is therefore always visible?
Sadly the final blending of fragments into the framebuffer is still fixed function. Furthermore glLogicOp isn't implemented in ES so you can't do something cheap like XOR drawing.
I think the net effect is that:
you want the output colour to be a custom function of the colour already in the frame buffer;
but the frame buffer can't be read in a shader (it'd break the pipeline and lead towards concurrency issues).
You're therefore going to have to implement a ping pong pipeline.
You have two off-screen buffers. One represents what you output last frame, the other represents what you output the frame before that.
To generate a new frame you render using the one that represents the frame before as an input. Because it's an input you can sample it wherever you want and make whatever calculations you like on it. You render to the other buffer that you have (ie, the even older one) because you no longer care about its contents.
Then you copy all that to the screen and swap the two over, meaning that what you just drew is still in a texture to refer to as what you drew last frame. What you just referred to becomes your next drawing target because it's something you conveniently already have lying around.
So you'll be immediately interested in rendering to a texture. You'll also need to decide what function you want to use to pick a suitable 'different' colour to the existing background. Maybe just inverting it will do?
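For what it's worth, a bare-bones C sketch of that ping-pong setup against the OES_framebuffer_object extension could look like the following. Error checking, texture sizing to powers of two, and the actual colour function are left out, and all names here are illustrative rather than taken from the question:

#include <stddef.h>
#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

/* Two off-screen colour textures and their framebuffers. */
static GLuint fbo[2], tex[2];
static int current = 0;   /* index of the buffer drawn into this frame */

void setup_pingpong(int width, int height) {
    glGenTextures(2, tex);
    glGenFramebuffersOES(2, fbo);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo[i]);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, tex[i], 0);
    }
}

void draw_frame(GLuint screenFramebuffer) {
    int previous = 1 - current;

    /* Render into the current off-screen buffer, with last frame's texture
       bound as input so the "different colour" can be computed from it. */
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo[current]);
    glBindTexture(GL_TEXTURE_2D, tex[previous]);
    /* ... draw the scene/lines here, using tex[previous] however you like ... */

    /* Copy the result to the on-screen framebuffer (on iOS, the framebuffer
       backed by the CAEAGLLayer renderbuffer). */
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, screenFramebuffer);
    glBindTexture(GL_TEXTURE_2D, tex[current]);
    /* ... draw a full-screen textured quad here ... */

    current = previous;   /* swap: what was just drawn becomes "last frame" */
}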
I think this could work:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
Draw your lines with a white color, and then the result will be rendered as
[1,1,1,1] * ( 1 - [DstR, DstG, DstB, DstA]) + ([DstR, DstG, DstB, DstA] * 0)
This should render a black pixel where the background is white, a white pixel where the background is black, a yellow pixel where the background is blue, etc.
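As a quick illustration of the state setup (the vertex data and values here are made up, not from the question), the usage in ES 1.1 would look roughly like:

/* Invert-against-destination blending: result = src * (1 - dst) + dst * 0 */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);          /* draw the lines in white */

GLfloat lineVertices[] = { 10.0f, 10.0f, 300.0f, 200.0f };  /* one example segment */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, lineVertices);
glDrawArrays(GL_LINES, 0, 2);
glDisableClientState(GL_VERTEX_ARRAY);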

Resizing CATiledLayer's Using Scale Transforms vs. Bounds Manipulation

I've got my layer hosted workspace working so that using CATiledLayers for hundreds of images works nicely when the workspace is zoomed out substantially. All the images use lower resolution representations, and my application is much more responsive when panning and zooming large numbers of images.
However, within my application I also give the user the ability to resize layers with a resize handle. Before I converted the image layers to use CATiledLayers, I resized a layer by manipulating its bounds according to the resize delta (mouse drag), and it worked well. But now with CATiledLayers in place, the CATiledLayers get confused when I mix resizing layers through bounds manipulation with zooming/unzooming the workspace through scale transforms.
Specifically, if I resize a CATiledLayer to half its width/height (1/4 the area), the image inside it suddenly scales to a further 1/2 of the resized frame, leaving 3/4 of the frame empty. This seems to happen exactly when the inner CATiledLayer logic kicks in to provide a lower-resolution image representation. It works fine if I don't touch the resize handle and just zoom/unzoom the workspace.
Is there a way to make zooming/resizing play nice together with CATiledLayers, or am I going to have to convert my layer resize logic to use scale transforms instead of bounds manipulations?
I ended up solving this by converting my layer resize logic to use scale transforms: I override the setBounds: method of my custom image layer class to scale the CATiledLayer it contains, and reposition it accordingly. It is also important to make sure the CATiledLayer's autoresizingMask is set to kCALayerNotSizable, since we are handling resizes manually in setBounds:.
Note: be sure to call the superclass's implementation of setBounds:.