Where does the coordinate system for windows forms stop and start? - vb.net

I am using VB.NET to write a game that runs in a Windows Forms window and uses collision detection. To achieve this, I have to understand the positioning system. I know that Windows Forms coordinates start at the top-left and don't include the bottom or right edges. But at what numbers do the coordinates start and stop? (What I mean is: what is the top-left corner coordinate, and what is the coordinate just inside the bottom-right corner?)

The coordinate system depends on whether you're talking about client coordinates or screen coordinates. This is basic Windows UI manager behavior, and the WinForms wrappers follow the same pattern.
When you're dealing with client coordinates, the origin (top-left) point has coordinates (0, 0). Always. The extent is defined by the width and height of your form, accessible via Me.ClientSize.Width and Me.ClientSize.Height, respectively. The client rectangle is, therefore:
{ (0, 0) × (ClientSize.Width, ClientSize.Height) }, also retrievable using the ClientRectangle property.
The unique thing about the client area is that it excludes the non-client areas of the form—the borders, the title bars, and other system-dependent properties.
(Image taken for illustrative purposes from Jose Menendez Póo's article on creating an Aero ToolStrip)
You don't have to worry about calculating these sizes (and you shouldn't, either, since they're subject to change). You just work in client coordinates, and the framework will take care of the rest. You use client coordinates when positioning child objects (such as controls) on their parent form, and you can even resize the form by specifying a client size. Its actual size will be calculated automatically, taking into account the non-client area.
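For instance, a minimal sketch of working purely in client coordinates (the GameForm and scoreLabel names are placeholders, not anything from the question):

    Imports System.Drawing
    Imports System.Windows.Forms

    Public Class GameForm
        Inherits Form

        Public Sub New()
            ' Ask for a 640x480 *client* area; the framework adds the
            ' non-client borders and title bar to work out the window size.
            Me.ClientSize = New Size(640, 480)

            ' Children are positioned in client coordinates: (0, 0) is the
            ' top-left of the client area, not of the screen.
            Dim scoreLabel As New Label() With {.Location = New Point(8, 8), .Text = "Score: 0"}
            Me.Controls.Add(scoreLabel)
        End Sub
    End Class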
It is quite rare that you will ever have to deal in screen coordinates. You only need those if you want to move a form (window) around on the screen (which should also be rare, because you have no idea what size screen the user has nor should you try to control where she places her windows). In screen coordinates, the top-left corner of the primary monitor has coordinates (0, 0). The rest of the coordinate system is based on the virtual screen, which takes into account multiple-monitor configurations.
A form's Location and Size properties give you values in screen coordinates. Should you need to map (convert) between client and screen coordinates, there are PointToClient and PointToScreen methods. Pass these a location defined either in terms of screen or client coordinates, respectively, and they will convert it to the other coordinate system.
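For example, a small sketch, assuming the code runs inside the form:

    ' The screen position of the client area's (0, 0):
    Dim clientOrigin As Point = Me.PointToScreen(Point.Empty)

    ' Cursor.Position is in screen coordinates; underCursor is the same
    ' spot expressed in the form's client coordinates.
    Dim underCursor As Point = Me.PointToClient(Cursor.Position)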
The only other complication to note is that Windows uses endpoint-exclusive rectangles. The WinForms wrapper retains that convention in its Rectangle structure. You hardly ever have to worry about this, since this is really a very natural system once you understand it. Plus, all of the pieces and parts of the WinForms framework use the convention, so if you're just passing around points and sizes and rectangles, you aren't likely to run into trouble. But it is something to be aware of. Think of it this way: your client area has the rectangle { (0, 0) × (ClientSize.Width, ClientSize.Height) }, as we saw earlier. If you were to fill in this rectangle with a solid color, the fill would extend from point (0, 0) to point (ClientSize.Width - 1, ClientSize.Height - 1).
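For collision detection against the form's edges, this is the detail that matters. A sketch, assuming a playerBounds rectangle tracks your sprite:

    ' Right and Bottom are exclusive: the last drawable pixel is at
    ' (ClientSize.Width - 1, ClientSize.Height - 1).
    Dim playerBounds As New Rectangle(10, 10, 32, 32)

    ' Clamp the sprite inside the client area by comparing the exclusive
    ' Right/Bottom edges directly against ClientSize.
    If playerBounds.Right > Me.ClientSize.Width Then
        playerBounds.X = Me.ClientSize.Width - playerBounds.Width
    End If
    If playerBounds.Bottom > Me.ClientSize.Height Then
        playerBounds.Y = Me.ClientSize.Height - playerBounds.Height
    End If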

If you stay within your form, you can work with its Width and Height, along with Left and Top. The starting point is (Left = 0, Top = 0), and the form ends at the bottom-right with the coordinate values of Width and Height.

A Windows Forms application specifies the position of a window on the screen in screen coordinates. For screen coordinates, the origin is the upper-left corner of the screen. The full position of a window is often described by a Rectangle structure containing the screen coordinates of two points that define the upper-left and lower-right corners of the window. (MSDN)
So, relative to the form itself, the upper-left corner is (0, 0) and the lower-right corner is (Form1.Width, Form1.Height); in screen coordinates, both points are offset by the form's Location.
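If you need both systems at once, the form exposes each directly; a small sketch from inside the form:

    Dim topLeftOnScreen As Point = Me.Location     ' screen coordinates
    Dim windowBounds As Rectangle = Me.Bounds      ' includes borders and title bar

    ' The client area expressed in screen coordinates:
    Dim clientOnScreen As Rectangle = Me.RectangleToScreen(Me.ClientRectangle)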

Related

Efficient rendering of many Jetbrains Compose elements at absolute coordinates within a graphics layer

I am trying to render a large number of "nodes" in a freeform sandbox area with Jetbrains Compose. Each node has its own X,Y position. The editor is in a graphicsLayer so it can be panned and scaled. Inside this sandbox area, each node is offset by its X,Y values and then rendered. However, the graphicsLayer has its own size, and when translated/panned far enough that it goes off screen, all "nodes" disappear, since Compose decides that the bounding box of the graphics layer is no longer on screen and the layer does not need to render, even though nodes can sit at any offset (even negative offsets) within the graphics layer.
I have tried not translating the graphics layer when panning and instead offsetting each node by position + pan amount, but this causes a large amount of lag with many nodes, since Compose has to recompose every single node every frame to update its position.
Ideally, I would like the best of both worlds: a graphicsLayer that can be zoomed and panned, but also one that does not do bounds checking, since that check removes our ability to pan the screen very far.
Here is a video: https://imgur.com/a/p60OKyc
Note that the cyan box displays the entire inner bounds of the graphics layer. I'd like for nodes to be able to be placed anywhere, even at negative coordinates.

Can a VkSurfaceKHR represent only a whole window? Or also a portion of a window (ie some rectangular widget)? [duplicate]

We have an application with a window that has a horizontal toolbar at the top. The Windows-level handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar, and the surface includes the space "behind" it.
My question is, can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a frame buffer, depth buffer etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport which, for example, has an origin offset and a reduced height. However, to my understanding, the frame buffer still contains information for pixels covering the full size of the surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window. The frame buffer then gets "mapped", and therefore squished, onto the viewport area.
All of this has left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area of the surface, is that not highly inefficient when your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make sense to section off portions of your application using e.g. different window HWNDs FIRST, and then create different surfaces from then onwards?
How can I avoid rendering to an area bigger than necessary?
The way this gets handled for pretty much every application is that the client area of a window (i.e. the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars etc.).
It is this client window which should have a Vulkan surface created for it.
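In WinForms terms (to stay in one language on this page), a hedged sketch of that idea; CreateVulkanSurface is a hypothetical stand-in for whatever interop wrapper you use around vkCreateWin32SurfaceKHR:

    ' Sketch only: a child control fills the area below the docked toolbar,
    ' and its HWND (not the frame window's) is what Vulkan renders into.
    Dim renderPanel As New Panel()
    renderPanel.Dock = DockStyle.Fill   ' client area minus the toolbar
    Me.Controls.Add(renderPanel)

    ' Hypothetical wrapper around vkCreateWin32SurfaceKHR:
    ' Dim surface = CreateVulkanSurface(vulkanInstance, renderPanel.Handle)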

Blender 2.8: object pivot + pixel align + object part grouping (newbie)

I started learning Blender 2.8 as of May 25. Almost 10 years ago I was using 3ds Max v7, but only as a hobby. I want to start doing 3D again as a hobby, but I have no money for something up to date, so I chose Blender.
Now I have a few questions, and I will surely have more later. I am still thinking in terms of the 3ds Max way of doing things.
How do I recenter the object pivot if I accidentally displaced it with MB1, and pressing Ctrl-Z is not a solution?
How do I align vertices to the grid using only the axis of my choice (e.g. align on the X axis grid, or any other axis)?
How do I group object parts (vertices, faces, ...) into different groups and then work with the group of my choice?
How can I maximize the viewport with a single keyboard shortcut, hiding every menu, and then, once I am done on that maximized view, revert to whatever interface view I was using before?
How do I select anything using a region drawn by mouse movements (i.e. selecting without a box or circle or Shift, just mouse movements)?
Firstly, Blender 2.80 is under heavy development and going through some major changes, so its functionality can vary for now. I would suggest you start learning with 2.79b until 2.80 is finished.
There are several options when it comes to choosing a pivot point, I think what you are referring to is setting the object origin.
There are several snapping options and ways to control transformations. You can limit movement to one axis by pressing that axis's key while transforming; e.g. G X will move only along the X axis. You can also exclude one axis by adding ⇧ Shift: G ⇧ Shift X will move on Y and Z but not X. To align vertices along one axis, it is common to scale them to zero on that axis; e.g. S X 0 will scale the verts on the X axis so they all align. In Object Mode you can do the same as long as you enable manipulating centre points only. There are also align options available in some addons, like Align Tools and Mesh Align Plus.
Blender doesn't have grouping of object parts the way 3ds Max does, and I don't know of any addons that add it. The closest option is to assign vertices to a vertex group and use that to select/deselect that collection of vertices.
There are two options to maximize the viewport: ⎈ Ctrl ↑ Up arrow will maximize the viewport while keeping the info bar at the top and the view's headers in place, while ⎇ Alt F10 will make the view full screen, removing the info bar and headers as well. Window layouts are saved as screens, which you can make and define to your liking; by saving these screens in your startup blend, you can keep them available each time you use Blender. The default screens include a 3D-only layout.
For lasso select you can hold ⎈ Ctrl while pressing the non-select mouse button to draw an arbitrary region to select items within.

In iOS is it possible to change a View's coordinate system so that 0,0 is top right corner?

Normally the 0,0 coordinate refers to the top left corner of a view. Higher x coordinates are further right. A frame / rectangle in the view has its leftmost point being its x coordinate and its rightmost point being its x coordinate plus its width.
Is it possible to reverse that, or better yet, reverse just the x axis? Make (0, 0) the top right, make higher x coordinates sit further to the left, AND make a frame / rectangle in the view have its rightmost point as its x coordinate and its leftmost point as its x coordinate plus its width.
I know I could transform this stuff myself with pure math, but I was wondering if iOS offers this capability.
Not really.
iOS 9 has some new flipping stuff for supporting right-to-left languages, but I don't think you can force it.
You can flip the drawing of a view by setting its transform property to CGAffineTransformMakeScale(-1, 1), but that won't change the underlying coordinate system.
SpriteKit has a different coordinate system than normal UIViews, but its coordinate system isn't what you want.
You may want to read Coordinate Systems and Transforms, which discusses some techniques for mapping points between different coordinate systems. This MSDN article covers mapping points using matrices, which can help you on a theoretical level.
It's not documented, but UIView has an instance variable, _flipsHorizontalAxis, that does exactly what it sounds like it would do. It appears to just pass through to the CALayer variable of the same name.

How to render a 2d side-scroller game

I do not really understand how I'm supposed to render a side-scroller. How do I know what to render when my character moves? What kind of positioning should I use for the characters?
I hope my question is clear.
The easiest way I've found to do it is to have characterX and characterY variables (integer or float, whatever you want), then have cameraX and cameraY variables. Every object in the scene is drawn at theObjectX - cameraX, theObjectY - cameraY.
CameraX/CameraY are tweened by a similar-to-midpoint formula, so eventually they'll reach playerX/playerY: Cx = (Cx*99 + Px)/100.
By doing this, every object moves in the stage's space and is transformed only on render, saving you from headaches.
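In code, the idea looks something like this (a sketch in VB for consistency with the rest of the page; DrawSprite and the variable names are placeholders, not a specific API):

    ' Per-frame: ease the camera toward the player...
    cameraX = (cameraX * 99 + playerX) / 100
    cameraY = (cameraY * 99 + playerY) / 100

    ' ...then subtract the camera only at draw time; objects keep their
    ' positions in stage space.
    For Each obj In sceneObjects
        DrawSprite(obj.Sprite, obj.X - cameraX, obj.Y - cameraY)
    Next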
Use a matrix to define a camera reference frame.
Use space partitioning to split up your level into screens/windows.
Think of your player sprite as any other entity, like enemies and interactive objects.
Now what you want is the abstraction of a camera. You can define a camera as a 3x3 matrix with this layout:
[rotX_X, rotY_X, 0]
[rotX_Y, rotY_Y, 0]
[transX, transY, 1]
The 2x2 sub-matrix in the top-left corner is a rotation matrix. transX and transY define the translation part, i.e. the origin. You also get scaling for free: simply scale the rotation part by a scalar, and you have yourself a zoom.
For this to work properly with rotation, your sprites need to be polygons/primitives, say like triangles or quads; you can't just apply the matrix to the positions of the sprites when drawing. If you don't need rotation, just transforming the center point will work fine.
If you want the camera to follow the player, use the player's position as the camera origin; that is the translation vector [transX, transY].
So how do you apply the matrix to entity positions and model vertices? You do a vector-matrix multiplication.
v' = vM^-1, where v' is the new vector, v is the old vector, and M^-1 is the matrix inverse. A camera needs to be an inverse transform because it defines a local coordinate system. An analogy: if you are standing in front of me and I turn left from my own reference frame, then from yours I am turning right. This applies to all affine and linear transformations, like scaling, rotation and translation.
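As a compact sketch (a rigid rotation-plus-translation camera, so the inverse is a rotation by -Angle applied to v - t; the type and member names here are illustrative, not a particular engine's API):

    Structure Camera2D
        Public Angle As Double      ' camera rotation, in radians
        Public OriginX As Double    ' transX: the camera origin in world space
        Public OriginY As Double    ' transY

        ' Applies the inverse transform, mapping world space to camera space:
        ' subtract the translation, then rotate by -Angle.
        Public Function WorldToCamera(x As Double, y As Double) As (X As Double, Y As Double)
            Dim dx = x - OriginX
            Dim dy = y - OriginY
            Dim c = Math.Cos(Angle)
            Dim s = Math.Sin(Angle)
            Return (dx * c + dy * s, -dx * s + dy * c)
        End Function
    End Structure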
Split up your level into sub-parts so you can cull objects and scenery that do not need to be rendered. Your viewport has a certain size/resolution, so only render scenery and entities that intersect it. Instead of checking each and every entity against the viewport bounds, assign each entity to a certain sub-screen and test the bounds of the sub-screen against the viewport and camera bounds. If you divide your levels into parts which are the same size as your viewport, then the maximum number of screens visible at any particular time is:
2 if your camera only scrolls left and right.
4 if your camera scrolls left, right, up and down.
4 if your camera scrolls in any direction, and additionally can be rotated.
A screen-change is an event you can use to activate entities belonging to that screen. That could be enemies, background animations, doors or whatever you like.
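A sketch of the bookkeeping, assuming screens the same size as the viewport (the names are illustrative, not a particular engine's API):

    ' Assign each entity to a screen once (or when it crosses a boundary),
    ' not every frame.
    Dim col As Integer = CInt(Math.Floor(entity.X / viewportWidth))
    Dim row As Integer = CInt(Math.Floor(entity.Y / viewportHeight))
    screens(col, row).Entities.Add(entity)

    ' Each frame, test screens (not individual entities) against the camera.
    For Each scr In screens
        If scr.Bounds.IntersectsWith(cameraBounds) Then
            For Each e In scr.Entities
                Draw(e)   ' stand-in for your render call
            Next
        End If
    Next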
If this is your first foray into writing a side-scroller, I'd suggest using an already existing game engine (like Construct or GameMaker or XNA, whatever fits your experience level) so you don't have to worry about what order to render things in and how to make it all work. Mess with that a bit, probably exploring a few of them, to get a feel for how they do things, then venture out on your own once you've gotten used to it.
Not that there's anything wrong with baptism by fire, but it can get pretty overwhelming, in my opinion.