Can a VkSurfaceKHR represent only a whole window? Or also a portion of a window (i.e. some rectangular widget)?

We have an application which has a window with a horizontal toolbar at the top. The Windows-level handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar and the surface includes the space "behind" it.
My question is: can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so we want to avoid creating a frame buffer, depth buffer, etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport which, e.g., has an origin offset and a compensated height; however, to my understanding the frame buffer still contains data for every pixel of the full surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window. The frame buffer then gets "mapped", and therefore squished, to the viewport area.
All of this has left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area in the surface, is that not highly inefficient if your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make sense to rather section off portions of your application using e.g. different Windows HWNDs first, and then create a separate surface for each?
How can I avoid rendering to an area bigger than necessary?

The way this gets handled for pretty much every application is that the client area of a window (i.e. the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars/etc.).
It is this client window which should have a Vulkan surface created for it.
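For illustration, a minimal Win32 sketch of that arrangement in C++ (the window class name, helper function, and toolbar height are hypothetical; error handling omitted):

    // Sketch: create a child window covering only the render area, and give
    // that HWND to Vulkan instead of the frame window's. Assumes an existing
    // frame window, a registered "RenderAreaClass" window class, and an
    // initialized VkInstance.
    #include <windows.h>
    #define VK_USE_PLATFORM_WIN32_KHR
    #include <vulkan/vulkan.h>

    VkSurfaceKHR CreateRenderAreaSurface(VkInstance instance, HWND frameWnd,
                                         int toolbarHeight)
    {
        RECT rc;
        GetClientRect(frameWnd, &rc);

        // Child window occupying the client area below the toolbar.
        HWND renderWnd = CreateWindowEx(
            0, TEXT("RenderAreaClass"), nullptr,
            WS_CHILD | WS_VISIBLE,
            0, toolbarHeight,                      // positioned under the toolbar
            rc.right, rc.bottom - toolbarHeight,   // remaining client area
            frameWnd, nullptr, GetModuleHandle(nullptr), nullptr);

        VkWin32SurfaceCreateInfoKHR info = {};
        info.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
        info.hinstance = GetModuleHandle(nullptr);
        info.hwnd = renderWnd;                     // the child, not the frame

        VkSurfaceKHR surface = VK_NULL_HANDLE;
        vkCreateWin32SurfaceKHR(instance, &info, nullptr, &surface);
        return surface;
    }

On a frame resize you would MoveWindow the child to track the new client area and recreate the swapchain; the surface (and everything sized from it) then only ever covers the render area.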

Related

Where does the coordinate system for windows forms stop and start?

I am using VB.NET to write a game that runs in a Windows Form and uses collision detection. In order to achieve this, I have to be able to understand the positioning system. I know that Windows Forms coordinates start at the top-left and don't include the bottom or right edges. But at what numbers do the coordinates start and stop? (What I mean is: what is the top-left corner coordinate, and what is the almost-bottom-right corner coordinate?)
The coordinate system depends on whether you're talking about client coordinates or screen coordinates. This is a basic Windows UI manager thing, and the WinForms wrappers follow the same pattern.
When you're dealing with client coordinates, the origin (top-left) point has coordinates (0, 0). Always. The extent is defined by the width and height of your form, accessible via Me.ClientSize.Width and Me.ClientSize.Height, respectively. The client rectangle is, therefore:
{ (0, 0) × (ClientSize.Width, ClientSize.Height) }, also retrievable using the ClientRectangle property.
The unique thing about the client area is that it excludes the non-client areas of the form—the borders, the title bars, and other system-dependent properties.
(Image taken for illustrative purposes from Jose Menendez Póo's article on creating an Aero ToolStrip)
You don't have to worry about calculating these sizes (and you shouldn't, either, since they're subject to change). You just work in client coordinates, and the framework will take care of the rest. You use client coordinates when positioning child objects (such as controls) on their parent form, and you can even resize the form by specifying a client size. Its actual size will be calculated automatically, taking into account the non-client area.
It is quite rare that you will ever have to deal in screen coordinates. You only need those if you want to move a form (window) around on the screen (which should also be rare, because you have no idea what size screen the user has nor should you try to control where she places her windows). In screen coordinates, the top-left corner of the primary monitor has coordinates (0, 0). The rest of the coordinate system is based on the virtual screen, which takes into account multiple-monitor configurations.
A form's Location and Size properties give you values in screen coordinates. Should you need to map (convert) between client and screen coordinates, there are PointToClient and PointToScreen methods. Pass these a location defined either in terms of screen or client coordinates, respectively, and they will convert it to the other coordinate system.
The only other complication to note is that Windows uses endpoint-exclusive rectangles. The WinForms wrapper retains that convention in its Rectangle structure. You hardly ever have to worry about this, since this is really a very natural system once you understand it. Plus, all of the pieces and parts of the WinForms framework use the convention, so if you're just passing around points and sizes and rectangles, you aren't likely to run into trouble. But it is something to be aware of. Think of it this way: your client area has the rectangle { (0, 0) × (ClientSize.Width, ClientSize.Height) }, as we saw earlier. If you were to fill in this rectangle with a solid color, the fill would extend from point (0, 0) to point (ClientSize.Width - 1, ClientSize.Height - 1).
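For reference, here is a small C++ sketch of the underlying Win32 calls that the WinForms wrappers expose as ClientRectangle, PointToScreen, and PointToClient (hwnd is assumed to be your form's handle):

    #include <windows.h>

    void CoordinateDemo(HWND hwnd)
    {
        // Client rectangle: always has origin (0, 0); right/bottom are
        // endpoint-exclusive, just like the WinForms Rectangle convention.
        RECT client;
        GetClientRect(hwnd, &client);   // { 0, 0, width, height }

        // Map the client origin into screen coordinates (PointToScreen).
        POINT origin = { 0, 0 };
        ClientToScreen(hwnd, &origin);

        // Map a screen point back into client coordinates (PointToClient).
        POINT cursor;
        GetCursorPos(&cursor);          // screen coordinates
        ScreenToClient(hwnd, &cursor);  // now relative to the client area
    }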
If you stay within your form, you can calculate positions from its Width and Height. You also have Left and Top.
The origin is at (Left = 0, Top = 0), and the form ends at the bottom right with the coordinates given by Width and Height.
A Windows Forms application specifies the position of a window on the screen in screen coordinates. For screen coordinates, the origin is the upper-left corner of the screen. The full position of a window is often described by a Rectangle structure containing the screen coordinates of two points that define the upper-left and lower-right corners of the window. (MSDN)
So the upper-left corner is (0, 0) and the lower-right corner is (Form1.Width, Form1.Height).

glReadPixels reads "out of frame" area

I draw OpenGL textured quads at 3200x2000. The OpenGLView frame size is set to 940x560. It draws the quad as it should. But when I try to save it as an image (using glReadPixels) and set the glReadPixels area from (0,0) to (3200,2000), it creates 3200x2000 of pixel data, but when I save it to a file I see only a small part of the image (940x560 from the bottom-left corner) and the whole remaining area is black. So how can I read the offscreen area? I tried using a framebuffer, but it's very complicated, with errors while creating it, etc. Is there any other solution?
Situation visualization:
Original image looks like this (3200x2000):
OpenGLView looks like this (940x560):
Saved image looks like this (3200x2000):
So you're rendering to the window. Well, the window has a particular size. And nothing exists outside of that size.
This is part of something OpenGL calls the "pixel ownership test". If a pixel is not owned by the context, then its contents are undefined. Pixels outside of the window are not owned by the context, and therefore their contents are undefined.
This is one reason why framebuffer objects exist: so that you can render outside the size of your window. Though be advised: there is a maximum viewport size limit (queryable via GL_MAX_VIEWPORT_DIMS).
Alternatively, you can render in screen-sized pieces, reading back each piece after rendering it and moving the camera between pieces until you have covered the whole image.
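For illustration, a minimal renderbuffer-backed FBO sketch in C++ (desktop GL 3.0+ and a function loader assumed; drawScene is a hypothetical stand-in for whatever draws your quad):

    #include <vector>

    // Render offscreen at the full target size, then read the pixels back.
    GLuint fbo, color;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &color);

    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 3200, 2000);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, color);
    // Check glCheckFramebufferStatus(GL_FRAMEBUFFER) here in real code.

    glViewport(0, 0, 3200, 2000);   // must fit within GL_MAX_VIEWPORT_DIMS
    drawScene();                    // hypothetical: draws the textured quad

    std::vector<unsigned char> pixels(3200 * 2000 * 4);
    glReadPixels(0, 0, 3200, 2000, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window's framebuffer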
You haven't given many details in terms of code or platform.
But I think you should be using offscreen rendering, rather than just reading from the rendered window. If you are unfamiliar with using frame buffer objects, here is a minimal example:
https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/minimalfbo
Edit #1:
Since OP mentioned that the platform is OS X, I am posting my code below, which shows a minimal FBO example in iOS:
https://github.com/glman74/simpleFBO

iPad frame max size is not enough

I'm developing an iPad application for 2D drawing.
I need a UIView.frame size of 4000x4000. But if I set a frame with size 4000x4000, the application crashes because I get a memory warning.
Right now I'm using a 1600x1000 frame size, and the user can add new objects (rectangles) to the frame. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Have you got any suggestions? How can I tackle this problem?
Thanks
Well, I would suggest what has been used in video games for a long time: a tiled LOD (level-of-detail) mechanism, where tiles are rendered at increasing resolution only as you zoom in toward them, while zoomed out you render only a lower resolution.
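A rough C++ sketch of that tile selection, with all names and sizes hypothetical: pick an LOD level from the zoom factor, then draw only the tiles that intersect the visible rectangle.

    #include <algorithm>
    #include <cmath>

    struct Rect { float x, y, w, h; };

    const float kTileSize = 256.0f;              // canvas pixels per tile at LOD 0
    const float kCanvasW = 4000.0f, kCanvasH = 4000.0f;

    void drawTile(int lod, int tx, int ty);      // hypothetical tile renderer

    void drawVisible(Rect visible, float zoom)   // zoom < 1 means zoomed out
    {
        // Each LOD step halves the rendered resolution, so one tile covers
        // twice as much canvas in each direction.
        int lod = std::max(0, (int)std::floor(std::log2(1.0f / zoom)));
        float tile = kTileSize * (float)(1 << lod);

        int x0 = (int)(visible.x / tile);
        int y0 = (int)(visible.y / tile);
        int x1 = (int)std::min(std::ceil((visible.x + visible.w) / tile),
                               std::ceil(kCanvasW / tile));
        int y1 = (int)std::min(std::ceil((visible.y + visible.h) / tile),
                               std::ceil(kCanvasH / tile));

        for (int ty = y0; ty < y1; ++ty)         // only the visible tiles
            for (int tx = x0; tx < x1; ++tx)
                drawTile(lod, tx, ty);
    }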
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView the entire size of the drawing. You just redraw the currently visible view as the user pans across the drawing, using the stored vector data. There is no persistent bitmapped representation of the drawing.
If using bitmap data for drawing (i.e. a Photoshop type of app) then you'll likely need to use a mechanism that caches off-screen data into secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen size.
Sorry I don't have any iOS code examples for any of this - take this as a high-level abstraction and work from there.
Sounds like you want to be using UIScrollView.

How to create a swanky SurfaceSlider

I am new to Surface programming and stumbled upon this image, which I understand is a slider control on a tag visualization (in this case a card). This slider
is curved, as opposed to the conventional straight track
has a bigger thumb which displays the current position (thus eliminating the need for a separate label)
has a glowing feel (I understand this is due to overlapping controls with different blur radii)
Can anyone help with how to make such a control?
-V
This is a custom-built control rather than a standard SurfaceSlider. It's not built using TagVisualizer either, but that's only because the app that this picture shows was built ~2 years prior to TagVisualizer existing.
Now you should certainly use TagVisualizer to streamline an implementation of this but you'll still have to create a custom slider control - SurfaceSlider will not be a good fit because it assumes that the user is moving their finger linearly.
Within your custom arcing slider control, you can use SurfaceThumb (which SurfaceSlider itself uses) to get the big glowing thumb... then you just need to listen to the Delta events on the thumb and move it along the constrained path as appropriate.
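The "move it along the constrained path" part is just projecting the finger position onto the arc. A rough sketch of that geometry in C++ (names hypothetical; assumes the arc's angular range doesn't wrap across the atan2 branch cut):

    #include <algorithm>
    #include <cmath>

    struct Point { double x, y; };

    // Snap a drag position onto a circular arc with the given center,
    // radius, and angular range [a0, a1] (radians, a0 < a1).
    Point SnapToArc(Point drag, Point center, double r, double a0, double a1)
    {
        // Angle of the finger relative to the arc's center.
        double angle = std::atan2(drag.y - center.y, drag.x - center.x);

        // Clamp to the arc so the thumb can't leave the track.
        angle = std::max(a0, std::min(a1, angle));

        return { center.x + r * std::cos(angle),
                 center.y + r * std::sin(angle) };
    }

In the Delta handler you would snap the thumb's new center with SnapToArc and derive the slider's value from where the clamped angle falls within [a0, a1].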

how to generate graphs using integer values in iphone

I want to show a graph/bar chart on the iPhone. How do I do this without custom APIs?
You may want to investigate the Core Plot project [code.google.com]. Core Plot was the subject of this year's scientific coding project at WWDC and is pretty usable for some cases already. From its inception, Core Plot was intended for both OS X and iPhone uses. The source distribution (there hasn't been a binary release yet) comes with both OS X and iPhone example applications, and there's info on the project wiki for using it as a library in an iPhone app. Here's an example of its current plotting capabilities.
Write your own. It's not easy; I'm in the process of doing the same thing right now. Here's how I'm doing it:
First, ignore any desire you may have to try using a UIScrollView if you want to allow zooming. It's totally not worth it.
Second, create something like a GraphElement protocol. I have a hierarchy that looks something like this:
GraphElement
    GraphPathElement
        GraphDataElement
    GraphDataSupplierElement
GraphElement contains the basic necessary methods for a graph element, including how to draw, a maximum width (for zooming in), whether a point is within that element (for touches) and the standard touchBegan, touchMoved, and touchEnded functions.
GraphPathElement contains a CGPath, a line color and width, a fill color and a drawing mode. Whenever it's prompted to draw, it simply adds the path to the context, sets the colors and line width, and draws the path with the given drawing mode.
GraphDataElement, as a subclass of GraphPathElement, takes in a set of data in x-y coordinates, a graph type (bar or line), a frame, and a bounds. The frame is the actual size of the created output CGPath. The bounds is the size of the data in input coordinates. Essentially, it lets you scale the data to the screen size.
It creates a graph by first calculating an affine transform to transform the bounds to the frame, then it loops through each point and adds it as data to a path, applying that transform to the point before adding it. How it adds data depends on the type.
If it's a bar graph, it creates a rectangle of width 0, origin at (x, frame.size.height - y), and height = y. Then it "insets" the graph by -3 pixels horizontally, and adds that to the path.
If it's a line graph, it's much simpler. It just moves to the first point, then for each other point, it adds a line to that point, adds a circle in a rect around that point, then moves back to that point to go on to the next point.
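For illustration, the bounds-to-frame mapping described above boils down to a scale-plus-translate transform; a C++ sketch (names hypothetical, with the y axis flipped because data y grows upward while screen y grows downward, equivalent to what a CGAffineTransform would do):

    struct Affine { double sx, sy, tx, ty; };   // axis-aligned scale + translate
    struct Pt { double x, y; };

    // Map data-space bounds (origin bx,by; size bw,bh) onto a pixel-space
    // frame of size fw x fh, flipping y.
    Affine BoundsToFrame(double bx, double by, double bw, double bh,
                         double fw, double fh)
    {
        Affine t;
        t.sx = fw / bw;
        t.sy = -fh / bh;            // negative scale flips the y axis
        t.tx = -bx * t.sx;
        t.ty = fh - by * t.sy;
        return t;
    }

    Pt Apply(const Affine& t, Pt p)
    {
        return { t.sx * p.x + t.tx, t.sy * p.y + t.ty };
    }
    // (bx, by) maps to the frame's bottom-left (0, fh);
    // (bx + bw, by + bh) maps to the top-right (fw, 0).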
GraphDataSupplierElement is the interface to my database that actually contains all the data. It determines what kind of graph it should be, formats the data into the required type for GraphDataElement, and passes it on, with the color to use for that particular graph.
For me, the x-axis is time, and is represented as NSTimeIntervals. The GraphDataSupplierElement contains a minDate and maxDate so that a GraphDateElement can draw the x-axis labels as required.
Once all this is done, you need to create the actual graph. You can go about it several ways. One option is to keep all the elements in an NSArray and whenever drawRect: is called, loop through each element and draw it. Another option is to create a CALayer for each element, and use the GraphPathElement as the CALayer's delegate. Or you could make GraphPathElement extend from CALayer directly. It's up to you on this one. I haven't gotten as far as trying CALayers yet, I'm still stuck in the simple NSArray stage. I may move to CALayers at some point, once I'm satisfied with how everything looks.
So, all in all, the idea is that you create the graph as one or many CGPaths beforehand, and just draw that when you need to draw the graph, rather than trying to actually parse data whenever you get a drawRect: call.
Scaling can be done by keeping the source data in your GraphDataElement and just changing the frame, so that scaling the bounds to the frame creates a CGPath wider than the screen, or whatever your needs are. I basically re-implemented my own pinch-zoom for my Graph UIView subclass that only scales horizontally by changing its transform; then, on completion, I get the current frame, reset the transform to identity, set the frame to the saved value, and set the frame of all of the GraphElements to the new frame as well, to make them scale. Then just call [self setNeedsDisplay] to draw.
Anyway, that's a bit ramble-ish, but it's an outline of how I made it happen. If you have more specific questions, feel free to comment.