Rendering small text with Vulkan?

A font rendering library (like say freetype) provides a function that will take an outline font file (like a .ttf) and a character code and produce a bitmap of the corresponding glyph in host memory.
For small text (like say up to 30x30 pixel glyphs) what's the most efficient way to render those glyphs to a Vulkan framebuffer?
Some options I've thought about might be:
1. Render the glyphs with the font rendering library every time on demand, blit them with host code to a single host-side image holding a whole "text box", transfer the host-side image of the text box to a device-local image, and then render a quad (like a normal image) using a fragment shader / image sampler reading from the text box to be drawn.
2. At program startup, cycle through all the glyphs host-side and render them to glyph bitmaps. Do the same as 1, but blit from the cached glyph bitmaps (takes about 1 MB of host memory).
3. Cache the glyph bitmaps individually into device-local images. Rather than blitting host-side, render a quad for each glyph device-side and set the image sampler to the corresponding glyph each time. (Not sure how the draw calls would work? One draw call per glyph with a different combined image sampler every time?)
4. Cache all the glyph bitmaps into one large device-side image (laid out in a big grid, say). Use a single device-side combined image sampler, and push constants to describe the subregion that contains the glyph image. One draw call per glyph, updating the push constants each time.
5. Like 4, but use a single instanced draw call, and rather than push constants use instance-varying input attributes.
6. Something else?
I mean like, how do common game engines like Unreal or Unity or Godot etc solve this problem? Is there a typical approach or best practice?

First, some considerations:
Rasterizing a glyph at around 30px with freetype might take on the order of 10μs. This is a very small one-time cost, but rendering e.g. 100 glyphs every frame would seriously eat into your frame budget (if we assume the math is as simple as 100 * 10μs == 1ms).
State changes (like descriptor updates) are relatively expensive. Changing the bound descriptor for each character you render has a non-negligible cost. This could be mitigated by batching character draws (draw all the As, then the Bs, etc.), but using push constants is typically fastest.
Instanced drawing with small meshes (such as quads or single triangles) can be very slow on some GPUs, as they will not schedule multiple instances on a single wavefront/warp. If you're rendering a quad with 6 vertices, and a single execution unit can process 64 vertices, you may end up wasting 58/64 = 90.6% of available vertex shading capacity.
This suggests 4 is your best option (although 5 is likely comparable); you can further optimize that approach by caching the results of the draw calls (a sketch of the per-glyph draw loop follows the steps below). Imagine you have some menu text:
The first frame it is needed, render all the text to an intermediate image.
Each frame it is needed, make a single draw call textured with the intermediate image. (You could also blit the text if you don't need transparency.)
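Below is a minimal sketch of what option 4 could look like on the command-recording side, assuming the glyph atlas, pipeline, pipeline layout, and descriptor set already exist; GlyphPush, GlyphInstance, and recordTextDraws are illustrative names, not from any library:

// One glyph atlas texture, one draw call per glyph, with the atlas
// sub-rectangle passed via push constants. The descriptor set is bound once;
// only push constants change between draws.
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

struct GlyphPush {            // must match the push_constant block in the shaders
    float screenPos[2];       // top-left corner of the quad on screen
    float screenSize[2];      // quad size
    float uvOffset[2];        // top-left of the glyph cell in the atlas (0..1)
    float uvExtent[2];        // size of the glyph cell in the atlas (0..1)
};

struct GlyphInstance {        // produced by the text layout code (hypothetical)
    GlyphPush push;
};

void recordTextDraws(VkCommandBuffer cmd,
                     VkPipeline pipeline,
                     VkPipelineLayout layout,
                     VkDescriptorSet atlasSet,          // combined image sampler of the atlas
                     const std::vector<GlyphInstance>& glyphs)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            0, 1, &atlasSet, 0, nullptr);

    for (const GlyphInstance& g : glyphs) {
        vkCmdPushConstants(cmd, layout,
                           VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT,
                           0, static_cast<uint32_t>(sizeof(GlyphPush)), &g.push);
        // 6 vertices; positions/UVs are derived in the vertex shader from
        // gl_VertexIndex and the push constants, so no vertex buffer is needed.
        vkCmdDraw(cmd, 6, 1, 0, 0);
    }
}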

Related

Culling off-screen objects in OpenGL ES 2 2D

I'm playing about with OpenGL ES 2.0. If I'm working with a simple 2D projection, and I have a large 2D grid of vertices which are pretty much static (think map tiles), of which only a small proportion is visible at any one time, would it be better to...
Work out on the CPU which vertices are visible, and create a VBO to draw just those triangles that make up the visible tiles in each frame?
or
Keep a static VBO with the entire tiled grid, and then just rely on the graphics card (RPi, in my case) to clip out the off-screen triangles?
Or perhaps some combination of the two (like sets of overlapping pre-computed grids)? How big does the grid have to be before the latter option becomes unworkable?
Edit
I decided to make several calls to glDrawElements(), drawing sub-ranges of the index buffer that I knew would overlap the viewport. At the scale I'm working at it doesn't seem to make any difference to the speed over drawing the entire element array, even on a Pi Zero.
However, this approach would require more computation to determine which ranges of elements needed to be rendered if there was any rotation of the grid involved - effectively rasterising my own quad. I'm interested to hear if this is a reasonable approach.
There are some other options like a more exotic structure for breaking up the plane into sub areas, I guess. Still not sure if any of this is really necessary, though.
Thanks!
Please note: I don't want to discuss drawing tiles in the fragment shader, I'm more interested in the correct way to work with the vertex shader than actually solving the described problem.
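For reference, the sub-range approach described in the edit above might look roughly like the following, assuming each row of tiles occupies a contiguous run of indices in a static index buffer; the function and parameter names are hypothetical:

// Draw only those rows of the tile grid that overlap the viewport by issuing
// one glDrawElements call per visible row, offset into the static index buffer.
#include <GLES2/gl2.h>

void drawVisibleRows(GLuint vbo, GLuint ibo,
                     int firstVisibleRow, int lastVisibleRow,
                     int indicesPerRow)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    // Vertex attribute pointers are assumed to be set up by the caller.

    for (int row = firstVisibleRow; row <= lastVisibleRow; ++row) {
        // Byte offset into the index buffer for this row's triangles.
        const GLsizeiptr offset = (GLsizeiptr)row * indicesPerRow * sizeof(GLushort);
        glDrawElements(GL_TRIANGLES, indicesPerRow, GL_UNSIGNED_SHORT,
                       (const void*)offset);
    }
}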
If that's a regular grid, I'd split it into large chunks, so that the screen width (the larger side) would fit 2-3 such chunks. They don't need to overlap if it's a regular grid.
Checking one chunk's visibility is trivial and cheap, as is finding/selecting the few that must be drawn. And the wasted/clipped area is small enough not to worry about; you don't have to go crazy and trim every single vertex that's outside the screen.
Each chunk would have its own VBO, which would be weakly cached when the chunk goes fully off-screen, so you don't have to rebuild/reload the resources needed to draw it if you quickly return to that part of the map.
Splitting into chunks minimizes the memory requirements and speeds up level loading: you spend time loading only the part of the map that the user will see immediately. It also allows quite huge maps, since you can prefetch the areas you're moving towards.
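As a rough sketch of that idea (the Rect and Chunk types and the weak-caching policy are illustrative, not from any particular engine):

#include <vector>

struct Rect { float x, y, w, h; };

struct Chunk {
    Rect bounds;        // world-space area covered by this chunk
    unsigned vbo = 0;   // lazily created, weakly cached GL buffer
};

// Axis-aligned overlap test between a chunk and the camera rectangle.
static bool overlaps(const Rect& a, const Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// Returns the chunks whose bounds overlap the camera; the caller draws each
// one with its own VBO, and may later release VBOs of chunks that have stayed
// off-screen for a while.
std::vector<Chunk*> visibleChunks(std::vector<Chunk>& chunks, const Rect& camera) {
    std::vector<Chunk*> visible;
    for (Chunk& c : chunks)
        if (overlaps(c.bounds, camera))
            visible.push_back(&c);
    return visible;
}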

How to improve MTKView rendering when using MPSImageScale and MTLBlitCommandEncoder

TL;DR: From within my MTKView's delegate drawInMTKView: method, part of my rendering pass involves adding an MPSImageBilinearScale performance shader and zero or more MTLBlitCommandEncoder requests for generateMipmapsForTexture. Is that a smart thing to do from within drawInMTKView:, which happens on the main thread? Does either of them block the main thread while running, or are they only encoded there and then executed later, entirely on the GPU?
Longer Version:
I'm playing around with Metal within the context of an imaging application. I use Core Image to load an image and apply filters. The output image is displayed as a 2D plane in a metal view with a single texture. This works, but to improve performance I wanted to experiment with Core Image's ability to render out smaller tiles at a time. Each tile is rendered into its own IOSurface.
On each render pass, I check if there are any tiles that have been recently rendered. For each rendered tile (which is now an IOSurface), I create a Metal texture from a CVMetalTextureCache that is backed by the surface.
I then use a scaling MPS to copy from the tile texture into the "master" texture. If a tile was copied over, then I issue a blit command to generate the mipmaps on the master texture.
What I'm seeing is that if my master texture is quite large, then generating the mipmaps can take "a bit of time". The same is true if I have a lot of tiles. It appears this is blocking the main thread, because my FPS drops significantly. (The MTKView is running at the standard 60fps.)
If I play around with tile sizes, then I can improve performance in some areas but decrease it in others. For example, increasing the tile size that Core Image renders creates fewer tiles, and thus fewer mipmap generations and blits, but at the cost of Core Image taking longer to render each region.
If I decrease the size of my "master" texture, then mipmap generation goes faster since only the dirty textures are updated, but there appears to be a lower bound on how small I should make the master texture, because if I make it too small, then I need to pass a large number of textures to the fragment shader. (And it looks like that limit might be 128?)
What's not entirely clear to me is how much of this I can move off the main thread while still using MTKView. If part of the rendering pass is going to block the main thread, then I'd prefer to move it to a background thread so that UI elements (like sliders and checkboxes) remain fully responsive.
Or maybe this isn't the right strategy in the first place? Is there a better way to display really large images in Metal other than tiling? (i.e.: Images larger than Metal's texture size limit of 16384?)

Vulkan: Framebuffer larger than Image dimensions

This question primarily relates to the dimension parameters (width, height, and layers) in the structure VkFramebufferCreateInfo.
The actual question:
In the case that one or more of the VkImageViews used in creating a VkFramebuffer have dimensions larger than those specified in the VkFramebufferCreateInfo used to create the VkFramebuffer, how does one control which part of that VkImageView is used during a render pass instance?
Alternatively worded question:
I am basically asking in the case that the image is larger (not the same dimensions) than the framebuffer, what defines which part of the image is used (read/write)?
Some Details:
The specification states this is a valid situation (I have seen many people state the attachments used by a framebuffer must match the dimensions of the framebuffer itself, but I can't find support for this in the specification):
Each element of pAttachments must have dimensions at least as large as the corresponding framebuffer dimension.
I want to be clear that I understand that if I just wanted to draw to part of an image, I could use a framebuffer that has the same dimensions as the image and use viewports and scissors. But scissors and viewports are defined relative to the framebuffer's (0,0) as far as I can tell from the spec, although it is not entirely clear to me.
I'm asking this question to help my understand of the framebuffer as I am certain I have misunderstood something. I feel it may well be the case that (x,y) in framebuffer space, is always (x,y) in image space (As in there is no way of controlling which part of the VkImageView is used).
I have been stuck on this for quite some time (~4 days), and have tried both the Vulkan Cookbook and the Vulkan Programming Guide, read most of the specification, and searched online.
If the question needs clarification, please ask. I just didn't want to make it overly long.
Thank you for reading.
There isn't a way to control which part of the image is used by the framebuffer when the framebuffer is smaller than the image. The framebuffer origin always maps to the image origin.
Allowing attachments to be larger than the framebuffer is only meant to allow reusing memory/images/views for several purposes within a frame, even when they don't all need the same dimensions. The typical example is reusing a depth buffer (but not its contents) for several different render passes. You could accomplish the same thing with memory aliasing, but engines that have to support multiple APIs might find it easier to do it this way.
The way to control where you render to is by controlling the viewport. That is, you specify a framebuffer size that's actually big enough to cover the total area of the target images that you may want to render to, and use the viewport transform/scissoring to render to a specific area of those images.
There is no post-viewport transformation that goes from framebuffer space to image space. That would be decidedly redundant, since we already have a post-NDC transform. There's no point in having two of them.
Sure, VkRenderPassBeginInfo has the renderArea member, but that is more of a promise from the application than a guarantee from the system:
The application must ensure (using scissor if necessary) that all rendering is contained within the render area, otherwise the pixels outside of the render area become undefined and shader side effects may occur for fragments outside the render area.
So basically, the implementation doesn't do anything with renderArea. It doesn't set up a transformation or anything; you're just promising that no framebuffer pixels outside of that area will be impacted.
In any case, there's really little point in providing a framebuffer size that's smaller than the image sizes. That sort of thing is more the purview of renderArea than of the framebuffer specification.
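As a small illustration of the viewport/scissor approach, assuming a command buffer recording inside a render pass and a pipeline with dynamic viewport and scissor state (the region parameters are placeholders):

#include <vulkan/vulkan.h>

// Steer rendering into a sub-region of a large attachment via the viewport
// and scissor rather than via the framebuffer dimensions.
void renderToSubRegion(VkCommandBuffer cmd,
                       int32_t regionX, int32_t regionY,
                       uint32_t regionW, uint32_t regionH)
{
    VkViewport viewport{};
    viewport.x = (float)regionX;
    viewport.y = (float)regionY;
    viewport.width  = (float)regionW;
    viewport.height = (float)regionH;
    viewport.minDepth = 0.0f;
    viewport.maxDepth = 1.0f;
    vkCmdSetViewport(cmd, 0, 1, &viewport);

    VkRect2D scissor{};
    scissor.offset = { regionX, regionY };
    scissor.extent = { regionW, regionH };
    vkCmdSetScissor(cmd, 0, 1, &scissor);

    // ... record draws; all fragments now land inside the chosen region,
    // consistent with the renderArea promise made in VkRenderPassBeginInfo.
}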

About animating frame by frame with sprite files

I used to animate my CCSprites by iterating through 30 image files (rather big ones) and on each file I changed the CCSprite's texture to that image file.
Someone told me that was not efficient and I should use spritesheets instead. But, can I ask why is this not efficient exactly?
There are two parts to this question:
Memory. OpenGL ES requires textures to have widths and heights that are powers of 2, e.g. 64x128, 256x1024, 512x512, etc. If an image doesn't comply, Cocos2D will automatically resize it to fit those dimensions by adding extra transparent space. With successive images being loaded in, you are constantly wasting more and more space. By using a sprite sheet, you already have all the images tightly packed to reduce wastage (a rough calculation follows below).
Speed. Related to the above, it takes time to load an image and resize it. By only performing the load once, you speed the entire process up.
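As a rough back-of-the-envelope illustration of the memory point, assuming RGBA8888 (4 bytes per pixel) and 30 frames of a hypothetical 300x200 sprite:

#include <cstdio>

// Round a dimension up to the next power of two, as Cocos2D would pad it.
static unsigned nextPow2(unsigned v) {
    unsigned p = 1;
    while (p < v) p <<= 1;
    return p;
}

int main() {
    const unsigned frameW = 300, frameH = 200, frames = 30, bpp = 4;

    const unsigned long long used   = (unsigned long long)frameW * frameH * bpp * frames;
    const unsigned long long padded = (unsigned long long)nextPow2(frameW) * nextPow2(frameH) * bpp * frames;

    // 300x200 pads to 512x256: roughly 2.2x the memory of the raw pixels.
    std::printf("raw: %llu bytes, padded: %llu bytes (%.1fx)\n",
                used, padded, (double)padded / (double)used);
    return 0;
}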

Resolution from a PDFPage?

I have a PDF document that is created by creating NSImages with sizes in 72 dpi points, each of which has a single representation measured in pixels. I then put these images into PDFPages with initWithImage, and then save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
1. Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both size and pixel size in 72 dpi.
2. Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
The PDF coordinate system is in points (1/72 inch) by default.
The PDF coordinate system is devoid of resolution. (This is a white lie - the resolution is effectively the limits of 32-bit floating point numbers.)
Images in PDF do not inherently have any resolution attached to them (this is a white lie - images compressed with JPEG2000 still have resolution in their embedded metadata).
An Image in PDF is represented by an object that contains a series of samples that are stored using some compression filter.
Image objects can be rendered on a page multiple times at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
If the image is not being rendered to the full size of the page, a complete solution is very tricky. Adobe prescribes that images should be treated as being 1x1 and that you change the page transformation matrix to determine how to render them. This means that you would need the matrix at the point of rendering the image, and you would need to push the points (0,0), (0,1), (1,0) through the matrix. The Euclidean distance between (0,0)' and (1,0)' will give you the width in points, and the Euclidean distance between (0,0)' and (0,1)' will give you the height in points (a sketch of this calculation follows below).
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
Doing that last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing that toolkit yourself is several person-years of work.
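As a sketch of that calculation, assuming you already have the image's CTM from a content-stream interpreter or PDF toolkit (the Matrix type and the sample numbers here are illustrative, not a specific API):

#include <cmath>
#include <cstdio>

struct Matrix { double a, b, c, d, e, f; };   // PDF-style affine matrix

struct Point { double x, y; };

// Apply a PDF matrix: (x, y) -> (a*x + c*y + e, b*x + d*y + f).
static Point apply(const Matrix& m, double x, double y) {
    return { m.a * x + m.c * y + m.e, m.b * x + m.d * y + m.f };
}

static double dist(const Point& p, const Point& q) {
    return std::hypot(q.x - p.x, q.y - p.y);
}

int main() {
    // Example CTM that scales the 1x1 image space up to 450x300 points.
    const Matrix ctm = { 450, 0, 0, 300, 81, 246 };
    const double imageWidthPx = 3000, imageHeightPx = 2000;   // sample counts

    // Push (0,0), (1,0), (0,1) through the matrix and measure the rendered size.
    const Point o = apply(ctm, 0, 0);
    const double widthPts  = dist(o, apply(ctm, 1, 0));
    const double heightPts = dist(o, apply(ctm, 0, 1));

    const double xdpi = imageWidthPx  / (widthPts  / 72.0);
    const double ydpi = imageHeightPx / (heightPts / 72.0);
    std::printf("rendered %.1f x %.1f pt -> %.0f x %.0f dpi\n",
                widthPts, heightPts, xdpi, ydpi);
    return 0;
}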