Why is Vulkan insisting this image must have FLOAT components when sampling?

My shaders index into an array of samplers and an array of images, but when I sample from an image (I only get the error if I sample it in the shader) I get an error:
Descriptor in binding #0 index 0 requires FLOAT component type, but
bound descriptor format is VK_FORMAT_R8G8B8A8_UINT. The Vulkan spec
states: Descriptors in each bound descriptor set, specified via
vkCmdBindDescriptorSets, must be valid if they are statically used by
the VkPipeline bound to the pipeline bind point used by this command
It's true, I'm guilty of binding an R8G8B8A8_UINT image to index [0], and I try to sample it, but why is it insisting it should be a FLOAT component type?
I sample the image in my shader like this:
layout(set = 0, binding = 0) uniform texture2D textures[20];
layout(set = 0, binding = 1) uniform sampler samplers[4];
texture(sampler2D(textures[0], samplers[0]), vec2(inPosAndCoords.zw) );
And that error fires. Why is it insisting that the components for the image be FLOAT?

Why is it insisting that the components for the image be FLOAT?
Because that's what your shader asked for. The basic type for an image format is baked into the image's GLSL type.
texture2D is a 2D texture with a floating-point format. utexture2D is a 2D texture with an unsigned integer format.
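For example, if binding 0 really should hold unsigned-integer images such as VK_FORMAT_R8G8B8A8_UINT, the shader has to declare a matching integer type. A minimal sketch (assuming inPosAndCoords is a vec4 input as in your snippet, and that the sampler uses VK_FILTER_NEAREST, since integer formats can't be linearly filtered):
#version 450
layout(location = 0) in vec4 inPosAndCoords;
layout(location = 0) out vec4 outColor;
// utexture2D matches unsigned-integer formats like VK_FORMAT_R8G8B8A8_UINT
layout(set = 0, binding = 0) uniform utexture2D textures[20];
layout(set = 0, binding = 1) uniform sampler samplers[4];
void main() {
    // sampling an integer texture yields a uvec4, not a vec4
    uvec4 texel = texture(usampler2D(textures[0], samplers[0]), inPosAndCoords.zw);
    // normalize manually if a [0, 1] color is wanted
    outColor = vec4(texel) / 255.0;
}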

Related

Vertex buffer with vertices of different formats

I want to draw a model that's composed of multiple meshes, where each mesh has a different vertex format. Is it possible to put all the various vertices within the same vertex buffer and point to the correct offset at vkCmdBindVertexBuffers time?
Or must all vertices within a buffer have the same format, thus necessitating multiple vbufs for such a model?
Looking at the manual for vkCmdBindVertexBuffers, it's not clear whether the offset is in bytes or in vertex-strides.
https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/vkCmdBindVertexBuffers.html
Your question really breaks down into three questions:
1. Does the pOffsets parameter for vkCmdBindVertexBuffers accept bytes or vertex strides?
2. Can I put more than one vertex format into a vertex buffer?
3. Should I put more than one vertex format into a vertex buffer?
The short version is:
1. Bytes
2. Yes
3. Probably not
Does the pOffsets parameter for vkCmdBindVertexBuffers accept bytes or vertex strides?
The function signature is
void vkCmdBindVertexBuffers(
VkCommandBuffer commandBuffer,
uint32_t firstBinding,
uint32_t bindingCount,
const VkBuffer* pBuffers,
const VkDeviceSize* pOffsets);
Note the VkDeviceSize type for pOffsets. This unambiguously means "bytes", not strides. Any VkDeviceSize means an offset or size in raw memory. Vertex strides aren't raw memory; they're simply counts, so the type would have to be uint32_t or uint64_t.
Furthermore there's nothing in that function signature that specifies the vertex format so there would be no way to convert the vertex stride count to actual memory sizes. Remember that unlike OpenGL, Vulkan is not a state machine, so this function doesn't have any "memory" of a rendering pipeline that you might have previously bound.
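For illustration, a hedged sketch of binding two meshes with different vertex formats out of a single buffer, where the offsets are plain byte counts (cmd, meshBuffer and firstMeshBytes are hypothetical names):
VkBuffer buffers[1] = { meshBuffer };
VkDeviceSize offsets[1] = { 0 };          // first mesh starts at byte 0
vkCmdBindVertexBuffers(cmd, 0, 1, buffers, offsets);
// ... bind the pipeline for the first vertex format and draw ...
offsets[0] = firstMeshBytes;              // byte offset of the second mesh, not a vertex count
vkCmdBindVertexBuffers(cmd, 0, 1, buffers, offsets);
// ... bind the pipeline for the second vertex format and draw ...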
Can I put more than one vertex format into a vertex buffer?
As a consequence of the above answer, yes. You can put pretty much whatever you want into a vertex buffer, although I believe some hardware will have alignment restrictions on what are valid offsets for vertex buffers, so make sure you check that.
Should I put more than one vertex format into a vertex buffer?
Generally speaking you want to render your scene in as few draw calls as possible, and having lots of arbitrary vertex formats runs counter to that. I would argue that, if possible, the only time you want to change vertex formats is when you're switching to a different rendering pass, such as when switching from rendering opaque items to rendering transparent ones.
Instead you should try to make format normalization part of your asset pipeline, taking your source assets and converting them to a single consistent format. If that's not possible, then you could consider doing the normalization at load time. This adds complexity to the loading code, but should drastically reduce the complexity of the rendering code, since you now only have to think in terms of a single vertex format.
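As a purely illustrative sketch, "a single consistent format" could be as simple as one interleaved struct that every mesh is converted to at build or load time (the actual layout depends on your assets):
typedef struct {
    float position[3];
    float normal[3];
    float uv[2];
} Vertex;   // 32 bytes: one vertex input binding with stride = sizeof(Vertex) covers every mesh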

Pass audio spectrum to a shader as texture in libGDX

I'm developing an audio visualizer using libGDX.
I want to pass the audio spectrum data (an array containing the FFT of the audio sample) to a shader I took from Shadertoy: https://www.shadertoy.com/view/ttfGzH.
In the GLSL code I expect a uniform containing the data as a texture:
uniform sampler2D iChannel0;
The problem is that I can't figure out how to pass an arbitrary array as a texture to a shader in libGDX.
I already searched in SO and in libGDX's forum but there isn't a satisfying answer to my problem.
Here is my Kotlin code (that obviously doesn't work xD):
val p = Pixmap(512, 1, Pixmap.Format.Alpha)
val t = Texture(p)
val map = p.pixels
map.putFloat(....) // fill the map with FFT data
[...]
t.bind(0)
shader.setUniformi("iChannel0", 0)
You could simply use the drawPixel method and store your data in the first channel of each pixel just like in the shadertoy example (they use the red channel).
float[] fftData = // your data
Color tmpColor = new Color();
Pixmap pixmap = new Pixmap(fftData.length, 1, Pixmap.Format.RGBA8888);
for(int i = 0; i < fftData.length; i++)
{
tmpColor.set(fftData[i], 0, 0, 0); // using only 1 channel per pixel
pixmap.drawPixel(i, 0, Color.rgba8888(tmpColor));
}
// then create your texture and bind it to the shader
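// One possible way to finish that last step (illustrative names; `shader` is the
// ShaderProgram from the question, and the shader must be bound when setting uniforms):
Texture texture = new Texture(pixmap);   // uploads the pixmap's pixels to the GPU
texture.bind(0);                         // bind to texture unit 0
shader.setUniformi("iChannel0", 0);      // point iChannel0 at texture unit 0
// when the FFT data changes, rewrite the pixmap and re-upload it with:
// texture.draw(pixmap, 0, 0);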
To be more efficient and require 4x less memory (and possibly fewer samples, depending on the shader), you could use 4 channels per pixel by splitting your data across the r, g, b and a channels. However, this will complicate the shader a bit.
The data being passed in the shader example you provided is not arbitrary, though: it has pretty limited precision and ranges between 0 and 1. If you want to increase precision, you may want to store the floating-point value across multiple channels (although the IEEE recomposition in the shader may be painful) or pass an integer to be scaled down (fixed point). If you need data between -inf and +inf, you may use sigmoid and inverse-sigmoid functions, at the cost of greatly reducing the precision again. I believe this technique will work for your example, though, as it seems to only require values between 0 and 1 and precision is not super important because the result is smoothed.

Creating Image from pixel data with CGBitmapContextCreate

I am trying to write code that can crop an existing image down to some specified size/region. I am working with DICOM images, and the API I am using allows me to get pixel values directly. I've placed pixel values of the area of interest within the image into an array of floats (dstImage, below).
Where I'm encountering trouble is with the actual construction/creation of the new, cropped image file using this pixel data. The source image is grayscale, however all of the examples I have found online (like this one) have been for RGB images. I tried to follow the example in that link, adjusting for grayscale and trying numerous different values, but I continue to get errors on the CGBitmapContextCreate line of code and still do not clearly understand what those values are supposed to be.
My intensity values for the source image go above 255, so my impression is that this is not 8-bit Grayscale, but 16-bit Grayscale.
Here is my code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context;
context = CGBitmapContextCreate(dstImage, // pixel data from the region of interest
dstWidth, // width of the region of interest
dstHeight, // height of the region of interest
16, // bits per component
2 * dstWidth, // bytes per row
colorSpace,
kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
CFSTR("test.png"),
kCFURLPOSIXPathStyle,
false);
CFStringRef type = kUTTypePNG;
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url,
type,
1,
0);
CGImageDestinationAddImage(dest,
cgImage,
0);
CFRelease(cgImage);
CFRelease(context);
CGImageDestinationFinalize(dest);
free(dstImage);
The error I keep receiving is:
CGBitmapContextCreate: unsupported parameter combination: 16 integer bits/component; 32 bits/pixel; 1-component color space; kCGImageAlphaNoneSkipLast; 42 bytes/row.
The ultimate goal is to create an image file from the pixel data in dstImage and save it to the hard drive. Help on this would be greatly appreciated as would insight into how to determine what values I should be using in the CGBitmapContextCreate call.
Thank you
First, you should familiarize yourself with the "Supported Pixel Formats" section of Quartz 2D Programming Guide: Graphics Contexts.
If your image data is in an array of float values, then it's 32-bits-per-component, not 16. Therefore, you have to use kCGImageAlphaNone | kCGBitmapFloatComponents.
However, I believe that Core Graphics will interpret floating-point components as being between 0.0 and 1.0. If your values are outside of that range, you may need to convert them using something like (value - minimumValue) / (maximumValue - minimumValue). An alternative may be to use CGColorSpaceCreateCalibratedGray(), or to create a CGImage using CGImageCreate() with an appropriate decode parameter and then, if you still need a bitmap context, draw that image into one.
In fact, if you're not drawing into your bitmap context, you should just be creating a CGImage instead, anyway.
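A minimal sketch of the corrected call under those assumptions (dstImage, dstWidth and dstHeight are the variables from the question, with the float values already normalized to the 0.0–1.0 range):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(dstImage,
                                             dstWidth,
                                             dstHeight,
                                             32,                        // bits per component (float)
                                             sizeof(float) * dstWidth,  // bytes per row, one float per pixel
                                             colorSpace,
                                             kCGImageAlphaNone | kCGBitmapFloatComponents);
CGColorSpaceRelease(colorSpace);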

In WebGL what are the differences between an attribute, a uniform, and a varying variable?

Is there an analogy that I can think of when comparing these different types, or how these things work?
Also, what does uniforming a matrix mean?
Copied directly from http://www.lighthouse3d.com/tutorials/glsl-tutorial/data-types-and-variables/. The actual site has much more detailed information and would be worthwhile to check out.
Variable Qualifiers
Qualifiers give a special meaning to the variable. The following qualifiers are available:
const – The declaration is of a compile time constant.
attribute – Global variables that may change per vertex, that are passed from the OpenGL application to vertex shaders. This qualifier can only be used in vertex shaders. For the shader this is a read-only variable. See Attribute section.
uniform – Global variables that may change per primitive [...], that are passed from the OpenGL application to the shaders. This qualifier can be used in both vertex and fragment shaders. For the shaders this is a read-only variable. See Uniform section.
varying – used for interpolated data between a vertex shader and a fragment shader. Available for writing in the vertex shader, and read-only in a fragment shader. See Varying section.
As for an analogy, const and uniform are like global variables in C/C++, one is constant and the other can be set. Attribute is a variable that accompanies a vertex, like color or texture coordinates. Varying variables can be altered by the vertex shader, but not by the fragment shader, so in essence they are passing information down the pipeline.
uniform variables are per-primitive parameters (constant during an entire draw call);
attribute variables are per-vertex parameters (typically: positions, normals, colors, UVs, ...);
varying variables are per-fragment (or per-pixel) parameters: they vary from pixel to pixel.
It's important to understand how varying works to program your own shaders.
Let's say you define a varying parameter v for each vertex of a triangle inside the vertex shader. When this varying parameter is sent to the fragment shader, its value is automatically interpolated based on the position of the pixel to draw.
In the following image, the red pixel received an interpolated value of the varying parameter v. That's why we call them "varying".
For the sake of simplicity the example given above uses bilinear interpolation, which assumes that all the pixels drawn have the same distance from the camera. For accurate 3D rendering, graphic devices use perspective-correct interpolation which takes into account the depth of a pixel.
In WebGL what are the differences between an attribute, a uniform, and a varying variable?
In OpenGL, a "program" is a collection of "shaders" (smaller programs), which are connected to each other in a pipeline.
// "program" contains a shader pipeline:
// vertex shader -> other shaders -> fragment shader
//
const program = initShaders(gl, "vertex-shader", "fragment-shader");
gl.useProgram(program);
Shaders process vertices (vertex shader), geometries (geometry shader), tessellation (tessellation shader), fragments (pixel shader), and other batch process tasks (compute shader) needed to rasterize a 3D model.
OpenGL (WebGL) shaders are written in GLSL (a text-based shader language compiled on the GPU).
// Note: As of 2017, WebGL only supports Vertex and Fragment shaders
<!-- Vertex Shader -->
<script id="shader-vs" type="x-shader/x-vertex">
// <-- Receive from WebGL application
uniform vec3 vertexVariableA;
// attribute is supported in Vertex Shader only
attribute vec3 vertexVariableB;
// --> Pass to Fragment Shader
varying vec3 variableC;
</script>
<!-- Fragment Shader -->
<script id="shader-fs" type="x-shader/x-fragment">
// <-- Receive from WebGL application
uniform vec3 fragmentVariableA;
// <-- Receive from Vertex Shader
varying vec3 variableC;
</script>
Keeping these concepts in mind:
Shaders can pass data to the next shader in the pipeline (out, inout), and they can also accept data from the WebGL application or a previous shader (in).
The Vertex and Fragment shaders (any shader really) can use a uniform variable, to receive data from the WebGL application.
// Pass data from the WebGL application to the shader
const uniformHandle = gl.getUniformLocation(program, "vertexVariableA");
gl.uniform3fv(uniformHandle, [0.1, 0.2, 0.3]); // vertexVariableA is a vec3
The Vertex Shader can also receive data from the WebGL application with the attribute variable, which can be enabled or disabled as needed.
// Pass data from the WebGL application to the Vertex Shader
// (a vertex buffer holding the attribute data is assumed to be bound)
const attributeHandle = gl.getAttribLocation(program, "vertexVariableB");
gl.enableVertexAttribArray(attributeHandle);
gl.vertexAttribPointer(attributeHandle, 3, gl.FLOAT, false, 0, 0);
The Vertex Shader can pass data to the Fragment Shader using the varying variable. See GLSL code above (varying vec3 variableC;).
Uniforms are another way to pass data from our application on the CPU to the shaders on the GPU, but uniforms are slightly different compared to vertex attributes. First of all, uniforms are global. Global, meaning that a uniform variable is unique per shader program object, and can be accessed from any shader at any stage in the shader program. Second, whatever you set the uniform value to, uniforms will keep their values until they're either reset or updated.
I like the description from https://learnopengl.com/Getting-started/Shaders, because the word per-primitive is not intuitive.

Seam Carving – Accessing pixel data in cocoa

I want to implement the seam carving algorithm by Avidan/Shamir. After the energy-computation stage, which can be implemented as a Core Image filter, I need to compute the seams with the lowest energy. That part can't be implemented as a Core Image filter because it uses dynamic programming (and you don't have access to previous computations in the OpenGL Shading Language).
So I need a way to access the pixel data of an image efficiently in Objective-C/Cocoa.
Pseudo code omitting boundary checks:
for y in 0..lines(image) do:
for x in 0..columns(image) do:
output[x][y] = value(image, x, y) +
min{ output[x-1][y-1]; output[x][y-1]; output[x+1][y-1] }
The best way to get access to the pixel values for an image is to create a CGBitmapContextRef with CGBitmapContextCreate. The important part is that when you create the context, you get to pass in the pointer that will be used as the backing store for the bitmap's data. That means this buffer will hold the pixel values, and you can do whatever you want with them.
So the steps should be:
1. Allocate a buffer with malloc or another suitable allocator.
2. Pass that buffer as the first parameter to CGBitmapContextCreate.
3. Draw your image into the returned CGBitmapContextRef.
4. Release the context.
Now you have your original data pointer that is filled with pixels in the format specified in the call to CGBitmapContextCreate.
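A hedged sketch of those steps wrapped in one helper, assuming an 8-bit RGBA layout and a CGImageRef as input (the names are illustrative):
#include <stdlib.h>
#include <CoreGraphics/CoreGraphics.h>

// Returns a malloc'd buffer of width * height RGBA pixels that the caller owns and frees.
static uint8_t *copyPixelData(CGImageRef image, size_t *outWidth, size_t *outHeight)
{
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = width * 4;

    uint8_t *pixels = malloc(bytesPerRow * height);               // 1. allocate the backing store
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height,
                                             8, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast); // 2. wrap the buffer in a context
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image); // 3. decode the image into the buffer
    CGContextRelease(ctx);                                           // 4. release the context

    *outWidth = width;
    *outHeight = height;
    return pixels; // ready for the dynamic-programming pass
}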