Assign integer values to color in OpenGL ES 2.0

I need to set the color as a byte or integer value, not as floats.
How can I assign this type to gl_FragColor?
Dividing the value by 256 doesn't give me the precision I want.
My main goal is to control the exact value of each bit in the color buffer when I draw a line with a specific color.
For example, if I want only the two least significant bits of a pixel's red channel to be set, what color value should I pass to gl_FragColor?
If I could write byte values directly, I would write the value 3 to the red component.
Thanks

As far as I know, gl_FragColor must always be floating point. However, if you know the colour buffer is 8 bits per channel, it shouldn't be hard to force whatever you want into it. You might consider
gl_FragColor = vec4(floor(number)/255.0, 0, 0, 0);
for example. Note the division by 255.0 rather than 256.0: an 8-bit normalized channel maps 255 to 1.0, so integerValue/255.0 round-trips to the exact byte. More recent versions of GLSL support bitwise operations, but I doubt GLES2 does.
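For the concrete case in the question (only the two least significant bits of the red channel set, i.e. the byte value 3), the pre-divided value could be computed on the CPU and passed in as a uniform. A minimal sketch in C, assuming a hypothetical uniform named u_color and a linked program object named program (neither is from the original code):
GLint loc = glGetUniformLocation(program, "u_color");   /* "u_color" is an assumed name */
GLubyte redByte = 3;                                     /* only the two least significant bits set */
glUniform4f(loc, redByte / 255.0f, 0.0f, 0.0f, 0.0f);
/* In the fragment shader: gl_FragColor = u_color; the 8-bit framebuffer then stores red = 3. */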
If you want to draw to specific bits, maybe...
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
...
gl_FragColor = vec4(pow(2.0, bitIndex)/255.0, 0, 0, 0);
I haven't tested this, but I see no reason why it couldn't work, assuming geometry never overlaps (otherwise the additive blend would carry the bit over into the next one).
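To verify which bits actually ended up in the colour buffer, the most direct check is to read the pixel back as bytes. A rough sketch, where x and y are placeholder coordinates of a pixel on the drawn line:
GLubyte px[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);   /* this format/type pair is always supported in ES 2.0 */
/* px[0] is the red byte; (px[0] & 0x03) tests the two least significant bits. */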

Related

Pass audio spectrum to a shader as texture in libGDX

I'm developing an audio visualizer using libGDX.
I want to pass the audio spectrum data (an array containing the FFT of the audio sample) to a shader I took from Shadertoy: https://www.shadertoy.com/view/ttfGzH.
In the GLSL code I expect a uniform containing the data as a texture:
uniform sampler2D iChannel0;
The problem is that I can't figure out how to pass an arbitrary array as a texture to a shader in libGDX.
I already searched on SO and on libGDX's forum, but there isn't a satisfying answer to my problem.
Here is my Kotlin code (that obviously doesn't work xD):
val p = Pixmap(512, 1, Pixmap.Format.Alpha)
val t = Texture(p)
val map = p.pixels
map.putFloat(....) // fill the map with FFT data
[...]
t.bind(0)
shader.setUniformi("iChannel0", 0)
You could simply use the drawPixel method and store your data in the first channel of each pixel, just like in the Shadertoy example (they use the red channel).
float[] fftData = // your data
Color tmpColor = new Color();
Pixmap pixmap = new Pixmap(fftData.length, 1, Pixmap.Format.RGBA8888);
for (int i = 0; i < fftData.length; i++)
{
    tmpColor.set(fftData[i], 0, 0, 0); // using only 1 channel per pixel
    pixmap.drawPixel(i, 0, Color.rgba8888(tmpColor));
}
// then create your texture and bind it to the shader
To be more efficient and use 4x less memory (and possibly fewer samples, depending on the shader), you could use 4 channels per pixel by splitting your data across the r, g, b and a channels. However, this will complicate the shader a bit.
The data being passed in the shader example you provided is not arbitrary though; it has fairly limited precision and ranges between 0 and 1. If you want to increase precision, you could store the floating-point value across multiple channels (although the IEEE recomposition in the shader may be painful), or pass an integer to be scaled down (fixed point). If you need data between -inf and +inf, you could use sigmoid and inverse-sigmoid functions, at the cost of again greatly reducing precision. I believe this technique will work for your example though, as it only seems to require values between 0 and 1, and precision is not critical because the result is smoothed.
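To make the fixed-point idea concrete in a language-agnostic way, here is a rough sketch in plain C (not libGDX code; the function names are made up for illustration): quantize a sample in [0, 1] into one 8-bit channel, and recover an approximation on the other side by dividing by 255.
#include <stdint.h>
uint8_t pack_unorm8(float sample)   { return (uint8_t)(sample * 255.0f + 0.5f); }  /* [0, 1] -> one byte */
float   unpack_unorm8(uint8_t byte) { return byte / 255.0f; }                      /* byte -> [0, 1] */
Splitting one value across several channels works the same way, just with a larger scale factor and a per-channel remainder.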

Surface format is B8G8R8A8_UNORM, but vkCmdClearColorImage takes float?

I use vkGetPhysicalDeviceSurfaceFormatsKHR to get the supported image formats for the swapchain, and (on Linux+Nvidia, using SDL) I get VK_FORMAT_B8G8R8A8_UNORM as the first option, so I go ahead and create the swapchain with that format:
VkSwapchainCreateInfoKHR swapchain_info = {
...
.imageFormat = format, /* taken from vkGetPhysicalDeviceSurfaceFormatsKHR */
...
};
So far, it all makes sense. The image format used to draw on the screen is the usual 8-bits-per-channel BGRA.
As part of my learning process, I have so far arrived at setting up a lot of stuff but not yet the graphics pipeline [1]. So I am trying the only command I can use that doesn't need a pipeline: vkCmdClearColorImage [2].
The VkClearColorValue used to define the clear color can take the color as float, uint32_t or int32_t, depending on the format of the image. I would have expected, based on the image format given to the swapchain, that I should give it uint32_t values, but that doesn't seem to be correct. I know because the screen color didn't change. I tried giving it floats and it worked.
My question is, why does the clear color need to be specified in floats when the image format is VK_FORMAT_B8G8R8A8_UNORM?
[1] Actually I have, but I thought I would try out the simpler case of no pipeline first. I'm trying to use Vulkan incrementally (given its verbosity), particularly because I'm also writing tutorials on it as I learn.
[2] Actually, it technically doesn't need a render pass, but I figured hey, I'm not using any pipeline stuff here, so let's try it without a pipeline, and it worked.
My rendering loop is essentially the following:
acquire an image from the swapchain
create a command buffer with the following:
    transition from VK_IMAGE_LAYOUT_UNDEFINED to VK_IMAGE_LAYOUT_GENERAL (because I'm clearing the image outside a render pass)
    clear the image
    transition from VK_IMAGE_LAYOUT_GENERAL to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
submit the command buffer to the queue (taking care of synchronization with the swapchain via semaphores)
submit for presentation
My question is, why does the clear color need to be specified in floats when the image format is VK_FORMAT_B8G8R8A8_UNORM?
Because the normalized, scaled, or sRGB image formats are really just various forms of floating-point compression. A normalized integer is a way of storing floating-point values on the range [0, 1] or [-1, 1], but using a much smaller amount of data than even a 16-bit float. A scaled integer is a way of storing floating point values on the range [0, MAX] or [-MIN, MAX]. And sRGB is just a compressed way of storing linear color values on the range [0, 1], but in a gamma-corrected color space that puts precision in different places than the linear color values would suggest.
You see the same things with inputs to the vertex shader. A vec4 input type can be fed by normalized formats just as well as by floating-point formats.
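So for a B8G8R8A8_UNORM swapchain image, the clear color goes through the float32 member of the union, with each intended byte value divided by 255. A hedged sketch in C, where command_buffer and swapchain_image are placeholder names and the image is assumed to already be in VK_IMAGE_LAYOUT_GENERAL, as in the question's loop:
/* Clear to an exact byte color, e.g. (R, G, B, A) = (64, 128, 255, 255). */
VkClearColorValue clear_color = {
    .float32 = { 64.0f / 255.0f, 128.0f / 255.0f, 255.0f / 255.0f, 1.0f }
};
/* Components are given as R, G, B, A regardless of the B8G8R8A8 byte order. */
VkImageSubresourceRange range = {
    .aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT,
    .baseMipLevel   = 0,
    .levelCount     = 1,
    .baseArrayLayer = 0,
    .layerCount     = 1,
};
vkCmdClearColorImage(command_buffer, swapchain_image,
                     VK_IMAGE_LAYOUT_GENERAL, &clear_color, 1, &range);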

Meaning of NSOpenGLPFAColorSize for NSOpenGLPixelFormat

I'm unclear as to what value to set for NSOpenGLPFAColorSize when creating an NSOpenGLPixelFormat. The documentation states:
Value is a nonnegative buffer size specification. A color buffer that most closely matches the specified size is preferred. If unspecified, OpenGL chooses a color size that matches the screen.
But does this mean the number of bits per pixel? Or bits per component? For example, if it were set to 24 and interpreted as bits per pixel, then each RGBA color would have 6 bits per component, for a total of 24 bits for the entire RGBA pixel.
However, if it is interpreted as bits per component, then it would mean 24 bits for each of the red, green, blue and alpha components, making a 96-bit RGBA pixel.
I'm inclined to believe that it means bits per component, as the values I've seen in sample code range over 8, 16, 24 and 32, and everything but 24 makes sense when interpreted as bits per component. It would be nice, though, to have a definitive answer.
Note: Edited to reflect that pixels in OpenGL are RGBA not RGB.
After scouring the documentation further, I came across the NSOpenGLPFAColorFloat attribute, which the documentation describes as follows:
A Boolean attribute. If present, this attribute indicates that only renderers that are capable of using buffers storing floating point pixels are considered. This should be accompanied by a NSOpenGLPFAColorSize of 64 (for half float pixel components) or 128 (for full float pixel components). Note, not all hardware supports floating point color buffers thus the returned pixel format could be NULL.
With that additional information, it must mean bits per pixel.
I did some experimenting as well, setting NSOpenGLPFAColorSize to each of 8, 16, 24 and 32 and then checking what I got back. In each case I was returned a pixel format with NSOpenGLPFAColorSize set to 32, meaning 32 bits per RGBA pixel. Just passing NSOpenGLPFAColorFloat with nothing set for the color size is enough to get back a pixel format with 64 bits per pixel.
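For reference, a sketch of the kind of attribute array this refers to; the constants come from <AppKit/NSOpenGL.h>, the array would be handed to NSOpenGLPixelFormat's initWithAttributes:, and the specific attribute choices here are just an example:
NSOpenGLPixelFormatAttribute attrs[] = {
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAColorSize, 32,   // 32 bits per pixel, i.e. 8 bits per RGBA component
    NSOpenGLPFADepthSize, 24,
    0                           // zero-terminated list
};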

Draw tiled images in CGContext with a scale transformation gives precision errors

I want to draw tiled images and then transform them using the usual panning and zooming gestures. The problem that brings me here is that whenever I use a scale factor with many decimal places, a thin line of pixels (1 or 2 wide) appears in the middle of the tiles. I managed to isolate the problem like this:
CGContextSaveGState(UIGraphicsGetCurrentContext());
CGContextSetFillColor(UIGraphicsGetCurrentContext(), CGColorGetComponents([UIColor redColor].CGColor));
CGContextFillRect(UIGraphicsGetCurrentContext(), rect);//rect from drawRect:
float scale = 0.7;
CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(50, 50, 100, 100), testImage);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(150, 50, 100, 100), testImage);
CGContextRestoreGState(UIGraphicsGetCurrentContext());
With a 0.7 scale, the two images appear correctly tiled.
With a 0.777777 scale (changing the scale line to "float scale = 0.777777;"), the visual artifact appears.
Is there any way to avoid this problem? It happens with CGImage, CGLayer and primitive drawing such as CGContextFillRect, and it also happens on Mac OS X.
Thanks for the help!
Quartz has a floating point coordinate system, so scaling may produce values that are not on pixel boundaries, which results in visible antialiasing at the edges. If you don't want that, you have two options:
Adjust your scale factor so that all your scaled coordinates are integral. This may not always be possible, especially if you're drawing lots of things.
Disable anti-aliasing for your graphics context using CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), false);. This will result in crisp pixel boundaries, but anything but straight lines might not look very good.
When all is said and done, iOS is dealing with discrete pixels on integer boundaries. When your frames are scaled by 0.7, the 50 becomes 35, right on a pixel boundary. At 0.777777 it is not, so iOS adapts and shifts/shrinks/blends as it sees fit.
You really have two choices. If you want to scale the context, then round the desired value up or down so that it results in integral scaled frame values (your code uses 50 as the base coordinate).
Otherwise, don't scale the context; scale each piece of content individually and use CGRectIntegral to round all dimensions up or down as needed.
EDIT: If my suspicion is right, there is yet another option for you. Let's say you want a scale factor of 0.777777 and a frame of (50, 50, 100, 100). Take the 50, multiply it by the scale, and round the result up or down to an integer. Then recompute the frame origin by dividing that rounded value by 0.777777, giving a fractional coordinate that lands back on an integer once the scale is applied. Quartz is really good at figuring out that you mean an integral value, so small rounding errors are ignored. I'd bet anything this will work just fine for you.
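A minimal sketch of that rounding idea in Core Graphics C, extending the rounding to every tile edge (not just the origin) so that each scaled edge lands on a whole device pixel; ctx and testImage stand in for the context and image from the question:
CGFloat scale  = 0.777777;
// Snap each edge coordinate so that edge * scale is an integer number of device pixels.
CGFloat left   = round( 50.0 * scale) / scale;   //  50 -> ~50.14  (39 device pixels)
CGFloat shared = round(150.0 * scale) / scale;   // 150 -> ~150.43 (117 device pixels)
CGFloat right  = round(250.0 * scale) / scale;   // 250 -> ~249.43 (194 device pixels)
CGFloat top    = round( 50.0 * scale) / scale;
CGFloat height = round(100.0 * scale) / scale;
CGContextScaleCTM(ctx, scale, scale);
CGContextDrawImage(ctx, CGRectMake(left,   top, shared - left,  height), testImage);
CGContextDrawImage(ctx, CGRectMake(shared, top, right - shared, height), testImage);
The two tiles now meet exactly at the 117th device pixel, so there is no fractional coverage left for antialiasing to blend into a seam.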

Weird arguments in UIColor colorWithRed:green:blue:alpha:

When checking this method, I was expecting red, green and blue to be in the 0-255 range. Instead, they're in 0-1.
Am I the only one who thinks this is weird?
Is there any reason not to use the more common 0-255 values for RGB, or even hex numbers (as in HTML)?
In my opinion this is not weird. Both 0-255 and 0.0-1.0 levels are widely used on different platforms. You can always convert between them with something like this:
#define FLOAT_COLOR_VALUE(n) ((n)/255.0)
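If you prefer HTML-style hex values, a small helper can do the conversion. A sketch in plain C (the function name is made up for illustration):
// Split a 0xRRGGBB value into the normalized 0.0-1.0 components that
// colorWithRed:green:blue:alpha: expects.
static void rgbFromHex(unsigned int hex, float *r, float *g, float *b) {
    *r = ((hex >> 16) & 0xFF) / 255.0f;
    *g = ((hex >>  8) & 0xFF) / 255.0f;
    *b = ( hex        & 0xFF) / 255.0f;
}
For example, rgbFromHex(0x3366CC, &r, &g, &b) yields (0.2, 0.4, 0.8).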
The reason RGB values are sometimes represented as floats rather than 0 to 255 is that 0 to 255 assumes you are using 8 bits to represent each colour component, and hence 24 bits for each colour in your frame buffers. That may not be the case if you are using displays that only support 256 colours in total, or ones that support far more than 16 million.
In theory there can be an infinite number of shades of red, green or blue. The number of bits you use to represent them depends on how accurately you need to represent colour and how much memory you have on graphics cards to represent images, etc.
For many cases 0 to 255 is fine. But there is another world out there where it isn't, and for those devices and accurate rendering requirements, floating-point numbers provide a much-needed alternative.