kCGImageAlphaPremultipliedFirst and kCGImageAlphaFirst - objective-c

Can you explain the difference between kCGImageAlphaPremultipliedFirst and kCGImageAlphaFirst?
What does "premultiplied" mean in practice?

In short, premultiplied means that the alpha value has already been multiplied into the color components, so a pixel that is not fully opaque stores scaled color values rather than the original ones.
From the Quartz 2D Programming Guide:
For bitmaps that have an alpha component, whether the color components are already multiplied by the alpha value. Premultiplied alpha describes a source color whose components are already multiplied by an alpha value. Premultiplying speeds up the rendering of an image by eliminating an extra multiplication operation per color component. For example, in an RGB color space, rendering an image with premultiplied alpha eliminates three multiplication operations (red times alpha, green times alpha, and blue times alpha) for each pixel in the image.
BTW, premultiplied is likely what the APIs will force you to use, because that is Quartz's preference. Fortunately, the conversions aren't difficult (though they are lossy; see below).
The shortest way to explain this is with float components in the range [0...1].
If our RGBA input representation is:
typedef struct t_rgba { float r,g,b,a; } t_rgba;
const t_rgba rgba = { 0.5, 0.5, 0.5, 0.5 };
Then to pre-multiply it:
t_rgba rgba_PreMul = rgba;
rgba_PreMul.r *= rgba_PreMul.a;
rgba_PreMul.g *= rgba_PreMul.a;
rgba_PreMul.b *= rgba_PreMul.a;
Then to de-pre-multiply it:
t_rgba rgba_DePreMul = rgba_PreMul;
if (0.0 < rgba_DePreMul.a && 1.0 > rgba_DePreMul.a) { // skip a == 0 (unrecoverable) and a == 1 (no-op)
    const float ialpha = 1.0/rgba_DePreMul.a;
    rgba_DePreMul.r *= ialpha;
    rgba_DePreMul.g *= ialpha;
    rgba_DePreMul.b *= ialpha;
}
You might want to clamp (saturate) the results in there, too.
Now that's the basic form, which can be repurposed for other numeric representations. Note that these conversions are lossy. Also, be careful not to pass premultiplied bitmaps where regular bitmaps are expected, and vice versa.
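For example, here is a minimal sketch of the same round trip on 8-bit components (the struct layout and rounding choices are illustrative, not any particular API's):

#include <stdint.h>

typedef struct t_rgba8 { uint8_t r, g, b, a; } t_rgba8;

t_rgba8 premultiply8(t_rgba8 p)
{
    // scale each color component by alpha/255, rounding to nearest
    p.r = (uint8_t)((p.r * p.a + 127) / 255);
    p.g = (uint8_t)((p.g * p.a + 127) / 255);
    p.b = (uint8_t)((p.b * p.a + 127) / 255);
    return p;
}

t_rgba8 unpremultiply8(t_rgba8 p)
{
    if (p.a == 0 || p.a == 255) return p; // a == 0 is unrecoverable, a == 255 is a no-op
    unsigned v;
    // the divide cannot restore bits lost above -- this is where the lossiness lives
    v = (p.r * 255u) / p.a; p.r = (uint8_t)(v > 255 ? 255 : v); // saturate
    v = (p.g * 255u) / p.a; p.g = (uint8_t)(v > 255 ? 255 : v);
    v = (p.b * 255u) / p.a; p.b = (uint8_t)(v > 255 ? 255 : v);
    return p;
}

A quick way to see the loss: a pixel (200, 200, 200, 2) premultiplies to (2, 2, 2, 2), which de-premultiplies back to (255, 255, 255, 2).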


CGImageCreate with CGColorSpaceCreateDeviceGray on iOS12

I was using CGImageCreate with CGColorSpaceCreateDeviceGray to convert a buffer (CVPixelBufferRef) to a grayscale image. It was very fast and worked well until iOS 12... now the returned image is empty.
The code looks like this:
bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
CGDataProviderRef provider = CGDataProviderCreateWithData((void *)i_PixelBuffer,
                                                          sourceBaseAddr,
                                                          sourceRowBytes * height,
                                                          ReleaseCVPixelBuffer);
retImage = CGImageCreate(width,
                         height,
                         8,                  // bits per component
                         32,                 // bits per pixel
                         sourceRowBytes,
                         CGColorSpaceCreateDeviceGray(),
                         bitmapInfo,
                         provider,
                         NULL,
                         true,
                         kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
Is this a known bug in iOS 12? If device gray is no longer supported in this function, can you suggest another way to do it?
Note that conversion should take less than 0.1 seconds for a 4K image.
Thanks in advance!
According to the list of Supported Pixel Formats in the Quartz 2D Programming Guide, iOS doesn't support 32 bits per pixel with gray color spaces. And even on macOS, 32 bpp gray requires the use of kCGBitmapFloatComponents (and float data).
Is your data really 32 bpp? If so, is it float? What are you using for bitmapInfo?
I would not expect CGImageCreate() to "convert" a buffer, including to grayscale. The parameters you're supplying are telling it how to interpret the data. If you're not using floating-point components, I suspect it was just taking one of the color channels and interpreting that as the gray level and ignoring the other components. So, it wasn't a proper grayscale conversion.
Apple's advice for converting an image is: create a bitmap context with the colorspace, pixel layout, and bitmap info you desire; draw the source image into that context; and create the final image from the context.
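As a rough sketch of that advice, assuming you already have a CGImageRef named source (the variable names here are illustrative):

size_t width = CGImageGetWidth(source);
size_t height = CGImageGetHeight(source);
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
// 8 bits per component, one channel, no alpha; bytes-per-row 0 lets Quartz choose
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                         gray, (CGBitmapInfo)kCGImageAlphaNone);
// drawing performs the actual color conversion
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
CGImageRef grayImage = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
CGColorSpaceRelease(gray);

Note that this performs a real draw, so whether it fits the 0.1-second budget for a 4K frame would need measuring; the Y-plane workaround below avoids the conversion entirely.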
I finally found a workaround for my purpose. Note that the CVPixelBuffer is coming from the video camera.
1. Changed the camera output pixel format to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (AVCaptureVideoDataOutput).
2. Extracted the Y plane from the YpCbCr buffer.
3. Built a CGImage from the Y plane.
Code:
// some code
colorSpace = CGColorSpaceCreateDeviceGray();
sourceRowBytes = CVPixelBufferGetBytesPerRowOfPlane(i_PixelBuffer, 0);
// the base address must be locked (CVPixelBufferLockBaseAddress) before reading it
sourceBaseAddr = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(i_PixelBuffer, 0);
bitmapInfo = kCGImageByteOrderDefault;
// some code
CGContextRef context = CGBitmapContextCreate(sourceBaseAddr,
                                             width,
                                             height,
                                             8,             // bits per component
                                             sourceRowBytes,
                                             colorSpace,
                                             bitmapInfo);
retImage = CGBitmapContextCreateImage(context);
// some code
You can also look at this related post:
420YpCbCr8BiPlanarVideoRange To YUV420 ?/How to copy Y and Cbcr plane to Single plane?

Can you change the bounds of a Sampler in a Metal Shader?

In the fragment function of a Metal shader file, is there a way to redefine the "bounds" of a texture with respect to what the sampler will consider its normalized coordinates to be?
By default, a sample coordinate of (0,0) maps to the top-left "pixel" of the texture and (1,1) to the bottom-right "pixel". However, I'm reusing textures for drawing, and at any given render pass only a portion of the texture contains the relevant data.
For example, in a texture of width 500 and height 500, I might have copied data only into the region (0, 0, 250, 250). In my fragment function, I'd like the sampler to interpret a normalized coordinate of 1.0 as 250, not 500. Is that possible?
I realize I can just change the sampler to use pixel addressing, but that comes with a few restrictions, as noted in the Metal Shading Language Specification.
No, but if you know the region you want to sample from, it's quite easy to do a little math in the shader to fix up your sampling coordinates. This is used often with texture atlases.
Suppose you have an image that's 500x500 and you want to sample the bottom-right 125x125 region (just to make things more interesting). You could pass this sampling region in as a float4, storing the bounds as (left, top, width, height) in the xyzw components. In this case, the bounds would be (375, 375, 125, 125). Your incoming texture coordinates are "normalized" with respect to this square. The shader simply scales and biases these coordinates into texel coordinates, then normalizes them to the dimensions of the whole texture:
fragment float4 fragment_main(FragmentParams in [[stage_in]],
                              texture2d<float, access::sample> tex2d [[texture(0)]],
                              sampler sampler2d [[sampler(0)]],
                              // ...
                              constant float4 &spriteBounds [[buffer(0)]])
{
    // original coordinates, normalized with respect to the subimage
    float2 texCoords = in.texCoords;
    // texture dimensions
    float2 texSize = float2(tex2d.get_width(), tex2d.get_height());
    // adjusted texture coordinates, normalized with respect to the full texture
    texCoords = (texCoords * spriteBounds.zw + spriteBounds.xy) / texSize;
    // sample color at the modified coordinates
    float4 color = tex2d.sample(sampler2d, texCoords);
    // ...
}
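On the CPU side, a minimal sketch of supplying those bounds (assuming an existing MTLRenderCommandEncoder named renderEncoder; the buffer index must match the [[buffer(0)]] attribute above):

#import <simd/simd.h>

// (left, top, width, height) of the subregion, in texels
simd_float4 spriteBounds = simd_make_float4(375.0f, 375.0f, 125.0f, 125.0f);
[renderEncoder setFragmentBytes:&spriteBounds
                         length:sizeof(spriteBounds)
                        atIndex:0];

setFragmentBytes:length:atIndex: copies the data into the command buffer, which is appropriate for small, frequently changing values like this; a full MTLBuffer would only be warranted for larger data.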

how to plot rgb color histogram of image with objective c

I want to show an image's RGB color histogram in a Cocoa application. Please suggest a possible way to do this with Objective-C, or any third-party library available to achieve it.
Well, this is a problem, as RGB colors form a 3D space, so their histogram would be a 4D plot, which is something we cannot really visualize.
So the solution is to reduce the 4D plot to a 3D one somehow. This can be done by sorting the colors by something that has some meaning. I will not speculate and will simply describe what I am using: I use the HSV color space and ignore the V value. This way I lose a lot of color shade information, but it is still enough to describe colors for my purposes.
You can also use more plots with different V values to cover more colors. For more info see:
HSV histogram
Anyway, you can use any gradient sorting or any plot shape; that is completely up to you.
If you want pure RGB, then you could adapt this and use the surface of the RGB cube, or map the colors onto a sphere and ignore the distance from (0,0,0) (i.e., use unit vectors).
So if your R,G,B values are in <0,1>, you convert them to <-1,+1>, then compute the spherical coordinates (ignoring the radius), and you have 2 variables instead of 3, which you can use for a plot (either as a 2D globe base or a 3D sphere ...).
Here is C++ code showing how to do this (adapted from the HSV histogram):
picture pic0,pic1,pic2,zed;
const int na=360,nb=180,nb2=nb>>1;          // size of histogram table
int his[na][nb];
DWORD w;
int a,b,r,g,x,y,z,l,i,n;
double aa,bb,da,db,dx,dy,dz,rr;
color c;

pic2=pic0;                                  // copy input image pic0 to pic2
for (a=0;a<na;a++)                          // clear histogram
    for (b=0;b<nb;b++)
        his[a][b]=0;
for (y=0;y<pic2.ys;y++)                     // compute it
    for (x=0;x<pic2.xs;x++)
    {
        c=pic2.p[y][x];
        r=c.db[picture::_r]-128;
        g=c.db[picture::_g]-128;
        b=c.db[picture::_b]-128;
        l=sqrt(r*r+g*g+b*b);                // convert RGB -> spherical a,b angles
        if (!l) { a=0; b=0; }
        else{
            a=double(double(na)*acos(double(b)/double(l))/(2.0*M_PI));
            if (!r) b=0;
            else b=double(double(nb)*atan(double(g)/double(r))/(M_PI));
            b+=nb2;
            while (a<0) a+=na;
            while (a>=na) a-=na;
            if (b<0) b=0;
            if (b>=nb) b=nb-1;
        }
        his[a][b]++;                        // update color usage count ...
    }
for (n=0,a=0;a<na;a++)                      // max probability
    for (b=0;b<nb;b++)
        if (n<his[a][b]) n=his[a][b];

// draw the colored RGB sphere and histogram
zed=pic1; zed.clear(9999);                  // zed buffer for 3D
pic1.clear(0);                              // image of histogram
da=2.0*M_PI/double(na);
db=M_PI/double(nb);
for (aa=0.0,a=0;a<na;a++,aa+=da)
    for (bb=-M_PI,b=0;b<nb;b++,bb+=db)
    {
        // normal
        dx=cos(bb)*cos(aa);
        dy=cos(bb)*sin(aa);
        dz=sin(bb);
        // color of surface (darker)
        rr=75.0;
        c.db[picture::_r]=double(rr*dx)+128;
        c.db[picture::_g]=double(rr*dy)+128;
        c.db[picture::_b]=double(rr*dz)+128;
        c.db[picture::_a]=0;
        // histogram center
        x=pic1.xs>>1;
        y=pic1.ys>>1;
        // surface position
        rr=64.0;
        z=rr;
        x+=double(rr*dx);
        y+=double(rr*dy);
        z+=double(rr*dz);
        if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; }
        // ignore lines if zero color count
        if (!his[a][b]) continue;
        // color of lines (bright)
        rr=125.0;
        c.db[picture::_r]=double(rr*dx)+128;
        c.db[picture::_g]=double(rr*dy)+128;
        c.db[picture::_b]=double(rr*dz)+128;
        c.db[picture::_a]=0;
        // line length
        l=(pic1.xs*his[a][b])/(n*3);
        for (double xx=x,yy=y,zz=z;l>=0;l--)
        {
            if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; }
            xx+=dx; yy+=dy; zz+=dz;
            x=xx; y=yy; z=zz;
            if (x<0) break; if (x>=pic1.xs) break;
            if (y<0) break; if (y>=pic1.ys) break;
        }
    }
The input image is pic0 and the output image is pic1 (the histogram graph).
pic2 is a copy of pic0 (a remnant of old code).
zed is the Z-buffer for the 3D display, avoiding the need for Z sorting.
I use my own picture class for images, so some of its members are:
xs,ys - size of the image in pixels
p[y][x].dd - pixel at position (x,y), as a 32-bit integer type
clear(color) - clears the entire image
resize(xs,ys) - resizes the image to a new resolution
As the sphere is a 3D object, you should add rotation to it so that the entire surface becomes visible over time (or rotate it with the mouse, or whatever) ...
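On the other hand, if three separate 1D histograms (one per channel) are enough for your purposes, you can sidestep the dimensionality problem entirely. Here is a minimal C sketch, assuming you have already obtained the pixels as tightly packed RGBA8 data (for example by drawing the image into a CGBitmapContext):

#include <stddef.h>
#include <stdint.h>

// counts occurrences of each 0..255 value per channel;
// the three hist arrays must be zero-initialized by the caller
void rgbHistogram(const uint8_t *rgba, size_t pixelCount,
                  uint32_t histR[256], uint32_t histG[256], uint32_t histB[256])
{
    for (size_t i = 0; i < pixelCount; i++, rgba += 4)
    {
        histR[rgba[0]]++; // red
        histG[rgba[1]]++; // green
        histB[rgba[2]]++; // blue
    }
}

Each array can then be drawn as a 256-bin bar chart or curve (e.g. with NSBezierPath), one color per channel.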

Is there a way to set alpha channel color for sampler2D texture?

I am working on an app that uses OpenGL ES to blend two textures. For the overlay image, I want to specify a certain color as the alpha channel, e.g. green. How could this be accomplished? I tried using glBlendFunc without much success. Any help would be greatly appreciated!
There is no such feature in OpenGL itself, but maybe you could achieve it in a shader:
uniform vec3 colorMask; // e.g. green
varying vec2 texCoord;
uniform sampler2D overlay;

void main()
{
    // get RGB of the texture
    vec4 texture = texture2D(overlay, texCoord);
    // calculate alpha using vector relational functions:
    // 1.0 where every channel is within 0.1 of colorMask, 0.0 otherwise
    vec3 absdiff = abs(texture.rgb - colorMask);
    float alpha = all(lessThan(absdiff, vec3(0.1))) ? 1.0 : 0.0;
    texture.a = alpha; // use alpha
    // write final color, can use blending
    gl_FragColor = texture;
}
This works by calculating the absolute difference between the texture color and the masking color, and comparing it to 0.1. It is pretty simple, but it might be slow (I'm writing it from memory; you will have to test it).
Or you can use a different way of calculating alpha:
uniform vec3 colorMask; // e.g. green
varying vec2 texCoord;
uniform sampler2D overlay;

void main()
{
    // get RGB of the texture
    vec4 texture = texture2D(overlay, texCoord);
    // calculate alpha from the squared difference and a step function
    vec3 diff = texture.rgb - colorMask;
    float alpha = step(dot(diff, diff), 0.1);
    texture.a = alpha; // use alpha
    // write final color, can use blending
    gl_FragColor = texture;
}
This uses a squared-error metric to calculate alpha: it takes the distance between the two colors in RGB space, squares it, and compares that to 0.1. The threshold (0.1) might be a little harder to tweak in this form, but it lets you use a soft threshold. If you have a color gradient and want some colors to be more transparent than others, you can throw away the step function and use smoothstep instead (for example, something like alpha = 1.0 - smoothstep(0.05, 0.15, dot(diff, diff));).
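On the application side, a rough sketch of wiring this up (assuming an already compiled and linked program object; with the shaders above, texels matching colorMask come out opaque and everything else transparent, so ordinary alpha blending does the rest - invert the alpha in the shader if you want the key color to disappear instead):

// locate and set the key-color uniform (green here)
GLint maskLoc = glGetUniformLocation(program, "colorMask");
glUseProgram(program);
glUniform3f(maskLoc, 0.0f, 1.0f, 0.0f);
// standard alpha blending so the alpha == 0.0 texels drop out
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);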

Can't correctly rotate cylinder in openGL to desired position

I've got a little Objective-C utility program that renders a convex hull. (This is to troubleshoot a bug in another program that calculates the convex hull in preparation for spatial statistical analysis.) I'm trying to render a set of triangles, each with an outward-pointing vector. I can get the triangles without problems, but the vectors are driving me crazy.
I'd like the vectors to be simple cylinders. The problem is that I can't just declare coordinates for where the top and bottom of the cylinders belong in 3D (e.g., like I can for the triangles). I have to make them and then rotate and translate them from their default position along the z-axis. I've read a ton about Euler angles, and angle-axis rotations, and quaternions, most of which is relevant, but not directed at what I need: most people have a set of objects and then need to rotate the object in response to some input. I need to place the object correctly in the 3D "scene".
I'm using the Cocoa3DTutorial classes to help me out, and they work great as far as I can tell, but the rotation bit is killing me.
Here is my current effort. It gives me cylinders that are located correctly, but they all point along the z-axis (as in this image: we are looking in the -z direction; the triangle poking out behind is not part of the hull, just there for testing/debugging; the orthogonal cylinders are coordinate axes, more or less; and the spheres are to make sure the axes are located correctly, since I have to use rotation to place those cylinders correctly). And BTW, when I use that algorithm, the out-vectors fail as well, although in a different way: they come out normal to the planes, but all pointing in +z instead of some in -z.
from Render3DDocument.m:
// Make the out-pointing vector
C3DTCylinder *outVectTube;
C3DTEntity *outVectEntity;
Point3DFloat *sideCtr = [thisSide centerOfMass];
outVectTube = [C3DTCylinder cylinderWithBase:tubeRadius top:tubeRadius height:tubeRadius*10 slices:16 stacks:16];
outVectEntity = [C3DTEntity entityWithStyle:triColor
                                   geometry:outVectTube];
Point3DFloat *outVect = [[thisSide inVect] opposite];
Point3DFloat *unitZ = [Point3DFloat pointWithX:0 Y:0 Z:1.0f];
Point3DFloat *rotAxis = [outVect crossWith:unitZ];
double rotAngle = [outVect angleWith:unitZ];
[outVectEntity setRotationX:rotAxis.x
                          Y:rotAxis.y
                          Z:rotAxis.z
                          W:rotAngle];
[outVectEntity setTranslationX:sideCtr.x - ctrX
                             Y:sideCtr.y - ctrY
                             Z:sideCtr.z - ctrZ];
[aScene addChild:outVectEntity];
(Note that Point3DFloat is basically a vector class, and that a Side (like thisSide) is a set of four Point3DFloats, one for each vertex, and one for a vector that points towards the center of the hull).
from C3DTEntity.m:
if (_hasTransform) {
    glPushMatrix();

    // Translation
    if ((_translation.x != 0.0) || (_translation.y != 0.0) || (_translation.z != 0.0)) {
        glTranslatef(_translation.x, _translation.y, _translation.z);
    }

    // Scaling
    if ((_scaling.x != 1.0) || (_scaling.y != 1.0) || (_scaling.z != 1.0)) {
        glScalef(_scaling.x, _scaling.y, _scaling.z);
    }

    // Rotation
    glTranslatef(-_rotationCenter.x, -_rotationCenter.y, -_rotationCenter.z);
    if (_rotation.w != 0.0) {
        // single axis-angle rotation
        glRotatef(_rotation.w, _rotation.x, _rotation.y, _rotation.z);
    } else {
        // three separate per-axis rotations
        if (_rotation.x != 0.0)
            glRotatef(_rotation.x, 1.0f, 0.0f, 0.0f);
        if (_rotation.y != 0.0)
            glRotatef(_rotation.y, 0.0f, 1.0f, 0.0f);
        if (_rotation.z != 0.0)
            glRotatef(_rotation.z, 0.0f, 0.0f, 1.0f);
    }
    glTranslatef(_rotationCenter.x, _rotationCenter.y, _rotationCenter.z);
}
I added the bit in the above code that uses a single rotation around an axis (the "if (_rotation.w != 0.0)" bit), rather than a set of three rotations. My code is likely the problem, but I can't see how.
If your out-vectors don't all point in the correct direction, you might have to check your triangles' winding - are they all oriented the same way?
Additionally, it might be helpful to draw a line for each out-vector (use the average of the three vertices of your triangle as the origin, and draw a line a few units long, depending on your scene's scale, in the direction of the out-vector). This way you can be sure that all your vectors are oriented correctly.
How do you calculate your out-vectors?
The problem appears to be that glRotatef() expects degrees and I was giving it radians. In addition, clockwise rotation is taken to be positive, so the sign of the rotation was wrong. This is the corrected code:
double rotAngle = -[outVect angleWith:unitZ]; // radians
[outVectEntity setRotationX:rotAxis.x
                          Y:rotAxis.y
                          Z:rotAxis.z
                          W:rotAngle * 180.0 / M_PI];
I can now see that my other program has the inVects wrong (the outVects are poking through the hull instead of pointing out from each face), and I can now track down that bug in the other program... tomorrow.
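For reference, here is the whole placement computation condensed into plain C, with the radians-to-degrees conversion and sign flip from the fix above (vec3 and its helpers are hypothetical stand-ins for Point3DFloat's methods):

#include <math.h>
#include <OpenGL/gl.h>

typedef struct { float x, y, z; } vec3;

// cross product: an axis perpendicular to both inputs
static vec3 cross3(vec3 a, vec3 b)
{
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

// angle between a and b, in radians
static float angle3(vec3 a, vec3 b)
{
    float d  = a.x*b.x + a.y*b.y + a.z*b.z;
    float la = sqrtf(a.x*a.x + a.y*a.y + a.z*a.z);
    float lb = sqrtf(b.x*b.x + b.y*b.y + b.z*b.z);
    return acosf(d / (la*lb));
}

// rotation that carries the cylinder's default +z axis onto outVect
// (degenerate when outVect is parallel to unitZ: the cross product is zero)
static void orientAlong(vec3 outVect)
{
    const vec3 unitZ = { 0.0f, 0.0f, 1.0f };
    vec3 axis = cross3(outVect, unitZ);
    float degrees = -angle3(outVect, unitZ) * 180.0f / (float)M_PI; // glRotatef wants degrees
    glRotatef(degrees, axis.x, axis.y, axis.z);
}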