UIColor CMYK and Lab Values - objective-c

Simple question, more than likely complex answer:
How can I get CMYK and Lab values from a UIColor object (of which I know the RGB values if it helps)?
I have found the following snippet for getting CMYK values, but I can't get any accurate values out of it; despite it being everywhere, I've heard it's not a great snippet.
CGFloat rgbComponents[4];
[color getRed:&rgbComponents[0] green:&rgbComponents[1] blue:&rgbComponents[2] alpha:&rgbComponents[3]];
CGFloat k = MIN(1-rgbComponents[0], MIN(1-rgbComponents[1], 1-rgbComponents[2]));
CGFloat c = (1-rgbComponents[0]-k)/(1-k);
CGFloat m = (1-rgbComponents[1]-k)/(1-k);
CGFloat y = (1-rgbComponents[2]-k)/(1-k);
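Note that this snippet also divides by zero for pure black (r = g = b = 0, so k = 1). A guarded version of the same naive formula looks like this (a sketch only; it is still not a colour-managed conversion):
CGFloat rgba[4];
[color getRed:&rgba[0] green:&rgba[1] blue:&rgba[2] alpha:&rgba[3]];
CGFloat k = MIN(1 - rgba[0], MIN(1 - rgba[1], 1 - rgba[2]));
CGFloat c = 0, m = 0, y = 0;
// Guard against division by zero when the colour is pure black (k == 1).
if (k < 1.0) {
    c = (1 - rgba[0] - k) / (1 - k);
    m = (1 - rgba[1] - k) / (1 - k);
    y = (1 - rgba[2] - k) / (1 - k);
}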

For ICC-based color conversion, you can use the Little Color Management System. (I have just added all .c and .h files from the download archive to an iOS Xcode project. It compiled and ran the following code without problems.)
Remark: RGB and CMYK are device-dependent color spaces; Lab is a device-independent color space. Therefore, to convert from RGB to Lab, you have to choose a device-independent (or "calibrated") RGB color space for the conversion, for example sRGB.
Little CMS comes with built-in profiles for sRGB and Lab color spaces. A conversion from sRGB to Lab looks like this:
Create a color transformation:
cmsHPROFILE rgbProfile = cmsCreate_sRGBProfile();
cmsHPROFILE labProfile = cmsCreateLab4Profile(NULL);
cmsHTRANSFORM xform = cmsCreateTransform(rgbProfile, TYPE_RGB_FLT,
                                         labProfile, TYPE_Lab_FLT,
                                         INTENT_PERCEPTUAL, 0);
cmsCloseProfile(labProfile);
cmsCloseProfile(rgbProfile);
Convert colors:
float rgbValues[3];
// fill rgbValues array with input values ...
float labValues[3];
cmsDoTransform(xform, rgbValues, labValues, 1);
// labValues array contains output values.
Dispose of color transformation:
cmsDeleteTransform(xform);
Of course, the transformation would be created only once and used for all color conversions.
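In practice you would wrap this in a small helper that caches the transform. A rough sketch (the function name and the static caching are my own choices; it assumes the UIColor is in an RGB-compatible color space):
#include "lcms2.h"

// Hypothetical helper: converts a UIColor (assumed sRGB) to Lab values
// using Little CMS; the transform is created once and reused.
static void UIColorToLab(UIColor *color, float lab[3])
{
    static cmsHTRANSFORM xform = NULL;
    if (xform == NULL) {
        cmsHPROFILE rgbProfile = cmsCreate_sRGBProfile();
        cmsHPROFILE labProfile = cmsCreateLab4Profile(NULL);
        xform = cmsCreateTransform(rgbProfile, TYPE_RGB_FLT,
                                   labProfile, TYPE_Lab_FLT,
                                   INTENT_PERCEPTUAL, 0);
        cmsCloseProfile(labProfile);
        cmsCloseProfile(rgbProfile);
    }
    CGFloat r, g, b, a;
    [color getRed:&r green:&g blue:&b alpha:&a];
    float rgb[3] = { (float)r, (float)g, (float)b };
    cmsDoTransform(xform, rgb, lab, 1);
}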
For RGB to CMYK conversion you can also use Little CMS, but you have to provide an ICC-Profile, e.g. one from the free Adobe download page ICC profile downloads for Mac OS.
Code example for RGB to CMYK conversion:
float rgb[3]; // fill with input values (range 0.0 .. 1.0)
float cmyk[4]; // output values (range 0.0 .. 100.0)
cmsHPROFILE rgbProfile = cmsCreate_sRGBProfile();
// The CMYK profile is a resource in the application bundle:
NSString *cmykProfilePath = [[NSBundle mainBundle] pathForResource:@"YourCMYKProfile.icc" ofType:nil];
cmsHPROFILE cmykProfile = cmsOpenProfileFromFile([cmykProfilePath fileSystemRepresentation], "r");
cmsHTRANSFORM xform = cmsCreateTransform(rgbProfile, TYPE_RGB_FLT,
                                         cmykProfile, TYPE_CMYK_FLT,
                                         INTENT_PERCEPTUAL, 0);
cmsCloseProfile(cmykProfile);
cmsCloseProfile(rgbProfile);
cmsDoTransform(xform, rgb, cmyk, 1);
cmsDeleteTransform(xform);

To get the Lab values you need to convert the RGB values into XYZ values, which you can then convert into Lab values.
- (NSMutableArray *)convertRGBtoLABwithColor:(UIColor *)color {
//make variables to get rgb values
CGFloat red3;
CGFloat green3;
CGFloat blue3;
//get rgb of color
[color getRed:&red3 green:&green3 blue:&blue3 alpha:nil];
float red2 = (float)red3*255;
float blue2 = (float)blue3*255;
float green2 = (float)green3*255;
//first convert RGB to XYZ
// same values, from 0 to 1
red2 = red2/255;
green2 = green2/255;
blue2 = blue2/255;
// adjusting values
if(red2 > 0.04045)
{
red2 = (red2 + 0.055)/1.055;
red2 = pow(red2,2.4);
} else {
red2 = red2/12.92;
}
if(green2 > 0.04045)
{
green2 = (green2 + 0.055)/1.055;
green2 = pow(green2,2.4);
} else {
green2 = green2/12.92;
}
if(blue2 > 0.04045)
{
blue2 = (blue2 + 0.055)/1.055;
blue2 = pow(blue2,2.4);
} else {
blue2 = blue2/12.92;
}
red2 *= 100;
green2 *= 100;
blue2 *= 100;
//make x, y and z variables
float x;
float y;
float z;
// applying the matrix to finally have XYZ
x = (red2 * 0.4124) + (green2 * 0.3576) + (blue2 * 0.1805);
y = (red2 * 0.2126) + (green2 * 0.7152) + (blue2 * 0.0722);
z = (red2 * 0.0193) + (green2 * 0.1192) + (blue2 * 0.9505);
//then convert XYZ to LAB
x = x/95.047;
y = y/100;
z = z/108.883;
// adjusting the values
if(x > 0.008856)
{
x = powf(x,(1.0/3.0));
} else {
x = ((7.787 * x) + (16.0/116.0));
}
if(y > 0.008856)
{
y = pow(y,(1.0/3.0));
} else {
y = ((7.787 * y) + (16.0/116.0));
}
if(z > 0.008856)
{
z = pow(z,(1.0/3.0));
} else {
z = ((7.787 * z) + (16.0/116.0));
}
//make L, A and B variables
float l;
float a;
float b;
//finally have your l, a, b variables!!!!
l = ((116 * y) - 16);
a = 500 * (x - y);
b = 200 * (y - z);
NSNumber *lNumber = [NSNumber numberWithFloat:l];
NSNumber *aNumber = [NSNumber numberWithFloat:a];
NSNumber *bNumber = [NSNumber numberWithFloat:b];
//add them to an array to return.
NSMutableArray *labArray = [[NSMutableArray alloc] init];
[labArray addObject:lNumber];
[labArray addObject:aNumber];
[labArray addObject:bNumber];
return labArray;
}
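You can call it like this (assuming the method lives in the class you call it from):
NSMutableArray *lab = [self convertRGBtoLABwithColor:[UIColor redColor]];
float l = [[lab objectAtIndex:0] floatValue];
float a = [[lab objectAtIndex:1] floatValue];
float b = [[lab objectAtIndex:2] floatValue];
NSLog(@"L: %f a: %f b: %f", l, a, b);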

Related

OpenCV Mat image data structure

I have an image that has been processed through:
//UIImage to Mat
cv::Mat originalMat = [self cvMatFromUIImage:inputImage];
//Grayscale
cv::Mat grayMat;
cv::cvtColor(originalMat, grayMat, CV_RGB2GRAY);
//Blur
cv::Mat gaussMat;
cv::GaussianBlur( grayMat , gaussMat, cv::Size(9, 9), 2, 2 );
//Threshold
cv::Mat tMat;
cv::threshold(grayMat, tMat, 100, 255, cv::THRESH_BINARY);
Then I want to analyze (calculate the quantity of white and black points) the pixels that belong to a line. For instance: I have an image of 100x120 px and I want to check the line where x = 5 and y = 0..119, and vice versa x = 0..99, y = 5.
So I expect that the Mat stores x as Mat.cols and y as Mat.rows, but it looks like it saves the data in another way. For example, I've tried to change the color of the pixels that belong to the lines, but I didn't get 2 lines:
for( int x = 0; x < tMat.cols; x++ ){
tMat.at<cv::Vec4b>(5,x)[0] = 100;
}
for( int y = 0; y < tMat.rows; y++ ){
tMat.at<cv::Vec4b>(y,5)[0] = 100;
}
return [self UIImageFromCVMat:tMat];
result for white image:
Why didn't I get 2 lines? Is it possible to draw/check lines in a Mat directly? What if I want to check a line that is calculated via y = kx + b?
You are accessing the pixel values in the wrong way. You are working with an image that has only one channel, which is why you should access the pixel values like this:
for (int x = 0; x < tMat.cols; x++){
tMat.at<unsigned char>(5, x) = 100;
}
for (int y = 0; y < tMat.rows; y++){
tMat.at<unsigned char>(y, 5) = 100;
}
The Mat element's type is defined by two properties - the number of channels and the underlying type of data. If you do not know the meaning of those terms, I strongly suggest that you read the documentation for methods cv::Mat::type(), cv::Mat::channels() and cv::Mat::depth().
Two more examples:
mat.at<float>(x, y) = 1.0f; // if mat type is CV_32FC1
mat.at<cv::Vec3b>(x, y) = Vec3b(1, 2, 3); // if mat type is CV_8UC3
Probably an issue with your Mat data types. The output of threshold is a single channel image that is 8-bit or 32-bit (http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold), so you probably should not be setting values with Mat.at<Vec4b>[0].
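Once the pixels are accessed with the right element type, counting the white and black points along one of your lines is straightforward. A sketch for the row y = 5 of the thresholded (single-channel, 8-bit) image:
int white = 0, black = 0;
for (int x = 0; x < tMat.cols; x++) {
    unsigned char v = tMat.at<unsigned char>(5, x);   // row 5, column x
    if (v == 255) white++;
    else black++;   // the thresholded image only contains 0 and 255
}
// Equivalently, cv::countNonZero(tMat.row(5)) gives the white count directly,
// and tMat.cols minus that value gives the black count; use tMat.col(5) for a column.
The same (row, column) indexing applies when you walk an arbitrary line such as y = kx + b.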
Here's a function to return the type of your matrix. Usage is in the commented out part. Copied from How to find out what type of a Mat object is with Mat::type() in OpenCV.
std::string type2str(int type){
//string ty = type2str( comparisonResult.type() );
//printf("Matrix: %s %dx%d \n", ty.c_str(), comparisonResult.cols, comparisonResult.rows );
std::string r;
uchar depth = type & CV_MAT_DEPTH_MASK;
uchar chans = 1 + (type >> CV_CN_SHIFT);
switch ( depth ) {
case CV_8U: r = "8U"; break;
case CV_8S: r = "8S"; break;
case CV_16U: r = "16U"; break;
case CV_16S: r = "16S"; break;
case CV_32S: r = "32S"; break;
case CV_32F: r = "32F"; break;
case CV_64F: r = "64F"; break;
default: r = "User"; break;
}
r += "C";
r += (chans+'0');
return r;
}

Is there any PDF command, that scales rectangle coordinates?

I have an application that extracts text and rectangles from PDF files for further analysis. I use iTextSharp for extraction, and everything worked smoothly until I stumbled upon a document which has some strange table cell rectangles. The values in the drawing commands that I retrieve seem 10 times larger than the actual dimensions of the rendered rectangles.
Just an example :
2577 831.676 385.996 3.99609 re
At the same time, when viewing the document, all rectangles seem to fit correctly within the bounds of the document pages. My guess is that there should be some scaling command indicating that these values should be scaled down. Is that assumption right, or how is it otherwise possible that such large rectangles are rendered so that they stay inside the bounds of a page?
The pdf document is behind this link : https://www.dropbox.com/s/gyvon0dwk6a9cj0/prEVS_ISO_11620_KOM_et.pdf?dl=0
The code that handles extraction of dimensions from the PRStream is as follows:
private static List<PdfRect> GetRectsAndLinesFromStream(PRStream stream)
{
var streamBytes = PdfReader.GetStreamBytes(stream);
var tokenizer = new PRTokeniser(new RandomAccessFileOrArray(streamBytes));
List<string> newBuf = new List<string>();
List<PdfRect> rects = new List<PdfRect>();
List<string> allTokens = new List<string>();
float[,] ctm = null;
List<float[,]> ctms = new List<float[,]>();
//if current ctm has not yet been added to list
bool pendingCtm = false;
//format definition for string-> float conversion
var format = new System.Globalization.NumberFormatInfo();
format.NegativeSign = "-";
while (tokenizer.NextToken())
{
//Add them to our master buffer
newBuf.Add(tokenizer.StringValue);
if (
tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "re"
)
{
float startPointX = (float)double.Parse(newBuf[newBuf.Count - 5], format);
float startPointY = (float)double.Parse(newBuf[newBuf.Count - 4], format);
float width = (float)double.Parse(newBuf[newBuf.Count - 3], format);
float height = (float)double.Parse(newBuf[newBuf.Count - 2], format);
float endPointX = startPointX + width;
float endPointY = startPointY + height;
//if transformation is defined, correct coordinates
if (ctm!=null)
{
//extract parameters
float a = ctm[0, 0];
float b = ctm[0, 1];
float c = ctm[1, 0];
float d = ctm[1, 1];
float e = ctm[2, 0];
float f = ctm[2, 1];
//reverse transformation to get x and y from x' and y'
startPointX = (startPointX - startPointY * c - e) / a;
startPointY = (startPointY - startPointX * b - f) / d;
endPointX = (endPointX - endPointY * c - e) / a;
endPointY = (endPointY - endPointX * b - f) / d;
}
rects.Add(new PdfRect(startPointX, startPointY , endPointX , endPointY ));
}
//store current ctm
else if (tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "q")
{
if (ctm != null)
{
ctms.Add(ctm);
pendingCtm = false;
}
}
//fetch last ctm and remove it from list
else if (tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "Q")
{
if (ctms.Count > 0)
{
ctm = ctms[ctms.Count - 1];
ctms.RemoveAt(ctms.Count -1 );
}
}
else if (tokenizer.TokenType == PRTokeniser.TokType.OTHER && newBuf[newBuf.Count - 1] == "cm")
{
// x' = x*a + y*c + e ; y' = x*b + y*d + f
float a = (float)double.Parse(newBuf[newBuf.Count - 7], format);
float b = (float)double.Parse(newBuf[newBuf.Count - 6], format);
float c = (float)double.Parse(newBuf[newBuf.Count - 5], format);
float d = (float)double.Parse(newBuf[newBuf.Count - 4], format);
float e = (float)double.Parse(newBuf[newBuf.Count - 3], format);
float f = (float)double.Parse(newBuf[newBuf.Count - 2], format);
float[,] tempCtm = ctm;
ctm = new float[3, 3] {
{a,b,0},
{c,d,0},
{e,f,1}
};
//multiply matrices to form 1 transformation matrix
if (pendingCtm && tempCtm != null)
{
float[,] resultantCtm;
if (!TryMultiplyMatrix(tempCtm, ctm, out resultantCtm))
{
throw new InvalidOperationException("Invalid transform matrix");
}
ctm = resultantCtm;
}
//current CTM has not yet been saved to stack
pendingCtm = true;
}
}
return rects;
}
The command you are looking for is cm. Did you read The ABC of PDF with iText? The book isn't finished yet, but you can already download the first five chapters.
This is a screen shot of the table that shows the cm operator:
This is an example of 5 shapes that are created in the exact same way, using identical syntax:
They are added at different positions, even in a different size and shape, because of the change in the graphics state: the coordinate system was changed, and the shapes are rendered in that altered coordinate system.
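For example, if the page's content stream set a scaling CTM before those re operators (the 0.1 factor below is purely illustrative, not taken from your document), the device-space rectangle comes out ten times smaller. With "a b c d e f cm", PDF maps x' = a*x + c*y + e and y' = b*x + d*y + f:
// Illustration only: a hypothetical "0.1 0 0 0.1 0 0 cm" applied to
// "2577 831.676 385.996 3.99609 re".
float a = 0.1f, b = 0.0f, c = 0.0f, d = 0.1f, e = 0.0f, f = 0.0f;
float x = 2577.0f, y = 831.676f, w = 385.996f, h = 3.99609f;
float devX = a * x + c * y + e;   // 257.7
float devY = b * x + d * y + f;   // 83.1676
float devW = a * w;               // 38.5996  (scale-only CTM, no rotation/skew)
float devH = d * h;               // 0.399609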

Bezier path see if it crosses

I have a code that lets the user draw a shape, I'm using UIBezierPath for this. But I need to see if the shape crosses itself, for example like this: http://upload.wikimedia.org/wikipedia/commons/0/0f/Complex_polygon.svg
Then it's not a valid shape.
How can I find this?
Edit:
I still haven't solved this. I save all the points between the lines in the path in an array. Then I loop through the array and try to find whether any lines intersect. But it does not work; sometimes it says that there is an intersection when there isn't.
I think that the problem is somewhere in this method.
-(BOOL)pathIntersects:(double *)x:(double *)y {
int count = pathPoints.count;
CGPoint p1, p2, p3, p4;
for (int a=0; a<count; a++) {
//Line 1
if (a+1<count) {
p1 = [[pathPoints objectAtIndex:a] CGPointValue];
p2 = [[pathPoints objectAtIndex:a+1] CGPointValue];
}else{
return NO;
}
for (int b=0; b<count; b++) {
//Line 2
if (b+1<count) {
p3 = [[pathPoints objectAtIndex:b] CGPointValue];
p4 = [[pathPoints objectAtIndex:b+1] CGPointValue];
}else{
return NO;
}
if (!CGPointEqualToPoint(p1, p3) && !CGPointEqualToPoint(p2, p3) && !CGPointEqualToPoint(p4, p1) && !CGPointEqualToPoint(p4, p2)
&& !CGPointEqualToPoint(p1, p2) && !CGPointEqualToPoint(p3, p4)) {
if (LineIntersect(p1.x, p1.y, p2.x, p2.y, p3.x, p3.y, p4.x, p4.y, x, y)) {
return YES;
}
}
}
}
return NO;
}
This is the code I found to see if two lines intersect. It's in C, but it should work.
int LineIntersect(
double x1, double y1,
double x2, double y2,
double x3, double y3,
double x4, double y4,
double *x, double *y)
{
double mua,mub;
double denom,numera,numerb;
denom = (y4-y3) * (x2-x1) - (x4-x3) * (y2-y1);
numera = (x4-x3) * (y1-y3) - (y4-y3) * (x1-x3);
numerb = (x2-x1) * (y1-y3) - (y2-y1) * (x1-x3);
/* Are the lines coincident? */
if (ABS(numera) < 0.00001 && ABS(numerb) < 0.00001 && ABS(denom) < 0.00001) {
*x = (x1 + x2) / 2;
*y = (y1 + y2) / 2;
return(TRUE);
}
/* Are the lines parallel? */
if (ABS(denom) < 0.00001) {
*x = 0;
*y = 0;
return(FALSE);
}
/* Is the intersection along the segments? */
mua = numera / denom;
mub = numerb / denom;
if (mua < 0 || mua > 1 || mub < 0 || mub > 1) {
*x = 0;
*y = 0;
return(FALSE);
}
*x = x1 + mua * (x2 - x1);
*y = y1 + mua * (y2 - y1);
return(TRUE);
}
It depends on how complex the polygon drawn by the user can be and the number of points in the path. Ideally, there would be a point for all the vertices in the shape and nothing more. Get a CGPath from the UIBezierPath and use CGPathApply to hand the elements to a function, which adds each point to an array. Traverse the array with two for loops, one nested in the other, which checks each line segment against every line segment after it using a standard line-line intersection test. As soon as an intersection has been found, break from the loop. Or, if this were a convenience method, return a BOOL. That's the simplest way.
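For instance, collecting the points could look roughly like this (a sketch: the function name and the bezierPath variable are mine, it assumes the user-drawn path only contains move-to and line-to elements, and the __bridge casts assume ARC):
// Applier function: appends the endpoint of each move-to/line-to element.
static void collectPathPoints(void *info, const CGPathElement *element)
{
    NSMutableArray *points = (__bridge NSMutableArray *)info;
    if (element->type == kCGPathElementMoveToPoint ||
        element->type == kCGPathElementAddLineToPoint) {
        [points addObject:[NSValue valueWithCGPoint:element->points[0]]];
    }
}

// Usage:
NSMutableArray *pathPoints = [NSMutableArray array];
CGPathApply(bezierPath.CGPath, (__bridge void *)pathPoints, collectPathPoints);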
EDIT: Here's an example of a line-line intersection function which returns a BOOL telling you whether or not two segments cross. Pass in the two points that create the first segment followed by the two points that make the second segment. It was hastily modified from a piece of source code I found online quickly, but it works.
BOOL lineSegmentsIntersect(CGPoint L1P1, CGPoint L1P2, CGPoint L2P1, CGPoint L2P2)
{
float x1 = L1P1.x, x2 = L1P2.x, x3 = L2P1.x, x4 = L2P2.x;
float y1 = L1P1.y, y2 = L1P2.y, y3 = L2P1.y, y4 = L2P2.y;
float bx = x2 - x1;
float by = y2 - y1;
float dx = x4 - x3;
float dy = y4 - y3;
float b_dot_d_perp = bx * dy - by * dx;
if(b_dot_d_perp == 0) {
return NO;
}
float cx = x3 - x1;
float cy = y3 - y1;
float t = (cx * dy - cy * dx) / b_dot_d_perp;
if(t < 0 || t > 1) {
return NO;
}
float u = (cx * by - cy * bx) / b_dot_d_perp;
if(u < 0 || u > 1) {
return NO;
}
return YES;
}
You can use it like this.
if (lineSegmentsIntersect(lineOnePointOne,lineOnePointTwo,lineTwoPointOne,lineTwoPointTwo)){
//segments intersect
} else {
//segments did not intersect
}
It's up to you to create the double loop to check the correct segments against one another.
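The loop itself could be as simple as this (a sketch; it skips immediately adjacent segments, which always share an endpoint, and for a closed path you would also want to skip the first/last pair):
BOOL pathSelfIntersects(NSArray *points)
{
    NSUInteger count = points.count;
    for (NSUInteger i = 0; i + 1 < count; i++) {
        CGPoint a1 = [[points objectAtIndex:i] CGPointValue];
        CGPoint a2 = [[points objectAtIndex:i + 1] CGPointValue];
        // Start at i + 2 so a segment is never tested against itself
        // or against its neighbour (they share an endpoint).
        for (NSUInteger j = i + 2; j + 1 < count; j++) {
            CGPoint b1 = [[points objectAtIndex:j] CGPointValue];
            CGPoint b2 = [[points objectAtIndex:j + 1] CGPointValue];
            if (lineSegmentsIntersect(a1, a2, b1, b2)) {
                return YES;
            }
        }
    }
    return NO;
}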

2nd order IIR filter, coefficients for a butterworth bandpass (EQ)?

Important update: I already figured out the answers and put them in this simple open-source library: http://bartolsthoorn.github.com/NVDSP/ Check it out, it will probably save you quite some time if you're having trouble with audio filters in IOS!
^
I have created a (realtime) audio buffer (float *data) that holds a few sin(theta) waves with different frequencies.
The code below shows how I created my buffer, and I've tried to do a bandpass filter but it just turns the signals to noise/blips:
// Multiple signal generator
__block float *phases = nil;
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
float samplingRate = audioManager.samplingRate;
NSUInteger activeSignalCount = [tones count];
// Initialize phases
if (phases == nil) {
phases = new float[10];
for(int z = 0; z < 10; z++) {
phases[z] = 0.0;
}
}
// Multiple signals
NSEnumerator * enumerator = [tones objectEnumerator];
id frequency;
UInt32 c = 0;
while(frequency = [enumerator nextObject])
{
for (int i=0; i < numFrames; ++i)
{
for (int iChannel = 0; iChannel < numChannels; ++iChannel)
{
float theta = phases[c] * M_PI * 2;
if (c == 0) {
data[i*numChannels + iChannel] = sin(theta);
} else {
data[i*numChannels + iChannel] = data[i*numChannels + iChannel] + sin(theta);
}
}
phases[c] += 1.0 / (samplingRate / [frequency floatValue]);
if (phases[c] > 1.0) phases[c] = -1;
}
c++;
}
// Normalize data with active signal count
float signalMulti = 1.0 / (float(activeSignalCount) * (sqrt(2.0)));
vDSP_vsmul(data, 1, &signalMulti, data, 1, numFrames*numChannels);
// Apply master volume
float volume = masterVolumeSlider.value;
vDSP_vsmul(data, 1, &volume, data, 1, numFrames*numChannels);
if (fxSwitch.isOn) {
// H(s) = (s/Q) / (s^2 + s/Q + 1)
// http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt
// BW 2.0 Q 0.667
// http://www.rane.com/note170.html
//The order of the coefficients is B1, B2, A1, A2, B0.
float Fs = samplingRate;
float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
float Q = 0.50f;
float alpha = sin(omega)/(2*Q); // sin(w0)/(2*Q)
// Through H
for (int i=0; i < numFrames; ++i)
{
for (int iChannel = 0; iChannel < numChannels; ++iChannel)
{
data[i*numChannels + iChannel] = (data[i*numChannels + iChannel]/Q) / (pow(data[i*numChannels + iChannel],2) + data[i*numChannels + iChannel]/Q + 1);
}
}
float b0 = alpha;
float b1 = 0;
float b2 = -alpha;
float a0 = 1 + alpha;
float a1 = -2*cos(omega);
float a2 = 1 - alpha;
float *coefficients = (float *) calloc(5, sizeof(float));
coefficients[0] = b1;
coefficients[1] = b2;
coefficients[2] = a1;
coefficients[3] = a2;
coefficients[4] = b0;
vDSP_deq22(data, 2, coefficients, data, 2, numFrames);
free(coefficients);
}
// Measure dB
[self measureDB:data:numFrames:numChannels];
}];
My aim is to make a 10-band EQ for this buffer using vDSP_deq22. The syntax of the method is:
vDSP_deq22(<float *vDSP_A>, <vDSP_Stride vDSP_I>, <float *vDSP_B>, <float *vDSP_C>, <vDSP_Stride vDSP_K>, <vDSP_Length __vDSP_N>)
See: http://developer.apple.com/library/mac/#documentation/Accelerate/Reference/vDSPRef/Reference/reference.html#//apple_ref/doc/c_ref/vDSP_deq22
Arguments:
float *vDSP_A is the input data
float *vDSP_B are 5 filter coefficients
float *vDSP_C is the output data
I have to make 10 filters (10 times vDSP_deq22). Then I set the gain for every band and combine them back together. But what coefficients do I feed every filter? I know vDSP_deq22 is a 2nd order (butterworth) IIR filter, but how do I turn this into a bandpass?
Now I have three questions:
a) Do I have to de-interleave and interleave the audio buffer? I know setting the stride to 2 just filters one channel, but how do I filter the other? A stride of 1 will process both channels as one.
b) Do I have to transform/process the buffer before it enters the vDSP_deq22 method? If so, do I also have to transform it back to normal?
c) What values of the coefficients should I set to the 10 vDSP_deq22s?
I've been trying for days now but I haven't been able to figure this one out, please help me out!
Your omega value needs to be normalised, i.e. expressed as a fraction of Fs. It looks like you left out the f0 when you calculated omega, which will make alpha wrong too:
float omega = 2*M_PI*Fs; // w0 = 2*pi*f0/Fs
should probably be:
float omega = 2*M_PI*f0/Fs; // w0 = 2*pi*f0/Fs
where f0 is the centre frequency in Hz.
For your 10 band equaliser you'll need to pick 10 values of f0, spaced logarithmically, e.g. 25 Hz, 50 Hz, 100 Hz, 200 Hz, 400 Hz, 800 Hz, 1.6 kHz, 3.2 kHz, 6.4 kHz, 12.8 kHz.
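For each band, the cookbook band-pass coefficients then follow directly from f0 and Q. A minimal sketch (the helper function is mine; I'm assuming vDSP_deq22 wants the five values in the order {b0, b1, b2, a1, a2}, each normalised by a0):
#include <Accelerate/Accelerate.h>
#include <math.h>

// RBJ cookbook constant-Q band-pass coefficients for one vDSP_deq22 section.
// f0 = centre frequency in Hz, Fs = sampling rate in Hz, Q = quality factor.
static void bandpassCoefficients(float f0, float Fs, float Q, float coeffs[5])
{
    float omega = 2.0f * M_PI * f0 / Fs;
    float alpha = sinf(omega) / (2.0f * Q);
    float b0 =  alpha;
    float b1 =  0.0f;
    float b2 = -alpha;
    float a0 =  1.0f + alpha;
    float a1 = -2.0f * cosf(omega);
    float a2 =  1.0f - alpha;
    // Assumed vDSP_deq22 order: {b0, b1, b2, a1, a2}, all divided by a0.
    coeffs[0] = b0 / a0;
    coeffs[1] = b1 / a0;
    coeffs[2] = b2 / a0;
    coeffs[3] = a1 / a0;
    coeffs[4] = a2 / a0;
}
Each of the 10 bands would get its own coefficient array and its own vDSP_deq22 pass; as I understand vDSP_deq22, the first two samples of the input and output vectors act as the filter history, so they need to be carried over between successive buffers.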

Display YUV in OpenGL

I am having trouble displaying a raw YUV file that it is in NV12 format.
I can display a selected frame, however, it is still mainly in black and white with certain shades of pink and green.
Here is what my output looks like:
Anyways, here is how my program works. (This is done in Cocoa/Objective-C, but I need your expert advice on the program algorithm, not on syntax.)
Prior to program execution, the YUV file is stored in a binary file named "test.yuv". The file is in NV12 format, meaning the Y plane is stored first and then the UV plane is interleaved. My file extraction has no problem because I did a lot of testing on it.
During initialization, I create a lookup table that will convert binary 8-bit values (bytes/chars) into their respective Y, U, V float values.
For the Y plane this is my code
-(void)createLookupTableY //creates a lookup table for converting a single byte into a float between 0 and 1
{
NSLog(@"YUVFrame: createLookupTableY");
lookupTableY = new float [256];
for (int i = 0; i < 256; i++)
{
lookupTableY[i] = (float) i /255;
//NSLog(@"lookupTableY[%d]: %f",i,lookupTableY[i]);//prints out the value of each float
}
}
The U Plane lookup table
-(void)createLookupTableU //creates a lookup table for converting a single byte into a float between 0 and 1
{
NSLog(@"YUVFrame: createLookupTableU");
lookupTableU = new float [256];
for (int i = 0; i < 256; i++)
{
lookupTableU[i] = -0.436 + (float) i / 255* (0.436*2);
NSLog(@"lookupTableU[%d]: %f",i,lookupTableU[i]);//prints out the value of each float
}
}
And the V look-up table
-(void)createLookupTableV //creates a lookup table for converting a single byte into a float between 0 and 1
{
NSLog(@"YUVFrame: createLookupTableV");
lookupTableV = new float [256];
for (int i = 0; i < 256; i++)
{
lookupTableV[i] = -0.615 + (float) i /255 * (0.615*2);
NSLog(@"lookupTableV[%d]: %f",i,lookupTableV[i]);//prints out the value of each float
}
}
After this point, I extract the Y & UV planes and store them in two buffers, yBuffer & uvBuffer.
At this point, I attempt to convert the YUV data and store it in an RGB buffer array called "frameImage":
-(void)sortAndConvert//sort the extracted frame data into an array of float
{
NSLog(@"YUVFrame: sortAndConvert");
int frameImageCounter = 0;
int pixelCounter = 0;
for (int y = 0; y < YUV_HEIGHT; y++)//traverse through the frame's height
{
for ( int x = 0; x < YUV_WIDTH; x++)//traverse through the frame's width
{
float Y = lookupTableY [yBuffer [y*YUV_WIDTH + x] ];
float U = lookupTableU [uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) * 2 ] ];
float V = lookupTableV [uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) * 2 + 1] ];
float RFormula = Y + 1.13983f * V;
float GFormula = Y - 0.39465f * U - 0.58060f * V;
float BFormula = Y + 2.03211f * U;
frameImage [frameImageCounter++] = [self clampValue:RFormula];
frameImage [frameImageCounter++] = [self clampValue:GFormula];
frameImage [frameImageCounter++] = [self clampValue:BFormula];
}
}
}
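The clampValue: helper isn't shown above; for this float-texture path it only needs to clip each component to [0, 1] (the later GL_UNSIGNED_BYTE version clamps to [0, 255] instead). A hypothetical minimal version:
// Hypothetical clamp helper: clips an RGB component to the [0, 1] range
// expected by the GL_FLOAT texture upload below.
- (float)clampValue:(float)value
{
    if (value < 0.0f) return 0.0f;
    if (value > 1.0f) return 1.0f;
    return value;
}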
Then I tried to draw the image in OpenGL:
-(void)drawFrame:(int )x
{
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, YUV_WIDTH, YUV_HEIGHT, 0, GL_RGB, GL_FLOAT, frameImage);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glRotatef( 180.0f, 1.0f, 0.0f, 0.0f );
glBegin( GL_QUADS );
glTexCoord2d(0.0,0.0); glVertex2d(-1.0,-1.0);
glTexCoord2d(1.0,0.0); glVertex2d(+1.0,-1.0);
glTexCoord2d(1.0,1.0); glVertex2d(+1.0,+1.0);
glTexCoord2d(0.0,1.0); glVertex2d(-1.0,+1.0);
glEnd();
glFlush();
}
So basically this is my program in a nutshell. Essentially, I read the binary YUV file, store all the data in a char array buffer, and then translate these values into their respective YUV float values.
This is where I think the error might be: in the Y lookup table I standardize the Y plane to [0, 1], in the U plane I standardize the values to [-0.436, 0.436], and in the V plane I standardize them to [-0.615, 0.615]. I did this because those are the YUV value ranges according to Wikipedia.
And the YUV to RGB formula is the same formula from Wikipedia. I have also tried various other formulas, and this is the only formula that gives a rough outline of the frame. Can anyone venture a guess as to why my program is not correctly displaying the YUV frame data? I think it is something to do with my standardization technique, but it seems alright to me.
I have done a lot of testing, and I am 100% certain that the error is caused by my lookup tables. I don't think the formulas I use to build them are correct.
A note to everyone who is reading this: for the longest time, my frame was not displaying properly because I was not able to extract the frame data correctly. When I first started to program, I was under the impression that in a clip of, say, 30 frames, all 30 Y planes are written first in the data file, followed by the UV planes. What I found out through trial and error was that for a YUV data file, specifically NV12, the data is stored frame by frame in the following fashion:
Y(1) UV(1) Y(2) UV(2) ... ...
@nschmidt
I changed my code to what you suggested:
float U = lookupTableU [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 ] ];
float V = lookupTableV [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 + 1] ];
and this is the result that I get:
Here is the print line from the console. I am printing out the Y, U, V values and the RGB values that are being translated and stored in the frameImage array:
YUV:[0.658824,-0.022227,-0.045824] RGBFinal:[0.606593,0.694201,0.613655]
YUV:[0.643137,-0.022227,-0.045824] RGBFinal:[0.590906,0.678514,0.597969]
YUV:[0.607843,-0.022227,-0.045824] RGBFinal:[0.555612,0.643220,0.562675]
YUV:[0.592157,-0.022227,-0.045824] RGBFinal:[0.539926,0.627534,0.546988]
YUV:[0.643137,0.025647,0.151941] RGBFinal:[0.816324,0.544799,0.695255]
YUV:[0.662745,0.025647,0.151941] RGBFinal:[0.835932,0.564406,0.714863]
YUV:[0.690196,0.025647,0.151941] RGBFinal:[0.863383,0.591857,0.742314]
Update July 13, 2009
The problem was finally solved thanks to the recommendation from nschmidt . It turns out that my YUV file was actually in YUV 4:1:1 format. I was originally told that it was in YUV NV12 format. Anyways, I would like to share my results with you.
Here is output
and my code for decoding is as follows:
float Y = (float) yBuffer [y*YUV_WIDTH + x] ;
float U = (float) uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) ] ;
float V = (float) uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) + UOffset] ;
float RFormula = (1.164*(Y-16) + (1.596* (V - 128) ));
float GFormula = ((1.164 * (Y - 16)) - (0.813 * ((V) - 128)) - (0.391 * ((U) - 128)));
float BFormula = ((1.164 * (Y - 16)) + (2.018 * ((U) - 128)));
frameImage [frameImageCounter] = (unsigned char)( (int)[self clampValue:RFormula]);
frameImageCounter ++;
frameImage [frameImageCounter] = (unsigned char)((int)[self clampValue:GFormula]);
frameImageCounter++;
frameImage [frameImageCounter] = (unsigned char)((int) [self clampValue:BFormula]);
frameImageCounter++;
GLuint texture;
glGenTextures(1, &texture);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, YUV_WIDTH, YUV_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, frameImage);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE_SGIS);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE_SGIS);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_LINEAR);
glRotatef( 180.0f, 1.0f, 0.0f, 0.0f );
glBegin( GL_QUADS );
glTexCoord2d(0.0,0.0); glVertex2d(-1.0,-1.0);
glTexCoord2d(1.0,0.0); glVertex2d(+1.0,-1.0);
glTexCoord2d(1.0,1.0); glVertex2d(+1.0,+1.0);
glTexCoord2d(0.0,1.0); glVertex2d(-1.0,+1.0);
glEnd();
NSLog(@"YUVFrameView: drawRect complete");
glFlush();
Essentially, I used NSData for the raw file extraction, stored in a char array buffer. For the YUV to RGB conversion, I used the above formula; afterwards, I clamped the values to [0, 255]. Then I used a 2D texture in OpenGL for display.
So if you are converting YUV to RGB in the future, use the formula from above. If you are using the YUV to RGB conversion formula from the earlier post, then you need to upload the texture as GL_FLOAT, since the RGB values are clamped between [0, 1].
Next try :)
I think your uv buffer is not interleaved. It looks like the U values come first and then the array of V values. Changing the lines to
unsigned int voffset = YUV_HEIGHT * YUV_WIDTH / 2;
float U = lookupTableU [uvBuffer [ y * (YUV_WIDTH / 2) + x/2] ];
float V = lookupTableV [uvBuffer [ voffset + y * (YUV_WIDTH / 2) + x/2] ];
might indicate if this is really the case.
I think you're addressing U and V values incorrectly. Rather than:
float U = lookupTableU [uvBuffer [ ((y / 2) * (x / 2) + (x/2)) * 2 ] ];
float V = lookupTableV [uvBuffer [ ((y / 2) * (x / 2) + (x/2)) * 2 + 1] ];
It should be something along the lines of
float U = lookupTableU [uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) * 2 ] ];
float V = lookupTableV [uvBuffer [ ((y / 2) * (YUV_WIDTH / 2) + (x/2)) * 2 + 1] ];
The picture looks like you have 4:1:1 format. You should change your lines to
float U = lookupTableU [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 ] ];
float V = lookupTableV [uvBuffer [ (y * (YUV_WIDTH / 4) + (x/4)) * 2 + 1] ];
Maybe you can post the result to see what else is wrong. I find it always hard to think about; it's much easier to approach this iteratively.