PDF generated from PostScript vs. Quartz 2D - Objective-C

I'm working on a program that outputs needlework patterns as PDF files using Quartz 2D and Objective-C. I've used another program, coded in Python, which outputs PostScript files that are converted to PDF when I open them in Preview. Since the second app is open source, I've been able to check that the settings I use to lay out my PDF are the same, specifically the size of the squares and the gap sizes between them.
In the image below, the output of the other program is on the left and mine is on the right, both at actual size. The problem I'm having is that at actual size the gap lines in my output are intermittent, while in the other output all the gaps can be seen. I'm wondering if anyone knows of a rendering difference with PostScript files that would allow for this. If I zoom in on my output the gaps show up, but I don't understand why there would be this difference.
The squares are set to be 8 pixels wide and tall, with a 1-pixel gap between them, in both applications, plus 2-pixel-wide gaps every 10 squares, and mine is set not to use antialiasing. With my output I've tried drawing directly to a CGPDFContext, and drawing to a CGLayerRef and then drawing the layer to the PDF context, but I get the same result. I'm using integer values for positioning the layout, and I'm pretty sure I've avoided placing squares at fractional pixel positions.
I have also tried drawing the output to a CGBitmapContext and then drawing the resulting bitmap to the PDF context, but zooming in on that gives terrible artifacts since it is then a raster being magnified.
A last difference I've noted is that the file size of the PostScript-generated PDF is much smaller than the one I make. I'm thinking that might have to do with the paths I draw, since the documentation says drawing to a PDF context records the drawing as a series of PDF drawing commands written to a file, which I imagine takes up quite a bit of space compared to just displaying an image.
I have included the code I use to generate my PDFs below in case it is helpful, but I'm really just wondering whether there is a rendering difference between PostScript and Quartz that could explain these differences, and whether there is a way to make my output match up.
(The uploader says I need at least 10 reputation to post images, but here is a link: http://i.stack.imgur.com/nr588.jpg. Again, the PostScript output is on the left, my Quartz output is on the right, and in my output the gridlines are intermittent.)
-(void)makePDF:(NSImage*)image withPixelArray:(unsigned char *)rawData{
NSString *currentUserHomeDirectory = NSHomeDirectory();
currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingString:@"/Desktop/"];
currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingString:[image name]];
currentUserHomeDirectory = [currentUserHomeDirectory stringByAppendingPathExtension:@"pdf"];
CGContextRef pdfContext;
CFStringRef path;
CFURLRef url;
int width = 792;
int height = 612;
CFMutableDictionaryRef myDictionary = NULL;
CFMutableDictionaryRef pageDictionary = NULL;
const char *filename = [currentUserHomeDirectory UTF8String];
path = CFStringCreateWithCString (NULL, filename,
kCFStringEncodingUTF8);
url = CFURLCreateWithFileSystemPath (NULL, path,
kCFURLPOSIXPathStyle, 0);
CFRelease (path);
myDictionary = CFDictionaryCreateMutable(NULL, 0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CGRect pageRect = CGRectMake(0, 0, width, height);
pdfContext = CGPDFContextCreateWithURL (url, &pageRect, myDictionary);
const CGFloat whitePoint[3]= {0.95047, 1.0, 1.08883};
const CGFloat blackPoint[3]={0,0,0};
const CGFloat gammavalues[3] = {2.2,2.2,2.2};
const CGFloat matrix[9] = {0.4124564, 0.3575761, 0.1804375, 0.2126729, 0.7151522, 0.072175, 0.0193339, 0.119192, 0.9503041};
CGColorSpaceRef myColorSpace = CGColorSpaceCreateCalibratedRGB(whitePoint, blackPoint, gammavalues, matrix);
CGContextSetFillColorSpace (
pdfContext,
myColorSpace
);
int annotationNumber =0;
int match=0;
CFRelease(myDictionary);
CFRelease(url);
pageDictionary = CFDictionaryCreateMutable(NULL, 0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDataRef boxData = CFDataCreate(NULL,(const UInt8 *)&pageRect, sizeof (CGRect));
CFDictionarySetValue(pageDictionary, kCGPDFContextMediaBox, boxData);
int m = 0;
int sidestep = 0;
int downstep = 0;
int maxc = 0;
int maxr = 0;
int columnsPerPage = 70;
int rowsPerPage = 60;
int symbolSize = 8;
int gapSize=1;
CGContextSetShouldAntialias(pdfContext, NO);
int pages = ceil([image size].width/columnsPerPage) * ceil([image size].height/rowsPerPage);
for (int g=0; g<pages; g++) {
int offsetX = 32;
int offsetY = 32;
if (sidestep == ceil([image size].width/columnsPerPage)-1) {
maxc=[image size].width-sidestep*columnsPerPage;
}else {
maxc=columnsPerPage;
}
if (downstep == ceil([image size].height/rowsPerPage)-1) {
maxr=[image size].height-downstep*rowsPerPage;
}else {
maxr=rowsPerPage;
}
CGPDFContextBeginPage (pdfContext, pageDictionary);
CGContextTranslateCTM(pdfContext, 0.0, 612);
CGContextScaleCTM(pdfContext, 1.0, -1.0);
CGContextSetShouldAntialias(pdfContext, NO);
int r=0;
while (r<maxr){
int c=0;
while (c<maxc){
m = sidestep*columnsPerPage+c+downstep*[image size].width*rowsPerPage+r*[image size].width;
//Reset offsetX
if (c==0) {
offsetX=32;
}
//Increase offset for gridlines
if (c==0 && r%10==0&&r!=0) {
offsetY+=2;
}
if (c%10==0&&c!=0) {
offsetX+=2;
}
//DRAW SQUARES
CGContextSetRGBFillColor (pdfContext, (double)rawData[m*4]/255.,(double) rawData[m*4+1]/255., (double)rawData[m*4+2]/255., 1);
CGContextFillRect (pdfContext, CGRectMake (c*(symbolSize+gapSize)+offsetX, r*(symbolSize+gapSize)+offsetY, symbolSize, symbolSize ));
if ([usedColorsPaths count]!=0) {
for (int z=0; z<[usedColorsPaths count]; z++) {
if ([[[usedColorsPaths allKeys] objectAtIndex:z] isEqualToString:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]]) {
match=1;
if (rawData[m*4+3] == 0) {
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize+4 :0]);
}
else{
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY),[Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]] intValue] :symbolSize :0]);
}
}
break;
}
}
if (match==0) {
if (rawData[m*4+3] == 0) {
[usedColorsPaths setObject:[NSNumber numberWithInt:455] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize+4 :0]);
}
else{
[usedColorsPaths setObject:[NSNumber numberWithInt:annotationNumber] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize :0]);
}
}
annotationNumber++;
if (annotationNumber==9) {
annotationNumber=0;
}
}
match=0;
}
if ([usedColorsPaths count]==0) {
if (rawData[m*4+3] == 0) {
[usedColorsPaths setObject:[NSNumber numberWithInt:455] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX-2, r*(symbolSize+1)+offsetY-2), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize+4 :0]);
}
else{
[usedColorsPaths setObject:[NSNumber numberWithInt:annotationNumber] forKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4], rawData[m*4+1], rawData[m*4+2]]];
CGContextDrawLayerAtPoint (pdfContext, CGPointMake(c*(symbolSize+1)+offsetX, r*(symbolSize+1)+offsetY), [Anotations colorAnnotations:pdfContext :[[usedColorsPaths objectForKey:[NSString stringWithFormat:@"%i,%i,%i",rawData[m*4],rawData[m*4+1],rawData[m*4+2]]]intValue] :symbolSize :0]);
}
}
annotationNumber++;
}
c++;
}
r++;
}
sidestep++;
if (sidestep == ceil([image size].width/columnsPerPage)) {
sidestep=0;
downstep+=1;
}
CGContextSaveGState(pdfContext);
CGPDFContextEndPage (pdfContext);
}
CGContextRelease(pdfContext);
CFRelease(pageDictionary);
CFRelease(boxData);}

I would need to see both PDF files to be able to make any judgement about why the sizes are different, or why the line spacing is intermittent, but here are a few points:
The most likely reason why the files are different sizes is that one PDF file has the content stream (the drawing operations you refer to) compressed, while the other does not.
In general emitting a sequence of drawing operations is more compact than including a bitmap image, unless the image resolution is very low. An RGB image takes 3 bytes for every image sample. If you think about a 1 inch square image at 300 dpi that's 300x300x3 bytes, or 270,000 bytes. If the image was all one colour (see example below) I could describe it in PDF drawing operations in 22 bytes.
You can't specify the size of the squares, or any of the other graphic features, in pixels. PDF is a vector-based, scalable format, not a bitmap format. I don't work on a Mac so I can't comment on your sample code, but I suspect you are confusing the media width and height (specified in points) with pixels; these are not the same. The width and height describe a media size; there are no pixels involved until the PDF file is rendered to a bitmap, at which point the resolution of the device determines how many pixels are in each point.
Let's consider a PDF one inch square; that would have a width of 72 and a height of 72. I'll fill that rectangle with pure red; the PDF operations for that would be:
1 0 0 rg
0 0 72 72 re
f
So that sets the non-stroking colour to RGB (1, 0, 0), constructs a rectangle starting at 0, 0 (bottom left) and extending 72 points wide and 72 points high (one inch in each direction), and then fills it with that colour.
If I view that on screen here on my PC, that one inch square is rendered as 96 pixels by 96 pixels. If I view it on an iPad with a retina display, the square is rendered as 264 pixels by 264 pixels. Finally, if I print it to my laser printer, the square is rendered as 600 pixels by 600 pixels. The PDF content hasn't changed, but the number of pixels certainly has. A square is too simple of course; I could have used a circle instead, and obviously the higher-resolution devices would render smoother curves. If I had used an image instead, the smoothness of the curve would be 'baked in': the device rendering the PDF can't alter it when the resolution changes, all it can do is discard image samples to render down, or interpolate new ones to render up. That looks jagged when you scale down and fuzzy when you scale up. The vector representation remains smooth, limited only by the current resolution.
The point of PDF is that the PDF isn't limited to one resolution, it can print to all of them and the output should be the same (as far as possible) on each device.
Now I suspect the problem is that you are "using integer values for positioning the layout"; you can't do that and get correct (i.e. expected) results. You should be using real numbers for the layout, which will give you finer control over the position. Remember you are not addressing individual pixels, you are positioning graphics in a coordinate system; the resolution only comes into play when rendering (i.e. viewing or printing) the PDF file. You need to put aside concerns over pixels and just focus on the positioning.
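To make the point concrete, here is a minimal sketch (not the poster's code) of laying the grid out with CGFloat values; rows, cols and pdfContext stand in for the question's own variables:
// Sketch only: express every position in real (point) coordinates rather than
// integer "pixel" coordinates. rows, cols and pdfContext are placeholders.
CGFloat symbolSize = 8.0;
CGFloat gapSize = 1.0;
CGFloat offsetX = 32.0;
CGFloat offsetY = 32.0;
for (NSInteger r = 0; r < rows; r++) {
    for (NSInteger c = 0; c < cols; c++) {
        CGFloat x = offsetX + c * (symbolSize + gapSize);
        CGFloat y = offsetY + r * (symbolSize + gapSize);
        CGContextFillRect(pdfContext, CGRectMake(x, y, symbolSize, symbolSize));
    }
}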

So, after messing with it for another day, I think I found out that my problem was turning off antialiasing. I thought I wanted sharper drawing, but since PDF contains vector graphics, the antialiasing is fine and zooming in on the graphics keeps them sharp.
The first thing I did was in Preview: I went to Preview > Preferences > PDF and selected "Define 100% scale as: 1 point equals 1 screen pixel". Doing this with my antialiasing turned off resulted in my image being displayed as I wanted, but zooming in on it, it seemed like Preview had difficulty deciding when to draw the 1-pixel gaps.
I then altered my code by deleting the calls that turn off antialiasing and my output renders perfectly, so enabling antialiasing fixed my problem. It's pretty embarrassing that this took me three days to figure out, but I'm glad for the simple fix.
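In code terms the change amounted to nothing more than removing the CGContextSetShouldAntialias(pdfContext, NO) calls shown in the question, or, equivalently:
// Leave antialiasing at the context default, or turn it on explicitly.
CGContextSetShouldAntialias(pdfContext, YES);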

Related

vImage not putting channels correctly back together

I tried to extract all 3 channels from an image with vImageConvert_RGB888toPlanar8 and then put them back together with vImageConvert_Planar8toRGB888 but the image gets totally messed up. Why is that?
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;
vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if(rPixelBuffer == NULL)
{
NSLog(#"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;
vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if(gPixelBuffer == NULL)
{
NSLog(#"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;
vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if(bPixelBuffer == NULL)
{
NSLog(#"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;
vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);
size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if(result == kvImageNoError)
{
//TODO: If you need color matching, use an appropriate colorspace here
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(dataProvider);
image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
Working with vImage means working with pixels only. So you must never use the size of an image (or imageRep); you use only pixelsWide and pixelsHigh. Replace all size.width with pixelsWide and all size.height with pixelsHigh. Apple has example code for vImage that uses size values! Don't believe it! Not all of Apple's example code is correct.
The size of an image or imageRep determines how big the image shall be drawn on the screen (or a printer). Size values have the dimension of a length, with units such as metres, centimetres, inches or (as in Cocoa) 1/72 inch, i.e. points. They are represented as float values.
pixelsWide and pixelsHigh have no dimension and no unit (they are simply counts) and are represented as integer values.
There may be more bugs in your code, but the first step should be to replace all size values.
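As a minimal sketch of that advice (assuming, purely for illustration, that the source is available as an NSBitmapImageRep named rep and that Accelerate.framework is imported):
// Size the planar destination buffers from pixel counts, never from -size.
NSInteger pixelsWide = [rep pixelsWide];
NSInteger pixelsHigh = [rep pixelsHigh];

vImage_Buffer rBuffer;
rBuffer.width = (vImagePixelCount)pixelsWide;
rBuffer.height = (vImagePixelCount)pixelsHigh;
rBuffer.rowBytes = (size_t)pixelsWide; // one byte per sample in a Planar8 plane
rBuffer.data = malloc((size_t)(pixelsWide * pixelsHigh));
// Likewise, the rowBytes of the interleaved source buffer should come from
// [rep bytesPerRow], not from a value derived from the size in points.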
Strictly speaking, you want kCGBitmapByteOrderDefault instead of kCGBitmapByteOrder32Big. 32Big doesn't make much sense for a 24 bit pixel format.
This seems like a weak link:
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
Check to see it is the right number.
A picture of the output would help diagnose the problem. Also check the console to see if CG is giving you any spew. Did you try vImageCreateCGImageFromBuffer() with kvImagePrintDiagnosticsToConsole to see if it has anything to say?
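For reference, a rough sketch of that diagnostic call (it relies on the vImage/Core Graphics interop API from OS X 10.9; the format below assumes the plain 24-bit RGB layout of destinationBuffer):
// Let vImage build the CGImage itself and print any complaints to the console.
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 24,
    .colorSpace = rgb,
    .bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone
};
vImage_Error err = kvImageNoError;
CGImageRef check = vImageCreateCGImageFromBuffer(&destinationBuffer, &format,
                                                 NULL, NULL,
                                                 kvImagePrintDiagnosticsToConsole, &err);
if (check != NULL) CGImageRelease(check);
CGColorSpaceRelease(rgb);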

Detect most black pixel on an image - objective-c iOS

I have an image! It's been so long since I've done pixel detection, I remember you have to convert the pixels to an array somehow and then find the width of the image to find out when the pixels reach the end of a row and go to the next one and ahh, lots of complex stuff haha! Anyways I now have no clue how to do this anymore but I need to detect the left-most darkest pixel's x&y coordinates of my image named "image1"... Any good starting places?
Go to your bookstore, find a book called "iOS Developer's Cookbook" by Erica Sadun. Go to page 378-ish and there are methods for pixel detection there. You can look in this array of RGB values and run a for loop to find the pixel that has the smallest sum of its R, G and B values (each will be 0-255); that will give you the pixel closest to black.
I can also post the code if needed. But the book is the best source as it gives methods and explanations.
These are mine with some changes. The method name remains the same. All I changed was the image which basically comes from an image picker.
-(UInt8 *) createBitmap{
if (!self.imageCaptured) {
NSLog(#"Error: There has not been an image captured.");
return nil;
}
//create bitmap for the image
UIImage *myImage = self.imageCaptured;//image name for test pic
CGContextRef context = CreateARGBBitmapContext(myImage.size);
if(context == NULL) return NULL;
CGRect rect = CGRectMake(0.0f/*start width*/, 0.0f/*start*/, myImage.size.width /*width bound*/, myImage.size.height /*height bound*/); //original
// CGRect rect = CGRectMake(myImage.size.width/2.0 - 25.0 /*start width*/, myImage.size.height/2.0 - 25.0 /*start*/, myImage.size.width/2.0 + 24.0 /*width bound*/, myImage.size.height/2.0 + 24.0 /*height bound*/); //test rectangle
CGContextDrawImage(context, rect, myImage.CGImage);
UInt8 *data = CGBitmapContextGetData(context);
CGContextRelease(context);
return data;
}
CGContextRef CreateARGBBitmapContext (CGSize size){
//Create new color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL) {
fprintf(stderr, "Error allocating color space\n");
return NULL;
}
//Allocate memory for bitmap data
void *bitmapData = malloc(size.width*size.height*4);
if(bitmapData == NULL){
fprintf(stderr, "Error: memory not allocated\n");
CGColorSpaceRelease(colorSpace);
return NULL;
}
//Build an 8-bit per channel context
CGContextRef context = CGBitmapContextCreate(bitmapData, size.width, size.height, 8, size.width*4, colorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
if (context == NULL) {
fprintf(stderr, "Error: Context not created!");
free(bitmapData);
return NULL;
}
return context;
}
NSUInteger blueOffset(NSUInteger x, NSUInteger y, NSUInteger w){
return y*w*4 + (x*4+3);
}
NSUInteger redOffset(NSUInteger x, NSUInteger y, NSUInteger w){
return y*w*4 + (x*4+1);
}
The method at the bottom, redOffset, will get you the red value in the ARGB (alpha-red-green-blue) layout. To change which channel you are looking at, change the constant added to x*4 in the offset function: 0 finds alpha, 1 (as written) finds red, 2 finds green, and 3 finds blue. This works because it just indexes into the array made by the methods above, and the constant added to x*4 selects the channel within the pixel. Essentially, use such methods for the three colours (red, green and blue) and take their sum for each pixel. Whichever pixel has the lowest combined value of red, green and blue is the closest to black.
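Putting the pieces together, a rough sketch of the scan, assuming the createBitmap method and offset helpers above (green is read at redOffset(...) + 1, since green sits between red and blue in ARGB):
// Rough sketch: scan column by column so the first minimum found is the
// left-most darkest pixel.
UInt8 *data = [self createBitmap];
if (data == NULL) return;
NSUInteger w = (NSUInteger)self.imageCaptured.size.width;
NSUInteger h = (NSUInteger)self.imageCaptured.size.height;
NSUInteger bestX = 0, bestY = 0;
NSUInteger bestSum = NSUIntegerMax;
for (NSUInteger x = 0; x < w; x++) {
    for (NSUInteger y = 0; y < h; y++) {
        NSUInteger sum = data[redOffset(x, y, w)]
                       + data[redOffset(x, y, w) + 1]   // green
                       + data[blueOffset(x, y, w)];
        if (sum < bestSum) {
            bestSum = sum;
            bestX = x;
            bestY = y;
        }
    }
}
NSLog(@"Darkest pixel at (%lu, %lu)", (unsigned long)bestX, (unsigned long)bestY);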

32 bits big endian floating point data to CGImage

I am trying to write an application that reads FITS images. FITS stands for Flexible Image Transport System; it is a format primarily used to store scientific data related to astrophysics, and secondarily it is used by many amateur astronomers who take pictures of the sky with CCD cameras. So FITS files contain images, but they may also contain tables and other kinds of data. As I am new to Objective-C and Cocoa programming (I started this project one year ago, but since I am busy, I have hardly touched it for a year!), I started by trying to create a library that allows me to convert the image content of the file to an NSImageRep. FITS image binary data may be 8 bit/pix, 16 bit/pix or 32 bit/pix unsigned integer, or 32 bit/pix or 64 bit/pix floating point, all big endian.
I managed to get an image representation for greyscale FITS images with 16 bit/pix and 32 bit/pix unsigned integer data, but I get very weird behaviour with 32 bit/pix floating point data (and the problem is worse for RGB 32 bit/pix floating point). So far I haven't tested 8 bit/pix integer data or RGB images based on 16 bit/pix and 32 bit/pix integer data, because I haven't yet found example files on the web.
Here is my code to create a greyscale image from a FITS file:
-(void) ConstructImgGreyScale
{
CGBitmapInfo bitmapInfo;
int bytesPerRow;
switch ([self BITPIX]) // BITPIX : Number bits/pixel. Information extracted from the FITS header
{
case 8:
bytesPerRow=sizeof(int8_t);
bitmapInfo = kCGImageAlphaNone ;
break;
case 16:
bytesPerRow=sizeof(int16_t);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder16Big;
break;
case 32:
bytesPerRow=sizeof(int32_t);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big;
break;
case 64:
bytesPerRow=sizeof(int64_t);
bitmapInfo = kCGImageAlphaNone;
break;
case -32:
bytesPerRow=sizeof(Float32);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big | kCGBitmapFloatComponents;
case -64:
bytesPerRow=sizeof(Float64);
bitmapInfo = kCGImageAlphaNone | kCGBitmapFloatComponents;
break;
default:
NSLog(#"Unknown pixel bit size");
return;
}
[self setBitsPerSample:abs([self BITPIX])];
[self setColorSpaceName:NSCalibratedWhiteColorSpace];
[self setPixelsWide:[self NAXESofAxis:0]]; // <- Size of the X axis. Extracted from FITS header
[self setPixelsHigh:[self NAXESofAxis:1]]; // <- Size of the Y axis. Extracted from FITS header
[self setSize: NSMakeSize( 2*[self pixelsWide], 2*[self pixelsHigh])];
[self setAlpha: NO];
[self setOpaque:NO];
CGDataProviderRef provider=CGDataProviderCreateWithCFData ((CFDataRef) Img);
CGFloat Scale[2]={0,28};
image = CGImageCreate ([self pixelsWide],
[self pixelsHigh],
[self bitsPerSample],
[self bitsPerSample],
[self pixelsWide]*bytesPerRow,
[[NSColorSpace deviceGrayColorSpace] CGColorSpace],
bitmapInfo,
provider,
NULL,
NO,
kCGRenderingIntentDefault
);
CGDataProviderRelease(provider);
return;
}
And here is a snapshot of the result for 32 bit/pix floating point data: NASA HST picture!
The image seems to be shifted to the left, but what is more annoying is that I get two representations of the same image (upper and lower parts of the frame) in the same frame.
And for some other files the behaviour is even stranger:
Star Field 1 (for the other links see the comments; as a new user I cannot put more than two links in this text, nor include the images directly).
All three star field images are representations of the content of the same FITS file. I obtain a correct representation of the image in the bottom part of the frame (the stars are too saturated, but I haven't played with the encoding yet). In the upper part, however, each time I open the same file I get a different representation of the image. It looks as if, each time I open the file, it does not take the same sequence of bytes to produce the image representation (at least for the upper part).
I also do not know whether the image duplicated at the bottom contains half of the data and the upper one the other half, or whether it is simply a copy of the data.
When I convert the content of my data to a primitive format (human-readable numbers), the numbers are consistent with what should be in the pixels, at the right positions. This leads me to think the problem is not coming from the data but from the way CGImage interprets it, i.e. I am wrong somewhere in the arguments I pass to the CGImageCreate function.
In the case of RGB FITS image data, I end up with 18 images in my frame: 6 copies each of the R, G and B images, all in greyscale. Note that for RGB images my code is different.
What am I doing wrong?
OK, I finally found the solution to one of my problems, the duplication of the image, and it was a very stupid mistake; I am not proud of myself for not having found it earlier: in the code, I forgot the break in the case -32.
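For the record, the corrected case from the switch above only gains the missing statement; everything else is unchanged:
case -32:
bytesPerRow=sizeof(Float32);
bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrder32Big | kCGBitmapFloatComponents;
break; // the missing break; without it, execution fell through into case -64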
The question about the shift of the picture remains. I do not see the shift when opening 32 bit integer images, but it appears with 32 bit floating point data. Does anyone have an idea where this shift could come from in my code? Is it due to the way I construct the image, or could it be due to the way I draw the image?
Below is the piece of code I use to draw the image. Since the image initially came out upside down, I made a small change of coordinates.
- (bool)draw {
CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
if (!context || !image) {
return NO;
}
NSSize size = [self size];
CGContextTranslateCTM(context, 0, size.height);
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
return YES;
}

Average Color of Mac Screen

I'm trying to find a way to calculate the average color of the screen using Objective-C.
So far I use this code to get a screen shot, which works great:
CGImageRef image1 = CGDisplayCreateImage(kCGDirectMainDisplay);
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:image1];
// Create an NSImage and add the bitmap rep to it...
NSImage *image = [[NSImage alloc] init];
[image addRepresentation:bitmapRep];
Now my problem is to calculate the average RGB color of this image.
I've found one solution, but the R, G and B color components were always calculated to be the same (equal):
NSInteger i = 0;
NSInteger components[3] = {0,0,0};
unsigned char *data = [bitmapRep bitmapData];
NSInteger pixels = ([bitmapRep size].width *[bitmapRep size].height);
do {
components[0] += *data++;
components[1] += *data++;
components[2] += *data++;
} while (++i < pixels);
int red = (CGFloat)components[0] / pixels;
int green = (CGFloat)components[1] / pixels;
int blue = (CGFloat)components[2] / pixels;
A short analysis of bitmapRep shows that each pixel has 32 bits (4 bytes), where the first byte is unused; it is a padding byte. In other words, the format is XRGB and X is not used. (There are no padding bytes at the end of a pixel row.)
Another remark: for counting the number of pixels you use the method -(NSSize)size.
You should never do this! size has nothing to do with pixels. It only says how big the image should be depicted (expressed in inches or cm or mm) on the screen or the printer. For counting (or otherwise using) the pixels you should use -(NSInteger)pixelsWide and -(NSInteger)pixelsHigh. The (incorrect) use of -size happens to work if and only if the resolution of the imageRep is 72 dots per inch.
Finally: there is a similar question at Average Color of Mac Screen
Your data is probably aligned as 4 bytes per pixel (and not 3 bytes, like you assume). That would (statistically) explain the near-equal values that you get.
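A reworked version of the loop along those lines, as a sketch that assumes the XRGB, 4-bytes-per-pixel layout described above and counts pixels with pixelsWide/pixelsHigh:
// Sum each channel with a 4-byte stride; byte 0 of every pixel is padding.
unsigned char *data = [bitmapRep bitmapData];
NSInteger width = [bitmapRep pixelsWide];
NSInteger height = [bitmapRep pixelsHigh];
NSInteger bytesPerRow = [bitmapRep bytesPerRow];
unsigned long long sums[3] = {0, 0, 0};
for (NSInteger y = 0; y < height; y++) {
    unsigned char *row = data + y * bytesPerRow;
    for (NSInteger x = 0; x < width; x++) {
        sums[0] += row[x * 4 + 1]; // R
        sums[1] += row[x * 4 + 2]; // G
        sums[2] += row[x * 4 + 3]; // B
    }
}
unsigned long long pixels = (unsigned long long)(width * height);
int red = (int)(sums[0] / pixels);
int green = (int)(sums[1] / pixels);
int blue = (int)(sums[2] / pixels);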

Using the contents of an array to set individual pixels in a Quartz bitmap context

I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view.
It appears that I have to create 1x1 rects and either put a stroke on them or draw a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is an image buffer that cycles at 25 frames per second, and drawing this way seems inefficient.
I guess the other question is: should I use OpenGL ES instead?
Thoughts/best practice would be much appreciated.
Regards
Dave
OK, have come a short way, but can't make the final hurdle and I am not sure why this isn't working:
- (void) displayContentsOfArray1UsingBitmap: (CGContextRef)context
{
long bitmapData[WIDTH * HEIGHT];
// Build bitmap
int i, j, h;
for (i = 0; i < WIDTH; i++)
{
for (j = 0; j < HEIGHT; j++)
{
h = frameBuffer01[i][j];
bitmapData[i * j] = h;
}
}
// Blit the bitmap to the context
CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData,4 * WIDTH * HEIGHT, NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpaceRef, kCGImageAlphaFirst, providerRef, NULL, YES, kCGRenderingIntentDefault);
CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(providerRef);
}
Read the documentation for CGImageCreate(). Basically, you have to create a CGDataProvider from your pixel array (using CGDataProviderCreateDirect()), then create a CGImage with this data provider as a source. You can then draw the image into any context. It's a bit tedious to get this right because these functions expect a lot of arguments, but the documentation is quite good.
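A rough sketch of that approach, assuming a WIDTH x HEIGHT buffer of 32-bit pixels called pixels that the caller keeps alive for the lifetime of the provider (the names are placeholders, not an exact drop-in):
// Direct data provider: Quartz pulls bytes straight from the caller's buffer.
static const void *GetBytePointer(void *info) {
    return info; // info is the pixel buffer handed to CGDataProviderCreateDirect
}

CGDataProviderDirectCallbacks callbacks = { 0, GetBytePointer, NULL, NULL, NULL };
CGDataProviderRef provider = CGDataProviderCreateDirect(pixels, (off_t)WIDTH * HEIGHT * 4, &callbacks);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// kCGImageAlphaNoneSkipFirst treats the leading byte of each pixel as padding;
// use kCGImageAlphaFirst instead if that byte really holds alpha.
CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpace,
                                    kCGImageAlphaNoneSkipFirst, provider, NULL, NO,
                                    kCGRenderingIntentDefault);
CGContextDrawImage(context, CGRectMake(0, 0, WIDTH, HEIGHT), imageRef);
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);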
Dave,
The blitting code works fine, but your code to copy from the frame buffer is incorrect.
// Build bitmap
int i, j, h;
for (i = 0; i < WIDTH; i++)
{
for (j = 0; j < HEIGHT; j++)
{
h = frameBuffer01[i][j];
bitmapData[/*step across a line*/i + /*step down a line*/j*WIDTH] = h;
}
}
Note my changes to the assignment to elements of bitmapData.
Not knowing the layout of the frame buffer, this may still be incorrect, but from your code this looks closer to the intent.