I am trying to write code that can crop an existing image down to some specified size/region. I am working with DICOM images, and the API I am using allows me to get pixel values directly. I've placed pixel values of the area of interest within the image into an array of floats (dstImage, below).
Where I'm encountering trouble is with the actual construction of the new, cropped image file from this pixel data. The source image is grayscale; however, all of the examples I have found online (like this one) are for RGB images. I tried to follow the example in that link, adjusting for grayscale and trying numerous different values, but I continue to get errors on the CGBitmapContextCreate line and still do not clearly understand what those values are supposed to be.
My intensity values for the source image go above 255, so my impression is that this is not 8-bit Grayscale, but 16-bit Grayscale.
Here is my code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

CGContextRef context;
context = CGBitmapContextCreate(dstImage,      // pixel data from the region of interest
                                dstWidth,      // width of the region of interest
                                dstHeight,     // height of the region of interest
                                16,            // bits per component
                                2 * dstWidth,  // bytes per row
                                colorSpace,
                                kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);

CGImageRef cgImage = CGBitmapContextCreateImage(context);

CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                             CFSTR("test.png"),
                                             kCFURLPOSIXPathStyle,
                                             false);
CFStringRef type = kUTTypePNG;
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url,
                                                             type,
                                                             1,
                                                             0);
CGImageDestinationAddImage(dest,
                           cgImage,
                           0);
CFRelease(cgImage);
CFRelease(context);
CGImageDestinationFinalize(dest);
free(dstImage);
The error I keep receiving is:
CGBitmapContextCreate: unsupported parameter combination: 16 integer bits/component; 32 bits/pixel; 1-component color space; kCGImageAlphaNoneSkipLast; 42 bytes/row.
The ultimate goal is to create an image file from the pixel data in dstImage and save it to the hard drive. Help on this would be greatly appreciated as would insight into how to determine what values I should be using in the CGBitmapContextCreate call.
Thank you
First, you should familiarize yourself with the "Supported Pixel Formats" section of Quartz 2D Programming Guide: Graphics Contexts.
If your image data is in an array of float values, then it's 32-bits-per-component, not 16. Therefore, you have to use kCGImageAlphaNone | kCGBitmapFloatComponents.
However, I believe that Core Graphics will interpret floating-point components as being between 0.0 and 1.0. If your values are outside of that, you may need to convert them using something like (value - minimumValue) / (maximumValue - minimumValue). An alternative may be to use CGColorSpaceCreateCalibratedGray(), or to create a CGImage using CGImageCreate() with an appropriate decode parameter and then, if you still need a bitmap context, draw that image into one with CGContextDrawImage().
In fact, if you're not drawing into your bitmap context, you should just be creating a CGImage instead, anyway.
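Following that advice, here is a minimal sketch of what the corrected context creation might look like, assuming dstImage holds one float per pixel and the values have already been normalized to 0.0–1.0 (variable names are taken from the question):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

// 32 bits per component, float samples, no alpha: one of the supported
// grayscale formats listed in the Quartz 2D Programming Guide.
CGContextRef context = CGBitmapContextCreate(dstImage,
                                             dstWidth,
                                             dstHeight,
                                             32,            // bits per component
                                             4 * dstWidth,  // bytes per row (sizeof(float) * width)
                                             colorSpace,
                                             kCGImageAlphaNone | kCGBitmapFloatComponents);

CGImageRef cgImage = CGBitmapContextCreateImage(context);
// ...hand cgImage to the CGImageDestination code as before; ImageIO should
// convert the float samples to an integer format when writing the PNG.
CGColorSpaceRelease(colorSpace);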
Is there a method for filling a bitmap (context) with a specified RGB color?
Here is the procedure that I have implemented:

1. Allocate memory for the bitmap (malloc).
2. memset the allocated memory with zeros (to get a black background).
3. Create the CGContextRef:
   CGContextRef ctx = CGBitmapContextCreate(memData, width, height, 8, bytesPerRow, colorSpace, bmpInfo);
4. Insert the image:
   CGContextDrawImage(ctx, CGRectMake(x, y, imgWidth, imgHeight), anotherImg);
5. Finalize the image:
   CGImageRef createdImg = CGBitmapContextCreateImage(ctx);
As you can see from the above, the background will always be black. I want to be able to select an RGB color for the background. How is that done?
This is for an OS X app in Xcode. My functions are implemented in C, as I'm not too comfortable with the Objective-C syntax.
Set the fill colour (CGContextSetFillColorWithColor) and then fill the whole context (CGContextFillRect).
Also, you might want to consider passing NULL as the data parameter to CGBitmapContextCreate so that you don't need to worry about the memory management for it.
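A minimal sketch of both suggestions, assuming an RGB context and an opaque red background (width, height, x, y, imgWidth, imgHeight, and anotherImg are the question's own variables):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// Passing NULL lets Core Graphics allocate and manage the pixel buffer;
// passing 0 for bytesPerRow lets it pick a suitable row stride.
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                         colorSpace, kCGImageAlphaPremultipliedLast);

// Fill the whole context with the desired background color before drawing.
CGFloat components[] = { 1.0, 0.0, 0.0, 1.0 };  // red, fully opaque
CGColorRef background = CGColorCreate(colorSpace, components);
CGContextSetFillColorWithColor(ctx, background);
CGContextFillRect(ctx, CGRectMake(0, 0, width, height));

// Then draw the image on top and finalize as before.
CGContextDrawImage(ctx, CGRectMake(x, y, imgWidth, imgHeight), anotherImg);
CGImageRef createdImg = CGBitmapContextCreateImage(ctx);

CGColorRelease(background);
CGColorSpaceRelease(colorSpace);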
I want to composite a number of images into a single window in OpenCV. I had found I could create a ROI in one image and copy another colour image into that area without any problems.
Switching the source image to an image I had carried out some processing on didn't work, though.
Eventually I found out that I'd converted the src image to greyscale, and that when using the copyTo method it didn't copy anything over.
I've answered this question with my basic solution, which only caters for greyscale-to-colour conversion. If you use other Mat image types then you'd have to carry out the additional tests and conversions.
I realised my problem was that I was trying to copy a greyscale image into a colour image, so I had to convert it into the appropriate type first.
void drawIntoArea(Mat &src, Mat &dst, int x, int y, int width, int height)
{
Mat scaledSrc;
// Destination image for the converted src image.
Mat convertedSrc(src.rows,src.cols,CV_8UC3, Scalar(0,0,255));
// Convert the src image into the correct destination image type
// Could also use MixChannels here.
// Expand to support range of image source types.
if (src.type() != dst.type())
{
cvtColor(src, convertedSrc, CV_GRAY2RGB);
}else{
src.copyTo(convertedSrc);
}
// Resize the converted source image to the desired target width.
resize(convertedSrc, scaledSrc,Size(width,height),1,1,INTER_AREA);
// create a region of interest in the destination image to copy the newly sized and converted source image into.
Mat ROI = dst(Rect(x, y, scaledSrc.cols, scaledSrc.rows));
scaledSrc.copyTo(ROI);
}
It took me a while to realise the image source types were different; I'd forgotten I'd converted the images to greyscale for some other processing steps.
In Mail, when I add an image and try to send it, it quickly asks me which size I want to send the images as. See screenshot:
I want to do something similar in an app where I will be uploading an image and want to enable the user to resize the image before it is uploaded. What is the best way to estimate the file size as Apple does here?
It seems that it would take too long to actually create each of the resized images only to check them for sizes. Is there a better way?
I did find this Apple sample code which helps a little bit but to be honest is a bit overwhelming. :)
The single biggest factor in determining the final compressed image size is not image size or JPEG compression quality, but image complexity (essentially, its entropy). If you know that you're always going to be dealing with highly detailed photos (as opposed to solid color fields or gradients), that somewhat reduces the variance along that dimension, but...
I spent a fair amount of time doing numerical analysis on this problem. I sampled the compressed image size of a detailed, high-resolution image that was scaled down in 10 percentage point increments, at 9 different JPEG quality levels. This produced a 3-dimensional data set describing an implicit function z = f(x, y), where x is the scaled image size in pixels (w*h), y is the JPEG compression quality, and z is the size of the resulting image in bytes.
The resulting surface is hard to estimate. Counterintuitively, it has oscillations and multiple inflection points, meaning that a function of degree 2 in both x and y is insufficient to fit it, and increasing the polynomial degrees and creating custom fitting functions didn't yield significantly better results. Not only is it not a linear relation, it isn't even a monotonic relation. It's just complex.
Let's get practical. Notice when Apple prompts you for the image size: when you hit "Send", not when the image first appears in the mail composition view. This gives them as long as it takes to compose your message before they have to have the estimated image sizes ready. So my suspicion is this: they do it the hard way. Scaling the image to the different sizes can be parallelized and performed in the background, and even though it takes several seconds on iPhone 4-caliber hardware, all of that work can be hidden from the user. If you're concerned about memory usage, you can write the images to temporary files and render them sequentially instead of in parallel, which will use no more than roughly twice the memory of the uncompressed image.
In summary: unless you know a lot about the expected entropy of the images you're compressing, any estimation function will be wildly inaccurate for some class of images. If you can handle that, then it's fairly easy to do a linear or quadratic fit on some sample data and produce a function for estimation purposes. However, if you want to get as close as Apple does, you probably need to do the actual resizing work in the background, since there are simply too many factors to construct a heuristic that gets it right all of the time.
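For illustration, here is a rough sketch of the "do it the hard way" approach on iOS: scale and compress off the main thread while the user composes, then report the byte counts. The helper name, the scale factors, and the JPEG quality are arbitrary assumptions, not Apple's values, and `original` stands for the full-size UIImage you already have:

// Hypothetical helper: returns the JPEG byte count of `image` scaled by `factor`.
static NSUInteger estimatedJPEGSize(UIImage *image, CGFloat factor, CGFloat quality)
{
    CGSize target = CGSizeMake(image.size.width * factor, image.size.height * factor);
    UIGraphicsBeginImageContextWithOptions(target, YES, 1.0);
    [image drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return [UIImageJPEGRepresentation(scaled, quality) length];
}

// Kick the work off in the background while the user is still composing.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSUInteger small  = estimatedJPEGSize(original, 0.25, 0.75);
    NSUInteger medium = estimatedJPEGSize(original, 0.5,  0.75);
    NSUInteger large  = estimatedJPEGSize(original, 0.75, 0.75);
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the size-picker UI with the small/medium/large byte counts.
    });
});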
I have built a method that would resize the image, like so:
-(UIImage *)resizeImage:(UIImage *)image width:(CGFloat)resizedWidth height:(CGFloat)resizedHeight
{
    CGImageRef imageRef = [image CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(NULL, resizedWidth, resizedHeight, 8, 4 * resizedWidth, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, resizedWidth, resizedHeight), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}
And to get the size of the image, you would have to convert it into NSData, and ask for the length:
UIImage* actualImage = [UIImage imageNamed:@"image"];
NSData* actualImageData = UIImagePNGRepresentation(actualImage);
NSLog(@"Actual %f KB", (CGFloat)actualImageData.length / (CGFloat)1024);

UIImage* largeImage = [self resizeImage:actualImage width:actualImage.size.width * 0.8 height:actualImage.size.height * 0.8];
NSData* largeImageData = UIImagePNGRepresentation(largeImage);
NSLog(@"Large %f KB", (CGFloat)largeImageData.length / (CGFloat)1024);

UIImage* mediumImage = [self resizeImage:actualImage width:actualImage.size.width * 0.5 height:actualImage.size.height * 0.5];
NSData* mediumImageData = UIImagePNGRepresentation(mediumImage);
NSLog(@"Medium %f KB", (CGFloat)mediumImageData.length / (CGFloat)1024);

UIImage* smallImage = [self resizeImage:actualImage width:actualImage.size.width * 0.3 height:actualImage.size.height * 0.3];
NSData* smallImageData = UIImagePNGRepresentation(smallImage);
NSLog(@"Small %f KB", (CGFloat)smallImageData.length / (CGFloat)1024);
You can always use UIImageJPEGRepresentation to compress an image. The four options could correspond to compression quality values of 0.25, 0.5, 0.75, and 1.0; the resulting sizes are easy to find by applying the same method to the image and checking the length of the data it returns.
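For example, a quick sketch of that check (image stands for whatever UIImage you already have):

CGFloat qualities[] = { 0.25, 0.5, 0.75, 1.0 };
for (size_t i = 0; i < 4; i++) {
    // Compress at each quality and log the resulting byte count in KB.
    NSData *jpeg = UIImageJPEGRepresentation(image, qualities[i]);
    NSLog(@"Quality %.2f -> %lu KB", qualities[i], (unsigned long)(jpeg.length / 1024));
}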
The image sizes provided in the Mail app are only estimates - the actual filesize of the sent image is different. It would also be far too slow to convert a full-size image (3264 x 2448 on the iPhone 4S) to the various sizes just to get the filesize.
Edit:
The compression filesizes aren't linear, so you can't just get numPixels/filesize to accurately estimate the filesize for smaller images.
So that this answer isn't totally useless, here are the image sizes that the Mail app exports at:
Small: 320x240
Medium: 640x480
Large: 1224x1632
If you store the image in an NSData, you can call [NSData length] to get the number of bytes it contains and then divide to get the size in kB or MB.
I have a mask and an image on which mask is applied to get a portion of that image.
The problem is that when I apply the mask to the image, the resulting image is the same size as the original, with the unmasked part left transparent. What I need is an image that contains only the masked part of the original, without the transparent region, so that the resulting image is smaller and contains only the masked area.
Thanks
You can (a rough sketch follows this list):

1. Draw the image into a new CGBitmapContext at actual size, providing a buffer for the bitmap (CGBitmapContextCreate).
2. Read alpha values from the bitmap to determine the transparent boundaries. You will have to determine how to read this based on the pixel format you have specified.
3. Create a new CGBitmapContext providing the external buffer, using some variation or combination of: a) a pixel offset, b) offset bytes per row, or c) manually moving the bitmap's data (in place to reduce memory usage, if possible) (CGBitmapContextCreate).
4. Create a CGImage from the second bitmap context (CGBitmapContextCreateImage).
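Here is a hedged sketch of those steps with a hypothetical helper, assuming an 8-bit RGBA buffer whose first row corresponds to the top row of the image. For simplicity, it crops with CGImageCreateWithImageInRect instead of re-wrapping the buffer with offsets:

// Hypothetical helper: returns a new CGImage containing only the region of
// `masked` whose alpha is non-zero.
CGImageRef createImageCroppedToOpaqueBounds(CGImageRef masked)
{
    size_t width = CGImageGetWidth(masked);
    size_t height = CGImageGetHeight(masked);
    size_t bytesPerRow = width * 4;
    unsigned char *buffer = calloc(height, bytesPerRow);

    // Step 1: draw into a known 8-bit RGBA format so the alpha byte is predictable.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), masked);

    // Step 2: scan the alpha channel (last byte of each RGBA pixel) for the
    // bounding box of the non-transparent pixels.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (buffer[y * bytesPerRow + x * 4 + 3] != 0) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }

    // Steps 3-4: instead of re-wrapping the buffer with offsets, simply crop the
    // original image to the computed bounds. (If the image is entirely
    // transparent, minX > maxX and there is nothing meaningful to crop.)
    CGImageRef cropped = NULL;
    if (minX <= maxX && minY <= maxY) {
        CGRect bounds = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
        cropped = CGImageCreateWithImageInRect(masked, bounds);
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(buffer);
    return cropped;
}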
Objective-C / Cocoa:
I need to load the image from a JPG file into a two-dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into an NSBitmapImageRep. I have tried several variations on the following two lines of code:
NSString *filePath = [NSString stringWithFormat: @"%@%@", @"/Users/adam/Documents/phoneimages/", [outLabel stringValue]]; //this coming from a window control
NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath];
With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070.
I have tried replacing the second line of code with:
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage];
But this yields a compiler error, 'incompatible type', saying that initWithData: wants an NSData, not an NSImage.
I have also tried various other ways to get this done, but all are unsuccessful either due to compiler or runtime error. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both).
And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array), rather than using NSBitmapImageRep, then please let me know! And by the way, I know the path is valid (confirmed with fileExistsAtPath) -- and the filename in outLabel is a file with .jpg extension.
Thanks for any help!
Easy!
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *imageRep = (NSBitmapImageRep *)[[controlImage representations] objectAtIndex:0];
Then to get the actual bitmap pixel data:
unsigned char *pixelData = [imageRep bitmapData];
If your image has multiple representations (it probably doesn't), you can get them out of that same array. The same code will work for your .png images.
Edit:
Carl has a better answer. This is only good if you also want to manipulate the image in some way, like scaling or changing color mode.
Original:
I would probably use Core Graphics for this. NSImage and most NSImageRep subclasses were not designed to have a pixel array sitting around. They convert source image data into pixels only when drawn.
When you create a CGBitmapContext, you can pass it a buffer of pixels to use. That is your two dimensional array, with whatever row bytes, color depth, pixel format and other properties you specify.
You can initialize a CGImage with JPG or PNG data using CGImageCreateWithJPEGDataProvider or CGImageCreateWithPNGDataProvider respectively.
Once you draw the image into the context with CGContextDrawImage the original buffer you passed to CGBitmapContextCreate is now filled with the pixel data of the image.
1. Create a CGDataProvider with the image data.
2. Create a CGImageRef with the data provider.
3. Create a buffer large enough for the image pixels.
4. Create a CGBitmapContext that is the size of the image with the pixel buffer.
5. Draw the image into the bitmap context.
6. Access the pixels in the buffer.
If you want to use NSImage instead of CGImage, you can create an NSGraphicsContext from a CGContext with graphicsContextWithGraphicsPort:flipped: and set it as the current context. That basically replaces steps 1 and 2 above with whatever code you want to use to make an NSImage.
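A rough sketch of the steps listed above, assuming an 8-bit RGBA buffer (the file path is just a placeholder):

// 1-2. Create a data provider from the file and decode it into a CGImage.
CGDataProviderRef provider = CGDataProviderCreateWithFilename("/path/to/image.jpg");
CGImageRef image = CGImageCreateWithJPEGDataProvider(provider, NULL, true,
                                                     kCGRenderingIntentDefault);

// 3. Allocate a buffer large enough for the pixels (4 bytes per RGBA pixel).
size_t width  = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
unsigned char *pixels = malloc(width * height * 4);

// 4. Wrap the buffer in a bitmap context the same size as the image.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                         colorSpace, kCGImageAlphaPremultipliedLast);

// 5. Draw the image; the buffer is now filled with its pixel data.
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

// 6. Access pixels[(y * width + x) * 4] for the RGBA sample at (x, y).

CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
CGImageRelease(image);
CGDataProviderRelease(provider);
// free(pixels) when you are done with the pixel data.

For PNG files, swap CGImageCreateWithJPEGDataProvider for CGImageCreateWithPNGDataProvider; the rest is unchanged.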
imageRepWithContentsOfFile: is a class method, not an instance method.
The correct way to use it is:
[NSBitmapImageRep imageRepWithContentsOfFile:filePath];