I have an issue with reading pixel values from a resized UIImage.
The initial image size is 500x500.
The method I'm using fails only when the image has been resized (even when resized to the same size).
I'm using an image context and drawInRect: to create the new resized image.
I'm using CFDataRef and CFDataGetBytePtr in another method to get the pixel value at (x, y).
The CFDataRef is #1000000 for all unresized images; once the image is resized, this value changes to #90240000. The same happens with CFDataGetBytePtr, which returns empty data after the image is resized.
Now I suspect it has something to do with the fact that the resized image is actually a new image, but I can't be sure, so I'd really appreciate any explanations or suggestions as to how I can resolve this.
Thank you for taking the time to check out my question.
I had a similar problem: I couldn't get the correct values using CFDataGetBytePtr after resizing the UIImage. I don't understand the reason yet, but another way of reading the pixel values of a UIImage works: draw the image into a bitmap context whose buffer you own, then read from that buffer. Code as follows:
// img is the CGImageRef of the UIImage (e.g. uiImage.CGImage)
size_t width = CGImageGetWidth(img);
size_t height = CGImageGetHeight(img);
size_t rowByteSize = width * 4; // 4 bytes per pixel (RGBA)

// Allocate a buffer and draw the image into it via a bitmap context.
// (new[] requires Objective-C++, i.e. a .mm file; use malloc/free in plain Objective-C.)
unsigned char *data = new unsigned char[height * rowByteSize];
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, width, height, 8, rowByteSize,
                                             colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), img);
CGContextRelease(context);
CGColorSpaceRelease(colorSpaceRef);
// data now contains the image's pixels as premultiplied RGBA
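Once the context has been drawn into, reading a pixel is just indexing into the buffer. A minimal sketch, assuming the same RGBA layout and the data/rowByteSize variables from the snippet above (the coordinates are hypothetical):
size_t x = 10, y = 20; // hypothetical pixel coordinates
size_t offset = y * rowByteSize + x * 4;
unsigned char red   = data[offset];
unsigned char green = data[offset + 1];
unsigned char blue  = data[offset + 2];
unsigned char alpha = data[offset + 3];
// Note: with kCGImageAlphaPremultipliedLast, the RGB values are premultiplied by alpha
delete [] data; // free the buffer when you're done with it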
Related
I have an app that renders into a UIView's CGContext in drawRect. I also export those renderings using a background renderer. It uses the same rendering logic to render (in faster than real time) into a CGBitmapContext (which I subsequently transform into an mp4 file).
I have noticed that the output video has a number of weird glitches, such as the image being rotated, weird duplications of the rendered images, random noise, and odd timing.
I'm looking for ways to debug this. For the timing issue, I thought I'd render a string that tells me which frame I'm currently viewing, only to find that rendering text into CGContexts is not very well documented. In fact, the documentation around much of Core Graphics is quite unforgiving to someone of my experience.
So, specifically, I'd like to know how to render text into a context. If it's Core Text, must it interoperate somehow with the Core Graphics context? And in general, I'd appreciate any tips and advice on doing bitmap rendering and debugging the results.
According to another question:
How to convert Text to Image in Cocoa Objective-C
we can use CTLineDraw to draw text into a CGBitmapContext.
Sample code:
NSString *string = @"terry.wang";
CGFloat fontSize = 10.0f;
// Create an attributed string with string and font information
CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica Light"), fontSize, nil);
NSDictionary* attributes = [NSDictionary dictionaryWithObjectsAndKeys:
(id)font, kCTFontAttributeName,
nil];
NSAttributedString* as = [[NSAttributedString alloc] initWithString:string attributes:attributes];
CFRelease(font);
// Figure out how big an image we need
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)as);
CGFloat ascent, descent, leading;
double fWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
size_t width = (size_t)ceil(fWidth);
size_t height = (size_t)ceil(ascent + descent);
// On iOS 4.0 and Mac OS X v10.6 and later you can pass NULL for data
// and let CGBitmapContextCreate allocate the buffer for you
void *data = malloc(width * height * 4);
// Create the context and fill it with white background
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
CGContextRef ctx = CGBitmapContextCreate(data, width, height, 8, width*4, space, bitmapInfo);
CGColorSpaceRelease(space);
CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0); // white background
CGContextFillRect(ctx, CGRectMake(0.0, 0.0, width, height));
// Draw the text
CGFloat x = 0.0;
CGFloat y = descent;
CGContextSetTextPosition(ctx, x, y);
CTLineDraw(line, ctx);
CFRelease(line);
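If, like the asker, you already have a CGBitmapContext that you render frames into, you can skip the context creation above and call CTLineDraw directly on it. And if you want the rendered text as an image, or need to clean up, something like this works (a sketch using ctx, data, and as from the snippet above; the release call assumes manual reference counting):
// Optionally capture the rendered text as an image
CGImageRef textImage = CGBitmapContextCreateImage(ctx);
// ... composite textImage wherever you need it ...
CGImageRelease(textImage);
// Clean up what the snippet above allocated
CGContextRelease(ctx);
free(data);
[as release]; // omit under ARC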
I am making a PDF annotator and when you switch pages it has to redraw all of the previously drawn OpenGL content (which was saved to file in JSON format). The problem is that it takes longer the more content there is to draw. I have a UIImage saved to disk for each page so I was hoping to speed up this process by drawing that UIImage onto EAGLContext in one big stroke.
I want to know how to take a UIImage (or a JPEG/PNG file) and draw it directly onto the screen. The reason it has to be on the EAGLView is that it needs to support the eraser, and doing it the regular UIKit way wouldn't work with that.
I assume there's some way to set a brush as the whole image and just stamp the screen with it once. Any suggestions?
As a pedantic note, there is no standard class named EAGLView, but I assume you're referring to one of Apple's sample UIView subclasses that host OpenGL ES content.
The first step in doing this would be to load the UIImage into a texture. The following is some code that I've used for this in my image processing framework (newImageSource is the input UIImage):
CGSize pointSizeOfImage = [newImageSource size];
CGFloat scaleOfImage = [newImageSource scale];
CGSize pixelSizeOfImage = CGSizeMake(scaleOfImage * pointSizeOfImage.width, scaleOfImage * pointSizeOfImage.height);
CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
BOOL shouldRedrawUsingCoreGraphics = NO;
// For now, deal with images larger than the maximum texture size by resizing to be within that limit
CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
{
    pixelSizeOfImage = scaledImageSizeToFitOnGPU;
    pixelSizeToUseForTexture = pixelSizeOfImage;
    shouldRedrawUsingCoreGraphics = YES;
}
if (self.shouldSmoothlyScaleOutput)
{
    // In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
    CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
    CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
    pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
    shouldRedrawUsingCoreGraphics = YES;
}
GLubyte *imageData = NULL;
CFDataRef dataFromImageDataProvider = NULL;
if (shouldRedrawUsingCoreGraphics)
{
    // For resized images, redraw into a new bitmap buffer
    imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 8, (int)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), [newImageSource CGImage]);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);
}
else
{
    // Access the raw image bytes directly
    dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider([newImageSource CGImage]));
    imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
}
glBindTexture(GL_TEXTURE_2D, outputTexture);
if (self.shouldSmoothlyScaleOutput)
{
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
if (self.shouldSmoothlyScaleOutput)
{
    glGenerateMipmap(GL_TEXTURE_2D);
}
if (shouldRedrawUsingCoreGraphics)
{
    free(imageData);
}
else
{
    CFRelease(dataFromImageDataProvider);
}
As you can see, this has some functions for resizing images that exceed the maximum texture size of the device (the class method in the above code merely queries the max texture size), as well as a boolean flag for whether or not to generate mipmaps for the texture for smoother downsampling. These can be removed if you don't care about those cases. This is also OpenGL ES 2.0 code, so there might be an OES suffix or two that you'd need to add to some of the functions above in order for them to work with 1.1.
Once you have the UIImage in a texture, you can draw it to the screen by using a textured quad (two triangles that make up a rectangle, with appropriate texture coordinates for the corners). How you do this differs between OpenGL ES 1.1 and 2.0: for 2.0, you use a passthrough shader program that just reads the color from that location in the texture and draws it to the screen; for 1.1, you just set up the texture coordinates for your geometry and draw the two triangles.
I have some OpenGL ES 2.0 code for this in this answer.
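For reference, the ES 2.0 draw itself reduces to something like the following sketch. The program, textureUniform, positionAttribute, and textureCoordinateAttribute handles are assumptions here: they stand for a compiled passthrough shader program and its attribute/uniform locations, which the linked answer shows how to set up.
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
static const GLfloat textureCoordinates[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};

glUseProgram(program); // the compiled passthrough program (assumed)
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glUniform1i(textureUniform, 0); // texture unit 0

glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(textureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);
glEnableVertexAttribArray(textureCoordinateAttribute);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);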
Does anybody know how to subtract one UIImage from another UIImage,
for example as in this screenshot:
Thanks for any response!
I believe you can accomplish this by using the kCGBlendModeDestinationOut blend mode. Create a new context, draw your background image, then draw the foreground image with this blend mode.
UIGraphicsBeginImageContextWithOptions(sourceImage.size, NO, sourceImage.scale);
[sourceImage drawAtPoint:CGPointZero];
[maskImage drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationOut alpha:1.0f];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What does it mean to subtract an image? The sample image given shows more of a !red operation. Let us say that to subtract image a from image b means to set every pixel in b that intersects a pixel in a to transparent. To perform the subtraction, what we are actually doing is masking image b to the inverse of image a. So, a good approach would be to create an image mask from the alpha channel of image a, then apply it to b. To create the mask you would do something like this:
// get access to the image bytes
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
// create a buffer to hold the mask values
size_t width = CGImageGetWidth(image.CGImage);
size_t height = CGImageGetHeight(image.CGImage);
uint8_t *maskData = malloc(width * height);
// Iterate over the pixel data, reading the alpha value
// (this assumes 4 bytes per pixel with alpha as the last component, i.e. RGBA)
uint8_t *alpha = (uint8_t *)CFDataGetBytePtr(pixelData) + 3;
uint8_t *mask = maskData;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        *mask = *alpha;
        mask++;
        alpha += 4; // skip to the next pixel's alpha component
    }
}
// create the mask image from the buffer
CGDataProviderRef maskProvider = CGDataProviderCreateWithData(NULL, maskData, width * height, NULL);
CGImageRef maskImage = CGImageMaskCreate(width, height, 8, 8, width, maskProvider, NULL, false);
// cleanup
CFRelease(pixelData);
CFRelease(maskProvider);
free(maskData);
Whew. Then, to mask image b, all you have to do is:
CGImageRef subtractedImage = CGImageCreateWithMask(b.CGImage, maskImage);
Hey presto.
To get those results, use the second image as a mask when you draw the first image. For this kind of drawing, you'll need to use Core Graphics, a.k.a. Quartz 2D. The Quartz 2D Programming Guide has a section called Bitmap Images and Image Masks that should tell you everything you need to know.
You're asking about UIImage objects, but to use Core Graphics you'll need CGImages instead. That's no problem -- UIImage provides a CGImage property that lets you get the data you need easily.
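For example (myUIImage here is whatever image you're starting from):
CGImageRef cgImage = myUIImage.CGImage; // the property does not transfer ownership
UIImage *backAgain = [UIImage imageWithCGImage:cgImage]; // and the reverse, if needed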
An updated answer for iOS 10+ and Swift 4+:
func subtract(source: UIImage, mask: UIImage) -> UIImage? {
    return UIGraphicsImageRenderer(size: source.size).image { _ in
        source.draw(at: .zero)
        mask.draw(at: .zero, blendMode: .destinationOut, alpha: 1)
    }
}
Is there an easy way to get a two-dimensional array, or something similar, that represents the pixel data of an image?
I have black & white PNG images and I simply want to read the color value at a certain coordinate, for example the color value at (20, 100).
This category on UIImage might be helpful (Source):
#import <CoreGraphics/CoreGraphics.h>
#import "UIImage+ColorAtPixel.h"
@implementation UIImage (ColorAtPixel)
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }
    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = CGImageGetWidth(cgImage);
    NSUInteger height = CGImageGetHeight(cgImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1; // the context is only one pixel wide
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, -pointY);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);
    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
@end
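Usage is straightforward; for the color at (20, 100) from the question (the image name here is hypothetical):
#import "UIImage+ColorAtPixel.h"

UIImage *image = [UIImage imageNamed:@"blackAndWhite.png"];
UIColor *color = [image colorAtPixel:CGPointMake(20, 100)];
CGFloat red, green, blue, alpha;
[color getRed:&red green:&green blue:&blue alpha:&alpha];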
You could put the PNG into an image view, and then use this method to get the pixel value from a graphics context that you would draw the image into.
A class to do it for you, and explained too:
http://www.markj.net/iphone-uiimage-pixel-color/
The direct approach is slightly tedious, but here goes:
Get the CoreGraphics image.
CGImageRef cgImage = image.CGImage;
Get the "data provider", and from that get the data.
NSData * d = [(id)CGDataProviderCopyData(CGImageGetDataProvider(cgImage)) autorelease];
Figure out what format the data is in.
CGImageGetBitmapInfo(cgImage);
CGImageGetBitsPerComponent(cgImage);
CGImageGetBitsPerPixel(cgImage);
CGImageGetBytesPerRow(cgImage);
Figure out the colour space (PNG supports greyscale/RGB/paletted).
CGImageGetColorSpace(cgImage);
The indirect approach is to draw the image to a context (note that you may need to specify the context's byte order if you want any guarantees) and read the bytes out.
If you only want single pixels, it might be faster to draw the image into a 1x1 context with the right rect (something like (CGRect){{-x,-y},{imgWidth,imgHeight}}).
This will handle colour-space conversion for you. If you just want a brightness value, use a greyscale context.
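A sketch of that greyscale 1x1 trick for a single brightness value, assuming cgImage from the first step (the coordinates are hypothetical; note Core Graphics uses a bottom-left origin, so flip y if your coordinates are top-left based):
size_t x = 20, y = 100; // hypothetical pixel coordinates
uint8_t brightness = 0;
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(&brightness, 1, 1, 8, 1, gray, kCGImageAlphaNone);
CGColorSpaceRelease(gray);
// Shift the image so that pixel (x, y) lands on the context's single pixel
CGContextTranslateCTM(ctx, -(CGFloat)x, -(CGFloat)y);
CGContextDrawImage(ctx, CGRectMake(0, 0, CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)), cgImage);
CGContextRelease(ctx);
// brightness now holds the greyscale value [0..255] at (x, y),
// in Core Graphics' bottom-left coordinate system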
When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
glScalef(1.0f, -1.0f, 1.0f);
mapping the y coordinates of the textures in reverse
vertically flipping the image files manually (in Photoshop)
flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load PNG textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
    CFURLRef textureURL = CFBundleCopyResourceURL(
        CFBundleGetMainBundle(),
        (CFStringRef)name,
        CFSTR("png"),
        CFSTR("Textures"));
    NSAssert(textureURL, @"Texture name invalid");

    CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
    NSAssert(imageSource, @"Invalid Image Path.");
    NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
    CFRelease(textureURL);

    CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
    NSAssert(image, @"Image not created.");
    CFRelease(imageSource);

    GLuint width = (GLuint)CGImageGetWidth(image);
    GLuint height = (GLuint)CGImageGetHeight(image);

    void *data = malloc(width * height * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSAssert(colorSpace, @"Colorspace not created.");
    CGContextRef context = CGBitmapContextCreate(
        data,
        width,
        height,
        8,
        width * 4,
        colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    NSAssert(context, @"Context not created.");
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGImageRelease(image);
    CGContextRelease(context);

    return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: if I made a simple game with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of these will work:
Flip the texture during the texture load,
OR flip the model's texture coordinates during model load (see the sketch after this list),
OR set the texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during render.
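For the texture-coordinate option, the flip is just a matter of swapping the V values on your quad. A sketch, assuming a quad drawn as a triangle strip:
// Conventional bottom-left-origin texture coordinates for a quad:
//   0,0  1,0  0,1  1,1
// Flipped vertically, so a top-left-origin image reads right side up:
static const GLfloat flippedTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f,
};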
Also, another thing I was wondering about: if I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
Jordan Lewis pointed out that CGContextDrawImage draws the image upside down when passed a UIImage's CGImage. There I found a quick and easy solution: before calling CGContextDrawImage,
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
Does the job perfectly well.
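In the loadPngTexture method above, that flip slots in right before the draw call:
// Inside loadPngTexture, just before CGContextDrawImage:
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);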