I have a UIImage that I want to edit (say, make every second row of pixels black). Now I am aware of the functions that extract PNG or JPEG data from the image, but that's compressed file data, and I have no idea how the PNG/JPEG formats work. Is there a way I can extract the colour data of each pixel into an array? And then make a new UIImage using the data from the array?
Here are the steps I took to do something similar (this creates a bitmap context for 8-bit greyscale, no alpha):
// Allocate memory for image data.
bitmapData = malloc(bitmapByteCount);
// Use the generic Grey color space.
colorSpace = CGColorSpaceCreateDeviceGray();
// Create the bitmap context.
context = CGBitmapContextCreate (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaNone);
Now, the docs say you can pass NULL for the bitmapData parameter and have the function handle the allocation itself. I have found that if you do that, you can't then use CGBitmapContextGetData to get a pointer for walking through the byte data.
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(0, 0, pixelsWide, pixelsHigh), imageRef);
To read a pixel at position i in the data, use:
unsigned char *pointerToPixelData = CGBitmapContextGetData(context);
pixelValue = *(pointerToPixelData + i);
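To get a new UIImage back out of the (modified) bitmap data, you can snapshot the context; a minimal sketch, assuming the context created above:

// Snapshot the bitmap context into a CGImage, then wrap it in a UIImage.
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);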
Don't forget to release everything and free malloc'd memory when you're done.
Hope this helps,
Dave
Create a CGBitmapContext and draw the UIImage's CGImage into it. Clobber pixel bytes as appropriate, then create a new CGImage (and UIImage, if desired) from the bytes.
The main reason to do this is that CGImage supports a wide variety of pixel formats, which would not be fun for you to try to support if you were to try to work with whatever format a given CGImage had happened to have been created with.
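For the concrete example in the question (making every second row black), a minimal sketch of that approach, assuming an 8-bit RGBA destination format (the method name is mine, nothing canonical):

- (UIImage *)imageWithAlternateRowsBlackened:(UIImage *)source {
    CGImageRef imageRef = source.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bytesPerRow = width * 4;
    unsigned char *bytes = calloc(height, bytesPerRow);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Draw the source into a known 8-bit RGBA layout, so we never have to
    // care what pixel format the original CGImage happened to use.
    CGContextRef ctx = CGBitmapContextCreate(bytes, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
    // Overwrite every second row with opaque black.
    for (size_t y = 0; y < height; y += 2) {
        unsigned char *row = bytes + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            row[4 * x + 0] = 0;   // R
            row[4 * x + 1] = 0;   // G
            row[4 * x + 2] = 0;   // B
            row[4 * x + 3] = 255; // A
        }
    }
    CGImageRef newRef = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:newRef];
    CGImageRelease(newRef);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(bytes);
    return result;
}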
I'm making a multiplayer game that involves drawing lines. Now I'm trying to implement online multiplayer, but I've been struggling with one part of it: I need to be able to revert the state of the drawn lines in case a packet from the server arrives late at the client. I've searched here on Stack Overflow but haven't found any real answer on how to "undo" drawing in a bitmap context. The biggest problem is that the undo needs to be very fast, since the game updates every 20 milliseconds. I have figured out and tried a few different approaches:
Save the state of the whole context and then redraw it. This is probably the slowest method.
Save only a part of the context (100x100) into another, hidden bitmap by looping through each pixel, then loop through each pixel again to copy it from that bitmap back into the main bitmap shown on screen.
Save each point of the drawn path in a CGMutablePathRef, and then, when reverting the context, draw this path with a transparent color (0,0,0,0).
Save the position in the bitmap of each pixel that gets drawn in a separate array, and then set that pixel's alpha to 0 (in the drawn bitmap) when I need to undo.
The last approach should be the fastest of them all. However, I'm not sure how I can get the position of each drawn pixel unless I do it completely manually. Right now I use this code to draw lines:
CGContextSetStrokeColorWithColor(cacheContext, [color CGColor]);
CGContextSetLineCap(cacheContext, kCGLineCapRound);
CGContextSetLineWidth(cacheContext, 6+thickness);
CGContextBeginPath(cacheContext);
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddLineToPoint(cacheContext, point2.x, point2.y);
CGContextStrokePath(cacheContext);
CGRect dirtyPoint1 = CGRectMake(point1.x-10, point1.y-10, 20, 20);
CGRect dirtyPoint2 = CGRectMake(point2.x-10, point2.y-10, 20, 20);
[self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
Here is how the CGBitmapContext is set up:
- (BOOL) initContext:(CGSize)size {
    scaleFactor = [[UIScreen mainScreen] scale];
    // scaleFactor = 1; // non-retina
    // scaleFactor = 2; // retina

    int bitmapBytesPerRow;

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (size.width * 4 * scaleFactor);
    bitmapByteCount = (bitmapBytesPerRow * (size.height * scaleFactor));

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width * scaleFactor, size.height * scaleFactor, 8, bitmapBytesPerRow, colorSpace, bitmapInfo);
    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);

    // Clear the whole canvas to transparent.
    CGContextSetRGBFillColor(cacheContext, 0, 0, 0, 0.0);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, CGSizeMake(size.width * scaleFactor, size.height * scaleFactor)});

    return YES;
}
Is there any other, better way to undo the bitmap? If not, how can I get the position of each pixel that gets drawn with Core Graphics? Is this even possible?
Your 4th approach will either duplicate the whole canvas bitmap (if you use a flat NxM matrix representation) or turn into a performance mess if you use a map-based structure or something like that.
Actually, I believe the 2nd way does the trick. I have implemented undo that way a few times over the years, including in a DirectX-based drawing app with a 25-30fps rendering pipeline.
However, your #2 description has a strange mention of a "loop" you want to perform across the area. You do not need a loop; what you need is a proper API method for copying a portion of the bitmap/graphics context. CGContextDrawImage can be used both to preserve the portion of your canvas and to draw it back for undo/redo.
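As a sketch (assuming an RGBA bitmap context like cacheContext above, ignoring the retina scale CTM for clarity; regionRect is a hypothetical CGRect covering the area about to be drawn over, in image pixel coordinates):

// 1. Before drawing: snapshot the region that is about to change.
CGImageRef whole = CGBitmapContextCreateImage(cacheContext);
CGImageRef patch = CGImageCreateWithImageInRect(whole, regionRect);
CGImageRelease(whole);

// ... draw the stroke into cacheContext ...

// 2. To undo: paint the saved patch back, replacing the pixels outright.
// CGImage rects are measured from the top-left corner, while context
// drawing rects are measured from the bottom-left, so flip the y origin.
size_t contextHeight = CGBitmapContextGetHeight(cacheContext);
CGRect drawRect = CGRectMake(regionRect.origin.x,
                             contextHeight - CGRectGetMaxY(regionRect),
                             regionRect.size.width,
                             regionRect.size.height);
CGContextSaveGState(cacheContext);
CGContextSetBlendMode(cacheContext, kCGBlendModeCopy);
CGContextDrawImage(cacheContext, drawRect, patch);
CGContextRestoreGState(cacheContext);
CGImageRelease(patch);

CGBitmapContextCreateImage is documented to use copy-on-write, so taking the snapshot is cheap; the real copy only happens when the context is modified again.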
I am drawing RGBA data onto the screen using CGBitmapContextCreate and CGContextDrawImage. When I create the bitmap context using CGBitmapContextCreate(pixelBuffer, ...), where I have already malloc'ed pixelBuffer and placed my data there, this works just fine.
However, I would like Core Graphics to manage its own memory, so I would like to pass NULL to CGBitmapContextCreate, get a pointer to the memory block it allocated by calling CGBitmapContextGetData, and copy my RGBA buffer into that block using memcpy. However, my memcpy fails. Please see my code below.
Any idea what I am doing wrong?
gtx = CGBitmapContextCreate(NULL, screenWidth, screenHeight, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipLast);
void *data = CGBitmapContextGetData(gtx);
memcpy(data, pixelBuffer, area*componentsPerPixel);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGImageRef image = CGBitmapContextCreateImage(gtx);
CGContextTranslateCTM(currentContext, 0, screenHeight);
CGContextScaleCTM(currentContext, 1.0, -1.0);
CGContextDrawImage(currentContext, currentSubrect, image);
Based on all my research, using drawRect: to draw frequently/repeatedly is a bad idea, so I decided to move to UIImageView-based drawing, as suggested in a response to this other SO question.
Here's another SO answer on why UIImageView is more efficient than drawRect:
Based on the above, I am now using UIImageView instead of drawRect: and am seeing better drawing performance.
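In case it helps others, the update step with UIImageView boils down to something like this (a sketch; imageView is assumed to be the UIImageView that replaced the old drawRect: view, and gtx is the bitmap context from above):

// After updating the pixels in gtx, snapshot the context and hand the
// result to the image view instead of invalidating a rect.
CGImageRef snapshot = CGBitmapContextCreateImage(gtx);
imageView.image = [UIImage imageWithCGImage:snapshot];
CGImageRelease(snapshot);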
I am new to Objective-C, but I need to write a fast method that divides a UIImage into square blocks of fixed size and then mixes them. I have already implemented it in the following way:
Get UIImage
Represent it as PNG
Convert it to RGBA8 unsigned char array
For each block, calculate its coordinates, then XOR each pixel with the pixel from the block that gets replaced
Assemble that RGBA8 data back into a new UIImage
Return it
It works as intended, but it is extremely slow: about 12 seconds to process a single 1024x768 PNG on an iPhone 4S. The profiler shows that methods somehow connected to the PNG representation eat up about 50% of the total run time.
Would it be faster if I used Quartz 2D here somehow? Right now I am simply trying to copy/paste a single rectangle from and to my _image, but I don't know how to go further. It returns a UIImage with the provided _image as is, without the blockLayer pasted into it:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), YES, 1.0);
CGContextRef context = UIGraphicsGetCurrentContext();
/* Start drawing */
//Draw in my image first
[_image drawAtPoint:CGPointMake(0,0) blendMode:kCGBlendModeNormal alpha:1.0];
//Here I am trying to make a 400x400 square, starting presumably at the origin
CGLayerRef blockLayer = CGLayerCreateWithContext(context, CGSizeMake(400, 400), NULL);
//Then I attempt to draw it back at the middle
CGContextDrawLayerAtPoint(context, CGPointMake(1024/2, 768/2), blockLayer);
CGContextSaveGState(context);
/* End drawing */
//Make UIImage from context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
You can follow these steps to do what you need:
Load the image
Split it up into squares
Create a CALayer for each square, setting its position to the square's place in the image before shuffling
Go through the layers, and set their positions to their target locations after shuffling
Watch the squares move to their new places (or skip the animation; see the sketch below)
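A sketch of those steps (assumptions: a UIImage named image at scale 1.0 so points and pixels coincide, a containerView to host the layers, and dimensions that divide evenly by the block size). CGImageCreateWithImageInRect does the splitting, so there is no per-pixel work and no PNG round-trip:

#import <QuartzCore/QuartzCore.h>

CGImageRef imageRef = image.CGImage;
CGFloat blockSize = 256.0;
NSMutableArray *tiles = [NSMutableArray array];
// Steps 1-3: split the image into squares, one CALayer per square.
for (CGFloat y = 0; y < image.size.height; y += blockSize) {
    for (CGFloat x = 0; x < image.size.width; x += blockSize) {
        CGImageRef tile = CGImageCreateWithImageInRect(imageRef, CGRectMake(x, y, blockSize, blockSize));
        CALayer *layer = [CALayer layer];
        layer.frame = CGRectMake(x, y, blockSize, blockSize);
        layer.contents = (id)tile;
        CGImageRelease(tile);
        [containerView.layer addSublayer:layer];
        [tiles addObject:layer];
    }
}
// Step 4: shuffle by swapping layer positions (Fisher-Yates). Core
// Animation animates the moves by default; wrap this loop in a
// CATransaction with actions disabled if you don't want the animation.
for (NSUInteger i = [tiles count]; i > 1; i--) {
    CALayer *a = [tiles objectAtIndex:i - 1];
    CALayer *b = [tiles objectAtIndex:arc4random_uniform((uint32_t)i)];
    CGPoint swap = a.position;
    a.position = b.position;
    b.position = swap;
}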
I have a very large view (~8000x8000) that I'd like to take a screenshot of, but my application is terminated about one in four times the screenshot code executes. The code looks something like this:
// Render the view into a bitmap
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
_document.size.width,
_document.size.height,
8, 0, colorSpace,
kCGImageAlphaPremultipliedLast);
// Convert the UI space to CG space
CGContextScaleCTM(ctx, 1, -1);
CGContextTranslateCTM(ctx, 0, -_document.size.height);
// Render the view
[_contentView.layer renderInContext:ctx];
CGImageRef screenshot = CGBitmapContextCreateImage(ctx);
// Cleanup
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
Obviously it's going to use a lot of memory. Does anyone have any tricks for working with very large images and Core Graphics?
Don't render the whole image at once, render a 500x8000 "band", write it out, release it, repeat 16 times.
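A sketch of that banding approach (assumptions: a band height of 500 and PNG files in the temporary directory; the original flip transform is adjusted per band):

CGFloat bandHeight = 500.0;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
for (CGFloat y = 0; y < _document.size.height; y += bandHeight) {
    CGFloat h = MIN(bandHeight, _document.size.height - y);
    CGContextRef ctx = CGBitmapContextCreate(NULL, _document.size.width, h,
                                             8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    // Flip to UI space and shift so this band of the layer lands in the context.
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -(y + h));
    [_contentView.layer renderInContext:ctx];
    // Write the band out and release everything before rendering the next
    // one, so only a single band is alive at a time.
    CGImageRef band = CGBitmapContextCreateImage(ctx);
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:
                      [NSString stringWithFormat:@"band-%d.png", (int)(y / bandHeight)]];
    [UIImagePNGRepresentation([UIImage imageWithCGImage:band]) writeToFile:path atomically:YES];
    CGImageRelease(band);
    CGContextRelease(ctx);
}
CGColorSpaceRelease(colorSpace);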
The best solution for this is to open a PDF context and write to that. This way the drawing streams to a file instead of being held as one huge bitmap in memory!
http://developer.apple.com/library/ios/#documentation/2ddrawing/conceptual/drawingprintingios/GeneratingPDF/GeneratingPDF.html
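From that guide, the skeleton is roughly this (a sketch; the output path is illustrative):

NSString *pdfPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"snapshot.pdf"];
// A PDF context streams to disk, so the full 8000x8000 bitmap never has
// to exist in memory at once.
UIGraphicsBeginPDFContextToFile(pdfPath,
                                CGRectMake(0, 0, _document.size.width, _document.size.height),
                                nil);
UIGraphicsBeginPDFPage();
[_contentView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndPDFContext();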
I am trying to use an image (270 degrees of a circle, similar to a pacman logo, painted with Core Graphics) to create a mask. What I am doing is this:
1. creating a Core Graphics path
CGContextSaveGState(context);
CGContextBeginPath(context);
CGContextMoveToPoint(context,circleCenter.x,circleCenter.y);
//CGContextSetAllowsAntialiasing(myBitmapContext, YES);
CGContextAddArc(context,circleCenter.x, circleCenter.y,circleRadius,startingAngle, endingAngle, 0); // 0 is counterclockwise
CGContextClosePath(context);
CGContextSetRGBStrokeColor(context,1.0,0.0,0.0,1.0);
CGContextSetRGBFillColor(context,1.0,0.0,0.0,0.2);
CGContextDrawPath(context, kCGPathFillStroke);
2. then creating an image of the context with the path just painted into it
CGImageRef pacmanImage = CGBitmapContextCreateImage (context);
3. restoring the context
CGContextRestoreGState(context);
CGContextSaveGState(context);
4. creating a 1 bit mask (which will provide the black-white mask)
bitsPerComponent = 1;
bitsPerPixel = bitsPerComponent * 1 ;
bytesPerRow = (CGImageGetWidth(imgToMaskRef) * bitsPerPixel);
mask = CGImageCreate(CGImageGetWidth(imgToMaskRef),
CGImageGetHeight(imgToMaskRef),
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
greyColorSpace,
kCGImageAlphaNone,
CGImageGetDataProvider(pacmanImage),
NULL, //decode
YES, //shouldInterpolate
kCGRenderingIntentDefault);
5. masking imgToMaskRef (which is a CGImageRef: imgToMaskRef = imgToMask.CGImage) with the mask just created
imageMaskedWithImage = CGImageCreateWithMask(imgToMaskRef, mask);
CGContextDrawImage(context,imgRectBox, imageMaskedWithImage);
CGImageRef maskedImageFinal = CGBitmapContextCreateImage (context);
6. returning maskedImageFinal to the caller of this method (as wheelChoiceMadeState, which is a CGImageRef), who then updates the CALayer contents property with the image
theLayer.contents = (id) wheelChoiceMadeState;
The problem I am seeing is that the mask does not work properly and looks very strange indeed. I get strange patterns across the path painted by Core Graphics. My hunch is that something is wrong around CGImageGetDataProvider(), but I am not sure.
Any help would be appreciated
thank you
CGImageGetDataProvider does not change the data at all. If the data of pacmanImage does not exactly match the parameters passed to CGImageCreate (bitsPerComponent, bytesPerRow, colorSpace, ...), the result is undefined. If it did exactly match, there would be no point in creating the mask.
You need to create a grayscale CGBitmapContext to draw the mask into, and a CGImage that uses the same pixels and parameters as the bitmap. You can then use the CGImage to mask another image.
Only use CGBitmapContextCreateImage if you want a snapshot of a CGBitmapContext that you will continue to modify. For a single use bitmap, pass the same buffer to the bitmap and the matching CGImage you create.
Edit:
finalRect is the size the final image should be. It is either large enough to hold the original image, and the pacman is positioned inside it, or it is large enough to hold the pacman, and the original image is cropped to fit. In this example, the original image is cropped. Otherwise the pacman path would have to be positioned relative to the original image.
maskContext = CGBitmapContextCreate( ... , finalRect.size.width , finalRect.size.height , ... );
// add the pacman path and set the stroke and fill colors
CGContextDrawPath( maskContext , kCGPathFillStroke );
maskImage = CGBitmapContextCreateImage( maskContext );
imageToMask = CGImageCreateWithImageInRect( originalImage , finalRect );
finalImage = CGImageCreateWithMask( imageToMask , maskImage );
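Filling in the elided pieces, the whole flow might look like this (a sketch, not the exact code: an 8-bit grayscale mask with no alpha is assumed, and the pacman shape is rebuilt from the question's step 1):

size_t w = finalRect.size.width;
size_t h = finalRect.size.height;
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef maskContext = CGBitmapContextCreate(NULL, w, h, 8, 0, gray, kCGImageAlphaNone);

// In a grayscale image used as a mask, black areas let the underlying
// image show through and white areas block it, so start with a white
// background and fill the pacman shape in black.
CGContextSetGrayFillColor(maskContext, 1.0, 1.0);
CGContextFillRect(maskContext, CGRectMake(0, 0, w, h));
CGContextBeginPath(maskContext);
CGContextMoveToPoint(maskContext, circleCenter.x, circleCenter.y);
CGContextAddArc(maskContext, circleCenter.x, circleCenter.y, circleRadius,
                startingAngle, endingAngle, 0);
CGContextClosePath(maskContext);
CGContextSetGrayFillColor(maskContext, 0.0, 1.0);
CGContextFillPath(maskContext);

CGImageRef maskImage = CGBitmapContextCreateImage(maskContext);
CGImageRef imageToMask = CGImageCreateWithImageInRect(originalImage, finalRect);
CGImageRef finalImage = CGImageCreateWithMask(imageToMask, maskImage);

// Release the intermediates; finalImage retains what it needs.
CGImageRelease(imageToMask);
CGImageRelease(maskImage);
CGContextRelease(maskContext);
CGColorSpaceRelease(gray);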