Determine if a CGContextRef points to a CGBitmapContext - core-graphics

I was wondering how to do the equivalent of an "isKindOfClass" check on a CGContextRef, in order to determine whether it's a standard CGContext or a CGBitmapContext.
I can't find a way to determine it :/
Thanks in advance
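CGContextRef is an opaque Core Foundation type, so there is no public isKindOfClass:-style API for inspecting its concrete backing. One heuristic sketch (an assumption based on documented accessor behavior, not an official type check): the CGBitmapContextGet* accessors only return meaningful values for bitmap contexts, and CGBitmapContextGetData is documented to return NULL for a non-bitmap context.

```objc
#import <CoreGraphics/CoreGraphics.h>

// Heuristic only: CGBitmapContextGetData returns NULL when the context
// is not a bitmap context. Note that calling a CGBitmapContext accessor
// on a non-bitmap context may log a console warning, so treat this as a
// debugging aid rather than production-safe detection.
static BOOL MyContextLooksLikeBitmapContext(CGContextRef ctx) {
    return ctx != NULL && CGBitmapContextGetData(ctx) != NULL;
}
```

If you control the code that creates the context, a cleaner design is to track the context's origin yourself (e.g. pass a flag alongside the CGContextRef) instead of probing the opaque type.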

Related

CIContext drawImage:inRect:fromRect: scaling

I am using the CIContext method - (void)drawImage:(CIImage *)im inRect:(CGRect)dest fromRect:(CGRect)src to draw my image to screen. But I need to implement a zoom-in/zoom-out feature. How could I achieve it? I think zoom-in could be achieved by increasing the dest rect, because the Apple docs say:
The image is scaled to fill the destination rectangle.
But what about zoom-out? If the dest rectangle is scaled down, the image is drawn at its actual size, and only the part of the image that fits in the dest rectangle is visible.
What could you suggest?
You may try using this for image resizing (zooming). Hope this helps you.
Take a look at this little toy app I made
It's to demonstrate the NSImage version of your CIContext method:
- (void)drawInRect:(NSRect)dstRect
fromRect:(NSRect)srcRect
operation:(NSCompositingOperation)op
fraction:(CGFloat)delta
I did this to find out exactly how the rects relate to each other. It's interactive, you can play with the sliders and move/zoom the images. Not a solution, but it might help you work things out.
You can use CIFilter to resize your CIImage before drawing. Quartz Composer comes with a good example of using this filter. Just look up the description of the Image Resize filter in QC.
EDIT:
Another good filter for scaling is CILanczosScaleTransform. There is a snippet demonstrating basic usage.
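As a minimal sketch of that filter (assuming `inputImage` is the CIImage you already have; the scale value 0.5 is just an example for zooming out to half size):

```objc
#import <CoreImage/CoreImage.h>

// CILanczosScaleTransform resamples the image itself, so both zoom-in
// (scale > 1) and zoom-out (scale < 1) work before you draw it.
CIFilter *scale = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scale setValue:inputImage forKey:kCIInputImageKey];
[scale setValue:[NSNumber numberWithDouble:0.5] forKey:kCIInputScaleKey];
[scale setValue:[NSNumber numberWithDouble:1.0] forKey:kCIInputAspectRatioKey];
CIImage *scaled = [scale valueForKey:kCIOutputImageKey];
```

You can then hand `scaled` to your existing drawImage:inRect:fromRect: call instead of the original image.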

Draw UIImage From CIImage In drawRect:

I'm learning about drawing UIImages and CGImages, using CIFilters etc. To test my knowledge I made a small test app with sliders that programmatically change the color of a potion sprite and display it on screen (using a CIHueBlendMode CIFilter). After I finished, I wanted to clean up the relatively lengthy code and noticed that instead of going from the filter's output CIImage to a CGImage and then a UIImage, I could go directly from a CIImage to a UIImage using UIImage's imageWithCIImage: method.
However, when I tried to draw the resultant UIImage using drawInRect:, nothing was drawn. Going through the CGImage stage rectifies this, of course. My understanding of this is that making a UIImage from a CIImage results in a NULL CGImage property in the UIImage, which is used in drawInRect:. Is this correct? If so, is there a better way to display a CIImage than to go through CGImage followed by UIImage? I could just draw a CGImage made with the CIImage, but that would flip the image, which leads to another question. Currently, I wrap anything I draw in a UIImage first to take care of flipping. Is there another, more efficient way?
Too Long; Didn't Read: Is there a better way to draw a CIImage other than turning it into a CGImage, then a UIImage, and drawing that? What's the best way to handle flipping when drawing CGImages?
Thanks to anyone who can answer some of my questions. :)
After doing some research into what a CIImage is, I realize now that you cannot skip the step of making a CGImage from the CIImage, and even if you could, it wouldn't really be any more efficient, since you'd still have to process the CIImage regardless. As noted in Apple's documentation, a CIImage is not really an image: it is only processed when it's turned into a CGImage. That's also why, if I run Time Profiler on my project, I see that 99% of the time in my drawRect: method is spent on createCGImage:, and not on the CIFilters.
As for the most efficient way to cope with the coordinate system change between Core Graphics and the iPhone, it seems that wrapping the object in a UIImage instance is the easiest (not sure about best) way to go. It's simple, and relatively efficient. Another option would be to transform the graphics context.
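A sketch of the context-transform option (assuming you are inside drawRect: and `cgImage` is the CGImageRef produced by CIContext's createCGImage:fromRect:):

```objc
// Flip the context once so the CGImage draws right-side up in UIKit's
// top-left coordinate system, avoiding the UIImage wrapper entirely.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, 0, self.bounds.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextDrawImage(ctx, self.bounds, cgImage);
CGContextRestoreGState(ctx);
```

The save/restore pair keeps the flip from leaking into any other drawing you do in the same pass.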
If I don't get a better answer than my own within three days, I'll mark it as accepted.

UIImage resize with hard edges

I need a method for resizing a UIImage like in Photoshop, with "nearest neighbour" resampling. I was looking for one, but everything I found was about Core Graphics tricks to improve bicubic resampling quality. I have a pixel-art style design in my app, and a lot of the artwork I create pixel by pixel and then enlarge with a x5 multiplier (it takes so much time that I'm close to writing a script for Photoshop). For example:
But I really don't want a result like this from the resampling:
Can anyone show me the right way?
When you draw your image into a graphics context, you can set the graphics context's interpolation quality to "none", like this (e.g. in a view's drawRect method):
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(c, kCGInterpolationNone);
UIImage *image = [UIImage imageNamed:@"pixels.png"];
[image drawInRect:self.bounds];
If you need the result as a UIImage (e.g. to assign it to a built-in UI control), you could do this with UIGraphicsBeginImageContext (you'll find lots of examples for that).
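A minimal sketch of that approach (assuming `sourceImage` is your pixel-art UIImage and `scale` is your x5 multiplier):

```objc
// Render the image into an offscreen context with interpolation
// disabled, then capture the result as a new UIImage.
CGSize newSize = CGSizeMake(sourceImage.size.width * scale,
                            sourceImage.size.height * scale);
UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(),
                                 kCGInterpolationNone);
[sourceImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Passing an explicit scale of 1.0 to UIGraphicsBeginImageContextWithOptions keeps the output at exactly `newSize` pixels rather than being multiplied by the screen scale.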
An alternative would be to set the magnificationFilter property of an image view's layer:
pixelatedImageView.layer.magnificationFilter = kCAFilterNearest;
This is probably faster and more memory-efficient, because you don't need to redraw the image.

Draw a circle representing remaining time

I want to draw a circle in my application.
There is a 20-second timer, and I have to draw a circle filled with red and green colors according to the remaining time.
Please help me if you have code or similar examples.
To draw a circle you may use (in your drawRect method)
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextBeginPath (context);
CGContextAddArc (context, CENTER_X, CENTER_Y, RADIUS, 0, 2*M_PI, 0);
CGContextDrawPath (context, kCGPathFillStroke);
To simulate the timer, you may consider using CGContextAddLineToPoint and CGContextMoveToPoint to draw lines, and CGContextSetFillColor to change the current fill color.
Check the CGContext Reference.
This is not a complete answer, but I hope it helps: You probably want to look at the documentation for the following functions:
CGContextAddArc
CGContextAddArcToPoint
CGContextFillEllipseInRect
CGContextStrokeEllipseInRect
And a Google search for those functions will probably find some useful sample code.
I know MBProgressHUD on GitHub has that ability. I have used it before and it is very easy to implement.

BitBlt() equivalent in Objective-C/Cocoa

I made a scrolling 2D tile video game in Visual Basic a few years back. I am translating it to Cocoa for the Mac. Is there a framework that would allow me to use BitBlt? Or is there an equivalent to BitBlt without using OpenGL? Thanks!
As Matt mentioned, you probably want CGContextDrawImage and CGContextSetBlendMode.
First, you need to create a CGImageRef from the image data. You do this with a data provider. If you already have the image loaded in memory, then you should use CGDataProviderCreateDirect. That data provider will be a parameter to CGImageCreate.
Next, in your Cocoa view's drawRect: method, you'll want to get the current context like this:
CGContextRef cgContext = [[NSGraphicsContext currentContext] graphicsPort];
Then use CGContextDrawImage to draw the image.
As Matt mentioned, you can control blending with CGContextSetBlendMode.
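Putting those steps together, here is a sketch of a drawRect: implementation. Everything here is hedged: `myPixels`, `width`, and `height` are assumed to describe a 32-bit RGBA buffer you already hold in memory, and it uses the simpler CGDataProviderCreateWithData rather than CGDataProviderCreateDirect.

```objc
- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef cgContext = [[NSGraphicsContext currentContext] graphicsPort];

    // Wrap the in-memory pixel buffer in a data provider, then an image.
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, myPixels, width * height * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image =
        CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                      kCGImageAlphaPremultipliedLast, provider,
                      NULL, false, kCGRenderingIntentDefault);

    // The BitBlt-style copy: pick a blend mode, then draw.
    CGContextSetBlendMode(cgContext, kCGBlendModeNormal);
    CGContextDrawImage(cgContext, CGRectMake(0, 0, width, height), image);

    CGImageRelease(image);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
}
```

For a tile game you would normally create the CGImageRefs once (per tile sheet) rather than on every draw, and only call CGContextDrawImage in drawRect:.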
You should probably start with Core Graphics.