I made a scrolling 2D tile-based video game in Visual Basic a few years back. I am translating it to Cocoa for the Mac. Is there a framework that would allow me to use BitBlt? Or is there an equivalent to BitBlt without using OpenGL? Thanks!
As Matt mentioned, you probably want CGContextDrawImage and CGContextSetBlendMode.
First, you need to create a CGImageRef from the image data. You do this with a data provider. If you already have the image loaded in memory, then you should use CGDataProviderCreateDirect. That data provider will be a parameter to CGImageCreate.
Next, in your Cocoa view's drawRect: method, you'll want to get the current context like this:
CGContextRef cgContext = [[NSGraphicsContext currentContext] graphicsPort];
Then use CGContextDrawImage to draw the image.
As Matt mentioned, you can control blending with CGContextSetBlendMode.
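To make that concrete, here's a rough sketch of how those pieces fit together in an NSView's drawRect:. The buffer variables tileBytes, tileBytesLength, tileWidth, tileHeight, destX and destY are placeholders for your own data, and I'm using CGDataProviderCreateWithData rather than the callback-based CGDataProviderCreateDirect just to keep the example short:
- (void)drawRect:(NSRect)dirtyRect
{
    CGContextRef cgContext = [[NSGraphicsContext currentContext] graphicsPort];

    // Wrap the in-memory pixel buffer (assumed 32-bit RGBA here) in a data provider.
    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, tileBytes, tileBytesLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef tileImage = CGImageCreate(tileWidth, tileHeight,
                                         8,              // bits per component
                                         32,             // bits per pixel
                                         tileWidth * 4,  // bytes per row
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast,
                                         provider, NULL, false,
                                         kCGRenderingIntentDefault);

    // The BitBlt-style blit: pick a blend mode, then draw into the destination rect.
    CGContextSetBlendMode(cgContext, kCGBlendModeNormal);
    CGContextDrawImage(cgContext,
                       CGRectMake(destX, destY, tileWidth, tileHeight),
                       tileImage);

    CGImageRelease(tileImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
}
In a real game loop you'd create and cache the CGImage once rather than rebuilding it on every drawRect:.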
You should probably start with Core Graphics.
I am developing an iOS 7 video recording application. The camera screen in our application needs to show a blurred background similar to the one in the iOS 7 Control Center. While the video preview is being shown, we need to display this Control Center-style blur on top of it.
As suggested in WWDC video 226, I have used the code below to take a snapshot of the camera preview, apply a blur to it, and set the blurred image on my view.
CGRect rect = _camerapreview.bounds;
UIGraphicsBeginImageContextWithOptions(_camerapreview.frame.size, NO, 0);
[_camerapreview drawViewHierarchyInRect:rect afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// applyLightEffect comes from Apple's UIImage+ImageEffects category (WWDC 2013 sample code)
UIImage *lightImage = [newImage applyLightEffect];
Here _camerapreview is a UIView that contains an AVCaptureVideoPreviewLayer. The newImage obtained from the context is, for some reason, completely black.
However, [_camerapreview snapshotViewAfterScreenUpdates:NO] does return a UIView with the preview layer's content, but there is no way to apply a blur to a UIView.
How can I get a blurred image from the camera preview layer?
I would suggest that you put your overlay onto the video preview itself as a composited video layer, and add a blur filter to that. (Check the WWDC AVFoundation sessions and the AVSimpleEditoriOS sample code for compositing images onto video) That way you're staying on the GPU instead of doing readbacks from GPU->CPU, which is slow. Then drop your overlay's UI elements on top of the video preview within a clear background UIView.
That should provide greater performance. As good as Apple? Well, they are using some private stuff developers don't yet have access to...
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc) in which the views are rendered.
In my experience with AVFoundation, it is not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view, without the image of the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer, and if you try to make an image from that view you won't see anything.
The only solutions I know of are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each one has its own limits: the first can't work simultaneously with an AVCaptureMovieFileOutput acquisition, and the latter makes the shutter noise.
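For what it's worth, the AVCaptureVideoDataOutput route looks roughly like this (a sketch only; it assumes you already have an AVCaptureSession called session and that self adopts AVCaptureVideoDataOutputSampleBufferDelegate):
// Somewhere in your capture setup:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings =
    @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("camera.frames", NULL)];
if ([session canAddOutput:videoOutput]) {
    [session addOutput:videoOutput];
}

// Elsewhere in the same class, the delegate callback: each frame can be wrapped
// in a CIImage and blurred (e.g. with CIGaussianBlur) before showing it.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // ...apply the blur and hand the result back to the UI on the main queue...
}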
I'm building an app where I need to take a screenshot of a view whose subviews are camera sessions (AVFoundation sessions). I've tried this code:
CGRect rect = [self.containerView bounds];
UIGraphicsBeginImageContextWithOptions(rect.size,YES,0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.containerView.layer renderInContext:context];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This effectively gets me a UIImage of the views, except that the areas covered by the camera sessions come out black.
I've tried the private method UIGetScreenImage(), and it works perfectly, but since Apple doesn't allow it, I can't use it. I've also tried the approach in Apple's docs, but the result is the same. I've tracked the problem down to the AVFoundation sessions using layers. How can I achieve this? The app has a container view with two subviews whose camera sessions are stopped.
If using iOS 7, it's fairly simple and you could do something like this from a UIViewController:
UIView *snapshotView = [self.view snapshotViewAfterScreenUpdates:YES];
You can also take the snapshot from a window; see this link: iOS: what's the fastest, most performant way to make a screenshot programmatically?
For iOS 6 and earlier, I could only find the following Apple Technical Q&A: How do I take a screenshot of my app that contains both UIKit and Camera elements? It suggests that you:
1. Capture the contents of your camera view.
2. Draw that captured camera content yourself into the graphics context in which you are rendering your UIKit elements (similar to what you did in your code); a rough sketch follows below.
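Here is a rough sketch of those two steps, where cameraImage stands for a UIImage you captured from the session (e.g. via AVCaptureStillImageOutput) and overlayView stands for the view holding your UIKit elements; both names are placeholders:
CGRect bounds = self.containerView.bounds;
UIGraphicsBeginImageContextWithOptions(bounds.size, YES, 0.0f);

// 1. Draw the captured camera frame where the (black) preview layer normally appears.
[cameraImage drawInRect:bounds];

// 2. Render the UIKit overlay on top of it.
[overlayView.layer renderInContext:UIGraphicsGetCurrentContext()];

UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();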
I too am currently looking for a solution to this problem!
I am currently out at the moment so I can't test what I have found, but take a look at these links:
Screenshots - A Legal Way To Get Screenshots seems like it's on the right track; here is the Example Project (and here is the initial post).
When I manage to get it to work I will definitely update this answer!
I'm learning about drawing UIImages and CGImages, using CIFilters, etc. To test my knowledge I made a small test app with sliders that programmatically change the color of a potion sprite and display it on screen (using a CIHueBlendMode CIFilter). After I finished, I wanted to clean up the relatively lengthy code and noticed that instead of going from the filter's output CIImage to a CGImage and then a UIImage, I could go directly from a CIImage to a UIImage using UIImage's imageWithCIImage: method.
However, when I tried to draw the resultant UIImage using drawInRect:, nothing was drawn. Going through the CGImage stage rectifies this, of course. My understanding of this is that making a UIImage from a CIImage results in a NULL CGImage property in the UIImage, which is used in drawInRect:. Is this correct? If so, is there a better way to display a CIImage than to go through CGImage followed by UIImage? I could just draw a CGImage made with the CIImage, but that would flip the image, which leads to another question. Currently, I wrap anything I draw in a UIImage first to take care of flipping. Is there another, more efficient way?
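For reference, the path that does draw for me goes through a CIContext, roughly like this; hueBlendFilter and targetRect are just names from my test app:
CIContext *ciContext = [CIContext contextWithOptions:nil];
CIImage *filteredImage = [hueBlendFilter outputImage];
CGImageRef cgImage = [ciContext createCGImage:filteredImage
                                     fromRect:[filteredImage extent]];
UIImage *drawableImage = [UIImage imageWithCGImage:cgImage];
[drawableImage drawInRect:targetRect];   // this draws; imageWithCIImage: + drawInRect: did not
CGImageRelease(cgImage);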
Too Long; Didn't Read: Is there a better way to draw CIImages other than converting to a CGImage, then a UIImage, and drawing that? What's the best way to handle flipping when drawing CGImages?
Thanks to anyone who can answer some of my questions. :)
After doing some research into what a CIImage is, I realize now that you cannot skip the step of making a CGImage from the CIImage, and even if you could, it wouldn't really be any more efficient, since you'd still have to process the CIImage regardless. As noted in Apple's documentation, a CIImage is not really an image; it is only processed when it's turned into a CGImage. That's also why, if I use Time Profiler on my project, I see that 99% of the time in my drawRect: method is spent in createCGImage:fromRect:, not in applying CIFilters.
As for the most efficient way to cope with the coordinate system change between Core Graphics and the iPhone, it seems that wrapping the object in a UIImage instance is the easiest (though I'm not sure it's the best) way to go. It's simple and relatively efficient. Another option would be to transform the graphics context, as sketched below.
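If you do want to skip the UIImage wrapper, flipping the context before drawing the CGImage looks roughly like this (a sketch; cgImage and rect are placeholders):
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
// Flip vertically around the center of the target rect so the CGImage isn't drawn upside down.
CGContextTranslateCTM(ctx, 0, CGRectGetMinY(rect) + CGRectGetMaxY(rect));
CGContextScaleCTM(ctx, 1.0f, -1.0f);
CGContextDrawImage(ctx, rect, cgImage);
CGContextRestoreGState(ctx);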
If I don't get a better answer than my own within three days, I'll mark it as accepted.
I need a method for resizing a UIImage with "nearest neighbour" resampling, like in Photoshop. I was looking for one, but everything I found was about Core Graphics tricks to improve bicubic resampling quality. I have a pixel-art style design in my app, and I create a lot of assets pixel by pixel and then enlarge them with a 5x multiplier (it takes a lot of time, so I'm even close to writing a script for Photoshop). What I don't want is the blurred result that the default (interpolated) resampling gives.
Maybe someone can show me the right way.
When you draw your image into a graphics context, you can set the graphics context's interpolation quality to "none", like this (e.g. in a view's drawRect method):
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(c, kCGInterpolationNone);
UIImage *image = [UIImage imageNamed:@"pixels.png"];
[image drawInRect:self.bounds];
If you need the result as a UIImage (e.g. to assign it to a built-in UI control), you could do this with UIGraphicsBeginImageContext (you'll find lots of examples for that).
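For example, something along these lines should give you a UIImage scaled up 5x with no interpolation (sourceImage stands for your pixel-art image):
CGSize targetSize = CGSizeMake(sourceImage.size.width * 5.0f,
                               sourceImage.size.height * 5.0f);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0f);
// Disable smoothing so each source pixel becomes a crisp block.
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationNone);
[sourceImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();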
An alternative would be to set the magnificationFilter property of an image view's layer:
pixelatedImageView.layer.magnificationFilter = kCAFilterNearest;
This is probably faster and more memory-efficient, because you don't need to redraw the image.
To retrieve pixel values from a CGImage I use CGContextDrawImage (as described here:
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?). The only difference is that I create a 128 bpp float-component context, not the usual 32 bpp context. The source CGImage is obtained from a CGImageSource created with the kCGImageSourceShouldAllowFloat option. That way I hoped to get access to float pixel values, color matched with my bitmap context's color space, and use them in further image processing. The problem is that the resulting image data seems to be losing dynamic range. It can be seen in shadow and plain blue sky areas: they become contoured and lack detail. Some investigation showed the problem occurs in CGContextDrawImage (the source CGImage contains the full dynamic range; saving it through a CGImageDestination proves it), and after CGContextDrawImage the context contents are posterized.
After some more investigation I found this:
http://lists.apple.com/archives/quartz-dev/2007/mar/msg00026.html
That led me to the conclusion that the problem is not in my code but in Core Graphics, or that this is intended behaviour.
My question is: what is the correct way to obtain floating-point data from an image using Core Graphics?
After some more investigation, the problem has been narrowed down to the following: posterization occurs when an 8-bit image is drawn into a 128 bpp float context created with a linear color space (kCGColorSpaceGenericRGBLinear). If I draw the same image into a context created with kCGColorSpaceGenericRGB, then retrieve a CGImage from that context and draw that second image into the linear color space context, everything is ok.
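In code, that workaround looks roughly like this (a sketch only; sourceImage, width and height are placeholders, and the exact bitmapInfo flags for a 128 bpp float context may need adjusting for your data):
CGBitmapInfo floatInfo =
    kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents | kCGBitmapByteOrder32Host;
CGRect rect = CGRectMake(0, 0, width, height);

// Pass 1: draw the 8-bit image into a non-linear GenericRGB float context.
CGColorSpaceRef genericRGB = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef pass1 = CGBitmapContextCreate(NULL, width, height, 32, 0, genericRGB, floatInfo);
CGContextDrawImage(pass1, rect, sourceImage);
CGImageRef intermediate = CGBitmapContextCreateImage(pass1);

// Pass 2: draw the intermediate image into the linear-color-space float context.
CGColorSpaceRef linearRGB = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGBLinear);
CGContextRef pass2 = CGBitmapContextCreate(NULL, width, height, 32, 0, linearRGB, floatInfo);
CGContextDrawImage(pass2, rect, intermediate);

// Float pixel values, color matched to the linear space, without the posterization.
float *pixels = (float *)CGBitmapContextGetData(pass2);

CGImageRelease(intermediate);
CGContextRelease(pass1);
CGContextRelease(pass2);
CGColorSpaceRelease(genericRGB);
CGColorSpaceRelease(linearRGB);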
Another solution (workaround?) is to use Core Image: create a CIImage from the source CGImage and draw it into a CIContext created with the corresponding kCGColorSpaceGenericRGBLinear CGContext. But that is only an option on OS X (not on iOS).
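A minimal sketch of that Core Image route on OS X, where linearFloatContext stands for the 128 bpp float CGContext and sourceImage for the source CGImage (illustrative, not a verified recipe):
CGColorSpaceRef linearRGB = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGBLinear);
CIContext *ciContext =
    [CIContext contextWithCGContext:linearFloatContext
                            options:@{ kCIContextWorkingColorSpace : (__bridge id)linearRGB }];
CIImage *ciImage = [CIImage imageWithCGImage:sourceImage];
[ciContext drawImage:ciImage
              inRect:CGRectMake(0, 0, width, height)
            fromRect:[ciImage extent]];
CGColorSpaceRelease(linearRGB);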